Using Information Theory to Improve the Robustness of Trust Systems
Wang, Dongxia; Muller, Tim; Irissappane, Athirai A.; Zhang, Jie; Liu, Yang
Authors
Dongxia Wang
Dr Tim Muller (Assistant Professor), Tim.Muller@nottingham.ac.uk
Athirai A. Irissappane
Jie Zhang
Yang Liu
Abstract
Unfair rating attacks on trust systems can reduce the accuracy of trust evaluation when truster agents seek trust ratings (recommendations) about trustee agents from other agents (advisors). A robust trust system should remain accurate even under worst-case attacks, i.e. those that yield the least useful recommendations. In this work, we use information theory to quantify the utility of recommendations and analyse models in which advisors exhibit worst-case behaviour. With these models, we formally prove that if the fraction of dishonest advisors exceeds a certain threshold, recommendations become completely useless in the worst case. Our evaluation of several popular trust models shows that they cannot provide accurate trust evaluation under the worst-case attack, nor under many other types of unfair rating attacks. Explicitly modelling dishonest advisors induces a method of computing trust accurately, which can serve to improve the robustness of trust models.
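To illustrate the kind of analysis the abstract describes, the sketch below numerically estimates the information leakage of a single recommendation when dishonest advisors adopt a worst-case strategy. It is a minimal illustration, not the paper's formal model: it assumes a binary trustee behaviour, honest advisors who report that behaviour exactly, and dishonest advisors who randomise their reports adversarially; the function names (`mutual_information`, `worst_case_leakage`) and the grid search over dishonest strategies are illustrative choices, not taken from the paper.

```python
import numpy as np
from itertools import product

def entropy(p):
    """Binary entropy (in bits) of a Bernoulli(p) variable."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def mutual_information(d, q_good, q_bad, prior=0.5):
    """I(X; R) between trustee behaviour X and a single rating R when a
    fraction d of advisors are dishonest.  Honest advisors report X
    exactly; dishonest advisors report 1 with probability q_good when
    X=1 and q_bad when X=0 (their strategy)."""
    # P(R=1 | X=1) and P(R=1 | X=0), mixing honest and dishonest advisors
    r_given_good = (1 - d) * 1.0 + d * q_good
    r_given_bad = (1 - d) * 0.0 + d * q_bad
    r_marginal = prior * r_given_good + (1 - prior) * r_given_bad
    # I(X; R) = H(R) - H(R | X)
    h_r = entropy(r_marginal)
    h_r_given_x = prior * entropy(r_given_good) + (1 - prior) * entropy(r_given_bad)
    return h_r - h_r_given_x

def worst_case_leakage(d, grid=101):
    """Leakage under the worst-case dishonest strategy: the dishonest
    advisors choose (q_good, q_bad) to minimise I(X; R)."""
    qs = np.linspace(0.0, 1.0, grid)
    return min(mutual_information(d, qg, qb) for qg, qb in product(qs, qs))

if __name__ == "__main__":
    for d in [0.0, 0.2, 0.4, 0.5, 0.6, 0.8]:
        print(f"dishonest fraction {d:.1f}: worst-case leakage "
              f"{worst_case_leakage(d):.4f} bits")
    # On this grid the leakage drops to (essentially) zero once d >= 0.5:
    # past that point a single rating is, in the worst case, useless for
    # estimating X.
```

Under these simplifying assumptions the threshold sits at one half: once half the advisors are dishonest, the worst-case adversary can make a recommendation carry no information about the trustee. This matches the flavour of the threshold result stated in the abstract, though the paper's actual thresholds follow from its more general model.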
Citation
Wang, D., Muller, T., Irissappane, A. A., Zhang, J., & Liu, Y. (2015, May). Using Information Theory to Improve the Robustness of Trust Systems. Presented at AAMAS'15: International Conference on Autonomous Agents and Multiagent Systems, Istanbul, Turkey.
| Presentation Conference Type | Edited Proceedings |
| --- | --- |
| Conference Name | AAMAS'15: International Conference on Autonomous Agents and Multiagent Systems |
| Start Date | May 4, 2015 |
| End Date | May 8, 2015 |
| Acceptance Date | Jan 28, 2015 |
| Publication Date | 2015 |
| Deposit Date | Jan 13, 2020 |
| Publisher | Association for Computing Machinery (ACM) |
| Peer Reviewed | Peer Reviewed |
| Pages | 791–799 |
| Book Title | AAMAS '15: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems |
| ISBN | 978-1-4503-3413-6 |
| DOI | https://doi.org/10.5555/2772879.2773255 |
| Keywords | information leakage, robustness, trust system, unfair rating, worst-case attack |
| Public URL | https://nottingham-repository.worktribe.com/output/2140453 |
| Publisher URL | http://dl.acm.org/citation.cfm?id=2772879.2773255 |
You might also like
- A Difficulty in Trust Modelling (2023), Presentation / Conference Contribution
- Pre-Signature Scheme for Trustworthy Offline V2V Communication (2023), Presentation / Conference Contribution
- Simulating the Impact of Personality on Fake News (2021), Presentation / Conference Contribution
- Provably Robust Decisions based on Potentially Malicious Sources of Information (2020), Presentation / Conference Contribution
- The Reputation Lag Attack (2019), Presentation / Conference Contribution