Using Information Theory to Improve the Robustness of Trust Systems

Wang, Dongxia; Muller, Tim; Irissappane, Athirai A.; Zhang, Jie; Liu, Yang

Authors

Dongxia Wang

Tim Muller (Tim.Muller@nottingham.ac.uk)
Assistant Professor

Athirai A. Irissappane

Jie Zhang

Yang Liu



Abstract

Unfair rating attacks on trust systems can degrade the accuracy of trust evaluation when truster agents seek trust ratings (recommendations) about trustee agents from other (advisor) agents. A robust trust system should remain accurate even under worst-case attacks, which yield the least useful recommendations. In this work, we use information theory to quantify the utility of recommendations, and we analyse models in which advisors exhibit worst-case behaviour. With these models, we formally prove that if the fraction of dishonest advisors exceeds a certain threshold, recommendations become completely useless (in the worst case). Our evaluation of several popular trust models shows that they cannot provide accurate trust evaluation under the worst-case attack, as well as under many other types of unfair rating attacks. Our way of explicitly modelling dishonest advisors induces a method of computing trust accurately, which can serve to improve the robustness of trust models.
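To illustrate the information-theoretic idea in the abstract, the following is a minimal sketch, not the paper's actual model: it assumes a binary trustee behaviour with a uniform prior and worst-case dishonest advisors who always report the opposite of the truth. A single rating then behaves like a binary symmetric channel whose crossover probability is the dishonest fraction, and the information it carries drops to zero once that fraction reaches one half.

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon entropy (in bits) of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rating_information(alpha: float) -> float:
    """Mutual information I(X; R) in bits between a trustee's binary
    behaviour X (uniform prior) and one rating R, when a fraction
    `alpha` of advisors are dishonest and, in this simplified worst
    case, always report the opposite of X. The rating is then a binary
    symmetric channel with crossover alpha, so I(X; R) = 1 - H_b(alpha).
    (Illustrative assumption only; the paper's models are more general.)"""
    return 1.0 - binary_entropy(alpha)

if __name__ == "__main__":
    for alpha in (0.0, 0.1, 0.25, 0.4, 0.5):
        print(f"dishonest fraction {alpha:.2f}: "
              f"{rating_information(alpha):.3f} bits per rating")
    # At alpha = 0.5 the output is 0 bits: in this toy setting the
    # ratings carry no information about the trustee at all.
```

In this simplified setting the threshold beyond which recommendations become useless is 1/2; the paper derives the corresponding thresholds for its worst-case advisor models.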

Citation

Wang, D., Muller, T., Irissappane, A. A., Zhang, J., & Liu, Y. (2015, May). Using Information Theory to Improve the Robustness of Trust Systems. Presented at AAMAS'15: International Conference on Autonomous Agents and Multiagent Systems, Istanbul, Turkey.

Presentation Conference Type Edited Proceedings
Conference Name AAMAS'15: International Conference on Autonomous Agents and Multiagent Systems
Start Date May 4, 2015
End Date May 8, 2015
Acceptance Date Jan 28, 2015
Publication Date 2015
Deposit Date Jan 13, 2020
Publisher Association for Computing Machinery (ACM)
Peer Reviewed Peer Reviewed
Pages 791-799
Book Title AAMAS '15: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems
ISBN 978-1-4503-3413-6
DOI https://doi.org/10.5555/2772879.2773255
Keywords information leakage, robustness, trust system, unfair rating, worst-case attack
Public URL https://nottingham-repository.worktribe.com/output/2140453
Publisher URL http://dl.acm.org/citation.cfm?id=2772879.2773255