Using Information Theory to Improve the Robustness of Trust Systems
Wang, Dongxia; Muller, Tim; Irissappane, Athirai A.; Zhang, Jie; Liu, Yang
Abstract
Unfair rating attacks on trust systems can reduce the accuracy of trust evaluation when truster agents seek trust ratings (recommendations) about trustee agents from other agents (advisors). A robust trust system should remain accurate even under worst-case attacks, which yield the least useful recommendations. In this work, we use information theory to quantify the utility of recommendations, and we analyse models in which advisors exhibit worst-case behaviour. With these models, we formally prove that if the fraction of dishonest advisors exceeds a certain threshold, recommendations become completely useless (in the worst case). Our evaluation of several popular trust models shows that they cannot provide accurate trust evaluation under the worst-case attack, as well as under many other types of unfair rating attacks. Explicitly modelling dishonest advisors induces a method of computing trust accurately, which can serve to improve the robustness of trust models.
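To make the threshold result concrete, here is a minimal sketch (not the paper's actual model) of how a worst-case unfair rating attack can drive the information content of recommendations to zero. It assumes a binary trustee state with a uniform prior, a single advisor rating, and a fraction `alpha` of dishonest advisors who lie with an adversarially chosen probability; the function names and the binary-symmetric-channel simplification are illustrative assumptions, not the authors' construction.

```python
import math

def binary_entropy(p):
    """Binary entropy h(p) in bits; h(0) = h(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def worst_case_information(alpha):
    """Worst-case mutual information I(X; R) in bits between a trustee's
    binary state X (uniform prior) and a single advisor rating R, when a
    fraction `alpha` of advisors is dishonest.

    In this toy model a dishonest advisor inverts the truth with
    probability q, so the rating channel is binary symmetric with
    crossover epsilon = alpha * q and I(X; R) = 1 - h(epsilon). The
    adversary picks q in [0, 1] to push epsilon as close to 1/2 as
    possible, so the achievable crossover is min(alpha, 1/2).
    """
    eps = min(alpha, 0.5)
    return 1.0 - binary_entropy(eps)
```

In this simplified setting the threshold is alpha = 1/2: below it, some information about the trustee always leaks through the ratings (e.g. `worst_case_information(0.25)` is about 0.19 bits), while at or above it the adversary can make ratings statistically independent of the trustee's behaviour, i.e. completely useless, matching the qualitative claim in the abstract.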
Citation
Wang, D., Muller, T., Irissappane, A. A., Zhang, J., & Liu, Y. (2015, May). Using Information Theory to Improve the Robustness of Trust Systems. Presented at AAMAS'15: International Conference on Autonomous Agents and Multiagent Systems, Istanbul, Turkey.
Presentation Conference Type | Edited Proceedings
--- | ---
Conference Name | AAMAS'15: International Conference on Autonomous Agents and Multiagent Systems
Start Date | May 4, 2015
End Date | May 8, 2015
Acceptance Date | Jan 28, 2015
Publication Date | 2015
Deposit Date | Jan 13, 2020
Publisher | Association for Computing Machinery (ACM)
Peer Reviewed | Peer Reviewed
Pages | 791–799
Book Title | AAMAS '15: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems
ISBN | 978-1-4503-3413-6
DOI | https://doi.org/10.5555/2772879.2773255
Keywords | information leakage, robustness, trust system, unfair rating, worst-case attack
Public URL | https://nottingham-repository.worktribe.com/output/2140453
Publisher URL | http://dl.acm.org/citation.cfm?id=2772879.2773255