Research Repository

Quantifying robustness of trust systems against collusive unfair rating attacks using information theory

Wang, Dongxia; Muller, Tim; Zhang, Jie; Liu, Yang

Authors

Dongxia Wang

Tim Muller (Tim.Muller@nottingham.ac.uk), Assistant Professor

Jie Zhang

Yang Liu



Abstract

Unfair rating attacks occur in existing trust and reputation systems, degrading their quality. A formal model based on information theory measures the maximum impact of independent attackers [Wang et al., 2015]. We improve on these results in two ways: (1) we alter the methodology so that it can also reason about colluding attackers, and (2) we extend the method to measure the strength of any attack (rather than just the strongest attack). Using (1), we identify the strongest collusion attacks, helping to construct robust trust systems. Using (2), we quantify the strength of (classes of) attacks found in the literature. This addresses a shortcoming of current research into collusion resistance: specific (types of) attacks are used in simulations, preventing direct comparison between analyses of different systems.
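The information-theoretic idea behind this line of work can be illustrated with a small sketch. Below, ratings are modeled as a noisy channel from a subject's hidden state (good/bad) to a reported rating, and the damage of unfair raters shows up as a drop in mutual information between the two. The binary channel, the attacker fraction `alpha`, and the `mutual_information` helper are illustrative assumptions for this sketch, not the paper's actual model.

```python
import math

def mutual_information(p_x, channel):
    """I(X;Y) in bits, given input distribution p_x and channel rows P(Y|X=x)."""
    p_y = [sum(p_x[x] * channel[x][y] for x in range(len(p_x)))
           for y in range(len(channel[0]))]
    mi = 0.0
    for x, px in enumerate(p_x):
        for y, pyx in enumerate(channel[x]):
            if px > 0 and pyx > 0 and p_y[y] > 0:
                mi += px * pyx * math.log2(pyx / p_y[y])
    return mi

# Subject is good or bad with equal probability; honest raters
# report the true state with probability 0.9.
p_x = [0.5, 0.5]
honest = [[0.9, 0.1], [0.1, 0.9]]

# Suppose a fraction alpha of ratings come from attackers who always
# invert their report; the observed channel is then a mixture.
alpha = 0.3
attacked = [[(1 - alpha) * honest[x][y] + alpha * honest[x][1 - y]
             for y in range(2)] for x in range(2)]

print(round(mutual_information(p_x, honest), 4))    # ≈ 0.531 bits
print(round(mutual_information(p_x, attacked), 4))  # ≈ 0.0752 bits
```

The gap between the two mutual-information values is one natural way to quantify an attack's impact; comparing that gap across attack strategies is, loosely, the kind of comparison the paper's extension (2) enables.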

Conference Name International Joint Conferences on Artificial Intelligence
Start Date Jul 25, 2015
End Date Jul 31, 2015
Acceptance Date Apr 16, 2015
Publication Date Jul 1, 2015
Deposit Date Jan 13, 2020
Publisher International Joint Conferences on Artificial Intelligence
Pages 111-117
Book Title IJCAI'15: Proceedings of the 24th International Conference on Artificial Intelligence
ISBN 9781577357384
Public URL https://nottingham-repository.worktribe.com/output/2141626
Publisher URL https://dl.acm.org/doi/10.5555/2832249.2832265