Towards a more reliable interpretation of machine learning outputs for safety-critical systems using feature importance fusion

Rengasamy, Divish; Rothwell, Benjamin C.; Figueredo, Grazziela P.

Abstract

When machine learning supports decision-making in safety-critical systems, it is important to verify and understand the reasons why a particular output is produced. Although feature importance calculation approaches assist in interpretation, there is a lack of consensus regarding how features' importance is quantified, which makes the explanations offered for the outcomes mostly unreliable. A possible solution to address the lack of agreement is to combine the results from multiple feature importance quantifiers to reduce the variance in estimates and to improve the quality of explanations. Our hypothesis is that this leads to more robust and trustworthy explanations of the contribution of each feature to machine learning predictions. To test this hypothesis, we propose an extensible model-agnostic framework divided into four main parts: (i) traditional data pre-processing and preparation for predictive machine learning models, (ii) predictive machine learning, (iii) feature importance quantification, and (iv) feature importance decision fusion using an ensemble strategy. Our approach is tested on synthetic data, where the ground truth is known. We compare different fusion approaches and their results for both training and test sets. We also investigate how different characteristics within the datasets affect the quality of the feature importance ensembles studied. The results show that, overall, our feature importance ensemble framework produces 15% fewer feature importance errors than existing methods. Additionally, the results reveal that different levels of noise in the datasets do not affect the feature importance ensembles' ability to accurately quantify feature importance, whereas the feature importance quantification error increases with the number of features and the number of orthogonal informative features. We also discuss the implications of our findings on the quality of explanations provided to safety-critical systems.
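The four-part pipeline described above lends itself to a short illustration. The following is a minimal sketch (not the authors' released code), assuming scikit-learn: it mirrors the framework on synthetic data with known informative features, using two illustrative importance quantifiers (impurity-based and permutation-based) and a simple mean ensemble as the fusion rule. The paper itself compares several fusion strategies; the specific quantifiers and fusion rule here are stand-in choices.

```python
# Illustrative sketch of the four framework parts described in the abstract:
# (i) data preparation, (ii) predictive modelling, (iii) feature importance
# quantification with multiple methods, (iv) fusion by a mean ensemble.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# (i) Synthetic data with a known number of informative features, then
# standard preparation (train/test split and scaling).
X, y = make_regression(n_samples=500, n_features=8, n_informative=4,
                       noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# (ii) A predictive model (random forest chosen for illustration).
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# (iii) Two feature importance quantifiers: impurity-based importance from
# the forest itself, and permutation importance measured on the test set.
imp_impurity = model.feature_importances_
perm = permutation_importance(model, X_test, y_test, n_repeats=10,
                              random_state=0)
imp_permutation = perm.importances_mean

def normalise(v):
    """Clip negatives and rescale so scores are comparable across methods."""
    v = np.clip(v, 0, None)
    return v / (v.sum() + 1e-12)

# (iv) Decision fusion: average the normalised estimates (mean ensemble).
fused = np.mean([normalise(imp_impurity), normalise(imp_permutation)], axis=0)
print("Fused feature importance:", np.round(fused, 3))
```

Because the ground truth is known in the synthetic setting (four informative features), the fused scores can be checked directly against it, which is how the abstract's error comparison between fusion approaches would be carried out.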

Citation

Rengasamy, D., Rothwell, B. C., & Figueredo, G. P. (2021). Towards a more reliable interpretation of machine learning outputs for safety-critical systems using feature importance fusion. Applied Sciences, 11(24), Article 11854. https://doi.org/10.3390/app112411854

Journal Article Type: Article
Acceptance Date: Dec 9, 2021
Online Publication Date: Dec 13, 2021
Publication Date: Dec 13, 2021
Deposit Date: Dec 20, 2021
Publicly Available Date: Dec 20, 2021
Journal: Applied Sciences
Electronic ISSN: 2076-3417
Publisher: MDPI AG
Peer Reviewed: Yes
Volume: 11
Issue: 24
Article Number: 11854
DOI: https://doi.org/10.3390/app112411854
Keywords: Fluid Flow and Transfer Processes; Computer Science Applications; Process Chemistry and Technology; General Engineering; Instrumentation; General Materials Science
Public URL: https://nottingham-repository.worktribe.com/output/7052899
Publisher URL: https://www.mdpi.com/2076-3417/11/24/11854
