Journal article, 2019, ENG, 10.1109/MIS.2019.2957223
Guidotti R.; Monreale A.; Giannotti F.; Pedreschi D.; Ruggieri S.; Turini F.
CNR-ISTI, Pisa, Italy; Università di Pisa, Pisa, Italy; CNR-ISTI, Pisa, Italy; Università di Pisa, Pisa, Italy; Università di Pisa, Pisa, Italy; Università di Pisa, Pisa, Italy
The rise of sophisticated machine learning models has brought accurate but obscure decision systems, which hide their logic, thus undermining transparency, trust, and the adoption of artificial intelligence (AI) in socially sensitive and safety-critical contexts. We introduce a local rule-based explanation method that provides faithful explanations of the decision made by a black box classifier on a specific instance. The proposed method first learns an interpretable, local classifier on a synthetic neighborhood of the instance under investigation, generated by a genetic algorithm. Then, it derives from the interpretable classifier an explanation consisting of a decision rule, explaining the factual reasons for the decision, and a set of counterfactuals, suggesting the changes in the instance features that would lead to a different outcome. Experimental results show that the proposed method outperforms existing approaches in terms of both the quality of the explanations and the accuracy in mimicking the black box.
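The pipeline the abstract describes — generate a synthetic neighborhood around the instance, fit an interpretable surrogate, then read off a factual rule and counterfactuals — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the genetic neighborhood generation is replaced here by simple Gaussian perturbation for brevity, and `explain_instance` is a hypothetical helper name.

```python
# Hedged sketch of a local rule-based explanation, assuming a scikit-learn
# style black box. Gaussian perturbation stands in for the paper's genetic
# algorithm; all names here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

def explain_instance(black_box, x, feature_names, n_samples=1000, scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Synthetic neighborhood around x (stand-in for the genetic algorithm).
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    y = black_box.predict(Z)
    # 2. Interpretable local surrogate: a shallow decision tree.
    tree = DecisionTreeClassifier(max_depth=3, random_state=seed).fit(Z, y)
    # 3. Factual rule: the split conditions on x's root-to-leaf path.
    t, node, rule = tree.tree_, 0, []
    while t.children_left[node] != -1:          # -1 marks a leaf node
        f, thr = t.feature[node], t.threshold[node]
        if x[f] <= thr:
            rule.append(f"{feature_names[f]} <= {thr:.2f}")
            node = t.children_left[node]
        else:
            rule.append(f"{feature_names[f]} > {thr:.2f}")
            node = t.children_right[node]
    # 4. Counterfactuals: nearest neighbors the surrogate labels differently.
    pred = tree.predict(x.reshape(1, -1))[0]
    other = Z[tree.predict(Z) != pred]
    if len(other):
        other = other[np.argsort(np.abs(other - x).sum(axis=1))[:3]]
    return rule, pred, other

# Toy usage: a black box trained on a two-feature synthetic task.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
labels = (X[:, 0] + X[:, 1] > 0).astype(int)
bb = RandomForestClassifier(n_estimators=30, random_state=1).fit(X, labels)
rule, pred, cf = explain_instance(bb, np.array([1.0, 1.0]), ["f0", "f1"])
```

The returned `rule` is a conjunction of threshold conditions (the factual part of the explanation), while `cf` holds neighborhood points classified differently, i.e., candidate counterfactuals ranked by L1 distance from the instance.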
IEEE Intelligent Systems 34(6), pp. 14–22
Genetic algorithms, Intelligent systems, Decision making, Decision trees, Machine learning algorithms, Prediction algorithms, Data models, Explainable AI, Interpretable machine learning, Open the black box, Explanation rules, Counterfactuals
Giannotti Fosca, Guidotti Riccardo
ISTI – Istituto di scienza e tecnologie dell'informazione "Alessandro Faedo"
ID: 417414
Year: 2019
Type: Journal article
Creation: 2020-02-20 14:41:55.000
Last update: 2021-01-12 22:02:03.000
CNR authors
External IDs
CNR OAI-PMH: oai:it.cnr:prodotti:417414
DOI: 10.1109/MIS.2019.2957223
ISI Web of Science (WOS): 000510770600002
Scopus: 2-s2.0-85076272618