Conference proceedings contribution, 2023, ENG, 10.1109/cibcb56990.2023.10264877

A theoretical framework for AI models explainability with application in biomedicine

Rizzo M.; Veneri A.; Albarelli A.; Lucchese C.; Nobile M.; Conati C.

Ca' Foscari University of Venice, Venezia, Italy; Ca' Foscari University of Venice, Venezia, Italy and CNR-ISTI, Pisa, Italy; Ca' Foscari University of Venice, Venezia, Italy; Ca' Foscari University of Venice, Venezia, Italy; Ca' Foscari University of Venice, Venezia, Italy; University of British Columbia, Vancouver, Canada

EXplainable Artificial Intelligence (XAI) is a vibrant research topic in the artificial intelligence community. It is attracting growing interest across methods and domains, especially those involving high-stakes decision-making, such as the biomedical sector. Much has been written on the subject, yet XAI still lacks shared terminology and a framework capable of providing structural soundness to explanations. In our work, we address these issues by proposing a novel definition of explanation that synthesizes what can be found in the literature. We recognize that explanations are not atomic but rather the combination of evidence stemming from the model and its input-output mapping, and the human interpretation of this evidence. Furthermore, we characterize explanations through the properties of faithfulness (i.e., the explanation is an accurate description of the model's inner workings and decision-making process) and plausibility (i.e., how convincing the explanation seems to the user). Our theoretical framework simplifies how these properties are operationalized, and it provides new insights into common explanation methods that we analyze as case studies. We also discuss the impact our framework could have in biomedicine, a highly sensitive application domain where XAI can play a central role in generating trust.
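As an illustrative reading aid (the authors' own notation is not reproduced in this record, so every symbol below is an assumption introduced for clarity), the decomposition described in the abstract can be sketched as a pairing of model-derived evidence with its human interpretation, with faithfulness and plausibility acting on the two components separately:

\[
  E = (e, u), \qquad e = \phi\bigl(f, x, f(x)\bigr), \qquad u = \psi_{\text{user}}(e)
\]
\[
  \text{faithfulness}(E) \approx \text{agreement}(e, f), \qquad
  \text{plausibility}(E) \approx \text{convincingness}(u \mid \text{user})
\]

Here \(f\) denotes the model, \(x\) an input, \(\phi\) an evidence-extraction procedure (e.g., a feature-attribution method), and \(\psi_{\text{user}}\) the interpretation step performed by the human; faithfulness concerns only the evidence's agreement with the model, while plausibility concerns only how convincing the interpreted explanation is to the user.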

CIBCB 2023 - IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology, Eindhoven, The Netherlands, 29-31 August 2023

Keywords

Explainability, Machine learning, Biomedicine

CNR authors

Veneri Alberto

CNR institutes

ISTI – Istituto di scienza e tecnologie dell'informazione "Alessandro Faedo"

ID: 488083

Year: 2023

Type: Conference proceedings contribution

Creation: 2023-10-31 23:35:22.000

Last update: 2023-11-27 16:35:48.000

External IDs

CNR OAI-PMH: oai:it.cnr:prodotti:488083

DOI: 10.1109/cibcb56990.2023.10264877

Scopus: 2-s2.0-85174953006

ISI Web of Science (WOS): 001090563700025