Article

Title

Why did AI get this one wrong? - Tree-based explanations of machine learning model predictions

Authors

[ 1 ] Institute of Computing Science, Faculty of Computing and Telecommunications, Poznan University of Technology | [ P ] employee

Scientific discipline (Law 2.0)

[2.3] Information and communication technology

Year of publication

2023

Published in

Artificial Intelligence in Medicine

Journal year: 2023 | Journal volume: vol. 135

Article type

scientific article

Publication language

English

Keywords
EN
  • XAI
  • Black-box
  • Explanation
  • Local explanation
  • Interpretable
  • Explainable
  • Fidelity
  • Reliability
  • Post-hoc
  • Model agnostic
  • Surrogate model
Abstract

EN

Increasingly complex learning methods such as boosting, bagging and deep learning have made ML models more accurate, but harder to interpret and explain, culminating in black-box machine learning models. Model developers and users alike are often presented with a trade-off between performance and intelligibility, especially in high-stakes applications like medicine. In the present article we propose a novel methodological approach for generating explanations for the predictions of a generic machine learning model, given a specific instance for which the prediction has been made. The method, named AraucanaXAI, is based on surrogate, locally-fitted classification and regression trees that are used to provide post-hoc explanations of the prediction of a generic machine learning model. Advantages of the proposed XAI approach include superior fidelity to the original model, the ability to deal with non-linear decision boundaries, and native support for both classification and regression problems. We provide a packaged, open-source implementation of the AraucanaXAI method and evaluate its behaviour in a number of settings commonly encountered in medical applications of AI. These include potential disagreement between the model prediction and the physician's expert opinion, and low reliability of the prediction due to data scarcity.
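As a rough illustration of the idea described in the abstract, the sketch below fits a shallow decision tree to a black-box model's own predictions in the neighbourhood of a single instance and reports the surrogate's local fidelity. This is a generic local-surrogate-tree sketch, not the authors' packaged AraucanaXAI implementation: the dataset, the explain_instance helper, and the neighbourhood size and tree depth are all illustrative assumptions.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# A black-box model whose individual predictions we want to explain.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

def explain_instance(x, X_ref, model, n_neighbors=100, max_depth=3):
    """Fit a shallow CART on the black box's predictions in a neighbourhood
    of x and report the surrogate's fidelity to the black box there.
    (Hypothetical helper for illustration; not the AraucanaXAI API.)"""
    # Take the n_neighbors reference points closest to x (Euclidean distance).
    dists = np.linalg.norm(X_ref - x, axis=1)
    neighborhood = X_ref[np.argsort(dists)[:n_neighbors]]
    # Label the neighbourhood with the black box's predictions, not the true
    # labels: the surrogate should mimic the model, not the data.
    y_bb = model.predict(neighborhood)
    surrogate = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    surrogate.fit(neighborhood, y_bb)
    fidelity = accuracy_score(y_bb, surrogate.predict(neighborhood))
    return surrogate, fidelity

surrogate, fidelity = explain_instance(X_test[0], X_train, black_box)
print(f"Local fidelity to the black box: {fidelity:.2f}")
print(export_text(surrogate))  # human-readable decision rules near this instance

Training the surrogate on the black box's predictions rather than the ground-truth labels is what makes fidelity (agreement with the model being explained) the relevant quality measure here, rather than accuracy on the data.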

Date of online publication

01.12.2022

Pages (from - to)

102471-1 - 102471-9

DOI

10.1016/j.artmed.2022.102471

URL

https://www.sciencedirect.com/science/article/pii/S0933365722002238

Comments

Article Number: 102471

License type

CC BY-NC-ND (attribution - noncommercial - no derivatives)

Open Access Mode

hybrid journal

Open Access Text Version

final published version

Date of Open Access to the publication

in press

Ministry points / journal

100

Impact Factor

7.5 [List 2022]
