Explainable artificial intelligence

Within artificial intelligence (AI), explainable AI (XAI), often overlapping with interpretable AI or explainable machine learning (XML), is a field of research that explores methods giving humans the ability to exercise intellectual oversight over AI algorithms.[1][2] The main focus is on the reasoning behind the decisions or predictions made by AI algorithms,[3] with the goal of making them more understandable and transparent.[4] This addresses users' need to assess the safety of such systems and to scrutinize their automated decision-making.[5] XAI counters the "black box" tendency of machine learning, in which even an AI's designers cannot explain why it arrived at a specific decision.[6][7]
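One common family of post-hoc explanation methods probes a black-box model by perturbing its inputs and observing how the output changes. The sketch below is a loose, stdlib-only illustration of that idea (not the API of any particular XAI library); the model, feature names, and `explain` helper are all hypothetical:

```python
# Illustrative sketch of a perturbation-based local explanation.
# The "model" below stands in for an opaque system whose internal
# weights the end user cannot see.

def black_box_score(features):
    # Hidden logic: the user only observes inputs and the returned score.
    hidden_weights = {"income": 0.5, "debt": -0.8, "age": 0.1}
    return sum(hidden_weights[k] * v for k, v in features.items())

def explain(model, instance, eps=1.0):
    """Estimate how sensitive the model's output is to each feature
    by nudging one feature at a time and measuring the change."""
    base = model(instance)
    sensitivities = {}
    for name in instance:
        perturbed = dict(instance)
        perturbed[name] += eps
        sensitivities[name] = (model(perturbed) - base) / eps
    return sensitivities

applicant = {"income": 40.0, "debt": 10.0, "age": 30.0}
print(explain(black_box_score, applicant))
```

For this linear stand-in model, the recovered sensitivities match the hidden weights exactly; for a real nonlinear model they would only approximate its local behavior around the probed instance, which is the trade-off such perturbation explainers accept.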

XAI aims to help users of AI-powered systems perform more effectively by improving their understanding of how those systems reason.[8] XAI may be an implementation of the social right to explanation.[9] Even where no such legal right or regulatory requirement exists, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions.[10] XAI seeks to explain what has been done, what is being done, and what will be done next, and to reveal the information on which these actions are based.[11] This makes it possible to confirm existing knowledge, challenge existing knowledge, and generate new hypotheses.[12]

  1. ^ Longo, Luca; et al. (2024). "Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions". Information Fusion. 106. arXiv:2310.19775. doi:10.1016/j.inffus.2024.102301.
  2. ^ Mihály, Héder (2023). "Explainable AI: A Brief History of the Concept" (PDF). ERCIM News (134): 9–10.
  3. ^ Phillips, P. Jonathon; Hahn, Carina A.; Fontana, Peter C.; Yates, Amy N.; Greene, Kristen; Broniatowski, David A.; Przybocki, Mark A. (2021-09-29). "Four Principles of Explainable Artificial Intelligence". NIST. doi:10.6028/nist.ir.8312.
  4. ^ Vilone, Giulia; Longo, Luca (December 2021). "Notions of explainability and evaluation approaches for explainable artificial intelligence". Information Fusion. 76: 89–106. doi:10.1016/j.inffus.2021.05.009.
  5. ^ Confalonieri, Roberto; Coba, Ludovik; Wagner, Benedikt; Besold, Tarek R. (January 2021). "A historical perspective of explainable Artificial Intelligence". WIREs Data Mining and Knowledge Discovery. 11 (1). doi:10.1002/widm.1391. hdl:11577/3471605. ISSN 1942-4787.
  6. ^ Castelvecchi, Davide (2016-10-06). "Can we open the black box of AI?". Nature. 538 (7623): 20–23. Bibcode:2016Natur.538...20C. doi:10.1038/538020a. ISSN 0028-0836. PMID 27708329. S2CID 4465871.
  7. ^ Sample, Ian (5 November 2017). "Computer says no: why making AIs fair, accountable and transparent is crucial". The Guardian. Retrieved 30 January 2018.
  8. ^ Alizadeh, Fatemeh (2021). "I Don't Know, Is AI Also Used in Airbags?: An Empirical Study of Folk Concepts and People's Expectations of Current and Future Artificial Intelligence". Icom. 20 (1): 3–17. doi:10.1515/icom-2021-0009. S2CID 233328352.
  9. ^ Edwards, Lilian; Veale, Michael (2017). "Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For". Duke Law and Technology Review. 16: 18. SSRN 2972855.
  10. ^ Do Couto, Mark (February 22, 2024). "Entering the Age of Explainable AI". TDWI. Retrieved 2024-09-11.
  11. ^ Gunning, D.; Stefik, M.; Choi, J.; Miller, T.; Stumpf, S.; Yang, G.-Z. (2019-12-18). "XAI-Explainable artificial intelligence". Science Robotics. 4 (37): eaay7120. doi:10.1126/scirobotics.aay7120. ISSN 2470-9476. PMID 33137719.
  12. ^ Rieg, Thilo; Frick, Janek; Baumgartl, Hermann; Buettner, Ricardo (2020-12-17). "Demonstration of the potential of white-box machine learning approaches to gain insights from cardiovascular disease electrocardiograms". PLOS ONE. 15 (12): e0243615. Bibcode:2020PLoSO..1543615R. doi:10.1371/journal.pone.0243615. ISSN 1932-6203. PMC 7746264. PMID 33332440.