ABSTRACT:
Artificial Intelligence (AI) is rapidly advancing, but its inner workings, particularly in deep neural networks, remain poorly understood. While these networks produce impressive results, they often act as "black boxes," leaving users in the dark about how they arrive at their decisions. This lack of transparency raises concerns about the reliability and trustworthiness of AI models. In response, the field of eXplainable Artificial Intelligence (XAI) has emerged to shed light on the decision-making processes of these networks. XAI researchers aim to provide insight into how AI systems operate, address questions about their trustworthiness, and offer explanations for their choices. In doing so, XAI not only instills confidence in AI predictions but also helps users discern the circumstances in which these models may fall short. Ultimately, explainability is essential for building trust in AI, ensuring its ethical use, and facilitating broader adoption across domains.
KEYWORDS: Explainable Artificial Intelligence, XAI, Computer Vision, Deep Learning