With recent advancements in Artificial Intelligence (AI), AI techniques are being applied in a wide range of real-world settings. While explainability is often unnecessary when incorrect outcomes carry no significant consequences, it is crucial in safety-critical domains such as healthcare and autonomous vehicles. This has driven a growing demand for interpretability in Deep Learning models, and the number of publications on Explainable AI/Interpretable ML has been rising in recent years, as is evident from major AI conferences such as ICML.
At ICML 2022, around 15 publications focused on the interpretability of Deep Learning methods; at ICML 2023, that number rose to approximately 30. Similar trends have been observed at other prominent AI conferences, which prompted us to compile a collection of literature on Explainable AI. This compilation helps track recent trends in the methods being adopted, such as the growing popularity of Concept Identification approaches. It also provides a centralized place to review recent publications, serving as a resource for anyone who wants to stay up to date with developments in the field.
We welcome anyone interested in Explainable AI to explore our GitHub repository. Note that the list may not be comprehensive: we may have missed some papers during our literature review, and we only cover major AI conferences such as NeurIPS, ICML, ICLR, IJCAI, AAAI, and KDD. We will continue to update the list as we discover new publications. If you would like to contribute additional literature, please submit a pull request; if we have overlooked any relevant papers, you can also reach out to us via email at rushrukh@ksu.edu.