Explainable Artificial Intelligence (XAI)

With a growing number of practical applications, notions like transparency, interpretability, and understandability of AI models have received increasing attention in recent years, not least due to political regulations claiming a “right to explanation”. The latter refers to cases in which AI models make personalized recommendations, take important decisions, or act on behalf of people. In such cases, it is not only the quality, accuracy, or correctness of recommendations, decisions, or actions that matters. Instead, there is a natural desire to understand why a certain decision or prediction was made by the AI model – just think of a medical diagnosis or the recommendation of a medical treatment as an example. However, if a prediction is produced by a machine learning model that has been trained in a data-driven way, such an explanation is not readily available. On the contrary, many machine learning models, such as neural networks, have a “black box” character. The field of explainable artificial intelligence (XAI) is devoted to the development of tools and methods that produce more explainable models or help people understand AI models and the predictions made by such models.
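As a minimal illustration of the model-agnostic flavor of many XAI methods, the sketch below estimates per-feature importance by perturbing one input feature at a time and measuring how much the output of a black-box model changes (an occlusion-style attribution). The model `black_box`, the input, and the baseline value are purely hypothetical stand-ins, not a method from the publications listed here.

```python
# Occlusion-style attribution sketch: treat the model as a black box
# and ask how much the prediction changes when feature i is replaced
# by a neutral baseline value.

def black_box(x):
    # Hypothetical stand-in for an opaque trained model
    # (e.g., a neural network).
    return 3.0 * x[0] + 0.5 * x[1] - 2.0 * x[2]

def occlusion_importance(model, x, baseline=0.0):
    """Importance of feature i = |f(x) - f(x with x_i set to baseline)|."""
    base_pred = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline
        scores.append(abs(base_pred - model(perturbed)))
    return scores

print(occlusion_importance(black_box, [1.0, 1.0, 1.0]))  # -> [3.0, 0.5, 2.0]
```

Such perturbation-based scores require only query access to the model, which is why variants of this idea appear in many post-hoc explanation techniques.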

Selected Publications