Much of our research centers on two key cognitive concepts of artificial intelligence: uncertainty and preference.
Machine learning is essentially concerned with extracting models from data and using these models to make predictions. As such, it is inseparably connected with uncertainty. Indeed, learning in the sense of generalizing beyond the data seen so far is necessarily based on a process of induction, i.e., replacing specific observations with general models of the data-generating process. Such models are always hypothetical, and the same holds true for the predictions produced by a model. In addition to the uncertainty inherent in inductive inference, other sources of uncertainty exist, including incorrect model assumptions and noisy data. Our research addresses questions regarding appropriate representations of uncertainty in machine learning, how to learn from uncertain and imprecise data, and how to produce reliable predictions in safety-critical applications.
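One common way to represent this kind of uncertainty is the entropy-based decomposition over an ensemble of models: the entropy of the averaged prediction is split into an aleatoric part (average entropy of the individual members, reflecting noise in the data) and an epistemic part (disagreement among members, reflecting uncertainty about the model itself). The following is a minimal sketch of that decomposition using NumPy and randomly generated ensemble predictions as stand-in data; it is an illustration, not an implementation from any of the publications below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 5 models, each predicting class probabilities
# for 3 inputs over 2 classes; shape (members, inputs, classes).
ensemble_probs = rng.dirichlet(alpha=[2.0, 2.0], size=(5, 3))

def entropy(p, axis=-1):
    """Shannon entropy in nats, guarding against log(0)."""
    return -np.sum(p * np.log(np.clip(p, 1e-12, 1.0)), axis=axis)

# Total uncertainty: entropy of the ensemble-averaged prediction.
mean_probs = ensemble_probs.mean(axis=0)
total = entropy(mean_probs)

# Aleatoric part: average entropy of the individual members.
aleatoric = entropy(ensemble_probs).mean(axis=0)

# Epistemic part: the difference, i.e. the disagreement among members
# (non-negative, since entropy is concave).
epistemic = total - aleatoric
```

Inputs on which the members disagree strongly get a large `epistemic` value, which is exactly the signal one would act on in safety-critical settings (e.g., abstaining from a prediction).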
The notion of “preference” has a long tradition in economics and operational research, where it has been formalized in various ways and studied extensively from different points of view. Nowadays, it is a topic of key importance in artificial intelligence, where it serves as a basic formalism for knowledge representation and problem-solving. The emerging field of preference learning is concerned with methods for learning preference models from explicit or implicit preference information, which are typically used for predicting the preferences of an individual or a group of individuals in new decision contexts. While research on preference learning has been specifically triggered by applications such as “learning to rank” for information retrieval (e.g., Internet search engines) and recommender systems, the methods developed in this field are useful in many other domains as well.
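A standard building block in this setting is to learn a utility function from pairwise comparisons: modeling P(a ≻ b) as a logistic function of the utility difference (a Bradley-Terry-style model) and fitting it to observed preferences. The sketch below does this with plain gradient descent on synthetic data generated by a hidden linear utility; all names and parameters are illustrative assumptions, not code from the publications listed here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: items described by 4 features; a hidden linear
# utility w_true generates noiseless pairwise preferences a > b.
d = 4
w_true = rng.normal(size=d)
items = rng.normal(size=(200, d))

pairs = rng.integers(0, len(items), size=(500, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]  # drop self-comparisons
# Label 1 if the first item of the pair is preferred under w_true.
labels = (items[pairs[:, 0]] @ w_true > items[pairs[:, 1]] @ w_true).astype(float)

# Bradley-Terry style model: P(a > b) = sigmoid(w . (x_a - x_b)),
# fitted by gradient descent on the logistic loss over difference vectors.
diffs = items[pairs[:, 0]] - items[pairs[:, 1]]
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-diffs @ w))
    w -= 0.1 * diffs.T @ (p - labels) / len(diffs)

# Fraction of training comparisons the learned utility gets right.
agreement = np.mean((diffs @ w > 0) == (labels == 1))
```

Once `w` is learned, ranking new items by their scores `items @ w` yields exactly the “learning to rank” use case mentioned above.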
Selected Publications
- Eyke Hüllermeier, Roman Słowiński (2024)
  Preference learning and multiple criteria decision aiding: differences, commonalities, and synergies – part I
  In: 4OR, Vol. 22, No. 2: pp. 179-209
- Eyke Hüllermeier, Willem Waegeman (2021)
  Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods
  In: Machine Learning, Vol. 110, No. 3: pp. 457-506
- Vu-Linh Nguyen, Mohammad Hossein Shaker, Eyke Hüllermeier (2022)
  How to measure uncertainty in uncertainty sampling for active learning
  In: Machine Learning, Vol. 111, No. 1: pp. 89-122
- Viktor Bengs, Róbert Busa-Fekete, Adil El Mesaoudi-Paul, Eyke Hüllermeier (2021)
  Preference-based Online Learning with Dueling Bandits: A Survey
  In: Journal of Machine Learning Research, Vol. 22, No. 7: pp. 1-108
- Robin Senge, Stefan Bösner, Krzysztof Dembczyński, Jörg Haasenritter, Oliver Hirsch, Norbert Donner-Banzhoff, Eyke Hüllermeier (2014)
  Reliable classification: Learning classifiers that distinguish aleatoric and epistemic uncertainty
  In: Information Sciences, Vol. 255: pp. 16-29