Uncertainty and Preference in Machine Learning

Much of our research centers on two key concepts in artificial intelligence: uncertainty and preference.

Machine learning is essentially concerned with extracting models from data and using these models to make predictions. As such, it is inseparably connected with uncertainty. Indeed, learning in the sense of generalizing beyond the data seen so far is necessarily based on a process of induction, i.e., replacing specific observations with general models of the data-generating process. Such models are always hypothetical, and the same holds true for the predictions they produce. Beyond the uncertainty inherent in inductive inference, other sources of uncertainty exist, including incorrect model assumptions and noisy data. Our research addresses how to appropriately represent uncertainty in machine learning, how to learn from uncertain and imprecise data, and how to produce reliable predictions in safety-critical applications.
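As a concrete illustration of reliable prediction, the sketch below shows a classifier that abstains whenever the entropy of its predicted class distribution exceeds a threshold, trading coverage for accuracy on the cases it does answer. This is a minimal, generic example on synthetic data, assuming scikit-learn and NumPy; the entropy threshold of 0.5 is an arbitrary choice for illustration, not a method from our own work.

```python
# Minimal sketch: a classifier that abstains when its predictive
# uncertainty (Shannon entropy of the class probabilities) is too high.
# Synthetic data; the threshold is an arbitrary illustrative value.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
proba = model.predict_proba(X_test)

# Entropy of the predicted class distribution, per test instance.
entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)

threshold = 0.5                     # abstain above this uncertainty level
confident = entropy < threshold
preds = proba.argmax(axis=1)

coverage = confident.mean()
accuracy = (preds[confident] == y_test[confident]).mean()
print(f"coverage: {coverage:.2f}, accuracy when not abstaining: {accuracy:.2f}")
```

Lowering the threshold makes the classifier more cautious: it answers fewer queries, but the answers it gives tend to be more accurate.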

The notion of “preference” has a long tradition in economics and operations research, where it has been formalized in various ways and studied extensively from different points of view. Today, it is a topic of key importance in artificial intelligence, where it serves as a basic formalism for knowledge representation and problem-solving. The emerging field of preference learning is concerned with methods for learning preference models from explicit or implicit preference information, which are typically used to predict the preferences of an individual or a group of individuals in new decision contexts. While research on preference learning was triggered in particular by applications such as “learning to rank” for information retrieval (e.g., Internet search engines) and recommender systems, the methods developed in this field are useful in many other domains as well.
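To make preference learning concrete, the sketch below implements the classical reduction of object ranking to binary classification: pairwise preferences are turned into training examples on feature differences, and a linear model fit to those differences yields a utility function that can rank new objects. All data here are synthetic and purely illustrative; only scikit-learn, SciPy, and NumPy are assumed.

```python
# Minimal sketch: learning to rank from pairwise preferences by the
# standard reduction to binary classification on feature differences.
# The "true" utility and all preferences are synthetic illustrations.
import numpy as np
from scipy.stats import kendalltau
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_objects, n_features = 100, 5
objects = rng.normal(size=(n_objects, n_features))
true_w = rng.normal(size=n_features)        # hidden utility weights
utility = objects @ true_w

# Sample pairs (i, j) and record whether object i is preferred to object j.
pairs = rng.integers(0, n_objects, size=(1000, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]   # drop self-comparisons
diffs = objects[pairs[:, 0]] - objects[pairs[:, 1]]
labels = (utility[pairs[:, 0]] > utility[pairs[:, 1]]).astype(int)

# A logistic model on differences recovers ranking-consistent weights.
model = LogisticRegression(fit_intercept=False).fit(diffs, labels)
learned_scores = objects @ model.coef_.ravel()

# Agreement between the true ranking and the learned one.
tau, _ = kendalltau(utility, learned_scores)
print(f"Kendall tau between true and learned ranking: {tau:.2f}")
```

The same reduction underlies many learning-to-rank methods: once pairwise preferences are encoded as classification examples, any probabilistic classifier can serve as the learner.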

Selected Publications