Extensions of Supervised Learning

Other lines of our research deal with extensions and generalizations of the standard setting of supervised learning. For example, while machine learning methods typically assume data to be represented in vectorial form, representations in terms of structured objects, such as graphs, sequences, or order relations, are more natural in many applications. Moreover, representations in terms of sets or distributions are important to capture uncertainty and imprecision. Developing algorithms for learning from data of this kind is particularly challenging. Our activities in this field include research on machine learning methods for structured output and multi-target prediction, predictive modeling for complex structures (including preference learning as an important special case), as well as weakly supervised and self-supervised learning.
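
As a concrete illustration of the multi-target setting mentioned above, the following sketch fits one binary classifier per target using scikit-learn's MultiOutputClassifier. The data is synthetic and purely illustrative, and the per-target reduction is only a simple baseline; the methods studied here go further by also modeling dependencies between the targets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # 200 instances, 5 features
# Two binary targets derived from the features (purely synthetic):
Y = np.column_stack([
    (X[:, 0] + X[:, 1] > 0).astype(int),
    (X[:, 2] - X[:, 3] > 0).astype(int),
])

# Baseline reduction: one independent classifier per target; structured
# output prediction additionally exploits dependencies between targets.
model = MultiOutputClassifier(LogisticRegression()).fit(X, Y)
print(model.predict(X[:3]))              # shape (3, 2): one label per target
```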

Another direction in which the standard setting of supervised learning can be generalized is from batch to online learning or, stated differently, from learning in a static to learning in a dynamic environment. In this regard, we are particularly interested in bandit algorithms, reinforcement learning, and learning on data streams. In contrast to the standard batch setting, in which the entire training data is assumed to be available a priori, these settings require incremental algorithms that learn on continuous and potentially unbounded streams of data. The training and prediction phases are thus no longer separated but tightly interleaved. Developing algorithms for online learning is especially challenging due to the constraints the learner must obey, such as bounded time and memory resources: adaptation and prediction must be fast, perhaps in real time, and the data cannot be stored in its entirety. Moreover, learning algorithms must be able to react to possibly changing environmental conditions, including changes in the underlying data-generating process (so-called concept drift).
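
To make the interleaving of prediction and training concrete, the following sketch runs a simple perceptron-style learner over a synthetic stream whose underlying concept changes abruptly halfway through. Everything here (the linear model, the mistake-driven update, the drift point) is an illustrative assumption, not a specific method from our work; the point is that each example is predicted, used for a constant-time update, and then discarded.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                          # model state: constant memory
eta = 0.1                                # learning rate
errors = 0

for t in range(10_000):
    x = rng.normal(size=3)
    # Concept drift: the true decision boundary flips at t = 5000.
    true_w = np.array([1.0, -1.0, 0.5]) if t < 5_000 else np.array([-1.0, 1.0, 0.5])
    y = 1 if x @ true_w > 0 else -1

    y_hat = 1 if x @ w > 0 else -1       # predict first ...
    errors += y_hat != y
    if y_hat != y:                       # ... then update, then discard x
        w += eta * y * x                 # perceptron-style correction

    if (t + 1) % 2_000 == 0:
        print(f"t={t + 1:6d}  cumulative errors={errors}")
```

After the drift at t = 5000, the error count rises briefly and then levels off again as the learner adapts to the new concept, without ever revisiting past data.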

Selected Publications