Reinforcement Learning for Flight Planning

In this applied project, we develop new solutions for routing aerial vehicles carrying sensor equipment in environments with dynamic hazards such as radioactive clouds.

Research Focus

Flight planning covers tasks such as guiding aerial vehicles to gather sensor data over given test areas (coverage paths) or dynamically searching for points of interest. An important challenge is the avoidance of dynamic no-fly zones such as hazardous areas. Furthermore, the quality of sensor measurements generally depends on cruising altitude and stable trajectories. For reinforcement learning solutions, this yields challenges with respect to long-term and multi-objective reward structures.
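
As a rough illustration of such a multi-objective reward structure, the following Python sketch combines coverage progress, a dynamic no-fly-zone penalty, and sensor-quality terms into a single scalar reward. The function name, the weights, and the target altitude are illustrative assumptions and not part of the environment described in the publication below.

def multi_objective_reward(
    newly_covered_cells: int,
    in_no_fly_zone: bool,
    altitude: float,
    altitude_change: float,
    target_altitude: float = 50.0,  # assumed sensor-optimal altitude in metres
    w_coverage: float = 1.0,
    w_hazard: float = 10.0,
    w_altitude: float = 0.1,
    w_stability: float = 0.05,
) -> float:
    """Scalarize coverage, hazard avoidance, and measurement-quality objectives.

    Weights are placeholders; in practice they would be tuned or replaced by a
    proper multi-objective formulation.
    """
    r_coverage = w_coverage * newly_covered_cells               # long-term: complete the survey
    r_hazard = -w_hazard if in_no_fly_zone else 0.0             # penalty for entering a dynamic hazard
    r_altitude = -w_altitude * abs(altitude - target_altitude)  # measurement quality depends on altitude
    r_stability = -w_stability * abs(altitude_change)           # prefer stable trajectories
    return r_coverage + r_hazard + r_altitude + r_stability

# Example: three newly covered grid cells, outside any no-fly zone, flying at 55 m, climbing 2 m per step.
reward = multi_objective_reward(newly_covered_cells=3, in_no_fly_zone=False,
                                altitude=55.0, altitude_change=2.0)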

Publication

  • J. Blake and M. Schubert. Aerial Coverage Path Planning in Nuclear Emergencies: A Training and Evaluation Environment. Demonstration Track at the 34th International Joint Conference on Artificial Intelligence (IJCAI 2025). Montreal, Canada, Aug 16-22, 2025.