Reinforcement Learning for Flight Planning
In this applied project, we develop new solutions for routing aerial vehicles carrying sensor equipment in environments with dynamic hazards such as radioactive clouds.

Flight planning covers tasks such as guiding aerial vehicles to collect sensor data over given test areas (coverage paths) or searching dynamically for points of interest. An important challenge is the avoidance of dynamic no-fly zones such as hazardous areas. Furthermore, the quality of sensor measurements generally depends on cruising altitude and trajectory stability. For reinforcement learning solutions, this yields challenges with respect to long-term and multi-objective reward structures.
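As a rough illustration of such a multi-objective reward structure, the sketch below scalarizes three competing objectives — coverage progress, no-fly-zone avoidance, and altitude stability — into a single step reward. The component names, weights, and target altitude are illustrative assumptions, not part of the project's actual design.

```python
def multi_objective_reward(newly_covered, in_no_fly_zone, altitude,
                           altitude_target=50.0, weights=(1.0, 10.0, 0.02)):
    """Hypothetical scalarized reward for a coverage-path agent.

    newly_covered   -- grid cells covered for the first time this step
    in_no_fly_zone  -- True if the vehicle entered a dynamic no-fly zone
    altitude        -- current cruising altitude in metres
    """
    w_cov, w_haz, w_alt = weights
    r_coverage = w_cov * newly_covered                      # long-term objective: cover the test area
    r_hazard = -w_haz if in_no_fly_zone else 0.0            # penalty for entering a hazardous area
    r_altitude = -w_alt * abs(altitude - altitude_target)   # keep a sensor-friendly, stable altitude
    return r_coverage + r_hazard + r_altitude
```

How the weights trade off hazard avoidance against coverage speed is exactly the kind of design question such a project has to resolve, e.g. via constrained or Pareto-based multi-objective RL rather than a fixed scalarization.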