While working on various Reinforcement Learning applications in aviation, our group and I ran into the problem that it is very difficult to compare research. This is because everyone uses their own simulator, with their own scenarios and their own cost functions. Additionally, every time anyone wanted to try something, a lot of time was wasted on re-inventing the wheel, building simulators and frameworks that probably already existed but were difficult to find or closed source.
To make our lives, and those of our colleagues, easier, we came together to develop BlueSky-Gym:
BlueSky-Gym
A Gymnasium-style library for standardized Reinforcement Learning research in Air Traffic Management, developed in Python. Built on BlueSky and the Farama Foundation’s Gymnasium.
An example trained agent attempting the merge environment available in BlueSky-Gym.
For a complete list of the currently available environments, click here.
Installation
pip install bluesky-gym
Note that the pip package is named bluesky-gym; when importing it in Python, however, use bluesky_gym.
Usage
Using the environments follows the standard API from Gymnasium, an example of which is given below:
import gymnasium as gym
import bluesky_gym

bluesky_gym.register_envs()

env = gym.make('MergeEnv-v0', render_mode='human')
obs, info = env.reset()
done = truncated = False
while not (done or truncated):
    action = ...  # Your agent code here
    obs, reward, done, truncated, info = env.step(action)
Additionally, you can directly use algorithms from standardized libraries such as Stable-Baselines3 or RLlib to train a model:
import gymnasium as gym
import bluesky_gym
from stable_baselines3 import DDPG

bluesky_gym.register_envs()

env = gym.make('MergeEnv-v0', render_mode=None)
model = DDPG("MultiInputPolicy", env)
model.learn(total_timesteps=2_000_000)
model.save("ddpg_merge")  # path to write the trained model to
Contributing and Assistance
If you would like to contribute to BlueSky-Gym, or need assistance in setting up or creating your own environments, do not hesitate to open an issue or reach out to one of us via the BlueSky-Gym Discord. Additionally, you can have a look at the roadmap for inspiration on where you could contribute and to get an idea of the direction BlueSky-Gym is going.
Citing
If you use BlueSky-Gym in your work, please cite it using:
@misc{bluesky-gym,
  author  = {Groot, DJ and Leto, G and Vlaskin, A and Moec, A and Ellerbroek, J},
  title   = {BlueSky-Gym: Reinforcement Learning Environments for Air Traffic Applications},
  year    = {2024},
  journal = {SESAR Innovation Days 2024},
}