

Install the latest CityLearn version from PyPI with the pip command:

!pip install CityLearn==2.0.0

The CityLearn installation comes with a CLI for running simulations. The CLI is useful when multiple environment-agent setups, each defined in its own schema, need to be submitted as one job for parallel execution, e.g., on an HPC cluster.
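For instance, a small helper can generate one `learn` command per schema for a job scheduler or subprocess pool to launch. A minimal sketch, assuming schema files are available locally; the schema paths and the output-naming convention below are illustrative, not CityLearn conventions:

```python
from pathlib import Path

def build_commands(schemas):
    """Build one `python -m citylearn learn` CLI call per schema file."""
    commands = []
    for schema in schemas:
        # assumed convention: name each simulation's pickle after its schema
        out = Path(schema).with_suffix('.pkl').name
        commands.append(['python', '-m', 'citylearn', 'learn', schema, '-f', out])
    return commands

# placeholder schema paths for illustration
for cmd in build_commands(['schemas/site_a.json', 'schemas/site_b.json']):
    print(' '.join(cmd))
```

Each command list can then be handed to `subprocess.Popen` or written into a job-array submission script.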

The CLI documentation is returned by executing the following in a shell terminal or PowerShell:

!python -m citylearn -h
usage: citylearn [-h] [--version] {learn} ...

An open source OpenAI Gym environment for the implementation of Multi-Agent
Reinforcement Learning (RL) for building energy coordination and demand
response in cities.

optional arguments:
  -h, --help  show this help message and exit
  --version   show program's version number and exit


Running a Simulation using the CLI

The learn command is used to run simulations. It has one positional argument, schema, which is either a named dataset or a path to a schema file. All other arguments are optional and have default values. The completed simulation is saved as a pickled object to the file specified by the optional -f argument. The -k flag indicates that a list of the environment state at the end of each episode should be saved as well, and the -d argument sets the directory this history is written to. The -n flag makes actions in the final episode deterministic, and the -l argument is an integer that sets the logging level. The output log includes action and reward values at each time step.

!python -m citylearn learn -h
usage: citylearn learn [-h] [-e EPISODES] [-f FILEPATH] [-k]
                       [-d ENV_HISTORY_DIRECTORY] [-n] [-l LOGGING_LEVEL]

Run simulation.

positional arguments:
  schema                CityLearn dataset name or schema path.

optional arguments:
  -h, --help            show this help message and exit
  -e EPISODES, --episodes EPISODES
                        Number of training episodes. (default: None)
  -f FILEPATH, --filepath FILEPATH
                        Filepath to write simulation pickle object to upon
                        completion. (default: citylearn_learning.pkl)
  -k, --keep_env_history
                        Indicator to store environment state at the end of
                        each episode. (default: False)
  -d ENV_HISTORY_DIRECTORY, --env_history_directory ENV_HISTORY_DIRECTORY
                        Directory to save environment history to. (default:
  -n, --deterministic_finish
                        Take deterministic actions in the final episode.
                        (default: False)
  -l LOGGING_LEVEL, --logging_level LOGGING_LEVEL
                        Logging level where increasing the level silences
                        lower level information. (default: 50)

The example below runs a one-episode simulation using the citylearn_challenge_2022_phase_1 dataset at a logging level of 100 and stores the environment history.

!python -m citylearn learn citylearn_challenge_2022_phase_1 -k -l 100 -e 1

Reading a Saved Simulation

After a CLI simulation completes, the pickled object can be read into memory and its env property can be used just like the env from a simulation run in a Python script:

import pickle

filepath = 'citylearn_learning.pkl'

# load the pickled simulation object written by the CLI
with open(filepath, 'rb') as f:
    model = pickle.load(f)

# KPI table: one column per building plus a district-level aggregate
kpis = model.env.evaluate().pivot(index='cost_function', columns='name', values='value')
kpis = kpis.dropna(how='all')
kpis
name                           Building_1  Building_2  Building_3  Building_4  Building_5  District
annual_peak_average                   NaN         NaN         NaN         NaN         NaN  1.051783
carbon_emissions_total           1.133795    1.157876    1.272483    1.181342    1.185891  1.186277
cost_total                       1.043202    1.063490    1.145188    1.097198    1.074609  1.084738
daily_peak_average                    NaN         NaN         NaN         NaN         NaN  1.149743
discomfort_delta_average         0.000000    0.000000    0.000000    0.000000    0.000000  0.000000
discomfort_delta_maximum         0.000000    0.000000    0.000000    0.000000    0.000000  0.000000
discomfort_delta_minimum         0.000000    0.000000    0.000000    0.000000    0.000000  0.000000
electricity_consumption_total    1.184364    1.214805    1.346096    1.236717    1.262303  1.248857
one_minus_load_factor_average         NaN         NaN         NaN         NaN         NaN  0.987430
ramping_average                       NaN         NaN         NaN         NaN         NaN  1.162248
zero_net_energy                  1.117844    1.101070    1.293734    1.084827    1.144977  1.148490
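These KPIs are normalized against a baseline, so values above 1 generally indicate worse performance than the baseline. A rough post-processing sketch, using a few district-level values copied from the table above as synthetic input:

```python
import pandas as pd

# a few district-level KPIs copied from the table above
kpis = pd.Series(
    {
        'carbon_emissions_total': 1.186277,
        'cost_total': 1.084738,
        'one_minus_load_factor_average': 0.987430,
    },
    name='District',
)

# flag KPIs that regressed relative to the baseline (normalized value > 1)
worse = sorted(kpis[kpis > 1].index)
print(worse)
```

The same filtering applies column-wise to the full building-level DataFrame returned by evaluate.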