citylearn.wrappers module
- class citylearn.wrappers.DiscreteActionWrapper(env: citylearn.citylearn.CityLearnEnv, bin_sizes: Optional[List[Mapping[str, int]]] = None, default_bin_size: Optional[int] = None)[source]
Bases: gym.core.ActionWrapper
Wrapper for action space discretization.
- Parameters
env (CityLearnEnv) – CityLearn environment.
bin_sizes (List[Mapping[str, int]], optional) – The number of bins for each active action in each building.
default_bin_size (int, default = 10) – The default number of bins if bin_sizes is unspecified for any active building action.
- property action_space: List[gym.spaces.multi_discrete.MultiDiscrete]
Returns action space for discretized actions.
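A minimal usage sketch; the dataset name passed to CityLearnEnv is illustrative and any valid schema works:
```python
from citylearn.citylearn import CityLearnEnv
from citylearn.wrappers import DiscreteActionWrapper

# Any CityLearn schema or named dataset works here; this name is illustrative.
env = CityLearnEnv('citylearn_challenge_2022_phase_1')
env = DiscreteActionWrapper(env, default_bin_size=10)

# One MultiDiscrete space per building: each active action is now selected
# as an integer bin index instead of a continuous value.
print(env.action_space)
```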
- class citylearn.wrappers.DiscreteObservationWrapper(env: citylearn.citylearn.CityLearnEnv, bin_sizes: Optional[List[Mapping[str, int]]] = None, default_bin_size: Optional[int] = None)[source]
Bases: gym.core.ObservationWrapper
Wrapper for observation space discretization.
- Parameters
env (CityLearnEnv) – CityLearn environment.
bin_sizes (List[Mapping[str, int]], optional) – The number of bins for each active observation in each building.
default_bin_size (int, default = 10) – The default number of bins if bin_sizes is unspecified for any active building observation.
- observation(observations: List[List[float]]) → numpy.ndarray [source]
Returns discretized observations.
- property observation_space: List[gym.spaces.multi_discrete.MultiDiscrete]
Returns observation space for discretized observations.
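A sketch of per-building bin sizes. The observation names available depend on which observations the schema activates; hour is a standard CityLearn observation, and any observation not listed falls back to default_bin_size:
```python
from citylearn.citylearn import CityLearnEnv
from citylearn.wrappers import DiscreteObservationWrapper

env = CityLearnEnv('citylearn_challenge_2022_phase_1')

# One mapping per building; keys are active observation names in the schema.
bin_sizes = [{'hour': 24} for _ in env.buildings]
env = DiscreteObservationWrapper(env, bin_sizes=bin_sizes, default_bin_size=10)

observations = env.reset()  # values are now integer bin indices
```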
- class citylearn.wrappers.DiscreteSpaceWrapper(env: citylearn.citylearn.CityLearnEnv, observation_bin_sizes: Optional[List[Mapping[str, int]]] = None, action_bin_sizes: Optional[List[Mapping[str, int]]] = None, default_observation_bin_size: Optional[int] = None, default_action_bin_size: Optional[int] = None)[source]
Bases: gym.core.Wrapper
Wrapper for observation and action space discretization.
Wraps env in citylearn.wrappers.DiscreteObservationWrapper and citylearn.wrappers.DiscreteActionWrapper.
- Parameters
env (CityLearnEnv) – CityLearn environment.
observation_bin_sizes (List[Mapping[str, int]], optional) – The number of bins for each active observation in each building.
action_bin_sizes (List[Mapping[str, int]], optional) – The number of bins for each active action in each building.
default_observation_bin_size (int, default = 10) – The default number of bins if bin_sizes is unspecified for any active building observation.
default_action_bin_size (int, default = 10) – The default number of bins if bin_sizes is unspecified for any active building action.
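A sketch of the combined wrapper, relying on the defaults for both spaces:
```python
from citylearn.citylearn import CityLearnEnv
from citylearn.wrappers import DiscreteSpaceWrapper

env = CityLearnEnv('citylearn_challenge_2022_phase_1')
env = DiscreteSpaceWrapper(
    env,
    default_observation_bin_size=12,
    default_action_bin_size=10,
)

# Both spaces are now lists of MultiDiscrete spaces, one per building.
print(env.observation_space)
print(env.action_space)
```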
- class citylearn.wrappers.NormalizedObservationWrapper(env: citylearn.citylearn.CityLearnEnv)[source]
Bases: gym.core.ObservationWrapper
Wrapper for min-max and periodic normalization of observations.
Temporal observations including hour, day_type and month are periodically normalized using sine/cosine transformations, and then all observations are min-max normalized between 0 and 1.
- Parameters
env (CityLearnEnv) – CityLearn environment.
- observation(observations: List[List[float]]) → List[List[float]] [source]
Returns normalized observations.
- property observation_space: List[gym.spaces.box.Box]
Returns observation space for normalized observations.
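A quick sketch to inspect the normalized range, again with an illustrative dataset name:
```python
from citylearn.citylearn import CityLearnEnv
from citylearn.wrappers import NormalizedObservationWrapper

env = CityLearnEnv('citylearn_challenge_2022_phase_1')
env = NormalizedObservationWrapper(env)

# hour, day_type and month are sine/cosine encoded; everything else is
# min-max scaled, so all returned values should lie in [0, 1].
observations = env.reset()
print(min(min(o) for o in observations), max(max(o) for o in observations))
```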
- class citylearn.wrappers.StableBaselines3ActionWrapper(env: citylearn.citylearn.CityLearnEnv)[source]
Bases: gym.core.ActionWrapper
Action wrapper for stable-baselines3 algorithms.
Wraps actions so that they are returned in a 1-dimensional numpy array. This wrapper is only compatible when the environment is controlled by a central agent, i.e., citylearn.citylearn.CityLearnEnv.central_agent = True.
- Parameters
env (CityLearnEnv) – CityLearn environment.
- action(actions: List[float]) → List[List[float]] [source]
Returns actions parsed into the nested list format the underlying environment expects.
- property action_space: gym.spaces.box.Box
Returns a single spaces.Box object.
- class citylearn.wrappers.StableBaselines3ObservationWrapper(env: citylearn.citylearn.CityLearnEnv)[source]
Bases: gym.core.ObservationWrapper
Observation wrapper for stable-baselines3 algorithms.
Wraps observations so that they are returned in a 1-dimensional numpy array. This wrapper is only compatible when the environment is controlled by a central agent, i.e., citylearn.citylearn.CityLearnEnv.central_agent = True.
- Parameters
env (CityLearnEnv) – CityLearn environment.
- observation(observations: List[List[float]]) → numpy.ndarray [source]
Returns observations as a 1-dimensional numpy array.
- property observation_space: gym.spaces.box.Box
Returns a single spaces.Box object.
- class citylearn.wrappers.StableBaselines3RewardWrapper(env: citylearn.citylearn.CityLearnEnv)[source]
Bases: gym.core.RewardWrapper
Reward wrapper for stable-baselines3 algorithms.
Wraps rewards so that they are returned as a float value. This wrapper is only compatible when the environment is controlled by a central agent, i.e., citylearn.citylearn.CityLearnEnv.central_agent = True.
- Parameters
env (CityLearnEnv) – CityLearn environment.
- class citylearn.wrappers.StableBaselines3Wrapper(env: citylearn.citylearn.CityLearnEnv)[source]
Bases: gym.core.Wrapper
Wrapper for stable-baselines3 algorithms.
Wraps env in citylearn.wrappers.StableBaselines3ObservationWrapper, citylearn.wrappers.StableBaselines3ActionWrapper and citylearn.wrappers.StableBaselines3RewardWrapper.
- Parameters
env (CityLearnEnv) – CityLearn environment.
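A minimal end-to-end training sketch, assuming a stable-baselines3 release compatible with the gym API these wrappers target (SB3 < 2.0); the dataset name is illustrative:
```python
from stable_baselines3 import SAC
from citylearn.citylearn import CityLearnEnv
from citylearn.wrappers import NormalizedObservationWrapper, StableBaselines3Wrapper

# central_agent=True is required by the StableBaselines3* wrappers.
env = CityLearnEnv('citylearn_challenge_2022_phase_1', central_agent=True)
env = NormalizedObservationWrapper(env)  # optional, but common before SB3
env = StableBaselines3Wrapper(env)

model = SAC('MlpPolicy', env)
model.learn(total_timesteps=1000)
```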
- class citylearn.wrappers.TabularQLearningActionWrapper(env: citylearn.citylearn.CityLearnEnv, bin_sizes: Optional[List[Mapping[str, int]]] = None, default_bin_size: Optional[int] = None)[source]
Bases: gym.core.ActionWrapper
Action wrapper for the citylearn.agents.q_learning.TabularQLearning agent.
Wraps env in citylearn.wrappers.DiscreteActionWrapper.
- Parameters
env (CityLearnEnv) – CityLearn environment.
bin_sizes (List[Mapping[str, int]], optional) – The number of bins for each active action in each building.
default_bin_size (int, default = 10) – The default number of bins if bin_sizes is unspecified for any active building action.
- property action_space: List[gym.spaces.discrete.Discrete]
Returns action space for discretized actions.
- class citylearn.wrappers.TabularQLearningObservationWrapper(env: citylearn.citylearn.CityLearnEnv, bin_sizes: Optional[List[Mapping[str, int]]] = None, default_bin_size: Optional[int] = None)[source]
Bases: gym.core.ObservationWrapper
Observation wrapper for the citylearn.agents.q_learning.TabularQLearning agent.
Wraps env in citylearn.wrappers.DiscreteObservationWrapper.
- Parameters
env (CityLearnEnv) – CityLearn environment.
bin_sizes (List[Mapping[str, int]], optional) – The number of bins for each active observation in each building.
default_bin_size (int, default = 10) – The default number of bins if bin_sizes is unspecified for any active building observation.
- observation(observations: List[List[int]]) → List[List[int]] [source]
Returns discretized observations.
- property observation_space: List[gym.spaces.discrete.Discrete]
Returns observation space for discretized observations.
- class citylearn.wrappers.TabularQLearningWrapper(env: citylearn.citylearn.CityLearnEnv, observation_bin_sizes: Optional[List[Mapping[str, int]]] = None, action_bin_sizes: Optional[List[Mapping[str, int]]] = None, default_observation_bin_size: Optional[int] = None, default_action_bin_size: Optional[int] = None)[source]
Bases: gym.core.Wrapper
Wrapper for the citylearn.agents.q_learning.TabularQLearning agent.
Wraps env in citylearn.wrappers.DiscreteObservationWrapper and citylearn.wrappers.DiscreteActionWrapper.
- Parameters
env (CityLearnEnv) – CityLearn environment.
observation_bin_sizes (List[Mapping[str, int]], optional) – The number of bins for each active observation in each building.
action_bin_sizes (List[Mapping[str, int]], optional) – The number of bins for each active action in each building.
default_observation_bin_size (int, default = 10) – The default number of bins if bin_sizes is unspecified for any active building observation.
default_action_bin_size (int, default = 10) – The default number of bins if bin_sizes is unspecified for any active building action.
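A sketch pairing the wrapper with the agent it targets, assuming the agent follows CityLearn's standard agent interface (env as first argument, learn(episodes=...)); the dataset name and bin sizes are illustrative:
```python
from citylearn.citylearn import CityLearnEnv
from citylearn.wrappers import TabularQLearningWrapper
from citylearn.agents.q_learning import TabularQLearning

env = CityLearnEnv('citylearn_challenge_2022_phase_1')

# Coarse bins keep the Q-table tractable: its size grows with the product of
# the observation bin sizes times the product of the action bin sizes.
env = TabularQLearningWrapper(
    env,
    default_observation_bin_size=12,
    default_action_bin_size=5,
)

model = TabularQLearning(env)
model.learn(episodes=10)
```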