citylearn.reward_function module
- class citylearn.reward_function.BuildingDynamicsReward(env: citylearn.citylearn.CityLearnEnv)[source]
Bases: citylearn.reward_function.RewardFunction
- calculate() → List[float] [source]
The returned reward assumes that the building agents act independently of one another, without sharing information through the reward.
Recommended for use with SAC controllers.
Notes
Reward value is calculated as \([\textrm{min}(-e_0^3, 0), \dots, \textrm{min}(-e_n^3, 0)]\) where \(e\) is electricity_consumption and \(n\) is the number of agents.
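A minimal sketch of the formula above, assuming one net electricity consumption value per building at the current time step (the function name is illustrative, not the library's internal implementation):

```python
from typing import List

def cubed_penalty_reward(electricity_consumption: List[float]) -> List[float]:
    """Sketch of the documented formula: min(-e_i^3, 0) for each agent i.

    `electricity_consumption` is assumed to hold one net electricity
    consumption value per building at the current time step.
    """
    # Negative consumption (net export) would yield a positive cube,
    # so the min(..., 0) clip keeps every reward non-positive.
    return [min(-e**3, 0.0) for e in electricity_consumption]
```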
- class citylearn.reward_function.IndependentSACReward(env: citylearn.citylearn.CityLearnEnv)[source]
Bases: citylearn.reward_function.RewardFunction
- calculate() → List[float] [source]
The returned reward assumes that the building agents act independently of one another, without sharing information through the reward.
Recommended for use with SAC controllers.
Notes
Reward value is calculated as \([\textrm{min}(-e_0^3, 0), \dots, \textrm{min}(-e_n^3, 0)]\) where \(e\) is electricity_consumption and \(n\) is the number of agents.
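A usage sketch, assuming CityLearnEnv accepts a named dataset schema and exposes a settable reward_function property (the dataset name below is illustrative):

```python
from citylearn.citylearn import CityLearnEnv
from citylearn.reward_function import IndependentSACReward

# Dataset name is illustrative; any valid CityLearn schema works here.
env = CityLearnEnv('citylearn_challenge_2022_phase_1')
env.reward_function = IndependentSACReward(env)

# One reward value per building agent at the current time step.
rewards = env.reward_function.calculate()
```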
- class citylearn.reward_function.MARL(env: citylearn.citylearn.CityLearnEnv)[source]
Bases: citylearn.reward_function.RewardFunction
- class citylearn.reward_function.RewardFunction(env: citylearn.citylearn.CityLearnEnv, **kwargs)[source]
Bases: object
- calculate() → List[float] [source]
Calculates the default reward.
Notes
Reward value is calculated as \([\textrm{min}(-e_0, 0), \dots, \textrm{min}(-e_n, 0)]\) where \(e\) is electricity_consumption and \(n\) is the number of agents.
- property env: citylearn.citylearn.CityLearnEnv
Simulation environment.
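A custom reward can be defined by subclassing RewardFunction and overriding calculate(), as the classes above do. Below is a minimal sketch reproducing the documented default, min(-e, 0) per building; ClippedConsumptionReward is a hypothetical name, and the per-building attribute access pattern is an assumption:

```python
from typing import List

from citylearn.citylearn import CityLearnEnv
from citylearn.reward_function import RewardFunction

class ClippedConsumptionReward(RewardFunction):
    """Illustrative custom reward reproducing the documented default:
    min(-e, 0) per building."""

    def __init__(self, env: CityLearnEnv):
        super().__init__(env)

    def calculate(self) -> List[float]:
        # Negate consumption so higher usage yields a more negative reward,
        # then clip at zero so net-exporting buildings are not rewarded.
        return [
            min(-b.net_electricity_consumption[b.time_step], 0.0)
            for b in self.env.buildings
        ]
```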