# Tianshou/test/base/test_collector.py

from collections.abc import Callable, Sequence
from test.base.env import MoveToRightEnv, NXEnv
from typing import Any
import gymnasium as gym
import numpy as np
import pytest
import tqdm
from tianshou.data import (
AsyncCollector,
Batch,
CachedReplayBuffer,
Collector,
PrioritizedReplayBuffer,
ReplayBuffer,
VectorReplayBuffer,
)
from tianshou.data.batch import BatchProtocol
from tianshou.data.types import ObsBatchProtocol, RolloutBatchProtocol
from tianshou.env import DummyVectorEnv, SubprocVectorEnv
from tianshou.policy import BasePolicy, TrainingStats
try:
import envpool
except ImportError:
envpool = None
class MaxActionPolicy(BasePolicy):
def __init__(
self,
action_space: gym.spaces.Space | None = None,
dict_state: bool = False,
need_state: bool = True,
action_shape: Sequence[int] | int | None = None,
) -> None:
"""Mock policy for testing, will always return an array of ones of the shape of the action space.
Note that this doesn't make much sense for discrete action space (the output is then intepreted as
logits, meaning all actions would be equally likely).
        :param action_space: the action space of the environment. If None, a dummy Box space will be used.
        :param dict_state: whether the observation of the environment is a dict.
        :param need_state: whether the policy needs the hidden state (e.g. for an RNN).
        :param action_shape: the shape of the action to return; if None, it is inferred from the batch size.
        """
action_space = action_space or gym.spaces.Box(-1, 1, (1,))
super().__init__(action_space=action_space)
self.dict_state = dict_state
self.need_state = need_state
self.action_shape = action_shape
def forward(
self,
batch: ObsBatchProtocol,
state: dict | BatchProtocol | np.ndarray | None = None,
**kwargs: Any,
) -> Batch:
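        # Mimic a recurrent policy: create or advance a dummy hidden state when need_state is set.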
if self.need_state:
if state is None:
state = np.zeros((len(batch.obs), 2))
elif isinstance(state, np.ndarray | BatchProtocol):
state += np.int_(1)
elif isinstance(state, dict) and state.get("hidden") is not None:
state["hidden"] += np.int_(1)
if self.dict_state:
if self.action_shape:
action_shape = self.action_shape
elif isinstance(batch.obs, BatchProtocol):
action_shape = len(batch.obs["index"])
else:
action_shape = len(batch.obs)
return Batch(act=np.ones(action_shape), state=state)
action_shape = self.action_shape if self.action_shape else len(batch.obs)
return Batch(act=np.ones(action_shape), state=state)
def learn(self, batch: RolloutBatchProtocol, *args: Any, **kwargs: Any) -> TrainingStats:
raise NotImplementedError
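

# End-to-end Collector checks: collect by n_step and by n_episode from a single env,
# a SubprocVectorEnv and a DummyVectorEnv, and verify the exact buffer contents
# (obs, obs_next, rew, info) written by each collection call.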
def test_collector() -> None:
env_fns = [lambda x=i: MoveToRightEnv(size=x, sleep=0) for i in [2, 3, 4, 5]]
subproc_venv_4_envs = SubprocVectorEnv(env_fns)
dummy_venv_4_envs = DummyVectorEnv(env_fns)
policy = MaxActionPolicy()
single_env = env_fns[0]()
c_single_env = Collector(
policy,
single_env,
ReplayBuffer(size=100),
)
c_single_env.reset()
c_single_env.collect(n_step=3)
assert len(c_single_env.buffer) == 3
    # TODO: direct attr access is an arcane way of using the buffer; it should never be done.
    # The placeholders for entries are all zeros, so buffer.obs is an array holding the 3 collected
    # observations followed by 97 zeros.
    # buffer[:], however, has all attributes with length three: the non-filled entries are dropped there.
    # For the single env we start with obs=0, obs_next=1 and move to obs=1, obs_next=2;
    # the env is then done and resets, so one more step gives obs=0, obs_next=1 again.
    # The trailing 0 checked below is just a zero placeholder exposed by direct attr access.
assert np.allclose(c_single_env.buffer.obs[:4, 0], [0, 1, 0, 0])
obs_next = c_single_env.buffer[:].obs_next[..., 0]
assert isinstance(obs_next, np.ndarray)
assert np.allclose(obs_next, [1, 2, 1])
keys = np.zeros(100)
keys[:3] = 1
assert np.allclose(c_single_env.buffer.info["key"], keys)
for e in c_single_env.buffer.info["env"][:3]:
assert isinstance(e, MoveToRightEnv)
assert np.allclose(c_single_env.buffer.info["env_id"], 0)
rews = np.zeros(100)
rews[:3] = [0, 1, 0]
assert np.allclose(c_single_env.buffer.rew, rews)
    # At this point the buffer contains obs 0 -> 1 -> 0 (3 entries) and the env is mid-episode at obs=1.
    # Collecting 3 episodes on top of that: one more step finishes the in-progress episode,
    # then two full episodes of length 2 follow.
    # obs:      0, 1, 0 | 1, 0, 1, 0, 1
    # obs_next: 1, 2, 1 | 2, 1, 2, 1, 2
    # In total, we will have 3 + 5 = 8 entries in the buffer.
c_single_env.collect(n_episode=3)
assert len(c_single_env.buffer) == 8
assert np.allclose(c_single_env.buffer.obs[:10, 0], [0, 1, 0, 1, 0, 1, 0, 1, 0, 0])
obs_next = c_single_env.buffer[:].obs_next[..., 0]
assert isinstance(obs_next, np.ndarray)
assert np.allclose(obs_next, [1, 2, 1, 2, 1, 2, 1, 2])
assert np.allclose(c_single_env.buffer.info["key"][:8], 1)
for e in c_single_env.buffer.info["env"][:8]:
assert isinstance(e, MoveToRightEnv)
assert np.allclose(c_single_env.buffer.info["env_id"][:8], 0)
assert np.allclose(c_single_env.buffer.rew[:8], [0, 1, 0, 1, 0, 1, 0, 1])
c_single_env.collect(n_step=3, random=True)
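
    # Vectorized case: 4 subprocess envs of sizes 2..5. The VectorReplayBuffer splits its
    # 100 slots into 4 sub-buffers of 25, so env k writes at offsets 25 * k.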
c_subproc_venv_4_envs = Collector(
policy,
subproc_venv_4_envs,
VectorReplayBuffer(total_size=100, buffer_num=4),
)
c_subproc_venv_4_envs.reset()
# Collect some steps
c_subproc_venv_4_envs.collect(n_step=8)
obs = np.zeros(100)
valid_indices = [0, 1, 25, 26, 50, 51, 75, 76]
obs[valid_indices] = [0, 1, 0, 1, 0, 1, 0, 1]
assert np.allclose(c_subproc_venv_4_envs.buffer.obs[:, 0], obs)
obs_next = c_subproc_venv_4_envs.buffer[:].obs_next[..., 0]
assert isinstance(obs_next, np.ndarray)
assert np.allclose(obs_next, [1, 2, 1, 2, 1, 2, 1, 2])
keys = np.zeros(100)
keys[valid_indices] = [1, 1, 1, 1, 1, 1, 1, 1]
assert np.allclose(c_subproc_venv_4_envs.buffer.info["key"], keys)
for e in c_subproc_venv_4_envs.buffer.info["env"][valid_indices]:
assert isinstance(e, MoveToRightEnv)
env_ids = np.zeros(100)
env_ids[valid_indices] = [0, 0, 1, 1, 2, 2, 3, 3]
assert np.allclose(c_subproc_venv_4_envs.buffer.info["env_id"], env_ids)
rews = np.zeros(100)
rews[valid_indices] = [0, 1, 0, 0, 0, 0, 0, 0]
assert np.allclose(c_subproc_venv_4_envs.buffer.rew, rews)
    # We previously collected 8 steps, 2 from each env; now we collect 4 episodes.
    # Each env contributes one episode, adding 2 (the first env had already finished and restarted),
    # 1, 2 and 3 steps respectively, so we end up with 8 + (2 + 1 + 2 + 3) = 16 steps.
c_subproc_venv_4_envs.collect(n_episode=4)
assert len(c_subproc_venv_4_envs.buffer) == 16
valid_indices = [2, 3, 27, 52, 53, 77, 78, 79]
obs[valid_indices] = [0, 1, 2, 2, 3, 2, 3, 4]
assert np.allclose(c_subproc_venv_4_envs.buffer.obs[:, 0], obs)
obs_next = c_subproc_venv_4_envs.buffer[:].obs_next[..., 0]
assert isinstance(obs_next, np.ndarray)
assert np.allclose(
obs_next,
[1, 2, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5],
)
keys[valid_indices] = [1, 1, 1, 1, 1, 1, 1, 1]
assert np.allclose(c_subproc_venv_4_envs.buffer.info["key"], keys)
for e in c_subproc_venv_4_envs.buffer.info["env"][valid_indices]:
assert isinstance(e, MoveToRightEnv)
env_ids[valid_indices] = [0, 0, 1, 2, 2, 3, 3, 3]
assert np.allclose(c_subproc_venv_4_envs.buffer.info["env_id"], env_ids)
rews[valid_indices] = [0, 1, 1, 0, 1, 0, 0, 1]
assert np.allclose(c_subproc_venv_4_envs.buffer.rew, rews)
c_subproc_venv_4_envs.collect(n_episode=4, random=True)
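
    # Same scenario with a DummyVectorEnv; depending on which env finishes the 7th episode,
    # two different buffer layouts are possible, both accepted below.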
c_dummy_venv_4_envs = Collector(
policy,
dummy_venv_4_envs,
VectorReplayBuffer(total_size=100, buffer_num=4),
)
c_dummy_venv_4_envs.reset()
c_dummy_venv_4_envs.collect(n_episode=7)
obs1 = obs.copy()
obs1[[4, 5, 28, 29, 30]] = [0, 1, 0, 1, 2]
obs2 = obs.copy()
obs2[[28, 29, 30, 54, 55, 56, 57]] = [0, 1, 2, 0, 1, 2, 3]
c2obs = c_dummy_venv_4_envs.buffer.obs[:, 0]
assert np.all(c2obs == obs1) or np.all(c2obs == obs2)
c_dummy_venv_4_envs.reset_env()
c_dummy_venv_4_envs.reset_buffer()
assert c_dummy_venv_4_envs.collect(n_episode=8).n_collected_episodes == 8
valid_indices = [4, 5, 28, 29, 30, 54, 55, 56, 57]
obs[valid_indices] = [0, 1, 0, 1, 2, 0, 1, 2, 3]
assert np.all(c_dummy_venv_4_envs.buffer.obs[:, 0] == obs)
keys[valid_indices] = [1, 1, 1, 1, 1, 1, 1, 1, 1]
assert np.allclose(c_dummy_venv_4_envs.buffer.info["key"], keys)
for e in c_dummy_venv_4_envs.buffer.info["env"][valid_indices]:
assert isinstance(e, MoveToRightEnv)
env_ids[valid_indices] = [0, 0, 1, 1, 1, 2, 2, 2, 2]
assert np.allclose(c_dummy_venv_4_envs.buffer.info["env_id"], env_ids)
rews[valid_indices] = [0, 1, 0, 0, 1, 0, 0, 0, 1]
assert np.allclose(c_dummy_venv_4_envs.buffer.rew, rews)
c_dummy_venv_4_envs.collect(n_episode=4, random=True)
# test corner case
with pytest.raises(ValueError):
Collector(policy, dummy_venv_4_envs, ReplayBuffer(10))
with pytest.raises(ValueError):
Collector(policy, dummy_venv_4_envs, PrioritizedReplayBuffer(10, 0.5, 0.5))
with pytest.raises(ValueError):
c_dummy_venv_4_envs.collect()
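
    # NXEnv observations end up with dtype=object in the buffer; a factory closure is used
    # so that each env binds its own size and obs_type.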
def get_env_factory(i: int, t: str) -> Callable[[], NXEnv]:
return lambda: NXEnv(i, t)
# test NXEnv
for obs_type in ["array", "object"]:
envs = SubprocVectorEnv([get_env_factory(i=i, t=obs_type) for i in [5, 10, 15, 20]])
        c_subproc_new = Collector(policy, envs, VectorReplayBuffer(total_size=100, buffer_num=4))
        c_subproc_new.reset()
        c_subproc_new.collect(n_step=6)
        assert c_subproc_new.buffer.obs.dtype == object
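

# Fixture: an AsyncCollector over 4 subprocess envs of lengths 2..5 with random per-step
# sleeps, writing into a VectorReplayBuffer with one sub-buffer (size 60) per env.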
@pytest.fixture()
def async_collector_and_env_lens() -> tuple[AsyncCollector, list[int]]:
env_lens = [2, 3, 4, 5]
env_fns = [lambda x=i: MoveToRightEnv(size=x, sleep=0.001, random_sleep=True) for i in env_lens]
venv = SubprocVectorEnv(env_fns, wait_num=len(env_fns) - 1)
policy = MaxActionPolicy()
bufsize = 60
async_collector = AsyncCollector(
policy,
venv,
VectorReplayBuffer(total_size=bufsize * 4, buffer_num=4),
)
async_collector.reset()
return async_collector, env_lens
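

# With asynchronous stepping, collect() may return more episodes/steps than requested,
# hence the >= assertions throughout this class.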
class TestAsyncCollector:
def test_collect_without_argument_gives_error(
self,
async_collector_and_env_lens: tuple[AsyncCollector, list[int]],
) -> None:
c1, env_lens = async_collector_and_env_lens
with pytest.raises(ValueError):
c1.collect()
def test_collect_one_episode_async(
self,
async_collector_and_env_lens: tuple[AsyncCollector, list[int]],
) -> None:
c1, env_lens = async_collector_and_env_lens
result = c1.collect(n_episode=1)
assert result.n_collected_episodes >= 1
def test_enough_episodes_two_collection_cycles_n_episode_without_reset(
self,
async_collector_and_env_lens: tuple[AsyncCollector, list[int]],
) -> None:
c1, env_lens = async_collector_and_env_lens
n_episode = 2
result_c1 = c1.collect(n_episode=n_episode, reset_before_collect=False)
assert result_c1.n_collected_episodes >= n_episode
result_c2 = c1.collect(n_episode=n_episode, reset_before_collect=False)
assert result_c2.n_collected_episodes >= n_episode
def test_enough_episodes_two_collection_cycles_n_episode_with_reset(
self,
async_collector_and_env_lens: tuple[AsyncCollector, list[int]],
) -> None:
c1, env_lens = async_collector_and_env_lens
n_episode = 2
result_c1 = c1.collect(n_episode=n_episode, reset_before_collect=True)
assert result_c1.n_collected_episodes >= n_episode
result_c2 = c1.collect(n_episode=n_episode, reset_before_collect=True)
assert result_c2.n_collected_episodes >= n_episode
def test_enough_episodes_and_correct_obs_indices_and_obs_next_iterative_collection_cycles_n_episode(
self,
async_collector_and_env_lens: tuple[AsyncCollector, list[int]],
) -> None:
c1, env_lens = async_collector_and_env_lens
ptr = [0, 0, 0, 0]
bufsize = 60
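        # ptr[i] is the circular write position inside sub-buffer i (size 60); it lets us
        # locate exactly which slots the latest collect() call filled.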
for n_episode in tqdm.trange(1, 30, desc="test async n_episode"):
result = c1.collect(n_episode=n_episode)
assert result.n_collected_episodes >= n_episode
# check buffer data, obs and obs_next, env_id
for i, count in enumerate(np.bincount(result.lens, minlength=6)[2:]):
env_len = i + 2
total = env_len * count
indices = np.arange(ptr[i], ptr[i] + total) % bufsize
ptr[i] = (ptr[i] + total) % bufsize
seq = np.arange(env_len)
buf = c1.buffer.buffers[i]
assert np.all(buf.info.env_id[indices] == i)
assert np.all(buf.obs[indices].reshape(count, env_len) == seq)
assert np.all(buf.obs_next[indices].reshape(count, env_len) == seq + 1)
def test_enough_episodes_and_correct_obs_indices_and_obs_next_iterative_collection_cycles_n_step(
self,
async_collector_and_env_lens: tuple[AsyncCollector, list[int]],
) -> None:
c1, env_lens = async_collector_and_env_lens
bufsize = 60
ptr = [0, 0, 0, 0]
for n_step in tqdm.trange(1, 15, desc="test async n_step"):
result = c1.collect(n_step=n_step)
assert result.n_collected_steps >= n_step
for i, count in enumerate(np.bincount(result.lens, minlength=6)[2:]):
env_len = i + 2
total = env_len * count
indices = np.arange(ptr[i], ptr[i] + total) % bufsize
ptr[i] = (ptr[i] + total) % bufsize
seq = np.arange(env_len)
buf = c1.buffer.buffers[i]
assert np.all(buf.info.env_id[indices] == i)
assert np.all(buf.obs[indices].reshape(count, env_len) == seq)
assert np.all(buf.obs_next[indices].reshape(count, env_len) == seq + 1)
def test_enough_episodes_and_correct_obs_indices_and_obs_next_iterative_collection_cycles_first_n_episode_then_n_step(
self,
async_collector_and_env_lens: tuple[AsyncCollector, list[int]],
) -> None:
c1, env_lens = async_collector_and_env_lens
bufsize = 60
ptr = [0, 0, 0, 0]
for n_episode in tqdm.trange(1, 30, desc="test async n_episode"):
result = c1.collect(n_episode=n_episode)
assert result.n_collected_episodes >= n_episode
# check buffer data, obs and obs_next, env_id
for i, count in enumerate(np.bincount(result.lens, minlength=6)[2:]):
env_len = i + 2
total = env_len * count
indices = np.arange(ptr[i], ptr[i] + total) % bufsize
ptr[i] = (ptr[i] + total) % bufsize
seq = np.arange(env_len)
buf = c1.buffer.buffers[i]
assert np.all(buf.info.env_id[indices] == i)
assert np.all(buf.obs[indices].reshape(count, env_len) == seq)
assert np.all(buf.obs_next[indices].reshape(count, env_len) == seq + 1)
        # test async n_step; by now the buffer should be full of data, so no bincount bookkeeping as above
for n_step in tqdm.trange(1, 15, desc="test async n_step"):
result = c1.collect(n_step=n_step)
assert result.n_collected_steps >= n_step
for i in range(4):
env_len = i + 2
seq = np.arange(env_len)
buf = c1.buffer.buffers[i]
assert np.all(buf.info.env_id == i)
assert np.all(buf.obs.reshape(-1, env_len) == seq)
assert np.all(buf.obs_next.reshape(-1, env_len) == seq + 1)
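

# Dict observation spaces: the policy indexes batch.obs["index"] and the buffer must store
# the dict fields (index, rand) correctly, including when frame stacking is enabled.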
def test_collector_with_dict_state() -> None:
env = MoveToRightEnv(size=5, sleep=0, dict_state=True)
policy = MaxActionPolicy(dict_state=True)
c0 = Collector(policy, env, ReplayBuffer(size=100))
c0.reset()
c0.collect(n_step=3)
c0.collect(n_episode=2)
    assert len(c0.buffer) == 10  # 3 steps + the 2 remaining steps of that episode + one full 5-step episode
env_fns = [lambda x=i: MoveToRightEnv(size=x, sleep=0, dict_state=True) for i in [2, 3, 4, 5]]
envs = DummyVectorEnv(env_fns)
envs.seed(666)
obs, info = envs.reset()
assert not np.isclose(obs[0]["rand"], obs[1]["rand"])
c1 = Collector(
policy,
envs,
VectorReplayBuffer(total_size=100, buffer_num=4),
)
c1.reset()
c1.collect(n_step=12)
result = c1.collect(n_episode=8)
assert result.n_collected_episodes == 8
lens = np.bincount(result.lens)
assert (
result.n_collected_steps == 21
and np.all(lens == [0, 0, 2, 2, 2, 2])
or result.n_collected_steps == 20
and np.all(lens == [0, 0, 3, 1, 2, 2])
)
batch, _ = c1.buffer.sample(10)
c0.buffer.update(c1.buffer)
assert len(c0.buffer) in [42, 43]
cur_obs = c0.buffer[:].obs
assert isinstance(cur_obs, Batch)
if len(c0.buffer) == 42:
        assert np.all(
            cur_obs.index[..., 0]
            == [
                0, 1, 2, 3, 4, 0, 1, 2, 3, 4,  # the 10 entries already in c0's buffer
                0, 1, 0, 1, 0, 1, 0, 1,  # c1 sub-buffer of the size-2 env
                0, 1, 2, 0, 1, 2,  # size-3 env
                0, 1, 2, 3, 0, 1, 2, 3,  # size-4 env
                0, 1, 2, 3, 4, 0, 1, 2, 3, 4,  # size-5 env
            ],
        ), cur_obs.index[..., 0]
else:
        assert np.all(
            cur_obs.index[..., 0]
            == [
                0, 1, 2, 3, 4, 0, 1, 2, 3, 4,  # the 10 entries already in c0's buffer
                0, 1, 0, 1, 0, 1,  # c1 sub-buffer of the size-2 env
                0, 1, 2, 0, 1, 2, 0, 1, 2,  # size-3 env
                0, 1, 2, 3, 0, 1, 2, 3,  # size-4 env
                0, 1, 2, 3, 4, 0, 1, 2, 3, 4,  # size-5 env
            ],
        ), cur_obs.index[..., 0]
c2 = Collector(
policy,
envs,
VectorReplayBuffer(total_size=100, buffer_num=4, stack_num=4),
)
c2.reset()
c2.collect(n_episode=10)
batch, _ = c2.buffer.sample(10)
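

# Multi-agent rewards: with ma_rew=4 every step yields a length-4 reward vector, so
# episode returns have shape (n_episodes, 4) and the buffer's rew entries are 4-wide.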
def test_collector_with_multi_agent() -> None:
multi_agent_env = MoveToRightEnv(size=5, sleep=0, ma_rew=4)
policy = MaxActionPolicy()
c_single_env = Collector(policy, multi_agent_env, ReplayBuffer(size=100))
c_single_env.reset()
multi_env_returns = c_single_env.collect(n_step=3).returns
# c_single_env has length 3
# We have no full episodes, so no returns yet
assert len(multi_env_returns) == 0
single_env_returns = c_single_env.collect(n_episode=2).returns
    # now two episodes. Since we have 4 agents, the returns have shape (2, 4)
assert single_env_returns.shape == (2, 4)
assert np.all(single_env_returns == 1)
env_fns = [lambda x=i: MoveToRightEnv(size=x, sleep=0, ma_rew=4) for i in [2, 3, 4, 5]]
envs = DummyVectorEnv(env_fns)
c_multi_env_ma = Collector(
policy,
envs,
VectorReplayBuffer(total_size=100, buffer_num=4),
)
c_multi_env_ma.reset()
multi_env_returns = c_multi_env_ma.collect(n_step=12).returns
# each env makes 3 steps, the first two envs are done and result in two finished episodes
assert multi_env_returns.shape == (2, 4) and np.all(multi_env_returns == 1), multi_env_returns
multi_env_returns = c_multi_env_ma.collect(n_episode=8).returns
assert multi_env_returns.shape == (8, 4)
assert np.all(multi_env_returns == 1)
batch, _ = c_multi_env_ma.buffer.sample(10)
print(batch)
c_single_env.buffer.update(c_multi_env_ma.buffer)
assert len(c_single_env.buffer) in [42, 43]
if len(c_single_env.buffer) == 42:
        multi_env_returns = np.array(
            [
                0, 0, 0, 0, 1, 0, 0, 0, 0, 1,  # the 10 entries already in c_single_env's buffer
                0, 1, 0, 1, 0, 1, 0, 1,  # size-2 env
                0, 0, 1, 0, 0, 1,  # size-3 env
                0, 0, 0, 1, 0, 0, 0, 1,  # size-4 env
                0, 0, 0, 0, 1, 0, 0, 0, 0, 1,  # size-5 env
            ],
        )
else:
        multi_env_returns = np.array(
            [
                0, 0, 0, 0, 1, 0, 0, 0, 0, 1,  # the 10 entries already in c_single_env's buffer
                0, 1, 0, 1, 0, 1,  # size-2 env
                0, 0, 1, 0, 0, 1, 0, 0, 1,  # size-3 env
                0, 0, 0, 1, 0, 0, 0, 1,  # size-4 env
                0, 0, 0, 0, 1, 0, 0, 0, 0, 1,  # size-5 env
            ],
        )
assert np.all(c_single_env.buffer[:].rew == [[x] * 4 for x in multi_env_returns])
assert np.all(c_single_env.buffer[:].done == multi_env_returns)
c2 = Collector(
policy,
envs,
VectorReplayBuffer(total_size=100, buffer_num=4, stack_num=4),
)
c2.reset()
multi_env_returns = c2.collect(n_episode=10).returns
assert multi_env_returns.shape == (10, 4)
assert np.all(multi_env_returns == 1)
batch, _ = c2.buffer.sample(10)
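

# Atari-like setting: frame-stacked observations of shape (4, 84, 84). reference_obs[i]
# encodes the scalar state i in a distinct pattern per channel so buffer contents can be
# compared exactly, also with ignore_obs_next / save_only_last_obs and a CachedReplayBuffer.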
def test_collector_with_atari_setting() -> None:
reference_obs = np.zeros([6, 4, 84, 84])
for i in range(6):
reference_obs[i, 3, np.arange(84), np.arange(84)] = i
reference_obs[i, 2, np.arange(84)] = i
reference_obs[i, 1, :, np.arange(84)] = i
reference_obs[i, 0] = i
# atari single buffer
env = MoveToRightEnv(size=5, sleep=0, array_state=True)
policy = MaxActionPolicy()
c0 = Collector(policy, env, ReplayBuffer(size=100))
c0.reset()
c0.collect(n_step=6)
c0.collect(n_episode=2)
assert c0.buffer.obs.shape == (100, 4, 84, 84)
assert c0.buffer.obs_next.shape == (100, 4, 84, 84)
    assert len(c0.buffer) == 15  # 6 steps + the 4 remaining steps of that episode + one full 5-step episode
obs = np.zeros_like(c0.buffer.obs)
obs[np.arange(15)] = reference_obs[np.arange(15) % 5]
assert np.all(obs == c0.buffer.obs)
c1 = Collector(policy, env, ReplayBuffer(size=100, ignore_obs_next=True))
c1.collect(n_episode=3, reset_before_collect=True)
assert np.allclose(c0.buffer.obs, c1.buffer.obs)
with pytest.raises(AttributeError):
c1.buffer.obs_next # noqa: B018
assert np.all(reference_obs[[1, 2, 3, 4, 4] * 3] == c1.buffer[:].obs_next)
c2 = Collector(
policy,
env,
ReplayBuffer(size=100, ignore_obs_next=True, save_only_last_obs=True),
)
c2.reset()
c2.collect(n_step=8)
assert c2.buffer.obs.shape == (100, 84, 84)
obs = np.zeros_like(c2.buffer.obs)
obs[np.arange(8)] = reference_obs[[0, 1, 2, 3, 4, 0, 1, 2], -1]
assert np.all(c2.buffer.obs == obs)
obs_next = c2.buffer[:].obs_next
assert isinstance(obs_next, np.ndarray)
assert np.allclose(obs_next, reference_obs[[1, 2, 3, 4, 4, 1, 2, 2], -1])
# atari multi buffer
env_fns = [lambda x=i: MoveToRightEnv(size=x, sleep=0, array_state=True) for i in [2, 3, 4, 5]]
envs = DummyVectorEnv(env_fns)
c3 = Collector(policy, envs, VectorReplayBuffer(total_size=100, buffer_num=4))
c3.reset()
c3.collect(n_step=12)
result = c3.collect(n_episode=9)
assert result.n_collected_episodes == 9
assert result.n_collected_steps == 23
assert c3.buffer.obs.shape == (100, 4, 84, 84)
obs = np.zeros_like(c3.buffer.obs)
obs[np.arange(8)] = reference_obs[[0, 1, 0, 1, 0, 1, 0, 1]]
obs[np.arange(25, 34)] = reference_obs[[0, 1, 2, 0, 1, 2, 0, 1, 2]]
obs[np.arange(50, 58)] = reference_obs[[0, 1, 2, 3, 0, 1, 2, 3]]
obs[np.arange(75, 85)] = reference_obs[[0, 1, 2, 3, 4, 0, 1, 2, 3, 4]]
assert np.all(obs == c3.buffer.obs)
obs_next = np.zeros_like(c3.buffer.obs_next)
obs_next[np.arange(8)] = reference_obs[[1, 2, 1, 2, 1, 2, 1, 2]]
obs_next[np.arange(25, 34)] = reference_obs[[1, 2, 3, 1, 2, 3, 1, 2, 3]]
obs_next[np.arange(50, 58)] = reference_obs[[1, 2, 3, 4, 1, 2, 3, 4]]
obs_next[np.arange(75, 85)] = reference_obs[[1, 2, 3, 4, 5, 1, 2, 3, 4, 5]]
assert np.all(obs_next == c3.buffer.obs_next)
c4 = Collector(
policy,
envs,
VectorReplayBuffer(
total_size=100,
buffer_num=4,
stack_num=4,
ignore_obs_next=True,
save_only_last_obs=True,
),
)
c4.reset()
c4.collect(n_step=12)
result = c4.collect(n_episode=9)
assert result.n_collected_episodes == 9
assert result.n_collected_steps == 23
assert c4.buffer.obs.shape == (100, 84, 84)
obs = np.zeros_like(c4.buffer.obs)
slice_obs = reference_obs[:, -1]
obs[np.arange(8)] = slice_obs[[0, 1, 0, 1, 0, 1, 0, 1]]
obs[np.arange(25, 34)] = slice_obs[[0, 1, 2, 0, 1, 2, 0, 1, 2]]
obs[np.arange(50, 58)] = slice_obs[[0, 1, 2, 3, 0, 1, 2, 3]]
obs[np.arange(75, 85)] = slice_obs[[0, 1, 2, 3, 4, 0, 1, 2, 3, 4]]
assert np.all(c4.buffer.obs == obs)
obs_next = np.zeros([len(c4.buffer), 4, 84, 84])
    ref_index = np.array(
        [
            1, 1, 1, 1, 1, 1, 1, 1,  # size-2 env
            1, 2, 2, 1, 2, 2, 1, 2, 2,  # size-3 env
            1, 2, 3, 3, 1, 2, 3, 3,  # size-4 env
            1, 2, 3, 4, 4, 1, 2, 3, 4, 4,  # size-5 env
        ],
    )
obs_next[:, -1] = slice_obs[ref_index]
ref_index -= 1
ref_index[ref_index < 0] = 0
obs_next[:, -2] = slice_obs[ref_index]
ref_index -= 1
ref_index[ref_index < 0] = 0
obs_next[:, -3] = slice_obs[ref_index]
ref_index -= 1
ref_index[ref_index < 0] = 0
obs_next[:, -4] = slice_obs[ref_index]
assert np.all(obs_next == c4.buffer[:].obs_next)
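
    # CachedReplayBuffer: `buf` is the main buffer, backed by 4 per-env caches of size 10;
    # episodes are moved into `buf` once they finish, so after 12 steps only the 5 transitions
    # of the two finished episodes have landed there, while c5.buffer exposes all 12.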
buf = ReplayBuffer(100, stack_num=4, ignore_obs_next=True, save_only_last_obs=True)
c5 = Collector(policy, envs, CachedReplayBuffer(buf, 4, 10))
c5.reset()
result_ = c5.collect(n_step=12)
assert len(buf) == 5
assert len(c5.buffer) == 12
result = c5.collect(n_episode=9)
assert result.n_collected_episodes == 9
assert result.n_collected_steps == 23
assert len(buf) == 35
    assert np.all(
        buf.obs[: len(buf)]
        == slice_obs[
            [
                0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 3, 0, 1, 2, 3, 4,
                0, 1, 0, 1, 2, 0, 1, 0, 1, 2, 3, 0, 1, 2, 0, 1, 2, 3, 4,
            ]
        ],
    )
    assert np.all(
        buf[:].obs_next[:, -1]
        == slice_obs[
            [
                1, 1, 1, 2, 2, 1, 1, 1, 2, 3, 3, 1, 2, 3, 4, 4,
                1, 1, 1, 2, 2, 1, 1, 1, 2, 3, 3, 1, 2, 2, 1, 2, 3, 4, 4,
            ]
        ],
    )
assert len(buf) == len(c5.buffer)
# test buffer=None
c6 = Collector(policy, envs)
c6.reset()
result1 = c6.collect(n_step=12)
for key in ["n_collected_episodes", "n_collected_steps", "returns", "lens"]:
assert np.allclose(getattr(result1, key), getattr(result_, key))
result2 = c6.collect(n_episode=9)
for key in ["n_collected_episodes", "n_collected_steps", "returns", "lens"]:
assert np.allclose(getattr(result2, key), getattr(result, key))
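

# EnvPool-backed envs: checks that the env_id bookkeeping in the buffer works when envs are
# created with gym_reset_return_info=True (skipped if envpool is unavailable on this platform).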
@pytest.mark.skipif(envpool is None, reason="EnvPool doesn't support this platform")
def test_collector_envpool_gym_reset_return_info() -> None:
envs = envpool.make_gymnasium("Pendulum-v1", num_envs=4, gym_reset_return_info=True)
policy = MaxActionPolicy(action_shape=(len(envs), 1))
c0 = Collector(
policy,
envs,
VectorReplayBuffer(len(envs) * 10, len(envs)),
exploration_noise=True,
)
c0.reset()
c0.collect(n_step=8)
env_ids = np.zeros(len(envs) * 10)
env_ids[[0, 1, 10, 11, 20, 21, 30, 31]] = [0, 0, 1, 1, 2, 2, 3, 3]
assert np.allclose(c0.buffer.info["env_id"], env_ids)
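

# Episode lengths reported by a synchronous Collector over envs of sizes 1, 8, 9 and 10.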
def test_collector_with_vector_env() -> None:
env_fns = [lambda x=i: MoveToRightEnv(size=x, sleep=0) for i in [1, 8, 9, 10]]
dum = DummyVectorEnv(env_fns)
policy = MaxActionPolicy()
c2 = Collector(
policy,
dum,
VectorReplayBuffer(total_size=100, buffer_num=4),
)
c2.reset()
c1r = c2.collect(n_episode=2)
assert np.array_equal(np.array([1, 8]), c1r.lens)
c2r = c2.collect(n_episode=10)
assert np.array_equal(np.array([1, 1, 1, 1, 1, 1, 1, 8, 9, 10]), c2r.lens)
c3r = c2.collect(n_step=20)
assert np.array_equal(np.array([1, 1, 1, 1, 1]), c3r.lens)
c4r = c2.collect(n_step=20)
assert np.array_equal(np.array([1, 1, 1, 8, 1, 9, 1, 10]), c4r.lens)
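

# The same env set collected asynchronously: the size-1 env finishes far more often, so the
# reported episode lengths interleave differently than in the synchronous case.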
def test_async_collector_with_vector_env() -> None:
env_fns = [lambda x=i: MoveToRightEnv(size=x, sleep=0) for i in [1, 8, 9, 10]]
dum = DummyVectorEnv(env_fns)
policy = MaxActionPolicy()
c1 = AsyncCollector(
policy,
dum,
VectorReplayBuffer(total_size=100, buffer_num=4),
)
c1r = c1.collect(n_episode=10, reset_before_collect=True)
assert np.array_equal(np.array([1, 1, 1, 1, 1, 1, 1, 1, 8, 1, 9]), c1r.lens)
c2r = c1.collect(n_step=20)
assert np.array_equal(np.array([1, 10, 1, 1, 1, 1]), c2r.lens)