update dqn tutorial and add envpool to docs (#526)

Co-authored-by: Jiayi Weng <trinkle23897@gmail.com>
Author: Chengqi Duan, committed 2022-02-15 06:39:47 +08:00 (via GitHub)
Parent: d29188ee77
Commit: d85bc19269
9 changed files with 96 additions and 13 deletions


@@ -52,9 +52,11 @@ spelling:
 	$(call check_install_extra, sphinxcontrib.spelling, sphinxcontrib.spelling pyenchant)
 	cd docs && make spelling SPHINXOPTS="-W"
-clean:
+doc-clean:
 	cd docs && make clean
+
+clean: doc-clean
 commit-checks: format lint mypy check-docstyle spelling
 .PHONY: clean spelling doc mypy lint format check-codestyle check-docstyle commit-checks


@@ -50,7 +50,8 @@ Here is Tianshou's other features:
 - Elegant framework, using only ~4000 lines of code
 - State-of-the-art [MuJoCo benchmark](https://github.com/thu-ml/tianshou/tree/master/examples/mujoco) for REINFORCE/A2C/TRPO/PPO/DDPG/TD3/SAC algorithms
-- Support parallel environment simulation (synchronous or asynchronous) for all algorithms [Usage](https://tianshou.readthedocs.io/en/master/tutorials/cheatsheet.html#parallel-sampling)
+- Support vectorized environment (synchronous or asynchronous) for all algorithms [Usage](https://tianshou.readthedocs.io/en/master/tutorials/cheatsheet.html#parallel-sampling)
+- Support super-fast vectorized environment [EnvPool](https://github.com/sail-sg/envpool/) for all algorithms [Usage](https://tianshou.readthedocs.io/en/master/tutorials/cheatsheet.html#envpool-integration)
 - Support recurrent state representation in actor network and critic network (RNN-style training for POMDP) [Usage](https://tianshou.readthedocs.io/en/master/tutorials/cheatsheet.html#rnn-style-training)
 - Support any type of environment state/action (e.g. a dict, a self-defined class, ...) [Usage](https://tianshou.readthedocs.io/en/master/tutorials/cheatsheet.html#user-defined-environment-and-different-state-representation)
 - Support customized training process [Usage](https://tianshou.readthedocs.io/en/master/tutorials/cheatsheet.html#customize-training-process)

docs/_static/images/pipeline.png: new binary file, 107 KiB (not shown)

docs/_static/images/rl-loop.jpg: new binary file, 16 KiB (not shown)


@@ -41,7 +41,8 @@ Here is Tianshou's other features:
 * Elegant framework, using only ~3000 lines of code
 * State-of-the-art `MuJoCo benchmark <https://github.com/thu-ml/tianshou/tree/master/examples/mujoco>`_
-* Support parallel environment simulation (synchronous or asynchronous) for all algorithms: :ref:`parallel_sampling`
+* Support vectorized environment (synchronous or asynchronous) for all algorithms: :ref:`parallel_sampling`
+* Support super-fast vectorized environment `EnvPool <https://github.com/sail-sg/envpool/>`_ for all algorithms: :ref:`envpool_integration`
 * Support recurrent state representation in actor network and critic network (RNN-style training for POMDP): :ref:`rnn_training`
 * Support any type of environment state/action (e.g. a dict, a self-defined class, ...): :ref:`self_defined_env`
 * Support :ref:`customize_training`


@@ -9,8 +9,11 @@ optim
 eps
 timelimit
 TimeLimit
+envpool
+EnvPool
 maxsize
 timestep
+timesteps
 numpy
 ndarray
 stackoverflow


@@ -63,14 +63,11 @@ To correctly render the data (including several tfevent files), we highly recomm
 Parallel Sampling
 -----------------

-Tianshou provides the following classes for parallel environment simulation:
+Tianshou provides the following classes for vectorized environment:

 - :class:`~tianshou.env.DummyVectorEnv` is for pseudo-parallel simulation (implemented with a for-loop, useful for debugging).
 - :class:`~tianshou.env.SubprocVectorEnv` uses multiple processes for parallel simulation. This is the most often choice for parallel simulation.
 - :class:`~tianshou.env.ShmemVectorEnv` has a similar implementation to :class:`~tianshou.env.SubprocVectorEnv`, but is optimized (in terms of both memory footprint and simulation speed) for environments with large observations such as images.
 - :class:`~tianshou.env.RayVectorEnv` is currently the only choice for parallel simulation in a cluster with multiple machines.

 Although these classes are optimized for different scenarios, they have exactly the same APIs because they are sub-classes of :class:`~tianshou.env.BaseVectorEnv`. Just provide a list of functions who return environments upon called, and it is all set.
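
For instance, here is a minimal sketch of that pattern (assuming ``gym`` is installed and the pre-0.26 Gym step API used throughout these docs; only the classes listed above are used):
::

    import gym
    import numpy as np
    from tianshou.env import DummyVectorEnv, SubprocVectorEnv

    # a list of callables, each of which returns a fresh environment when called
    env_fns = [lambda: gym.make("CartPole-v0") for _ in range(4)]

    venv = DummyVectorEnv(env_fns)      # for-loop version, handy for debugging
    # venv = SubprocVectorEnv(env_fns)  # multi-process version, same API

    obs = venv.reset()                                         # stacked observations, one row per environment
    obs, rew, done, info = venv.step(np.zeros(4, dtype=int))   # one action per environment
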
@@ -119,6 +116,24 @@ The figure in the right gives an intuitive comparison among synchronous/asynchro
 Otherwise, the outputs of these envs may be the same with each other.

+.. _envpool_integration:
+
+EnvPool Integration
+-------------------
+
+`EnvPool <https://github.com/sail-sg/envpool/>`_ is a C++-based vectorized environment implementation and is much faster than the above solutions. Its APIs are almost the same as those of the four classes above, so you can switch your vectorized environment to EnvPool directly and get an immediate speed-up.
+
+Currently it supports Atari, VizDoom, toy_text and classic_control environments. For more information, please refer to `EnvPool's documentation <https://envpool.readthedocs.io/en/latest/>`_.
+
+::
+
+    # install envpool: pip3 install envpool
+
+    import envpool
+    envs = envpool.make_gym("CartPole-v0", num_envs=10)
+    collector = Collector(policy, envs, buffer)
+
+Here are some examples: https://github.com/sail-sg/envpool/tree/master/examples/tianshou_examples

 .. _preprocess_fn:


@@ -9,10 +9,33 @@ The full script is at `test/discrete/test_dqn.py <https://github.com/thu-ml/tian
 Contrary to existing Deep RL libraries such as `RLlib <https://github.com/ray-project/ray/tree/master/rllib/>`_, which could only accept a config specification of hyperparameters, network, and others, Tianshou provides an easy way of construction through the code-level.

+Overview
+--------
+
+In reinforcement learning, the agent interacts with environments to improve itself.
+
+.. image:: /_static/images/rl-loop.jpg
+    :align: center
+    :height: 200
+
+There are three types of data flow in the RL training pipeline (a minimal interaction loop is sketched after the list):
+
+1. Agent to environment: the ``action`` is generated by the agent and sent to the environment;
+2. Environment to agent: ``env.step`` takes the action and returns a tuple of ``(observation, reward, done, info)``;
+3. Agent-environment interaction to agent training: the data generated by the interaction is stored and sent to the learner of the agent.
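
Written out with a plain Gym environment, the loop looks roughly like this (a minimal sketch using the pre-0.26 Gym step API that this tutorial assumes):
::

    import gym

    env = gym.make("CartPole-v0")
    obs = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()        # 1. agent -> environment (random policy here)
        obs, rew, done, info = env.step(action)   # 2. environment -> agent
        # 3. (obs, action, rew, done) is what a replay buffer would store for training
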
+In the following sections, we will set up (vectorized) environments, a policy (with a neural network), a collector (with a buffer), and a trainer to run the RL training and evaluation pipeline.
+
+Here is the overall system:
+
+.. image:: /_static/images/pipeline.png
+    :align: center
+    :height: 300
+
 Make an Environment
 -------------------

-First of all, you have to make an environment for your agent to interact with. For environment interfaces, we follow the convention of `OpenAI Gym <https://github.com/openai/gym>`_. In your Python code, simply import Tianshou and make the environment:
+First of all, you have to make an environment for your agent to interact with. You can use ``gym.make(environment_name)`` to make an environment for your agent. For environment interfaces, we follow the convention of `OpenAI Gym <https://github.com/openai/gym>`_. In your Python code, simply import Tianshou and make the environment:
 ::

     import gym

@@ -20,11 +43,21 @@ First of all, you have to make an environment for your agent to interact with. F
     env = gym.make('CartPole-v0')

-CartPole-v0 is a simple environment with a discrete action space, for which DQN applies. You have to identify whether the action space is continuous or discrete and apply eligible algorithms. DDPG :cite:`DDPG`, for example, could only be applied to continuous action spaces, while almost all other policy gradient methods could be applied to both, depending on the probability distribution on the action.
+CartPole-v0 includes a cart carrying a pole moving on a track. This is a simple environment with a discrete action space, for which DQN applies. You have to identify whether the action space is continuous or discrete and apply eligible algorithms. DDPG :cite:`DDPG`, for example, could only be applied to continuous action spaces, while almost all other policy gradient methods could be applied to both.
+
+Here are the details of the useful fields of CartPole-v0:
+
+- ``state``: the position of the cart, the velocity of the cart, the angle of the pole and the velocity of the tip of the pole;
+- ``action``: can only be one of ``[0, 1]``, for pushing the cart to the left or to the right;
+- ``reward``: you receive a +1 ``reward`` for every timestep you last;
+- ``done``: if CartPole is out-of-range or times out (the pole is more than 15 degrees from vertical, the cart moves more than 2.4 units away from the center, or you last over 200 timesteps);
+- ``info``: extra info from the environment simulation.
+
+The goal is to train a good policy that can get the highest reward in this environment.
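
If you want to check these spaces yourself, the standard Gym attributes are enough (a small sketch, nothing Tianshou-specific):
::

    import gym

    env = gym.make('CartPole-v0')
    print(env.observation_space)  # Box with 4 entries: cart position/velocity, pole angle/velocity
    print(env.action_space)       # Discrete(2): push the cart to the left or to the right
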
-Setup Multi-environment Wrapper
--------------------------------
+Setup Vectorized Environment
+----------------------------

 If you want to use the original ``gym.Env``:
 ::

@@ -32,7 +65,13 @@ If you want to use the original ``gym.Env``:
     train_envs = gym.make('CartPole-v0')
     test_envs = gym.make('CartPole-v0')

-Tianshou supports parallel sampling for all algorithms. It provides four types of vectorized environment wrapper: :class:`~tianshou.env.DummyVectorEnv`, :class:`~tianshou.env.SubprocVectorEnv`, :class:`~tianshou.env.ShmemVectorEnv`, and :class:`~tianshou.env.RayVectorEnv`. It can be used as follows: (more explanation can be found at :ref:`parallel_sampling`)
+Tianshou supports vectorized environments for all algorithms. It provides four types of vectorized environment wrappers:
+
+- :class:`~tianshou.env.DummyVectorEnv`: the sequential version, using a single-thread for-loop;
+- :class:`~tianshou.env.SubprocVectorEnv`: uses Python multiprocessing and pipes for concurrent execution;
+- :class:`~tianshou.env.ShmemVectorEnv`: uses shared memory instead of pipes, building on SubprocVectorEnv;
+- :class:`~tianshou.env.RayVectorEnv`: uses Ray for concurrent activities and is currently the only choice for parallel simulation in a cluster with multiple machines.
+
+They can be used as follows: (more explanation can be found at :ref:`parallel_sampling`)
 ::

     train_envs = ts.env.DummyVectorEnv([lambda: gym.make('CartPole-v0') for _ in range(10)])

@@ -40,6 +79,14 @@ Tianshou supports parallel sampling for all algorithms. It provides four types o
 Here, we set up 10 environments in ``train_envs`` and 100 environments in ``test_envs``.
+
+You can also try the super-fast vectorized environment `EnvPool <https://github.com/sail-sg/envpool/>`_:
+::
+
+    # install envpool: pip3 install envpool
+    import envpool
+    train_envs = envpool.make_gym("CartPole-v0", num_envs=10)
+    test_envs = envpool.make_gym("CartPole-v0", num_envs=100)
+
 For the demonstration, here we use the second code-block.

 .. warning::

@@ -111,11 +158,25 @@ Setup Collector
 The collector is a key concept in Tianshou. It allows the policy to interact with different types of environments conveniently.
 In each step, the collector will let the policy perform (at least) a specified number of steps or episodes and store the data in a replay buffer.
+The following code shows how to set up a collector in practice. Note that ``VectorReplayBuffer`` is meant for vectorized environment scenarios, and its number of buffers (10 in the case below) should preferably equal the number of environments.
 ::

     train_collector = ts.data.Collector(policy, train_envs, ts.data.VectorReplayBuffer(20000, 10), exploration_noise=True)
     test_collector = ts.data.Collector(policy, test_envs, exploration_noise=True)

+The main function of the collector is ``collect``, which can be summarized in the following lines:
+::
+
+    result = self.policy(self.data, last_state)  # the agent predicts the batch action from the batch observation
+    act = to_numpy(result.act)
+    self.data.update(act=act)  # update the data with the new action/policy
+    result = self.env.step(act, ready_env_ids)  # apply the action to the environment
+    obs_next, rew, done, info = result
+    self.data.update(obs_next=obs_next, rew=rew, done=done, info=info)  # update the data with the new state/reward/done/info
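
In user code you normally call ``collect`` directly rather than this inner loop; a small usage sketch (the ``n_step``/``n_episode`` arguments and the returned ``rews`` statistics follow the Collector API used in this tutorial, and the numbers are only examples):
::

    # gather 10 transitions from the (vectorized) training environments into the buffer
    result = train_collector.collect(n_step=10)

    # or gather whole episodes, e.g. for a quick evaluation
    result = test_collector.collect(n_episode=5)
    print(result["rews"].mean())  # average undiscounted return of the collected episodes
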
 Train Policy with a Trainer
 ---------------------------


@@ -46,7 +46,7 @@ class ReplayBuffer:
             "sample_avail": sample_avail,
         }
         super().__init__()
-        self.maxsize = size
+        self.maxsize = int(size)
         assert stack_num > 0, "stack_num should be greater than 0"
         self.stack_num = stack_num
         self._indices = np.arange(size)
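
A small sketch of why the cast helps: buffer sizes are often written in scientific notation, which yields a Python float (the value below is only an example):
::

    from tianshou.data import ReplayBuffer

    size = 2e4                # a float, e.g. coming from an argparse default
    buf = ReplayBuffer(size)  # maxsize is normalized to int(2e4) == 20000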