Deep Q Network
==============
Deep reinforcement learning has achieved significant successes in various applications.
**Deep Q Network** (DQN) :cite:`DQN` is the pioneering work among them.
In this tutorial, we will show how to train a DQN agent on CartPole with Tianshou step by step.
The full script is at `test/discrete/test_dqn.py <https://github.com/thu-ml/tianshou/blob/master/test/discrete/test_dqn.py>`_.
In contrast to existing deep RL libraries such as `RLlib <https://github.com/ray-project/ray/tree/master/rllib/>`_, which only accept a configuration specification of hyperparameters, networks, and so on, Tianshou lets you build the whole pipeline directly in Python code.
Overview
--------
In reinforcement learning, the agent interacts with environments to improve itself.
.. image:: /_static/images/rl-loop.jpg
:align: center
:height: 200
There are three types of data flow in the RL training pipeline:

1. Agent to environment: the ``action`` is generated by the agent and sent to the environment;
2. Environment to agent: ``env.step`` takes the action and returns a tuple of ``(observation, reward, terminated, truncated, info)``;
3. Agent-environment interaction to agent training: the data generated by the interaction is stored and sent to the learner of the agent (a bare-bones version of this loop is sketched below).
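To make these data flows concrete, here is a bare-bones Gymnasium interaction loop, with a random policy standing in for the agent (the variable names are only for illustration):

::

    import gymnasium as gym

    env = gym.make('CartPole-v1')
    obs, info = env.reset(seed=0)            # environment -> agent: initial observation
    total_reward, done = 0.0, False
    while not done:
        act = env.action_space.sample()      # agent -> environment: a (random) action
        obs, rew, terminated, truncated, info = env.step(act)
        total_reward += rew                  # interaction data that a learner would store
        done = terminated or truncated
    print(total_reward)
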
In the following sections, we will set up (vectorized) environments, policy (with neural network), collector (with buffer), and trainer to successfully run the RL training and evaluation pipeline.
Here is the overall system:
.. image:: /_static/images/pipeline.png
:align: center
:height: 300

Make an Environment
-------------------

First of all, you have to make an environment for your agent to interact with. You can use ``gym.make(environment_name)`` to make an environment for your agent. For environment interfaces, we follow the convention of `Gymnasium <https://github.com/Farama-Foundation/Gymnasium>`_. In your Python code, simply import Tianshou and make the environment:
::
import gymnasium as gym
import tianshou as ts
env = gym.make('CartPole-v1')
CartPole-v1 consists of a cart carrying a pole that moves along a track. This is a simple environment with a discrete action space, to which DQN applies. You have to identify whether the action space is continuous or discrete and choose an applicable algorithm. DDPG :cite:`DDPG`, for example, can only be applied to continuous action spaces, while almost all other policy gradient methods can be applied to both.

Here are the useful fields of CartPole-v1:

- ``state``: the position of the cart, the velocity of the cart, the angle of the pole, and the angular velocity of the pole;
- ``action``: can only be ``0`` or ``1``, pushing the cart to the left or to the right;
- ``reward``: a +1 ``reward`` is given for every timestep the episode lasts;
- ``terminated``/``truncated``: the episode terminates when CartPole goes out of range (the pole is more than 12 degrees from vertical or the cart moves more than 2.4 units from the center), and is truncated after 500 timesteps;
- ``info``: extra info from the environment simulation.
The goal is to train a good policy that can get the highest reward in this environment.
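To double-check these properties before picking an algorithm, you can inspect the spaces of the ``env`` created above:

::

    print(env.observation_space)       # Box of shape (4,): cart position/velocity, pole angle/angular velocity
    print(env.action_space)            # Discrete(2): push the cart to the left or to the right
    print(env.spec.reward_threshold)   # the return above which the task counts as solved (used by stop_fn later)
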
Setup Vectorized Environment
----------------------------

If you want to use the original ``gym.Env``:

::
train_envs = gym.make('CartPole-v1')
test_envs = gym.make('CartPole-v1')

Tianshou supports vectorized environments for all algorithms. It provides four types of vectorized environment wrappers:

- :class:`~tianshou.env.DummyVectorEnv`: the sequential version, using a single-thread for-loop;
- :class:`~tianshou.env.SubprocVectorEnv`: uses Python multiprocessing and pipes for concurrent execution;
- :class:`~tianshou.env.ShmemVectorEnv`: uses shared memory instead of pipes, based on SubprocVectorEnv;
- :class:`~tianshou.env.RayVectorEnv`: uses Ray for concurrent execution and is currently the only choice for parallel simulation on a cluster with multiple machines.

They can all be used in the same way (more explanation can be found at :ref:`parallel_sampling`):

::
train_envs = ts.env.DummyVectorEnv([lambda: gym.make('CartPole-v1') for _ in range(10)])
test_envs = ts.env.DummyVectorEnv([lambda: gym.make('CartPole-v1') for _ in range(100)])
Here, we set up 10 environments in ``train_envs`` and 100 environments in ``test_envs``.
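A vectorized environment behaves like a batched ``gym.Env``: ``reset`` returns stacked observations and ``step`` takes one action per sub-environment. A minimal sketch, assuming the ``train_envs`` defined above (the exact return signature, 4-tuple vs. Gymnasium-style 5-tuple, depends on your Tianshou/Gymnasium versions):

::

    import numpy as np

    obs, info = train_envs.reset()          # obs is stacked, here with shape (10, 4)
    acts = np.random.randint(2, size=10)    # one action per sub-environment
    obs_next, rew, terminated, truncated, info = train_envs.step(acts)
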
You can also try the super-fast vectorized environment `EnvPool <https://github.com/sail-sg/envpool/>`_ by
::
import envpool
train_envs = envpool.make_gymnasium("CartPole-v1", num_envs=10)
test_envs = envpool.make_gymnasium("CartPole-v1", num_envs=100)
For this demonstration, we use the second code block (the ``DummyVectorEnv`` setup).
.. warning::
If you use your own environment, please make sure the ``seed`` method is set up properly, e.g.,
::
def seed(self, seed):
np.random.seed(seed)
Otherwise, the outputs of these envs may be identical to each other.

.. _build_the_network:

Build the Network
-----------------

Tianshou supports any user-defined PyTorch networks and optimizers, as long as their inputs and outputs comply with Tianshou's API. Here is an example:
::
    import torch, numpy as np
    from torch import nn

    class Net(nn.Module):
        def __init__(self, state_shape, action_shape):
            super().__init__()
            self.model = nn.Sequential(
                nn.Linear(np.prod(state_shape), 128), nn.ReLU(inplace=True),
                nn.Linear(128, 128), nn.ReLU(inplace=True),
                nn.Linear(128, 128), nn.ReLU(inplace=True),
                nn.Linear(128, np.prod(action_shape)),
            )

        def forward(self, obs, state=None, info={}):
            if not isinstance(obs, torch.Tensor):
                obs = torch.tensor(obs, dtype=torch.float)
            batch = obs.shape[0]
            logits = self.model(obs.view(batch, -1))
            return logits, state

    state_shape = env.observation_space.shape or env.observation_space.n
    action_shape = env.action_space.shape or env.action_space.n
    net = Net(state_shape, action_shape)
    optim = torch.optim.Adam(net.parameters(), lr=1e-3)

You can also use the pre-defined MLP networks in :mod:`~tianshou.utils.net.common`, :mod:`~tianshou.utils.net.discrete`, and :mod:`~tianshou.utils.net.continuous`. The rules for self-defined networks are:

1. Input: observation ``obs`` (may be a ``numpy.ndarray``, ``torch.Tensor``, dict, or self-defined class), hidden state ``state`` (for RNN usage), and other information ``info`` provided by the environment.
2. Output: some ``logits`` and the next hidden state ``state``. The logits could be a tuple instead of a single ``torch.Tensor``, or any other useful variables produced during the forward pass; it depends on how the policy class processes the network output. For example, in PPO :cite:`PPO`, the return of the network might be ``(mu, sigma), state`` for a Gaussian policy.
.. note::

    Here, *logits* refers to the raw output of the network. In supervised learning, the raw output of a prediction/classification model is called logits; we extend this definition to any raw output of a neural network.
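For instance, the pre-defined MLP mentioned above can replace the hand-written ``Net``; a rough equivalent is sketched below (the exact constructor arguments may vary slightly between Tianshou versions):

::

    from tianshou.utils.net.common import Net

    # an MLP with three hidden layers of size 128, mirroring the hand-written network above
    net = Net(state_shape=state_shape, action_shape=action_shape, hidden_sizes=[128, 128, 128])
    optim = torch.optim.Adam(net.parameters(), lr=1e-3)
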

Setup Policy
------------

We use the ``net`` and ``optim`` defined above, together with some extra policy hyper-parameters, to define a policy. Here we define a DQN policy with a target network:
::
policy = ts.policy.DQNPolicy(
model=net,
optim=optim,
action_space=env.action_space,
discount_factor=0.9,
estimation_step=3,
target_update_freq=320
)

Setup Collector
---------------

The collector is a key concept in Tianshou. It allows the policy to interact with different types of environments conveniently.
In each step, the collector will let the policy perform (at least) a specified number of steps or episodes and store the data in a replay buffer.
The following code shows how to set up a collector in practice. Note that :class:`~tianshou.data.VectorReplayBuffer` is used in vectorized environment scenarios, and the number of buffers (10 in the case below) should be set to the number of environments.
::
train_collector = ts.data.Collector(policy, train_envs, ts.data.VectorReplayBuffer(20000, 10), exploration_noise=True)
test_collector = ts.data.Collector(policy, test_envs, exploration_noise=True)
The main method of the collector is ``collect``, which can be summarized in the following lines:
::
result = self.policy(self.data, last_state) # the agent predicts the batch action from batch observation
act = to_numpy(result.act)
self.data.update(act=act) # update the data with new action/policy
result = self.env.step(act, ready_env_ids) # apply action to environment
obs_next, rew, done, info = result
self.data.update(obs_next=obs_next, rew=rew, done=done, info=info) # update the data with new state/reward/done/info
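In practice you rarely call these internals yourself; instead you ask the collector to gather experience, for example to pre-fill the replay buffer with random actions before training. The sketch below assumes the ``train_collector`` defined above (the exact keys of the returned statistics differ between Tianshou versions):

::

    # collect 5000 transitions with random actions and store them in the buffer
    collect_result = train_collector.collect(n_step=5000, random=True)
    print(collect_result)  # statistics such as the number of collected steps/episodes
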

Train Policy with a Trainer
---------------------------

Tianshou provides :class:`~tianshou.trainer.OnpolicyTrainer`, :class:`~tianshou.trainer.OffpolicyTrainer`,
and :class:`~tianshou.trainer.OfflineTrainer`. The trainer will automatically stop training when the policy
reaches the stop condition ``stop_fn`` on the test collector. Since DQN is an off-policy algorithm, we use the
:class:`~tianshou.trainer.OffpolicyTrainer` as follows:
::
result = ts.trainer.OffpolicyTrainer(
policy=policy,
train_collector=train_collector,
test_collector=test_collector,
max_epoch=10, step_per_epoch=10000, step_per_collect=10,
update_per_step=0.1, episode_per_test=100, batch_size=64,
train_fn=lambda epoch, env_step: policy.set_eps(0.1),
test_fn=lambda epoch, env_step: policy.set_eps(0.05),
stop_fn=lambda mean_rewards: mean_rewards >= env.spec.reward_threshold
).run()
print(f'Finished training! Use {result["duration"]}')
The meaning of each parameter is as follows (a full description can be found at :class:`~tianshou.trainer.OffpolicyTrainer`):

* ``max_epoch``: The maximum number of training epochs. Training may stop before reaching ``max_epoch`` if ``stop_fn`` is satisfied;
* ``step_per_epoch``: The number of environment steps (i.e. transitions) collected per epoch;
* ``step_per_collect``: The number of transitions the collector gathers before each network update. For example, the code above means "collect 10 transitions and do one policy network update";
* ``episode_per_test``: The number of episodes used for one policy evaluation;
* ``batch_size``: The batch size of the sample data fed into the policy network;
* ``train_fn``: A function that receives the current epoch number and step index and performs some operation at the beginning of training in this epoch. For example, the code above means "reset the epsilon to 0.1 in DQN before training";
* ``test_fn``: A function that receives the current epoch number and step index and performs some operation at the beginning of testing in this epoch. For example, the code above means "reset the epsilon to 0.05 in DQN before testing";
* ``stop_fn``: A function that receives the average undiscounted return of the test result and returns a boolean indicating whether the goal has been reached;
* ``logger``: See below.
The trainer supports `TensorBoard <https://www.tensorflow.org/tensorboard>`_ for logging. It can be used as:
::
from torch.utils.tensorboard import SummaryWriter
from tianshou.utils import TensorboardLogger
writer = SummaryWriter('log/dqn')
logger = TensorboardLogger(writer)
Pass the logger into the trainer, and the training result will be recorded in TensorBoard.
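With the trainer shown above, this is just one extra keyword argument:

::

    result = ts.trainer.OffpolicyTrainer(
        policy=policy,
        train_collector=train_collector,
        test_collector=test_collector,
        max_epoch=10, step_per_epoch=10000, step_per_collect=10,
        update_per_step=0.1, episode_per_test=100, batch_size=64,
        train_fn=lambda epoch, env_step: policy.set_eps(0.1),
        test_fn=lambda epoch, env_step: policy.set_eps(0.05),
        stop_fn=lambda mean_rewards: mean_rewards >= env.spec.reward_threshold,
        logger=logger
    ).run()
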
The returned result is a dictionary as follows:
::
{
'train_step': 9246,
'train_episode': 504.0,
'train_time/collector': '0.65s',
'train_time/model': '1.97s',
'train_speed': '3518.79 step/s',
'test_step': 49112,
'test_episode': 400.0,
'test_time': '1.38s',
'test_speed': '35600.52 step/s',
'best_reward': 199.03,
'duration': '4.01s'
}
It shows that within approximately 4 seconds, we finished training a DQN agent on CartPole. The mean return over the 100 test episodes is 199.03.

Save/Load Policy
----------------

Since the policy inherits from ``torch.nn.Module``, saving and loading it works exactly like saving and loading any other torch module:
::
torch.save(policy.state_dict(), 'dqn.pth')
policy.load_state_dict(torch.load('dqn.pth'))
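If you also want to resume training later, a common PyTorch pattern is to checkpoint the optimizer state together with the policy; a minimal sketch (``'dqn_checkpoint.pth'`` is just an example filename):

::

    # save policy and optimizer state in one checkpoint file
    torch.save({'policy': policy.state_dict(), 'optim': optim.state_dict()}, 'dqn_checkpoint.pth')

    # restore both before continuing training
    checkpoint = torch.load('dqn_checkpoint.pth')
    policy.load_state_dict(checkpoint['policy'])
    optim.load_state_dict(checkpoint['optim'])
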

Watch the Agent's Performance
-----------------------------

:class:`~tianshou.data.Collector` supports rendering. Here is an example of watching the agent's performance at 35 FPS:
::
policy.eval()
policy.set_eps(0.05)
collector = ts.data.Collector(policy, env, exploration_noise=True)
collector.collect(n_episode=1, render=1 / 35)
If you'd like to manually see the action generated by a well-trained agent:
::
    # assume obs is a single environment observation
    from tianshou.data import Batch
    action = policy(Batch(obs=np.array([obs]))).act[0]

.. _customized_trainer:

Train a Policy with Customized Codes
------------------------------------

"I don't want to use your provided trainer. I want to customize it!"
Tianshou supports user-defined training code. Here is the code snippet:
::
    # pre-collect at least 5000 transitions with random actions before training
    train_collector.collect(n_step=5000, random=True)

    policy.set_eps(0.1)
    for i in range(int(1e6)):  # total step
        collect_result = train_collector.collect(n_step=10)
        # if the collected episodes' mean returns reach the threshold,
        # or every 1000 steps, we test it on test_collector
        if collect_result['rews'].mean() >= env.spec.reward_threshold or i % 1000 == 0:
            policy.set_eps(0.05)
            result = test_collector.collect(n_episode=100)
            if result['rews'].mean() >= env.spec.reward_threshold:
                print(f'Finished training! Test mean returns: {result["rews"].mean()}')
                break
            else:
                # back to training eps
                policy.set_eps(0.1)
        # train policy with a sampled batch of data from the buffer
        losses = policy.update(64, train_collector.buffer)

For further usage, you can refer to the :doc:`/01_tutorials/07_cheatsheet`.
.. rubric:: References
.. bibliography:: /refs.bib
:style: unsrtalpha