Deep reinforcement learning has achieved significant successes in various applications.
**Deep Q Network** (DQN) :cite:`DQN` is the pioneer among them.
In this tutorial, we will show how to train a DQN agent on CartPole with Tianshou step by step.
The full script is at `test/discrete/test_dqn.py <https://github.com/thu-ml/tianshou/blob/master/test/discrete/test_dqn.py>`_.
Unlike existing deep RL libraries such as `RLlib <https://github.com/ray-project/ray/tree/master/rllib/>`_, which mainly accept a configuration specification of hyperparameters, networks, and other components, Tianshou lets you build the training pipeline directly at the code level.
In reinforcement learning, the agent interacts with environments to improve itself.
.. image:: /_static/images/rl-loop.jpg
    :align: center
    :height: 200
There are three types of data flow in the RL training pipeline:
1. Agent to environment: the agent generates an ``action`` and sends it to the environment;
2. Environment to agent: ``env.step`` takes the action and returns a tuple of ``(observation, reward, terminated, truncated, info)``;
3. Agent-environment interaction to agent training: the data generated by the interaction is stored and sent to the agent's learner.
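The following sketch shows this loop with the raw Gymnasium API; it is illustrative only, since Tianshou's :class:`~tianshou.data.Collector` automates this interaction for you::

    import gymnasium as gym

    env = gym.make("CartPole-v1")
    obs, info = env.reset()
    for _ in range(100):
        # agent -> environment: a random action stands in for the agent's choice
        action = env.action_space.sample()
        # environment -> agent: observation, reward, and termination signals
        obs, reward, terminated, truncated, info = env.step(action)
        # interaction -> training: a real pipeline would store the transition here
        if terminated or truncated:
            obs, info = env.reset()
    env.close()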
In the following sections, we will set up (vectorized) environments, policy (with neural network), collector (with buffer), and trainer to successfully run the RL training and evaluation pipeline.
First of all, you have to make an environment for your agent to interact with. You can use ``gym.make(environment_name)`` to make an environment for your agent. For environment interfaces, we follow the convention of `Gymnasium <https://github.com/Farama-Foundation/Gymnasium>`_. In your Python code, simply import Tianshou and make the environment:
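For example::

    import gymnasium as gym
    import tianshou as ts

    # create a single CartPole environment following the Gymnasium interface
    env = gym.make("CartPole-v1")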
CartPole-v1 consists of a cart carrying a pole that moves along a track. This is a simple environment with a discrete action space, to which DQN can be applied. You have to identify whether the action space is continuous or discrete and choose an eligible algorithm (a quick programmatic check is sketched after the list below). DDPG :cite:`DDPG`, for example, can only be applied to continuous action spaces, while almost all other policy gradient methods can be applied to both.
- ``state``: the position of the cart, the velocity of the cart, the angle of the pole, and the angular velocity of the pole;
- ``action``: can only be one of ``[0, 1]``, for pushing the cart left or right;
- ``reward``: you receive a +1 ``reward`` for every timestep you last;
- ``done``: the episode ends when CartPole is out-of-range or times out (the pole is more than 12 degrees from vertical, the cart moves more than 2.4 units from the center, or you last over 500 timesteps);
- ``info``: extra info from the environment simulation.
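As a quick check of the space types (the printed values below are what ``CartPole-v1`` reports)::

    import gymnasium as gym

    env = gym.make("CartPole-v1")
    print(env.observation_space)  # Box with shape (4,): continuous observations
    print(env.action_space)       # Discrete(2): a discrete action space, so DQN applies
    print(isinstance(env.action_space, gym.spaces.Discrete))  # True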
Tianshou supports vectorized environments for all algorithms. It provides four types of vectorized environment wrappers:
- :class:`~tianshou.env.DummyVectorEnv`: the sequential version, using a single-threaded for-loop;
- :class:`~tianshou.env.SubprocVectorEnv`: uses Python multiprocessing and pipes for concurrent execution;
- :class:`~tianshou.env.ShmemVectorEnv`: uses shared memory instead of pipes, based on :class:`~tianshou.env.SubprocVectorEnv`;
- :class:`~tianshou.env.RayVectorEnv`: uses Ray for concurrent execution and is currently the only choice for parallel simulation in a cluster with multiple machines.

These vectorized environments can be used as follows (more explanation can be found at :ref:`parallel_sampling`):
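For example, a minimal sketch with :class:`~tianshou.env.DummyVectorEnv` (the environment counts, 10 and 100, are illustrative)::

    import gymnasium as gym
    import tianshou as ts

    # 10 sequential training environments and 100 test environments
    train_envs = ts.env.DummyVectorEnv([lambda: gym.make("CartPole-v1") for _ in range(10)])
    test_envs = ts.env.DummyVectorEnv([lambda: gym.make("CartPole-v1") for _ in range(100)])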
Tianshou supports any user-defined PyTorch networks and optimizers. Yet, of course, the inputs and outputs must comply with Tianshou's API. Here is an example:
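A sketch of a simple MLP Q-network and its optimizer (layer sizes and the learning rate are illustrative; ``env`` is the CartPole environment created earlier)::

    import numpy as np
    import torch
    from torch import nn

    class Net(nn.Module):
        def __init__(self, state_shape, action_shape):
            super().__init__()
            # a simple MLP mapping flattened observations to one Q-value per action
            self.model = nn.Sequential(
                nn.Linear(np.prod(state_shape), 128), nn.ReLU(inplace=True),
                nn.Linear(128, 128), nn.ReLU(inplace=True),
                nn.Linear(128, 128), nn.ReLU(inplace=True),
                nn.Linear(128, np.prod(action_shape)),
            )

        def forward(self, obs, state=None, info={}):
            # obs may arrive as a numpy array; convert it to a tensor first
            if not isinstance(obs, torch.Tensor):
                obs = torch.tensor(obs, dtype=torch.float)
            batch = obs.shape[0]
            logits = self.model(obs.view(batch, -1))
            return logits, state

    state_shape = env.observation_space.shape or env.observation_space.n
    action_shape = env.action_space.shape or env.action_space.n
    net = Net(state_shape, action_shape)
    optim = torch.optim.Adam(net.parameters(), lr=1e-3)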
You can also use pre-defined MLP networks in :mod:`~tianshou.utils.net.common`, :mod:`~tianshou.utils.net.discrete`, and :mod:`~tianshou.utils.net.continuous`. The rules of self-defined networks are:
1. Input: observation ``obs`` (may be a ``numpy.ndarray``, ``torch.Tensor``, dict, or self-defined class), hidden state ``state`` (for RNN usage), and other information ``info`` provided by the environment.
2. Output: some ``logits`` and the next hidden state ``state``. The logits can be a tuple instead of a ``torch.Tensor``, or include other useful variables or results produced during the policy's forward pass; it depends on how the policy class processes the network output. For example, in PPO :cite:`PPO`, the return of the network might be ``(mu, sigma), state`` for a Gaussian policy.
The term "logits" here refers to the raw output of the network. In supervised learning, the raw output of a prediction/classification model is called logits, and here we extend this definition to any raw output of a neural network.
We use the ``net`` and ``optim`` defined above, together with extra policy hyper-parameters, to define a policy. Here we define a DQN policy with a target network:
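A sketch of the construction; the exact constructor arguments may differ between Tianshou versions, and the hyper-parameter values are illustrative::

    policy = ts.policy.DQNPolicy(
        net,
        optim,
        discount_factor=0.9,
        estimation_step=3,       # use 3-step returns for the TD target
        target_update_freq=320,  # sync the target network every 320 updates
    )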
The following code shows how to set up a collector in practice. It is worth noting that :class:`~tianshou.data.VectorReplayBuffer` should be used in vectorized environment scenarios, and that the number of buffers (10 in the following case) should preferably be set to the number of environments.
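A sketch of the setup (the total buffer size of 20000 is illustrative; ``train_envs`` and ``test_envs`` are the vectorized environments created earlier)::

    train_collector = ts.data.Collector(
        policy,
        train_envs,
        ts.data.VectorReplayBuffer(20000, 10),  # 10 sub-buffers, one per training environment
        exploration_noise=True,
    )
    test_collector = ts.data.Collector(policy, test_envs, exploration_noise=True)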
* ``step_per_epoch``: the number of environment steps (a.k.a. transitions) collected per epoch;
* ``step_per_collect``: the number of transitions the collector gathers before each network update. For example, the setting in the sketch below means "collect 10 transitions and do one policy network update";
* ``train_fn``: a function that receives the current epoch number and step index and performs some operations at the beginning of training in this epoch. For example, the setting in the sketch below means "reset the epsilon to 0.1 in DQN before training";
* ``test_fn``: a function that receives the current epoch number and step index and performs some operations at the beginning of testing in this epoch. For example, the setting in the sketch below means "reset the epsilon to 0.05 in DQN before testing".
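For reference, here is a sketch of how these hyper-parameters might be passed to Tianshou's off-policy trainer; the exact entry point differs across Tianshou versions (older releases expose a ``ts.trainer.offpolicy_trainer`` function rather than the ``OffpolicyTrainer`` class), and the values below are illustrative::

    result = ts.trainer.OffpolicyTrainer(
        policy=policy,
        train_collector=train_collector,
        test_collector=test_collector,
        max_epoch=10,
        step_per_epoch=10000,
        step_per_collect=10,    # collect 10 transitions before each update
        update_per_step=0.1,
        episode_per_test=100,
        batch_size=64,
        train_fn=lambda epoch, env_step: policy.set_eps(0.1),   # exploration epsilon for training
        test_fn=lambda epoch, env_step: policy.set_eps(0.05),   # exploration epsilon for testing
        stop_fn=lambda mean_rewards: mean_rewards >= env.spec.reward_threshold,
    ).run()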