.. Tianshou documentation master file, created by
   sphinx-quickstart on Sat Mar 28 15:58:19 2020.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to Tianshou!
====================

**Tianshou** (`天授 `_) is a reinforcement learning platform based on pure
PyTorch. Unlike many existing reinforcement learning libraries, which are
mainly based on TensorFlow, have many nested classes, unfriendly APIs, or are
slow, Tianshou provides a fast framework and a pythonic API for building deep
reinforcement learning agents. The supported interface algorithms include:

* :class:`~tianshou.policy.DQNPolicy` `Deep Q-Network `_
* :class:`~tianshou.policy.DQNPolicy` `Double DQN `_
* :class:`~tianshou.policy.DQNPolicy` `Dueling DQN `_
* :class:`~tianshou.policy.BranchingDQNPolicy` `Branching DQN `_
* :class:`~tianshou.policy.C51Policy` `Categorical DQN `_
* :class:`~tianshou.policy.RainbowPolicy` `Rainbow DQN `_
* :class:`~tianshou.policy.QRDQNPolicy` `Quantile Regression DQN `_
* :class:`~tianshou.policy.IQNPolicy` `Implicit Quantile Network `_
* :class:`~tianshou.policy.FQFPolicy` `Fully-parameterized Quantile Function `_
* :class:`~tianshou.policy.PGPolicy` `Policy Gradient `_
* :class:`~tianshou.policy.NPGPolicy` `Natural Policy Gradient `_
* :class:`~tianshou.policy.A2CPolicy` `Advantage Actor-Critic `_
* :class:`~tianshou.policy.TRPOPolicy` `Trust Region Policy Optimization `_
* :class:`~tianshou.policy.PPOPolicy` `Proximal Policy Optimization `_
* :class:`~tianshou.policy.DDPGPolicy` `Deep Deterministic Policy Gradient `_
* :class:`~tianshou.policy.TD3Policy` `Twin Delayed DDPG `_
* :class:`~tianshou.policy.SACPolicy` `Soft Actor-Critic `_
* :class:`~tianshou.policy.REDQPolicy` `Randomized Ensembled Double Q-Learning `_
* :class:`~tianshou.policy.DiscreteSACPolicy` `Discrete Soft Actor-Critic `_
* :class:`~tianshou.policy.ImitationPolicy` Imitation Learning
* :class:`~tianshou.policy.BCQPolicy` `Batch-Constrained deep Q-Learning `_
* :class:`~tianshou.policy.CQLPolicy` `Conservative Q-Learning `_
* :class:`~tianshou.policy.TD3BCPolicy` `Twin Delayed DDPG with Behavior Cloning `_
* :class:`~tianshou.policy.DiscreteBCQPolicy` `Discrete Batch-Constrained deep Q-Learning `_
* :class:`~tianshou.policy.DiscreteCQLPolicy` `Discrete Conservative Q-Learning `_
* :class:`~tianshou.policy.DiscreteCRRPolicy` `Critic Regularized Regression `_
* :class:`~tianshou.policy.GAILPolicy` `Generative Adversarial Imitation Learning `_
* :class:`~tianshou.policy.PSRLPolicy` `Posterior Sampling Reinforcement Learning `_
* :class:`~tianshou.policy.ICMPolicy` `Intrinsic Curiosity Module `_
* :class:`~tianshou.data.PrioritizedReplayBuffer` `Prioritized Experience Replay `_
* :meth:`~tianshou.policy.BasePolicy.compute_episodic_return` `Generalized Advantage Estimator `_
* :class:`~tianshou.data.HERReplayBuffer` `Hindsight Experience Replay `_

Other features of Tianshou include:

* Elegant framework, using only ~3000 lines of code
* State-of-the-art `MuJoCo benchmark `_
* Support for vectorized environments (synchronous or asynchronous) for all algorithms: :ref:`parallel_sampling`
* Support for the super-fast vectorized environment `EnvPool `_ for all algorithms: :ref:`envpool_integration`
* Support for recurrent state representations in the actor network and critic network (RNN-style training for POMDPs): :ref:`rnn_training`
* Support for any type of environment state/action (e.g. a dict, a self-defined class, ...): :ref:`self_defined_env`
* Support for :ref:`customize_training`
* Support for n-step return estimation :meth:`~tianshou.policy.BasePolicy.compute_nstep_return` and prioritized experience replay :class:`~tianshou.data.PrioritizedReplayBuffer` for all Q-learning based algorithms; GAE, n-step and PER are very fast thanks to numba-jitted functions and vectorized numpy operations
* Support for :doc:`/01_tutorials/04_tictactoe`
* Support for both `TensorBoard `_ and `W&B `_ logging tools
* Support for multi-GPU training: :ref:`multi_gpu`
* Comprehensive `unit tests `_, including functional checking, RL pipeline checking, documentation checking, PEP8 code-style checking, and type checking

The Chinese documentation is available at `https://tianshou.readthedocs.io/zh/master/ `_.

Installation
------------

Tianshou is currently hosted on `PyPI `_ and `conda-forge `_. New releases
(and the current state of the master branch) will require Python >= 3.11.

You can simply install Tianshou from PyPI with the following command:

.. code-block:: bash

    $ pip install tianshou

If you use Anaconda or Miniconda, you can install Tianshou from conda-forge
with the following command:

.. code-block:: bash

    $ conda install tianshou -c conda-forge

You can also install the newest version from GitHub:

.. code-block:: bash

    $ pip install git+https://github.com/thu-ml/tianshou.git@master --upgrade

After installation, open your Python console and type

::

    import tianshou
    print(tianshou.__version__)

If no error occurs, you have successfully installed Tianshou.

Tianshou is still under development; you can also check out the documentation
for the stable version at `tianshou.readthedocs.io/en/stable/ `_.

Indices and tables
------------------

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
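As an aside on the Generalized Advantage Estimator mentioned in the feature
list above: the recurrence that :meth:`~tianshou.policy.BasePolicy.compute_episodic_return`
computes can be sketched in a few lines of plain Python. This is **not**
Tianshou's actual implementation (which is vectorized with numpy and
numba-jitted); the function name and argument layout here are hypothetical,
chosen only to make the recurrence readable.

.. code-block:: python

    def gae_advantages(rewards, values, next_values, dones, gamma=0.99, lam=0.95):
        """Minimal GAE sketch (hypothetical helper, not Tianshou's API).

        Implements the standard backward recurrence:
            delta_t = r_t + gamma * V(s_{t+1}) * (1 - done_t) - V(s_t)
            A_t     = delta_t + gamma * lam * (1 - done_t) * A_{t+1}
        """
        advantages = [0.0] * len(rewards)
        next_adv = 0.0
        # Walk the trajectory backwards so A_{t+1} is available at step t.
        for t in reversed(range(len(rewards))):
            mask = 1.0 - dones[t]  # zero out bootstrapping at episode ends
            delta = rewards[t] + gamma * next_values[t] * mask - values[t]
            next_adv = delta + gamma * lam * mask * next_adv
            advantages[t] = next_adv
        return advantages

Setting ``lam=1`` recovers plain Monte-Carlo advantages, while ``lam=0``
reduces to the one-step TD error; the ``lam`` knob trades bias against
variance exactly as in the GAE paper linked above.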