.. Tianshou documentation master file, created by
   sphinx-quickstart on Sat Mar 28 15:58:19 2020.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to Tianshou!
====================

**Tianshou** (天授) is a reinforcement learning platform based on pure PyTorch. Unlike many existing reinforcement learning libraries, which are mainly based on TensorFlow, rely on many nested classes, expose unfriendly APIs, or run slowly, Tianshou provides a fast framework and a Pythonic API for building deep reinforcement learning agents. The supported interface algorithms include:

* :class:`~tianshou.policy.PGPolicy` `Policy Gradient <https://papers.nips.cc/paper/1713-policy-gradient-methods-for-reinforcement-learning-with-function-approximation.pdf>`_
* :class:`~tianshou.policy.DQNPolicy` `Deep Q-Network <https://arxiv.org/pdf/1312.5602.pdf>`_
* :class:`~tianshou.policy.DQNPolicy` `Double DQN <https://arxiv.org/pdf/1509.06461.pdf>`_
* :class:`~tianshou.policy.DQNPolicy` `Dueling DQN <https://arxiv.org/pdf/1511.06581.pdf>`_
* :class:`~tianshou.policy.C51Policy` `C51 <https://arxiv.org/pdf/1707.06887.pdf>`_
* :class:`~tianshou.policy.A2CPolicy` `Advantage Actor-Critic <https://openai.com/blog/baselines-acktr-a2c/>`_
* :class:`~tianshou.policy.DDPGPolicy` `Deep Deterministic Policy Gradient <https://arxiv.org/pdf/1509.02971.pdf>`_
* :class:`~tianshou.policy.PPOPolicy` `Proximal Policy Optimization <https://arxiv.org/pdf/1707.06347.pdf>`_
* :class:`~tianshou.policy.TD3Policy` `Twin Delayed DDPG <https://arxiv.org/pdf/1802.09477.pdf>`_
* :class:`~tianshou.policy.SACPolicy` `Soft Actor-Critic <https://arxiv.org/pdf/1801.01290.pdf>`_
* :class:`~tianshou.policy.DiscreteSACPolicy` `Discrete Soft Actor-Critic <https://arxiv.org/pdf/1910.07207.pdf>`_
* :class:`~tianshou.policy.ImitationPolicy` Imitation Learning
* :class:`~tianshou.policy.DiscreteBCQPolicy` `Discrete Batch-Constrained deep Q-Learning <https://arxiv.org/pdf/1910.01708.pdf>`_
* :class:`~tianshou.policy.PSRLPolicy` `Posterior Sampling Reinforcement Learning <https://arxiv.org/pdf/1306.0940.pdf>`_
* :class:`~tianshou.data.PrioritizedReplayBuffer` `Prioritized Experience Replay <https://arxiv.org/pdf/1511.05952.pdf>`_
* :meth:`~tianshou.policy.BasePolicy.compute_episodic_return` `Generalized Advantage Estimator <https://arxiv.org/pdf/1506.02438.pdf>`_

Here are Tianshou's other features:

* Elegant framework, using only ~2000 lines of code
* Support parallel environment simulation (synchronous or asynchronous) for all algorithms: :ref:`parallel_sampling`
* Support recurrent state representation in actor networks and critic networks (RNN-style training for POMDP): :ref:`rnn_training`
* Support any type of environment state/action (e.g. a dict, a self-defined class, ...): :ref:`self_defined_env`
* Support :ref:`customize_training`
* Support n-step return estimation :meth:`~tianshou.policy.BasePolicy.compute_nstep_return` and prioritized experience replay :class:`~tianshou.data.PrioritizedReplayBuffer` for all Q-learning based algorithms; GAE, n-step and PER are very fast thanks to numba JIT compilation and vectorized numpy operations
* Support :doc:`/tutorials/tictactoe`
* Comprehensive `unit tests <https://github.com/thu-ml/tianshou/actions>`_, including functional checking, RL pipeline checking, documentation checking, PEP8 code-style checking, and type checking

The Chinese documentation is available at `https://tianshou.readthedocs.io/zh/latest/ <https://tianshou.readthedocs.io/zh/latest/>`_.

Installation
------------

Tianshou is currently hosted on `PyPI <https://pypi.org/project/tianshou/>`_ and `conda-forge <https://anaconda.org/conda-forge/tianshou>`_. It requires Python >= 3.6.

You can simply install Tianshou from PyPI with the following command:

.. code-block:: bash

    $ pip install tianshou

If you use Anaconda or Miniconda, you can install Tianshou from conda-forge with the following command:

.. code-block:: bash

    $ conda install -c conda-forge tianshou

You can also install the latest development version from GitHub:

.. code-block:: bash

    $ pip install git+https://github.com/thu-ml/tianshou.git@master --upgrade

After installation, open your Python console and type
::

    import tianshou
    print(tianshou.__version__)

If no error occurs, you have successfully installed Tianshou.
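To give a feel for the Pythonic API described above, here is a minimal DQN quick-start sketch in the spirit of :doc:`/tutorials/dqn`. Treat it as illustrative rather than definitive: trainer and buffer keyword names have changed across Tianshou releases (for example, newer versions rename ``collect_per_step`` to ``step_per_collect`` and pair vectorized collectors with ``VectorReplayBuffer``), so follow the tutorial that matches your installed version.

.. code-block:: python

    import gym
    import numpy as np
    import torch
    from torch import nn

    import tianshou as ts

    # Parallel environment simulation: 8 training and 4 test envs.
    train_envs = ts.env.DummyVectorEnv(
        [lambda: gym.make('CartPole-v0') for _ in range(8)])
    test_envs = ts.env.DummyVectorEnv(
        [lambda: gym.make('CartPole-v0') for _ in range(4)])

    env = gym.make('CartPole-v0')
    state_shape = env.observation_space.shape or env.observation_space.n
    action_shape = env.action_space.shape or env.action_space.n

    class QNet(nn.Module):
        """A plain PyTorch Q-network; Tianshou also ships ready-made
        networks under ``tianshou.utils.net``."""

        def __init__(self):
            super().__init__()
            self.model = nn.Sequential(
                nn.Linear(int(np.prod(state_shape)), 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, int(np.prod(action_shape))),
            )

        def forward(self, obs, state=None, info={}):
            if not isinstance(obs, torch.Tensor):
                obs = torch.tensor(obs, dtype=torch.float)
            # Tianshou policies expect a ``(logits, hidden_state)`` pair.
            return self.model(obs.reshape(obs.shape[0], -1)), state

    net = QNet()
    optim = torch.optim.Adam(net.parameters(), lr=1e-3)
    policy = ts.policy.DQNPolicy(net, optim, discount_factor=0.99,
                                 estimation_step=3, target_update_freq=320)

    # Collectors tie the policy to the (vectorized) environments.
    train_collector = ts.data.Collector(
        policy, train_envs, ts.data.ReplayBuffer(size=20000))
    test_collector = ts.data.Collector(policy, test_envs)

    result = ts.trainer.offpolicy_trainer(
        policy, train_collector, test_collector,
        max_epoch=10, step_per_epoch=1000, collect_per_step=10,
        episode_per_test=100, batch_size=64,
        train_fn=lambda epoch, env_step: policy.set_eps(0.1),
        test_fn=lambda epoch, env_step: policy.set_eps(0.05))
    print(f'Finished training! Use {result["duration"]}')

The same environment/collector/trainer pipeline carries over to the other algorithms; switching from DQN to, say, C51 mostly means constructing a different policy (with a matching network), while the surrounding code stays unchanged.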
Tianshou is still under development; you can also check out the documentation for the stable version at `tianshou.readthedocs.io/en/stable/ <https://tianshou.readthedocs.io/en/stable/>`_.

.. toctree::
   :maxdepth: 1
   :caption: Tutorials

   tutorials/dqn
   tutorials/concepts
   tutorials/batch
   tutorials/tictactoe
   tutorials/trick
   tutorials/cheatsheet

.. toctree::
   :maxdepth: 1
   :caption: API Docs

   api/tianshou.data
   api/tianshou.env
   api/tianshou.policy
   api/tianshou.trainer
   api/tianshou.exploration
   api/tianshou.utils

.. toctree::
   :maxdepth: 1
   :caption: Community

   contributing
   contributor

Indices and tables
------------------

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`