Tianshou/docs/02_notebooks/L0_overview.ipynb
bordeauxred 4f65b131aa
Feat/refactor collector (#1063)
Closes: #1058 

### Api Extensions
- Batch received two new methods: `to_dict` and `to_list_of_dicts` (see
the sketch after this list).
#1063
- `Collector`s can now be closed, and their reset is more granular.
#1063
- Trainers can control whether collectors should be reset prior to
training. #1063
- Convenience constructor for `CollectStats` called
`with_autogenerated_stats`. #1063
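
A minimal usage sketch of the two new `Batch` methods (outputs in the comments are indicative, not exact):

```python
from tianshou.data import Batch

b = Batch(a=[1, 2], b=[3, 4])
b.to_dict()           # a plain (possibly nested) dict of arrays
b.to_list_of_dicts()  # one dict per entry along the first dimension
```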

### Internal Improvements
- `Collector`s rely less on state, the few stateful things are stored
explicitly instead of through a `.data` attribute. #1063
- Introduced a first iteration of a naming convention for vars in
`Collector`s. #1063
- Generally improved readability of Collector code and associated tests
(still quite some way to go). #1063
- Improved typing for `exploration_noise` and within Collector. #1063

### Breaking Changes

- Removed `.data` attribute from `Collector` and its child classes.
#1063
- Collectors no longer reset the environment on initialization. Instead,
the user might have to call `reset` explicitly or pass
`reset_before_collect=True` (see the sketch below). #1063
- VectorEnvs now return an array of info-dicts on reset instead of a
list. #1063
- Fixed `iter(Batch(...))`, which now behaves the same way as
`Batch(...).__iter__()`. Can be considered a bugfix. #1063
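
A minimal sketch of the new collector lifecycle (`policy` stands for any
`BasePolicy` instance, e.g. the one built in the notebook below):

```python
import gymnasium as gym

from tianshou.data import Collector, VectorReplayBuffer
from tianshou.env import DummyVectorEnv

envs = DummyVectorEnv([lambda: gym.make("CartPole-v1") for _ in range(2)])
collector = Collector(policy, envs, VectorReplayBuffer(2000, len(envs)))

# envs are no longer reset on construction; either reset explicitly ...
collector.reset()
collector.collect(n_step=100)

# ... or ask collect() to do the reset:
collector.collect(n_step=100, reset_before_collect=True)

collector.close()  # collectors can now be closed
```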

---------

Co-authored-by: Michael Panchenko <m.panchenko@appliedai.de>
2024-03-28 18:02:31 +01:00


{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"editable": true,
"id": "r7aE6Rq3cAEE",
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"# Overview\n",
"Before we get started, we must first install Tianshou's library and Gym environment by running the commands below. This tutorials will always keep up with the latest version of Tianshou since they also serve as a test for the latest version. If you are using an older version of Tianshou, please refer to the [documentation](https://tianshou.readthedocs.io/en/latest/) of your version.\n"
]
},
{
"cell_type": "code",
"outputs": [],
"source": [
"# !pip install tianshou gym"
],
"metadata": {
"collapsed": false
},
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "1_mLTSEIcY2c"
},
"source": [
"## Run the code"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IcFNmCjYeIIU"
},
"source": [
"Below is a short script that use a certain DRL algorithm (PPO) to solve the classic CartPole-v1\n",
"problem in Gym. Simply run it and **don't worry** if you can't understand the code very well. That is\n",
"exactly what this tutorial is for.\n",
"\n",
"If the script ends normally, you will see the evaluation result printed out before the first\n",
"epoch is finished."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": [
"hide-cell",
"remove-output"
]
},
"outputs": [],
"source": [
"%%capture\n",
"\n",
"import gymnasium as gym\n",
"import torch\n",
"\n",
"from tianshou.data import Collector, VectorReplayBuffer\n",
"from tianshou.env import DummyVectorEnv\n",
"from tianshou.policy import BasePolicy, PPOPolicy\n",
"from tianshou.trainer import OnpolicyTrainer\n",
"from tianshou.utils.net.common import ActorCritic, Net\n",
"from tianshou.utils.net.discrete import Actor, Critic\n",
"\n",
"device = \"cuda\" if torch.cuda.is_available() else \"cpu\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"hide-output"
]
},
"outputs": [],
"source": [
"# environments\n",
"env = gym.make(\"CartPole-v1\")\n",
"train_envs = DummyVectorEnv([lambda: gym.make(\"CartPole-v1\") for _ in range(20)])\n",
"test_envs = DummyVectorEnv([lambda: gym.make(\"CartPole-v1\") for _ in range(10)])\n",
"\n",
"# model & optimizer\n",
"assert env.observation_space.shape is not None # for mypy\n",
"net = Net(state_shape=env.observation_space.shape, hidden_sizes=[64, 64], device=device)\n",
"\n",
"assert isinstance(env.action_space, gym.spaces.Discrete) # for mypy\n",
"actor = Actor(preprocess_net=net, action_shape=env.action_space.n, device=device).to(device)\n",
"critic = Critic(preprocess_net=net, device=device).to(device)\n",
"actor_critic = ActorCritic(actor, critic)\n",
"optim = torch.optim.Adam(actor_critic.parameters(), lr=0.0003)\n",
"\n",
"# PPO policy\n",
"dist = torch.distributions.Categorical\n",
"policy: BasePolicy\n",
"policy = PPOPolicy(\n",
" actor=actor,\n",
" critic=critic,\n",
" optim=optim,\n",
" dist_fn=dist,\n",
" action_space=env.action_space,\n",
" action_scaling=False,\n",
")\n",
"\n",
"# collector\n",
"train_collector = Collector(policy, train_envs, VectorReplayBuffer(20000, len(train_envs)))\n",
"test_collector = Collector(policy, test_envs)\n",
"\n",
"# trainer\n",
"train_result = OnpolicyTrainer(\n",
" policy=policy,\n",
" batch_size=256,\n",
" train_collector=train_collector,\n",
" test_collector=test_collector,\n",
" max_epoch=10,\n",
" step_per_epoch=50000,\n",
" repeat_per_collect=10,\n",
" episode_per_test=10,\n",
" step_per_collect=2000,\n",
" stop_fn=lambda mean_reward: mean_reward >= 195,\n",
").run()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"outputs": [],
"source": [
"train_result.pprint_asdict()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "G9YEQptYvCgx",
"outputId": "2a9b5b22-be50-4bb7-ae93-af7e65e7442a"
},
"outputs": [],
"source": [
"# Let's watch its performance!\n",
"policy.eval()\n",
"eval_result = test_collector.collect(n_episode=3, render=False)\n",
"print(f\"Final reward: {eval_result.returns.mean()}, length: {eval_result.lens.mean()}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xFYlcPo8fpPU"
},
"source": [
"## Tutorial Introduction\n",
"\n",
"A common DRL experiment as is shown above may require many components to work together. The agent, the\n",
"environment (possibly parallelized ones), the replay buffer and the trainer all work together to complete a\n",
"training task.\n",
"\n",
"<div align=center>\n",
"<img src=\"https://tianshou.readthedocs.io/en/master/_images/pipeline.png\">\n",
"\n",
"</div>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kV_uOyimj-bk"
},
"source": [
"In Tianshou, all of these main components are factored out as different building blocks, which you\n",
"can use to create your own algorithm and finish your own experiment.\n",
"\n",
"Building blocks may include:\n",
"- Batch\n",
"- Replay Buffer\n",
"- Vectorized Environment Wrapper\n",
"- Policy (the agent and the training algorithm)\n",
"- Data Collector\n",
"- Trainer\n",
"- Logger\n",
"\n",
"\n",
"These notebooks tutorials will guide you through all the modules one by one."
]
},
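{
"cell_type": "markdown",
"metadata": {},
"source": [
"A rough orientation (a sketch, not a complete experiment): the cell below shows where each building block lives in the library. The exact classes you choose (e.g. which vectorized env, buffer, or trainer) depend on your experiment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Where each building block lives -- a sketch, not a complete experiment\n",
"from tianshou.data import Batch  # Batch: Tianshou's internal data structure\n",
"from tianshou.data import Collector, ReplayBuffer  # Data Collector, Replay Buffer\n",
"from tianshou.env import DummyVectorEnv  # Vectorized Environment Wrapper\n",
"from tianshou.policy import BasePolicy  # Policy (agent + training algorithm)\n",
"from tianshou.trainer import OnpolicyTrainer  # Trainer\n",
"from tianshou.utils import TensorboardLogger  # Logger"
]
},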
{
"cell_type": "markdown",
"metadata": {
"id": "S0mNKwH9i6Ek"
},
"source": [
"## Further reading"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "M3NPSUnAov4L"
},
"source": [
"### What if I am not familiar with the PPO algorithm itself?\n",
"As for the DRL algorithms themselves, we will refer you to the [Spinning up documentation](https://spinningup.openai.com/en/latest/algorithms/ppo.html), where they provide\n",
"plenty of resources and guides if you want to study the DRL algorithms. In Tianshou's tutorials, we will\n",
"focus on the usages of different modules, but not the algorithms themselves."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 4
}