Tianshou/docs/02_notebooks/L5_Collector.ipynb
Feat/refactor collector (#1063)
Closes: #1058 

### API Extensions
- Batch received two new methods: `to_dict` and `to_list_of_dicts` (see
the sketch after this list). #1063
- `Collector`s can now be closed, and their reset is more granular.
#1063
- Trainers can control whether collectors should be reset prior to
training. #1063
- Convenience constructor for `CollectStats` called
`with_autogenerated_stats`. #1063
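
For illustration, a hedged sketch of the new `Batch` helpers (the exact
return types may differ slightly from this sketch):

```python
from tianshou.data import Batch

b = Batch(a=[1, 2], b=[3, 4])
b.to_dict()           # the batch contents as a (possibly nested) dict
b.to_list_of_dicts()  # roughly [{"a": 1, "b": 3}, {"a": 2, "b": 4}]
```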

### Internal Improvements
- `Collector`s rely less on state, the few stateful things are stored
explicitly instead of through a `.data` attribute. #1063
- Introduced a first iteration of a naming convention for vars in
`Collector`s. #1063
- Generally improved readability of Collector code and associated tests
(still quite some way to go). #1063
- Improved typing for `exploration_noise` and within Collector. #1063

### Breaking Changes

- Removed `.data` attribute from `Collector` and its child classes.
#1063
- Collectors no longer reset the environment on initialization. Instead,
the user might have to call `reset` explicitly or pass
`reset_before_collect=True` (see the sketch after this list). #1063
- VectorEnvs now return an array of info-dicts on reset instead of a
list. #1063
- Fixed `iter(Batch(...))`, which now behaves the same way as
`Batch(...).__iter__()`. Can be considered a bugfix. #1063
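
A minimal sketch of the new reset behavior (`policy` and `envs` are
placeholders for your own objects):

```python
from tianshou.data import Collector

collector = Collector(policy, envs)  # no longer resets the environments

# Either reset explicitly before collecting ...
collector.reset()
collector.collect(n_step=50)

# ... or ask collect() to do it in one call:
collector.collect(n_step=50, reset_before_collect=True)
```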

---------

Co-authored-by: Michael Panchenko <m.panchenko@appliedai.de>

{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "M98bqxdMsTXK"
},
"source": [
"# Collector\n",
"From its literal meaning, we can easily know that the Collector in Tianshou is used to collect training data. More specifically, the Collector controls the interaction between Policy (agent) and the environment. It also helps save the interaction data into the ReplayBuffer and returns episode statistics.\n",
"\n",
"<center>\n",
"<img src=../_static/images/structure.svg></img>\n",
"</center>\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OX5cayLv4Ziu"
},
"source": [
"## Usages\n",
"Collector can be used both for training (data collecting) and evaluation in Tianshou."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Z6XKbj28u8Ze"
},
"source": [
"### Policy evaluation\n",
"We need to evaluate our trained policy from time to time in DRL experiments. Collector can help us with this.\n",
"\n",
"First we have to initialize a Collector with an (vectorized) environment and a given policy (agent)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"editable": true,
"id": "w8t9ubO7u69J",
"slideshow": {
"slide_type": ""
},
"tags": [
"hide-cell",
"remove-output"
]
},
"outputs": [],
"source": [
"%%capture\n",
"\n",
"import gymnasium as gym\n",
"import torch\n",
"\n",
"from tianshou.data import Collector, VectorReplayBuffer\n",
"from tianshou.env import DummyVectorEnv\n",
"from tianshou.policy import BasePolicy, PGPolicy\n",
"from tianshou.utils.net.common import Net\n",
"from tianshou.utils.net.discrete import Actor"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"env = gym.make(\"CartPole-v1\")\n",
"test_envs = DummyVectorEnv([lambda: gym.make(\"CartPole-v1\") for _ in range(2)])\n",
"\n",
"# model\n",
"assert env.observation_space.shape is not None # for mypy\n",
"net = Net(\n",
" env.observation_space.shape,\n",
" hidden_sizes=[\n",
" 16,\n",
" ],\n",
")\n",
"\n",
"assert isinstance(env.action_space, gym.spaces.Discrete) # for mypy\n",
"actor = Actor(net, env.action_space.n)\n",
"optim = torch.optim.Adam(actor.parameters(), lr=0.0003)\n",
"\n",
"policy: BasePolicy\n",
"policy = PGPolicy(\n",
" actor=actor,\n",
" optim=optim,\n",
" dist_fn=torch.distributions.Categorical,\n",
" action_space=env.action_space,\n",
" action_scaling=False,\n",
")\n",
"test_collector = Collector(policy, test_envs)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wmt8vuwpzQdR"
},
"source": [
"Now we would like to collect 9 episodes of data to test how our initialized Policy performs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "9SuT6MClyjyH",
"outputId": "1e48f13b-c1fe-4fc2-ca1b-669485efdcae"
},
"outputs": [],
"source": [
"collect_result = test_collector.collect(reset_before_collect=True, n_episode=9)\n",
"\n",
"collect_result.pprint_asdict()"
]
},
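{
"cell_type": "markdown",
"metadata": {},
"source": [
"The returned `CollectStats` object can also be inspected programmatically instead of pretty-printed. A hedged sketch (field names such as `returns_stat` are assumptions based on this Tianshou version and may differ in others):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Access individual statistics instead of printing the whole dict.\n",
"print(collect_result.n_collected_episodes)\n",
"print(collect_result.returns_stat.mean)  # assumed field: summary stats of episode returns"
]
},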
{
"cell_type": "markdown",
"metadata": {
"id": "zX9AQY0M0R3C"
},
"source": [
"Now we wonder what is the performance of a random policy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "UEcs8P8P0RLt",
"outputId": "85f02f9d-b79b-48b2-99c6-36a1602f0884"
},
"outputs": [],
"source": [
"# Reset the collector\n",
"collect_result = test_collector.collect(reset_before_collect=True, n_episode=9, random=True)\n",
"\n",
"collect_result.pprint_asdict()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sKQRTiG10ljU"
},
"source": [
"It seems like an initialized policy performs even worse than a random policy without any training."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8RKmHIoG1A1k"
},
"source": [
"### Data Collecting\n",
"Data collecting is mostly used during training, when we need to store the collected data in a ReplayBuffer."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"editable": true,
"id": "CB9XB9bF1YPC",
"slideshow": {
"slide_type": ""
},
"tags": []
},
"outputs": [],
"source": [
"train_env_num = 4\n",
"buffer_size = 100\n",
"train_envs = DummyVectorEnv([lambda: gym.make(\"CartPole-v1\") for _ in range(train_env_num)])\n",
"replayBuffer = VectorReplayBuffer(buffer_size, train_env_num)\n",
"\n",
"train_collector = Collector(policy, train_envs, replayBuffer)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rWKDazA42IUQ"
},
"source": [
"Now we can collect 50 steps of data, which will be automatically saved in the replay buffer. You can still choose to collect a certain number of episodes rather than steps. Try it yourself."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "-fUtQOnM2Yi1",
"outputId": "dceee987-433e-4b75-ed9e-823c20a9e1c2"
},
"outputs": [],
"source": [
"train_collector.reset()\n",
"replayBuffer.reset()\n",
"\n",
"print(f\"Replay buffer before collecting is empty, and has length={len(replayBuffer)} \\n\")\n",
"n_step = 50\n",
"collect_result = train_collector.collect(n_step=n_step)\n",
"print(\n",
" f\"Replay buffer after collecting {n_step} steps has length={len(replayBuffer)}.\\n\"\n",
" f\"This may exceed n_step when it is not a multiple of train_env_num because of vectorization.\\n\",\n",
")\n",
"collect_result.pprint_asdict()"
]
},
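{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned above, here is a short sketch of collecting by episode count rather than step count (the episode number here is arbitrary):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Collect 4 full episodes instead of a fixed number of steps.\n",
"# With 4 vectorized environments, episodes are gathered in parallel.\n",
"collect_result = train_collector.collect(n_episode=4)\n",
"\n",
"collect_result.pprint_asdict()"
]
},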
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Sample some data from the replay buffer."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"replayBuffer.sample(10)"
]
},
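{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `sample` returns both the sampled data and the corresponding indices into the buffer, so the result can be unpacked explicitly. A minimal sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sample() returns a (Batch, indices) pair; the indices point back into the buffer.\n",
"sampled_batch, sampled_indices = replayBuffer.sample(10)\n",
"print(sampled_batch.obs.shape)\n",
"print(sampled_indices)"
]
},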
{
"cell_type": "markdown",
"metadata": {
"id": "8NP7lOBU3-VS"
},
"source": [
"## Further Reading\n",
"The above collector actually collects 52 data at a time because 52 % 4 = 0. There is one asynchronous collector which allows you collect exactly 50 steps. Check the [documentation](https://tianshou.readthedocs.io/en/master/api/tianshou.data.html#asynccollector) for details."
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 4
}