# Atari
The sampling speed is ~3000 env steps per second (~12000 Atari frames per second, in fact, since we use frame_stack=4) in the normal mode (using a CNN policy and a collector, while also storing data into the buffer). The main bottleneck is training the convolutional neural network.
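For reference, here is a minimal sketch of this "normal mode" pipeline. Class and argument names follow the Tianshou 0.4.x API; the network is a stand-in for the Nature-DQN CNN these scripts use, and `make_env` is the wrapper factory sketched in the next code block.

```python
import torch
from torch import nn

from tianshou.data import Collector, VectorReplayBuffer
from tianshou.env import ShmemVectorEnv
from tianshou.policy import DQNPolicy


class CNN(nn.Module):
    """Nature-DQN style network over 4 stacked 84x84 grayscale frames."""

    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, obs, state=None, info={}):
        obs = torch.as_tensor(obs, dtype=torch.float32)
        return self.net(obs), state  # Tianshou expects (logits, hidden_state)


train_envs = ShmemVectorEnv([make_env for _ in range(8)])  # make_env: see below
net = CNN(make_env().action_space.n)
policy = DQNPolicy(
    net, torch.optim.Adam(net.parameters(), lr=1e-4),
    discount_factor=0.99, estimation_step=3, target_update_freq=500,
)
buf = VectorReplayBuffer(100_000, buffer_num=len(train_envs))
collector = Collector(policy, train_envs, buf, exploration_noise=True)
collector.collect(n_step=3200)  # roughly one second of sampling at the speed quoted above
```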
The Atari env seed cannot be fixed (see the discussion [here](https://github.com/openai/gym/issues/1478)), but this is not a big issue, since results on Atari are quite similar across runs anyway.
Environment wrappers are crucial: without them, the agent cannot perform well enough on Atari games. Many existing RL codebases use the [OpenAI wrapper](https://github.com/openai/baselines/blob/master/baselines/common/atari_wrappers.py), but it is not the original DeepMind version ([related issue](https://github.com/openai/baselines/issues/240)). Dopamine has a different [wrapper](https://github.com/google/dopamine/blob/master/dopamine/discrete_domains/atari_lib.py), but unfortunately it does not work well in our codebase.
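For illustration, a DeepMind-style chain can be assembled from gym's built-in wrappers. This is only a sketch; the training scripts below rely on their own wrapper module with more complete handling (no-op starts, episodic life, fire reset, reward clipping).

```python
import gym
from gym.wrappers import AtariPreprocessing, FrameStack


def make_env(task: str = "PongNoFrameskip-v4"):
    env = gym.make(task)
    # grayscale, resize to 84x84, frame-skip 4 with max-pooling over frames
    env = AtariPreprocessing(env, frame_skip=4, screen_size=84, grayscale_obs=True)
    return FrameStack(env, 4)  # observation shape becomes (4, 84, 84)
```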
# DQN (single run)
One epoch here equals 100,000 env steps, so 100 epochs correspond to 10M env steps.
| task | best reward | reward curve | parameters | time cost |
| --------------------------- | ----------- | ------------------------------------- | ------------------------------------------------------------ | ------------------- |
| PongNoFrameskip-v4 | 20 |  | `python3 atari_dqn.py --task "PongNoFrameskip-v4" --batch-size 64` | ~30 min (~15 epochs) |
| BreakoutNoFrameskip-v4 | 316 |  | `python3 atari_dqn.py --task "BreakoutNoFrameskip-v4" --test-num 100` | 3~4h (100 epochs) |
| EnduroNoFrameskip-v4 | 670 |  | `python3 atari_dqn.py --task "EnduroNoFrameskip-v4" --test-num 100` | 3~4h (100 epochs) |
| QbertNoFrameskip-v4 | 7307 |  | `python3 atari_dqn.py --task "QbertNoFrameskip-v4" --test-num 100` | 3~4h (100 epochs) |
| MsPacmanNoFrameskip-v4 | 2107 |  | `python3 atari_dqn.py --task "MsPacmanNoFrameskip-v4" --test-num 100` | 3~4h (100 epochs) |
| SeaquestNoFrameskip-v4 | 2088 |  | `python3 atari_dqn.py --task "SeaquestNoFrameskip-v4" --test-num 100` | 3~4h (100 epochs) |
| SpaceInvadersNoFrameskip-v4 | 812.2 |  | `python3 atari_dqn.py --task "SpaceInvadersNoFrameskip-v4" --test-num 100` | 3~4h (100 epochs) |
Note: `eps_train_final` and `eps_test` in the original DQN paper are 0.1 and 0.01, but [some works](https://github.com/google/dopamine/tree/master/baselines) have found that a smaller eps improves performance. Also, a larger batch size (say 64 instead of 32) speeds up convergence but slows down the training speed.
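As a sketch of how such an eps schedule plugs into Tianshou's `offpolicy_trainer` (the breakpoints and values below are illustrative, not necessarily the defaults of `atari_dqn.py`):

```python
eps_train, eps_train_final, eps_test = 1.0, 0.005, 0.001


def train_fn(epoch: int, env_step: int):
    # linear decay over the first 1M env steps, then hold at the final value
    if env_step <= 1_000_000:
        eps = eps_train - env_step / 1_000_000 * (eps_train - eps_train_final)
    else:
        eps = eps_train_final
    policy.set_eps(eps)  # policy: the DQNPolicy from the sketch above


def test_fn(epoch: int, env_step: int):
    policy.set_eps(eps_test)
```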
We haven't tuned these results to the best, so have fun playing with the hyperparameters!
# C51 (single run)
One epoch here equals 100,000 env steps, so 100 epochs correspond to 10M env steps.
| task | best reward | reward curve | parameters |
| --------------------------- | ----------- | ------------------------------------- | ------------------------------------------------------------ |
| PongNoFrameskip-v4 | 20 |  | `python3 atari_c51.py --task "PongNoFrameskip-v4" --batch-size 64` |
| BreakoutNoFrameskip-v4 | 536.6 |  | `python3 atari_c51.py --task "BreakoutNoFrameskip-v4" --n-step 1` |
| EnduroNoFrameskip-v4 | 1032 |  | `python3 atari_c51.py --task "EnduroNoFrameskip-v4"` |
| QbertNoFrameskip-v4 | 16245 |  | `python3 atari_c51.py --task "QbertNoFrameskip-v4"` |
| MsPacmanNoFrameskip-v4 | 3133 |  | `python3 atari_c51.py --task "MsPacmanNoFrameskip-v4"` |
| SeaquestNoFrameskip-v4 | 6226 |  | `python3 atari_c51.py --task "SeaquestNoFrameskip-v4"` |
| SpaceInvadersNoFrameskip-v4 | 988.5 |  | `python3 atari_c51.py --task "SpaceInvadersNoFrameskip-v4"` |
Note: The selection of `n_step` is based on Figure 6 in the [Rainbow](https://arxiv.org/abs/1710.02298) paper.
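For context, `n_step` sets the length of the multi-step bootstrap target; in the Q-learning case it is

$$R_t^{(n)} = \sum_{k=0}^{n-1} \gamma^k r_{t+k} + \gamma^n \max_{a'} Q_{\bar\theta}(s_{t+n}, a'),$$

so a larger `n` propagates reward faster at the cost of a higher-variance, more off-policy target. Figure 6 of the Rainbow paper finds n = 3 a good default across games, which is presumably why only Breakout overrides it with `--n-step 1` above.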
# QRDQN (single run)
One epoch here equals 100,000 env steps, so 100 epochs correspond to 10M env steps.
| task | best reward | reward curve | parameters |
| --------------------------- | ----------- | ------------------------------------- | ------------------------------------------------------------ |
| PongNoFrameskip-v4 | 20 |  | `python3 atari_qrdqn.py --task "PongNoFrameskip-v4" --batch-size 64` |
| BreakoutNoFrameskip-v4 | 409.2 |  | `python3 atari_qrdqn.py --task "BreakoutNoFrameskip-v4" --n-step 1` |
| EnduroNoFrameskip-v4 | 1055.9 |  | `python3 atari_qrdqn.py --task "EnduroNoFrameskip-v4"` |
| QbertNoFrameskip-v4 | 14990 |  | `python3 atari_qrdqn.py --task "QbertNoFrameskip-v4"` |
| MsPacmanNoFrameskip-v4 | 2886 |  | `python3 atari_qrdqn.py --task "MsPacmanNoFrameskip-v4"` |
| SeaquestNoFrameskip-v4 | 5676 |  | `python3 atari_qrdqn.py --task "SeaquestNoFrameskip-v4"` |
| SpaceInvadersNoFrameskip-v4 | 938 |  | `python3 atari_qrdqn.py --task "SpaceInvadersNoFrameskip-v4"` |
# IQN (single run)
One epoch here equals 100,000 env steps, so 100 epochs correspond to 10M env steps.
| task | best reward | reward curve | parameters |
| --------------------------- | ----------- | ------------------------------------- | ------------------------------------------------------------ |
| PongNoFrameskip-v4 | 20.3 |  | `python3 atari_iqn.py --task "PongNoFrameskip-v4" --batch-size 64` |
| BreakoutNoFrameskip-v4 | 496.7 |  | `python3 atari_iqn.py --task "BreakoutNoFrameskip-v4" --n-step 1` |
| EnduroNoFrameskip-v4 | 1545 |  | `python3 atari_iqn.py --task "EnduroNoFrameskip-v4"` |
| QbertNoFrameskip-v4 | 15342.5 |  | `python3 atari_iqn.py --task "QbertNoFrameskip-v4"` |
| MsPacmanNoFrameskip-v4 | 2915 |  | `python3 atari_iqn.py --task "MsPacmanNoFrameskip-v4"` |
| SeaquestNoFrameskip-v4 | 4874 |  | `python3 atari_iqn.py --task "SeaquestNoFrameskip-v4"` |
| SpaceInvadersNoFrameskip-v4 | 1498.5 |  | `python3 atari_iqn.py --task "SpaceInvadersNoFrameskip-v4"` |
# FQF (single run)
One epoch here equals 100,000 env steps, so 100 epochs correspond to 10M env steps.
| task | best reward | reward curve | parameters |
| --------------------------- | ----------- | ------------------------------------- | ------------------------------------------------------------ |
| PongNoFrameskip-v4 | 20.7 |  | `python3 atari_fqf.py --task "PongNoFrameskip-v4" --batch-size 64` |
| BreakoutNoFrameskip-v4 | 517.3 |  | `python3 atari_fqf.py --task "BreakoutNoFrameskip-v4" --n-step 1` |
| EnduroNoFrameskip-v4 | 2240.5 |  | `python3 atari_fqf.py --task "EnduroNoFrameskip-v4"` |
| QbertNoFrameskip-v4 | 16172.5 |  | `python3 atari_fqf.py --task "QbertNoFrameskip-v4"` |
| MsPacmanNoFrameskip-v4 | 2429 |  | `python3 atari_fqf.py --task "MsPacmanNoFrameskip-v4"` |
| SeaquestNoFrameskip-v4 | 10775 |  | `python3 atari_fqf.py --task "SeaquestNoFrameskip-v4"` |
| SpaceInvadersNoFrameskip-v4 | 2482 |  | `python3 atari_fqf.py --task "SpaceInvadersNoFrameskip-v4"` |
# Rainbow (single run)
One epoch here equals 100,000 env steps, so 100 epochs correspond to 10M env steps.
| task | best reward | reward curve | parameters |
| --------------------------- | ----------- | ------------------------------------- | ------------------------------------------------------------ |
| PongNoFrameskip-v4 | 21 |  | `python3 atari_rainbow.py --task "PongNoFrameskip-v4" --batch-size 64` |
| BreakoutNoFrameskip-v4 | 684.6 |  | `python3 atari_rainbow.py --task "BreakoutNoFrameskip-v4" --n-step 1` |
| EnduroNoFrameskip-v4 | 1625.9 |  | `python3 atari_rainbow.py --task "EnduroNoFrameskip-v4"` |
| QbertNoFrameskip-v4 | 16192.5 |  | `python3 atari_rainbow.py --task "QbertNoFrameskip-v4"` |
| MsPacmanNoFrameskip-v4 | 3101 |  | `python3 atari_rainbow.py --task "MsPacmanNoFrameskip-v4"` |
| SeaquestNoFrameskip-v4 | 2126 |  | `python3 atari_rainbow.py --task "SeaquestNoFrameskip-v4"` |
| SpaceInvadersNoFrameskip-v4 | 1794.5 |  | `python3 atari_rainbow.py --task "SpaceInvadersNoFrameskip-v4"` |