This PR implements BCQPolicy, which can be used to train an offline agent in continuous action-space environments. An experimental result on 'halfcheetah-expert-v1', a d4rl environment (for offline reinforcement learning), is provided.
Example usage is in examples/offline/offline_bcq.py.
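For context, continuous BCQ selects an action by sampling candidate actions from a generative (VAE) model trained on the offline data, perturbing them slightly, and taking the candidate with the highest critic value. Below is a minimal PyTorch sketch of that selection step; the callables `vae_decode`, `perturb`, and `critic`, and the candidate count, are illustrative assumptions rather than the exact interfaces of BCQPolicy.

```python
import torch


@torch.no_grad()
def bcq_select_action(obs, vae_decode, perturb, critic,
                      num_candidates=10, phi=0.05, max_action=1.0):
    """BCQ action selection sketch (names and shapes are assumptions).

    obs:        (B, obs_dim) observation batch
    vae_decode: obs -> candidate actions close to the behavior data
    perturb:    (obs, act) -> correction in [-1, 1] (e.g. a tanh output)
    critic:     (obs, act) -> Q-values, shape (B * num_candidates, 1)
    """
    batch_size = obs.shape[0]
    # Give every state `num_candidates` candidate actions.
    obs_rep = obs.repeat_interleave(num_candidates, dim=0)
    candidates = vae_decode(obs_rep)
    # A small learned perturbation keeps actions near the data distribution.
    candidates = candidates + phi * max_action * perturb(obs_rep, candidates)
    candidates = candidates.clamp(-max_action, max_action)
    # Pick, for each state, the candidate with the highest Q-value.
    q = critic(obs_rep, candidates).view(batch_size, num_candidates)
    best = q.argmax(dim=1)
    candidates = candidates.view(batch_size, num_candidates, -1)
    return candidates[torch.arange(batch_size), best]
```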
This is the PR for the QR-DQN algorithm: https://arxiv.org/abs/1710.10044
1. add QR-DQN policy in tianshou/policy/modelfree/qrdqn.py.
2. add QR-DQN net in examples/atari/atari_network.py.
3. add QR-DQN atari example in examples/atari/atari_qrdqn.py.
4. add QR-DQN statement in tianshou/policy/__init__.py.
5. add QR-DQN unit test in test/discrete/test_qrdqn.py.
6. add QR-DQN atari results in examples/atari/results/qrdqn/.
7. add compute_q_value in DQNPolicy and C51Policy to simplify the forward function.
8. move `with torch.no_grad():` from `_target_q` to BasePolicy.
Running `python3 atari_qrdqn.py --task "PongNoFrameskip-v4" --batch-size 64` gives best_result: 19.8 ± 0.40 in epoch 8.
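For context, the heart of QR-DQN is the quantile-regression Huber loss between the N predicted quantiles and the N target quantiles. A minimal PyTorch sketch of that loss follows; it illustrates the textbook form rather than the exact code added in tianshou/policy/modelfree/qrdqn.py.

```python
import torch


def quantile_huber_loss(pred, target, kappa=1.0):
    """Quantile-regression Huber loss (Dabney et al., 2017) -- sketch only.

    pred:   (B, N) predicted quantiles of Q(s, a) for the taken actions
    target: (B, N) target quantiles of r + gamma * Q(s', a*), no gradient
    """
    batch_size, n = pred.shape
    # Quantile midpoints tau_i = (2i + 1) / (2N) for the predicted quantiles.
    tau = (torch.arange(n, dtype=pred.dtype, device=pred.device) + 0.5) / n
    # Pairwise TD errors: u[b, i, j] = target[b, j] - pred[b, i].
    u = target.unsqueeze(1) - pred.unsqueeze(2)               # (B, N, N)
    abs_u = u.abs()
    huber = torch.where(abs_u <= kappa,
                        0.5 * u.pow(2),
                        kappa * (abs_u - 0.5 * kappa))
    # The asymmetric weight |tau_i - 1{u < 0}| turns the Huber loss into
    # quantile regression.
    weight = (tau.view(1, n, 1) - (u < 0).float()).abs()
    # Mean over target quantiles, sum over predicted quantiles, mean over batch.
    return (weight * huber).mean(dim=2).sum(dim=1).mean()
```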
This is the PR for the C51 algorithm: https://arxiv.org/abs/1707.06887
1. add C51 policy in tianshou/policy/modelfree/c51.py.
2. add C51 net in tianshou/utils/net/discrete.py.
3. add C51 atari example in examples/atari/atari_c51.py.
4. add C51 statement in tianshou/policy/__init__.py.
5. add C51 test in test/discrete/test_c51.py.
6. add C51 atari results in examples/atari/results/c51/.
Running `python3 atari_c51.py --task "PongNoFrameskip-v4" --batch-size 64` gives best_result: 20.50 ± 0.50 in epoch 9.
Running `python3 atari_c51.py --task "BreakoutNoFrameskip-v4" --n-step 1 --epoch 40` gives best_reward: 407.400000 ± 31.155096 in epoch 39.
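For context, the key step of C51 is projecting the Bellman-updated atoms r + gamma * z_i back onto the fixed support {z_1, ..., z_51}, splitting each probability mass between the two neighbouring atoms. The sketch below illustrates that projection under standard assumptions (default v_min/v_max, the index-nudging trick for atoms that land exactly on the support); it is not the exact code in tianshou/policy/modelfree/c51.py.

```python
import torch


def c51_project(next_dist, rew, done, gamma=0.99,
                v_min=-10.0, v_max=10.0, num_atoms=51):
    """Project the C51 target distribution onto the fixed support (sketch).

    next_dist: (B, num_atoms) probabilities for the target network's best action
    rew, done: (B,) reward and terminal flag
    returns:   (B, num_atoms) projected target probabilities
    """
    batch_size = rew.shape[0]
    delta_z = (v_max - v_min) / (num_atoms - 1)
    support = torch.linspace(v_min, v_max, num_atoms, device=next_dist.device)
    # Bellman update of every atom, clipped to [v_min, v_max].
    tz = rew.unsqueeze(1) + gamma * (1.0 - done.float().unsqueeze(1)) * support
    tz = tz.clamp(v_min, v_max)
    b = (tz - v_min) / delta_z                 # fractional atom index
    lower, upper = b.floor().long(), b.ceil().long()
    # If b lands exactly on an atom, lower == upper and both weights vanish;
    # nudge one index so no probability mass is dropped.
    lower[(upper > 0) & (lower == upper)] -= 1
    upper[(lower < num_atoms - 1) & (lower == upper)] += 1
    target = torch.zeros_like(next_dist)
    offset = (torch.arange(batch_size, device=next_dist.device) * num_atoms).unsqueeze(1)
    # Distribute each atom's probability between its two neighbouring atoms.
    target.view(-1).index_add_(0, (lower + offset).view(-1),
                               (next_dist * (upper.float() - b)).view(-1))
    target.view(-1).index_add_(0, (upper + offset).view(-1),
                               (next_dist * (b - lower.float())).view(-1))
    return target
```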
Adding an indicator of the learning phase (i.e. `self.learning`) makes it convenient to distinguish which state the policy is in.
Meanwhile, the meaning of `self.training` stays unambiguous: it marks the training stage.
Related issue: #211
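A minimal sketch of the idea, assuming the flag simply brackets the gradient-update call (the attribute name and the `update`/`learn` split below are illustrative, not the final API):

```python
import torch


class PolicySketch(torch.nn.Module):
    """Sketch only: `self.learning` is True exactly while a gradient update runs."""

    def __init__(self):
        super().__init__()
        self.learning = False
        # `self.training` (inherited from nn.Module) keeps its usual meaning:
        # training stage vs. evaluation stage, toggled by .train() / .eval().

    def learn(self, batch):
        raise NotImplementedError  # subclasses implement the actual gradient step

    def update(self, batch):
        self.learning = True
        try:
            return self.learn(batch)
        finally:
            self.learning = False  # never leave the flag set if learn() raises
```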
Others:
- fix a bug in DDQN: target_q should not be sampled from np.random.rand
- fix a bug in the DQN atari net: a ReLU was missing before the last layer (see the sketch after this list)
- fix a bug in collector timing
Co-authored-by: n+e <463003665@qq.com>
- fix a bug in MAPolicy: `buffer.rew = Batch()` doesn't change `buffer.rew` (thanks mypy)
- polish examples/box2d/bipedal_hardcore_sac.py
- several docs update
- format setup.py and bump version to 0.2.7
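Regarding the atari-net ReLU fix above, here is a minimal sketch of the intended head, assuming the standard Nature-DQN layout (the 512-unit width and the function name are illustrative, not necessarily what examples/atari/atari_network.py uses):

```python
import torch.nn as nn


def dqn_head(feature_dim: int, num_actions: int) -> nn.Sequential:
    # The fix is the ReLU between the hidden layer and the output layer, so the
    # final Linear maps a non-linear feature to Q-values instead of stacking
    # two purely linear layers.
    return nn.Sequential(
        nn.Linear(feature_dim, 512),
        nn.ReLU(inplace=True),
        nn.Linear(512, num_actions),
    )
```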
* make fields with empty Batch rather than None after reset
* dummy code
* remove dummy
* add reward_length argument for collector
* Improve Batch (#126)
* make sure the key type of Batch is string, and add unit tests
* add is_empty() function and unit tests
* enable cat of mixing dict and Batch, just like stack
* bugfix for reward_length
* add get_final_reward_fn argument to collector to deal with marl
* minor polish
* remove multibuf
* minor polish
* improve and implement Batch.cat_
* bugfix for buffer.sample with field impt_weight
* restore the usage of a.cat_(b)
* fix 2 bugs in batch and add corresponding unittest
* code fix for update
* update is_empty to recognize empty over empty; bugfix for len
* bugfix for update and add testcase
* add testcase of update
* make fields with empty Batch rather than None after reset
* dummy code
* remove dummy
* add reward_length argument for collector
* bugfix for reward_length
* add get_final_reward_fn argument to collector to deal with marl
* make sure the key type of Batch is string, and add unit tests
* add is_empty() function and unit tests
* enable cat of mixing dict and Batch, just like stack
* dummy code
* remove dummy
* add multi-agent example: tic-tac-toe
* move TicTacToeEnv to a separate file
* remove dummy MANet
* code refactor
* move tic-tac-toe example to test
* update doc with marl-example
* fix docs
* reduce the threshold
* revert
* update player id to start from 1 and change player to agent; keep coding
* add reward_length argument for collector
* Improve Batch (#128)
* minor polish
* improve and implement Batch.cat_
* bugfix for buffer.sample with field impt_weight
* restore the usage of a.cat_(b)
* fix 2 bugs in batch and add corresponding unittest
* code fix for update
* update is_empty to recognize empty over empty; bugfix for len
* bugfix for update and add testcase
* add testcase of update
* fix docs
* fix docs
* fix docs [ci skip]
* fix docs [ci skip]
Co-authored-by: Trinkle23897 <463003665@qq.com>
* refactor
* re-implement Batch.stack and add testcases
* add doc for Batch.stack
* reward_metric
* modify flag
* minor fix
* reuse _create_values and refactor stack_ & cat_
* fix pep8
* fix reward stat in collector
* fix stat of collector, simplify test/base/env.py
* fix docs
* minor fix
* raise exception for stacking with partial keys and axis!=0
* minor fix
* minor fix
* minor fix
* marl-examples
* add condense; bugfix for torch.Tensor; code refactor
* marl example can run now
* enable tic tac toe with larger board size and win-size
* add test dependency
* Fix padding of inconsistent keys with Batch.stack and Batch.cat (#130)
* re-implement Batch.stack and add testcases
* add doc for Batch.stack
* reuse _create_values and refactor stack_ & cat_
* fix pep8
* fix docs
* raise exception for stacking with partial keys and axis!=0
* minor fix
* minor fix
Co-authored-by: Trinkle23897 <463003665@qq.com>
* stash
* let agent learn to play as agent 2 which is harder
* code refactor
* Improve collector (#125)
* remove multibuf
* reward_metric
* make fields with empty Batch rather than None after reset
* many fixes and refactor
Co-authored-by: Trinkle23897 <463003665@qq.com>
* marl for tic-tac-toe and general gomoku
* update default gamma to 0.1 for tic tac toe to win earlier
* fix name typo; change default game config; add rew_norm option
* fix pep8
* test commit
* mv test dir name
* add rew flag
* fix torch.optim import error and madqn rew_norm
* remove useless kwargs
* Vector env enable select worker (#132)
* Enable selecting worker for vector env step method.
* Update collector to match new vecenv selective worker behavior.
* Bug fix.
* Fix rebase
Co-authored-by: Alexis Duburcq <alexis.duburcq@wandercraft.eu>
* show the last move of tictactoe by capital letters
* add multi-agent tutorial
* fix link
* Standardized behavior of Batch.cat and misc code refactor (#137)
* code refactor; remove unused kwargs; add reward_normalization for dqn
* bugfix for __setitem__ with torch.Tensor; add Batch.condense
* minor fix
* support cat with empty Batch
* remove the dependency of is_empty on len; specify the semantic of empty Batch by test cases
* support stack with empty Batch
* remove condense
* refactor code to reflect the shared / partial / reserved categories of keys
* add is_empty(recursive=False)
* doc fix
* docfix and bugfix for _is_batch_set
* add doc for key reservation
* bugfix for algebra operators
* fix cat with lens hint
* code refactor
* bugfix for storing None
* use ValueError instead of exception
* hide lens away from users
* add comment for __cat
* move the computation of the initial value of lens into cat_ itself.
* change the place of doc string
* doc fix for Batch doc string
* change recursive to recurse
* doc string fix
* minor fix for batch doc
* write tutorials to specify the standard of Batch (#142)
* add doc for len exceptions
* doc move; unify is_scalar_value function
* remove some issubclass check
* bugfix for shape of Batch(a=1)
* keep moving doc
* keep writing batch tutorial
* draft version of Batch tutorial done
* improving doc
* keep improving doc
* batch tutorial done
* rename _is_number
* rename _is_scalar
* shape property do not raise exception
* restore some doc string
* grammarly [ci skip]
* grammarly + fix warning of building docs
* polish docs
* trim and re-arrange batch tutorial
* go straight to the point
* minor fix for batch doc
* add shape / len in basic usage
* keep improving tutorial
* unify _to_array_with_correct_type to remove duplicate code
* delegate type conversion to Batch.__init__
* further delegate type conversion to Batch.__init__
* bugfix for setattr
* add a _parse_value function
* remove dummy function call
* polish docs
Co-authored-by: Trinkle23897 <463003665@qq.com>
* bugfix for mapolicy
* pretty code
* remove debug code; remove condense
* doc fix
* check before get_agents in tutorials/tictactoe
* tutorial
* fix
* minor fix for batch doc
* minor polish
* faster test_ttt
* improve tic-tac-toe environment
* change default epoch and step-per-epoch for tic-tac-toe
* fix mapolicy
* minor polish for mapolicy
* 90% to 80% (need to change the tutorial)
* win rate
* show step number at board
* simplify mapolicy
* minor polish for mapolicy
* remove MADQN
* fix pep8
* change legal_actions to mask (need to update docs)
* simplify maenv
* fix typo
* move basevecenv to single file
* separate RandomAgent
* update docs
* grammarly
* fix pep8
* win rate typo
* format in cheatsheet
* use bool mask directly
* update doc for boolean mask
Co-authored-by: Trinkle23897 <463003665@qq.com>
Co-authored-by: Alexis DUBURCQ <alexis.duburcq@gmail.com>
Co-authored-by: Alexis Duburcq <alexis.duburcq@wandercraft.eu>
* add sum_tree.py
* add prioritized replay buffer
* del sum_tree.py
* fix some format issues
* fix weight_update bug
* simply replace replaybuffer in test_dqn without weight update
* weight default set to 1
* fix sampling bug when buffer is not full
* rename parameter
* fix formula error, add accuracy check
* add PrioritizedDQN test
* add test_pdqn.py
* add update_weight() doc
* add ref of prio dqn in readme.md and index.rst
* restore test_dqn.py, fix args of test_pdqn.py
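For context on the prioritized replay commits above: the proportional scheme samples transition i with probability P(i) = p_i^alpha / sum_j p_j^alpha and corrects the induced bias with importance weights (N * P(i))^(-beta). A minimal NumPy sketch of that sampling step, independent of the sum-tree implementation referenced above:

```python
import numpy as np


def per_sample(priorities, batch_size, alpha=0.6, beta=0.4):
    """Proportional prioritized sampling (sketch).

    priorities: 1-D array of |TD error| + eps for the transitions currently
    stored (only the filled part of the buffer, which matters when it is not
    yet full). Returns (indices, importance weights normalised to max 1).
    """
    probs = priorities ** alpha
    probs = probs / probs.sum()                  # P(i) = p_i^alpha / sum_j p_j^alpha
    indices = np.random.choice(len(priorities), size=batch_size, p=probs)
    # Importance-sampling correction; dividing by the max keeps weights <= 1.
    weights = (len(priorities) * probs[indices]) ** (-beta)
    return indices, weights / weights.max()
```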