This PR separates `global_step` into `env_step` and `gradient_step`. In the future, data from the collecting state will be stored under `env_step`, and data from the updating state will be stored under `gradient_step`.
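A minimal sketch of how the two counters could drive the logging, assuming a TensorBoard `SummaryWriter`; the variable names and values are illustrative, not the actual trainer internals:

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("log/example")
env_step, gradient_step = 0, 0

# Collecting state: advance env_step by the number of frames just collected,
# and log collection statistics against it.
env_step += 64  # e.g. the step count reported by the collector
writer.add_scalar("train/rew", 100.0, global_step=env_step)

# Updating state: advance gradient_step once per optimizer update,
# and log losses against it.
gradient_step += 1
writer.add_scalar("train/loss", 0.5, global_step=gradient_step)
```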
Others:
- add `rew_std` and `best_result` to the monitor
- fix unbounded network output in `test/continuous/test_sac_with_il.py` and `examples/box2d/bipedal_hardcore_sac.py`
- bump the ray dependency to 1.0.0, since ray-project/ray#10134 has been resolved
Adding an indicator of learning (i.e. `self.learning`) will make it convenient to distinguish the state of the policy. Meanwhile, the state of `self.training` remains unambiguous in the training stage.
Related issue: #211
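A minimal sketch of the idea, assuming a `BasePolicy`-like class; the method bodies are placeholders, not the actual source:

```python
class SketchPolicy:
    """Illustrative only: shows how `learning` differs from `training`."""

    def __init__(self):
        self.training = True   # nn.Module semantics: train() vs. eval() mode
        self.learning = False  # True only while learn() is running

    def forward(self, batch):
        # During data collection self.training can be True while
        # self.learning is False, so exploration noise can key off self.learning.
        pass

    def learn(self, batch):
        self.learning = True
        try:
            pass  # gradient update goes here
        finally:
            self.learning = False
```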
Others:
- fix a bug in DDQN: target_q should not be sampled from np.random.rand
- fix a bug in the DQN Atari net: a ReLU should be added before the last layer
- fix a bug in collector timing
Co-authored-by: n+e <463003665@qq.com>
Cherry-pick from #200
- update the function signature
- format code-style
- move _compile into separate functions
- fix a bug in to_torch and to_numpy (Batch)
- remove None in action_range
In short, the code-format changes only cover function-signature style and `'` -> `"` (picked up from [black](https://github.com/psf/black)).
- replace DiagGaussian with Independent(Normal), which PyTorch already supports (see the sketch below)
- detach alpha from autograd
- add value/alpha to result (more informational)
- revert #204 to fix #211
Co-authored-by: Trinkle23897 <463003665@qq.com>
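A small sketch of the `Independent(Normal)` construction mentioned above; the shapes are illustrative:

```python
import torch
from torch.distributions import Independent, Normal

mu = torch.zeros(4, 3)     # batch of means
sigma = torch.ones(4, 3)   # batch of standard deviations
# Interpret the last dimension as event dimensions -> diagonal Gaussian
dist = Independent(Normal(mu, sigma), 1)
act = dist.rsample()             # reparameterized sample, shape (4, 3)
log_prob = dist.log_prob(act)    # shape (4,), summed over the event dimension
```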
1. add policy.eval() in all test scripts' "watch performance"
2. remove dict return support for collector preprocess_fn
3. add `__contains__` and `pop` in Batch: `key in batch`, `batch.pop(key, deft)` (see the usage sketch after this list)
4. collect exactly n_episode episodes when a list of per-env n_episode limits is given, and save fake data into cache_buffer when self.buffer is None (#184)
5. fix tensorboard logging: the x-axis stands for env step instead of gradient step; add test results to tensorboard
6. add test_returns (both GAE and nstep)
7. change the type-checking order in batch.py and converter.py so that the most common case is checked first
8. fix shape inconsistency for torch.Tensor in replay buffer
9. remove `**kwargs` in ReplayBuffer
10. remove the default value in batch.split() and add the merge_last argument (#185)
11. improve nstep efficiency
12. add max_batchsize in onpolicy algorithms
13. potential bugfix for subproc.wait
14. fix RecurrentActorProb
15. improve the code coverage (from 90% to 95%) and remove dead code
16. fix some incorrect type annotations
The above improvements also increase the training FPS: on my computer, the previous version reaches only ~1800 FPS, while this one reaches ~2050 FPS (faster than v0.2.4.post1).
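A short usage sketch of the Batch helpers from items 3 and 10 above; the data is made up:

```python
import numpy as np
from tianshou.data import Batch

batch = Batch(obs=np.zeros((5, 4)), rew=np.arange(5))
assert "obs" in batch                # __contains__
rew = batch.pop("rew", None)         # pop with a default, like dict.pop
for mini in batch.split(2, shuffle=False, merge_last=True):
    # merge_last=True folds the trailing short chunk into the previous one,
    # so the chunk sizes here are 2 and 3 instead of 2, 2, 1.
    print(len(mini))
```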
* add policy.update to enable post process and remove collector.sample
* update doc in policy concept
* remove collector.sample in doc
* doc update of concepts
* docs
* polish
* polish policy
* remove collector.sample in docs
* minor fix
* Apply suggestions from code review
just a test
* doc fix
Co-authored-by: Trinkle23897 <463003665@qq.com>
Unify the implementation with multi-environments (wrap a single environment in a multi-environment with one env) to greatly simplify the code.
This changes the behavior of the single-environment case.
Prior to this PR, for a single environment, collector.collect(n_step=n) stepped exactly n steps.
After this PR, for a single environment, collector.collect(n_step=n) steps m episodes until the number of collected steps is at least n.
That is to say, collectors now always collect full episodes.
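A minimal sketch of the new semantics for a single environment; `policy` is assumed to be any policy instance (e.g. a trained DQNPolicy) and is not constructed here:

```python
import gym
from tianshou.data import Collector, ReplayBuffer

env = gym.make("CartPole-v0")
collector = Collector(policy, env, ReplayBuffer(size=20000))  # policy assumed defined
result = collector.collect(n_step=1000)
# The single env is wrapped as a one-env vector env internally, and collection
# continues until the ongoing episode ends, so the number of collected steps
# may slightly exceed 1000 and every stored episode is complete.
```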
* make fields with empty Batch rather than None after reset
* dummy code
* remove dummy
* add reward_length argument for collector
* Improve Batch (#126)
* make sure the key type of Batch is string, and add unit tests
* add is_empty() function and unit tests
* enable cat of mixing dict and Batch, just like stack
* bugfix for reward_length
* add get_final_reward_fn argument to collector to deal with marl
* minor polish
* remove multibuf
* minor polish
* improve and implement Batch.cat_
* bugfix for buffer.sample with field impt_weight
* restore the usage of a.cat_(b)
* fix 2 bugs in batch and add corresponding unittest
* code fix for update
* update is_empty to recognize empty over empty; bugfix for len
* bugfix for update and add testcase
* add testcase of update
* make fields with empty Batch rather than None after reset
* dummy code
* remove dummy
* add reward_length argument for collector
* bugfix for reward_length
* add get_final_reward_fn argument to collector to deal with marl
* make sure the key type of Batch is string, and add unit tests
* add is_empty() function and unit tests
* enable cat of mixing dict and Batch, just like stack
* dummy code
* remove dummy
* add multi-agent example: tic-tac-toe
* move TicTacToeEnv to a separate file
* remove dummy MANet
* code refactor
* move tic-tac-toe example to test
* update doc with marl-example
* fix docs
* reduce the threshold
* revert
* update player id to start from 1 and change player to agent; keep coding
* add reward_length argument for collector
* Improve Batch (#128)
* minor polish
* improve and implement Batch.cat_
* bugfix for buffer.sample with field impt_weight
* restore the usage of a.cat_(b)
* fix 2 bugs in batch and add corresponding unittest
* code fix for update
* update is_empty to recognize empty over empty; bugfix for len
* bugfix for update and add testcase
* add testcase of update
* fix docs
* fix docs
* fix docs [ci skip]
* fix docs [ci skip]
Co-authored-by: Trinkle23897 <463003665@qq.com>
* refactor
* re-implement Batch.stack and add testcases
* add doc for Batch.stack
* reward_metric
* modify flag
* minor fix
* reuse _create_values and refactor stack_ & cat_
* fix pep8
* fix reward stat in collector
* fix stat of collector, simplify test/base/env.py
* fix docs
* minor fix
* raise exception for stacking with partial keys and axis!=0
* minor fix
* minor fix
* minor fix
* marl-examples
* add condense; bugfix for torch.Tensor; code refactor
* marl example can run now
* enable tic tac toe with larger board size and win-size
* add test dependency
* Fix padding of inconsistent keys with Batch.stack and Batch.cat (#130)
* re-implement Batch.stack and add testcases
* add doc for Batch.stack
* reuse _create_values and refactor stack_ & cat_
* fix pep8
* fix docs
* raise exception for stacking with partial keys and axis!=0
* minor fix
* minor fix
Co-authored-by: Trinkle23897 <463003665@qq.com>
* stash
* let agent learn to play as agent 2 which is harder
* code refactor
* Improve collector (#125)
* remove multibuf
* reward_metric
* make fields with empty Batch rather than None after reset
* many fixes and refactor
Co-authored-by: Trinkle23897 <463003665@qq.com>
* marl for tic-tac-toe and general gomoku
* update default gamma to 0.1 for tic tac toe to win earlier
* fix name typo; change default game config; add rew_norm option
* fix pep8
* test commit
* mv test dir name
* add rew flag
* fix torch.optim import error and madqn rew_norm
* remove useless kwargs
* Vector env enable select worker (#132)
* Enable selecting worker for vector env step method.
* Update collector to match new vecenv selective worker behavior.
* Bug fix.
* Fix rebase
Co-authored-by: Alexis Duburcq <alexis.duburcq@wandercraft.eu>
* show the last move of tictactoe in capital letters
* add multi-agent tutorial
* fix link
* Standardized behavior of Batch.cat and misc code refactor (#137)
* code refactor; remove unused kwargs; add reward_normalization for dqn
* bugfix for __setitem__ with torch.Tensor; add Batch.condense
* minor fix
* support cat with empty Batch
* remove the dependency of is_empty on len; specify the semantic of empty Batch by test cases
* support stack with empty Batch
* remove condense
* refactor code to reflect the shared / partial / reserved categories of keys
* add is_empty(recursive=False)
* doc fix
* docfix and bugfix for _is_batch_set
* add doc for key reservation
* bugfix for algebra operators
* fix cat with lens hint
* code refactor
* bugfix for storing None
* use ValueError instead of exception
* hide lens away from users
* add comment for __cat
* move the computation of the initial value of lens into cat_ itself
* change the place of doc string
* doc fix for Batch doc string
* change recursive to recurse
* doc string fix
* minor fix for batch doc
* write tutorials to specify the standard of Batch (#142)
* add doc for len exceptions
* doc move; unify is_scalar_value function
* remove some issubclass check
* bugfix for shape of Batch(a=1)
* keep moving doc
* keep writing batch tutorial
* draft version of Batch tutorial done
* improving doc
* keep improving doc
* batch tutorial done
* rename _is_number
* rename _is_scalar
* shape property no longer raises an exception
* restore some doc string
* grammarly [ci skip]
* grammarly + fix warning of building docs
* polish docs
* trim and re-arrange batch tutorial
* go straight to the point
* minor fix for batch doc
* add shape / len in basic usage
* keep improving tutorial
* unify _to_array_with_correct_type to remove duplicate code
* delegate type conversion to Batch.__init__
* further delegate type conversion to Batch.__init__
* bugfix for setattr
* add a _parse_value function
* remove dummy function call
* polish docs
Co-authored-by: Trinkle23897 <463003665@qq.com>
* bugfix for mapolicy
* pretty code
* remove debug code; remove condense
* doc fix
* check before get_agents in tutorials/tictactoe
* tutorial
* fix
* minor fix for batch doc
* minor polish
* faster test_ttt
* improve tic-tac-toe environment
* change default epoch and step-per-epoch for tic-tac-toe
* fix mapolicy
* minor polish for mapolicy
* 90% to 80% (need to change the tutorial)
* win rate
* show step number at board
* simplify mapolicy
* minor polish for mapolicy
* remove MADQN
* fix pep8
* change legal_actions to mask (need to update docs)
* simplify maenv
* fix typo
* move basevecenv to single file
* separate RandomAgent
* update docs
* grammarly
* fix pep8
* win rate typo
* format in cheatsheet
* use bool mask directly (see the sketch after this list)
* update doc for boolean mask
Co-authored-by: Trinkle23897 <463003665@qq.com>
Co-authored-by: Alexis DUBURCQ <alexis.duburcq@gmail.com>
Co-authored-by: Alexis Duburcq <alexis.duburcq@wandercraft.eu>
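A hedged sketch of the boolean action mask mentioned in the list above; the observation layout is illustrative only, not the one the environments actually produce:

```python
import numpy as np
from tianshou.data import Batch

obs = Batch(
    obs=np.zeros((1, 3, 3)),  # raw board observation
    mask=np.array([[True, False, True, True, False, True, True, False, True]]),
    agent_id=np.array([1]),
)
# A policy can mask out illegal actions before taking the argmax:
logits = np.random.randn(1, 9)
logits[~obs.mask] = -np.inf
action = logits.argmax(axis=1)
```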
* code refactor; remove unused kwargs; add reward_normalization for dqn
* bugfix for __setitem__ with torch.Tensor; add Batch.condense
* minor fix
* support cat with empty Batch
* remove the dependency of is_empty on len; specify the semantic of empty Batch by test cases
* support stack with empty Batch
* remove condense
* refactor code to reflect the shared / partial / reserved categories of keys
* add is_empty(recursive=False)
* doc fix
* docfix and bugfix for _is_batch_set
* add doc for key reservation
* bugfix for algebra operators
* fix cat with lens hint
* code refactor
* bugfix for storing None
* use ValueError instead of exception
* hide lens away from users
* add comment for __cat
* move the computation of the initial value of lens into cat_ itself
* change the place of doc string
* doc fix for Batch doc string
* change recursive to recurse
* doc string fix
* minor fix for batch doc
* add_pybullet_envs_test
test on pybullet envs
modify some log config
* delete DS_Store file
* add pybullet_envs test
add HalfCheetahBulletEnv-v0 test
modify log config
* fix pep 8 errors
* add pybullet to dev
* delete a line
* bypass F401
* add log_interval to onpolicy_trainer
* add comments
* Update halfcheetahBullet_v0_sac.py
* update atari.py
* fix setup.py
pass the pytest
* fix setup.py
pass the pytest
* add args "render"
* change the tensorboard writer
* change the tensorboard writer
* change device, render, tensorboard log location
* change device, render, tensorboard log location
* remove some wrong local files
* fix some tab mistakes and the env names in continuous/test_xx.py
* add examples and point robot maze environment
* fix some bugs during testing examples
* add dqn network and fix some args
* change back the tensorboard writer's frequency to ensure ppo and a2c can write things normally
* add a warning to collector
* rm some unrelated files
* reformat
* fix a bug in test_dqn caused by wrong model selection