This PR adds a new method for getting actions from an env's observation
and info. This is useful for standard inference and stands in contrast
to batch-based methods that are currently used in training and
evaluation. Without this, users have to do some kind of gymnastics to
actually perform inference with a trained policy. I have also added a
test for the new method.
In future PRs, this method should be included in the examples (in the
"watch" section).
Adding this required several typing improvements and, importantly,
_simplifying the signature of `forward` in many policies!_ This is a
**breaking change**, but it will likely affect no users. The `input`
parameter of `forward` was a rather hacky mechanism; I believe it is good
that it's gone now. It will also help with #948 .
The main functional change is the addition of `compute_action` to
`BasePolicy`.
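A minimal usage sketch, assuming `policy` is a trained `BasePolicy` subclass instance; only the observation/info-based call described above is shown, and any further arguments are omitted:

```python
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset()
done = False
while not done:
    # single-observation inference, no Batch construction needed
    action = policy.compute_action(obs, info)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```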
Other minor changes:
- Improvements in typing
- Updated PR and Issue templates
- Improved handling of `max_action_num`
Closes #981
* Removed the hard dependency on jsonargparse (it should be a dev dependency
only) by introducing a new place where jsonargparse can be configured:
logging.run_cli, which is also slightly more convenient
* Changed how the number of environments in SamplingConfig is used: the values
are now passed to the factory method. This is clearer and removes the need to
pass otherwise unnecessary configuration to environment factories at
construction
* Add persistence/restoration of Experiment instance
* Add file logging in experiment
* Allow all persistence/logging to be disabled
* Disable persistence in tests
* Add example atari_iqn_hl
* Factor out trainer callbacks to new module atari_callbacks
* Extract base class for DQN-based agent factories
* Improved module factory interface design, achieving higher generality
* Changed the mechanism for reusing the actor's preprocessing module in
critics to avoid special handling in AgentFactory implementations, improving
separation of concerns (see the sketch after this list):
- Added CriticFactoryReuseActor as the new critic factory
- Added ActorFactoryTransientStorageDecorator to pass on the actor
data
- Added helper classes ActorFuture, ActorFutureProviderProtocol
* Add example atari_sac_hl
* Implement example mujoco_redq_hl
* Add abstraction CriticEnsembleFactory with default implementations
to suit REDQ
* Fix type annotation of linear_layer in Net, MLP, Critic
(was incompatible with REDQ usage)
* Allow trainer callbacks (train_fn, test_fn, stop_fn) to be specified
in the high-level API, adding the necessary abstractions and pass-on
mechanisms
* Add example atari_dqn_hl
* Add common base class for A2C and PPO agent factories
* Add default for dist_fn parameter, adding corresponding factories
* Add example mujoco_a2c_hl
* Use prefix convention (subclasses have superclass names as prefix) to
facilitate discoverability of relevant classes via IDE autocompletion
* Use dual naming, adding an alternative concise name that omits the
precise OO semantics and retains only the essential part of the name
(which can be more pleasing to users not accustomed to
convoluted OO naming)
* Created mixins for agent factories to reduce code duplication
* Further factorised params & mixins for experiment factories
* Additional parameter abstractions
* Implement high-level MuJoCo TD3 example
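The actor-reuse mechanism above can be sketched roughly as follows. The class names are taken from the list; the method names, the `preprocess` attribute, and all implementation details are assumptions for illustration, not the actual tianshou code:

```python
from dataclasses import dataclass

from torch import nn


@dataclass
class ActorFuture:
    """Holds the actor once the actor factory has created it."""

    actor: nn.Module | None = None


class ActorFactoryTransientStorageDecorator:
    """Wraps an actor factory and passes the created actor on via an ActorFuture."""

    def __init__(self, wrapped_factory, future: ActorFuture):
        self.wrapped_factory = wrapped_factory
        self.future = future

    def create_module(self, envs, device) -> nn.Module:
        actor = self.wrapped_factory.create_module(envs, device)
        self.future.actor = actor  # store the actor for later reuse by the critic factory
        return actor


class CriticFactoryReuseActor:
    """Builds a critic on top of the actor's preprocessing network."""

    def __init__(self, actor_future: ActorFuture):
        self.actor_future = actor_future

    def create_module(self, envs, device) -> nn.Module:
        # 'preprocess' is an assumed attribute name; the point is that the critic
        # reuses the actor's feature extractor instead of constructing its own.
        preprocess = self.actor_future.actor.preprocess
        return nn.Sequential(preprocess, nn.LazyLinear(1)).to(device)
```

The decorator/future pair lets the critic factory obtain the actor without AgentFactory implementations having to special-case the reuse.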
Closes #947
This removes all kwargs from all policy constructors. While doing that,
I also improved several names and added a whole lot of TODOs.
## Functional changes:
1. Added the possibility to pass `None` as `critic2` and `critic2_optim`. The
resulting default behavior should cover the vast majority of cases
2. Added a function called `clone_optimizer` as a temporary measure to
support passing `critic2_optim=None`
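A minimal sketch of what this defaulting could look like; `clone_optimizer` is named in the description above, but its body and the surrounding logic shown here are assumptions for illustration, not the actual implementation:

```python
from collections.abc import Iterable
from copy import deepcopy

import torch
from torch import nn


def clone_optimizer(
    optim: torch.optim.Optimizer, new_params: Iterable[nn.Parameter]
) -> torch.optim.Optimizer:
    # Create a new optimizer of the same class with the same hyperparameters
    # (lr, betas, weight decay, ...), attached to a different set of parameters.
    return type(optim)(new_params, **optim.defaults)


def default_critic2(critic: nn.Module, critic_optim: torch.optim.Optimizer):
    # Hypothetical defaulting when critic2 / critic2_optim are not provided:
    # copy the first critic and clone its optimizer for the copy's parameters.
    critic2 = deepcopy(critic)
    return critic2, clone_optimizer(critic_optim, critic2.parameters())
```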
## Breaking changes:
1. `action_space` is no longer optional. In fact, it was effectively
non-optional already, since `BasePolicy.__init__` raised a `ValueError`
without it. Several examples were fixed to reflect that
2. `reward_normalization` was removed from DDPG and its children. Passing it
as `True` there was never allowed; an error would have been raised in
`compute_n_step_reward`. It has now been removed from the interface
3. Renamed `critic1` and similar to `critic` in order to have uniform
interfaces. Note that `critic` in DDPG was optional for the sole reason that
child classes used `critic1`; I removed this optionality (DDPG can't do
anything with `critic=None`)
4. Several renamings of fields (mostly private to public, so backwards
compatible)
## Additional changes:
1. Removed type and default declarations from docstrings. This kind of
duplication is really not necessary
2. Policy constructors are now only called using named arguments, not a
fragile mixture of positional and named as before
3. Minor beautifications in typing and code
4. Generally shortened docstrings and made them uniform across all
policies (hopefully)
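To make the new calling convention concrete, here is a hedged sketch (not an actual tianshou class) of the constructor style resulting from the changes above: keyword-only arguments, `critic` instead of `critic1`, optional `critic2`/`critic2_optim`, and a required `action_space`:

```python
import gymnasium as gym
import torch
from torch import nn


class ExamplePolicy:
    """Illustration only; parameter names mirror the changes described above."""

    def __init__(
        self,
        *,  # everything after this is keyword-only, so calls must name each argument
        actor: nn.Module,
        actor_optim: torch.optim.Optimizer,
        critic: nn.Module,                          # formerly critic1
        critic_optim: torch.optim.Optimizer,
        critic2: nn.Module | None = None,           # defaults derived from critic
        critic2_optim: torch.optim.Optimizer | None = None,
        action_space: gym.spaces.Space,             # no longer optional
    ) -> None:
        ...
```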
## Comment:
With these changes, several problems in tianshou's inheritance hierarchy
become more apparent. I tried highlighting them for future work.
---------
Co-authored-by: Dominik Jain <d.jain@appliedai.de>