which stores the entire policy (new default), supporting applications
where it is desirable to be able to load the policy without having
to instantiate an environment or recreate a corresponding policy
object
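A minimal sketch of what loading such a fully persisted policy could look like (the file name and the use of plain `torch.load` are assumptions, not the actual persistence mechanism):

```python
import torch

# Hypothetical sketch: because the whole policy object is persisted
# (not just its state_dict), it can be restored without first creating
# an environment or a matching policy object.
policy = torch.load("policy.pt", map_location="cpu")  # file name is an assumption
policy.eval()
# The restored policy can then be used for inference directly.
```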
(should be a dev dependency only) by introducing a new
place where jsonargparse can be configured:
logging.run_cli, which is also slightly more convenient to use
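A hedged sketch of how an example script might use this entry point (the import path and the exact behavior of run_cli are assumptions):

```python
from tianshou.utils import logging  # import path is an assumption


def main(task: str = "PongNoFrameskip-v4", seed: int = 0) -> None:
    """Entry point whose parameters become CLI arguments via jsonargparse."""
    ...


if __name__ == "__main__":
    # run_cli is assumed to parse CLI arguments for main's parameters using
    # jsonargparse (and set up logging), so jsonargparse is only needed when
    # running example scripts, not by the library core.
    logging.run_cli(main)
```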
of the number of environments in SamplingConfig is used
(the values are now passed to the factory method).
This is clearer and removes the need to pass otherwise
unnecessary configuration to environment factories at
construction
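A rough sketch of the resulting pattern (field, class and method names other than SamplingConfig are illustrative assumptions):

```python
from dataclasses import dataclass


@dataclass
class SamplingConfig:  # field names are illustrative assumptions
    num_train_envs: int = 10
    num_test_envs: int = 10


class MyEnvFactory:
    # The factory no longer receives the environment counts at construction
    # time; they are passed when the environments are actually created.
    def create_envs(self, num_training_envs: int, num_test_envs: int):
        ...  # build vectorized training/test environments of the requested sizes


sampling_config = SamplingConfig(num_train_envs=16, num_test_envs=4)
envs = MyEnvFactory().create_envs(
    sampling_config.num_train_envs, sampling_config.num_test_envs
)
```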
* Add persistence/restoration of Experiment instance
* Add file logging in experiment
* Allow all persistence/logging to be disabled
* Disable persistence in tests
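A hypothetical sketch of how disabling persistence and file logging might look from the user's perspective (the configuration class and field names are assumptions):

```python
from dataclasses import dataclass


@dataclass
class ExperimentConfig:  # hypothetical field names, for illustration only
    persistence_enabled: bool = True  # False disables saving of experiment/policy
    log_file_enabled: bool = True  # False disables file logging


# e.g. in tests, persistence and file logging are switched off entirely
test_config = ExperimentConfig(persistence_enabled=False, log_file_enabled=False)
```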
* Add example atari_iqn_hl
* Factor out trainer callbacks to new module atari_callbacks
* Extract base class for DQN-based agent factories
* Improved module factory interface design, achieving higher generality
* Changed the mechanism for reusing the actor's preprocessing module in critics
to avoid special handling in AgentFactory implementations, improving
separation of concerns (see the sketch after this list):
- Added CriticFactoryReuseActor as the new critic factory
- Added ActorFactoryTransientStorageDecorator to pass on the actor
data
- Added helper classes ActorFuture, ActorFutureProviderProtocol
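A simplified sketch of the underlying "future" pattern, based on the class names above (method names and signatures are assumptions):

```python
from typing import Optional, Protocol

from torch import nn


class ActorFuture:
    """Container that is filled with the actor module once it has been built."""

    actor: Optional[nn.Module] = None


class ActorFutureProviderProtocol(Protocol):
    def get_actor_future(self) -> ActorFuture:
        ...


class ActorFactoryTransientStorageDecorator:
    """Wraps an actor factory and stores the created actor in a future."""

    def __init__(self, actor_factory, actor_future: ActorFuture):
        self.actor_factory = actor_factory
        self.actor_future = actor_future

    def create_module(self, envs, device) -> nn.Module:
        actor = self.actor_factory.create_module(envs, device)
        self.actor_future.actor = actor  # make the actor available to critic factories
        return actor


class CriticFactoryReuseActor:
    """Critic factory that reuses the actor's preprocessing network."""

    def __init__(self, actor_future: ActorFuture):
        self.actor_future = actor_future

    def create_module(self, envs, device):
        actor = self.actor_future.actor  # set by the decorator when the actor was built
        ...  # build a critic head on top of the actor's preprocessing module
```

This keeps the reuse logic entirely inside the factories, so AgentFactory implementations no longer need any special-case code for shared preprocessing.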
* Add example atari_sac_hl
* Implement example mujoco_redq_hl
* Add abstraction CriticEnsembleFactory with default implementations
to suit REDQ
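A hedged sketch of what such an abstraction could look like (the method name and signature are assumptions):

```python
from abc import ABC, abstractmethod

from torch import nn


class CriticEnsembleFactory(ABC):
    """Creates the ensemble of critics that REDQ requires."""

    @abstractmethod
    def create_module(self, envs, device, ensemble_size: int) -> nn.Module:
        """Create a module containing `ensemble_size` critics."""
```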
* Fix type annotation of linear_layer in Net, MLP, Critic
(was incompatible with REDQ usage)
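Sketch of the kind of annotation involved (illustrative only): REDQ supplies its own linear-layer factory, so the parameter must accept any callable producing a module from input/output dimensions rather than only nn.Linear.

```python
from collections.abc import Callable

from torch import nn

# Illustrative: a factory (in_features, out_features) -> module, which also
# admits ensemble-style linear layers as used by REDQ, instead of an
# annotation that only accepts nn.Linear.
TLinearLayer = Callable[[int, int], nn.Module]
```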
After adding a function to create ModuleOpt instances directly from
AgentFactory and CriticFactory,
* several mixins for AgentFactories are no longer needed (deleted)
* additional abstractions for ModuleOptFactories are no longer needed (deleted)
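A rough sketch of the simplification (names other than ModuleOpt, AgentFactory and CriticFactory are assumptions):

```python
from torch import nn
from torch.optim import Optimizer


class ModuleOpt:
    """Pairs a module with its optimizer."""

    def __init__(self, module: nn.Module, optim: Optimizer):
        self.module = module
        self.optim = optim


def create_critic_module_opt(critic_factory, envs, device, optim_factory, lr: float) -> ModuleOpt:
    # Hypothetical helper: the module and its optimizer are created in one
    # step, so separate ModuleOptFactory abstractions and mixins are not needed.
    critic = critic_factory.create_module(envs, device)
    optim = optim_factory.create_optimizer(critic, lr)
    return ModuleOpt(critic, optim)
```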
* Allow trainer callbacks (train_fn, test_fn, stop_fn) to be specified
in the high-level API, adding the necessary abstractions and pass-on
mechanisms (see the sketch below)
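A minimal sketch of such a callback abstraction (class and method names are assumptions; train_fn/test_fn/stop_fn are the existing trainer hooks they map to):

```python
from abc import ABC, abstractmethod


class EpochTrainCallback(ABC):
    """High-level counterpart of the trainer's train_fn hook."""

    @abstractmethod
    def callback(self, epoch: int, env_step: int, policy) -> None:
        ...


class EpochStopCallback(ABC):
    """High-level counterpart of the trainer's stop_fn hook."""

    @abstractmethod
    def should_stop(self, mean_rewards: float) -> bool:
        ...


class StopOnReward(EpochStopCallback):
    """Example: stop training once the mean test reward reaches a threshold."""

    def __init__(self, threshold: float):
        self.threshold = threshold

    def should_stop(self, mean_rewards: float) -> bool:
        return mean_rewards >= self.threshold
```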
* Add example atari_dqn_hl