* Add class ExperimentCollection to improve usability
* Remove parameters from ExperimentBuilder.build
* Renamed ExperimentBuilder.build_default_seeded_experiments to build_seeded_collection,
  changing the return type to ExperimentCollection
* Replace temp_config_mutation (which was not appropriate for the public API) with
method copy (which performs a safe deep copy)
* Remove flag `eval_mode` from Collector.collect
* Replace flag `is_eval` in BasePolicy with `is_within_training_step` (negating usages)
and set it appropriately in BaseTrainer
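The safety gained by replacing temp_config_mutation with a deep-copying `copy` method can be illustrated with a generic sketch (the `Config` dataclass below is a hypothetical stand-in, not Tianshou's actual config class):

```python
import copy
import dataclasses


@dataclasses.dataclass
class Config:
    # hypothetical stand-in for an experiment configuration
    lr: float = 1e-3
    hidden_sizes: list = dataclasses.field(default_factory=lambda: [64, 64])


base = Config()
variant = copy.deepcopy(base)  # safe: nested objects are copied too
variant.hidden_sizes.append(128)

assert base.hidden_sizes == [64, 64]  # the original is untouched
```

A shallow copy would have shared the `hidden_sizes` list between the two configs, so mutating the variant would silently corrupt the original.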
New method training_step, which
* collects training data (method _collect_training_data)
* performs "test in train" (method _test_in_train)
* performs policy update
The old method named train_step performed only the first two steps
and has now been split into two separate methods
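As a rough sketch of the new control flow (only the method names `training_step`, `_collect_training_data`, and `_test_in_train` come from this PR; the class and method bodies below are simplified placeholders):

```python
class TrainerSketch:
    """Simplified illustration of the new training_step structure."""

    def __init__(self):
        self.calls = []

    def _collect_training_data(self):
        # 1) gather fresh transitions from the environments
        self.calls.append("collect")

    def _test_in_train(self):
        # 2) optionally evaluate the current policy mid-training
        self.calls.append("test_in_train")

    def _update_policy(self):
        # 3) perform the gradient update (not part of the old train_step)
        self.calls.append("update")

    def training_step(self):
        # replaces the old train_step, which covered only steps 1 and 2
        self._collect_training_data()
        self._test_in_train()
        self._update_policy()
```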
This PR fixes a bug in DQN and lifts a limitation in reusing the actor's
preprocessing network for continuous environments.
* `atari_network.DQN`:
* Fix input validation
* Fix output_dim not being set if features_only=True and
output_dim_added_layer is not None
* `continuous.Critic`:
* Add flag `apply_preprocess_net_to_obs_only` to allow the
preprocessing network to be applied to the observations only (without
the actions concatenated), which is essential for the case where we want
to reuse the actor's preprocessing network
* CriticFactoryReuseActor: Use the flag, fixing the case where we want
to reuse an actor's preprocessing network for the critic (it must be
applied before concatenating the actions)
* Minor improvements in docs/docstrings
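The effect of the new flag can be sketched generically (both functions below are hypothetical stand-ins, not the actual tianshou implementation):

```python
def preprocess_net(obs):
    # hypothetical stand-in for the actor's preprocessing network:
    # its input dimension matches the observation alone
    return [2.0 * x for x in obs]


def critic_forward(obs, act, apply_preprocess_net_to_obs_only):
    if apply_preprocess_net_to_obs_only:
        # new behaviour: preprocess the observation first, then
        # concatenate the action onto the resulting features
        return preprocess_net(obs) + act
    # default behaviour: concatenate first, then preprocess the joint
    # input (incompatible with a network sized for observations only)
    return preprocess_net(obs + act)
```

With the flag set, a network whose input layer was sized for observations only (the actor's) can serve the critic, because it never sees the concatenated actions.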
# Changes
## Dependencies
- New extra "eval"
## API Extensions
- `Experiment` and `ExperimentConfig` now have a `name`, which can be
overridden when `Experiment.run()` is called
- When building an `Experiment` from an `ExperimentConfig`, the user has
the option to add info about seeds to the name.
- New method in `ExperimentConfig` called
`build_default_seeded_experiments`
- `SamplingConfig` has an explicit training seed; `test_seed` is
inferred.
- New `evaluation` package for repeating the same experiment with
multiple seeds and aggregating the results (important extension!).
Currently in alpha state.
- Loggers can now restore logged data into Python using the new method
`restore_logged_data`
## Breaking Changes
- `AtariEnvFactory` (in examples) now receives explicit train and test
seeds
- `EnvFactoryRegistered` now requires an explicit `test_seed`
- `BaseLogger.prepare_dict_for_logging` is now abstract
---------
Co-authored-by: Maximilian Huettenrauch <m.huettenrauch@appliedai.de>
Co-authored-by: Michael Panchenko <m.panchenko@appliedai.de>
Co-authored-by: Michael Panchenko <35432522+MischaPanch@users.noreply.github.com>
Closes: https://github.com/aai-institute/tianshou/issues/1116
### API Extensions
- Batch received new method: `to_torch_`. #1117
### Breaking Changes
- The method `to_torch` in `data.utils.batch.Batch` is no longer
in-place. Instead, the new method `to_torch_` performs the conversion
in-place. #1117
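The trailing underscore follows the common Python convention for in-place variants. A toy illustration of the distinction (the `ToyBatch` class and its methods are invented for this sketch, not tianshou's `Batch`):

```python
import copy


class ToyBatch:
    """Stand-in for Batch; illustrates the trailing-underscore convention."""

    def __init__(self, data):
        self.data = data

    def to_float_(self):
        # in-place variant (like to_torch_): mutates self and returns it
        self.data = [float(x) for x in self.data]
        return self

    def to_float(self):
        # out-of-place variant (like the new to_torch): returns a new object
        return copy.deepcopy(self).to_float_()


b = ToyBatch([1, 2])
c = b.to_float()
assert all(isinstance(x, int) for x in b.data)    # original unchanged
assert all(isinstance(x, float) for x in c.data)  # converted copy
```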
Closes: https://github.com/thu-ml/tianshou/issues/1086
### API Extensions
- Batch received new method: `to_numpy_`. #1098
- `to_dict` in Batch now also supports non-recursive conversion. #1098
- `Batch.__eq__` is now implemented, making semantic equality checks of
batches possible. #1098
### Breaking Changes
- The method `to_numpy` in `data.utils.batch.Batch` is no longer
in-place. Instead, the new method `to_numpy_` performs the conversion
in-place. #1098
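Semantic equality means two batches compare equal when their contents match, regardless of object identity. A toy sketch of the idea (this `ToyBatch` is an invented stand-in, not tianshou's `Batch`):

```python
class ToyBatch:
    """Stand-in for Batch; illustrates semantic (value-based) equality."""

    def __init__(self, **fields):
        self.fields = fields

    def __eq__(self, other):
        # semantic equality: same field names and equal values,
        # not object identity
        if not isinstance(other, ToyBatch):
            return NotImplemented
        return self.fields == other.fields


a = ToyBatch(obs=[1, 2, 3], act=[0])
b = ToyBatch(obs=[1, 2, 3], act=[0])
assert a == b          # distinct objects, semantically equal
assert a is not b
```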