Inverse Reinforcement Learning
In the inverse reinforcement learning setting, the agent learns a policy from interaction with a reward-free environment together with a fixed dataset collected by an expert policy.
Continuous control
Once the dataset is collected, it is not changed during training. We use d4rl datasets to train agents for continuous control. You can refer to d4rl to see how to use its datasets.
We provide an implementation of the GAIL algorithm for continuous control.
Train
You can parse a d4rl dataset into a `ReplayBuffer` and pass it as the `expert_buffer` parameter of `GAILPolicy`. `irl_gail.py` is an example of inverse RL using a d4rl dataset.
To train an agent with the GAIL algorithm:
```shell
python irl_gail.py --task HalfCheetah-v2 --expert-data-task halfcheetah-expert-v2
```