# Inverse Reinforcement Learning

In the inverse reinforcement learning setting, the agent learns a policy by interacting with an environment that provides no reward signal, together with a fixed dataset collected by an expert policy.

## Continuous control

Once the dataset is collected, it is not changed during training. We use d4rl datasets to train agents for continuous control; refer to the d4rl documentation for how to obtain and use these datasets.
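As a rough illustration, the sketch below loads one of the d4rl expert datasets. It assumes `gym` and `d4rl` are installed and uses `halfcheetah-expert-v2` as an example task:

```python
import gym

import d4rl  # noqa: F401 -- importing d4rl registers its environments with gym

# Build the environment that corresponds to the offline dataset.
env = gym.make("halfcheetah-expert-v2")

# qlearning_dataset returns a dict of numpy arrays with keys such as
# "observations", "actions", "rewards", "terminals" and "next_observations".
dataset = d4rl.qlearning_dataset(env)
print(dataset["observations"].shape, dataset["actions"].shape)
```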

We provide an implementation of the GAIL algorithm for continuous control.

### Train

You can parse a d4rl dataset into a `ReplayBuffer` and pass it as the `expert_buffer` parameter of `GAILPolicy`. `irl_gail.py` is an example of inverse RL using a d4rl dataset.
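A minimal sketch of this step, assuming a recent tianshou version that provides the `ReplayBuffer.from_data` classmethod (older versions use a `done`-only signature); the full `GAILPolicy` construction is omitted and can be found in `irl_gail.py`:

```python
import d4rl
import gym
import numpy as np

from tianshou.data import ReplayBuffer


def load_expert_buffer(task: str) -> ReplayBuffer:
    """Load a d4rl dataset into a tianshou ReplayBuffer."""
    dataset = d4rl.qlearning_dataset(gym.make(task))
    terminals = dataset["terminals"]
    return ReplayBuffer.from_data(
        obs=dataset["observations"],
        act=dataset["actions"],
        rew=dataset["rewards"],
        done=terminals,
        terminated=terminals,
        truncated=np.zeros(len(terminals)),
        obs_next=dataset["next_observations"],
    )


expert_buffer = load_expert_buffer("halfcheetah-expert-v2")
# expert_buffer is then passed as the expert_buffer argument of GAILPolicy
# (see irl_gail.py for the full policy construction).
```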

To train an agent with the GAIL algorithm:

```bash
python irl_gail.py --task HalfCheetah-v2 --expert-data-task halfcheetah-expert-v2
```

## GAIL (single run)

| task | best reward | parameters |
| --- | --- | --- |
| HalfCheetah-v2 | 5177.07 | `python3 irl_gail.py --task "HalfCheetah-v2" --expert-data-task "halfcheetah-expert-v2"` |
| Hopper-v2 | 1761.44 | `python3 irl_gail.py --task "Hopper-v2" --expert-data-task "hopper-expert-v2"` |
| Walker2d-v2 | 2020.77 | `python3 irl_gail.py --task "Walker2d-v2" --expert-data-task "walker2d-expert-v2"` |