# Goals of the PR

The PR introduces **no changes to functionality**, apart from improved input validation here and there. The main goals are to reduce some of the code's complexity, to improve types and IDE completions, and to extend documentation and block comments where appropriate. Because of the change to the trainer interfaces, many files are affected (more details below), but the overall changes are still "small" in a certain sense.

## Major Change 1 - BatchProtocol

**TL;DR:** One can now annotate which fields a batch is expected to have in input params and which fields a returned batch has. This should be useful for reading the code, getting meaningful IDE support, and catching bugs with mypy. This annotation strategy will continue to work if Batch is replaced by TensorDict or by something else.

**In more detail:** Batch itself has no fields, so using it for annotations carries little information. Batches with fields are not separate classes but instances of Batch directly, so there is no type that could be used for annotation. Fortunately, Python's `Protocol` comes to the rescue. With these changes we can now do things like

```python
class ActionBatchProtocol(BatchProtocol):
    logits: Sequence[Union[tuple, torch.Tensor]]
    dist: torch.distributions.Distribution
    act: torch.Tensor
    state: Optional[torch.Tensor]


class RolloutBatchProtocol(BatchProtocol):
    obs: torch.Tensor
    obs_next: torch.Tensor
    info: Dict[str, Any]
    rew: torch.Tensor
    terminated: torch.Tensor
    truncated: torch.Tensor


class PGPolicy(BasePolicy):
    ...

    def forward(
        self,
        batch: RolloutBatchProtocol,
        state: Optional[Union[dict, Batch, np.ndarray]] = None,
        **kwargs: Any,
    ) -> ActionBatchProtocol:
        ...
```

The IDE and mypy are now very helpful in finding errors and in auto-completion, whereas before the tools couldn't assist with that at all.

## Major Change 2 - remove duplication in trainer package

**TL;DR:** There was a lot of duplication between `BaseTrainer` and its subclasses. Even worse, it was almost-duplication. There was also interface fragmentation through things like `onpolicy_trainer`. Now this duplication is gone and all downstream code has been adjusted.

**In more detail:** Since this change affects a lot of code, I would like to explain why I thought it necessary.

1. The subclasses of `BaseTrainer` just duplicated docstrings and constructors. What's worse, they changed the order of args there, even turning some kwargs of BaseTrainer into args. They also had the arg `learning_type`, which was passed as a kwarg to the base class and was unused there. This made things difficult to maintain, and in fact some errors were already present in the duplicated docstrings.
2. The "functions" à la `onpolicy_trainer`, which just called `OnpolicyTrainer.run`, not only introduced interface fragmentation but also completely obfuscated the docstrings and interfaces. They themselves had no docstring, and their interface was just `*args, **kwargs`, which makes it impossible to understand what they do and what can be passed without reading their implementation, then reading the docstring of the associated class, etc. Needless to say, mypy and IDEs provide no support with such functions. Nevertheless, they were used everywhere in the code base. I didn't find the sacrifices in clarity and complexity justified just for the sake of not having to write `.run()` after instantiating a trainer.
3. The trainers are all very similar to each other. Since I needed a new trainer for my application, I wanted to understand their structure.
The similarity, however, was hard to discover, since they were all in separate modules and there was so much duplication. I kept staring at the constructors for a while until I figured out that essentially no changes to the superclass were introduced. Now they are all in the same module and the similarities and differences between them are much easier to grasp (in my opinion).

4. Because of (1), I had to manually change and check a lot of code, which was very tedious and boring. This kind of work won't be necessary in the future, since IDEs can now be used for changing signatures, renaming args and kwargs, changing class names, and so on.

I have some more reasons, but maybe the above ones are convincing enough.

## Minor changes: improved input validation and types

I added input validation for things like `state` and `action_scaling` (the latter only makes sense for continuous envs). After adding this, some tests failed to pass the validation. There I added `action_scaling=isinstance(env.action_space, Box)`, after which the tests were green. I don't know why the tests were green before, since action scaling doesn't make sense for discrete actions; I guess that aspect was simply not tested and didn't crash. I also added Literal in some places, in particular for `action_bound_method`. It is no longer allowed to pass an empty string; instead, one should pass `None`. Here, too, there is input validation with clear error messages.

@Trinkle23897 The functional tests are green. I didn't want to fix the formatting, since it will change anyway in the next PR, which will solve #914. I also found a whole bunch of code in `docs/_static`, which I just deleted (shouldn't it be copied from the sources during the docs build instead of committed?). I also haven't adjusted the documentation yet, which at the moment still mentions trainers of the form `onpolicy_trainer(...)` instead of `OnpolicyTrainer(...).run()`.

## Breaking Changes

The adjustments to the trainer package introduce breaking changes, as the duplicated interfaces are deleted. However, it should be very easy for users to adjust to them; a short migration sketch follows below.

---------

Co-authored-by: Michael Panchenko <m.panchenko@appliedai.de>
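For users affected by the trainer change, the adjustment should be mechanical. Below is a hedged migration sketch: `policy`, `train_collector`, and `test_collector` are assumed to be set up exactly as before, and the keyword arguments are illustrative — pass whatever your existing `onpolicy_trainer(...)` call passed.

```python
from tianshou.trainer import OnpolicyTrainer

# Before (removed in this PR):
# result = onpolicy_trainer(policy, train_collector, test_collector, max_epoch=10, ...)

# After: instantiate the trainer class and call .run()
result = OnpolicyTrainer(
    policy=policy,
    train_collector=train_collector,
    test_collector=test_collector,
    max_epoch=10,           # illustrative values only
    step_per_epoch=50_000,
    step_per_collect=2_000,
    repeat_per_collect=10,
    episode_per_test=10,
    batch_size=64,
).run()
```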
Attached file (210 lines, 7.9 KiB, Python): the BCQ policy, which now uses the new `BatchProtocol` and `RolloutBatchProtocol` annotations.
```python
import copy
from typing import Any, Dict, Optional, Union

import numpy as np
import torch
import torch.nn.functional as F

from tianshou.data import Batch, to_torch
from tianshou.data.batch import BatchProtocol
from tianshou.data.types import RolloutBatchProtocol
from tianshou.policy import BasePolicy
from tianshou.utils.net.continuous import VAE


class BCQPolicy(BasePolicy):
    """Implementation of BCQ algorithm. arXiv:1812.02900.

    :param Perturbation actor: the actor perturbation. (s, a -> perturbed a)
    :param torch.optim.Optimizer actor_optim: the optimizer for actor network.
    :param torch.nn.Module critic1: the first critic network. (s, a -> Q(s, a))
    :param torch.optim.Optimizer critic1_optim: the optimizer for the first
        critic network.
    :param torch.nn.Module critic2: the second critic network. (s, a -> Q(s, a))
    :param torch.optim.Optimizer critic2_optim: the optimizer for the second
        critic network.
    :param VAE vae: the VAE network, generating actions similar
        to those in batch. (s, a -> generated a)
    :param torch.optim.Optimizer vae_optim: the optimizer for the VAE network.
    :param Union[str, torch.device] device: which device to create this model on.
        Default to "cpu".
    :param float gamma: discount factor, in [0, 1]. Default to 0.99.
    :param float tau: param for soft update of the target network.
        Default to 0.005.
    :param float lmbda: param for Clipped Double Q-learning. Default to 0.75.
    :param int forward_sampled_times: the number of sampled actions in forward
        function. The policy samples many actions and takes the action with the
        max value. Default to 100.
    :param int num_sampled_action: the number of sampled actions in calculating
        target Q. The algorithm samples several actions using VAE, and perturbs
        each action to get the target Q. Default to 10.
    :param lr_scheduler: a learning rate scheduler that adjusts the learning rate
        in optimizer in each policy.update(). Default to None (no lr_scheduler).

    .. seealso::

        Please refer to :class:`~tianshou.policy.BasePolicy` for more detailed
        explanation.
    """

    def __init__(
        self,
        actor: torch.nn.Module,
        actor_optim: torch.optim.Optimizer,
        critic1: torch.nn.Module,
        critic1_optim: torch.optim.Optimizer,
        critic2: torch.nn.Module,
        critic2_optim: torch.optim.Optimizer,
        vae: VAE,
        vae_optim: torch.optim.Optimizer,
        device: Union[str, torch.device] = "cpu",
        gamma: float = 0.99,
        tau: float = 0.005,
        lmbda: float = 0.75,
        forward_sampled_times: int = 100,
        num_sampled_action: int = 10,
        **kwargs: Any
    ) -> None:
        # actor is Perturbation!
        super().__init__(**kwargs)
        self.actor = actor
        self.actor_target = copy.deepcopy(self.actor)
        self.actor_optim = actor_optim

        self.critic1 = critic1
        self.critic1_target = copy.deepcopy(self.critic1)
        self.critic1_optim = critic1_optim

        self.critic2 = critic2
        self.critic2_target = copy.deepcopy(self.critic2)
        self.critic2_optim = critic2_optim

        self.vae = vae
        self.vae_optim = vae_optim

        self.gamma = gamma
        self.tau = tau
        self.lmbda = lmbda
        self.device = device
        self.forward_sampled_times = forward_sampled_times
        self.num_sampled_action = num_sampled_action

    def train(self, mode: bool = True) -> "BCQPolicy":
        """Set the module in training mode, except for the target network."""
        self.training = mode
        self.actor.train(mode)
        self.critic1.train(mode)
        self.critic2.train(mode)
        return self

    def forward(
        self,
        batch: RolloutBatchProtocol,
        state: Optional[Union[dict, BatchProtocol, np.ndarray]] = None,
        **kwargs: Any,
    ) -> Batch:
        """Compute action over the given batch data."""
        # There is "obs" in the Batch
        # obs_group: several groups. Each group has a state.
        obs_group: torch.Tensor = to_torch(batch.obs, device=self.device)
        act_group = []
        for obs in obs_group:
            # now obs is (state_dim)
            obs = (obs.reshape(1, -1)).repeat(self.forward_sampled_times, 1)
            # now obs is (forward_sampled_times, state_dim)

            # decode(obs) generates action and actor perturbs it
            act = self.actor(obs, self.vae.decode(obs))
            # now act is (forward_sampled_times, action_dim)
            q1 = self.critic1(obs, act)
            # q1 is (forward_sampled_times, 1)
            max_indice = q1.argmax(0)
            act_group.append(act[max_indice].cpu().data.numpy().flatten())
        act_group = np.array(act_group)
        return Batch(act=act_group)

    def sync_weight(self) -> None:
        """Soft-update the weight for the target network."""
        self.soft_update(self.critic1_target, self.critic1, self.tau)
        self.soft_update(self.critic2_target, self.critic2, self.tau)
        self.soft_update(self.actor_target, self.actor, self.tau)

    def learn(self, batch: RolloutBatchProtocol, *args: Any,
              **kwargs: Any) -> Dict[str, float]:
        # batch: obs, act, rew, done, obs_next. (numpy array)
        # (batch_size, state_dim)
        batch: Batch = to_torch(batch, dtype=torch.float, device=self.device)
        obs, act = batch.obs, batch.act
        batch_size = obs.shape[0]

        # mean, std: (state.shape[0], latent_dim)
        recon, mean, std = self.vae(obs, act)
        recon_loss = F.mse_loss(act, recon)
        # KL term: D_KL( N(mu, sigma) || N(0, 1) )
        KL_loss = (-torch.log(std) + (std.pow(2) + mean.pow(2) - 1) / 2).mean()
        vae_loss = recon_loss + KL_loss / 2

        self.vae_optim.zero_grad()
        vae_loss.backward()
        self.vae_optim.step()

        # critic training:
        with torch.no_grad():
            # repeat num_sampled_action times
            obs_next = batch.obs_next.repeat_interleave(self.num_sampled_action, dim=0)
            # now obs_next: (num_sampled_action * batch_size, state_dim)

            # perturbed action generated by VAE
            act_next = self.vae.decode(obs_next)
            # now act_next: (num_sampled_action * batch_size, action_dim)
            target_Q1 = self.critic1_target(obs_next, act_next)
            target_Q2 = self.critic2_target(obs_next, act_next)

            # Clipped Double Q-learning
            target_Q = \
                self.lmbda * torch.min(target_Q1, target_Q2) + \
                (1 - self.lmbda) * torch.max(target_Q1, target_Q2)
            # now target_Q: (num_sampled_action * batch_size, 1)

            # take the max value of Q over the sampled actions
            target_Q = target_Q.reshape(batch_size, -1).max(dim=1)[0].reshape(-1, 1)
            # now target_Q: (batch_size, 1)

            target_Q = \
                batch.rew.reshape(-1, 1) + \
                (1 - batch.done).reshape(-1, 1) * self.gamma * target_Q

        current_Q1 = self.critic1(obs, act)
        current_Q2 = self.critic2(obs, act)

        critic1_loss = F.mse_loss(current_Q1, target_Q)
        critic2_loss = F.mse_loss(current_Q2, target_Q)

        self.critic1_optim.zero_grad()
        self.critic2_optim.zero_grad()
        critic1_loss.backward()
        critic2_loss.backward()
        self.critic1_optim.step()
        self.critic2_optim.step()

        sampled_act = self.vae.decode(obs)
        perturbed_act = self.actor(obs, sampled_act)

        # maximize the critic value of the perturbed action
        actor_loss = -self.critic1(obs, perturbed_act).mean()

        self.actor_optim.zero_grad()
        actor_loss.backward()
        self.actor_optim.step()

        # update target network
        self.sync_weight()

        result = {
            "loss/actor": actor_loss.item(),
            "loss/critic1": critic1_loss.item(),
            "loss/critic2": critic2_loss.item(),
            "loss/vae": vae_loss.item(),
        }
        return result
```
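As a closing illustration of Major Change 1 applied to code like the BCQ policy above, here is a hedged sketch of how the protocol annotation helps downstream code. The helper `mean_reward` and the field values are made up for illustration, and `cast` is used because a plain `Batch` only satisfies `RolloutBatchProtocol` structurally, not nominally.

```python
from typing import cast

import numpy as np

from tianshou.data import Batch
from tianshou.data.types import RolloutBatchProtocol


def mean_reward(batch: RolloutBatchProtocol) -> float:
    # The annotation tells mypy/the IDE that `rew` exists; a typo such as
    # `batch.reward` is flagged statically instead of failing at runtime.
    return float(np.mean(batch.rew))


# A plain Batch carrying the protocol's fields, cast for the type checker.
dummy = cast(
    RolloutBatchProtocol,
    Batch(
        obs=np.zeros((4, 3), dtype=np.float32),
        obs_next=np.zeros((4, 3), dtype=np.float32),
        rew=np.ones(4, dtype=np.float32),
        terminated=np.zeros(4, dtype=bool),
        truncated=np.zeros(4, dtype=bool),
        info=Batch(),
    ),
)
print(mean_reward(dummy))  # 1.0
```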