W&B: Add usage in the docs (#463)

Ayush Chaurasia 2021-10-13 20:58:25 +05:30 committed by GitHub
parent 926ec0b9b1
commit 63d752ee0b
4 changed files with 65 additions and 0 deletions

README.md

@@ -194,6 +194,7 @@ buffer_size = 20000
eps_train, eps_test = 0.1, 0.05
step_per_epoch, step_per_collect = 10000, 10
logger = ts.utils.TensorboardLogger(SummaryWriter('log/dqn')) # TensorBoard is supported!
# For other loggers: https://tianshou.readthedocs.io/en/master/tutorials/logger.html
```
Make environments:

docs/index.rst

@@ -93,6 +93,7 @@ Tianshou is still under development, you can also check out the documents in sta
tutorials/concepts
tutorials/batch
tutorials/tictactoe
tutorials/logger
tutorials/benchmark
tutorials/cheatsheet

docs/spelling_wordlist.txt

@@ -14,6 +14,7 @@ timestep
numpy
ndarray
stackoverflow
tensorboard
len
tac
fqf

docs/tutorials/logger.rst (new file, 62 lines)

@@ -0,0 +1,62 @@
Logging Experiments
===================
Tianshou comes with multiple experiment tracking and logging solutions to manage and reproduce your experiments.
The dashboard loggers currently available are:
* :class:`~tianshou.utils.TensorboardLogger`
* :class:`~tianshou.utils.WandbLogger`
* :class:`~tianshou.utils.LazyLogger`
TensorboardLogger
-----------------
TensorBoard tracks your experiment metrics in a local dashboard. Here is how you can use :class:`~tianshou.utils.TensorboardLogger` in your experiment:
::
import os

from torch.utils.tensorboard import SummaryWriter
from tianshou.utils import TensorboardLogger

log_path = os.path.join(args.logdir, args.task, "dqn")
writer = SummaryWriter(log_path)
writer.add_text("args", str(args))
logger = TensorboardLogger(writer)
result = trainer(..., logger=logger)
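You can then inspect the recorded metrics locally by pointing TensorBoard at the log directory, e.g. ``tensorboard --logdir log``.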
WandbLogger
-----------
:class:`~tianshou.utils.WandbLogger` can be used to visualize your experiments in a hosted `W&B dashboard <https://wandb.ai/home>`_. The ``wandb`` package can be installed via ``pip install wandb``. You can also save your checkpoints in the cloud and restore your runs from those checkpoints. Here is how you can enable WandbLogger:
::
from tianshou.utils import WandbLogger
logger = WandbLogger(...)
result = trainer(..., logger=logger)
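A slightly more concrete sketch (``project``, ``name``, and ``save_interval`` are assumed constructor keywords here; check the class signature for the exact set):
::
from tianshou.utils import WandbLogger

# hypothetical configuration; adapt the keyword arguments to the class signature
logger = WandbLogger(
    project="tianshou",   # W&B project to log the run under
    name="dqn-cartpole",  # display name of this run in the dashboard
    save_interval=1,      # how often checkpoint artifacts are versioned
)
result = trainer(..., logger=logger)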
Please refer to :class:`~tianshou.utils.WandbLogger` documentation for advanced configuration.
For logging checkpoints on any device, you need to define a ``save_checkpoint_fn`` which saves the experiment checkpoint and returns the path of the saved checkpoint:
::
def save_checkpoint_fn(epoch, env_step, gradient_step):
ckpt_path = ...
# save model
return ckpt_path
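For example, a minimal sketch assuming a PyTorch policy, where ``policy``, ``optim``, and ``log_path`` come from the surrounding training script:
::
import os

import torch

def save_checkpoint_fn(epoch, env_step, gradient_step):
    # hypothetical layout: one checkpoint file per epoch under log_path
    ckpt_path = os.path.join(log_path, f"checkpoint_{epoch}.pth")
    torch.save(
        {"model": policy.state_dict(), "optim": optim.state_dict()},
        ckpt_path,
    )
    return ckpt_path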
Then, use this function with ``WandbLogger`` to automatically version your experiment checkpoints after every ``save_interval`` steps.
To resume a run from its checkpoint artifacts on any device, pass the W&B ``run_id`` of the run that you want to continue to ``WandbLogger``. It will then download the latest version of the checkpoint and resume your run from it.
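A sketch of resuming, assuming the trainer's ``resume_from_log`` flag and a hypothetical run id copied from the original run's W&B page:
::
from tianshou.utils import WandbLogger

logger = WandbLogger(run_id="1a2b3c4d")  # hypothetical id of the run to continue
result = trainer(
    ...,
    logger=logger,
    resume_from_log=True,  # restore the saved epoch/step counters
    save_checkpoint_fn=save_checkpoint_fn,
)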
The example scripts are under `test_psrl.py <https://github.com/thu-ml/tianshou/blob/master/test/modelbased/test_psrl.py>`_ and `atari_dqn.py <https://github.com/thu-ml/tianshou/blob/master/examples/atari/atari_dqn.py>`_.
LazyLogger
----------
This is a placeholder logger that does nothing.
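A sketch of its use, e.g. for a quick smoke test where no metrics should be written anywhere:
::
from tianshou.utils import LazyLogger

logger = LazyLogger()  # every write is a no-op
result = trainer(..., logger=logger)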