* make fields with empty Batch rather than None after reset
* add reward_length argument for collector; bugfix for reward_length
* add get_final_reward_fn argument to collector to deal with marl
* Improve Batch (#126): make sure the key type of Batch is string, and add unit tests; add is_empty() function and unit tests; enable cat of mixing dict and Batch, just like stack
* Improve Batch (#128): improve and implement Batch.cat_; bugfix for buffer.sample with field impt_weight; restore the usage of a.cat_(b); fix 2 bugs in batch and add corresponding unittest; update is_empty to recognize empty over empty; bugfix for len and update, with testcases
* Fix padding of inconsistent keys with Batch.stack and Batch.cat (#130): re-implement Batch.stack with testcases and doc; reuse _create_values and refactor stack_ & cat_; raise exception for stacking with partial keys and axis != 0
* Improve collector (#125): remove multibuf; add reward_metric; fix reward stat in collector; simplify test/base/env.py; many fixes and refactoring
* Vector env enable select worker (#132): enable selecting the worker for the vector env step method; update collector to match the new selective-worker behavior
* Standardized behavior of Batch.cat and misc code refactor (#137): remove unused kwargs; add reward_normalization for dqn; bugfix for __setitem__ with torch.Tensor; support cat and stack with empty Batch; specify the semantics of empty Batch by test cases; refactor code to reflect the shared / partial / reserved categories of keys; add is_empty(recurse=False); bugfix for algebra operators and for storing None; hide lens away from users; add a _parse_value function
* write tutorials to specify the standard of Batch (#142): add doc for len exceptions; bugfix for shape of Batch(a=1); unify _to_array_with_correct_type and delegate type conversion to Batch.__init__; polish and trim the Batch tutorial
* add multi-agent example: tic-tac-toe, generalized to gomoku with larger board size and win-size; move TicTacToeEnv to a separate file and the example to test; show the last move by capital letters; show the step number at the board; update player id to start from 1 and change player to agent; let the agent learn to play as agent 2, which is harder
* add a multi-agent tutorial; update docs and cheatsheet; fix docs and links
* simplify mapolicy and maenv; remove MADQN; separate RandomAgent; move basevecenv to a single file; change legal_actions to a boolean mask and update docs accordingly
* fix pep8, typos, and misc polish

Co-authored-by: Trinkle23897 <463003665@qq.com>
Co-authored-by: Alexis DUBURCQ <alexis.duburcq@gmail.com>
Co-authored-by: Alexis Duburcq <alexis.duburcq@wandercraft.eu>
137 lines
5.2 KiB
Python
import gym
import numpy as np
from functools import partial
from typing import Tuple, Optional, Union

from tianshou.env import MultiAgentEnv

class TicTacToeEnv(MultiAgentEnv):
    """This is a simple implementation of the Tic-Tac-Toe game, where two
    agents play against each other.

    The implementation is intended to show how to wrap an environment to
    satisfy the interface of :class:`~tianshou.env.MultiAgentEnv`.

    :param size: the size of the board (square board)
    :param win_size: how many pieces in a row are considered a win
    """

    def __init__(self, size: int = 3, win_size: int = 3):
        super().__init__()
        assert size > 0, f'board size should be positive, but got {size}'
        self.size = size
        assert win_size > 0, f'win-size should be positive, but got {win_size}'
        self.win_size = win_size
        assert win_size <= size, f'win-size {win_size} should not ' \
            f'be larger than board size {size}'
        self.convolve_kernel = np.ones(win_size)
        self.observation_space = gym.spaces.Box(
            low=-1.0, high=1.0, shape=(size, size), dtype=np.float32)
        self.action_space = gym.spaces.Discrete(size * size)
        self.current_board = None
        self.current_agent = None
        self._last_move = None
        self.step_num = None

    def reset(self) -> dict:
        self.current_board = np.zeros((self.size, self.size), dtype=np.int32)
        self.current_agent = 1
        self._last_move = (-1, -1)
        self.step_num = 0
        return {
            'agent_id': self.current_agent,
            'obs': np.array(self.current_board),
            'mask': self.current_board.flatten() == 0
        }

    def step(self, action: 'Union[int, np.ndarray]'
             ) -> Tuple[dict, np.ndarray, np.ndarray, dict]:
        if self.current_agent is None:
            raise ValueError(
                "calling step() of unreset environment is prohibited!")
        assert 0 <= action < self.size * self.size
        assert self.current_board.item(action) == 0
        _current_agent = self.current_agent
        self._move(action)
        mask = self.current_board.flatten() == 0
        is_win = self._test_win()
        is_opponent_win = False
        # the game is over when one wins or there is only one empty place
        done = is_win
        if sum(mask) == 1:
            # the opponent's last move is forced, so play it immediately
            # and let the episode end here
            done = True
            self._move(np.where(mask)[0][0])
            is_opponent_win = self._test_win()
        if is_win:
            reward = 1
        elif is_opponent_win:
            reward = -1
        else:
            reward = 0
        obs = {
            'agent_id': self.current_agent,
            'obs': np.array(self.current_board),
            'mask': mask
        }
        rew_agent_1 = reward if _current_agent == 1 else (-reward)
        rew_agent_2 = reward if _current_agent == 2 else (-reward)
        vec_rew = np.array([rew_agent_1, rew_agent_2], dtype=np.float32)
        if done:
            self.current_agent = None
        return obs, vec_rew, np.array(done), {}

    def _move(self, action):
        row, col = action // self.size, action % self.size
        if self.current_agent == 1:
            self.current_board[row, col] = 1
        else:
            self.current_board[row, col] = -1
        self.current_agent = 3 - self.current_agent
        self._last_move = (row, col)
        self.step_num += 1

    def _test_win(self):
        """Test if someone wins by checking the situation around last move."""
        row, col = self._last_move
        rboard = self.current_board[row, :]
        cboard = self.current_board[:, col]
        current = self.current_board[row, col]
        rightup = [self.current_board[row - i, col + i]
                   for i in range(1, self.size - col) if row - i >= 0]
        leftdown = [self.current_board[row + i, col - i]
                    for i in range(1, col + 1) if row + i < self.size]
        rdiag = np.array(leftdown[::-1] + [current] + rightup)
        rightdown = [self.current_board[row + i, col + i]
                     for i in range(1, self.size - col) if row + i < self.size]
        leftup = [self.current_board[row - i, col - i]
                  for i in range(1, col + 1) if row - i >= 0]
        diag = np.array(leftup[::-1] + [current] + rightdown)
        results = [np.convolve(k, self.convolve_kernel, mode='valid')
                   for k in (rboard, cboard, rdiag, diag)]
        return any((np.abs(x) == self.win_size).any() for x in results)

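The win test above relies on a single convolution trick: with stones encoded as +1 and -1, summing every window of length `win_size` along a line yields `±win_size` exactly when one player owns the whole window. A standalone sketch of the idea (the helper `has_win` is illustrative only, not part of this file):

```python
import numpy as np

def has_win(line: np.ndarray, win_size: int) -> bool:
    """Return True if either player has win_size stones in a row on `line`.

    Stones are encoded as +1 (agent 1) and -1 (agent 2), empty cells as 0,
    matching the board encoding used by TicTacToeEnv.
    """
    kernel = np.ones(win_size)
    # np.convolve with mode='valid' sums every length-win_size window;
    # |sum| == win_size only when one player fills the entire window
    sums = np.convolve(line, kernel, mode='valid')
    return bool((np.abs(sums) == win_size).any())

print(has_win(np.array([0, 1, 1, 1]), 3))    # True: three +1 stones in a row
print(has_win(np.array([1, -1, 1, -1]), 3))  # False: no window owned by one player
```

The mixed-sign case is why the absolute value is compared against `win_size` rather than checking for any nonzero sum: a window such as `[1, -1, 1]` sums to 1, not 3.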
    def seed(self, seed: Optional[int] = None) -> None:
        pass

    def render(self, **kwargs) -> None:
        print(f'board (step {self.step_num}):')
        pad = '==='
        top = pad + '=' * (2 * self.size - 1) + pad
        print(top)

        def f(i, data):
            j, number = data
            last_move = (i == self._last_move[0] and j == self._last_move[1])
            if number == 1:
                return 'X' if last_move else 'x'
            if number == -1:
                return 'O' if last_move else 'o'
            return '_'

        for i, row in enumerate(self.current_board):
            print(pad + ' '.join(map(partial(f, i), enumerate(row))) + pad)
        print(top)

    def close(self) -> None:
        pass
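The dict returned by `reset()` and `step()` follows a fixed contract for multi-agent policies: `agent_id` (whose turn it is), `obs` (the raw board), and `mask` (which actions are legal). A minimal sketch of that contract for an empty 3x3 board, with the values constructed by hand rather than by instantiating the class:

```python
import numpy as np

size = 3
board = np.zeros((size, size), dtype=np.int32)
obs = {
    'agent_id': 1,                 # player 1 always moves first after reset
    'obs': board.copy(),           # board cells: +1 (agent 1), -1 (agent 2), 0 (empty)
    'mask': board.flatten() == 0,  # boolean mask: True where a move is legal
}
# a policy should only sample actions where the mask is True
legal_actions = np.flatnonzero(obs['mask'])
print(len(legal_actions))  # 9: every cell of an empty 3x3 board is legal
```

Because the mask is a plain boolean array over the flattened board, an action index maps back to a cell as `(action // size, action % size)`, exactly as `_move` does above.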