Closes: https://github.com/thu-ml/tianshou/issues/1086
### Api Extensions
- Batch received a new method: `to_numpy_`. #1098
- `to_dict` in Batch now also supports non-recursive conversion. #1098
- Batch now implements `__eq__`, enabling semantic equality checks between
batches. #1098
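A minimal sketch of how these additions can be used; the exact keyword for non-recursive `to_dict` conversion is an assumption here and may differ:
```python
import torch

from tianshou.data import Batch

b = Batch(a=torch.zeros(3))
c = b.to_numpy()       # returns a converted Batch (no longer in-place, see Breaking Changes)
b.to_numpy_()          # new: converts in-place
assert b == c          # new: semantic equality via __eq__
d = b.to_dict(recursive=False)  # non-recursive conversion; keyword name is an assumption
```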
### Breaking Changes
- The method `to_numpy` in `data.utils.batch.Batch` is not in-place
anymore. Instead, a new method `to_numpy_` does the conversion in-place.
#1098
Closes #952
- `SamplingConfig` supports `batch_size=None` (see the sketch below). #1077
- Tests and examples are covered by `mypy`. #1077
- `NetBase` is used more consistently and was made generic, with stricter
typing. #1077
- `utils.net.common.Recurrent` now receives and returns a
`RecurrentStateBatch` instead of a dict. #1077
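A small sketch of the `SamplingConfig` change; the import path is assumed from the high-level API and the semantics are as described in #1077:
```python
from tianshou.highlevel.config import SamplingConfig

# batch_size=None is now an accepted value (e.g. to use the whole buffer per update)
config = SamplingConfig(batch_size=None)
```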
---------
Co-authored-by: Michael Panchenko <m.panchenko@appliedai.de>
Closes: #1058
### Api Extensions
- Batch received two new methods: `to_dict` and `to_list_of_dicts`.
#1063
- `Collector`s can now be closed, and their reset is more granular.
#1063
- Trainers can control whether collectors should be reset prior to
training. #1063
- Convenience constructor for `CollectStats` called
`with_autogenerated_stats`. #1063
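A rough sketch of the resulting collector workflow, using the method names from the items above; `policy` and `envs` are assumed to exist already:
```python
collector = Collector(policy, envs)      # assuming an existing policy and vectorized envs
collector.reset()                        # granular variants such as reset_env() / reset_buffer() also exist
stats = collector.collect(n_step=1000)   # returns a CollectStats instance
collector.close()                        # collectors can now be closed
```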
### Internal Improvements
- `Collector`s rely less on state; the few stateful things are stored
explicitly instead of through a `.data` attribute. #1063
- Introduced a first iteration of a naming convention for vars in
`Collector`s. #1063
- Generally improved readability of Collector code and associated tests
(still quite some way to go). #1063
- Improved typing for `exploration_noise` and within Collector. #1063
### Breaking Changes
- Removed `.data` attribute from `Collector` and its child classes.
#1063
- Collectors no longer reset the environment on initialization. Instead,
the user may need to call `reset`
explicitly or pass `reset_before_collect=True`. #1063
- VectorEnvs now return an array of info-dicts on reset instead of a
list. #1063
- Fixed `iter(Batch(...))`, which now behaves the same way as
`Batch(...).__iter__()`. Can be considered a bugfix. #1063
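A short sketch of the iteration fix above (behavior as described in the item):
```python
import numpy as np

from tianshou.data import Batch

b = Batch(obs=np.arange(3), act=np.zeros(3))
for row in b:                  # iter(b) now delegates to Batch.__iter__()
    print(row.obs, row.act)    # yields per-index sub-batches along the first dimension
```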
---------
Co-authored-by: Michael Panchenko <m.panchenko@appliedai.de>
Closes #947
This removes all kwargs from all policy constructors. While doing that,
I also improved several names and added a whole lot of TODOs.
## Functional changes:
1. Added the possibility to pass `None` as `critic2` and `critic2_optim`. In
fact, the resulting default behavior should cover the vast majority of
cases
2. Added a function called `clone_optimizer` as a temporary measure to
support passing `critic2_optim=None` (a sketch follows below)
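A hypothetical sketch of what such a `clone_optimizer` helper might look like; the actual implementation in tianshou may differ:
```python
from typing import Iterable, TypeVar

import torch

TOptim = TypeVar("TOptim", bound=torch.optim.Optimizer)


def clone_optimizer(optim: TOptim, new_params: Iterable[torch.nn.Parameter]) -> TOptim:
    """Create an optimizer of the same class and hyperparameters for another parameter set."""
    return type(optim)(new_params, **optim.defaults)
```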
## Breaking changes:
1. `action_space` is no longer optional. In fact, it was already effectively
non-optional, since `BasePolicy.__init__` raised a ValueError otherwise. Several
examples were fixed to reflect that
2. `reward_normalization` removed from DDPG and children. It was never
allowed to pass it as `True` there; an error would have been raised in
`compute_n_step_reward`. Now I have removed it from the interface
3. renamed `critic1` and similar to `critic`, in order to have uniform
interfaces. Note that the `critic` in DDPG was optional for the sole
reason that child classes used `critic1`. I removed this optionality
(DDPG can't do anything with `critic=None`)
4. Several renamings of fields (mostly private to public, so backwards
compatible)
## Additional changes:
1. Removed type and default declarations from docstrings. This kind of
duplication is really not necessary
2. Policy constructors are now only called using named arguments, not a
fragile mixture of positional and named as before
3. Minor beautifications in typing and code
4. Generally shortened docstrings and made them uniform across all
policies (hopefully)
## Comment:
With these changes, several problems in tianshou's inheritance hierarchy
become more apparent. I tried highlighting them for future work.
---------
Co-authored-by: Dominik Jain <d.jain@appliedai.de>
- [X] I have marked all applicable categories:
+ [ ] exception-raising fix
+ [ ] algorithm implementation fix
+ [ ] documentation modification
+ [X] new feature
- [X] I have reformatted the code using `make format` (**required**)
- [X] I have checked the code using `make commit-checks` (**required**)
- [X] If applicable, I have mentioned the relevant/related issue(s)
- [X] If applicable, I have listed all items of this Pull Request
below
Closes #914
Additional changes:
- Deprecate Python below 3.11
- Remove 3rd party and throughput tests. This simplifies install and
test pipeline
- Remove gym compatibility and shimmy
- Format with 3.11 conventions. In particular, add `zip(...,
strict=True/False)` where possible
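For illustration, the `strict` flag works like this (plain Python 3.10+, not tianshou-specific):
```python
xs = [1, 2, 3]
ys = ["a", "b", "c"]
pairs = list(zip(xs, ys, strict=True))  # raises ValueError if the iterables ever differ in length
```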
Since the additional tests and gym were complicating the CI pipeline
(flaky and dist-dependent), it didn't make sense to work on fixing the
current tests in this PR to then just delete them in the next one. So
this PR changes the build and removes these tests at the same time.
Preparation for #914 and #920
Changes formatting to ruff and black. Removes Python 3.8 support.
## Additional Changes
- Removed flake8 dependencies
- Adjusted pre-commit. Now CI and Make use pre-commit, reducing the
duplication of linting calls
- Removed check-docstyle option (ruff is doing that)
- Merged format and lint. In CI, the format-lint step fails if any
changes are made, so it fulfills the lint functionality.
---------
Co-authored-by: Jiayi Weng <jiayi@openai.com>
# Goals of the PR
The PR introduces **no changes to functionality**, apart from improved
input validation here and there. The main goals are to reduce some
complexity of the code, to improve types and IDE completions, and to
extend documentation and block comments where appropriate. Because of
the change to the trainer interfaces, many files are affected (more
details below), but still the overall changes are "small" in a certain
sense.
## Major Change 1 - BatchProtocol
**TL;DR:** One can now annotate which fields the batch is expected to
have on input params and which fields a returned batch has. Should be
useful for reading the code, getting meaningful IDE support, and
catching bugs with mypy. This annotation strategy will continue to work
if Batch is replaced by TensorDict or by something else.
**In more detail:** Batch itself has no fields and using it for
annotations is of limited informational power. Batches with fields are
not separate classes but instead instances of Batch directly, so there
is no type that could be used for annotation. Fortunately, Python's
`Protocol` comes to the rescue. With these changes, we can now do
things like:
```python
class ActionBatchProtocol(BatchProtocol):
    logits: Sequence[Union[tuple, torch.Tensor]]
    dist: torch.distributions.Distribution
    act: torch.Tensor
    state: Optional[torch.Tensor]


class RolloutBatchProtocol(BatchProtocol):
    obs: torch.Tensor
    obs_next: torch.Tensor
    info: Dict[str, Any]
    rew: torch.Tensor
    terminated: torch.Tensor
    truncated: torch.Tensor


class PGPolicy(BasePolicy):
    ...

    def forward(
        self,
        batch: RolloutBatchProtocol,
        state: Optional[Union[dict, Batch, np.ndarray]] = None,
        **kwargs: Any,
    ) -> ActionBatchProtocol:
        ...
```
The IDE and mypy are now very helpful in finding errors and in
auto-completion, whereas before the tools couldn't assist in that at
all.
## Major Change 2 - remove duplication in trainer package
**TL;DR:** There was a lot of duplication between `BaseTrainer` and its
subclasses. Even worse, it was almost-duplication. There was also
interface fragmentation through things like `onpolicy_trainer`. Now this
duplication is gone and all downstream code was adjusted.
**In more detail:** Since this change affects a lot of code, I would
like to explain why I thought it to be necessary.
1. The subclasses of `BaseTrainer` just duplicated docstrings and
constructors. What's worse, they changed the order of args there, even
turning some kwargs of BaseTrainer into args. They also had the arg
`learning_type` which was passed as kwarg to the base class and was
unused there. This made things difficult to maintain, and in fact some
errors were already present in the duplicated docstrings.
2. The "functions" a la `onpolicy_trainer`, which just called the
`OnpolicyTrainer.run`, not only introduced interface fragmentation but
also completely obfuscated the docstring and interfaces. They themselves
had no docstring and the interface was just `*args, **kwargs`, which
makes it impossible to understand what they do and which things can be
passed without reading their implementation, then reading the docstring
of the associated class, etc. Needless to say, mypy and IDEs provide no
support with such functions. Nevertheless, they were used everywhere in
the code-base. I didn't find the sacrifices in clarity and complexity
justified just for the sake of not having to write `.run()` after
instantiating a trainer.
3. The trainers are all very similar to each other. Since I needed a new
trainer for my application, I wanted to understand their
structure. The similarity, however, was hard to discover since they were
all in separate modules and there was so much duplication. I kept
staring at the constructors for a while until I figured out that
essentially no changes to the superclass were introduced. Now they are
all in the same module and the similarities/differences between them are
much easier to grasp (in my opinion)
4. Because of (1), I had to manually change and check a lot of code,
which was very tedious and boring. This kind of work won't be necessary
in the future, since now IDEs can be used for changing signatures,
renaming args and kwargs, changing class names and so on.
I have some more reasons, but maybe the above ones are convincing
enough.
## Minor changes: improved input validation and types
I added input validation for things like `state` and `action_scaling`
(which only makes sense for continuous envs). After adding this, some
tests failed to pass this validation. There I added
`action_scaling=isinstance(env.action_space, Box)`, after which tests
were green. I don't know why the tests were green before, since action
scaling doesn't make sense for discrete actions. I guess some aspect was
not tested and didn't crash.
I also added Literal in some places, in particular for
`action_bound_method`. Now it is no longer allowed to pass an empty
string, instead one should pass `None`. Also here there is input
validation with clear error messages.
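A sketch of the pattern used in the adjusted tests, with `...` standing in for the remaining constructor arguments and `PGPolicy` used as an example policy class:
```python
from gymnasium.spaces import Box  # or gym.spaces.Box, depending on the version in use

policy = PGPolicy(
    ...,
    action_scaling=isinstance(env.action_space, Box),  # scaling only makes sense for continuous spaces
    action_bound_method=None,  # or e.g. "clip"; an empty string is no longer accepted
)
```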
@Trinkle23897 The functional tests are green. I didn't want to fix the
formatting, since it will change in the next PR that will solve #914
anyway. I also found a whole bunch of code in `docs/_static`, which I
just deleted (shouldn't it be copied from the sources during docs build
instead of committed?). I also haven't adjusted the documentation yet,
which at the moment still mentions the trainers of the type
`onpolicy_trainer(...)` instead of `OnpolicyTrainer(...).run()`.
## Breaking Changes
The adjustments to the trainer package introduce breaking changes as
duplicated interfaces are deleted. However, it should be very easy for
users to adjust to them.
---------
Co-authored-by: Michael Panchenko <m.panchenko@appliedai.de>
- [ ] I have marked all applicable categories:
+ [x] exception-raising fix
+ [ ] algorithm implementation fix
+ [ ] documentation modification
+ [ ] new feature
- [ ] I have reformatted the code using `make format` (**required**)
- [ ] I have checked the code using `make commit-checks` (**required**)
- [ ] If applicable, I have mentioned the relevant/related issue(s)
- [ ] If applicable, I have listed all items of this Pull Request
below
I'm developing a new PettingZoo environment. It is a two-player, turn-based
board game.
```
obs_space = dict(
    board=gym.spaces.MultiBinary([8, 8]),
    player=gym.spaces.Tuple([gym.spaces.Discrete(8)] * 2),
    other_player=gym.spaces.Tuple([gym.spaces.Discrete(8)] * 2),
)
self._observation_space = gym.spaces.Dict(spaces=obs_space)
self._action_space = gym.spaces.Tuple([gym.spaces.Discrete(8)] * 2)
...

# this cache ensures that same space object is returned for the same agent
# allows action space seeding to work as expected
@functools.lru_cache(maxsize=None)
def observation_space(self, agent):
    # gymnasium spaces are defined and documented here:
    # https://gymnasium.farama.org/api/spaces/
    return self._observation_space

@functools.lru_cache(maxsize=None)
def action_space(self, agent):
    return self._action_space
```
My test is:
```
def test_with_tianshou():
    action = None

    # env = gym.make('qwertyenv/CollectCoins-v0', pieces=['rock', 'rock'])
    env = CollectCoinsEnv(pieces=['rock', 'rock'], with_mask=True)

    def another_action_taken(action_taken):
        nonlocal action
        action = action_taken

    # Wrapping the original environment to make sure a valid action will be taken.
    env = EnsureValidAction(
        env,
        env.check_action_valid,
        env.provide_alternative_valid_action,
        another_action_taken,
    )
    env = PettingZooEnv(env)
    policies = MultiAgentPolicyManager([RandomPolicy(), RandomPolicy()], env)
    env = DummyVectorEnv([lambda: env])
    collector = Collector(policies, env)
    result = collector.collect(n_step=200, render=0.1)
```
I also have a wrapper that may be redundant given Tianshou's action-mask capability, yet it is still part of the code:
```
from typing import TypeVar, Callable

import gymnasium as gym
from pettingzoo.utils.wrappers import BaseWrapper

Action = TypeVar("Action")


class ActionWrapper(BaseWrapper):
    def __init__(self, env: gym.Env):
        super().__init__(env)

    def step(self, action):
        action = self.action(action)
        self.env.step(action)

    def action(self, action):
        pass

    def render(self, *args, **kwargs):
        self.env.render(*args, **kwargs)


class EnsureValidAction(ActionWrapper):
    """
    A gym environment wrapper to help with the case that the agent wants to
    take invalid actions.
    For example, consider a chess game where you let the action_space be any
    piece moving to any square on the board, but then when a wrong move is
    taken, instead of returning a big negative reward, you just take another
    action, this time a valid one. To make sure the learning algorithm is
    aware of the action taken, a callback should be provided.
    """

    def __init__(self, env: gym.Env,
                 check_action_valid: Callable[[Action], bool],
                 provide_alternative_valid_action: Callable[[Action], Action],
                 alternative_action_cb: Callable[[Action], None]):
        super().__init__(env)
        self.check_action_valid = check_action_valid
        self.provide_alternative_valid_action = provide_alternative_valid_action
        self.alternative_action_cb = alternative_action_cb

    def action(self, action: Action) -> Action:
        if self.check_action_valid(action):
            return action
        alternative_action = self.provide_alternative_valid_action(action)
        self.alternative_action_cb(alternative_action)
        return alternative_action
```
To make the above work, I had to patch PettingZoo a bit (I opened a pull request there) and apply a small patch here (this PR).
Maybe I'm doing something wrong, yet I fail to see it.
With both my fixes, to PettingZoo and to Tianshou, I have two tests: one of the environment by itself, and the other as shown above.
- Batch: do not raise error when it finds list of np.array with different shape[0].
- Venv's obs: add try...except block for np.stack(obs_list)
- remove venv.__del__ since it is buggy
Change the behavior of to_numpy and to_torch: from now on, dict is automatically converted to Batch and list is automatically converted to np.ndarray (if an error occurs, raise the exception instead of converting each element in the list).
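A small sketch of the new conversion behavior, using the helper names from `tianshou.data`:
```python
import numpy as np

from tianshou.data import to_numpy, to_torch

to_numpy({"a": np.zeros(3)})   # dict -> Batch (of numpy arrays)
to_torch([[1, 2], [3, 4]])     # list -> converted as a whole np.ndarray first;
                               # a ragged list now raises instead of being converted element-wise
```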
This is the third PR of 6 commits mentioned in #274, which features a refactor of Collector to fix #245. You can check #274 for more details.
Things changed in this PR:
1. refactor the collector to be cleaner; split out AsyncCollector to support async venvs;
2. change the buffer.add API to add(batch, buffer_ids); add several types of buffer (VectorReplayBuffer, PrioritizedVectorReplayBuffer, etc. — see the sketch after this list)
3. add policy.exploration_noise(act, batch) -> act
4. small change in BasePolicy.compute_*_returns
5. move reward_metric from collector to trainer
6. fix np.asanyarray issue (different numpy versions produce different output)
7. flake8 maxlength=88
8. polish docs and fix test
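A rough sketch of the reworked buffer API from items 2 and 3 above (constructor arguments as in current tianshou; the concrete values are only illustrative):
```python
from tianshou.data import PrioritizedVectorReplayBuffer, VectorReplayBuffer

buf = VectorReplayBuffer(total_size=20000, buffer_num=4)  # 4 sub-buffers, one per env
pbuf = PrioritizedVectorReplayBuffer(total_size=20000, buffer_num=4, alpha=0.6, beta=0.4)
# new add signature:  buf.add(batch, buffer_ids=[0, 1, 2, 3])
# new policy hook:    act = policy.exploration_noise(act, batch)
```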
Co-authored-by: n+e <trinkle23897@gmail.com>
Cherry-pick from #200
- update the function signature
- format code-style
- move _compile into separate functions
- fix a bug in to_torch and to_numpy (Batch)
- remove None in action_range
In short, the code-format only contains function-signature style and `'` -> `"`. (pick up from [black](https://github.com/psf/black))
1. add policy.eval() in all test scripts' "watch performance"
2. remove dict return support for collector preprocess_fn
3. add `__contains__` and `pop` in batch: `key in batch`, `batch.pop(key, deft)` (see the sketch after this list)
4. exact n_episode for a list of n_episode limitations, and save fake data in cache_buffer when self.buffer is None (#184)
5. fix tensorboard logging: h-axis stands for env step instead of gradient step; add test results into tensorboard
6. add test_returns (both GAE and nstep)
7. change the type-checking order in batch.py and converter.py in order to meet the most often case first
8. fix shape inconsistency for torch.Tensor in replay buffer
9. remove `**kwargs` in ReplayBuffer
10. remove default value in batch.split() and add merge_last argument (#185)
11. improve nstep efficiency
12. add max_batchsize in onpolicy algorithms
13. potential bugfix for subproc.wait
14. fix RecurrentActorProb
15. improve the code-coverage (from 90% to 95%) and remove the dead code
16. fix some incorrect type annotation
The above improvements also increase the training FPS: on my computer, the previous version reaches only ~1800 FPS, and after the change it can reach ~2050 FPS (faster than v0.2.4.post1).
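A tiny sketch of the `__contains__` / `pop` additions from item 3 above:
```python
from tianshou.data import Batch

b = Batch(obs=[1, 2, 3], act=[0, 1, 0])
assert "obs" in b            # __contains__
act = b.pop("act", None)     # pop with a default value
```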
* make fields with empty Batch rather than None after reset
* dummy code
* remove dummy
* add reward_length argument for collector
* Improve Batch (#126)
* make sure the key type of Batch is string, and add unit tests
* add is_empty() function and unit tests
* enable cat of mixing dict and Batch, just like stack
* bugfix for reward_length
* add get_final_reward_fn argument to collector to deal with marl
* minor polish
* remove multibuf
* minor polish
* improve and implement Batch.cat_
* bugfix for buffer.sample with field impt_weight
* restore the usage of a.cat_(b)
* fix 2 bugs in batch and add corresponding unittest
* code fix for update
* update is_empty to recognize empty over empty; bugfix for len
* bugfix for update and add testcase
* add testcase of update
* make fields with empty Batch rather than None after reset
* dummy code
* remove dummy
* add reward_length argument for collector
* bugfix for reward_length
* add get_final_reward_fn argument to collector to deal with marl
* make sure the key type of Batch is string, and add unit tests
* add is_empty() function and unit tests
* enable cat of mixing dict and Batch, just like stack
* dummy code
* remove dummy
* add multi-agent example: tic-tac-toe
* move TicTacToeEnv to a separate file
* remove dummy MANet
* code refactor
* move tic-tac-toe example to test
* update doc with marl-example
* fix docs
* reduce the threshold
* revert
* update player id to start from 1 and change player to agent; keep coding
* add reward_length argument for collector
* Improve Batch (#128)
* minor polish
* improve and implement Batch.cat_
* bugfix for buffer.sample with field impt_weight
* restore the usage of a.cat_(b)
* fix 2 bugs in batch and add corresponding unittest
* code fix for update
* update is_empty to recognize empty over empty; bugfix for len
* bugfix for update and add testcase
* add testcase of update
* fix docs
* fix docs
* fix docs [ci skip]
* fix docs [ci skip]
Co-authored-by: Trinkle23897 <463003665@qq.com>
* refact
* re-implement Batch.stack and add testcases
* add doc for Batch.stack
* reward_metric
* modify flag
* minor fix
* reuse _create_values and refactor stack_ & cat_
* fix pep8
* fix reward stat in collector
* fix stat of collector, simplify test/base/env.py
* fix docs
* minor fix
* raise exception for stacking with partial keys and axis!=0
* minor fix
* minor fix
* minor fix
* marl-examples
* add condense; bugfix for torch.Tensor; code refactor
* marl example can run now
* enable tic tac toe with larger board size and win-size
* add test dependency
* Fix padding of inconsistent keys with Batch.stack and Batch.cat (#130)
* re-implement Batch.stack and add testcases
* add doc for Batch.stack
* reuse _create_values and refactor stack_ & cat_
* fix pep8
* fix docs
* raise exception for stacking with partial keys and axis!=0
* minor fix
* minor fix
Co-authored-by: Trinkle23897 <463003665@qq.com>
* stash
* let agent learn to play as agent 2 which is harder
* code refactor
* Improve collector (#125)
* remove multibuf
* reward_metric
* make fields with empty Batch rather than None after reset
* many fixes and refactor
Co-authored-by: Trinkle23897 <463003665@qq.com>
* marl for tic-tac-toe and general gomoku
* update default gamma to 0.1 for tic tac toe to win earlier
* fix name typo; change default game config; add rew_norm option
* fix pep8
* test commit
* mv test dir name
* add rew flag
* fix torch.optim import error and madqn rew_norm
* remove useless kwargs
* Vector env enable select worker (#132)
* Enable selecting worker for vector env step method.
* Update collector to match new vecenv selective worker behavior.
* Bug fix.
* Fix rebase
Co-authored-by: Alexis Duburcq <alexis.duburcq@wandercraft.eu>
* show the last move of tictactoe by capital letters
* add multi-agent tutorial
* fix link
* Standardized behavior of Batch.cat and misc code refactor (#137)
* code refactor; remove unused kwargs; add reward_normalization for dqn
* bugfix for __setitem__ with torch.Tensor; add Batch.condense
* minor fix
* support cat with empty Batch
* remove the dependency of is_empty on len; specify the semantic of empty Batch by test cases
* support stack with empty Batch
* remove condense
* refactor code to reflect the shared / partial / reserved categories of keys
* add is_empty(recursive=False)
* doc fix
* docfix and bugfix for _is_batch_set
* add doc for key reservation
* bugfix for algebra operators
* fix cat with lens hint
* code refactor
* bugfix for storing None
* use ValueError instead of exception
* hide lens away from users
* add comment for __cat
* move the computation of the initial value of lens in cat_ itself.
* change the place of doc string
* doc fix for Batch doc string
* change recursive to recurse
* doc string fix
* minor fix for batch doc
* write tutorials to specify the standard of Batch (#142)
* add doc for len exceptions
* doc move; unify is_scalar_value function
* remove some issubclass check
* bugfix for shape of Batch(a=1)
* keep moving doc
* keep writing batch tutorial
* draft version of Batch tutorial done
* improving doc
* keep improving doc
* batch tutorial done
* rename _is_number
* rename _is_scalar
* shape property do not raise exception
* restore some doc string
* grammarly [ci skip]
* grammarly + fix warning of building docs
* polish docs
* trim and re-arrange batch tutorial
* go straight to the point
* minor fix for batch doc
* add shape / len in basic usage
* keep improving tutorial
* unify _to_array_with_correct_type to remove duplicate code
* delegate type conversion to Batch.__init__
* further delegate type conversion to Batch.__init__
* bugfix for setattr
* add a _parse_value function
* remove dummy function call
* polish docs
Co-authored-by: Trinkle23897 <463003665@qq.com>
* bugfix for mapolicy
* pretty code
* remove debug code; remove condense
* doc fix
* check before get_agents in tutorials/tictactoe
* tutorial
* fix
* minor fix for batch doc
* minor polish
* faster test_ttt
* improve tic-tac-toe environment
* change default epoch and step-per-epoch for tic-tac-toe
* fix mapolicy
* minor polish for mapolicy
* 90% to 80% (need to change the tutorial)
* win rate
* show step number at board
* simplify mapolicy
* minor polish for mapolicy
* remove MADQN
* fix pep8
* change legal_actions to mask (need to update docs)
* simplify maenv
* fix typo
* move basevecenv to single file
* separate RandomAgent
* update docs
* grammarly
* fix pep8
* win rate typo
* format in cheatsheet
* use bool mask directly
* update doc for boolean mask
Co-authored-by: Trinkle23897 <463003665@qq.com>
Co-authored-by: Alexis DUBURCQ <alexis.duburcq@gmail.com>
Co-authored-by: Alexis Duburcq <alexis.duburcq@wandercraft.eu>
* add doc for len exceptions
* doc move; unify is_scalar_value function
* remove some issubclass check
* bugfix for shape of Batch(a=1)
* keep moving doc
* keep writing batch tutorial
* draft version of Batch tutorial done
* improving doc
* keep improving doc
* batch tutorial done
* rename _is_number
* rename _is_scalar
* shape property do not raise exception
* restore some doc string
* grammarly [ci skip]
* grammarly + fix warning of building docs
* polish docs
* trim and re-arrange batch tutorial
* go straight to the point
* minor fix for batch doc
* add shape / len in basic usage
* keep improving tutorial
* unify _to_array_with_correct_type to remove duplicate code
* delegate type conversion to Batch.__init__
* further delegate type conversion to Batch.__init__
* bugfix for setattr
* add a _parse_value function
* remove dummy function call
* polish docs
Co-authored-by: Trinkle23897 <463003665@qq.com>
* code refactor; remove unused kwargs; add reward_normalization for dqn
* bugfix for __setitem__ with torch.Tensor; add Batch.condense
* minor fix
* support cat with empty Batch
* remove the dependency of is_empty on len; specify the semantic of empty Batch by test cases
* support stack with empty Batch
* remove condense
* refactor code to reflect the shared / partial / reserved categories of keys
* add is_empty(recursive=False)
* doc fix
* docfix and bugfix for _is_batch_set
* add doc for key reservation
* bugfix for algebra operators
* fix cat with lens hint
* code refactor
* bugfix for storing None
* use ValueError instead of exception
* hide lens away from users
* add comment for __cat
* move the computation of the initial value of lens in cat_ itself.
* change the place of doc string
* doc fix for Batch doc string
* change recursive to recurse
* doc string fix
* minor fix for batch doc
* minor polish
* improve and implement Batch.cat_
* bugfix for buffer.sample with field impt_weight
* restore the usage of a.cat_(b)
* fix 2 bugs in batch and add corresponding unittest
* code fix for update
* update is_empty to recognize empty over empty; bugfix for len
* bugfix for update and add testcase
* add testcase of update
* fix docs
* fix docs
* fix docs [ci skip]
* fix docs [ci skip]
Co-authored-by: Trinkle23897 <463003665@qq.com>
* make sure the key type of Batch is string, and add unit tests
* add is_empty() function and unit tests
* enable cat of mixing dict and Batch, just like stack
This PR does the following:
- improvement: dramatically reduce the calls to _is_batch_set
- bugfix: list(Batch()) failed; Batch(a=[torch.ones(3), torch.ones(3)]) failed;
- misc: add type check for each element rather than the first element; add test case; _create_value with torch.Tensor does not have np.object type;
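The two cases from the bugfix bullet above, which should no longer fail after this PR:
```python
import torch

from tianshou.data import Batch

list(Batch())                                # previously raised, now works
Batch(a=[torch.ones(3), torch.ones(3)])      # a list of tensors is now handled correctly
```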
* in-place empty_ for Batch
* change Batch.empty to in-place fill; add copy option for Batch construction (see the sketch after this list)
* type signature & remove shadowed names for copy
* add doc for data type (only support numbers and object data type)
* add unit test for Batch copy
* fix pep8
* add test case for Batch.empty
* doc fix
* fix pep8
* use object to test Batch
* test commit
* refact
* change Batch(copy) testcase
* minor fix
Co-authored-by: Trinkle23897 <463003665@qq.com>
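A small sketch of the `copy` construction option and the in-place `empty_` from the list above (semantics as described in those commits):
```python
import numpy as np

from tianshou.data import Batch

b = Batch({"a": np.arange(3)}, copy=True)  # construct from a copy of the input data
b.empty_()                                 # in-place: reset the stored values
```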
* Use lower-level API to reduce overhead.
* Further improvements.
* Buffer _add_to_buffer improvement.
* Do not use _data field to store Batch data to avoid overhead. Add back _meta field in Buffer.
* Restore metadata attribute to store batch in Buffer.
* Move out nested methods.
* Use try/except instead of an explicit check, for efficiency.
* Remove unused branches for efficiency.
* Use np.array over list when possible for efficiency.
* Final performance improvement.
* Add unit tests for Batch size method.
* Add missing stack unit tests.
* Enforce Buffer initialization to zero.
Co-authored-by: Alexis Duburcq <alexis.duburcq@wandercraft.eu>
* Fix support of batch over batch for Buffer.
* Do not use internal __dict__ attribute to store batch data since it breaks inheritance.
* Various fixes.
* Improve robustness of Batch/Buffer by avoiding direct attribute assignment. Buffer refactoring.
* Add axis optional argument to Batch stack method.
* Add item assignment to Batch class.
* Fix list support for Buffer.
* Convert list to np.array by default for efficiency.
* Add missing unit test for Batch. Fix unit tests.
* Batch item assignment is now robust to key order.
* Do not use getattr/setattr explicitly, for simplicity.
* More flexible __setitem__.
* Fixes
* Remove broadcasting at Batch level since it is unreliable.
* Forbid item assignment for inconsistent batches.
* Implement broadcasting at Buffer level.
* Add more unit test for Batch item assignment.
Co-authored-by: Alexis Duburcq <alexis.duburcq@wandercraft.eu>
* Fix support of 0-dim numpy array.
* Do not raise exception if Batch index does not make sense since it breaks existing code.
Co-authored-by: Alexis Duburcq <alexis.duburcq@wandercraft.eu>