Kh4L opened a new pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543
## Description ##
This PR separates the partitioning backend from hybridize.
`hybridize` now only sets the CachedOp arguments and activates hybridization.
`optimize_for` is responsible for setting the backend and its options, and for
running the partitioning with that backend.
Users who wish to use any partitioning backend now have to use `optimize_for`.
It also changes the default value of `optimize_for`'s `clear` argument, making
backend chaining the default behavior.
The PR also makes the CachedOp kwargs explicit and documented:
`static_alloc`, `static_shape`, `inline_limit`, `forward_bulk_size` and
`backward_bulk_size`.
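As a rough illustration of what "explicit and documented" means here, the flags above become named keyword arguments rather than opaque `**kwargs`. The sketch below is hypothetical, not MXNet's actual signature, and the default values shown are illustrative only:

```python
# Hypothetical sketch of hybridize taking the CachedOp flags as
# explicit, documented keyword arguments (defaults are illustrative,
# not MXNet's actual defaults).
def hybridize(active=True, static_alloc=False, static_shape=False,
              inline_limit=2, forward_bulk_size=15, backward_bulk_size=15):
    # Collect the CachedOp flags that are now forwarded explicitly.
    return {"static_alloc": static_alloc, "static_shape": static_shape,
            "inline_limit": inline_limit,
            "forward_bulk_size": forward_bulk_size,
            "backward_bulk_size": backward_bulk_size}
```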
## Examples ##
### How optimize_for changed
#### Before
```
blk.optimize_for(x, backend="someBackend",
backend_opts={'dedup_subgraph':True})
```
#### After
```
blk.optimize_for(x, backend="someBackend", dedup_subgraph=True)
```
### How hybridize changed
#### Before
```
blk.hybridize(backend="someBackend", static_alloc=True)
blk(x)
```
#### After
Hybridize can no longer be used to set the backend; we now have to use
`optimize_for`, which calls hybridize internally.
```
blk.optimize_for(x, backend="someBackend", static_alloc=True)
```
### How chaining backends changed
#### Before
```
blk.optimize_for(x, backend="firstBackend", static_alloc=True)
blk.optimize_for(x, backend="secondBackend", clear=False,
dedup_subgraph=True)
```
#### After
`clear` now defaults to `False`, so we simply chain the calls:
```
blk.optimize_for(x, backend="firstBackend", static_alloc=True)
blk.optimize_for(x, backend="secondBackend", dedup_subgraph=True)
```
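The effect of the new default can be illustrated with a toy stand-in (a sketch, not MXNet code; `MockBlock` is hypothetical): each `optimize_for` call appends its backend instead of discarding the earlier ones unless `clear=True` is passed.

```python
# Hypothetical toy model of the new chaining semantics; MockBlock
# is not part of MXNet.
class MockBlock:
    def __init__(self):
        self.backends = []  # (backend_name, backend_options) pairs

    def optimize_for(self, x, backend=None, clear=False, **backend_opts):
        # clear=True discards previously applied backends first.
        if clear:
            self.backends.clear()
        self.backends.append((backend, backend_opts))

blk = MockBlock()
blk.optimize_for(None, backend="firstBackend")
blk.optimize_for(None, backend="secondBackend", dedup_subgraph=True)
# Both backends are retained because clear defaults to False.
print([name for name, _ in blk.backends])
# → ['firstBackend', 'secondBackend']
```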
cc @samskalicky, who helped design the API offline and reviewed the related
`1.x` PR #19386, and @mseth10, who reviewed the `1.x` PR.