samskalicky commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r524869683
##########
File path: example/extensions/lib_subgraph/README.md
##########
@@ -102,35 +96,35 @@ Partitioning APIs in MXNet are available in both Symbol
and Gluon APIs. For the
sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
```
-The `optimize_for` API takes at least 1 argument, `backend` which is a string
that identifies which backend to partition the model for. The `args` and `aux`
arguments are optional and take a list of NDArray or dict of str to NDArray.
They are used to infer shapes and types and before partitioning, and passed to
the backend to use during compilation. The `ctx` argument is optional and takes
a device context to infer storage types. It also takes any other user-specified
options that will be passed to the backend partitioning APIs.
+The `optimize_for` API takes at least one argument, `backend`, which is a
string that identifies which backend to partition the model for. The `args` and
`aux` arguments are optional and take a list of NDArray or a dict of str to
NDArray. They are used to infer shapes and types before partitioning, and are
passed to the backend to use during compilation. The `ctx` argument is optional
and takes a device context to infer storage types. `optimize_for` also accepts
any other user-specified options that will be passed to the backend
partitioning APIs. The backend options can be passed as kwargs.
-For the Gluon API, `hybridize` can be called on HybridBlocks to partition the
internal CachedOp Symbol.
+When the `optimize_for` API is called on a HybridBlock, it partitions
immediately. This lets users export the partitioned model without running a
complete forward pass. Chaining multiple optimizations is as simple as calling
`optimize_for` multiple times.
```python
-block.hybridize(backend=None, backend_opts=None, clear=True, **kwargs)
+block.optimize_for(x, backend='myPart')
+block.optimize_for(x, backend='myOtherPart')
+block.export('partitioned')
```
-The `hybridize` function prepares the HybridBlock to be converted into a
backend symbol. The `backend` argument is a string that identifies which
backend that will partition the model. The `backend_opts` are other
user-specified options (as a Python dictionary of strings mapped to strings)
that will be passed to the backend partitioning APIs. The `clear` argument
defaults to `True` and clears any previous optimizations done on the block. If
you want to chain optimizations together, set `clear` to `False`. The actual
partitioning takes place during the forward pass. If you want to use
`hybridize` to chain multiple optimizations, be sure to execute a forward pass
after each call to `hybridize`.
-
-If you just want to partition the HybridBlock but not run a complete forward
pass, you can use the `optimize_for` API that combines the work done in the
`hybridize` API with part of the work done in the forward pass.
+For the Gluon API, hybridization is needed, so calling `optimize_for` on a
non-hybridized block will hybridize it first.
+If users need to pass hybridization parameters, they can either call
`hybridize` explicitly or pass the arguments directly to `optimize_for`.
+This:
```python
-block.optimize_for(x, backend=None, backend_opts=None, clear=True, **kwargs)
+block.hybridize(static_shape=True, static_alloc=False)
+block.optimize_for(x, backend='myPart')
```
-
-When the `optimize_for` API is called on a HybridBlock it partitions
immediately. This lets users export the partitioned model without running a
complete forward pass. Chaining multiple optimizations is as simple as calling
`optimize_for` multiple times, no need to execute a forward pass (as opposed to
`hybridize`).
-
+is equivalent to:
```python
-block.optimize_for(x, backend='myPart')
-block.optimize_for(x, backend='myOtherPart', clear=False)
-block.export('partitioned')
+block.optimize_for(x, backend='myPart', static_shape=True, static_alloc=False)
```
-But you can also use `optimize_for` in place of `hybridize` and run inference
immediately after too.
+It's important to note that `hybridize` clars the CachedOp and any previous
optimization.
Review comment:
clars --> clears
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]