samskalicky commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r524855961
##########
File path: example/extensions/lib_pass/README.md
##########
@@ -85,13 +86,7 @@ sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to use to optimize the model. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before executing the graph pass. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend APIs.
-For the Gluon API, `hybridize` can be called on HybridBlocks to execute a graph pass on the internal CachedOp Symbol.
-
-```python
-block.hybridize(backend=None, backend_opts=None, **kwargs)
-```
-
-The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which pass that will be executed on the model. The `backend_opts` takes other user-specified options that will be passed to the backend APIs. The actual pass runs once just before the first the forward pass.
+The `hybridize` function prepares the HybridBlock to be converted into a backend symbol.
If you just want to run a graph pass on the HybridBlock but not run a complete forward pass, you can use the `optimize_for` API that combines the work done in the `hybridize` API with part of the work done in the forward pass.
Review comment:
Let's remove this sentence; it isn't needed anymore.
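For reference, the calling convention the README describes can be sketched without a full MXNet install. This is a minimal, self-contained sketch: `DemoSymbol` is a hypothetical stand-in for `mx.sym.Symbol` (not real MXNet code), and `"myPass"` / `myOption` are placeholder names, used only to show the `optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)` signature from the docs above.

```python
# Hypothetical stand-in for mx.sym.Symbol, mirroring the documented
# signature sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs).
class DemoSymbol:
    def optimize_for(self, backend, args=None, aux=None, ctx=None, **kwargs):
        # A real backend would run its graph pass here; this stub just
        # records what was requested so the call pattern is visible.
        return {"backend": backend, "options": kwargs}

sym = DemoSymbol()
# "myPass" and myOption are placeholder names for a user-registered
# pass and a user-specified option forwarded to the backend APIs.
result = sym.optimize_for("myPass", myOption="yes")
print(result["backend"])  # prints: myPass
```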
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]