waytrue17 commented on a change in pull request #19543:
URL: https://github.com/apache/incubator-mxnet/pull/19543#discussion_r525539102



##########
File path: python/mxnet/gluon/block.py
##########
@@ -1259,45 +1282,55 @@ def register_child(self, block, name=None):
             self._active = False
         self._clear_cached_op()
 
-    def hybridize(self, active=True, backend=None, backend_opts=None, clear=True, partition_if_dynamic=False, **kwargs):
+    def hybridize(self, active=True,
+                  partition_if_dynamic=False,
+                  static_alloc=False,
+                  static_shape=False,
+                  inline_limit=2,
+                  forward_bulk_size=None,
+                  backward_bulk_size=None):
         """Activates or deactivates :py:class:`HybridBlock` s recursively. Has no effect on
         non-hybrid children.
 
         Parameters
         ----------
         active : bool, default True
             Whether to turn hybrid on or off.
-        backend : str
-            The name of backend, as registered in `SubgraphBackendRegistry`, default None
-        backend_opts : dict of user-specified options to pass to the backend for partitioning, optional
-            Passed on to `PrePartition` and `PostPartition` functions of `SubgraphProperty`
-        clear : clears any previous optimizations
-        partition_if_dynamic : bool
+        partition_if_dynamic : bool, default False
             whether to partition the graph when dynamic shape op exists
         static_alloc : bool, default False
             Statically allocate memory to improve speed. Memory usage may increase.
         static_shape : bool, default False
             Optimize for invariant input shapes between iterations. Must also
             set static_alloc to True. Change of input shapes is still allowed
             but slower.
+        inline_limit : optional int, default 2
+            Maximum number of operators that can be inlined.
+        forward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.
+        backward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.

Review comment:
       Same here
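
The signature change quoted above replaces `**kwargs` with explicit keyword parameters. A minimal stand-in sketch (hypothetical classes, not MXNet code) shows the practical effect of that design choice: with explicit parameters, a misspelled option fails fast instead of being silently swallowed.

```python
# Hypothetical stand-ins illustrating the API change in this diff; not MXNet code.

class OldStyleBlock:
    def hybridize(self, active=True, **kwargs):
        # Any keyword is accepted, so a typo such as static_aloc=True
        # is silently stored instead of raising an error.
        self.opts = dict(active=active, **kwargs)


class NewStyleBlock:
    def hybridize(self, active=True, partition_if_dynamic=False,
                  static_alloc=False, static_shape=False, inline_limit=2,
                  forward_bulk_size=None, backward_bulk_size=None):
        # Only the documented options exist; unknown keywords raise TypeError.
        self.opts = dict(active=active, static_alloc=static_alloc,
                         static_shape=static_shape)


old = OldStyleBlock()
old.hybridize(static_aloc=True)        # typo goes unnoticed
new = NewStyleBlock()
try:
    new.hybridize(static_aloc=True)    # typo is caught immediately
except TypeError as exc:
    print("rejected:", exc)
```

This is one common motivation for narrowing a `**kwargs` signature into named parameters, alongside making the options discoverable in the docstring.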

##########
File path: example/extensions/lib_pass/README.md
##########
@@ -83,17 +84,7 @@ APIs in MXNet are available in both Symbol and Gluon APIs. For the Symbol API, `
 sym.optimize_for(backend, args=None, aux=None, ctx=None, **kwargs)
 ```
 
-The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to use to optimize the model. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before executing the graph pass. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend APIs.
-
-For the Gluon API, `hybridize` can be called on HybridBlocks to execute a graph pass on the internal CachedOp Symbol.
-
-```python
-block.hybridize(backend=None, backend_opts=None, **kwargs)
-```
-
-The `hybridize` function prepares the HybridBlock to be converted into a backend symbol. The `backend` argument is a string that identifies which pass that will be executed on the model. The `backend_opts` takes other user-specified options that will be passed to the backend APIs. The actual pass runs once just before the first the forward pass.
-
-If you just want to run a graph pass on the HybridBlock but not run a complete forward pass, you can use the `optimize_for` API that combines the work done in the `hybridize` API with part of the work done in the forward pass.
+The `optimize_for` API takes at least 1 argument, `backend` which is a string that identifies which backend to use to optimize the model. The `args` and `aux` arguments are optional and take a list of NDArray or dict of str to NDArray. They are used to infer shapes and types and before executing the graph pass. The `ctx` argument is optional and takes a device context to infer storage types. It also takes any other user-specified options that will be passed to the backend APIs (in the `kwargs`).

Review comment:
       Does `optimize_for` take at least 2 arguments, `x` and `backend`?
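
The reviewer's point can be made concrete with a hypothetical stub mirroring the signature quoted in the other hunk, `optimize_for(self, x, *args, backend=None, ...)`: the input `x` is positional and a `backend` name is effectively required, so a useful call supplies at least two arguments. This is a sketch under that assumption, not MXNet code.

```python
# Hypothetical stub mirroring the optimize_for signature from the diff; not MXNet code.
class Block:
    def optimize_for(self, x, *args, backend=None, clear=False, **kwargs):
        # x is the sample input used for shape/type inference;
        # backend selects the registered graph pass.
        if backend is None:
            raise ValueError("a backend name is required")
        return f"optimized for {backend} with input {x!r}"


b = Block()
print(b.optimize_for([1.0, 2.0], backend="myPass"))
try:
    b.optimize_for([1.0, 2.0])    # omitting backend fails
except ValueError as exc:
    print("error:", exc)
```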

##########
File path: python/mxnet/gluon/block.py
##########
@@ -1205,19 +1212,32 @@ def optimize_for(self, x, *args, backend=None, backend_opts=None, clear=True, pa
             The name of backend, as registered in `SubgraphBackendRegistry`, default None
         backend_opts : dict of user-specified options to pass to the backend for partitioning, optional
             Passed on to `PrePartition` and `PostPartition` functions of `SubgraphProperty`
-        clear : clears any previous optimizations
-        partition_if_dynamic : bool
+        clear : bool, default False
+            clears any previous optimizations
+        partition_if_dynamic : bool, default False
             whether to partition the graph when dynamic shape op exists
         static_alloc : bool, default False
             Statically allocate memory to improve speed. Memory usage may increase.
         static_shape : bool, default False
             Optimize for invariant input shapes between iterations. Must also
             set static_alloc to True. Change of input shapes is still allowed
             but slower.
+        inline_limit : optional int, default 2
+            Maximum number of operators that can be inlined.
+        forward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.
+        backward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.

Review comment:
       Should this be "during backward pass"?
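
As a toy illustration of what a bulk "segment size" means here, assuming (as in MXNet's bulk execution) that it controls how many consecutive operators are grouped into one segment and dispatched together; the helper below is hypothetical, not MXNet code.

```python
# Toy sketch (not MXNet code): group consecutive ops into segments of at
# most `bulk_size` operators, as a bulk-size parameter is assumed to do.
def segment_ops(ops, bulk_size):
    if bulk_size is None or bulk_size < 1:
        return [[op] for op in ops]      # no bulking: one op per segment
    return [ops[i:i + bulk_size] for i in range(0, len(ops), bulk_size)]


# e.g. a bulk size of 2 groups these five ops into three segments
print(segment_ops(["conv", "bn", "relu", "pool", "fc"], 2))
```

Under this reading, `forward_bulk_size` and `backward_bulk_size` would apply the same grouping to the forward and backward graphs respectively, which is why the duplicated "forward pass" wording reads like a copy-paste slip.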



