samskalicky commented on a change in pull request #19386:
URL: https://github.com/apache/incubator-mxnet/pull/19386#discussion_r523369938



##########
File path: python/mxnet/gluon/block.py
##########
@@ -1149,30 +1156,43 @@ def hybridize(self, active=True, backend=None, backend_opts=None, clear=True, **
             The name of backend, as registered in `SubgraphBackendRegistry`, default None
         backend_opts : dict of user-specified options to pass to the backend for partitioning, optional
             Passed on to `PrePartition` and `PostPartition` functions of `SubgraphProperty`
-        clear : clears any previous optimizations
-        static_alloc : bool, default False
+        clear : bool, default True
+            Clears any previous optimizations
+        static_alloc : optional bool, default False
             Statically allocate memory to improve speed. Memory usage may increase.
-        static_shape : bool, default False
+        static_shape : optional bool, default False
             Optimize for invariant input shapes between iterations. Must also
             set static_alloc to True. Change of input shapes is still allowed
             but slower.
+        inline_limit : optional int, default 2
+            Maximum number of operators that can be inlined.
+        forward_bulk_size : optional int, default None
+            Segment size of bulk execution during forward pass.
+        backward_bulk_size : optional int, default None
+            Segment size of bulk execution during backward pass.
         """

Review comment:
       We still need to accept backend args in hybridize. So after promoting all the CachedOp kwargs to actual named arguments in the function signature, **kwargs will contain only backend options.
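
To illustrate the point above, here is a hypothetical sketch (not the actual MXNet implementation) of how the proposed signature would separate the CachedOp flags from backend options once the flags are promoted to named parameters:

```python
# Hypothetical sketch of the proposed hybridize signature: the CachedOp
# flags become explicit named parameters, so whatever remains in **kwargs
# can be treated purely as backend (partitioning) options.
def hybridize(active=True, backend=None, backend_opts=None, clear=True,
              static_alloc=False, static_shape=False, inline_limit=2,
              forward_bulk_size=None, backward_bulk_size=None, **kwargs):
    # Explicit args are collected as flags for the CachedOp.
    cachedop_flags = {
        'static_alloc': static_alloc,
        'static_shape': static_shape,
        'inline_limit': inline_limit,
        'forward_bulk_size': forward_bulk_size,
        'backward_bulk_size': backward_bulk_size,
    }
    # Anything left over (merged with backend_opts) would be passed on to
    # the backend's PrePartition/PostPartition hooks.
    backend_options = dict(backend_opts or {}, **kwargs)
    return cachedop_flags, backend_options
```

For example, a call like `hybridize(static_alloc=True, dedup_subgraph=True)` would route `static_alloc` to the CachedOp flags and leave `dedup_subgraph` in the backend options (the option name here is only illustrative).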




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
