samskalicky commented on a change in pull request #19386:
URL: https://github.com/apache/incubator-mxnet/pull/19386#discussion_r521775055
##########
File path: python/mxnet/gluon/block.py
##########
@@ -1087,19 +1087,23 @@ def optimize_for(self, x, *args, backend=None, backend_opts=None, clear=True, **
other inputs to model
backend : str
The name of backend, as registered in `SubgraphBackendRegistry`,
default None
- backend_opts : dict of user-specified options to pass to the backend for partitioning, optional
- Passed on to `PrePartition` and `PostPartition` functions of `SubgraphProperty`
- clear : clears any previous optimizations
+ clear : bool, default False
+ Clears any previous optimizations
static_alloc : bool, default False
Statically allocate memory to improve speed. Memory usage may increase.
static_shape : bool, default False
Optimize for invariant input shapes between iterations. Must also set static_alloc to True. Change of input shapes is still allowed but slower.
+ **kwargs: The backend options, optional
+ Passed on to `PrePartition` and `PostPartition` functions of `SubgraphProperty`
"""
+ if len(kwargs) > 0:
+ self._backend_opts = kwargs
- # do hybrize API call
- self.hybridize(True, backend, backend_opts, clear, **kwargs)
+ if clear or not self._active:
+ # do hybrize API call
+ self.hybridize(True, backend, clear, static_alloc=static_alloc, static_shape=static_shape)
Review comment:
@Kh4L why don't we just separate out the `hybridize` and `optimize_for` flows?
We'll leave `hybridize` as the way to set CachedOp flags. `optimize_for` would only
need to set `_active=True`; that's the only line that would be duplicated. Setting
`backend` would only be done in `optimize_for`. This still achieves the goal you set
out: removing `backend_opts` and enabling those options to be specified via kwargs.
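To make the suggestion concrete, here is a rough sketch of the proposed separation using a minimal stand-in class (the class itself and the exact signatures are hypothetical; only the attribute names `_active`, `_backend`, `_backend_opts` and the kwargs-instead-of-backend_opts idea come from the diff above):

```python
class Block:
    """Minimal stand-in for gluon.Block, illustrating the proposed split
    between hybridize (CachedOp flags) and optimize_for (backend setup)."""

    def __init__(self):
        self._active = False        # whether hybridization is enabled
        self._backend = None        # subgraph backend name
        self._backend_opts = {}     # options forwarded to the backend
        self._flags = {}            # CachedOp flags

    def hybridize(self, active=True, **cachedop_flags):
        # hybridize only manages activation and CachedOp flags;
        # it no longer knows anything about backends or their options
        self._active = active
        self._flags = cachedop_flags

    def optimize_for(self, backend, clear=False,
                     static_alloc=False, static_shape=False, **kwargs):
        # backend selection lives only here; kwargs replace the old
        # backend_opts dict, as proposed in the PR
        self._backend = backend
        if kwargs:
            self._backend_opts = kwargs
        if clear or not self._active:
            # the only duplicated behavior: activating hybridization
            self.hybridize(True, static_alloc=static_alloc,
                           static_shape=static_shape)
```

With this split, `optimize_for` never needs to thread backend arguments through `hybridize`, so a later change to backend-option handling touches only one method.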
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]