DickJC123 edited a comment on issue #13598: More fine-grained operator 
implementation dispatch & memory planning flow 
URL: 
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-447533189
 
 
   I looked this over and concluded it's a complicated issue, so I'm not ready 
to take a strong stand.  Some thoughts though:
   
   One desire of mine would be to have an op implementation be a slight tweak 
of an existing "base op" without copying the entire op description.  Also, 
whatever is decided on here should play well with the existing op registration 
mechanism and the subgraph API.
   
   Frankly, I haven't studied the subgraph API, but I'd hope one could make 
implementation decisions (and graph alterations) based on all the information 
normally provided to the forward/backward op calls (so context, shapes, dtypes, 
stypes, etc.).  So during subgraph graph passes, would there be a lightweight 
way to swap in a tweaked operator?
   
   So, for discussion, what about:
   ```
   NNVM_REGISTER_OP(Convolution_CUDNN_Impl)
   .clone_of(Convolution)               // clones the Convolution op under a new name
   .override_attr<FInplaceOption>("FInplaceOption",
       [](const NodeAttrs& attrs) {
         return std::vector<std::pair<int, int>>{{0, 0}};
       })
   ```
   Then use the subgraph API to swap in a Convolution_CUDNN_Impl node for a 
Convolution node if the parameters, gpu arch, etc. supported it?  
Alternatively, in a subgraph API graph pass, could one keep the node, but 
attach an attribute that preempts the default FInplaceOption (or overwrites the 
FInplaceOption function pointer used by the node directly)?
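   To make the node-swap idea concrete, here is a minimal self-contained sketch. The `Node`, `Graph`, and `SwapImplPass` names are toy stand-ins invented for illustration, not the real NNVM/MXNet subgraph types; the point is just the shape of a pass that rebinds a node to a specialized implementation when a predicate (parameters, gpu arch, etc.) says the tweak applies:

   ```
   #include <cassert>
   #include <functional>
   #include <map>
   #include <string>
   #include <vector>

   // Toy stand-ins for graph nodes (hypothetical; not the real NNVM types).
   struct Node {
     std::string op_name;
     std::map<std::string, std::string> attrs;
   };
   using Graph = std::vector<Node>;

   // A graph pass that swaps in a specialized implementation op for the base
   // op wherever the predicate holds; the node keeps its attributes, only the
   // op it is bound to changes.
   Graph SwapImplPass(Graph g,
                      const std::string& base_op,
                      const std::string& impl_op,
                      const std::function<bool(const Node&)>& applies) {
     for (Node& n : g) {
       if (n.op_name == base_op && applies(n)) {
         n.op_name = impl_op;
       }
     }
     return g;
   }

   int main() {
     Graph g = {{"Convolution", {{"layout", "NHWC"}}},
                {"Convolution", {{"layout", "NCHW"}}}};
     // Swap only where the (toy) predicate says the cuDNN tweak applies.
     Graph out = SwapImplPass(
         g, "Convolution", "Convolution_CUDNN_Impl",
         [](const Node& n) { return n.attrs.at("layout") == "NHWC"; });
     assert(out[0].op_name == "Convolution_CUDNN_Impl");
     assert(out[1].op_name == "Convolution");
     return 0;
   }
   ```

   The attribute-preemption alternative would instead leave `op_name` alone and set a per-node attribute that the executor consults before falling back to the op's registered FInplaceOption.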
   
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
