samskalicky commented on issue #9686: [Discussion] MXNet 2.0 Roadmap (was: APIs 
that might be a good idea to break in 2.0)
URL: 
https://github.com/apache/incubator-mxnet/issues/9686#issuecomment-513092163
 
 
   @marcoabreu We can definitely remove some of these pre-processor statements 
with the accelerator API (MKLDNN), but not all of them, as @szha points out. 
USE_CUDA needs to stay, since GPU support is embedded pretty tightly. We might 
be able to push it out into an accelerator library, but not in time for 2.0. 
   
   I agree with @szha that ONNX is not the way to do this. We need to keep our 
operator registration in NNVM for now. What we could separate out, though, are 
the operator definitions (NNVM registration) from the compute functions (infer 
shape/type, fcompute, etc.). But again, I think we should take this slowly: 
enable actual accelerators first, then see if it makes sense for 
TensorRT/MKLDNN, and then maybe GPU. 
   
   I would like to see the accelerator API (or a first pass at it) as part of 
2.0, though. Is this feasible from your perspective, @szha?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
