marcoabreu commented on issue #13598: More fine-grained operator implementation 
dispatch & memory planning flow 
URL: 
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-445856236
 
 
   Hey Haibin,
   
   I really like the direction your proposal is going! Instead of using 
preprocessor statements, would it be possible to make use of class hierarchies 
and abstraction to achieve the same goal? Especially the GPU preprocessor flags 
and storage types strike me as a problem here - imagine we start supporting 
another accelerator, such as AMD GPUs or another backend that also needs 
special-casing. 
   
   In general, I think it would be great if we could abstract the task (e.g. 
InferStorageType or ConvolutionCompute) from the backend implementation 
(ConvolutionComputeCUDNN, ConvolutionComputeMKLDNN, etc.). That way, we could 
extend MXNet with as many accelerator backends as we want without having to 
modify the core logic. At the moment, there's strong coupling that causes 
maintenance issues.
   
   @DickJC123 already started something in that direction. Maybe you can 
provide a bit of input here.
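   To make the idea concrete, here is a minimal sketch of what separating the 
task from its backend implementations could look like. All class and function 
names (ConvolutionCompute, RegisterBackend, Dispatch, the registry) are 
hypothetical illustrations, not existing MXNet APIs; the point is that adding a 
new accelerator becomes one more registered subclass rather than another 
preprocessor branch:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Hypothetical interface for the task: what "compute a convolution" means,
// independent of which backend carries it out.
class ConvolutionCompute {
 public:
  virtual ~ConvolutionCompute() = default;
  virtual std::string Backend() const = 0;  // which backend handles the op
};

// Backend-specific implementations live in their own subclasses, so no
// GPU/MKLDNN #ifdefs leak into the dispatch logic.
class ConvolutionComputeCUDNN : public ConvolutionCompute {
 public:
  std::string Backend() const override { return "cudnn"; }
};

class ConvolutionComputeMKLDNN : public ConvolutionCompute {
 public:
  std::string Backend() const override { return "mkldnn"; }
};

// A minimal registry: each backend registers a factory at startup, and
// dispatch is a plain lookup. A new accelerator only adds a registration.
using Factory = std::function<std::unique_ptr<ConvolutionCompute>()>;

std::map<std::string, Factory>& Registry() {
  static std::map<std::string, Factory> registry;
  return registry;
}

void RegisterBackend(const std::string& name, Factory f) {
  Registry()[name] = std::move(f);
}

std::unique_ptr<ConvolutionCompute> Dispatch(const std::string& backend) {
  auto it = Registry().find(backend);
  return it == Registry().end() ? nullptr : it->second();
}
```

   Under this scheme, the shared memory-planning and storage-inference logic 
would only ever talk to the abstract interface.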

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
