Hi folks. I have a question for the MXNet team, related to the previous discussion. In the design of our androidnn (Android NNAPI) backend we ran into a situation where we need to pass devices to the androidnn backend. Other backends (MKL, TensorRT) usually get a device through Context, but the problem is that Context supports only a limited list of devices (CPU, GPU). The androidnn backend, on the other hand, supports a different set of devices (cpu, gpu, npu, ...) whose indexes are specific to Android and are acquired via the ANeuralNetworks_getDevice API.
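For context, this is roughly how those devices are enumerated with the NNAPI C API (a minimal sketch; error handling is reduced to early returns, and the printing is only for illustration):

```cpp
// Sketch: enumerate NNAPI devices. These indexes are what we would need to
// carry into MXNet, since Context only knows CPU/GPU device ids.
#include <android/NeuralNetworks.h>
#include <cstdint>
#include <cstdio>

void ListNnapiDevices() {
  uint32_t num_devices = 0;
  if (ANeuralNetworks_getDeviceCount(&num_devices) != ANEURALNETWORKS_NO_ERROR)
    return;
  for (uint32_t i = 0; i < num_devices; ++i) {
    ANeuralNetworksDevice* device = nullptr;
    if (ANeuralNetworks_getDevice(i, &device) != ANEURALNETWORKS_NO_ERROR)
      continue;
    const char* name = nullptr;
    int32_t type = ANEURALNETWORKS_DEVICE_UNKNOWN;
    ANeuralNetworksDevice_getName(device, &name);
    ANeuralNetworksDevice_getType(device, &type);  // CPU / GPU / ACCELERATOR / ...
    std::printf("NNAPI device %u: %s (type %d)\n", i, name, type);
  }
}
```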
So we need a custom context, and we see two options:

1) Modify the existing Context by adding additional fields, guarded by a preprocessor flag MXNET_USE_ANDROIDNN defined from CMake, so that a user who passes the USE_ANDROIDNN option to CMake gets the custom context. This option is motivated by the fact that if there is already a structure for passing devices, we should use it: the existing backends are comfortable with the provided set of devices, and now it is time to add support for new ones. A rough sketch of what this could look like is below.
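To make option 1 concrete, here is a minimal sketch of the kind of change we have in mind. The added field names and the simplified enum are hypothetical, not the actual declarations in include/mxnet/base.h:

```cpp
// Hypothetical sketch of option 1: extend Context behind MXNET_USE_ANDROIDNN.
// The enum is simplified and the added field names are illustrative only.
#include <cstdint>

struct Context {
  enum DeviceType { kCPU = 1, kGPU = 2 };  // existing device types (simplified)
  DeviceType dev_type;
  int32_t dev_id;
#if MXNET_USE_ANDROIDNN
  // NNAPI-specific additions: which kind of device to target (cpu/gpu/npu/...)
  // and the index of that device as returned by ANeuralNetworks_getDevice.
  int32_t nnapi_device_type;
  uint32_t nnapi_device_index;
#endif
};
```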
2) The second option is to pass all custom options, including the device name and id, through the MXOptimizeForBackend API, which supports an options_map that was designed for passing custom options to a backend; we can use it to pass all the custom info required. That info would then be used when partitioning the graph, by adding the chosen device to each subgraph as a node attribute, roughly as sketched below.
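As an illustration of the partitioning step (the helper function and option key names are assumptions, not existing MXNet code; the options map stands for the string key/value pairs forwarded from MXOptimizeForBackend to the subgraph property):

```cpp
#include <string>
#include <unordered_map>
#include <nnvm/node.h>

// Hypothetical helper called while building each androidnn subgraph node.
// 'options' is the string->string map forwarded from MXOptimizeForBackend; we
// copy the requested device into the subgraph node's attribute dict so the
// backend can later build an NNAPI model for exactly that device.
void TagSubgraphWithDevice(nnvm::ObjectPtr subgraph_node,
                           const std::unordered_map<std::string, std::string>& options) {
  auto dev = options.find("androidnn_device");         // e.g. "npu" (assumed key name)
  subgraph_node->attrs.dict["androidnn_device"] =
      (dev != options.end()) ? dev->second : "cpu";    // default to cpu if not given
  auto idx = options.find("androidnn_device_index");   // NNAPI device index as string
  if (idx != options.end())
    subgraph_node->attrs.dict["androidnn_device_index"] = idx->second;
}
```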
Further, based on this attribute, we would create a model in the backend for the corresponding device (again, a sketch below).
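For example, once the NNAPI model for a subgraph has been built, it could be compiled for exactly the device recorded in that attribute (sketch only; model construction from the subgraph is omitted, and the helper name is made up):

```cpp
#include <android/NeuralNetworks.h>
#include <cstdint>

// Sketch: compile an already-built NNAPI model for the device index stored in
// the subgraph's "androidnn_device_index" attribute (attribute name assumed).
ANeuralNetworksCompilation* CompileForDevice(ANeuralNetworksModel* model,
                                             uint32_t device_index) {
  ANeuralNetworksDevice* device = nullptr;
  if (ANeuralNetworks_getDevice(device_index, &device) != ANEURALNETWORKS_NO_ERROR)
    return nullptr;
  const ANeuralNetworksDevice* devices[] = {device};
  ANeuralNetworksCompilation* compilation = nullptr;
  if (ANeuralNetworksCompilation_createForDevices(model, devices, 1, &compilation) !=
      ANEURALNETWORKS_NO_ERROR)
    return nullptr;
  if (ANeuralNetworksCompilation_finish(compilation) != ANEURALNETWORKS_NO_ERROR) {
    ANeuralNetworksCompilation_free(compilation);
    return nullptr;
  }
  return compilation;
}
```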
Thank you for your response!
