reminisce commented on issue #17097: [mxnet 2.0][item 10.1] MXNet Imperative Op 
Invocation Overhead
URL: 
https://github.com/apache/incubator-mxnet/issues/17097#issuecomment-568034840
 
 
   @tqchen Thanks for the thorough explanation. Please know that I'm not 
against the TVM FFI design. In fact, it's great to know that you believe passing 
Python native data structures can be accelerated through engineering work on the 
TVM FFI. This is vital for keeping the future MXNet runtime API surface small, 
which makes it scalable and sustainable to maintain.
   
   Putting the design decision aside, I want to share that there is a very 
strong motivation and need to squeeze the latency out of passing Python native 
data structures in the op interface. Since MXNet is embracing NumPy 
compatibility, we want op invocation performance on par with NumPy in order to 
be appealing to the classic machine learning community. We have compared MXNet 
and NumPy on a number of classic machine learning models and found that 
optimizing the passing of Python native data structures is critical for MXNet 
to reach parity with NumPy. This also matters for some deep learning 
applications: in https://github.com/apache/incubator-mxnet/pull/16716, 
`reset_arrays` was shown to avoid repeatedly passing tuple objects to 
`mx.nd.zeros`, which improved training performance by 5%.
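
   To make the per-call overhead concrete, here is a rough micro-benchmark 
sketch (not taken from the PR; the shape and iteration count are arbitrary) 
that times creating a small array from a Python shape tuple in NumPy versus the 
MXNet imperative front end:

   ```python
   import timeit

   import numpy as np
   import mxnet as mx

   SHAPE = (3, 4)   # small shape so per-call overhead dominates the compute
   N = 10000

   # Both calls receive the same Python tuple; the difference is mostly
   # front-end/FFI dispatch cost rather than the actual fill with zeros.
   np_time = timeit.timeit(lambda: np.zeros(SHAPE), number=N)
   mx_time = timeit.timeit(lambda: mx.nd.zeros(SHAPE), number=N)
   mx.nd.waitall()  # drain the async engine so nothing is left pending

   print("numpy.zeros : %.2f us/call" % (np_time / N * 1e6))
   print("mx.nd.zeros : %.2f us/call" % (mx_time / N * 1e6))
   ```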
   
   With that said, please allow me to summarize what we have agreed on here. I 
think we are aligned on exploring the TVM FFI to get a clear engineering 
picture of how to accelerate passing Python native data structures as 
arguments. We can start with tuples and extend the findings to lists and 
strings later.
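
   For reference, a minimal sketch of what the tuple path through the TVM FFI 
looks like on the Python side (assuming a TVM installation; `demo.sum_shape` is 
just a made-up name for illustration): a Python tuple passed to a packed 
function crosses the FFI boundary as a single converted argument.

   ```python
   import tvm

   # Register a global packed function; the FFI handles argument conversion.
   @tvm.register_func("demo.sum_shape")
   def sum_shape(shape):
       # The Python tuple arrives as a TVM container of integer values.
       return sum(int(dim) for dim in shape)

   f = tvm.get_global_func("demo.sum_shape")
   # The (3, 4, 5) tuple is converted once on the way into the callee.
   print(f((3, 4, 5)))  # -> 12
   ```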
   
   Thanks everyone for a great discussion, and sorry for the late responses; I 
have been on vacation this week. I didn't expect this PoC task item to become a 
full-fledged RFC involving this many interested folks. I will be sure to make 
the post more descriptive and self-explanatory next time. Have a nice 
holiday! :)
