pengzhao-intel commented on issue #13671: Error when set export MXNET_SUBGRAPH_BACKEND=MKLDNN
URL: https://github.com/apache/incubator-mxnet/issues/13671#issuecomment-448145665
 
 
   @Soonhwan-Kwon Currently, the flow is CNN friendly and other components are still under development, such as #12922 for quantized FC.
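
   For reference, a minimal sketch of enabling the backend (the environment variable must be set before the process that binds the model starts; the `python train.py` command is a placeholder for your own script):

   ```shell
   # Enable MKL-DNN subgraph fusion for this session.
   export MXNET_SUBGRAPH_BACKEND=MKLDNN

   # Run your inference/training script as usual (placeholder command).
   # python infer.py
   ```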
   In general, the INT8 performance depends on several key points:
   *  how many layers can be executed in INT8?
   *  how many layers can be fused to reduce the overhead of quantization and dequantization?
   *  hardware
       The VNNI instruction will provide a 4X speedup for INT8, but the current generation can only run 1.33X faster on the computation parts (we can also get a benefit from less memory access).
   
https://www.anandtech.com/show/13239/intel-at-hot-chips-2018-showing-the-ankle-of-cascade-lake/2
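   
   As a rough way to gauge the first two points, you can count how many operators in the exported symbol JSON were converted or fused. This is an illustrative sketch, not MXNet's API: the toy graph and the `quantized_`/`_sg_mkldnn` name prefixes are assumptions about how converted ops appear in the symbol file, so adapt the prefixes to what your exported model actually contains.
   
   ```python
   import json
   
   # Toy symbol graph standing in for a model exported after quantization.
   # The operator names here are illustrative assumptions: converted/fused
   # ops are assumed to carry "quantized_" or "_sg_mkldnn" prefixes.
   symbol_json = json.dumps({
       "nodes": [
           {"op": "null", "name": "data"},
           {"op": "_sg_mkldnn_conv", "name": "sg_conv0"},
           {"op": "quantized_pooling", "name": "pool0"},
           {"op": "FullyConnected", "name": "fc0"},  # still FP32 (see #12922)
       ]
   })
   
   def int8_coverage(sym_json):
       """Return (int8_ops, total_ops) for a serialized symbol graph."""
       nodes = json.loads(sym_json)["nodes"]
       ops = [n["op"] for n in nodes if n["op"] != "null"]  # skip variables
       int8 = [o for o in ops
               if o.startswith("quantized_") or o.startswith("_sg_mkldnn")]
       return len(int8), len(ops)
   
   n_int8, n_total = int8_coverage(symbol_json)
   print(f"{n_int8}/{n_total} operators run as INT8/fused")  # 2/3 here
   ```
   
   The lower that ratio, the more often activations bounce between INT8 and FP32, which is exactly the quantize/dequantize overhead mentioned above.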
   
   How much accuracy drop do you see in your network with the quantization flow?
   If you can share a piece of your network, we can take a look.
   
   Thanks for the feedback.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
