xinyu-intel commented on a change in pull request #12808: MKL-DNN Quantization Examples and README
URL: https://github.com/apache/incubator-mxnet/pull/12808#discussion_r225605837
 
 

 ##########
 File path: MKLDNN_README.md
 ##########
 @@ -292,7 +294,27 @@ MKL_VERBOSE Intel(R) MKL 2018.0 Update 1 Product build 20171007 for Intel(R) 64
 MKL_VERBOSE SGEMM(T,N,12,10,8,0x7f7f927b1378,0x1bc2140,8,0x1ba8040,8,0x7f7f927b1380,0x7f7f7400a280,12)
 8.93ms CNR:OFF Dyn:1 FastMM:1 TID:0  NThr:40 WDiv:HOST:+0.000
 ```
 
-<h2 id="6">Next Steps and Support</h2>
+<h2 id="6">Enable graph optimization</h2>
+
+Intel(R) MKL-DNN based graph optimization using the subgraph feature is available on the master branch. You can build from source and then use the command below to enable this *experimental* feature for maximum performance:
+
+```
+export MXNET_SUBGRAPH_BACKEND=MKLDNN
+```
+
+The limitations of this experimental feature are:
+
+- This feature only supports inference optimization. You should unset this environment variable for training.
+
+- On a build that integrates both the MKL-DNN and CUDA backends, only CPU features are fully supported.
 
 Review comment:
   It will crash. This is a known issue and will be fixed later.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
