aaronmarkham commented on a change in pull request #12808: MKL-DNN Quantization Examples and README
URL: https://github.com/apache/incubator-mxnet/pull/12808#discussion_r225369854
 
 

 ##########
 File path: docs/faq/perf.md
 ##########
 @@ -18,12 +18,15 @@ Performance is mainly affected by the following 4 factors:
 ## Intel CPU
 
 For using Intel Xeon CPUs for training and inference, we suggest enabling
-`USE_MKLDNN = 1` in`config.mk`. 
+`USE_MKLDNN = 1` in `config.mk`. 
 
-We also find that setting the following two environment variables can help:
-- `export KMP_AFFINITY=granularity=fine,compact,1,0` if there are two physical CPUs
-- `export OMP_NUM_THREADS=vCPUs / 2` in which `vCPUs` is the number of virtual CPUs.
-  Whe using Linux, we can access this information by running `cat /proc/cpuinfo  | grep processor | wc -l`
+We also find that setting the following environment variables can help:
+
+| Variable  | Description |
+| :-------- | :---------- |
+| `OMP_NUM_THREADS`            | Suggested value: `vCPUs / 2` in which `vCPUs` is the number of virtual CPUs. For more information please see [here](https://software.intel.com/en-us/mkl-windows-developer-guide-setting-the-number-of-threads-using-an-openmp-environment-variable) |
+| `KMP_AFFINITY`               | Suggested value: `granularity=fine,compact,1,0`.  For more information please see [here](https://software.intel.com/en-us/node/522691). |
 
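The suggested settings in the table above can be combined into a short shell snippet; this is a sketch assuming a Linux host where `/proc/cpuinfo` is available (the variable name `vcpus` is illustrative, not from the doc):

```shell
# Count virtual CPUs the same way the doc suggests (Linux-only assumption:
# /proc/cpuinfo lists one "processor" line per virtual CPU).
vcpus=$(grep -c ^processor /proc/cpuinfo)

# Suggested value from the table: half the virtual CPUs.
export OMP_NUM_THREADS=$((vcpus / 2))

# Suggested affinity setting from the table.
export KMP_AFFINITY=granularity=fine,compact,1,0

echo "vCPUs=$vcpus OMP_NUM_THREADS=$OMP_NUM_THREADS KMP_AFFINITY=$KMP_AFFINITY"
```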
 Review comment:
   Instead of "here", provide some information about what each link points to.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
