ChaiBapchya commented on issue #17980:
URL: https://github.com/apache/incubator-mxnet/issues/17980#issuecomment-629385677


   > Tested with MXNet [cfb474b](https://github.com/apache/incubator-mxnet/commit/cfb474ba743d5ea85161bf19875488f4cb409d3c). Compiled with mostly-default cmake settings:
   > 
   > ```shell
   > cmake -GNinja -DUSE_CUDA=OFF -DCMAKE_BUILD_TYPE=Release ..
   > ```
   > 
   > Then when I run
   > 
   > ```
   > export MKL_VERBOSE=1
   > export MKLDNN_VERBOSE=1
   > python3
   > Python 3.6.9 (default, Nov  7 2019, 10:44:02) 
   > [GCC 8.3.0] on linux
   > Type "help", "copyright", "credits" or "license" for more information.
   > >>> import mxnet as mx
   > Numpy + Intel(R) MKL: THREADING LAYER: (null)
   > Numpy + Intel(R) MKL: setting Intel(R) MKL to use INTEL OpenMP runtime
   > Numpy + Intel(R) MKL: preloading libiomp5.so runtime
   > ```
   
   Running on Ubuntu 18.04 [which doesn't have MKL installed by default], the default cmake config doesn't use MKL as the BLAS backend, so setting the exports above never produces any MKL output.
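
   One quick way to confirm which BLAS a given libmxnet build actually uses is MXNet's runtime feature list (a minimal sketch; the exact feature names come from `mx.runtime.Features()` and may vary slightly between versions):
   ```python
   import mxnet as mx

   # Print the compile-time BLAS / MKL-DNN flags the binary was built with.
   features = mx.runtime.Features()
   for name in ('BLAS_MKL', 'BLAS_OPEN', 'BLAS_ATLAS', 'MKLDNN'):
       print(name, features.is_enabled(name))
   ```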
   
   Thus, for the Ubuntu 18.04 base AMI, one has to install MKL in /opt/intel and update the cmake command to:
   ```
   cmake -GNinja -DUSE_CUDA=OFF -DCMAKE_BUILD_TYPE=Release -DUSE_BLAS=mkl ..
   ```
   I found that this build uses MKL as the BLAS backend, and `export MKL_VERBOSE=1` confirms it.
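
   For reference, a single GEMM call is enough to trigger the verbose output (a sketch, assuming `MKL_VERBOSE=1` is exported in the shell before launching Python):
   ```python
   import mxnet as mx

   # One dot product goes through the BLAS backend; with MKL as BLAS and
   # MKL_VERBOSE=1 set, an "MKL_VERBOSE SGEMM(...)"-style line should be printed.
   a = mx.nd.random.uniform(shape=(512, 512))
   b = mx.nd.random.uniform(shape=(512, 512))
   mx.nd.dot(a, b).wait_to_read()  # force execution so the verbose line appears
   ```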
   
   With this addition to both builds [default & workaround], I reran opperf and didn't see much of a performance difference.
   
   Default
   ```
   cmake -GNinja -DUSE_CUDA=OFF -DCMAKE_BUILD_TYPE=Release -DUSE_BLAS=mkl ..
   ```
   
   Workaround
   ```
   export CXXFLAGS="${CXXFLAGS} -DUSE_MKL -I/opt/intel/mkl/include"
   cmake -GNinja -DUSE_CUDA=OFF -DCMAKE_BUILD_TYPE=Release -DUSE_BLAS=mkl ..
   ```
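
   For reproducibility, here is a rough wall-clock timing sketch for the Dot / Batch_dot / FC shapes in the tables below (an approximation only: forward pass, simple timing loop, so the absolute numbers will not match the opperf output exactly):
   ```python
   import time
   import mxnet as mx

   def time_op(fn, runs=100, warmup=10):
       # Average wall-clock time per call in milliseconds.
       for _ in range(warmup):
           fn().wait_to_read()
       start = time.time()
       for _ in range(runs):
           fn().wait_to_read()
       return (time.time() - start) / runs * 1000

   lhs = mx.nd.random.uniform(shape=(4, 512, 512))
   rhs = mx.nd.random.uniform(shape=(4, 512, 512))
   print('dot       %.4f ms' % time_op(lambda: mx.nd.dot(lhs, rhs)))
   print('batch_dot %.4f ms' % time_op(lambda: mx.nd.batch_dot(lhs, rhs)))

   data = mx.nd.random.uniform(shape=(4, 512))
   weight = mx.nd.random.uniform(shape=(512, 512))
   print('FC        %.4f ms' % time_op(
       lambda: mx.nd.FullyConnected(data, weight, no_bias=True, num_hidden=512)))
   ```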
   
   Results
   | Operator   | LHS             | RHS             | MKL Default | MKL Workaround |
   |------------|-----------------|-----------------|-------------|----------------|
   | Dot        | (4, 512, 512)   | (4, 512, 512)   | 4.1112      | 4.8241         |
   |            | (5, 512, 512)   | (5, 512, 512)   | 6.4421      | 7.607          |
   |            | (5, 512, 1536)  | (5, 512, 1536)  | 20.3648     | 19.2217        |
   |            | (5, 512, 2048)  | (5, 512, 2048)  | 23.3236     | 23.2849        |
   |            | (5, 2048, 512)  | (5, 2048, 512)  | 123.1235    | 123.9806       |
   | Batch_dot  | (4, 512, 512)   | (4, 512, 512)   | 1.4105      | 1.407          |
   |            | (5, 512, 512)   | (5, 512, 512)   | 1.7558      | 1.7511         |
   |            | (5, 512, 1536)  | (5, 512, 1536)  | 6.5931      | 6.5585         |
   |            | (5, 512, 2048)  | (5, 512, 2048)  | 9.1452      | 9.1031         |
   |            | (5, 2048, 512)  | (5, 2048, 512)  | 29.0192     | 28.9236        |
   
   | Operator   | Data       | Weight       | MKL Default | MKL Workaround |
   |------------|------------|--------------|-------------|----------------|
   | FC         | (4, 512)   | (512, 512)   | 0.057       | 0.0685         |
   |            | (5, 512)   | (512, 512)   | 0.0591      | 0.0698         |
   |            | (5, 512)   | (1536, 512)  | 0.0823      | 0.0939         |
   |            | (5, 512)   | (2048, 512)  | 0.0916      | 0.1026         |
   |            | (5, 2048)  | (512, 2048)  | 0.1146      | 0.1267         |

