yma11 commented on issue #27546: [SPARK-30773][ML] Support NativeBlas for level-1 routines
URL: https://github.com/apache/spark/pull/27546#issuecomment-586294938
 
 
   Hi @srowen and @mengxr,
   Thanks for your comments on this PR. Please refer to the attached performance results, which cover three BLAS implementations (f2jBLAS, Intel MKL and OpenBLAS). You can see that native BLAS has a significant advantage in axpy(), dot() and scal(double, dense) for both small and large vectors. In my testing, this advantage can likely be improved further by tuning native BLAS variables such as MKL_NUM_THREADS/OPENBLAS_NUM_THREADS based on the data size, the cores used, etc.
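   For reference, here is a minimal sketch of the kind of level-1 timing the attached harness performs (illustrative only; the real code is in microbenchmark.zip below, and `BlasLevel1Sketch` is a hypothetical name, but it uses the same netlib-java API that Spark's BLAS wrapper builds on):

   ```scala
   import com.github.fommil.netlib.{BLAS => NetlibBLAS}

   object BlasLevel1Sketch {
     def main(args: Array[String]): Unit = {
       // getInstance() loads a native implementation (MKL/OpenBLAS) when one
       // is available on the system; otherwise it falls back to the
       // pure-Java F2jBLAS.
       val blas = NetlibBLAS.getInstance()
       println(s"BLAS implementation: ${blas.getClass.getName}")

       val n = 1 << 20
       val x = Array.fill(n)(scala.util.Random.nextDouble())
       val y = Array.fill(n)(scala.util.Random.nextDouble())

       val start = System.nanoTime()
       blas.daxpy(n, 2.0, x, 1, y, 1)     // y := 2.0 * x + y
       val dot = blas.ddot(n, x, 1, y, 1) // x . y
       blas.dscal(n, 0.5, x, 1)           // x := 0.5 * x
       val ms = (System.nanoTime() - start) / 1e6
       println(f"axpy + dot + scal over $n elements took $ms%.3f ms (dot = $dot%.3f)")
     }
   }
   ```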
   I also attached the microbenchmark lib so you can reproduce the results. Here are my test details for your information:
   
   **Spark version:** 
   master branch with head commit “e2d984aa1c79eb389cc8d333f656196b17af1c32: 
[SPARK-30733][R][HOTFIX] Fix SparkR tests per testthat and R version upgrade, 
and disable CRAN”
   **Test environment:**
   CPU: Skylake 6252 (Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz)
   OS: CentOS 7
   Kernel: 3.10.0-862.el7.x86_64
   Java version: 1.8.0_112
   MKL version: 2019
   OpenBLAS version: libopenblas_haswellp-r0.3.8.dev
   **Test command:**
   `/opt/spark/bin/spark-submit --name test --master local --num-executors 1 --executor-cores 4 --executor-memory 10g --class org.intel.spark.TestMlAxpy microbenchmark.jar`
   (replace `org.intel.spark.TestMlAxpy` with `org.intel.spark.TestMlDot` or `org.intel.spark.TestMlScal` for the other routines)
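   To experiment with the thread-count tuning mentioned above, the environment variables can be prefixed to the same command when running with a local master (the values below are illustrative starting points, not tuned recommendations):

   ```
   MKL_NUM_THREADS=4 OPENBLAS_NUM_THREADS=4 /opt/spark/bin/spark-submit --name test --master local --num-executors 1 --executor-cores 4 --executor-memory 10g --class org.intel.spark.TestMlAxpy microbenchmark.jar
   ```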
   
   
[Level1-routines-perf.xlsx](https://github.com/apache/spark/files/4204686/Level1-routines-perf.xlsx)
   
[microbenchmark.zip](https://github.com/apache/spark/files/4204689/microbenchmark.zip)
   
