Laurawly opened a new pull request #4550: [Perf] Add CublasLt extern support 
for better Igemm performance
URL: https://github.com/apache/incubator-tvm/pull/4550
 
 
   Currently, when computing int8 GEMM using cuBLAS, TVM calls CublasGemmEx, which does not actually take advantage of the int8 Tensor Cores. By using [cublasLt](https://docs.nvidia.com/cuda/cublas/index.html#cublasLt-example-tensorop) (for CUDA version >= 10.1), we speed up int8 GEMM by up to 3.5x. Here is a performance comparison between cublasLt and the CublasGemmEx path TVM currently calls, measured on a Tesla T4 GPU.
   
   | m = n = k | cublasLt (TOPS) | CublasGemmEx (TOPS) |
   | --- | --- | --- |
   | 1024 | 13.28 | 8.49 |
   | 2048 | 25.46 | 10.4 |
   | 4096 | 37.75 | 11.87 |
   | 6144 | 41.02 | 11.6 |
   | 8192 | 42.19 | 14.18 |
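
   For context, here is a minimal usage sketch of how the new extern might be invoked from TVM's Python API, assuming it is exposed as `tvm.contrib.cublaslt.matmul` by analogy with the existing `tvm.contrib.cublas.matmul`; the module name and signature here are assumptions, not necessarily the PR's exact interface.

   ```python
   # Hypothetical usage sketch; assumes the PR exposes a cublasLt-backed
   # matmul as tvm.contrib.cublaslt.matmul, mirroring tvm.contrib.cublas.matmul.
   import tvm
   from tvm import te
   from tvm.contrib import cublaslt  # assumed module name

   n = m = k = 4096
   A = te.placeholder((n, k), name="A", dtype="int8")
   B = te.placeholder((k, m), name="B", dtype="int8")
   # int8 inputs are accumulated into an int32 output, as cublasLt Igemm requires
   C = cublaslt.matmul(A, B, dtype="int32")

   s = te.create_schedule(C.op)
   f = tvm.build(s, [A, B, C], target="cuda")
   ```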
   
   Note that cublasLt Igemm requires the input matrices to satisfy certain layouts in order to trigger IMMA Tensor Core operations: matrices A and C must share the same memory ordering (CUBLASLT_ORDER_COL32), and matrix B must use the CUBLASLT_ORDER_COL4_4R2_8C layout. For best performance, we perform these layout transformations only once, on the Python side, before passing the matrices to cublasLt. A sketch of the COL32 reordering follows below.
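
   To make the layout requirement concrete, here is a small NumPy sketch of the COL32 reordering (my own illustration, not the PR's code). COL32 stores the matrix as column-major tiles of 32 columns, so element (r, c) of a row-major (m, n) matrix lands at flat offset (c // 32) * (32 * m) + r * 32 + (c % 32).

   ```python
   import numpy as np

   def to_col32(a):
       """Reorder a row-major (m, n) matrix into CUBLASLT_ORDER_COL32.

       The matrix is split into vertical tiles of 32 columns; the tiles are
       laid out one after another, and within a tile element (r, c) sits at
       offset r * 32 + (c % 32). Assumes n is a multiple of 32 (pad first
       otherwise).
       """
       m, n = a.shape
       assert n % 32 == 0, "pad the column dimension to a multiple of 32"
       # (m, n) -> (m, n//32, 32) -> (n//32, m, 32) -> flat COL32 buffer
       return a.reshape(m, n // 32, 32).transpose(1, 0, 2).reshape(-1)

   # Sanity check: element (r, c) ends up at (c // 32) * (32 * m) + r * 32 + c % 32
   a = np.arange(64 * 64, dtype=np.int8).reshape(64, 64)
   col32 = to_col32(a)
   assert col32[(33 // 32) * (32 * 64) + 5 * 32 + 33 % 32] == a[5, 33]
   ```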
   
   @Hzfengsy @masahi Could you review?
   
