ibsidorenko opened a new pull request, #16892:
URL: https://github.com/apache/tvm/pull/16892

   This is an attempt to bring over [commit](https://github.com/octoml/tvm/commit/f21b9c9c561e7bcb7a81ae12d71568c6e7c1fc49) and align `octoml/tvm` with `apache/tvm`. The commit replaces the fp16 compute dtype and scale dtype with fp32 in the cuBLAS matmul.
   
   According to the cuBLAS [docs](https://docs.nvidia.com/cuda/cublas/index.html#cublasltmatmul), there are two possible options for the compute/scale dtype when the input/output dtype is fp16:
   1. compute dtype is `fp16` and scale dtype is `fp16`
   2. compute dtype is `fp32` and scale dtype is `fp32`
   
   By default, apache/tvm uses 1) and octoml/tvm uses 2). This commit aligns the two behaviours and sets `fp32` as the default.
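   The practical difference between the two options is accuracy: with option 1 every partial sum is rounded to fp16, while with option 2 accumulation happens in fp32 and only the final result is rounded to fp16. A minimal NumPy sketch (not the actual cuBLAS/TVM code path; the shapes and seed are arbitrary) illustrates why fp32 accumulation is the safer default:

   ```python
   import numpy as np

   rng = np.random.default_rng(0)
   a = rng.standard_normal((64, 64)).astype(np.float16)
   b = rng.standard_normal((64, 64)).astype(np.float16)

   # Reference result, accumulated in float64.
   ref = a.astype(np.float64) @ b.astype(np.float64)

   # Option 2: accumulate in fp32, round the final result to fp16.
   out_fp32 = (a.astype(np.float32) @ b.astype(np.float32)).astype(np.float16)

   # Option 1: emulate fp16 accumulation by rounding every partial sum to fp16.
   acc = np.zeros((64, 64), dtype=np.float16)
   for k in range(a.shape[1]):
       acc = (acc + np.outer(a[:, k], b[k, :])).astype(np.float16)
   out_fp16 = acc

   err_fp16 = np.abs(out_fp16.astype(np.float64) - ref).max()
   err_fp32 = np.abs(out_fp32.astype(np.float64) - ref).max()
   print(f"max abs error, fp16 accumulation: {err_fp16:.4f}")
   print(f"max abs error, fp32 accumulation: {err_fp32:.4f}")
   ```

   The fp16-accumulation error grows with the reduction dimension, whereas the fp32-accumulation error is essentially just the final rounding to fp16, which is why fp32 compute/scale is the more robust default for fp16 inputs and outputs.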
   
   cc @vinx13 @masahi 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
