wpan11nv opened a new pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type
URL: https://github.com/apache/incubator-tvm/pull/4867
 
 
   - This allows better utilization of the memory bandwidth.
   
   - Note that not all cases are vectorized for the fp16 datatype. For
     instance, when the size is not a multiple of 1024, the inner loop
     may be an expression that cannot be vectorized. In this case, a
     small inner loop is still beneficial for latency hiding.
   
   Signed-off-by: Wei Pan <w...@nvidia.com>
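   The split-factor choice the description alludes to can be sketched as
   below. This is an illustrative helper only, not the actual TOPI
   schedule code; the function name and defaults are hypothetical. The
   idea: fp16 values are 16 bits, so a 128-bit vector load carries 8
   lanes (twice as many as fp32), and a loop extent is vectorizable only
   when some lane count divides it evenly; otherwise the schedule falls
   back to a small scalar inner loop, which still aids latency hiding.
   
   ```python
   def pick_vector_factor(extent, dtype_bits=16, max_vector_bits=128):
       """Pick the largest power-of-two lane count that evenly divides
       `extent`, so the inner loop can be vectorized.
   
       With 128-bit loads, fp16 (16-bit) allows up to 8 lanes.
       Hypothetical helper for illustration, not TOPI code.
       """
       lanes = max_vector_bits // dtype_bits
       while lanes > 1:
           if extent % lanes == 0:
               return lanes
           lanes //= 2
       # No clean split: keep a scalar inner loop, which is not
       # vectorized but still helps hide memory latency.
       return 1
   ```
   
   For example, an extent of 1024 splits cleanly into 8 fp16 lanes,
   while an odd extent such as 1001 falls back to the scalar case.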
   
   
