AndrewZhaoLuo commented on pull request #9186:
URL: https://github.com/apache/tvm/pull/9186#issuecomment-974512711


   The error is about the return type: an int16 return type is expected, 
but int32 is produced.
   
   It looks like there might be a problem with mixed-precision support for 
batch_matmul on CUDA. Looking into it.
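
   To illustrate the kind of mismatch involved, here is a hypothetical NumPy sketch (not TVM code, and using float16/float32 rather than the int16/int32 dtypes from the actual error): a mixed-precision matmul typically takes narrow inputs, accumulates in a wider dtype, and is expected to cast the result back. If that final cast is missing, the caller sees the wide accumulator dtype where it expected the narrow one.

   ```python
   import numpy as np

   # Narrow-precision inputs (stand-ins for the mixed-precision operands).
   a = np.random.randn(2, 3, 4).astype(np.float16)
   b = np.random.randn(2, 5, 4).astype(np.float16)

   # Accumulate in a wider dtype for accuracy/overflow headroom.
   acc = np.matmul(a.astype(np.float32),
                   b.astype(np.float32).transpose(0, 2, 1))

   # The operator is expected to cast back to the narrow dtype; if this
   # step is skipped, the output dtype no longer matches the declared
   # return type -- analogous to the int32-vs-int16 mismatch above.
   out = acc.astype(np.float16)
   ```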

