comaniac commented on issue #7730:
URL: https://github.com/apache/tvm/issues/7730#issuecomment-854866628


   While @csullivan proposed a long-term solution to resolve the implementation 
differences between targets, this issue on CUDA has been worked around in the 
PyTorch frontend in the PR mentioned above. Specifically, if either of the two 
inputs to matmul is 2D, the PyTorch frontend now reshapes the 3D tensor to 2D 
and uses `dense`, instead of expanding the 2D tensor to 3D and using 
`batch_matmul`. Meanwhile, other frontends may still hit this issue, so I'll 
see if I can find time to file a PR to fix the CuBLAS issue next week.


