comaniac commented on pull request #8108: URL: https://github.com/apache/tvm/pull/8108#issuecomment-847271637
> @comaniac Disabling the flag makes the tests pass. What should we do here? Accept lower accuracy for performance?

I personally prefer to keep the accuracy, because it doesn't seem right to tolerate a 1e-2 error for a single batch_matmul op. It implies that the end-to-end error of any model using cublas.batch_matmul may be larger than 1e-2. cc @Hzfengsy @Laurawly, who added this flag back then; it hasn't been deprecated.
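To make the tolerance concern concrete, here is a minimal sketch of how a loose 1e-2 check compares with a tight one. The arrays and tolerance values are illustrative only, not taken from the PR's actual tests:

```python
import numpy as np

# Reference result and a hypothetical fast-kernel result that is off by ~1%.
ref = np.ones((2, 4, 4), dtype="float32")
fast = ref * np.float32(1.01)  # 1% relative error on every element

# A loose per-op tolerance of 1e-2 accepts the 1% error without complaint.
np.testing.assert_allclose(fast, ref, rtol=1e-2, atol=1e-2)

# The much tighter default (rtol=1e-7) rejects the same result.
try:
    np.testing.assert_allclose(fast, ref, rtol=1e-7)
    tight_passes = True
except AssertionError:
    tight_passes = False

print(tight_passes)  # the 1% error fails the tight check
```

The point is that a 1e-2 tolerance on a single op silently admits errors of that magnitude, which then propagate into whole-model accuracy.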
