Zheng-Bicheng commented on PR #16653:
URL: https://github.com/apache/tvm/pull/16653#issuecomment-1985373112

   > I'm suggesting that tests need to be added when new code is contributed. 
Now, what degree of specific testing is practical for a given contribution is 
part of the contribution process.
   > 
   > It is a known fact that TVM will have numerical discrepancies with various 
input frameworks, but that hasn't been a blocker to adding tests to the 
existing implementation in all different frameworks. I don't see why 
PaddlePaddle should be an exception to that, and it is not clear in the long 
term, what is the rush into merging contributions without tests, as it 
increases technical debt to the project.
   
   Sorry, English isn't my native language, so I may not have expressed my 
point clearly. The issue we're discussing isn't 'whether to add tests' but 
rather 'what level of error is acceptable in those tests.'
   
   There is an inherent numerical discrepancy between TVM and the various 
frontend frameworks, and without changing TVM's core code we can only tolerate 
a larger margin of error in the tests. This discrepancy is not unique to 
PaddlePaddle; it is also present in the ONNX frontend, so PaddlePaddle models 
are not a special case.
   
   The pull request adding support for ONNX quantized models was merged 
earlier than the PR adding support for PaddlePaddle quantized models. Since we 
already allow this level of error for ONNX quantized models in TVM, why can't 
we allow the same level of error for PaddlePaddle quantized models?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
