Zheng-Bicheng commented on PR #16653:
URL: https://github.com/apache/tvm/pull/16653#issuecomment-1984932842

   > Generally speaking, I suggest adding tests that will exercise the path 
being proposed here, that is from PaddlePaddle to CMSIS-NN, including at least 
one softmax operator. Does that make sense?
   
   Are you suggesting using a PaddlePaddle model that contains a softmax 
operator, specifying CMSIS-NN as the runtime, and then validating that the 
CMSIS-NN output matches that of the original PaddlePaddle model? That approach 
isn't feasible at the moment; I've already highlighted the potential issues 
with it in TVM Pull Request 16651.
   
   In short, in the current version of TVM, converting a quantized 
PaddlePaddle model to a TVM model produces discrepancies in the model's 
computation results. I'm confident this isn't an issue with my porting work, 
because the same problem exists with ONNX models.
   
   You can review my detailed test code in [TVM Pull Request 
16651](https://github.com/apache/tvm/pull/16651), where I convert the quantized 
Paddle model to a TVM model and specify the target as llvm running on the CPU.
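   As a side note, the kind of output validation being discussed could be 
sketched as below. This is a minimal, illustrative helper only: the tolerance 
value, the sample arrays, and the function name are my own assumptions, not 
code from either pull request, and the actual comparison would feed real 
PaddlePaddle and TVM/CMSIS-NN outputs into it.

```python
import numpy as np


def outputs_match(reference, candidate, atol=1.0):
    """Compare two model outputs, allowing a small absolute drift.

    For quantized (e.g. int8) inference, exact equality is usually too
    strict because rounding can differ between runtimes; a tolerance of
    about one quantization step is a common choice (assumption here,
    not a value taken from the PRs).
    """
    reference = np.asarray(reference, dtype=np.float64)
    candidate = np.asarray(candidate, dtype=np.float64)
    if reference.shape != candidate.shape:
        return False
    return bool(np.max(np.abs(reference - candidate)) <= atol)


# Hypothetical usage: ref_out would come from the PaddlePaddle model,
# tvm_out from the converted TVM module (llvm target or CMSIS-NN).
ref_out = np.array([0.1, 0.70, 0.20])
tvm_out = np.array([0.1, 0.69, 0.21])
print(outputs_match(ref_out, tvm_out, atol=0.05))  # → True
```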


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
