huochaitiantang commented on pull request #7937:
URL: https://github.com/apache/tvm/pull/7937#issuecomment-829092342


   @mbrookhart Thanks for your comment!
   I have checked the supported input types of operators in ORT:
   https://github.com/microsoft/onnxruntime/blob/master/docs/OperatorKernels.md
   
   `Conv`: tensor(float)
   `Gemm`: tensor(double), tensor(float)
   `Add`, `Mul`, `Sub`: tensor(double), tensor(float), tensor(int32), tensor(int64)
   
   This shows that the operators above cannot accept int8 tensors in ORT. So when such an operator appears between a `QuantizeLinear` and a `DequantizeLinear`, ORT cannot run it, even though it is intended to execute on quantized data.
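   To make the int8 issue concrete, here is a minimal sketch of the per-element `QuantizeLinear`/`DequantizeLinear` math as defined in the ONNX operator spec (assuming a uint8 target type and scalar scale/zero-point; the function names are mine, not TVM or ORT APIs). The tensor produced by `quantize_linear` is exactly the 8-bit value that an operator sandwiched between the two nodes would have to consume:

   ```python
   def quantize_linear(x, scale, zero_point):
       # ONNX QuantizeLinear: y = saturate(round(x / scale) + zero_point),
       # saturated to the uint8 range [0, 255]. Rounding is half-to-even,
       # which matches Python's built-in round().
       q = round(x / scale) + zero_point
       return max(0, min(255, q))

   def dequantize_linear(q, scale, zero_point):
       # ONNX DequantizeLinear: y = (q - zero_point) * scale
       return (q - zero_point) * scale

   # A float value round-trips through the 8-bit representation:
   q = quantize_linear(0.5, scale=0.01, zero_point=128)   # 178 (uint8)
   x = dequantize_linear(q, scale=0.01, zero_point=128)   # back to 0.5
   ```

   Any `Conv`/`Gemm`/`Add` placed between these two nodes would receive values like `q` above, which is why ORT's float-only kernels for those operators reject such graphs.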
   
   My question is: even if ORT cannot run these models successfully, should TVM still support importing them and generate correct QNN ops? The answer determines whether this PR is necessary.
   
   In addition, ORT quantization operators such as `ConvInteger` and `QLinearConv` do run successfully with uint8 input tensors, so it may be natural to add import support for those operators later.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

