FrozenGene commented on issue #4828: [QNN][TFLite] TFLite rounding mode support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-596083786
 
 
   > I'm fine with both choices. The pooling implementation doesn't affect any 
other test except for this bit-exact computation. It can be regarded as either a 
bug or a feature to me : ) (as np.mean chooses the FLOOR implementation for 
integer inputs). @FrozenGene @tqchen
   
   I prefer adding it in this PR. I hope we can port it back to 0.6, as 
@u99127 suggested earlier.
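To make the FLOOR-vs-round-to-nearest distinction concrete, here is a minimal sketch (the window values are made up for illustration, not taken from any model) of how the two rounding choices diverge on an integer average pool:

```python
import numpy as np

# Hypothetical 1x3 average-pooling window of quantized (integer) values.
window = np.array([1, 2, 2], dtype=np.int32)
total = int(window.sum())   # 5
count = window.size         # 3

floor_avg = total // count                    # FLOOR rounding: 5 // 3 -> 1
nearest_avg = (total + count // 2) // count   # round-half-up:  (5+1) // 3 -> 2

# The two modes disagree on this window, which is why a bit-exact
# test can pass with one implementation and fail with the other.
print(floor_avg, nearest_avg)
```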
   
   > > I think you are confusing the GPU container with GPU target. The tests 
are still running on CPU.
   > > 
   > > * You might have done this already. But have you run mobilenetv2 locally?
   > 
   > Yes, and it passed smoothly
   
   I suspect it is because of a different LLVM version.
   Another thing: I suggested earlier that "another way is to provide a rounding 
arg for qnn.add / qnn.mul / qnn.concatenate, because they in fact use requantize 
too, so they also need rounding". The quantized MobileNet V2 model contains 
`quantized add`, so I am curious how we can get the same result without 
supporting rounding in `quantized add`.
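For context, the reason `qnn.add` depends on the rounding mode can be sketched as follows. This is a simplified stand-in, not TVM's actual `qnn.requantize` API: each operand is rescaled to the output scale before the integer add, and that rescale step must round.

```python
import numpy as np

def requantize(x, in_scale, out_scale, rounding="TONEAREST"):
    # Simplified stand-in for QNN requantize: rescale a quantized
    # value from in_scale to out_scale under the given rounding mode.
    v = x * (in_scale / out_scale)
    if rounding == "FLOOR":
        return int(np.floor(v))
    return int(np.round(v))  # NumPy rounds halves to even

# qnn.add conceptually requantizes both operands to the output
# scale first, so the rounding mode affects the final sum.
a = requantize(7, in_scale=0.5, out_scale=1.0)   # 3.5 rounds to 4
b = requantize(9, in_scale=0.25, out_scale=1.0)  # 2.25 rounds to 2
out = a + b
```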
   
   > I have LLVM 9.0.1, compiled from source. In my local environment, 
`_test_forward_elemwise(partial(_test_add, fused_activation_function="RELU6"))` 
fails, but it does not in the CI. I'm not sure which LLVM version the CI uses.
   
   The tolerance in `tvm.testing.assert_allclose(tflite_output[i], 
tvm_output[i], atol=1e-5, rtol=1e-5)` may be too strict. You could relax it to 
`atol=1e-3, rtol=1e-3` and check whether the test passes on your machine.
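To illustrate the tolerance point with plain NumPy (the output values below are made up, not from the actual test): a per-element difference around 1e-4, which a rounding-mode mismatch can easily produce, fails at `1e-5` but passes at `1e-3`:

```python
import numpy as np

# Made-up outputs that differ by roughly 1e-4 per element.
tflite_output = np.array([0.1000, 0.2000], dtype=np.float32)
tvm_output = np.array([0.1001, 0.2001], dtype=np.float32)

# The relaxed tolerance passes.
np.testing.assert_allclose(tflite_output, tvm_output, atol=1e-3, rtol=1e-3)

# The strict tolerance used in the test raises AssertionError.
try:
    np.testing.assert_allclose(tflite_output, tvm_output, atol=1e-5, rtol=1e-5)
    strict_passed = True
except AssertionError:
    strict_passed = False
```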

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]

