LiangHao151941 commented on issue #4828: [QNN][TFLite] TFLite rounding mode 
support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-583271045
 
 
   Some updates from further experiments, @FrozenGene @anijain2305:
   
   > 1. if we have TFLITE rounding support, we should make TFLite frontend 
using TFLITE rounding and we should get the bit exact result as TFLite.
   
   This has been added to the TFLite Relay frontend.
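   For reference, the rounding TFLite applies when requantizing is the one from
gemmlowp's `RoundingDivideByPOT`, which rounds ties away from zero; a plain
"add half, then shift" rounds negative ties toward positive infinity instead.
A minimal NumPy sketch of the two behaviors (function names here are mine,
illustrative only):

   ```python
   import numpy as np

   def rounding_divide_by_pot(x, exponent):
       # Rounding arithmetic right shift with ties away from zero,
       # mirroring gemmlowp's RoundingDivideByPOT (used by TFLite kernels).
       x = np.asarray(x, dtype=np.int64)
       mask = (1 << exponent) - 1
       remainder = x & mask
       threshold = (mask >> 1) + np.where(x < 0, 1, 0)
       return (x >> exponent) + np.where(remainder > threshold, 1, 0)

   def round_half_up(x, exponent):
       # Plain "add half, then shift" rounding, which sends negative
       # ties toward +inf instead of away from zero.
       x = np.asarray(x, dtype=np.int64)
       return (x + (1 << (exponent - 1))) >> exponent

   # The two agree on positive ties but differ on negative ones:
   # -3 / 2 = -1.5 rounds to -2 (away from zero) vs -1 (half up).
   ```

   These single-ulp differences on negative ties are exactly the kind of
mismatch that `atol=1` papers over.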
   
   > 2. you should also modify the `test_forward.py` for tflite, like test_qnn* 
related test cases, we shouldn't need `atol=1` any more ideally.
   
   For the current label comparison, the new rounding mode works fine. 
Unfortunately, it does not yet give bit-exact execution. In fact, for the 3 
qnn models in `test_forward.py`, using the L1 norm of the prediction error as 
the metric, `mobilenet_v1` and `mobilenet_v2` do drop with "TFLITE" rounding, 
but `inception_v1` increases the metric (i.e., more error). 
   Also note that runtime performance degrades with the new rounding, though I 
have not measured it quantitatively yet.
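   To be explicit, the metric above is just the L1 norm of the elementwise
difference between the two models' output vectors; a minimal sketch (the
helper name is mine):

   ```python
   import numpy as np

   def l1_prediction_error(tvm_out, tflite_out):
       # Sum of absolute elementwise differences between the two outputs;
       # 0.0 means the predictions are numerically identical.
       tvm_out = np.asarray(tvm_out, dtype=np.float64)
       tflite_out = np.asarray(tflite_out, dtype=np.float64)
       return np.abs(tvm_out - tflite_out).sum()
   ```

   A lower value after the rounding change means the TVM output moved closer
to the TFLite reference, which is what we see for the two MobileNets but not
for `inception_v1`.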
   
   > 3. you could add `q_conv2d` unit testing case, we could get the same 
result compared with TFLite. we lack of this unit testing.
   
   This will be essential for investigating the root causes of the bit 
mismatch. Will follow up.
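   As a sketch of what such a unit test could assert (helper name is
hypothetical; the point is an elementwise comparison with zero tolerance that
also reports where the outputs diverge, to help localize rounding bugs):

   ```python
   import numpy as np

   def assert_bit_exact(actual, expected):
       # Elementwise equality check (effectively atol=0) that reports the
       # first mismatching position and values on failure.
       actual = np.asarray(actual)
       expected = np.asarray(expected)
       bad = np.flatnonzero(actual.ravel() != expected.ravel())
       assert bad.size == 0, (
           "%d / %d elements differ; first mismatch at flat index %d: %s vs %s"
           % (bad.size, actual.size, bad[0],
              actual.ravel()[bad[0]], expected.ravel()[bad[0]]))
   ```

   Running a single quantized conv2d through both TVM and the TFLite
interpreter and feeding the raw int8/uint8 outputs to a check like this would
pinpoint exactly which accumulations round differently.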
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
