LiangHao151941 commented on issue #4828: [QNN][TFLite] TFLite rounding mode 
support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-595210343
 
 
   > Maybe we have to handle it a bit more. Please see: 
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/internal/optimized/integer_ops/pooling.h#L300
   > 
   > The default behaviour of TFLite is `optimize`, so TFLite will run the kernels 
from the `optimized` directory rather than `reference`. For average pool, the 
optimized kernel has two cases: the type of `acc` is `uint16` if 
`params.filter_height * params.filter_width <= 16 * 16`, otherwise the type of 
`acc` is `int32` as in our implementation. If we want to keep bit-exact results, 
we should handle this here too. You remind me of this situation; I hit this bug 
before when it wasn't handled.
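   To make the two accumulator paths concrete, here is a rough Python sketch of the switch described above (this is illustrative, not the actual TFLite kernel; the helper name is mine):

   ```python
   import numpy as np

   def avg_pool_window(window, filter_h, filter_w):
       """Average one uint8 pooling window, mimicking the accumulator-width
       switch in TFLite's optimized average-pool kernel."""
       if filter_h * filter_w <= 16 * 16:
           acc_dtype = np.uint16  # max sum is 256 * 255 = 65280, fits in uint16
       else:
           acc_dtype = np.int32   # larger windows need a wider accumulator
       acc = int(np.sum(window, dtype=acc_dtype))
       count = filter_h * filter_w
       # Round the average to nearest by adding count/2 before dividing.
       return np.uint8((acc + count // 2) // count)
   ```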
   
   I think `acc` being `uint16` is fine: in that case the largest accumulator 
value is 256 * 255 = 65280 < 65535, so there should not be any overflow problems 
(and of course `int32` is fine as well). So that might not be a problem. 
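   A quick arithmetic check of that overflow bound:

   ```python
   # Largest window that takes the uint16 path is 16*16 = 256 uint8 taps,
   # so the worst-case accumulator value is 256 * 255 = 65280.
   max_taps = 16 * 16
   max_acc = max_taps * 255
   assert max_acc == 65280
   assert max_acc <= 65535  # uint16 max, so no overflow on this path
   ```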
   
   > Another way is to provide a rounding arg for qnn.add / qnn.mul / 
qnn.concatenate, because they in fact use requantize too, so they also need 
rounding.
   
   Maybe we can mark these as TODOs for now, given the lack of test cases, and 
leave them for the next PR?
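   For context, the TFLite rounding these requantize-based ops would need is round-to-nearest with ties away from zero, as in gemmlowp's `RoundingDivideByPOT`. A minimal Python sketch of that rounding (illustrative, not the kernel code):

   ```python
   def rounding_divide_by_pot(x, exponent):
       """Divide x by 2**exponent, rounding to nearest with ties away
       from zero (mirrors gemmlowp's RoundingDivideByPOT behavior)."""
       mask = (1 << exponent) - 1
       remainder = x & mask                          # non-negative in Python
       threshold = (mask >> 1) + (1 if x < 0 else 0) # shift tie point for negatives
       return (x >> exponent) + (1 if remainder > threshold else 0)
   ```

   Note 5 / 2 = 2.5 rounds to 3 and -5 / 2 = -2.5 rounds to -3, whereas TVM's default "upward" rounding would give -2 for the latter.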
   
   A new problem is that changing the default pooling behavior breaks some pool 
op tests. I'm going to add a `tflite_mode` bool flag for relay.nn.avg_pool2d; 
what do you think? @FrozenGene 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]
