FrozenGene commented on issue #4803: [WIP][Frontend] Asymmetric padding of 
convolution support
URL: https://github.com/apache/incubator-tvm/pull/4803#issuecomment-581488095
 
 
   Initial Investigation:
   
   For QNN, before asymmetric padding support, we handled it in the TFLite 
frontend by inserting `nn.pad`, so that `attr['padding']` is always 0 even for 
`same` padding. However, things change once we have asymmetric padding 
support. If we pass the 4D padding (top, left, bottom, right) directly to 
QNN and target CPU without int8 acceleration, we will call
   ```python
   def helper_no_fast_int8_hw_legalization
   ```
   and there is no pad handling there.
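   For context, TFLite's `same` padding can be genuinely asymmetric: when the 
total padding needed for a dimension is odd, the extra element goes on the 
bottom/right side. A minimal sketch of the per-dimension computation (a 
hypothetical helper for illustration, not TVM code):
   ```python
   def same_padding_1d(in_size, kernel, stride, dilation=1):
       """Compute TFLite-style SAME padding for one spatial dimension.

       Returns (pad_before, pad_after); pad_after may exceed pad_before,
       which is exactly the asymmetric case discussed above.
       """
       effective_kernel = (kernel - 1) * dilation + 1
       out_size = (in_size + stride - 1) // stride  # ceil division
       total_pad = max((out_size - 1) * stride + effective_kernel - in_size, 0)
       pad_before = total_pad // 2
       pad_after = total_pad - pad_before
       return pad_before, pad_after
   ```
   For example, an input of size 6 with a 3x3 kernel and stride 2 needs 
(0, 1) padding, which cannot be expressed as a single symmetric value.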
   
   The logic of QNN is always to pad the input, as in `Conv2DPadInput`. This 
is not ideal, because we will always end up with a `pad` operator before the 
QNN convolution. We should add `pad_const` and let the TVM backend handle 
the pad value.
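   A `pad_const` would let the backend pad with the quantized input's zero 
point rather than a literal 0. As an illustration only (the function and 
parameter names here are hypothetical, not TVM's API), this is the kind of 
padding such an attribute would drive:
   ```python
   import numpy as np

   def pad_input_with_const(data, padding, pad_const):
       """Pad the spatial dims of an NCHW tensor with an explicit value.

       padding is (top, left, bottom, right); pad_const would typically be
       the input zero point for a quantized convolution.
       """
       top, left, bottom, right = padding
       return np.pad(
           data,
           ((0, 0), (0, 0), (top, bottom), (left, right)),
           mode="constant",
           constant_values=pad_const,
       )
   ```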
   
   I would prefer that this PR add asymmetric padding for FP32 and leave QNN 
padding support as before (i.e. inserting `nn.pad` in the TFLite frontend). 
However, I wish we could add one attribute `pad_const` to `Conv2DAttr` and 
the related APIs to support asymmetric padding computation for QNN. What do 
you think? @anijain2305

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
