jackwish commented on a change in pull request #4351: [QNN] Lowering for Depthwise Convolution.
URL: https://github.com/apache/incubator-tvm/pull/4351#discussion_r347752773
 
 

 ##########
 File path: src/relay/qnn/op/convolution.cc
 ##########
 @@ -391,7 +526,20 @@ Expr Conv2DCombineTerms(const Expr& term1, const Expr& term2, const Expr& term3,
  *         gives an opportunity to reuse alter_op_layout infrastructure.
  *         3) For dilated conv, in current lowering, we need dilated pool. So as
  *         a workaround, we fall back to simpler lowering using int32 conv if
- *         the conv is dilated. We fallback also in case of depthwise conv.
 + *         the conv is dilated. We also fall back in the case of grouped conv.
+ *
 + *       For depthwise, we can similarly unroll the computation. The initial
 + *       compute is as follows, where cm = channel_multiplier:
+ *
 + *       Qc(n, oc, oh, ow) = Sigma(r, s) (Qw(oc/cm, oc%cm, r, s) - zp_w)
 + *                                      * (Qa(n, oc/cm, oh + r, ow + s) - zp_a)
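The unrolled lowering this comment describes can be sanity-checked numerically. Below is a minimal NumPy sketch, not code from the PR: shapes, zero-point values, and helper names are illustrative. It checks that expanding (Qw - zp_w) * (Qa - zp_a) into four terms (the weight*activation term, a zp_w-scaled pooled sum of activations, a zp_a-scaled sum of weights, and a constant) reproduces the direct depthwise compute.

```python
import numpy as np

# Hypothetical sizes: batch N, input channels C, channel_multiplier cm,
# KxK kernel, HxW input; output channels OC = C * cm (depthwise).
N, C, cm, K, H, W = 1, 2, 2, 3, 5, 5
OC = C * cm
zp_a, zp_w = 3, 2  # assumed activation / weight zero points

rng = np.random.default_rng(0)
Qa = rng.integers(0, 255, size=(N, C, H, W)).astype(np.int64)
Qw = rng.integers(0, 255, size=(C, cm, K, K)).astype(np.int64)
OH, OW = H - K + 1, W - K + 1


def depthwise_ref(Qa, Qw):
    """Direct evaluation of
    Qc(n, oc, oh, ow) = Sigma(r, s) (Qw(oc/cm, oc%cm, r, s) - zp_w)
                                  * (Qa(n, oc/cm, oh + r, ow + s) - zp_a)
    """
    Qc = np.zeros((N, OC, OH, OW), dtype=np.int64)
    for n in range(N):
        for oc in range(OC):
            c, m = oc // cm, oc % cm
            for oh in range(OH):
                for ow in range(OW):
                    acc = 0
                    for r in range(K):
                        for s in range(K):
                            acc += (Qw[c, m, r, s] - zp_w) * \
                                   (Qa[n, c, oh + r, ow + s] - zp_a)
                    Qc[n, oc, oh, ow] = acc
    return Qc


def depthwise_unrolled(Qa, Qw):
    """Unrolled into four terms, mirroring the QNN-style lowering:
    t1 = Sigma Qw*Qa, t2 = zp_w * Sigma Qa (a pooled activation sum),
    t3 = zp_a * Sigma Qw, t4 = K*K*zp_a*zp_w; Qc = t1 - t2 - t3 + t4.
    """
    Qc = np.zeros((N, OC, OH, OW), dtype=np.int64)
    for n in range(N):
        for oc in range(OC):
            c, m = oc // cm, oc % cm
            for oh in range(OH):
                for ow in range(OW):
                    a_patch = Qa[n, c, oh:oh + K, ow:ow + K]
                    t1 = int(np.sum(Qw[c, m] * a_patch))
                    t2 = zp_w * int(np.sum(a_patch))
                    t3 = zp_a * int(np.sum(Qw[c, m]))
                    t4 = K * K * zp_a * zp_w
                    Qc[n, oc, oh, ow] = t1 - t2 - t3 + t4
    return Qc


assert np.array_equal(depthwise_ref(Qa, Qw), depthwise_unrolled(Qa, Qw))
```

Note how t2 depends only on the activation window, which is why the lowering can express it as a pooled sum reused across the channel multiplier.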
 
 Review comment:
   Suggesting to rewrite the related formulas with TeX: [block example](https://stackoverflow.com/questions/10081633/using-displaymath-directives-in-docstrings-formulas), or [inline style](https://github.com/apache/incubator-tvm/blob/master/topi/python/topi/nn/dense.py#L66).
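As a concrete illustration of the block style the reviewer links to, here is a hypothetical docstring using a `.. math::` directive; the function name is made up and the formula rendering is a sketch of how the comment's equation might be transcribed:

```python
def qnn_depthwise_conv2d():
    r"""Illustrative docstring using the suggested ``.. math::`` block style.

    .. math::

        Qc(n, oc, oh, ow) = \sum_{r, s}
            \left( Qw(oc/cm,\ oc \bmod cm,\ r, s) - zp_w \right)
            \cdot \left( Qa(n,\ oc/cm,\ oh + r,\ ow + s) - zp_a \right)
    """
```

Sphinx renders such blocks with MathJax, so the formulas in the docs stay readable while the source remains plain text.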

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
