cee1 commented on pull request #10650:
URL: https://github.com/apache/tvm/pull/10650#issuecomment-1070624588


   > @FrozenGene Yes, it only happens when the fused subgraph's output dtype differs from conv2d's output dtype, e.g. int8 vs. int32. The performance bottleneck then shifts from computation to memory access.
   
   Consider a standalone conv and a subgraph containing it, and ask:
   - Which takes more time if the same schedule template and parameters for the conv are applied?
   
   Intuitively, the standalone conv takes less time, since it requires fewer compute ops.
   
   This is NOT true for QNN:
   - a "conv" eats a tensor of dtype=int8/uint8 and outputs a tensor of dtype=int32
   - a "subgraph", e.g. "conv + bias + requantize", eats the same input tensor but outputs a tensor of dtype=int8/uint8, thanks to the fused "requantize"
   
   The "subgraph", i.e. the fused op, has much less "write-back": each output element is 1 byte (int8/uint8) instead of 4 bytes (int32), roughly a 4x reduction in store traffic. In our case it therefore outperforms the single "conv", even though the conv has fewer compute ops. A minimal Relay sketch of the pattern follows below.
   
   Performing AutoTVM at subgraph granularity makes a lot of sense here.

