padreofthegame commented on PR #15472:
URL: https://github.com/apache/tvm/pull/15472#issuecomment-1673611147

   Hello @ekalda, thank you for the comment. 
   
   I will try to explain my observations on this problem.
   
   Basically, I was working with a simple quantized TFLite model containing a conv2d and a bias_add layer. It worked fine with `groups == 1`, but failed with, for example, `groups == 2`, producing this error message:
   
   ```
   TVMError: Traceback (most recent call last):
     12: TVMFuncCall
     11: 
tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule
 (tvm::transform::Pass, 
tvm::IRModule)>::AssignTypedLambda<tvm::transform::{lambda(tvm::transform::Pass,
 tvm::IRModule)#7}>(tvm::transform::{lambda(tvm::transform::Pass, 
tvm::IRModule)#7}, std::__cxx11::basic_string<char, std::char_traits<char>, 
std::allocator<char> >)::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, 
tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
     10: tvm::transform::Pass::operator()(tvm::IRModule) const
     9: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     8: tvm::transform::SequentialNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     7: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     6: tvm::transform::ModulePassNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
     5: 
tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<tvm::IRModule
 (tvm::IRModule, 
tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::{lambda(tvm::IRModule,
 tvm::transform::PassContext 
const&)#1}>(tvm::relay::transform::InferType()::{lambda(tvm::IRModule, 
tvm::transform::PassContext const&)#1})::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, 
tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
     4: tvm::relay::transform::InferType()::{lambda(tvm::IRModule, 
tvm::transform::PassContext const&)#1}::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const [clone .isra.0]
     3: tvm::DiagnosticContext::Render()
     2: tvm::DiagnosticRenderer::Render(tvm::DiagnosticContext const&)
     1: 
tvm::runtime::PackedFuncObj::Extractor<tvm::runtime::PackedFuncSubObj<tvm::runtime::TypedPackedFunc<void
 
(tvm::DiagnosticContext)>::AssignTypedLambda<tvm::TerminalRenderer(std::ostream&)::{lambda(tvm::DiagnosticContext
 
const&)#1}>(tvm::TerminalRenderer(std::ostream&)::{lambda(tvm::DiagnosticContext
 const&)#1})::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, 
tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
     0: tvm::ReportAt(tvm::DiagnosticContext const&, std::ostream&, tvm::Span 
const&, tvm::Diagnostic const&)
     File "/home/padre/TVM_full_repo/tvm/src/ir/diagnostic.cc", line 264
   TVMError: The source maps are not populated for this module.
   Please use `tvm.relay.transform.AnnotateSpans` to attach source maps for
   error reporting.
   
   Error: The Relay type checker is unable to show the following types match:
     Tensor[(16), float32]
     Tensor[(2), float32]
   In particular:
     dimension 0 conflicts: 16 does not match 2.
   ```
   
   During further testing I realized that the problem occurs every time the 
parameter `groups` differs from both 1 and the depth of the input tensor (the 
latter should be equivalent to a depthwise convolution). Looking deeper into 
the code, I tracked the problem down to `qnn/op/convolution.cc`, specifically 
to this part:
   ```
   AssignType(types[5], DataType::Float(32),
              weight->shape[i_axis] * weight->shape[o_axis],
              reporter);  // weight_scale
   ```
   Since `types[5]` corresponds to `kernel_scale` and is a tensor of length 
`num_kernels`, `weight->shape[i_axis] * weight->shape[o_axis]` will in general 
differ from `num_kernels`, so the tensor type matching error occurs.
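   To make the mismatch concrete, here is a small Python sketch (not TVM code; 
the kernel layout, axis positions, and sizes are illustrative assumptions) 
comparing the length the current check computes with the number of scales 
actually present for a grouped convolution:

   ```python
   # Hypothetical sketch: assume an OIHW-style kernel layout where
   # o_axis = 0 and i_axis = 1.

   def scale_len_current_rule(weight_shape, o_axis=0, i_axis=1):
       # The rule applied in the quoted AssignType call:
       # length = weight->shape[i_axis] * weight->shape[o_axis]
       return weight_shape[i_axis] * weight_shape[o_axis]

   def scale_len_actual(out_channels):
       # kernel_scale carries one scale per output channel
       return out_channels

   # Grouped conv example: in_channels=4, out_channels=16, groups=2
   # -> assumed OIHW weight shape (16, 4 // 2, 3, 3) = (16, 2, 3, 3)
   w = (16, 2, 3, 3)
   print(scale_len_current_rule(w))  # 32: what the check expects
   print(scale_len_actual(16))       # 16: what kernel_scale actually holds
   ```

   The two lengths agree for a depthwise kernel (where the product of the two 
axes equals the number of output channels) but diverge for grouped 
convolutions, which matches the type-checker error above.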
   
   As a solution, I added an additional if statement that checks whether the 
convolution is depthwise (similar to the code in `Conv2dRel`) and keeps the 
current behaviour in that case; otherwise the shape of `types[5]` is set 
according to the ordinary conv2d requirements.
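   The intent of that fix can be sketched roughly as follows (Python 
pseudocode of the branching logic only, not the actual C++ change; the 
`groups == in_channels` depthwise test and the axis positions are assumptions 
modelled on the convention used in `Conv2dRel`):

   ```python
   # Hypothetical sketch of the proposed kernel_scale length selection.

   def expected_kernel_scale_len(groups, in_channels, weight_shape,
                                 i_axis, o_axis):
       # Depthwise convolution: groups > 1 and groups equals input depth
       is_depthwise = groups > 1 and groups == in_channels
       if is_depthwise:
           # Keep the current behaviour: channels * channel multiplier
           return weight_shape[i_axis] * weight_shape[o_axis]
       # Ordinary or grouped conv2d: one scale per output channel
       return weight_shape[o_axis]

   # Grouped conv (groups=2), assumed OIHW weight (16, 2, 3, 3), in_channels=4:
   # now yields 16, matching the per-output-channel kernel_scale tensor.
   print(expected_kernel_scale_len(2, 4, (16, 2, 3, 3), i_axis=1, o_axis=0))
   ```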

