masahi commented on issue #8284:
URL: https://github.com/apache/tvm/issues/8284#issuecomment-864467897


   This is a very strange model in that it contains multiple ONNX `Loop` nodes for no good reason. In particular, there is a loop at the beginning that does input image preprocessing, and for some reason the output of that loop is dynamic in all dimensions. As a result, the input to the first convolution op is already dynamic in the H and W dimensions, which causes the error above.
   
   ```
   ...
   %37 = subtract(%36, meta[relay.Constant][5] /* ty=Tensor[(1, 1, 1, 1), float32] */) /* ty=Tensor[(?, ?, ?, ?), float32] */;
   %38 = nn.conv2d(%37, meta[relay.Constant][6] /* ty=Tensor[(32, 3, 3, 3), float32] */, strides=[2, 2], padding=[0, 0, 1, 1], kernel_size=[3, 3]) /* ty=Tensor[(?, 32, ?, ?), float32] */;
   ...
   ``` 
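   One way to confirm where the dynamism is introduced is to inspect the ONNX graph itself rather than the imported Relay module: a dimension that carries a `dim_param` instead of a `dim_value` will be imported as `?`. Below is a minimal sketch using the `onnx` Python package; the tensor name and shape are hypothetical, standing in for the preprocessing loop's output.
   
   ```python
   from onnx import helper, TensorProto
   
   # Hypothetical value_info mimicking the Loop output after shape inference:
   # batch and channel are static, but H and W are symbolic (dynamic).
   loop_out = helper.make_tensor_value_info(
       "loop_out", TensorProto.FLOAT, [1, 3, "H", "W"])
   
   def dynamic_dims(value_info):
       """Return indices of dimensions that have no static dim_value."""
       dims = value_info.type.tensor_type.shape.dim
       return [i for i, d in enumerate(dims) if not d.HasField("dim_value")]
   
   print(dynamic_dims(loop_out))  # → [2, 3]
   ```
   
   Running the same check over every `value_info` of the model (after `onnx.shape_inference.infer_shapes`) would show whether the H and W dims are already symbolic in the ONNX graph or only become dynamic during our import.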
   
   I suspect that our ONNX `Loop` support does not preserve static shape information precisely, since it makes no sense for the first conv2d op after the preprocessing loop to have a dynamic input. This could also be one of the reasons MaskRCNN import from ONNX does not work well: that model contains a loop, and compilation fails on dynamic H and W dimensions that should not exist. @jwfromm @mbrookhart 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

