alnah005 opened a new issue #9908:
URL: https://github.com/apache/tvm/issues/9908


   I'm using [`from_onnx`](https://github.com/apache/tvm/blob/44fe7ef816565f43380c50e0b43fd626fad9d029/python/tvm/relay/frontend/onnx.py#L5063) to convert my model [dense_h32_w32_c3_sNone_pNone_kNone.pt_quant.onnx.zip](https://github.com/apache/tvm/files/7851265/dense_h32_w32_c3_sNone_pNone_kNone.pt_quant.onnx.zip) into TVM Relay. It looks like `b_scale` in [`QLinearMatMul`](https://github.com/apache/tvm/blob/44fe7ef816565f43380c50e0b43fd626fad9d029/python/tvm/relay/frontend/onnx.py#L3760) is expected to be a scalar scale, not a vector of scales.
   I'm not sure whether this is a problem with ONNX producing a vector of scales for the weights, or whether the `QLinearMatMul` converter should support scale vectors.
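   For context, the ONNX spec allows the `b_scale` input of `QLinearMatMul` to be either a scalar (per-tensor quantization) or a 1-D tensor (one scale per column of `b`). A minimal NumPy sketch of the dequantization step, using illustrative values that are not taken from the attached model:

   ```python
   import numpy as np

   # Illustrative quantized weight matrix b: 3 input channels, 20 output columns.
   b_q = np.arange(60, dtype=np.int8).reshape(3, 20)
   b_zero_point = np.int8(0)

   # Per-tensor quantization: b_scale is a scalar.
   scalar_scale = np.float32(0.05)
   b_per_tensor = (b_q.astype(np.float32) - b_zero_point) * scalar_scale

   # Per-column quantization: b_scale is a 1-D tensor with one scale per
   # output column -- shape (20,), the same shape reported in this issue.
   vector_scale = np.linspace(0.01, 0.2, 20, dtype=np.float32)
   b_per_column = (b_q.astype(np.float32) - b_zero_point) * vector_scale

   # NumPy broadcasts both forms; the dequantized weights keep shape (3, 20).
   assert b_per_tensor.shape == (3, 20)
   assert b_per_column.shape == (3, 20)
   ```

   Both forms are valid ONNX, which is why the exporter can legitimately emit a `(20,)` scale vector here.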
   
   Info about the model:
   1. Input: (batch, channel, height, width) -> (1, 3, 32, 32)
   2. Global average pool: (batch, channel, 1, 1) -> (1, 3, 1, 1)
   3. Reshape: (batch, channel) -> (1, 3)
   4. Dense layer: weight shape (channel, 20) -> (3, 20)
       > The dense layer is where the error occurs.
   
   > You might notice when you debug `b_scale` that it has shape (20,).
   
   ### Expected behavior
   
   No errors converting my model [dense_h32_w32_c3_sNone_pNone_kNone.pt_quant.onnx.zip](https://github.com/apache/tvm/files/7851265/dense_h32_w32_c3_sNone_pNone_kNone.pt_quant.onnx.zip) into TVM Relay using [`from_onnx`](https://github.com/apache/tvm/blob/44fe7ef816565f43380c50e0b43fd626fad9d029/python/tvm/relay/frontend/onnx.py#L5063).
   
   
   ### Actual behavior
   
   ```
    assert num_elem == 1, "Cannot squeeze tensor shape {} to scalar form.".format(x_shape)
    E       AssertionError: Cannot squeeze tensor shape (20,) to scalar form.
   ```
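
   The assertion in the traceback comes from squeezing `b_scale` down to a single element before use. A standalone sketch of that check (`squeeze_to_scalar` is a hypothetical name here, not TVM's actual helper), showing why a `(20,)` per-column scale trips it:

   ```python
   import numpy as np

   def squeeze_to_scalar(x):
       # Hypothetical stand-in for the frontend check that raises above:
       # a constant may only be treated as a scalar if it holds one element.
       x_shape = x.shape
       num_elem = int(np.prod(x_shape))
       assert num_elem == 1, "Cannot squeeze tensor shape {} to scalar form.".format(x_shape)
       return x.reshape(())

   # A per-tensor scale with a single element squeezes fine...
   squeeze_to_scalar(np.array([0.05], dtype=np.float32))

   # ...but a per-column b_scale of shape (20,) trips the assertion.
   try:
       squeeze_to_scalar(np.zeros(20, dtype=np.float32))
   except AssertionError as err:
       print(err)  # Cannot squeeze tensor shape (20,) to scalar form.
   ```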

