oviazlo opened a new issue, #13070:
URL: https://github.com/apache/tvm/issues/13070

   When running the unit tests from `tests/python/frontend/pytorch/test_fx_quant.py` in a `Torch 1.12.0` and `Torchvision 0.13.0` environment, the `test_ssd_vgg` test fails.
   
   ### Expected behavior
   
   The unit test runs successfully.
   
   ### Actual behavior
   
   The test fails with the following error message:
   
   ```python 
   test_fx_quant.py:41: in quantize_and_build
       mod, _ = relay.frontend.from_pytorch(script_module, [(input_name, 
inp.shape)])
   ../../../../python/tvm/relay/frontend/pytorch.py:4626: in from_pytorch
       outputs = converter.convert_operators(tmp, outputs, ret_name)
   ../../../../python/tvm/relay/frontend/pytorch.py:3998: in convert_operators
       relay_out = relay_op(
   ../../../../python/tvm/relay/frontend/qnn_torch.py:627: in _impl
       inputs[0], _expr.const(inputs[1]), _expr.const(inputs[2]), 
out_dtype="uint8", axis=axis
   ../../../../python/tvm/relay/expr.py:508: in const
       value.dtype, None
   _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ 
   
   self = Var(model_backbone_scale_0, ty=TensorType([], float32)), name = 
'dtype'
   
       def __getattr__(self, name):
           # specially check handle since
           # this is required for PackedFunc calls
           if name == "handle":
               raise AttributeError("handle is not set")
       
           try:
               return _ffi_node_api.NodeGetAttr(self, name)
           except AttributeError:
   >           raise AttributeError("%s has no attribute %s" % 
(str(type(self)), name)) from None
   E           AttributeError: <class 'tvm.relay.expr.Var'> has no attribute 
dtype
   
   ../../../../python/tvm/runtime/object.py:67: AttributeError
   ```
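The traceback bottoms out in `relay.expr.const`, which infers a dtype by reading `value.dtype` from the value it is given. A plain-Python sketch of that failure mode (the `FakeVar` class and `infer_dtype` helper are hypothetical stand-ins, not TVM code):

```python
import numpy as np

# Hypothetical stand-in for tvm.relay.expr.Var: like the real Var, it has
# no .dtype attribute, unlike the numpy values relay.expr.const expects.
class FakeVar:
    def __init__(self, name_hint):
        self.name_hint = name_hint

def infer_dtype(value):
    # Mirrors the failing step inside relay.expr.const: read value.dtype.
    return value.dtype

# A numpy scalar carries a dtype, so inference succeeds...
assert str(infer_dtype(np.float32(0.5))) == "float32"

# ...but a Var-like object does not, which produces the AttributeError
# seen in the traceback above.
raised = False
try:
    infer_dtype(FakeVar("model_backbone_scale_0"))
except AttributeError:
    raised = True
assert raised
```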
   
   The failure occurs while executing the following code:
   ```python
   relay_out = relay_op(
       inputs, _get_input_types(op_node, outputs, 
default_dtype=self.default_dtype)
   )
   ```
   with the following arguments:
   ```python
   op_node = {Node} %quantize_per_tensor_14 : QUInt8(512, strides=[1], 
requires_grad=0, device=cpu) = aten::quantize_per_tensor(%scale_weight, 
%model_backbone_scale_0, %model_backbone_zero_point_0, %217) # 
<eval_with_key>.10:25:0
   inputs = {list:4} [Var(model.backbone.scale_weight, ty=TensorType([512], 
float32)), Var(model_backbone_scale_0, ty=TensorType([], float32)), 
Var(model_backbone_zero_point_0, ty=TensorType([], int64)), 13]
   ```
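To make the mapping explicit: `inputs[1]` and `inputs[2]` are exactly the scale and zero-point arguments that the `quantize_per_tensor` handler in `qnn_torch.py` wraps with `_expr.const`, so they must already be numeric at that point. A sketch with placeholder strings standing in for the relay Vars from the dump above (not actual TVM objects):

```python
# Placeholder values mirroring the inputs list from the failing call;
# in a correct run, entries 1 and 2 would already be a float and an int.
inputs = [
    "Var(model.backbone.scale_weight)",   # data tensor to quantize
    "Var(model_backbone_scale_0)",        # scale: still a Var, not a float
    "Var(model_backbone_zero_point_0)",   # zero point: still a Var, not an int
    13,                                   # torch scalar-type code (QUInt8)
]

scale, zero_point = inputs[1], inputs[2]
# The handler effectively calls _expr.const(scale) / _expr.const(zero_point),
# which presumes numeric values -- but both are still symbolic here.
assert not isinstance(scale, (int, float))
assert not isinstance(zero_point, (int, float))
```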
   
   The problem is that the scale (`model_backbone_scale_0`) and zero point (`model_backbone_zero_point_0`) parameters of the `aten::quantize_per_tensor` operator have to be canonicalized by the function [inline_input_quant_params_for_fx](https://github.com/apache/tvm/blob/b44f1343a10ccc908de5e65b864012c72d564a7b/python/tvm/relay/frontend/qnn_torch.py#L537). However, that does not happen, because the implementation only canonicalizes parameters whose names contain `"_input_scale"` or `"_input_zero_point"`:
   
   
https://github.com/apache/tvm/blob/b44f1343a10ccc908de5e65b864012c72d564a7b/python/tvm/relay/frontend/qnn_torch.py#L567-L584
   
   which is not the case for the parameters named `model_backbone_scale_0` and `model_backbone_zero_point_0`.
   
   This problem can be solved by changing the keywords from `"_input_scale"` and `"_input_zero_point"` to `"_scale"` and `"_zero_point"` inside the [inline_input_quant_params_for_fx](https://github.com/apache/tvm/blob/b44f1343a10ccc908de5e65b864012c72d564a7b/python/tvm/relay/frontend/qnn_torch.py#L537) function. I have tried this change and the test passes.
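A sketch of that fix as a standalone helper mirroring the substring check (the `is_quant_param` function is hypothetical, written only to illustrate the keyword change, not the actual TVM patch):

```python
def is_quant_param(name, keywords=("_scale", "_zero_point")):
    """Return True if the parameter name looks like a quantization parameter."""
    return any(kw in name for kw in keywords)

# The current keywords miss the FX-generated names from this issue...
old_keywords = ("_input_scale", "_input_zero_point")
assert not is_quant_param("model_backbone_scale_0", old_keywords)
assert not is_quant_param("model_backbone_zero_point_0", old_keywords)

# ...while the relaxed keywords match them, and still match names the
# old keywords covered.
assert is_quant_param("model_backbone_scale_0")
assert is_quant_param("model_backbone_zero_point_0")
assert is_quant_param("conv_input_scale")
assert is_quant_param("conv_input_zero_point")
```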
   
   ### Environment
   
   - TVM version: 
[v0.10.0rc0](https://github.com/apache/tvm/releases/tag/v0.10.0rc0) 
(https://github.com/apache/tvm/commit/b44f1343a10ccc908de5e65b864012c72d564a7b)
   - Ubuntu 20.04.1, x86_64, GNU/Linux 5.15.0-1021-aws
   - torch **1.12.0+cpu**
   - torchvision **0.13.0+cpu**
   
   ### Steps to reproduce
   
   In a `Torch 1.12.0` and `Torchvision 0.13.0` environment, run:
   `python -m pytest tests/python/frontend/pytorch/test_fx_quant.py`
   
   ### Triage
   
   * frontend:pytorch
   

