kevinthesun commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r487256873
##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
# operator implementation
def _elemwise(name):
def _impl(inputs, input_types):
- data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
- return get_relay_op(name)(data0, data1)
+ dtype0, dtype1 = input_types[:2]
+ if isinstance(inputs[0], _expr.Expr):
+ dtype0 = _infer_type(inputs[0]).checked_type.dtype
+ if isinstance(inputs[1], _expr.Expr):
+ dtype1 = _infer_type(inputs[1]).checked_type.dtype
+
Review comment:
This comes from the weird behavior of ```prim::NumToTensor```, which
silently converts int32 to int64:
```
%11 : int = aten::size(%img.1, %10), scope: __module.model # /usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py:62:0
%im_h : Long() = prim::NumToTensor(%11), scope: __module.model
```
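On the Relay frontend side, the converter for this op currently passes an
expression input through unchanged, so the output keeps the input's dtype.
Roughly (a sketch, not verbatim from the file; ```_expr``` is the helper
already imported there):
```python
# Rough sketch of the current prim::NumToTensor conversion: an input
# that is already a Relay expression is returned as-is, so the output
# keeps the input's dtype (here int32 from aten::size) even though the
# TorchScript graph declares the output as Long() (int64).
def _numtotensor():
    def _impl(inputs, input_types):
        val = inputs[0]
        if isinstance(val, _expr.Expr):
            # Pass-through: dtype unchanged, PyTorch's int64 ignored.
            return val
        ...  # constant / scalar handling elided
    return _impl
```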
Right now the PyTorch frontend just reuses that input dtype for this op's
output. For an elemwise op, the PyTorch-reported input dtypes are then
["int64", "int64"], which looks fine. However, the actual input dtypes are
["int64", "int32"]. What I can do is enhance
```_pytorch_promote_types``` so that we run ```_infer_type``` on every input
and use the actual input dtype, rather than relying solely on the dtype
PyTorch reports. Does that sound like a plan? A sketch follows below.
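Concretely, something along these lines (a rough sketch of the proposed
```_pytorch_promote_types``` change; ```_expr``` and ```_infer_type``` are
the helpers already used in this file, and the existing promotion/casting
logic at the end is elided):
```python
def _pytorch_promote_types(inputs, dtypes):
    """Promote inputs to a common dtype, recovering the actual dtype of
    Relay expression inputs via type inference instead of trusting the
    dtype PyTorch reports."""
    actual_dtypes = []
    for i, inp in enumerate(inputs):
        if isinstance(inp, _expr.Expr):
            # Query the real dtype from Relay type inference; this
            # catches mismatches such as the int32 value that
            # prim::NumToTensor reports as int64.
            actual_dtypes.append(_infer_type(inp).checked_type.dtype)
        else:
            # Non-expr inputs (Python scalars etc.): keep the reported dtype.
            actual_dtypes.append(dtypes[i])
    dtypes = actual_dtypes
    ...  # existing promotion / casting logic, unchanged
```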