kevinthesun commented on a change in pull request #6449:
URL: https://github.com/apache/incubator-tvm/pull/6449#discussion_r487296417



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -127,8 +128,22 @@ def _is_quantized_tensor(data, prelude):
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])
-        return get_relay_op(name)(data0, data1)
+        dtype0, dtype1 = input_types[:2]
+        if isinstance(inputs[0], _expr.Expr):
+            dtype0 = _infer_type(inputs[0]).checked_type.dtype
+        if isinstance(inputs[1], _expr.Expr):
+            dtype1 = _infer_type(inputs[1]).checked_type.dtype
+

Review comment:
       ```%11 : int = aten::size(%img.1, %10)``` generates int32, but ```%im_h : 
Long() = prim::NumToTensor(%11)``` automatically converts it to int64, without 
any hint. When converting ```prim::NumToTensor```, we can only follow the 
input type, which is int32 here, since there is no other information. So this 
is about the odd behavior of ```prim::NumToTensor``` rather than indexing. I'm 
not sure how many other ops in PyTorch have this behavior, but it looks like 
inferring the actual input type in ```_pytorch_promote_types``` would fix this 
kind of issue.
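To illustrate the idea outside of TVM (a minimal sketch, not the actual frontend code): the fix is to prefer the dtype carried by the expression itself over the dtype hint recorded in the traced graph, then promote. The ```FakeExpr``` class and ```actual_dtype``` helper below are hypothetical stand-ins; in the real converter the inferred dtype would come from ```_infer_type(expr).checked_type.dtype```, and NumPy's ```result_type``` only approximates torch's promotion rules.

```python
import numpy as np

class FakeExpr:
    """Hypothetical stand-in for a Relay expression carrying an inferred dtype."""
    def __init__(self, dtype):
        self.dtype = dtype

def actual_dtype(value, declared):
    # Prefer the dtype inferred from the expression over the traced hint,
    # since hints like the one from prim::NumToTensor can be misleading.
    return value.dtype if isinstance(value, FakeExpr) else declared

def promoted_dtype(inputs, input_types):
    d0 = actual_dtype(inputs[0], input_types[0])
    d1 = actual_dtype(inputs[1], input_types[1])
    # NumPy promotion as a rough proxy for torch's elementwise promotion.
    return np.result_type(d0, d1).name

# aten::size yields an int32 scalar, but prim::NumToTensor silently wraps
# it as int64 ("Long"); inferring the real dtypes promotes to int64.
print(promoted_dtype([FakeExpr("int64"), FakeExpr("int32")], ["int32", "int32"]))
```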




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

