masahi edited a comment on issue #6268:
URL: https://github.com/apache/incubator-tvm/issues/6268#issuecomment-689960903


   OK, reproduced on torch 1.4.
   
   First, this is the input Torchscript IR:
   ```
   graph(%x : Long(4, 5)):
     %1 : int = prim::Constant[value=1]() # test.py:8:0
     %2 : int = aten::size(%x, %1) # test.py:8:0
     %3 : Long() = prim::NumToTensor(%2)
     %4 : Long(4, 5) = aten::div_(%x, %3) # test.py:8:0
     return (%4)
   ```
   
   It seems this is due to the surprising behavior of `torch.result_type`, which we use to promote the dtypes of lhs and rhs:
https://github.com/apache/incubator-tvm/blob/eee413f9d9f1157b777737adf39060dda1991841/python/tvm/relay/frontend/pytorch.py#L130
   
   Even though both lhs and rhs are clearly int64, `result_type` can return float32:
   
   ```
   import torch
   import numpy as np
   
   lhs = torch.zeros((), dtype=torch.int64)
   rhs = 5 * np.ones([]).astype("int64")  # what prim::NumToTensor(5) above converts to in our frontend
   
   print(torch.result_type(lhs, 5))
   print(torch.result_type(lhs, rhs))
   ```
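   Note that the rhs built by our frontend is a 0-d NumPy array rather than a Python number, which is presumably what sends `result_type` down the array promotion path. A quick NumPy-only check of what gets constructed (illustrative, no torch required):
   ```
   import numpy as np

   # Mirrors what our NumToTensor conversion produces for the scalar 5:
   rhs = 5 * np.ones([]).astype("int64")

   print(type(rhs))         # a 0-d numpy.ndarray, not a Python int
   print(rhs.ndim, rhs.dtype)
   print(type(rhs.item()))  # .item() recovers the plain Python int
   ```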
   This is the output:
   ```
   torch.int64
   torch.float32
   ```
   
   Since PyTorch decides that float32 is the right type, an unnecessary cast is introduced, giving the error above.
   
   cc @t-vi @siju-samuel What should we do about this? The easiest solution seems to be returning a Python integer instead of creating a NumPy scalar in the `numtotensor` converter below.
   
   
https://github.com/apache/incubator-tvm/blob/eee413f9d9f1157b777737adf39060dda1991841/python/tvm/relay/frontend/pytorch.py#L1101
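   For concreteness, here is a hypothetical sketch of that change (the function name and input layout are illustrative, not the actual frontend code): unwrap a 0-d NumPy scalar into a native Python number before it reaches the dtype-promotion logic.
   ```
   import numpy as np

   def numtotensor(inputs):
       # Sketch: if the converted value is a 0-d NumPy array, return the
       # native Python number instead, so torch.result_type-based promotion
       # sees an int rather than an array it promotes to float32.
       val = inputs[0]
       if isinstance(val, np.ndarray) and val.ndim == 0:
           return val.item()
       return val

   result = numtotensor([5 * np.ones([]).astype("int64")])
   print(result, type(result))  # 5 <class 'int'>
   ```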

