SeanWangJS opened a new issue, #13211:
URL: https://github.com/apache/tvm/issues/13211

   ### Expected behavior
   
   The same output value as the PyTorch function.
   
   ### Actual behavior
   
   Seemingly random output — the values change on every run.
   
   ### Environment
   
   OS: Ubuntu 18.04
   PyTorch: 1.10.0
   TVM: 0.10.0rc0.dev73+g2a2dd9ac5
   
   ### Steps to reproduce
   
   ```python
   from torchvision import ops
   import torchvision.transforms.functional as F
   import torch
   import tvm
   from tvm import relay
   from tvm.contrib import graph_executor
   
   
   torch.manual_seed(123)
   
   imgs = torch.randn(1, 3, 14, 14)
   rois = torch.FloatTensor([[0., 0., 56., 448., 392.]])
   
   roi_align = ops.RoIAlign(2, spatial_scale=14 / 448.0, sampling_ratio=-1, aligned=True)
   out1 = roi_align(imgs, rois)
   
   print("pytorch output: ", out1)
   
   script_roi_align = torch.jit.trace(roi_align, (imgs, rois)).eval()
   shape_list = [("xs", imgs.shape), ("rois", rois.shape)]
   
   mod, params = relay.frontend.from_pytorch(script_roi_align, shape_list)
   target = tvm.target.Target("llvm", host="llvm")
   
   with tvm.transform.PassContext(opt_level=0):
       lib = relay.build(mod, target=target, params=params)
   
   dev = tvm.cpu(0)
   
   
   model = graph_executor.GraphModule(lib['default'](dev))
   model.set_input("xs", imgs)
   model.set_input("rois", rois)
   model.run()
   out3 = model.get_output(0)
   print("tvm output: ", out3)
   ```
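   Rather than eyeballing the two printed tensors, the divergence can be quantified with a small helper. This is a minimal sketch (the `max_abs_diff` name is my own, not part of the repro script):

   ```python
   import numpy as np

   def max_abs_diff(a, b):
       """Return the largest element-wise deviation between two arrays."""
       a = np.asarray(a, dtype=np.float64)
       b = np.asarray(b, dtype=np.float64)
       assert a.shape == b.shape, f"shape mismatch: {a.shape} vs {b.shape}"
       return float(np.max(np.abs(a - b)))

   # With the script above, one would compare the two backends like so
   # (out1/out3 are the PyTorch and TVM outputs from the repro):
   #   print(max_abs_diff(out1.detach().numpy(), out3.numpy()))
   ```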
   
   The TVM output seems to be random values at every run; however, it is OK when I set `aligned` of RoIAlign to False.
   
   ### Triage
   
   I'm not sure whether the current version of TVM fully supports PyTorch 1.10.0, since I noticed the documentation says TVM supports PyTorch 1.7 and 1.4.
   
   * needs-triage
   