crawlingcub opened a new issue, #12629:
URL: https://github.com/apache/tvm/issues/12629
The outputs of this ONNX model differ when run on CPU vs. CUDA. The CPU results
agree with what I get from the PyTorch model. The model is a variant of
MobileNet-V3; I added some noise to a few Conv2d layers compared to the
original model.
Any idea what could be wrong?
```python
import torch
import tvm
from tvm import relay
from tvm.contrib.download import download_testdata
from tvm.contrib import graph_executor
import os
import pickle as pkl
import numpy as np
import sys
import onnx
DEVICE='cuda'
model=onnx.load(os.path.join(sys.argv[1], "model.onnx"))
data=torch.load(os.path.join(sys.argv[1], "data.pt"))
pt_inp=torch.unsqueeze(data[0][0], 0)
input_name = "input0"
target = tvm.target.cuda()
dev = tvm.cuda(0)  # tvm.gpu(0) is a deprecated alias for tvm.cuda(0)
#scripted_model = torch.jit.trace(model, pt_inp).eval()
input_data = pt_inp.numpy()
shape_list = {input_name: input_data.shape}
mod, params = relay.frontend.from_onnx(model, shape_list)
# cuda
with tvm.transform.PassContext(opt_level=0):
    lib = relay.build(mod, target=target, params=params)
m = graph_executor.GraphModule(lib["default"](dev))
m.set_input(input_name, tvm.nd.array(input_data))
m.run()
tvm_cuda_out = m.get_output(0).asnumpy()
# cpu
target = tvm.target.Target("llvm", host="llvm")
dev = tvm.device(str(target))
with tvm.transform.PassContext(opt_level=0):
    lib = relay.build(mod, target=target, params=params)
m = graph_executor.GraphModule(lib["default"](dev))
m.set_input(input_name, tvm.nd.array(input_data))
m.run()
tvm_cpu_out = m.get_output(0).asnumpy()
print(np.max(np.abs(tvm_cpu_out-tvm_cuda_out)))
print(np.argmax(tvm_cpu_out, axis=-1), np.argmax(tvm_cuda_out, axis=-1))
```
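A note on the comparison at the end of the script: if either tensor contains a NaN, `np.max(np.abs(a - b))` is itself `nan`, which hides how large the divergence is on the finite entries. A sketch of a more informative comparison (the helper name `report_diff` and the tolerances are mine, not from TVM):

```python
import numpy as np

def report_diff(a, b, rtol=1e-4, atol=1e-5):
    """Compare two output arrays, separating NaN presence from numeric drift."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    nan_mask = np.isnan(a) | np.isnan(b)
    if nan_mask.any():
        # Report NaNs separately so they don't poison the max-diff below
        print(f"NaNs: {np.isnan(a).sum()} in first output, {np.isnan(b).sum()} in second")
    finite = ~nan_mask
    if finite.any():
        diff = np.abs(a[finite] - b[finite])
        print("max abs diff over finite entries:", diff.max())
        print("within tolerance:", np.allclose(a[finite], b[finite], rtol=rtol, atol=atol))
```

With this, a `nan` in one backend's output shows up as an explicit NaN count instead of silently turning the max-diff into `nan`.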
The model and data can be found
[here](https://drive.google.com/drive/folders/1UiLAZgUlTybzBcbkSBnMoGajVCb2C3Tx?usp=sharing).
### Expected behavior
Results should be the same.
### Actual behavior
```
nan
[0] [8]
```
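For what it's worth, the `nan` max-diff suggests at least one backend's output contains NaNs, and `np.argmax` treats NaN as the maximum, so the `[8]` index may simply be the position of the first NaN rather than a real prediction. A quick NumPy demonstration:

```python
import numpy as np

logits = np.array([0.1, 0.9, 0.2, 0.3])
print(np.argmax(logits))      # 1: the true maximum

corrupted = logits.copy()
corrupted[3] = np.nan
print(np.argmax(corrupted))   # 3: argmax returns the first NaN position
```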
### Environment
```
torch==1.12.1
torchvision==0.13.1
python 3.8.13
Ubuntu 18.04
onnx==1.12.0
cuda 11.0
```