crawlingcub opened a new issue, #12630:
URL: https://github.com/apache/tvm/issues/12630
I am getting incorrect results (lower accuracy) at optimization level 4 with this densenet-121 model. With optimization level 0 the results agree with PyTorch, and they remain correct up through optimization level 2, so perhaps a pass enabled only at the higher levels is buggy?
Repro code:
```python
import os
import sys

import numpy as np
import onnx
import torch
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load the ONNX model and a sample input saved with torch.save
model = onnx.load(os.path.join(sys.argv[1], "model.onnx"))
data = torch.load(os.path.join(sys.argv[1], "data.pt"))
pt_inp = torch.unsqueeze(data[0][0], 0)

input_name = "input0"
target = tvm.target.cuda()
dev = tvm.cuda(0)

input_data = pt_inp.numpy()
shape_dict = {input_name: input_data.shape}
mod, params = relay.frontend.from_onnx(model, shape_dict)

# cuda, opt_level=0
with tvm.transform.PassContext(opt_level=0):
    lib = relay.build(mod, target=target, params=params)
m = graph_executor.GraphModule(lib["default"](dev))
m.set_input(input_name, tvm.nd.array(input_data))
m.run()
tvm_opt0_out = m.get_output(0).numpy()

# cuda, opt_level=4
with tvm.transform.PassContext(opt_level=4):
    lib = relay.build(mod, target=target, params=params)
m = graph_executor.GraphModule(lib["default"](dev))
m.set_input(input_name, tvm.nd.array(input_data))
m.run()
tvm_opt4_out = m.get_output(0).numpy()

print(np.max(np.abs(tvm_opt4_out - tvm_opt0_out)))
print(np.argmax(tvm_opt4_out, axis=-1), np.argmax(tvm_opt0_out, axis=-1))
```
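For a crisper pass/fail signal than eyeballing the max difference, the two outputs can be compared with `np.testing.assert_allclose` plus a top-1 check. A minimal sketch — the arrays here are stand-ins for `tvm_opt0_out` and `tvm_opt4_out` from the script above, and the tolerances are typical fp32 values, not ones mandated by TVM:

```python
import numpy as np

# Stand-in logit vectors; in the repro these would be tvm_opt0_out / tvm_opt4_out.
ref = np.array([[0.1, 2.5, 0.3]], dtype=np.float32)
out = ref + np.float32(1e-4)  # small numeric drift, as expected from reordered math

# Elementwise comparison with explicit tolerances (typical fp32 choices).
np.testing.assert_allclose(out, ref, rtol=1e-4, atol=1e-4)

# The top-1 class should be stable even when absolute values drift slightly.
assert np.argmax(out, axis=-1).tolist() == np.argmax(ref, axis=-1).tolist()
```

A max abs difference of ~3.1 on logits, as reported below, fails any reasonable tolerance and also flips the argmax, so this is a real miscompilation rather than float noise.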
The model and data are available
[here](https://drive.google.com/drive/folders/1TW4mU0DwWe2UCiQKFLhm706aLfVjJ3RJ?usp=sharing).
### Expected behavior
Results at all optimization levels should match (within floating-point tolerance).
### Actual behavior
Output:
```
3.0984282
[766] [1]
```
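A ~3.1 max difference plus a flipped top-1 class (766 vs. 1) suggests a pass that only runs at higher opt levels. One known candidate is `FastMath`, which is registered at opt_level 4 and substitutes approximate math intrinsics (e.g. for exp/tanh). A sketch for bisecting via `PassContext`'s `disabled_pass` option — `mod`, `params`, and `target` are assumed to come from the repro script, and the import is guarded so the snippet is illustrative without TVM installed:

```python
# Passes worth disabling first when opt_level=4 changes accuracy;
# FastMath is the usual suspect (approximate math intrinsics).
SUSPECT_PASSES = ["FastMath"]

try:
    import tvm
    from tvm import relay

    def build_without(mod, params, target, disabled):
        # disabled_pass takes a list of pass names to skip at this opt level.
        with tvm.transform.PassContext(opt_level=4, disabled_pass=list(disabled)):
            return relay.build(mod, target=target, params=params)

    # Usage (with mod/params/target from the repro script):
    # lib = build_without(mod, params, target, SUSPECT_PASSES)
except ImportError:
    pass  # TVM not installed; sketch only.
```

If the opt_level=4 output matches opt_level=0 with `FastMath` disabled, the approximation (or a bug in it) is the cause; otherwise the same loop can be repeated over other level-4 passes.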
### Environment