jwfromm commented on a change in pull request #7300:
URL: https://github.com/apache/tvm/pull/7300#discussion_r559058323
##########
File path: tests/python/frontend/onnx/test_forward.py
##########
@@ -2138,8 +1928,8 @@ def check_torch_conversion(model, input_size):
# Set verbose=True for more output
torch.onnx.export(model(), dummy_input, file_name, export_params=True,
verbose=False)
onnx_model = onnx.load(file_name)
- input_data = np.random.uniform(size=input_size).astype("int32")
- verify_with_ort_with_inputs(onnx_model, [input_data])
+ input_data = np.random.uniform(size=input_size).astype("float32")
+ verify_with_ort_with_inputs(onnx_model, [input_data], apply_softmax=True)
Review comment:
This is a fun one that I wanted to point out. Previously we cast the inputs to
`int32`, but because they were generated with `np.random.uniform` (which draws
from the half-open interval [0, 1)), every sample was truncated to 0. Using
non-zero inputs caused some minor output mismatch due to numerical instability,
but applying softmax (which torchvision models don't use by default) brings the
numerical difference well below our test threshold.
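
A minimal sketch of the original bug (names here are illustrative, not from the test file): `np.random.uniform` defaults to `[0, 1)`, so casting its output to `int32` truncates every sample to zero, while casting to `float32` preserves the values.

```python
import numpy as np

# np.random.uniform with default low=0.0, high=1.0 draws from [0, 1),
# so integer truncation sends every sample to 0.
samples = np.random.uniform(size=(2, 3))

as_int = samples.astype("int32")
print(np.all(as_int == 0))  # True: the old test exercised all-zero inputs

# Keeping float32 (as in the fixed test) preserves non-trivial inputs.
as_float = samples.astype("float32")
print(np.any(as_float > 0))  # True
```

This is why the fix both switches the dtype to `float32` and enables `apply_softmax`: real non-zero inputs expose small numerical differences that softmax normalizes away.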
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]