KJlaccHoeUM9l commented on code in PR #13184:
URL: https://github.com/apache/tvm/pull/13184#discussion_r1004345468
##########
tests/python/frontend/onnx/test_forward.py:
##########
@@ -5656,7 +5668,7 @@ def test_biasgelu(target, dev, data_type, op_name):
"""test_biasgelu"""
dtype = np.dtype(data_type)
tensor_type = mapping.NP_TYPE_TO_TENSOR_TYPE[dtype]
- absolute_tolerance = 1e-3 if data_type == "float16" else 1e-5
+ absolute_tolerance = 1e-2 if data_type == "float16" else 1e-5
Review Comment:
Why did you increase the tolerance? Is it related to `FastGelu`?
When I added `FastGelu`
([PR#13119](https://github.com/apache/tvm/pull/13119)), I also extended this
test to the `float16` data type. After expanding the test, I found that the
previously added `BiasGelu` did not pass the accuracy check with the default
`atol=1e-5`, `rtol=1e-5` parameters, so to keep the test working I increased
the tolerance to `atol=1e-3`. At that point, however, the test did not require
loosening the tolerance all the way to `atol=1e-2`.
Also, why did you choose this particular configuration? Looking at
[documentation](https://numpy.org/doc/stable/reference/generated/numpy.allclose.html),
the following expression applies to each element of the two tensors (`a`, `b`):
```python
abs(a - b) <= (atol + rtol * abs(b))
```
This means we could use other configurations instead, for example
`atol=1e-4`, `rtol=1e-3`.
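To illustrate the trade-off, here is a small sketch (the input value `0.841192`, roughly `gelu(1.0)`, is just an illustrative assumption, not a value from the test) showing how a plain `float32`→`float16`→`float32` round-trip error compares against these tolerance configurations:

```python
import numpy as np

# Hypothetical reference value (~gelu(1.0)) and the same value after a float16 pass
reference = np.array([0.841192], dtype=np.float32)
computed = reference.astype(np.float16).astype(np.float32)

# np.allclose checks element-wise: abs(a - b) <= atol + rtol * abs(b)
assert np.allclose(computed, reference, atol=1e-2, rtol=1e-5)      # loose atol: passes
assert np.allclose(computed, reference, atol=1e-4, rtol=1e-3)      # mixed config: also passes
assert not np.allclose(computed, reference, atol=1e-5, rtol=1e-5)  # strict tolerances: fails
```

Since `float16` has roughly three decimal digits of precision, even a single rounding step can exceed `atol=1e-5`, while a modest `rtol` absorbs it for values of this magnitude.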
Should we also increase the tolerance at line 5636 (`test_gelu`)?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]