RuRo commented on pull request #18054:
URL: https://github.com/apache/incubator-mxnet/pull/18054#issuecomment-638095415


   @szha I am not too familiar with `pytest`/`unittest`. I've tried making a quick fix by adding an explicit `exit` call with an exit status based on whether any tests failed, but this doesn't work.
   
   The problem is that the `tests/python/unittest/onnx/backend_test.py` file gets scanned by pytest while some other tests are running. This means that the `exit` call is triggered during the test collection phase, which `pytest` treats as an error no matter what the exit status is (this behavior is probably correct). One possible way around this is sketched after the log below.
   
   <details>
   
   ```python
   [2020-06-02T13:52:46.852Z] ==================================== ERRORS ====================================
   [2020-06-02T13:52:46.852Z] _________ ERROR collecting tests/python/unittest/onnx/backend_test.py __________
   [2020-06-02T13:52:46.852Z] tests/python/unittest/onnx/backend_test.py:96: in <module>
   [2020-06-02T13:52:46.852Z]     exit(any(res.failures for res in results))
   [2020-06-02T13:52:46.852Z] /usr/lib/python3.6/_sitebuiltins.py:26: in __call__
   [2020-06-02T13:52:46.852Z]     raise SystemExit(code)
   [2020-06-02T13:52:46.852Z] E   SystemExit: False
   ```
   
   </details>
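   
   For what it's worth, one possible way to avoid the collection-time `SystemExit` would be to guard the exit-status logic behind a `__main__` check, so that pytest can import the module during collection without the interpreter exiting. This is purely a hypothetical sketch, not the actual contents of `backend_test.py`:
   
   ```python
   # Hypothetical sketch (not the current backend_test.py): only call sys.exit()
   # when the file is executed directly, so that pytest importing this module
   # during collection does not raise SystemExit.
   import sys
   import unittest
   
   
   def run_suites(suites):
       """Run each unittest suite and return the list of TestResult objects."""
       runner = unittest.TextTestRunner(verbosity=2)
       return [runner.run(suite) for suite in suites]
   
   
   if __name__ == "__main__":
       # Placeholder: in backend_test.py these would be the generated ONNX
       # backend test cases; here we simply load whatever lives in this module.
       suites = [unittest.defaultTestLoader.loadTestsFromModule(sys.modules[__name__])]
       results = run_suites(suites)
       # Propagate a non-zero exit status if any suite had failures or errors.
       sys.exit(1 if any(res.failures or res.errors for res in results) else 0)
   ```
   
   Alternatively, pytest could be told to skip this file during collection (e.g. via `collect_ignore` in a `conftest.py`), but I'm not sure which approach fits the current CI setup better.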
   
   There are also some failing/flaky ONNX tests on `centos-cpu` and 
`windows-cpu/gpu`, which are **not** failing on `unix-cpu`.
   
   The `windows` tests are apparently (?) running in a dirty environment: some tests fail because they need to download some ONNX model files and the target directories already exist (a possible CI-side workaround is sketched after the log below).
   
   <details>
   
   ```python
   [2020-06-02T14:22:01.951Z] ======================================================================
   [2020-06-02T14:22:01.951Z] ERROR: test_bvlc_alexnet_cpu (backend_test.OnnxBackendRealModelTest)
   [2020-06-02T14:22:01.951Z] ----------------------------------------------------------------------
   [2020-06-02T14:22:01.951Z] Traceback (most recent call last):
   [2020-06-02T14:22:01.951Z]   File "C:\Python37\lib\site-packages\onnx\backend\test\runner\__init__.py", line 248, in device_test_func
   [2020-06-02T14:22:01.951Z]     return test_func(*args, device=device, **kwargs)
   [2020-06-02T14:22:01.951Z]   File "C:\Python37\lib\site-packages\onnx\backend\test\runner\__init__.py", line 268, in run
   [2020-06-02T14:22:01.951Z]     model_dir = self.prepare_model_data(model_test)
   [2020-06-02T14:22:01.951Z]   File "C:\Python37\lib\site-packages\onnx\backend\test\runner\__init__.py", line 218, in prepare_model_data
   [2020-06-02T14:22:01.951Z]     os.makedirs(model_dir)
   [2020-06-02T14:22:01.951Z]   File "C:\Python37\lib\os.py", line 221, in makedirs
   [2020-06-02T14:22:01.951Z]     mkdir(name, mode)
   [2020-06-02T14:22:01.951Z] FileExistsError: [WinError 183] Cannot create a file when that file already exists: 'C:\\Windows\\system32\\config\\systemprofile\\.onnx\\models\\bvlc_alexnet'
   [2020-06-02T14:22:01.951Z] 
   ```
   
   </details>
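   
   Since the `FileExistsError` comes from `prepare_model_data` inside the `onnx` package itself, one possible workaround would be a CI-side cleanup step that removes the stale model cache before the tests run. Again, just a hypothetical sketch; the `~/.onnx/models` location is inferred from the traceback above:
   
   ```python
   # Hypothetical CI-side cleanup step (not part of this PR): delete the cached
   # ONNX model directories left over from a previous run so that os.makedirs()
   # inside prepare_model_data() succeeds again.
   import os
   import shutil
   
   
   def clean_onnx_model_cache():
       """Remove the ~/.onnx/models cache directory if it exists."""
       cache_dir = os.path.join(os.path.expanduser("~"), ".onnx", "models")
       if os.path.isdir(cache_dir):
           shutil.rmtree(cache_dir, ignore_errors=True)
   
   
   if __name__ == "__main__":
       clean_onnx_model_cache()
   ```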
   
   The `centos-cpu` tests are having some issues with precision and maybe with partial model downloads, but I am really not sure.
   
   <details>
   
   Partial downloads (maybe?).
   ```python
   [2020-06-01T20:41:37.094Z] ======================================================================
   [2020-06-01T20:41:37.094Z] ERROR: test_resnet50_cpu (backend_test.OnnxBackendRealModelTest)
   [2020-06-01T20:41:37.094Z] ----------------------------------------------------------------------
   [2020-06-01T20:41:37.094Z] Traceback (most recent call last):
   [2020-06-01T20:41:37.094Z]   File "/opt/rh/rh-python36/root/usr/lib64/python3.6/site-packages/onnx/backend/test/runner/__init__.py", line 248, in device_test_func
   [2020-06-01T20:41:37.094Z]     return test_func(*args, device=device, **kwargs)
   [2020-06-01T20:41:37.094Z]   File "/opt/rh/rh-python36/root/usr/lib64/python3.6/site-packages/onnx/backend/test/runner/__init__.py", line 272, in run
   [2020-06-01T20:41:37.094Z]     model = onnx.load(model_pb_path)
   [2020-06-01T20:41:37.094Z]   File "/opt/rh/rh-python36/root/usr/lib64/python3.6/site-packages/onnx/__init__.py", line 114, in load_model
   [2020-06-01T20:41:37.094Z]     model = load_model_from_string(s, format=format)
   [2020-06-01T20:41:37.094Z]   File "/opt/rh/rh-python36/root/usr/lib64/python3.6/site-packages/onnx/__init__.py", line 151, in load_model_from_string
   [2020-06-01T20:41:37.094Z]     return _deserialize(s, ModelProto())
   [2020-06-01T20:41:37.094Z]   File "/opt/rh/rh-python36/root/usr/lib64/python3.6/site-packages/onnx/__init__.py", line 94, in _deserialize
   [2020-06-01T20:41:37.094Z]     decoded = cast(Optional[int], proto.ParseFromString(s))
   [2020-06-01T20:41:37.094Z] google.protobuf.message.DecodeError: Error parsing message
   [2020-06-01T20:41:37.094Z] 
   [2020-06-01T20:41:37.094Z] ======================================================================
   ```
   
   Precision problems. It's possible that the root cause of this failure is the same as the `google.protobuf.message.DecodeError` issue above, since they seem to occur together (a possible mitigation is sketched after these logs).
   ```python
   [2020-06-01T20:41:37.094Z] ======================================================================
   [2020-06-01T20:41:37.094Z] FAIL: test_vgg19_cpu (backend_test.OnnxBackendRealModelTest)
   [2020-06-01T20:41:37.094Z] ----------------------------------------------------------------------
   [2020-06-01T20:41:37.094Z] Traceback (most recent call last):
   [2020-06-01T20:41:37.094Z]   File "/opt/rh/rh-python36/root/usr/lib64/python3.6/site-packages/onnx/backend/test/runner/__init__.py", line 248, in device_test_func
   [2020-06-01T20:41:37.094Z]     return test_func(*args, device=device, **kwargs)
   [2020-06-01T20:41:37.094Z]   File "/opt/rh/rh-python36/root/usr/lib64/python3.6/site-packages/onnx/backend/test/runner/__init__.py", line 290, in run
   [2020-06-01T20:41:37.094Z]     atol=model_test.atol)
   [2020-06-01T20:41:37.094Z]   File "/opt/rh/rh-python36/root/usr/lib64/python3.6/site-packages/onnx/backend/test/runner/__init__.py", line 178, in assert_similar_outputs
   [2020-06-01T20:41:37.094Z]     atol=atol)
   [2020-06-01T20:41:37.094Z]   File "/opt/rh/rh-python36/root/usr/lib64/python3.6/site-packages/numpy/testing/_private/utils.py", line 1533, in assert_allclose
   [2020-06-01T20:41:37.094Z]     verbose=verbose, header=header, equal_nan=equal_nan)
   [2020-06-01T20:41:37.094Z]   File "/opt/rh/rh-python36/root/usr/lib64/python3.6/site-packages/numpy/testing/_private/utils.py", line 846, in assert_array_compare
   [2020-06-01T20:41:37.094Z]     raise AssertionError(msg)
   [2020-06-01T20:41:37.094Z] AssertionError: 
   [2020-06-01T20:41:37.094Z] Not equal to tolerance rtol=0.001, atol=1e-07
   [2020-06-01T20:41:37.094Z] 
   [2020-06-01T20:41:37.094Z] Mismatched elements: 997 / 1000 (99.7%)
   [2020-06-01T20:41:37.094Z] Max absolute difference: 0.0330687
   [2020-06-01T20:41:37.094Z] Max relative difference: 2.9896083
   [2020-06-01T20:41:37.094Z]  x: array([[2.939573e-04, 1.110041e-03, 4.796557e-04, 1.204622e-03,
   [2020-06-01T20:41:37.094Z]         2.784455e-03, 4.492797e-03, 1.323598e-02, 1.580402e-04,
   [2020-06-01T20:41:37.094Z]         1.594951e-04, 3.026387e-04, 3.349203e-04, 2.087024e-04,...
   [2020-06-01T20:41:37.094Z]  y: array([[7.276282e-04, 2.052591e-03, 7.338484e-04, 1.470099e-03,
   [2020-06-01T20:41:37.094Z]         3.712713e-03, 3.797364e-03, 1.069483e-02, 3.067438e-04,
   [2020-06-01T20:41:37.094Z]         3.405935e-04, 4.645915e-04, 3.827364e-04, 3.712076e-04,...
   [2020-06-01T20:41:37.094Z] 
   [2020-06-01T20:41:37.094Z] ----------------------------------------------------------------------
   ```
   
   </details>
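   
   If the `DecodeError` really is caused by a truncated/partial download, one possible mitigation (again, a hypothetical sketch, not something this PR implements, and assuming the cached file is named `model.onnx`) would be to validate each cached model and drop it when it fails to parse, so that the next run re-downloads it:
   
   ```python
   # Hypothetical sketch: detect a corrupted or partially downloaded cached
   # model and remove its directory so the test runner re-downloads it later.
   import os
   import shutil
   
   import onnx
   from google.protobuf.message import DecodeError
   
   
   def drop_corrupt_model(model_dir):
       """Return True if the cached model parses; otherwise delete the cache dir."""
       model_pb_path = os.path.join(model_dir, "model.onnx")  # assumed file name
       try:
           onnx.load(model_pb_path)
           return True
       except (DecodeError, FileNotFoundError):
           shutil.rmtree(model_dir, ignore_errors=True)
           return False
   ```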
   
   The current tests (ONNX or otherwise) are quite a mess, IMO. I don't think it's feasible to fix everything in one PR (and I don't really have much time to spend on fixing the other issues). This PR was mainly focused on fixing the individual node tests broken by the ONNX version bump to 1.5.0, and I think the other issues should be resolved in a separate PR.
   
   ## TL;DR
   I propose that we merge this PR as it currently is and open separate issues for fixing the `unittest`/`pytest` integration and the `windows`/`centos` ONNX failures.

