Zha0q1 commented on a change in pull request #20226:
URL: https://github.com/apache/incubator-mxnet/pull/20226#discussion_r622593037



##########
File path: tests/python-pytest/onnx/test_operators.py
##########
@@ -1260,8 +1262,11 @@ def test_onnx_export_RNN(tmp_path, mode, dtype, state_size, input_size, num_laye
     if mode == 'lstm':
         cell = mx.nd.random.uniform(-1, 1, [num_layers, batch_size, state_size], dtype=dtype)
         op_export_test('rnn', M, [x, param, state, cell], tmp_path)
+    elif mode == 'rnn_relu':
+        # set a large atol since relu can output big numbers
+        op_export_test('rnn', M, [x, param, state], tmp_path, atol=1e20)
     else:
-        op_export_test('rnn', M, [x, param, state], tmp_path)
+        op_export_test('rnn', M, [x, param, state], tmp_path, atol=1e-2)

Review comment:
      Do we know how large the difference was?
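For context on why this question matters: `op_export_test` ultimately compares outputs with a NumPy-style closeness check, where a value passes if `|actual - desired| <= atol + rtol * |desired|`. An `atol` of 1e20 therefore disables the absolute check for any realistic finite outputs. A minimal sketch (the arrays here are hypothetical stand-ins for MXNet vs. ONNX Runtime RNN outputs, not values from the PR):

```python
import numpy as np

# Hypothetical values standing in for MXNet vs. ONNX Runtime RNN outputs.
a = np.array([1.0, 1e6])
b = np.array([2.0, -1e6])

# assert_allclose passes when |actual - desired| <= atol + rtol * |desired|.
# With atol=1e20 the check is effectively a no-op for any finite outputs.
np.testing.assert_allclose(a, b, atol=1e20)  # passes despite huge differences

# A relative tolerance scales with the magnitude of the expected values,
# so it stays meaningful even when relu lets activations grow large.
try:
    np.testing.assert_allclose(a, b, rtol=1e-2)
except AssertionError:
    print("rtol catches the mismatch")
```

This is why knowing the actual magnitude of the difference matters: a tight `rtol` may be a better fit than an effectively unbounded `atol`.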

##########
File path: tests/python-pytest/onnx/test_operators.py
##########
@@ -1234,21 +1235,22 @@ def test_onnx_export_sequence_reverse(tmp_path, dtype, params):
 
 
 # onnx LSTM from opset 11 does not support float64
-@pytest.mark.parametrize('mode', ['lstm', 'gru'])
+@pytest.mark.parametrize('mode', ['lstm', 'gru', 'rnn_tanh', 'rnn_relu'])
 @pytest.mark.parametrize('dtype', ['float32'])
-@pytest.mark.parametrize('state_size', [16, 32])
+@pytest.mark.parametrize('state_size', [16, 32, 64])
 @pytest.mark.parametrize('input_size', [16, 32, 64])
 @pytest.mark.parametrize('num_layers', [1, 2])
 @pytest.mark.parametrize('batch_size', [1, 2, 4])
-@pytest.mark.parametrize('seq_length', [16, 32])

Review comment:
      Why was 32 removed?
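One likely motivation is test-suite runtime: `pytest.mark.parametrize` takes the cross product of all parameter lists, so every extra value multiplies the case count. A quick sketch, assuming the lists shown in the quoted diff (the actual PR may differ):

```python
# Rough size of the parametrize grid in the quoted test (assumed lists).
modes = ['lstm', 'gru', 'rnn_tanh', 'rnn_relu']
dtypes = ['float32']
state_sizes = [16, 32, 64]
input_sizes = [16, 32, 64]
num_layers = [1, 2]
batch_sizes = [1, 2, 4]
seq_lengths = [16]  # vs. [16, 32] before the change

# pytest generates one test case per element of the cross product.
cases = (len(modes) * len(dtypes) * len(state_sizes) * len(input_sizes)
         * len(num_layers) * len(batch_sizes) * len(seq_lengths))
print(cases)  # 216 cases; keeping seq_length 32 would double this
```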




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
