anirudh2290 commented on a change in pull request #10078: [MXNET-92] Support float16 in L2Normalization operator
URL: https://github.com/apache/incubator-mxnet/pull/10078#discussion_r174246995
 
 

 ##########
 File path: tests/python/unittest/test_operator.py
 ##########
 @@ -2396,21 +2396,22 @@ def check_l2_normalization(in_shape, mode, norm_eps=1e-10):
     exe = out.simple_bind(ctx=ctx, data=in_data.shape)
     output = exe.forward(is_train=True, data=in_data)
     # compare numpy + mxnet
-    assert_almost_equal(exe.outputs[0].asnumpy(), np_out, rtol=1e-5)
+    assert_almost_equal(exe.outputs[0].asnumpy(), np_out, rtol=1e-2 if dtype == 'float16' else 1e-5)
 
 Review comment:
   Can you also pass `atol` here? The default atol is 1e-20, which may make the test flaky if the compared numbers are small.
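
   A minimal sketch of the point, using `numpy.testing.assert_allclose` as a stand-in for MXNet's `assert_almost_equal` (the `atol` value of 1e-4 below is illustrative, not a prescribed choice):

   ```python
   import numpy as np

   def assert_close(actual, expected, dtype):
       # Hypothetical helper: looser tolerances for float16.
       # Note `==` rather than `is` for the string comparison.
       rtol = 1e-2 if dtype == 'float16' else 1e-5
       atol = 1e-4 if dtype == 'float16' else 1e-20  # atol is illustrative
       np.testing.assert_allclose(actual, expected, rtol=rtol, atol=atol)

   # Tiny float16-scale values: relative tolerance alone is too strict here.
   a = np.array([1e-4, 2e-4], dtype=np.float16).astype(np.float64)
   b = a + 5e-5  # within atol=1e-4, but far outside rtol=1e-2 of a
   assert_close(b, a, 'float16')
   ```

   With `atol` effectively zero, the same comparison fails, which is exactly the flakiness the comment warns about for small outputs.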

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org