leezu commented on issue #11938: l2_normalization for fp16 got 0.0 when data is very large
URL: https://github.com/apache/incubator-mxnet/issues/11938#issuecomment-409411171

@TccccD the reason is that `mx.symbol.norm` uses a numerically stable algorithm to compute the 2-norm (https://github.com/apache/incubator-mxnet/pull/11573), whereas `L2Normalization` is prone to underflow or overflow. Below is a shorter example of the problem:

```
In [15]: a = mx.nd.random.uniform(-5, 5, (512,100000), ctx=mx.gpu(0), dtype='float16')

In [16]: mx.nd.L2Normalization(a)
Out[16]:
[[ 0. -0.  0. ...,  0. -0.  0.]
 [-0.  0. -0. ...,  0.  0.  0.]
 [ 0. -0. -0. ...,  0.  0. -0.]
 ...,
 [-0.  0. -0. ...,  0.  0. -0.]
 [ 0. -0.  0. ...,  0. -0. -0.]
 [-0. -0.  0. ..., -0. -0. -0.]]
<NDArray 512x100000 @gpu(0)>

In [17]: a / mx.nd.norm(a, axis=1, keepdims=True)
Out[17]:
[[  2.19726562e-03  -3.61824036e-03   1.11007690e-03 ...,   3.14950943e-04  -4.92572784e-04   3.10516357e-03]
 [ -4.07028198e-03   4.61578369e-03  -4.51278687e-03 ...,   2.33650208e-03   5.40542603e-03   3.78608704e-03]
 [  5.27572632e-03  -1.81293488e-03  -1.17683411e-03 ...,   1.86920166e-03   4.87518311e-03  -3.04412842e-03]
 ...,
 [ -4.39834595e-03   3.74794006e-04  -4.21905518e-03 ...,   1.11007690e-03   3.81278992e-03  -3.80134583e-03]
 [  7.90953636e-05  -5.31387329e-03   4.95910645e-03 ...,   3.52859497e-03  -2.10952759e-03  -4.76837158e-04]
 [ -4.53186035e-03  -3.03459167e-03   2.37083435e-03 ...,  -3.93295288e-03  -4.21524048e-03  -5.36727905e-03]]
<NDArray 512x100000 @gpu(0)>
```
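To make the overflow concrete: for a row of 100000 values drawn from uniform(-5, 5), the raw sum of squares is around 8e5, which already exceeds the fp16 maximum of 65504, so a naive sum-of-squares norm becomes `inf` and the normalized output collapses to 0. Below is a minimal NumPy sketch of the standard rescaling trick that avoids this (divide by the largest magnitude before squaring); it only illustrates the general technique, and the function names `naive_l2_norm` / `scaled_l2_norm` are hypothetical, not necessarily the exact implementation in PR #11573.

```
import numpy as np

def naive_l2_norm(x):
    # Sum of squares accumulated in fp16: for ~1e5 values in [-5, 5]
    # the running sum exceeds 65504 and overflows to inf.
    return np.sqrt((x * x).sum())

def scaled_l2_norm(x):
    # Rescale by the largest magnitude first so every squared term is <= 1,
    # then multiply the scale back in at the end (illustrative sketch only).
    scale = np.abs(x).max()
    if scale == 0:
        return x.dtype.type(0)
    y = x / scale
    return scale * np.sqrt((y * y).sum())

x = np.random.uniform(-5, 5, size=100000).astype(np.float16)
print(naive_l2_norm(x))                     # inf -> x / norm is all zeros
print(scaled_l2_norm(x))                    # finite, close to the reference
print(np.linalg.norm(x.astype(np.float64))) # float64 reference value
```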
