kshitij12345 commented on issue #14992: [MXNET-978] Support higher order gradient for `log`. URL: https://github.com/apache/incubator-mxnet/pull/14992#issuecomment-497807950

@apeforest @larroy https://github.com/kshitij12345/incubator-mxnet/blob/7b343d1fcde73b61322985580080333d9eee9e82/src/operator/tensor/elemwise_unary_op_basic.cc#L1077-L1079

We multiply `gx * gx`, where `gx = ograd * f'(x)`, getting `ograd^2 * f'(x)^2`; however, we want only `ograd * f'(x)^2`, which can be achieved in a fashion similar to the implementation of `_backward_log10/2`. I have validated the expected results:

```python
from mxnet import nd, autograd
import numpy

grad_grad_op = lambda x: -1 / x**2

x = nd.random.normal(0, 1, (3, 3))
x.attach_grad()
with autograd.record():
    y = nd.log(x)
    y_grad = autograd.grad(y, x, head_grads=nd.ones_like(y) * 0.5,
                           create_graph=True, retain_graph=True)[0]
y_grad.backward(nd.ones_like(y_grad) * 0.6)
numpy.testing.assert_allclose(x.grad.asnumpy(),
                              (grad_grad_op(x) * 0.5 * 0.6).asnumpy(),
                              rtol=1e-7, atol=1e-7)
```

This assertion fails with the current code. **Should I make a new PR, or add commits in this PR itself?**

I have confirmed the behaviour with PyTorch as well:

```python
import torch
import numpy

grad_grad_op = lambda x: -1 / x**2

x = torch.randn(2, 3)
x.requires_grad = True
y = torch.log(x)
y_grad = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y) * 0.5,
                             create_graph=True, retain_graph=True)[0]
y_grad.backward(torch.ones_like(y_grad) * 0.6)
numpy.testing.assert_allclose(x.grad.detach().numpy(),
                              (grad_grad_op(x) * 0.5 * 0.6).detach().numpy(),
                              rtol=1e-7, atol=1e-7)
```
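To make the discrepancy concrete, here is a minimal plain-NumPy sketch (not the actual MXNet kernel; the variable names `ograd`, `ograd2`, `gx` are illustrative) of the buggy `gx * gx` term versus the intended `ograd * f''(x)` term for `f(x) = log(x)`, where `f'(x) = 1/x` and `f''(x) = -1/x**2 = -f'(x)**2`:

```python
import numpy as np

# f(x) = log(x): f'(x) = 1/x, f''(x) = -1/x**2
x = np.array([0.7, 1.3, 2.1])
ograd = np.full_like(x, 0.5)   # head grad of the first backward pass
ograd2 = np.full_like(x, 0.6)  # head grad of the second backward pass

gx = ograd * (1.0 / x)         # first-order gradient: ograd * f'(x)

# Buggy second-order term: multiplying gx by gx squares ograd as well,
# yielding ograd2 * ograd**2 * f'(x)**2 (with the sign of f'').
buggy = ograd2 * -(gx * gx)

# Intended: d(gx)/dx = ograd * f''(x), so the second backward gives
# ograd2 * ograd * (-1/x**2) -- ograd appears only to the first power.
correct = ograd2 * ograd * (-1.0 / x**2)
```

With `ograd = 0.5`, the buggy version is off by exactly that extra factor of 0.5, which is why the `assert_allclose` checks above only pass once the implementation stops squaring the incoming head gradient.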
