arcadiaphy commented on issue #15007: Add matrix determinant operator in linalg
URL: https://github.com/apache/incubator-mxnet/pull/15007#issuecomment-494907371
 
 
   Regarding this point:
   
   > The grad of the determinant is derived from Jacobi's formula, which has a 
pretty friendly closed-form solution for numerical computing when the input 
matrix A is invertible. The non-invertible case is not easy to implement since 
it involves the adjugate matrix. In tensorflow, this case is ignored, while 
pytorch uses SVD to compute the grad. In this PR it's left for future work; for 
now, as a temporary measure, no grad is passed backwards when det = 0. My 
inclination is to re-use the LU decomposition instead of SVD for the 
non-invertible case, since it's already calculated.
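   For the invertible case, Jacobi's formula gives d det(A)/dA = det(A) * inv(A).T. A quick numpy check against central finite differences (illustration only, not MXNet code; the test matrix is arbitrary):

```python
import numpy as np

# Jacobi's formula for an invertible matrix: d det(A)/dA = det(A) * inv(A).T
A = np.array([[3.0, 1.0], [2.0, 5.0]])
analytic = np.linalg.det(A) * np.linalg.inv(A).T

# Central finite-difference estimate of the same gradient.
eps = 1e-6
numeric = np.zeros_like(A)
for i in range(2):
    for j in range(2):
        Ap = A.copy(); Ap[i, j] += eps
        Am = A.copy(); Am[i, j] -= eps
        numeric[i, j] = (np.linalg.det(Ap) - np.linalg.det(Am)) / (2 * eps)

print(np.allclose(analytic, numeric))  # True
```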
   
   I've looked into the pytorch code more carefully, and I think their 
implementation is wrong. A simple example to show this:
   
   ```
   In [1]: import torch
   
   In [2]: x = torch.autograd.Variable(torch.tensor([[1., 2.], [2., 4.]]), 
requires_grad=True)
   
   In [3]: y = x.det()
   
   In [4]: y.backward(torch.ones_like(y))
   
   In [5]: x.grad
   Out[5]:
   tensor([[0., 0.],
           [0., 0.]])
   ```
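   For comparison, the analytic gradient of the determinant at this singular matrix is the transposed cofactor matrix, which is nonzero, so the all-zero grad above is incorrect. A numpy finite-difference check (illustration only):

```python
import numpy as np

# The same singular matrix as in the pytorch example above.
A = np.array([[1.0, 2.0], [2.0, 4.0]])

# Central finite-difference estimate of d det(A)/dA. For a 2x2 matrix
# [[a, b], [c, d]] with det = a*d - b*c, the exact gradient is the
# transposed cofactor matrix [[d, -c], [-b, a]] = [[4, -2], [-2, 1]].
eps = 1e-6
grad = np.zeros_like(A)
for i in range(2):
    for j in range(2):
        Ap = A.copy(); Ap[i, j] += eps
        Am = A.copy(); Am[i, j] -= eps
        grad[i, j] = (np.linalg.det(Ap) - np.linalg.det(Am)) / (2 * eps)

print(grad)  # close to [[4., -2.], [-2., 1.]], not zero
```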
   
   Since in actual floating-point computation it's very unlikely to hit 
det == 0 exactly, is it really necessary to implement the non-invertible case?
   
   Also, I haven't thought of a good method yet; any suggestions are welcome. 
