gaurav-gireesh commented on a change in pull request #12697: [MXNET-1004] Poisson Negative Log-Likelihood loss
URL: https://github.com/apache/incubator-mxnet/pull/12697#discussion_r223806706
 
 

 ##########
 File path: tests/python/unittest/test_loss.py
 ##########
 @@ -348,6 +348,44 @@ def test_triplet_loss():
             optimizer='adam')
     assert mod.score(data_iter, eval_metric=mx.metric.Loss())[0][1] < 0.05
 
+@with_seed()
+def test_poisson_nllloss():
+    pred = mx.nd.random.normal(shape=(3, 4))
+    min_pred = mx.nd.min(pred)
+    # Shift ensures only positive random values are generated for the prediction,
+    # to avoid an invalid log calculation
+    pred[:] = pred + mx.nd.abs(min_pred)
+    target = mx.nd.random.normal(shape=(3, 4))
+    min_target = mx.nd.min(target)
+    # Shift ensures only positive random values are generated for the target,
+    # to avoid an invalid log calculation
+    target[:] += mx.nd.abs(min_target)
+
+    Loss = gluon.loss.PoissonNLLLoss(from_logits=True)
+    Loss_no_logits = gluon.loss.PoissonNLLLoss(from_logits=False)
+    # 1) Testing for flag from_logits = True (the default),
+    #    comparing against the brute-force formula
+    brute_loss = np.mean(np.exp(pred.asnumpy()) - target.asnumpy() * pred.asnumpy())
+    loss_withlogits = Loss(pred, target)
+    assert_almost_equal(brute_loss, loss_withlogits.asscalar())
 
 Review comment:
  Thank you for the suggestion. It is a valid point; however, it is difficult to obtain a synthetic dataset in which random inputs X are correlated with a Poisson-distributed target. I did try training the model on random datasets, but no decrease in the loss was observed over epochs. This could be contributed incrementally if I come across a dataset where the loss does decrease with epochs, as happens for models such as logistic regression and linear regression.
  The formula that computes the loss value, however, is implemented in the function and can be verified against a raw calculation. This is the same approach used in the unit tests of other loss functions, for example test_bce_loss (binary cross-entropy).
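
  As a sketch of what "verifying against a raw calculation" looks like, the check can be expressed with NumPy alone (no MXNet dependency). The two formulas below follow the standard Poisson NLL definitions for the two `from_logits` modes; the epsilon value is an assumption chosen to match common framework defaults, not a value confirmed by this PR:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shift random draws so all values are strictly positive and log() stays valid.
pred = rng.normal(size=(3, 4))
pred = pred + np.abs(pred.min()) + 1e-3
target = rng.normal(size=(3, 4))
target = target + np.abs(target.min()) + 1e-3

# from_logits=True: pred is the log of the rate, loss = exp(pred) - target * pred
loss_logits = np.mean(np.exp(pred) - target * pred)

# from_logits=False: pred is the rate itself, loss = pred - target * log(pred + eps)
eps = 1e-08  # assumed epsilon to guard the log; verify against the actual default
loss_no_logits = np.mean(pred - target * np.log(pred + eps))
```

  A unit test would then compare each of these scalars against the corresponding loss object's output with `assert_almost_equal`, exactly as the diff above does for the `from_logits=True` case.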

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
