altanh commented on a change in pull request #8056:
URL: https://github.com/apache/tvm/pull/8056#discussion_r638229466
##########
File path: python/tvm/relay/op/nn/nn.py
##########
@@ -2973,6 +2973,40 @@ def cross_entropy_with_logits(predictions, targets):
return _make.cross_entropy_with_logits(predictions, targets)
+def nll_loss(predictions, targets, weights, reduction="mean", ignore_index=-100):
+ """Negative log likelihood loss.
+
+    output{n, i_1, i_2, ..., i_k} = -weights{t} * predictions{n, t, i_1, i_2, ..., i_k}
+    where t = targets{n, i_1, i_2, ..., i_k}; output is 0 where t == ignore_index
+
+ result = reduction(output)
+
+ Parameters
+ ----------
+ predictions : tvm.relay.Expr
+ The predictions.
+
+ targets : tvm.relay.Expr
+ The target value of each prediction.
+
+ weights : tvm.relay.Expr
Review comment:
Hmm, not sure; that's a good point. Let's just keep the weights for now.
As for arguments that don't need a gradient, there is currently no way around
returning some dummy value. It might make sense for us to introduce a
`stop_gradient` dummy op that cuts the gradient computation off at
non-differentiable arguments (this can be a future PR). Thanks!
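
To make that `stop_gradient` idea a bit more concrete, here is a rough sketch
(my assumption, not anything in this PR) of what the Python-side gradient
registration could look like, assuming a hypothetical identity op named
`stop_gradient` were already added to the op registry:

```python
from tvm import relay
from tvm.relay.op import register_gradient


# Hypothetical: assumes an identity op named "stop_gradient" has already
# been registered (the op registration itself is not shown here).
@register_gradient("stop_gradient")
def stop_gradient_grad(orig, grad):
    # Return a zero gradient for the single input, so AD stops
    # propagating anything past this point.
    return [relay.zeros_like(orig.args[0])]
```

Wrapping a non-differentiable argument (e.g. `targets`) in such an op would
give the AD pass a well-defined zero gradient instead of a hand-written dummy
value.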
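
Separately, for readers of this thread: the docstring formula above boils down
to a gather followed by a reduction. Here is a minimal NumPy sketch of the
intended semantics (my own reference, following PyTorch's `nll_loss`
conventions for `weights`, `ignore_index`, and the weighted `"mean"`
reduction; `nll_loss_ref` is just an illustrative name):

```python
import numpy as np


def nll_loss_ref(predictions, targets, weights, reduction="mean", ignore_index=-100):
    """predictions: (N, C, d1, ..., dk) log-probabilities,
    targets: (N, d1, ..., dk) class indices, weights: (C,) per-class weights."""
    mask = targets != ignore_index
    safe_t = np.where(mask, targets, 0)  # avoid indexing with ignore_index
    # output[n, i1, ..., ik] = -weights[t] * predictions[n, t, i1, ..., ik]
    gathered = np.take_along_axis(predictions, np.expand_dims(safe_t, axis=1), axis=1)
    gathered = np.squeeze(gathered, axis=1)
    w = weights[safe_t] * mask  # ignored positions contribute zero
    output = -w * gathered
    if reduction == "none":
        return output
    if reduction == "sum":
        return output.sum()
    return output.sum() / w.sum()  # "mean": weighted mean over non-ignored positions
```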