RuRo edited a comment on issue #9582: Misleading calculation of mxnet.metric.Accuracy
URL: 
https://github.com/apache/incubator-mxnet/issues/9582#issuecomment-538273489
 
 
   Why was this issue closed?
   
   The behavior of `Accuracy.update` is still wrong for one-hot labels. It 
still doesn't raise any error or warning and just silently produces wrong 
values. The current documentation for the `Accuracy` class doesn't mention 
whether preds/labels may be one-hot vectors or class indices.
   
   The docstring for the `update` method does mention that labels should 
contain "class indices as values", but the way it's worded doesn't strongly 
imply that they *can't* be one-hot vectors. Given that `preds` **does** accept 
a probability vector, it's a reasonable assumption that `labels` would too.
   
   Also, I don't think the docstring for the `update` method actually gets 
rendered into the current [web 
docs](https://mxnet.incubator.apache.org/api/python/docs/api/gluon-related/_autogen/mxnet.metric.Accuracy.html#mxnet.metric.Accuracy).
 At least I can't find it anywhere.
   
   I don't see what the "use case" for the current behavior is. For example, 
if `preds.shape == labels.shape == (32, 10)`, the current implementation just 
truncates both `preds` and `labels` to integers and compares them for 
equality. Why would this be useful?
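
To make the failure mode concrete, here is a minimal NumPy sketch of the truncate-and-compare behavior described above. This is an assumption about the semantics being criticized, not the actual mxnet source: `accuracy_as_described` is a hypothetical stand-in for `Accuracy.update` that argmaxes `preds` only when it has an extra class axis, and otherwise casts both arrays to integers.

```python
import numpy as np

def accuracy_as_described(labels, preds):
    # Hypothetical reimplementation of the behavior described in this
    # comment (an assumption, not the real mxnet code): if preds has one
    # more axis than labels, take the argmax over classes; otherwise
    # truncate both arrays to integers and compare elementwise.
    labels = np.asarray(labels)
    preds = np.asarray(preds)
    if preds.ndim == labels.ndim + 1:
        preds = preds.argmax(axis=-1)
    else:
        preds = preds.astype(np.int64)
    labels = labels.astype(np.int64)
    return float((preds.ravel() == labels.ravel()).sum()) / labels.size

labels = np.array([2, 1])
preds = np.array([[0.1, 0.2, 0.7],
                  [0.2, 0.6, 0.2]])

# Intended usage: labels as class indices, preds as probabilities.
print(accuracy_as_described(labels, preds))  # 1.0

# Misuse: one-hot labels with the same shape as preds. All probabilities
# truncate to 0, so the "accuracy" counts positions where the one-hot
# entry is also 0 -- here 4 of 6 -- instead of raising an error.
onehot = np.eye(3)[labels]  # shape (2, 3), same as preds
print(accuracy_as_described(onehot, preds))  # 0.666...
```

No error is raised in the second call; the metric silently reports a value that has nothing to do with classification accuracy, which is the behavior this issue complains about.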

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
