[GitHub] szha commented on a change in pull request #9583: use nd for accuracy calculation

2018-03-19 Thread GitBox
szha commented on a change in pull request #9583: use nd for accuracy calculation
URL: https://github.com/apache/incubator-mxnet/pull/9583#discussion_r175583728
 
 

 ##
 File path: python/mxnet/metric.py
 ##
 @@ -380,23 +380,27 @@ def update(self, labels, preds):
         Parameters
         ----------
         labels : list of `NDArray`
-            The labels of the data.
+            The labels of the data with class indices as values, one per sample.
 
         preds : list of `NDArray`
-            Predicted values.
+            Prediction values for samples. Each prediction value can either be the class index,
+            or a vector of likelihoods for all classes.
         """
         check_label_shapes(labels, preds)
 
         for label, pred_label in zip(labels, preds):
             if pred_label.shape != label.shape:
                 pred_label = ndarray.argmax(pred_label, axis=self.axis)
-            pred_label = pred_label.asnumpy().astype('int32')
-            label = label.asnumpy().astype('int32')
+            pred_label = pred_label.astype('int32')
+            label = label.astype('int32')
 
             check_label_shapes(label, pred_label)
 
-            self.sum_metric += (pred_label.flat == label.flat).sum()
-            self.num_inst += len(pred_label.flat)
+            if pred_label.context != label.context:
+                pred_label = pred_label.as_in_context(label.context)
+
+            self.sum_metric += (pred_label.flatten() == label.flatten()).sum().asscalar()
 
 Review comment:
  The computation happens before asnumpy() is called, so nothing happens in the numpy world other than passing out a scalar value.

  May I ask what your interest is in this PR? Do you have a use case that benefits from using ndarray for metrics?
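
  A minimal sketch of that point, assuming an MXNet 1.x install; the batch, the class count, and the numbers below are made up for illustration and are not from the PR. The comparison and the reduction run as NDArray operations, and only the final count is pulled back into Python with asscalar():

    from mxnet import ndarray

    # Hypothetical example batch (not from the PR): 4 samples, 3 classes.
    labels = ndarray.array([0, 2, 1, 1])                 # class indices, one per sample
    preds = ndarray.array([[0.8, 0.1, 0.1],              # per-class likelihoods
                           [0.2, 0.2, 0.6],
                           [0.3, 0.4, 0.3],
                           [0.6, 0.3, 0.1]])

    # Reduce likelihoods to predicted class indices; everything stays in NDArray land.
    pred_label = ndarray.argmax(preds, axis=1).astype('int32')
    label = labels.astype('int32')

    # The PR additionally moves pred_label with as_in_context(label.context)
    # when the two arrays live on different devices.

    # The comparison and the sum are NDArray ops; only the final count crosses
    # into Python as a scalar via asscalar().
    num_correct = (pred_label == label).sum().asscalar()
    print(num_correct / label.size)                      # 0.75 for this batch

  When the predictions and labels already live on a GPU, everything up to sum() can therefore stay on that device.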




[GitHub] szha commented on a change in pull request #9583: use nd for accuracy calculation

2018-01-26 Thread GitBox
szha commented on a change in pull request #9583: use nd for accuracy calculation
URL: https://github.com/apache/incubator-mxnet/pull/9583#discussion_r164257186
 
 

 ##
 File path: python/mxnet/metric.py
 ##
 @@ -380,23 +380,24 @@ def update(self, labels, preds):
         Parameters
         ----------
         labels : list of `NDArray`
-            The labels of the data.
+            The labels of the data with class indices as values, one per sample.
 
         preds : list of `NDArray`
-            Predicted values.
+            Prediction values for samples. Each prediction value can either be the class index,
+            or a vector of likelihoods for all classes.
         """
         check_label_shapes(labels, preds)
 
         for label, pred_label in zip(labels, preds):
             if pred_label.shape != label.shape:
                 pred_label = ndarray.argmax(pred_label, axis=self.axis)
-            pred_label = pred_label.asnumpy().astype('int32')
-            label = label.asnumpy().astype('int32')
+            pred_label = pred_label.astype('int32')
 
 Review comment:
  This requires more memory, which can show when the number of prediction classes is large (such as in NLP applications). Should I make it an option?
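
  A rough back-of-the-envelope sketch of why the size of the class dimension matters for memory here; the batch size of 128 and the 50,000-class output vocabulary are hypothetical numbers chosen for illustration, not figures from the PR:

    from mxnet import ndarray

    # Hypothetical NLP-style shapes (not from the PR): a batch of 128 samples
    # scored over a 50,000-word output vocabulary.
    batch_size, num_classes = 128, 50000

    # The full likelihood matrix holds batch_size * num_classes float32 values,
    # so any extra copy of it costs tens of megabytes per batch:
    print(batch_size * num_classes * 4 / 1e6, "MB")       # ~25.6 MB

    # After argmax collapses the class dimension, the per-sample class indices
    # (and their int32 cast) are only a few hundred bytes:
    pred_label = ndarray.argmax(ndarray.zeros((batch_size, num_classes)), axis=1)
    print(pred_label.astype('int32').shape)               # (128,)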



