sxjscience commented on issue #9583: use nd for accuracy calculation
URL: https://github.com/apache/incubator-mxnet/pull/9583#issuecomment-374425245

Sure. Let's move the discussion there.

________________________________
From: ThomasDelteil <[email protected]>
Sent: Monday, March 19, 2018 4:17:58 PM
To: apache/incubator-mxnet
Cc: Xingjian SHI; Mention
Subject: Re: [apache/incubator-mxnet] use nd for accuracy calculation (#9583)

@ThomasDelteil commented on this pull request.

In python/mxnet/metric.py (https://github.com/apache/incubator-mxnet/pull/9583#discussion_r175615075):

```diff
             check_label_shapes(label, pred_label)

-            self.sum_metric += (pred_label.flat == label.flat).sum()
-            self.num_inst += len(pred_label.flat)
+            if pred_label.context != label.context:
+                pred_label = pred_label.as_in_context(label.context)
+
+            self.sum_metric += (pred_label.flatten() == label.flatten()).sum().asscalar()
```

Thanks, yes, that's my understanding. However, I think it should be left to the user to decide when to block, since the right frequency depends heavily on their GPU and model size (e.g., every 100 batches or once per epoch). Also, is there a reason the accuracy is stored on the CPU rather than on a specific context? My measurements showed large improvements when storing the accuracy on the GPU. If you don't mind, we can continue the discussion in #9571 (https://github.com/apache/incubator-mxnet/issues/9571).
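To illustrate the point about letting the user decide when to block: in MXNet, `.asscalar()` forces a synchronization with the device, so calling it on every `update()` stalls the pipeline once per batch. A metric can instead keep its running sum as an array and convert to a Python scalar only in `get()`. The sketch below is hypothetical (it is not the actual `mxnet.metric.Accuracy`), and uses NumPy as a synchronous stand-in for `mx.nd`, with `.item()` playing the role of `.asscalar()`:

```python
import numpy as np

class LazyAccuracy:
    """Accuracy metric that keeps its running sum as an array and only
    converts it to a Python scalar when get() is called.

    Hypothetical sketch: with mx.nd arrays, sum_metric would stay on the
    GPU between updates, and .asscalar() (here, NumPy's .item()) would be
    the single blocking point.
    """

    def __init__(self):
        self.sum_metric = np.zeros(1)  # stays an array between updates
        self.num_inst = 0

    def update(self, label, pred_label):
        # No scalar conversion here: with mx.nd this keeps the comparison
        # result on the label's context and avoids blocking every batch.
        self.sum_metric += (pred_label.flatten() == label.flatten()).sum()
        self.num_inst += pred_label.size

    def get(self):
        # The only blocking point; the caller decides how often to invoke
        # it (e.g. every 100 batches or once per epoch).
        return float(self.sum_metric.item()) / self.num_inst

metric = LazyAccuracy()
metric.update(np.array([1, 0, 1, 1]), np.array([1, 1, 1, 0]))
print(metric.get())  # 2 correct out of 4 -> 0.5
```

Where the scalar lives between calls is exactly the trade-off discussed above: accumulating on the GPU avoids per-batch host/device round trips, at the cost of the user having to choose a sensible synchronization interval.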
