sxjscience commented on issue #9833: [Metric] Accelerate the calculation of F1
URL: https://github.com/apache/incubator-mxnet/pull/9833#issuecomment-367170345

Without `.asscalar()`:

```python
import mxnet as mx
import mxnet.ndarray as nd
import numpy as np
import time

# Warm up the GPU
for _ in range(10):
    a = nd.ones((100, 100), ctx=mx.gpu())
    b = a * 2
    b.asnumpy()

N = 100
# Test the speed
for data_shape in [(16,), (64,), (256,), (1024,)]:
    dat_npy = np.random.uniform(0, 1, data_shape)
    dat_nd_gpu = nd.array(dat_npy, ctx=mx.gpu())
    dat_nd_cpu = nd.array(dat_npy, ctx=mx.cpu())
    nd.waitall()
    start = time.time()
    for _ in range(N):
        np_ret = np.sum(dat_npy)
    end = time.time()
    np_time = end - start
    start = time.time()
    for _ in range(N):
        nd_ret = nd.sum(dat_nd_gpu)
    nd.waitall()
    end = time.time()
    nd_gpu_time = end - start
    start = time.time()
    for _ in range(N):
        nd_ret = nd.sum(dat_nd_cpu)
    nd.waitall()
    end = time.time()
    nd_cpu_time = end - start
    print('sum, data_shape=%s, numpy time=%g, mxnet gpu time=%g, mxnet cpu time=%g'
          % (str(data_shape), np_time, nd_gpu_time, nd_cpu_time))
```

Result:

```
sum, data_shape=(16,), numpy time=0.000400066, mxnet gpu time=0.0181644, mxnet cpu time=0.0328023
sum, data_shape=(64,), numpy time=0.000379086, mxnet gpu time=0.00761223, mxnet cpu time=0.037406
sum, data_shape=(256,), numpy time=0.000583172, mxnet gpu time=0.0079515, mxnet cpu time=0.064379
sum, data_shape=(1024,), numpy time=0.00065589, mxnet gpu time=0.00781155, mxnet cpu time=0.00705242
```
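For comparison, the NumPy baseline from the benchmark above can be reproduced on any machine without MXNet or a GPU. This is a minimal sketch that keeps the same shapes, repetition count, and timing structure (the helper name `time_numpy_sum` is introduced here for illustration and is not part of the original script):

```python
import time
import numpy as np

N = 100  # repetitions per shape, matching the benchmark above

def time_numpy_sum(data_shape, n=N):
    """Time n calls of np.sum over a random array of the given shape."""
    dat_npy = np.random.uniform(0, 1, data_shape)
    start = time.time()
    for _ in range(n):
        np.sum(dat_npy)
    return time.time() - start

for data_shape in [(16,), (64,), (256,), (1024,)]:
    t = time_numpy_sum(data_shape)
    print('sum, data_shape=%s, numpy time=%g' % (str(data_shape), t))
```

Note that for small arrays the NumPy time barely grows with the input size, which suggests the measured cost is dominated by per-call overhead rather than the reduction itself; the same appears to hold for the MXNet GPU numbers above, just with a much larger per-call constant.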
