sxjscience commented on issue #9833: [Metric] Accelerate the calculation of F1
URL: https://github.com/apache/incubator-mxnet/pull/9833#issuecomment-367144647
 
 
   After some investigation, I found that using the GPU is not faster because the
batch sizes tested are rather small, i.e., 16, 64, 256, and 1024, and NDArray on
the GPU is not as fast as NumPy at these sizes.
   I've used this script to test the speed:
   ```python
   import mxnet as mx
   import mxnet.ndarray as nd
   import numpy as np
   import time
   
   # Warm up the GPU
   for _ in range(10):
       a = nd.ones((100, 100), ctx=mx.gpu())
       b = a * 2
       b.asnumpy()
   
   N = 100
   
   # Test the speed
   for data_shape in [(16,), (64,), (256,), (1024,)]:
       dat_npy = np.random.uniform(0, 1, data_shape)
       dat_nd_gpu = nd.array(dat_npy, ctx=mx.gpu())
       dat_nd_cpu = nd.array(dat_npy, ctx=mx.cpu())
       nd.waitall()
       start = time.time()
       for _ in range(N):
           np_ret = np.sum(dat_npy)
       end = time.time()
       np_time = end - start
       start = time.time()
       for _ in range(N):
           nd_ret = nd.sum(dat_nd_gpu).asscalar()
       end = time.time()
       nd_gpu_time = end - start
       start = time.time()
       for _ in range(N):
           nd_ret = nd.sum(dat_nd_cpu).asscalar()
       end = time.time()
       nd_cpu_time = end - start
        print('sum, data_shape=%s, numpy time=%g, mxnet gpu time=%g, mxnet cpu time=%g'
              % (str(data_shape), np_time, nd_gpu_time, nd_cpu_time))
   ```
   
   Result:
   ```
    sum, data_shape=(16,), numpy time=0.00067687, mxnet gpu time=0.0206566, mxnet cpu time=0.193971
    sum, data_shape=(64,), numpy time=0.000299454, mxnet gpu time=0.0147879, mxnet cpu time=0.00626922
    sum, data_shape=(256,), numpy time=0.000304699, mxnet gpu time=0.0141888, mxnet cpu time=0.00622177
    sum, data_shape=(1024,), numpy time=0.000349522, mxnet gpu time=0.015424, mxnet cpu time=0.00976443
   ```
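   
   For context, the per-batch F1 update this PR accelerates boils down to a few
small elementwise comparisons and sums, which is exactly the regime where NumPy's
low per-call overhead wins over launching GPU kernels. A minimal sketch of a
binary F1 computed with NumPy (illustrative only; the `labels`/`preds` names and
the `binary_f1` helper are my own, not the actual metric API in this PR):
   ```python
   import numpy as np

   def binary_f1(labels, preds):
       """Binary F1 from hard 0/1 predictions (illustrative sketch)."""
       labels = np.asarray(labels)
       preds = np.asarray(preds)
       tp = np.sum((preds == 1) & (labels == 1))  # true positives
       fp = np.sum((preds == 1) & (labels == 0))  # false positives
       fn = np.sum((preds == 0) & (labels == 1))  # false negatives
       precision = tp / (tp + fp) if tp + fp > 0 else 0.0
       recall = tp / (tp + fn) if tp + fn > 0 else 0.0
       if precision + recall == 0:
           return 0.0
       return 2 * precision * recall / (precision + recall)

   labels = np.array([1, 0, 1, 1, 0])
   preds = np.array([1, 0, 0, 1, 1])
   print(binary_f1(labels, preds))
   ```
   At a batch size of ~1024 these reductions take microseconds in NumPy, so the
roughly 0.15 ms per-call GPU overhead seen in the benchmark above dominates.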
   
