roywei edited a comment on issue #15429: Operator Performance Regression on CPU
URL: https://github.com/apache/incubator-mxnet/issues/15429#issuecomment-508863497
 
 
There is also no significant regression in the BatchNorm op between 1.4.1 and 1.5.0; the speed actually improves when the int64 flag is turned on.
   
run (ms) | 1.4.1 (int64) | 1.5.0 (int32) | 1.5.0 (int64)
-- | -- | -- | --
BatchNorm, 1st run | 2.609942 | 2.63 | 2.031
2nd run | 2.621809 | 2.63 | 2.054
3rd run | 2.621809 | 2.611 | 2.041
average | 2.60764 | 2.61148 | 2.042131
   
   
   
   script:
```
import mxnet as mx
import time


mx.random.seed(0)

# Inputs for a single BatchNorm call: NCHW data and per-channel parameters.
data = mx.nd.random.uniform(0, 256, (32, 3, 256, 256))
beta = mx.nd.random.uniform(shape=(3,))
gamma = mx.nd.random.uniform(shape=(3,))
mean = mx.nd.random.uniform(shape=(3,))
var = mx.nd.random.uniform(shape=(3,))
repeat = 1000

# Make sure all pending async work is done before starting the timer.
mx.nd.waitall()
start = time.time()
for _ in range(repeat):
    c = mx.nd.BatchNorm(data=data, gamma=gamma, beta=beta,
                        moving_mean=mean, moving_var=var)
    c.wait_to_read()
elapsed = time.time() - start

# Report the average time per call in milliseconds.
print("elapsed time: %fms" % (elapsed * 1000 / repeat))
```
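
For completeness, here is a minimal sketch of how one might confirm which build is actually being benchmarked. It assumes the runtime feature detection added in MXNet 1.5.0 (`mxnet.runtime.Features`) and that the int64 flag is exposed under the feature name `INT64_TENSOR_SIZE`; both names are assumptions worth verifying against the build used above.

```
import mxnet as mx
from mxnet.runtime import Features

# Print the MXNet version and whether the int64 (large tensor) flag is
# enabled in this build. Feature detection was added in MXNet 1.5.0;
# the feature name 'INT64_TENSOR_SIZE' is an assumption.
print(mx.__version__)
features = Features()
print(features.is_enabled('INT64_TENSOR_SIZE'))
```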
   
