roywei commented on issue #15429: Operator Performance Regression on CPU
URL: https://github.com/apache/incubator-mxnet/issues/15429#issuecomment-508863497

There is also no significant regression on the BatchNorm op between 1.4.1 and 1.5.0 (times are ms per call, as reported by the script below):

| | 1.4.1 (int64) | 1.4.1 (int64) | 1.4.1 (int64) | average | 1.5.0 (int32) | 1.5.0 (int32) | 1.5.0 (int32) | average |
| -- | -- | -- | -- | -- | -- | -- | -- | -- |
| BatchNorm | 2.609942 | 2.621809 | 2.608 | 2.607639 | 2.63 | 2.594 | 2.611 | 2.61147967 |

Script:

```python
import mxnet as mx
import time

mx.random.seed(0)

# Inputs for the BatchNorm microbenchmark (NCHW batch of 32 images).
data = mx.nd.random.uniform(0, 256, (32, 3, 256, 256))
beta = mx.nd.random.uniform(shape=(3,))
gamma = mx.nd.random.uniform(shape=(3,))
mean = mx.nd.random.uniform(shape=(3,))
var = mx.nd.random.uniform(shape=(3,))

repeat = 1000
mx.nd.waitall()  # make sure input initialization is not timed
start = time.time()
for _ in range(repeat):
    c = mx.nd.BatchNorm(data=data, gamma=gamma, beta=beta,
                        moving_mean=mean, moving_var=var)
    c.wait_to_read()  # block until the asynchronous op completes
elapse = time.time() - start
print("elapse time: %fms" % (elapse * 1000 / repeat))
```
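A minor refinement of scripts like this is to add a few untimed warm-up iterations before the timed loop, so one-time costs (lazy initialization, cache warming) are excluded from the average. A minimal sketch, where `benchmark` is a hypothetical helper (not part of MXNet) and a pure-Python workload stands in so the snippet runs without MXNet installed:

```python
import time

def benchmark(fn, warmup=10, repeat=1000):
    """Return the average wall-clock time of fn() in ms per call.

    Runs `warmup` untimed calls first, then times `repeat` calls.
    For MXNet ops, fn should end with wait_to_read() (or
    mx.nd.waitall()) so asynchronous execution is included.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(repeat):
        fn()
    return (time.perf_counter() - start) * 1000 / repeat

# Stand-in workload; swap in the BatchNorm call for a real measurement.
avg_ms = benchmark(lambda: sum(range(1000)), warmup=5, repeat=100)
print("elapse time: %fms" % avg_ms)
```

With `perf_counter` the measurement is also immune to system clock adjustments, which `time.time` is not.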
