TaoLv commented on issue #14818: Support 3D input for MKL-DNN softmax operator
URL: https://github.com/apache/incubator-mxnet/pull/14818#issuecomment-487289944
 
 
   Tests should be covered by
   https://github.com/apache/incubator-mxnet/blob/master/tests/python/unittest/test_operator.py#L4697
   and
   https://github.com/apache/incubator-mxnet/blob/master/tests/cpp/operator/mkldnn_operator_test.cc#L1288.
   I used the following code snippet for performance benchmarking:
   ```python
   import time
   import mxnet as mx

   def test_performance():
       shapes = [(1024,), (96, 512), (96, 128, 128), (96, 256, 256),
                 (1, 8, 1024, 1024)]
       for sh in shapes:
           a = mx.nd.random.uniform(shape=sh)
           # warm up
           b = mx.nd.softmax(a, axis=-1)
           b.wait_to_read()

           tic = time.time()
           for i in range(1000):
               b = mx.nd.softmax(a, axis=-1)
               b.wait_to_read()
           toc = time.time()
           # average latency over 1000 iterations, in milliseconds
           print("softmax %s, take %f ms" % (sh, (toc - tic) / 1000 * 1000.0))
   ```
   Some performance numbers are as follows.
   Baseline, mxnet==1.5.0b20190426:
   ```
   softmax (1024,), take 0.103340 ms
   softmax (96, 512), take 0.127465 ms
   softmax (96, 128, 128), take 1.655400 ms
   softmax (96, 256, 256), take 6.369653 ms
   softmax (1, 8, 1024, 1024), take 11.450656 ms
   ```
   This PR with the MKL-DNN backend:
   ```
   softmax (1024,), take 0.062743 ms
   softmax (96, 512), take 0.104104 ms
   softmax (96, 128, 128), take 0.385350 ms
   softmax (96, 256, 256), take 0.463220 ms
   softmax (1, 8, 1024, 1024), take 1.704757 ms
   ```
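   For reference, the computation being benchmarked can be sketched with a plain NumPy softmax over the last axis. This is only an illustration of the operator's semantics (the unit test linked above compares against a similar reference), not the MKL-DNN implementation; the helper name `ref_softmax` is mine:
   ```python
   import numpy as np

   def ref_softmax(x, axis=-1):
       # numerically stable softmax: subtract the per-slice max
       # before exponentiating to avoid overflow
       x_max = np.max(x, axis=axis, keepdims=True)
       e = np.exp(x - x_max)
       return e / np.sum(e, axis=axis, keepdims=True)

   # works for any rank, including the 3D shapes benchmarked above
   a = np.random.uniform(size=(96, 128, 128)).astype(np.float32)
   out = ref_softmax(a, axis=-1)
   # each slice along the softmax axis sums to 1
   assert np.allclose(out.sum(axis=-1), 1.0, atol=1e-5)
   ```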

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services