wuxun-zhang commented on issue #17159: Performance regression from 1.4.1 to 
1.5.1
URL: 
https://github.com/apache/incubator-mxnet/issues/17159#issuecomment-568717154
 
 
   I tried the resnext50 model from the [Gluon-CV 
modelzoo](https://gluon-cv.mxnet.io/model_zoo/classification.html#resnext) 
since I cannot access the model in S3. I ran on a local CLX-8280 machine 
with 28 physical cores, and the inference command is as below:
   
   ```
   export OMP_NUM_THREADS=28
   export KMP_AFFINITY=granularity=fine,noduplicates,compact,1,0
   numactl --physcpubind=0-27 --membind=0 python gluon_resnext50.py --iteration 
10000
   ```
   
   Output based on **mxnet-mkl==1.5.1**
   ```
   EIA context not available, trying GPU...
   GPU not available, trying CPU...
   1.5.1 <module 'mxnet' from 
'~/anaconda3/envs/mxnet_v1.5/lib/python3.6/site-packages/mxnet/__init__.py'>
   @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
   starting test for Model: resnext-50 as float32 with batch size 1
   [18:00:41] src/nnvm/legacy_json_util.cc:204: Warning: loading symbol saved 
by MXNet version 10600 with lower version of MXNet v10501. May cause undefined 
behavior. Please update MXNet if you encounter any issue
   
~/anaconda3/envs/mxnet_v1.5/lib/python3.6/site-packages/mxnet/gluon/block.py:1159:
 UserWarning: Cannot decide type for the following arguments. Consider 
providing them as input:
           data: None
     input_sym_arg_type = in_param.infer_type()[0]
   
~/anaconda3/envs/mxnet_v1.5/lib/python3.6/site-packages/mxnet/gluon/block.py:548:
 UserWarning: The 1-th input to HybridBlock is not used by any computation. Is 
this intended?
     out = self.forward(*args)
   [18:03:18] src/nnvm/legacy_json_util.cc:204: Warning: loading symbol saved 
by MXNet version 10600 with lower version of MXNet v10501. May cause undefined 
behavior. Please update MXNet if you encounter any issue
   Model: resnext-50
           ----Latency----
           Model Partition Time: 0
           Model Load Time: 79.47278022766113
           First Inference: 108.0780029296875
           p99: 15.761613845825195
           p90: 15.525102615356445
           p50: 15.347003936767578
           Avg: 15.376607843213481
           StdDev: 0.1906057337722647
   ```
   
   Output based on **mxnet 
[master](https://github.com/apache/incubator-mxnet/commit/d000c3baa32171964f1b8ed3780472af0e05be1a)**
   ```
   EIA context not available, trying GPU...
   GPU not available, trying CPU...
   1.6.0 <module 'mxnet' from 
'~/github/incubator-mxnet/python/mxnet/__init__.py'>
   @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
   starting test for Model: resnext-50 as float32 with batch size 1
   ~/github/incubator-mxnet/python/mxnet/gluon/block.py:1398: UserWarning: 
Cannot decide type for the following arguments. Consider providing them as 
input:
           data: None
     input_sym_arg_type = in_param.infer_type()[0]
   ~/github/incubator-mxnet/python/mxnet/gluon/block.py:693: UserWarning: The 
1-th input to HybridBlock is not used by any computation. Is this intended?
     out = self.forward(*args)
   Model: resnext-50
           ----Latency----
           Model Partition Time: 0
           Model Load Time: 85.40558815002441
           First Inference: 113.36183547973633
           p99: 16.376495361328125
           p90: 16.173124313354492
           p50: 15.985250473022461
           Avg: 16.017432952330072
           StdDev: 0.18273338901509342
   ```
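   
   For reference, the p99/p90/p50/Avg/StdDev figures in the tables above can be computed from raw per-iteration latencies (in milliseconds) with a short sketch like the following. This is just an illustration of the summary statistics, not the actual benchmark script; the function name `summarize` and the sample timings are hypothetical, and numpy is assumed to be available:
   
   ```python
   # Hypothetical sketch: summarize per-iteration latencies (ms) the way
   # the benchmark output above reports them.
   import numpy as np
   
   def summarize(latencies_ms):
       a = np.asarray(latencies_ms, dtype=np.float64)
       return {
           "p99": float(np.percentile(a, 99)),
           "p90": float(np.percentile(a, 90)),
           "p50": float(np.percentile(a, 50)),
           "Avg": float(a.mean()),
           "StdDev": float(a.std()),
       }
   
   # Usage with synthetic timings (real runs would collect one value per iteration):
   stats = summarize([15.3, 15.4, 15.5, 15.6, 16.0])
   ```
   
   Note that the first-inference time is normally excluded from these percentiles, since it includes graph initialization and is an order of magnitude larger than the steady-state latency.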
   
   From my side, I don't see any significant performance change from 
1.5.1 to 1.6.0. I didn't test with mxnet-mkl 1.4.1 because the resnext50 
model from the GluonCV modelzoo is not compatible with MXNet 1.4.1. So, it 
would be a great help if you could share the resnext50 model you used, so 
that I can give it a try with MXNet 1.4.1. Thanks!

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services