[GitHub] [incubator-mxnet] zhhoper commented on issue #17231: cannot quantization example
zhhoper commented on issue #17231: cannot quantization example URL: https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-575823751
Hi, MXNet built from source does not seem to work. When I install MXNet with pip, it can compress the network, but the run time is super slow. The MXNet version is 1.6.0.
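One thing worth checking with the pip wheel is whether MKL-DNN (oneDNN) support is actually compiled in, since quantized INT8 operators only have fast CPU kernels through MKL-DNN and otherwise fall back to much slower reference paths. A minimal sanity-check sketch, assuming MXNet 1.6's mxnet.runtime feature API:

```python
import mxnet as mx
from mxnet.runtime import Features

# Report the installed version and whether the build has MKL-DNN compiled in;
# without it, INT8 quantized operators run on slow fallback implementations.
print('MXNet version:', mx.__version__)
features = Features()
print('MKLDNN enabled:', features.is_enabled('MKLDNN'))
```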
[GitHub] [incubator-mxnet] zhhoper commented on issue #17231: cannot quantization example
zhhoper commented on issue #17231: cannot quantization example URL: https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-574543735
@wuxun-zhang @ZhennanQin I ran the example using mxnet 1.6.0 and it seems to work OK. However, the run time of the quantized model is much slower (more than 10 times slower) than the original one. Is there anything I need to set up in order to speed up the quantized model? I tested resnet152.

For float32:

Command:
python imagenet_inference.py --symbol-file=./model/imagenet1k-resnet-152-symbol.json --param-file=./model/imagenet1k-resnet-152-.params --num-skipped-batches=50 --batch-size=64 --num-inference-batches=500 --dataset=./data/val_256_q90.rec --ctx=cpu

Output:
INFO:logger:batch size = 64 for inference
INFO:logger:rgb_mean = 0,0,0
INFO:logger:rgb_std = 1,1,1
INFO:logger:label_name = softmax_label
INFO:logger:Input data shape = (3, 224, 224)
INFO:logger:Dataset for inference: ./data/val_256_q90.rec
[07:03:16] ../src/io/iter_image_recordio_2.cc:831: Create ImageRecordIter2 optimized for CPU backend.Use omp threads instead of preprocess_threads.
[07:03:16] ../src/io/iter_image_recordio_2.cc:178: ImageRecordIOParser2: ./data/val_256_q90.rec, use 16 threads for decoding..
[07:03:16] ../src/base.cc:84: Upgrade advisory: this mxnet has been built against cuDNN lib version 7401, which is older than the oldest version tested by CI (7600). Set MXNET_CUDNN_LIB_CHECKING=0 to quiet this warning.
INFO:logger:Loading symbol from file /home/ubuntu/software/incubator-mxnet/example/quantization/./model/imagenet1k-resnet-152-symbol.json
[07:03:18] ../src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v0.8.0. Attempting to upgrade...
[07:03:18] ../src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
INFO:logger:Loading params from file /home/ubuntu/software/incubator-mxnet/example/quantization/./model/imagenet1k-resnet-152-.params
INFO:logger:Skipping the first 50 batches
INFO:logger:Running model ./model/imagenet1k-resnet-152-symbol.json for inference
[07:03:19] ../src/executor/graph_executor.cc:1982: Subgraph backend MKLDNN is activated.
INFO:logger:Finished inference with 32000 images
**INFO:logger:Finished with 22.124158 images per second**
WARNING:logger:Note: GPU performance is expected to be slower than CPU. Please refer quantization/README.md for details
INFO:logger:('accuracy', 0.7676875)
INFO:logger:('top_k_accuracy_5', 0.93034375)

For the quantized model:

Command:
python imagenet_inference.py --symbol-file=./model/imagenet1k-resnet-152-quantized-5batches-naive-symbol.json --param-file=./model/imagenet1k-resnet-152-quantized-.params --num-skipped-batches=50 --batch-size=64 --num-inference-batches=500 --dataset=./data/val_256_q90.rec --ctx=cpu

Output:
INFO:logger:batch size = 64 for inference
INFO:logger:rgb_mean = 0,0,0
INFO:logger:rgb_std = 1,1,1
INFO:logger:label_name = softmax_label
INFO:logger:Input data shape = (3, 224, 224)
INFO:logger:Dataset for inference: ./data/val_256_q90.rec
[00:37:40] ../src/io/iter_image_recordio_2.cc:831: Create ImageRecordIter2 optimized for CPU backend.Use omp threads instead of preprocess_threads.
[00:37:40] ../src/io/iter_image_recordio_2.cc:178: ImageRecordIOParser2: ./data/val_256_q90.rec, use 16 threads for decoding..
[00:37:40] ../src/base.cc:84: Upgrade advisory: this mxnet has been built against cuDNN lib version 7401, which is older than the oldest version tested by CI (7600). Set MXNET_CUDNN_LIB_CHECKING=0 to quiet this warning.
INFO:logger:Loading symbol from file /home/ubuntu/software/incubator-mxnet/example/quantization/./model/imagenet1k-resnet-152-quantized-5batches-naive-symbol.json
INFO:logger:Loading params from file /home/ubuntu/software/incubator-mxnet/example/quantization/./model/imagenet1k-resnet-152-quantized-.params
INFO:logger:Skipping the first 50 batches
INFO:logger:Running model ./model/imagenet1k-resnet-152-quantized-5batches-naive-symbol.json for inference
[00:37:43] ../src/executor/graph_executor.cc:1982: Subgraph backend MKLDNN is activated.
INFO:logger:Finished inference with 32000 images
**INFO:logger:Finished with 1.495486 images per second**
WARNING:logger:Note: GPU performance is expected to be slower than CPU. Please refer quantization/README.md for details
INFO:logger:('accuracy', 0.76328125)
INFO:logger:('top_k_accuracy_5', 0.92859375)
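For reference, the throughput gap above (22.1 vs. 1.5 images per second, roughly 15x) can be checked outside imagenet_inference.py by timing forward passes directly on the exported quantized model. A minimal sketch, assuming the symbol/params files named in the command above and the MXNet 1.x Module API; the batch size and iteration count are arbitrary:

```python
import time
import mxnet as mx

# Paths as used in the command above; adjust if the files live elsewhere.
SYM_JSON = './model/imagenet1k-resnet-152-quantized-5batches-naive-symbol.json'
PARAMS = './model/imagenet1k-resnet-152-quantized-.params'
BATCH, ITERS = 64, 10

# Load the quantized symbol and split the saved NDArrays into arg/aux params.
sym = mx.sym.load(SYM_JSON)
save_dict = mx.nd.load(PARAMS)
arg_params, aux_params = {}, {}
for k, v in save_dict.items():
    tp, name = k.split(':', 1)
    (arg_params if tp == 'arg' else aux_params)[name] = v

# Bind for inference on CPU and feed random data (throughput only, not accuracy).
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (BATCH, 3, 224, 224))])
mod.set_params(arg_params, aux_params)
batch = mx.io.DataBatch(data=[mx.nd.random.uniform(shape=(BATCH, 3, 224, 224))])

mod.forward(batch, is_train=False)  # warm-up
mod.get_outputs()[0].wait_to_read()
start = time.time()
for _ in range(ITERS):
    mod.forward(batch, is_train=False)
    mod.get_outputs()[0].wait_to_read()
print('%.2f images per second' % (BATCH * ITERS / (time.time() - start)))
```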
[GitHub] [incubator-mxnet] zhhoper commented on issue #17231: cannot quantization example
zhhoper commented on issue #17231: cannot quantization example URL: https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-574399327
@wuxun-zhang Sorry, I haven't been able to look at this since reporting the bug. I will take a look and let you know whether the bug is still there.
[GitHub] [incubator-mxnet] zhhoper commented on issue #17231: cannot quantization example
zhhoper commented on issue #17231: cannot quantization example URL: https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-571373823
@ZhennanQin I tried setting calib-mode to 'naive' and hit the same error. The error message is as follows:

INFO:logger:Namespace(batch_size=32, calib_dataset='data/val_256_q90.rec', calib_mode='naive', data_nthreads=60, enable_calib_quantize=True, epoch=0, exclude_first_conv=False, image_shape='3,224,224', label_name='softmax_label', model='resnet50_v1', no_pretrained=False, num_calib_batches=10, quantized_dtype='auto', quiet=False, shuffle_chunk_seed=3982304, shuffle_dataset=True, shuffle_seed=48564309)
INFO:logger:shuffle_dataset=True
INFO:logger:calibration mode set to naive
INFO:logger:Get pre-trained model from MXNet or Gluoncv modelzoo.
INFO:logger:If you want to use custom model, please set --no-pretrained.
INFO:logger:model resnet50_v1 is converted from GluonCV
INFO:logger:Converting model from Gluon-CV ModelZoo resnet50_v1... into path /home/ubuntu/software/incubator-mxnet/example/quantization/model
/home/ubuntu/anaconda3/envs/mxnet_0.15/lib/python3.6/site-packages/mxnet-1.6.0-py3.6.egg/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
  warnings.warn(msg)
[00:17:25] ../src/executor/graph_executor.cc:1982: Subgraph backend MKLDNN is activated.
INFO:logger:batch size = 32 for calibration
INFO:logger:number of batches = 10 for calibration
INFO:logger:These layers have been excluded []
INFO:logger:label_name = softmax_label
INFO:logger:Input data shape = (3, 224, 224)
INFO:logger:rgb_mean = 123.68,116.779,103.939
INFO:logger:rgb_std = 58.393, 57.12, 57.375
INFO:logger:Creating ImageRecordIter for reading calibration dataset
[00:17:26] ../src/io/iter_image_recordio_2.cc:178: ImageRecordIOParser2: data/val_256_q90.rec, use 16 threads for decoding..

Segmentation fault: 11
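The same naive calibration can also be driven directly through the quantization API instead of imagenet_gen_qsym_mkldnn.py, which can help isolate whether the crash comes from the script's data pipeline or from calibration itself. A rough sketch, assuming a checkpoint saved under the placeholder prefix ./model/resnet50_v1 at epoch 0 and MXNet 1.6's mxnet.contrib.quantization module; the reduced preprocess_threads value is only a guess at a workaround, not a confirmed fix:

```python
import mxnet as mx
from mxnet.contrib.quantization import quantize_model

# Placeholder checkpoint prefix/epoch for the GluonCV-converted resnet50_v1.
prefix, epoch = './model/resnet50_v1', 0
sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, epoch)

# Calibration iterator mirroring the script's settings, but with far fewer
# decoding threads than data_nthreads=60, in case the record-IO decoder is
# involved in the segfault.
calib_data = mx.io.ImageRecordIter(
    path_imgrec='data/val_256_q90.rec',
    batch_size=32,
    data_shape=(3, 224, 224),
    preprocess_threads=4,
    mean_r=123.68, mean_g=116.779, mean_b=103.939,
    std_r=58.393, std_g=57.12, std_b=57.375)

# Naive (min/max) calibration over 10 batches, matching the command above.
qsym, qarg_params, qaux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    ctx=mx.cpu(), calib_mode='naive',
    calib_data=calib_data, num_calib_examples=32 * 10,
    quantized_dtype='auto')

mx.model.save_checkpoint('./model/resnet50_v1-quantized-naive', 0,
                         qsym, qarg_params, qaux_params)
```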