[GitHub] [incubator-mxnet] CanyonWind commented on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed
CanyonWind commented on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed URL: https://github.com/apache/incubator-mxnet/issues/16424#issuecomment-552684130 That would be great, thanks! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
CanyonWind commented on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed URL: https://github.com/apache/incubator-mxnet/issues/16424#issuecomment-552681563 I've tried the updates from the pull request, and they work like a charm. It seems the degraded quantization performance comes from the model's precision itself and the large-kernel depthwise convolution. Though I still have some questions about why and how these factors influence quantization performance, they are no longer related to this issue. Hi @ZhennanQin, I might post some educational questions on the PR thread after more experiments. If you have some time in the future, any quick answer would be appreciated and could save me a good amount of time. Thanks a lot for the help again @pengzhao-intel @ZhennanQin!
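For reference, the Hard Swish and Hard Sigmoid activations named in the issue title are commonly defined via a clipped linear function. A minimal NumPy sketch of the usual definitions (illustrative only; not necessarily MXNet's exact implementation):

```python
import numpy as np

def hard_sigmoid(x):
    # Common definition: relu6(x + 3) / 6
    return np.clip(x + 3.0, 0.0, 6.0) / 6.0

def hard_swish(x):
    # x * hard_sigmoid(x), a cheap approximation of swish
    return x * hard_sigmoid(x)

print(hard_swish(np.array([-4.0, 0.0, 4.0])))  # saturates to 0 below -3 and to x above +3
```

Because both ops reduce to clips and elementwise arithmetic, they are in principle quantization-friendly, which is consistent with the accuracy loss coming from elsewhere in the model.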
CanyonWind commented on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed URL: https://github.com/apache/incubator-mxnet/issues/16424#issuecomment-551277963 Hi @ZhennanQin, thanks a lot for your effort! I tried to verify the quantized model's performance with the nightly release (`mxnet-mkl-1.6.0b20191107`; the merge commit landed on 11/06, so I assumed this release already contains the updated code) on a Mac to get a quick result.

```sh
git clone https://github.com/CanyonWind/Single-Path-One-Shot-NAS-MXNet.git
cd Single-Path-One-Shot-NAS-MXNet
python3 -m venv env
source env/bin/activate
pip install mxnet-mkl --pre
cd quantization
```

I've tried the following:

With calib-mode `none`:

```sh
# quantize model
python3 quantize_mkldnn.py --model=ShuffleNas_fixArch --num-calib-batches=5 --calib-mode=none
# verify performance
python3 imagenet_inference.py --symbol-file model/ShuffleNas_fixArch-quantized-symbol.json --param-file model/ShuffleNas_fixArch-quantized-.params --rgb-mean=123.68,116.779,103.939 --rgb-std=58.393,57.12,57.375 --num-skipped-batches=50 --batch-size=64 --num-inference-batches=5 --dataset=./data/val_256_q90.rec --ctx=cpu
# accuracy: 0.009375
```

With calib-mode `naive`:

```sh
# quantize model
python3 quantize_mkldnn.py --model=ShuffleNas_fixArch --num-calib-batches=5 --calib-mode=naive
# verify performance
python3 imagenet_inference.py --symbol-file model/ShuffleNas_fixArch-quantized-5batches-naive-symbol.json --param-file model/ShuffleNas_fixArch-quantized-.params --rgb-mean=123.68,116.779,103.939 --rgb-std=58.393,57.12,57.375 --num-skipped-batches=50 --batch-size=64 --num-inference-batches=5 --dataset=./data/val_256_q90.rec --ctx=cpu
# accuracy: 0.003125
```

With calib-mode `entropy`:

```sh
# quantize model
python3 quantize_mkldnn.py --model=ShuffleNas_fixArch --num-calib-batches=5 --calib-mode=entropy
# verify performance
python3 imagenet_inference.py --symbol-file model/ShuffleNas_fixArch-quantized-5batches-entropy-symbol.json --param-file model/ShuffleNas_fixArch-quantized-.params --rgb-mean=123.68,116.779,103.939 --rgb-std=58.393,57.12,57.375 --num-skipped-batches=50 --batch-size=64 --num-inference-batches=5 --dataset=./data/val_256_q90.rec --ctx=cpu
# an error was thrown during inference
```

Could you please advise how you verified the quantization accuracy, or try any of the above quantization procedures (each takes less than 10 minutes to finish) at your convenience? Thanks again for your generous help; I really appreciate it!
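For readers unfamiliar with the `calib-mode` options above: `naive` calibration conceptually derives the int8 scale from the observed absolute maximum of the calibration activations, while `entropy` searches for a clipping threshold that minimizes the KL divergence between the fp32 and quantized distributions. A rough NumPy sketch of the naive idea (illustrative only; MXNet's actual MKL-DNN calibration pass is more involved):

```python
import numpy as np

def naive_int8_scale(activations):
    # Naive calibration: map the observed |max| of the calibration data
    # onto the full int8 range. Hypothetical helper, not MXNet's API.
    threshold = np.max(np.abs(activations))
    return 127.0 / threshold

acts = np.array([-2.0, 0.5, 1.0, 4.0])
scale = naive_int8_scale(acts)                        # 127 / 4 = 31.75
q = np.clip(np.round(acts * scale), -127, 127).astype(np.int8)
print(scale, q)
```

A single outlier in the calibration batches stretches the naive scale and wastes quantization levels on rare values, which is one reason entropy calibration can behave very differently on the same model.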
CanyonWind commented on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed URL: https://github.com/apache/incubator-mxnet/issues/16424#issuecomment-541470103 Thank you very much for taking the time to help. I will try it too and let you know if there is any update.
CanyonWind commented on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed URL: https://github.com/apache/incubator-mxnet/issues/16424#issuecomment-541338910 @pengzhao-intel Got it, thanks. @ZhennanQin Thanks for the help! I will give it a try. A quick question: is this problem caused by reshaping? If I can find some way to avoid using it in the model, should the quantization work?
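For context on why reshape matters here: channel shuffle is typically implemented with exactly the reshape/transpose pattern under discussion, so avoiding reshape would mean reformulating the op itself. A minimal NumPy sketch of the standard pattern (illustrative only, not the issue's actual code):

```python
import numpy as np

def channel_shuffle(x, groups):
    # Standard channel shuffle: split channels into groups, swap the
    # group axis with the per-group channel axis, then flatten back.
    n, c, h, w = x.shape
    assert c % groups == 0
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # interleave channels across groups
    return x.reshape(n, c, h, w)

# 1x4x1x1 input with channels [0, 1, 2, 3]; with 2 groups the shuffle
# interleaves them to [0, 2, 1, 3]
x = np.arange(4, dtype=np.float32).reshape(1, 4, 1, 1)
y = channel_shuffle(x, 2)
print(y.reshape(-1))  # -> [0. 2. 1. 3.]
```

The two reshapes and the transpose are pure data-movement ops, which is why they can trip up a quantized graph pass that expects compute ops between quantize/dequantize boundaries.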
CanyonWind commented on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed URL: https://github.com/apache/incubator-mxnet/issues/16424#issuecomment-541241345 Sure, I'm happy to do that. Could you please let me know which quantization method (paper) you are using in MKL so that I can cite it? Thanks!
CanyonWind commented on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed URL: https://github.com/apache/incubator-mxnet/issues/16424#issuecomment-540967887 > This bug can be reproduced locally, and found the root cause. Internal patch is ready, need more time for verification. Hi @ZhennanQin, thanks for the prompt response. I'm participating in a competition and desperately need this tool for quantization. Could you please let me know whether there is a quick fix or workaround?