pengzhao-intel commented on issue #15108: The test time of the model on GPU is normal, but the test time on CPU is very long.
URL: https://github.com/apache/incubator-mxnet/issues/15108#issuecomment-500182206

Thanks for the data; I can reproduce your issue now. It seems the convolution time is much longer when you switch to the new dataset. I am debugging now and will get back to you soon.

```
(base) [patric@mlt-skx132 test]$ python test.py
mkldnn_verbose,info,Intel(R) MKL-DNN v0.19.0 (Git Hash 41bee20d7eb4a67feeeeb8d597b3598994eb1959),Intel(R) Advanced Vector Extensions 512 (Intel(R) AVX-512) with AVX512BW, AVX512VL, and AVX512DQ extensions
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nchw out:f32_nChw16c,num:1,1x64x112x112,0.628906
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_oihw out:f32_OIhw16i16o,num:1,64x64x3x3,0.027832
mkldnn_verbose,exec,convolution,jit:avx512_common,forward_inference,fsrc:nChw16c fwei:OIhw16i16o fbia:undef fdst:nChw16c,alg:convolution_direct,mb1_ic64oc64_ih112oh56kh3sh2dh0ph1_iw112ow56kw3sw2dw0pw1,**22.302**
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_oihw out:f32_OIhw16i16o,num:1,64x64x3x3,0.027832
0.030622
(base) [patric@mlt-skx132 test]$ python test.py
mkldnn_verbose,info,Intel(R) MKL-DNN v0.19.0 (Git Hash 41bee20d7eb4a67feeeeb8d597b3598994eb1959),Intel(R) Advanced Vector Extensions 512 (Intel(R) AVX-512) with AVX512BW, AVX512VL, and AVX512DQ extensions
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nchw out:f32_nChw16c,num:1,1x64x112x112,0.468994
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_oihw out:f32_OIhw16i16o,num:1,64x64x3x3,0.0290527
mkldnn_verbose,exec,convolution,jit:avx512_common,forward_inference,fsrc:nChw16c fwei:OIhw16i16o fbia:undef fdst:nChw16c,alg:convolution_direct,mb1_ic64oc64_ih112oh56kh3sh2dh0ph1_iw112ow56kw3sw2dw0pw1,**0.687012**
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_oihw out:f32_OIhw16i16o,num:1,64x64x3x3,0.026123
0.009102
```
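Since the slow convolution shows up directly in the verbose log (22.302 ms vs. 0.687012 ms for the same shape), a small helper to extract per-primitive timings can make such comparisons easier. Below is a minimal sketch, assuming the MKL-DNN v0.19 format shown above, where each `exec` line is comma-separated and ends with the elapsed time in milliseconds; `parse_mkldnn_times` is a hypothetical helper written for this thread, not part of MXNet or MKL-DNN:

```python
def parse_mkldnn_times(log_text, primitive="convolution"):
    """Return the execution times (ms) of the given primitive type,
    assuming the MKL-DNN v0.19 verbose format:
    mkldnn_verbose,exec,<primitive>,...,<time_ms>"""
    times = []
    for line in log_text.splitlines():
        fields = line.strip().split(",")
        # Only "exec" records carry a timing; skip "info" and other lines.
        if len(fields) < 4 or fields[0] != "mkldnn_verbose" or fields[1] != "exec":
            continue
        if fields[2] != primitive:
            continue
        # Last field is the time in ms; strip any "**" emphasis that was
        # added for readability when the log was pasted into the comment.
        times.append(float(fields[-1].strip("*")))
    return times

if __name__ == "__main__":
    sample = (
        "mkldnn_verbose,exec,reorder,jit:uni,undef,"
        "in:f32_nchw out:f32_nChw16c,num:1,1x64x112x112,0.628906\n"
        "mkldnn_verbose,exec,convolution,jit:avx512_common,forward_inference,"
        "fsrc:nChw16c fwei:OIhw16i16o fbia:undef fdst:nChw16c,"
        "alg:convolution_direct,"
        "mb1_ic64oc64_ih112oh56kh3sh2dh0ph1_iw112ow56kw3sw2dw0pw1,**22.302**"
    )
    print(parse_mkldnn_times(sample))  # [22.302]
```

Running the same parser over both logs above isolates the convolution times, which is where the two runs diverge; the reorder times are essentially identical.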
