ZhennanQin commented on a change in pull request #12530: Implement mkldnn convolution fusion and quantization.
URL: https://github.com/apache/incubator-mxnet/pull/12530#discussion_r223569976
 
 

 ##########
 File path: src/operator/quantization/mkldnn/mkldnn_quantize-inl.h
 ##########
 @@ -75,6 +75,11 @@ static void MKLDNNQuantizeComputeKer(const std::vector<NDArray>& inputs,
   auto i_mpd = i_mem->get_primitive_desc();
   auto i_desc = i_mpd.desc();
  mkldnn::memory::format i_fmt = static_cast<mkldnn::memory::format>(i_desc.data.format);
+  if (i_fmt == mkldnn::memory::format::nchw ||
+      i_fmt == mkldnn::memory::format::nChw8c ||
 +      i_fmt == mkldnn::memory::format::nChw16c) {
+    i_fmt = mkldnn::memory::format::nhwc;
+  }
 
 Review comment:
   @KellenSunderland Sorry for the late response. In the mkldnn quantization flow, we don't quantize any params offline; instead, quantized convolution takes fp32 params (e.g. weight and bias) and quantizes them online during the first forward pass. The code here sets the default output format of the 'quantize' op when it is used in the mkldnn quantization flow; it doesn't affect the non-mkldnn quantization flow.
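
   To make the online quantization step concrete, here is a minimal sketch of the idea. This is not the actual PR code: the helper name and the symmetric int8 scheme are assumptions for illustration only.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical sketch: map fp32 weights to int8 on the first forward pass,
// so no offline calibration over the params is needed. The quantized copy
// and its scale would then be cached and reused by later forwards.
std::vector<int8_t> QuantizeWeightOnline(const std::vector<float>& weight,
                                         float* out_scale) {
  // Derive a symmetric scale from the largest weight magnitude.
  float max_abs = 0.0f;
  for (float w : weight) max_abs = std::max(max_abs, std::fabs(w));
  const float scale = (max_abs > 0.0f) ? 127.0f / max_abs : 1.0f;

  std::vector<int8_t> quantized(weight.size());
  for (size_t i = 0; i < weight.size(); ++i) {
    quantized[i] = static_cast<int8_t>(std::lround(weight[i] * scale));
  }
  *out_scale = scale;
  return quantized;
}
```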
