bartekkuncer commented on code in PR #20987:
URL: https://github.com/apache/incubator-mxnet/pull/20987#discussion_r842754068


##########
src/operator/quantization/requantize.cc:
##########
@@ -29,6 +29,26 @@
 
 namespace mxnet {
 namespace op {
+
+#if MXNET_USE_ONEDNN == 1
+void RequantizeForwardExCPU(const nnvm::NodeAttrs& attrs,
+                            const OpContext& ctx,
+                            const std::vector<NDArray>& inputs,
+                            const std::vector<OpReqType>& req,
+                            const std::vector<NDArray>& outputs) {
+  const RequantizeParam& param = nnvm::get<RequantizeParam>(attrs.parsed);
+  auto out_type                = GetQuantizeOutputType(param);

Review Comment:
   Here you created a variable for out_type, but above (for quantization) you 
did not. Maybe unify this? Also, since the conditions are the same for both 
of these ops, it may be worth creating a SupportDNNLQuantize function for 
them, as there is a SupportDNNL* function for most other ops.
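   For illustration, such a helper might look like the sketch below. The name 
SupportDNNLQuantize and the exact conditions are assumptions for this example, 
not code from the PR; the point is only to hoist the duplicated dispatch 
checks into one place:

       // Hypothetical helper collecting the dispatch conditions that quantize
       // and requantize currently duplicate (conditions are illustrative).
       static inline bool SupportDNNLQuantize(const NDArray& input, const int out_type) {
         // oneDNN int8 kernels generally expect a dense 1-4D input tensor
         // and an int8/uint8 output type.
         return (out_type == mshadow::kInt8 || out_type == mshadow::kUint8) &&
                input.shape().ndim() >= 1 && input.shape().ndim() <= 4;
       }

   Both forward paths could then call SupportDNNLQuantize(inputs[0], out_type) 
instead of repeating the checks inline.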



##########
src/operator/quantization/dnnl/dnnl_quantize-inl.h:
##########
@@ -67,9 +67,6 @@ static void DNNLQuantizeComputeKer(const std::vector<NDArray>& inputs,
   attr.set_output_scales(mask, scales);
   dnnl::engine cpu_engine = mxnet::CpuEngine::Get()->get_engine();
   NDArray in_buffer       = inputs[0];
-  if (inputs[0].IsView() && inputs[0].IsDNNLData())

Review Comment:
   Why have you removed this?
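   For context, a guard like this in the DNNL operators usually normalizes a 
view over oneDNN-formatted memory before the kernel reads it, roughly along 
these lines (a sketch of the common pattern, not necessarily the exact line 
that was removed):

       // If the input is a view over data kept in a oneDNN-specific layout,
       // reorder it to the default layout so the kernel reads plain memory.
       if (inputs[0].IsView() && inputs[0].IsDNNLData())
         in_buffer = inputs[0].Reorder2Default();

   If the removal is intentional, it would help to note why the reorder is no 
longer needed here.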


