bgawrych commented on code in PR #21032:
URL: https://github.com/apache/incubator-mxnet/pull/21032#discussion_r901692474


##########
src/operator/nn/dnnl/dnnl_reduce.cc:
##########
@@ -111,10 +109,9 @@ bool SupportDNNLReduceImpl(const NumpyReduceAxesParam& param,
   }
   // initial value not supported by oneDNN
   param_supported = param_supported && !param.initial.has_value();
-  return param_supported &&
-         (input.dtype() == mshadow::kFloat32 || input.dtype() == mshadow::kBfloat16) &&
-         (output.dtype() == mshadow::kFloat32 || output.dtype() == mshadow::kBfloat16) &&
-         in_ndim >= 1 && out_size > 0 && in_size > 1;
+  // oneDNN does not support recution of tensors with size equal to 1

Review Comment:
   ```suggestion
     // oneDNN does not support reduction of tensors with size equal to 1
   ```



##########
src/operator/nn/dnnl/dnnl_softmax_output.cc:
##########
@@ -90,9 +90,10 @@ static DNNLSoftmaxOutputFwd& GetSoftmaxOutputForward(const SoftmaxOutputParam& p
   return it->second;
 }
 
-//  This is only used for forward. For backward ,need double check compatibility
-bool SupportDNNLSoftmaxOutput(const SoftmaxOutputParam& param) {
-  return param.multi_output ? false : true;
+//  This is only used for forward. For backward one needs to need double check compatibility.

Review Comment:
   There are two spaces at the beginning, and the second phrase reads oddly to me.
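   As a side note on the surrounding code: the ternary in the original line, `param.multi_output ? false : true`, is just a negation, so the idiomatic form is `return !param.multi_output;`. A minimal sketch (the struct here is a hypothetical stand-in for the real `SoftmaxOutputParam`, reduced to the one flag this check uses):

   ```cpp
   #include <cassert>

   // Hypothetical minimal stand-in for SoftmaxOutputParam (assumption: only
   // the multi_output flag matters for this support check).
   struct SoftmaxOutputParam {
     bool multi_output;
   };

   // Equivalent to `param.multi_output ? false : true`, written idiomatically:
   // the op is supported only when multi_output is off.
   bool SupportDNNLSoftmaxOutput(const SoftmaxOutputParam& param) {
     return !param.multi_output;
   }

   int main() {
     assert(SupportDNNLSoftmaxOutput(SoftmaxOutputParam{false}));
     assert(!SupportDNNLSoftmaxOutput(SoftmaxOutputParam{true}));
     return 0;
   }
   ```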



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
