bartekkuncer commented on a change in pull request #20904:
URL: https://github.com/apache/incubator-mxnet/pull/20904#discussion_r818636800
##########
File path: src/operator/tensor/elemwise_binary_scalar_op_extended.cc
##########
@@ -49,8 +49,47 @@ MXNET_OPERATOR_REGISTER_BINARY(_backward_minimum_scalar)
.set_attr_parser(ParamParser<NumpyBinaryScalarParam>)
.set_attr<FCompute>("FCompute<cpu>", BinaryScalarOp::Backward<cpu, mshadow_op::le>);
+#if MXNET_USE_ONEDNN == 1
+bool PowerStorageType(const nnvm::NodeAttrs& attrs,
+ const int dev_mask,
+ DispatchMode* dispatch_mode,
+ std::vector<int>* inputs,
+ std::vector<int>* outputs) {
+ CHECK_EQ(inputs->size(), 1);
+ CHECK_EQ(outputs->size(), 1);
+
+ return DNNLStorageType(attrs, dev_mask, true, dispatch_mode, inputs, outputs);
+}
+
+void DNNLPowerForward(const nnvm::NodeAttrs& attrs,
Review comment:
Why is this function declared here? I believe we usually put declarations of
such functions in the dnnl_ops-inl.h file and their definitions in a
dnnl_*op_name*.cc file. Does something about this operator's implementation
prevent that, or is there some other reason for doing it this way?
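
For context, the usual split would look roughly like the sketch below. This is
only an illustration: the parameter list after attrs is assumed (the diff
above is truncated at that point), and dnnl_power.cc is a hypothetical file
name following the dnnl_*op_name*.cc pattern.

In src/operator/nn/dnnl/dnnl_ops-inl.h:

#if MXNET_USE_ONEDNN == 1
// Declaration only; the parameters after attrs follow an assumed
// single-input/single-output DNNL forward signature.
void DNNLPowerForward(const nnvm::NodeAttrs& attrs,
                      const OpContext& ctx,
                      const NDArray& input,
                      const OpReqType& req,
                      const NDArray& output);
#endif  // MXNET_USE_ONEDNN == 1

In src/operator/nn/dnnl/dnnl_power.cc (hypothetical file):

#if MXNET_USE_ONEDNN == 1
void DNNLPowerForward(const nnvm::NodeAttrs& attrs,
                      const OpContext& ctx,
                      const NDArray& input,
                      const OpReqType& req,
                      const NDArray& output) {
  // Body elided; the oneDNN-based implementation would move here from
  // elemwise_binary_scalar_op_extended.cc.
}
#endif  // MXNET_USE_ONEDNN == 1

The registration code in elemwise_binary_scalar_op_extended.cc would then only
need to include dnnl_ops-inl.h and reference DNNLPowerForward, keeping the
oneDNN implementation details out of the operator registration file.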
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]