TaoLv commented on a change in pull request #13699: add mkldnn softmax_output
URL: https://github.com/apache/incubator-mxnet/pull/13699#discussion_r251202549
##########
File path: src/operator/softmax_output.cc
##########
@@ -121,23 +230,41 @@ MXNET_REGISTER_OP_PROPERTY(SoftmaxOutput,
SoftmaxOutputProp)
- ``'valid'``: divide the gradient by the number of instances which are
not ignored.
)code" ADD_FILELINE)
+.set_num_inputs(2)
+.set_num_outputs(1)
+.set_attr_parser(ParamParser<SoftmaxOutputParam>)
+#if MXNET_USE_MKLDNN == 1
+.set_attr<FInferStorageType>("FInferStorageType", SoftmaxOutputStorageType)
+.set_attr<bool>("TIsMKLDNN", true)
+.set_attr<FComputeEx>("FComputeEx<cpu>", SoftmaxOutputComputeExCPU)
+#endif
+.set_attr<nnvm::FListInputNames>("FListInputNames", [](const NodeAttrs& attrs)
{
+ return std::vector<std::string>{"data", "label"};
+})
+.set_attr<nnvm::FListOutputNames>("FListOutputNames", [](const NodeAttrs&
attrs) {
+ return std::vector<std::string>{"output"};
+})
+.set_attr<nnvm::FInferShape>("FInferShape", SoftmaxOutputShape)
+.set_attr<nnvm::FInferType>("FInferType", SoftmaxOutputType)
+.set_attr<FCompute>("FCompute<cpu>", SoftmaxOutputCompute<cpu>)
+.set_attr<nnvm::FGradient>("FGradient",
SoftmaxOutputGrad{"_backward_SoftmaxOutput"})
+.set_attr<nnvm::FInplaceOption>("FInplaceOption", [](const NodeAttrs& attrs){
+ return std::vector<std::pair<int, int> >{{0, 0}};
+})
.add_argument("data", "NDArray-or-Symbol", "Input array.")
.add_argument("label", "NDArray-or-Symbol", "Ground truth label.")
.add_arguments(SoftmaxOutputParam::__FIELDS__());
+// Softmax symbol is renamed to SoftmaxOutput and deprecated since Dec, 2015
+NNVM_REGISTER_OP(SoftmaxOutput).add_alias("Softmax");
-MXNET_REGISTER_OP_PROPERTY(Softmax, DeprecatedSoftmaxProp)
Review comment:
@szha could you help take a look at this change? `SoftmaxOutput` is
rewritten with the NNVM interface in this PR, and the deprecated `Softmax` is
now registered as an alias of `SoftmaxOutput`. Could you confirm that this
doesn't break any existing API?