szha commented on a change in pull request #20372:
URL: https://github.com/apache/incubator-mxnet/pull/20372#discussion_r660258923
##########
File path: src/operator/tensor/elemwise_unary_op_basic.cc
##########
@@ -161,10 +161,34 @@ The storage type of ``log_sigmoid`` output is always dense
)code" ADD_FILELINE)
.set_attr<FCompute>("FCompute<cpu>", UnaryOp::Compute<cpu, mshadow_op::log_sigmoid>)
-.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{"_backward_log_sigmoid"});
Review comment:
The previous version looks correct: `ElemwiseGradUseIn` passes the input of the elementwise function as the input to the gradient function. Could you elaborate on which cases this would fail in, and why you need to change it to `ElemwiseGradUseOut` and redefine the backward op?
##########
File path: src/operator/tensor/elemwise_unary_op_basic.cc
##########
@@ -161,10 +161,34 @@ The storage type of ``log_sigmoid`` output is always dense
)code" ADD_FILELINE)
.set_attr<FCompute>("FCompute<cpu>", UnaryOp::Compute<cpu, mshadow_op::log_sigmoid>)
-.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{"_backward_log_sigmoid"});
Review comment:
I'm not sure yet how a scalar array would trigger the problem.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]