[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-03 Thread GitBox
larroy commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r300197400
 
 

 ##
 File path: tests/python/unittest/test_higher_order_grad.py
 ##
 @@ -106,6 +106,35 @@ def grad_grad_op(x):
     check_second_order_unary(array, log10, grad_grad_op)
 
 
+@with_seed()
+def test_reciprocal():
+    def reciprocal(x):
+        return nd.reciprocal(x)
+
+    def grad_grad_op(x):
+        return 2/x**3
+
+    for dim in range(1, 5):
+        shape = rand_shape_nd(dim)
+        array = random_arrays(shape)
+        check_second_order_unary(array, reciprocal, grad_grad_op)
+
+
+@with_seed()
+def test_abs():
+    def abs(x):
+        return nd.abs(x)
+
+    def grad_grad_op(x):
+        return nd.zeros_like(x)
+
+    for dim in range(1, 5):
+        shape = rand_shape_nd(dim)
+        array = random_arrays(shape)
+        check_second_order_unary(array, abs, grad_grad_op)
+
 
 Review comment:
   Two blank lines between top-level functions, as per PEP 8:
   https://stackoverflow.com/questions/2953250/python-pep8-blank-lines-convention
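   For reference, the spacing that means (a minimal sketch with hypothetical names):
   
       def first():
           return 1
   
   
       def second():  # exactly two blank lines separate top-level definitions
           return 2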



[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-03 Thread GitBox
larroy commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r300197312
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -736,7 +767,26 @@ The storage type of ``abs`` output depends upon the input storage type:
 )code" ADD_FILELINE)
 .set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{"_backward_abs"});
 
-MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_abs, unary_bwd<mshadow_op::sign>);
+MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_abs, unary_bwd<mshadow_op::sign>)
+.set_attr<nnvm::FGradient>("FGradient",
+[](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+  // ograds[0]: dL/dxgrad
+  // inputs[0]: dL/dy
+  // inputs[1]: x
+  // f(x) -> abs(x)
+  // f'(x) = 1 if x > 0 else -1
+  // f''(x) = 0
+  auto dydx = MakeNode("elemwise_div", n->attrs.name + "_dydx",
+                       {nnvm::NodeEntry{n}, n->inputs[0]}, nullptr, &n);
+
+  std::vector<nnvm::NodeEntry> ret;
+  ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + "_backward_grad_grad",
 
 Review comment:
   same question as above.
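   
   For what it's worth, a quick numerical check of the f''(x) = 0 claim (a sketch,
   assuming a build with this FGradient in place; same pattern as
   check_second_order_unary in the test file):
   
       import mxnet as mx
       from mxnet import nd, autograd
   
       x = nd.array([-2.0, -0.5, 1.0, 3.0])
       x.attach_grad()
       with autograd.record():
           y = nd.abs(x)
           # first backward, kept differentiable so it can be backpropagated again
           x_grad = autograd.grad(y, x, create_graph=True, retain_graph=True)[0]
       x_grad.backward()
       print(x.grad)  # all zeros: f''(abs) == 0 wherever x != 0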




[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-03 Thread GitBox
larroy commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r300197249
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -736,7 +767,26 @@ The storage type of ``abs`` output depends upon the input storage type:
 )code" ADD_FILELINE)
 .set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{"_backward_abs"});
 
-MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_abs, unary_bwd<mshadow_op::sign>);
+MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_abs, unary_bwd<mshadow_op::sign>)
+.set_attr<nnvm::FGradient>("FGradient",
+[](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+  // ograds[0]: dL/dxgrad
+  // inputs[0]: dL/dy
+  // inputs[1]: x
+  // f(x) -> abs(x)
+  // f'(x) = 1 if x > 0 else -1
+  // f''(x) = 0
+  auto dydx = MakeNode("elemwise_div", n->attrs.name + "_dydx",
+                       {nnvm::NodeEntry{n}, n->inputs[0]}, nullptr, &n);
+
+  std::vector<nnvm::NodeEntry> ret;
+  ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + "_backward_grad_grad",
+                            {ograds[0], nnvm::NodeEntry(dydx)}, nullptr, &n));
+  ret.emplace_back(MakeNode("zeros_like", n->attrs.name + "_backward_grad_grad_in",
 
 Review comment:
   Ok.




[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-03 Thread GitBox
larroy commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r300196947
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -717,7 +717,38 @@ Example::
 
 MXNET_OPERATOR_REGISTER_BINARY(_backward_reciprocal)
 .set_attr("FCompute",
-  ElemwiseBinaryOp::Compute >);
+  ElemwiseBinaryOp::Compute >)
+.set_attr("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector& ograds) {
+// ograds[0]: dL/dxgrad
+// inputs[0]: dL/dy
+// inputs[1]: x
+// f(x) = y = 1/x
+// f'(x) = -1/x^2
+// f''(x) = 2/x^3 = -2 * (f'(x) * f(x))
+
+const std::unordered_map args = {{"scalar", 
"-2.0"}};
+
+auto dydx_mul_dldy = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto dydx = MakeNode("elemwise_div", n->attrs.name + "_dydx",
+ {dydx_mul_dldy, n->inputs[0]}, nullptr, );
+auto fx = MakeNode("reciprocal", n->attrs.name + "_fx",
+   {n->inputs[1]}, nullptr, );
+
+auto d2ydx2_mid = MakeNode("elemwise_mul", n->attrs.name + "_d2ydx2_mid",
+   {dydx_mul_dldy, nnvm::NodeEntry{fx}}, nullptr, 
);
+
+auto d2ydx2 = MakeNode("_mul_scalar", n->attrs.name + "_d2ydx2",
+   {nnvm::NodeEntry{d2ydx2_mid}}, , );
+
+std::vector ret;
+
+ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + 
"_backward_grad_grad",
 
 Review comment:
   Maybe a comment would help here: this one is the output corresponding to dL/dy from the first backward, right?
   
   I'm still unclear, since the previous PRs, on what dL/dxgrad * dy/dx represents. It is not obvious to me even after spending more than half an hour thinking about it.
   
   https://github.com/apache/incubator-mxnet/pull/15120
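   
   My current reading, for the record (happy to be corrected): the node being
   differentiated computes xgrad = dL/dy * f'(x), a function of both of its inputs,
   so its FGradient must return one entry per input: dL/dxgrad * dy/dx for the
   dL/dy input, and dL/dxgrad * dL/dy * f''(x) for the x input. A numerical sketch
   (assuming this PR's gradients; the head gradient defaults to ones):
   
       import mxnet as mx
       from mxnet import nd, autograd
   
       x = nd.array([0.5, 1.0, 2.0])
       x.attach_grad()
       with autograd.record():
           y = nd.reciprocal(x)
           x_grad = autograd.grad(y, x, create_graph=True, retain_graph=True)[0]
       x_grad.backward()
       print(x.grad)    # second-order gradient
       print(2 / x**3)  # f''(x), the grad_grad_op used in the test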




[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-03 Thread GitBox
larroy commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r300197052
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -717,7 +717,38 @@ Example::
 
 MXNET_OPERATOR_REGISTER_BINARY(_backward_reciprocal)
 .set_attr("FCompute",
-  ElemwiseBinaryOp::Compute >);
+  ElemwiseBinaryOp::Compute >)
+.set_attr("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector& ograds) {
+// ograds[0]: dL/dxgrad
+// inputs[0]: dL/dy
+// inputs[1]: x
+// f(x) = y = 1/x
+// f'(x) = -1/x^2
+// f''(x) = 2/x^3 = -2 * (f'(x) * f(x))
+
+const std::unordered_map args = {{"scalar", 
"-2.0"}};
+
+auto dydx_mul_dldy = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto dydx = MakeNode("elemwise_div", n->attrs.name + "_dydx",
+ {dydx_mul_dldy, n->inputs[0]}, nullptr, );
+auto fx = MakeNode("reciprocal", n->attrs.name + "_fx",
+   {n->inputs[1]}, nullptr, );
+
+auto d2ydx2_mid = MakeNode("elemwise_mul", n->attrs.name + "_d2ydx2_mid",
+   {dydx_mul_dldy, nnvm::NodeEntry{fx}}, nullptr, 
);
+
+auto d2ydx2 = MakeNode("_mul_scalar", n->attrs.name + "_d2ydx2",
+   {nnvm::NodeEntry{d2ydx2_mid}}, , );
+
+std::vector ret;
+
+ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + 
"_backward_grad_grad",
+ {ograds[0], nnvm::NodeEntry{dydx}}, nullptr, ));
+ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + 
"_backward_grad_grad_inp",
 
 Review comment:
   This seems ok.




[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-03 Thread GitBox
larroy commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r300196827
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -717,7 +717,38 @@ Example::
 
 MXNET_OPERATOR_REGISTER_BINARY(_backward_reciprocal)
 .set_attr("FCompute",
-  ElemwiseBinaryOp::Compute >);
+  ElemwiseBinaryOp::Compute >)
+.set_attr("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector& ograds) {
+// ograds[0]: dL/dxgrad
+// inputs[0]: dL/dy
+// inputs[1]: x
+// f(x) = y = 1/x
+// f'(x) = -1/x^2
+// f''(x) = 2/x^3 = -2 * (f'(x) * f(x))
+
+const std::unordered_map args = {{"scalar", 
"-2.0"}};
+
+auto dydx_mul_dldy = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto dydx = MakeNode("elemwise_div", n->attrs.name + "_dydx",
+ {dydx_mul_dldy, n->inputs[0]}, nullptr, );
+auto fx = MakeNode("reciprocal", n->attrs.name + "_fx",
 
 Review comment:
   Small thing: could we get fx from the first backward (node->inputs) if we used ElemwiseGradUseInOut? I guess we would avoid the additional division if so.
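   
   The identity in the comments is what would make that reuse work: f''(x) =
   -2 * f'(x) * f(x), so with y from the forward available nothing would need to
   be recomputed. A quick check of the algebra (plain NumPy, names illustrative):
   
       import numpy as np
   
       x = np.array([0.5, 1.0, 2.0])
       y = 1.0 / x          # f(x), what ElemwiseGradUseInOut would hand the backward
       dydx = -1.0 / x**2   # f'(x)
       assert np.allclose(2.0 / x**3, -2.0 * dydx * y)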

