[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-07 Thread GitBox
apeforest commented on a change in pull request #15413: [MXNET-978] Higher 
Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r300885216
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -717,7 +717,38 @@ Example::
 
 MXNET_OPERATOR_REGISTER_BINARY(_backward_reciprocal)
 .set_attr<FCompute>("FCompute",
-  ElemwiseBinaryOp::Compute<cpu, unary_bwd<mshadow_op::reciprocal_grad> >);
+  ElemwiseBinaryOp::Compute<cpu, unary_bwd<mshadow_op::reciprocal_grad> >)
+.set_attr<nnvm::FGradient>("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+    // ograds[0]: dL/dxgrad
+    // inputs[0]: dL/dy
+    // inputs[1]: x
+    // f(x) = y = 1/x
+    // f'(x) = -1/x^2
+    // f''(x) = 2/x^3 = -2 * (f'(x) * f(x))
+
+    const std::unordered_map<std::string, std::string> args = {{"scalar", "-2.0"}};
+
+    auto dydx_mul_dldy = nnvm::NodeEntry{n};  // f'(x) * head_grads
+    auto dydx = MakeNode("elemwise_div", n->attrs.name + "_dydx",
+                         {dydx_mul_dldy, n->inputs[0]}, nullptr, &n);
+    auto fx = MakeNode("reciprocal", n->attrs.name + "_fx",
+                       {n->inputs[1]}, nullptr, &n);
+
+    auto d2ydx2_mid = MakeNode("elemwise_mul", n->attrs.name + "_d2ydx2_mid",
+                               {dydx_mul_dldy, nnvm::NodeEntry{fx}}, nullptr, &n);
+
+    auto d2ydx2 = MakeNode("_mul_scalar", n->attrs.name + "_d2ydx2",
+                           {nnvm::NodeEntry{d2ydx2_mid}}, &args, &n);
+
+    std::vector<nnvm::NodeEntry> ret;
+
+    ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + "_backward_grad_grad",
 
 Review comment:
   This term will be useful when you calculate the third order (and above) 
gradient.
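
The identity relied on in the comment block above, f''(x) = 2/x^3 = -2 * f'(x) * f(x) for f(x) = 1/x, is easy to sanity-check numerically. A minimal pure-Python sketch (no MXNet required):

```python
# Sanity check of the identity used in the FGradient comment block:
# for f(x) = 1/x,
#   f'(x)  = -1/x^2
#   f''(x) = 2/x^3 = -2 * f'(x) * f(x)
def f(x):
    return 1.0 / x

def fprime(x):
    return -1.0 / x ** 2

def fsecond(x):
    return 2.0 / x ** 3

for x in (0.5, 1.0, 2.0, -3.0):
    assert abs(fsecond(x) - (-2.0 * fprime(x) * f(x))) < 1e-12
```

This is why the graph only needs a `reciprocal` node and a `_mul_scalar` by -2.0 to build f''(x) out of pieces it already has.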


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-01 Thread GitBox
apeforest commented on a change in pull request #15413: [MXNET-978] Higher 
Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r299302800
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -708,7 +739,26 @@ The storage type of ``abs`` output depends upon the input storage type:
 )code" ADD_FILELINE)
 .set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{"_backward_abs"});
 
-MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_abs, unary_bwd<mshadow_op::sign>);
+MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_abs, unary_bwd<mshadow_op::sign>)
+.set_attr<nnvm::FGradient>("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+  // ograds[0]: dL/dxgrad
+  // inputs[0]: dL/dy
+  // inputs[1]: y
 
 Review comment:
   Shouldn't this term be x? _backward_abs is using ElemwiseGradUseIn
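
The distinction the reviewer raises matters numerically. A pure-Python sketch (hypothetical stand-in names, not the MXNet API) of what `_backward_abs` computes, showing that feeding it y = |x| instead of x gives the wrong sign for negative inputs:

```python
# _backward_abs multiplies the head gradient by the sign of its second input.
# With ElemwiseGradUseIn that input is the original x, which is what makes the
# result correct for x < 0; sign(|x|) would always be +1.
def sign(v):
    return (v > 0) - (v < 0)

def backward_abs(dl_dy, inp):  # hypothetical stand-in for unary_bwd<sign>
    return dl_dy * sign(inp)

x, dl_dy, eps = -2.5, 1.0, 1e-6
fd = (abs(x + eps) - abs(x - eps)) / (2 * eps)  # finite-difference d|x|/dx
assert abs(backward_abs(dl_dy, x) - fd) < 1e-6  # correct when fed x
assert backward_abs(dl_dy, abs(x)) == 1.0       # wrong sign if fed y = |x|
```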




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-01 Thread GitBox
apeforest commented on a change in pull request #15413: [MXNET-978] Higher 
Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r299218314
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -689,7 +689,38 @@ Example::
 
 MXNET_OPERATOR_REGISTER_BINARY(_backward_reciprocal)
 .set_attr<FCompute>("FCompute",
-  ElemwiseBinaryOp::Compute<cpu, unary_bwd<mshadow_op::reciprocal_grad> >);
+  ElemwiseBinaryOp::Compute<cpu, unary_bwd<mshadow_op::reciprocal_grad> >)
+.set_attr<nnvm::FGradient>("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+    // ograds[0]: dL/dxgrad
+    // inputs[0]: dL/dy
+    // inputs[1]: x
+    // f(x) = y = 1/x
+    // f'(x) = -1/x^2
+    // f''(x) = 2/x^3 = -2 * (f'(x) * f(x))
+
+    const std::unordered_map<std::string, std::string> args = {{"scalar", "-2.0"}};
+
+    auto dydx_mul_dldy = nnvm::NodeEntry{n};  // f'(x) * head_grads
+    auto dydx = MakeNode("elemwise_div", n->attrs.name + "_dydx",
+                         {dydx_mul_dldy, n->inputs[0]}, nullptr, &n);
+    auto fx = MakeNode("reciprocal", n->attrs.name + "_fx",
+                       {n->inputs[1]}, nullptr, &n);
+
+    auto d2ydx2_mid = MakeNode("elemwise_mul", n->attrs.name + "_d2ydx2_mid",
+                               {dydx_mul_dldy, nnvm::NodeEntry{fx}}, nullptr, &n);
+
+    auto d2ydx2 = MakeNode("_mul_scalar", n->attrs.name + "_d2ydx2",
+                           {nnvm::NodeEntry{d2ydx2_mid}}, &args, &n);
+
+    std::vector<nnvm::NodeEntry> ret;
+
+    ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + "_backward_grad_grad",
+                              {ograds[0], nnvm::NodeEntry{dydx}}, nullptr, &n));
+    ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + "_backward_grad_grad_inp",
 
 Review comment:
   This term is actually `head_grad_grads` * `head_grads` * `f''(x)` = 
`head_grad_grads` * `head_grads` * -2 * `f'(x)` * `f(x)`, right? Then we don't 
need the explicit dydx node?
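
The algebra behind that claim can be checked with plain scalars (a sketch; `head_grads` and `head_grad_grads` play the roles of the dL/dy and dL/dxgrad tensors named in the code comments):

```python
# The second output node computes ograds[0] * (-2 * (f'(x) * head_grads) * f(x));
# check that this equals head_grad_grads * head_grads * f''(x) for f(x) = 1/x.
def second_output(x, head_grads, head_grad_grads):
    f = 1.0 / x
    fprime = -1.0 / x ** 2
    dydx_mul_dldy = fprime * head_grads   # what nnvm::NodeEntry{n} carries
    d2ydx2 = -2.0 * dydx_mul_dldy * f     # the d2ydx2 node in the graph
    return head_grad_grads * d2ydx2

for x in (0.5, 1.5, -2.0):
    lhs = second_output(x, 0.7, 1.3)
    rhs = 1.3 * 0.7 * (2.0 / x ** 3)      # head_grad_grads * head_grads * f''(x)
    assert abs(lhs - rhs) < 1e-12
```

So the head_grads factor is already folded into `dydx_mul_dldy`, which is the reviewer's point: the second output never needs the divided-out `dydx` node.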




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-01 Thread GitBox
apeforest commented on a change in pull request #15413: [MXNET-978] Higher 
Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r299302321
 
 

 ##
 File path: tests/python/unittest/test_higher_order_grad.py
 ##
 @@ -106,6 +106,35 @@ def grad_grad_op(x):
 check_second_order_unary(array, log10, grad_grad_op)
 
 
+@with_seed()
+def test_reciprocal():
+    def reciprocal(x):
+        return nd.reciprocal(x)
+
+    def grad_grad_op(x):
+        return 2/x**3
+
+    for dim in range(1, 5):
+        shape = rand_shape_nd(dim)
+        array = random_arrays(shape)
+        check_second_order_unary(array, reciprocal, grad_grad_op)
+
+
+@with_seed()
+def test_abs():
+    def abs(x):
+        return nd.abs(x)
+
+    def grad_grad_op(x):
+        return nd.zeros_like(x)
+
+    for dim in range(1, 5):
+        shape = rand_shape_nd(dim)
+        array = random_arrays(shape)
+        check_second_order_unary(array, abs, grad_grad_op)
+
 
 Review comment:
   nit: please remove extra line
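
For context, the identity these tests exercise can be sketched without MXNet: `check_second_order_unary` compares the analytic `grad_grad_op` against autograd, and a central finite difference plays the same role here (a sketch, not the test harness API):

```python
# Compare the analytic second derivative 2 / x**3 of 1/x against a central
# second-difference approximation; this is the identity test_reciprocal checks.
def reciprocal(x):
    return 1.0 / x

def grad_grad_op(x):
    return 2.0 / x ** 3

def second_diff(fn, x, eps=1e-4):
    # central second-difference approximation of fn''(x)
    return (fn(x + eps) - 2.0 * fn(x) + fn(x - eps)) / eps ** 2

for x in (0.7, 1.3, 2.9):
    assert abs(second_diff(reciprocal, x) - grad_grad_op(x)) < 1e-3
```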




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-01 Thread GitBox
apeforest commented on a change in pull request #15413: [MXNET-978] Higher 
Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r299297193
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -689,7 +689,38 @@ Example::
 
 MXNET_OPERATOR_REGISTER_BINARY(_backward_reciprocal)
 .set_attr<FCompute>("FCompute",
-  ElemwiseBinaryOp::Compute<cpu, unary_bwd<mshadow_op::reciprocal_grad> >);
+  ElemwiseBinaryOp::Compute<cpu, unary_bwd<mshadow_op::reciprocal_grad> >)
+.set_attr<nnvm::FGradient>("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+    // ograds[0]: dL/dxgrad
+    // inputs[0]: dL/dy
+    // inputs[1]: x
+    // f(x) = y = 1/x
+    // f'(x) = -1/x^2
+    // f''(x) = 2/x^3 = -2 * (f'(x) * f(x))
+
+    const std::unordered_map<std::string, std::string> args = {{"scalar", "-2.0"}};
+
+    auto dydx_mul_dldy = nnvm::NodeEntry{n};  // f'(x) * head_grads
+    auto dydx = MakeNode("elemwise_div", n->attrs.name + "_dydx",
 
 Review comment:
   Now I see that you need this node for the first output "_backward_grad_grad"




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-01 Thread GitBox
apeforest commented on a change in pull request #15413: [MXNET-978] Higher 
Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r299218852
 
 

 ##
 File path: tests/python/unittest/test_higher_order_grad.py
 ##
 @@ -106,6 +106,35 @@ def grad_grad_op(x):
 check_second_order_unary(array, log10, grad_grad_op)
 
 
+@with_seed()
+def test_reciprocal():
+    def reciprocal(x):
+        return nd.reciprocal(x)
+
+    def grad_grad_op(x):
+        return 2/x**3
 
 Review comment:
   space between `/`












[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15413: [MXNET-978] Higher Order Gradient Support `reciprocal`, `abs`.

2019-07-01 Thread GitBox
apeforest commented on a change in pull request #15413: [MXNET-978] Higher 
Order Gradient Support `reciprocal`, `abs`.
URL: https://github.com/apache/incubator-mxnet/pull/15413#discussion_r299217648
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -689,7 +689,38 @@ Example::
 
 MXNET_OPERATOR_REGISTER_BINARY(_backward_reciprocal)
 .set_attr<FCompute>("FCompute",
-  ElemwiseBinaryOp::Compute<cpu, unary_bwd<mshadow_op::reciprocal_grad> >);
+  ElemwiseBinaryOp::Compute<cpu, unary_bwd<mshadow_op::reciprocal_grad> >)
+.set_attr<nnvm::FGradient>("FGradient",
+  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+    // ograds[0]: dL/dxgrad
+    // inputs[0]: dL/dy
+    // inputs[1]: x
+    // f(x) = y = 1/x
+    // f'(x) = -1/x^2
+    // f''(x) = 2/x^3 = -2 * (f'(x) * f(x))
+
+    const std::unordered_map<std::string, std::string> args = {{"scalar", "-2.0"}};
+
+    auto dydx_mul_dldy = nnvm::NodeEntry{n};  // f'(x) * head_grads
+    auto dydx = MakeNode("elemwise_div", n->attrs.name + "_dydx",
 
 Review comment:
  Do we need to divide this explicitly here? I think the final 
_backward_grad_grad_input will also carry the term head_grads in the output, so 
we may not need this extra node?
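
The division the comment questions can be illustrated with scalars: the `dydx` node divides (f'(x) * head_grads) by head_grads just to recover a bare f'(x) (a sketch under that reading, valid where head_grads is nonzero):

```python
# dydx = (f'(x) * head_grads) / head_grads recovers f'(x) elementwise,
# assuming head_grads is nonzero.
def recover_dydx(x, head_grads):
    dydx_mul_dldy = (-1.0 / x ** 2) * head_grads  # what nnvm::NodeEntry{n} carries
    return dydx_mul_dldy / head_grads

for x in (0.5, 2.0, -1.5):
    assert abs(recover_dydx(x, 0.9) - (-1.0 / x ** 2)) < 1e-12
```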

