[incubator-mxnet] branch numpy updated: [WIP][numpy] Fix for D2L Chapters 2/3/4 (#15139)

2019-06-04 Thread reminisce
This is an automated email from the ASF dual-hosted git repository.

reminisce pushed a commit to branch numpy
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/numpy by this push:
 new 517451b  [WIP][numpy] Fix for D2L Chapters 2/3/4 (#15139)
517451b is described below

commit 517451bf68e9824f32640727021e8ae438d00b29
Author: reminisce 
AuthorDate: Tue Jun 4 22:55:10 2019 -0700

[WIP][numpy] Fix for D2L Chapters 2/3/4 (#15139)

* Fix

* Fix linear regression gluon

* More fix

* Fix pylint

* Fix for chapter 4

* Add np.add mul div mod pow sub and shuffle

* Fix model selection, underfitting, overfitting

* Fix weight decay

* Fix dropout

* Fix

* Fix chapter 4
---
 python/mxnet/gluon/data/dataloader.py  |  20 +-
 python/mxnet/gluon/data/vision/transforms.py   |   6 +-
 python/mxnet/gluon/loss.py |  26 +-
 python/mxnet/gluon/nn/activations.py   |   5 +-
 python/mxnet/gluon/nn/basic_layers.py  |  13 +-
 python/mxnet/gluon/utils.py|  50 ++--
 python/mxnet/ndarray/numpy/_op.py  | 199 ++-
 python/mxnet/ndarray/register.py   |   8 +-
 python/mxnet/numpy/multiarray.py   | 326 ++---
 python/mxnet/numpy_extension/__init__.py   |   5 +-
 python/mxnet/optimizer/optimizer.py|  10 +-
 python/mxnet/symbol/numpy/_symbol.py   | 194 ---
 python/mxnet/symbol/register.py|   8 +-
 python/mxnet/symbol/symbol.py  |   4 +
 python/mxnet/util.py   |  38 ++-
 src/operator/nn/activation.cc  |   1 +
 src/operator/nn/batch_norm.cc  |   1 +
 src/operator/nn/convolution.cc |   1 +
 src/operator/nn/fully_connected.cc |   1 +
 src/operator/nn/pooling.cc |   3 +-
 src/operator/random/shuffle_op.cc  |   1 +
 src/operator/tensor/elemwise_unary_op_basic.cc |   1 +
 src/operator/tensor/matrix_op.cc   |   1 +
 tests/python/unittest/test_numpy_gluon.py  |   6 +-
 24 files changed, 696 insertions(+), 232 deletions(-)

diff --git a/python/mxnet/gluon/data/dataloader.py 
b/python/mxnet/gluon/data/dataloader.py
index 934f2d5..a1d6513 100644
--- a/python/mxnet/gluon/data/dataloader.py
+++ b/python/mxnet/gluon/data/dataloader.py
@@ -18,6 +18,7 @@
 # coding: utf-8
 # pylint: disable=ungrouped-imports
 """Dataset generator."""
+from __future__ import absolute_import
 __all__ = ['DataLoader']
 
 import pickle
@@ -37,6 +38,8 @@ except ImportError:
 
 from . import sampler as _sampler
 from ... import nd, context
+from ...util import is_np_array
+from ... import numpy as _mx_np  #pylint: disable=reimported
 
 if sys.platform == 'darwin' or sys.platform == 'win32':
 def rebuild_ndarray(*args):
@@ -127,13 +130,14 @@ class SimpleQueue(multiprocessing.queues.SimpleQueue):
 def default_batchify_fn(data):
 """Collate data into batch."""
 if isinstance(data[0], nd.NDArray):
-return nd.stack(*data)
+return _mx_np.stack(data) if is_np_array() else nd.stack(*data)
 elif isinstance(data[0], tuple):
 data = zip(*data)
 return [default_batchify_fn(i) for i in data]
 else:
 data = np.asarray(data)
-return nd.array(data, dtype=data.dtype)
+array_fn = _mx_np.array if is_np_array() else nd.array
+return array_fn(data, dtype=data.dtype)
 
 
 def default_mp_batchify_fn(data):
@@ -141,20 +145,26 @@ def default_mp_batchify_fn(data):
 if isinstance(data[0], nd.NDArray):
 out = nd.empty((len(data),) + data[0].shape, dtype=data[0].dtype,
ctx=context.Context('cpu_shared', 0))
-return nd.stack(*data, out=out)
+if is_np_array():
+out = out.as_np_ndarray()
+return _mx_np.stack(data, out=out)
+else:
+return nd.stack(*data, out=out)
 elif isinstance(data[0], tuple):
 data = zip(*data)
 return [default_mp_batchify_fn(i) for i in data]
 else:
 data = np.asarray(data)
-return nd.array(data, dtype=data.dtype,
+array_fn = _mx_np.array if is_np_array() else nd.array
+return array_fn(data, dtype=data.dtype,
 ctx=context.Context('cpu_shared', 0))
 
 
 def _as_in_context(data, ctx):
 """Move data into new context."""
 if isinstance(data, nd.NDArray):
-return data.as_in_context(ctx)
+out = data.as_in_context(ctx)
+return out.as_np_ndarray() if is_np_array() else out
 elif isinstance(data, (list, tuple)):
 return [_as_in_context(d, ctx) for d in data]
 return data
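
For readers following the change: the pattern throughout this commit is to branch on `is_np_array()` and route through the new `mxnet.numpy` namespace when numpy semantics are active, falling back to the classic `nd` path otherwise. A minimal standalone sketch of that dispatch (my own simplification of the hunks above, assuming the numpy-branch layout where `mxnet.numpy` and `mxnet.util.is_np_array` exist, as the diff imports them):

```
import numpy as np
from mxnet import nd
from mxnet import numpy as mx_np  # numpy-compatible array namespace
from mxnet.util import is_np_array

def batchify(samples):
    """Stack a list of samples into one batch, honoring numpy-array mode."""
    if isinstance(samples[0], nd.NDArray):
        # mx_np.stack takes a sequence; nd.stack takes positional *args.
        return mx_np.stack(samples) if is_np_array() else nd.stack(*samples)
    data = np.asarray(samples)
    array_fn = mx_np.array if is_np_array() else nd.array
    return array_fn(data, dtype=data.dtype)
```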
diff --git a/python/mxnet/gluon/data/vision/transforms.py 
b/python/mxnet/gluon/data/vision/transforms.py
index dff7f66..d888398 100644
--- 

[GitHub] [incubator-mxnet] reminisce merged pull request #15139: [WIP][numpy] Fix for D2L Chapters 2/3/4

2019-06-04 Thread GitBox
reminisce merged pull request #15139: [WIP][numpy] Fix for D2L Chapters 2/3/4
URL: https://github.com/apache/incubator-mxnet/pull/15139
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] kshitij12345 commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
kshitij12345 commented on a change in pull request #15120: [bug] fix higher 
grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r290587434
 
 

 ##
 File path: tests/python/unittest/test_higher_order_grad.py
 ##
 @@ -27,52 +27,79 @@ def test_log():
 def log(x):
 return nd.log(x)
 
+def grad_op(x):
+return 1/x
+
 def grad_grad_op(x):
 return -1/(x**2)
 
 arrays = random_arrays((2, 2), (2, 3), (4, 5, 2), (3, 1, 4, 5))
 
 for array in arrays:
-check_second_order_unary(array, log, grad_grad_op)
+check_second_order_unary(array, log, grad_op, grad_grad_op)
 
 
 @with_seed()
 def test_log2():
 def log2(x):
 return nd.log2(x)
 
+def grad_op(x):
+return 1/(x * math.log(2))
+
 def grad_grad_op(x):
 return -1/((x**2) * math.log(2))
 
 arrays = random_arrays((2, 2), (2, 3), (4, 5, 2), (3, 1, 4, 5))
 
 for array in arrays:
-check_second_order_unary(array, log2, grad_grad_op)
+check_second_order_unary(array, log2, grad_op, grad_grad_op)
 
 
 @with_seed()
 def test_log10():
 def log10(x):
 return nd.log10(x)
 
+def grad_op(x):
+return 1/(x * math.log(10))
+
 def grad_grad_op(x):
 return -1/((x**2) * math.log(10))
 
 arrays = random_arrays((2, 2), (2, 3), (4, 5, 2), (3, 1, 4, 5))
 
 for array in arrays:
-check_second_order_unary(array, log10, grad_grad_op)
+check_second_order_unary(array, log10, grad_op, grad_grad_op)
 
 
-def check_second_order_unary(x, op, grad_grad_op):
+def check_second_order_unary(x, op, grad_op, grad_grad_op):
 x = nd.array(x)
-expect_grad_grad = grad_grad_op(x)
+grad_x = grad_op(x)
+grad_grad_x = grad_grad_op(x)
 x.attach_grad()
+
+# Manual head_grads.
+head_grads = nd.random.normal(shape=x.shape)
+head_grad_grads = nd.random.normal(shape=x.shape)
+head_grads.attach_grad()
+
+# Perform compute.
 with autograd.record():
 y = op(x)
-y_grad = autograd.grad(y, x, create_graph=True, retain_graph=True)[0]
-y_grad.backward()
-assert_almost_equal(expect_grad_grad.asnumpy(), x.grad.asnumpy())
+y_grad = autograd.grad(y, x, head_grads=head_grads,
+   create_graph=True, retain_graph=True)[0]
+
+y_grad.backward(head_grad_grads)
+
+# Compute expected values.
+expected_grad_grad = grad_grad_x.asnumpy() * head_grad_grads.asnumpy() * \
+head_grads.asnumpy()
+expected_heads_grad = grad_x.asnumpy()
+
+# Validate the gradients.
+assert_almost_equal(expected_grad_grad, x.grad.asnumpy())
+assert_almost_equal(expected_heads_grad, head_grads.grad.asnumpy())
 
 Review comment:
   Yeah, to verify the fix.
   I expected `y_grad.backward(head_grad_grads)` to update `head_grads.grad`, similar to the PyTorch script in the description.
   
   Thanks for the suggestion,
   I will surely try that.
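   
   For reference, the expected values in the updated test follow from the chain rule with explicit head gradients (a sketch in the test's own names, not from this discussion). With y = f(x) and y_grad = head_grads * f'(x), backpropagating y_grad with head gradient head_grad_grads gives:
   
   ```
   x.\mathrm{grad} = f''(x)\cdot \mathrm{head\_grads}\cdot \mathrm{head\_grad\_grads} \quad (= \texttt{expected\_grad\_grad})
   \qquad
   \frac{\partial\, y\_\mathrm{grad}}{\partial\, \mathrm{head\_grads}} = f'(x) \quad (= \texttt{expected\_heads\_grad})
   ```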




[GitHub] [incubator-mxnet] kshitij12345 commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
kshitij12345 commented on a change in pull request #15120: [bug] fix higher 
grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r290587014
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -1074,16 +1074,19 @@ 
MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log,
  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
 // For f(x) -> f = log
 // f''(x) = -1 * (f'(x) * f'(x))
-auto gx = nnvm::NodeEntry{n};
+auto gx_mul_head_grads = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto head_grads = nnvm::NodeEntry{n->inputs[0]};
+auto g_lx = MakeNode("reciprocal", n->attrs.name + "_backward_log_grad",
 
 Review comment:
   Sure thing.




[GitHub] [incubator-mxnet] kshitij12345 commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
kshitij12345 commented on a change in pull request #15120: [bug] fix higher 
grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r290586766
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -1074,16 +1074,19 @@ 
MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log,
  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
 // For f(x) -> f = log
 // f''(x) = -1 * (f'(x) * f'(x))
-auto gx = nnvm::NodeEntry{n};
+auto gx_mul_head_grads = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto head_grads = nnvm::NodeEntry{n->inputs[0]};
+auto g_lx = MakeNode("reciprocal", n->attrs.name + "_backward_log_grad",
+                     {n->inputs[1]}, nullptr, &n);
 auto ggx_mid = MakeNode("elemwise_mul", n->attrs.name + "_backward_mid_grad_grad",
-                        {gx, gx}, nullptr, &n);
+                        {gx_mul_head_grads, nnvm::NodeEntry{g_lx}}, nullptr, &n);
 auto ggx = MakeNode("negative", n->attrs.name + "_backward_grad_grad",
                     {nnvm::NodeEntry{ggx_mid}}, nullptr, &n);
 
 std::vector<nnvm::NodeEntry> ret;
 
 ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + "_backward_grad_grad",
-                          {ograds[0], gx}, nullptr, &n));
+                          {ograds[0], nnvm::NodeEntry{g_lx}}, nullptr, &n));
 
 Review comment:
   Sorry for the confusion. I forgot to add the line from the test file.
   Sure, waiting to hear what you find.




[GitHub] [incubator-mxnet] kshitij12345 commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
kshitij12345 commented on a change in pull request #15120: [bug] fix higher 
grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r289614132
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -1074,16 +1074,19 @@ 
MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log,
  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
 // For f(x) -> f = log
 // f''(x) = -1 * (f'(x) * f'(x))
-auto gx = nnvm::NodeEntry{n};
+auto gx_mul_head_grads = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto head_grads = nnvm::NodeEntry{n->inputs[0]};
+auto g_lx = MakeNode("reciprocal", n->attrs.name + "_backward_log_grad",
+                     {n->inputs[1]}, nullptr, &n);
 auto ggx_mid = MakeNode("elemwise_mul", n->attrs.name + "_backward_mid_grad_grad",
-                        {gx, gx}, nullptr, &n);
+                        {gx_mul_head_grads, nnvm::NodeEntry{g_lx}}, nullptr, &n);
 auto ggx = MakeNode("negative", n->attrs.name + "_backward_grad_grad",
                     {nnvm::NodeEntry{ggx_mid}}, nullptr, &n);
 
 std::vector<nnvm::NodeEntry> ret;
 
 ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + "_backward_grad_grad",
-                          {ograds[0], gx}, nullptr, &n));
+                          {ograds[0], nnvm::NodeEntry{g_lx}}, nullptr, &n));
 
 Review comment:
   
https://github.com/apache/incubator-mxnet/blob/37ce3b87268a8154f5c0ad97ce2522478038ee06/tests/python/unittest/test_higher_order_grad.py#L102
   
   I am having trouble with `head_grads.grad`, which is being returned as zeros (I guess it is somehow not being updated), while I expect it to be the output of this line.
   Please help.
   
   
   




[GitHub] [incubator-mxnet] xianyujie commented on issue #15108: The test time of the model on GPU is normal, but the test time on CPU is very long.

2019-06-04 Thread GitBox
xianyujie commented on issue #15108: The test time of the model on GPU is 
normal, but the test time on CPU is very long.
URL: 
https://github.com/apache/incubator-mxnet/issues/15108#issuecomment-498944552
 
 
   @pengzhao-intel I think you misunderstood the result. Take a look at the following results: different inputs have a great influence on the run time of the convolution layer. What could be the reason for this?
   
   **Using the same image as input, I take the outputs (pre_output1, pre_output2) of the stage1_unit1_relu1 layer from the two models, then time the Conv layer with pre_output1 as the input of my model and pre_output2 as the input of the original model.**
   (my_model: 0.075896, original model: 0.006333)
   **With pre_output1 as the input of both models, the Conv layer times are:**
   (my_model: 0.072311, original model: 0.072548)
   **With pre_output2 as the input of both models, the Conv layer times are:**
   (my_model: 0.0055, original model: 0.005653)
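   
   Because MXNet executes operators asynchronously, per-layer timings like those above are only meaningful if the engine is drained around the measured call. A minimal sketch of a fair measurement (the `layer` and `x` names are illustrative, not from this issue):
   ```
   import time
   import mxnet as mx
   
   layer = mx.gluon.nn.Conv2D(channels=64, kernel_size=3, padding=1)
   layer.initialize()
   x = mx.nd.random.uniform(shape=(1, 64, 56, 56))
   
   layer(x)            # warm-up run, triggers lazy initialization
   mx.nd.waitall()     # drain the async engine before starting the clock
   start = time.time()
   y = layer(x)
   mx.nd.waitall()     # wait for the actual computation before stopping
   print("elapsed: %f s" % (time.time() - start))
   ```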




[GitHub] [incubator-mxnet] lanking520 commented on a change in pull request #15155: Fix Scala release

2019-06-04 Thread GitBox
lanking520 commented on a change in pull request #15155: Fix Scala release
URL: https://github.com/apache/incubator-mxnet/pull/15155#discussion_r290582549
 
 

 ##
 File path: scala-package/externalPom/pom.xml
 ##
 @@ -0,0 +1,153 @@
+
+
+
 
 Review comment:
   Duplicated




[GitHub] [incubator-mxnet] roywei edited a comment on issue #15152: [CI][nightly] nightly test tutorial failure: test_tutorials.test_python_kvstore

2019-06-04 Thread GitBox
roywei edited a comment on issue #15152: [CI][nightly] nightly test tutorial 
failure: test_tutorials.test_python_kvstore
URL: 
https://github.com/apache/incubator-mxnet/issues/15152#issuecomment-498939914
 
 
   Current conclusion is that this only happens on CI machines with `NODE_LINUX_GPU_P3`:
   
https://github.com/apache/incubator-mxnet/blob/master/tests/nightly/JenkinsfileForBinaries#L131




[GitHub] [incubator-mxnet] roywei edited a comment on issue #15152: [CI][nightly] nightly test tutorial failure: test_tutorials.test_python_kvstore

2019-06-04 Thread GitBox
roywei edited a comment on issue #15152: [CI][nightly] nightly test tutorial 
failure: test_tutorials.test_python_kvstore
URL: 
https://github.com/apache/incubator-mxnet/issues/15152#issuecomment-498939531
 
 
   However, I was not able to reproduce this on an EC2 p3.8xlarge instance.
   All tutorial tests pass:
   ```
   ci/build.py --docker-registry mxnetci --nvidiadocker --platform ubuntu_nightly_gpu --docker-build-retries 3 --shm-size 1500m /work/runtime_functions.sh nightly_tutorial_test_ubuntu_python2_gpu
   ```
   Both test_amp and test_kvstore pass:
   ```
   [success] 4.03% test_tutorials.test_amp: 71.8112s
   [success] 3.73% test_tutorials.test_gluon_end_to_end: 66.5005s
   [success] 3.41% test_tutorials.test_gluon_learning_rate_finder: 60.8975s
   [success] 2.23% test_tutorials.test_vision_cnn_visualization: 39.6999s
   [success] 1.73% test_tutorials.test_basic_data: 30.7975s
   [success] 1.44% test_tutorials.test_python_mnist: 25.7051s
   [success] 1.33% test_tutorials.test_gluon_save_load_params: 23.7645s
   [success] 1.15% test_tutorials.test_basic_module: 20.5767s
   [success] 1.01% test_tutorials.test_python_kvstore: 18.0461s
   
   Ran 48 tests in 1783.625s
   
   OK
   build.py: 2019-06-04 23:58:56,307Z INFO Waiting for status of container 
8fdb8033689e for 600 s.
   build.py: 2019-06-04 23:58:56,484Z INFO Container exit status: 
{'StatusCode': 0, 'Error': None}
   build.py: 2019-06-04 23:58:56,484Z INFO Container exited with success 
   build.py: 2019-06-04 23:58:56,484Z INFO Stopping container: 8fdb8033689e
   build.py: 2019-06-04 23:58:56,486Z INFO Removing container: 8fdb8033689e
   ```
   
   Also manually running all the commands in 
https://github.com/apache/incubator-mxnet/blob/master/docs/tutorials/python/kvstore.md
 passes




[GitHub] [incubator-mxnet] roywei commented on issue #15152: [CI][nightly] nightly test tutorial failure: test_tutorials.test_python_kvstore

2019-06-04 Thread GitBox
roywei commented on issue #15152: [CI][nightly] nightly test tutorial failure: 
test_tutorials.test_python_kvstore
URL: 
https://github.com/apache/incubator-mxnet/issues/15152#issuecomment-498939914
 
 
   Current conclusion is that this only happens on CI machines with `NODE_LINUX_GPU_P3`.




[GitHub] [incubator-mxnet] roywei edited a comment on issue #15152: [CI][nightly] nightly test tutorial failure: test_tutorials.test_python_kvstore

2019-06-04 Thread GitBox
roywei edited a comment on issue #15152: [CI][nightly] nightly test tutorial 
failure: test_tutorials.test_python_kvstore
URL: 
https://github.com/apache/incubator-mxnet/issues/15152#issuecomment-498939531
 
 
   However, I was not able to reproduce this on an EC2 p3.8xlarge instance.
   All tutorial tests pass:
   ```
   ci/build.py --docker-registry mxnetci --nvidiadocker --platform ubuntu_nightly_gpu --docker-build-retries 3 --shm-size 1500m /work/runtime_functions.sh nightly_tutorial_test_ubuntu_python2_gpu
   ```
   Both test_amp and test_kvstore pass:
   ```
   **[success] 4.03% test_tutorials.test_amp: 71.8112s**
   [success] 3.73% test_tutorials.test_gluon_end_to_end: 66.5005s
   [success] 3.41% test_tutorials.test_gluon_learning_rate_finder: 60.8975s
   [success] 2.23% test_tutorials.test_vision_cnn_visualization: 39.6999s
   [success] 1.73% test_tutorials.test_basic_data: 30.7975s
   [success] 1.44% test_tutorials.test_python_mnist: 25.7051s
   [success] 1.33% test_tutorials.test_gluon_save_load_params: 23.7645s
   [success] 1.15% test_tutorials.test_basic_module: 20.5767s
   **[success] 1.01% test_tutorials.test_python_kvstore: 18.0461s**
   
   Ran 48 tests in 1783.625s
   
   OK
   build.py: 2019-06-04 23:58:56,307Z INFO Waiting for status of container 
8fdb8033689e for 600 s.
   build.py: 2019-06-04 23:58:56,484Z INFO Container exit status: 
{'StatusCode': 0, 'Error': None}
   build.py: 2019-06-04 23:58:56,484Z INFO Container exited with success 
   build.py: 2019-06-04 23:58:56,484Z INFO Stopping container: 8fdb8033689e
   build.py: 2019-06-04 23:58:56,486Z INFO Removing container: 8fdb8033689e
   ```




[GitHub] [incubator-mxnet] roywei edited a comment on issue #15152: [CI][nightly] nightly test tutorial failure: test_tutorials.test_python_kvstore

2019-06-04 Thread GitBox
roywei edited a comment on issue #15152: [CI][nightly] nightly test tutorial 
failure: test_tutorials.test_python_kvstore
URL: 
https://github.com/apache/incubator-mxnet/issues/15152#issuecomment-498939531
 
 
   However, I was not able to reproduce this on an EC2 p3.8xlarge instance.
   All tutorial tests pass:
   ```
   ci/build.py --docker-registry mxnetci --nvidiadocker --platform ubuntu_nightly_gpu --docker-build-retries 3 --shm-size 1500m /work/runtime_functions.sh nightly_tutorial_test_ubuntu_python2_gpu
   ```
   Both test_amp and test_kvstore pass:
   ```
   [success] 4.03% test_tutorials.test_amp: 71.8112s
   [success] 3.73% test_tutorials.test_gluon_end_to_end: 66.5005s
   [success] 3.41% test_tutorials.test_gluon_learning_rate_finder: 60.8975s
   [success] 2.23% test_tutorials.test_vision_cnn_visualization: 39.6999s
   [success] 1.73% test_tutorials.test_basic_data: 30.7975s
   [success] 1.44% test_tutorials.test_python_mnist: 25.7051s
   [success] 1.33% test_tutorials.test_gluon_save_load_params: 23.7645s
   [success] 1.15% test_tutorials.test_basic_module: 20.5767s
   [success] 1.01% test_tutorials.test_python_kvstore: 18.0461s
   
   Ran 48 tests in 1783.625s
   
   OK
   build.py: 2019-06-04 23:58:56,307Z INFO Waiting for status of container 
8fdb8033689e for 600 s.
   build.py: 2019-06-04 23:58:56,484Z INFO Container exit status: 
{'StatusCode': 0, 'Error': None}
   build.py: 2019-06-04 23:58:56,484Z INFO Container exited with success 
   build.py: 2019-06-04 23:58:56,484Z INFO Stopping container: 8fdb8033689e
   build.py: 2019-06-04 23:58:56,486Z INFO Removing container: 8fdb8033689e
   ```




[GitHub] [incubator-mxnet] roywei commented on issue #15152: [CI][nightly] nightly test tutorial failure: test_tutorials.test_python_kvstore

2019-06-04 Thread GitBox
roywei commented on issue #15152: [CI][nightly] nightly test tutorial failure: 
test_tutorials.test_python_kvstore
URL: 
https://github.com/apache/incubator-mxnet/issues/15152#issuecomment-498939531
 
 
   However, I was not able to reproduce this on an EC2 p3.8xlarge instance.
   All tutorial tests pass:
   ```
   ci/build.py --docker-registry mxnetci --nvidiadocker --platform ubuntu_nightly_gpu --docker-build-retries 3 --shm-size 1500m /work/runtime_functions.sh nightly_tutorial_test_ubuntu_python2_gpu
   ```
   ```
   Ran 48 tests in 1783.625s
   
   OK
   build.py: 2019-06-04 23:58:56,307Z INFO Waiting for status of container 
8fdb8033689e for 600 s.
   build.py: 2019-06-04 23:58:56,484Z INFO Container exit status: 
{'StatusCode': 0, 'Error': None}
   build.py: 2019-06-04 23:58:56,484Z INFO Container exited with success 
   build.py: 2019-06-04 23:58:56,484Z INFO Stopping container: 8fdb8033689e
   build.py: 2019-06-04 23:58:56,486Z INFO Removing container: 8fdb8033689e
   ```




[GitHub] [incubator-mxnet] roywei edited a comment on issue #15152: [CI][nightly] nightly test tutorial failure: test_tutorials.test_python_kvstore

2019-06-04 Thread GitBox
roywei edited a comment on issue #15152: [CI][nightly] nightly test tutorial 
failure: test_tutorials.test_python_kvstore
URL: 
https://github.com/apache/incubator-mxnet/issues/15152#issuecomment-498938781
 
 
   This tutorial test was passing when running on a 1-GPU machine.
   
https://github.com/apache/incubator-mxnet/blob/master/docs/tutorials/python/kvstore.md
   ```
   # The numbers used below assume 4 GPUs
   gpus = mx.context.num_gpus()
   if gpus > 0:
   contexts = [mx.gpu(i) for i in range(gpus)]
   else:
   contexts = [mx.cpu(i) for i in range(4)]
   ```
   However, when I changed to P3 instances with 4 GPUs in https://github.com/apache/incubator-mxnet/pull/15141, it fails.
   ```
   
   
   MXNetError: [01:12:52] src/imperative/./imperative_utils.h:71: Check failed: inputs[i]->ctx().dev_mask() == ctx.dev_mask() (1 vs. 2) : Operator broadcast_add require all inputs live on the same context. But the first argument is on gpu(0) while the 2-th argument is on cpu(0)
   
   Stack trace:
   
     [bt] (0) /work/mxnet/python/mxnet/../../lib/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x3c) [0x7f08e7052c1c]
     [bt] (1) /work/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::imperative::GetContext(nnvm::NodeAttrs const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, mxnet::Context const&)+0x823) [0x7f08e9fbf343]
     [bt] (2) /work/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::Imperative::Invoke(mxnet::Context const&, nnvm::NodeAttrs const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&)+0xdb) [0x7f08e9fcd47b]
     [bt] (3) /work/mxnet/python/mxnet/../../lib/libmxnet.so(MXImperativeInvokeImpl(void*, int, void**, int*, void***, int, char const**, char const**)+0x1c9) [0x7f08eaab99d9]
     [bt] (4) /work/mxnet/python/mxnet/../../lib/libmxnet.so(MXImperativeInvokeEx+0x8f) [0x7f08eaab9edf]
     [bt] (5) /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(ffi_call_unix64+0x4c) [0x7f093764ae20]
     [bt] (6) /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(ffi_call+0x2eb) [0x7f093764a88b]
     [bt] (7) /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(_ctypes_callproc+0x49a) [0x7f093764501a]
     [bt] (8) /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(+0x9fcb) [0x7f0937638fcb]
   
   ```
   
   The error comes from this part of the code, during the broadcast_add: `stored` is `b` and lives on GPU, while `input` is `mx.nd.ones(shape)` on CPU. But it should not give an error.
   ```
   def update(key, input, stored):
       print("update on key: %d" % key)
       stored += input * 2

   kv._set_updater(update)
   kv.pull(3, out=a)
   print(a.asnumpy())
   
   kv.push(3, mx.nd.ones(shape))
   #
   kv.pull(3, out=a)
   print(a.asnumpy())
   ```
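   
   A hedged workaround sketch (my own, not from the tutorial or the issue): move the pushed value onto the stored array's context inside the updater, so broadcast_add sees both operands on the same device.
   ```
   def update(key, input, stored):
       # `stored` may live on GPU while `input` arrives on CPU; align the
       # contexts before accumulating to avoid the cross-context error.
       stored += input.as_in_context(stored.context) * 2
   ```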
   
   




[GitHub] [incubator-mxnet] roywei commented on issue #15152: [CI][nightly] nightly test tutorial failure: test_tutorials.test_python_kvstore

2019-06-04 Thread GitBox
roywei commented on issue #15152: [CI][nightly] nightly test tutorial failure: 
test_tutorials.test_python_kvstore
URL: 
https://github.com/apache/incubator-mxnet/issues/15152#issuecomment-498938781
 
 
   This tutorial test was passing when running on a 1-GPU machine.
   
https://github.com/apache/incubator-mxnet/blob/master/docs/tutorials/python/kvstore.md
   ```
   # The numbers used below assume 4 GPUs
   gpus = mx.context.num_gpus()
   if gpus > 0:
   contexts = [mx.gpu(i) for i in range(gpus)]
   else:
   contexts = [mx.cpu(i) for i in range(4)]
   ```
   However, when I changed to P3 instances with 4 GPUs in https://github.com/apache/incubator-mxnet/pull/15141, it fails.
   ```
   
   
   MXNetError: [01:12:52] src/imperative/./imperative_utils.h:71: Check failed: inputs[i]->ctx().dev_mask() == ctx.dev_mask() (1 vs. 2) : Operator broadcast_add require all inputs live on the same context. But the first argument is on gpu(0) while the 2-th argument is on cpu(0)
   
   Stack trace:
   
     [bt] (0) /work/mxnet/python/mxnet/../../lib/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x3c) [0x7f08e7052c1c]
     [bt] (1) /work/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::imperative::GetContext(nnvm::NodeAttrs const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, mxnet::Context const&)+0x823) [0x7f08e9fbf343]
     [bt] (2) /work/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::Imperative::Invoke(mxnet::Context const&, nnvm::NodeAttrs const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&)+0xdb) [0x7f08e9fcd47b]
     [bt] (3) /work/mxnet/python/mxnet/../../lib/libmxnet.so(MXImperativeInvokeImpl(void*, int, void**, int*, void***, int, char const**, char const**)+0x1c9) [0x7f08eaab99d9]
     [bt] (4) /work/mxnet/python/mxnet/../../lib/libmxnet.so(MXImperativeInvokeEx+0x8f) [0x7f08eaab9edf]
     [bt] (5) /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(ffi_call_unix64+0x4c) [0x7f093764ae20]
     [bt] (6) /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(ffi_call+0x2eb) [0x7f093764a88b]
     [bt] (7) /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(_ctypes_callproc+0x49a) [0x7f093764501a]
     [bt] (8) /usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so(+0x9fcb) [0x7f0937638fcb]
   
   ```
   
   The error comes from this part of the code, during the broadcast_add:
   ```
   def update(key, input, stored):
       print("update on key: %d" % key)
       stored += input * 2

   kv._set_updater(update)
   kv.pull(3, out=a)
   print(a.asnumpy())
   ```
   
   




[GitHub] [incubator-mxnet] insikk commented on issue #2870: Does the mxnet support the binary operations?

2019-06-04 Thread GitBox
insikk commented on issue #2870: Does the mxnet support the binary operations?
URL: 
https://github.com/apache/incubator-mxnet/issues/2870#issuecomment-498929418
 
 
   I wish there were an xor operator. +1 for this feature request.
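   
   A hedged workaround sketch in the meantime (my own, assuming 0/1-valued arrays): element-wise xor can be composed from the existing comparison operators.
   ```
   import mxnet as mx
   
   a = mx.nd.array([0, 1, 0, 1])
   b = mx.nd.array([0, 0, 1, 1])
   
   # For 0/1-valued arrays, xor is "not equal": 1 where exactly one input is 1.
   xor = (a != b)
   print(xor.asnumpy())  # [0. 1. 1. 0.]
   ```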




[GitHub] [incubator-mxnet] sandeep-krishnamurthy commented on a change in pull request #14977: Add an utility for operator benchmarks

2019-06-04 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #14977: Add an 
utility for operator benchmarks
URL: https://github.com/apache/incubator-mxnet/pull/14977#discussion_r290570895
 
 

 ##
 File path: benchmark/opperf/README.md
 ##
 @@ -0,0 +1,182 @@
+
+# MXNet Operator Performance Benchmarks
+
+A Python utility for benchmarking and profiling individual MXNet operator 
execution.
+
+With this utility, for each MXNet operator you can get the following details:
+
+**Timing**
+1. Forward execution time
+2. Backward execution time
+3. Time spent for memory management
+
+**Memory**
+1. Total memory allocated
+
+# Motivation
+
+Benchmarks are usually done end-to-end for a given network architecture. For example: ResNet-50 benchmarks on ImageNet data. This is a good measurement of the overall performance and health of a deep learning framework. However, it is important to note the following factors:
+1. Users use many operators that are not part of a standard network like ResNet, e.g. tensor manipulation operators like mean, max, topk, argmax, sort, etc.
+2. A standard network architecture like ResNet-50 is made up of many operators, e.g. Convolution2D, Softmax, Dense, and more. Consider the following scenarios:
+    1. We improved the performance of the Convolution2D operator, but due to a bug, Softmax performance went down. End-to-end benchmarks may still look fine, so we may miss the performance degradation of a single operator, which can accumulate and become untraceable.
+    2. You need to see which operator in a given network is taking the most time and plan optimization work accordingly. With end-to-end benchmarks, it is hard to get fine-grained numbers at the operator level.
+3. We need to know how different operators perform on different hardware infrastructure (e.g. CPU with MKL-DNN, GPU with NVIDIA CUDA and cuDNN). With these details, we can plan optimization work at the operator level, which could significantly boost end-to-end performance.
+4. You want nightly performance tests across all operators in a deep learning framework to catch regressions early.
+5. We can integrate this framework with a CI/CD system to run per-operator performance tests for PRs. Example: when a PR modifies the kernel of TransposeConv2D, we can run benchmarks of the TransposeConv2D operator to verify performance.
+
+Hence, in this utility, we will build the functionality to allow users and 
developers of deep learning frameworks to easily run benchmarks for individual 
operators.
+
+# How to use
+
+## Prerequisites
+
+This utility uses the MXNet profiler under the hood to fetch compute and memory metrics. Hence, you need to build MXNet with the `USE_PROFILER=1` flag.
+
+Make sure to build the flavor of MXNet (for example, with/without MKL, with CUDA 9 or 10.1, etc.) on which you would like to measure operator performance. Finally, you need to add the path to your cloned MXNet repository to the PYTHONPATH.
+
+```
+export PYTHONPATH=$PYTHONPATH:/path/to/incubator-mxnet/
+```
+
+## Usecase 1 - Run benchmarks for all the operators
+
+Below command runs all the MXNet operators (NDArray) benchmarks with default 
inputs and saves the final result as JSON in the given file.
+
+```
+python incubator-mxnet/benchmark/opperf/opperf.py --output-format json 
--output-file mxnet_operator_benchmark_results.json
+```
+
+**Other Supported Options:**
+
+1. **output-format** : `json` or `md` for markdown file output.
+
+2. **ctx** : `cpu` or `gpu`. By default, cpu on CPU machine, gpu(0) on GPU 
machine. You can override and set the global context for all operator 
benchmarks. Example: --ctx gpu(2).
+
+3. **dtype** : By default, `float32`. You can override and set the global 
dtype for all operator benchmarks. Example: --dtype float64.
+
+## Usecase 2 - Run benchmarks for all the operators in a specific category
+
+For example, to run benchmarks for all NDArray broadcast binary operators (e.g. broadcast_add, broadcast_mod, broadcast_pow), you just run the following Python script.
+
+```
+#!/usr/bin/python
+from benchmark.opperf.tensor_operations.binary_broadcast_operators import run_mx_binary_broadcast_operators_benchmarks
+
+# Run all Binary Broadcast operations benchmarks with default input values
+print(run_mx_binary_broadcast_operators_benchmarks())
+```
+
+Output for the above benchmark run, on a CPU machine, would look something 
like below:
+
+```
+{'broadcast_mod': [{'avg_time_forward_broadcast_mod': 28.7063, 
'avg_time_mem_alloc_cpu/0': 4194.3042,
+'avg_time_backward_broadcast_mod': 12.0954, 'inputs': 
{'lhs': (1024, 1024), 'rhs': (1024, 1024)}},
+   {'avg_time_forward_broadcast_mod': 2.7332, 
'avg_time_mem_alloc_cpu/0': 400.0,
+'avg_time_backward_broadcast_mod': 1.1288, 'inputs': 
{'lhs': (1, 10), 'rhs': (1, 10)}},
+   
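
For anyone consuming the results programmatically, a hedged sketch based on the sample output above (key names inferred from the truncated dict; not an official API):

```
results = run_mx_binary_broadcast_operators_benchmarks()
for op_name, runs in results.items():
    for run in runs:
        # Each run records average forward/backward times plus the input shapes.
        fwd = run.get('avg_time_forward_' + op_name)
        print(op_name, run['inputs'], 'forward avg:', fwd)
```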

[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
larroy commented on a change in pull request #15120: [bug] fix higher grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r290561369
 
 

 ##
 File path: tests/python/unittest/test_higher_order_grad.py
 ##
 @@ -27,52 +27,79 @@ def test_log():
 def log(x):
 return nd.log(x)
 
+def grad_op(x):
+return 1/x
+
 def grad_grad_op(x):
 return -1/(x**2)
 
 arrays = random_arrays((2, 2), (2, 3), (4, 5, 2), (3, 1, 4, 5))
 
 for array in arrays:
-check_second_order_unary(array, log, grad_grad_op)
+check_second_order_unary(array, log, grad_op, grad_grad_op)
 
 
 @with_seed()
 def test_log2():
 def log2(x):
 return nd.log2(x)
 
+def grad_op(x):
+return 1/(x * math.log(2))
+
 def grad_grad_op(x):
 return -1/((x**2) * math.log(2))
 
 arrays = random_arrays((2, 2), (2, 3), (4, 5, 2), (3, 1, 4, 5))
 
 for array in arrays:
-check_second_order_unary(array, log2, grad_grad_op)
+check_second_order_unary(array, log2, grad_op, grad_grad_op)
 
 
 @with_seed()
 def test_log10():
 def log10(x):
 return nd.log10(x)
 
+def grad_op(x):
+return 1/(x * math.log(10))
+
 def grad_grad_op(x):
 return -1/((x**2) * math.log(10))
 
 arrays = random_arrays((2, 2), (2, 3), (4, 5, 2), (3, 1, 4, 5))
 
 for array in arrays:
-check_second_order_unary(array, log10, grad_grad_op)
+check_second_order_unary(array, log10, grad_op, grad_grad_op)
 
 
-def check_second_order_unary(x, op, grad_grad_op):
+def check_second_order_unary(x, op, grad_op, grad_grad_op):
 x = nd.array(x)
-expect_grad_grad = grad_grad_op(x)
+grad_x = grad_op(x)
+grad_grad_x = grad_grad_op(x)
 x.attach_grad()
+
+# Manual head_grads.
+head_grads = nd.random.normal(shape=x.shape)
+head_grad_grads = nd.random.normal(shape=x.shape)
+head_grads.attach_grad()
+
+# Perform compute.
 with autograd.record():
 y = op(x)
-y_grad = autograd.grad(y, x, create_graph=True, retain_graph=True)[0]
-y_grad.backward()
-assert_almost_equal(expect_grad_grad.asnumpy(), x.grad.asnumpy())
+y_grad = autograd.grad(y, x, head_grads=head_grads,
+   create_graph=True, retain_graph=True)[0]
+
+y_grad.backward(head_grad_grads)
+
+# Compute expected values.
+expected_grad_grad = grad_grad_x.asnumpy() * head_grad_grads.asnumpy() * \
+head_grads.asnumpy()
+expected_heads_grad = grad_x.asnumpy()
+
+# Validate the gradients.
+assert_almost_equal(expected_grad_grad, x.grad.asnumpy())
+assert_almost_equal(expected_heads_grad, head_grads.grad.asnumpy())
 
 Review comment:
   can you try 
   
   `y_grad_grad = autograd.grad(y_grad, x, ..., create_graph = False...)[0]`
   
   
   and in validation
   `assert_almost_equal(expected_heads_grad, y_grad_grad.asnumpy())`
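   
   A hedged sketch of that suggestion spelled out (argument values assumed from the test above, not from this review):
   ```
   with autograd.record():
       y = op(x)
       y_grad = autograd.grad(y, x, head_grads=head_grads,
                              create_graph=True, retain_graph=True)[0]
   
   # Differentiate y_grad directly instead of calling backward on it.
   y_grad_grad = autograd.grad(y_grad, x, head_grads=head_grad_grads,
                               create_graph=False, retain_graph=False)[0]
   
   assert_almost_equal(expected_heads_grad, y_grad_grad.asnumpy())
   ```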
   
   




[GitHub] [incubator-mxnet] moneypi commented on issue #15133: Compile failed when disable OPENMP

2019-06-04 Thread GitBox
moneypi commented on issue #15133: Compile failed when disable OPENMP
URL: 
https://github.com/apache/incubator-mxnet/issues/15133#issuecomment-498908780
 
 
   I reproduced the problem with 8 GB of memory, and the build succeeded after increasing it to 16 GB. So that's the reason.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-06-04 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new a7486a3  Bump the publish timestamp.
a7486a3 is described below

commit a7486a3fa001c5428cf0dba3868c961bb5488b23
Author: mxnet-ci 
AuthorDate: Wed Jun 5 01:16:57 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..bc0be2f
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Wed Jun  5 01:16:57 UTC 2019



[GitHub] [incubator-mxnet] ZhennanQin commented on issue #15151: SSD INT8 ARMv8 Operator _sg_mkldnn_conv is not registered

2019-06-04 Thread GitBox
ZhennanQin commented on issue #15151: SSD INT8 ARMv8 Operator _sg_mkldnn_conv 
is not registered
URL: 
https://github.com/apache/incubator-mxnet/issues/15151#issuecomment-498898161
 
 
   The Gluon pre-trained int8 models require the MKL-DNN library and the AVX-512 instruction set. MXNet and Gluon don't provide an int8 solution on the ARM backend at the moment.




[GitHub] [incubator-mxnet] zachgk opened a new pull request #15155: Fix Scala release

2019-06-04 Thread GitBox
zachgk opened a new pull request #15155: Fix Scala release
URL: https://github.com/apache/incubator-mxnet/pull/15155
 
 
   Add GPG signing to pom.xml through additional externalPom module
   Fix staging target repository
   
   @lanking520 @frankfliu 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change




[GitHub] [incubator-mxnet] stephenrawls commented on issue #13967: Hybrid Variable Pass Through [Bug]

2019-06-04 Thread GitBox
stephenrawls commented on issue #13967: Hybrid Variable Pass Through [Bug]
URL: 
https://github.com/apache/incubator-mxnet/issues/13967#issuecomment-498893060
 
 
   In one sense, yes, it offers a solution.
   
   However, it is a brittle API. Users have to know that they aren't allowed to return an unmodified input variable in a hybrid block and must wrap it in F.Identity(). It would probably be a better user experience not to have to do that.
   
   But for my purposes, yes, I can use that and it's fine. Thanks for pointing F.Identity out.
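   
   For context, a minimal sketch of the workaround being discussed (my own illustration; the operator is `identity` in the `nd`/`sym` namespaces):
   ```
   import mxnet as mx
   from mxnet.gluon import HybridBlock
   
   class PassThrough(HybridBlock):
       """Returns one of its inputs unmodified."""
       def hybrid_forward(self, F, x):
           # Returning `x` directly trips the bug discussed above once the
           # block is hybridized; wrapping it gives the symbol graph a node.
           return F.identity(x)
   
   block = PassThrough()
   block.hybridize()
   print(block(mx.nd.ones((2, 3))))
   ```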




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15130: Add NaiveEngine tests in CI

2019-06-04 Thread GitBox
pengzhao-intel commented on issue #15130: Add NaiveEngine tests in CI
URL: https://github.com/apache/incubator-mxnet/pull/15130#issuecomment-498890272
 
 
   @marcoabreu I heard some people are using the naive engine for better performance, but the naive engine doesn't work very well. I think we need to add at least one or two CI tests for the naive engine.
   
   Let's use this PR to fix the bugs in the naive engine first, and then compare how much time the threaded and naive engines take :)
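   
   For anyone trying this locally: the engine is selected via the `MXNET_ENGINE_TYPE` environment variable, which must be set before MXNet is imported. A minimal sketch:
   ```
   import os
   
   # Select the naive (synchronous) engine; must happen before importing mxnet.
   os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine'
   
   import mxnet as mx
   print(mx.nd.ones((2, 2)) + 1)  # executes synchronously under NaiveEngine
   ```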





[GitHub] [incubator-mxnet] leleamol commented on issue #14152: data layer grad problem

2019-06-04 Thread GitBox
leleamol commented on issue #14152: data layer grad problem
URL: 
https://github.com/apache/incubator-mxnet/issues/14152#issuecomment-498885682
 
 
   @songziqin please let us know if you got an answer on the forum. It would also help us resolve the issue if you could provide a reproducible example.




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
apeforest commented on a change in pull request #15120: [bug] fix higher grad 
log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r290538567
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -1074,16 +1074,19 @@ 
MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log,
  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
 // For f(x) -> f = log
 // f''(x) = -1 * (f'(x) * f'(x))
-auto gx = nnvm::NodeEntry{n};
+auto gx_mul_head_grads = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto head_grads = nnvm::NodeEntry{n->inputs[0]};
+auto g_lx = MakeNode("reciprocal", n->attrs.name + "_backward_log_grad",
+                     {n->inputs[1]}, nullptr, &n);
 auto ggx_mid = MakeNode("elemwise_mul", n->attrs.name + "_backward_mid_grad_grad",
-                        {gx, gx}, nullptr, &n);
+                        {gx_mul_head_grads, nnvm::NodeEntry{g_lx}}, nullptr, &n);
 auto ggx = MakeNode("negative", n->attrs.name + "_backward_grad_grad",
                     {nnvm::NodeEntry{ggx_mid}}, nullptr, &n);
 
 std::vector<nnvm::NodeEntry> ret;
 
 ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + "_backward_grad_grad",
-                          {ograds[0], gx}, nullptr, &n));
+                          {ograds[0], nnvm::NodeEntry{g_lx}}, nullptr, &n));
 
 Review comment:
   Still looking into this. The first output should be the gradient of y_grad. However, `head_grads.grad` does not receive the value. I suspect the value returned from this function is dropped during the gradient calculation in imperative.cc. I will look more into this. Stay tuned.




[GitHub] [incubator-mxnet] leleamol commented on issue #12849: [cmake][cpp-package] Building with cmake does not install the cpp-package API

2019-06-04 Thread GitBox
leleamol commented on issue #12849: [cmake][cpp-package] Building with cmake 
does not install the cpp-package API
URL: 
https://github.com/apache/incubator-mxnet/issues/12849#issuecomment-498884206
 
 
   @inglada Please let us know if you were able to build cpp-package using 
CMakefiles so that we can close this issue.
   
   




[GitHub] [incubator-mxnet] leleamol commented on issue #12849: [cmake][cpp-package] Building with cmake does not install the cpp-package API

2019-06-04 Thread GitBox
leleamol commented on issue #12849: [cmake][cpp-package] Building with cmake 
does not install the cpp-package API
URL: 
https://github.com/apache/incubator-mxnet/issues/12849#issuecomment-498884272
 
 
   @mxnet-label-bot add [Pending Requester Info]




[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
larroy commented on a change in pull request #15120: [bug] fix higher grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r290528016
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -1074,16 +1074,19 @@ 
MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log,
  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
 // For f(x) -> f = log
 // f''(x) = -1 * (f'(x) * f'(x))
-auto gx = nnvm::NodeEntry{n};
+auto gx_mul_head_grads = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto head_grads = nnvm::NodeEntry{n->inputs[0]};
+auto g_lx = MakeNode("reciprocal", n->attrs.name + "_backward_log_grad",
 
 Review comment:
   Can we add a comment about the inputs and what g_lx is? It would help reason about the code. Are the inputs of n (backward_log)
   - 0: input gradient
   - 1: x
   ?
   
   So g_lx is a node holding 1/x, i.e. the derivative of the log, right? Can we rename it to g_logx?
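   
   To spell out the relationship being asked about (my reading of the hunk above, not from this review): node n is backward_log, whose output is the head gradient times the first derivative, and g_lx is the bare derivative:
   ```
   f(x) = \log x,\qquad
   f'(x) = \frac{1}{x}\ \ (\text{the reciprocal node } g\_lx),\qquad
   f''(x) = -\frac{1}{x^{2}} = -f'(x)\cdot f'(x)
   ```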
   




[GitHub] [incubator-mxnet] leleamol opened a new pull request #15154: [MXNET-1412] Backporting the fix to v1.4.x branch to prevent the crash in naive engine

2019-06-04 Thread GitBox
leleamol opened a new pull request #15154: [MXNET-1412] Backporting the fix to 
v1.4.x branch to prevent the crash in naive engine
URL: https://github.com/apache/incubator-mxnet/pull/15154
 
 
   ## Description ##
   This change prevents the crash that happens due to early destruction of shared pointers during NaiveEngine shutdown. The change is already in master; this PR backports it to the v1.4.x branch.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [y] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [y] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
larroy commented on a change in pull request #15120: [bug] fix higher grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r290528016
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -1074,16 +1074,19 @@ 
MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log,
  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
 // For f(x) -> f = log
 // f''(x) = -1 * (f'(x) * f'(x))
-auto gx = nnvm::NodeEntry{n};
+auto gx_mul_head_grads = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto head_grads = nnvm::NodeEntry{n->inputs[0]};
+auto g_lx = MakeNode("reciprocal", n->attrs.name + "_backward_log_grad",
 
 Review comment:
   can we add a comment about the inputs and what is g_lx?  it would help 
reason about the code.  Are the inputs of n (backward_log)
   - 0: input gradient
   - 1: x 
   ?
   




[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
larroy commented on a change in pull request #15120: [bug] fix higher grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r290528016
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -1074,16 +1074,19 @@ 
MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log,
  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
 // For f(x) -> f = log
 // f''(x) = -1 * (f'(x) * f'(x))
-auto gx = nnvm::NodeEntry{n};
+auto gx_mul_head_grads = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto head_grads = nnvm::NodeEntry{n->inputs[0]};
+auto g_lx = MakeNode("reciprocal", n->attrs.name + "_backward_log_grad",
 
 Review comment:
   can we add a comment about the inputs?  it would help reason about the code. 
 Are the inputs of n (backward_log)
   - 0: input gradient
   - 1: x 
   ?
   




[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
larroy commented on a change in pull request #15120: [bug] fix higher grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r290528016
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -1074,16 +1074,19 @@ 
MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log,
  [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
 // For f(x) -> f = log
 // f''(x) = -1 * (f'(x) * f'(x))
-auto gx = nnvm::NodeEntry{n};
+auto gx_mul_head_grads = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto head_grads = nnvm::NodeEntry{n->inputs[0]};
+auto g_lx = MakeNode("reciprocal", n->attrs.name + "_backward_log_grad",
 
 Review comment:
   can we add a comment about the inputs?  it would help reason about the code. 
 Are the inputs
   - 0: ograd
   - 1: f(x)
   
   ?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
larroy commented on a change in pull request #15120: [bug] fix higher grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r290528016
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -1074,16 +1074,19 @@ 
MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log,
   [](const nnvm::NodePtr& n, const std::vector& ograds) {
 // For f(x) -> f = log
 // f''(x) = -1 * (f'(x) * f'(x))
-auto gx = nnvm::NodeEntry{n};
+auto gx_mul_head_grads = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto head_grads = nnvm::NodeEntry{n->inputs[0]};
+auto g_lx = MakeNode("reciprocal", n->attrs.name + "_backward_log_grad",
 
 Review comment:
   can we add a comment about the inputs?  it would help reason about the code. 
 Are the inputs
   - [0]: ograd
   - [1]: f(x)
   
   ?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
larroy commented on a change in pull request #15120: [bug] fix higher grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r290528016
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -1074,16 +1074,19 @@ 
MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log,
   [](const nnvm::NodePtr& n, const std::vector& ograds) {
 // For f(x) -> f = log
 // f''(x) = -1 * (f'(x) * f'(x))
-auto gx = nnvm::NodeEntry{n};
+auto gx_mul_head_grads = nnvm::NodeEntry{n};  // f'(x) * head_grads
+auto head_grads = nnvm::NodeEntry{n->inputs[0]};
+auto g_lx = MakeNode("reciprocal", n->attrs.name + "_backward_log_grad",
 
 Review comment:
   can we add a comment about the inputs?  it would help reason about the code. 
 Are the inputs 0: ograd .1: f(x) . ?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch numpy updated: numpy concatenate (#15104)

2019-06-04 Thread reminisce
This is an automated email from the ASF dual-hosted git repository.

reminisce pushed a commit to branch numpy
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/numpy by this push:
 new aa5153f  numpy concatenate (#15104)
aa5153f is described below

commit aa5153fdd463c9a37192b4dd2b3378c4f21e40bf
Author: Hao Jin 
AuthorDate: Tue Jun 4 15:55:27 2019 -0700

numpy concatenate (#15104)
---
 python/mxnet/ndarray/numpy/_op.py | 27 -
 python/mxnet/numpy/multiarray.py  | 29 +-
 python/mxnet/symbol/numpy/_symbol.py  | 27 -
 src/operator/nn/concat.cc | 12 +++---
 src/operator/numpy/np_matrix_op.cc| 58 +++
 src/operator/numpy/np_matrix_op.cu|  4 ++
 src/operator/quantization/quantized_concat.cc | 12 +++---
 tests/python/unittest/test_numpy_op.py| 51 +++
 8 files changed, 204 insertions(+), 16 deletions(-)

diff --git a/python/mxnet/ndarray/numpy/_op.py 
b/python/mxnet/ndarray/numpy/_op.py
index 34218e3..6c83e1f 100644
--- a/python/mxnet/ndarray/numpy/_op.py
+++ b/python/mxnet/ndarray/numpy/_op.py
@@ -24,7 +24,7 @@ from ...util import _sanity_check_params, set_module
 from ...context import current_context
 from . import _internal as _npi
 
-__all__ = ['zeros', 'ones', 'maximum', 'minimum', 'stack', 'arange', 'argmax']
+__all__ = ['zeros', 'ones', 'maximum', 'minimum', 'stack', 'concatenate', 
'arange', 'argmax']
 
 
 @set_module('mxnet.ndarray.numpy')
@@ -277,3 +277,28 @@ def argmax(a, axis=None, out=None):
 with the dimension along `axis` removed.
 """
 return _npi.argmax(a, axis=axis, keepdims=False, out=out)
+
+
+@set_module('mxnet.ndarray.numpy')
+def concatenate(seq, axis=0, out=None):
+    """Join a sequence of arrays along an existing axis.
+
+    Parameters
+    ----------
+    a1, a2, ... : sequence of array_like
+        The arrays must have the same shape, except in the dimension
+        corresponding to `axis` (the first, by default).
+    axis : int, optional
+        The axis along which the arrays will be joined.  If axis is None,
+        arrays are flattened before use.  Default is 0.
+    out : ndarray, optional
+        If provided, the destination to place the result. The shape must be
+        correct, matching that of what concatenate would have returned if no
+        out argument were specified.
+
+    Returns
+    -------
+    res : ndarray
+        The concatenated array.
+    """
+    return _npi.concatenate(*seq, dim=axis, out=out)
diff --git a/python/mxnet/numpy/multiarray.py b/python/mxnet/numpy/multiarray.py
index 212dfe3..6b3dcde 100644
--- a/python/mxnet/numpy/multiarray.py
+++ b/python/mxnet/numpy/multiarray.py
@@ -37,8 +37,8 @@ from ..context import current_context
 from ..ndarray import numpy as _mx_nd_np
 from ..ndarray.numpy import _internal as _npi
 
-__all__ = ['ndarray', 'empty', 'array', 'zeros', 'ones', 'maximum', 'minimum', 
'stack', 'arange',
-   'argmax']
+__all__ = ['ndarray', 'empty', 'array', 'zeros', 'ones', 'maximum', 'minimum', 
'stack',
+   'concatenate', 'arange', 'argmax']
 
 
 # This function is copied from ndarray.py since pylint
@@ -1486,3 +1486,28 @@ def argmax(a, axis=None, out=None):
 with the dimension along `axis` removed.
 """
 return _mx_nd_np.argmax(a, axis, out)
+
+
+@set_module('mxnet.numpy')
+def concatenate(seq, axis=0, out=None):
+    """Join a sequence of arrays along an existing axis.
+
+    Parameters
+    ----------
+    a1, a2, ... : sequence of array_like
+        The arrays must have the same shape, except in the dimension
+        corresponding to `axis` (the first, by default).
+    axis : int, optional
+        The axis along which the arrays will be joined.  If axis is None,
+        arrays are flattened before use.  Default is 0.
+    out : ndarray, optional
+        If provided, the destination to place the result. The shape must be
+        correct, matching that of what concatenate would have returned if no
+        out argument were specified.
+
+    Returns
+    -------
+    res : ndarray
+        The concatenated array.
+    """
+    return _mx_nd_np.concatenate(seq, axis=axis, out=out)
diff --git a/python/mxnet/symbol/numpy/_symbol.py 
b/python/mxnet/symbol/numpy/_symbol.py
index b2d8a5b..7a55547 100644
--- a/python/mxnet/symbol/numpy/_symbol.py
+++ b/python/mxnet/symbol/numpy/_symbol.py
@@ -29,7 +29,7 @@ from ..symbol import Symbol
 from .._internal import _set_np_symbol_class
 from . import _internal as _npi
 
-__all__ = ['zeros', 'ones', 'maximum', 'minimum', 'stack', 'arange', 'argmax']
+__all__ = ['zeros', 'ones', 'maximum', 'minimum', 'stack', 'concatenate', 
'arange', 'argmax']
 
 
 @set_module('mxnet.symbol.numpy')
@@ -1061,6 +1061,31 @@ def stack(arrays, axis=0, out=None):
 
 
 @set_module('mxnet.symbol.numpy')
+def concatenate(seq, axis=0, out=None):
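
(Editorial note: a minimal usage sketch of the new API, assuming a build of the
numpy branch at this commit.)
```
from mxnet import numpy as np  # mxnet.numpy, available on the numpy branch

a = np.zeros((2, 3))
b = np.ones((2, 3))

c = np.concatenate([a, b], axis=0)  # joins along rows -> shape (4, 3)
d = np.concatenate([a, b], axis=1)  # joins along columns -> shape (2, 6)
```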

[GitHub] [incubator-mxnet] reminisce merged pull request #15104: Numpy-compatible Concatenate

2019-06-04 Thread GitBox
reminisce merged pull request #15104: Numpy-compatible Concatenate
URL: https://github.com/apache/incubator-mxnet/pull/15104
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
larroy commented on a change in pull request #15120: [bug] fix higher grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r290527160
 
 

 ##
 File path: tests/python/unittest/test_higher_order_grad.py
 ##
 @@ -27,52 +27,79 @@ def test_log():
     def log(x):
         return nd.log(x)
 
+    def grad_op(x):
+        return 1/x
+
     def grad_grad_op(x):
         return -1/(x**2)
 
     arrays = random_arrays((2, 2), (2, 3), (4, 5, 2), (3, 1, 4, 5))
 
     for array in arrays:
-        check_second_order_unary(array, log, grad_grad_op)
+        check_second_order_unary(array, log, grad_op, grad_grad_op)
 
 
 @with_seed()
 def test_log2():
     def log2(x):
         return nd.log2(x)
 
+    def grad_op(x):
+        return 1/(x * math.log(2))
+
     def grad_grad_op(x):
         return -1/((x**2) * math.log(2))
 
     arrays = random_arrays((2, 2), (2, 3), (4, 5, 2), (3, 1, 4, 5))
 
     for array in arrays:
-        check_second_order_unary(array, log2, grad_grad_op)
+        check_second_order_unary(array, log2, grad_op, grad_grad_op)
 
 
 @with_seed()
 def test_log10():
     def log10(x):
         return nd.log10(x)
 
+    def grad_op(x):
+        return 1/(x * math.log(10))
+
     def grad_grad_op(x):
         return -1/((x**2) * math.log(10))
 
     arrays = random_arrays((2, 2), (2, 3), (4, 5, 2), (3, 1, 4, 5))
 
     for array in arrays:
-        check_second_order_unary(array, log10, grad_grad_op)
+        check_second_order_unary(array, log10, grad_op, grad_grad_op)
 
 
-def check_second_order_unary(x, op, grad_grad_op):
+def check_second_order_unary(x, op, grad_op, grad_grad_op):
     x = nd.array(x)
-    expect_grad_grad = grad_grad_op(x)
+    grad_x = grad_op(x)
+    grad_grad_x = grad_grad_op(x)
     x.attach_grad()
+
+    # Manual head_grads.
+    head_grads = nd.random.normal(shape=x.shape)
+    head_grad_grads = nd.random.normal(shape=x.shape)
+    head_grads.attach_grad()
+
+    # Perform compute.
     with autograd.record():
         y = op(x)
-        y_grad = autograd.grad(y, x, create_graph=True, retain_graph=True)[0]
-    y_grad.backward()
-    assert_almost_equal(expect_grad_grad.asnumpy(), x.grad.asnumpy())
+        y_grad = autograd.grad(y, x, head_grads=head_grads,
+                               create_graph=True, retain_graph=True)[0]
+
+    y_grad.backward(head_grad_grads)
+
+    # Compute expected values.
+    expected_grad_grad = grad_grad_x.asnumpy() * head_grad_grads.asnumpy() * \
+        head_grads.asnumpy()
+    expected_heads_grad = grad_x.asnumpy()
+
+    # Validate the gradients.
+    assert_almost_equal(expected_grad_grad, x.grad.asnumpy())
+    assert_almost_equal(expected_heads_grad, head_grads.grad.asnumpy())
 
 Review comment:
   Now I understand your question. I don't think anything is updating
head_grads.grad here (this is done when running backward). Why do you want to
set the head gradients manually? To verify your fix?
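   (Editorial note: a minimal sketch of what supplying head gradients to a
backward pass computes, using the same mxnet.autograd API as the test above;
the values are illustrative.)
   ```
   from mxnet import nd, autograd

   x = nd.array([1.0, 2.0, 3.0])
   w = nd.array([0.1, 0.2, 0.3])   # manually chosen head gradients
   x.attach_grad()

   with autograd.record():
       y = nd.log(x)
   y.backward(w)                   # seeds the backward pass with w instead of ones

   print(x.grad)                   # w * dlog(x)/dx = w / x -> [0.1, 0.1, 0.1]
   ```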


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on a change in pull request #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
larroy commented on a change in pull request #15120: [bug] fix higher grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#discussion_r290526536
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -1074,16 +1074,19 @@ 
MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(_backward_log,
   [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
     // For f(x) -> f = log
     // f''(x) = -1 * (f'(x) * f'(x))
-    auto gx = nnvm::NodeEntry{n};
+    auto gx_mul_head_grads = nnvm::NodeEntry{n};  // f'(x) * head_grads
+    auto head_grads = nnvm::NodeEntry{n->inputs[0]};
+    auto g_lx = MakeNode("reciprocal", n->attrs.name + "_backward_log_grad",
+                         {n->inputs[1]}, nullptr, &n);
     auto ggx_mid = MakeNode("elemwise_mul", n->attrs.name + "_backward_mid_grad_grad",
-                            {gx, gx}, nullptr, &n);
+                            {gx_mul_head_grads, nnvm::NodeEntry{g_lx}}, nullptr, &n);
     auto ggx = MakeNode("negative", n->attrs.name + "_backward_grad_grad",
                         {nnvm::NodeEntry{ggx_mid}}, nullptr, &n);
 
     std::vector<nnvm::NodeEntry> ret;
 
     ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + "_backward_grad_grad",
-                              {ograds[0], gx}, nullptr, &n));
+                              {ograds[0], nnvm::NodeEntry{g_lx}}, nullptr, &n));
 
 Review comment:
   Hi. What do you mean by head_grads.grad? NodeEntry doesn't have a grad
field. Could you clarify? Are you referring to the Python code below? The
gradient is always 0 when attach_grad() is called; the value is updated after
running backward on an output, or by using autograd.grad.
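   (Editorial note: a quick sketch of the behavior just described, i.e. the
gradient buffer stays at zero after attach_grad() until a backward pass runs.)
   ```
   from mxnet import nd, autograd

   x = nd.array([1.0, 2.0])
   x.attach_grad()
   print(x.grad)      # zeros: attach_grad() only allocates the buffer

   with autograd.record():
       y = x * x
   y.backward()
   print(x.grad)      # populated by backward: dy/dx = 2x -> [2., 4.]
   ```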


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (134a3e8 -> 910583e)

2019-06-04 Thread wkcn
This is an automated email from the ASF dual-hosted git repository.

wkcn pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 134a3e8  fix nightly (#15141)
 add 910583e  fix misspell (#15149)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/profiler.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[GitHub] [incubator-mxnet] eric-haibin-lin opened a new pull request #15153: [DOC] Add clarification to MXNET_CPU_WORKER_NTHREADS

2019-06-04 Thread GitBox
eric-haibin-lin opened a new pull request #15153: [DOC] Add clarification to 
MXNET_CPU_WORKER_NTHREADS
URL: https://github.com/apache/incubator-mxnet/pull/15153
 
 
   ## Description ##
   As title 
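   (Editorial note: a minimal illustration of the environment variable being
clarified; the value 4 is illustrative, and it must be set before mxnet is
imported.)
   ```
   import os
   os.environ['MXNET_CPU_WORKER_NTHREADS'] = '4'  # size of the CPU worker thread pool

   import mxnet as mx
   print(mx.nd.ones((2, 2)) + 1)  # CPU operators now run on the configured worker pool
   ```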
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does,
the source of the dataset, expected performance on the test set, and a
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] wkcn merged pull request #15149: Fix Misspell in Profiler

2019-06-04 Thread GitBox
wkcn merged pull request #15149: Fix Misspell in Profiler
URL: https://github.com/apache/incubator-mxnet/pull/15149
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
larroy commented on issue #15120: [bug] fix higher grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#issuecomment-498870894
 
 
   Hi @kshitij12345, thanks for looking into this.
   
   I think we need to clarify what exactly we have in the first parameter of
FGradient, "node". @apeforest and I were a bit puzzled looking at your PR. I
validated the results with the tests, but I think I tried only one log (I
don't remember which base). The result seemed correct to me, so I guess I
missed this problem.
   
   Why do you say that node is ograd * f'(x)? As I understand it, the node
argument is the node to calculate the gradient for; in this case we are
calculating the gradient of the backward of the log. So are you saying that,
by the chain rule, the node is ograd (of log) * d(log(x))/dx = ograd *
reciprocal?
   
   It would be great if we could add this to the documentation, either to the
FGradient typedef or to new_op. Otherwise I always have to dig through the
code to refresh this; I think it is poorly documented and tricky.
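   (Editorial note: a plain-NumPy finite-difference sketch of the identity in
question, i.e. that the backward node of log computes ograd * d(log x)/dx =
ograd * 1/x; the names are illustrative.)
   ```
   import numpy as np

   def backward_log(x, ograd):
       # what _backward_log computes, per the chain rule
       return ograd / x

   x, ograd, eps = 2.0, 3.0, 1e-6
   # finite-difference estimate of d(ograd * log(x))/dx
   fd = (ograd * np.log(x + eps) - ograd * np.log(x - eps)) / (2 * eps)
   assert np.isclose(fd, backward_log(x, ograd))
   ```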


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #15152: [CI][nightly] nightly test tutorial failure: test_tutorials.test_python_kvstore

2019-06-04 Thread GitBox
mxnet-label-bot commented on issue #15152: [CI][nightly] nightly test tutorial 
failure: test_tutorials.test_python_kvstore
URL: 
https://github.com/apache/incubator-mxnet/issues/15152#issuecomment-498851184
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Test, CI


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei opened a new issue #15152: [CI][nightly] nightly test tutorial failure: test_tutorials.test_python_kvstore

2019-06-04 Thread GitBox
roywei opened a new issue #15152: [CI][nightly] nightly test tutorial failure: 
test_tutorials.test_python_kvstore
URL: https://github.com/apache/incubator-mxnet/issues/15152
 
 
   FAIL: test_tutorials.test_python_kvstore
   
   
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/NightlyTestsForBinaries/detail/master/335/pipeline/147


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apivovarov opened a new issue #15151: SSD INT8 ARMv8 Operator _sg_mkldnn_conv is not registered

2019-06-04 Thread GitBox
apivovarov opened a new issue #15151: SSD INT8 ARMv8 Operator _sg_mkldnn_conv 
is not registered
URL: https://github.com/apache/incubator-mxnet/issues/15151
 
 
   I tried to use quantized SSD model `ssd_512_mobilenet1.0_voc_int8` on ARMv8.
   Error:
   ```
   Traceback (most recent call last):
 File "./run-gluoncv.py", line 11, in 
   net = model_zoo.get_model('ssd_512_mobilenet1.0_voc_int8', 
pretrained=True)
 File 
"/usr/local/lib/python3.5/dist-packages/gluoncv/model_zoo/model_zoo.py", line 
231, in get_model
   net = _models[name](**kwargs)
 File 
"/usr/local/lib/python3.5/dist-packages/gluoncv/model_zoo/quantized/quantized.py",
 line 45, in func
   sym_net = SymbolBlock.imports(json_file, ['data'], None, ctx=ctx)
 File "/usr/local/lib/python3.5/dist-packages/mxnet/gluon/block.py", line 
1018, in imports
   sym = symbol.load(symbol_file)
 File "/usr/local/lib/python3.5/dist-packages/mxnet/symbol/symbol.py", line 
2728, in load
   check_call(_LIB.MXSymbolCreateFromFile(c_str(fname), 
ctypes.byref(handle)))
 File "/usr/local/lib/python3.5/dist-packages/mxnet/base.py", line 253, in 
check_call
   raise MXNetError(py_str(_LIB.MXGetLastError()))
   mxnet.base.MXNetError: Failed loading Op quantized_sg_mkldnn_conv_bn_relu_0 
of type _sg_mkldnn_conv: [21:05:05] ../3rdparty/tvm/nnvm/src/core/op.cc:74: 
Check failed: op != nullptr: Operator _sg_mkldnn_conv is not registered
   Stack trace:
  [bt] (0) 
/usr/local/lib/python3.5/dist-packages/mxnet/libmxnet.so(nnvm::Op::Get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x328) [0xa069d5d0]
 [bt] (1) 
/usr/local/lib/python3.5/dist-packages/mxnet/libmxnet.so(+0x232d29c) 
[0xa06e429c]
 [bt] (2) 
/usr/local/lib/python3.5/dist-packages/mxnet/libmxnet.so(dmlc::JSONObjectReadHelper::ReadAllFields(dmlc::JSONReader*)+0xd0)
 [0xa06e8258]
 [bt] (3) 
/usr/local/lib/python3.5/dist-packages/mxnet/libmxnet.so(+0x2329a48) 
[0xa06e0a48]
 [bt] (4) 
/usr/local/lib/python3.5/dist-packages/mxnet/libmxnet.so(+0x232aa80) 
[0xa06e1a80]
 [bt] (5) 
/usr/local/lib/python3.5/dist-packages/mxnet/libmxnet.so(std::_Function_handler::_M_invoke(std::_Any_data const&, 
nnvm::Graph&&)+0xcc) [0x9ed2efbc]
 [bt] (6) 
/usr/local/lib/python3.5/dist-packages/mxnet/libmxnet.so(nnvm::ApplyPasses(nnvm::Graph, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)+0x28c) [0xa06a0194]
 [bt] (7) 
/usr/local/lib/python3.5/dist-packages/mxnet/libmxnet.so(mxnet::LoadLegacyJSONPass(nnvm::Graph)+0x44c)
 [0x9ed35294]
 [bt] (8) 
/usr/local/lib/python3.5/dist-packages/mxnet/libmxnet.so(std::_Function_handler::_M_invoke(std::_Any_data const&, 
nnvm::Graph&&)+0xcc) [0x9ed2efbc]
   ```
   I built `libmxnet.so` on aarch64 cpu without MKL support
   incubator-mxnet version 134a3e8c (Jun 4):
   ```
   cmake -GNinja \
   -DUSE_CUDA=OFF \
   -DUSE_MKL_IF_AVAILABLE=OFF \
   -DCMAKE_BUILD_TYPE=Release \
   -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
   -DCMAKE_C_COMPILER_LAUNCHER=ccache \
   -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
   ..
   
   ninja -j16
   ```
   
   run inference script:
   ```
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #15151: SSD INT8 ARMv8 Operator _sg_mkldnn_conv is not registered

2019-06-04 Thread GitBox
mxnet-label-bot commented on issue #15151: SSD INT8 ARMv8 Operator 
_sg_mkldnn_conv is not registered
URL: 
https://github.com/apache/incubator-mxnet/issues/15151#issuecomment-498843835
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-06-04 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new cd06f3c  Bump the publish timestamp.
cd06f3c is described below

commit cd06f3cb029bdae774579c7f7f9e5781bbb26581
Author: mxnet-ci 
AuthorDate: Tue Jun 4 20:51:13 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..226b50f
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Jun  4 20:51:13 UTC 2019



[GitHub] [incubator-mxnet] abhinavs95 edited a comment on issue #14712: Conv3DTranspose not work in Ubuntu.

2019-06-04 Thread GitBox
abhinavs95 edited a comment on issue #14712: Conv3DTranspose not work in Ubuntu.
URL: 
https://github.com/apache/incubator-mxnet/issues/14712#issuecomment-498827624
 
 
   Duplicate of #13135 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] piyushghai commented on issue #15150: Fix dumps for Constant initializer

2019-06-04 Thread GitBox
piyushghai commented on issue #15150: Fix dumps for Constant initializer
URL: https://github.com/apache/incubator-mxnet/pull/15150#issuecomment-498827812
 
 
   @mxnet-label-bot Add [pr-awaiting-review, NDArray]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] abhinavs95 commented on issue #14712: Conv3DTranspose not work in Ubuntu.

2019-06-04 Thread GitBox
abhinavs95 commented on issue #14712: Conv3DTranspose not work in Ubuntu.
URL: 
https://github.com/apache/incubator-mxnet/issues/14712#issuecomment-498827624
 
 
   Possibly related to #13135 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] piyushghai commented on issue #15147: Python test failures under windows now report the exit code

2019-06-04 Thread GitBox
piyushghai commented on issue #15147: Python test failures under windows now 
report the exit code
URL: https://github.com/apache/incubator-mxnet/pull/15147#issuecomment-498827351
 
 
   @david-seiler Thanks for your contributions. Could you look into the CI
failure on windows-gpu?
   
   @mxnet-label-bot Add [pr-awaiting-review, Test]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] piyushghai commented on issue #15149: Fix Misspell in Profiler

2019-06-04 Thread GitBox
piyushghai commented on issue #15149: Fix Misspell in Profiler
URL: https://github.com/apache/incubator-mxnet/pull/15149#issuecomment-498826811
 
 
   @mxnet-label-bot Add [Docs, pr-awaiting-merge]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] piyushghai commented on issue #15144: Relax Visual Studio version constraint in the specialization of `dmlc::type_name_helper` for `DT=mxnet::Tuple`

2019-06-04 Thread GitBox
piyushghai commented on issue #15144: Relax Visual Studio version constraint in 
the specialization of `dmlc::type_name_helper` for `DT=mxnet::Tuple`
URL: https://github.com/apache/incubator-mxnet/pull/15144#issuecomment-498826459
 
 
   Thanks for your contributions @Vigilans 
   @mxnet-label-bot Add[pr-awaiting-review, Backend]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] piyushghai commented on issue #15142: [Dependency Update] Bump up cudnn version

2019-06-04 Thread GitBox
piyushghai commented on issue #15142: [Dependency Update] Bump up cudnn version
URL: https://github.com/apache/incubator-mxnet/pull/15142#issuecomment-498825409
 
 
   @stu1130 Can you look into the CI failures?
   
   @mxnet-label-bot Add[pr-awaiting-review, Backend]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] piyushghai commented on issue #15139: [WIP][numpy] Fix for D2L Chapters 2 and 3

2019-06-04 Thread GitBox
piyushghai commented on issue #15139: [WIP][numpy] Fix for D2L Chapters 2 and 3
URL: https://github.com/apache/incubator-mxnet/pull/15139#issuecomment-498824930
 
 
   Thanks for your contributions @reminisce 
   @mxnet-label-bot Add [numpy, pr-awaiting-review] 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] piyushghai commented on issue #15137: [WIP] 1.5.0 news

2019-06-04 Thread GitBox
piyushghai commented on issue #15137: [WIP] 1.5.0 news
URL: https://github.com/apache/incubator-mxnet/pull/15137#issuecomment-498823631
 
 
   Thanks for your contributions @roywei 
   @mxnet-label-bot Add [Doc, pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] piyushghai commented on issue #15132: Profiler API Enhancements

2019-06-04 Thread GitBox
piyushghai commented on issue #15132: Profiler API Enhancements
URL: https://github.com/apache/incubator-mxnet/pull/15132#issuecomment-498823245
 
 
   Thanks for your contributions @Zha0q1.
   @mxnet-label-bot Add [Profiler, pr-work-in-progress]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] piyushghai commented on issue #15130: Add NaiveEngine tests in CI

2019-06-04 Thread GitBox
piyushghai commented on issue #15130: Add NaiveEngine tests in CI
URL: https://github.com/apache/incubator-mxnet/pull/15130#issuecomment-498822818
 
 
   Thanks for your contributions @xinyu-intel.
   I agree with Marco: if we want to run NaiveEngine tests in CI, nightly might
be the more appropriate place for them, as the normal unit test suite runs on
every PR and thus speed is of the essence.
   
   @mxnet-label-bot Add [CI]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] piyushghai commented on issue #15128: update LICENSE

2019-06-04 Thread GitBox
piyushghai commented on issue #15128: update LICENSE
URL: https://github.com/apache/incubator-mxnet/pull/15128#issuecomment-498822106
 
 
   Thanks for your contributions @roywei.
   @mxnet-label-bot Add [pr-awaiting-review, Licenses]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] piyushghai commented on issue #15124: [MXNET-1294] Priority-based parameter propagation for improved data parallel training throughput

2019-06-04 Thread GitBox
piyushghai commented on issue #15124: [MXNET-1294] Priority-based parameter 
propagation for improved data parallel training throughput
URL: https://github.com/apache/incubator-mxnet/pull/15124#issuecomment-498821764
 
 
   Thanks for your contribution @anandj91. Can you look into the CI failures?
   @mxnet-label-bot Add [pr-awaiting-review, Backend]. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] piyushghai commented on issue #15120: [bug] fix higher grad log

2019-06-04 Thread GitBox
piyushghai commented on issue #15120: [bug] fix higher grad log 
URL: https://github.com/apache/incubator-mxnet/pull/15120#issuecomment-498820614
 
 
   Thanks for your contributions @kshitij12345 
   @mxnet-label-bot Add [pr-awaiting-review, Operator]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] abhinavs95 commented on issue #15145: Installation problem on XUbuntu

2019-06-04 Thread GitBox
abhinavs95 commented on issue #15145: Installation problem on XUbuntu
URL: 
https://github.com/apache/incubator-mxnet/issues/15145#issuecomment-498817090
 
 
   @mxnet-label-bot add [Installation]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] abhinavs95 commented on issue #15146: Installation problem on Windows 7

2019-06-04 Thread GitBox
abhinavs95 commented on issue #15146: Installation problem on Windows 7
URL: 
https://github.com/apache/incubator-mxnet/issues/15146#issuecomment-498816128
 
 
   @mxnet-label-bot add [Installation, Windows]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] abhinavs95 commented on issue #15102: the grad of lars should be scaled in lbsgd

2019-06-04 Thread GitBox
abhinavs95 commented on issue #15102: the grad of lars should be scaled in lbsgd
URL: 
https://github.com/apache/incubator-mxnet/issues/15102#issuecomment-498813833
 
 
   Hi @starimpact, could you provide some more info, like a brief description
of the problem with a minimal reproducible example?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-06-04 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 73a6877  Bump the publish timestamp.
73a6877 is described below

commit 73a68770cedf91593b236c64b28ff51fca001b30
Author: mxnet-ci 
AuthorDate: Tue Jun 4 19:16:52 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..48f74bc
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Jun  4 19:16:52 UTC 2019



[GitHub] [incubator-mxnet] ddavydenko commented on issue #15148: Very Large CPU RAM Memory Consumption (>1GB)

2019-06-04 Thread GitBox
ddavydenko commented on issue #15148: Very Large CPU RAM Memory Consumption 
(>1GB)
URL: 
https://github.com/apache/incubator-mxnet/issues/15148#issuecomment-498803843
 
 
   @mxnet-label-bot Add [Performance, Memory]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zachgk commented on a change in pull request #15128: update LICENSE

2019-06-04 Thread GitBox
zachgk commented on a change in pull request #15128: update LICENSE
URL: https://github.com/apache/incubator-mxnet/pull/15128#discussion_r290430818
 
 

 ##
 File path: tests/nightly/estimator/test_sentiment_rnn.py
 ##
 @@ -101,7 +101,19 @@ def download_imdb(data_dir='/tmp/data'):
 '''
 Download and extract the IMDB dataset
 '''
-url = ('http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz')
+# dataset from http://ai.stanford.edu/~amaas/data/sentiment/
 
 Review comment:
   While it is good to include the full citation here, also add the information
on licensing and copyrights to the README or whatever docs tell people to
download the data. The idea is that some of these licenses 
actually have consequences. For example, we don't want to let commercial users 
accidentally work with a non-commercial dataset. So, our goal is to make sure 
that any time we inform users about a dataset, we also explain what legal 
requirements come with that dataset as well.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zachgk commented on a change in pull request #15128: update LICENSE

2019-06-04 Thread GitBox
zachgk commented on a change in pull request #15128: update LICENSE
URL: https://github.com/apache/incubator-mxnet/pull/15128#discussion_r290428325
 
 

 ##
 File path: LICENSE
 ##
 @@ -349,6 +350,19 @@
  Copyright 2012 Continuum Analytics, Inc.
 
 
+
===
+Creative Commons Attribution 4.0 International (CC BY 4.0)
 
 Review comment:
   Note that there are 6 different licenses in the Creative Commons 4.0 family
(https://creativecommons.org/licenses/). It is important to know which one,
because some of them will prevent commercial usage, prevent derivative works,
or require others to use the same license.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zachgk commented on a change in pull request #15128: update LICENSE

2019-06-04 Thread GitBox
zachgk commented on a change in pull request #15128: update LICENSE
URL: https://github.com/apache/incubator-mxnet/pull/15128#discussion_r290424190
 
 

 ##
 File path: LICENSE
 ##
 @@ -276,6 +276,7 @@
   Copyright (c) 2015 by Contributors
   Copyright 1984, 1987, 1992 by Stephen L. Moshier
 
+27. CNN Text Classification Example - For details, see 
example/cnn_text_classification/data_helpers.py
 
 Review comment:
   The LICENSE file is more specifically our source release license. It should 
only refer to things which are bundled as part of the source release 
(http://www.apache.org/dev/licensing-howto.html). Maybe we could move this to a 
separate DATASET_LICENSE file?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] abhinavs95 opened a new pull request #15150: update dumps for const init

2019-06-04 Thread GitBox
abhinavs95 opened a new pull request #15150: update dumps for const init
URL: https://github.com/apache/incubator-mxnet/pull/15150
 
 
   ## Description ##
   Fixes #12404 
   Override the dumps method for Constant initializer to take care of NDArray 
input.
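   (Editorial note: a sketch of the failing case this addresses, using the
public mxnet.init API; the behavior before the fix is summarized in the
comment.)
   ```
   import mxnet as mx

   init = mx.init.Constant(mx.nd.array([1.0, 2.0]))
   # dumps() serializes the initializer to JSON; with an NDArray value it
   # previously failed because NDArray is not JSON serializable
   print(init.dumps())
   ```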


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] kshitij12345 commented on a change in pull request #14613: [MXNET-978] Higher order gradient support for some unary operators

2019-06-04 Thread GitBox
kshitij12345 commented on a change in pull request #14613: [MXNET-978] Higher 
order gradient support for some unary operators
URL: https://github.com/apache/incubator-mxnet/pull/14613#discussion_r290415647
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##
 @@ -85,8 +85,23 @@ The storage type of ``relu`` output depends upon the input 
storage type:
 )code" ADD_FILELINE)
 .set_attr("FGradient", ElemwiseGradUseOut{"_backward_relu"});
 
-MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_relu,
-   
unary_bwd);
+MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_relu, 
unary_bwd)
+.set_attr("FGradient",
+[](const nnvm::NodePtr& n, const std::vector& ograds) {
+  std::vector ret;
+  // ograds[0]: d^2L/dx^2
+  // inputs[0]: dL/dy
+  // inputs[1]: y
+  // f(x) -> relu(x)
+  // f'(x) = 1 if x > 0 else 0
+  // f''(x) = 0
+  auto gx = nnvm::NodeEntry{n};  // f'(x)
+  ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + 
"_backward_grad_grad",
+{ograds[0], gx}, nullptr, ));
 
 Review comment:
   Similar to what you have done below for `sin` and `cos`.
   
   `gx` is actually `f'(x) * head_grads` (the output gradient); it should only
be `f'(x)`.
   
   **Explanation**: since `gx = f'(x) * head_grads`, the gradient of `gx`
w.r.t. `f'(x)` is `head_grads`, and similarly the gradient of `gx` w.r.t.
`head_grads` is `f'(x)`.
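   (Editorial note: a plain-NumPy sketch of the point; with gx = f'(x) *
head_grads, differentiating gx w.r.t. head_grads yields f'(x), not gx itself.)
   ```
   import numpy as np

   x = np.array([-1.0, 0.5, 2.0])
   head_grads = np.array([0.3, 0.6, 0.9])

   f_prime = (x > 0).astype(x.dtype)   # relu'(x): 1 if x > 0 else 0
   gx = f_prime * head_grads           # what the backward node actually outputs

   d_gx_d_head_grads = f_prime         # gradient of gx w.r.t. head_grads: f'(x)
   d_gx_d_f_prime = head_grads         # gradient of gx w.r.t. f'(x): head_grads
   ```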


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] kshitij12345 commented on issue #14613: [MXNET-978] Higher order gradient support for some unary operators

2019-06-04 Thread GitBox
kshitij12345 commented on issue #14613: [MXNET-978] Higher order gradient 
support for some unary operators
URL: https://github.com/apache/incubator-mxnet/pull/14613#issuecomment-498775984
 
 
   @apeforest
   
   Could you update the `check_second_order_unary` as per
   
   
https://github.com/apache/incubator-mxnet/blob/37ce3b87268a8154f5c0ad97ce2522478038ee06/tests/python/unittest/test_higher_order_grad.py#L76-L103
   
   This also covers the check for the gradient of the first input argument. I
have tested a similar PyTorch script, which works (code in PR #15120).
   
   However, do note that for PR #15120,
   
https://github.com/apache/incubator-mxnet/blob/37ce3b87268a8154f5c0ad97ce2522478038ee06/tests/python/unittest/test_higher_order_grad.py#L102
   
   the assertion fails, with `head_grads.grad.asnumpy()` being all `0`s.
   
   Please check to see if it works for you.
   Thank You.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 opened a new pull request #15104: Numpy-compatible Concatenate

2019-06-04 Thread GitBox
haojin2 opened a new pull request #15104: Numpy-compatible Concatenate
URL: https://github.com/apache/incubator-mxnet/pull/15104
 
 
   ## Description ##
   As title.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does,
the source of the dataset, expected performance on the test set, and a
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Numpy-compatible concatenate
   - [x] Unit tests
   
   ## Comments ##
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Zha0q1 opened a new pull request #15149: Fix Misspell in profiler

2019-06-04 Thread GitBox
Zha0q1 opened a new pull request #15149: Fix Misspell in profiler
URL: https://github.com/apache/incubator-mxnet/pull/15149
 
 
   ## Description ##
   In python/mxnet/profiler.py parameter "continuous_dump" of function 
set_config() was spelled wrong as "contiguous_dump". This PR fixes it.
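   (Editorial note: a usage sketch with the corrected keyword; the file name is
illustrative.)
   ```
   import mxnet as mx

   # the corrected keyword is continuous_dump (previously misspelled as contiguous_dump)
   mx.profiler.set_config(profile_all=True, continuous_dump=True,
                          filename='profile_output.json')
   mx.profiler.set_state('run')
   # ... run a workload ...
   mx.profiler.set_state('stop')
   ```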
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 closed pull request #15104: Numpy-compatible Concatenate

2019-06-04 Thread GitBox
haojin2 closed pull request #15104: Numpy-compatible Concatenate
URL: https://github.com/apache/incubator-mxnet/pull/15104
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] kshitij12345 commented on a change in pull request #14613: [MXNET-978] Higher order gradient support for some unary operators

2019-06-04 Thread GitBox
kshitij12345 commented on a change in pull request #14613: [MXNET-978] Higher 
order gradient support for some unary operators
URL: https://github.com/apache/incubator-mxnet/pull/14613#discussion_r290415647
 
 

 ##########
 File path: src/operator/tensor/elemwise_unary_op_basic.cc
 ##########
 @@ -85,8 +85,23 @@ The storage type of ``relu`` output depends upon the input storage type:
 )code" ADD_FILELINE)
 .set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseOut{"_backward_relu"});
 
-MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_relu,
-                                               unary_bwd<mshadow_op::relu_grad>);
+MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU(_backward_relu,
+                                               unary_bwd<mshadow_op::relu_grad>)
+.set_attr<nnvm::FGradient>("FGradient",
+    [](const nnvm::NodePtr& n, const std::vector<nnvm::NodeEntry>& ograds) {
+      std::vector<nnvm::NodeEntry> ret;
+      // ograds[0]: d^2L/dx^2
+      // inputs[0]: dL/dy
+      // inputs[1]: y
+      // f(x) -> relu(x)
+      // f'(x) = 1 if x > 0 else 0
+      // f''(x) = 0
+      auto gx = nnvm::NodeEntry{n};  // f'(x)
+      ret.emplace_back(MakeNode("elemwise_mul", n->attrs.name + "_backward_grad_grad",
+                                {ograds[0], gx}, nullptr, &n));
 
 Review comment:
   `gx` is actually `f'(x) * head_grads` (the output gradient), but it should 
be only `f'(x)`.
   
   **Explanation**: `gx = f'(x) * head_grads`. Therefore, the gradient of `gx` 
w.r.t. `f'(x)` is `head_grads`, and similarly the gradient of `gx` w.r.t. 
`head_grads` is `f'(x)`.
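   
   A minimal NumPy sketch of the quantities above (illustrative only; 
`head_grads` here names the incoming output gradient, not an MXNet API):
   ```
   import numpy as np

   x = np.array([-2.0, -0.5, 1.0, 3.0])
   head_grads = np.array([0.1, 0.2, 0.3, 0.4])  # dL/dy flowing in from above

   f_prime = (x > 0).astype(x.dtype)  # f'(x) for relu: 1 if x > 0 else 0
   gx = f_prime * head_grads          # the product gx currently holds

   # Treating gx = f'(x) * head_grads as a product of two inputs:
   d_gx_wrt_fprime = head_grads       # gradient of gx w.r.t. f'(x)
   d_gx_wrt_headgrads = f_prime       # gradient of gx w.r.t. head_grads
   ```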




[GitHub] [incubator-mxnet] rvardimon commented on issue #14883: [Discussion] Overhead in MXNet Execution

2019-06-04 Thread GitBox
rvardimon commented on issue #14883: [Discussion] Overhead in MXNet Execution
URL: 
https://github.com/apache/incubator-mxnet/issues/14883#issuecomment-498759181
 
 
   https://github.com/apache/incubator-mxnet/issues/15148




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #15148: Very Large CPU RAM Memory Consumption (>1GB)

2019-06-04 Thread GitBox
mxnet-label-bot commented on issue #15148: Very Large CPU RAM Memory 
Consumption (>1GB)
URL: 
https://github.com/apache/incubator-mxnet/issues/15148#issuecomment-498758716
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Performance




[GitHub] [incubator-mxnet] rvardimon opened a new issue #15148: Very Large CPU RAM Memory Consumption (>1GB)

2019-06-04 Thread GitBox
rvardimon opened a new issue #15148: Very Large CPU RAM Memory Consumption 
(>1GB)
URL: https://github.com/apache/incubator-mxnet/issues/15148
 
 
   ## Description
   MXNet consumes nearly 2 GB of CPU RAM even when loading a relatively small 
model (e.g. ResNet-18) directed to run on the GPU (`ctx=mxnet.gpu()`). From 
what I understand, there is no real need to allocate so much CPU memory when 
the model is running on the GPU.
   
   This issue is extremely prohibitive when trying to run multiple MXNet 
processes on the same machine, and IMO it puts MXNet at a significant 
disadvantage compared to other frameworks for use in AI production systems.
   
   ## Environment info
   ```
   /usr/bin/python3.6 
/home/ran-face/src/CameraResearch/workspace/mxnet_diagnose.py
   --Python Info--
   Version  : 3.6.7
   Compiler : GCC 8.2.0
   Build: ('default', 'Oct 22 2018 11:32:17')
   Arch : ('64bit', 'ELF')
   Pip Info---
   Version  : 9.0.1
   Directory: /usr/lib/python3/dist-packages/pip
   --MXNet Info---
   Version  : 1.3.1
   Directory: /usr/local/lib/python3.6/dist-packages/mxnet
   Commit Hash   : 19c501680183237d52a862e6ae1dc4ddc296305b
   --System Info--
   Platform : Linux-4.15.0-45-generic-x86_64-with-Ubuntu-18.04-bionic
   system   : Linux
   node : ranface-Lenovo-Y720-15IKB
   release  : 4.15.0-45-generic
   version  : #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019
   --Hardware Info--
   machine  : x86_64
   processor: x86_64
   Architecture:x86_64
   CPU op-mode(s):  32-bit, 64-bit
   Byte Order:  Little Endian
   CPU(s):  8
   On-line CPU(s) list: 0-7
   Thread(s) per core:  2
   Core(s) per socket:  4
   Socket(s):   1
   NUMA node(s):1
   Vendor ID:   GenuineIntel
   CPU family:  6
   Model:   158
   Model name:  Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
   Stepping:9
   CPU MHz: 2411.711
   CPU max MHz: 3800.
   CPU min MHz: 800.
   BogoMIPS:5616.00
   Virtualization:  VT-x
   L1d cache:   32K
   L1i cache:   32K
   L2 cache:256K
   L3 cache:6144K
   NUMA node0 CPU(s):   0-7
   Flags:   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx 
pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl 
xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 
monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 
x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 
3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow 
vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid 
mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm 
ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp flush_l1d
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0694 
sec, LOAD: 0.9665 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0841 sec, LOAD: 
1.2019 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0988 sec, LOAD: 
0.9435 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.0708 sec, LOAD: 1.2242 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0637 sec, LOAD: 
1.3118 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0499 sec, 
LOAD: 0.3116 sec.
   
   Process finished with exit code 0
   ```
   
   Package used (Python/R/Scala/Julia):
   I'm using Python3
   
   ## Build info
   mxnet installed using pip3
   
   ## Steps to reproduce
   
   1. Run the following code
   ```
   import mxnet as mx
   import time

   if __name__ == '__main__':
       # Download the pretrained ResNet-18 checkpoint (epoch 0) and synset file.
       path = 'http://data.mxnet.io/models/imagenet/'
       [mx.test_utils.download(path + 'resnet/18-layers/resnet-18-0000.params'),
        mx.test_utils.download(path + 'resnet/18-layers/resnet-18-symbol.json'),
        mx.test_utils.download(path + 'synset.txt')]

       ctx = mx.gpu()

       # Load the checkpoint and bind the module for inference on the GPU.
       sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-18', 0)
       mod = mx.mod.Module(symbol=sym, context=ctx, label_names=None)
       mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))],
                label_shapes=mod._label_shapes)
       mod.set_params(arg_params, aux_params, allow_missing=True)

       # Keep the process alive so its memory usage can be inspected.
       time.sleep(100)
   ```
   2. Check the process memory (run `top`, then press Shift+M to sort 
processes by memory usage); a programmatic alternative is sketched below
   3. Memory usage is about ~1.5-2 GB of RAM
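   
   A minimal sketch for step 2, assuming the third-party `psutil` package is 
installed (it is not part of the repro above):
   ```
   # Print this process's resident set size (RSS); run it inside the repro
   # script, e.g. just before time.sleep(100).
   import os

   import psutil

   rss_gb = psutil.Process(os.getpid()).memory_info().rss / 1e9
   print('resident set size: %.2f GB' % rss_gb)
   ```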
   
   
   


[GitHub] [incubator-mxnet] apeforest commented on issue #14613: [MXNET-978] Higher order gradient support for some unary operators

2019-06-04 Thread GitBox
apeforest commented on issue #14613: [MXNET-978] Higher order gradient support 
for some unary operators
URL: https://github.com/apache/incubator-mxnet/pull/14613#issuecomment-498758177
 
 
   @kshitij12345 could you please take a look at the PR again? Thanks!




[GitHub] [incubator-mxnet] roywei commented on issue #15128: update LICENSE

2019-06-04 Thread GitBox
roywei commented on issue #15128: update LICENSE
URL: https://github.com/apache/incubator-mxnet/pull/15128#issuecomment-498755613
 
 
   @zachgk @lanking520 could you help review the license? Thanks!




[GitHub] [incubator-mxnet] roywei commented on issue #14981: [CI][NightlyTestsForBinaries] Test Large Tensor: GPU Failing

2019-06-04 Thread GitBox
roywei commented on issue #14981: [CI][NightlyTestsForBinaries] Test Large 
Tensor: GPU Failing
URL: 
https://github.com/apache/incubator-mxnet/issues/14981#issuecomment-498748546
 
 
   Currently, both the CPU and GPU tests have been disabled due to the same 
memory issue. After a discussion with @access2rohit and @apeforest, we can try 
a few things:
   1. Change to P3 instances here: 
https://github.com/apache/incubator-mxnet/blob/master/tests/nightly/JenkinsfileForBinaries#L82
   2. Further increase the shared memory to 50G.
   3. Stop running the large tensor test in parallel with other tests.
   
   We are having trouble testing the above solutions on CI machines that run 
multiple jobs in parallel.




[GitHub] [incubator-mxnet] david-seiler commented on a change in pull request #14617: PDF operators for the random samplers, and also the Dirichlet

2019-06-04 Thread GitBox
david-seiler commented on a change in pull request #14617: PDF operators for 
the random samplers, and also the Dirichlet
URL: https://github.com/apache/incubator-mxnet/pull/14617#discussion_r290353676
 
 

 ##########
 File path: ci/windows/test_py3_cpu.ps1
 ##########
 @@ -24,7 +24,7 @@ $env:MXNET_HOME=[io.path]::combine($PSScriptRoot, 
'mxnet_home')
 
 C:\Python37\Scripts\pip install -r tests\requirements.txt
 C:\Python37\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 
--timer-filter warning,error --with-xunit --xunit-file nosetests_unittest.xml 
tests\python\unittest
-if (! $?) { Throw ("Error running unittest") }
+if (! $?) { Throw ("Error running unittest) }
 
 Review comment:
   Whoops, I was factoring some error-handling code out to PR-15147 and got a 
little too aggressive. Good catch; fixed now.




[incubator-mxnet] branch master updated: fix nightly (#15141)

2019-06-04 Thread skm
This is an automated email from the ASF dual-hosted git repository.

skm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 134a3e8  fix nightly (#15141)
134a3e8 is described below

commit 134a3e8cd36ee66426deedd3c8add6888378c043
Author: Lai Wei 
AuthorDate: Tue Jun 4 07:43:49 2019 -0700

fix nightly (#15141)

* fix nightly

* disable large tensor

* update issue link
---
 tests/nightly/JenkinsfileForBinaries | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/tests/nightly/JenkinsfileForBinaries 
b/tests/nightly/JenkinsfileForBinaries
index e4b9ff1..d5f1ebd 100755
--- a/tests/nightly/JenkinsfileForBinaries
+++ b/tests/nightly/JenkinsfileForBinaries
@@ -86,14 +86,15 @@ core_logic: {
 }
   }
 },*/
-'Test Large Tensor Size: GPU': {
+// https://github.com/apache/incubator-mxnet/issues/14981
+/*'Test Large Tensor Size: GPU': {
   node(NODE_LINUX_GPU) {
 ws('workspace/large_tensor-gpu') {
 utils.unpack_and_init('gpu_int64', mx_cmake_lib)
 utils.docker_run('ubuntu_nightly_gpu', 
'nightly_test_large_tensor', true)
 }
   }
-},
+},*/
 'StraightDope: Python2 Single-GPU': {
   node(NODE_LINUX_GPU_P3) {
 ws('workspace/straight_dope-single_gpu') {
@@ -127,7 +128,7 @@ core_logic: {
   }
 },
 'Tutorial: Python2': {
-  node(NODE_LINUX_GPU) {
+  node(NODE_LINUX_GPU_P3) {
 ws('workspace/tutorial-test-python2') {
   utils.unpack_and_init('gpu', mx_lib)
   utils.docker_run('ubuntu_nightly_gpu', 
'nightly_tutorial_test_ubuntu_python2_gpu', true, '1500m')
@@ -135,7 +136,7 @@ core_logic: {
   }
 },
 'Tutorial: Python3': {
-  node(NODE_LINUX_GPU) {
+  node(NODE_LINUX_GPU_P3) {
 ws('workspace/tutorial-test-python3') {
   utils.unpack_and_init('gpu', mx_lib)
   utils.docker_run('ubuntu_nightly_gpu', 
'nightly_tutorial_test_ubuntu_python3_gpu', true, '1500m')



[GitHub] [incubator-mxnet] sandeep-krishnamurthy merged pull request #15141: fix nightly

2019-06-04 Thread GitBox
sandeep-krishnamurthy merged pull request #15141: fix nightly
URL: https://github.com/apache/incubator-mxnet/pull/15141
 
 
   




[GitHub] [incubator-mxnet] lebeg commented on a change in pull request #14617: PDF operators for the random samplers, and also the Dirichlet

2019-06-04 Thread GitBox
lebeg commented on a change in pull request #14617: PDF operators for the 
random samplers, and also the Dirichlet
URL: https://github.com/apache/incubator-mxnet/pull/14617#discussion_r290332802
 
 

 ##########
 File path: ci/windows/test_py3_cpu.ps1
 ##########
 @@ -24,7 +24,7 @@ $env:MXNET_HOME=[io.path]::combine($PSScriptRoot, 
'mxnet_home')
 
 C:\Python37\Scripts\pip install -r tests\requirements.txt
 C:\Python37\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 
--timer-filter warning,error --with-xunit --xunit-file nosetests_unittest.xml 
tests\python\unittest
-if (! $?) { Throw ("Error running unittest") }
+if (! $?) { Throw ("Error running unittest) }
 
 Review comment:
   Are you sure you want to remove the `"`?




[GitHub] [incubator-mxnet] david-seiler opened a new pull request #15147: Python test failures under windows now report the exit code

2019-06-04 Thread GitBox
david-seiler opened a new pull request #15147: Python test failures under 
windows now report the exit code
URL: https://github.com/apache/incubator-mxnet/pull/15147
 
 
   ## Description ##
   While working on PR-14617, I hit an obscure Windows-only crash.  Adding the 
$LastExitCode to the error message helped me debug it; a hex-formatted version 
of that change appears here.
   
   ## Checklist ##
   ### Essentials ###
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - The only changes are on the error paths for ci/windows/test_*.ps1, so the 
PR doesn't test all that well.  In the first version I've deliberately broken 
test_py2_gpu.ps1 (see line 27), so we should see that test fail and report an 
exit code of 0x0.  Assuming that behaves as expected, I'll amend the PR to make 
it pass.
   
   ## Comments ##
   - No rush to merge this, I just promised Marco I'd give it its own PR.
   




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15108: The test time of the model on GPU is normal, but the test time on CPU is very long.

2019-06-04 Thread GitBox
pengzhao-intel commented on issue #15108: The test time of the model on GPU is 
normal, but the test time on CPU is very long.
URL: 
https://github.com/apache/incubator-mxnet/issues/15108#issuecomment-498676696
 
 
   It makes sense because your input image size is about 10X larger in both 
the H and W directions (from 12 to 112), i.e. roughly 100X the pixels, so the 
runtime increases accordingly. By the way, a build without MKL will be very 
slow and is not performant.
   
   I suggest you switch to an MKL-DNN build as a starting point.
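   
   As a hedged sketch, one way to check whether the installed binary was built 
with MKL-DNN (this assumes the `mxnet.runtime` feature API, which exists in 
MXNet 1.5+ but not in older builds):
   ```
   # Query the compile-time features of the installed MXNet binary.
   import mxnet.runtime

   features = mxnet.runtime.Features()
   print(features.is_enabled('MKLDNN'))  # True for an MKL-DNN-enabled build
   ```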
   




[incubator-mxnet] branch master updated: [clojure] clojurify function names in image.clj namespace (#15121)

2019-06-04 Thread kedarb
This is an automated email from the ASF dual-hosted git repository.

kedarb pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 28c528e  [clojure] clojurify function names in image.clj namespace 
(#15121)
28c528e is described below

commit 28c528e16be70b31287b65d949205396bbfec6e8
Author: Arthur Caillau 
AuthorDate: Tue Jun 4 15:37:04 2019 +0200

[clojure] clojurify function names in image.clj namespace (#15121)

* [clojure] clojurify function names in image.clj namespace

* move deprecated to the proper location for defn

* rename color-flag to color and use :color :grayscale as values

* add rm dest-path in with-file

* change `color-flag` to `color` in `color->int`
---
 .../src/org/apache/clojure_mxnet/image.clj | 116 +++--
 .../test/org/apache/clojure_mxnet/image_test.clj   |  63 +++
 2 files changed, 151 insertions(+), 28 deletions(-)

diff --git a/contrib/clojure-package/src/org/apache/clojure_mxnet/image.clj 
b/contrib/clojure-package/src/org/apache/clojure_mxnet/image.clj
index f81a358..68dcbfe 100644
--- a/contrib/clojure-package/src/org/apache/clojure_mxnet/image.clj
+++ b/contrib/clojure-package/src/org/apache/clojure_mxnet/image.clj
@@ -17,6 +17,7 @@
 
 (ns org.apache.clojure-mxnet.image
   "Image API of Clojure package."
+  (:refer-clojure :exclude [read])
   (:require [t6.from-scala.core :refer [$ $$] :as $]
 [org.apache.clojure-mxnet.dtype :as dtype]
 [org.apache.clojure-mxnet.ndarray :as ndarray]
@@ -38,8 +39,10 @@
 (s/def ::decode-image-opts
   (s/keys :opt-un [::color-flag ::to-rgb ::output]))
 
-(defn decode-image
-  "Decodes an image from an input stream with OpenCV
+(defn ^:deprecated decode-image
+  "DEPRECATED: use `decode` instead.
+
+   Decodes an image from an input stream with OpenCV
 `input-stream`: `InputStream` - Contains the binary encoded image
 `color-flag`: 0 or 1 - Convert decoded image to grayscale (0) or color (1)
 `to-rgb`: boolean - Whether to convert decoded image to mxnet's default RGB
@@ -60,14 +63,47 @@
   ([input-stream]
(decode-image input-stream {})))
 
+(s/def ::color #{:grayscale :color})
+(s/def ::decode-image-opts-2 (s/keys :opt-un [::color ::to-rgb ::output]))
+
+(defn- color->int [color]
+  (case color
+:grayscale 0
+:color 1))
+
+(defn decode
+  "Decodes an image from an input stream with OpenCV.
+`input-stream`: `InputStream` - Contains the binary encoded image
+`color`: keyword in `#{:color :grayscale}` - Convert decoded image to
+ grayscale or color
+`to-rgb`: boolean - Whether to convert decoded image to mxnet's default RGB
+format (instead of opencv's default BGR)
+`output`: nil or `NDArray`
+returns: `NDArray` with dtype uint8
+
+  Ex:
+(decode input-stream)
+(decode input-stream {:color :color})
+(decode input-stream {:color :grayscale :output nd})"
+  ([input-stream {:keys [color to-rgb output]
+  :or {color :color to-rgb true output nil}
+  :as opts}]
+   (util/validate! ::input-stream input-stream "Invalid input stream")
+   (util/validate! ::decode-image-opts-2 opts "Invalid options for decoding")
+   (Image/imDecode input-stream (color->int color) to-rgb ($/option output)))
+  ([input-stream]
+   (decode input-stream {})))
+
 (s/def ::filename string?)
 (s/def ::optional-color-flag
   (s/or :none nil? :some ::color-flag))
 (s/def ::optional-to-rgb
   (s/or :none nil? :some ::to-rgb))
 
-(defn read-image
-  "Reads an image file and returns an ndarray with OpenCV. It returns image in
+(defn ^:deprecated read-image
+  "DEPRECATED: use `read` instead.
+
+   Reads an image file and returns an ndarray with OpenCV. It returns image in
RGB by default instead of OpenCV's default BGR.
 `filename`: string - Name of the image file to be loaded
 `color-flag`: 0 or 1 - Convert decoded image to grayscale (0) or color (1)
@@ -95,11 +131,43 @@
   ([filename]
(read-image filename {})))
 
+(defn read
+  "Reads an image file and returns an ndarray with OpenCV. It returns image in
+   RGB by default instead of OpenCV's default BGR.
+`filename`: string - Name of the image file to be loaded
+`color`: keyword in `#{:color :grayscale}` - Convert decoded image to
+ grayscale or color
+`to-rgb`: boolean - Whether to convert decoded image to mxnet's default RGB
+format (instead of opencv's default BGR)
+`output`: nil or `NDArray`
+returns: `NDArray` with dtype uint8
+
+   Ex:
+ (read \"cat.jpg\")
+ (read \"cat.jpg\" {:color :grayscale})
+ (read \"cat.jpg\" {:color :color :output nd})"
+  ([filename {:keys [color to-rgb output]
+  :or {color :color to-rgb nil output nil}
+  :as opts}]
+   (util/validate! ::filename filename "Invalid 

[GitHub] [incubator-mxnet] kedarbellare merged pull request #15121: [clojure] clojurify function names in image.clj namespace

2019-06-04 Thread GitBox
kedarbellare merged pull request #15121: [clojure] clojurify function names in 
image.clj namespace
URL: https://github.com/apache/incubator-mxnet/pull/15121
 
 
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-06-04 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new f72f94f  Bump the publish timestamp.
f72f94f is described below

commit f72f94f9a7caa44615ea424e0a0ee284b483e4bc
Author: mxnet-ci 
AuthorDate: Tue Jun 4 13:16:06 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..9eb57a0
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Jun  4 13:16:05 UTC 2019



[GitHub] [incubator-mxnet] junrushao1994 commented on issue #15143: dmlc::type_name_helper specialization of mxnet::tuple should not be disabled for MSVC

2019-06-04 Thread GitBox
junrushao1994 commented on issue #15143: dmlc::type_name_helper 
specialization of mxnet::tuple should not be disabled for MSVC
URL: 
https://github.com/apache/incubator-mxnet/issues/15143#issuecomment-498638720
 
 
   (CC: @reminisce)
   
   Thanks for bringing this up! Please be aware that I am not the original 
author of the file, and I am not super familiar with MSVC.
   
   @reminisce Is there any reason that we disable this?




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #15146: Installation problem on Windows 7

2019-06-04 Thread GitBox
mxnet-label-bot commented on issue #15146: Installation problem on Windows 7
URL: 
https://github.com/apache/incubator-mxnet/issues/15146#issuecomment-498627824
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Installation



