[incubator-mxnet] branch master updated: Update LICENSE (#10370)

2018-04-02 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new e0df199  Update LICENSE (#10370)
e0df199 is described below

commit e0df19984d5fe7377c53c36472210764255f2b2a
Author: Haibin Lin 
AuthorDate: Mon Apr 2 12:23:44 2018 -0700

Update LICENSE (#10370)
---
 LICENSE | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/LICENSE b/LICENSE
index b783e3c..a9ec986 100644
--- a/LICENSE
+++ b/LICENSE
@@ -224,7 +224,7 @@
 7. 3rdparty/mshadow - For details, see, 3rdparty/mshadow/LICENSE
 8. 3rdparty/nnvm/dmlc-core - For details, see, 3rdparty/nnvm/dmlc-core/LICENSE
 9. 3rdparty/nnvm - For details, see, 3rdparty/nnvm/LICENSE
-10. nnvm-fusion - For details, see, 3rdparty/nnvm/plugin/nnvm-fusion/LICENSE
+10. 3rdparty/nnvm/plugin/nnvm-fusion - For details, see, 3rdparty/nnvm/plugin/nnvm-fusion/LICENSE
 11. 3rdparty/ps-lite - For details, see, 3rdparty/ps-lite/LICENSE
 12. 3rdparty/nnvm/tvm - For details, see, 3rdparty/nnvm/tvm/LICENSE
 13. googlemock scripts/generator - For details, see, 3rdparty/googletest/googlemock/scripts/generator/LICENSE

-- 
To stop receiving notification emails like this one, please contact
j...@apache.org.


[GitHub] piiswrong closed pull request #10370: [MXNET-16] Update LICENSE for nnvm-fusion path

2018-04-02 Thread GitBox
piiswrong closed pull request #10370: [MXNET-16] Update LICENSE for nnvm-fusion 
path
URL: https://github.com/apache/incubator-mxnet/pull/10370
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/LICENSE b/LICENSE
index b783e3ce3ca..a9ec986404e 100644
--- a/LICENSE
+++ b/LICENSE
@@ -224,7 +224,7 @@
 7. 3rdparty/mshadow - For details, see, 3rdparty/mshadow/LICENSE
 8. 3rdparty/nnvm/dmlc-core - For details, see, 3rdparty/nnvm/dmlc-core/LICENSE
 9. 3rdparty/nnvm - For details, see, 3rdparty/nnvm/LICENSE
-10. nnvm-fusion - For details, see, 3rdparty/nnvm/plugin/nnvm-fusion/LICENSE
+10. 3rdparty/nnvm/plugin/nnvm-fusion - For details, see, 3rdparty/nnvm/plugin/nnvm-fusion/LICENSE
 11. 3rdparty/ps-lite - For details, see, 3rdparty/ps-lite/LICENSE
 12. 3rdparty/nnvm/tvm - For details, see, 3rdparty/nnvm/tvm/LICENSE
 13. googlemock scripts/generator - For details, see, 3rdparty/googletest/googlemock/scripts/generator/LICENSE


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil opened a new pull request #10372: Adding CNN-character level example notebook

2018-04-02 Thread GitBox
ThomasDelteil opened a new pull request #10372: Adding CNN-character level 
example notebook
URL: https://github.com/apache/incubator-mxnet/pull/10372
 
 
   adding an example to the example list




[GitHub] piiswrong commented on a change in pull request #10315: [MXNET-249] Add inplace support to mkldnn sum

2018-04-02 Thread GitBox
piiswrong commented on a change in pull request #10315: [MXNET-249] Add inplace 
support to mkldnn sum
URL: https://github.com/apache/incubator-mxnet/pull/10315#discussion_r178621496
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_sum.cc
 ##
 @@ -49,23 +49,42 @@ void Sum(const mkldnn::memory &arr1, const mkldnn::memory &arr2,
 void MKLDNNSumForward(const nnvm::NodeAttrs& attrs, const OpContext &ctx,
                       const std::vector<NDArray> &inputs, const OpReqType &req,
                       const NDArray &out_data) {
+  if (req == kNullOp) {
+    return;
+  }
+
   TmpMemMgr::Get()->Init(ctx.requested[0]);
   std::vector<mkldnn::primitive::at> in_prims;
   std::vector<mkldnn::memory::primitive_desc> in_pds(inputs.size());
   std::vector<float> scales(inputs.size(), 1);
   in_prims.reserve(inputs.size());
+  bool pd_same = true;
   for (size_t i = 0; i < inputs.size(); i++) {
     auto in_mem = inputs[i].GetMKLDNNData();
     in_prims.push_back(*in_mem);
     in_pds[i] = in_mem->get_primitive_desc();
+    // pd_same = pd_same && (in_pds[i] == in_pds[0]);
 
 Review comment:
   please remove the code instead of commenting it out




[GitHub] piiswrong closed pull request #10336: Fix MKLDNN sigmoid/softrelu issue

2018-04-02 Thread GitBox
piiswrong closed pull request #10336: Fix MKLDNN sigmoid/softrelu issue
URL: https://github.com/apache/incubator-mxnet/pull/10336
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/nn/activation-inl.h b/src/operator/nn/activation-inl.h
index 89a369c6717..32a7a5ad617 100644
--- a/src/operator/nn/activation-inl.h
+++ b/src/operator/nn/activation-inl.h
@@ -201,7 +201,7 @@ void ActivationGradCompute(const nnvm::NodeAttrs& attrs,
                            const std::vector<TBlob>& inputs,
                            const std::vector<OpReqType>& req,
                            const std::vector<TBlob>& outputs) {
-#if MXNET_USE_CUDNN == 1
+#if (MXNET_USE_CUDNN == 1 || MXNET_USE_MKLDNN == 1)
   CHECK_EQ(inputs.size(), 3U);
 #else
   CHECK_EQ(inputs.size(), 2U);
diff --git a/src/operator/nn/activation.cc b/src/operator/nn/activation.cc
index 89059321b69..08028265c48 100644
--- a/src/operator/nn/activation.cc
+++ b/src/operator/nn/activation.cc
@@ -44,7 +44,7 @@ struct ActivationGrad {
                                           const std::vector<nnvm::NodeEntry>& ograds) const {
     std::vector<nnvm::NodeEntry> heads(ograds.begin(), ograds.end());
     heads.emplace_back(nnvm::NodeEntry{n, activation::kOut, 0});
-#if MXNET_USE_CUDNN == 1
+#if (MXNET_USE_CUDNN == 1 || MXNET_USE_MKLDNN == 1)
     heads.push_back(n->inputs[activation::kData]);
 #endif
     return MakeGradNode(op_name, n, heads, n->attrs.dict);
@@ -74,15 +74,11 @@ void ActivationGradComputeExCPU(const nnvm::NodeAttrs& attrs,
                                 const std::vector<NDArray>& inputs,
                                 const std::vector<OpReqType>& req,
                                 const std::vector<NDArray>& outputs) {
-#if MXNET_USE_CUDNN == 1
   CHECK_EQ(inputs.size(), 3U);
-#else
-  CHECK_EQ(inputs.size(), 2U);
-#endif
   const ActivationParam& param = nnvm::get<ActivationParam>(attrs.parsed);
   if (SupportMKLDNN(inputs[0])) {
     MKLDNN_OPCHECK_INIT(true, outputs.size(), inputs, outputs);
-    MKLDNNActivationBackward(attrs, ctx, inputs[0], inputs[1], req[0],
+    MKLDNNActivationBackward(attrs, ctx, inputs[0], inputs[2], req[0],
                              outputs[0]);
     MKLDNN_OPCHECK_RUN(ActivationGradCompute<cpu>, attrs, ctx, inputs, req, outputs);
     return;
@@ -116,13 +112,13 @@ inline static bool BackwardActStorageType(const nnvm::NodeAttrs& attrs,
                                           DispatchMode* dispatch_mode,
                                           std::vector<int> *in_attrs,
                                           std::vector<int> *out_attrs) {
-#if MXNET_USE_CUDNN == 1
+#if (MXNET_USE_CUDNN == 1 || MXNET_USE_MKLDNN == 1)
   CHECK_EQ(in_attrs->size(), 3U);
 #else
   CHECK_EQ(in_attrs->size(), 2U);
 #endif
   CHECK_EQ(out_attrs->size(), 1U);
-#if MXNET_USE_CUDNN == 1
+#if (MXNET_USE_CUDNN == 1 || MXNET_USE_MKLDNN == 1)
   bool ret = ElemwiseStorageType<3, 1, false, false, false>(attrs, dev_mask,
                                                             dispatch_mode,
                                                             in_attrs, out_attrs);
diff --git a/src/operator/nn/mkldnn/mkldnn_act.cc b/src/operator/nn/mkldnn/mkldnn_act.cc
index 8c19850ced3..9be5bfbc150 100644
--- a/src/operator/nn/mkldnn/mkldnn_act.cc
+++ b/src/operator/nn/mkldnn/mkldnn_act.cc
@@ -43,13 +43,10 @@ namespace mxnet {
 namespace op {
 
 bool SupportMKLDNNAct(const ActivationParam& param) {
-  // We only enable ReLU for now. It seems other activations have some precision
-  // problems.
-  return param.act_type == activation::kReLU;
-#if 0
+  return param.act_type == activation::kReLU
       || param.act_type == activation::kSigmoid
-      || param.act_type == activation::kSoftReLU;
-#endif
+      || param.act_type == activation::kSoftReLU
+      || param.act_type == activation::kTanh;
 }
 
 static inline mkldnn::algorithm GetMKLDNNActAlgo(const ActivationParam& param) {
diff --git a/tests/python/unittest/test_gluon.py b/tests/python/unittest/test_gluon.py
index 952fdf7e366..d91b3f02cd3 100644
--- a/tests/python/unittest/test_gluon.py
+++ b/tests/python/unittest/test_gluon.py
@@ -717,8 +717,8 @@ def test_lambda():
 
     input_data = mx.nd.random.uniform(shape=(2, 3, 5, 7))
     out1, out2, out3 = net1(input_data), net2(input_data), net3(input_data)
-    assert_almost_equal(out1.asnumpy(), out2.asnumpy())
-    assert_almost_equal(out1.asnumpy(), out3.asnumpy())
+    assert_almost_equal(out1.asnumpy(), out2.asnumpy(), rtol=1e-3)
+    assert_almost_equal(out1.asnumpy(), out3.asnumpy(), rtol=1e-3)
 
 
 @with_seed()


 



[incubator-mxnet] branch master updated: Fix MKLDNN sigmoid/softrelu issue (#10336)

2018-04-02 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new e0df25c  Fix MKLDNN sigmoid/softrelu issue (#10336)
e0df25c is described below

commit e0df25c0175e2a7f2227e3cb160de26cd40d4408
Author: Jin Huang <34262351+jinhuang...@users.noreply.github.com>
AuthorDate: Tue Apr 3 03:02:44 2018 +0800

Fix MKLDNN sigmoid/softrelu issue (#10336)

* Fix MKLDNN sigmoid/softrelu issue

* Enable Sigmoid and SoftRelu for MKLDNN

* Add activation kData for backward calculation for MKLDNN

* Add tanh support for MKLDNN activation

* Adjust rtol to pass tanh tests for MKLDNN
---
 src/operator/nn/activation-inl.h     |  2 +-
 src/operator/nn/activation.cc        | 12 ++++--------
 src/operator/nn/mkldnn/mkldnn_act.cc |  9 +++------
 tests/python/unittest/test_gluon.py  |  4 ++--
 4 files changed, 10 insertions(+), 17 deletions(-)

diff --git a/src/operator/nn/activation-inl.h b/src/operator/nn/activation-inl.h
index 89a369c..32a7a5a 100644
--- a/src/operator/nn/activation-inl.h
+++ b/src/operator/nn/activation-inl.h
@@ -201,7 +201,7 @@ void ActivationGradCompute(const nnvm::NodeAttrs& attrs,
                            const std::vector<TBlob>& inputs,
                            const std::vector<OpReqType>& req,
                            const std::vector<TBlob>& outputs) {
-#if MXNET_USE_CUDNN == 1
+#if (MXNET_USE_CUDNN == 1 || MXNET_USE_MKLDNN == 1)
   CHECK_EQ(inputs.size(), 3U);
 #else
   CHECK_EQ(inputs.size(), 2U);
diff --git a/src/operator/nn/activation.cc b/src/operator/nn/activation.cc
index 8905932..0802826 100644
--- a/src/operator/nn/activation.cc
+++ b/src/operator/nn/activation.cc
@@ -44,7 +44,7 @@ struct ActivationGrad {
                                           const std::vector<nnvm::NodeEntry>& ograds) const {
     std::vector<nnvm::NodeEntry> heads(ograds.begin(), ograds.end());
     heads.emplace_back(nnvm::NodeEntry{n, activation::kOut, 0});
-#if MXNET_USE_CUDNN == 1
+#if (MXNET_USE_CUDNN == 1 || MXNET_USE_MKLDNN == 1)
     heads.push_back(n->inputs[activation::kData]);
 #endif
     return MakeGradNode(op_name, n, heads, n->attrs.dict);
@@ -74,15 +74,11 @@ void ActivationGradComputeExCPU(const nnvm::NodeAttrs& attrs,
                                 const std::vector<NDArray>& inputs,
                                 const std::vector<OpReqType>& req,
                                 const std::vector<NDArray>& outputs) {
-#if MXNET_USE_CUDNN == 1
   CHECK_EQ(inputs.size(), 3U);
-#else
-  CHECK_EQ(inputs.size(), 2U);
-#endif
   const ActivationParam& param = nnvm::get<ActivationParam>(attrs.parsed);
   if (SupportMKLDNN(inputs[0])) {
     MKLDNN_OPCHECK_INIT(true, outputs.size(), inputs, outputs);
-    MKLDNNActivationBackward(attrs, ctx, inputs[0], inputs[1], req[0],
+    MKLDNNActivationBackward(attrs, ctx, inputs[0], inputs[2], req[0],
                              outputs[0]);
     MKLDNN_OPCHECK_RUN(ActivationGradCompute<cpu>, attrs, ctx, inputs, req, outputs);
     return;
@@ -116,13 +112,13 @@ inline static bool BackwardActStorageType(const nnvm::NodeAttrs& attrs,
                                           DispatchMode* dispatch_mode,
                                           std::vector<int> *in_attrs,
                                           std::vector<int> *out_attrs) {
-#if MXNET_USE_CUDNN == 1
+#if (MXNET_USE_CUDNN == 1 || MXNET_USE_MKLDNN == 1)
   CHECK_EQ(in_attrs->size(), 3U);
 #else
   CHECK_EQ(in_attrs->size(), 2U);
 #endif
   CHECK_EQ(out_attrs->size(), 1U);
-#if MXNET_USE_CUDNN == 1
+#if (MXNET_USE_CUDNN == 1 || MXNET_USE_MKLDNN == 1)
   bool ret = ElemwiseStorageType<3, 1, false, false, false>(attrs, dev_mask,
                                                             dispatch_mode,
                                                             in_attrs, out_attrs);
diff --git a/src/operator/nn/mkldnn/mkldnn_act.cc b/src/operator/nn/mkldnn/mkldnn_act.cc
index 8c19850..9be5bfb 100644
--- a/src/operator/nn/mkldnn/mkldnn_act.cc
+++ b/src/operator/nn/mkldnn/mkldnn_act.cc
@@ -43,13 +43,10 @@ namespace mxnet {
 namespace op {
 
 bool SupportMKLDNNAct(const ActivationParam& param) {
-  // We only enable ReLU for now. It seems other activations have some precision
-  // problems.
-  return param.act_type == activation::kReLU;
-#if 0
+  return param.act_type == activation::kReLU
       || param.act_type == activation::kSigmoid
-      || param.act_type == activation::kSoftReLU;
-#endif
+      || param.act_type == activation::kSoftReLU
+      || param.act_type == activation::kTanh;
 }
 
 static inline mkldnn::algorithm GetMKLDNNActAlgo(const ActivationParam& param) {
diff --git a/tests/python/unittest/test_gluon.py b/tests/python/unittest/test_gluon.py
index 952fdf7..d91b3f0 100644
--- a/tests/python/unittest/test_gluon.py
+++ b/tests/python/unittest/test_gluon.py
@@ -717,8 +717,8 @@ def test_lambda():
 
 input_data = mx.nd.random.uniform(shape=(2, 3, 5, 7))
 out1, out2, out3 = net1(input_data), net2(input_data), 

[GitHub] cgraywang commented on issue #10363: Fix windows setup doc using VS 2017

2018-04-02 Thread GitBox
cgraywang commented on issue #10363: Fix windows setup doc using VS 2017
URL: https://github.com/apache/incubator-mxnet/pull/10363#issuecomment-378009760
 
 
   @piiswrong Put 2017 as option 1, and added one more detail about setting up 
the VS 2017 version suggested by @yajiedesign.




[GitHub] indhub commented on issue #9733: BUG in MultiBoxTargetForward when there is single box label

2018-04-02 Thread GitBox
indhub commented on issue #9733: BUG in MultiBoxTargetForward when there is 
single box label
URL: 
https://github.com/apache/incubator-mxnet/issues/9733#issuecomment-378007151
 
 
   @zhreshold Thoughts?
   




[GitHub] piiswrong commented on issue #10345: allow block setattr to reset the prefix when setting new block

2018-04-02 Thread GitBox
piiswrong commented on issue #10345: allow block setattr to reset the prefix 
when setting new block
URL: https://github.com/apache/incubator-mxnet/pull/10345#issuecomment-378005952
 
 
   We should raise a warning when assigning blocks with a non-matching prefix.
   In Python, copy on assignment only happens for builtin types. This kind of 
hidden protocol that decides when to copy is a hack.
   
   Also, `net2.block1 = net1.block1` is itself a hack. The right approach is to 
define a new block and compose them, as sketched below.
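
   For illustration, a minimal sketch of that composition approach (the
   `Composed` class, shapes, and layer sizes here are hypothetical, not from
   the PR):

```python
import mxnet as mx
from mxnet.gluon import nn

# Compose an existing block into a new network instead of
# reassigning attributes across networks (net2.block1 = net1.block1).
class Composed(nn.Block):
    def __init__(self, shared, **kwargs):
        super(Composed, self).__init__(**kwargs)
        with self.name_scope():
            self.shared = shared        # reuse the existing block
            self.head = nn.Dense(10)    # new block owned by this network

    def forward(self, x):
        return self.head(self.shared(x))

shared = nn.Dense(64, activation='relu')
net = Composed(shared)
net.initialize()
out = net(mx.nd.ones((2, 32)))
```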




[GitHub] anirudhacharya commented on issue #9358: Why does running 1 round of an MXNET model training produce Train-mse=NaN?

2018-04-02 Thread GitBox
anirudhacharya commented on issue #9358: Why does running 1 round of an MXNET 
model training produce Train-mse=NaN?
URL: 
https://github.com/apache/incubator-mxnet/issues/9358#issuecomment-378004852
 
 
   @alexmosc can you verify the bug fix and close the issue accordingly.




[incubator-mxnet] branch master updated: Replace std::swap_ranges with memcpy (#10351)

2018-04-02 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new b37b3f5  Replace std::swap_ranges with memcpy (#10351)
b37b3f5 is described below

commit b37b3f5014abc25ce70bb4b89ed96d1fc9ac7fb3
Author: Deokjae Lee <36436141+asitsta...@users.noreply.github.com>
AuthorDate: Tue Apr 3 03:33:05 2018 +0900

Replace std::swap_ranges with memcpy (#10351)
---
 src/operator/random/shuffle_op.cc | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/src/operator/random/shuffle_op.cc b/src/operator/random/shuffle_op.cc
index d2a3e2d..983f879 100644
--- a/src/operator/random/shuffle_op.cc
+++ b/src/operator/random/shuffle_op.cc
@@ -30,6 +30,7 @@
 #include <algorithm>
 #include <random>
 #include <vector>
+#include <cstring>
 #ifdef USE_GNU_PARALLEL_SHUFFLE
   #include <parallel/algorithm>
 #endif
@@ -55,18 +56,24 @@ void Shuffle1D(DType* const out, const index_t size, Rand* const prnd) {
 
 template<typename DType, typename Rand>
 void ShuffleND(DType* const out, const index_t size, const index_t first_axis_len,
-               Rand* const prnd) {
+               Rand* const prnd, const OpContext& ctx) {
   // Fisher-Yates shuffling
+  using namespace mxnet_op;
   const index_t stride = size / first_axis_len;
   auto rand_n = [prnd](index_t n) {
     std::uniform_int_distribution<index_t> dist(0, n - 1);
     return dist(*prnd);
   };
   CHECK_GT(first_axis_len, 0U);
+  const size_t stride_bytes = sizeof(DType) * stride;
+  Tensor<cpu, 1, char> buf =
+    ctx.requested[1].get_space_typed<cpu, 1, char>(Shape1(stride_bytes), ctx.get_stream<cpu>());
   for (index_t i = first_axis_len - 1; i > 0; --i) {
     const index_t j = rand_n(i + 1);
     if (i != j) {
-      std::swap_ranges(out + stride * i, out + stride * (i + 1), out + stride * j);
+      std::memcpy(buf.dptr_, out + stride * i, stride_bytes);
+      std::memcpy(out + stride * i, out + stride * j, stride_bytes);
+      std::memcpy(out + stride * j, buf.dptr_, stride_bytes);
     }
   }
 }
@@ -97,7 +104,7 @@ void ShuffleForwardCPU(const nnvm::NodeAttrs& attrs,
     if (input_shape.ndim() == 1) {
       Shuffle1D(out.dptr_, size, &prnd);
     } else {
-      ShuffleND(out.dptr_, size, first_axis_len, &prnd);
+      ShuffleND(out.dptr_, size, first_axis_len, &prnd, ctx);
     }
   });
 }
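
For readers comparing the two approaches, a minimal NumPy sketch (not part of
the commit) of the same Fisher-Yates row shuffle, swapping rows through an
explicit scratch buffer just like the three `memcpy` calls above:

```python
import numpy as np

def shuffle_rows(arr, rng):
    """Fisher-Yates shuffle along axis 0 using a reusable swap buffer."""
    n = arr.shape[0]
    buf = np.empty_like(arr[0])        # scratch row, like `buf` above
    for i in range(n - 1, 0, -1):
        j = rng.integers(0, i + 1)     # j uniform in [0, i]
        if i != j:
            buf[...] = arr[i]          # three copies replace swap_ranges
            arr[i] = arr[j]
            arr[j] = buf
    return arr

print(shuffle_rows(np.arange(12).reshape(4, 3), np.random.default_rng(0)))
```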



[GitHub] piiswrong closed pull request #10351: [MXNET-259] Performance improvement of random.shuffle

2018-04-02 Thread GitBox
piiswrong closed pull request #10351: [MXNET-259] Performance improvement of 
random.shuffle
URL: https://github.com/apache/incubator-mxnet/pull/10351
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/random/shuffle_op.cc b/src/operator/random/shuffle_op.cc
index d2a3e2d3df0..983f879888c 100644
--- a/src/operator/random/shuffle_op.cc
+++ b/src/operator/random/shuffle_op.cc
@@ -30,6 +30,7 @@
 #include <algorithm>
 #include <random>
 #include <vector>
+#include <cstring>
 #ifdef USE_GNU_PARALLEL_SHUFFLE
   #include <parallel/algorithm>
 #endif
@@ -55,18 +56,24 @@ void Shuffle1D(DType* const out, const index_t size, Rand* const prnd) {
 
 template<typename DType, typename Rand>
 void ShuffleND(DType* const out, const index_t size, const index_t first_axis_len,
-               Rand* const prnd) {
+               Rand* const prnd, const OpContext& ctx) {
   // Fisher-Yates shuffling
+  using namespace mxnet_op;
   const index_t stride = size / first_axis_len;
   auto rand_n = [prnd](index_t n) {
     std::uniform_int_distribution<index_t> dist(0, n - 1);
     return dist(*prnd);
   };
   CHECK_GT(first_axis_len, 0U);
+  const size_t stride_bytes = sizeof(DType) * stride;
+  Tensor<cpu, 1, char> buf =
+    ctx.requested[1].get_space_typed<cpu, 1, char>(Shape1(stride_bytes), ctx.get_stream<cpu>());
   for (index_t i = first_axis_len - 1; i > 0; --i) {
     const index_t j = rand_n(i + 1);
     if (i != j) {
-      std::swap_ranges(out + stride * i, out + stride * (i + 1), out + stride * j);
+      std::memcpy(buf.dptr_, out + stride * i, stride_bytes);
+      std::memcpy(out + stride * i, out + stride * j, stride_bytes);
+      std::memcpy(out + stride * j, buf.dptr_, stride_bytes);
     }
   }
 }
@@ -97,7 +104,7 @@ void ShuffleForwardCPU(const nnvm::NodeAttrs& attrs,
     if (input_shape.ndim() == 1) {
       Shuffle1D(out.dptr_, size, &prnd);
     } else {
-      ShuffleND(out.dptr_, size, first_axis_len, &prnd);
+      ShuffleND(out.dptr_, size, first_axis_len, &prnd, ctx);
     }
   });
 }


 




[GitHub] piiswrong commented on a change in pull request #10360: extend ndarray in-place reshape

2018-04-02 Thread GitBox
piiswrong commented on a change in pull request #10360: extend ndarray in-place 
reshape
URL: https://github.com/apache/incubator-mxnet/pull/10360#discussion_r178609880
 
 

 ##
 File path: python/mxnet/ndarray/ndarray.py
 ##
 @@ -996,10 +994,10 @@ def reshape(self, *shape, **kwargs):
         handle = NDArrayHandle()
 
         # Actual reshape
-        check_call(_LIB.MXNDArrayReshape(self.handle,
-                                         len(shape),
-                                         c_array_buf(ctypes.c_int, native_array('i', shape)),
-                                         ctypes.byref(handle)))
+        check_call(_LIB.MXNDArrayReshape64(self.handle,
+                                           len(shape),
+                                           c_array(ctypes.c_longlong, shape),
 
 Review comment:
   does ctypes have c_int64?
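
   (It does: `ctypes.c_int64` exists and, on the common platforms where
   `long long` is 64-bit, it is simply an alias of `ctypes.c_longlong`.
   A quick check:)

```python
import ctypes

# On typical platforms ctypes aliases c_int64 to c_longlong,
# and both are 8 bytes wide.
print(ctypes.c_int64 is ctypes.c_longlong)  # True on typical platforms
print(ctypes.sizeof(ctypes.c_int64))        # 8
```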




[GitHub] piiswrong commented on a change in pull request #10360: extend ndarray in-place reshape

2018-04-02 Thread GitBox
piiswrong commented on a change in pull request #10360: extend ndarray in-place 
reshape
URL: https://github.com/apache/incubator-mxnet/pull/10360#discussion_r178609994
 
 

 ##
 File path: python/mxnet/ndarray/ndarray.py
 ##
 @@ -943,13 +943,8 @@ def reshape(self, *shape, **kwargs):
         shape : tuple of int, or n ints
             The new shape should not change the array size, namely
             ``np.prod(new_shape)`` should be equal to ``np.prod(self.shape)``.
-
-            One dimension can be -1. In this case, the value is inferred
-            from the length of the array and remaining dimensions.
-
-            0 Dimensions in shape will be copied from original shape, i.e.
-            if x.shape == (3, 4, 5), x.reshape((0, 20)).shape will be (3, 20).
-
+            Some dimensions of the shape can take special values from the set {0, -1, -2, -3, -4}.
 
 Review comment:
   paste the doc here instead of using a link
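
   For reference, the special-value semantics being discussed, in brief (a
   sketch of the documented behavior; the array `x` is hypothetical):

```python
import mxnet as mx

x = mx.nd.zeros((3, 4, 5))
# 0 copies the corresponding dimension from the input shape.
print(x.reshape((0, 20)).shape)   # (3, 20)
# -1 infers one dimension from the remaining size.
print(x.reshape((-1, 5)).shape)   # (12, 5)
print(x.reshape((0, -1)).shape)   # (3, 20)
```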




[GitHub] indhub closed pull request #10039: [MXNET-103] Added tutorial on types of data augmentations.

2018-04-02 Thread GitBox
indhub closed pull request #10039: [MXNET-103] Added tutorial on types of data 
augmentations.
URL: https://github.com/apache/incubator-mxnet/pull/10039
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index 8a597e95bfb..db3c8acac95 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -169,6 +169,8 @@ The Gluon and Module tutorials are in Python, but you can 
also find a variety of
 
 - [Train a Linear Regression Model with Sparse 
Symbols](http://mxnet.incubator.apache.org/tutorials/sparse/train.html)
 
+- [Types of Data 
Augmentation](http://mxnet.incubator.apache.org/tutorials/python/types_of_data_augmentation.html)
+
 
 
  
diff --git a/docs/tutorials/python/types_of_data_augmentation.md b/docs/tutorials/python/types_of_data_augmentation.md
new file mode 100644
index 000..9ace047e922
--- /dev/null
+++ b/docs/tutorials/python/types_of_data_augmentation.md
@@ -0,0 +1,379 @@
+
+# Types of Data Augmentation
+
+Data Augmentation is a regularization technique that's used to avoid 
overfitting when training Machine Learning models. Although the technique can 
be applied in a variety of domains, it's very common in Computer Vision, and 
this will be the focus of the tutorial.
+
+Adjustments are made to the original images in the training dataset before 
being used in training. Some example adjustments include translating, cropping, 
scaling, rotating, changing brightness and contrast. We do this to reduce the 
dependence of the model on spurious characteristics; e.g. training data may 
only contain faces that fill 1/4 of the image, so a model trained without 
data augmentation might unhelpfully learn that faces can only be of this size.
+
+After defining some utility functions to visualise the example images, this 
tutorial details each different augmentation that can be used to adjust both 
the position and the colors of images. We discuss augmentations that are 
combined into single functions, and conclude with a FAQ section.
+
+
+```python
+%matplotlib inline
+from matplotlib.pyplot import imshow
+import mxnet as mx  # used version '1.0.0' at time of writing
+import numpy as np
+
+mx.random.seed(42) # set seed for repeatability
+```
+
+We define a utility function below, which will be used for visualising the 
augmentations in the tutorial.
+
+
+```python
+def plot_mx_array(array):
+"""
+Array expected to be height x width x 3 (channels), and values are floats 
between 0 and 255.
+"""
+assert array.shape[2] == 3, "RGB Channel should be last"
+imshow((array.clip(0, 255)/255).asnumpy())
+```
+
+We load an example image; this will be the target for our augmentations in the 
tutorial. 
+
+```python
+!wget 
https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/data_aug/inputs/0.jpg
+```
+
+```python
+example_image = mx.image.imread("./0.jpg")
+assert str(example_image.dtype) == "<class 'numpy.int8'>"
+```
+
+
+You'll notice that the image is loaded with the `numpy.int8` datatype. Some 
functions such as `swapaxes` don't work on `int` types, so we'll convert to 
`float32`, and visualize.
+
+
+```python
+example_image = example_image.astype("float32")
+plot_mx_array(example_image)
+```
+
+
+![png](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/data_aug/outputs/types_of/output_8_0.png)
+
+
+# Position Augmentation
+
+One form of augmentation affects the position of pixel values. Using 
combinations of slicing, scaling, translating, rotating and flipping, the values 
of the original image can be shifted to create new images. Some operations 
(like scaling and rotation) require interpolation as pixels in the new image 
are combinations of pixels in the original image.
+
+### Crop
+
+You can use 
[`mxnet.image.RandomCropAug`](https://mxnet.incubator.apache.org/api/python/image/image.html?highlight=randomcropaug#mxnet.image.RandomCropAug)
 and 
[`mxnet.image.CenterCropAug`](https://mxnet.incubator.apache.org/api/python/image/image.html?highlight=centercropaug#mxnet.image.CenterCropAug)
 to create instances of the Augmenter class, which can be called just like a 
function.
+
+It's worth noting that the randomisation for `RandomCropAug` happens when 
calling the Augmenter, and not at the point of instantiation. You'll end up 
with different images each time you call the Augmenter, so it can't be used to 
apply the same augmentation to another image. You can use 
[`mxnet.random.seed`](https://mxnet.incubator.apache.org/api/python/symbol/random.html?highlight=seed#mxnet.random.seed)
 for random but repeatable augmentations.
+
+`CenterCropAug` is deterministic and just takes the most central crop of the 
given size.
+
+
+```python
+aug = 
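
(The quoted tutorial breaks off above. For context, a sketch of the kind of
call it was introducing, reusing the tutorial's `example_image` and
`plot_mx_array` from earlier:)

```python
# Augmenter instances are callable; CenterCropAug takes a deterministic
# central crop of the requested size.
aug = mx.image.CenterCropAug(size=(220, 220))
aug_image = aug(example_image)
plot_mx_array(aug_image)
```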

[incubator-mxnet] branch master updated: [MXNET-103] Added tutorial on types of data augmentations. (#10039)

2018-04-02 Thread indhub
This is an automated email from the ASF dual-hosted git repository.

indhub pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new fe6220b  [MXNET-103] Added tutorial on types of data augmentations. 
(#10039)
fe6220b is described below

commit fe6220b66d2079f27bdc23823595304abd004bfc
Author: Thom Lane 
AuthorDate: Mon Apr 2 11:11:39 2018 -0700

[MXNET-103] Added tutorial on types of data augmentations. (#10039)

* Added tutorial on types of data augmentations. With images to demonstrate 
effects.

* Removed Gluon references.

Moved custom augmentation to FAQ.

* Added to index.md
---
 docs/tutorials/index.md|   2 +
 .../tutorials/python/types_of_data_augmentation.md | 379 +
 2 files changed, 381 insertions(+)

diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index b254cdd..62cb5eb 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -173,6 +173,8 @@ The Gluon and Module tutorials are in Python, but you can 
also find a variety of
 
 - [Train a linear regression model with sparse 
symbols](http://mxnet.incubator.apache.org/tutorials/sparse/train.html)
 
+- [Types of Data 
Augmentation](http://mxnet.incubator.apache.org/tutorials/python/types_of_data_augmentation.html)
+
 
 
  
diff --git a/docs/tutorials/python/types_of_data_augmentation.md b/docs/tutorials/python/types_of_data_augmentation.md
new file mode 100644
index 000..9ace047
--- /dev/null
+++ b/docs/tutorials/python/types_of_data_augmentation.md
@@ -0,0 +1,379 @@
+
+# Types of Data Augmentation
+
+Data Augmentation is a regularization technique that's used to avoid 
overfitting when training Machine Learning models. Although the technique can 
be applied in a variety of domains, it's very common in Computer Vision, and 
this will be the focus of the tutorial.
+
+Adjustments are made to the original images in the training dataset before 
being used in training. Some example adjustments include translating, cropping, 
scaling, rotating, changing brightness and contrast. We do this to reduce the 
dependence of the model on spurious characteristics; e.g. training data may 
only contain faces that fill 1/4 of the image, so a model trained without 
data augmentation might unhelpfully learn that faces can only be of this size.
+
+After defining some utility functions to visualise the example images, this 
tutorial details each different augmentation that can be used to adjust both 
the position and the colors of images. We discuss augmentations that are 
combined into single functions, and conclude with a FAQ section.
+
+
+```python
+%matplotlib inline
+from matplotlib.pyplot import imshow
+import mxnet as mx  # used version '1.0.0' at time of writing
+import numpy as np
+
+mx.random.seed(42) # set seed for repeatability
+```
+
+We define a utility function below, which will be used for visualising the 
augmentations in the tutorial.
+
+
+```python
+def plot_mx_array(array):
+"""
+Array expected to be height x width x 3 (channels), and values are floats 
between 0 and 255.
+"""
+assert array.shape[2] == 3, "RGB Channel should be last"
+imshow((array.clip(0, 255)/255).asnumpy())
+```
+
+We load an example image; this will be the target for our augmentations in the 
tutorial. 
+
+```python
+!wget 
https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/data_aug/inputs/0.jpg
+```
+
+```python
+example_image = mx.image.imread("./0.jpg")
+assert str(example_image.dtype) == "<class 'numpy.int8'>"
+```
+
+
+You'll notice that the image is loaded with the `numpy.int8` datatype. Some 
functions such as `swapaxes` don't work on `int` types, so we'll convert to 
`float32`, and visualize.
+
+
+```python
+example_image = example_image.astype("float32")
+plot_mx_array(example_image)
+```
+
+
+![png](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/data_aug/outputs/types_of/output_8_0.png)
+
+
+# Position Augmentation
+
+One form of augmentation affects the position of pixel values. Using 
combinations of slicing, scaling, translating, rotating and flipping, the values 
of the original image can be shifted to create new images. Some operations 
(like scaling and rotation) require interpolation as pixels in the new image 
are combinations of pixels in the original image.
+
+### Crop
+
+You can use 
[`mxnet.image.RandomCropAug`](https://mxnet.incubator.apache.org/api/python/image/image.html?highlight=randomcropaug#mxnet.image.RandomCropAug)
 and 
[`mxnet.image.CenterCropAug`](https://mxnet.incubator.apache.org/api/python/image/image.html?highlight=centercropaug#mxnet.image.CenterCropAug)
 to create instances of the Augmenter class, which can be called just like a 
function.
+
+It's worth noting that the randomisation for `RandomCropAug` happens when 

[GitHub] themummy commented on issue #7792: Multiplatform docker based builds

2018-04-02 Thread GitBox
themummy commented on issue #7792: Multiplatform docker based builds
URL: https://github.com/apache/incubator-mxnet/pull/7792#issuecomment-377996886
 
 
   Has anyone successfully amalgamated mxnet for Android x86-64? I need to run 
mxnet on an Android Emulator on Jenkins.




[incubator-mxnet] branch master updated: Update docs for multiple functions (#10362)

2018-04-02 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new e208f4c  Update docs for multiple functions (#10362)
e208f4c is described below

commit e208f4c2a12036cde7dcc4cb3b20721810f17f98
Author: Da Zheng 
AuthorDate: Mon Apr 2 11:05:28 2018 -0700

Update docs for multiple functions (#10362)

* rewrite the comment of Function.

* Update autograd.py

* Update batchnorm_v1.

* Update correlation.

* Update autograd.py
---
 python/mxnet/autograd.py      | 13 ++++++++-----
 src/operator/batch_norm_v1.cc |  2 ++
 src/operator/correlation.cc   |  1 +
 3 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/python/mxnet/autograd.py b/python/mxnet/autograd.py
index 608733d..c0f9a1b 100644
--- a/python/mxnet/autograd.py
+++ b/python/mxnet/autograd.py
@@ -362,11 +362,14 @@ def get_symbol(x):
 
 
 class Function(object):
-"""User-defined differentiable function.
-
-Function allows defining both forward and backward computation for
-custom operators. During gradient computation, the used-defined
-backward function will be used instead of the default chain-rule.
+"""Customize differentiation in autograd.
+
+If you don't want to use the gradients computed by the default
+chain-rule, you can use Function to customize differentiation for
+computation. You define your computation in
+the forward method and provide the customized differentiation
+in the backward method. During gradient computation, autograd will
+use the user-defined backward function instead of the default chain-rule.
 You can also cast to numpy array and back for some operations in
 forward and backward.
 
diff --git a/src/operator/batch_norm_v1.cc b/src/operator/batch_norm_v1.cc
index 9611137..5da4af2 100644
--- a/src/operator/batch_norm_v1.cc
+++ b/src/operator/batch_norm_v1.cc
@@ -49,6 +49,8 @@ DMLC_REGISTER_PARAMETER(BatchNormV1Param);
 MXNET_REGISTER_OP_PROPERTY(BatchNorm_v1, BatchNormV1Prop)
 .describe(R"code(Batch normalization.
 
+This operator is DEPRECATED. Perform BatchNorm on the input.
+
 Normalizes a data batch by mean and variance, and applies a scale ``gamma`` as
 well as offset ``beta``.
 
diff --git a/src/operator/correlation.cc b/src/operator/correlation.cc
index d69206d..be54c05 100644
--- a/src/operator/correlation.cc
+++ b/src/operator/correlation.cc
@@ -175,6 +175,7 @@ For now we consider only a single comparison of two 
patches. The 'correlation' o
 :math:`x_{2}` in the second map is then defined as:
 
 .. math::
+
   c(x_{1}, x_{2}) = \sum_{o \in [-k,k] \times [-k,k]} <f_{1}(x_{1} + o), f_{2}(x_{2} + o)>
 
 for a square patch of size :math:`K:=2k+1`.



[GitHub] piiswrong closed pull request #10362: Update docs for multiple functions

2018-04-02 Thread GitBox
piiswrong closed pull request #10362: Update docs for multiple functions
URL: https://github.com/apache/incubator-mxnet/pull/10362
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/autograd.py b/python/mxnet/autograd.py
index 608733d5c8e..c0f9a1b63c7 100644
--- a/python/mxnet/autograd.py
+++ b/python/mxnet/autograd.py
@@ -362,11 +362,14 @@ def get_symbol(x):
 
 
 class Function(object):
-"""User-defined differentiable function.
-
-Function allows defining both forward and backward computation for
-custom operators. During gradient computation, the used-defined
-backward function will be used instead of the default chain-rule.
+"""Customize differentiation in autograd.
+
+If you don't want to use the gradients computed by the default
+chain-rule, you can use Function to customize differentiation for
+computation. You define your computation in
+the forward method and provide the customized differentiation
+in the backward method. During gradient computation, autograd will
+use the user-defined backward function instead of the default chain-rule.
 You can also cast to numpy array and back for some operations in
 forward and backward.
 
diff --git a/src/operator/batch_norm_v1.cc b/src/operator/batch_norm_v1.cc
index 96111374b07..5da4af25368 100644
--- a/src/operator/batch_norm_v1.cc
+++ b/src/operator/batch_norm_v1.cc
@@ -49,6 +49,8 @@ DMLC_REGISTER_PARAMETER(BatchNormV1Param);
 MXNET_REGISTER_OP_PROPERTY(BatchNorm_v1, BatchNormV1Prop)
 .describe(R"code(Batch normalization.
 
+This operator is DEPRECATED. Perform BatchNorm on the input.
+
 Normalizes a data batch by mean and variance, and applies a scale ``gamma`` as
 well as offset ``beta``.
 
diff --git a/src/operator/correlation.cc b/src/operator/correlation.cc
index d69206dbeee..be54c05c8e9 100644
--- a/src/operator/correlation.cc
+++ b/src/operator/correlation.cc
@@ -175,6 +175,7 @@ For now we consider only a single comparison of two 
patches. The 'correlation' o
 :math:`x_{2}` in the second map is then defined as:
 
 .. math::
+
   c(x_{1}, x_{2}) = \sum_{o \in [-k,k] \times [-k,k]} <f_{1}(x_{1} + o), f_{2}(x_{2} + o)>
 
 for a square patch of size :math:`K:=2k+1`.
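
To make the revised `Function` docstring concrete, here is a small
user-defined function in the style of the example in MXNet's autograd
documentation (the sigmoid is illustrative):

```python
import mxnet as mx
from mxnet import nd, autograd

class Sigmoid(autograd.Function):
    def forward(self, x):
        y = 1 / (1 + nd.exp(-x))
        self.save_for_backward(y)   # stash what backward() will need
        return y

    def backward(self, dy):
        y, = self.saved_tensors
        return dy * y * (1 - y)     # custom gradient replaces the chain rule

x = nd.array([0.0, 1.0, 2.0])
x.attach_grad()
with autograd.record():
    y = Sigmoid()(x)
y.backward()
print(x.grad)
```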


 




[GitHub] szha commented on issue #10364: [MXNET-260]remove use_fast_math

2018-04-02 Thread GitBox
szha commented on issue #10364: [MXNET-260]remove use_fast_math
URL: https://github.com/apache/incubator-mxnet/pull/10364#issuecomment-377996353
 
 
   Makefile doesn't have that flag for its nvcc compilation lines. It doesn't 
seem that the cmake development treats consistency of compilation flags with 
the Makefile as part of the requirement.




[GitHub] piiswrong commented on issue #10363: Fix windows setup doc using VS 2017

2018-04-02 Thread GitBox
piiswrong commented on issue #10363: Fix windows setup doc using VS 2017
URL: https://github.com/apache/incubator-mxnet/pull/10363#issuecomment-377996125
 
 
   I think we should put 2017 as option 1




[GitHub] cjolivier01 commented on a change in pull request #10365: [MXNET-261]Update MKLDNN & Add CPP Test

2018-04-02 Thread GitBox
cjolivier01 commented on a change in pull request #10365: [MXNET-261]Update 
MKLDNN & Add CPP Test
URL: https://github.com/apache/incubator-mxnet/pull/10365#discussion_r178606123
 
 

 ##
 File path: tests/cpp/operator/mkldnn.cc
 ##
 @@ -77,4 +77,11 @@ TEST(MKLDNN_UTIL_FUNC, AlignMem) {
   LOG(INFO) << "Skipped for GCC " << __GNUC__ << "." << __GNUC_MINOR__;
 #endif
 }
+
+TEST(MKLDNN_UTIL_FUNC, MemFormat) {
+  // Check whether the number of formats is correct.
+  CHECK_EQ(mkldnn_format_last, 56);
 
 Review comment:
   Is mkldnn_format_last an enum or constant? If so, you can use 
static_assert somewhere in the code and it doesn't have to be a unit test.




[GitHub] piiswrong commented on issue #10364: [MXNET-260]remove use_fast_math

2018-04-02 Thread GitBox
piiswrong commented on issue #10364: [MXNET-260]remove use_fast_math
URL: https://github.com/apache/incubator-mxnet/pull/10364#issuecomment-377995749
 
 
   Also need to turn it off for Makefile?




[GitHub] piiswrong commented on issue #10367: [MXNET-262] Implement mx.random.seed_context to seed random number generators of a specific device context

2018-04-02 Thread GitBox
piiswrong commented on issue #10367: [MXNET-262] Implement 
mx.random.seed_context to seed random number generators of a specific device 
context
URL: https://github.com/apache/incubator-mxnet/pull/10367#issuecomment-377995564
 
 
   I would add a `ctx=None` argument to seed instead
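
   That is, roughly (a sketch of the proposed API, not the current one):

```python
import mxnet as mx

mx.random.seed(42)                 # current behavior: seed all devices
mx.random.seed(42, ctx=mx.gpu(0))  # proposed: seed only gpu(0)'s generators
```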




[GitHub] anirudhacharya commented on issue #10359: Correct way to detect JSON file contains MXNet model

2018-04-02 Thread GitBox
anirudhacharya commented on issue #10359: Correct way to detect JSON file 
contains MXNet model
URL: 
https://github.com/apache/incubator-mxnet/issues/10359#issuecomment-377994986
 
 
   @szha please tag this - "HowTo", "Visualization".




[GitHub] zheng-da commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test

2018-04-02 Thread GitBox
zheng-da commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test
URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-377994182
 
 
   Are you sure this is an MKL problem? It fails in all configurations.




[GitHub] haojin2 opened a new pull request #10371: [MXNET-263] [WIP] Support for dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU

2018-04-02 Thread GitBox
haojin2 opened a new pull request #10371: [MXNET-263] [WIP] Support for 
dot(dns, csr) = dns and dot(dns, csr.T) = dns on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10371
 
 
   ## Description ##
   Support dot(dns, csr) = dns and dot(dns, csr.T) = dns.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-263]
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Support dot(dns, csr.T) = dns
   - [ ] Support dot(dns, csr) = dns
   - [ ] Tests for dot(dns, csr.T)
   - [ ] Tests for dot(dns, csr)
   
   ## Comments ##
   This only works on GPU, the CPU case will produce a CSR output instead of a 
dense.




[GitHub] anirudhacharya commented on issue #10359: Correct way to detect JSON file contains MXNet model

2018-04-02 Thread GitBox
anirudhacharya commented on issue #10359: Correct way to detect JSON file 
contains MXNet model
URL: 
https://github.com/apache/incubator-mxnet/issues/10359#issuecomment-377993495
 
 
   Here is a potential solution to this problem, without having any 
restrictions on the filename of the json file.
   
   The following is a sample json file that was obtained after serializing an 
mxnet symbol graph. 
   
   ```
   {
 "nodes": [
   {
 "op": "null", 
 "name": "a", 
 "inputs": []
   }, 
   {
 "op": "null", 
 "name": "b", 
 "inputs": []
   }, 
   {
 "op": "elemwise_mul", 
 "name": "_mul5", 
 "inputs": [[0, 0, 0], [1, 0, 0]]
   }, 
   {
 "op": "dot", 
 "name": "dot5", 
 "inputs": [[0, 0, 0], [1, 0, 0]]
   }, 
   {
 "op": "elemwise_add", 
 "name": "_plus11", 
 "inputs": [[2, 0, 0], [3, 0, 0]]
   }, 
   {
 "op": "Reshape", 
 "name": "reshape6", 
 "attrs": {"shape": "(1, 4)"}, 
 "inputs": [[4, 0, 0]]
   }
 ], 
 "arg_nodes": [0, 1], 
 "node_row_ptr": [0, 1, 2, 3, 4, 5, 6], 
 "heads": [[5, 0, 0]], 
 "attrs": {"mxnet_version": ["int", 10200]}
   }
   ```
   
   I have checked with a couple of other serialized graphs and all of them have 
this particular attribute in their json - 
   ```
   "attrs": {"mxnet_version": ["int", 10200]}
   ```
   We could possibly use the presence of this attribute in the json file to 
verify whether it is an MXNet Symbol graph. I doubt Keras models will contain 
that particular attribute in their json.
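
   A sketch of such a check (`looks_like_mxnet_symbol` is a hypothetical
   helper, not an existing API):

```python
import json

def looks_like_mxnet_symbol(path):
    """Heuristically detect an MXNet Symbol graph by its attributes."""
    with open(path) as f:
        graph = json.load(f)
    return "nodes" in graph and "mxnet_version" in graph.get("attrs", {})
```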




[GitHub] eric-haibin-lin commented on issue #9396: inference speed drop after updating mxnet from 0.10.0 to 1.0.0

2018-04-02 Thread GitBox
eric-haibin-lin commented on issue #9396: inference speed drop after updating 
mxnet from 0.10.0 to 1.0.0
URL: 
https://github.com/apache/incubator-mxnet/issues/9396#issuecomment-377986916
 
 
   @DickJC123 gentle ping - any update? Were you able to reproduce the issue? 
Just want to check if there's any update since the next release candidate will 
be cut in a week or so




[GitHub] haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-02 Thread GitBox
haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse 
operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178596771
 
 

 ##
 File path: python/mxnet/ndarray/sparse.py
 ##
 @@ -1151,6 +1177,284 @@ def _ndarray_cls(handle, writable=True, 
stype=_STORAGE_TYPE_UNDEFINED):
 _set_ndarray_class(_ndarray_cls)
 
 
+def add(lhs, rhs):
 
 Review comment:
   Done




[GitHub] reminisce commented on issue #10368: asscalar is very slow

2018-04-02 Thread GitBox
reminisce commented on issue #10368: asscalar is very slow
URL: 
https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-37798
 
 
   For the OOM problem, it seems to be caused by always pushing operations to 
the engine for every mini-batch in an epoch. You need to sync before another 
mini-batch starts; otherwise, new memory is always allocated when a new 
mini-batch starts. You can try defining `train_loss` as a scalar, and replacing 
`train_loss += nd.mean(loss_)` with `train_loss += nd.mean(loss_).asscalar()`.
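
   A self-contained sketch of that pattern (the model, loss, and data here 
   are placeholders, not from the issue):

```python
import mxnet as mx
from mxnet import nd, autograd, gluon

# Placeholder model and data, just to make the loop runnable.
net = gluon.nn.Dense(1)
net.initialize()
loss_fn = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
data = nd.random.uniform(shape=(10, 4))
label = nd.random.uniform(shape=(10, 1))

train_loss = 0.0                      # plain Python scalar, not an NDArray
for _ in range(5):                    # stands in for the mini-batch loop
    with autograd.record():
        loss_ = loss_fn(net(data), label)
    loss_.backward()
    trainer.step(data.shape[0])
    # .asscalar() blocks until this batch finishes, so the engine queue
    # (and the memory held by queued batches) cannot grow without bound.
    train_loss += nd.mean(loss_).asscalar()
```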




[GitHub] nswamy commented on a change in pull request #10343: [MXNET-116] Optimized functions with batch input

2018-04-02 Thread GitBox
nswamy commented on a change in pull request #10343: [MXNET-116] Optimized 
functions with batch input
URL: https://github.com/apache/incubator-mxnet/pull/10343#discussion_r178595028
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ImageClassifier.scala
 ##
 @@ -195,19 +195,28 @@ object ImageClassifier {
 
   /**
 * Loading input batch of images
-* @param inputImageDirPath
-* @return List of buffered images
+* @param inputImageFiles
+* @return List of buffered images-
 */
-  def loadInputBatch(inputImageDirPath: String): List[BufferedImage] = {
+  def loadInputBatch(inputImageFiles: List[File]): List[BufferedImage] = {
 
 Review comment:
   take path to files and return a traversable of Buffered Image
   
   check the file exists in the function




[GitHub] nswamy commented on a change in pull request #10343: [MXNET-116] Optimized functions with batch input

2018-04-02 Thread GitBox
nswamy commented on a change in pull request #10343: [MXNET-116] Optimized 
functions with batch input
URL: https://github.com/apache/incubator-mxnet/pull/10343#discussion_r178594869
 
 

 ##
 File path: 
scala-package/infer/src/main/scala/ml/dmlc/mxnet/infer/ImageClassifier.scala
 ##
 @@ -195,19 +195,28 @@ object ImageClassifier {
 
   /**
 * Loading input batch of images
-* @param inputImageDirPath
-* @return List of buffered images
+* @param inputImageFiles
+* @return List of buffered images-
 */
-  def loadInputBatch(inputImageDirPath: String): List[BufferedImage] = {
+  def loadInputBatch(inputImageFiles: List[File]): List[BufferedImage] = {
+inputImageFiles.map(file => ImageIO.read(file))
+  }
+
+  def generateBatches(inputImageDirPath: String, batchSize: Int = 100): 
List[List[File]] = {
 
 Review comment:
   don't feel like this belongs in ImageClassifier, please move to the example




[GitHub] reminisce commented on issue #10368: asscalar is very slow

2018-04-02 Thread GitBox
reminisce commented on issue #10368: asscalar is very slow
URL: 
https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-377982754
 
 
   This timing does not reflect the correct time spent on training and 
`asscalar` respectively. As said before, all the lines before the second 
`print` are async operations. You need to sync before `asscalar` is called.
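
   A minimal sketch of timing with an explicit synchronization point:

```python
import time
import mxnet as mx
from mxnet import nd

a = nd.random.uniform(shape=(1000, 1000))

t0 = time.time()
b = nd.dot(a, a)   # returns immediately; the operation is only queued
t1 = time.time()
nd.waitall()       # block until all pending work has finished
t2 = time.time()
print('enqueue: %.5fs, compute: %.5fs' % (t1 - t0, t2 - t1))
```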




[GitHub] nju-luke commented on issue #10368: asscalar is very slow

2018-04-02 Thread GitBox
nju-luke commented on issue #10368: asscalar is very slow
URL: 
https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-377979496
 
 
   Thank you for the answer.
   And I have another problem; could you please help me fix it?
   When I use mxnet with the gpu context, the first iteration runs well, but 
then cuda raises an OOM error. Does that mean it didn't release the gpu memory 
after the first iteration? Thank you.
   ```
   what():  [20:36:33] src/engine/./threaded_engine.h:359: [20:36:33] 
src/storage/./pooled_storage_manager.h:107: cudaMalloc failed: out of memory
   
   Stack trace returned 10 entries:
   [bt] (0) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2a9e78) 
[0x7f9f0fea2e78]
   [bt] (1) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2aa288) 
[0x7f9f0fea3288]
   [bt] (2) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x29056cf) 
[0x7f9f124fe6cf]
   [bt] (3) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x29089a8) 
[0x7f9f125019a8]
   [bt] (4) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x6c814f) 
[0x7f9f102c114f]
   [bt] (5) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x24b5568) 
[0x7f9f120ae568]
   [bt] (6) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x24b6313) 
[0x7f9f120af313]
   [bt] (7) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x24b6946) 
[0x7f9f120af946]
   [bt] (8) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x24379b3) 
[0x7f9f120309b3]
   [bt] (9) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x243e2ed) 
[0x7f9f120372ed]
   
   A fatal error occurred in asynchronous engine operation. If you do not know 
what caused this error, you can try set environment variable MXNET_ENGINE_TYPE 
to NaiveEngine and run with debugger (i.e. gdb). This will force all operations 
to be synchronous and backtrace will give you the series of calls that lead to 
this error. Remember to set MXNET_ENGINE_TYPE back to empty after debugging.
   
   Stack trace returned 9 entries:
   [bt] (0) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2a9e78) 
[0x7f9f0fea2e78]
   [bt] (1) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2aa288) 
[0x7f9f0fea3288]
   [bt] (2) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x243e594) 
[0x7f9f12037594]
   [bt] (3) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2442bdb) 
[0x7f9f1203bbdb]
   [bt] (4) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x2442db6) 
[0x7f9f1203bdb6]
   [bt] (5) 
/root/miniconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x243f68b) 
[0x7f9f1203868b]
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-02 Thread GitBox
haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse 
operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178591508
 
 

 ##
 File path: src/operator/tensor/elemwise_binary_broadcast_op_basic.cc
 ##
 @@ -120,8 +120,13 @@ Example::
broadcast_mul(x, y) = [[ 0.,  0.,  0.],
   [ 1.,  1.,  1.]]
 
+Supported sparse operations:
+   broadcast_mul(csr, dense(1D)) = csr
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-02 Thread GitBox
haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse 
operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178591486
 
 

 ##
 File path: python/mxnet/ndarray/sparse.py
 ##
 @@ -1176,6 +1480,7 @@ def zeros(stype, shape, ctx=None, dtype=None, **kwargs):
 >>> mx.nd.sparse.zeros('row_sparse', (1,2), ctx=mx.cpu(), 
dtype='float16').asnumpy()
 array([[ 0.,  0.]], dtype=float16)
 """
+# pylint: disable= no-member, protected-access
 
 Review comment:
   It's not added to every function, so I will keep it this way.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-02 Thread GitBox
haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse 
operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178591121
 
 

 ##
 File path: tests/python/unittest/test_sparse_operator.py
 ##
 @@ -1675,6 +1675,30 @@ def check_sparse_embedding(in_dim, out_dim, batch, 
densities, deterministic):
 check_sparse_embedding(in_dim, out_dim, batch, densities, False)
 
 
+@with_seed()
+def test_sparse_broadcast_mul_div():
+from scipy.sparse import random, csr_matrix
+def check_broadcast_mul(mx_lhs, mx_rhs, np_lhs, np_rhs, dtype):
+assert_almost_equal(mx.nd.broadcast_mul(mx_lhs, mx_rhs).asnumpy(), 
np.multiply(np_lhs, np_rhs), atol=1e-4)
+def check_broadcast_div(mx_lhs, mx_rhs, np_lhs, np_rhs, dtype):
+assert_almost_equal(mx.nd.broadcast_div(mx_lhs, mx_rhs).asnumpy(), 
np.divide(np_lhs, np_rhs), atol=1e-4)
+shape = (4,3)
+np_lhs = random(shape[0], shape[1], density=0.25, dtype=np.float32).tocsr()
+mx_lhs = mx.nd.sparse.csr_matrix((np_lhs.data, np_lhs.indices, 
np_lhs.indptr), shape=shape)
 
 Review comment:
   Done, was not aware of that helper function


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-02 Thread GitBox
haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse 
operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178591078
 
 

 ##
 File path: tests/python/unittest/test_sparse_operator.py
 ##
 @@ -1675,6 +1675,30 @@ def check_sparse_embedding(in_dim, out_dim, batch, 
densities, deterministic):
 check_sparse_embedding(in_dim, out_dim, batch, densities, False)
 
 
+@with_seed()
+def test_sparse_broadcast_mul_div():
+from scipy.sparse import random, csr_matrix
 
 Review comment:
   Removed


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on a change in pull request #10231: [MXNET-16] move dmlc-core & nnvm

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10231: [MXNET-16] move 
dmlc-core & nnvm
URL: https://github.com/apache/incubator-mxnet/pull/10231#discussion_r178591087
 
 

 ##
 File path: LICENSE
 ##
 @@ -220,13 +220,13 @@
 3. scala-package - For details, see, scala-package/LICENSE
 4. Warp-CTC - For details, see, src/operator/contrib/ctc_include/LICENSE
 5. 3rdparty/dlpack - For details, see, 3rdparty/dlpack/LICENSE
-6. dmlc-core - For details, see, dmlc-core/LICENSE
+6. 3rdparty/dmlc-core - For details, see, 3rdparty/dmlc-core/LICENSE
 7. 3rdparty/mshadow - For details, see, 3rdparty/mshadow/LICENSE
-8. nnvm/dmlc-core - For details, see, nnvm/dmlc-core/LICENSE
-9. nnvm - For details, see, nnvm/LICENSE
-10. nnvm-fusion - For details, see, nnvm/plugin/nnvm-fusion/LICENSE
+8. 3rdparty/nnvm/dmlc-core - For details, see, 
3rdparty/nnvm/dmlc-core/LICENSE
+9. 3rdparty/nnvm - For details, see, 3rdparty/nnvm/LICENSE
+10. nnvm-fusion - For details, see, 
3rdparty/nnvm/plugin/nnvm-fusion/LICENSE
 
 Review comment:
   Updated in https://github.com/apache/incubator-mxnet/pull/10370 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin opened a new pull request #10370: Update LICENSE for nnvm-fusion path

2018-04-02 Thread GitBox
eric-haibin-lin opened a new pull request #10370: Update LICENSE for 
nnvm-fusion path
URL: https://github.com/apache/incubator-mxnet/pull/10370
 
 
   ## Description ##
   (Brief description on what this PR is about)
   @mbaijal 
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] nju-luke commented on issue #10368: asscalar is very slow

2018-04-02 Thread GitBox
nju-luke commented on issue #10368: asscalar is very slow
URL: 
https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-377979496
 
 
   This is the version with timing added; the third printed timestamp shows when `asscalar` finishes.
   ``` python
   print(datetime.datetime.now())
   train_loss = nd.array([0.0], ctx=ctx)
   for data, label in tqdm.tqdm(train_data):
   label = label.as_in_context(ctx)
   data = data.as_in_context(ctx)
   with autograd.record():
   output = net(data)
   loss_ = loss(output, label)
   loss_.backward()
   trainer.step(batch_size, ignore_stale_grad=True)
   train_loss += nd.mean(loss_)
   print(datetime.datetime.now())
   train_loss = train_loss.asscalar()
   print(datetime.datetime.now())
   ```
   ``` shell
   2018-04-03 01:00:47.525819
   100%|██| 30/30 [00:01<00:00, 28.00it/s]
   2018-04-03 01:00:48.624371
   2018-04-03 01:02:18.453842
   ```
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] asitstands opened a new issue #10369: Proper seeding of the random number generators for parallel CPU threads and multiple GPU devices

2018-04-02 Thread GitBox
asitstands opened a new issue #10369: Proper seeding of the random number 
generators for parallel CPU threads and multiple GPU devices
URL: https://github.com/apache/incubator-mxnet/issues/10369
 
 
   It looks like the random number generators for CPU threads are seeded with 
consecutive integers. 
https://github.com/apache/incubator-mxnet/blob/5a480f0237c28231bcf486f39f917b65513ee6a3/src/common/random_generator.h#L92
 So if the first generator is seeded with `s`, the `n`-th generator is seeded with `s+n-1`. Is that right? If so, is there a reason? Of course, seeding `std::mt19937` generators with consecutive 32-bit integers does not mean setting their internal states to consecutive integers. However, as far as I know, it is not a good way to seed the generators for parallel use. It could introduce unexpected correlations between the random sequences.
   
   The best way may be to divide the entire random number sequence that `mt19937` generates into `N` non-overlapping subsequences (`N` is the number of generators) and seed each generator to produce one of the subsequences. `mt19937` has a period of `2^19937 - 1`. That is a huge number and, in practice, we can seed the generators to just skip over a large enough number of random numbers instead of dividing by `N`. C++'s random engines, including `mt19937`, provide a `discard(unsigned long long n)` method to skip over `n` numbers. We can use this to seed the parallel generators so that each generator safely produces a non-overlapping sequence of 2^64 numbers, or we can call `discard` multiple times for longer sequences. The method is fast enough, and seeding is usually infrequent. I think that 2^64 is enough for usual use cases. I'm not sure whether the current seeding method provides any guarantee like this.
   
   I think that seeding multiple GPU devices has a similar problem. cuRAND takes care of the aforementioned issue for the threads within a device, but we need to handle multiple devices ourselves. Is the current formula using the magic number and the device id a proven jump function for the Philox4_32_10 generator? 
https://github.com/apache/incubator-mxnet/blob/5a480f0237c28231bcf486f39f917b65513ee6a3/src/resource.cc#L306
 If not, I think that we can utilize the `skipahead` function, which has the same effect as the C++ standard's `discard`.
   
   Random number generation can have subtle issues, and I'm not aware of past discussions on the current implementation, so I could be missing something. If the above argument looks sound to the community, I'll work on this issue.
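   Not MXNet code, but the same idea can be sketched in Python with numpy's `SeedSequence`, which spawns statistically independent child seeds for parallel generators instead of using consecutive integers:
   ```python
   from numpy.random import SeedSequence, default_rng

   # One root seed; spawn() derives independent, non-overlapping child states,
   # avoiding the correlations that consecutive seeds s, s+1, ... can introduce.
   root = SeedSequence(99)
   child_seeds = root.spawn(4)                  # e.g. one per thread or device
   rngs = [default_rng(s) for s in child_seeds]
   print([rng.uniform() for rng in rngs])
   ```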
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on a change in pull request #10293: [MXNET-72] Improve sparse sgd on GPU

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10293: [MXNET-72] 
Improve sparse sgd on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10293#discussion_r178586231
 
 

 ##
 File path: src/operator/optimizer_op.cc
 ##
 @@ -98,6 +98,38 @@ Where the parameter ``momentum`` is the decay rate of 
momentum estimates at each
 .add_argument("mom", "NDArray-or-Symbol", "Momentum")
 .add_arguments(SignumParam::__FIELDS__());
 
+template<int req>
+struct SGDMomStdDnsRspDnsKernel {
+  template<typename DType, typename IType, typename RType>
+  MSHADOW_XINLINE static void Map(int i, index_t row_length, DType* out_data,
+    DType* mom_data, const DType* weight_data, const IType* grad_idx,
+    const DType* grad_data, const RType* prefix_sum, const DType clip_gradient,
+    const DType momentum, const DType lr, const DType wd, const DType rescale_grad) {
+    const DType rate = lr * wd;
+    const bool non_zero = (i == 0) ? prefix_sum[0] > 0
+                                   : prefix_sum[i] > prefix_sum[i-1];
+
+    const index_t row_i = i * row_length;
+    const RType grad_i = (prefix_sum[i]-1) * row_length;
+    for (index_t j = 0; j < row_length; j++) {
+      const index_t data_i = row_i + j;
+      const DType grad = non_zero ? grad_data[grad_i + j]
+                                  : static_cast<DType>(0);
+      if (clip_gradient >= 0.0f) {
+        mom_data[data_i] = momentum * mom_data[data_i]
+                   - rate * weight_data[data_i]
+                   - lr *
+                   mshadow_op::clip::Map(rescale_grad * grad,
 
 Review comment:
   I don't think it's necessary to add/remove that extra line break. Please provide constructive feedback/review comments.
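   For readers following the kernel above, a small numpy illustration of the prefix-sum lookup it relies on (the values are hypothetical):
   ```python
   import numpy as np

   # prefix_sum[i] counts sparse-gradient rows at or before dense row i, so
   # row i carries a gradient iff the count increased, and prefix_sum[i] - 1
   # indexes the packed rows of grad_data.
   grad_idx = np.array([1, 3])        # rows present in the sparse gradient
   num_rows = 5
   occupied = np.zeros(num_rows, dtype=np.int64)
   occupied[grad_idx] = 1
   prefix_sum = np.cumsum(occupied)   # -> [0, 1, 1, 2, 2]
   for i in range(num_rows):
       non_zero = prefix_sum[i] > (prefix_sum[i - 1] if i > 0 else 0)
       print(i, non_zero, prefix_sum[i] - 1 if non_zero else None)
   ```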


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: [MXNET-141] Add tutorial Gluon Datasets and DataLoaders (#10251)

2018-04-02 Thread indhub
This is an automated email from the ASF dual-hosted git repository.

indhub pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 9a0d002  [MXNET-141] Add tutorial Gluon Datasets and DataLoaders 
(#10251)
9a0d002 is described below

commit 9a0d0028695cbc3553ffadf3537b1003bb0006c4
Author: Thom Lane 
AuthorDate: Mon Apr 2 09:53:43 2018 -0700

[MXNET-141] Add tutorial Gluon Datasets and DataLoaders (#10251)

* Added tutorial for Gluon datasets and data loaders.

* Changes as per code review.

* Added link to tutorial in index.md.

* Cut section on RecordIO. Moved num_workers discussion higher up. Removed 
Gluon DataLoader to Module DataIter wrapper.
---
 docs/tutorials/gluon/datasets.md | 310 +++
 docs/tutorials/index.md  |   2 +
 2 files changed, 312 insertions(+)

diff --git a/docs/tutorials/gluon/datasets.md b/docs/tutorials/gluon/datasets.md
new file mode 100644
index 000..248ea02
--- /dev/null
+++ b/docs/tutorials/gluon/datasets.md
@@ -0,0 +1,310 @@
+
+# Gluon `Dataset`s and `DataLoader`
+
+One of the most critical steps for model training and inference is loading the 
data: without data you can't do Machine Learning! In this tutorial we use the 
Gluon API to define a 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset)
 and use a 
[`DataLoader`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataloader#mxnet.gluon.data.DataLoader)
 to iterate through the dataset in mini-batches.
+
+## Introduction to `Dataset`s
+
+[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset)
 objects are used to represent collections of data, and include methods to load 
and parse the data (that is often stored on disk). Gluon has a number of 
different 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset)
 classes for working with image data straight out-of-the-box, but we'll use the 
[`ArrayDataset` [...]
+
+We first start by generating random data `X` (with 3 variables) and 
corresponding random labels `y` to simulate a typical supervised learning task. 
We generate 10 samples and we pass them all to the 
[`ArrayDataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=arraydataset#mxnet.gluon.data.ArrayDataset).
+
+
+```python
+import mxnet as mx
+
+X = mx.random.uniform(shape=(10, 3))
+y = mx.random.uniform(shape=(10, 1))
+dataset = mx.gluon.data.dataset.ArrayDataset(X, y)
+```
+
+A key feature of a 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset)
 is the __*ability to retrieve a single sample given an index*__. Our random 
data and labels were generated in memory, so this 
[`ArrayDataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=arraydataset#mxnet.gluon.data.ArrayDataset)
 doesn't have to load anything from disk, but the interface is the same for all 
[`Dataset`](https [...]
+
+
+```python
+sample_idx = 4
+sample = dataset[sample_idx]
+
+assert len(sample) == 2
+assert sample[0].shape == (3, )
+assert sample[1].shape == (1, )
+print(sample)
+```
+
+(
+ [ 0.4375872   0.29753461  0.89177299]
+ <NDArray 3 @cpu(0)>, 
+ [ 0.83261985]
+ <NDArray 1 @cpu(0)>)
+
+
+We get a tuple of a data sample and its corresponding label, which makes sense 
because we passed the data `X` and the labels `y` in that order when we 
instantiated the 
[`ArrayDataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=arraydataset#mxnet.gluon.data.ArrayDataset).
 We don't usually retrieve individual samples from 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset)
 objects though (unless [...]
+
+## Introduction to `DataLoader`
+
+A 
[`DataLoader`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataloader#mxnet.gluon.data.DataLoader)
 is used to create mini-batches of samples from a 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset),
 and provides a convenient iterator interface for looping these batches. It's 
typically much more efficient to pass a mini-batch of data through a neural 
network than a single sample at a time, be [...]
+
+Another benefit of using 
[`DataLoader`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataloader#mxnet.gluon.data.DataLoader)
 is the ability to easily load data in parallel using 
[`multiprocessing`](https://docs.python.org/3.6/library/multiprocessing.html). 
Just set the `num_workers` parameter to the number of CPUs available on your 
machine for maximum 

[GitHub] indhub closed pull request #10251: [MXNET-141] Add tutorial Gluon Datasets and DataLoaders

2018-04-02 Thread GitBox
indhub closed pull request #10251: [MXNET-141] Add tutorial Gluon Datasets and 
DataLoaders
URL: https://github.com/apache/incubator-mxnet/pull/10251
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/tutorials/gluon/datasets.md b/docs/tutorials/gluon/datasets.md
new file mode 100644
index 000..248ea02f5c1
--- /dev/null
+++ b/docs/tutorials/gluon/datasets.md
@@ -0,0 +1,310 @@
+
+# Gluon `Dataset`s and `DataLoader`
+
+One of the most critical steps for model training and inference is loading the 
data: without data you can't do Machine Learning! In this tutorial we use the 
Gluon API to define a 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset)
 and use a 
[`DataLoader`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataloader#mxnet.gluon.data.DataLoader)
 to iterate through the dataset in mini-batches.
+
+## Introduction to `Dataset`s
+
+[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset)
 objects are used to represent collections of data, and include methods to load 
and parse the data (that is often stored on disk). Gluon has a number of 
different 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset)
 classes for working with image data straight out-of-the-box, but we'll use the 
[`ArrayDataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=arraydataset#mxnet.gluon.data.ArrayDataset)
 to introduce the idea of a 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset).
+
+We first start by generating random data `X` (with 3 variables) and 
corresponding random labels `y` to simulate a typical supervised learning task. 
We generate 10 samples and we pass them all to the 
[`ArrayDataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=arraydataset#mxnet.gluon.data.ArrayDataset).
+
+
+```python
+import mxnet as mx
+
+X = mx.random.uniform(shape=(10, 3))
+y = mx.random.uniform(shape=(10, 1))
+dataset = mx.gluon.data.dataset.ArrayDataset(X, y)
+```
+
+A key feature of a 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset)
 is the __*ability to retrieve a single sample given an index*__. Our random 
data and labels were generated in memory, so this 
[`ArrayDataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=arraydataset#mxnet.gluon.data.ArrayDataset)
 doesn't have to load anything from disk, but the interface is the same for all 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset)s.
+
+
+```python
+sample_idx = 4
+sample = dataset[sample_idx]
+
+assert len(sample) == 2
+assert sample[0].shape == (3, )
+assert sample[1].shape == (1, )
+print(sample)
+```
+
+(
+ [ 0.4375872   0.29753461  0.89177299]
+ <NDArray 3 @cpu(0)>, 
+ [ 0.83261985]
+ <NDArray 1 @cpu(0)>)
+
+
+We get a tuple of a data sample and its corresponding label, which makes sense 
because we passed the data `X` and the labels `y` in that order when we 
instantiated the 
[`ArrayDataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=arraydataset#mxnet.gluon.data.ArrayDataset).
 We don't usually retrieve individual samples from 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset)
 objects though (unless we're quality checking the output samples). Instead we 
use a 
[`DataLoader`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataloader#mxnet.gluon.data.DataLoader).
+
+## Introduction to `DataLoader`
+
+A 
[`DataLoader`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataloader#mxnet.gluon.data.DataLoader)
 is used to create mini-batches of samples from a 
[`Dataset`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataset#mxnet.gluon.data.Dataset),
 and provides a convenient iterator interface for looping these batches. It's 
typically much more efficient to pass a mini-batch of data through a neural 
network than a single sample at a time, because the computation can be 
performed in parallel. A required parameter of 
[`DataLoader`](https://mxnet.incubator.apache.org/api/python/gluon/data.html?highlight=dataloader#mxnet.gluon.data.DataLoader)
 is the size of the mini-batches you want to create, called `batch_size`.
+
+Another benefit of using 
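
A minimal sketch of constructing the `DataLoader` described above from the earlier `dataset` (the parameter values here are illustrative):

```python
import mxnet as mx

X = mx.random.uniform(shape=(10, 3))
y = mx.random.uniform(shape=(10, 1))
dataset = mx.gluon.data.dataset.ArrayDataset(X, y)

# batch_size is required; num_workers > 0 loads batches in parallel workers.
data_loader = mx.gluon.data.DataLoader(dataset, batch_size=5, num_workers=2)
for X_batch, y_batch in data_loader:
    print(X_batch.shape, y_batch.shape)   # (5, 3) (5, 1)
```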

[GitHub] sxjscience commented on issue #10363: Fix windows setup doc using VS 2017

2018-04-02 Thread GitBox
sxjscience commented on issue #10363: Fix windows setup doc using VS 2017
URL: https://github.com/apache/incubator-mxnet/pull/10363#issuecomment-377976220
 
 
   Also, we have another place that shows the windows setup doc. 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-10363/1/install/windows_setup.html
 . @aaronmarkham Will these two be merged later? Should we revise only one page?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sxjscience commented on issue #10363: Fix windows setup doc using VS 2017

2018-04-02 Thread GitBox
sxjscience commented on issue #10363: Fix windows setup doc using VS 2017
URL: https://github.com/apache/incubator-mxnet/pull/10363#issuecomment-377975456
 
 
   Looks great!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sxjscience commented on issue #10364: [MXNET-260]remove use_fast_math

2018-04-02 Thread GitBox
sxjscience commented on issue #10364: [MXNET-260]remove use_fast_math
URL: https://github.com/apache/incubator-mxnet/pull/10364#issuecomment-377974852
 
 
   This solves https://github.com/apache/incubator-mxnet/issues/9572 . Also, I 
find that (-1)**2 and (-1)**3 are not tested in the code. We'd better test 
these cases here 
(https://github.com/apache/incubator-mxnet/blob/master/tests/python/unittest/test_operator.py#L431-L438).
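   For illustration, checks along these lines could cover those cases (a sketch, not the exact test code):
   ```python
   import mxnet as mx
   import numpy as np

   # Integer powers of a negative base, which fast-math optimizations can break.
   x = mx.nd.array([-1.0, -2.0])
   assert np.allclose((x ** 2).asnumpy(), [1.0, 4.0])
   assert np.allclose((x ** 3).asnumpy(), [-1.0, -8.0])
   ```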


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on a change in pull request #10293: [MXNET-72] Improve sparse sgd on GPU

2018-04-02 Thread GitBox
eric-haibin-lin commented on a change in pull request #10293: [MXNET-72] 
Improve sparse sgd on GPU
URL: https://github.com/apache/incubator-mxnet/pull/10293#discussion_r178586231
 
 

 ##
 File path: src/operator/optimizer_op.cc
 ##
 @@ -98,6 +98,38 @@ Where the parameter ``momentum`` is the decay rate of 
momentum estimates at each
 .add_argument("mom", "NDArray-or-Symbol", "Momentum")
 .add_arguments(SignumParam::__FIELDS__());
 
+template<int req>
+struct SGDMomStdDnsRspDnsKernel {
+  template<typename DType, typename IType, typename RType>
+  MSHADOW_XINLINE static void Map(int i, index_t row_length, DType* out_data,
+    DType* mom_data, const DType* weight_data, const IType* grad_idx,
+    const DType* grad_data, const RType* prefix_sum, const DType clip_gradient,
+    const DType momentum, const DType lr, const DType wd, const DType rescale_grad) {
+    const DType rate = lr * wd;
+    const bool non_zero = (i == 0) ? prefix_sum[0] > 0
+                                   : prefix_sum[i] > prefix_sum[i-1];
+
+    const index_t row_i = i * row_length;
+    const RType grad_i = (prefix_sum[i]-1) * row_length;
+    for (index_t j = 0; j < row_length; j++) {
+      const index_t data_i = row_i + j;
+      const DType grad = non_zero ? grad_data[grad_i + j]
+                                  : static_cast<DType>(0);
+      if (clip_gradient >= 0.0f) {
+        mom_data[data_i] = momentum * mom_data[data_i]
+                   - rate * weight_data[data_i]
+                   - lr *
+                   mshadow_op::clip::Map(rescale_grad * grad,
 
 Review comment:
   I don't think it's necessary to add that extra line break. Please provide constructive feedback/review comments.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse operator broadcast_mul/div(csr, dense) = csr

2018-04-02 Thread GitBox
haojin2 commented on a change in pull request #10208: [MXNET-117] Sparse 
operator broadcast_mul/div(csr, dense) = csr
URL: https://github.com/apache/incubator-mxnet/pull/10208#discussion_r178585958
 
 

 ##
 File path: src/operator/tensor/elemwise_binary_broadcast_op_basic.cc
 ##
 @@ -120,8 +120,13 @@ Example::
broadcast_mul(x, y) = [[ 0.,  0.,  0.],
   [ 1.,  1.,  1.]]
 
+Supported sparse operations:
+   broadcast_mul(csr, dense(1D)) = csr
 
 Review comment:
   Done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sxjscience commented on issue #10366: fix bug in sgd

2018-04-02 Thread GitBox
sxjscience commented on issue #10366: fix bug in sgd
URL: https://github.com/apache/incubator-mxnet/pull/10366#issuecomment-377972673
 
 
   I think the temporary workspace is used in the sparse update 
@eric-haibin-lin 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sxjscience commented on issue #10366: fix bug in sgd

2018-04-02 Thread GitBox
sxjscience commented on issue #10366: fix bug in sgd
URL: https://github.com/apache/incubator-mxnet/pull/10366#issuecomment-377972673
 
 
   I think the temporary workspace is used in the code @eric-haibin-lin 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] reminisce commented on issue #10368: asscalar is very slow

2018-04-02 Thread GitBox
reminisce commented on issue #10368: asscalar is very slow
URL: 
https://github.com/apache/incubator-mxnet/issues/10368#issuecomment-377963542
 
 
   How did you time it? `asscalar` is a blocking function call, while the other 
lines are async. It would not be accurate to simply time it line by line in 
Python. You will need to add `mx.nd.waitall()` before the last line to time the 
training part reasonably.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] nju-luke opened a new issue #10368: asscalar is very slow

2018-04-02 Thread GitBox
nju-luke opened a new issue #10368: asscalar is very slow
URL: https://github.com/apache/incubator-mxnet/issues/10368
 
 
   The training part costs about 0.01 second, but the asscalar operation costs a few seconds, sometimes more than 10. I installed the newest version of MXNet via pip, and this happened with both CPU and GPU contexts.
   Does anyone have any idea for fixing this? Thank you very much.
   ``` python
   for epoch in range(num_epochs):
   train_loss = nd.array([0.0], ctx=ctx)
   for data, label in tqdm.tqdm(train_data):
   label = label.as_in_context(ctx)
   data = data.as_in_context(ctx)
   with autograd.record():
   output = net(data)
   loss_ = loss(output, label)
   loss_.backward()
   trainer.step(batch_size, ignore_stale_grad=True)
   train_loss += nd.mean(loss_)
   train_loss = train_loss.asscalar()
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] asitstands opened a new pull request #10367: [MXNET-262] Implement mx.random.seed_context to seed random number generators of a specific device context

2018-04-02 Thread GitBox
asitstands opened a new pull request #10367: [MXNET-262] Implement 
mx.random.seed_context to seed random number generators of a specific device 
context
URL: https://github.com/apache/incubator-mxnet/pull/10367
 
 
   ## Description ##
   
   This PR introduces a function `mx.random.seed_context` to seed the random number generators of a specific device context. `mx.random.seed_context(seed, ctx)` seeds the parallel and non-parallel generators of `ctx`, where `ctx` is optional and defaults to the current context. The random number sequence generated on the device is completely determined by the seed, unlike with the existing `mx.random.seed`, which implicitly combines the device id of each context with the given seed. Using the device id is reasonable for seeding all generators at once with a given seed, but to reproduce the same random number sequence we then need to fix the running context in addition to seeding with the same number. Sometimes setting a context is inconvenient, or the running context may not be deterministic. `mx.random.seed_context` would be helpful in such cases. 
   
   The implementation is simple. It just hands over the given seed to the 
underlying generators. The unit tests are an adaptation of the existing tests 
for `mx.random.seed`.
   
   Here is an example.
   ```python
   >>> # Seeding with `mx.random.seed`. Different results on gpu(0) and gpu(1).
   >>> with mx.Context(mx.gpu(0)):
   ... mx.random.seed(99)
   ... print(mx.nd.random.uniform(0, 1, 3))
   [0.29560053 0.07938761 0.29997164]
   
   >>> with mx.Context(mx.gpu(1)):
   ... mx.random.seed(99)
   ... print(mx.nd.random.uniform(0, 1, 3))
   [0.8797334 0.8857584 0.3797555]
   
   
   >>> # Seeding with `mx.random.seed_context`. Identical results on gpu(0) and 
gpu(1).
   >>> # This seeds the generator of the current context. Other generators are 
not touched.
   >>> # To seed a specific device context, set the optional argument `ctx`.
   >>> with mx.Context(mx.gpu(0)):
   ... mx.random.seed_context(99)
   ... print(mx.nd.random.uniform(0, 1, 3))
   [0.29560053 0.07938761 0.29997164]
   
   >>> with mx.Context(mx.gpu(1)):
   ... mx.random.seed_context(99)
   ... print(mx.nd.random.uniform(0, 1, 3))
   [0.29560053 0.07938761 0.29997164]
   
   ```
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] asitstands commented on a change in pull request #10351: [MXNET-259] Performance improvement of random.shuffle

2018-04-02 Thread GitBox
asitstands commented on a change in pull request #10351: [MXNET-259] 
Performance improvement of random.shuffle
URL: https://github.com/apache/incubator-mxnet/pull/10351#discussion_r178562672
 
 

 ##
 File path: src/operator/random/shuffle_op.cc
 ##
 @@ -55,18 +56,24 @@ void Shuffle1D(DType* const out, const index_t size, Rand* 
const prnd) {
 
 template<typename DType, typename Rand>
 void ShuffleND(DType* const out, const index_t size, const index_t first_axis_len,
-               Rand* const prnd) {
+               Rand* const prnd, const OpContext& ctx) {
   // Fisher-Yates shuffling
+  using namespace mxnet_op;
   const index_t stride = size / first_axis_len;
   auto rand_n = [prnd](index_t n) {
     std::uniform_int_distribution<index_t> dist(0, n - 1);
     return dist(*prnd);
   };
   CHECK_GT(first_axis_len, 0U);
+  const size_t stride_bytes = sizeof(DType) * stride;
+  Tensor<cpu, 1, char> buf =
+    ctx.requested[1].get_space_typed<cpu, 1, char>(Shape1(stride_bytes), ctx.get_stream<cpu>());
 
 Review comment:
   It is 
[there](https://github.com/apache/incubator-mxnet/blob/5a480f0237c28231bcf486f39f917b65513ee6a3/src/operator/random/shuffle_op.cc#L124)
 because the GPU version already needed a temp space.
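   In Python terms, the role of that stride-sized temp space can be sketched like this (a numpy stand-in, not the actual kernel):
   ```python
   import numpy as np

   def shuffle_first_axis(arr, rng):
       # Fisher-Yates over the first axis; buf holds one row (the "stride")
       # during each swap, which is why a row-sized temp space is requested.
       buf = np.empty_like(arr[0])
       for i in range(arr.shape[0] - 1, 0, -1):
           j = rng.integers(0, i + 1)
           buf[...] = arr[i]
           arr[i] = arr[j]
           arr[j] = buf
       return arr

   print(shuffle_first_axis(np.arange(12).reshape(4, 3), np.random.default_rng(0)))
   ```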


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test

2018-04-02 Thread GitBox
marcoabreu commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test
URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-37790
 
 
   I'd vote for using a stable version instead of the latest master. 
   
   @xinyu-intel @pengzhao-intel what's the stability of the master branch?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #10353: CI does not check with CuDNN=0

2018-04-02 Thread GitBox
marcoabreu commented on issue #10353: CI does not check with CuDNN=0
URL: 
https://github.com/apache/incubator-mxnet/issues/10353#issuecomment-377909647
 
 
   I'd propose to add specific excludes (for these non-runnable tests) to the 
nosetest call in runtime_functions.sh. https://pypi.python.org/pypi/nose-exclude
   
   Other options would have been to set an environment variable and check it in 
the test or catch the CuDNN-not-present-exception and drop it. But I'd not be 
in favour of those because this would require CI-specific modifications in 
tests and (in the latter case) could mask valid errors due to CuDNN missing 
while expecting it to be present.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #10353: CI does not check with CuDNN=0

2018-04-02 Thread GitBox
marcoabreu commented on issue #10353: CI does not check with CuDNN=0
URL: 
https://github.com/apache/incubator-mxnet/issues/10353#issuecomment-377909647
 
 
   I'd propose to add specific excludes (for these non-runnable tests) to the 
nosetest call in runtime_functions.sh.
   
   Other options would have been to set an environment variable and check it in 
the test or catch the CuDNN-not-present-exception and drop it. But I'd not be 
in favour of those because this would require CI-specific modifications in 
tests and (in the latter case) could mask valid errors due to CuDNN missing 
while expecting it to be present.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test

2018-04-02 Thread GitBox
marcoabreu commented on issue #10365: [MXNET-261]Update MKLDNN & Add CPP Test
URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-37790
 
 
   I'd vote for using a stable version instead of the latest master


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] solin319 opened a new pull request #10366: fix bug in sgd

2018-04-02 Thread GitBox
solin319 opened a new pull request #10366: fix bug in sgd
URL: https://github.com/apache/incubator-mxnet/pull/10366
 
 
   `ResourceRequest` prevents `sgd_mom_update` from overlapping with the backward computation. #9782
   So can we delete `ResourceRequest` when registering `sgd_mom_update`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on issue #10353: CI does not check with CuDNN=0

2018-04-02 Thread GitBox
zheng-da commented on issue #10353: CI does not check with CuDNN=0
URL: 
https://github.com/apache/incubator-mxnet/issues/10353#issuecomment-377878897
 
 
   We should definitely test the build. The question is whether we should run the code, and how. Some test code can't run without CuDNN, as we experienced in https://github.com/apache/incubator-mxnet/pull/10302


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] jinhuang415 commented on a change in pull request #10336: Fix MKLDNN sigmoid/softrelu issue

2018-04-02 Thread GitBox
jinhuang415 commented on a change in pull request #10336: Fix MKLDNN 
sigmoid/softrelu issue
URL: https://github.com/apache/incubator-mxnet/pull/10336#discussion_r178503829
 
 

 ##
 File path: tests/python/unittest/test_gluon.py
 ##
 @@ -717,8 +717,8 @@ def test_lambda():
 
 input_data = mx.nd.random.uniform(shape=(2, 3, 5, 7))
 out1, out2, out3 = net1(input_data), net2(input_data), net3(input_data)
-assert_almost_equal(out1.asnumpy(), out2.asnumpy())
-assert_almost_equal(out1.asnumpy(), out3.asnumpy())
+assert_almost_equal(out1.asnumpy(), out2.asnumpy(), rtol=1e-3)
+assert_almost_equal(out1.asnumpy(), out3.asnumpy(), rtol=1e-3)
 
 Review comment:
   It would still fail intermittently if set to 1e-4. Checking another similar function, check_consistency(), the threshold there is set to 1e-3 for FP32. Since this test_lambda() case uses FP32 data (mx.nd.random.uniform() outputs FP32 by default), I set the rtol to 1e-3 as well.
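   For reference, a small sketch of what the relative tolerance means here, using numpy's allclose rule |a - b| <= atol + rtol * |b| as an approximation of assert_almost_equal:
   ```python
   import numpy as np

   # For FP32 values of order 1, rtol=1e-3 tolerates differences up to ~1e-3,
   # while a 1e-5 tolerance is near FP32 rounding noise for long op chains.
   a, b = np.float32(0.123456), np.float32(0.123556)
   print(np.isclose(a, b, rtol=1e-3, atol=0.0))   # True
   print(np.isclose(a, b, rtol=1e-5, atol=0.0))   # False
   ```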


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on issue #10365: Update MKLDNN & Add CPP Test

2018-04-02 Thread GitBox
zheng-da commented on issue #10365: Update MKLDNN & Add CPP Test
URL: https://github.com/apache/incubator-mxnet/pull/10365#issuecomment-377878370
 
 
   Should we update MKLDNN to the latest commit on its master branch, or should we always pin it to a certain version tag? What rule should we follow?
   @szha @cjolivier01 @piiswrong 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] xinyu-intel opened a new pull request #10365: Update MKLDNN & Add CPP Test

2018-04-02 Thread GitBox
xinyu-intel opened a new pull request #10365: Update MKLDNN & Add CPP Test
URL: https://github.com/apache/incubator-mxnet/pull/10365
 
 
   ## Description ##
   This PR aims to fix bugs in #8712 by updating MKLDNN to the newest version. CPP tests are added to monitor data format changes of MKL-DNN. [MXNET-98](https://issues.apache.org/jira/browse/MXNET-98)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Update MKLDNN
   - [x] Add cpp tests
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on a change in pull request #10336: Fix MKLDNN sigmoid/softrelu issue

2018-04-02 Thread GitBox
zheng-da commented on a change in pull request #10336: Fix MKLDNN 
sigmoid/softrelu issue
URL: https://github.com/apache/incubator-mxnet/pull/10336#discussion_r178500575
 
 

 ##
 File path: tests/python/unittest/test_gluon.py
 ##
 @@ -717,8 +717,8 @@ def test_lambda():
 
 input_data = mx.nd.random.uniform(shape=(2, 3, 5, 7))
 out1, out2, out3 = net1(input_data), net2(input_data), net3(input_data)
-assert_almost_equal(out1.asnumpy(), out2.asnumpy())
-assert_almost_equal(out1.asnumpy(), out3.asnumpy())
+assert_almost_equal(out1.asnumpy(), out2.asnumpy(), rtol=1e-3)
+assert_almost_equal(out1.asnumpy(), out3.asnumpy(), rtol=1e-3)
 
 Review comment:
   Maybe we should set a smaller tolerance. Changing from 1e-5 to 1e-3 seems to be a big jump.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on a change in pull request #10336: Fix MKLDNN sigmoid/softrelu issue

2018-04-02 Thread GitBox
zheng-da commented on a change in pull request #10336: Fix MKLDNN 
sigmoid/softrelu issue
URL: https://github.com/apache/incubator-mxnet/pull/10336#discussion_r178500575
 
 

 ##
 File path: tests/python/unittest/test_gluon.py
 ##
 @@ -717,8 +717,8 @@ def test_lambda():
 
 input_data = mx.nd.random.uniform(shape=(2, 3, 5, 7))
 out1, out2, out3 = net1(input_data), net2(input_data), net3(input_data)
-assert_almost_equal(out1.asnumpy(), out2.asnumpy())
-assert_almost_equal(out1.asnumpy(), out3.asnumpy())
+assert_almost_equal(out1.asnumpy(), out2.asnumpy(), rtol=1e-3)
+assert_almost_equal(out1.asnumpy(), out3.asnumpy(), rtol=1e-3)
 
 Review comment:
   Maybe we should set a higher precision.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cgraywang commented on issue #10363: Fix windows setup doc using VS 2017

2018-04-02 Thread GitBox
cgraywang commented on issue #10363: Fix windows setup doc using VS 2017
URL: https://github.com/apache/incubator-mxnet/pull/10363#issuecomment-377867492
 
 
   @aaronmarkham 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cgraywang commented on issue #10363: Fix windows setup doc using VS 2017

2018-04-02 Thread GitBox
cgraywang commented on issue #10363: Fix windows setup doc using VS 2017
URL: https://github.com/apache/incubator-mxnet/pull/10363#issuecomment-377866992
 
 
   @sxjscience please take a look at this as well.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yajiedesign opened a new pull request #10364: [MXNET-260]remove use_fast_math

2018-04-02 Thread GitBox
yajiedesign opened a new pull request #10364: [MXNET-260]remove use_fast_math
URL: https://github.com/apache/incubator-mxnet/pull/10364
 
 
   ## Description ##
   remove use_fast_math
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   
   
   ### Changes ###
   
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] cgraywang opened a new pull request #10363: Fix windows setup doc using VS 2017

2018-04-02 Thread GitBox
cgraywang opened a new pull request #10363: Fix windows setup doc using VS 2017
URL: https://github.com/apache/incubator-mxnet/pull/10363
 
 
   ## Description ##
   
   Update the MXNet Windows installation doc by adding details about how to install MXNet from source on Windows using VS 2017. In addition, add more details about the installation with OpenCV, CMake, CUDA, and cuDNN.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on issue #10283: [MXNET-242][Tutorial] Fine-tuning ONNX model in Gluon

2018-04-02 Thread GitBox
anirudhacharya commented on issue #10283: [MXNET-242][Tutorial] Fine-tuning 
ONNX model in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10283#issuecomment-377863372
 
 
   @ThomasDelteil please update this tutorial in the ONNX-MXNet API page - 
https://github.com/apache/incubator-mxnet/blob/master/docs/api/python/contrib/onnx.md#onnx-tutorials


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

