[GitHub] rachelmint opened a new issue #11071: GPU memory usage for VGG16 prediction

2018-05-25 Thread GitBox
rachelmint opened a new issue #11071: GPU memory usage for VGG16 prediction
URL: https://github.com/apache/incubator-mxnet/issues/11071
 
 
   Hi,
   
   I found that image prediction with VGG16 took 770 MB of GPU memory (for 1 
image). It feels like MXNet uses more memory than Caffe in this case. Is that a 
normal memory cost?
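   
   A minimal sketch (not from the original report) of the kind of single-image 
VGG16 inference being described, assuming the Gluon model zoo and a working GPU. 
VGG16 alone holds roughly 0.5 GB of float32 parameters, before counting 
activations or the CUDA context:
   
   ```python
   import mxnet as mx
   from mxnet.gluon.model_zoo import vision
   
   ctx = mx.gpu(0)
   net = vision.vgg16(pretrained=True, ctx=ctx)   # downloads weights on first use
   net.hybridize()
   
   x = mx.nd.random.uniform(shape=(1, 3, 224, 224), ctx=ctx)  # one dummy RGB image
   out = net(x)
   mx.nd.waitall()   # force execution so memory usage can be observed, e.g. with nvidia-smi
   print(out.shape)
   ```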





[GitHub] haojin2 opened a new pull request #11070: bump up version number to 1.3.0

2018-05-25 Thread GitBox
haojin2 opened a new pull request #11070: bump up version number to 1.3.0
URL: https://github.com/apache/incubator-mxnet/pull/11070
 
 
   ## Description ##
   As title.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the 
source of the dataset, the expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-05-25 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 9902a1c  Bump the publish timestamp.
9902a1c is described below

commit 9902a1c2e8d1fe78e34095ba344dd4082f242de4
Author: mxnet-ci 
AuthorDate: Sat May 26 02:01:13 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..2a03e15
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat May 26 02:01:13 UTC 2018



[GitHub] solin319 commented on issue #11061: mx.nd.argmax is slow

2018-05-25 Thread GitBox
solin319 commented on issue #11061: mx.nd.argmax is slow
URL: 
https://github.com/apache/incubator-mxnet/issues/11061#issuecomment-392223217
 
 
   @z01nl1o02 
   "print max[0]" is used to make sure argmax completed. 
   We can change it to 'max.wait_to_read()', the result is same.
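   
   A small sketch of the timing pattern being discussed: MXNet executes operators 
asynchronously, so the call has to be synchronized (here with `wait_to_read()`) 
before stopping the timer, otherwise only the operator launch is measured. The 
array shape below is made up for illustration:
   
   ```python
   import time
   import mxnet as mx
   
   data = mx.nd.random.uniform(shape=(1000, 1000))
   data.wait_to_read()               # make sure the input itself is ready
   
   start = time.time()
   idx = mx.nd.argmax(data, axis=1)  # returns immediately; the work runs asynchronously
   idx.wait_to_read()                # block until argmax has actually finished
   print("argmax took %.6f s" % (time.time() - start))
   ```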




[GitHub] marcoabreu commented on issue #10921: Test cases improvement for MKLDNN on Gluon

2018-05-25 Thread GitBox
marcoabreu commented on issue #10921: Test cases improvement for MKLDNN on Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10921#issuecomment-392219240
 
 
   Aaaah no no, definitely not. It was just a general statement :)
   
   Da Zheng wrote on Sat., 26 May 2018, 02:14:
   
   > @marcoabreu  i'm not sure what you mean.
   > do you mean we should support NDArrays of 85GB? i don't think we can use
   > malloc to allocate a single piece of memory of this size?
   




[GitHub] marcoabreu commented on issue #11066: migrating docs build and publish job to secure nodes

2018-05-25 Thread GitBox
marcoabreu commented on issue #11066: migrating docs build and publish job to 
secure nodes
URL: https://github.com/apache/incubator-mxnet/pull/11066#issuecomment-392219038
 
 
   Did you rename the Jenkins job to restricted-blabla?
   
   The error that this slave does not exist is fine. It will automatically be 
deployed by auto scaling.
   
   If the job is not picked up, you probably ran into a security measure that 
prevents accessing restricted nodes from unrestricted jobs, or vice versa. Let's 
review that on Monday.




[GitHub] zheng-da commented on issue #10921: Test cases improvement for MKLDNN on Gluon

2018-05-25 Thread GitBox
zheng-da commented on issue #10921: Test cases improvement for MKLDNN on Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10921#issuecomment-392218718
 
 
   @marcoabreu i'm not sure what you mean. do you mean we should support 
NDArrays of 85GB? i don't think we can use malloc to allocate a single piece of 
memory of this size?




[GitHub] kpmurali opened a new pull request #11069: [MXNET-480] New version select for Install page

2018-05-25 Thread GitBox
kpmurali opened a new pull request #11069: [MXNET-480] New version select for 
Install page
URL: https://github.com/apache/incubator-mxnet/pull/11069
 
 
   ## Description ##
   New Version select for install page with query string features
   
   ## Checklist ##
   
   ### Changes ###
   - [x] Add version select drop-down in install page
   - [x] Add query string modifying capabilities in the URL
   




[GitHub] aaronmarkham commented on issue #11066: migrating docs build and publish job to secure nodes

2018-05-25 Thread GitBox
aaronmarkham commented on issue #11066: migrating docs build and publish job to 
secure nodes
URL: https://github.com/apache/incubator-mxnet/pull/11066#issuecomment-392212683
 
 
   @marcoabreu - what's this `what()` failure on CI? I've seen it a couple of 
times. Something flakey?




[GitHub] aaronmarkham commented on issue #11068: [MXNET-478] Fixing the xml markup

2018-05-25 Thread GitBox
aaronmarkham commented on issue #11068: [MXNET-478] Fixing the xml markup
URL: https://github.com/apache/incubator-mxnet/pull/11068#issuecomment-392212156
 
 
   @nswamy Please review/merge.




[GitHub] kpmurali opened a new pull request #11068: [MXNET-478] Fixing the xml markup

2018-05-25 Thread GitBox
kpmurali opened a new pull request #11068: [MXNET-478] Fixing the xml markup
URL: https://github.com/apache/incubator-mxnet/pull/11068
 
 
   ## Description ##
   Fix the broken XML markup in IntelliJ tutorials
   
   ## Checklist ##
   ### Changes ###
   - [x] Fix the broken XML markup in IntelliJ tutorials




[incubator-mxnet] branch master updated: [MXNET-473] Fix for dist_sync_kvstore (#11058)

2018-05-25 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new d528453  [MXNET-473] Fix for dist_sync_kvstore (#11058)
d528453 is described below

commit d52845372ddfa3275d0f589cd32944f3a64f7760
Author: Hao Jin 
AuthorDate: Fri May 25 16:14:41 2018 -0700

[MXNET-473] Fix for dist_sync_kvstore (#11058)

[MXNET-473] Fix for dist_sync_kvstore and test_operator.test_op_roi_align
---
 src/operator/tensor/elemwise_binary_op-inl.h | 2 +-
 tests/python/unittest/test_operator.py   | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/operator/tensor/elemwise_binary_op-inl.h 
b/src/operator/tensor/elemwise_binary_op-inl.h
index 2cf6481..c74f1f9 100644
--- a/src/operator/tensor/elemwise_binary_op-inl.h
+++ b/src/operator/tensor/elemwise_binary_op-inl.h
@@ -552,7 +552,7 @@ void ElemwiseBinaryOp::DnsRspDnsOp(mshadow::Stream *s,
   TBlob rsp_data = rsp.data();
   TBlob rsp_indices = rsp.aux_data(rowsparse::kIdx);
 
-  MSHADOW_SGL_DBL_TYPE_SWITCH(rsp_data.type_flag_, DType, {
+  MSHADOW_TYPE_SWITCH(rsp_data.type_flag_, DType, {
 MSHADOW_IDX_TYPE_SWITCH(rsp_indices.type_flag_, IType, {
   MXNET_ASSIGN_REQ_SWITCH(req, Req, {
 if (reverse && std::is_same::value) {
diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index c5bdee1..3f08971 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -6019,7 +6019,7 @@ def test_context_num_gpus():
 if str(e).find("CUDA") == -1:
 raise e
 
-
+
 @with_seed()
 def test_op_roi_align():
 # Adapted from 
https://github.com/wkcn/MobulaOP/blob/master/tests/test_roi_align_op.py
@@ -6146,7 +6146,7 @@ def test_op_roi_align():
 real_output, [dx, drois] = roialign_forward_backward(data.asnumpy(), 
rois.asnumpy(), pooled_size, spatial_scale, sampling_ratio, dy.asnumpy())
 assert np.allclose(output.asnumpy(), real_output)
 # It seems that the precision between Cfloat and Pyfloat is different.
-assert np.allclose(data.grad.asnumpy(), dx, atol = 1e-6), 
np.abs(data.grad.asnumpy() - dx).max()
+assert np.allclose(data.grad.asnumpy(), dx, atol = 1e-5), 
np.abs(data.grad.asnumpy() - dx).max()
 assert np.allclose(rois.grad.asnumpy(), drois)
 
 # modified from test_roipooling()



[GitHub] eric-haibin-lin closed issue #11064: Flaky test: test_operator.test_op_roi_align

2018-05-25 Thread GitBox
eric-haibin-lin closed issue #11064: Flaky test: test_operator.test_op_roi_align
URL: https://github.com/apache/incubator-mxnet/issues/11064
 
 
   




[GitHub] eric-haibin-lin closed pull request #11058: [MXNET-473] Fix for dist_sync_kvstore and test_operator.test_op_roi_align

2018-05-25 Thread GitBox
eric-haibin-lin closed pull request #11058: [MXNET-473] Fix for 
dist_sync_kvstore and test_operator.test_op_roi_align
URL: https://github.com/apache/incubator-mxnet/pull/11058
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/tensor/elemwise_binary_op-inl.h 
b/src/operator/tensor/elemwise_binary_op-inl.h
index 2cf64818aba..c74f1f93603 100644
--- a/src/operator/tensor/elemwise_binary_op-inl.h
+++ b/src/operator/tensor/elemwise_binary_op-inl.h
@@ -552,7 +552,7 @@ void ElemwiseBinaryOp::DnsRspDnsOp(mshadow::Stream *s,
   TBlob rsp_data = rsp.data();
   TBlob rsp_indices = rsp.aux_data(rowsparse::kIdx);
 
-  MSHADOW_SGL_DBL_TYPE_SWITCH(rsp_data.type_flag_, DType, {
+  MSHADOW_TYPE_SWITCH(rsp_data.type_flag_, DType, {
 MSHADOW_IDX_TYPE_SWITCH(rsp_indices.type_flag_, IType, {
   MXNET_ASSIGN_REQ_SWITCH(req, Req, {
 if (reverse && std::is_same::value) {
diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index c5bdee1d1e5..3f089717b78 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -6019,7 +6019,7 @@ def test_context_num_gpus():
 if str(e).find("CUDA") == -1:
 raise e
 
-
+
 @with_seed()
 def test_op_roi_align():
 # Adapted from 
https://github.com/wkcn/MobulaOP/blob/master/tests/test_roi_align_op.py
@@ -6146,7 +6146,7 @@ def test_roi_align_value():
 real_output, [dx, drois] = roialign_forward_backward(data.asnumpy(), 
rois.asnumpy(), pooled_size, spatial_scale, sampling_ratio, dy.asnumpy())
 assert np.allclose(output.asnumpy(), real_output)
 # It seems that the precision between Cfloat and Pyfloat is different.
-assert np.allclose(data.grad.asnumpy(), dx, atol = 1e-6), 
np.abs(data.grad.asnumpy() - dx).max()
+assert np.allclose(data.grad.asnumpy(), dx, atol = 1e-5), 
np.abs(data.grad.asnumpy() - dx).max()
 assert np.allclose(rois.grad.asnumpy(), drois)
 
 # modified from test_roipooling()


 




[GitHub] haojin2 commented on issue #11064: Flaky test: test_operator.test_op_roi_align

2018-05-25 Thread GitBox
haojin2 commented on issue #11064: Flaky test: test_operator.test_op_roi_align
URL: 
https://github.com/apache/incubator-mxnet/issues/11064#issuecomment-392208742
 
 
   @zhreshold increased the rtol to 1e-4 and passed 500 consecutive test runs; 
the change is included in #11058 
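   
   For reference, a tiny illustration (values made up, not taken from the test) of 
how the tolerance in `numpy.allclose` interacts with differences around the 
1.3e-6 reported in #11064:
   
   ```python
   import numpy as np
   
   a = np.float32(1e-7)
   b = a + np.float32(1.3e-6)            # roughly the max difference seen in the failing run
   
   # np.allclose checks |a - b| <= atol + rtol * |b|, with default rtol=1e-5
   print(np.allclose(a, b, atol=1e-6))   # False: the difference exceeds the tolerance
   print(np.allclose(a, b, atol=1e-5))   # True: the relaxed tolerance absorbs the noise
   ```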




[GitHub] haojin2 commented on issue #11064: Flaky test: test_operator.test_op_roi_align

2018-05-25 Thread GitBox
haojin2 commented on issue #11064: Flaky test: test_operator.test_op_roi_align
URL: 
https://github.com/apache/incubator-mxnet/issues/11064#issuecomment-392208742
 
 
   @zhreshold increased the rtol to 1e-5 and passed 500 consecutive test runs; 
the change is included in #11058 




[GitHub] haojin2 commented on issue #11058: [MXNET-473] Fix for dist_sync_kvstore

2018-05-25 Thread GitBox
haojin2 commented on issue #11058: [MXNET-473] Fix for dist_sync_kvstore
URL: https://github.com/apache/incubator-mxnet/pull/11058#issuecomment-392208577
 
 
   @rahul003 this should fix the issue of dist_sync_kvstore




[GitHub] rahul003 commented on a change in pull request #11027: Add standard ResNet data augmentation for ImageRecordIter

2018-05-25 Thread GitBox
rahul003 commented on a change in pull request #11027: Add standard ResNet data 
augmentation for ImageRecordIter
URL: https://github.com/apache/incubator-mxnet/pull/11027#discussion_r191026394
 
 

 ##
 File path: src/io/image_aug_default.cc
 ##
 @@ -104,16 +127,37 @@ struct DefaultImageAugmentParam : public 
dmlc::Parameter

[GitHub] szha commented on issue #11037: Website landing page for MMS

2018-05-25 Thread GitBox
szha commented on issue #11037: Website landing page for MMS
URL: https://github.com/apache/incubator-mxnet/pull/11037#issuecomment-392199128
 
 
   Pinging @piiswrong for review




[GitHub] haojin2 commented on issue #11058: [MXNET-473] Fix for dist_sync_kvstore

2018-05-25 Thread GitBox
haojin2 commented on issue #11058: [MXNET-473] Fix for dist_sync_kvstore
URL: https://github.com/apache/incubator-mxnet/pull/11058#issuecomment-392195581
 
 
   @eric-haibin-lin 




[GitHub] eric-haibin-lin commented on a change in pull request #11012: [WIP] leaky relu speed

2018-05-25 Thread GitBox
eric-haibin-lin commented on a change in pull request #11012: [WIP] leaky relu 
speed
URL: https://github.com/apache/incubator-mxnet/pull/11012#discussion_r191014378
 
 

 ##
 File path: src/operator/leaky_relu-inl.h
 ##
 @@ -115,11 +116,27 @@ class LeakyReLUOp : public Operator {
   case leakyrelu::kPReLU: {
 weight = in_data[leakyrelu::kGamma].get(s);
 if (weight.shape_.Size() == 1) {
-  Assign(out, req[leakyrelu::kOut],
- F(data, 
mshadow::expr::broadcast_scalar(weight, out.shape_)));
+  MXNET_ASSIGN_REQ_SWITCH(req[leakyrelu::kOut], Req, {
+mxnet_op::Kernel, 
xpu>::Launch(
+  s, in_data[leakyrelu::kData].Size(), 
out_data[leakyrelu::kOut].dptr(),
+  in_data[leakyrelu::kData].dptr(), DType(weight[0]));
 
 Review comment:
   Cannot access gpu ptr outside kernel




[GitHub] nswamy closed pull request #11063: [MXNET-386] NDArray Bug fix

2018-05-25 Thread GitBox
nswamy closed pull request #11063: [MXNET-386] NDArray Bug fix
URL: https://github.com/apache/incubator-mxnet/pull/11063
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala 
b/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala
index bbe786f5a0a..56cc3255fa6 100644
--- a/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala
+++ b/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala
@@ -92,9 +92,11 @@ private[mxnet] object NDArrayMacro {
 val isContrib: Boolean = c.prefix.tree match {
   case q"new AddNDArrayAPIs($b)" => c.eval[Boolean](c.Expr(b))
 }
+
 val newNDArrayFunctions = {
-  if (isContrib) ndarrayFunctions.filter(_.name.startsWith("_contrib_"))
-  else ndarrayFunctions.filter(!_.name.startsWith("_contrib_"))
+  if (isContrib) ndarrayFunctions.filter(
+func => func.name.startsWith("_contrib_") || 
!func.name.startsWith("_"))
+  else ndarrayFunctions.filterNot(_.name.startsWith("_"))
 }
 
 val functionDefs = newNDArrayFunctions map { ndarrayfunction =>


 




[incubator-mxnet] branch master updated: [MXNET-386] NDArray Bug fix (#11063)

2018-05-25 Thread nswamy
This is an automated email from the ASF dual-hosted git repository.

nswamy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 1ac2bce  [MXNET-386] NDArray Bug fix (#11063)
1ac2bce is described below

commit 1ac2bce3c4f7900a07e714891bfba1bddc3460e6
Author: Lanking 
AuthorDate: Fri May 25 14:09:00 2018 -0700

[MXNET-386] NDArray Bug fix (#11063)

NDArray Bug fix
---
 .../macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala   | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git 
a/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala 
b/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala
index bbe786f..56cc325 100644
--- a/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala
+++ b/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala
@@ -92,9 +92,11 @@ private[mxnet] object NDArrayMacro {
 val isContrib: Boolean = c.prefix.tree match {
   case q"new AddNDArrayAPIs($b)" => c.eval[Boolean](c.Expr(b))
 }
+
 val newNDArrayFunctions = {
-  if (isContrib) ndarrayFunctions.filter(_.name.startsWith("_contrib_"))
-  else ndarrayFunctions.filter(!_.name.startsWith("_contrib_"))
+  if (isContrib) ndarrayFunctions.filter(
+func => func.name.startsWith("_contrib_") || 
!func.name.startsWith("_"))
+  else ndarrayFunctions.filterNot(_.name.startsWith("_"))
 }
 
 val functionDefs = newNDArrayFunctions map { ndarrayfunction =>



[GitHub] thomelane opened a new pull request #11067: Added links to tutorials index page

2018-05-25 Thread GitBox
thomelane opened a new pull request #11067: Added links to tutorials index page
URL: https://github.com/apache/incubator-mxnet/pull/11067
 
 
   ## Description ##
   
   Corrections to tutorials index page.
   
   1) Added back tutorial links for Data Augmentation tutorials: Gluon, Module 
and Types Of.
   2) Added back Module ONNX tutorial.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the 
source of the dataset, the expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] aaronmarkham opened a new pull request #11066: migrating docs build and publish job to secure nodes

2018-05-25 Thread GitBox
aaronmarkham opened a new pull request #11066: migrating docs build and publish 
job to secure nodes
URL: https://github.com/apache/incubator-mxnet/pull/11066
 
 
   ## Description ##
   Migrating docs build to secure/restricted jenkins nodes.
   - [x] Created a new jenkins job called 
[restricted-website-publish](http://jenkins.mxnet-ci.amazon-ml.com/job/restricted-website-publish/)
   - [x] Updated nodes in Jenkinsfile `node('restricted-mxnetlinux-cpu')`
   - [x] Updated Jenkinsfile to call new job `restricted-website-publish`
   
   ## Comments
   I would assume that the config in the Jenkins job for `Restrict where this 
project can be run` would be updated from `mxnetlinux-cpu` to 
`restricted-mxnetlinux-cpu`. However, I get an error if I try to do this:
   ```
   There’s no agent/cloud that matches this assignment. Did you mean 
‘mxnetlinux-cpu’ instead of ‘restricted-mxnetlinux-cpu’?
   ```
   @marcoabreu  - what are your thoughts on this?




[GitHub] eric-haibin-lin commented on a change in pull request #11041: gpu mem pool strategy

2018-05-25 Thread GitBox
eric-haibin-lin commented on a change in pull request #11041: gpu mem pool 
strategy
URL: https://github.com/apache/incubator-mxnet/pull/11041#discussion_r191000176
 
 

 ##
 File path: src/storage/storage.cc
 ##
 @@ -118,7 +118,21 @@ void StorageImpl::Alloc(Storage::Handle* handle) {
 #if MXNET_USE_CUDA
CUDA_CALL(cudaGetDeviceCount(&num_gpu_device));
 CHECK_GT(num_gpu_device, 0) << "GPU usage requires at least 1 GPU";
-ptr = new storage::GPUPooledStorageManager();
+
+const char *type = getenv("MXNET_GPU_MEM_POOL_TYPE");
+const bool default_pool = (type == nullptr);
+if (default_pool) type = "Naive";
+std::string strategy = type;
+
+if (strategy == "Round") {
+  ptr = new storage::GPUPooledRoundedStorageManager();
+  LOG(INFO) << "Using GPUPooledRoundedStorageManager.";
+} else {
+  if (strategy != "Naive") {
+LOG(INFO) << "Unknown memory pool strategy specified: " << 
strategy << ".";
 
 Review comment:
   log(fatal)? 
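   
   A sketch of how this strategy selection would be exercised from Python, based 
only on the environment variable and values visible in the diff above (setting 
the variable before the first GPU allocation; whether "Naive" stays the default 
is up to this PR):
   
   ```python
   import os
   
   # must be set before MXNet creates its GPU storage manager,
   # i.e. before the first GPU allocation
   os.environ["MXNET_GPU_MEM_POOL_TYPE"] = "Round"
   
   import mxnet as mx
   
   x = mx.nd.zeros((1024, 1024), ctx=mx.gpu(0))  # first GPU allocation picks the pool
   x.wait_to_read()
   ```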




[GitHub] haojin2 opened a new pull request #11065: [MXNET-477] [TEST PR] [DO NOT MERGE] [WIP] CI engine test bug fix test

2018-05-25 Thread GitBox
haojin2 opened a new pull request #11065: [MXNET-477] [TEST PR] [DO NOT MERGE] 
[WIP] CI engine test bug fix test
URL: https://github.com/apache/incubator-mxnet/pull/11065
 
 
   ## Description ##
   As title
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the 
source of the dataset, the expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] zhreshold commented on issue #10628: [MXNET-342] Fix the multi worker Dataloader

2018-05-25 Thread GitBox
zhreshold commented on issue #10628: [MXNET-342] Fix the multi worker Dataloader
URL: https://github.com/apache/incubator-mxnet/pull/10628#issuecomment-392134395
 
 
   I think pytorch is also suffering from similar problems: 
https://github.com/pytorch/pytorch/issues/973
   
   I think we can use `ForkingPickler.register(recordio.MXRecordIO, 
reopen_recordio)` to force record files to be reopened when workers are forked.
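   
   A minimal sketch of that idea, assuming `MXRecordIO` exposes its constructor 
arguments as plain `uri`/`flag` attributes (`reopen_recordio` here is a 
hypothetical helper, not an existing MXNet function):
   
   ```python
   from multiprocessing.reduction import ForkingPickler
   from mxnet import recordio
   
   def reopen_recordio(uri, flag):
       # runs in the worker: open a fresh handle instead of sharing the parent's
       return recordio.MXRecordIO(uri, flag)
   
   def _reduce_recordio(rec):
       # tell the pickler how to rebuild the record file after the fork
       return reopen_recordio, (rec.uri, rec.flag)
   
   ForkingPickler.register(recordio.MXRecordIO, _reduce_recordio)
   ```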




[incubator-mxnet] branch master updated: [MXNET-413] Fixing the broken links - Week of 5/7 (#10895)

2018-05-25 Thread anirudh2290
This is an automated email from the ASF dual-hosted git repository.

anirudh2290 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new feb0757  [MXNET-413] Fixing the broken links - Week of 5/7 (#10895)
feb0757 is described below

commit feb075738590d88e39fb1ab1e239c86c0fe0f2a4
Author: kpmurali <37911926+kpmur...@users.noreply.github.com>
AuthorDate: Fri May 25 10:55:06 2018 -0700

[MXNET-413] Fixing the broken links - Week of 5/7 (#10895)

* Fixing the broken links - Week of 5/7

* Adding line breaks to the scala-package docs broken links to comply with 
lint checker

* 1. Adding the download links for 1.2.0  2. Change the links for 1.1.0 to 
archive links   3. Drop the md5 column

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI
---
 docs/install/download.md  | 15 ---
 docs/tutorials/scala/mxnet_scala_on_intellij.md   |  2 +-
 .../infer/imageclassifier/ImageClassifierExample.scala|  6 +++---
 .../infer/objectdetector/SSDClassifierExample.scala   |  6 +++---
 4 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/docs/install/download.md b/docs/install/download.md
index 1d6d6d4..ad3762e 100644
--- a/docs/install/download.md
+++ b/docs/install/download.md
@@ -2,11 +2,12 @@
 
 These source archives are generated from tagged releases. Updates and patches 
will not have been applied. For any updates refer to the corresponding branches 
in the [GitHub repository](https://github.com/apache/incubator-mxnet). Choose 
your flavor of download from the following links:
 
-| Version | Source 
 | PGP  
   | 
SHA 
   | MD5
 |
-|-|-|-||-|
-| 1.1.0   | 
[Download](https://www.apache.org/dist/incubator/mxnet/1.1.0/apache-mxnet-src-1.1.0-incubating.tar.gz)
  | 
[Download](https://www.apache.org/dist/incubator/mxnet/1.1.0/apache-mxnet-src-1.1.0-incubating.tar.gz.asc)
  | 
[Download](https://www.apache.org/dist/incubator/mxnet/1.1.0/apache-mxnet-src-1.1.0-incubating.tar.gz.sha512)
  | 
[Download](https://www.apache.org/dist/incubator/mxnet/1.1.0/apache-mxnet-src-1.1.0-incubating.tar.gz.md5)
  |
-| 1.0.0   | 
[Download](http://archive.apache.org/dist/incubator/mxnet/1.0.0/apache-mxnet-src-1.0.0-incubating.tar.gz)
   | 
[Download](http://archive.apache.org/dist/incubator/mxnet/1.0.0/apache-mxnet-src-1.0.0-incubating.tar.gz.asc)
   | 
[Download](http://archive.apache.org/dist/incubator/mxnet/1.0.0/apache-mxnet-src-1.0.0-incubating.tar.gz.sha512)
   | 
[Download](http://archive.apache.org/dist/incubator/mxnet/1.0.0/apache-mxnet-src-0.12.1-incubating.tar.gz.md5)
  |
-| 0.12.1  | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.1/apache-mxnet-src-0.12.1-incubating.tar.gz)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.1/apache-mxnet-src-0.12.1-incubating.tar.gz.asc)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.1/apache-mxnet-src-0.12.1-incubating.tar.gz.sha512)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.1/apache-mxnet-src-0.12.1-incubating.tar.gz.md5)
 |
-| 0.12.0  | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.0/apache-mxnet-src-0.12.0-incubating.tar.gz)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.0/apache-mxnet-src-0.12.0-incubating.tar.gz.asc)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.0/apache-mxnet-src-0.12.0-incubating.tar.gz.sha512)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.0/apache-mxnet-src-0.12.0-incubating.tar.gz.md5)
 |
-| 0.11.0  | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.11.0/apache-mxnet-src-0.11.0-incubating.tar.gz)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.11.0/apache-mxnet-src-0.11.0-incubating.tar.gz.asc)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.11.0/apache-mxnet-src-0.11.0-incubating.tar.gz.sha512)
 | 

[GitHub] anirudh2290 closed pull request #10895: [MXNET-413] Fixing the broken links - Week of 5/7

2018-05-25 Thread GitBox
anirudh2290 closed pull request #10895: [MXNET-413] Fixing the broken links - 
Week of 5/7
URL: https://github.com/apache/incubator-mxnet/pull/10895
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/install/download.md b/docs/install/download.md
index 1d6d6d477db..ad3762ea9fa 100644
--- a/docs/install/download.md
+++ b/docs/install/download.md
@@ -2,11 +2,12 @@
 
 These source archives are generated from tagged releases. Updates and patches 
will not have been applied. For any updates refer to the corresponding branches 
in the [GitHub repository](https://github.com/apache/incubator-mxnet). Choose 
your flavor of download from the following links:
 
-| Version | Source 
 | PGP  
   | 
SHA 
   | MD5
 |
-|-|-|-||-|
-| 1.1.0   | 
[Download](https://www.apache.org/dist/incubator/mxnet/1.1.0/apache-mxnet-src-1.1.0-incubating.tar.gz)
  | 
[Download](https://www.apache.org/dist/incubator/mxnet/1.1.0/apache-mxnet-src-1.1.0-incubating.tar.gz.asc)
  | 
[Download](https://www.apache.org/dist/incubator/mxnet/1.1.0/apache-mxnet-src-1.1.0-incubating.tar.gz.sha512)
  | 
[Download](https://www.apache.org/dist/incubator/mxnet/1.1.0/apache-mxnet-src-1.1.0-incubating.tar.gz.md5)
  |
-| 1.0.0   | 
[Download](http://archive.apache.org/dist/incubator/mxnet/1.0.0/apache-mxnet-src-1.0.0-incubating.tar.gz)
   | 
[Download](http://archive.apache.org/dist/incubator/mxnet/1.0.0/apache-mxnet-src-1.0.0-incubating.tar.gz.asc)
   | 
[Download](http://archive.apache.org/dist/incubator/mxnet/1.0.0/apache-mxnet-src-1.0.0-incubating.tar.gz.sha512)
   | 
[Download](http://archive.apache.org/dist/incubator/mxnet/1.0.0/apache-mxnet-src-0.12.1-incubating.tar.gz.md5)
  |
-| 0.12.1  | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.1/apache-mxnet-src-0.12.1-incubating.tar.gz)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.1/apache-mxnet-src-0.12.1-incubating.tar.gz.asc)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.1/apache-mxnet-src-0.12.1-incubating.tar.gz.sha512)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.1/apache-mxnet-src-0.12.1-incubating.tar.gz.md5)
 |
-| 0.12.0  | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.0/apache-mxnet-src-0.12.0-incubating.tar.gz)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.0/apache-mxnet-src-0.12.0-incubating.tar.gz.asc)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.0/apache-mxnet-src-0.12.0-incubating.tar.gz.sha512)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.12.0/apache-mxnet-src-0.12.0-incubating.tar.gz.md5)
 |
-| 0.11.0  | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.11.0/apache-mxnet-src-0.11.0-incubating.tar.gz)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.11.0/apache-mxnet-src-0.11.0-incubating.tar.gz.asc)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.11.0/apache-mxnet-src-0.11.0-incubating.tar.gz.sha512)
 | 
[Download](http://archive.apache.org/dist/incubator/mxnet/0.11.0/apache-mxnet-src-0.11.0-incubating.tar.gz.md5)
 |
+| Version | Source 
 | PGP  
   | 
SHA 
   |
+|-|-|-|-|
+| 1.2.0   | 

svn commit: r27108 - in /dev/incubator/mxnet: 1.2.0.rc0/ 1.2.0.rc1/ 1.2.0.rc2/ 1.2.0.rc3/

2018-05-25 Thread anirudh2290
Author: anirudh2290
Date: Fri May 25 17:52:54 2018
New Revision: 27108

Log:
Removing rc dirs

Removed:
dev/incubator/mxnet/1.2.0.rc0/
dev/incubator/mxnet/1.2.0.rc1/
dev/incubator/mxnet/1.2.0.rc2/
dev/incubator/mxnet/1.2.0.rc3/



svn commit: r27107 - /release/incubator/mxnet/1.1.0/

2018-05-25 Thread anirudh2290
Author: anirudh2290
Date: Fri May 25 17:51:37 2018
New Revision: 27107

Log:
Remove 1.1.0

Removed:
release/incubator/mxnet/1.1.0/



[GitHub] zhreshold commented on issue #11064: Flaky test: test_operator.test_op_roi_align

2018-05-25 Thread GitBox
zhreshold commented on issue #11064: Flaky test: test_operator.test_op_roi_align
URL: 
https://github.com/apache/incubator-mxnet/issues/11064#issuecomment-392125397
 
 
   Relaxing the rtol should be fine; the diff is acceptable.




[GitHub] eric-haibin-lin opened a new issue #11064: Flaky test: test_operator.test_op_roi_align

2018-05-25 Thread GitBox
eric-haibin-lin opened a new issue #11064: Flaky test: 
test_operator.test_op_roi_align
URL: https://github.com/apache/incubator-mxnet/issues/11064
 
 
   ```
   ==
   
   FAIL: test_operator.test_op_roi_align
   
   --
   
   Traceback (most recent call last):
   
 File "/usr/local/lib/python3.5/dist-packages/nose/case.py", line 198, in 
runTest
   
   self.test(*self.arg)
   
 File "/work/mxnet/tests/python/unittest/common.py", line 157, in test_new
   
   orig_test(*args, **kwargs)
   
 File "/work/mxnet/tests/python/unittest/test_operator.py", line 6170, in 
test_op_roi_align
   
   test_roi_align_value()
   
 File "/work/mxnet/tests/python/unittest/test_operator.py", line 6149, in 
test_roi_align_value
   
   assert np.allclose(data.grad.asnumpy(), dx, atol = 1e-6), 
np.abs(data.grad.asnumpy() - dx).max()
   
   AssertionError: 1.3150275e-06
   
    >> begin captured logging << 
   
   common: INFO: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=1619190489 to reproduce.
   
   - >> end captured logging << -
   
   ```
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-11058/1/pipeline
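   
   A sketch of how the reported seed could be used to reproduce the failure 
locally, assuming nose is installed and the command is run from the MXNet source 
tree:
   
   ```python
   import os
   import subprocess
   
   env = dict(os.environ, MXNET_TEST_SEED="1619190489")  # seed reported in the log above
   subprocess.run(
       ["nosetests", "-v", "tests/python/unittest/test_operator.py:test_op_roi_align"],
       env=env,
       check=False,  # we only want to observe the result, not raise on failure
   )
   ```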
   




[incubator-mxnet] branch master updated: Fix bugs in MKLDNN. (#10979)

2018-05-25 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new d497b37  Fix bugs in MKLDNN. (#10979)
d497b37 is described below

commit d497b37876ffb5d9bc01812bf7f295039bfe35f6
Author: Da Zheng 
AuthorDate: Fri May 25 10:11:45 2018 -0700

Fix bugs in MKLDNN. (#10979)

* Fix bugs in MKLDNN.

* add more test cases.

* Fix CopyFrom when it's the view of an NDArray.

* add test.

* check same shape correctly.

* add unit test for CopyFrom.

* Fix warning.

* Add test sum.

* fix sum.

* Fix fallback.

* Fix fallback of sum.

* add tests.

* Update mkldnn.cc
---
 src/ndarray/ndarray.cc  | 111 +---
 src/operator/nn/mkldnn/mkldnn_base.cc   |   5 +-
 src/operator/nn/mkldnn/mkldnn_sum.cc|  22 +++-
 src/operator/tensor/elemwise_binary_op_basic.cc |  12 +-
 tests/cpp/operator/mkldnn.cc| 165 +---
 5 files changed, 235 insertions(+), 80 deletions(-)

diff --git a/src/ndarray/ndarray.cc b/src/ndarray/ndarray.cc
index d87e8bc..94d3d90 100644
--- a/src/ndarray/ndarray.cc
+++ b/src/ndarray/ndarray.cc
@@ -200,6 +200,7 @@ NDArray NDArray::MKLDNNDataReshape(const TShape ) 
const {
 ret.ptr_->delay_alloc = false;
 ret.ptr_->static_data = true;
 ret.byte_offset_ = byte_offset_;
+ret.reuse_ = false;
 return ret;
   }
 }
@@ -217,6 +218,7 @@ NDArray NDArray::Reshape(const TShape ) const {
   // Otherwise, reshape only works on the default layout.
   CHECK_EQ(storage_type(), kDefaultStorage);
   ret.shape_ = shape;
+  ret.reuse_ = false;
   return ret;
 }
 
@@ -249,6 +251,7 @@ NDArray NDArray::Slice(index_t begin, index_t end) const {
   MSHADOW_TYPE_SWITCH(ret.dtype(), DType, {
 ret.byte_offset_ += begin * length * sizeof(DType);
   });
+  ret.reuse_ = false;
   ret.shape_[0] = end - begin;
   return ret;
 }
@@ -555,6 +558,7 @@ NDArray NDArray::Reorder2Default() const {
   // reshape as needed
   ret.shape_ = shape_;
   ret.byte_offset_ = byte_offset_;
+  ret.reuse_ = false;
   return ret;
 }
 
@@ -584,39 +588,39 @@ void NDArray::MKLDNNDataReorderAsync(const 
mkldnn::memory::primitive_desc )
 
 const mkldnn::memory *NDArray::GetMKLDNNData() const {
   CHECK(storage_type() == kDefaultStorage);
+  bool is_view = IsView();
   if (IsMKLDNNData()) {
 // If this array uses MKLDNN layout, we have to make sure it's not a view.
 // Otherwise, we'll have to change the layout inside the array.
-CHECK(!IsView());
+CHECK(!is_view);
 MKLDNNStream::Get()->RegisterMem(ptr_->mkl_mem_->GetMem());
 // If this array uses MKLDNN format, we should return now. Otherwise,
 // SetMKLMem may mess up mkl_mem_.
 return ptr_->mkl_mem_->GetRaw();
-  }
-  ptr_->SetMKLMem(IsView() ? ptr_->storage_shape : shape_, dtype_);
-  MKLDNNStream::Get()->RegisterMem(ptr_->mkl_mem_->GetMem());
-  if (IsView()) {
-mkldnn::memory::primitive_desc pd = ptr_->mkl_mem_->GetPrimitiveDesc();
-// Sliced array must use the default layout.
-CHECK_EQ(GetDefaultFormat(pd.desc()), pd.desc().data.format);
-void *off_addr = static_cast(ptr_->mkl_mem_->GetDataHandle())
-+ byte_offset_;
-
+  } else if (is_view) {
+// If this is a view, we can't create a MKLDNN memory for the chunk
+// because we don't have the complete data type and shape information for
+// the chunk.
+void *off_addr = static_cast(ptr_->shandle.dptr) + byte_offset_;
 // Create the primitive desc for the new mkldnn memory.
 mkldnn::memory::dims dims(shape().ndim());
 for (size_t i = 0; i < dims.size(); i++)
   dims[i] = shape()[i];
 mkldnn::memory::format cpp_format = static_cast(
 GetDefaultFormat(shape().ndim()));
-mkldnn::memory::data_type cpp_type = 
static_cast(
-pd.desc().data.data_type);
+mkldnn::memory::data_type cpp_type = get_mkldnn_type(dtype_);
 mkldnn::memory::desc data_md(dims, cpp_type, cpp_format);
-mkldnn::memory::primitive_desc new_pd(data_md, pd.get_engine());
+mkldnn::memory::primitive_desc new_pd(data_md,
+  CpuEngine::Get()->get_engine());
 
 std::shared_ptr ret(new mkldnn::memory(new_pd, off_addr));
 MKLDNNStream::Get()->RegisterMem(ret);
 return ret.get();
   } else {
+// If this isn't a view, we can create a MKLDNN memory and store it in the
+// chunk.
+ptr_->SetMKLMem(shape_, dtype_);
+MKLDNNStream::Get()->RegisterMem(ptr_->mkl_mem_->GetMem());
 return ptr_->mkl_mem_->GetRaw();
   }
 }
@@ -637,20 +641,23 @@ void NDArray::CopyFrom(const mkldnn::memory ) {
   MKLDNNStream *stream = MKLDNNStream::Get();
   // If this array uses MKLDNN layout, we have to make sure 

[GitHub] piiswrong closed pull request #10979: Fix bugs in MKLDNN.

2018-05-25 Thread GitBox
piiswrong closed pull request #10979: Fix bugs in MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/10979
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/ndarray/ndarray.cc b/src/ndarray/ndarray.cc
index d87e8bc95ea..94d3d90413a 100644
--- a/src/ndarray/ndarray.cc
+++ b/src/ndarray/ndarray.cc
@@ -200,6 +200,7 @@ NDArray NDArray::MKLDNNDataReshape(const TShape ) 
const {
 ret.ptr_->delay_alloc = false;
 ret.ptr_->static_data = true;
 ret.byte_offset_ = byte_offset_;
+ret.reuse_ = false;
 return ret;
   }
 }
@@ -217,6 +218,7 @@ NDArray NDArray::Reshape(const TShape ) const {
   // Otherwise, reshape only works on the default layout.
   CHECK_EQ(storage_type(), kDefaultStorage);
   ret.shape_ = shape;
+  ret.reuse_ = false;
   return ret;
 }
 
@@ -249,6 +251,7 @@ NDArray NDArray::Slice(index_t begin, index_t end) const {
   MSHADOW_TYPE_SWITCH(ret.dtype(), DType, {
 ret.byte_offset_ += begin * length * sizeof(DType);
   });
+  ret.reuse_ = false;
   ret.shape_[0] = end - begin;
   return ret;
 }
@@ -555,6 +558,7 @@ NDArray NDArray::Reorder2Default() const {
   // reshape as needed
   ret.shape_ = shape_;
   ret.byte_offset_ = byte_offset_;
+  ret.reuse_ = false;
   return ret;
 }
 
@@ -584,39 +588,39 @@ void NDArray::MKLDNNDataReorderAsync(const 
mkldnn::memory::primitive_desc )
 
 const mkldnn::memory *NDArray::GetMKLDNNData() const {
   CHECK(storage_type() == kDefaultStorage);
+  bool is_view = IsView();
   if (IsMKLDNNData()) {
 // If this array uses MKLDNN layout, we have to make sure it's not a view.
 // Otherwise, we'll have to change the layout inside the array.
-CHECK(!IsView());
+CHECK(!is_view);
 MKLDNNStream::Get()->RegisterMem(ptr_->mkl_mem_->GetMem());
 // If this array uses MKLDNN format, we should return now. Otherwise,
 // SetMKLMem may mess up mkl_mem_.
 return ptr_->mkl_mem_->GetRaw();
-  }
-  ptr_->SetMKLMem(IsView() ? ptr_->storage_shape : shape_, dtype_);
-  MKLDNNStream::Get()->RegisterMem(ptr_->mkl_mem_->GetMem());
-  if (IsView()) {
-mkldnn::memory::primitive_desc pd = ptr_->mkl_mem_->GetPrimitiveDesc();
-// Sliced array must use the default layout.
-CHECK_EQ(GetDefaultFormat(pd.desc()), pd.desc().data.format);
-void *off_addr = static_cast(ptr_->mkl_mem_->GetDataHandle())
-+ byte_offset_;
-
+  } else if (is_view) {
+// If this is a view, we can't create a MKLDNN memory for the chunk
+// because we don't have the complete data type and shape information for
+// the chunk.
+void *off_addr = static_cast(ptr_->shandle.dptr) + byte_offset_;
 // Create the primitive desc for the new mkldnn memory.
 mkldnn::memory::dims dims(shape().ndim());
 for (size_t i = 0; i < dims.size(); i++)
   dims[i] = shape()[i];
 mkldnn::memory::format cpp_format = static_cast(
 GetDefaultFormat(shape().ndim()));
-mkldnn::memory::data_type cpp_type = 
static_cast(
-pd.desc().data.data_type);
+mkldnn::memory::data_type cpp_type = get_mkldnn_type(dtype_);
 mkldnn::memory::desc data_md(dims, cpp_type, cpp_format);
-mkldnn::memory::primitive_desc new_pd(data_md, pd.get_engine());
+mkldnn::memory::primitive_desc new_pd(data_md,
+  CpuEngine::Get()->get_engine());
 
 std::shared_ptr ret(new mkldnn::memory(new_pd, off_addr));
 MKLDNNStream::Get()->RegisterMem(ret);
 return ret.get();
   } else {
+// If this isn't a view, we can create a MKLDNN memory and store it in the
+// chunk.
+ptr_->SetMKLMem(shape_, dtype_);
+MKLDNNStream::Get()->RegisterMem(ptr_->mkl_mem_->GetMem());
 return ptr_->mkl_mem_->GetRaw();
   }
 }
@@ -637,20 +641,23 @@ void NDArray::CopyFrom(const mkldnn::memory ) {
   MKLDNNStream *stream = MKLDNNStream::Get();
   // If this array uses MKLDNN layout, we have to make sure it's not a view.
   // Otherwise, we'll have to change the layout inside the array.
-  if (IsMKLDNNData())
-CHECK(!IsView());
-  ptr_->SetMKLMem(IsView() ? ptr_->storage_shape : shape_,
-  dtype_);
-  stream->RegisterMem(ptr_->mkl_mem_->GetMem());
-  mkldnn::memory::desc from_desc = mem.get_primitive_desc().desc();
-  mkldnn::memory::desc this_desc = ptr_->mkl_mem_->GetPrimitiveDesc().desc();
+
+  if (IsMKLDNNData() && IsView())
+ptr_->Reorder2Default();
+
+  const mkldnn::memory *this_mem = GetMKLDNNData();
+  mkldnn::memory::primitive_desc from_pd = mem.get_primitive_desc();
+  mkldnn::memory::desc from_desc = from_pd.desc();
+  mkldnn::memory::primitive_desc this_pd = this_mem->get_primitive_desc();
+  mkldnn::memory::desc this_desc = this_pd.desc();
   mkldnn_memory_format_t from_def_format = GetDefaultFormat(from_desc);
+  

[GitHub] eric-haibin-lin commented on issue #10974: Cpp CI may be broken

2018-05-25 Thread GitBox
eric-haibin-lin commented on issue #10974: Cpp CI may be broken
URL: 
https://github.com/apache/incubator-mxnet/issues/10974#issuecomment-392115048
 
 
   Also blocked my PR: 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-11001/11/pipeline
 




[GitHub] TaoLv commented on issue #11047: Enhance mkldnn pooling to support full convention

2018-05-25 Thread GitBox
TaoLv commented on issue #11047: Enhance mkldnn pooling to support full 
convention
URL: https://github.com/apache/incubator-mxnet/pull/11047#issuecomment-392113596
 
 
   @zheng-da I see. Will add test case later.




[GitHub] lanking520 opened a new pull request #11063: [MXNET-386] NDArray Bug fix

2018-05-25 Thread GitBox
lanking520 opened a new pull request #11063: [MXNET-386] NDArray Bug fix
URL: https://github.com/apache/incubator-mxnet/pull/11063
 
 
   ## Description ##
   Currently, we should not generate the underscore-prefixed (unsupported) APIs 
for Scala. This PR fixes that. 
   @nswamy @yzhliu 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the 
source of the dataset, the expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   




[GitHub] marcoabreu commented on issue #10921: Test cases improvement for MKLDNN on Gluon

2018-05-25 Thread GitBox
marcoabreu commented on issue #10921: Test cases improvement for MKLDNN on Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10921#issuecomment-392105517
 
 
   I think it's good that these tests failed - it means that we actually have some 
issues and are being inconsistent. That's a great point to start from.
   
   I'd appreciate it if you could make the necessary changes in order to make 
all parts consistent. 




[GitHub] zheng-da commented on issue #10921: Test cases improvement for MKLDNN on Gluon

2018-05-25 Thread GitBox
zheng-da commented on issue #10921: Test cases improvement for MKLDNN on Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10921#issuecomment-392087536
 
 
   your input size is too large for Dense. Very few systems can actually 
allocate a single piece of memory of 85GB. There is no reason to test the error 
in https://github.com/apache/incubator-mxnet/issues/10807. No system can handle 
it.
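   
   For scale, a rough back-of-the-envelope figure (illustrative only; the actual 
shapes from #10807 are not repeated here):
   
   ```python
   # a single 85 GB float32 buffer corresponds to roughly this many elements
   elements = 85 * 1024**3 // 4
   print(elements)   # 22817013760, i.e. ~22.8 billion float32 values in one allocation
   ```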




[GitHub] zheng-da commented on issue #10807: Ndarray.asnumpy() error with gluon dense under both GPU and CPU environment

2018-05-25 Thread GitBox
zheng-da commented on issue #10807: Ndarray.asnumpy() error with gluon dense 
under both GPU and CPU  environment
URL: 
https://github.com/apache/incubator-mxnet/issues/10807#issuecomment-392086315
 
 
   I think so.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on issue #11047: Enhance mkldnn pooling to support full convention

2018-05-25 Thread GitBox
zheng-da commented on issue #11047: Enhance mkldnn pooling to support full 
convention
URL: https://github.com/apache/incubator-mxnet/pull/11047#issuecomment-392084012
 
 
   The test can't really cover the cases we want; it didn't detect the bug 
before @ashokei disabled full convention.
   You can change its stride to (3, 3) or something similar to make sure 
`i_h + pad_top + pad_bottom - k_h` isn't divisible by `stride_h`.
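   
   A minimal sketch of why that matters, assuming the usual semantics that 
`pooling_convention='valid'` floors the output size while `'full'` ceils it - 
the two only disagree when the span isn't divisible by the stride:
   ```python
   import math
   
   def pool_out_dim(i, k, pad, s, convention):
       """Pooling output length along one axis (kernel k, total padding pad, stride s)."""
       span = i + pad - k
       if convention == 'valid':
           return span // s + 1                      # floor
       return int(math.ceil(span / float(s))) + 1    # 'full'
   
   # i_h + pad_top + pad_bottom - k_h = 10 + 0 - 2 = 8, not divisible by stride 3,
   # so a regression in full convention would change the output shape.
   print(pool_out_dim(10, 2, 0, 3, 'valid'))  # 3
   print(pool_out_dim(10, 2, 0, 3, 'full'))   # 4
   ```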


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] asitstands commented on issue #11031: Use dtype=int for the indices returned by TopK

2018-05-25 Thread GitBox
asitstands commented on issue #11031: Use dtype=int for the indices returned by 
TopK
URL: 
https://github.com/apache/incubator-mxnet/issues/11031#issuecomment-392075252
 
 
   This could break existing code. Sometimes the resulting array of indices is 
subject to another operation, and the result can become different if the dtype 
of the index array changes, or it can raise an error since some operations can 
only be applied to floating point arrays.
   ```python
   nd.topk(nd.arange(10), k=2).mean()
   ```
   Currently the result is `[8.5]`, but the result is `[8]` if the dtype is 
integer. 
   ```python
   nd.linalg.gemm2(nd.topk(x, k=2), y)
   ```
   Currently this is allowed, but it'll cause an error if the dtype changes to 
integer.
   
   `multinomial` has a similar but opposite issue. It returns an index array of 
integer type, but a floating point array would sometimes be more convenient 
(PR #10970). I think it would be better if we could make `multinomial` and 
`topk` consistent.
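   
   For what it's worth, a minimal sketch (assuming only the existing `topk` and 
`astype` NDArray APIs) of how callers could keep such pipelines working even if 
the index dtype changed, by casting explicitly:
   ```python
   import mxnet.ndarray as nd
   
   idx = nd.topk(nd.arange(10), k=2)
   # An explicit cast restores floating point behaviour for consumers such as
   # mean() or linalg.gemm2(), whatever dtype topk natively returns.
   idx_f32 = idx.astype('float32')
   print(idx_f32.mean())  # [8.5]
   ```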


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] z01nl1o02 commented on issue #11061: mx.nd.argmax is slow

2018-05-25 Thread GitBox
z01nl1o02 commented on issue #11061: mx.nd.argmax is slow
URL: 
https://github.com/apache/incubator-mxnet/issues/11061#issuecomment-392078671
 
 
   Remove the line `print max[0]` and mx.nd will be faster: MXNet operations run 
asynchronously, and printing an element copies the result back to the host, 
which blocks on the computation in every iteration.
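   
   A minimal sketch of timing it without per-iteration synchronization (assuming 
the standard `mx.nd.waitall()` call):
   ```python
   import time
   import mxnet as mx
   
   tmp = mx.nd.random.uniform(-1, 1, shape=(64, 30), ctx=mx.gpu())
   mx.nd.waitall()                      # finish setup work before timing
   
   start = time.time()
   for _ in range(10):
       out = mx.nd.argmax(tmp, axis=1)  # enqueued asynchronously
   mx.nd.waitall()                      # block once, after the whole loop
   print("avg time %f" % ((time.time() - start) / 10))
   ```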
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] CodingCat commented on issue #10462: [MXNET-62] add test against spark integration

2018-05-25 Thread GitBox
CodingCat commented on issue #10462: [MXNET-62] add test against spark 
integration
URL: https://github.com/apache/incubator-mxnet/pull/10462#issuecomment-392073379
 
 
   @yzhliu want to take another look? I think the failed test is unrelated to 
my change.
   
   You can check the stdout of the Scala CPU run; the Spark ML unit tests are 
executed there.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zchrissirhcz commented on issue #9113: float division by zero error for small network

2018-05-25 Thread GitBox
zchrissirhcz commented on issue #9113: float division by zero error for small 
network
URL: 
https://github.com/apache/incubator-mxnet/issues/9113#issuecomment-392058102
 
 
   Is this because our GPU-based machine runs too fast?
   
   I also encountered this problem when playing with the linear regression toy 
network in 
https://mxnet.incubator.apache.org/tutorials/python/linear-regression.html.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] yjcn commented on issue #11052: Nvidia Jetson TX2 Check failed: (err) == (cudaSuccess) Name: mxnet_generic_kernel ErrStr:no kernel image is available for execution on the device

2018-05-25 Thread GitBox
yjcn commented on issue #11052: Nvidia Jetson TX2  Check failed: (err) == 
(cudaSuccess) Name: mxnet_generic_kernel ErrStr:no kernel image is available 
for execution on the device
URL: 
https://github.com/apache/incubator-mxnet/issues/11052#issuecomment-392057536
 
 
   I have solved this problem. The default CUDA_ARCH doesn't include the TX2's 
architecture, so we need to add "CUDA_ARCH = -gencode arch=compute_62,code=sm_62" 
to make/config.mk manually.
   (https://discuss.gluon.ai/t/topic/6589)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] kaleidoscopical opened a new issue #11062: how to manually occupy all gpu memory like tensorflow?

2018-05-25 Thread GitBox
kaleidoscopical opened a new issue #11062: how to manually occupy all gpu 
memory like tensorflow?
URL: https://github.com/apache/incubator-mxnet/issues/11062
 
 
   MXNet is excellent at saving memory, but is there a way to manually occupy 
all GPU memory the way TensorFlow does?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wsokolow commented on issue #11040: Big acc difference when fine-tune using vgg16 and resnet-50

2018-05-25 Thread GitBox
wsokolow commented on issue #11040: Big acc difference when fine-tune using 
vgg16 and resnet-50
URL: 
https://github.com/apache/incubator-mxnet/issues/11040#issuecomment-392050897
 
 
   Which engine did you use for execution, GPU or CPU (with MKLDNN)? I'm 
observing issues with ~10% accuracy while running AlexNet with the MKLDNN engine.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lebeg commented on a change in pull request #11036: [MXNET-472] Add ccache support to CI

2018-05-25 Thread GitBox
lebeg commented on a change in pull request #11036: [MXNET-472] Add ccache 
support to CI
URL: https://github.com/apache/incubator-mxnet/pull/11036#discussion_r190878691
 
 

 ##
 File path: make/config.mk
 ##
 @@ -37,8 +37,8 @@
 # choice of compiler
 #
 
-export CC = gcc
-export CXX = g++
+#export CC = gcc
 
 Review comment:
   It would be better to set it only if it's not set already (e.g. with `?=`).


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] liuzx32 commented on issue #11060: wget https://s3-us-west-2.amazonaws.com/mxnet.liuyz/data/mnist/train.txt throw 404

2018-05-25 Thread GitBox
liuzx32 commented on issue #11060: wget 
https://s3-us-west-2.amazonaws.com/mxnet.liuyz/data/mnist/train.txt throw 404
URL: 
https://github.com/apache/incubator-mxnet/issues/11060#issuecomment-392022915
 
 
   @szha @yzhliu  Thank you for the training data! 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on a change in pull request #10297: [MXNET-244] Fixes for cross compilation in ARM

2018-05-25 Thread GitBox
larroy commented on a change in pull request #10297: [MXNET-244] Fixes for 
cross compilation in ARM
URL: https://github.com/apache/incubator-mxnet/pull/10297#discussion_r190862675
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -77,7 +82,7 @@ build_armv6() {
 -DUSE_OPENCV=OFF \
 -DUSE_OPENMP=OFF \
 -DUSE_SIGNAL_HANDLER=ON \
--DCMAKE_BUILD_TYPE=Release \
+-DCMAKE_BUILD_TYPE=RelWithDebInfo \
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on a change in pull request #10297: [MXNET-244] Fixes for cross compilation in ARM

2018-05-25 Thread GitBox
larroy commented on a change in pull request #10297: [MXNET-244] Fixes for 
cross compilation in ARM
URL: https://github.com/apache/incubator-mxnet/pull/10297#discussion_r190862619
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -40,19 +40,24 @@ build_jetson() {
 pushd .
 mv make/crosscompile.jetson.mk make/config.mk
 make -j$(nproc)
-
 export MXNET_LIBRARY_PATH=`pwd`/libmxnet.so
 cd /work/mxnet/python
-python setup.py bdist_wheel --universal
+build_wheel
+popd
+}
 
-# Fix pathing issues in the wheel.  We need to move libmxnet.so from the 
data folder to the
-# mxnet folder, then repackage the wheel.
+build_wheel() {
+set -ex
+pushd .
+python setup.py bdist_wheel --universal
 WHEEL=`readlink -f dist/*.whl`
 TMPDIR=`mktemp -d`
 unzip -d $TMPDIR $WHEEL
 rm $WHEEL
 cd $TMPDIR
 mv *.data/data/mxnet/libmxnet.so mxnet
+cd $TMPDIR
+#zip -r $WHEEL $TMPDIR
 
 Review comment:
   I think there were problems, this works.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on a change in pull request #10297: [MXNET-244] Fixes for cross compilation in ARM

2018-05-25 Thread GitBox
larroy commented on a change in pull request #10297: [MXNET-244] Fixes for 
cross compilation in ARM
URL: https://github.com/apache/incubator-mxnet/pull/10297#discussion_r190861901
 
 

 ##
 File path: ci/docker/Dockerfile.build.armv7
 ##
 @@ -20,13 +20,13 @@
 
 FROM dockcross/linux-armv7
 
-ENV ARCH armv71
+ENV ARCH armv7
 ENV CC /usr/bin/arm-linux-gnueabihf-gcc
 ENV CXX /usr/bin/arm-linux-gnueabihf-g++
 
-RUN apt-get update && \
-apt-get install -y libopenblas-dev:armhf && \
-rm -rf /var/lib/apt/lists/*
+RUN apt-get update
+RUN apt-get install -y libopenblas-dev:armhf unzip
 
 Review comment:
   sure, I would make another PR for that.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on a change in pull request #10297: [MXNET-244] Fixes for cross compilation in ARM

2018-05-25 Thread GitBox
larroy commented on a change in pull request #10297: [MXNET-244] Fixes for 
cross compilation in ARM
URL: https://github.com/apache/incubator-mxnet/pull/10297#discussion_r190861774
 
 

 ##
 File path: tests/python/unittest/test_ndarray.py
 ##
 @@ -711,9 +711,8 @@ def get_values(ensure_unique):
  k=dat_size*dat_size*dat_size*dat_size, is_ascend=False)
 assert_almost_equal(nd_ret_argsort, gt)
 
-# test topk with a big shape
-a = mx.nd.arange(0, 54686454, step=1, repeat=1)
-assert_almost_equal(a.topk(k=54686454).asnumpy(), a.asnumpy()[::-1])
+a = mx.nd.arange(0, 1024, step=1, repeat=1)
 
 Review comment:
   it should, so we can pass tests in pi


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lebeg commented on a change in pull request #10297: [MXNET-244] Fixes for cross compilation in ARM

2018-05-25 Thread GitBox
lebeg commented on a change in pull request #10297: [MXNET-244] Fixes for cross 
compilation in ARM
URL: https://github.com/apache/incubator-mxnet/pull/10297#discussion_r190845064
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -40,19 +40,24 @@ build_jetson() {
 pushd .
 mv make/crosscompile.jetson.mk make/config.mk
 make -j$(nproc)
-
 export MXNET_LIBRARY_PATH=`pwd`/libmxnet.so
 cd /work/mxnet/python
-python setup.py bdist_wheel --universal
+build_wheel
+popd
+}
 
-# Fix pathing issues in the wheel.  We need to move libmxnet.so from the 
data folder to the
-# mxnet folder, then repackage the wheel.
+build_wheel() {
+set -ex
+pushd .
+python setup.py bdist_wheel --universal
 WHEEL=`readlink -f dist/*.whl`
 TMPDIR=`mktemp -d`
 unzip -d $TMPDIR $WHEEL
 rm $WHEEL
 cd $TMPDIR
 mv *.data/data/mxnet/libmxnet.so mxnet
+cd $TMPDIR
+#zip -r $WHEEL $TMPDIR
 
 Review comment:
   Maybe choose one style? I think ```zip -r $WHEEL $TMPDIR``` is good


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lebeg commented on a change in pull request #10297: [MXNET-244] Fixes for cross compilation in ARM

2018-05-25 Thread GitBox
lebeg commented on a change in pull request #10297: [MXNET-244] Fixes for cross 
compilation in ARM
URL: https://github.com/apache/incubator-mxnet/pull/10297#discussion_r190844837
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -77,7 +82,7 @@ build_armv6() {
 -DUSE_OPENCV=OFF \
 -DUSE_OPENMP=OFF \
 -DUSE_SIGNAL_HANDLER=ON \
--DCMAKE_BUILD_TYPE=Release \
+-DCMAKE_BUILD_TYPE=RelWithDebInfo \
 
 Review comment:
   I think this should stay Release, otherwise it's too big


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lebeg commented on a change in pull request #10297: [MXNET-244] Fixes for cross compilation in ARM

2018-05-25 Thread GitBox
lebeg commented on a change in pull request #10297: [MXNET-244] Fixes for cross 
compilation in ARM
URL: https://github.com/apache/incubator-mxnet/pull/10297#discussion_r190845261
 
 

 ##
 File path: ci/docker/Dockerfile.build.armv7
 ##
 @@ -20,13 +20,13 @@
 
 FROM dockcross/linux-armv7
 
-ENV ARCH armv71
+ENV ARCH armv7
 ENV CC /usr/bin/arm-linux-gnueabihf-gcc
 ENV CXX /usr/bin/arm-linux-gnueabihf-g++
 
-RUN apt-get update && \
-apt-get install -y libopenblas-dev:armhf && \
-rm -rf /var/lib/apt/lists/*
+RUN apt-get update
+RUN apt-get install -y libopenblas-dev:armhf unzip
 
 Review comment:
   Let's build openblas ouself like for other arm platforms?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lebeg commented on a change in pull request #10297: [MXNET-244] Fixes for cross compilation in ARM

2018-05-25 Thread GitBox
lebeg commented on a change in pull request #10297: [MXNET-244] Fixes for cross 
compilation in ARM
URL: https://github.com/apache/incubator-mxnet/pull/10297#discussion_r190845478
 
 

 ##
 File path: tests/python/unittest/test_ndarray.py
 ##
 @@ -711,9 +711,8 @@ def get_values(ensure_unique):
  k=dat_size*dat_size*dat_size*dat_size, is_ascend=False)
 assert_almost_equal(nd_ret_argsort, gt)
 
-# test topk with a big shape
-a = mx.nd.arange(0, 54686454, step=1, repeat=1)
-assert_almost_equal(a.topk(k=54686454).asnumpy(), a.asnumpy()[::-1])
+a = mx.nd.arange(0, 1024, step=1, repeat=1)
 
 Review comment:
   This shouldn't be part of the change, right?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lebeg commented on a change in pull request #11053: Fixed armv7 wheel

2018-05-25 Thread GitBox
lebeg commented on a change in pull request #11053: Fixed armv7 wheel
URL: https://github.com/apache/incubator-mxnet/pull/11053#discussion_r190837223
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -83,31 +102,40 @@ build_armv6() {
 -DBUILD_CPP_EXAMPLES=OFF \
 -Dmxnet_LINKER_LIBS=-lgfortran \
 -G Ninja /work/mxnet
+
 ninja
-export MXNET_LIBRARY_PATH=`pwd`/libmxnet.so
-cd /work/mxnet/python
-python setup.py bdist_wheel --universal
-cp dist/*.whl /work/build
+build_wheel
+
 popd
 }
 
 build_armv7() {
 set -ex
 pushd .
 cd /work/build
-cmake\
--DUSE_CUDA=OFF\
--DUSE_OPENCV=OFF\
--DUSE_OPENMP=OFF\
--DUSE_SIGNAL_HANDLER=ON\
--DCMAKE_BUILD_TYPE=RelWithDebInfo\
--DUSE_MKL_IF_AVAILABLE=OFF\
+
+# Lapack functionality will be included and statically linked to openblas.
 
 Review comment:
   Sure, I already tried that and this will not be easy. I suggest this will be 
part of a separate PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] solin319 opened a new issue #11061: mx.nd.argmax is slow

2018-05-25 Thread GitBox
solin319 opened a new issue #11061: mx.nd.argmax is slow
URL: https://github.com/apache/incubator-mxnet/issues/11061
 
 
   ```
   import time
   import mxnet as mx
   import numpy as np
   import os

   tmp = mx.nd.random.uniform(-1, 1, shape=(64, 30), ctx=mx.gpu())
   # tmp = np.random.RandomState().uniform(-1, 1, (64, 30))
   tic = time.time()
   for i in range(20):
       if i == 5:
           begin = time.time()
       elif i == 15:
           end = time.time()
       tic = time.time()
       max = mx.nd.argmax(tmp, axis=1)
       # max = np.argmax(tmp, axis=1)
       print(max[0])
       toc = time.time() - tic
       print("used time %f:" % toc)
   avg = (end - begin) / 10
   print("avg time %f" % avg)
   
   ```
   We used the code above to test the speed of argmax.
   NDArray on CPU takes 0.112895 s.
   NDArray on GPU takes 0.280577 s.
   numpy.array takes only 0.018021 s.
   
   Why is mx.nd.argmax so slow?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services