[GitHub] [incubator-mxnet] TaoLv commented on issue #16735: Use single-bit for mask in dropout operator

2020-02-14 Thread GitBox
TaoLv commented on issue #16735: Use single-bit for mask in dropout operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#issuecomment-586564858
 
 
   > Does the slow down come from more computation in the new algorithm or the 
sub-optimal implementation?
   
   The new implementation requires more memory loads and additional bit-wise operations, so a performance slowdown is expected.
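   For context on where the extra bit-wise work comes from: the single-bit mask packs eight keep/drop decisions into one byte, so every element costs a shift and an AND that the old byte-per-element mask did not. A minimal NumPy sketch of the two schemes (hypothetical helper names, not MXNet code):

```python
import numpy as np

def apply_byte_mask(grad, mask_bytes, pkeep):
    # One mask byte per element: a single load, no bit twiddling.
    return grad * mask_bytes / pkeep

def apply_bit_mask(grad, mask_bits, pkeep):
    # One mask *bit* per element: an extra shift and AND per element,
    # which is the additional work referred to above.
    out = np.empty_like(grad)
    for i in range(grad.size):
        byte = mask_bits[i >> 3]       # div 8: which mask byte
        bit = (byte >> (i & 7)) & 1    # mod 8: which bit inside it
        out[i] = grad[i] * bit / pkeep
    return out

grad = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
pkeep = 0.5
# Keep elements 0, 2, 4, 6 -> bit pattern 0b01010101 = 0x55.
mask_bits = np.array([0x55], dtype=np.uint8)
mask_bytes = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=np.uint8)
assert np.allclose(apply_bit_mask(grad, mask_bits, pkeep),
                   apply_byte_mask(grad, mask_bytes, pkeep))
```

   The loop form mirrors the operator's backward pass; a vectorized `np.unpackbits` would hide the per-element cost being discussed.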


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (d352673 -> 149975c)

2020-02-14 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d352673  Fix transformer.cu interleaved matmul for cuda arch < 5  
(#17596)
 add 149975c  [numpy][Do Not Review]add op insert (#16865)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py  | 110 +-
 python/mxnet/numpy/multiarray.py   |  85 -
 python/mxnet/numpy_dispatch_protocol.py|   1 +
 python/mxnet/symbol/numpy/_symbol.py   |  65 +++-
 src/operator/numpy/np_insert_op-inl.h  | 371 +
 src/operator/numpy/np_insert_op_scalar-inl.h   | 160 +
 src/operator/numpy/np_insert_op_scalar.cc  | 135 
 .../multi_lars.cu => numpy/np_insert_op_scalar.cu} |  16 +-
 src/operator/numpy/np_insert_op_slice-inl.h| 200 +++
 src/operator/numpy/np_insert_op_slice.cc   | 160 +
 .../multi_lars.cu => numpy/np_insert_op_slice.cu}  |  16 +-
 src/operator/numpy/np_insert_op_tensor-inl.h   | 229 +
 src/operator/numpy/np_insert_op_tensor.cc  | 152 +
 .../multi_lars.cu => numpy/np_insert_op_tensor.cu} |  16 +-
 .../python/unittest/test_numpy_interoperability.py |  12 +
 tests/python/unittest/test_numpy_op.py | 113 +++
 16 files changed, 1814 insertions(+), 27 deletions(-)
 create mode 100644 src/operator/numpy/np_insert_op-inl.h
 create mode 100644 src/operator/numpy/np_insert_op_scalar-inl.h
 create mode 100644 src/operator/numpy/np_insert_op_scalar.cc
 copy src/operator/{contrib/multi_lars.cu => numpy/np_insert_op_scalar.cu} (75%)
 create mode 100644 src/operator/numpy/np_insert_op_slice-inl.h
 create mode 100644 src/operator/numpy/np_insert_op_slice.cc
 copy src/operator/{contrib/multi_lars.cu => numpy/np_insert_op_slice.cu} (75%)
 create mode 100644 src/operator/numpy/np_insert_op_tensor-inl.h
 create mode 100644 src/operator/numpy/np_insert_op_tensor.cc
 copy src/operator/{contrib/multi_lars.cu => numpy/np_insert_op_tensor.cu} (75%)



[GitHub] [incubator-mxnet] haojin2 merged pull request #16865: [numpy][Do Not Review]add op insert

2020-02-14 Thread GitBox
haojin2 merged pull request #16865: [numpy][Do Not Review]add op insert
URL: https://github.com/apache/incubator-mxnet/pull/16865
 
 
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-02-14 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new b73b3ea  Bump the publish timestamp.
b73b3ea is described below

commit b73b3eab845bd1a0f9c35f8402fd3b65402a0879
Author: mxnet-ci 
AuthorDate: Sat Feb 15 06:45:42 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..00774e1
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Feb 15 06:45:42 UTC 2020



[GitHub] [incubator-mxnet] leezu opened a new pull request #17603: Backport #17596

2020-02-14 Thread GitBox
leezu opened a new pull request #17603: Backport #17596
URL: https://github.com/apache/incubator-mxnet/pull/17603
 
 
   Backport #17596




[GitHub] [incubator-mxnet] leezu commented on issue #17017: USE_BLAS=apple broken on OSX 10.15

2020-02-14 Thread GitBox
leezu commented on issue #17017: USE_BLAS=apple broken on OSX 10.15
URL: 
https://github.com/apache/incubator-mxnet/issues/17017#issuecomment-586560692
 
 
   Fixed by https://github.com/apache/incubator-mxnet/pull/17602 for cmake build




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #17599: Fixed Embedding op for LT input

2020-02-14 Thread GitBox
apeforest commented on a change in pull request #17599: Fixed Embedding op for 
LT input
URL: https://github.com/apache/incubator-mxnet/pull/17599#discussion_r379766442
 
 

 ##
 File path: src/operator/tensor/indexing_op.h
 ##
 @@ -66,7 +66,7 @@ enum QuantizedEmbeddingOpResource {kTempSpace};
 
 
 struct SparseEmbeddingParam: public dmlc::Parameter<SparseEmbeddingParam> {
-  int input_dim;
+  index_t input_dim;
   int output_dim;
 
 Review comment:
   Would there be a case where output_dim also exceeds 2^32?
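   For reference, the failure mode behind the `int` → `index_t` change: a signed 32-bit integer cannot represent `input_dim = 2**32`, which wraps to 0. A plain-Python sketch of the two's-complement wrap-around (illustrative only):

```python
def to_int32(x):
    """Interpret x as a signed 32-bit integer (two's complement)."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

def to_int64(x):
    """Interpret x as a signed 64-bit integer (two's complement)."""
    x &= 0xFFFFFFFFFFFFFFFF
    return x - 0x10000000000000000 if x >= 0x8000000000000000 else x

input_dim = 2**32
assert to_int32(input_dim) == 0       # 2^32 wraps to 0 in a 32-bit int
assert to_int64(input_dim) == 2**32   # index_t (64-bit) holds it fine
```

   `output_dim` would need the same widening only if a single embedding row could exceed 2^31 - 1 elements, which is the question raised above.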




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #16735: Use single-bit for mask in dropout operator

2020-02-14 Thread GitBox
apeforest commented on a change in pull request #16735: Use single-bit for mask 
in dropout operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#discussion_r379763372
 
 

 ##
 File path: src/operator/nn/dropout-inl.h
 ##
 @@ -152,15 +181,20 @@ class DropoutOp {
   const std::vector<TBlob>& in_grad) {
 Stream<xpu> *s = ctx.get_stream<xpu>();
 Tensor<xpu, 2, DType> grad = out_grad[dropout::kOut].FlatTo2D<xpu, DType>(s);
-Tensor<xpu, 2, DType> mask = out_data[dropout::kMask].FlatTo2D<xpu, DType>(s);
+Tensor<xpu, 1, uint8_t> mask = out_data[dropout::kMask].FlatTo1D<xpu, uint8_t>(s);
 Tensor<xpu, 2, DType> gdata = in_grad[dropout::kData].FlatTo2D<xpu, DType>(s);
 DType *ingradptr = gdata.dptr_;
 const DType *outgradptr = grad.dptr_;
-const DType *maskptr = mask.dptr_;
-const int count = mask.shape_[0] * mask.shape_[1];
-#pragma omp parallel for num_threads(engine::OpenMP::Get()->GetRecommendedOMPThreadCount())
-for (int i = 0; i < count; ++i) {
-  ingradptr[i] = outgradptr[i] * maskptr[i];
+const uint8_t *maskptr = mask.dptr_;
+const index_t count = grad.shape_[0] * grad.shape_[1];
+const float pk_1 = 1.0f / this->pkeep_;
+const int nthr = engine::OpenMP::Get()->GetRecommendedOMPThreadCount();
+#pragma omp parallel for num_threads(nthr) schedule(static, 8)
+for (index_t i = 0; i < count; ++i) {
+  auto mask_idx = i >> 3;  // div 8
+  uint8_t mask_offset = i & 7;  // mod 8
+  bool mask_val = maskptr[mask_idx] & (1U << mask_offset);
+  ingradptr[i] = outgradptr[i] * mask_val * pk_1;
 Review comment:
   However, there is no read from `ingradptr`, so this is not a case of false sharing, right? I tried this block and didn't notice any noticeable performance gain.




[GitHub] [incubator-mxnet] leezu opened a new pull request #17602: Fix OS X staticbuild and add tests

2020-02-14 Thread GitBox
leezu opened a new pull request #17602: Fix OS X staticbuild and add tests
URL: https://github.com/apache/incubator-mxnet/pull/17602
 
 
   ## Description ##
   Fix OS X staticbuild and add tests based on Github Actions.




[incubator-mxnet] branch master updated (9ee4f04 -> d352673)

2020-02-14 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 9ee4f04  Support broadcast assign for `npi_boolean_mask_assign_tensor` 
(#17131)
 add d352673  Fix transformer.cu interleaved matmul for cuda arch < 5  
(#17596)

No new revisions were added by this update.

Summary of changes:
 src/operator/contrib/transformer.cu | 71 ++---
 1 file changed, 59 insertions(+), 12 deletions(-)



[GitHub] [incubator-mxnet] leezu merged pull request #17596: Fix transformer.cu interleaved matmul for cuda arch < 5

2020-02-14 Thread GitBox
leezu merged pull request #17596: Fix transformer.cu interleaved matmul for 
cuda arch < 5
URL: https://github.com/apache/incubator-mxnet/pull/17596
 
 
   




[GitHub] [incubator-mxnet] reminisce opened a new pull request #17601: Port conv3d dilate > 1 fix to 1.5.x

2020-02-14 Thread GitBox
reminisce opened a new pull request #17601: Port conv3d dilate > 1 fix to 1.5.x
URL: https://github.com/apache/incubator-mxnet/pull/17601
 
 
   Port https://github.com/apache/incubator-mxnet/pull/17491 to 1.5.x.




[GitHub] [incubator-mxnet] TaoLv commented on issue #17595: MKLDNN incompatibility with large tensor (dim >= 2^32) data

2020-02-14 Thread GitBox
TaoLv commented on issue #17595: MKLDNN incompatibility with large tensor (dim 
>= 2^32) data
URL: 
https://github.com/apache/incubator-mxnet/issues/17595#issuecomment-586552974
 
 
   I can reproduce the crash.




[incubator-mxnet] branch master updated (f619c52 -> 9ee4f04)

2020-02-14 Thread reminisce
This is an automated email from the ASF dual-hosted git repository.

reminisce pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from f619c52  Python 2 cleanup (#17583)
 add 9ee4f04  Support broadcast assign for `npi_boolean_mask_assign_tensor` 
(#17131)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/ndarray.py  |  5 +-
 python/mxnet/numpy/multiarray.py |  4 +-
 src/operator/numpy/np_boolean_mask_assign.cc | 68 +---
 src/operator/numpy/np_boolean_mask_assign.cu | 36 ++-
 src/operator/tensor/indexing_op.cc   |  4 +-
 src/operator/tensor/indexing_op.cu   |  4 +-
 src/operator/tensor/init_op.h|  2 +-
 tests/python/unittest/test_numpy_op.py   | 21 ++---
 8 files changed, 92 insertions(+), 52 deletions(-)



[GitHub] [incubator-mxnet] reminisce merged pull request #17131: Support broadcast assign for `npi_boolean_mask_assign_tensor`

2020-02-14 Thread GitBox
reminisce merged pull request #17131: Support broadcast assign for 
`npi_boolean_mask_assign_tensor`
URL: https://github.com/apache/incubator-mxnet/pull/17131
 
 
   




[GitHub] [incubator-mxnet] reminisce opened a new pull request #17600: Fix Non-ASCII character in docstring

2020-02-14 Thread GitBox
reminisce opened a new pull request #17600: Fix Non-ASCII character in docstring
URL: https://github.com/apache/incubator-mxnet/pull/17600
 
 
   ## Description ##
   As title. Expected to complement 
https://github.com/apache/incubator-mxnet/pull/17593 for fixing 
https://github.com/apache/incubator-mxnet/issues/17562.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] TaoLv commented on issue #17595: MKLDNN incompatibility with large tensor (dim >= 2^32) data

2020-02-14 Thread GitBox
TaoLv commented on issue #17595: MKLDNN incompatibility with large tensor (dim 
>= 2^32) data
URL: 
https://github.com/apache/incubator-mxnet/issues/17595#issuecomment-586548992
 
 
   Thank you for reporting the issue. I will take a look at this. But my initial thought is that MKL-DNN itself has supported int64 shapes since the v1.0 upgrade, while I don't think the current integration of MKL/OpenBLAS supports int64 GEMM.
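   For background on that last point: a conventional LP64 BLAS receives matrix dimensions as 32-bit C `int`s, so a dimension of 2^32 or more is silently truncated, while an ILP64 build uses 64-bit integers throughout. A plain-Python sketch of the truncation (hypothetical helper, not an actual BLAS call):

```python
def lp64_dim(n):
    """How an LP64 BLAS would see a dimension passed through a C 'int'."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

m = 2**32 + 3                     # a large-tensor dimension
assert lp64_dim(m) == 3           # truncated: the GEMM sees the wrong shape
assert lp64_dim(2**31) == -2**31  # or even a negative dimension
# An ILP64 (64-bit integer interface) build would receive m unchanged.
```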




[GitHub] [incubator-mxnet] TaoLv commented on issue #17576: Fix storage type infer of softmax backward

2020-02-14 Thread GitBox
TaoLv commented on issue #17576: Fix storage type infer of softmax backward
URL: https://github.com/apache/incubator-mxnet/pull/17576#issuecomment-586548359
 
 
   @leezu, got it. Thank you for merging.




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #16735: Use single-bit for mask in dropout operator

2020-02-14 Thread GitBox
TaoLv commented on a change in pull request #16735: Use single-bit for mask in 
dropout operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#discussion_r379713840
 
 

 ##
 File path: src/operator/nn/dropout-inl.h
 ##
 @@ -152,15 +181,20 @@ class DropoutOp {
   const std::vector<TBlob>& in_grad) {
 Stream<xpu> *s = ctx.get_stream<xpu>();
 Tensor<xpu, 2, DType> grad = out_grad[dropout::kOut].FlatTo2D<xpu, DType>(s);
-Tensor<xpu, 2, DType> mask = out_data[dropout::kMask].FlatTo2D<xpu, DType>(s);
+Tensor<xpu, 1, uint8_t> mask = out_data[dropout::kMask].FlatTo1D<xpu, uint8_t>(s);
 Tensor<xpu, 2, DType> gdata = in_grad[dropout::kData].FlatTo2D<xpu, DType>(s);
 DType *ingradptr = gdata.dptr_;
 const DType *outgradptr = grad.dptr_;
-const DType *maskptr = mask.dptr_;
-const int count = mask.shape_[0] * mask.shape_[1];
-#pragma omp parallel for num_threads(engine::OpenMP::Get()->GetRecommendedOMPThreadCount())
-for (int i = 0; i < count; ++i) {
-  ingradptr[i] = outgradptr[i] * maskptr[i];
+const uint8_t *maskptr = mask.dptr_;
+const index_t count = grad.shape_[0] * grad.shape_[1];
+const float pk_1 = 1.0f / this->pkeep_;
+const int nthr = engine::OpenMP::Get()->GetRecommendedOMPThreadCount();
+#pragma omp parallel for num_threads(nthr) schedule(static, 8)
+for (index_t i = 0; i < count; ++i) {
+  auto mask_idx = i >> 3;  // div 8
+  uint8_t mask_offset = i & 7;  // mod 8
+  bool mask_val = maskptr[mask_idx] & (1U << mask_offset);
+  ingradptr[i] = outgradptr[i] * mask_val * pk_1;
 Review comment:
   We're writing to `ingradptr`. We also hope the elements in one cache line will be handled by a single OpenMP thread. With the original parallelization, one cache line is loaded and only one element in it is handled by the current thread; the next thread needs to load the same cache line and handle the next element.
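   The scheduling point can be sketched numerically: each mask byte covers 8 consecutive elements, and `schedule(static, 8)` hands out iterations in chunks of 8, so each mask byte is touched by exactly one thread. A small Python model of the iteration-to-thread mapping (illustrative only; `chunk=1` is shown merely to contrast with a per-element round-robin assignment):

```python
def chunks_per_thread(count, nthreads, chunk):
    """Which thread handles each iteration under schedule(static, chunk)."""
    return [(i // chunk) % nthreads for i in range(count)]

count, nthreads = 32, 4

# chunk=1: the 8 bits of mask byte 0 (iterations 0..7) are touched
# by all 4 threads, so each thread reloads the same byte/cache line.
owners = chunks_per_thread(count, nthreads, 1)
assert set(owners[0:8]) == {0, 1, 2, 3}

# chunk=8: iterations 0..7 share mask byte 0 and a single owner thread.
owners = chunks_per_thread(count, nthreads, 8)
assert set(owners[0:8]) == {0}
```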




[GitHub] [incubator-mxnet] rongzha1 commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

2020-02-14 Thread GitBox
rongzha1 commented on a change in pull request #17265: Add bfloat16 
floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r379713720
 
 

 ##
 File path: 3rdparty/mshadow/mshadow/bfloat.h
 ##
 @@ -0,0 +1,167 @@
+/*!
 
 Review comment:
   Done, thanks!




[GitHub] [incubator-mxnet] connorgoggins commented on issue #17599: Fixed Embedding op for LT input

2020-02-14 Thread GitBox
connorgoggins commented on issue #17599: Fixed Embedding op for LT input
URL: https://github.com/apache/incubator-mxnet/pull/17599#issuecomment-586543671
 
 
   @access2rohit great question. Here is an example of SparseEmbedding being used without my fix, and the resulting error (the same error as with Embedding):
   
   ```
   >>> mx.contrib.nd.SparseEmbedding(data=mx.nd.random_normal(shape=(2**32,1)), 
weight=mx.nd.random_normal(shape=(2**32, 1)), input_dim=2**32, output_dim=1)
   
   mxnet.base.MXNetError: MXNetError: Invalid Parameter format for input_dim 
expect int but value='4294967296', in operator 
_contrib_SparseEmbedding(name="", output_dim="1", input_dim="4294967296")
   ```
   
   With my fix, the call above passes without any issues - output below:
   ```
   [[[-0.5190417]]
   
[[-1.4388928]]
   
[[ 1.1367434]]
   
...
   
[[-1.4388928]]
   
[[-1.4388928]]
   
[[-0.5190417]]]
   
   ```
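   For readers unfamiliar with the operator, an embedding lookup is essentially a row gather: `input_dim` is the number of rows in the weight table (which is why it must hold values of 2^32 and above), while `output_dim` is the row width. A toy NumPy sketch at tiny sizes (illustrative only, not MXNet code):

```python
import numpy as np

input_dim, output_dim = 10, 4          # tiny stand-ins for 2**32 and 1
weight = np.random.randn(input_dim, output_dim)
data = np.array([0, 3, 3, 9])          # token indices into the table

out = weight[data]                     # the whole op: a row gather
assert out.shape == (4, output_dim)
assert np.array_equal(out[1], out[2])  # same index -> same row
assert np.array_equal(out[0], weight[0])
```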




[GitHub] [incubator-mxnet] zhreshold commented on issue #17265: Add bfloat16 floating-point format support based on AMP

2020-02-14 Thread GitBox
zhreshold commented on issue #17265: Add bfloat16 floating-point format support 
based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#issuecomment-586543482
 
 
   @larroy Do you have any idea how to display more logs for the edge tests? They consistently fail at this stage.




[GitHub] [incubator-mxnet] leezu commented on issue #17592: Add cmake build support for macOS static build.

2020-02-14 Thread GitBox
leezu commented on issue #17592: Add cmake build support for macOS static build.
URL: https://github.com/apache/incubator-mxnet/pull/17592#issuecomment-586543059
 
 
   As per https://github.com/leezu/mxnet/commit/0bbd44c45f34c37bf70c90635e29a5fcf2e6f0cc/checks/447232316/logs, libtiff 4.1 does not work either.




[GitHub] [incubator-mxnet] access2rohit edited a comment on issue #17599: Fixed Embedding op for LT input

2020-02-14 Thread GitBox
access2rohit edited a comment on issue #17599: Fixed Embedding op for LT input
URL: https://github.com/apache/incubator-mxnet/pull/17599#issuecomment-586540131
 
 
   @connorgoggins can you paste the output of a case where SparseEmbedding params are used? Are both used in the same example use case presented here?




[GitHub] [incubator-mxnet] access2rohit commented on issue #17599: Fixed Embedding op for LT input

2020-02-14 Thread GitBox
access2rohit commented on issue #17599: Fixed Embedding op for LT input
URL: https://github.com/apache/incubator-mxnet/pull/17599#issuecomment-586540131
 
 
   @connorgoggins can you paste the output of a case where SparseEmbedding params are used?




[incubator-mxnet] branch master updated (40c0c54 -> f619c52)

2020-02-14 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 40c0c54  Fix storage type infer of softmax backward (#17576)
 add f619c52  Python 2 cleanup (#17583)

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt |   2 +-
 amalgamation/python/mxnet_predict.py   |  50 +++-
 ci/docker/runtime_functions.sh |   7 +-
 ci/jenkins/Jenkins_steps.groovy|  10 +-
 .../python/tutorials/packages/gluon/text/gnmt.rst  |   7 +-
 python/mxnet/__init__.py   |   1 -
 python/mxnet/_ctypes/ndarray.py|   1 -
 python/mxnet/_ctypes/symbol.py |   1 -
 python/mxnet/_cy2/README   |   1 -
 python/mxnet/_cy2/__init__.py  |  18 ---
 python/mxnet/attribute.py  |   1 -
 python/mxnet/autograd.py   |   2 -
 python/mxnet/base.py   | 137 ++---
 python/mxnet/callback.py   |   1 -
 python/mxnet/context.py|   1 -
 python/mxnet/contrib/autograd.py   |   2 -
 python/mxnet/contrib/io.py |   1 -
 python/mxnet/contrib/onnx/mx2onnx/__init__.py  |   1 -
 .../mxnet/contrib/onnx/mx2onnx/_op_translations.py |   4 -
 python/mxnet/contrib/onnx/mx2onnx/export_model.py  |   4 -
 python/mxnet/contrib/onnx/mx2onnx/export_onnx.py   |   4 -
 .../contrib/onnx/onnx2mx/_translation_utils.py |   1 -
 python/mxnet/contrib/onnx/onnx2mx/import_onnx.py   |   1 -
 python/mxnet/contrib/quantization.py   |   1 -
 python/mxnet/contrib/tensorboard.py|   1 -
 python/mxnet/contrib/text/_constants.py|   2 -
 python/mxnet/contrib/text/embedding.py |   5 +-
 python/mxnet/contrib/text/utils.py |   2 -
 python/mxnet/contrib/text/vocab.py |   2 -
 python/mxnet/cython/ndarray.pyx|   1 -
 python/mxnet/cython/symbol.pyx |   1 -
 python/mxnet/engine.py |   1 -
 python/mxnet/executor.py   |   1 -
 python/mxnet/executor_manager.py   |   1 -
 python/mxnet/gluon/contrib/data/text.py|   3 +-
 python/mxnet/gluon/data/dataloader.py  |  35 ++
 python/mxnet/gluon/loss.py |   1 -
 python/mxnet/gluon/model_zoo/model_store.py|   5 +-
 python/mxnet/gluon/model_zoo/vision/resnet.py  |   1 -
 python/mxnet/gluon/model_zoo/vision/vgg.py |   1 -
 python/mxnet/gluon/parameter.py|   1 -
 python/mxnet/gluon/rnn/rnn_layer.py|   1 -
 python/mxnet/gluon/utils.py|   9 +-
 python/mxnet/image/detection.py|   1 -
 python/mxnet/image/image.py|   3 +-
 python/mxnet/initializer.py|   1 -
 python/mxnet/io/__init__.py|   1 -
 python/mxnet/io/io.py  |   1 -
 python/mxnet/kvstore/base.py   |   1 -
 python/mxnet/kvstore/kvstore.py|   1 -
 python/mxnet/kvstore/kvstore_server.py |   1 -
 python/mxnet/libinfo.py|   1 -
 python/mxnet/library.py|   1 -
 python/mxnet/log.py|   8 +-
 python/mxnet/metric.py |   1 -
 python/mxnet/model.py  |   1 -
 python/mxnet/monitor.py|   1 -
 python/mxnet/name.py   |   1 -
 python/mxnet/ndarray/_internal.py  |   5 +-
 python/mxnet/ndarray/contrib.py|   1 -
 python/mxnet/ndarray/ndarray.py|  16 ++-
 python/mxnet/ndarray/numpy/_op.py  |   1 -
 python/mxnet/ndarray/numpy/linalg.py   |   1 -
 python/mxnet/ndarray/numpy/random.py   |   1 -
 python/mxnet/ndarray/numpy_extension/random.py |   1 -
 python/mxnet/ndarray/register.py   |   1 -
 python/mxnet/ndarray/sparse.py |   2 -
 python/mxnet/ndarray_doc.py|   1 -
 python/mxnet/numpy/__init__.py |   1 -
 python/mxnet/numpy/_register.py|   1 -
 python/mxnet/numpy/arrayprint.py   |   1 -
 python/mxnet/numpy/fallback.py |   1 -
 python/mxnet/numpy/fallback_linalg.py  |   1 -
 python/mxnet/numpy/function_base.py|   1 -
 python/mxnet/numpy/io.py   |   1 -
 python/mxnet/numpy/linalg.py   |   1 -
 

 

[GitHub] [incubator-mxnet] leezu merged pull request #17583: Python 2 cleanup

2020-02-14 Thread GitBox
leezu merged pull request #17583: Python 2 cleanup
URL: https://github.com/apache/incubator-mxnet/pull/17583
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17596: Fix transformer.cu interleaved matmul for cuda arch < 5

2020-02-14 Thread GitBox
leezu commented on issue #17596: Fix transformer.cu interleaved matmul for cuda 
arch < 5
URL: https://github.com/apache/incubator-mxnet/pull/17596#issuecomment-586539281
 
 
   Verified this patch by finetuning BERT on a P2 instance.
   
   Verification was initially blocked / delayed by 
https://github.com/apache/incubator-mxnet/pull/17576 ...
   
   ```
   % python finetune_classifier.py --task_name RTE --batch_size 32 --epochs 3 
--gpu 0 --lr 2e-5
   INFO:root:01:21:10 Namespace(accumulate=None, batch_size=32, 
bert_dataset='book_corpus_wiki_en_uncased', bert_model='bert_12_768_12', 
calib_mode='customize', deploy=False, dev_batch_size=8, dtype='float32', 
early_stop=None, epochs=3, epsilon=1e-06, gpu=0, log_interval=10, lr=2e-05, 
max_len=128, model_parameters=None, model_prefix=None, num_calib_batches=5, 
only_calibration=False, only_inference=False, optimizer='bertadam', 
output_dir='./output_dir', pretrained_bert_parameters=None, 
quantized_dtype='auto', round_to=None, seed=2, task_name='RTE', 
training_steps=None, warmup_ratio=0.1)
   [01:21:12] ../src/base.cc:84: Upgrade advisory: this mxnet has been built 
against cuDNN lib version 7501, which is older than the oldest version tested 
by CI (7600).  Set MXNET_CUDNN_LIB_CHECKING=0 to quiet this warning.
   INFO:root:01:21:26 processing dataset...
   INFO:root:01:21:35 Now we are doing BERT classification training on gpu(0)!
   INFO:root:01:21:35 training steps=233
   INFO:root:01:21:45 [Epoch 1 Batch 10/82] loss=0.7479, lr=0.078, 
metrics:accuracy:0.5507
   INFO:root:01:21:54 [Epoch 1 Batch 20/82] loss=0.7263, lr=0.165, 
metrics:accuracy:0.5235
   INFO:root:01:22:02 [Epoch 1 Batch 30/82] loss=0.6821, lr=0.194, 
metrics:accuracy:0.5306
   INFO:root:01:22:12 [Epoch 1 Batch 40/82] loss=0.6718, lr=0.185, 
metrics:accuracy:0.5370
   INFO:root:01:22:21 [Epoch 1 Batch 50/82] loss=0.6743, lr=0.175, 
metrics:accuracy:0.5518
   INFO:root:01:22:31 [Epoch 1 Batch 60/82] loss=0.6894, lr=0.166, 
metrics:accuracy:0.5551
   INFO:root:01:22:39 [Epoch 1 Batch 70/82] loss=0.6872, lr=0.156, 
metrics:accuracy:0.5587
   INFO:root:01:22:48 [Epoch 1 Batch 80/82] loss=0.6626, lr=0.147, 
metrics:accuracy:0.5693
   INFO:root:01:22:50 Now we are doing evaluation on dev with gpu(0).
   INFO:root:01:22:51 [Batch 10/35] loss=0.6449, metrics:accuracy:0.6750
   INFO:root:01:22:52 [Batch 20/35] loss=0.6266, metrics:accuracy:0.6813
   INFO:root:01:22:54 [Batch 30/35] loss=0.6930, metrics:accuracy:0.6625
   INFO:root:01:22:54 validation metrics:accuracy:0.6715
   INFO:root:01:22:54 Time cost=4.00s, throughput=69.97 samples/s
   INFO:root:01:22:55 params saved in: ./output_dir/model_bert_RTE_0.params
   INFO:root:01:22:55 Time cost=79.30s
   INFO:root:01:23:03 [Epoch 2 Batch 10/82] loss=0.5310, lr=0.135, 
metrics:accuracy:0.7719
   INFO:root:01:23:12 [Epoch 2 Batch 20/82] loss=0.5022, lr=0.126, 
metrics:accuracy:0.7650
   INFO:root:01:23:22 [Epoch 2 Batch 30/82] loss=0.4835, lr=0.116, 
metrics:accuracy:0.7733
   INFO:root:01:23:31 [Epoch 2 Batch 40/82] loss=0.4762, lr=0.107, 
metrics:accuracy:0.7754
   INFO:root:01:23:40 [Epoch 2 Batch 50/82] loss=0.4412, lr=0.097, 
metrics:accuracy:0.7728
   INFO:root:01:23:48 [Epoch 2 Batch 60/82] loss=0.4915, lr=0.088, 
metrics:accuracy:0.7741
   INFO:root:01:23:57 [Epoch 2 Batch 70/82] loss=0.4512, lr=0.078, 
metrics:accuracy:0.7767
   INFO:root:01:24:05 [Epoch 2 Batch 80/82] loss=0.3897, lr=0.069, 
metrics:accuracy:0.7832
   INFO:root:01:24:06 Now we are doing evaluation on dev with gpu(0).
   INFO:root:01:24:08 [Batch 10/35] loss=0.6482, metrics:accuracy:0.7125
   INFO:root:01:24:09 [Batch 20/35] loss=0.6311, metrics:accuracy:0.7125
   INFO:root:01:24:10 [Batch 30/35] loss=0.7034, metrics:accuracy:0.7042
   INFO:root:01:24:10 validation metrics:accuracy:0.7076
   INFO:root:01:24:10 Time cost=4.00s, throughput=70.06 samples/s
   INFO:root:01:24:11 params saved in: ./output_dir/model_bert_RTE_1.params
   INFO:root:01:24:11 Time cost=76.11s
   INFO:root:01:24:21 [Epoch 3 Batch 10/82] loss=0.2911, lr=0.057, 
metrics:accuracy:0.9125
   INFO:root:01:24:30 [Epoch 3 Batch 20/82] loss=0.2762, lr=0.048, 
metrics:accuracy:0.9092
   INFO:root:01:24:39 [Epoch 3 Batch 30/82] loss=0.2438, lr=0.038, 
metrics:accuracy:0.9121
   INFO:root:01:24:47 [Epoch 3 Batch 40/82] loss=0.2719, lr=0.029, 
metrics:accuracy:0.9077
   INFO:root:01:24:56 [Epoch 3 Batch 50/82] loss=0.2787, lr=0.019, 
metrics:accuracy:0.9054
   INFO:root:01:25:05 [Epoch 3 Batch 60/82] loss=0.3279, lr=0.010, 
metrics:accuracy:0.9049
   INFO:root:01:25:12 Finish training step: 233
   INFO:root:01:25:12 Now we are doing evaluation on dev with gpu(0).
   INFO:root:01:25:14 [Batch 10/35] loss=0.7463, metrics:accuracy:0.7125
   INFO:root:01:25:15 [Batch 20/35] loss=0.6660, metrics:accuracy:0.7250
   INFO:root:01:25:16 [Batch 30/35] loss=0.7802, metrics:accuracy:0.7125
   INFO:root:01:25:16 validation 

[incubator-mxnet] branch master updated (39b158f -> 40c0c54)

2020-02-14 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 39b158f  quantile_scalar (#17572)
 add 40c0c54  Fix storage type infer of softmax backward (#17576)

No new revisions were added by this update.

Summary of changes:
 src/operator/nn/softmax.cc | 16 +++-
 1 file changed, 7 insertions(+), 9 deletions(-)




[GitHub] [incubator-mxnet] leezu merged pull request #17576: Fix storage type infer of softmax backward

2020-02-14 Thread GitBox
leezu merged pull request #17576: Fix storage type infer of softmax backward
URL: https://github.com/apache/incubator-mxnet/pull/17576
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] D-Roberts commented on issue #17590: [numpy] Implement Weibull backward

2020-02-14 Thread GitBox
D-Roberts commented on issue #17590: [numpy] Implement Weibull backward
URL: https://github.com/apache/incubator-mxnet/pull/17590#issuecomment-586538103
 
 
   @haojin2 @xidulu Ready for review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17576: Fix storage type infer of softmax backward

2020-02-14 Thread GitBox
leezu commented on issue #17576: Fix storage type infer of softmax backward
URL: https://github.com/apache/incubator-mxnet/pull/17576#issuecomment-586537272
 
 
   @TaoLv could you revert the buggy commit next time? I spent some time today
tracking down the bug, which could have been avoided.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] connorgoggins commented on issue #17599: Fixed Embedding op for LT input

2020-02-14 Thread GitBox
connorgoggins commented on issue #17599: Fixed Embedding op for LT input
URL: https://github.com/apache/incubator-mxnet/pull/17599#issuecomment-586535371
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] connorgoggins opened a new pull request #17599: Fixed Embedding op for LT input

2020-02-14 Thread GitBox
connorgoggins opened a new pull request #17599: Fixed Embedding op for LT input
URL: https://github.com/apache/incubator-mxnet/pull/17599
 
 
   ## Description ##
   The Embedding op was previously breaking on large tensor (dimension >= 2^32) 
data. With the following input:
   ```
   nd.Embedding(data=nd.random_normal(shape=(2**32,1)), 
weight=nd.random_normal(shape=(2**32,1)), input_dim=2**32, output_dim=1)
   ```
   the following error was thrown:
   ```
   mxnet.base.MXNetError: MXNetError: Invalid Parameter format for input_dim 
expect int but value='4294967296', in operator Embedding(name="", 
output_dim="1", input_dim="4294967296")
   ```
   
   To fix this issue, I modified `indexing_op.h` to switch from storing 
input_dim as an `int` to storing it as an `index_t`. After implementing my fix 
and rebuilding, the previous input command displayed the correct output:
   ```
   [[[-0.5190417]]
   
[[-1.4388928]]
   
[[ 1.1367434]]
   
...
   
[[-1.4388928]]
   
[[-1.4388928]]
   
[[-0.5190417]]]
   
   ```
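
   The failure mode is plain 32-bit overflow: `2**32` does not fit in an `int`,
so the parameter parser rejects it, while a 64-bit `index_t` holds it fine. A
minimal NumPy sketch of the arithmetic (illustrative only, not MXNet code):

   ```python
import numpy as np

big = 2 ** 32  # the input_dim that previously broke Embedding

# a 32-bit signed int cannot represent 2**32 ...
assert big > np.iinfo(np.int32).max
# ... but a 64-bit index type (what index_t maps to) can
assert big <= np.iinfo(np.int64).max

# casting down to 32 bits silently wraps to 0, losing the dimension
wrapped = np.int64(big).astype(np.int32)
assert int(wrapped) == 0
   ```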
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage
   - [x] Code is well-documented
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - M src/operator/tensor/indexing_op.h
   
   ## Comments ##
   Tested on r5dn.24xl-ubuntu 16.04 and p2.16xl-ubuntu 16.04 with
   1. Individual op run
   2. Full OpPerf run
   
   ## Results ##
   [Single operator test - Embedding op 
(GPU)](https://gist.github.com/connorgoggins/2ead27ac3b142e458ddba43eb4093bf7)
   [Single operator test - Embedding op 
(CPU)](https://gist.github.com/connorgoggins/cdd5659abd5d2152a29f44a1f1ee03c7)
   
   [Full OpPerf test 
(GPU)](https://gist.github.com/connorgoggins/09131cdbbdcbba1dc39a93099dccaad4)
   [Full OpPerf test 
(CPU)](https://gist.github.com/connorgoggins/3fc3c7a40dae8b5cea7f82d9ca4b1a72)
   
   @apeforest @access2rohit 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on a change in pull request #17592: Add cmake build support for macOS static build.

2020-02-14 Thread GitBox
leezu commented on a change in pull request #17592: Add cmake build support for 
macOS static build.
URL: https://github.com/apache/incubator-mxnet/pull/17592#discussion_r379702140
 
 

 ##
 File path: tools/dependencies/curl.sh
 ##
 @@ -35,6 +35,7 @@ if [[ ! -f $DEPS_PATH/lib/libcurl.a ]]; then
 CONFIG_FLAG="--with-darwinssl"
 fi
 ./configure $CONFIG_FLAG \
+--without-libidn2 \
 
 Review comment:
   > No, there's no known security reason to avoid enabling libidn2 in curl 
builds.
   For generic curl builds I would recommend building with it so that users can
   use international domain names in URLs
   
   https://curl.haxx.se/mail/lib-2017-10/0158.html
   
   Why disable this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-02-14 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new fe0b694  Bump the publish timestamp.
fe0b694 is described below

commit fe0b6944b48982ffe156a315b98397116cd592ce
Author: mxnet-ci 
AuthorDate: Sat Feb 15 00:43:03 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..c84af6f
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Feb 15 00:43:03 UTC 2020



[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #16735: Use single-bit for mask in dropout operator

2020-02-14 Thread GitBox
apeforest commented on a change in pull request #16735: Use single-bit for mask 
in dropout operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#discussion_r379688573
 
 

 ##
 File path: src/operator/nn/dropout-inl.h
 ##
 @@ -152,15 +181,20 @@ class DropoutOp {
                     const std::vector<TBlob> &in_grad) {
     Stream<xpu> *s = ctx.get_stream<xpu>();
     Tensor<xpu, 2, DType> grad = out_grad[dropout::kOut].FlatTo2D<xpu, DType>(s);
-    Tensor<xpu, 2, DType> mask = out_data[dropout::kMask].FlatTo2D<xpu, DType>(s);
+    Tensor<xpu, 1, uint8_t> mask = out_data[dropout::kMask].FlatTo1D<xpu, uint8_t>(s);
     Tensor<xpu, 2, DType> gdata = in_grad[dropout::kData].FlatTo2D<xpu, DType>(s);
     DType *ingradptr = gdata.dptr_;
     const DType *outgradptr = grad.dptr_;
-    const DType *maskptr = mask.dptr_;
-    const int count = mask.shape_[0] * mask.shape_[1];
-#pragma omp parallel for num_threads(engine::OpenMP::Get()->GetRecommendedOMPThreadCount())
-    for (int i = 0; i < count; ++i) {
-      ingradptr[i] = outgradptr[i] * maskptr[i];
+    const uint8_t *maskptr = mask.dptr_;
+    const index_t count = grad.shape_[0] * grad.shape_[1];
+    const float pk_1 = 1.0f / this->pkeep_;
+    const int nthr = engine::OpenMP::Get()->GetRecommendedOMPThreadCount();
+#pragma omp parallel for num_threads(nthr) schedule(static, 8)
+    for (index_t i = 0; i < count; ++i) {
+      auto mask_idx = i >> 3;  // div 8
+      uint8_t mask_offset = i & 7;  // mod 8
+      bool mask_val = maskptr[mask_idx] & (1U << mask_offset);
+      ingradptr[i] = outgradptr[i] * mask_val * pk_1;
 Review comment:
    After more thought, I think we don't actually need blocking in the backward
pass, as there is no write to maskptr and hence no cache eviction or race
condition.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
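

The bitwise arithmetic in the hunk above (byte index `i >> 3`, bit offset
`i & 7`, rescale by `1/pkeep`) can be checked against a dense boolean mask with
a small NumPy sketch; this is illustrative Python, not the operator code:

```python
import numpy as np

rng = np.random.default_rng(0)
pkeep = 0.75
outgrad = rng.standard_normal(20).astype(np.float32)
keep = rng.random(20) < pkeep          # dense boolean dropout mask

# pack 8 mask bits per byte, LSB first, as the PR stores the mask
mask = np.zeros((keep.size + 7) // 8, dtype=np.uint8)
for i, k in enumerate(keep):
    if k:
        mask[i >> 3] |= 1 << (i & 7)

# backward: ingrad[i] = outgrad[i] * mask_bit(i) * (1 / pkeep)
pk_1 = 1.0 / pkeep
ingrad = np.array(
    [outgrad[i] * bool(mask[i >> 3] & (1 << (i & 7))) * pk_1
     for i in range(outgrad.size)],
    dtype=np.float32,
)

assert np.allclose(ingrad, outgrad * keep * pk_1)
```

The packed mask costs one extra load and a couple of bit operations per
element, which matches the slowdown discussed in this thread.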


[GitHub] [incubator-mxnet] apeforest edited a comment on issue #16735: Use single-bit for mask in dropout operator

2020-02-14 Thread GitBox
apeforest edited a comment on issue #16735: Use single-bit for mask in dropout 
operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#issuecomment-586519348
 
 
   > Does the slow down come from more computation in the new algorithm or the 
sub-optimal implementation?
   @PatricZhao The slowdown comes from extra computation in the new algorithm
when Dropout uses the MKL implementation. MKL already computes the mask but
stores each mask element as an int32. The new algorithm simply repackages this
int32-based mask into a bit-based mask and therefore introduces extra runtime.
Ideally, MKL dropout would be enhanced to store the mask using bits, but that
requires modifying the VSL APIs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest commented on issue #16735: Use single-bit for mask in dropout operator

2020-02-14 Thread GitBox
apeforest commented on issue #16735: Use single-bit for mask in dropout operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#issuecomment-586519348
 
 
   > Does the slow down come from more computation in the new algorithm or the 
sub-optimal implementation?
   @PatricZhao The slowdown comes from extra computation in the new algorithm
when dropout uses MKL. MKL already does the masking but stores each mask
element as an int32. The new algorithm simply repackages this int32-based mask
into a bit-based mask and therefore introduces extra runtime. Ideally, MKL
dropout would be enhanced to store the mask using bits, but that requires
modifying the VSL APIs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
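

One way to sketch the repackaging step described above — turning an int32 0/1
mask (as the MKL VSL path produces) into a bit mask — is NumPy's packbits; the
layout here (LSB-first) is an assumption for illustration, not the MKL code:

```python
import numpy as np

int_mask = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1], dtype=np.int32)

# repack: one bit per element, LSB-first within each byte
bit_mask = np.packbits(int_mask.astype(np.uint8), bitorder="little")

# round-trip to confirm no information is lost
unpacked = np.unpackbits(bit_mask, count=int_mask.size, bitorder="little")
assert np.array_equal(unpacked.astype(np.int32), int_mask)

# 10 int32 values (40 bytes) shrink to 2 bytes of mask
assert bit_mask.nbytes == 2
```

The 32x memory saving is the point of the PR; the repacking pass is the extra
runtime cost discussed above.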


[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #16735: Use single-bit for mask in dropout operator

2020-02-14 Thread GitBox
apeforest commented on a change in pull request #16735: Use single-bit for mask 
in dropout operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#discussion_r379688573
 
 

 ##
 File path: src/operator/nn/dropout-inl.h
 ##
 @@ -152,15 +181,20 @@ class DropoutOp {
                     const std::vector<TBlob> &in_grad) {
     Stream<xpu> *s = ctx.get_stream<xpu>();
     Tensor<xpu, 2, DType> grad = out_grad[dropout::kOut].FlatTo2D<xpu, DType>(s);
-    Tensor<xpu, 2, DType> mask = out_data[dropout::kMask].FlatTo2D<xpu, DType>(s);
+    Tensor<xpu, 1, uint8_t> mask = out_data[dropout::kMask].FlatTo1D<xpu, uint8_t>(s);
     Tensor<xpu, 2, DType> gdata = in_grad[dropout::kData].FlatTo2D<xpu, DType>(s);
     DType *ingradptr = gdata.dptr_;
     const DType *outgradptr = grad.dptr_;
-    const DType *maskptr = mask.dptr_;
-    const int count = mask.shape_[0] * mask.shape_[1];
-#pragma omp parallel for num_threads(engine::OpenMP::Get()->GetRecommendedOMPThreadCount())
-    for (int i = 0; i < count; ++i) {
-      ingradptr[i] = outgradptr[i] * maskptr[i];
+    const uint8_t *maskptr = mask.dptr_;
+    const index_t count = grad.shape_[0] * grad.shape_[1];
+    const float pk_1 = 1.0f / this->pkeep_;
+    const int nthr = engine::OpenMP::Get()->GetRecommendedOMPThreadCount();
+#pragma omp parallel for num_threads(nthr) schedule(static, 8)
+    for (index_t i = 0; i < count; ++i) {
+      auto mask_idx = i >> 3;  // div 8
+      uint8_t mask_offset = i & 7;  // mod 8
+      bool mask_val = maskptr[mask_idx] & (1U << mask_offset);
+      ingradptr[i] = outgradptr[i] * mask_val * pk_1;
 Review comment:
    After more thought, I think we don't actually need blocking in the backward
pass, as there is no write to maskptr and hence no race condition either.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add op insert

2020-02-14 Thread GitBox
haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add 
op insert
URL: https://github.com/apache/incubator-mxnet/pull/16865#discussion_r379682542
 
 

 ##
 File path: src/operator/numpy/np_insert_op_tensor-inl.h
 ##
 @@ -0,0 +1,229 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_insert_op-inl.h
+ * \brief Function definition of insert operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_INSERT_OP_TENSOR_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_INSERT_OP_TENSOR_INL_H_
+
+#include 
+#include 
+#include 
+#include "../../common/utils.h"
+#include "../tensor/sort_op.h"
+#include "../tensor/init_op.h"
+#include "../operator_common.h"
+#include "../mxnet_op.h"
+#include "./np_delete_op-inl.h"
+#include "./np_insert_op-inl.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * Only support tensor indices (the type of param 'obj' is tensor).
+ */
+template<typename xpu>
+void NumpyInsertTensorCompute(const nnvm::NodeAttrs& attrs,
+                              const OpContext& ctx,
+                              const std::vector<TBlob>& inputs,
+                              const std::vector<OpReqType>& req,
+                              const std::vector<TBlob>& outputs) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+
+  const NumpyInsertParam& param = nnvm::get<NumpyInsertParam>(attrs.parsed);
+  int input_count = param.val.has_value() ? 1 : 2;
+  int insize = input_count + 1;
+  CHECK_EQ(inputs.size(), insize);
+  CHECK_EQ(outputs.size(), 1);
+  CHECK_EQ(req.size(), 1);
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+  const int arr_pos = 0;
+  const int val_pos = param.val.has_value() ? 0 : 1;
+  const int obj_pos = val_pos + 1;
+  const int out_pos = 0;
+  int ndim = inputs[arr_pos].shape_.ndim();
+  int axis = param.axis.has_value() ? param.axis.value() : 0;
+  TBlob arr;
+  TBlob values = param.val.has_value() ?
+ TBlob(nullptr, mxnet::TShape(0, 1), xpu::kDevMask, 
outputs[out_pos].type_flag_) :
+ inputs[val_pos];
+  if (!param.axis.has_value()) {
+arr = inputs[arr_pos].reshape(Shape1(inputs[arr_pos].shape_.Size()));
+ndim = 1;
+  } else if (ndim == 0) {
+if (param.val.has_value()) {
+  CHECK_EQ(inputs[val_pos].shape_.ndim(), 0)
+<< "'arr' is a 0-d array, 'values' can not assign to it. "
+<< "ValueError: assignment to 0-d array.";
+  mxnet_op::copy(s, outputs[out_pos], inputs[val_pos]);
+} else {
+  MSHADOW_TYPE_SWITCH(outputs[out_pos].type_flag_, DType, {
+Fill(s, outputs[out_pos], req[0], static_cast<DType>(param.val.value()));
+  });
+}
+return;
+  } else {
+arr = inputs[arr_pos];
+CHECK(axis >= -1 * arr.shape_.ndim() && axis < arr.shape_.ndim())
+  << "Axis should be in the range of [-r, r-1] where r is the rank of 
input tensor";
+axis += (axis < 0) ? arr.shape_.ndim() : 0;
+  }
+
+  int N = arr.shape_[axis];
+  size_t indices_len = inputs[obj_pos].shape_.Size();  // indices amount
+
+  // get and check indices from tensor
+  int numnew = 0;  // numnew = output.shape[axis] - arr.shape[axis]
+  mxnet::TShape val_newshape(arr.shape_.ndim(), -1);
+  // modify values's ndim to arr's ndim, for broadcast easily later
+  // e.g. value shape: (2,) arr shape: (3, 2) => value shape: (1, 2)
+  for (int i = values.shape_.ndim() - 1, j = arr.shape_.ndim() - 1;
+i >= 0 || j >= 0;
+--i, --j) {
+if (i >= 0 && j >= 0) {
+  val_newshape[j] = values.shape_[i];
+} else if (i >= 0) {
+  CHECK_EQ(values.shape_[i], 1) << "index exceed limits.";
+} else {
+  val_newshape[j] = 1;
+}
+  }
+  values.shape_.assign(val_newshape.begin(), val_newshape.end());
+
+  // get numnew
+  mxnet::TShape old_valshape(values.shape_);
+  if (inputs[obj_pos].shape_.ndim() == 0) {  // scalar
+// values = moveaxis(values, 0, axis), will change values's shape
+numnew = values.shape_[0];
+mxnet::TShape axes(values.ndim(), -1);  // moved axes
+mxnet::TShape val_newshape(values.ndim(), -1);
+int axes_id = 0;
+for (int i = 1; i <= axis; ++i) {
+  axes[axes_id++] = i;
+}
+axes[axes_id++] = 0;
+for (int i = axis + 1; i < values.ndim(); ++i) {
+  
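
The axes-permutation loop quoted above implements values = moveaxis(values, 0,
axis) via a transpose: axes 1..axis come first, then axis 0, then the rest. The
equivalence can be sketched in NumPy (illustrative, not the operator code):

```python
import numpy as np

values = np.arange(24).reshape(2, 3, 4)

for axis in range(values.ndim):
    # build the permutation the same way the loop does:
    # axes 1..axis first, then axis 0, then the remaining axes
    perm = list(range(1, axis + 1)) + [0] + list(range(axis + 1, values.ndim))
    assert np.array_equal(values.transpose(perm),
                          np.moveaxis(values, 0, axis))
```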

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add op insert

2020-02-14 Thread GitBox
haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add 
op insert
URL: https://github.com/apache/incubator-mxnet/pull/16865#discussion_r379682473
 
 

 ##
 File path: src/operator/numpy/np_insert_op_tensor-inl.h
 ##
 @@ -0,0 +1,229 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_insert_op-inl.h
+ * \brief Function definition of insert operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_INSERT_OP_TENSOR_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_INSERT_OP_TENSOR_INL_H_
+
+#include 
+#include 
+#include 
+#include "../../common/utils.h"
+#include "../tensor/sort_op.h"
+#include "../tensor/init_op.h"
+#include "../operator_common.h"
+#include "../mxnet_op.h"
+#include "./np_delete_op-inl.h"
+#include "./np_insert_op-inl.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * Only support tensor indices (the type of param 'obj' is tensor).
+ */
+template<typename xpu>
+void NumpyInsertTensorCompute(const nnvm::NodeAttrs& attrs,
+                              const OpContext& ctx,
+                              const std::vector<TBlob>& inputs,
+                              const std::vector<OpReqType>& req,
+                              const std::vector<TBlob>& outputs) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+
+  const NumpyInsertParam& param = nnvm::get<NumpyInsertParam>(attrs.parsed);
+  int input_count = param.val.has_value() ? 1 : 2;
+  int insize = input_count + 1;
+  CHECK_EQ(inputs.size(), insize);
+  CHECK_EQ(outputs.size(), 1);
+  CHECK_EQ(req.size(), 1);
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+  const int arr_pos = 0;
+  const int val_pos = param.val.has_value() ? 0 : 1;
+  const int obj_pos = val_pos + 1;
+  const int out_pos = 0;
+  int ndim = inputs[arr_pos].shape_.ndim();
+  int axis = param.axis.has_value() ? param.axis.value() : 0;
+  TBlob arr;
+  TBlob values = param.val.has_value() ?
+ TBlob(nullptr, mxnet::TShape(0, 1), xpu::kDevMask, 
outputs[out_pos].type_flag_) :
+ inputs[val_pos];
+  if (!param.axis.has_value()) {
+arr = inputs[arr_pos].reshape(Shape1(inputs[arr_pos].shape_.Size()));
+ndim = 1;
+  } else if (ndim == 0) {
+if (param.val.has_value()) {
+  CHECK_EQ(inputs[val_pos].shape_.ndim(), 0)
+<< "'arr' is a 0-d array, 'values' can not assign to it. "
+<< "ValueError: assignment to 0-d array.";
+  mxnet_op::copy(s, outputs[out_pos], inputs[val_pos]);
+} else {
+  MSHADOW_TYPE_SWITCH(outputs[out_pos].type_flag_, DType, {
+Fill(s, outputs[out_pos], req[0], static_cast<DType>(param.val.value()));
+  });
+}
+return;
+  } else {
+arr = inputs[arr_pos];
+CHECK(axis >= -1 * arr.shape_.ndim() && axis < arr.shape_.ndim())
+  << "Axis should be in the range of [-r, r-1] where r is the rank of 
input tensor";
+axis += (axis < 0) ? arr.shape_.ndim() : 0;
+  }
+
+  int N = arr.shape_[axis];
+  size_t indices_len = inputs[obj_pos].shape_.Size();  // indices amount
+
+  // get and check indices from tensor
+  int numnew = 0;  // numnew = output.shape[axis] - arr.shape[axis]
+  mxnet::TShape val_newshape(arr.shape_.ndim(), -1);
+  // modify values's ndim to arr's ndim, for broadcast easily later
+  // e.g. value shape: (2,) arr shape: (3, 2) => value shape: (1, 2)
+  for (int i = values.shape_.ndim() - 1, j = arr.shape_.ndim() - 1;
+i >= 0 || j >= 0;
+--i, --j) {
+if (i >= 0 && j >= 0) {
+  val_newshape[j] = values.shape_[i];
+} else if (i >= 0) {
+  CHECK_EQ(values.shape_[i], 1) << "index exceed limits.";
+} else {
+  val_newshape[j] = 1;
+}
+  }
+  values.shape_.assign(val_newshape.begin(), val_newshape.end());
+
+  // get numnew
+  mxnet::TShape old_valshape(values.shape_);
+  if (inputs[obj_pos].shape_.ndim() == 0) {  // scalar
+// values = moveaxis(values, 0, axis), will change values's shape
+numnew = values.shape_[0];
+mxnet::TShape axes(values.ndim(), -1);  // moved axes
+mxnet::TShape val_newshape(values.ndim(), -1);
+int axes_id = 0;
+for (int i = 1; i <= axis; ++i) {
+  axes[axes_id++] = i;
+}
+axes[axes_id++] = 0;
+for (int i = axis + 1; i < values.ndim(); ++i) {
+  
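
The val_newshape loop quoted above right-aligns the value shape against arr's
rank, left-padding with 1s so that ordinary broadcasting applies afterwards. A
simplified NumPy sketch (the operator additionally tolerates extra leading 1s
in a longer value shape; `align_shape` is a hypothetical helper name):

```python
import numpy as np

def align_shape(val_shape, arr_ndim):
    """Right-align val_shape to arr_ndim dims, left-padding with 1s,
    mirroring the val_newshape loop in the operator."""
    assert len(val_shape) <= arr_ndim, "index exceed limits."
    return (1,) * (arr_ndim - len(val_shape)) + tuple(val_shape)

# e.g. value shape (2,) against arr shape (3, 2) -> (1, 2)
assert align_shape((2,), 2) == (1, 2)

# the padded shape then broadcasts against the array as expected
vals = np.ones((2,)).reshape(align_shape((2,), 2))
assert np.broadcast_shapes(vals.shape, (3, 2)) == (3, 2)
```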

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add op insert

2020-02-14 Thread GitBox
haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add 
op insert
URL: https://github.com/apache/incubator-mxnet/pull/16865#discussion_r379682391
 
 

 ##
 File path: src/operator/numpy/np_insert_op_slice-inl.h
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_insert_op_slice-inl.h
+ * \brief Function definition of insert operators (insert by slice index)
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_INSERT_OP_SLICE_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_INSERT_OP_SLICE_INL_H_
+
+#include 
+#include 
+#include "./np_insert_op-inl.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * Only support slice index (the type of param 'obj' is slice).
+ */
+template<typename xpu>
+void NumpyInsertSliceCompute(const nnvm::NodeAttrs& attrs,
+ const OpContext& ctx,
+ const std::vector<TBlob>& inputs,
+ const std::vector<OpReqType>& req,
+ const std::vector<TBlob>& outputs) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+
+  const NumpyInsertParam& param = nnvm::get<NumpyInsertParam>(attrs.parsed);
+  CHECK_EQ(inputs.size(), (param.val.has_value() ? 1 : 2));
+  CHECK_EQ(outputs.size(), 1);
+  CHECK_EQ(req.size(), 1);
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+  const int arr_pos = 0;
+  const int val_pos = param.val.has_value() ? 0 : 1;
+  const int out_pos = 0;
+  int ndim = inputs[arr_pos].shape_.ndim();
+  int axis = param.axis.has_value() ? param.axis.value() : 0;
+  TBlob arr;
+  TBlob values = param.val.has_value() ?
+ TBlob(nullptr, mxnet::TShape(0, 1), xpu::kDevMask, outputs[out_pos].type_flag_) :
+ inputs[val_pos];
+  if (!param.axis.has_value()) {
+arr = inputs[arr_pos].reshape(Shape1(inputs[arr_pos].shape_.Size()));
+ndim = 1;
+  } else if (ndim == 0) {
+if (param.val.has_value()) {
+  CHECK_EQ(inputs[val_pos].shape_.ndim(), 0)
+<< "'arr' is a 0-d array, 'values' can not assign to it. "
+<< "ValueError: assignment to 0-d array.";
+  mxnet_op::copy(s, outputs[out_pos], inputs[val_pos]);
+} else {
+  MSHADOW_TYPE_SWITCH(outputs[out_pos].type_flag_, DType, {
+Fill(s, outputs[out_pos], req[0], static_cast<DType>(param.val.value()));
+  });
+}
+return;
+  } else {
+arr = inputs[arr_pos];
+CHECK(axis >= -1 * arr.shape_.ndim() && axis < arr.shape_.ndim())
+  << "Axis should be in the range of [-r, r-1] where r is the rank of input tensor";
+axis += (axis < 0) ? arr.shape_.ndim() : 0;
+  }
+
+  int N = arr.shape_[axis];
+  size_t indices_len = 0;  // indices amount
+  int start = 0, stop = 0, step = 0;  // arguments from 'obj' when it's 'slice'
+
+  // get and check indices from slice or sequence of ints
+  SliceIndices(param.start, param.stop, param.step,
+N, &start, &stop, &step, &indices_len);
+
+  int numnew = 0;  // numnew = output.shape[axis] - arr.shape[axis]
+  int index = 0;  // save modified index, because index may be negative integer
+  mxnet::TShape val_newshape(arr.shape_.ndim(), -1);
+  // modify values's ndim to arr's ndim, for broadcast easily later
+  // e.g. value shape: (2,) arr shape: (3, 2) => value shape: (1, 2)
+  for (int i = values.shape_.ndim() - 1, j = arr.shape_.ndim() - 1;
+i >= 0 || j >= 0;
+--i, --j) {
+if (i >= 0 && j >= 0) {
+  val_newshape[j] = values.shape_[i];
+} else if (i >= 0) {
+  CHECK_EQ(values.shape_[i], 1) << "index exceed limits.";
+} else {
+  val_newshape[j] = 1;
+}
+  }
+  values.shape_.assign(val_newshape.begin(), val_newshape.end());
+
+  // get numnew
+  mxnet::TShape old_valshape(values.shape_);
+  if (indices_len == 1) {  // tensor with only one element
+numnew = values.shape_[axis];
+index = start;
+CHECK(index >= -1 * N && index <= N)
+  << "Index should be in the range of [-r, r-1] where r is the dim size in 'axis'";
+if (index < 0) {
+  index += N;
+}
+  } else {
+numnew = static_cast(indices_len);
+  }
+
+  const mxnet::TShape& outshape = outputs[out_pos].shape_;
+  int dtype = outputs[out_pos].type_flag_;
+  int vtype = param.val.has_value() ?
+  
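The hunk quoted above right-aligns `values`'s shape to `arr`'s ndim before broadcasting (e.g. value shape `(2,)` against arr shape `(3, 2)` becomes `(1, 2)`). The same padding rule can be sketched in plain NumPy; `align_ndim` is an illustrative helper, not part of the PR:

```python
import numpy as np

def align_ndim(val_shape, arr_ndim):
    # Pad leading 1s so val_shape has arr_ndim dims, mirroring the
    # val_newshape loop above (extra leading dims on values must be 1).
    pad = arr_ndim - len(val_shape)
    if pad < 0:
        if any(d != 1 for d in val_shape[:len(val_shape) - arr_ndim]):
            raise ValueError("index exceed limits.")
        return tuple(val_shape[-arr_ndim:])
    return (1,) * pad + tuple(val_shape)

print(align_ndim((2,), 2))  # (1, 2)
# the padded shape now broadcasts cleanly against arr's shape
print(np.broadcast_shapes(align_ndim((2,), 2), (3, 2)))  # (3, 2)
```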

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add op insert

2020-02-14 Thread GitBox
haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add 
op insert
URL: https://github.com/apache/incubator-mxnet/pull/16865#discussion_r379682345
 
 

 ##
 File path: src/operator/numpy/np_insert_op_slice-inl.h
 ##
 @@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_insert_op_slice-inl.h
+ * \brief Function definition of insert operators (insert by slice index)
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_INSERT_OP_SLICE_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_INSERT_OP_SLICE_INL_H_
+
+#include 
+#include 
+#include "./np_insert_op-inl.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * Only support slice index (the type of param 'obj' is slice).
+ */
+template<typename xpu>
+void NumpyInsertSliceCompute(const nnvm::NodeAttrs& attrs,
+ const OpContext& ctx,
+ const std::vector<TBlob>& inputs,
+ const std::vector<OpReqType>& req,
+ const std::vector<TBlob>& outputs) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+
+  const NumpyInsertParam& param = nnvm::get<NumpyInsertParam>(attrs.parsed);
+  CHECK_EQ(inputs.size(), (param.val.has_value() ? 1 : 2));
+  CHECK_EQ(outputs.size(), 1);
+  CHECK_EQ(req.size(), 1);
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+  const int arr_pos = 0;
+  const int val_pos = param.val.has_value() ? 0 : 1;
+  const int out_pos = 0;
+  int ndim = inputs[arr_pos].shape_.ndim();
+  int axis = param.axis.has_value() ? param.axis.value() : 0;
+  TBlob arr;
+  TBlob values = param.val.has_value() ?
+ TBlob(nullptr, mxnet::TShape(0, 1), xpu::kDevMask, outputs[out_pos].type_flag_) :
+ inputs[val_pos];
+  if (!param.axis.has_value()) {
+arr = inputs[arr_pos].reshape(Shape1(inputs[arr_pos].shape_.Size()));
+ndim = 1;
+  } else if (ndim == 0) {
+if (param.val.has_value()) {
+  CHECK_EQ(inputs[val_pos].shape_.ndim(), 0)
+<< "'arr' is a 0-d array, 'values' can not assign to it. "
+<< "ValueError: assignment to 0-d array.";
+  mxnet_op::copy(s, outputs[out_pos], inputs[val_pos]);
+} else {
+  MSHADOW_TYPE_SWITCH(outputs[out_pos].type_flag_, DType, {
+Fill(s, outputs[out_pos], req[0], static_cast<DType>(param.val.value()));
+  });
+}
+return;
+  } else {
+arr = inputs[arr_pos];
+CHECK(axis >= -1 * arr.shape_.ndim() && axis < arr.shape_.ndim())
+  << "Axis should be in the range of [-r, r-1] where r is the rank of input tensor";
+axis += (axis < 0) ? arr.shape_.ndim() : 0;
+  }
+
+  int N = arr.shape_[axis];
+  size_t indices_len = 0;  // indices amount
+  int start = 0, stop = 0, step = 0;  // arguments from 'obj' when it's 'slice'
+
+  // get and check indices from slice or sequence of ints
+  SliceIndices(param.start, param.stop, param.step,
+N, &start, &stop, &step, &indices_len);
+
+  int numnew = 0;  // numnew = output.shape[axis] - arr.shape[axis]
+  int index = 0;  // save modified index, because index may be negative integer
+  mxnet::TShape val_newshape(arr.shape_.ndim(), -1);
+  // modify values's ndim to arr's ndim, for broadcast easily later
+  // e.g. value shape: (2,) arr shape: (3, 2) => value shape: (1, 2)
+  for (int i = values.shape_.ndim() - 1, j = arr.shape_.ndim() - 1;
+i >= 0 || j >= 0;
+--i, --j) {
+if (i >= 0 && j >= 0) {
+  val_newshape[j] = values.shape_[i];
+} else if (i >= 0) {
+  CHECK_EQ(values.shape_[i], 1) << "index exceed limits.";
+} else {
+  val_newshape[j] = 1;
+}
+  }
+  values.shape_.assign(val_newshape.begin(), val_newshape.end());
+
+  // get numnew
+  mxnet::TShape old_valshape(values.shape_);
+  if (indices_len == 1) {  // tensor with only one element
+numnew = values.shape_[axis];
+index = start;
+CHECK(index >= -1 * N && index <= N)
+  << "Index should be in the range of [-r, r-1] where r is the dim size in 'axis'";
+if (index < 0) {
+  index += N;
+}
+  } else {
+numnew = static_cast(indices_len);
+  }
+
+  const mxnet::TShape& outshape = outputs[out_pos].shape_;
+  int dtype = outputs[out_pos].type_flag_;
+  int vtype = param.val.has_value() ?
+  
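For reference, NumPy's own `insert` exhibits the two branches handled in the quoted hunk: a slice resolving to a single index takes `numnew` from `values.shape[axis]`, while multiple indices give `numnew == indices_len`. A plain-NumPy sketch (not the MXNet operator itself):

```python
import numpy as np

a = np.arange(6).reshape(3, 2)

# single index: the whole `values` block lands before row 1
one = np.insert(a, 1, [9, 9], axis=0)        # numnew = values.shape[axis] = 1

# slice resolving to several indices: one new row per index
many = np.insert(a, slice(0, 3), 7, axis=0)  # numnew = indices_len = 3

print(one.shape, many.shape)  # (4, 2) (6, 2)
```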

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add op insert

2020-02-14 Thread GitBox
haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add 
op insert
URL: https://github.com/apache/incubator-mxnet/pull/16865#discussion_r379682486
 
 

 ##
 File path: src/operator/numpy/np_insert_op_tensor-inl.h
 ##
 @@ -0,0 +1,229 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_insert_op-inl.h
+ * \brief Function definition of insert operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_INSERT_OP_TENSOR_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_INSERT_OP_TENSOR_INL_H_
+
+#include 
+#include 
+#include 
+#include "../../common/utils.h"
+#include "../tensor/sort_op.h"
+#include "../tensor/init_op.h"
+#include "../operator_common.h"
+#include "../mxnet_op.h"
+#include "./np_delete_op-inl.h"
+#include "./np_insert_op-inl.h"
+
+namespace mxnet {
+namespace op {
+
+/*
+ * Only support tensor indices (the type of param 'obj' is tensor).
+ */
+template<typename xpu>
+void NumpyInsertTensorCompute(const nnvm::NodeAttrs& attrs,
+  const OpContext& ctx,
+  const std::vector<TBlob>& inputs,
+  const std::vector<OpReqType>& req,
+  const std::vector<TBlob>& outputs) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+
+  const NumpyInsertParam& param = nnvm::get<NumpyInsertParam>(attrs.parsed);
+  int input_count = param.val.has_value() ? 1 : 2;
+  int insize = input_count + 1;
+  CHECK_EQ(inputs.size(), insize);
+  CHECK_EQ(outputs.size(), 1);
+  CHECK_EQ(req.size(), 1);
+  mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
+  const int arr_pos = 0;
+  const int val_pos = param.val.has_value() ? 0 : 1;
+  const int obj_pos = val_pos + 1;
+  const int out_pos = 0;
+  int ndim = inputs[arr_pos].shape_.ndim();
+  int axis = param.axis.has_value() ? param.axis.value() : 0;
+  TBlob arr;
+  TBlob values = param.val.has_value() ?
+ TBlob(nullptr, mxnet::TShape(0, 1), xpu::kDevMask, outputs[out_pos].type_flag_) :
+ inputs[val_pos];
+  if (!param.axis.has_value()) {
+arr = inputs[arr_pos].reshape(Shape1(inputs[arr_pos].shape_.Size()));
+ndim = 1;
+  } else if (ndim == 0) {
+if (param.val.has_value()) {
+  CHECK_EQ(inputs[val_pos].shape_.ndim(), 0)
+<< "'arr' is a 0-d array, 'values' can not assign to it. "
+<< "ValueError: assignment to 0-d array.";
+  mxnet_op::copy(s, outputs[out_pos], inputs[val_pos]);
+} else {
+  MSHADOW_TYPE_SWITCH(outputs[out_pos].type_flag_, DType, {
+Fill(s, outputs[out_pos], req[0], static_cast<DType>(param.val.value()));
+  });
+}
+return;
+  } else {
+arr = inputs[arr_pos];
+CHECK(axis >= -1 * arr.shape_.ndim() && axis < arr.shape_.ndim())
+  << "Axis should be in the range of [-r, r-1] where r is the rank of input tensor";
+axis += (axis < 0) ? arr.shape_.ndim() : 0;
+  }
+
+  int N = arr.shape_[axis];
+  size_t indices_len = inputs[obj_pos].shape_.Size();  // indices amount
+
+  // get and check indices from tensor
+  int numnew = 0;  // numnew = output.shape[axis] - arr.shape[axis]
+  mxnet::TShape val_newshape(arr.shape_.ndim(), -1);
+  // modify values's ndim to arr's ndim, for broadcast easily later
+  // e.g. value shape: (2,) arr shape: (3, 2) => value shape: (1, 2)
+  for (int i = values.shape_.ndim() - 1, j = arr.shape_.ndim() - 1;
+i >= 0 || j >= 0;
+--i, --j) {
+if (i >= 0 && j >= 0) {
+  val_newshape[j] = values.shape_[i];
+} else if (i >= 0) {
+  CHECK_EQ(values.shape_[i], 1) << "index exceed limits.";
+} else {
+  val_newshape[j] = 1;
+}
+  }
+  values.shape_.assign(val_newshape.begin(), val_newshape.end());
+
+  // get numnew
+  mxnet::TShape old_valshape(values.shape_);
+  if (inputs[obj_pos].shape_.ndim() == 0) {  // scalar
+// values = moveaxis(values, 0, axis), will change values's shape
+numnew = values.shape_[0];
+mxnet::TShape axes(values.ndim(), -1);  // moved axes
+mxnet::TShape val_newshape(values.ndim(), -1);
+int axes_id = 0;
+for (int i = 1; i <= axis; ++i) {
+  axes[axes_id++] = i;
+}
+axes[axes_id++] = 0;
+for (int i = axis + 1; i < values.ndim(); ++i) {
+  
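The quoted path handles `obj` given as a tensor, including the 0-d case treated like a scalar index (the `moveaxis` branch). NumPy's reference behavior, which this operator is meant to mirror:

```python
import numpy as np

a = np.array([1, 2, 3])

# tensor of indices: indices_len == obj.size new elements
out = np.insert(a, [0, 2], [10, 20])
print(out)    # [10  1  2 20  3]

# 0-d obj acts like a scalar index: the whole sequence goes before index 1
out0 = np.insert(a, np.array(1), [7, 8])
print(out0)   # [1 7 8 2 3]
```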

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add op insert

2020-02-14 Thread GitBox
haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add 
op insert
URL: https://github.com/apache/incubator-mxnet/pull/16865#discussion_r379682089
 
 

 ##
 File path: src/operator/numpy/np_insert_op-inl.h
 ##
 @@ -0,0 +1,371 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_insert_op-inl.h
+ * \brief Function definition of insert operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_INSERT_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_INSERT_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include "../../common/utils.h"
+#include "../tensor/sort_op.h"
+#include "../tensor/init_op.h"
+#include "../operator_common.h"
+#include "../mxnet_op.h"
+#include "./np_delete_op-inl.h"
+namespace mxnet {
+namespace op {
+
+struct NumpyInsertParam : public dmlc::Parameter<NumpyInsertParam> {
+  dmlc::optional<double> val;
+  dmlc::optional<int> start;
+  dmlc::optional<int> stop;
+  dmlc::optional<int> step;
+  dmlc::optional<int> int_ind;
+  dmlc::optional<int> axis;
+  DMLC_DECLARE_PARAMETER(NumpyInsertParam) {
+DMLC_DECLARE_FIELD(val)
+.set_default(dmlc::optional<double>())
+.describe("A scalar to be inserted into 'array'");
+DMLC_DECLARE_FIELD(start)
+.set_default(dmlc::optional<int>())
+.describe("If 'obj' is slice, 'start' is one of its arguments.");
+DMLC_DECLARE_FIELD(stop)
+.set_default(dmlc::optional<int>())
+.describe("If 'obj' is slice, 'stop' is one of its arguments.");
+DMLC_DECLARE_FIELD(step)
+.set_default(dmlc::optional<int>())
+.describe("If 'obj' is slice, 'step' is one of its arguments.");
+DMLC_DECLARE_FIELD(int_ind)
+.set_default(dmlc::optional<int>())
+.describe("If 'obj' is int, 'int_ind' is the index before which"
+  "'values' is inserted");
+DMLC_DECLARE_FIELD(axis)
+.set_default(dmlc::optional<int>())
+.describe("Axis along which to insert 'values'.");
+  }
+};
+
+/*!
+ * \brief insert when obj is a 'scalar' or a 'slice' with only one element.
+ * \tparam ndim - 'in_arr', 'in_val' and 'out_data' all have the same ndim before calling this.
+ * \param out_data - output: insert 'value' into 'arr' according to 'index'.
+ * \param in_arr - input: 'arr', the original array.
+ * \param index - input (only for the first Map): the only element in 'obj', indicating the insert position.
+ * \param in_obj - input (only for the second Map): indicates the insert position; its ndim may equal 0.
+ * \param in_val - input: 'value', inserted into 'arr' according to 'index'.
+ * \param N - (only for the first Map) arr.shape_[axis]
+ * \param numnew - extra dim size in 'out_data' compared with 'arr' along 'axis'.
+ * \param axis - insert 'value' into 'arr' along 'axis'.
+ * \param moveaxis - if 'obj' is a scalar, moveaxis is true;
+ *  if 'obj' is a slice with one element, moveaxis is false.
+ * \note Difference between the two Maps:
+ *  the first one uses a scalar index;
+ *  the second one uses a sequence of indices which has only one index.
+ */
+template<int ndim>
+struct InsertSingleIndexKernel {
+  template<typename DType, typename VType>
+  MSHADOW_XINLINE static void Map(int i, DType* out_data,
+  const VType* in_val, const DType* in_arr,
+  const mshadow::Shape<ndim> outshape,
+  const mshadow::Shape<ndim> valshape,
+  const int index, const int numnew,
+  const mshadow::Shape<ndim> val_stride,
+  const mshadow::Shape<ndim> old_val_stride,
+  const mshadow::Shape<ndim> arr_stride,
+  const mshadow::Shape<ndim> out_stride,
+  const int axis, bool moveaxis, const int req) {
+// i is the global flattened index in the output
+// out_idx: i -> position in output's shape
+mshadow::Shape<ndim> out_idx = mxnet_op::unravel(i, outshape);
+int64_t dest_idx;
+if (out_idx[axis] >= index && out_idx[axis] < index + numnew) {  // from 'value'
+  int idx_val = out_idx[axis] - index;
+  // val_idx: i -> position in values's shape
+  mshadow::Shape<ndim> val_idx(out_idx);
+  val_idx[axis] = idx_val;
+  for (int j = ndim - 1; j >= 0; --j) {
+  
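Since `NumpyInsertParam` (quoted above) flattens the Python `obj` into `start`/`stop`/`step`/`int_ind` fields, the front end presumably dispatches on `obj`'s type before invoking the operator. A hypothetical sketch of that dispatch; only the field names come from the PR:

```python
def decompose_obj(obj):
    # Map a Python `obj` onto NumpyInsertParam-style fields (illustrative only).
    params = {"start": None, "stop": None, "step": None, "int_ind": None}
    if isinstance(obj, slice):
        params["start"], params["stop"], params["step"] = obj.start, obj.stop, obj.step
    elif isinstance(obj, int):
        params["int_ind"] = obj
    else:
        # tensor indices would be passed to the operator as an extra input instead
        raise TypeError("expected an int or a slice")
    return params

print(decompose_obj(slice(1, 5, 2)))
print(decompose_obj(3))
```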

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add op insert

2020-02-14 Thread GitBox
haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add 
op insert
URL: https://github.com/apache/incubator-mxnet/pull/16865#discussion_r379682109
 
 

 ##
 File path: src/operator/numpy/np_insert_op-inl.h
 ##
 @@ -0,0 +1,371 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_insert_op-inl.h
+ * \brief Function definition of insert operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_INSERT_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_INSERT_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include "../../common/utils.h"
+#include "../tensor/sort_op.h"
+#include "../tensor/init_op.h"
+#include "../operator_common.h"
+#include "../mxnet_op.h"
+#include "./np_delete_op-inl.h"
+namespace mxnet {
+namespace op {
+
+struct NumpyInsertParam : public dmlc::Parameter<NumpyInsertParam> {
+  dmlc::optional<double> val;
+  dmlc::optional<int> start;
+  dmlc::optional<int> stop;
+  dmlc::optional<int> step;
+  dmlc::optional<int> int_ind;
+  dmlc::optional<int> axis;
+  DMLC_DECLARE_PARAMETER(NumpyInsertParam) {
+DMLC_DECLARE_FIELD(val)
+.set_default(dmlc::optional<double>())
+.describe("A scalar to be inserted into 'array'");
+DMLC_DECLARE_FIELD(start)
+.set_default(dmlc::optional<int>())
+.describe("If 'obj' is slice, 'start' is one of its arguments.");
+DMLC_DECLARE_FIELD(stop)
+.set_default(dmlc::optional<int>())
+.describe("If 'obj' is slice, 'stop' is one of its arguments.");
+DMLC_DECLARE_FIELD(step)
+.set_default(dmlc::optional<int>())
+.describe("If 'obj' is slice, 'step' is one of its arguments.");
+DMLC_DECLARE_FIELD(int_ind)
+.set_default(dmlc::optional<int>())
+.describe("If 'obj' is int, 'int_ind' is the index before which"
+  "'values' is inserted");
+DMLC_DECLARE_FIELD(axis)
+.set_default(dmlc::optional<int>())
+.describe("Axis along which to insert 'values'.");
+  }
+};
+
+/*!
+ * \brief insert when obj is a 'scalar' or a 'slice' with only one element.
+ * \tparam ndim - 'in_arr', 'in_val' and 'out_data' all have the same ndim before calling this.
+ * \param out_data - output: insert 'value' into 'arr' according to 'index'.
+ * \param in_arr - input: 'arr', the original array.
+ * \param index - input (only for the first Map): the only element in 'obj', indicating the insert position.
+ * \param in_obj - input (only for the second Map): indicates the insert position; its ndim may equal 0.
+ * \param in_val - input: 'value', inserted into 'arr' according to 'index'.
+ * \param N - (only for the first Map) arr.shape_[axis]
+ * \param numnew - extra dim size in 'out_data' compared with 'arr' along 'axis'.
+ * \param axis - insert 'value' into 'arr' along 'axis'.
+ * \param moveaxis - if 'obj' is a scalar, moveaxis is true;
+ *  if 'obj' is a slice with one element, moveaxis is false.
+ * \note Difference between the two Maps:
+ *  the first one uses a scalar index;
+ *  the second one uses a sequence of indices which has only one index.
+ */
+template<int ndim>
+struct InsertSingleIndexKernel {
+  template<typename DType, typename VType>
+  MSHADOW_XINLINE static void Map(int i, DType* out_data,
+  const VType* in_val, const DType* in_arr,
+  const mshadow::Shape<ndim> outshape,
+  const mshadow::Shape<ndim> valshape,
+  const int index, const int numnew,
+  const mshadow::Shape<ndim> val_stride,
+  const mshadow::Shape<ndim> old_val_stride,
+  const mshadow::Shape<ndim> arr_stride,
+  const mshadow::Shape<ndim> out_stride,
+  const int axis, bool moveaxis, const int req) {
+// i is the global flattened index in the output
+// out_idx: i -> position in output's shape
+mshadow::Shape<ndim> out_idx = mxnet_op::unravel(i, outshape);
+int64_t dest_idx;
+if (out_idx[axis] >= index && out_idx[axis] < index + numnew) {  // from 'value'
+  int idx_val = out_idx[axis] - index;
+  // val_idx: i -> position in values's shape
+  mshadow::Shape<ndim> val_idx(out_idx);
+  val_idx[axis] = idx_val;
+  for (int j = ndim - 1; j >= 0; --j) {
+  

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add op insert

2020-02-14 Thread GitBox
haojin2 commented on a change in pull request #16865: [numpy][Do Not Review]add 
op insert
URL: https://github.com/apache/incubator-mxnet/pull/16865#discussion_r379681197
 
 

 ##
 File path: src/operator/numpy/np_insert_op_tensor.cc
 ##
 @@ -0,0 +1,152 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_insert_op_tensor.cc
+ * \brief CPU Implementation of numpy insert operations
+ */
+
+#include 
+#include "./np_insert_op-inl.h"
+#include "./np_insert_op_tensor-inl.h"
+
+namespace mxnet {
+namespace op {
+
+DMLC_REGISTER_PARAMETER(NumpyInsertParam);
+
+bool NumpyInsertTensorType(const nnvm::NodeAttrs& attrs,
+  std::vector<int> *in_type,
 
 Review comment:
   alignment.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on issue #17590: [WIP][numpy] Implement Weibull backward

2020-02-14 Thread GitBox
haojin2 commented on issue #17590: [WIP][numpy] Implement Weibull backward
URL: https://github.com/apache/incubator-mxnet/pull/17590#issuecomment-586505702
 
 
   @D-Roberts Thanks for your contribution! Please let @xidulu and/or me know 
when you think this PR is ready for a review.




[incubator-mxnet] branch master updated (8d887ca -> 39b158f)

2020-02-14 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 8d887ca  add isposinf isneginf isfinite (#17563)
 add 39b158f  quantile_scalar (#17572)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py  | 10 +++--
 python/mxnet/numpy/multiarray.py   |  4 ++--
 python/mxnet/symbol/numpy/_symbol.py   | 10 +++--
 src/operator/numpy/np_percentile_op-inl.h  | 25 ++
 src/operator/numpy/np_percentile_op.cc | 15 -
 .../python/unittest/test_numpy_interoperability.py | 13 +++
 tests/python/unittest/test_numpy_op.py | 22 +--
 7 files changed, 73 insertions(+), 26 deletions(-)



[GitHub] [incubator-mxnet] haojin2 merged pull request #17572: [Numpy] Quantile/Percentile-scalar support

2020-02-14 Thread GitBox
haojin2 merged pull request #17572: [Numpy] Quantile/Percentile-scalar support 
URL: https://github.com/apache/incubator-mxnet/pull/17572
 
 
   




[incubator-mxnet] branch master updated (1c07771 -> 8d887ca)

2020-02-14 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 1c07771  fix CD and remove leftover from #15990 (#17551)
 add 8d887ca  add isposinf isneginf isfinite (#17563)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py  | 152 -
 python/mxnet/numpy/multiarray.py   | 152 -
 python/mxnet/numpy_dispatch_protocol.py|   3 +
 python/mxnet/symbol/numpy/_symbol.py   | 104 +-
 src/operator/mshadow_op.h  |  24 
 src/operator/numpy/np_elemwise_unary_op_basic.cc   |  35 +++--
 src/operator/numpy/np_elemwise_unary_op_basic.cu   |   9 ++
 src/operator/operator_tune.cc  |   3 +
 .../python/unittest/test_numpy_interoperability.py |  39 +++---
 tests/python/unittest/test_numpy_op.py |  49 ++-
 10 files changed, 529 insertions(+), 41 deletions(-)



[GitHub] [incubator-mxnet] haojin2 merged pull request #17563: [Numpy] add isposinf isneginf isfinite

2020-02-14 Thread GitBox
haojin2 merged pull request #17563: [Numpy] add isposinf isneginf isfinite
URL: https://github.com/apache/incubator-mxnet/pull/17563
 
 
   




[GitHub] [incubator-mxnet] leezu closed pull request #17598: Add OS X Staticbuild CI based on Github Actions

2020-02-14 Thread GitBox
leezu closed pull request #17598: Add OS X Staticbuild CI based on Github 
Actions
URL: https://github.com/apache/incubator-mxnet/pull/17598
 
 
   




[GitHub] [incubator-mxnet] leezu commented on issue #17598: Add OS X Staticbuild CI based on Github Actions

2020-02-14 Thread GitBox
leezu commented on issue #17598: Add OS X Staticbuild CI based on Github Actions
URL: https://github.com/apache/incubator-mxnet/pull/17598#issuecomment-586497521
 
 
   Closing to avoid trigger Jenkins on every push. You can track status at 
above link




[GitHub] [incubator-mxnet] leezu commented on issue #17598: Add OS X Staticbuild CI based on Github Actions

2020-02-14 Thread GitBox
leezu commented on issue #17598: Add OS X Staticbuild CI based on Github Actions
URL: https://github.com/apache/incubator-mxnet/pull/17598#issuecomment-586496421
 
 
   WIP: See build status at https://github.com/leezu/mxnet/actions




[GitHub] [incubator-mxnet] leezu opened a new pull request #17598: Add OS X Staticbuild CI based on Github Actions

2020-02-14 Thread GitBox
leezu opened a new pull request #17598: Add OS X Staticbuild CI based on Github 
Actions
URL: https://github.com/apache/incubator-mxnet/pull/17598
 
 
   ## Description ##
   Add OS X Staticbuild CI based on Github Actions




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #17597: move stacktrance print to console rather than system.err

2020-02-14 Thread GitBox
ChaiBapchya commented on issue #17597: move stacktrance print to console rather 
than system.err
URL: https://github.com/apache/incubator-mxnet/pull/17597#issuecomment-586492046
 
 
   @anirudh2290 @access2rohit @marcoabreu 
   Since you reviewed the previous PR, thought you could take a look at this 
one.




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #17597: move stacktrance print to console rather than system.err

2020-02-14 Thread GitBox
ChaiBapchya commented on issue #17597: move stacktrance print to console rather 
than system.err
URL: https://github.com/apache/incubator-mxnet/pull/17597#issuecomment-586491889
 
 
   @mxnet-label-bot add [pr-awaiting-review]




[GitHub] [incubator-mxnet] ChaiBapchya opened a new pull request #17597: move stacktrance print to console rather than system.err

2020-02-14 Thread GitBox
ChaiBapchya opened a new pull request #17597: move stacktrance print to console 
rather than system.err
URL: https://github.com/apache/incubator-mxnet/pull/17597
 
 
   ## Description ##
   Right now with #17465 if the build fails, we print the stacktrace using 
`error.printStackTrace()` function
   
   a. It is an insecure function that needs to be approved by administrators in 
Jenkins -> Manage Jenkins -> In-process Script approval.
   Without the approval, it gives a sandbox error:
   ```
   `org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: 
Scripts not permitted to use method java.lang.Throwable getStackTrace`
   ```
   
   b. However, the `error.printStackTrace()` function sends the report to 
System.err, so it doesn't get printed on the console.
   To print to the console, the output stream has to be passed as a parameter.
   Verified it works by adding a failure case in the try (so as to enter the 
"catch")
   ```
   try {
   update_github_commit_status('PENDING', 'Job has been enqueued')
   System.out.println(12 / 0)
   }
   ```
   It enters catch and then prints the stack trace for divide by zero exception 
as expected.
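   The stderr-vs-console distinction can be sketched with a Python analogy 
(illustrative only; the Jenkinsfile itself is Groovy): the default traceback 
printer writes to `sys.stderr`, and passing a stream parameter redirects it, 
just like `e.printStackTrace(System.out)`:

   ```python
   import io
   import traceback

   # traceback.print_exc() writes to sys.stderr by default (the analogue of
   # Java's e.printStackTrace()); passing file= redirects the trace to any
   # stream (the analogue of e.printStackTrace(System.out)).
   captured = io.StringIO()
   try:
       12 / 0
   except ZeroDivisionError:
       traceback.print_exc(file=captured)

   assert "Traceback" in captured.getvalue()
   assert "ZeroDivisionError" in captured.getvalue()
   ```

   Capturing into a `StringIO` here stands in for Jenkins' console log: the 
point is only that the destination stream is selectable.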
   
   Refer : 
   
http://jenkins.mxnet-ci-dev.amazon-ml.com/job/mxnet-validation/job/apache-unix-cpu/job/remove_print_stacktrace/lastBuild/console
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - [ ] Code is well-documented: 
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ## Comments ##
   Read more about Jenkins Script Security : 
https://plugins.jenkins.io/script-security/
   
   How to handle printStackTrace - 
https://stackoverflow.com/questions/12095378/difference-between-e-printstacktrace-and-system-out-printlne




[GitHub] [incubator-mxnet] access2rohit edited a comment on issue #17595: MKLDNN incompatibility with large tensor (dim >= 2^32) data

2020-02-14 Thread GitBox
access2rohit edited a comment on issue #17595: MKLDNN incompatibility with 
large tensor (dim >= 2^32) data
URL: 
https://github.com/apache/incubator-mxnet/issues/17595#issuecomment-586451995
 
 
   @connorgoggins thanks for bringing this up 
   
   @PatricZhao @TaoLv looks like blas=MKL/openblas/none (native mxnet) with 
MKLDNN=OFF supports gemm on int64, but with MKLDNN it does not. If this is not 
a known issue with MKLDNN, can you please take a look?




[GitHub] [incubator-mxnet] leezu commented on issue #16899: Enable MKL-DNN in pip packages

2020-02-14 Thread GitBox
leezu commented on issue #16899: Enable MKL-DNN in pip packages
URL: https://github.com/apache/incubator-mxnet/pull/16899#issuecomment-586477909
 
 
   @TaoLv Sheng just merged fixes for the CD pipeline. Let's wait 24 hours to 
verify CD works on master branch. Then we can merge this PR.




[incubator-mxnet] branch master updated (0f35489 -> 1c07771)

2020-02-14 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 0f35489  use py3 for kvstore tests (#17593)
 add 1c07771  fix CD and remove leftover from #15990 (#17551)

No new revisions were added by this update.

Summary of changes:
 cd/mxnet_lib/mxnet_lib_pipeline.groovy | 38 --
 cd/python/pypi/Jenkins_pipeline.groovy |  1 -
 cd/python/pypi/pypi_package.sh |  8 +++
 ci/docker/runtime_functions.sh | 10 +
 4 files changed, 5 insertions(+), 52 deletions(-)



[GitHub] [incubator-mxnet] szha merged pull request #17551: [CD] fix CD and remove leftover from #15990

2020-02-14 Thread GitBox
szha merged pull request #17551: [CD] fix CD and remove leftover from #15990
URL: https://github.com/apache/incubator-mxnet/pull/17551
 
 
   




[GitHub] [incubator-mxnet] leezu opened a new pull request #17596: Fix transformer.cu interleaved matmul for cuda arch < 5

2020-02-14 Thread GitBox
leezu opened a new pull request #17596: Fix transformer.cu interleaved matmul 
for cuda arch < 5
URL: https://github.com/apache/incubator-mxnet/pull/17596
 
 
   ## Description ##
   `cublasGemmBatchedEx` is only supported on GPUs with compute capability 
equal to or greater than 5.0.
   
   Fixes a bug in https://github.com/apache/incubator-mxnet/pull/16408
   
   ### Changes ###
   - [X] Fix transformer.cu interleaved matmul for cuda arch < 5
   
   ## Comments ##
   CC @Caenorst 




[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #1137: cublas error when using like mx.nd.***

2020-02-14 Thread GitBox
eric-haibin-lin commented on issue #1137: cublas error when using like mx.nd.***
URL: 
https://github.com/apache/incubator-mxnet/issues/1137#issuecomment-586459620
 
 
   or just put a `mx.nd.waitall()`




[GitHub] [incubator-mxnet] access2rohit edited a comment on issue #17595: MKLDNN incompatibility with large tensor (dim >= 2^32) data

2020-02-14 Thread GitBox
access2rohit edited a comment on issue #17595: MKLDNN incompatibility with 
large tensor (dim >= 2^32) data
URL: 
https://github.com/apache/incubator-mxnet/issues/17595#issuecomment-586451995
 
 
   @connorgoggins thanks for bringing this up 
   
   @PatricZhao @TaoLv looks like blas=MKL/openblas/none (native mxnet) with 
MKLDNN=OFF supports gemm on int64, but with MKLDNN it does not. If this is not 
a known issue with MKLDNN, can you please take a look?




[GitHub] [incubator-mxnet] access2rohit commented on issue #17595: MKLDNN incompatibility with large tensor (dim >= 2^32) data

2020-02-14 Thread GitBox
access2rohit commented on issue #17595: MKLDNN incompatibility with large 
tensor (dim >= 2^32) data
URL: 
https://github.com/apache/incubator-mxnet/issues/17595#issuecomment-586451995
 
 
   @PatricZhao @TaoLv looks like blas=MKL/openblas/none (native mxnet) with 
MKLDNN=OFF supports gemm on int64, but with MKLDNN it does not. If this is not 
a known issue with MKLDNN, can you please take a look?




[GitHub] [incubator-mxnet] apeforest commented on issue #17595: MKLDNN incompatibility with large tensor (dim >= 2^32) data

2020-02-14 Thread GitBox
apeforest commented on issue #17595: MKLDNN incompatibility with large tensor 
(dim >= 2^32) data
URL: 
https://github.com/apache/incubator-mxnet/issues/17595#issuecomment-586443098
 
 
   @PatricZhao Could your team please take a look at this? Thanks!




[GitHub] [incubator-mxnet] access2rohit commented on issue #17595: MKLDNN incompatibility with large tensor (dim >= 2^32) data

2020-02-14 Thread GitBox
access2rohit commented on issue #17595: MKLDNN incompatibility with large 
tensor (dim >= 2^32) data
URL: 
https://github.com/apache/incubator-mxnet/issues/17595#issuecomment-586439922
 
 
   @mxnet-label-bot add [MKLDNN]




[GitHub] [incubator-mxnet] connorgoggins opened a new issue #17595: MKLDNN incompatibility with large tensor (dim >= 2^32) data

2020-02-14 Thread GitBox
connorgoggins opened a new issue #17595: MKLDNN incompatibility with large 
tensor (dim >= 2^32) data
URL: https://github.com/apache/incubator-mxnet/issues/17595
 
 
   ## Description
   While testing individual ops for large tensor (dimension >= 2^32) input 
functionality, I found an error in MKLDNN. In 
`3rdparty/mkldnn/src/cpu/gemm/gemm.cpp`, line 43 declares a function that 
takes several parameters, including `M` (the variable that receives the 
input's first data dimension). `M` is declared as an `int`, so when the value 
2^32 is passed in as the first dimension of the input data, the `M > 0` 
assertion on the next line fails (a 32-bit `int` truncates 2^32 to 0).
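   As a minimal, hypothetical illustration of the failure mode (plain Python, 
not the MKL-DNN code): a value of 2^32 survives in a 64-bit integer but 
truncates to 0 in a 32-bit `int`, which is exactly what trips the `M > 0` 
assertion.

   ```python
   import ctypes

   dim = 2 ** 32  # first dimension of the large-tensor input

   # ctypes integer types do no overflow checking: storing the dimension in a
   # 32-bit signed int keeps only the low 32 bits, so 2^32 becomes 0.
   as_int32 = ctypes.c_int32(dim).value
   as_int64 = ctypes.c_int64(dim).value

   assert as_int32 == 0         # the dimension MKL-DNN's gemm sees
   assert as_int64 == 2 ** 32   # the dimension a 64-bit index type would keep
   ```
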
   
   Note that this error occurs whenever MKLDNN is enabled - whether the BLAS 
engine is MKL, OpenBLAS, or none. When MKLDNN is disabled, the error does not 
occur.
   
   
   ## Failing Environments
   ### BLAS = None
   ✖ CUDA, ✖ CUDNN, ✖ NCCL, ✖ CUDA_RTC, ✖ TENSORRT, ✔ CPU_SSE, ✔ CPU_SSE2, ✔ 
CPU_SSE3, ✔ CPU_SSE4_1, ✔ CPU_SSE4_2, ✖ CPU_SSE4A, ✔ CPU_AVX, ✖ CPU_AVX2, ✔ 
OPENMP, ✖ SSE, ✔ F16C, ✖ JEMALLOC, ✖ BLAS_OPEN, ✖ BLAS_ATLAS, ✖ BLAS_MKL, ✖ 
BLAS_APPLE, ✔ LAPACK, ✔ MKLDNN, ✖ OPENCV, ✖ CAFFE, ✖ PROFILER, ✖ DIST_KVSTORE, 
✖ CXX14, ✔ INT64_TENSOR_SIZE, ✖ SIGNAL_HANDLER, ✔ DEBUG, ✖ TVM_OP
   
   ### BLAS = MKL
   ✖ CUDA, ✖ CUDNN, ✖ NCCL, ✖ CUDA_RTC, ✖ TENSORRT, ✔ CPU_SSE, ✔ CPU_SSE2, ✔ 
CPU_SSE3, ✔ CPU_SSE4_1, ✔ CPU_SSE4_2, ✖ CPU_SSE4A, ✔ CPU_AVX, ✖ CPU_AVX2, ✔ 
OPENMP, ✖ SSE, ✔ F16C, ✖ JEMALLOC, ✖ BLAS_OPEN, ✖ BLAS_ATLAS, ✔ BLAS_MKL, ✖ 
BLAS_APPLE, ✔ LAPACK, ✔ MKLDNN, ✖ OPENCV, ✖ CAFFE, ✖ PROFILER, ✖ DIST_KVSTORE, 
✖ CXX14, ✔ INT64_TENSOR_SIZE, ✖ SIGNAL_HANDLER, ✔ DEBUG, ✖ TVM_OP
   
   ### BLAS = OpenBLAS
   ✖ CUDA, ✖ CUDNN, ✖ NCCL, ✖ CUDA_RTC, ✖ TENSORRT, ✔ CPU_SSE, ✔ CPU_SSE2, ✔ 
CPU_SSE3, ✔ CPU_SSE4_1, ✔ CPU_SSE4_2, ✖ CPU_SSE4A, ✔ CPU_AVX, ✖ CPU_AVX2, ✔ 
OPENMP, ✖ SSE, ✔ F16C, ✖ JEMALLOC, ✔ BLAS_OPEN, ✖ BLAS_ATLAS, ✖ BLAS_MKL, ✖ 
BLAS_APPLE, ✔ LAPACK, ✔ MKLDNN, ✖ OPENCV, ✖ CAFFE, ✖ PROFILER, ✖ DIST_KVSTORE, 
✖ CXX14, ✔ INT64_TENSOR_SIZE, ✖ SIGNAL_HANDLER, ✔ DEBUG, ✖ TVM_OP
   
   ## Successful Environments
   ### BLAS = None
   ✖ CUDA, ✖ CUDNN, ✖ NCCL, ✖ CUDA_RTC, ✖ TENSORRT, ✔ CPU_SSE, ✔ CPU_SSE2, ✔ 
CPU_SSE3, ✔ CPU_SSE4_1, ✔ CPU_SSE4_2, ✖ CPU_SSE4A, ✔ CPU_AVX, ✖ CPU_AVX2, ✔ 
OPENMP, ✖ SSE, ✔ F16C, ✖ JEMALLOC, ✖ BLAS_OPEN, ✖ BLAS_ATLAS, ✖ BLAS_MKL, ✖ 
BLAS_APPLE, ✔ LAPACK, ✖ MKLDNN, ✖ OPENCV, ✖ CAFFE, ✖ PROFILER, ✖ DIST_KVSTORE, 
✖ CXX14, ✔ INT64_TENSOR_SIZE, ✖ SIGNAL_HANDLER, ✔ DEBUG, ✖ TVM_OP
   
   ### BLAS = MKL
   ✖ CUDA, ✖ CUDNN, ✖ NCCL, ✖ CUDA_RTC, ✖ TENSORRT, ✔ CPU_SSE, ✔ CPU_SSE2, ✔ 
CPU_SSE3, ✔ CPU_SSE4_1, ✔ CPU_SSE4_2, ✖ CPU_SSE4A, ✔ CPU_AVX, ✖ CPU_AVX2, ✔ 
OPENMP, ✖ SSE, ✔ F16C, ✖ JEMALLOC, ✖ BLAS_OPEN, ✖ BLAS_ATLAS, ✔ BLAS_MKL, ✖ 
BLAS_APPLE, ✔ LAPACK, ✖ MKLDNN, ✖ OPENCV, ✖ CAFFE, ✖ PROFILER, ✖ DIST_KVSTORE, 
✖ CXX14, ✔ INT64_TENSOR_SIZE, ✖ SIGNAL_HANDLER, ✔ DEBUG, ✖ TVM_OP
   
   ### BLAS = OpenBLAS
   ✖ CUDA, ✖ CUDNN, ✖ NCCL, ✖ CUDA_RTC, ✖ TENSORRT, ✔ CPU_SSE, ✔ CPU_SSE2, ✔ 
CPU_SSE3, ✔ CPU_SSE4_1, ✔ CPU_SSE4_2, ✖ CPU_SSE4A, ✔ CPU_AVX, ✖ CPU_AVX2, ✔ 
OPENMP, ✖ SSE, ✔ F16C, ✖ JEMALLOC, ✔ BLAS_OPEN, ✖ BLAS_ATLAS, ✖ BLAS_MKL, ✖ 
BLAS_APPLE, ✔ LAPACK, ✖ MKLDNN, ✖ OPENCV, ✖ CAFFE, ✖ PROFILER, ✖ DIST_KVSTORE, 
✖ CXX14, ✔ INT64_TENSOR_SIZE, ✖ SIGNAL_HANDLER, ✔ DEBUG, ✖ TVM_OP
   
   ### Steps to reproduce
   Run `mx.nd.FullyConnected(data=mx.nd.random_normal(shape=(2**32,1)), 
weight=mx.nd.random_normal(shape=(1,1)), bias=mx.nd.random_normal(shape=(1,)), 
flatten=False, num_hidden=1)`
   
   ### Error Message
   ```
   python3: /home/ubuntu/mxnet/3rdparty/mkldnn/src/cpu/gemm/gemm.cpp:43: void dnnl::impl::cpu::msan_unpoison_matrix(void*, int, int, int, size_t): Assertion `C != nullptr && M > 0 && N > 0 && LDC >= M && typesize' failed.
   Aborted (core dumped)
   ```
   




[GitHub] [incubator-mxnet] zhreshold commented on a change in pull request #17566: [WIP] Discussion on merging BMXNet 2 contributions

2020-02-14 Thread GitBox
zhreshold commented on a change in pull request #17566: [WIP] Discussion on 
merging BMXNet 2 contributions
URL: https://github.com/apache/incubator-mxnet/pull/17566#discussion_r379594847
 
 

 ##
 File path: .gitlab-ci.yml
 ##
 @@ -0,0 +1,71 @@
+stages:
 
 Review comment:
    this file should not go into master




[GitHub] [incubator-mxnet] zhreshold commented on a change in pull request #17566: [WIP] Discussion on merging BMXNet 2 contributions

2020-02-14 Thread GitBox
zhreshold commented on a change in pull request #17566: [WIP] Discussion on 
merging BMXNet 2 contributions
URL: https://github.com/apache/incubator-mxnet/pull/17566#discussion_r379595075
 
 

 ##
 File path: .gitmodules
 ##
 @@ -29,3 +29,6 @@
 [submodule "3rdparty/nvidia_cub"]
path = 3rdparty/nvidia_cub
url = https://github.com/NVlabs/cub.git
+[submodule "example/bmxnet-examples"]
 
 Review comment:
    These examples should be copied into the examples directory instead of 
being added as a submodule




[GitHub] [incubator-mxnet] zhreshold commented on a change in pull request #17566: [WIP] Discussion on merging BMXNet 2 contributions

2020-02-14 Thread GitBox
zhreshold commented on a change in pull request #17566: [WIP] Discussion on 
merging BMXNet 2 contributions
URL: https://github.com/apache/incubator-mxnet/pull/17566#discussion_r379595336
 
 

 ##
 File path: README.md
 ##
 @@ -1,116 +1,136 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-  https://mxnet.incubator.apache.org/;>https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/mxnet_logo_2.png;>
-
-
-Apache MXNet (incubating) for Deep Learning
-=
-| Master | Docs  | License  |
-| :-:|:-:|::|
-| [![Build 
Status](http://jenkins.mxnet-ci.amazon-ml.com/job/incubator-mxnet/job/master/badge/icon)](http://jenkins.mxnet-ci.amazon-ml.com/job/incubator-mxnet/job/master/)
  | [![Documentation 
Status](http://jenkins.mxnet-ci.amazon-ml.com/job/restricted-website-build/badge/icon)](https://mxnet.incubator.apache.org/)
 | [![GitHub license](http://dmlc.github.io/img/apache2.svg)](./LICENSE) |
-
-![banner](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/image/banner.png)
-
-Apache MXNet (incubating) is a deep learning framework designed for both 
*efficiency* and *flexibility*.
-It allows you to ***mix*** [symbolic and imperative 
programming](https://mxnet.incubator.apache.org/architecture/index.html#deep-learning-system-design-concepts)
-to ***maximize*** efficiency and productivity.
-At its core, MXNet contains a dynamic dependency scheduler that automatically 
parallelizes both symbolic and imperative operations on the fly.
-A graph optimization layer on top of that makes symbolic execution fast and 
memory efficient.
-MXNet is portable and lightweight, scaling effectively to multiple GPUs and 
multiple machines.
-
-MXNet is more than a deep learning project. It is a collection of
-[blue prints and 
guidelines](https://mxnet.incubator.apache.org/architecture/index.html#deep-learning-system-design-concepts)
 for building
-deep learning systems, and interesting insights of DL systems for hackers.
-
-Ask Questions
--
-* Please use [discuss.mxnet.io](https://discuss.mxnet.io/) for asking 
questions.
-* Please use [mxnet/issues](https://github.com/apache/incubator-mxnet/issues) 
for reporting bugs.
-* [Frequent Asked Questions](https://mxnet.incubator.apache.org/faq/faq.html)
-
-How to Contribute
--
-* [Contribute to 
MXNet](https://mxnet.incubator.apache.org/community/contribute.html)
-
-What's New
---
-* [Version 1.4.1 
Release](https://github.com/apache/incubator-mxnet/releases/tag/1.4.1) - MXNet 
1.4.1 Patch Release.
-* [Version 1.4.0 
Release](https://github.com/apache/incubator-mxnet/releases/tag/1.4.0) - MXNet 
1.4.0 Release.
-* [Version 1.3.1 
Release](https://github.com/apache/incubator-mxnet/releases/tag/1.3.1) - MXNet 
1.3.1 Patch Release.
-* [Version 1.3.0 
Release](https://github.com/apache/incubator-mxnet/releases/tag/1.3.0) - MXNet 
1.3.0 Release.
-* [Version 1.2.0 
Release](https://github.com/apache/incubator-mxnet/releases/tag/1.2.0) - MXNet 
1.2.0 Release.
-* [Version 1.1.0 
Release](https://github.com/apache/incubator-mxnet/releases/tag/1.1.0) - MXNet 
1.1.0 Release.
-* [Version 1.0.0 
Release](https://github.com/apache/incubator-mxnet/releases/tag/1.0.0) - MXNet 
1.0.0 Release.
-* [Version 0.12.1 
Release](https://github.com/apache/incubator-mxnet/releases/tag/0.12.1) - MXNet 
0.12.1 Patch Release.
-* [Version 0.12.0 
Release](https://github.com/apache/incubator-mxnet/releases/tag/0.12.0) - MXNet 
0.12.0 Release.
-* [Version 0.11.0 
Release](https://github.com/apache/incubator-mxnet/releases/tag/0.11.0) - MXNet 
0.11.0 Release.
-* [Apache Incubator](http://incubator.apache.org/projects/mxnet.html) - We are 
now an Apache Incubator project.
-* [Version 0.10.0 Release](https://github.com/dmlc/mxnet/releases/tag/v0.10.0) 
- MXNet 0.10.0 Release.
-* [Version 0.9.3 Release](./docs/architecture/release_note_0_9.md) - First 0.9 
official release.
-* [Version 0.9.1 Release (NNVM 
refactor)](./docs/architecture/release_note_0_9.md) - NNVM branch is merged 
into master now. An official release will be made soon.
-* [Version 0.8.0 Release](https://github.com/dmlc/mxnet/releases/tag/v0.8.0)
-* [Updated Image Classification with new Pre-trained 
Models](./example/image-classification)
-* [Notebooks How to Use MXNet](https://github.com/d2l-ai/d2l-en)
-* [MKLDNN for Faster CPU Performance](./docs/tutorials/mkldnn/MKLDNN_README.md)
-* [MXNet Memory Monger, Training Deeper Nets with Sublinear Memory 
Cost](https://github.com/dmlc/mxnet-memonger)
-* [Tutorial for NVidia GTC 2016](https://github.com/dmlc/mxnet-gtc-tutorial)
-* [Embedding Torch layers and functions in 
MXNet](https://mxnet.incubator.apache.org/faq/torch.html)
-* [MXNet.js: Javascript Package for Deep Learning in Browser (without server)
-](https://github.com/dmlc/mxnet.js/)
-* [Design Note: Design Efficient Deep Learning Data Loading 
Module](https://mxnet.incubator.apache.org/architecture/note_data_loading.html)
-* [MXNet on 

[GitHub] [incubator-mxnet] aaronmarkham opened a new pull request #17594: upgrade sphinx and autodocsumm

2020-02-14 Thread GitBox
aaronmarkham opened a new pull request #17594: upgrade sphinx and autodocsumm
URL: https://github.com/apache/incubator-mxnet/pull/17594
 
 
   ## Description ##
   Rollback of #17561
   
   A new version of autodocsumm came out. It supports the most recent version 
of Sphinx.
   
   This reverts the emergency patch to rollback these packages, but also pins 
the versions for both packages.
   
   Tested the build and it passes.
   




[GitHub] [incubator-mxnet] mjdenkowski commented on issue #17559: [MXNET-1446] Quantization: intgemm matrix multiply wrappers

2020-02-14 Thread GitBox
mjdenkowski commented on issue #17559: [MXNET-1446] Quantization: intgemm 
matrix multiply wrappers 
URL: https://github.com/apache/incubator-mxnet/pull/17559#issuecomment-586425627
 
 
   Hi @TaoLv, thanks for taking a look at this!  We understand your comments 
about MXNet already having multiple GEMM libraries.  We're particularly 
interested in Kenneth's (@kpuatamazon) intgemm because it provides 
functionality we weren't able to find in the existing libraries.  As he 
mentioned in the PR, we're seeing a roughly 3X inference speedup on an already 
significantly optimized transformer implementation.  Like many other Gluon 
users, our inference model is not currently expressible as a static graph.
   
   We would like to work out the best way to make this functionality available 
to the larger community.  Are there particular concerns we can address about 
adding intgemm as a third party library?  Is there another path to using 
intgemm with MXNet that you recommend?




[GitHub] [incubator-mxnet] apeforest edited a comment on issue #16735: Use single-bit for mask in dropout operator

2020-02-14 Thread GitBox
apeforest edited a comment on issue #16735: Use single-bit for mask in dropout 
operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#issuecomment-586062120
 
 
   @roywei Using the test script in 
https://github.com/apache/incubator-mxnet/pull/13896
   
   Build | runtime (before) | runtime (after)
   ---|---|---
   CPU w/ MKL | 262 ms ± 1.2 ms | 337 ms ± 12.5 ms
   CPU w/o MKL | 359 ms ± 241 µs | 426 ms ± 222 µs
   GPU w/ cuDNN | 25.9 ms ± 202 µs | 25.8 ms ± 183 µs
   GPU w/o cuDNN | 1.34 s ± 5.83 ms | 1.35 s ± 13.1 ms
   
   Using python timer to measure CPU performance with MKL:
   
   This PR:
   
   ```
   [{'Dropout': [{'avg_time_Dropout': 1.1714265774935484, 'p50_time_Dropout': 
1.1715246364474297, 'p90_time_Dropout': 1.190436165779829, 'p99_time_Dropout': 
1.2154309218749404, 'inputs': {'data': (1024, 1024)}}]}]
   ```
   
   Master:
   ```
   [{'Dropout': [{'avg_time_Dropout': 0.6394564639776945, 'p50_time_Dropout': 
0.6996351294219494, 'p90_time_Dropout': 1.045508868992329, 'p99_time_Dropout': 
1.59036863129586, 'inputs': {'data': (1024, 1024)}}]}]
   ```
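   For intuition about the trade-off behind these numbers, here is a hedged 
NumPy sketch (an illustration only, not the actual MXNet dropout kernels): 
packing the mask to one bit per element cuts mask memory 8x, but adds extra 
bit-wise pack/unpack work on the path that applies the mask.

   ```python
   import numpy as np

   rng = np.random.default_rng(0)
   x = rng.standard_normal((1024, 1024)).astype(np.float32)
   p = 0.5  # dropout probability

   keep = rng.random(x.shape) >= p

   # Byte mask: one uint8 per element (the pre-PR layout).
   byte_mask = keep.astype(np.uint8)
   y_byte = x * byte_mask / (1 - p)

   # Bit mask: one bit per element (the single-bit layout) -- 8x smaller
   # mask buffer, but extra bit-wise ops to pack and later unpack it.
   bit_mask = np.packbits(keep)
   unpacked = np.unpackbits(bit_mask)[: x.size].reshape(x.shape)
   y_bit = x * unpacked / (1 - p)

   assert bit_mask.nbytes * 8 == byte_mask.nbytes  # 8x memory saving
   assert np.array_equal(y_byte, y_bit)            # identical dropout output
   ```

   Both layouts produce the same output; the difference is purely the 
memory/compute trade-off discussed above.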




[GitHub] [incubator-mxnet] leezu edited a comment on issue #16408: Add MXNet Ops for fast multihead attention

2020-02-14 Thread GitBox
leezu edited a comment on issue #16408: Add MXNet Ops for fast multihead 
attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-586418793
 
 
   There are two separate problems. `cublasGemmStridedBatchedEx` is buggy, with 
fixes in CUDA 10.2. But `cublasGemmStridedBatchedEx` is not supported in the 
first place on the 3.5 arch, and that's a bug in MXNet.




[GitHub] [incubator-mxnet] leezu commented on issue #16408: Add MXNet Ops for fast multihead attention

2020-02-14 Thread GitBox
leezu commented on issue #16408: Add MXNet Ops for fast multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-586418793
 
 
   No. My comment was wrong




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-02-14 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 978b5ea  Bump the publish timestamp.
978b5ea is described below

commit 978b5eadea8b579e71d663b18498e5f29ad8adf3
Author: mxnet-ci 
AuthorDate: Fri Feb 14 18:42:36 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..f224188
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Feb 14 18:42:36 UTC 2020



[GitHub] [incubator-mxnet] larroy commented on issue #16408: Add MXNet Ops for fast multihead attention

2020-02-14 Thread GitBox
larroy commented on issue #16408: Add MXNet Ops for fast multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-586415593
 
 
   So it's related to CUDA and not the arch?




[incubator-mxnet] branch master updated (8438d98 -> 0f35489)

2020-02-14 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 8438d98  Implement all miscellaneous ops (#17511)
 add 0f35489  use py3 for kvstore tests (#17593)

No new revisions were added by this update.

Summary of changes:
 ci/docker/runtime_functions.sh | 26 +-
 1 file changed, 13 insertions(+), 13 deletions(-)



[GitHub] [incubator-mxnet] szha merged pull request #17593: [CI] use py3 for kvstore tests

2020-02-14 Thread GitBox
szha merged pull request #17593: [CI] use py3 for kvstore tests
URL: https://github.com/apache/incubator-mxnet/pull/17593
 
 
   




[GitHub] [incubator-mxnet] szha commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

2020-02-14 Thread GitBox
szha commented on a change in pull request #17265: Add bfloat16 floating-point 
format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r379517986
 
 

 ##
 File path: 3rdparty/mshadow/mshadow/bfloat.h
 ##
 @@ -0,0 +1,167 @@
+/*!
 
 Review comment:
   yes




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

2020-02-14 Thread GitBox
TaoLv commented on a change in pull request #17265: Add bfloat16 floating-point 
format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r379484516
 
 

 ##
 File path: 3rdparty/mshadow/mshadow/bfloat.h
 ##
 @@ -0,0 +1,167 @@
+/*!
 
 Review comment:
   @szha Do we need Apache license header for this new file?




[GitHub] [incubator-mxnet] TaoLv commented on issue #17265: Add bfloat16 floating-point format support based on AMP

2020-02-14 Thread GitBox
TaoLv commented on issue #17265: Add bfloat16 floating-point format support 
based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#issuecomment-586329605
 
 
   Hi @leezu, @larroy, this PR passes the ARM builds but always hits the test 
timeout. 
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fedge/detail/PR-17265/29/pipeline
   
   We don't have any environment to reproduce this. Could you please take a 
look or suggest how to debug it further?




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #17318: Enable MKL-DNN FullyConnected backward

2020-02-14 Thread GitBox
TaoLv commented on a change in pull request #17318: Enable MKL-DNN 
FullyConnected backward
URL: https://github.com/apache/incubator-mxnet/pull/17318#discussion_r379465641
 
 

 ##
 File path: tests/cpp/include/test_mkldnn.h
 ##
 @@ -330,7 +330,7 @@ inline void PrintVerifyMsg(const NDArrayAttrs &, const NDArrayAttrs &) {
   */
  inline std::vector<NDArrayAttrs> GetTestInputArrays(
     int types = ArrayTypes::All, bool rand = false,
-    std::vector<float> scale = {1}, bool spatial_data_format = false) {
+    std::vector<float> scale = {1}, bool spatial_data_format = false, int max = 50) {
 
 Review comment:
   See https://github.com/apache/incubator-mxnet/pull/17318#discussion_r379465354.




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #17318: Enable MKL-DNN FullyConnected backward

2020-02-14 Thread GitBox
TaoLv commented on a change in pull request #17318: Enable MKL-DNN 
FullyConnected backward
URL: https://github.com/apache/incubator-mxnet/pull/17318#discussion_r379465354
 
 

 ##
 File path: tests/cpp/include/test_mkldnn.h
 ##
 @@ -63,24 +63,24 @@ struct TestArrayShapes {
 };
 
 // Init arrays with the default layout.
-inline static void InitDefaultArray(NDArray *arr, bool is_rand = false) {
+inline static void InitDefaultArray(NDArray *arr, bool is_rand = false, int max = 50) {
   const TBlob &blob = arr->data();
   mshadow::default_real_t *data = blob.dptr<mshadow::default_real_t>();
   int size = blob.Size();
 
   for (int i = 0; i < size; i++)
     if (is_rand) {
-      data[i] = (std::rand() % 100) - 50;
+      data[i] = (std::rand() % (max * 2)) - max;
 
 Review comment:
   Because I don't want to affect other test cases which still use the default 
max=50 to generate integers in [-50, 50). But for the FullyConnectedOp, I want 
to generate relatively small numbers. With the given code, the elements will 
only be -1 and 0. Any suggestion?




[GitHub] [incubator-mxnet] sl1pkn07 commented on issue #17514: C++-Package examples broken

2020-02-14 Thread GitBox
sl1pkn07 commented on issue #17514: C++-Package examples broken
URL: 
https://github.com/apache/incubator-mxnet/issues/17514#issuecomment-586274460
 
 
   try with
   
   ~~~
   -DCUDA_HOST_COMPILER=/usr/bin/cc-8 \
   -DCMAKE_C_COMPILER=/usr/bin/cc-8 \
   -DCMAKE_C_COMPILER_AR=/usr/bin/gcc-ar-8 \
   -DCMAKE_C_COMPILER_RANLIB=/usr/bin/gcc-ranlib-8 \
   -DCMAKE_CXX_COMPILER=/usr/bin/c++-8 \
   -DCMAKE_CXX_COMPILER_AR=/usr/bin/gcc-ar-8 \
   -DCMAKE_CXX_COMPILER_RANLIB=/usr/bin/gcc-ranlib-8
   ~~~




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-02-14 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 3fca7d8  Bump the publish timestamp.
3fca7d8 is described below

commit 3fca7d8d02a9751ea8cb2d5e2927f530bb247bbd
Author: mxnet-ci 
AuthorDate: Fri Feb 14 12:42:49 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..642a3ea
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Feb 14 12:42:49 UTC 2020



[GitHub] [incubator-mxnet] ciyongch commented on a change in pull request #17318: Enable MKL-DNN FullyConnected backward

2020-02-14 Thread GitBox
ciyongch commented on a change in pull request #17318: Enable MKL-DNN 
FullyConnected backward
URL: https://github.com/apache/incubator-mxnet/pull/17318#discussion_r379387996
 
 

 ##
 File path: tests/cpp/include/test_mkldnn.h
 ##
 @@ -63,24 +63,24 @@ struct TestArrayShapes {
 };
 
 // Init arrays with the default layout.
-inline static void InitDefaultArray(NDArray *arr, bool is_rand = false) {
+inline static void InitDefaultArray(NDArray *arr, bool is_rand = false, int max = 50) {
   const TBlob &blob = arr->data();
   mshadow::default_real_t *data = blob.dptr<mshadow::default_real_t>();
   int size = blob.Size();
 
   for (int i = 0; i < size; i++)
     if (is_rand) {
-      data[i] = (std::rand() % 100) - 50;
+      data[i] = (std::rand() % (max * 2)) - max;
 
 Review comment:
   How about changing to `data[i] = std::rand() * 1.0 / RAND_MAX - 0.5;`? As 
`max = 1` will only generate two values: -1.0 and 0.0.




[GitHub] [incubator-mxnet] ciyongch commented on a change in pull request #17318: Enable MKL-DNN FullyConnected backward

2020-02-14 Thread GitBox
ciyongch commented on a change in pull request #17318: Enable MKL-DNN 
FullyConnected backward
URL: https://github.com/apache/incubator-mxnet/pull/17318#discussion_r379389375
 
 

 ##
 File path: tests/cpp/include/test_mkldnn.h
 ##
 @@ -63,24 +63,24 @@ struct TestArrayShapes {
 };
 
 // Init arrays with the default layout.
-inline static void InitDefaultArray(NDArray *arr, bool is_rand = false) {
+inline static void InitDefaultArray(NDArray *arr, bool is_rand = false, int max = 50) {
   const TBlob &blob = arr->data();
   mshadow::default_real_t *data = blob.dptr<mshadow::default_real_t>();
   int size = blob.Size();
 
   for (int i = 0; i < size; i++)
     if (is_rand) {
-      data[i] = (std::rand() % 100) - 50;
+      data[i] = (std::rand() % (max * 2)) - max;
     } else {
-      data[i] = i % 100 - 50;
+      data[i] = i % (max * 2) - max;
 
 Review comment:
   Same as above, how about changing to something like `data[i] = i * 2.0 / size 
- 1.0` to generate values in [-1.0, 1.0)?




[GitHub] [incubator-mxnet] ChiaraXian commented on issue #8148: How do I interpret the JSON compute graph?

2020-02-14 Thread GitBox
ChiaraXian commented on issue #8148: How do I interpret the JSON compute graph?
URL: 
https://github.com/apache/incubator-mxnet/issues/8148#issuecomment-586195655
 
 
   @aidan-plenert-macdonald Thanks a lot!!! Thank you for your help!!!




[GitHub] [incubator-mxnet] pengzhao-intel commented on a change in pull request #17318: Enable MKL-DNN FullyConnected backward

2020-02-14 Thread GitBox
pengzhao-intel commented on a change in pull request #17318: Enable MKL-DNN 
FullyConnected backward
URL: https://github.com/apache/incubator-mxnet/pull/17318#discussion_r379349568
 
 

 ##
 File path: tests/cpp/include/test_mkldnn.h
 ##
 @@ -330,7 +330,7 @@ inline void PrintVerifyMsg(const NDArrayAttrs &, const NDArrayAttrs &) {
   */
  inline std::vector<NDArrayAttrs> GetTestInputArrays(
     int types = ArrayTypes::All, bool rand = false,
-    std::vector<float> scale = {1}, bool spatial_data_format = false) {
+    std::vector<float> scale = {1}, bool spatial_data_format = false, int max = 50) {
 
 Review comment:
   Is the new parameter for better usability?



