[incubator-mxnet] branch master updated (74430a9 -> a807f6d)

2020-07-27 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 74430a9  remove NLL in metric (#18794)
 add a807f6d  [NumPy] loss for np array (#17196)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/loss.py | 251 +
 src/operator/nn/ctc_loss.cc|   1 +
 src/operator/nn/ctc_loss.cu|   1 +
 src/operator/tensor/broadcast_reduce_norm_value.cc |   1 +
 tests/python/gpu/test_gluon_gpu.py |   1 +
 tests/python/unittest/test_loss.py |  28 +--
 .../unittest/{test_loss.py => test_numpy_loss.py}  | 157 +++--
 tests/python/unittest/test_numpy_op.py | 136 +--
 8 files changed, 326 insertions(+), 250 deletions(-)
 copy tests/python/unittest/{test_loss.py => test_numpy_loss.py} (55%)
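As an aside on what PR #17196 enables (Gluon losses operating on np arrays), the arithmetic of a simple loss can be sketched in plain NumPy. This is an illustrative analogue only, not the Gluon API; `l2_loss` is a hypothetical helper following the usual 0.5 * squared-error convention.

```python
import numpy as np

def l2_loss(pred, label):
    # Mean of 0.5 * squared error, mirroring the common L2 loss convention.
    return float(0.5 * np.mean((pred - label) ** 2))

pred = np.array([1.0, 2.0, 3.0])
label = np.array([1.0, 2.0, 5.0])
print(l2_loss(pred, label))  # 0.5 * mean([0, 0, 4]) = 2/3
```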



[incubator-mxnet] branch master updated: add 'needs triage' label to new bug reports (#18696)

2020-07-13 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 8ebb537  add 'needs triage' label to new bug reports (#18696)
8ebb537 is described below

commit 8ebb5372c3ad414cde096fb82de8be14cb748b11
Author: Sheng Zha 
AuthorDate: Mon Jul 13 13:17:12 2020 -0700

add 'needs triage' label to new bug reports (#18696)
---
 .github/ISSUE_TEMPLATE/bug_report.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
index 7a0115d..b34ee8b 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.md
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -2,7 +2,7 @@
 name: Bug report
 about: Create a report to help us improve
 title: ''
-labels: 'Bug'
+labels: 'Bug, needs triage'
 assignees: ''
 
 ---



[incubator-mxnet] branch master updated: add op npx.index_update (#18545)

2020-06-16 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 8039377  add op npx.index_update (#18545)
8039377 is described below

commit 8039377e6630bcb00c5a95abdaf0851803686bc6
Author: JiangZhaoh <54654391+jiangzh...@users.noreply.github.com>
AuthorDate: Wed Jun 17 01:45:30 2020 +0800

add op npx.index_update (#18545)

* add op npx.index_update

* remove debug comment

* change eps

* fix stupid error

* add blank line in docs

* gpu temporary space request alignment

* fix test error

Co-authored-by: Ubuntu 
---
 python/mxnet/_numpy_op_doc.py  |  72 ++
 src/operator/tensor/index_add-inl.h|   2 +-
 src/operator/tensor/index_add_backward.cc  |  18 +-
 .../tensor/{index_add-inl.h => index_update-inl.h} | 175 --
 src/operator/tensor/index_update.cc| 261 +
 src/operator/tensor/index_update.cu| 204 
 tests/python/unittest/test_numpy_op.py | 162 +
 7 files changed, 813 insertions(+), 81 deletions(-)

diff --git a/python/mxnet/_numpy_op_doc.py b/python/mxnet/_numpy_op_doc.py
index fecd0e6..b8f4a49 100644
--- a/python/mxnet/_numpy_op_doc.py
+++ b/python/mxnet/_numpy_op_doc.py
@@ -630,6 +630,7 @@ def _npx_index_add(a, ind, val):
 """
 Add values to input according to given indexes.
 If repeated positions exist among the indices, the update values will be accumulated.
+
 Parameters
 ----------
 a : ndarray
@@ -643,10 +644,12 @@ def _npx_index_add(a, ind, val):
   - ind.dtype should be 'int32' or 'int64'
 val : ndarray
 Input data. The array to update the input 'a'.
+
 Returns
 -------
 out : ndarray
 The output array.
+
 Examples
 --------
 >>> a = np.zeros((2, 3, 4))
@@ -699,6 +702,75 @@ def _npx_index_add(a, ind, val):
 pass
 
 
+def _npx_index_update(a, ind, val):
+"""
+Update values to input according to given indexes.
+If multiple indices refer to the same location, it is undefined which update
+is chosen; the order of updates may be arbitrary and nondeterministic (e.g.,
+due to concurrent updates on some hardware platforms). Avoid repeated
+positions.
+
+Parameters
+----------
+a : ndarray
+Input data. The array to be updated.
+Support dtype: 'float32', 'float64', 'int32', 'int64'.
+ind : ndarray
+Indexes for indicating update positions.
+For example, array([[0, 1], [2, 3], [4, 5]]) indicates there are two
+positions to be updated, namely (0, 2, 4) and (1, 3, 5).
+Note: - 'ind' cannot be an empty array '[]'; in that case, please use the
+operator 'add' instead.
+  - 0 <= ind.ndim <= 2.
+  - ind.dtype should be 'int32' or 'int64'
+val : ndarray
+Input data. The array to update the input 'a'.
+Support dtype: 'float32', 'float64', 'int32', 'int64'.
+
+Returns
+-------
+out : ndarray
+The output array.
+
+Examples
+--------
+>>> a = np.zeros((2, 3, 4))
+>>> ind = np.array([[0, 0], [0, 0], [0, 1]], dtype='int32')
+>>> val = np.arange(2).reshape(2) + 1
+>>> b = npx.index_update(a, ind, val)
+>>> b
+array([[[1., 2., 0., 0.],
+[0., 0., 0., 0.],
+[0., 0., 0., 0.]],
+
+   [[0., 0., 0., 0.],
+[0., 0., 0., 0.],
+[0., 0., 0., 0.]]])
+
+>>> ind = np.array([[0, 0], [0, 1]], dtype='int32')
+>>> val = np.arange(8).reshape(2, 4)
+>>> b = npx.index_update(a, ind, val)
+>>> b
+array([[[0., 1., 2., 3.],
+[4., 5., 6., 7.],
+[0., 0., 0., 0.]],
+
+   [[0., 0., 0., 0.],
+[0., 0., 0., 0.],
+[0., 0., 0., 0.]]])
+
+>>> val = np.arange(4).reshape(4)  # broadcast 'val'
+>>> b = npx.index_update(a, ind, val)
+>>> b
+array([[[0., 1., 2., 3.],
+[0., 1., 2., 3.],
+[0., 0., 0., 0.]],
+
+[[0., 0., 0., 0.],
+[0., 0., 0., 0.],
+[0., 0., 0., 0.]]])
+"""
+pass
+
+
 def _np_diag(array, k=0):
 """
 Extracts a diagonal or constructs a diagonal array.
diff --git a/src/operator/tensor/index_add-inl.h b/src/operator/tensor/index_add-inl.h
index 83463da..122aa01 100644
--- a/src/operator/tensor/index_add-inl.h
+++ b/src/operator/tensor/index_add-inl.h
@@ -52,7 +52,7 @@ inline bool IndexModifyOpType(const nnvm::NodeAttrs& attrs,
   CHECK_NE((*in_att
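The index_update examples quoted above can be approximated in plain NumPy with fancy-index assignment, where each column of `ind` names one position to overwrite. This is an analogue for illustration only; NumPy's behaviour at repeated indices need not match MXNet's.

```python
import numpy as np

a = np.zeros((2, 3, 4))
ind = np.array([[0, 0], [0, 0], [0, 1]])  # positions (0, 0, 0) and (0, 0, 1)
val = np.arange(2) + 1.0                  # values 1.0 and 2.0
# Fancy-index assignment: overwrite a at the listed positions.
a[tuple(ind)] = val
print(a[0, 0])  # first row of a[0] now starts with 1., 2.
```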

[incubator-mxnet] branch master updated (f1f3f44 -> 09cf48a)

2020-06-13 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from f1f3f44  Remove the deprecated BatchNorm_v1 op (#18538)
 add 09cf48a  Use correct array type for outputs in HybridBlock.forward (#18554)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/block.py| 2 +-
 tests/python/unittest/test_deferred_compute.py | 5 +
 2 files changed, 6 insertions(+), 1 deletion(-)



[incubator-mxnet] branch master updated (fb73de7 -> 743bbcb)

2020-06-11 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from fb73de7  remove mx.module.* APIs for MXNet 2.0 (#18525)
 add 743bbcb  unify impl (#18523)

No new revisions were added by this update.

Summary of changes:
 .gitignore |   4 +
 src/operator/mshadow_op.h  |  10 --
 src/operator/mxnet_op.h|   2 -
 src/operator/numpy/np_elemwise_broadcast_op.cc |  53 --
 src/operator/numpy/np_elemwise_broadcast_op.cu |  30 --
 src/operator/numpy/np_elemwise_broadcast_op.h  |  76 +-
 src/operator/numpy/np_true_divide-inl.h| 113 -
 src/operator/numpy/np_true_divide.cc   |  12 ---
 src/operator/tensor/elemwise_binary_broadcast_op.h |   4 +-
 9 files changed, 7 insertions(+), 297 deletions(-)



[incubator-mxnet] branch master updated (691cf95 -> 8d220a2)

2020-06-03 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 691cf95  Dynamically Generate Version Dropdown (#18473)
 add 8d220a2  [Numpy]Fix einsum issue #18102 (#18419)

No new revisions were added by this update.

Summary of changes:
 src/operator/numpy/np_einsum_op-inl.h  |  4 +++-
 tests/python/unittest/test_numpy_op.py | 20 
 2 files changed, 23 insertions(+), 1 deletion(-)
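The fix above touches np_einsum_op-inl.h. As a general reminder of einsum semantics (unrelated to the specific bug in #18102), the subscript string 'ij,jk->ik' contracts over the shared index j, i.e. a matrix product:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)
# 'ij,jk->ik' sums over j, which is exactly matrix multiplication.
c = np.einsum('ij,jk->ik', a, b)
assert np.array_equal(c, a @ b)
```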



[incubator-mxnet] branch master updated (c59a325 -> ca2bdb6)

2020-06-02 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from c59a325  fix mixed type binary logic operators (#18427)
 add ca2bdb6  [Numpy] [Operator] Fix __neg__ (#18467)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/numpy/multiarray.py   |  2 +-
 python/mxnet/symbol/numpy/_symbol.py   |  2 +-
 tests/python/unittest/test_numpy_op.py | 21 +++--
 3 files changed, 21 insertions(+), 4 deletions(-)
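The __neg__ fix above concerns unary minus on ndarrays. In plain NumPy (an analogue, not MXNet's implementation), `-x` dispatches to `__neg__` and is equivalent to elementwise negation:

```python
import numpy as np

x = np.array([1.0, -2.0, 0.0])
# Unary minus calls x.__neg__(), which matches np.negative elementwise.
assert np.array_equal(-x, np.negative(x))
print(-x)
```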



[incubator-mxnet] branch master updated (b8490c5 -> c3fcbf3)

2020-06-01 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from b8490c5  More clear description to `transform_first` (#18444)
 add c3fcbf3  Add npx op 'index_add' (#18089)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/_numpy_op_doc.py |  73 ++
 src/operator/tensor/index_add-inl.h   | 231 ++
 src/operator/tensor/index_add_backward.cc | 102 +
 src/operator/tensor/index_add_backward.cu |  93 
 src/operator/tensor/index_add_forward.cc  | 132 +
 src/operator/tensor/index_add_forward.cu  |  91 
 tests/python/unittest/test_numpy_op.py| 154 
 7 files changed, 876 insertions(+)
 create mode 100644 src/operator/tensor/index_add-inl.h
 create mode 100644 src/operator/tensor/index_add_backward.cc
 create mode 100644 src/operator/tensor/index_add_backward.cu
 create mode 100644 src/operator/tensor/index_add_forward.cc
 create mode 100644 src/operator/tensor/index_add_forward.cu



[incubator-mxnet] branch master updated: Add npx op 'index_add' (#18089)

2020-06-01 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new c3fcbf3  Add npx op 'index_add' (#18089)
c3fcbf3 is described below

commit c3fcbf3837e2082ad7800ddf3e031194b22a2c9d
Author: JiangZhaoh <54654391+jiangzh...@users.noreply.github.com>
AuthorDate: Tue Jun 2 09:06:28 2020 +0800

Add npx op 'index_add' (#18089)

* part cpu

* index_add forward & test

* fix wrong doc

* fix index_add_sanity_error

* index_update_test

* remove index_update & implement index_add backward

* fix sanity error

* reduce code length

* depart into two file

* test CI compiler

* test CI

* test CI

* reduce mshadow & allow more dtype

* fix sanity error

* fix conflict

* reduce fwd macro code

* reduce bwd macro code

* fix compile error

* tensor ind

* remove cudaMalloc/cudaFree

* fix windows compile error

* fix compile error

* use value instead of references

* remove pragma

* fix naive engine error

* try to pass CI

* fix sanity error

* depart gradient into three node

* resolve comment & initialize mshadow::Shape

* fix werror

Co-authored-by: Ubuntu 
Co-authored-by: Ubuntu 
Co-authored-by: Xingjian Shi 
---
 python/mxnet/_numpy_op_doc.py |  73 ++
 src/operator/tensor/index_add-inl.h   | 231 ++
 src/operator/tensor/index_add_backward.cc | 102 +
 src/operator/tensor/index_add_backward.cu |  93 
 src/operator/tensor/index_add_forward.cc  | 132 +
 src/operator/tensor/index_add_forward.cu  |  91 
 tests/python/unittest/test_numpy_op.py| 154 
 7 files changed, 876 insertions(+)

diff --git a/python/mxnet/_numpy_op_doc.py b/python/mxnet/_numpy_op_doc.py
index 198f151..fecd0e6 100644
--- a/python/mxnet/_numpy_op_doc.py
+++ b/python/mxnet/_numpy_op_doc.py
@@ -626,6 +626,79 @@ def _npx_reshape(a, newshape, reverse=False, order='C'):
 pass
 
 
+def _npx_index_add(a, ind, val):
+"""
+Add values to input according to given indexes.
+If repeated positions exist among the indices, the update values will be
+accumulated.
+Parameters
+----------
+a : ndarray
+Input data. The array to be updated.
+ind : ndarray
+Indexes for indicating update positions.
+For example, array([[0, 1], [2, 3], [4, 5]]) indicates there are two
+positions to be updated, namely (0, 2, 4) and (1, 3, 5).
+Note: - 'ind' cannot be an empty array '[]'; in that case, please use the
+operator 'add' instead.
+  - 0 <= ind.ndim <= 2.
+  - ind.dtype should be 'int32' or 'int64'
+val : ndarray
+Input data. The array to update the input 'a'.
+Returns
+-------
+out : ndarray
+The output array.
+Examples
+--------
+>>> a = np.zeros((2, 3, 4))
+>>> ind = np.array([[0, 0], [0, 0], [0, 1]], dtype='int32')
+>>> val = np.arange(2).reshape(2) + 1
+>>> b = npx.index_add(a, ind, val)
+>>> b
+array([[[1., 2., 0., 0.],
+[0., 0., 0., 0.],
+[0., 0., 0., 0.]],
+
+   [[0., 0., 0., 0.],
+[0., 0., 0., 0.],
+[0., 0., 0., 0.]]])
+
+>>> ind = np.array([[0, 0], [0, 0], [0, 0]], dtype='int32')  # accumulate values in repeated positions
+>>> b = npx.index_add(a, ind, val)
+>>> b
+array([[[3., 0., 0., 0.],
+[0., 0., 0., 0.],
+[0., 0., 0., 0.]],
+
+   [[0., 0., 0., 0.],
+[0., 0., 0., 0.],
+[0., 0., 0., 0.]]])
+
+>>> ind = np.array([[0, 0], [0, 1]], dtype='int32')
+>>> val = np.arange(8).reshape(2, 4)
+>>> b = npx.index_add(a, ind, val)
+>>> b
+array([[[0., 1., 2., 3.],
+[4., 5., 6., 7.],
+[0., 0., 0., 0.]],
+
+   [[0., 0., 0., 0.],
+[0., 0., 0., 0.],
+[0., 0., 0., 0.]]])
+
+>>> val = np.arange(4).reshape(4)  # broadcast 'val'
+>>> b = npx.index_add(a, ind, val)
+>>> b
+array([[[0., 1., 2., 3.],
+[0., 1., 2., 3.],
+[0., 0., 0., 0.]],
+
+[[0., 0., 0., 0.],
+[0., 0., 0., 0.],
+[0., 0., 0., 0.]]])
+"""
+pass
+
+
 def _np_diag(array, k=0):
 """
 Extracts a diagonal or constructs a diagonal array.
diff --git a/src/operator/tensor/index_add-inl.h 
b/src/operator

[incubator-mxnet] branch master updated (53a92f9 -> 8174771)

2020-05-31 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 53a92f9  [website] Redirect Chinese visitors to Apache Chinese CDN provider PART 1 (#18431)
 add 8174771  fix batchnorm (#18377)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/contrib/nn/basic_layers.py | 14 +++
 python/mxnet/gluon/nn/basic_layers.py | 36 +++
 2 files changed, 29 insertions(+), 21 deletions(-)
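The patch above adjusts Gluon's BatchNorm wrappers. As a generic reminder of the computation BatchNorm performs (a minimal NumPy sketch, not the specific fix in #18377 and not MXNet's implementation):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize over the batch axis, then apply learned scale and shift.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.array([[1.0, 2.0], [3.0, 4.0]])
y = batch_norm(x, gamma=1.0, beta=0.0)
print(y.mean(axis=0))  # ~0 per feature after normalization
```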



[incubator-mxnet] branch master updated (67b5d31 -> 5343aef)

2020-05-21 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 67b5d31  Fix race condition in unittest by pytest temp_dir fixtures (#18323)
 add 5343aef  [Numpy] Fix gluon activations (#18370)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/nn/activations.py  | 18 +++
 tests/python/unittest/test_numpy_gluon.py | 50 +++
 2 files changed, 63 insertions(+), 5 deletions(-)
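The activation fix above is MXNet-specific; as a generic sketch of one such activation (LeakyReLU) in plain NumPy, with `leaky_relu` a hypothetical helper, not Gluon's class:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Identity for positive inputs, small slope alpha otherwise.
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, 0.0, 3.0])
print(leaky_relu(x))
```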



[incubator-mxnet] annotated tag 1.5.0.rc2 updated (75a9e18 -> bc631ea)

2020-05-19 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to annotated tag 1.5.0.rc2
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


*** WARNING: tag 1.5.0.rc2 was modified! ***

from 75a9e18  (commit)
  to bc631ea  (tag)
 tagging 75a9e187d00a8b7ebc71412a02ed0e3ae489d91f (commit)
 replaces v0.9.2
  by Sheng Zha
  on Thu Jun 27 09:51:39 2019 -0700

- Log -
1.5.0.rc2
---


No new revisions were added by this update.

Summary of changes:



[incubator-mxnet] branch master updated (3140c55 -> b214477)

2020-05-19 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 3140c55  Include all mkldnn headers in CD builds (#18355)
 add b214477  fix (#18313)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/numpy/multiarray.py   | 77 +-
 tests/python/unittest/test_numpy_op.py | 40 ++
 2 files changed, 116 insertions(+), 1 deletion(-)



[incubator-mxnet] branch master updated: [Bug Fix] Fix GroupNorm Implementation (#18199)

2020-04-30 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 1496c91  [Bug Fix] Fix GroupNorm Implementation (#18199)
1496c91 is described below

commit 1496c91871b9d81d6a18785bdc8a1c3450bedbca
Author: Huang, Guangtai 
AuthorDate: Fri May 1 01:03:41 2020 +0800

[Bug Fix] Fix GroupNorm Implementation (#18199)

* init

* add in_channels
---
 python/mxnet/gluon/nn/basic_layers.py  | 11 +++
 src/operator/nn/group_norm-inl.h   | 25 +
 src/operator/nn/group_norm.cc  |  4 ++--
 tests/python/unittest/test_operator.py | 16 
 4 files changed, 30 insertions(+), 26 deletions(-)

diff --git a/python/mxnet/gluon/nn/basic_layers.py 
b/python/mxnet/gluon/nn/basic_layers.py
index 70b0a71..797392a 100644
--- a/python/mxnet/gluon/nn/basic_layers.py
+++ b/python/mxnet/gluon/nn/basic_layers.py
@@ -820,7 +820,7 @@ class GroupNorm(HybridBlock):
 """
 def __init__(self, num_groups=1, epsilon=1e-5, center=True, scale=True,
  beta_initializer='zeros', gamma_initializer='ones',
- prefix=None, params=None):
+ in_channels=0, prefix=None, params=None):
 super(GroupNorm, self).__init__(prefix=prefix, params=params)
 self._kwargs = {'eps': epsilon, 'num_groups': num_groups, 'center': 
center, 'scale': scale}
 self._num_groups = num_groups
@@ -828,10 +828,10 @@ class GroupNorm(HybridBlock):
 self._center = center
 self._scale = scale
 self.gamma = self.params.get('gamma', grad_req='write' if scale else 
'null',
- shape=(num_groups,), 
init=gamma_initializer,
+ shape=(in_channels,), 
init=gamma_initializer,
  allow_deferred_init=True)
 self.beta = self.params.get('beta', grad_req='write' if center else 
'null',
-shape=(num_groups,), init=beta_initializer,
+shape=(in_channels,), 
init=beta_initializer,
 allow_deferred_init=True)
 
 def hybrid_forward(self, F, data, gamma, beta):
@@ -839,7 +839,10 @@ class GroupNorm(HybridBlock):
 return norm_data
 
 def __repr__(self):
-s = '{name}({content})'
+s = '{name}({content}'
+in_channels = self.gamma.shape[0]
+s += ', in_channels={0}'.format(in_channels)
+s += ')'
 return s.format(name=self.__class__.__name__,
 content=', '.join(['='.join([k, v.__repr__()])
for k, v in self._kwargs.items()]))
diff --git a/src/operator/nn/group_norm-inl.h b/src/operator/nn/group_norm-inl.h
index 69d5a30..143e216 100644
--- a/src/operator/nn/group_norm-inl.h
+++ b/src/operator/nn/group_norm-inl.h
@@ -136,16 +136,16 @@ void GroupNormCompute(const nnvm::NodeAttrs& attrs,
   TBlob data_grp = data.reshape(temp_data_shape);
   const TBlob& mean_grp = mean.reshape(moments_shape);
   const TBlob& std_grp = std.reshape(moments_shape);
-  const TBlob& output = outputs[groupnorm::kOut].reshape(temp_data_shape);
+  const TBlob& output_grp = outputs[groupnorm::kOut].reshape(temp_data_shape);
 
   // Calculate data = data - mean
   BinaryBroadcastCompute(attrs, ctx,
  {data_grp, mean_grp},
- {kWriteTo}, {output});
+ {kWriteTo}, {output_grp});
 
   // Calculate std
   const TBlob centered_out = outputs[groupnorm::kOut].reshape(red_src_shape);
-  MSHADOW_REAL_TYPE_SWITCH(output.type_flag_, DType, {
+  MSHADOW_REAL_TYPE_SWITCH(output_grp.type_flag_, DType, {
 BROADCAST_NDIM_SWITCH(red_dst_shape.ndim(), NDim, {
   broadcast::Reduce(
 s, std_, req[0], workspace, centered_out);
@@ -157,11 +157,12 @@ void GroupNormCompute(const nnvm::NodeAttrs& attrs,
 
   // Calculate data = data / std
   BinaryBroadcastCompute(attrs, ctx,
-   {output, std_grp},
-   {kWriteTo}, {output});
+   {output_grp, std_grp},
+   {kWriteTo}, {output_grp});
 
-  mxnet::TShape new_param_shape(data_shape.ndim() + 1, 1);
-  new_param_shape[1] = num_groups;
+  const TBlob& output = outputs[groupnorm::kOut];
+  mxnet::TShape new_param_shape(data_shape.ndim(), 1);
+  new_param_shape[1] = data_shape[1];
 
   const TBlob& gamma = inputs[groupnorm::kGamma].reshape(new_param_shape);
   const TBlob& beta = inputs[groupnorm::kBeta].reshape(new_param_sh
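The essence of the patch above is that GroupNorm's affine parameters `gamma` and `beta` are per-channel, shape `(in_channels,)`, not per-group as before. A minimal NumPy sketch of the corrected computation (the function and names here are illustrative, not MXNet's actual kernel):

```python
import numpy as np

def group_norm(x, num_groups, gamma, beta, eps=1e-5):
    """Normalize x of shape (N, C, ...) over groups of channels.

    gamma and beta have shape (C,): per-channel affine parameters,
    which is the shape the patch switches to (previously (num_groups,)).
    """
    n, c = x.shape[0], x.shape[1]
    grouped = x.reshape(n, num_groups, -1)
    mean = grouped.mean(axis=-1, keepdims=True)
    std = np.sqrt(grouped.var(axis=-1, keepdims=True) + eps)
    out = ((grouped - mean) / std).reshape(x.shape)
    # Broadcast the per-channel parameters over the spatial dims.
    affine_shape = (1, c) + (1,) * (x.ndim - 2)
    return out * gamma.reshape(affine_shape) + beta.reshape(affine_shape)
```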

[incubator-mxnet] branch master updated (fe73add -> 664bda1)

2020-04-30 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from fe73add  Specify mxnetci dockerhub user in docker-compose.yml (#18195)
 add 664bda1  Revert "[NumPy]Set numpy default dtype (#17283)" (#18194)

No new revisions were added by this update.

Summary of changes:
 benchmark/python/einsum/benchmark_einsum.py|   2 +-
 benchmark/python/ffi/benchmark_ffi.py  |   5 +-
 include/mxnet/c_api.h  |  14 --
 include/mxnet/imperative.h |  26 +--
 python/mxnet/__init__.py   |   1 -
 python/mxnet/gluon/data/dataloader.py  |   4 +-
 python/mxnet/ndarray/numpy/_op.py  | 147 +-
 python/mxnet/ndarray/numpy/random.py   |  33 ++-
 python/mxnet/numpy/multiarray.py   | 159 ++-
 python/mxnet/numpy/random.py   |   8 +-
 python/mxnet/numpy_extension/__init__.py   |   3 +-
 python/mxnet/symbol/numpy/_symbol.py   | 103 --
 python/mxnet/symbol/numpy/random.py|  32 ++-
 python/mxnet/symbol/numpy_extension/random.py  |   2 +
 python/mxnet/test_utils.py |   2 +-
 python/mxnet/util.py   | 217 +---
 .../operator/numpy/np_broadcast_reduce_op_value.cc |   2 +-
 src/api/operator/numpy/np_init_op.cc   |  97 +
 src/api/operator/numpy/np_window_op.cc |   3 +-
 src/api/operator/random/np_gamma_op.cc |   2 +-
 src/api/operator/random/np_normal_op.cc|   2 +-
 src/api/operator/random/np_uniform_op.cc   |   2 +-
 src/c_api/c_api_ndarray.cc |  12 --
 src/common/utils.h |  14 --
 src/operator/numpy/linalg/np_gesvd.cc  |   1 -
 src/operator/numpy/np_broadcast_reduce_op.h|   1 -
 src/operator/numpy/np_broadcast_reduce_op_value.cc |   2 +-
 src/operator/numpy/np_init_op.cc   |  44 +---
 src/operator/numpy/np_init_op.cu   |   6 -
 src/operator/numpy/np_init_op.h|   9 +-
 src/operator/numpy/np_true_divide-inl.h|  24 +--
 src/operator/numpy/np_true_divide.cc   |   7 +-
 src/operator/numpy/np_window_op.cc |   6 +-
 src/operator/numpy/np_window_op.h  |   3 +-
 src/operator/numpy/random/np_bernoulli_op.h|   8 +-
 src/operator/numpy/random/np_gamma_op.cc   |   2 +-
 src/operator/numpy/random/np_gamma_op.h|   8 +-
 src/operator/numpy/random/np_laplace_op.h  |   2 +-
 src/operator/numpy/random/np_normal_op.h   |   8 +-
 src/operator/numpy/random/np_uniform_op.h  |   8 +-
 src/operator/random/sample_op.h|   3 +-
 src/operator/tensor/init_op.cc |   2 +
 src/operator/tensor/init_op.h  |  52 ++---
 tests/python/unittest/test_numpy_default_dtype.py  | 225 -
 tests/python/unittest/test_numpy_op.py |   9 +-
 45 files changed, 277 insertions(+), 1045 deletions(-)
 delete mode 100644 tests/python/unittest/test_numpy_default_dtype.py



[incubator-mxnet] branch master updated: [numpy] Fix ffi split (#18136)

2020-04-23 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 71a7b5d  [numpy] Fix ffi split  (#18136)
71a7b5d is described below

commit 71a7b5d5c918b07a6afde858ffc74e90f65173bd
Author: Yiyan66 <57363390+yiya...@users.noreply.github.com>
AuthorDate: Fri Apr 24 02:50:49 2020 +0800

[numpy] Fix ffi split  (#18136)

* fix ffi split

* add test

* fix ffi split

Co-authored-by: Ubuntu 
---
 src/api/operator/numpy/np_matrix_op.cc   | 5 -
 tests/python/unittest/test_numpy_interoperability.py | 1 +
 tests/python/unittest/test_numpy_op.py   | 2 +-
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/src/api/operator/numpy/np_matrix_op.cc 
b/src/api/operator/numpy/np_matrix_op.cc
index 58ee563..929a6a6 100644
--- a/src/api/operator/numpy/np_matrix_op.cc
+++ b/src/api/operator/numpy/np_matrix_op.cc
@@ -144,9 +144,12 @@ MXNET_REGISTER_API("_npi.split")
   if (args[1].type_code() == kDLInt) {
 param.indices = TShape(0, 0);
 param.sections = args[1].operator int();
+int index = param.axis >= 0 ? param.axis :
+  param.axis + inputs[0]->shape().ndim();
+CHECK_GE(index, 0) << "IndexError: tuple index out of range";
 CHECK_GT(param.sections, 0)
   << "ValueError: number sections must be larger than 0";
-CHECK_EQ(inputs[0]->shape()[param.axis] % param.sections, 0)
+CHECK_EQ(inputs[0]->shape()[index] % param.sections, 0)
   << "ValueError: array split does not result in an equal division";
   } else {
 TShape t = TShape(args[1].operator ObjectRef());
diff --git a/tests/python/unittest/test_numpy_interoperability.py 
b/tests/python/unittest/test_numpy_interoperability.py
index 824fa1e..c004d0c 100644
--- a/tests/python/unittest/test_numpy_interoperability.py
+++ b/tests/python/unittest/test_numpy_interoperability.py
@@ -345,6 +345,7 @@ def _add_workload_expand_dims():
 def _add_workload_split():
 OpArgMngr.add_workload('split', np.random.uniform(size=(4, 1)), 2)
 OpArgMngr.add_workload('split', np.arange(10), 2)
+OpArgMngr.add_workload('split', np.random.uniform(size=(10, 10, 3)), 3, -1)
 assertRaises(ValueError, np.split, np.arange(10), 3)
 
 
diff --git a/tests/python/unittest/test_numpy_op.py 
b/tests/python/unittest/test_numpy_op.py
index 3b11b35..20a940f 100644
--- a/tests/python/unittest/test_numpy_op.py
+++ b/tests/python/unittest/test_numpy_op.py
@@ -2938,7 +2938,7 @@ def test_np_split():
 dim = random.randint(0, 3)
 shape = [0] + [random.randint(2, 4) for i in range(dim)]
 for hybridize in [True, False]:
-for axis in range(len(shape)):
+for axis in range(-len(shape)+1, len(shape)):
 indices = get_indices(shape[axis])
 sections = 7 if shape[axis] is 0 else shape[axis]
 for indices_or_sections in [indices, sections]:
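The fix above normalizes a negative axis before the equal-division check in the `_npi.split` FFI handler. The behavior can be sketched with standard NumPy, whose `split` semantics `mx.np` mirrors (the normalization logic below paraphrases the patched C++ check):

```python
import numpy as np

a = np.random.uniform(size=(10, 10, 3))
axis, sections = -1, 3

# What the patched handler now does: normalize the negative axis first,
# then check that the axis length divides evenly into `sections`.
index = axis if axis >= 0 else axis + a.ndim
assert index >= 0, "IndexError: tuple index out of range"
assert a.shape[index] % sections == 0, \
    "ValueError: array split does not result in an equal division"

parts = np.split(a, sections, axis=axis)
```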



[incubator-mxnet] branch master updated (1679ade -> 07b8d7a)

2020-04-10 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 1679ade  fixes #17918; update ruby & jekyll, remove incompatible 
plugins (#17927)
 add 07b8d7a  Fix ElemwiseSum for more than 4 inputs (#17995)

No new revisions were added by this update.

Summary of changes:
 src/operator/tensor/elemwise_sum.h |  2 +-
 tests/python/unittest/test_operator.py | 19 +++
 2 files changed, 20 insertions(+), 1 deletion(-)
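The operation fixed here is an element-wise sum over an arbitrary number of input arrays; the bug only surfaced with more than 4 inputs. A NumPy sketch of the semantics (not MXNet's fused kernel):

```python
import functools
import numpy as np

# Element-wise sum over six inputs -- past the 4-input boundary the
# patch fixes in src/operator/tensor/elemwise_sum.h.
arrays = [np.full(3, float(i)) for i in range(6)]
total = functools.reduce(np.add, arrays)
```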



[incubator-mxnet] branch master updated (0aa2c78 -> b001006)

2020-03-09 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 0aa2c78  Fix OpPerf in Master (#17735)
 add b001006  add npx.broadcast_like (#17605)

No new revisions were added by this update.

Summary of changes:
 src/operator/tensor/broadcast_reduce_op_value.cc | 1 +
 1 file changed, 1 insertion(+)
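`npx.broadcast_like` broadcasts the first array to the shape of the second. Standard NumPy can express the same semantics (this equivalence is illustrative; the MXNet op works on its own ndarrays):

```python
import numpy as np

lhs = np.arange(3).reshape(1, 3)
rhs = np.zeros((4, 3))

# NumPy equivalent of npx.broadcast_like(lhs, rhs):
# expand lhs along broadcastable axes to match rhs's shape.
out = np.broadcast_to(lhs, rhs.shape)
```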



[incubator-mxnet] branch master updated (592466e -> 27c74ad)

2020-03-06 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 592466e  allclose_op: Fix non-ascii characters in comments (#17746)
 add 27c74ad  [Numpy] Fix symbolic basic indexing (#17770)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/symbol/numpy/_symbol.py  |  5 -
 tests/python/unittest/test_numpy_gluon.py | 10 ++
 2 files changed, 14 insertions(+), 1 deletion(-)



[incubator-mxnet] branch master updated (a8452aa -> 58cbd65)

2020-01-13 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from a8452aa  Fix language selection in get_started options.js (#17284)
 add 58cbd65  [MXNET-978] Higher Order Gradient Support `rsqrt`, `rcbrt`. 
(#15476)

No new revisions were added by this update.

Summary of changes:
 src/operator/tensor/elemwise_unary_op_pow.cc| 56 -
 tests/python/unittest/test_higher_order_grad.py | 40 ++
 2 files changed, 94 insertions(+), 2 deletions(-)
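Supporting higher-order gradients for `rsqrt` and `rcbrt` amounts to registering their second derivatives. For reference, the derivatives involved (standard calculus, not copied from the patch):

```latex
f(x) = x^{-1/2} \;\Rightarrow\; f'(x) = -\tfrac{1}{2}\,x^{-3/2}, \qquad f''(x) = \tfrac{3}{4}\,x^{-5/2}
g(x) = x^{-1/3} \;\Rightarrow\; g'(x) = -\tfrac{1}{3}\,x^{-4/3}, \qquad g''(x) = \tfrac{4}{9}\,x^{-7/3}
```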



[incubator-mxnet] branch master updated (4fda46b -> 2ad3ce4)

2019-12-26 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 4fda46b  fix py27 quantization (#17153)
 add 2ad3ce4  broadcast_axis optimization (#17091)

No new revisions were added by this update.

Summary of changes:
 src/operator/tensor/broadcast_reduce_op_value.cc | 37 +++-
 1 file changed, 36 insertions(+), 1 deletion(-)



[incubator-mxnet] branch master updated (faa2832 -> ed09547)

2019-12-18 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from faa2832  Add im2col and col2im operator (#16502)
 add ed09547  [MXNET-978] Higher Order Gradient Support `arcsin`, `arccos`. 
(#15515)

No new revisions were added by this update.

Summary of changes:
 src/operator/tensor/elemwise_unary_op_trig.cc   | 53 -
 tests/python/unittest/test_higher_order_grad.py | 38 ++
 2 files changed, 89 insertions(+), 2 deletions(-)
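For reference, the first and second derivatives this change registers for `arcsin` and `arccos` (standard calculus, not copied from the patch):

```latex
\frac{d}{dx}\arcsin x = (1-x^2)^{-1/2}, \qquad \frac{d^2}{dx^2}\arcsin x = x\,(1-x^2)^{-3/2}
\frac{d}{dx}\arccos x = -(1-x^2)^{-1/2}, \qquad \frac{d^2}{dx^2}\arccos x = -x\,(1-x^2)^{-3/2}
```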



[incubator-mxnet] branch master updated (3b911cf -> f045018)

2019-12-13 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 3b911cf  Website edits (#17050)
 add f045018  [MXNET-978] Higher Order Gradient Support `logp1`, `expm1`, 
`square`. (#15416)

No new revisions were added by this update.

Summary of changes:
 src/operator/tensor/elemwise_unary_op_logexp.cc | 55 -
 src/operator/tensor/elemwise_unary_op_pow.cc| 30 +-
 tests/python/unittest/test_higher_order_grad.py | 33 +++
 3 files changed, 115 insertions(+), 3 deletions(-)



[incubator-mxnet] branch master updated (8f10d55 -> a98cefc)

2019-11-26 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 8f10d55  [Numpy] Fix imperative basic indexing in numpy (#16902)
 add a98cefc  [Numpy] Basic indexing in symbolic interface of DeepNumpy 
(#16621)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/__init__.py  |   1 +
 python/mxnet/_ctypes/ndarray.py   |   4 +-
 python/mxnet/_ctypes/symbol.py|   8 +-
 python/mxnet/base.py  |  18 +++
 python/mxnet/cython/base.pyi  |   2 +
 python/mxnet/cython/ndarray.pyx   |   4 +-
 python/mxnet/cython/symbol.pyx|  12 +-
 python/mxnet/gluon/block.py   |  21 ++-
 python/mxnet/ndarray/numpy/_op.py |  18 +--
 python/mxnet/ndarray/register.py  |   7 +-
 python/mxnet/symbol/numpy/_symbol.py  | 212 +++---
 python/mxnet/symbol/register.py   |  11 +-
 python/mxnet/symbol/symbol.py |   2 +-
 python/mxnet/test_utils.py|  78 +++
 src/operator/numpy/np_matrix_op.cc|   4 +-
 tests/python/unittest/test_numpy_gluon.py | 207 -
 16 files changed, 553 insertions(+), 56 deletions(-)



[incubator-mxnet] branch master updated (c9585bd -> a11b7ea)

2019-11-26 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from c9585bd  Fix the problem in printing feature in c++ API examples : 
feature_extract (#15686)
 add a11b7ea  Try to fix CI (#16908)

No new revisions were added by this update.

Summary of changes:
 ci/docker/install/ubuntu_core.sh | 4 
 1 file changed, 4 insertions(+)



[incubator-mxnet] branch master updated (b972406 -> 4a27b5c)

2019-11-14 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from b972406  clean TVM (#16814)
 add 4a27b5c  [Fix] Add ctx to the original ndarray and revise the usage of 
context to ctx (#16819)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/block.py  |  6 +++---
 python/mxnet/gluon/parameter.py  |  8 
 python/mxnet/ndarray/ndarray.py  | 37 +++--
 python/mxnet/numpy/multiarray.py | 11 ++-
 4 files changed, 40 insertions(+), 22 deletions(-)



[incubator-mxnet] branch master updated (9b25db0 -> e88e97f)

2019-11-13 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 9b25db0  Fix numpy-compatible mean output type for integer inputs 
(#16792)
 add e88e97f  [Numpy] Fix collect_params().zero_grad() in gluon numpy 
interface (#16716)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/parameter.py   | 12 
 tests/python/unittest/test_numpy_gluon.py | 17 +
 2 files changed, 25 insertions(+), 4 deletions(-)
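The fix makes `collect_params().zero_grad()` work for parameters created under the gluon numpy interface. A behavioral sketch with a hypothetical stand-in `Param` class (not gluon's actual `Parameter`):

```python
import numpy as np

class Param:
    """Hypothetical stand-in for a gluon Parameter with a gradient buffer."""
    def __init__(self, data):
        self.data = data
        self.grad = np.zeros_like(data)

def zero_grad(params):
    # What collect_params().zero_grad() is expected to do after the fix:
    # reset every gradient buffer in place, without reallocating it.
    for p in params.values():
        p.grad[...] = 0

params = {'weight': Param(np.ones(3))}
params['weight'].grad += 5.0
zero_grad(params)
```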



[incubator-mxnet] branch master updated (b9b56e6 -> 02f4f05)

2019-11-12 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from b9b56e6  fix inv test flakiness using random matrices generated by SVD 
(#16782)
 add 02f4f05  [Numpy] Add sampling method for bernoulli (#16638)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/base.py   |   2 +-
 python/mxnet/ndarray/numpy_extension/__init__.py   |   1 +
 python/mxnet/ndarray/numpy_extension/random.py | 104 +++
 python/mxnet/numpy_extension/__init__.py   |   2 +-
 python/mxnet/numpy_extension/random.py |  56 +-
 python/mxnet/symbol/numpy_extension/__init__.py|   1 +
 python/mxnet/symbol/numpy_extension/random.py  | 104 +++
 src/operator/numpy/random/dist_common.h|  31 +++-
 .../random/{np_normal_op.cc => np_bernoulli_op.cc} |  46 ++---
 .../random/{np_normal_op.cu => np_bernoulli_op.cu} |  10 +-
 src/operator/numpy/random/np_bernoulli_op.h| 200 +
 src/operator/numpy/random/np_normal_op.h   |   1 +
 src/operator/numpy/random/np_uniform_op.h  |   1 -
 tests/python/unittest/test_numpy_op.py |  35 
 14 files changed, 559 insertions(+), 35 deletions(-)
 create mode 100644 python/mxnet/ndarray/numpy_extension/random.py
 create mode 100644 python/mxnet/symbol/numpy_extension/random.py
 copy src/operator/numpy/random/{np_normal_op.cc => np_bernoulli_op.cc} (55%)
 copy src/operator/numpy/random/{np_normal_op.cu => np_bernoulli_op.cu} (80%)
 create mode 100644 src/operator/numpy/random/np_bernoulli_op.h
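The new sampling op draws 0/1 Bernoulli samples with a given probability. Its semantics, sketched with standard NumPy rather than the MXNet op itself (names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
prob = 0.3

# Semantics of a Bernoulli draw like the one the new op provides:
# each sample is 1 with probability `prob`, else 0.
samples = (rng.random(10_000) < prob).astype(np.float32)
```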



[incubator-mxnet] branch master updated (bb6305d -> 5a2fce5)

2019-11-04 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from bb6305d  [MKLDNN] support mkldnn gelu (#16710)
 add 5a2fce5  [WIP][New Op] Add deformable conv v2 (#16341)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/contrib/cnn/conv_layers.py  | 180 ++-
 ...nl.h => modulated_deformable_convolution-inl.h} | 331 -
 ...tion.cc => modulated_deformable_convolution.cc} |  49 +-
 ...tion.cu => modulated_deformable_convolution.cu} |  19 +-
 .../contrib/nn/modulated_deformable_im2col.cuh | 541 +
 .../contrib/nn/modulated_deformable_im2col.h   | 291 +++
 tests/python/gpu/test_gluon_contrib_gpu.py |  27 +
 tests/python/unittest/test_contrib_operator.py |  38 +-
 tests/python/unittest/test_gluon_contrib.py|  30 ++
 9 files changed, 1338 insertions(+), 168 deletions(-)
 copy src/operator/contrib/{deformable_convolution-inl.h => 
modulated_deformable_convolution-inl.h} (54%)
 copy src/operator/contrib/{deformable_convolution.cc => 
modulated_deformable_convolution.cc} (61%)
 copy src/operator/contrib/{deformable_convolution.cu => 
modulated_deformable_convolution.cu} (68%)
 create mode 100644 src/operator/contrib/nn/modulated_deformable_im2col.cuh
 create mode 100644 src/operator/contrib/nn/modulated_deformable_im2col.h



[incubator-mxnet] branch master updated: RNNOp only call cuda/cudnn if GPU ctx is requested (#16632)

2019-10-27 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 9f21cdd  RNNOp only call cuda/cudnn if GPU ctx is requested (#16632)
9f21cdd is described below

commit 9f21cddb3f6cc81e67a192f313066f7e9edd7fa8
Author: Leonard Lausen 
AuthorDate: Sun Oct 27 10:21:14 2019 -0700

RNNOp only call cuda/cudnn if GPU ctx is requested (#16632)
---
 src/operator/rnn-inl.h | 5 +
 1 file changed, 5 insertions(+)

diff --git a/src/operator/rnn-inl.h b/src/operator/rnn-inl.h
index ead7501..b448261 100644
--- a/src/operator/rnn-inl.h
+++ b/src/operator/rnn-inl.h
@@ -422,6 +422,8 @@ class RNNOp {
 init_mem_ = false;
 reserve_mem_size_ = 0;
 #endif
+
+if (ctx_.dev_type == kGPU) {
 #if MXNET_USE_CUDNN == 1
 init_cudnn_ = false;
 dtype_ = mshadow::DataType<DType>::kCudnnFlag;
@@ -505,6 +507,7 @@ class RNNOp {
   LOG(FATAL) << "RNN on GPU is only available for cuDNN at the moment.";
 }
 #endif  // MXNET_USE_CUDNN == 1
+}
 
 if (ctx_.dev_type == kCPU) {
   this->init_space_ = false;
@@ -523,6 +526,7 @@ class RNNOp {
   }
 
   ~RNNOp() {
+if (ctx_.dev_type == kGPU) {
 #if MXNET_USE_CUDNN == 1
 CUDNN_CALL(cudnnDestroyTensorDescriptor(hx_desc_));
 CUDNN_CALL(cudnnDestroyTensorDescriptor(cx_desc_));
@@ -557,6 +561,7 @@ class RNNOp {
 CUDNN_CALL(cudnnDestroyRNNDataDescriptor(dy_data_desc_));
 #endif  // MXNET_USE_CUDNN_GE_7200
 #endif  // MXNET_USE_CUDNN
+}
   }
 
  void Forward(const OpContext &ctx, const std::vector<TBlob> &in_data,



[incubator-mxnet] branch master updated (ffec31f -> 5b67a69)

2019-10-19 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from ffec31f  Aggregated adamw update (#16398)
 add 5b67a69  try to fix block (#16465)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/block.py | 110 
 tests/python/gpu/test_gluon_gpu.py  |  15 +
 tests/python/unittest/test_gluon.py |  44 +++
 3 files changed, 133 insertions(+), 36 deletions(-)



[incubator-mxnet] branch master updated (7f5e687 -> ca30ba8)

2019-10-11 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 7f5e687  numpy-compatible histogram (#16266)
 add ca30ba8  Pseudo 2D transpose kernel (#16229)

No new revisions were added by this update.

Summary of changes:
 src/operator/tensor/matrix_op-inl.h  |  16 ++
 src/operator/tensor/pseudo2DTranspose_op-inl.cuh | 348 +++
 tests/python/unittest/test_operator.py   |  39 +++
 3 files changed, 403 insertions(+)
 create mode 100644 src/operator/tensor/pseudo2DTranspose_op-inl.cuh
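For readers skimming the diffstat: the "pseudo 2D" trick treats any transpose whose permutation rotates the axes into two contiguous blocks as a single cache-friendly 2D transpose on a reshaped view. A NumPy sketch of the equivalence (the function name is illustrative, not the operator's API):

```python
import numpy as np

def pseudo_2d_transpose(x, split):
    # Permutations of the form [split..n-1, 0..split-1] collapse to a
    # single 2D transpose: leading axes become rows, trailing axes columns.
    lead = int(np.prod(x.shape[:split]))
    tail = int(np.prod(x.shape[split:]))
    out2d = x.reshape(lead, tail).T.copy()   # one 2D transpose
    return out2d.reshape(x.shape[split:] + x.shape[:split])

x = np.arange(24).reshape(2, 3, 4)
out = pseudo_2d_transpose(x, 2)              # same as x.transpose(2, 0, 1)
```

The CUDA kernel exploits the same collapse; the tiling details live in the new pseudo2DTranspose_op-inl.cuh.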



[incubator-mxnet] branch master updated (ec766d5 -> d5666ed)

2019-10-08 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from ec766d5  add raise test for shape
 add d5666ed  Round and sign straight-through-estimators C operators. 
(#16373)

No new revisions were added by this update.

Summary of changes:
 src/operator/contrib/stes_op.cc   |  84 
 src/operator/contrib/stes_op.cu   |  43 
 src/operator/contrib/stes_op.h|  33 +++
 tests/python/unittest/test_contrib_stes_op.py | 137 ++
 4 files changed, 297 insertions(+)
 create mode 100644 src/operator/contrib/stes_op.cc
 create mode 100644 src/operator/contrib/stes_op.cu
 create mode 100644 src/operator/contrib/stes_op.h
 create mode 100644 tests/python/unittest/test_contrib_stes_op.py
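Context for the new contrib operators: a straight-through estimator (STE) uses a non-differentiable quantizer (round, sign) in the forward pass but lets gradients pass through unchanged in the backward pass. A minimal NumPy sketch of the idea (not the registered C++ operators):

```python
import numpy as np

def round_ste_forward(x):
    # Forward pass: ordinary rounding (derivative is zero almost everywhere).
    return np.round(x)

def round_ste_backward(grad_out):
    # Backward pass: pretend round() was the identity, pass gradients through.
    return grad_out

x = np.array([-1.6, -0.4, 0.4, 1.6])
y = round_ste_forward(x)
g = round_ste_backward(np.ones_like(x))      # gradient survives the quantizer
```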



[incubator-mxnet] branch master updated (ea440c7 -> 3ffd2c2)

2019-09-30 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from ea440c7  [numpy] Cosmetic improvement on mxnet.numpy builtin op 
signature in documentation (#16305)
 add 3ffd2c2  [MXNET-978] Fully connected, higher order grad (#14779)

No new revisions were added by this update.

Summary of changes:
 include/mxnet/tensor_blob.h |   2 +-
 src/operator/linalg.h   |   1 +
 src/operator/nn/fully_connected-inl.h   | 205 +++-
 src/operator/nn/fully_connected.cc  |  56 +--
 tests/python/unittest/test_higher_order_grad.py | 147 -
 5 files changed, 360 insertions(+), 51 deletions(-)
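For orientation: the operator being differentiated is the plain affine map y = xWᵀ + b, and the higher-order code differentiates its (linear) first-order gradients once more. Those first-order expressions, sketched in NumPy (not the MXNet kernels):

```python
import numpy as np

def fc_forward(x, w, b):
    return x @ w.T + b                       # y = x W^T + b

def fc_backward(ograd, x, w):
    # First-order gradients; being linear in ograd, x and w, they can be
    # differentiated again for the higher-order case.
    return ograd @ w, ograd.T @ x, ograd.sum(axis=0)   # dx, dw, db

rng = np.random.default_rng(0)
x, w, b = rng.normal(size=(4, 3)), rng.normal(size=(2, 3)), rng.normal(size=2)
y = fc_forward(x, w, b)
dx, dw, db = fc_backward(np.ones_like(y), x, w)
# Finite-difference check of dx[0, 0] against d(sum y)/d x[0, 0]:
eps = 1e-6
x2 = x.copy()
x2[0, 0] += eps
fd = (fc_forward(x2, w, b).sum() - y.sum()) / eps
```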



[incubator-mxnet] branch master updated (35ef45c -> 985a4ca)

2019-09-24 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 35ef45c  Fix lack of dylib support in Makefile when use lapack (#15813)
 add 985a4ca  Update KL Divergence formula (#16170)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/loss.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
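The one-line change fixes the formula shown in `KLDivLoss`'s docstring. For reference, the quantity in question, with the prediction supplied in log-space as the Gluon loss expects (a sketch of the math, not the Gluon API):

```python
import numpy as np

def kl_divergence(p, log_q):
    # KL(p || q) = sum_i p_i * (log p_i - log q_i), with q passed as log q.
    p = np.asarray(p, dtype=np.float64)
    return np.sum(p * (np.log(p) - log_q))

p = np.array([0.4, 0.6])
q = np.array([0.5, 0.5])
d = kl_divergence(p, np.log(q))              # small positive number
```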



[incubator-mxnet] branch master updated (995b477 -> 1e058a3)

2019-09-18 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 995b477  Fix README Build Status (#16183)
 add 1e058a3  add exception check for numpy reshape (#16180)

No new revisions were added by this update.

Summary of changes:
 tests/python/unittest/test_exc_handling.py | 9 +
 1 file changed, 9 insertions(+)



[incubator-mxnet] branch master updated (8cc3443 -> 956cfa3)

2019-09-18 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 8cc3443  adding codeowners (#16165)
 add 956cfa3  assert_allclose -> rtol=1e-10 (#16198)

No new revisions were added by this update.

Summary of changes:
 tests/python/unittest/test_ndarray.py | 33 +
 1 file changed, 25 insertions(+), 8 deletions(-)



[incubator-mxnet] branch master updated (18d145c -> 5ed5689)

2019-09-16 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 18d145c  use 1E-4 in groupnorm test(#16169)
 add 5ed5689  numpy operator ravel, derive from reshape (#16016)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py  | 56 +-
 python/mxnet/numpy/multiarray.py   | 49 -
 python/mxnet/symbol/numpy/_symbol.py   | 42 -
 src/operator/numpy/np_matrix_op.cc |  1 +
 tests/python/unittest/test_numpy_op.py | 33 
 5 files changed, 178 insertions(+), 3 deletions(-)
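As the subject says, the new operator derives `ravel` from `reshape`: flattening is just a reshape to a single axis. The equivalence in NumPy terms (a sketch, not the MXNet implementation):

```python
import numpy as np

def ravel_via_reshape(x):
    # Flattening == reshaping to one axis; no separate kernel is needed.
    return x.reshape(-1)

x = np.arange(12).reshape(3, 4)
flat = ravel_via_reshape(x)
```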



[incubator-mxnet] branch master updated: [Numpy] Numpy copysign (#15851)

2019-09-15 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 90091b1  [Numpy] Numpy copysign (#15851)
90091b1 is described below

commit 90091b155d6f53c070e3c406f9edc69f38d02e96
Author: Haozheng Fan 
AuthorDate: Mon Sep 16 02:57:51 2019 +0800

[Numpy] Numpy copysign (#15851)

* add numpy compatible copysign

* fix scalar op registration error

* add test
---
 python/mxnet/ndarray/numpy/_op.py  |  53 -
 python/mxnet/numpy/multiarray.py   |  53 -
 python/mxnet/symbol/numpy/_symbol.py   |  36 -
 src/operator/mshadow_op.h  |  10 +++
 src/operator/numpy/np_elemwise_broadcast_op.cc |  36 +
 src/operator/numpy/np_elemwise_broadcast_op.cu |  21 +
 src/operator/operator_tune.cc  |   5 ++
 tests/python/unittest/test_numpy_op.py | 105 +
 8 files changed, 316 insertions(+), 3 deletions(-)

diff --git a/python/mxnet/ndarray/numpy/_op.py 
b/python/mxnet/ndarray/numpy/_op.py
index 671345c..b8e4f3f 100644
--- a/python/mxnet/ndarray/numpy/_op.py
+++ b/python/mxnet/ndarray/numpy/_op.py
@@ -33,7 +33,7 @@ __all__ = ['zeros', 'ones', 'full', 'add', 'subtract', 
'multiply', 'divide', 'mo
'rint', 'radians', 'reciprocal', 'square', 'negative', 'fix', 
'ceil', 'floor',
'trunc', 'logical_not', 'arcsinh', 'arccosh', 'arctanh', 
'tensordot',
'linspace', 'expand_dims', 'tile', 'arange', 'split', 
'concatenate', 'stack', 'mean',
-   'maximum', 'minimum', 'swapaxes', 'clip', 'argmax', 'std', 'var', 
'indices']
+   'maximum', 'minimum', 'swapaxes', 'clip', 'argmax', 'std', 'var', 
'indices', 'copysign']
 
 
 @set_module('mxnet.ndarray.numpy')
@@ -2432,3 +2432,54 @@ def indices(dimensions, dtype=_np.int32, ctx=None):
 else:
 raise ValueError("The dimensions must be sequence of ints")
 # pylint: enable=redefined-outer-name
+
+
+@set_module('mxnet.ndarray.numpy')
+def copysign(x1, x2, out=None):
+r"""copysign(x1, x2, out=None)
+
+Change the sign of x1 to that of x2, element-wise.
+
+If `x2` is a scalar, its sign will be copied to all elements of `x1`.
+
+Parameters
+--
+x1 : ndarray or scalar
+Values to change the sign of.
+x2 : ndarray or scalar
+The sign of `x2` is copied to `x1`.
+out : ndarray or None, optional
+A location into which the result is stored. It must be of the
+right shape and right type to hold the output. If not provided
+or `None`, a freshly-allocated array is returned.
+
+Returns
+---
+out : ndarray or scalar
+The values of `x1` with the sign of `x2`.
+This is a scalar if both `x1` and `x2` are scalars.
+
+Notes
+---
+This function differs from the original `numpy.copysign
+
<https://docs.scipy.org/doc/numpy/reference/generated/numpy.copysign.html>`_ in
+the following aspects:
+
+- ``where`` param is not supported.
+
+Examples
+
+>>> np.copysign(1.3, -1)
+-1.3
+>>> 1/np.copysign(0, 1)
+inf
+>>> 1/np.copysign(0, -1)
+-inf
+
+>>> a = np.array([-1, 0, 1])
+>>> np.copysign(a, -1.1)
+array([-1., -0., -1.])
+>>> np.copysign(a, np.arange(3)-1)
+array([-1.,  0.,  1.])
+"""
+return _ufunc_helper(x1, x2, _npi.copysign, _np.copysign, 
_npi.copysign_scalar, _npi.rcopysign_scalar, out)
diff --git a/python/mxnet/numpy/multiarray.py b/python/mxnet/numpy/multiarray.py
index 1f8aa92..632cfad 100644
--- a/python/mxnet/numpy/multiarray.py
+++ b/python/mxnet/numpy/multiarray.py
@@ -52,7 +52,7 @@ __all__ = ['ndarray', 'empty', 'array', 'zeros', 'ones', 
'full', 'add', 'subtrac
'degrees', 'log2', 'log1p', 'rint', 'radians', 'reciprocal', 
'square', 'negative',
'fix', 'ceil', 'floor', 'trunc', 'logical_not', 'arcsinh', 
'arccosh', 'arctanh',
'tensordot', 'linspace', 'expand_dims', 'tile', 'arange', 'split', 
'concatenate',
-   'stack', 'mean', 'maximum', 'minimum', 'swapaxes', 'clip', 
'argmax', 'std', 'var', 'indices']
+   'stack', 'mean', 'maximum', 'minimum', 'swapaxes', 'clip', 
'argmax', 'std', 'var', 'indices', 'copysign']
 
 # Return code for dispatching indexing function call
 _NDARRAY_UNSUPPORTED_INDEXING = -1
@@ -3935,3 +3935,54 @@ def indices(dimensions, dtype=_np.int32, ctx=None):
 """
 return _mx_nd_np.indices(dimensions=dimensions, dtype=dtype, ctx=ctx)
 # pylint: enable=redefined-outer-name
+
+
+@set_module('mxnet.numpy')
+def copysign(x1, x2, out=None):
+r"""copysign(x1, x2, out=None)
+

[incubator-mxnet] branch v1.5.x updated: Revert "Fix a memory misalignment in topk operator" (#15999)

2019-08-27 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch v1.5.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.5.x by this push:
 new 33f4de1  Revert "Fix a memory misalignment in topk operator" (#15999)
33f4de1 is described below

commit 33f4de13d3909fc356ace8ff7a5c9665a651fc63
Author: Lin Yuan 
AuthorDate: Tue Aug 27 10:28:05 2019 -0700

Revert "Fix a memory misalignment in topk operator" (#15999)

* Revert "Fix a memory misalignment in topk operator (#15948)"

This reverts commit 42746bc73e8bcb75bfcadd1398e6f71bc170fa10.
---
 src/operator/tensor/ordering_op-inl.h | 30 +++---
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/src/operator/tensor/ordering_op-inl.h 
b/src/operator/tensor/ordering_op-inl.h
index bd27441..1dda901 100644
--- a/src/operator/tensor/ordering_op-inl.h
+++ b/src/operator/tensor/ordering_op-inl.h
@@ -385,8 +385,8 @@ void TopKImpl(const RunContext &ctx,
   int axis = 0;
   bool do_transpose = false;
   bool is_ascend = false;
-  index_t k = 0;
-  size_t alignment = std::max(sizeof(DType), sizeof(index_t));
+  int k = 0;
+  size_t alignment = std::max(sizeof(DType), sizeof(int));
   mxnet::TShape target_shape;
   ParseTopKParam(src.shape_, param,
  _shape, _size, _num, , , 
_transpose, _ascend);
@@ -395,31 +395,31 @@ void TopKImpl(const RunContext ,
 << "The total element_num is " << element_num << ", but the selected 
IDType can only represent "
<< mxnet::common::MaxIntegerValue<IDType>() << " elements";
  Tensor<xpu, 3, DType> dat = src.FlatTo3D<xpu, DType>(axis, axis, s);
-  // Temp space needed by the full sorts.
-  size_t temp_size = std::max(
-  mxnet::op::SortByKeyWorkspaceSize<int, int, xpu>(src.Size()),
-  mxnet::op::SortByKeyWorkspaceSize<DType, int, xpu>(src.Size()));
-
+  size_t temp_size = 0;
+  // Temp space needed by the gpu-based full sorts.
+  temp_size = std::max(temp_size,
+mxnet::op::SortByKeyWorkspaceSize<int, int, xpu>(src.Size()));
   temp_size = std::max(temp_size,
-  mxnet::op::SortByKeyWorkspaceSize<DType, int, xpu>(src.Size()));
+mxnet::op::SortByKeyWorkspaceSize<DType, int, xpu>(src.Size()));
+  temp_size = std::max(temp_size,
+mxnet::op::SortByKeyWorkspaceSize<int, DType, xpu>(src.Size()));
   // Additional temp space for gpu full sorts for batch ids.
   temp_size += PadBytes(sizeof(int) * src.Size(), alignment);
   // Temp space for cpu sorts.
-  temp_size = std::max(temp_size, sizeof(DType) * src.Size());
-
+  temp_size = std::max(temp_size, static_cast<size_t>(sizeof(DType) * src.Size()));
   size_t workspace_size = temp_size + PadBytes(sizeof(DType) * src.Size(), 
alignment)
 + PadBytes(sizeof(int) * src.Size(), 
alignment);
   if (param.ret_typ == topk_enum::kReturnMask) {
-workspace_size += PadBytes(sizeof(index_t) * batch_size * k, alignment);
+workspace_size += PadBytes(sizeof(int) * batch_size * k, alignment);
   }
  workspace = resource.get_space_typed<xpu, 1, char>(Shape1(workspace_size), s);
   char* workspace_curr_ptr = workspace.dptr_;
  sorted_dat = Tensor<xpu, 1, DType>(reinterpret_cast<DType*>(workspace_curr_ptr),
-  Shape1(src.Size()), s);  // contain sorted dat
+  Shape1(src.Size()), s);  // contain sorted dat
   workspace_curr_ptr += PadBytes(sizeof(DType) * src.Size(), alignment);
-  indices = Tensor<xpu, 1, index_t>(reinterpret_cast<index_t*>(workspace_curr_ptr),
-  Shape1(src.Size()), s);  // indices in the original matrix
-  workspace_curr_ptr += PadBytes(sizeof(index_t) * src.Size(), alignment);
+  indices = Tensor<xpu, 1, int>(reinterpret_cast<int*>(workspace_curr_ptr),
+Shape1(src.Size()), s);  // indices in the original matrix
+  workspace_curr_ptr += PadBytes(sizeof(int) * src.Size(), alignment);
 
   if (param.ret_typ == topk_enum::kReturnMask) {
sel_indices = Tensor<xpu, 1, int>(reinterpret_cast<int*>(workspace_curr_ptr),
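The workspace carving above leans on the `PadBytes` helper so every sub-buffer starts on an address aligned for the widest element type. The arithmetic is just round-up-to-a-multiple (a sketch of the helper, not the C++ code):

```python
def pad_bytes(nbytes, alignment):
    # Round nbytes up to the next multiple of alignment; buffers carved out
    # of one workspace then all start on alignment-friendly offsets.
    return (nbytes + alignment - 1) // alignment * alignment
```

With `alignment = max(sizeof(DType), sizeof(int))`, each `workspace_curr_ptr += PadBytes(...)` step keeps the next buffer aligned, which is exactly what the misalignment fix and this revert are negotiating.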



[incubator-mxnet] branch master updated (1eb1925 -> cba7c4e)

2019-08-23 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 1eb1925  remove Julia cat image for license issue (#15964)
 add cba7c4e  Add fp16 support for topk (#15560)

No new revisions were added by this update.

Summary of changes:
 src/operator/tensor/ordering_op-inl.h  | 22 ---
 src/operator/tensor/sort_op-inl.cuh| 68 +++---
 tests/python/unittest/test_operator.py | 64 +++-
 3 files changed, 92 insertions(+), 62 deletions(-)
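What the change enables, behaviorally: top-k now accepts float16 input and returns float16 output. A dtype-preserving NumPy sketch of the operation (not the MXNet kernel):

```python
import numpy as np

def topk(x, k):
    # Largest k values per row, sorted descending; the input dtype
    # (e.g. float16) is preserved end to end.
    idx = np.argpartition(x, -k, axis=-1)[..., -k:]
    part = np.take_along_axis(x, idx, axis=-1)
    order = np.argsort(-part, axis=-1)
    return np.take_along_axis(part, order, axis=-1)

x = np.array([[1, 5, 3, 2]], dtype=np.float16)
res = topk(x, 2)
```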



[incubator-mxnet] branch master updated (fade159 -> 73a692e)

2019-08-22 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from fade159  Fix get_rows_per_block (#15979)
 add 73a692e  Fix a memory misalignment in topk operator (#15948)

No new revisions were added by this update.

Summary of changes:
 3rdparty/mshadow/mshadow/tensor.h |  6 +++---
 src/operator/tensor/ordering_op-inl.h | 24 
 2 files changed, 15 insertions(+), 15 deletions(-)



[incubator-mxnet] branch master updated: Group Normalization (#14959)

2019-07-18 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new eec0fb4  Group Normalization (#14959)
eec0fb4 is described below

commit eec0fb4eda40f4fb8222a8d93d8face454aead09
Author: Hao Jin 
AuthorDate: Thu Jul 18 22:44:34 2019 -0700

Group Normalization (#14959)

* GroupNorm

* add to amp list

* re-write forward
---
 python/mxnet/contrib/amp/lists/symbol.py |   1 +
 python/mxnet/gluon/nn/basic_layers.py|  91 +++-
 src/operator/nn/group_norm-inl.h | 347 +++
 src/operator/nn/group_norm.cc| 131 
 src/operator/nn/group_norm.cu|  37 
 tests/python/unittest/test_gluon.py  |   9 +
 tests/python/unittest/test_operator.py   |  91 
 7 files changed, 706 insertions(+), 1 deletion(-)

diff --git a/python/mxnet/contrib/amp/lists/symbol.py 
b/python/mxnet/contrib/amp/lists/symbol.py
index 9a587df..c6cc3d1 100644
--- a/python/mxnet/contrib/amp/lists/symbol.py
+++ b/python/mxnet/contrib/amp/lists/symbol.py
@@ -471,6 +471,7 @@ FP32_FUNCS = [
 'log_softmax',
 'InstanceNorm',
 'LayerNorm',
+'GroupNorm',
 'L2Normalization',
 'LRN',
 'SoftmaxActivation',
diff --git a/python/mxnet/gluon/nn/basic_layers.py 
b/python/mxnet/gluon/nn/basic_layers.py
index 3d6976c..b1482ce 100644
--- a/python/mxnet/gluon/nn/basic_layers.py
+++ b/python/mxnet/gluon/nn/basic_layers.py
@@ -19,7 +19,8 @@
 # pylint: disable= arguments-differ
 """Basic neural network layers."""
 __all__ = ['Sequential', 'HybridSequential', 'Dense', 'Dropout', 'Embedding',
-   'BatchNorm', 'InstanceNorm', 'LayerNorm', 'Flatten', 'Lambda', 
'HybridLambda']
+   'BatchNorm', 'InstanceNorm', 'LayerNorm', 'GroupNorm',
+   'Flatten', 'Lambda', 'HybridLambda']
 import warnings
 import numpy as np
 
@@ -616,6 +617,94 @@ class LayerNorm(HybridBlock):
for k, v in self._kwargs.items()]))
 
 
+class GroupNorm(HybridBlock):
+r"""
+Applies group normalization to the n-dimensional input array.
+This operator takes an n-dimensional input array where the leftmost 2 axes are
+`batch` and `channel` respectively:
+
+.. math::
+
+  x = x.reshape((N, num_groups, C // num_groups, ...))
+  axis = (2, ...)
+  out = \frac{x - mean[x, axis]}{ \sqrt{Var[x, axis] + \epsilon}} * gamma + beta
+
+Parameters
+--
+num_groups: int, default 1
+Number of groups to separate the channel axis into.
+epsilon: float, default 1e-5
+Small float added to variance to avoid dividing by zero.
+center: bool, default True
+If True, add offset of `beta` to normalized tensor.
+If False, `beta` is ignored.
+scale: bool, default True
+If True, multiply by `gamma`. If False, `gamma` is not used.
+beta_initializer: str or `Initializer`, default 'zeros'
+Initializer for the beta weight.
+gamma_initializer: str or `Initializer`, default 'ones'
+Initializer for the gamma weight.
+
+
+Inputs:
+- **data**: input tensor with shape (N, C, ...).
+
+Outputs:
+- **out**: output tensor with the same shape as `data`.
+
+References
+--
+`Group Normalization
+<https://arxiv.org/pdf/1803.08494.pdf>`_
+
+Examples
+
+>>> # Input of shape (2, 3, 4)
+>>> x = mx.nd.array([[[ 0,  1,  2,  3],
+  [ 4,  5,  6,  7],
+  [ 8,  9, 10, 11]],
+ [[12, 13, 14, 15],
+  [16, 17, 18, 19],
+  [20, 21, 22, 23]]])
+>>> # Group normalization is calculated with the above formula
+>>> layer = GroupNorm()
+>>> layer.initialize(ctx=mx.cpu(0))
+>>> layer(x)
+[[[-1.5932543 -1.3035717 -1.0138891 -0.7242065]
+  [-0.4345239 -0.1448413  0.1448413  0.4345239]
+  [ 0.7242065  1.0138891  1.3035717  1.5932543]]
+ [[-1.5932543 -1.3035717 -1.0138891 -0.7242065]
+  [-0.4345239 -0.1448413  0.1448413  0.4345239]
+  [ 0.7242065  1.0138891  1.3035717  1.5932543]]]
+
+"""
+def __init__(self, num_groups=1, epsilon=1e-5, center=True, scale=True,
+ beta_initializer='zeros', gamma_initializer='ones',
+ prefix=None, params=None):
+super(GroupNorm, self).__init__(prefix=prefix, params=params)
+self._kwargs = {'eps': epsilon, 'num_groups': num_groups, 'center': 
center, 'scale': scale}
+self._num_groups = num_groups
+self._epsilon = epsilon
+self._center = center
+self._scale = scale
+s
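The diff truncates above; the normalization itself is compact enough to state in full. A NumPy reference for the formula in the docstring, reproducing its example values (a sketch of the math, not the Gluon layer, and without the learned gamma/beta):

```python
import numpy as np

def group_norm(x, num_groups=1, eps=1e-5):
    # Reshape to (N, G, C//G, ...) and normalize over all axes but (N, G).
    n, c = x.shape[:2]
    g = x.reshape((n, num_groups, c // num_groups) + x.shape[2:])
    axes = tuple(range(2, g.ndim))
    mean = g.mean(axis=axes, keepdims=True)
    var = g.var(axis=axes, keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(x.shape)

x = np.arange(24, dtype=np.float64).reshape(2, 3, 4)
out = group_norm(x)                          # matches the docstring example
```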

[incubator-mxnet] branch master updated: PDF operators for each distribution for which we have a random sampler (plus also the PDF of the Dirichlet). Supports probabilities and log-probabilities, as well as gradients. (#14617)

2019-07-18 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new b887c06  PDF operators for each distribution for which we have a 
random sampler (plus also the PDF of the Dirichlet).  Supports probabilities 
and log-probabilities, as well as gradients. (#14617)
b887c06 is described below

commit b887c06f6c74e64fca668dbf2a69b67ba2a197d3
Author: david-seiler <1927983+david-sei...@users.noreply.github.com>
AuthorDate: Fri Jul 19 07:41:58 2019 +0200

PDF operators for each distribution for which we have a random sampler 
(plus also the PDF of the Dirichlet).  Supports probabilities and 
log-probabilities, as well as gradients. (#14617)
---
 python/mxnet/contrib/amp/lists/symbol.py |  17 +
 src/operator/random/pdf_op.cc| 319 
 src/operator/random/pdf_op.cu|  48 +++
 src/operator/random/pdf_op.h | 622 +++
 tests/python/unittest/test_random.py | 111 +-
 5 files changed, 1110 insertions(+), 7 deletions(-)

diff --git a/python/mxnet/contrib/amp/lists/symbol.py 
b/python/mxnet/contrib/amp/lists/symbol.py
index 066618b..9a587df 100644
--- a/python/mxnet/contrib/amp/lists/symbol.py
+++ b/python/mxnet/contrib/amp/lists/symbol.py
@@ -600,6 +600,23 @@ WIDEST_TYPE_CASTS = [
 '_sparse_elemwise_mul',
 '_sparse_elemwise_sub',
 '_sparse_sum',
+
+'random_pdf_gamma',
+'random_pdf_exponential',
+'random_pdf_uniform',
+'random_pdf_negative_binomial',
+'random_pdf_generalized_negative_binomial',
+'random_pdf_dirichlet',
+'random_pdf_normal',
+'random_pdf_poisson',
+'_random_pdf_gamma',
+'_random_pdf_exponential',
+'_random_pdf_uniform',
+'_random_pdf_negative_binomial',
+'_random_pdf_generalized_negative_binomial',
+'_random_pdf_dirichlet',
+'_random_pdf_normal',
+'_random_pdf_poisson',
 ]
 
 LOSS_OUTPUT_FUNCTIONS = [
diff --git a/src/operator/random/pdf_op.cc b/src/operator/random/pdf_op.cc
new file mode 100644
index 000..070ca81
--- /dev/null
+++ b/src/operator/random/pdf_op.cc
@@ -0,0 +1,319 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file pdf_op.cc
+ * \brief CPU-operators for computing the pdf of random distributions. 
+ */
+
+#include "./pdf_op.h"
+
+namespace mxnet {
+namespace op {
+
+DMLC_REGISTER_PARAMETER(PdfParam);
+
+#define MXNET_OPERATOR_REGISTER_PDF(distr, pdffunc, num_parms, \
+parm_name_1, parm_name_2, \
+parm_desc_1, parm_desc_2, \
+description, vectorparms) \
+  NNVM_REGISTER_OP(_random_pdf_##distr) \
+  .add_alias("random_pdf_" #distr) \
+  .describe(description()+std::string(ADD_FILELINE)) \
+  .set_num_inputs(num_parms+1) \
+  .set_num_outputs(1) \
+  .set_attr_parser(ParamParser<PdfParam>) \
+  .set_attr<nnvm::FListInputNames>("FListInputNames", \
+[](const NodeAttrs& attrs) { \
+  std::vector<std::string> v = {"sample", parm_name_1, parm_name_2}; \
+  v.resize(num_parms+1); \
+  return v; \
+}) \
+  .set_attr("FInferShape", PdfOpShape) \
+  .set_attr("FInferType", ElemwiseType) \
+  .set_attr("FCompute", PdfOpForward) \
+  .set_attr("FGradient", 
ElemwiseGradUseInOut{"_backward_pdf_" #distr}) \
+  .add_argument("sample", "NDArray-or-Symbol", "Samples from the 
distributions.") \
+  .add_argument(parm_name_1, "NDArray-or-Symbol", parm_desc_1) \
+  .add_arguments(PdfParam::__FIELDS__())
+
+#define MXNET_OPERATOR_REGISTER_PDF_GRAD(distr, pdffunc, num_parms, 
vectorparms) \
+  NNVM_REGISTER_OP(_backward_pdf_##distr) \
+  .set_num_inputs(num_parms+3) \
+  .set_num_outputs(num_parms+1) \
+  .set_attr_parser(ParamParser<PdfParam>) \
+  .set_attr("FInplaceOption", [](const NodeAttrs& attrs) 
\
+{ std::vector<std::pair<int, int> > v = {{1, 0}, {2, 1}, {3, 2}}; \
+v.resize(num_parms+1); 

[incubator-mxnet] branch master updated (2565fa2 -> 9c5acb4)

2019-07-12 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 2565fa2  fix nightly CI failure (#15452)
 add 9c5acb4  Accelerate ROIPooling layer (#14894)

No new revisions were added by this update.

Summary of changes:
 src/operator/roi_pooling-inl.h |  11 +++-
 src/operator/roi_pooling.cc| 128 ++---
 src/operator/roi_pooling.cu| 127 ++--
 3 files changed, 66 insertions(+), 200 deletions(-)



[incubator-mxnet] branch master updated: [MXNET-80] Fix average pooling kernel size assignment error (#10000)

2018-04-09 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 1e532bf  [MXNET-80] Fix average pooling kernel size assignment error 
(#10000)
1e532bf is described below

commit 1e532bf2bc9c9bd51698ac61e89569828bea646d
Author: CoinCheung <867153...@qq.com>
AuthorDate: Tue Apr 10 02:49:24 2018 +0800

[MXNET-80] Fix average pooling kernel size assignment error (#10000)

* fix average pooling kernel size assignment error

modify white space and other format errors

remove wrap line whitespace format error

remove whitespace at the end of line183

change error message

add default pooling type to pool_enum::kMaxPooling

add pooling without kernel test cases

adjust pooling parameter order and add associated test points

remove wrong error test points

ignore kernel size check if global_pool is assigned to be true

modify whitespace

line length adjust

adjust linelength

finally learned to use cpplint

switch off all shape checks if global_pool is assigned

parse parameter when global_pool used

modify pooling shape inference logic

change a way to infer pooling shape

add push oshape

change kernel shape

prepare pooling parameter shapes

check lint

pooling parameters preparation

modify kernel shape computation method

modify a bit pooling_v1

more modification of pooling_v1

remove "avg pool"

tiny changes

change pooling args order back

use size_t instead of int

use changed order and only try tiny changes

try no kernel indicated to python interface with original order

useless modify for recommit

* no order change and test kernel=

* change order
---
 src/operator/nn/pooling-inl.h |  49 +++
 src/operator/nn/pooling.cc| 155 +-
 src/operator/pooling_v1-inl.h |  79 +
 tests/python/gpu/test_operator_gpu.py |  29 +++
 4 files changed, 179 insertions(+), 133 deletions(-)

diff --git a/src/operator/nn/pooling-inl.h b/src/operator/nn/pooling-inl.h
index 15709e5..a390dd0 100644
--- a/src/operator/nn/pooling-inl.h
+++ b/src/operator/nn/pooling-inl.h
@@ -50,22 +50,22 @@ struct PoolingParam : public dmlc::Parameter<PoolingParam> {
   bool global_pool;
   bool cudnn_off;
   DMLC_DECLARE_PARAMETER(PoolingParam) {
-DMLC_DECLARE_FIELD(global_pool).set_default(false)
-.describe("Ignore kernel size, do global pooling based on current input 
feature map. ");
-
-DMLC_DECLARE_FIELD(cudnn_off).set_default(false)
-.describe("Turn off cudnn pooling and use MXNet pooling operator. ");
-
-DMLC_DECLARE_FIELD(kernel)
+DMLC_DECLARE_FIELD(kernel).set_default(TShape())  // add default value here
 .enforce_nonzero()
 .describe("Pooling kernel size: (y, x) or (d, y, x)");
 
-DMLC_DECLARE_FIELD(pool_type)
+DMLC_DECLARE_FIELD(pool_type).set_default(pool_enum::kMaxPooling)  // add 
default pooling method
 .add_enum("max", pool_enum::kMaxPooling)
 .add_enum("avg", pool_enum::kAvgPooling)
 .add_enum("sum", pool_enum::kSumPooling)
 .describe("Pooling type to be applied.");
 
+DMLC_DECLARE_FIELD(global_pool).set_default(false)
+.describe("Ignore kernel size, do global pooling based on current input 
feature map. ");
+
+DMLC_DECLARE_FIELD(cudnn_off).set_default(false)
+.describe("Turn off cudnn pooling and use MXNet pooling operator. ");
+
 DMLC_DECLARE_FIELD(pooling_convention).set_default(pool_enum::kValid)
 .add_enum("full", pool_enum::kFull)
 .add_enum("valid", pool_enum::kValid)
@@ -132,19 +132,23 @@ class PoolingOp {
 using namespace mshadow;
    Stream<xpu> *s = ctx.get_stream<xpu>();
 const TShape& ishape = in_data.shape_;
+TShape kernel = param_.kernel;
 TShape padding = param_.pad;
+TShape stride = param_.stride;
 if (param_.global_pool) {
-  for (index_t i = 0; i < padding.ndim(); i++) {
+  kernel = TShape(ishape.data() + 2,
+   ishape.data() + ishape.ndim());
+  padding = TShape(ishape.ndim() - 2);
+  for (index_t i = 0; i < ishape.ndim() - 2; i++) {
 padding[i] = 0;
   }
+  stride = TShape(ishape.ndim() - 2);
 }
 
    pool(s, in_data.dptr<DType>(), in_data.shape_, out_data.shape_,
- param_.global_pool?
-   TShape(ishape.data()+ishape.ndim()-param_.kernel.ndim(), 
ishape.data()+ishape.ndim())
-   : param_.kernel,
+ kernel,
  padding,
- param_.global_pool? TSha
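The effect of the `global_pool` branch above: when global pooling is requested, the kernel is derived from the input's spatial shape, so the user need not specify one. For average pooling that degenerates to a mean over the spatial axes (a NumPy sketch of the global case only, not the MXNet operator):

```python
import numpy as np

def global_avg_pool(x):
    # Kernel := full spatial extent of the input, i.e. x.shape[2:], so the
    # output has spatial size 1 along every pooled axis.
    return x.mean(axis=tuple(range(2, x.ndim)), keepdims=True)

x = np.arange(16, dtype=np.float64).reshape(1, 1, 4, 4)
out = global_avg_pool(x)                     # shape (1, 1, 1, 1)
```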

[incubator-mxnet] branch master updated: Fix windows setup doc using VS 2017 (#10363)

2018-04-04 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 28cb133  Fix windows setup doc using VS 2017 (#10363)
28cb133 is described below

commit 28cb133ef0db27b9e8c809c6d10aab90ecb9e465
Author: cgwang <wangcg@gmail.com>
AuthorDate: Wed Apr 4 18:15:05 2018 -0700

Fix windows setup doc using VS 2017 (#10363)

update windows gpu setup
---
 docs/install/index.md | 64 ---
 1 file changed, 61 insertions(+), 3 deletions(-)

diff --git a/docs/install/index.md b/docs/install/index.md
index d9d78dd..da68745 100644
--- a/docs/install/index.md
+++ b/docs/install/index.md
@@ -992,7 +992,67 @@ Refer to 
[#8671](https://github.com/apache/incubator-mxnet/issues/8671) for stat
 
 
 
-To build and install MXNet yourself, you need the following dependencies. 
Install the required dependencies:
+
+We provide two options to build and install MXNet yourself, using either [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/) or [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/).
+
+**Option 1** 
+
+To build and install MXNet yourself using [Microsoft Visual Studio 
2017](https://www.visualstudio.com/downloads/), you need the following 
dependencies. Install the required dependencies:
+
+1. If [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/) is not already installed, download and install it; the free Community edition is sufficient.
+2. Download and install 
[CMake](https://cmake.org/files/v3.11/cmake-3.11.0-rc4-win64-x64.msi) if it is 
not already installed.
+3. Download and install 
[OpenCV](https://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.4.1/opencv-3.4.1-vc14_vc15.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (e.g., ```OpenCV_DIR = C:\utils\opencv\build```).
+6. If you don’t have the Intel Math Kernel Library (MKL) installed, download 
and install 
[OpenBlas](https://sourceforge.net/projects/openblas/files/v0.2.20/OpenBLAS%200.2.20%20version.zip/download).
+7. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories (e.g., ```OpenBLAS_HOME = C:\utils\OpenBLAS```).
+8. Download and install [CUDA](https://developer.nvidia.com/cuda-downloads?target_os=Windows_arch=x86_64_version=10_type=exelocal), using the base installer (e.g., ```cuda_9.1.85_win10.exe```).
+9. Download and install cuDNN. To get access to the download link, register as an NVIDIA community user, then follow the [link](http://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#install-windows) to install cuDNN.
+10. Download and install [git](https://git-for-windows.github.io/).
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Start ```cmd``` in Windows.
+
+2. Download the MXNet source code from GitHub by using the following command:
+
+```bat
+cd C:\
+git clone https://github.com/apache/incubator-mxnet.git --recursive
+```
+
+3. Follow [this link](https://docs.microsoft.com/en-us/visualstudio/install/modify-visual-studio) to modify the ```Individual components```: check ```VC++ 2017 version 15.4 v14.11 toolset``` and click ```Modify```.
+
+4. Switch Visual Studio 2017 to the v14.11 toolset using the following command (by default, VS2017 is installed in the path shown below):
+
+```bat
+"C:\Program Files (x86)\Microsoft Visual 
Studio\2017\Community\VC\Auxiliary\Build\vcvars64.bat" -vcvars_ver=14.11
+```
+
+5. Create a build directory and change into it, for example:
+
+```bat
+mkdir C:\build
+cd C:\build
+```
+
+6. Configure the MXNet build with CMake by using the following command:
+
+```bat
+cmake -G "Visual Studio 15 2017 Win64" -T cuda=9.1,host=x64 -DUSE_CUDA=1 
-DUSE_CUDNN=1 -DUSE_NVRTC=1 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_BLAS=open 
-DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_LIST=Common -DCUDA_TOOLSET=9.1 
-DCUDNN_INCLUDE=C:\cuda\include -DCUDNN_LIBRARY=C:\cuda\lib\x64\cudnn.lib 
"C:\incubator-mxnet"
+```
+
+NOTE: make sure DCUDNN_INCLUDE and DCUDNN_LIBRARY point to the "include" directory and "cudnn.lib" file of your cuDNN install location, and that ```C:\incubator-mxnet``` is the location of the source code you cloned in the previous step.
+
+7. After CMake has completed successfully, compile the MXNet source code by using the following command:
+
+```bat
+msbuild mxnet.sln /p:Configuration=Release;Platform=x64 /maxcpucount
+```
+
+**Option 2** 
+
+To build and install MXNet yourself using [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older

[incubator-mxnet] branch master updated: fix word language model script and readme (#10225)

2018-03-24 Thread sxjscience

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 200f541  fix word language model script and readme (#10225)
200f541 is described below

commit 200f541b88f5b0749f9edba96e3f15ebbc05731e
Author: Sheng Zha <s...@users.noreply.github.com>
AuthorDate: Sat Mar 24 12:48:36 2018 -0700

fix word language model script and readme (#10225)

* fix word language model script and readme

* update performance
---
 example/gluon/word_language_model/README.md   | 74 ++-
 example/gluon/word_language_model/get_ptb_data.sh | 43 -
 example/gluon/word_language_model/train.py| 11 +++-
 3 files changed, 41 insertions(+), 87 deletions(-)

diff --git a/example/gluon/word_language_model/README.md 
b/example/gluon/word_language_model/README.md
index ff8ea56..f99a3a6 100644
--- a/example/gluon/word_language_model/README.md
+++ b/example/gluon/word_language_model/README.md
@@ -1,32 +1,18 @@
 # Word-level language modeling RNN
 
-This example trains a multi-layer RNN (Elman, GRU, or LSTM) on Penn Treebank 
(PTB) language modeling benchmark.
+This example trains a multi-layer RNN (Elman, GRU, or LSTM) on WikiText-2 
language modeling benchmark.
 
-The model obtains the state-of-the-art result on PTB using LSTM, getting a 
test perplexity of ~72.
-And ~97 ppl in WikiText-2, outperform than basic LSTM(99.3) and reach 
Variational LSTM(96.3).
+The model obtains ~107 ppl in WikiText-2 using LSTM.
 
-The following techniques have been adopted for SOTA results: 
+The following techniques have been adopted for SOTA results:
 - [LSTM for LM](https://arxiv.org/pdf/1409.2329.pdf)
 - [Weight tying](https://arxiv.org/abs/1608.05859) between word vectors and 
softmax output embeddings
 
 ## Data
 
-### PTB
-
-The PTB data is the processed version from [(Mikolov et al, 
2010)](http://www.fit.vutbr.cz/research/groups/speech/publi/2010/mikolov_interspeech2010_IS100722.pdf):
-
-```bash
-bash get_ptb_data.sh
-python data.py
-```
-
 ### Wiki Text
 
-The wikitext-2 data is downloaded from [(The wikitext long term dependency 
language modeling 
dataset)](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/):
-
-```bash
-bash get_wikitext2_data.sh
-```
+The wikitext-2 data is from [(The wikitext long term dependency language 
modeling 
dataset)](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/).
 The training script automatically loads the dataset into `$PWD/data`.
 
 
 ## Usage
@@ -34,12 +20,7 @@ bash get_wikitext2_data.sh
 Example runs and the results:
 
 ```
-python train.py -data ./data/ptb. --cuda --tied --nhid 650 --emsize 650 
--dropout 0.5# Test ppl of 75.3 in ptb
-python train.py -data ./data/ptb. --cuda --tied --nhid 1500 --emsize 1500 
--dropout 0.65  # Test ppl of 72.0 in ptb
-```
-
-```
-python train.py -data ./data/wikitext-2/wiki. --cuda --tied --nhid 256 
--emsize 256  # Test ppl of 97.07 in wikitext-2 
+python train.py --cuda --tied --nhid 256 --emsize 256  # Test ppl of 
106.9 in wikitext-2
 ```
 
 
@@ -47,21 +28,32 @@ python train.py -data ./data/wikitext-2/wiki. --cuda --tied 
--nhid 256 --emsize
 
 `python train.py --help` gives the following arguments:
 ```
-Optional arguments:
-  -h, --help show this help message and exit
-  --data DATAlocation of the data corpus
-  --model MODEL  type of recurrent net (rnn_tanh, rnn_relu, lstm, gru)
-  --emsize EMSIZEsize of word embeddings
-  --nhid NHIDnumber of hidden units per layer
-  --nlayers NLAYERS  number of layers
-  --lr LRinitial learning rate
-  --clip CLIPgradient clipping
-  --epochs EPOCHSupper epoch limit
-  --batch_size N batch size
-  --bptt BPTTsequence length
-  --dropout DROPOUT  dropout applied to layers (0 = no dropout)
-  --tied tie the word embedding and softmax weights
-  --cuda Whether to use gpu
-  --log-interval N   report interval
-  --save SAVEpath to save the final model
+usage: train.py [-h] [--model MODEL] [--emsize EMSIZE] [--nhid NHID]
+[--nlayers NLAYERS] [--lr LR] [--clip CLIP] [--epochs EPOCHS]
+[--batch_size N] [--bptt BPTT] [--dropout DROPOUT] [--tied]
+[--cuda] [--log-interval N] [--save SAVE] [--gctype GCTYPE]
+[--gcthreshold GCTHRESHOLD]
+
+MXNet Autograd RNN/LSTM Language Model on Wikitext-2.
+
+optional arguments:
+  -h, --helpshow this help message and exit
+  --model MODEL type of recurrent net (rnn_tanh, rnn_relu, lstm, gru)
+  --emsize EMSIZE   size of word embeddings
+  --nhid NHID   number of hidden units per layer
+  --nlayers N
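The test perplexity figures quoted in the README above are simply the exponential of the average per-token cross-entropy (natural log); e.g., ~107 ppl corresponds to an average loss of roughly 4.67 nats per token. A stdlib-only sketch of the conversion:

```python
import math

def perplexity(avg_nll):
    """Perplexity from the average negative log-likelihood (in nats) per token."""
    return math.exp(avg_nll)

print(perplexity(0.0))  # 1.0 — a perfect model has perplexity 1
```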

[incubator-mxnet] branch master updated: Update rnn_layer.py (#10153)

2018-03-18 Thread sxjscience

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 6e1f58d  Update rnn_layer.py (#10153)
6e1f58d is described below

commit 6e1f58d4a8eb15dea9d23ff7ae10c484bcd5e43e
Author: Sheng Zha <s...@users.noreply.github.com>
AuthorDate: Sun Mar 18 16:11:31 2018 -0700

Update rnn_layer.py (#10153)

fixes #10152
---
 python/mxnet/gluon/rnn/rnn_layer.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/python/mxnet/gluon/rnn/rnn_layer.py 
b/python/mxnet/gluon/rnn/rnn_layer.py
index 2fac399..c82e953 100644
--- a/python/mxnet/gluon/rnn/rnn_layer.py
+++ b/python/mxnet/gluon/rnn/rnn_layer.py
@@ -254,7 +254,7 @@ class RNN(_RNNLayer):
 The number of features in the hidden state h.
 num_layers: int, default 1
 Number of recurrent layers.
-activation: {'relu' or 'tanh'}, default 'tanh'
+activation: {'relu' or 'tanh'}, default 'relu'
 The activation function to use.
 layout : str, default 'TNC'
 The format of input and output tensors. T, N and C stand for

-- 
To stop receiving notification emails like this one, please contact
sxjscie...@apache.org.


[incubator-mxnet] branch master updated: [MXNET-86] Revert to pre-profile-changes copy code (#10090)

2018-03-13 Thread sxjscience

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new c9ec311  [MXNET-86] Revert to pre-profile-changes copy code (#10090)
c9ec311 is described below

commit c9ec3118688c233a66ad847003a9e8d2d09e5952
Author: Chris Olivier <cjolivie...@gmail.com>
AuthorDate: Tue Mar 13 21:36:16 2018 -0700

[MXNET-86] Revert to pre-profile-changes copy code (#10090)

* Revert to pre-profile-changes copy code

* Add test

* Trigger rebuild
---
 src/ndarray/ndarray_function.cc  | 19 ++--
 tests/python/unittest/test_gluon_data.py | 81 
 2 files changed, 85 insertions(+), 15 deletions(-)

diff --git a/src/ndarray/ndarray_function.cc b/src/ndarray/ndarray_function.cc
index 927c906..552555a 100644
--- a/src/ndarray/ndarray_function.cc
+++ b/src/ndarray/ndarray_function.cc
@@ -26,7 +26,6 @@
 #include "./ndarray_function.h"
 #include "./ndarray_function-inl.h"
 #include "../common/utils.h"
-#include "../operator/mxnet_op.h"
 
 namespace mxnet {
 namespace ndarray {
@@ -34,27 +33,17 @@ template<>
 void Copy<cpu, cpu>(const TBlob , TBlob *to,
 Context from_ctx, Context to_ctx,
 RunContext ctx) {
-  using namespace mxnet::op;
   MSHADOW_TYPE_SWITCH(to->type_flag_, DType, {
 if (to->type_flag_ == from.type_flag_) {
-  TBlob dest = to->FlatTo1D<cpu, DType>();
-  TBlob src = from.FlatTo1D<cpu, DType>();
-  const size_t size = src.Size();
-  if (dest.CheckContiguous() && src.CheckContiguous() && size >= 2 /* 
non-trivial size */) {
-CHECK_EQ(dest.shape_, src.shape_)
-  << "Copy:shape mismatch:" << dest.shape_ << " vs " << src.shape_;
-  mxnet_op::Kernel<mxnet_op::op_with_req<mshadow_op::identity, 
kWriteTo>, cpu>::Launch(
-ctx.get_stream(), src.Size(), dest.dptr(), 
src.dptr());
-  } else {
-mshadow::Copy(to->FlatTo1D<cpu, DType>(), from.FlatTo1D<cpu, DType>());
-  }
+mshadow::Copy(to->FlatTo1D<cpu, DType>(),
+  from.FlatTo1D<cpu, DType>());
 } else {
 MSHADOW_TYPE_SWITCH(from.type_flag_, SrcDType, {
 to->FlatTo1D<cpu, DType>() =
-  mshadow::expr::tcast(from.FlatTo1D<cpu, SrcDType>());
+mshadow::expr::tcast(from.FlatTo1D<cpu, SrcDType>());
 })
 }
-  });
+  })
 }
 
 template
diff --git a/tests/python/unittest/test_gluon_data.py 
b/tests/python/unittest/test_gluon_data.py
index 49b1b8e..93160aa 100644
--- a/tests/python/unittest/test_gluon_data.py
+++ b/tests/python/unittest/test_gluon_data.py
@@ -20,8 +20,14 @@ import tarfile
 import unittest
 import mxnet as mx
 import numpy as np
+import random
 from mxnet import gluon
+import platform
 from common import setup_module, with_seed
+from mxnet.gluon.data import DataLoader
+import mxnet.ndarray as nd
+from mxnet import context
+from mxnet.gluon.data.dataset import Dataset
 
 @with_seed()
 def test_array_dataset():
@@ -112,6 +118,81 @@ def test_multi_worker():
 for i, batch in enumerate(loader):
 assert (batch.asnumpy() == i).all()
 
+@with_seed()
+def test_multi_worker_forked_data_loader():
+"""
+Test should successfully run its course of multi-process/forked data 
loader without errors
+"""
+class Dummy(Dataset):
+def __init__(self, random_shape):
+self.random_shape = random_shape
+
+def __getitem__(self, idx):
+key = idx
+if self.random_shape:
+out = np.random.uniform(size=(random.randint(1000, 1100), 40))
+labels = np.random.uniform(size=(random.randint(10, 15)))
+else:
+out = np.random.uniform(size=(1000, 40))
+labels = np.random.uniform(size=(10))
+return key, out, labels
+
+def __len__(self):
+return 50
+
+def batchify(self, data):
+"""
+Collate data into batch. Use shared memory for stacking.
+
+:param data: a list of array, with layout of 'NTC'.
+:return either x  and x's unpadded lengths, or x, x's unpadded 
lengths, y and y's unpadded lengths
+if labels are not supplied.
+"""
+
+# input layout is NTC
+keys, inputs, labels = [item[0] for item in data], [item[1] for 
item in data], \
+   [item[2] for item in data]
+
+if len(data) > 1:
+max_data_len = max([seq.shape[0] for seq in 
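The `batchify` method above (truncated here) pads variable-length 'NTC' sequences to the batch's maximum length before stacking. A stdlib-only sketch of that padding step on nested lists (`pad_batch` is a hypothetical helper, not the test's actual code):

```python
def pad_batch(seqs, pad_value=0.0):
    """Pad a list of [T][C] nested lists to the max T; return (batch, lengths).

    lengths records each sequence's unpadded length, as in the docstring above.
    """
    lengths = [len(s) for s in seqs]
    max_len = max(lengths)
    channels = len(seqs[0][0])
    padded = [s + [[pad_value] * channels] * (max_len - len(s)) for s in seqs]
    return padded, lengths

batch, lengths = pad_batch([[[1.0, 2.0]], [[3.0, 4.0], [5.0, 6.0]]])
print(lengths)        # [1, 2]
print(len(batch[0]))  # 2 — padded up to the longest sequence
```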

[incubator-mxnet] branch nlp_toolkit updated: CorpusReader (#10061)

2018-03-10 Thread sxjscience

sxjscience pushed a commit to branch nlp_toolkit
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/nlp_toolkit by this push:
 new 453fc23  CorpusReader (#10061)
453fc23 is described below

commit 453fc234c78a1f888164570e2f393ed0b02219da
Author: Sheng Zha <s...@users.noreply.github.com>
AuthorDate: Sat Mar 10 14:40:50 2018 -0800

CorpusReader (#10061)

* corpus reader

* update

* delete pair
---
 python/mxnet/gluon/data/datareader.py |  2 +-
 python/mxnet/gluon/data/text/base.py  | 99 +--
 python/mxnet/gluon/data/text/lm.py| 10 +--
 python/mxnet/gluon/data/text/utils.py | 27 ++--
 tests/python/unittest/test_gluon_data_text.py | 22 +++---
 5 files changed, 87 insertions(+), 73 deletions(-)

diff --git a/python/mxnet/gluon/data/datareader.py 
b/python/mxnet/gluon/data/datareader.py
index 9b94ed4..b96575b 100644
--- a/python/mxnet/gluon/data/datareader.py
+++ b/python/mxnet/gluon/data/datareader.py
@@ -31,4 +31,4 @@ class DataReader(object):
 raise NotImplementedError
 
 def read_iter(self):
-return self.read()
+return iter(self.read())
diff --git a/python/mxnet/gluon/data/text/base.py 
b/python/mxnet/gluon/data/text/base.py
index a9fa25b..6c4cf87 100644
--- a/python/mxnet/gluon/data/text/base.py
+++ b/python/mxnet/gluon/data/text/base.py
@@ -27,12 +27,56 @@ import os
 
 from ..dataset import SimpleDataset
 from ..datareader import DataReader
-from .utils import flatten_samples, collate, pair
+from .utils import flatten_samples, collate
 
-class WordLanguageReader(DataReader):
+class CorpusReader(DataReader):
 """Text reader that reads a whole corpus and produces a dataset based on 
provided
 sample splitter and word tokenizer.
 
+The returned dataset includes samples, each of which can either be a list 
of tokens if tokenizer
+is specified, or a single string segment from the result of 
sample_splitter.
+
+Parameters
+--
+filename : str
+Path to the input text file.
+encoding : str, default 'utf8'
+File encoding format.
+flatten : bool, default False
+Whether to return all samples as flattened tokens. If True, each 
sample is a token.
+sample_splitter : function, default str.splitlines
+A function that splits the dataset string into samples.
+tokenizer : function or None, default str.split
+A function that splits each sample string into list of tokens. If 
None, raw samples are
+returned according to `sample_splitter`.
+"""
+def __init__(self, filename, encoding='utf8', flatten=False,
+ sample_splitter=lambda s: s.splitlines(),
+ tokenizer=lambda s: s.split()):
+assert sample_splitter, 'sample_splitter must be specified.'
+self._filename = os.path.expanduser(filename)
+self._encoding = encoding
+self._flatten = flatten
+self._sample_splitter = sample_splitter
+self._tokenizer = tokenizer
+
+def read(self):
+with io.open(self._filename, 'r', encoding=self._encoding) as fin:
+content = fin.read()
+samples = (s.strip() for s in self._sample_splitter(content))
+if self._tokenizer:
+samples = [self._tokenizer(s) for s in samples if s]
+if self._flatten:
+samples = flatten(samples)
+else:
+samples = [s for s in samples if s]
+return SimpleDataset(samples)
+
+
+class WordLanguageReader(CorpusReader):
+"""Text reader that reads a whole corpus and produces a language modeling 
dataset given
+the provided sample splitter and word tokenizer.
+
 The returned dataset includes data (current word) and label (next word).
 
 Parameters
@@ -46,8 +90,9 @@ class WordLanguageReader(DataReader):
 tokenizer : function, default str.split
 A function that splits each sample string into list of tokens.
 seq_len : int or None
-The length of each of the samples. If None, samples are divided 
according to
-`sample_splitter` only, and may have variable lengths.
+The length of each of the samples regardless of sample boundary.
+If None, samples are divided according to `sample_splitter` only,
+and may have variable lengths.
 bos : str or None, default None
 The token to add at the begining of each sentence. If None, nothing is 
added.
 eos : str or None, default None
@@ -61,43 +106,27 @@ class WordLanguageReader(DataReader):
 """
 def __init__(self, filename, encoding='utf8', sample_splitter=lambda s: 
s.splitlines(),
  tokenizer=lambda s: s.split(), seq_len=None, bos=None, 
eos=None, pad=None):
-sel
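The `CorpusReader.read()` method shown in the diff boils down to: split the corpus into samples, strip them, optionally tokenize, and optionally flatten to one token per sample. A stdlib-only sketch of that pipeline on an in-memory string (`read_corpus` is a hypothetical stand-in, not the Gluon API):

```python
def read_corpus(content, flatten=False,
                sample_splitter=lambda s: s.splitlines(),
                tokenizer=lambda s: s.split()):
    # split into candidate samples and strip whitespace
    samples = (s.strip() for s in sample_splitter(content))
    if tokenizer:
        # drop empty samples, then tokenize each one
        samples = [tokenizer(s) for s in samples if s]
        if flatten:
            # flatten so that each sample is a single token
            samples = [tok for sample in samples for tok in sample]
    else:
        samples = [s for s in samples if s]
    return samples

print(read_corpus("a b\n\nc d\n"))                # [['a', 'b'], ['c', 'd']]
print(read_corpus("a b\n\nc d\n", flatten=True))  # ['a', 'b', 'c', 'd']
```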

[incubator-mxnet] branch nlp_toolkit updated: gluon language modeling dataset and text token reader (#9986)

2018-03-08 Thread sxjscience

sxjscience pushed a commit to branch nlp_toolkit
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/nlp_toolkit by this push:
 new 329acde  gluon language modeling dataset and text token reader (#9986)
329acde is described below

commit 329acde5a722f7be44604dd601884592945755e1
Author: Sheng Zha <s...@users.noreply.github.com>
AuthorDate: Thu Mar 8 16:49:51 2018 -0500

gluon language modeling dataset and text token reader (#9986)

* language modeling dataset and text token reader.

* update

* add padding

* update bos insert

* update doc
---
 example/gluon/word_language_model/train.py |  44 ---
 python/mxnet/gluon/data/__init__.py|   2 +
 .../gluon/data/{__init__.py => datareader.py}  |  18 ++-
 python/mxnet/gluon/data/{ => text}/__init__.py |  10 +-
 .../gluon/data/{__init__.py => text/_constants.py} |  12 +-
 python/mxnet/gluon/data/text/base.py   | 103 +++
 python/mxnet/gluon/data/text/lm.py | 145 +
 python/mxnet/gluon/data/text/utils.py  |  73 +++
 tests/python/unittest/test_gluon_data_text.py  |  50 +++
 9 files changed, 420 insertions(+), 37 deletions(-)

diff --git a/example/gluon/word_language_model/train.py 
b/example/gluon/word_language_model/train.py
index b69fd17..c732393 100644
--- a/example/gluon/word_language_model/train.py
+++ b/example/gluon/word_language_model/train.py
@@ -16,13 +16,13 @@
 # under the License.
 
 import argparse
+import collections
 import time
 import math
 import mxnet as mx
-from mxnet import gluon, autograd
-from mxnet.gluon import contrib
+from mxnet import gluon, autograd, contrib
+from mxnet.gluon import data
 import model
-import data
 
 parser = argparse.ArgumentParser(description='MXNet Autograd RNN/LSTM Language 
Model on Wikitext-2.')
 parser.add_argument('--model', type=str, default='lstm',
@@ -71,32 +71,40 @@ if args.cuda:
 else:
 context = mx.cpu(0)
 
-train_dataset = contrib.data.text.WikiText2('./data', 'train', 
seq_len=args.bptt)
-vocab = train_dataset.vocabulary
-val_dataset, test_dataset = [contrib.data.text.WikiText2('./data', segment,
- vocab=vocab,
- seq_len=args.bptt)
- for segment in ['validation', 'test']]
+train_dataset = data.text.lm.WikiText2('./data', 'train', seq_len=args.bptt,
+   eos='')
+
+def get_frequencies(dataset):
+return collections.Counter(x for tup in dataset for x in tup[0] if x)
+
+vocab = contrib.text.vocab.Vocabulary(get_frequencies(train_dataset))
+def index_tokens(data, label):
+return vocab.to_indices(data), vocab.to_indices(label)
+
+val_dataset, test_dataset = [data.text.lm.WikiText2('./data', segment,
+seq_len=args.bptt,
+eos='')
+ for segment in ['val', 'test']]
 
 nbatch_train = len(train_dataset) // args.batch_size
-train_data = gluon.data.DataLoader(train_dataset,
+train_data = gluon.data.DataLoader(train_dataset.transform(index_tokens),
batch_size=args.batch_size,
-   
sampler=contrib.data.IntervalSampler(len(train_dataset),
-
nbatch_train),
+   
sampler=gluon.contrib.data.IntervalSampler(len(train_dataset),
+  
nbatch_train),
last_batch='discard')
 
 nbatch_val = len(val_dataset) // args.batch_size
-val_data = gluon.data.DataLoader(val_dataset,
+val_data = gluon.data.DataLoader(val_dataset.transform(index_tokens),
  batch_size=args.batch_size,
- 
sampler=contrib.data.IntervalSampler(len(val_dataset),
-  
nbatch_val),
+ 
sampler=gluon.contrib.data.IntervalSampler(len(val_dataset),
+
nbatch_val),
  last_batch='discard')
 
 nbatch_test = len(test_dataset) // args.batch_size
-test_data = gluon.data.DataLoader(test_dataset,
+test_data = gluon.data.DataLoader(test_dataset.transform(index_tokens),
   batch_size=args.batch_size,
-  
sampler=contrib.data.IntervalSampler(len(test_dataset),
-
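The DataLoaders above pair `IntervalSampler` with `last_batch='discard'` so that consecutive batches stay contiguous along the time dimension. A stdlib-only sketch of the index order such a sampler produces (`interval_indices` is a hypothetical helper for illustration, not the Gluon class):

```python
def interval_indices(length, interval):
    """Yield 0, interval, 2*interval, ..., then 1, interval+1, ..., up to length."""
    for start in range(interval):
        for idx in range(start, length, interval):
            yield idx

print(list(interval_indices(10, 5)))  # [0, 5, 1, 6, 2, 7, 3, 8, 4, 9]
```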

[incubator-mxnet] branch master updated: add axes support for dropouts in gluon (#10032)

2018-03-08 Thread sxjscience

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 649b086  add axes support for dropouts in gluon (#10032)
649b086 is described below

commit 649b08665bad016a71fa8b7a29a184d25217e335
Author: Sheng Zha <s...@users.noreply.github.com>
AuthorDate: Thu Mar 8 15:46:47 2018 -0500

add axes support for dropouts in gluon (#10032)
---
 python/mxnet/gluon/contrib/rnn/rnn_cell.py  |  8 ++
 python/mxnet/gluon/nn/basic_layers.py   |  9 ---
 python/mxnet/gluon/rnn/rnn_cell.py  | 14 ++
 tests/python/unittest/test_gluon.py | 40 +
 tests/python/unittest/test_gluon_contrib.py |  3 ---
 tests/python/unittest/test_operator.py  | 29 +++--
 6 files changed, 72 insertions(+), 31 deletions(-)

diff --git a/python/mxnet/gluon/contrib/rnn/rnn_cell.py 
b/python/mxnet/gluon/contrib/rnn/rnn_cell.py
index d6402b7..b964c71 100644
--- a/python/mxnet/gluon/contrib/rnn/rnn_cell.py
+++ b/python/mxnet/gluon/contrib/rnn/rnn_cell.py
@@ -180,16 +180,12 @@ class VariationalDropoutCell(ModifierCell):
 states = _get_begin_state(self, F, begin_state, inputs, batch_size)
 
 if self.drop_inputs:
-first_input = inputs.slice_axis(axis, 0, 1).split(1, axis=axis, 
squeeze_axis=True)
-self._initialize_input_masks(F, first_input, states)
-inputs = F.broadcast_mul(inputs, 
self.drop_inputs_mask.expand_dims(axis=axis))
+inputs = F.Dropout(inputs, p=self.drop_inputs, axes=(axis,))
 
 outputs, states = self.base_cell.unroll(length, inputs, states, 
layout, merge_outputs=True,
 valid_length=valid_length)
 if self.drop_outputs:
-first_output = outputs.slice_axis(axis, 0, 1).split(1, axis=axis, 
squeeze_axis=True)
-self._initialize_output_mask(F, first_output)
-outputs = F.broadcast_mul(outputs, 
self.drop_outputs_mask.expand_dims(axis=axis))
+outputs = F.Dropout(outputs, p=self.drop_outputs, axes=(axis,))
 merge_outputs = isinstance(outputs, tensor_types) if merge_outputs is 
None else \
 merge_outputs
 outputs, _, _, _ = _format_sequence(length, outputs, layout, 
merge_outputs)
diff --git a/python/mxnet/gluon/nn/basic_layers.py 
b/python/mxnet/gluon/nn/basic_layers.py
index b61540d..9dc1a24 100644
--- a/python/mxnet/gluon/nn/basic_layers.py
+++ b/python/mxnet/gluon/nn/basic_layers.py
@@ -226,6 +226,8 @@ class Dropout(HybridBlock):
 --
 rate : float
 Fraction of the input units to drop. Must be a number between 0 and 1.
+axes : tuple of int, default ()
+The axes on which dropout mask is shared. If empty, regular dropout is 
applied.
 
 
 Inputs:
@@ -239,15 +241,16 @@ class Dropout(HybridBlock):
 `Dropout: A Simple Way to Prevent Neural Networks from Overfitting
 <http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf>`_
 """
-def __init__(self, rate, **kwargs):
+def __init__(self, rate, axes=(), **kwargs):
 super(Dropout, self).__init__(**kwargs)
 self._rate = rate
+self._axes = axes
 
 def hybrid_forward(self, F, x):
-return F.Dropout(x, p=self._rate, name='fwd')
+return F.Dropout(x, p=self._rate, axes=self._axes, name='fwd')
 
 def __repr__(self):
-s = '{name}(p = {_rate})'
+s = '{name}(p = {_rate}, axes={_axes})'
 return s.format(name=self.__class__.__name__,
 **self.__dict__)
 
diff --git a/python/mxnet/gluon/rnn/rnn_cell.py 
b/python/mxnet/gluon/rnn/rnn_cell.py
index 61bf24e..f5c72f5 100644
--- a/python/mxnet/gluon/rnn/rnn_cell.py
+++ b/python/mxnet/gluon/rnn/rnn_cell.py
@@ -713,6 +713,8 @@ class DropoutCell(HybridRecurrentCell):
 rate : float
 Percentage of elements to drop out, which
 is 1 - percentage to retain.
+axes : tuple of int, default ()
+The axes on which dropout mask is shared. If empty, regular dropout is 
applied.
 
 
 Inputs:
@@ -723,13 +725,14 @@ class DropoutCell(HybridRecurrentCell):
 - **out**: output tensor with shape `(batch_size, size)`.
 - **next_states**: returns input `states` directly.
 """
-def __init__(self, rate, prefix=None, params=None):
+def __init__(self, rate, axes=(), prefix=None, params=None):
 super(DropoutCell, self).__init__(prefix, params)
 assert isinstance(rate, numeric_types), "rate must be a number"
-self.rate = rate
+self._rate = rate
+self._axes = axes
 
 def __repr__(self):
-s = '{name}(rate = {rate})'
+s = '{name}(rate={_rate}, axes={_axes})'
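Sharing the dropout mask along given axes, as the new `axes` argument enables, means one Bernoulli draw is broadcast across those axes instead of sampling per element. A stdlib-only sketch on nested lists (`shared_axis_dropout` is a hypothetical helper, not the Gluon layer):

```python
import random

def shared_axis_dropout(x, rate, share_axis0=False, rng=random):
    """Inverted dropout on a 2-D nested list; if share_axis0, one mask row is
    broadcast down axis 0 (the 'variational' pattern used for sequences)."""
    keep = 1.0 - rate
    if share_axis0:
        mask_row = [1.0 / keep if rng.random() < keep else 0.0 for _ in x[0]]
        mask = [mask_row] * len(x)  # same mask for every row
    else:
        mask = [[1.0 / keep if rng.random() < keep else 0.0 for _ in row]
                for row in x]
    return [[v * m for v, m in zip(row, mrow)] for row, mrow in zip(x, mask)]

random.seed(0)
out = shared_axis_dropout([[1.0, 2.0], [3.0, 4.0]], rate=0.5, share_axis0=True)
# with a shared mask, the pattern of zeroed columns repeats across rows
print([v == 0.0 for v in out[0]] == [v == 0.0 for v in out[1]])  # True
```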
 

[incubator-mxnet] branch master updated: Using "uniform" Xavier strategy to initialize the weight for VGG network (a trial solution to issue#9866) (#9867)

2018-02-27 Thread sxjscience

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 17a9c6a  Using "uniform" Xavier strategy to initialize the weight for 
VGG network (a trial solution to issue#9866) (#9867)
17a9c6a is described below

commit 17a9c6ad440139d3f87924a8e989d4da252504be
Author: Shufan <33112206+juliusshu...@users.noreply.github.com>
AuthorDate: Wed Feb 28 13:01:34 2018 +0800

Using "uniform" Xavier strategy to initialize the weight for VGG network (a 
trial solution to issue#9866) (#9867)

* Enable the reporting of cross-entropy or nll loss value during training

* Set the default value of loss as a '' to avoid a Python runtime issue 
when loss argument is not set

* Applying the Xavier with "uniform" type to initialize weight when network 
is VGG
---
 example/image-classification/common/fit.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/example/image-classification/common/fit.py 
b/example/image-classification/common/fit.py
index 0e0cd52..9412b6f 100755
--- a/example/image-classification/common/fit.py
+++ b/example/image-classification/common/fit.py
@@ -237,6 +237,9 @@ def fit(args, network, data_loader, **kwargs):
 if args.network == 'alexnet':
 # AlexNet will not converge using Xavier
 initializer = mx.init.Normal()
+# VGG will not tend to converge using Xavier-Gaussian
+elif 'vgg' in args.network:
+initializer = mx.init.Xavier()
 else:
 initializer = mx.init.Xavier(
 rnd_type='gaussian', factor_type="in", magnitude=2)
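The fix above falls back to the default `mx.init.Xavier()` for VGG, i.e. uniform sampling. MXNet's Xavier draws weights uniformly in [-c, c] with c = sqrt(magnitude / factor), where the default `factor_type='avg'` averages fan-in and fan-out and `magnitude=3`, which reduces to the classic Glorot bound sqrt(6 / (fan_in + fan_out)). A stdlib-only sketch of that bound (`xavier_bound` and `xavier_uniform` are hypothetical helpers, not MXNet API):

```python
import math
import random

def xavier_bound(fan_in, fan_out, magnitude=3.0, factor_type="avg"):
    # factor selection mirrors the 'avg'/'in'/'out' factor types
    factor = {"avg": (fan_in + fan_out) / 2.0,
              "in": float(fan_in), "out": float(fan_out)}[factor_type]
    return math.sqrt(magnitude / factor)

def xavier_uniform(fan_in, fan_out, rng=random):
    c = xavier_bound(fan_in, fan_out)
    return [[rng.uniform(-c, c) for _ in range(fan_out)] for _ in range(fan_in)]

# magnitude=3 with 'avg' reduces to the Glorot bound sqrt(6/(fan_in+fan_out))
print(abs(xavier_bound(256, 128) - math.sqrt(6.0 / (256 + 128))) < 1e-12)  # True
```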



[incubator-mxnet] branch master updated: add MUSE embeddings (#9818)

2018-02-20 Thread sxjscience

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 8748b3e  add MUSE embeddings (#9818)
8748b3e is described below

commit 8748b3e0389c30b5d5f457347f5c2c5df11429da
Author: Sheng Zha <s...@users.noreply.github.com>
AuthorDate: Tue Feb 20 12:36:53 2018 -0800

add MUSE embeddings (#9818)

* add MUSE embeddings

* compress fasttext embeddings
---
 python/mxnet/contrib/text/_constants.py| 705 ++---
 python/mxnet/contrib/text/embedding.py |  14 +
 tests/python/unittest/test_contrib_text.py |   4 +-
 3 files changed, 548 insertions(+), 175 deletions(-)

diff --git a/python/mxnet/contrib/text/_constants.py 
b/python/mxnet/contrib/text/_constants.py
index 77c0d97..ef394a9 100644
--- a/python/mxnet/contrib/text/_constants.py
+++ b/python/mxnet/contrib/text/_constants.py
@@ -47,301 +47,660 @@ GLOVE_PRETRAINED_ARCHIVE_SHA1 = \
  'glove.twitter.27B.200d.txt':
  '7921c77a53aa5977b1d9ce3a7c4430cbd9d1207a'}
 
+FAST_TEXT_ARCHIVE_SHA1 = \
+{'crawl-300d-2M.zip': 'bb40313d15837ceecc1e879bc954e9be04b17c3c',
+ 'wiki.aa.zip': '0d85feb259e17d5258f38b2b615a2b87cd628427',
+ 'wiki.ab.zip': '7a8c555b9cf3837c9b31c901e9e0142209990365',
+ 'wiki.ace.zip': '51555fccbe53b726f6c86a84d704c026a78dd02f',
+ 'wiki.ady.zip': '725d2c30c03001c941ac4084549c55c7f8e1d766',
+ 'wiki.af.zip': '1a18d34e1b60433b837f5850750a44ca3845323d',
+ 'wiki.ak.zip': 'daecc2303cfd05bc6c33b24d78c14e0d7f33e3a7',
+ 'wiki.als.zip': '38851192e0b556e566be6c3c93370abf9867e525',
+ 'wiki.am.zip': '4576e0121448564b07f448e05e287236343f17c1',
+ 'wiki.ang.zip': '9c03da3b06d4becef5d387b9a61438b9362fc36a',
+ 'wiki.an.zip': '170f60bdd161cf8e4b5e018acd7d36e8bfc457a6',
+ 'wiki.arc.zip': 'c8dad8b00865bf736b087e7b323999ab404bda29',
+ 'wiki.ar.zip': '34e9869daa463fdc5609040ff33a03e67512e9fd',
+ 'wiki.arz.zip': '2d2790e11e401d46e1bce2970ee5264d5678a32b',
+ 'wiki.ast.zip': '1136515e2de556c077324bcd42ffe7f40c8d94c6',
+ 'wiki.as.zip': 'f9efde3e4ccda4a1e93fa275a3210f74036e9e46',
+ 'wiki.av.zip': '9f8568a3e094a48de4a3b6bea3bdb6fd7e875a08',
+ 'wiki.ay.zip': 'f09a422cedc6a0f15fbf30d290febe8057de83db',
+ 'wiki.azb.zip': 'd8895581050b9fdb5a10dfec3e27910a150b6faf',
+ 'wiki.az.zip': '2a34c2db872597ba3e345ce8b7db138241f9efbf',
+ 'wiki.bar.zip': 'd6e40135a6f4ba7a07fab11633034eccb1b05d0a',
+ 'wiki.bat_smg.zip': '5d08bd04f0515a36723776c0682b3de0f11d4264',
+ 'wiki.ba.zip': '412ac2f3bf9a605e56e2b0990bb0baed41ddf3b0',
+ 'wiki.bcl.zip': 'd3717cda357e08390cb57a64e07f5c7b7768d5be',
+ 'wiki.be.zip': 'b691e63b8080af23cc37f5f2b21b3154e464c425',
+ 'wiki.bg.zip': '08509a510a95e2a8905c19d83faf40d614d2268b',
+ 'wiki.bh.zip': 'a812600c6454b779d442b7680e3867e15d895095',
+ 'wiki.bi.zip': 'd0d4a3f57419424815f77b3951ef9c7336f6adf5',
+ 'wiki.bjn.zip': '0d81879ff7611380896eac6059bb677a5b3fe308',
+ 'wiki.bm.zip': 'f3a2a1a8dbc94973a74343c059595a310a5b',
+ 'wiki.bn.zip': 'b3bc70520edf3963c2217873ff5c2537d3545650',
+ 'wiki.bo.zip': '2be9fe7701d6a8501461df7bd98fee26859cf83a',
+ 'wiki.bpy.zip': 'd44b9267bb4f86e3e43972a6a952cc0ccf90dd3c',
+ 'wiki.br.zip': '4bfa66f1ea5aa5cad736eccaa211f6025596bcd6',
+ 'wiki.bs.zip': '40c560c5994ab50485d08eeaffd88740f30236ab',
+ 'wiki.bug.zip': 'bc7cd87bb067ac477000259cd4f95f45bfb6e4df',
+ 'wiki.bxr.zip': '8396fd67ef53f3123540766788a0db54734c4f1a',
+ 'wiki.ca.zip': '8f5d3caf0f5d223b2771ec44f7e620e396974fb2',
+ 'wiki.cbk_zam.zip': '0af3be50823b564433455d10c8753df88461458f',
+ 'wiki.cdo.zip': '19024215aa0c13872c027fc6127b5d7506198b5f',
+ 'wiki.ceb.zip': '96374428bf36a43983ba4307d7f6fb5ab52a6c6a',
+ 'wiki.ce.zip': 'b27f1a8da448bc9315e15d4261519c64f00de8eb',
+ 'wiki.cho.zip': '20944e34c2b58f14adb849dd5a6f5168c7affdea',
+ 'wiki.chr.zip': 'b7f41ee3fa76e933e0b5ad6b793c507fc19afe98',
+ 'wiki.chy.zip': '4ef66004a609c724fd7d8aab2877f7634323d43f',
+ 'wiki.ch.zip': '7f73678b685c9b5f5d6eea9bc00322cfc18d40cb',
+ 'wiki.ckb.zip': 'b7db2805526ad8bed878af257b32ca9ba814855f',
+ 'wiki.co.zip': '1b9e19b11763cb87ca00520dbdd6ada565547c9c',
+ 'wiki.crh.zip': '792003bae25c4471d25721440002c983fa5af020',
+ 'wiki.cr.zip': '875e4aa0de8a829e57f6c8e13d43cac5103210de',
+ 'wiki.csb.zip': 'fa776014c4c83487d7cb2485bd08eaf6739d9dca',
+ 'wiki.cs.zip': 'dca18cb80460522cd281ccc3c9922cf2b3c08b81',
+ 'wiki.cu.zip': 'ed23b48ba3193181a358d7a73005afa7655a4fc3',
+ 'wiki.cv.zip': '27ccd50942c9c218e00365ee293fa0c3087a7646',
+ 'wiki.cy.zip': '78940d5be2969b82c99f785bda2ac5f4e18e149c',
+ 'wiki.da.zip': 'a45077d9d73328bd6a96efdba1b31ed9a3639dcd',
+ 'wiki.de.zip': '0d9e4bf80100b46237dcb73cfefe390103e7e827',
+ 'wiki.d
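The table above (truncated in this archive) maps each pretrained fastText archive name to the SHA-1 digest used to verify the download. A minimal sketch of that verification step — `check_sha1` here is a standalone illustration written for this note; mxnet ships a similar helper in `mxnet.gluon.utils`:

```python
import hashlib

def check_sha1(filename, sha1_hash):
    """Return True iff `filename` hashes to the expected SHA-1 hex digest."""
    sha1 = hashlib.sha1()
    with open(filename, "rb") as f:
        # Hash in 1 MiB chunks so large .zip archives are not read into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha1.update(chunk)
    return sha1.hexdigest() == sha1_hash
```

A downloader would look the expected digest up in a table such as FAST_TEXT_ARCHIVE_SHA1 and re-fetch the archive when the check fails.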

[incubator-mxnet] branch v1.0.0 updated: Fix the gradient of gather_nd (#9200)

2018-01-04 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch v1.0.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.0.0 by this push:
 new 5585393  Fix the gradient of gather_nd (#9200)
5585393 is described below

commit 558539382b8b37acae65bf6da8f9d19010856912
Author: Xingjian Shi <xsh...@ust.hk>
AuthorDate: Thu Jan 4 11:29:57 2018 -0800

Fix the gradient of gather_nd (#9200)

* try to implement scatter_nd_acc

fix

fix

fix

update

only support real_type

update

update

try to fix

update

fix

update

revise test

fix lint

* fix

* mark line as no lint

* fix test

* revise test

* fix test case

* revise

* remove openmp

* update

* update

* update

* update test

* Revert "update test"

This reverts commit 3eb3ac6b2757ba8facb9387cd8b0080e0d496f46.

* Revert "update"

This reverts commit a28fa53a61e13bcffd0dc4503804d8704ea200a0.

* Revert "update"

This reverts commit e99ffd075832881348ff6cf7d1524fca9e614a2d.

* Revert "update"

This reverts commit 399ba0216bc21f279d46c688282fbbd37b0126c8.

* add atomic and specialize the behavior of half_t

* use "!" instead of not

* add test

* fix test

* fix test

* fix test

* rename to backward_gather_nd

* fix

* fix

* fix doc
---
 src/common/cuda_utils.h|   5 ++
 src/operator/mxnet_op.h|  44 
 src/operator/tensor/indexing_op.cc | 118 +++--
 src/operator/tensor/indexing_op.cu |  29 
 src/operator/tensor/indexing_op.h  |  64 +-
 tests/python/unittest/test_operator.py |  41 +++-
 6 files changed, 277 insertions(+), 24 deletions(-)
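For context on the bug this commit fixes: the backward pass of `gather_nd` must *accumulate* gradients whenever the same data position is gathered more than once. A hedged NumPy model of the semantics — not the actual kernel, which performs the accumulation with an atomicAdd-based scatter on GPU:

```python
import numpy as np

def gather_nd(data, indices):
    # Forward pass: pick data elements at the coordinates in `indices`
    # (shape: (index_ndim, num_picks)).
    return data[tuple(indices)]

def backward_gather_nd(indices, ograd, data_shape):
    # Backward pass: scatter `ograd` back to the gathered positions.
    # np.add.at accumulates duplicate indices; plain fancy-indexing
    # assignment would silently keep only the last write, which is the
    # wrong-gradient behavior this commit repairs.
    igrad = np.zeros(data_shape)
    np.add.at(igrad, tuple(indices), ograd)
    return igrad

data = np.array([1.0, 2.0, 3.0])
idx = np.array([[0, 0, 2]])   # position 0 is gathered twice
print(gather_nd(data, idx))                              # [1. 1. 3.]
print(backward_gather_nd(idx, np.ones(3), data.shape))   # [2. 0. 1.]
```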

diff --git a/src/common/cuda_utils.h b/src/common/cuda_utils.h
index a1c37a9..9d3388b 100644
--- a/src/common/cuda_utils.h
+++ b/src/common/cuda_utils.h
@@ -479,6 +479,11 @@ static inline __device__ void atomicAdd(mshadow::half::half_t *address,
   } while (assumed != old);
 }
 
+// Overload atomicAdd to work for signed int64 on all architectures
+static inline  __device__  void atomicAdd(int64_t *address, int64_t val) {
+  atomicAdd(reinterpret_cast<unsigned long long*>(address), static_cast<unsigned long long>(val));  // NOLINT
+}
+
template <typename DType>
 __device__ inline DType ldg(const DType* address) {
 #if __CUDA_ARCH__ >= 350
diff --git a/src/operator/mxnet_op.h b/src/operator/mxnet_op.h
index 1d47943..e351b4a 100644
--- a/src/operator/mxnet_op.h
+++ b/src/operator/mxnet_op.h
@@ -132,6 +132,50 @@ inline int get_num_threads(const int N) {
 LOG(FATAL) << "ndim=" << NDim << "too large "; \
   }
 
+#define MXNET_NO_INT8_TYPE_SWITCH(type, DType, ...)\
+  switch (type) {  \
+  case mshadow::kFloat32:  \
+{  \
+  typedef float DType; \
+  {__VA_ARGS__}\
+}  \
+break; \
+  case mshadow::kFloat64:  \
+{  \
+  typedef double DType;\
+  {__VA_ARGS__}\
+}  \
+break; \
+  case mshadow::kFloat16:  \
+{  \
+  typedef mshadow::half::half_t DType; \
+  {__VA_ARGS__}\
+}  \
+break; \
+  case mshadow::kUint8:\
+LOG(FATAL) << "This operation does not "   \
+  "support int8 or uint8"; \
+break; \
+  case mshadow::kInt8: \
+LOG(FATAL) << "This operation does not "   \
+  "support int8 or uint8"; \
+break; \
+  case mshadow::kInt32:
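The macro above (cut off in this archive) follows a standard dispatch pattern: translate a runtime dtype flag into a concrete compute type, while rejecting the int8/uint8 cases the operator cannot handle. A hedged Python analogue, with names invented for illustration:

```python
import numpy as np

# Compute types the hypothetical operator supports, keyed by dtype name.
SUPPORTED = {"float32": np.float32, "float64": np.float64, "float16": np.float16}

def dispatch(dtype, kernel):
    # Mirrors the macro's structure: a fatal error for int8/uint8, and a
    # concrete type handed to the kernel body for everything supported.
    if dtype in ("int8", "uint8"):
        raise TypeError("This operation does not support int8 or uint8")
    return kernel(SUPPORTED[dtype])

print(dispatch("float64", lambda T: np.dtype(T).itemsize))  # 8
```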

[incubator-mxnet] branch vision updated: Revert "add test script"

2017-11-21 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch vision
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/vision by this push:
 new e9ae848  Revert "add test script"
e9ae848 is described below

commit e9ae848167a46d4a31cbb36e50fecbce36c688bc
Author: Xingjian Shi <xsh...@ust.hk>
AuthorDate: Tue Nov 21 11:08:23 2017 -0800

Revert "add test script"

This reverts commit 23f68272e305103ad87d089e700ef715b13067c0.
---
 test_new_image_loader.py | 34 --
 1 file changed, 34 deletions(-)

diff --git a/test_new_image_loader.py b/test_new_image_loader.py
deleted file mode 100644
index 296869e..000
--- a/test_new_image_loader.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import os
-os.environ['MXNET_CPU_WORKER_NTHREADS'] = '1'
-os.environ['OMP_NUM_THREADS'] = '1'
-import time
-import numpy as np
-import multiprocessing as mp
-import mxnet as mx
-from mxnet import gluon as gl
-from mxnet.gluon.data.vision import transforms
-
-if __name__ == '__main__':
-   M = 24
-   BS = 100
-
-   dataset = gl.data.vision.ImageFolderDataset('../256_ObjectCategories')
-   transform = transforms.Compose([transforms.ToTensor(),
-                                   transforms.RandomBrightness(1.0),
-                                   transforms.RandomContrast(1.0),
-                                   transforms.RandomSaturation(1.0),
-                                   transforms.Normalize([0, 0, 0], [1, 1, 1])])
-   dataset = dataset.transform_first(lambda x: transform(mx.image.center_crop(x, (224, 224))[0]))
-   data_loader = gl.data.DataLoader(dataset, BS, shuffle=True, num_workers=M)
-
-   N = len(dataset)
-
-   iterator = iter(data_loader)
-
-   tic = time.time()
-
-   for data, label in iterator:
-   data.wait_to_read()
-   print(data.shape)
-
-   print(N/(time.time() - tic))

-- 
To stop receiving notification emails like this one, please contact
['"comm...@mxnet.apache.org" <comm...@mxnet.apache.org>'].


[incubator-mxnet] branch vision updated: add test script

2017-11-21 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch vision
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/vision by this push:
 new 23f6827  add test script
23f6827 is described below

commit 23f68272e305103ad87d089e700ef715b13067c0
Author: Xingjian Shi <xsh...@ust.hk>
AuthorDate: Tue Nov 21 11:07:21 2017 -0800

add test script
---
 test_new_image_loader.py | 34 ++
 1 file changed, 34 insertions(+)

diff --git a/test_new_image_loader.py b/test_new_image_loader.py
new file mode 100644
index 000..296869e
--- /dev/null
+++ b/test_new_image_loader.py
@@ -0,0 +1,34 @@
+import os
+os.environ['MXNET_CPU_WORKER_NTHREADS'] = '1'
+os.environ['OMP_NUM_THREADS'] = '1'
+import time
+import numpy as np
+import multiprocessing as mp
+import mxnet as mx
+from mxnet import gluon as gl
+from mxnet.gluon.data.vision import transforms
+
+if __name__ == '__main__':
+   M = 24
+   BS = 100
+
+   dataset = gl.data.vision.ImageFolderDataset('../256_ObjectCategories')
+   transform = transforms.Compose([transforms.ToTensor(),
+                                   transforms.RandomBrightness(1.0),
+                                   transforms.RandomContrast(1.0),
+                                   transforms.RandomSaturation(1.0),
+                                   transforms.Normalize([0, 0, 0], [1, 1, 1])])
+   dataset = dataset.transform_first(lambda x: transform(mx.image.center_crop(x, (224, 224))[0]))
+   data_loader = gl.data.DataLoader(dataset, BS, shuffle=True, num_workers=M)
+
+   N = len(dataset)
+
+   iterator = iter(data_loader)
+
+   tic = time.time()
+
+   for data, label in iterator:
+   data.wait_to_read()
+   print(data.shape)
+
+   print(N/(time.time() - tic))



[incubator-mxnet] branch master updated: shared module bug fix (#8185)

2017-10-09 Thread sxjscience
This is an automated email from the ASF dual-hosted git repository.

sxjscience pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new fbfff66  shared module bug fix (#8185)
fbfff66 is described below

commit fbfff666ed105fc22f9b2a2b8420db5dd78571f6
Author: formath <jinpeng...@163.com>
AuthorDate: Mon Oct 9 07:31:33 2017 -0500

shared module bug fix (#8185)
---
 python/mxnet/module/module.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/python/mxnet/module/module.py b/python/mxnet/module/module.py
index d55b211..4c20a6f 100644
--- a/python/mxnet/module/module.py
+++ b/python/mxnet/module/module.py
@@ -403,7 +403,7 @@ class Module(BaseModule):
 assert isinstance(shared_module, Module) and \
 shared_module.binded and shared_module.params_initialized
 shared_group = shared_module._exec_group
-assert len(shared_group.execs) == len(self._context)
+assert len(shared_group.execs) >= len(self._context)
 else:
 shared_group = None
 
