[incubator-mxnet] branch master updated: Fixup move gluon.metric api docs (#18748)

2020-07-30 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new ac36089  Fixup move gluon.metric api docs (#18748)
ac36089 is described below

commit ac3608932f507e3f41f79d814ba31bb8f83fec3a
Author: Leonard Lausen 
AuthorDate: Fri Jul 31 04:54:10 2020 +

Fixup move gluon.metric api docs (#18748)

* Fix metric API page

* Update index.rst
---
 docs/python_docs/python/api/gluon/index.rst|  2 +-
 docs/python_docs/python/api/gluon/metric/index.rst | 23 ++
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/docs/python_docs/python/api/gluon/index.rst 
b/docs/python_docs/python/api/gluon/index.rst
index c74500e..8a8b425 100644
--- a/docs/python_docs/python/api/gluon/index.rst
+++ b/docs/python_docs/python/api/gluon/index.rst
@@ -97,7 +97,7 @@ Training
   Loss functions for training neural networks.
 
.. card::
-  :title: mxnet.metric
+  :title: gluon.metric
   :link: metric/index.html
 
   Metrics to evaluate the performance of a learned model.
diff --git a/docs/python_docs/python/api/gluon/metric/index.rst 
b/docs/python_docs/python/api/gluon/metric/index.rst
new file mode 100644
index 000..8c53597
--- /dev/null
+++ b/docs/python_docs/python/api/gluon/metric/index.rst
@@ -0,0 +1,23 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+gluon.metric
+============
+
+.. automodule:: mxnet.gluon.metric
+    :members:
+    :autosummary:
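
The patch above wires up a Sphinx autodoc page for the metric classes that now live under mxnet.gluon.metric (previously mxnet.metric). As a minimal sketch of the API that page documents, assuming an MXNet build recent enough to ship the gluon.metric namespace:

    import mxnet as mx
    from mxnet.gluon.metric import Accuracy  # moved here from mxnet.metric

    acc = Accuracy()
    labels = mx.nd.array([0, 1, 1])
    preds = mx.nd.array([[0.7, 0.3],    # argmax 0 -> correct
                         [0.2, 0.8],    # argmax 1 -> correct
                         [0.9, 0.1]])   # argmax 0 -> wrong
    acc.update(labels=[labels], preds=[preds])
    print(acc.get())  # ('accuracy', 0.666...)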



[incubator-mxnet] branch master updated (aa53291 -> 7a24006)

2020-07-30 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from aa53291  add adaptive left margin for python site document body 
(#18828)
 add 7a24006  Enable DIST_KVSTORE by default in staticbuild (#18796)

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt| 1 +
 config/distribution/darwin_cpu.cmake  | 1 +
 config/distribution/linux_cpu.cmake   | 1 +
 config/distribution/linux_cu100.cmake | 1 +
 config/distribution/linux_cu101.cmake | 1 +
 config/distribution/linux_cu102.cmake | 1 +
 tools/dependencies/cityhash.sh| 7 +++
 tools/dependencies/libpng.sh  | 3 +--
 tools/dependencies/libturbojpeg.sh| 4 ++--
 tools/dependencies/lz4.sh | 7 +++
 tools/dependencies/openssl.sh | 4 ++--
 tools/dependencies/protobuf.sh| 5 -
 tools/dependencies/zmq.sh | 9 +
 13 files changed, 34 insertions(+), 11 deletions(-)
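
Since the static builds now compile with DIST_KVSTORE on by default, the feature can be confirmed from Python at runtime; a small sketch using the standard runtime feature check (assuming mxnet.runtime.Features behaves as in current releases):

    import mxnet as mx

    features = mx.runtime.Features()
    # True on binaries built with USE_DIST_KVSTORE=ON, which this change makes the default
    print(features.is_enabled('DIST_KVSTORE'))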



[incubator-mxnet] branch master updated (045efb2 -> aa53291)

2020-07-30 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 045efb2  [NumPy] DLPack refactor and npx.from_numpy (#18656)
 add aa53291  add adaptive left margin for python site document body 
(#18828)

No new revisions were added by this update.

Summary of changes:
 .../themes/mx-theme/mxtheme/static/sphinx_materialdesign_theme.css  | 2 +-
 .../mx-theme/mxtheme/static/sphinx_materialdesign_theme.css.map | 2 +-
 docs/python_docs/themes/mx-theme/src/scss/_root.scss| 6 ++
 3 files changed, 8 insertions(+), 2 deletions(-)



[incubator-mxnet] branch master updated: [NumPy] DLPack refactor and npx.from_numpy (#18656)

2020-07-30 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 045efb2  [NumPy] DLPack refactor and npx.from_numpy (#18656)
045efb2 is described below

commit 045efb27842e850f6ddf7c48e5c16e5678508443
Author: Sheng Zha 
AuthorDate: Thu Jul 30 19:19:33 2020 -0700

[NumPy] DLPack refactor and npx.from_numpy (#18656)

* refactor dlpack and add from_numpy to npx

* remove reference of DeepNumPy

* map platform-dependent types to fixed-size types

* update DMLC_LOG_FATAL_THROW

* fix flaky

* fix flaky

* test no error
---
 CMakeLists.txt |   4 +-
 python/mxnet/base.py   |   1 -
 python/mxnet/dlpack.py | 185 ++
 python/mxnet/ndarray/ndarray.py| 271 +++--
 python/mxnet/ndarray/numpy/_op.py  |   6 +-
 python/mxnet/ndarray/numpy/random.py   |   2 +-
 python/mxnet/numpy/multiarray.py   |  16 +-
 python/mxnet/numpy/random.py   |   2 +-
 python/mxnet/numpy_extension/utils.py  |  92 ---
 python/mxnet/symbol/numpy/random.py|   2 +-
 tests/python/unittest/test_base.py |   2 +
 tests/python/unittest/test_gluon_probability_v1.py |   6 +-
 tests/python/unittest/test_numpy_ndarray.py|  32 +++
 tests/python/unittest/test_operator.py |   4 +-
 14 files changed, 373 insertions(+), 252 deletions(-)

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 5f1a510..cb22c59 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -642,7 +642,6 @@ if(UNIX)
   endif()
   target_link_libraries(mxnet PUBLIC mshadow)
   target_link_libraries(mxnet PUBLIC ${CMAKE_DL_LIBS})
-  target_compile_definitions(mxnet PUBLIC 
DMLC_LOG_FATAL_THROW=$)
   if(CMAKE_BUILD_TYPE STREQUAL "RelWithDebInfo")
 target_compile_options(mxnet PRIVATE "$<$:-Werror>")
 # Ignore erroneous compiler warnings:
@@ -669,7 +668,6 @@ elseif(MSVC)
   foreach(arch ${arch_code_list})
 add_library(mxnet_${arch} SHARED ${SOURCE})
 target_link_libraries(mxnet_${arch} PUBLIC mshadow)
-target_compile_definitions(mxnet_${arch} PUBLIC 
DMLC_LOG_FATAL_THROW=$)
 target_compile_options(
   mxnet_${arch}
   PRIVATE
@@ -705,10 +703,10 @@ elseif(MSVC)
 endif(USE_SPLIT_ARCH_DLL)
   else()
 add_library(mxnet SHARED ${SOURCE})
-target_compile_definitions(mxnet PUBLIC 
DMLC_LOG_FATAL_THROW=$)
 target_link_libraries(mxnet PUBLIC mshadow)
   endif()
 endif()
+target_compile_definitions(mxnet PUBLIC 
DMLC_LOG_FATAL_THROW=$)
 
 # extension libraries (custom operators, custom subgraphs) are built by default
 add_library(customop_lib SHARED 
${CMAKE_CURRENT_SOURCE_DIR}/example/extensions/lib_custom_op/gemm_lib.cc)
diff --git a/python/mxnet/base.py b/python/mxnet/base.py
index 65687ff..0b4bdf9 100644
--- a/python/mxnet/base.py
+++ b/python/mxnet/base.py
@@ -309,7 +309,6 @@ RtcHandle = ctypes.c_void_p
 CudaModuleHandle = ctypes.c_void_p
 CudaKernelHandle = ctypes.c_void_p
 ProfileHandle = ctypes.c_void_p
-DLPackHandle = ctypes.c_void_p
 
 
 #
diff --git a/python/mxnet/dlpack.py b/python/mxnet/dlpack.py
new file mode 100644
index 000..b5e8ee8
--- /dev/null
+++ b/python/mxnet/dlpack.py
@@ -0,0 +1,185 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+# pylint: disable=protected-access
+# pylint: disable=import-error, no-name-in-module, undefined-variable
+
+"""DLPack API of MXNet."""
+
+import ctypes
+from .base import _LIB, c_str, check_call, NDArrayHandle
+
+DLPackHandle = ctypes.c_void_p
+
+PyCapsuleDestructor = ctypes.CFUNCTYPE(None, ctypes.c_void_p)
+_c_str_dltensor = c_str('dltensor')
+_c_str_used_dltensor = c_str('used_dltensor')
+
+def _dlpack_deleter(pycapsule):
+pycapsule = ctypes.c_void_p(pycapsule)
+if ctypes.pythonapi.P
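
The refactor concentrates the PyCapsule/DLPack plumbing in python/mxnet/dlpack.py and surfaces NumPy interop as npx.from_numpy. A rough usage sketch follows; the zero_copy keyword mirrors the older nd.from_numpy API and is an assumption here, not a verified signature:

    import numpy as onp
    import mxnet as mx

    a = onp.ones((3, 4), dtype='float32')
    # Wrap the NumPy buffer as an MXNet array; zero_copy is assumed to avoid a copy,
    # in which case the source NumPy array is expected to be marked read-only.
    b = mx.npx.from_numpy(a, zero_copy=True)
    print(b.shape, b.dtype)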

[incubator-mxnet] branch master updated (6bbd531 -> 608afef)

2020-07-30 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 6bbd531  Update clang-tidy integration (#18815)
 add 608afef  Fix dirichlet flaky tests (#18817)

No new revisions were added by this update.

Summary of changes:
 tests/python/unittest/test_gluon_probability_v1.py | 12 ++--
 tests/python/unittest/test_gluon_probability_v2.py | 10 +-
 2 files changed, 11 insertions(+), 11 deletions(-)



[incubator-mxnet] branch master updated: Fix dirichlet flaky tests (#18817)

2020-07-30 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 608afef  Fix dirichlet flaky tests (#18817)
608afef is described below

commit 608afef6fb69129730f4c18d0e42f5a8ac2078a7
Author: Xi Wang 
AuthorDate: Fri Jul 31 02:30:25 2020 +0800

Fix dirichlet flaky tests (#18817)

* make parameter smoother

* minor changes
---
 tests/python/unittest/test_gluon_probability_v1.py | 12 ++--
 tests/python/unittest/test_gluon_probability_v2.py | 10 +-
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/tests/python/unittest/test_gluon_probability_v1.py 
b/tests/python/unittest/test_gluon_probability_v1.py
index c0dd5d5..0fece99 100644
--- a/tests/python/unittest/test_gluon_probability_v1.py
+++ b/tests/python/unittest/test_gluon_probability_v1.py
@@ -341,7 +341,7 @@ def test_gluon_cauchy_v1():
 for shape, hybridize in itertools.product(shapes, [True, False]):
 loc = np.random.uniform(-1, 1, shape)
 scale = np.random.uniform(0.5, 1.5, shape)
-samples = np.random.uniform(size=shape, high=1.0-1e-4)
+samples = np.random.uniform(size=shape, low=1e-4, high=1.0-1e-4)
 net = TestCauchy("icdf")
 if hybridize:
 net.hybridize()
@@ -837,7 +837,7 @@ def test_gluon_dirichlet_v1():
 dirichlet = mgp.Dirichlet(alpha, F, validate_args=True)
 return _distribution_method_invoker(dirichlet, self._func, *args)
 
-event_shapes = [2, 5, 10]
+event_shapes = [2, 4, 6]
 batch_shapes = [None, (2, 3)]
 
 # Test sampling
@@ -845,7 +845,7 @@ def test_gluon_dirichlet_v1():
 for hybridize in [True, False]:
 desired_shape = (
 batch_shape if batch_shape is not None else ()) + 
(event_shape,)
-alpha = np.random.uniform(size=desired_shape)
+alpha = np.random.uniform(1.0, 5.0, size=desired_shape)
 net = TestDirichlet("sample")
 if hybridize:
 net.hybridize()
@@ -862,9 +862,9 @@ def test_gluon_dirichlet_v1():
 for hybridize in [True, False]:
 desired_shape = (
 batch_shape if batch_shape is not None else ()) + 
(event_shape,)
-alpha = np.random.uniform(size=desired_shape)
+alpha = np.random.uniform(1.0, 5.0, desired_shape)
 np_samples = _np.random.dirichlet(
-[1 / event_shape] * event_shape, size=batch_shape)
+[10.0 / event_shape] * event_shape, size=batch_shape)
 net = TestDirichlet("log_prob")
 if hybridize:
 net.hybridize()
@@ -879,7 +879,7 @@ def test_gluon_dirichlet_v1():
 for func in ['mean', 'variance', 'entropy']:
 desired_shape = (
 batch_shape if batch_shape is not None else ()) + 
(event_shape,)
-alpha = np.random.uniform(size=desired_shape)
+alpha = np.random.uniform(1.0, 5.0, desired_shape)
 net = TestDirichlet(func)
 if hybridize:
 net.hybridize()
diff --git a/tests/python/unittest/test_gluon_probability_v2.py 
b/tests/python/unittest/test_gluon_probability_v2.py
index ecce63c..dc8ac14 100644
--- a/tests/python/unittest/test_gluon_probability_v2.py
+++ b/tests/python/unittest/test_gluon_probability_v2.py
@@ -837,7 +837,7 @@ def test_gluon_dirichlet():
 dirichlet = mgp.Dirichlet(alpha, validate_args=True)
 return _distribution_method_invoker(dirichlet, self._func, *args)
 
-event_shapes = [2, 5, 10]
+event_shapes = [2, 4, 6]
 batch_shapes = [None, (2, 3)]
 
 # Test sampling
@@ -845,7 +845,7 @@ def test_gluon_dirichlet():
 for hybridize in [True, False]:
 desired_shape = (
 batch_shape if batch_shape is not None else ()) + 
(event_shape,)
-alpha = np.random.uniform(size=desired_shape)
+alpha = np.random.uniform(1.0, 5.0, size=desired_shape)
 net = TestDirichlet("sample")
 if hybridize:
 net.hybridize()
@@ -862,9 +862,9 @@ def test_gluon_dirichlet():
 for hybridize in [True, False]:
 desired_shape = (
 batch_shape if batch_shape is not None else ()) + 
(event_shape,)
-alpha = np.random.uniform(size=desired_shape)
+alpha = np.random.uniform(1.0, 5.0, size=desired_shape)
 np_samples = _np.random.dirichlet(
-[1 / event_shape] * event_shape, size=batch_shape)
+[10.0 / event_shape] * event_shape, size=batch_shape)
 net = TestDirichlet("log_prob")
 if hybridize:
 net.hybridize()
@@ -879,7 +879,7 @@
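
Drawing alpha from uniform(1.0, 5.0) instead of uniform(0, 1) keeps the Dirichlet concentration above 1, where samples stay away from the simplex boundary; with alpha < 1 some components collapse toward 0 and log_prob (and its gradient) becomes numerically extreme, which is what made the test flaky. A plain-NumPy illustration of the effect, independent of the test harness:

    import numpy as np

    rng = np.random.default_rng(0)
    spiky = rng.dirichlet([0.2] * 4, size=5)    # alpha < 1: components pile up near 0
    smooth = rng.dirichlet([2.5] * 4, size=5)   # alpha in [1, 5]: components stay interior
    print(spiky.min())    # often on the order of 1e-6 or smaller, so log terms explode
    print(smooth.min())   # bounded comfortably away from 0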

[incubator-mxnet] branch master updated: Remove deepnumpy reference and move Numpy tutorials to top level (#18798)

2020-07-29 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 915f6b4  Remove deepnumpy reference and move Numpy tutorials to top 
level (#18798)
915f6b4 is described below

commit 915f6b43de409ec7fbf0373d270e9d4a05621fe2
Author: Yang Shi 
AuthorDate: Wed Jul 29 11:28:37 2020 -0700

Remove deepnumpy reference and move Numpy tutorials to top level (#18798)

* move np tutorials to top level

* replace deepnumpy reference to np

* add info in card

* remove useless entry

* replace NDArray API card with np.ndarray

* python site refactor

* remove duplicated drawer and refactor layout

* extend document width to 100% for xl devices
---
 docs/python_docs/_static/mxnet.css |  4 
 .../python/tutorials/getting-started/crash-course/1-ndarray.md |  2 +-
 .../tutorials/getting-started/crash-course/6-use_gpus.md   |  2 +-
 docs/python_docs/python/tutorials/getting-started/index.rst|  4 ++--
 .../tutorials/getting-started/{deepnumpy => np}/cheat-sheet.md |  0
 .../tutorials/getting-started/{deepnumpy => np}/index.rst  |  2 +-
 .../{deepnumpy/deepnumpy-vs-numpy.md => np/np-vs-numpy.md} |  0
 docs/python_docs/python/tutorials/index.rst|  7 +++
 docs/python_docs/python/tutorials/packages/index.rst   |  6 ++
 docs/python_docs/python/tutorials/packages/ndarray/index.rst   |  5 -
 .../packages/{ndarray/deepnumpy => np}/arrays.indexing.rst |  0
 .../packages/{ndarray/deepnumpy => np}/arrays.ndarray.rst  |  0
 .../tutorials/packages/{ndarray/deepnumpy => np}/arrays.rst|  0
 .../tutorials/packages/{ndarray/deepnumpy => np}/index.rst |  0
 .../tutorials/packages/{ndarray/deepnumpy => np}/npx.rst   |  0
 .../packages/{ndarray/deepnumpy => np}/random/index.rst|  0
 .../{ndarray/deepnumpy => np}/routines.array-creation.rst  |  0
 .../{ndarray/deepnumpy => np}/routines.array-manipulation.rst  |  0
 .../packages/{ndarray/deepnumpy => np}/routines.io.rst |  0
 .../packages/{ndarray/deepnumpy => np}/routines.linalg.rst |  0
 .../packages/{ndarray/deepnumpy => np}/routines.math.rst   |  0
 .../tutorials/packages/{ndarray/deepnumpy => np}/routines.rst  |  0
 .../packages/{ndarray/deepnumpy => np}/routines.sort.rst   |  0
 .../packages/{ndarray/deepnumpy => np}/routines.statistics.rst |  0
 docs/python_docs/themes/mx-theme/mxtheme/layout.html   | 10 ++
 .../mx-theme/mxtheme/static/sphinx_materialdesign_theme.css|  2 +-
 .../mxtheme/static/sphinx_materialdesign_theme.css.map |  2 +-
 .../mx-theme/mxtheme/static/sphinx_materialdesign_theme.js |  6 --
 .../mx-theme/mxtheme/static/sphinx_materialdesign_theme.js.map |  2 +-
 .../{_static => themes/mx-theme/src/js}/feedback.js|  0
 .../themes/mx-theme/src/js/sphinx_materialdesign_theme.js  |  2 +-
 docs/python_docs/themes/mx-theme/src/scss/_root.scss   |  8 
 .../python_docs/themes/mx-theme/src/scss/grid/_simplegrid.scss |  9 -
 docs/python_docs/themes/mx-theme/src/scss/layout/_layout.scss  |  3 ++-
 34 files changed, 38 insertions(+), 38 deletions(-)

diff --git a/docs/python_docs/_static/mxnet.css 
b/docs/python_docs/_static/mxnet.css
index 7d4f7f1..5c04804 100644
--- a/docs/python_docs/_static/mxnet.css
+++ b/docs/python_docs/_static/mxnet.css
@@ -19,10 +19,6 @@
 visibility: hidden;
 }
 
-.document .page-content {
-padding: 0 10% !important;
-}
-
 .mdl-layout__header--waterfall.is-casting-shadow {
 box-shadow: none !important;
 }
diff --git 
a/docs/python_docs/python/tutorials/getting-started/crash-course/1-ndarray.md 
b/docs/python_docs/python/tutorials/getting-started/crash-course/1-ndarray.md
index 453cc35..52835b4 100644
--- 
a/docs/python_docs/python/tutorials/getting-started/crash-course/1-ndarray.md
+++ 
b/docs/python_docs/python/tutorials/getting-started/crash-course/1-ndarray.md
@@ -17,7 +17,7 @@
 
 # Step 1: Manipulate data with NP on MXNet
 
-This getting started exercise introduces the `np` package, which is similar to 
Numpy. For more information, please see [Differences between NP on MXNet and 
NumPy](/api/python/docs/tutorials/getting-started/deepnumpy/deepnumpy-vs-numpy.html).
+This getting started exercise introduces the `np` package, which is similar to 
Numpy. For more information, please see [Differences between NP on MXNet and 
NumPy](/api/python/docs/tutorials/getting-started/np/np-vs-numpy.html).
 
 ## Import packages and create an array
 
diff --git 
a/docs/python_docs/python/tutorials/getting-started/crash-course/6-use_gpus.md 
b/docs/python_docs/python/tutorials/getting-started/crash-course/6-use_gpus.md
index 1e60d5f..6

[incubator-mxnet] branch v1.x updated: Add syrk test shape check (#18812)

2020-07-29 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.x by this push:
 new 85eb528  Add syrk test shape check (#18812)
85eb528 is described below

commit 85eb528c1f53cc3b88ea4596d02d6bfa251b9953
Author: Zhaoqi Zhu 
AuthorDate: Tue Jul 28 23:08:53 2020 -0700

Add syrk test shape check (#18812)

* add shape check

* add name to contributor.md

Co-authored-by: Ubuntu 
---
 CONTRIBUTORS.md   | 1 +
 tests/nightly/test_large_array.py | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index bd7f966..be04d82 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -252,6 +252,7 @@ List of Contributors
 * [Oliver Kowalke](https://github.com/olk)
 * [Connor Goggins](https://github.com/connorgoggins)
 * [Joe Evans](https://github.com/josephevans)
+* [Zhaoqi Zhu](https://github.com/zha0q1)
 
 Label Bot
 -
diff --git a/tests/nightly/test_large_array.py 
b/tests/nightly/test_large_array.py
index 306c827..8865eae 100644
--- a/tests/nightly/test_large_array.py
+++ b/tests/nightly/test_large_array.py
@@ -1201,9 +1201,11 @@ def test_linalg():
 A.attach_grad()
 with mx.autograd.record():
 out = nd.linalg.syrk(A, alpha=2, transpose=False)
+assert out.shape == (2, LARGE_SQ_X, LARGE_SQ_X)
 assert out[0,0,0] == 2
 assert_almost_equal(out[1,0,0], nd.array([0.02]), rtol=1e-3, atol=1e-5)
 out.backward()
+assert A.grad.shape == (2, LARGE_SQ_X, LARGE_SQ_X)
 assert A.grad[0,0,0] == 4
 assert_almost_equal(A.grad[1,0,0], nd.array([0.4]), rtol=1e-3, 
atol=1e-5)
 



[incubator-mxnet] branch master updated (126636c -> e9829e7)

2020-07-28 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 126636c  Fix naming in runtime_functions.sh (#18795)
 add e9829e7  Cherry-pick large tensor support from #18752. (#18804)

No new revisions were added by this update.

Summary of changes:
 CONTRIBUTORS.md |  1 +
 src/operator/tensor/la_op-inl.h | 11 ++-
 2 files changed, 7 insertions(+), 5 deletions(-)



[incubator-mxnet] branch master updated: Cherry-pick large tensor support from #18752. (#18804)

2020-07-28 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new e9829e7  Cherry-pick large tensor support from #18752. (#18804)
e9829e7 is described below

commit e9829e71a7f536d0fc78a0faf96f31336987770e
Author: Joe Evans 
AuthorDate: Tue Jul 28 18:53:29 2020 -0700

Cherry-pick large tensor support from #18752. (#18804)

Co-authored-by: Joe Evans 
---
 CONTRIBUTORS.md |  1 +
 src/operator/tensor/la_op-inl.h | 11 ++-
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index f63b241..4146d45 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -254,6 +254,7 @@ List of Contributors
 * [Connor Goggins](https://github.com/connorgoggins)
 * [Wei Chu](https://github.com/waytrue17)
 * [Yang Shi](https://github.com/ys2843)
+* [Joe Evans](https://github.com/josephevans)
 
 Label Bot
 -
diff --git a/src/operator/tensor/la_op-inl.h b/src/operator/tensor/la_op-inl.h
index d580cce..7a5a602 100644
--- a/src/operator/tensor/la_op-inl.h
+++ b/src/operator/tensor/la_op-inl.h
@@ -36,9 +36,10 @@ using namespace mshadow;
 // Copies lower/upper triangular part to upper/lower, i.e. to the opposite 
side.
 struct CopyTriangularToOppositeSide {
   template
-  MSHADOW_XINLINE static void Map(int i, int matrix_size, int stride, DType* 
data, bool to_lower) {
+  MSHADOW_XINLINE static void Map(index_t i, size_t matrix_size, index_t 
stride,
+  DType* data, bool to_lower) {
 // Below computation works even when we are dealing with a batch of 
matrices.
-const int row((i % matrix_size) / stride), col(i % stride);
+const index_t row((i % matrix_size) / stride), col(i % stride);
 if (row > col) {
if (to_lower) {
  data[i] = data[i + (col - row) * (stride - 1)];
@@ -52,9 +53,9 @@ struct CopyTriangularToOppositeSide {
 // Zero's lower/upper triangular part of a matrix.
 struct ZeroTriangular {
   template
-  MSHADOW_XINLINE static void Map(int i, int matrix_size, int stride, DType* 
data,
-  bool zero_lower) {
-const int row((i % matrix_size) / stride), col(i % stride);
+  MSHADOW_XINLINE static void Map(index_t i, size_t matrix_size, index_t 
stride,
+  DType* data, bool zero_lower) {
+const index_t row((i % matrix_size) / stride), col(i % stride);
 if ((!zero_lower && (row < col)) || (zero_lower && (row > col))) data[i] = 
0;
   }
 };
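
The int -> index_t change matters because the flat element index can exceed 2^31 - 1 once a batch of large square matrices is stored contiguously, and the row/col arithmetic would silently wrap in 32 bits. A quick Python illustration of the threshold, mirroring the kernel's row/col computation:

    import numpy as np

    n = 50000                          # one n x n matrix per batch element
    matrix_size = n * n                # 2.5e9 elements, already past 2**31 - 1
    stride = n
    i = matrix_size + 7 * stride + 3   # flat index of (row 7, col 3) in the second matrix

    assert i > np.iinfo(np.int32).max  # a 32-bit index would have overflowed before reaching it
    row = (i % matrix_size) // stride  # 7, fine with 64-bit (index_t-sized) arithmetic
    col = i % stride                   # 3
    print(row, col)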



[incubator-mxnet] branch v1.x updated (d009345 -> 7bef9cb)

2020-07-28 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch v1.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d009345  [1.x][LT] Add forward, backward test for linalg.gemm2 (#18784)
 add 7bef9cb  Back port optimization to broadcast_axis to MXNet1.x (#18773)

No new revisions were added by this update.

Summary of changes:
 src/operator/numpy/np_matmul_op-inl.h |  40 +-
 src/operator/tensor/broadcast_reduce_op.h | 208 +++---
 2 files changed, 224 insertions(+), 24 deletions(-)



[incubator-mxnet] branch v1.x updated: Back port optimization to broadcast_axis to MXNet1.x (#18773)

2020-07-28 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.x by this push:
 new 7bef9cb  Back port optimization to broadcast_axis to MXNet1.x (#18773)
7bef9cb is described below

commit 7bef9cb23b72c3b5b93c10d87e09db19f442d12e
Author: Rohit Kumar Srivastava 
AuthorDate: Tue Jul 28 16:58:07 2020 -0700

Back port optimization to broadcast_axis to MXNet1.x (#18773)

* Improving performance of broadcast_axis on GPU (#18168)

* adding separate int32_t kernel for GPU in broadcast_axis/to/like operators

* using structure instead of temp workspace to pass stride and shape

* replacing hardcoded int32_t with generic index_t

* combining CPU and GPU kernels to leverage cached stride calculation and 
fast access shape data in both

Co-authored-by: Rohit Kumar Srivastava 

* Improve performance of broadcast_axis on CPU (#17882)

* adding comments explaining code optimizations

* fixing broadcast_axis kernel to int32

* fixing slice_axis kernel to int32

* combining CPU and GPU implementation method signatures and cleaned up
code

* adding new broadcast_axis to np_matmul

Co-authored-by: Rohit Kumar Srivastava 

Co-authored-by: Rohit Kumar Srivastava 
---
 src/operator/numpy/np_matmul_op-inl.h |  40 +-
 src/operator/tensor/broadcast_reduce_op.h | 208 +++---
 2 files changed, 224 insertions(+), 24 deletions(-)

diff --git a/src/operator/numpy/np_matmul_op-inl.h 
b/src/operator/numpy/np_matmul_op-inl.h
index 89560f6..8f1b4f9 100644
--- a/src/operator/numpy/np_matmul_op-inl.h
+++ b/src/operator/numpy/np_matmul_op-inl.h
@@ -138,6 +138,8 @@ inline void MatmulImpl(const OpContext& ctx,
   mshadow::Tensor workspace;
   mshadow::Tensor ans, mlhs, mrhs;
   mshadow::Stream *s = ctx.get_stream();
+  bool isCPU = std::is_same::value;
+  // Is true if either a or b requires broadcast or not
   if (MatmulNeedBroadcast(a_shape, b_shape)) {
 // e.g. a.shape = (2, 3, 1, 4, 2)
 //  b.shape =   (5, 2, 4)
@@ -157,12 +159,38 @@ inline void MatmulImpl(const OpContext& ctx,
 DType* bc_b_ptr = bc_a_ptr + bc_size_a;
 MSHADOW_TYPE_SWITCH_WITH_BOOL(input_a.type_flag_, IType, {
   MSHADOW_TYPE_SWITCH_WITH_BOOL(input_b.type_flag_, OType, {
-Kernel, xpu>::Launch(
-  s, bc_size_a, input_a.dptr(), bc_a_ptr,
-  k_a_shape, k_a_shape_bc, OpReqType::kWriteTo, ndim);
-Kernel, xpu>::Launch(
-  s, bc_size_b, input_b.dptr(), bc_b_ptr,
-  k_b_shape, k_b_shape_bc, OpReqType::kWriteTo, ndim);
+struct ShapeAndStride aux_data_a, aux_data_b;
+PrepareAUXData(_data_a, k_a_shape, k_a_shape_bc, ndim);
+PrepareAUXData(_data_b, k_b_shape, k_b_shape_bc, ndim);
+if (isCPU) {
+  if (!aux_data_a.shape_changed) {
+Kernel, xpu>::Launch(
+  s, bc_size_a, input_a.dptr(), bc_a_ptr, 
OpReqType::kWriteTo);
+Kernel, xpu>::Launch(
+  s, input_b.Size(), input_b.dptr(), bc_b_ptr,
+  aux_data_b, OpReqType::kWriteTo, ndim);
+  } else if (!aux_data_b.shape_changed) {
+Kernel, xpu>::Launch(
+  s, bc_size_b, input_b.dptr(), bc_b_ptr, 
OpReqType::kWriteTo);
+Kernel, xpu>::Launch(
+  s, input_a.Size(), input_a.dptr(), bc_a_ptr,
+  aux_data_a, OpReqType::kWriteTo, ndim);
+  } else {
+Kernel, xpu>::Launch(
+  s, input_a.Size(), input_a.dptr(), bc_a_ptr,
+  aux_data_a, OpReqType::kWriteTo, ndim);
+Kernel, xpu>::Launch(
+  s, input_b.Size(), input_b.dptr(), bc_b_ptr,
+  aux_data_b, OpReqType::kWriteTo, ndim);
+  }
+} else {
+  Kernel, xpu>::Launch(
+s, bc_size_a, input_a.dptr(), bc_a_ptr,
+aux_data_a, OpReqType::kWriteTo, ndim);
+  Kernel, xpu>::Launch(
+s, bc_size_b, input_b.dptr(), bc_b_ptr,
+aux_data_b, OpReqType::kWriteTo, ndim);
+}
   });
 });
 ans = mshadow::Tensor(output.dptr(),
diff --git a/src/operator/tensor/broadcast_reduce_op.h 
b/src/operator/tensor/broadcast_reduce_op.h
index 5eb0c41..82b4f7d 100644
--- a/src/operator/tensor/broadcast_reduce_op.h
+++ b/src/operator/tensor/broadcast_reduce_op.h
@@ -25,6 +25,7 @@
 #ifndef MXNET_OPERATOR_TENSOR_BROADCAST_REDUCE_OP_H_
 #define MXNET_OPERATOR_TENSOR_BROADCAST_REDUCE_OP_H_
 
+#include 
 #include 
 #include 
 #include 
@@ -1037,34 +1038,182 @@ void ReduceAxesBackwardUseInOut(const nnvm::NodeAttrs& 
attrs,
   ReduceAxesBackwardUseInOutImpl(ctx, small, inputs, req, 
outputs);
 }
 
+namespace {  // unnamed namespace to keep scope of the struct within th
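
For context, broadcast_axis is the operator being tuned: it replicates a size-1 axis to a requested size, and the backport caches the shape/stride bookkeeping (the ShapeAndStride struct) instead of recomputing it per element and passing it through a temp workspace. A minimal call through the 1.x NDArray API:

    import mxnet as mx

    x = mx.nd.array([[1], [2], [3]])               # shape (3, 1)
    y = mx.nd.broadcast_axis(x, axis=1, size=4)    # replicate the size-1 axis to length 4
    print(y.shape)                                 # (3, 4)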

[incubator-mxnet] branch master updated (f83dbac -> 126636c)

2020-07-28 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from f83dbac  remove executor manager from API doc (#18802)
 add 126636c  Fix naming in runtime_functions.sh (#18795)

No new revisions were added by this update.

Summary of changes:
 ci/docker/runtime_functions.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)



[incubator-mxnet] branch v1.x updated: [1.x][LT] Add forward, backward test for linalg.gemm2 (#18784)

2020-07-27 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.x by this push:
 new d009345  [1.x][LT] Add forward, backward test for linalg.gemm2 (#18784)
d009345 is described below

commit d0093458e3be5e76d78750043c4e5a3f01a7d056
Author: Chaitanya Prakash Bapat 
AuthorDate: Mon Jul 27 20:28:43 2020 -0700

[1.x][LT] Add forward, backward test for linalg.gemm2 (#18784)

* added forward, backward test for gemm2

* add backward check

* correct gradient assert

* move test inside linalg_ops

* add shape checks
---
 tests/nightly/test_large_array.py | 20 
 1 file changed, 20 insertions(+)

diff --git a/tests/nightly/test_large_array.py 
b/tests/nightly/test_large_array.py
index 020a707..306c827 100644
--- a/tests/nightly/test_large_array.py
+++ b/tests/nightly/test_large_array.py
@@ -1207,6 +1207,25 @@ def test_linalg():
 assert A.grad[0,0,0] == 4
 assert_almost_equal(A.grad[1,0,0], nd.array([0.4]), rtol=1e-3, 
atol=1e-5)
 
+def check_gemm2():
+def run_gemm2(inp1,inp2):
+inp1.attach_grad()
+inp2.attach_grad()
+with mx.autograd.record():
+out = mx.nd.linalg.gemm2(inp1,inp2)
+return inp1.grad, inp2.grad, out
+
+inp1=mx.nd.ones(shape=(SMALL_Y, LARGE_X))
+inp1[0][0]=0.1
+inp2=mx.nd.ones(shape=(LARGE_X, SMALL_Y))
+inp1_grad, inp2_grad, out= run_gemm2(inp1,inp2)
+assert out.asnumpy()[0][0] == LARGE_X
+assert out.shape == (SMALL_Y, SMALL_Y)
+out.backward()
+assert inp1_grad.shape == (SMALL_Y, LARGE_X)
+assert inp2_grad.shape == (LARGE_X, SMALL_Y)
+assert_almost_equal(inp2_grad.asnumpy()[0][0],49.1)
+
 def check_det():
 def run_det(inp):
 inp.attach_grad()
@@ -1321,6 +1340,7 @@ def test_linalg():
 check_potrf()
 check_potri()
 check_syrk_batch()
+check_gemm2()
 check_det()
 check_inverse()
 check_trmm()
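
The 49.1 expected in check_gemm2 comes straight from the gemm2 gradient: with out = inp1 @ inp2 and an all-ones upstream gradient, the gradient of inp2 is inp1^T @ ones, so entry (0, 0) is the sum of column 0 of inp1, i.e. (SMALL_Y - 1) + 0.1 = 49.1 under the file's SMALL_Y = 50 (that constant is defined elsewhere in test_large_array.py and is assumed here). A toy-sized replay of the same arithmetic:

    import mxnet as mx

    SMALL, LARGE = 5, 7                 # stand-ins for SMALL_Y, LARGE_X
    inp1 = mx.nd.ones((SMALL, LARGE))
    inp1[0][0] = 0.1
    inp2 = mx.nd.ones((LARGE, SMALL))
    inp1.attach_grad()
    inp2.attach_grad()
    with mx.autograd.record():
        out = mx.nd.linalg.gemm2(inp1, inp2)
    out.backward()
    print(inp2.grad[0][0])              # (SMALL - 1) + 0.1 = 4.1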



[incubator-mxnet] branch master updated (9e77e81 -> 74430a9)

2020-07-27 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 9e77e81  Update CUB and include it only for CUDA < 11 (#18799)
 add 74430a9  remove NLL in metric (#18794)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/metric.py | 91 ++--
 tests/python/unittest/test_metric.py | 20 +---
 2 files changed, 26 insertions(+), 85 deletions(-)



[incubator-mxnet] branch master updated: Remove caffe plugin (#18787)

2020-07-25 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new c1db2d5  Remove caffe plugin (#18787)
c1db2d5 is described below

commit c1db2d5636a98084392b90ad3f020a9f9d197852
Author: Leonard Lausen 
AuthorDate: Sat Jul 25 16:58:45 2020 +

Remove caffe plugin (#18787)

* Remove caffe plugin

* Fix

* Remove CXX14 feature flag

* Update test
---
 CMakeLists.txt  |  42 
 docs/static_site/src/pages/api/faq/caffe.md | 148 
 include/mxnet/libinfo.h |   6 -
 plugin/caffe/README.md  |  58 -
 plugin/caffe/caffe.mk   |  32 ---
 plugin/caffe/caffe_blob.cc  |  94 
 plugin/caffe/caffe_blob.h   | 117 --
 plugin/caffe/caffe_common.cc|  48 
 plugin/caffe/caffe_common.h |  97 
 plugin/caffe/caffe_data_iter.cc | 273 --
 plugin/caffe/caffe_fieldentry.h | 113 -
 plugin/caffe/caffe_loss-inl.h   | 303 
 plugin/caffe/caffe_loss.cc  |  73 --
 plugin/caffe/caffe_loss.cu  |  53 -
 plugin/caffe/caffe_op-inl.h | 348 
 plugin/caffe/caffe_op.cc|  74 --
 plugin/caffe/caffe_op.cu|  53 -
 plugin/caffe/caffe_stream.cc|  37 ---
 plugin/caffe/caffe_stream.h |  38 ---
 python/mxnet/gluon/metric.py|   9 -
 python/mxnet/runtime.py |   3 +-
 src/libinfo.cc  |   3 -
 tests/jenkins/run_test.sh   |  56 -
 tests/jenkins/run_test_amzn_linux_gpu.sh|  65 --
 tests/jenkins/run_test_ubuntu.sh|  65 --
 tests/python/unittest/test_runtime.py   |   2 +-
 26 files changed, 2 insertions(+), 2208 deletions(-)

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 688dd42..d3e6c74 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -74,7 +74,6 @@ option(USE_JEMALLOC "Build with Jemalloc support" OFF)
 option(USE_LIBJPEG_TURBO "Use libjpeg-turbo" OFF)
 option(USE_DIST_KVSTORE "Build with DIST_KVSTORE support" OFF)
 option(USE_PLUGINS_WARPCTC "Use WARPCTC Plugins" OFF)
-option(USE_PLUGIN_CAFFE "Use Caffe Plugin" OFF)
 option(USE_CPP_PACKAGE "Build C++ Package" OFF)
 option(USE_MXNET_LIB_NAMING "Use MXNet library naming conventions." ON)
 option(USE_GPROF "Compile with gprof (profiling) flag" OFF)
@@ -521,39 +520,6 @@ if(USE_OPERATOR_TUNING AND USE_OPENMP)
   add_definitions(-DMXNET_USE_OPERATOR_TUNING=1)
 endif()
 
-if(USE_PLUGIN_CAFFE)
-  if(NOT USE_CUDA)
-set(CPU_ONLY ON)
-add_definitions(-DCPU_ONLY=1)
-  endif()
-  if(NOT DEFINED CAFFE_PATH)
-if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/caffe)
-  set(CAFFE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/caffe)
-else()
-  set(CAFFE_PATH $ENV{CAFFE_PATH})
-endif()
-  endif()
-  list(APPEND CMAKE_MODULE_PATH ${CAFFE_PATH}/cmake)
-  include_directories(${CAFFE_PATH}/include)
-  include_directories(${CAFFE_PATH}/build/src)
-  include_directories(${CMAKE_BINARY_DIR}/caffe/include)
-  link_directories(${CAFFE_PATH}/build/lib)
-  if(NOT DEFINED CAFFE_PATH)
-message(FATAL_ERROR "Please set CAFFE_PATH to point to the caffe source 
installation")
-  endif()
-  FILE(GLOB_RECURSE PLUGINS_SOURCE "plugin/caffe/*.cc" "plugin/caffe/*.h")
-  FILE(GLOB_RECURSE PLUGINS_CUSRC "plugin/caffe/*.cu")
-  list(APPEND SOURCE ${PLUGINS_SOURCE})
-  list(APPEND CUDA ${PLUGINS_CUSRC})
-  include_directories(${CMAKE_BINARY_DIR}/include)
-  add_definitions(-DMXNET_USE_CAFFE=1)
-  list(APPEND mxnet_LINKER_LIBS
-protobuf boost_system boost_thread boost_filesystem
-gflags glog caffe
-${Caffe_LINKER_LIBS}
-)
-endif()
-
 if (NOT (EXTRA_OPERATORS STREQUAL ""))
 mxnet_source_group("Extra"   GLOB_RECURSE "${EXTRA_OPERATORS}/*.cc")
 mxnet_source_group("Extra\\Cuda"   GLOB_RECURSE "${EXTRA_OPERATORS}/*.cu")
@@ -640,14 +606,6 @@ if(USE_CUDA)
   link_directories(${CUDAToolkit_LIBRARY_DIR})
 endif()
 
-# unsupported: if caffe is a subdirectory of mxnet, load its CMakeLists.txt as 
well
-if(USE_PLUGIN_CAFFE)
-  if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/caffe)
-add_subdirectory(caffe)
-  endif()
-endif()
-
-
 if(MSVC)
   set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /EHsc")
   set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /EHsc /Gy")
diff --git a/docs/static_site/src/pages/api/faq/caffe.md 
b/docs/static_site/src/pages/api/faq/caffe.md
dele

[incubator-mxnet] branch v1.x updated (e6de5ae -> 85ff00d)

2020-07-24 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch v1.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from e6de5ae  Fix linalg_potri and linalg_potrf operators for large tensor. 
(#18752)
 add 85ff00d  Add Large Tensor Test for linalg_syrk (#18782)

No new revisions were added by this update.

Summary of changes:
 tests/nightly/test_large_array.py | 19 ++-
 1 file changed, 18 insertions(+), 1 deletion(-)



[incubator-mxnet] branch v1.x updated (91d535a -> e6de5ae)

2020-07-24 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch v1.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 91d535a  Fix crash when accessing already destructed static variables 
(#18768) (#18778)
 add e6de5ae  Fix linalg_potri and linalg_potrf operators for large tensor. 
(#18752)

No new revisions were added by this update.

Summary of changes:
 CONTRIBUTORS.md   |  1 +
 src/operator/tensor/la_op-inl.h   | 11 ++-
 tests/nightly/test_large_array.py | 27 +++
 3 files changed, 34 insertions(+), 5 deletions(-)



[incubator-mxnet] branch master updated: set website default version to current stable (1.6) version (#18738)

2020-07-23 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new e31ad77  set website default version to current stable (1.6) version 
(#18738)
e31ad77 is described below

commit e31ad77307cea634df9a8959ccff8e56be7611be
Author: Yang Shi 
AuthorDate: Thu Jul 23 11:33:31 2020 -0700

set website default version to current stable (1.6) version (#18738)

* set website default version - test redirect

* enable first time redirect on all master website pages

* update test code

* remove unnecessary test code

* fix typo

* delete test code
---
 docs/static_site/src/.htaccess | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/docs/static_site/src/.htaccess b/docs/static_site/src/.htaccess
index 2cf7300..dabc51a 100644
--- a/docs/static_site/src/.htaccess
+++ b/docs/static_site/src/.htaccess
@@ -22,6 +22,12 @@ RewriteOptions AllowNoSlash
   
 
 
+# Set default website version to current stable (v1.6)
+RewriteCond %{REQUEST_URI} !^/versions/
+RewriteCond %{HTTP_REFERER} !mxnet.apache.org
+RewriteCond %{HTTP_REFERER} !mxnet.incubator.apache.org
+RewriteRule ^(.*)$ /versions/1.6/$1 [r=307,L]
+
 # TODO temporary fix for issue #18604
 Redirect 302 /api/r/docs/api/R-package/build/mxnet-r-reference-manual.pdf 
https://mxnet-public.s3.us-east-2.amazonaws.com/docs/v1.x/mxnet-r-reference-manual.pdf
 Redirect 302 /api/scala/docs/api/ /versions/1.6/api/scala/docs/api/
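
For illustration, the new rule can be checked from Python with a plain HTTP request; this sketch is not part of the commit, the docs path below is only a hypothetical example, and the outcome depends on the deployed site and on whether a Referer header is sent:

```python
import requests

# A first-time visit (no mxnet.apache.org Referer) to a path outside /versions/
# should now be answered with a 307 redirect into /versions/1.6/.
resp = requests.get("https://mxnet.apache.org/api/python/docs/",
                    allow_redirects=False)
print(resp.status_code)               # expected: 307
print(resp.headers.get("Location"))   # expected to point under /versions/1.6/
```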



[incubator-mxnet] branch master updated (1928117 -> 18af71e)

2020-07-23 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 1928117  Fix crash when accessing already destructed static variables 
(#18768)
 add 18af71e  CI: Migrate remaining Dockerfiles to docker-compose.yml and 
remove unused code (#18771)

No new revisions were added by this update.

Summary of changes:
 ci/Jenkinsfile_docker_cache  |   1 -
 ci/build.py  | 189 +++-
 ci/dev_menu.py   |   1 -
 ci/docker/Dockerfile.build.ubuntu|  51 ++---
 ci/docker/Dockerfile.build.ubuntu_cpu_c  |  35 ---
 ci/docker/Dockerfile.build.ubuntu_cpu_jekyll |  43 +---
 ci/docker/Dockerfile.build.ubuntu_cpu_julia  |  66 --
 ci/docker/Dockerfile.build.ubuntu_cpu_r  |  46 
 ci/docker/Dockerfile.build.ubuntu_cpu_scala  |  53 -
 ci/docker/Dockerfile.build.ubuntu_gpu_tensorrt   |  47 
 ci/docker/Dockerfile.build.ubuntu_rat|  36 ---
 ci/docker/Dockerfile.publish.test.ubuntu1604_cpu |  39 
 ci/docker/Dockerfile.publish.test.ubuntu1604_gpu |  39 
 ci/docker/Dockerfile.publish.test.ubuntu1804_cpu |  41 
 ci/docker/Dockerfile.publish.test.ubuntu1804_gpu |  41 
 ci/docker/Dockerfile.publish.ubuntu1604_cpu  |  44 
 ci/docker/Dockerfile.publish.ubuntu1604_gpu  |  44 
 ci/docker/docker-compose.yml |  31 +++
 ci/docker/install/export_gpg_keys.sh |  23 --
 ci/docker/install/r.gpg  | Bin 1519 -> 0 bytes
 ci/docker/install/sbt.gpg| Bin 2210 -> 0 bytes
 ci/docker/install/tensorrt.sh|  49 
 ci/docker/install/ubuntu_base.sh |  40 
 ci/docker/install/ubuntu_clang.sh|  42 
 ci/docker/install/ubuntu_clojure.sh  |  30 ---
 ci/docker/install/ubuntu_cudnn.sh|  62 --
 ci/docker/install/ubuntu_emscripten.sh   |  41 
 ci/docker/install/ubuntu_gcc8.sh |  23 --
 ci/docker/install/ubuntu_julia.sh|  43 
 ci/docker/install/ubuntu_nightly_tests.sh|  35 ---
 ci/docker/install/ubuntu_r.sh|  50 -
 ci/docker/install/ubuntu_rat.sh  |  34 ---
 ci/docker/install/ubuntu_scala.sh|  31 ---
 ci/docker/runtime_functions.sh   |  18 +-
 ci/docker_cache.py   | 203 -
 ci/docker_cache_requirements |  24 --
 ci/jenkins/Jenkins_steps.groovy  |   5 +-
 ci/test_docker_cache.py  | 272 ---
 ci/windows/test_jl07_cpu.ps1 |  56 -
 ci/windows/test_jl10_cpu.ps1 |  56 -
 tests/nightly/apache_rat_license_check/README.md |   2 +-
 41 files changed, 105 insertions(+), 1881 deletions(-)
 delete mode 100644 ci/docker/Dockerfile.build.ubuntu_cpu_c
 delete mode 100644 ci/docker/Dockerfile.build.ubuntu_cpu_julia
 delete mode 100644 ci/docker/Dockerfile.build.ubuntu_cpu_r
 delete mode 100644 ci/docker/Dockerfile.build.ubuntu_cpu_scala
 delete mode 100644 ci/docker/Dockerfile.build.ubuntu_gpu_tensorrt
 delete mode 100644 ci/docker/Dockerfile.build.ubuntu_rat
 delete mode 100644 ci/docker/Dockerfile.publish.test.ubuntu1604_cpu
 delete mode 100644 ci/docker/Dockerfile.publish.test.ubuntu1604_gpu
 delete mode 100644 ci/docker/Dockerfile.publish.test.ubuntu1804_cpu
 delete mode 100644 ci/docker/Dockerfile.publish.test.ubuntu1804_gpu
 delete mode 100644 ci/docker/Dockerfile.publish.ubuntu1604_cpu
 delete mode 100644 ci/docker/Dockerfile.publish.ubuntu1604_gpu
 delete mode 100755 ci/docker/install/export_gpg_keys.sh
 delete mode 100644 ci/docker/install/r.gpg
 delete mode 100644 ci/docker/install/sbt.gpg
 delete mode 100755 ci/docker/install/tensorrt.sh
 delete mode 100755 ci/docker/install/ubuntu_base.sh
 delete mode 100755 ci/docker/install/ubuntu_clang.sh
 delete mode 100755 ci/docker/install/ubuntu_clojure.sh
 delete mode 100755 ci/docker/install/ubuntu_cudnn.sh
 delete mode 100755 ci/docker/install/ubuntu_emscripten.sh
 delete mode 100755 ci/docker/install/ubuntu_gcc8.sh
 delete mode 100755 ci/docker/install/ubuntu_julia.sh
 delete mode 100755 ci/docker/install/ubuntu_nightly_tests.sh
 delete mode 100755 ci/docker/install/ubuntu_r.sh
 delete mode 100755 ci/docker/install/ubuntu_rat.sh
 delete mode 100755 ci/docker/install/ubuntu_scala.sh
 delete mode 100644 ci/docker_cache.py
 delete mode 100644 ci/docker_cache_requirements
 delete mode 100644 ci/test_docker_cache.py
 delete mode 100644 ci/windows/test_jl07_cpu.ps1
 delete mode 100644 ci/windows/test_jl10_cpu.ps1



[incubator-mxnet] branch master updated: Fix crash when accessing already destructed static variables (#18768)

2020-07-22 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 1928117  Fix crash when accessing already destructed static variables 
(#18768)
1928117 is described below

commit 1928117cee718bcdcd3bc1408940c8747f4c840e
Author: Przemyslaw Tredak 
AuthorDate: Tue Jul 21 23:35:15 2020 -0700

Fix crash when accessing already destructed static variables (#18768)
---
 src/c_api/c_api.cc | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/c_api/c_api.cc b/src/c_api/c_api.cc
index ea39d9a..53ff1e4 100644
--- a/src/c_api/c_api.cc
+++ b/src/c_api/c_api.cc
@@ -1316,6 +1316,7 @@ int MXNotifyShutdown() {
   API_BEGIN();
   mxnet::op::custom::CustomOperator::Get()->Stop();
   Engine::Get()->NotifyShutdown();
+  Engine::Get()->WaitForAll();
   API_END();
 }
 


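The one-line fix above makes the engine itself drain pending work during MXNotifyShutdown. The analogous user-side habit, shown here only as an illustrative sketch and not something this commit requires, is to wait for outstanding asynchronous operations before the interpreter starts tearing down static state:

```python
import mxnet as mx

x = mx.nd.ones((1024, 1024))
y = mx.nd.dot(x, x)   # enqueued on the engine asynchronously

# Block until every pending engine operation has finished, so nothing is
# still in flight when the process begins to shut down.
mx.nd.waitall()
```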

[incubator-mxnet] branch master updated: Fix mx.symbol.numpy._Symbol.__deepcopy__ logic error (#18686)

2020-07-22 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new a330a02  Fix mx.symbol.numpy._Symbol.__deepcopy__ logic error (#18686)
a330a02 is described below

commit a330a022d4c32b9096c4b6d7066a936d6eef59a1
Author: Leonard Lausen 
AuthorDate: Wed Jul 22 06:31:47 2020 +

Fix mx.symbol.numpy._Symbol.__deepcopy__ logic error (#18686)

* Fix mx.symbol.numpy._Symbol.__deepcopy__ logic error

Performed shallow copy instead of deep copy

* Test

* Fix test
---
 tests/python/unittest/test_symbol.py | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/tests/python/unittest/test_symbol.py 
b/tests/python/unittest/test_symbol.py
index 1c84af0..910b6ca 100644
--- a/tests/python/unittest/test_symbol.py
+++ b/tests/python/unittest/test_symbol.py
@@ -479,3 +479,13 @@ def test_infershape_happens_for_all_ops_in_graph():
 
 assert False
 
+def test_symbol_copy():
+a = mx.sym.Variable('a')
+b = copy.copy(a)
+b._set_attr(name='b')
+assert a.name == 'a' and b.name == 'b'
+
+a = mx.sym.Variable('a').as_np_ndarray()
+b = copy.copy(a)
+b._set_attr(name='b')
+assert a.name == 'a' and b.name == 'b'
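
The new test exercises copy.copy; the point of the fix is that copy.deepcopy of a numpy symbol, which previously behaved like a shallow copy, now yields an independent symbol as well. A minimal sketch along the same lines as the test (illustrative, not part of the commit):

```python
import copy
import mxnet as mx

a = mx.sym.Variable('a').as_np_ndarray()
b = copy.deepcopy(a)        # was effectively a shallow copy before this fix
b._set_attr(name='b')
# Renaming the copy no longer touches the original symbol.
assert a.name == 'a' and b.name == 'b'
```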



[incubator-mxnet] branch master updated (146b49e -> bf26bcc)

2020-07-20 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 146b49e  Unittest tolerance handling improvements (#18694)
 add bf26bcc  [NumPy] enable large tensor in np (#18368)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/numpy/multiarray.py | 104 +--
 tests/nightly/test_np_large_array.py |  78 ++
 2 files changed, 165 insertions(+), 17 deletions(-)
 create mode 100644 tests/nightly/test_np_large_array.py
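
A rough illustration of what the change enables, assuming a build compiled with int64 tensor support and roughly 2 GB of free memory; the shape below is only an example:

```python
import mxnet as mx

n = 2**31 + 1                         # more elements than a 32-bit index can address
x = mx.np.ones((n,), dtype='int8')    # roughly a 2 GB buffer
print(x.shape, x.size)                # (2147483649,) 2147483649
```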



[incubator-mxnet] branch master updated: Unittest tolerance handling improvements (#18694)

2020-07-19 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 146b49e  Unittest tolerance handling improvements (#18694)
146b49e is described below

commit 146b49ead32b941f74db694f2d453cb25650d252
Author: Dick Carter 
AuthorDate: Sun Jul 19 14:12:50 2020 -0700

Unittest tolerance handling improvements (#18694)

* Add sm arch 80 to Makefile

* Add TF32 to cuBLAS GEMMs

Signed-off-by: Serge Panev 

* Add CUDA version guards

Signed-off-by: Serge Panev 

* Remove useless TF32 for double and old CUDA version

Signed-off-by: Serge Panev 

* Factorize VERSION_ADJUSTED_TF32_MATH

Signed-off-by: Serge Panev 

* Add TF32 considerations to test_util.py:check_consistency()

* Bypass test_gluon_gpu.py:test_large_models if gmem >32GB

* Default tols in assert_almost_equal() now a function of dtype and ctx

* Expand types listed by default_tols()

* Fix pylint

* All with_seed() tests to waitall in teardown

* Elevate MXNET_TEST_SEED logging to WARNING

* Revert test_gluon_gpu.py:test_rnn_layer to default tols

* Fix test_gluon_model_zoo_gpu.py::test_inference and 
test_operator_gpy.py::test_np_linalg_{solve,tensorinv}

* test_numpy_interoperability.py to not fix seed for rest of CI

* Further fix to test_np_linalg_tensorinv

* Fix test_gluon_data.py:test_dataloader_context when run on 1-GPU system.

* Fix test_operator_gpu.py::test_embedding_with_type

* Fix 
test_operator_gpu.py::{test_*convolution_large_c,test_np_linalg_tensorsolve}

* Remove unneeded print() from test_numpy_interoperability.py

* Unify tol handling of check_consistency() and assert_almost_equal().  
Test tweeks.

* Add tol handling of assert_almost_equal() with number args

* Add tol handling of bool comparisons

* Fix test_numpy_op.py::test_np_random_rayleigh

* Fix test_operator_gpu.py::test_batchnorm_with_type

* Fix test_gluon.py::test_sync_batchnorm in cpu selftest

* Improve unittest failure reporting

* Add to robustness of test_operator_gpu.py::test_embedding_with_type

* Check_consistency() to use equal backward gradients for increased test 
robustness

* Fix test_operator_gpu.py::test_{fully_connected,gemm}.  Add 
default_numeric_eps().

* test_utils.py fix for numeric gradient calc

* Reinstate rtol=1e-2 for test_operator.py::test_order

* Remove auto-cast of check_consistency() input data to least precise dtype 
(not needed)

* Fix test_operator.py::test_{reciprocol,cbrt,rcbrt}_op

* Expand default float64 numeric_eps for test_operator_gpu.py::test_sofmin

* Fix segfault-on-error of @retry decorator. Add test isolation.

* assert_almost_equal() to handle a,b scalars

* Fix test_operator_gpu.py::test_gluon_{mvn,mvn_v1} race

* Fix test_operator_gpu.py::test_flatten_slice_after_conv via scale

* Remove test_utils.py:almost_equal_ignore_nan()

* Fix sample vs. pop variance issue with 
test_numpy_op.py::test_npx_batch_norm

* Expose test_utils.py:effective_dtype() and use to fix 
test_operator_gpu.py::test_np_linalg_svd

* Fix true_divide int_array / int_scalar -> float_array to honor 
np_default_dtype

* Try test_elemwise_binary_ops serial to avoid pytest worker crash

* Fix (log_)softmax backward on empty ndarray

* Temporarily log all CI seeds to troubleshoot seed non-determinism

* Revert "Temporarily log all CI seeds to troubleshoot seed non-determinism"

This reverts commit f60eff20785b812ac4fcd70d51359ee0cbfb3e47.

* Temp log all CI seeds to troubleshoot unwanted seed determinism

* Revert "Add sm arch 80 to Makefile"

This reverts commit f9306cecc53b0633ef5f5b7b000802fbf0d73fe9.

* Same fix of sample vs. pop variance issue, now with 
test_operator_gpu.py::test_batchnorm

* Revert "Temp log all CI seeds to troubleshoot unwanted seed determinism"

This reverts commit ff328efb0be3445690669d5437a6af575ff12b49.

* Marking test_sparse_dot_grad with garbage_expected after teardown error

* Fix flakiness of test_gluon_probability{_v1,_v2}.py::test_gluon_kl{_v1,}

* Temp skip of test_aggregate_duplication on gpu

* Add seeding to test_{numpy,}_contrib_gluon_data_vision.py.  Make created 
files unique.

* Add ndarray module isolation to help debug test_bbox_augmenters worker 
crash

* Marking test_sparse_square_sum serial after pytest worker crash

* Fix flakiness of 
test_gluon_probabili
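
In practice the tolerance rework means most comparisons can simply omit explicit rtol/atol and rely on the dtype- and context-aware defaults. A minimal sketch, illustrative only and assuming mxnet.test_utils from this revision:

```python
import numpy as onp
from mxnet.test_utils import assert_almost_equal

a = onp.random.rand(3, 4).astype('float16')
# No explicit rtol/atol: the defaults are now derived from the input dtype
# (and context) rather than from a single global tolerance.
assert_almost_equal(a, a.copy())
```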

[incubator-mxnet] branch master updated (3ef00b8 -> a77f774)

2020-07-16 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 3ef00b8  Refactoring of Pooled Storage Manager classes (#18582)
 add a77f774  Remove NNPACK integration (#18722)

No new revisions were added by this update.

Summary of changes:
 docs/static_site/src/pages/api/faq/env_var.md|   3 -
 docs/static_site/src/pages/api/faq/nnpack.md | 162 ---
 src/operator/convolution_v1.cc   |   4 -
 src/operator/nn/convolution.cc   |   3 -
 src/operator/nn/fully_connected.cc   |   3 -
 src/operator/nn/pooling.cc   |   3 -
 src/operator/nnpack/nnpack_convolution-inl.h | 124 -
 src/operator/nnpack/nnpack_fully_connected-inl.h | 108 ---
 src/operator/nnpack/nnpack_pooling-inl.h |  91 -
 src/operator/nnpack/nnpack_util.cc   |  37 --
 src/operator/nnpack/nnpack_util.h|  64 -
 11 files changed, 602 deletions(-)
 delete mode 100644 docs/static_site/src/pages/api/faq/nnpack.md
 delete mode 100644 src/operator/nnpack/nnpack_convolution-inl.h
 delete mode 100644 src/operator/nnpack/nnpack_fully_connected-inl.h
 delete mode 100644 src/operator/nnpack/nnpack_pooling-inl.h
 delete mode 100644 src/operator/nnpack/nnpack_util.cc
 delete mode 100644 src/operator/nnpack/nnpack_util.h



[incubator-mxnet] branch master updated: Remove NNPACK integration (#18722)

2020-07-16 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new a77f774  Remove NNPACK integration (#18722)
a77f774 is described below

commit a77f774ed179786fc8429d913a2da1d942528de9
Author: Leonard Lausen 
AuthorDate: Fri Jul 17 05:01:17 2020 +

Remove NNPACK integration (#18722)
---
 docs/static_site/src/pages/api/faq/env_var.md|   3 -
 docs/static_site/src/pages/api/faq/nnpack.md | 162 ---
 src/operator/convolution_v1.cc   |   4 -
 src/operator/nn/convolution.cc   |   3 -
 src/operator/nn/fully_connected.cc   |   3 -
 src/operator/nn/pooling.cc   |   3 -
 src/operator/nnpack/nnpack_convolution-inl.h | 124 -
 src/operator/nnpack/nnpack_fully_connected-inl.h | 108 ---
 src/operator/nnpack/nnpack_pooling-inl.h |  91 -
 src/operator/nnpack/nnpack_util.cc   |  37 --
 src/operator/nnpack/nnpack_util.h|  64 -
 11 files changed, 602 deletions(-)

diff --git a/docs/static_site/src/pages/api/faq/env_var.md 
b/docs/static_site/src/pages/api/faq/env_var.md
index 364fd1d..55e5f38 100644
--- a/docs/static_site/src/pages/api/faq/env_var.md
+++ b/docs/static_site/src/pages/api/faq/env_var.md
@@ -59,9 +59,6 @@ $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
 * MXNET_CPU_PRIORITY_NTHREADS
   - Values: Int ```(default=4)```
   - The number of threads given to prioritized CPU jobs.
-* MXNET_CPU_NNPACK_NTHREADS
-  - Values: Int ```(default=4)```
-  - The number of threads used for NNPACK. NNPACK package aims to provide 
high-performance implementations of some layers for multi-core CPUs. Checkout 
[NNPACK]({{'/api/faq/nnpack'|relative_url}}) to know more about it.
 * MXNET_MP_WORKER_NTHREADS
   - Values: Int ```(default=1)```
   - The number of scheduling threads on CPU given to multiprocess workers. 
Enlarge this number allows more operators to run in parallel in individual 
workers but please consider reducing the overall `num_workers` to avoid thread 
contention (not available on Windows).
diff --git a/docs/static_site/src/pages/api/faq/nnpack.md 
b/docs/static_site/src/pages/api/faq/nnpack.md
deleted file mode 100644
index 84bedee..000
--- a/docs/static_site/src/pages/api/faq/nnpack.md
+++ /dev/null
@@ -1,162 +0,0 @@

-layout: page_category
-title: NNPACK for Multi-Core CPU Support in MXNet
-category: faq
-faq_c: Speed
-question: Can I use nnpack to improve the CPU performance of MXNet?
-permalink: /api/faq/nnpack

-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-### NNPACK for Multi-Core CPU Support in MXNet
-[NNPACK](https://github.com/Maratyszcza/NNPACK) is an acceleration package
-for neural network computations, which can run on x86-64, ARMv7, or ARM64 
architecture CPUs.
-Using NNPACK, higher-level libraries like _MXNet_ can speed up
-the execution on multi-core CPU computers, including laptops and mobile 
devices.
-
-_MXNet_ supports NNPACK for forward propagation (inference only) in 
convolution, max-pooling, and fully-connected layers.
-In this document, we give a high level overview of how to use NNPACK with 
_MXNet_.
-
-
-### Conditions
-The underlying implementation of NNPACK utilizes several acceleration methods,
-including [fft](https://arxiv.org/abs/1312.5851) and 
[winograd](https://arxiv.org/abs/1509.09308).
-These algorithms work better on some special `batch size`, `kernel size`, and 
`stride` settings than on other,
-so depending on the context, not all convolution, max-pooling, or 
fully-connected layers can be powered by NNPACK.
-When favorable conditions for running NNPACKS are not met,
-_MXNet_ will fall back to the default implementation automatically.
-
-NNPACK only supports Linux and OS X systems. Windows is not supported at 
present.
-The following table explains under which conditions NNPACK will work.
-
-| operation  | conditions |
-|:-  |:-- |
-|convolution |2d convolution `and` no-bias=False `and` dilate=(1,1) `and` 
num_group=1 `and` batch-size = 1 or batch-size > 1 && stride = (1,1);|
-|pooling | max-pooling `and` kernel=(2,2) `and` stride=(2,2) `and` 
pooling_convention=full|
-|fully-connected| without any restrictions |
-
-### Build/Install NNPACK with MXNet
-
-If the trained model meets some conditions of using NNPACK,
-you can build MXNet with NNPACK support.
-Follow these simple steps:
-* Build NNPACK shared library with the following commands. _MXNet_ will link 
NNPACK dynamically.
-
-Note: The following NNPACK installation instructions have been tested on 
Ubuntu 14.04 and 16.04.
-
-```bash
-# Install Pip
-$ sudo apt-get update
-$ sudo apt-get install -y python-pip
-$ sudo pip install --upgrade pip
-
-# Install Peach
-$ git clone https://github.com/

[incubator-mxnet] branch master updated (2abf0b8 -> 3ef00b8)

2020-07-16 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 2abf0b8  Initialize docker cache in build.py for docker-compose 
containers (#18724)
 add 3ef00b8  Refactoring of Pooled Storage Manager classes (#18582)

No new revisions were added by this update.

Summary of changes:
 docs/static_site/src/pages/api/faq/env_var.md |  67 ++-
 src/profiler/storage_profiler.h   |  20 +-
 src/storage/cpu_device_storage.h  |  11 +-
 src/storage/cpu_shared_storage_manager.h  |  16 +-
 src/storage/gpu_device_storage.h  |  37 +-
 src/storage/naive_storage_manager.h   |   1 -
 src/storage/pinned_memory_storage.h   |  20 +-
 src/storage/pooled_storage_manager.h  | 590 ++
 src/storage/storage.cc| 269 +++-
 src/storage/storage_manager.h |   2 +-
 src/storage/storage_manager_helpers.h | 162 +++
 tests/python/unittest/test_gluon_data.py  |  15 +-
 12 files changed, 732 insertions(+), 478 deletions(-)
 create mode 100644 src/storage/storage_manager_helpers.h



[incubator-mxnet] branch master updated (8198442 -> 37bdf0b)

2020-07-16 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 8198442  [numpy] symbolic advanced indexing (#18319)
 add 37bdf0b  [MXNET-1453] Support the input whose dimension is greater 
than 6 for Transpose and Rollaxis (#18707)

No new revisions were added by this update.

Summary of changes:
 src/operator/numpy/np_matrix_op-inl.h  |  51 
 src/operator/numpy/np_matrix_op.cc |  17 +++-
 src/operator/tensor/matrix_op-inl.h| 138 +++--
 src/operator/tensor/matrix_op.cc   |   4 +
 tests/python/unittest/test_numpy_op.py |   8 +-
 tests/python/unittest/test_operator.py |   4 +-
 6 files changed, 191 insertions(+), 31 deletions(-)
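
A small sketch of what the change allows (illustrative, not taken from the commit): transposing an array with more than six dimensions, which previously ran into the 6-dimension limit of Transpose and Rollaxis.

```python
import mxnet as mx

x = mx.np.arange(48).reshape((2, 3, 1, 1, 2, 2, 2))   # a 7-D input
y = mx.np.transpose(x, (6, 5, 4, 3, 2, 1, 0))          # reverse all seven axes
print(y.shape)                                         # (2, 2, 2, 1, 1, 3, 2)
```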



[incubator-mxnet] branch master updated: [MXNET-1453] Support the input whose dimension is greater than 6 for Transpose and Rollaxis (#18707)

2020-07-16 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 37bdf0b  [MXNET-1453] Support the input whose dimension is greater 
than 6 for Transpose and Rollaxis (#18707)
37bdf0b is described below

commit 37bdf0bf981d11a89bd248b02f473211d57bc9c6
Author: JackieWu 
AuthorDate: Fri Jul 17 01:25:01 2020 +0800

[MXNET-1453] Support the input whose dimension is greater than 6 for 
Transpose and Rollaxis (#18707)

* support 6+ dims for transpose

* test over

* reorder code

* fix transposeex
---
 src/operator/numpy/np_matrix_op-inl.h  |  51 
 src/operator/numpy/np_matrix_op.cc |  17 +++-
 src/operator/tensor/matrix_op-inl.h| 138 +++--
 src/operator/tensor/matrix_op.cc   |   4 +
 tests/python/unittest/test_numpy_op.py |   8 +-
 tests/python/unittest/test_operator.py |   4 +-
 6 files changed, 191 insertions(+), 31 deletions(-)

diff --git a/src/operator/numpy/np_matrix_op-inl.h 
b/src/operator/numpy/np_matrix_op-inl.h
index 0125feb..0fea76b 100644
--- a/src/operator/numpy/np_matrix_op-inl.h
+++ b/src/operator/numpy/np_matrix_op-inl.h
@@ -134,10 +134,10 @@ void NumpyTranspose(const nnvm::NodeAttrs& attrs,
 const std::vector& inputs,
 const std::vector& req,
 const std::vector& outputs) {
-  const NumpyTransposeParam& param = 
nnvm::get(attrs.parsed);
   if (req[0] == kNullOp) return;
   CHECK(req[0] == kWriteTo || req[0] == kAddTo)
-  << "Transpose only supports kWriteTo, kNullOp and kAddTo";
+  << "Transpose does not support inplace";
+  const NumpyTransposeParam& param = 
nnvm::get(attrs.parsed);
   mxnet::TShape axes;
   if (ndim_is_known(param.axes)) {
 axes = common::CanonicalizeAxes(param.axes);
@@ -147,10 +147,14 @@ void NumpyTranspose(const nnvm::NodeAttrs& attrs,
   axes[i] = axes.ndim() - 1 - i;
 }
   }
+  mshadow::Tensor workspace =
+GetTransposeExWorkspace(ctx, axes);
   if (req[0] == kAddTo) {
-TransposeImpl(ctx.run_ctx, inputs[0], outputs[0], axes);
+TransposeExImpl(ctx.run_ctx, inputs[0], outputs[0],
+axes, workspace);
   } else {
-TransposeImpl(ctx.run_ctx, inputs[0], outputs[0], axes);
+TransposeExImpl(ctx.run_ctx, inputs[0], outputs[0],
+axes, workspace);
   }
 }
 
@@ -779,13 +783,21 @@ void NumpyRollaxisCompute(const nnvm::NodeAttrs& attrs,
   using namespace mshadow::expr;
   CHECK_EQ(inputs.size(), 1U);
   CHECK_EQ(outputs.size(), 1U);
-  CHECK_EQ(req[0], kWriteTo) << "Rollaxis does not support inplace";
-  mxnet::TShape axes;
+  if (req[0] == kNullOp) return;
+  CHECK(req[0] == kWriteTo || req[0] == kAddTo)
+  << "Rollaxis does not support inplace";
   const NumpyRollaxisParam& param = 
nnvm::get(attrs.parsed);
-  axes = NumpyRollaxisShapeImpl(param.axis, param.start, inputs[0].ndim());
-  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, Dtype, {
-TransposeImpl(ctx.run_ctx, inputs[0], outputs[0], axes);
-  })
+  mxnet::TShape axes = NumpyRollaxisShapeImpl(param.axis, param.start, 
inputs[0].ndim());
+
+  mshadow::Tensor workspace =
+GetTransposeExWorkspace(ctx, axes);
+  if (req[0] == kAddTo) {
+TransposeExImpl(ctx.run_ctx, inputs[0], outputs[0],
+axes, workspace);
+  } else {
+TransposeExImpl(ctx.run_ctx, inputs[0], outputs[0],
+axes, workspace);
+  }
 }
 
 template
@@ -796,6 +808,9 @@ void NumpyRollaxisBackward(const nnvm::NodeAttrs ,
 const std::vector ) {
   using namespace mshadow;
   using namespace mshadow::expr;
+  if (req[0] == kNullOp) return;
+  CHECK(req[0] == kWriteTo || req[0] == kAddTo)
+  << "Rollaxis Backward does not support inplace";
   const NumpyRollaxisParam& param = 
nnvm::get(attrs.parsed);
   int axis_origin = param.axis;
   int start_origin = param.start;
@@ -819,11 +834,17 @@ void NumpyRollaxisBackward(const nnvm::NodeAttrs ,
 axis = start_origin;
 start = axis_origin + 1;
   }
-  mxnet::TShape axes;
-  axes = NumpyRollaxisShapeImpl(axis, start, inputs[0].ndim());
-  MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, Dtype, {
-TransposeImpl(ctx.run_ctx, inputs[0], outputs[0], axes);
-  })
+  mxnet::TShape axes = NumpyRollaxisShapeImpl(axis, start, inputs[0].ndim());
+
+  mshadow::Tensor workspace =
+GetTransposeExWorkspace(ctx, axes);
+  if (req[0] == kAddTo) {
+TransposeExImpl(ctx.run_ctx, inputs[0], outputs[0],
+axes, workspace);
+  } else {
+TransposeExImpl(ctx.run_ctx, inputs[0], outputs[0],
+axes, workspace);
+  }
 }
 
 struct NumpyRot90Param : public dmlc::Parameter {
diff --git a/src/operator/numpy/np_matrix_op.cc 
b/src/operator/numpy/np_matr

[incubator-mxnet] branch master updated (e2366e9 -> 6901325)

2020-07-15 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from e2366e9  Refactor scope functionality in Python API (#18619)
 add 6901325  Add the newest mxnet discuss  version. Add d2l.ai (#18663)

No new revisions were added by this update.

Summary of changes:
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)



[incubator-mxnet] branch master updated (12ec046 -> e2366e9)

2020-07-15 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 12ec046  Migrate from private to public jetson toolchain files (#18677)
 add e2366e9  Refactor scope functionality in Python API (#18619)

No new revisions were added by this update.

Summary of changes:
 ci/docker/Dockerfile.build.centos7|  18 +-
 ci/docker/install/requirements|   2 +
 example/profiler/profiler_matmul.py   |   4 +-
 include/mxnet/c_api.h |   8 +
 python/mxnet/attribute.py |  46 ++---
 python/mxnet/base.py  |  63 ---
 python/mxnet/context.py   |  49 ++
 python/mxnet/gluon/block.py   | 204 ++
 python/mxnet/gluon/contrib/estimator/estimator.py |   2 +-
 python/mxnet/gluon/contrib/nn/basic_layers.py |   3 +-
 python/mxnet/gluon/parameter.py   |  19 +-
 python/mxnet/gluon/trainer.py |  13 +-
 python/mxnet/name.py  |  53 ++
 python/mxnet/optimizer/updater.py |   4 +-
 python/mxnet/profiler.py  |  60 ++-
 python/mxnet/symbol/contrib.py|   4 +-
 python/mxnet/symbol/numpy/_symbol.py  |   2 +-
 python/mxnet/symbol/register.py   |  28 +--
 python/mxnet/symbol/symbol.py |  39 -
 python/mxnet/test_utils.py|  23 +--
 python/setup.py   |   2 +-
 src/c_api/c_api_symbolic.cc   |  14 ++
 tests/python/gpu/test_gluon_gpu.py|  14 +-
 tests/python/gpu/test_profiler_gpu.py |   2 +-
 tests/python/mkl/test_mkldnn.py   |   4 +-
 tests/python/unittest/onnx/backend_test.py|  51 +++---
 tests/python/unittest/onnx/mxnet_export_test.py   |  10 +-
 tests/python/unittest/onnx/test_node.py   |   8 +-
 tests/python/unittest/test_autograd.py|   1 +
 tests/python/unittest/test_deferred_compute.py|   3 +-
 tests/python/unittest/test_gluon.py   |  68 
 tests/python/unittest/test_gluon_contrib.py   |   4 +-
 tests/python/unittest/test_gluon_rnn.py   |  63 +++
 tests/python/unittest/test_gluon_trainer.py   |  15 ++
 tests/python/unittest/test_memory_opt.py  |   5 -
 tests/python/unittest/test_numpy_default_dtype.py |   5 -
 tests/python/unittest/test_numpy_op.py|   9 +-
 tests/python/unittest/test_profiler.py|   4 +-
 tests/python/unittest/test_sparse_ndarray.py  |  13 +-
 tests/python/unittest/test_thread_local.py| 153 +++-
 40 files changed, 466 insertions(+), 626 deletions(-)



[incubator-mxnet] branch v1.x updated: Fix the monitor_callback invalid issue during calibration with variable input shapes (#18705)

2020-07-15 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.x by this push:
 new 5cdefeb  Fix the monitor_callback invalid issue during calibration 
with variable input shapes (#18705)
5cdefeb is described below

commit 5cdefeb2b3852827e23f5203b7bb0663883168a5
Author: ciyong 
AuthorDate: Wed Jul 15 23:24:12 2020 +0800

Fix the monitor_callback invalid issue during calibration with variable 
input shapes (#18705)
---
 python/mxnet/executor.py   |  9 ++
 tests/python/unittest/test_operator.py | 53 ++
 2 files changed, 62 insertions(+)

diff --git a/python/mxnet/executor.py b/python/mxnet/executor.py
index 03fa812..d78d7e5 100644
--- a/python/mxnet/executor.py
+++ b/python/mxnet/executor.py
@@ -79,6 +79,7 @@ class Executor(object):
 self._aux_dict = None
 self._output_dict = None
 self._monitor_callback = None
+self._monitor_all = None
 self._ctx = copy.deepcopy(ctx)
 self._grad_req = copy.deepcopy(grad_req)
 self._group2ctx = copy.deepcopy(group2ctx)
@@ -253,6 +254,7 @@ class Executor(object):
 """
 cb_type = ctypes.CFUNCTYPE(None, ctypes.c_char_p, NDArrayHandle, 
ctypes.c_void_p)
 self._monitor_callback = cb_type(_monitor_callback_wrapper(callback))
+self._monitor_all = monitor_all
 check_call(_LIB.MXExecutorSetMonitorCallbackEX(
 self.handle,
 self._monitor_callback,
@@ -477,6 +479,13 @@ class Executor(object):
 executor.arg_arrays = arg_arrays
 executor.grad_arrays = grad_arrays
 executor.aux_arrays = aux_arrays
+if (self._monitor_callback is not None) and (self._monitor_all is not 
None):
+# rebind callback to the new executor if the callback is valid
+check_call(_LIB.MXExecutorSetMonitorCallbackEX(
+handle,
+self._monitor_callback,
+None,
+ctypes.c_int(self._monitor_all)))
 return executor
 
 def debug_str(self):
diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index b22dc7b..e6db0e9 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -8365,6 +8365,59 @@ def test_op_all_names_monitor():
 del os.environ['MXNET_SUBGRAPH_BACKEND']
 
 @with_seed()
+def test_monitor_with_variable_input_shape():
+output = {}
+
+def get_output_min_callback(name, arr):
+name = py_str(name)
+handle = ctypes.cast(arr, NDArrayHandle)
+arr = NDArray(handle, writable=False)
+min_val = mx.ndarray.min(arr).asscalar()
+if name in output:
+output[name] = min(output[name], min_val)
+else:
+output[name] = min_val
+
+def check_result(output, names):
+assert len(output) > 0
+for k, v in output.items():
+assert k in names
+assert v is not None
+
+is_windows = sys.platform.startswith('win')
+if (is_windows):
+# Windows doesn't support set environment variable on the fly, so 
disable it for now
+pass
+else:
+# Disable subgraph in case subgraph will replace symbol
+os.environ['MXNET_SUBGRAPH_BACKEND'] = "NONE"
+
+batch_size = 1
+op_name = 'conv'
+dshape = (batch_size, 3, 10, 10)
+data = mx.sym.Variable('data', shape=dshape)
+sym = mx.sym.Convolution(data, kernel=(1, 1), num_filter=1, 
name=op_name)
+
+mod = mx.module.Module(symbol=sym, label_names=None)
+mod.bind(for_training=False, data_shapes=[('data', dshape)])
+mod.init_params()
+mod._exec_group.execs[0].set_monitor_callback(get_output_min_callback, 
monitor_all=True)
+
+new_dshape = dshape[:-1] + (dshape[-1] + 4,)
+new_data = mx.nd.random.uniform(shape=new_dshape)
+new_data = mx.io.NDArrayIter(data=new_data, batch_size=batch_size)
+new_data = DummyIter(new_data)
+
+for batch in new_data:
+mod.forward(data_batch=batch, is_train=False)
+mx.nd.waitall()
+break
+
+name_list = ['data', 'conv_data', 'conv_weight', 'conv_bias', 
'conv_output']
+check_result(output, name_list)
+del os.environ['MXNET_SUBGRAPH_BACKEND']
+
+@with_seed()
@unittest.skip("test fails intermittently. temporarily disabled till it gets fixed. tracked at https://github.com/apache/incubator-mxnet/issues/13915")
 def test_activation():
 shapes = [(9,), (9, 10), (9, 10, 10), (1, 9, 10, 10)]

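For readers following along, the pattern exercised by this fix can be reproduced outside the test suite with the plain 1.x Executor API. The sketch below is illustrative only (the print_min callback, shapes, and variable names are made up); it registers a monitor callback with monitor_all=True and then reshapes the executor, the step that previously dropped the callback on the new executor:

import ctypes
import mxnet as mx
from mxnet.base import NDArrayHandle, py_str
from mxnet.ndarray import NDArray

def print_min(name, arr):
    # The raw callback receives a C string and an NDArrayHandle; wrap them as the test above does.
    name = py_str(name)
    arr = NDArray(ctypes.cast(arr, NDArrayHandle), writable=False)
    print(name, mx.nd.min(arr).asscalar())

data = mx.sym.Variable('data', shape=(1, 3, 10, 10))
sym = mx.sym.Convolution(data, kernel=(1, 1), num_filter=1, name='conv')
exe = sym.simple_bind(mx.cpu(), data=(1, 3, 10, 10), grad_req='null')
exe.set_monitor_callback(print_min, monitor_all=True)

# reshape() creates a new executor; with this fix the callback is rebound to it.
exe2 = exe.reshape(allow_up_sizing=True, data=(1, 3, 10, 14))
exe2.forward(is_train=False, data=mx.nd.random.uniform(shape=(1, 3, 10, 14)))
mx.nd.waitall()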


[incubator-mxnet] branch master updated (d512814 -> 0dc30a2)

2020-07-14 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d512814  Disable test coverage in MKL builds (#18443)
 add 0dc30a2  Enable GPU Memory profiler tests (#18701)

No new revisions were added by this update.

Summary of changes:
 tests/python/gpu/test_profiler_gpu.py  | 145 +
 tests/python/unittest/test_profiler.py | 120 +--
 2 files changed, 146 insertions(+), 119 deletions(-)
 create mode 100644 tests/python/gpu/test_profiler_gpu.py

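For orientation, the unit tests being enabled here drive MXNet's profiler through its Python front-end. A minimal, hedged sketch of that front-end follows (the filename is a placeholder, and the GPU-memory-specific test cases additionally depend on how MXNet was built and on a GPU being available):

import mxnet as mx

mx.profiler.set_config(profile_all=True, aggregate_stats=True,
                       filename='profile_output.json')   # placeholder output path
mx.profiler.set_state('run')

a = mx.nd.ones((1024, 1024))
mx.nd.dot(a, a).wait_to_read()   # some work to record

mx.profiler.set_state('stop')
print(mx.profiler.dumps())       # aggregated statistics as text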


[incubator-mxnet] branch master updated (d8430b6 -> d512814)

2020-07-14 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d8430b6  Set CMAKE_CUDA_COMPILER in aarch64-linux-gnu-toolchain.cmake 
(#18713)
 add d512814  Disable test coverage in MKL builds (#18443)

No new revisions were added by this update.

Summary of changes:
 ci/docker/runtime_functions.sh  | 70 ++---
 ci/jenkins/Jenkins_steps.groovy | 16 +++---
 2 files changed, 35 insertions(+), 51 deletions(-)



[incubator-mxnet] branch v1.7.x updated: Revert "Fix memory leaks in Gluon (#18328) (#18358)" (#18692)

2020-07-11 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.7.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.7.x by this push:
 new c4c7b11  Revert "Fix memory leaks in Gluon (#18328) (#18358)" (#18692)
c4c7b11 is described below

commit c4c7b11a84fd8f0333131b6b228afb4832fc49de
Author: Ziyi Mu 
AuthorDate: Sat Jul 11 20:44:16 2020 -0700

Revert "Fix memory leaks in Gluon (#18328) (#18358)" (#18692)

This reverts commit c4d9270dde5c091386dbdbd53f8060a73b98cbc9.
---
 python/mxnet/gluon/block.py| 21 ++--
 tests/python/unittest/test_gluon.py| 39 --
 tests/python/unittest/test_thread_local.py |  5 ++--
 3 files changed, 10 insertions(+), 55 deletions(-)

diff --git a/python/mxnet/gluon/block.py b/python/mxnet/gluon/block.py
index 968c787..bed6679 100644
--- a/python/mxnet/gluon/block.py
+++ b/python/mxnet/gluon/block.py
@@ -23,10 +23,8 @@ __all__ = ['Block', 'HybridBlock', 'SymbolBlock']
 import threading
 import copy
 import warnings
-import weakref
-from collections import OrderedDict, defaultdict
-
 import re
+from collections import OrderedDict, defaultdict
 import numpy as np
 
 from ..base import mx_real_t, MXNetError
@@ -48,7 +46,7 @@ class _BlockScope(object):
 _current = threading.local()
 
 def __init__(self, block):
-self._block = weakref.ref(block) if block is not None else None
+self._block = block
 self._counter = {}
 self._old_scope = None
 self._name_scope = None
@@ -57,8 +55,7 @@ class _BlockScope(object):
 def create(prefix, params, hint):
 """Creates prefix and params for new `Block`."""
 current = getattr(_BlockScope._current, "value", None)
-block = current._block() if current is not None else None
-if current is None or block is None:
+if current is None:
 if prefix is None:
 if not hasattr(_name.NameManager._current, "value"):
 _name.NameManager._current.value = _name.NameManager()
@@ -74,25 +71,23 @@ class _BlockScope(object):
 prefix = '%s%d_'%(hint, count)
 current._counter[hint] = count + 1
 if params is None:
-parent = block.params
+parent = current._block.params
 params = ParameterDict(parent.prefix+prefix, parent._shared)
 else:
 params = ParameterDict(params.prefix, params)
-return block.prefix + prefix, params
+return current._block.prefix+prefix, params
 
 def __enter__(self):
-block = self._block()
-if block is None or block._empty_prefix:
+if self._block._empty_prefix:
 return self
 self._old_scope = getattr(_BlockScope._current, "value", None)
 _BlockScope._current.value = self
-self._name_scope = _name.Prefix(block.prefix)
+self._name_scope = _name.Prefix(self._block.prefix)
 self._name_scope.__enter__()
 return self
 
 def __exit__(self, ptype, value, trace):
-block = self._block()
-if block is None or block._empty_prefix:
+if self._block._empty_prefix:
 return
 self._name_scope.__exit__(ptype, value, trace)
 self._name_scope = None
diff --git a/tests/python/unittest/test_gluon.py 
b/tests/python/unittest/test_gluon.py
index cf6bc36..a026825 100644
--- a/tests/python/unittest/test_gluon.py
+++ b/tests/python/unittest/test_gluon.py
@@ -17,7 +17,6 @@
 
 import os
 import tempfile
-import gc
 
 import mxnet as mx
 from mxnet import gluon
@@ -3213,44 +3212,6 @@ def test_reqs_switching_training_inference():
 
 mx.test_utils.assert_almost_equal(grad1, grad2)
 
-def test_no_memory_leak_in_gluon():
-# Collect all other garbage prior to this test. Otherwise the test may fail
-# due to unrelated memory leaks.
-gc.collect()
-
-gc_flags = gc.get_debug()
-gc.set_debug(gc.DEBUG_SAVEALL)
-net = mx.gluon.nn.Dense(10, in_units=10)
-net.initialize()
-del net
-gc.collect()
-gc.set_debug(gc_flags)  # reset gc flags
-
-# Check for leaked NDArrays
-seen = set()
-def has_array(element):
-try:
-if element in seen:
-return False
-seen.add(element)
-except TypeError:  # unhashable
-pass
-
-if isinstance(element, mx.nd._internal.NDArrayBase):
-return True
-elif hasattr(element, '__dict__'):
-return any(has_array(x) for x in vars(element))
-elif isinstance(element, dict):
-return any(has_array(x) for x in element.items())
-else:
-try:
-return any(has_array(x) for x in element)
-except (TypeError, KeyError):
-   

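For context, the change being reverted had replaced the scope's strong reference to its Block with weakref.ref so that deleting the Block actually frees it. The standalone snippet below (plain Python, not MXNet code) illustrates that trade-off: the holder must check whether the referent is still alive before using it.

import weakref

class Scope:
    """Keeps only a weak reference to its owner, so the owner can be collected."""
    def __init__(self, owner):
        self._owner = weakref.ref(owner)

    def owner_prefix(self):
        owner = self._owner()            # returns None once the owner has been freed
        return owner.prefix if owner is not None else None

class Block:
    def __init__(self, prefix):
        self.prefix = prefix
        self.scope = Scope(self)         # no strong Block -> Scope -> Block cycle

b = Block('dense0_')
s = b.scope
print(s.owner_prefix())   # 'dense0_'
del b
print(s.owner_prefix())   # None: the Block was freed even though the Scope survives
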
[incubator-mxnet] branch master updated: [Improvement] Invoke mkldnn and cudnn BatchNorm when axis != 1 (#18504)

2020-07-08 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new beafba7  [Improvement] Invoke mkldnn and cudnn BatchNorm when axis != 
1 (#18504)
beafba7 is described below

commit beafba76395e75c093f99d20ac62e38f48e91012
Author: JackieWu 
AuthorDate: Thu Jul 9 08:01:35 2020 +0800

[Improvement] Invoke mkldnn and cudnn BatchNorm when axis != 1 (#18504)

* fix batch norm when fix_gamma is True

* support gradient accumulation for batch norm

* mkldnn batchnorm support grad add

* unittest for bn

* fix bn arg

* fix lint

* fix mkldnn

* fix mkldnn bn

* fix grad when fixing gamma

* fix naive gpu bn

* fix lint

* invoke mkldnn and cudnn batchnorm when axis != 1

* backport 18500

* change condition

* fix

* fix

* add mkldnn_off for bn

* remove mkldnn_off

* recover save_000800.json

* cast
---
 src/operator/nn/batch_norm.cc  | 12 ---
 src/operator/nn/batch_norm.cu  |  6 ++--
 src/operator/nn/cudnn/cudnn_batch_norm-inl.h   | 26 +++
 src/operator/nn/mkldnn/mkldnn_batch_norm-inl.h | 44 +++---
 4 files changed, 68 insertions(+), 20 deletions(-)

diff --git a/src/operator/nn/batch_norm.cc b/src/operator/nn/batch_norm.cc
index 7e540ca..2fdd31e 100644
--- a/src/operator/nn/batch_norm.cc
+++ b/src/operator/nn/batch_norm.cc
@@ -435,10 +435,14 @@ static bool BatchNormType(const nnvm::NodeAttrs& attrs,
 
 #if MXNET_USE_MKLDNN == 1
 static inline bool SupportMKLDNNBN(const NDArray &input, const BatchNormParam &param) {
-  mxnet::TShape shape = input.shape();
-  return SupportMKLDNN(input) && shape.ndim() == 4
-  && param.axis == mxnet::op::batchnorm::DEFAULT_AXIS
-  && !mxnet::op::batchnorm::disable_mkl;
+  if (mxnet::op::batchnorm::disable_mkl) return false;
+  const mxnet::TShape shape = input.shape();
+  const int ndim = shape.ndim();
+  if (ndim == 0 || shape.Size() == 0) return false;
+  const int dtype = input.dtype();
+  return (dtype == mshadow::kFloat32 ||
+  dtype == mshadow::kBfloat16) &&
+  SupportStorageMKLDNN(input.storage_type());
 }
 
 void BatchNormComputeExCPU(const nnvm::NodeAttrs &attrs,
diff --git a/src/operator/nn/batch_norm.cu b/src/operator/nn/batch_norm.cu
index 0875f05..c7e991f 100644
--- a/src/operator/nn/batch_norm.cu
+++ b/src/operator/nn/batch_norm.cu
@@ -698,8 +698,7 @@ void BatchNormCompute(const nnvm::NodeAttrs& attrs,
 
   param.axis = mxnet::op::batchnorm::GetRealAxis(shape, param.axis);
 #if MXNET_USE_CUDNN == 1
-  if (!param.use_global_stats && !param.cudnn_off
-  && param.axis == mxnet::op::batchnorm::DEFAULT_AXIS) {
+  if (!param.use_global_stats && !param.cudnn_off) {
 MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
    GetCuDNNOp<DType>(param).Forward(ctx, in_data, req, outputs, aux_states);
 })
@@ -727,8 +726,7 @@ void BatchNormGradCompute(const nnvm::NodeAttrs& attrs,
 
   param.axis = mxnet::op::batchnorm::GetRealAxis(shape, param.axis);
 #if MXNET_USE_CUDNN == 1
-  if (!param.use_global_stats && !param.cudnn_off
-  && param.axis == mxnet::op::batchnorm::DEFAULT_AXIS) {
+  if (!param.use_global_stats && !param.cudnn_off) {
 MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
    GetCuDNNOp<DType>(param).Backward(ctx, inputs, req, outputs);
 })
diff --git a/src/operator/nn/cudnn/cudnn_batch_norm-inl.h 
b/src/operator/nn/cudnn/cudnn_batch_norm-inl.h
index 13db44d..340c2f3 100644
--- a/src/operator/nn/cudnn/cudnn_batch_norm-inl.h
+++ b/src/operator/nn/cudnn/cudnn_batch_norm-inl.h
@@ -262,15 +262,27 @@ class CuDNNBatchNormOp {
 
  private:
   void Init(const TBlob &in_data) {
-if (in_data.ndim() == 4) {
-  for (int i = 0; i < 4; ++i)
-shape_[i] = in_data.shape_[i];
+CHECK_GE(param_.axis, 0);
+CHECK_LT(param_.axis, in_data.ndim());
+if (param_.axis == 1) {
+  if (in_data.ndim() == 4) {
+for (int i = 0; i < 4; ++i)
+  shape_[i] = in_data.shape_[i];
+  } else {
+// when in_data.ndim() != 4
+shape_[0] = in_data.shape_[0];
+shape_[1] = in_data.ndim() > 1 ? in_data.shape_[1] : 1;
+shape_[2] = 1;
+shape_[3] = static_cast(in_data.shape_.ProdShape(2,
+  in_data.ndim()));
+  }
 } else {
-  // when in_data.ndim() != 4
-  shape_[0] = in_data.shape_[0];
-  shape_[1] = in_data.ndim() > 1 ? in_data.shape_[1] : 1;
+  // reshape to (N, C, 1, D), C is the `param_.axis` dimension
+  shape_[0] = static_cast(in_data.shape_.ProdShape(0, param_.axis));
+  shape_[1] = in_data.shape_[param_.axis];
   shape_[2] = 1;
-  shape_[3

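As a usage note, the change above matters for models that normalize over a non-default axis, for example channels-last (NHWC) layouts. The Gluon layer API itself is unchanged by this commit; a minimal sketch of such a configuration:

import mxnet as mx
from mxnet.gluon import nn

# BatchNorm over the last (channel) axis of NHWC data; with #18504 this
# non-default-axis path can be served by the MKL-DNN / cuDNN kernels.
bn = nn.BatchNorm(axis=3)
bn.initialize()

x = mx.nd.random.uniform(shape=(2, 10, 10, 3))   # N, H, W, C
y = bn(x)
print(y.shape)                                   # (2, 10, 10, 3)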

[incubator-mxnet] branch master updated (54c0155 -> b4b8b80)

2020-07-07 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 54c0155  User Feedback Widget (#18639)
 add b4b8b80  Gluon.probability (#18403)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/__init__.py |2 +
 .../mxnet/gluon/probability}/__init__.py   |7 +-
 .../mxnet/gluon/probability/block}/__init__.py |5 +-
 .../gluon/probability/block/stochastic_block.py|  134 ++
 .../gluon/probability/distributions/__init__.py|   86 +
 .../gluon/probability/distributions/bernoulli.py   |  139 ++
 .../mxnet/gluon/probability/distributions/beta.py  |   97 +
 .../gluon/probability/distributions/binomial.py|  145 ++
 .../gluon/probability/distributions/categorical.py |  168 ++
 .../gluon/probability/distributions/cauchy.py  |   96 +
 .../distributions/chi2.py} |   38 +-
 .../gluon/probability/distributions/constraint.py  |  548 +
 .../gluon/probability/distributions/dirichlet.py   |  102 +
 .../probability/distributions/distribution.py  |  198 ++
 .../gluon/probability/distributions/divergence.py  |  382 +++
 .../gluon/probability/distributions/exp_family.py  |   68 +
 .../gluon/probability/distributions/exponential.py |  110 +
 .../probability/distributions/fishersnedecor.py|  107 +
 .../mxnet/gluon/probability/distributions/gamma.py |  102 +
 .../gluon/probability/distributions/geometric.py   |  133 ++
 .../gluon/probability/distributions/gumbel.py  |  109 +
 .../gluon/probability/distributions/half_cauchy.py |   81 +
 .../gluon/probability/distributions/half_normal.py |   82 +
 .../gluon/probability/distributions/independent.py |   94 +
 .../gluon/probability/distributions/laplace.py |  143 ++
 .../gluon/probability/distributions/multinomial.py |  125 +
 .../distributions/multivariate_normal.py   |  174 ++
 .../probability/distributions/negative_binomial.py |  140 ++
 .../gluon/probability/distributions/normal.py  |  166 ++
 .../distributions/one_hot_categorical.py   |  105 +
 .../gluon/probability/distributions/pareto.py  |   83 +
 .../gluon/probability/distributions/poisson.py |  110 +
 .../probability/distributions/relaxed_bernoulli.py |  138 ++
 .../distributions/relaxed_one_hot_categorical.py   |  177 ++
 .../gluon/probability/distributions/studentT.py|  119 +
 .../distributions/transformed_distribution.py  |  105 +
 .../gluon/probability/distributions/uniform.py |  101 +
 .../mxnet/gluon/probability/distributions/utils.py |  202 ++
 .../gluon/probability/distributions/weibull.py |   85 +
 .../transformation}/__init__.py|7 +-
 .../gluon/probability/transformation/domain_map.py |  123 +
 .../probability/transformation/transformation.py   |  305 +++
 python/mxnet/ndarray/numpy_extension/random.py |2 +-
 src/operator/random/multisample_op.cc  |   17 +-
 src/operator/random/multisample_op.h   |6 +-
 src/operator/random/sample_op.cc   |1 +
 tests/python/gpu/test_operator_gpu.py  |2 +
 tests/python/unittest/test_gluon_probability_v1.py | 2435 
 tests/python/unittest/test_gluon_probability_v2.py | 2365 +++
 49 files changed, 10233 insertions(+), 36 deletions(-)
 copy {plugin/opencv => python/mxnet/gluon/probability}/__init__.py (88%)
 copy {plugin/opencv => python/mxnet/gluon/probability/block}/__init__.py (93%)
 create mode 100644 python/mxnet/gluon/probability/block/stochastic_block.py
 create mode 100644 python/mxnet/gluon/probability/distributions/__init__.py
 create mode 100644 python/mxnet/gluon/probability/distributions/bernoulli.py
 create mode 100644 python/mxnet/gluon/probability/distributions/beta.py
 create mode 100644 python/mxnet/gluon/probability/distributions/binomial.py
 create mode 100644 python/mxnet/gluon/probability/distributions/categorical.py
 create mode 100644 python/mxnet/gluon/probability/distributions/cauchy.py
 copy python/mxnet/gluon/{__init__.py => probability/distributions/chi2.py} 
(52%)
 create mode 100644 python/mxnet/gluon/probability/distributions/constraint.py
 create mode 100644 python/mxnet/gluon/probability/distributions/dirichlet.py
 create mode 100644 python/mxnet/gluon/probability/distributions/distribution.py
 create mode 100644 python/mxnet/gluon/probability/distributions/divergence.py
 create mode 100644 python/mxnet/gluon/probability/distributions/exp_family.py
 create mode 100644 python/mxnet/gluon/probability/distributions/exponential.py
 create mode 100644 
python/mxnet/gluon/probability/distributions/fishersnedecor.py
 create mode 100644 python/mxnet/gluon/probability/distributions/gamma.py
 create mode 100644 python/mxnet/gluon/probability/distributions/geometric.py
 create mode 100644 python/mxnet/gluon/pr

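A rough sketch of the new module's surface, inferred from the file names above (distributions/normal.py and friends) and the added tests; exact constructor and method signatures should be checked against tests/python/unittest/test_gluon_probability_v2.py rather than taken from this snippet:

import mxnet as mx
from mxnet import npx
from mxnet.gluon.probability import Normal

npx.set_np()                                     # the v2 tests exercise the numpy interface

dist = Normal(loc=mx.np.zeros((2,)), scale=mx.np.ones((2,)))
samples = dist.sample()                          # one draw per batch element
logp = dist.log_prob(samples)                    # log-density evaluated at the samples
print(samples.shape, logp.shape)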

[incubator-mxnet] branch v1.7.x updated: [v1.7.x] backport mixed type binary ops to v1.7.x (#18649)

2020-07-05 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.7.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.7.x by this push:
 new 477affe  [v1.7.x] backport mixed type binary ops to v1.7.x (#18649)
477affe is described below

commit 477affeef45c2630825306138c07304727a5e18c
Author: Yijun Chen 
AuthorDate: Sun Jul 5 14:30:54 2020 +0800

[v1.7.x] backport mixed type binary ops to v1.7.x (#18649)

* Fix Windows GPU CI (#17962)

Update Windows CI to use VS 2019 and enable x64 bit toolchain. Previously 
we are using an older 32 bit toolchain causing OOM errors during linking. 
Switching to x64 bit toolchain on the older VS version previously used by the 
CI was attempted in #17912 and did not work. Update to Cuda 10.2 as it is 
required by VS 2019. Switch to ninja-build on Windows to speed up build as 
ninja-build is now preinstalled. Remove logic to install cmake 3.16 on every PR 
as cmake 3.17 is now preinstalled. [...]

Co-authored-by: vexilligera 

* backport mixed type

Co-authored-by: Leonard Lausen 
Co-authored-by: vexilligera 
---
 .gitignore |   4 +
 ci/build_windows.py|  68 +++
 include/mxnet/imperative.h |  11 ++
 python/mxnet/symbol/numpy/_symbol.py   |   8 +-
 src/common/utils.h |  19 +++
 src/operator/contrib/gradient_multiplier_op.cc |   6 +-
 src/operator/mshadow_op.h  |  83 +++--
 src/operator/mxnet_op.h|   9 +-
 .../numpy/np_elemwise_broadcast_logic_op.cc|  18 +--
 src/operator/numpy/np_elemwise_broadcast_op.cc | 136 +++--
 src/operator/numpy/np_elemwise_broadcast_op.cu |  41 +++
 src/operator/numpy/np_elemwise_broadcast_op.h  | 112 +++--
 .../numpy/np_elemwise_broadcast_op_extended.cc |  68 +--
 src/operator/numpy/np_matrix_op-inl.h  |   4 +-
 src/operator/numpy/np_true_divide-inl.h| 124 ++-
 src/operator/numpy/np_true_divide.cc   |  40 +++---
 src/operator/numpy/np_true_divide.cu   |   4 +
 src/operator/operator_tune.cc  |   2 +
 src/operator/tensor/elemwise_binary_broadcast_op.h |   4 +-
 src/operator/tensor/elemwise_binary_scalar_op.h| 123 +++
 .../tensor/elemwise_binary_scalar_op_basic.cc  |  49 
 .../tensor/elemwise_binary_scalar_op_extended.cc   |  36 ++
 .../tensor/elemwise_binary_scalar_op_logic.cc  |   3 +-
 tests/python/unittest/test_numpy_op.py | 103 +---
 tests/python/unittest/test_symbol.py   |   2 -
 25 files changed, 568 insertions(+), 509 deletions(-)

diff --git a/.gitignore b/.gitignore
index c50d1ec..9fafdb1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -121,6 +121,10 @@ cmake_install.cmake
 # Mac OS X
 .DS_Store
 
+# Windows
+windows_package.7z
+windows_package
+
 #Notebook Automated Test
 !tests/nightly/test_tutorial_config.txt
 !tests/nightly/TestNotebook
diff --git a/ci/build_windows.py b/ci/build_windows.py
index 2590d21..b9c17a8 100755
--- a/ci/build_windows.py
+++ b/ci/build_windows.py
@@ -33,13 +33,15 @@ import time
 import zipfile
 from distutils.dir_util import copy_tree
 from enum import Enum
-from subprocess import check_call
+from subprocess import check_call, call
 
 from util import *
 
 KNOWN_VCVARS = {
+# https://gitlab.kitware.com/cmake/cmake/issues/18920
 'VS 2015': r'C:\Program Files (x86)\Microsoft Visual Studio 
14.0\VC\bin\x86_amd64\vcvarsx86_amd64.bat',
-'VS 2017': r'C:\Program Files (x86)\Microsoft Visual 
Studio\2017\Community\VC\Auxiliary\Build\vcvarsx86_amd64.bat'
+'VS 2017': r'C:\Program Files (x86)\Microsoft Visual 
Studio\2017\Community\VC\Auxiliary\Build\vcvarsx86_amd64.bat',
+'VS 2019': r'C:\Program Files (x86)\Microsoft Visual 
Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat',
 }
 
 
@@ -54,6 +56,8 @@ class BuildFlavour(Enum):
 
 CMAKE_FLAGS = {
 'WIN_CPU': (
+'-DCMAKE_C_COMPILER=cl '
+'-DCMAKE_CXX_COMPILER=cl '
 '-DUSE_CUDA=OFF '
 '-DUSE_CUDNN=OFF '
 '-DENABLE_CUDA_RTC=OFF '
@@ -67,6 +71,8 @@ CMAKE_FLAGS = {
 '-DCMAKE_BUILD_TYPE=Release')
 
 , 'WIN_CPU_MKLDNN': (
+'-DCMAKE_C_COMPILER=cl '
+'-DCMAKE_CXX_COMPILER=cl '
 '-DUSE_CUDA=OFF '
 '-DUSE_CUDNN=OFF '
 '-DENABLE_CUDA_RTC=OFF '
@@ -80,6 +86,8 @@ CMAKE_FLAGS = {
 '-DCMAKE_BUILD_TYPE=Release')
 
 , 'WIN_CPU_MKLDNN_MKL': (
+'-DCMAKE_C_COMPILER=cl '
+'-DCMAKE_CXX_COMPILER=cl '
 '-DUSE_CUDA=OFF '
 '-DUSE_CUDNN=OFF '
 '-DENABLE_CUDA_RTC=OFF '
@@ -93,6 +101,8 @@ CMAKE_FLAGS = {
 '-DCMAKE_BUILD_TYPE=Release

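To illustrate what "mixed type binary ops" refers to here, a small hedged example with the mxnet.numpy interface: binary operators whose operands have different dtypes promote to a common type instead of requiring identical dtypes. The dtypes below are illustrative; the exact promotion rules are those of the backported operators.

import mxnet as mx
from mxnet import npx
npx.set_np()

a = mx.np.array([1, 2, 3], dtype='float32')
b = mx.np.array([1, 2, 3], dtype='int32')

c = a + b                  # mixed float32/int32 operands are promoted
print(c, c.dtype)

d = a > 1.5                # comparison ops return boolean arrays
print(d, d.dtype)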

[incubator-mxnet] branch master updated (becb9ca -> 638622f)

2020-06-29 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from becb9ca  Remove mention of nightly in pypi (#18635)
 add 638622f  Improve performance of broadcast_axis on CPU (#17882)

No new revisions were added by this update.

Summary of changes:
 src/operator/numpy/np_matmul_op-inl.h |  37 ++--
 src/operator/tensor/broadcast_reduce_op.h | 149 --
 2 files changed, 172 insertions(+), 14 deletions(-)

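The operator whose CPU kernel is optimized here keeps its existing signature; for reference, a minimal usage example:

import mxnet as mx

x = mx.nd.random.uniform(shape=(1, 3, 1))
y = mx.nd.broadcast_axis(x, axis=(0, 2), size=(4, 5))   # replicate along axes 0 and 2
print(y.shape)                                          # (4, 3, 5)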


[incubator-mxnet] branch master updated: Enhance license checker to cover multiple license header and md files (#18633)

2020-06-28 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new b12abbf  Enhance license checker to cover multiple license header and 
md files (#18633)
b12abbf is described below

commit b12abbfb356be93f8c24d427c72448f91d1980ec
Author: ciyong 
AuthorDate: Mon Jun 29 11:14:34 2020 +0800

Enhance license checker to cover multiple license header and md files 
(#18633)
---
 LICENSE| 60 ---
 src/operator/numpy/np_einsum_op-inl.h  | 19 +
 .../nightly/apache_rat_license_check/rat-excludes  |  1 -
 tests/python/unittest/onnx/README.md   | 17 +
 tools/license_header.py| 87 ++
 5 files changed, 143 insertions(+), 41 deletions(-)

diff --git a/LICENSE b/LICENSE
index 235fbc3..9aa20d1 100644
--- a/LICENSE
+++ b/LICENSE
@@ -216,7 +216,7 @@
 The following components are provided under an Apache 2.0 license.
 
 1. MXNet Cpp-package - For details, /cpp-package/LICENSE
- Copyright (c) 2015-2016 by Contributors 
+ Copyright (c) 2015-2016 by Contributors
 2. MXNet rcnn - For details, see, example/rcnn/LICENSE
  Copyright (c) 2014, 2015, The Regents of the University of California 
(Regents)
 3. MXNet scala-package - For details, see, scala-package/LICENSE
@@ -226,9 +226,9 @@
 5. 3rdparty/dlpack - For details, see, 3rdparty/dlpack/LICENSE
  Copyright 2017 by Contributors
 6. 3rdparty/dmlc-core - For details, see, 3rdparty/dmlc-core/LICENSE
- Copyright (c) 2015 by Contributors 
+ Copyright (c) 2015 by Contributors
  Copyright 2015 by dmlc-core developers
- Copyright by Contributors 
+ Copyright by Contributors
 7. 3rdparty/mshadow - For details, see, 3rdparty/mshadow/LICENSE
  Copyright (c) 2014-2016 by Contributors
  Copyright by Contributors
@@ -237,7 +237,7 @@
  Copyright 2018 by Contributors
  Copyright (c) 2018 by Xilinx, Contributors
 9. 3rdparty/tvm/dmlc-core - For details, see, 
3rdparty/tvm/3rdparty/dmlc-core/LICENSE
- Copyright (c) 2015 by Contributors 
+ Copyright (c) 2015 by Contributors
 10. 3rdparty/tvm/dlpack - For details, see, 
3rdparty/tvm/3rdparty/dlpack/LICENSE
  Copyright (c) 2015-2017 by Contributors
  Copyright by Contributors
@@ -343,6 +343,12 @@
 11. Numpy einsum operator - For details, see 
src/operator/numpy/np_einsum_op-inl.h
  Copyright (c) 2005-2019, NumPy Developers.
  Copyright (c) 2019, The Apache Software Foundation.
+12. Numpy einsum operator - For details, see 
src/operator/numpy/np_einsum_path_op-inl.h
+ Copyright (c) 2005-2019, NumPy Developers.
+ Copyright (c) 2019, The Apache Software Foundation.
+13. Numpy einsum operator - For details, see 
src/operator/numpy/np_einsum_op.cc
+ Copyright (c) 2005-2019, NumPy Developers.
+ Copyright (c) 2019, The Apache Software Foundation.
 
 
===
 2-clause BSD licenses
@@ -385,14 +391,18 @@
 5. im2col.cuh - For details, see, src/operator/nn/im2col.cuh
  Copyright (c) 2014-2017 The Regents of the University of California 
(Regents)
  Copyright (c) 2014-2017, the respective contributors
-
 6. deformable_im2col.h - For details, see, 
src/operator/contrib/nn/deformable_im2col.h
  Copyright (c) 2014-2017 The Regents of the University of California 
(Regents)
  Copyright (c) 2014-2017, the respective contributors
-
 7. deformable_im2col.cuh - For details, see, 
src/operator/contrib/nn/deformable_im2col.cuh
  Copyright (c) 2014-2017 The Regents of the University of California 
(Regents)
  Copyright (c) 2014-2017, the respective contributors
+8. modulated_deformable_im2col.h - For details, see, 
src/operator/contrib/nn/modulated_deformable_im2col.h
+ Copyright (c) 2014-2017 The Regents of the University of California 
(Regents)
+ Copyright (c) 2014-2017, the respective contributors
+9. modulated_deformable_im2col.cuh - For details, see, 
src/operator/contrib/nn/modulated_deformable_im2col.cuh
+ Copyright (c) 2014-2017 The Regents of the University of California 
(Regents)
+ Copyright (c) 2014-2017, the respective contributors
 
 
 COPYRIGHT
@@ -667,7 +677,7 @@
 
 
 
===
-
+
 13. FindJeMalloc.cmake
 For details, see cmake/Modules/FindJeMalloc.cmake
 
@@ -690,14 +700,14 @@
 # execute, and transmit the Software, and to prepare derivative works of 
the
 # Software, and to permit third-parties to whom


[incubator-mxnet] branch master updated: use new mxnet.gluon.block APIs (#18601)

2020-06-24 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 1fcc7ea  use new mxnet.gluon.block APIs (#18601)
1fcc7ea is described below

commit 1fcc7ea8b8f5dfebd3f5440ffe9e0c7d4b13b90f
Author: RuRo 
AuthorDate: Wed Jun 24 12:03:20 2020 +0300

use new mxnet.gluon.block APIs (#18601)
---
 tests/python/unittest/onnx/test_node.py | 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/tests/python/unittest/onnx/test_node.py 
b/tests/python/unittest/onnx/test_node.py
index 00e557c..351ac5e 100644
--- a/tests/python/unittest/onnx/test_node.py
+++ b/tests/python/unittest/onnx/test_node.py
@@ -123,7 +123,7 @@ class TestNode(unittest.TestCase):
 mx_op = mx_op(**attrs)
 mx_op.initialize()
 mx_op(mx.nd.zeros(input_shape))
-params = {k: v.data() for k, v in 
mx_op.collect_params().items()}
+params = {p.name: p.data() for p in 
mx_op.collect_params().values()}
 outsym = mx_op(input_sym)
 else:
 params = {}
@@ -203,10 +203,9 @@ export_test_cases = [
 ("test_expand", "Expand", mx.sym.broadcast_to, (2,1,3,1), {'shape': 
(2,1,3,1)}),
 ("test_tile", "Tile", mx.sym.tile, (2,1,3,1), {'reps': (2,3)}),
 ("test_topk", "TopK", mx.sym.topk, (2, 10, 2), {'k': 3, 'axis': 1, 
'ret_typ': 'both', 'dtype': np.int64}),
-("test_slice_axis", "Slice", mx.sym.slice_axis, (2, 10, 2), {'begin': 3, 
'end': 7, 'axis': 1})
-# https://github.com/apache/incubator-mxnet/issues/18596
-# ("test_LSTM", "LSTM", mx.gluon.rnn.LSTM, (3,1,2), {'hidden_size': 3}),
-# ("test_BiLSTM", "LSTM", mx.gluon.rnn.LSTM, (3,1,2), {'hidden_size': 3, 
'bidirectional': True}),
+("test_slice_axis", "Slice", mx.sym.slice_axis, (2, 10, 2), {'begin': 3, 
'end': 7, 'axis': 1}),
+("test_LSTM", "LSTM", mx.gluon.rnn.LSTM, (3,1,2), {'hidden_size': 3}),
+("test_BiLSTM", "LSTM", mx.gluon.rnn.LSTM, (3,1,2), {'hidden_size': 3, 
'bidirectional': True}),
 ]
 
 if __name__ == '__main__':
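
For reference, a minimal sketch of the parameter-extraction pattern the diff above switches to; the Dense layer and input shape are illustrative assumptions, not taken from the test suite:

    import mxnet as mx

    # Build and initialize a small block so collect_params() has something to return.
    net = mx.gluon.nn.Dense(4)
    net.initialize()
    net(mx.nd.zeros((1, 3)))  # run once so deferred shapes are resolved

    # New-style extraction keyed by Parameter.name, matching the change above.
    params = {p.name: p.data() for p in net.collect_params().values()}
    print(sorted(params))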



[incubator-mxnet] branch master updated (56cfd9c -> 74fcb99)

2020-06-22 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 56cfd9c  Use chain.from_iterable in artifact_repository.py (#18578)
 add 74fcb99  redirect api reference on v-master to v1.6 (#18607)

No new revisions were added by this update.

Summary of changes:
 docs/static_site/src/.htaccess | 8 
 1 file changed, 8 insertions(+)



[incubator-mxnet] branch master updated: Use chain.from_iterable in artifact_repository.py (#18578)

2020-06-22 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 56cfd9c  Use chain.from_iterable in artifact_repository.py (#18578)
56cfd9c is described below

commit 56cfd9c272e81988682db6fde1b9205becc6a235
Author: Ram Rachum 
AuthorDate: Mon Jun 22 21:23:04 2020 +0300

Use chain.from_iterable in artifact_repository.py (#18578)
---
 cd/utils/artifact_repository.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cd/utils/artifact_repository.py b/cd/utils/artifact_repository.py
index be7d383..41893d9 100644
--- a/cd/utils/artifact_repository.py
+++ b/cd/utils/artifact_repository.py
@@ -496,7 +496,7 @@ def sanitize_path_array(paths: List[str]) -> List[str]:
 :return: A sanitized list of paths
 :raises FileNotFoundError if a file does not exist
 """
-expanded_paths = list(chain(*[glob.glob(path.strip()) for path in paths if 
path.strip() != '']))
+expanded_paths = list(chain.from_iterable(glob.glob(path.strip()) for path 
in paths if path.strip() != ''))
 return [path.strip() for path in expanded_paths if path.strip() != '' and 
is_file(path)]
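
A small illustration (not from the commit) of the difference between the two flattening idioms: chain(*[...]) first materializes the outer list of globbed results, while chain.from_iterable consumes them lazily; both yield the same flattened sequence.

    from itertools import chain

    nested = [["a", "b"], [], ["c"]]

    flat_eager = list(chain(*[group for group in nested]))            # builds the outer list first
    flat_lazy = list(chain.from_iterable(group for group in nested))  # iterates lazily
    assert flat_eager == flat_lazy == ["a", "b", "c"]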
 
 



[incubator-mxnet] branch master updated (c1098aa -> 2fbec60)

2020-06-22 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from c1098aa  Switch to cached op in the testing suite (#18579)
 add 2fbec60  graph executor c api removal  (#18598)

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt |   10 +-
 Makefile   |   44 +-
 R-package/.Rbuildignore|8 -
 R-package/.gitignore   |   10 -
 R-package/DESCRIPTION  |   36 -
 R-package/LICENSE  |  202 --
 R-package/Makefile |   33 -
 R-package/R/callback.R |  176 --
 R-package/R/context.R  |   64 -
 R-package/R/executor.R |   89 -
 R-package/R/initializer.R  |  118 -
 R-package/R/io.R   |   75 -
 R-package/R/kvstore.R  |   29 -
 R-package/R/lr_scheduler.R |   92 -
 R-package/R/metric.R   |  134 -
 R-package/R/model.R|  716 -
 R-package/R/model.rnn.R|  370 ---
 R-package/R/mx.io.bucket.iter.R|  122 -
 R-package/R/ndarray.R  |  217 --
 R-package/R/optimizer.R|  608 -
 R-package/R/profiler.R |   47 -
 R-package/R/random.R   |   94 -
 R-package/R/rnn.infer.R|  286 --
 R-package/R/symbol.R   |  264 --
 R-package/R/util.R |   77 -
 R-package/R/viz.graph.R|  167 --
 R-package/R/zzz.R  |   73 -
 R-package/README.md|   31 -
 R-package/demo/00Index |6 -
 R-package/demo/basic_bench.R   |   35 -
 R-package/demo/basic_executor.R|   50 -
 R-package/demo/basic_kvstore.R |   31 -
 R-package/demo/basic_ndarray.R |   38 -
 R-package/demo/basic_random.R  |   27 -
 R-package/demo/basic_symbol.R  |   30 -
 R-package/dummy.NAMESPACE  |   16 -
 R-package/src/Makevars |4 -
 R-package/src/Makevars.win |2 -
 R-package/src/base.h   |  397 ---
 R-package/src/executor.cc  |  287 --
 R-package/src/executor.h   |  222 --
 R-package/src/export.cc|  144 -
 R-package/src/export.h |   68 -
 R-package/src/im2rec.cc|  288 --
 R-package/src/im2rec.h |   67 -
 R-package/src/io.cc|  246 --
 R-package/src/io.h |  227 --
 R-package/src/kvstore.cc   |  204 --
 R-package/src/kvstore.h|  114 -
 R-package/src/mxnet.cc |   94 -
 R-package/src/name.h   |   65 -
 R-package/src/ndarray.cc   |  780 --
 R-package/src/ndarray.h|  325 ---
 R-package/src/symbol.cc|  419 ---
 R-package/src/symbol.h |  233 --
 R-package/tests/testthat/get_data.R|  117 -
 R-package/tests/testthat/test_initializer.R|  131 -
 R-package/tests/testthat/test_io.R |   90 -
 R-package/tests/testthat/test_ndarray.R|  218 --
 R-package/tests/testthat/test_random.R |   36 -
 R-package/tests/testthat/test_symbol.R |  119 -
 R-package/vignettes/CharRnnModel.Rmd   |  293 --
 R-package/vignettes/MultidimLstm.Rmd   |  302 ---
 .../classifyRealImageWithPretrainedModel.Rmd   |  164 --
 R-package/vignettes/ndarray.Rmd|  148 --
 README.md  |1 -
 amalgamation/.gitignore|1 -
 amalgamation/Makefile  |  143 -
 amalgamation/README.md |  161 --
 amalgamation/amalgamation.py   |  236 --
 amalgamation/dmlc-minimum0.cc  |   35 -
 .../jni/org/dmlc/mxnet/MxnetException.java |   28 -
 amalgamation/jni/org/dmlc/mxnet/Predictor.java |  119 -
 amalgamation/jni/org_dmlc_mxnet_Predictor.h|   64 -
 amalgamation/jni/predictor.cc  |  129

[incubator-mxnet] branch master updated (041bd30 -> c1b96f5)

2020-06-19 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 041bd30  [MXNET-889] Implement ONNX export for gluon LSTM. (#17734)
 add c1b96f5  cmake: x86 options only on x86 and remove manual 
specification on CI (#18588)

No new revisions were added by this update.

Summary of changes:
 3rdparty/mshadow/CMakeLists.txt |  6 --
 3rdparty/mshadow/cmake/AutoDetectF16C.cmake |  4 
 CMakeLists.txt  | 12 
 ci/docker/runtime_functions.sh  | 28 
 4 files changed, 16 insertions(+), 34 deletions(-)



[incubator-mxnet] branch master updated: [MXNET-889] Implement ONNX export for gluon LSTM. (#17734)

2020-06-19 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 041bd30  [MXNET-889] Implement ONNX export for gluon LSTM. (#17734)
041bd30 is described below

commit 041bd3016375c6bdadddc9e9f43655923ee739bf
Author: RuRo 
AuthorDate: Fri Jun 19 21:56:05 2020 +0300

[MXNET-889] Implement ONNX export for gluon LSTM. (#17734)

* implement onnx translations for _full type nodes

* implement onnx translations for _rnn_param_concat

* implement onnx translations for RNN (LSTM mode)

* implement node export unittest for gluon.LSTM
---
 .../mxnet/contrib/onnx/mx2onnx/_op_translations.py | 306 +
 tests/python/unittest/onnx/test_node.py|  14 +-
 2 files changed, 318 insertions(+), 2 deletions(-)

diff --git a/python/mxnet/contrib/onnx/mx2onnx/_op_translations.py 
b/python/mxnet/contrib/onnx/mx2onnx/_op_translations.py
index 6d61163..f2d840f 100644
--- a/python/mxnet/contrib/onnx/mx2onnx/_op_translations.py
+++ b/python/mxnet/contrib/onnx/mx2onnx/_op_translations.py
@@ -950,6 +950,312 @@ def convert_concat(node, **kwargs):
 )
 return [concat_node]
 
+@mx_op.register("RNN")
+def convert_RNN(node, **kwargs):
+"""Map MXNet's RNN operator attributes to onnx's RNN operator
+and return the created node.
+"""
+name, input_nodes, attrs = get_inputs(node, kwargs)
+nodes = []
+
+# == Attributes ==
+mode = attrs['mode'].upper()
+rnn_kwargs = {}
+if mode != 'LSTM':
+raise NotImplementedError(
+"Only LSTM mode RNN conversion to ONNX is currently supported."
+)
+
+hidden_size = rnn_kwargs['hidden_size'] = int(attrs.get("state_size"))
+if eval(attrs.get('bidirectional', 'False')):
+rnn_kwargs['direction'] = 'bidirectional'
+num_directions = 2
+else:
+rnn_kwargs['direction'] = 'forward'
+num_directions = 1
+
+clip_min = eval(attrs.get('lstm_state_clip_min', 'None'))
+clip_max = eval(attrs.get('lstm_state_clip_max', 'None'))
+if clip_min is not None or clip_max is not None:
+# ONNX LSTMs have the `clip` attribute, however it seems to give
+# slightly different results, when compared to the MXNet equivalent
+raise NotImplementedError(
+"Conversion of RNNs with lstm_state_clip_min/max "
+"to ONNX is currently not supported."
+)
+
+if eval(attrs.get('lstm_state_clip_nan', 'False')):
+raise NotImplementedError(
+"ONNX RNN operator doesn't support lstm_state_clip_nan"
+)
+
+if eval(attrs.get('use_sequence_length', 'False')):
+# This can maybe be implemented using the `sequence_len` optional input
+raise NotImplementedError(
+"Conversion of RNNs with variable input sequence length "
+"to ONNX is currently not supported."
+)
+
+if eval(attrs.get('num_layers', '1')) != 1:
+raise NotImplementedError(
+"Conversion of RNNs with num_layers > 1 "
+"to ONNX is currently not supported."
+)
+
+if eval(attrs.get('p', '0')) != 0:
+# WARNING! The `p` attribute in mxnet is "dropout probability" while
+# the `p` optional input of ONNX LSTMs is the peephole weights tensor.
+raise NotImplementedError(
+"Conversion of RNNs with dropout "
+"to ONNX is currently not supported."
+)
+
+if eval(attrs.get('projection_size', 'None')) is not None:
+raise NotImplementedError(
+"Conversion of RNNs with custom projection_size "
+"to ONNX is currently not supported."
+)
+
+if not eval(attrs.get('state_outputs', 'True')):
+raise NotImplementedError(
+"Conversion of RNNs with state_outputs=False "
+"to ONNX is currently not supported."
+)
+
+# == Parameters ==
+
+# (See _rnn_param_concat for part 1 of this comment section)
+
+# Unfortunately, mxnets version of _rnn_param_concat concatenates *ALL*
+# the parameters, instead of grouping them like ONNX. The workaround,
+# used here, is that the _rnn_param_concat node conversion code will
+# produce multiple nodes with names ending in rnn_param_concatN__P
+# (Where P is the parameter group name W, R or B)
+# We then use regular expressions to get the "extra outputs" of the
+# _rnn_param_concat node.
+
+x, param_concat, *initial_states = input_node
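
A hypothetical end-to-end sketch of how the LSTM export path above might be exercised; the file names, shapes and hidden size here are assumptions, not taken from the commit or its tests.

    import numpy as np
    import mxnet as mx
    from mxnet.contrib import onnx as onnx_mxnet

    # Build, hybridize and export a single-layer LSTM (layout TNC: seq_len, batch, input_size).
    lstm = mx.gluon.rnn.LSTM(hidden_size=3)
    lstm.initialize()
    lstm.hybridize()
    lstm(mx.nd.zeros((3, 1, 2)))
    lstm.export("lstm")  # writes lstm-symbol.json and lstm-0000.params

    # Convert the exported symbol/params pair to ONNX.
    onnx_file = onnx_mxnet.export_model("lstm-symbol.json", "lstm-0000.params",
                                        [(3, 1, 2)], np.float32, "lstm.onnx")
    print(onnx_file)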

[incubator-mxnet] branch master updated (92971b8 -> 9591436)

2020-06-17 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 92971b8  fix misbehave of KLDivLoss (#18423)
 add 9591436  [Numpy] Bugfix of slice operator export (MXNet to ONNX) v2 
(#18535)

No new revisions were added by this update.

Summary of changes:
 .../mxnet/contrib/onnx/mx2onnx/_op_translations.py | 28 +-
 python/mxnet/contrib/onnx/mx2onnx/export_onnx.py   | 11 ++---
 tests/python/unittest/onnx/mxnet_export_test.py| 25 +++
 3 files changed, 55 insertions(+), 9 deletions(-)



[incubator-mxnet] branch master updated (b9118d9 -> 92971b8)

2020-06-17 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from b9118d9  fix contribute page anchor position shifted (#18571)
 add 92971b8  fix misbehave of KLDivLoss (#18423)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/loss.py | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)
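
For context, a usage sketch of gluon's KLDivLoss; the shapes and the from_logits choice are illustrative only and do not describe the fix itself.

    import mxnet as mx
    from mxnet.gluon.loss import KLDivLoss

    pred = mx.nd.log_softmax(mx.nd.random.uniform(shape=(2, 4)))  # log-probabilities
    target = mx.nd.softmax(mx.nd.random.uniform(shape=(2, 4)))    # probabilities
    loss = KLDivLoss(from_logits=True)  # pred is already in log space
    print(loss(pred, target))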



[incubator-mxnet] branch v1.6.x updated: [CI][1.6.x] fix centos 7 url to unblock centos-cpu & gpu pipeline (#18560)

2020-06-15 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.6.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.6.x by this push:
 new d271348  [CI][1.6.x] fix centos 7 url to unblock centos-cpu & gpu 
pipeline (#18560)
d271348 is described below

commit d2713482f9a6a45f1274df87bd34d784a94756ed
Author: Chaitanya Prakash Bapat 
AuthorDate: Mon Jun 15 11:36:33 2020 -0700

[CI][1.6.x] fix centos 7 url to unblock centos-cpu & gpu pipeline (#18560)

* fix centos 7 url to unblock centos-cpu & gpu pipeline

* [v1.7.x] update jetson dockerfile to support CUDA 10.0 (#18339)

* update dockerfile for jetson

* add toolchain files

* update build_jetson function

* update ubuntu_julia.sh

* update FindCUDAToolkit.cmake

* Update centos7_python.sh

* revert changes on ubuntu_julia.sh

* disable TVM for gpu build

* Disable TVM_OP on GPU builds

Co-authored-by: Wei Chu 
Co-authored-by: Leonard Lausen 

* skip quantized conv flaky case (#16866)

* Fix quantized concat when inputs are mixed int8 and uint8

Change-Id: I4da04bf4502425134a466823fb5f73da2d7a419b

* skip flaky test

* trigger ci

Co-authored-by: waytrue17 <52505574+waytru...@users.noreply.github.com>
Co-authored-by: Wei Chu 
Co-authored-by: Leonard Lausen 
Co-authored-by: Xinyu Chen 
---
 ci/docker/Dockerfile.build.jetson  |  96 +-
 ci/docker/install/centos7_python.sh|   2 +-
 ci/docker/runtime_functions.sh |  68 +++
 .../aarch64-linux-gnu-toolchain.cmake} |  27 +--
 .../arm-linux-gnueabihf-toolchain.cmake}   |  26 +--
 ci/jenkins/Jenkins_steps.groovy|  44 ++---
 ci/jenkins/Jenkinsfile_unix_gpu|   7 +-
 cmake/Modules/FindCUDAToolkit.cmake| 205 +++--
 tests/python/quantization/test_quantization.py |   5 +-
 9 files changed, 255 insertions(+), 225 deletions(-)

diff --git a/ci/docker/Dockerfile.build.jetson 
b/ci/docker/Dockerfile.build.jetson
index e31ee43..93fe5e0 100644
--- a/ci/docker/Dockerfile.build.jetson
+++ b/ci/docker/Dockerfile.build.jetson
@@ -20,68 +20,58 @@
 # This script assumes /work/mxnet exists and contains the mxnet code you wish 
to compile and
 # that /work/build exists and is the target for your output.
 
-FROM nvidia/cuda:9.0-cudnn7-devel as cudabuilder
+FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04
 
-FROM dockcross/linux-arm64
+ENV ARCH=aarch64 \
+HOSTCC=gcc \
+TARGET=ARMV8
 
-ENV ARCH aarch64
-ENV HOSTCC gcc
-ENV TARGET ARMV8
+WORKDIR /usr/local
 
-# gh issue #11567 https://github.com/apache/incubator-mxnet/issues/11567
-#RUN sed -i '\#deb http://cdn-fastly.deb.debian.org/debian-security 
jessie/updates main#d' /etc/apt/sources.list
-#RUN sed -i 's/cdn-fastly.//' /etc/apt/sources.list
+RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
+build-essential \
+ninja-build \
+git \
+curl \
+zip \
+unzip \
+python3 \
+python3-pip \
+awscli \
+crossbuild-essential-arm64 \
+ && rm -rf /var/lib/apt/lists/*
 
+# cmake on Ubuntu 18.04 is too old
+RUN python3 -m pip install cmake
 
-WORKDIR /work/deps
-
-COPY install/ubuntu_arm.sh /work/
-RUN /work/ubuntu_arm.sh
-
-COPY install/arm_openblas.sh /work/
-RUN /work/arm_openblas.sh
-
-ENV OpenBLAS_HOME=${CROSS_ROOT}
-ENV OpenBLAS_DIR=${CROSS_ROOT}
-
+# ccache on Ubuntu 18.04 is too old to support Cuda correctly
 COPY install/deb_ubuntu_ccache.sh /work/
 RUN /work/deb_ubuntu_ccache.sh
 
-# Setup CUDA build env (including configuring and copying nvcc)
-COPY --from=cudabuilder /usr/local/cuda /usr/local/cuda
-ENV TARGET_ARCH aarch64
-ENV TARGET_OS linux
+COPY toolchains/aarch64-linux-gnu-toolchain.cmake /usr
+ENV CMAKE_TOOLCHAIN_FILE=/usr/aarch64-linux-gnu-toolchain.cmake
+
+RUN git clone --recursive -b v0.3.9 https://github.com/xianyi/OpenBLAS.git && \
+cd /usr/local/OpenBLAS && \
+make NOFORTRAN=1 CC=aarch64-linux-gnu-gcc && \
+make PREFIX=/usr/aarch64-linux-gnu install && \
+cd /usr/local && \
+rm -rf OpenBLAS
 
-# Install ARM depedencies based on Jetpack 3.3
-RUN 
JETPACK_DOWNLOAD_PREFIX=https://developer.download.nvidia.com/devzone/devcenter/mobile/jetpack_l4t/3.3/lw.xd42/JetPackL4T_33_b39
 && \
-CUDA_REPO_PREFIX=/var/cuda-repo-9-0-local && \
-ARM_CUDA_INSTALLER_PACKAGE=cuda-repo-l4t-9-0-local_9.0.252-1_arm64.deb && \
-ARM_CUDNN_INSTALLER_PACKAGE=libcudnn7_7.1.5.14-1+cuda9.0_arm64.deb && \
-ARM_CUDNN_DEV_INSTALLER_PACKAGE=libcudnn7-dev_7.1.5.14-1+cuda9.0_arm64.deb 
&& \
-ARM_LICENSE_INSTALLER=cuda-license-9-0_9.0.252-1_ar

[incubator-mxnet] branch master updated (da25273 -> af1b45b)

2020-06-14 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from da25273  remove dependency on train_mnist.py script (#18550)
 add af1b45b  Create config.yml (#18553)

No new revisions were added by this update.

Summary of changes:
 .github/ISSUE_TEMPLATE/config.yml | 8 
 1 file changed, 8 insertions(+)
 create mode 100644 .github/ISSUE_TEMPLATE/config.yml



[incubator-mxnet] branch master updated: Create config.yml (#18553)

2020-06-14 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new af1b45b  Create config.yml (#18553)
af1b45b is described below

commit af1b45ba3590b21014c55c58838c3e04b3f2cea3
Author: Chaitanya Prakash Bapat 
AuthorDate: Sun Jun 14 22:45:57 2020 -0700

Create config.yml (#18553)

Add options for stackoverflow and discuss to issue_template & disable blank 
issue
---
 .github/ISSUE_TEMPLATE/config.yml | 8 
 1 file changed, 8 insertions(+)

diff --git a/.github/ISSUE_TEMPLATE/config.yml 
b/.github/ISSUE_TEMPLATE/config.yml
new file mode 100644
index 000..4f904b0
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,8 @@
+blank_issues_enabled: false
+contact_links:
+  - name: Ask a question
+url: https://stackoverflow.com/questions/tagged/mxnet
+about: Use Stack Overflow to ask and answer questions
+  - name: Discuss
+url: https://discuss.mxnet.io/
+about: Use Discuss forums for discussions [Stackoverflow alternative]



[incubator-mxnet] branch master updated: remove dependency on train_mnist.py script (#18550)

2020-06-13 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new da25273  remove dependency on train_mnist.py script (#18550)
da25273 is described below

commit da252734c70164a0983404de076464ba7a526a60
Author: Manu Seth <22492939+mset...@users.noreply.github.com>
AuthorDate: Sat Jun 13 18:30:29 2020 -0700

remove dependency on train_mnist.py script (#18550)

* remove dependency on train_mnist.py script

* remove image classification tests from nightly
---
 cd/python/docker/test_python_image.sh   |  9 +---
 ci/docker/runtime_functions.sh  | 19 +--
 docker/docker-python/README.md  |  1 -
 docker/docker-python/build_python_dockerfile.sh |  2 -
 tests/nightly/JenkinsfileForBinaries| 10 +---
 tests/nightly/test_image_classification.sh  | 67 -
 tools/profile/tune_mnist.sh | 23 -
 7 files changed, 4 insertions(+), 127 deletions(-)

diff --git a/cd/python/docker/test_python_image.sh 
b/cd/python/docker/test_python_image.sh
index b10dcfb..be4f9dc 100755
--- a/cd/python/docker/test_python_image.sh
+++ b/cd/python/docker/test_python_image.sh
@@ -32,14 +32,9 @@ if [ -z "${MXNET_COMMIT_ID}" ]; then
 fi
 
 # Execute tests
-if [[ $mxnet_variant == cu* ]]; then
-mnist_params="--gpu 0"
-test_conv_params="--gpu"
-fi
-
-if [[ $mxnet_variant == cpu ]]; then
+if [[ $mxnet_variant != native ]]; then
 python3 tests/python/mkl/test_mkldnn.py
 fi
 
-python3 example/image-classification/train_mnist.py ${mnist_params}
+# TODO: Add more tests (18549)
 
diff --git a/ci/docker/runtime_functions.sh b/ci/docker/runtime_functions.sh
index e5514a7..a913ca5 100755
--- a/ci/docker/runtime_functions.sh
+++ b/ci/docker/runtime_functions.sh
@@ -1403,13 +1403,6 @@ nightly_test_imagenet_inference() {
 ./unit_test_imagenet_inference.sh
 }
 
-#Runs a simple MNIST training example
-nightly_test_image_classification() {
-set -ex
-export DMLC_LOG_STACK_TRACE_DEPTH=10
-./tests/nightly/test_image_classification.sh
-}
-
 #Single Node KVStore Test
 nightly_test_KVStore_singleNode() {
 set -ex
@@ -1894,21 +1887,11 @@ cd_integration_test_pypi() {
 set -ex
 source /opt/rh/rh-python36/enable
 
-local gpu_enabled=${1:-"false"}
-
-local test_conv_params=''
-local mnist_params=''
-
-if [ "${gpu_enabled}" = "true" ]; then
-mnist_params="--gpu 0"
-test_conv_params="--gpu"
-fi
-
 # install mxnet wheel package
 pip3 install --user ./wheel_build/dist/*.whl
 
 # execute tests
-python3 /work/mxnet/example/image-classification/train_mnist.py 
${mnist_params}
+# TODO: Add tests (18549)
 }
 
 # Publishes wheel to PyPI
diff --git a/docker/docker-python/README.md b/docker/docker-python/README.md
index 75fb2b2..a5dd0e3 100644
--- a/docker/docker-python/README.md
+++ b/docker/docker-python/README.md
@@ -47,7 +47,6 @@ For example:
 `./build_python_dockerfile.sh 1.3.0 1.3.0.post0 ~/build-docker/incubator-mxnet`
 
 ### Tests run
-* 
[train_mnist.py](https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/train_mnist.py)
 * 
[test_mxnet.py](https://github.com/apache/incubator-mxnet/blob/master/docker/docker-python/test_mxnet.py):
 This script is used to make sure that the docker image builds the expected 
mxnet version. That is, the version picked by pip is the same as as the version 
passed as a parameter.
 
 ### Dockerhub Credentials
diff --git a/docker/docker-python/build_python_dockerfile.sh 
b/docker/docker-python/build_python_dockerfile.sh
index cdcc362..6dc77e0 100755
--- a/docker/docker-python/build_python_dockerfile.sh
+++ b/docker/docker-python/build_python_dockerfile.sh
@@ -58,7 +58,6 @@ docker_test_image_cpu(){
 python_version="${2}"
 echo "Running tests on mxnet/python:${image_tag}"
 docker run -v ${test_dir}:/mxnet mxnet/python:${image_tag} bash -c 
"${python_version} /mxnet/docker/docker-python/test_mxnet.py ${mxnet_version}"
-docker run -v ${test_dir}:/mxnet mxnet/python:${image_tag} bash -c 
"${python_version} /mxnet/example/image-classification/train_mnist.py"
 }
 
 docker_test_image_gpu(){
@@ -66,7 +65,6 @@ docker_test_image_gpu(){
 python_version="${2}"
 echo "Running tests on mxnet/python:${1}"
 nvidia-docker run -v ${test_dir}:/mxnet mxnet/python:${image_tag} bash -c 
"${python_version} /mxnet/docker/docker-python/test_mxnet.py ${mxnet_version}"
-nvidia-docker run -v ${test_dir}:/mxnet mxnet/python:${image_tag} bash -c 
"${python_version} /mxnet/example/image-classification/train_mnist.py --gpus 
0,1,2,3"
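
One possible shape for the lightweight smoke test left as a TODO above, sketched on synthetic data; the network, shapes and hyperparameters are assumptions and not the tests tracked in 18549.

    import mxnet as mx
    from mxnet.gluon import Trainer, loss, nn

    net = nn.Sequential()
    net.add(nn.Dense(64, activation="relu"), nn.Dense(10))
    net.initialize()

    x = mx.nd.random.uniform(shape=(32, 784))                       # stand-in for MNIST images
    y = mx.nd.random.randint(0, 10, shape=(32,)).astype("float32")  # stand-in labels

    trainer = Trainer(net.collect_params(), "sgd", {"learning_rate": 0.1})
    softmax_ce = loss.SoftmaxCrossEntropyLoss()

    with mx.autograd.record():
        l = softmax_ce(net(x), y)
    l.backward()
    trainer.step(batch_size=32)
    print("smoke-test loss:", l.mean().asscalar())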


[incubator-mxnet] branch master updated (97d4ba5 -> f1f3f44)

2020-06-13 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 97d4ba5  Remove XXOutput loss operators  (#18531)
 add f1f3f44  Remove the deprecated BatchNorm_v1 op (#18538)

No new revisions were added by this update.

Summary of changes:
 benchmark/opperf/utils/op_registry_utils.py   |   2 +-
 python/mxnet/contrib/amp/lists/symbol_bf16.py |   1 -
 python/mxnet/contrib/amp/lists/symbol_fp16.py |   1 -
 src/operator/batch_norm_v1-inl.h  | 380 --
 src/operator/batch_norm_v1.cc | 113 
 src/operator/batch_norm_v1.cu |  38 ---
 tests/cpp/operator/batchnorm_test.cc  |   1 -
 tests/python/gpu/test_operator_gpu.py |  20 --
 tests/python/unittest/test_operator.py|  12 -
 9 files changed, 1 insertion(+), 567 deletions(-)
 delete mode 100644 src/operator/batch_norm_v1-inl.h
 delete mode 100644 src/operator/batch_norm_v1.cc
 delete mode 100644 src/operator/batch_norm_v1.cu
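
As a migration sketch (an assumption, not part of the commit), code that still referenced the removed BatchNorm_v1 symbol can use the regular BatchNorm operator or the Gluon block instead:

    import mxnet as mx

    # Symbolic-API replacement for BatchNorm_v1
    data = mx.sym.Variable("data")
    out = mx.sym.BatchNorm(data=data, fix_gamma=False, eps=1e-5, momentum=0.9)

    # Gluon equivalent
    bn = mx.gluon.nn.BatchNorm(momentum=0.9, epsilon=1e-5)
    bn.initialize()
    print(bn(mx.nd.random.uniform(shape=(2, 3, 8, 8))).shape)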



[incubator-mxnet] branch master updated: Remove the deprecated BatchNorm_v1 op (#18538)

2020-06-13 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new f1f3f44  Remove the deprecated BatchNorm_v1 op (#18538)
f1f3f44 is described below

commit f1f3f44166e2e47afad6c65025fb48dd47efeb65
Author: Haibin Lin 
AuthorDate: Sat Jun 13 10:10:25 2020 -0700

Remove the deprecated BatchNorm_v1 op (#18538)

* remove batchnorm_v1

* fix gpu build

Co-authored-by: EC2 Default User 
Co-authored-by: Lin 
---
 benchmark/opperf/utils/op_registry_utils.py   |   2 +-
 python/mxnet/contrib/amp/lists/symbol_bf16.py |   1 -
 python/mxnet/contrib/amp/lists/symbol_fp16.py |   1 -
 src/operator/batch_norm_v1-inl.h  | 380 --
 src/operator/batch_norm_v1.cc | 113 
 src/operator/batch_norm_v1.cu |  38 ---
 tests/cpp/operator/batchnorm_test.cc  |   1 -
 tests/python/gpu/test_operator_gpu.py |  20 --
 tests/python/unittest/test_operator.py|  12 -
 9 files changed, 1 insertion(+), 567 deletions(-)

diff --git a/benchmark/opperf/utils/op_registry_utils.py 
b/benchmark/opperf/utils/op_registry_utils.py
index d3cf1a4..6b9efc8 100644
--- a/benchmark/opperf/utils/op_registry_utils.py
+++ b/benchmark/opperf/utils/op_registry_utils.py
@@ -52,7 +52,7 @@ def _select_ops(operator_names, filters=("_contrib", "_"), 
merge_op_forward_back
 operators_with_backward = []
 
 # Filter out deprecated operators
-filters += ("normal", "uniform", "BatchNorm_v1", "Flatten", 
"contrib_CTCLoss", "Pad", "Cast",
+filters += ("normal", "uniform", "Flatten", "contrib_CTCLoss", "Pad", 
"Cast",
 "Pooling_v1", "Concat", "Reshape", "Convolution_v1", 
"SliceChannel", "Crop",
 "crop", "onehot_encode", "batch_take")
 
diff --git a/python/mxnet/contrib/amp/lists/symbol_bf16.py 
b/python/mxnet/contrib/amp/lists/symbol_bf16.py
index da01e61..5931132 100644
--- a/python/mxnet/contrib/amp/lists/symbol_bf16.py
+++ b/python/mxnet/contrib/amp/lists/symbol_bf16.py
@@ -57,7 +57,6 @@ BF16_USE_FP32_PARAMS = {
 FP32_FUNCS = [
 'Deconvolution',
 'RNN',
-'BatchNorm_v1',
 'BilinearSampler',
 'BlockGrad',
 'Cast',
diff --git a/python/mxnet/contrib/amp/lists/symbol_fp16.py 
b/python/mxnet/contrib/amp/lists/symbol_fp16.py
index ae812fb..676129f 100644
--- a/python/mxnet/contrib/amp/lists/symbol_fp16.py
+++ b/python/mxnet/contrib/amp/lists/symbol_fp16.py
@@ -32,7 +32,6 @@ FP16_FUNCS = [
 # are dtype neutral (can work in both fp16 and fp32)
 FP16_FP32_FUNCS = [
 'BatchNorm',
-'BatchNorm_v1',
 'BilinearSampler',
 'BlockGrad',
 'Cast',
diff --git a/src/operator/batch_norm_v1-inl.h b/src/operator/batch_norm_v1-inl.h
deleted file mode 100644
index 1520df9..000
--- a/src/operator/batch_norm_v1-inl.h
+++ /dev/null
@@ -1,380 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-/*!
- * Copyright (c) 2015 by Contributors
- * \file batch_norm_v1-inl.h
- * \brief
- * \author Bing Xu
-*/
-#ifndef MXNET_OPERATOR_BATCH_NORM_V1_INL_H_
-#define MXNET_OPERATOR_BATCH_NORM_V1_INL_H_
-
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include "./operator_common.h"
-#include "./mshadow_op.h"
-
-namespace mxnet {
-namespace op {
-
-namespace batchnorm_v1 {
-enum BatchNormOpInputs {kData, kGamma, kBeta};
-enum BatchNormOpOutputs {kOut, kMean, kVar};
-enum BatchNormOpAuxiliary {kMovingMean, kMovingVar};
-enum BatchNormBackResource {kTempSpace};
-}  // namespace batchnorm_v1
-
-struct BatchNormV1Param : public dmlc::Parameter {
-  float eps;
-  float momentum;
-  bool fix_gamma;
-  bool use_global_stats;
-  bool output_mean_var;
-  DMLC_DECLARE_PARAMETER(BatchNormV1Param) {
-DMLC_DECLARE_FIELD(eps).se
