[incubator-mxnet] 02/03: [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit fca9c924cd9699f1e265422afa53e194c2570e79
Author: Junru Shao 
AuthorDate: Wed Oct 9 23:01:24 2019 -0700

[IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)

* Rebased

* Trigger CI

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...
---
 Makefile   |   4 +-
 src/imperative/cached_op.cc|  14 +-
 src/v3/include/bridge/legacy_nnvm.h|  64 +++
 src/v3/include/ir.h| 188 +
 src/v3/include/op/attrs/nn.h   |  71 
 src/v3/src/bridge/legacy_nnvm/attrs.cc | 120 +
 .../legacy_nnvm/ir.cc} | 109 ++--
 src/v3/src/op/attrs.cc |  40 +
 tests/python/unittest/test_numpy_op.py |   7 +-
 9 files changed, 560 insertions(+), 57 deletions(-)

diff --git a/Makefile b/Makefile
index b18edf0..3a675cd 100644
--- a/Makefile
+++ b/Makefile
@@ -462,7 +462,7 @@ endif
 
 all: lib/libmxnet.a lib/libmxnet.so $(BIN) extra-packages sample_lib
 
-SRC = $(wildcard src/*/*/*/*.cc src/*/*/*.cc src/*/*.cc src/*.cc)
+SRC = $(wildcard src/*/*/*/*/*/*.cc src/*/*/*/*/*.cc src/*/*/*/*.cc src/*/*/*.cc src/*/*.cc src/*.cc)
 OBJ = $(patsubst %.cc, build/%.o, $(SRC))
 CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
@@ -795,6 +795,8 @@ clean_all: clean
 -include build/*/*.d
 -include build/*/*/*.d
 -include build/*/*/*/*.d
+-include build/*/*/*/*/*.d
+-include build/*/*/*/*/*/*.d
 ifneq ($(EXTRA_OPERATORS),)
-include $(patsubst %, %/*.d, $(EXTRA_OPERATORS)) $(patsubst %, %/*/*.d, $(EXTRA_OPERATORS))
 endif
diff --git a/src/imperative/cached_op.cc b/src/imperative/cached_op.cc
index 14e9527..5180c7f 100644
--- a/src/imperative/cached_op.cc
+++ b/src/imperative/cached_op.cc
@@ -25,18 +25,18 @@
 #include "../operator/operator_common.h"
 #include "../operator/subgraph/common.h"
 
-#if MXNET_USE_TVM_OP
-#ifndef MXNET_AMALGAMATION
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
 #include 
 namespace mxnet {
 namespace v3 {
-namespace nnvm_relay_bridge {
+namespace bridge {
+namespace legacy_nnvm {
tvm::NodeRef NNVMToRelay(const nnvm::Graph &g);
-}  // namespace nnvm_relay_bridge
+}  // namespace legacy_nnvm
+}  // namespace bridge
 }  // namespace v3
 }  // namespace mxnet
-#endif  // MXNET_AMALGAMATION
-#endif  // MXNET_USE_TVM_OP
+#endif
 
 namespace mxnet {
 
@@ -325,7 +325,7 @@ bool CachedOp::SetForwardGraph(
   CHECK_EQ(inputs.size(), num_inputs());
   nnvm::Graph& g = info->fwd_graph;
 #if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
-  v3::nnvm_relay_bridge::NNVMToRelay(g);
+  v3::bridge::legacy_nnvm::NNVMToRelay(g);
 #endif  // MXNET_USE_TVM_OP && !define MXNET_AMALGAMATION
   ShapeVector shape_inputs;
   DTypeVector dtype_inputs;
diff --git a/src/v3/include/bridge/legacy_nnvm.h b/src/v3/include/bridge/legacy_nnvm.h
new file mode 100644
index 000..e2c99a5
--- /dev/null
+++ b/src/v3/include/bridge/legacy_nnvm.h
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file legacy_nnvm.h
+ * \author Junru Shao
+ */
+#pragma once
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+#include 
+
+#include "../ir.h"
+
+namespace nnvm {
+class Op;
+class Graph;
+}  // namespace nnvm
+
+namespace mxnet {
+namespace v3 {
+namespace bridge {
+namespace legacy_nnvm {
+
+class NNVMCapsuleNode final : public ir::Node {
+ public:
+  nnvm::NodeAttrs attrs;
+  void VisitAttrs(tvm::AttrVisitor *v) final {}
+  static constexpr const char *_type_key = "mxnet.v3.bridge.NNVMCa

[incubator-mxnet] 03/03: [IR-Bridge] Operator Attributes (#16421)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 8a2d75078d9cc98afff0071c0dba3e8fc5532d01
Author: Junru Shao 
AuthorDate: Thu Oct 10 15:27:57 2019 -0700

[IR-Bridge] Operator Attributes (#16421)

* [IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py

* Attributes

* [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)

* Rebased

* Trigger CI

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* restore?

* style
---
 src/v3/include/op/attrs/legacy_nnvm_attrs.h | 2861 +++
 src/v3/src/op/legacy_nnvm_attrs.cc  |  406 
 2 files changed, 3267 insertions(+)

diff --git a/src/v3/include/op/attrs/legacy_nnvm_attrs.h b/src/v3/include/op/attrs/legacy_nnvm_attrs.h
new file mode 100644
index 000..ae942cf
--- /dev/null
+++ b/src/v3/include/op/attrs/legacy_nnvm_attrs.h
@@ -0,0 +1,2861 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file legacy_nnvm_attrs.h
+ * \author Junru Shao
+ */
+#pragma once
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+#include 
+
+#include "../../ir.h"
+
+namespace mxnet {
+namespace v3 {
+namespace op {
+namespace attrs {
+// _copyto
+using LegacyCopytoAttrs = ir::Attrs;
+// all_finite
+class LegacyAllFiniteAttrs : public ir::AttrsNode<LegacyAllFiniteAttrs> {
+ public:
+  bool init_output;
+
+  MX_V3_DECLARE_ATTRS(LegacyAllFiniteAttrs, "mxnet.v3.attrs.LegacyAllFiniteAttrs") {
+MX_V3_ATTR_FIELD(init_output);
+  }
+};
+// _npi_deg2rad
+using LegacyNpiDeg2radAttrs = ir::Attrs;
+// _npi_rad2deg
+using LegacyNpiRad2degAttrs = ir::Attrs;
+// IdentityAttachKLSparseReg
+class LegacyIdentityAttachKLSparseRegAttrs
+: public ir::AttrsNode<LegacyIdentityAttachKLSparseRegAttrs> {
+ public:
+  double sparseness_target;
+  double penalty;
+  double momentum;
+
+  MX_V3_DECLARE_ATTRS(LegacyIdentityAttachKLSparseRegAttrs,
+  "mxnet.v3.attrs.LegacyIdentityAttachKLSparseRegAttrs") {
+MX_V3_ATTR_FIELD(sparseness_target);
+MX_V3_ATTR_FIELD(penalty);
+MX_V3_ATTR_FIELD(momentum);
+  }
+};
+// LeakyReLU
+class LegacyLeakyReLUAttrs : public ir::AttrsNode<LegacyLeakyReLUAttrs> {
+ public:
+  std::string act_type;
+  double slope;
+  double lower_bound;
+  double upper_bound;
+
+  MX_V3_DECLARE_ATTRS(LegacyLeakyReLUAttrs, "mxnet.v3.attrs.LegacyLeakyReLUAttrs") {
+MX_V3_ATTR_FIELD(act_type);
+MX_V3_ATTR_FIELD(slope);
+MX_V3_ATTR_FIELD(lower_bound);
+MX_V3_ATTR_FIELD(upper_bound);
+  }
+};
+// softmax_cross_entropy
+using LegacySoftmaxCrossEntropyAttrs = ir::Attrs;
+// Activation
+class LegacyActivationAttrs : public ir::AttrsNode<LegacyActivationAttrs> {
+ public:
+  std::string act_type;
+
+  MX_V3_DECLARE_ATTRS(LegacyActivationAttrs, "mxnet.v3.attrs.LegacyActivationAttrs") {
+MX_V3_ATTR_FIELD(act_type);
+  }
+};
+// BatchNorm
+class LegacyBatchNormAttrs : public ir::AttrsNode<LegacyBatchNormAttrs> {
+ public:
+  double eps;
+  double momentum;
+  bool fix_gamma;
+  bool use_global_stats;
+  bool output_mean_var;
+  int axis;
+  bool cudnn_off;
+  double min_calib_range;
+  double max_calib_range;
+
+  MX_V3_DECLARE_ATTRS(LegacyBatchNormAttrs, "mxnet.v3.attrs.LegacyBatchNormAttrs") {
+MX_V3_ATTR_FIELD(eps);
+MX_V3_ATTR_FIELD(momentum);
+MX_V3_ATTR_FIELD(fix_gamma);
+MX_V3_ATTR_FIELD(use_global_stats);
+MX_V3_A

[incubator-mxnet] branch ir-patch updated (baeeb22 -> 8a2d750)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


omit baeeb22  [IR-Bridge] Operator Attributes (#16421)
omit 4358f79  [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)
omit 3163642  [IR-Patch] IR Bridge (#16290)
 add 858a52e  Fix large array tests (#16328)
 add 6d6e46b  Comparison ops implemented using mshadow (#16414)
 add 1d4ede3  Add mask target generator operator for Mask-RCNN (#16268)
 add 8820220  Adds pip requirements file to nightly gpu ci image (#16472)
 add 1256976  Fix Nightly Tests for Binaries (#16451)
 add 812e504  fix autodoc for spurrious toggles (#16452)
 add 7ce  Fix dtype bug (#16467)
 add 9ab428e  [Doc] Update the download page with 1.5.1 release (#16442)
 add 6e0b1a5  [Numpy] Numpy compatible dstack (#15871)
 add ceebcaf  numpy eye op (#16132)
 add 8222979  Numpy compatible vsplit; minor changes to split (#15983)
 add 8562adc  add numpy op logspace (#15825)
 add 9681197  add numpy op bitwise_xor, hsplit, moveaxis, rot90 (#16257)
 add f9359c3  Fix flakey pylint CI failures (#16462)
 add 67e1e68  Aggregated zero grad (#16446)
 add b1932c0  Move MRCNNMaskTarget op to contrib (#16486)
 add 06438ab  Mxnet allclose (#14443)
 add 0c00a79  Fix optimizer bug for np attribute (#16494)
 add c2bbde7  Tests of NumPy interoperability (#16469)
 new 08b8498  [IR-Patch] IR Bridge (#16290)
 new fca9c92  [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)
 new 8a2d750  [IR-Bridge] Operator Attributes (#16421)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (baeeb22)
\
 N -- N -- N   refs/heads/ir-patch (8a2d750)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CMakeLists.txt |   6 +-
 ci/docker/Dockerfile.build.ubuntu_nightly_gpu  |   1 +
 ci/other/pylintrc  |   7 +-
 docs/python_docs/_static/autodoc.js|  34 +-
 .../src/_includes/get_started/get_started.html |   6 +-
 .../src/_includes/get_started/pip_snippet.md   |   2 +-
 docs/static_site/src/pages/get_started/download.md |   1 +
 python/mxnet/_numpy_op_doc.py  |  95 +++
 python/mxnet/gluon/parameter.py|  21 +-
 python/mxnet/ndarray/numpy/_op.py  | 420 -
 python/mxnet/numpy/multiarray.py   | 411 -
 python/mxnet/numpy_dispatch_protocol.py|   2 +
 python/mxnet/numpy_extension/__init__.py   |   2 +-
 python/mxnet/numpy_op_signature.py |   5 +-
 python/mxnet/optimizer/optimizer.py|   2 +-
 python/mxnet/symbol/numpy/_symbol.py   | 398 -
 python/mxnet/test_utils.py | 221 +--
 python/mxnet/util.py   |  61 ++
 src/common/utils.h |  15 +
 src/ndarray/ndarray_function.cc|  13 +-
 src/ndarray/ndarray_function.cu|   4 -
 src/operator/contrib/allclose_op-inl.h | 160 +
 src/operator/contrib/allclose_op.cc|  86 +++
 src/operator/contrib/allclose_op.cu|  58 ++
 src/operator/contrib/index_copy-inl.h  |   2 +-
 src/operator/contrib/index_copy.cc |   4 +-
 src/operator/contrib/mrcnn_mask_target-inl.h   | 132 
 src/operator/contrib/mrcnn_mask_target.cu  | 278 +
 src/operator/contrib/reset_arrays-inl.h|  92 +++
 src/operator/contrib/reset_arrays.cc   |  74 +++
 .../contrib/{multi_lars.cu => reset_arrays.cu} |  18 +-
 src/operator/leaky_relu-inl.h  |   2 +-
 src/operator/mshadow_op.h  |  32 +
 src/operator/mxnet_op.h|  20 +
 src/operator/nn/concat-inl.h   |  62 ++
 src/operator/nn/dropout-inl.h  |   4 +-
 .../numpy/np_

[incubator-mxnet] 01/03: [IR-Patch] IR Bridge (#16290)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 08b849869b9a0fc15521256d0b3662cf3b1cf42f
Author: Junru Shao 
AuthorDate: Mon Sep 30 12:11:35 2019 -0700

[IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py
---
 3rdparty/tvm   |   2 +-
 CMakeLists.txt |   2 +-
 Makefile   |  17 +-
 amalgamation/Makefile  |   4 +-
 amalgamation/amalgamation.py   |   4 +-
 ci/jenkins/Jenkins_steps.groovy|  20 +--
 .../assembly/src/main/assembly/assembly.xml|   2 +-
 .../apache/mxnet/util/NativeLibraryLoader.scala|   2 +-
 src/imperative/cached_op.cc|  16 +-
 src/v3/src/nnvm_relay_bridge.cc| 182 +
 tests/nightly/JenkinsfileForBinaries   |   4 +-
 .../JenkinsfileForMBCC |   2 +-
 12 files changed, 228 insertions(+), 29 deletions(-)

diff --git a/3rdparty/tvm b/3rdparty/tvm
index afd4b3e..18188f4 16
--- a/3rdparty/tvm
+++ b/3rdparty/tvm
@@ -1 +1 @@
-Subproject commit afd4b3e4450984358e9d79a7e8e578483cb7b017
+Subproject commit 18188f4ba3f53cc1dab765b8a0d932d21db0ae8a
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 70b0991..aaa07ab 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -748,7 +748,7 @@ if(USE_DIST_KVSTORE)
 endif()
 
 if(USE_TVM_OP)
-  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm_runtime.so)
+  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm.so)
   include(cmake/BuildTVM.cmake)
   add_subdirectory("3rdparty/tvm")
 
diff --git a/Makefile b/Makefile
index 0a1e355..b18edf0 100644
--- a/Makefile
+++ b/Makefile
@@ -468,9 +468,9 @@ CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
 
 ifeq ($(USE_TVM_OP), 1)
-LIB_DEP += lib/libtvm_runtime.so lib/libtvmop.so
+LIB_DEP += lib/libtvm.so lib/libtvmop.so
 CFLAGS += -I$(TVM_PATH)/include -DMXNET_USE_TVM_OP=1
-LDFLAGS += -L$(ROOTDIR)/lib -ltvm_runtime -Wl,-rpath,'$${ORIGIN}'
+LDFLAGS += -L$(ROOTDIR)/lib -ltvm -Wl,-rpath,'$${ORIGIN}'
 
 TVM_USE_CUDA := OFF
 ifeq ($(USE_CUDA), 1)
@@ -618,15 +618,16 @@ $(DMLC_CORE)/libdmlc.a: DMLCCORE
 DMLCCORE:
+ cd $(DMLC_CORE); $(MAKE) libdmlc.a USE_SSE=$(USE_SSE) config=$(ROOTDIR)/$(config); cd $(ROOTDIR)
 
-lib/libtvm_runtime.so:
+lib/libtvm.so:
echo "Compile TVM"
	[ -e $(LLVM_PATH)/bin/llvm-config ] || sh $(ROOTDIR)/contrib/tvmop/prepare_tvm.sh; \
cd $(TVM_PATH)/build; \
-   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config" \
+   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config --ignore-libllvm" -DHIDE_PRIVATE_SYMBOLS=ON \
+   -DCMAKE_SHARED_LINKER_FLAGS="-Wl,--exclude-libs,ALL" \
   -DUSE_SORT=OFF -DUSE_CUDA=$(TVM_USE_CUDA) -DUSE_CUDNN=OFF ..; \
$(MAKE) VERBOSE=1; \
mkdir -p $(ROOTDIR)/lib; \
-   cp $(TVM_PATH)/build/libtvm_runtime.so $(ROOTDIR)/lib/libtvm_runtime.so; \
+   cp $(TVM_PATH)/build/libtvm.so $(ROOTDIR)/lib/libtvm.so; \
ls $(ROOTDIR)/lib; \
cd $(ROOTDIR)
 
@@ -634,7 +635,7 @@ TVM_OP_COMPILE_OPTIONS = -o $(ROOTDIR)/lib/libtvmop.so
 ifneq ($(CUDA_ARCH),)
TVM_OP_COMPILE_OPTIONS += --cuda-arch "$(CUDA_ARCH)"
 endif
-lib/libtvmop.so: lib/libtvm_runtime.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
+lib/libtvmop.so: lib/libtvm.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
echo "Compile TVM operators"

PYTHONPATH=$(TVM_PATH)/python:$(TVM_PATH)/topi/python:$(ROOTDIR)/contrib \
LD_LIBRARY_PATH=$(ROOTDIR)/lib \
@@ -700,8 +701,8 @@ rpkg:
cp -rf lib/libmklml_intel.so R-package/inst/libs; \
fi
 
-   if [ -e "lib/libtvm_runtime.so" ]; then \
-   cp -rf lib/libtvm_runtime.so R-package/inst/libs; \
+   if [ -e "lib/libtvm.so" ]; then \
+   cp -rf lib/libtvm.so R-package/inst/libs; \
fi
 
mkdir -p R-package/inst/include
diff --git a/amalgamation/Makefile b/amalgamation/Makefile
index 701c1f1..f45ebfc 100644
--- a/amalgamation/Makefile
+++ b/amalgamation/Makefile
@@

[incubator-mxnet] 03/03: [IR-Bridge] Operator Attributes (#16421)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 8a2d75078d9cc98afff0071c0dba3e8fc5532d01
Author: Junru Shao 
AuthorDate: Thu Oct 10 15:27:57 2019 -0700

[IR-Bridge] Operator Attributes (#16421)

* [IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py

* Attributes

* [IR-Bridge] Support attrs for operators: convolution, batch norm, relu 
(#16351)

* Rebased

* Trigger CI

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* restore?

* style
---
 src/v3/include/op/attrs/legacy_nnvm_attrs.h | 2861 +++
 src/v3/src/op/legacy_nnvm_attrs.cc  |  406 
 2 files changed, 3267 insertions(+)

diff --git a/src/v3/include/op/attrs/legacy_nnvm_attrs.h 
b/src/v3/include/op/attrs/legacy_nnvm_attrs.h
new file mode 100644
index 000..ae942cf
--- /dev/null
+++ b/src/v3/include/op/attrs/legacy_nnvm_attrs.h
@@ -0,0 +1,2861 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file legacy_nnvm_attrs.h
+ * \author Junru Shao
+ */
+#pragma once
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+#include 
+
+#include "../../ir.h"
+
+namespace mxnet {
+namespace v3 {
+namespace op {
+namespace attrs {
+// _copyto
+using LegacyCopytoAttrs = ir::Attrs;
+// all_finite
+class LegacyAllFiniteAttrs : public ir::AttrsNode {
+ public:
+  bool init_output;
+
+  MX_V3_DECLARE_ATTRS(LegacyAllFiniteAttrs, 
"mxnet.v3.attrs.LegacyAllFiniteAttrs") {
+MX_V3_ATTR_FIELD(init_output);
+  }
+};
+// _npi_deg2rad
+using LegacyNpiDeg2radAttrs = ir::Attrs;
+// _npi_rad2deg
+using LegacyNpiRad2degAttrs = ir::Attrs;
+// IdentityAttachKLSparseReg
+class LegacyIdentityAttachKLSparseRegAttrs
+: public ir::AttrsNode {
+ public:
+  double sparseness_target;
+  double penalty;
+  double momentum;
+
+  MX_V3_DECLARE_ATTRS(LegacyIdentityAttachKLSparseRegAttrs,
+  "mxnet.v3.attrs.LegacyIdentityAttachKLSparseRegAttrs") {
+MX_V3_ATTR_FIELD(sparseness_target);
+MX_V3_ATTR_FIELD(penalty);
+MX_V3_ATTR_FIELD(momentum);
+  }
+};
+// LeakyReLU
+class LegacyLeakyReLUAttrs : public ir::AttrsNode {
+ public:
+  std::string act_type;
+  double slope;
+  double lower_bound;
+  double upper_bound;
+
+  MX_V3_DECLARE_ATTRS(LegacyLeakyReLUAttrs, 
"mxnet.v3.attrs.LegacyLeakyReLUAttrs") {
+MX_V3_ATTR_FIELD(act_type);
+MX_V3_ATTR_FIELD(slope);
+MX_V3_ATTR_FIELD(lower_bound);
+MX_V3_ATTR_FIELD(upper_bound);
+  }
+};
+// softmax_cross_entropy
+using LegacySoftmaxCrossEntropyAttrs = ir::Attrs;
+// Activation
+class LegacyActivationAttrs : public ir::AttrsNode {
+ public:
+  std::string act_type;
+
+  MX_V3_DECLARE_ATTRS(LegacyActivationAttrs, 
"mxnet.v3.attrs.LegacyActivationAttrs") {
+MX_V3_ATTR_FIELD(act_type);
+  }
+};
+// BatchNorm
+class LegacyBatchNormAttrs : public ir::AttrsNode {
+ public:
+  double eps;
+  double momentum;
+  bool fix_gamma;
+  bool use_global_stats;
+  bool output_mean_var;
+  int axis;
+  bool cudnn_off;
+  double min_calib_range;
+  double max_calib_range;
+
+  MX_V3_DECLARE_ATTRS(LegacyBatchNormAttrs, 
"mxnet.v3.attrs.LegacyBatchNormAttrs") {
+MX_V3_ATTR_FIELD(eps);
+MX_V3_ATTR_FIELD(momentum);
+MX_V3_ATTR_FIELD(fix_gamma);
+MX_V3_ATTR_FIELD(use_global_stats);
+MX_V3_A

[incubator-mxnet] 01/03: [IR-Patch] IR Bridge (#16290)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 08b849869b9a0fc15521256d0b3662cf3b1cf42f
Author: Junru Shao 
AuthorDate: Mon Sep 30 12:11:35 2019 -0700

[IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py
---
 3rdparty/tvm   |   2 +-
 CMakeLists.txt |   2 +-
 Makefile   |  17 +-
 amalgamation/Makefile  |   4 +-
 amalgamation/amalgamation.py   |   4 +-
 ci/jenkins/Jenkins_steps.groovy|  20 +--
 .../assembly/src/main/assembly/assembly.xml|   2 +-
 .../apache/mxnet/util/NativeLibraryLoader.scala|   2 +-
 src/imperative/cached_op.cc|  16 +-
 src/v3/src/nnvm_relay_bridge.cc| 182 +
 tests/nightly/JenkinsfileForBinaries   |   4 +-
 .../JenkinsfileForMBCC |   2 +-
 12 files changed, 228 insertions(+), 29 deletions(-)

diff --git a/3rdparty/tvm b/3rdparty/tvm
index afd4b3e..18188f4 16
--- a/3rdparty/tvm
+++ b/3rdparty/tvm
@@ -1 +1 @@
-Subproject commit afd4b3e4450984358e9d79a7e8e578483cb7b017
+Subproject commit 18188f4ba3f53cc1dab765b8a0d932d21db0ae8a
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 70b0991..aaa07ab 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -748,7 +748,7 @@ if(USE_DIST_KVSTORE)
 endif()
 
 if(USE_TVM_OP)
-  list(APPEND mxnet_LINKER_LIBS 
${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm_runtime.so)
+  list(APPEND mxnet_LINKER_LIBS 
${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm.so)
   include(cmake/BuildTVM.cmake)
   add_subdirectory("3rdparty/tvm")
 
diff --git a/Makefile b/Makefile
index 0a1e355..b18edf0 100644
--- a/Makefile
+++ b/Makefile
@@ -468,9 +468,9 @@ CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu 
src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
 
 ifeq ($(USE_TVM_OP), 1)
-LIB_DEP += lib/libtvm_runtime.so lib/libtvmop.so
+LIB_DEP += lib/libtvm.so lib/libtvmop.so
 CFLAGS += -I$(TVM_PATH)/include -DMXNET_USE_TVM_OP=1
-LDFLAGS += -L$(ROOTDIR)/lib -ltvm_runtime -Wl,-rpath,'$${ORIGIN}'
+LDFLAGS += -L$(ROOTDIR)/lib -ltvm -Wl,-rpath,'$${ORIGIN}'
 
 TVM_USE_CUDA := OFF
 ifeq ($(USE_CUDA), 1)
@@ -618,15 +618,16 @@ $(DMLC_CORE)/libdmlc.a: DMLCCORE
 DMLCCORE:
+ cd $(DMLC_CORE); $(MAKE) libdmlc.a USE_SSE=$(USE_SSE) 
config=$(ROOTDIR)/$(config); cd $(ROOTDIR)
 
-lib/libtvm_runtime.so:
+lib/libtvm.so:
echo "Compile TVM"
[ -e $(LLVM_PATH)/bin/llvm-config ] || sh 
$(ROOTDIR)/contrib/tvmop/prepare_tvm.sh; \
cd $(TVM_PATH)/build; \
-   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config" \
+   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config --ignore-libllvm" 
-DHIDE_PRIVATE_SYMBOLS=ON \
+   -DCMAKE_SHARED_LINKER_FLAGS="-Wl,--exclude-libs,ALL" \
  -DUSE_SORT=OFF -DUSE_CUDA=$(TVM_USE_CUDA) -DUSE_CUDNN=OFF ..; 
\
$(MAKE) VERBOSE=1; \
mkdir -p $(ROOTDIR)/lib; \
-   cp $(TVM_PATH)/build/libtvm_runtime.so 
$(ROOTDIR)/lib/libtvm_runtime.so; \
+   cp $(TVM_PATH)/build/libtvm.so $(ROOTDIR)/lib/libtvm.so; \
ls $(ROOTDIR)/lib; \
cd $(ROOTDIR)
 
@@ -634,7 +635,7 @@ TVM_OP_COMPILE_OPTIONS = -o $(ROOTDIR)/lib/libtvmop.so
 ifneq ($(CUDA_ARCH),)
TVM_OP_COMPILE_OPTIONS += --cuda-arch "$(CUDA_ARCH)"
 endif
-lib/libtvmop.so: lib/libtvm_runtime.so $(wildcard contrib/tvmop/*/*.py 
contrib/tvmop/*.py)
+lib/libtvmop.so: lib/libtvm.so $(wildcard contrib/tvmop/*/*.py 
contrib/tvmop/*.py)
echo "Compile TVM operators"

PYTHONPATH=$(TVM_PATH)/python:$(TVM_PATH)/topi/python:$(ROOTDIR)/contrib \
LD_LIBRARY_PATH=$(ROOTDIR)/lib \
@@ -700,8 +701,8 @@ rpkg:
cp -rf lib/libmklml_intel.so R-package/inst/libs; \
fi
 
-   if [ -e "lib/libtvm_runtime.so" ]; then \
-   cp -rf lib/libtvm_runtime.so R-package/inst/libs; \
+   if [ -e "lib/libtvm.so" ]; then \
+   cp -rf lib/libtvm.so R-package/inst/libs; \
fi
 
mkdir -p R-package/inst/include
diff --git a/amalgamation/Makefile b/amalgamation/Makefile
index 701c1f1..f45ebfc 100644
--- a/amalgamation/Makefile
+++ b/amalgamation/Makefile
@@

[incubator-mxnet] branch ir-patch updated (baeeb22 -> 8a2d750)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


omit baeeb22  [IR-Bridge] Operator Attributes (#16421)
omit 4358f79  [IR-Bridge] Support attrs for operators: convolution, batch 
norm, relu (#16351)
omit 3163642  [IR-Patch] IR Bridge (#16290)
 add 858a52e  Fix large array tests (#16328)
 add 6d6e46b  Comparison ops implemented using mshadow (#16414)
 add 1d4ede3  Add mask target generator operator for Mask-RCNN (#16268)
 add 8820220  Adds pip requirements file to nightly gpu ci image (#16472)
 add 1256976  Fix Nightly Tests for Binaries (#16451)
 add 812e504  fix autodoc for spurrious toggles (#16452)
 add 7ce  Fix dtype bug (#16467)
 add 9ab428e  [Doc] Update the download page with 1.5.1 release (#16442)
 add 6e0b1a5  [Numpy] Numpy compatible dstack (#15871)
 add ceebcaf  numpy eye op (#16132)
 add 8222979  Numpy compatible vsplit; minor changes to split (#15983)
 add 8562adc  add numpy op logspace (#15825)
 add 9681197  add numpy op bitwise_xor, hsplit, moveaxis, rot90 (#16257)
 add f9359c3  Fix flakey pylint CI failures (#16462)
 add 67e1e68  Aggregated zero grad (#16446)
 add b1932c0  Move MRCNNMaskTarget op to contrib (#16486)
 add 06438ab  Mxnet allclose (#14443)
 add 0c00a79  Fix optimizer bug for np attribute (#16494)
 add c2bbde7  Tests of NumPy interoperability (#16469)
 new 08b8498  [IR-Patch] IR Bridge (#16290)
 new fca9c92  [IR-Bridge] Support attrs for operators: convolution, batch 
norm, relu (#16351)
 new 8a2d750  [IR-Bridge] Operator Attributes (#16421)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (baeeb22)
\
 N -- N -- N   refs/heads/ir-patch (8a2d750)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CMakeLists.txt |   6 +-
 ci/docker/Dockerfile.build.ubuntu_nightly_gpu  |   1 +
 ci/other/pylintrc  |   7 +-
 docs/python_docs/_static/autodoc.js|  34 +-
 .../src/_includes/get_started/get_started.html |   6 +-
 .../src/_includes/get_started/pip_snippet.md   |   2 +-
 docs/static_site/src/pages/get_started/download.md |   1 +
 python/mxnet/_numpy_op_doc.py  |  95 +++
 python/mxnet/gluon/parameter.py|  21 +-
 python/mxnet/ndarray/numpy/_op.py  | 420 -
 python/mxnet/numpy/multiarray.py   | 411 -
 python/mxnet/numpy_dispatch_protocol.py|   2 +
 python/mxnet/numpy_extension/__init__.py   |   2 +-
 python/mxnet/numpy_op_signature.py |   5 +-
 python/mxnet/optimizer/optimizer.py|   2 +-
 python/mxnet/symbol/numpy/_symbol.py   | 398 -
 python/mxnet/test_utils.py | 221 +--
 python/mxnet/util.py   |  61 ++
 src/common/utils.h |  15 +
 src/ndarray/ndarray_function.cc|  13 +-
 src/ndarray/ndarray_function.cu|   4 -
 src/operator/contrib/allclose_op-inl.h | 160 +
 src/operator/contrib/allclose_op.cc|  86 +++
 src/operator/contrib/allclose_op.cu|  58 ++
 src/operator/contrib/index_copy-inl.h  |   2 +-
 src/operator/contrib/index_copy.cc |   4 +-
 src/operator/contrib/mrcnn_mask_target-inl.h   | 132 
 src/operator/contrib/mrcnn_mask_target.cu  | 278 +
 src/operator/contrib/reset_arrays-inl.h|  92 +++
 src/operator/contrib/reset_arrays.cc   |  74 +++
 .../contrib/{multi_lars.cu => reset_arrays.cu} |  18 +-
 src/operator/leaky_relu-inl.h  |   2 +-
 src/operator/mshadow_op.h  |  32 +
 src/operator/mxnet_op.h|  20 +
 src/operator/nn/concat-inl.h   |  62 ++
 src/operator/nn/dropout-inl.h  |   4 +-
 .../numpy/np_

[incubator-mxnet] 02/03: [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit fca9c924cd9699f1e265422afa53e194c2570e79
Author: Junru Shao 
AuthorDate: Wed Oct 9 23:01:24 2019 -0700

[IR-Bridge] Support attrs for operators: convolution, batch norm, relu 
(#16351)

* Rebased

* Trigger CI

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...
---
 Makefile   |   4 +-
 src/imperative/cached_op.cc|  14 +-
 src/v3/include/bridge/legacy_nnvm.h|  64 +++
 src/v3/include/ir.h| 188 +
 src/v3/include/op/attrs/nn.h   |  71 
 src/v3/src/bridge/legacy_nnvm/attrs.cc | 120 +
 .../legacy_nnvm/ir.cc} | 109 ++--
 src/v3/src/op/attrs.cc |  40 +
 tests/python/unittest/test_numpy_op.py |   7 +-
 9 files changed, 560 insertions(+), 57 deletions(-)

diff --git a/Makefile b/Makefile
index b18edf0..3a675cd 100644
--- a/Makefile
+++ b/Makefile
@@ -462,7 +462,7 @@ endif
 
 all: lib/libmxnet.a lib/libmxnet.so $(BIN) extra-packages sample_lib
 
-SRC = $(wildcard src/*/*/*/*.cc src/*/*/*.cc src/*/*.cc src/*.cc)
+SRC = $(wildcard src/*/*/*/*/*/*.cc src/*/*/*/*/*.cc src/*/*/*/*.cc src/*/*/*.cc src/*/*.cc src/*.cc)
 OBJ = $(patsubst %.cc, build/%.o, $(SRC))
 CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
@@ -795,6 +795,8 @@ clean_all: clean
 -include build/*/*.d
 -include build/*/*/*.d
 -include build/*/*/*/*.d
+-include build/*/*/*/*/*.d
+-include build/*/*/*/*/*/*.d
 ifneq ($(EXTRA_OPERATORS),)
	-include $(patsubst %, %/*.d, $(EXTRA_OPERATORS)) $(patsubst %, %/*/*.d, $(EXTRA_OPERATORS))
 endif
diff --git a/src/imperative/cached_op.cc b/src/imperative/cached_op.cc
index 14e9527..5180c7f 100644
--- a/src/imperative/cached_op.cc
+++ b/src/imperative/cached_op.cc
@@ -25,18 +25,18 @@
 #include "../operator/operator_common.h"
 #include "../operator/subgraph/common.h"
 
-#if MXNET_USE_TVM_OP
-#ifndef MXNET_AMALGAMATION
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
 #include 
 namespace mxnet {
 namespace v3 {
-namespace nnvm_relay_bridge {
+namespace bridge {
+namespace legacy_nnvm {
tvm::NodeRef NNVMToRelay(const nnvm::Graph &g);
-}  // namespace nnvm_relay_bridge
+}  // namespace legacy_nnvm
+}  // namespace bridge
 }  // namespace v3
 }  // namespace mxnet
-#endif  // MXNET_AMALGAMATION
-#endif  // MXNET_USE_TVM_OP
+#endif
 
 namespace mxnet {
 
@@ -325,7 +325,7 @@ bool CachedOp::SetForwardGraph(
   CHECK_EQ(inputs.size(), num_inputs());
   nnvm::Graph& g = info->fwd_graph;
 #if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
-  v3::nnvm_relay_bridge::NNVMToRelay(g);
+  v3::bridge::legacy_nnvm::NNVMToRelay(g);
 #endif  // MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
   ShapeVector shape_inputs;
   DTypeVector dtype_inputs;
diff --git a/src/v3/include/bridge/legacy_nnvm.h b/src/v3/include/bridge/legacy_nnvm.h
new file mode 100644
index 0000000..e2c99a5
--- /dev/null
+++ b/src/v3/include/bridge/legacy_nnvm.h
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file legacy_nnvm.h
+ * \author Junru Shao
+ */
+#pragma once
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+#include 
+
+#include "../ir.h"
+
+namespace nnvm {
+class Op;
+class Graph;
+}  // namespace nnvm
+
+namespace mxnet {
+namespace v3 {
+namespace bridge {
+namespace legacy_nnvm {
+
+class NNVMCapsuleNode final : public ir::Node {
+ public:
+  nnvm::NodeAttrs attrs;
+  void VisitAttrs(tvm::AttrVisitor *v) final {}
+  static constexpr const char *_type_key = "mxnet.v3.bridge.NNVMCa

[incubator-mxnet] branch ir-patch updated (baeeb22 -> 8a2d750)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


omit baeeb22  [IR-Bridge] Operator Attributes (#16421)
omit 4358f79  [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)
omit 3163642  [IR-Patch] IR Bridge (#16290)
 add 858a52e  Fix large array tests (#16328)
 add 6d6e46b  Comparison ops implemented using mshadow (#16414)
 add 1d4ede3  Add mask target generator operator for Mask-RCNN (#16268)
 add 8820220  Adds pip requirements file to nightly gpu ci image (#16472)
 add 1256976  Fix Nightly Tests for Binaries (#16451)
 add 812e504  fix autodoc for spurrious toggles (#16452)
 add 7ce  Fix dtype bug (#16467)
 add 9ab428e  [Doc] Update the download page with 1.5.1 release (#16442)
 add 6e0b1a5  [Numpy] Numpy compatible dstack (#15871)
 add ceebcaf  numpy eye op (#16132)
 add 8222979  Numpy compatible vsplit; minor changes to split (#15983)
 add 8562adc  add numpy op logspace (#15825)
 add 9681197  add numpy op bitwise_xor, hsplit, moveaxis, rot90 (#16257)
 add f9359c3  Fix flakey pylint CI failures (#16462)
 add 67e1e68  Aggregated zero grad (#16446)
 add b1932c0  Move MRCNNMaskTarget op to contrib (#16486)
 add 06438ab  Mxnet allclose (#14443)
 add 0c00a79  Fix optimizer bug for np attribute (#16494)
 add c2bbde7  Tests of NumPy interoperability (#16469)
 new 08b8498  [IR-Patch] IR Bridge (#16290)
 new fca9c92  [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)
 new 8a2d750  [IR-Bridge] Operator Attributes (#16421)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (baeeb22)
\
 N -- N -- N   refs/heads/ir-patch (8a2d750)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CMakeLists.txt |   6 +-
 ci/docker/Dockerfile.build.ubuntu_nightly_gpu  |   1 +
 ci/other/pylintrc  |   7 +-
 docs/python_docs/_static/autodoc.js|  34 +-
 .../src/_includes/get_started/get_started.html |   6 +-
 .../src/_includes/get_started/pip_snippet.md   |   2 +-
 docs/static_site/src/pages/get_started/download.md |   1 +
 python/mxnet/_numpy_op_doc.py  |  95 +++
 python/mxnet/gluon/parameter.py|  21 +-
 python/mxnet/ndarray/numpy/_op.py  | 420 -
 python/mxnet/numpy/multiarray.py   | 411 -
 python/mxnet/numpy_dispatch_protocol.py|   2 +
 python/mxnet/numpy_extension/__init__.py   |   2 +-
 python/mxnet/numpy_op_signature.py |   5 +-
 python/mxnet/optimizer/optimizer.py|   2 +-
 python/mxnet/symbol/numpy/_symbol.py   | 398 -
 python/mxnet/test_utils.py | 221 +--
 python/mxnet/util.py   |  61 ++
 src/common/utils.h |  15 +
 src/ndarray/ndarray_function.cc|  13 +-
 src/ndarray/ndarray_function.cu|   4 -
 src/operator/contrib/allclose_op-inl.h | 160 +
 src/operator/contrib/allclose_op.cc|  86 +++
 src/operator/contrib/allclose_op.cu|  58 ++
 src/operator/contrib/index_copy-inl.h  |   2 +-
 src/operator/contrib/index_copy.cc |   4 +-
 src/operator/contrib/mrcnn_mask_target-inl.h   | 132 
 src/operator/contrib/mrcnn_mask_target.cu  | 278 +
 src/operator/contrib/reset_arrays-inl.h|  92 +++
 src/operator/contrib/reset_arrays.cc   |  74 +++
 .../contrib/{multi_lars.cu => reset_arrays.cu} |  18 +-
 src/operator/leaky_relu-inl.h  |   2 +-
 src/operator/mshadow_op.h  |  32 +
 src/operator/mxnet_op.h|  20 +
 src/operator/nn/concat-inl.h   |  62 ++
 src/operator/nn/dropout-inl.h  |   4 +-
 .../numpy/np_

[incubator-mxnet] 01/03: [IR-Patch] IR Bridge (#16290)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 08b849869b9a0fc15521256d0b3662cf3b1cf42f
Author: Junru Shao 
AuthorDate: Mon Sep 30 12:11:35 2019 -0700

[IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py
---
 3rdparty/tvm   |   2 +-
 CMakeLists.txt |   2 +-
 Makefile   |  17 +-
 amalgamation/Makefile  |   4 +-
 amalgamation/amalgamation.py   |   4 +-
 ci/jenkins/Jenkins_steps.groovy|  20 +--
 .../assembly/src/main/assembly/assembly.xml|   2 +-
 .../apache/mxnet/util/NativeLibraryLoader.scala|   2 +-
 src/imperative/cached_op.cc|  16 +-
 src/v3/src/nnvm_relay_bridge.cc| 182 +
 tests/nightly/JenkinsfileForBinaries   |   4 +-
 .../JenkinsfileForMBCC |   2 +-
 12 files changed, 228 insertions(+), 29 deletions(-)

diff --git a/3rdparty/tvm b/3rdparty/tvm
index afd4b3e..18188f4 160000
--- a/3rdparty/tvm
+++ b/3rdparty/tvm
@@ -1 +1 @@
-Subproject commit afd4b3e4450984358e9d79a7e8e578483cb7b017
+Subproject commit 18188f4ba3f53cc1dab765b8a0d932d21db0ae8a
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 70b0991..aaa07ab 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -748,7 +748,7 @@ if(USE_DIST_KVSTORE)
 endif()
 
 if(USE_TVM_OP)
-  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm_runtime.so)
+  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm.so)
   include(cmake/BuildTVM.cmake)
   add_subdirectory("3rdparty/tvm")
 
diff --git a/Makefile b/Makefile
index 0a1e355..b18edf0 100644
--- a/Makefile
+++ b/Makefile
@@ -468,9 +468,9 @@ CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
 
 ifeq ($(USE_TVM_OP), 1)
-LIB_DEP += lib/libtvm_runtime.so lib/libtvmop.so
+LIB_DEP += lib/libtvm.so lib/libtvmop.so
 CFLAGS += -I$(TVM_PATH)/include -DMXNET_USE_TVM_OP=1
-LDFLAGS += -L$(ROOTDIR)/lib -ltvm_runtime -Wl,-rpath,'$${ORIGIN}'
+LDFLAGS += -L$(ROOTDIR)/lib -ltvm -Wl,-rpath,'$${ORIGIN}'
 
 TVM_USE_CUDA := OFF
 ifeq ($(USE_CUDA), 1)
@@ -618,15 +618,16 @@ $(DMLC_CORE)/libdmlc.a: DMLCCORE
 DMLCCORE:
+	cd $(DMLC_CORE); $(MAKE) libdmlc.a USE_SSE=$(USE_SSE) config=$(ROOTDIR)/$(config); cd $(ROOTDIR)
 
-lib/libtvm_runtime.so:
+lib/libtvm.so:
echo "Compile TVM"
	[ -e $(LLVM_PATH)/bin/llvm-config ] || sh $(ROOTDIR)/contrib/tvmop/prepare_tvm.sh; \
cd $(TVM_PATH)/build; \
-   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config" \
+	cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config --ignore-libllvm" -DHIDE_PRIVATE_SYMBOLS=ON \
+	-DCMAKE_SHARED_LINKER_FLAGS="-Wl,--exclude-libs,ALL" \
	 -DUSE_SORT=OFF -DUSE_CUDA=$(TVM_USE_CUDA) -DUSE_CUDNN=OFF ..; \
$(MAKE) VERBOSE=1; \
mkdir -p $(ROOTDIR)/lib; \
-	cp $(TVM_PATH)/build/libtvm_runtime.so $(ROOTDIR)/lib/libtvm_runtime.so; \
+   cp $(TVM_PATH)/build/libtvm.so $(ROOTDIR)/lib/libtvm.so; \
ls $(ROOTDIR)/lib; \
cd $(ROOTDIR)
 
@@ -634,7 +635,7 @@ TVM_OP_COMPILE_OPTIONS = -o $(ROOTDIR)/lib/libtvmop.so
 ifneq ($(CUDA_ARCH),)
TVM_OP_COMPILE_OPTIONS += --cuda-arch "$(CUDA_ARCH)"
 endif
-lib/libtvmop.so: lib/libtvm_runtime.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
+lib/libtvmop.so: lib/libtvm.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
echo "Compile TVM operators"

PYTHONPATH=$(TVM_PATH)/python:$(TVM_PATH)/topi/python:$(ROOTDIR)/contrib \
LD_LIBRARY_PATH=$(ROOTDIR)/lib \
@@ -700,8 +701,8 @@ rpkg:
cp -rf lib/libmklml_intel.so R-package/inst/libs; \
fi
 
-   if [ -e "lib/libtvm_runtime.so" ]; then \
-   cp -rf lib/libtvm_runtime.so R-package/inst/libs; \
+   if [ -e "lib/libtvm.so" ]; then \
+   cp -rf lib/libtvm.so R-package/inst/libs; \
fi
 
mkdir -p R-package/inst/include
diff --git a/amalgamation/Makefile b/amalgamation/Makefile
index 701c1f1..f45ebfc 100644
--- a/amalgamation/Makefile
+++ b/amalgamation/Makefile
@@

[incubator-mxnet] 03/03: [IR-Bridge] Operator Attributes (#16421)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 8a2d75078d9cc98afff0071c0dba3e8fc5532d01
Author: Junru Shao 
AuthorDate: Thu Oct 10 15:27:57 2019 -0700

[IR-Bridge] Operator Attributes (#16421)

* [IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py

* Attributes

* [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)

* Rebased

* Trigger CI

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* restore?

* style
---
 src/v3/include/op/attrs/legacy_nnvm_attrs.h | 2861 +++
 src/v3/src/op/legacy_nnvm_attrs.cc  |  406 
 2 files changed, 3267 insertions(+)

diff --git a/src/v3/include/op/attrs/legacy_nnvm_attrs.h b/src/v3/include/op/attrs/legacy_nnvm_attrs.h
new file mode 100644
index 0000000..ae942cf
--- /dev/null
+++ b/src/v3/include/op/attrs/legacy_nnvm_attrs.h
@@ -0,0 +1,2861 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file legacy_nnvm_attrs.h
+ * \author Junru Shao
+ */
+#pragma once
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+#include 
+
+#include "../../ir.h"
+
+namespace mxnet {
+namespace v3 {
+namespace op {
+namespace attrs {
+// _copyto
+using LegacyCopytoAttrs = ir::Attrs;
+// all_finite
+class LegacyAllFiniteAttrs : public ir::AttrsNode<LegacyAllFiniteAttrs> {
+ public:
+  bool init_output;
+
+  MX_V3_DECLARE_ATTRS(LegacyAllFiniteAttrs, "mxnet.v3.attrs.LegacyAllFiniteAttrs") {
+MX_V3_ATTR_FIELD(init_output);
+  }
+};
+// _npi_deg2rad
+using LegacyNpiDeg2radAttrs = ir::Attrs;
+// _npi_rad2deg
+using LegacyNpiRad2degAttrs = ir::Attrs;
+// IdentityAttachKLSparseReg
+class LegacyIdentityAttachKLSparseRegAttrs
+    : public ir::AttrsNode<LegacyIdentityAttachKLSparseRegAttrs> {
+ public:
+  double sparseness_target;
+  double penalty;
+  double momentum;
+
+  MX_V3_DECLARE_ATTRS(LegacyIdentityAttachKLSparseRegAttrs,
+  "mxnet.v3.attrs.LegacyIdentityAttachKLSparseRegAttrs") {
+MX_V3_ATTR_FIELD(sparseness_target);
+MX_V3_ATTR_FIELD(penalty);
+MX_V3_ATTR_FIELD(momentum);
+  }
+};
+// LeakyReLU
+class LegacyLeakyReLUAttrs : public ir::AttrsNode<LegacyLeakyReLUAttrs> {
+ public:
+  std::string act_type;
+  double slope;
+  double lower_bound;
+  double upper_bound;
+
+  MX_V3_DECLARE_ATTRS(LegacyLeakyReLUAttrs, "mxnet.v3.attrs.LegacyLeakyReLUAttrs") {
+MX_V3_ATTR_FIELD(act_type);
+MX_V3_ATTR_FIELD(slope);
+MX_V3_ATTR_FIELD(lower_bound);
+MX_V3_ATTR_FIELD(upper_bound);
+  }
+};
+// softmax_cross_entropy
+using LegacySoftmaxCrossEntropyAttrs = ir::Attrs;
+// Activation
+class LegacyActivationAttrs : public ir::AttrsNode<LegacyActivationAttrs> {
+ public:
+  std::string act_type;
+
+  MX_V3_DECLARE_ATTRS(LegacyActivationAttrs, "mxnet.v3.attrs.LegacyActivationAttrs") {
+MX_V3_ATTR_FIELD(act_type);
+  }
+};
+// BatchNorm
+class LegacyBatchNormAttrs : public ir::AttrsNode<LegacyBatchNormAttrs> {
+ public:
+  double eps;
+  double momentum;
+  bool fix_gamma;
+  bool use_global_stats;
+  bool output_mean_var;
+  int axis;
+  bool cudnn_off;
+  double min_calib_range;
+  double max_calib_range;
+
+  MX_V3_DECLARE_ATTRS(LegacyBatchNormAttrs, "mxnet.v3.attrs.LegacyBatchNormAttrs") {
+MX_V3_ATTR_FIELD(eps);
+MX_V3_ATTR_FIELD(momentum);
+MX_V3_ATTR_FIELD(fix_gamma);
+MX_V3_ATTR_FIELD(use_global_stats);
+MX_V3_A


[incubator-mxnet] branch ir-patch updated (baeeb22 -> 8a2d750)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


omit baeeb22  [IR-Bridge] Operator Attributes (#16421)
omit 4358f79  [IR-Bridge] Support attrs for operators: convolution, batch 
norm, relu (#16351)
omit 3163642  [IR-Patch] IR Bridge (#16290)
 add 858a52e  Fix large array tests (#16328)
 add 6d6e46b  Comparison ops implemented using mshadow (#16414)
 add 1d4ede3  Add mask target generator operator for Mask-RCNN (#16268)
 add 8820220  Adds pip requirements file to nightly gpu ci image (#16472)
 add 1256976  Fix Nightly Tests for Binaries (#16451)
 add 812e504  fix autodoc for spurrious toggles (#16452)
 add 7ce  Fix dtype bug (#16467)
 add 9ab428e  [Doc] Update the download page with 1.5.1 release (#16442)
 add 6e0b1a5  [Numpy] Numpy compatible dstack (#15871)
 add ceebcaf  numpy eye op (#16132)
 add 8222979  Numpy compatible vsplit; minor changes to split (#15983)
 add 8562adc  add numpy op logspace (#15825)
 add 9681197  add numpy op bitwise_xor, hsplit, moveaxis, rot90 (#16257)
 add f9359c3  Fix flakey pylint CI failures (#16462)
 add 67e1e68  Aggregated zero grad (#16446)
 add b1932c0  Move MRCNNMaskTarget op to contrib (#16486)
 add 06438ab  Mxnet allclose (#14443)
 add 0c00a79  Fix optimizer bug for np attribute (#16494)
 add c2bbde7  Tests of NumPy interoperability (#16469)
 new 08b8498  [IR-Patch] IR Bridge (#16290)
 new fca9c92  [IR-Bridge] Support attrs for operators: convolution, batch 
norm, relu (#16351)
 new 8a2d750  [IR-Bridge] Operator Attributes (#16421)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (baeeb22)
            \
             N -- N -- N   refs/heads/ir-patch (8a2d750)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
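The force-push situation described above can be modeled as a tiny commit graph. A hedged Python sketch (commit names B, O1..O3, N1..N3 are illustrative stand-ins for the hashes above, not real objects):

```python
# Model the rewritten ir-patch branch as a parent-pointer DAG.
# B is the common base; O1..O3 stand for the old ("omit") revisions,
# N1..N3 for the new revisions pushed after the history rewrite.
parents = {
    "B": None,
    "O1": "B", "O2": "O1", "O3": "O2",   # old tip was O3 (baeeb22)
    "N1": "B", "N2": "N1", "N3": "N2",   # new tip is N3 (8a2d750)
}

def history(tip):
    """Commits reachable from tip by following parent links."""
    out = []
    while tip is not None:
        out.append(tip)
        tip = parents[tip]
    return out

# After the force push, the branch no longer reaches the O commits...
assert "O3" not in history("N3")
# ...but the objects are not gone: other references may still hold them.
assert "O3" in parents
print(history("N3"))  # ['N3', 'N2', 'N1', 'B']
```

This is exactly the distinction the notification draws between "omit" (unreachable from this ref, still present) and "discard" (gone forever).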


Summary of changes:
 CMakeLists.txt |   6 +-
 ci/docker/Dockerfile.build.ubuntu_nightly_gpu  |   1 +
 ci/other/pylintrc  |   7 +-
 docs/python_docs/_static/autodoc.js|  34 +-
 .../src/_includes/get_started/get_started.html |   6 +-
 .../src/_includes/get_started/pip_snippet.md   |   2 +-
 docs/static_site/src/pages/get_started/download.md |   1 +
 python/mxnet/_numpy_op_doc.py  |  95 +++
 python/mxnet/gluon/parameter.py|  21 +-
 python/mxnet/ndarray/numpy/_op.py  | 420 -
 python/mxnet/numpy/multiarray.py   | 411 -
 python/mxnet/numpy_dispatch_protocol.py|   2 +
 python/mxnet/numpy_extension/__init__.py   |   2 +-
 python/mxnet/numpy_op_signature.py |   5 +-
 python/mxnet/optimizer/optimizer.py|   2 +-
 python/mxnet/symbol/numpy/_symbol.py   | 398 -
 python/mxnet/test_utils.py | 221 +--
 python/mxnet/util.py   |  61 ++
 src/common/utils.h |  15 +
 src/ndarray/ndarray_function.cc|  13 +-
 src/ndarray/ndarray_function.cu|   4 -
 src/operator/contrib/allclose_op-inl.h | 160 +
 src/operator/contrib/allclose_op.cc|  86 +++
 src/operator/contrib/allclose_op.cu|  58 ++
 src/operator/contrib/index_copy-inl.h  |   2 +-
 src/operator/contrib/index_copy.cc |   4 +-
 src/operator/contrib/mrcnn_mask_target-inl.h   | 132 
 src/operator/contrib/mrcnn_mask_target.cu  | 278 +
 src/operator/contrib/reset_arrays-inl.h|  92 +++
 src/operator/contrib/reset_arrays.cc   |  74 +++
 .../contrib/{multi_lars.cu => reset_arrays.cu} |  18 +-
 src/operator/leaky_relu-inl.h  |   2 +-
 src/operator/mshadow_op.h  |  32 +
 src/operator/mxnet_op.h|  20 +
 src/operator/nn/concat-inl.h   |  62 ++
 src/operator/nn/dropout-inl.h  |   4 +-
 .../numpy/np_

[incubator-mxnet] 02/03: [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit fca9c924cd9699f1e265422afa53e194c2570e79
Author: Junru Shao 
AuthorDate: Wed Oct 9 23:01:24 2019 -0700

[IR-Bridge] Support attrs for operators: convolution, batch norm, relu 
(#16351)

* Rebased

* Trigger CI

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...
---
 Makefile   |   4 +-
 src/imperative/cached_op.cc|  14 +-
 src/v3/include/bridge/legacy_nnvm.h|  64 +++
 src/v3/include/ir.h| 188 +
 src/v3/include/op/attrs/nn.h   |  71 
 src/v3/src/bridge/legacy_nnvm/attrs.cc | 120 +
 .../legacy_nnvm/ir.cc} | 109 ++--
 src/v3/src/op/attrs.cc |  40 +
 tests/python/unittest/test_numpy_op.py |   7 +-
 9 files changed, 560 insertions(+), 57 deletions(-)

diff --git a/Makefile b/Makefile
index b18edf0..3a675cd 100644
--- a/Makefile
+++ b/Makefile
@@ -462,7 +462,7 @@ endif
 
 all: lib/libmxnet.a lib/libmxnet.so $(BIN) extra-packages sample_lib
 
-SRC = $(wildcard src/*/*/*/*.cc src/*/*/*.cc src/*/*.cc src/*.cc)
+SRC = $(wildcard src/*/*/*/*/*/*.cc src/*/*/*/*/*.cc src/*/*/*/*.cc 
src/*/*/*.cc src/*/*.cc src/*.cc)
 OBJ = $(patsubst %.cc, build/%.o, $(SRC))
 CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
@@ -795,6 +795,8 @@ clean_all: clean
 -include build/*/*.d
 -include build/*/*/*.d
 -include build/*/*/*/*.d
+-include build/*/*/*/*/*.d
+-include build/*/*/*/*/*/*.d
 ifneq ($(EXTRA_OPERATORS),)
-include $(patsubst %, %/*.d, $(EXTRA_OPERATORS)) $(patsubst %, 
%/*/*.d, $(EXTRA_OPERATORS))
 endif
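The hunk above widens `$(wildcard ...)` so five-level-deep sources such as `src/v3/src/bridge/legacy_nnvm/attrs.cc` are picked up, and `$(patsubst %.cc, build/%.o, $(SRC))` then maps each source to its object path. A minimal Python sketch of that stem rewrite (the `patsubst` emulation here is illustrative, not GNU make itself):

```python
def patsubst(pattern, replacement, names):
    """Minimal emulation of GNU make's $(patsubst pattern,replacement,names):
    '%' in the pattern matches a stem, which is substituted for '%' in the
    replacement.  Names that do not match are passed through unchanged."""
    pre, _, post = pattern.partition("%")
    rpre, _, rpost = replacement.partition("%")
    out = []
    for n in names:
        if n.startswith(pre) and n.endswith(post) and len(n) >= len(pre) + len(post):
            stem = n[len(pre):len(n) - len(post)]
            out.append(rpre + stem + rpost)
        else:
            out.append(n)
    return out

src = ["src/v3/src/bridge/legacy_nnvm/attrs.cc", "src/imperative/cached_op.cc"]
print(patsubst("%.cc", "build/%.o", src))
# ['build/src/v3/src/bridge/legacy_nnvm/attrs.o', 'build/src/imperative/cached_op.o']
```

The matching `-include build/*/*/*/*/*/*.d` lines later in the hunk extend the dependency-file includes to the same new directory depth.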
diff --git a/src/imperative/cached_op.cc b/src/imperative/cached_op.cc
index 14e9527..5180c7f 100644
--- a/src/imperative/cached_op.cc
+++ b/src/imperative/cached_op.cc
@@ -25,18 +25,18 @@
 #include "../operator/operator_common.h"
 #include "../operator/subgraph/common.h"
 
-#if MXNET_USE_TVM_OP
-#ifndef MXNET_AMALGAMATION
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
 #include 
 namespace mxnet {
 namespace v3 {
-namespace nnvm_relay_bridge {
+namespace bridge {
+namespace legacy_nnvm {
 tvm::NodeRef NNVMToRelay(const nnvm::Graph );
-}  // namespace nnvm_relay_bridge
+}  // namespace legacy_nnvm
+}  // namespace bridge
 }  // namespace v3
 }  // namespace mxnet
-#endif  // MXNET_AMALGAMATION
-#endif  // MXNET_USE_TVM_OP
+#endif
 
 namespace mxnet {
 
@@ -325,7 +325,7 @@ bool CachedOp::SetForwardGraph(
   CHECK_EQ(inputs.size(), num_inputs());
   nnvm::Graph& g = info->fwd_graph;
 #if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
-  v3::nnvm_relay_bridge::NNVMToRelay(g);
+  v3::bridge::legacy_nnvm::NNVMToRelay(g);
 #endif  // MXNET_USE_TVM_OP && !define MXNET_AMALGAMATION
   ShapeVector shape_inputs;
   DTypeVector dtype_inputs;
diff --git a/src/v3/include/bridge/legacy_nnvm.h 
b/src/v3/include/bridge/legacy_nnvm.h
new file mode 100644
index 0000000..e2c99a5
--- /dev/null
+++ b/src/v3/include/bridge/legacy_nnvm.h
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file legacy_nnvm.h
+ * \author Junru Shao
+ */
+#pragma once
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+#include 
+
+#include "../ir.h"
+
+namespace nnvm {
+class Op;
+class Graph;
+}  // namespace nnvm
+
+namespace mxnet {
+namespace v3 {
+namespace bridge {
+namespace legacy_nnvm {
+
+class NNVMCapsuleNode final : public ir::Node {
+ public:
+  nnvm::NodeAttrs attrs;
+  void VisitAttrs(tvm::AttrVisitor *v) final {}
+  static constexpr const char *_type_key = "mxnet.v3.bridge.NNVMCa

[incubator-mxnet] 03/03: [IR-Bridge] Operator Attributes (#16421)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 8a2d75078d9cc98afff0071c0dba3e8fc5532d01
Author: Junru Shao 
AuthorDate: Thu Oct 10 15:27:57 2019 -0700

[IR-Bridge] Operator Attributes (#16421)

* [IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py

* Attributes

* [IR-Bridge] Support attrs for operators: convolution, batch norm, relu 
(#16351)

* Rebased

* Trigger CI

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* restore?

* style
---
 src/v3/include/op/attrs/legacy_nnvm_attrs.h | 2861 +++
 src/v3/src/op/legacy_nnvm_attrs.cc  |  406 
 2 files changed, 3267 insertions(+)

diff --git a/src/v3/include/op/attrs/legacy_nnvm_attrs.h 
b/src/v3/include/op/attrs/legacy_nnvm_attrs.h
new file mode 100644
index 0000000..ae942cf
--- /dev/null
+++ b/src/v3/include/op/attrs/legacy_nnvm_attrs.h
@@ -0,0 +1,2861 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file legacy_nnvm_attrs.h
+ * \author Junru Shao
+ */
+#pragma once
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+#include 
+
+#include "../../ir.h"
+
+namespace mxnet {
+namespace v3 {
+namespace op {
+namespace attrs {
+// _copyto
+using LegacyCopytoAttrs = ir::Attrs;
+// all_finite
+class LegacyAllFiniteAttrs : public ir::AttrsNode {
+ public:
+  bool init_output;
+
+  MX_V3_DECLARE_ATTRS(LegacyAllFiniteAttrs, 
"mxnet.v3.attrs.LegacyAllFiniteAttrs") {
+MX_V3_ATTR_FIELD(init_output);
+  }
+};
+// _npi_deg2rad
+using LegacyNpiDeg2radAttrs = ir::Attrs;
+// _npi_rad2deg
+using LegacyNpiRad2degAttrs = ir::Attrs;
+// IdentityAttachKLSparseReg
+class LegacyIdentityAttachKLSparseRegAttrs
+: public ir::AttrsNode {
+ public:
+  double sparseness_target;
+  double penalty;
+  double momentum;
+
+  MX_V3_DECLARE_ATTRS(LegacyIdentityAttachKLSparseRegAttrs,
+  "mxnet.v3.attrs.LegacyIdentityAttachKLSparseRegAttrs") {
+MX_V3_ATTR_FIELD(sparseness_target);
+MX_V3_ATTR_FIELD(penalty);
+MX_V3_ATTR_FIELD(momentum);
+  }
+};
+// LeakyReLU
+class LegacyLeakyReLUAttrs : public ir::AttrsNode {
+ public:
+  std::string act_type;
+  double slope;
+  double lower_bound;
+  double upper_bound;
+
+  MX_V3_DECLARE_ATTRS(LegacyLeakyReLUAttrs, 
"mxnet.v3.attrs.LegacyLeakyReLUAttrs") {
+MX_V3_ATTR_FIELD(act_type);
+MX_V3_ATTR_FIELD(slope);
+MX_V3_ATTR_FIELD(lower_bound);
+MX_V3_ATTR_FIELD(upper_bound);
+  }
+};
+// softmax_cross_entropy
+using LegacySoftmaxCrossEntropyAttrs = ir::Attrs;
+// Activation
+class LegacyActivationAttrs : public ir::AttrsNode {
+ public:
+  std::string act_type;
+
+  MX_V3_DECLARE_ATTRS(LegacyActivationAttrs, 
"mxnet.v3.attrs.LegacyActivationAttrs") {
+MX_V3_ATTR_FIELD(act_type);
+  }
+};
+// BatchNorm
+class LegacyBatchNormAttrs : public ir::AttrsNode {
+ public:
+  double eps;
+  double momentum;
+  bool fix_gamma;
+  bool use_global_stats;
+  bool output_mean_var;
+  int axis;
+  bool cudnn_off;
+  double min_calib_range;
+  double max_calib_range;
+
+  MX_V3_DECLARE_ATTRS(LegacyBatchNormAttrs, 
"mxnet.v3.attrs.LegacyBatchNormAttrs") {
+MX_V3_ATTR_FIELD(eps);
+MX_V3_ATTR_FIELD(momentum);
+MX_V3_ATTR_FIELD(fix_gamma);
+MX_V3_ATTR_FIELD(use_global_stats);
+MX_V3_A
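The diff above (truncated by the archiver) declares attrs classes through the `MX_V3_DECLARE_ATTRS` / `MX_V3_ATTR_FIELD` macros, which register named fields so a generic visitor can walk them. A hedged Python sketch of that pattern (class and field names mirror the diff; the mechanics are illustrative, not MXNet's actual macro expansion):

```python
class AttrsNode:
    """Base class: subclasses list their attribute fields, and visit_attrs
    walks them with a callback -- the role MX_V3_ATTR_FIELD plays above."""
    _fields = ()

    def visit_attrs(self, visitor):
        for name in self._fields:
            visitor(name, getattr(self, name))

class LegacyBatchNormAttrs(AttrsNode):
    _fields = ("eps", "momentum", "fix_gamma")

    def __init__(self, eps=1e-3, momentum=0.9, fix_gamma=True):
        self.eps, self.momentum, self.fix_gamma = eps, momentum, fix_gamma

# A generic consumer (e.g. a serializer) needs no per-class knowledge:
seen = {}
LegacyBatchNormAttrs().visit_attrs(lambda name, value: seen.update({name: value}))
print(seen)  # {'eps': 0.001, 'momentum': 0.9, 'fix_gamma': True}
```

Registering fields once and visiting them generically is what lets a single bridge pass convert every legacy operator's attributes without per-operator conversion code.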

[incubator-mxnet] 01/03: [IR-Patch] IR Bridge (#16290)

2019-10-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 08b849869b9a0fc15521256d0b3662cf3b1cf42f
Author: Junru Shao 
AuthorDate: Mon Sep 30 12:11:35 2019 -0700

[IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py
---
 3rdparty/tvm   |   2 +-
 CMakeLists.txt |   2 +-
 Makefile   |  17 +-
 amalgamation/Makefile  |   4 +-
 amalgamation/amalgamation.py   |   4 +-
 ci/jenkins/Jenkins_steps.groovy|  20 +--
 .../assembly/src/main/assembly/assembly.xml|   2 +-
 .../apache/mxnet/util/NativeLibraryLoader.scala|   2 +-
 src/imperative/cached_op.cc|  16 +-
 src/v3/src/nnvm_relay_bridge.cc| 182 +
 tests/nightly/JenkinsfileForBinaries   |   4 +-
 .../JenkinsfileForMBCC |   2 +-
 12 files changed, 228 insertions(+), 29 deletions(-)

diff --git a/3rdparty/tvm b/3rdparty/tvm
index afd4b3e..18188f4 160000
--- a/3rdparty/tvm
+++ b/3rdparty/tvm
@@ -1 +1 @@
-Subproject commit afd4b3e4450984358e9d79a7e8e578483cb7b017
+Subproject commit 18188f4ba3f53cc1dab765b8a0d932d21db0ae8a
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 70b0991..aaa07ab 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -748,7 +748,7 @@ if(USE_DIST_KVSTORE)
 endif()
 
 if(USE_TVM_OP)
-  list(APPEND mxnet_LINKER_LIBS 
${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm_runtime.so)
+  list(APPEND mxnet_LINKER_LIBS 
${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm.so)
   include(cmake/BuildTVM.cmake)
   add_subdirectory("3rdparty/tvm")
 
diff --git a/Makefile b/Makefile
index 0a1e355..b18edf0 100644
--- a/Makefile
+++ b/Makefile
@@ -468,9 +468,9 @@ CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu 
src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
 
 ifeq ($(USE_TVM_OP), 1)
-LIB_DEP += lib/libtvm_runtime.so lib/libtvmop.so
+LIB_DEP += lib/libtvm.so lib/libtvmop.so
 CFLAGS += -I$(TVM_PATH)/include -DMXNET_USE_TVM_OP=1
-LDFLAGS += -L$(ROOTDIR)/lib -ltvm_runtime -Wl,-rpath,'$${ORIGIN}'
+LDFLAGS += -L$(ROOTDIR)/lib -ltvm -Wl,-rpath,'$${ORIGIN}'
 
 TVM_USE_CUDA := OFF
 ifeq ($(USE_CUDA), 1)
@@ -618,15 +618,16 @@ $(DMLC_CORE)/libdmlc.a: DMLCCORE
 DMLCCORE:
+ cd $(DMLC_CORE); $(MAKE) libdmlc.a USE_SSE=$(USE_SSE) 
config=$(ROOTDIR)/$(config); cd $(ROOTDIR)
 
-lib/libtvm_runtime.so:
+lib/libtvm.so:
echo "Compile TVM"
[ -e $(LLVM_PATH)/bin/llvm-config ] || sh 
$(ROOTDIR)/contrib/tvmop/prepare_tvm.sh; \
cd $(TVM_PATH)/build; \
-   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config" \
+   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config --ignore-libllvm" 
-DHIDE_PRIVATE_SYMBOLS=ON \
+   -DCMAKE_SHARED_LINKER_FLAGS="-Wl,--exclude-libs,ALL" \
  -DUSE_SORT=OFF -DUSE_CUDA=$(TVM_USE_CUDA) -DUSE_CUDNN=OFF ..; 
\
$(MAKE) VERBOSE=1; \
mkdir -p $(ROOTDIR)/lib; \
-   cp $(TVM_PATH)/build/libtvm_runtime.so 
$(ROOTDIR)/lib/libtvm_runtime.so; \
+   cp $(TVM_PATH)/build/libtvm.so $(ROOTDIR)/lib/libtvm.so; \
ls $(ROOTDIR)/lib; \
cd $(ROOTDIR)
 
@@ -634,7 +635,7 @@ TVM_OP_COMPILE_OPTIONS = -o $(ROOTDIR)/lib/libtvmop.so
 ifneq ($(CUDA_ARCH),)
TVM_OP_COMPILE_OPTIONS += --cuda-arch "$(CUDA_ARCH)"
 endif
-lib/libtvmop.so: lib/libtvm_runtime.so $(wildcard contrib/tvmop/*/*.py 
contrib/tvmop/*.py)
+lib/libtvmop.so: lib/libtvm.so $(wildcard contrib/tvmop/*/*.py 
contrib/tvmop/*.py)
echo "Compile TVM operators"

PYTHONPATH=$(TVM_PATH)/python:$(TVM_PATH)/topi/python:$(ROOTDIR)/contrib \
LD_LIBRARY_PATH=$(ROOTDIR)/lib \
@@ -700,8 +701,8 @@ rpkg:
cp -rf lib/libmklml_intel.so R-package/inst/libs; \
fi
 
-   if [ -e "lib/libtvm_runtime.so" ]; then \
-   cp -rf lib/libtvm_runtime.so R-package/inst/libs; \
+   if [ -e "lib/libtvm.so" ]; then \
+   cp -rf lib/libtvm.so R-package/inst/libs; \
fi
 
mkdir -p R-package/inst/include
diff --git a/amalgamation/Makefile b/amalgamation/Makefile
index 701c1f1..f45ebfc 100644
--- a/amalgamation/Makefile
+++ b/amalgamation/Makefile
@@

[incubator-mxnet] branch ir-patch updated (b846a29 -> baeeb22)

2019-10-12 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


 discard b846a29  [IR-Bridge] Operator Attributes (#16421)
 discard 5361b9b  [IR-Bridge] Support attrs for operators: convolution, batch 
norm, relu (#16351)
 discard 8c22d66  [IR-Patch] IR Bridge (#16290)
 add 4dee4ee  Fix mkldnn reshape (#16455)
 add 1e8cc90  [BUGFIX] Minor type issues in Squeeze (#16448)
 new 3163642  [IR-Patch] IR Bridge (#16290)
 new 4358f79  [IR-Bridge] Support attrs for operators: convolution, batch 
norm, relu (#16351)
 new baeeb22  [IR-Bridge] Operator Attributes (#16421)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (b846a29)
            \
             N -- N -- N   refs/heads/ir-patch (baeeb22)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 src/operator/nn/mkldnn/mkldnn_base-inl.h   |   1 -
 src/operator/nn/mkldnn/mkldnn_base.cc  |   6 +-
 src/operator/nn/mkldnn/mkldnn_flatten-inl.h|  48 -
 src/operator/nn/mkldnn/mkldnn_flatten.cc   |  79 --
 src/operator/nn/mkldnn/mkldnn_ops-inl.h|   5 -
 src/operator/nn/mkldnn/mkldnn_reshape-inl.h|  27 ++---
 src/operator/nn/mkldnn/mkldnn_reshape.cc   | 118 +++--
 src/operator/numpy/np_matrix_op.cc |   2 +-
 .../mkldnn/mkldnn_quantized_flatten.cc |   4 +-
 src/operator/tensor/matrix_op.cc   |  44 +++-
 tests/python/unittest/test_numpy_op.py |  14 +--
 11 files changed, 78 insertions(+), 270 deletions(-)
 delete mode 100644 src/operator/nn/mkldnn/mkldnn_flatten-inl.h
 delete mode 100644 src/operator/nn/mkldnn/mkldnn_flatten.cc



[incubator-mxnet] 02/03: [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)

2019-10-12 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 4358f79fc89bb191eccbc6c04250678edd302947
Author: Junru Shao 
AuthorDate: Wed Oct 9 23:01:24 2019 -0700

[IR-Bridge] Support attrs for operators: convolution, batch norm, relu 
(#16351)

* Rebased

* Trigger CI

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...
---
 Makefile   |   4 +-
 src/imperative/cached_op.cc|  14 +-
 src/v3/include/bridge/legacy_nnvm.h|  64 +++
 src/v3/include/ir.h| 188 +
 src/v3/include/op/attrs/nn.h   |  71 
 src/v3/src/bridge/legacy_nnvm/attrs.cc | 120 +
 .../legacy_nnvm/ir.cc} | 109 ++--
 src/v3/src/op/attrs.cc |  40 +
 tests/python/unittest/test_numpy_op.py |   7 +-
 9 files changed, 560 insertions(+), 57 deletions(-)

diff --git a/Makefile b/Makefile
index b18edf0..3a675cd 100644
--- a/Makefile
+++ b/Makefile
@@ -462,7 +462,7 @@ endif
 
 all: lib/libmxnet.a lib/libmxnet.so $(BIN) extra-packages sample_lib
 
-SRC = $(wildcard src/*/*/*/*.cc src/*/*/*.cc src/*/*.cc src/*.cc)
+SRC = $(wildcard src/*/*/*/*/*/*.cc src/*/*/*/*/*.cc src/*/*/*/*.cc 
src/*/*/*.cc src/*/*.cc src/*.cc)
 OBJ = $(patsubst %.cc, build/%.o, $(SRC))
 CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
@@ -795,6 +795,8 @@ clean_all: clean
 -include build/*/*.d
 -include build/*/*/*.d
 -include build/*/*/*/*.d
+-include build/*/*/*/*/*.d
+-include build/*/*/*/*/*/*.d
 ifneq ($(EXTRA_OPERATORS),)
-include $(patsubst %, %/*.d, $(EXTRA_OPERATORS)) $(patsubst %, 
%/*/*.d, $(EXTRA_OPERATORS))
 endif
diff --git a/src/imperative/cached_op.cc b/src/imperative/cached_op.cc
index 14e9527..5180c7f 100644
--- a/src/imperative/cached_op.cc
+++ b/src/imperative/cached_op.cc
@@ -25,18 +25,18 @@
 #include "../operator/operator_common.h"
 #include "../operator/subgraph/common.h"
 
-#if MXNET_USE_TVM_OP
-#ifndef MXNET_AMALGAMATION
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
 #include 
 namespace mxnet {
 namespace v3 {
-namespace nnvm_relay_bridge {
+namespace bridge {
+namespace legacy_nnvm {
 tvm::NodeRef NNVMToRelay(const nnvm::Graph );
-}  // namespace nnvm_relay_bridge
+}  // namespace legacy_nnvm
+}  // namespace bridge
 }  // namespace v3
 }  // namespace mxnet
-#endif  // MXNET_AMALGAMATION
-#endif  // MXNET_USE_TVM_OP
+#endif
 
 namespace mxnet {
 
@@ -325,7 +325,7 @@ bool CachedOp::SetForwardGraph(
   CHECK_EQ(inputs.size(), num_inputs());
   nnvm::Graph& g = info->fwd_graph;
 #if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
-  v3::nnvm_relay_bridge::NNVMToRelay(g);
+  v3::bridge::legacy_nnvm::NNVMToRelay(g);
 #endif  // MXNET_USE_TVM_OP && !define MXNET_AMALGAMATION
   ShapeVector shape_inputs;
   DTypeVector dtype_inputs;
diff --git a/src/v3/include/bridge/legacy_nnvm.h 
b/src/v3/include/bridge/legacy_nnvm.h
new file mode 100644
index 0000000..e2c99a5
--- /dev/null
+++ b/src/v3/include/bridge/legacy_nnvm.h
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file legacy_nnvm.h
+ * \author Junru Shao
+ */
+#pragma once
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+#include 
+
+#include "../ir.h"
+
+namespace nnvm {
+class Op;
+class Graph;
+}  // namespace nnvm
+
+namespace mxnet {
+namespace v3 {
+namespace bridge {
+namespace legacy_nnvm {
+
+class NNVMCapsuleNode final : public ir::Node {
+ public:
+  nnvm::NodeAttrs attrs;
+  void VisitAttrs(tvm::AttrVisitor *v) final {}
+  static constexpr const char *_type_key = "mxnet.v3.bridge.NNVMCa

[incubator-mxnet] 03/03: [IR-Bridge] Operator Attributes (#16421)

2019-10-12 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit baeeb223be436733eedd450d818279ec7b9b9b13
Author: Junru Shao 
AuthorDate: Thu Oct 10 15:27:57 2019 -0700

[IR-Bridge] Operator Attributes (#16421)

* [IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py

* Attributes

* [IR-Bridge] Support attrs for operators: convolution, batch norm, relu 
(#16351)

* Rebased

* Trigger CI

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* restore?

* style
---
 src/v3/include/op/attrs/legacy_nnvm_attrs.h | 2861 +++
 src/v3/src/op/legacy_nnvm_attrs.cc  |  406 
 2 files changed, 3267 insertions(+)

diff --git a/src/v3/include/op/attrs/legacy_nnvm_attrs.h 
b/src/v3/include/op/attrs/legacy_nnvm_attrs.h
new file mode 100644
index 000..ae942cf
--- /dev/null
+++ b/src/v3/include/op/attrs/legacy_nnvm_attrs.h
@@ -0,0 +1,2861 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file legacy_nnvm_attrs.h
+ * \author Junru Shao
+ */
+#pragma once
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+#include 
+
+#include "../../ir.h"
+
+namespace mxnet {
+namespace v3 {
+namespace op {
+namespace attrs {
+// _copyto
+using LegacyCopytoAttrs = ir::Attrs;
+// all_finite
+class LegacyAllFiniteAttrs : public ir::AttrsNode<LegacyAllFiniteAttrs> {
+ public:
+  bool init_output;
+
+  MX_V3_DECLARE_ATTRS(LegacyAllFiniteAttrs, 
"mxnet.v3.attrs.LegacyAllFiniteAttrs") {
+MX_V3_ATTR_FIELD(init_output);
+  }
+};
+// _npi_deg2rad
+using LegacyNpiDeg2radAttrs = ir::Attrs;
+// _npi_rad2deg
+using LegacyNpiRad2degAttrs = ir::Attrs;
+// IdentityAttachKLSparseReg
+class LegacyIdentityAttachKLSparseRegAttrs
+    : public ir::AttrsNode<LegacyIdentityAttachKLSparseRegAttrs> {
+ public:
+  double sparseness_target;
+  double penalty;
+  double momentum;
+
+  MX_V3_DECLARE_ATTRS(LegacyIdentityAttachKLSparseRegAttrs,
+  "mxnet.v3.attrs.LegacyIdentityAttachKLSparseRegAttrs") {
+MX_V3_ATTR_FIELD(sparseness_target);
+MX_V3_ATTR_FIELD(penalty);
+MX_V3_ATTR_FIELD(momentum);
+  }
+};
+// LeakyReLU
+class LegacyLeakyReLUAttrs : public ir::AttrsNode<LegacyLeakyReLUAttrs> {
+ public:
+  std::string act_type;
+  double slope;
+  double lower_bound;
+  double upper_bound;
+
+  MX_V3_DECLARE_ATTRS(LegacyLeakyReLUAttrs, 
"mxnet.v3.attrs.LegacyLeakyReLUAttrs") {
+MX_V3_ATTR_FIELD(act_type);
+MX_V3_ATTR_FIELD(slope);
+MX_V3_ATTR_FIELD(lower_bound);
+MX_V3_ATTR_FIELD(upper_bound);
+  }
+};
+// softmax_cross_entropy
+using LegacySoftmaxCrossEntropyAttrs = ir::Attrs;
+// Activation
+class LegacyActivationAttrs : public ir::AttrsNode<LegacyActivationAttrs> {
+ public:
+  std::string act_type;
+
+  MX_V3_DECLARE_ATTRS(LegacyActivationAttrs, 
"mxnet.v3.attrs.LegacyActivationAttrs") {
+MX_V3_ATTR_FIELD(act_type);
+  }
+};
+// BatchNorm
+class LegacyBatchNormAttrs : public ir::AttrsNode<LegacyBatchNormAttrs> {
+ public:
+  double eps;
+  double momentum;
+  bool fix_gamma;
+  bool use_global_stats;
+  bool output_mean_var;
+  int axis;
+  bool cudnn_off;
+  double min_calib_range;
+  double max_calib_range;
+
+  MX_V3_DECLARE_ATTRS(LegacyBatchNormAttrs, 
"mxnet.v3.attrs.LegacyBatchNormAttrs") {
+MX_V3_ATTR_FIELD(eps);
+MX_V3_ATTR_FIELD(momentum);
+MX_V3_ATTR_FIELD(fix_gamma);
+MX_V3_ATTR_FIELD(use_global_stats);
+MX_V3_A

[incubator-mxnet] 01/03: [IR-Patch] IR Bridge (#16290)

2019-10-12 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 3163642097bdb2d10c1cdfda980d3c14050e6e66
Author: Junru Shao 
AuthorDate: Mon Sep 30 12:11:35 2019 -0700

[IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py
---
 3rdparty/tvm   |   2 +-
 CMakeLists.txt |   2 +-
 Makefile   |  17 +-
 amalgamation/Makefile  |   4 +-
 amalgamation/amalgamation.py   |   4 +-
 ci/jenkins/Jenkins_steps.groovy|  20 +--
 .../assembly/src/main/assembly/assembly.xml|   2 +-
 .../apache/mxnet/util/NativeLibraryLoader.scala|   2 +-
 src/imperative/cached_op.cc|  16 +-
 src/v3/src/nnvm_relay_bridge.cc| 182 +
 tests/nightly/JenkinsfileForBinaries   |   4 +-
 .../JenkinsfileForMBCC |   2 +-
 12 files changed, 228 insertions(+), 29 deletions(-)

diff --git a/3rdparty/tvm b/3rdparty/tvm
index afd4b3e..18188f4 160000
--- a/3rdparty/tvm
+++ b/3rdparty/tvm
@@ -1 +1 @@
-Subproject commit afd4b3e4450984358e9d79a7e8e578483cb7b017
+Subproject commit 18188f4ba3f53cc1dab765b8a0d932d21db0ae8a
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 5045bba..c14e169 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -744,7 +744,7 @@ endif()
 
 if(USE_TVM_OP)
   add_definitions(-DMXNET_USE_TVM_OP=1)
-  list(APPEND mxnet_LINKER_LIBS 
${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm_runtime.so)
+  list(APPEND mxnet_LINKER_LIBS 
${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm.so)
   include(cmake/BuildTVM.cmake)
   add_subdirectory("3rdparty/tvm")
 
diff --git a/Makefile b/Makefile
index 0a1e355..b18edf0 100644
--- a/Makefile
+++ b/Makefile
@@ -468,9 +468,9 @@ CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu 
src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
 
 ifeq ($(USE_TVM_OP), 1)
-LIB_DEP += lib/libtvm_runtime.so lib/libtvmop.so
+LIB_DEP += lib/libtvm.so lib/libtvmop.so
 CFLAGS += -I$(TVM_PATH)/include -DMXNET_USE_TVM_OP=1
-LDFLAGS += -L$(ROOTDIR)/lib -ltvm_runtime -Wl,-rpath,'$${ORIGIN}'
+LDFLAGS += -L$(ROOTDIR)/lib -ltvm -Wl,-rpath,'$${ORIGIN}'
 
 TVM_USE_CUDA := OFF
 ifeq ($(USE_CUDA), 1)
@@ -618,15 +618,16 @@ $(DMLC_CORE)/libdmlc.a: DMLCCORE
 DMLCCORE:
+ cd $(DMLC_CORE); $(MAKE) libdmlc.a USE_SSE=$(USE_SSE) 
config=$(ROOTDIR)/$(config); cd $(ROOTDIR)
 
-lib/libtvm_runtime.so:
+lib/libtvm.so:
echo "Compile TVM"
[ -e $(LLVM_PATH)/bin/llvm-config ] || sh 
$(ROOTDIR)/contrib/tvmop/prepare_tvm.sh; \
cd $(TVM_PATH)/build; \
-   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config" \
+   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config --ignore-libllvm" 
-DHIDE_PRIVATE_SYMBOLS=ON \
+   -DCMAKE_SHARED_LINKER_FLAGS="-Wl,--exclude-libs,ALL" \
  -DUSE_SORT=OFF -DUSE_CUDA=$(TVM_USE_CUDA) -DUSE_CUDNN=OFF ..; 
\
$(MAKE) VERBOSE=1; \
mkdir -p $(ROOTDIR)/lib; \
-   cp $(TVM_PATH)/build/libtvm_runtime.so 
$(ROOTDIR)/lib/libtvm_runtime.so; \
+   cp $(TVM_PATH)/build/libtvm.so $(ROOTDIR)/lib/libtvm.so; \
ls $(ROOTDIR)/lib; \
cd $(ROOTDIR)
 
@@ -634,7 +635,7 @@ TVM_OP_COMPILE_OPTIONS = -o $(ROOTDIR)/lib/libtvmop.so
 ifneq ($(CUDA_ARCH),)
TVM_OP_COMPILE_OPTIONS += --cuda-arch "$(CUDA_ARCH)"
 endif
-lib/libtvmop.so: lib/libtvm_runtime.so $(wildcard contrib/tvmop/*/*.py 
contrib/tvmop/*.py)
+lib/libtvmop.so: lib/libtvm.so $(wildcard contrib/tvmop/*/*.py 
contrib/tvmop/*.py)
echo "Compile TVM operators"

PYTHONPATH=$(TVM_PATH)/python:$(TVM_PATH)/topi/python:$(ROOTDIR)/contrib \
LD_LIBRARY_PATH=$(ROOTDIR)/lib \
@@ -700,8 +701,8 @@ rpkg:
cp -rf lib/libmklml_intel.so R-package/inst/libs; \
fi
 
-   if [ -e "lib/libtvm_runtime.so" ]; then \
-   cp -rf lib/libtvm_runtime.so R-package/inst/libs; \
+   if [ -e "lib/libtvm.so" ]; then \
+   cp -rf lib/libtvm.so R-package/inst/libs; \
fi
 
mkdir -p R-package/inst/include
diff --git a/amalgamation/Makefile b/amalgamation/Makefile
index 701c1f1..f45ebfc 100644
--- a/amalgamation/Makefile
+++ b/amal

[incubator-mxnet] 02/03: [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)

2019-10-12 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 4358f79fc89bb191eccbc6c04250678edd302947
Author: Junru Shao 
AuthorDate: Wed Oct 9 23:01:24 2019 -0700

[IR-Bridge] Support attrs for operators: convolution, batch norm, relu 
(#16351)

* Rebased

* Trigger CI

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...
---
 Makefile   |   4 +-
 src/imperative/cached_op.cc|  14 +-
 src/v3/include/bridge/legacy_nnvm.h|  64 +++
 src/v3/include/ir.h| 188 +
 src/v3/include/op/attrs/nn.h   |  71 
 src/v3/src/bridge/legacy_nnvm/attrs.cc | 120 +
 .../legacy_nnvm/ir.cc} | 109 ++--
 src/v3/src/op/attrs.cc |  40 +
 tests/python/unittest/test_numpy_op.py |   7 +-
 9 files changed, 560 insertions(+), 57 deletions(-)

diff --git a/Makefile b/Makefile
index b18edf0..3a675cd 100644
--- a/Makefile
+++ b/Makefile
@@ -462,7 +462,7 @@ endif
 
 all: lib/libmxnet.a lib/libmxnet.so $(BIN) extra-packages sample_lib
 
-SRC = $(wildcard src/*/*/*/*.cc src/*/*/*.cc src/*/*.cc src/*.cc)
+SRC = $(wildcard src/*/*/*/*/*/*.cc src/*/*/*/*/*.cc src/*/*/*/*.cc 
src/*/*/*.cc src/*/*.cc src/*.cc)
 OBJ = $(patsubst %.cc, build/%.o, $(SRC))
 CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
@@ -795,6 +795,8 @@ clean_all: clean
 -include build/*/*.d
 -include build/*/*/*.d
 -include build/*/*/*/*.d
+-include build/*/*/*/*/*.d
+-include build/*/*/*/*/*/*.d
 ifneq ($(EXTRA_OPERATORS),)
-include $(patsubst %, %/*.d, $(EXTRA_OPERATORS)) $(patsubst %, 
%/*/*.d, $(EXTRA_OPERATORS))
 endif
diff --git a/src/imperative/cached_op.cc b/src/imperative/cached_op.cc
index 14e9527..5180c7f 100644
--- a/src/imperative/cached_op.cc
+++ b/src/imperative/cached_op.cc
@@ -25,18 +25,18 @@
 #include "../operator/operator_common.h"
 #include "../operator/subgraph/common.h"
 
-#if MXNET_USE_TVM_OP
-#ifndef MXNET_AMALGAMATION
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
 #include 
 namespace mxnet {
 namespace v3 {
-namespace nnvm_relay_bridge {
+namespace bridge {
+namespace legacy_nnvm {
 tvm::NodeRef NNVMToRelay(const nnvm::Graph &g);
-}  // namespace nnvm_relay_bridge
+}  // namespace legacy_nnvm
+}  // namespace bridge
 }  // namespace v3
 }  // namespace mxnet
-#endif  // MXNET_AMALGAMATION
-#endif  // MXNET_USE_TVM_OP
+#endif
 
 namespace mxnet {
 
@@ -325,7 +325,7 @@ bool CachedOp::SetForwardGraph(
   CHECK_EQ(inputs.size(), num_inputs());
   nnvm::Graph& g = info->fwd_graph;
 #if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
-  v3::nnvm_relay_bridge::NNVMToRelay(g);
+  v3::bridge::legacy_nnvm::NNVMToRelay(g);
 #endif  // MXNET_USE_TVM_OP && !define MXNET_AMALGAMATION
   ShapeVector shape_inputs;
   DTypeVector dtype_inputs;
diff --git a/src/v3/include/bridge/legacy_nnvm.h 
b/src/v3/include/bridge/legacy_nnvm.h
new file mode 100644
index 000..e2c99a5
--- /dev/null
+++ b/src/v3/include/bridge/legacy_nnvm.h
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file legacy_nnvm.h
+ * \author Junru Shao
+ */
+#pragma once
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+#include 
+
+#include "../ir.h"
+
+namespace nnvm {
+class Op;
+class Graph;
+}  // namespace nnvm
+
+namespace mxnet {
+namespace v3 {
+namespace bridge {
+namespace legacy_nnvm {
+
+class NNVMCapsuleNode final : public ir::Node {
+ public:
+  nnvm::NodeAttrs attrs;
+  void VisitAttrs(tvm::AttrVisitor *v) final {}
+  static constexpr const char *_type_key = "mxnet.v3.bridge.NNVMCa

[incubator-mxnet] branch ir-patch updated (b846a29 -> baeeb22)

2019-10-12 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


 discard b846a29  [IR-Bridge] Operator Attributes (#16421)
 discard 5361b9b  [IR-Bridge] Support attrs for operators: convolution, batch 
norm, relu (#16351)
 discard 8c22d66  [IR-Patch] IR Bridge (#16290)
 add 4dee4ee  Fix mkldnn reshape (#16455)
 add 1e8cc90  [BUGFIX] Minor type issues in Squeeze (#16448)
 new 3163642  [IR-Patch] IR Bridge (#16290)
 new 4358f79  [IR-Bridge] Support attrs for operators: convolution, batch 
norm, relu (#16351)
 new baeeb22  [IR-Bridge] Operator Attributes (#16421)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (b846a29)
\
 N -- N -- N   refs/heads/ir-patch (baeeb22)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 src/operator/nn/mkldnn/mkldnn_base-inl.h   |   1 -
 src/operator/nn/mkldnn/mkldnn_base.cc  |   6 +-
 src/operator/nn/mkldnn/mkldnn_flatten-inl.h|  48 -
 src/operator/nn/mkldnn/mkldnn_flatten.cc   |  79 --
 src/operator/nn/mkldnn/mkldnn_ops-inl.h|   5 -
 src/operator/nn/mkldnn/mkldnn_reshape-inl.h|  27 ++---
 src/operator/nn/mkldnn/mkldnn_reshape.cc   | 118 +++--
 src/operator/numpy/np_matrix_op.cc |   2 +-
 .../mkldnn/mkldnn_quantized_flatten.cc |   4 +-
 src/operator/tensor/matrix_op.cc   |  44 +++-
 tests/python/unittest/test_numpy_op.py |  14 +--
 11 files changed, 78 insertions(+), 270 deletions(-)
 delete mode 100644 src/operator/nn/mkldnn/mkldnn_flatten-inl.h
 delete mode 100644 src/operator/nn/mkldnn/mkldnn_flatten.cc



[incubator-mxnet] 03/03: [IR-Bridge] Operator Attributes (#16421)

2019-10-11 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit b846a299f15ffebd58d17fbe7c17a24606d280b6
Author: Junru Shao 
AuthorDate: Thu Oct 10 15:27:57 2019 -0700

[IR-Bridge] Operator Attributes (#16421)

* [IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py

* Attributes

* [IR-Bridge] Support attrs for operators: convolution, batch norm, relu 
(#16351)

* Rebased

* Trigger CI

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* restore?

* style
---
 src/v3/include/op/attrs/legacy_nnvm_attrs.h | 2861 +++
 src/v3/src/op/legacy_nnvm_attrs.cc  |  406 
 2 files changed, 3267 insertions(+)

diff --git a/src/v3/include/op/attrs/legacy_nnvm_attrs.h 
b/src/v3/include/op/attrs/legacy_nnvm_attrs.h
new file mode 100644
index 000..ae942cf
--- /dev/null
+++ b/src/v3/include/op/attrs/legacy_nnvm_attrs.h
@@ -0,0 +1,2861 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file legacy_nnvm_attrs.h
+ * \author Junru Shao
+ */
+#pragma once
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+#include 
+
+#include "../../ir.h"
+
+namespace mxnet {
+namespace v3 {
+namespace op {
+namespace attrs {
+// _copyto
+using LegacyCopytoAttrs = ir::Attrs;
+// all_finite
+class LegacyAllFiniteAttrs : public ir::AttrsNode<LegacyAllFiniteAttrs> {
+ public:
+  bool init_output;
+
+  MX_V3_DECLARE_ATTRS(LegacyAllFiniteAttrs, 
"mxnet.v3.attrs.LegacyAllFiniteAttrs") {
+MX_V3_ATTR_FIELD(init_output);
+  }
+};
+// _npi_deg2rad
+using LegacyNpiDeg2radAttrs = ir::Attrs;
+// _npi_rad2deg
+using LegacyNpiRad2degAttrs = ir::Attrs;
+// IdentityAttachKLSparseReg
+class LegacyIdentityAttachKLSparseRegAttrs
+    : public ir::AttrsNode<LegacyIdentityAttachKLSparseRegAttrs> {
+ public:
+  double sparseness_target;
+  double penalty;
+  double momentum;
+
+  MX_V3_DECLARE_ATTRS(LegacyIdentityAttachKLSparseRegAttrs,
+  "mxnet.v3.attrs.LegacyIdentityAttachKLSparseRegAttrs") {
+MX_V3_ATTR_FIELD(sparseness_target);
+MX_V3_ATTR_FIELD(penalty);
+MX_V3_ATTR_FIELD(momentum);
+  }
+};
+// LeakyReLU
+class LegacyLeakyReLUAttrs : public ir::AttrsNode<LegacyLeakyReLUAttrs> {
+ public:
+  std::string act_type;
+  double slope;
+  double lower_bound;
+  double upper_bound;
+
+  MX_V3_DECLARE_ATTRS(LegacyLeakyReLUAttrs, 
"mxnet.v3.attrs.LegacyLeakyReLUAttrs") {
+MX_V3_ATTR_FIELD(act_type);
+MX_V3_ATTR_FIELD(slope);
+MX_V3_ATTR_FIELD(lower_bound);
+MX_V3_ATTR_FIELD(upper_bound);
+  }
+};
+// softmax_cross_entropy
+using LegacySoftmaxCrossEntropyAttrs = ir::Attrs;
+// Activation
+class LegacyActivationAttrs : public ir::AttrsNode<LegacyActivationAttrs> {
+ public:
+  std::string act_type;
+
+  MX_V3_DECLARE_ATTRS(LegacyActivationAttrs, 
"mxnet.v3.attrs.LegacyActivationAttrs") {
+MX_V3_ATTR_FIELD(act_type);
+  }
+};
+// BatchNorm
+class LegacyBatchNormAttrs : public ir::AttrsNode<LegacyBatchNormAttrs> {
+ public:
+  double eps;
+  double momentum;
+  bool fix_gamma;
+  bool use_global_stats;
+  bool output_mean_var;
+  int axis;
+  bool cudnn_off;
+  double min_calib_range;
+  double max_calib_range;
+
+  MX_V3_DECLARE_ATTRS(LegacyBatchNormAttrs, 
"mxnet.v3.attrs.LegacyBatchNormAttrs") {
+MX_V3_ATTR_FIELD(eps);
+MX_V3_ATTR_FIELD(momentum);
+MX_V3_ATTR_FIELD(fix_gamma);
+MX_V3_ATTR_FIELD(use_global_stats);
+MX_V3_A

[incubator-mxnet] 02/03: [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)

2019-10-11 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 5361b9bbe83dcc6a0da580c39e405c004abf0b62
Author: Junru Shao 
AuthorDate: Wed Oct 9 23:01:24 2019 -0700

[IR-Bridge] Support attrs for operators: convolution, batch norm, relu 
(#16351)

* Rebased

* Trigger CI

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...
---
 Makefile   |   4 +-
 src/imperative/cached_op.cc|  14 +-
 src/v3/include/bridge/legacy_nnvm.h|  64 +++
 src/v3/include/ir.h| 188 +
 src/v3/include/op/attrs/nn.h   |  71 
 src/v3/src/bridge/legacy_nnvm/attrs.cc | 120 +
 .../legacy_nnvm/ir.cc} | 109 ++--
 src/v3/src/op/attrs.cc |  40 +
 tests/python/unittest/test_numpy_op.py |   9 +-
 9 files changed, 561 insertions(+), 58 deletions(-)

diff --git a/Makefile b/Makefile
index b18edf0..3a675cd 100644
--- a/Makefile
+++ b/Makefile
@@ -462,7 +462,7 @@ endif
 
 all: lib/libmxnet.a lib/libmxnet.so $(BIN) extra-packages sample_lib
 
-SRC = $(wildcard src/*/*/*/*.cc src/*/*/*.cc src/*/*.cc src/*.cc)
+SRC = $(wildcard src/*/*/*/*/*/*.cc src/*/*/*/*/*.cc src/*/*/*/*.cc 
src/*/*/*.cc src/*/*.cc src/*.cc)
 OBJ = $(patsubst %.cc, build/%.o, $(SRC))
 CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
@@ -795,6 +795,8 @@ clean_all: clean
 -include build/*/*.d
 -include build/*/*/*.d
 -include build/*/*/*/*.d
+-include build/*/*/*/*/*.d
+-include build/*/*/*/*/*/*.d
 ifneq ($(EXTRA_OPERATORS),)
-include $(patsubst %, %/*.d, $(EXTRA_OPERATORS)) $(patsubst %, 
%/*/*.d, $(EXTRA_OPERATORS))
 endif
diff --git a/src/imperative/cached_op.cc b/src/imperative/cached_op.cc
index 14e9527..5180c7f 100644
--- a/src/imperative/cached_op.cc
+++ b/src/imperative/cached_op.cc
@@ -25,18 +25,18 @@
 #include "../operator/operator_common.h"
 #include "../operator/subgraph/common.h"
 
-#if MXNET_USE_TVM_OP
-#ifndef MXNET_AMALGAMATION
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
 #include 
 namespace mxnet {
 namespace v3 {
-namespace nnvm_relay_bridge {
+namespace bridge {
+namespace legacy_nnvm {
 tvm::NodeRef NNVMToRelay(const nnvm::Graph &g);
-}  // namespace nnvm_relay_bridge
+}  // namespace legacy_nnvm
+}  // namespace bridge
 }  // namespace v3
 }  // namespace mxnet
-#endif  // MXNET_AMALGAMATION
-#endif  // MXNET_USE_TVM_OP
+#endif
 
 namespace mxnet {
 
@@ -325,7 +325,7 @@ bool CachedOp::SetForwardGraph(
   CHECK_EQ(inputs.size(), num_inputs());
   nnvm::Graph& g = info->fwd_graph;
 #if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
-  v3::nnvm_relay_bridge::NNVMToRelay(g);
+  v3::bridge::legacy_nnvm::NNVMToRelay(g);
 #endif  // MXNET_USE_TVM_OP && !define MXNET_AMALGAMATION
   ShapeVector shape_inputs;
   DTypeVector dtype_inputs;
diff --git a/src/v3/include/bridge/legacy_nnvm.h b/src/v3/include/bridge/legacy_nnvm.h
new file mode 100644
index 0000000..e2c99a5
--- /dev/null
+++ b/src/v3/include/bridge/legacy_nnvm.h
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file legacy_nnvm.h
+ * \author Junru Shao
+ */
+#pragma once
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+#include 
+
+#include "../ir.h"
+
+namespace nnvm {
+class Op;
+class Graph;
+}  // namespace nnvm
+
+namespace mxnet {
+namespace v3 {
+namespace bridge {
+namespace legacy_nnvm {
+
+class NNVMCapsuleNode final : public ir::Node {
+ public:
+  nnvm::NodeAttrs attrs;
+  void VisitAttrs(tvm::AttrVisitor *v) final {}
+  static constexpr const char *_type_key = "mxnet.v3.bridge.NNVMCa

[incubator-mxnet] branch ir-patch updated (62b9f31 -> b846a29)

2019-10-11 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


 discard 62b9f31  [IR-Bridge] Operator Attributes (#16421)
 omit d34c82e  [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)
omit d4e4e80  [IR-Patch] IR Bridge (#16290)
 add 9dc0ab8  global numpy shape flag (#16335)
 add cfe9e50  Skipping installing nightly test (#16418)
 add a2018ba  cuDNN non-persistant bidirectional RNN dgrad sync fix (#16391)
 add 56e1bef  Adds PyPI CD Pipeline (#16190)
 add 88521ff  upgrade the pytest version (#16429)
 add 6ce323f  [DOC] fix installation selector wrong history (#16381)
 add 1ab4c95  New ops for RCNN + old ops improvements for RCNN (#16215)
 add e484f72  Beta build (#16411)
 add 243ade9  [WIP] Improving Python Docs API (#16392)
 add cf61364  Revert "add mkl installation temp fix (#16304)" (#16369)
 add d8193c6  Update add_op_in_backend.md (#16403)
 add 7f5e687  numpy-compatible histogram (#16266)
 add ca30ba8  Pseudo 2D transpose kernel (#16229)
 add d2d76dc  increase docker cache timeout (#16430)
 new 8c22d66  [IR-Patch] IR Bridge (#16290)
 new 5361b9b  [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)
 new b846a29  [IR-Bridge] Operator Attributes (#16421)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (62b9f31)
\
 N -- N -- N   refs/heads/ir-patch (b846a29)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
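The "omit"/"discard" distinction described above can be reproduced locally. A small sketch (throwaway repo path and hypothetical commit names; assumes git on PATH) of a history rewrite where an undone revision O stays reachable, i.e. is "omitted" rather than "discarded", because another reference still points at it:

```shell
rm -rf /tmp/fp_demo && git init -q /tmp/fp_demo && cd /tmp/fp_demo
g() { git -c user.email=ci@example.com -c user.name=ci "$@"; }
g commit -q --allow-empty -m "B"   # common base
g commit -q --allow-empty -m "O"   # revision that will be undone
git branch other-ref               # a second reference keeps O alive
git reset -q --hard HEAD~1         # rewrite this branch back to B...
g commit -q --allow-empty -m "N"   # ...and add the replacement revision
git log --format=%s                # this branch now reads: N, B
git log --format=%s other-ref      # O is omitted, not discarded: O, B
```

Only when no reference (branch, tag, or reflog entry) reaches a revision does it become a "discard" candidate for garbage collection.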


Summary of changes:
 cd/Jenkinsfile_cd_pipeline |   4 +
 cd/Jenkinsfile_utils.groovy|  73 
 cd/README.md   |   8 +
 cd/python/pypi/Jenkins_pipeline.groovy |  78 
 cd/python/pypi/README.md   |  43 ++
 cd/python/pypi/pypi_package.sh |  60 +++
 cd/python/pypi/pypi_publish.py |  87 
 ci/docker/install/ubuntu_mkl.sh|   2 +-
 ci/docker/install/ubuntu_onnx.sh   |   4 +-
 ci/docker/runtime_functions.sh |  48 ++-
 ci/docker_cache.py |   2 +-
 ci/jenkins/Jenkins_steps.groovy|  43 +-
 ...enkinsfile_website => Jenkinsfile_website_beta} |  23 +-
 docs/python_docs/README.md | 116 +-
 docs/python_docs/_static/autodoc.js|  49 +++
 docs/python_docs/_static/mxnet.css |   7 +
 docs/python_docs/environment.yml   |   1 +
 docs/python_docs/python/Makefile   |   3 +-
 docs/python_docs/python/README.md  | 130 --
 docs/python_docs/python/api/advanced/index.rst |  74 
 .../python/api/advanced/mxnet.executor_manager.rst |  38 --
 .../python/api/advanced/mxnet.kvstore_server.rst   |  36 --
 .../python/api/advanced/mxnet.test_utils.rst   |  91 
 .../index.rst} |  10 +-
 .../autograd/index.rst}|  10 +-
 docs/python_docs/python/api/contrib/index.rst  |  88 
 .../io/index.rst}  |  10 +-
 .../ndarray/index.rst} |  10 +-
 .../mxnet.random.rst => contrib/onnx/index.rst}|  12 +-
 .../quantization/index.rst}|  10 +-
 .../symbol/index.rst}  |  10 +-
 .../tensorboard/index.rst} |  10 +-
 .../tensorrt/index.rst}|  10 +-
 .../mxnet.random.rst => contrib/text/index.rst}|  11 +-
 .../python_docs/python/api/gluon-related/index.rst | 111 -
 .../python/api/gluon-related/mxnet.autograd.rst|  38 --
 .../python/api/gluon-related/mxnet.image.rst   |  99 -
 .../python/api/gluon-related/mxnet.initializer.rst |  58 ---
 .../python/api/gluon-related/mxnet.io.rst  |  52 ---
 .../api/gluon-related/mxnet.kvstore.KVStore.rst|  61 ---
 .../api/gluon-related/mxnet.lr_scheduler.rst   |  31 --
 .../python/api/gluon-re

[incubator-mxnet] 01/03: [IR-Patch] IR Bridge (#16290)

2019-10-11 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 8c22d66ec4ae1a52fd6364b5a49c62b76ba9a2b3
Author: Junru Shao 
AuthorDate: Mon Sep 30 12:11:35 2019 -0700

[IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py
---
 3rdparty/tvm   |   2 +-
 CMakeLists.txt |   2 +-
 Makefile   |  17 +-
 amalgamation/Makefile  |   4 +-
 amalgamation/amalgamation.py   |   4 +-
 ci/jenkins/Jenkins_steps.groovy|  20 +--
 .../assembly/src/main/assembly/assembly.xml|   2 +-
 .../apache/mxnet/util/NativeLibraryLoader.scala|   2 +-
 src/imperative/cached_op.cc|  16 +-
 src/v3/src/nnvm_relay_bridge.cc| 182 +
 tests/nightly/JenkinsfileForBinaries   |   4 +-
 .../JenkinsfileForMBCC |   2 +-
 12 files changed, 228 insertions(+), 29 deletions(-)

diff --git a/3rdparty/tvm b/3rdparty/tvm
index afd4b3e..18188f4 160000
--- a/3rdparty/tvm
+++ b/3rdparty/tvm
@@ -1 +1 @@
-Subproject commit afd4b3e4450984358e9d79a7e8e578483cb7b017
+Subproject commit 18188f4ba3f53cc1dab765b8a0d932d21db0ae8a
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 5045bba..c14e169 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -744,7 +744,7 @@ endif()
 
 if(USE_TVM_OP)
   add_definitions(-DMXNET_USE_TVM_OP=1)
-  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm_runtime.so)
+  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm.so)
   include(cmake/BuildTVM.cmake)
   add_subdirectory("3rdparty/tvm")
 
diff --git a/Makefile b/Makefile
index 0a1e355..b18edf0 100644
--- a/Makefile
+++ b/Makefile
@@ -468,9 +468,9 @@ CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
 
 ifeq ($(USE_TVM_OP), 1)
-LIB_DEP += lib/libtvm_runtime.so lib/libtvmop.so
+LIB_DEP += lib/libtvm.so lib/libtvmop.so
 CFLAGS += -I$(TVM_PATH)/include -DMXNET_USE_TVM_OP=1
-LDFLAGS += -L$(ROOTDIR)/lib -ltvm_runtime -Wl,-rpath,'$${ORIGIN}'
+LDFLAGS += -L$(ROOTDIR)/lib -ltvm -Wl,-rpath,'$${ORIGIN}'
 
 TVM_USE_CUDA := OFF
 ifeq ($(USE_CUDA), 1)
@@ -618,15 +618,16 @@ $(DMLC_CORE)/libdmlc.a: DMLCCORE
 DMLCCORE:
+ cd $(DMLC_CORE); $(MAKE) libdmlc.a USE_SSE=$(USE_SSE) config=$(ROOTDIR)/$(config); cd $(ROOTDIR)
 
-lib/libtvm_runtime.so:
+lib/libtvm.so:
echo "Compile TVM"
[ -e $(LLVM_PATH)/bin/llvm-config ] || sh $(ROOTDIR)/contrib/tvmop/prepare_tvm.sh; \
cd $(TVM_PATH)/build; \
-   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config" \
+   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config --ignore-libllvm" -DHIDE_PRIVATE_SYMBOLS=ON \
+   -DCMAKE_SHARED_LINKER_FLAGS="-Wl,--exclude-libs,ALL" \
  -DUSE_SORT=OFF -DUSE_CUDA=$(TVM_USE_CUDA) -DUSE_CUDNN=OFF ..; \
$(MAKE) VERBOSE=1; \
mkdir -p $(ROOTDIR)/lib; \
-   cp $(TVM_PATH)/build/libtvm_runtime.so $(ROOTDIR)/lib/libtvm_runtime.so; \
+   cp $(TVM_PATH)/build/libtvm.so $(ROOTDIR)/lib/libtvm.so; \
ls $(ROOTDIR)/lib; \
cd $(ROOTDIR)
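The new `-DHIDE_PRIVATE_SYMBOLS=ON` and `-DCMAKE_SHARED_LINKER_FLAGS="-Wl,--exclude-libs,ALL"` options in the recipe above shrink libtvm.so's exported symbol table to avoid clashes with other libraries in the process. A rough analogue (throwaway /tmp path and hypothetical function names; assumes `cc` and `nm` on a Linux host) of what symbol hiding does to a shared object:

```shell
mkdir -p /tmp/vis_demo && cd /tmp/vis_demo
cat > lib.c <<'EOF'
__attribute__((visibility("hidden")))
int internal_helper(void) { return 41; }                 /* not exported */
int public_api(void) { return internal_helper() + 1; }   /* exported */
EOF
cc -shared -fPIC -o libdemo.so lib.c
nm -D --defined-only libdemo.so | grep public_api                  # present
nm -D --defined-only libdemo.so | grep internal_helper || echo hidden
```

`--exclude-libs,ALL` extends the same idea to symbols pulled in from static archives at link time.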
 
@@ -634,7 +635,7 @@ TVM_OP_COMPILE_OPTIONS = -o $(ROOTDIR)/lib/libtvmop.so
 ifneq ($(CUDA_ARCH),)
TVM_OP_COMPILE_OPTIONS += --cuda-arch "$(CUDA_ARCH)"
 endif
-lib/libtvmop.so: lib/libtvm_runtime.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
+lib/libtvmop.so: lib/libtvm.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
echo "Compile TVM operators"

PYTHONPATH=$(TVM_PATH)/python:$(TVM_PATH)/topi/python:$(ROOTDIR)/contrib \
LD_LIBRARY_PATH=$(ROOTDIR)/lib \
@@ -700,8 +701,8 @@ rpkg:
cp -rf lib/libmklml_intel.so R-package/inst/libs; \
fi
 
-   if [ -e "lib/libtvm_runtime.so" ]; then \
-   cp -rf lib/libtvm_runtime.so R-package/inst/libs; \
+   if [ -e "lib/libtvm.so" ]; then \
+   cp -rf lib/libtvm.so R-package/inst/libs; \
fi
 
mkdir -p R-package/inst/include
diff --git a/amalgamation/Makefile b/amalgamation/Makefile
index 701c1f1..f45ebfc 100644
--- a/amalgamation/Makefile
+++ b/amal

[incubator-mxnet] 02/02: [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)

2019-10-10 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit d34c82e04960a7a30b82eba3d8b19d2259a65db5
Author: Junru Shao 
AuthorDate: Wed Oct 9 23:01:24 2019 -0700

    [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)

* Rebased

* Trigger CI

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...

* ...

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI

* ...

* ...
---
 Makefile   |   4 +-
 src/imperative/cached_op.cc|  14 +-
 src/v3/include/bridge/legacy_nnvm.h|  64 +++
 src/v3/include/ir.h| 188 +
 src/v3/include/op/attrs/nn.h   |  71 
 src/v3/src/bridge/legacy_nnvm/attrs.cc | 120 +
 .../legacy_nnvm/ir.cc} | 109 ++--
 src/v3/src/op/attrs.cc |  40 +
 tests/python/unittest/test_numpy_op.py |   9 +-
 9 files changed, 561 insertions(+), 58 deletions(-)

diff --git a/Makefile b/Makefile
index b18edf0..3a675cd 100644
--- a/Makefile
+++ b/Makefile
@@ -462,7 +462,7 @@ endif
 
 all: lib/libmxnet.a lib/libmxnet.so $(BIN) extra-packages sample_lib
 
-SRC = $(wildcard src/*/*/*/*.cc src/*/*/*.cc src/*/*.cc src/*.cc)
+SRC = $(wildcard src/*/*/*/*/*/*.cc src/*/*/*/*/*.cc src/*/*/*/*.cc src/*/*/*.cc src/*/*.cc src/*.cc)
 OBJ = $(patsubst %.cc, build/%.o, $(SRC))
 CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
@@ -795,6 +795,8 @@ clean_all: clean
 -include build/*/*.d
 -include build/*/*/*.d
 -include build/*/*/*/*.d
+-include build/*/*/*/*/*.d
+-include build/*/*/*/*/*/*.d
 ifneq ($(EXTRA_OPERATORS),)
 -include $(patsubst %, %/*.d, $(EXTRA_OPERATORS)) $(patsubst %, %/*/*.d, $(EXTRA_OPERATORS))
 endif
diff --git a/src/imperative/cached_op.cc b/src/imperative/cached_op.cc
index 14e9527..5180c7f 100644
--- a/src/imperative/cached_op.cc
+++ b/src/imperative/cached_op.cc
@@ -25,18 +25,18 @@
 #include "../operator/operator_common.h"
 #include "../operator/subgraph/common.h"
 
-#if MXNET_USE_TVM_OP
-#ifndef MXNET_AMALGAMATION
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
 #include 
 namespace mxnet {
 namespace v3 {
-namespace nnvm_relay_bridge {
+namespace bridge {
+namespace legacy_nnvm {
 tvm::NodeRef NNVMToRelay(const nnvm::Graph );
-}  // namespace nnvm_relay_bridge
+}  // namespace legacy_nnvm
+}  // namespace bridge
 }  // namespace v3
 }  // namespace mxnet
-#endif  // MXNET_AMALGAMATION
-#endif  // MXNET_USE_TVM_OP
+#endif
 
 namespace mxnet {
 
@@ -325,7 +325,7 @@ bool CachedOp::SetForwardGraph(
   CHECK_EQ(inputs.size(), num_inputs());
   nnvm::Graph& g = info->fwd_graph;
 #if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
-  v3::nnvm_relay_bridge::NNVMToRelay(g);
+  v3::bridge::legacy_nnvm::NNVMToRelay(g);
 #endif  // MXNET_USE_TVM_OP && !define MXNET_AMALGAMATION
   ShapeVector shape_inputs;
   DTypeVector dtype_inputs;
diff --git a/src/v3/include/bridge/legacy_nnvm.h b/src/v3/include/bridge/legacy_nnvm.h
new file mode 100644
index 0000000..e2c99a5
--- /dev/null
+++ b/src/v3/include/bridge/legacy_nnvm.h
@@ -0,0 +1,64 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file legacy_nnvm.h
+ * \author Junru Shao
+ */
+#pragma once
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+#include 
+
+#include "../ir.h"
+
+namespace nnvm {
+class Op;
+class Graph;
+}  // namespace nnvm
+
+namespace mxnet {
+namespace v3 {
+namespace bridge {
+namespace legacy_nnvm {
+
+class NNVMCapsuleNode final : public ir::Node {
+ public:
+  nnvm::NodeAttrs attrs;
+  void VisitAttrs(tvm::AttrVisitor *v) final {}
+  static constexpr const char *_type_key = "mxnet.v3.bridge.NNVMCa

[incubator-mxnet] 01/02: [IR-Patch] IR Bridge (#16290)

2019-10-10 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit d4e4e803873e349f42d5898bda23ea0f2d845a36
Author: Junru Shao 
AuthorDate: Mon Sep 30 12:11:35 2019 -0700

[IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py
---
 3rdparty/tvm   |   2 +-
 CMakeLists.txt |   2 +-
 Makefile   |  17 +-
 amalgamation/Makefile  |   4 +-
 amalgamation/amalgamation.py   |   4 +-
 ci/jenkins/Jenkins_steps.groovy|  20 +--
 .../assembly/src/main/assembly/assembly.xml|   2 +-
 .../apache/mxnet/util/NativeLibraryLoader.scala|   2 +-
 src/imperative/cached_op.cc|  16 +-
 src/v3/src/nnvm_relay_bridge.cc| 182 +
 tests/nightly/JenkinsfileForBinaries   |   4 +-
 .../JenkinsfileForMBCC |   2 +-
 12 files changed, 228 insertions(+), 29 deletions(-)

diff --git a/3rdparty/tvm b/3rdparty/tvm
index afd4b3e..18188f4 160000
--- a/3rdparty/tvm
+++ b/3rdparty/tvm
@@ -1 +1 @@
-Subproject commit afd4b3e4450984358e9d79a7e8e578483cb7b017
+Subproject commit 18188f4ba3f53cc1dab765b8a0d932d21db0ae8a
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 5045bba..c14e169 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -744,7 +744,7 @@ endif()
 
 if(USE_TVM_OP)
   add_definitions(-DMXNET_USE_TVM_OP=1)
-  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm_runtime.so)
+  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm.so)
   include(cmake/BuildTVM.cmake)
   add_subdirectory("3rdparty/tvm")
 
diff --git a/Makefile b/Makefile
index 0a1e355..b18edf0 100644
--- a/Makefile
+++ b/Makefile
@@ -468,9 +468,9 @@ CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
 
 ifeq ($(USE_TVM_OP), 1)
-LIB_DEP += lib/libtvm_runtime.so lib/libtvmop.so
+LIB_DEP += lib/libtvm.so lib/libtvmop.so
 CFLAGS += -I$(TVM_PATH)/include -DMXNET_USE_TVM_OP=1
-LDFLAGS += -L$(ROOTDIR)/lib -ltvm_runtime -Wl,-rpath,'$${ORIGIN}'
+LDFLAGS += -L$(ROOTDIR)/lib -ltvm -Wl,-rpath,'$${ORIGIN}'
 
 TVM_USE_CUDA := OFF
 ifeq ($(USE_CUDA), 1)
@@ -618,15 +618,16 @@ $(DMLC_CORE)/libdmlc.a: DMLCCORE
 DMLCCORE:
+ cd $(DMLC_CORE); $(MAKE) libdmlc.a USE_SSE=$(USE_SSE) config=$(ROOTDIR)/$(config); cd $(ROOTDIR)
 
-lib/libtvm_runtime.so:
+lib/libtvm.so:
echo "Compile TVM"
[ -e $(LLVM_PATH)/bin/llvm-config ] || sh $(ROOTDIR)/contrib/tvmop/prepare_tvm.sh; \
cd $(TVM_PATH)/build; \
-   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config" \
+   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config --ignore-libllvm" -DHIDE_PRIVATE_SYMBOLS=ON \
+   -DCMAKE_SHARED_LINKER_FLAGS="-Wl,--exclude-libs,ALL" \
  -DUSE_SORT=OFF -DUSE_CUDA=$(TVM_USE_CUDA) -DUSE_CUDNN=OFF ..; \
$(MAKE) VERBOSE=1; \
mkdir -p $(ROOTDIR)/lib; \
-   cp $(TVM_PATH)/build/libtvm_runtime.so $(ROOTDIR)/lib/libtvm_runtime.so; \
+   cp $(TVM_PATH)/build/libtvm.so $(ROOTDIR)/lib/libtvm.so; \
ls $(ROOTDIR)/lib; \
cd $(ROOTDIR)
 
@@ -634,7 +635,7 @@ TVM_OP_COMPILE_OPTIONS = -o $(ROOTDIR)/lib/libtvmop.so
 ifneq ($(CUDA_ARCH),)
TVM_OP_COMPILE_OPTIONS += --cuda-arch "$(CUDA_ARCH)"
 endif
-lib/libtvmop.so: lib/libtvm_runtime.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
+lib/libtvmop.so: lib/libtvm.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
echo "Compile TVM operators"

PYTHONPATH=$(TVM_PATH)/python:$(TVM_PATH)/topi/python:$(ROOTDIR)/contrib \
LD_LIBRARY_PATH=$(ROOTDIR)/lib \
@@ -700,8 +701,8 @@ rpkg:
cp -rf lib/libmklml_intel.so R-package/inst/libs; \
fi
 
-   if [ -e "lib/libtvm_runtime.so" ]; then \
-   cp -rf lib/libtvm_runtime.so R-package/inst/libs; \
+   if [ -e "lib/libtvm.so" ]; then \
+   cp -rf lib/libtvm.so R-package/inst/libs; \
fi
 
mkdir -p R-package/inst/include
diff --git a/amalgamation/Makefile b/amalgamation/Makefile
index 701c1f1..f45ebfc 100644
--- a/amalgamation/Makefile
+++ b/amal

[incubator-mxnet] branch ir-patch updated (44cde6a -> d34c82e)

2019-10-10 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


 omit 44cde6a  [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)
omit d3d2b10  [IR-Patch] IR Bridge (#16290)
 add 0bace55  fix choice signature
 add ec766d5  add raise test for shape
 add d5666ed  Round and sign straight-through-estimators C operators. (#16373)
 add 15ea40d  Add boolean ndarray (#15940)
 add 1d0d1e6  Faster Transpose 2D (#16104)
 add 9ff644b  Fix windows flakiness (#16415)
 add a8181dd  [MXNET-1430] julia: implement context.gpu_memory_info (#16324)
 new d4e4e80  [IR-Patch] IR Bridge (#16290)
 new d34c82e  [IR-Bridge] Support attrs for operators: convolution, batch norm, relu (#16351)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (44cde6a)
\
 N -- N -- N   refs/heads/ir-patch (d34c82e)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 3rdparty/mshadow/mshadow/base.h|  62 +++-
 CMakeLists.txt |   6 +-
 Makefile   |   6 +-
 ci/docker/runtime_functions.sh |   4 +-
 contrib/tvmop/__init__.py  |   1 +
 contrib/tvmop/compile.py   |  41 ++-
 .../pycocotools => contrib/tvmop/core}/__init__.py |   2 +-
 contrib/tvmop/core/fromnumeric.py  |  63 
 contrib/tvmop/core/umath.py| 122 
 contrib/tvmop/opdef.py |   6 +-
 contrib/tvmop/utils.py |  16 +-
 include/mxnet/tensor_blob.h|   1 +
 julia/NEWS.md  |   4 +
 julia/src/MXNet.jl |   3 +-
 julia/src/context.jl   |  18 ++
 python/mxnet/_numpy_op_doc.py  | 122 
 python/mxnet/context.py|   2 -
 python/mxnet/ndarray/ndarray.py|   6 +
 python/mxnet/ndarray/numpy/_op.py  | 198 +++-
 python/mxnet/ndarray/numpy/random.py   |   4 +-
 python/mxnet/numpy/multiarray.py   | 316 +++
 python/mxnet/numpy/random.py   |   4 +-
 python/mxnet/numpy/utils.py|   7 +-
 python/mxnet/symbol/numpy/_symbol.py   | 246 ---
 python/mxnet/symbol/numpy/random.py|   5 +-
 python/mxnet/test_utils.py |  17 ++
 src/ndarray/ndarray.cc |   2 +-
 src/ndarray/ndarray_function.cc|   9 +
 src/ndarray/ndarray_function.cu|  10 +-
 src/operator/contrib/boolean_mask.cc   |   7 +-
 src/operator/contrib/boolean_mask.cu   |   4 +-
 src/operator/contrib/stes_op.cc|  84 +
 src/operator/contrib/stes_op.cu|  43 +++
 src/operator/contrib/stes_op.h |  33 ++
 src/operator/mxnet_op.h|  16 +
 src/operator/numpy/np_broadcast_reduce_op.h|  21 +-
 src/operator/numpy/np_broadcast_reduce_op_value.cc |  71 +
 src/operator/numpy/np_elemwise_broadcast_op.cc | 253 ++-
 src/operator/operator_tune.cc  |  23 +-
 .../tensor/elemwise_binary_broadcast_op_logic.cc   |   6 -
 .../tensor/elemwise_binary_scalar_op_logic.cc  |   6 -
 src/operator/tensor/elemwise_unary_op.h|   2 +-
 src/operator/tensor/init_op.h  |  10 +-
 src/operator/tensor/la_op.cc   |   2 -
 src/operator/tensor/la_op.cu   |   2 -
 src/operator/tensor/la_op.h|   7 +-
 src/operator/tensor/matrix_op-inl.h|  52 +++-
 src/operator/tvmop/op_module.cc|  27 +-
 src/operator/tvmop/op_module.h |  18 +-
 tests/python/unittest/test_contrib_stes_op.py  | 137 +

[incubator-mxnet] 01/01: [IR-Patch] IR Bridge (#16290)

2019-10-08 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit d3d2b1064ff65c2c701c7e55dae61de4e1f76218
Author: Junru Shao 
AuthorDate: Mon Sep 30 12:11:35 2019 -0700

[IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py
---
 3rdparty/tvm   |   2 +-
 CMakeLists.txt |   2 +-
 Makefile   |  17 +-
 amalgamation/Makefile  |   4 +-
 amalgamation/amalgamation.py   |   4 +-
 ci/jenkins/Jenkins_steps.groovy|  20 +--
 .../assembly/src/main/assembly/assembly.xml|   2 +-
 .../apache/mxnet/util/NativeLibraryLoader.scala|   2 +-
 src/imperative/cached_op.cc|  16 +-
 src/v3/src/nnvm_relay_bridge.cc| 182 +
 tests/nightly/JenkinsfileForBinaries   |   4 +-
 .../JenkinsfileForMBCC |   2 +-
 12 files changed, 228 insertions(+), 29 deletions(-)

diff --git a/3rdparty/tvm b/3rdparty/tvm
index afd4b3e..18188f4 160000
--- a/3rdparty/tvm
+++ b/3rdparty/tvm
@@ -1 +1 @@
-Subproject commit afd4b3e4450984358e9d79a7e8e578483cb7b017
+Subproject commit 18188f4ba3f53cc1dab765b8a0d932d21db0ae8a
diff --git a/CMakeLists.txt b/CMakeLists.txt
index f441e9b..051dc91 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -744,7 +744,7 @@ endif()
 
 if(USE_TVM_OP)
   add_definitions(-DMXNET_USE_TVM_OP=1)
-  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm_runtime.so)
+  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm.so)
   include(cmake/BuildTVM.cmake)
   add_subdirectory("3rdparty/tvm")
 
diff --git a/Makefile b/Makefile
index b3b188a..bd580ef 100644
--- a/Makefile
+++ b/Makefile
@@ -468,9 +468,9 @@ CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
 
 ifeq ($(USE_TVM_OP), 1)
-LIB_DEP += lib/libtvm_runtime.so lib/libtvmop.so
+LIB_DEP += lib/libtvm.so lib/libtvmop.so
 CFLAGS += -I$(TVM_PATH)/include -DMXNET_USE_TVM_OP=1
-LDFLAGS += -L$(ROOTDIR)/lib -ltvm_runtime -Wl,-rpath,'$${ORIGIN}'
+LDFLAGS += -L$(ROOTDIR)/lib -ltvm -Wl,-rpath,'$${ORIGIN}'
 
 TVM_USE_CUDA := OFF
 ifeq ($(USE_CUDA), 1)
@@ -618,19 +618,20 @@ $(DMLC_CORE)/libdmlc.a: DMLCCORE
 DMLCCORE:
+ cd $(DMLC_CORE); $(MAKE) libdmlc.a USE_SSE=$(USE_SSE) config=$(ROOTDIR)/$(config); cd $(ROOTDIR)
 
-lib/libtvm_runtime.so:
+lib/libtvm.so:
echo "Compile TVM"
[ -e $(LLVM_PATH)/bin/llvm-config ] || sh $(ROOTDIR)/contrib/tvmop/prepare_tvm.sh; \
cd $(TVM_PATH)/build; \
-   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config" \
+   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config --ignore-libllvm" -DHIDE_PRIVATE_SYMBOLS=ON \
+   -DCMAKE_SHARED_LINKER_FLAGS="-Wl,--exclude-libs,ALL" \
  -DUSE_SORT=OFF -DUSE_CUDA=$(TVM_USE_CUDA) -DUSE_CUDNN=OFF ..; \
$(MAKE) VERBOSE=1; \
mkdir -p $(ROOTDIR)/lib; \
-   cp $(TVM_PATH)/build/libtvm_runtime.so $(ROOTDIR)/lib/libtvm_runtime.so; \
+   cp $(TVM_PATH)/build/libtvm.so $(ROOTDIR)/lib/libtvm.so; \
ls $(ROOTDIR)/lib; \
cd $(ROOTDIR)
 
-lib/libtvmop.so: lib/libtvm_runtime.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
+lib/libtvmop.so: lib/libtvm.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
echo "Compile TVM operators"

PYTHONPATH=$(TVM_PATH)/python:$(TVM_PATH)/topi/python:$(ROOTDIR)/contrib \
LD_LIBRARY_PATH=$(ROOTDIR)/lib \
@@ -696,8 +697,8 @@ rpkg:
cp -rf lib/libmklml_intel.so R-package/inst/libs; \
fi
 
-   if [ -e "lib/libtvm_runtime.so" ]; then \
-   cp -rf lib/libtvm_runtime.so R-package/inst/libs; \
+   if [ -e "lib/libtvm.so" ]; then \
+   cp -rf lib/libtvm.so R-package/inst/libs; \
fi
 
mkdir -p R-package/inst/include
diff --git a/amalgamation/Makefile b/amalgamation/Makefile
index 701c1f1..f45ebfc 100644
--- a/amalgamation/Makefile
+++ b/amalgamation/Makefile
@@ -49,7 +49,7 @@ endif
 .PHONY: all clean
 
 DEFS+=-DMSHADOW_USE_CUDA=0 -DMSHADOW_USE_MKL=0 -DMSHADOW_RABIT_PS=0 
-DMSHADOW_DIST_PS=0 -DDMLC_LOG_STACK_TRACE=0

[incubator-mxnet] branch ir-patch updated (38e8828 -> d3d2b10)

2019-10-08 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


omit 38e8828  [IR-Patch] IR Bridge (#16290)
 add 3244a7a  Julia: add API docs back (#16363)
 add b6f3235  Fix nightly scala pipeline (#16362)
 add 09ae7df  remove redundant branch name (#16372)
 add 626fc32  Disable Pylint false error in numpy_op_signature  (#16370)
 add 916fbf2  boolean_mask_assign operator for future boolean indexing (#16361)
 add 8096421  Embedding gradient performance optimization on GPU (#16355)
 add 2c81a71  Change mailing list url in footer to point to instructions about how to subscribe instead (#16384)
 add 2127f75  Add instructions to report a security vulnerability (#16383)
 add 09285c8  Implements ldexp. (#15845)
 add 2df3282  Numpy Operators: Inner, Outer, vdot (#15846)
 add 295fc14  Numpy det and slogdet operators (#15861)
 add 4940ec0  Fix random op signature
 add df4125a  update NEWS.md and README.md (#16385)
 new d3d2b10  [IR-Patch] IR Bridge (#16290)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (38e8828)
\
 N -- N -- N   refs/heads/ir-patch (d3d2b10)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 NEWS.md|  24 ++
 README.md  |   1 +
 ci/Jenkinsfile_utils.groovy|   4 +-
 ci/docker/Dockerfile.publish.ubuntu1604_cpu|   2 +
 ci/docker/Dockerfile.publish.ubuntu1604_gpu|   2 +
 docs/static_site/src/_includes/footer.html |   3 +-
 docs/static_site/src/pages/api/faq/security.md |  17 +
 julia/docs/src/api/ndarray.md  |  18 +-
 julia/docs/src/api/symbolic-node.md|  11 +-
 python/mxnet/_numpy_op_doc.py  | 122 ++
 python/mxnet/initializer.py|   8 +-
 python/mxnet/ndarray/numpy/_op.py  | 203 -
 python/mxnet/ndarray/numpy/random.py   |  18 +-
 python/mxnet/numpy/multiarray.py   | 187 -
 python/mxnet/numpy/random.py   |   8 +-
 python/mxnet/numpy_op_signature.py |   1 -
 python/mxnet/symbol/numpy/_symbol.py   | 182 +++-
 python/mxnet/symbol/numpy/random.py|  13 +-
 src/operator/mshadow_op.h  |  11 +
 src/operator/numpy/np_boolean_mask_assign.cc   | 270 
 src/operator/numpy/np_boolean_mask_assign.cu   | 229 ++
 src/operator/numpy/np_broadcast_reduce_op.h|   9 +
 src/operator/numpy/np_elemwise_broadcast_op.cc |  37 ++
 src/operator/numpy/np_elemwise_broadcast_op.cu |  19 +
 src/operator/operator_tune.cc  |   5 +
 src/operator/random/sample_op.cc   |   2 -
 src/operator/tensor/indexing_op.cu | 233 +++
 src/operator/tensor/la_op.cc   |   2 +
 src/operator/tensor/la_op.cu   |   2 +
 src/operator/tensor/la_op.h|   7 +-
 tests/python/unittest/test_exc_handling.py |  15 +-
 tests/python/unittest/test_numpy_gluon.py  |  15 +-
 tests/python/unittest/test_numpy_op.py | 557 +
 33 files changed, 2095 insertions(+), 142 deletions(-)
 create mode 100644 src/operator/numpy/np_boolean_mask_assign.cc
 create mode 100644 src/operator/numpy/np_boolean_mask_assign.cu



[incubator-mxnet] 01/01: [IR-Patch] IR Bridge (#16290)

2019-10-03 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 38e88283fc37ffe97f6ee91a343c438b89c4982e
Author: Junru Shao 
AuthorDate: Mon Sep 30 12:11:35 2019 -0700

[IR-Patch] IR Bridge (#16290)

* ir converter

Add license

Missed something

lint

lintlintlint

* Restore cryptic part of CachedOp

* Update Makefile

* try again for libtvm.so...

* try again

* try once once again

* let's try to fix julia's issue first

* Remove AsText which is not an exposed symbol

* try to bypass amalgamation

* try again

* boy try this

* blacklist tvm to amalgamation.py
---
 3rdparty/tvm   |   2 +-
 CMakeLists.txt |   2 +-
 Makefile   |  17 +-
 amalgamation/Makefile  |   4 +-
 amalgamation/amalgamation.py   |   4 +-
 ci/jenkins/Jenkins_steps.groovy|  20 +--
 .../assembly/src/main/assembly/assembly.xml|   2 +-
 .../apache/mxnet/util/NativeLibraryLoader.scala|   2 +-
 src/imperative/cached_op.cc|  16 +-
 src/v3/src/nnvm_relay_bridge.cc| 182 +
 tests/nightly/JenkinsfileForBinaries   |   4 +-
 .../JenkinsfileForMBCC |   2 +-
 12 files changed, 228 insertions(+), 29 deletions(-)

diff --git a/3rdparty/tvm b/3rdparty/tvm
index afd4b3e..18188f4 160000
--- a/3rdparty/tvm
+++ b/3rdparty/tvm
@@ -1 +1 @@
-Subproject commit afd4b3e4450984358e9d79a7e8e578483cb7b017
+Subproject commit 18188f4ba3f53cc1dab765b8a0d932d21db0ae8a
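The two-line `3rdparty/tvm` hunk above is a submodule pointer bump: the superproject stores the dependency as a gitlink (mode 160000), i.e. a bare commit id, which is why updating it shows as a one-line `Subproject commit` change. A throwaway-repo sketch of how that pointer is recorded (the repo and path names here are invented for illustration; assumes `git` on PATH):

```shell
set -e
base=$(mktemp -d)

# A plain repo standing in for the dependency (like 3rdparty/tvm).
git init -q "$base/dep"
git -C "$base/dep" -c user.email=t@example.com -c user.name=t \
    commit -q --allow-empty -m "dep v1"

# The superproject (like incubator-mxnet).
git init -q "$base/super"
cd "$base/super"
git config user.email t@example.com
git config user.name t

# Newer git disables the file:// transport for submodules by default,
# so allow it explicitly for this local demo.
git -c protocol.file.allow=always submodule add "$base/dep" 3rdparty/dep
git commit -q -m "add submodule"

# The tree records the dependency as a gitlink: mode 160000 plus a
# commit id -- exactly what the "Subproject commit ..." diff updates.
git ls-tree HEAD 3rdparty/dep
```

Bumping the submodule is then just committing a new gitlink value in the superproject; no dependency source enters the superproject's history.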
diff --git a/CMakeLists.txt b/CMakeLists.txt
index f441e9b..051dc91 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -744,7 +744,7 @@ endif()
 
 if(USE_TVM_OP)
   add_definitions(-DMXNET_USE_TVM_OP=1)
-  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm_runtime.so)
+  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm.so)
   include(cmake/BuildTVM.cmake)
   add_subdirectory("3rdparty/tvm")
 
diff --git a/Makefile b/Makefile
index b3b188a..bd580ef 100644
--- a/Makefile
+++ b/Makefile
@@ -468,9 +468,9 @@ CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
 
 ifeq ($(USE_TVM_OP), 1)
-LIB_DEP += lib/libtvm_runtime.so lib/libtvmop.so
+LIB_DEP += lib/libtvm.so lib/libtvmop.so
 CFLAGS += -I$(TVM_PATH)/include -DMXNET_USE_TVM_OP=1
-LDFLAGS += -L$(ROOTDIR)/lib -ltvm_runtime -Wl,-rpath,'$${ORIGIN}'
+LDFLAGS += -L$(ROOTDIR)/lib -ltvm -Wl,-rpath,'$${ORIGIN}'
 
 TVM_USE_CUDA := OFF
 ifeq ($(USE_CUDA), 1)
@@ -618,19 +618,20 @@ $(DMLC_CORE)/libdmlc.a: DMLCCORE
 DMLCCORE:
 	cd $(DMLC_CORE); $(MAKE) libdmlc.a USE_SSE=$(USE_SSE) config=$(ROOTDIR)/$(config); cd $(ROOTDIR)
 
-lib/libtvm_runtime.so:
+lib/libtvm.so:
echo "Compile TVM"
	[ -e $(LLVM_PATH)/bin/llvm-config ] || sh $(ROOTDIR)/contrib/tvmop/prepare_tvm.sh; \
cd $(TVM_PATH)/build; \
-   cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config" \
+	cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config --ignore-libllvm" -DHIDE_PRIVATE_SYMBOLS=ON \
+   -DCMAKE_SHARED_LINKER_FLAGS="-Wl,--exclude-libs,ALL" \
	-DUSE_SORT=OFF -DUSE_CUDA=$(TVM_USE_CUDA) -DUSE_CUDNN=OFF ..; \
$(MAKE) VERBOSE=1; \
mkdir -p $(ROOTDIR)/lib; \
-	cp $(TVM_PATH)/build/libtvm_runtime.so $(ROOTDIR)/lib/libtvm_runtime.so; \
+   cp $(TVM_PATH)/build/libtvm.so $(ROOTDIR)/lib/libtvm.so; \
ls $(ROOTDIR)/lib; \
cd $(ROOTDIR)
 
-lib/libtvmop.so: lib/libtvm_runtime.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
+lib/libtvmop.so: lib/libtvm.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
echo "Compile TVM operators"

PYTHONPATH=$(TVM_PATH)/python:$(TVM_PATH)/topi/python:$(ROOTDIR)/contrib \
LD_LIBRARY_PATH=$(ROOTDIR)/lib \
@@ -696,8 +697,8 @@ rpkg:
cp -rf lib/libmklml_intel.so R-package/inst/libs; \
fi
 
-   if [ -e "lib/libtvm_runtime.so" ]; then \
-   cp -rf lib/libtvm_runtime.so R-package/inst/libs; \
+   if [ -e "lib/libtvm.so" ]; then \
+   cp -rf lib/libtvm.so R-package/inst/libs; \
fi
 
mkdir -p R-package/inst/include
diff --git a/amalgamation/Makefile b/amalgamation/Makefile
index 701c1f1..f45ebfc 100644
--- a/amalgamation/Makefile
+++ b/amalgamation/Makefile
@@ -49,7 +49,7 @@ endif
 .PHONY: all clean
 
 DEFS+=-DMSHADOW_USE_CUDA=0 -DMSHADOW_USE_MKL=0 -DMSHADOW_RABIT_PS=0 -DMSHADOW_DIST_PS=0 -DDMLC_LOG_STACK_TRACE=0

[incubator-mxnet] branch ir-patch updated (6af2611 -> 38e8828)

2019-10-03 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


omit 6af2611  [IR-Patch] IR Bridge (#16290)
 add 8004a02  Fixing links for website + Fixing search (#16284)
 add dc5470c  Factorize CUDA_KERNEL_LOOP used in CUDA kernels (#16197)
 add 01ca278  [Gluon] Support None argument in HybridBlock (#16280)
 add 943bed5  add mkl installation temp fix (#16304)
 add 3950a47  [MXNET-978] n-th order gradient test support. (#15611)
 add 512d25a  Minor fix in ToTensor documentation. (#16299)
 add ea440c7  [numpy] Cosmetic improvement on mxnet.numpy builtin op signature in documentation (#16305)
 add 3ffd2c2  [MXNET-978] Fully connected, higher order grad (#14779)
 add 66f1656  [MXNET-978] Higher Order Gradient Support `arcsinh`, `arccosh`. (#15530)
 add 810e67c  Add fast implementation of LARS (#16122)
 add 097deff  add 'Release' cmake flag (#16294)
 add c7f3ac9  add code of conduct and conflict resolution (#16343)
 add 6931748  adding redirects so that old website API links surfaced from searches (#16342)
 add 1363b5a  simple typo error in NEWS.md (#16344)
 add e6e2e2e  Fix code block formatting in Why MXNet doc page (#16334)
 add 480b50c  S3 upload artifacts (#16336)
 add 8136d49  fix atol for test_preloaded_multi_sgd (#16356)
 new 38e8828  [IR-Patch] IR Bridge (#16290)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (6af2611)
            \
             N -- N -- N   refs/heads/ir-patch (38e8828)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
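The situation described above can be reproduced in a scratch repository: rewrite a branch to drop an O revision and add an N revision in its place, then confirm that, as the notice says, the omitted commit is not gone. A sketch using plain `git` (the B/O/N commit messages mirror the diagram's labels):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email test@example.com
git config user.name test

# B -- the common base from the diagram.
git commit -q --allow-empty -m "B (common base)"
# O -- a revision that will later be omitted by the rewrite.
git commit -q --allow-empty -m "O (omitted revision)"
old=$(git rev-parse HEAD)

# Rewrite history the way a force push records it: drop O, add N on B.
git reset -q --hard HEAD~1
git commit -q --allow-empty -m "N (new revision)"

# O is no longer reachable from the branch...
if git merge-base --is-ancestor "$old" HEAD; then
  echo "O still on branch"
else
  echo "O omitted from branch"
fi
# ...but the object itself survives in the repository (and stays
# reachable through the reflog for a while), as the notice explains.
git cat-file -e "$old" && echo "O object still present"
```

On the server side a receiver only sees this after someone runs `git push --force`, which is exactly the scenario the automated email is warning about.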


Summary of changes:
 .github/PULL_REQUEST_TEMPLATE.md   |   2 +-
 CODE_OF_CONDUCT.md |  43 ++
 MKLDNN_README.md   |   2 +-
 NEWS.md|  14 +-
 R-package/R/zzz.R  |   2 +-
 R-package/README.md|   2 +-
 R-package/vignettes/MultidimLstm.Rmd   |   2 +-
 README.md  |  30 +-
 benchmark/opperf/README.md |   2 +-
 ci/Jenkinsfile_utils.groovy|  14 +-
 ci/docker/install/ubuntu_mkl.sh|   2 +-
 ci/docker/install/ubuntu_r.sh  |   2 +-
 ci/other/ci_deploy_doc.sh  |   2 +-
 cmake/cmake_options.yml|   2 +-
 contrib/clojure-package/README.md  |   6 +-
 .../examples/pre-trained-models/README.md  |   6 +-
 .../clojure-package/examples/tutorial/README.md|   2 +-
 .../examples/tutorial/src/tutorial/kvstore.clj |   2 +-
 .../examples/tutorial/src/tutorial/module.clj  |   2 +-
 .../examples/tutorial/src/tutorial/ndarray.clj |   2 +-
 .../examples/tutorial/src/tutorial/symbol.clj  |   2 +-
 cpp-package/README.md  |   6 +-
 docs/README.md |   6 +-
 docs/python_docs/python/scripts/conf.py|   4 +-
 .../python/tutorials/deploy/export/index.rst   |  13 +-
 .../python/tutorials/deploy/export/onnx.md | 150 ++
 docs/python_docs/python/tutorials/deploy/index.rst |   6 +-
 .../python/tutorials/deploy/inference/index.rst|  38 +-
 .../tutorials/deploy/inference/wine_detector.md| 405 ++
 .../python/tutorials/deploy/run-on-aws/index.rst   |   8 +-
 .../python/tutorials/extend/custom_layer.md|  18 +-
 .../python/tutorials/extend/customop.md|   2 +-
 docs/python_docs/python/tutorials/extend/index.rst |  12 +-
 .../getting-started/crash-course/5-predict.md  |   2 +-
 .../gluon_from_experiment_to_deployment.md |  22 +-
 .../python/tutorials/getting-started/index.rst |  28 +-
 .../logistic_regression_explained.md   | 257 +
 .../tutorials/getting-started/to-mxnet/index.rst   |   2 +-
 .../tutorials/getting-started/to-mxnet/pytorch.md  |   6 +-
 .../packages/autograd/{autograd.md => index.md}|   4 +-
 .../gluon/{ => blocks}/activations/activations

[incubator-mxnet] branch ir-patch updated (5ed5689 -> 33c7b5c)

2019-09-27 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 5ed5689  numpy operator ravel, derive from reshape (#16016)
 add 7fc1d84  Adds dynamic libmxnet to CD pipeline (#16163)
 add 692f49f  Sequence last fix (#16156)
 add 2a55cd7  fixing test for model compatibility checker (#16159)
 add 3dacabe  Tutorials nighly fix (#16179)
 add b777a69  Add register_op_hook for gluon (#15839)
 add 8cc3443  adding codeowners (#16165)
 add 956cfa3  assert_allclose -> rtol=1e-10 (#16198)
 add f75d093  [Dataset] add shard API (#16175)
 add 2a201ba  Add __array_function__
 add 681d9a7  Improved error mesages
 add 477e6f7  Fix np.choice
 add 479ab46  [MEMORY] retry GPU memory allocation if fragmented (#16194)
 add 995b477  Fix README Build Status (#16183)
 add 1e058a3  add exception check for numpy reshape (#16180)
 add 53b2b40  improve dataloader signals and messages (#16114)
 add 66c4207  [Numpy] Numpy behavior normal distribution (#16109)
 add b3da7d2  fix multinomial bug on gpu (#16204)
 add 5fe4d2a  Update ndarray.py (#16205)
 add 5f9a680  subscribe to build and CD changes (#16192)
 add a37a76c  Float64 fallback for mkldnn subgraph and rnn op (#15853)
 add f5d8fbf  New Website: Remove Old Content [2/3] (#15885)
 add 6af6570  fix flaky test (#16191)
 add 6247dc8  [Numpy] Differentiable svd (#15795)
 add a783d81  add epsilon to sum(pvalue) upperbound (#16211)
 add 7126438  New Website: New Pipeline [3/3] (#15883)
 add 986cecd  Update MKL-DNN dependency (#16073)
 add d61ed3f  Solve #14116, #15143 (#15144)
 add 618c481  [MXNET-1422] Fix wrong results of min([inf, inf]) and max([-inf,-inf]) (#16226)
 add be7296b  removing MXNDArrayLoadFromBuffer64 and MXNDArrayLoad64 (#16203)
 add 11f73ed  np compatible vstack (#15850)
 add 37c641f  Faster Scala NDArray to BufferedImage function (#16219)
 add 93b794e  Fix inconsistent interpolation method values (#16212)
 add 89cb4ad  Numpy add numpy op roll (#15902)
 add da6e744  add numpy compatible trace (#16008)
 add 53ebe12  add numpy op hanning, hamming, blackman (#15815)
 add 0994ea0  julia: implement `context.num_gpus` (#16236)
 add 8e48cec  julia: add `AbstractMXError` as parent type (#16235)
 add dccfc88  [Numpy]flip (#15819)
 add 7344c91  numpy operator around (#16126)
 add 8af1b57  numpy operator arctan2 (#15890)
 add ae2f1bb  numpy operator nonzero (#15838)
 add bea5653  numpy operator hypot (#15901)
 add 72b4d9b  tvm numpy operator deg2rad && rad2deg (#16015)
 add a77bd75  Integrate MKL-DNN leakyrelu (#16075)
 add cbbb96a  [CD] Add COMMIT_ID param to release job (#16202)
 add f635595  numpy op unique
 add 08f8372  try to fix bug
 add 3ca1920  fix memory bug and disable some test
 add 33bd02f  fix according to review
 add 1d4032d  set fixed seed for profiler (#16155)
 add a5e698a  FullyConnected Bias performance improvement on GPU (#16039)
 add 35ef45c  Fix lack of dylib support in Makefile when use lapack (#15813)
 add 985a4ca  Update KL Divergence formula (#16170)
 add ab2214b  fix broken links (#16255)
 add df34e76  Add list_ctx to ParameterDict (#16185)
 add 1a2da12  [MKLDNN] NDArray reorder in C API and deconv (#16265)
 add c5007ea  Numpy operators: `lcm`, `tril`, `identity` and `take` (#16264)
 add c69c8bf  redirect to the 404 page (#16287)
 add f52ddfd  Fix MXNDArrayGetData (#16289)
 add 7656a11  Removes git status update stop gap solution (#16285)
 add 33c7b5c  add google-analytics config (#16271)

No new revisions were added by this update.

Summary of changes:
 3rdparty/mkldnn|2 +-
 3rdparty/mshadow/mshadow/base.h|   43 +-
 CODEOWNERS |   15 +-
 Makefile   |   28 +-
 R-package/DESCRIPTION  |1 +
 R-package/src/export.cc|   13 +
 README.md  |   29 +-
 benchmark/opperf/README.md |   18 +-
 benchmark/opperf/utils/benchmark_utils.py  |1 +
 cd/Jenkinsfile_cd_pipeline |   17 +-
 cd/Jenkinsfile_release_job |   44 +-
 cd/Jenkinsfile_utils.groovy|   29 +-
 cd/README.md   |9 +
 cd/img/job_setup.png   |  Bin 0 -> 250237 bytes
 cd/mxnet_lib/dynamic/Jenkins_pipeline.groovy   |   57 +
 ci/Jenkinsfile_utils.groovy|   64 +-
 ci/docker/Dockerfile.build.test.arm_qemu   |1 +
 ci/docker/Dockerfile.build.ubuntu_blc  |

[incubator-mxnet] branch ir-patch created (now 5ed5689)

2019-09-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


  at 5ed5689  numpy operator ravel, derive from reshape (#16016)

No new revisions were added by this update.