[incubator-tvm] branch master updated (44ff1f3 -> 2e93aef)

2020-07-28 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 44ff1f3  [Relay] Handle ndarray_size in FoldConstant (#6156)
 add 2e93aef  Improve error messages in graph tuner, graph runtime, and module loader. (#6148)

No new revisions were added by this update.

Summary of changes:
 python/tvm/autotvm/graph_tuner/base_graph_tuner.py |  3 +++
 python/tvm/contrib/graph_runtime.py|  5 -
 src/runtime/library_module.cc  | 19 +--
 src/runtime/stackvm/stackvm_module.cc  | 19 +--
 4 files changed, 41 insertions(+), 5 deletions(-)



[incubator-tvm] branch master updated: [Relay] Handle ndarray_size in FoldConstant (#6156)

2020-07-28 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 44ff1f3  [Relay] Handle ndarray_size in FoldConstant (#6156)
44ff1f3 is described below

commit 44ff1f3b5ed0751fee39537a0e6e3870a74c930b
Author: lixiaoquan 
AuthorDate: Wed Jul 29 06:49:21 2020 +0800

[Relay] Handle ndarray_size in FoldConstant (#6156)

* [Relay] Handle ndarray_size in FoldConstant

* Use Optional
---
 src/relay/transforms/fold_constant.cc | 75 ---
 tests/python/relay/test_pass_fold_constant.py | 22 
 2 files changed, 90 insertions(+), 7 deletions(-)

diff --git a/src/relay/transforms/fold_constant.cc b/src/relay/transforms/fold_constant.cc
index 0b873bf..3f5ecaa 100644
--- a/src/relay/transforms/fold_constant.cc
+++ b/src/relay/transforms/fold_constant.cc
@@ -86,7 +86,8 @@ class ConstantFolder : public ExprMutator {
 shape_func_op_(Op::Get("vm.shape_func")),
 alloc_tensor_op_(Op::Get("memory.alloc_tensor")),
 alloc_storage_op_(Op::Get("memory.alloc_storage")),
-      cast_op_(Op::Get("cast")) {}
+      cast_op_(Op::Get("cast")),
+      ndarray_size_op_(Op::Get("ndarray_size")) {}
 
   Expr VisitExpr_(const LetNode* op) final {
 Expr value = this->Mutate(op->value);
@@ -128,6 +129,10 @@ class ConstantFolder : public ExprMutator {
   return EvaluateShapeOf(res, origin_args, call->attrs);
 }
 
+    if (call->op == ndarray_size_op_) {
+      return EvaluateNdarraySize(res, origin_args, call->attrs);
+    }
+
    // We should think about potentially constant evaluation over these ops too.
    if (call->op == invoke_tvm_op_ || call->op == shape_func_op_ || call->op == alloc_tensor_op_ ||
        call->op == alloc_storage_op_) {
@@ -173,6 +178,7 @@ class ConstantFolder : public ExprMutator {
   const Op& alloc_tensor_op_;
   const Op& alloc_storage_op_;
   const Op& cast_op_;
+  const Op& ndarray_size_op_;
 
   // Convert value to expression.
   Expr ObjectToExpr(const ObjectRef& value) {
@@ -223,10 +229,8 @@ class ConstantFolder : public ExprMutator {
 CHECK(param != nullptr);
 
     tvm::Array<IndexExpr> ishape;
-    if (const ConstantNode* op = input.as<ConstantNode>()) {
-      ishape = op->tensor_type()->shape;
-    } else if (input->checked_type_.defined()) {
-      ishape = input->checked_type().as<TensorTypeNode>()->shape;
+    if (auto opt = GetConstantShape(input)) {
+      ishape = opt.value();
 } else {
   return expr;
 }
@@ -261,12 +265,69 @@ class ConstantFolder : public ExprMutator {
   shape = Constant(ndarray);
 }
 
+    return CastValue(shape, param->dtype);
+  }
+
+  // Evaluate a call to the ndarray_size operator for tensors with constant
+  // shapes.
+  Expr EvaluateNdarraySize(Expr expr, Array<Expr> args, Attrs attrs) {
+    Expr input = args[0];
+    const auto* param = attrs.as<NdarraySizeAttrs>();
+    CHECK(param != nullptr);
+
+    tvm::Array<IndexExpr> ishape;
+    if (auto opt = GetConstantShape(input)) {
+      ishape = opt.value();
+    } else {
+      return expr;
+    }
+
+    // Get the constant size
+    DLContext ctx;
+    ctx.device_type = kDLCPU;
+    ctx.device_id = 0;
+    runtime::NDArray value;
+    DLDataType cdtype = DataType::Int(32);
+    value = runtime::NDArray::Empty({1}, cdtype, ctx);
+    int32_t* data = static_cast<int32_t*>(value->data);
+    if (ishape.size() == 0) {
+      *data = 0;
+    } else {
+      *data = 1;
+      using ::tvm::tir::IntImmNode;
+      for (size_t i = 0; i < ishape.size(); ++i) {
+        if (const IntImmNode* dim = ishape[i].as<IntImmNode>()) {
+          *data *= dim->value;
+        } else {
+          return expr;
+        }
+      }
+    }
+
+    Constant size = Downcast<Constant>(ObjectToExpr(value));
+    return CastValue(size, param->dtype);
+  }
+
+  Expr CastValue(const Expr& value, DataType dtype) {
     // Cast the constant into correct dtype
     auto cast_attrs = make_object<CastAttrs>();
-    cast_attrs->dtype = param->dtype;
-    Expr ret = Call(cast_op_, {shape}, Attrs(cast_attrs), {});
+    cast_attrs->dtype = dtype;
+    Expr ret = Call(cast_op_, {value}, Attrs(cast_attrs), {});
     return ConstEvaluate(ret);
   }
+
+  Optional<tvm::Array<IndexExpr>> GetConstantShape(const Expr& input) {
+    tvm::Array<IndexExpr> ishape;
+    if (const ConstantNode* op = input.as<ConstantNode>()) {
+      ishape = op->tensor_type()->shape;
+    } else if (input->checked_type_.defined()) {
+      ishape = input->checked_type().as<TensorTypeNode>()->shape;
+    } else {
+      return Optional<tvm::Array<IndexExpr>>(nullptr);
+    }
+
+    return Optional<tvm::Array<IndexExpr>>(ishape);
+  }
 };
 
 Expr FoldConstant(const Expr& expr, const IRModule& mod) {
diff --git a/tests/python/relay/test_pass_fold_constant.py b/tests/python/relay/test_pass_fold_constant.py
index fcccab5..e985268 100644
--- a/tests/python/relay/test_pass_fold_constant.py
+++ b/tests/python/relay/test_pass_fold_constant.py
@@ -164,6 +164,27 @@ def test_fold_shape_of():
 assert 

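The test diff above is truncated by the archive. As a rough sketch of the behavior this commit adds (the run_opt_pass helper and exact assertions here are illustrative, not the committed test):

import numpy as np
import tvm
from tvm import relay
from tvm.relay import transform

def run_opt_pass(expr, opt_pass):
    # Wrap the expression in a module, run the pass, and return main.
    mod = tvm.IRModule.from_expr(expr)
    return opt_pass(mod)["main"]

def test_fold_ndarray_size():
    shape = (8, 9, 10)
    x = relay.var("x", shape=shape, dtype="float32")
    y = relay.var("y", shape=shape, dtype="float32")
    func = relay.Function([x, y], relay.ndarray_size(x + y, dtype="int32"))

    # With statically known input shapes, ndarray_size now folds to a constant.
    folded = run_opt_pass(func, transform.FoldConstant())
    expected = run_opt_pass(
        relay.Function([x, y], relay.const(np.array([np.prod(shape)], dtype="int32"))),
        transform.InferType(),
    )
    assert tvm.ir.structural_equal(folded, expected)
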
[incubator-tvm] branch master updated: [Topi, x86] Using MKL blas for quantized dense (#6115)

2020-07-28 Thread haichen
This is an automated email from the ASF dual-hosted git repository.

haichen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 8cd53e0  [Topi, x86] Using MKL blas for quantized dense (#6115)
8cd53e0 is described below

commit 8cd53e00722d079290b53ade348b860f9c237ee9
Author: Animesh Jain 
AuthorDate: Tue Jul 28 13:32:00 2020 -0700

[Topi, x86] Using MKL blas for quantized dense (#6115)

* [Topi, x86] Using MKL blas for quantized dense

* Typo

* CBLAS_OFFSET only available for MKL

* Skipping tests as GPU CI uses Openblas

* Retrigger

Co-authored-by: Ubuntu 
---
 python/tvm/contrib/cblas.py | 33 +
 src/runtime/contrib/cblas/cblas.cc  | 41 ++
 src/runtime/contrib/cblas/gemm_common.h | 48 ++
 tests/python/contrib/test_cblas.py  | 52 +
 topi/python/topi/x86/dense.py   |  8 -
 5 files changed, 181 insertions(+), 1 deletion(-)

diff --git a/python/tvm/contrib/cblas.py b/python/tvm/contrib/cblas.py
index e1a4a8a..68586dfd 100644
--- a/python/tvm/contrib/cblas.py
+++ b/python/tvm/contrib/cblas.py
@@ -52,6 +52,39 @@ def matmul(lhs, rhs, transa=False, transb=False, **kwargs):
 )
 
 
+def matmul_u8s8s32(lhs, rhs, transa=False, transb=False, **kwargs):
+    """Create an extern op that computes matrix multiplication of lhs and rhs with CBLAS.
+    This function serves as an example on how to call external libraries.
+
+    Parameters
+    ----------
+    lhs: Tensor
+        The left matrix operand
+    rhs: Tensor
+        The right matrix operand
+    transa: bool
+        Whether transpose lhs
+    transb: bool
+        Whether transpose rhs
+
+    Returns
+    -------
+    C: Tensor
+        The result tensor.
+    """
+    n = lhs.shape[1] if transa else lhs.shape[0]
+    m = rhs.shape[0] if transb else rhs.shape[1]
+    return te.extern(
+        (n, m),
+        [lhs, rhs],
+        lambda ins, outs: tvm.tir.call_packed(
+            "tvm.contrib.cblas.matmul_u8s8s32", ins[0], ins[1], outs[0], transa, transb
+        ),
+        name="C",
+        **kwargs
+    )
+
+
 def batch_matmul(lhs, rhs, transa=False, transb=False, iterative=False, **kwargs):
     """Create an extern op that compute batched matrix mult of A and rhs with CBLAS
     This function serves as an example on how to call external libraries.
diff --git a/src/runtime/contrib/cblas/cblas.cc b/src/runtime/contrib/cblas/cblas.cc
index 0cf4c69..e84ee11 100644
--- a/src/runtime/contrib/cblas/cblas.cc
+++ b/src/runtime/contrib/cblas/cblas.cc
@@ -44,8 +44,37 @@ using namespace runtime;
 
 inline CBLAS_TRANSPOSE BooleanToTranspose(bool trans) { return trans ? CblasTrans : CblasNoTrans; }
 
+#if USE_MKL_BLAS == 1
+inline CBLAS_OFFSET StringToOffset(const std::string offset_type) {
+  if (offset_type != "CblasFixOffset" && offset_type != "CblasColOffset" &&
+      offset_type != "CblasRowOffset") {
+    LOG(FATAL) << "Unrecognized offset_type " << offset_type;
+  }
+  if (offset_type == "CblasFixOffset") {
+    return CblasFixOffset;
+  } else if (offset_type == "CblasColOffset") {
+    return CblasColOffset;
+  }
+  return CblasRowOffset;
+}
+#endif
+
 inline char BooleanToTransposeChar(bool trans) { return trans ? 'T' : 'N'; }
 
+struct CblasGemmU8S8S32Op {
+  void operator()(bool ta, bool tb, int M, int N, int K, float alpha, const void* A, int lda,
+                  int offset_a, const void* B, int ldb, int offset_b, float beta, int* C, int ldc,
+                  const std::string offset_ctype, int* offset_c) {
+#if USE_MKL_BLAS == 1
+    cblas_gemm_s8u8s32(CblasColMajor, BooleanToTranspose(ta), BooleanToTranspose(tb),
+                       StringToOffset(offset_ctype), M, N, K, alpha, A, lda, offset_a, B, ldb,
+                       offset_b, beta, C, ldc, offset_c);
+#else
+    LOG(FATAL) << "Quantized Gemm is supported with MKL Blas only";
+#endif
+  }
+};
+
 struct CblasSgemmOp {
   typedef float TDatatype;
   void operator()(bool ta, bool tb, int M, int N, int K, float alpha, float* A, int lda, float* B,
@@ -170,6 +199,18 @@ TVM_REGISTER_GLOBAL("tvm.contrib.cblas.matmul").set_body([](TVMArgs args, TVMRet
 CallGemm(args, ret, CblasDgemmOp());
 });
 
+// integer matrix multiplication for row major
+TVM_REGISTER_GLOBAL("tvm.contrib.cblas.matmul_u8s8s32")
+    .set_body([](TVMArgs args, TVMRetValue* ret) {
+      DLTensor* A = args[0];
+      DLTensor* B = args[1];
+      DLTensor* C = args[2];
+      CHECK(TypeMatch(A->dtype, kDLUInt, 8) && TypeMatch(B->dtype, kDLInt, 8) &&
+            TypeMatch(C->dtype, kDLInt, 32));
+
+      CallU8S8S32Gemm(args, ret, CblasGemmU8S8S32Op());
+    });
+
 TVM_REGISTER_GLOBAL("tvm.contrib.cblas.batch_matmul").set_body([](TVMArgs 
args, TVMRetValue* ret) {
   DLTensor* A = args[0];
   

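As a usage sketch for the new matmul_u8s8s32 extern op above (the shapes and the USE_BLAS=mkl build requirement are assumptions for illustration, not part of the patch):

import tvm
from tvm import te
from tvm.contrib import cblas

m, k, n = 32, 64, 16
# uint8 x int8 -> int32, matching the dtype CHECK in the C++ registration above.
A = te.placeholder((m, k), name="A", dtype="uint8")
B = te.placeholder((k, n), name="B", dtype="int8")
C = cblas.matmul_u8s8s32(A, B, transa=False, transb=False)

s = te.create_schedule(C.op)
f = tvm.build(s, [A, B, C], target="llvm")
# Running f requires a TVM build with MKL; otherwise the packed function
# aborts with "Quantized Gemm is supported with MKL Blas only".
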
[incubator-tvm] branch master updated: Correct runtime.load_module (#6161)

2020-07-28 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 1e9e4b9  Correct runtime.load_module (#6161)
1e9e4b9 is described below

commit 1e9e4b9fee46119c8bf52d8ea5d58301fe273780
Author: Tianqi Chen 
AuthorDate: Tue Jul 28 13:18:14 2020 -0700

Correct runtime.load_module (#6161)
---
 docs/deploy/hls.rst   | 6 +++---
 docs/dev/introduction_to_module_serialization.rst | 2 +-
 docs/dev/relay_bring_your_own_codegen.rst | 6 +++---
 rust/tvm/examples/resnet/src/build_resnet.py  | 2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/docs/deploy/hls.rst b/docs/deploy/hls.rst
index 64717ed..da1721d 100644
--- a/docs/deploy/hls.rst
+++ b/docs/deploy/hls.rst
@@ -64,11 +64,11 @@ We use two python scripts for this tutorial.
 
   tgt="sdaccel"
 
-  fadd = tvm.runtime.load("myadd.so")
+  fadd = tvm.runtime.load_module("myadd.so")
   if os.environ.get("XCL_EMULATION_MODE"):
-  fadd_dev = tvm.runtime.load("myadd.xclbin")
+  fadd_dev = tvm.runtime.load_module("myadd.xclbin")
   else:
-  fadd_dev = tvm.runtime.load("myadd.awsxclbin")
+  fadd_dev = tvm.runtime.load_module("myadd.awsxclbin")
   fadd.import_module(fadd_dev)
 
   ctx = tvm.context(tgt, 0)
diff --git a/docs/dev/introduction_to_module_serialization.rst b/docs/dev/introduction_to_module_serialization.rst
index 78f6d71..5451b84 100644
--- a/docs/dev/introduction_to_module_serialization.rst
+++ b/docs/dev/introduction_to_module_serialization.rst
@@ -53,7 +53,7 @@ Let us build one ResNet-18 workload for GPU as an example first.
resnet18_lib.export_library(path_lib)
 
# load it back
-   loaded_lib = tvm.runtime.load(path_lib)
+   loaded_lib = tvm.runtime.load_module(path_lib)
assert loaded_lib.type_key == "library"
assert loaded_lib.imported_modules[0].type_key == "cuda"
 
diff --git a/docs/dev/relay_bring_your_own_codegen.rst b/docs/dev/relay_bring_your_own_codegen.rst
index 0cced36..4d761bf 100644
--- a/docs/dev/relay_bring_your_own_codegen.rst
+++ b/docs/dev/relay_bring_your_own_codegen.rst
@@ -905,7 +905,7 @@ We also need to register this function to enable the corresponding Python API:
   TVM_REGISTER_GLOBAL("module.loadbinary_examplejson")
   .set_body_typed(ExampleJsonModule::LoadFromBinary);
 
-The above registration means when users call ``tvm.runtime.load(lib_path)`` API and the exported library has an ExampleJSON stream, our ``LoadFromBinary`` will be invoked to create the same customized runtime module.
+The above registration means when users call ``tvm.runtime.load_module(lib_path)`` API and the exported library has an ExampleJSON stream, our ``LoadFromBinary`` will be invoked to create the same customized runtime module.
 
 In addition, if you want to support module creation directly from an ExampleJSON file, you can also implement a simple function and register a Python API as follows:
 
@@ -930,7 +930,7 @@ In addition, if you want to support module creation directly from an ExampleJSON
   *rv = ExampleJsonModule::Create(args[0]);
   });
 
-It means users can manually write/modify an ExampleJSON file, and use Python API ``tvm.runtime.load("mysubgraph.examplejson", "examplejson")`` to construct a customized module.
+It means users can manually write/modify an ExampleJSON file, and use Python API ``tvm.runtime.load_module("mysubgraph.examplejson", "examplejson")`` to construct a customized module.
 
 *******
 Summary
@@ -954,7 +954,7 @@ In summary, here is a checklist for you to refer:
   * ``Run`` to execute a subgraph.
   * Register a runtime creation API.
   * ``SaveToBinary`` and ``LoadFromBinary`` to serialize/deserialize customized runtime module.
-  * Register ``LoadFromBinary`` API to support ``tvm.runtime.load(your_module_lib_path)``.
+  * Register ``LoadFromBinary`` API to support ``tvm.runtime.load_module(your_module_lib_path)``.
   * (optional) ``Create`` to support customized runtime module construction from subgraph file in your representation.
 
 * An annotator to annotate a user Relay program to make use of your compiler and runtime (TBA).
diff --git a/rust/tvm/examples/resnet/src/build_resnet.py b/rust/tvm/examples/resnet/src/build_resnet.py
index a09a0c3..1142f99 100644
--- a/rust/tvm/examples/resnet/src/build_resnet.py
+++ b/rust/tvm/examples/resnet/src/build_resnet.py
@@ -112,7 +112,7 @@ def download_img_labels():
 def test_build(build_dir):
 """ Sanity check with random input"""
 graph = open(osp.join(build_dir, "deploy_graph.json")).read()
-lib = tvm.runtime.load(osp.join(build_dir, "deploy_lib.so"))
+lib = tvm.runtime.load_module(osp.join(build_dir, "deploy_lib.so"))
     params = bytearray(open(osp.join(build_dir,"deploy_param.params"), "rb").read())
 
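A self-contained sketch of the corrected API (the myadd module here is illustrative, mirroring the docs touched above):

import tvm
from tvm import te

# Build a trivial module and export it as a shared library.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute(A.shape, lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
fadd = tvm.build(s, [A, B], target="llvm", name="myadd")
fadd.export_library("myadd.so")

# tvm.runtime.load_module is the correct entry point; tvm.runtime.load does
# not exist, which is what this commit corrects in the docs.
loaded = tvm.runtime.load_module("myadd.so")
assert loaded.type_key == "library"
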

[incubator-tvm] branch master updated (d35a149 -> a02d377)

2020-07-28 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from d35a149  [CI][Caffe Frontend] add caffe environment (#6023)
 add a02d377  [TIR][Bugfix] Improved massive build times caused by tir.floormod and tir.floordiv. Fixed Topi testcase. (#5666)

No new revisions were added by this update.

Summary of changes:
 src/target/llvm/codegen_llvm.cc  |  8 +-
 src/target/llvm/codegen_llvm.h   |  5 
 src/target/source/codegen_c.cc   |  8 +-
 src/target/source/codegen_c.h|  5 
 src/target/spirv/codegen_spirv.cc|  8 +-
 src/target/spirv/codegen_spirv.h |  5 
 src/tir/transforms/lower_intrin.cc   | 47 +---
 src/tir/transforms/split_host_device.cc  | 22 +--
 topi/tests/python/test_topi_broadcast.py | 27 ++
 9 files changed, 96 insertions(+), 39 deletions(-)



[incubator-tvm] branch master updated (bbc2dbf -> d35a149)

2020-07-28 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from bbc2dbf  [Ansor][AutoTVM v2.0] Phase 1: Add follow_split and follow_fused_split steps (#6142)
 add d35a149  [CI][Caffe Frontend] add caffe environment (#6023)

No new revisions were added by this update.

Summary of changes:
 docker/Dockerfile.ci_cpu   |  4 
 ...buntu_install_nodejs.sh => ubuntu_install_caffe.sh} | 18 +++---
 2 files changed, 15 insertions(+), 7 deletions(-)
 copy docker/install/{ubuntu_install_nodejs.sh => ubuntu_install_caffe.sh} (66%)
 mode change 100755 => 100644



[incubator-tvm] branch master updated: [Ansor][AutoTVM v2.0] Phase 1: Add follow_split and follow_fused_split steps (#6142)

2020-07-28 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository.

lmzheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new bbc2dbf  [Ansor][AutoTVM v2.0] Phase 1: Add follow_split and follow_fused_split steps (#6142)
bbc2dbf is described below

commit bbc2dbf9f81669c505ac8c73f4a6511bfc941d4f
Author: jiuqi-yang <68428961+jiuqi-y...@users.noreply.github.com>
AuthorDate: Tue Jul 28 22:49:05 2020 +0800

[Ansor][AutoTVM v2.0] Phase 1: Add follow_split and follow_fused_split steps (#6142)

* Add cache_read/cache_write step

* Update

* Add follow split and follow fused split

Signed-off-by: jingbang.yjb 

Conflicts:
src/auto_scheduler/compute_dag.cc
src/auto_scheduler/transform_step.cc
src/auto_scheduler/transform_step.h
tests/python/unittest/test_auto_scheduler_loop_state.py

* add loop_state.py

Signed-off-by: jingbang.yjb 

* Update

* Update

* Update state->current_compute_dag to Optional

* Add some doc strings for Follow_Split and Follow_fused_split

Signed-off-by: jingbang.yjb 

* Check code using c-lint

Signed-off-by: jingbang.yjb 

* Add more doc strings and change the order for follow split.

Signed-off-by: jingbang.yjb 

* Add record test for follow_split and follow_fused_split

Signed-off-by: jingbang.yjb 

* Add record test for follow_split

Signed-off-by: jingbang.yjb 

* Add record test for follow_fused_split.

Signed-off-by: jingbang.yjb 

* Add test record for follow_fused_split
1. delete a comment
2. add "fuse" between follow_split and follow_fused_split

Signed-off-by: jingbang.yjb 

* Add doc strings for some functions and variables

Signed-off-by: jingbang.yjb 

* Fix the code format in src/auto_scheduler/transform_step.h

Signed-off-by: jingbang.yjb 

* Update

* Update doc

* Update

* Update

* Fix follow_split and follow_fused_split record test.

Signed-off-by: jingbang.yjb 

* Doc update

* Update some doc strings

Signed-off-by: jingbang.yjb 

* Fix code style and some function definitions.

Signed-off-by: jingbang.yjb 

* Update

Signed-off-by: jingbang.yjb 

* Add comments on parameters.

Signed-off-by: jingbang.yjb 

* Add more doc strings and fix some.

Signed-off-by: jingbang.yjb 

* Update

Signed-off-by: jingbang.yjb 

* Update

Signed-off-by: jingbang.yjb 

* Update

Signed-off-by: jingbang.yjb 

* Update.

Signed-off-by: jingbang.yjb 

Co-authored-by: chengfan.jcf 
Co-authored-by: jingbang.yjb 
---
 include/tvm/auto_scheduler/loop_state.h|  23 +++
 include/tvm/auto_scheduler/transform_step.h| 168 -
 python/tvm/auto_scheduler/loop_state.py|  96 ++
 src/auto_scheduler/compute_dag.cc  |   4 +-
 src/auto_scheduler/loop_state.cc   |  34 
 src/auto_scheduler/transform_step.cc   | 208 -
 .../unittest/test_auto_scheduler_loop_state.py |  42 -
 .../python/unittest/test_auto_scheduler_measure.py |  25 ++-
 8 files changed, 589 insertions(+), 11 deletions(-)

diff --git a/include/tvm/auto_scheduler/loop_state.h b/include/tvm/auto_scheduler/loop_state.h
index 1c8ea77..9850620 100644
--- a/include/tvm/auto_scheduler/loop_state.h
+++ b/include/tvm/auto_scheduler/loop_state.h
@@ -359,6 +359,29 @@ class State : public ObjectRef {
   TVM_DLL Array<Iterator> split(int stage_id, const Iterator& it,
                                 const Array<Optional<Integer>>& lengths,
                                 bool inner_to_outer = true);
+  /*!
+   * \brief Schedule primitive extends to split step.
+   * \param stage_id The index of the stage to be split.
+   * \param it The iterator to be split.
+   * \param src_step_id The index of the split step to be followed in the history.
+   * \param n_split The number of split levels.
+   * \return The split new Iterators.
+   */
+  TVM_DLL Array<Iterator> follow_split(int stage_id, const Iterator& it, int src_step_id,
+                                       int n_split);
+  /*!
+   * \brief Schedule primitive extends to split step.
+   * \param stage_id The index of the stage to be split.
+   * \param it The iterator to be split.
+   * \param src_step_ids The indices of the split steps to be followed in the history.
+   * \param level Use the length in this split level.
+   * \param factor_or_nparts True to use `factor` for split from inner to outer,
+      False to use `nparts` for split from outer to inner.
+   * \return The split new Iterators.

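A hedged sketch of how the two new steps compose, with API names taken from the diff and the PR's unit tests (the matmul workload and split factors are illustrative):

import tvm
from tvm import te, auto_scheduler

def matmul(N):
    A = te.placeholder((N, N), name="A")
    B = te.placeholder((N, N), name="B")
    k = te.reduce_axis((0, N), name="k")
    C = te.compute((N, N), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    return A, B, C

A, B, C = matmul(512)
dag = auto_scheduler.ComputeDAG([A, B, C])
state = dag.get_init_state()

# Split C once and remember the index of that transform step in the history.
C_global = state.cache_write(C, "global")
state.split(C, state[C].iters[0], [4, 2, 8])
split_step = len(state.transform_steps) - 1

# The cached stage reuses the same split factors by following that step.
state.follow_split(C_global, state[C_global].iters[0], split_step, 3)
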
[incubator-tvm] branch master updated (ac1b0ea -> 6c17e65)

2020-07-28 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from ac1b0ea  [DOCS][REFACTOR] Clarify Docs Categorization (#6155)
 add 6c17e65  Adding t-vi as a reviewer (#6149)

No new revisions were added by this update.

Summary of changes:
 CONTRIBUTORS.md | 1 +
 1 file changed, 1 insertion(+)



[incubator-tvm] branch master updated: [DOCS][REFACTOR] Clarify Docs Categorization (#6155)

2020-07-28 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new ac1b0ea  [DOCS][REFACTOR] Clarify Docs Categorization (#6155)
ac1b0ea is described below

commit ac1b0ea18bf3c4b7258c6e81cc8ec9fecbb47131
Author: Tianqi Chen 
AuthorDate: Tue Jul 28 07:27:01 2020 -0700

[DOCS][REFACTOR] Clarify Docs Categorization (#6155)

This PR categorizes the docs into a few categories:
- How To
- Tutorials
- References
- Deep Dive
- MISC

Co-authored-by: Chris Hoge 

Co-authored-by: Chris Hoge 
---
 docs/index.rst | 14 --
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/docs/index.rst b/docs/index.rst
index defaf4a..18b2da7 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -37,11 +37,13 @@ For Developers
 
 .. toctree::
:maxdepth: 1
-   :caption: Get Started
+   :caption: How to
:hidden:
 
install/index
contribute/index
+   deploy/index
+   dev/how_to
 
 .. toctree::
:maxdepth: 1
@@ -52,13 +54,6 @@ For Developers
 
 
 .. toctree::
-   :maxdepth: 1
-   :caption: How-to Guide
-   :hidden:
-
-   deploy/index
-
-.. toctree::
:maxdepth: 2
:caption: References
:hidden:
@@ -70,10 +65,9 @@ For Developers
 .. toctree::
:maxdepth: 2
:hidden:
-   :caption: For Developers
+   :caption: Deep Dive
 
dev/index
-   dev/how_to
 
 .. toctree::
:maxdepth: 2