[incubator-tvm] branch master updated (1767b08 -> 66b7ddb)

2020-08-31 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 1767b08  [DOCKER] Fix Dockerfile.demo_android (#6361)
 add 66b7ddb  [TIR][Transform]Block scope hoisting added (#6238)

No new revisions were added by this update.

Summary of changes:
 python/tvm/driver/build_module.py  |   2 +-
 python/tvm/tir/transform/transform.py  |  24 +-
 src/tir/transforms/hoist_if_then_else.cc   | 165 +--
 .../python/unittest/test_tir_transform_hoist_if.py | 496 -
 4 files changed, 629 insertions(+), 58 deletions(-)
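
For context, a minimal sketch of what if-hoisting does, written as plain
Python rather than TIR; the names are illustrative and `cond` is assumed
to be loop-invariant:

    # Before hoisting: the branch is re-evaluated on every iteration.
    def before(cond, a, b, n):
        for i in range(n):
            if cond:            # cond does not depend on i
                a[i] = b[i]

    # After hoisting: the branch is evaluated once, outside the loop.
    def after(cond, a, b, n):
        if cond:
            for i in range(n):
                a[i] = b[i]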



[incubator-tvm] branch master updated (e6374dc -> b81bdee)

2020-09-10 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from e6374dc  Fix broadcast shape (#6422)
 add b81bdee  [Relay] Add Defunctionalization Pass  (#6400)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/transform/transform.py|  26 ++
 src/relay/transforms/defunctionalization.cc| 431 +
 .../python/relay/test_pass_defunctionalization.py  | 226 +++
 3 files changed, 683 insertions(+)
 create mode 100644 src/relay/transforms/defunctionalization.cc
 create mode 100644 tests/python/relay/test_pass_defunctionalization.py
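
A minimal sketch of the defunctionalization idea, independent of the Relay
implementation: higher-order values become first-order tags plus a single
dispatching `apply`. All names here are illustrative:

    def add_n(n):                 # higher-order: returns a closure
        return lambda x: x + n

    def make_add_n(n):            # defunctionalized: a tagged record
        return ("AddN", n)

    def apply(f, x):              # one first-order dispatcher
        tag, env = f
        if tag == "AddN":
            return x + env
        raise ValueError("unknown function tag")

    assert add_n(3)(4) == apply(make_add_n(3), 4)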



[incubator-tvm] branch master updated (21e895c -> a0c072e)

2020-08-03 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 21e895c  [TIR] Enhance VerifyGPUCode (#6194)
 add a0c072e  [TIR][Transform] HoistIfThenElse added (#6066)

No new revisions were added by this update.

Summary of changes:
 include/tvm/tir/transform.h|   8 +
 python/tvm/driver/build_module.py  |   1 +
 python/tvm/tir/transform/transform.py  |   9 +
 src/tir/transforms/hoist_if_then_else.cc   | 365 +
 tests/python/unittest/test_te_build_lower.py   |   2 +-
 .../python/unittest/test_tir_transform_hoist_if.py | 268 +++
 6 files changed, 652 insertions(+), 1 deletion(-)
 create mode 100644 src/tir/transforms/hoist_if_then_else.cc
 create mode 100644 tests/python/unittest/test_tir_transform_hoist_if.py
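
A hedged usage sketch for the pass added here (Python binding per
python/tvm/tir/transform/transform.py; the schedule is illustrative and
API details may vary slightly across versions):

    import tvm
    from tvm import te

    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
    s = te.create_schedule(B.op)

    mod = tvm.lower(s, [A, B])                      # IRModule of TIR
    mod = tvm.tir.transform.HoistIfThenElse()(mod)  # apply the new pass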



[incubator-tvm] branch master updated: [RELAY] Basic block normal form (#6152)

2020-08-03 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new b6db7e3  [RELAY] Basic block normal form (#6152)
b6db7e3 is described below

commit b6db7e33f7a589fdf5dce062d8488ce2f83a3727
Author: Haibin Lin 
AuthorDate: Mon Aug 3 21:38:51 2020 -0700

[RELAY] Basic block normal form (#6152)

* initial commit

* refactor utils

* add util

* revert anf test

* update test

* fix logging

* fix scope bug

* complete tests

* remove logging

* revert refactoring

* add one more test case

* fix missing var binding

* fix test

* fix lint

* fix lint

* fix clang-format

* fix lint

* fix lint

* commit missing code

* add analysis api

* fix lint

* fix lint

* lint

* add test for func

* address CR

* fix typo

* fix return type

* fix lint

* refactor classes

* fix lint

* remove prints

* address comments

Co-authored-by: Ubuntu 
---
 include/tvm/relay/analysis.h   |   9 +
 include/tvm/relay/transform.h  |  15 +
 python/tvm/relay/analysis/analysis.py  |  15 +
 python/tvm/relay/transform/transform.py|  15 +
 src/relay/analysis/dependency_graph.cc |   4 +
 src/relay/backend/build_module.cc  |   1 +
 src/relay/backend/vm/compiler.cc   |   1 +
 src/relay/transforms/let_list.h|   6 +
 src/relay/transforms/pass_util.h   |  88 
 src/relay/transforms/to_a_normal_form.cc   | 299 ++---
 src/relay/transforms/to_basic_block_normal_form.cc | 104 +
 .../relay/test_analysis_basic_block_normal_form.py | 206 +
 .../relay/test_pass_to_basic_block_normal_form.py  | 482 +
 13 files changed, 1098 insertions(+), 147 deletions(-)

diff --git a/include/tvm/relay/analysis.h b/include/tvm/relay/analysis.h
index 8eda7dd..c65bb41 100644
--- a/include/tvm/relay/analysis.h
+++ b/include/tvm/relay/analysis.h
@@ -67,6 +67,15 @@ TVM_DLL Kind KindCheck(const Type& t, const IRModule& mod);
 TVM_DLL bool ConstantCheck(const Expr& e);
 
 /*!
+ * \brief Check whether an expression is in the basic block normal form.
+ *
+ * \param e the expression.
+ *
+ * \return whether the expression is in the basic block normal form.
+ */
+TVM_DLL bool BasicBlockNormalFormCheck(const Expr& e);
+
+/*!
  * \brief Check that each Var is only bound once.
  *
  * For example, the expression `let x = 1 in let x = 2 in 3` bound x twice.
diff --git a/include/tvm/relay/transform.h b/include/tvm/relay/transform.h
index d995301..cf14feb 100644
--- a/include/tvm/relay/transform.h
+++ b/include/tvm/relay/transform.h
@@ -117,6 +117,21 @@ TVM_DLL Pass FuseOps(int fuse_opt_level = -1);
 TVM_DLL Pass RewriteAnnotatedOps(int fallback_device);
 
 /*!
+ * \brief Turn an expression to Basic Block Normal Form.
+ *
+ * We define a block as a group of expressions implied by the scope structure.
+ *
+ * Each graph node can only belong to a single block.
+ *
+ * For any value that is being used in multiple blocks, it has to be referred
+ * by a Var which is defined in a block, whose scope is the least common ancestor
+ * of blocks this value is used.
+ *
+ * \return The pass.
+ */
+TVM_DLL Pass ToBasicBlockNormalForm();
+
+/*!
  * \brief turn a dataflow graph into Administrative Normal Form, or A-Normal Form (ANF).
  *
  * It will turn an expression that is in a graph form (with sharing implicit),
diff --git a/python/tvm/relay/analysis/analysis.py b/python/tvm/relay/analysis/analysis.py
index 632af46..165e39a 100644
--- a/python/tvm/relay/analysis/analysis.py
+++ b/python/tvm/relay/analysis/analysis.py
@@ -106,6 +106,21 @@ def check_constant(expr):
 """
 return _ffi_api.check_constant(expr)
 
+def check_basic_block_normal_form(expr):
+"""Check whether an expression is in the basic block form
+
+Parameters
+--
+expr : tvm.relay.Expr
+The input expression
+
+Returns
+---
+result : bool
+Whether the expression is in the basic block form.
+"""
+return _ffi_api.check_basic_block_normal_form(expr)
+
 
 def free_vars(expr):
 """Get free Vars from expression expr in Post DFS order.
diff --git a/python/tvm/relay/transform/transform.py b/python/tvm/relay/transform/transform.py
index 7db0687..3abc382 100644
--- a/python/tvm/relay/transform/transform.py
+++ b/python/tvm/relay/transform/transform.py
@@ -488,6 +488,21 @@ def ToANormalForm():
 """
 return _ffi_api.ToANor
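
A hedged end-to-end sketch of the APIs this commit adds (program
construction is illustrative; `y` is shared by both branches, so basic
block normal form must let-bind it in the branches' common ancestor scope):

    import tvm
    from tvm import relay

    x = relay.var("x", shape=(1,))
    y = x + x                                   # value used in both branches
    f = relay.Function([x], relay.If(relay.const(True), y, y * y))

    mod = tvm.IRModule.from_expr(f)
    mod = relay.transform.InferType()(mod)
    mod = relay.transform.ToBasicBlockNormalForm()(mod)
    print(relay.analysis.check_basic_block_normal_form(mod["main"]))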

[incubator-tvm] branch master updated (3d8ad7a -> b3c42f9)

2020-08-07 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 3d8ad7a  [C++ RPC] fix typo to keep same with source code (#6220)
 add b3c42f9  [Relay][Pass] Support combine multiple dense op just into dense (#6062)

No new revisions were added by this update.

Summary of changes:
 include/tvm/relay/transform.h  |   4 +-
 python/tvm/relay/transform/transform.py|  31 +++-
 src/relay/transforms/combine_parallel_dense.cc | 166 +-
 .../relay/test_pass_combine_parallel_dense.py  | 189 -
 4 files changed, 379 insertions(+), 11 deletions(-)
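
A hedged sketch of what the pass targets: parallel dense ops that share the
same input are combined into one (pass name per include/tvm/relay/transform.h;
shapes are illustrative):

    import tvm
    from tvm import relay

    x = relay.var("x", shape=(1, 16))
    w1 = relay.var("w1", shape=(32, 16))
    w2 = relay.var("w2", shape=(32, 16))
    y = relay.nn.dense(x, w1) + relay.nn.dense(x, w2)

    mod = tvm.IRModule.from_expr(relay.Function([x, w1, w2], y))
    mod = relay.transform.CombineParallelDense(min_num_branches=2)(mod)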



[incubator-tvm] branch master updated: Amendments for gradients (#5941)

2020-06-29 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 2e04393  Amendments for gradients (#5941)
2e04393 is described below

commit 2e043937831dbb07e152f4702457ed05ff3cd31e
Author: Thomas Viehmann 
AuthorDate: Tue Jun 30 05:35:36 2020 +0200

Amendments for gradients (#5941)

* Amendments for gradients

- We fix the dtype handling of consts in generated gradients.
- We add a collapse_sum_to instruction mirroring the collapse_sum_like.
  While for general definitions (potentially dynamic shapes),
  collapse_sum_like is the first choice, when moving to static,
  using collapse_sum_to will greatly simplify the graph.
  (This simplification is not part of the PR.)

* Fix Broadcast rel description in comment

Thank you, @MarisaKirisame
---
 python/tvm/relay/op/_tensor_grad.py| 24 ++-
 python/tvm/relay/op/_transform.py  |  1 +
 python/tvm/relay/op/transform.py   | 21 +
 src/relay/op/tensor/transform.cc   | 48 ++
 tests/python/relay/test_op_grad_level1.py  | 13 
 tests/python/relay/test_op_grad_level10.py | 14 +
 tests/python/relay/test_op_grad_level3.py  | 31 +--
 tests/python/relay/test_op_level10.py  | 20 +
 8 files changed, 137 insertions(+), 35 deletions(-)
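
A brief sketch of the distinction described above (shapes illustrative):
collapse_sum_like needs a reference tensor in the graph, while
collapse_sum_to only needs the target shape as a static attribute:

    from tvm import relay

    g = relay.var("g", shape=(4, 3))          # incoming gradient
    x = relay.var("x", shape=(1, 3))          # broadcast operand

    like = relay.collapse_sum_like(g, x)      # shape comes from `x`
    to = relay.collapse_sum_to(g, (1, 3))     # shape is a static attribute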

diff --git a/python/tvm/relay/op/_tensor_grad.py b/python/tvm/relay/op/_tensor_grad.py
index 00ea097..2907d72 100644
--- a/python/tvm/relay/op/_tensor_grad.py
+++ b/python/tvm/relay/op/_tensor_grad.py
@@ -69,7 +69,7 @@ def log2_grad(orig, grad):
 """Returns [grad * 1 / (log(2) * x)]"""
 x = orig.args[0]
 ones = ones_like(x)
-two = const(2.0)
+two = const(2.0, dtype=x.checked_type.dtype)
 return [grad * ones / (log(two) * x)]
 
 
@@ -78,7 +78,7 @@ def log10_grad(orig, grad):
 """Returns [grad * 1 / (log(10) * x)]"""
 x = orig.args[0]
 ones = ones_like(x)
-ten = const(10.0)
+ten = const(10.0, dtype=x.checked_type.dtype)
 return [grad * ones / (log(ten) * x)]
 
 
@@ -175,8 +175,9 @@ def exp_grad(orig, grad):
 @register_gradient("sqrt")
 def sqrt_grad(orig, grad):
 """Returns [grad * 0.5 * (x ^ -0.5)]"""
-a = const(0.5)  # (TODO) type?
-return [grad * a * power(orig.args[0], negative(a))]
+x = orig.args[0]
+a = const(0.5, dtype=x.checked_type.dtype)
+return [grad * a * power(x, negative(a))]
 
 
 @register_gradient("sigmoid")
@@ -261,6 +262,13 @@ def collapse_sum_like_grad(orig, grad):
     return [broadcast_to_like(grad, x), zeros_like(y)]
 
 
+@register_gradient("collapse_sum_to")
+def collapse_sum_to_grad(orig, grad):
+    """Returns [broadcast_to_like(grad, x), 0]"""
+    x, y = orig.args
+    return [broadcast_to_like(grad, x), zeros_like(y)]
+
+
 @register_gradient("abs")
 def abs_grad(orig, grad):
     """Returns grad * (select(x < 0, -1, 1))."""
@@ -284,8 +292,8 @@ def clip_grad(orig, grad):
     x = orig.args[0]
     a_min = orig.attrs.get_int("a_min")
     a_max = orig.attrs.get_int("a_max")
-    a_mins = broadcast_to_like(const(a_min), x)
-    a_maxs = broadcast_to_like(const(a_max), x)
+    a_mins = broadcast_to_like(const(a_min, dtype=x.checked_type.dtype), x)
+    a_maxs = broadcast_to_like(const(a_max, dtype=x.checked_type.dtype), x)
     zeros = zeros_like(x)
     ones = ones_like(x)
     return [where(less(x, a_mins), zeros, where(less(a_maxs, x), zeros, ones * grad))]
@@ -591,7 +599,7 @@ def cross_entropy_grad(orig, grad):
     x, y = orig.args
     shape = shape_of(x)
     batch_size = take(shape, const(0, dtype='int32'), axis=0)
-    grad = grad / batch_size.astype('float32')
+    grad = grad / batch_size.astype(x.checked_type.dtype)
     return [-grad * y / x, -grad * log(x)]
 
 
@@ -600,5 +608,5 @@ def cross_entropy_with_logits_grad(orig, grad):
     x, y = orig.args
     shape = shape_of(x)
     batch_size = take(shape, const(0, dtype='int32'), axis=0)
-    grad = grad / batch_size.astype('float32')
+    grad = grad / batch_size.astype(x.checked_type.dtype)
     return [-grad * y, -grad * x]
diff --git a/python/tvm/relay/op/_transform.py b/python/tvm/relay/op/_transform.py
index d104c1b..10238d1 100644
--- a/python/tvm/relay/op/_transform.py
+++ b/python/tvm/relay/op/_transform.py
@@ -57,6 +57,7 @@ _reg.register_injective_schedule("gather_nd")
 _reg.register_injective_schedule("sequence_mask")
 _reg.register_injective_schedule("one_hot")
 _reg.register_reduce_schedule("collapse_sum_like")
+_reg.register_reduce_schedule("co
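
A one-line restatement of the dtype fix above, as an illustrative sketch:
gradient helpers now build constants in the input's dtype instead of
defaulting to float32, e.g. for a float16 input:

    from tvm import relay

    x = relay.var("x", shape=(3,), dtype="float16")
    two = relay.const(2.0, dtype="float16")   # matches x, as in log2_grad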

[incubator-tvm] branch master updated (5d445ca -> 957aefb)

2020-06-29 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 5d445ca  Fix some typo errors in license header (#5956)
 add 957aefb  [RELAY][GRAD] handle Tuple/TupleGetItem in first order gradient (#5946)

No new revisions were added by this update.

Summary of changes:
 src/relay/transforms/gradient.cc | 79 +++-
 src/relay/transforms/pattern_util.h  |  6 +++
 tests/python/relay/test_pass_gradient.py | 28 ---
 3 files changed, 104 insertions(+), 9 deletions(-)
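
A hedged sketch of what this enables: first-order gradients through tuple
construction and projection (the program is illustrative):

    import tvm
    from tvm import relay

    x = relay.var("x", shape=(3,))
    body = relay.TupleGetItem(relay.Tuple([x * x, x + x]), 0)
    mod = tvm.IRModule.from_expr(relay.Function([x], body))
    mod = relay.transform.InferType()(mod)

    grad_fn = relay.transform.gradient(mod["main"], mode="first_order")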



[incubator-tvm] branch master updated: Add 'get_num_inputs' to GraphRuntime (#6118)

2020-07-24 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new bfa4eae  Add 'get_num_inputs' to GraphRuntime (#6118)
bfa4eae is described below

commit bfa4eae1dcac7f2493e543823e51eb420b0f8b2c
Author: Alexander Booth 
AuthorDate: Fri Jul 24 07:22:39 2020 -0700

Add 'get_num_inputs' to GraphRuntime (#6118)
---
 python/tvm/contrib/graph_runtime.py | 11 +++
 src/runtime/graph/graph_runtime.cc  |  9 +
 src/runtime/graph/graph_runtime.h   |  6 ++
 3 files changed, 26 insertions(+)

diff --git a/python/tvm/contrib/graph_runtime.py b/python/tvm/contrib/graph_runtime.py
index ec102f5..326eccb 100644
--- a/python/tvm/contrib/graph_runtime.py
+++ b/python/tvm/contrib/graph_runtime.py
@@ -133,6 +133,7 @@ class GraphModule(object):
         self._get_output = module["get_output"]
         self._get_input = module["get_input"]
         self._get_num_outputs = module["get_num_outputs"]
+        self._get_num_inputs = module["get_num_inputs"]
         self._load_params = module["load_params"]
         self._share_params = module["share_params"]
 
@@ -187,6 +188,16 @@ class GraphModule(object):
 """
 return self._get_num_outputs()
 
+def get_num_inputs(self):
+"""Get the number of inputs to the graph
+
+Returns
+---
+count : int
+The number of inputs.
+"""
+return self._get_num_inputs()
+
 def get_input(self, index, out=None):
 """Get index-th input to out
 
diff --git a/src/runtime/graph/graph_runtime.cc b/src/runtime/graph/graph_runtime.cc
index e984861..18245ba 100644
--- a/src/runtime/graph/graph_runtime.cc
+++ b/src/runtime/graph/graph_runtime.cc
@@ -135,6 +135,12 @@ void GraphRuntime::SetInputZeroCopy(int index, DLTensor* data_ref) {
  */
 int GraphRuntime::NumOutputs() const { return outputs_.size(); }
 /*!
+ * \brief Get the number of inputs
+ *
+ * \return The number of inputs to the graph.
+ */
+int GraphRuntime::NumInputs() const { return input_nodes_.size(); }
+/*!
  * \brief Return NDArray for given input index.
  * \param index The input index.
  *
@@ -433,6 +439,9 @@ PackedFunc GraphRuntime::GetFunction(const std::string& name,
   } else if (name == "get_num_outputs") {
     return PackedFunc(
         [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = this->NumOutputs(); });
+  } else if (name == "get_num_inputs") {
+    return PackedFunc(
+        [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = this->NumInputs(); });
   } else if (name == "run") {
     return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { this->Run(); });
   } else if (name == "load_params") {
diff --git a/src/runtime/graph/graph_runtime.h b/src/runtime/graph/graph_runtime.h
index d0c9822..dcef1e4 100644
--- a/src/runtime/graph/graph_runtime.h
+++ b/src/runtime/graph/graph_runtime.h
@@ -125,6 +125,12 @@ class TVM_DLL GraphRuntime : public ModuleNode {
    */
   int NumOutputs() const;
   /*!
+   * \brief Get the number of inputs
+   *
+   * \return The number of inputs to the graph.
+   */
+  int NumInputs() const;
+  /*!
    * \brief Return NDArray for given input index.
    * \param index The input index.
    *



[incubator-tvm] branch master updated (bfa4eae -> 782190e)

2020-07-24 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from bfa4eae  Add 'get_num_inputs' to GraphRuntime (#6118)
 add 782190e  [Relay] Keep fixed dim when unifying dynamic shape (#5795)

No new revisions were added by this update.

Summary of changes:
 src/relay/analysis/type_solver.cc | 11 +++
 tests/python/relay/test_any.py|  4 +---
 tests/python/relay/test_type_infer.py | 11 +++
 3 files changed, 23 insertions(+), 3 deletions(-)



[incubator-tvm] branch master updated (ccacb1e -> 3150db7)

2020-07-17 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from ccacb1e  Fixed point multiplication improvements for AArch64 (#5980)
 add 3150db7  [Relay][Dyn] Add dynamic reshape grad (#6080)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/_tensor_grad.py  | 12 
 python/tvm/relay/testing/__init__.py | 36 +---
 tests/python/relay/dyn/test_dynamic_op_level3.py |  9 +-
 3 files changed, 52 insertions(+), 5 deletions(-)



[incubator-tvm] branch master updated: [Relay] Handle ndarray_size in FoldConstant (#6156)

2020-07-28 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 44ff1f3  [Relay] Handle ndarray_size in FoldConstant (#6156)
44ff1f3 is described below

commit 44ff1f3b5ed0751fee39537a0e6e3870a74c930b
Author: lixiaoquan 
AuthorDate: Wed Jul 29 06:49:21 2020 +0800

[Relay] Handle ndarray_size in FoldConstant (#6156)

* [Relay] Handle ndarray_size in FoldConstant

* Use Optional
---
 src/relay/transforms/fold_constant.cc | 75 ---
 tests/python/relay/test_pass_fold_constant.py | 22 
 2 files changed, 90 insertions(+), 7 deletions(-)

diff --git a/src/relay/transforms/fold_constant.cc b/src/relay/transforms/fold_constant.cc
index 0b873bf..3f5ecaa 100644
--- a/src/relay/transforms/fold_constant.cc
+++ b/src/relay/transforms/fold_constant.cc
@@ -86,7 +86,8 @@ class ConstantFolder : public ExprMutator {
 shape_func_op_(Op::Get("vm.shape_func")),
 alloc_tensor_op_(Op::Get("memory.alloc_tensor")),
 alloc_storage_op_(Op::Get("memory.alloc_storage")),
-cast_op_(Op::Get("cast")) {}
+cast_op_(Op::Get("cast")),
+ndarray_size_op_(Op::Get("ndarray_size")) {}
 
   Expr VisitExpr_(const LetNode* op) final {
     Expr value = this->Mutate(op->value);
@@ -128,6 +129,10 @@ class ConstantFolder : public ExprMutator {
       return EvaluateShapeOf(res, origin_args, call->attrs);
     }
 
+    if (call->op == ndarray_size_op_) {
+      return EvaluateNdarraySize(res, origin_args, call->attrs);
+    }
+
     // We should think about potentially constant evaluation over these ops too.
     if (call->op == invoke_tvm_op_ || call->op == shape_func_op_ || call->op == alloc_tensor_op_ ||
         call->op == alloc_storage_op_) {
@@ -173,6 +178,7 @@ class ConstantFolder : public ExprMutator {
   const Op& alloc_tensor_op_;
   const Op& alloc_storage_op_;
   const Op& cast_op_;
+  const Op& ndarray_size_op_;
 
   // Convert value to expression.
   Expr ObjectToExpr(const ObjectRef& value) {
@@ -223,10 +229,8 @@ class ConstantFolder : public ExprMutator {
     CHECK(param != nullptr);
 
     tvm::Array<IndexExpr> ishape;
-    if (const ConstantNode* op = input.as<ConstantNode>()) {
-      ishape = op->tensor_type()->shape;
-    } else if (input->checked_type_.defined()) {
-      ishape = input->checked_type().as<TensorTypeNode>()->shape;
+    if (auto opt = GetConstantShape(input)) {
+      ishape = opt.value();
     } else {
       return expr;
     }
@@ -261,12 +265,69 @@ class ConstantFolder : public ExprMutator {
       shape = Constant(ndarray);
     }
 
+    return CastValue(shape, param->dtype);
+  }
+
+  // Evaluate a call to the ndarray_size operator for tensors with constant
+  // shapes.
+  Expr EvaluateNdarraySize(Expr expr, Array<Expr> args, Attrs attrs) {
+    Expr input = args[0];
+    const auto* param = attrs.as<NdarraySizeAttrs>();
+    CHECK(param != nullptr);
+
+    tvm::Array<IndexExpr> ishape;
+    if (auto opt = GetConstantShape(input)) {
+      ishape = opt.value();
+    } else {
+      return expr;
+    }
+
+    // Get the constant size
+    DLContext ctx;
+    ctx.device_type = kDLCPU;
+    ctx.device_id = 0;
+    runtime::NDArray value;
+    DLDataType cdtype = DataType::Int(32);
+    value = runtime::NDArray::Empty({1}, cdtype, ctx);
+    int32_t* data = static_cast<int32_t*>(value->data);
+    if (ishape.size() == 0) {
+      *data = 0;
+    } else {
+      *data = 1;
+      using ::tvm::tir::IntImmNode;
+      for (size_t i = 0; i < ishape.size(); ++i) {
+        if (const IntImmNode* dim = ishape[i].as<IntImmNode>()) {
+          *data *= dim->value;
+        } else {
+          return expr;
+        }
+      }
+    }
+
+    Constant size = Downcast<Constant>(ObjectToExpr(value));
+    return CastValue(size, param->dtype);
+  }
+
+  Expr CastValue(const Expr& value, DataType dtype) {
     // Cast the constant into correct dtype
     auto cast_attrs = make_object<CastAttrs>();
-    cast_attrs->dtype = param->dtype;
-    Expr ret = Call(cast_op_, {shape}, Attrs(cast_attrs), {});
+    cast_attrs->dtype = dtype;
+    Expr ret = Call(cast_op_, {value}, Attrs(cast_attrs), {});
     return ConstEvaluate(ret);
   }
+
+  Optional<tvm::Array<IndexExpr>> GetConstantShape(const Expr& input) {
+    tvm::Array<IndexExpr> ishape;
+    if (const ConstantNode* op = input.as<ConstantNode>()) {
+      ishape = op->tensor_type()->shape;
+    } else if (input->checked_type_.defined()) {
+      ishape = input->checked_type().as<TensorTypeNode>()->shape;
+    } else {
+      return Optional<tvm::Array<IndexExpr>>(nullptr);
+    }
+
+    return Optional<tvm::Array<IndexExpr>>(ishape);
+  }
 };
 
 Expr FoldConstant(const Expr& expr, const IRModule& mod) {
diff --git a/tests/python/relay/test_pass_fold_constant.py b/tests/python/relay/test_pass_fold_constant.py
index fccc
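
A hedged sketch of the new folding behavior (shapes illustrative): with a
static input shape, ndarray_size now folds to a constant:

    import tvm
    from tvm import relay

    x = relay.var("x", shape=(2, 3), dtype="float32")
    mod = tvm.IRModule.from_expr(relay.Function([x], relay.ndarray_size(x)))
    mod = relay.transform.InferType()(mod)
    mod = relay.transform.FoldConstant()(mod)
    # the body should now be the constant 6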

[tvm] branch main updated: [Docs] Update stale links (#8111)

2021-05-23 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new ebf80cb  [Docs] Update stale links (#8111)
ebf80cb is described below

commit ebf80cbc35a7cd9839b5adb213ed8d77738dcc3f
Author: Yuchen Jin 
AuthorDate: Sun May 23 09:53:31 2021 -0700

[Docs] Update stale links (#8111)
---
 docs/dev/pass_infra.rst   | 4 ++--
 docs/dev/relay_add_op.rst | 2 +-
 docs/dev/relay_add_pass.rst   | 4 ++--
 docs/langref/relay_type.rst   | 2 +-
 src/relay/analysis/annotated_region_set.h | 2 +-
 5 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs/dev/pass_infra.rst b/docs/dev/pass_infra.rst
index 6bd4689..67ef30a 100644
--- a/docs/dev/pass_infra.rst
+++ b/docs/dev/pass_infra.rst
@@ -344,7 +344,7 @@ We've covered the concept of different level of passes and the context used for
 compilation. It would be interesting to see how easily users can register
 a pass.  Let's take const folding as an example. This pass has already been
 implemented to fold constants in a Relay function (found in
-`src/relay/pass/fold_constant.cc`_).
+`src/relay/transforms/fold_constant.cc`_).
 
 An API was provided to perform the ``Expr`` to ``Expr`` transformation.
 
@@ -536,7 +536,7 @@ optimization pipeline and debug Relay and tir passes, please refer to the
 
 .. _src/ir/transform.cc: https://github.com/apache/tvm/blob/main/src/ir/transform.cc
 
-.. _src/relay/pass/fold_constant.cc: https://github.com/apache/tvm/blob/main/src/relay/pass/fold_constant.cc
+.. _src/relay/transforms/fold_constant.cc: https://github.com/apache/tvm/blob/main/src/relay/transforms/fold_constant.cc
 
 .. _python/tvm/relay/transform/transform.py: https://github.com/apache/tvm/blob/main/python/tvm/relay/transform/transform.py
 
diff --git a/docs/dev/relay_add_op.rst b/docs/dev/relay_add_op.rst
index c5dce83..f9ade45 100644
--- a/docs/dev/relay_add_op.rst
+++ b/docs/dev/relay_add_op.rst
@@ -469,7 +469,7 @@ Adding a Gradient in C++
 Adding a gradient in C++ is similar to adding one in Python, but the
 interface for registering is slightly different.
 
-First, make sure ``src/relay/pass/pattern_utils.h`` is included. It provides
+First, make sure ``src/relay/transforms/pattern_utils.h`` is included. It provides
 helper functions for creating nodes in the Relay AST. Then, define the
 gradient in a similar fashion as in the Python example:
 
diff --git a/docs/dev/relay_add_pass.rst b/docs/dev/relay_add_pass.rst
index 0661df0..90211d0 100644
--- a/docs/dev/relay_add_pass.rst
+++ b/docs/dev/relay_add_pass.rst
@@ -397,10 +397,10 @@ the below code applies both the ``FoldConstant`` and ``ToANormalForm`` passes
 More detail about registration can be found in :ref:`tvm-runtime-system` and more
 information about the pass manager interface can be found in :ref:`pass-infra`.
 Relay's standard passes are listed in `include/tvm/relay/transform.h`_ and implemented
-in `src/relay/pass/`_.
+in `src/relay/transforms/`_.
 
 .. _include/tvm/relay/transform.h: https://github.com/apache/tvm/blob/main/include/tvm/relay/transform.h
 
-.. _src/relay/pass/: https://github.com/apache/tvm/tree/main/src/relay/pass
+.. _src/relay/transforms/: https://github.com/apache/tvm/tree/main/src/relay/transforms
 
 .. _src/relay/transforms/fold_constant.cc: https://github.com/apache/tvm/blob/main/src/relay/transforms/fold_constant.cc
diff --git a/docs/langref/relay_type.rst b/docs/langref/relay_type.rst
index 0fc19b7..632c638 100644
--- a/docs/langref/relay_type.rst
+++ b/docs/langref/relay_type.rst
@@ -43,7 +43,7 @@ Relay to be ahead-of-time compiled and provides much more information about
 tensors for optimizations further in the compilation pipeline. Such optimizations
 can be implemented as passes, which are Relay-to-Relay AST transformations, and
 may use the inferred types (e.g., shape information) for making decisions about
-program transformations. For instance, :code:`src/relay/pass/fuse_ops.cc` gives
+program transformations. For instance, :code:`src/relay/transforms/fuse_ops.cc` gives
 an implementation of a pass that uses inferred tensor shapes to replace invocations
 of operators in a Relay program with fused operator implementations.
 
diff --git a/src/relay/analysis/annotated_region_set.h b/src/relay/analysis/annotated_region_set.h
index d9923cc..d225cb8 100644
--- a/src/relay/analysis/annotated_region_set.h
+++ b/src/relay/analysis/annotated_region_set.h
@@ -18,7 +18,7 @@
  */
 
 /*!
- * \file tvm/relay/pass/annotated_region_set.h
+ * \file tvm/relay/transforms/annotated_region_set.h
  * \brief Define data structures to extract and manipulate regions from
  * a relay function. Regions are denoted by region_begin and region_end
  * annotations that exist on all the input and output edges of the region.


[tvm] branch main updated: [FIX] Fix temporary allocation size in threefry (#7709)

2021-03-23 Thread marisa
This is an automated email from the ASF dual-hosted git repository.

marisa pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 6f0a656  [FIX] Fix temporary allocation size in threefry (#7709)
6f0a656 is described below

commit 6f0a6561593898053cde051fbb4687eef3adec39
Author: Tristan Konolige 
AuthorDate: Tue Mar 23 13:47:53 2021 -0700

[FIX] Fix temporary allocation size in threefry (#7709)

* [FIX] Fix temporary allocation size in threefry

* bump sizes
---
 python/tvm/topi/random/kernel.py   |  2 +-
 tests/python/topi/python/test_topi_prng.py | 10 +-
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/python/tvm/topi/random/kernel.py b/python/tvm/topi/random/kernel.py
index 728cd68..a09a5f3 100644
--- a/python/tvm/topi/random/kernel.py
+++ b/python/tvm/topi/random/kernel.py
@@ -141,7 +141,7 @@ def _threefry(
         return [x, y]
 
     # temporary buffer for holding the results of _PERMUTATIONS
-    tmp = irb.allocate(out_buf.dtype, out_shape, name="tmp", scope="global")
+    tmp = irb.allocate(out_buf.dtype, out_shape * nwords, name="tmp", scope="global")
     tmp_offset = 0
 
 # Initialize entire key. It is composed of the original key with one
diff --git a/tests/python/topi/python/test_topi_prng.py b/tests/python/topi/python/test_topi_prng.py
index 649e541..102e93f 100644
--- a/tests/python/topi/python/test_topi_prng.py
+++ b/tests/python/topi/python/test_topi_prng.py
@@ -87,9 +87,9 @@ def test_threefry_generate(target, ctx):
     gen = tvm.relay.random.threefry_key(0).data.asnumpy()
 
     # check that we can generate some data
-    a, rands = threefry_generate(target, ctx, gen, (100,))
+    a, rands = threefry_generate(target, ctx, gen, (2048,))
     assert (
-        rands.shape[0] == 100 and len(rands.shape) == 1
+        rands.shape[0] == 2048 and len(rands.shape) == 1
     ), "Output shape should match requested shape"
 
     # check that gen out does not equal input
@@ -99,13 +99,13 @@ def test_threefry_generate(target, ctx):
     gen = np.array(
         [0, 0, 0, 0, 0, 0, 0, 2 ** 64 - 2, 1 << 63, 0], dtype="uint64"
     )  # make counter large
-    a, rands = threefry_generate(target, ctx, gen, (100,))
+    a, rands = threefry_generate(target, ctx, gen, (2048,))
     assert gen[4] != a[4], "Overflow of counter should trigger path change"
-    assert a[7] == 100, "Overflow of counter should still update counter"
+    assert a[7] == 2048, "Overflow of counter should still update counter"
 
     # check generate with path at length limit
     gen = np.array([0, 0, 0, 0, 0, 0, 0, 2 ** 64 - 2, 0, 0], dtype="uint64")  # make counter large
-    a, rands = threefry_generate(target, ctx, gen, (100,))
+    a, rands = threefry_generate(target, ctx, gen, (2048,))
     assert (
         gen[0:4] != a[0:4]
     ).any(), "Overflowing counter with no space left in path should change state"