[tvm] branch main updated: [Arith] Support eq in detect_clip_bound (#13746)

2023-01-27 Thread wrongtest
This is an automated email from the ASF dual-hosted git repository.

wrongtest pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 0c2ab1bb42 [Arith] Support eq in detect_clip_bound (#13746)
0c2ab1bb42 is described below

commit 0c2ab1bb42fc960ba23416f3ae4068bece8ca2e2
Author: wrongtest 
AuthorDate: Sat Jan 28 13:42:53 2023 +0800

[Arith] Support eq in detect_clip_bound (#13746)

* Support eq in detect_clip_bound

* follow review suggestion
---
 src/arith/detect_linear_equation.cc| 38 +-
 .../unittest/test_arith_detect_clip_bound.py   | 13 
 2 files changed, 42 insertions(+), 9 deletions(-)
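For context, `detect_clip_bound` returns a flattened list of [min, max] bounds, one pair per variable, and with this change an equality condition now populates both entries of a pair. A minimal sketch of the new behavior, based on the unit test added below:

```python
import tvm
from tvm import te

a = te.var("a")
b = te.var("b")
# Result layout is [min_a, max_a, min_b, max_b].
# b == 3 now clips b to [3, 3]; a is left unconstrained here.
bounds = tvm.arith.detect_clip_bound(b == 3, [a, b])
```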

diff --git a/src/arith/detect_linear_equation.cc b/src/arith/detect_linear_equation.cc
index 8ea8f168b6..576ac1716e 100644
--- a/src/arith/detect_linear_equation.cc
+++ b/src/arith/detect_linear_equation.cc
@@ -189,6 +189,7 @@ bool DetectClipBound(const PrimExpr& cond,
   PostOrderVisit(cond, fvisit);
   if (flag != 1) return false;
   // canonical form: exp >= 0
+  bool is_eq = false;
   PrimExpr canonical;
   if (const LTNode* op = cond.as<LTNode>()) {
     if (!op->a.dtype().is_int()) return false;
@@ -202,6 +203,10 @@
   } else if (const GENode* op = cond.as<GENode>()) {
     if (!op->a.dtype().is_int()) return false;
     canonical = op->a - op->b;
+  } else if (const EQNode* op = cond.as<EQNode>()) {
+    if (!op->a.dtype().is_int()) return false;
+    canonical = op->a - op->b;
+    is_eq = true;
   } else {
     return false;
   }
@@ -210,25 +215,40 @@
   if (!LinearEqDetector(var).Detect(canonical, &ret)) return false;
   ret.coeff = analyzer.Simplify(ret.coeff);
   IntervalEntry& p = (*bmap)[var.get()];
+
+  Optional<PrimExpr> min_value;
+  Optional<PrimExpr> max_value;
   if (is_const_int(ret.coeff, 1)) {
     // var + shift >=0 -> var >= -shift
+    min_value = -ret.base;
+    if (is_eq) {
+      max_value = min_value;
+    }
+  } else if (is_const_int(ret.coeff, -1)) {
+    // -var + shift >=0 -> var <= shift
+    max_value = ret.base;
+    if (is_eq) {
+      min_value = max_value;
+    }
+  }
+  if (!min_value.defined() && !max_value.defined()) {
+    return false;
+  }
+  if (min_value.defined()) {
     if (p.min_value.defined()) {
-      p.min_value = max(p.min_value, -ret.base);
+      p.min_value = max(p.min_value, min_value.value());
     } else {
-      p.min_value = -ret.base;
+      p.min_value = min_value.value();
     }
-    return true;
   }
-  if (is_const_int(ret.coeff, -1)) {
-    // -var + shift >=0 -> var <= shift
+  if (max_value.defined()) {
     if (p.max_value.defined()) {
-      p.max_value = min(p.max_value, ret.base);
+      p.max_value = min(p.max_value, max_value.value());
     } else {
-      p.max_value = ret.base;
+      p.max_value = max_value.value();
     }
-    return true;
   }
-  return false;
+  return true;
 }
 
 template 
diff --git a/tests/python/unittest/test_arith_detect_clip_bound.py b/tests/python/unittest/test_arith_detect_clip_bound.py
index 0a9d75fcea..03fff11f77 100644
--- a/tests/python/unittest/test_arith_detect_clip_bound.py
+++ b/tests/python/unittest/test_arith_detect_clip_bound.py
@@ -39,5 +39,18 @@ def test_basic():
     tvm.testing.assert_prim_expr_equal(m[2], 4)
 
 
+def test_trivial_eq():
+    a = te.var("a")
+    b = te.var("b")
+    m = tvm.arith.detect_clip_bound(b == 3, [a, b])
+    tvm.testing.assert_prim_expr_equal(m[2], 3)
+    tvm.testing.assert_prim_expr_equal(m[3], 3)
+    m = tvm.arith.detect_clip_bound(tvm.tir.all(a == 4, b == 3), [a, b])
+    tvm.testing.assert_prim_expr_equal(m[0], 4)
+    tvm.testing.assert_prim_expr_equal(m[1], 4)
+    tvm.testing.assert_prim_expr_equal(m[2], 3)
+    tvm.testing.assert_prim_expr_equal(m[3], 3)
+
+
 if __name__ == "__main__":
     test_basic()



[GitHub] [tvm] wrongtest-intellif merged pull request #13746: [Arith] Support eq in detect_clip_bound

2023-01-27 Thread via GitHub


wrongtest-intellif merged PR #13746:
URL: https://github.com/apache/tvm/pull/13746


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[tvm] branch nightly updated (ec72ac6690 -> 1bc8cf80d0)

2023-01-27 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch nightly
in repository https://gitbox.apache.org/repos/asf/tvm.git


from ec72ac6690 [ROCM] Fixes compiling on ROCM 5 and accuracy on dense op 
(#13847)
 add 18b7dc1dd9 [MetaSchedule] Fix for RewriteLayout + AllocateConst when 
the rank of the rewritten weight doesn't change (#13851)
 add 56771a87d1 [CLML][RUNTIME] Enable more ops in CLML runtime (#13834)
 add 2bfdcbe07a [Relay] Convert negative axes to positive when importing 
ONNX Unsqueeze (#13846)
 add 16b19582a2 [ETHOSN] Apply FoldConstant before NPU partitioning (#13848)
 add 95fa22308b [Hexagon][CI] Updated sha for builder LLVM (#13418)
 add c2cc01910c [microTVM] Update tutorials (#13845)
 add 1bc8cf80d0 [ONNX] Support Bernoulli op on ONNX front-end (#13802)

No new revisions were added by this update.

Summary of changes:
 docker/install/ubuntu_install_hexagon.sh   |   5 +-
 docs/conf.py   |  12 +-
 docs/topic/microtvm/index.rst  |  11 +-
 gallery/how_to/work_with_microtvm/micro_aot.py |  17 +-
 .../how_to/work_with_microtvm/micro_autotune.py|  13 +-
 gallery/how_to/work_with_microtvm/micro_ethosu.py  |   6 +-
 .../how_to/work_with_microtvm/micro_mlperftiny.py  |   7 +-
 gallery/how_to/work_with_microtvm/micro_pytorch.py |  18 +-
 .../work_with_microtvm/micro_reference_vm.py   | 159 -
 gallery/how_to/work_with_microtvm/micro_tflite.py  |  72 +++-
 gallery/how_to/work_with_microtvm/micro_train.py   |   9 +-
 gallery/how_to/work_with_microtvm/micro_tvmc.sh|  43 ++---
 python/tvm/micro/testing/utils.py  |   8 +-
 python/tvm/relay/frontend/onnx.py  |  36 +++-
 python/tvm/relay/op/contrib/clml.py|  16 +-
 python/tvm/relay/op/contrib/ethosn.py  |   1 +
 src/relay/backend/te_compiler_cache.cc |  21 ++-
 src/runtime/contrib/clml/clml_runtime.cc   |  67 ++-
 tests/python/contrib/test_clml/test_ops.py | 102 +++
 tests/python/contrib/test_ethosn/test_addition.py  |  68 +--
 tests/python/contrib/test_ethosn/test_networks.py  |   2 +-
 tests/python/frontend/onnx/test_forward.py | 198 +
 .../test_meta_schedule_relay_integration.py|  74 
 tests/scripts/request_hook/request_hook.py |   2 +-
 24 files changed, 675 insertions(+), 292 deletions(-)
 delete mode 100644 gallery/how_to/work_with_microtvm/micro_reference_vm.py



[GitHub] [tvm] tvm-bot commented on pull request #13860: [topi] remove comment redundancy in resize.py

2023-01-27 Thread via GitHub


tvm-bot commented on PR #13860:
URL: https://github.com/apache/tvm/pull/13860#issuecomment-1407281726

   
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-ing them in a comment.

   * No users to tag found in teams: `topi` See [#10317](https://github.com/apache/tvm/issues/10317) for details

   Generated by [tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)





[GitHub] [tvm] terrance-liang opened a new pull request, #13860: [topi] remove comment redundancy in resize.py

2023-01-27 Thread via GitHub


terrance-liang opened a new pull request, #13860:
URL: https://github.com/apache/tvm/pull/13860

   @mbrookhart 
   Hi, it seems redundant, so I removed the duplicated one.





[GitHub] [tvm] tvm-bot commented on pull request #13859: [TVMScript] Connect `assert_structural_equal` with new TVMScript printer

2023-01-27 Thread via GitHub


tvm-bot commented on PR #13859:
URL: https://github.com/apache/tvm/pull/13859#issuecomment-1407263082

   
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-ing them in a comment.

   * cc @junrushao See [#10317](https://github.com/apache/tvm/issues/10317) for details

   Generated by [tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)





[GitHub] [tvm] cyx-6 opened a new pull request, #13859: [TVMScript] Connect `assert_structural_equal` with new TVMScript printer

2023-01-27 Thread via GitHub


cyx-6 opened a new pull request, #13859:
URL: https://github.com/apache/tvm/pull/13859

   This PR refactors the output of `assert_structural_equal`. Unlike the old version, which directly printed only the mismatching nodes, the improved version prints the whole scripts with the mismatching nodes underlined. For example, given the following functions
   
   ```python
   @T.prim_func
   def func1(a: T.handle, b: T.handle):
 A = T.match_buffer(a, (128, 128))
 B = T.match_buffer(b, (128, 128))
   
   @T.prim_func
   def func2(a: T.handle, b: T.handle):
 A = T.match_buffer(a, (128, 128))
 B = T.match_buffer(b, (128, 256))
   ```
   
   the log of `assert_structural_equal(func1, func2)` will be like
   
   ```python
   ValueError: StructuralEqual check failed, caused by lhs:
   # from tvm.script import tir as T
   
   @T.prim_func
   def main(a: T.handle, b: T.handle):
 A = T.match_buffer(a, (128, 128))
 B = T.match_buffer(b, (128, 128))
 ^^^
 T.evaluate(0)
   and rhs:
   # from tvm.script import tir as T
   
   @T.prim_func
   def main(a: T.handle, b: T.handle):
 A = T.match_buffer(a, (128, 128))
 B = T.match_buffer(b, (128, 256))
 ^^^
 T.evaluate(0)
   ```
   
   instead of
   
   ```python
   ValueError: StructuralEqual check failed, caused by lhs:
   128
   and rhs:
   256
   ```
   
   which is sometimes hard to read.
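
   For reference, a minimal way to trigger the improved output, assuming the `func1`/`func2` definitions from the example above:

   ```python
   import tvm

   # Raises ValueError carrying the script-level diff shown above.
   tvm.ir.assert_structural_equal(func1, func2)
   ```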





[GitHub] [tvm] tmoreau89 commented on a diff in pull request #13844: [Hexagon] Software cache management for DMA with cache bypass

2023-01-27 Thread via GitHub


tmoreau89 commented on code in PR #13844:
URL: https://github.com/apache/tvm/pull/13844#discussion_r1089571055


##
src/tir/transforms/lower_async_dma.cc:
##
@@ -192,19 +198,33 @@ class AsyncDMALowerer : public StmtExprMutator {
   // save queue ID for inspection in `wait` transform
   queue_ids_.insert(queue_id);
 
-  return Evaluate(Call(DataType::Int(32), builtin::dma_copy(),
-                       {queue_id,
-                        Call(DataType::Handle(), builtin::address_of(),
-                             {BufferLoad(bufferstorenode->buffer, store_index)}),
-                        Call(DataType::Handle(), builtin::address_of(),
-                             {BufferLoad(bufferloadnode->buffer, load_index)}),
-                        for_loop->extent * bufferloadnode->dtype.bytes(), dma_bypass_cache_}));
+  auto call_dma_copy =
+      Evaluate(Call(DataType::Int(32), builtin::dma_copy(),
+                    {queue_id,
+                     Call(DataType::Handle(), builtin::address_of(),
+                          {BufferLoad(bufferstorenode->buffer, store_index)}),
+                     Call(DataType::Handle(), builtin::address_of(),
+                          {BufferLoad(bufferloadnode->buffer, load_index)}),
+                     for_loop->extent * bufferloadnode->dtype.bytes(), dma_bypass_cache_}));
+
+  // if the buffer we are about to DMA was modified by the primfunc
+  // then we need to flush the buffer from the cache prior to the DMA

Review Comment:
   Agreed that it's probably better to perform an invalidation vs. a flush depending on the directionality of the data transfer:
   
   - Upon DMA “read”, you have to flush before.
   
   - Upon DMA “write”, you have to invalidate after.
   
   A helpful example could be what was done for VTA when it was implemented with non-coherent DMA, as in here: https://github.com/apache/tvm/blob/bf0607bd317a7db8eba1a91c12170934c1ad201f/vta/runtime/runtime.cc#L1319-L1329
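
   A rough sketch of that ordering in Python-style pseudocode; the `cache_flush`, `cache_invalidate`, and `dma_copy` names are placeholders, not TVM or Hexagon APIs:

   ```python
   # Hypothetical helper names; illustrates ordering only, assuming a
   # non-coherent DMA engine as described above.
   def dma_read(dst, src):
       cache_flush(src)        # write dirty CPU cache lines back before the engine reads
       dma_copy(dst, src)

   def dma_write(dst, src):
       dma_copy(dst, src)
       cache_invalidate(dst)   # drop stale cache lines after the engine writes
   ```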






[GitHub] [tvm] mehrdadh commented on pull request #13818: [microTVM]Update Zephyr version and Zephyr SDK version

2023-01-27 Thread via GitHub


mehrdadh commented on PR #13818:
URL: https://github.com/apache/tvm/pull/13818#issuecomment-1407206723

   @tvm-bot rerun





[tvm] branch main updated: [ONNX] Support Bernoulli op on ONNX front-end (#13802)

2023-01-27 Thread andrewzhaoluo
This is an automated email from the ASF dual-hosted git repository.

andrewzhaoluo pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 1bc8cf80d0 [ONNX] Support Bernoulli op on ONNX front-end (#13802)
1bc8cf80d0 is described below

commit 1bc8cf80d0a55dda3e0102d4233d8459b31bce97
Author: Valery Chernov 
AuthorDate: Sat Jan 28 03:22:43 2023 +0400

[ONNX] Support Bernoulli op on ONNX front-end (#13802)

* add Bernoulli converter for onnx front-end

* test for bernoulli was implemented

* fix tuple split. update test for stability with different seed on ort and 
tvm sides

* check that output values are 0 or 1

* remove std check as meaningless

* calculate theoretical mean and compare with result, remove ort for 
comparison. clean code

* add customized input as arg

* add test with input sequence of 0 and 1

* pylint fix

* fix inputs-shape issue

* add binomial test

* fix input type

* small fix

* update 0-1 check

* init arrays in numpy style

* check result determinism for fixed seed

* fix inputs issue

* modify binomial test

* pylint fix

-

Co-authored-by: Valery Chernov 
---
 python/tvm/relay/frontend/onnx.py  |  31 ++
 tests/python/frontend/onnx/test_forward.py | 159 +
 2 files changed, 190 insertions(+)
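The converter below implements Bernoulli sampling by drawing uniform random numbers and comparing them against the input probabilities; a minimal NumPy sketch of that identity (separate from the converter code itself):

```python
import numpy as np

def bernoulli_via_uniform(probs, seed=0):
    # u ~ U[0, 1) elementwise; (u < p) is then a Bernoulli(p) sample.
    rng = np.random.default_rng(seed)
    u = rng.random(size=probs.shape)
    return (u < probs).astype("int32")

print(bernoulli_via_uniform(np.array([0.1, 0.5, 0.9], dtype="float32")))
```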

diff --git a/python/tvm/relay/frontend/onnx.py b/python/tvm/relay/frontend/onnx.py
index ed99176282..7b35d4a481 100644
--- a/python/tvm/relay/frontend/onnx.py
+++ b/python/tvm/relay/frontend/onnx.py
@@ -5669,6 +5669,36 @@ class GridSample(OnnxOpConverter):
 )
 
 
+class Bernoulli(OnnxOpConverter):
+    """Operator converter for Bernoulli"""
+
+    @classmethod
+    def _impl_v15(cls, inputs, attr, params):
+        in_dtype = infer_type(inputs[0]).checked_type.dtype
+        assert in_dtype in [
+            "float32",
+            "float64",
+        ], "Only float input tensor is currently supported."
+        # The data type for the elements of the output tensor.
+        # if not specified, we will use the data type of the input tensor
+        out_dtype = attr.get("dtype", None)
+        if out_dtype is None:
+            out_dtype = in_dtype
+        else:
+            out_dtype = get_type(out_dtype)
+
+        seed = attr.get("seed", None)
+        if seed is None:
+            seed = np.random.randint(1e6)
+        else:
+            seed = int(seed)
+
+        key = _random.threefry_key(seed)
+        inter_outputs = _op.random.uniform(key, infer_shape(inputs[0]), in_dtype)
+        _, uniform_nums = _expr.TupleWrapper(inter_outputs, 2)
+        return _op.cast(_op.less(uniform_nums, inputs[0]), out_dtype)
+
+
 class RandomNormal(OnnxOpConverter):
     """Operator converter for random_normal"""
 
@@ -6436,6 +6466,7 @@ def _get_convert_map(opset):
         "QLinearGlobalAveragePool": QLinearGlobalAveragePool.get_converter(opset),
         "QLinearLeakyRelu": QLinearLeakyRelu.get_converter(opset),
         # Random number generation.
+        "Bernoulli": Bernoulli.get_converter(opset),
         "RandomNormal": RandomNormal.get_converter(opset),
         "RandomNormalLike": RandomNormalLike.get_converter(opset),
         "RandomUniform": RandomUniform.get_converter(opset),
diff --git a/tests/python/frontend/onnx/test_forward.py b/tests/python/frontend/onnx/test_forward.py
index ebb6821901..4b17cfbbb3 100644
--- a/tests/python/frontend/onnx/test_forward.py
+++ b/tests/python/frontend/onnx/test_forward.py
@@ -6914,6 +6914,165 @@ def test_qlinearsigmoid(target, dev):
     verify_qlinearsigmoid([])
 
 
+@tvm.testing.parametrize_targets("llvm")
+def test_random_bernoulli(target, dev):
+    """test_random_bernoulli"""
+
+    def _get_tvm_output(
+        inputs,
+        out_dtype="int32",
+        seed=None,
+        target=target,
+        dev=dev,
+        use_vm=False,
+        freeze_params=False,
+    ):
+        def get_bernoulli_model(shape, in_dtype="float32", out_dtype="int32", seed=None):
+            onnx_itype = mapping.NP_TYPE_TO_TENSOR_TYPE[np.dtype(in_dtype)]
+            onnx_otype = mapping.NP_TYPE_TO_TENSOR_TYPE[np.dtype(out_dtype)]
+            node = helper.make_node(
+                "Bernoulli",
+                ["input"],
+                ["output"],
+            )
+            dtype_attr = helper.make_attribute("dtype", onnx_otype)
+            node.attribute.append(dtype_attr)
+            if seed is not None:
+                seed_attr = helper.make_attribute("seed", float(seed))
+                node.attribute.append(seed_attr)
+
+            graph = helper.make_graph(
+                [node],
+                "random_bernoulli_test",
+                inputs=[helper.make_tensor_value_info("input", onnx_itype, list(shape))],

[GitHub] [tvm] AndrewZhaoLuo merged pull request #13802: [ONNX] Support Bernoulli op on ONNX front-end

2023-01-27 Thread via GitHub


AndrewZhaoLuo merged PR #13802:
URL: https://github.com/apache/tvm/pull/13802





[GitHub] [tvm] tvm-bot commented on pull request #13858: [microTVM]Refactor test and add skip to current failing tests/boards

2023-01-27 Thread via GitHub


tvm-bot commented on PR #13858:
URL: https://github.com/apache/tvm/pull/13858#issuecomment-1407174657

   
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-ing them in a comment.

   * cc @alanmacd, @gromero, @leandron, @mkatanbaf See [#10317](https://github.com/apache/tvm/issues/10317) for details

   Generated by [tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)





[GitHub] [tvm] mehrdadh opened a new pull request, #13858: [microTVM]Refactor test and add skip to current failing tests/boards

2023-01-27 Thread via GitHub


mehrdadh opened a new pull request, #13858:
URL: https://github.com/apache/tvm/pull/13858

   This PR refactors some of the Zephyr tests and adds skip for each test/board 
that is currently failing.





[GitHub] [tvm] tvm-bot commented on pull request #13857: [microTVM] Custom IDE Tutorial

2023-01-27 Thread via GitHub


tvm-bot commented on PR #13857:
URL: https://github.com/apache/tvm/pull/13857#issuecomment-1407156902

   
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-ing them in a comment.

   Generated by [tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)
[tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)





[GitHub] [tvm] mkatanbaf opened a new pull request, #13857: [microTVM] Custom IDE Tutorial

2023-01-27 Thread via GitHub


mkatanbaf opened a new pull request, #13857:
URL: https://github.com/apache/tvm/pull/13857

   Adds "Bring microTVM to your own development environment" to microTVM 
tutorials. This tutorial describes the steps required to integrate a model 
compiled with microTVM into a custom development environment. We use STM32Cube 
IDE, the VWW model and the nucleo_l4r5zi board.
   





[tvm] branch main updated: [microTVM] Update tutorials (#13845)

2023-01-27 Thread mehrdadh
This is an automated email from the ASF dual-hosted git repository.

mehrdadh pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new c2cc01910c [microTVM] Update tutorials (#13845)
c2cc01910c is described below

commit c2cc01910c1b88ad2b593a138c8b984385d47db8
Author: Mehrdad Hessar 
AuthorDate: Fri Jan 27 14:13:21 2023 -0800

[microTVM] Update tutorials (#13845)

This PR updates microTVM tutorials to use updated APIs.
It also adds an ordering to the tutorials that is useful for first-time users.
The RVM tutorial is also removed, as it is no longer supported.
---
 docs/conf.py   |  12 +-
 docs/topic/microtvm/index.rst  |  11 +-
 gallery/how_to/work_with_microtvm/micro_aot.py |  17 +--
 .../how_to/work_with_microtvm/micro_autotune.py|  13 +-
 gallery/how_to/work_with_microtvm/micro_ethosu.py  |   6 +-
 .../how_to/work_with_microtvm/micro_mlperftiny.py  |   7 +-
 gallery/how_to/work_with_microtvm/micro_pytorch.py |  18 +--
 .../work_with_microtvm/micro_reference_vm.py   | 159 -
 gallery/how_to/work_with_microtvm/micro_tflite.py  |  72 --
 gallery/how_to/work_with_microtvm/micro_train.py   |   9 +-
 gallery/how_to/work_with_microtvm/micro_tvmc.sh|  43 +++---
 python/tvm/micro/testing/utils.py  |   8 +-
 tests/scripts/request_hook/request_hook.py |   2 +-
 13 files changed, 105 insertions(+), 272 deletions(-)

diff --git a/docs/conf.py b/docs/conf.py
index eb2b39d4b1..8d24f05b9b 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -511,15 +511,15 @@ within_subsection_order = {
         "use_pass_instrument.py",
         "bring_your_own_datatypes.py",
     ],
-    "micro": [
-        "micro_train.py",
-        "micro_autotune.py",
-        "micro_reference_vm.py",
-        "micro_tflite.py",
-        "micro_ethosu.py",
+    "work_with_microtvm": [
         "micro_tvmc.py",
+        "micro_tflite.py",
         "micro_aot.py",
         "micro_pytorch.py",
+        "micro_train.py",
+        "micro_autotune.py",
+        "micro_ethosu.py",
+        "micro_mlperftiny.py",
     ],
 }
 
diff --git a/docs/topic/microtvm/index.rst b/docs/topic/microtvm/index.rst
index ebcadb3442..4dd4ab5d51 100644
--- a/docs/topic/microtvm/index.rst
+++ b/docs/topic/microtvm/index.rst
@@ -50,13 +50,12 @@ Getting Started with microTVM
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Before working with microTVM, we recommend you have a supported development board. Then, follow these
-tutorials to get started with microTVM:
+tutorials to get started with microTVM. Tutorials are in the order that could help developers to learn
+more as they follow through them. Here is a list of tutorials that you can start with:
 
-1. :ref:`Start the microTVM Reference VM `. The microTVM tutorials
-   depend on Zephyr and on a compiler toolchain for your hardware. The reference VM is a convenient
-   way to install those dependencies.
-2. Try the :ref:`microTVM with TFLite Tutorial `.
-3. Try running a more complex `CIFAR10-CNN model `_.
+1. Try :ref:`microTVM CLI Tool `.
+2. Try the :ref:`microTVM TFLite Tutorial `.
+3. Try running a more complex tutorial: :ref:`Creating Your MLPerfTiny Submission with microTVM `.
 
 
 How microTVM Works
diff --git a/gallery/how_to/work_with_microtvm/micro_aot.py 
b/gallery/how_to/work_with_microtvm/micro_aot.py
index c1b29ba5c5..f31ffa1570 100644
--- a/gallery/how_to/work_with_microtvm/micro_aot.py
+++ b/gallery/how_to/work_with_microtvm/micro_aot.py
@@ -15,10 +15,10 @@
 # specific language governing permissions and limitations
 # under the License.
 """
-.. _tutorial-micro-AoT:
+.. _tutorial-micro-aot:
 
-microTVM Host-Driven AoT
-========================
+3. microTVM Ahead-of-Time (AOT) Compilation
+===========================================
 **Authors**:
 `Mehrdad Hessar `_,
 `Alan MacDonald `_
@@ -59,6 +59,7 @@ import json
 
 import tvm
 from tvm import relay
+import tvm.micro.testing
 from tvm.relay.backend import Executor, Runtime
 from tvm.contrib.download import download_testdata
 
@@ -102,8 +103,7 @@ relay_mod, params = relay.frontend.from_tflite(
 # using AOT host driven executor. We use the host micro target which is for 
running a model
 # on x86 CPU using CRT runtime or running a model with Zephyr platform on 
qemu_x86 simulator
 # board. In the case of a physical microcontroller, we get the target model 
for the physical
-# board (E.g. nucleo_l4r5zi) and pass it to `tvm.target.target.micro` to 
create a full
-# micro target.
+# board (E.g. nucleo_l4r5zi) and change `BOARD` to supported Zephyr board.
 #
 
 # Use the C runtime (crt) and enable static linking by setting system-lib to 
True
@@ -111,18 +111,15 @@ RUNTIME = Runtime("crt", {"system-lib": True})
 

[GitHub] [tvm] mehrdadh merged pull request #13845: [microTVM] Update tutorials

2023-01-27 Thread via GitHub


mehrdadh merged PR #13845:
URL: https://github.com/apache/tvm/pull/13845





[GitHub] [tvm] mehrdadh opened a new issue, #13856: [Bug] CMSIS-NN BYOC fails with Zephyr 3.2

2023-01-27 Thread via GitHub


mehrdadh opened a new issue, #13856:
URL: https://github.com/apache/tvm/issues/13856

   **What is the error?**
   
   ```
   /opt/arm/ethosu/cmsis/CMSIS-NN/Source/SoftmaxFunctions/arm_softmax_s8.c: In 
function 'arm_exp_on_negative_values_mve_32x4':
   
/opt/arm/ethosu/cmsis/CMSIS-NN/Source/SoftmaxFunctions/arm_softmax_s8.c:74:1: 
internal compiler error: in trunc_int_for_mode, at explow.cc:59
  74 | }
 | ^
   0x169d629 internal_error(char const*, ...)
   ???:0
   0x667b60 fancy_abort(char const*, int, char const*)
   ???:0
   0x895993 trunc_int_for_mode(long, machine_mode)
   ???:0
   0x8959b8 trunc_int_for_mode(poly_int<1u, long>, machine_mode)
   ???:0
   0x88a1c8 gen_int_mode(poly_int<1u, long>, machine_mode)
   ???:0
   
   ...
   
   at_mult_nt_t_s8.c
   In file included from 
/opt/zephyrproject/modules/hal/cmsis/CMSIS/Core/Include/cmsis_compiler.h:54,
from 
/opt/arm/ethosu/cmsis/CMSIS-NN/Include/arm_nn_math_types.h:90,
from 
/opt/arm/ethosu/cmsis/CMSIS-NN/Include/arm_nnsupportfunctions.h:33,
from 
/opt/arm/ethosu/cmsis/CMSIS-NN/Source/NNSupportFunctions/arm_nn_mat_mult_nt_t_s8.c:31:
   
/opt/arm/ethosu/cmsis/CMSIS-NN/Source/NNSupportFunctions/arm_nn_mat_mult_nt_t_s8.c:
 In function 'arm_nn_mat_mult_nt_t_s8':
   /opt/zephyrproject/modules/hal/cmsis/CMSIS/Core/Include/cmsis_gcc.h:41:50: 
error: 'asm' operand has impossible constraints
  41 |   #define __ASM  __asm
 |  ^
   
/opt/arm/ethosu/cmsis/CMSIS-NN/Source/NNSupportFunctions/arm_nn_mat_mult_nt_t_s8.c:102:13:
 note: in expansion of macro '__ASM'
 102 | __ASM volatile("   vldrb.8 q0, [%[col]], #16 
\n"
 | ^
   ninja: build stopped: subcommand failed.
   
   ```
   
   **How to reproduce?**
   Use ci_cortexm docker image:
   ```bash
   cd apps/microtvm/zephyr_cmsisnn
   ./run_demo.sh
   ```
   
   **Environment**
   Zephyr 3.2
   Zephyr-SDK 0.15.2
   CMSIS SHA:  51263182d16c92649a48144ba56c0945f9fce60e
   CMSIS NN SHA:  v4.0.0
   
   





[GitHub] [tvm] Mousius commented on a diff in pull request #13643: [CMSIS-NN] Add a runtime error message

2023-01-27 Thread via GitHub


Mousius commented on code in PR #13643:
URL: https://github.com/apache/tvm/pull/13643#discussion_r1089394773


##
tests/python/contrib/test_cmsisnn/test_last_error.py:
##
@@ -0,0 +1,119 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""CMSIS-NN integration tests: test if the model builds in case 
debug_last_error is enabled"""
+
+import numpy as np
+import pytest
+
+import tvm
+from tvm import relay
+from tvm.relay.op.contrib import cmsisnn
+from tvm.testing.aot import get_dtype_range, generate_ref_data, AOTTestModel, 
compile_and_run
+
+
+from .utils import (
+skip_if_no_reference_system,
+make_module,
+make_qnn_relu,
+assert_partitioned_function,
+create_test_runner,
+)
+
+
+def generate_variable(name, dtype="int8"):
+return relay.var(name, shape=(1, 16, 16, 3), dtype=dtype)
+
+
+def make_model(
+op,
+input_0,
+input_1,
+input_0_scale,
+input_0_zero_point,
+input_1_scale,
+input_1_zero_point,
+relu_type="NONE",
+out_scale=1.0 / 256,
+out_zero_point=-128,
+):
+"""Create a Relay Function / network model"""
+binary_op = op(
+input_0,
+input_1,
+relay.const(input_0_scale, "float32"),
+relay.const(input_0_zero_point, "int32"),
+relay.const(input_1_scale, "float32"),
+relay.const(input_1_zero_point, "int32"),
+relay.const(out_scale, "float32"),
+relay.const(out_zero_point, "int32"),
+)
+return make_qnn_relu(binary_op, relu_type, out_scale, out_zero_point, 
"int8")
+
+
+@skip_if_no_reference_system
+@tvm.testing.requires_cmsisnn
+@pytest.mark.parametrize("debug_last_error", [True, False])
+def test_last_error(debug_last_error):

Review Comment:
   This checks that it compiles with last error enabled, but doesn't actually check the last error, which is the most important functionality in this patch. Can you check that the error you're expecting is printed?






[tvm] branch main updated: [Hexagon][CI] Updated sha for builder LLVM (#13418)

2023-01-27 Thread mehrdadh
This is an automated email from the ASF dual-hosted git repository.

mehrdadh pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 95fa22308b [Hexagon][CI] Updated sha for builder LLVM (#13418)
95fa22308b is described below

commit 95fa22308bd08e583dadb3ad429aa768ecce85c2
Author: joshherr-quic <95375797+joshherr-q...@users.noreply.github.com>
AuthorDate: Fri Jan 27 12:56:04 2023 -0600

[Hexagon][CI] Updated sha for builder LLVM (#13418)

Updated sha to deal with some codegen issues that came up with the last 
version.
---
 docker/install/ubuntu_install_hexagon.sh | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/docker/install/ubuntu_install_hexagon.sh 
b/docker/install/ubuntu_install_hexagon.sh
index 722cfaa40c..57807398a7 100755
--- a/docker/install/ubuntu_install_hexagon.sh
+++ b/docker/install/ubuntu_install_hexagon.sh
@@ -21,7 +21,7 @@ set -o pipefail
 
 # Install LLVM/clang
 CLANG_LLVM_HOME=/opt/clang-llvm
-LLVM_SHA=361a27c155ec8b222e3318488a208c0eb39624c8
+LLVM_SHA=a9871772a8b13c1240a95a84a3327f84bb67dddc
 
 mkdir llvm-hexagon
 pushd llvm-hexagon
@@ -37,8 +37,7 @@ cmake \
   -DCMAKE_INSTALL_PREFIX=${CLANG_LLVM_HOME} \
   -DLLVM_ENABLE_ASSERTIONS=ON \
   -DLLVM_TARGETS_TO_BUILD:STRING="Hexagon;X86" \
-  -DLLVM_ENABLE_PROJECTS:STRING="clang;llvm" \
-  -DTARGET_TRIPLE=x86_64-unknown-linux-gnu \
+  -DLLVM_ENABLE_PROJECTS:STRING="llvm" \
   -DLLVM_DEFAULT_TARGET_TRIPLE=x86_64-unknown-linux-gnu \
   ../llvm
 ninja install



[GitHub] [tvm] mehrdadh merged pull request #13418: [Hexagon][CI] Updated sha for builder LLVM

2023-01-27 Thread via GitHub


mehrdadh merged PR #13418:
URL: https://github.com/apache/tvm/pull/13418


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] mehrdadh commented on issue #13501: [microTVM] Change apps/microtvm/zephyr_cmsisnn to a microTVM tutorial

2023-01-27 Thread via GitHub


mehrdadh commented on issue #13501:
URL: https://github.com/apache/tvm/issues/13501#issuecomment-1406941896

   @Mousius @gromero I agree, it makes sense to keep this for the standalone deployment story. I think this has the potential to be more generic and not just specific to CMSIS. We could change this to be a standalone deployment of microTVM with Zephyr.





[GitHub] [tvm] kparzysz-quic commented on pull request #13418: [Hexagon][CI] Updated sha for builder LLVM

2023-01-27 Thread via GitHub


kparzysz-quic commented on PR #13418:
URL: https://github.com/apache/tvm/pull/13418#issuecomment-1406940470

   The CI tests have passed with a docker image built from this.





[GitHub] [tvm] guberti commented on a diff in pull request #13815: [CMSIS-NN] Reduction in code size of AOT test runner binary

2023-01-27 Thread via GitHub


guberti commented on code in PR #13815:
URL: https://github.com/apache/tvm/pull/13815#discussion_r1088234086


##
python/tvm/topi/arm_cpu/mprofile/dsp/micro_kernel/multi_channel_convolve.py:
##
@@ -179,20 +179,26 @@ def _dual_int16_channel_convolve_impl(_tensor_h, tensor_w, channels, kernel_h, k
     extern "C"
 #endif
 int32_t {_get_func_name("int16", tensor_w, channels, kernel_h, kernel_w, suffix)}(
-    uint32_t *out,
-    uint32_t *tensor,
-    uint32_t *kernel) {{
+    int32_t *out,
+    int16_t *tensor,
+    int16_t *kernel) {{
 
-  uint32_t sum_c0 = 0;
-  uint32_t sum_c1 = 0;
+  int32_t sum_c0 = 0;
+  int32_t sum_c1 = 0;
+
+  int32_t kernel_i32[{kernel_h} * {kernel_w}];
+  memcpy(kernel_i32, kernel, {kernel_h} * {kernel_w} * sizeof(int32_t));

Review Comment:
   I imagine these lines are removed by the compiler (and that the `memcpy` 
doesn't actually run), but I would still like to confirm there is no 
performance regression. I'd also love a comment making it clearer why the 
`memcpy` doesn't slow us down like you'd expect.






[GitHub] [tvm] balaram-cadence opened a new issue, #13855: [Bug] [Frontend][Tensorflow] tf.where with broadcast condition fails to import due to Incompatible broadcast type

2023-01-27 Thread via GitHub


balaram-cadence opened a new issue, #13855:
URL: https://github.com/apache/tvm/issues/13855

   The test case below fails to import in tvm:
   
   ```
   def test_forward_where_with_broadcast_cond():
   t1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0]).astype("float32")
   t2 = np.array([2.0, 4.0, 1.0, 3.0, 5.0]).astype("float32")
   x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [9.0, 
10.0]]).astype("float32")
   y = np.array([[10.0, 9.0], [8.0, 7.0], [6.0, 5.0], [4.0, 3.0], [2.0, 
1.0]]).astype("float32")
   
   with tf.Graph().as_default():
   in1 = tf.placeholder(shape=(5), dtype = "float32", name="in1")
   in2 = tf.placeholder(shape=(5), dtype = "float32", name="in2")
   condition = math_ops.less(in1, in2, name="less")
   lhs = tf.placeholder(shape=(5,2), dtype = "float32", name="x")
   rhs = tf.placeholder(shape=(5,2), dtype = "float32", name="y")
   out = tf.where(condition, lhs, rhs)
   compare_tf_with_tvm([t1, t2, x, y], ["in1:0", "in2:0", "x:0", 
"y:0"], out.name)
   ```
   
   ### Expected behavior
   Should be identical to tensorflow output:
   ```
   [array([[1., 2.],
  [3., 4.],
  [6., 5.],
  [4., 3.],
  [2., 1.]], dtype=float32)]
   ```
   
   ### Actual behavior
   Failed with this error:
   
   `Incompatible broadcast type TensorType([5], bool) and TensorType([5, 2], 
float32)`
   
   ### Environment
   ```
   Linux
   LSB Version:
:core-4.1-amd64:core-4.1-ia32:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-ia32:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-ia32:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
   Distributor ID: RedHatEnterpriseWorkstation
   Description:Red Hat Enterprise Linux Workstation release 7.9 (Maipo)
   Release:7.9
   Codename:   Maipo
   Name: apache-tvm
   Version: 0.10.0
   Home-page: https://tlcpack.ai
   Author: Apache TVM
   Author-email: None
   License: Apache
   ```
   
   ### Steps to reproduce
   Add above testcase to tests/python/frontend/tensorflow/test_forward.py and 
run
   `python -m pytest tests/python/frontend/tensorflow/test_forward.py -k 
test_forward_where_with_broadcast_cond`
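
   For reference, the expected row-selection semantics of a rank-1 condition can be reproduced in NumPy by broadcasting the condition over the trailing axis (a sketch of the expected behavior, not a proposed patch):

   ```python
   import numpy as np

   cond = np.array([1.0, 2.0, 3.0, 4.0, 5.0]) < np.array([2.0, 4.0, 1.0, 3.0, 5.0])
   x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [9.0, 10.0]])
   y = np.array([[10.0, 9.0], [8.0, 7.0], [6.0, 5.0], [4.0, 3.0], [2.0, 1.0]])
   # cond[:, None] broadcasts along the last axis, selecting whole rows,
   # which matches the TensorFlow output quoted above.
   print(np.where(cond[:, None], x, y))
   ```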
   
   ### Triage
   
   * needs-triage
   * frontend:tensorflow 





[GitHub] [tvm] arina-grovety commented on a diff in pull request #13212: [TVMC][microNPU] tvmc option for printing which operators are offloaded to Ethos-U

2023-01-27 Thread via GitHub


arina-grovety commented on code in PR #13212:
URL: https://github.com/apache/tvm/pull/13212#discussion_r1089071751


##
python/tvm/driver/tvmc/compiler.py:
##
@@ -459,3 +493,65 @@ def save_dumps(module_name: str, dumps: Dict[str, str], dump_root: str = "."):
         dump_name = module_name + "." + dump_format
         with open(Path(dump_root, dump_name), "w") as f:
             f.write(dumps[dump_format])
+
+
+def dump_operation_offloads(mod: tvm.ir.IRModule, initial_relay_astext: list, dump_path: str):
+    """This helper function forms a line-by-line output of the initial Relay lines,
+    indicating which operations are ported to which backend,
+    indicating the composite that includes those operations e.g
+    'device1<- device2.qnn_conv2d'
+    'device1<-%0 = qnn.conv2d(%tfl.quantize, %v_param_1, ...'
+    'device1<-%1 = nn.bias_add(%0, %v_param_2, axis=3);'
+    'device1<-%2 = qnn.requantize(%1, meta[relay.Constant]...'
+    'device2<- device2.reshape'
+    'device2<-%3 = reshape(%206, newshape=[1, 1001]);'
+
+    Parameters
+    ----------
+    mod : tvm.ir.IRModule
+        The IRModule that gets generated from a relay frontend.
+    initial_relay_astext : list

Review Comment:
   Hello @ashutosh-arm,
   sorry, there was an outdated comment.
   
   Now we pass the initial Relay as Relay IR itself, then use the "annotate" parameter of the [astext()](https://github.com/apache/tvm/pull/13212/files#diff-5c75738c5cc53888ceb0c8b4833a013631295fb70b2a3a1178f8523271052671R536) function to add the desired annotations to the generated text, and then parse our annotations from the formed text.
   
   I will fix the comment string in the update to the PR.






[GitHub] [tvm] NicolaLancellotti commented on a diff in pull request #13815: [CMSIS-NN] Reduction in code size of AOT test runner binary

2023-01-27 Thread via GitHub


NicolaLancellotti commented on code in PR #13815:
URL: https://github.com/apache/tvm/pull/13815#discussion_r1089042287


##
python/tvm/topi/arm_cpu/mprofile/dsp/micro_kernel/multi_channel_convolve.py:
##
@@ -179,20 +179,26 @@ def _dual_int16_channel_convolve_impl(_tensor_h, tensor_w, channels, kernel_h, k
     extern "C"
 #endif
 int32_t {_get_func_name("int16", tensor_w, channels, kernel_h, kernel_w, suffix)}(
-    uint32_t *out,
-    uint32_t *tensor,
-    uint32_t *kernel) {{
+    int32_t *out,
+    int16_t *tensor,
+    int16_t *kernel) {{
 
-  uint32_t sum_c0 = 0;
-  uint32_t sum_c1 = 0;
+  int32_t sum_c0 = 0;
+  int32_t sum_c1 = 0;
+
+  int32_t kernel_i32[{kernel_h} * {kernel_w}];
+  memcpy(kernel_i32, kernel, {kernel_h} * {kernel_w} * sizeof(int32_t));

Review Comment:
   Hi Gavin, this is a functional change to remove the type-punning error, so performance can be worse.
   But it is not right to rely on undefined behaviour to get better performance.
   Do you know any other way to address this undefined behaviour without the `memcpy`s?






[GitHub] [tvm] lhutton1 commented on a diff in pull request #13212: [TVMC][microNPU] tvmc option for printing which operators are offloaded to Ethos-U

2023-01-27 Thread via GitHub


lhutton1 commented on code in PR #13212:
URL: https://github.com/apache/tvm/pull/13212#discussion_r1086482503


##
python/tvm/driver/tvmc/compiler.py:
##
@@ -459,3 +489,79 @@ def save_dumps(module_name: str, dumps: Dict[str, str], dump_root: str = "."):
         dump_name = module_name + "." + dump_format
         with open(Path(dump_root, dump_name), "w") as f:
             f.write(dumps[dump_format])
+
+
+def dump_operation_offloads(mod: tvm.ir.IRModule, initial_mod: tvm.ir.IRModule, dump_path: str):
+    """This helper function forms a line-by-line output of the initial Relay lines,
+    indicating which operations are ported to which target,
+    and indicating the composite that includes those operations;
+    the 'generic' target refers to operations uploaded to the host, e.g
+    'target1<- target2.qnn_conv2d'

Review Comment:
   nit: `target2.qnn_conv2d` -> `target1.qnn_conv2d`



##
python/tvm/relay/analysis/operations_distribution.py:
##
@@ -0,0 +1,86 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Utilities that enable analyze Relay and get mappings for unique
+input module layer name to the tuple of compiler and operation name"""
+import tvm
+from tvm import relay
+from tvm.relay.expr_functor import ExprVisitor
+
+
+class AnalyzeOperationsDistribution(ExprVisitor):

Review Comment:
   Thanks, could we also add a test for the generic only fallback case?



##
python/tvm/relay/frontend/common.py:
##
@@ -1067,6 +1067,20 @@ def __init__(self, span):
             self._span = tvm.relay.Span(tvm.relay.SourceName(span.decode("utf-8")), 0, 0, 0, 0)
         else:
             assert False, f"unsupported span type: {type(span)}"
+        self.suffix_str = "_PART_"
+        self.counter = 0
+        self.distance_from_leaf = -1
+
+    def _create_span(self):
+        """Adds suffix_str + counter value to _span.source_name.name,
+        to create a unique source_name for the Relay layer
+        """
+        if self.distance_from_leaf == 0:
+            return tvm.relay.Span(tvm.relay.SourceName(self._span), 0, 0, 0, 0)
+        self.distance_from_leaf -= 1
+        span_str = "{}{}{}".format(self._span.source_name.name, self.suffix_str, str(self.counter))
+        self.counter += 1
+        return tvm.relay.Span(tvm.relay.SourceName(span_str), 0, 0, 0, 0)

Review Comment:
   This seems sensible to me, cc the original author of span filling 
@chunit-quic just to check
   
   Is `self.counter` required, or can the same info already be fetched from 
`self.distance_from_leaf` e.g. `.format(..., str(-self.distance_from_leaf))`?



##
tests/python/contrib/test_ethosu/test_pass_operations_distribution.py:
##
@@ -0,0 +1,84 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+from tvm.relay.analysis.operations_distribution import 
analyze_operations_distribution
+from . import infra
+import pytest
+import numpy as np
+
+
+def test_operations_distribution():
+
+    tflite = pytest.importorskip("tflite")
+    tensorflow = pytest.importorskip("tensorflow")
+    pytest.importorskip("ethosu.vela")
+
+    import tensorflow as tf
+
+    inp = (224, 224, 9)
+    input_shape = (1, *inp)
+    kernel_shape = (3, 3)
+    padding = (1, 1, 1, 1)
+    padding_out = (1, 33, 33, 1)
+
+    @tf.function
+    def simple_net(x):
+        weight_shape = [kernel_shape[0], kernel_shape[1], input_shape[3], 3]
+        weights =

[GitHub] [tvm] vvchernov commented on pull request #13802: [ONNX] Support Bernoulli op on ONNX front-end

2023-01-27 Thread via GitHub


vvchernov commented on PR #13802:
URL: https://github.com/apache/tvm/pull/13802#issuecomment-1406462257

   Hello @AndrewZhaoLuo! Please recheck, discuss with Jon if needed, and merge if possible.





[GitHub] [tvm] ashutosh-arm commented on a diff in pull request #13212: [TVMC][microNPU] tvmc option for printing which operators are offloaded to Ethos-U

2023-01-27 Thread via GitHub


ashutosh-arm commented on code in PR #13212:
URL: https://github.com/apache/tvm/pull/13212#discussion_r1088919482


##
python/tvm/driver/tvmc/compiler.py:
##
@@ -459,3 +493,65 @@ def save_dumps(module_name: str, dumps: Dict[str, str], dump_root: str = "."):
         dump_name = module_name + "." + dump_format
         with open(Path(dump_root, dump_name), "w") as f:
             f.write(dumps[dump_format])
+
+
+def dump_operation_offloads(mod: tvm.ir.IRModule, initial_relay_astext: list, dump_path: str):
+    """This helper function forms a line-by-line output of the initial Relay lines,
+    indicating which operations are ported to which backend,
+    indicating the composite that includes those operations e.g
+    'device1<- device2.qnn_conv2d'
+    'device1<-%0 = qnn.conv2d(%tfl.quantize, %v_param_1, ...'
+    'device1<-%1 = nn.bias_add(%0, %v_param_2, axis=3);'
+    'device1<-%2 = qnn.requantize(%1, meta[relay.Constant]...'
+    'device2<- device2.reshape'
+    'device2<-%3 = reshape(%206, newshape=[1, 1001]);'
+
+    Parameters
+    ----------
+    mod : tvm.ir.IRModule
+        The IRModule that gets generated from a relay frontend.
+    initial_relay_astext : list

Review Comment:
   I would suggest the same thing as @lhutton1 did above. The text representation changes quite often. It is better to rely on the information available inside the module object and extract it using, say, `ExprVisitor`.
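
   A minimal sketch of that suggestion using the standard Relay visitor API (the collector name and what it records are illustrative):

   ```python
   from tvm.relay.expr_functor import ExprVisitor

   class OpCollector(ExprVisitor):
       """Walk the module object itself and record every callee,
       instead of parsing the printed text."""

       def __init__(self):
           super().__init__()
           self.ops = []

       def visit_call(self, call):
           self.ops.append(call.op)  # an operator or a partitioned composite function
           super().visit_call(call)

   # Usage sketch: OpCollector().visit(mod["main"]) after partitioning.
   ```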






[GitHub] [tvm] github-actions[bot] commented on pull request #13493: [Bug][Rust] Fix variable type mismatch

2023-01-27 Thread github-actions


github-actions[bot] commented on PR #13493:
URL: https://github.com/apache/tvm/pull/13493#issuecomment-1406421221

   Failed to re-run CI in https://github.com/apache/tvm/actions/runs/4024385817
   
   
   
   ```
   Traceback (most recent call last):
 File "ci/scripts/github/github_tvmbot.py", line 594, in comment_failure
   raise item
 File "ci/scripts/github/github_tvmbot.py", line 700, in run
   pr.rerun_jenkins_ci()
 File "ci/scripts/github/github_tvmbot.py", line 553, in rerun_jenkins_ci
   post(url, auth=("tvm-bot", TVM_BOT_JENKINS_TOKEN))
 File "/home/runner/work/tvm/tvm/ci/scripts/jenkins/git_utils.py", line 53, 
in post
   with request.urlopen(req, data) as response:
 File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
   return opener.open(url, data, timeout)
 File "/usr/lib/python3.8/urllib/request.py", line 531, in open
   response = meth(req, response)
 File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
   response = self.parent.error(
 File "/usr/lib/python3.8/urllib/request.py", line 569, in error
   return self._call_chain(*args)
 File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
   result = func(*args)
 File "/usr/lib/python3.8/urllib/request.py", line 649, in 
http_error_default
   raise HTTPError(req.full_url, code, msg, hdrs, fp)
   urllib.error.HTTPError: HTTP Error 500: Server Error
   
   ```
   
   with response
   
   ```
   
 
 
   
   
   
    [garbled Jenkins HTML error page; recoverable content: "Oops! A problem occurred while processing the request. Logging ID=17f3b75e-baf9-4eb7-8df1-b546b6bc9f8b", Jenkins 2.361.2]
   ```
   
   





[GitHub] [tvm] gromero commented on pull request #13493: [Bug][Rust] Fix variable type mismatch

2023-01-27 Thread via GitHub


gromero commented on PR #13493:
URL: https://github.com/apache/tvm/pull/13493#issuecomment-1406420877

   @tvm-bot rerun





[GitHub] [tvm] gromero commented on issue #13501: [microTVM] Change apps/microtvm/zephyr_cmsisnn to a microTVM tutorial

2023-01-27 Thread via GitHub


gromero commented on issue #13501:
URL: https://github.com/apache/tvm/issues/13501#issuecomment-1406417390

   @mehrdadh I think @Mousius is right, I forgot that we actually have basically two use cases. For instance, one is if the user just got a new board and wants to promptly use the Project API to select a platform and a board and run a model without caring about details like the RTOS, etc. The other, which is actually demonstrated by the demo in question, is more about how to integrate TVM into an existing project, not necessarily Zephyr or Arduino, etc. So I agree with @Mousius to keep it the way it is now :-)





[GitHub] [tvm] vvchernov commented on a diff in pull request #13802: [ONNX] Support Bernoulli op on ONNX front-end

2023-01-27 Thread via GitHub


vvchernov commented on code in PR #13802:
URL: https://github.com/apache/tvm/pull/13802#discussion_r1088868698


##########
tests/python/frontend/onnx/test_forward.py:
##########
@@ -6707,6 +6707,117 @@ def verify_qlinearsigmoid(a_shape):
     verify_qlinearsigmoid([])
 
 
+@tvm.testing.parametrize_targets("llvm")
+def test_random_bernoulli(target, dev):
+    """test_random_bernoulli"""
+
+    def verify_bernoulli(
+        inputs=None,
+        shape=[],
+        in_dtype="float32",
+        out_dtype="int32",
+        seed=None,
+        target=target,
+        dev=dev,
+        use_vm=False,
+        freeze_params=False,
+        rtol=0.1,
+        atol=0.1,
+        in_out_equal=False,
+    ):
+        def get_bernoulli_model(shape, in_dtype="float32", out_dtype="int32", seed=None):
+            onnx_itype = mapping.NP_TYPE_TO_TENSOR_TYPE[np.dtype(in_dtype)]
+            onnx_otype = mapping.NP_TYPE_TO_TENSOR_TYPE[np.dtype(out_dtype)]
+            node = helper.make_node(
+                "Bernoulli",
+                ["input"],
+                ["output"],
+            )
+            dtype_attr = helper.make_attribute("dtype", onnx_otype)
+            node.attribute.append(dtype_attr)
+            if seed is not None:
+                seed_attr = helper.make_attribute("seed", float(seed))
+                node.attribute.append(seed_attr)
+
+            graph = helper.make_graph(
+                [node],
+                "random_bernoulli_test",
+                inputs=[helper.make_tensor_value_info("input", onnx_itype, list(shape))],
+                outputs=[helper.make_tensor_value_info("output", onnx_otype, list(shape))],
+            )
+            return helper.make_model(graph, producer_name="random_bernoulli_test")
+
+        if inputs is None:
+            assert len(shape) != 0
+            inputs = np.random.uniform(size=shape).astype(in_dtype)
+        else:
+            shape = inputs.shape
+            in_dtype = inputs.dtype
+        model = get_bernoulli_model(shape, in_dtype, out_dtype, seed)
+
+        if use_vm:
+            tvm_out = get_tvm_output_with_vm(
+                model,
+                inputs,
+                target,
+                dev,
+                freeze_params=freeze_params,
+            )
+        else:
+            tvm_out = get_tvm_output(
+                model,
+                inputs,
+                target,
+                dev,
+            )
+
+        if isinstance(tvm_out, list):
+            tvm_out = tvm_out[0]
+        ideal_mean = np.mean(inputs)
+        # check that values are 0 or 1
+        tvm_flat = tvm_out.flatten()
+        for i in range(len(tvm_flat)):
+            assert tvm_flat[i] == 0 or tvm_flat[i] == 1
+        if in_out_equal:
+            tvm.testing.assert_allclose(inputs, tvm_out)
+        else:
+            # check that mean value is close to the theoretical one by binomial test
+            bnm_test_res = scipy.stats.binomtest(

Review Comment:
   Hello @octoJon! I've modified the test because it was already
"over-conservative" with a p-value threshold of 1e-6. I've increased the
threshold to 0.05, which is the more classical choice. If the test condition
fails, there are two possible cases: either something is wrong in the
operation, or we happened to draw a "bad" output sequence from the tail of
the distribution. Since the latter is a rare case that should simply be
rechecked, I repeat the test (and a third time if needed) with a new seed for
the internal distribution (the input stays the same).
   P.S. As you know, RandomUniform and RandomNormal are already implemented
on the TVM side. Their CI tests should possibly also be updated to test for
stability and avoid flaky failures.
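   To make the scheme concrete, here is a minimal sketch of such retry logic
(the helper name `check_bernoulli_mean` and the `regen` callback are
hypothetical, not the test's actual code; assumes scipy >= 1.7 for
`scipy.stats.binomtest`):

   ```
import numpy as np
import scipy.stats


def check_bernoulli_mean(sample, ideal_mean, threshold=0.05, max_retries=3, regen=None):
    """Accept if any of up to max_retries binomial tests has p-value >= threshold."""
    for _ in range(max_retries):
        successes = int(np.sum(sample))  # Bernoulli outputs are 0/1, so the sum counts ones
        res = scipy.stats.binomtest(successes, n=sample.size, p=float(ideal_mean))
        if res.pvalue >= threshold:
            return True  # consistent with the theoretical mean
        if regen is None:
            break
        sample = regen()  # re-run the op with a new seed; the input stays the same
    return False
   ```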



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] ibsidorenko opened a new pull request, #13854: [QNN][Relay][Topi] Add qnn.dense with weight layout

2023-01-27 Thread via GitHub


ibsidorenko opened a new pull request, #13854:
URL: https://github.com/apache/tvm/pull/13854

   This commit adds a new Relay operation, `qnn.dense_pack`, that supports
different weight layouts (`nn.dense` and `qnn.dense` do not support this
attribute). The new operation is a full analog of the `nn.contrib_dense_pack`
operation, but in QNN space.
   
   With this PR, QNN Dense can achieve a ~10x performance gain on the Hexagon
target without QNN canonicalization (through the use of the `vrmpy`
intrinsic).
   
   Also, this PR includes a slight performance improvement for `qnn.mul`
(without QNN canonicalization).
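   For context, "weight layout" here means pre-reordering the (N, K) weight
matrix into fixed-size blocks so that the innermost loops line up with the
target intrinsic. A rough NumPy illustration of such a packing (the block
sizes below are made up for illustration; they are not necessarily what
`qnn.dense_pack` uses on Hexagon):

   ```
import numpy as np


def pack_weights(w, block_n=32, block_k=4):
    """Repack an (N, K) weight matrix into (N//block_n, K//block_k, block_n, block_k)."""
    n, k = w.shape
    assert n % block_n == 0 and k % block_k == 0
    return w.reshape(n // block_n, block_n, k // block_k, block_k).transpose(0, 2, 1, 3)


w = np.random.randint(-128, 128, size=(64, 8), dtype="int8")
print(pack_weights(w).shape)  # (2, 2, 32, 4)
   ```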


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] tvm-bot commented on pull request #13854: [QNN][Relay][Topi] Add qnn.dense with weight layout

2023-01-27 Thread via GitHub


tvm-bot commented on PR #13854:
URL: https://github.com/apache/tvm/pull/13854#issuecomment-1406373758

   
   
   Thanks for contributing to TVM! Please refer to the contributing guidelines 
https://tvm.apache.org/docs/contribute/ for useful information and tips. Please 
request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @-ing them in a comment.
   
   
* No users to tag found in teams: `qnn`, `relay`, `topi`. See
[#10317](https://github.com/apache/tvm/issues/10317) for details.
   
   Generated by 
[tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] lhutton1 commented on pull request #13848: [ETHOSN] Apply FoldConstant before NPU partitioning

2023-01-27 Thread via GitHub


lhutton1 commented on PR #13848:
URL: https://github.com/apache/tvm/pull/13848#issuecomment-1406232857

   Thanks @ashutosh-arm!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[tvm] branch main updated: [ETHOSN] Apply FoldConstant before NPU partitioning (#13848)

2023-01-27 Thread lukhut
This is an automated email from the ASF dual-hosted git repository.

lukhut pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 16b19582a2 [ETHOSN] Apply FoldConstant before NPU partitioning (#13848)
16b19582a2 is described below

commit 16b19582a2b0887d0de19813d8ed8932c26dc521
Author: Ashutosh Parkhi <86472128+ashutosh-...@users.noreply.github.com>
AuthorDate: Fri Jan 27 09:20:26 2023 +

[ETHOSN] Apply FoldConstant before NPU partitioning (#13848)

Introduced FoldConstant before NPU partitioning.
Added a qnn.add test where both inputs are constants.
Updated the number of operators remaining in the host code
for ssd_mobilenet_v1, since FoldConstant reduces the number
of operators.
---
 python/tvm/relay/op/contrib/ethosn.py |  1 +
 tests/python/contrib/test_ethosn/test_addition.py | 68 +++
 tests/python/contrib/test_ethosn/test_networks.py |  2 +-
 3 files changed, 58 insertions(+), 13 deletions(-)

diff --git a/python/tvm/relay/op/contrib/ethosn.py 
b/python/tvm/relay/op/contrib/ethosn.py
index 3e10f3d604..7acaee9706 100644
--- a/python/tvm/relay/op/contrib/ethosn.py
+++ b/python/tvm/relay/op/contrib/ethosn.py
@@ -129,6 +129,7 @@ def partition_for_ethosn(mod, params=None, **opts):
 
 passes = [
 transform.InferType(),
+transform.FoldConstant(fold_qnn=True),
 transform.MergeComposite(pattern_table()),
 transform.AnnotateTarget("ethos-n"),
 transform.MergeCompilerRegions(),
diff --git a/tests/python/contrib/test_ethosn/test_addition.py 
b/tests/python/contrib/test_ethosn/test_addition.py
index 9841e798af..5813ef7b9d 100644
--- a/tests/python/contrib/test_ethosn/test_addition.py
+++ b/tests/python/contrib/test_ethosn/test_addition.py
@@ -41,20 +41,28 @@ def _get_model(
 ):
     """Return a model and any parameters it may have"""
 
-    iinfo = np.iinfo(dtype)
-    data_min = iinfo.min
-    data_max = iinfo.max
+    def create_or_assign_constant(shape, dtype, default_data):
+        """Creates new numpy array or assigns default_data if available."""
+
+        iinfo = np.iinfo(dtype)
+        data_min = iinfo.min
+        data_max = iinfo.max
+
+        nparray = None
+        if default_data:
+            nparray = np.array(default_data, dtype=dtype).reshape(shape)
+        else:
+            nparray = np.random.randint(data_min, data_max + 1, size=shape, dtype=dtype)
+
+        return relay.const(nparray, dtype=dtype)
 
     if lhs_is_constant:
-        a_data = np.array(constant_data, dtype=dtype).reshape(lhs_shape)
-        a = relay.const(a_data, dtype=dtype)
+        a = create_or_assign_constant(lhs_shape, dtype, constant_data)
     else:
         a = relay.var("a", shape=lhs_shape, dtype=dtype)
 
     if rhs_is_constant:
-        b_data = np.array(constant_data, dtype=dtype).reshape(rhs_shape)
-        np.random.randint(data_min, data_max + 1, size=rhs_shape, dtype=dtype)
-        b = relay.const(b_data, dtype=dtype)
+        b = create_or_assign_constant(rhs_shape, dtype, constant_data)
     else:
         b = relay.var("b", shape=rhs_shape, dtype=dtype)
 
@@ -125,6 +133,46 @@ def test_addition(dtype, shape):
     tei.verify(outputs, dtype, 1)
 
 
+@requires_ethosn
+@pytest.mark.parametrize("dtype", ["uint8", "int8"])
+@pytest.mark.parametrize(
+    "lhs_shape,lhs_is_constant,rhs_shape,rhs_is_constant",
+    [
+        ((1, 4, 4, 8), True, (1, 1, 1, 8), True),
+        ((4,), True, (1, 16, 12, 4), True),
+        ((1, 1, 1, 8), True, (1, 4, 4, 8), True),
+        ((1, 16, 12, 4), True, (4,), True),
+    ],
+)
+def test_addition_both_inputs_constants(
+    dtype, lhs_shape, lhs_is_constant, rhs_shape, rhs_is_constant
+):
+    """Check if addition is simplified when both inputs are constants."""
+    np.random.seed(0)
+
+    lhs_zp, lhs_sc, rhs_zp, rhs_sc, out_zp, out_sc = _get_addition_qnn_params(dtype)
+
+    model = _get_model(
+        lhs_shape,
+        rhs_shape,
+        lhs_zp,
+        lhs_sc,
+        rhs_zp,
+        rhs_sc,
+        out_zp,
+        out_sc,
+        dtype,
+        lhs_is_constant=lhs_is_constant,
+        rhs_is_constant=rhs_is_constant,
+    )
+    from tvm.relay.op.contrib import partition_for_ethosn  # pylint: disable=import-outside-toplevel
+
+    mod = tei.make_module(model, {})
+    assert "qnn.add" in mod.astext(False)
+    mod = partition_for_ethosn(mod, {})
+    assert "qnn.add" not in mod.astext(False)
+
+
 @requires_ethosn
 @pytest.mark.parametrize("dtype", ["uint8", "int8"])
 @pytest.mark.parametrize(
@@ -145,9 +193,6 @@ def test_addition_to_depthwise(dtype, lhs_shape, lhs_is_constant, rhs_shape, rhs
     data_max = iinfo.max
     lhs_zp, lhs_sc, rhs_zp, rhs_sc, out_zp, out_sc = _get_addition_qnn_params(dtype)
 
-    constant_shape = lhs_shape if lhs_is_constant else rhs_shape
-    constant_data = np.random.randint(data_min, data_max + 

[GitHub] [tvm] lhutton1 merged pull request #13848: [ETHOSN] Apply FoldConstant before NPU partitioning

2023-01-27 Thread via GitHub


lhutton1 merged PR #13848:
URL: https://github.com/apache/tvm/pull/13848


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] masahi merged pull request #13846: [Relay] Convert negative axes to positive when importing ONNX Unsqueeze

2023-01-27 Thread via GitHub


masahi merged PR #13846:
URL: https://github.com/apache/tvm/pull/13846


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[tvm] branch main updated (56771a87d1 -> 2bfdcbe07a)

2023-01-27 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


from 56771a87d1 [CLML][RUNTIME] Enable more ops in CLML runtime (#13834)
 add 2bfdcbe07a [Relay] Convert negative axes to positive when importing 
ONNX Unsqueeze (#13846)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/onnx.py  |  5 +++-
 tests/python/frontend/onnx/test_forward.py | 39 ++
 2 files changed, 43 insertions(+), 1 deletion(-)
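
For ONNX Unsqueeze, negative axes are interpreted relative to the output
rank, i.e. rank(data) + len(axes). A hedged sketch of the general conversion
(`normalize_unsqueeze_axes` is a hypothetical name, not the function added by
this commit):

```
def normalize_unsqueeze_axes(axes, data_rank):
    """Map possibly-negative ONNX Unsqueeze axes into [0, out_rank)."""
    out_rank = data_rank + len(axes)
    return sorted(a + out_rank if a < 0 else a for a in axes)


assert normalize_unsqueeze_axes([-1], 2) == [2]  # (3, 4) -> (3, 4, 1)
assert normalize_unsqueeze_axes([0, -1], 2) == [0, 3]  # (3, 4) -> (1, 3, 4, 1)
```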



[GitHub] [tvm] apeskov commented on a diff in pull request #13849: [RUNTIME] Fix the manual determination of cores in FillDataForMeasure

2023-01-27 Thread via GitHub


apeskov commented on code in PR #13849:
URL: https://github.com/apache/tvm/pull/13849#discussion_r1088719361


##########
src/runtime/contrib/random/mt_random_engine.cc:
##########
@@ -192,12 +192,12 @@ class RandomEngine {
 struct ParallelTask {
   static int RunTask(int task_id, TVMParallelGroupEnv* penv, void* cdata) {
     ParallelTask* task = static_cast<ParallelTask*>(cdata);
-    task->Run(task_id);
+    task->Run(task_id, penv->num_task);
     return 0;
   }
 
-  void Run(int i) {
-    int64_t chunk_size = size / num_threads;
+  void Run(int i, int num_threads) {
+    int64_t chunk_size = ceil(size / num_threads);

Review Comment:
   > is it still correct to use this number as a divider for size.
   
   Yes and no, simultaneously. With the current implementation it is
incorrect: the check at the line below, `int64_t st = std::min(i *
chunk_size, size);`, is missing. With that line added it will be correct.
   
   > how does penv->num_task correlate with the number of threads?
   
   They are one and the same. See the TVMBackendParallelLaunch API
[reference](https://github.com/apache/tvm/blob/56771a87d1560f8963cf745ad093ffc3d83f3f6a/include/tvm/runtime/c_backend_api.h#L119-L146).
   
   > Is it possible that size is less than num_threads?
   
   Yes, it's possible.
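   
   To illustrate the arithmetic under discussion, a small Python sketch (the
helper name `chunk_bounds` is hypothetical) showing why the clamp on the
start index matters when `size` is not a multiple of the thread count:
   
   ```
import math


def chunk_bounds(size, num_threads):
    """Split [0, size) into per-thread chunks; clamp so trailing threads get empty ranges."""
    chunk_size = math.ceil(size / num_threads)
    bounds = []
    for i in range(num_threads):
        st = min(i * chunk_size, size)  # the missing std::min discussed above
        ed = min(st + chunk_size, size)
        bounds.append((st, ed))
    return bounds


print(chunk_bounds(5, 4))  # [(0, 2), (2, 4), (4, 5), (5, 5)]; without the clamp, st would be 6 for the last thread
   ```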
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[tvm] branch main updated (18b7dc1dd9 -> 56771a87d1)

2023-01-27 Thread srk
This is an automated email from the ASF dual-hosted git repository.

srk pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


from 18b7dc1dd9 [MetaSchedule] Fix for RewriteLayout + AllocateConst when 
the rank of the rewritten weight doesn't change (#13851)
 add 56771a87d1 [CLML][RUNTIME] Enable more ops in CLML runtime (#13834)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/contrib/clml.py|  16 -
 src/runtime/contrib/clml/clml_runtime.cc   |  67 ++-
 tests/python/contrib/test_clml/test_ops.py | 102 +
 3 files changed, 183 insertions(+), 2 deletions(-)



[GitHub] [tvm] srkreddy1238 merged pull request #13834: [CLML][RUNTIME] Enable more ops in CLML runtime

2023-01-27 Thread via GitHub


srkreddy1238 merged PR #13834:
URL: https://github.com/apache/tvm/pull/13834


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org