[GitHub] [tvm] tmoreau89 commented on pull request #7781: [Relay][Frontend][Onnx] Remove pop that interferes with nested loops.

2021-04-03 Thread GitBox


tmoreau89 commented on pull request #7781:
URL: https://github.com/apache/tvm/pull/7781#issuecomment-812948236


   Thank you @jwfromm @mbrookhart, the PR has been merged.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (d31d048 -> 91311b3)

2021-04-03 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from d31d048  [AutoScheduler] Add task.desc for its function name (#7794)
 add 91311b3  [Relay][Frontend][Onnx] Remove pop that interferes with nested loops. (#7781)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/onnx.py | 16 ++--
 1 file changed, 10 insertions(+), 6 deletions(-)


[GitHub] [tvm] tmoreau89 merged pull request #7781: [Relay][Frontend][Onnx] Remove pop that interferes with nested loops.

2021-04-03 Thread GitBox


tmoreau89 merged pull request #7781:
URL: https://github.com/apache/tvm/pull/7781


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (1f59139 -> d31d048)

2021-04-03 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository.

lmzheng pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 1f59139  [Target] Fix empty target and host for autotvm task (#7791)
 add d31d048  [AutoScheduler] Add task.desc for its function name (#7794)

No new revisions were added by this update.

Summary of changes:
 include/tvm/auto_scheduler/search_task.h   |  6 +-
 python/tvm/auto_scheduler/relay_integration.py | 17 +++--
 python/tvm/auto_scheduler/search_task.py   |  6 ++
 src/auto_scheduler/search_task.cc  |  8 +---
 4 files changed, 27 insertions(+), 10 deletions(-)


[GitHub] [tvm] merrymercy merged pull request #7794: [AutoScheduler] Add task.desc for its function name

2021-04-03 Thread GitBox


merrymercy merged pull request #7794:
URL: https://github.com/apache/tvm/pull/7794


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jwfromm commented on pull request #7792: [TVMC] Autotuning - Hardware configs not default

2021-04-03 Thread GitBox


jwfromm commented on pull request #7792:
URL: https://github.com/apache/tvm/pull/7792#issuecomment-812901294


   Just to add a little more context: the reason for this change is that we
recently had someone try out TVMC and get very confused about why it was
producing different tuning results than autoscheduling the traditional way. It
turned out to be due to `num_cores` defaulting to 4 in TVMC. This behavior
could throw off a lot of new users, so the default hardware params should be
those of the system the tuning is run on.
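
   Illustrative only (not the actual TVMC code): one way a default such as
`num_cores` could be derived from the machine the tuning runs on rather than
being hard-coded:

       # Hypothetical sketch: pick the core count of the current system.
       import os

       def default_num_cores():
           # os.cpu_count() can return None on some platforms; fall back to 1.
           return os.cpu_count() or 1

       print(default_num_cores())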


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] liaopeiyuan commented on pull request #7719: _fix_outputs for BatchNormalization

2021-04-03 Thread GitBox


liaopeiyuan commented on pull request #7719:
URL: https://github.com/apache/tvm/pull/7719#issuecomment-812878495


   Sorry, this is part of a larger patch that belongs to an ongoing research
project. We will add the test cases when we have time or are done with the
internal patch.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on pull request #7789: [TF frontend][bugfix]Avoid making a new node when already has span info

2021-04-03 Thread GitBox


tqchen commented on pull request #7789:
URL: https://github.com/apache/tvm/pull/7789#issuecomment-812858365


   cc @zhiics, can you please help manage this PR? Thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen closed issue #7790: target = None error while tuning an autotvm task

2021-04-03 Thread GitBox


tqchen closed issue #7790:
URL: https://github.com/apache/tvm/issues/7790


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen merged pull request #7791: [Target] Fix empty target and host for autotvm task

2021-04-03 Thread GitBox


tqchen merged pull request #7791:
URL: https://github.com/apache/tvm/pull/7791


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [Target] Fix empty target and host for autotvm task (#7791)

2021-04-03 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 1f59139  [Target] Fix empty target and host for autotvm task (#7791)
1f59139 is described below

commit 1f59139db2003bc718159d1d87e7a7d36522961c
Author: Xiyou Zhou 
AuthorDate: Sat Apr 3 05:23:14 2021 -0700

[Target] Fix empty target and host for autotvm task (#7791)
---
 python/tvm/autotvm/task/task.py |  4 ++--
 python/tvm/target/target.py |  3 +++
 tests/python/integration/test_tuning.py |  4 ++--
 tests/python/unittest/test_target_target.py | 33 -
 4 files changed, 39 insertions(+), 5 deletions(-)

diff --git a/python/tvm/autotvm/task/task.py b/python/tvm/autotvm/task/task.py
index 0d60ca9..668832b 100644
--- a/python/tvm/autotvm/task/task.py
+++ b/python/tvm/autotvm/task/task.py
@@ -185,7 +185,7 @@ class Task(object):
 "config_space": self.config_space,
 "flop": self.flop,
 "target": self.target,
-"target_host": self.target.host,
+"target_host": self.target_host,
 "func": cloudpickle.dumps(self.func),
 }
 
@@ -465,7 +465,7 @@ def create(task_name, args, target, target_host=None):
 
     ret.flop = ret.config_space.flop or compute_flop(sch)
     ret.target = target
-    ret.target_host = target.host
+    ret.target_host = target_host
 
     return ret
 
diff --git a/python/tvm/target/target.py b/python/tvm/target/target.py
index 6d0a063..baf0760 100644
--- a/python/tvm/target/target.py
+++ b/python/tvm/target/target.py
@@ -182,6 +182,9 @@ class Target(Object):
         target_is_dict_key : Bool
             When the type of target is dict, whether Target is the key (Otherwise the value)
         """
+        if target is None:
+            assert host is None, "Target host is not empty when target is empty."
+            return target, host
         if isinstance(target, dict) and "kind" not in target:
             new_target = {}
             for tgt, mod in target.items():
diff --git a/tests/python/integration/test_tuning.py b/tests/python/integration/test_tuning.py
index 45e0958..55c8e56 100644
--- a/tests/python/integration/test_tuning.py
+++ b/tests/python/integration/test_tuning.py
@@ -30,6 +30,7 @@ from tvm import te
 
 from tvm import autotvm
 from tvm.autotvm.tuner import RandomTuner
+from tvm.target import Target
 
 import tvm.testing
 
@@ -131,8 +132,7 @@ def teardown_module():
 
 
 def get_sample_task(target=tvm.target.cuda(), target_host=None):
-    target = tvm.target.Target(target, target_host)
-    target_host = target.host
+    target, target_host = Target.check_and_update_host_consist(target, target_host)
     """return a sample task for testing"""
     task = autotvm.task.create(
         "testing/conv2d_no_batching", args=(1, 7, 7, 512, 512, 3, 3), target=target
diff --git a/tests/python/unittest/test_target_target.py b/tests/python/unittest/test_target_target.py
index 2f885d3..98a9edc 100644
--- a/tests/python/unittest/test_target_target.py
+++ b/tests/python/unittest/test_target_target.py
@@ -18,7 +18,7 @@ import json
 import sys
 import pytest
 import tvm
-from tvm.target import cuda, rocm, mali, intel_graphics, arm_cpu, vta, bifrost
+from tvm.target import cuda, rocm, mali, intel_graphics, arm_cpu, vta, bifrost, Target
 
 
 @tvm.target.generic_func
@@ -268,5 +268,36 @@ def test_target_with_host():
     assert tgt.host.attrs["registers_per_block"] == 32768
 
 
+def test_check_and_update_host_consist_0():
+    target = None
+    host = None
+    target, host = Target.check_and_update_host_consist(target, host)
+
+
+def test_check_and_update_host_consist_1():
+    target = None
+    host = "llvm"
+    with pytest.raises(AssertionError, match=r"Target host is not empty when target is empty."):
+        target, host = Target.check_and_update_host_consist(target, host)
+
+
+def test_check_and_update_host_consist_2():
+    target = Target("cuda")
+    host = Target("llvm")
+    target, host = Target.check_and_update_host_consist(target, host)
+    assert target.kind.name == "cuda"
+    assert target.host.kind.name == "llvm"
+
+
+def test_check_and_update_host_consist_3():
+    target = Target(target="cuda", host="llvm")
+    host = None
+    target, host = Target.check_and_update_host_consist(target, host)
+    assert target.kind.name == "cuda"
+    assert target.host.kind.name == "llvm"
+    assert host.kind.name == "llvm"
+    assert target.host == host
+
+
 if __name__ == "__main__":
     sys.exit(pytest.main([__file__] + sys.argv[1:]))
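
For readers skimming the diff, the new tests above boil down to the following
behavior of Target.check_and_update_host_consist (an illustrative recap, not
part of the commit itself):

    from tvm.target import Target

    # A None target with a None host passes through unchanged.
    target, host = Target.check_and_update_host_consist(None, None)

    # An empty target with a non-empty host raises AssertionError.
    # When both are given, the host is folded into target.host and returned.
    target, host = Target.check_and_update_host_consist(Target("cuda"), Target("llvm"))
    assert target.host.kind.name == "llvm"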


[tvm] branch main updated: Disable Rust CI (#7793)

2021-04-03 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 7071fda  Disable Rust CI (#7793)
7071fda is described below

commit 7071fda67422734f5eca9641424b04238f3e1351
Author: Jared Roesch 
AuthorDate: Sat Apr 3 05:22:40 2021 -0700

Disable Rust CI (#7793)
---
 Jenkinsfile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index f7fc6e4..ea9f160 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -187,7 +187,8 @@ stage('Build') {
   sh "${docker_run} ${ci_cpu} ./tests/scripts/task_python_vta_fsim.sh"
   sh "${docker_run} ${ci_cpu} ./tests/scripts/task_python_vta_tsim.sh"
   // sh "${docker_run} ${ci_cpu} ./tests/scripts/task_golang.sh"
-  sh "${docker_run} ${ci_cpu} ./tests/scripts/task_rust.sh"
+  // TODO(@jroesch): need to resolve CI issue will turn back on in follow up patch
+  // sh "${docker_run} ${ci_cpu} ./tests/scripts/task_rust.sh"
   junit "build/pytest-results/*.xml"
 }
   }


[GitHub] [tvm] tqchen merged pull request #7793: Disable Rust CI

2021-04-03 Thread GitBox


tqchen merged pull request #7793:
URL: https://github.com/apache/tvm/pull/7793


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] AD1024 opened a new pull request #7795: [FIX] `skip_conv_layers` will affect quantization of `nn.dense`

2021-04-03 Thread GitBox


AD1024 opened a new pull request #7795:
URL: https://github.com/apache/tvm/pull/7795


   Fixed a bug that caused quantization not to work for `nn.dense`.
   Also added the repr for the `skip_dense_layer` field in QConfig.
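
   A hedged sketch of the scenario being fixed (the QConfig field names come
   from the description above; the surrounding calls are illustrative, not
   taken from this PR):

       import numpy as np
       import tvm
       from tvm import relay

       # Tiny model containing an nn.dense that we want quantized.
       data = relay.var("data", shape=(1, 16), dtype="float32")
       weight = relay.const(np.random.rand(8, 16).astype("float32"))
       mod = tvm.IRModule.from_expr(relay.nn.dense(data, weight))

       # Skipping the first conv layer should not disable dense quantization.
       with relay.quantize.qconfig(skip_conv_layers=[0], skip_dense_layer=False):
           qmod = relay.quantize.quantize(mod)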


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on pull request #7754: [PatternMatcher] Support matching tuples, call nodes, and functions with variable numbers of inputs

2021-04-03 Thread GitBox


masahi commented on pull request #7754:
URL: https://github.com/apache/tvm/pull/7754#issuecomment-812838540


   thanks @mbrookhart @ekalda @mbaret 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [PatternMatcher] Support matching tuples, call nodes, and functions with variable numbers of inputs (#7754)

2021-04-03 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 7e68b4d  [PatternMatcher] Support matching tuples, call nodes, and functions with variable numbers of inputs (#7754)
7e68b4d is described below

commit 7e68b4d413dfc0ec0042634a40d94e9bab59dbde
Author: Matthew Brookhart 
AuthorDate: Sat Apr 3 03:12:13 2021 -0600

[PatternMatcher] Support matching tuples, call nodes, and functions with variable numbers of inputs (#7754)

* Allow TuplePattern to have null fields and match any tuple

* support matching functions and call nodes with variable numbers of parameters

* remove development code that was commented out

* add docs for fuzzy matching
---
 docs/langref/relay_pattern.rst|  16 
 python/tvm/relay/dataflow_pattern/__init__.py |   5 +-
 src/relay/ir/dataflow_matcher.cc  | 107 ++
 src/relay/ir/dataflow_pattern_functor.cc  |  18 +++--
 src/relay/ir/indexed_graph.cc |  18 +++--
 tests/python/relay/test_dataflow_pattern.py   |  80 +--
 6 files changed, 192 insertions(+), 52 deletions(-)

diff --git a/docs/langref/relay_pattern.rst b/docs/langref/relay_pattern.rst
index d77a519..efb9804 100644
--- a/docs/langref/relay_pattern.rst
+++ b/docs/langref/relay_pattern.rst
@@ -307,6 +307,22 @@ The final example is matching diamonds with a post-dominator relationship. We em
 assert diamond.match(out)
 
 
+Matching Fuzzy Patterns
+=======================
+
+The Dominator analysis above lets one match a subgraph of Relay AST that doesn't correspond to a set of patterns nodes exactly 1-to-1. There are a few other places where we support such "fuzzy" matching.
+
+Tuples, Functions, and Call nodes with any number of inputs can be matched by passing `None` as the argument value, i.e.::
+
+    tuple_pattern = is_tuple(None)
+    func_pattern = FunctionPattern(None, wildcard() + wildcard())
+    call_pattern = func_pattern(None)
+
+These patterns allow matching more generic classes patterns by constraining the use of the arguments rather than the number of arguments.
+
+Additionally, we support matching Functions with fuzzy bodies, i.e., a function body that is under constrained by the pattern. The pattern `FunctionPattern([is_var(), is_var()], wildcard() + wildcard()])` will match `relay.Function([x, y], x + y)`, but it will also match `relay.Function([x, y], x * x + y)`. In the second case, the pattern doesn't perfectly constrain the body of the function, so the resulting match is fuzzy.
+
+
 Pattern Language Design
 ===
 
diff --git a/python/tvm/relay/dataflow_pattern/__init__.py b/python/tvm/relay/dataflow_pattern/__init__.py
index d4a8481..b368f4e 100644
--- a/python/tvm/relay/dataflow_pattern/__init__.py
+++ b/python/tvm/relay/dataflow_pattern/__init__.py
@@ -47,7 +47,10 @@ class DFPattern(Node):
 """Base class of all Patterns."""
 
 def __call__(self, *args):
-return CallPattern(self, list(args))
+args = list(args)
+if len(args) == 1 and args[0] is None:
+args = None
+return CallPattern(self, args)
 
 def __or__(self, other):
 return AltPattern(self, other)
diff --git a/src/relay/ir/dataflow_matcher.cc b/src/relay/ir/dataflow_matcher.cc
index 43a6473..6ed24d5 100644
--- a/src/relay/ir/dataflow_matcher.cc
+++ b/src/relay/ir/dataflow_matcher.cc
@@ -242,6 +242,7 @@ bool DFPatternMatcher::VisitDFPattern_(const CallPatternNode* op, const Expr& ex
 }
 return false;
   };
+
   // logic
   auto watermark = matched_nodes_.size();
   if (const auto* call_node = expr.as<CallNode>()) {
@@ -253,13 +254,15 @@ bool DFPatternMatcher::VisitDFPattern_(const CallPatternNode* op, const Expr& ex
                                        const Array<Expr> expr_args) {
     bool matches = true;
     size_t i = 0;
-    if (pattern_args.size() == expr_args.size()) {
-      while (matches && i < pattern_args.size()) {
-        matches &= VisitDFPattern(pattern_args[i], expr_args[i]);
-        ++i;
+    if (pattern_args.defined()) {
+      if (pattern_args.size() == expr_args.size()) {
+        while (matches && i < pattern_args.size()) {
+          matches &= VisitDFPattern(pattern_args[i], expr_args[i]);
+          ++i;
+        }
+      } else {
+        matches = false;
       }
-    } else {
-      matches = false;
     }
     if (!matches) {
       ClearMap(watermark2);
@@ -381,14 +384,16 @@ bool DFPatternMatcher::VisitDFPattern_(const FunctionPatternNode* op, const Expr
   bool matches = false;
   if (const auto* func = expr.as<FunctionNode>()) {
     matches = true;
-    size_t i = 0;
-    if (op->params.size() == func->params.size()) {
-      while (matches 

[GitHub] [tvm] masahi merged pull request #7754: [PatternMatcher] Support matching tuples, call nodes, and functions with variable numbers of inputs

2021-04-03 Thread GitBox


masahi merged pull request #7754:
URL: https://github.com/apache/tvm/pull/7754


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] FrozenGene commented on pull request #7719: _fix_outputs for BatchNormalization

2021-04-03 Thread GitBox


FrozenGene commented on pull request #7719:
URL: https://github.com/apache/tvm/pull/7719#issuecomment-812818597


   Any update? @liaopeiyuan 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org