[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #5917: [TE] Add LegalizeInvalidAttach to legalize the compute_at location after split or fuse

2020-06-25 Thread GitBox


merrymercy commented on a change in pull request #5917:
URL: https://github.com/apache/incubator-tvm/pull/5917#discussion_r445980708



##
File path: .gitignore
##
@@ -196,6 +196,7 @@ tvm_t.*
 .python_history
 .pytest_cache
 .local
+cmake-build-debug

Review comment:
   It is produced by my JetBrains CLion.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





[incubator-tvm] branch master updated: [TE] Add LegalizeInvalidAttach to legalize the compute_at location after split or fuse (#5917)

2020-06-25 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository.

lmzheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 96bf271  [TE] Add LegalizeInvalidAttach to legalize the compute_at location after split or fuse (#5917)
96bf271 is described below

commit 96bf271e4925030d4533737991dfa7161be60d93
Author: Lianmin Zheng 
AuthorDate: Thu Jun 25 22:52:19 2020 -0700

    [TE] Add LegalizeInvalidAttach to legalize the compute_at location after split or fuse (#5917)

* Add LegalizeInvalidAttach

* lint & typo

* lint & typo

* address comment

* fix lint
---
 .gitignore   |  1 +
 src/te/schedule/schedule_dataflow_rewrite.cc | 79 +++-
 tests/python/unittest/test_te_schedule.py| 20 +++
 3 files changed, 98 insertions(+), 2 deletions(-)

diff --git a/.gitignore b/.gitignore
index b935701..506e54d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -196,6 +196,7 @@ tvm_t.*
 .python_history
 .pytest_cache
 .local
+cmake-build-debug
 
 # Visual Studio Code
 .vscode
diff --git a/src/te/schedule/schedule_dataflow_rewrite.cc b/src/te/schedule/schedule_dataflow_rewrite.cc
index af72d3b..f130cb4 100644
--- a/src/te/schedule/schedule_dataflow_rewrite.cc
+++ b/src/te/schedule/schedule_dataflow_rewrite.cc
@@ -451,7 +451,7 @@ Tensor Schedule::cache_write(const Tensor& tensor, const std::string& scope) {
   }
 }
 
-void RebaseNonZeroMinLoop(const Schedule& sch) {
+void RebaseNonZeroMinLoop(ScheduleNode* sch) {
   std::unordered_map<IterVar, IterVar> rebase_map;
   for (Stage s : sch->stages) {
     if (s->attach_type == kInlinedAlready) continue;
@@ -614,10 +614,85 @@ void InjectInline(ScheduleNode* sch) {
   }
 }
 
+void LegalizeInvalidAttach(ScheduleNode* sch) {
+  // Legalize the compute_at location if the target iterator of compute_at is split or fused.
+  // Case 1: If the target of compute_at is split,
+  // we will move the compute_at location to the inner iterator.
+  // Case 2: If the target of compute_at is fused,
+  // we will move the compute_at location to the newly fused iterator.
+  // Note that case 2 can only happen if the target of compute_at
+  // is the innermost operand of fuse operation.
+
+  // Map an old invalid attach point to its new valid attach point
+  std::unordered_map<IterVar, IterVar> replace_map;
+
+  for (Stage stage : sch->stages) {
+    for (Stage s = stage; s.defined();) {
+      // The following logic is similar to the `CreateAttachPath` in `src/te/schedule/graph.h`,
+      // because we follow the validation check in that function to legalize the attach.
+      Stage spec = s.GetAttachSpec();
+      if (spec->attach_type != kScope) {
+        break;
+      }
+      bool start_attach = false;
+      IterVar attach_ivar = spec->attach_ivar;
+      s = spec->attach_stage;
+      CHECK(attach_ivar.defined());
+      CHECK(s.defined());
+
+      for (size_t i = s->leaf_iter_vars.size(); i != 0; --i) {
+        IterVar iv = s->leaf_iter_vars[i - 1];
+        if (!start_attach && iv.same_as(attach_ivar)) {
+          start_attach = true;
+          break;
+        }
+      }
+
+      if (!start_attach) {
+        IterVar new_attach_ivar = attach_ivar;
+        bool updated = true;
+        // recursively update the relations
+        while (updated) {
+          updated = false;
+          for (const auto& rel : s->relations) {
+            if (const FuseNode* r = rel.as<FuseNode>()) {
+              if (new_attach_ivar.same_as(r->inner)) {
+                new_attach_ivar = r->fused;
+                updated = true;
+              }
+            } else if (const SplitNode* r = rel.as<SplitNode>()) {
+              if (new_attach_ivar.same_as(r->parent)) {
+                new_attach_ivar = r->inner;
+                updated = true;
+              }
+            }
+          }
+          replace_map[attach_ivar] = new_attach_ivar;
+        }
+      }
+    }
+  }
+
+  // remap the parent relation
+  for (Stage s : sch->stages) {
+    if (s->attach_type != kScope) continue;
+    if (replace_map.count(s->attach_ivar)) {
+      s->attach_ivar = replace_map.at(s->attach_ivar);
+    }
+  }
+  for (Stage s : sch->groups) {
+    if (s->attach_type != kScope) continue;
+    if (replace_map.count(s->attach_ivar)) {
+      s->attach_ivar = replace_map.at(s->attach_ivar);
+    }
+  }
+}
+
 Schedule Schedule::normalize() {
   Schedule sn = copy();
   InjectInline(sn.operator->());
-  RebaseNonZeroMinLoop(sn);
+  RebaseNonZeroMinLoop(sn.operator->());
+  LegalizeInvalidAttach(sn.operator->());
   return sn;
 }
 
diff --git a/tests/python/unittest/test_te_schedule.py b/tests/python/unittest/test_te_schedule.py
index 2c851cc..c00ee70 100644
--- a/tests/python/unittest/test_te_schedule.py
+++ b/tests/python/unittest/test_te_schedule.py
@@ -289,6 +289,25 @@ def test_tensor_intrin_scalar_params():
 assert 
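The relation walk at the heart of `LegalizeInvalidAttach` in the commit above can be sketched in plain Python. This is an illustrative model only: the hypothetical `legalize_attach` helper and the tuple encoding of split/fuse relations are assumptions, while the real pass operates on TVM's C++ `Stage` objects.

```python
# Pure-Python model of the relation walk in LegalizeInvalidAttach.
# `relations` encodes schedule relations as tuples:
#   ("split", parent, outer, inner)  or  ("fuse", outer, inner, fused)
def legalize_attach(attach_ivar, relations):
    new_ivar = attach_ivar
    updated = True
    while updated:
        updated = False
        for rel in relations:
            if rel[0] == "split" and new_ivar == rel[1]:
                new_ivar = rel[3]  # split target -> move to the inner iterator
                updated = True
            elif rel[0] == "fuse" and new_ivar == rel[2]:
                new_ivar = rel[3]  # inner operand of a fuse -> move to the fused iterator
                updated = True
    return new_ivar
```

For example, if the attach target `i` is split and its halves are then fused, the attach point migrates through `i.inner` to the fused iterator, mirroring cases 1 and 2 in the commit's comment.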

[GitHub] [incubator-tvm] merrymercy merged pull request #5917: [TE] Add LegalizeInvalidAttach to legalize the compute_at location after split or fuse

2020-06-25 Thread GitBox


merrymercy merged pull request #5917:
URL: https://github.com/apache/incubator-tvm/pull/5917


   







[GitHub] [incubator-tvm] jroesch opened a new pull request #5932: [Frontend][Relay] Add Parser 2.0

2020-06-25 Thread GitBox


jroesch opened a new pull request #5932:
URL: https://github.com/apache/incubator-tvm/pull/5932


   This PR implements a pure C++ parser for Relay's text format and starts to 
lay the groundwork for improved error reporting and full-program parsing work that 
I will send an RFC for sometime next week. The goal is to remove the external 
dependency on ANTLR and make it easier for non-parsing experts to make simple 
modifications or tweaks to the parser. 
   
   I have implemented nearly all of the expression and definition parsing; I have 
some remaining work to do on parsing types and on ensuring end-to-end examples 
work. I am opening the PR now in draft form to solicit some initial 
feedback. 
   
   ### Features
   - [ ] graph definitions and variables
   - [ ] comments
   - [ ] integer literals 
   - [ ] float literals
   - [ ] boolean literals 
   - [ ] unary operations
   - [ ] binary operations 
   - [ ] parens 
   - [ ] operator table and operator precedence 
   - [ ] let bindings 
   - [ ] sequencing 
   - [ ] tuple expressions
   - [ ] function literals 
   - [ ] top-level global functions
   - [ ] recursive calls
   - [ ] if-then-else
   - [ ] function calls 
   - [ ] incomplete types 
   - [ ] builtin types 
   - [ ] tuple types 
   - [ ] adt definitions
   - [ ] match expression 
   - [ ] extern types 
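The "operator table and operator precedence" item refers to table-driven expression parsing. A minimal precedence-climbing sketch of that general technique (illustrative only; the actual Relay parser in this PR is C++ and structured differently):

```python
# Minimal precedence-climbing parser over a token list; binary operators
# are looked up in a precedence table, mirroring the "operator table" idea.
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def parse_expr(tokens, min_prec=1):
    lhs = tokens.pop(0)  # operand (an int in this toy grammar)
    while tokens and tokens[0] in PREC and PREC[tokens[0]] >= min_prec:
        op = tokens.pop(0)
        # parse the right-hand side at higher precedence (left-associative ops)
        rhs = parse_expr(tokens, PREC[op] + 1)
        lhs = (op, lhs, rhs)
    return lhs
```

Here `parse_expr([1, "+", 2, "*", 3])` groups the multiplication tighter, producing a nested tuple tree rather than a flat left-to-right parse.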







[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5917: [TE] Add LegalizeInvalidAttach to legalize the compute_at location after split or fuse

2020-06-25 Thread GitBox


junrushao1994 commented on a change in pull request #5917:
URL: https://github.com/apache/incubator-tvm/pull/5917#discussion_r445941711



##
File path: .gitignore
##
@@ -196,6 +196,7 @@ tvm_t.*
 .python_history
 .pytest_cache
 .local
+cmake-build-debug

Review comment:
   Maybe we don’t need this line









[GitHub] [incubator-tvm] s1113950 edited a comment on pull request #5456: Creates a TVM wheel install

2020-06-25 Thread GitBox


s1113950 edited a comment on pull request #5456:
URL: https://github.com/apache/incubator-tvm/pull/5456#issuecomment-649922257


   I was told to stop working on this until a later date while I was at OctoML, 
and then my contract with OctoML ended. Sorry, I forgot to mention that in a 
comment on this PR.
   @fhaynes do you know if there's internal progress on this? I do know 
there's a script we had used that helped build a Python TVM package, but it 
wasn't a wheel specifically. From what I remember, that script could probably 
be adapted to this purpose.







[GitHub] [incubator-tvm] s1113950 commented on pull request #5456: Creates a TVM wheel install

2020-06-25 Thread GitBox


s1113950 commented on pull request #5456:
URL: https://github.com/apache/incubator-tvm/pull/5456#issuecomment-649922257


   I was told to stop working on this until a later date while I was at OctoML, 
and then my contract ended. Sorry, I forgot to mention that in a comment on 
this PR.
   @fhaynes do you know if there's internal progress on this? I do know 
there's a script we had used that helped build a Python TVM package, but it 
wasn't a wheel specifically. From what I remember, that script could probably 
be adapted to this purpose.







[GitHub] [incubator-tvm] tqchen merged pull request #5929: [Frontend] Check all unsupported ops before raising an exception

2020-06-25 Thread GitBox


tqchen merged pull request #5929:
URL: https://github.com/apache/incubator-tvm/pull/5929


   







[incubator-tvm] branch master updated (16c5d4d -> 318d268)

2020-06-25 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 16c5d4d  [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify (#5922)
 add 318d268  refine error (#5929)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/mxnet.py | 20 
 1 file changed, 16 insertions(+), 4 deletions(-)



[incubator-tvm] branch master updated (6d59ed4 -> 16c5d4d)

2020-06-25 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 6d59ed4  Two small fixes to AMDCPU codegen for LLVM 10+ and ROCm 3.5+ (#5920)
 add 16c5d4d  [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify (#5922)

No new revisions were added by this update.

Summary of changes:
 src/arith/canonical_simplify.cc| 1 +
 tests/python/unittest/test_arith_canonical_simplify.py | 5 +++--
 2 files changed, 4 insertions(+), 2 deletions(-)



[GitHub] [incubator-tvm] yzhliu merged pull request #5922: [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify

2020-06-25 Thread GitBox


yzhliu merged pull request #5922:
URL: https://github.com/apache/incubator-tvm/pull/5922


   







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5930: [PatternLang] Don't rewrite expressions used outside of the pattern

2020-06-25 Thread GitBox


comaniac commented on a change in pull request #5930:
URL: https://github.com/apache/incubator-tvm/pull/5930#discussion_r445919066



##
File path: tests/python/relay/test_dataflow_pattern.py
##
@@ -1133,6 +1133,37 @@ def test_partition_double_batchnorm():
 reference = f2(gamma, f1(gamma, x, mean, var, beta), mean, var, beta)
 assert tvm.ir.structural_equal(partitioned, reference)
 
+def test_overlappting_partitions():
+x = wildcard()
+gamma = wildcard()
+beta = wildcard()
+moving_mean = wildcard()
+moving_var = wildcard()
+bn_node = is_op('nn.batch_norm')(x, gamma, beta, moving_mean, moving_var)
+tuple_get_item_node = TupleGetItemPattern(bn_node, 0)
+
+x = relay.var('x')
+var = relay.var('var')
+mean = relay.var('mean')
+beta = relay.var('beta')
+gamma = relay.var('gamma')
+BN = relay.op.nn.batch_norm(x, gamma, beta, mean, var, epsilon=1e-5)
+T1 = BN[0]
+T2 = BN[0]
+add = T1 + T2
+
+assert tuple_get_item_node.partition(add) == add

Review comment:
   Ah I see. That makes sense.









[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #5930: [PatternLang] Don't rewrite expressions used outside of the pattern

2020-06-25 Thread GitBox


mbrookhart commented on a change in pull request #5930:
URL: https://github.com/apache/incubator-tvm/pull/5930#discussion_r445915192



##
File path: tests/python/relay/test_dataflow_pattern.py
##
@@ -1133,6 +1133,37 @@ def test_partition_double_batchnorm():
 reference = f2(gamma, f1(gamma, x, mean, var, beta), mean, var, beta)
 assert tvm.ir.structural_equal(partitioned, reference)
 
+def test_overlappting_partitions():
+x = wildcard()
+gamma = wildcard()
+beta = wildcard()
+moving_mean = wildcard()
+moving_var = wildcard()
+bn_node = is_op('nn.batch_norm')(x, gamma, beta, moving_mean, moving_var)
+tuple_get_item_node = TupleGetItemPattern(bn_node, 0)
+
+x = relay.var('x')
+var = relay.var('var')
+mean = relay.var('mean')
+beta = relay.var('beta')
+gamma = relay.var('gamma')
+BN = relay.op.nn.batch_norm(x, gamma, beta, mean, var, epsilon=1e-5)
+T1 = BN[0]
+T2 = BN[0]
+add = T1 + T2
+
+assert tuple_get_item_node.partition(add) == add

Review comment:
   Partitioning either path is invalid, since either side would need 
intermediate nodes from the other, so we expect the original expression to come 
back unchanged, thus the `==`. We treat it as a match, but we don't treat it as 
something we can independently rewrite.

##
File path: tests/python/relay/test_dataflow_pattern.py
##
@@ -1133,6 +1133,37 @@ def test_partition_double_batchnorm():
 reference = f2(gamma, f1(gamma, x, mean, var, beta), mean, var, beta)
 assert tvm.ir.structural_equal(partitioned, reference)
 
+def test_overlappting_partitions():
+x = wildcard()
+gamma = wildcard()
+beta = wildcard()
+moving_mean = wildcard()
+moving_var = wildcard()
+bn_node = is_op('nn.batch_norm')(x, gamma, beta, moving_mean, moving_var)
+tuple_get_item_node = TupleGetItemPattern(bn_node, 0)
+
+x = relay.var('x')
+var = relay.var('var')
+mean = relay.var('mean')
+beta = relay.var('beta')
+gamma = relay.var('gamma')
+BN = relay.op.nn.batch_norm(x, gamma, beta, mean, var, epsilon=1e-5)
+T1 = BN[0]
+T2 = BN[0]
+add = T1 + T2
+
+assert tuple_get_item_node.partition(add) == add
+
+def test_partition_overused():
+pattern = is_op("nn.relu")(is_op("nn.conv2d")(wildcard(), wildcard()))
+
+x = relay.var('input')
+w = relay.var('weight')
+conv2d = relay.op.nn.conv2d(x, w)
+relu = relay.op.nn.relu(conv2d)
+out = relu + conv2d
+
+assert pattern.partition(out) == out

Review comment:
   Again, fusing the conv and relu would make the rest of the expr invalid, 
so we expect the expr to come back unchanged.









[GitHub] [incubator-tvm] liangfu commented on pull request #5921: µTVM CRT modifications for on-device RPC server

2020-06-25 Thread GitBox


liangfu commented on pull request #5921:
URL: https://github.com/apache/incubator-tvm/pull/5921#issuecomment-649886775


   Thanks for the quick response and the comprehensive RFC. Let's discuss the 
proposed features in the post.







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5931: Add TupleGetItem to CSE

2020-06-25 Thread GitBox


comaniac commented on a change in pull request #5931:
URL: https://github.com/apache/incubator-tvm/pull/5931#discussion_r445901105



##
File path: src/relay/transforms/eliminate_common_subexpr.cc
##
@@ -58,27 +58,52 @@ class CommonSubexprEliminator : public ExprMutator {
 
 auto it = expr_map_.find(new_call->op);
 if (it != expr_map_.end()) {
-  for (const CallNode* candidate : it->second) {
-bool is_equivalent = true;
-if (!attrs_equal(new_call->attrs, candidate->attrs)) {
-  continue;
+  for (const Expr& candidate_expr : it->second) {
+if (const CallNode* candidate = candidate_expr.as()) {
+  bool is_equivalent = true;
+  if (!attrs_equal(new_call->attrs, candidate->attrs)) {
+continue;
+  }
+  for (size_t i = 0; i < new_call->args.size(); i++) {
+if (!new_call->args[i].same_as(candidate->args[i]) &&
+!IsEqualScalar(new_call->args[i], candidate->args[i])) {
+  is_equivalent = false;
+  break;
+}
+  }
+  if (!is_equivalent) continue;
+  return GetRef(candidate);
 }
-for (size_t i = 0; i < new_call->args.size(); i++) {
-  if (!new_call->args[i].same_as(candidate->args[i]) &&
-  !IsEqualScalar(new_call->args[i], candidate->args[i])) {
-is_equivalent = false;
-break;
+  }
+}
+expr_map_[new_call->op].push_back(new_expr);
+return new_expr;
+  }
+
+  Expr VisitExpr_(const TupleGetItemNode* op) final {
+Expr new_expr = ExprMutator::VisitExpr_(op);
+const TupleGetItemNode* new_tuple = new_expr.as();

Review comment:
   `new_tuple` is a bit confusing as this is not actually a `tuple`. Maybe 
`new_tuple_item` would be better?
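The deduplication this diff performs, for both call nodes and tuple-get-item nodes, can be illustrated with a toy structural-CSE pass. The tuple encoding and `cse` helper below are assumptions for illustration, not TVM's implementation:

```python
# Toy common-subexpression elimination: expressions are nested tuples such as
# ("call", "add", a, b) or ("tuple_get_item", tup, 0); structurally equal
# subexpressions are replaced with a single shared instance.
def cse(expr, seen=None):
    if seen is None:
        seen = {}
    if not isinstance(expr, tuple):
        return expr  # leaf: op name, var name, index, ...
    new_expr = tuple(cse(e, seen) for e in expr)
    # return the previously-seen structurally-equal expression, if any
    return seen.setdefault(new_expr, new_expr)
```

After CSE, two identical `tuple_get_item` subexpressions become one shared node, which is exactly what lets a single pattern partition cover what used to be duplicate matches.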










[GitHub] [incubator-tvm] areusch commented on pull request #5921: µTVM CRT modifications for on-device RPC server

2020-06-25 Thread GitBox


areusch commented on pull request #5921:
URL: https://github.com/apache/incubator-tvm/pull/5921#issuecomment-649874830


   hi @liangfu, I posted up an RFC about this here: 
https://discuss.tvm.ai/t/rfc-misra-c-changes-for-rpc-support/7098/2







[GitHub] [incubator-tvm] liangfu commented on pull request #5456: Creates a TVM wheel install

2020-06-25 Thread GitBox


liangfu commented on pull request #5456:
URL: https://github.com/apache/incubator-tvm/pull/5456#issuecomment-649874396


   Hi @s1113950, is there any update lately?







[GitHub] [incubator-tvm] liangfu commented on pull request #5921: µTVM CRT modifications for on-device RPC server

2020-06-25 Thread GitBox


liangfu commented on pull request #5921:
URL: https://github.com/apache/incubator-tvm/pull/5921#issuecomment-649873219


   @areusch Thanks for your valuable contribution. Since this is a rather big 
change that might affect many existing users, would you please send an RFC for 
discussion before taking actions on implementation?







[GitHub] [incubator-tvm] mbrookhart edited a comment on issue #5928: [PatternLang] The pattern failed to match some subgraphs in a model

2020-06-25 Thread GitBox


mbrookhart edited a comment on issue #5928:
URL: https://github.com/apache/incubator-tvm/issues/5928#issuecomment-649870756


   Those two PRs plus this change to the script fix the issue for me:
   
   ```
   import re
   import tvm
   from tvm import relay
   from tvm.relay.dataflow_pattern import *
   import gluoncv as gcv  # requires: pip install gluoncv
   from tvm.relay.build_module import bind_params_by_name
   
   def get_gcv_model(model_name):
   """Pull a Gluon CV model."""
   import gluoncv as gcv 
   
   model_name = model_name.lower()
   
   print('Pulling the model from Gluon CV model zoo...')
   shape = (1, 3, 224, 224)
   if model_name.find('inception') != -1: 
   shape = (1, 3, 299, 299)
   elif model_name.find('yolo3') != -1: 
   shape = (1, 3, 320, 320)
   elif model_name.startswith('ssd'):
   tokens = re.search(r'ssd_(\d+)_', model_name)
   size = int(tokens.group(1))
   shape = (1, 3, size, size)
   net = gcv.model_zoo.get_model(model_name, pretrained=True)
   ret = relay.frontend.from_mxnet(net, shape={'data': shape})
   return ret[0], ret[1], ('data', shape)
   
   mod, params, data_shape = get_gcv_model('mobilenetv2_1.0')
   mod["main"] = bind_params_by_name(mod["main"], params)
   mod = relay.transform.EliminateCommonSubexpr()(mod)
   
   bn_out = is_op('nn.batch_norm')(wildcard(), wildcard(), wildcard(), wildcard(), wildcard())
   pat = is_tuple_get_item(bn_out, 0)
   print(pat.partition(mod['main']))
   ```







[GitHub] [incubator-tvm] mbrookhart commented on issue #5928: [PatternLang] The pattern failed to match some subgraphs in a model

2020-06-25 Thread GitBox


mbrookhart commented on issue #5928:
URL: https://github.com/apache/incubator-tvm/issues/5928#issuecomment-649870756


   Those two PRs plus this change to the script fix the issue for me:
   
   ```
   import re
   import tvm
   from tvm import relay
   from tvm.relay.dataflow_pattern import *
   import gluoncv as gcv  # requires: pip install gluoncv
   from tvm.relay.build_module import bind_params_by_name
   
   def get_gcv_model(model_name):
   """Pull a Gluon CV model."""
   import gluoncv as gcv 
   
   model_name = model_name.lower()
   
   print('Pulling the model from Gluon CV model zoo...')
   shape = (1, 3, 224, 224)
   if model_name.find('inception') != -1: 
   shape = (1, 3, 299, 299)
   elif model_name.find('yolo3') != -1: 
   shape = (1, 3, 320, 320)
   elif model_name.startswith('ssd'):
   tokens = re.search(r'ssd_(\d+)_', model_name)
   size = int(tokens.group(1))
   shape = (1, 3, size, size)
   net = gcv.model_zoo.get_model(model_name, pretrained=True)
   ret = relay.frontend.from_mxnet(net, shape={'data': shape})
   return ret[0], ret[1], ('data', shape)
   
   mod, params, data_shape = get_gcv_model('mobilenetv2_1.0')
   mod["main"] = bind_params_by_name(mod["main"], params)
   mod = relay.transform.EliminateCommonSubexpr()(mod)
   
   bn_out = is_op('nn.batch_norm')(wildcard(), wildcard(), wildcard(), wildcard(), wildcard())
   pat = is_tuple_get_item(bn_out, 0)
   print(pat.partition(mod['main']))
   ```







[GitHub] [incubator-tvm] mbrookhart opened a new pull request #5931: Add TupleGetItem to CSE

2020-06-25 Thread GitBox


mbrookhart opened a new pull request #5931:
URL: https://github.com/apache/incubator-tvm/pull/5931


   @MarisaKirisame @comaniac 
   
   Add TupleGetItem to CSE. Helps solve the issue in #5928 
   







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5930: [PatternLang] Don't rewrite expressions used outside of the pattern

2020-06-25 Thread GitBox


comaniac commented on a change in pull request #5930:
URL: https://github.com/apache/incubator-tvm/pull/5930#discussion_r445892435



##
File path: tests/python/relay/test_dataflow_pattern.py
##
@@ -1133,6 +1133,37 @@ def test_partition_double_batchnorm():
 reference = f2(gamma, f1(gamma, x, mean, var, beta), mean, var, beta)
 assert tvm.ir.structural_equal(partitioned, reference)
 
+def test_overlappting_partitions():
+x = wildcard()
+gamma = wildcard()
+beta = wildcard()
+moving_mean = wildcard()
+moving_var = wildcard()
+bn_node = is_op('nn.batch_norm')(x, gamma, beta, moving_mean, moving_var)
+tuple_get_item_node = TupleGetItemPattern(bn_node, 0)
+
+x = relay.var('x')
+var = relay.var('var')
+mean = relay.var('mean')
+beta = relay.var('beta')
+gamma = relay.var('gamma')
+BN = relay.op.nn.batch_norm(x, gamma, beta, mean, var, epsilon=1e-5)
+T1 = BN[0]
+T2 = BN[0]
+add = T1 + T2
+
+assert tuple_get_item_node.partition(add) == add
+
+def test_partition_overused():
+pattern = is_op("nn.relu")(is_op("nn.conv2d")(wildcard(), wildcard()))
+
+x = relay.var('input')
+w = relay.var('weight')
+conv2d = relay.op.nn.conv2d(x, w)
+relu = relay.op.nn.relu(conv2d)
+out = relu + conv2d
+
+assert pattern.partition(out) == out

Review comment:
   ditto.

##
File path: tests/python/relay/test_dataflow_pattern.py
##
@@ -1133,6 +1133,37 @@ def test_partition_double_batchnorm():
 reference = f2(gamma, f1(gamma, x, mean, var, beta), mean, var, beta)
 assert tvm.ir.structural_equal(partitioned, reference)
 
+def test_overlappting_partitions():
+x = wildcard()
+gamma = wildcard()
+beta = wildcard()
+moving_mean = wildcard()
+moving_var = wildcard()
+bn_node = is_op('nn.batch_norm')(x, gamma, beta, moving_mean, moving_var)
+tuple_get_item_node = TupleGetItemPattern(bn_node, 0)
+
+x = relay.var('x')
+var = relay.var('var')
+mean = relay.var('mean')
+beta = relay.var('beta')
+gamma = relay.var('gamma')
+BN = relay.op.nn.batch_norm(x, gamma, beta, mean, var, epsilon=1e-5)
+T1 = BN[0]
+T2 = BN[0]
+add = T1 + T2
+
+assert tuple_get_item_node.partition(add) == add

Review comment:
   Two questions about this test case:
   1. Should we use `structural_equal` here?
   2. Does that mean we no longer treat `BN -> T2` as a match?









[GitHub] [incubator-tvm] mbrookhart opened a new pull request #5930: [PatternLang] Don't rewrite expressions used outside of the pattern

2020-06-25 Thread GitBox


mbrookhart opened a new pull request #5930:
URL: https://github.com/apache/incubator-tvm/pull/5930


   @comaniac 
   
   While investigating #5928, I discovered an issue where the pattern language 
would rewrite expressions used outside of the pattern. This PR prevents that 
issue, but doesn't fix #5928.







[GitHub] [incubator-tvm] mbrookhart commented on issue #5928: [PatternLang] The pattern failed to match some subgraphs in a model

2020-06-25 Thread GitBox


mbrookhart commented on issue #5928:
URL: https://github.com/apache/incubator-tvm/issues/5928#issuecomment-649836440


   Yes. I think the right solution is CSE, but that is a lot of work.
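For readers following along, the idea behind common subexpression elimination can be sketched in a few lines of plain Python. This is a toy hash-consing pass over a hypothetical `Expr` node type, not TVM's Relay IR or any actual CSE implementation:

```python
# Toy common-subexpression elimination via hash-consing.
# "Expr" is a hypothetical node type for illustration only.
class Expr:
    def __init__(self, op, *args):
        self.op = op
        self.args = args

    def key(self):
        # Structural identity: operator plus ids of already-deduped children.
        return (self.op, tuple(id(a) for a in self.args))

def cse(root):
    seen = {}

    def rebuild(node):
        if not isinstance(node, Expr):
            return node  # leaves (e.g. variable names) pass through
        node = Expr(node.op, *(rebuild(a) for a in node.args))
        # Reuse the first node ever seen with this structural key.
        return seen.setdefault(node.key(), node)

    return rebuild(root)

# Two structurally identical batch_norm-like subtrees built separately,
# mirroring the duplicated batch_norm uses discussed in this issue.
x = "input"
left = Expr("get0", Expr("bn", x))
right = Expr("get0", Expr("bn", x))
dag = cse(Expr("add", left, right))
print(dag.args[0] is dag.args[1])  # True: one shared subtree remains
```

With the duplicate collapsed into a single shared node, a pattern partitioner only ever sees one `bn -> get0` occurrence to extract.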







[GitHub] [incubator-tvm] comaniac commented on issue #5928: [PatternLang] The pattern failed to match some subgraphs in a model

2020-06-25 Thread GitBox


comaniac commented on issue #5928:
URL: https://github.com/apache/incubator-tvm/issues/5928#issuecomment-649824271


   I guess an ideal solution would be to figure out that those two matches 
are duplicates and reuse the first partitioned function for the second match? 
But that would look like an ad hoc solution targeting Relay graphs with 
duplicated nodes...







[GitHub] [incubator-tvm] comaniac commented on issue #5928: [PatternLang] The pattern failed to match some subgraphs in a model

2020-06-25 Thread GitBox


comaniac commented on issue #5928:
URL: https://github.com/apache/incubator-tvm/issues/5928#issuecomment-649819741


   Ah, apparently I was wrong: MobileNet V2 has residual connections...I see. So 
`%20` and `%33` are partitioned into a function. In this case "another `%20`" is 
rejected from being partitioned again with `%21`. I need to think about the proper 
semantics for this case. Having Relay-level CSE would definitely solve this 
issue, but I'm not sure if it's overkill.







[GitHub] [incubator-tvm] mbrookhart commented on issue #5928: [PatternLang] The pattern failed to match some subgraphs in a model

2020-06-25 Thread GitBox


mbrookhart commented on issue #5928:
URL: https://github.com/apache/incubator-tvm/issues/5928#issuecomment-649816992


   It should probably just be:
   ```
   %34 = add(%32, %21)
   ```
   to get the behavior you want. So...Relay-level CSE? I'm going to double-check 
the partition rejection logic; I'm not sure it works 100% correctly right now, and I 
think I might be partitioning one of those when I shouldn't.







[GitHub] [incubator-tvm] comaniac opened a new pull request #5929: [Frontend] Check all unsupported ops before raising an exception

2020-06-25 Thread GitBox


comaniac opened a new pull request #5929:
URL: https://github.com/apache/incubator-tvm/pull/5929


   To better understand which ops are missing in the MXNet frontend, it would 
be better to traverse an entire model and provide a list of unsupported ops and 
their counts instead of throwing an exception right after the first unsupported 
op.
   
   cc @icemelon9 @tqchen 
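The strategy can be sketched generically. The `SUPPORTED` set and `check_ops` helper below are hypothetical stand-ins, not the actual frontend code: scan every operator first, then report all missing ones at once.

```python
from collections import Counter

# Hypothetical stand-in for the frontend's supported-operator table.
SUPPORTED = {"conv2d", "batch_norm", "relu"}

def check_ops(op_names):
    """Tally every unsupported op, then fail once with the full list,
    instead of raising on the first unsupported op encountered."""
    missing = Counter(name for name in op_names if name not in SUPPORTED)
    if missing:
        summary = ", ".join(f"{op} ({n})" for op, n in sorted(missing.items()))
        raise NotImplementedError(f"Unsupported ops: {summary}")

try:
    check_ops(["conv2d", "foo", "relu", "foo", "bar"])
except NotImplementedError as err:
    print(err)  # Unsupported ops: bar (1), foo (2)
```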







[GitHub] [incubator-tvm] mbrookhart commented on issue #5928: [PatternLang] The pattern failed to match some subgraphs in a model

2020-06-25 Thread GitBox


mbrookhart commented on issue #5928:
URL: https://github.com/apache/incubator-tvm/issues/5928#issuecomment-649815056


   Found it:
   
   The original model has this:
   ```
 %20 = nn.batch_norm(%19, meta[relay.Constant][26] /* ty=Tensor[(24), 
float32] */ /* ty=Tensor[(24), float32] */, meta[relay.Constant][27] /* 
ty=Tensor[(24), float32] */ /* ty=Tensor[(24), float32] */, 
meta[relay.Constant][28] /* ty=Tensor[(24), float32] */ /* ty=Tensor[(24), 
float32] */, meta[relay.Constant][29] /* ty=Tensor[(24), float32] */ /* 
ty=Tensor[(24), float32] */) /* ty=(Tensor[(1, 24, 56, 56), float32], 
Tensor[(24), float32], Tensor[(24), float32]) */;
 %21 = %20.0;
   ```
   and later:
   ```
 %31 = nn.batch_norm(%30, meta[relay.Constant][41] /* ty=Tensor[(24), 
float32] */ /* ty=Tensor[(24), float32] */, meta[relay.Constant][42] /* 
ty=Tensor[(24), float32] */ /* ty=Tensor[(24), float32] */, 
meta[relay.Constant][43] /* ty=Tensor[(24), float32] */ /* ty=Tensor[(24), 
float32] */, meta[relay.Constant][44] /* ty=Tensor[(24), float32] */ /* 
ty=Tensor[(24), float32] */) /* ty=(Tensor[(1, 24, 56, 56), float32], 
Tensor[(24), float32], Tensor[(24), float32]) */;
 %32 = %31.0;
 %33 = %20.0;
 %34 = add(%32, %33) /* ty=Tensor[(1, 24, 56, 56), float32] */;
   ```
   







[GitHub] [incubator-tvm] comaniac commented on issue #5928: [PatternLang] The pattern failed to match some subgraphs in a model

2020-06-25 Thread GitBox


comaniac commented on issue #5928:
URL: https://github.com/apache/incubator-tvm/issues/5928#issuecomment-649814547


   Yeah, that should not be the case for the MobileNet model. MobileNet should be 
a single dataflow pipeline without branches or residual connections.







[GitHub] [incubator-tvm] mbrookhart commented on issue #5928: [PatternLang] The pattern failed to match some subgraphs in a model

2020-06-25 Thread GitBox


mbrookhart commented on issue #5928:
URL: https://github.com/apache/incubator-tvm/issues/5928#issuecomment-649810207


   This isn't making much sense to me; it looks like a few of the batch_norm 
call nodes are duplicated, i.e., they are exactly the same object. The partition pass 
is refusing to fuse the same op into two different graphs, because that would be bad.
   But I don't see an obvious duplication in the Relay IR. Digging more.
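The rejection behavior described above can be illustrated with a toy greedy partitioner (a simplification for illustration, not the actual pass): a node already claimed by one accepted partition cannot be claimed by a second, overlapping match.

```python
def partition(matches):
    """Greedy partitioner sketch: accept a match only if none of its
    nodes has already been claimed by an earlier accepted match."""
    claimed, accepted = set(), []
    for group in matches:
        if claimed.isdisjoint(group):
            claimed |= group
            accepted.append(group)
    return accepted

# One batch_norm object (%20) feeds two TupleGetItem uses (%21, %33),
# so the two candidate matches overlap on %20.
first = {"%20", "%21"}
second = {"%20", "%33"}
accepted = partition([first, second])
print(len(accepted))  # 1: the second, overlapping match is rejected
```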







[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5848: [TFLite] QNN support for TFLite 2.1.0 quantized models

2020-06-25 Thread GitBox


anijain2305 commented on a change in pull request #5848:
URL: https://github.com/apache/incubator-tvm/pull/5848#discussion_r445827227



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -2566,17 +2613,27 @@ def convert_quantize(self, op):
 input_tensors = self.get_input_tensors(op)
 assert len(input_tensors) == 1, "input tensors length should be 1"
 input_tensor = input_tensors[0]
+input_tensor_type_str = self.get_tensor_type_str(input_tensor.tensor.Type())
 in_expr = self.get_expr(input_tensor.tensor_idx)
 
 output_tensors = self.get_output_tensors(op)
 assert len(output_tensors) == 1, "output tensors length should be 1"
 output_tensor = output_tensors[0]
+output_tensor_type_str = self.get_tensor_type_str(output_tensor.tensor.Type())
 
 # The output must be quantized
 assert output_tensor.qnn_params
-# Quantize the input
-out = self.quantize(in_expr, output_tensor)
 
+# TFLite Quantize op can also act as Requantize op
+if input_tensor_type_str == "float32":
+out = self.quantize(in_expr, output_tensor)
+else:
+out = _qnn.op.requantize(in_expr,
+ input_scale=input_tensor.qnn_params['scale'],
+ input_zero_point=input_tensor.qnn_params['zero_point'],
+ output_scale=output_tensor.qnn_params['scale'],
+ output_zero_point=output_tensor.qnn_params['zero_point'],
+ out_dtype=output_tensor_type_str)
 return out

Review comment:
   
![image](https://user-images.githubusercontent.com/13822661/85793474-ebec7980-b6e9-11ea-8e53-fa3121ba50c0.png)
   
   The test case above was added to force both types of quantize nodes.
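For context, requantize re-expresses a quantized integer under a new scale and zero point. A minimal Python sketch of the arithmetic (illustrative only; the real op uses fixed-point multiplication and configurable rounding modes):

```python
def requantize(q_in, in_scale, in_zp, out_scale, out_zp,
               qmin=-128, qmax=127):
    """Map a quantized value to a new (scale, zero point) pair so that
    in_scale * (q_in - in_zp) ~= out_scale * (q_out - out_zp)."""
    real = in_scale * (q_in - in_zp)
    q_out = round(real / out_scale) + out_zp
    return max(qmin, min(qmax, q_out))  # saturate to the output dtype range

# 0.5 * (10 - 0) = 5.0; 5.0 / 0.25 = 20; plus output zero point 3 -> 23
print(requantize(10, 0.5, 0, 0.25, 3))  # 23
```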









[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5910: [Please Do Not Review] Trying to Unlink libcuda.so

2020-06-25 Thread GitBox


tqchen edited a comment on pull request #5910:
URL: https://github.com/apache/incubator-tvm/pull/5910#issuecomment-649803983


   Thanks @jinboci. Please do the experiments locally first before sending the 
PR upstream, since CI resources are pretty limited atm. Because the CUDA runtime 
uses the driver API, we will need to link to the CUDA driver if you want to use 
the CUDA module.







[GitHub] [incubator-tvm] tqchen commented on pull request #5910: [Please Do Not Review] Trying to Unlink libcuda.so

2020-06-25 Thread GitBox


tqchen commented on pull request #5910:
URL: https://github.com/apache/incubator-tvm/pull/5910#issuecomment-649803983


   Thanks @jinboci. Please do the experiments locally first before sending the 
PR upstream, since CI resources are pretty limited atm. 







[GitHub] [incubator-tvm] tqchen closed pull request #5910: [Please Do Not Review] Trying to Unlink libcuda.so

2020-06-25 Thread GitBox


tqchen closed pull request #5910:
URL: https://github.com/apache/incubator-tvm/pull/5910


   







[GitHub] [incubator-tvm] lhutton1 commented on pull request #5915: [BYOC][Contrib] Arm Compute Library integration

2020-06-25 Thread GitBox


lhutton1 commented on pull request #5915:
URL: https://github.com/apache/incubator-tvm/pull/5915#issuecomment-649796679


   > let us consider expand the name acl to arm_compute_lib or some other 
alternatives, since ACL means different things to ML/NLP audiences
   
   I'd be happy to change this; `arm_compute_lib` sounds good to me. @u99127, do 
you have any other preference?







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5915: [BYOC][Contrib] Arm Compute Library integration

2020-06-25 Thread GitBox


comaniac commented on a change in pull request #5915:
URL: https://github.com/apache/incubator-tvm/pull/5915#discussion_r445810621



##
File path: cmake/modules/contrib/ACL.cmake
##
@@ -0,0 +1,68 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# We separate the codegen and runtime build since ACL can only be built
+# for AArch. In the world where we take the cross compilation approach,
+# which is common with arm devices, we need to be able to cross-compile
+# a relay graph on x86 for AArch and then run the graph on AArch.

Review comment:
   I see. Makes sense.









[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5915: [BYOC][Contrib] Arm Compute Library integration

2020-06-25 Thread GitBox


comaniac commented on a change in pull request #5915:
URL: https://github.com/apache/incubator-tvm/pull/5915#discussion_r445810283



##
File path: src/relay/backend/contrib/acl/README.md
##
@@ -0,0 +1,111 @@
+
+
+# Relay Arm Compute Library Integration

Review comment:
   Good question. I'm looking at "tutorials" or "deploy". @tqchen do you 
have a preference?









[GitHub] [incubator-tvm] mbrookhart commented on issue #5928: [PatternLang] The pattern failed to match some subgraphs in a model

2020-06-25 Thread GitBox


mbrookhart commented on issue #5928:
URL: https://github.com/apache/incubator-tvm/issues/5928#issuecomment-649789497


   I'll try to reproduce, thanks for filing the issue!







[GitHub] [incubator-tvm] trevor-m commented on a change in pull request #5857: [OpenCL] Fix OpenCL get_valid_counts errors due to intrinsic atomic_add

2020-06-25 Thread GitBox


trevor-m commented on a change in pull request #5857:
URL: https://github.com/apache/incubator-tvm/pull/5857#discussion_r445798573



##
File path: tests/python/relay/test_op_level5.py
##
@@ -270,8 +270,8 @@ def verify_get_valid_counts(dshape, score_threshold, 
id_index, score_index):
 intrp = relay.create_executor("debug", ctx=ctx, target=target)
 out = intrp.evaluate(func)(np_data)
 tvm.testing.assert_allclose(out[0].asnumpy(), np_out1, rtol=1e-3, 
atol=1e-04)
-# get_valid_count for cuda doesn't do data rearrangement
-if target == 'cuda':
+# get_valid_count for cuda, opencl doesn't do data rearrangement
+if target in ['cuda', 'opencl']:
 return

Review comment:
   OpenCL uses the same implementation as CUDA. The CUDA implementation of 
`get_valid_counts` was changed to no longer rearrange the output of 
`get_valid_counts` because it will be rearranged by NMS later anyway. This 
gives the correct output for NMS. See 
https://github.com/apache/incubator-tvm/pull/5339
   
   That issue with NMS looks to be a separate issue where the CUDA 
implementation wasn't fully updated to match changes to CPU implementation by 
https://github.com/apache/incubator-tvm/pull/4312/
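As a reminder of the semantics being tested, `get_valid_counts` counts boxes whose score exceeds a threshold, and the CPU implementation additionally compacts the valid boxes to the front. A toy sketch in plain Python (hypothetical data layout, not the actual GPU kernel):

```python
def get_valid_counts(boxes, score_threshold, score_index=1):
    """Count boxes scoring above the threshold; compact them to the
    front and pad the tail with -1, as the CPU implementation does.
    The CUDA/OpenCL kernels skip this rearrangement because NMS
    re-sorts the boxes afterwards anyway."""
    valid = [b for b in boxes if b[score_index] > score_threshold]
    pad = [[-1.0] * len(boxes[0]) for _ in range(len(boxes) - len(valid))]
    return len(valid), valid + pad

boxes = [[0, 0.9], [1, 0.1], [2, 0.8]]
count, out = get_valid_counts(boxes, 0.5)
print(count, out)  # 2 [[0, 0.9], [2, 0.8], [-1.0, -1.0]]
```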
   









[GitHub] [incubator-tvm] trevor-m commented on a change in pull request #5857: [OpenCL] Fix OpenCL get_valid_counts errors due to intrinsic atomic_add

2020-06-25 Thread GitBox


trevor-m commented on a change in pull request #5857:
URL: https://github.com/apache/incubator-tvm/pull/5857#discussion_r445799816



##
File path: tests/python/relay/test_op_level5.py
##
@@ -270,8 +270,8 @@ def verify_get_valid_counts(dshape, score_threshold, 
id_index, score_index):
 intrp = relay.create_executor("debug", ctx=ctx, target=target)
 out = intrp.evaluate(func)(np_data)
 tvm.testing.assert_allclose(out[0].asnumpy(), np_out1, rtol=1e-3, 
atol=1e-04)
-# get_valid_count for cuda doesn't do data rearrangement
-if target == 'cuda':
+# get_valid_count for cuda, opencl doesn't do data rearrangement
+if target in ['cuda', 'opencl']:
 return

Review comment:
   See https://github.com/apache/incubator-tvm/pull/5339
















[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5915: [BYOC][Contrib] Arm Compute Library integration

2020-06-25 Thread GitBox


lhutton1 commented on a change in pull request #5915:
URL: https://github.com/apache/incubator-tvm/pull/5915#discussion_r445783066



##
File path: cmake/modules/contrib/ACL.cmake
##
@@ -0,0 +1,68 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# We separate the codegen and runtime build since ACL can only be built
+# for AArch. In the world where we take the cross compilation approach,
+# which is common with arm devices, we need to be able to cross-compile
+# a relay graph on x86 for AArch and then run the graph on AArch.

Review comment:
   Yes, we do that currently; you just need to add USE_ACL_GRAPH_RUNTIME to 
the cmake config. The reason for the separation is that we want to give the 
user the option to only compile the codegen part. This way they can compile an 
ACL module without needing to have ACL present on an x86 device.

##
File path: src/relay/backend/contrib/acl/README.md
##
@@ -0,0 +1,111 @@
+
+
+# Relay Arm Compute Library Integration

Review comment:
   Makes sense, would somewhere under docs be the best place to put this? 
Perhaps `docs/backend/contrib` or `docs/runtime/contrib`?

##
File path: src/relay/backend/contrib/acl/README.md
##
@@ -0,0 +1,111 @@
+
+
+# Relay Arm Compute Library Integration
+Arm Compute Library (ACL) is an open source project that provides accelerated 
kernels for Arm CPUs
+and GPUs. Currently the integration offloads operators to ACL to use 
hand-crafted assembler
+routines in the library. By offloading select operators from a relay graph to 
ACL we can achieve
+a performance boost on such devices.
+
+## Building with ACL support
+The current implementation has two separate build options in cmake. The reason 
for this split is
+that ACL cannot be used on an x86 machine. However, we still want to be 
able to compile an ACL
+runtime module on an x86 machine.
+
+* USE_ACL=ON/OFF - Enabling this flag will add support for compiling an ACL 
runtime module.
+* USE_GRAPH_RUNTIME_ACL=ON/OFF/path-to-acl - Enabling this flag will allow the 
graph runtime to
+compute the ACL offloaded functions.
+
+These flags can be used in different scenarios depending on your setup. For 
example, if you want
+to compile ACL on an x86 machine and then run the module on a remote Arm 
device via RPC, you will
+need to use USE_ACL=ON on the x86 machine and USE_GRAPH_RUNTIME_ACL=ON on the 
remote AArch64
+device.
+## Usage
+_Note:_ this may not stay up-to-date with changes to the API.
+1. Create a relay graph. This may be a single operator or a whole graph. The 
intention is that any
+relay graph can be input. The ACL integration will only pick supported 
operators to be offloaded
+whilst the rest will be computed via TVM. (For this example we will use a 
single
+max_pool2d operator).
+```
+import tvm
+from tvm import relay
+
+data_type = "float32"
+data_shape = (1, 14, 14, 512)
+strides = (2, 2)
+padding = (0, 0, 0, 0)
+pool_size = (2, 2)
+layout = "NHWC"
+output_shape = (1, 7, 7, 512)
+
+data = relay.var('data', shape=data_shape, dtype=data_type)
+out = relay.nn.max_pool2d(data, pool_size=pool_size, strides=strides,
+  layout=layout, padding=padding)
+module = tvm.IRModule.from_expr(out)
+```
+2. Annotate and partition the graph for ACL.

Review comment:
   Will amend









[GitHub] [incubator-tvm] comaniac opened a new issue #5928: [PatternLang] The pattern failed to match some subgraphs in a model

2020-06-25 Thread GitBox


comaniac opened a new issue #5928:
URL: https://github.com/apache/incubator-tvm/issues/5928


   Pointed out by @trevor-m: the following case uses a `batch_norm -> 
get(0)` pattern to match MobileNet V2, but some subgraphs failed to be matched and 
partitioned.
   
   ```python
   import tvm
   from tvm import relay
   from tvm.relay.dataflow_pattern import *
   import gluoncv as gcv # require pip install gluoncv
   from tvm.relay.build_module import bind_params_by_name
   
   def get_gcv_model(model_name):
   """Pull a Gluon CV model."""
   import gluoncv as gcv
   
   model_name = model_name.lower()
   
   print('Pulling the model from Gluon CV model zoo...')
   shape = (1, 3, 224, 224)
   if model_name.find('inception') != -1:
   shape = (1, 3, 299, 299)
   elif model_name.find('yolo3') != -1:
   shape = (1, 3, 320, 320)
   elif model_name.startswith('ssd'):
   tokens = re.search(r'ssd_(\d+)_', model_name)
   size = int(tokens.group(1))
   shape = (1, 3, size, size)
   net = gcv.model_zoo.get_model(model_name, pretrained=True)
   ret = relay.frontend.from_mxnet(net, shape={'data': shape})
   return ret[0], ret[1], ('data', shape)
   
   mod, params, data_shape = get_gcv_model('mobilenetv2_1.0')
   mod["main"] = bind_params_by_name(mod["main"], params)
   
   
   bn_out = is_op('nn.batch_norm')(wildcard(), wildcard(), wildcard(), 
wildcard(), wildcard())
   pat = is_tuple_get_item(bn_out, 0)
   print(pat.partition(mod['main']))
   ```
   
   Here is a log snippet:
   
   ```
 %21 = fn (%FunctionVar_52_0, %FunctionVar_52_1, %FunctionVar_52_2, 
%FunctionVar_52_3, %FunctionVar_52_4, 
PartitionedFromPattern="nn.batch_norm_TupleGetItem0_") {
   %20 = nn.batch_norm(%FunctionVar_52_0, %FunctionVar_52_1, 
%FunctionVar_52_2, %FunctionVar_52_3, %FunctionVar_52_4);
   %20.0
 };
   %22 = %21(%19, meta[relay.Constant][21] /* ty=Tensor[(96), float32] */ /* 
ty=Tensor[(96), float32] */, meta[relay.Constant][22] /* ty=Tensor[(96), 
float32] */ /* ty=Tensor[(96), float32] */, meta[relay.Constant][23] /* 
ty=Tensor[(96), float32] */ /* ty=Tensor[(96), float32] */, 
meta[relay.Constant][24] /* ty=Tensor[(96), float32] */ /* ty=Tensor[(96), 
float32] */);
 %23 = clip(%22, a_min=0f, a_max=6f);
 %24 = nn.conv2d(%23, meta[relay.Constant][25] /* ty=Tensor[(24, 96, 1, 1), 
float32] */ /* ty=Tensor[(24, 96, 1, 1), float32] */, padding=[0, 0, 0, 0], 
channels=24, kernel_size=[1, 1]);
 %25 = nn.batch_norm(%24, meta[relay.Constant][26] /* ty=Tensor[(24), 
float32] */ /* ty=Tensor[(24), float32] */, meta[relay.Constant][27] /* 
ty=Tensor[(24), float32] */ /* ty=Tensor[(24), float32] */, 
meta[relay.Constant][28] /* ty=Tensor[(24), float32] */ /* ty=Tensor[(24), 
float32] */, meta[relay.Constant][29] /* ty=Tensor[(24), float32] */ /* 
ty=Tensor[(24), float32] */);
 %26 = %25.0;
 %27 = nn.conv2d(%26, meta[relay.Constant][30] /* ty=Tensor[(144, 24, 1, 
1), float32] */ /* ty=Tensor[(144, 24, 1, 1), float32] */, padding=[0, 0, 0, 
0], channels=144, kernel_size=[1, 1]);
 %29 = fn (%FunctionVar_50_0, %FunctionVar_50_1, %FunctionVar_50_2, 
%FunctionVar_50_3, %FunctionVar_50_4, 
PartitionedFromPattern="nn.batch_norm_TupleGetItem0_") {
   %28 = nn.batch_norm(%FunctionVar_50_0, %FunctionVar_50_1, 
%FunctionVar_50_2, %FunctionVar_50_3, %FunctionVar_50_4);
   %28.0
 };
   ```
   
   As can be seen, `batch_norm` `%20` and `%28` were successfully matched and 
partitioned, but `%25` wasn't. It seems to me that they are all the same, though.
   
   @mbrookhart could you help take a look? Thanks







[GitHub] [incubator-tvm] icemelon9 commented on pull request #5826: [DYNAMIC] Add Dynamic reshape to a dynamic namespace and add DynamicToStatic Pass

2020-06-25 Thread GitBox


icemelon9 commented on pull request #5826:
URL: https://github.com/apache/incubator-tvm/pull/5826#issuecomment-649765936


   @mbrookhart Since we separate the static and dynamic reshape in this PR, we 
should remove the `Optional` in the `ReshapeAttrs` correspondingly.







[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #5826: [DYNAMIC] Add Dynamic reshape to a dynamic namespace and add DynamicToStatic Pass

2020-06-25 Thread GitBox


icemelon9 commented on a change in pull request #5826:
URL: https://github.com/apache/incubator-tvm/pull/5826#discussion_r445779288



##
File path: python/tvm/relay/op/dyn/transform.py
##
@@ -0,0 +1,74 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# pylint: disable=import-outside-toplevel
+"""Dynamic Transform operators."""
+
+from . import _make
+
+
+def reshape(data, newshape):

Review comment:
   @jroesch Could you comment on this?









[GitHub] [incubator-tvm] tqchen commented on pull request #5925: Fix small typo in nn.conv2d_gemm_weight_transform

2020-06-25 Thread GitBox


tqchen commented on pull request #5925:
URL: https://github.com/apache/incubator-tvm/pull/5925#issuecomment-649753751


   Thanks @giuseros can you add a regression test case as per 
https://tvm.apache.org/docs/contribute/code_review.html#ensure-test-coverage? 







[GitHub] [incubator-tvm] tqchen commented on pull request #5915: [BYOC][Contrib] Arm Compute Library integration

2020-06-25 Thread GitBox


tqchen commented on pull request #5915:
URL: https://github.com/apache/incubator-tvm/pull/5915#issuecomment-649750307


   Let us consider expanding the name acl to arm_compute_lib or some other 
alternative, since ACL means different things to ML/NLP audiences.







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5915: [BYOC][Contrib] Arm Compute Library integration

2020-06-25 Thread GitBox


comaniac commented on a change in pull request #5915:
URL: https://github.com/apache/incubator-tvm/pull/5915#discussion_r445736912



##
File path: cmake/modules/contrib/ACL.cmake
##
@@ -0,0 +1,68 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# We separate the codegen and runtime build since ACL can only be built
+# for AArch. In the world where we take the cross compilation approach,
+# which is common with arm devices, we need to be able to cross-compile
+# a relay graph on x86 for AArch and then run the graph on AArch.

Review comment:
   I am not familiar with runtime compilation. What I know is that for edge
devices we can run `make runtime` to compile only the modules required by the
runtime. Could we do the same for ACL as well?

##
File path: src/relay/backend/contrib/acl/README.md
##
@@ -0,0 +1,111 @@
+
+
+# Relay Arm Compute Library Integration

Review comment:
   It would be better to put this in the document (and use RST doc style) 
so that people can easily find it.

##
File path: src/relay/backend/contrib/acl/README.md
##
@@ -0,0 +1,111 @@
+
+
+# Relay Arm Compute Library Integration
+Arm Compute Library (ACL) is an open source project that provides accelerated 
kernels for Arm CPU's
+and GPU's. Currently the integration offloads operators to ACL to use 
hand-crafted assembler
+routines in the library. By offloading select operators from a relay graph to 
ACL we can achieve
+a performance boost on such devices.
+
+## Building with ACL support
+The current implementation has two separate build options in cmake. The reason 
for this split is because ACL cannot be used on an x86 machine. However, we
still want to be able to compile an ACL runtime module on an x86 machine.
+
+* USE_ACL=ON/OFF - Enabling this flag will add support for compiling an ACL 
runtime module.
+* USE_GRAPH_RUNTIME_ACL=ON/OFF/path-to-acl - Enabling this flag will allow the 
graph runtime to
+compute the ACL offloaded functions.
+
+These flags can be used in different scenarios depending on your setup. For 
example, if you want
+to compile ACL on an x86 machine and then run the module on a remote Arm 
device via RPC, you will
+need to use USE_ACL=ON on the x86 machine and USE_GRAPH_RUNTIME_ACL=ON on the 
remote AArch64
+device.
+## Usage
+_Note:_ this may not stay up-to-date with changes to the API.
+1. Create a relay graph. This may be a single operator or a whole graph. The 
intention is that any
+relay graph can be input. The ACL integration will only pick supported 
operators to be offloaded
+whilst the rest will be computed via TVM. (For this example we will use a 
single
+max_pool2d operator).
+```
+import tvm
+from tvm import relay
+
+data_type = "float32"
+data_shape = (1, 14, 14, 512)
+strides = (2, 2)
+padding = (0, 0, 0, 0)
+pool_size = (2, 2)
+layout = "NHWC"
+output_shape = (1, 7, 7, 512)
+
+data = relay.var('data', shape=data_shape, dtype=data_type)
+out = relay.nn.max_pool2d(data, pool_size=pool_size, strides=strides,
+  layout=layout, padding=padding)
+module = tvm.IRModule.from_expr(out)
+```
+2. Annotate and partition the graph for ACL.

Review comment:
   Are we expecting users to call a sequence of passes by themselves,
especially since you already have `partition_for_acl`?
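As a rough illustration of the two-flag setup quoted from the README (the directory names and layout here are hypothetical, not part of the PR), the cross-compilation scenario could be configured as:

```shell
# Hypothetical two-tree setup: codegen support on the x86 host,
# runtime support on the AArch64 target.
mkdir -p build-x86 build-aarch64
# x86 host: enable compiling ACL runtime modules.
echo 'set(USE_ACL ON)' > build-x86/config.cmake
# AArch64 target: enable the graph runtime to execute ACL offloaded functions.
echo 'set(USE_GRAPH_RUNTIME_ACL ON)' > build-aarch64/config.cmake
cat build-x86/config.cmake build-aarch64/config.cmake
```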









[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5917: [TE] Add LegalizeInvalidAttach to legalize the compute_at location after split or fuse

2020-06-25 Thread GitBox


tqchen edited a comment on pull request #5917:
URL: https://github.com/apache/incubator-tvm/pull/5917#issuecomment-649748029


   cc @vinx13 @Hzfengsy @spectrometerHBH please also help to take a look, we 
can proceed to merge after reviews by a few more eyes







[incubator-tvm] branch v0.6 updated: [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify (#5927)

2020-06-25 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch v0.6
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/v0.6 by this push:
 new 802f055  [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv 
simplify (#5927)
802f055 is described below

commit 802f053312d71e66489f92fcec9ea00b31df
Author: Yizhi Liu 
AuthorDate: Thu Jun 25 11:33:18 2020 -0700

[BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify (#5927)
---
 src/arithmetic/canonical_simplify.cc   | 1 +
 tests/python/unittest/test_arith_canonical_simplify.py | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/arithmetic/canonical_simplify.cc 
b/src/arithmetic/canonical_simplify.cc
index 2a05834..52104f3 100644
--- a/src/arithmetic/canonical_simplify.cc
+++ b/src/arithmetic/canonical_simplify.cc
@@ -715,6 +715,7 @@ SplitDivConst(SplitExpr lhs, int64_t cval, DivMode 
div_mode) {
   CHECK(lhs->DivModeCompatibleTo(div_mode));
   CHECK_EQ(lhs->scale, 1);
   lhs.CopyOnWrite()->lower_factor *= cval;
+  lhs.CopyOnWrite()->div_mode = div_mode;
   return lhs;
 }
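The extra `div_mode` assignment matters because floor and truncating division disagree on negative operands, so the mode has to travel with the split expression. A quick illustration in Python (not TVM code):

```python
def floordiv(a, b):
    # Python's // rounds toward negative infinity, like tir floordiv.
    return a // b

def truncdiv(a, b):
    # C-style division rounds toward zero.
    return int(a / b)

x = -3
print(floordiv(x * 5, 2))  # -8
print(truncdiv(x * 5, 2))  # -7: silently dropping the div mode changes results
```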
 
diff --git a/tests/python/unittest/test_arith_canonical_simplify.py 
b/tests/python/unittest/test_arith_canonical_simplify.py
index 08038ff..aa41d71 100644
--- a/tests/python/unittest/test_arith_canonical_simplify.py
+++ b/tests/python/unittest/test_arith_canonical_simplify.py
@@ -82,6 +82,7 @@ def test_split_index_simplify():
 # floordiv
 fld = tvm.floordiv
 flm = tvm.floormod
+ck.verify(fld(x*5, 2), fld(x*5, 2))
 ck.verify(fld(x, 3) * 3 + flm(x, 3), x)
 ck.verify(fld(x, 6) * 6 + flm(fld(x, 3), 2) * 3 + flm(x, 3), x)
 ck.verify(fld(fld(flm(x, 16), 2) * 2, 4), fld(flm(x, 16), 4))
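The floordiv/floormod identities exercised by the test can be sanity-checked with plain Python, whose `//` and `%` implement floor semantics (an illustrative check, not part of the patch):

```python
def fld(a, b):  # floordiv: rounds toward negative infinity
    return a // b

def flm(a, b):  # floormod: remainder with the sign of the divisor
    return a % b

# Verify the identities from the test over a range including negatives.
for x in range(-50, 50):
    assert fld(x, 3) * 3 + flm(x, 3) == x
    assert fld(x, 6) * 6 + flm(fld(x, 3), 2) * 3 + flm(x, 3) == x
    assert fld(fld(flm(x, 16), 2) * 2, 4) == fld(flm(x, 16), 4)
print("all identities hold")
```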



[GitHub] [incubator-tvm] tqchen merged pull request #5927: [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify

2020-06-25 Thread GitBox


tqchen merged pull request #5927:
URL: https://github.com/apache/incubator-tvm/pull/5927


   







[GitHub] [incubator-tvm] tqchen commented on pull request #5917: [TE] Add LegalizeInvalidAttach to legalize the compute_at location after split or fuse

2020-06-25 Thread GitBox


tqchen commented on pull request #5917:
URL: https://github.com/apache/incubator-tvm/pull/5917#issuecomment-649748029


   cc @vinx13 @Hzfengsy @spectrometerHBH please also help to take a look, we 
can proceed to merge after a few more eyes







[GitHub] [incubator-tvm] yzhliu opened a new pull request #5927: [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify

2020-06-25 Thread GitBox


yzhliu opened a new pull request #5927:
URL: https://github.com/apache/incubator-tvm/pull/5927


   #5922







[GitHub] [incubator-tvm] tqchen commented on pull request #5922: [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify

2020-06-25 Thread GitBox


tqchen commented on pull request #5922:
URL: https://github.com/apache/incubator-tvm/pull/5922#issuecomment-649726080


   @yzhliu please also send a patch to v0.6







[GitHub] [incubator-tvm] yzhliu commented on pull request #5922: [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify

2020-06-25 Thread GitBox


yzhliu commented on pull request #5922:
URL: https://github.com/apache/incubator-tvm/pull/5922#issuecomment-649705664


   @ANSHUMAN87 good catch, this is what the argument `step` I added in #5618 is for







[GitHub] [incubator-tvm] tqchen merged pull request #5920: Two small fixes to AMDCPU codegen for LLVM 10+ and ROCm 3.5+

2020-06-25 Thread GitBox


tqchen merged pull request #5920:
URL: https://github.com/apache/incubator-tvm/pull/5920


   







[incubator-tvm] branch master updated (2a03d24 -> 6d59ed4)

2020-06-25 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 2a03d24  Update install.rst (#5858)
 add 6d59ed4  Two small fixes to AMDCPU codegen for LLVM 10+ and ROCm 3.5+ 
(#5920)

No new revisions were added by this update.

Summary of changes:
 include/tvm/runtime/device_api.h|  3 ++-
 src/runtime/cuda/cuda_device_api.cc |  4 
 src/runtime/metal/metal_device_api.mm   |  2 ++
 src/runtime/opencl/opencl_device_api.cc |  2 ++
 src/runtime/rocm/rocm_common.h  |  1 +
 src/runtime/rocm/rocm_device_api.cc |  8 +++-
 src/runtime/vulkan/vulkan.cc|  2 ++
 src/target/llvm/codegen_amdgpu.cc   | 29 +
 8 files changed, 45 insertions(+), 6 deletions(-)



[incubator-tvm] branch master updated (7830561 -> 2a03d24)

2020-06-25 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 7830561  [Relay][Vm] Some performance improvement to VM (#5901)
 add 2a03d24  Update install.rst (#5858)

No new revisions were added by this update.

Summary of changes:
 docs/vta/install.rst | 17 +
 1 file changed, 9 insertions(+), 8 deletions(-)



[GitHub] [incubator-tvm] tmoreau89 merged pull request #5858: Update install.rst

2020-06-25 Thread GitBox


tmoreau89 merged pull request #5858:
URL: https://github.com/apache/incubator-tvm/pull/5858


   







[GitHub] [incubator-tvm] tmoreau89 commented on pull request #5858: Update install.rst

2020-06-25 Thread GitBox


tmoreau89 commented on pull request #5858:
URL: https://github.com/apache/incubator-tvm/pull/5858#issuecomment-64967


   thank you @badenh your changes have been merged







[GitHub] [incubator-tvm] t-vi commented on pull request #5920: Two small fixes to AMDCPU codegen for LLVM 10+ and ROCm 3.5+

2020-06-25 Thread GitBox


t-vi commented on pull request #5920:
URL: https://github.com/apache/incubator-tvm/pull/5920#issuecomment-649657507


   @tqchen Yeah, so the background to this is that the recent release of ROCm 
3.5 brings rather sweeping changes (changing the compiler backend for the HIP 
compilation among other things). 
   
   My conclusion from this would be
   - I'd have some sympathy for people not switching immediately (I was one of 
them 2 days ago),
   - In half a year, serious users will have switched. We can then just bump
the requirement to ROCm >= 3.5 (IMHO).
   







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-25 Thread GitBox


comaniac commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r445666228



##
File path: src/runtime/metadata_module.cc
##
@@ -48,15 +48,22 @@ class MetadataModuleNode : public ModuleNode {
  public:
   MetadataModuleNode(const std::unordered_map& metadata,
  const std::unordered_map>& sym_vars)
-  : metadata_(metadata), sym_vars_(sym_vars) {}
+  : metadata_(metadata), sym_vars_(sym_vars) {
+// Only the related submodules are cached to reduce the number of runtime
+// symbol lookup for initialization. Otherwise, symbols/primitives in the
+// DSO module will also be cached but they never need to be initialized.
+for (const auto& it : sym_vars_) {
+  initialized_[it.first] = false;
+}
+  }

Review comment:
   #5926 filed. Will remove these changes after it has been merged.









[GitHub] [incubator-tvm] comaniac opened a new pull request #5926: [Runtime] Only initialize required module

2020-06-25 Thread GitBox


comaniac opened a new pull request #5926:
URL: https://github.com/apache/incubator-tvm/pull/5926


   To reduce the overhead of runtime symbol lookup, we should only initialize 
the runtime modules that include the symbol we need for executing this graph.
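A minimal sketch of the idea in plain Python (a hypothetical mock, not the actual TVM runtime API): track an initialized flag per module and only initialize the modules that export a symbol the graph needs.

```python
class MockModule:
    """Hypothetical stand-in for a runtime module exporting external symbols."""
    def __init__(self, name, symbols):
        self.name = name
        self.symbols = set(symbols)
        self.initialized = False

    def init(self):
        # In the real runtime this would bind constants, set up engines, etc.
        self.initialized = True

def init_required(modules, needed_symbols):
    """Initialize only the modules that provide a symbol the graph needs."""
    needed = set(needed_symbols)
    for mod in modules:
        if mod.symbols & needed:
            mod.init()

mods = [MockModule("dnnl", ["dnnl_0"]), MockModule("dso", ["fused_add"])]
init_required(mods, ["dnnl_0"])
print([m.initialized for m in mods])  # only the DNNL module is initialized
```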
   
   cc @zhiics @mbaret 







[GitHub] [incubator-tvm] mbaret commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-25 Thread GitBox


mbaret commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r445665730



##
File path: tests/python/relay/test_json_runtime.py
##
@@ -0,0 +1,625 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Unit tests for JSON codegen and runtime."""

Review comment:
   Yeah, I think this is actually a broader issue in TVM in that we don't 
seem to have a robust C++ unit testing framework so a lot of things are tested 
indirectly via Python. I'll have a think about this, but not blocking on this 
PR.









[GitHub] [incubator-tvm] mbaret commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-25 Thread GitBox


mbaret commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r445666219



##
File path: src/runtime/contrib/json/json_runtime.h
##
@@ -0,0 +1,267 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/json/json_runtime.h
+ * \brief Utilities for json runtime.
+ */
+
+#ifndef TVM_RUNTIME_CONTRIB_JSON_JSON_RUNTIME_H_
+#define TVM_RUNTIME_CONTRIB_JSON_JSON_RUNTIME_H_
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "json_node.h"
+
+namespace tvm {
+namespace runtime {
+namespace json {
+
+/*!
+ * \brief A json runtime that executes the serialized JSON format. This runtime
+ * can be extended by user defined runtime for execution.
+ */
+class JSONRuntimeBase : public ModuleNode {
+ public:
+  JSONRuntimeBase(const std::string& symbol_name, const std::string& 
graph_json,
+  const Array const_names)
+  : symbol_name_(symbol_name), graph_json_(graph_json), 
const_names_(const_names) {
+LoadGraph(graph_json_);
+  }
+
+  const char* type_key() const { return "json"; }
+
+  /*! \brief Initialize a specific json runtime. */
+  virtual void Init(const Array& consts) = 0;
+
+  /*! \brief Invoke the execution engine to interpret a specific json runtime. */
+  virtual void Run() = 0;
+
+  /*!
+   * \brief Get a packed function.
+   * \param name The name/symbol of the function.
+   * \param sptr_to_self The pointer to the module node.
+   * \return The packed function.
+   */
+  virtual PackedFunc GetFunction(const std::string& name, const 
ObjectPtr& sptr_to_self) {
+if (name == "get_symbol") {
+  return PackedFunc(
+  [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = 
this->symbol_name_; });
+} else if (name == "get_const_vars") {
+  return PackedFunc(
+  [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = 
this->const_names_; });
+} else if (this->symbol_name_ == name) {
+  return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+CHECK(this->initialized_) << "The module has not been initialized";
+
+// Bind argument tensors to data entries.
+this->SetInputOutputBuffers(args);
+// Execute the subgraph.
+this->Run();
+  });
+} else if ("__init_" + this->symbol_name_ == name) {
+  // The function to initialize constant tensors.
+  return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+CHECK_EQ(args.size(), 1U);
+this->Init(args[0]);
+this->initialized_ = true;
+*rv = 0;
+  });
+} else {
+  return PackedFunc(nullptr);
+}
+  }
+
+  virtual void SaveToBinary(dmlc::Stream* stream) {
+// Save the symbol
+stream->Write(symbol_name_);
+// Save the graph
+stream->Write(graph_json_);
+// Save the required const names
+std::vector consts;
+for (const auto& it : const_names_) {
+  consts.push_back(it);
+}
+stream->Write(consts);
+  }
+
+  template ::value>::type>
+  static Module LoadFromBinary(void* strm) {
+dmlc::Stream* stream = static_cast(strm);
+std::string symbol;
+std::string graph_json;
+std::vector consts;
+// Load the symbol
+CHECK(stream->Read()) << "Loading symbol name failed";

Review comment:
   OK, good to hear.









[GitHub] [incubator-tvm] ANSHUMAN87 edited a comment on pull request #5922: [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify

2020-06-25 Thread GitBox


ANSHUMAN87 edited a comment on pull request #5922:
URL: https://github.com/apache/incubator-tvm/pull/5922#issuecomment-649639015


   @tqchen, @yzhliu : I have one concern here (however, it is not related to this PR).
   As you can see, the expr below:
   expr = (((x0 - floordiv((0 - (x1*5)), 2)) - 1) + floordiv((37 + ((x1 + 7)*-5)), 2))
   
   should actually evaluate to
   expr = x0
   
   but with current simplify stage(rewrite_simplify + canonical_simplify), it 
evaluates to below:
   expr = ((floordiv((0 - (x1: int32*5)), 2) + x0: int32) - floordiv((x1*-5), 
2))
   
   But when i run stages as (rewrite_simplify + canonical_simplify + 
rewrite_simplify) it evaluates perfectly to x0.
   
   So maybe we need to run these stages with some customized config every time,
or iterate until there are no more changes?
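The "until there are no more changes" idea is a standard fixed-point loop. A generic sketch (the pass functions below are toy placeholders, not the actual TVM rewrite/canonical simplifiers):

```python
def simplify_to_fixpoint(expr, passes, max_iters=10):
    """Apply each simplification pass in order, repeating the whole
    sequence until the expression stops changing (or max_iters is hit)."""
    for _ in range(max_iters):
        new_expr = expr
        for simplify in passes:
            new_expr = simplify(new_expr)
        if new_expr == expr:  # reached a fixed point
            return new_expr
        expr = new_expr
    return expr

# Toy passes on integers standing in for rewrite/canonical simplify:
halve_even = lambda n: n // 2 if n % 2 == 0 else n
drop_one = lambda n: n - 1 if n % 2 == 1 and n > 1 else n
print(simplify_to_fixpoint(40, [halve_even, drop_one]))  # converges to 1
```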







[GitHub] [incubator-tvm] mbaret commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-25 Thread GitBox


mbaret commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r445662045



##
File path: src/relay/backend/contrib/codegen_json/codegen_json.h
##
@@ -0,0 +1,353 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file relay/backend/contrib/codegen_json.h
+ * \brief Utilities for json codegen and runtime
+ */
+#ifndef TVM_RELAY_BACKEND_CONTRIB_CODEGEN_JSON_CODEGEN_JSON_H_
+#define TVM_RELAY_BACKEND_CONTRIB_CODEGEN_JSON_CODEGEN_JSON_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../../../runtime/contrib/json/json_node.h"
+#include "../../../../runtime/contrib/json/json_runtime.h"
+#include "../../utils.h"
+
+namespace tvm {
+namespace relay {
+namespace backend {
+namespace contrib {
+
+using namespace tvm::runtime::json;
+
+using ShapeVector = std::vector>;
+using TypeVector = std::vector;
+using JSONGraphObjectPtr = std::shared_ptr;
+
+/*!
+ * \brief Helper class to extract all attributes of a certain op and save them
+ * into text format.
+ */
+class OpAttrExtractor : public AttrVisitor {
+ public:
+  explicit OpAttrExtractor(JSONGraphObjectPtr node) : node_(node) {}
+
+  template ::value>>
+  std::string Fp2String(const T value) {
+std::ostringstream out;
+out.precision(std::numeric_limits::max_digits10);
+out << value;
+return out.str();
+  }
+
+  void SetNodeAttr(const char* key, const std::vector& value) {
+std::vector attr;
+attr.emplace_back(value);
+node_->SetAttr(key, attr);
+  }
+
+  void Visit(const char* key, double* value) final { SetNodeAttr(key, 
{Fp2String(*value)}); }
+
+  void Visit(const char* key, int64_t* value) final { SetNodeAttr(key, 
{std::to_string(*value)}); }
+
+  void Visit(const char* key, uint64_t* value) final { SetNodeAttr(key, 
{std::to_string(*value)}); }
+
+  void Visit(const char* key, int* value) final { SetNodeAttr(key, 
{std::to_string(*value)}); }
+
+  void Visit(const char* key, bool* value) final { SetNodeAttr(key, 
{std::to_string(*value)}); }
+
+  void Visit(const char* key, std::string* value) final { SetNodeAttr(key, 
{*value}); }
+
+  void Visit(const char* key, DataType* value) final {
+if (!value->is_void()) {
+  SetNodeAttr(key, {runtime::DLDataType2String(*value)});
+} else {
+  SetNodeAttr(key, {""});
+}
+  }
+
+  void Visit(const char* key, runtime::ObjectRef* value) final {
+if (const auto* an = (*value).as()) {
+  std::vector attr;
+  for (size_t i = 0; i < an->size(); ++i) {
+if (const auto* im = (*an)[i].as()) {
+  attr.push_back(std::to_string(im->value));
+} else if (const auto* fm = (*an)[i].as()) {
+  attr.push_back(Fp2String(fm->value));
+} else if (const auto* str = (*an)[i].as()) {
+  String s = GetRef(str);
+  attr.push_back(s.operator std::string());
+} else {
+  LOG(FATAL) << "Not supported type: " << (*an)[i]->GetTypeKey();
+}
+  }
+  SetNodeAttr(key, attr);
+} else if (!(*value).defined()) {  // Skip NullValue
+  SetNodeAttr(key, std::vector{""});
+} else if (const auto* im = (*value).as()) {
+  SetNodeAttr(key, std::vector{std::to_string(im->value)});
+} else if (const auto* fm = (*value).as()) {
+  SetNodeAttr(key, std::vector{Fp2String(fm->value)});
+} else if (const auto* str = (*value).as()) {
+  String s = GetRef(str);
+  SetNodeAttr(key, std::vector{s.operator std::string()});
+} else {
+  LOG(FATAL) << "Not yet supported type: " << (*value)->GetTypeKey() << ": 
" << *value;
+}
+  }
+
+  void Visit(const char* key, runtime::NDArray* value) final {
+LOG(FATAL) << "NDArray is not allowed in op attribute";
+  }
+
+  void Visit(const char* key, void** value) final {
+LOG(FATAL) << "void pointer is not allowed in op attribute";
+  }
+
+  void Extract(Object* node) {
+if (node) {
+  reflection_->VisitAttrs(node, this);
+}
+  }
+
+ private:
+  JSONGraphObjectPtr node_;
+  ReflectionVTable* reflection_ = ReflectionVTable::Global();
+};
+
+/*! \brief Serialize a Relay expression to 

[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5922: [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify

2020-06-25 Thread GitBox


ANSHUMAN87 commented on pull request #5922:
URL: https://github.com/apache/incubator-tvm/pull/5922#issuecomment-649639015


   @tqchen, @yzhliu : I have one concern here.
   As you can see, the expr below:
   expr = (((x0 - floordiv((0 - (x1*5)), 2)) - 1) + floordiv((37 + ((x1 + 7)*-5)), 2))
   
   should actually evaluate to
   expr = x0
   
   but with current simplify stage(rewrite_simplify + canonical_simplify), it 
evaluates to below:
   expr = ((floordiv((0 - (x1: int32*5)), 2) + x0: int32) - floordiv((x1*-5), 
2))
   
   But when i run stages as (rewrite_simplify + canonical_simplify + 
rewrite_simplify) it evaluates perfectly to x0.
   
   So maybe we need to run these stages with some customized config every time,
or iterate until there are no more changes?







[GitHub] [incubator-tvm] ANSHUMAN87 edited a comment on pull request #5922: [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify

2020-06-25 Thread GitBox


ANSHUMAN87 edited a comment on pull request #5922:
URL: https://github.com/apache/incubator-tvm/pull/5922#issuecomment-649636006


   @yzhliu :  The original issue is indeed fixed, thanks!
I think it would be good if we added the test case from the issue as well.
   
   --> expr = (((x0 - floordiv((0 - (x1*5)), 2)) - 1) + floordiv((37 + ((x1 + 
7)*-5)), 2))
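For what it's worth, the expression above really does reduce to x0; a quick numeric check using Python's floor division (illustrative only, not the proposed test case):

```python
def floordiv(a, b):
    return a // b  # Python // is floor division, matching tir floordiv

# Sample a grid of values, including negatives, and confirm expr == x0.
for x0 in range(-20, 20):
    for x1 in range(-20, 20):
        expr = ((x0 - floordiv(0 - x1 * 5, 2)) - 1) + floordiv(37 + (x1 + 7) * -5, 2)
        assert expr == x0
print("expr == x0 for all sampled values")
```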
   
   







[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5922: [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify

2020-06-25 Thread GitBox


ANSHUMAN87 commented on pull request #5922:
URL: https://github.com/apache/incubator-tvm/pull/5922#issuecomment-649636006


   @yzhliu :  The issue is indeed fixed, thanks!
I think it would be good if we added the test case from the issue as well.
   
   --> expr = (((x0 - floordiv((0 - (x1*5)), 2)) - 1) + floordiv((37 + ((x1 + 
7)*-5)), 2))
   
   







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-25 Thread GitBox


comaniac commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r445655112



##
File path: src/relay/backend/graph_runtime_codegen.cc
##
@@ -661,6 +660,8 @@ struct Handler> {
 writer->WriteObjectKeyValue(k, 
dmlc::get>>(v));
   } else if (SameType>(v)) {
 writer->WriteObjectKeyValue(k, dmlc::get>(v));
+  } else if (SameType>(v)) {

Review comment:
   Thanks for catching this. This change supports any type of vector being
serialized, and we should keep it in `json_node.h`. I'll push a commit to fix it.









[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-25 Thread GitBox


comaniac commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r445655112



##
File path: src/relay/backend/graph_runtime_codegen.cc
##
@@ -661,6 +660,8 @@ struct Handler> {
 writer->WriteObjectKeyValue(k, 
dmlc::get>>(v));
   } else if (SameType>(v)) {
 writer->WriteObjectKeyValue(k, dmlc::get>(v));
+  } else if (SameType>(v)) {

Review comment:
   Thanks for catching this. This change supports any type of vector to 
be serialized and we need a corresponding process in `json_node.h`. I'll push a 
commit to fix it.









[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-25 Thread GitBox


comaniac commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r445653492



##
File path: src/runtime/contrib/json/json_runtime.h
##
@@ -0,0 +1,267 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/json/json_runtime.h
+ * \brief Utilities for json runtime.
+ */
+
+#ifndef TVM_RUNTIME_CONTRIB_JSON_JSON_RUNTIME_H_
+#define TVM_RUNTIME_CONTRIB_JSON_JSON_RUNTIME_H_
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "json_node.h"
+
+namespace tvm {
+namespace runtime {
+namespace json {
+
+/*!
+ * \brief A json runtime that executes the serialized JSON format. This runtime
+ * can be extended by user defined runtime for execution.
+ */
+class JSONRuntimeBase : public ModuleNode {
+ public:
+  JSONRuntimeBase(const std::string& symbol_name, const std::string& 
graph_json,
+  const Array const_names)
+  : symbol_name_(symbol_name), graph_json_(graph_json), 
const_names_(const_names) {
+LoadGraph(graph_json_);
+  }
+
+  const char* type_key() const { return "json"; }
+
+  /*! \brief Initialize a specific json runtime. */
+  virtual void Init(const Array& consts) = 0;
+
+  /*! \brief Invoke the execution engine to interpret a specific json runtime. 
*/
+  virtual void Run() = 0;
+
+  /*!
+   * \brief Get a packed function.
+   * \param name The name/symbol of the function.
+   * \param sptr_to_self The pointer to the module node.
+   * \return The packed function.
+   */
+  virtual PackedFunc GetFunction(const std::string& name, const 
ObjectPtr& sptr_to_self) {
+if (name == "get_symbol") {
+  return PackedFunc(
+  [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = 
this->symbol_name_; });
+} else if (name == "get_const_vars") {
+  return PackedFunc(
+  [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = 
this->const_names_; });
+} else if (this->symbol_name_ == name) {
+  return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+CHECK(this->initialized_) << "The module has not been initialized";
+
+// Bind argument tensors to data entries.
+this->SetInputOutputBuffers(args);
+// Execute the subgraph.
+this->Run();
+  });
+} else if ("__init_" + this->symbol_name_ == name) {
+  // The function to initialize constant tensors.
+  return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+CHECK_EQ(args.size(), 1U);
+this->Init(args[0]);
+this->initialized_ = true;
+*rv = 0;
+  });
+} else {
+  return PackedFunc(nullptr);
+}
+  }
+
+  virtual void SaveToBinary(dmlc::Stream* stream) {
+// Save the symbol
+stream->Write(symbol_name_);
+// Save the graph
+stream->Write(graph_json_);
+// Save the required const names
+std::vector consts;
+for (const auto& it : const_names_) {
+  consts.push_back(it);
+}
+stream->Write(consts);
+  }
+
+  template ::value>::type>
+  static Module LoadFromBinary(void* strm) {
+dmlc::Stream* stream = static_cast(strm);
+std::string symbol;
+std::string graph_json;
+std::vector consts;
+// Load the symbol
+CHECK(stream->Read()) << "Loading symbol name failed";

Review comment:
   We had a discussion with @tqchen and our conclusion was that the 
overhead is ignorable. Looking into the implementation, the module 
initialization process is required for each subgraph, and the potential 
overhead of copying shared tensors from one metadata module to many customized 
modules has been avoided by zero-copy.
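The symbol-based dispatch in the quoted `GetFunction` can be sketched as a rough Python analogy (a hypothetical class, not TVM's actual runtime API): metadata names return the symbol or constant names, `__init_<symbol>` initializes the module, and calling the symbol itself only runs after initialization.

```python
class JSONRuntimeSketch:
    """Hypothetical analogy of JSONRuntimeBase's name-based function dispatch."""

    def __init__(self, symbol_name, graph_json, const_names):
        self.symbol_name = symbol_name
        self.graph_json = graph_json
        self.const_names = const_names
        self.initialized = False

    def run(self):
        # Stand-in for the vendor execution engine (e.g. DNNL)
        return "ran " + self.symbol_name

    def get_function(self, name):
        if name == "get_symbol":
            return lambda: self.symbol_name
        if name == "get_const_vars":
            return lambda: self.const_names
        if name == self.symbol_name:
            def invoke():
                # Mirrors the CHECK(this->initialized_) guard
                assert self.initialized, "The module has not been initialized"
                return self.run()
            return invoke
        if name == "__init_" + self.symbol_name:
            def init(consts):
                self.initialized = True
                return 0
            return init
        return None  # corresponds to PackedFunc(nullptr)
```

For example, `m.get_function("__init_" + sym)(consts)` must be called once per subgraph before `m.get_function(sym)()` will execute, which is where the per-subgraph initialization overhead discussed above comes from.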









[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-25 Thread GitBox


comaniac commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r445649806



##
File path: tests/python/relay/test_json_runtime.py
##
@@ -0,0 +1,625 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Unit tests for JSON codegen and runtime."""

Review comment:
   The case you mentioned is indeed more desired, but that means we have to 
enhance the example c compiler to support all these ops, which seems 
impractical. That's why enabling DNNL in the CI is important.









[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5924: [Arith][GPU]Rewrite simplify fix for Vectorized Cooperative Fetching

2020-06-25 Thread GitBox


tqchen edited a comment on pull request #5924:
URL: https://github.com/apache/incubator-tvm/pull/5924#issuecomment-649619450


   Thanks @jcf94 we should add a testcase to test_arith_rewrite_simplify, by 
constructing the case and 
   
   - verify each of the rules added in this PR.
   - Use `isinstance(x, tvm.ir.Ramp)` to assert the ramp node
   - You mentioned a bug in the previous rule, it would be great if the 
testcase covers the bug you mentioned







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5924: [Arith][GPU]Rewrite simplify fix for Vectorized Cooperative Fetching

2020-06-25 Thread GitBox


tqchen commented on a change in pull request #5924:
URL: https://github.com/apache/incubator-tvm/pull/5924#discussion_r445642433



##
File path: src/arith/rewrite_simplify.cc
##
@@ -722,8 +728,15 @@ PrimExpr RewriteSimplifier::Impl::VisitExpr_(const 
FloorDivNode* op) {
   ModularSet bmod = analyzer_->modular_set(b1.Eval());
   int64_t ramp_min = floordiv(bmod->base, c2val);
   int64_t ramp_max = floordiv(bmod->base + (lanes.Eval() - 1) * c1val, 
c2val);
-  if (bmod->coeff % c2val == 0 && ramp_min == ramp_max) {
-return broadcast(floordiv(b1, c2), lanes).Eval();
+  if (ramp_min == ramp_max) {
+// If b1 can device c2

Review comment:
   device-> divide

##
File path: src/arith/rewrite_simplify.cc
##
@@ -722,8 +728,15 @@ PrimExpr RewriteSimplifier::Impl::VisitExpr_(const 
FloorDivNode* op) {
   ModularSet bmod = analyzer_->modular_set(b1.Eval());
   int64_t ramp_min = floordiv(bmod->base, c2val);
   int64_t ramp_max = floordiv(bmod->base + (lanes.Eval() - 1) * c1val, 
c2val);
-  if (bmod->coeff % c2val == 0 && ramp_min == ramp_max) {
-return broadcast(floordiv(b1, c2), lanes).Eval();
+  if (ramp_min == ramp_max) {
+// If b1 can device c2
+if (bmod->coeff % c2val == 0) {
+  return broadcast(floordiv(b1, c2), lanes).Eval();
+}
+// If all indices can be guaranteed to settle inside a coeff range
+if (c2val % bmod->coeff == 0 && bmod->base + (lanes.Eval() - 1) * 
c1val < bmod->coeff) {

Review comment:
   Please add a unit test in tests_arith_rewrite_simplify to cover this 
rule.









[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-25 Thread GitBox


comaniac commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r445644835



##
File path: src/relay/backend/contrib/codegen_json/codegen_json.h
##
@@ -0,0 +1,353 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file relay/backend/contrib/codegen_json.h
+ * \brief Utilities for json codegen and runtime
+ */
+#ifndef TVM_RELAY_BACKEND_CONTRIB_CODEGEN_JSON_CODEGEN_JSON_H_
+#define TVM_RELAY_BACKEND_CONTRIB_CODEGEN_JSON_CODEGEN_JSON_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../../../runtime/contrib/json/json_node.h"
+#include "../../../../runtime/contrib/json/json_runtime.h"
+#include "../../utils.h"
+
+namespace tvm {
+namespace relay {
+namespace backend {
+namespace contrib {
+
+using namespace tvm::runtime::json;
+
+using ShapeVector = std::vector>;
+using TypeVector = std::vector;
+using JSONGraphObjectPtr = std::shared_ptr;
+
+/*!
+ * \brief Helper class to extract all attributes of a certain op and save them
+ * into text format.
+ */
+class OpAttrExtractor : public AttrVisitor {
+ public:
+  explicit OpAttrExtractor(JSONGraphObjectPtr node) : node_(node) {}
+
+  template ::value>>
+  std::string Fp2String(const T value) {
+std::ostringstream out;
+out.precision(std::numeric_limits::max_digits10);
+out << value;
+return out.str();
+  }
+
+  void SetNodeAttr(const char* key, const std::vector& value) {
+std::vector attr;
+attr.emplace_back(value);
+node_->SetAttr(key, attr);
+  }
+
+  void Visit(const char* key, double* value) final { SetNodeAttr(key, 
{Fp2String(*value)}); }
+
+  void Visit(const char* key, int64_t* value) final { SetNodeAttr(key, 
{std::to_string(*value)}); }
+
+  void Visit(const char* key, uint64_t* value) final { SetNodeAttr(key, 
{std::to_string(*value)}); }
+
+  void Visit(const char* key, int* value) final { SetNodeAttr(key, 
{std::to_string(*value)}); }
+
+  void Visit(const char* key, bool* value) final { SetNodeAttr(key, 
{std::to_string(*value)}); }
+
+  void Visit(const char* key, std::string* value) final { SetNodeAttr(key, 
{*value}); }
+
+  void Visit(const char* key, DataType* value) final {
+if (!value->is_void()) {
+  SetNodeAttr(key, {runtime::DLDataType2String(*value)});
+} else {
+  SetNodeAttr(key, {""});
+}
+  }
+
+  void Visit(const char* key, runtime::ObjectRef* value) final {
+if (const auto* an = (*value).as()) {
+  std::vector attr;
+  for (size_t i = 0; i < an->size(); ++i) {
+if (const auto* im = (*an)[i].as()) {
+  attr.push_back(std::to_string(im->value));
+} else if (const auto* fm = (*an)[i].as()) {
+  attr.push_back(Fp2String(fm->value));
+} else if (const auto* str = (*an)[i].as()) {
+  String s = GetRef(str);
+  attr.push_back(s.operator std::string());
+} else {
+  LOG(FATAL) << "Not supported type: " << (*an)[i]->GetTypeKey();
+}
+  }
+  SetNodeAttr(key, attr);
+} else if (!(*value).defined()) {  // Skip NullValue
+  SetNodeAttr(key, std::vector{""});
+} else if (const auto* im = (*value).as()) {
+  SetNodeAttr(key, std::vector{std::to_string(im->value)});
+} else if (const auto* fm = (*value).as()) {
+  SetNodeAttr(key, std::vector{Fp2String(fm->value)});
+} else if (const auto* str = (*value).as()) {
+  String s = GetRef(str);
+  SetNodeAttr(key, std::vector{s.operator std::string()});
+} else {
+  LOG(FATAL) << "Not yet supported type: " << (*value)->GetTypeKey() << ": 
" << *value;
+}
+  }
+
+  void Visit(const char* key, runtime::NDArray* value) final {
+LOG(FATAL) << "NDArray is not allowed in op attribute";
+  }
+
+  void Visit(const char* key, void** value) final {
+LOG(FATAL) << "void pointer is not allowed in op attribute";
+  }
+
+  void Extract(Object* node) {
+if (node) {
+  reflection_->VisitAttrs(node, this);
+}
+  }
+
+ private:
+  JSONGraphObjectPtr node_;
+  ReflectionVTable* reflection_ = ReflectionVTable::Global();
+};
+
+/*! \brief Serialize a Relay expression to 

[GitHub] [incubator-tvm] tqchen commented on pull request #5924: [Arith][GPU]Rewrite simplify fix for Vectorized Cooperative Fetching

2020-06-25 Thread GitBox


tqchen commented on pull request #5924:
URL: https://github.com/apache/incubator-tvm/pull/5924#issuecomment-649619450


   @jcf94 we should add a test case to test_arith_rewrite_simplify, by 
constructing the case and verifying the rules you have worked on. Use `isinstance(x, 
tvm.ir.Ramp)` to assert the ramp node







[GitHub] [incubator-tvm] tqchen commented on pull request #5920: Two small fixes to AMDCPU codegen for LLVM 10+ and ROCm 3.5+

2020-06-25 Thread GitBox


tqchen commented on pull request #5920:
URL: https://github.com/apache/incubator-tvm/pull/5920#issuecomment-649618356


   NOTE: using runtime detection of rocm features will only work if we are 
building on the same machine and won't work for cross compilation. While it is 
OK for now, let us keep that in mind and once we land 
https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844, we might want to 
allow user to explicitly specify the attr and only use auto detect if the attr 
is not specified(or march=native is used)







[GitHub] [incubator-tvm] tqchen commented on pull request #5826: [DYNAMIC] Add Dynamic reshape to a dynamic namespace and add DynamicToStatic Pass

2020-06-25 Thread GitBox


tqchen commented on pull request #5826:
URL: https://github.com/apache/incubator-tvm/pull/5826#issuecomment-649616499


   cc @icemelon9 please followup and manage this PR, @jroesch @electriclilies 
please 
https://tvm.apache.org/docs/contribute/code_review.html#approve-and-request-changes-explicitly







[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5826: [DYNAMIC] Add Dynamic reshape to a dynamic namespace and add DynamicToStatic Pass

2020-06-25 Thread GitBox


tqchen edited a comment on pull request #5826:
URL: https://github.com/apache/incubator-tvm/pull/5826#issuecomment-649616499


   cc @icemelon9 please followup and manage this PR, @jroesch @electriclilies 
@lixiaoquan  please 
https://tvm.apache.org/docs/contribute/code_review.html#approve-and-request-changes-explicitly







[GitHub] [incubator-tvm] tqchen commented on pull request #5922: [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify

2020-06-25 Thread GitBox


tqchen commented on pull request #5922:
URL: https://github.com/apache/incubator-tvm/pull/5922#issuecomment-649615620


   Thanks @yzhliu, it seems we need a different test case, one that uses a truncdiv 
but still triggers DivModeCompatibleTo







[incubator-tvm] branch master updated (074a07e -> 7830561)

2020-06-25 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 074a07e  CUDA device API & VerifyGPUCode pass update (#5898)
 add 7830561  [Relay][Vm] Some performance improvement to VM (#5901)

No new revisions were added by this update.

Summary of changes:
 include/tvm/runtime/vm.h |  6 ++---
 src/relay/backend/vm/compiler.cc | 31 --
 src/runtime/vm/executable.cc |  2 +-
 src/runtime/vm/vm.cc | 55 ++--
 4 files changed, 58 insertions(+), 36 deletions(-)



[GitHub] [incubator-tvm] zhiics merged pull request #5901: [Relay][Vm] Some performance improvement to VM

2020-06-25 Thread GitBox


zhiics merged pull request #5901:
URL: https://github.com/apache/incubator-tvm/pull/5901


   







[GitHub] [incubator-tvm] zhiics commented on pull request #5901: [Relay][Vm] Some performance improvement to VM

2020-06-25 Thread GitBox


zhiics commented on pull request #5901:
URL: https://github.com/apache/incubator-tvm/pull/5901#issuecomment-649601265


   Thanks @icemelon9 







[GitHub] [incubator-tvm] t-vi commented on pull request #5920: Two small fixes to AMDCPU codegen for LLVM 10+ and ROCm 3.5+

2020-06-25 Thread GitBox


t-vi commented on pull request #5920:
URL: https://github.com/apache/incubator-tvm/pull/5920#issuecomment-649546779


   Sorry, another small thing:
   - I'm throwing in an addition of kMaxRegistersPerBlock for ROCm. This was 
introduced for CUDA in #5898.
   







[GitHub] [incubator-tvm] giuseros commented on pull request #5925: Fix small typo in nn.conv2d_gemm_weight_transform

2020-06-25 Thread GitBox


giuseros commented on pull request #5925:
URL: https://github.com/apache/incubator-tvm/pull/5925#issuecomment-649539144


   cc @anijain2305 @FrozenGene @u99127 







[GitHub] [incubator-tvm] giuseros opened a new pull request #5925: Fix small typo in nn.conv2d_gemm_weight_transform

2020-06-25 Thread GitBox


giuseros opened a new pull request #5925:
URL: https://github.com/apache/incubator-tvm/pull/5925


   This is fixing a small typo in `conv2d_gemm_weight_transform` which causes 
the compilation to fail for AArch64 (NHWC) targets. 
   
   Change-Id: I7844d898ebf82592f78f478982262ef95f83cc3e
   
   
   







[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #5848: [TFLite] QNN support for TFLite 2.1.0 quantized models

2020-06-25 Thread GitBox


siju-samuel commented on a change in pull request #5848:
URL: https://github.com/apache/incubator-tvm/pull/5848#discussion_r445531482



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -258,15 +260,18 @@ def get_tensors(self, tensors_idx_list):
 assert isinstance(tflite_zero_point, np.ndarray)
 
 # Tensor - Per-axis quantization
-if tflite_scale.shape != (1,) and tflite_zero_point.shape 
!= (1,):
+if tflite_scale.size != 1 and tflite_zero_point.size != 1:
 scale = tflite_scale
-# Ensure that all zero points are identical
+# Ensure that all zero points are zeros
 zero_point = tflite_zero_point
-assert all(x == zero_point[0] for x in zero_point)
+if not all(x == 0 for x in zero_point):
+raise tvm.error.OpAttributeInvalid(\
+"TFLite per-axis quantization restricts 
all zero points to be"
++ " 0, but a non-zero value is observed")
 zero_point = int(zero_point[0])
 
 # Scalar - Per-tensor quantization
-elif tflite_scale.shape == (1,) and 
tflite_zero_point.shape == (1,):
+elif tflite_scale.size == 1 and tflite_zero_point.size == 
1:
 scale = float(tflite_scale[0])
 zero_point = int(tflite_zero_point[0])
 

Review comment:
   The error could be due to a wrong `tflite_zero_point` as well; please print the 
type of `tflite_zero_point` in the else case.

##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -298,11 +303,15 @@ def get_tensor_value(self, tensor_wrapper):
 except ImportError:
 raise ImportError("The tflite package must be installed")
 
+# Read the data from the buffer. Also extract the shape.
+# The shape is used later to reshape the data.
 data = tensor_wrapper.buffer.DataAsNumpy()
 shape = tensor_wrapper.tensor.ShapeAsNumpy()
 
-# Set shape to 1 if the data is a scalar type
-if data.shape == (1,) and isinstance(shape, int) and shape == 0:
+# When TFLite buffer is of size 1 (scalar), then TFLite tensor shape 
is set to 0.
+# Therefore, we set the shape to 1 for numpy reshape to work.
+Set shape to 1 if the data is a scalar type

Review comment:
   Missed adding `#` before the comment?

##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -258,15 +260,18 @@ def get_tensors(self, tensors_idx_list):
 assert isinstance(tflite_zero_point, np.ndarray)
 
 # Tensor - Per-axis quantization
-if tflite_scale.shape != (1,) and tflite_zero_point.shape 
!= (1,):
+if tflite_scale.size != 1 and tflite_zero_point.size != 1:
 scale = tflite_scale
-# Ensure that all zero points are identical
+# Ensure that all zero points are zeros
 zero_point = tflite_zero_point
-assert all(x == zero_point[0] for x in zero_point)
+if not all(x == 0 for x in zero_point):

Review comment:
   Suggestion `if not all(x == 0 for x in zero_point):` >> `if not 
np.all((zero_point == 0)):`
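The suggested `np.all` form is equivalent to the generator expression for a 1-D zero-point array; a small sketch (assuming `zero_point` is a NumPy array, as in the frontend; the extra parentheses in the suggestion are harmless):

```python
import numpy as np

zero_point = np.array([0, 0, 0])

# Generator form currently in the PR ...
gen_form = all(x == 0 for x in zero_point)
# ... and the suggested vectorized form.
np_form = bool(np.all(zero_point == 0))

assert gen_form == np_form == True
# A non-zero entry fails both forms identically.
assert not np.all(np.array([0, 1, 0]) == 0)
```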









[GitHub] [incubator-tvm] t-vi commented on pull request #5920: Two small fixes to AMDCPU codegen for LLVM 10+ and ROCm 3.5+

2020-06-25 Thread GitBox


t-vi commented on pull request #5920:
URL: https://github.com/apache/incubator-tvm/pull/5920#issuecomment-649501459


   @masahi This is needed for the latest ROCm and also recent LLVM.
   Without it, the execution will fail with a "symbol not found" error, which 
is probably hard to decipher.
   
   I'm not particularly fond of needing to go through the device API, but I 
haven't found a better way, as the codegen wants to be compilable without 
having HIP installed.
   







[GitHub] [incubator-tvm] mbaret commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-25 Thread GitBox


mbaret commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r445492584



##
File path: src/relay/backend/graph_runtime_codegen.cc
##
@@ -661,6 +660,8 @@ struct Handler> {
 writer->WriteObjectKeyValue(k, 
dmlc::get>>(v));
   } else if (SameType>(v)) {
 writer->WriteObjectKeyValue(k, dmlc::get>(v));
+  } else if (SameType>(v)) {

Review comment:
   What does this change do?

##
File path: src/relay/backend/contrib/codegen_json/codegen_json.h
##
@@ -0,0 +1,353 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file relay/backend/contrib/codegen_json.h
+ * \brief Utilities for json codegen and runtime
+ */
+#ifndef TVM_RELAY_BACKEND_CONTRIB_CODEGEN_JSON_CODEGEN_JSON_H_
+#define TVM_RELAY_BACKEND_CONTRIB_CODEGEN_JSON_CODEGEN_JSON_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../../../runtime/contrib/json/json_node.h"
+#include "../../../../runtime/contrib/json/json_runtime.h"
+#include "../../utils.h"
+
+namespace tvm {
+namespace relay {
+namespace backend {
+namespace contrib {
+
+using namespace tvm::runtime::json;
+
+using ShapeVector = std::vector>;
+using TypeVector = std::vector;
+using JSONGraphObjectPtr = std::shared_ptr;
+
+/*!
+ * \brief Helper class to extract all attributes of a certain op and save them
+ * into text format.
+ */
+class OpAttrExtractor : public AttrVisitor {
+ public:
+  explicit OpAttrExtractor(JSONGraphObjectPtr node) : node_(node) {}
+
+  template ::value>>
+  std::string Fp2String(const T value) {
+std::ostringstream out;
+out.precision(std::numeric_limits::max_digits10);
+out << value;
+return out.str();
+  }
+
+  void SetNodeAttr(const char* key, const std::vector& value) {
+std::vector attr;
+attr.emplace_back(value);
+node_->SetAttr(key, attr);
+  }
+
+  void Visit(const char* key, double* value) final { SetNodeAttr(key, 
{Fp2String(*value)}); }
+
+  void Visit(const char* key, int64_t* value) final { SetNodeAttr(key, 
{std::to_string(*value)}); }
+
+  void Visit(const char* key, uint64_t* value) final { SetNodeAttr(key, 
{std::to_string(*value)}); }
+
+  void Visit(const char* key, int* value) final { SetNodeAttr(key, 
{std::to_string(*value)}); }
+
+  void Visit(const char* key, bool* value) final { SetNodeAttr(key, 
{std::to_string(*value)}); }
+
+  void Visit(const char* key, std::string* value) final { SetNodeAttr(key, 
{*value}); }
+
+  void Visit(const char* key, DataType* value) final {
+if (!value->is_void()) {
+  SetNodeAttr(key, {runtime::DLDataType2String(*value)});
+} else {
+  SetNodeAttr(key, {""});
+}
+  }
+
+  void Visit(const char* key, runtime::ObjectRef* value) final {
+if (const auto* an = (*value).as()) {
+  std::vector attr;
+  for (size_t i = 0; i < an->size(); ++i) {
+if (const auto* im = (*an)[i].as()) {
+  attr.push_back(std::to_string(im->value));
+} else if (const auto* fm = (*an)[i].as()) {
+  attr.push_back(Fp2String(fm->value));
+} else if (const auto* str = (*an)[i].as()) {
+  String s = GetRef(str);
+  attr.push_back(s.operator std::string());
+} else {
+  LOG(FATAL) << "Not supported type: " << (*an)[i]->GetTypeKey();
+}
+  }
+  SetNodeAttr(key, attr);
+} else if (!(*value).defined()) {  // Skip NullValue
+  SetNodeAttr(key, std::vector{""});
+} else if (const auto* im = (*value).as()) {
+  SetNodeAttr(key, std::vector{std::to_string(im->value)});
+} else if (const auto* fm = (*value).as()) {
+  SetNodeAttr(key, std::vector{Fp2String(fm->value)});
+} else if (const auto* str = (*value).as()) {
+  String s = GetRef(str);
+  SetNodeAttr(key, std::vector{s.operator std::string()});
+} else {
+  LOG(FATAL) << "Not yet supported type: " << (*value)->GetTypeKey() << ": 
" << *value;
+}
+  }
+
+  void Visit(const char* key, runtime::NDArray* value) final {
+LOG(FATAL) << "NDArray is not allowed in op attribute";
+  }
+
+  void Visit(const char* key, void** 

[GitHub] [incubator-tvm] jcf94 opened a new pull request #5924: [Arith][GPU]Rewrite simplify fix for Vectorized Cooperative Fetching

2020-06-25 Thread GitBox


jcf94 opened a new pull request #5924:
URL: https://github.com/apache/incubator-tvm/pull/5924


   This pr is part of #5883 , fix for the rewrite_simplify error when doing 
vectorized cooperative fetching in some cases.
   
   Code generated with bug is shown like this:
   ```
   A.shared[ramp(((ax0.ax1.fused.outer.outer*256) + (threadIdx.x_1*4)), 1, 4)] =
   (float32x4*)A_2[(((broadcast(((floordiv(blockIdx.x, 4)*32768) + (ax0.ax1.fused.outer.outer*2048)), 4) + (floordiv(ramp((threadIdx.x_1*4), 1, 4), broadcast(64, 4))*broadcast(512, 4))) + broadcast((k.outer.outer*64), 4)) + floormod(ramp((threadIdx.x_1*4), 1, 4), broadcast(64, 4)))])
   ```
   This will finally lower to wrong CUDA C instructions.
   It should instead be simplified to generate the correct RampNode:
   ```
   A.shared[ramp(((ax0.ax1.fused.outer.outer*256) + (threadIdx.x_1*4)), 1, 4)] =
   (float32x4*)A_2[ramp(((((floordiv(blockIdx.x, 4)*32768) + (ax0.ax1.fused.outer.outer*2048)) + (floordiv(threadIdx.x_1, 16)*512)) + (k.outer.outer*64)) + (floormod(threadIdx.x_1, 16)*4), 1, 4)]
   
   The main problems inside this expression are:
   ```
   threadIdx.x_1 = [0, 64]
   floordiv(ramp((threadIdx.x_1*4), 1, 4), broadcast(64, 4)) * broadcast(512, 4)
   floormod(ramp((threadIdx.x_1*4), 1, 4), broadcast(64, 4))
   ```
   should be simplified to:
   ```
   threadIdx.x_1 = [0, 64]
   broadcast((floordiv(threadIdx.x_1, 16)*512), 4)
   ramp(floormod(threadIdx.x_1, 16)*4, 1, 4)
   ```
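The claimed simplification can be checked numerically: with `threadIdx.x_1` in `[0, 64)` and a 4-lane ramp of base `threadIdx.x_1*4`, the floordiv is lane-independent (so it becomes a broadcast) and the floormod is stride-1 in the lane index (so it stays a ramp). A quick exhaustive check in plain Python:

```python
# For tx = threadIdx.x_1 in [0, 64) and lane i in [0, 4):
#   floordiv(tx*4 + i, 64) == floordiv(tx, 16)        -> lane-independent, broadcast
#   floormod(tx*4 + i, 64) == floormod(tx, 16)*4 + i  -> stride-1, ramp
for tx in range(64):
    for i in range(4):
        assert (tx * 4 + i) // 64 == tx // 16
        assert (tx * 4 + i) % 64 == (tx % 16) * 4 + i
```

The identity holds because `tx*4 % 64` is a multiple of 4 at most 60, so adding a lane index below 4 can never carry across the divisor 64.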
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] badenh opened a new pull request #5923: Update code_review.rst

2020-06-25 Thread GitBox


badenh opened a new pull request #5923:
URL: https://github.com/apache/incubator-tvm/pull/5923


   editorial pass with corrections
   
   







[GitHub] [incubator-tvm] badenh commented on pull request #5858: Update install.rst

2020-06-25 Thread GitBox


badenh commented on pull request #5858:
URL: https://github.com/apache/incubator-tvm/pull/5858#issuecomment-649428998


   Yes, fixed the missing item now







[GitHub] [incubator-tvm] kazum commented on a change in pull request #5857: [OpenCL] Fix OpenCL get_valid_counts errors due to intrinsic atomic_add

2020-06-25 Thread GitBox


kazum commented on a change in pull request #5857:
URL: https://github.com/apache/incubator-tvm/pull/5857#discussion_r445411686



##
File path: tests/python/relay/test_op_level5.py
##
@@ -270,8 +270,8 @@ def verify_get_valid_counts(dshape, score_threshold, id_index, score_index):
 intrp = relay.create_executor("debug", ctx=ctx, target=target)
 out = intrp.evaluate(func)(np_data)
 tvm.testing.assert_allclose(out[0].asnumpy(), np_out1, rtol=1e-3, atol=1e-04)
-# get_valid_count for cuda doesn't do data rearrangement
-if target == 'cuda':
+# get_valid_count for cuda, opencl doesn't do data rearrangement
+if target in ['cuda', 'opencl']:
 return

Review comment:
   Returning here looks wrong to me. The test in the link below doesn't work for OpenCL either, because we don't do data rearrangement in the GPU NMS implementation.
   
https://discuss.tvm.ai/t/nms-compile-fails-for-cuda-target-but-works-fine-for-llvm-target/7045/2
   
   Probably, we should fix non_max_suppression for GPU first?
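For context, the "data rearrangement" in question can be sketched in pure Python — this is not the TVM kernel and the helper name is made up; it only illustrates the compaction step the CPU `get_valid_counts` performs and the cuda/opencl kernels skip: valid boxes move to the front and the tail is padded with -1.

```python
def rearrange_valid_boxes(boxes, score_threshold, score_index=1):
    """Compact boxes whose score exceeds the threshold to the front."""
    valid = [b for b in boxes if b[score_index] > score_threshold]
    # Pad the tail so the output keeps the input's shape.
    pad = [[-1.0] * len(boxes[0]) for _ in range(len(boxes) - len(valid))]
    return len(valid), valid + pad

count, out = rearrange_valid_boxes(
    [[0, 0.9, 1, 1, 2, 2],   # valid (score 0.9)
     [1, 0.1, 0, 0, 1, 1],   # filtered out (score 0.1)
     [2, 0.8, 3, 3, 4, 4]],  # valid (score 0.8)
    score_threshold=0.5)
assert count == 2
assert out[2] == [-1.0] * 6  # invalid slot pushed to the tail
```

Tests that compare element order after `get_valid_counts` therefore only pass on targets that perform this compaction.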





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5848: [TFLite] QNN support for TFLite 2.1.0 quantized models

2020-06-25 Thread GitBox


anijain2305 commented on a change in pull request #5848:
URL: https://github.com/apache/incubator-tvm/pull/5848#discussion_r445355375



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -262,21 +298,25 @@ def get_tensor_value(self, tensor_wrapper):
 except ImportError:
 raise ImportError("The tflite package must be installed")
 
+data = tensor_wrapper.buffer.DataAsNumpy()
+shape = tensor_wrapper.tensor.ShapeAsNumpy()
+
+# Set shape to 1 if the data is a scalar type
+if data.shape == (1,) and isinstance(shape, int) and shape == 0:

Review comment:
   I modified the comments. Can you take a look now? This is sort of a 
corner case when the TFLite buffer is a scalar.
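The corner case can be illustrated without TFLite itself: for a scalar tensor, the buffer deserializes to a 1-element array while the shape collapses to the plain int `0`, so the frontend patches the shape to `(1,)`. A minimal NumPy sketch under those assumptions (the two inputs stand in for `buffer.DataAsNumpy()` and `tensor.ShapeAsNumpy()`):

```python
import numpy as np

data = np.array([7], dtype=np.int32)  # stand-in for buffer.DataAsNumpy()
shape = 0                             # stand-in for ShapeAsNumpy() on a scalar

# The guard from the patch: a 1-element buffer paired with an int-0 shape
# marks a scalar, which is then treated as a 1-element tensor.
if data.shape == (1,) and isinstance(shape, int) and shape == 0:
    shape = (1,)

value = data.reshape(shape)
assert value.shape == (1,) and int(value[0]) == 7
```

Without the shape patch, `reshape(0)` would raise, since a 0-d target cannot hold the 1-element buffer.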








