[GitHub] [incubator-tvm] liangfu commented on a change in pull request #5601: [DataType] Add bfloat16

2020-05-28 Thread GitBox


liangfu commented on a change in pull request #5601:
URL: https://github.com/apache/incubator-tvm/pull/5601#discussion_r432266454



##
File path: src/tir/transforms/bf16_legalize.cc
##
@@ -0,0 +1,384 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file bf16_legalize.cc
+ * \brief legalize bf16 type by adding cast_to_fp32
+ */
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "../../arith/ir_mutator_with_analyzer.h"
+#include "../../arith/ir_visitor_with_analyzer.h"
+
+namespace tvm {
+namespace tir {
+
+using arith::Analyzer;
+using arith::IRMutatorWithAnalyzer;
+
+class BF16PromoteRewriter : public StmtExprMutator {
+ public:
+  BF16PromoteRewriter() {}
+
+  Stmt operator()(Stmt s) { return VisitStmt(s); }
+
+  std::tuple<PrimExpr, PrimExpr> DoCast(PrimExpr orig_a, PrimExpr orig_b, bool* is_bfloat16) {
+    auto a = this->VisitExpr(orig_a);
+    auto b = this->VisitExpr(orig_b);
+    *is_bfloat16 = false;
+    if (a->dtype.is_bfloat16()) {
+      CHECK(b->dtype.is_bfloat16());
+      *is_bfloat16 = true;
+    } else if (b->dtype.is_bfloat16()) {
+      CHECK(a->dtype.is_bfloat16());
+      *is_bfloat16 = true;
+    }
+
+    if (*is_bfloat16) {
+      DataType fp32ty(kDLFloat, 32, 1);
+      a = CastNode::make(fp32ty, a);
+      b = CastNode::make(fp32ty, b);
+    }
+    return std::make_tuple(a, b);
+  }
+
+  PrimExpr VisitExpr_(const AddNode* op) final;
+  PrimExpr VisitExpr_(const SubNode* op) final;
+  PrimExpr VisitExpr_(const MulNode* op) final;
+  PrimExpr VisitExpr_(const DivNode* op) final;
+  PrimExpr VisitExpr_(const MinNode* op) final;
+  PrimExpr VisitExpr_(const MaxNode* op) final;
+  PrimExpr VisitExpr_(const LTNode* op) final;
+  PrimExpr VisitExpr_(const LENode* op) final;
+  PrimExpr VisitExpr_(const GTNode* op) final;
+  PrimExpr VisitExpr_(const GENode* op) final;
+};
+
+#define DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(OP, FUNC)            \
+  PrimExpr BF16PromoteRewriter::VisitExpr_(const OP* op) {           \
+    PrimExpr a, b;                                                    \
+    bool is_bfloat16;                                                 \
+    std::tie(a, b) = DoCast(op->a, op->b, &is_bfloat16);              \
+    if (a.same_as(op->a) && b.same_as(op->b)) {                       \
+      return GetRef<PrimExpr>(op);                                    \
+    } else {                                                          \
+      auto ret = FUNC(a, b);                                          \
+      if (!is_bfloat16)                                               \
+        return ret;                                                   \
+      else                                                            \
+        return CastNode::make(DataType(kTVMBFloat, 16, 1), ret);      \
+    }                                                                 \
+  }
+
+#define DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH_NO_CAST(OP, FUNC)    \
+  PrimExpr BF16PromoteRewriter::VisitExpr_(const OP* op) {           \
+    PrimExpr a, b;                                                    \
+    bool is_bfloat16;                                                 \
+    std::tie(a, b) = DoCast(op->a, op->b, &is_bfloat16);              \
+    if (a.same_as(op->a) && b.same_as(op->b)) {                       \
+      return GetRef<PrimExpr>(op);                                    \
+    } else {                                                          \
+      auto ret = FUNC(a, b);                                          \
+      return ret;                                                     \
+    }                                                                 \
+  }
+
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(AddNode, operator+)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(SubNode, operator-)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(MulNode, operator*)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(DivNode, div)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(MinNode, min)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(MaxNode, max)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH_NO_CAST(LTNode, operator<)   // NOLINT(*)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH_NO_CAST(LENode, operator<=)  // NOLINT(*)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH_NO_CAST(GTNode, operator>)   // NOLINT(*)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH_NO_CAST(GENode, 

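For context, here is a minimal Python sketch (not part of the patch) of the rewrite that `BF16PromoteRewriter` performs on a binary op. It assumes the `"bfloat16"` dtype string that this PR registers; it is only an illustration of the promote-to-fp32-and-cast-back scheme.

```python
# Illustration only: bf16 binary ops are promoted to fp32 and the result is
# cast back, i.e.  a + b  ==>  cast<bf16>(cast<fp32>(a) + cast<fp32>(b)).
# Assumes the "bfloat16" dtype string added in this PR is available.
import tvm
from tvm import tir

a = tir.Var("a", "bfloat16")
b = tir.Var("b", "bfloat16")
promoted = tir.Cast("bfloat16", tir.Cast("float32", a) + tir.Cast("float32", b))
print(promoted)
```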
[GitHub] [incubator-tvm] majiang31312 commented on issue #5686: [vulkan] Assertion in tir/transforms/lower_thread_allreduce.cc", line 157 TVMError: Check failed: v:

2020-05-28 Thread GitBox


majiang31312 commented on issue #5686:
URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-635759813


   I have encountered the same problem.
   It was triggered by 7b74a8672e1e40e7541c0007d8628586c62277e8, since the Vulkan target reuses some CUDA code in topi. Forcing sched_warp_softmax (in topi/python/topi/cuda/softmax.py) to return False bypasses the problem.
   The root cause seems to be that MakeAllreduce mistakenly assumes the fourth arg is always a var node, while that var node might have been replaced by a constant node during the simplify optimization.
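   A rough sketch of that bypass (a hypothetical monkey-patch for local debugging only; it assumes sched_warp_softmax is a module-level function in topi.cuda.softmax and is not a proper fix):

```python
# Hypothetical workaround sketch: force the warp-softmax schedule off so the
# CUDA-style cross-thread allreduce path is never taken for Vulkan.
import topi.cuda.softmax as cuda_softmax

def _no_warp_softmax(*args, **kwargs):
    # Always report "not suitable", falling back to the plain block schedule.
    return False

cuda_softmax.sched_warp_softmax = _no_warp_softmax
```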



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on issue #5650: [REFACTOR] PassContext.fallback_device -> PassConfig.config

2020-05-28 Thread GitBox


zhiics commented on issue #5650:
URL: https://github.com/apache/incubator-tvm/issues/5650#issuecomment-635745547


   Sounds like a good plan.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] dhruvaray commented on pull request #5329: [Frontend][TFLite] Add parser support for shape and range

2020-05-28 Thread GitBox


dhruvaray commented on pull request #5329:
URL: https://github.com/apache/incubator-tvm/pull/5329#issuecomment-635745170


   @FrozenGene - Could you kindly review and merge?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #5559: [TIR] Bugs in HoistIfThenElse

2020-05-28 Thread GitBox


tqchen commented on issue #5559:
URL: https://github.com/apache/incubator-tvm/issues/5559#issuecomment-635745118


   We could certainly rewrite the pass completely, instead of using PostOrderVisit.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #5650: [REFACTOR] PassContext.fallback_device -> PassConfig.config

2020-05-28 Thread GitBox


tqchen commented on issue #5650:
URL: https://github.com/apache/incubator-tvm/issues/5650#issuecomment-635744914


   We could mark build_config as deprecated and remove that API in the next release cycle. In the meantime, we should replace all of the current usages.
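   For illustration, a minimal sketch of what that replacement looks like on the user side (the tiny module here is made up just to have something to build):

```python
import tvm
from tvm import relay

# A tiny module just so there is something to build.
x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

# Deprecated style (to be removed in a later release cycle):
#   with relay.build_config(opt_level=3):
#       graph_json, lib, params = relay.build(mod, "llvm")

# Replacement style, with options passed through PassContext:
with tvm.transform.PassContext(opt_level=3):
    graph_json, lib, params = relay.build(mod, "llvm")
```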



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on issue #5650: [REFACTOR] PassContext.fallback_device -> PassConfig.config

2020-05-28 Thread GitBox


zhiics commented on issue #5650:
URL: https://github.com/apache/incubator-tvm/issues/5650#issuecomment-635742328


   One potential problem is that it may break downstream deployment.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] roastduck commented on issue #5559: [TIR] Bugs in HoistIfThenElse

2020-05-28 Thread GitBox


roastduck commented on issue #5559:
URL: https://github.com/apache/incubator-tvm/issues/5559#issuecomment-635739066


   I met some difficulties improving this pass, so for now I'm not going to take it over.
   
   This pass makes heavy use of low-level machinery such as `PostOrderVisit` (instead of `StmtExprMutator`) and raw pointers to `Object`, and it relies on manually tracking updates to these pointers, which is hard to understand. Maybe we should develop an improved `StmtExprMutator` that can track updates to the nodes.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tobegit3hub opened a new pull request #5694: Support more dtypes for TVMDSOOp

2020-05-28 Thread GitBox


tobegit3hub opened a new pull request #5694:
URL: https://github.com/apache/incubator-tvm/pull/5694


   Add support for more dtypes to TVMDSOOp, which expands the scenarios for using TVM operators in a TensorFlow graph.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] liangfu edited a comment on pull request #5456: Creates a TVM wheel install

2020-05-28 Thread GitBox


liangfu edited a comment on pull request #5456:
URL: https://github.com/apache/incubator-tvm/pull/5456#issuecomment-635727096


   Just a high-level question: how shall we integrate with LLVM?
   The options we have:
* Search for llvm-config
* Search for LLVM in the default system library paths, like /usr/lib, /usr/lib/x86_64-linux-gnu/, /usr/local/lib, and /usr/local/llvm/lib
* Compile LLVM as a static library and make the TVM wheel independent of the LLVM library in the operating system.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] liangfu commented on pull request #5456: Creates a TVM wheel install

2020-05-28 Thread GitBox


liangfu commented on pull request #5456:
URL: https://github.com/apache/incubator-tvm/pull/5456#issuecomment-635727096


   Just a high-level question: how shall we integrate with LLVM?
   The options we have:
* Search for LLVM in the default system library paths, like /usr/lib, /usr/lib/x86_64-linux-gnu/, /usr/local/lib, and /usr/local/llvm/lib
* Compile LLVM as a static library and make the TVM wheel independent of the LLVM library in the operating system.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5601: [TVM Core] Add bfloat16

2020-05-28 Thread GitBox


tqchen commented on pull request #5601:
URL: https://github.com/apache/incubator-tvm/pull/5601#issuecomment-635725119


   @liangfu please follow up with the reviews. @gussmith23 it would be great if you could also help take a look.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #5559: [TIR] Bugs in HoistIfThenElse

2020-05-28 Thread GitBox


tqchen commented on issue #5559:
URL: https://github.com/apache/incubator-tvm/issues/5559#issuecomment-635724547


   @roastduck would you be interested in taking over the pass?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart opened a new pull request #5693: [PatternLang]Conditionally Embedding Constants in Partitioned Functions

2020-05-28 Thread GitBox


mbrookhart opened a new pull request #5693:
URL: https://github.com/apache/incubator-tvm/pull/5693


   Following up from the conversation in #5689, this PR implements the following rules for lifting or embedding constants in partitioned functions (sketched below):
   
   - Pattern input ExprPattern(relay.const):
     - Only match constant nodes that contain the same constant value and embed them in the partitioned functions. In this case, the arguments of the partitioned functions will be reduced.
   - Pattern input ConstantPattern:
     - Only match constant nodes and embed them in the partitioned functions. In this case, the arguments of the partitioned functions will be reduced.
   - Pattern input VarPattern('x'):
     - Match only the var node, with an optional specified name hint. In this case, the arguments of the partitioned function are fixed.
   - Pattern input wildcard:
     - Match anything. In this case, the arguments of the partitioned function are fixed.
   - Pattern AltPattern:
     - Match either lhs or rhs. Depending on which side matched, recursively apply this matching logic and embed constants appropriately based on the contained pattern.
   
   cc @comaniac @mbaret @masahi 
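   A short sketch of these input-pattern flavors, using the Python pattern classes and helpers; the graph shape and values are made up for illustration, and the exact constructor signatures are as I recall them from the pattern-language docs:

```python
import numpy as np
from tvm import relay
from tvm.relay.dataflow_pattern import (ConstantPattern, ExprPattern,
                                        VarPattern, is_op, wildcard)

w_value = relay.const(np.zeros((3, 3, 3, 3), "float32"))

# Embeds only this exact constant value; the partitioned function loses the argument.
pat_expr = is_op("nn.conv2d")(wildcard(), ExprPattern(w_value))
# Embeds any constant weight; the partitioned function loses the argument.
pat_const = is_op("nn.conv2d")(wildcard(), ConstantPattern())
# Keeps the weight as a function argument, matched by the (optional) name hint.
pat_var = is_op("nn.conv2d")(wildcard(), VarPattern("w"))
# Matches anything; the argument list stays fixed.
pat_any = is_op("nn.conv2d")(wildcard(), wildcard())
# Alternation: lift or embed depending on which side actually matched.
pat_alt = is_op("nn.conv2d")(wildcard(), VarPattern("w") | ConstantPattern())
```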



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5690: [REFACTOR][RELAY] move fallback_device to config

2020-05-28 Thread GitBox


zhiics commented on a change in pull request #5690:
URL: https://github.com/apache/incubator-tvm/pull/5690#discussion_r432223352



##
File path: tests/python/relay/test_pass_annotation.py
##
@@ -538,9 +539,10 @@ def expected():
 expected_index = [2, 2, 2, 1, 1, 1, 2, 2]
 check_annotated_graph(annotated_func, expected_func)
 params = {"a": a_data, "b": b_data, "c": c_data, "d": d_data}
-config = {"opt_level": 0}
-config["fallback_device"] = fallback_device
-with relay.build_config(**config):
+fall_dev = tvm.tir.IntImm("int32", fallback_device.device_type)
+with tvm.transform.PassContext(opt_level=0,
+   config={"relay.fallback_device_type":
+   tvm.tir.IntImm(32, fall_dev)}):

Review comment:
   yeah, it's not necessary. Let me update.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] cchung100m commented on pull request #5692: [COMMUNITY] @zhiics -> PPMC

2020-05-28 Thread GitBox


cchung100m commented on pull request #5692:
URL: https://github.com/apache/incubator-tvm/pull/5692#issuecomment-635713660


   Congratulations! @zhiics :)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] cchung100m commented on pull request #5691: [COMMUNITY] @masahi -> PPMC

2020-05-28 Thread GitBox


cchung100m commented on pull request #5691:
URL: https://github.com/apache/incubator-tvm/pull/5691#issuecomment-635713551


   Congratulations! @masahi :)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen opened a new pull request #5692: @zhiics -> PPMC

2020-05-28 Thread GitBox


tqchen opened a new pull request #5692:
URL: https://github.com/apache/incubator-tvm/pull/5692


   Please join us to welcome @zhiics as a new PPMC member of the TVM community.
   
   Zhi has contributed to various aspects of TVM. His primary contributions include the Relay pass manager, the code generator, and so on. He also keeps up the code quality and helps to improve the overall design of the TVM infra; some of this is reflected in his efforts on code refactors and general design RFCs.
   
   - [Commits](https://github.com/apache/incubator-tvm/commits?author=zhiics)
   - [Code 
Review](https://github.com/apache/incubator-tvm/pulls?utf8=%E2%9C%93=reviewed-by%3Azhiics)
   - [Community Forum summary](https://discuss.tvm.ai/u/zhiics/summary)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen opened a new pull request #5691: [COMMUNITY] @masahi -> PPMC

2020-05-28 Thread GitBox


tqchen opened a new pull request #5691:
URL: https://github.com/apache/incubator-tvm/pull/5691


   Please join us to welcome @masahi as a new PPMC member of the TVM community. Masa is an early TVM contributor. He has been contributing to various modules such as the PyTorch control-flow importer, the AMD ROCm backend, and quantization.
   
   - [Commits](https://github.com/apache/incubator-tvm/commits?author=masahi)
   - [Code 
Reviews](https://github.com/apache/incubator-tvm/pulls?utf8=%E2%9C%93=reviewed-by%3Amasahi)
   - [Community Forum summary](https://discuss.tvm.ai/u/masahi/summary)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5683: [PYTORCH]Minor bug fixes

2020-05-28 Thread GitBox


masahi commented on pull request #5683:
URL: https://github.com/apache/incubator-tvm/pull/5683#issuecomment-635683945


   yes this is also a good option if the model is not too complicated. It is up 
to @siju-samuel. See our lstm tests 
https://github.com/apache/incubator-tvm/blob/master/tests/python/frontend/pytorch/lstm_test.py



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5683: [PYTORCH]Minor bug fixes

2020-05-28 Thread GitBox


tqchen commented on pull request #5683:
URL: https://github.com/apache/incubator-tvm/pull/5683#issuecomment-635679640


   One middle ground would be to manually construct a snippet that relates to BERT without directly using the BERT model itself.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5690: [REFACTOR][RELAY] move fallback_device to config

2020-05-28 Thread GitBox


tqchen commented on a change in pull request #5690:
URL: https://github.com/apache/incubator-tvm/pull/5690#discussion_r432187036



##
File path: tests/python/relay/test_pass_annotation.py
##
@@ -538,9 +539,10 @@ def expected():
 expected_index = [2, 2, 2, 1, 1, 1, 2, 2]
 check_annotated_graph(annotated_func, expected_func)
 params = {"a": a_data, "b": b_data, "c": c_data, "d": d_data}
-config = {"opt_level": 0}
-config["fallback_device"] = fallback_device
-with relay.build_config(**config):
+fall_dev = tvm.tir.IntImm("int32", fallback_device.device_type)
+with tvm.transform.PassContext(opt_level=0,
+   config={"relay.fallback_device_type":
+   tvm.tir.IntImm(32, fall_dev)}):

Review comment:
   do we need the IntImm? I feel that the conversion from int to IntImm is automatic
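   For reference, the simplified form being asked about would look roughly like this (a sketch; it relies on the int-to-IntImm conversion being automatic):

```python
import tvm

fallback_device = tvm.cpu(0)
with tvm.transform.PassContext(
        opt_level=0,
        config={"relay.fallback_device_type": fallback_device.device_type}):
    pass  # build / run the annotated graph here
```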





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #5650: [REFACTOR] PassContext.fallback_device -> PassConfig.config

2020-05-28 Thread GitBox


tqchen commented on issue #5650:
URL: https://github.com/apache/incubator-tvm/issues/5650#issuecomment-635678608


   I agree we should do that as well



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (a072da0 -> 95b3ad9)

2020-05-28 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from a072da0  [TIR][REFACTOR] std::string -> String Migration in TIR nodes 
(#5596)
 add 95b3ad9  [PatternLang] Add ConstantPattern (#5689)

No new revisions were added by this update.

Summary of changes:
 docs/langref/relay_pattern.rst|  43 +++
 include/tvm/relay/dataflow_pattern.h  |  18 ++
 include/tvm/relay/dataflow_pattern_functor.h  |   3 +
 python/tvm/relay/dataflow_pattern/__init__.py |   8 +
 src/relay/ir/dataflow_matcher.cc  |   5 +
 src/relay/ir/dataflow_pattern.cc  |  12 +
 src/relay/ir/dataflow_pattern_functor.cc  |   2 +
 src/relay/ir/indexed_graph.cc |   2 +
 tests/python/relay/test_dataflow_pattern.py   | 359 +-
 9 files changed, 337 insertions(+), 115 deletions(-)



[GitHub] [incubator-tvm] masahi merged pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


masahi merged pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


comaniac commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635675585


   > @comaniac @mbrookhart good to go?
   
   Yeah I'm fine with that.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


masahi commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635675471


   @comaniac @mbrookhart good to go?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


mbrookhart commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635675067


   @comaniac The input VarPattern name is already optional; if you use is_input() it will match any VarNode.
   
https://github.com/apache/incubator-tvm/blob/a072da0588c542757d2815832b7f010f530b2428/src/relay/ir/dataflow_matcher.cc#L386-L395
   
https://github.com/apache/incubator-tvm/blob/a072da0588c542757d2815832b7f010f530b2428/python/tvm/relay/dataflow_pattern/__init__.py#L171-L185
   
   I'm not sure that made it into the documentation or test cases; I'll make sure to update them in the follow-up PR.
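   For reference, a tiny sketch of the two forms (helper name as in the linked `__init__.py`):

```python
from tvm.relay.dataflow_pattern import is_input

any_input = is_input()     # name hint omitted: matches any VarNode
named_x = is_input('x')    # matches only a var whose name hint is 'x'
```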



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on a change in pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


masahi commented on a change in pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#discussion_r432183447



##
File path: docs/langref/relay_pattern.rst
##
@@ -131,6 +136,44 @@ The next example is matching a pattern of batch_norm -> get(0) -> relu:
 out = relay.nn.relu(tuple_get_item_node)
 pat.match(out)
 
+The next example is matching a constant node regarding its values. This is useful to check
+if a specific parameter in a subgraph has been bind or not.

Review comment:
   bound, can be fixed in the next PR





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


mbrookhart commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635674194


   I'd like to refine it slightly:
   - Pattern input ExprPattern(relay.const):
     - Only match constant nodes that match the value and embed them in the partitioned functions. In this case, the arguments of the partitioned functions will be reduced.
   - Pattern input ConstantPattern:
     - Only match constant nodes and embed them in the partitioned functions. In this case, the arguments of the partitioned functions will be reduced.
   - Pattern input VarPattern('x'):
     - Match only the var node, with an optional specified name hint. In this case, the arguments of the partitioned function are fixed.
   - Pattern input wildcard:
     - Match anything. In this case, the arguments of the partitioned function are fixed.
   - Pattern AltPattern:
     - Match either lhs or rhs. Depending on which side matched, recursively apply this matching logic and embed constants appropriately based on the contained pattern.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on issue #5650: [REFACTOR] PassContext.fallback_device -> PassConfig.config

2020-05-28 Thread GitBox


zhiics commented on issue #5650:
URL: https://github.com/apache/incubator-tvm/issues/5650#issuecomment-635674079


   I realize we may also want to remove `relay.build_config` and directly use PassContext to keep it consistent with tir, but that may involve a lot of changes. @tqchen thoughts?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics opened a new pull request #5690: [REFACTOR][RELAY] move fallback_device to config

2020-05-28 Thread GitBox


zhiics opened a new pull request #5690:
URL: https://github.com/apache/incubator-tvm/pull/5690


   Per #5650, this PR moves the fallback_device field of `PassContext` into the config.
   
   @tqchen @junrushao1994 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac edited a comment on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


comaniac edited a comment on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635672040


   OK, so here is a conclusion for another follow-up PR to deal with constant lifting.
   
   - Pattern input `ConstantPattern`:
     - Only match constant nodes and keep them in the partitioned functions. In this case, the arguments of the partitioned functions will be reduced.
   - Pattern input `VarPattern('x')`:
     - Match only the var node with a specified name hint. In this case, the arguments of the partitioned function are fixed.
   - Pattern input `wildcard` or `VarPattern('x') | ConstantPattern`:
     - Match both constant and var nodes but lift constant nodes. In this case, the arguments of the partitioned function are fixed whether it matches a constant or a var.
   
   btw, IMHO, for clearer semantics, it might be better if we support `VarPattern()` that matches a var node with any name hint but not a constant node.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


comaniac commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635672040


   OK, so here is a conclusion for another follow-up PR to deal with constant lifting.
   
   - Pattern input `ConstantPattern`:
     - Only match constant nodes and keep them in the partitioned functions. In this case, the arguments of the partitioned functions will be reduced.
   - Pattern input `VarPattern('x')`:
     - Match only the var node with a specified name hint. In this case, the arguments of the partitioned function are fixed.
   - Pattern input `wildcard` or `VarPattern('x') | ConstantPattern`:
     - Match both constant and var nodes but lift constant nodes. In this case, the arguments of the partitioned function are fixed whether it matches a constant or a var.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


mbrookhart commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635668518


   I think it depends on which one matched? If it's a var, we lift it; if it's a constant, we embed it.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


comaniac commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635667979


   That's the behavior I just realized from the above examples. In this case, what would be the behavior if we specify `VarPattern('x') | ConstantPattern()`?
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


mbrookhart commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635666562


   I think I agree with @mbaret here. I think I see a simple way to get that behavior by default; why don't I post another PR in that direction after this goes in?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbaret commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


mbaret commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635665317


   It's not really the case that the old merge composite explicitly 'lifts' constant nodes if they're vars; in the old merge composite, vars indicate inputs to the pattern (so they don't need to explicitly match against a VarNode).
   
   My expectation would be that if the pattern requires there to be a constant node, I should see that constant node in the partitioned function. In other words, the body of the function should still match the pattern used to create it.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac edited a comment on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


comaniac edited a comment on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635663626


   OK I can finally reproduce this case:
   
   ```
   fn (%x: Tensor[(1, 3, 224, 224), float32], %b: Tensor[(3), float32]) -> 
Tensor[(1, 3, 222, 222), float32] {
 %1 = fn (%x1: Tensor[(1, 3, 224, 224), float32], %b1: Tensor[(3), 
float32], Composite="pat") -> Tensor[(1, 3, 222, 222), float32] {
   %0 = nn.conv2d(%x1, meta[relay.Constant][0] /* ty=Tensor[(3, 3, 3, 3), 
float64] */ /* ty=Tensor[(3, 3, 3, 3), float64] */, padding=[0, 0, 0, 0]) /* 
ty=Tensor[(1, 3, 222, 222), float32] */;
   nn.bias_add(%0, %b1) /* ty=Tensor[(1, 3, 222, 222), float32] */
 };
 %1(%x, %b) /* ty=Tensor[(1, 3, 222, 222), float32] */
   }
   // meta data omitted. you can use show_meta_data=True to include meta data
   
   ```
   
   Combining with the previous example, we know that the old `MergeComposite` preserves the constant node when the pattern is also a constant; otherwise it will lift constant nodes. In the DNNL codegen, the patterns are all specified as VarNode, so the case that matches a pattern with constant nodes never shows up in the unit tests, and that's why we missed it.
   
   However, I personally think this behavior is weird. Generally speaking, whether we specify `var` or `const` in the pattern, we may or may not need constant lifting. It seems to me that this partition behavior should not be bound to patterns but should be a partition option.
   
   Anyways, I think the behavior of constant lifting should be discussed in 
another topic and does not relate to this PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac edited a comment on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


comaniac edited a comment on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635663626


   OK I can finally reproduce this case:
   
   ```
   fn (%x: Tensor[(1, 3, 224, 224), float32], %b: Tensor[(3), float32]) -> 
Tensor[(1, 3, 222, 222), float32] {
 %1 = fn (%x1: Tensor[(1, 3, 224, 224), float32], %b1: Tensor[(3), 
float32], Composite="pat") -> Tensor[(1, 3, 222, 222), float32] {
   %0 = nn.conv2d(%x1, meta[relay.Constant][0] /* ty=Tensor[(3, 3, 3, 3), 
float64] */ /* ty=Tensor[(3, 3, 3, 3), float64] */, padding=[0, 0, 0, 0]) /* 
ty=Tensor[(1, 3, 222, 222), float32] */;
   nn.bias_add(%0, %b1) /* ty=Tensor[(1, 3, 222, 222), float32] */
 };
 %1(%x, %b) /* ty=Tensor[(1, 3, 222, 222), float32] */
   }
   // meta data omitted. you can use show_meta_data=True to include meta data
   
   ```
   
   Combining with the previous example, we know that the old `MergeComposite` preserves the constant node when the pattern is also a constant; otherwise it will lift constant nodes. In the DNNL codegen, the patterns are all specified as VarNode, so the case that matches a pattern with constant nodes never shows up in the unit tests, and that's why we missed it.
   
   However, I personally think this behavior is weird. Generally speaking, whether we specify `var` or `const` in the pattern, we may or may not need constant lifting. It seems to me that this partition behavior should not be bound to patterns but should be a partition option.
   
   Anyways, I think the behavior of constant lifting should be discussed in 
another topic and does not relate to this PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


comaniac commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635663626


   OK I can finally reproduce this case:
   
   ```
   fn (%x: Tensor[(1, 3, 224, 224), float32], %b: Tensor[(3), float32]) -> 
Tensor[(1, 3, 222, 222), float32] {
 %1 = fn (%x1: Tensor[(1, 3, 224, 224), float32], %b1: Tensor[(3), 
float32], Composite="pat") -> Tensor[(1, 3, 222, 222), float32] {
   %0 = nn.conv2d(%x1, meta[relay.Constant][0] /* ty=Tensor[(3, 3, 3, 3), 
float64] */ /* ty=Tensor[(3, 3, 3, 3), float64] */, padding=[0, 0, 0, 0]) /* 
ty=Tensor[(1, 3, 222, 222), float32] */;
   nn.bias_add(%0, %b1) /* ty=Tensor[(1, 3, 222, 222), float32] */
 };
 %1(%x, %b) /* ty=Tensor[(1, 3, 222, 222), float32] */
   }
   // meta data omitted. you can use show_meta_data=True to include meta data
   
   ```
   
   Combining with the previous example, we know that the old `MergeComposite` preserves the constant node when the pattern is also a constant; otherwise it will lift constant nodes. In the DNNL codegen, the patterns are all specified as VarNode, so the case that matches a pattern with constant nodes never shows up in the unit tests, and that's why we missed it.
   
   However, I personally think this behavior is weird. Generally speaking, whether we specify `var` or `const` in the pattern, we may or may not need constant lifting. It seems to me that this partition behavior should not be bound to patterns but should be a partition option.
   
   Anyways, I think the behavior of constant lifting should be discussed in 
another topic and does not relate to this PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


masahi commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635658618


   > @masahi I just checked with this PR. The unit test hits the line you 
pointed out twice.
   
   hmm something seems off to me. On the dnnl fused mobilenet tests, I think 
all params in the network should go through that visitor. In 
https://github.com/apache/incubator-tvm/pull/5310, @zhiics improved how to 
handle big constants, which made running fused mobilenet possible. If the 
visitor is called only twice, we shouldn't have had such issues at all.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbaret commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


mbaret commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635650558


   Try changing w in the pattern to a relay.const rather than a var.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


comaniac commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635649921


   I made an example and tested it with the old `MergeComposite` pass (master commit 6100112a150540588ddc9abb36dea0ff961f4301). It behaves as I expected: the partitioned composite function still has 3 arguments even though `%w` has been bound. @mbaret could you double check?
   
   ```python
    import tvm
    from tvm import relay
    from tvm import tir
    from tvm.relay.testing import run_opt_pass
    
    from tvm.relay.build_module import bind_params_by_name
    import numpy as np
    
    
    # Make a graph
    x = relay.var('x', shape=(1, 3, 224, 224))
    w = relay.var('w', shape=(3, 3, 3, 3))
    b = relay.var('b', shape=(3,))
    
    conv2d = relay.op.nn.conv2d(x, w)
    out = relay.op.nn.bias_add(conv2d, b)
    func = relay.Function([x, w, b], out)
    mod = tvm.IRModule.from_expr(func)
    
    mod["main"] = bind_params_by_name(mod["main"],
                                      {'w': tvm.nd.array(np.ones(shape=(3, 3, 3, 3)))})
    print('=== Before ===')
    print(mod['main'].body)
    
    
    def pat():
        x = relay.var('x', shape=(1, 3, 224, 224))
        w = relay.var('w', shape=(3, 3, 3, 3))
        b = relay.var('b', shape=(3,))
    
        conv2d = relay.op.nn.conv2d(x, w)
        out = relay.op.nn.bias_add(conv2d, b)
        return out
    
    pattern_table = [('pat', pat())]
    result = run_opt_pass(mod['main'], relay.transform.MergeComposite(pattern_table))
    print('=== After ===')
    print(result)
   ```
   
   ```
   === Before ===
   free_var %x: Tensor[(1, 3, 224, 224), float32]
   %0 = nn.conv2d(%x, meta[relay.Constant][0] /* ty=Tensor[(3, 3, 3, 3), 
float64] */ /* ty=Tensor[(3, 3, 3, 3), float64] */, padding=[0, 0, 0, 0]) /* 
ty=Tensor[(1, 3, 222, 222), float32] */;
   free_var %b: Tensor[(3), float32]
   nn.bias_add(%0, %b) /* ty=Tensor[(1, 3, 222, 222), float32] */
   // meta data omitted. you can use show_meta_data=True to include meta data
   === After ===
   fn (%x: Tensor[(1, 3, 224, 224), float32], %b: Tensor[(3), float32]) -> 
Tensor[(1, 3, 222, 222), float32] {
 %1 = fn (%x1: Tensor[(1, 3, 224, 224), float32], %w: Tensor[(3, 3, 3, 3), 
float64], %b1: Tensor[(3), float32], Composite="pat") -> Tensor[(1, 3, 222, 
222), float32] {
   %0 = nn.conv2d(%x1, %w, padding=[0, 0, 0, 0]) /* ty=Tensor[(1, 3, 222, 
222), float32] */;
   nn.bias_add(%0, %b1) /* ty=Tensor[(1, 3, 222, 222), float32] */
 };
 %1(%x, meta[relay.Constant][0] /* ty=Tensor[(3, 3, 3, 3), float64] */ /* 
ty=Tensor[(3, 3, 3, 3), float64] */, %b) /* ty=Tensor[(1, 3, 222, 222), 
float32] */
   }
   // meta data omitted. you can use show_meta_data=True to include meta data
   
   ```
   
   @masahi I just checked with this PR. The unit test hits the line you pointed 
out twice.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5683: [PYTORCH]Minor bug fixes

2020-05-28 Thread GitBox


masahi commented on pull request #5683:
URL: https://github.com/apache/incubator-tvm/pull/5683#issuecomment-635648772


   You can add BERT tests to the repo and only execute them if the import succeeds. They won't be run on CI, but we can test them locally. Since BERT seems popular, it would be nice to have an example of how to load it with our PyTorch frontend.
   
   @siju-samuel Do you want to do it? Otherwise I can merge this as it is. 
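   For illustration, a sketch of such an import-guarded test; the test name, the `transformers` package, and the skip style are assumptions, not part of the existing test suite:

```python
def test_forward_bert():
    try:
        # Optional dependency: bail out quietly when it is not installed, so CI is unaffected.
        import torch
        from transformers import BertModel, BertTokenizer
    except ImportError:
        print("torch/transformers not installed, skipping BERT test")
        return
    # ... trace the model with torch.jit.trace and convert it via
    # relay.frontend.from_pytorch, comparing outputs as in lstm_test.py
```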



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart edited a comment on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


mbrookhart edited a comment on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635646234


   :point_up: I'll let you guys discuss the appropriate behavior; there seems to be some complication to this.
   
   In the meantime, this was my initial assumption:
   
   > I expect constants to be available to external codegen at compile time. Otherwise we cannot do constant folding.
   
   But it doesn't match the fusion pass.
   
   > I think the most logical behaviour is for constants to remain embedded where they are explicitly part of the pattern and to lift them where they are not.
   
   I'll think about how to do that.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi edited a comment on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


masahi edited a comment on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635644901


   I might be missing something, but I expect constants to be available to 
external codegen at compile time. Otherwise we cannot do constant folding. 
Sorry, I didn't put much thought into this issue when merging #5663.
   
   @comaniac Under the current implementation of merge composite + pattern 
partitioning, do you hit this visitor 
https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/contrib/dnnl/codegen.cc#L156
 during DNNL codegen? 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


mbrookhart commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635646234


   :point_up: I'll let you guys discuss the appropriate behavior; there seems to be some complication to this.
   
   In the meantime, this was my initial assumption:
   
   > I expect constants to be available to external codegen at compile time. Otherwise we cannot do constant folding.
   
   But it doesn't match the fusion pass.
   
   > I think the most logical behaviour is for constants to remain embedded where they are explicitly part of the pattern and to lift them where they are not.
   
   I'll think about how to do that. It's a little complicated, just due to the current separation of responsibilities between the various passes.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi edited a comment on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


masahi edited a comment on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635644901


   I expect constants to be available to external codegen at compile time. 
Otherwise we cannot do constant folding.
   
   @comaniac Under the current implementation of pattern partitioning, do you 
hit this visitor 
https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/contrib/dnnl/codegen.cc#L156
 during DNNL codegen? 
   
   Sorry, I didn't put much thought into this issue when merging #5663.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi edited a comment on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


masahi edited a comment on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635644901


   I expect constants to be available to external codegen at compile time. 
Otherwise we cannot do constant folding.
   
   @comaniac Under the current implementation of merge composite + pattern 
partitioning, do you hit this visitor 
https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/contrib/dnnl/codegen.cc#L156
 during DNNL codegen? 
   
   Sorry, I didn't put much thought into this issue when merging #5663.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi edited a comment on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


masahi edited a comment on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635644901


   I expect constants to be available to external codegen at compile time. 
Otherwise we cannot do constant folding.
   
   @comaniac Under the current implementation of partitioning, do you hit this 
visitor 
https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/contrib/dnnl/codegen.cc#L156
 during DNNL codegen? 
   
   Sorry, I didn't put much thought into this issue when merging #5663.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi edited a comment on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


masahi edited a comment on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635644901


   I expect constants to be available to external codegen at compile time. 
Otherwise we cannot do constant folding.
   
   @comaniac Under the current constant impl of partitioning, do you hit this 
visitor 
https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/contrib/dnnl/codegen.cc#L156
 during DNNL codegen? 
   
   Sorry, I didn't put much thought into this issue when merging #5663.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


masahi commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635644901


   I expect constants to be available to external codegen at compile time. 
Otherwise we cannot do constant folding.
   
   @comaniac Under the current constant impl of partitioning, do you hit this 
visitor 
https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/contrib/dnnl/codegen.cc#L156
 during DNNL codegen? 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] deepakbabel23 commented on pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


deepakbabel23 commented on pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#issuecomment-635643165


   @srkreddy1238,
   I have incorporated all your review comments; please take a look at the fresh changes. I also quickly evaluated whether we could use convert_variables_to_constants_v2(), but it doesn't look promising for all the covered cases, including output names, shape info, and nested function calls. We can discuss this further if you want; if not, I request you to approve the changes.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] deepakbabel23 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


deepakbabel23 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r432158588



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -3155,6 +3176,91 @@ def _convert_control_flow_operator(self, node, inputs, attrs, control_flow_node_
 
         return op
 
+    def _partition_call_operator(self, inputs, attr):
+        """
+        Convert the Relay Partition call ops into Relay Function calls and
+        function definitions from Tensorflow graph library attribute to Relay global
+        functions
+
+        Parameters
+        ----------
+        node: TensorFlow graph node object.
+            A TensorFlow graph node object.
+
+        inputs : List[tvm.relay.Expr]
+            List of input symbols.
+
+        attrs : Dict[tvm.Attrs]
+            Dict of operator attributes.
+
+        Returns
+        -------
+        op : tvm.relay.Expr
+            Converted relay expression.
+        """
+
+        try:
+            from tensorflow.python.framework import function_def_to_graph
+        except ImportError as e:
+            raise ImportError(
+                "Unable to import tensorflow which is required {}".format(e))
+
+        main_graph_proto = self._main_graph_proto
+        outer_graph_def = main_graph_proto._graph
+
+        node_func_name = attr.get('f').name
+        func = next((f for f in outer_graph_def.library.function
+                     if f.signature.name == node_func_name), None)
+        if func:
+            devices = set(node.device for node in func.node_def)
+            if len(devices) > 1:
+                raise Exception("Found inconsistent Device assignment in the "\
+                                "Stateful Partitioned SubGraph. Rejecting "\
+                                "the subgraph ")
+            # Convert function definition to graph
+            func_input_shapes = func.attr["_input_shapes"].list.shape
+            subgraph, _ = function_def_to_graph.\
+                function_def_to_graph_def(func, func_input_shapes)
+
+            # Computing subgraph's input shape dictionary
+            subgraph_shape_dict, input_expr_dict = {}, {}
+            for f_arg, input in zip(func.signature.input_arg, inputs):
+                input_expr_dict[f_arg.name] = input
+                subgraph_shape_dict[f_arg.name] = _infer_shape(input, main_graph_proto._mod)
+
+            func_name = 'func_{}'.format(func.signature.name)
+            try:
+                global_func = main_graph_proto._mod[func_name]

Review comment:
   Nothing to be done here I assume now.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] deepakbabel23 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


deepakbabel23 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r432158330



##
File path: tests/python/frontend/tensorflow/test_forward.py
##
@@ -3179,10 +3183,342 @@ def test_forward_isfinite():
 _verify_infiniteness_ops(tf.is_finite, "isfinite")
 
 
+def _test_spop_placeholder_one():
+with tf.Graph().as_default():
+
+@function.Defun(*[tf.int32]*2)
+def Forward(x,y):
+print(x.name)
+print(y.name)
+b = tf.add(x, y)
+return b
+pl1 = tf.placeholder(tf.int32,name="pl1")
+pl2 = tf.placeholder(tf.int32,name="pl2")
+pl3 = tf.placeholder(tf.int32, name="pl3")
+data = np.array([[-1, 1], [2, -2]], dtype=np.int32)
+data2 = np.array([[-2, 3], [4, -6]], dtype=np.int32)
+data3 = np.array([[-2, 3], [4, -6]], dtype=np.int32)
+z1 = gen_functional_ops.StatefulPartitionedCall(args=[pl1,pl2], 
Tout=[tf.int32],f=Forward)
+z2 = z1 + pl3
+compare_tf_with_tvm([data, data2, data3], ['pl1:0', 'pl2:0', 'pl3:0'],
+['StatefulPartitionedCall:0',z2.name],  mode='vm', 
init_global_variables=True)
+
+
+def _test_spop_placeholder_two():
+with tf.Graph().as_default():
+data = np.ones([1], dtype=int).astype(np.int32)
+dataVar = tf.Variable(data, shape=data.shape)
+pl1 = 
array_ops.placeholder_with_default(dataVar,shape=data.shape,name="pl1")
+tpl = tf.convert_to_tensor(pl1, dtype=tf.int32)
+
+@function.Defun(*[tf.int32])
+def pl_with_default(pl):
+return tf.expand_dims(tf.multiply(pl, pl), 0)
+
+z = gen_functional_ops.StatefulPartitionedCall(args=[tpl], 
Tout=[tf.int32], f=pl_with_default)
+compare_tf_with_tvm(data, ['pl1:0'], 'StatefulPartitionedCall:0', 
mode='vm', init_global_variables=True)
+
+
+def _test_spop_placeholder_three():
+with tf.Graph().as_default():
+t1 = tf.placeholder(tf.int32, (3, 3, 3), "t1")
+t1_data = np.arange(27, dtype=np.int32).reshape((3, 3, 3))
+t2 = tf.placeholder(tf.int32, (3, 3, 3), "t2")
+t2_data = np.arange(27, dtype=np.int32).reshape((3, 3, 3))
+
+@tf.function
+def add(x, y):
+return tf.add(x, y, "add_t1_t2")
+
+t3 = add(t1, t2)
+compare_tf_with_tvm([t1_data, t2_data], ['t1:0', 't2:0'], [t3.name], 
mode='vm', init_global_variables=True)
+
+
+def _test_spop_placeholder_four():
+with tf.Graph().as_default():
+t1_data = np.array([[-1, 1, 3], [2, -2, 4], [2, -3, 14]], 
dtype=np.int32)
+t2_data = np.array([[-2, 1, 2], [12, -2, 14], [12, -3, 4]], 
dtype=np.int32)
+t1 = tf.placeholder(tf.int32, name="t1")
+t2 = tf.placeholder(tf.int32, name="t2")
+
+@tf.function
+def add(x, y):
+return tf.add(x, y, "add_t1_t2")
+
+t3 = add(t1, t2)
+compare_tf_with_tvm([t1_data, t2_data], ['t1:0', 't2:0'], [t3.name], 
mode='vm', init_global_variables=True)
+
+
+def _test_spop_function_invocation_basic():
+with tf.Graph().as_default():
+
+def fun1(a):
+return tf.multiply(a,a)
+
+def fun2(b):
+return tf.multiply(b,10)
+
+@tf.function
+def fun3(x,y):
+x = fun2(x)
+y = fun1(y)
+z = tf.add(x,y)
+return z
+
+t3 = fun3(tf.constant(10.5), tf.constant(20.4))
+
+compare_tf_with_tvm([], [], [t3.name], mode='vm', 
init_global_variables=True)
+
+
+def _test_spop_function_invocation_nested():
+with tf.Graph().as_default():
+t1 = tf.placeholder(tf.int32, (3, 3, 3), name="t1")
+t1_data = np.arange(27, dtype=np.int32).reshape((3, 3, 3))
+t2 = tf.placeholder(tf.int32, name="t2")
+t2_data = np.arange(27, dtype=np.int32).reshape((3, 3, 3))
+
+@tf.function
+def myfunc(x, y):
+return tf.add(x, y, "myfunc")
+
+@tf.function
+def myfunc2(x, y):
+z = myfunc(x, y)
+l = myfunc(z, y)
+m = myfunc(l,z)
+return tf.add(l, m, "myfunc2")
+
+res1 = myfunc(t1, t2)
+res2 = myfunc2(res1, t1)
+
+compare_tf_with_tvm([t1_data, t2_data], ['t1:0', 't2:0'], [res2.name], 
mode='vm', init_global_variables=True)
+
+
+def _test_spop_function_invocation_no_autograph():
+with tf.Graph().as_default():
+
+@tf.function(autograph=False)
+def fun1(a):
+return tf.multiply(a,a)
+
+@tf.function(autograph=False)
+def fun2(b):
+return tf.multiply(b,10)
+
+@tf.function
+def fun3(x,y):
+x = fun2(x)
+y = fun1(y)
+z = tf.add(x,y)
+return z
+
+t3 = fun3(tf.constant(10.5), tf.constant(20.4))
+
+compare_tf_with_tvm([], [], [t3.name], mode='vm', 
init_global_variables=True)
+
+
+def 

[GitHub] [incubator-tvm] deepakbabel23 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


deepakbabel23 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r432158461



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2896,15 +2903,29 @@ def _parse_import_prerequisites(self, graph):
 """
 missing_operators = set()
 for node in graph.node:
+try:
+from tensorflow.python.framework import op_def_registry
+except ImportError as e:
+raise ImportError(
+"Unable to import tensorflow which is required 
{}".format(e))
+getOpDef = op_def_registry._registered_ops.get if 
hasattr(op_def_registry,\
+"_registered_ops") else op_def_registry.get
+op_def = getOpDef(node.op)
 if node.op == "Placeholder" or node.op == 'PlaceholderWithDefault':
 pass
 elif node.op == "Const":
 pass
+elif node.op in ["PartitionedCall", "StatefulPartitionedCall"]:
+pass
 else:
 if any([node.op in t for t in [_identity_list, _convert_map,
_convert_map_rnn,
_control_flow_nodes]]):
 pass
+elif op_def is not None and op_def.is_stateful:
+raise Exception("Found a stateful operator in this graph 
{}. "\

Review comment:
   Completed this. Please take another look.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] deepakbabel23 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


deepakbabel23 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r432158182



##
File path: tests/python/frontend/tensorflow/test_forward.py
##
@@ -3179,10 +3183,342 @@ def test_forward_isfinite():
 _verify_infiniteness_ops(tf.is_finite, "isfinite")
 
 
+def _test_spop_placeholder_one():
+with tf.Graph().as_default():

Review comment:
   Changed the test case names appropriately.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac edited a comment on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


comaniac edited a comment on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635637903


   > Thanks for the PR so soon! Is there an example of how partition works on a 
constant match? In particular, does the constant remain propagated into the 
body?
   
   An example can be found in the unit test: 
https://github.com/apache/incubator-tvm/pull/5689/files#diff-f9920485e5e341787129348ce1985db9R213
   
   Also, could you provide an example showing your expectation for constant 
propagation? I've checked that all merge composite unit tests pass with the 
current implementation, even the DNNL codegen unit tests that use composite 
functions with parameter binding. It would be better if you could include all 
your use cases in the unit tests so that we can be on the same page.
   
   > 
   > On a general point, it's helpful if reformats are kept separate from 
feature additions for reviews so it's easy to see what the changes are.
   
   Yeah, the bugfix should definitely go in a separate PR. As for the 
re-formatting, I didn't intentionally reformat the unit test file; VSCode 
auto-formats the file by default once I paste some code. I even considered 
reverting the auto-formatting for the reason you pointed out. However, I 
checked the re-formatting and it mostly just touches blank and overly long 
lines, so I think it should be fine for this small feature PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbaret commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


mbaret commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635640183


   ```
   %2 = fn (%input: Tensor[(1, 224, 224, 3), uint8], Composite="qnn_conv2d") -> 
Tensor[(1, 224, 224, 64), uint8] {
   %0 = qnn.conv2d(%input, meta[relay.Constant][0] /* ty=Tensor[(3, 3, 3, 
64), uint8] */ /* ty=Tensor[(3, 3, 3, 64), uint8] */, 115 /* ty=int32 */, 134 
/* ty=int32 */, 1f /* ty=float32 */, 0.00503911f /* ty=float32 */, padding=[1, 
1, 1, 1], channels=64, kernel_size=[3, 3], data_layout="NHWC", 
kernel_layout="HWIO", out_dtype="int32") /* ty=Tensor[(1, 224, 224, 64), int32] 
*/;
   %1 = nn.bias_add(%0, meta[relay.Constant][1] /* ty=Tensor[(64), int32] 
*/ /* ty=Tensor[(64), int32] */, axis=3) /* ty=Tensor[(1, 224, 224, 64), int32] 
*/;
   qnn.requantize(%1, 0.00503911f /* ty=float32 */, 0 /* ty=int32 */, 
3.26957f /* ty=float32 */, 0 /* ty=int32 */, out_dtype="uint8") /* 
ty=Tensor[(1, 224, 224, 64), uint8] */
 };
   ```
   The above would be an example of the composite function I'd expect if I 
matched the weight of the conv as a constant (this is the old merge composite 
behaviour for that case).
   
   I think the most logical behaviour is for constants to remain embedded where 
they are explicitly part of the pattern and to lift them where they are not.
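   
   As a rough illustration of the two behaviours being weighed here (a hedged 
sketch: it assumes the ConstantPattern from this PR is exposed through an 
is_constant() helper, which was only floated in this thread as possible sugar):
   
   ```
   from tvm.relay.dataflow_pattern import is_op, wildcard, is_constant

   # nn.conv2d whose weight must be a constant for the pattern to match.
   pattern = is_op("nn.conv2d")(wildcard(), is_constant())

   # Partitioning wraps each match in a composite function.  The question is
   # whether the matched constant weight stays embedded in that function's
   # body (the old merge composite behaviour) or gets lifted to a function
   # argument (the behaviour after #5663).
   # partitioned = pattern.partition(expr)
   ```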



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart edited a comment on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


mbrookhart edited a comment on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635637682


   @mbaret Originally the partition pass embedded constants in the function 
body, but @comaniac filed #5662, and I responded with #5663, so it currently 
will lift the constants to the function arguments.
   
   Do you prefer embedded constants?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


comaniac commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635637903


   > Thanks for the PR so soon! Is there an example of how partition works on a 
constant match? In particular, does the constant remain propagated into the 
body?
   
   An example can be found in the unit test: 
https://github.com/apache/incubator-tvm/pull/5689/files#diff-f9920485e5e341787129348ce1985db9R213
   
   Also, could you provide an example showing your expectation for constant 
propagation?
   
   > 
   > On a general point, it's helpful if reformats are kept separate from 
feature additions for reviews so it's easy to see what the changes are.
   
   Yeah, the bugfix should definitely go in a separate PR. As for the 
re-formatting, I didn't intentionally reformat the unit test file; VSCode 
auto-formats the file by default once I paste some code. I even considered 
reverting the auto-formatting for the reason you pointed out. However, I 
checked the re-formatting and it mostly just touches blank and overly long 
lines, so I think it should be fine for this small feature PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


mbrookhart commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635637682


   @mbaret Originally the pass embedded constants in the function body, but 
@comaniac filed #5662, and I responded with #5663, so it currently will lift 
the constants to the function arguments.
   
   Do you prefer embedded constants?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbaret commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


mbaret commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635633008


   Thanks for the PR so soon! Is there an example of how partition works on a 
constant match? In particular, does the constant remain propagated into the 
body?
   
   On a general point, it's helpful if reformats are kept separate from feature 
additions for reviews so it's easy to see what the changes are.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


comaniac commented on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635629161


   > To make other syntactic sugar, you could add an is_const() that simply 
creates a new ConstPattern, but I don't think it's strictly necessary.
   
   I could do that. How about `is_tuple` and `is_tuple_get_item`?
   
   
   > For my own edification, what did you use to auto-format the test file? I 
tried autopep8 on it a while back and got really bad results, so I ignored it 
after that (as does the linter).
   
   I used `yapf` with the 
[.style.yapf](https://gist.github.com/comaniac/388921955dbe6c13561c4734aaa7a3e3)
 I put under the TVM home directory.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac edited a comment on pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


comaniac edited a comment on pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689#issuecomment-635629161


   > To make other syntactic sugar, you could add an is_const() that simply 
creates a new ConstPattern, but I don't think it's strictly necessary.
   
   I could do that. How about `is_tuple` and `is_tuple_get_item`?
   
   
   > For my own edification, what did you use to auto-format the test file? I 
tried autopep8 on it a while back and got really bad results, so I ignored it 
after that (as does the linter).
   
   I used `yapf` with the 
[.style.yapf](https://gist.github.com/comaniac/388921955dbe6c13561c4734aaa7a3e3)
 I put under the TVM home.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac opened a new pull request #5689: [PatternLang] Add ConstantPattern

2020-05-28 Thread GitBox


comaniac opened a new pull request #5689:
URL: https://github.com/apache/incubator-tvm/pull/5689


   Discussion: https://discuss.tvm.ai/t/patternlang-match-constant-nodes/6835
   
   Changelog:
   * Adds `ConstantPattern` to the pattern language.
   * Add unit tests of constant pattern.
   * Eliminate pylint errors in the dataflow pattern unit test file.
   * Update the document.
   
   cc @mbrookhart @zhiics @mbaret 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5681: [RELAY] Fix segfault in pretty print when ObjectRef is null

2020-05-28 Thread GitBox


tqchen commented on pull request #5681:
URL: https://github.com/apache/incubator-tvm/pull/5681#issuecomment-635592715


   we can create a new file tests/relay/test_text_printer.py



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5681: [RELAY] Fix segfault in pretty print when ObjectRef is null

2020-05-28 Thread GitBox


tqchen edited a comment on pull request #5681:
URL: https://github.com/apache/incubator-tvm/pull/5681#issuecomment-635592715


   
https://github.com/apache/incubator-tvm/blob/master/tests/python/relay/test_ir_text_printer.py



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #5673: mxnet group conv not support very well by tvm

2020-05-28 Thread GitBox


tqchen commented on issue #5673:
URL: https://github.com/apache/incubator-tvm/issues/5673#issuecomment-635545203


   Thanks for reporting the problem, let us move to the discuss thread 
https://discuss.tvm.ai/t/mxnet-group-conv-not-support-very-well-by-tvm/6811/3



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen closed issue #5673: mxnet group conv not support very well by tvm

2020-05-28 Thread GitBox


tqchen closed issue #5673:
URL: https://github.com/apache/incubator-tvm/issues/5673


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-28 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r432049322



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -614,6 +614,52 @@ def _impl(inputs, attr, params, mod):
 return out
 return _impl
 
+def _nms():
+def _impl(inputs, attr, params, mod):
+# Get parameter values
+max_output_size = 
int(np.atleast_1d(inputs[2].data.asnumpy().astype("int64"))[0])
+iou_threshold = np.atleast_1d(inputs[3].data.asnumpy())[0]
+# score_threshold was introduced from V3
+score_threshold = np.atleast_1d(inputs[4].data.asnumpy())[0] if 
len(inputs) > 4 else 0.0
+
+# Generate data with shape (1, num_anchors, 5)
+scores = AttrCvt(op_name="expand_dims",
+ ignores=['T_threshold'],
+ extras={'axis': -1, 'num_newaxis': 1})([inputs[1]], 
attr)
+data = get_relay_op('concatenate')([scores, inputs[0]], -1)
+data = get_relay_op('expand_dims')(data, 0, 1)
+
+# reason why using get_valid_counts is for inference performance
+ct, data, indices = get_relay_op('get_valid_counts')(data,
+ 
score_threshold=score_threshold,
+ id_index=-1,
+ score_index=0)
+# TensorFlow NMS doesn't have parameter top_k
+top_k = -1
+# TF doesn't have class id for nms input
+score_index = 0
+nms_ret = get_relay_op('non_max_suppression')(data=data,
+  valid_count=ct,
+  indices=indices,
+  
max_output_size=max_output_size,
+  
iou_threshold=iou_threshold,
+  force_suppress=True,
+  top_k=top_k,
+  coord_start=1,
+  score_index=score_index,
+  id_index=-1,
+  return_indices=True,
+  invalid_to_bottom=False)
+
+# squeeze it, TF NMS is not batched
+end = get_relay_op("squeeze")(nms_ret[1], axis=[1])
+data_slice = get_relay_op("squeeze")(nms_ret[0], axis=[0])
+
+# slice to get the dynamic result
+ret = get_relay_op("strided_slice")(data_slice, _expr.const([0]), end, 
_expr.const([1]))
+return ret
+return _impl

Review comment:
   We can use slice_mode for the TensorFlow slice now.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-28 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r432045059



##
File path: python/tvm/relay/op/transform.py
##
@@ -611,31 +611,41 @@ def split(data, indices_or_sections, axis=0):
 return TupleWrapper(_make.split(data, indices_or_sections, axis), ret_size)
 
 
-def strided_slice(data, begin, end, strides=None):
+def strided_slice(data, begin, end, strides=None, slice_mode=False):
 """Strided slice of an array.
 
 Parameters
 --
 data : relay.Expr
 The source array to be sliced.
 
-begin: list of int
+begin: relay.Expr or List[int]
 The indices to begin with in the slicing.
 
-end: list of int
+end: relay.Expr or List[int]
 Indices indicating end of the slice.
 
-strides: list of int, optional
+strides: relay.Expr or List[int], optional
 Specifies the stride values, it can be negative in that case,
 the input tensor will be reversed in that particular axis.
 
+slice_mode: boolean, optional
+Whether to ignore the negative elements in input end,
+will slice to the end of data for the ignored element.

Review comment:
   Please provide more details about this attribute, for example that `end` 
is actually the slice size and `strides` is ignored.
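   
   A small NumPy sketch of the semantics I read from this thread (this is an 
assumption based on the review discussion, not the final docs): with 
slice_mode enabled, `end` is a slice size, a negative entry means "slice to 
the end of that axis", and `strides` is ignored.
   
   ```
   import numpy as np

   data = np.arange(60).reshape(3, 4, 5)

   # Regular strided_slice semantics: `end` holds absolute (exclusive) indices.
   regular = data[0:2, 1:3, 2:5]

   # slice_mode-style semantics: `end` holds sizes, negative means "the rest".
   begin, size = [0, 1, 2], [2, -1, 3]
   slices = tuple(
       slice(b, None if s < 0 else b + s)   # negative size -> rest of axis
       for b, s in zip(begin, size)
   )
   slice_mode_like = data[slices]
   assert slice_mode_like.shape == (2, 3, 3)
   ```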





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-28 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r432044019



##
File path: python/tvm/relay/op/_transform.py
##
@@ -99,8 +99,80 @@ def _arange_shape_func(start, stop, step):
 
 @_reg.register_shape_func("arange", True)
 def arange_shape_func(attrs, inputs, _):
+"""
+Shape func for arange
+"""
 return [_arange_shape_func(*inputs)]
 
+@script
+def _strided_slice_shape_func_input_data(data, begin, end, strides,
+ slice_mode):
+ndim = len(data.shape)
+out = output_tensor((ndim,), "int64")
+for i in const_range(ndim):
+cbegin = 0
+cend = data.shape[i]
+cstride = 1
+if strides.shape[0] > i:
+cstride = strides[i]
+if begin.shape[0] > i:
+cbegin = begin[i]
+if end.shape[0] <= i:
+cend = data.shape[i]
+elif slice_mode != 0:
+if end[i] < 0:
+cend = data.shape[i]
+elif cstride < 0:

Review comment:
   Should we always set cstride=1 for slice mode?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-28 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r432043625



##
File path: python/tvm/relay/op/_transform.py
##
@@ -99,8 +99,80 @@ def _arange_shape_func(start, stop, step):
 
 @_reg.register_shape_func("arange", True)
 def arange_shape_func(attrs, inputs, _):
+"""
+Shape func for arange
+"""
 return [_arange_shape_func(*inputs)]
 
+@script
+def _strided_slice_shape_func_input_data(data, begin, end, strides,
+ slice_mode):
+ndim = len(data.shape)
+out = output_tensor((ndim,), "int64")
+for i in const_range(ndim):
+cbegin = 0
+cend = data.shape[i]
+cstride = 1
+if strides.shape[0] > i:
+cstride = strides[i]
+if begin.shape[0] > i:
+cbegin = begin[i]
+if end.shape[0] <= i:
+cend = data.shape[i]
+elif slice_mode != 0:
+if end[i] < 0:
+cend = data.shape[i]
+elif cstride < 0:
+cend = cbegin - end[i]
+else:
+cend = cbegin + end[i]
+else:
+cend = end[i]
+assert cstride != 0, "Strides can't be zero."
+out[i] = int64(ceil_div((int64(cend) - int64(cbegin)), int64(cstride)))
+return out
+
+@script
+def _strided_slice_shape_func_input_shape(data_shape, begin, end, strides, 
slice_mode):
+ndim = data_shape.shape[0]
+assert ndim == 2, "not correct"
+out = output_tensor((ndim,), "int64")
+for i in const_range(ndim):
+cbegin = int64(0)
+cend = int64(data_shape[i])
+cstride = int64(1)
+if len(strides) > i:
+cstride = int64(strides[i])
+if len(begin) > i:
+cbegin = int64(begin[i])
+if len(end) <= i:
+cend = int64(data_shape[i])
+elif slice_mode != 0:
+if end[i] < 0:
+cend = int64(data_shape[i])
+elif cstride < 0:

Review comment:
   Should we always set cstride=1 for slice mode?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] siju-samuel commented on pull request #5683: [PYTORCH]Minor bug fixes

2020-05-28 Thread GitBox


siju-samuel commented on pull request #5683:
URL: https://github.com/apache/incubator-tvm/pull/5683#issuecomment-635427158


   BERT has a dependency on the pip package `pytorch_pretrained_bert`. I will 
add some BERT test cases later. Maybe we can also add it as another tutorial.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] cchung100m commented on pull request #5596: [TIR][REFACTOR] std::string -> String Migration in TIR nodes

2020-05-28 Thread GitBox


cchung100m commented on pull request #5596:
URL: https://github.com/apache/incubator-tvm/pull/5596#issuecomment-635418845


   Thanks to @jroesch :)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] dhruvaray commented on a change in pull request #5447: [TOPI,RELAY][TFLITE] Sparse to dense operator

2020-05-28 Thread GitBox


dhruvaray commented on a change in pull request #5447:
URL: https://github.com/apache/incubator-tvm/pull/5447#discussion_r431854463



##
File path: topi/include/topi/transform.h
##
@@ -1309,5 +1309,57 @@ inline Tensor one_hot(const Tensor& indices, const 
PrimExpr on_value, const Prim
   name, tag);
 }
 
+/*!
+ * \brief Get a dense tensor.
+ * \param sparse_indices sparse_indices[i] contains the complete index where 
sparse_values[i] will be placed.
+ * \param sparse_values is a 0-D or 1-D tensor. Values corresponding to each 
row of sparse_indices
+ * \param default_value is a 0-D tensor. Defaults to zero.
+ * \param output_shape is the shape of the dense output tensor
+ * \param name output tensor name.
+ * \param tag output tensor tag.
+ * \return Tensor of output_shape.
+ */
+inline Tensor sparse_to_dense(const Tensor& sparse_indices,
+  const Tensor& sparse_values,
+  const Tensor& default_value,
+  const Array& output_shape,
+  const std::string name = "T_sparse_to_dense",
+  const std::string tag = kInjective) {
+  CHECK(sparse_indices->dtype.is_int())
+<< "sparse_indices only accepts integer values";
+
+  CHECK_LE(sparse_indices->shape.size(), 3)
+<< "sparse_indices tensor should be 0D, 1D, or 2D only";
+
+  CHECK_LE(sparse_values->shape.size(), 2)
+<< "sparse_values tensor should be 0D or 1D only";
+
+  const auto rank_sparse_indices = 
static_cast(sparse_indices->shape.size());

Review comment:
   @siju-samuel: validate_indices is a runtime check on the index data passed 
in (no duplicate indices, and indices should be sorted), so there is no change 
in behavior. Hence it is not handled here.
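   
   For clarity, a hypothetical sketch of what such a validate_indices runtime 
check would do (this helper is illustrative only; it is not part of the PR):
   
   ```
   import numpy as np

   def validate_indices(sparse_indices):
       # Indices must be lexicographically sorted and contain no duplicates.
       idx = np.atleast_2d(np.asarray(sparse_indices))
       rows = [tuple(r) for r in idx]
       assert rows == sorted(rows), "sparse_indices must be sorted"
       assert len(set(rows)) == len(rows), "sparse_indices must not repeat"

   validate_indices([[0, 1], [1, 0], [2, 3]])   # passes
   ```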





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] dhruvaray commented on a change in pull request #5447: [TOPI,RELAY][TFLITE] Sparse to dense operator

2020-05-28 Thread GitBox


dhruvaray commented on a change in pull request #5447:
URL: https://github.com/apache/incubator-tvm/pull/5447#discussion_r431854463



##
File path: topi/include/topi/transform.h
##
@@ -1309,5 +1309,57 @@ inline Tensor one_hot(const Tensor& indices, const 
PrimExpr on_value, const Prim
   name, tag);
 }
 
+/*!
+ * \brief Get a dense tensor.
+ * \param sparse_indices sparse_indices[i] contains the complete index where 
sparse_values[i] will be placed.
+ * \param sparse_values is a 0-D or 1-D tensor. Values corresponding to each 
row of sparse_indices
+ * \param default_value is a 0-D tensor. Defaults to zero.
+ * \param output_shape is the shape of the dense output tensor
+ * \param name output tensor name.
+ * \param tag output tensor tag.
+ * \return Tensor of output_shape.
+ */
+inline Tensor sparse_to_dense(const Tensor& sparse_indices,
+  const Tensor& sparse_values,
+  const Tensor& default_value,
+  const Array& output_shape,
+  const std::string name = "T_sparse_to_dense",
+  const std::string tag = kInjective) {
+  CHECK(sparse_indices->dtype.is_int())
+<< "sparse_indices only accepts integer values";
+
+  CHECK_LE(sparse_indices->shape.size(), 3)
+<< "sparse_indices tensor should be 0D, 1D, or 2D only";
+
+  CHECK_LE(sparse_values->shape.size(), 2)
+<< "sparse_values tensor should be 0D or 1D only";
+
+  const auto rank_sparse_indices = 
static_cast(sparse_indices->shape.size());

Review comment:
   @siju-samuel: validate_indices is a runtime check on the data passed in 
(no duplicate indices, and indices should be sorted), so there is no change 
in behavior. Hence it is not handled here.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] poincarelee commented on issue #4371: ImportError: cannot import name 'bilinear_sample_nchw'

2020-05-28 Thread GitBox


poincarelee commented on issue #4371:
URL: https://github.com/apache/incubator-tvm/issues/4371#issuecomment-635215942


   Thanks a lot, this problem was indeed solved. I retried and it worked. 
Thanks again.
   
   At 2020-05-25 09:43:09, "Junru Shao"  wrote:
   
   The original issue was closed because it had been fixed. Discussion thread: 
https://discuss.tvm.ai/t/cannot-import-name-bilinear-sample-nchw/2510/30?u=junrushao1994.
   
   @poincarelee if it is still a problem for you, this would be super helpful 
for the community if you ask on the discussion forum. Thanks!
   
   —
   You are receiving this because you were mentioned.
   Reply to this email directly, view it on GitHub, or unsubscribe.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jroesch merged pull request #5596: [TIR][REFACTOR] std::string -> String Migration in TIR nodes

2020-05-28 Thread GitBox


jroesch merged pull request #5596:
URL: https://github.com/apache/incubator-tvm/pull/5596


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated: [TIR][REFACTOR] std::string -> String Migration in TIR nodes (#5596)

2020-05-28 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new a072da0  [TIR][REFACTOR] std::string -> String Migration in TIR nodes 
(#5596)
a072da0 is described below

commit a072da0588c542757d2815832b7f010f530b2428
Author: Neo Chien 
AuthorDate: Thu May 28 16:56:06 2020 +0800

[TIR][REFACTOR] std::string -> String Migration in TIR nodes (#5596)

* [TIR][REFACTOR] std::string -> String Migration for Var node and SizeVar 
Node

* update json_compact.py
---
 include/tvm/tir/stmt.h  |  2 +-
 include/tvm/tir/var.h   | 12 ++--
 python/tvm/ir/json_compact.py   |  4 ++--
 src/printer/tir_text_printer.cc |  2 +-
 src/tir/ir/data_layout.cc   |  6 +++---
 src/tir/ir/expr.cc  | 15 +++
 6 files changed, 20 insertions(+), 21 deletions(-)

diff --git a/include/tvm/tir/stmt.h b/include/tvm/tir/stmt.h
index e1fef55..bbc37fe 100644
--- a/include/tvm/tir/stmt.h
+++ b/include/tvm/tir/stmt.h
@@ -20,7 +20,7 @@
  * \file tvm/tir/stmt.h
  * \brief TIR statements.
  */
-// Acknowledgement: Mnay low-level stmts originate from Halide.
+// Acknowledgement: Many low-level stmts originate from Halide.
 #ifndef TVM_TIR_STMT_H_
 #define TVM_TIR_STMT_H_
 
diff --git a/include/tvm/tir/var.h b/include/tvm/tir/var.h
index a89c665..4db462d 100644
--- a/include/tvm/tir/var.h
+++ b/include/tvm/tir/var.h
@@ -50,7 +50,7 @@ class VarNode : public PrimExprNode {
* \brief The hint to the variable name.
* \note Each variable is uniquely identified by its address.
*/
-  std::string name_hint;
+  String name_hint;
   /*!
* \brief type annotaion of the variable.
*
@@ -92,19 +92,19 @@ class Var : public PrimExpr {
* \param name_hint variable name
* \param dtype data type
*/
-  TVM_DLL explicit Var(std::string name_hint = "v", DataType dtype = 
DataType::Int(32));
+  TVM_DLL explicit Var(String name_hint = "v", DataType dtype = 
DataType::Int(32));
   /*!
* \brief Constructor which provides a more detailed type annotation.
* \param name_hint variable name.
* \param type_annotation The type annotation.
*/
-  TVM_DLL explicit Var(std::string name_hint, Type type_annotation);
+  TVM_DLL explicit Var(String name_hint, Type type_annotation);
   /*!
* \brief Make a new copy of var with same type, append suffix
* \param suffix The suffix to be appended.
* \return the new Var copy
*/
-  TVM_DLL Var copy_with_suffix(const std::string& suffix) const;
+  TVM_DLL Var copy_with_suffix(const String& suffix) const;
   /*!
* \brief Get pointer to the internal value.
* \return the corresponding Variable.
@@ -138,7 +138,7 @@ class SizeVar : public Var {
* \param name_hint variable name
* \param t data type
*/
-  TVM_DLL explicit SizeVar(std::string name_hint = "s", DataType t = 
DataType::Int(32));
+  TVM_DLL explicit SizeVar(String name_hint = "s", DataType t = 
DataType::Int(32));
   /*!
* \brief Get pointer to the internal value.
* \return the corresponding Variable.
@@ -178,7 +178,7 @@ enum IterVarType : int {
   /*!
* \brief The IterVar itself is a thread-index
*  of a fixed thread launching group.
-   *  Note that this is already assumed to be paralellized.
+   *  Note that this is already assumed to be parallelized.
*
*  Disallow: split/fuse/vectorize/parallel
*/
diff --git a/python/tvm/ir/json_compact.py b/python/tvm/ir/json_compact.py
index 2abfd81..9d90685 100644
--- a/python/tvm/ir/json_compact.py
+++ b/python/tvm/ir/json_compact.py
@@ -130,8 +130,8 @@ def create_updater_06_to_07():
 "relay.ModulePass": _rename("transform.ModulePass"),
 "relay.Sequential": _rename("transform.Sequential"),
 # TIR
-"Variable": _update_tir_var("tir.Var"),
-"SizeVar": _update_tir_var("tir.SizeVar"),
+"Variable": [_update_tir_var("tir.Var"), _update_from_std_str("name")],
+"SizeVar": [_update_tir_var("tir.SizeVar"), 
_update_from_std_str("name")],
 }
 return create_updater(node_map, "0.6", "0.7")
 
diff --git a/src/printer/tir_text_printer.cc b/src/printer/tir_text_printer.cc
index 0bcc148..4d22cbb 100644
--- a/src/printer/tir_text_printer.cc
+++ b/src/printer/tir_text_printer.cc
@@ -567,7 +567,7 @@ Doc TIRTextPrinter::AllocVar(const Var& var) {
   if (it != memo_var_.end()) {
 return it->second;
   }
-  std::string name = var->name_hint;
+  std::string name = var->name_hint.operator std::string();
   if (name.length() == 0 || !std::isalpha(name[0])) {
 name = "v" + name;
   }
diff --git a/src/tir/ir/data_layout.cc b/src/tir/ir/data_layout.cc
index 23e13ed..6c38982 100644
--- a/src/tir/ir/data_layout.cc
+++ b/src/tir/ir/data_layout.cc
@@ -82,7 +82,7 @@ Layout::Layout(const Array& axes) {
 }
 

[GitHub] [incubator-tvm] jroesch commented on pull request #5596: [TIR][REFACTOR] std::string -> String Migration in TIR nodes

2020-05-28 Thread GitBox


jroesch commented on pull request #5596:
URL: https://github.com/apache/incubator-tvm/pull/5596#issuecomment-635212257


   Thanks! LGTM 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] maheshambule commented on a change in pull request #5052: [TARGET] ONNX codegen

2020-05-28 Thread GitBox


maheshambule commented on a change in pull request #5052:
URL: https://github.com/apache/incubator-tvm/pull/5052#discussion_r431653020



##
File path: python/tvm/contrib/target/onnx.py
##
@@ -0,0 +1,755 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, import-self, len-as-condition, 
unused-argument, too-many-lines, redefined-builtin
+"""Relay to ONNX codegen """
+
+import os
+import struct
+import numpy
+import onnx
+import onnx.utils
+from onnx import numpy_helper, OperatorSetIdProto, defs
+import tvm
+import tvm._ffi
+from tvm.autotvm.graph_tuner.utils.traverse_graph import _expr2graph_impl
+from tvm.relay.expr import Call, TupleGetItem, Var, Constant, Tuple
+
+ONNX_OPSET_VERSONS_SUPPORTED = [11]
+
+
+def tvm_array_to_list(arr):
+return tuple(x.value for x in arr)
+
+
+def get_onnx_version():
+return onnx.__version__
+
+
+def add_input(data, name, model_container):
+dtype = onnx.mapping.NP_TYPE_TO_TENSOR_TYPE[data.dtype]
+tensor_value_info = onnx.helper.make_tensor_value_info(name, dtype, 
shape=data.shape)
+model_container.add_inputs([tensor_value_info])
+data_tensor = numpy_helper.from_array(data, name)
+model_container.add_initializers([data_tensor])
+
+
+class OpConverter(object):
+""" Operator converter Base Class.
+"""
+
+@classmethod
+def convert_attributes(cls, attrs):
+"""convert Relay attributes to ONNX attributes.
+   The derived classes should implement this method
+   if attributes are required by the operator
+   otherwise by default no attributes are passed
+"""
+return {}
+
+@classmethod
+def convert(cls, node, model_container, node_list):
+attrs = cls.convert_attributes(node['node'].attrs)
+node = onnx.helper.make_node(cls.__name__,
+ node['input_names'],
+ node['output_names'],
+ **attrs)
+model_container.add_nodes([node])
+
+
+def rename(op_name):
+""" This method creates dynamic operator of name op_name with empty 
attributes
+"""
+return type(op_name, (OpConverter,), {})
+
+
+class Reshape(object):
+""" Operator converter for Reshape.
+"""
+
+@classmethod
+def convert(cls, node, model_container, node_list):
+"""Converts Relay operator Reshape to ONNX operator.
+   Relay operator accepts shape as attribute but ONNX operator
+   accepts it as a input.
+"""
+
+shape = numpy.asarray([a.value for a in node['node'].attrs.newshape],
+  dtype=numpy.int64)
+input_name = 'shape{}'.format(node['output_names'][0])
+node = onnx.helper.make_node(cls.__name__, [node['input_names'][0], 
input_name],
+ node['output_names'])
+model_container.add_nodes([node])
+add_input(shape, input_name, model_container)
+
+
+class Conv(OpConverter):
+""" Operator converter for Conv.
+"""
+
+@classmethod
+def convert_attributes(cls, attrs):
+return {
+'group': attrs.get_int("groups"),
+'pads': attrs.get_int_tuple("padding"),
+'strides': attrs.get_int_tuple("strides"),
+'dilations': attrs.get_int_tuple("dilation"),
+'kernel_shape': attrs.get_int_tuple("kernel_size"),
+}
+
+
+class MaxPool(OpConverter):
+""" Operator converter for MaxPool.
+"""
+
+@classmethod
+def convert_attributes(cls, attrs):
+return {
+'pads': attrs.get_int_tuple("padding"),
+'strides': attrs.get_int_tuple("strides"),
+'kernel_shape': attrs.get_int_tuple("pool_size"),
+}
+
+
+class Transpose(OpConverter):
+""" Operator converter for Transpose.
+"""
+
+@classmethod
+def convert_attributes(cls, attrs):
+return {'perm': attrs.get_int_tuple("axes")} if attrs["axes"] else {}
+
+
+class MatMul(OpConverter):
+""" Operator converter for MatMul.
+"""
+
+@classmethod
+def convert(cls, node, model_container, node_list):
+output_name = 

[GitHub] [incubator-tvm] maheshambule commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


maheshambule commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r431651445



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2896,15 +2903,29 @@ def _parse_import_prerequisites(self, graph):
 """
 missing_operators = set()
 for node in graph.node:
+try:
+from tensorflow.python.framework import op_def_registry
+except ImportError as e:
+raise ImportError(
+"Unable to import tensorflow which is required 
{}".format(e))
+getOpDef = op_def_registry._registered_ops.get if 
hasattr(op_def_registry,\
+"_registered_ops") else op_def_registry.get
+op_def = getOpDef(node.op)
 if node.op == "Placeholder" or node.op == 'PlaceholderWithDefault':
 pass
 elif node.op == "Const":
 pass
+elif node.op in ["PartitionedCall", "StatefulPartitionedCall"]:

Review comment:
   @srkreddy1238, could you please elaborate more here? Do you want us to 
check if these ops are supported in different TF versions?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] maheshambule commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


maheshambule commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r431643670



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -3155,6 +3176,91 @@ def _convert_control_flow_operator(self, node, inputs, 
attrs, control_flow_node_
 
 return op
 
+def _partition_call_operator(self, inputs, attr):
+"""
+Convert the Relay Partition call ops into Relay Function calls and
+function definitions from Tensorflow graph library attribute to Relay 
global
+functions
+
+Parameters
+--
+node: TensorFlow graph node object.
+A TensorFlow graph node object.
+
+inputs : List[tvm.relay.Expr]
+List of input symbols.
+
+attrs : Dict[tvm.Attrs]
+Dict of operator attributes.
+
+Returns
+---
+op : tvm.relay.Expr
+Converted relay expression.
+"""
+
+try:
+from tensorflow.python.framework import function_def_to_graph
+except ImportError as e:
+raise ImportError(
+"Unable to import tensorflow which is required {}".format(e))
+
+main_graph_proto = self._main_graph_proto
+outer_graph_def = main_graph_proto._graph
+
+node_func_name = attr.get('f').name
+func = next((f for f in outer_graph_def.library.function
+ if f.signature.name == node_func_name), None)
+if func:
+devices = set(node.device for node in func.node_def)
+if len(devices) > 1:
+raise Exception("Found inconsistent Device assignment in the "\
+"Stateful Partitioned SubGraph. Rejecting "\
+"the subgraph ")
+# Convert function definition to graph
+func_input_shapes = func.attr["_input_shapes"].list.shape
+subgraph, _ = function_def_to_graph.\
+function_def_to_graph_def(func, func_input_shapes)
+
+# Computing subgraph's input shape dictionary
+subgraph_shape_dict, input_expr_dict = {}, {}
+for f_arg, input in zip(func.signature.input_arg, inputs):
+input_expr_dict[f_arg.name] = input
+subgraph_shape_dict[f_arg.name] = _infer_shape(input, 
main_graph_proto._mod)
+
+func_name = 'func_{}'.format(func.signature.name)
+try:
+global_func = main_graph_proto._mod[func_name]

Review comment:
   Yes, it is. We add the function definition to the main module, and if 
there are multiple calls to the same function, we don't need to add the 
definition again; we just need to add the function call node.
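   
   In other words, the pattern is "define the global function once, then only 
emit call nodes". A minimal sketch under that reading (names are illustrative, 
not taken from the PR):
   
   ```
   import tvm
   from tvm import relay

   def get_or_define(mod, name, make_func):
       # Add the definition on first use; reuse the GlobalVar afterwards.
       if name not in [gv.name_hint for gv in mod.get_global_vars()]:
           mod[relay.GlobalVar(name)] = make_func()
       return mod.get_global_var(name)

   mod = tvm.IRModule()
   x = relay.var("x", shape=(2, 2))
   gv = get_or_define(mod, "func_add_one",
                      lambda: relay.Function([x], x + relay.const(1.0)))
   call_site = gv(relay.var("y", shape=(2, 2)))   # later calls reuse `gv`
   ```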





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5688: [FRONTEND]Darknet support batch size for yolo

2020-05-28 Thread GitBox


siju-samuel opened a new pull request #5688:
URL: https://github.com/apache/incubator-tvm/pull/5688


   Fix the issue of batch size reported in 
https://discuss.tvm.ai/t/yolov3-tiny-batch-input-test-failed/6796
   
   @FrozenGene Please help to review this PR. TIA.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] deepakbabel23 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


deepakbabel23 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r431635377



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2896,15 +2903,29 @@ def _parse_import_prerequisites(self, graph):
 """
 missing_operators = set()
 for node in graph.node:
+try:
+from tensorflow.python.framework import op_def_registry
+except ImportError as e:
+raise ImportError(
+"Unable to import tensorflow which is required 
{}".format(e))
+getOpDef = op_def_registry._registered_ops.get if 
hasattr(op_def_registry,\
+"_registered_ops") else op_def_registry.get
+op_def = getOpDef(node.op)
 if node.op == "Placeholder" or node.op == 'PlaceholderWithDefault':
 pass
 elif node.op == "Const":
 pass
+elif node.op in ["PartitionedCall", "StatefulPartitionedCall"]:
+pass
 else:
 if any([node.op in t for t in [_identity_list, _convert_map,
_convert_map_rnn,
_control_flow_nodes]]):
 pass
+elif op_def is not None and op_def.is_stateful:
+raise Exception("Found a stateful operator in this graph 
{}. "\

Review comment:
   Yes, sure. It will support the aforesaid purpose. You can review it once 
I have handled this in the new implementation.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] deepakbabel23 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


deepakbabel23 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r431635461



##
File path: tests/python/frontend/tensorflow/test_forward.py
##
@@ -3179,10 +3183,342 @@ def test_forward_isfinite():
 _verify_infiniteness_ops(tf.is_finite, "isfinite")
 
 
+def _test_spop_placeholder_one():
+with tf.Graph().as_default():
+
+@function.Defun(*[tf.int32]*2)
+def Forward(x,y):
+print(x.name)
+print(y.name)
+b = tf.add(x, y)
+return b
+pl1 = tf.placeholder(tf.int32,name="pl1")
+pl2 = tf.placeholder(tf.int32,name="pl2")
+pl3 = tf.placeholder(tf.int32, name="pl3")
+data = np.array([[-1, 1], [2, -2]], dtype=np.int32)
+data2 = np.array([[-2, 3], [4, -6]], dtype=np.int32)
+data3 = np.array([[-2, 3], [4, -6]], dtype=np.int32)
+z1 = gen_functional_ops.StatefulPartitionedCall(args=[pl1,pl2], 
Tout=[tf.int32],f=Forward)
+z2 = z1 + pl3
+compare_tf_with_tvm([data, data2, data3], ['pl1:0', 'pl2:0', 'pl3:0'],
+['StatefulPartitionedCall:0',z2.name],  mode='vm', 
init_global_variables=True)
+
+
+def _test_spop_placeholder_two():
+with tf.Graph().as_default():
+data = np.ones([1], dtype=int).astype(np.int32)
+dataVar = tf.Variable(data, shape=data.shape)
+pl1 = 
array_ops.placeholder_with_default(dataVar,shape=data.shape,name="pl1")
+tpl = tf.convert_to_tensor(pl1, dtype=tf.int32)
+
+@function.Defun(*[tf.int32])
+def pl_with_default(pl):
+return tf.expand_dims(tf.multiply(pl, pl), 0)
+
+z = gen_functional_ops.StatefulPartitionedCall(args=[tpl], 
Tout=[tf.int32], f=pl_with_default)
+compare_tf_with_tvm(data, ['pl1:0'], 'StatefulPartitionedCall:0', 
mode='vm', init_global_variables=True)
+
+
+def _test_spop_placeholder_three():
+with tf.Graph().as_default():
+t1 = tf.placeholder(tf.int32, (3, 3, 3), "t1")
+t1_data = np.arange(27, dtype=np.int32).reshape((3, 3, 3))
+t2 = tf.placeholder(tf.int32, (3, 3, 3), "t2")
+t2_data = np.arange(27, dtype=np.int32).reshape((3, 3, 3))
+
+@tf.function
+def add(x, y):
+return tf.add(x, y, "add_t1_t2")
+
+t3 = add(t1, t2)
+compare_tf_with_tvm([t1_data, t2_data], ['t1:0', 't2:0'], [t3.name], 
mode='vm', init_global_variables=True)
+
+
+def _test_spop_placeholder_four():
+with tf.Graph().as_default():
+t1_data = np.array([[-1, 1, 3], [2, -2, 4], [2, -3, 14]], 
dtype=np.int32)
+t2_data = np.array([[-2, 1, 2], [12, -2, 14], [12, -3, 4]], 
dtype=np.int32)
+t1 = tf.placeholder(tf.int32, name="t1")
+t2 = tf.placeholder(tf.int32, name="t2")
+
+@tf.function
+def add(x, y):
+return tf.add(x, y, "add_t1_t2")
+
+t3 = add(t1, t2)
+compare_tf_with_tvm([t1_data, t2_data], ['t1:0', 't2:0'], [t3.name], 
mode='vm', init_global_variables=True)
+
+
+def _test_spop_function_invocation_basic():
+with tf.Graph().as_default():
+
+def fun1(a):
+return tf.multiply(a,a)
+
+def fun2(b):
+return tf.multiply(b,10)
+
+@tf.function
+def fun3(x,y):
+x = fun2(x)
+y = fun1(y)
+z = tf.add(x,y)
+return z
+
+t3 = fun3(tf.constant(10.5), tf.constant(20.4))
+
+compare_tf_with_tvm([], [], [t3.name], mode='vm', 
init_global_variables=True)
+
+
+def _test_spop_function_invocation_nested():
+with tf.Graph().as_default():
+t1 = tf.placeholder(tf.int32, (3, 3, 3), name="t1")
+t1_data = np.arange(27, dtype=np.int32).reshape((3, 3, 3))
+t2 = tf.placeholder(tf.int32, name="t2")
+t2_data = np.arange(27, dtype=np.int32).reshape((3, 3, 3))
+
+@tf.function
+def myfunc(x, y):
+return tf.add(x, y, "myfunc")
+
+@tf.function
+def myfunc2(x, y):
+z = myfunc(x, y)
+l = myfunc(z, y)
+m = myfunc(l,z)
+return tf.add(l, m, "myfunc2")
+
+res1 = myfunc(t1, t2)
+res2 = myfunc2(res1, t1)
+
+compare_tf_with_tvm([t1_data, t2_data], ['t1:0', 't2:0'], [res2.name], 
mode='vm', init_global_variables=True)
+
+
+def _test_spop_function_invocation_no_autograph():
+with tf.Graph().as_default():
+
+@tf.function(autograph=False)
+def fun1(a):
+return tf.multiply(a,a)
+
+@tf.function(autograph=False)
+def fun2(b):
+return tf.multiply(b,10)
+
+@tf.function
+def fun3(x,y):
+x = fun2(x)
+y = fun1(y)
+z = tf.add(x,y)
+return z
+
+t3 = fun3(tf.constant(10.5), tf.constant(20.4))
+
+compare_tf_with_tvm([], [], [t3.name], mode='vm', 
init_global_variables=True)
+
+
+def 

[GitHub] [incubator-tvm] srkreddy1238 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


srkreddy1238 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r431629053



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2896,15 +2903,29 @@ def _parse_import_prerequisites(self, graph):
 """
 missing_operators = set()
 for node in graph.node:
+try:
+from tensorflow.python.framework import op_def_registry
+except ImportError as e:
+raise ImportError(
+"Unable to import tensorflow which is required 
{}".format(e))
+getOpDef = op_def_registry._registered_ops.get if 
hasattr(op_def_registry,\
+"_registered_ops") else op_def_registry.get
+op_def = getOpDef(node.op)
 if node.op == "Placeholder" or node.op == 'PlaceholderWithDefault':
 pass
 elif node.op == "Const":
 pass
+elif node.op in ["PartitionedCall", "StatefulPartitionedCall"]:
+pass
 else:
 if any([node.op in t for t in [_identity_list, _convert_map,
_convert_map_rnn,
_control_flow_nodes]]):
 pass
+elif op_def is not None and op_def.is_stateful:
+raise Exception("Found a stateful operator in this graph 
{}. "\

Review comment:
   The objective of this function is to give a consolidated list of 
unsupported ops.
   Any approach is fine as long as that purpose is met: show all unsupported 
ops rather than raising on just the first one.
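   
   A small sketch of that "collect, then report" behaviour (both arguments 
are placeholders for illustration, not code from the PR):
   
   ```
   def report_missing_operators(graph, op_is_convertible):
       # `graph` stands for a TF GraphDef; `op_is_convertible` is a
       # hypothetical predicate. Gather every unsupported op before failing.
       missing = sorted({node.op for node in graph.node
                         if not op_is_convertible(node.op)})
       if missing:
           raise NotImplementedError(
               "The following operators are not supported: {}".format(
                   ", ".join(missing)))
   ```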





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 opened a new pull request #5687: [Draft][Object] Unify StrMapNode and MapNode

2020-05-28 Thread GitBox


junrushao1994 opened a new pull request #5687:
URL: https://github.com/apache/incubator-tvm/pull/5687


   Steps
   
   - [x] Unify them in `src/`
   - [ ] Unify them in `include/`
   - [ ] Unify them in `python/` and `tests/cpp`
   - [ ] Fix potential bugs
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] deepakbabel23 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


deepakbabel23 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r431623431



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2896,15 +2903,29 @@ def _parse_import_prerequisites(self, graph):
 """
 missing_operators = set()
 for node in graph.node:
+try:
+from tensorflow.python.framework import op_def_registry

Review comment:
   Yes, that makes sense. Thanks Siva :)









[GitHub] [incubator-tvm] deepakbabel23 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


deepakbabel23 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r431623053



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2896,15 +2903,29 @@ def _parse_import_prerequisites(self, graph):
 """
 missing_operators = set()
 for node in graph.node:
+try:
+from tensorflow.python.framework import op_def_registry
+except ImportError as e:
+raise ImportError(
+"Unable to import tensorflow which is required 
{}".format(e))
+getOpDef = op_def_registry._registered_ops.get if 
hasattr(op_def_registry,\
+"_registered_ops") else op_def_registry.get
+op_def = getOpDef(node.op)
 if node.op == "Placeholder" or node.op == 'PlaceholderWithDefault':
 pass
 elif node.op == "Const":
 pass
+elif node.op in ["PartitionedCall", "StatefulPartitionedCall"]:
+pass
 else:
 if any([node.op in t for t in [_identity_list, _convert_map,
_convert_map_rnn,
_control_flow_nodes]]):
 pass
+elif op_def is not None and op_def.is_stateful:
+raise Exception("Found a stateful operator in this graph 
{}. "\

Review comment:
   @srkreddy1238 , sure, that's a good idea. Then I can create a list of stateful_partitioned_ops (e.g. StatefulPartitioned, PartitionedOp), just like _freezed_graph_pruned_op_list, to provide extended info to the user. Does that make sense?
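
   For illustration, such a list could mirror the style of _freezed_graph_pruned_op_list; the names below are hypothetical, not taken from the PR:

```python
# Hypothetical module-level list in the style of _freezed_graph_pruned_op_list;
# the name _partitioned_call_ops is illustrative only.
_partitioned_call_ops = ["PartitionedCall", "StatefulPartitionedCall"]

def _is_partitioned_call_op(op_name):
    """Return True for the (stateful) partitioned-call wrapper ops."""
    return op_name in _partitioned_call_ops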









[GitHub] [incubator-tvm] srkreddy1238 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


srkreddy1238 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r431620598



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2896,15 +2903,29 @@ def _parse_import_prerequisites(self, graph):
 """
 missing_operators = set()
 for node in graph.node:
+try:
+from tensorflow.python.framework import op_def_registry

Review comment:
   Hence we don't need this try. We could join it with all tf imports above :)
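
   A sketch of that suggestion, reusing the message text from the diff and assuming the frontend keeps a single TensorFlow import block (exact placement in the real file may differ):

```python
# Illustrative: perform the op_def_registry import once, next to the other
# TensorFlow imports, rather than inside the per-node loop.
try:
    import tensorflow as tf
    from tensorflow.python.framework import op_def_registry
except ImportError as e:
    raise ImportError("Unable to import tensorflow which is required {}".format(e))
```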









[GitHub] [incubator-tvm] deepakbabel23 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


deepakbabel23 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r431618598



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2896,15 +2903,29 @@ def _parse_import_prerequisites(self, graph):
 """
 missing_operators = set()
 for node in graph.node:
+try:
+from tensorflow.python.framework import op_def_registry

Review comment:
   @srkreddy1238 ,
   Thank you Siva. op_def_registry is part of both 1.x and 2.x versions. So there would not be an exception in this case.
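
   For reference, a sketch of the version-agnostic lookup that works on both TF 1.x (_registered_ops dict) and TF 2.x (get() function); illustrative, mirroring the shim in the diff:

```python
import tensorflow as tf  # ensures op registrations are loaded
from tensorflow.python.framework import op_def_registry

# TF 1.x exposes a _registered_ops dict; TF 2.x exposes get().
get_op_def = (op_def_registry._registered_ops.get
              if hasattr(op_def_registry, "_registered_ops")
              else op_def_registry.get)

print(get_op_def("Add"))  # OpDef for a registered op, None otherwise
```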









[GitHub] [incubator-tvm] lichun-wang commented on issue #5673: mxnet group conv not support very well by tvm

2020-05-28 Thread GitBox


lichun-wang commented on issue #5673:
URL: https://github.com/apache/incubator-tvm/issues/5673#issuecomment-635132928


   Sorry, I meant group=32, which costs 17 ms, not group=3.


