[GitHub] [incubator-tvm-vta] tmoreau89 commented on a change in pull request #5: Add c++ and python local deploy example

2020-04-30 Thread GitBox


tmoreau89 commented on a change in pull request #5:
URL: https://github.com/apache/incubator-tvm-vta/pull/5#discussion_r418425846



##
File path: apps/deploy/Makefile
##
@@ -0,0 +1,71 @@
+#licensed to the Apache Software Foundation (ASF) under one

Review comment:
   `#licensed` -> `# Licensed`

##
File path: apps/deploy/Makefile
##
@@ -0,0 +1,71 @@
+#licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# Makefile Example to deploy TVM modules.
+TVM_ROOT=$(shell cd ../../../../; pwd)
+CUR_DIR=$(shell pwd)
+DMLC_CORE=${TVM_ROOT}/3rdparty/dmlc-core
+
+TARGET := ${shell python3 ../../config/vta_config.py --target}
+
+
+VTA_LIB=vta
+ifeq (${TARGET}, sim)
+   VTA_LIB=vta_fsim
+endif
+
+#packages:

Review comment:
   cleanup commented out lines?

##
File path: apps/deploy/Makefile
##
@@ -0,0 +1,71 @@
+TVM_ROOT=$(shell cd ../../../../; pwd)

Review comment:
   We can use TVM_PATH and VTA_HW_PATH env vars, see https://docs.tvm.ai/vta/install.html

##
File path: apps/deploy/Makefile
##
@@ -0,0 +1,71 @@
+#packages:
+#  [ -z `dpkg -l | grep libboost-all-dev` ] && sudo apt-get install libboost-all-dev
+
+#.PHONY: packages
+
+#p2:
+#  [ -z `dpkg -l | grep libpng-dev` ] && sudo apt-get install libpng-dev
+#.PHONY: p2
+
+PKG_CFLAGS = -std=c++11 -O0 -g -fPIC\
+-I${TVM_ROOT}/include\
+-I${TVM_ROOT}/vta/include\
+-I${DMLC_CORE}/include\
+-I${TVM_ROOT}/3rdparty/dlpack/include\
+-I${TVM_ROOT}/3rdparty/vta-hw/include\
+-I${TVM_ROOT}/\
+
+PKG_LDFLAGS = -L${TVM_ROOT}/build  -L${CUR_DIR} -ldl -pthread -l${VTA_LIB} -ltvm_runtime
+
+.PHONY: clean all
+
+all:./build/deploy copylib
+
+./build/deploy: ./build/deploy.o ./build/model/lib.so
+   $(CXX) $(PKG_CFLAGS) -o $@  $^ $(PKG_LDFLAGS)
+   # Build rule for all in one TVM package library

Review comment:
   Move comment up

##
File path: apps/deploy/ReadME.md
##
@@ -0,0 +1,123 @@
+How to Deploy TVM-VTA Modules

[incubator-tvm-vta] branch master updated: [pynq_driver] fix device early return (#7)

2020-04-30 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-vta.git


The following commit(s) were added to refs/heads/master by this push:
 new 21937a0  [pynq_driver] fix device early return (#7)
21937a0 is described below

commit 21937a067fe0e831244766b41ae915c833ff15ba
Author: ZHANG Hao 
AuthorDate: Fri May 1 13:42:48 2020 +0800

[pynq_driver] fix device early return (#7)

Co-authored-by: Zhang Hao 
---
 src/pynq/pynq_driver.cc | 5 +
 1 file changed, 5 insertions(+)

diff --git a/src/pynq/pynq_driver.cc b/src/pynq/pynq_driver.cc
index a37bb4e..518b6c3 100644
--- a/src/pynq/pynq_driver.cc
+++ b/src/pynq/pynq_driver.cc
@@ -22,6 +22,7 @@
 
 #include 
 #include 
+#include <time.h>
 #include "pynq_driver.h"
 
 
@@ -126,6 +127,10 @@ class VTADevice {
 VTAWriteMappedReg(vta_compute_handle_, 0x0, VTA_AUTORESTART);
 VTAWriteMappedReg(vta_store_handle_, 0x0, VTA_AUTORESTART);
 
+// Allow device to respond
+struct timespec ts = { .tv_sec = 0, .tv_nsec = 1000 };
+nanosleep(&ts, &ts);
+
 // Loop until the VTA is done
 unsigned t, flag = 0;
 for (t = 0; t < wait_cycles; ++t) {



[incubator-tvm] branch master updated: Make "none" DataType explicit (#5491)

2020-04-30 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 12e737f  Make "none" DataType explicit (#5491)
12e737f is described below

commit 12e737f5289acb1e6ef2ad0aa590bf7b12c679b5
Author: Krzysztof Parzyszek 
AuthorDate: Fri May 1 00:06:06 2020 -0500

Make "none" DataType explicit (#5491)

* Make "none" DataType explicit

The None data type is created when converting an empty string to DataType.
Add functions to create it and recognize it. Convert it to the "void" LLVM
type in LLVM codegen.

* Rename "none" to "void"

* Map VoidType:Type -> Void:DataType in GetRuntimeDataType

* Map Void:DataType -> VoidType:Type in GetType
---
 include/tvm/runtime/data_type.h   | 20 +---
 src/target/llvm/codegen_llvm.cc   |  3 +++
 src/tir/ir/op.cc  |  7 ---
 tests/python/unittest/test_target_codegen_llvm.py | 12 
 4 files changed, 36 insertions(+), 6 deletions(-)

diff --git a/include/tvm/runtime/data_type.h b/include/tvm/runtime/data_type.h
index 44385d6..940818a 100644
--- a/include/tvm/runtime/data_type.h
+++ b/include/tvm/runtime/data_type.h
@@ -107,7 +107,7 @@ class DataType {
   }
   /*! \return whether type is a handle type. */
   bool is_handle() const {
-return code() == DataType::kHandle;
+return code() == DataType::kHandle && !is_void();
   }
   /*! \return whether type is a vector type. */
   bool is_vector() const {
@@ -117,6 +117,10 @@ class DataType {
   bool is_vector_bool() const {
 return is_vector() && bits() == 1;
   }
+  /*! \return whether type is a Void type. */
+  bool is_void() const {
+return code() == DataType::kHandle && bits() == 0 && lanes() == 0;
+  }
   /*!
* \brief Create a new data type by change lanes to a specified value.
* \param lanes The target number of lanes.
@@ -212,6 +216,13 @@ class DataType {
 return DataType(kHandle, bits, lanes);
   }
   /*!
+   * \brief Construct a Void type.
+   * \return The constructed data type.
+   */
+  static DataType Void() {
+return DataType(kHandle, 0, 0);
+  }
+  /*!
* \brief Get the corresponding type of TVMShapeIndex.
* \return The type of TVM shape index.
*/
@@ -335,6 +346,9 @@ inline std::ostream& operator<<(std::ostream& os, DLDataType t) {  // NOLINT(*)
   if (t.bits == 1 && t.lanes == 1 && t.code == kDLUInt) {
 os << "bool"; return os;
   }
+  if (DataType(t).is_void()) {
+return os << "void";
+  }
   if (t.code < kTVMCustomBegin) {
 os << TypeCode2Str(t.code);
   } else {
@@ -361,9 +375,9 @@ inline std::string DLDataType2String(DLDataType t) {
 
 inline DLDataType String2DLDataType(std::string s) {
   DLDataType t;
-  // handle None type
+  // handle void type
   if (s.length() == 0) {
-t.bits = 0; t.lanes = 0; t.code = kTVMOpaqueHandle;
+t = DataType::Void();
 return t;
   }
   t.bits = 32; t.lanes = 1;
diff --git a/src/target/llvm/codegen_llvm.cc b/src/target/llvm/codegen_llvm.cc
index 86cd5a3..74bda71 100644
--- a/src/target/llvm/codegen_llvm.cc
+++ b/src/target/llvm/codegen_llvm.cc
@@ -309,6 +309,9 @@ llvm::Type* CodeGenLLVM::DTypeToLLVMType(const DataType& dtype) const {
 CHECK_EQ(dtype.lanes(), 1);
 return t_void_p_;
   }
+  if (dtype.is_void()) {
+return t_void_;
+  }
   llvm::Type* etype = nullptr;
   if (dtype.is_int() || dtype.is_uint()) {
 etype = llvm::Type::getIntNTy(*ctx_, dtype.bits());
diff --git a/src/tir/ir/op.cc b/src/tir/ir/op.cc
index 4ad244f..6224321 100644
--- a/src/tir/ir/op.cc
+++ b/src/tir/ir/op.cc
@@ -38,6 +38,8 @@ runtime::DataType GetRuntimeDataType(const Type& type) {
 return n->dtype;
  } else if (type.as<PointerTypeNode>()) {
 return DataType::Handle();
+  } else if (IsVoidType(type)) {
+return DataType::Void();
   } else {
 LOG(FATAL) << "Type " << type
<< " does not have a corresponding runtime::DataType";
@@ -57,9 +59,8 @@ Type GetType(const PrimExpr& expr) {
   }
   // Default: return the type indicated by the dtype.
   runtime::DataType dtype = expr.dtype();
-  // These types already implies the specific type.
-  if (dtype.is_int() || dtype.is_uint() || dtype.is_float()) {
-return PrimType(dtype);
+  if (dtype.is_void()) {
+return VoidType();
   }
   return PrimType(dtype);
 }
diff --git a/tests/python/unittest/test_target_codegen_llvm.py b/tests/python/unittest/test_target_codegen_llvm.py
index a7e1e57..c659172 100644
--- a/tests/python/unittest/test_target_codegen_llvm.py
+++ b/tests/python/unittest/test_target_codegen_llvm.py
@@ -43,6 +43,18 @@ def test_llvm_intrin():
 fcode = tvm.build(mod, None, "llvm")
 
 
+def test_llvm_void_intrin():
+    ib = tvm.tir.ir_builder.create()
+    A = ib.pointer("uint8", name="A")
+    # Create an intrinsic 

[GitHub] [incubator-tvm] tqchen commented on pull request #5491: Make "none" DataType explicit

2020-04-30 Thread GitBox


tqchen commented on pull request #5491:
URL: https://github.com/apache/incubator-tvm/pull/5491#issuecomment-622250434


   Thanks @kparzysz-quic !



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] spectrometerHBH edited a comment on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


spectrometerHBH edited a comment on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-622238350


   To avoid making this page too long, I will edit the examples for reference in the top comment if I change the format.







[GitHub] [incubator-tvm] spectrometerHBH commented on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


spectrometerHBH commented on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-622238350


   To avoid making this page too long, I will edit the examples for reference in the top if I change the format.







[GitHub] [incubator-tvm] spectrometerHBH commented on a change in pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


spectrometerHBH commented on a change in pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#discussion_r418406985



##
File path: src/printer/tir_text_printer.cc
##
@@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file printer/tir_text_printer.cc
+ * \brief Printer to print out the IR text format
+ *that can be parsed by a parser.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "doc.h"
+#include "meta_data.h"
+
+namespace tvm {
+namespace tir {
+
+/*!
+ *  \brief Meta node collector
+ *  If we decide to put some node into meta, then all the sub-nodes inside
+ *  it need to be put in meta as well, since when parsing we need to know
+ *  whether two refs are the same
+ */
+class MetaCollector : public StmtExprVisitor {
+ public:
+  explicit MetaCollector(TextMetaDataContext* meta) : meta_(meta) {}
+
+  void Collect(const ObjectRef& n) {
+// these nodes can be print directly(StringLiteral or use identifier to 
identify)
+if (!n.defined() || n.as() || n.as() || 
n.as()
+|| n.as() || n.as() || n.as()) {
+  return;
+}
+if (n->IsInstance()) {
+  VisitStmt(Downcast(n));
+} else if (n->IsInstance()) {
+  VisitExpr(Downcast(n));
+}
+  }
+
+  void VisitStmt(const Stmt& n) override {
+meta_->GetMetaNode(n);
+StmtVisitor::VisitStmt(n);
+  }
+
+  void VisitExpr(const PrimExpr& n) override {
+meta_->GetMetaNode(n);
+ExprVisitor::VisitExpr(n);
+  }
+
+ private:
+  TextMetaDataContext* meta_;
+};
+
+class TIRTextPrinter : public StmtFunctor<Doc(const Stmt&)>,
+   public ExprFunctor<Doc(const PrimExpr&)>,
+   public TypeFunctor<Doc(const Type&)> {
+ public:
+  explicit TIRTextPrinter(bool show_meta) : show_meta_(show_meta), meta_collector_(&meta_) {}
+
+  /*! \brief Print the node */
+  Doc Print(const ObjectRef& node);
+
+ private:
+  /*! \brief whether show meta data */
+  bool show_meta_;
+  /*! \brief meta data context */
+  TextMetaDataContext meta_;
+  /*! \brief meta collector */
+  MetaCollector meta_collector_;
+  /*! \brief Map from Var to Doc */
+  std::unordered_map memo_var_;
+  /*! \brief Map from Buffer to Doc */
+  std::unordered_map memo_buf_;
+  /*! \brief name allocation map */
+  std::unordered_map name_alloc_map_;
+
+  Doc VisitExpr_(const IntImmNode* op) override;
+  Doc VisitExpr_(const FloatImmNode* op) override;
+  Doc VisitExpr_(const StringImmNode* op) override;
+  Doc VisitExpr_(const CastNode* op) override;
+  Doc VisitExpr_(const VarNode* op) override;
+  Doc VisitExpr_(const AddNode* op) override;
+  Doc VisitExpr_(const SubNode* op) override;
+  Doc VisitExpr_(const MulNode* op) override;
+  Doc VisitExpr_(const DivNode* op) override;
+  Doc VisitExpr_(const ModNode* op) override;
+  Doc VisitExpr_(const FloorDivNode* op) override;
+  Doc VisitExpr_(const FloorModNode* op) override;
+  Doc VisitExpr_(const MinNode* op) override;
+  Doc VisitExpr_(const MaxNode* op) override;
+  Doc VisitExpr_(const EQNode* op) override;
+  Doc VisitExpr_(const NENode* op) override;
+  Doc VisitExpr_(const LTNode* op) override;
+  Doc VisitExpr_(const LENode* op) override;
+  Doc VisitExpr_(const GTNode* op) override;
+  Doc VisitExpr_(const GENode* op) override;
+  Doc VisitExpr_(const AndNode* op) override;
+  Doc VisitExpr_(const OrNode* op) override;
+  Doc VisitExpr_(const NotNode* op) override;
+  Doc VisitExpr_(const SelectNode* op) override;
+  Doc VisitExpr_(const BufferLoadNode* op) override;
+  Doc VisitExpr_(const LoadNode* op) override;
+  Doc VisitExpr_(const RampNode* op) override;
+  Doc VisitExpr_(const BroadcastNode* op) override;
+  Doc VisitExpr_(const LetNode* op) override;
+  Doc VisitExpr_(const CallNode* op) override;
+  Doc VisitExpr_(const ShuffleNode* op) override;
+  Doc VisitExpr_(const ReduceNode* op) override;
+  Doc VisitExprDefault_(const Object* op) override;
+
+  Doc VisitStmt_(const LetStmtNode* op) override;
+  Doc VisitStmt_(const AttrStmtNode* op) override;
+  Doc VisitStmt_(const AssertStmtNode* op) override;
+  Doc VisitStmt_(const StoreNode* op) override;
+  Doc 

[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


junrushao1994 commented on a change in pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#discussion_r418401901



##
File path: src/printer/tir_text_printer.cc
##
@@ -0,0 +1,735 @@

[GitHub] [incubator-tvm] spectrometerHBH commented on a change in pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


spectrometerHBH commented on a change in pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#discussion_r418401170



##
File path: src/printer/tir_text_printer.cc
##
@@ -0,0 +1,735 @@

[GitHub] [incubator-tvm] spectrometerHBH commented on a change in pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


spectrometerHBH commented on a change in pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#discussion_r418400818



##
File path: src/printer/tir_text_printer.cc
##
@@ -0,0 +1,735 @@

[GitHub] [incubator-tvm] spectrometerHBH commented on a change in pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


spectrometerHBH commented on a change in pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#discussion_r418378401



##
File path: src/printer/tir_text_printer.cc
##
@@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file printer/tir_text_printer.cc
+ * \brief Printer to print out the IR text format
+ *that can be parsed by a parser.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "doc.h"
+#include "meta_data.h"
+
+namespace tvm {
+namespace tir {
+
+/*!
+ *  \brief Meta node collector
+ *  If we decide to put some node into meta, then all the sub-nodes inside
+ *  it need to be put in meta as well, since when parsing we need to know
+ *  whether two refs are the same
+ */
+class MetaCollector : public StmtExprVisitor {
+ public:
+  explicit MetaCollector(TextMetaDataContext* meta) : meta_(meta) {}
+
+  void Collect(const ObjectRef& n) {
+    // these nodes can be printed directly (StringLiteral, or use an identifier to identify)
+    if (!n.defined() || n.as<StringImmNode>() || n.as<StringObj>() || n.as<SizeVarNode>() ||
+        n.as<VarNode>() || n.as<BufferNode>() || n.as<IterVarNode>()) {
+      return;
+    }
+    if (n->IsInstance<StmtNode>()) {
+      VisitStmt(Downcast<Stmt>(n));
+    } else if (n->IsInstance<PrimExprNode>()) {
+      VisitExpr(Downcast<PrimExpr>(n));
+    }
+  }
+
+  void VisitStmt(const Stmt& n) override {
+    meta_->GetMetaNode(n);
+    StmtVisitor::VisitStmt(n);
+  }
+
+  void VisitExpr(const PrimExpr& n) override {
+    meta_->GetMetaNode(n);
+    ExprVisitor::VisitExpr(n);
+  }
+
+ private:
+  TextMetaDataContext* meta_;
+};
+
+class TIRTextPrinter : public StmtFunctor<Doc(const Stmt&)>,
+                       public ExprFunctor<Doc(const PrimExpr&)>,
+                       public TypeFunctor<Doc(const Type&)> {
+ public:
+  explicit TIRTextPrinter(bool show_meta) : show_meta_(show_meta), meta_collector_(&meta_) {}
+
+  /*! \brief Print the node */
+  Doc Print(const ObjectRef& node);
+
+ private:
+  /*! \brief whether show meta data */
+  bool show_meta_;
+  /*! \brief meta data context */
+  TextMetaDataContext meta_;
+  /*! \brief meta collector */
+  MetaCollector meta_collector_;
+  /*! \brief Map from Var to Doc */
+  std::unordered_map<Var, Doc, ObjectPtrHash, ObjectPtrEqual> memo_var_;
+  /*! \brief Map from Buffer to Doc */
+  std::unordered_map<Buffer, Doc, ObjectPtrHash, ObjectPtrEqual> memo_buf_;
+  /*! \brief name allocation map */
+  std::unordered_map<std::string, int> name_alloc_map_;
+
+  Doc VisitExpr_(const IntImmNode* op) override;
+  Doc VisitExpr_(const FloatImmNode* op) override;
+  Doc VisitExpr_(const StringImmNode* op) override;
+  Doc VisitExpr_(const CastNode* op) override;
+  Doc VisitExpr_(const VarNode* op) override;
+  Doc VisitExpr_(const AddNode* op) override;
+  Doc VisitExpr_(const SubNode* op) override;
+  Doc VisitExpr_(const MulNode* op) override;
+  Doc VisitExpr_(const DivNode* op) override;
+  Doc VisitExpr_(const ModNode* op) override;
+  Doc VisitExpr_(const FloorDivNode* op) override;
+  Doc VisitExpr_(const FloorModNode* op) override;
+  Doc VisitExpr_(const MinNode* op) override;
+  Doc VisitExpr_(const MaxNode* op) override;
+  Doc VisitExpr_(const EQNode* op) override;
+  Doc VisitExpr_(const NENode* op) override;
+  Doc VisitExpr_(const LTNode* op) override;
+  Doc VisitExpr_(const LENode* op) override;
+  Doc VisitExpr_(const GTNode* op) override;
+  Doc VisitExpr_(const GENode* op) override;
+  Doc VisitExpr_(const AndNode* op) override;
+  Doc VisitExpr_(const OrNode* op) override;
+  Doc VisitExpr_(const NotNode* op) override;
+  Doc VisitExpr_(const SelectNode* op) override;
+  Doc VisitExpr_(const BufferLoadNode* op) override;
+  Doc VisitExpr_(const LoadNode* op) override;
+  Doc VisitExpr_(const RampNode* op) override;
+  Doc VisitExpr_(const BroadcastNode* op) override;
+  Doc VisitExpr_(const LetNode* op) override;
+  Doc VisitExpr_(const CallNode* op) override;
+  Doc VisitExpr_(const ShuffleNode* op) override;
+  Doc VisitExpr_(const ReduceNode* op) override;
+  Doc VisitExprDefault_(const Object* op) override;
+
+  Doc VisitStmt_(const LetStmtNode* op) override;
+  Doc VisitStmt_(const AttrStmtNode* op) override;
+  Doc VisitStmt_(const AssertStmtNode* op) override;
+  Doc VisitStmt_(const StoreNode* op) override;
+  Doc 

[GitHub] [incubator-tvm] spectrometerHBH edited a comment on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


spectrometerHBH edited a comment on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-622203710


   > Some comments:
   > 
   > 1. There is more than one space before the left brace in the allocation line
   >```
   >allocate(B.local, float32, [64])  {
   >```
   > 2. Can we use the same rule for the allocation stmt as the one for attr? The
allocation stmt now brings extra indentation
   > 3. ```
   > attr [IterVar(blockIdx.z: int32, [(nullptr)], "ThreadIndex", 
"blockIdx.z")] "thread_extent" = 196;
   >```
   >
   >
   >It is strange to print `nullptr` here especially in square brackets. 
Perhaps we can use `IterVar(blockIdx.z: int32, (nullptr), "ThreadIndex", 
"blockIdx.z")]` or `IterVar(blockIdx.z: int32, , "ThreadIndex", "blockIdx.z")]`
   > 4. Considering future parsing use, we must print the dtype for every const
number. But we may use some shorthand for common dtypes, e.g. `2f` for float32,
`2h` for float16 (half), and a bare `2` for int32 (since most integer numbers in a
schedule are int32). But still, keep the complete form for every type, e.g.
`int8(2)`, `float64(2)` (or maybe `fp64(2)`); `float32(2)` is legal as well.
   
   1. fixed
   2. fixed. But here we implicitly assume that `Allocate` and `Attr` will have 
at least one child. Otherwise, for such a scenario
   ```c++
   attr...;
   attr...;
   attr...;
   for...;
   ```
   We cannot determine whether it is `attr|attr|attr|for` or
`attr(attr)|attr|for` or `attr(attr)|attr(for)` or `attr(attr(attr))|for` or
`attr(attr(attr(for)))`
   
   3. fixed



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


junrushao1994 commented on a change in pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#discussion_r418371862



##
File path: src/printer/tir_text_printer.cc
##

[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


junrushao1994 commented on a change in pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#discussion_r418371548



##
File path: src/printer/tir_text_printer.cc
##

[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


junrushao1994 commented on a change in pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#discussion_r418370164



##
File path: src/printer/tir_text_printer.cc
##

[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


junrushao1994 commented on a change in pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#discussion_r418368333



##
File path: src/printer/tir_text_printer.cc
##

[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


junrushao1994 commented on a change in pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#discussion_r418368333



##
File path: src/printer/tir_text_printer.cc
##
@@ -0,0 +1,735 @@

[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


junrushao1994 commented on a change in pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#discussion_r418367759



##
File path: src/printer/tir_text_printer.cc
##
@@ -0,0 +1,735 @@


[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


junrushao1994 commented on a change in pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#discussion_r418365372



##
File path: src/printer/tir_text_printer.cc
##
@@ -0,0 +1,735 @@

[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


junrushao1994 commented on a change in pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#discussion_r418364252



##
File path: src/printer/tir_text_printer.cc
##
@@ -0,0 +1,735 @@

[GitHub] [incubator-tvm] lixiaoquan commented on a change in pull request #5429: [RELAY][TF] Support symbolic newshape for Reshape

2020-04-30 Thread GitBox


lixiaoquan commented on a change in pull request #5429:
URL: https://github.com/apache/incubator-tvm/pull/5429#discussion_r418341379



##
File path: include/tvm/relay/op_attr_types.h
##
@@ -81,10 +81,16 @@ using TOpIsStateful = bool;
  */
 using TNonComputational = bool;
 
+enum ShapeDependantKind {
+  kShapeDependantShape = 0,
+  kShapeDependantData = 1,
+  kShapeDependantBoth = 2,

Review comment:
   @icemelon9 This is solved, could you please take another look?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (9bbf58a -> 3aa103e)

2020-04-30 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 9bbf58a  Removing older Object detection TFlite test (#5477)
 add 3aa103e  [IR] Initial stab at std::string->String upgrade (#5438)

No new revisions were added by this update.

Summary of changes:
 include/tvm/ir/span.h |  4 ++--
 include/tvm/ir/type.h |  4 ++--
 python/tvm/ir/json_compact.py | 27 ---
 src/ir/span.cc| 18 +++---
 src/ir/type.cc|  4 ++--
 5 files changed, 41 insertions(+), 16 deletions(-)



[GitHub] [incubator-tvm] comaniac commented on pull request #5493: [REFACTOR][BYOC] Non recursive partitioning

2020-04-30 Thread GitBox


comaniac commented on pull request #5493:
URL: https://github.com/apache/incubator-tvm/pull/5493#issuecomment-622139182


   Ah, I think that's because I manually ran clang-format on the file. We 
should definitely build a style checker into CI.







[GitHub] [incubator-tvm] zhiics commented on pull request #5493: [REFACTOR][BYOC] Non recursive partitioning

2020-04-30 Thread GitBox


zhiics commented on pull request #5493:
URL: https://github.com/apache/incubator-tvm/pull/5493#issuecomment-622129716


   @mbrookhart please take a look at the mixed mutator pattern. BTW, we will 
still need to refactor the infertype pass, as it is the most frequently used 
pass.
   
   @comaniac @masahi @mbaret @manupa-arm @trevor-m please take a look.







[GitHub] [incubator-tvm] zhiics opened a new pull request #5493: [REFACTOR][BYOC] Non recursive partitioning

2020-04-30 Thread GitBox


zhiics opened a new pull request #5493:
URL: https://github.com/apache/incubator-tvm/pull/5493


   This PR refactors the partitioning pass by using a non-recursive mutator. It 
also removes the unnecessary mutators, as we only need to look at begin/end 
annotations, which are definitely wrapped in call nodes. In addition, a metadata 
struct is used to maintain the intermediate data needed for partitioning.
   
   
   







[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5467: [Relay]Improve Shape Func handling for Tuple inputs

2020-04-30 Thread GitBox


kevinthesun commented on a change in pull request #5467:
URL: https://github.com/apache/incubator-tvm/pull/5467#discussion_r418284406



##
File path: src/relay/op/memory/memory.cc
##
@@ -360,12 +360,26 @@ bool ShapeFuncRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
   auto tuple = TupleType(func_type->arg_types);
   auto in_types = FlattenTupleType(tuple);
   auto out_types = FlattenTupleType(func_type->ret_type);
+  int num_types = 0;

Review comment:
   The problem here is that we need to restore is_input so that it 
corresponds to the flattened input types. However, is_input is created in the 
memory alloc pass in an already-flattened pattern, where a tuple input gets 
just a single number instead of a tuple of numbers. As a result we cannot use 
an approach similar to ```FlattenTupleType```. This also makes handling a 
nested tuple as input more complicated.









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5467: [Relay]Improve Shape Func handling for Tuple inputs

2020-04-30 Thread GitBox


kevinthesun commented on a change in pull request #5467:
URL: https://github.com/apache/incubator-tvm/pull/5467#discussion_r418284406



##
File path: src/relay/op/memory/memory.cc
##
@@ -360,12 +360,26 @@ bool ShapeFuncRel(const Array<Type>& types, int 
num_inputs, const Attrs& attrs,
   auto tuple = TupleType(func_type->arg_types);
   auto in_types = FlattenTupleType(tuple);
   auto out_types = FlattenTupleType(func_type->ret_type);
+  int num_types = 0;

Review comment:
   The problem here is that we need to restore is_input so that it 
corresponds to the flattened input types. However, is_input is created in the 
memory alloc pass in an already-flattened form, where a tuple input gets a 
single number instead of a tuple of numbers. As a result we cannot reuse 
```FlattenTupleType``` in a similar way. This also makes handling nested 
tuples as inputs more complicated.









[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5467: [Relay]Improve Shape Func handling for Tuple inputs

2020-04-30 Thread GitBox


jroesch commented on a change in pull request #5467:
URL: https://github.com/apache/incubator-tvm/pull/5467#discussion_r418270872



##
File path: src/relay/op/memory/memory.cc
##
@@ -360,12 +360,26 @@ bool ShapeFuncRel(const Array<Type>& types, int 
num_inputs, const Attrs& attrs,
   auto tuple = TupleType(func_type->arg_types);
   auto in_types = FlattenTupleType(tuple);
   auto out_types = FlattenTupleType(func_type->ret_type);
+  int num_types = 0;

Review comment:
   Can you use the `FlattenTupleType` helper instead of manually processing 
them like this? This won't work for nesting and is very close to the same 
tuple-processing code written everywhere else. 









[GitHub] [incubator-tvm] kparzysz-quic commented on a change in pull request #5491: Make "none" DataType explicit

2020-04-30 Thread GitBox


kparzysz-quic commented on a change in pull request #5491:
URL: https://github.com/apache/incubator-tvm/pull/5491#discussion_r418270642



##
File path: include/tvm/runtime/data_type.h
##
@@ -211,6 +215,13 @@ class DataType {
   static DataType Handle(int bits = 64, int lanes = 1) {
     return DataType(kHandle, bits, lanes);
   }
+  /*!
+   * \brief Construct a None type.
+   * \return The constructed data type.
+   */
+  static DataType None() {

Review comment:
   Ah, of course.  I didn't notice that `GetType` returned `Type`.









[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5491: Make "none" DataType explicit

2020-04-30 Thread GitBox


tqchen commented on a change in pull request #5491:
URL: https://github.com/apache/incubator-tvm/pull/5491#discussion_r418268635



##
File path: include/tvm/runtime/data_type.h
##
@@ -211,6 +215,13 @@ class DataType {
   static DataType Handle(int bits = 64, int lanes = 1) {
     return DataType(kHandle, bits, lanes);
   }
+  /*!
+   * \brief Construct a None type.
+   * \return The constructed data type.
+   */
+  static DataType None() {

Review comment:
   It would be great to have a single VoidType, so that 
GetType(DataType::Void()) returns VoidType() and GetRuntimeType(VoidType()) 
returns the DataType variant.
   
   The main reason for the additional return is that we are supposed to add a 
WARNING if the check does not pass; see also the relation between runtime 
types and types in 
https://github.com/apache/incubator-tvm/blob/master/include/tvm/ir/type.h#L29









[GitHub] [incubator-tvm] jroesch commented on pull request #5489: [Rust] Fixes for wasm32 target

2020-04-30 Thread GitBox


jroesch commented on pull request #5489:
URL: https://github.com/apache/incubator-tvm/pull/5489#issuecomment-622091138


   Could you actually add some kind of test which checks that we compile 
correctly for WASM target? 







[GitHub] [incubator-tvm] kparzysz-quic commented on a change in pull request #5491: Make "none" DataType explicit

2020-04-30 Thread GitBox


kparzysz-quic commented on a change in pull request #5491:
URL: https://github.com/apache/incubator-tvm/pull/5491#discussion_r418263860



##
File path: include/tvm/runtime/data_type.h
##
@@ -211,6 +215,13 @@ class DataType {
   static DataType Handle(int bits = 64, int lanes = 1) {
     return DataType(kHandle, bits, lanes);
   }
+  /*!
+   * \brief Construct a None type.
+   * \return The constructed data type.
+   */
+  static DataType None() {

Review comment:
   I'm not sure what to do in `GetType`.  It checks for a pointer type, and 
for other expressions it just returns the DataType member, so it seems like 
there is no extra handling needed.
   
   Btw, the if statement doesn't seem to be doing anything, since it's followed 
by the exact same return statement:
   https://github.com/apache/incubator-tvm/blob/master/src/tir/ir/op.cc#L61









[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5491: Make "none" DataType explicit

2020-04-30 Thread GitBox


tqchen commented on a change in pull request #5491:
URL: https://github.com/apache/incubator-tvm/pull/5491#discussion_r418253950



##
File path: include/tvm/runtime/data_type.h
##
@@ -211,6 +215,13 @@ class DataType {
   static DataType Handle(int bits = 64, int lanes = 1) {
     return DataType(kHandle, bits, lanes);
   }
+  /*!
+   * \brief Construct a None type.
+   * \return The constructed data type.
+   */
+  static DataType None() {

Review comment:
   to be consistent with the VoidType, perhaps we can use Void here? See 
also 
https://github.com/apache/incubator-tvm/blob/master/include/tvm/ir/type.h#L375
   
   Also need to update GetRuntimeType and GetType here 
https://github.com/apache/incubator-tvm/blob/master/include/tvm/tir/op.h#L60









[GitHub] [incubator-tvm] ehsanmok commented on pull request #5489: [Rust] Fixes for wasm32 target

2020-04-30 Thread GitBox


ehsanmok commented on pull request #5489:
URL: https://github.com/apache/incubator-tvm/pull/5489#issuecomment-622074069


   Thanks @kazum! LGTM.







[GitHub] [incubator-tvm] kparzysz-quic opened a new pull request #5492: [RUNTIME] Hexagon driver for offloading kernels to simulator

2020-04-30 Thread GitBox


kparzysz-quic opened a new pull request #5492:
URL: https://github.com/apache/incubator-tvm/pull/5492


   The driver (`sim_dev` executable) is the process running on the Hexagon 
simulator that handles the Hexagon-side communication with the TVM runtime 
running on x86.  The x86-side is implemented in 
src/runtime/hexagon/hexagon_device_sim.cc.
   
   This completes the part of the runtime needed to use the Hexagon simulator.







[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


tqchen edited a comment on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-622071330


   - Allocate `float32x32` is not the actual data type. Perhaps we can just 
show the flattened allocation size, since that is the semantics. 
   - eval is not necessary, since it can be implied by a call. 
   - We might need to update call later to something like 
`@intrin.func_name(args)`
   - Perhaps we do not need to add a new nested block for allocate; think of 
multiple let in







[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


tqchen edited a comment on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-622071330


   - Allocate `float32x32` is not the actual data type. Perhaps we can just 
show the flattened allocation size, since that is the semantics. 
   - eval is not necessary, since it can be implied by a call. 
   - We might need to update call later to something like 
`@intrin.func_name(args)`
   - Perhaps we do not need to add a new nested block for allocate







[GitHub] [incubator-tvm] tqchen commented on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


tqchen commented on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-622071330


   - Allocate `float32x32` is not the actual data type. Perhaps we can just 
show the flattened allocation size, since that is the semantics. 
   - eval is not necessary, since it can be implied by a call. 
   - We might need to update call later to something like 
`@intrin.func_name(args)`







[GitHub] [incubator-tvm] kparzysz-quic opened a new pull request #5491: Make "none" DataType explicit

2020-04-30 Thread GitBox


kparzysz-quic opened a new pull request #5491:
URL: https://github.com/apache/incubator-tvm/pull/5491


   The `None` data type is created when converting an empty string to 
`DataType`.
   
   Add functions to create it and recognize it. Convert it to the `void` LLVM 
type in LLVM codegen (it was not handled).
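   The idea can be sketched in plain Python. The names and the type code 
below are illustrative only, not TVM's actual constants or API:

   ```python
   from dataclasses import dataclass

   # Hypothetical sketch: an explicit "none" data type rather than whatever
   # value happens to result from converting an empty string.
   K_NONE = 0  # illustrative type code, not a real TVM constant


   @dataclass(frozen=True)
   class DataType:
       code: int
       bits: int
       lanes: int

       @staticmethod
       def none():
           # construct the explicit "none" type: no bits, no lanes
           return DataType(K_NONE, 0, 0)

       def is_none(self):
           # recognize the "none" type unambiguously
           return self.code == K_NONE and self.bits == 0 and self.lanes == 0
   ```

   Making the "none" state a first-class constructor and predicate means 
callers (such as a codegen backend) can branch on it explicitly instead of 
hitting an unhandled case.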



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (8d72496 -> 9bbf58a)

2020-04-30 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 8d72496  [RUNTIME][uTVM] AutoTVM + uTVM for Cortex-M7 (#5417)
 add 9bbf58a  Removing older Object detection TFlite test (#5477)

No new revisions were added by this update.

Summary of changes:
 tests/python/frontend/tflite/test_forward.py | 20 
 1 file changed, 20 deletions(-)



[GitHub] [incubator-tvm] tqchen commented on issue #5221: TextPrinter for PrimFunc in IRModule

2020-04-30 Thread GitBox


tqchen commented on issue #5221:
URL: https://github.com/apache/incubator-tvm/issues/5221#issuecomment-622051555


   Closing this, as we now have a printable version; see also 
https://github.com/apache/incubator-tvm/pull/5483







[GitHub] [incubator-tvm] tqchen commented on issue #5396: [DOCS] Sphinx docs warning fixes

2020-04-30 Thread GitBox


tqchen commented on issue #5396:
URL: https://github.com/apache/incubator-tvm/issues/5396#issuecomment-622050912


   All the docs changes have landed by now.







[GitHub] [incubator-tvm] tqchen commented on issue #5490: [REFACTOR] std::string -> String Migration in IR nodes

2020-04-30 Thread GitBox


tqchen commented on issue #5490:
URL: https://github.com/apache/incubator-tvm/issues/5490#issuecomment-622050597


   cc @icemelon9 @jroesch @zhiics please chime in if you would like to take a 
stab







[GitHub] [incubator-tvm] tqchen opened a new issue #5490: [REFACTOR] std::string -> String Migration in IR nodes

2020-04-30 Thread GitBox


tqchen opened a new issue #5490:
URL: https://github.com/apache/incubator-tvm/issues/5490


   This is an issue to track the progress of std::string -> String migration in 
IR Nodes.
   
   - [ ] Merge initial example https://github.com/apache/incubator-tvm/pull/5438
   - [ ] Convert relay nodes
   - [ ] Convert base types
   - [ ] Convert tir nodes.







[GitHub] [incubator-tvm] kazum commented on issue #5464: [OpenCL] `directly 4 8 bit int in integer` causes compiling error

2020-04-30 Thread GitBox


kazum commented on issue #5464:
URL: https://github.com/apache/incubator-tvm/issues/5464#issuecomment-622013436


   It looks wrong and should be fixed, I think.







[GitHub] [incubator-tvm] tmoreau89 commented on pull request #5417: [RUNTIME][uTVM] AutoTVM + uTVM for Cortex-M7

2020-04-30 Thread GitBox


tmoreau89 commented on pull request #5417:
URL: https://github.com/apache/incubator-tvm/pull/5417#issuecomment-622010729


   Thanks @areusch , @liangfu @weberlo @u99127 the PR has been merged







[GitHub] [incubator-tvm] kazum opened a new pull request #5489: [Rust] Fixes for wasm32 target

2020-04-30 Thread GitBox


kazum opened a new pull request #5489:
URL: https://github.com/apache/incubator-tvm/pull/5489


   This PR fixes warnings and errors when targeting wasm32.
   - Update the BackendPackedCFunc signature which was changed in #4637.
   - Use derive_default() for bindgen to handle the generated padding field.
   - Add workaround for nom::length_data, which doesn't allow u64 on 32-bit 
architecture.
   - Terminate the thread safely instead of panic when crossbeam-channel is 
closed.
   - Remove unused import warnings.
   
   @jroesch @nhynes @ehsanmok @tqchen Please help to review.







[GitHub] [incubator-tvm-vta] huajsj commented on pull request #5: Add c++ and python local deploy example

2020-04-30 Thread GitBox


huajsj commented on pull request #5:
URL: https://github.com/apache/incubator-tvm-vta/pull/5#issuecomment-621996958


   @tmoreau89, thanks for the kind follow-up. The code and tests are done; if 
you have time, could you help review? Thanks.







[GitHub] [incubator-tvm-vta] huajsj commented on pull request #7: [pynq_driver] fix device early return

2020-04-30 Thread GitBox


huajsj commented on pull request #7:
URL: https://github.com/apache/incubator-tvm-vta/pull/7#issuecomment-621981787


   @remotego, thanks for the detailed explanation; the logic makes sense. Code LGTM.
   
   







[GitHub] [incubator-tvm] kevinthesun commented on pull request #5457: [Fix] Add ConstantNode to IsAtomic

2020-04-30 Thread GitBox


kevinthesun commented on pull request #5457:
URL: https://github.com/apache/incubator-tvm/pull/5457#issuecomment-621979744


   Thanks @zhiics @MarisaKirisame 







[incubator-tvm] branch master updated: [Fix] Add ConstantNode to IsAtomic (#5457)

2020-04-30 Thread kevinthesun
This is an automated email from the ASF dual-hosted git repository.

kevinthesun pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new ae89afe  [Fix] Add ConstantNode to IsAtomic (#5457)
ae89afe is described below

commit ae89afe0f09db85d11d92d75e5a6ca34b22fb323
Author: Zhi <5145158+zhi...@users.noreply.github.com>
AuthorDate: Thu Apr 30 10:00:27 2020 -0700

[Fix] Add ConstantNode to IsAtomic (#5457)

* add constantnode to atomic

* Add ToANormalForm to FoldConstant
---
 src/relay/transforms/fold_constant.cc |  1 +
 tests/python/relay/test_pass_fold_constant.py | 19 +++
 2 files changed, 20 insertions(+)

diff --git a/src/relay/transforms/fold_constant.cc 
b/src/relay/transforms/fold_constant.cc
index a52f420..fab184c 100644
--- a/src/relay/transforms/fold_constant.cc
+++ b/src/relay/transforms/fold_constant.cc
@@ -203,6 +203,7 @@ class ConstantFolder : public ExprMutator {
   // Constant evaluate a expression.
   Expr ConstEvaluate(Expr expr) {
     std::vector<Pass> passes = {transform::FuseOps(0),
+                                transform::ToANormalForm(),
                                 transform::InferType()};
     Function func;
     if (expr.as<FunctionNode>()) {
diff --git a/tests/python/relay/test_pass_fold_constant.py 
b/tests/python/relay/test_pass_fold_constant.py
index b212b26..a981667 100644
--- a/tests/python/relay/test_pass_fold_constant.py
+++ b/tests/python/relay/test_pass_fold_constant.py
@@ -32,6 +32,25 @@ def run_opt_pass(expr, opt_pass):
     return entry if isinstance(expr, relay.Function) else entry.body
 
 
+def test_concatenate_const():
+    def before():
+        data = tvm.nd.array(np.array([1.0, 2.0, 3.0]))
+        const = relay.const(data)
+        concat = relay.op.concatenate([const, const], axis=0)
+        func = relay.Function([], concat)
+        return func
+
+    def expected():
+        data = tvm.nd.array(np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0]))
+        const = relay.const(data)
+        func = relay.Function([], const)
+        return func
+
+    zz = run_opt_pass(before(), transform.FoldConstant())
+    zexpected = run_opt_pass(expected(), transform.InferType())
+    assert tvm.ir.structural_equal(zz, zexpected)
+
+
 def test_fold_const():
     c_data = np.array([1, 2, 3]).astype("float32")
     t = relay.TensorType([1, 2, 3], "float32")
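What the new test checks can be reproduced with plain NumPy: folding a 
concatenate over constants amounts to evaluating it at compile time. NumPy 
stands in for Relay's constant evaluator here; this is a sketch of the 
semantics, not the pass itself.

```python
import numpy as np

# Constant folding of concatenate, sketched with NumPy: since both inputs
# are compile-time constants, the whole op collapses to one constant.
const = np.array([1.0, 2.0, 3.0])
folded = np.concatenate([const, const], axis=0)
print(folded.tolist())  # -> [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
```

The expected() function in the test encodes exactly this pre-evaluated 
constant, and structural equality confirms FoldConstant produced it.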



[GitHub] [incubator-tvm] kparzysz-quic commented on pull request #5487: [Hexagon] Change "scalar" and "stack" in IDL from "inrout" to "in"

2020-04-30 Thread GitBox


kparzysz-quic commented on pull request #5487:
URL: https://github.com/apache/incubator-tvm/pull/5487#issuecomment-621974419


   Yes, we have tested this change.







[GitHub] [incubator-tvm] tqchen commented on pull request #5436: [TFLite Runtime] Re-enable test for remote execution via RPC

2020-04-30 Thread GitBox


tqchen commented on pull request #5436:
URL: https://github.com/apache/incubator-tvm/pull/5436#issuecomment-621962718


   Please check the CI error at 
https://ci.tvm.ai/blue/organizations/jenkins/tvm/detail/PR-5436/4/pipeline; 
it seems it was due to changes to the CPU build config file.







[GitHub] [incubator-tvm] maheshambule commented on a change in pull request #5474: [Frontend][TFLite] ADD_N operator

2020-04-30 Thread GitBox


maheshambule commented on a change in pull request #5474:
URL: https://github.com/apache/incubator-tvm/pull/5474#discussion_r418116057



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -774,6 +775,21 @@ def convert_square(self, op):
 
 return out
 
+def get_tensor_or_const_expr(self, tensor):
+if self.has_expr(tensor.tensor_idx):

Review comment:
   This is already addressed in previous commits.









[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5486: [TFLITE]Select op support for tflite frontend

2020-04-30 Thread GitBox


FrozenGene commented on a change in pull request #5486:
URL: https://github.com/apache/incubator-tvm/pull/5486#discussion_r418114942



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -2357,6 +2371,20 @@ def get_expr(self, input_tensor_idx):
 def has_expr(self, input_tensor_idx):
 return self.exp_tab.has_expr(get_tensor_name(self.subgraph, 
input_tensor_idx))
 
+def get_tensor_or_const_expr(self, tensor):

Review comment:
   This name is tricky. Here we want to get a relay expr and distinguish 
consts; however, a const is in fact also an expr. Maybe we could name it 
`get_tensor_expr`? Do you have any better suggestions?









[GitHub] [incubator-tvm] u99127 commented on a change in pull request #5474: [Frontend][TFLite] ADD_N operator

2020-04-30 Thread GitBox


u99127 commented on a change in pull request #5474:
URL: https://github.com/apache/incubator-tvm/pull/5474#discussion_r418110316



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -774,6 +775,21 @@ def convert_square(self, op):
 
 return out
 
+def get_tensor_or_const_expr(self, tensor):
+if self.has_expr(tensor.tensor_idx):

Review comment:
   This could do with a pydoc documentation for the interface.









[GitHub] [incubator-tvm] michalpiszczek commented on pull request #5436: [TFLite Runtime] Re-enable test for remote execution via RPC

2020-04-30 Thread GitBox


michalpiszczek commented on pull request #5436:
URL: https://github.com/apache/incubator-tvm/pull/5436#issuecomment-621939200


   @tqchen I believe I've done that here: 
https://github.com/apache/incubator-tvm/pull/5436/commits/530ee65c96db562745561092dcec43e136acbaec
 . I followed the pattern from `test_cudnn.py`







[GitHub] [incubator-tvm] FrozenGene commented on pull request #5487: [Hexagon] Change "scalar" and "stack" in IDL from "inrout" to "in"

2020-04-30 Thread GitBox


FrozenGene commented on pull request #5487:
URL: https://github.com/apache/incubator-tvm/pull/5487#issuecomment-621936093


   Thanks @kparzysz-quic. As we discussed in 
https://github.com/apache/incubator-tvm/pull/5353, you mentioned that changing 
to `in` might cause a cache/memory synchronization issue. Does this PR mean 
that you have run the tests and verified there is no problem?







[incubator-tvm] branch master updated (745c8a0 -> 5d75992)

2020-04-30 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 745c8a0  [RUNTIME] Improved Packed FFI for optional. (#5478)
 add 5d75992  [VTA] Fix VTA compile issue (#5481)

No new revisions were added by this update.

Summary of changes:
 cmake/modules/VTA.cmake | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)



[incubator-tvm] branch master updated (7ea834f -> 745c8a0)

2020-04-30 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 7ea834f  [team] add reviewer kparzysz-quic (#5482)
 add 745c8a0  [RUNTIME] Improved Packed FFI for optional. (#5478)

No new revisions were added by this update.

Summary of changes:
 include/tvm/runtime/packed_func.h | 38 +-
 tests/cpp/build_module_test.cc| 10 ++
 tests/cpp/container_test.cc   | 12 
 3 files changed, 43 insertions(+), 17 deletions(-)



[incubator-tvm] branch master updated: [team] add reviewer kparzysz-quic (#5482)

2020-04-30 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 7ea834f  [team] add reviewer kparzysz-quic (#5482)
7ea834f is described below

commit 7ea834f99b5faf8481d9ec57e1ac1c33d4b5e6de
Author: Yizhi Liu 
AuthorDate: Thu Apr 30 08:06:41 2020 -0700

[team] add reviewer kparzysz-quic (#5482)
---
 CONTRIBUTORS.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index 3553e34..b14e65c 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -100,6 +100,7 @@ We do encourage everyone to work on anything they are 
interested in.
 - [Kazutaka Morita](https://github.com/kazum): @kazum
 - [Tatsuya Nishiyama](https://github.com/nishi-t): @nishi-t
 - [Pariksheet Pinjari](https://github.com/PariksheetPinjari909): 
@PariksheetPinjari909
+- [Krzysztof Parzyszek](https://github.com/kparzysz-quic): @kparzysz-quic
 - [Josh Pollock](https://github.com/joshpoll): @joshpoll
 - [Jared Roesch](https://github.com/jroesch): @jroesch
 - [Siva](https://github.com/srkreddy1238): @srkreddy1238



[GitHub] [incubator-tvm] dhruvaray opened a new pull request #5488: [TFLITE] SELECT

2020-04-30 Thread GitBox


dhruvaray opened a new pull request #5488:
URL: https://github.com/apache/incubator-tvm/pull/5488


   







[GitHub] [incubator-tvm] MarisaKirisame commented on pull request #5457: [Fix] Add ConstantNode to IsAtomic

2020-04-30 Thread GitBox


MarisaKirisame commented on pull request #5457:
URL: https://github.com/apache/incubator-tvm/pull/5457#issuecomment-621884928


   @zhiics the ANF pass is pretty fast, and it seems that if you sum the sizes 
of the graphs across all calls to ANF, the total will be smaller than the 
original graph, so I don't think it is bad.







[GitHub] [incubator-tvm] kparzysz-quic opened a new pull request #5487: [Hexagon] Change "scalar" and "stack" in IDL from "inrout" to "in"

2020-04-30 Thread GitBox


kparzysz-quic opened a new pull request #5487:
URL: https://github.com/apache/incubator-tvm/pull/5487


   This changes the `scalar` and `stack` parameters to `kernel` to be input 
parameters only, as suggested by @FrozenGene .
   







[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5486: [TFLITE]Select op support for tflite frontend

2020-04-30 Thread GitBox


siju-samuel opened a new pull request #5486:
URL: https://github.com/apache/incubator-tvm/pull/5486


   Added support for the `select` op in the tflite frontend.
   
   @FrozenGene @masahi please help me review and merge this PR. TIA
   Note: the tflite `where` op is the same as `select`.







[GitHub] [incubator-tvm] Hzfengsy commented on pull request #5483: [TIR][Printer] text format printer considering future parsing use

2020-04-30 Thread GitBox


Hzfengsy commented on pull request #5483:
URL: https://github.com/apache/incubator-tvm/pull/5483#issuecomment-621810131


   Some comments:
   1. There is more than one space before the left brace in the allocation line
   ```
   allocate(B.local, float32, [64])  {
   ```
   
   2. Can we use the same rule for the allocation stmt as the one for attr? The allocation stmt currently introduces extra indentation.
   
   3. ```
   attr [IterVar(blockIdx.z: int32, [(nullptr)], "ThreadIndex", "blockIdx.z")] "thread_extent" = 196;
   ```
   It is strange to print `nullptr` here, especially in square brackets. Perhaps we could use `IterVar(blockIdx.z: int32, (nullptr), "ThreadIndex", "blockIdx.z")]` or `IterVar(blockIdx.z: int32, , "ThreadIndex", "blockIdx.z")]`.
   
   4. Considering future parsing use, we must print the dtype for every const number, but we may use shorthand for common dtypes, e.g. `2f` for float32, `2h` for float16 (half), and a bare `2` for int32 (most integer numbers in a schedule are int32). Still, keep the complete form for every type, e.g. `int8(2)` and `float64(2)` (or maybe `fp64(2)`); `float32(2)` is legal as well.
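The suffix shorthand proposed in point 4 could be sketched roughly as follows (the mapping and the tiny parser are hypothetical illustrations, not the actual TIR printer/parser code):

```python
# Hypothetical dtype-suffix table for the shorthand discussed above.
SUFFIX_DTYPE = {"f": "float32", "h": "float16"}

def parse_const(token):
    """Parse a shorthand constant like '2f' into (value, dtype)."""
    if token[-1] in SUFFIX_DTYPE:
        return float(token[:-1]), SUFFIX_DTYPE[token[-1]]
    return int(token), "int32"  # a bare integer defaults to int32

print(parse_const("2f"))  # -> (2.0, 'float32')
print(parse_const("2"))   # -> (2, 'int32')
```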
   







[GitHub] [incubator-tvm] FrozenGene edited a comment on pull request #5485: [TOPI][Winograd] Optimization of Conv2d Winograd algorithm on Tensor …

2020-04-30 Thread GitBox


FrozenGene edited a comment on pull request #5485:
URL: https://github.com/apache/incubator-tvm/pull/5485#issuecomment-621791710


   For performance, have you tried some other layouts on GPU? I have some 
experience on CPU. A more suitable layout on CPU for NHWC input is:
   
   ```
 input_tile: alpha, alpha, P, CI
 data_pack: alpha, alpha, P, CI
 bgemm: alpha, alpha, P, CO
 inverse: m, m, P, CO
 output: N H W CO
 kernel: alpha alpha CO CI
   ```
   For the kernel, I chose `alpha alpha CO CI` because I want to vectorize CI. 
Maybe on GPU, `alpha alpha CI CO` is better.
   
   I tested your layout against the layout I mentioned: on skylake-512, your 
layout takes 0.388 ms while my layout takes 0.375 ms, using 20 threads 
on workload (1, 56, 56, 64, 64). The results are stable and reproducible.
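The quoted layouts can be sanity-checked numerically; here m (output tile size) and r (kernel size) are assumed values, not stated in the comment:

```python
# Winograd shape bookkeeping for workload (N, H, W, CI, CO) = (1, 56, 56, 64, 64),
# assuming output tile m = 4 and kernel r = 3.
N, H, W, CI, CO = 1, 56, 56, 64, 64
m, r = 4, 3
alpha = m + r - 1             # transform tile size: 6
P = N * (H // m) * (W // m)   # number of output tiles: 196

print((alpha, alpha, P, CI))  # data_pack / input_tile: (6, 6, 196, 64)
print((alpha, alpha, P, CO))  # bgemm output: (6, 6, 196, 64)
print((m, m, P, CO))          # inverse: (4, 4, 196, 64)
```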







[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5485: [TOPI][Winograd] Optimization of Conv2d Winograd algorithm on Tensor …

2020-04-30 Thread GitBox


FrozenGene commented on a change in pull request #5485:
URL: https://github.com/apache/incubator-tvm/pull/5485#discussion_r417960925



##
File path: topi/python/topi/cuda/conv2d_nhwc_winograd.py
##
@@ -0,0 +1,639 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name,unused-variable,unused-argument
+# pylint: disable=too-many-arguments,too-many-locals
+# pylint: disable=too-many-statements
+"""Winograd template for cuda backend"""
+
+import tvm
+from tvm import te
+from tvm import autotvm
+from .. import nn
+from ..util import get_const_int, get_const_tuple, traverse_inline
+from ..nn.winograd_util import winograd_transform_matrices
+from .tensor_intrin import intrin_wmma_load_matrix_A
+from .tensor_intrin import intrin_wmma_load_matrix_W
+from .tensor_intrin import intrin_wmma_store_matrix
+from .tensor_intrin import intrin_wmma_gemm
+
+def _infer_tile_size(data, kernel):
+    """Compute the tile size"""
+    N, H, W, CI = get_const_tuple(data.shape)
+    if H % 8 == 0:
+        return 4
+    return 2
+
+
+def schedule_bgemm_tensorcore(cfg, s, bgemm, data_pack, kernel_pack):
+    """Schedule for bgemm tensorcore"""
+    A = data_pack
+    B = kernel_pack
+    C = bgemm
+    _, _, P, out_dim = get_const_tuple(C.shape)
+    out_dtype = C.dtype
+
+    # Explicit memory access
+    AS = s.cache_read(A, 'shared', [C])
+    BS = s.cache_read(B, 'shared', [C])
+    AF = s.cache_read(AS, 'wmma.matrix_a', [C])
+    BF = s.cache_read(BS, 'wmma.matrix_b', [C])
+    CF = s.cache_write(C, 'wmma.accumulator')
+    CS = s.cache_read(CF, 'shared', [C])
+
+    # Create tuning space
+    cfg.define_knob("block_row_warps", [1, 2, 4])
+    cfg.define_knob("block_col_warps", [1, 2, 4])
+    cfg.define_knob("warp_row_tiles", [1, 2, 4, 8])
+    cfg.define_knob("warp_col_tiles", [1, 2, 4, 8])
+    cfg.define_knob("chunk", [1, 2, 4, 8])
+    cfg.define_knob("offset", [0, 1, 2, 4, 8])
+    cfg.define_knob("offsetCS", [0, 1, 2, 4, 8])
+    cfg.define_knob("vec", [1, 2, 4, 8])
+
+    # Ensure that the default parameters are applicable when autotvm is not in use
+    if (P % 16 == 0 and out_dim % 16 == 0):
+        cfg.define_knob("wmma_m", [16, 8, 32])
+    elif (P % 32 == 0 and out_dim % 8 == 0):
+        cfg.define_knob("wmma_m", [32, 16, 8])
+    elif (P % 8 == 0 and out_dim % 32 == 0):
+        cfg.define_knob("wmma_m", [8, 16, 32])
+
+    warp_size = 32
+    wmma_k = 16
+    block_row_warps = cfg["block_row_warps"].val
+    block_col_warps = cfg["block_col_warps"].val
+    warp_row_tiles = cfg["warp_row_tiles"].val
+    warp_col_tiles = cfg["warp_col_tiles"].val
+    chunk = cfg["chunk"].val
+    offsetAB = cfg["offset"].val
+    offsetCS = cfg["offsetCS"].val
+    wmma_m = cfg["wmma_m"].val
+    vec = cfg["vec"].val
+
+    if wmma_m == 16:
+        wmma_n = 16
+    elif wmma_m == 8:
+        wmma_n = 32
+    elif wmma_m == 32:
+        wmma_n = 8
+
+    # Define the stride of intrin functions
+    AS_align = chunk * wmma_k + offsetAB
+    BS_align = warp_col_tiles * block_col_warps * wmma_n + offsetAB
+    CS_align = warp_col_tiles * block_col_warps * wmma_n + offsetCS
+    AS_stride = [AS_align, 1]
+    BS_stride = [BS_align, 1]
+    AF_stride = [wmma_k, 1]
+    BF_stride = [wmma_n * warp_col_tiles, 1]
+    CF_stride = [warp_col_tiles * wmma_n, 1]
+    CS_stride = [CS_align, 1]
+    block_x = te.thread_axis('blockIdx.x')
+    block_y = te.thread_axis('blockIdx.y')
+    block_z = te.thread_axis('blockIdx.z')
+    thread_x = te.thread_axis('threadIdx.x')
+    thread_y = te.thread_axis('threadIdx.y')
+    thread_z = te.thread_axis('threadIdx.z')
+
+    # Schedule for computation
+    block_factor_b = wmma_m * warp_row_tiles * block_row_warps
+    block_factor_o = wmma_n * warp_col_tiles * block_col_warps
+    alpha_1, alpha_2, b, o = C.op.axis
+    block_k = s[C].fuse(alpha_1, alpha_2)
+    block_i, bc = s[C].split(b, factor=block_factor_b)
+    block_j, oc = s[C].split(o, factor=block_factor_o)
+    s[C].reorder(block_k, block_i, block_j, bc, oc)
+    t = s[C].fuse(bc, oc)
+    t, vi = s[C].split(t, factor=vec)
+    t, tx = s[C].split(t, factor=warp_size)
+    t, ty = s[C].split(t, factor=block_row_warps)
+    t, tz = s[C].split(t, 

[GitHub] [incubator-tvm] FrozenGene commented on pull request #5485: [TOPI][Winograd] Optimization of Conv2d Winograd algorithm on Tensor …

2020-04-30 Thread GitBox


FrozenGene commented on pull request #5485:
URL: https://github.com/apache/incubator-tvm/pull/5485#issuecomment-621791710


   For performance, have you tried some other layouts? I have some experience 
on CPU. A more suitable layout on CPU for NHWC input is:
   
   ```
 input_tile: alpha, alpha, P, CI
 data_pack: alpha, alpha, P, CI
 bgemm: alpha, alpha, P, CO
 inverse: m, m, P, CO
 output: N H W CO
 kernel: alpha alpha CO CI
   ```
   For the kernel, I chose `alpha alpha CO CI` because I want to vectorize CI. 
Maybe on GPU, `alpha alpha CI CO` is better.
   
   I tested your layout against the layout I mentioned: on skylake-512, your 
layout takes 0.388 ms while my layout takes 0.375 ms, using 20 threads 
on workload (1, 56, 56, 64, 64). The results are stable and reproducible.







[GitHub] [incubator-tvm] wsl-inspur commented on a change in pull request #5485: [TOPI][Winograd] Optimization of Conv2d Winograd algorithm on Tensor …

2020-04-30 Thread GitBox


wsl-inspur commented on a change in pull request #5485:
URL: https://github.com/apache/incubator-tvm/pull/5485#discussion_r417902315



##
File path: topi/python/topi/cuda/conv2d_nhwc_winograd.py
##
@@ -0,0 +1,639 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name,unused-variable,unused-argument
+# pylint: disable=too-many-arguments,too-many-locals
+# pylint: disable=too-many-statements
+"""Winograd template for cuda backend"""
+
+import tvm
+from tvm import te
+from tvm import autotvm
+from .. import nn
+from ..util import get_const_int, get_const_tuple, traverse_inline
+from ..nn.winograd_util import winograd_transform_matrices
+from .tensor_intrin import intrin_wmma_load_matrix_A
+from .tensor_intrin import intrin_wmma_load_matrix_W
+from .tensor_intrin import intrin_wmma_store_matrix
+from .tensor_intrin import intrin_wmma_gemm
+
+def _infer_tile_size(data, kernel):
+    """Compute the tile size"""
+    N, H, W, CI = get_const_tuple(data.shape)
+    if H % 8 == 0:
+        return 4
+    return 2
+
+
+def schedule_bgemm_tensorcore(cfg, s, bgemm, data_pack, kernel_pack):
+    """Schedule for bgemm tensorcore"""
+    A = data_pack
+    B = kernel_pack
+    C = bgemm
+    _, _, P, out_dim = get_const_tuple(C.shape)
+    out_dtype = C.dtype
+
+    # Explicit memory access
+    AS = s.cache_read(A, 'shared', [C])
+    BS = s.cache_read(B, 'shared', [C])
+    AF = s.cache_read(AS, 'wmma.matrix_a', [C])
+    BF = s.cache_read(BS, 'wmma.matrix_b', [C])
+    CF = s.cache_write(C, 'wmma.accumulator')
+    CS = s.cache_read(CF, 'shared', [C])
+
+    # Create tuning space
+    cfg.define_knob("block_row_warps", [1, 2, 4])
+    cfg.define_knob("block_col_warps", [1, 2, 4])
+    cfg.define_knob("warp_row_tiles", [1, 2, 4, 8])
+    cfg.define_knob("warp_col_tiles", [1, 2, 4, 8])
+    cfg.define_knob("chunk", [1, 2, 4, 8])
+    cfg.define_knob("offset", [0, 1, 2, 4, 8])
+    cfg.define_knob("offsetCS", [0, 1, 2, 4, 8])
+    cfg.define_knob("vec", [1, 2, 4, 8])
+
+    # Ensure that the default parameters are applicable when autotvm is not in use
+    if (P % 16 == 0 and out_dim % 16 == 0):
+        cfg.define_knob("wmma_m", [16, 8, 32])
+    elif (P % 32 == 0 and out_dim % 8 == 0):
+        cfg.define_knob("wmma_m", [32, 16, 8])
+    elif (P % 8 == 0 and out_dim % 32 == 0):
+        cfg.define_knob("wmma_m", [8, 16, 32])
+
+    warp_size = 32
+    wmma_k = 16
+    block_row_warps = cfg["block_row_warps"].val
+    block_col_warps = cfg["block_col_warps"].val
+    warp_row_tiles = cfg["warp_row_tiles"].val
+    warp_col_tiles = cfg["warp_col_tiles"].val
+    chunk = cfg["chunk"].val
+    offsetAB = cfg["offset"].val
+    offsetCS = cfg["offsetCS"].val
+    wmma_m = cfg["wmma_m"].val
+    vec = cfg["vec"].val
+
+    if wmma_m == 16:
+        wmma_n = 16
+    elif wmma_m == 8:
+        wmma_n = 32
+    elif wmma_m == 32:
+        wmma_n = 8
+
+    # Define the stride of intrin functions
+    AS_align = chunk * wmma_k + offsetAB
+    BS_align = warp_col_tiles * block_col_warps * wmma_n + offsetAB
+    CS_align = warp_col_tiles * block_col_warps * wmma_n + offsetCS
+    AS_stride = [AS_align, 1]
+    BS_stride = [BS_align, 1]
+    AF_stride = [wmma_k, 1]
+    BF_stride = [wmma_n * warp_col_tiles, 1]
+    CF_stride = [warp_col_tiles * wmma_n, 1]
+    CS_stride = [CS_align, 1]
+    block_x = te.thread_axis('blockIdx.x')
+    block_y = te.thread_axis('blockIdx.y')
+    block_z = te.thread_axis('blockIdx.z')
+    thread_x = te.thread_axis('threadIdx.x')
+    thread_y = te.thread_axis('threadIdx.y')
+    thread_z = te.thread_axis('threadIdx.z')
+
+    # Schedule for computation
+    block_factor_b = wmma_m * warp_row_tiles * block_row_warps
+    block_factor_o = wmma_n * warp_col_tiles * block_col_warps
+    alpha_1, alpha_2, b, o = C.op.axis
+    block_k = s[C].fuse(alpha_1, alpha_2)
+    block_i, bc = s[C].split(b, factor=block_factor_b)
+    block_j, oc = s[C].split(o, factor=block_factor_o)
+    s[C].reorder(block_k, block_i, block_j, bc, oc)
+    t = s[C].fuse(bc, oc)
+    t, vi = s[C].split(t, factor=vec)
+    t, tx = s[C].split(t, factor=warp_size)
+    t, ty = s[C].split(t, factor=block_row_warps)
+    t, tz = s[C].split(t, 

[GitHub] [incubator-tvm-vta] remotego edited a comment on pull request #7: [pynq_driver] fix device early return

2020-04-30 Thread GitBox


remotego edited a comment on pull request #7:
URL: https://github.com/apache/incubator-tvm-vta/pull/7#issuecomment-621724098


   @huajsj Thank you for the reply. Let me explain more on this issue.
   
   The original name of the 0x18 (24) register of the Compute Module is 
XCOMPUTE_CONTROL_BUS_ADDR_DONE_O_DATA; it is an output from the FPGA hardware. 
From the point of view of S/W, it is a read-only register, so there is no way 
for software (i.e. driver) code to change its content. This register is fully 
controlled by the FPGA H/W.
   
   ```
   impl/ip/drivers/compute_v1_0/src/xcompute_hw.h
   
   // 0x18 : Data signal of done_o
   //bit 31~0 - done_o[31:0] (Read)
   ```
   
   If we trace the control of this register in the H/W code, we can see that 
the register is set to '0' at the beginning of the compute module, and set to 
'1' only when a FINISH instruction is encountered.
   
   ```
   void compute(
   ...
 // Set done value
 done = 0;
 // Perform action based on opcode
 if (insn.generic.opcode == VTA_OPCODE_FINISH) {
   // Set done flag if we reach a FINISH instruction
   done = 1;
 }
   ```
   
   When we start the VTA hardware by this code block,
   ```
   VTAWriteMappedReg(vta_fetch_handle_, 0x0, VTA_START);
   VTAWriteMappedReg(vta_load_handle_, 0x0, VTA_AUTORESTART);
   VTAWriteMappedReg(vta_compute_handle_, 0x0, VTA_AUTORESTART);
   VTAWriteMappedReg(vta_store_handle_, 0x0, VTA_AUTORESTART);
   ```
   The fetch module will dispatch the first compute instruction to the compute 
module, and the module will then set the Done register to '0'.
   
   However, at the same time, the driver code will attempt to check the value 
of the Done register in a polling loop, and if the Done register is equal to 
1, the driver will break and return.
   ```
   if (flag == VTA_DONE) break;
   ```
   
   Thus we have a race condition here:
   ```
   |START| ->|FPGA Compute module start|->|Done -> '0'|->|Other 
Computations...|->|Done -> '1'|
  |   |
  ->|Driver code attempts to check "done" register ??ms |->
   ```
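The race can be modeled in a few lines (a toy model of the polling loop; only `VTA_DONE` mirrors the quoted driver code, everything else is illustrative):

```python
VTA_DONE = 1

def wait_for_done(read_done_reg, max_iters):
    """Poll the done register, returning True once VTA_DONE is observed."""
    for _ in range(max_iters):
        # RACE: if polling begins before the compute module has cleared
        # done_o to 0 for the new run, a stale '1' from the previous run
        # triggers an early return.
        if read_done_reg() == VTA_DONE:
            return True
    return False

done_reg = [1]  # stale '1' left over; the new run has not cleared it yet
print(wait_for_done(lambda: done_reg[0], 10))  # -> True (the early-return bug)
```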







[GitHub] [incubator-tvm-vta] remotego commented on pull request #7: [pynq_driver] fix device early return

2020-04-30 Thread GitBox


remotego commented on pull request #7:
URL: https://github.com/apache/incubator-tvm-vta/pull/7#issuecomment-621724098


   @huajsj Thank you for the reply. Let me explain more on this issue.
   
   The original name of the 0x18 (24) register of the Compute Module is 
XCOMPUTE_CONTROL_BUS_ADDR_DONE_O_DATA; it is an output from the FPGA hardware. 
From the point of view of S/W, it is a read-only register, so there is no way 
for software (i.e. driver) code to change its content. This register is fully 
controlled by the FPGA H/W.
   
   ```
   impl/ip/drivers/compute_v1_0/src/xcompute_hw.h
   
   // 0x18 : Data signal of done_o
   //bit 31~0 - done_o[31:0] (Read)
   ```
   
   If we trace the control of this register in the H/W code, we can see that 
the register is set to '0' at the beginning of the compute module, and set to 
'1' only when a FINISH instruction is encountered.
   
   ```
   void compute(
   ...
 // Set done value
 done = 0;
 // Perform action based on opcode
 if (insn.generic.opcode == VTA_OPCODE_FINISH) {
   // Set done flag if we reach a FINISH instruction
   done = 1;
 }
   ```
   
   When we start the VTA hardware by this code block,
   ```
   VTAWriteMappedReg(vta_fetch_handle_, 0x0, VTA_START);
   VTAWriteMappedReg(vta_load_handle_, 0x0, VTA_AUTORESTART);
   VTAWriteMappedReg(vta_compute_handle_, 0x0, VTA_AUTORESTART);
   VTAWriteMappedReg(vta_store_handle_, 0x0, VTA_AUTORESTART);
   ```
   The fetch module will dispatch the first compute instruction to the compute 
module, and the module will then set the Done register to '0'.
   
   However, at the same time, the driver code will attempt to check the value 
of the Done register in a polling loop, and if the Done register is equal to 
1, the driver will break and return.
   ```
   if (flag == VTA_DONE) break;
   ```
   
   Thus we have a race condition here:
   ```
   |START| ->|FPGA Compute module start|->|Done -> '0'|->|Other 
Computations...|->|Done -> '1'|
 |   |
 ->|Driver code attempts to check "done" register ??us |->
   ```







[GitHub] [incubator-tvm] mbaret commented on a change in pull request #5474: [Frontend][TFLite] ADD_N operator

2020-04-30 Thread GitBox


mbaret commented on a change in pull request #5474:
URL: https://github.com/apache/incubator-tvm/pull/5474#discussion_r417873031



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -863,6 +845,21 @@ def convert_add(self, op):
             return self._convert_elemwise(_qnn.op.add, op)
         return self._convert_elemwise(_op.add, op)
 
+    def convert_add_n(self, op):
+        """Convert TFLite ADD_N"""
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+
+        input_tensors = self.get_input_tensors(op)
+        assert not input_tensors[0].qnn_params, "TFLite does not support quantized ADD_N."
+        lhs_expr = self.get_tensor_or_const_expr(input_tensors[0])
+        for rhs_tensor in input_tensors[1:]:

Review comment:
   I believe doing x[1:] with len(x) < 2 will just return an empty list 
rather than throw an error.
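The point above is easy to verify standalone; slicing past the end of a Python list yields an empty list rather than raising:

```python
input_tensors = ["t0"]       # a single input tensor
rest = input_tensors[1:]     # no IndexError, even though len < 2
print(rest)                  # -> []

# Consequently the accumulation loop simply does nothing for one input:
for rhs_tensor in input_tensors[1:]:
    raise AssertionError("unreachable with a single input")
```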









[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #5474: [Frontend][TFLite] ADD_N operator

2020-04-30 Thread GitBox


siju-samuel commented on a change in pull request #5474:
URL: https://github.com/apache/incubator-tvm/pull/5474#discussion_r417851626



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -863,6 +845,21 @@ def convert_add(self, op):
             return self._convert_elemwise(_qnn.op.add, op)
         return self._convert_elemwise(_op.add, op)
 
+    def convert_add_n(self, op):
+        """Convert TFLite ADD_N"""
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "output tensors length should be 1"
+
+        input_tensors = self.get_input_tensors(op)
+        assert not input_tensors[0].qnn_params, "TFLite does not support quantized ADD_N."
+        lhs_expr = self.get_tensor_or_const_expr(input_tensors[0])
+        for rhs_tensor in input_tensors[1:]:

Review comment:
   assert if len(input_tensors) < 2? Otherwise input_tensors[1:] will 
throw an error.

##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -1142,6 +1142,43 @@ def test_all_elemwise():
     _test_forward_elemwise(_test_floor_divide)
     _test_forward_elemwise(_test_floor_mod)
 
+
+###
+# AddN
+# --

Review comment:
   remove unnecessary dashes









[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5485: [TOPI][Winograd] Optimization of Conv2d Winograd algorithm on Tensor …

2020-04-30 Thread GitBox


FrozenGene commented on a change in pull request #5485:
URL: https://github.com/apache/incubator-tvm/pull/5485#discussion_r417839497



##
File path: topi/python/topi/cuda/conv2d_nhwc_winograd.py
##
@@ -0,0 +1,639 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name,unused-variable,unused-argument
+# pylint: disable=too-many-arguments,too-many-locals
+# pylint: disable=too-many-statements
+"""Winograd template for cuda backend"""
+
+import tvm
+from tvm import te
+from tvm import autotvm
+from .. import nn
+from ..util import get_const_int, get_const_tuple, traverse_inline
+from ..nn.winograd_util import winograd_transform_matrices
+from .tensor_intrin import intrin_wmma_load_matrix_A
+from .tensor_intrin import intrin_wmma_load_matrix_W
+from .tensor_intrin import intrin_wmma_store_matrix
+from .tensor_intrin import intrin_wmma_gemm
+
+def _infer_tile_size(data, kernel):
+    """Compute the tile size"""
+    N, H, W, CI = get_const_tuple(data.shape)
+    if H % 8 == 0:
+        return 4
+    return 2
+
+
+def schedule_bgemm_tensorcore(cfg, s, bgemm, data_pack, kernel_pack):
+    """Schedule for bgemm tensorcore"""
+    A = data_pack
+    B = kernel_pack
+    C = bgemm
+    _, _, P, out_dim = get_const_tuple(C.shape)
+    out_dtype = C.dtype
+
+    # Explicit memory access
+    AS = s.cache_read(A, 'shared', [C])
+    BS = s.cache_read(B, 'shared', [C])
+    AF = s.cache_read(AS, 'wmma.matrix_a', [C])
+    BF = s.cache_read(BS, 'wmma.matrix_b', [C])
+    CF = s.cache_write(C, 'wmma.accumulator')
+    CS = s.cache_read(CF, 'shared', [C])
+
+    # Create tuning space
+    cfg.define_knob("block_row_warps", [1, 2, 4])
+    cfg.define_knob("block_col_warps", [1, 2, 4])
+    cfg.define_knob("warp_row_tiles", [1, 2, 4, 8])
+    cfg.define_knob("warp_col_tiles", [1, 2, 4, 8])
+    cfg.define_knob("chunk", [1, 2, 4, 8])
+    cfg.define_knob("offset", [0, 1, 2, 4, 8])
+    cfg.define_knob("offsetCS", [0, 1, 2, 4, 8])
+    cfg.define_knob("vec", [1, 2, 4, 8])
+
+    # Ensure that the default parameters are applicable when autotvm is not in use
+    if (P % 16 == 0 and out_dim % 16 == 0):
+        cfg.define_knob("wmma_m", [16, 8, 32])
+    elif (P % 32 == 0 and out_dim % 8 == 0):
+        cfg.define_knob("wmma_m", [32, 16, 8])
+    elif (P % 8 == 0 and out_dim % 32 == 0):
+        cfg.define_knob("wmma_m", [8, 16, 32])
+
+    warp_size = 32
+    wmma_k = 16
+    block_row_warps = cfg["block_row_warps"].val
+    block_col_warps = cfg["block_col_warps"].val
+    warp_row_tiles = cfg["warp_row_tiles"].val
+    warp_col_tiles = cfg["warp_col_tiles"].val
+    chunk = cfg["chunk"].val
+    offsetAB = cfg["offset"].val
+    offsetCS = cfg["offsetCS"].val
+    wmma_m = cfg["wmma_m"].val
+    vec = cfg["vec"].val
+
+    if wmma_m == 16:
+        wmma_n = 16
+    elif wmma_m == 8:
+        wmma_n = 32
+    elif wmma_m == 32:
+        wmma_n = 8
+
+    # Define the stride of intrin functions
+    AS_align = chunk * wmma_k + offsetAB
+    BS_align = warp_col_tiles * block_col_warps * wmma_n + offsetAB
+    CS_align = warp_col_tiles * block_col_warps * wmma_n + offsetCS
+    AS_stride = [AS_align, 1]
+    BS_stride = [BS_align, 1]
+    AF_stride = [wmma_k, 1]
+    BF_stride = [wmma_n * warp_col_tiles, 1]
+    CF_stride = [warp_col_tiles * wmma_n, 1]
+    CS_stride = [CS_align, 1]
+    block_x = te.thread_axis('blockIdx.x')
+    block_y = te.thread_axis('blockIdx.y')
+    block_z = te.thread_axis('blockIdx.z')
+    thread_x = te.thread_axis('threadIdx.x')
+    thread_y = te.thread_axis('threadIdx.y')
+    thread_z = te.thread_axis('threadIdx.z')
+
+    # Schedule for computation
+    block_factor_b = wmma_m * warp_row_tiles * block_row_warps
+    block_factor_o = wmma_n * warp_col_tiles * block_col_warps
+    alpha_1, alpha_2, b, o = C.op.axis
+    block_k = s[C].fuse(alpha_1, alpha_2)
+    block_i, bc = s[C].split(b, factor=block_factor_b)
+    block_j, oc = s[C].split(o, factor=block_factor_o)
+    s[C].reorder(block_k, block_i, block_j, bc, oc)
+    t = s[C].fuse(bc, oc)
+    t, vi = s[C].split(t, factor=vec)
+    t, tx = s[C].split(t, factor=warp_size)
+    t, ty = s[C].split(t, factor=block_row_warps)
+    t, tz = s[C].split(t, 

[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5461: [MXNET]broadcast and logical op support

2020-04-30 Thread GitBox


FrozenGene commented on a change in pull request #5461:
URL: https://github.com/apache/incubator-tvm/pull/5461#discussion_r417832135



##
File path: python/tvm/relay/frontend/mxnet.py
##
@@ -1712,6 +1712,33 @@ def _get_bias_requantize_scale(_inputs, _data_scale, _kernel_scale):
         res = _op.nn.relu(res)
         return res
 
+
+def _mx_broadcast_to(inputs, attrs):
+    data = inputs[0]
+    tgt_shape = attrs.get_int_tuple("shape", [])
+
+    return _op.broadcast_to(data, tgt_shape)
+
+
+def _mx_logical_not(inputs, input_types):
+    data = inputs[0]
+    dtype = _infer_type(data).checked_type.dtype
+    data = _op.cast(data, "bool") if dtype != "bool" else data
+
+    return _op.cast(_op.logical_not(data), dtype)
+
+
+def _mx_broadcast_logical(logical_op):
+    def impl(inputs, input_types):
+        dtype0 = _infer_type(inputs[0]).checked_type.dtype

Review comment:
   Code style: the MXNet frontend names variables in the `dtype_0` style. Maybe 
we could also consider a more meaningful name, like `lhs_dtype`.









[GitHub] [incubator-tvm] wsl-inspur opened a new pull request #5485: [TOPI][Winograd] Optimization of Conv2d Winograd algorithm on Tensor …

2020-04-30 Thread GitBox


wsl-inspur opened a new pull request #5485:
URL: https://github.com/apache/incubator-tvm/pull/5485


   
   - Optimization of the Conv2d Winograd algorithm on Tensor Core for NHWC layout.
   - Winograd with Tensor Core outperforms the original Winograd algorithm for all batch sizes.
   - However, the performance of Winograd is worse than conv2d for large batch sizes when Tensor Core is enabled for both.
   - Performance improvements on ResNet-50 are fairly good for small batch sizes.
   
   Please see the RFC link below for details:
   https://discuss.tvm.ai/t/rfc-tensor-core-optimization-of-winograd-conv2d-on-tensor-core/6543
   
   @Hzfengsy @Laurawly @vinx13 @jwfromm Please help to review.







[incubator-tvm] branch master updated (90b08f5 -> 095f565)

2020-04-30 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 90b08f5  [intrin] a few more math functions (#5468)
 add 095f565  [FRONTEND][TFLITE]Logical not op support (#5475)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  | 11 +++
 tests/python/frontend/tflite/test_forward.py | 12 +++-
 2 files changed, 22 insertions(+), 1 deletion(-)



[GitHub] [incubator-tvm] FrozenGene commented on pull request #5475: [FRONTEND][TFLITE]Logical not op support

2020-04-30 Thread GitBox


FrozenGene commented on pull request #5475:
URL: https://github.com/apache/incubator-tvm/pull/5475#issuecomment-621682464


   Thanks @siju-samuel 







[GitHub] [incubator-tvm] maheshambule commented on a change in pull request #5474: [Frontend][TFLite] ADD_N operator

2020-04-30 Thread GitBox


maheshambule commented on a change in pull request #5474:
URL: https://github.com/apache/incubator-tvm/pull/5474#discussion_r417808625



##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -1896,6 +1896,41 @@ def test_forward_mediapipe_hand_landmark():
         tvm.testing.assert_allclose(np.squeeze(tvm_output[i]), np.squeeze(tflite_output[i]),
                                     rtol=1e-5, atol=1e-5)
 
+###

Review comment:
   Moved.









[GitHub] [incubator-tvm] liangfu commented on a change in pull request #5417: [RUNTIME][uTVM] AutoTVM + uTVM for Cortex-M7

2020-04-30 Thread GitBox


liangfu commented on a change in pull request #5417:
URL: https://github.com/apache/incubator-tvm/pull/5417#discussion_r417774863



##
File path: python/tvm/micro/base.py
##
@@ -133,44 +152,91 @@ def __exit__(self, exc_type, exc_value, exc_traceback):
 self._exit()
 
 
-def create_micro_mod(c_mod, dev_config):
+def _calc_max_workspace_usage(src):
+# TODO factor in alignment to the calculation (alloc sizes will be aligned 
up to the word size)
+alloc_re = re.compile(
+r'.*\* ?(.+) = (\(.+\))? TVMBackendAllocWorkspace\(.+, .+, 
\(uint64_t\)(.+), .+, .+\).*')
+free_re = re.compile(r'.*if \(TVMBackendFreeWorkspace\(.+, .+, 
(\(void\*\))? (.+)\) != 0\) {.*')
+max_usage = 0
+alloc_map = {}
+for line in src.split("\n"):
+if line.strip().startswith("//"):
+continue
+match = alloc_re.match(line)
+if match is not None:
+alloc_map[match.group(1)] = int(match.group(3))
+max_usage = max(max_usage, sum(alloc_map.values()))
+else:
+match = free_re.match(line)
+if match is not None:
+print(alloc_map)
+del alloc_map[match.group(2)]
+return max_usage
+
+
+def create_micro_mod(c_mod, dev_config, lib_src_paths=None, lib_headers=None,
+ lib_include_paths=None):
 """Produces a micro module from a given module.
 
 Parameters
 --
-c_mod : tvm.runtime.Module
+c_mod : tvm.module.Module
 module with "c" as its target backend
 
-dev_config : Dict[str, Any]
-MicroTVM config dict for the target device
+lib_src_paths: TODO
+TODO
+
+lib_headers: TODO
+TODO
+
+lib_include_paths: TODO
+TODO
 
 Return
 --
-micro_mod : tvm.runtim.Module
+micro_mod : tvm.module.Module
 micro module for the target device
 """
 temp_dir = _util.tempdir()
 lib_obj_path = temp_dir.relpath("dev_lib.obj")
+# TODO use dev config to dispatch on the type of C codegen to run through
+# (e.g., CodeGenCArm, CodeGenCHost, CodeGenCRiscV)
 c_mod.export_library(
 lib_obj_path,
-fcompile=cross_compiler(dev_config, LibType.OPERATOR))
+fcompile=cross_compiler(
+dev_config,
+LibType.OPERATOR,
+lib_src_paths=lib_src_paths,
+lib_headers=lib_headers,
+lib_include_paths=lib_include_paths))
 micro_mod = tvm.runtime.load_module(lib_obj_path)
 return micro_mod
 
 
-def cross_compiler(dev_config, lib_type):
-"""Create a cross-compile function that wraps `create_lib` for a `Binutil` instance.
+def cross_compiler(dev_config, lib_type, lib_src_paths=None, lib_headers=None,
+   lib_include_paths=None):
+"""Create a cross compile function that wraps `create_lib` for a `Binutil` instance.
 
 For use in `tvm.runtime.Module.export_library`.
 
 Parameters
 --
-dev_config : Dict[str, Any]
-MicroTVM config dict for the target device
+create_micro_lib : func
+function for creating MicroTVM libraries for a specific device (e.g.,
+`tvm.micro.device.get_device_funcs('arm.stm32f746xx')['create_micro_lib']`)
 
 lib_type : micro.LibType
 whether to compile a MicroTVM runtime or operator library
 
+lib_src_paths: TODO
+TODO
+
+lib_headers: TODO
+e.g., `['cmsis_gcc.h', 'arm_math.h']`
+
+lib_include_paths: TODO

Review comment:
   Please add a meaningful comment here.
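As a sanity check on the allocation-tracking logic in `_calc_max_workspace_usage` above, here is a self-contained sketch: the two regexes are taken from the patch, while the helper name and the sample C lines are hypothetical inputs written to match what the codegen emits.

```python
import re

# Regexes as in the patch: match TVMBackendAllocWorkspace / TVMBackendFreeWorkspace
# calls in generated C source, tracking the peak sum of live allocation sizes.
alloc_re = re.compile(
    r'.*\* ?(.+) = (\(.+\))? TVMBackendAllocWorkspace\(.+, .+, \(uint64_t\)(.+), .+, .+\).*')
free_re = re.compile(
    r'.*if \(TVMBackendFreeWorkspace\(.+, .+, (\(void\*\))? (.+)\) != 0\) {.*')

def max_workspace_usage(src):
    max_usage = 0
    live = {}  # variable name -> allocation size in bytes
    for line in src.split("\n"):
        if line.strip().startswith("//"):
            continue
        m = alloc_re.match(line)
        if m is not None:
            live[m.group(1)] = int(m.group(3))
            max_usage = max(max_usage, sum(live.values()))
            continue
        m = free_re.match(line)
        if m is not None:
            live.pop(m.group(2), None)
    return max_usage

# Hypothetical generated-C snippet: a and b are live together (256 + 512 bytes),
# then a is freed before c (128 bytes) is allocated.
src = """
void* a = (void*) TVMBackendAllocWorkspace(1, 0, (uint64_t)256, 2, 32);
void* b = (void*) TVMBackendAllocWorkspace(1, 0, (uint64_t)512, 2, 32);
if (TVMBackendFreeWorkspace(1, 0, (void*) a) != 0) {
}
void* c = (void*) TVMBackendAllocWorkspace(1, 0, (uint64_t)128, 2, 32);
"""
print(max_workspace_usage(src))  # → 768
```

Note this scan is purely textual, so (as the TODO in the patch says) it ignores word-size alignment of each allocation.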

##
File path: python/tvm/micro/device/riscv_spike.py
##
@@ -62,56 +78,31 @@ def default_config(base_addr, server_addr, server_port):
 server_port : int
 port of OpenOCD server to connect to
 
+TODO correct type annotation?
+section_constraints: Optional[Dict[str, Tuple[Number, MemConstraint]]]
+TODO

Review comment:
   Please leave a meaningful comment here.

##
File path: python/tvm/micro/device/arm/stm32f746xx.py
##
@@ -36,23 +55,40 @@ def create_micro_lib(obj_path, src_path, lib_type, options=None):
 
 options : Optional[List[str]]
 additional options to pass to GCC
+
+lib_src_paths : Optional[List[str]]
+TODO

Review comment:
   Please put a meaningful comment here as well.




