[GitHub] [incubator-tvm] zchuang11 commented on issue #5024: [OpenCL] Max Buildin Type Error

2020-04-16 Thread GitBox
zchuang11 commented on issue #5024: [OpenCL] Max Buildin Type Error
URL: https://github.com/apache/incubator-tvm/issues/5024#issuecomment-615033686
 
 
   @tqchen I have posted an unsupported op as [Lambda in 
keras](https://discuss.tvm.ai/t/lambda-op-for-keras/6408). Thanks for your 
reply!
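
   For context, the report concerns the Keras frontend lacking a converter for `Lambda` layers. Below is a minimal sketch of the kind of model that hits this limitation (the model itself is illustrative, not taken from the linked thread), using the standard `relay.frontend.from_keras` entry point:

   ```python
   # Illustrative only -- a tiny Keras model with a Lambda layer, which the
   # Relay Keras frontend cannot convert because Lambda wraps arbitrary Python.
   import keras
   from tvm import relay

   model = keras.models.Sequential([
       keras.layers.InputLayer(input_shape=(4,), name="data"),
       keras.layers.Lambda(lambda x: x * 2.0),  # no Relay mapping for Lambda
       keras.layers.Dense(2),
   ])

   try:
       mod, params = relay.frontend.from_keras(model, {"data": (1, 4)})
   except Exception as err:  # the frontend raises for unsupported operators
       print("Keras frontend error:", err)
   ```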


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (0075d8c -> 9bbee96)

2020-04-16 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 0075d8c  [CRT]Compilation warnings fixed for 32bit and 64bit 
compilation (#5349)
 add 9bbee96  [PYTORCH]Tensor creation ops support (#5347)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/pytorch.py  | 110 +--
 tests/python/frontend/pytorch/test_forward.py | 145 ++
 2 files changed, 247 insertions(+), 8 deletions(-)
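
For readers skimming the digest: the change above extends the PyTorch frontend with converters for tensor creation ops. A brief sketch follows (not taken from the PR; which exact ops are covered is an assumption) of importing such a model through `relay.frontend.from_pytorch`:

```python
# Illustrative only -- a traced PyTorch module that uses tensor creation ops,
# imported through the Relay PyTorch frontend.
import torch
from tvm import relay

class CreateOps(torch.nn.Module):
    def forward(self, x):
        ones = torch.ones((1, 3))       # tensor creation op
        bias = torch.full((1, 3), 0.5)  # tensor creation op
        return x + ones + bias

scripted = torch.jit.trace(CreateOps().eval(), torch.randn(1, 3))
mod, params = relay.frontend.from_pytorch(scripted, [("input0", (1, 3))])
print(mod["main"])
```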



[GitHub] [incubator-tvm] masahi merged pull request #5347: [PYTORCH]Tensor creation ops support

2020-04-16 Thread GitBox
masahi merged pull request #5347: [PYTORCH]Tensor creation ops support
URL: https://github.com/apache/incubator-tvm/pull/5347
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] kparzysz-quic commented on issue #5346: [Hexagon] Add hexagon_posix.cc to TVM/RT sources in the right place

2020-04-16 Thread GitBox
kparzysz-quic commented on issue #5346: [Hexagon] Add hexagon_posix.cc to 
TVM/RT sources in the right place
URL: https://github.com/apache/incubator-tvm/pull/5346#issuecomment-614982880
 
 
   This was a mistake when copying the code changes over in the previous PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] siju-samuel commented on issue #5355: Restructure imports in tflite frontend.

2020-04-16 Thread GitBox
siju-samuel commented on issue #5355: Restructure imports in tflite frontend.
URL: https://github.com/apache/incubator-tvm/pull/5355#issuecomment-614980027
 
 
   @u99127 I think this PR will create an unnecessary dependency between the tflite 
package and tvm. With this change you would be required to install tflite just to run 
tvm. The existing approach makes sure that the tflite package is needed only to parse 
TFLite models.
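
   For readers following the thread, here is a minimal sketch of the import pattern under discussion (the function below is illustrative, not the actual tflite frontend code): a deferred import keeps `tflite` an optional dependency, whereas a module-level import would make it mandatory for every TVM user.

   ```python
   # Illustrative only; not the actual tflite frontend code.

   # Module-level import (the concern above): importing this module would fail
   # outright whenever the tflite package is not installed.
   # import tflite.Model

   def parse_tflite_model(buf):
       """Deferred import: tflite is only required when a model is parsed."""
       try:
           import tflite.Model  # binds the name `tflite` locally
       except ImportError:
           raise ImportError("The tflite package is required to parse TFLite models")
       return tflite.Model.Model.GetRootAsModel(buf, 0)
   ```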


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5354: [Tutorial] AutoTVM for TFLite model on ARM CPUs.

2020-04-16 Thread GitBox
anijain2305 commented on a change in pull request #5354: [Tutorial] AutoTVM for 
TFLite model on ARM CPUs.
URL: https://github.com/apache/incubator-tvm/pull/5354#discussion_r409919577
 
 

 ##
 File path: tutorials/autotvm/tune_relay_tflite_arm.py
 ##
 @@ -0,0 +1,427 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Auto-tuning a TFLite network for ARM CPUs
+=
+**Author**: `Animesh Jain `_
+
+This is a tutorial on tuning a TFLite model for ARM CPUs. This tutorial is 
largely based on previous
+twp tutorials - `Compile TFLite Models 
`_
 and `Auto-tuning a convolutional network for ARM CPUs 
`_.
+
+Here, we will demonstrate reading a TFLite model, auto-tuning, compiling and 
executing it. While, most of the demonstration will be similar to previous two, 
we will discuss different types of data layouts options for conv2d. We will 
also demonstrate how a TVM user can control the set of configurations options 
while tuning.
+"""
+
+
+"""
+First we use Compile TFLite model tutorial to setup and read a TFLite model. 
The instructions are copied here for user friendliness.
+
+To get started, Flatbuffers and TFLite package needs to be installed as 
prerequisites.
+A quick solution is to install Flatbuffers via pip
+
+.. code-block:: bash
+
+pip install flatbuffers --user
+
+
+To install TFlite packages, you could use our prebuilt wheel:
+
+.. code-block:: bash
+
+# For python3:
+wget 
https://github.com/FrozenGene/tflite/releases/download/v1.13.1/tflite-1.13.1-py3-none-any.whl
+pip3 install -U tflite-1.13.1-py3-none-any.whl --user
+
+# For python2:
+wget 
https://github.com/FrozenGene/tflite/releases/download/v1.13.1/tflite-1.13.1-py2-none-any.whl
+pip install -U tflite-1.13.1-py2-none-any.whl --user
+
+
+or you could generate TFLite package yourself. The steps are the following:
+
+.. code-block:: bash
+
+# Get the flatc compiler.
+# Please refer to https://github.com/google/flatbuffers for details
+# and make sure it is properly installed.
+flatc --version
+
+# Get the TFLite schema.
+wget 
https://raw.githubusercontent.com/tensorflow/tensorflow/r1.13/tensorflow/lite/schema/schema.fbs
+
+# Generate TFLite package.
+flatc --python schema.fbs
+
+# Add current folder (which contains generated tflite module) to 
PYTHONPATH.
+export PYTHONPATH=${PYTHONPATH:+$PYTHONPATH:}$(pwd)
+
+
+Now please check if TFLite package is installed successfully, ``python -c 
"import tflite"``
+
+Below you can find an example on how to compile TFLite model using TVM.
+
+"""
+
+######################################################################
+# First, the necessary imports
+import os
+import tvm
+from tvm import te
+from tvm import autotvm
+from tvm import relay
+import tvm.relay.testing
+from tvm.autotvm.tuner import XGBTuner, GATuner, RandomTuner, GridSearchTuner
+from tvm.contrib.download import download_testdata  # used by load_image below
+from tvm.contrib.util import tempdir
+import tvm.contrib.graph_runtime as runtime
+
+######################################################################
+# Load a test image
+# -----------------
+# A single cat dominates the examples!
+def load_image():
+    from PIL import Image
+    import numpy as np
+
+    image_url = 'https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true'
+    image_path = download_testdata(image_url, 'cat.png', module='data')
+    resized_image = Image.open(image_path).resize((224, 224))
+    image_data = np.asarray(resized_image).astype("float32")
+
+    # Add a dimension to the image so that we have NHWC format layout
+    image_data = np.expand_dims(image_data, axis=0)
+
+    # Preprocess image as described here:
+    # https://github.com/tensorflow/models/blob/edb6ed22a801665946c63d650ab9a0b23d98e1b1/research/slim/preprocessing/inception_preprocessing.py#L243
+    image_data[:, :, :, 0] = 2.0 / 255.0 * image_data[:, :, :, 0] - 1
+    image_data[:, :, :, 1] = 2.0 / 255.0 * image_data[:, :, :, 1] - 1
+    image_data[:, 
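
The quoted hunk is truncated by the mail digest at this point. As a pointer to what the tutorial's introduction above promises, here is a minimal sketch (not part of the quoted diff) of the later tuning loop using the AutoTVM APIs of that era; it assumes `mod` and `params` were obtained from `relay.frontend.from_tflite`, and the target string, trial budget, and log file name are placeholders.

```python
# Illustrative only; not part of the quoted diff. Assumes mod/params came
# from relay.frontend.from_tflite earlier in the tutorial.
from tvm import autotvm, relay
from tvm.autotvm.tuner import XGBTuner

target = "llvm -device=arm_cpu"  # placeholder ARM CPU target
tasks = autotvm.task.extract_from_program(mod["main"], target=target,
                                          params=params,
                                          ops=(relay.op.get("nn.conv2d"),))

for task in tasks:
    tuner = XGBTuner(task, loss_type="rank")
    tuner.tune(n_trial=min(200, len(task.config_space)),  # placeholder budget
               measure_option=autotvm.measure_option(
                   builder=autotvm.LocalBuilder(),
                   runner=autotvm.LocalRunner(number=10)),
               callbacks=[autotvm.callback.log_to_file("tuning.log")])
```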


[GitHub] [incubator-tvm] u99127 commented on issue #5355: Restructure imports in tflite frontend.

2020-04-16 Thread GitBox
u99127 commented on issue #5355: Restructure imports in tflite frontend.
URL: https://github.com/apache/incubator-tvm/pull/5355#issuecomment-614961595
 
 
   hmm, not sure how it passed my test run with linting before I submitted 
this. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] u99127 opened a new pull request #5355: Restructure imports in tflite frontend.

2020-04-16 Thread GitBox
u99127 opened a new pull request #5355: Restructure imports in tflite frontend.
URL: https://github.com/apache/incubator-tvm/pull/5355
 
 
   These Python modules are needed for every TFLite file parsed. This change factors out 
the imports of the common ones that I spotted. Now that the import of operator is 
common, the asserts can be commonized as well (a rough sketch follows below).
   
   It loses 473 lines from the source, which makes for a nice diffstat.
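
   A rough sketch of the kind of restructuring described above (helper and converter names are hypothetical, not the actual frontend code): the repeated per-converter try/except imports collapse into one shared helper, and the isinstance asserts move with them.

   ```python
   # Hypothetical sketch only; names do not match the real tflite frontend.

   def _tflite_operator_cls():
       """Single shared import of the generated tflite Operator class."""
       try:
           from tflite.Operator import Operator
       except ImportError:
           raise ImportError("The tflite package is required to parse TFLite models")
       return Operator

   def convert_conv2d(op):
       # Previously every converter carried its own try/except import and assert;
       # with a common import they can be written once and reused.
       assert isinstance(op, _tflite_operator_cls())
       ...
   ```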
   
   @FrozenGene , could you please review this. 
   
   Thanks,
   Ramana
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] anijain2305 opened a new pull request #5354: [Tutorial] AutoTVM for TFLite model on ARM CPUs.

2020-04-16 Thread GitBox
anijain2305 opened a new pull request #5354: [Tutorial] AutoTVM for TFLite 
model on ARM CPUs.
URL: https://github.com/apache/incubator-tvm/pull/5354
 
 
   Discussion: 
https://discuss.tvm.ai/t/topi-using-x86-schedules-for-arm-conv2d/6365


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen merged pull request #5349: [RUNTIME][CRT]Compilation warnings fixed for 32bit and 64bit compilation

2020-04-16 Thread GitBox
tqchen merged pull request #5349: [RUNTIME][CRT]Compilation warnings fixed for 
32bit and 64bit compilation
URL: https://github.com/apache/incubator-tvm/pull/5349
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (baff99c -> 0075d8c)

2020-04-16 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from baff99c  enable tsim and fsim for GPU build (#5352)
 add 0075d8c  [CRT]Compilation warnings fixed for 32bit and 64bit 
compilation (#5349)

No new revisions were added by this update.

Summary of changes:
 apps/bundle_deploy/demo_static.c  | 6 +++---
 src/runtime/crt/crt_backend_api.c | 2 +-
 src/runtime/crt/graph_runtime.c   | 4 ++--
 src/runtime/crt/load_json.c   | 1 +
 src/runtime/crt/ndarray.c | 8 
 5 files changed, 11 insertions(+), 10 deletions(-)



[GitHub] [incubator-tvm] liangfu commented on issue #5349: [RUNTIME][CRT]Compilation warnings fixed for 32bit and 64bit compilation

2020-04-16 Thread GitBox
liangfu commented on issue #5349: [RUNTIME][CRT]Compilation warnings fixed for 
32bit and 64bit compilation
URL: https://github.com/apache/incubator-tvm/pull/5349#issuecomment-614952876
 
 
   Thanks for your contribution. I think that if this fixes the issue, we can then 
re-enable the unit test in CI.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, and Rewriter V0

2020-04-16 Thread GitBox
masahi commented on a change in pull request #5231: [POC] Pattern Language, 
Matcher, and Rewriter V0
URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r409902806
 
 

 ##
 File path: src/relay/ir/dataflow_matcher.cc
 ##
 @@ -0,0 +1,424 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/tvm/relay/dataflow_matcher.cc
+ * \brief The dataflow pattern matcher for Relay.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+// Pattern Matcher
+
+
+class DominatorMatcher;
+
+class DFPatternMatcher : public DFPatternFunctor {
+ public:
+  explicit DFPatternMatcher(const Expr& root_expr) : 
expr_graph_(CreateIndexedGraph(root_expr)) {}
+  bool Match(const DFPattern& pattern, const Expr& expr);
+  Map> GetMemo() { return Map>(memo_); }
+ protected:
+  bool VisitDFPattern(const DFPattern& pattern, const Expr& expr) override;
+  bool VisitDFPattern_(const AltPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const AttrPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const CallPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const DominatorPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const ExprPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TupleGetItemPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const TuplePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TypePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const VarPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const WildcardPatternNode* op, const Expr& expr) 
override;
+
+  void ClearMap(size_t watermark);
+  std::unordered_set FindDominated(const 
DFPattern& node);
+  bool FindParent(const Expr& expr,
+  const std::unordered_set& 
dominated_exprs,
+  const DominatorPatternNode* op);
+
+  std::unordered_map, ObjectHash, ObjectEqual> memo_;
+  std::vector matched_nodes_;
+  IndexedGraph expr_graph_;
+  IndexedGraph pattern_graph_;
+  bool memoize_ = true;
+};
+
+bool DFPatternMatcher::Match(const DFPattern& pattern, const Expr& expr) {
+  memo_.clear();
+  matched_nodes_.clear();
+  return VisitDFPattern(pattern, expr);
+}
+
+void DFPatternMatcher::ClearMap(size_t watermark) {
+  for (size_t i = watermark; i < matched_nodes_.size(); ++i) {
+memo_.erase(matched_nodes_[i]);
+  }
+  matched_nodes_.erase(matched_nodes_.begin() + watermark, 
matched_nodes_.end());
+}
+
+bool DFPatternMatcher::VisitDFPattern(const DFPattern& pattern, const Expr& 
expr) {
+  if (memoize_ && memo_.count(pattern)) {
+CHECK_EQ(memo_[pattern].size(), 1);
+return expr.same_as(memo_[pattern][0]);
+  } else {
+auto watermark = matched_nodes_.size();
+auto out = DFPatternFunctor::VisitDFPattern(pattern, expr);
+if (out) {
+  memo_[pattern].push_back(expr);
+  matched_nodes_.push_back(pattern);
+} else {
+  ClearMap(watermark);
+}
+return out;
+  }
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AltPatternNode* op, const Expr& 
expr) {
+  return VisitDFPattern(op->left, expr) || VisitDFPattern(op->right, expr);
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AttrPatternNode* attr_pattern, 
const Expr& expr) {
+  bool matches = false;
+  if (const auto* op_node = expr.as()) {
+Op op = GetRef(op_node);
+auto attributes = attr_pattern->attrs.as()->dict;
+for (auto kv : attributes) {
+  auto attr_name = kv.first;
+  auto attr_value = kv.second;
+  auto op_map = Op::GetAttr(attr_name);
+  if (op_map.count(op)) {
+switch (op_map[op].type_code()) {
+  case kDLInt:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator int64_t();
+}
+break;
+  case kDLFloat:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator double();
+}
+break;
+  case kTVMStr:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator 

[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, and Rewriter V0

2020-04-16 Thread GitBox
masahi commented on a change in pull request #5231: [POC] Pattern Language, 
Matcher, and Rewriter V0
URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r409902404
 
 

 ##
 File path: src/relay/ir/dataflow_matcher.cc
 ##
 @@ -0,0 +1,424 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/tvm/relay/dataflow_matcher.cc
+ * \brief The dataflow pattern matcher for Relay.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+// Pattern Matcher
+
+
+class DominatorMatcher;
+
+class DFPatternMatcher : public DFPatternFunctor {
+ public:
+  explicit DFPatternMatcher(const Expr& root_expr) : 
expr_graph_(CreateIndexedGraph(root_expr)) {}
+  bool Match(const DFPattern& pattern, const Expr& expr);
+  Map> GetMemo() { return Map>(memo_); }
+ protected:
+  bool VisitDFPattern(const DFPattern& pattern, const Expr& expr) override;
+  bool VisitDFPattern_(const AltPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const AttrPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const CallPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const DominatorPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const ExprPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TupleGetItemPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const TuplePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TypePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const VarPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const WildcardPatternNode* op, const Expr& expr) 
override;
+
+  void ClearMap(size_t watermark);
+  std::unordered_set FindDominated(const 
DFPattern& node);
+  bool FindParent(const Expr& expr,
+  const std::unordered_set& 
dominated_exprs,
+  const DominatorPatternNode* op);
+
+  std::unordered_map, ObjectHash, ObjectEqual> memo_;
+  std::vector matched_nodes_;
+  IndexedGraph expr_graph_;
+  IndexedGraph pattern_graph_;
+  bool memoize_ = true;
+};
+
+bool DFPatternMatcher::Match(const DFPattern& pattern, const Expr& expr) {
+  memo_.clear();
+  matched_nodes_.clear();
+  return VisitDFPattern(pattern, expr);
+}
+
+void DFPatternMatcher::ClearMap(size_t watermark) {
+  for (size_t i = watermark; i < matched_nodes_.size(); ++i) {
+memo_.erase(matched_nodes_[i]);
+  }
+  matched_nodes_.erase(matched_nodes_.begin() + watermark, 
matched_nodes_.end());
+}
+
+bool DFPatternMatcher::VisitDFPattern(const DFPattern& pattern, const Expr& 
expr) {
+  if (memoize_ && memo_.count(pattern)) {
+CHECK_EQ(memo_[pattern].size(), 1);
+return expr.same_as(memo_[pattern][0]);
+  } else {
+auto watermark = matched_nodes_.size();
+auto out = DFPatternFunctor::VisitDFPattern(pattern, expr);
+if (out) {
+  memo_[pattern].push_back(expr);
+  matched_nodes_.push_back(pattern);
+} else {
+  ClearMap(watermark);
+}
+return out;
+  }
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AltPatternNode* op, const Expr& 
expr) {
+  return VisitDFPattern(op->left, expr) || VisitDFPattern(op->right, expr);
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AttrPatternNode* attr_pattern, 
const Expr& expr) {
+  bool matches = false;
+  if (const auto* op_node = expr.as()) {
+Op op = GetRef(op_node);
+auto attributes = attr_pattern->attrs.as()->dict;
+for (auto kv : attributes) {
+  auto attr_name = kv.first;
+  auto attr_value = kv.second;
+  auto op_map = Op::GetAttr(attr_name);
+  if (op_map.count(op)) {
+switch (op_map[op].type_code()) {
+  case kDLInt:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator int64_t();
+}
+break;
+  case kDLFloat:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator double();
+}
+break;
+  case kTVMStr:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator 

[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, and Rewriter V0

2020-04-16 Thread GitBox
masahi commented on a change in pull request #5231: [POC] Pattern Language, 
Matcher, and Rewriter V0
URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r409902168
 
 

 ##
 File path: src/relay/ir/dataflow_matcher.cc
 ##
 @@ -0,0 +1,424 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/tvm/relay/dataflow_matcher.cc
+ * \brief The dataflow pattern matcher for Relay.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+// Pattern Matcher
+
+
+class DominatorMatcher;
+
+class DFPatternMatcher : public DFPatternFunctor {
+ public:
+  explicit DFPatternMatcher(const Expr& root_expr) : 
expr_graph_(CreateIndexedGraph(root_expr)) {}
+  bool Match(const DFPattern& pattern, const Expr& expr);
+  Map> GetMemo() { return Map>(memo_); }
+ protected:
+  bool VisitDFPattern(const DFPattern& pattern, const Expr& expr) override;
+  bool VisitDFPattern_(const AltPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const AttrPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const CallPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const DominatorPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const ExprPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TupleGetItemPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const TuplePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TypePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const VarPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const WildcardPatternNode* op, const Expr& expr) 
override;
+
+  void ClearMap(size_t watermark);
+  std::unordered_set FindDominated(const 
DFPattern& node);
+  bool FindParent(const Expr& expr,
+  const std::unordered_set& 
dominated_exprs,
+  const DominatorPatternNode* op);
+
+  std::unordered_map, ObjectHash, ObjectEqual> memo_;
+  std::vector matched_nodes_;
+  IndexedGraph expr_graph_;
+  IndexedGraph pattern_graph_;
+  bool memoize_ = true;
+};
+
+bool DFPatternMatcher::Match(const DFPattern& pattern, const Expr& expr) {
+  memo_.clear();
+  matched_nodes_.clear();
+  return VisitDFPattern(pattern, expr);
+}
+
+void DFPatternMatcher::ClearMap(size_t watermark) {
+  for (size_t i = watermark; i < matched_nodes_.size(); ++i) {
+memo_.erase(matched_nodes_[i]);
+  }
+  matched_nodes_.erase(matched_nodes_.begin() + watermark, 
matched_nodes_.end());
+}
+
+bool DFPatternMatcher::VisitDFPattern(const DFPattern& pattern, const Expr& 
expr) {
+  if (memoize_ && memo_.count(pattern)) {
+CHECK_EQ(memo_[pattern].size(), 1);
+return expr.same_as(memo_[pattern][0]);
+  } else {
+auto watermark = matched_nodes_.size();
+auto out = DFPatternFunctor::VisitDFPattern(pattern, expr);
+if (out) {
+  memo_[pattern].push_back(expr);
+  matched_nodes_.push_back(pattern);
+} else {
+  ClearMap(watermark);
+}
+return out;
+  }
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AltPatternNode* op, const Expr& 
expr) {
+  return VisitDFPattern(op->left, expr) || VisitDFPattern(op->right, expr);
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AttrPatternNode* attr_pattern, 
const Expr& expr) {
+  bool matches = false;
+  if (const auto* op_node = expr.as()) {
+Op op = GetRef(op_node);
+auto attributes = attr_pattern->attrs.as()->dict;
+for (auto kv : attributes) {
+  auto attr_name = kv.first;
+  auto attr_value = kv.second;
+  auto op_map = Op::GetAttr(attr_name);
+  if (op_map.count(op)) {
+switch (op_map[op].type_code()) {
+  case kDLInt:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator int64_t();
+}
+break;
+  case kDLFloat:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator double();
+}
+break;
+  case kTVMStr:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator 

[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, and Rewriter V0

2020-04-16 Thread GitBox
masahi commented on a change in pull request #5231: [POC] Pattern Language, 
Matcher, and Rewriter V0
URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r409900411
 
 

 ##
 File path: src/relay/ir/dataflow_matcher.cc
 ##
 @@ -0,0 +1,424 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/tvm/relay/dataflow_matcher.cc
+ * \brief The dataflow pattern matcher for Relay.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+// Pattern Matcher
+
+
+class DominatorMatcher;
+
+class DFPatternMatcher : public DFPatternFunctor {
+ public:
+  explicit DFPatternMatcher(const Expr& root_expr) : 
expr_graph_(CreateIndexedGraph(root_expr)) {}
+  bool Match(const DFPattern& pattern, const Expr& expr);
+  Map> GetMemo() { return Map>(memo_); }
+ protected:
+  bool VisitDFPattern(const DFPattern& pattern, const Expr& expr) override;
+  bool VisitDFPattern_(const AltPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const AttrPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const CallPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const DominatorPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const ExprPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TupleGetItemPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const TuplePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TypePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const VarPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const WildcardPatternNode* op, const Expr& expr) 
override;
+
+  void ClearMap(size_t watermark);
+  std::unordered_set FindDominated(const 
DFPattern& node);
+  bool FindParent(const Expr& expr,
+  const std::unordered_set& 
dominated_exprs,
+  const DominatorPatternNode* op);
+
+  std::unordered_map, ObjectHash, ObjectEqual> memo_;
+  std::vector matched_nodes_;
+  IndexedGraph expr_graph_;
+  IndexedGraph pattern_graph_;
+  bool memoize_ = true;
+};
+
+bool DFPatternMatcher::Match(const DFPattern& pattern, const Expr& expr) {
+  memo_.clear();
+  matched_nodes_.clear();
+  return VisitDFPattern(pattern, expr);
+}
+
+void DFPatternMatcher::ClearMap(size_t watermark) {
+  for (size_t i = watermark; i < matched_nodes_.size(); ++i) {
+memo_.erase(matched_nodes_[i]);
+  }
+  matched_nodes_.erase(matched_nodes_.begin() + watermark, 
matched_nodes_.end());
+}
+
+bool DFPatternMatcher::VisitDFPattern(const DFPattern& pattern, const Expr& 
expr) {
+  if (memoize_ && memo_.count(pattern)) {
+CHECK_EQ(memo_[pattern].size(), 1);
+return expr.same_as(memo_[pattern][0]);
+  } else {
+auto watermark = matched_nodes_.size();
+auto out = DFPatternFunctor::VisitDFPattern(pattern, expr);
+if (out) {
+  memo_[pattern].push_back(expr);
+  matched_nodes_.push_back(pattern);
+} else {
+  ClearMap(watermark);
+}
+return out;
+  }
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AltPatternNode* op, const Expr& 
expr) {
+  return VisitDFPattern(op->left, expr) || VisitDFPattern(op->right, expr);
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AttrPatternNode* attr_pattern, 
const Expr& expr) {
+  bool matches = false;
+  if (const auto* op_node = expr.as()) {
+Op op = GetRef(op_node);
+auto attributes = attr_pattern->attrs.as()->dict;
+for (auto kv : attributes) {
+  auto attr_name = kv.first;
+  auto attr_value = kv.second;
+  auto op_map = Op::GetAttr(attr_name);
+  if (op_map.count(op)) {
+switch (op_map[op].type_code()) {
+  case kDLInt:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator int64_t();
+}
+break;
+  case kDLFloat:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator double();
+}
+break;
+  case kTVMStr:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator 

[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, and Rewriter V0

2020-04-16 Thread GitBox
masahi commented on a change in pull request #5231: [POC] Pattern Language, 
Matcher, and Rewriter V0
URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r409900186
 
 

 ##
 File path: src/relay/ir/dataflow_matcher.cc
 ##
 @@ -0,0 +1,424 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/tvm/relay/dataflow_matcher.cc
+ * \brief The dataflow pattern matcher for Relay.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+// Pattern Matcher
+
+
+class DominatorMatcher;
+
+class DFPatternMatcher : public DFPatternFunctor {
+ public:
+  explicit DFPatternMatcher(const Expr& root_expr) : 
expr_graph_(CreateIndexedGraph(root_expr)) {}
+  bool Match(const DFPattern& pattern, const Expr& expr);
+  Map> GetMemo() { return Map>(memo_); }
+ protected:
+  bool VisitDFPattern(const DFPattern& pattern, const Expr& expr) override;
+  bool VisitDFPattern_(const AltPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const AttrPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const CallPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const DominatorPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const ExprPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TupleGetItemPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const TuplePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TypePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const VarPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const WildcardPatternNode* op, const Expr& expr) 
override;
+
+  void ClearMap(size_t watermark);
+  std::unordered_set FindDominated(const 
DFPattern& node);
+  bool FindParent(const Expr& expr,
+  const std::unordered_set& 
dominated_exprs,
+  const DominatorPatternNode* op);
+
+  std::unordered_map, ObjectHash, ObjectEqual> memo_;
+  std::vector matched_nodes_;
+  IndexedGraph expr_graph_;
+  IndexedGraph pattern_graph_;
+  bool memoize_ = true;
+};
+
+bool DFPatternMatcher::Match(const DFPattern& pattern, const Expr& expr) {
+  memo_.clear();
+  matched_nodes_.clear();
+  return VisitDFPattern(pattern, expr);
+}
+
+void DFPatternMatcher::ClearMap(size_t watermark) {
+  for (size_t i = watermark; i < matched_nodes_.size(); ++i) {
+memo_.erase(matched_nodes_[i]);
+  }
+  matched_nodes_.erase(matched_nodes_.begin() + watermark, 
matched_nodes_.end());
+}
+
+bool DFPatternMatcher::VisitDFPattern(const DFPattern& pattern, const Expr& 
expr) {
+  if (memoize_ && memo_.count(pattern)) {
+CHECK_EQ(memo_[pattern].size(), 1);
+return expr.same_as(memo_[pattern][0]);
+  } else {
+auto watermark = matched_nodes_.size();
+auto out = DFPatternFunctor::VisitDFPattern(pattern, expr);
+if (out) {
+  memo_[pattern].push_back(expr);
+  matched_nodes_.push_back(pattern);
+} else {
+  ClearMap(watermark);
+}
+return out;
+  }
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AltPatternNode* op, const Expr& 
expr) {
+  return VisitDFPattern(op->left, expr) || VisitDFPattern(op->right, expr);
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AttrPatternNode* attr_pattern, 
const Expr& expr) {
+  bool matches = false;
+  if (const auto* op_node = expr.as()) {
+Op op = GetRef(op_node);
+auto attributes = attr_pattern->attrs.as()->dict;
+for (auto kv : attributes) {
+  auto attr_name = kv.first;
+  auto attr_value = kv.second;
+  auto op_map = Op::GetAttr(attr_name);
+  if (op_map.count(op)) {
+switch (op_map[op].type_code()) {
+  case kDLInt:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator int64_t();
+}
+break;
+  case kDLFloat:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator double();
+}
+break;
+  case kTVMStr:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator 


[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, and Rewriter V0

2020-04-16 Thread GitBox
masahi commented on a change in pull request #5231: [POC] Pattern Language, 
Matcher, and Rewriter V0
URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r409899250
 
 

 ##
 File path: src/relay/ir/dataflow_matcher.cc
 ##
 @@ -0,0 +1,434 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/tvm/relay/dataflow_matcher.cc
+ * \brief The dataflow pattern matcher for Relay.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+// Pattern Matcher
+
+
+class DominatorMatcher;
+
+class DFPatternMatcher : public DFPatternFunctor {
+ public:
+  explicit DFPatternMatcher(const Expr& root_expr) : 
expr_graph_(CreateIndexedGraph(root_expr)) {}
+  bool Match(const DFPattern& pattern, const Expr& expr);
+  Map> GetMemo() { return Map>(memo_); }
+ protected:
 
 Review comment:
   Add a new line before `protected`. `clang-format` can take care of this, including 
removing the blank line at L35, but unfortunately it doesn't seem to add a new 
line between functions automatically.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] kparzysz-quic commented on issue #5353: [RUNTIME] FastRPC interface for Hexagon runtime

2020-04-16 Thread GitBox
kparzysz-quic commented on issue #5353: [RUNTIME] FastRPC interface for Hexagon 
runtime
URL: https://github.com/apache/incubator-tvm/pull/5353#issuecomment-614931719
 
 
   @FrozenGene 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] kparzysz-quic opened a new pull request #5353: [RUNTIME] FastRPC interface for Hexagon runtime

2020-04-16 Thread GitBox
kparzysz-quic opened a new pull request #5353: [RUNTIME] FastRPC interface for 
Hexagon runtime
URL: https://github.com/apache/incubator-tvm/pull/5353
 
 
   Co-authored-by: Ravishankar Kolachana 
   Co-authored-by: Krzysztof Parzyszek 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics closed issue #5351: [Jenkinsfile] Should we include libvta_tsim.so at the top level

2020-04-16 Thread GitBox
zhiics closed issue #5351: [Jenkinsfile] Should we include libvta_tsim.so at 
the top level
URL: https://github.com/apache/incubator-tvm/issues/5351
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on issue #5351: [Jenkinsfile] Should we include libvta_tsim.so at the top level

2020-04-16 Thread GitBox
zhiics commented on issue #5351: [Jenkinsfile] Should we include libvta_tsim.so 
at the top level
URL: https://github.com/apache/incubator-tvm/issues/5351#issuecomment-614929802
 
 
   Closed by #5352.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #5231: [POC] Pattern Language, Matcher, and Rewriter V0

2020-04-16 Thread GitBox
mbrookhart commented on a change in pull request #5231: [POC] Pattern Language, 
Matcher, and Rewriter V0
URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r409879725
 
 

 ##
 File path: src/relay/ir/dataflow_matcher.cc
 ##
 @@ -0,0 +1,440 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/tvm/relay/dataflow_matcher.cc
+ * \brief The dataflow pattern matcher for Relay.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+// Pattern Matcher
+
+
+class DominatorMatcher;
+
+class DFPatternMatcher : public DFPatternFunctor {
+ public:
+  explicit DFPatternMatcher(const Expr& root_expr) : 
expr_graph_(CreateIndexedGraph(root_expr)) {}
+  bool Match(const DFPattern& pattern, const Expr& expr);
+
+ protected:
+  bool VisitDFPattern(const DFPattern& pattern, const Expr& expr) override;
+  bool VisitDFPattern_(const AltPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const AttrPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const CallPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const DominatorPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const ExprPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TupleGetItemPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const TuplePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TypePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const VarPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const WildcardPatternNode* op, const Expr& expr) 
override;
+
+  void ClearMap(size_t watermark);
+  std::unordered_map memo_;
+  std::vector matched_nodes_;
+  IndexedGraph expr_graph_;
+  friend DominatorMatcher;
+};
+
+bool DFPatternMatcher::Match(const DFPattern& pattern, const Expr& expr) {
+  memo_.clear();
+  matched_nodes_.clear();
+  return VisitDFPattern(pattern, expr);
+}
+
+void DFPatternMatcher::ClearMap(size_t watermark) {
+  for (size_t i = watermark; i < matched_nodes_.size(); ++i) {
+memo_.erase(matched_nodes_[i]);
+  }
+  matched_nodes_.erase(matched_nodes_.begin() + watermark, 
matched_nodes_.end());
+}
+
+bool DFPatternMatcher::VisitDFPattern(const DFPattern& pattern, const Expr& 
expr) {
+  if (memo_.count(pattern)) {
+return expr.same_as(memo_[pattern]);
+  } else {
+auto watermark = matched_nodes_.size();
+auto out = DFPatternFunctor::VisitDFPattern(pattern, expr);
+if (out) {
+  memo_[pattern] = expr;
+  matched_nodes_.push_back(pattern);
+} else {
+  ClearMap(watermark);
+}
+return out;
+  }
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AltPatternNode* op, const Expr& 
expr) {
+  return VisitDFPattern(op->left, expr) || VisitDFPattern(op->right, expr);
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AttrPatternNode* attr_pattern, 
const Expr& expr) {
+  bool matches = false;
+  if (const auto* op_node = expr.as()) {
+Op op = GetRef(op_node);
+auto attributes = attr_pattern->attrs.as()->dict;
+for (auto kv : attributes) {
+  auto attr_name = kv.first;
+  auto attr_value = kv.second;
+  auto op_map = Op::GetAttr(attr_name);
+  if (op_map.count(op)) {
+switch (op_map[op].type_code()) {
+  case kDLInt:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator int64_t();
+}
+break;
+  case kDLFloat:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator double();
+}
+break;
+  case kTVMStr:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator std::string();
+}
+break;
+  default:
+throw "Unsupported type";
+}
+  }
+}
+  }
+  return matches;
+}
+
+Array reverse(const Array& args) {
+  Array new_args;
+  for (auto it = args.rbegin(); it != args.rend(); ++it) {
+new_args.push_back(*it);
+  }
+  return new_args;
+}
+
+bool 

[GitHub] [incubator-tvm] tqchen merged pull request #5352: [CI] Enable tsim and fsim for GPU build to avoid pack_lib error

2020-04-16 Thread GitBox
tqchen merged pull request #5352: [CI] Enable tsim and fsim for GPU build to 
avoid pack_lib error
URL: https://github.com/apache/incubator-tvm/pull/5352
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (c9cdddd -> baff99c)

2020-04-16 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from c9cdddd  [BYOC][FIX] Fix typo in "default" (#5348)
 add baff99c  enable tsim and fsim for GPU build (#5352)

No new revisions were added by this update.

Summary of changes:
 tests/scripts/task_config_build_gpu.sh | 2 ++
 1 file changed, 2 insertions(+)



[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, and Rewriter V0

2020-04-16 Thread GitBox
masahi commented on a change in pull request #5231: [POC] Pattern Language, 
Matcher, and Rewriter V0
URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r409842978
 
 

 ##
 File path: src/relay/ir/dataflow_matcher.cc
 ##
 @@ -0,0 +1,440 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/tvm/relay/dataflow_matcher.cc
+ * \brief The dataflow pattern matcher for Relay.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+// Pattern Matcher
+
+
+class DominatorMatcher;
+
+class DFPatternMatcher : public DFPatternFunctor {
+ public:
+  explicit DFPatternMatcher(const Expr& root_expr) : 
expr_graph_(CreateIndexedGraph(root_expr)) {}
+  bool Match(const DFPattern& pattern, const Expr& expr);
+
+ protected:
+  bool VisitDFPattern(const DFPattern& pattern, const Expr& expr) override;
+  bool VisitDFPattern_(const AltPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const AttrPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const CallPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const DominatorPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const ExprPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TupleGetItemPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const TuplePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TypePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const VarPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const WildcardPatternNode* op, const Expr& expr) 
override;
+
+  void ClearMap(size_t watermark);
+  std::unordered_map memo_;
+  std::vector matched_nodes_;
+  IndexedGraph expr_graph_;
+  friend DominatorMatcher;
+};
+
+bool DFPatternMatcher::Match(const DFPattern& pattern, const Expr& expr) {
+  memo_.clear();
+  matched_nodes_.clear();
+  return VisitDFPattern(pattern, expr);
+}
+
+void DFPatternMatcher::ClearMap(size_t watermark) {
+  for (size_t i = watermark; i < matched_nodes_.size(); ++i) {
+memo_.erase(matched_nodes_[i]);
+  }
+  matched_nodes_.erase(matched_nodes_.begin() + watermark, 
matched_nodes_.end());
+}
+
+bool DFPatternMatcher::VisitDFPattern(const DFPattern& pattern, const Expr& 
expr) {
+  if (memo_.count(pattern)) {
+return expr.same_as(memo_[pattern]);
+  } else {
+auto watermark = matched_nodes_.size();
+auto out = DFPatternFunctor::VisitDFPattern(pattern, expr);
+if (out) {
+  memo_[pattern] = expr;
+  matched_nodes_.push_back(pattern);
+} else {
+  ClearMap(watermark);
+}
+return out;
+  }
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AltPatternNode* op, const Expr& 
expr) {
+  return VisitDFPattern(op->left, expr) || VisitDFPattern(op->right, expr);
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AttrPatternNode* attr_pattern, 
const Expr& expr) {
+  bool matches = false;
+  if (const auto* op_node = expr.as()) {
+Op op = GetRef(op_node);
+auto attributes = attr_pattern->attrs.as()->dict;
+for (auto kv : attributes) {
+  auto attr_name = kv.first;
+  auto attr_value = kv.second;
+  auto op_map = Op::GetAttr(attr_name);
+  if (op_map.count(op)) {
+switch (op_map[op].type_code()) {
+  case kDLInt:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator int64_t();
+}
+break;
+  case kDLFloat:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator double();
+}
+break;
+  case kTVMStr:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator std::string();
+}
+break;
+  default:
+throw "Unsupported type";
+}
+  }
+}
+  }
+  return matches;
+}
+
+Array reverse(const Array& args) {
+  Array new_args;
+  for (auto it = args.rbegin(); it != args.rend(); ++it) {
+new_args.push_back(*it);
+  }
+  return new_args;
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const 

[GitHub] [incubator-tvm] tqchen merged pull request #5348: [BYOC][FIX] Fix typo in "default"

2020-04-16 Thread GitBox
tqchen merged pull request #5348: [BYOC][FIX] Fix typo in "default"
URL: https://github.com/apache/incubator-tvm/pull/5348
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (37bd812 -> c9cdddd)

2020-04-16 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 37bd812  [RUNTIME][CRT] support DLTensor whose ndim == 0 (#5344)
 add c9cdddd  [BYOC][FIX] Fix typo in "default" (#5348)

No new revisions were added by this update.

Summary of changes:
 src/relay/transforms/annotate_target.cc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[GitHub] [incubator-tvm] zhiics opened a new pull request #5352: [CI] Enable tsim and fsim for GPU build to avoid pack_lib error

2020-04-16 Thread GitBox
zhiics opened a new pull request #5352: [CI] Enable tsim and fsim for GPU build 
to avoid pack_lib error
URL: https://github.com/apache/incubator-tvm/pull/5352
 
 
   #5351 
   
   @tqchen 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen merged pull request #5344: [RUNTIME][CRT] scalar's ndim is 0

2020-04-16 Thread GitBox
tqchen merged pull request #5344: [RUNTIME][CRT] scalar's ndim is 0
URL: https://github.com/apache/incubator-tvm/pull/5344
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (84d1eec -> 37bd812)

2020-04-16 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 84d1eec  [RELAY][BYOC] Register pattern tables from external codegens 
(#5262)
 add 37bd812  [RUNTIME][CRT] support DLTensor whose ndim == 0 (#5344)

No new revisions were added by this update.

Summary of changes:
 src/runtime/crt/ndarray.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)



[GitHub] [incubator-tvm] icemelon9 commented on issue #5319: [cuDNN] Add cuDNN grouped convolution support

2020-04-16 Thread GitBox
icemelon9 commented on issue #5319: [cuDNN] Add cuDNN grouped convolution 
support
URL: https://github.com/apache/incubator-tvm/pull/5319#issuecomment-614845264
 
 
   @wpan11nv The test cases for strategy mostly reside in the relay op tests. 
When an op is compiled for the cuda target, it will use the strategies defined in 
`cuda.py`, though that won't cover all the cases in the strategy.
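   For context, a minimal sketch (illustrative op, shapes, and names; running it 
needs a CUDA-enabled TVM build) of how a relay-level test ends up exercising the 
cuda strategy: compiling any conv2d for the "cuda" target routes through 
`conv2d_strategy_cuda` in `cuda.py`.

import tvm
from tvm import relay

# Illustrative conv2d; compiling it for "cuda" consults the cuda strategy.
data = relay.var("data", shape=(1, 16, 32, 32), dtype="float32")
weight = relay.var("weight", shape=(32, 16, 3, 3), dtype="float32")
out = relay.nn.conv2d(data, weight, padding=(1, 1), kernel_size=(3, 3), channels=32)
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))

with tvm.transform.PassContext(opt_level=3):
    # Strategy selection (and therefore cuda.py) happens during this build.
    lib = relay.build(mod, target="cuda")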


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #5319: [cuDNN] Add cuDNN grouped convolution support

2020-04-16 Thread GitBox
icemelon9 commented on a change in pull request #5319: [cuDNN] Add cuDNN 
grouped convolution support
URL: https://github.com/apache/incubator-tvm/pull/5319#discussion_r409790621
 
 

 ##
 File path: python/tvm/relay/op/strategy/cuda.py
 ##
 @@ -91,6 +91,9 @@ def schedule_lrn_cuda(attrs, outs, target):
 @conv2d_strategy.register(["cuda", "gpu"])
 def conv2d_strategy_cuda(attrs, inputs, out_type, target):
 """conv2d cuda strategy"""
+if attrs.data_layout == "NCHW":
+raise ValueError("HERE")
 
 Review comment:
   why raise the error here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics edited a comment on issue #5351: [Jenkinsfile] Should we include libvta_tsim.so at the top level

2020-04-16 Thread GitBox
zhiics edited a comment on issue #5351: [Jenkinsfile] Should we include 
libvta_tsim.so at the top level
URL: https://github.com/apache/incubator-tvm/issues/5351#issuecomment-614841035
 
 
   I see. Thanks. Then should we add `set(USE_VTA_TSIM/FSIM, ON)` in 
build_gpu.sh? We see errors when packing the lib for GPU.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on issue #5351: [Jenkinsfile] Should we include libvta_tsim.so at the top level

2020-04-16 Thread GitBox
zhiics commented on issue #5351: [Jenkinsfile] Should we include libvta_tsim.so 
at the top level
URL: https://github.com/apache/incubator-tvm/issues/5351#issuecomment-614841035
 
 
   I see. Thanks. Then should we add `set(USE_VTA_TSIM/FSIM, ON)` in 
build_gpu.sh? We see errors when packing the lib.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on issue #5351: [Jenkinsfile] Should we include libvta_tsim.so at the top level

2020-04-16 Thread GitBox
tqchen commented on issue #5351: [Jenkinsfile] Should we include libvta_tsim.so 
at the top level
URL: https://github.com/apache/incubator-tvm/issues/5351#issuecomment-614839000
 
 
   we still need them for building the VTA tutorials, which happens in the last 
docs stage.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics opened a new issue #5351: [Jenkinsfile] Should we include libvta_tsim.so at the top level

2020-04-16 Thread GitBox
zhiics opened a new issue #5351: [Jenkinsfile] Should we include libvta_tsim.so 
at the top level
URL: https://github.com/apache/incubator-tvm/issues/5351
 
 
   @tqchen @tmoreau89 
   
   Should we include libvta_tsim.so and libvta_fsim.so here?
   
   
https://github.com/apache/incubator-tvm/blob/84d1eec39a10c559cb659d3f411cacce08cfdaff/Jenkinsfile#L56-L57
   
   it looks like we should turn them on in the CPU script, but not in the GPU one
   
https://github.com/apache/incubator-tvm/blob/84d1eec39a10c559cb659d3f411cacce08cfdaff/tests/scripts/task_config_build_cpu.sh#L39-L40
   
   However, we will use them when we build for GPU here
   
https://github.com/apache/incubator-tvm/blob/84d1eec39a10c559cb659d3f411cacce08cfdaff/Jenkinsfile#L148
   
   CC @trevor-m 
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #5319: [cuDNN] Add cuDNN grouped convolution support

2020-04-16 Thread GitBox
icemelon9 commented on a change in pull request #5319: [cuDNN] Add cuDNN 
grouped convolution support
URL: https://github.com/apache/incubator-tvm/pull/5319#discussion_r409777883
 
 

 ##
 File path: topi/python/topi/cuda/conv2d.py
 ##
 @@ -67,7 +67,7 @@ def _callback(op):
 
 @autotvm.register_topi_compute("conv2d_cudnn.cuda")
 def conv2d_cudnn(cfg, data, kernel, strides, padding, dilation, layout='NCHW',
- out_dtype='float32'):
+ out_dtype='float32', groups=1):
 
 Review comment:
   That's just my suggestion. I feel `groups` is more common than `out_dtype`. 
Another reason is that `group_conv2d` also puts `groups` before `out_dtype`. 
It'll be more consistent.
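   To make the suggestion concrete, a signature sketch of the ordering being 
proposed here (illustration only, not necessarily the exact signature that was 
finally merged):

# Signature sketch only (ellipsis body): the parameter order suggested in this review.
def conv2d_cudnn(cfg, data, kernel, strides, padding, dilation, layout='NCHW',
                 groups=1, out_dtype='float32'):
    # `groups` placed before `out_dtype`, matching group_conv2d's parameter order.
    ...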


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] anijain2305 opened a new pull request #5350: [TOPI-ARM] Do not alter layout if layout is NHWC

2020-04-16 Thread GitBox
anijain2305 opened a new pull request #5350: [TOPI-ARM] Do not alter layout if 
layout is NHWC
URL: https://github.com/apache/incubator-tvm/pull/5350
 
 
   @icemelon9 I think we missed this piece while working on op strategy.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5349: [RUNTIME][CRT]Compilation warnings fixed for 32bit and 64bit compilation

2020-04-16 Thread GitBox
siju-samuel opened a new pull request #5349: [RUNTIME][CRT]Compilation warnings 
fixed for 32bit and 64bit compilation
URL: https://github.com/apache/incubator-tvm/pull/5349
 
 
   @liangfu Please help to review this PR. TIA
   
   Thanks for contributing to TVM! Please refer to the guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @-mentioning them in the pull request thread.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] comaniac commented on issue #5348: [BYOC][FIX] Fix typo in "default"

2020-04-16 Thread GitBox
comaniac commented on issue #5348: [BYOC][FIX] Fix typo in "default"
URL: https://github.com/apache/incubator-tvm/pull/5348#issuecomment-614780555
 
 
   Thanks for the fix. Apparently our unit tests didn't cover this case. Would 
you be able to improve the unit test a bit to cover it? 
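   As a starting point, a rough sketch of such a regression test (hypothetical 
test name; it assumes `tanh` is not registered for the dnnl annotator, so it 
receives the default annotation that PartitionGraph should strip):

import tvm
from tvm import relay
from tvm.relay import transform
from tvm.relay.op.contrib import dnnl  # import for its registration side effect


def test_default_annotations_are_removed():
    x = relay.var("x", shape=(1, 16), dtype="float32")
    y = relay.tanh(x)  # assumed unsupported by dnnl -> annotated as "default"
    mod = tvm.IRModule.from_expr(relay.Function([x], y))

    mod = transform.AnnotateTarget("dnnl")(mod)
    mod = transform.PartitionGraph()(mod)

    # With the typo fixed, no compiler_begin/compiler_end annotations remain.
    text = str(mod)
    assert "compiler_begin" not in text and "compiler_end" not in text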


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #5231: [POC] Pattern Language, Matcher, and Rewriter V0

2020-04-16 Thread GitBox
mbrookhart commented on a change in pull request #5231: [POC] Pattern Language, 
Matcher, and Rewriter V0
URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r409681253
 
 

 ##
 File path: src/relay/ir/dataflow_matcher.cc
 ##
 @@ -0,0 +1,440 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/tvm/relay/dataflow_matcher.cc
+ * \brief The dataflow pattern matcher for Relay.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+// Pattern Matcher
+
+
+class DominatorMatcher;
+
+class DFPatternMatcher : public DFPatternFunctor {
+ public:
+  explicit DFPatternMatcher(const Expr& root_expr) : 
expr_graph_(CreateIndexedGraph(root_expr)) {}
+  bool Match(const DFPattern& pattern, const Expr& expr);
+
+ protected:
+  bool VisitDFPattern(const DFPattern& pattern, const Expr& expr) override;
+  bool VisitDFPattern_(const AltPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const AttrPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const CallPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const DominatorPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const ExprPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TupleGetItemPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const TuplePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TypePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const VarPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const WildcardPatternNode* op, const Expr& expr) 
override;
+
+  void ClearMap(size_t watermark);
+  std::unordered_map memo_;
+  std::vector matched_nodes_;
+  IndexedGraph expr_graph_;
+  friend DominatorMatcher;
+};
+
+bool DFPatternMatcher::Match(const DFPattern& pattern, const Expr& expr) {
+  memo_.clear();
+  matched_nodes_.clear();
+  return VisitDFPattern(pattern, expr);
+}
+
+void DFPatternMatcher::ClearMap(size_t watermark) {
+  for (size_t i = watermark; i < matched_nodes_.size(); ++i) {
+memo_.erase(matched_nodes_[i]);
+  }
+  matched_nodes_.erase(matched_nodes_.begin() + watermark, 
matched_nodes_.end());
+}
+
+bool DFPatternMatcher::VisitDFPattern(const DFPattern& pattern, const Expr& 
expr) {
+  if (memo_.count(pattern)) {
+return expr.same_as(memo_[pattern]);
+  } else {
+auto watermark = matched_nodes_.size();
+auto out = DFPatternFunctor::VisitDFPattern(pattern, expr);
+if (out) {
+  memo_[pattern] = expr;
+  matched_nodes_.push_back(pattern);
+} else {
+  ClearMap(watermark);
+}
+return out;
+  }
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AltPatternNode* op, const Expr& 
expr) {
+  return VisitDFPattern(op->left, expr) || VisitDFPattern(op->right, expr);
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AttrPatternNode* attr_pattern, 
const Expr& expr) {
+  bool matches = false;
+  if (const auto* op_node = expr.as()) {
+Op op = GetRef(op_node);
+auto attributes = attr_pattern->attrs.as()->dict;
+for (auto kv : attributes) {
+  auto attr_name = kv.first;
+  auto attr_value = kv.second;
+  auto op_map = Op::GetAttr(attr_name);
+  if (op_map.count(op)) {
+switch (op_map[op].type_code()) {
+  case kDLInt:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator int64_t();
+}
+break;
+  case kDLFloat:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator double();
+}
+break;
+  case kTVMStr:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator std::string();
+}
+break;
+  default:
+throw "Unsupported type";
+}
+  }
+}
+  }
+  return matches;
+}
+
+Array reverse(const Array& args) {
+  Array new_args;
+  for (auto it = args.rbegin(); it != args.rend(); ++it) {
+new_args.push_back(*it);
+  }
+  return new_args;
+}
+
+bool 
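   (The diff quoted above is truncated by the archive.) For readers following 
the review, a hedged sketch of how a matcher like this might be driven from the 
Python side, using helper names assumed from this PR (`is_op`, `wildcard`, and 
`|` for AltPattern); these did not exist in mainline TVM at the time of this 
thread:

from tvm import relay
from tvm.relay.dataflow_pattern import is_op, wildcard  # names assumed from this PR

x = relay.var("x", shape=(1, 8), dtype="float32")

# An AltPattern of two CallPatterns, each taking a WildcardPattern input.
pat = is_op("nn.relu")(wildcard()) | is_op("tanh")(wildcard())

assert pat.match(relay.nn.relu(x))   # matches the left alternative
assert pat.match(relay.tanh(x))      # matches the right alternative
assert not pat.match(relay.exp(x))   # neither alternative matches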

[GitHub] [incubator-tvm] mbaret commented on issue #5348: [BYOC][FIX] Fix typo in "default"

2020-04-16 Thread GitBox
mbaret commented on issue #5348: [BYOC][FIX] Fix typo in "default"
URL: https://github.com/apache/incubator-tvm/pull/5348#issuecomment-614723215
 
 
   cc @comaniac @zhiics @manupa-arm 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] mbaret opened a new pull request #5348: [BYOC][FIX] Fix typo in "default"

2020-04-16 Thread GitBox
mbaret opened a new pull request #5348: [BYOC][FIX] Fix typo in "default"
URL: https://github.com/apache/incubator-tvm/pull/5348
 
 
   Default annotations were incorrectly being named 'defualt', which resulted in 
them not being removed in PartitionGraph.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] mbaret commented on issue #5345: [RELAY] Move frontend utils

2020-04-16 Thread GitBox
mbaret commented on issue #5345: [RELAY] Move frontend utils
URL: https://github.com/apache/incubator-tvm/pull/5345#issuecomment-614706054
 
 
   cc @anijain2305 @zhiics 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5347: [PYTORCH]Tensor creation ops support

2020-04-16 Thread GitBox
siju-samuel opened a new pull request #5347: [PYTORCH]Tensor creation ops 
support
URL: https://github.com/apache/incubator-tvm/pull/5347
 
 
   - ones
   - ones_like
   - zeros
   - zeros_like
   - full
   - full_like
   - linspace
   
   @masahi Please help review and merge this PR. Thanks.
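   For reviewers, a minimal sketch (illustrative module and shapes) of how one 
of these creation ops flows through the frontend; it assumes the `from_pytorch` 
entry point that takes a traced module plus a list of (name, shape) pairs:

import torch
from tvm import relay


class OnesLike(torch.nn.Module):
    def forward(self, x):
        # torch.ones_like is one of the creation ops covered by this PR.
        return torch.ones_like(x) + x


inp = torch.rand(1, 3, 8, 8)
traced = torch.jit.trace(OnesLike().eval(), inp)

mod, params = relay.frontend.from_pytorch(traced, [("input0", (1, 3, 8, 8))])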


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] kparzysz-quic opened a new pull request #5346: [Hexagon] Add hexagon_posix.cc to TVM/RT sources in the right place

2020-04-16 Thread GitBox
kparzysz-quic opened a new pull request #5346: [Hexagon] Add hexagon_posix.cc 
to TVM/RT sources in the right place
URL: https://github.com/apache/incubator-tvm/pull/5346
 
 
   This file was added before the variable holding the TVM/RT sources was 
initialized, so the initialization overwrote the addition.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] mbaret opened a new pull request #5345: [RELAY] Move frontend utils

2020-04-16 Thread GitBox
mbaret opened a new pull request #5345: [RELAY] Move frontend utils
URL: https://github.com/apache/incubator-tvm/pull/5345
 
 
   The util file currently under frontend is used from outside of frontend (in 
qnn/op/legalizations). This suggests that the file should be pushed up to a 
higher level.
   
   The benefit of this change is that importing qnn no longer also imports all 
the frontends. I ran into this while trying to import qnn from within 
op.contrib.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on issue #5262: [RELAY][BYOC] Register pattern tables from external codegens

2020-04-16 Thread GitBox
masahi commented on issue #5262: [RELAY][BYOC] Register pattern tables from 
external codegens
URL: https://github.com/apache/incubator-tvm/pull/5262#issuecomment-614501472
 
 
   Thanks @mbaret 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi merged pull request #5262: [RELAY][BYOC] Register pattern tables from external codegens

2020-04-16 Thread GitBox
masahi merged pull request #5262: [RELAY][BYOC] Register pattern tables from 
external codegens
URL: https://github.com/apache/incubator-tvm/pull/5262
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (6e36da3 -> 84d1eec)

2020-04-16 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 6e36da3  [TOPI][PYTORCH]Logical & Bitwise operator support (#5341)
 add 84d1eec  [RELAY][BYOC] Register pattern tables from external codegens 
(#5262)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/contrib/__init__.py|  2 +
 python/tvm/relay/op/contrib/dnnl.py| 26 +++-
 .../tvm/relay/op/contrib/register.py   | 48 ++
 tests/python/relay/test_pass_partition_graph.py| 19 ++---
 4 files changed, 51 insertions(+), 44 deletions(-)
 copy topi/python/topi/generic/sort.py => 
python/tvm/relay/op/contrib/register.py (50%)



[GitHub] [incubator-tvm] masahi commented on issue #5341: [TOPI][PYTORCH]Logical & Bitwise operator support

2020-04-16 Thread GitBox
masahi commented on issue #5341: [TOPI][PYTORCH]Logical & Bitwise operator 
support
URL: https://github.com/apache/incubator-tvm/pull/5341#issuecomment-614500334
 
 
   Thanks @siju-samuel 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (cc8cacb -> 6e36da3)

2020-04-16 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from cc8cacb  [DOCS] Bring relay docs to the top-level flat view (#5343)
 add 6e36da3  [TOPI][PYTORCH]Logical & Bitwise operator support (#5341)

No new revisions were added by this update.

Summary of changes:
 docs/api/python/topi.rst  |  2 +
 docs/langref/relay_op.rst |  1 +
 python/tvm/relay/frontend/pytorch.py  | 66 ++-
 python/tvm/relay/op/_tensor.py|  2 +
 python/tvm/relay/op/tensor.py | 17 +
 src/relay/op/tensor/binary.cc |  6 ++
 tests/python/frontend/pytorch/test_forward.py | 95 ++-
 topi/include/topi/broadcast.h | 13 
 topi/python/topi/broadcast.py | 19 ++
 topi/src/broadcast.cc |  1 +
 topi/tests/python/test_topi_broadcast.py  |  2 +
 11 files changed, 222 insertions(+), 2 deletions(-)



[GitHub] [incubator-tvm] masahi merged pull request #5341: [TOPI][PYTORCH]Logical & Bitwise operator support

2020-04-16 Thread GitBox
masahi merged pull request #5341: [TOPI][PYTORCH]Logical & Bitwise operator 
support
URL: https://github.com/apache/incubator-tvm/pull/5341
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, and Rewriter V0

2020-04-16 Thread GitBox
masahi commented on a change in pull request #5231: [POC] Pattern Language, 
Matcher, and Rewriter V0
URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r409322475
 
 

 ##
 File path: src/relay/ir/dataflow_matcher.cc
 ##
 @@ -0,0 +1,421 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/tvm/relay/dataflow_matcher.cc
+ * \brief The dataflow pattern matcher for Relay.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+// Pattern Matcher
+
+class DFPatternMatcher : public DFPatternFunctor {
+ public:
+  bool Match(const DFPattern& pattern, const Expr& expr);
+
+ protected:
+  bool VisitDFPattern(const DFPattern& pattern, const Expr& expr) override;
+  bool VisitDFPattern_(const AltPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const AttrPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const CallPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const DominatorPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const ExprPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TupleGetItemPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const TuplePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TypePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const VarPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const WildcardPatternNode* op, const Expr& expr) 
override;
+
+  void ClearMap(size_t watermark);
+  std::unordered_map memo_;
+  std::vector matched_nodes_;
+  };
+
+bool DFPatternMatcher::Match(const DFPattern& pattern, const Expr& expr) {
+  memo_.clear();
+  matched_nodes_.clear();
+  return VisitDFPattern(pattern, expr);
+}
+
+void DFPatternMatcher::ClearMap(size_t watermark) {
+  for (size_t i = watermark; i < matched_nodes_.size(); ++i) {
+memo_.erase(matched_nodes_[i]);
+  }
+  matched_nodes_.erase(matched_nodes_.begin() + watermark, 
matched_nodes_.end());
+}
+
+bool DFPatternMatcher::VisitDFPattern(const DFPattern& pattern, const Expr& 
expr) {
+  if (memo_.count(pattern)) {
+return expr.same_as(memo_[pattern]);
+  } else {
+auto watermark = matched_nodes_.size();
+auto out = DFPatternFunctor::VisitDFPattern(pattern, expr);
+if (out) {
+  memo_[pattern] = expr;
+  matched_nodes_.push_back(pattern);
+} else {
+  ClearMap(watermark);
+}
+return out;
+  }
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AltPatternNode* op, const Expr& 
expr) {
+  return VisitDFPattern(op->left, expr) || VisitDFPattern(op->right, expr);
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AttrPatternNode* attr_pattern, 
const Expr& expr) {
+  bool matches = false;
+  if (const auto* op_node = expr.as()) {
+Op op = GetRef(op_node);
+auto attributes = attr_pattern->attrs.as()->dict;
+for (auto kv : attributes) {
+  auto attr_name = kv.first;
+  auto attr_value = kv.second;
+  auto op_map = Op::GetAttr(attr_name);
+  if (op_map.count(op)) {
+switch (op_map[op].type_code()) {
+  case kDLInt:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator int64_t();
+}
+break;
+  case kDLFloat:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator double();
+}
+break;
+  case kTVMStr:
+if (auto* val = kv.second.as()) {
+  matches = val->value == op_map[op].operator std::string();
+}
+break;
+  default:
+throw "Unsupported type";
+}
+  }
+}
+  }
+  return matches;
+}
+
+Array reverse(const Array& args) {
+  Array new_args;
+  for (auto it = args.rbegin(); it != args.rend(); ++it) {
+new_args.push_back(*it);
+  }
+  return new_args;
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const CallPatternNode* op, const Expr& 
expr) {
+  // utilities
+  auto get_op_node = [](const CallPatternNode* op) -> const tvm::OpNode* {
+if (op) {
+  if (auto* expr_pattern = 
