[GitHub] [incubator-tvm] icemelon9 merged pull request #5144: [Relay][VM] Memory planner (part 1)

2020-05-14 Thread GitBox


icemelon9 merged pull request #5144:
URL: https://github.com/apache/incubator-tvm/pull/5144


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (a400f82 -> 674f58a)

2020-05-14 Thread haichen
This is an automated email from the ASF dual-hosted git repository.

haichen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from a400f82  [TFLite Runtime] Fix bug and re-enable RPC execution test 
(#5436)
 add 674f58a  [Relay][VM] Memory planner (part 1) (#5144)

No new revisions were added by this update.

Summary of changes:
 include/tvm/runtime/ndarray.h  |   2 +
 include/tvm/runtime/vm.h   |  12 +-
 python/tvm/relay/__init__.py   |   6 +
 python/tvm/relay/expr.py   |   1 +
 python/tvm/relay/op/memory/memory.py   |   7 +-
 python/tvm/relay/transform/__init__.py |   1 -
 python/tvm/relay/transform/memory_alloc.py |  22 +-
 python/tvm/relay/transform/memory_plan.py  | 355 +
 src/relay/backend/vm/compiler.cc   | 138 +---
 src/relay/op/memory/memory.cc  |  31 +-
 src/runtime/ndarray.cc |   3 +
 src/runtime/vm/executable.cc   |  45 +--
 src/runtime/vm/memory_manager.cc   |  24 +-
 src/runtime/vm/vm.cc   |  66 +++-
 tests/python/frontend/onnx/test_forward.py |  11 +-
 ..._pass_memory_alloc.py => test_memory_passes.py} |  51 ++-
 16 files changed, 649 insertions(+), 126 deletions(-)
 create mode 100644 python/tvm/relay/transform/memory_plan.py
 rename tests/python/relay/{test_pass_memory_alloc.py => test_memory_passes.py} 
(62%)



[incubator-tvm] branch master updated: [TFLite Runtime] Fix bug and re-enable RPC execution test (#5436)

2020-05-14 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new a400f82  [TFLite Runtime] Fix bug and re-enable RPC execution test 
(#5436)
a400f82 is described below

commit a400f825281f3c6f0688e8b16deea4ba12ee6bb5
Author: Michal Piszczek 
AuthorDate: Thu May 14 20:16:57 2020 -0700

[TFLite Runtime] Fix bug and re-enable RPC execution test (#5436)
---
 src/runtime/contrib/tflite/tflite_runtime.cc |   8 +-
 src/runtime/contrib/tflite/tflite_runtime.h  |   3 +
 src/runtime/module.cc|   2 +
 tests/python/contrib/test_tflite_runtime.py  | 202 ---
 tests/scripts/task_config_build_cpu.sh   |   3 +
 5 files changed, 135 insertions(+), 83 deletions(-)

diff --git a/src/runtime/contrib/tflite/tflite_runtime.cc 
b/src/runtime/contrib/tflite/tflite_runtime.cc
index 53d7754..8b34e90 100644
--- a/src/runtime/contrib/tflite/tflite_runtime.cc
+++ b/src/runtime/contrib/tflite/tflite_runtime.cc
@@ -93,8 +93,12 @@ DataType TfLiteDType2TVMDType(TfLiteType dtype) {
 void TFLiteRuntime::Init(const std::string& tflite_model_bytes, TVMContext 
ctx) {
   const char* buffer = tflite_model_bytes.c_str();
   size_t buffer_size = tflite_model_bytes.size();
+  // The buffer used to construct the model must be kept alive for
+  // dependent interpreters to be used.
+  flatBuffersBuffer_ = std::unique_ptr<char[]>(new char[buffer_size]);
+  std::memcpy(flatBuffersBuffer_.get(), buffer, buffer_size);
   std::unique_ptr<tflite::FlatBufferModel> model =
-  tflite::FlatBufferModel::BuildFromBuffer(buffer, buffer_size);
+  tflite::FlatBufferModel::BuildFromBuffer(flatBuffersBuffer_.get(), buffer_size);
   tflite::ops::builtin::BuiltinOpResolver resolver;
   // Build interpreter
  TfLiteStatus status = tflite::InterpreterBuilder(*model, resolver)(&interpreter_);
@@ -173,5 +177,7 @@ Module TFLiteRuntimeCreate(const std::string& 
tflite_model_bytes, TVMContext ctx
 TVM_REGISTER_GLOBAL("tvm.tflite_runtime.create").set_body([](TVMArgs args, 
TVMRetValue* rv) {
   *rv = TFLiteRuntimeCreate(args[0], args[1]);
 });
+
+TVM_REGISTER_GLOBAL("target.runtime.tflite").set_body_typed(TFLiteRuntimeCreate);
 }  // namespace runtime
 }  // namespace tvm
diff --git a/src/runtime/contrib/tflite/tflite_runtime.h 
b/src/runtime/contrib/tflite/tflite_runtime.h
index f61f6ee..f3e3bd9 100644
--- a/src/runtime/contrib/tflite/tflite_runtime.h
+++ b/src/runtime/contrib/tflite/tflite_runtime.h
@@ -26,6 +26,7 @@
 #define TVM_RUNTIME_CONTRIB_TFLITE_TFLITE_RUNTIME_H_
 
 #include 
+#include <memory>
 #include 
 #include 
 
@@ -93,6 +94,8 @@ class TFLiteRuntime : public ModuleNode {
*/
   NDArray GetOutput(int index) const;
 
+  // Buffer backing the interpreter's model
+  std::unique_ptr<char[]> flatBuffersBuffer_;
   // TFLite interpreter
  std::unique_ptr<tflite::Interpreter> interpreter_;
   // TVM context
diff --git a/src/runtime/module.cc b/src/runtime/module.cc
index be75ff2..46ef6fa 100644
--- a/src/runtime/module.cc
+++ b/src/runtime/module.cc
@@ -129,6 +129,8 @@ bool RuntimeEnabled(const std::string& target) {
 f_name = "device_api.opencl";
   } else if (target == "mtl" || target == "metal") {
 f_name = "device_api.metal";
+  } else if (target == "tflite") {
+f_name = "target.runtime.tflite";
   } else if (target == "vulkan") {
 f_name = "device_api.vulkan";
   } else if (target == "stackvm") {
diff --git a/tests/python/contrib/test_tflite_runtime.py 
b/tests/python/contrib/test_tflite_runtime.py
index 8c883b0..1b911b7 100644
--- a/tests/python/contrib/test_tflite_runtime.py
+++ b/tests/python/contrib/test_tflite_runtime.py
@@ -14,92 +14,130 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
+import pytest
+
 import tvm
 from tvm import te
 import numpy as np
 from tvm import rpc
 from tvm.contrib import util, tflite_runtime
-# import tensorflow as tf
-# import tflite_runtime.interpreter as tflite
-
-
-def skipped_test_tflite_runtime():
-
-    def create_tflite_model():
-        root = tf.Module()
-        root.const = tf.constant([1., 2.], tf.float32)
-        root.f = tf.function(lambda x: root.const * x)
-
-        input_signature = tf.TensorSpec(shape=[2, ], dtype=tf.float32)
-        concrete_func = root.f.get_concrete_function(input_signature)
-        converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
-        tflite_model = converter.convert()
-        return tflite_model
-
-    def check_local():
-        tflite_fname = "model.tflite"
-        tflite_model = create_tflite_model()
-        temp = util.tempdir()
-        tflite_model_path = temp.relpath(tflite_fname)
-        open(tflite_model_path, 'wb').write(tflite_model)
-
-        # inference via tflite interpreter python apis
-        interpreter = 

[GitHub] [incubator-tvm] tqchen merged pull request #5436: [TFLite Runtime] Fix bug and re-enable RPC execution test

2020-05-14 Thread GitBox


tqchen merged pull request #5436:
URL: https://github.com/apache/incubator-tvm/pull/5436


   







[GitHub] [incubator-tvm] junrushao1994 edited a comment on pull request #5585: [Runtime] Introduce runtime::Array

2020-05-14 Thread GitBox


junrushao1994 edited a comment on pull request #5585:
URL: https://github.com/apache/incubator-tvm/pull/5585#issuecomment-628999732


   Could you help review the PR? Thanks! @tqchen @jwfromm @jroesch @zhiics 
@icemelon9 @yzhliu @wweic







[GitHub] [incubator-tvm] junrushao1994 commented on pull request #5585: [Runtime] Introduce runtime::Array

2020-05-14 Thread GitBox


junrushao1994 commented on pull request #5585:
URL: https://github.com/apache/incubator-tvm/pull/5585#issuecomment-628999732


   Could you help review the PR? Thanks! @tqchen @joshpoll @jroesch @zhiics 
@icemelon9 @yzhliu @wweic







[GitHub] [incubator-tvm] michalpiszczek commented on pull request #5436: [TFLite Runtime] Fix bug and re-enable RPC execution test

2020-05-14 Thread GitBox


michalpiszczek commented on pull request #5436:
URL: https://github.com/apache/incubator-tvm/pull/5436#issuecomment-628945385


   @tmoreau89 @tqchen PTAL, now includes fix to keep alive the interpreter's 
backing buffer







[GitHub] [incubator-tvm] wpan11nv opened a new pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-05-14 Thread GitBox


wpan11nv opened a new pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600


   - Do not use multiple kernels
   
   - Schedule with warp reductions
   
   - Fixed a bug on the lower warp memory pass
   
   - Fixed warp shuffle intrinsics for the nvptx backend.
   
   Signed-off-by: Wei Pan 
   
   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   







[GitHub] [incubator-tvm] masahi commented on a change in pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner

2020-05-14 Thread GitBox


masahi commented on a change in pull request #5231:
URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r425440445



##
File path: include/tvm/relay/dataflow_matcher.h
##
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm/relay/dataflow_matcher.h
+ * \brief A pattern matcher for matching dataflow properties.
+ */
+#ifndef TVM_RELAY_DATAFLOW_MATCHER_H_
+#define TVM_RELAY_DATAFLOW_MATCHER_H_
+
+#include 
+#include 
+
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+class DFPatternCallback;
+/*!
+ * \brief Base type of all dataflow pattern callbacks.
+ * \sa DFPatternCallback
+ */
+class DFPatternCallbackNode : public Object {
+ public:
+  /*! \brief Pattern this callback matches */
+  DFPattern pattern_;
+  /*! \brief Function to call when finding a matched expression */
+  PackedFunc function_;
+
+  void VisitAttrs(tvm::AttrVisitor* v) {}
+
+  static constexpr const char* _type_key = "DFPatternCallbackNode";
+  TVM_DECLARE_BASE_OBJECT_INFO(DFPatternCallbackNode, Object);
+};
+
+/*!
+ * \brief Managed reference to dataflow pattern callbacks.
+ * \sa DFPatternCallbackNode
+ */
+class DFPatternCallback : public ObjectRef {

Review comment:
   I can see these pass functions can be useful for op fusion and BYOC 
related passes.









[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner

2020-05-14 Thread GitBox


mbrookhart commented on a change in pull request #5231:
URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r425433993



##
File path: include/tvm/relay/dataflow_matcher.h
##
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm/relay/dataflow_matcher.h
+ * \brief A pattern matcher for matching dataflow properties.
+ */
+#ifndef TVM_RELAY_DATAFLOW_MATCHER_H_
+#define TVM_RELAY_DATAFLOW_MATCHER_H_
+
+#include 
+#include 
+
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+class DFPatternCallback;
+/*!
+ * \brief Base type of all dataflow pattern callbacks.
+ * \sa DFPatternCallback
+ */
+class DFPatternCallbackNode : public Object {
+ public:
+  /*! \brief Pattern this callback matches */
+  DFPattern pattern_;
+  /*! \brief Function to call when finding a matched expression */
+  PackedFunc function_;
+
+  void VisitAttrs(tvm::AttrVisitor* v) {}
+
+  static constexpr const char* _type_key = "DFPatternCallbackNode";
+  TVM_DECLARE_BASE_OBJECT_INFO(DFPatternCallbackNode, Object);
+};
+
+/*!
+ * \brief Managed reference to dataflow pattern callbacks.
+ * \sa DFPatternCallbackNode
+ */
+class DFPatternCallback : public ObjectRef {

Review comment:
   Something got lost in a refactor. I want users to be able to write 
pattern-based passes in C++, which requires this in a header, but I don't seem 
to have the pass functions exposed. Will fix.









[incubator-tvm] branch master updated (561f0c2 -> 8b8fba9)

2020-05-14 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 561f0c2  [DOCS] Improve document in reflection (#5593)
 add 8b8fba9  Overestimate binary size for microTVM compiled binaries. 
(#5590)

No new revisions were added by this update.

Summary of changes:
 python/tvm/contrib/binutil.py | 22 ++
 1 file changed, 6 insertions(+), 16 deletions(-)



[GitHub] [incubator-tvm] tqchen merged pull request #5590: Overestimate binary size for microTVM compiled binaries.

2020-05-14 Thread GitBox


tqchen merged pull request #5590:
URL: https://github.com/apache/incubator-tvm/pull/5590


   







[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #5231: [POC] Pattern Language, Matcher, Rewriter, and Function Paritioner

2020-05-14 Thread GitBox


mbrookhart commented on a change in pull request #5231:
URL: https://github.com/apache/incubator-tvm/pull/5231#discussion_r42547



##
File path: src/relay/ir/dataflow_matcher.cc
##
@@ -0,0 +1,656 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/tvm/relay/dataflow_matcher.cc
+ * \brief The dataflow pattern matcher for Relay.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "indexed_graph.h"
+
+namespace tvm {
+namespace relay {
+
+// Pattern Matcher
+
+class DominatorMatcher;
+
+class DFPatternMatcher : public DFPatternFunctor<bool(const DFPattern&, const Expr&)> {
+ public:
+  explicit DFPatternMatcher(const Expr& root_expr) : 
expr_graph_(CreateIndexedGraph(root_expr)) {}
+  bool Match(const DFPattern& pattern, const Expr& expr);
+  Map<DFPattern, Array<Expr>> GetMemo() { return Map<DFPattern, Array<Expr>>(memo_); }
+
+ protected:
+  bool VisitDFPattern(const DFPattern& pattern, const Expr& expr) override;
+  bool VisitDFPattern_(const AltPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const AttrPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const CallPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const DominatorPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const ExprPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TupleGetItemPatternNode* op, const Expr& expr) 
override;
+  bool VisitDFPattern_(const TuplePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const TypePatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const VarPatternNode* op, const Expr& expr) override;
+  bool VisitDFPattern_(const WildcardPatternNode* op, const Expr& expr) 
override;
+
+  void ClearMap(size_t watermark);
+  bool MatchesPath(const DominatorPatternNode* op, const Expr& expr);
+  bool DominatesParent(const DominatorPatternNode* op, const Expr& expr);
+
+  std::unordered_map<DFPattern, Array<Expr>, ObjectHash, ObjectEqual> memo_;
+  std::vector<DFPattern> matched_nodes_;
+  IndexedGraph<Expr> expr_graph_;
+  IndexedGraph<DFPattern> pattern_graph_;
+  bool memoize_ = true;
+};
+
+bool DFPatternMatcher::Match(const DFPattern& pattern, const Expr& expr) {
+  memo_.clear();
+  matched_nodes_.clear();
+  return VisitDFPattern(pattern, expr);
+}
+
+void DFPatternMatcher::ClearMap(size_t watermark) {
+  for (size_t i = watermark; i < matched_nodes_.size(); ++i) {
+memo_.erase(matched_nodes_[i]);
+  }
+  matched_nodes_.erase(matched_nodes_.begin() + watermark, 
matched_nodes_.end());
+}
+
+bool DFPatternMatcher::VisitDFPattern(const DFPattern& pattern, const Expr& 
expr) {
+  if (memoize_ && memo_.count(pattern)) {
+CHECK_EQ(memo_[pattern].size(), 1);
+return expr.same_as(memo_[pattern][0]);
+  } else {
+auto watermark = matched_nodes_.size();
+auto out = DFPatternFunctor::VisitDFPattern(pattern, expr);
+if (out) {
+  memo_[pattern].push_back(expr);
+  matched_nodes_.push_back(pattern);
+} else {
+  ClearMap(watermark);
+}
+return out;
+  }
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AltPatternNode* op, const Expr& 
expr) {
+  return VisitDFPattern(op->left, expr) || VisitDFPattern(op->right, expr);
+}
+
+bool DFPatternMatcher::VisitDFPattern_(const AttrPatternNode* attr_pattern, 
const Expr& expr) {
+  bool matches = false;
+  if (const auto* op_node = expr.as<OpNode>()) {
+Op op = GetRef<Op>(op_node);
+auto attributes = attr_pattern->attrs.as<DictAttrsNode>()->dict;
+for (auto kv : attributes) {
+  auto attr_name = kv.first;
+  auto attr_value = kv.second;
+  auto op_map = Op::GetAttr<TVMRetValue>(attr_name);
+  if (op_map.count(op)) {
+switch (op_map[op].type_code()) {
+  case kDLInt:
+if (auto* val = kv.second.as<IntImmNode>()) {
+  matches = val->value == op_map[op].operator int64_t();
+}
+break;
+  case kDLFloat:
+if (auto* val = kv.second.as<FloatImmNode>()) {
+  matches = val->value == op_map[op].operator double();
+}
+break;
+  case kTVMStr:
+if (auto* val = kv.second.as<tir::StringImmNode>()) {
+  matches = val->value == op_map[op].operator std::string();
+}
+break;
+  default:
+

[GitHub] [incubator-tvm] areusch opened a new pull request #5599: Fix TVMArray layout on device

2020-05-14 Thread GitBox


areusch opened a new pull request #5599:
URL: https://github.com/apache/incubator-tvm/pull/5599


   Some differences existed between the checked-in version and DLTensor, which 
were only seen when trying to use e.g. strides from hand-written PackedFuncs.







[GitHub] [incubator-tvm] areusch commented on pull request #5590: Overestimate binary size for microTVM compiled binaries.

2020-05-14 Thread GitBox


areusch commented on pull request #5590:
URL: https://github.com/apache/incubator-tvm/pull/5590#issuecomment-628867117


   fixed







[incubator-tvm-site] branch master updated: WebGPU blog (#8)

2020-05-14 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git


The following commit(s) were added to refs/heads/master by this push:
 new d60217b  WebGPU blog (#8)
d60217b is described below

commit d60217bfad6507f3997718253c74cb5e4143b236
Author: Tianqi Chen 
AuthorDate: Thu May 14 10:59:24 2020 -0700

WebGPU blog (#8)
---
 ...g-machine-learning-to-webassembly-and-webgpu.md |  88 +
 images/webgpu/ml-compiler-flow.png | Bin 0 -> 197380 bytes
 images/webgpu/tvm-wasm-stack.png   | Bin 0 -> 412428 bytes
 images/webgpu/webgpu-mobilenet-perf.png| Bin 0 -> 90966 bytes
 4 files changed, 88 insertions(+)

diff --git 
a/_posts/2020-05-14-compiling-machine-learning-to-webassembly-and-webgpu.md 
b/_posts/2020-05-14-compiling-machine-learning-to-webassembly-and-webgpu.md
new file mode 100644
index 000..a24fae6
--- /dev/null
+++ b/_posts/2020-05-14-compiling-machine-learning-to-webassembly-and-webgpu.md
@@ -0,0 +1,88 @@
+---
+layout: post
+title: 'Compiling Machine Learning to WASM and WebGPU with Apache TVM'
+author: Tianqi Chen and Jared Roesch, OctoML
+date: 2020-05-14
+---
+
+**TLDR**
+
+We introduced support for WASM and WebGPU to the Apache TVM deep learning 
compiler. Our experiments show that TVM's WebGPU backend can get **close to 
native GPU performance** when deploying models to the web.
+
+{:center: style="text-align: center"}
+![image](/images/webgpu/webgpu-mobilenet-perf.png){: width="55%"}
+{:center}
+
+## Introduction
+
+Computing is one of the pillars of modern machine learning applications. The 
introduction of the GPU to accelerate deep learning workloads has increased the 
rate of progress dramatically. Given the growing requirement to deploy machine 
learning everywhere, the browser becomes a natural place to deploy intelligent 
applications.
+
+While TensorFlow.js and ONNX.js are existing efforts to bring machine learning 
to the browser, there still exist non-trivial gaps in performance between the 
web versions and native ones. One of the many reasons is the lack of standard 
and performant access to the GPU on the web. WebGL lacks important features 
such as compute shaders and generic storage buffers that are necessary for high 
performance deep learning.
+
+WebGPU is the upcoming standard for next generation web graphics which has the 
possibility to dramatically change this situation. Like the latest generation 
graphics APIs such as Vulkan and Metal, WebGPU offers first-class compute 
shader support.
+
+To explore the potential of using WebGPU for machine learning deployment in 
the browser, we enhanced the deep learning compiler Apache(incubating) TVM to 
target WASM (for host code that computes the launching parameters and calls 
into the device launch) and WebGPU (for device execution). Our preliminary 
results are quite positive — for the first time, we can deploy machine learning 
applications on the web while still getting near native performance on the GPU.
+
+## Machine Learning Compiler
+
+{:center: style="text-align: center"}
+![image](/images/webgpu/ml-compiler-flow.png){: width="65%"}
+{:center}
+
+One natural reaction when trying out WebGPU is to write shaders for primitive 
operators in deep neural networks (matrix multiplication and convolution) and 
then directly optimize their performance. This is the traditional workflow used 
 by existing frameworks such as TensorFlow.js.
+
+Instead, we apply a compilation based approach. TVM automatically ingests 
models from high-level frameworks such as TensorFlow, Keras, PyTorch, MXNet and 
ONNX and uses a machine learning driven approach to automatically generate low 
level code, in this case compute shaders in SPIR-V format. The generated code 
can then be packaged as a deployable module.
+
+One important advantage of the compilation based approach is the reuse of 
infrastructure. We are able to effortlessly (relative to [other 
approaches](https://arxiv.org/abs/1901.05350)) target the web by reusing the 
infrastructure for optimizing GPU kernels for native platforms such as CUDA, 
Metal and OpenCL. If the mapping of the WebGPU API to native APIs is efficient 
we can expect similar performance with very little work. More importantly, the 
[AutoTVM](https://tvm.apache.org/2018/10/0 [...]
+
+## Building a WASM and WebGPU Compiler
+
+In order to build a compiler that can target WASM and WebGPU, we need the 
following elements:
+
+- A SPIR-V generator for compute shaders.
+- A WASM generator for the host program.
+- A runtime to load and execute the generated program.
+
+Luckily, TVM already has a SPIR-V target for Vulkan, and uses LLVM for host 
code generation. So we can just repurpose the two to generate the device and 
host programs.
+
+The main challenge is the runtime. We need a runtime to load the shader code, 
and to enable  the 

[incubator-tvm-site] branch asf-site updated: Build at Thu May 14 10:59:43 PDT 2020

2020-05-14 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new a8e1e78  Build at Thu May 14 10:59:43 PDT 2020
a8e1e78 is described below

commit a8e1e78ffb213a5564c99d94f564128da1d60875
Author: tqchen 
AuthorDate: Thu May 14 10:59:43 2020 -0700

Build at Thu May 14 10:59:43 PDT 2020
---
 ...s-to-TVM-Stack-and-NNVM-Compiler-with-ROCm.html |  16 +-
 ...machine-learning-to-webassembly-and-webgpu.html | 283 +
 atom.xml   | 105 +++-
 blog.html  |  10 +
 images/webgpu/ml-compiler-flow.png | Bin 0 -> 197380 bytes
 images/webgpu/tvm-wasm-stack.png   | Bin 0 -> 412428 bytes
 images/webgpu/webgpu-mobilenet-perf.png| Bin 0 -> 90966 bytes
 rss.xml| 107 +++-
 sitemap.txt|   1 +
 9 files changed, 495 insertions(+), 27 deletions(-)

diff --git a/2017/10/30/Bringing-AMDGPUs-to-TVM-Stack-and-NNVM-Compiler-with-ROCm.html b/2017/10/30/Bringing-AMDGPUs-to-TVM-Stack-and-NNVM-Compiler-with-ROCm.html
diff --git a/2020/05/14/compiling-machine-learning-to-webassembly-and-webgpu.html b/2020/05/14/compiling-machine-learning-to-webassembly-and-webgpu.html
[diff bodies of the generated HTML pages are garbled in the archive and omitted]

[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5595: [TUTORIAL]TFLite QNN Tutorial

2020-05-14 Thread GitBox


anijain2305 commented on a change in pull request #5595:
URL: https://github.com/apache/incubator-tvm/pull/5595#discussion_r425316098



##
File path: tutorials/frontend/deploy_prequantized_tflite.py
##
@@ -0,0 +1,244 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite)
+
+**Author**: `Siju Samuel `_
+Welcome to part 3 of the Deploy Framework-Prequantized Model with TVM tutorial.
+In this part, we will start with a Quantized TFLite graph and then compile and 
execute it via TVM.
+
+
+For more details on quantizing the model using TFLite, readers are encouraged 
to
+go through `Converting Quantized Models
+`_.
+
+The TFLite models can be downloaded from this `link
+`_.
+
+To get started, the TensorFlow and TFLite packages need to be installed as prerequisites.
+
+.. code-block:: bash
+
+# install tensorflow and tflite
+pip install tensorflow==2.1.0
+pip install tflite==2.1.0
+
+Now please check that the TFLite package was installed successfully: ``python -c "import tflite"``
+
+"""
+
+###
+# Necessary imports
+# -
+import os
+
+import numpy as np
+import tflite
+
+import tvm
+from tvm import relay
+
+
+##
+# Download pretrained Quantized TFLite model
+# --
+
+# Download mobilenet V2 TFLite model provided by Google
+from tvm.contrib.download import download_testdata
+
+model_url = "https://storage.googleapis.com/download.tensorflow.org/models/" \
+            "tflite_11_05_08/mobilenet_v2_1.0_224_quant.tgz"
+
+# Download model tar file and extract it to get mobilenet_v2_1.0_224.tflite
+model_path = download_testdata(model_url, "mobilenet_v2_1.0_224_quant.tgz",
+                               module=['tf', 'official'])
+model_dir = os.path.dirname(model_path)
+
+
+##
+# Utils for downloading and extracting zip files
+# --
+def extract(path):
+    import tarfile
+    if path.endswith("tgz") or path.endswith("gz"):
+        dir_path = os.path.dirname(path)
+        tar = tarfile.open(path)
+        tar.extractall(path=dir_path)
+        tar.close()
+    else:
+        raise RuntimeError('Could not decompress the file: ' + path)
+
+extract(model_path)
+
+
+##
+# Load a test image
+# -
+
+###
+# Get a real image for e2e testing
+# 
+def get_real_image(im_height, im_width):
+    from PIL import Image
+    repo_base = 'https://github.com/dmlc/web-data/raw/master/tensorflow/models/InceptionV1/'
+    img_name = 'elephant-299.jpg'
+    image_url = os.path.join(repo_base, img_name)
+    img_path = download_testdata(image_url, img_name, module='data')
+    image = Image.open(img_path).resize((im_height, im_width))
+    x = np.array(image).astype('uint8')
+    data = np.reshape(x, (1, im_height, im_width, 3))
+    return data
+
+data = get_real_image(224, 224)
+
+##
+# Load a tflite model
+# ---
+
+##
+# Now we can open mobilenet_v2_1.0_224.tflite
+tflite_model_file = os.path.join(model_dir, "mobilenet_v2_1.0_224_quant.tflite")
+tflite_model_buf = open(tflite_model_file, "rb").read()
+
+tflite_model = tflite.Model.GetRootAsModel(tflite_model_buf, 0)
+
+
+###
+# Lets run TFLite pre-quantized model inference and get the TFLite prediction.
+def run_tflite_model(tflite_model_buf, input_data):
+    """ Generic function to execute TFLite """
+    try:
+        from tensorflow import 
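The tutorial above works with a pre-quantized uint8 model. The affine quantization scheme such TFLite models use can be sketched in plain Python; the `scale` and `zero_point` values below are illustrative assumptions, not values read from the model:

```python
import numpy as np

# Illustrative affine-quantization parameters; a real TFLite model stores
# its own per-tensor scale and zero_point (these values are assumptions).
scale, zero_point = 0.0078125, 128

def quantize(real):
    # q = clamp(round(r / scale) + zero_point, 0, 255), stored as uint8
    q = np.round(real / scale) + zero_point
    return np.clip(q, 0, 255).astype("uint8")

def dequantize(q):
    # r = scale * (q - zero_point) recovers the approximate real value
    return scale * (q.astype(np.int32) - zero_point)

x = np.array([-1.0, 0.0, 0.5], dtype="float32")
assert np.allclose(dequantize(quantize(x)), x, atol=scale)
```

The round trip is only accurate to within one quantization step (`atol=scale`), which is why quantized and float model outputs are compared loosely in such tutorials.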

[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5551: [Reduction] Fix cross thread reduction

2020-05-14 Thread GitBox


wpan11nv commented on a change in pull request #5551:
URL: https://github.com/apache/incubator-tvm/pull/5551#discussion_r425288053



##
File path: src/te/operation/cross_thread_reduction.cc
##
@@ -48,9 +97,18 @@ Stmt MakeCrossThreadReduction(const ComputeOpNode* self, const Stage& stage,
     CHECK(reduce);
     reduces[i] = reduce;
   }
-  PrimExpr cond = reduces[0]->condition;
-  for (PrimExpr v : conds) {
-    cond = cond && v;
+
+  // This computes the bound checking predicates in normal reduction.
+  auto normal_preds =
+      MakeBoundCheck(stage, dom_map, value_map, false, std::unordered_set());
+
+  // The existing reduction predicate (only from the first one one?)
+  PrimExpr input_pred = reduces[0]->condition;
+
+  // normal_pred = input_pred && normal_pred
+  normal_preds.push_back(input_pred);
+  for (PrimExpr v : normal_preds) {
+    if (v.defined()) normal_preds.push_back(v);
   }

Review comment:
   Thanks for catching this! normal_preds may contain null  expressions, so 
we need to filter them out.  I just fixed it.
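The fix being discussed, folding a list of possibly-undefined predicates into a single condition, can be illustrated with a small Python sketch (plain booleans and `None` stand in for TVM's `PrimExpr` and undefined expressions; this is an illustration of the logic, not the actual C++ patch):

```python
from functools import reduce

def combine_predicates(preds):
    # Drop undefined entries (None stands in for an undefined PrimExpr),
    # then AND the remaining predicates together; an empty list means
    # "no bound checks needed", i.e. the condition is trivially true.
    defined = [p for p in preds if p is not None]
    if not defined:
        return True
    return reduce(lambda a, b: a and b, defined)
```

Filtering before folding avoids both the null-expression problem and the append-while-iterating bug pointed out above.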





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kparzysz-quic opened a new pull request #5598: [LLVM] Represent alignment information in LLVM IR

2020-05-14 Thread GitBox


kparzysz-quic opened a new pull request #5598:
URL: https://github.com/apache/incubator-tvm/pull/5598


   - Insert alignment assumptions for aligned variables.
   - Insert align attributes for function parameters.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5551: [Reduction] Fix cross thread reduction

2020-05-14 Thread GitBox


wpan11nv commented on a change in pull request #5551:
URL: https://github.com/apache/incubator-tvm/pull/5551#discussion_r425272357



##
File path: src/te/operation/cross_thread_reduction.cc
##
@@ -48,9 +97,18 @@ Stmt MakeCrossThreadReduction(const ComputeOpNode* self, const Stage& stage,
     CHECK(reduce);
     reduces[i] = reduce;
   }
-  PrimExpr cond = reduces[0]->condition;
-  for (PrimExpr v : conds) {
-    cond = cond && v;
+
+  // This computes the bound checking predicates in normal reduction.
+  auto normal_preds =
+      MakeBoundCheck(stage, dom_map, value_map, false, std::unordered_set());
+
+  // The existing reduction predicate (only from the first one one?)
+  PrimExpr input_pred = reduces[0]->condition;
+
+  // normal_pred = input_pred && normal_pred
+  normal_preds.push_back(input_pred);
+  for (PrimExpr v : normal_preds) {
+    if (v.defined()) normal_preds.push_back(v);
   }

Review comment:
   Yes, this looks odd. I saw null predicates in this vector. So this is to 
remove them. Let me add a comment. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen closed issue #5594: segment fault,when convert mxnet-model to tvm-model

2020-05-14 Thread GitBox


tqchen closed issue #5594:
URL: https://github.com/apache/incubator-tvm/issues/5594


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen closed issue #5597: make fails during installation on ARM error: ‘amdgcn_s_barrier’ is not a member of ‘llvm::Intrinsic’

2020-05-14 Thread GitBox


tqchen closed issue #5597:
URL: https://github.com/apache/incubator-tvm/issues/5597


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen edited a comment on issue #5597: make fails during installation on ARM error: ‘amdgcn_s_barrier’ is not a member of ‘llvm::Intrinsic’

2020-05-14 Thread GitBox


tqchen edited a comment on issue #5597:
URL: https://github.com/apache/incubator-tvm/issues/5597#issuecomment-628694939


   Please open a new thread on https://discuss.tvm.ai/, which the community uses for troubleshooting. This is likely because your version of LLVM did not install with these targets.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen edited a comment on issue #5597: make fails during installation on ARM error: ‘amdgcn_s_barrier’ is not a member of ‘llvm::Intrinsic’

2020-05-14 Thread GitBox


tqchen edited a comment on issue #5597:
URL: https://github.com/apache/incubator-tvm/issues/5597#issuecomment-628694939


   Please open a new thread on https://discuss.tvm.ai/. This is likely because your version of LLVM did not install with these targets.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #5597: make fails during installation on ARM error: ‘amdgcn_s_barrier’ is not a member of ‘llvm::Intrinsic’

2020-05-14 Thread GitBox


tqchen commented on issue #5597:
URL: https://github.com/apache/incubator-tvm/issues/5597#issuecomment-628694939


   Please open a new thread on https://discuss.tvm.ai/. This is likely because your version of LLVM did not install with these targets.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5581: Add debug mode to tempdir()

2020-05-14 Thread GitBox


tqchen commented on a change in pull request #5581:
URL: https://github.com/apache/incubator-tvm/pull/5581#discussion_r425207128



##
File path: python/tvm/contrib/util.py
##
@@ -30,6 +32,32 @@ class TempDirectory(object):
     Automatically removes the directory when it went out of scope.
     """
 
+    # When True, all TempDirectory are *NOT* deleted and instead live inside a
+    # predicable directory tree.
+    DEBUG_MODE = False

Review comment:
   Thinking a bit about API, how about
   ```python
   with tvm.util.TemporaryDirectory.debug_mode():
       # content
       # no need to set it back later
   ```
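A minimal plain-Python sketch of how such a context manager could behave. This illustrates the API shape proposed above, not the actual `tvm.contrib.util` implementation, and the class below is a stand-in:

```python
import contextlib

class TempDirectory:
    # Module-level flag, as in the patch under review.
    DEBUG_MODE = False

    @classmethod
    @contextlib.contextmanager
    def debug_mode(cls):
        # Flip the flag for the duration of the `with` block and restore
        # the previous value afterwards, so callers never reset it manually.
        previous = cls.DEBUG_MODE
        cls.DEBUG_MODE = True
        try:
            yield
        finally:
            cls.DEBUG_MODE = previous
```

The `try/finally` guarantees the flag is restored even if the block raises, which is the main advantage over setting and unsetting `DEBUG_MODE` by hand.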





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5593: [DOCS] Improve document in reflection

2020-05-14 Thread GitBox


tqchen commented on pull request #5593:
URL: https://github.com/apache/incubator-tvm/pull/5593#issuecomment-628692567


   Thanks @liangfu @MarisaKirisame !



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (482e341 -> 561f0c2)

2020-05-14 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 482e341  Fix JSON graph dumping. (#5591)
 add 561f0c2  [DOCS] Improve document in reflection (#5593)

No new revisions were added by this update.

Summary of changes:
 apps/cpp_rpc/README.md| 4 ++--
 apps/extension/python/tvm_ext/__init__.py | 2 +-
 include/tvm/node/reflection.h | 6 +-
 3 files changed, 8 insertions(+), 4 deletions(-)



[GitHub] [incubator-tvm] tqchen merged pull request #5593: [DOCS] Improve document in reflection

2020-05-14 Thread GitBox


tqchen merged pull request #5593:
URL: https://github.com/apache/incubator-tvm/pull/5593


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] MarisaKirisame commented on a change in pull request #5593: [DOCS] Improve document in reflection

2020-05-14 Thread GitBox


MarisaKirisame commented on a change in pull request #5593:
URL: https://github.com/apache/incubator-tvm/pull/5593#discussion_r425201430



##
File path: apps/cpp_rpc/README.md
##
@@ -59,4 +59,4 @@ Command line usage
 ```
 
 ## Note
-Currently support is only there for Linux / Android / Windows environment and proxy mode doesn't be supported currently.
\ No newline at end of file
+Currently support is only there for Linux / Android / Windows environment and proxy mode doesn't be supported currently.

Review comment:
   ```suggestion
   Currently only Linux / Android / Windows environment is supported, and proxy mode is not supported.
   ```
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] GalMoore opened a new issue #5597: make fails during installation on ARM error: ‘amdgcn_s_barrier’ is not a member of ‘llvm::Intrinsic’

2020-05-14 Thread GitBox


GalMoore opened a new issue #5597:
URL: https://github.com/apache/incubator-tvm/issues/5597


   Hey, 
   
   I'm trying to install TVM (with LLVM) on new EulerOS aarch64 server. 
   'Cmake' works fine: 
   
   `gal@localhost build]$ cmake ..
   -- Build with RPC support...
   -- Build with Graph runtime support...
   -- Build VTA runtime with target: sim
   -- Use 
llvm-config=/home/gal/llvm/clang+llvm-10.0.0-aarch64-linux-gnu/bin/llvm-config
   /home/gal/llvm/clang+llvm-10.0.0-aarch64-linux-gnu/bin/llvm-config: 
/lib64/libtinfo.so.5: no version information available (required by 
/home/gal/llvm/clang+llvm-10.0.0-aarch64-linux-gnu/bin/llvm-config)
   /home/gal/llvm/clang+llvm-10.0.0-aarch64-linux-gnu/bin/llvm-config: 
/lib64/libtinfo.so.5: no version information available (required by 
/home/gal/llvm/clang+llvm-10.0.0-aarch64-linux-gnu/bin/llvm-config)
   /home/gal/llvm/clang+llvm-10.0.0-aarch64-linux-gnu/bin/llvm-config: 
/lib64/libtinfo.so.5: no version information available (required by 
/home/gal/llvm/clang+llvm-10.0.0-aarch64-linux-gnu/bin/llvm-config)
   /home/gal/llvm/clang+llvm-10.0.0-aarch64-linux-gnu/bin/llvm-config: 
/lib64/libtinfo.so.5: no version information available (required by 
/home/gal/llvm/clang+llvm-10.0.0-aarch64-linux-gnu/bin/llvm-config)
   -- /home/gal/llvm/clang+llvm-10.0.0-aarch64-linux-gnu/include
   -- Found 
LLVM_INCLUDE_DIRS=/home/gal/llvm/clang+llvm-10.0.0-aarch64-linux-gnu/include
   -- Found LLVM_DEFINITIONS= -D_GNU_SOURCE -D__STDC_CONSTANT_MACROS 
-D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS
   -- Found TVM_LLVM_VERSION=100
   -- Build with LLVM 
   -- Set TVM_LLVM_VERSION=100
   -- Build with contrib.sort
   -- Build with contrib.hybriddump
   -- Performing Test SUPPORT_CXX14
   -- Performing Test SUPPORT_CXX14 - Success
   -- Build with c++14
   -- Build with thread support...
   -- Looking for pthread.h
   -- Looking for pthread.h - found
   -- Looking for pthread_create
   -- Looking for pthread_create - not found
   -- Check if compiler accepts -pthread
   -- Check if compiler accepts -pthread - yes
   -- Found Threads: TRUE  
   -- Configuring done
   -- Generating done
   -- Build files have been written to: /home/gal/code/TVM/tvm/build
   `
   
   but 'make -j4' command runs into error: 
   
   `[ 77%] Building CXX object 
CMakeFiles/tvm.dir/src/codegen/llvm/codegen_cpu.cc.o
   [ 78%] Building CXX object 
CMakeFiles/tvm.dir/src/codegen/llvm/codegen_llvm.cc.o
   /home/gal/code/TVM/tvm/src/codegen/llvm/codegen_amdgpu.cc: In member 
function ‘virtual llvm::Value* 
tvm::codegen::CodeGenAMDGPU::GetThreadIndex(const tvm::IterVar&)’:
   /home/gal/code/TVM/tvm/src/codegen/llvm/codegen_amdgpu.cc:135:56: error: 
‘amdgcn_workitem_id_x’ is not a member of ‘llvm::Intrinsic’
llvm::Intrinsic::ID intrin_id = ::llvm::Intrinsic::amdgcn_workitem_id_x;
   ^~~~
   /home/gal/code/TVM/tvm/src/codegen/llvm/codegen_amdgpu.cc:138:48: error: 
‘amdgcn_workitem_id_x’ is not a member of ‘llvm::Intrinsic’
case 0: intrin_id = ::llvm::Intrinsic::amdgcn_workitem_id_x; break;
   ^~~~
   /home/gal/code/TVM/tvm/src/codegen/llvm/codegen_amdgpu.cc:139:48: error: 
‘amdgcn_workitem_id_y’ is not a member of ‘llvm::Intrinsic’
case 1: intrin_id = ::llvm::Intrinsic::amdgcn_workitem_id_y; break;
   ^~~~
   /home/gal/code/TVM/tvm/src/codegen/llvm/codegen_amdgpu.cc:140:48: error: 
‘amdgcn_workitem_id_z’ is not a member of ‘llvm::Intrinsic’
case 2: intrin_id = ::llvm::Intrinsic::amdgcn_workitem_id_z; break;
   ^~~~
   /home/gal/code/TVM/tvm/src/codegen/llvm/codegen_amdgpu.cc:146:48: error: 
‘amdgcn_workgroup_id_x’ is not a member of ‘llvm::Intrinsic’
case 0: intrin_id = ::llvm::Intrinsic::amdgcn_workgroup_id_x; break;
   ^
   /home/gal/code/TVM/tvm/src/codegen/llvm/codegen_amdgpu.cc:147:48: error: 
‘amdgcn_workgroup_id_y’ is not a member of ‘llvm::Intrinsic’
case 1: intrin_id = ::llvm::Intrinsic::amdgcn_workgroup_id_y; break;
   ^
   /home/gal/code/TVM/tvm/src/codegen/llvm/codegen_amdgpu.cc:148:48: error: 
‘amdgcn_workgroup_id_z’ is not a member of ‘llvm::Intrinsic’
case 2: intrin_id = ::llvm::Intrinsic::amdgcn_workgroup_id_z; break;
   ^
   /home/gal/code/TVM/tvm/src/codegen/llvm/codegen_amdgpu.cc: In member 
function ‘virtual llvm::Value* 
tvm::codegen::CodeGenAMDGPU::CreateStorageSync(const tvm::ir::Call*)’:
   /home/gal/code/TVM/tvm/src/codegen/llvm/codegen_amdgpu.cc:163:30: error: 
‘amdgcn_s_barrier’ is not a member of ‘llvm::Intrinsic’
  

[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5578: [Relay][Refactor][std::string --> String] Relay updated with String

2020-05-14 Thread GitBox


ANSHUMAN87 commented on pull request #5578:
URL: https://github.com/apache/incubator-tvm/pull/5578#issuecomment-628673263


   @zhiics : Thanks for the review! Your comment is addressed now!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] cchung100m closed pull request #5501: [TIR][REFACTOR] std::string -> String Migration in TIR nodes

2020-05-14 Thread GitBox


cchung100m closed pull request #5501:
URL: https://github.com/apache/incubator-tvm/pull/5501


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] cchung100m opened a new pull request #5596: [TIR][REFACTOR] std::string -> String Migration in TIR nodes

2020-05-14 Thread GitBox


cchung100m opened a new pull request #5596:
URL: https://github.com/apache/incubator-tvm/pull/5596


   Hi @tqchen @zhiics @jroesch 
   
   Following issue #5490, this PR works on the `std::string` -> `String` migration in TIR nodes. I would appreciate it if you could help review it, many thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5595: [TUTORIAL]TFLite QNN Tutorial

2020-05-14 Thread GitBox


siju-samuel opened a new pull request #5595:
URL: https://github.com/apache/incubator-tvm/pull/5595


   QNN Tutorial series on tflite. Continuation of PRs #5321(PyTorch) & 
#5362(MxNet)
   This PR can be merged after #5362
   
   @anijain2305 @masahi Please help to review this PR.




This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] huanleo opened a new issue #5594: segment fault,when convert mxnet-model to tvm-model

2020-05-14 Thread GitBox


huanleo opened a new issue #5594:
URL: https://github.com/apache/incubator-tvm/issues/5594


   When I convert an MXNet model to a TVM model, ResNeSt-50 works fine, but converting a larger network (ResNeSt-101 or ResNeSt-200) produces a "segment fault", in both LLVM and CUDA modes.
   
   I suspect TVM is not allocating enough space, but I don't know where to change this.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] liangfu commented on pull request #5593: [DOCS] Improve document in reflection

2020-05-14 Thread GitBox


liangfu commented on pull request #5593:
URL: https://github.com/apache/incubator-tvm/pull/5593#issuecomment-628464822


   @MarisaKirisame Please review.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] liangfu opened a new pull request #5593: [DOCS] Improve document in reflection

2020-05-14 Thread GitBox


liangfu opened a new pull request #5593:
URL: https://github.com/apache/incubator-tvm/pull/5593


   This PR updates document in reflection for changes introduced in #5160 .



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5578: [Relay][Refactor][std::string --> String] Relay updated with String

2020-05-14 Thread GitBox


zhiics commented on a change in pull request #5578:
URL: https://github.com/apache/incubator-tvm/pull/5578#discussion_r424893721



##
File path: src/relay/backend/compile_engine.cc
##
@@ -580,7 +580,7 @@ class CompileEngineImpl : public CompileEngineNode {
     auto symbol_name = src_func->GetAttr(tvm::attr::kGlobalSymbol);
     CHECK(symbol_name.defined()) << "No external symbol is set for:\n"
                                  << AsText(src_func, false);
-    auto gv = GlobalVar(std::string(symbol_name.value()));
+    auto gv = GlobalVar(String(symbol_name.value()));

Review comment:
   just `GlobalVar(symbol_name)`?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org