[GitHub] [incubator-tvm] jroesch commented on issue #5939: [VOTE] Release Apache TVM (incubating) v0.6.1.rc0

2020-06-26 Thread GitBox


jroesch commented on issue #5939:
URL: https://github.com/apache/incubator-tvm/issues/5939#issuecomment-650484640


   +1 







[GitHub] [incubator-tvm] siju-samuel commented on issue #5939: [VOTE] Release Apache TVM (incubating) v0.6.1.rc0

2020-06-26 Thread GitBox


siju-samuel commented on issue #5939:
URL: https://github.com/apache/incubator-tvm/issues/5939#issuecomment-650478109


   +1







[GitHub] [incubator-tvm] junrushao1994 commented on issue #5939: [VOTE] Release Apache TVM (incubating) v0.6.1.rc0

2020-06-26 Thread GitBox


junrushao1994 commented on issue #5939:
URL: https://github.com/apache/incubator-tvm/issues/5939#issuecomment-650476981


   +1







[GitHub] [incubator-tvm] ZihengJiang commented on issue #5939: [VOTE] Release Apache TVM (incubating) v0.6.1.rc0

2020-06-26 Thread GitBox


ZihengJiang commented on issue #5939:
URL: https://github.com/apache/incubator-tvm/issues/5939#issuecomment-650471485


   +1







[GitHub] [incubator-tvm] tvm-archiver commented on issue #5939: [VOTE] Release Apache TVM (incubating) v0.6.1.rc0

2020-06-26 Thread GitBox


tvm-archiver commented on issue #5939:
URL: https://github.com/apache/incubator-tvm/issues/5939#issuecomment-650469065


   +1
   
   Thierry
   
   > On Jun 26, 2020, at 6:16 PM, masahi  wrote:
   > 
   > +1
   > 
   > -- 
   > You are receiving this because you are subscribed to this thread.
   > Reply to this email directly or view it on GitHub:
   > https://github.com/apache/incubator-tvm/issues/5939#issuecomment-650468556
   
   







[GitHub] [incubator-tvm] weberlo opened a new pull request #5940: Add Quantize/Dequantize Partitioning

2020-06-26 Thread GitBox


weberlo opened a new pull request #5940:
URL: https://github.com/apache/incubator-tvm/pull/5940


   Implements step 1 of the [Improvements to Automatic Quantization for Bare-Metal RFC](https://discuss.tvm.ai/t/rfc-improvements-to-automatic-quantization-for-bare-metal/7108).
   
   The code still needs more thorough documentation, and I'm going to wait to bake the visitors into C++ until we get some design feedback (either here or on the RFC).







[GitHub] [incubator-tvm] masahi commented on issue #5939: [VOTE] Release Apache TVM (incubating) v0.6.1.rc0

2020-06-26 Thread GitBox


masahi commented on issue #5939:
URL: https://github.com/apache/incubator-tvm/issues/5939#issuecomment-650468556


   +1







[GitHub] [incubator-tvm] tqchen commented on issue #5939: [VOTE] Release Apache TVM (incubating) v0.6.1.rc0

2020-06-26 Thread GitBox


tqchen commented on issue #5939:
URL: https://github.com/apache/incubator-tvm/issues/5939#issuecomment-650460122


   +1 (binding). I checked:
   
   - Signatures and hashes good
   - DISCLAIMER, LICENSE, NOTICE
   - No unexpected binary files
   - Code compiles
   
   TQ
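
   For reference, a minimal sketch of the "signatures and hashes" check above, assuming the artifact names from the svn commit later in this digest; the PGP signature itself would be verified separately (e.g. with gpg) and is not shown here:
   
   ```
   import hashlib
   
   def sha512_of(path, chunk_size=1 << 20):
       """Stream the file so large tarballs do not need to fit in memory."""
       h = hashlib.sha512()
       with open(path, "rb") as f:
           for chunk in iter(lambda: f.read(chunk_size), b""):
               h.update(chunk)
       return h.hexdigest()
   
   tarball = "apache-tvm-src-v0.6.1.rc0-incubating.tar.gz"
   expected = open(tarball + ".sha512").read().split()[0]  # the hash is the first field
   actual = sha512_of(tarball)
   print("sha512 OK" if actual == expected else "sha512 MISMATCH")
   ```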







[GitHub] [incubator-tvm] masahi commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-26 Thread GitBox


masahi commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r446457483



##
File path: src/runtime/contrib/dnnl/dnnl_json_runtime.cc
##
@@ -0,0 +1,456 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/contrib/dnnl/dnnl_json_runtime.cc
+ * \brief A simple JSON runtime for DNNL.
+ */
+
+#include <tvm/runtime/ndarray.h>
+#include <tvm/runtime/registry.h>
+
+#include <cstddef>
+#include <string>
+#include <vector>
+
+#include "../json/json_node.h"
+#include "../json/json_runtime.h"
+#include "dnnl.hpp"
+
+namespace tvm {
+namespace runtime {
+namespace contrib {
+
+using namespace tvm::runtime;
+using namespace tvm::runtime::json;
+
+class DNNLJSONRuntime : public JSONRuntimeBase {
+  using tag = dnnl::memory::format_tag;
+  using dt = dnnl::memory::data_type;
+
+ public:
+  DNNLJSONRuntime(const std::string& symbol_name, const std::string& graph_json,
+                  const Array<String> const_names)
+      : JSONRuntimeBase(symbol_name, graph_json, const_names) {}
+
+  const char* type_key() const { return "dnnl_json"; }
+
+  void Init(const Array<NDArray>& consts) override {
+    BuildEngine();
+
+    CHECK_EQ(consts.size(), const_idx_.size())
+        << "The number of input constants must match the number of required.";
+
+    // Setup constants entries for weights.
+    SetupConstants(consts);
+  }
+
+  void Run() override {
+    // Fill in the input buffers.
+    for (size_t i = 0; i < input_nodes_.size(); ++i) {
+      auto eid = EntryID(input_nodes_[i], 0);
+      // TODO(@comaniac): Support other data lengths.
+      size_t offset_in_bytes = entry_out_mem_[eid].second * 4;
+      size_t buffer_size = GetDataSize(*data_entry_[eid]);
+      write_to_dnnl_memory(data_entry_[eid]->data, entry_out_mem_[eid].first, buffer_size,
+                           offset_in_bytes);
+    }
+
+    // Invoke the engine through interpreting the stream.
+    for (size_t i = 0; i < net_.size(); ++i) {
+      net_.at(i).execute(stream_, net_args_.at(i));
+    }
+    stream_.wait();
+
+    // Read output buffers.
+    for (size_t i = 0; i < outputs_.size(); ++i) {
+      auto eid = EntryID(outputs_[i]);
+      size_t offset_in_bytes = entry_out_mem_[eid].second * 4;
+      size_t buffer_size = GetDataSize(*data_entry_[eid]);
+      read_from_dnnl_memory(data_entry_[eid]->data, entry_out_mem_[eid].first, buffer_size,
+                            offset_in_bytes);
+    }
+  }
+
+ private:
+  // Build up the engine based on the input graph.
+  void BuildEngine() {
+    engine_ = dnnl::engine(dnnl::engine::kind::cpu, 0);
+    stream_ = dnnl::stream(engine_);
+
+    // Build subgraph engine.
+    for (size_t nid = 0; nid < nodes_.size(); ++nid) {
+      const auto& node = nodes_[nid];
+      if (node.GetOpType() == "kernel") {
+        CHECK_EQ(node.GetOpType(), "kernel");
+        auto op_name = node.GetOpName();
+        if ("nn.conv2d" == op_name) {
+          Conv2d(nid);
+        } else if ("dnnl.conv2d_relu" == op_name) {
+          Conv2d(nid, true, false);
+        } else if ("dnnl.conv2d_bias_relu" == op_name) {
+          Conv2d(nid, true, true);
+        } else if ("nn.dense" == op_name) {
+          Dense(nid);
+        } else if ("nn.batch_norm" == op_name) {
+          BatchNorm(nid);
+        } else if ("nn.relu" == op_name) {
+          Relu(nid);
+        } else if ("add" == op_name) {
+          Add(nid);
+        } else {
+          LOG(FATAL) << "Unsupported op: " << op_name;
+        }
+      }
+    }
+  }
+
+  // Bind a JSON graph node entry to a DNNL memory.
+  dnnl::memory BindDNNLMemory(const JSONGraphNodeEntry& entry, dnnl::memory::desc mem_desc,
+                              size_t offset = 0) {
+    auto eid = EntryID(entry);
+    if (entry_out_mem_.count(eid) == 0) {
+      return BindDNNLMemory(entry, dnnl::memory(mem_desc, engine_), offset);
+    }
+    return entry_out_mem_[eid].first;
+  }
+
+  // Bind a JSON graph node entry to a given DNNL memory.
+  dnnl::memory BindDNNLMemory(const JSONGraphNodeEntry& entry, dnnl::memory mem,
+                              size_t offset = 0) {
+    auto eid = EntryID(entry);
+    // Since the DNNL memory has been created before calling this function, we assume the

[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5921: µTVM CRT modifications for on-device RPC server

2020-06-26 Thread GitBox


tqchen commented on a change in pull request #5921:
URL: https://github.com/apache/incubator-tvm/pull/5921#discussion_r446457499



##
File path: src/runtime/crt/Makefile
##
@@ -0,0 +1,57 @@
+# Licensed to the Apache Software Foundation (ASF) under one

Review comment:
   I will let you guys decide, but it might make sense to show a make flow if it is simple enough, because others can learn from it.









[GitHub] [incubator-tvm] tqchen commented on pull request #5925: Fix small typo in nn.conv2d_gemm_weight_transform

2020-06-26 Thread GitBox


tqchen commented on pull request #5925:
URL: https://github.com/apache/incubator-tvm/pull/5925#issuecomment-650457793


   ping @giuseros 







[GitHub] [incubator-tvm] yzhliu opened a new issue #5939: [VOTE] Release Apache TVM (incubating) v0.6.1.rc0

2020-06-26 Thread GitBox


yzhliu opened a new issue #5939:
URL: https://github.com/apache/incubator-tvm/issues/5939


   Dear TVM community,
   
   This is a call for a vote to release Apache TVM (incubating) version 0.6.1. This is a maintenance release incorporating important bug fixes. All users of Apache TVM (incubating) 0.6.0 are advised to upgrade.
   
   Link to release notes:
   https://github.com/apache/incubator-tvm/releases/tag/v0.6.1.rc0
   
   Link to release candidate:
   https://dist.apache.org/repos/dist/dev/incubator/tvm/tvm-v0.6.1-rc0
   
   The vote will be open for at least 72 hours. Everyone is welcome to vote. Please vote by replying to this thread explicitly.
   
   +1 = approve
   +0 = no opinion
   -1 = disapprove (provide reason)
   
   NOTE: this thread is being mirrored in dev@







[GitHub] [incubator-tvm] areusch commented on a change in pull request #5921: µTVM CRT modifications for on-device RPC server

2020-06-26 Thread GitBox


areusch commented on a change in pull request #5921:
URL: https://github.com/apache/incubator-tvm/pull/5921#discussion_r446455221



##
File path: src/runtime/crt/Makefile
##
@@ -0,0 +1,57 @@
+# Licensed to the Apache Software Foundation (ASF) under one

Review comment:
   I'm not sure how to do this without dragging in extra tvm configuration from e.g. include_directories() calls. The other thing is that the make-based flow is a good example for firmware engineers not used to cmake. What do you think? cc @tqchen









svn commit: r40189 - in /dev/incubator/tvm/tvm-v0.6.1-rc0: ./ apache-tvm-src-v0.6.1.rc0-incubating.tar.gz apache-tvm-src-v0.6.1.rc0-incubating.tar.gz.asc apache-tvm-src-v0.6.1.rc0-incubating.tar.gz.sha512

2020-06-26 Thread liuyizhi
Author: liuyizhi
Date: Fri Jun 26 23:51:23 2020
New Revision: 40189

Log:
Add v0.6.1 RC0

Added:
dev/incubator/tvm/tvm-v0.6.1-rc0/

dev/incubator/tvm/tvm-v0.6.1-rc0/apache-tvm-src-v0.6.1.rc0-incubating.tar.gz   
(with props)

dev/incubator/tvm/tvm-v0.6.1-rc0/apache-tvm-src-v0.6.1.rc0-incubating.tar.gz.asc

dev/incubator/tvm/tvm-v0.6.1-rc0/apache-tvm-src-v0.6.1.rc0-incubating.tar.gz.sha512

Added: 
dev/incubator/tvm/tvm-v0.6.1-rc0/apache-tvm-src-v0.6.1.rc0-incubating.tar.gz
==
Binary file - no diff available.

Propchange: 
dev/incubator/tvm/tvm-v0.6.1-rc0/apache-tvm-src-v0.6.1.rc0-incubating.tar.gz
--
svn:mime-type = application/octet-stream

Added: 
dev/incubator/tvm/tvm-v0.6.1-rc0/apache-tvm-src-v0.6.1.rc0-incubating.tar.gz.asc
==
--- 
dev/incubator/tvm/tvm-v0.6.1-rc0/apache-tvm-src-v0.6.1.rc0-incubating.tar.gz.asc
 (added)
+++ 
dev/incubator/tvm/tvm-v0.6.1-rc0/apache-tvm-src-v0.6.1.rc0-incubating.tar.gz.asc
 Fri Jun 26 23:51:23 2020
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCAAdFiEE9CwabmNMEF6NmFEFynUSVOl7n+QFAl72gZEACgkQynUSVOl7
+n+SSGxAAiM/7ZDI+16xhew2kIrcKdVB2bDrwhNw8Lq074IeZmDLUnwHyFMsncU2/
+XspT5Lj4MlXYxibZ9UfAS35g5JXEMx/OenWT+xqKFt+u9Tm3Aokx9Hhzn98YW6BP
+5sh1mxWEHMZdxdJUmSUTo4nvoHzPC9iERKD43zhSxsQ9WU2bjJYal5iJI7I4PpMY
+wTsqRslSldbD8XE+uCeSP2q/JMH1xt5DgiIqU8kpTUTxU22iPdLNDUmJTb6Yi2fg
+EqyuBlzrwavvQTNJxJ0rtWdsMsWYqKYI3epujvQWa1VwRYlHPnhdw59j1DEea/Yx
+w0GZrlw6jqMI4IArF9uiL9H76zxW3GhGVphjPxjolIwMbdfTYt3Kfh1vk4I6Stpx
+/bZla35RTwAlXHQL0IHSpEUiDuRVAviTzRe9jF0RxzUB/zZ8FBqSd0WKkIWy9k4D
+DlisGYqGXjjmDzgY85asrQ7kFmePrS6tLBIiAQacWTBfPByLP0PY4RQAMHgI7Piu
+6Fejvr5/A7IdxtnBW/mWv/fNj+Wpjn4cwW8BgiAviolbcb50vWc1hLAQp9Q+DLUp
+SK2A74ZtUIpMQANsJZNVge3EtK2AQBZEDhqcMlBeJIJ2U7t/vlE/0/ozL1bbxGOs
+Q0wim14O0SvzAfi/UD8+1sBlZR5MDod+Y4SgbsMCrE1JHIZAZc4=
+=+KNl
+-END PGP SIGNATURE-

Added: 
dev/incubator/tvm/tvm-v0.6.1-rc0/apache-tvm-src-v0.6.1.rc0-incubating.tar.gz.sha512
==
--- 
dev/incubator/tvm/tvm-v0.6.1-rc0/apache-tvm-src-v0.6.1.rc0-incubating.tar.gz.sha512
 (added)
+++ 
dev/incubator/tvm/tvm-v0.6.1-rc0/apache-tvm-src-v0.6.1.rc0-incubating.tar.gz.sha512
 Fri Jun 26 23:51:23 2020
@@ -0,0 +1 @@
+33f0287af9db76e1a82b3d7e3a0f4928fbb60066ccd36db44a6dadd51039862e8f2e833427687f8cd7c828818d0c9fa22057a7c9c32a107ba99d6df73f3634e0
  apache-tvm-src-v0.6.1.rc0-incubating.tar.gz




[GitHub] [incubator-tvm] merrymercy commented on pull request #5938: [TOPI] Fix x86 conv2d template when tuning with unpacked layout

2020-06-26 Thread GitBox


merrymercy commented on pull request #5938:
URL: https://github.com/apache/incubator-tvm/pull/5938#issuecomment-650450425


   cc @kevinthesun 







[GitHub] [incubator-tvm] merrymercy opened a new pull request #5938: Fix x86 conv2d template

2020-06-26 Thread GitBox


merrymercy opened a new pull request #5938:
URL: https://github.com/apache/incubator-tvm/pull/5938


   Fix a bug that occurred when tuning a conv2d_nhwc workload with the unpacked layout.
   Without this fix, we get:
   ```
   UnboundLocalError("local variable 'oc_bn' referenced before assignment")
   ```
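   
   The error class above is the standard Python UnboundLocalError: a local variable is assigned on only one branch and then read unconditionally. A minimal, self-contained sketch of the pattern (illustrative only, not the actual TOPI template; the layout names here are assumptions):
   
   ```
   def schedule(layout):
       if layout == "NCHWc":
           oc_bn = 16          # only the packed-layout branch binds oc_bn
       return oc_bn            # unpacked layouts hit UnboundLocalError here
   
   try:
       schedule("NHWC")
   except UnboundLocalError as err:
       print(err)  # local variable 'oc_bn' referenced before assignment
   ```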







[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-26 Thread GitBox


zhiics commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r446450843



##
File path: src/runtime/contrib/json/json_node.h
##
@@ -0,0 +1,358 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/json/json_node.h
+ * \brief The graph nodes used by JSON runtime.
+ */
+
+#ifndef TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+#define TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+
+#include <dlpack/dlpack.h>
+#include <dmlc/json.h>
+#include <dmlc/memory_io.h>
+#include <tvm/runtime/container.h>
+
+#include <cstdint>
+#include <map>
+#include <memory>
+#include <string>
+#include <unordered_map>
+#include <utility>
+#include <vector>
+
+namespace tvm {
+namespace runtime {
+namespace json {
+
+using namespace tvm::runtime;
+using JSONGraphAttrs = std::unordered_map<std::string, dmlc::any>;
+
+/*!
+ * \brief The node entry in the serialized json graph.
+ */
+class JSONGraphNodeEntry {
+ public:
+  // Constructors.
+  JSONGraphNodeEntry() = default;
+  JSONGraphNodeEntry(int id, int index, int version = 0)
+  : id_(id), index_(index), version_(version) {}
+
+  /*!
+   * \brief Serialize a node entry.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) const {
+writer->BeginArray();
+writer->WriteArrayItem(id_);
+writer->WriteArrayItem(index_);
+writer->WriteArrayItem(version_);
+writer->EndArray();
+  }
+
+  /*!
+   * \brief Deserialize the json string into a node entry.
+   * \param reader The json reader.
+   */
+  void Load(dmlc::JSONReader* reader) {
+    reader->BeginArray();
+    CHECK(reader->NextArrayItem()) << "invalid json format";
+    reader->Read(&id_);
+    CHECK(reader->NextArrayItem()) << "invalid json format";
+    reader->Read(&index_);
+    if (reader->NextArrayItem()) {
+      reader->Read(&version_);
+      CHECK(!reader->NextArrayItem()) << "invalid json format";
+    } else {
+      version_ = 0;
+    }
+  }
+
+  /*! \brief The json graph node ID. */
+  uint32_t id_;
+  /*! \brief The entry index. */
+  uint32_t index_;
+  uint32_t version_;
+};
+
+/*!
+ * \brief The node of the serialized json graph. It includes an array of
+ * entries.
+ */
+class JSONGraphNode {
+ public:
+  // Constructors.
+  JSONGraphNode() = default;
+  JSONGraphNode(const std::string& name, const std::string& op_type,
+                const std::vector<JSONGraphNodeEntry>& inputs = {}, size_t num_outputs = 1) {
+    name_ = name;
+    op_type_ = op_type;
+    num_inputs_ = inputs.size();
+    inputs_ = inputs;
+    num_outputs_ = num_outputs;
+  }
+
+  /*!
+   * \brief Serialize a node so that it can be saved to disk.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) {
+writer->BeginObject();
+writer->WriteObjectKeyValue("op", op_type_);
+writer->WriteObjectKeyValue("name", name_);
+if (!inputs_.empty()) {
+  SetAttr("num_inputs", std::to_string(inputs_.size()));
+  SetAttr("num_outputs", std::to_string(num_outputs_));
+  writer->WriteObjectKeyValue("inputs", this->inputs_);
+}
+if (!attrs_.empty()) {
+  writer->WriteObjectKeyValue("attrs", attrs_);
+}
+writer->EndObject();
+  }
+
+  /*!
+   * \brief Load the attribute of a node in the json string.
+   * \param reader The json reader.
+   */
+  void LoadAttrs(dmlc::JSONReader* reader) {
+    std::string key, value;
+    reader->BeginObject();
+    while (reader->NextObjectItem(&key)) {
+      if (key == "num_inputs") {
+        reader->Read(&value);
+        num_inputs_ = strtoul(value.c_str(), nullptr, 10);
+      } else if (key == "num_outputs") {
+        reader->Read(&value);
+        num_outputs_ = strtoul(value.c_str(), nullptr, 10);
+      } else if (key == "dtype") {
+        std::vector<std::string> tmp;
+        reader->BeginArray();
+        CHECK(reader->NextArrayItem());
+        reader->Read(&tmp);
+        CHECK(!reader->NextArrayItem());
+        for (const auto& it : tmp) {
+          dtype_.push_back(tvm::runtime::String2DLDataType(it));
+        }
+      } else if (key == "shape") {
+        reader->BeginArray();
+        CHECK(reader->NextArrayItem());
+        reader->Read(&shape_);
+        CHECK(!reader->NextArrayItem());
+      } else {
+        reader->BeginArray();
+        CHECK(reader->NextArrayItem());
+        std::vector<dmlc::any> tmp;
+        reader->Read(&tmp);
+        attrs_[key] = tmp;
+  

[GitHub] [incubator-tvm] liangfu commented on a change in pull request #5921: µTVM CRT modifications for on-device RPC server

2020-06-26 Thread GitBox


liangfu commented on a change in pull request #5921:
URL: https://github.com/apache/incubator-tvm/pull/5921#discussion_r446450205



##
File path: src/runtime/crt/Makefile
##
@@ -0,0 +1,57 @@
+# Licensed to the Apache Software Foundation (ASF) under one

Review comment:
   Given that we are going to have StandaloneCrt.cmake, can we use CMake to 
build this instead?









[GitHub] [incubator-tvm] masahi commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-26 Thread GitBox


masahi commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r446449540



##
File path: src/runtime/contrib/json/json_node.h
##
@@ -0,0 +1,358 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/json/json_node.h
+ * \brief The graph nodes used by JSON runtime.
+ */
+
+#ifndef TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+#define TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace runtime {
+namespace json {
+
+using namespace tvm::runtime;
+using JSONGraphAttrs = std::unordered_map;
+
+/*!
+ * \brief The node entry in the serialized json graph.
+ */
+class JSONGraphNodeEntry {
+ public:
+  // Constructors.
+  JSONGraphNodeEntry() = default;
+  JSONGraphNodeEntry(int id, int index, int version = 0)
+  : id_(id), index_(index), version_(version) {}
+
+  /*!
+   * \brief Serialize a node entry.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) const {
+writer->BeginArray();
+writer->WriteArrayItem(id_);
+writer->WriteArrayItem(index_);
+writer->WriteArrayItem(version_);
+writer->EndArray();
+  }
+
+  /*!
+   * \brief Deserialize the json string into a node entry.
+   * \param reader The json reader.
+   */
+  void Load(dmlc::JSONReader* reader) {
+reader->BeginArray();
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(_);
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(_);
+if (reader->NextArrayItem()) {
+  reader->Read(_);
+  CHECK(!reader->NextArrayItem()) << "invalid json format";
+} else {
+  version_ = 0;
+}
+  }
+
+  /*! \brief The json graph node ID. */
+  uint32_t id_;
+  /*! \brief The entry index. */
+  uint32_t index_;
+  uint32_t version_;
+};
+
+/*!
+ * \brief The node of the serialized json graph. It includes an array of
+ * entries.
+ */
+class JSONGraphNode {
+ public:
+  // Constructors.
+  JSONGraphNode() = default;
+  JSONGraphNode(const std::string& name, const std::string& op_type,
+const std::vector& inputs = {}, size_t 
num_outputs = 1) {
+name_ = name;
+op_type_ = op_type;
+num_inputs_ = inputs.size();
+inputs_ = inputs;
+num_outputs_ = num_outputs;
+  }
+
+  /*!
+   * \brief Serialize a node so that it can be saved to disk.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) {
+writer->BeginObject();
+writer->WriteObjectKeyValue("op", op_type_);
+writer->WriteObjectKeyValue("name", name_);
+if (!inputs_.empty()) {
+  SetAttr("num_inputs", std::to_string(inputs_.size()));
+  SetAttr("num_outputs", std::to_string(num_outputs_));
+  writer->WriteObjectKeyValue("inputs", this->inputs_);
+}
+if (!attrs_.empty()) {
+  writer->WriteObjectKeyValue("attrs", attrs_);
+}
+writer->EndObject();
+  }
+
+  /*!
+   * \brief Load the attribute of a node in the json string.
+   * \param reader The json reader.
+   */
+  void LoadAttrs(dmlc::JSONReader* reader) {
+std::string key, value;
+reader->BeginObject();
+while (reader->NextObjectItem()) {
+  if (key == "num_inputs") {
+reader->Read();
+num_inputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "num_outputs") {
+reader->Read();
+num_outputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "dtype") {
+std::vector tmp;
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read();
+CHECK(!reader->NextArrayItem());
+for (const auto& it : tmp) {
+  dtype_.push_back(tvm::runtime::String2DLDataType(it));
+}
+  } else if (key == "shape") {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read(_);
+CHECK(!reader->NextArrayItem());
+  } else {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+std::vector tmp;
+reader->Read();
+attrs_[key] = tmp;
+  

[GitHub] [incubator-tvm] tqchen commented on pull request #5936: [CI] Install DNNL (OneDNN) to CI Environment

2020-06-26 Thread GitBox


tqchen commented on pull request #5936:
URL: https://github.com/apache/incubator-tvm/pull/5936#issuecomment-650440958


   OK, this is merged. The binary needs to be manually updated; I will report back when it is ready.







[GitHub] [incubator-tvm] tqchen merged pull request #5936: [CI] Install DNNL (OneDNN) to CI Environment

2020-06-26 Thread GitBox


tqchen merged pull request #5936:
URL: https://github.com/apache/incubator-tvm/pull/5936


   







[incubator-tvm] tag v0.6.1.rc0 created (now 802f055)

2020-06-26 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to tag v0.6.1.rc0
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


  at 802f055  (commit)
No new revisions were added by this update.



[incubator-tvm] branch master updated (69313a7 -> 5786e82)

2020-06-26 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 69313a7  [CODEGEN][CONTRIB] Various update for CoreML codegen (#5934)
 add 5786e82  add dnnl (#5936)

No new revisions were added by this update.

Summary of changes:
 docker/Dockerfile.ci_cpu  | 3 +++
 .../install/{ubuntu_install_antlr.sh => ubuntu_install_dnnl.sh}   | 8 ++--
 2 files changed, 9 insertions(+), 2 deletions(-)
 copy docker/install/{ubuntu_install_antlr.sh => ubuntu_install_dnnl.sh} (72%)



[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5863: [TIR][REFACTOR][API-CHANGE] Change Call.name to Call.op(RelayExpr)

2020-06-26 Thread GitBox


tqchen edited a comment on pull request #5863:
URL: https://github.com/apache/incubator-tvm/pull/5863#issuecomment-650440226


   Followup PR https://github.com/apache/incubator-tvm/pull/5937







[GitHub] [incubator-tvm] tqchen commented on pull request #5937: [TIR][OP][API-CHANGE] Remove CallNode.call_type in favor of attribute.

2020-06-26 Thread GitBox


tqchen commented on pull request #5937:
URL: https://github.com/apache/incubator-tvm/pull/5937#issuecomment-650440265


   
   cc @junrushao1994 @yzhliu @merrymercy @ZihengJiang @wpan11nv @yongfeng-nv 
@masahi @Hzfengsy @spectrometerHBH @xqdan @FrozenGene @antinucleon @vinx13 
@jwfromm







[GitHub] [incubator-tvm] tqchen commented on pull request #5863: [TIR][REFACTOR][API-CHANGE] Change Call.name to Call.op(RelayExpr)

2020-06-26 Thread GitBox


tqchen commented on pull request #5863:
URL: https://github.com/apache/incubator-tvm/pull/5863#issuecomment-650440226


   Followup PR







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-26 Thread GitBox


comaniac commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r446439808



##
File path: src/runtime/contrib/json/json_node.h
##
@@ -0,0 +1,358 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/json/json_node.h
+ * \brief The graph nodes used by JSON runtime.
+ */
+
+#ifndef TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+#define TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace runtime {
+namespace json {
+
+using namespace tvm::runtime;
+using JSONGraphAttrs = std::unordered_map;
+
+/*!
+ * \brief The node entry in the serialized json graph.
+ */
+class JSONGraphNodeEntry {
+ public:
+  // Constructors.
+  JSONGraphNodeEntry() = default;
+  JSONGraphNodeEntry(int id, int index, int version = 0)
+  : id_(id), index_(index), version_(version) {}
+
+  /*!
+   * \brief Serialize a node entry.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) const {
+writer->BeginArray();
+writer->WriteArrayItem(id_);
+writer->WriteArrayItem(index_);
+writer->WriteArrayItem(version_);
+writer->EndArray();
+  }
+
+  /*!
+   * \brief Deserialize the json string into a node entry.
+   * \param reader The json reader.
+   */
+  void Load(dmlc::JSONReader* reader) {
+reader->BeginArray();
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(_);
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(_);
+if (reader->NextArrayItem()) {
+  reader->Read(_);
+  CHECK(!reader->NextArrayItem()) << "invalid json format";
+} else {
+  version_ = 0;
+}
+  }
+
+  /*! \brief The json graph node ID. */
+  uint32_t id_;
+  /*! \brief The entry index. */
+  uint32_t index_;
+  uint32_t version_;
+};
+
+/*!
+ * \brief The node of the serialized json graph. It includes an array of
+ * entries.
+ */
+class JSONGraphNode {
+ public:
+  // Constructors.
+  JSONGraphNode() = default;
+  JSONGraphNode(const std::string& name, const std::string& op_type,
+const std::vector& inputs = {}, size_t 
num_outputs = 1) {
+name_ = name;
+op_type_ = op_type;
+num_inputs_ = inputs.size();
+inputs_ = inputs;
+num_outputs_ = num_outputs;
+  }
+
+  /*!
+   * \brief Serialize a node so that it can be saved to disk.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) {
+writer->BeginObject();
+writer->WriteObjectKeyValue("op", op_type_);
+writer->WriteObjectKeyValue("name", name_);
+if (!inputs_.empty()) {
+  SetAttr("num_inputs", std::to_string(inputs_.size()));
+  SetAttr("num_outputs", std::to_string(num_outputs_));
+  writer->WriteObjectKeyValue("inputs", this->inputs_);
+}
+if (!attrs_.empty()) {
+  writer->WriteObjectKeyValue("attrs", attrs_);
+}
+writer->EndObject();
+  }
+
+  /*!
+   * \brief Load the attribute of a node in the json string.
+   * \param reader The json reader.
+   */
+  void LoadAttrs(dmlc::JSONReader* reader) {
+std::string key, value;
+reader->BeginObject();
+while (reader->NextObjectItem()) {
+  if (key == "num_inputs") {
+reader->Read();
+num_inputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "num_outputs") {
+reader->Read();
+num_outputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "dtype") {
+std::vector tmp;
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read();
+CHECK(!reader->NextArrayItem());
+for (const auto& it : tmp) {
+  dtype_.push_back(tvm::runtime::String2DLDataType(it));
+}
+  } else if (key == "shape") {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read(_);
+CHECK(!reader->NextArrayItem());
+  } else {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+std::vector tmp;
+reader->Read();
+attrs_[key] = tmp;

[GitHub] [incubator-tvm] comaniac commented on pull request #5936: [CI] Install DNNL (OneDNN) to CI Environment

2020-06-26 Thread GitBox


comaniac commented on pull request #5936:
URL: https://github.com/apache/incubator-tvm/pull/5936#issuecomment-650432776


   Verified that this docker image with DNNL works for AMD CPU (EC2 m5a.4xlarge 
instance).







[GitHub] [incubator-tvm] comaniac edited a comment on pull request #5936: [CI] Install DNNL (OneDNN) to CI Environment

2020-06-26 Thread GitBox


comaniac edited a comment on pull request #5936:
URL: https://github.com/apache/incubator-tvm/pull/5936#issuecomment-650406197


   OK, I will verify it on an EC2 M5a instance and get back here.







[GitHub] [incubator-tvm] tqchen commented on pull request #5937: [TIR][OP][API-CHANGE] Remove CallNode.call_type in favor of attribute.

2020-06-26 Thread GitBox


tqchen commented on pull request #5937:
URL: https://github.com/apache/incubator-tvm/pull/5937#issuecomment-650430255


   Followup of https://github.com/apache/incubator-tvm/pull/5863







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-26 Thread GitBox


comaniac commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r446435931



##
File path: src/runtime/contrib/json/json_node.h
##
@@ -0,0 +1,358 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/json/json_node.h
+ * \brief The graph nodes used by JSON runtime.
+ */
+
+#ifndef TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+#define TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace runtime {
+namespace json {
+
+using namespace tvm::runtime;
+using JSONGraphAttrs = std::unordered_map;
+
+/*!
+ * \brief The node entry in the serialized json graph.
+ */
+class JSONGraphNodeEntry {
+ public:
+  // Constructors.
+  JSONGraphNodeEntry() = default;
+  JSONGraphNodeEntry(int id, int index, int version = 0)
+  : id_(id), index_(index), version_(version) {}
+
+  /*!
+   * \brief Serialize a node entry.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) const {
+writer->BeginArray();
+writer->WriteArrayItem(id_);
+writer->WriteArrayItem(index_);
+writer->WriteArrayItem(version_);
+writer->EndArray();
+  }
+
+  /*!
+   * \brief Deserialize the json string into a node entry.
+   * \param reader The json reader.
+   */
+  void Load(dmlc::JSONReader* reader) {
+reader->BeginArray();
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(_);
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(_);
+if (reader->NextArrayItem()) {
+  reader->Read(_);
+  CHECK(!reader->NextArrayItem()) << "invalid json format";
+} else {
+  version_ = 0;
+}
+  }
+
+  /*! \brief The json graph node ID. */
+  uint32_t id_;
+  /*! \brief The entry index. */
+  uint32_t index_;
+  uint32_t version_;
+};
+
+/*!
+ * \brief The node of the serialized json graph. It includes an array of
+ * entries.
+ */
+class JSONGraphNode {
+ public:
+  // Constructors.
+  JSONGraphNode() = default;
+  JSONGraphNode(const std::string& name, const std::string& op_type,
+const std::vector& inputs = {}, size_t 
num_outputs = 1) {
+name_ = name;
+op_type_ = op_type;
+num_inputs_ = inputs.size();
+inputs_ = inputs;
+num_outputs_ = num_outputs;
+  }
+
+  /*!
+   * \brief Serialize a node so that it can be saved to disk.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) {
+writer->BeginObject();
+writer->WriteObjectKeyValue("op", op_type_);
+writer->WriteObjectKeyValue("name", name_);
+if (!inputs_.empty()) {
+  SetAttr("num_inputs", std::to_string(inputs_.size()));
+  SetAttr("num_outputs", std::to_string(num_outputs_));
+  writer->WriteObjectKeyValue("inputs", this->inputs_);
+}
+if (!attrs_.empty()) {
+  writer->WriteObjectKeyValue("attrs", attrs_);
+}
+writer->EndObject();
+  }
+
+  /*!
+   * \brief Load the attribute of a node in the json string.
+   * \param reader The json reader.
+   */
+  void LoadAttrs(dmlc::JSONReader* reader) {
+std::string key, value;
+reader->BeginObject();
+while (reader->NextObjectItem()) {
+  if (key == "num_inputs") {
+reader->Read();
+num_inputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "num_outputs") {
+reader->Read();
+num_outputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "dtype") {
+std::vector tmp;
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read();
+CHECK(!reader->NextArrayItem());
+for (const auto& it : tmp) {
+  dtype_.push_back(tvm::runtime::String2DLDataType(it));
+}
+  } else if (key == "shape") {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read(_);
+CHECK(!reader->NextArrayItem());
+  } else {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+std::vector tmp;
+reader->Read();
+attrs_[key] = tmp;

[GitHub] [incubator-tvm] tqchen opened a new pull request #5937: [TIR][OP][API-CHANGE] Remove CallNode.call_type in favor of attribute.

2020-06-26 Thread GitBox


tqchen opened a new pull request #5937:
URL: https://github.com/apache/incubator-tvm/pull/5937


   This is a follow-up refactor for tir::Call.
   Now that we have switched call->name to call->op, the function effect property can be registered through the op itself, so we no longer need the call_type in the CallNode.
   
   - Introduce CallEffectKind to provide a more fine-grained categorization of calls.
   - Introduce call_pure_extern and call_llvm_pure_intrin to allow us to indicate pure calls in those cases.
   - Migrate existing use cases to the new API.
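   
   A minimal sketch of what the new calls might look like from Python, assuming the bindings mirror the names in the description above (tvm.tir.call_pure_extern / tvm.tir.call_extern); this is not code taken from the PR:
   
   ```
   import tvm
   from tvm import tir
   
   x = tir.const(4.0, "float32")
   
   # Purity is now conveyed by the op/intrinsic used rather than a call_type
   # field on the CallNode.
   pure_call = tir.call_pure_extern("float32", "sqrtf", x)        # pure extern call
   side_call = tir.call_extern("float32", "my_logging_sqrtf", x)  # may have effects (hypothetical extern)
   print(pure_call, side_call)
   ```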
   







[GitHub] [incubator-tvm] comaniac edited a comment on pull request #5936: [CI] Install DNNL (OneDNN) to CI Environment

2020-06-26 Thread GitBox


comaniac edited a comment on pull request #5936:
URL: https://github.com/apache/incubator-tvm/pull/5936#issuecomment-650406197


   OK, I will verify it on an EC2 M5c instance and get back here.







[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-26 Thread GitBox


zhiics commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r446429394



##
File path: src/runtime/contrib/json/json_node.h
##
@@ -0,0 +1,358 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/json/json_node.h
+ * \brief The graph nodes used by JSON runtime.
+ */
+
+#ifndef TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+#define TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace runtime {
+namespace json {
+
+using namespace tvm::runtime;
+using JSONGraphAttrs = std::unordered_map;
+
+/*!
+ * \brief The node entry in the serialized json graph.
+ */
+class JSONGraphNodeEntry {
+ public:
+  // Constructors.
+  JSONGraphNodeEntry() = default;
+  JSONGraphNodeEntry(int id, int index, int version = 0)
+  : id_(id), index_(index), version_(version) {}
+
+  /*!
+   * \brief Serialize a node entry.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) const {
+writer->BeginArray();
+writer->WriteArrayItem(id_);
+writer->WriteArrayItem(index_);
+writer->WriteArrayItem(version_);
+writer->EndArray();
+  }
+
+  /*!
+   * \brief Deserialize the json string into a node entry.
+   * \param reader The json reader.
+   */
+  void Load(dmlc::JSONReader* reader) {
+reader->BeginArray();
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(_);
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(_);
+if (reader->NextArrayItem()) {
+  reader->Read(_);
+  CHECK(!reader->NextArrayItem()) << "invalid json format";
+} else {
+  version_ = 0;
+}
+  }
+
+  /*! \brief The json graph node ID. */
+  uint32_t id_;
+  /*! \brief The entry index. */
+  uint32_t index_;
+  uint32_t version_;
+};
+
+/*!
+ * \brief The node of the serialized json graph. It includes an array of
+ * entries.
+ */
+class JSONGraphNode {
+ public:
+  // Constructors.
+  JSONGraphNode() = default;
+  JSONGraphNode(const std::string& name, const std::string& op_type,
+const std::vector& inputs = {}, size_t 
num_outputs = 1) {
+name_ = name;
+op_type_ = op_type;
+num_inputs_ = inputs.size();
+inputs_ = inputs;
+num_outputs_ = num_outputs;
+  }
+
+  /*!
+   * \brief Serialize a node so that it can be saved to disk.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) {
+writer->BeginObject();
+writer->WriteObjectKeyValue("op", op_type_);
+writer->WriteObjectKeyValue("name", name_);
+if (!inputs_.empty()) {
+  SetAttr("num_inputs", std::to_string(inputs_.size()));
+  SetAttr("num_outputs", std::to_string(num_outputs_));
+  writer->WriteObjectKeyValue("inputs", this->inputs_);
+}
+if (!attrs_.empty()) {
+  writer->WriteObjectKeyValue("attrs", attrs_);
+}
+writer->EndObject();
+  }
+
+  /*!
+   * \brief Load the attribute of a node in the json string.
+   * \param reader The json reader.
+   */
+  void LoadAttrs(dmlc::JSONReader* reader) {
+std::string key, value;
+reader->BeginObject();
+while (reader->NextObjectItem()) {
+  if (key == "num_inputs") {
+reader->Read();
+num_inputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "num_outputs") {
+reader->Read();
+num_outputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "dtype") {
+std::vector tmp;
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read();
+CHECK(!reader->NextArrayItem());
+for (const auto& it : tmp) {
+  dtype_.push_back(tvm::runtime::String2DLDataType(it));
+}
+  } else if (key == "shape") {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read(_);
+CHECK(!reader->NextArrayItem());
+  } else {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+std::vector tmp;
+reader->Read();
+attrs_[key] = tmp;
+  

[GitHub] [incubator-tvm] masahi commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-26 Thread GitBox


masahi commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r446427066



##
File path: src/runtime/contrib/json/json_node.h
##
@@ -0,0 +1,358 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/json/json_node.h
+ * \brief The graph nodes used by JSON runtime.
+ */
+
+#ifndef TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+#define TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace runtime {
+namespace json {
+
+using namespace tvm::runtime;
+using JSONGraphAttrs = std::unordered_map;
+
+/*!
+ * \brief The node entry in the serialized json graph.
+ */
+class JSONGraphNodeEntry {
+ public:
+  // Constructors.
+  JSONGraphNodeEntry() = default;
+  JSONGraphNodeEntry(int id, int index, int version = 0)
+  : id_(id), index_(index), version_(version) {}
+
+  /*!
+   * \brief Serialize a node entry.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) const {
+writer->BeginArray();
+writer->WriteArrayItem(id_);
+writer->WriteArrayItem(index_);
+writer->WriteArrayItem(version_);
+writer->EndArray();
+  }
+
+  /*!
+   * \brief Deserialize the json string into a node entry.
+   * \param reader The json reader.
+   */
+  void Load(dmlc::JSONReader* reader) {
+reader->BeginArray();
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(_);
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(_);
+if (reader->NextArrayItem()) {
+  reader->Read(_);
+  CHECK(!reader->NextArrayItem()) << "invalid json format";
+} else {
+  version_ = 0;
+}
+  }
+
+  /*! \brief The json graph node ID. */
+  uint32_t id_;
+  /*! \brief The entry index. */
+  uint32_t index_;
+  uint32_t version_;
+};
+
+/*!
+ * \brief The node of the serialized json graph. It includes an array of
+ * entries.
+ */
+class JSONGraphNode {
+ public:
+  // Constructors.
+  JSONGraphNode() = default;
+  JSONGraphNode(const std::string& name, const std::string& op_type,
+const std::vector<JSONGraphNodeEntry>& inputs = {}, size_t num_outputs = 1) {
+name_ = name;
+op_type_ = op_type;
+num_inputs_ = inputs.size();
+inputs_ = inputs;
+num_outputs_ = num_outputs;
+  }
+
+  /*!
+   * \brief Serialize a node so that it can be saved to disk.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) {
+writer->BeginObject();
+writer->WriteObjectKeyValue("op", op_type_);
+writer->WriteObjectKeyValue("name", name_);
+if (!inputs_.empty()) {
+  SetAttr("num_inputs", std::to_string(inputs_.size()));
+  SetAttr("num_outputs", std::to_string(num_outputs_));
+  writer->WriteObjectKeyValue("inputs", this->inputs_);
+}
+if (!attrs_.empty()) {
+  writer->WriteObjectKeyValue("attrs", attrs_);
+}
+writer->EndObject();
+  }
+
+  /*!
+   * \brief Load the attribute of a node in the json string.
+   * \param reader The json reader.
+   */
+  void LoadAttrs(dmlc::JSONReader* reader) {
+std::string key, value;
+reader->BeginObject();
+while (reader->NextObjectItem(&key)) {
+  if (key == "num_inputs") {
+reader->Read(&value);
+num_inputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "num_outputs") {
+reader->Read(&value);
+num_outputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "dtype") {
+std::vector<std::string> tmp;
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read(&tmp);
+CHECK(!reader->NextArrayItem());
+for (const auto& it : tmp) {
+  dtype_.push_back(tvm::runtime::String2DLDataType(it));
+}
+  } else if (key == "shape") {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read(&shape_);
+CHECK(!reader->NextArrayItem());
+  } else {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+std::vector<dmlc::any> tmp;
+reader->Read(&tmp);
+attrs_[key] = tmp;
+  
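   A minimal sketch of the serialized format implied by the Save/Load methods quoted above (illustrative Python only; the key names come straight from the quoted code, while the concrete values, including the "dnnl_conv2d" symbol, are hypothetical):
   
   ```python
   import json
   
   # A JSONGraphNodeEntry is serialized by Save() as the array [id, index, version].
   entry = [1, 0, 0]
   
   # A JSONGraphNode is serialized as an object with "op", "name", optional "inputs"
   # (a list of entries) and an "attrs" object. LoadAttrs() expects "dtype" and
   # "shape" to each be wrapped in a one-element JSON array.
   node = {
       "op": "kernel",
       "name": "dnnl_conv2d",
       "inputs": [[0, 0, 0], [1, 0, 0]],
       "attrs": {
           "num_inputs": "2",
           "num_outputs": "1",
           "dtype": [["float32"]],
           "shape": [[[1, 32, 14, 14]]],
       },
   }
   print(json.dumps(node, indent=2))
   ```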

[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-26 Thread GitBox


comaniac commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r446425905



##
File path: src/runtime/contrib/json/json_node.h
##
@@ -0,0 +1,358 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/json/json_node.h
+ * \brief The graph nodes used by JSON runtime.
+ */
+
+#ifndef TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+#define TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace runtime {
+namespace json {
+
+using namespace tvm::runtime;
+using JSONGraphAttrs = std::unordered_map<std::string, dmlc::any>;
+
+/*!
+ * \brief The node entry in the serialized json graph.
+ */
+class JSONGraphNodeEntry {
+ public:
+  // Constructors.
+  JSONGraphNodeEntry() = default;
+  JSONGraphNodeEntry(int id, int index, int version = 0)
+  : id_(id), index_(index), version_(version) {}
+
+  /*!
+   * \brief Serialize a node entry.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) const {
+writer->BeginArray();
+writer->WriteArrayItem(id_);
+writer->WriteArrayItem(index_);
+writer->WriteArrayItem(version_);
+writer->EndArray();
+  }
+
+  /*!
+   * \brief Deserialize the json string into a node entry.
+   * \param reader The json reader.
+   */
+  void Load(dmlc::JSONReader* reader) {
+reader->BeginArray();
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(&id_);
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(&index_);
+if (reader->NextArrayItem()) {
+  reader->Read(&version_);
+  CHECK(!reader->NextArrayItem()) << "invalid json format";
+} else {
+  version_ = 0;
+}
+  }
+
+  /*! \brief The json graph node ID. */
+  uint32_t id_;
+  /*! \brief The entry index. */
+  uint32_t index_;
+  uint32_t version_;
+};
+
+/*!
+ * \brief The node of the serialized json graph. It includes an array of
+ * entries.
+ */
+class JSONGraphNode {
+ public:
+  // Constructors.
+  JSONGraphNode() = default;
+  JSONGraphNode(const std::string& name, const std::string& op_type,
+const std::vector<JSONGraphNodeEntry>& inputs = {}, size_t num_outputs = 1) {
+name_ = name;
+op_type_ = op_type;
+num_inputs_ = inputs.size();
+inputs_ = inputs;
+num_outputs_ = num_outputs;
+  }
+
+  /*!
+   * \brief Serialize a node so that it can be saved to disk.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) {
+writer->BeginObject();
+writer->WriteObjectKeyValue("op", op_type_);
+writer->WriteObjectKeyValue("name", name_);
+if (!inputs_.empty()) {
+  SetAttr("num_inputs", std::to_string(inputs_.size()));
+  SetAttr("num_outputs", std::to_string(num_outputs_));
+  writer->WriteObjectKeyValue("inputs", this->inputs_);
+}
+if (!attrs_.empty()) {
+  writer->WriteObjectKeyValue("attrs", attrs_);
+}
+writer->EndObject();
+  }
+
+  /*!
+   * \brief Load the attribute of a node in the json string.
+   * \param reader The json reader.
+   */
+  void LoadAttrs(dmlc::JSONReader* reader) {
+std::string key, value;
+reader->BeginObject();
+while (reader->NextObjectItem(&key)) {
+  if (key == "num_inputs") {
+reader->Read(&value);
+num_inputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "num_outputs") {
+reader->Read(&value);
+num_outputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "dtype") {
+std::vector<std::string> tmp;
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read(&tmp);
+CHECK(!reader->NextArrayItem());
+for (const auto& it : tmp) {
+  dtype_.push_back(tvm::runtime::String2DLDataType(it));
+}
+  } else if (key == "shape") {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read(&shape_);
+CHECK(!reader->NextArrayItem());
+  } else {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+std::vector<dmlc::any> tmp;
+reader->Read(&tmp);
+attrs_[key] = tmp;

[GitHub] [incubator-tvm] masahi commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-26 Thread GitBox


masahi commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r446425296



##
File path: src/runtime/contrib/json/json_node.h
##
@@ -0,0 +1,358 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/json/json_node.h
+ * \brief The graph nodes used by JSON runtime.
+ */
+
+#ifndef TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+#define TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace runtime {
+namespace json {
+
+using namespace tvm::runtime;
+using JSONGraphAttrs = std::unordered_map<std::string, dmlc::any>;
+
+/*!
+ * \brief The node entry in the serialized json graph.
+ */
+class JSONGraphNodeEntry {
+ public:
+  // Constructors.
+  JSONGraphNodeEntry() = default;
+  JSONGraphNodeEntry(int id, int index, int version = 0)
+  : id_(id), index_(index), version_(version) {}
+
+  /*!
+   * \brief Serialize a node entry.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) const {
+writer->BeginArray();
+writer->WriteArrayItem(id_);
+writer->WriteArrayItem(index_);
+writer->WriteArrayItem(version_);
+writer->EndArray();
+  }
+
+  /*!
+   * \brief Deserialize the json string into a node entry.
+   * \param reader The json reader.
+   */
+  void Load(dmlc::JSONReader* reader) {
+reader->BeginArray();
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(&id_);
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(&index_);
+if (reader->NextArrayItem()) {
+  reader->Read(&version_);
+  CHECK(!reader->NextArrayItem()) << "invalid json format";
+} else {
+  version_ = 0;
+}
+  }
+
+  /*! \brief The json graph node ID. */
+  uint32_t id_;
+  /*! \brief The entry index. */
+  uint32_t index_;
+  uint32_t version_;
+};
+
+/*!
+ * \brief The node of the serialized json graph. It includes an array of
+ * entries.
+ */
+class JSONGraphNode {
+ public:
+  // Constructors.
+  JSONGraphNode() = default;
+  JSONGraphNode(const std::string& name, const std::string& op_type,
+const std::vector<JSONGraphNodeEntry>& inputs = {}, size_t num_outputs = 1) {
+name_ = name;
+op_type_ = op_type;
+num_inputs_ = inputs.size();
+inputs_ = inputs;
+num_outputs_ = num_outputs;
+  }
+
+  /*!
+   * \brief Serialize a node so that it can be saved to disk.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) {
+writer->BeginObject();
+writer->WriteObjectKeyValue("op", op_type_);
+writer->WriteObjectKeyValue("name", name_);
+if (!inputs_.empty()) {
+  SetAttr("num_inputs", std::to_string(inputs_.size()));
+  SetAttr("num_outputs", std::to_string(num_outputs_));
+  writer->WriteObjectKeyValue("inputs", this->inputs_);
+}
+if (!attrs_.empty()) {
+  writer->WriteObjectKeyValue("attrs", attrs_);
+}
+writer->EndObject();
+  }
+
+  /*!
+   * \brief Load the attribute of a node in the json string.
+   * \param reader The json reader.
+   */
+  void LoadAttrs(dmlc::JSONReader* reader) {
+std::string key, value;
+reader->BeginObject();
+while (reader->NextObjectItem(&key)) {
+  if (key == "num_inputs") {
+reader->Read(&value);
+num_inputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "num_outputs") {
+reader->Read(&value);
+num_outputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "dtype") {
+std::vector<std::string> tmp;
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read(&tmp);
+CHECK(!reader->NextArrayItem());
+for (const auto& it : tmp) {
+  dtype_.push_back(tvm::runtime::String2DLDataType(it));
+}
+  } else if (key == "shape") {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read(&shape_);
+CHECK(!reader->NextArrayItem());
+  } else {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+std::vector<dmlc::any> tmp;
+reader->Read(&tmp);
+attrs_[key] = tmp;
+  

[GitHub] [incubator-tvm] comaniac commented on pull request #5936: [CI] Install DNNL (OneDNN) to CI Environment

2020-06-26 Thread GitBox


comaniac commented on pull request #5936:
URL: https://github.com/apache/incubator-tvm/pull/5936#issuecomment-650406197


   OK, I will verify it on an EC2 A1 instance and get back here.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5936: [CI] Install DNNL (OneDNN) to CI Environment

2020-06-26 Thread GitBox


tqchen commented on pull request #5936:
URL: https://github.com/apache/incubator-tvm/pull/5936#issuecomment-650393102


   Please confirm that DNNL works with AMD CPUs as well, since we also have AMD CPUs in the CI.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5855: [RELAY][VM] Add shape_of instruction

2020-06-26 Thread GitBox


zhiics commented on a change in pull request #5855:
URL: https://github.com/apache/incubator-tvm/pull/5855#discussion_r446401587



##
File path: src/relay/op/vm/vm.cc
##
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/op/vm/vm.cc
+ * \brief Dialect operators for Relay VM.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../transforms/infer_layout_util.h"
+#include "../op_common.h"
+#include "../type_relations.h"
+
+namespace tvm {
+namespace relay {
+
+// Forward declare the shape_of type relation function.
+bool ShapeOfRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,

Review comment:
   Yeah, I was a bit hesitant to do this since there are only two uses. I will move the declaration to type_relations.h, since you share the same consideration.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac opened a new pull request #5936: [CI] Install DNNL (OneDNN) to CI Environment

2020-06-26 Thread GitBox


comaniac opened a new pull request #5936:
URL: https://github.com/apache/incubator-tvm/pull/5936


   The current BYOC flow uses DNNL as an example backend to demonstrate and 
test its functionality, but all related tests are currently skipped in CI due 
to the lack of DNNL in the environment. In this PR, we add scripts to deploy 
pre-built DNNL to the docker image so that, for example, the unit tests in #5919 can actually run.
   
   I've built a docker image locally and passed unit tests of #5919 with it.
   
   cc @zhiics @tqchen 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jroesch commented on a change in pull request #5826: [DYNAMIC] Add Dynamic reshape to a dynamic namespace and add DynamicToStatic Pass

2020-06-26 Thread GitBox


jroesch commented on a change in pull request #5826:
URL: https://github.com/apache/incubator-tvm/pull/5826#discussion_r446388132



##
File path: python/tvm/relay/op/dyn/transform.py
##
@@ -0,0 +1,74 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# pylint: disable=import-outside-toplevel
+"""Dynamic Transform operators."""
+
+from . import _make
+
+
+def reshape(data, newshape):

Review comment:
   I think it's okay to have a single interface in the Python code; at least, I talked with Haichen about it offline. It seems less confusing for end users who want to build models, but it will require that people understand the difference between the two ops in the backend (which is true either way).
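   As a rough illustration of what that single interface could look like (a sketch only; the dispatch rule is an assumption about the design being discussed, and the `_make`/`_dyn_make` constructor calls stand in for the actual op registrations):
   
   ```python
   from tvm import relay
   from tvm.relay.op import _make                    # static op constructors
   from tvm.relay.op.dyn import _make as _dyn_make   # dynamic op constructors (assumed path)
   
   
   def reshape(data, newshape):
       """Single user-facing reshape that hides the static/dynamic split."""
       if isinstance(newshape, relay.Expr):
           # Shape is only known at runtime: lower to the op in the dyn namespace.
           return _dyn_make.reshape(data, newshape)
       # Shape is known at compile time: keep using the existing static op.
       return _make.reshape(data, list(newshape))
   ```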





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-26 Thread GitBox


comaniac commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r446383112



##
File path: tests/python/relay/test_json_runtime.py
##
@@ -0,0 +1,625 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Unit tests for JSON codegen and runtime."""
+import os
+import sys
+
+import numpy as np
+
+import tvm
+import tvm.relay.op as reg
+import tvm.relay.testing
+from tvm import relay, runtime
+from tvm.contrib import util
+from tvm.relay import transform
+from tvm.relay.backend import compile_engine
+from tvm.relay.build_module import bind_params_by_name
+from tvm.relay.op.contrib.register import get_pattern_table
+
+
+def set_func_attr(func, compile_name, symbol_name):
+func = func.with_attr("Primitive", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Inline", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Compiler", compile_name)
+func = func.with_attr("global_symbol", symbol_name)
+return func
+
+
+def check_result(mod,
+ ref_mod,
+ map_inputs,
+ out_shape,
+ tol=1e-5,
+ target="llvm",
+ ctx=tvm.cpu(),
+ params=None):
+if sys.platform == "win32":
+print("Skip test on Windows for now")
+return
+
+# Run the reference result
+compile_engine.get().clear()
+with relay.build_config(opt_level=3):

Review comment:
   Good catch. Will change.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-26 Thread GitBox


comaniac commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r446382837



##
File path: src/runtime/contrib/json/json_node.h
##
@@ -0,0 +1,358 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/json/json_node.h
+ * \brief The graph nodes used by JSON runtime.
+ */
+
+#ifndef TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+#define TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace runtime {
+namespace json {
+
+using namespace tvm::runtime;
+using JSONGraphAttrs = std::unordered_map<std::string, dmlc::any>;
+
+/*!
+ * \brief The node entry in the serialized json graph.
+ */
+class JSONGraphNodeEntry {
+ public:
+  // Constructors.
+  JSONGraphNodeEntry() = default;
+  JSONGraphNodeEntry(int id, int index, int version = 0)
+  : id_(id), index_(index), version_(version) {}
+
+  /*!
+   * \brief Serialize a node entry.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) const {
+writer->BeginArray();
+writer->WriteArrayItem(id_);
+writer->WriteArrayItem(index_);
+writer->WriteArrayItem(version_);
+writer->EndArray();
+  }
+
+  /*!
+   * \brief Deserialize the json string into a node entry.
+   * \param reader The json reader.
+   */
+  void Load(dmlc::JSONReader* reader) {
+reader->BeginArray();
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(&id_);
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(&index_);
+if (reader->NextArrayItem()) {
+  reader->Read(&version_);
+  CHECK(!reader->NextArrayItem()) << "invalid json format";
+} else {
+  version_ = 0;
+}
+  }
+
+  /*! \brief The json graph node ID. */
+  uint32_t id_;
+  /*! \brief The entry index. */
+  uint32_t index_;
+  uint32_t version_;
+};
+
+/*!
+ * \brief The node of the serialized json graph. It includes an array of
+ * entries.
+ */
+class JSONGraphNode {
+ public:
+  // Constructors.
+  JSONGraphNode() = default;
+  JSONGraphNode(const std::string& name, const std::string& op_type,
+const std::vector<JSONGraphNodeEntry>& inputs = {}, size_t num_outputs = 1) {
+name_ = name;
+op_type_ = op_type;
+num_inputs_ = inputs.size();
+inputs_ = inputs;
+num_outputs_ = num_outputs;
+  }
+
+  /*!
+   * \brief Serialize a node so that it can be saved to disk.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) {
+writer->BeginObject();
+writer->WriteObjectKeyValue("op", op_type_);
+writer->WriteObjectKeyValue("name", name_);
+if (!inputs_.empty()) {
+  SetAttr("num_inputs", std::to_string(inputs_.size()));
+  SetAttr("num_outputs", std::to_string(num_outputs_));
+  writer->WriteObjectKeyValue("inputs", this->inputs_);
+}
+if (!attrs_.empty()) {
+  writer->WriteObjectKeyValue("attrs", attrs_);
+}
+writer->EndObject();
+  }
+
+  /*!
+   * \brief Load the attribute of a node in the json string.
+   * \param reader The json reader.
+   */
+  void LoadAttrs(dmlc::JSONReader* reader) {
+std::string key, value;
+reader->BeginObject();
+while (reader->NextObjectItem(&key)) {
+  if (key == "num_inputs") {
+reader->Read(&value);
+num_inputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "num_outputs") {
+reader->Read(&value);
+num_outputs_ = strtoul(value.c_str(), nullptr, 10);
+  } else if (key == "dtype") {
+std::vector<std::string> tmp;
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read(&tmp);
+CHECK(!reader->NextArrayItem());
+for (const auto& it : tmp) {
+  dtype_.push_back(tvm::runtime::String2DLDataType(it));
+}
+  } else if (key == "shape") {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+reader->Read(&shape_);
+CHECK(!reader->NextArrayItem());
+  } else {
+reader->BeginArray();
+CHECK(reader->NextArrayItem());
+std::vector<dmlc::any> tmp;
+reader->Read(&tmp);
+attrs_[key] = tmp;

[GitHub] [incubator-tvm] weberlo commented on a change in pull request #5932: [Frontend][Relay] Add Parser 2.0

2020-06-26 Thread GitBox


weberlo commented on a change in pull request #5932:
URL: https://github.com/apache/incubator-tvm/pull/5932#discussion_r446348563



##
File path: src/parser/parser.cc
##
@@ -0,0 +1,968 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file parser.cc
+ * \brief A parser for TVM IR.
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "./tokenizer.h"
+
+namespace tvm {
+namespace parser {
+
+using namespace relay;
+using Expr = relay::Expr;
+
+// expr
+//   // operators
+//   : '(' expr ')' # paren
+//   // function application
+//   | expr '(' callList ')'# call
+//   | '-' expr # neg
+//   | expr op=('*'|'/') expr   # binOp
+//   | expr op=('+'|'-') expr   # binOp
+//   | expr op=('<'|'>'|'<='|'>=') expr # binOp
+//   | expr op=('=='|'!=') expr # binOp
+//   // function definition
+//   | func # funcExpr
+//   // tuples and tensors
+//   | '(' ')'  # tuple
+//   | '(' expr ',' ')' # tuple
+//   | '(' expr (',' expr)+ ')' # tuple
+//   | '[' (expr (',' expr)*)? ']'  # tensor
+//   | 'if' '(' expr ')' body 'else' body   # ifElse
+//   | matchType expr '{' matchClauseList? '}'  # match
+//   | expr '.' NAT # projection
+//   // sequencing
+//   | 'let' var '=' expr ';' expr  # let
+//   // sugar for let %_ = expr; expr
+//   | expr ';;' expr   # let
+//   | graphVar '=' expr ';' expr   # graph
+//   | ident# identExpr
+//   | scalar   # scalarExpr
+//   | meta # metaExpr
+//   | QUOTED_STRING# stringExpr
+//   ;
+
+// func: 'fn' typeParamList? '(' argList ')' ('->' typeExpr)? body ;
+// defn
+//   : 'def' globalVar typeParamList? '(' argList ')' ('->' typeExpr)? body  # funcDefn
+//   | 'extern' 'type' generalIdent typeParamList?   # externAdtDefn
+//   | 'type' generalIdent typeParamList? '{' adtConsDefnList? '}'   # adtDefn
+//   ;
+
+// constructorName: CNAME ;
+
+// adtConsDefnList: adtConsDefn (',' adtConsDefn)* ','? ;
+// adtConsDefn: constructorName ('(' typeExpr (',' typeExpr)* ')')? ;
+// matchClauseList: matchClause (',' matchClause)* ','? ;
+// matchClause: pattern '=>' ('{' expr '}' | expr) ;
+// // complete or incomplete match, respectively
+// matchType : 'match' | 'match?' ;
+
+// patternList: '(' pattern (',' pattern)* ')';
+// pattern
+//   : '_' # wildcardPattern
+//   | localVar (':' typeExpr)?# varPattern
+//   | constructorName patternList?# constructorPattern
+//   | patternList # tuplePattern
+//   ;
+
+// adtCons: constructorName adtConsParamList? ;
+// adtConsParamList: '(' adtConsParam (',' adtConsParam)* ')' ;
+// adtConsParam: localVar | constructorName ;
+
+// argList
+//   : varList # argNoAttr
+//   | (var ',')* attrSeq  # argWithAttr
+//   ;
+
+// varList: (var (',' var)*)? ;
+// var: localVar (':' typeExpr)? ;
+
+// attrSeq: attr (',' attr)* ;
+// attr: CNAME '=' expr ;
+
+// typeExpr
+//   : '(' ')'   # tupleType
+//   | '(' typeExpr ')'   # typeParen
+//   | '(' typeExpr ',' ')'   # tupleType
+//   | '(' typeExpr (',' typeExpr)+ ')'   # tupleType
+//   | generalIdent typeParamList   # typeCallType
+//   | generalIdent   # typeIdentType
+//   | 'Tensor' '[' shapeList ',' typeExpr ']'   # tensorType
+//   | 'fn' typeParamList? '(' (typeExpr (',' typeExpr)*)? ')' '->' typeExpr  # funcType
+//   | '_' 

[GitHub] [incubator-tvm] weberlo commented on pull request #5932: [Frontend][Relay] Add Parser 2.0

2020-06-26 Thread GitBox


weberlo commented on pull request #5932:
URL: https://github.com/apache/incubator-tvm/pull/5932#issuecomment-650369009


   A few thoughts:
   It's not clear to me that modifying this parser is any easier than the 
current parser.  One could make a case that the current parser is suboptimal, 
because ANTLR does a sort of "covering parse", and 
[_parser.py](https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/_parser.py)
 then does **another** stage of parsing that incorporates context, but I would 
argue there's value in this separation of concerns, because you no longer need 
to worry about the syntactic components of parsing (e.g., [precedence and 
associativity](https://github.com/apache/incubator-tvm/pull/5932/files#diff-807cc0a7f01f9113c1903d4614a3649dR749-R769)).
   
   Another benefit of using a parser generator like ANTLR is that you have a 
[specification](https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/grammar/Relay.g4)
 of the language that serves as documentation **and** defines the parsing 
behavior, keeping the documentation always up to date.
   
   I see the value in error reporting integration and removing the external 
dependency, but it would be good to further motivate these changes and maybe 
find ways to further modularize version 2.0 to make it noob-friendly.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] yzhliu commented on pull request #5618: [Arith] Inequalities solver

2020-06-26 Thread GitBox


yzhliu commented on pull request #5618:
URL: https://github.com/apache/incubator-tvm/pull/5618#issuecomment-650352627


   @sergei-grechanik @ANSHUMAN87 @tqchen @MarisaKirisame Please take a look 
again.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5618: [Arith] Inequalities solver

2020-06-26 Thread GitBox


yzhliu commented on a change in pull request #5618:
URL: https://github.com/apache/incubator-tvm/pull/5618#discussion_r446365570



##
File path: tests/python/unittest/test_arith_solve_linear_inequality.py
##
@@ -0,0 +1,166 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import random
+import sys
+import pytest
+import tvm
+from tvm import te, arith, ir, tir, testing
+
+
+def test_solve_system_of_inequalities():
+seed = random.randrange(sys.maxsize)
+print("\nThis test is intentionally non-deterministic, "
+  "if it fails please report it in github issue together with this 
seed {}\n".format(seed))
+random.seed(seed)
+
+def _check(variables, formulas, coef=(-5, 5), bounds=(-20, 20)):
+vs = [te.var("x" + str(i)) for i in range(variables)]
+
+fs = []
+for i in range(formulas):
+s1 = sum([v*random.randint(coef[0], coef[1]) for v in vs])
+s1 += random.randint(coef[0], coef[1])
+s2 = sum([v*random.randint(coef[0], coef[1]) for v in vs])
+s2 += random.randint(coef[0], coef[1])
+op = random.choice([tir.expr.EQ, tir.expr.LE, tir.expr.LT, tir.expr.GE, tir.expr.GT])
+fs.append(op(s1, s2))
+
+vranges = {v: tvm.ir.expr.Range(bounds[0], bounds[1] + 1) for v in vs}
+before = te.all(tir.const(1, 'bool'), *fs)
+after = arith._ffi_api.SolveInequalitiesAsCondition(vs, vranges, fs)
+after = te.all(tir.const(1, 'bool'), *after)
+testing.check_bool_expr_is_true(before == after, vranges)

Review comment:
   The issue is resolved after merging the canonical-simplify fix. I also removed the workaround for equations.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on pull request #5826: [DYNAMIC] Add Dynamic reshape to a dynamic namespace and add DynamicToStatic Pass

2020-06-26 Thread GitBox


mbrookhart commented on pull request #5826:
URL: https://github.com/apache/incubator-tvm/pull/5826#issuecomment-650348127


   @icemelon9 @lixiaoquan Per Haichen's request, I removed the dynamic option 
from the standard reshape and changed the places that used it to use the new 
purely dynamic op. This touches many more files, could you take another look?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tmoreau89 commented on pull request #5842: [VTA][OpenCL] Cloud FPGA support

2020-06-26 Thread GitBox


tmoreau89 commented on pull request #5842:
URL: https://github.com/apache/incubator-tvm/pull/5842#issuecomment-650347598


   In addition, were you able to put a guide together? Which Intel FPGA have you tested this on?
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tmoreau89 commented on pull request #5842: [VTA][OpenCL] Cloud FPGA support

2020-06-26 Thread GitBox


tmoreau89 commented on pull request #5842:
URL: https://github.com/apache/incubator-tvm/pull/5842#issuecomment-650347427


   @zhanghaohit  good news, the end to end tests ran successfully on VTA 
hardware with your changes. Given that this PR brings modifications to the ISA 
spec, we need to bump the `HW_VER` string from `0.0.1` to `0.0.2`. I can upload 
the Pynq bitstream so it's ready for use. I will need some help from @liangfu 
to re-generate the DE-10 bitstream as well.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5618: [Arith] Inequalities solver

2020-06-26 Thread GitBox


yzhliu commented on a change in pull request #5618:
URL: https://github.com/apache/incubator-tvm/pull/5618#discussion_r446359570



##
File path: tests/python/unittest/test_arith_solve_linear_inequality.py
##
@@ -0,0 +1,187 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import random
+import sys
+import pytest
+import tvm
+from tvm import te, arith, ir, tir, testing
+
+
+def test_solution_consistency():
+seed = random.randrange(sys.maxsize)
+print("\nThis test is intentionally non-deterministic, "
+  "if it fails please report it in github issue together with this 
seed {}\n".format(seed))
+random.seed(seed)
+
+def _check(variables, formulas, coef=(-5, 5), bounds=(-20, 20)):
+vs = [te.var("x" + str(i)) for i in range(variables)]
+
+fs = []
+for i in range(formulas):
+s1 = sum([v*random.randint(coef[0], coef[1]) for v in vs])
+s1 += random.randint(coef[0], coef[1])
+s2 = sum([v*random.randint(coef[0], coef[1]) for v in vs])
+s2 += random.randint(coef[0], coef[1])
+op = random.choice([tir.expr.EQ, tir.expr.LE, tir.expr.LT, tir.expr.GE, tir.expr.GT])
+fs.append(op(s1, s2))
+
+vranges = {v: tvm.ir.expr.Range(bounds[0], bounds[1] + 1) for v in vs}
+before = te.all(tir.const(1, 'bool'), *fs)
+after = arith._ffi_api.SolveInequalitiesAsCondition(vs, vranges, fs)
+after = te.all(tir.const(1, 'bool'), *after)
+testing.check_bool_expr_is_true(before == after, vranges)
+
+solution = arith.solve_linear_inequalities(fs, vs, vranges, deskew_range=True)
+testing.check_int_constraints_trans_consistency(solution)
+
+for i in range(3):
+_check(1, 1)
+for i in range(3):
+_check(1, 2)
+
+for i in range(3):
+_check(2, 1)
+for i in range(3):
+_check(2, 2)
+for i in range(3):
+_check(2, 3)
+
+# Somewhere here coefficients in the results become too large, leading to overflow,
+# so we use smaller initial coefficients
+for i in range(5):
+_check(3, 3, coef=(-2, 2))
+for i in range(5):
+_check(3, 4, coef=(-2, 2))
+
+for i in range(5):
+_check(4, 3, coef=(-1, 1))
+
+for i in range(5):
+_check(10, 2, coef=(-1, 1), bounds=(0, 4))
+for i in range(5):
+_check(10, 3, coef=(0, 1), bounds=(0, 4))
+
+
+def test_dual_variable():
+x, y = te.var("x"), te.var("y")
+
+variables = [x, y]
+ranges = {
+x: tvm.ir.Range(-100, 100),
+y: tvm.ir.Range(0, 10),
+}
+problem = [
+tvm.tir.LE(x + y, 20),
+tvm.tir.GE(x - y, 10),
+]
+
+# solution as conditions
+solution = arith._ffi_api.SolveInequalitiesAsCondition(variables, ranges, problem)
+assert ir.structural_equal(solution[0], x >= (y + 10))
+assert ir.structural_equal(solution[1], x <= (20 - y))
+assert ir.structural_equal(solution[2], y >= 0)
+assert ir.structural_equal(solution[3], y <= 5)
+
+# solve and get the ranges
+solution = arith.solve_linear_inequalities([
+tvm.tir.LE(x + y, 20),
+tvm.tir.GE(x - y, 10),
+], [x, y], ranges)
+# 0 <= y <=5
+assert solution.ranges[y].min == 0
+assert solution.ranges[y].extent == 6
+# y + 10 <= x <= 20 - y
+assert ir.structural_equal(solution.ranges[x].min, y + 10)
+assert solution.ranges[x].extent == 11  # max(10 - 2y)
+
+# deskew the solved ranges to be starting from zero
+solution = arith.solve_linear_inequalities(problem, variables, ranges, deskew_range=True)
+[x_new, y_new] = solution.dst.variables
+[rel] = solution.dst.relations
+assert ir.structural_equal(rel, (y_new*2) + x_new <= 10)
+assert ir.structural_equal(solution.dst.ranges[x_new].min, 0)
+assert ir.structural_equal(solution.dst.ranges[x_new].extent, 11)
+assert ir.structural_equal(solution.dst.ranges[y_new].min, 0)
+assert ir.structural_equal(solution.dst.ranges[y_new].extent, 6)
+assert ir.structural_equal(solution.src_to_dst[x], x_new + (y_new + 10))
+assert ir.structural_equal(solution.src_to_dst[y], y_new)
+assert 

[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5618: [Arith] Inequalities solver

2020-06-26 Thread GitBox


yzhliu commented on a change in pull request #5618:
URL: https://github.com/apache/incubator-tvm/pull/5618#discussion_r446359262



##
File path: python/tvm/testing.py
##
@@ -188,5 +189,96 @@ def assert_prim_expr_equal(lhs, rhs):
 raise ValueError("{} and {} are not equal".format(lhs, rhs))
 
 
+def check_bool_expr_is_true(bool_expr, vranges, cond=None):
+""" Check that bool_expr holds given the condition cond
+for every value of free variables from vranges.
+
+Parameters
+--
+bool_expr : tvm.ir.expr.PrimExpr
+Boolean expression to check
+vranges: Dict[tvm.tir.expr.Var, tvm.ir.Range]
+Free variables and their ranges
+cond: tvm.ir.expr.PrimExpr
+extra conditions needs to be satisfied.
+"""
+if cond is not None:
+bool_expr = tvm.te.any(tvm.tir.Not(cond), bool_expr)

Review comment:
   For example, `2x > 4y` solves to `x > 2y` given x \in (0, 10), y \in (0, 10).
   It essentially creates the iterations
   ```python
   for x in range(10):
       for y in range(10):
           assert !(2x > 4y) || (x > 2y)
   ```
   I'll add the above to the comments.
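   To make the sketch above concrete, here is a small self-contained brute-force checker (plain Python and purely illustrative; `brute_force_check` is not part of the TVM testing helpers):
   
   ```python
   import itertools
   
   
   def brute_force_check(cond, expr, bounds):
       """Assert the implication (not cond) or expr for every assignment in bounds."""
       names = list(bounds)
       for values in itertools.product(*(bounds[n] for n in names)):
           env = dict(zip(names, values))
           assert (not cond(env)) or expr(env)
   
   
   # `2x > 4y` solves to `x > 2y` for x, y in [0, 10).
   brute_force_check(
       cond=lambda e: 2 * e["x"] > 4 * e["y"],
       expr=lambda e: e["x"] > 2 * e["y"],
       bounds={"x": range(10), "y": range(10)},
   )
   ```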





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on pull request #5910: [Please Do Not Review] Trying to Unlink libcuda.so

2020-06-26 Thread GitBox


junrushao1994 commented on pull request #5910:
URL: https://github.com/apache/incubator-tvm/pull/5910#issuecomment-650343229


   Just FYI, if you want to run CI locally, you may find the commands in Jenkins helpful. More specifically, you can use commands like the one below to run exactly what the TVM CI runs:
   
   ```
   # under tvm's root directory
   ./docker/bash.sh tvmai/ci-i386:v0.52 ./tests/scripts/task_build.sh build -j12
   ```
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #5919: [BYOC] JSON Runtime with DNNL End-to-End Flow

2020-06-26 Thread GitBox


lhutton1 commented on a change in pull request #5919:
URL: https://github.com/apache/incubator-tvm/pull/5919#discussion_r445562478



##
File path: tests/python/relay/test_json_runtime.py
##
@@ -0,0 +1,625 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Unit tests for JSON codegen and runtime."""
+import os
+import sys
+
+import numpy as np
+
+import tvm
+import tvm.relay.op as reg
+import tvm.relay.testing
+from tvm import relay, runtime
+from tvm.contrib import util
+from tvm.relay import transform
+from tvm.relay.backend import compile_engine
+from tvm.relay.build_module import bind_params_by_name
+from tvm.relay.op.contrib.register import get_pattern_table
+
+
+def set_func_attr(func, compile_name, symbol_name):
+func = func.with_attr("Primitive", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Inline", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Compiler", compile_name)
+func = func.with_attr("global_symbol", symbol_name)
+return func
+
+
+def check_result(mod,
+ ref_mod,
+ map_inputs,
+ out_shape,
+ tol=1e-5,
+ target="llvm",
+ ctx=tvm.cpu(),
+ params=None):
+if sys.platform == "win32":
+print("Skip test on Windows for now")
+return
+
+# Run the reference result
+compile_engine.get().clear()
+with relay.build_config(opt_level=3):

Review comment:
   Better to replace with tvm.transform.PassContext?

##
File path: src/runtime/contrib/json/json_node.h
##
@@ -0,0 +1,358 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/json/json_node.h
+ * \brief The graph nodes used by JSON runtime.
+ */
+
+#ifndef TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+#define TVM_RUNTIME_CONTRIB_JSON_JSON_NODE_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace runtime {
+namespace json {
+
+using namespace tvm::runtime;
+using JSONGraphAttrs = std::unordered_map<std::string, dmlc::any>;
+
+/*!
+ * \brief The node entry in the serialized json graph.
+ */
+class JSONGraphNodeEntry {
+ public:
+  // Constructors.
+  JSONGraphNodeEntry() = default;
+  JSONGraphNodeEntry(int id, int index, int version = 0)
+  : id_(id), index_(index), version_(version) {}
+
+  /*!
+   * \brief Serialize a node entry.
+   * \param writer The json writer.
+   */
+  void Save(dmlc::JSONWriter* writer) const {
+writer->BeginArray();
+writer->WriteArrayItem(id_);
+writer->WriteArrayItem(index_);
+writer->WriteArrayItem(version_);
+writer->EndArray();
+  }
+
+  /*!
+   * \brief Deserialize the json string into a node entry.
+   * \param reader The json reader.
+   */
+  void Load(dmlc::JSONReader* reader) {
+reader->BeginArray();
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(&id_);
+CHECK(reader->NextArrayItem()) << "invalid json format";
+reader->Read(&index_);
+if (reader->NextArrayItem()) {
+  reader->Read(&version_);
+  CHECK(!reader->NextArrayItem()) << "invalid json format";
+} else {
+  version_ = 0;
+}
+  }
+
+  /*! \brief The json graph node ID. */
+  uint32_t id_;
+  /*! \brief The entry index. */
+  uint32_t index_;
+  uint32_t version_;
+};
+
+/*!
+ * \brief The node of the serialized json graph. It includes an array of
+ * entries.
+ */
+class 

[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5618: [Arith] Inequalities solver

2020-06-26 Thread GitBox


yzhliu commented on a change in pull request #5618:
URL: https://github.com/apache/incubator-tvm/pull/5618#discussion_r446351576



##
File path: include/tvm/arith/int_solver.h
##
@@ -191,6 +286,56 @@ void 
SmithNormalFormDiag(std::vector>* S, std::vector= f0(x1, x2, ..., xn)
+ *x0 <= g0(x1, x2, ..., xn)
+ *x1 >= f1(x2, ..., xn)
+ *x1 <= g1(x2, ..., xn)
+ *...
+ *xn >= fn()  // just a constant
+ *xn <= gn()  // just a constant
+ *
+ * \return A map of variables and their solved bounds,
+ * and constrains that cannot be solved to bounds.
+ */
+PartialSolvedInequalities SolveLinearInequalities(const IntConstraints& system_to_solve);
+
+/*!
+ * \brief Solve linear inequalities and infer the range of each variable.
+ * \param system_to_solve the variables to solve, their ranges, and a list of inequalities.
+ * \return The result ranges for each variables.
+ * The returned IntConstraints(variables, ranges, relations) contains,
+ * 1. variables  - the variables that have been solved.
+ * 2. ranges - the best range of each variable.
+ * 3. relations  - constraints that cannot be transformed to
+ * Range will be stored in relations.
+ */
+IntConstraints SolveInequalitiesToRange(const IntConstraints& system_to_solve);
+
+/*!
+ * \brief Solve linear inequalities and deskew the ranges towards zero.
+ * \param system_to_solve the variables to solve, their ranges, and a list of inequalities.
+ * \return A transform (src IntConstraints -> dst IntConstraints)
+ * from original variables to a set of new variables.
+ * The ranges of new variables always start from zero,
+ * their extents are solved from \p system_to_solve.
+ * src IntConstraints is the same as \p system_to_solve.
+ * dst IntConstraints(variables, ranges, relations) contains,
+ * 1. variables  - the variables that have been solved.
+ * 2. ranges - the best range (start from zero) of each variable.
+ * 3. relations  - constraints that cannot be transformed to
+ * Range will be stored in relations.
+ * Variable mapping can be obtained from
+ * IntConstraintsTransform.src_to_dst and IntConstraintsTransform.dst_to_src.
+ */
+IntConstraintsTransform SolveInequalitiesDeskewRange(const IntConstraints& system_to_solve);

Review comment:
   `SolveInequalitiesToRange` finds the best range for each variable in the inequalities and stores it in `IntConstraints.ranges`. The reason we need `IntConstraints` instead of a plain `Map` is that there may be equations/inequalities we cannot handle; rather than silently dropping them, we store them in `IntConstraints.relations`.
   
   `SolveInequalitiesDeskewRange` does one more step: it creates new variables that satisfy the original inequality constraints yet always start from zero. This is very useful when we deal with iteration indices in tvm compute. Maintaining the mapping between the old and new variables is necessary because 1) we need it to transform the old expression (tvm compute) into the new one, and 2) `SolveInequalitiesDeskewRange` can be run multiple times (until the results converge) to get better results, so a chain of mappings is needed.
   
   You might ask why the user can't invoke `SolveInequalitiesToRange` first and deskew to zero themselves. It's because the variable ranges are solved iteratively: one variable's range depends on the previously solved ranges. `FindBestRange of var a -> Deskew a -> FindBestRange of var b (given deskewed a) -> Deskew b (given deskewed a) -> ...` is much more precise than `FindBestRange of var a -> FindBestRange of var b (given range a) -> Deskew a & b independently`.
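   A minimal usage sketch of the two entry points described above, mirroring the `test_dual_variable` case quoted earlier in this thread (the values in the comments come from that test, not from a fresh derivation):
   
   ```python
   import tvm
   from tvm import te, arith
   
   x, y = te.var("x"), te.var("y")
   ranges = {x: tvm.ir.Range(-100, 100), y: tvm.ir.Range(0, 10)}
   problem = [tvm.tir.LE(x + y, 20), tvm.tir.GE(x - y, 10)]
   
   # Solve to ranges: 0 <= y <= 5 and y + 10 <= x <= 20 - y.
   solution = arith.solve_linear_inequalities(problem, [x, y], ranges)
   print(solution.ranges[y])          # range starting at 0 with extent 6
   
   # Deskewed form: the new variables both start at zero, and src_to_dst records
   # how the original x and y map onto them.
   deskewed = arith.solve_linear_inequalities(problem, [x, y], ranges, deskew_range=True)
   print(deskewed.src_to_dst[x])      # x_new + (y_new + 10)
   ```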





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5618: [Arith] Inequalities solver

2020-06-26 Thread GitBox


yzhliu commented on a change in pull request #5618:
URL: https://github.com/apache/incubator-tvm/pull/5618#discussion_r446351576



##
File path: include/tvm/arith/int_solver.h
##
@@ -191,6 +286,56 @@ void 
SmithNormalFormDiag(std::vector>* S, std::vector= f0(x1, x2, ..., xn)
+ *x0 <= g0(x1, x2, ..., xn)
+ *x1 >= f1(x2, ..., xn)
+ *x1 <= g1(x2, ..., xn)
+ *...
+ *xn >= fn()  // just a constant
+ *xn <= gn()  // just a constant
+ *
+ * \return A map of variables and their solved bounds,
+ * and constraints that cannot be solved to bounds.
+ */
+PartialSolvedInequalities SolveLinearInequalities(const IntConstraints& 
system_to_solve);
+
+/*!
+ * \brief Solve linear inequalities and infer the range of each variable.
+ * \param system_to_solve the variables to solve, their ranges, and a list of 
inequalities.
+ * \return The result ranges for each variable.
+ * The returned IntConstraints(variables, ranges, relations) contains,
+ * 1. variables  - the variables that have been solved.
+ * 2. ranges - the best range of each variable.
+ * 3. relations  - constraints that cannot be transformed to
+ * Range will be stored in relations.
+ */
+IntConstraints SolveInequalitiesToRange(const IntConstraints& system_to_solve);
+
+/*!
+ * \brief Solve linear inequalities and deskew the ranges towards zero.
+ * \param system_to_solve the variables to solve, their ranges, and a list of 
inequalities.
+ * \return A transform (src IntConstraints -> dst IntConstraints)
+ * from original variables to a set of new variables.
+ * The ranges of new variables always start from zero,
+ * their extents are solved from \p system_to_solve.
+ * src IntConstraints is the same as \p system_to_solve.
+ * dst IntConstraints(variables, ranges, relations) contains,
+ * 1. variables  - the variables that have been solved.
+ * 2. ranges - the best range (start from zero) of each variable.
+ * 3. relations  - constraints that cannot be transformed to
+ * Range will be stored in relations.
+ * Variable mapping can be obtained from
+ * IntConstraintsTransform.src_to_dst and 
IntConstraintsTransform.dst_to_src.
+ */
+IntConstraintsTransform SolveInequalitiesDeskewRange(const IntConstraints& 
system_to_solve);

Review comment:
   `SolveInequalitiesToRange` finds the best range for each variable in the
inequalities and stores it in `IntConstraints.ranges`. The reason we need
`IntConstraints` instead of a plain `Map` is that there might be some
equations/inequalities we cannot handle; rather than silently dropping them, we
store them in `IntConstraints.relations`.
   
   `SolveInequalitiesDeskewRange` does one more step: it creates new variables
that satisfy the original inequality constraints, yet whose ranges always start
from zero. This is very useful when we deal with iteration indices in tvm
compute. Maintaining the transform between the old and new variables is
necessary because 1) we need to transform the old expression (tvm compute) to
the new one, and 2) `SolveInequalitiesDeskewRange` can be run multiple times
(until the results converge) to get better results, so a chain of transforms is
needed.
   
   You might wonder why the user can't invoke `SolveInequalitiesToRange` first
and deskew to zero themselves. It's because variable ranges are solved
iteratively: one variable's range depends on the previously solved ranges.
`FindBestRange of var a -> Deskew a -> FindBestRange of var b (given deskewed
a) -> Deskew b (given deskewed a) -> ...` is much more precise than
`FindBestRange of var a -> FindBestRange of var b (given range a) -> Deskew a
& b independently`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5618: [Arith] Inequalities solver

2020-06-26 Thread GitBox


yzhliu commented on a change in pull request #5618:
URL: https://github.com/apache/incubator-tvm/pull/5618#discussion_r446351576



##
File path: include/tvm/arith/int_solver.h
##
@@ -191,6 +286,56 @@ void SmithNormalFormDiag(...)
+ *x0 >= f0(x1, x2, ..., xn)
+ *x0 <= g0(x1, x2, ..., xn)
+ *x1 >= f1(x2, ..., xn)
+ *x1 <= g1(x2, ..., xn)
+ *...
+ *xn >= fn()  // just a constant
+ *xn <= gn()  // just a constant
+ *
+ * \return A map of variables and their solved bounds,
+ * and constraints that cannot be solved to bounds.
+ */
+PartialSolvedInequalities SolveLinearInequalities(const IntConstraints& 
system_to_solve);
+
+/*!
+ * \brief Solve linear inequalities and infer the range of each variable.
+ * \param system_to_solve the variables to solve, their ranges, and a list of 
inequalities.
+ * \return The result ranges for each variable.
+ * The returned IntConstraints(variables, ranges, relations) contains,
+ * 1. variables  - the variables that have been solved.
+ * 2. ranges - the best range of each variable.
+ * 3. relations  - constraints that cannot be transformed to
+ * Range will be stored in relations.
+ */
+IntConstraints SolveInequalitiesToRange(const IntConstraints& system_to_solve);
+
+/*!
+ * \brief Solve linear inequalities and deskew the ranges towards zero.
+ * \param system_to_solve the variables to solve, their ranges, and a list of 
inequalities.
+ * \return A transform (src IntConstraints -> dst IntConstraints)
+ * from original variables to a set of new variables.
+ * The ranges of new variables always start from zero,
+ * their extents are solved from \p system_to_solve.
+ * src IntConstraints is the same as \p system_to_solve.
+ * dst IntConstraints(variables, ranges, relations) contains,
+ * 1. variables  - the variables that have been solved.
+ * 2. ranges - the best range (start from zero) of each variable.
+ * 3. relations  - constraints that cannot be transformed to
+ * Range will be stored in relations.
+ * Variable mapping can be obtained from
+ * IntConstraintsTransform.src_to_dst and 
IntConstraintsTransform.dst_to_src.
+ */
+IntConstraintsTransform SolveInequalitiesDeskewRange(const IntConstraints& 
system_to_solve);

Review comment:
   `SolveInequalitiesToRange` finds the best range for each variable in the
inequalities and stores it in `IntConstraints.ranges`. The reason we need
`IntConstraints` instead of a plain `Map` is that there might be some
equations/inequalities we cannot handle; rather than silently dropping them, we
store them in `IntConstraints.relations`.
   
   `SolveInequalitiesDeskewRange` does one more step: it creates new variables
that satisfy the original inequality constraints, but whose ranges always start
from zero. This is very useful when we deal with iteration indices in tvm
compute. Maintaining the transform between the old and new variables is
necessary because 1) we need to transform the old expression (tvm compute) to
the new one, and 2) `SolveInequalitiesDeskewRange` can be run multiple times
(until the results converge) to get better results, so a chain of transforms is
needed.
   
   You might wonder why the user can't invoke `SolveInequalitiesToRange` first
and deskew to zero themselves. It's because variable ranges are solved
iteratively: one variable's range depends on the previously solved ranges.
`FindBestRange of var a -> Deskew a -> FindBestRange of var b (given deskewed
a) -> Deskew b (given deskewed a) -> ...` is much more precise than
`FindBestRange of var a -> FindBestRange of var b (given range a) -> Deskew a
& b independently`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #5855: [RELAY][VM] Add shape_of instruction

2020-06-26 Thread GitBox


icemelon9 commented on a change in pull request #5855:
URL: https://github.com/apache/incubator-tvm/pull/5855#discussion_r446347040



##
File path: src/relay/op/vm/vm.cc
##
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/op/vm/vm.cc
+ * \brief Dialect operators for Relay VM.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../transforms/infer_layout_util.h"
+#include "../op_common.h"
+#include "../type_relations.h"
+
+namespace tvm {
+namespace relay {
+
+// Forward declare the shape_of type relation function.
+bool ShapeOfRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,

Review comment:
   We could just put `ShapeOfRel` into a header file.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5618: [Arith] Inequalities solver

2020-06-26 Thread GitBox


yzhliu commented on a change in pull request #5618:
URL: https://github.com/apache/incubator-tvm/pull/5618#discussion_r446343013



##
File path: include/tvm/arith/int_solver.h
##
@@ -26,17 +26,110 @@
 
 #include 
 #include 
+#include 
 
 #include 
+#include 
 #include 
 
+#include "analyzer.h"
+
 namespace tvm {
 namespace arith {
 
 using tir::IterVar;
 using tir::Var;
 using tir::VarNode;
 
+/*!
+ * \brief Represent integer grouped bounds which are classified into
+ *lower bounds (inclusive), upper bounds (inclusive) and equalities.
+ *It also contains coefficient as a multiplier for the bounds, i.e.,
+ *coef * var >= lower
+ *coef * var == equal
+ *coef * var <= upper
+ * \sa IntGrpBounds
+ */
+class IntGrpBoundsNode : public Object {
+ public:
+  PrimExpr coef;
+  Array lower;
+  Array equal;
+  Array upper;
+
+  void VisitAttrs(tvm::AttrVisitor* v) {
+v->Visit("coef", );
+v->Visit("lower", );
+v->Visit("equal", );
+v->Visit("upper", );
+  }
+
+  bool SEqualReduce(const IntGrpBoundsNode* other, SEqualReducer eq) const {
+return eq(coef, other->coef) && eq(lower, other->lower) && eq(equal, 
other->equal) &&
+   eq(upper, other->upper);
+  }
+
+  void SHashReduce(SHashReducer hash_reduce) const {
+hash_reduce(coef);
+hash_reduce(lower);
+hash_reduce(equal);
+hash_reduce(upper);
+  }
+
+  static constexpr const bool _type_has_method_sequal_reduce = true;
+  static constexpr const char* _type_key = "arith.IntGrpBounds";
+  TVM_DECLARE_FINAL_OBJECT_INFO(IntGrpBoundsNode, Object);
+};
+
+/*!
+ * \brief Managed reference to IntGrpBoundsNode.
+ * \sa IntGrpBoundsNode
+ */
+class IntGrpBounds : public ObjectRef {
+ public:
+  /*!
+   * \brief Constructor by fields
+   * \param coef The coefficient. Must be integer.
+   *coef * var >= lower
+   *coef * var == equal
+   *coef * var <= upper
+   * \param lower the lower bounds (inclusive)
+   * \param equal equalities
+   * \param upper the upper bounds (inclusive)
+   */
+  TVM_DLL IntGrpBounds(PrimExpr coef, Array lower, Array 
equal,
+   Array upper);
+
+  /*!
+   * \brief Construct bounds from a range.
+   * \param r The range
+   * \return constructed bounds.
+   */
+  static IntGrpBounds range(const Range& r);

Review comment:
   I'm ok with either way. This is following 
https://github.com/apache/incubator-tvm/blob/master/src/arith/int_set.cc#L600





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on issue #5809: [RFC][AutoTVM] Non-square ConfigSpace

2020-06-26 Thread GitBox


comaniac commented on issue #5809:
URL: https://github.com/apache/incubator-tvm/issues/5809#issuecomment-650312544


   Polyhedral analysis would be one approach to generating the constraints in
this scenario. On the other hand, runtime validation does not sound like a
general solution, because it might affect the tuner. For example, throwing away
invalid configs in `next_batch` would result in no measurement results for
those records, which means a learning-based tuner won't get any feedback about
invalid configs. I would prefer either of the following:
   
   1. Propose a new config space representation that supports non-grid config
spaces.
   2. Make verify passes pluggable. Currently, we have a `VerifyGPU` pass that
traverses the TIR to estimate memory usage and rejects invalid configs before
sending them for compilation. Since this happens at the evaluation stage, the
rejected configs still appear in the log file with a proper error code, so the
tuner can benefit from them. We can expose this mechanism as a callback so that
users can bring their own verifier (a rough sketch of such a callback is given
below). The problem is that the verifier does not have config space
information, only a TIR graph, so it might be harder for it to check whether a
config is valid.
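   
   A minimal, hypothetical sketch of what such a user-supplied verifier could
look like (plain Python; the callback shape, the knob names and the error code
are made up for illustration and are not an existing AutoTVM API):
   
   ```python
   # Hypothetical sketch: the verifier runs at the evaluation stage and its
   # verdict is recorded, so the tuner still sees the rejected candidates.
   INVALID_CONFIG = 4  # made-up error code
   
   def my_verifier(config):
       # e.g. reject candidates whose combined tile footprint exceeds a budget
       return config["tile_x"] * config["tile_y"] <= 1024
   
   def evaluate(config, verifier):
       if not verifier(config):
           return {"config": config, "error": INVALID_CONFIG, "time": None}
       # ... build, run and time the candidate here ...
       return {"config": config, "error": 0, "time": 1.23}
   
   records = [evaluate(c, my_verifier)
              for c in [{"tile_x": 8, "tile_y": 16}, {"tile_x": 64, "tile_y": 64}]]
   ```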
   
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (2bbfbb1 -> 69313a7)

2020-06-26 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 2bbfbb1  [Runtime] Only initialize required module (#5926)
 add 69313a7  [CODEGEN][CONTRIB] Various update for CoreML codegen (#5934)

No new revisions were added by this update.

Summary of changes:
 apps/ios_rpc/tvmrpc/TVMRuntime.mm|   1 +
 python/tvm/contrib/target/coreml.py  |  42 +++---
 src/runtime/contrib/coreml/coreml_runtime.mm |  12 ++-
 tests/python/contrib/test_coreml_codegen.py  | 115 +++
 4 files changed, 144 insertions(+), 26 deletions(-)



[GitHub] [incubator-tvm] zhiics commented on pull request #5934: [CODEGEN][CONTRIB] Various update for CoreML codegen

2020-06-26 Thread GitBox


zhiics commented on pull request #5934:
URL: https://github.com/apache/incubator-tvm/pull/5934#issuecomment-650305829


   Thanks @kazum 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics merged pull request #5934: [CODEGEN][CONTRIB] Various update for CoreML codegen

2020-06-26 Thread GitBox


zhiics merged pull request #5934:
URL: https://github.com/apache/incubator-tvm/pull/5934


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on pull request #5934: [CODEGEN][CONTRIB] Various update for CoreML codegen

2020-06-26 Thread GitBox


zhiics commented on pull request #5934:
URL: https://github.com/apache/incubator-tvm/pull/5934#issuecomment-650303757


   @kazum no problem. We shouldn't do that in this PR. I was just curious if 
you have a plan to do that.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kazum commented on pull request #5934: [CODEGEN][CONTRIB] Various update for CoreML codegen

2020-06-26 Thread GitBox


kazum commented on pull request #5934:
URL: https://github.com/apache/incubator-tvm/pull/5934#issuecomment-650301527


   @zhiics Relevant changes to #5770 are:
   
https://github.com/apache/incubator-tvm/pull/5934/files#diff-639424334c798703f0e62ec8a5eaf779R30
   
https://github.com/apache/incubator-tvm/pull/5934/files#diff-271e7167e72d0f1de3aee097fa5cb5d2R232-R245
   
   >  Is it possible to pre-allocate memory or directly do zero-copy by sharing 
the passed tensor?
   
   We can at least pre-allocate memory.  I'm thinking of doing more optimization
for input tensor management in another PR :)
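   
   As a rough illustration of the difference (a numpy-based sketch, not the
actual CoreML runtime code; the function names are made up):
   
   ```python
   import numpy as np
   
   def set_input_copy(storage, src):
       # allocate a fresh buffer and copy on every call
       storage["x"] = np.array(src, copy=True)
   
   def set_input_preallocated(storage, src):
       # allocate once, then only copy the bytes into the existing buffer
       if "x" not in storage or storage["x"].shape != src.shape:
           storage["x"] = np.empty_like(src)
       np.copyto(storage["x"], src)
   
   storage = {}
   inp = np.random.rand(1, 3, 224, 224).astype("float32")
   set_input_preallocated(storage, inp)      # first call allocates
   buf_id = id(storage["x"])
   set_input_preallocated(storage, inp)      # later calls reuse the buffer
   assert id(storage["x"]) == buf_id
   ```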



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 closed pull request #5935: [BACKPORT-0.6] [BUGFIX] [TFLite] Using real image for QNN testing.

2020-06-26 Thread GitBox


anijain2305 closed pull request #5935:
URL: https://github.com/apache/incubator-tvm/pull/5935


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 opened a new pull request #5935: [BACKPORT-0.6] [BUGFIX] [TFLite] Using real image for QNN testing.

2020-06-26 Thread GitBox


anijain2305 opened a new pull request #5935:
URL: https://github.com/apache/incubator-tvm/pull/5935


   Backport request - https://github.com/apache/incubator-tvm/issues/4824
   Bugfix PR - https://github.com/apache/incubator-tvm/pull/4816



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] shoubhik commented on pull request #5880: [BACKPORT-0.6][Bugfix] fskip of EliminateCommonSubexpr cannot always return false

2020-06-26 Thread GitBox


shoubhik commented on pull request #5880:
URL: https://github.com/apache/incubator-tvm/pull/5880#issuecomment-650266446


   Could you point me to the CI commands that run for v0.6? I have a use case
where I need to verify the v0.6 build by running the v0.6 test cases locally
with the CI docker.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] trevor-m commented on pull request #5857: [OpenCL] Fix OpenCL get_valid_counts errors due to intrinsic atomic_add

2020-06-26 Thread GitBox


trevor-m commented on pull request #5857:
URL: https://github.com/apache/incubator-tvm/pull/5857#issuecomment-650261839


   > Looks good to me. I'll merge this after CI is passed.
   
   Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics edited a comment on pull request #5934: [CODEGEN][CONTRIB] Various update for CoreML codegen

2020-06-26 Thread GitBox


zhiics edited a comment on pull request #5934:
URL: https://github.com/apache/incubator-tvm/pull/5934#issuecomment-650246386


   Aah, I hadn't noticed that the CoreML codegen was also using the BYOC infra.
Thanks for updating it. I have a question that is not related to this change: I
see `SetInput` needs to allocate memory and copy from the DLTensor to local
memory every time. Is it possible to pre-allocate memory or do zero-copy
directly by sharing the passed tensor?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5855: [RELAY][VM] Add shape_of instruction

2020-06-26 Thread GitBox


zhiics commented on a change in pull request #5855:
URL: https://github.com/apache/incubator-tvm/pull/5855#discussion_r446250485



##
File path: python/tvm/relay/op/dialect/vm.py
##
@@ -0,0 +1,35 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file

Review comment:
   Thanks for pointing that out. I changed it to relay.op.vm. Maybe we should
have such a namespace in the long run? For example, we could have dialect.vm,
dialect.memory, dialect.qnn, etc.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on pull request #5926: [Runtime] Only initialize required module

2020-06-26 Thread GitBox


zhiics commented on pull request #5926:
URL: https://github.com/apache/incubator-tvm/pull/5926#issuecomment-650229136


   Thanks @comaniac 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics merged pull request #5926: [Runtime] Only initialize required module

2020-06-26 Thread GitBox


zhiics merged pull request #5926:
URL: https://github.com/apache/incubator-tvm/pull/5926


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated: [Runtime] Only initialize required module (#5926)

2020-06-26 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 2bbfbb1  [Runtime] Only initialize required module (#5926)
2bbfbb1 is described below

commit 2bbfbb10ba53c649d110828b4978df12b4f3b3e2
Author: Cody Yu 
AuthorDate: Fri Jun 26 08:05:12 2020 -0700

[Runtime] Only initialize required module (#5926)

* init required modules

* trigger ci

* trigger ci
---
 src/runtime/metadata_module.cc | 15 +++
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/src/runtime/metadata_module.cc b/src/runtime/metadata_module.cc
index cf3d547..56f894c 100644
--- a/src/runtime/metadata_module.cc
+++ b/src/runtime/metadata_module.cc
@@ -48,15 +48,22 @@ class MetadataModuleNode : public ModuleNode {
  public:
  MetadataModuleNode(const std::unordered_map<std::string, NDArray>& metadata,
                     const std::unordered_map<std::string, std::vector<std::string>>& sym_vars)
-  : metadata_(metadata), sym_vars_(sym_vars) {}
+  : metadata_(metadata), sym_vars_(sym_vars) {
+// Only the related submodules are cached to reduce the number of runtime
+// symbol lookup for initialization. Otherwise, symbols/primitives in the
+// DSO module will also be cached but they never need to be initialized.
+for (const auto& it : sym_vars_) {
+  initialized_[it.first] = false;
+}
+  }
 
   PackedFunc GetFunction(const std::string& name, const ObjectPtr<Object>& sptr_to_self) final {
 // Initialize and memoize the module.
 // Usually, we have some warmup runs. The module initialization should be
 // done at this stage. Therefore, runtime overhead is not a concern.
-if (initialized_.count(name) == 0) {
+if (initialized_.count(name) && !initialized_.at(name)) {
   this->InitSubModule(name);
-  initialized_.emplace(name);
+  initialized_[name] = true;
 }
 
 // Run the module.
@@ -202,7 +209,7 @@ class MetadataModuleNode : public ModuleNode {
* \brief Record if a module is initialized. It is needed by imported
* modules using execution engine.
*/
-  std::unordered_set<std::string> initialized_;
+  std::unordered_map<std::string, bool> initialized_;
   /*! \brief Variable name to NDArray mapping. */
   std::unordered_map<std::string, NDArray> metadata_;
   /*! \brief Symbol name to required constant variables mapping. */
   std::unordered_map<std::string, std::vector<std::string>> sym_vars_;


[GitHub] [incubator-tvm] tqchen commented on issue #5809: [RFC][AutoTVM] Non-square ConfigSpace

2020-06-26 Thread GitBox


tqchen commented on issue #5809:
URL: https://github.com/apache/incubator-tvm/issues/5809#issuecomment-650211024


   In this case it is not polyhedral model, but just some constraints on the 
config space.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #4704: [Relay][Frontend][TFLite] Add parser support for arg_min_max

2020-06-26 Thread GitBox


tqchen commented on pull request #4704:
URL: https://github.com/apache/incubator-tvm/pull/4704#issuecomment-650208021


   ping @inadob @anijain2305 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5931: Add TupleGetItem to CSE

2020-06-26 Thread GitBox


tqchen commented on pull request #5931:
URL: https://github.com/apache/incubator-tvm/pull/5931#issuecomment-650206266


   Thanks @mbrookhart , thanks @comaniac for reviewing.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5923: Update code_review.rst

2020-06-26 Thread GitBox


tqchen commented on pull request #5923:
URL: https://github.com/apache/incubator-tvm/pull/5923#issuecomment-650206455


   Thanks @badenh ! Thanks @liangfu @junrushao1994 for reviewing



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (c9203c7 -> 75f2539)

2020-06-26 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from c9203c7  Add TupleGetItem to CSE (#5931)
 add 75f2539  Update code_review.rst (#5923)

No new revisions were added by this update.

Summary of changes:
 docs/contribute/code_review.rst | 47 +
 1 file changed, 24 insertions(+), 23 deletions(-)



[GitHub] [incubator-tvm] tqchen merged pull request #5931: Add TupleGetItem to CSE

2020-06-26 Thread GitBox


tqchen merged pull request #5931:
URL: https://github.com/apache/incubator-tvm/pull/5931


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (fcbebea -> c9203c7)

2020-06-26 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from fcbebea  [Arith][GPU]Rewrite simplify fix for Vectorized Cooperative 
Fetching (#5924)
 add c9203c7  Add TupleGetItem to CSE (#5931)

No new revisions were added by this update.

Summary of changes:
 src/relay/transforms/eliminate_common_subexpr.cc   | 51 --
 .../relay/test_pass_eliminate_common_subexpr.py| 29 
 2 files changed, 67 insertions(+), 13 deletions(-)



[GitHub] [incubator-tvm] tqchen merged pull request #5923: Update code_review.rst

2020-06-26 Thread GitBox


tqchen merged pull request #5923:
URL: https://github.com/apache/incubator-tvm/pull/5923


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5855: [RELAY][VM] Add shape_of instruction

2020-06-26 Thread GitBox


tqchen commented on a change in pull request #5855:
URL: https://github.com/apache/incubator-tvm/pull/5855#discussion_r446214527



##
File path: python/tvm/relay/op/dialect/vm.py
##
@@ -0,0 +1,35 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file

Review comment:
   Let us simply use relay.op.vm and mark it as a dialect in the comment, since
QNN is also a dialect but does not belong to the dialect namespace.

##
File path: python/tvm/relay/op/dialect/_make.py
##
@@ -0,0 +1,20 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Constructor APIs"""
+import tvm._ffi
+
+tvm._ffi._init_api("relay.op.dialect._make", __name__)

Review comment:
   Use the new style constructor _ffi_api.py as in other parts





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen merged pull request #5924: [Arith][GPU]Rewrite simplify fix for Vectorized Cooperative Fetching

2020-06-26 Thread GitBox


tqchen merged pull request #5924:
URL: https://github.com/apache/incubator-tvm/pull/5924


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5914: [clflush] Enable x86 cpu cache flush

2020-06-26 Thread GitBox


tqchen commented on pull request #5914:
URL: https://github.com/apache/incubator-tvm/pull/5914#issuecomment-650205070


   @FrozenGene please followup



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5924: [Arith][GPU]Rewrite simplify fix for Vectorized Cooperative Fetching

2020-06-26 Thread GitBox


tqchen commented on pull request #5924:
URL: https://github.com/apache/incubator-tvm/pull/5924#issuecomment-650204858


   Thanks @jcf94 @merrymercy . this PR is now merged



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (e1a1c2a -> fcbebea)

2020-06-26 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from e1a1c2a  [PatternLang] Don't rewrite expressions used outside of the 
pattern (#5930)
 add fcbebea  [Arith][GPU]Rewrite simplify fix for Vectorized Cooperative 
Fetching (#5924)

No new revisions were added by this update.

Summary of changes:
 src/arith/rewrite_simplify.cc  |  13 +-
 .../python/unittest/test_arith_rewrite_simplify.py |  46 ++
 tests/python/unittest/test_target_codegen_cuda.py  | 184 +
 3 files changed, 241 insertions(+), 2 deletions(-)



[incubator-tvm] branch master updated (96bf271 -> e1a1c2a)

2020-06-26 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 96bf271  [TE] Add LegalizeInvalidAttach to legalize the compute_at 
location after split or fuse (#5917)
 add e1a1c2a  [PatternLang] Don't rewrite expressions used outside of the 
pattern (#5930)

No new revisions were added by this update.

Summary of changes:
 src/relay/ir/dataflow_matcher.cc| 62 +++--
 tests/python/relay/test_dataflow_pattern.py | 31 +++
 2 files changed, 71 insertions(+), 22 deletions(-)



[GitHub] [incubator-tvm] tqchen commented on pull request #5930: [PatternLang] Don't rewrite expressions used outside of the pattern

2020-06-26 Thread GitBox


tqchen commented on pull request #5930:
URL: https://github.com/apache/incubator-tvm/pull/5930#issuecomment-650202765


   Thanks @mbrookhart , Thanks @comaniac for reviewing



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5933: Fix string argument mismatch in GraphRuntimeCodegen

2020-06-26 Thread GitBox


tqchen commented on a change in pull request #5933:
URL: https://github.com/apache/incubator-tvm/pull/5933#discussion_r446210500



##
File path: python/tvm/relay/backend/graph_runtime_codegen.py
##
@@ -85,7 +85,7 @@ def codegen(self, func):
 param_names = self._list_params_name()
 params = {}
 for key in param_names:
-arr = self._get_param_by_name(key)
+arr = self._get_param_by_name(str(key))

Review comment:
   This is something that we need to update in GraphRuntimeCodegen instead, to
explicitly get a String instead of a str.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen closed issue #5928: [PatternLang] The pattern failed to match some subgraphs in a model

2020-06-26 Thread GitBox


tqchen closed issue #5928:
URL: https://github.com/apache/incubator-tvm/issues/5928


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen merged pull request #5930: [PatternLang] Don't rewrite expressions used outside of the pattern

2020-06-26 Thread GitBox


tqchen merged pull request #5930:
URL: https://github.com/apache/incubator-tvm/pull/5930


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kazum opened a new pull request #5934: [CODEGEN][CONTRIB] Various update for CoreML codegen

2020-06-26 Thread GitBox


kazum opened a new pull request #5934:
URL: https://github.com/apache/incubator-tvm/pull/5934


   - Update relay.ext.coremlcompiler based on the change in #5770.
   - Support int32 for Core ML input and output
   - Handle "run" in CoreMLRuntime::GetFunction to measure time with 
Module.time_evaluator.
   - Return PackedFunc() when CoreMLRuntime::GetFunction handles nothing.
   - Add test for each operator supported by CoreML codegen.
   - Support relay.expand_dims and relay.nn.relu.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jcf94 edited a comment on pull request #5924: [Arith][GPU]Rewrite simplify fix for Vectorized Cooperative Fetching

2020-06-26 Thread GitBox


jcf94 edited a comment on pull request #5924:
URL: https://github.com/apache/incubator-tvm/pull/5924#issuecomment-650045223


   > @jcf94 Did our old rule affect the correctness of common operators?
   
   Yes, with those rules several other UTs will fail.
   For example in `test_arith_intset.py:test_mod()`,
   ```
   ck.verify(flm(y, 8), {y : tvm.arith.IntervalSet(z*8+x*4, z*8+x*4+3)}, (0, 7))
   ```
   Our rules make it to be
   ```
   [(((z*8) + (x*4)) - (8*floordiv(((z*8) + (x*4)), 8))), ((((z*8) + (x*4)) + 3) - (8*floordiv(((z*8) + (x*4)), 8)))]
   ```
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jcf94 edited a comment on pull request #5924: [Arith][GPU]Rewrite simplify fix for Vectorized Cooperative Fetching

2020-06-26 Thread GitBox


jcf94 edited a comment on pull request #5924:
URL: https://github.com/apache/incubator-tvm/pull/5924#issuecomment-650045223


   > @jcf94 Did our old rule affect the correctness of common operators?
   
   Yes, with those rules several other UTs will fail, for example
`test_arith_intset.py`.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jcf94 edited a comment on pull request #5924: [Arith][GPU]Rewrite simplify fix for Vectorized Cooperative Fetching

2020-06-26 Thread GitBox


jcf94 edited a comment on pull request #5924:
URL: https://github.com/apache/incubator-tvm/pull/5924#issuecomment-650045223


   > @jcf94 Did our old rule affect the correctness of common operators?
   
   Yes, with those rules several other UTs will fail.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jcf94 commented on pull request #5924: [Arith][GPU]Rewrite simplify fix for Vectorized Cooperative Fetching

2020-06-26 Thread GitBox


jcf94 commented on pull request #5924:
URL: https://github.com/apache/incubator-tvm/pull/5924#issuecomment-650045223


   > @jcf94 Did our old rule affect the correctness of common operators?
   
   Yes, with those rules several other UTs will fail; they're actually not
always correct.
   For example we have:
   ```
   TVM_TRY_REWRITE_IF(floordiv(x * c1 + y, c2), floordiv(x * c1, c2),
  c1.Eval()->value > 0 && c2.Eval()->value > 0 &&
  c2.Eval()->value % c1.Eval()->value == 0 &&
  CanProveGreaterEqual(-y.Eval(), -c1.Eval()->value + 
1));
   ```
   while `floordiv(x * 4 + 4, 8)` cannot be simplified to `floordiv(x * 4, 8)`.
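   
   A quick numeric check (plain Python) of that counterexample, at x = 1:
   
   ```python
   def floordiv(a, b):
       return a // b  # Python's // already floors toward negative infinity
   
   x = 1
   assert floordiv(x * 4 + 4, 8) == 1   # (4 + 4) // 8 == 1
   assert floordiv(x * 4, 8) == 0       # 4 // 8 == 0
   # The two results differ, so the rewrite would change program behaviour.
   ```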
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



