[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5913: [random] support random fill

2020-08-11 Thread GitBox


tqchen commented on a change in pull request #5913:
URL: https://github.com/apache/incubator-tvm/pull/5913#discussion_r468988185



##
File path: src/runtime/contrib/random/mt_random_engine.cc
##
@@ -111,6 +115,56 @@ class RandomEngine {
 }
   }
 
+  void RandomFill(DLTensor* data) {
+int64_t size = 1;
+for (int i = 0; i < data->ndim; ++i) {
+  size *= data->shape[i];
+}
+
+if (data->ctx.device_type == kDLCPU) {
+  FillData(data, size);
+} else {
+  DLTensor local;

Review comment:
   Do not call the raw allocate API; use NDArray::Empty here instead.
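For readers following the diff, the CPU path above boils down to: multiply out the shape dimensions to get the element count, then fill that many values. A rough Python sketch of that logic (`random_fill` is an illustrative stand-in, not the TVM API):

```python
import random

def random_fill(shape):
    """Illustrative stand-in for RandomFill: multiply the shape
    dimensions into a flat element count (mirroring the ndim loop
    in the diff), then produce that many uniform random values."""
    size = 1
    for dim in shape:
        size *= dim
    return [random.uniform(0.0, 1.0) for _ in range(size)]

# a 2x3x4 tensor has 24 elements; a 0-dim (scalar) tensor has 1
values = random_fill((2, 3, 4))
```

Note that the empty shape `()` yields a single element, matching the scalar-tensor semantics of the C++ loop.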





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #4495: Improve ANTLR Language Dependency

2020-08-11 Thread GitBox


tqchen commented on issue #4495:
URL: https://github.com/apache/incubator-tvm/issues/4495#issuecomment-672549627


   https://github.com/apache/incubator-tvm/pull/6162



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen closed issue #4495: Improve ANTLR Language Dependency

2020-08-11 Thread GitBox


tqchen closed issue #4495:
URL: https://github.com/apache/incubator-tvm/issues/4495


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] cloud-mxd commented on pull request #6242: [relay][ir] add string type to relay ir

2020-08-11 Thread GitBox


cloud-mxd commented on pull request #6242:
URL: https://github.com/apache/incubator-tvm/pull/6242#issuecomment-672514812


   > I suppose this series of change are really big. Would you like to add 
testcases for this change?
   
   No problem, I will add some test code, thank you.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] cloud-mxd commented on pull request #6253: fix cuda half math function is undefined: hpow, htanh

2020-08-11 Thread GitBox


cloud-mxd commented on pull request #6253:
URL: https://github.com/apache/incubator-tvm/pull/6253#issuecomment-672496307


   The above code has been tested in our production environment (T4/V100).



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] cloud-mxd edited a comment on pull request #6249: Revert "fix cuda half math function is undefined: hpow, htanh"

2020-08-11 Thread GitBox


cloud-mxd edited a comment on pull request #6249:
URL: https://github.com/apache/incubator-tvm/pull/6249#issuecomment-672475552


   another PR: https://github.com/apache/incubator-tvm/pull/6253 cc @tqchen 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] cloud-mxd commented on pull request #6249: Revert "fix cuda half math function is undefined: hpow, htanh"

2020-08-11 Thread GitBox


cloud-mxd commented on pull request #6249:
URL: https://github.com/apache/incubator-tvm/pull/6249#issuecomment-672475552


   another PR: https://github.com/apache/incubator-tvm/pull/6253



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] cloud-mxd opened a new pull request #6253: fix cuda half math function is undefined: hpow, htanh

2020-08-11 Thread GitBox


cloud-mxd opened a new pull request #6253:
URL: https://github.com/apache/incubator-tvm/pull/6253


   ref: https://github.com/apache/incubator-tvm/pull/6225
   add cuda arch check
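   The fix gates the half-precision helpers behind a compute-capability check, since fp16 intrinsics only exist on sm_53 and newer. A hedged Python sketch of generating such a guard (the threshold and wrapper text are illustrative of the approach, not the exact PR code):

```python
def guard_half_intrinsic(defn, min_arch=530):
    """Wrap a CUDA half-math helper definition in an architecture
    guard so it only compiles where fp16 intrinsics are available.
    The 530 threshold corresponds to sm_53; names are illustrative."""
    return (f"#if defined(__CUDA_ARCH__) && (__CUDA_ARCH__ >= {min_arch})\n"
            f"{defn}\n"
            "#endif")

src = guard_half_intrinsic(
    "__device__ half hpow(half x, half y) "
    "{ return __float2half(powf(__half2float(x), __half2float(y))); }")
```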



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] siju-samuel opened a new pull request #6252: [COREML]Reduceops support added to frontend

2020-08-11 Thread GitBox


siju-samuel opened a new pull request #6252:
URL: https://github.com/apache/incubator-tvm/pull/6252


   Coreml `ReduceLayerParams` support is added.
   - sum
   - prod
   - mean
   - min
   - max
   - argmax
   
   @FrozenGene Please help me to review this. TIA
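   The frontend change amounts to mapping each CoreML reduce mode to the corresponding Relay reduce op. A minimal Python sketch of that dispatch (the mapping and names are hypothetical, not the actual converter code):

```python
# Hypothetical mapping from CoreML ReduceLayerParams modes to Relay op
# names; illustrates the shape of the frontend change, not its code.
COREML_REDUCE_TO_RELAY = {
    "sum": "sum",
    "prod": "prod",
    "avg": "mean",
    "min": "min",
    "max": "max",
    "argmax": "argmax",
}

def convert_reduce(mode):
    """Look up the Relay op for a CoreML reduce mode, rejecting
    modes the frontend does not support."""
    if mode not in COREML_REDUCE_TO_RELAY:
        raise NotImplementedError(f"Reduce mode {mode!r} not supported")
    return COREML_REDUCE_TO_RELAY[mode]
```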
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 edited a comment on pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-11 Thread GitBox


junrushao1994 edited a comment on pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#issuecomment-672413222


   Per offline discussion with @comaniac:
   * According to the RFC, the new Target design is beyond a raw string. It 
supports arbitrary nesting, more data types, etc. Therefore, converting target 
to raw string might be deprecated in the future (but converting from raw string 
will be preserved for simplicity)
   * Target serialization to JSON-like structure (tvm::Map actually) is in the 
near future after this PR is merged.
   * I addressed all the TODOs previously left in this PR
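   For context, the raw-string parsing that this PR preserves accepts three option shapes: `--key=value`, `--key value`, and a bare boolean `--flag`. A minimal Python sketch of those three cases (illustrative only, not the TVM implementation):

```python
def parse_attrs(options):
    """Sketch of the three option shapes handled when parsing a raw
    target string. Names and the "1" boolean default are illustrative."""
    attrs = {}
    i = 0
    while i < len(options):
        s = options[i].lstrip("-")   # drop the prefix dashes
        i += 1
        if "=" in s:
            key, _, val = s.partition("=")   # case 1. --key=value
        elif i < len(options) and not options[i].startswith("-"):
            key, val = s, options[i]         # case 2. --key value
            i += 1
        else:
            key, val = s, "1"                # case 3. --boolean-key
        if key in attrs:
            raise ValueError(f"key {key!r} appears more than once")
        attrs[key] = val
    return attrs

attrs = parse_attrs(["-mcpu=skylake", "-libs", "cblas", "-system-lib"])
```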
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy commented on a change in pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#discussion_r468947442



##
File path: python/tvm/auto_scheduler/feature.py
##
@@ -0,0 +1,242 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Python API for Feature extraction. The extracted features vector are used by 
cost models.
+
+We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+so we call this feature as "Per Store" feature.
+The cost model also does prediction for each BufferStoreNode statement and 
aggregates
+the predicted score of each BufferStoreNode as the score of a TIR Stmt.
+
+The feature specification is defined by 
`src/auto_scheduler/feature.cc::FeatureSet`
+"""
+
+from typing import List, Tuple, Union, Optional
+import struct
+
+import numpy as np
+
+from .loop_state import State, StateObject
+from .measure import MeasureInput, MeasureResult
+from . import _ffi_api
+
+# The maximum number of extracted buffers for one statement
+DEFAULT_MAX_N_BUFS = 5
+
+# The length of the feature vector
+DEFAULT_FEATURE_VEC_LEN = 164
+
+# The size of int and float in bytes
+SIZE_OF_INT = 4
+SIZE_OF_FLOAT = 4
+
+def unpack_feature(byte_arr: bytearray) -> Tuple[np.ndarray, np.ndarray, 
np.ndarray]:
+"""Unpack the flatten feature (in byte array format) from c++
+
+Parameters
+----------
+byte_arr: bytearray
+The two-dimensional feature vector in serialized byte array format
+
+Returns
+-------
+features: np.ndarray
+Feature vectors
+normalized_throughputs: np.ndarray
+Normalized throughputs
+task_ids: np.ndarray
+Task ids
+"""
+
+# The format for n records is:
+# {
+#   int n;
+#   int[n+2] sizes

Review comment:
   I won't take either of your suggestions. I use  `int[x] variable` to 
denote an array of x integer values with the name `variable`.
   
   @junrushao1994's suggestion is wrong because `int size[n]` has another meaning of n+1 integers, but here we actually mean the nth integer.
   @yangjunpro's suggestion is also wrong. It makes `sizes` become the name of the field, but actually `size` only specifies the size.
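   The record layout under discussion (an `int n` header followed by `n + 2` size entries) can be round-tripped with `struct`; this sketch shows only the header framing, with the payload omitted:

```python
import struct

def pack_header(sizes):
    """Serialize the header: an int n followed by n + 2 size entries,
    little-endian, as in the comment's `{ int n; int[n+2] sizes; ... }`."""
    n = len(sizes) - 2
    return struct.pack(f"<i{len(sizes)}i", n, *sizes)

def unpack_header(byte_arr):
    """Read n, then the n + 2 size entries that follow it."""
    (n,) = struct.unpack_from("<i", byte_arr, 0)
    sizes = struct.unpack_from(f"<{n + 2}i", byte_arr, 4)
    return n, list(sizes)

n, sizes = unpack_header(pack_header([10, 20, 30, 7, 1]))
```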





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy commented on a change in pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#discussion_r468947442



##
File path: python/tvm/auto_scheduler/feature.py
##
@@ -0,0 +1,242 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Python API for Feature extraction. The extracted features vector are used by 
cost models.
+
+We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+so we call this feature as "Per Store" feature.
+The cost model also does prediction for each BufferStoreNode statement and 
aggregates
+the predicted score of each BufferStoreNode as the score of a TIR Stmt.
+
+The feature specification is defined by 
`src/auto_scheduler/feature.cc::FeatureSet`
+"""
+
+from typing import List, Tuple, Union, Optional
+import struct
+
+import numpy as np
+
+from .loop_state import State, StateObject
+from .measure import MeasureInput, MeasureResult
+from . import _ffi_api
+
+# The maximum number of extracted buffers for one statement
+DEFAULT_MAX_N_BUFS = 5
+
+# The length of the feature vector
+DEFAULT_FEATURE_VEC_LEN = 164
+
+# The size of int and float in bytes
+SIZE_OF_INT = 4
+SIZE_OF_FLOAT = 4
+
+def unpack_feature(byte_arr: bytearray) -> Tuple[np.ndarray, np.ndarray, 
np.ndarray]:
+"""Unpack the flatten feature (in byte array format) from c++
+
+Parameters
+----------
+byte_arr: bytearray
+The two-dimensional feature vector in serialized byte array format
+
+Returns
+-------
+features: np.ndarray
+Feature vectors
+normalized_throughputs: np.ndarray
+Normalized throughputs
+task_ids: np.ndarray
+Task ids
+"""
+
+# The format for n records is:
+# {
+#   int n;
+#   int[n+2] sizes

Review comment:
   I won't take either of your suggestions. I use `int[x]` to denote an array of x integer values. @junrushao1994's suggestion is wrong because `int size[n]` has another meaning of n+1 integers, but here we actually mean the nth integer.
   @yangjunpro's suggestion is also wrong. It makes `sizes` become the name of this field, but actually `size` only specifies the size.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-11 Thread GitBox


junrushao1994 commented on pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#issuecomment-672413222


   Per offline discussion with @comaniac:
   * According to the RFC, the new Target design is beyond a raw string. It 
supports arbitrary nesting, more data types, etc. Therefore, the raw string 
conversion might be deprecated in the future.
   * Target serialization to JSON-like structure (tvm::Map actually) is in the 
near future after this PR is merged.
   * I addressed all the TODOs previously left in this PR



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy commented on a change in pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#discussion_r468359834



##
File path: python/tvm/auto_scheduler/feature.py
##
@@ -0,0 +1,242 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Python API for Feature extraction. The extracted features vector are used by 
cost models.
+
+We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+so we call this feature as "Per Store" feature.
+The cost model also does prediction for each BufferStoreNode statement and 
aggregates
+the predicted score of each BufferStoreNode as the score of a TIR Stmt.
+
+The feature specification is defined by 
`src/auto_scheduler/feature.cc::FeatureSet`
+"""
+
+from typing import List, Tuple, Union, Optional
+import struct
+
+import numpy as np
+
+from .loop_state import State, StateObject
+from .measure import MeasureInput, MeasureResult
+from . import _ffi_api
+
+# The maximum number of extracted buffers for one statement
+DEFAULT_MAX_N_BUFS = 5
+
+# The length of the feature vector
+DEFAULT_FEATURE_VEC_LEN = 164
+
+# The size of int and float in bytes
+SIZE_OF_INT = 4
+SIZE_OF_FLOAT = 4
+
+def unpack_feature(byte_arr: bytearray) -> Tuple[np.ndarray, np.ndarray, 
np.ndarray]:
+"""Unpack the flatten feature (in byte array format) from c++
+
+Parameters
+----------
+byte_arr: bytearray
+The two-dimensional feature vector in serialized byte array format
+
+Returns
+-------
+features: np.ndarray
+Feature vectors
+normalized_throughputs: np.ndarray
+Normalized throughputs
+task_ids: np.ndarray
+Task ids
+"""
+
+# The format for n records is:
+# {
+#   int n;
+#   int[n+2] sizes

Review comment:
   The `int sizes[n + 1]` you proposed is not a valid declaration in C. I think the old comment is better.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-11 Thread GitBox


junrushao1994 commented on a change in pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r468944432



##
File path: src/target/target.cc
##
@@ -30,20 +30,195 @@
 #include 
 #include 
 
+#include "../runtime/object_internal.h"
+
 namespace tvm {
 
 using runtime::PackedFunc;
 using runtime::TVMArgs;
 using runtime::TVMRetValue;
 
+TVM_REGISTER_NODE_TYPE(TargetNode);
+
+static inline size_t CountNumPrefixDashes(const std::string& s) {
+  size_t i = 0;
+  for (; i < s.length() && s[i] == '-'; ++i) {
+  }
+  return i;
+}
+
+static inline int FindUniqueSubstr(const std::string& str, const std::string& 
substr) {
+  size_t pos = str.find_first_of(substr);
+  if (pos == std::string::npos) {
+return -1;
+  }
+  size_t next_pos = pos + substr.size();
+  CHECK(next_pos >= str.size() || str.find_first_of(substr, next_pos) == 
std::string::npos)
+  << "ValueError: At most one \"" << substr << "\" is allowed in "
+  << "the given string \"" << str << "\"";
+  return pos;
+}
+
+static inline ObjectRef ParseAtomicType(uint32_t type_index, const 
std::string& str) {
+  std::istringstream is(str);
+  if (type_index == Integer::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+int v;
+is >> v;
+return is.fail() ? ObjectRef(nullptr) : Integer(v);
+  } else if (type_index == 
String::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+std::string v;
+is >> v;
+return is.fail() ? ObjectRef(nullptr) : String(v);
+  }
+  return ObjectRef(nullptr);
+}
+
+Map<String, ObjectRef> TargetNode::ParseAttrsFromRaw(
+const std::vector<std::string>& options) const {
+  std::unordered_map<String, ObjectRef> attrs;
+  for (size_t iter = 0, end = options.size(); iter < end;) {
+std::string s = options[iter++];
+// remove the prefix dashes
+size_t n_dashes = CountNumPrefixDashes(s);
+CHECK(0 < n_dashes && n_dashes < s.size())
+<< "ValueError: Not an attribute key \"" << s << "\"";
+s = s.substr(n_dashes);
+// parse name-obj pair
+std::string name;
+std::string obj;
+int pos;
+if ((pos = FindUniqueSubstr(s, "=")) != -1) {
+  // case 1. --key=value
+  name = s.substr(0, pos);
+  obj = s.substr(pos + 1);
+  CHECK(!name.empty()) << "ValueError: Empty attribute key in \"" << 
options[iter - 1] << "\"";
+  CHECK(!obj.empty()) << "ValueError: Empty attribute in \"" << 
options[iter - 1] << "\"";
+} else if (iter < end && options[iter][0] != '-') {
+  // case 2. --key value
+  name = s;
+  obj = options[iter++];
+} else {
+  // case 3. --boolean-key
+  name = s;
+  obj = "1";
+}
+// check if `name` is invalid
+auto it = this->kind->key2vtype_.find(name);
+if (it == this->kind->key2vtype_.end()) {
+  std::ostringstream os;
+  os << "AttributeError: Invalid config option, cannot recognize \'" << 
name
+ << "\'. Candidates are:";
+  for (const auto& kv : this->kind->key2vtype_) {
+os << "\n  " << kv.first;
+  }
+  LOG(FATAL) << os.str();
+}
+// check if `name` has been set once
+CHECK(!attrs.count(name)) << "AttributeError: key \"" << name
+  << "\" appears more than once in the target 
string";
+// then `name` is valid, let's parse them
+// only several types are supported when parsing raw string
+const auto& info = it->second;
+ObjectRef parsed_obj(nullptr);
+if (info.type_index != ArrayNode::_type_index) {
+  parsed_obj = ParseAtomicType(info.type_index, obj);
+} else {
+  Array<ObjectRef> array;
+  std::string item;
+  bool failed = false;
+  uint32_t type_index = info.key->type_index;
+  for (std::istringstream is(obj); std::getline(is, item, ',');) {
+ObjectRef parsed_obj = ParseAtomicType(type_index, item);
+if (parsed_obj.defined()) {
+  array.push_back(parsed_obj);
+} else {
+  failed = true;
+  break;
+}
+  }
+  if (!failed) {
+parsed_obj = std::move(array);
+  }
+}
+if (!parsed_obj.defined()) {
+  LOG(FATAL) << "ValueError: Cannot parse type \"" << info.type_key << "\""
+ << ", where attribute key is \"" << name << "\""
+ << ", and attribute is \"" << obj << "\"";
+}
+attrs[name] = std::move(parsed_obj);
+  }
+  // set default attribute values if they do not exist
+  for (const auto& kv : this->kind->key2default_) {
+if (!attrs.count(kv.first)) {
+  attrs[kv.first] = kv.second;
+}
+  }
+  return attrs;
+}
+
+static inline Optional<String> StringifyAtomicType(const ObjectRef& obj) {
+  if (const auto* p = obj.as<IntImmNode>()) {
+return String(std::to_string(p->value));
+  }
+  if (const auto* p = obj.as<StringObj>()) {
+return GetRef<String>(p);
+  }
+  return NullOpt;
+}
+
+static inline Optional<String> JoinString(const std::vector<String>& array, char separator) {
+  if (array.empty()) {
+return NullOpt;
+  }
+  std::ostringstream os;
+  os << array[0];
+  for (size_t i = 1; 

[GitHub] [incubator-tvm] csullivan opened a new pull request #6251: [ONNX] Add Clip importer to handle when min/max are provided as inputs.

2020-08-11 Thread GitBox


csullivan opened a new pull request #6251:
URL: https://github.com/apache/incubator-tvm/pull/6251


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] yangjunpro commented on a change in pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


yangjunpro commented on a change in pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#discussion_r468939081



##
File path: include/tvm/auto_scheduler/feature.h
##
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file auto_scheduler/feature.h
+ * \brief Feature extraction for the cost model.
+ * We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+ * so we call this feature as "Per Store" feature.
+ * The cost model also does prediction for each BufferStoreNode statement and 
aggregates
+ * the predictions as the whole score for a TVM IR (Stmt).
+ *
+ * The feature specification is defined by `src/auto_scheduler/feature.cc::FeatureSet`
+ */
+
+#ifndef TVM_AUTO_SCHEDULER_FEATURE_H_
+#define TVM_AUTO_SCHEDULER_FEATURE_H_
+
+#include 
+#include 
+
+#include 
+#include 
+
+namespace tvm {
+namespace auto_scheduler {
+
+/*!
+ * \brief Get per-store feature from a TIR Stmt
+ * \param stmt The input lowered TIR statement
+ * \param cache_line_size The size of cache line in bytes
+ * \param max_n_bufs The maximum number of extracted buffers for one statement
+ * \param ret The returned feature vector
+ */
+void GetPerStoreFeature(const Stmt& stmt, int cache_line_size, int max_n_bufs,
+std::vector* ret);
+
+/*
+ * \brief Get the names of elements in the feature vector. Use this for debug 
and inspection.
+ * \param max_n_bufs The maximum number of extracted buffers for one statement
+ * \param ret The returned names.
+ */
+void GetPerStoreFeatureName(int max_n_bufs, std::vector* ret);
+
+/*!
+ * \brief Get per-store feature from states of the same task
+ * \param states The input states
+ * \param task The same search task for all states
+ * \param skip_first_n_feature_extraction Skip feature extraction for the 
first n states
+ * \param max_n_bufs The maximum number of extracted buffers for one statement
+ * \param features The returned feature vector. The innermost vector contains 
the
+ * feature vectors for all BufferStoreNode statements
+ */
+void GetPerStoreFeaturesFromStates(const Array& states, const 
SearchTask& task,
+   int skip_first_n_feature_extraction, int 
max_n_bufs,

Review comment:
   "skip_fst_n_features" may be more compact?

##
File path: python/tvm/auto_scheduler/feature.py
##
@@ -0,0 +1,242 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Python API for Feature extraction. The extracted features vector are used by 
cost models.
+
+We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+so we call this feature as "Per Store" feature.
+The cost model also does prediction for each BufferStoreNode statement and 
aggregates
+the predicted score of each BufferStoreNode as the score of a TIR Stmt.
+
+The feature specification is defined by 
`src/auto_scheduler/feature.cc::FeatureSet`
+"""
+
+from typing import List, Tuple, Union, Optional
+import struct
+
+import numpy as np
+
+from .loop_state import State, StateObject
+from .measure import MeasureInput, MeasureResult
+from . import _ffi_api
+
+# The maximum number of extracted buffers for one statement
+DEFAULT_MAX_N_BUFS = 5
+
+# The length of the feature vector
+DEFAULT_FEATURE_VEC_LEN = 164
+
+# The size of int and float in bytes
+SIZE_OF_INT = 4
+SIZE_OF_FLOAT = 4
+
+def unpack_feature(byte_arr: bytearray) -> Tuple[np.ndarray, np.ndarray, 

[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-11 Thread GitBox


comaniac commented on a change in pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r468942491



##
File path: src/target/target.cc
##
@@ -30,20 +30,195 @@
 #include 
 #include 
 
+#include "../runtime/object_internal.h"
+
 namespace tvm {
 
 using runtime::PackedFunc;
 using runtime::TVMArgs;
 using runtime::TVMRetValue;
 
+TVM_REGISTER_NODE_TYPE(TargetNode);
+
+static inline size_t CountNumPrefixDashes(const std::string& s) {
+  size_t i = 0;
+  for (; i < s.length() && s[i] == '-'; ++i) {
+  }
+  return i;
+}
+
+static inline int FindUniqueSubstr(const std::string& str, const std::string& 
substr) {
+  size_t pos = str.find_first_of(substr);
+  if (pos == std::string::npos) {
+return -1;
+  }
+  size_t next_pos = pos + substr.size();
+  CHECK(next_pos >= str.size() || str.find_first_of(substr, next_pos) == 
std::string::npos)
+  << "ValueError: At most one \"" << substr << "\" is allowed in "
+  << "the given string \"" << str << "\"";
+  return pos;
+}
+
+static inline ObjectRef ParseAtomicType(uint32_t type_index, const 
std::string& str) {
+  std::istringstream is(str);
+  if (type_index == Integer::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+int v;
+is >> v;
+return is.fail() ? ObjectRef(nullptr) : Integer(v);
+  } else if (type_index == 
String::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+std::string v;
+is >> v;
+return is.fail() ? ObjectRef(nullptr) : String(v);
+  }
+  return ObjectRef(nullptr);
+}
+
+Map<String, ObjectRef> TargetNode::ParseAttrsFromRaw(
+const std::vector<std::string>& options) const {
+  std::unordered_map<String, ObjectRef> attrs;
+  for (size_t iter = 0, end = options.size(); iter < end;) {
+std::string s = options[iter++];
+// remove the prefix dashes
+size_t n_dashes = CountNumPrefixDashes(s);
+CHECK(0 < n_dashes && n_dashes < s.size())
+<< "ValueError: Not an attribute key \"" << s << "\"";
+s = s.substr(n_dashes);
+// parse name-obj pair
+std::string name;
+std::string obj;
+int pos;
+if ((pos = FindUniqueSubstr(s, "=")) != -1) {
+  // case 1. --key=value
+  name = s.substr(0, pos);
+  obj = s.substr(pos + 1);
+  CHECK(!name.empty()) << "ValueError: Empty attribute key in \"" << 
options[iter - 1] << "\"";
+  CHECK(!obj.empty()) << "ValueError: Empty attribute in \"" << 
options[iter - 1] << "\"";
+} else if (iter < end && options[iter][0] != '-') {
+  // case 2. --key value
+  name = s;
+  obj = options[iter++];
+} else {
+  // case 3. --boolean-key
+  name = s;
+  obj = "1";
+}
+// check if `name` is invalid
+auto it = this->kind->key2vtype_.find(name);
+if (it == this->kind->key2vtype_.end()) {
+  std::ostringstream os;
+  os << "AttributeError: Invalid config option, cannot recognize \'" << 
name
+ << "\'. Candidates are:";
+  for (const auto& kv : this->kind->key2vtype_) {
+os << "\n  " << kv.first;
+  }
+  LOG(FATAL) << os.str();
+}
+// check if `name` has been set once
+CHECK(!attrs.count(name)) << "AttributeError: key \"" << name
+  << "\" appears more than once in the target 
string";
+// then `name` is valid, let's parse them
+// only several types are supported when parsing raw string
+const auto& info = it->second;
+ObjectRef parsed_obj(nullptr);
+if (info.type_index != ArrayNode::_type_index) {
+  parsed_obj = ParseAtomicType(info.type_index, obj);
+} else {
+  Array<ObjectRef> array;
+  std::string item;
+  bool failed = false;
+  uint32_t type_index = info.key->type_index;
+  for (std::istringstream is(obj); std::getline(is, item, ',');) {
+ObjectRef parsed_obj = ParseAtomicType(type_index, item);
+if (parsed_obj.defined()) {
+  array.push_back(parsed_obj);
+} else {
+  failed = true;
+  break;
+}
+  }
+  if (!failed) {
+parsed_obj = std::move(array);
+  }
+}
+if (!parsed_obj.defined()) {
+  LOG(FATAL) << "ValueError: Cannot parse type \"" << info.type_key << "\""
+ << ", where attribute key is \"" << name << "\""
+ << ", and attribute is \"" << obj << "\"";
+}
+attrs[name] = std::move(parsed_obj);
+  }
+  // set default attribute values if they do not exist
+  for (const auto& kv : this->kind->key2default_) {
+if (!attrs.count(kv.first)) {
+  attrs[kv.first] = kv.second;
+}
+  }
+  return attrs;
+}
+
+static inline Optional<String> StringifyAtomicType(const ObjectRef& obj) {
+  if (const auto* p = obj.as<IntImmNode>()) {
+return String(std::to_string(p->value));
+  }
+  if (const auto* p = obj.as<StringObj>()) {
+return GetRef<String>(p);
+  }
+  return NullOpt;
+}
+
+static inline Optional JoinString(const std::vector& array, 
char separator) {
+  if (array.empty()) {
+return NullOpt;
+  }
+  std::ostringstream os;
+  os << array[0];
+  for (size_t i = 1; i < 

[GitHub] [incubator-tvm] zhiics opened a new pull request #6250: [TOPI] Fix reduction

2020-08-11 Thread GitBox


zhiics opened a new pull request #6250:
URL: https://github.com/apache/incubator-tvm/pull/6250


   A small fix to cuda reduction schedule
   
   cc @icemelon9 @masahi 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-11 Thread GitBox


junrushao1994 commented on a change in pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r468939010



##
File path: src/target/target.cc
##
@@ -30,20 +30,195 @@
 #include 
 #include 
 
+#include "../runtime/object_internal.h"
+
 namespace tvm {
 
 using runtime::PackedFunc;
 using runtime::TVMArgs;
 using runtime::TVMRetValue;
 
+TVM_REGISTER_NODE_TYPE(TargetNode);
+
+static inline size_t CountNumPrefixDashes(const std::string& s) {
+  size_t i = 0;
+  for (; i < s.length() && s[i] == '-'; ++i) {
+  }
+  return i;
+}
+
+static inline int FindUniqueSubstr(const std::string& str, const std::string& 
substr) {
+  size_t pos = str.find_first_of(substr);
+  if (pos == std::string::npos) {
+return -1;
+  }
+  size_t next_pos = pos + substr.size();
+  CHECK(next_pos >= str.size() || str.find_first_of(substr, next_pos) == 
std::string::npos)
+  << "ValueError: At most one \"" << substr << "\" is allowed in "
+  << "the the given string \"" << str << "\"";
+  return pos;
+}
+
+static inline ObjectRef ParseAtomicType(uint32_t type_index, const 
std::string& str) {
+  std::istringstream is(str);
+  if (type_index == Integer::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+int v;
+is >> v;
+return is.fail() ? ObjectRef(nullptr) : Integer(v);
+  } else if (type_index == 
String::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+std::string v;
+is >> v;
+return is.fail() ? ObjectRef(nullptr) : String(v);
+  }
+  return ObjectRef(nullptr);
+}
+
+Map<String, ObjectRef> TargetNode::ParseAttrsFromRaw(
+const std::vector<std::string>& options) const {
+  std::unordered_map<String, ObjectRef> attrs;
+  for (size_t iter = 0, end = options.size(); iter < end;) {
+std::string s = options[iter++];
+// remove the prefix dashes
+size_t n_dashes = CountNumPrefixDashes(s);
+CHECK(0 < n_dashes && n_dashes < s.size())
+<< "ValueError: Not an attribute key \"" << s << "\"";
+s = s.substr(n_dashes);
+// parse name-obj pair
+std::string name;
+std::string obj;
+int pos;
+if ((pos = FindUniqueSubstr(s, "=")) != -1) {
+  // case 1. --key=value
+  name = s.substr(0, pos);
+  obj = s.substr(pos + 1);
+  CHECK(!name.empty()) << "ValueError: Empty attribute key in \"" << 
options[iter - 1] << "\"";
+  CHECK(!obj.empty()) << "ValueError: Empty attribute in \"" << 
options[iter - 1] << "\"";
+} else if (iter < end && options[iter][0] != '-') {
+  // case 2. --key value
+  name = s;
+  obj = options[iter++];
+} else {
+  // case 3. --boolean-key
+  name = s;
+  obj = "1";
+}
+// check if `name` is invalid
+auto it = this->kind->key2vtype_.find(name);
+if (it == this->kind->key2vtype_.end()) {
+  std::ostringstream os;
+  os << "AttributeError: Invalid config option, cannot recognize \'" << 
name
+ << "\'. Candidates are:";
+  for (const auto& kv : this->kind->key2vtype_) {
+os << "\n  " << kv.first;
+  }
+  LOG(FATAL) << os.str();
+}
+// check if `name` has been set once
+CHECK(!attrs.count(name)) << "AttributeError: key \"" << name
+  << "\" appears more than once in the target 
string";
+// then `name` is valid, let's parse them
+// only several types are supported when parsing raw string
+const auto& info = it->second;
+ObjectRef parsed_obj(nullptr);
+if (info.type_index != ArrayNode::_type_index) {
+  parsed_obj = ParseAtomicType(info.type_index, obj);
+} else {
+  Array<ObjectRef> array;
+  std::string item;
+  bool failed = false;
+  uint32_t type_index = info.key->type_index;
+  for (std::istringstream is(obj); std::getline(is, item, ',');) {
+ObjectRef parsed_obj = ParseAtomicType(type_index, item);
+if (parsed_obj.defined()) {
+  array.push_back(parsed_obj);
+} else {
+  failed = true;
+  break;
+}
+  }
+  if (!failed) {
+parsed_obj = std::move(array);
+  }
+}
+if (!parsed_obj.defined()) {
+  LOG(FATAL) << "ValueError: Cannot parse type \"" << info.type_key << "\""
+ << ", where attribute key is \"" << name << "\""
+ << ", and attribute is \"" << obj << "\"";
+}
+attrs[name] = std::move(parsed_obj);
+  }
+  // set default attribute values if they do not exist
+  for (const auto& kv : this->kind->key2default_) {
+if (!attrs.count(kv.first)) {
+  attrs[kv.first] = kv.second;
+}
+  }
+  return attrs;
+}
+
+static inline Optional<String> StringifyAtomicType(const ObjectRef& obj) {
+  if (const auto* p = obj.as<IntImmNode>()) {
+return String(std::to_string(p->value));
+  }
+  if (const auto* p = obj.as<StringObj>()) {
+return GetRef<String>(p);
+  }
+  return NullOpt;
+}
+
+static inline Optional<String> JoinString(const std::vector<String>& array, 
char separator) {
+  if (array.empty()) {
+return NullOpt;
+  }
+  std::ostringstream os;
+  os << array[0];
+  for (size_t i = 1; 
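For readers following the option-parsing logic quoted above: ParseAttrsFromRaw distinguishes three forms (--key=value, --key value, and a bare --boolean-key that defaults to "1"). A minimal Python sketch of the same dispatch — `known_keys` stands in for kind->key2vtype_, and the typed parsing and the "at most one '='" check are omitted:

```python
def parse_target_options(options, known_keys):
    """Parse raw target options the way ParseAttrsFromRaw does.

    Handles three forms: --key=value, --key value, and --boolean-key
    (which defaults to "1").
    """
    attrs = {}
    i = 0
    while i < len(options):
        s = options[i]
        i += 1
        name = s.lstrip('-')            # CountNumPrefixDashes + substr
        if not name or name == s:       # require 0 < n_dashes < len(s)
            raise ValueError('Not an attribute key "%s"' % s)
        if '=' in name:                 # case 1. --key=value
            name, obj = name.split('=', 1)
            if not name or not obj:
                raise ValueError('Empty attribute key or value in "%s"' % s)
        elif i < len(options) and not options[i].startswith('-'):
            obj = options[i]            # case 2. --key value
            i += 1
        else:
            obj = "1"                   # case 3. --boolean-key
        if name not in known_keys:
            raise KeyError('Invalid config option "%s"' % name)
        if name in attrs:
            raise KeyError('key "%s" appears more than once' % name)
        attrs[name] = obj
    return attrs
```

Like the C++ code, a following token that starts with '-' is never consumed as a value, so boolean keys and negative-looking values behave the same way.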

[incubator-tvm] branch master updated: Revert "fix cuda half math function is undefined: hpow, htanh (#6225)" (#6249)

2020-08-11 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 7174ac2  Revert "fix cuda half math function is undefined: hpow, htanh 
(#6225)" (#6249)
7174ac2 is described below

commit 7174ac27c670c84c3b4a9783649f3e9e31754dd9
Author: Tianqi Chen 
AuthorDate: Tue Aug 11 17:32:57 2020 -0700

Revert "fix cuda half math function is undefined: hpow, htanh (#6225)" 
(#6249)

This reverts commit ed04cdd35f1990959ec788be0131b1388fd11d31.
---
 src/target/source/literal/cuda_half_t.h | 13 -
 1 file changed, 13 deletions(-)

diff --git a/src/target/source/literal/cuda_half_t.h 
b/src/target/source/literal/cuda_half_t.h
index 422d2c0..baf4ba7 100644
--- a/src/target/source/literal/cuda_half_t.h
+++ b/src/target/source/literal/cuda_half_t.h
@@ -293,19 +293,6 @@ __pack_half2(const half x, const half y) {
  unsigned v0 = *((unsigned short *)&x);
  unsigned v1 = *((unsigned short *)&y);
   return (v1 << 16) | v0;
 }
-
-static inline __device__ __host__ half hpow(half x, half y) {
-  float tmp_x = __half2float(x);
-  float tmp_y = __half2float(y);
-  float result = powf(tmp_x, tmp_y);
-  return __float2half(result);
-}
-
-static inline __device__ __host__ half htanh(half x) {
-  float tmp_x = __half2float(x);
-  float result = tanhf(tmp_x);
-  return __float2half(result);
-}
 )";
 
 static constexpr const char* _cuda_warp_intrinsic_util = R"(
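The reverted hpow/htanh helpers follow one pattern: widen the half operand to float32, apply the float math function, and round the result back to half. Python's struct module supports IEEE binary16 via the 'e' format character, so the same round-trip can be sketched in pure Python:

```python
import struct
import math

def half_roundtrip(f, op):
    """Emulate the reverted pattern: half -> float, apply op, float -> half.

    struct's 'e' format performs the IEEE binary16 conversion, mirroring
    __half2float / __float2half in the CUDA snippet.
    """
    # __half2float: quantize the input to half precision first
    x = struct.unpack('e', struct.pack('e', f))[0]
    result = op(x)
    # __float2half: round the float result back to half precision
    return struct.unpack('e', struct.pack('e', result))[0]
```

For example, `half_roundtrip(2.0, lambda v: math.pow(v, 3.0))` behaves like `hpow(2.0h, 3.0h)` and `half_roundtrip(x, math.tanh)` like `htanh(x)`.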



[GitHub] [incubator-tvm] tqchen merged pull request #6249: Revert "fix cuda half math function is undefined: hpow, htanh"

2020-08-11 Thread GitBox


tqchen merged pull request #6249:
URL: https://github.com/apache/incubator-tvm/pull/6249


   







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-11 Thread GitBox


comaniac commented on a change in pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r468935069



##
File path: src/target/target.cc
##
@@ -162,14 +313,149 @@ Target Target::Create(const String& target_str) {
   return CreateTarget(splits[0], {splits.begin() + 1, splits.end()});
 }
 
+ObjectRef TargetNode::ParseAttr(const ObjectRef& obj,
+const TargetKindNode::ValueTypeInfo& info) 
const {
+  if (info.type_index == 
Integer::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+const auto* v = obj.as<IntImmNode>();
+CHECK(v != nullptr) << "Expect type 'int', but get: " << obj->GetTypeKey();
+return GetRef<Integer>(v);
+  }
+  if (info.type_index == String::ContainerType::_GetOrAllocRuntimeTypeIndex()) 
{
+const auto* v = obj.as<StringObj>();
+CHECK(v != nullptr) << "Expect type 'str', but get: " << obj->GetTypeKey();
+return GetRef<String>(v);
+  }
+  if (info.type_index == Target::ContainerType::_GetOrAllocRuntimeTypeIndex()) 
{
+CHECK(obj->IsInstance<MapNode>())
+<< "Expect type 'dict' to construct Target, but get: " << 
obj->GetTypeKey();
+return Target::FromConfig(Downcast<Map<String, ObjectRef>>(obj));
+  }
+  if (info.type_index == ArrayNode::_GetOrAllocRuntimeTypeIndex()) {
+CHECK(obj->IsInstance()) << "Expect type 'list', but get: " << 
obj->GetTypeKey();
+Array array = Downcast>(obj);
+std::vector result;
+int i = 0;
+for (const ObjectRef& e : array) {
+  ++i;
+  try {
+result.push_back(TargetNode::ParseAttr(e, *info.key));
+  } catch (const dmlc::Error& e) {
+LOG(FATAL) << "Error occurred when parsing element " << i << " of the 
array: " << array
+   << ". Details:\n"
+   << e.what();
+  }
+}
+return Array<ObjectRef>(result);
+  }
+  if (info.type_index == MapNode::_GetOrAllocRuntimeTypeIndex()) {
+CHECK(obj->IsInstance()) << "Expect type 'dict', but get: " << 
obj->GetTypeKey();
+std::unordered_map result;
+for (const auto& kv : Downcast>(obj)) {
+  ObjectRef key, val;
+  try {
+key = TargetNode::ParseAttr(kv.first, *info.key);
+  } catch (const tvm::Error& e) {
+LOG(FATAL) << "Error occurred when parsing a key of the dict: " << 
kv.first
+   << ". Details:\n"
+   << e.what();
+  }
+  try {
+val = TargetNode::ParseAttr(kv.second, *info.val);
+  } catch (const tvm::Error& e) {
+LOG(FATAL) << "Error occurred when parsing a value of the dict: " << 
kv.second
+   << ". Details:\n"
+   << e.what();
+  }
+  result[key] = val;
+}
+return Map<ObjectRef, ObjectRef>(result);
+  }
+  LOG(FATAL) << "Unsupported type registered: \"" << info.type_key
+ << "\", and the type given is: " << obj->GetTypeKey();
+  throw;
+}
+
+Target Target::FromConfig(const Map<String, ObjectRef>& config_dict) {
+  const String kKind = "kind";
+  const String kTag = "tag";
+  const String kKeys = "keys";
+  std::unordered_map<String, ObjectRef> config(config_dict.begin(), 
config_dict.end());
+  ObjectPtr<TargetNode> target = make_object<TargetNode>();
+  // parse 'kind'
+  if (config.count(kKind)) {
+const auto* kind = config[kKind].as<StringObj>();
+CHECK(kind != nullptr) << "AttributeError: Expect type of field 'kind' is 
string, but get: "
+   << config[kKind]->GetTypeKey();

Review comment:
   Fair enough. Let's keep them.
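ParseAttr in the diff above is a recursive, type-directed validator over the JSON-like config. A rough Python analogue — assuming a toy type description (int, str, [elem_ty], {key_ty: val_ty}) in place of TargetKindNode::ValueTypeInfo:

```python
def parse_attr(obj, ty):
    """Validate a JSON-like config value against an expected type,
    echoing the structure of TargetNode::ParseAttr."""
    if ty is int:
        if not isinstance(obj, int):
            raise TypeError("Expect type 'int', but get: %s" % type(obj).__name__)
        return obj
    if ty is str:
        if not isinstance(obj, str):
            raise TypeError("Expect type 'str', but get: %s" % type(obj).__name__)
        return obj
    if isinstance(ty, list):  # [elem_ty]: validate every element
        if not isinstance(obj, list):
            raise TypeError("Expect type 'list', but get: %s" % type(obj).__name__)
        return [parse_attr(e, ty[0]) for e in obj]
    if isinstance(ty, dict):  # {key_ty: val_ty}: validate keys and values
        if not isinstance(obj, dict):
            raise TypeError("Expect type 'dict', but get: %s" % type(obj).__name__)
        (key_ty, val_ty), = ty.items()
        return {parse_attr(k, key_ty): parse_attr(v, val_ty)
                for k, v in obj.items()}
    raise TypeError("Unsupported type registered: %r" % (ty,))
```

The C++ version additionally wraps recursive failures with the offending element index or dict key, which is what the try/catch blocks in the diff are for.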









[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-11 Thread GitBox


junrushao1994 commented on a change in pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r468933421



##
File path: src/target/target.cc
##
@@ -30,20 +30,195 @@
 #include 
 #include 
 
+#include "../runtime/object_internal.h"
+
 namespace tvm {
 
 using runtime::PackedFunc;
 using runtime::TVMArgs;
 using runtime::TVMRetValue;
 
+TVM_REGISTER_NODE_TYPE(TargetNode);
+
+static inline size_t CountNumPrefixDashes(const std::string& s) {
+  size_t i = 0;
+  for (; i < s.length() && s[i] == '-'; ++i) {
+  }
+  return i;
+}
+
+static inline int FindUniqueSubstr(const std::string& str, const std::string& 
substr) {
+  size_t pos = str.find_first_of(substr);
+  if (pos == std::string::npos) {
+return -1;
+  }
+  size_t next_pos = pos + substr.size();
+  CHECK(next_pos >= str.size() || str.find_first_of(substr, next_pos) == 
std::string::npos)
+  << "ValueError: At most one \"" << substr << "\" is allowed in "
+  << "the the given string \"" << str << "\"";
+  return pos;
+}
+
+static inline ObjectRef ParseAtomicType(uint32_t type_index, const 
std::string& str) {
+  std::istringstream is(str);
+  if (type_index == Integer::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+int v;
+is >> v;
+return is.fail() ? ObjectRef(nullptr) : Integer(v);
+  } else if (type_index == 
String::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+std::string v;
+is >> v;
+return is.fail() ? ObjectRef(nullptr) : String(v);
+  }
+  return ObjectRef(nullptr);
+}
+
+Map<String, ObjectRef> TargetNode::ParseAttrsFromRaw(
+const std::vector<std::string>& options) const {
+  std::unordered_map<String, ObjectRef> attrs;
+  for (size_t iter = 0, end = options.size(); iter < end;) {
+std::string s = options[iter++];
+// remove the prefix dashes
+size_t n_dashes = CountNumPrefixDashes(s);
+CHECK(0 < n_dashes && n_dashes < s.size())
+<< "ValueError: Not an attribute key \"" << s << "\"";
+s = s.substr(n_dashes);
+// parse name-obj pair
+std::string name;
+std::string obj;
+int pos;
+if ((pos = FindUniqueSubstr(s, "=")) != -1) {
+  // case 1. --key=value
+  name = s.substr(0, pos);
+  obj = s.substr(pos + 1);
+  CHECK(!name.empty()) << "ValueError: Empty attribute key in \"" << 
options[iter - 1] << "\"";
+  CHECK(!obj.empty()) << "ValueError: Empty attribute in \"" << 
options[iter - 1] << "\"";
+} else if (iter < end && options[iter][0] != '-') {
+  // case 2. --key value
+  name = s;
+  obj = options[iter++];
+} else {
+  // case 3. --boolean-key
+  name = s;
+  obj = "1";
+}
+// check if `name` is invalid
+auto it = this->kind->key2vtype_.find(name);
+if (it == this->kind->key2vtype_.end()) {
+  std::ostringstream os;
+  os << "AttributeError: Invalid config option, cannot recognize \'" << 
name
+ << "\'. Candidates are:";
+  for (const auto& kv : this->kind->key2vtype_) {
+os << "\n  " << kv.first;
+  }
+  LOG(FATAL) << os.str();
+}
+// check if `name` has been set once
+CHECK(!attrs.count(name)) << "AttributeError: key \"" << name
+  << "\" appears more than once in the target 
string";
+// then `name` is valid, let's parse them
+// only several types are supported when parsing raw string
+const auto& info = it->second;
+ObjectRef parsed_obj(nullptr);
+if (info.type_index != ArrayNode::_type_index) {
+  parsed_obj = ParseAtomicType(info.type_index, obj);
+} else {
+  Array<ObjectRef> array;
+  std::string item;
+  bool failed = false;
+  uint32_t type_index = info.key->type_index;
+  for (std::istringstream is(obj); std::getline(is, item, ',');) {
+ObjectRef parsed_obj = ParseAtomicType(type_index, item);
+if (parsed_obj.defined()) {
+  array.push_back(parsed_obj);
+} else {
+  failed = true;
+  break;
+}
+  }
+  if (!failed) {
+parsed_obj = std::move(array);
+  }
+}
+if (!parsed_obj.defined()) {
+  LOG(FATAL) << "ValueError: Cannot parse type \"" << info.type_key << "\""
+ << ", where attribute key is \"" << name << "\""
+ << ", and attribute is \"" << obj << "\"";
+}
+attrs[name] = std::move(parsed_obj);

Review comment:
   Yes, we will finally deprecate string serialization.









[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-11 Thread GitBox


junrushao1994 commented on a change in pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r468931440



##
File path: src/target/target.cc
##
@@ -30,20 +30,195 @@
 #include 
 #include 
 
+#include "../runtime/object_internal.h"
+
 namespace tvm {
 
 using runtime::PackedFunc;
 using runtime::TVMArgs;
 using runtime::TVMRetValue;
 
+TVM_REGISTER_NODE_TYPE(TargetNode);
+
+static inline size_t CountNumPrefixDashes(const std::string& s) {

Review comment:
   You are right. Thanks!
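For reference, CountNumPrefixDashes — the helper this comment thread is about — has a one-line Python counterpart:

```python
def count_prefix_dashes(s):
    """Length of the run of leading '-' characters, as in
    CountNumPrefixDashes in target.cc."""
    return len(s) - len(s.lstrip('-'))
```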









[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-11 Thread GitBox


junrushao1994 commented on a change in pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r468930805



##
File path: src/target/target.cc
##
@@ -162,14 +313,149 @@ Target Target::Create(const String& target_str) {
   return CreateTarget(splits[0], {splits.begin() + 1, splits.end()});
 }
 
+ObjectRef TargetNode::ParseAttr(const ObjectRef& obj,
+const TargetKindNode::ValueTypeInfo& info) 
const {
+  if (info.type_index == 
Integer::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+const auto* v = obj.as();
+CHECK(v != nullptr) << "Expect type 'int', but get: " << obj->GetTypeKey();
+return GetRef(v);
+  }
+  if (info.type_index == String::ContainerType::_GetOrAllocRuntimeTypeIndex()) 
{
+const auto* v = obj.as();
+CHECK(v != nullptr) << "Expect type 'str', but get: " << obj->GetTypeKey();
+return GetRef(v);
+  }
+  if (info.type_index == Target::ContainerType::_GetOrAllocRuntimeTypeIndex()) 
{
+CHECK(obj->IsInstance())
+<< "Expect type 'dict' to construct Target, but get: " << 
obj->GetTypeKey();
+return Target::FromConfig(Downcast>(obj));
+  }
+  if (info.type_index == ArrayNode::_GetOrAllocRuntimeTypeIndex()) {
+CHECK(obj->IsInstance()) << "Expect type 'list', but get: " << 
obj->GetTypeKey();
+Array array = Downcast>(obj);
+std::vector result;
+int i = 0;
+for (const ObjectRef& e : array) {
+  ++i;
+  try {
+result.push_back(TargetNode::ParseAttr(e, *info.key));
+  } catch (const dmlc::Error& e) {
+LOG(FATAL) << "Error occurred when parsing element " << i << " of the 
array: " << array
+   << ". Details:\n"
+   << e.what();
+  }
+}
+return Array(result);
+  }
+  if (info.type_index == MapNode::_GetOrAllocRuntimeTypeIndex()) {
+CHECK(obj->IsInstance()) << "Expect type 'dict', but get: " << 
obj->GetTypeKey();
+std::unordered_map result;
+for (const auto& kv : Downcast>(obj)) {
+  ObjectRef key, val;
+  try {
+key = TargetNode::ParseAttr(kv.first, *info.key);
+  } catch (const tvm::Error& e) {
+LOG(FATAL) << "Error occurred when parsing a key of the dict: " << 
kv.first
+   << ". Details:\n"
+   << e.what();
+  }
+  try {
+val = TargetNode::ParseAttr(kv.second, *info.val);
+  } catch (const tvm::Error& e) {
+LOG(FATAL) << "Error occurred when parsing a value of the dict: " << 
kv.second
+   << ". Details:\n"
+   << e.what();
+  }
+  result[key] = val;
+}
+return Map(result);
+  }
+  LOG(FATAL) << "Unsupported type registered: \"" << info.type_key
+ << "\", and the type given is: " << obj->GetTypeKey();
+  throw;
+}
+
+Target Target::FromConfig(const Map& config_dict) {
+  const String kKind = "kind";
+  const String kTag = "tag";
+  const String kKeys = "keys";
+  std::unordered_map config(config_dict.begin(), 
config_dict.end());
+  ObjectPtr target = make_object();
+  // parse 'kind'
+  if (config.count(kKind)) {
+const auto* kind = config[kKind].as();
+CHECK(kind != nullptr) << "AttributeError: Expect type of field 'kind' is 
string, but get: "
+   << config[kKind]->GetTypeKey();
+target->kind = TargetKind::Get(GetRef(kind));
+config.erase(kKind);
+  } else {
+LOG(FATAL) << "AttributeError: Field 'kind' is not found";
+  }
+  // parse "tag"
+  if (config.count(kTag)) {
+const auto* tag = config[kTag].as();
+CHECK(tag != nullptr) << "AttributeError: Expect type of field 'tag' is 
string, but get: "
+  << config[kTag]->GetTypeKey();
+target->tag = GetRef(tag);
+config.erase(kTag);
+  } else {
+target->tag = "";
+  }
+  // parse "keys"
+  // TODO(@junrushao1994): add more keys according to CreateTarget

Review comment:
   Except for keys given from users, there are several keys we have to 
append, according to the convention of CreateTarget
   1) device name
   2) default keys
   3) and then de-duplicate those keys
   
   I will fix this in the PR
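The convention described in the comment — user-given keys, then the device name, then the kind's default keys, with duplicates removed — can be sketched as follows. `merge_keys` is a hypothetical helper for illustration, not TVM API:

```python
def merge_keys(user_keys, device_name, default_keys):
    """Merge target keys per the convention above: user keys first,
    then the device name, then the kind's default keys, dropping
    duplicates while preserving first-seen order."""
    merged = []
    candidates = list(user_keys)
    if device_name:
        candidates.append(device_name)
    candidates.extend(default_keys)
    for key in candidates:
        if key not in merged:   # de-duplicate, keep first occurrence
            merged.append(key)
    return merged
```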









[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-11 Thread GitBox


junrushao1994 commented on a change in pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r468929183



##
File path: src/target/target.cc
##
@@ -162,14 +313,149 @@ Target Target::Create(const String& target_str) {
   return CreateTarget(splits[0], {splits.begin() + 1, splits.end()});
 }
 
+ObjectRef TargetNode::ParseAttr(const ObjectRef& obj,
+const TargetKindNode::ValueTypeInfo& info) 
const {
+  if (info.type_index == 
Integer::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+const auto* v = obj.as();
+CHECK(v != nullptr) << "Expect type 'int', but get: " << obj->GetTypeKey();
+return GetRef(v);
+  }
+  if (info.type_index == String::ContainerType::_GetOrAllocRuntimeTypeIndex()) 
{
+const auto* v = obj.as();
+CHECK(v != nullptr) << "Expect type 'str', but get: " << obj->GetTypeKey();
+return GetRef(v);
+  }
+  if (info.type_index == Target::ContainerType::_GetOrAllocRuntimeTypeIndex()) 
{
+CHECK(obj->IsInstance())
+<< "Expect type 'dict' to construct Target, but get: " << 
obj->GetTypeKey();
+return Target::FromConfig(Downcast>(obj));
+  }
+  if (info.type_index == ArrayNode::_GetOrAllocRuntimeTypeIndex()) {
+CHECK(obj->IsInstance()) << "Expect type 'list', but get: " << 
obj->GetTypeKey();
+Array array = Downcast>(obj);
+std::vector result;
+int i = 0;
+for (const ObjectRef& e : array) {
+  ++i;
+  try {
+result.push_back(TargetNode::ParseAttr(e, *info.key));
+  } catch (const dmlc::Error& e) {
+LOG(FATAL) << "Error occurred when parsing element " << i << " of the 
array: " << array
+   << ". Details:\n"
+   << e.what();
+  }
+}
+return Array(result);
+  }
+  if (info.type_index == MapNode::_GetOrAllocRuntimeTypeIndex()) {
+CHECK(obj->IsInstance()) << "Expect type 'dict', but get: " << 
obj->GetTypeKey();
+std::unordered_map result;
+for (const auto& kv : Downcast>(obj)) {
+  ObjectRef key, val;
+  try {
+key = TargetNode::ParseAttr(kv.first, *info.key);
+  } catch (const tvm::Error& e) {
+LOG(FATAL) << "Error occurred when parsing a key of the dict: " << 
kv.first
+   << ". Details:\n"
+   << e.what();
+  }
+  try {
+val = TargetNode::ParseAttr(kv.second, *info.val);
+  } catch (const tvm::Error& e) {
+LOG(FATAL) << "Error occurred when parsing a value of the dict: " << 
kv.second
+   << ". Details:\n"
+   << e.what();
+  }
+  result[key] = val;
+}
+return Map(result);
+  }
+  LOG(FATAL) << "Unsupported type registered: \"" << info.type_key
+ << "\", and the type given is: " << obj->GetTypeKey();
+  throw;
+}
+
+Target Target::FromConfig(const Map& config_dict) {
+  const String kKind = "kind";
+  const String kTag = "tag";
+  const String kKeys = "keys";
+  std::unordered_map config(config_dict.begin(), 
config_dict.end());
+  ObjectPtr target = make_object();
+  // parse 'kind'
+  if (config.count(kKind)) {
+const auto* kind = config[kKind].as();
+CHECK(kind != nullptr) << "AttributeError: Expect type of field 'kind' is 
string, but get: "
+   << config[kKind]->GetTypeKey();

Review comment:
   Thanks for the suggestion! I thought for it once last week, but finally 
decided not to do so for the following reasons:
   1) It is very short 2 lines of code, we cannot save too much with a macro or 
function
   2) It is only used in a single function in a single cc file, and I want to 
avoid being over-engineered
   3) Let's keep the error reporting logistics closer to the code that uses it 
for better reading experience









[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-11 Thread GitBox


comaniac commented on a change in pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r468900741



##
File path: src/target/target.cc
##
@@ -162,14 +313,149 @@ Target Target::Create(const String& target_str) {
   return CreateTarget(splits[0], {splits.begin() + 1, splits.end()});
 }
 
+ObjectRef TargetNode::ParseAttr(const ObjectRef& obj,
+const TargetKindNode::ValueTypeInfo& info) 
const {
+  if (info.type_index == 
Integer::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+const auto* v = obj.as();
+CHECK(v != nullptr) << "Expect type 'int', but get: " << obj->GetTypeKey();
+return GetRef(v);
+  }
+  if (info.type_index == String::ContainerType::_GetOrAllocRuntimeTypeIndex()) 
{
+const auto* v = obj.as();
+CHECK(v != nullptr) << "Expect type 'str', but get: " << obj->GetTypeKey();
+return GetRef(v);
+  }
+  if (info.type_index == Target::ContainerType::_GetOrAllocRuntimeTypeIndex()) 
{
+CHECK(obj->IsInstance())
+<< "Expect type 'dict' to construct Target, but get: " << 
obj->GetTypeKey();
+return Target::FromConfig(Downcast>(obj));
+  }
+  if (info.type_index == ArrayNode::_GetOrAllocRuntimeTypeIndex()) {
+CHECK(obj->IsInstance()) << "Expect type 'list', but get: " << 
obj->GetTypeKey();
+Array array = Downcast>(obj);
+std::vector result;
+int i = 0;
+for (const ObjectRef& e : array) {
+  ++i;
+  try {
+result.push_back(TargetNode::ParseAttr(e, *info.key));
+  } catch (const dmlc::Error& e) {
+LOG(FATAL) << "Error occurred when parsing element " << i << " of the 
array: " << array
+   << ". Details:\n"
+   << e.what();
+  }
+}
+return Array(result);
+  }
+  if (info.type_index == MapNode::_GetOrAllocRuntimeTypeIndex()) {
+CHECK(obj->IsInstance()) << "Expect type 'dict', but get: " << 
obj->GetTypeKey();
+std::unordered_map result;
+for (const auto& kv : Downcast>(obj)) {
+  ObjectRef key, val;
+  try {
+key = TargetNode::ParseAttr(kv.first, *info.key);
+  } catch (const tvm::Error& e) {
+LOG(FATAL) << "Error occurred when parsing a key of the dict: " << 
kv.first
+   << ". Details:\n"
+   << e.what();
+  }
+  try {
+val = TargetNode::ParseAttr(kv.second, *info.val);
+  } catch (const tvm::Error& e) {
+LOG(FATAL) << "Error occurred when parsing a value of the dict: " << 
kv.second
+   << ". Details:\n"
+   << e.what();
+  }
+  result[key] = val;
+}
+return Map(result);
+  }
+  LOG(FATAL) << "Unsupported type registered: \"" << info.type_key
+ << "\", and the type given is: " << obj->GetTypeKey();
+  throw;
+}
+
+Target Target::FromConfig(const Map& config_dict) {
+  const String kKind = "kind";
+  const String kTag = "tag";
+  const String kKeys = "keys";
+  std::unordered_map config(config_dict.begin(), 
config_dict.end());
+  ObjectPtr target = make_object();
+  // parse 'kind'
+  if (config.count(kKind)) {
+const auto* kind = config[kKind].as();
+CHECK(kind != nullptr) << "AttributeError: Expect type of field 'kind' is 
string, but get: "
+   << config[kKind]->GetTypeKey();
+target->kind = TargetKind::Get(GetRef(kind));
+config.erase(kKind);
+  } else {
+LOG(FATAL) << "AttributeError: Field 'kind' is not found";
+  }
+  // parse "tag"
+  if (config.count(kTag)) {
+const auto* tag = config[kTag].as();
+CHECK(tag != nullptr) << "AttributeError: Expect type of field 'tag' is 
string, but get: "
+  << config[kTag]->GetTypeKey();
+target->tag = GetRef(tag);
+config.erase(kTag);
+  } else {
+target->tag = "";
+  }
+  // parse "keys"
+  // TODO(@junrushao1994): add more keys according to CreateTarget
+  if (config.count(kKeys)) {
+const auto* keys = config[kKeys].as<ArrayNode>();
+CHECK(keys != nullptr) << "AttributeError: Expect type of field 'keys' is 
an Array, but get: "
+   << config[kTag]->GetTypeKey();
+target->keys = {};
+for (const ObjectRef& e : *keys) {
+  const auto* key = e.as<StringObj>();
+  CHECK(key != nullptr) << "AttributeError: Expect 'keys' to be an array 
of strings, but it "
+   "contains an element of type: "
+<< e->GetTypeKey();
+  target->keys.push_back(GetRef<String>(key));
+}
+config.erase(kKeys);
+  } else {
+target->keys = {};
+  }
+  // parse attrs
+  // TODO(@junrushao1994): add default values
+  std::unordered_map<String, ObjectRef> attrs;
+  const auto& key2vtype = target->kind->key2vtype_;
+  for (const auto& cfg_kv : config) {
+const String& name = cfg_kv.first;
+const ObjectRef& obj = cfg_kv.second;
+if (!key2vtype.count(name)) {
+  std::ostringstream os;
+  os << "AttributeError: Invalid config option, cannot recognize \"" << 

[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy commented on a change in pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#discussion_r468927343



##
File path: python/tvm/auto_scheduler/feature.py
##
@@ -0,0 +1,242 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Python API for Feature extraction. The extracted feature vectors are used by 
cost models.
+
+We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+so we call this a "per-store" feature.
+The cost model also does prediction for each BufferStoreNode statement and 
aggregates
+the predicted score of each BufferStoreNode as the score of a TIR Stmt.
+
+The feature specification is defined by 
`src/auto_scheduler/feature.cc::FeatureSet`
+"""
+
+from typing import List, Tuple, Union, Optional
+import struct
+
+import numpy as np
+
+from .loop_state import State, StateObject
+from .measure import MeasureInput, MeasureResult
+from . import _ffi_api
+
+# The maximum number of extracted buffers for one statement
+DEFAULT_MAX_N_BUFS = 5
+
+# The length of the feature vector
+DEFAULT_FEATURE_VEC_LEN = 164
+
+# The size of int and float in bytes
+SIZE_OF_INT32 = 4
+SIZE_OF_FLOAT32 = 4
+
+def unpack_feature(byte_arr: bytearray) -> Tuple[np.ndarray, np.ndarray, 
np.ndarray]:
+"""Unpack the flatten feature (in byte array format) from c++
+
+Parameters
+--
+byte_arr: bytearray
+The two-dimensional feature vector in serialized byte array format
+
+Returns
+---
+features: np.ndarray
+Feature vectors
+normalized_throughputs: np.ndarray
+Normalized throughputs
+task_ids: np.ndarray
+Task ids
+"""
+
+# The format for n records is:
+# {
+#   int n;
+#   int[n+2] sizes
+
+#   float[sizes[0]]feature for record 1
+#   float[sizes[1]]feature for record 2
+#   ...feature for record i...
+#   float[sizes[n-1]]  feature for record n
+
+#   float[sizes[n]]normalized throughput for n records
+#   int[sizes[n+1]]task id for n records
+# }
+
+vec_len = DEFAULT_FEATURE_VEC_LEN
+
+# unpack sizes
+offset = 0
+n = struct.unpack_from("1i", byte_arr, offset=offset)[0]
+offset += SIZE_OF_INT32
+
+sizes = struct.unpack_from("%di" % (n+2), byte_arr, offset=offset)
+offset += SIZE_OF_INT32 * (n+2)
+
+# unpack features
+features = []
+for size in sizes[:-2]:
+row = []
+
+# Now, we need to unpack the feature for multiple statements.
+# The format is:
+# {
+# int n_stmts
+# float[n_stmt][vec_len] feature_vecs
+# }
+# where vec_len can be calculated by `(size - 1) / n_stmts`
+
+if size == 0:
+# failed during lowering
+features.append(np.zeros((1, vec_len)))
+else:
+n_stmts = struct.unpack_from("f", byte_arr, offset=offset)
+offset += SIZE_OF_FLOAT32
+
+n_stmts = int(n_stmts[0] + 0.5)
+tmp_vec_len = (size - 1) // n_stmts
+assert tmp_vec_len == vec_len, "The lenght of feature vector is 
wrong. " \
+   "Expected %d but got %d." % 
(vec_len, tmp_vec_len)
+assert tmp_vec_len * n_stmts == size - 1
+for _ in range(n_stmts):
+x = struct.unpack_from("%df" % vec_len, byte_arr, 
offset=offset)
+offset += vec_len * SIZE_OF_FLOAT32
+row.append(x)
+
+features.append(np.array(row))
+
+# unpack normalized_throughputs
+m = sizes[-2]
+normalized_throughputs = struct.unpack_from("%df" % m, byte_arr, 
offset=offset)
+offset += m * SIZE_OF_INT32
+
+# unpack task_ids
+m = sizes[-1]
+task_ids = struct.unpack_from("%di" % m, byte_arr, offset=offset)
+offset += m * SIZE_OF_INT32
+
+assert offset == len(byte_arr), "%d vs %d" % (offset, len(byte_arr))
+return np.array(features, dtype=object), np.array(normalized_throughputs), 
np.array(task_ids)
+
+
+def get_per_store_features_from_file(filename: str,
+ max_lines: int,
+ max_n_bufs: 

[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy commented on a change in pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#discussion_r468927281



##
File path: python/tvm/auto_scheduler/feature.py
##
@@ -0,0 +1,242 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+"""Python API for feature extraction. The extracted feature vectors are used by cost models.
+
+We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+so we call this the "Per Store" feature.
+The cost model also makes a prediction for each BufferStoreNode statement and aggregates
+the predicted scores of the BufferStoreNodes as the score of a TIR Stmt.
+
+The feature specification is defined by `src/auto_scheduler/feature.cc::FeatureSet`.
+"""
+
+from typing import List, Tuple, Union, Optional
+import struct
+
+import numpy as np
+
+from .loop_state import State, StateObject
+from .measure import MeasureInput, MeasureResult
+from . import _ffi_api
+
+# The maximum number of extracted buffers for one statement
+DEFAULT_MAX_N_BUFS = 5
+
+# The length of the feature vector
+DEFAULT_FEATURE_VEC_LEN = 164
+
+# The size of int and float in bytes
+SIZE_OF_INT32 = 4
+SIZE_OF_FLOAT32 = 4
+
+def unpack_feature(byte_arr: bytearray) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
+    """Unpack the flattened features (in byte array format) from C++.
+
+    Parameters
+    ----------
+    byte_arr: bytearray
+        The two-dimensional feature vector in serialized byte array format
+
+    Returns
+    -------
+    features: np.ndarray
+        Feature vectors
+    normalized_throughputs: np.ndarray
+        Normalized throughputs
+    task_ids: np.ndarray
+        Task ids
+    """
+
+    # The format for n records is:
+    # {
+    #     int n;
+    #     int[n+2] sizes
+
+    #     float[sizes[0]]    feature for record 1
+    #     float[sizes[1]]    feature for record 2
+    #     ...                feature for record i...
+    #     float[sizes[n-1]]  feature for record n
+
+    #     float[sizes[n]]    normalized throughput for n records
+    #     int[sizes[n+1]]    task id for n records
+    # }
+
+    vec_len = DEFAULT_FEATURE_VEC_LEN
+
+    # unpack sizes
+    offset = 0
+    n = struct.unpack_from("1i", byte_arr, offset=offset)[0]
+    offset += SIZE_OF_INT32
+
+    sizes = struct.unpack_from("%di" % (n + 2), byte_arr, offset=offset)
+    offset += SIZE_OF_INT32 * (n + 2)
+
+    # unpack features
+    features = []
+    for size in sizes[:-2]:
+        row = []
+
+        # Now, we need to unpack the feature for multiple statements.
+        # The format is:
+        # {
+        #     int n_stmts
+        #     float[n_stmt][vec_len] feature_vecs
+        # }
+        # where vec_len can be calculated by `(size - 1) / n_stmts`
+
+        if size == 0:
+            # failed during lowering
+            features.append(np.zeros((1, vec_len)))
+        else:
+            n_stmts = struct.unpack_from("f", byte_arr, offset=offset)
+            offset += SIZE_OF_FLOAT32
+
+            n_stmts = int(n_stmts[0] + 0.5)
+            tmp_vec_len = (size - 1) // n_stmts
+            assert tmp_vec_len == vec_len, "The length of the feature vector is wrong. " \
+                                           "Expected %d but got %d." % (vec_len, tmp_vec_len)
+            assert tmp_vec_len * n_stmts == size - 1
+            for _ in range(n_stmts):
+                x = struct.unpack_from("%df" % vec_len, byte_arr, offset=offset)
+                offset += vec_len * SIZE_OF_FLOAT32
+                row.append(x)
+
+            features.append(np.array(row))
+
+    # unpack normalized_throughputs
+    m = sizes[-2]
+    normalized_throughputs = struct.unpack_from("%df" % m, byte_arr, offset=offset)
+    offset += m * SIZE_OF_FLOAT32
+
+    # unpack task_ids
+    m = sizes[-1]
+    task_ids = struct.unpack_from("%di" % m, byte_arr, offset=offset)
+    offset += m * SIZE_OF_INT32
+
+    assert offset == len(byte_arr), "%d vs %d" % (offset, len(byte_arr))
+    return np.array(features, dtype=object), np.array(normalized_throughputs), np.array(task_ids)
+
+
+def get_per_store_features_from_file(filename: str,
+ max_lines: int,
+ max_n_bufs: 
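The byte layout documented inside `unpack_feature` above can be exercised outside TVM with a small round-trip sketch. All names below (`pack_records`, `unpack_records`) are illustrative helpers, not TVM APIs, and a toy vector length is used instead of the real default of 164:

```python
import struct

import numpy as np

# Toy round-trip for the serialized format described above:
# { int n; int[n+2] sizes; per-record floats; float throughputs; int task_ids }

VEC_LEN = 4  # illustrative feature-vector length (the real default is 164)

def pack_records(features, throughputs, task_ids):
    """Pack per-record feature matrices in the layout unpack_feature expects."""
    n = len(features)
    sizes = [1 + f.shape[0] * VEC_LEN for f in features]
    sizes += [len(throughputs), len(task_ids)]
    out = struct.pack("%di" % (n + 3), n, *sizes)
    for f in features:
        out += struct.pack("f", float(f.shape[0]))        # n_stmts stored as a float
        out += struct.pack("%df" % f.size, *f.flatten())  # the feature vectors
    out += struct.pack("%df" % len(throughputs), *throughputs)
    out += struct.pack("%di" % len(task_ids), *task_ids)
    return out

def unpack_records(byte_arr):
    """Mirror of the unpacking logic, with 4-byte ints and floats."""
    offset = 0
    n = struct.unpack_from("1i", byte_arr, offset)[0]
    offset += 4
    sizes = struct.unpack_from("%di" % (n + 2), byte_arr, offset)
    offset += 4 * (n + 2)
    features = []
    for size in sizes[:-2]:
        n_stmts = int(struct.unpack_from("f", byte_arr, offset)[0] + 0.5)
        offset += 4
        vec_len = (size - 1) // n_stmts
        rows = []
        for _ in range(n_stmts):
            rows.append(struct.unpack_from("%df" % vec_len, byte_arr, offset))
            offset += 4 * vec_len
        features.append(np.array(rows))
    m = sizes[-2]
    throughputs = list(struct.unpack_from("%df" % m, byte_arr, offset))
    offset += 4 * m
    m = sizes[-1]
    task_ids = list(struct.unpack_from("%di" % m, byte_arr, offset))
    offset += 4 * m
    assert offset == len(byte_arr)
    return features, throughputs, task_ids

feats = [np.arange(8, dtype=np.float32).reshape(2, VEC_LEN)]
buf = pack_records(feats, [0.5], [0])
out_feats, out_thr, out_ids = unpack_records(buf)
```

Packing and unpacking with the same `struct` format strings makes the size bookkeeping (the `sizes` array and the trailing throughput/task-id sections) easy to verify.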

[GitHub] [incubator-tvm] jroesch merged pull request #6162: [Parser] Parser 2.0 part 2

2020-08-11 Thread GitBox


jroesch merged pull request #6162:
URL: https://github.com/apache/incubator-tvm/pull/6162


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (75b8318 -> fa2213f)

2020-08-11 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 75b8318  [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy (#6184)
 add fa2213f  [Parser] Parser 2.0 part 2  (#6162)

No new revisions were added by this update.

Summary of changes:
 .gitignore  |4 -
 CMakeLists.txt  |3 -
 cmake/modules/ANTLR.cmake   |   40 -
 cmake/util/FindANTLR.cmake  |   65 -
 docker/Dockerfile.ci_cpu|3 -
 docker/Dockerfile.ci_gpu|3 -
 docker/Dockerfile.ci_wasm   |3 -
 docker/install/ubuntu_install_antlr.sh  |   25 -
 docker/install/ubuntu_install_python_package.sh |2 +-
 docs/README.txt |4 +-
 docs/install/from_source.rst|7 -
 include/tvm/ir/attrs.h  |3 +-
 include/tvm/ir/span.h   |   19 +-
 include/tvm/parser/parser.h |2 +-
 include/tvm/parser/source_map.h |  110 +
 include/tvm/relay/adt.h |3 +-
 include/tvm/relay/expr.h|   40 +-
 include/tvm/relay/expr_functor.h|1 +
 include/tvm/relay/function.h|3 +-
 python/setup.py |3 +-
 python/tvm/error.py |7 +
 python/tvm/ir/base.py   |4 +-
 python/tvm/parser/__init__.py   |2 +-
 python/tvm/relay/__init__.py|6 +-
 python/tvm/relay/_parser.py |  771 -
 python/tvm/relay/expr.py|2 +-
 python/tvm/relay/grammar/.gitignore |1 -
 python/tvm/relay/grammar/Relay.g4   |  199 --
 python/tvm/relay/grammar/__init__.py|   16 -
 python/tvm/relay/grammar/py3/.gitattributes |3 -
 python/tvm/relay/grammar/py3/RelayLexer.py  |  256 --
 python/tvm/relay/grammar/py3/RelayParser.py | 3732 ---
 python/tvm/relay/grammar/py3/RelayVisitor.py|  343 ---
 python/tvm/relay/grammar/py3/__init__.py|   16 -
 python/tvm/relay/parser.py  |   30 -
 python/tvm/relay/std/core.rly   |3 +-
 python/tvm/relay/std/gradient.rly   |3 +-
 python/tvm/relay/std/prelude.rly|6 +-
 src/ir/module.cc|6 +-
 src/ir/span.cc  |   26 +-
 src/parser/diagnostic.h |  179 +-
 src/parser/meta_ref.cc  |  100 +
 src/parser/meta_ref.h   |   85 +
 src/parser/op_table.h   |   20 +-
 src/parser/parser.cc|  832 +++--
 src/parser/source_map.cc|  113 +
 src/parser/token.h  |  349 ++-
 src/parser/tokenizer.h  |  306 +-
 src/printer/relay_text_printer.cc   |7 +-
 src/printer/text_printer.cc |   16 +-
 src/printer/text_printer.h  |   18 +-
 src/relay/ir/adt.cc |3 +-
 src/relay/ir/expr.cc|   30 +-
 src/relay/ir/expr_functor.cc|   62 +-
 src/relay/ir/function.cc|3 +-
 src/relay/transforms/type_infer.cc  |   16 +-
 src/runtime/graph/graph_runtime.h   |1 +
 tests/lint/rat-excludes |5 -
 tests/python/relay/test_ir_nodes.py |   12 +-
 tests/python/relay/test_ir_parser.py|  320 +-
 tests/python/relay/test_ir_parser2.py   |  891 --
 tests/python/relay/test_ir_text_printer.py  |   75 +-
 tests/python/relay/test_op_level10.py   |8 +-
 tests/python/relay/test_pass_eta_expand.py  |   24 +-
 tests/python/relay/test_pass_unmatched_cases.py |4 +-
 65 files changed, 1902 insertions(+), 7352 deletions(-)
 delete mode 100644 cmake/modules/ANTLR.cmake
 delete mode 100644 cmake/util/FindANTLR.cmake
 delete mode 100755 docker/install/ubuntu_install_antlr.sh
 create mode 100644 include/tvm/parser/source_map.h
 delete mode 100644 python/tvm/relay/_parser.py
 delete mode 100644 python/tvm/relay/grammar/.gitignore
 delete mode 100644 python/tvm/relay/grammar/Relay.g4
 delete mode 100644 python/tvm/relay/grammar/__init__.py
 delete mode 100644 python/tvm/relay/grammar/py3/.gitattributes
 delete mode 100644 python/tvm/relay/grammar/py3/RelayLexer.py
 delete mode 100644 python/tvm/relay/grammar/py3/RelayParser.py
 delete mode 100644 python/tvm/relay/grammar/py3/RelayVisitor.py
 delete mode 100644 

[GitHub] [incubator-tvm] tqchen commented on pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


tqchen commented on pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#issuecomment-672337634


   @merrymercy please fix the conflict 







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #6078: [Autodiff] Optimize and eliminate the Jacobian tensor for te.autodiff

2020-08-11 Thread GitBox


tqchen commented on a change in pull request #6078:
URL: https://github.com/apache/incubator-tvm/pull/6078#discussion_r468908098



##
File path: include/tvm/node/container.h
##
@@ -1287,6 +1287,18 @@ class Map : public ObjectRef {
 data_ = other.data_;
 return *this;
   }
+  /*!
+   * \brief Merge with another Map. It does not mutate the current one.
+   * \param other Map to be merged.
+   * \return The merged Map. The original Map is kept unchanged.
+   */
+  Map Merge(const Map& other) const {

Review comment:
   Shall we make it a global function instead of a member, so it is not 
ambiguous (i.e., it is clear that the result is a new map)?

##
File path: include/tvm/runtime/container.h
##
@@ -956,6 +956,19 @@ class Array : public ObjectRef {
 return static_cast<ArrayNode*>(data_.get());
   }
 
+  /*!
+   * \brief Concat with another Array. It does not mutate the current one.
+   * \param other Array to be concatenated.
+   * \return The concatenated Array. The original Array is kept unchanged.
+   */
+  Array Concat(const Array& other) const {

Review comment:
   Consider making it a global function, which also enables copy-on-write 
on the lhs (this)
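The non-mutating, free-function API shape suggested here can be illustrated with a small Python sketch. The names `merge` and `concat` are illustrative only, not the TVM container API:

```python
# Hypothetical sketch of the suggested design: free functions that clearly
# return a *new* container, leaving both inputs unchanged.
def merge(lhs: dict, rhs: dict) -> dict:
    """Return a new map; neither input is mutated."""
    out = dict(lhs)
    out.update(rhs)
    return out

def concat(lhs: list, rhs: list) -> list:
    """Return a new array; neither input is mutated."""
    return list(lhs) + list(rhs)

a = {"x": 1}
b = {"y": 2}
merged = merge(a, b)
joined = concat([1, 2], [3])
```

A free function makes the "returns a fresh value" semantics obvious at the call site, which is exactly the ambiguity the member-function spelling can hide.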









[GitHub] [incubator-tvm] weberlo commented on pull request #5940: Add Quantize/Dequantize Partitioning

2020-08-11 Thread GitBox


weberlo commented on pull request #5940:
URL: https://github.com/apache/incubator-tvm/pull/5940#issuecomment-672323092


   > @weberlo Will this affect the original pipeline when the config `partition_conversions` is `disabled`?
   
   @ZihengJiang It won't affect the original pipeline, since we just return the `mod` [here](https://github.com/apache/incubator-tvm/pull/5940/files#diff-0ca944afca91e5a6d46981a4ac9e3dd6R378) when `partition_conversions` is `disabled`.
   
   > Nice work @weberlo, can we add a `partition_conversions=disabled` unit test as well?
   
   It's implicitly tested in [this helper function](https://github.com/apache/incubator-tvm/pull/5940/files#diff-c82103045cf6ee84456fa38e019e074fR143-R146). I explicitly set it to `disabled` to make it clearer in the most recent commit.







[GitHub] [incubator-tvm] tmoreau89 commented on pull request #5940: Add Quantize/Dequantize Partitioning

2020-08-11 Thread GitBox


tmoreau89 commented on pull request #5940:
URL: https://github.com/apache/incubator-tvm/pull/5940#issuecomment-672314270


   Nice work @weberlo, can we add a `partition_conversions=disabled` unit test as well?







[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-11 Thread GitBox


junrushao1994 commented on pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#issuecomment-672304325


   This PR is ready for review. Please take a look. Thanks!







[GitHub] [incubator-tvm] ZihengJiang commented on pull request #5940: Add Quantize/Dequantize Partitioning

2020-08-11 Thread GitBox


ZihengJiang commented on pull request #5940:
URL: https://github.com/apache/incubator-tvm/pull/5940#issuecomment-672284273


   @weberlo Will this affect the original pipeline when the config 
`partition_conversions` is `disabled`?







[GitHub] [incubator-tvm] weberlo commented on pull request #5940: Add Quantize/Dequantize Partitioning

2020-08-11 Thread GitBox


weberlo commented on pull request #5940:
URL: https://github.com/apache/incubator-tvm/pull/5940#issuecomment-672282100


   @ZihengJiang I've rebased and converted the datatype collector pass into 
C++, but I won't be prioritizing C++ conversion for the other visitors for the 
next few weeks.  To prevent bit rot, I think we should work towards merging the 
PR in its current state.







[incubator-tvm] branch master updated (db6e0c1 -> 75b8318)

2020-08-11 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository.

lmzheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from db6e0c1  [JVM] Support overriding RPCWatchdog termination behavior on Android and other platforms (#6216)
 add 75b8318  [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy (#6184)

No new revisions were added by this update.

Summary of changes:
 include/tvm/auto_scheduler/auto_schedule.h |  16 +-
 include/tvm/auto_scheduler/compute_dag.h   |  18 +-
 include/tvm/auto_scheduler/cost_model.h|  12 +-
 include/tvm/auto_scheduler/search_policy.h |  37 +-
 include/tvm/auto_scheduler/transform_step.h|   2 +-
 python/tvm/auto_scheduler/__init__.py  |   2 +-
 python/tvm/auto_scheduler/auto_schedule.py | 126 -
 src/auto_scheduler/auto_schedule.cc|  24 +-
 src/auto_scheduler/compute_dag.cc  |  94 +++-
 src/auto_scheduler/cost_model.cc   |  14 +-
 src/auto_scheduler/loop_state.cc   |   8 +-
 src/auto_scheduler/search_policy/empty_policy.cc   |  35 +-
 src/auto_scheduler/search_policy/empty_policy.h|   7 +-
 src/auto_scheduler/search_policy/search_policy.cc  |  14 +-
 src/auto_scheduler/search_policy/sketch_policy.cc  | 401 ++
 src/auto_scheduler/search_policy/sketch_policy.h   | 176 +++
 .../search_policy/sketch_policy_rules.cc   | 584 +
 .../search_policy/sketch_policy_rules.h| 207 
 src/auto_scheduler/search_policy/utils.cc  | 286 ++
 src/auto_scheduler/search_policy/utils.h   | 484 +
 src/auto_scheduler/utils.h |  47 +-
 tests/cpp/auto_scheduler_test.cc   |   4 +-
 .../python/unittest/test_auto_scheduler_common.py  | 112 +++-
 .../unittest/test_auto_scheduler_loop_state.py |   7 +-
 .../unittest/test_auto_scheduler_search_policy.py  |  31 +-
 .../test_auto_scheduler_sketch_generation.py   | 102 
 26 files changed, 2713 insertions(+), 137 deletions(-)
 create mode 100644 src/auto_scheduler/search_policy/sketch_policy.cc
 create mode 100644 src/auto_scheduler/search_policy/sketch_policy.h
 create mode 100644 src/auto_scheduler/search_policy/sketch_policy_rules.cc
 create mode 100644 src/auto_scheduler/search_policy/sketch_policy_rules.h
 create mode 100644 src/auto_scheduler/search_policy/utils.cc
 create mode 100644 src/auto_scheduler/search_policy/utils.h
 create mode 100644 tests/python/unittest/test_auto_scheduler_sketch_generation.py



[GitHub] [incubator-tvm] merrymercy merged pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-11 Thread GitBox


merrymercy merged pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184


   







[GitHub] [incubator-tvm] merrymercy commented on pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-11 Thread GitBox


merrymercy commented on pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#issuecomment-672269797


   @comaniac @junrushao1994  Wait for your approval and I will merge this







[GitHub] [incubator-tvm] merrymercy edited a comment on pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy edited a comment on pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#issuecomment-672266466


   I organized the features into 5 groups.
   ```
 // Group 1: Computation related features
 // Group 2: Buffer access related features (per buffer)
 // Group 3: Arithmetic intensity related features
 // Group 4: Allocation related features
 // Group 5: Outer scope related features
   ```
   The specification can be found in `src/auto_scheduler/feature.cc::FeatureSet`.
   
   Each group has one corresponding extraction function. They are called in the main visitor (`PerStoreFeatureExtractor`).
   ```c++
  void VisitStmt_(const BufferStoreNode* node) final {
    ...
    // Group 1: Computation related features
    ExtractComputationFeature(node, math_op_counter);

    // Group 2: Buffer access related features (per buffer)
    ExtractBufferAccessFeature(node, math_op_counter, &cur_compute_ops, &compute_ops_list,
                               &mem_bytes_list);

    // Group 3: Arithmetic intensity related features
    ExtractArithmeticIntensityFeature(node, cur_compute_ops, compute_ops_list, mem_bytes_list);

    // Group 5: Outer scope related features
    ExtractOuterScopeFeature(node);
  }

  void VisitStmt_(const BufferRealizeNode* node) final {
    StmtExprVisitor::VisitStmt_(node);

    // Group 4: Allocation related features
    ExtractAllocationFeature(node);
  }
   ```
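The grouped-extractor dispatch above can be mimicked in a few lines of Python. This is a toy sketch only; the class name, node fields, and feature names are all made up for illustration:

```python
# Toy stand-in for a per-store feature extractor: each feature group gets
# its own extraction method, and the visitor only dispatches to them.
class ToyPerStoreExtractor:
    def __init__(self):
        self.feature = {}

    def visit_buffer_store(self, node):
        self._extract_computation(node)    # Group 1
        self._extract_buffer_access(node)  # Group 2

    def visit_buffer_realize(self, node):
        self._extract_allocation(node)     # Group 4

    def _extract_computation(self, node):
        # e.g. count of floating-point operations in the statement
        self.feature["float_ops"] = node["float_ops"]

    def _extract_buffer_access(self, node):
        # e.g. total bytes touched across all accessed buffers
        self.feature["touched_bytes"] = sum(node["buffer_bytes"])

    def _extract_allocation(self, node):
        # e.g. size of the realized buffer
        self.feature["alloc_bytes"] = node["alloc_bytes"]

ext = ToyPerStoreExtractor()
ext.visit_buffer_store({"float_ops": 128, "buffer_bytes": [64, 256]})
ext.visit_buffer_realize({"alloc_bytes": 1024})
```

Keeping the visitor as a thin dispatcher, with one function per feature group, is what keeps the real C++ implementation readable.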
   
   I think the code is very clean and much better than the old autotvm now.
   I don't like registration or adding an extra layer of callback. It is over-designed and makes things more complicated.
   
   @tqchen @comaniac @FrozenGene @jroesch @junrushao1994 Your comments are all 
addressed. This PR is ready to be merged.
   







[GitHub] [incubator-tvm] merrymercy edited a comment on pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy edited a comment on pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#issuecomment-672266466


   I organized the features into 5 groups.
   ```
 // Group 1: Computation related features
 // Group 2: Buffer access related features (per buffer)
 // Group 3: Arithmetic intensity related features
 // Group 4: Allocation related features
 // Group 5: Outer scope related features
   ```
   The specification can be found in 
`src/auto_scheduler/feature.cc::FeatureSet`.
   
   Each group has one corresponding extraction function, they are called in the 
main visitor (`PerStoreFeatureExtractor`).
   ```c++
 void VisitStmt_(const BufferStoreNode* node) final {
   ...
   // Group 1: Computation related features
   ExtractComputationFeature(node, math_op_counter);
   
   // Group 2: Buffer access related features (per buffer)
   ExtractBufferAccessFeature(node, math_op_counter, _compute_ops, 
_ops_list,
  _bytes_list);
   
   // Group 3: Arithmetic intensity related features
   ExtractArithmeticIntensityFeature(node, cur_compute_ops, 
compute_ops_list, mem_bytes_list);
   
   // Group 4: Allocation related features
   ExtractOuterScopeFeature(node);
 }
   
 void VisitStmt_(const BufferRealizeNode* node) final {
   StmtExprVisitor::VisitStmt_(node);
   
   // Group 5: Outer scope related features
   ExtractAllocationFeature(node);
 }
   ```
   
   I think the code is very clean and much better than the old autotvm now.
   I don't like registration or adding an extra layer of callback. It is just 
over design and make things more complicated.
   
   @tqchen @comaniac @FrozenGene @jroesch @junrushao1994 Your comments are all 
addressed. This PR is ready to be merged.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] merrymercy edited a comment on pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy edited a comment on pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#issuecomment-672266466


   I organized the features into 5 groups.
   ```
 // Group 1: Computation related features
 // Group 2: Buffer access related features (per buffer)
 // Group 3: Arithmetic intensity related features
 // Group 4: Allocation related features
 // Group 5: Outer scope related features
   ```
   The specification can be found in 
`src/auto_scheduler/feature.cc::FeatureSet`.
   
   Each group has one corresponding extraction function. They are called in the 
main visitor (`PerStoreFeatureExtractor`).
   ```
 void VisitStmt_(const BufferStoreNode* node) final {
   ...
   // Group 1: Computation related features
   ExtractComputationFeature(node, math_op_counter);
   
   // Group 2: Buffer access related features (per buffer)
   ExtractBufferAccessFeature(node, math_op_counter, _compute_ops, 
_ops_list,
  _bytes_list);
   
   // Group 3: Arithmetic intensity related features
   ExtractArithmeticIntensityFeature(node, cur_compute_ops, 
compute_ops_list, mem_bytes_list);
   
   // Group 4: Allocation related features
   ExtractOuterScopeFeature(node);
 }
   
 void VisitStmt_(const BufferRealizeNode* node) final {
   StmtExprVisitor::VisitStmt_(node);
   
   // Group 5: Outer scope related features
   ExtractAllocationFeature(node);
 }
   ```
   
   I think the code is very clean and much better than the old autotvm now.
   I don't like registration or adding an extra layer of callback. It is over 
design and makes things more complicated.
   
   @tqchen @comaniac @FrozenGene @jroesch @junrushao1994 Your comments are all 
addressed. This PR is ready to be merged.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] merrymercy edited a comment on pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy edited a comment on pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#issuecomment-672266466


   I organized the features into 5 groups.
   ```
 // Group 1: Computation related features
 // Group 2: Buffer access related features (per buffer)
 // Group 3: Arithmetic intensity related features
 // Group 4: Allocation related features
 // Group 5: Outer scope related features
   ```
   The specification can be found in 
`src/auto_scheduler/feature.cc::FeatureSet`.
   
   Each group has one corresponding extraction function. They are called in the 
main visitor (`PerStoreFeatureExtractor`).
   ```
 void VisitStmt_(const BufferStoreNode* node) final {
   ...
   // Group 1: Computation related features
   ExtractComputationFeature(node, math_op_counter);
   
   // Group 2: Buffer access related features (per buffer)
   ExtractBufferAccessFeature(node, math_op_counter, _compute_ops, 
_ops_list,
  _bytes_list);
   
   // Group 3: Arithmetic intensity related features
   ExtractArithmeticIntensityFeature(node, cur_compute_ops, 
compute_ops_list, mem_bytes_list);
   
   // Group 4: Allocation related features
   ExtractOuterScopeFeature(node);
 }
   
 void VisitStmt_(const BufferRealizeNode* node) final {
   StmtExprVisitor::VisitStmt_(node);
   
   // Group 5: Outer scope related features
   ExtractAllocationFeature(node);
 }
   ```
   
   I think the code is very clean and much better than the old autotvm now.
   I don't like registration or adding an extra layer of callback. It is just 
over design and make things more complicated.
   
   @tqchen @comaniac @FrozenGene @jroesch @junrushao1994 Your comments are all 
addressed. This PR is ready to be merged.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] merrymercy edited a comment on pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy edited a comment on pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#issuecomment-672266466


   I organized the features into 5 groups.
   ```
 // Group 1: Computation related features
 // Group 2: Buffer access related features (per buffer)
 // Group 3: Arithmetic intensity related features
 // Group 4: Allocation related features
 // Group 5: Outer scope related features
   ```
   The specification can be found in 
`src/auto_scheduler/feature.cc::FeatureSet`.
   
   Each group has one corresponding extraction function. They are called in the 
main visitor (`PerStoreFeatureExtractor`).
   ```
 void VisitStmt_(const BufferStoreNode* node) final {
   ...
   // Group 1: Computation related features
   ExtractComputationFeature(node, math_op_counter);
   
   // Group 2: Buffer access related features (per buffer)
   ExtractBufferAccessFeature(node, math_op_counter, _compute_ops, 
_ops_list,
  _bytes_list);
   
   // Group 3: Arithmetic intensity related features
   ExtractArithmeticIntensityFeature(node, cur_compute_ops, 
compute_ops_list, mem_bytes_list);
   
   // Group 4: Allocation related features
   ExtractOuterScopeFeature(node);
 }
   
 void VisitStmt_(const BufferRealizeNode* node) final {
   StmtExprVisitor::VisitStmt_(node);
   
   // Group 5: Outer scope related features
   ExtractAllocationFeature(node);
 }
   ```
   
   I think the code is very clean and much better than the old autotvm now.
   I don't like registration or adding an extra layer of callback. It is over 
design and make things more complicated.
   
   @tqchen @comaniac @FrozenGene @jroesch @junrushao1994 Your comments are all 
addressed. This PR is ready to be merged.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] merrymercy edited a comment on pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy edited a comment on pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#issuecomment-672266466


   I organized the features into 5 groups.
   ```
 // Group 1: Computation related features
 // Group 2: Buffer access related features (per buffer)
 // Group 3: Arithmetic intensity related features
 // Group 4: Allocation related features
 // Group 5: Outer scope related features
   ```
   The specification can be found in 
`src/auto_scheduler/feature.cc::FeatureSet`.
   
   Each group has one corresponding extraction function, they are called in the 
main visitor (`PerStoreFeatureExtractor`).
   ```
 void VisitStmt_(const BufferStoreNode* node) final {
   ...
   // Group 1: Computation related features
   ExtractComputationFeature(node, math_op_counter);
   
   // Group 2: Buffer access related features (per buffer)
   ExtractBufferAccessFeature(node, math_op_counter, _compute_ops, 
_ops_list,
  _bytes_list);
   
   // Group 3: Arithmetic intensity related features
   ExtractArithmeticIntensityFeature(node, cur_compute_ops, 
compute_ops_list, mem_bytes_list);
   
    // Group 5: Outer scope related features
    ExtractOuterScopeFeature(node);
  }
    
  void VisitStmt_(const BufferRealizeNode* node) final {
    StmtExprVisitor::VisitStmt_(node);
    
    // Group 4: Allocation related features
    ExtractAllocationFeature(node);
 }
   ```
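   As a rough illustration of the per-store visitor structure described above, here is a simplified Python analogue. The node classes, feature names, and grouping are illustrative stand-ins only, not TVM's actual `PerStoreFeatureExtractor`:

```python
# Hypothetical, simplified analogue of a per-store feature extractor: a
# visitor walks a tiny statement tree and records features per store
# statement, while allocation-related state comes from the enclosing
# realize scope. All names here are illustrative.

class Store:
    def __init__(self, buffer, ops):
        self.buffer = buffer
        self.ops = ops          # number of math ops in the stored expression

class Realize:
    def __init__(self, buffer, size, body):
        self.buffer = buffer
        self.size = size        # allocation size in bytes
        self.body = body        # list of child statements

class FeatureExtractor:
    def __init__(self):
        self.features = {}      # buffer name -> feature dict (per store)
        self.alloc_sizes = {}   # buffer name -> allocation size

    def visit(self, node):
        if isinstance(node, Store):
            # Computation/buffer-access style features, computed per store
            self.features[node.buffer] = {
                "math_ops": node.ops,
                "bytes_accessed": self.alloc_sizes.get(node.buffer, 0),
            }
        elif isinstance(node, Realize):
            # Allocation-related features live at the realize scope
            self.alloc_sizes[node.buffer] = node.size
            for stmt in node.body:
                self.visit(stmt)

extractor = FeatureExtractor()
extractor.visit(Realize("C", 1024, [Store("C", 7)]))
print(extractor.features["C"])  # prints {'math_ops': 7, 'bytes_accessed': 1024}
```

   The real extractor tracks far more state (loop nests, strides, reuse distances), but the shape is the same: one visitor, one extraction step per feature group.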
   
   I think the code is very clean and much better than the old autotvm now.
   I don't like registration or adding an extra layer of callback. It is just 
over-designed and makes things more complicated.
   
   @tqchen @comaniac @FrozenGene @jroesch @junrushao1994 Your comments are all 
addressed. This PR is ready to be merged.
   







[GitHub] [incubator-tvm] merrymercy commented on pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy commented on pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#issuecomment-672266466


   I organized the features into 5 groups.
   ```
 // Group 1: Computation related features
 // Group 2: Buffer access related features (per buffer)
 // Group 3: Arithmetic intensity related features
 // Group 4: Allocation related features
 // Group 5: Outer scope related features
   ```
   The specification can be found in 
`src/auto_scheduler/feature.cc::FeatureSet`.
   
   Each group has one corresponding extraction function; they are called in the 
main visitor (`PerStoreFeatureExtractor`).
   ```
 void VisitStmt_(const BufferStoreNode* node) final {
   ...
   // Group 1: Computation related features
   ExtractComputationFeature(node, math_op_counter);
   
   // Group 2: Buffer access related features (per buffer)
    ExtractBufferAccessFeature(node, math_op_counter, &cur_compute_ops,
                               &compute_ops_list, &mem_bytes_list);
   
   // Group 3: Arithmetic intensity related features
   ExtractArithmeticIntensityFeature(node, cur_compute_ops, 
compute_ops_list, mem_bytes_list);
   
    // Group 5: Outer scope related features
    ExtractOuterScopeFeature(node);
  }
    
  void VisitStmt_(const BufferRealizeNode* node) final {
    StmtExprVisitor::VisitStmt_(node);
    
    // Group 4: Allocation related features
    ExtractAllocationFeature(node);
 }
   ```
   
   I think the code is very clean and much better than the old autotvm now.
   I don't like registration or adding an extra layer of callback. It is just 
over-designed and makes things more complicated.
   
   







[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-11 Thread GitBox


zhiics commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r468844000



##
File path: tests/python/contrib/test_ethosn/infrastructure.py
##
@@ -0,0 +1,225 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Expose Ethos test functions to the Python front end"""
+
+from __future__ import absolute_import, print_function
+import tvm
+from tvm import relay
+from tvm.contrib import util, graph_runtime, download
+from tvm.relay.testing import run_opt_pass
+from enum import Enum
+from hashlib import md5
+from itertools import zip_longest, combinations
+import numpy as np
+from PIL import Image
+import os
+
+from . import _infrastructure
+from tvm.relay.op.contrib import get_pattern_table
+
+
+class Available(Enum):
+UNAVAILABLE = 0
+SW_ONLY = 1
+SW_AND_HW = 2
+
+
+def ethosn_available():
+"""Return whether Ethos-N software and hardware support is available"""
+if not tvm.get_global_func("relay.ethos-n.query", True):
+print("skip because Ethos-N module is not available")
+return Available.UNAVAILABLE
+else:
+hw = tvm.get_global_func("relay.ethos-n.query")()
+return Available.SW_AND_HW if hw else Available.SW_ONLY
+
+
+def get_real_image(im_height, im_width):
+repo_base = 
'https://github.com/dmlc/web-data/raw/master/tensorflow/models/InceptionV1/'
+img_name = 'elephant-299.jpg'
+image_url = os.path.join(repo_base, img_name)
+img_path = download.download_testdata(image_url, img_name, module='data')
+image = Image.open(img_path).resize((im_height, im_width))
+x = np.array(image).astype('uint8')
+data = np.reshape(x, (1, im_height, im_width, 3))
+return data
+
+
+def assert_lib_hash(lib, golden):
+temp = util.tempdir()
+path = temp.relpath("lib.cmm")
+lib.imported_modules[1].save(path)
+lib_hash = md5(open(path, 'rb').read()).hexdigest()
+assert lib_hash == golden, "Expected hash: {} Got hash: {}".format(golden, 
lib_hash)
+
+
+def make_module(func, params):
+func = relay.Function(relay.analysis.free_vars(func), func)
+if len(params):
+func = relay.build_module.bind_params_by_name(func, params)
+return tvm.IRModule.from_expr(func)
+
+
+def make_ethosn_composite(ethosn_expr, name):
+vars = relay.analysis.free_vars(ethosn_expr)
+func = relay.Function([relay.Var("a")], ethosn_expr)
+func = func.with_attr("Composite", name)
+call = relay.Call(func, vars)
+return call
+
+
+def make_ethosn_partition(ethosn_expr):
+# Create an Ethos-N global function
+mod = tvm.IRModule({})
+vars = relay.analysis.free_vars(ethosn_expr)
+func = relay.Function(vars, ethosn_expr)
+func = func.with_attr("Primitive", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Inline", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Compiler", "ethos-n")
+func = func.with_attr("global_symbol", "ethos-n_0")
+g1 = relay.GlobalVar("ethos-n_0")
+mod[g1] = func
+
+# These are the vars to call the Ethos-N partition with
+more_vars = relay.analysis.free_vars(ethosn_expr)
+# Call the Ethos-N partition in main
+call_fn1 = g1(*more_vars)
+mod["main"] = relay.Function(more_vars, call_fn1)
+return mod
+
+
+def get_cpu_op_count(mod):
+class Counter(tvm.relay.ExprVisitor):
+def __init__(self):
+super().__init__()
+self.count = 0
+
+def visit_call(self, call):
+if isinstance(call.op, tvm.ir.Op):
+self.count += 1
+
+super().visit_call(call)
+
+c = Counter()
+c.visit(mod["main"])
+return c.count
+
+
+def build(mod, params, npu=True, cpu_ops=0, npu_partitions=1):
+relay.backend.compile_engine.get().clear()
+with tvm.transform.PassContext(opt_level=3, config={
+"relay.ext.ethos-n.options": {"variant": 0}
+}):
+with tvm.target.create("llvm -mcpu=core-avx2"):

Review comment:
   Yeah, target "llvm" seems to be problematic only on certain LLVM versions. But 
with "-mcpu=core-avx2", it can lead to CI failures if AVX2 instructions are 
generated but then executed on a CI machine with an older arch. It looks for all 
tests that use
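
   The mismatch described above (AVX2 code generated for a host that may not support it) can be guarded against by probing the host CPU before selecting the target. A hedged, Linux-oriented sketch using only the standard library; the `/proc/cpuinfo` check is illustrative and is not what TVM's CI actually does:

```python
# Illustrative helper: decide whether "llvm -mcpu=core-avx2" is safe on the
# current host by probing the CPU flags. Linux-specific (/proc/cpuinfo) and
# deliberately conservative elsewhere; a sketch, not TVM's CI logic.
import platform

def host_supports_avx2():
    if platform.system() != "Linux":
        return False  # be conservative off Linux
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags") and "avx2" in line.split():
                    return True
    except OSError:
        pass
    return False

def pick_llvm_target():
    # Fall back to the generic target when AVX2 may not be available
    return "llvm -mcpu=core-avx2" if host_supports_avx2() else "llvm"

print(pick_llvm_target())
```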

[GitHub] [incubator-tvm] manupa-arm commented on pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-11 Thread GitBox


manupa-arm commented on pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#issuecomment-672259003


   @MarisaKirisame, thanks for the clarification!
   
   
   
   
   







[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-11 Thread GitBox


zhiics commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r468838305



##
File path: src/runtime/contrib/ethosn/ethosn_runtime.h
##
@@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file ethosn_runtime.h
+ * \brief Execution handling of Ethos-N command streams.
+ */
+#ifndef TVM_RUNTIME_CONTRIB_ETHOSN_ETHOSN_RUNTIME_H_
+#define TVM_RUNTIME_CONTRIB_ETHOSN_ETHOSN_RUNTIME_H_
+
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ethosn_support_library/Support.hpp"
+
+namespace tvm {
+namespace runtime {
+namespace ethosn {
+
+namespace sl = ::ethosn::support_library;
+
+struct OrderedCompiledNetwork {
+  std::unique_ptr cmm;
+  std::string name;
+  std::vector inputs;
+  std::vector outputs;
+};
+
+class EthosnModule : public ModuleNode {
+ public:
+  /*!
+   * \brief The Ethos-N runtime module.
+   * \param cmms A vector of compiled networks with input/output orders.
+   */
+  explicit EthosnModule(std::vector* cmms);
+
+  /*!
+   * \brief Get a PackedFunc from the Ethos-N module.
+   * \param name The name of the function.
+   * \param sptr_to_self The ObjectPtr that points to this module node.
+   * \return The function pointer when it is found, otherwise, 
PackedFunc(nullptr).
+   */
+  PackedFunc GetFunction(const std::string& name, const ObjectPtr& 
sptr_to_self) final;
+  /*!
+   * \brief Save a compiled network to a binary stream, which can then be
+   * serialized to disk.
+   * \param stream The stream to save the binary.
+   * \note See EthosnModule::LoadFromBinary for the serialization format.
+   */
+  void SaveToBinary(dmlc::Stream* stream) final;
+  /*!
+   * \brief Load a compiled network from stream.
+   * \param strm The binary stream to load.
+   * \return The created Ethos-N module.
+   * \note The serialization format is:
+   *
+   *   size_t : number of functions
+   *   [
+   * std::string : name of function (symbol)
+   * std::string : serialized command stream
+   * size_t  : number of inputs
+   * std::vector : order of inputs
+   * size_t  : number of outputs
+   * std::vector : order of outputs
+   *   ] * number of functions
+   */
+  static Module LoadFromBinary(void* strm);
+  /*!
+   * \brief Save a module to a specified path.
+   * \param path Where to save the serialized module.
+   */
+  void SaveToFile(const std::string& path, const std::string& format) override;
+  /*!
+   * \brief Create a module from a file.
+   * \param path The path of the file containing the serialized module.
+   * \return The created Ethos-N module.
+   */
+  static Module LoadFromFile(const std::string& path);

Review comment:
   Yeah, I think we can remove them. For debugging, you pretty much still 
use the SaveToBinary and LoadFromBinary
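
   The serialization format documented in the header above (a function count, then per function a name, a serialized command stream, and the input/output orders) can be mimicked in a short Python sketch. The helper names are illustrative; the real module serializes via dmlc::Stream in C++, and the exact on-disk widths may differ:

```python
# Hedged sketch of the documented layout: size_t count, then per function
# two length-prefixed strings (name, command stream) and two
# length-prefixed uint32 vectors (input order, output order).
import struct

def pack_module(functions):
    out = bytearray()
    out += struct.pack("<Q", len(functions))          # number of functions
    for name, cmm, inputs, outputs in functions:
        for blob in (name.encode(), cmm):             # std::string fields
            out += struct.pack("<Q", len(blob)) + blob
        for order in (inputs, outputs):               # vector<uint32_t> fields
            out += struct.pack("<Q", len(order))
            out += struct.pack("<%dI" % len(order), *order)
    return bytes(out)

def unpack_module(data):
    off = 0
    def read_u64():
        nonlocal off
        (v,) = struct.unpack_from("<Q", data, off)
        off += 8
        return v
    funcs = []
    for _ in range(read_u64()):
        name_len = read_u64()
        name = data[off:off + name_len].decode(); off += name_len
        cmm_len = read_u64()
        cmm = data[off:off + cmm_len]; off += cmm_len
        orders = []
        for _ in range(2):
            n = read_u64()
            orders.append(list(struct.unpack_from("<%dI" % n, data, off)))
            off += 4 * n
        funcs.append((name, cmm, orders[0], orders[1]))
    return funcs

payload = pack_module([("ethos-n_0", b"\x00\x01", [0, 1], [0])])
assert unpack_module(payload) == [("ethos-n_0", b"\x00\x01", [0, 1], [0])]
```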









[incubator-tvm] branch master updated (12da324 -> db6e0c1)

2020-08-11 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 12da324  Fix division range estimation error in simplifier (#6244)
 add db6e0c1  [JVM] Support overriding RPCWatchdog termination behavior on 
Android and other platforms (#6216)

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/tvm/tvmrpc/RPCActivity.java|  3 ++-
 .../org/apache/tvm/tvmrpc/RPCAndroidWatchdog.java  | 25 +++---
 .../java/org/apache/tvm/tvmrpc/RPCProcessor.java   | 10 +++--
 .../main/java/org/apache/tvm/rpc/RPCWatchdog.java  |  9 +++-
 4 files changed, 31 insertions(+), 16 deletions(-)
 copy jvm/core/src/main/java/org/apache/tvm/APIInternal.java => 
apps/android_rpc/app/src/main/java/org/apache/tvm/tvmrpc/RPCAndroidWatchdog.java
 (64%)



[GitHub] [incubator-tvm] tqchen merged pull request #6216: [JVM] Support overriding RPCWatchdog termination behavior on Android and other platforms

2020-08-11 Thread GitBox


tqchen merged pull request #6216:
URL: https://github.com/apache/incubator-tvm/pull/6216


   







[GitHub] [incubator-tvm] csullivan commented on pull request #6216: [JVM] Support overriding RPCWatchdog termination behavior on Android and other platforms

2020-08-11 Thread GitBox


csullivan commented on pull request #6216:
URL: https://github.com/apache/incubator-tvm/pull/6216#issuecomment-672247353


   @tmoreau89, @tqchen, if you have time please review the updates. Thanks!







[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6242: [relay][ir] add string type to relay ir

2020-08-11 Thread GitBox


junrushao1994 commented on pull request #6242:
URL: https://github.com/apache/incubator-tvm/pull/6242#issuecomment-672241664


   I suppose this series of changes is really big. Would you like to add 
test cases for this change?







[GitHub] [incubator-tvm] tqchen commented on pull request #6249: Revert "fix cuda half math function is undefined: hpow, htanh"

2020-08-11 Thread GitBox


tqchen commented on pull request #6249:
URL: https://github.com/apache/incubator-tvm/pull/6249#issuecomment-672180416


   cc @jroesch 







[GitHub] [incubator-tvm] tqchen edited a comment on pull request #6249: Revert "fix cuda half math function is undefined: hpow, htanh"

2020-08-11 Thread GitBox


tqchen edited a comment on pull request #6249:
URL: https://github.com/apache/incubator-tvm/pull/6249#issuecomment-672179856


   cc @cloud-mxd due to the CI problem as in 
https://ci.tvm.ai/blue/organizations/jenkins/tvm/detail/PR-6162/29/pipeline/
   
   Please feel free to send another PR and fix the related problem.







[GitHub] [incubator-tvm] tqchen commented on pull request #6249: Revert "fix cuda half math function is undefined: hpow, htanh"

2020-08-11 Thread GitBox


tqchen commented on pull request #6249:
URL: https://github.com/apache/incubator-tvm/pull/6249#issuecomment-672179856


   cc @cloud-mxd due to the CI problem defined in 
https://ci.tvm.ai/blue/organizations/jenkins/tvm/detail/PR-6162/29/pipeline/
   
   Please feel free to send another PR and fix the related problem.







[incubator-tvm] branch revert-6225-dev_mxd_fix_cuda_fp16 created (now 45415b3)

2020-08-11 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch revert-6225-dev_mxd_fix_cuda_fp16
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


  at 45415b3  Revert "fix cuda half math function is undefined: hpow, htanh 
(#6225)"

No new revisions were added by this update.



[GitHub] [incubator-tvm] tqchen opened a new pull request #6249: Revert "fix cuda half math function is undefined: hpow, htanh"

2020-08-11 Thread GitBox


tqchen opened a new pull request #6249:
URL: https://github.com/apache/incubator-tvm/pull/6249


   Reverts apache/incubator-tvm#6225







[GitHub] [incubator-tvm] mbaret commented on pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-11 Thread GitBox


mbaret commented on pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#issuecomment-672137931


   > It looks like this is different from the C source module and the JSON 
runtime module. Could you elaborate a bit on why those two don't fit?
   
   The Ethos-N compiler compiles the graph down to a so-called 'command stream' 
which is a binary artifact that is directly consumed by the driver to execute 
the inference. It has encoded within it all the constants/weights. We're not 
calling out to a C library so the C source module wouldn't make sense here. 
Additionally, the JSON runtime wouldn't really add any useful functionality. We 
just want to load the artifact off the disk and wrap it in a packed function.
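
   The "load the artifact off the disk and wrap it in a packed function" flow can be sketched in plain Python. The registry, the decorator, and the fake driver here are hypothetical stand-ins for TVM's PackedFunc machinery and the Ethos-N driver library:

```python
# Hypothetical sketch of loading a pre-compiled binary artifact and
# exposing it through a named, callable entry point. Nothing here is
# TVM API; it only mirrors the pattern described above.
import os
import tempfile

REGISTRY = {}

def register_func(name):
    def deco(fn):
        REGISTRY[name] = fn
        return fn
    return deco

def load_command_stream(path):
    with open(path, "rb") as f:
        return f.read()

def wrap_in_packed_func(symbol, command_stream):
    # The returned callable plays the role of the packed function: it
    # would hand the pre-compiled command stream to the driver.
    @register_func(symbol)
    def run(*inputs):
        return ("ran", symbol, len(command_stream), len(inputs))
    return run

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "ethos-n_0.cmm")
    with open(path, "wb") as f:
        f.write(b"\x13\x37")
    wrap_in_packed_func("ethos-n_0", load_command_stream(path))
    result = REGISTRY["ethos-n_0"]("input0")
    print(result)
```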
   
   > It seems that we are not really able to test them on HW in the CI. Is 
there any plan on this?
   
   We don't expect there to be HW in CI. CI will run SW testing including a 
mocked inference function so that we can test the compilation flow all the way 
through to the runtime. We will be running automated HW testing on a range of 
networks within Arm.
   
   To be explicit about the next 2 PRs, they are already written and only add 
more operators. Specifically, the next PR will add quantized conv2d and the 
final one will add quite a few more less complex operators. We wanted to split 
these up to make it easier to see how the codegen operates without the added 
complexity of all the other operators. The final PR will also include network 
tests for mobilenet, inceptionv3/4 and ssd mobilenet.







[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-11 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r468752615



##
File path: tests/python/contrib/test_ethosn/infrastructure.py
##
@@ -0,0 +1,225 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Expose Ethos test functions to the Python front end"""
+
+from __future__ import absolute_import, print_function
+import tvm
+from tvm import relay
+from tvm.contrib import util, graph_runtime, download
+from tvm.relay.testing import run_opt_pass
+from enum import Enum
+from hashlib import md5
+from itertools import zip_longest, combinations
+import numpy as np
+from PIL import Image
+import os
+
+from . import _infrastructure
+from tvm.relay.op.contrib import get_pattern_table
+
+
+class Available(Enum):
+UNAVAILABLE = 0
+SW_ONLY = 1
+SW_AND_HW = 2
+
+
+def ethosn_available():
+"""Return whether Ethos-N software and hardware support is available"""
+if not tvm.get_global_func("relay.ethos-n.query", True):
+print("skip because Ethos-N module is not available")
+return Available.UNAVAILABLE
+else:
+hw = tvm.get_global_func("relay.ethos-n.query")()
+return Available.SW_AND_HW if hw else Available.SW_ONLY
+
+
+def get_real_image(im_height, im_width):
+repo_base = 
'https://github.com/dmlc/web-data/raw/master/tensorflow/models/InceptionV1/'
+img_name = 'elephant-299.jpg'
+image_url = os.path.join(repo_base, img_name)
+img_path = download.download_testdata(image_url, img_name, module='data')
+image = Image.open(img_path).resize((im_height, im_width))
+x = np.array(image).astype('uint8')
+data = np.reshape(x, (1, im_height, im_width, 3))
+return data
+
+
+def assert_lib_hash(lib, golden):
+temp = util.tempdir()
+path = temp.relpath("lib.cmm")
+lib.imported_modules[1].save(path)
+lib_hash = md5(open(path, 'rb').read()).hexdigest()
+assert lib_hash == golden, "Expected hash: {} Got hash: {}".format(golden, 
lib_hash)
+
+
+def make_module(func, params):
+func = relay.Function(relay.analysis.free_vars(func), func)
+if len(params):
+func = relay.build_module.bind_params_by_name(func, params)
+return tvm.IRModule.from_expr(func)
+
+
+def make_ethosn_composite(ethosn_expr, name):
+vars = relay.analysis.free_vars(ethosn_expr)
+func = relay.Function([relay.Var("a")], ethosn_expr)
+func = func.with_attr("Composite", name)
+call = relay.Call(func, vars)
+return call
+
+
+def make_ethosn_partition(ethosn_expr):
+# Create an Ethos-N global function
+mod = tvm.IRModule({})
+vars = relay.analysis.free_vars(ethosn_expr)
+func = relay.Function(vars, ethosn_expr)
+func = func.with_attr("Primitive", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Inline", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Compiler", "ethos-n")
+func = func.with_attr("global_symbol", "ethos-n_0")
+g1 = relay.GlobalVar("ethos-n_0")
+mod[g1] = func
+
+# These are the vars to call the Ethos-N partition with
+more_vars = relay.analysis.free_vars(ethosn_expr)
+# Call the Ethos-N partition in main
+call_fn1 = g1(*more_vars)
+mod["main"] = relay.Function(more_vars, call_fn1)
+return mod
+
+
+def get_cpu_op_count(mod):
+class Counter(tvm.relay.ExprVisitor):
+def __init__(self):
+super().__init__()
+self.count = 0
+
+def visit_call(self, call):
+if isinstance(call.op, tvm.ir.Op):
+self.count += 1
+
+super().visit_call(call)
+
+c = Counter()
+c.visit(mod["main"])
+return c.count
+
+
+def build(mod, params, npu=True, cpu_ops=0, npu_partitions=1):
+relay.backend.compile_engine.get().clear()
+with tvm.transform.PassContext(opt_level=3, config={
+"relay.ext.ethos-n.options": {"variant": 0}
+}):
+with tvm.target.create("llvm -mcpu=core-avx2"):

Review comment:
   This was introduced in response to failures with SSD Mobilenet testing 
(coming in the 3rd PR). I'm not particularly familiar with the issue here so 
this is the only workaround I know at the moment.





[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-11 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r468750593



##
File path: src/relay/backend/contrib/ethosn/capabilities.h
##
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/backend/contrib/ethosn/capabilities.h
+ * \brief The Ethos-N processor series has four variants, the Ethos-N37, 
Ethos-N57, Ethos-N77
+ * and the Ethos-N78. This release of the integration supports the first three 
variants.
+ * Configuration information for each variant is stored as a blob in this 
file. These blobs
+ * are passed into the Ethos-N support library, which in turn uses them to 
optimize the
+ * generated command-stream appropriately for the specified variant.
+ */
+
+#ifndef TVM_RELAY_BACKEND_CONTRIB_ETHOSN_CAPABILITIES_H_
+#define TVM_RELAY_BACKEND_CONTRIB_ETHOSN_CAPABILITIES_H_
+
+#include 
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace ethosn {
+
+/* Ethos-N variants (N77, N57 and N37)

Review comment:
   It's the same architecture/software stack, so we anticipate it is just 
an extension of what is already here.









[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-11 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r468749731



##
File path: src/runtime/contrib/ethosn/ethosn_runtime.h
##
@@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file ethosn_runtime.h
+ * \brief Execution handling of Ethos-N command streams.
+ */
+#ifndef TVM_RUNTIME_CONTRIB_ETHOSN_ETHOSN_RUNTIME_H_
+#define TVM_RUNTIME_CONTRIB_ETHOSN_ETHOSN_RUNTIME_H_
+
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ethosn_support_library/Support.hpp"
+
+namespace tvm {
+namespace runtime {
+namespace ethosn {
+
+namespace sl = ::ethosn::support_library;
+
+struct OrderedCompiledNetwork {
+  std::unique_ptr cmm;
+  std::string name;
+  std::vector inputs;
+  std::vector outputs;
+};
+
+class EthosnModule : public ModuleNode {
+ public:
+  /*!
+   * \brief The Ethos-N runtime module.
+   * \param cmms A vector of compiled networks with input/output orders.
+   */
+  explicit EthosnModule(std::vector* cmms);
+
+  /*!
+   * \brief Get a PackedFunc from the Ethos-N module.
+   * \param name The name of the function.
+   * \param sptr_to_self The ObjectPtr that points to this module node.
+   * \return The function pointer when it is found, otherwise, 
PackedFunc(nullptr).
+   */
+  PackedFunc GetFunction(const std::string& name, const ObjectPtr& 
sptr_to_self) final;
+  /*!
+   * \brief Save a compiled network to a binary stream, which can then be
+   * serialized to disk.
+   * \param stream The stream to save the binary.
+   * \note See EthosnModule::LoadFromBinary for the serialization format.
+   */
+  void SaveToBinary(dmlc::Stream* stream) final;
+  /*!
+   * \brief Load a compiled network from stream.
+   * \param strm The binary stream to load.
+   * \return The created Ethos-N module.
+   * \note The serialization format is:
+   *
+   *   size_t : number of functions
+   *   [
+   * std::string : name of function (symbol)
+   * std::string : serialized command stream
+   * size_t  : number of inputs
+   * std::vector : order of inputs
+   * size_t  : number of outputs
+   * std::vector : order of outputs
+   *   ] * number of functions
+   */
+  static Module LoadFromBinary(void* strm);
+  /*!
+   * \brief Save a module to a specified path.
+   * \param path Where to save the serialized module.
+   */
+  void SaveToFile(const std::string& path, const std::string& format) override;
+  /*!
+   * \brief Create a module from a file.
+   * \param path The path of the file containing the serialized module.
+   * \return The created Ethos-N module.
+   */
+  static Module LoadFromFile(const std::string& path);

Review comment:
   I remember using these for debugging, but agreed they're not strictly 
required. Do you think it's worth keeping them for debugging?









[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-11 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r468748218



##
File path: src/runtime/contrib/ethosn/ethosn_runtime.cc
##
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file ethosn_runtime.cc
+ * \brief Execution handling of Ethos-N command streams.
+ */
+
+#include "ethosn_runtime.h"
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+
+#include "../../file_util.h"
+#include "ethosn_device.h"
+#include "ethosn_driver_library/Inference.hpp"
+#include "ethosn_driver_library/Network.hpp"
+#include "ethosn_support_library/Support.hpp"
+
+namespace tvm {
+namespace runtime {
+namespace ethosn {
+
+namespace sl = ::ethosn::support_library;
+namespace dl = ::ethosn::driver_library;
+
+EthosnModule::EthosnModule(std::vector* cmms) {

Review comment:
   IIRC, it's because of the unique_ptr in cmms. We need to std::move it, 
which counts as modifying cmms so it has to be a pointer rather than a 
reference.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-11 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r468745993



##
File path: python/tvm/relay/op/contrib/ethosn.py
##
@@ -0,0 +1,90 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-argument
+"""Arm(R) Ethos(TM) -N NPU supported operators."""
+import tvm.ir
+from enum import Enum
+from ... import qnn as _qnn
+from . import _ethosn as support
+
+
+class Available(Enum):
+    UNAVAILABLE = 0
+    SW_ONLY = 1
+    SW_AND_HW = 2
+
+    def __bool__(self):
+        return self != Available.UNAVAILABLE
+
+
+def ethosn_available():
+    """Return whether Ethos-N software and hardware support is available"""
+    if not tvm.get_global_func("relay.ethos-n.query", True):
+        print("skip because Ethos-N module is not available")
+        return Available.UNAVAILABLE
+    else:
+        hw = tvm.get_global_func("relay.ethos-n.query")()
+        return Available.SW_AND_HW if hw else Available.SW_ONLY
+
+
+@tvm.ir.register_op_attr("qnn.concatenate", "target.ethos-n")
+def qnn_concatenate(attrs, args):
+    """Check if a concatenate is supported by Ethos-N."""
+    if not ethosn_available():
+        return False
+
+    conc = _qnn.op.concatenate(*args, **attrs)
+    if not support.concatenate(conc):
+        return False
+
+    # Support library has some unenforced restrictions on qnn params
+    min_range = 1e9
+    max_range = -1e9
+    qnn_params = []
+    for i in range(len(args[1].fields)):
+        scale = args[1].fields[i].data.asnumpy()
+        zero_point = args[2].fields[i].data.asnumpy()
+        min_range = min(-1 * zero_point * scale, min_range)
+        max_range = max((255 - zero_point) * scale, max_range)
+        qnn_params.append((scale, zero_point))
+
+    scale = (max_range - min_range) / 255
+    zero_point = int(-min_range / scale)
+    if (scale, zero_point) in qnn_params:
+        return True
+
+    return False
+
+
+@tvm.ir.register_op_attr("split", "target.ethos-n")
+def split(attrs, args):

Review comment:
   Conv2D is coming in the next PR. We split it up like this so that we 
could focus initially on the mechanics of the integration itself. Split/Concat 
motivate the tuple handling in the codegen which is why they were introduced 
now. Conv2D has a lot of other complexity to do with conversion between TVM and 
Support Library and so we thought that would be best handled separately.
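As an aside, the unenforced qnn-parameter restriction that `qnn_concatenate` checks boils down to simple arithmetic. Below is a stdlib-only sketch of that check; the helper name `concat_requantize_ok` is hypothetical, but the range computation mirrors the quoted code:

```python
def concat_requantize_ok(qnn_params, qmax=255):
    """Derive the output (scale, zero_point) a concat would need from the
    per-input params, and accept only if one input already uses exactly
    those params (i.e. no requantization is required)."""
    min_range = min(-zp * scale for scale, zp in qnn_params)
    max_range = max((qmax - zp) * scale for scale, zp in qnn_params)
    scale = (max_range - min_range) / qmax
    zero_point = int(-min_range / scale)
    return (scale, zero_point) in qnn_params

# Identical input params survive requantization unchanged...
assert concat_requantize_ok([(0.5, 0), (0.5, 0)])
# ...while wildly different zero points would force a requantize.
assert not concat_requantize_ok([(0.5, 10), (0.5, 200)])
```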





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] MarisaKirisame commented on pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-11 Thread GitBox


MarisaKirisame commented on pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#issuecomment-672111705


   @manupa-arm the primitive functions are still lowered to TIR - we only 
compile the Relay fragments to C++. This matches how Relay has worked for the 
Interpreter/VM - Relay fragments get handled separately, while primitive 
functions get lowered to TIR and handled by TVM.
   
   If you are talking about lowering everything to TIR, the biggest problem is 
the design implication. It would make TIR less like Fortran and much more like 
SML.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-11 Thread GitBox


zhiics commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r468690336



##
File path: cmake/modules/contrib/EthosN.cmake
##
@@ -0,0 +1,58 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# Arm Ethos-N rules
+
+if(NOT USE_ETHOSN STREQUAL "OFF")
+  find_ethosn(${USE_ETHOSN})
+
+  if(NOT ETHOSN_FOUND)
+message(FATAL_ERROR "Cannot find Ethos-N, USE_ETHOSN=" ${USE_ETHOSN})
+  endif()
+
+  if (ETHOSN_FOUND)

Review comment:
   else()

##
File path: cmake/util/FindEthosN.cmake
##
@@ -0,0 +1,95 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+###
+# Find Arm Ethos-N libraries
+#
+# Usage:
+#   find_ethosn(${USE_ETHOSN})
+#
+# - When USE_ETHOSN=/path/to/ethos-sdk-path, use the path from USE_ETHOSN
+# - Else, when environment variable ETHOSN_STACK is set, use that path
+# - When USE_ETHOSN=ON, use auto search
+#
+# Provide variables:
+#
+# - ETHOSN_FOUND
+# - ETHOSN_PACKAGE_VERSION
+# - ETHOSN_DEFINITIONS
+# - ETHOSN_INCLUDE_DIRS
+# - ETHOSN_COMPILER_LIBRARY
+# - ETHOSN_RUNTIME_LIBRARY
+
+macro(find_ethosn use_ethosn)
+  set(__use_ethosn ${use_ethosn})
+  if(IS_DIRECTORY ${__use_ethosn})
+set(__ethosn_stack ${__use_ethosn})
+message(STATUS "Arm Ethos-N driver stack PATH=" ${__use_ethosn})
+  elseif(IS_DIRECTORY $ENV{ETHOSN_STACK})
+ set(__ethosn_stack $ENV{ETHOSN_STACK})
+message(STATUS "Arm Ethos-N driver stack from env=" ${__use_ethosn})
+  else()
+ set(__ethosn_stack "")
+  endif()
+
+  if(__ethosn_stack)
+set(ETHOSN_INCLUDE_DIRS "")
+# Compile-time support
+find_path(_SL_DIR NAMES Support.hpp
+  PATHS ${__ethosn_stack}/include/ethosn_support_library)
+string(REGEX REPLACE "/ethosn_support_library" "" _SL_DIR2 ${_SL_DIR})
+list(APPEND ETHOSN_INCLUDE_DIRS "${_SL_DIR2}")
+
+find_library(ETHOSN_COMPILER_LIBRARY NAMES EthosNSupport
+  PATHS ${__ethosn_stack}/lib)
+find_library(ETHOSN_COMPILER_LIBRARY NAMES EthosNSupport)
+
+set(ETHOSN_PACKAGE_VERSION "0.1.1")
+
+if(USE_ETHOSN_HW STREQUAL "ON")
+  # Runtime hardware support
+  find_path(_DL_DIR NAMES Network.hpp
+PATHS ${__ethosn_stack}/include/ethosn_driver_library)
+  string(REGEX REPLACE "/ethosn_driver_library" "" _DL_DIR2 ${_DL_DIR})
+  list(APPEND ETHOSN_INCLUDE_DIRS "${_DL_DIR2}")
+
+  find_library(ETHOSN_RUNTIME_LIBRARY NAMES EthosNDriver
+PATHS ${__ethosn_stack}/lib)
+  find_library(ETHOSN_RUNTIME_LIBRARY NAMES EthosNDriver)
+  set(ETHOSN_DEFINITIONS -DETHOSN_HW)
+endif ()
+
+if(ETHOSN_COMPILER_LIBRARY)
+  set(ETHOSN_FOUND TRUE)
+endif()
+  endif(__ethosn_stack)
+
+  if(NOT ETHOSN_FOUND)
+if(__use_ethosn STREQUAL "ON")
+  message(WARNING "No cmake find_package available for Arm Ethos-N")
+endif()
+  endif()
+
+  # additional libraries
+  if(ETHOSN_FOUND)

Review comment:
   else()

##
File path: src/runtime/contrib/ethosn/ethosn_device.cc
##
@@ -0,0 +1,228 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy 

[GitHub] [incubator-tvm] jwfromm commented on a change in pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


jwfromm commented on a change in pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#discussion_r468701334



##
File path: tests/python/unittest/test_auto_scheduler_feature.py
##
@@ -0,0 +1,149 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Test feature extraction"""
+
+import math
+import tempfile
+
+import tvm
+from tvm import te, auto_scheduler
+
+from test_auto_scheduler_common import matmul_auto_scheduler_test
+
+
+def fequal(a, b):
+return math.fabs(a - b) < 1e-6
+
+
+def test_cpu_matmul():

Review comment:
   Can you add a description for each of these tests making it clear what 
they're testing? Given how hard-coded the asserts are, it's difficult to tell 
what we're checking here.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (2845329 -> 12da324)

2020-08-11 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 2845329  [TFLite] Implemented EXPAND_DIMS Operator for TFLite. (#6243)
 add 12da324  Fix division range estimation error in simplifier (#6244)

No new revisions were added by this update.

Summary of changes:
 src/arith/const_int_bound.cc   | 52 --
 .../python/unittest/test_arith_const_int_bound.py  | 12 +
 2 files changed, 51 insertions(+), 13 deletions(-)



[GitHub] [incubator-tvm] tqchen merged pull request #6244: Fix division range estimation error in simplifier

2020-08-11 Thread GitBox


tqchen merged pull request #6244:
URL: https://github.com/apache/incubator-tvm/pull/6244


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #6244: Fix division range estimation error in simplifier

2020-08-11 Thread GitBox


tqchen commented on pull request #6244:
URL: https://github.com/apache/incubator-tvm/pull/6244#issuecomment-672021229


   Thanks @kparzysz-quic !



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] quic-sanirudh commented on pull request #6138: Add `init` member to ReduceNode

2020-08-11 Thread GitBox


quic-sanirudh commented on pull request #6138:
URL: https://github.com/apache/incubator-tvm/pull/6138#issuecomment-671992636


   > Oh, i meant test cases that use this feature to compile a reduction with 
init value like those in `tests/python/integration`
   
   Ah okay, thanks for the clarification. I'll add those too.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #6138: Add `init` member to ReduceNode

2020-08-11 Thread GitBox


tqchen commented on pull request #6138:
URL: https://github.com/apache/incubator-tvm/pull/6138#issuecomment-671987519


   Oh, i meant test cases that use this feature to compile a reduction with 
init value like those in `tests/python/integration`
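For context, the semantics an `init` value adds to a reduction can be illustrated with a pure-Python fold; this is only an analogy, not the TVM `ReduceNode` API:

```python
from functools import reduce

def sum_with_init(values, init):
    # Fold that starts from an explicit init value rather than the
    # operator's identity element (0 for sum).
    return reduce(lambda acc, v: acc + v, values, init)

assert sum_with_init([1, 2, 3], 0) == 6    # plain sum
assert sum_with_init([1, 2, 3], 10) == 16  # initialized reduction
```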



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] lhutton1 opened a new pull request #6248: [BYOC][ACL] Improved pooling support

2020-08-11 Thread GitBox


lhutton1 opened a new pull request #6248:
URL: https://github.com/apache/incubator-tvm/pull/6248


   This patch adds support for various pooling operators, namely: average pool 
2d, global max pool 2d, global average pool 2d and l2 pooling (an equivalent 
combination of relay operators).
   
   For fp32:
   - nn.avg_pool2d
   - nn.global_max_pool2d
   - nn.global_avg_pool2d
   - power(2) + nn.avg_pool2d + sqrt -> L2 pooling (only supports fp32)
   
   For uint8:
   - cast + nn.avg_pool2d + cast
   - nn.global_max_pool2d
   - cast + nn.global_avg_pool2d + cast
   
   Tests updated to reflect these changes.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] quic-sanirudh edited a comment on pull request #6138: Add `init` member to ReduceNode

2020-08-11 Thread GitBox


quic-sanirudh edited a comment on pull request #6138:
URL: https://github.com/apache/incubator-tvm/pull/6138#issuecomment-671882417


   Thanks @tqchen for the suggestion. I'll work on adding the rfactor support 
and update the PR once it's done.
   
   Also, could you explain what you meant by adding "compiled" testcases as I'm 
a little confused by that. Did you mean cpp tests?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] manupa-arm commented on pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-11 Thread GitBox


manupa-arm commented on pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#issuecomment-671903407


   @slyubomirsky Thanks for the work! -- Some very high-level comments.
   
   IIUC, this is bypassing the TIR lowering as it stands today, and thus 
possibly loses the benefits of TIR scheduling-based optimizations in the AOT 
compilation path. I just wanted to know what the roadmap looks like if we are 
to re-target the AOT codegen at the TIR level. Would it be incremental work on 
top of this, or would it require a complete re-write?
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] quic-sanirudh commented on pull request #6138: Add `init` member to ReduceNode

2020-08-11 Thread GitBox


quic-sanirudh commented on pull request #6138:
URL: https://github.com/apache/incubator-tvm/pull/6138#issuecomment-671882417


   Thanks @tqchen for the suggestion. I'll work on adding the rfactor support 
and update the PR once it's done.
   
   Could you explain what you meant by adding "compiled" testcases? Did you 
mean cpp tests?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leandron commented on a change in pull request #6112: TVMC - a command line driver for TVM (Part 1)

2020-08-11 Thread GitBox


leandron commented on a change in pull request #6112:
URL: https://github.com/apache/incubator-tvm/pull/6112#discussion_r468454560



##
File path: python/tvm/driver/tvmc/main.py
##
@@ -0,0 +1,90 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+TVMC - TVM driver command-line interface
+"""
+import argparse
+import logging
+import sys
+
+import pkg_resources
+
+from tvm.driver.tvmc.common import TVMCException
+
+
+def add_help_parser(subparsers):
+""" Include parser for 'help' subcommand """
+
+parser = subparsers.add_parser("help", help="show help page")
+# 'func' points to a function that will receive all the arguments
+# provided by the user. This is the only required attribute
+parser.set_defaults(func=drive_help)
+
+
+def drive_help(args):
+""" Show help page """
+
+print("This is a placeholder command. Args = {0}".format(args))
+
+
+
+def _main(argv):
+""" TVM command line interface. """
+
+parser = argparse.ArgumentParser(
+prog='tvmc',
+formatter_class=argparse.RawDescriptionHelpFormatter,
+description="TVM compiler driver",
+epilog=__doc__,
+)
+parser.add_argument(
+"-v", "--verbose", action="count", default=0, help="increase verbosity"
+)
+parser.add_argument(
+"--version", action="store_true", help="print the version and exit"
+)
+
+subparsers = parser.add_subparsers(title="commands")
+
+add_help_parser(subparsers)

Review comment:
   I think it works. It just moves some logic that was concentrated in 
`main`, to the respective subcommand implementer files such as `compile.py` and 
`runner.py` (to come in next PRs, once this gets merged).
   
   I will update it accordingly.
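   The registration scheme discussed here - each subcommand file contributing 
an `add_*_parser` that points `func` at its handler via `set_defaults` - can 
be sketched with the standard library alone. `add_compile_parser` and 
`drive_compile` below are hypothetical stand-ins for the real tvmc 
subcommands:

```python
import argparse

def drive_compile(args):
    # hypothetical handler standing in for tvmc's real compile driver
    return "compiled"

def add_compile_parser(subparsers):
    # mirrors add_help_parser above: each subcommand file registers
    # itself and points 'func' at its handler via set_defaults
    parser = subparsers.add_parser("compile", help="compile a model")
    parser.set_defaults(func=drive_compile)

parser = argparse.ArgumentParser(prog="tvmc")
subparsers = parser.add_subparsers(title="commands")
add_compile_parser(subparsers)

# main() can then dispatch with args.func(args), whatever subcommand ran
args = parser.parse_args(["compile"])
assert args.func(args) == "compiled"
```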





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jroesch commented on pull request #6162: [Parser] Parser 2.0 part 2

2020-08-11 Thread GitBox


jroesch commented on pull request #6162:
URL: https://github.com/apache/incubator-tvm/pull/6162#issuecomment-671818745


   @tqchen this should be gtg, looks like I got past the last few CPU tests.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on pull request #6243: [TFLite] Implemented EXPAND_DIMS Operator for TFLite.

2020-08-11 Thread GitBox


FrozenGene commented on pull request #6243:
URL: https://github.com/apache/incubator-tvm/pull/6243#issuecomment-671796154


   Thanks @jainris 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated: [TFLite] Implemented EXPAND_DIMS Operator for TFLite. (#6243)

2020-08-11 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 2845329  [TFLite] Implemented EXPAND_DIMS Operator for TFLite. (#6243)
2845329 is described below

commit 2845329009a42dcfbb3ac6d6dda7b578b8f8c585
Author: Rishabh Jain <56974688+jain...@users.noreply.github.com>
AuthorDate: Tue Aug 11 13:35:55 2020 +0530

[TFLite] Implemented EXPAND_DIMS Operator for TFLite. (#6243)
---
 python/tvm/relay/frontend/tflite.py  | 26 +
 tests/python/frontend/tflite/test_forward.py | 56 
 2 files changed, 82 insertions(+)

diff --git a/python/tvm/relay/frontend/tflite.py b/python/tvm/relay/frontend/tflite.py
index f168f1b..11d6576 100644
--- a/python/tvm/relay/frontend/tflite.py
+++ b/python/tvm/relay/frontend/tflite.py
@@ -84,6 +84,7 @@ class OperatorConverter(object):
             'ELU': self.convert_elu,
             'EQUAL': self.convert_equal,
             'EXP': self.convert_exp,
+            'EXPAND_DIMS': self.convert_expand_dims,
             'FILL': self.convert_fill,
             'FLOOR_DIV': self.convert_floor_div,
             'FLOOR_MOD': self.convert_floor_mod,
@@ -2904,6 +2905,31 @@ class OperatorConverter(object):
         ret = _expr.TupleWrapper(_expr.Tuple([boxes, cls_ids, scores, valid_count]), size=4)
         return ret
 
+    def convert_expand_dims(self, op):
+        """Convert TFLite EXPAND_DIMS"""
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) == 2, "input tensors length should be 2"
+
+        if input_tensors[0].qnn_params:
+            # Check that input and output tensor have same qnn params.
+            output_tensors = self.get_output_tensors(op)
+            assert self.has_same_qnn_params(input_tensors[0], output_tensors[0]), \
+                "TFLite EXPAND_DIMS requires input and output tensors' \
+                scale and zero points to be equal"
+
+        input_expr = self.get_tensor_expr(input_tensors[0])
+        axis = self.get_tensor_value(input_tensors[1])
+        if isinstance(axis, np.ndarray):
+            assert len(axis) == 1, "only one value is expected."
+            axis = int(axis)
+
+        ndims = len(input_tensors[0].tensor.ShapeAsNumpy())
+        assert (-1 - ndims <= axis <= ndims), "axis out of range"
+
+        out = _op.expand_dims(input_expr, axis, 1)
+
+        return out
+
     def convert_one_hot(self, op):
         """Convert TFLite ONE_HOT"""
         try:
diff --git a/tests/python/frontend/tflite/test_forward.py b/tests/python/frontend/tflite/test_forward.py
index 2e57175..33ac6d4 100644
--- a/tests/python/frontend/tflite/test_forward.py
+++ b/tests/python/frontend/tflite/test_forward.py
@@ -2031,6 +2031,61 @@ def test_forward_padv2():
 
 
 ###
+# EXPAND_DIMS
+# ---
+
+def _test_expand_dims(input_shape, input_type, axis, quantized=False):
+    """ One iteration of EXPAND_DIMS """
+    with tf.Graph().as_default():
+        axis = ops.convert_to_tensor(axis, dtype=axis.dtype)
+
+        if quantized:
+            # ignoring input_type as quantized requires uint8
+            input = np.random.uniform(0, 256, input_shape).astype('uint8')
+            in_input = tf.placeholder(dtype='float32', shape=input.shape, name="input")
+
+            input_range = {'q_input': (-100, 100)}
+            inq_input = tf.quantization.fake_quant_with_min_max_args(
+                in_input,
+                min=-100,
+                max=100,
+                name="q_input")
+
+            out = array_ops.expand_dims(inq_input, axis=axis)
+            out = tf.quantization.fake_quant_with_min_max_args(
+                out,
+                min=-100,
+                max=100,
+                name="out")
+
+            compare_tflite_with_tvm(
+                [input],
+                ["q_input"],
+                [inq_input],
+                [out],
+                quantized=True,
+                input_range=input_range)
+        else:
+            input = np.random.uniform(-100, 100, input_shape).astype(input_type)
+            in_input = tf.placeholder(dtype=input.dtype, shape=input.shape, name="input")
+
+            out = array_ops.expand_dims(in_input, axis=axis)
+
+            compare_tflite_with_tvm(
+                [input],
+                ["input"],
+                [in_input],
+                [out])
+
+def test_forward_expand_dims():
+    """ EXPAND_DIMS """
+    for quantized in [False, True]:
+        _test_expand_dims((6, 2, 7, 5), 'float32', np.int32(0), quantized=quantized)
+        _test_expand_dims((1, 2, 3), 'int32', np.int32(-2), quantized=quantized)
+        _test_expand_dims((2, 4, 5), 'float32', np.array([1], dtype=np.int32), quantized=quantized)
+
+
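The converter added above range-checks the axis and defers to `_op.expand_dims`; the shape rule it relies on can be shown with a stdlib-only sketch (`expand_dims_shape` is a hypothetical helper, not part of the patch):

```python
def expand_dims_shape(shape, axis):
    """EXPAND_DIMS shape rule: insert a 1-sized dim at `axis`, with the
    same -1-ndims <= axis <= ndims range check as the converter."""
    ndims = len(shape)
    assert -1 - ndims <= axis <= ndims, "axis out of range"
    if axis < 0:
        axis += ndims + 1  # negative axes count from the new rank
    return shape[:axis] + (1,) + shape[axis:]

# Shapes matching the test cases in the patch above:
assert expand_dims_shape((6, 2, 7, 5), 0) == (1, 6, 2, 7, 5)
assert expand_dims_shape((1, 2, 3), -2) == (1, 2, 1, 3)
```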

[GitHub] [incubator-tvm] FrozenGene merged pull request #6243: [TFLite] Implemented EXPAND_DIMS Operator for TFLite.

2020-08-11 Thread GitBox


FrozenGene merged pull request #6243:
URL: https://github.com/apache/incubator-tvm/pull/6243


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene edited a comment on pull request #5913: [random] support random fill

2020-08-11 Thread GitBox


FrozenGene edited a comment on pull request #5913:
URL: https://github.com/apache/incubator-tvm/pull/5913#issuecomment-671789877


   > > @FrozenGene Please followup. It is okay to do the path 
`CPU@remote_device -> GPU@remote_device` for now, as long as there is no RPC 
communication cost (i.e. no `local_device` -> `remote device`)
   > > I remembered that we tried to do this in our internal repo but failed. 
What's the problem at that time?
   > 
   > @merrymercy Our current method is we will introduce one `dummy cpu` 
context in the remote and pass the data to the remote target (like OpenCL, 
CUDA). Previous time we want to do is to generate non empty data in the remote 
target but failed.
   > 
   > @tqchen 's suggestion we could leverage `empty` interface and fill the 
data into the allocated tensor to avoid introducing new `non_empty` api in the 
C / ndarray interface and generate random data directly in the remote device. 
Previous comment is to make sure that we maybe have to introduce cpu like our 
current way.
   > 
   > I will follow up my pr that move our implementation to the 
`contrib/random/random.cc` and turn it on always as our auto scheduler has 
local builder / local runner also rely on it (not just rpc).
   
   @merrymercy @tqchen I have updated the code and verified it in the remote 
cpu / remote mali gpu. We could do `CPU@remote_device -> GPU@remote_device` 
directly, not `CPU@host->CPU@remote_device -> GPU@remote_device`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on pull request #5913: [random] support random fill

2020-08-11 Thread GitBox


FrozenGene commented on pull request #5913:
URL: https://github.com/apache/incubator-tvm/pull/5913#issuecomment-671789877


   > > @FrozenGene Please followup. It is okay to do the path 
`CPU@remote_device -> GPU@remote_device` for now, as long as there is no RPC 
communication cost (i.e. no `local_device` -> `remote device`)
   > > I remembered that we tried to do this in our internal repo but failed. 
What's the problem at that time?
   > 
   > @merrymercy Our current method is we will introduce one `dummy cpu` 
context in the remote and pass the data to the remote target (like OpenCL, 
CUDA). Previous time we want to do is to generate non empty data in the remote 
target but failed.
   > 
   > @tqchen 's suggestion we could leverage `empty` interface and fill the 
data into the allocated tensor to avoid introducing new `non_empty` api in the 
C / ndarray interface and generate random data directly in the remote device. 
Previous comment is to make sure that we maybe have to introduce cpu like our 
current way.
   > 
   > I will follow up my pr that move our implementation to the 
`contrib/random/random.cc` and turn it on always as our auto scheduler has 
local builder / local runner also rely on it (not just rpc).
   
   @merrymercy @tqchen I have updated the code and verified it in the remote 
cpu / remote mali gpu. We could do `CPU@remote_device -> GPU@remote_device`, 
not `CPU@host->CPU@remote_device -> GPU@remote_device`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene edited a comment on pull request #6229: [RPC] Update build support for cross compiling apps/cpp_rpc with OpenCL

2020-08-11 Thread GitBox


FrozenGene edited a comment on pull request #6229:
URL: https://github.com/apache/incubator-tvm/pull/6229#issuecomment-671787510


   Thanks @csullivan I've verified it and it works.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on pull request #6229: [RPC] Update build support for cross compiling apps/cpp_rpc with OpenCL

2020-08-11 Thread GitBox


FrozenGene commented on pull request #6229:
URL: https://github.com/apache/incubator-tvm/pull/6229#issuecomment-671787510


   Thanks @csullivan, I verified it and it works.







[GitHub] [incubator-tvm] FrozenGene merged pull request #6229: [RPC] Update build support for cross compiling apps/cpp_rpc with OpenCL

2020-08-11 Thread GitBox


FrozenGene merged pull request #6229:
URL: https://github.com/apache/incubator-tvm/pull/6229


   







[incubator-tvm] branch master updated (ee33056 -> 14f4efe)

2020-08-11 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from ee33056  [Topi,x86] Split MKL from BLAS. (#6182)
 add 14f4efe  [RPC] Update build support for cross compiling apps/cpp_rpc 
with OpenCL (#6229)

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt  |  4 +--
 apps/cpp_rpc/CMakeLists.txt | 12 
 apps/cpp_rpc/Makefile   | 53 --
 apps/cpp_rpc/README.md  | 44 ++--
 cmake/config.cmake  |  8 ++
 cmake/modules/OpenCL.cmake  |  6 ++--
 cmake/util/FindOpenCL.cmake | 70 +
 7 files changed, 124 insertions(+), 73 deletions(-)
 delete mode 100644 apps/cpp_rpc/Makefile
 create mode 100644 cmake/util/FindOpenCL.cmake



[GitHub] [incubator-tvm] jcf94 commented on pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-11 Thread GitBox


jcf94 commented on pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#issuecomment-671763210


   > Overall LGTM, but I'd like to raise the discussion about the file 
organization for search policy. Now `sketch_search_policy.cc` has about one 
thousand line and it might continue to grow in the future. Here is the 
organization in my mind:
   > 
   > ```
   > auto_scheduler
   > |- search_policy
   >   |- sketch_search
   > |- sketch_search_policy.{h,cc}
   > |- sketch_rules.{h,cc}
   > |- utils.{h,cc}
   >   |- empty_search
   >   |- utils.{h,cc}
   > ```
   > 
   > * Have `auto_scheduler/search_policy/{sketch_policy, empty_policy}`.
   > * Separate all `SketchGenerationRule` and `InitPopulationRule` to 
`search_policy/sketch_policy/sketch_rules`.
   > * Rename `src/auto_scheduler/search_policy/utils.{h,cc}` to 
`src/auto_scheduler/search_policy/utils.{h,cc}` (still under `search_policy`), 
and move all sketch search specific functions such as Mutation (not included in 
this PR) to `auto_scheduler/search_policy/sketch_policy/utils.{h,cc}`.
   
   Currently I have just split out `sketch_policy_rules.h/cc`; we can continue 
to consider the directory structure.
   We can also put the Evolutionary Search into a separate file.







[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy commented on a change in pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#discussion_r468359065



##
File path: python/tvm/auto_scheduler/feature.py
##
@@ -0,0 +1,242 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Python API for Feature extraction. The extracted features vector are used by 
cost models.
+
+We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+so we call this feature as "Per Store" feature.
+The cost model also does prediction for each BufferStoreNode statement and 
aggregates
+the predicted score of each BufferStoreNode as the score of a TIR Stmt.
+
+The feature specification is defined by 
`src/auto_scheduler/feature.cc::FeatureSet`
+"""
+
+from typing import List, Tuple, Union, Optional
+import struct
+
+import numpy as np
+
+from .loop_state import State, StateObject
+from .measure import MeasureInput, MeasureResult
+from . import _ffi_api
+
+# The maximum number of extracted buffers for one statement
+DEFAULT_MAX_N_BUFS = 5
+
+# The length of the feature vector
+DEFAULT_FEATURE_VEC_LEN = 164
+
+# The size of int and float in bytes
+SIZE_OF_INT = 4
+SIZE_OF_FLOAT = 4
+
+def unpack_feature(byte_arr: bytearray) -> Tuple[np.ndarray, np.ndarray, 
np.ndarray]:
+"""Unpack the flatten feature (in byte array format) from c++
+
+Parameters
+--
+byte_arr: bytearray
+The two-dimensional feature vector in serialized byte array format
+
+Returns
+---
+features: np.ndarray
+Feature vectors
+normalized_throughputs: np.ndarray
+Normalized throughputs
+task_ids: np.ndarray
+Task ids
+"""
+
+# The format for n records is:
+# {
+#   int n;
+#   int[n+2] sizes
+
+#   float[sizes[0]]feature for record 1
+#   float[sizes[1]]feature for record 2
+#   ...feature for record i...
+#   float[sizes[n-1]]  feature for record n
+
+#   float[sizes[n]]normalized throughput for n records
+#   int[sizes[n+1]]task id for n records
+# }
+
+vec_len = DEFAULT_FEATURE_VEC_LEN
+
+# unpack sizes
+offset = 0
+n = struct.unpack_from("1i", byte_arr, offset=offset)[0]
+offset += SIZE_OF_INT
+
+sizes = struct.unpack_from("%di" % (n+2), byte_arr, offset=offset)
+offset += SIZE_OF_INT * (n+2)
+
+# unpack features
+features = []
+for size in sizes[:-2]:
+row = []
+
+# Now, we need to unpack the feature for multiple statements.
+# The format is:
+# {
+# int n_stmts
+# float[n_stmt][vec_len] feature_vecs
+# }
+# where vec_len can be calculated by `(size - 1) / n_stmts`
+
+if size == 0:
+# failed during lowering
+features.append(np.zeros((1, vec_len)))
+else:
+n_stmts = struct.unpack_from("f", byte_arr, offset=offset)
+offset += SIZE_OF_FLOAT
+
+n_stmts = int(n_stmts[0] + 0.5)

Review comment:
   Some of them are int while the others are float. I want to store all of 
them in a single array, but we do not have union in tvm::Object. So I use a 
single float array to store both int and float
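As a side note, the serialized layout described in the diff's comment (`int n; 
int sizes[n+2];` followed by the float payloads) can be exercised with a small 
round-trip sketch. The `pack_records` helper below is purely illustrative and 
is not part of the TVM codebase; only the unpacking side mirrors the quoted 
`unpack_feature` logic:

```python
import struct

# Round-trip sketch of the layout described above:
#   int n; int sizes[n+2]; float payloads ...
# `pack_records` is illustrative only -- it is not a TVM API.
def pack_records(features, throughputs, task_ids):
    n = len(features)
    sizes = [len(f) for f in features] + [len(throughputs), len(task_ids)]
    buf = struct.pack("i", n)
    buf += struct.pack("%di" % (n + 2), *sizes)
    for f in features:
        buf += struct.pack("%df" % len(f), *f)
    buf += struct.pack("%df" % len(throughputs), *throughputs)
    buf += struct.pack("%di" % len(task_ids), *task_ids)
    return buf

def unpack_sizes(byte_arr):
    # mirrors the start of unpack_feature: read n, then the n+2 sizes
    n = struct.unpack_from("1i", byte_arr, offset=0)[0]
    return struct.unpack_from("%di" % (n + 2), byte_arr, offset=4)

buf = pack_records([[1.0, 2.0], [3.0]], [0.5], [0])
print(unpack_sizes(buf))  # (2, 1, 1, 1)

# An int stored in the float payload (as discussed above) is
# recovered by rounding, just like `int(n_stmts[0] + 0.5)`:
n_stmts = int(struct.unpack("f", struct.pack("f", 3.0))[0] + 0.5)
print(n_stmts)  # 3
```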









[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-11 Thread GitBox


merrymercy commented on a change in pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#discussion_r468359834



##
File path: python/tvm/auto_scheduler/feature.py
##
@@ -0,0 +1,242 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Python API for Feature extraction. The extracted features vector are used by 
cost models.
+
+We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+so we call this feature as "Per Store" feature.
+The cost model also does prediction for each BufferStoreNode statement and 
aggregates
+the predicted score of each BufferStoreNode as the score of a TIR Stmt.
+
+The feature specification is defined by 
`src/auto_scheduler/feature.cc::FeatureSet`
+"""
+
+from typing import List, Tuple, Union, Optional
+import struct
+
+import numpy as np
+
+from .loop_state import State, StateObject
+from .measure import MeasureInput, MeasureResult
+from . import _ffi_api
+
+# The maximum number of extracted buffers for one statement
+DEFAULT_MAX_N_BUFS = 5
+
+# The length of the feature vector
+DEFAULT_FEATURE_VEC_LEN = 164
+
+# The size of int and float in bytes
+SIZE_OF_INT = 4
+SIZE_OF_FLOAT = 4
+
+def unpack_feature(byte_arr: bytearray) -> Tuple[np.ndarray, np.ndarray, 
np.ndarray]:
+"""Unpack the flatten feature (in byte array format) from c++
+
+Parameters
+--
+byte_arr: bytearray
+The two-dimensional feature vector in serialized byte array format
+
+Returns
+---
+features: np.ndarray
+Feature vectors
+normalized_throughputs: np.ndarray
+Normalized throughputs
+task_ids: np.ndarray
+Task ids
+"""
+
+# The format for n records is:
+# {
+#   int n;
+#   int[n+2] sizes

Review comment:
   The `int sizes[n + 1]` you proposed is not a valid declaration in C. 
I think the existing form is better.









[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-11 Thread GitBox


jcf94 commented on a change in pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#discussion_r468346294



##
File path: tests/python/unittest/test_auto_scheduler_sketch_generation.py
##
@@ -0,0 +1,107 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+""" Test sketch generation. """
+
+import tvm
+from tvm import te, auto_scheduler
+
+from test_auto_scheduler_common import (matmul_auto_scheduler_test, 
conv2d_nchw_bn_relu_auto_scheduler_test,
+max_pool2d_auto_scheduler_test, 
min_nm_auto_scheduler_test,
+softmax_nm_auto_scheduler_test, 
softmax_abcd_auto_scheduler_test,
+
conv2d_winograd_nhwc_auto_scheduler_test)
+
+def print_sketches(sketches):

Review comment:
   Merged this into `SketchPolicy::generate_sketches()`.




