[GitHub] [incubator-tvm] junrushao1994 opened a new pull request #5796: [Bugfix][RPC] Allow RPCWrappedFunc to rewrite runtime::String as std::string

2020-06-12 Thread GitBox


junrushao1994 opened a new pull request #5796:
URL: https://github.com/apache/incubator-tvm/pull/5796


   Per discussion: https://discuss.tvm.ai/t/rpc-vta-rpc-error-after-recent-code-refactor/6952



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5787: support aten::type_as in the pytorch frontend

2020-06-12 Thread GitBox


masahi commented on pull request #5787:
URL: https://github.com/apache/incubator-tvm/pull/5787#issuecomment-643569549


   Thanks @randxie @t-vi 







[GitHub] [incubator-tvm] masahi merged pull request #5787: support aten::type_as in the pytorch frontend

2020-06-12 Thread GitBox


masahi merged pull request #5787:
URL: https://github.com/apache/incubator-tvm/pull/5787


   







[incubator-tvm] branch master updated: support aten::type_as in the pytorch frontend (#5787)

2020-06-12 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 59f5cbe  support aten::type_as in the pytorch frontend (#5787)
59f5cbe is described below

commit 59f5cbe921cf329febcd9d6eff2df94d80f1c523
Author: Rand Xie 
AuthorDate: Fri Jun 12 21:52:45 2020 -0700

support aten::type_as in the pytorch frontend (#5787)

* support aten::type_as in the pytorch frontend

* use _convert_data_type to convert torch type to tvm type and add more types in the type_as test
---
 python/tvm/relay/frontend/pytorch.py  |  9 +++
 tests/python/frontend/pytorch/test_forward.py | 37 +++
 2 files changed, 46 insertions(+)

diff --git a/python/tvm/relay/frontend/pytorch.py b/python/tvm/relay/frontend/pytorch.py
index a9f4a7b..d2451cd 100644
--- a/python/tvm/relay/frontend/pytorch.py
+++ b/python/tvm/relay/frontend/pytorch.py
@@ -1645,6 +1645,14 @@ def _list_len(prelude):
     return _impl
 
 
+def _type_as():
+    def _impl(inputs, input_types):
+        assert len(inputs) == 2
+        assert len(input_types) == 2
+        return _op.cast(inputs[0], _convert_data_type(input_types[1]))
+    return _impl
+
+
 def _add(prelude):
     # add_ is overloaded for tensor add and list concat
     def _impl(inputs, input_types):
@@ -1953,6 +1961,7 @@ def _get_convert_map(prelude):
         "aten::stack"   : _tensor_array_stack(prelude),
         "aten::__getitem__" : _list_getitem(prelude),
         "aten::len" : _list_len(prelude),
+        "aten::type_as" : _type_as(),
     }
     return convert_map
 
diff --git a/tests/python/frontend/pytorch/test_forward.py b/tests/python/frontend/pytorch/test_forward.py
index 86fb409..f8fb57f 100644
--- a/tests/python/frontend/pytorch/test_forward.py
+++ b/tests/python/frontend/pytorch/test_forward.py
@@ -27,6 +27,7 @@ import torchvision
 
 from tvm import relay
 from tvm.contrib import graph_runtime
+from tvm.contrib.nvcc import have_fp16
 from tvm.relay.testing.config import ctx_list
 
 
@@ -837,6 +838,41 @@ def test_forward_size():
     input_data = torch.rand(input_shape).float()
     verify_model(Size1().float().eval(), input_data=input_data)
 
+
+def test_type_as():
+    torch.set_grad_enabled(False)
+    input_shape = [1, 3]
+
+    def _create_module(dtype):
+        class TypeAs(Module):
+            def forward(self, *args):
+                expected_type_tensor = torch.zeros(1, 3, dtype=dtype)
+                return args[0].type_as(expected_type_tensor)
+
+        return TypeAs()
+
+    input_data = torch.randn(input_shape).float()
+    verify_model(_create_module(torch.float64), input_data=input_data)
+    verify_model(_create_module(torch.float32), input_data=input_data)
+    verify_model(_create_module(torch.int64), input_data=input_data)
+    verify_model(_create_module(torch.int32), input_data=input_data)
+    verify_model(_create_module(torch.int16), input_data=input_data)
+    verify_model(_create_module(torch.int8), input_data=input_data)
+
+    if torch.cuda.is_available():
+        check_fp16 = False
+        try:
+            # Only check half precision on supported hardware.
+            if have_fp16(tvm.gpu(0).compute_version):
+                check_fp16 = True
+        except Exception:
+            # If the GPU is not enabled in TVM, skip the fp16 test.
+            pass
+
+        if check_fp16:
+            verify_model(_create_module(torch.float16), input_data=input_data)
+
+
 def test_forward_view():
     torch.set_grad_enabled(False)
     input_shape = [1, 3, 10, 10]
@@ -2575,6 +2611,7 @@ if __name__ == "__main__":
     test_upsample()
     test_forward_upsample3d()
     test_to()
+    test_type_as()
     test_forward_functional_pad()
     test_forward_zero_pad2d()
     test_forward_constant_pad1d()
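The converter added above leans on `_convert_data_type` to translate a Torch dtype name (as carried in `input_types`) into a TVM dtype string before calling `_op.cast`. As a self-contained sketch of that translation, here is a stand-alone mapping function; the table and the function name are illustrative assumptions, not TVM's actual implementation:

```python
# Illustrative Torch-dtype -> TVM-dtype translation, in the spirit of
# _convert_data_type. The mapping table is an assumption for illustration.
TORCH_TO_TVM_DTYPE = {
    "torch.float64": "float64",
    "torch.float32": "float32",
    "torch.float16": "float16",
    "torch.int64": "int64",
    "torch.int32": "int32",
    "torch.int16": "int16",
    "torch.int8": "int8",
}

def convert_data_type(torch_dtype_name):
    """Map a Torch dtype name (as seen in input_types) to a TVM dtype string."""
    try:
        return TORCH_TO_TVM_DTYPE[torch_dtype_name]
    except KeyError:
        raise NotImplementedError("unsupported dtype: %s" % torch_dtype_name)
```

This is why the review below asked for the explicit conversion: the Torch names ("torch.int32") and the TVM names ("int32") differ, so the raw `input_types` entry cannot be passed to `_op.cast` directly.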



[GitHub] [incubator-tvm] lixiaoquan opened a new pull request #5795: [Relay] Keep fixed dim when unifying dynamic shape

2020-06-12 Thread GitBox


lixiaoquan opened a new pull request #5795:
URL: https://github.com/apache/incubator-tvm/pull/5795


   For this function:
   
   ```
   fn (%True: Tensor[(?, 1), float32], %False: Tensor[(?, ?), float32]) {
 free_var %f: fn () -> bool
 %0 = %f();
 if (%0) {
   %True
 } else {
   %False
 }
   }
   ```
   
   Original type inference result:
   
   ```
   fn (%True: Tensor[(?, ?), float32], %False: Tensor[(?, ?), float32], %f: fn () -> bool) -> Tensor[(?, ?), float32] {
 %0 = %f() /* ty=bool */;
 if (%0) {
   %True
 } else {
   %False
 }
   }
   ```
   
   Type inference result with this patch, which keeps the fixed dim
   
   ```
   fn (%True: Tensor[(?, 1), float32], %False: Tensor[(?, 1), float32], %f: fn () -> bool) -> Tensor[(?, 1), float32] {
 %0 = %f() /* ty=bool */;
 if (%0) {
   %True
 } else {
   %False
 }
   }
   ```
   
   cc @icemelon9 @kevinthesun @zhiics  Could you please review?
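   The behaviour described above — a concrete dimension should survive unification with a dynamic one — can be sketched with a toy unifier, where `"?"` stands in for Relay's `Any` dimension (hypothetical helper, not the actual type solver):

```python
def unify_dim(a, b):
    # A dynamic dim ('?') unifies with anything; two fixed dims must agree.
    if a == "?":
        return b
    if b == "?":
        return a
    if a == b:
        return a
    raise TypeError("incompatible dims: %r vs %r" % (a, b))

def unify_shape(s1, s2):
    """Unify two shapes of equal rank, keeping fixed dims wherever possible."""
    assert len(s1) == len(s2)
    return [unify_dim(a, b) for a, b in zip(s1, s2)]
```

   Under this rule, `(?, 1)` unified with `(?, ?)` yields `(?, 1)`, matching the patched inference result shown above rather than relaxing the fixed dim to `?`.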







[GitHub] [incubator-tvm] zhiics merged pull request #5794: [Frontend][Tensorflow]Improve TF Parser to keep output nodes for saved_model

2020-06-12 Thread GitBox


zhiics merged pull request #5794:
URL: https://github.com/apache/incubator-tvm/pull/5794


   







[incubator-tvm] branch master updated: Fix tf parser (#5794)

2020-06-12 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 7a41971  Fix tf parser (#5794)
7a41971 is described below

commit 7a419718c121164fc260864014e1d0d81f556949
Author: Yao Wang 
AuthorDate: Fri Jun 12 20:32:46 2020 -0700

Fix tf parser (#5794)
---
 python/tvm/relay/frontend/tensorflow.py| 12 
 python/tvm/relay/frontend/tensorflow_parser.py | 10 --
 2 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/python/tvm/relay/frontend/tensorflow.py b/python/tvm/relay/frontend/tensorflow.py
index 5778b25..af09877 100644
--- a/python/tvm/relay/frontend/tensorflow.py
+++ b/python/tvm/relay/frontend/tensorflow.py
@@ -1322,14 +1322,10 @@ def _shape():
 
 def _fill():
     def _impl(inputs, attr, params, mod):
-        output_shape = attr['_output_shapes'][0]
-        # Output shape must be defined to avoid errors. If any axis is not, we must
-        # try to compute its shape.
-        if output_shape is None or -1 in output_shape:
-            try:
-                output_shape = _expr.Constant(_infer_value(inputs[0], params, mod))
-            except Exception:
-                output_shape = inputs[0]
+        try:
+            output_shape = _infer_value(inputs[0], params, mod).asnumpy().tolist()
+        except Exception:
+            output_shape = inputs[0]
 
         return _op.full(inputs[1], output_shape, attr['T'].name)
     return _impl
diff --git a/python/tvm/relay/frontend/tensorflow_parser.py b/python/tvm/relay/frontend/tensorflow_parser.py
index fdbb876..771aed0 100644
--- a/python/tvm/relay/frontend/tensorflow_parser.py
+++ b/python/tvm/relay/frontend/tensorflow_parser.py
@@ -30,6 +30,10 @@ class TFParser(object):
     model_dir : tensorflow frozen pb file or a directory that contains saved
         model or checkpoints.
 
+    outputs : List of output tensor names (Optional)
+        Optional output node names. These nodes are protected from removal
+        when training nodes are stripped from a saved model.
+
     Examples
 
     .. code-block:: python
@@ -38,11 +42,12 @@ class TFParser(object):
         graphdef = parser.parse()
     """
 
-    def __init__(self, model_dir):
+    def __init__(self, model_dir, outputs=None):
         from tensorflow.core.framework import graph_pb2
         self._tmp_dir = util.tempdir()
         self._model_dir = model_dir
         self._graph = graph_pb2.GraphDef()
+        self._outputs = outputs or []
 
     def _set_graph(self, graph):
         """Set Graph"""
@@ -128,7 +133,8 @@ class TFParser(object):
         output_graph_def = graph_pb2.GraphDef()
         with open(output_graph_filename, "rb") as f:
             output_graph_def.ParseFromString(f.read())
-        output_graph_def = graph_util.remove_training_nodes(output_graph_def)
+        output_graph_def = graph_util.remove_training_nodes(output_graph_def,
+                                                            protected_nodes=self._outputs)
         return output_graph_def
 
     def _load_ckpt(self):
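The fix threads user-supplied output names through to `graph_util.remove_training_nodes` as `protected_nodes`, so pruning can no longer drop the output nodes of a saved model. A pure-Python analogue of that protection logic (a mock graph of dicts, not TensorFlow's real implementation; the "training-only" op set is an assumption for illustration):

```python
def remove_training_nodes(graph, protected_nodes=()):
    """Drop nodes considered training-only unless explicitly protected.

    `graph` is a list of {"name", "op"} dicts; TRAINING_ONLY_OPS stands in
    for the real pruning criteria used by tensorflow's graph_util.
    """
    TRAINING_ONLY_OPS = {"Identity", "CheckNumerics"}
    return [n for n in graph
            if n["op"] not in TRAINING_ONLY_OPS or n["name"] in protected_nodes]

# A saved-model output is often wrapped in an Identity node, which naive
# pruning would remove along with genuine training-only nodes.
graph = [
    {"name": "x", "op": "Placeholder"},
    {"name": "matmul", "op": "MatMul"},
    {"name": "out", "op": "Identity"},
]
```

Without protection, `out` disappears from the pruned graph; passing it in `protected_nodes` keeps it, which is the behaviour this PR restores for saved models.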



[GitHub] [incubator-tvm] Menooker commented on a change in pull request #5601: [DataType] Add bfloat16

2020-06-12 Thread GitBox


Menooker commented on a change in pull request #5601:
URL: https://github.com/apache/incubator-tvm/pull/5601#discussion_r439704765



##
File path: tests/python/unittest/test_tir_transform_bf16_legalize.py
##
@@ -0,0 +1,152 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import tvm
+import topi
+from tvm import te
+from tvm.tir import const
+
+
+def lower_stmt(sche, params, passfunc):
+    func = tvm.driver.build_module.form_irmodule(sche, params, "main", None)["main"]
+    func = passfunc()(tvm.IRModule.from_expr(func))["main"]
+    stmt = func.body
+    return stmt
+
+
+def test_promote():
+    def runpass(op, passfunc):
+        a = te.placeholder((100,), dtype='bfloat16')
+        b = te.placeholder((100,), dtype='bfloat16')
+        c = te.compute((100,), lambda i: op(a[i], b[i]))
+        s = te.create_schedule(c.op)
+        return lower_stmt(s, [a, b, c], passfunc)
+
+    def get_promoted(op):
+        a = te.placeholder((100,), dtype='bfloat16')
+        b = te.placeholder((100,), dtype='bfloat16')
+        c = te.compute((100,), lambda i:
+                       topi.cast(op(topi.cast(a[i], 'float'),
+                                    topi.cast(b[i], 'float')), 'bfloat16'))
+        s = te.create_schedule(c.op)
+        func = tvm.driver.build_module.form_irmodule(s, [a, b, c], "main", None)["main"]
+        return func.body
+
+    def test_promoted(op):
+        stmt = runpass(op, tvm.tir.transform.BF16Promote)
+        tvm.ir.assert_structural_equal(stmt, get_promoted(op))
+    test_promoted(topi.add)
+    test_promoted(topi.subtract)
+    test_promoted(topi.multiply)
+    test_promoted(topi.divide)
+
+def test_eliminate():
+    def to32(v):
+        return topi.cast(v, 'float')
+    def to16(v):
+        return topi.cast(v, 'bfloat16')
+    def get_eliminated():
+        a = te.placeholder((100,), dtype='bfloat16')
+        b = te.placeholder((100,), dtype='bfloat16')
+        c = te.compute((100,), lambda i: to16(
+            topi.add(
+                to32(to16(topi.add(to32(a[i]), to32(b[i])))),
+                to32(to16(topi.add(to32(a[i]), to32(b[i])))))))
+        s = te.create_schedule(c.op)
+        stmt = lower_stmt(s, [a, b, c], tvm.tir.transform.BF16CastElimination)
+        return stmt
+
+    def get_target():
+        a = te.placeholder((100,), dtype='bfloat16')
+        b = te.placeholder((100,), dtype='bfloat16')
+        c = te.compute((100,), lambda i: to16(
+            topi.add(
+                topi.add(to32(a[i]), to32(b[i])),
+                topi.add(to32(a[i]), to32(b[i])))))
+        s = te.create_schedule(c.op)
+        func = tvm.driver.build_module.form_irmodule(s, [a, b, c], "main", None)["main"]
+        return func.body
+    tvm.ir.assert_structural_equal(get_eliminated(), get_target())
+
+def test_legalize():
+    def to32(v):
+        uint32_v = topi.cast(v, "uint32")
+        uint32_v = tvm.tir.call_pure_intrin("uint32", "shift_left", uint32_v,
+                                            tvm.tir.const(16, "uint32"))
+        return tvm.tir.call_pure_intrin("float32", "reinterpret", uint32_v)
+    def to16(v):
+        uint32_v = tvm.tir.call_pure_intrin("uint32", "reinterpret", v)
+        rounding_bias = tvm.tir.call_pure_intrin("uint32", "shift_right",
+                                                 uint32_v, tvm.tir.const(16, "uint32"))
+        rounding_bias = tvm.tir.call_pure_intrin("uint32", "bitwise_and",
+                                                 rounding_bias, tvm.tir.const(1, "uint32"))
+        rounding_bias = rounding_bias + tvm.tir.const(0x7FFF, "uint16")
+        uint32_v = uint32_v + rounding_bias
+        uint32_v = tvm.tir.call_pure_intrin("uint32", "shift_right", uint32_v,
+                                            tvm.tir.const(16, "uint32"))
+        return topi.cast(uint32_v, 'uint16')
+
+def 

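The `to16` helper in the quoted test implements float32-to-bfloat16 conversion with round-to-nearest-even via a rounding bias. The same arithmetic can be written against raw bit patterns with only the standard library; this is a sketch for intuition, not TVM code (helper names are invented):

```python
import struct

def f32_bits(x):
    """Bit pattern of a float32 as an unsigned 32-bit int."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def f32_to_bf16_bits(x):
    """Round a float32 to the nearest bfloat16 (round-to-nearest-even)."""
    u = f32_bits(x)
    # Same bias as the to16 helper: 0x7FFF plus the lowest surviving bit.
    rounding_bias = ((u >> 16) & 1) + 0x7FFF
    return ((u + rounding_bias) >> 16) & 0xFFFF

def bf16_bits_to_f32(b):
    """A bfloat16 is the top 16 bits of a float32; shift back up to decode."""
    return struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))[0]
```

Values like 1.0 round-trip exactly, while values such as pi lose the low 16 mantissa bits — exactly the behaviour the legalization pass has to emulate on hardware without native bfloat16.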
[GitHub] [incubator-tvm] junrushao1994 commented on pull request #5740: [Object][Runtime] Introduce runtime::Map

2020-06-12 Thread GitBox


junrushao1994 commented on pull request #5740:
URL: https://github.com/apache/incubator-tvm/pull/5740#issuecomment-643561278


   A4 is done with the last commit







[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5740: [Object][Runtime] Introduce runtime::Map

2020-06-12 Thread GitBox


junrushao1994 commented on a change in pull request #5740:
URL: https://github.com/apache/incubator-tvm/pull/5740#discussion_r439698484



##
File path: include/tvm/runtime/container.h
##
@@ -1554,6 +1593,954 @@ struct PackedFuncValueConverter> {
   }
 };
 
+/*! \brief map node content */
+class MapNode : public Object {
+  /*! \brief The number of elements in a memory block */
+  static constexpr int kBlockCap = 16;
+  /*! \brief Maximum load factor of the hash map */
+  static constexpr double kMaxLoadFactor = 0.99;
+  /*! \brief Binary representation of the metadata of an empty slot */
+  static constexpr uint8_t kEmptySlot = uint8_t(0b11111111);
+  /*! \brief Binary representation of the metadata of a protected slot */
+  static constexpr uint8_t kProtectedSlot = uint8_t(0b11111110);
+  /*! \brief Number of probing choices available */
+  static constexpr int kNumJumpDists = 126;
+  /* clang-format off */
+  /*! \brief Candidates of probing distance */
+  TVM_DLL static constexpr uint64_t kJumpDists[kNumJumpDists] {
+0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
+// Quadratic probing with triangle numbers. See also:
+// 1) https://en.wikipedia.org/wiki/Quadratic_probing
+// 2) https://fgiesen.wordpress.com/2015/02/22/triangular-numbers-mod-2n/
+// 3) https://github.com/skarupke/flat_hash_map
+21, 28, 36, 45, 55, 66, 78, 91, 105, 120,
+136, 153, 171, 190, 210, 231, 253, 276, 300, 325,
+351, 378, 406, 435, 465, 496, 528, 561, 595, 630,
+666, 703, 741, 780, 820, 861, 903, 946, 990, 1035,
+1081, 1128, 1176, 1225, 1275, 1326, 1378, 1431, 1485, 1540,
+1596, 1653, 1711, 1770, 1830, 1891, 1953, 2016, 2080, 2145,
+2211, 2278, 2346, 2415, 2485, 2556, 2628,
+// larger triangle numbers
+8515, 19110, 42778, 96141, 216153,
+486591, 1092981, 2458653, 5532801, 12442566,
+27993903, 62983476, 141717030, 318844378, 717352503,
+1614057336, 3631522476, 8170957530, 18384510628, 41364789378,
+93070452520, 209408356380, 471168559170, 1060128894105, 2385289465695,
+5366898840628, 12075518705635, 27169915244790, 61132312065111, 
137547689707000,
+309482283181501, 696335127828753, 1566753995631385, 3525196511162271, 
7931691992677701,
+17846306936293605, 40154190677507445, 90346928918121501, 
203280589587557251, 457381325854679626,
+1029107982097042876, 2315492959180353330, 5209859154120846435,
+  };
+  /* clang-format on */
+
+ public:
+  /*! \brief Type of the keys in the hash map */
+  using key_type = ObjectRef;
+  /*! \brief Type of the values in the hash map */
+  using mapped_type = ObjectRef;
+
+  static constexpr const uint32_t _type_index = TypeIndex::kRuntimeMap;
+  static constexpr const char* _type_key = "Map";
+  TVM_DECLARE_FINAL_OBJECT_INFO(MapNode, Object);
+
+ private:
+  struct KVType;
+  struct Block;
+  struct ListNode;
+
+ public:
+  class iterator;
+
+  /*!
+   * \brief Destroy the MapNode
+   */
+  ~MapNode() { this->Reset(); }
+
+  /*!
+   * \brief Number of elements in the MapNode
+   * \return The result
+   */
+  size_t size() const { return size_; }
+
+  /*!
+   * \brief Index value associated with a key, create a new entry if the key does not exist
+   * \param key The indexing key
+   * \return The mutable reference to the value
+   */
+  mapped_type& operator[](const key_type& key) { return Emplace(key, mapped_type()).Val(); }
+
+  /*!
+   * \brief Count the number of times a key exists in the MapNode
+   * \param key The indexing key
+   * \return The result, 0 or 1
+   */
+  size_t count(const key_type& key) const { return !Search(key).IsNone(); }
+
+  /*!
+   * \brief Index value associated with a key, throw exception if the key does not exist
+   * \param key The indexing key
+   * \return The const reference to the value
+   */
+  const mapped_type& at(const key_type& key) const { return At(key); }
+
+  /*!
+   * \brief Index value associated with a key, throw exception if the key does not exist
+   * \param key The indexing key
+   * \return The mutable reference to the value
+   */
+  mapped_type& at(const key_type& key) { return At(key); }
+
+  /*! \return begin iterator */
+  iterator begin() const { return size_ == 0 ? iterator() : iterator(0, this); }
+
+  /*! \return end iterator */
+  iterator end() const { return size_ == 0 ? iterator() : iterator(slots_ + 1, this); }
+
+  /*!
+   * \brief Index value associated with a key
+   * \param key The indexing key
+   * \return The iterator of the entry associated with the key, end iterator if not exists
+   */
+  iterator find(const key_type& key) const {
+ListNode n = Search(key);
+return n.IsNone() ? end() : iterator(n.i, this);
+  }
+
+  /*!
+   * \brief Insert and construct in-place with the given args, do nothing if key already exists
+   * \tparam Args Type of the args forwarded to the constructor
+   */
+  template <typename... Args>
+  void emplace(Args&&... args) {
+    Emplace(std::forward<Args>(args)...);
+  }
+
+  /*!
+   * 

[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5740: [Object][Runtime] Introduce runtime::Map

2020-06-12 Thread GitBox


junrushao1994 commented on a change in pull request #5740:
URL: https://github.com/apache/incubator-tvm/pull/5740#discussion_r439698113



##
File path: include/tvm/runtime/container.h
##
@@ -1554,6 +1593,954 @@ struct PackedFuncValueConverter> {
   }
 };
 
+/*! \brief map node content */
+class MapNode : public Object {
+  /*! \brief The number of elements in a memory block */
+  static constexpr int kBlockCap = 16;
+  /*! \brief Maximum load factor of the hash map */
+  static constexpr double kMaxLoadFactor = 0.99;
+  /*! \brief Binary representation of the metadata of an empty slot */
+  static constexpr uint8_t kEmptySlot = uint8_t(0b11111111);
+  /*! \brief Binary representation of the metadata of a protected slot */
+  static constexpr uint8_t kProtectedSlot = uint8_t(0b11111110);
+  /*! \brief Number of probing choices available */
+  static constexpr int kNumJumpDists = 126;
+  /* clang-format off */
+  /*! \brief Candidates of probing distance */
+  TVM_DLL static constexpr uint64_t kJumpDists[kNumJumpDists] {
+0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
+// Quadratic probing with triangle numbers. See also:
+// 1) https://en.wikipedia.org/wiki/Quadratic_probing

Review comment:
   I will have a subsection for this algorithm in the doc
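   The jump-distance table above encodes quadratic probing with triangular numbers. The property that makes it work (noted in the linked references) is that offsets T_i = i(i+1)/2, taken modulo a power-of-two table size, visit every slot exactly once. A quick check of that property in Python (helper name is invented for illustration):

```python
def triangular_probe_positions(start, n_slots):
    """Positions visited by triangular-number probing in a power-of-two table."""
    assert n_slots & (n_slots - 1) == 0, "table size must be a power of two"
    positions = []
    pos = start % n_slots
    for i in range(1, n_slots + 1):
        positions.append(pos)
        # Successive increments 1, 2, 3, ... produce offsets T_1, T_2, ...
        pos = (pos + i) % n_slots
    return positions
```

   Because every slot is reachable, a probe chain can always find a free slot before the table is truly full, which is what lets the map run at the high load factor (`kMaxLoadFactor = 0.99`) declared above.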









[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5740: [Object][Runtime] Introduce runtime::Map

2020-06-12 Thread GitBox


junrushao1994 commented on a change in pull request #5740:
URL: https://github.com/apache/incubator-tvm/pull/5740#discussion_r439698131



##
File path: include/tvm/runtime/container.h
##
@@ -1554,6 +1593,954 @@ struct PackedFuncValueConverter> {
   }
 };
 
+/*! \brief map node content */
+class MapNode : public Object {
+  /*! \brief The number of elements in a memory block */
+  static constexpr int kBlockCap = 16;
+  /*! \brief Maximum load factor of the hash map */
+  static constexpr double kMaxLoadFactor = 0.99;
+  /*! \brief Binary representation of the metadata of an empty slot */
+  static constexpr uint8_t kEmptySlot = uint8_t(0b11111111);
+  /*! \brief Binary representation of the metadata of a protected slot */
+  static constexpr uint8_t kProtectedSlot = uint8_t(0b11111110);
+  /*! \brief Number of probing choices available */
+  static constexpr int kNumJumpDists = 126;
+  /* clang-format off */
+  /*! \brief Candidates of probing distance */
+  TVM_DLL static constexpr uint64_t kJumpDists[kNumJumpDists] {

Review comment:
   good idea :-)









[GitHub] [incubator-tvm] lixiaoquan commented on pull request #5794: [Frontend][Tensorflow]Improve TF Parser to keep output nodes for saved_model

2020-06-12 Thread GitBox


lixiaoquan commented on pull request #5794:
URL: https://github.com/apache/incubator-tvm/pull/5794#issuecomment-643549934


   LGTM







[GitHub] [incubator-tvm] randxie commented on a change in pull request #5787: support aten::type_as in the pytorch frontend

2020-06-12 Thread GitBox


randxie commented on a change in pull request #5787:
URL: https://github.com/apache/incubator-tvm/pull/5787#discussion_r439694860



##
File path: tests/python/frontend/pytorch/test_forward.py
##
@@ -836,6 +836,20 @@ def forward(self, *args):
 input_data = torch.rand(input_shape).float()
 verify_model(Size1().float().eval(), input_data=input_data)
 
+
+def test_type_as():
+torch.set_grad_enabled(False)
+input_shape = [1, 3]
+
+class TypeAsInt32(Module):
+def forward(self, *args):
+int32_tensor = torch.zeros(1, 3, dtype=torch.int32)
+return args[0].type_as(int32_tensor)

Review comment:
   Done. I did not realize that Torch types and TVM types have different naming conventions. Updated the test to cover more types.









[GitHub] [incubator-tvm] tqchen commented on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


tqchen commented on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643531184


   It seems the most contentious part is the name handling? It would be great if we could also list all the alternatives (in labeled form), discuss their trade-offs, and then talk through the reasoning :) That will make the decision clear.
   
   @t-vi perhaps we can first go forward with dtype handling and discuss the 
name and shape handling in the forum?







[GitHub] [incubator-tvm] tqchen merged pull request #5789: [TIR][REFACTOR] Cleanup unused classes

2020-06-12 Thread GitBox


tqchen merged pull request #5789:
URL: https://github.com/apache/incubator-tvm/pull/5789


   







[incubator-tvm] branch master updated: [TIR][REFACTOR] Cleanup unused classes (#5789)

2020-06-12 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 1c256f4  [TIR][REFACTOR] Cleanup unused classes (#5789)
1c256f4 is described below

commit 1c256f48a415e7c775cbf2a892a3d8ca29e3d25d
Author: Tianqi Chen 
AuthorDate: Fri Jun 12 17:23:05 2020 -0700

[TIR][REFACTOR] Cleanup unused classes (#5789)
---
 include/tvm/arith/bound.h |  8 ++--
 include/tvm/te/operation.h|  8 +---
 include/tvm/te/tensor.h   |  5 +++--
 include/tvm/tir/expr.h| 34 --
 include/tvm/tir/var.h |  2 --
 src/arith/domain_touched.cc   |  6 +++---
 src/contrib/hybrid/codegen_hybrid.cc  |  2 +-
 src/te/schedule/graph.cc  | 12 ++--
 src/tir/transforms/inject_prefetch.cc |  2 +-
 9 files changed, 21 insertions(+), 58 deletions(-)

diff --git a/include/tvm/arith/bound.h b/include/tvm/arith/bound.h
index df1a9e7..12b91cc 100644
--- a/include/tvm/arith/bound.h
+++ b/include/tvm/arith/bound.h
@@ -32,13 +32,9 @@
 #include 
 
 namespace tvm {
-// forward delcare Tensor
-namespace te {
-class Tensor;
-}
 namespace arith {
 
-using tir::Domain;
+using tir::Region;
 using tir::Stmt;
 using tir::Var;
 using tir::VarNode;
@@ -82,7 +78,7 @@ IntSet DeduceBound(PrimExpr v, PrimExpr cond,
  * \param consider_stores If stores are considered.
 * \return The domain that covers all the calls or provides within the given statement.
  */
-Domain DomainTouched(const Stmt& body, const tir::Buffer& buffer, bool consider_loads,
+Region DomainTouched(const Stmt& body, const tir::Buffer& buffer, bool consider_loads,
  bool consider_stores);
 
 }  // namespace arith
diff --git a/include/tvm/te/operation.h b/include/tvm/te/operation.h
index 4b7037a..dbd07fa 100644
--- a/include/tvm/te/operation.h
+++ b/include/tvm/te/operation.h
@@ -53,7 +53,7 @@ struct TensorDom {
 /*!
  * \brief Base class of all operation nodes
  */
-class OperationNode : public tir::FunctionBaseNode {
+class OperationNode : public Object {
  public:
   /*! \brief optional name of the operation */
   std::string name;
@@ -61,8 +61,10 @@ class OperationNode : public tir::FunctionBaseNode {
   std::string tag;
   /*! \brief additional attributes of the operation*/
   Map attrs;
-  /*! \return name of the operation */
-  const std::string& func_name() const final { return name; }
+  // virtual destructor.
+  virtual ~OperationNode() {}
+  /*! \return number of outputs */
+  virtual int num_outputs() const = 0;
   /*!
* \return The list of iteration variable at root
* \note root_iter_vars decides the shape of the outputs.
diff --git a/include/tvm/te/tensor.h b/include/tvm/te/tensor.h
index 0c4af4b..2f9fa2f 100644
--- a/include/tvm/te/tensor.h
+++ b/include/tvm/te/tensor.h
@@ -42,13 +42,14 @@ using namespace tvm::tir;
 
 // internal node container for Operation
 class OperationNode;
+class Tensor;
 
 /*! \brief Operation that produces tensors */
-class Operation : public tir::FunctionRef {
+class Operation : public ObjectRef {
  public:
   /*! \brief default constructor  */
   Operation() {}
-  explicit Operation(ObjectPtr<Object> n) : FunctionRef(n) {}
+  explicit Operation(ObjectPtr<Object> n) : ObjectRef(n) {}
   /*!
* \brief access the internal node container
* \return the pointer to the internal node container
diff --git a/include/tvm/tir/expr.h b/include/tvm/tir/expr.h
index 423f09e..4b6b28d 100644
--- a/include/tvm/tir/expr.h
+++ b/include/tvm/tir/expr.h
@@ -870,40 +870,6 @@ class Let : public PrimExpr {
   TVM_DEFINE_OBJECT_REF_METHODS(Let, PrimExpr, LetNode);
 };
 
-// Call node, represent a function call or a multi-dimensional array load.
-//
-// TODO(tvm-team):
-// Refactor call with more explicit property registrations.
-// rather than calling a string symbol.
-// We should move most information into function itself and remove name.
-
-/*! \brief Base node of internal functions. */
-class FunctionBaseNode : public Object {
- public:
-  /*! \brief virtual destructor */
-  virtual ~FunctionBaseNode() {}
-  /*! \return the name of the function */
-  virtual const std::string& func_name() const = 0;
-  /*! \return the number of outputs of this function */
-  virtual int num_outputs() const = 0;
-
-  // fall back to pointer equality now before refactor.
-  bool SEqualReduce(const FunctionBaseNode* other, SEqualReducer equal) const {
-    return this == other;
-  }
-
-  void SHashReduce(SHashReducer hash_reduce) const {}
-
-  static constexpr const bool _type_has_method_sequal_reduce = true;
-  static constexpr const bool _type_has_method_shash_reduce = true;
-};
-
-/*! \brief reference to a function */
-class FunctionRef : public ObjectRef {
- public:
-  TVM_DEFINE_OBJECT_REF_METHODS(FunctionRef, ObjectRef, FunctionBaseNode);
-};
-
 /*!
  

[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


zhiics commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439670048



##
File path: src/runtime/module_init_wrapper.cc
##
@@ -0,0 +1,234 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/module_init_wrapper.cc
+ * \brief A wrapper for initializing modules using metadata
+ */
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "file_util.h"
+
+namespace tvm {
+namespace runtime {
+
+using StringNDArrayMap = std::unordered_map;
+
+class CSourceMetadataInitializer {
+ public:
+  explicit CSourceMetadataInitializer(const StringNDArrayMap& metadata) : 
metadata_(metadata) {}
+
+  template 

Review comment:
   This should be possible. Let me spend some time refactoring the C source 
code generator over the weekend.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi edited a comment on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


masahi edited a comment on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643509762


   I don't have any objection regarding dtype handling in this PR. At the 
moment we assume fp32 everywhere by default, but to support doubles we need to 
somehow pass dtype information from the user. I think we can pass an optional 
list of dtypes, corresponding to the entries in `input_shapes` (a required 
argument). If not passed, we can fill in the dtype list with fp32.
   
   I think allowing `input_shape` to be optional is an orthogonal change to the 
dtype issues that we can discuss elsewhere. I've already explained my 
reasoning, but to reiterate: making it optional is technically possible since 
Torch maintains names and traced Torch modules know the input shape. 
   
   But it doesn't work for scripted modules. If names are omitted, users need to 
be aware of (or we need to explain) how to correctly figure out the names Torch 
maintains. Since this is not trivial and Torch may change the way it handles 
naming on a whim in the future, we shouldn't rely on naming chosen by Torch.
   
   On top of the above, I think it is better to keep the API as close as possible 
to other frontends. They all require input names and shapes to be passed 
explicitly. 
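The optional dtype list described above could work roughly like this (a minimal sketch; `fill_input_dtypes` is a hypothetical helper, not the actual frontend API):

```python
def fill_input_dtypes(input_shapes, input_dtypes=None):
    """Pair each (name, shape) entry with a dtype, defaulting to float32.

    input_shapes : list of (name, shape) tuples, as the frontend requires
    input_dtypes : optional list of dtype strings aligned with input_shapes
    """
    if input_dtypes is None:
        # no dtypes supplied: assume fp32 everywhere, as the frontend does today
        input_dtypes = ["float32"] * len(input_shapes)
    if len(input_dtypes) != len(input_shapes):
        raise ValueError("input_dtypes must match input_shapes in length")
    return [(name, shape, dtype)
            for (name, shape), dtype in zip(input_shapes, input_dtypes)]
```

A caller could then pass `["float32", "int64"]` alongside the usual shape list to get non-fp32 inputs without breaking the existing call signature.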







[GitHub] [incubator-tvm] masahi commented on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


masahi commented on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643509762


   I don't have any objection regarding dtype handling in this PR. At the 
moment we assume fp32 everywhere by default, but to support doubles we need to 
somehow pass dtype information from the user. I think we can pass an optional 
list of dtypes, corresponding to the entries in `input_shapes` (a required 
argument). If not passed, we can fill in the dtype list with fp32.
   
   I think allowing `input_shape` to be optional is an orthogonal change to the 
dtype issues that we can discuss elsewhere. I've already explained my 
reasoning, but to reiterate: making it optional is technically possible since 
Torch maintains names and traced Torch modules know the input shape. 
   
   But it doesn't work for scripted modules. If names are omitted, users need to 
be aware of (or we need to explain) how to correctly figure out the names Torch 
maintains. Since this is not trivial and Torch may change the way it handles 
naming on a whim, we shouldn't rely on naming chosen by Torch.
   
   On top of the above, I think it is better to keep the API as close as possible 
to other frontends. They all require input names and shapes to be passed 
explicitly. 







[GitHub] [incubator-tvm] tqchen commented on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


tqchen commented on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643504399


   cc @masahi @t-vi it would be great if we can summarize and dissect the 
points a bit more :) 







[GitHub] [incubator-tvm] tqchen commented on pull request #5793: [TIR][REFACTOR] Update TIR nodes std::string->String.

2020-06-12 Thread GitBox


tqchen commented on pull request #5793:
URL: https://github.com/apache/incubator-tvm/pull/5793#issuecomment-643504108


   cc @junrushao1994 @yzhliu @zhiics @wweic 
   
   







[GitHub] [incubator-tvm] kevinthesun opened a new pull request #5794: [Frontend][Tensorflow]Improve TF Parser to keep output nodes for saved_model

2020-06-12 Thread GitBox


kevinthesun opened a new pull request #5794:
URL: https://github.com/apache/incubator-tvm/pull/5794


   This PR fixes two things for some TF models in the saved-model format:
   1. Make fill not depend on the node attr ```_output_shapes``` since we can 
always get that by trying ```_infer_value```. Some saved models will generate an 
unknown-rank shape for this attr which is actually not true.
   2. Add output nodes as ```protected_nodes``` when parsing saved models.
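The first fix amounts to a try-then-fallback pattern: attempt static value inference first and only trust the recorded attribute when inference fails. A schematic sketch (the helper below is illustrative; it is not the TF frontend's actual code):

```python
def get_output_shape(attrs, infer_value):
    """Prefer statically inferring the shape; fall back to the recorded attr.

    attrs       : dict of node attributes (may contain "_output_shapes")
    infer_value : callable returning the concrete shape, raising if the
                  value cannot be inferred statically
    """
    try:
        return tuple(infer_value())
    except Exception:
        # fall back to the (possibly unknown-rank, possibly wrong) attribute
        shape = attrs.get("_output_shapes")
        if shape is None:
            raise ValueError("shape could not be determined")
        return tuple(shape)
```

The point of ordering it this way is that `_output_shapes` written into a saved model can claim an unknown rank even when the real shape is fully determined, so inference is the more trustworthy source.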
   
   @yongwww @lixiaoquan @zhiics 
   







[GitHub] [incubator-tvm] masahi merged pull request #5755: Edit onnx parser to infer values in post order

2020-06-12 Thread GitBox


masahi merged pull request #5755:
URL: https://github.com/apache/incubator-tvm/pull/5755


   







[GitHub] [incubator-tvm] masahi commented on pull request #5755: Edit onnx parser to infer values in post order

2020-06-12 Thread GitBox


masahi commented on pull request #5755:
URL: https://github.com/apache/incubator-tvm/pull/5755#issuecomment-643503058


   Thanks @mbrookhart @jwfromm @siju-samuel 







[incubator-tvm] branch master updated: Edit onnx parser to infer values in post order (#5755)

2020-06-12 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 995b9ff  Edit onnx parser to infer values in post order (#5755)
995b9ff is described below

commit 995b9ff8a452bb46b64080b9b1fc0f10f0a778cf
Author: Matthew Brookhart 
AuthorDate: Fri Jun 12 15:11:34 2020 -0700

Edit onnx parser to infer values in post order (#5755)

* edit onnx parser to infer values in post order to speed up onnx imports 
with many calls to infer_value

* fix pylint
---
 python/tvm/relay/frontend/onnx.py | 119 +-
 1 file changed, 116 insertions(+), 3 deletions(-)

diff --git a/python/tvm/relay/frontend/onnx.py 
b/python/tvm/relay/frontend/onnx.py
index 17cb148..dabe55f 100644
--- a/python/tvm/relay/frontend/onnx.py
+++ b/python/tvm/relay/frontend/onnx.py
@@ -27,12 +27,29 @@ from .. import expr as _expr
 from .. import function as _function
 from .. import op as _op
 from .. import vision as _vision
+
+from ..function import Function
+from ..expr import Call, Let
+from ..expr import If, Tuple, TupleGetItem
+from ..expr import RefCreate, RefRead, RefWrite
+from ..expr_functor import ExprFunctor
+from ..adt import Match, Clause
+
 from .common import AttrCvt, Renamer
 from .common import get_relay_op, new_var, infer_shape, infer_channels
-from .common import infer_type, infer_value, infer_value_simulated, get_name
+from .common import infer_type, get_name
+from .common import infer_value as _infer_value
+from .common import infer_value_simulated as _infer_value_simulated
 
 __all__ = ['from_onnx']
 
+g = None
+
+def infer_value(input_val, params, mod=None):
+return g.infer_value(input_val, params, mod)
+
+def infer_value_simulated(input_val, params):
+return g.infer_value_simulated(input_val, params)
 
 class onnx_input():
 """ Dual purpose list or dictionary access object."""
@@ -1891,8 +1908,7 @@ def _get_convert_map(opset):
 'NonZero': NonZero.get_converter(opset),
 }
 
-
-class GraphProto(object):
+class GraphProto(ExprFunctor):
 """A helper class for handling Relay expression copying from 
pb2.GraphProto.
 Definition: https://github.com/onnx/onnx/blob/master/onnx/onnx.proto
 
@@ -1914,6 +1930,101 @@ class GraphProto(object):
 self._shape = shape if shape else {}
 self._dtype = dtype
 
+#For infering Values
+self._tmp_params = {}
+self._infer_simulated = True
+self._mod = None
+super(GraphProto, self).__init__()
+
+def infer_value(self, input_val, params, mod=None):
+self._tmp_params = params
+self._infer_simulated = False
+self._mod = mod
+return self.visit(input_val).data
+#return _infer_value(input_val, params, mod)
+
+def infer_value_simulated(self, input_val, params):
+self._tmp_params = params
+self._infer_simulated = True
+return self.visit(input_val).data
+#return _infer_value_simulated(input_val, params)
+
+def infer(self, expr):
+if self._infer_simulated:
+out = _infer_value_simulated(expr, self._tmp_params)
+else:
+out = _infer_value(expr, self._tmp_params)
+return _expr.const(out.asnumpy())
+
+def visit_function(self, fn):
+new_params = [self.visit(x) for x in fn.params]
+new_body = self.visit(fn.body)
+return self.infer(Function(
+list(new_params),
+new_body,
+fn.ret_type,
+fn.type_params,
+fn.attrs))
+
+def visit_let(self, let):
+newvar = self.visit(let.var)
+newval = self.visit(let.value)
+newbody = self.visit(let.body)
+return self.infer(Let(newvar, newval, newbody))
+
+def visit_call(self, call):
+new_fn = self.visit(call.op)
+new_args = [self.visit(arg) for arg in call.args]
+return self.infer(Call(new_fn, new_args, call.attrs))
+
+def visit_var(self, var):
+return self.infer(var)
+
+def visit_global_id(self, global_var):
+return self.infer(global_var)
+
+def visit_if(self, ite):
+return self.infer(If(
+self.visit(ite.cond),
+self.visit(ite.true_branch),
+self.visit(ite.false_branch)))
+
+def visit_tuple(self, tup):
+return Tuple([self.visit(field) for field in tup.fields])
+
+def visit_tuple_getitem(self, op):
+tuple_value = self.visit(op.tuple_value)
+if not tuple_value.same_as(op.tuple_value):
+return self.infer(TupleGetItem(tuple_value, op.index))
+return self.infer(op)
+
+def visit_global_var(self, gvar):
+return self.infer(gvar)
+
+def visit_op(self, op):
+return op
+
+def visit_constant(self, const):
+return const
+
+def 
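The commit's approach — evaluating sub-expressions bottom-up and memoizing each sub-result, instead of re-running whole-graph inference for every `infer_value` call — can be sketched generically (illustrative only; this is not the TVM code, and the function names are hypothetical):

```python
def eval_post_order(node, children_of, eval_node, cache=None):
    """Evaluate a DAG bottom-up, memoizing every sub-result.

    children_of : node -> list of child nodes
    eval_node   : (node, child_values) -> value for that node
    """
    if cache is None:
        cache = {}
    if node in cache:
        # a node shared by several consumers is evaluated only once
        return cache[node]
    child_vals = [eval_post_order(c, children_of, eval_node, cache)
                  for c in children_of(node)]
    cache[node] = eval_node(node, child_vals)
    return cache[node]
```

With many overlapping `infer_value` requests, the shared cache turns repeated whole-graph evaluations into a single post-order pass, which is the speedup the commit message refers to.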

[GitHub] [incubator-tvm] tqchen commented on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


tqchen commented on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643499965


   It would be great to have a constructive discussion about the technical 
choices, agree on the pros and cons, before we reach a conclusion. Everyone is 
contributing to a common project (and we value everyone's opinion) and I think 
it would be great if we can have a clear discussion.  We also need to 
acknowledge that engineering decisions have tradeoffs and there is no true 
answer to the problem. 
   
   One common approach I find useful is to dissect the discussion, label each 
discussion point, and try to agree on sub-points and rationales.
   
   In the conversation so far, I see a few choices:
   
   -  Ways to handle names:
- T0: Allow optional input names from torchscript.
- T1: Only allow the user to pass in a name override
- T2: T0 and optionally allow T1
   - Ways to handle data type
- D0: Assume most things are fp32
- D1: Being able to convert the right data type from torchscript.
   
   We can then discuss their pros and cons. For example
   
   T0 is certainly more convenient, but it also depends on the stability of 
torchscript's ability to keep names. T1 is more explicit when a user intends to 
name the input.
   
   D0 solves most of the common problems, but as the machine learning models 
move to mixed precision, we will inevitably want to support more data types, 
that likely makes D1 more appealing.
   
   Because the pros and cons are mainly technical, I hope that most of us can 
agree on the technical points. The main thing that we might not agree on would 
be something like the prioritization of technical tradeoffs.
   
   For example, I might favor a clear naming scheme over an implicit one and thus 
prefer T2. A different person might think simplicity is key and fp32 is fine, so 
D0 is OK. This should be the only part we disagree on. 
   
   When we find more common ground, it is much easier to reach agreements. In 
cases like this, one thing we can do is have a constructive discussion, and 
perhaps bring in more people to see what everyone thinks. In many cases we find 
that we do not disagree that much after all. It could be a good discuss forum 
thread.
   
   Regardless of the outcome, I want to say that we value good technical 
debates, and usually they lead to better code overall. Many parts of this PR 
are certainly valuable, like the better data type handling. So let us have a 
good conversation and bring better PyTorch support for everyone.
   
   
   
   
   
   
   







[GitHub] [incubator-tvm] icemelon9 merged pull request #5790: [COMMUNITY] @wpan11nv -> Reviewer

2020-06-12 Thread GitBox


icemelon9 merged pull request #5790:
URL: https://github.com/apache/incubator-tvm/pull/5790


   







[incubator-tvm] branch master updated: [COMMUNITY] @wpan11nv -> Reviewer (#5790)

2020-06-12 Thread haichen
This is an automated email from the ASF dual-hosted git repository.

haichen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 456ecc6  [COMMUNITY] @wpan11nv -> Reviewer (#5790)
456ecc6 is described below

commit 456ecc65b9ca97a2d97fb7a34b97256a6242ecca
Author: Tianqi Chen 
AuthorDate: Fri Jun 12 14:19:40 2020 -0700

[COMMUNITY] @wpan11nv -> Reviewer (#5790)
---
 CONTRIBUTORS.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index 3bfe359..8945adb 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -99,8 +99,9 @@ We do encourage everyone to work anything they are interested 
in.
 - [Thierry Moreau](https://github.com/tmoreau89): @tmoreau89
 - [Kazutaka Morita](https://github.com/kazum): @kazum
 - [Tatsuya Nishiyama](https://github.com/nishi-t): @nishi-t
-- [Pariksheet Pinjari](https://github.com/PariksheetPinjari909): 
@PariksheetPinjari909
+- [Wei Pan](https://github.com/wpan11nv): @wpan11nv
 - [Krzysztof Parzyszek](https://github.com/kparzysz-quic): @kparzysz-quic
+- [Pariksheet Pinjari](https://github.com/PariksheetPinjari909): 
@PariksheetPinjari909
 - [Josh Pollock](https://github.com/joshpoll): @joshpoll
 - [Jared Roesch](https://github.com/jroesch): @jroesch
 - [Siva](https://github.com/srkreddy1238): @srkreddy1238



[GitHub] [incubator-tvm] tqchen opened a new pull request #5793: [TIR][REFACTIR] Update TIR nodes std::string->String.

2020-06-12 Thread GitBox


tqchen opened a new pull request #5793:
URL: https://github.com/apache/incubator-tvm/pull/5793


   This PR updates the remaining TIR nodes' members to use
   String instead of std::string.
   







[GitHub] [incubator-tvm] masahi edited a comment on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


masahi edited a comment on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643469350


   > The user-supplied(!) names are the part before the (last, ha, here only) 
`.` and they're stable.
   
   Do you mean Torch allows users to set the argument name? If you also know 
when and how exactly Torch changes input names, then sure, I can see that 
passing other names for TVM would be annoying. But I'd argue that most users 
are not familiar with such details of Torchscript, so we shouldn't expect them 
to correctly deal with names chosen by Torch.
   
   Requiring input names is common across other frontends. I think making it 
optional makes the API a bit confusing and we need to explain what input names 
are expected if omitted, while benefiting only users who are intimately 
familiar with Torchscript internals. Making the API as close as possible to 
other frontends also applies to input shapes, so I don't want to make it 
optional, either. Shapes are required because Relay assumes static input shapes.
   
   So my opinion is to not make `input_shapes` optional, to keep the API 
straightforward/less confusing, and close to other frontends.







[GitHub] [incubator-tvm] tqchen commented on issue #5792: [RUNTIME][IR] String operator+

2020-06-12 Thread GitBox


tqchen commented on issue #5792:
URL: https://github.com/apache/incubator-tvm/issues/5792#issuecomment-643471539


   k, it is mainly about the usability rather than the perf aspect







[GitHub] [incubator-tvm] t-vi closed pull request #5791: Add a combine batch_matmul pass

2020-06-12 Thread GitBox


t-vi closed pull request #5791:
URL: https://github.com/apache/incubator-tvm/pull/5791


   







[GitHub] [incubator-tvm] masahi edited a comment on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


masahi edited a comment on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643469350


   > The user-supplied(!) names are the part before the (last, ha, here only) 
`.` and they're stable.
   
   Do you mean Torch allows users to set the argument name? If you also know 
when and how exactly Torch changes input names, then sure, I can see that 
passing other names for TVM would be annoying. But I'd argue that most users 
are not familiar with such details of Torchscript, so we shouldn't expect them 
to correctly deal with names chosen by Torch.
   
   Requiring input names is common across other frontends. I think making it 
optional makes the API a bit confusing and we need to explain what input names 
are expected if omitted, while benefiting only users who are intimately 
familiar with Torchscript internals. Making the API as close as possible to 
other frontends also applies to input shapes, so I don't want to make it 
optional, either. Shapes are required because Relay assumes static input shapes.
   
   So my opinion is to not make `input_shapes` optional, to keep the API 
straightforward/less confusing, and close to other frontends.







[GitHub] [incubator-tvm] t-vi closed pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


t-vi closed pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779


   







[GitHub] [incubator-tvm] t-vi commented on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


t-vi commented on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643470756


   Well, I see that this is not going anywhere..







[GitHub] [incubator-tvm] masahi commented on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


masahi commented on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643469350


   > The user-supplied(!) names are the part before the (last, ha, here only) 
`.` and they're stable.
   
   Do you mean Torch allows users to set the argument name? If you also know 
when exactly Torch changes input names, then sure, I can see that passing 
other names for TVM would be annoying. But I'd argue that most users are not 
familiar with such details of Torchscript, so we shouldn't expect them to 
correctly deal with names chosen by Torch.
   
   Requiring input names is common across other frontends. I think making it 
optional makes the API a bit confusing, while benefiting only users who are 
intimately familiar with Torchscript internals. Making the API as close as 
possible to other frontends also applies to input shapes, so I don't want to 
make it optional, either. Shapes are required because Relay assumes static 
input shapes.
   
   So my opinion is to not make `input_shapes` optional, to keep the API 
straightforward/less confusing, and close to other frontends.







[GitHub] [incubator-tvm] masahi edited a comment on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


masahi edited a comment on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643469350


   > The user-supplied(!) names are the part before the (last, ha, here only) 
`.` and they're stable.
   
   Do you mean Torch allows users to set the argument name? If you also know 
when and how exactly Torch changes input names, then sure, I can see that 
passing other names for TVM would be annoying. But I'd argue that most users 
are not familiar with such details of Torchscript, so we shouldn't expect them 
to correctly deal with names chosen by Torch.
   
   Requiring input names is common across other frontends. I think making it 
optional makes the API a bit confusing, while benefiting only users who are 
intimately familiar with Torchscript internals. Making the API as close as 
possible to other frontends also applies to input shapes, so I don't want to 
make it optional, either. Shapes are required because Relay assumes static 
input shapes.
   
   So my opinion is to not make `input_shapes` optional, to keep the API 
straightforward/less confusing, and close to other frontends.







[GitHub] [incubator-tvm] junrushao1994 commented on issue #5792: [RUNTIME][IR] String operator+

2020-06-12 Thread GitBox


junrushao1994 commented on issue #5792:
URL: https://github.com/apache/incubator-tvm/issues/5792#issuecomment-643467187


   I agree, not sure about the performance impact though







[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


zhiics commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439618825



##
File path: src/target/source/source_module.cc
##
@@ -152,8 +153,92 @@ runtime::Module DeviceSourceModuleCreate(
   return runtime::Module(n);
 }
 
+// A helper used to wrap different types of modules and pass through 
packedfunc.
+// This module will never be used for compilation and execution.
+class ModuleClassWrapperNode : public runtime::ModuleNode {
+ public:
+  ModuleClassWrapperNode() = default;
+  const char* type_key() const { return "module_class_wrapper"; }
+  PackedFunc GetFunction(const std::string& name, const ObjectPtr& 
sptr_to_self) final {
+LOG(FATAL) << "Cannot execute module wrapper";
+return PackedFunc();
+  }
+};
+
+runtime::Module ModuleClassWrapperCreate() {
+  auto n = make_object();
+  return runtime::Module(n);
+}
+
+// Pack the source code and metadata, where source code could be any
+// user-defined code, i.e. c source code, json graph representation, etc.
+class SourceMetadataModuleNode final : public runtime::ModuleNode {
+ public:
+  SourceMetadataModuleNode(const String& func_symbol, const String& code, 
const String& source_type,
+   const Array& variables, const 
Array& metadata)
+  : func_symbol_(func_symbol),
+code_(code),
+source_type_(source_type),
+variables_(variables),
+metadata_(metadata) {}
+
+  PackedFunc GetFunction(const std::string& name, const ObjectPtr& 
sptr_to_self) final {
+if (name == "get_source") {
+  return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { 
*rv = this->code_; });
+} else if (name == "get_source_type") {
+  return PackedFunc(
+  [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = 
this->source_type_; });
+} else if (name == "get_symbol") {
+  return PackedFunc(
+  [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = 
this->func_symbol_; });
+} else if (name == "get_vars") {
+  return PackedFunc(
+  [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = 
this->variables_; });
+} else if (name == "get_metadata") {
+  return PackedFunc(

Review comment:
   Okay, I left a TODO in the code for this. Let me give it a try in this 
PR.









[GitHub] [incubator-tvm] t-vi commented on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


t-vi commented on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643457334


   > If you want to use names chosen by Torch, how are you going to figure out 
the correct names to give to TVM at deploy time? The names are the one attached 
to the graph after this line 
https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/pytorch.py#L2504,
 rather than the graph you supply to the frontend. You also need to remember 
whatever names Torch chooses until deploy time, since TVM doesn't export input 
names but they are needed to correctly set inputs.
   
   Thank you for insisting on using stable names. The user-supplied(!) names 
are the part before the (last, ha, here only) `.` and they're stable. This is, 
e.g., what PyTorch itself does when you print `script_module.code` or when it 
gives you the name of the argument you are missing as an input.
   
   The function ultimately doing this in PyTorch is DebugNameBase:
   
https://github.com/pytorch/pytorch/blob/a9aa6367c2b1647f1d2772678f9971740c598c7a/torch/csrc/jit/ir/ir.cpp#L735
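The naming rule described here — TorchScript keeps the user-supplied argument name and may append a `.`-separated disambiguation counter — can be illustrated with a tiny helper (illustrative only; this function is not part of the TVM frontend):

```python
def strip_debug_counter(debug_name):
    """Recover the user-supplied name from a TorchScript debug name.

    TorchScript may rename "x" to "x.1" to keep graph value names unique;
    the part before the last "." is the stable, user-supplied name.
    """
    base, sep, suffix = debug_name.rpartition(".")
    if sep and suffix.isdigit():
        return base
    return debug_name  # no counter attached; name is already the user's
```

This is only a sketch of the convention; the authoritative behavior lives in PyTorch's `DebugNameBase` linked above.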
   
   > Passing dtypes is not something we (not only pytorch, but other frontends 
too) thought about, since we always assume float32 inputs. We can discuss how 
to integrate them. But most of the times inputs are fp32, so I don't want to 
introduce breaking API changes to allow dtype option.
   
   I have to strongly differ with the claim that most inputs are fp32, starting 
with anything NLP.
   Again, I think it is a misunderstanding that any of this involves breaking 
API changes; the suggestion is to make things more optional. I do see that 
splitting off the disambiguation counter is a good idea. But then we should 
just take what the user supplied in the model definition.
   







[GitHub] [incubator-tvm] maheshambule commented on pull request #5052: [TARGET] ONNX codegen

2020-06-12 Thread GitBox


maheshambule commented on pull request #5052:
URL: https://github.com/apache/incubator-tvm/pull/5052#issuecomment-643457342


   @tqchen, @zhiics Thanks. Comments are resolved.







[GitHub] [incubator-tvm] masahi edited a comment on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


masahi edited a comment on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643443113


   > I'm not sure why you would have to insist on passing them if the user is 
fine with the TorchScript provided ones
   
   If you want to use names chosen by Torch, how are you going to figure out 
the correct names to give to TVM at deploy time? The names are the one attached 
to the graph after this line 
https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/pytorch.py#L2504,
 rather than the graph you supply to the frontend. You also need to remember 
whatever names Torch chooses until deploy time, since TVM doesn't export input 
names but they are needed to correctly set inputs.
   
I think most of the time the names don't change. Previously we were using 
names chosen by Torch, but due to the corner-case reasons discussed in the 
thread above, we decided it is better to let the user choose whatever names 
they like (ones that don't require remembering).
   
   
   > Passing the shapes should be needed very little, and I am surprised that 
you would need the user to do that.
   
   Passing the pairs of (name, shape) is common across other frontends. We 
initially started with the same API as others, and we haven't found a good 
reason to deviate from it (we did change dict to list, since the argument order 
matters in Torch). Yes Torch knows the shape when traced, but for scripted 
cases it doesn't. The Relay itself is not ready for dynamic shape input. For 
these reasons, we require input shapes to be passed explicitly. Since TVM users 
are supposed to know the input shape, I don't think it is a problem.
   
   Passing dtypes is not something we (not only pytorch, but other frontends 
too) thought about, since we always assume float32 inputs. We can discuss how 
to integrate them. But most of the time inputs are fp32, so I don't want to 
introduce breaking API changes to allow a dtype option. 







[GitHub] [incubator-tvm] masahi edited a comment on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


masahi edited a comment on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643443113


   > I'm not sure why you would have to insist on passing them if the user is 
fine with the TorchScript provided ones
   
   If you want to use names chosen by Torch, how are you going to figure out 
the correct names to give to TVM at deploy time? The names are the one attached 
to the graph after this line 
https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/pytorch.py#L2504,
 rather than the graph you supply to the frontend. You also need to remember 
whatever names Torch chooses until deploy time, since TVM doesn't export input 
names but they are needed to correctly set inputs.
   
I think most of the time the names don't change. Previously we were using 
names chosen by Torch, but due to the corner-case reasons discussed in the 
thread above, we decided it is better to let the user choose whatever names 
they like (ones that don't require remembering).
   
   
   > Passing the shapes should be needed very little, and I am surprised that 
you would need the user to do that.
   
   Passing the pairs of (name, shape) is common across other frontends. We 
initially started with the same API as others, and we haven't found a good 
reason to deviate from it. Yes Torch knows the shape when traced, but for 
scripted cases it doesn't. The Relay itself is not ready for dynamic shape 
input. For these reasons, we require input shapes to be passed explicitly. 
Since TVM users are supposed to know the input shape, I don't think it is a 
problem.
   
   Passing dtypes is not something we (not only pytorch, but other frontends 
too) thought about, since we always assume float32 inputs. We can discuss how 
to integrate them. But most of the time inputs are fp32, so I don't want to 
introduce breaking API changes to allow a dtype option. 







[GitHub] [incubator-tvm] masahi commented on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


masahi commented on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643443113


   > I'm not sure why you would have to insist on passing them if the user is 
fine with the TorchScript provided ones
   
   If you want to use names chosen by Torch, how are you going to figure out 
the correct names to give to TVM at deploy time? The names are the one attached 
to the graph after this line 
https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/pytorch.py#L2504,
 rather than the graph you supply to the frontend. You also need to remember 
whatever names Torch chooses until deploy time, since TVM doesn't export input 
names but they are needed to correctly set inputs.
   
I think most of the time the names don't change. Previously we were using 
names chosen by Torch, but due to the corner-case reasons discussed in the 
thread above, we decided it is better to let the user choose whatever names 
they like (ones that don't require remembering).
   
   
   > Passing the shapes should be needed very little, and I am surprised that 
you would need the user to do that.
   
   Passing the pairs of (name, shape) is common across other frontends. We 
initially started with the same API as others, and we haven't found a good 
reason to deviate from it. Yes Torch knows the shape when traced, but for 
scripted it doesn't. The Relay itself is not ready for dynamic shape input. For 
these reasons, we require input shapes to be passed explicitly. Since TVM users 
are supposed to know the input shape, I don't think it is a problem.
   
   Passing dtypes is not something we (not only pytorch, but other frontends 
too) thought about, since we always assume float32 inputs. We can discuss how 
to integrate them. But most of the time inputs are fp32, so I don't want to 
introduce breaking API changes to allow a dtype option. 







[GitHub] [incubator-tvm] t-vi commented on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


t-vi commented on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643440234


   The other part is that splitting at a potential `.` and taking the first 
part would actually reliably give the name from PyTorch. So in the end this is 
all about funny business.
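   A tiny sketch of the splitting rule being discussed (plain Python, 
illustrative only; `input0.1` stands for a TorchScript-disambiguated debug 
name):

```python
# TorchScript may disambiguate a user-supplied argument name like "input0"
# into "input0.1"; splitting at the first "." recovers the stable part.
def stable_name(debug_name: str) -> str:
    return debug_name.split(".", 1)[0]

assert stable_name("input0.1") == "input0"
assert stable_name("input0") == "input0"
```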







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


tqchen commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439596323



##
File path: src/runtime/module_init_wrapper.cc
##
@@ -0,0 +1,234 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/module_init_wrapper.cc
+ * \brief A wrapper for initializing modules using metadata
+ */
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "file_util.h"
+
+namespace tvm {
+namespace runtime {
+
+using StringNDArrayMap = std::unordered_map<String, runtime::NDArray>;
+
+class CSourceMetadataInitializer {
+ public:
+  explicit CSourceMetadataInitializer(const StringNDArrayMap& metadata) : metadata_(metadata) {}
+
+  template <typename T>

Review comment:
Can we change the subgraph generator to generate 
`InitMod0Func0(Array)` instead, so that we don't have to compile the 
metadata into the C source?









[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


tqchen commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439596323



##
File path: src/runtime/module_init_wrapper.cc
##
@@ -0,0 +1,234 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/module_init_wrapper.cc
+ * \brief A wrapper for initializing modules using metadata
+ */
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "file_util.h"
+
+namespace tvm {
+namespace runtime {
+
+using StringNDArrayMap = std::unordered_map<String, runtime::NDArray>;
+
+class CSourceMetadataInitializer {
+ public:
+  explicit CSourceMetadataInitializer(const StringNDArrayMap& metadata) : metadata_(metadata) {}
+
+  template <typename T>

Review comment:
Can we change the subgraph generator to expose 
`InitMod0Func0(Array)` instead, so that we don't have to compile the 
metadata into the C source?









[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


zhiics commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439595480



##
File path: src/runtime/module_init_wrapper.cc
##
@@ -0,0 +1,234 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/module_init_wrapper.cc
+ * \brief A wrapper for initializing modules using metadata
+ */
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "file_util.h"
+
+namespace tvm {
+namespace runtime {
+
+using StringNDArrayMap = std::unordered_map<String, runtime::NDArray>;
+
+class CSourceMetadataInitializer {
+ public:
+  explicit CSourceMetadataInitializer(const StringNDArrayMap& metadata) : metadata_(metadata) {}
+
+  template <typename T>

Review comment:
This is needed for partitioned subgraphs because they expect the constant 
data as plain C source input, to be compiled together with the C source code.









[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


zhiics commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439595480



##
File path: src/runtime/module_init_wrapper.cc
##
@@ -0,0 +1,234 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/module_init_wrapper.cc
+ * \brief A wrapper for initializing modules using metadata
+ */
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "file_util.h"
+
+namespace tvm {
+namespace runtime {
+
+using StringNDArrayMap = std::unordered_map<String, runtime::NDArray>;
+
+class CSourceMetadataInitializer {
+ public:
+  explicit CSourceMetadataInitializer(const StringNDArrayMap& metadata) : metadata_(metadata) {}
+
+  template <typename T>

Review comment:
This is needed for partitioned subgraphs because they expect the constant 
data as plain C source input. 









[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


tqchen commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439593516



##
File path: src/target/source/source_module.cc
##
@@ -152,8 +153,92 @@ runtime::Module DeviceSourceModuleCreate(
   return runtime::Module(n);
 }
 
+// A helper used to wrap different types of modules and pass through packedfunc.
+// This module will never be used for compilation and execution.
+class ModuleClassWrapperNode : public runtime::ModuleNode {
+ public:
+  ModuleClassWrapperNode() = default;
+  const char* type_key() const { return "module_class_wrapper"; }
+  PackedFunc GetFunction(const std::string& name, const ObjectPtr<Object>& sptr_to_self) final {
+    LOG(FATAL) << "Cannot execute module wrapper";
+    return PackedFunc();
+  }
+};
+
+runtime::Module ModuleClassWrapperCreate() {
+  auto n = make_object<ModuleClassWrapperNode>();
+  return runtime::Module(n);
+}
+
+// Pack the source code and metadata, where source code could be any
+// user-defined code, i.e. c source code, json graph representation, etc.
+class SourceMetadataModuleNode final : public runtime::ModuleNode {
+ public:
+  SourceMetadataModuleNode(const String& func_symbol, const String& code, const String& source_type,
+                           const Array<String>& variables, const Array<runtime::NDArray>& metadata)
+      : func_symbol_(func_symbol),
+        code_(code),
+        source_type_(source_type),
+        variables_(variables),
+        metadata_(metadata) {}
+
+  PackedFunc GetFunction(const std::string& name, const ObjectPtr<Object>& sptr_to_self) final {
+    if (name == "get_source") {
+      return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = this->code_; });
+    } else if (name == "get_source_type") {
+      return PackedFunc(
+          [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = this->source_type_; });
+    } else if (name == "get_symbol") {
+      return PackedFunc(
+          [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = this->func_symbol_; });
+    } else if (name == "get_vars") {
+      return PackedFunc(
+          [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = this->variables_; });
+    } else if (name == "get_metadata") {
+      return PackedFunc(
Review comment:
   It would be great to see if we can reduce to only MetaDataModule, and 
reuse mechanism here 
https://tvm.apache.org/docs/dev/introduction_to_module_serialization.html for 
serialization and packaging.









[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


tqchen commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439592861



##
File path: src/runtime/module_init_wrapper.cc
##
@@ -0,0 +1,234 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/module_init_wrapper.cc
+ * \brief A wrapper for initializing modules using metadata
+ */
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "file_util.h"
+
+namespace tvm {
+namespace runtime {
+
+using StringNDArrayMap = std::unordered_map<String, runtime::NDArray>;
+
+class CSourceMetadataInitializer {
+ public:
+  explicit CSourceMetadataInitializer(const StringNDArrayMap& metadata) : metadata_(metadata) {}
+
+  template <typename T>

Review comment:
Do we really need this one? My guess is that we can return a 
MetaInitWrapper (that wraps the CSourceModule), and our module serialization 
mechanism should take care of loading the metadata back, without having to 
serialize the metadata to the C source.

##
File path: src/runtime/module_init_wrapper.cc
##
@@ -0,0 +1,234 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/module_init_wrapper.cc
+ * \brief A wrapper for initializing modules using metadata
+ */
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "file_util.h"
+
+namespace tvm {
+namespace runtime {
+
+using StringNDArrayMap = std::unordered_map<String, runtime::NDArray>;
+
+class CSourceMetadataInitializer {
+ public:
+  explicit CSourceMetadataInitializer(const StringNDArrayMap& metadata) : metadata_(metadata) {}
+
+  template <typename T>

Review comment:
   see also 
https://tvm.apache.org/docs/dev/introduction_to_module_serialization.html









[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


zhiics commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439592645



##
File path: src/target/source/source_module.cc
##
@@ -152,8 +153,92 @@ runtime::Module DeviceSourceModuleCreate(
   return runtime::Module(n);
 }
 
+// A helper used to wrap different types of modules and pass through packedfunc.
+// This module will never be used for compilation and execution.
+class ModuleClassWrapperNode : public runtime::ModuleNode {
+ public:
+  ModuleClassWrapperNode() = default;
+  const char* type_key() const { return "module_class_wrapper"; }
+  PackedFunc GetFunction(const std::string& name, const ObjectPtr<Object>& sptr_to_self) final {
+    LOG(FATAL) << "Cannot execute module wrapper";
+    return PackedFunc();
+  }
+};
+
+runtime::Module ModuleClassWrapperCreate() {
+  auto n = make_object<ModuleClassWrapperNode>();
+  return runtime::Module(n);
+}
+
+// Pack the source code and metadata, where source code could be any
+// user-defined code, i.e. c source code, json graph representation, etc.
+class SourceMetadataModuleNode final : public runtime::ModuleNode {
+ public:
+  SourceMetadataModuleNode(const String& func_symbol, const String& code, const String& source_type,
+                           const Array<String>& variables, const Array<runtime::NDArray>& metadata)
+      : func_symbol_(func_symbol),
+        code_(code),
+        source_type_(source_type),
+        variables_(variables),
+        metadata_(metadata) {}
+
+  PackedFunc GetFunction(const std::string& name, const ObjectPtr<Object>& sptr_to_self) final {
+    if (name == "get_source") {
+      return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = this->code_; });
+    } else if (name == "get_source_type") {
+      return PackedFunc(
+          [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = this->source_type_; });
+    } else if (name == "get_symbol") {
+      return PackedFunc(
+          [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = this->func_symbol_; });
+    } else if (name == "get_vars") {
+      return PackedFunc(
+          [sptr_to_self, this](TVMArgs args, TVMRetValue* rv) { *rv = this->variables_; });
+    } else if (name == "get_metadata") {
+      return PackedFunc(

Review comment:
   This module saves the data, including source and metadata, from the 
partitioned graphs. It is only used for packaging purposes. CSourceModule and, 
later on, other modules (e.g. a JSON runtime module) will take the code from 
it. The `ModuleInitWrapper` will take the variables and metadata from it. 
   
   If we can let MetadataModule take the whole {var: ndarray} mapping and let 
CSourceModule get the needed variables, we should be able to remove this 
module as well.









[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


tqchen commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439591969



##
File path: python/tvm/runtime/module.py
##
@@ -33,6 +33,25 @@
 ProfileResult = namedtuple("ProfileResult", ["mean", "results"])
 
 
+def ModuleInitWrapper(variables, metadata):
+    """Create a module initialization wrapper.

Review comment:
Thanks! One thing that we want to be mindful of is reducing the number 
of concepts that a developer should learn. In most cases, we only want the 
developer to be aware of runtime.Module and nothing beyond that, so that the 
backend implementation can change over time.









[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


tqchen commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439591410



##
File path: src/runtime/module_init_wrapper.cc
##
@@ -0,0 +1,234 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/module_init_wrapper.cc
+ * \brief A wrapper for initializing modules using metadata
+ */
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "file_util.h"
+
+namespace tvm {
+namespace runtime {
+
+using StringNDArrayMap = std::unordered_map<String, runtime::NDArray>;
+
+class CSourceMetadataInitializer {
+ public:
+  explicit CSourceMetadataInitializer(const StringNDArrayMap& metadata) : metadata_(metadata) {}
+
+  template <typename T>
+  void GetElements(const std::string& var_name, const std::string& type_name,
+                   const runtime::NDArray& arr) {
+    // Get the number of elements.
+    int64_t num_elems = 1;
+    for (auto i : arr.Shape()) num_elems *= i;
+    stream_ << "static " << type_name << " " << var_name << "[" << num_elems << "] = {";
+    T* ptr = static_cast<T*>(arr->data);
+    for (int64_t i = 0; i < num_elems - 1; i++) {
+      stream_ << ptr[i] << ",";
+    }
+    if (num_elems > 0) stream_ << ptr[num_elems - 1];
+    stream_ << "};\n";
+  }
+
+  std::string Init() {
+    for (const auto& it : metadata_) {
+      std::string var_name = it.first.operator std::string();
+      runtime::NDArray data = it.second;
+      CHECK_EQ(data->dtype.lanes, 1U);
+      if (data->dtype.code == kDLFloat) {
+        if (data->dtype.bits == 32) {
+          stream_.precision(std::numeric_limits<float>::digits10 + 1);
+          GetElements<float>(var_name, "float", data);
+        } else {
+          CHECK_EQ(data->dtype.bits, 64);
+          stream_.precision(std::numeric_limits<double>::digits10 + 1);
+          GetElements<double>(var_name, "double", data);
+        }
+      } else if (data->dtype.code == kDLUInt) {
+        if (data->dtype.bits == 8) {
+          GetElements<uint8_t>(var_name, "uint8_t", data);
+        } else {
+          CHECK_EQ(data->dtype.bits, 32);
+          GetElements<uint32_t>(var_name, "uint32_t", data);
+        }
+      } else {
+        if (data->dtype.bits == 8) {
+          GetElements<int8_t>(var_name, "int8_t", data);
+        } else {
+          CHECK_EQ(data->dtype.bits, 32);
+          GetElements<int32_t>(var_name, "int32_t", data);
+        }
+      }
+    }
+    return stream_.str();
+  }
+
+ private:
+  /*! \brief The stream to print constant data. */
+  std::ostringstream stream_;
+  /*! \brief variable name to NDArray mapping. */
+  StringNDArrayMap metadata_;
+};
+
+class ModuleInitWrapper : public runtime::ModuleNode {

Review comment:
   MetadataModule sounds good
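   For context, the `CSourceMetadataInitializer::Init` routine quoted above 
embeds each constant NDArray into the generated C source as an array literal. 
A rough, hypothetical Python sketch of that emission step (illustrative only, 
not a TVM API):

```python
import numpy as np

# Map NumPy dtypes to the C type names used in the quoted C++ code.
_CTYPE = {
    np.dtype("float32"): "float",
    np.dtype("float64"): "double",
    np.dtype("uint8"): "uint8_t",
    np.dtype("uint32"): "uint32_t",
    np.dtype("int8"): "int8_t",
    np.dtype("int32"): "int32_t",
}

def emit_c_array(var_name, arr):
    """Render one constant array as a static C array definition."""
    ctype = _CTYPE[np.dtype(arr.dtype)]
    elems = ",".join(str(x) for x in arr.ravel().tolist())
    return "static {} {}[{}] = {{{}}};".format(ctype, var_name, arr.size, elems)

print(emit_c_array("w0", np.array([1, 2, 3], dtype="int32")))
# prints: static int32_t w0[3] = {1,2,3};
```

This also illustrates why the discussion leans toward a MetadataModule: the 
constants are baked into text, rather than loaded back via module 
serialization.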









[GitHub] [incubator-tvm] tqchen opened a new issue #5792: [RUNTIME] String operator+

2020-06-12 Thread GitBox


tqchen opened a new issue #5792:
URL: https://github.com/apache/incubator-tvm/issues/5792


   We have a few places in the code base that use operator+ on String. Right 
now:
   
   - we either convert to std::string 
https://github.com/apache/incubator-tvm/blob/master/src/tir/ir/expr.cc#L86
   - or add an overload that converts to std::string 
https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/container.h#L1391
   
   Given that operator+ is quite common, and we can have a better solution in 
these cases (by allocating the right result length, then copying), perhaps we 
should have a good overload for most cases.







[GitHub] [incubator-tvm] tqchen commented on issue #5792: [RUNTIME] String operator+

2020-06-12 Thread GitBox


tqchen commented on issue #5792:
URL: https://github.com/apache/incubator-tvm/issues/5792#issuecomment-643433599


   cc @zhiics @wweic @junrushao1994 







[GitHub] [incubator-tvm] t-vi opened a new pull request #5791: Add a combine batch_matmul pass

2020-06-12 Thread GitBox


t-vi opened a new pull request #5791:
URL: https://github.com/apache/incubator-tvm/pull/5791


   Contrary to what you might expect, this doesn't share as much code with the 
combine dense pass as it does with the combine 2d conv pass. This is because it 
concatenates the "output feature" dimensions just like the 2d conv pass 
concatenates output channels, whereas combine dense stacks the various matmul 
arguments.
   
   I'm not sure if there is a deeper reason not to concatenate for dense, too, 
but maybe there is.
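   As an editorial aside, the equivalence behind concatenating the "output feature" dimensions can be checked numerically. A hypothetical NumPy sketch, not TVM code (`@` here is a plain batched matmul, whereas relay's `batch_matmul` takes its RHS transposed):

```python
import numpy as np

# Two batch_matmul ops sharing the same LHS can be fused by concatenating
# their RHS operands along the output-feature axis, analogous to how the
# combine-2d-conv pass concatenates output channels.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 16))    # shared LHS: (batch, M, K)
w1 = rng.standard_normal((4, 16, 32))  # RHS 1: (batch, K, N1)
w2 = rng.standard_normal((4, 16, 24))  # RHS 2: (batch, K, N2)

# Separate matmuls with results concatenated on the feature axis ...
separate = np.concatenate([x @ w1, x @ w2], axis=2)
# ... equal one combined matmul over the concatenated RHS.
combined = x @ np.concatenate([w1, w2], axis=2)

assert np.allclose(separate, combined)
```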
   







[GitHub] [incubator-tvm] t-vi edited a comment on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


t-vi edited a comment on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643420736


   Note that I don't remove the possibility to pass in names. As the thread 
suggests, people will find that useful. I'm not sure why you would have to 
insist on passing them if the user is fine with the TorchScript provided ones. 
I'm not taking away passing input names, I just soften the mandates.
   
   Passing the shapes should be needed very little, and I am surprised that you 
would need the user to do that. Ignoring the dtypes of the inputs is 
actively terrible.
   
   How about doing the following:
   - Allow passing nothing.
   - Allow passing names only. (A list of strings.)
   - Allow passing names and shapes (for backward compat).
   - Allow passing names and shapes and dtypes (as a list of triples).
   
   If you insist, I could also live with just the last three.
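   A minimal sketch of the option handling proposed in the list above; the helper name and the `(name, shape, dtype)` triple format are illustrative assumptions, not the actual frontend API:

```python
def normalize_input_specs(input_infos):
    """Accept None, bare names, (name, shape) pairs, or
    (name, shape, dtype) triples; return uniform triples where
    None means 'take it from the TorchScript graph'."""
    if input_infos is None:
        return []  # use TorchScript-provided names, shapes, dtypes
    normalized = []
    for info in input_infos:
        if isinstance(info, str):    # name only
            normalized.append((info, None, None))
        elif len(info) == 2:         # (name, shape), backward compat
            normalized.append((info[0], info[1], None))
        else:                        # (name, shape, dtype)
            normalized.append(tuple(info))
    return normalized

assert normalize_input_specs(None) == []
assert normalize_input_specs(["input0"]) == [("input0", None, None)]
assert normalize_input_specs([("input0", (1, 3))]) == [("input0", (1, 3), None)]
```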







[GitHub] [incubator-tvm] anijain2305 edited a comment on pull request #5754: [RFC] Improve quantized convolution performance for armv8 architectures

2020-06-12 Thread GitBox


anijain2305 edited a comment on pull request #5754:
URL: https://github.com/apache/incubator-tvm/pull/5754#issuecomment-643422726


   @FrozenGene @giuseros If QNN Legalization is causing issues, we can remove 
QNN legalization for ARM CPUs altogether and move the logic to Alter Op layout. 
Alter op layout might become more complicated (like we might have to handle 
uint8 x int8 input and kernel dtype in alter op layout now). Just an idea, in 
case consolidating things in one place makes life easier.







[GitHub] [incubator-tvm] anijain2305 commented on pull request #5754: [RFC] Improve quantized convolution performance for armv8 architectures

2020-06-12 Thread GitBox


anijain2305 commented on pull request #5754:
URL: https://github.com/apache/incubator-tvm/pull/5754#issuecomment-643422726


   @FrozenGene @giuseros If QNN Legalization is causing issues, we can remove 
QNN legalization for ARM CPUs altogether and move the logic to Alter Op layout. 
Alter op layout might become more complicated (like we might have to handle 
uint8 x int8 input and kernel dtype in alter op layout now). Just an idea, in 
case consolidating things in one place makes life easier.







[GitHub] [incubator-tvm] t-vi edited a comment on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


t-vi edited a comment on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643420736


   Note that I don't remove the possibility to pass in names. As the thread 
suggests, people will find that useful. I'm not sure why you would have to 
insist on passing them if the user is fine with the TorchScript provided ones. 
I'm not taking away passing input names, I just soften the mandates.
   
   Passing the shapes should be needed very little, and I am surprised that you 
would need the user to do that. Ignoring the dtypes of the inputs is 
actively terrible.
   
   How about doing the following:
   - Allow passing nothing.
   - Allow passing names only. (A list of strings.)
   - Allow passing names and shapes (for backward compat).
   - Allow passing names and shapes and dtypes (as a list of triples).
   







[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5754: [RFC] Improve quantized convolution performance for armv8 architectures

2020-06-12 Thread GitBox


anijain2305 commented on a change in pull request #5754:
URL: https://github.com/apache/incubator-tvm/pull/5754#discussion_r439561237



##
File path: python/tvm/relay/op/nn/nn.py
##
@@ -1976,6 +1976,74 @@ def 
contrib_conv2d_winograd_without_weight_transform(data,
 kernel_layout, out_layout, out_dtype)
 
 
+def contrib_conv2d_gemm_without_weight_transform(data,
+ weight,
+ strides=(1, 1),
+ padding=(0, 0),
+ dilation=(1, 1),
+ groups=1,
+ channels=None,
+ kernel_size=None,
+ data_layout="NCHW",
+ kernel_layout="OIHW",
+ out_layout="",
+ out_dtype=""):
+r"""2D convolution with gemm algorithm.

Review comment:
   Is the `r` prefix necessary?

##
File path: python/tvm/relay/op/nn/nn.py
##
@@ -2134,6 +2202,25 @@ def contrib_conv2d_winograd_weight_transform(weight,
 return _make.contrib_conv2d_winograd_weight_transform(weight, tile_size)
 
 
+def contrib_conv2d_gemm_weight_transform(weights):

Review comment:
   Does this need layout?

##
File path: python/tvm/relay/op/nn/_nn.py
##
@@ -421,6 +421,24 @@ def compute_mirror_pad(attrs, inputs, out_dtype):
 reg.register_pattern("nn.contrib_conv2d_winograd_without_weight_transform",
  OpPattern.OUT_ELEMWISE_FUSABLE)
 
+# conv2d_gemm related operators
+reg.register_strategy("nn.contrib_conv2d_gemm_without_weight_transform",
+  strategy.conv2d_gemm_without_weight_transform_strategy)
+reg.register_pattern("nn.contrib_conv2d_gemm_without_weight_transform",
+ OpPattern.OUT_ELEMWISE_FUSABLE)
+
+
+@reg.register_compute("nn.contrib_conv2d_gemm_weight_transform")
+def compute_contrib_conv2d_gemm_weight_transform(attrs, inputs, out_dtype):
+"""Compute definition of contrib_conv2d_gemm_weight_transform"""
+out = topi.nn.conv2d_gemm_weight_transform(
+inputs[0])

Review comment:
   Can we move this to the previous line?

##
File path: topi/python/topi/arm_cpu/conv2d_alter_op.py
##
@@ -235,5 +239,37 @@ def _alter_conv2d_layout(attrs, inputs, tinfos, out_type):
  new_attrs['out_layout'], out_dtype], topi_tmpl)
 dispatch_ctx.update(target, new_workload, cfg)
 return relay.nn.contrib_depthwise_conv2d_nchwc(*inputs, **new_attrs)
+if topi_tmpl == "conv2d_NHWC_quantized.arm_cpu":
+assert (data.dtype == 'int8' and kernel.dtype == 'int8' or
+data.dtype == 'uint8' and kernel.dtype == 'uint8')
+CO, IC, KH, KW = get_const_tuple(kernel.shape)
+
+K = KH * KW * IC
+N = CO
+
+pad_k = 0
+pad_n = 0
+
+if N % 4 != 0:
+pad_n = 4 - (N % 4)
+
+if K % 16 != 0:
+pad_k = 16 - (K % 16)
+
+N_padded = N + pad_n
+K_padded = K + pad_k
+
+kernel_expr = relay.nn.contrib_conv2d_gemm_weight_transform(inputs[1])

Review comment:
   Was wondering if it is possible to represent the weight transform as a 
sequence of existing relay ops? In that case, we would not need a new contrib 
op; we could put that sequence here, and FoldConstant would optimize the sequence 
away.
   
   If not, do you think we need to pass kernel layout information? Also, should 
we call it something along the lines of `contrib_conv2d_gemm_hwio_weight_transform`?
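   As an editorial aside, the `pad_n`/`pad_k` logic quoted in the diff above is round-up padding to an alignment multiple; a small sketch (hypothetical helper, not TVM code):

```python
def round_up_padding(value, multiple):
    # Extra elements needed to pad `value` up to the next multiple
    # (0 if it is already aligned), mirroring the quoted pad_n/pad_k code.
    remainder = value % multiple
    return 0 if remainder == 0 else multiple - remainder

# In the diff above, N is padded to a multiple of 4 and K to a multiple of 16.
assert round_up_padding(12, 4) == 0
assert round_up_padding(13, 4) == 3
assert round_up_padding(33, 16) == 15
```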









[GitHub] [incubator-tvm] t-vi commented on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


t-vi commented on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643420736


   Note that I don't remove the possibility to pass in names. As the thread 
suggests, people will find that useful. I'm not sure why you would have to 
insist on passing them if the user is fine with the TorchScript provided ones. 
I'm not taking away passing input names, I just soften the mandates.
   
   Passing the shapes should be needed very little, and I am surprised that you 
would need the user to do that. Ignoring the dtypes of the inputs is 
actively terrible.
   
   How about doing the following:
   - Allow passing nothing.
   - Allow passing names only. (A list of strings.)
   - Allow passing names and shapes (for backward compat).
   - Allow passing names and shapes and dtypes.
   







[GitHub] [incubator-tvm] tqchen opened a new pull request #5790: [COMMUNITY] @wpan11nv -> Reviewer

2020-06-12 Thread GitBox


tqchen opened a new pull request #5790:
URL: https://github.com/apache/incubator-tvm/pull/5790


   Please join us to welcome @wpan11nv as a new reviewer :) He has been quite 
active contributing to the CUDA backend and has reviewed much non-trivial code 
related to tensor cores and warp-level parallelism.
   
   - [Commits 
History](https://github.com/apache/incubator-tvm/commits?author=wpan11nv)
   - [Code 
Review](https://github.com/apache/incubator-tvm/pulls?utf8=%E2%9C%93=reviewed-by%3Awpan11nv)
   - [Community Forum Summary](https://discuss.tvm.ai/u/wpan11nv/summary)







[GitHub] [incubator-tvm] tqchen commented on issue #5529: [BUG] ConvertLayout pass doesn't handle ops attributes

2020-06-12 Thread GitBox


tqchen commented on issue #5529:
URL: https://github.com/apache/incubator-tvm/issues/5529#issuecomment-643408715


   I see; if the result is generated correctly, then it is a missing-feature case.







[GitHub] [incubator-tvm] anijain2305 commented on issue #5529: [BUG] ConvertLayout pass doesn't handle ops attributes

2020-06-12 Thread GitBox


anijain2305 commented on issue #5529:
URL: https://github.com/apache/incubator-tvm/issues/5529#issuecomment-643404059


   Yes, we don't have a registry for LRN. Not sure this is a bug. This seems 
like a missing feature. ConvertLayout will put LayoutTransforms before and 
after LRN.







[GitHub] [incubator-tvm] kevinthesun commented on pull request #5762: [TF] Support symbolic inputs of Fill

2020-06-12 Thread GitBox


kevinthesun commented on pull request #5762:
URL: https://github.com/apache/incubator-tvm/pull/5762#issuecomment-643402869


   Thanks @lixiaoquan 







[incubator-tvm] branch master updated (9a3b6b2 -> 65224d9)

2020-06-12 Thread kevinthesun
This is an automated email from the ASF dual-hosted git repository.

kevinthesun pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 9a3b6b2  [TENSORFLOW]Conv3d Transpose OP added (#5775)
 add 65224d9  [TF] Support symbolic inputs of Fill (#5762)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tensorflow.py  |  5 +
 tests/python/frontend/tensorflow/test_forward.py | 16 ++--
 2 files changed, 15 insertions(+), 6 deletions(-)



[GitHub] [incubator-tvm] kevinthesun merged pull request #5762: [TF] Support symbolic inputs of Fill

2020-06-12 Thread GitBox


kevinthesun merged pull request #5762:
URL: https://github.com/apache/incubator-tvm/pull/5762


   







[GitHub] [incubator-tvm] masahi commented on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


masahi commented on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643402135


   The relevant discussion 
https://discuss.tvm.ai/t/pytorch-frontend-graph-input-names-can-change-using-loaded-torchscript/6055







[GitHub] [incubator-tvm] masahi edited a comment on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


masahi edited a comment on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643401330


   It seems you are allowing `input_shapes` to be None. The input names that 
are passed as part of `input_shapes` are important: these are the names, of 
the users' choosing, that will be needed at deploy time.
   
   If we use Torch IR input names, users need to manually inspect IR and 
somehow remember these names. The tricky part is Torch sometimes changes these 
input names when copying or saving/loading the same modules. So in the end what 
TVM expects as input names can be different from what users see as inputs to 
Torch IR.
   
   To work around this, we decided not to use names chosen by Torch and instead 
let users choose and supply input names (something obvious like input0, input1 
that don't require remembering) as part of `input_shapes`







[GitHub] [incubator-tvm] masahi commented on pull request #5779: Improve separation of PyTorch dtypes and TVM dtypes in relay PyTorch frontend

2020-06-12 Thread GitBox


masahi commented on pull request #5779:
URL: https://github.com/apache/incubator-tvm/pull/5779#issuecomment-643401330


   It seems you are allowing `input_shapes` to be None. The input names that 
are passed as part of `input_shapes` are important: these are the names, of 
the users' choosing, that will be needed at deploy time.
   
   If we use Torch IR input names, users need to manually inspect IR and 
somehow remember these names. The tricky part is Torch sometimes changes these 
input names when copying or saving/loading the same modules. So in the end what 
TVM expects as input names can be different from what users see as inputs to 
Torch IR.
   
   To work around this, we decided not to use names chosen by Torch and instead 
let users choose and supply input names (something obvious like input0, input1 
that don't require remembering)







[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5788: Migrate IntImm & FloatImm ObjectRef to not-null

2020-06-12 Thread GitBox


junrushao1994 commented on a change in pull request #5788:
URL: https://github.com/apache/incubator-tvm/pull/5788#discussion_r439553237



##
File path: src/arith/rewrite_simplify.cc
##
@@ -123,7 +123,7 @@ PrimExpr RewriteSimplifier::Impl::VisitExpr_(const AddNode* 
op) {
   // Pattern var to match any expression
  PVar<PrimExpr> x, y, z, b1, b2, s1, s2;
   // Pattern var match IntImm
-  PVar<IntImm> c1, c2, c3;
+  PVarOpt<Optional<IntImm>> c1, c2, c3;

Review comment:
   I agree too... Is there any better way to do this?









[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5601: [DataType] Add bfloat16

2020-06-12 Thread GitBox


junrushao1994 commented on a change in pull request #5601:
URL: https://github.com/apache/incubator-tvm/pull/5601#discussion_r439551002



##
File path: tests/python/unittest/test_tir_transform_bf16_legalize.py
##
@@ -0,0 +1,152 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import tvm
+import topi
+from tvm import te
+from tvm.tir import const
+
+
+def lower_stmt(sche, params, passfunc):
+func = tvm.driver.build_module.form_irmodule(sche, params, "main", 
None)["main"]
+func = passfunc()(
+tvm.IRModule.from_expr(func))["main"]
+stmt = func.body
+return stmt
+
+
+def test_promote():
+def runpass(op, passfunc):
+a = te.placeholder((100,), dtype='bfloat16')
+b = te.placeholder((100,), dtype='bfloat16')
+c = te.compute((100,), lambda i: op(a[i], b[i]))
+s = te.create_schedule(c.op)
+return lower_stmt(s, [a, b, c], passfunc)
+
+def get_promoted(op):
+a = te.placeholder((100,), dtype='bfloat16')
+b = te.placeholder((100,), dtype='bfloat16')
+c = te.compute((100,), lambda i:
+topi.cast(op(topi.cast(a[i],'float'),
+topi.cast(b[i],'float')), 'bfloat16')
+)
+s = te.create_schedule(c.op)
+func = tvm.driver.build_module.form_irmodule(s, [a,b,c], "main", 
None)["main"]
+return func.body
+
+def test_promoted(op):
+stmt = runpass(op, tvm.tir.transform.BF16Promote)
+tvm.ir.assert_structural_equal(stmt, get_promoted(op))
+test_promoted(topi.add)
+test_promoted(topi.subtract)
+test_promoted(topi.multiply)
+test_promoted(topi.divide)
+
+def test_eliminate():
+def to32(v):
+return topi.cast(v, 'float')
+def to16(v):
+return topi.cast(v, 'bfloat16')
+def get_eliminated():
+a = te.placeholder((100,), dtype='bfloat16')
+b = te.placeholder((100,), dtype='bfloat16')
+c = te.compute((100,), lambda i: to16(
+topi.add(
+to32(
+to16(
+topi.add(
+to32(a[i]),
+to32(b[i]),
+)
+)
+),
+to32(
+to16(
+topi.add(
+to32(a[i]),
+to32(b[i]),
+)
+)
+)
+)
+))
+s = te.create_schedule(c.op)
+stmt = lower_stmt(s, [a, b, c], tvm.tir.transform.BF16CastElimination)
+return stmt
+
+def get_target():
+a = te.placeholder((100,), dtype='bfloat16')
+b = te.placeholder((100,), dtype='bfloat16')
+c = te.compute((100,), lambda i: to16(
+topi.add(topi.add(
+to32(a[i]),
+to32(b[i]),
+),
+topi.add(
+to32(a[i]),
+to32(b[i]),
+)
+)
+))
+s = te.create_schedule(c.op)
+func = tvm.driver.build_module.form_irmodule(s, [a,b,c], "main", 
None)["main"]
+return func.body
+tvm.ir.assert_structural_equal(get_eliminated(), get_target())
+
+def test_legalize():
+def to32(v):
+uint32_v = topi.cast(v, "uint32")
+uint32_v = tvm.tir.call_pure_intrin("uint32", "shift_left", uint32_v, 
tvm.tir.const(16, "uint32"))
+return tvm.tir.call_pure_intrin("float32", "reinterpret", uint32_v)
+def to16(v):
+uint32_v = tvm.tir.call_pure_intrin("uint32", "reinterpret", v)
+rounding_bias = tvm.tir.call_pure_intrin("uint32", "shift_right", 
uint32_v, tvm.tir.const(16, "uint32"))
+rounding_bias = tvm.tir.call_pure_intrin("uint32", "bitwise_and", 
rounding_bias, tvm.tir.const(1, "uint32"))
+rounding_bias = rounding_bias + tvm.tir.const(0x7FFF, "uint16")
+uint32_v = uint32_v + rounding_bias
+uint32_v = tvm.tir.call_pure_intrin("uint32", "shift_right", uint32_v, 
tvm.tir.const(16, "uint32"))
+return topi.cast(uint32_v, 'uint16')
+
+def 

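As an editorial aside, the `to32`/`to16` helpers in the truncated test above express, via TIR intrinsics, the standard bit-level float32 <-> bfloat16 conversion with a round-to-nearest-even bias. A hypothetical NumPy sketch of the same arithmetic (not TVM code):

```python
import numpy as np

def bf16_to_f32(v):
    # widen uint16 -> uint32, shift into the high bits, reinterpret as float32
    return (v.astype(np.uint32) << 16).view(np.float32)

def f32_to_bf16(v):
    # reinterpret float32 as uint32, add the 0x7FFF round-to-nearest-even
    # bias (plus the low bit of the kept mantissa), keep the high 16 bits
    bits = v.view(np.uint32)
    rounding_bias = ((bits >> 16) & 1) + np.uint32(0x7FFF)
    return ((bits + rounding_bias) >> 16).astype(np.uint16)

x = np.array([1.0, -2.5, 3.14159], dtype=np.float32)
roundtrip = bf16_to_f32(f32_to_bf16(x))
# bfloat16 keeps only 8 mantissa bits, so the round trip is close, not exact
assert np.allclose(roundtrip, x, rtol=1e-2)
```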
[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5789: [TIR][REFACTOR] Cleanup unused classes

2020-06-12 Thread GitBox


junrushao1994 commented on a change in pull request #5789:
URL: https://github.com/apache/incubator-tvm/pull/5789#discussion_r439549304



##
File path: include/tvm/te/tensor.h
##
@@ -40,15 +40,16 @@ namespace te {
 using arith::IntSet;
 using namespace tvm::tir;
 
-// internal node container for Operation
+// internal node container for Operationc

Review comment:
   typo









[GitHub] [incubator-tvm] tqchen opened a new pull request #5789: [TIR][REFACTOR] Cleanup unused classes

2020-06-12 Thread GitBox


tqchen opened a new pull request #5789:
URL: https://github.com/apache/incubator-tvm/pull/5789


   cc @junrushao1994 @yzhliu @zhiics 







[GitHub] [incubator-tvm] yzhliu commented on issue #5529: [BUG] ConvertLayout pass doesn't handle ops attributes

2020-06-12 Thread GitBox


yzhliu commented on issue #5529:
URL: https://github.com/apache/incubator-tvm/issues/5529#issuecomment-643392813


   The attributes are handled in registered python code, e.g., 
https://github.com/apache/incubator-tvm/blob/master/topi/python/topi/x86/conv2d_alter_op.py
 
   I don't see such a registry for LRN; did I miss anything, @anijain2305?







[incubator-tvm] branch master updated (82a2f35 -> 9a3b6b2)

2020-06-12 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 82a2f35  [PYTORCH]aten::norm support added (#5776)
 add 9a3b6b2  [TENSORFLOW]Conv3d Transpose OP added (#5775)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tensorflow.py  |  3 +-
 tests/python/frontend/tensorflow/test_forward.py | 87 
 2 files changed, 89 insertions(+), 1 deletion(-)



[GitHub] [incubator-tvm] masahi merged pull request #5775: [TENSORFLOW]Conv3d Transpose OP added

2020-06-12 Thread GitBox


masahi merged pull request #5775:
URL: https://github.com/apache/incubator-tvm/pull/5775


   







[GitHub] [incubator-tvm] masahi merged pull request #5776: [PYTORCH]aten::norm support added

2020-06-12 Thread GitBox


masahi merged pull request #5776:
URL: https://github.com/apache/incubator-tvm/pull/5776


   







[incubator-tvm] branch master updated (8c1bfad -> 82a2f35)

2020-06-12 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 8c1bfad  [FRONTEND]Darknet support batch size for yolo (#5688)
 add 82a2f35  [PYTORCH]aten::norm support added (#5776)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/pytorch.py  | 40 
 tests/python/frontend/pytorch/test_forward.py | 87 +++
 2 files changed, 127 insertions(+)



[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


zhiics commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439528514



##
File path: src/runtime/module_init_wrapper.cc
##
@@ -0,0 +1,234 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/module_init_wrapper.cc
+ * \brief A wrapper for initializing modules using metadata
+ */
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "file_util.h"
+
+namespace tvm {
+namespace runtime {
+
using StringNDArrayMap = std::unordered_map<String, runtime::NDArray>;
+
+class CSourceMetadataInitializer {
+ public:
+  explicit CSourceMetadataInitializer(const StringNDArrayMap& metadata) : 
metadata_(metadata) {}
+
+  template <typename T>
+  void GetElements(const std::string& var_name, const std::string& type_name,
+   const runtime::NDArray& arr) {
+// Get the number of elements.
+int64_t num_elems = 1;
+for (auto i : arr.Shape()) num_elems *= i;
+stream_ << "static " << type_name << " " << var_name << "[" << num_elems 
<< "] = {";
+T* ptr = static_cast<T*>(arr->data);
+for (int64_t i = 0; i < num_elems - 1; i++) {
+  stream_ << ptr[i] << ",";
+}
+if (num_elems > 0) stream_ << ptr[num_elems - 1];
+stream_ << "};\n";
+  }
+
+  std::string Init() {
+for (const auto& it : metadata_) {
+  std::string var_name = it.first.operator std::string();
+  runtime::NDArray data = it.second;
+  CHECK_EQ(data->dtype.lanes, 1U);
+  if (data->dtype.code == kDLFloat) {
+if (data->dtype.bits == 32) {
+  stream_.precision(std::numeric_limits<float>::digits10 + 1);
+  GetElements<float>(var_name, "float", data);
+} else {
+  CHECK_EQ(data->dtype.bits, 64);
+  stream_.precision(std::numeric_limits<double>::digits10 + 1);
+  GetElements<double>(var_name, "double", data);
+}
+  } else if (data->dtype.code == kDLUInt) {
+if (data->dtype.bits == 8) {
+  GetElements<uint8_t>(var_name, "uint8_t", data);
+} else {
+  CHECK_EQ(data->dtype.bits, 32);
+  GetElements<uint32_t>(var_name, "uint32_t", data);
+}
+  } else {
+if (data->dtype.bits == 8) {
+  GetElements<int8_t>(var_name, "int8_t", data);
+} else {
+  CHECK_EQ(data->dtype.bits, 32);
+  GetElements<int32_t>(var_name, "int32_t", data);
+}
+  }
+}
+return stream_.str();
+  }
+
+ private:
+  /*! \brief The stream to print constant data. */
+  std::ostringstream stream_;
+  /*! \brief variable name to NDArray mapping. */
+  StringNDArrayMap metadata_;
+};
+
+class ModuleInitWrapper : public runtime::ModuleNode {

Review comment:
   how about `MetadataInitModule` or just `MetadataModule`?









[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


zhiics commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439527783



##
File path: python/tvm/runtime/module.py
##
@@ -33,6 +33,25 @@
 ProfileResult = namedtuple("ProfileResult", ["mean", "results"])
 
 
+def ModuleInitWrapper(variables, metadata):
+"""Create a module initialization wrapper.

Review comment:
   Ah, I thought we wanted to allow users to play with these APIs in the 
frontend. NVM, I can hide them in the backend, and some wrappers can be removed.









[incubator-tvm] branch master updated (04496d3 -> 8c1bfad)

2020-06-12 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 04496d3  [BYOC][FIX] Infer types in MergeComposite (#5766)
 add 8c1bfad  [FRONTEND]Darknet support batch size for yolo (#5688)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/darknet.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)



[GitHub] [incubator-tvm] tqchen merged pull request #5688: [FRONTEND]Darknet support batch size for yolo

2020-06-12 Thread GitBox


tqchen merged pull request #5688:
URL: https://github.com/apache/incubator-tvm/pull/5688


   







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5655: Add MicroTVM tutorial using the STM32F746 discovery board

2020-06-12 Thread GitBox


tqchen commented on a change in pull request #5655:
URL: https://github.com/apache/incubator-tvm/pull/5655#discussion_r439508442



##
File path: tutorials/micro/micro_tflite.py
##
@@ -0,0 +1,219 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+.. _tutorial-micro-tflite:
+
+Micro TVM with TFLite Models
+
+**Author**: `Tom Gall `_
+
+This tutorial is an introduction to working with MicroTVM and TFLite models 
with Relay.
+"""
+##
+# Setup
+# -
+#
+# To get started, the TFLite package needs to be installed as a prerequisite.
+# 
+# install tflite
+# .. code-block:: bash
+#
+#   pip install tflite==2.1.0 --user
+#
+# or you can generate the TFLite package yourself. The steps are the following:
+#
+#   Get the flatc compiler.
+#   Please refer to https://github.com/google/flatbuffers for details
+#   and make sure it is properly installed.
+#
+# .. code-block:: bash
+#
+#   flatc --version
+#
+# Get the TFLite schema.
+#
+# .. code-block:: bash
+#
+#   wget https://raw.githubusercontent.com/tensorflow/tensorflow/r1.13/tensorflow/lite/schema/schema.fbs
+#
+# Generate TFLite package.
+#
+# .. code-block:: bash
+#
+#   flatc --python schema.fbs
+#
+# Add current folder (which contains generated tflite module) to PYTHONPATH.
+#
+# .. code-block:: bash
+#
+#   export PYTHONPATH=${PYTHONPATH:+$PYTHONPATH:}$(pwd)
+#
+# To validate that the TFLite package was installed successfully, run
+# ``python -c "import tflite"``
+#
+# CMSIS needs to be downloaded and the CMSIS_ST_PATH environment variable set up.
+# This tutorial only supports the STM32F7xx series of boards.
+# Download from: https://www.st.com/en/embedded-software/stm32cubef7.html
+# After you've expanded the zip file:
+#
+# .. code-block:: bash
+#
+# export CMSIS_ST_PATH=/path/to/STM32Cube_FW_F7_V1.16.0/Drivers/CMSIS
+#
+# Next we need to download a pretrained TFLite model. When working with
+# microcontrollers, be mindful that these are highly resource-constrained
+# devices; as such, standard models like MobileNet may not fit into their
+# modest memory.
+#
+# For this tutorial, we'll make use of one of the TF Micro example models.
+# 
+# If you wish to replicate the training steps see:
+# 
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world/train
+#
+# .. code-block:: bash
+#
+# if you download the example pretrained model from
+#   wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/micro/hello_world_2020_04_13.zip
+#   unzip hello_world_2020_04_13.zip
+# loading it will fail due to an unimplemented opcode (114).
+# An older version of the pre-trained model has therefore been saved and made
+# available on linaro.org
+
+##
+# Python imports for tvm, numpy etc
+# --
+import os
+import numpy as np
+import tvm
+import tvm.micro as micro
+import requests
+
+from tvm.contrib import graph_runtime, util
+from tvm import relay
+
+
+##
+# Load the pretrained TFLite model from a file in your current 
+# directory into a buffer
+model_url = 'https://people.linaro.org/~tom.gall/sine_model.tflite'

Review comment:
   We cannot check binary files into the codebase; please place the
model somewhere it can be downloaded from via download_testdata. The usage of
download_testdata looks good.









[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


tqchen commented on a change in pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#discussion_r439504941



##
File path: src/runtime/module_init_wrapper.cc
##
@@ -0,0 +1,234 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/module_init_wrapper.cc
+ * \brief A wrapper for initializing modules using metadata
+ */
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "file_util.h"
+
+namespace tvm {
+namespace runtime {
+
+using StringNDArrayMap = std::unordered_map<String, runtime::NDArray>;
+
+class CSourceMetadataInitializer {
+ public:
+  explicit CSourceMetadataInitializer(const StringNDArrayMap& metadata) : 
metadata_(metadata) {}
+
+  template <typename T>
+  void GetElements(const std::string& var_name, const std::string& type_name,
+   const runtime::NDArray& arr) {
+// Get the number of elements.
+int64_t num_elems = 1;
+for (auto i : arr.Shape()) num_elems *= i;
+stream_ << "static " << type_name << " " << var_name << "[" << num_elems 
<< "] = {";
+T* ptr = static_cast<T*>(arr->data);
+for (int64_t i = 0; i < num_elems - 1; i++) {
+  stream_ << ptr[i] << ",";
+}
+if (num_elems > 0) stream_ << ptr[num_elems - 1];
+stream_ << "};\n";
+  }
+
+  std::string Init() {
+for (const auto& it : metadata_) {
+  std::string var_name = it.first.operator std::string();
+  runtime::NDArray data = it.second;
+  CHECK_EQ(data->dtype.lanes, 1U);
+  if (data->dtype.code == kDLFloat) {
+if (data->dtype.bits == 32) {
+  stream_.precision(std::numeric_limits<float>::digits10 + 1);
+  GetElements<float>(var_name, "float", data);
+} else {
+  CHECK_EQ(data->dtype.bits, 64);
+  stream_.precision(std::numeric_limits<double>::digits10 + 1);
+  GetElements<double>(var_name, "double", data);
+}
+  } else if (data->dtype.code == kDLUInt) {
+if (data->dtype.bits == 8) {
+  GetElements<uint8_t>(var_name, "uint8_t", data);
+} else {
+  CHECK_EQ(data->dtype.bits, 32);
+  GetElements<uint32_t>(var_name, "uint32_t", data);
+}
+  } else {
+if (data->dtype.bits == 8) {
+  GetElements<int8_t>(var_name, "int8_t", data);
+} else {
+  CHECK_EQ(data->dtype.bits, 32);
+  GetElements<int32_t>(var_name, "int32_t", data);
+}
+  }
+}
+return stream_.str();
+  }
+
+ private:
+  /*! \brief The stream to print constant data. */
+  std::ostringstream stream_;
+  /*! \brief variable name to NDArray mapping. */
+  StringNDArrayMap metadata_;
+};
+
+class ModuleInitWrapper : public runtime::ModuleNode {

Review comment:
   Let us think a bit about the name. ModuleInitWrapper may not be the best 
name.
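The CSourceMetadataInitializer in the diff above renders each constant NDArray as a C static-array definition. A minimal Python sketch of that idea (an illustration only, not TVM code; the helper name and dtype table are assumptions):

```python
import numpy as np

# Hypothetical sketch mirroring the intent of CSourceMetadataInitializer:
# render an array of constants as a C "static" array definition.
_C_TYPES = {"float32": "float", "float64": "double",
            "uint8": "uint8_t", "uint32": "uint32_t",
            "int8": "int8_t", "int32": "int32_t"}

def emit_c_array(var_name, arr):
    arr = np.asarray(arr)
    ctype = _C_TYPES[str(arr.dtype)]                     # map dtype -> C type
    elems = ",".join(repr(v) for v in arr.ravel().tolist())
    return "static %s %s[%d] = {%s};\n" % (ctype, var_name, arr.size, elems)

print(emit_c_array("w0", np.array([1, 2, 3], dtype="int32")))
# static int32_t w0[3] = {1,2,3};
```

Like the C++ version, floating-point entries would additionally need enough printed precision (the `std::numeric_limits` `digits10` calls in the diff) to round-trip exactly.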

##
File path: python/tvm/runtime/module.py
##
@@ -33,6 +33,25 @@
 ProfileResult = namedtuple("ProfileResult", ["mean", "results"])
 
 
+def ModuleInitWrapper(variables, metadata):
+"""Create a module initialization wrapper.

Review comment:
   Do we need to expose ModuleInitWrapper on the Python side? Is it
possible to simply hide the Module as part of the backend, instead of making it
a frontend entity?

##
File path: src/target/source/source_module.cc
##
@@ -152,8 +153,92 @@ runtime::Module DeviceSourceModuleCreate(
   return runtime::Module(n);
 }
 
+// A helper used to wrap different types of modules and pass through 
packedfunc.
+// This module will never be used for compilation and execution.
+class ModuleClassWrapperNode : public runtime::ModuleNode {
+ public:
+  ModuleClassWrapperNode() = default;
+  const char* type_key() const { return "module_class_wrapper"; }
+  PackedFunc GetFunction(const std::string& name, const ObjectPtr<Object>& sptr_to_self) final {
+LOG(FATAL) << "Cannot execute module wrapper";
+return PackedFunc();
+  }
+};
+
+runtime::Module ModuleClassWrapperCreate() {
+  auto n = make_object<ModuleClassWrapperNode>();
+  return runtime::Module(n);
+}
+
+// Pack the source code and metadata, where source code could be any
+// user-defined code, i.e. c source code, json graph representation, etc.
+class SourceMetadataModuleNode final : public runtime::ModuleNode {
+ public:
+  SourceMetadataModuleNode(const String& 

[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5052: [TARGET] ONNX codegen

2020-06-12 Thread GitBox


zhiics commented on a change in pull request #5052:
URL: https://github.com/apache/incubator-tvm/pull/5052#discussion_r439506457



##
File path: tests/python/contrib/test_onnx_model.py
##
@@ -0,0 +1,167 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Relay to ONNX serialization test cases"""
+import pytest
+pytest.importorskip('onnx')
+pytest.importorskip('onnxruntime')
+
+from collections import OrderedDict
+import numpy as np
+import onnxruntime as rt
+import tvm
+from tvm import relay
+from tvm.contrib.target.onnx import to_onnx
+import tvm.relay.testing
+from tvm.relay.op.annotation import compiler_begin, compiler_end
+from tvm.ir import IRModule
+from tvm.relay import transform
+
+
+def func_to_onnx(mod, params, name):
+onnx_model = to_onnx(mod, params, name, path=None)
+return onnx_model.SerializeToString()
+
+
+def run_onnx(mod, params, name, input_data):
+onnx_model = func_to_onnx(mod, params, name)
+sess = rt.InferenceSession(onnx_model)
+input_names = {}
+for input, data in zip(sess.get_inputs(), input_data):
+input_names[input.name] = data
+output_names = [output.name for output in sess.get_outputs()]
+res = sess.run(output_names, input_names)
+return res[0]
+
+
+def get_data(in_data_shapes, dtype='float32'):
+in_data = OrderedDict()
+for name, shape in in_data_shapes.items():
+in_data[name] = np.random.uniform(size=shape).astype(dtype)
+return in_data
+
+
+def run_relay(mod, params, in_data):
+target = 'llvm'
+ctx = tvm.context('llvm', 0)
+intrp = relay.create_executor("graph", mod, ctx=ctx, target=target)
+in_data = [tvm.nd.array(value) for value in in_data.values()]
+return intrp.evaluate()(*in_data, **params).asnumpy()
+
+
+def _verify_results(mod, params, in_data):
+a = run_relay(mod, params, in_data)
+b = run_onnx(mod, params, 'test_resnet', in_data.values())
+np.testing.assert_allclose(a, b, rtol=1e-7, atol=1e-7)
+
+
+def test_resnet():
+num_class = 1000
+in_data_shapes = OrderedDict({"data": (1, 3, 224, 224)})
+in_data = get_data(in_data_shapes, dtype="float32")
+for n in [18, 34, 50, 101]:
+mod, params = tvm.relay.testing.resnet.get_workload(
+1, num_class, num_layers=n)
+_verify_results(mod, params, in_data)
+
+
+def test_squeezenet():
+in_data_shapes = OrderedDict({"data": (1, 3, 224, 224)})
+in_data = get_data(in_data_shapes, dtype="float32")
+for version in ['1.0', '1.1']:
+mod, params = tvm.relay.testing.squeezenet.get_workload(1, 
version=version)
+_verify_results(mod, params, in_data)
+
+
+def skipped_test_partition():
+in_1 = relay.var('in_1', shape=(10, 10), dtype='float32')
+in_2 = relay.var('in_2', shape=(10, 10), dtype='float32')
+in_3 = relay.var('in_3', shape=(10, 10), dtype='float32')
+in_4 = relay.var('in_4', shape=(10, 10), dtype='float32')
+in_5 = relay.var('in_5', shape=(10, 10), dtype='float32')
+in_6 = relay.var('in_6', shape=(10, 10), dtype='float32')
+in_7 = relay.var('in_7', shape=(10, 10), dtype='float32')
+in_8 = relay.var('in_8', shape=(10, 10), dtype='float32')
+in_9 = relay.var('in_9', shape=(10, 10), dtype='float32')
+in_10 = relay.var('in_10', shape=(10, 10), dtype='float32')
+
+begin0 = compiler_begin(in_1, "onnx")
+begin1 = compiler_begin(in_2, "onnx")
+begin2 = compiler_begin(in_3, "onnx")
+begin3 = compiler_begin(in_4, "onnx")
+node0 = relay.add(begin0, begin1)
+node1 = relay.add(begin2, begin3)
+end0 = compiler_end(node0, "onnx")
+end1 = compiler_end(node1, "onnx")
+begin4 = compiler_begin(end0, "onnx")
+begin5 = compiler_begin(end1, "onnx")
+node2 = relay.add(begin4, begin5)
+end2 = compiler_end(node2, "onnx")
+
+dbegin0 = compiler_begin(in_5, "default")
+dbegin1 = compiler_begin(in_6, "default")
+node3 = relay.subtract(dbegin0, dbegin1)
+dbegin2 = compiler_begin(in_7, "default")
+dend1 = compiler_end(node3, "default")
+dbegin3 = compiler_begin(dend1, "default")
+node4 = relay.subtract(dbegin2, dbegin3)
+dend2 = compiler_end(node4, "default")
+
+begin6 = compiler_begin(end2, "onnx")
+begin7 = 

[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5052: [TARGET] ONNX codegen

2020-06-12 Thread GitBox


zhiics commented on a change in pull request #5052:
URL: https://github.com/apache/incubator-tvm/pull/5052#discussion_r439504954



##
File path: python/tvm/contrib/target/__init__.py
##
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Codegen and runtime APIs for targets.

Review comment:
   if nothing is imported from this file, we can just remove it









[GitHub] [incubator-tvm] zhiics commented on pull request #5770: [BYOC][runtime] Separate code and metadata for CSourceModule

2020-06-12 Thread GitBox


zhiics commented on pull request #5770:
URL: https://github.com/apache/incubator-tvm/pull/5770#issuecomment-643348640


   cc @tqchen @comaniac @junrushao1994 @FrozenGene @lhutton1 @trevor-m @masahi 







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5052: [TARGET] ONNX codegen

2020-06-12 Thread GitBox


tqchen commented on a change in pull request #5052:
URL: https://github.com/apache/incubator-tvm/pull/5052#discussion_r439502719



##
File path: CMakeLists.txt
##
@@ -69,6 +69,7 @@ tvm_option(USE_CPP_RPC "Build CPP RPC" OFF)
 tvm_option(USE_TFLITE "Build with tflite support" OFF)
 tvm_option(USE_TENSORFLOW_PATH "TensorFlow root path when use TFLite" none)
 tvm_option(USE_COREML "Build with coreml support" OFF)
+tvm_option(USE_ONNX_CODEGEN "Build with ONNX Codegen support" OFF)

Review comment:
   USE_TARGET_ONNX









[GitHub] [incubator-tvm] ANSHUMAN87 commented on a change in pull request #5788: Migrate IntImm & FloatImm ObjectRef to not-null

2020-06-12 Thread GitBox


ANSHUMAN87 commented on a change in pull request #5788:
URL: https://github.com/apache/incubator-tvm/pull/5788#discussion_r439498500



##
File path: src/arith/rewrite_simplify.cc
##
@@ -123,7 +123,7 @@ PrimExpr RewriteSimplifier::Impl::VisitExpr_(const AddNode* 
op) {
   // Pattern var to match any expression
  PVar<PrimExpr> x, y, z, b1, b2, s1, s2;
   // Pattern var match IntImm
-  PVar<IntImm> c1, c2, c3;
+  PVarOpt<IntImm> c1, c2, c3;

Review comment:
   Yes, I agree. It is only to accommodate the absence of a default
constructor. However, I think the behaviour doesn't conflict with PVar.
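As a toy illustration of the pattern-variable semantics under discussion (a simplified Python analogue under assumed semantics, not TVM's actual PVar/PVarOpt implementation): a pattern variable binds to the first expression it matches and afterwards only matches an equal expression.

```python
# Toy analogue of a pattern variable: unbound at first, it binds to the
# first expression it sees; once bound, it only matches equal expressions.
class PatternVar:
    _UNSET = object()

    def __init__(self):
        self.value = PatternVar._UNSET

    def match(self, expr):
        if self.value is PatternVar._UNSET:
            self.value = expr          # first match binds the variable
            return True
        return self.value == expr      # later matches must agree

x = PatternVar()
assert x.match(3)       # binds x to 3
assert x.match(3)       # the same value matches again
assert not x.match(4)   # a different value does not
```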









[GitHub] [incubator-tvm] tqchen commented on issue #3879: Cast from float16 to uint8 was not supported by CUDA

2020-06-12 Thread GitBox


tqchen commented on issue #3879:
URL: https://github.com/apache/incubator-tvm/issues/3879#issuecomment-643344778


   Let us try an implementation that supports round-down behavior, which is
typical in float-to-int conversion.
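For reference, a small sketch of what round-down float-to-int conversion semantics look like (an illustration of the behavior being discussed, not a proposed TVM implementation; the saturation at the uint8 bounds is an added assumption):

```python
import numpy as np

# Emulate casting float16 values to uint8 with round-down
# (truncate-toward-zero) behavior, saturating to the uint8 range.
def cast_fp16_to_uint8_round_down(values):
    vals = np.asarray(values, dtype="float16").astype("float64")
    truncated = np.trunc(vals)            # round toward zero, C-style
    clipped = np.clip(truncated, 0, 255)  # saturate instead of wrapping
    return clipped.astype("uint8")

print(cast_fp16_to_uint8_round_down([3.9, -1.5, 300.0]).tolist())  # [3, 0, 255]
```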







[GitHub] [incubator-tvm] tqchen commented on issue #5529: [BUG] ConvertLayout pass doesn't handle ops attributes

2020-06-12 Thread GitBox


tqchen commented on issue #5529:
URL: https://github.com/apache/incubator-tvm/issues/5529#issuecomment-643343553


   cc @yzhliu @anijain2305 can you look into the issue?







[GitHub] [incubator-tvm] tqchen closed issue #5384: [ARITH] Merge Impl of Extended Euclidean

2020-06-12 Thread GitBox


tqchen closed issue #5384:
URL: https://github.com/apache/incubator-tvm/issues/5384


   







[GitHub] [incubator-tvm] tqchen commented on pull request #5772: [ARITH]add simplify rule for div

2020-06-12 Thread GitBox


tqchen commented on pull request #5772:
URL: https://github.com/apache/incubator-tvm/pull/5772#issuecomment-643343033


   cc @yongfeng-nv @yzhliu @zhiics @wweic @junrushao1994 please help to take a 
look







[GitHub] [incubator-tvm] zhiics merged pull request #5766: [BYOC][FIX] Infer types in MergeComposite

2020-06-12 Thread GitBox


zhiics merged pull request #5766:
URL: https://github.com/apache/incubator-tvm/pull/5766


   







[incubator-tvm] branch master updated (f672639 -> 04496d3)

2020-06-12 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from f672639  Add ignore storage_order attribute to onnx pooling parser. 
(#5781)
 add 04496d3  [BYOC][FIX] Infer types in MergeComposite (#5766)

No new revisions were added by this update.

Summary of changes:
 src/relay/transforms/merge_composite.cc | 13 +++--
 tests/python/relay/test_pass_merge_composite.py | 64 +++--
 2 files changed, 59 insertions(+), 18 deletions(-)



[GitHub] [incubator-tvm] tqchen merged pull request #5781: [Relay][Frontend][ONNX] Add storage_order ignore in pooling layer.

2020-06-12 Thread GitBox


tqchen merged pull request #5781:
URL: https://github.com/apache/incubator-tvm/pull/5781


   






