[GitHub] [incubator-tvm] tqchen commented on issue #5440: import vta: Cannot find config in /3rdparty/vta-hw/config/vta_config.json

2020-04-25 Thread GitBox


tqchen commented on issue #5440:
URL: https://github.com/apache/incubator-tvm/issues/5440#issuecomment-619479804


   Please try the suggested step; please also feel free to follow up on 
https://discuss.tvm.ai/



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] Menooker commented on a change in pull request #5357: [Relay] enable blocking format in x86 conv2d and fold scale axis

2020-04-25 Thread GitBox


Menooker commented on a change in pull request #5357:
URL: https://github.com/apache/incubator-tvm/pull/5357#discussion_r415208733



##
File path: src/relay/transforms/fold_scale_axis.cc
##
@@ -314,6 +318,42 @@ class ForwardPrep : private ExprVisitor {
   }
 };
 
+static bool IsIntInArray(const Array<Integer>& axis, int v) {
+  for (size_t i = 0; i < axis.size(); i++) {
+    if (axis[i] == v)
+      return true;
+  }
+  return false;
+}
+
+static Expr ReshapeToMatchAxis(Expr scale, const Array<PrimExpr>& shape,
+                               const Array<Integer>& axis) {
+  Array<Integer> arr;
+  for (size_t i = 0; i < shape.size(); i++) {
+    if (IsIntInArray(axis, i)) {
+      auto node = shape[i].as<IntImmNode>();
+      if (!node) {
+        // if the shape is not a constant, use normal transform
+        return Expr();

Review comment:
   I have changed `CHECK(scale.defined())` now. It now falls back to an 
"optimization failure" rather than a compilation error.
   For ForwardRewrite, if the shape is not constant, the rewriter will return 
`Expr()`.
   For BackwardRewrite, if the shape is not constant, the rewriter will return 
`transformer->NormalCallTransform(call.operator->())`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] Menooker commented on a change in pull request #5357: [Relay] enable blocking format in x86 conv2d and fold scale axis

2020-04-25 Thread GitBox


Menooker commented on a change in pull request #5357:
URL: https://github.com/apache/incubator-tvm/pull/5357#discussion_r415207561



##
File path: src/relay/transforms/fold_scale_axis.cc
##
@@ -39,6 +39,10 @@ namespace relay {
  *
  * Use namespace to reduce potential naming conflict.
  */
+
+extern Expr MakeReshape(Expr data,
+                        Array<Integer> newshape);

Review comment:
   Would you please suggest a `.h` where we can move this declaration to? The 
function is defined in `relay/transform.cc`, but it is not declared in any 
header file.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] Menooker commented on a change in pull request #5357: [Relay] enable blocking format in x86 conv2d and fold scale axis

2020-04-25 Thread GitBox


Menooker commented on a change in pull request #5357:
URL: https://github.com/apache/incubator-tvm/pull/5357#discussion_r415206359



##
File path: src/relay/transforms/fold_scale_axis.cc
##
@@ -518,13 +564,30 @@ Expr Conv2DForwardRewrite(const Call& ref_call,
 
   // match the ic_axis
   if (is_depthwise_conv2d) {
-    Expr scale = ExpandBiasToMatchAxis(
-        sdata->scale, kernel_layout.ndim(), {big_oc_axis});
-    weight = Multiply(weight, scale);
+    if (is_simple) {
+      Expr scale = ExpandBiasToMatchAxis(
+          sdata->scale, kernel_layout.ndim(), {big_ko_axis});
+      weight = Multiply(weight, scale);
+    } else {
+      weight = Multiply(weight, ReshapeToMatchAxis(sdata->scale,
+          weight->type_as<TensorTypeNode>()->shape,
+          {big_ko_axis, small_ko_axis}));
+      if (!weight.defined())
+        return Expr();

Review comment:
   If I am understanding correctly, lines 166~175 in 
`src/relay/transforms/forward_rewrite.cc` will take care of `Expr()`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] Menooker commented on a change in pull request #5357: [Relay] enable blocking format in x86 conv2d and fold scale axis

2020-04-25 Thread GitBox


Menooker commented on a change in pull request #5357:
URL: https://github.com/apache/incubator-tvm/pull/5357#discussion_r415204685



##
File path: src/relay/transforms/fold_scale_axis.cc
##
@@ -518,13 +564,30 @@ Expr Conv2DForwardRewrite(const Call& ref_call,
 
   // match the ic_axis
   if (is_depthwise_conv2d) {
-    Expr scale = ExpandBiasToMatchAxis(
-        sdata->scale, kernel_layout.ndim(), {big_oc_axis});
-    weight = Multiply(weight, scale);
+    if (is_simple) {
+      Expr scale = ExpandBiasToMatchAxis(
+          sdata->scale, kernel_layout.ndim(), {big_ko_axis});
+      weight = Multiply(weight, scale);
+    } else {
+      weight = Multiply(weight, ReshapeToMatchAxis(sdata->scale,
+          weight->type_as<TensorTypeNode>()->shape,
+          {big_ko_axis, small_ko_axis}));
+      if (!weight.defined())
+        return Expr();

Review comment:
   I can see in line 542 and 543 in the same function:
   ```
if (sdata == nullptr) return Expr();
if (sweight != nullptr) return Expr();
   ```
   
   Looks like returning `Expr()` just means that the optimization fails?
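   The convention the reviewers are converging on — an undefined return value 
meaning "skip the rewrite, keep the original call" — can be sketched in Python. 
The names below are illustrative stand-ins, not TVM's actual API; returning 
`None` plays the role of returning an undefined `Expr()` in the C++ pass:

   ```python
def fold_scale(call, scale_shape):
    """Try to fold a scale into a weight; return None on failure.

    The fold only works when every dimension involved is a constant,
    mirroring the `as<IntImmNode>()` check in ReshapeToMatchAxis.
    """
    if not all(isinstance(d, int) for d in scale_shape):
        return None  # optimization failure, not a compile error
    return ("folded", call, tuple(scale_shape))

def rewrite(call, scale_shape):
    folded = fold_scale(call, scale_shape)
    # fall back to the normal (unrewritten) call when folding bails out
    return folded if folded is not None else call

print(rewrite("conv2d", [16, 4]))     # folding succeeds
print(rewrite("conv2d", [16, "n"]))   # symbolic dim: original call is kept
   ```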





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on pull request #5444: [KERAS]Embedding layer

2020-04-25 Thread GitBox


FrozenGene commented on pull request #5444:
URL: https://github.com/apache/incubator-tvm/pull/5444#issuecomment-619472461


   Thanks @siju-samuel 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated: [KERAS]Embedding layer (#5444)

2020-04-25 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 1014fef  [KERAS]Embedding layer (#5444)
1014fef is described below

commit 1014fefa54b5f0a359501b6d19ea3b5a52d6dca6
Author: Samuel 
AuthorDate: Sun Apr 26 08:28:02 2020 +0530

[KERAS]Embedding layer (#5444)
---
 python/tvm/relay/frontend/keras.py  | 10 +-
 tests/python/frontend/keras/test_forward.py | 20 +++-
 2 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/python/tvm/relay/frontend/keras.py 
b/python/tvm/relay/frontend/keras.py
index bf91bc1..43065be 100644
--- a/python/tvm/relay/frontend/keras.py
+++ b/python/tvm/relay/frontend/keras.py
@@ -207,6 +207,14 @@ def _convert_permute(inexpr, keras_layer, _):
     return _op.transpose(inexpr, axes=(0,) + keras_layer.dims)
 
 
+def _convert_embedding(inexpr, keras_layer, etab):
+    indices = inexpr
+    weightList = keras_layer.get_weights()
+    weight = etab.new_const(weightList[0])
+    out = _op.take(weight, indices.astype('int32'), axis=0)
+
+    return out
+
 def _convert_dense(inexpr, keras_layer, etab):
     weightList = keras_layer.get_weights()
     weight = etab.new_const(weightList[0].transpose([1, 0]))
@@ -893,7 +901,7 @@ _convert_map = {
     'Maximum'  : _convert_merge,
     'Dot'  : _convert_merge,
     'Permute'  : _convert_permute,
-    # 'Embedding'  : _convert_embedding,
+    'Embedding': _convert_embedding,
     # 'RepeatVector'   : _convert_repeat_vector,
 
     'InputLayer'   : _default_skip,
diff --git a/tests/python/frontend/keras/test_forward.py 
b/tests/python/frontend/keras/test_forward.py
index b764137..b4a1816 100644
--- a/tests/python/frontend/keras/test_forward.py
+++ b/tests/python/frontend/keras/test_forward.py
@@ -466,6 +466,24 @@ class TestKeras:
         keras_model = keras.models.Model(data, x)
         verify_keras_frontend(keras_model, layout='NDHWC')
 
+
+    def test_forward_embedding(self, keras):
+        data = keras.layers.Input(shape=(2, 4), dtype="int32")
+        x = keras.layers.Embedding(10, 3)(data)
+        keras_model = keras.models.Model(data, x)
+        verify_keras_frontend(keras_model, need_transpose=False)
+
+        data = keras.layers.Input(shape=(2, 3, 4), dtype="int32")
+        x = keras.layers.Embedding(4, 5)(data)
+        keras_model = keras.models.Model(data, x)
+        verify_keras_frontend(keras_model, need_transpose=False)
+
+        data = keras.layers.Input(shape=(6, 2, 3, 4), dtype="int32")
+        x = keras.layers.Embedding(4, 5)(data)
+        keras_model = keras.models.Model(data, x)
+        verify_keras_frontend(keras_model, need_transpose=False)
+
+
 if __name__ == '__main__':
     for k in [keras, tf_keras]:
         sut = TestKeras()
@@ -497,4 +515,4 @@ if __name__ == '__main__':
         sut.test_forward_pool3d(keras=k)
         sut.test_forward_upsample3d(keras=k)
         sut.test_forward_zero_padding3d(keras=k)
-
+        sut.test_forward_embedding(keras=k)
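For reference, the `_convert_embedding` added in this commit maps a Keras 
Embedding layer onto a gather along axis 0 of the weight matrix, which is what 
`_op.take(weight, indices, axis=0)` expresses in Relay. A NumPy sketch of the 
same semantics:

```python
import numpy as np

# vocab_size=10, embedding_dim=3, as in keras.layers.Embedding(10, 3)
weight = np.arange(30, dtype="float32").reshape(10, 3)
indices = np.array([[1, 3, 0, 2]], dtype="int32")  # a batch of token ids

# Embedding lookup == gathering rows of the weight matrix
out = np.take(weight, indices, axis=0)
assert out.shape == (1, 4, 3)          # input shape + (embedding_dim,)
assert (out[0, 0] == weight[1]).all()  # row 1 of the table
```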



[GitHub] [incubator-tvm] lixiaoquan commented on a change in pull request #5429: [RELAY][TF] Support symbolic newshape for Reshape

2020-04-25 Thread GitBox


lixiaoquan commented on a change in pull request #5429:
URL: https://github.com/apache/incubator-tvm/pull/5429#discussion_r415197766



##
File path: include/tvm/relay/op_attr_types.h
##
@@ -81,10 +81,16 @@ using TOpIsStateful = bool;
  */
 using TNonComputational = bool;
 
+enum ShapeDependantKind {
+  kShapeDependantShape = 0,
+  kShapeDependantData = 1,
+  kShapeDependantBoth = 2,

Review comment:
   Thanks for your comment.
   
   I tried two ways to use the data tensor's shape directly.
   First, I passed the data tensor's shape to `_reshape_shape_func`, but I got 
this error:
   ```
   ValueError: All indices are supposed to be constants
   ```
   Second, I tried to construct a tensor with `tvm.te.placeholder` based on the 
data tensor's shape, but that doesn't work because it can't be passed to the 
`tvm.lower` call that happens later.
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] lixiaoquan commented on a change in pull request #5429: [RELAY][TF] Support symbolic newshape for Reshape

2020-04-25 Thread GitBox


lixiaoquan commented on a change in pull request #5429:
URL: https://github.com/apache/incubator-tvm/pull/5429#discussion_r415196802



##
File path: include/tvm/relay/op_attr_types.h
##
@@ -81,10 +81,16 @@ using TOpIsStateful = bool;
  */
 using TNonComputational = bool;
 
+enum ShapeDependantKind {
+  kShapeDependantShape = 0,
+  kShapeDependantData = 1,
+  kShapeDependantBoth = 2,

Review comment:
   Thanks for your comment
   
   I checked #4312 and tried to use `data.shape` the same way as 
`_strided_slice_shape_func()`, but I got this error from the hybrid script 
parser:
   ```
   ValueError: All indices are supposed to be constants
   ```
   It seems `_strided_slice_shape_func` only uses constant indices when 
accessing `data.shape`, while `_reshape_shape_func` uses variables as indices 
to access `data_shape`.
   
   I'd appreciate it if you could give me a hint about how to solve this.
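For context, the computation a reshape shape function has to perform is small 
enough to state in plain Python. This is an illustrative sketch, not the actual 
hybrid-script `_reshape_shape_func` (which is where the constant-index 
restriction discussed above bites); the helper name is hypothetical:

```python
def reshape_shape_func(data_shape, newshape):
    """Compute the concrete output shape of a reshape.

    Resolves a single -1 entry in `newshape` from the total element
    count of `data_shape`, the way numpy-style reshape does.
    """
    out, unknown, known = [], None, 1
    for i, s in enumerate(newshape):
        if s == -1:
            unknown = i
            out.append(0)  # placeholder, filled in below
        else:
            out.append(s)
            known *= s
    if unknown is not None:
        total = 1
        for s in data_shape:
            total *= s
        out[unknown] = total // known
    return out

# (2, 3, 4) reshaped with newshape (6, -1): the -1 resolves to 24 // 6 = 4
assert reshape_shape_func([2, 3, 4], [6, -1]) == [6, 4]
```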





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] libaihong commented on pull request #5428: [CODEGEN][CUDA] Fix a bug when vectorized load&store was involved for…

2020-04-25 Thread GitBox


libaihong commented on pull request #5428:
URL: https://github.com/apache/incubator-tvm/pull/5428#issuecomment-619465094


   > 
   > 
   > Could you please add a few tests. Do you mean loading uchar2x4? We could 
load/store them as uint16_t x 4. I do not see similar code in PrintType below:
   > 
   > 
https://github.com/apache/incubator-tvm/blob/master/src/target/source/codegen_cuda.cc#L186
   
   I've added some unit tests.
   This modification only adds code for char2 when the number of lanes is 2. 
When the number of lanes is 4, it uses the old logic and will not load uchar4.
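The idea behind the fix — moving two adjacent 8-bit lanes as one 16-bit word — 
can be illustrated with a NumPy sketch (illustrative only; the PR itself emits 
CUDA `char2` in the generated source):

```python
import numpy as np

# 4 int8 lanes; grouped in pairs they form 2 "char2" units
data = np.array([1, -2, 3, -4], dtype=np.int8)

# Reinterpreting the buffer as uint16 halves the number of loads/stores
packed = data.view(np.uint16)
assert packed.shape == (2,)

# Round-tripping back to int8 leaves the bytes untouched
assert (packed.view(np.int8) == data).all()
```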



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5357: [Relay] enable blocking format in x86 conv2d and fold scale axis

2020-04-25 Thread GitBox


yzhliu commented on a change in pull request #5357:
URL: https://github.com/apache/incubator-tvm/pull/5357#discussion_r415164446



##
File path: src/relay/transforms/fold_scale_axis.cc
##
@@ -314,6 +318,42 @@ class ForwardPrep : private ExprVisitor {
   }
 };
 
+static bool IsIntInArray(const Array<Integer>& axis, int v) {
+  for (size_t i = 0; i < axis.size(); i++) {
+    if (axis[i] == v)
+      return true;
+  }
+  return false;
+}
+
+static Expr ReshapeToMatchAxis(Expr scale, const Array<PrimExpr>& shape,
+                               const Array<Integer>& axis) {
+  Array<Integer> arr;
+  for (size_t i = 0; i < shape.size(); i++) {
+    if (IsIntInArray(axis, i)) {
+      auto node = shape[i].as<IntImmNode>();
+      if (!node) {
+        // if the shape is not a constant, use normal transform
+        return Expr();

Review comment:
   won't this cause a failure, since you later do `CHECK(scale.defined());`?

##
File path: src/relay/transforms/fold_scale_axis.cc
##
@@ -39,6 +39,10 @@ namespace relay {
  *
  * Use namespace to reduce potential naming conflict.
  */
+
+extern Expr MakeReshape(Expr data,
+                        Array<Integer> newshape);

Review comment:
   can we move it to .h?

##
File path: src/relay/transforms/fold_scale_axis.cc
##
@@ -518,13 +564,30 @@ Expr Conv2DForwardRewrite(const Call& ref_call,
 
   // match the ic_axis
   if (is_depthwise_conv2d) {
-    Expr scale = ExpandBiasToMatchAxis(
-        sdata->scale, kernel_layout.ndim(), {big_oc_axis});
-    weight = Multiply(weight, scale);
+    if (is_simple) {
+      Expr scale = ExpandBiasToMatchAxis(
+          sdata->scale, kernel_layout.ndim(), {big_ko_axis});
+      weight = Multiply(weight, scale);
+    } else {
+      weight = Multiply(weight, ReshapeToMatchAxis(sdata->scale,
+          weight->type_as<TensorTypeNode>()->shape,
+          {big_ko_axis, small_ko_axis}));
+      if (!weight.defined())
+        return Expr();

Review comment:
   would this cause a problem?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #5429: [RELAY][TF] Support symbolic newshape for Reshape

2020-04-25 Thread GitBox


icemelon9 commented on a change in pull request #5429:
URL: https://github.com/apache/incubator-tvm/pull/5429#discussion_r415137698



##
File path: include/tvm/relay/op_attr_types.h
##
@@ -81,10 +81,16 @@ using TOpIsStateful = bool;
  */
 using TNonComputational = bool;
 
+enum ShapeDependantKind {
+  kShapeDependantShape = 0,
+  kShapeDependantData = 1,
+  kShapeDependantBoth = 2,

Review comment:
   No need to have `kShapeDependantBoth`. When you provide the data tensor 
as input to the shape function, you can retrieve its shape directly. So it's 
unnecessary to provide both shape and data tensor.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5429: [RELAY][TF] Support symbolic newshape for Reshape

2020-04-25 Thread GitBox


kevinthesun commented on a change in pull request #5429:
URL: https://github.com/apache/incubator-tvm/pull/5429#discussion_r415134966



##
File path: include/tvm/relay/op_attr_types.h
##
@@ -81,10 +81,16 @@ using TOpIsStateful = bool;
  */
 using TNonComputational = bool;
 
+enum ShapeDependantKind {
+  kShapeDependantShape = 0,
+  kShapeDependantData = 1,
+  kShapeDependantBoth = 2,

Review comment:
   Also, this change involves a lot of changes in the compile engine as well. 
AFAIK, dynamic strided_slice doesn't require these changes; dynamic reshape 
should not be that different?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5429: [RELAY][TF] Support symbolic newshape for Reshape

2020-04-25 Thread GitBox


kevinthesun commented on a change in pull request #5429:
URL: https://github.com/apache/incubator-tvm/pull/5429#discussion_r415134500



##
File path: include/tvm/relay/op_attr_types.h
##
@@ -81,10 +81,16 @@ using TOpIsStateful = bool;
  */
 using TNonComputational = bool;
 
+enum ShapeDependantKind {
+  kShapeDependantShape = 0,
+  kShapeDependantData = 1,
+  kShapeDependantBoth = 2,

Review comment:
   A bit curious about whether we need `kShapeDependantBoth`. Currently we can 
support data-dependent shape functions, and reshape with a symbolic newshape 
should also be data dependent.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] antinucleon commented on issue #5427: [Relay] Lack complex tests for parser

2020-04-25 Thread GitBox


antinucleon commented on issue #5427:
URL: https://github.com/apache/incubator-tvm/issues/5427#issuecomment-619423150


   Before the parser is fixed, in case someone needs a parser in a hurry, here 
is my emergency use parser: 
   
   ```
import re
import random

import tvm
from tvm import auto_scheduler
from tvm import relay

class ReRelayParser(object):
    def __init__(self, expr_text, param_dict):
        self.memo = {}
        self.expr_text = expr_text
        self.param_dict = param_dict

    def _extract_args(self):
        ARG_LIST = re.compile(r'fn\s\((.*?)\)\s->')
        ARGS = re.compile(r'\%([a-zA-Z0-9_]+):\sTensor\[\(([0-9,\s]+)\),\s([a-z0-9]+)\]')
        arg_list = ARG_LIST.findall(self.expr_text)[0]
        args = ARGS.findall(arg_list)
        ret = []
        for name, ss, dtype in args:
            cmd = "{name} = relay.var('{name}', shape=[{shape}], dtype='{dtype}')".format(name=name, shape=ss, dtype=dtype)
            ret.append(cmd)
            self.memo["%" + name] = name
        return ret

    def _extract_body(self):
        ret = []
        STMT = re.compile(r'(\%\d+)\s=\s([a-zA-Z0-9._]+)\(([a-zA-Z\%0-9,\s_=\[\]"]+)\)')
        LAST_STMT = re.compile(r'([a-zA-Z0-9._]+)\(([a-zA-Z\%0-9,\s_=\[\]"]+)\)')

        def random_name(length=8):
            name = ""
            for i in range(length):
                name += chr(random.randint(97, 122))
            return name

        def process_args(args):
            VAR_ARGS = re.compile(r'(%[0-9a-zA-z_]+)')
            var_arg = VAR_ARGS.findall(args)
            var_len = len(var_arg)
            tmp = args.split(",")
            return var_arg, ",".join(tmp[var_len:])

        tmp = STMT.findall(self.expr_text)
        for ss in tmp:
            output = ss[0]
            op = ss[1]
            args = ss[2]
            self.memo[output] = random_name()
            var_args, attrs = process_args(args)
            cmd = "{name} = relay.{op}({var_args}, {attrs})".format(
                name=self.memo[output],
                op=op,
                var_args=", ".join([self.memo[x] for x in var_args]),
                attrs=attrs
            )
            ret.append(cmd)
        op, args = LAST_STMT.findall(self.expr_text)[-1]
        var_args, attrs = process_args(args)
        cmd = "output_expr = relay.{op}({var_args}, {attrs})".format(
            op=op,
            var_args=", ".join([self.memo[x] for x in var_args]),
            attrs=attrs
        )
        ret.append(cmd)
        return ret

    def _convert_parmas(self):
        ret = {}
        for key in self.param_dict.keys():
            new_key = "v" + key
            ret[new_key] = self.param_dict[key]
        return ret

    def convert(self):
        cmds = []
        cmds.extend(self._extract_args())
        cmds.extend(self._extract_body())
        new_params = self._convert_parmas()
        l = {}
        exec("\n".join(cmds), None, l)
        output_expr = l["output_expr"]
        func = relay.Function(relay.analysis.free_vars(output_expr), output_expr)
        # mod = tvm.ir.IRModule.from_expr(output_expr)
        return func, new_params


if __name__ == "__main__":
    from PIL import Image
    from matplotlib import pyplot as plt
    import numpy as np
    from tvm.contrib.download import download_testdata

    image_url = 'https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true'
    image_path = download_testdata(image_url, 'cat.png', module='data')
    resized_image = Image.open(image_path).resize((224, 224))
    #plt.imshow(resized_image)
    #plt.show()
    image_data = np.asarray(resized_image).astype("float32")

    # Add a dimension to the image so that we have NHWC format layout
    image_data = np.expand_dims(image_data, axis=0)

    synset_url = ''.join(['https://gist.githubusercontent.com/zhreshold/',
                          '4d0b62f3d01426887599d4f7ede23ee5/raw/',
                          '596b27d23537e5a1b5751d2b0481ef172f58b539/',
                          'imagenet1000_clsid_to_human.txt'])
    synset_name = 'imagenet1000_clsid_to_human.txt'
    synset_path = download_testdata(synset_url, synset_name, module='data')
    with open(synset_path) as f:
        synset = eval(f.read())

    # Preprocess image as described here:
    # https://github.com/tensorflow/models/blob/edb6ed22a801665946c63d650ab9a0b23d98e1b1/research/slim/preprocessing/inception_preprocessing.py#L243
    #image_data[:, :, :, 0] = 2.0 / 255.0 * image_data[:, :, :, 0] - 1
    #image_data[:, :, :, 1] = 2.0 / 255.0 * image_data[:, :, :, 1] - 1
    #image_data[:, :, :, 2] = 2.
   ```

[GitHub] [incubator-tvm] mbrookhart commented on pull request #5441: Add TopK to ONNX Frontend

2020-04-25 Thread GitBox


mbrookhart commented on pull request #5441:
URL: https://github.com/apache/incubator-tvm/pull/5441#issuecomment-619418119


   Thank you!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5444: [KERAS]Embedding layer

2020-04-25 Thread GitBox


siju-samuel opened a new pull request #5444:
URL: https://github.com/apache/incubator-tvm/pull/5444


   @FrozenGene Please help to review the embedding layer op in Keras. Thanks



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] woniuasd commented on issue #5133: [Torch] A list of missing op conversion in need of help

2020-04-25 Thread GitBox


woniuasd commented on issue #5133:
URL: https://github.com/apache/incubator-tvm/issues/5133#issuecomment-619408318


   @siju-samuel You mean I can use embedding instead of `aten::index_put_`?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated: [RELAY] Move frontend utils (#5345)

2020-04-25 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 3f47b32  [RELAY] Move frontend utils (#5345)
3f47b32 is described below

commit 3f47b32774dfcbe8f51f0ab0d86e09535904cb8d
Author: mbaret <55580676+mba...@users.noreply.github.com>
AuthorDate: Sat Apr 25 17:18:30 2020 +0100

[RELAY] Move frontend utils (#5345)

* [RELAY] Move frontend utils

The util file currently under frontend is used from
outside of frontend (in qnn/op/legalizations). This suggests
that the file should be pushed up to a higher level.

The benefit from this change is that importing qnn no longer
also imports all the frontends.

* Inline get_scalar_from_constant

Change-Id: I1cc64e9ecb0eadb6ac0f7b62e6ea174644af4ad4

* Remove util.py from Relay

Change-Id: If9cd7cf3fc0bd1861a3a9b5604f338e084d8db96

* Shorten functions

Change-Id: Ieb537d82e6ee52421ff05a90cd00a03679ffebf2

* Line length

Change-Id: I1d216b7e73a060c4f118f5da50ce58b18eba907f
---
 python/tvm/relay/frontend/tflite.py  | 12 +++-
 python/tvm/relay/frontend/util.py| 33 
 python/tvm/relay/qnn/op/legalizations.py | 11 ++-
 3 files changed, 21 insertions(+), 35 deletions(-)

diff --git a/python/tvm/relay/frontend/tflite.py 
b/python/tvm/relay/frontend/tflite.py
index 275d0ce..bba7d3b 100644
--- a/python/tvm/relay/frontend/tflite.py
+++ b/python/tvm/relay/frontend/tflite.py
@@ -29,7 +29,6 @@ from .. import function as _function
 from .. import op as _op
 from .. import qnn as _qnn
 from ... import nd as _nd
-from .util import get_scalar_from_constant
 from .common import ExprTable
 from .common import infer_shape as _infer_shape
 
@@ -2281,6 +2280,17 @@ class OperatorConverter(object):
     def has_expr(self, input_tensor_idx):
         return self.exp_tab.has_expr(get_tensor_name(self.subgraph, input_tensor_idx))
 
+
+def get_scalar_from_constant(expr):
+    """ Returns scalar value from Relay constant scalar. """
+    assert isinstance(expr, _expr.Constant) and not expr.data.shape, \
+        "Expr is not a constant scalar."
+    value = expr.data.asnumpy()
+    assert value.dtype == np.dtype(np.int32) or value.dtype == np.dtype(np.float32), \
+        "value must be float32/int32"
+    return np.asscalar(value)
+
+
 def build_str_map(obj):
     """Build string map of TFLite enum int value
 
diff --git a/python/tvm/relay/frontend/util.py 
b/python/tvm/relay/frontend/util.py
deleted file mode 100644
index a7f89a3..000
--- a/python/tvm/relay/frontend/util.py
+++ /dev/null
@@ -1,33 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-# pylint: disable=wildcard-import, redefined-builtin, invalid-name
-""" Utility functions that are used across many directories. """
-from __future__ import absolute_import
-import numpy as np
-from .. import expr as _expr
-
-def get_scalar_from_constant(expr):
-    """ Returns scalar value from Relay constant scalar. """
-    assert isinstance(expr, _expr.Constant) and not expr.data.shape, \
-        "Expr is not a constant scalar."
-    value = expr.data.asnumpy()
-    if value.dtype == np.dtype(np.int32):
-        return int(value)
-    if value.dtype == np.dtype(np.float32):
-        return float(value)
-    assert False, "Constant expr must be float32/int32"
-    return None  # To suppress pylint
diff --git a/python/tvm/relay/qnn/op/legalizations.py 
b/python/tvm/relay/qnn/op/legalizations.py
index b1c1909..c96a730 100644
--- a/python/tvm/relay/qnn/op/legalizations.py
+++ b/python/tvm/relay/qnn/op/legalizations.py
@@ -20,8 +20,8 @@ from __future__ import absolute_import
 
 import tvm
 from tvm import relay
+import numpy as np
 from .. import op as reg
-from ...frontend.util import get_scalar_from_constant
 
 #
 # Register the functions for different operators.
@@ -54,6 +54,15 @@ def qnn_dense_legalize(attrs, inputs, types):
 # Helper functions.
 ###
 
+def get_scalar_from_con

[GitHub] [incubator-tvm] HK017 commented on issue #5443: How to implement a demo of TVM C++ object-detection?

2020-04-25 Thread GitBox


HK017 commented on issue #5443:
URL: https://github.com/apache/incubator-tvm/issues/5443#issuecomment-619376478


   > Suggest asking this question on the discuss forum: https://discuss.tvm.ai/
   
   I asked for help on the discuss forum (https://discuss.tvm.ai/), but because 
no one replied to me, I came here.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on issue #5443: How to implement a demo of TVM C++ object-detection?

2020-04-25 Thread GitBox


FrozenGene commented on issue #5443:
URL: https://github.com/apache/incubator-tvm/issues/5443#issuecomment-619351941


   Suggest asking this question on the discuss forum: https://discuss.tvm.ai/



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] HK017 opened a new issue #5443: How to implement a demo of TVM C++ object-detection?

2020-04-25 Thread GitBox


HK017 opened a new issue #5443:
URL: https://github.com/apache/incubator-tvm/issues/5443


   Hi all developers, how can I transform tvm::runtime::NDArray data into C++ data so that I can process it more easily? For example, in the YOLOv3 (Darknet version) TVM C++ code, how should the output of get_output be handled?
   
   If you can see how to solve this problem, please help me; I will be very grateful!!!
   Thanks again
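   [Editor's note] A minimal sketch of one common approach. The `Tensor` struct below is a local stand-in for the relevant DLTensor fields so the sketch is self-contained; in real TVM C++ code you would use `tvm::runtime::NDArray` (from `<tvm/runtime/ndarray.h>`) returned by `get_output` and copy it out with `NDArray::CopyToBytes(out.data(), nbytes)` instead of the `memcpy`:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Stand-in for the DLTensor fields used here; with the real API these come
// from tvm::runtime::NDArray via operator-> (arr->ndim, arr->shape, ...).
struct Tensor {
  void* data;      // raw buffer, assumed float32 here
  int ndim;        // number of dimensions
  int64_t* shape;  // size of each dimension
};

// Flatten a tensor into a std::vector<float> for ordinary C++ processing
// (e.g. decoding YOLO detection boxes from the network output).
std::vector<float> TensorToVector(const Tensor& t) {
  size_t n = 1;
  for (int i = 0; i < t.ndim; ++i) n *= static_cast<size_t>(t.shape[i]);
  std::vector<float> out(n);
  std::memcpy(out.data(), t.data, n * sizeof(float));  // NDArray: CopyToBytes
  return out;
}
```

   Once the data is in a `std::vector<float>`, post-processing (thresholding, NMS, etc.) is plain C++.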
   
   







[incubator-tvm] branch master updated: [CodeGen] Cleanup generated code (#5424)

2020-04-25 Thread wuwei
This is an automated email from the ASF dual-hosted git repository.

wuwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 952def5  [CodeGen] Cleanup generated code (#5424)
952def5 is described below

commit 952def53da51e6cf17c5dbf50b92e193622ca695
Author: Wei Pan <60017475+wpan1...@users.noreply.github.com>
AuthorDate: Sat Apr 25 01:20:27 2020 -0700

[CodeGen] Cleanup generated code (#5424)

- remove unnecessary white spaces from storage kind
- do not start a new scope for vectorization, as temporary
  variables are all uniquely generated.

The above two changes make vectorized code much cleaner.

Signed-off-by: Wei Pan 
---
 src/target/source/codegen_c.cc  | 10 +-
 src/target/source/codegen_c.h   | 23 ---
 src/target/source/codegen_cuda.cc   | 10 +-
 src/target/source/codegen_metal.cc  |  7 +++
 src/target/source/codegen_opencl.cc |  5 ++---
 5 files changed, 7 insertions(+), 48 deletions(-)
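[Editor's note] An illustrative sketch of the whitespace change (function names here are made up, not the actual CodeGenC API): before this commit the callers printed the storage scope and then an unconditional `' '`, so buffers in the default scope — where `PrintStorageScope` emits nothing — picked up a stray leading space. After the cleanup, the separator is emitted only together with a non-empty scope keyword:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Pre-cleanup behavior: scope, then an unconditional separator space.
// With an empty scope this yields " float* x" (stray leading space).
std::string DeclBefore(const std::string& scope, const std::string& type,
                       const std::string& name) {
  std::ostringstream os;
  os << scope << ' ';  // separator printed even when scope is empty
  os << type << ' ' << name;
  return os.str();
}

// Post-cleanup behavior: the separator belongs to the scope keyword and
// is printed only when there is one, so default-scope buffers stay clean.
std::string DeclAfter(const std::string& scope, const std::string& type,
                      const std::string& name) {
  std::ostringstream os;
  if (!scope.empty()) os << scope << ' ';
  os << type << ' ' << name;
  return os.str();
}
```

The same idea explains why each `stream << ' ';` after `PrintStorageScope(...)` is deleted in the hunks below.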

diff --git a/src/target/source/codegen_c.cc b/src/target/source/codegen_c.cc
index 84604b8..6461908 100644
--- a/src/target/source/codegen_c.cc
+++ b/src/target/source/codegen_c.cc
@@ -94,7 +94,6 @@ void CodeGenC::AddFunction(const PrimFunc& f) {
   auto it = alloc_storage_scope_.find(v.get());
   if (it != alloc_storage_scope_.end()) {
 PrintStorageScope(it->second, stream);
-stream << ' ';
   }
 
   PrintType(GetType(v), stream);
@@ -179,7 +178,6 @@ std::string CodeGenC::GetBufferRef(
   if (!scope.empty() && IsScopePartOfType()) {
 PrintStorageScope(scope, os);
   }
-  os << ' ';
   PrintType(t, os);
   os << "*)" << vid << ')';
 } else {
@@ -213,7 +211,6 @@ std::string CodeGenC::GetBufferRef(
 if (!scope.empty() && IsScopePartOfType()) {
   PrintStorageScope(scope, os);
 }
-os << ' ';
 PrintType(t, os);
 os << "*)(";
 if (!HandleTypeMatch(buffer, t.element_of())) {
@@ -221,7 +218,6 @@ std::string CodeGenC::GetBufferRef(
   if (!scope.empty() && IsScopePartOfType()) {
 PrintStorageScope(scope, os);
   }
-  os << ' ';
   PrintType(t.element_of(), os);
   os << "*)";
 }
@@ -681,7 +677,6 @@ void CodeGenC::VisitExpr_(const LoadNode* op, std::ostream& os) {  // NOLINT(*)
 auto it = alloc_storage_scope_.find(op->buffer_var.get());
 if (it != alloc_storage_scope_.end()) {
   PrintStorageScope(it->second, value_temp);
-  value_temp << ' ';
 }
   }
   PrintType(elem_type, value_temp);
@@ -731,7 +726,6 @@ void CodeGenC::VisitStmt_(const StoreNode* op) {
 auto it = alloc_storage_scope_.find(op->buffer_var.get());
 if (it != alloc_storage_scope_.end()) {
   PrintStorageScope(it->second, stream);
-  stream << ' ';
 }
   }
   PrintType(elem_type, stream);
@@ -823,10 +817,8 @@ void CodeGenC::VisitStmt_(const AllocateNode* op) {
const VarNode* buffer = op->buffer_var.as<VarNode>();
 std::string scope = alloc_storage_scope_.at(buffer);
 PrintStorageScope(scope, stream);
-stream << ' ';
 PrintType(op->dtype, stream);
-stream << ' '<< vid << '['
-   << constant_size << "];\n";
+stream << ' ' << vid << '[' << constant_size << "];\n";
 
   RegisterHandleType(op->buffer_var.get(), op->dtype);
   this->PrintStmt(op->body);
diff --git a/src/target/source/codegen_c.h b/src/target/source/codegen_c.h
index db655be..4fb4b7e 100644
--- a/src/target/source/codegen_c.h
+++ b/src/target/source/codegen_c.h
@@ -257,29 +257,6 @@ class CodeGenC :
   /*! \brief the data type of allocated buffers */
   std::unordered_map<const VarNode*, DataType> handle_data_type_;
 
-  /*!
-   * \brief A RAII utility class for emitting code in a scoped region.
-   */
-  class EnterScopeRAII {
-// The codegen context.
-CodeGenC* cg;
-
-// The new scope level.
-int scope;
-
-   public:
-explicit EnterScopeRAII(CodeGenC* cg) : cg(cg) {
-  cg->PrintIndent();
-  cg->stream << "{\n";
-  scope = cg->BeginScope();
-}
-~EnterScopeRAII() {
-  cg->EndScope(scope);
-  cg->PrintIndent();
-  cg->stream << "}\n";
-}
-  };
-
  private:
   /*! \brief whether to print in SSA form */
   bool print_ssa_form_{false};
diff --git a/src/target/source/codegen_cuda.cc b/src/target/source/codegen_cuda.cc
index 02b5b41..56d7162 100644
--- a/src/target/source/codegen_cuda.cc
+++ b/src/target/source/codegen_cuda.cc
@@ -242,8 +242,6 @@ void CodeGenCUDA::PrintVecBinaryOp(
   this->PrintType(t, stream);
   stream << ' ' << sret << ";\n";
   {
-EnterScopeRAII scope(this);
-
 // Unpack into individual ops.
 std::string vlhs = SSAGetID(PrintExpr(lhs), lhs.dtype());
 std::string vrhs = SSAGetID(PrintExpr(rhs), rhs.dtype());
@@ -350,7 +348,7 @@