[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4899: [Relay][Frontend][Keras] NHWC import support.

2020-02-17 Thread GitBox
comaniac commented on a change in pull request #4899: [Relay][Frontend][Keras] 
NHWC import support.
URL: https://github.com/apache/incubator-tvm/pull/4899#discussion_r380301754
 
 

 ##
 File path: python/tvm/relay/frontend/keras.py
 ##
 @@ -751,6 +816,10 @@ def from_keras(model, shape=None):
     shape: dict of str to int list/tuple
         Input shapes of the model, optional
 
+    layout: str
 
 Review comment:
   Should we mention something like "the default layout is NCHW"?
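   For example (a sketch of possible wording):
   ```python
   layout: str
       One of 'NCHW' or 'NHWC', optional (defaults to 'NCHW').
       The layout in which to import the model's operators.
   ```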




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4899: [Relay][Frontend][Keras] NHWC import support.

2020-02-17 Thread GitBox
comaniac commented on a change in pull request #4899: [Relay][Frontend][Keras] 
NHWC import support.
URL: https://github.com/apache/incubator-tvm/pull/4899#discussion_r380300117
 
 

 ##
 File path: python/tvm/relay/frontend/keras.py
 ##
 @@ -274,8 +287,13 @@ def _convert_convolution(inexpr, keras_layer, etab):
     if pad_t == pad_b and pad_l == pad_r:
         params['padding'] = (pad_t, pad_l)
     else:
-        inexpr = _op.nn.pad(data=inexpr, pad_width=(
-            (0, 0), (0, 0), (pad_t, pad_b), (pad_l, pad_r)))
+        if etab.data_layout == 'NCHW':
 
 Review comment:
   ```python
   if pad_t ...:
 ...
   elif etab.data_layout == 'NCHW':
 ...
   else:
 ...
   ```
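   Filled in with the pad widths from the diff, that would read roughly as follows (a sketch; the NHWC branch assumes the spatial axes sit at positions 1 and 2 of `pad_width`):
   ```python
   if pad_t == pad_b and pad_l == pad_r:
       params['padding'] = (pad_t, pad_l)
   elif etab.data_layout == 'NCHW':
       # pad H and W, which are axes 2 and 3 in NCHW
       inexpr = _op.nn.pad(data=inexpr, pad_width=(
           (0, 0), (0, 0), (pad_t, pad_b), (pad_l, pad_r)))
   else:
       # NHWC: pad H and W, which are axes 1 and 2
       inexpr = _op.nn.pad(data=inexpr, pad_width=(
           (0, 0), (pad_t, pad_b), (pad_l, pad_r), (0, 0)))
   ```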




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4899: [Relay][Frontend][Keras] NHWC import support.

2020-02-17 Thread GitBox
comaniac commented on a change in pull request #4899: [Relay][Frontend][Keras] 
NHWC import support.
URL: https://github.com/apache/incubator-tvm/pull/4899#discussion_r380300684
 
 

 ##
 File path: python/tvm/relay/frontend/keras.py
 ##
 @@ -322,26 +353,39 @@ def _convert_separable_convolution(inexpr, keras_layer, etab):
     if pad_t == pad_b and pad_l == pad_r:
         params0['padding'] = (pad_t, pad_l)
     else:
-        inexpr = _op.nn.pad(data=inexpr, pad_width=(
-            (0, 0), (0, 0), (pad_t, pad_b), (pad_l, pad_r)))
+        if etab.data_layout == 'NCHW':
 
 Review comment:
   ditto




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4899: [Relay][Frontend][Keras] NHWC import support.

2020-02-17 Thread GitBox
comaniac commented on a change in pull request #4899: [Relay][Frontend][Keras] 
NHWC import support.
URL: https://github.com/apache/incubator-tvm/pull/4899#discussion_r380299492
 
 

 ##
 File path: python/tvm/relay/frontend/keras.py
 ##
 @@ -234,18 +234,29 @@ def _convert_dense(inexpr, keras_layer, etab):
 
 def _convert_convolution(inexpr, keras_layer, etab):
     _check_data_format(keras_layer)
+    if etab.data_layout == 'NHWC':
+        kernel_layout = 'HWIO'
+    else:
+        kernel_layout = 'OIHW'
     is_deconv = type(keras_layer).__name__ == 'Conv2DTranspose'
     is_depthconv = type(keras_layer).__name__ == 'DepthwiseConv2D'
     weightList = keras_layer.get_weights()
+    weight = weightList[0]
     if is_deconv:
-        kernel_h, kernel_w, n_filters, in_channels = weightList[0].shape
-        weight = weightList[0].transpose([3, 2, 0, 1])
+        kernel_h, kernel_w, n_filters, in_channels = weight.shape
+        if kernel_layout == 'OIHW':
+            weight = weight.transpose([3, 2, 0, 1])
     elif is_depthconv:
-        kernel_h, kernel_w, in_channels, depth_mult = weightList[0].shape
-        weight = weightList[0].transpose([2, 3, 0, 1])
+        kernel_h, kernel_w, in_channels, depth_mult = weight.shape
+        if kernel_layout == 'OIHW':
+            weight = weight.transpose([2, 3, 0, 1])
+        else:
+            kernel_layout = "HWOI"
 
 Review comment:
   This looks confusing at first glance. It might be better to put all the 
`kernel_layout` determination logic together.
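   For instance, the layout decision could live in one block up front (a 
hypothetical refactor mirroring the values used in the diff, assuming 
`is_depthconv` is computed first):
   ```python
   if etab.data_layout == 'NCHW':
       kernel_layout = 'OIHW'
   elif is_depthconv:
       kernel_layout = 'HWOI'   # NHWC depthwise
   else:
       kernel_layout = 'HWIO'   # NHWC
   ```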




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay 
op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380296359
 
 

 ##
 File path: topi/python/topi/arm_cpu/__init__.py
 ##
 @@ -14,13 +14,14 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-
+# pylint: disable=wildcard-import
 """Schedule for ARM CPU"""
 
-from . import conv2d
-from . import depthwise_conv2d
-from . import conv2d_transpose
-from . import conv2d_int8
-from . import bitserial_conv2d
-from . import bitserial_dense
-from . import injective
+from .conv2d import *
+from .depthwise_conv2d import *
+from .conv2d_transpose import *
+from .conv2d_int8 import *
+from . import conv2d_alter_op
+from .bitserial_conv2d import *
+from .bitserial_dense import *
+from .injective import *
 
 Review comment:
   It's just easier to get the compute and schedule functions, and it's 
consistent with the other targets.




[GitHub] [incubator-tvm] tqchen merged pull request #4790: Fast exponent

2020-02-17 Thread GitBox
tqchen merged pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790
 
 
   




[incubator-tvm] branch master updated (a43e326 -> 1314091)

2020-02-17 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from a43e326  Update faq.md (#4893)
 add 1314091  Fast exponent (#4790)

No new revisions were added by this update.

Summary of changes:
 topi/include/topi/elemwise.h        | 80 +
 topi/python/topi/math.py            | 16 
 topi/src/topi.cc                    |  5 +++
 topi/tests/python/test_topi_math.py | 38 ++
 4 files changed, 139 insertions(+)



[GitHub] [incubator-tvm] tqchen commented on issue #4790: Fast exponent

2020-02-17 Thread GitBox
tqchen commented on issue #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-587092625
 
 
   Thanks @alexgl-github @anijain2305 @masahi @FrozenGene !




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay 
op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380301022
 
 

 ##
 File path: topi/python/topi/arm_cpu/conv2d_alter_op.py
 ##
 @@ -0,0 +1,171 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name,unused-variable,unused-argument,no-member
+"""Conv2D alter op and legalize functions for arm cpu"""
+
+import logging
+
+import tvm
+from tvm import relay
+from tvm import autotvm
+
+from ..nn import conv2d_alter_layout
+from ..util import get_const_tuple
+
+
+logger = logging.getLogger('topi')
+
+
+@conv2d_alter_layout.register(["arm_cpu"])
+def _alter_conv2d_layout(attrs, inputs, tinfos, out_type):
+    target = tvm.target.Target.current(allow_none=False)
+    dispatch_ctx = autotvm.task.DispatchContext.current
+
+    _, outs = relay.backend.compile_engine.select_implement(
+        relay.op.get("nn.conv2d"), attrs, tinfos, out_type, target)
+    workload = autotvm.task.get_workload(outs)
+    if workload is None:
+        # The best implementation is not an AutoTVM template,
+        # we then assume it's not necessary to alter this op.
+        return None
+    cfg = dispatch_ctx.query(target, workload)
+    if cfg.is_fallback:  # if is fallback, clear query cache and return None
+        autotvm.task.clear_fallback_cache(target, workload)
 
 Review comment:
   The dispatch context caches the randomly assigned fallback config. I think 
this line just clears the query cache. I just followed the original arm_cpu 
conv2d_layout implementation for this.
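   Restated as a sketch (just the lines from the diff above, annotated):
   ```python
   cfg = dispatch_ctx.query(target, workload)
   if cfg.is_fallback:
       # the query above may have cached a randomly assigned fallback config;
       # drop it so a later query can pick up a real tuned config
       autotvm.task.clear_fallback_cache(target, workload)
       return None
   ```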




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
yzhliu commented on a change in pull request #4644: [Relay][AutoTVM] Relay op 
strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380008720
 
 

 ##
 File path: topi/python/topi/arm_cpu/__init__.py
 ##
 @@ -14,13 +14,14 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-
+# pylint: disable=wildcard-import
 """Schedule for ARM CPU"""
 
-from . import conv2d
-from . import depthwise_conv2d
-from . import conv2d_transpose
-from . import conv2d_int8
-from . import bitserial_conv2d
-from . import bitserial_dense
-from . import injective
+from .conv2d import *
+from .depthwise_conv2d import *
+from .conv2d_transpose import *
+from .conv2d_int8 import *
+from . import conv2d_alter_op
+from .bitserial_conv2d import *
+from .bitserial_dense import *
+from .injective import *
 
 Review comment:
   do we need this change?




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
yzhliu commented on a change in pull request #4644: [Relay][AutoTVM] Relay op 
strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380275069
 
 

 ##
 File path: topi/python/topi/cuda/conv3d.py
 ##
 @@ -126,24 +78,55 @@ def schedule_conv3d_ncdhw_cuda(cfg, outs):
     s: Schedule
         The computation schedule for conv2d.
     """
-    target = tvm.target.Target.current()
-    if 'cudnn' in target.libs:
-        return generic.schedule_extern(outs)
-
     outs = [outs] if isinstance(outs, tvm.tensor.Tensor) else outs
     s = tvm.create_schedule([x.op for x in outs])
 
     def _callback(op):
         if op.tag == 'conv3d_ncdhw':
-            schedule_direct_3d_cuda(cfg, s, op.output(0))
+            schedule_direct_conv3d_cuda(cfg, s, op.output(0), "NCDHW",
+                                        "conv3d_ncdhw.cuda")
 
     traverse_inline(s, outs[0].op, _callback)
     return s
 
 
-@autotvm.register_topi_schedule(generic.schedule_conv3d_ndhwc, ["cuda", "gpu"],
-                                ["direct"])
-def schedule_conv3d_ndhwc_cuda(cfg, outs):
+@autotvm.register_topi_compute("conv3d_ndhwc.cuda")
+def conv3d_ndhwc(cfg, data, kernel, strides, padding, dilation, out_dtype='float32'):
+    """Conv3D operator for cuda backend.
+
+    Parameters
+    ----------
+    cfg: ConfigEntity
+        The config for this template
+
+    data : tvm.Tensor
+        5-D with shape [batch, in_channel, in_depth, in_height, in_width]
 Review comment:
   should this be ndhwc?




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
yzhliu commented on a change in pull request #4644: [Relay][AutoTVM] Relay op 
strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380255632
 
 

 ##
 File path: topi/python/topi/arm_cpu/conv2d_alter_op.py
 ##
 @@ -0,0 +1,171 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name,unused-variable,unused-argument,no-member
+"""Conv2D alter op and legalize functions for arm cpu"""
+
+import logging
+
+import tvm
+from tvm import relay
+from tvm import autotvm
+
+from ..nn import conv2d_alter_layout
+from ..util import get_const_tuple
+
+
+logger = logging.getLogger('topi')
+
+
+@conv2d_alter_layout.register(["arm_cpu"])
+def _alter_conv2d_layout(attrs, inputs, tinfos, out_type):
+    target = tvm.target.Target.current(allow_none=False)
+    dispatch_ctx = autotvm.task.DispatchContext.current
+
+    _, outs = relay.backend.compile_engine.select_implement(
+        relay.op.get("nn.conv2d"), attrs, tinfos, out_type, target)
+    workload = autotvm.task.get_workload(outs)
+    if workload is None:
+        # The best implementation is not an AutoTVM template,
+        # we then assume it's not necessary to alter this op.
+        return None
+    cfg = dispatch_ctx.query(target, workload)
+    if cfg.is_fallback:  # if is fallback, clear query cache and return None
+        autotvm.task.clear_fallback_cache(target, workload)
 
 Review comment:
   what is in the fallback cache?




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4891: [TEST][FLAKY] topi/tests/python/test_topi_sort.py::test_argsort

2020-02-17 Thread GitBox
tqchen commented on a change in pull request #4891: [TEST][FLAKY] 
topi/tests/python/test_topi_sort.py::test_argsort
URL: https://github.com/apache/incubator-tvm/pull/4891#discussion_r380298254
 
 

 ##
 File path: topi/tests/python/test_topi_sort.py
 ##
 @@ -27,6 +27,12 @@ def verify_argsort(axis, is_ascend):
     data_dtype = "float32"
     data = tvm.placeholder(dshape, name="data", dtype=data_dtype)
     np_data = np.random.uniform(size=dshape).astype(data_dtype)
 
 Review comment:
   remove np_data here




[GitHub] [incubator-tvm] cchung100m commented on issue #4891: [TEST][FLAKY] topi/tests/python/test_topi_sort.py::test_argsort

2020-02-17 Thread GitBox
cchung100m commented on issue #4891: [TEST][FLAKY] 
topi/tests/python/test_topi_sort.py::test_argsort
URL: https://github.com/apache/incubator-tvm/pull/4891#issuecomment-587051868
 
 
   Hi @tqchen 
   
   Thanks for the prompt reply and explanation.
   
   I shuffle the indices and get the data from the shuffled indices.
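   Something like this (a minimal sketch of that approach; the shape is 
illustrative):
   ```python
   import numpy as np
   
   dshape = (20, 100)
   np_indices = np.arange(np.prod(dshape))
   np.random.shuffle(np_indices)
   # every value is unique, so argsort has a single valid answer and the test
   # no longer depends on how ties between near-equal floats are broken
   np_data = np_indices.astype("float32").reshape(dshape)
   ```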




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay 
op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380302988
 
 

 ##
 File path: topi/python/topi/cuda/conv3d.py
 ##
 @@ -126,24 +78,55 @@ def schedule_conv3d_ncdhw_cuda(cfg, outs):
     s: Schedule
         The computation schedule for conv2d.
     """
-    target = tvm.target.Target.current()
-    if 'cudnn' in target.libs:
-        return generic.schedule_extern(outs)
-
     outs = [outs] if isinstance(outs, tvm.tensor.Tensor) else outs
     s = tvm.create_schedule([x.op for x in outs])
 
     def _callback(op):
         if op.tag == 'conv3d_ncdhw':
-            schedule_direct_3d_cuda(cfg, s, op.output(0))
+            schedule_direct_conv3d_cuda(cfg, s, op.output(0), "NCDHW",
+                                        "conv3d_ncdhw.cuda")
 
     traverse_inline(s, outs[0].op, _callback)
     return s
 
 
-@autotvm.register_topi_schedule(generic.schedule_conv3d_ndhwc, ["cuda", "gpu"],
-                                ["direct"])
-def schedule_conv3d_ndhwc_cuda(cfg, outs):
+@autotvm.register_topi_compute("conv3d_ndhwc.cuda")
+def conv3d_ndhwc(cfg, data, kernel, strides, padding, dilation, out_dtype='float32'):
+    """Conv3D operator for cuda backend.
+
+    Parameters
+    ----------
+    cfg: ConfigEntity
+        The config for this template
+
+    data : tvm.Tensor
+        5-D with shape [batch, in_channel, in_depth, in_height, in_width]
 
 Review comment:
   fixed.




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay 
op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380295907
 
 

 ##
 File path: src/relay/op/nn/convolution.h
 ##
 @@ -153,6 +153,16 @@ bool Conv2DRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
       << " But got " << out_layout;
 
   Array<IndexExpr> dshape_nchw = trans_in_layout.ForwardShape(data->shape);
+  bool is_depthwise = false;
+  if (param->groups > 1) {
+    CHECK(weight && weight->shape.defined()) <<
+      "Weight shape must be specified when groups is greater than 1.";
+    Array<IndexExpr> wshape_oihw = trans_kernel_layout.ForwardShape(weight->shape);
+    if (tvm::tir::Equal(param->groups, dshape_nchw[1]) &&
+        tvm::tir::Equal(param->groups, wshape_oihw[0])) {
 
 Review comment:
   In depthwise conv2d, the weight's out_channel == groups == the input's 
in_channel, and the weight's in_channel == the channel multiplier. The output 
channel count = groups * channel multiplier.
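   A worked example of those relations (illustrative numbers, OIHW kernel layout):
   ```python
   groups = 32              # == input in_channel == weight out_channel (wshape_oihw[0])
   channel_multiplier = 2   # == weight in_channel (wshape_oihw[1])
   out_channels = groups * channel_multiplier
   assert out_channels == 64
   ```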




[GitHub] [incubator-tvm] tqchen commented on issue #4546: [CODEGEN] Support cuda tensorcore subbyte int data type in auto tensorcore

2020-02-17 Thread GitBox
tqchen commented on issue #4546: [CODEGEN] Support cuda tensorcore subbyte int 
data type in auto tensorcore
URL: https://github.com/apache/incubator-tvm/pull/4546#issuecomment-587091708
 
 
   See if you can reproduce the error locally. I wonder if it has something to 
do with the joint running of the test cases, or a memory corruption case where 
the change caused some changes in a memory location. It would be great if you 
could investigate further.




[GitHub] [incubator-tvm] tqchen commented on issue #4884: [android_deploy] CRASH caused by `Module.load` func while running App on Android Device with

2020-02-17 Thread GitBox
tqchen commented on issue #4884: [android_deploy] CRASH caused by `Module.load` 
func while running App on Android Device with 
URL: https://github.com/apache/incubator-tvm/issues/4884#issuecomment-587092114
 
 
   Glad that the problem is resolved. For future troubleshooting questions, 
you are more than welcome to open new threads on https://discuss.tvm.ai/




[GitHub] [incubator-tvm] tmoreau89 merged pull request #4898: [DOCS] Introduce how to add hardware backend to FAQ

2020-02-17 Thread GitBox
tmoreau89 merged pull request #4898: [DOCS] Introduce how to add hardware 
backend to FAQ
URL: https://github.com/apache/incubator-tvm/pull/4898
 
 
   




[incubator-tvm] branch master updated (1314091 -> 0b2d11a)

2020-02-17 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 1314091  Fast exponent (#4790)
 add 0b2d11a  [DOCS] Introduce how to add hardware backend to FAQ (#4898)

No new revisions were added by this update.

Summary of changes:
 docs/api/python/target.rst                |  2 +-
 docs/dev/relay_bring_your_own_codegen.rst |  2 +
 docs/faq.md                               | 49 ---
 docs/faq.rst                              | 64 +++
 docs/install/index.rst                    |  2 +
 docs/vta/index.rst                        |  4 +-
 tutorials/autotvm/README.txt              |  5 ++-
 tutorials/language/tensorize.py           |  2 +
 8 files changed, 77 insertions(+), 53 deletions(-)
 delete mode 100644 docs/faq.md
 create mode 100644 docs/faq.rst



[GitHub] [incubator-tvm] jwfromm commented on issue #4899: [Relay][Frontend][Keras] NHWC import support.

2020-02-17 Thread GitBox
jwfromm commented on issue #4899: [Relay][Frontend][Keras] NHWC import support.
URL: https://github.com/apache/incubator-tvm/pull/4899#issuecomment-587108875
 
 
   @masahi, Keras does use NHWC by default just like TensorFlow. However, 
because TVM is better optimized for NCHW, the Keras frontend previously 
converted all layers from NHWC to NCHW. This option allows us to keep the 
layers in their original format.
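   Usage would look roughly like this (a sketch; `layout` is the parameter added 
by this PR, and `keras_model`/`shape_dict` are assumed to be defined):
   ```python
   from tvm import relay
   
   # keep the layers in their native NHWC layout instead of converting to NCHW
   mod, params = relay.frontend.from_keras(keras_model, shape=shape_dict,
                                           layout='NHWC')
   ```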




[GitHub] [incubator-tvm] zhiics commented on issue #4879: [Relay][Pass] Fix bug in re-processing call node in MergeComposite pass

2020-02-17 Thread GitBox
zhiics commented on issue #4879: [Relay][Pass] Fix bug in re-processing call 
node in MergeComposite pass
URL: https://github.com/apache/incubator-tvm/pull/4879#issuecomment-587125329
 
 
   @mbaret PTAL. Let's land this if it looks good to you as well.




[GitHub] [incubator-tvm] jwfromm commented on issue #4899: [Relay][Frontend][Keras] NHWC import support.

2020-02-17 Thread GitBox
jwfromm commented on issue #4899: [Relay][Frontend][Keras] NHWC import support.
URL: https://github.com/apache/incubator-tvm/pull/4899#issuecomment-587170389
 
 
   @tqchen, you're right, that would be a more direct way to apply 
ConvertLayout. If everyone else would prefer it, I could change the Keras 
frontend to only output NHWC, and we would then rely on ConvertLayout to get 
models into NCHW.




[GitHub] [incubator-tvm] anijain2305 commented on issue #4899: [Relay][Frontend][Keras] NHWC import support.

2020-02-17 Thread GitBox
anijain2305 commented on issue #4899: [Relay][Frontend][Keras] NHWC import 
support.
URL: https://github.com/apache/incubator-tvm/pull/4899#issuecomment-587173870
 
 
   I see. Yes, I meant that we should retain the native framework format (NHWC 
in this case), and use ConvertLayout to change it to the desired layout 
(passed in via the relay.from_keras API).
   
   @jwfromm Yes, in my opinion, that is the cleanest way forward. The same 
thing needs to happen for the TF framework. I can take care of that; I'd 
appreciate it if you can take care of Keras.
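   A sketch of that flow (assumes `mod` was imported in NHWC, e.g. via 
relay.frontend.from_keras with layout='NHWC'):
   ```python
   from tvm import relay
   
   # convert the imported NHWC module to the desired layout
   mod = relay.transform.ConvertLayout('NCHW')(mod)
   ```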




[GitHub] [incubator-tvm] Laurawly opened a new pull request #4901: [Fix] Fix get_valid_count flaky test for cuda

2020-02-17 Thread GitBox
Laurawly opened a new pull request #4901: [Fix] Fix get_valid_count flaky test 
for cuda
URL: https://github.com/apache/incubator-tvm/pull/4901
 
 
   Turned on the get_valid_count test for cuda in topi. This fix uses atomic 
operations to replace the previous block-sync method.
   
   @trevor-m @kevinthesun @yzhliu Could you review?
   




[GitHub] [incubator-tvm] soiferj commented on issue #4879: [Relay][Pass] Fix bug in re-processing call node in MergeComposite pass

2020-02-17 Thread GitBox
soiferj commented on issue #4879: [Relay][Pass] Fix bug in re-processing call 
node in MergeComposite pass
URL: https://github.com/apache/incubator-tvm/pull/4879#issuecomment-587124344
 
 
   @zhiics I merged your changes and updated the branch. Would you mind taking 
another look?




[GitHub] [incubator-tvm] masahi commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
masahi commented on a change in pull request #4644: [Relay][AutoTVM] Relay op 
strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380376581
 
 

 ##
 File path: src/relay/op/nn/convolution.h
 ##
 @@ -153,6 +153,16 @@ bool Conv2DRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
       << " But got " << out_layout;
 
   Array<IndexExpr> dshape_nchw = trans_in_layout.ForwardShape(data->shape);
+  bool is_depthwise = false;
+  if (param->groups > 1) {
+    CHECK(weight && weight->shape.defined()) <<
+      "Weight shape must be specified when groups is greater than 1.";
+    Array<IndexExpr> wshape_oihw = trans_kernel_layout.ForwardShape(weight->shape);
+    if (tvm::tir::Equal(param->groups, dshape_nchw[1]) &&
+        tvm::tir::Equal(param->groups, wshape_oihw[0])) {
 
 Review comment:
   ah right, thanks




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay 
op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380386997
 
 

 ##
 File path: python/tvm/autotvm/record.py
 ##
 @@ -130,9 +135,17 @@ def decode(row, protocol='json'):
 result: autotvm.tuner.MeasureResult
 
 Review comment:
   fixed




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay 
op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380387035
 
 

 ##
 File path: python/tvm/autotvm/task/relay_integration.py
 ##
 @@ -67,27 +65,22 @@ def extract_from_program(mod, params, ops, target, target_host=None,
         The module or function to tune
     params: dict of str to numpy array
         The associated parameters of the program
-    ops: List of relay op
-        List of relay ops to be tuned
     target: tvm.target.Target
         The compilation target
     target_host: tvm.target.Target
         The host compilation target
-    template_keys: dict of topi op to str
-        The tuning template keys map for schedules, default to None.
-        Example: {topi.nn.conv2d: 'direct'}
+    ops: List of relay.op.Op
+        List of relay ops to be tuned
 
     Returns
     -------
     task: Array of autotvm.task.Task
         collected tasks
     """
-    return extract_from_multiple_program([mod], [params], ops, target, target_host,
-                                         template_keys)
+    return extract_from_multiple_program([mod], [params], target, target_host, ops)
 
 
-def extract_from_multiple_program(mods, params, ops, target, target_host=None,
-                                  template_keys=None):
+def extract_from_multiple_program(mods, params, target, target_host=None, ops=None):
 
 Review comment:
   updated




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay 
op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380386976
 
 

 ##
 File path: python/tvm/autotvm/database.py
 ##
 @@ -167,6 +167,7 @@ def filter(self, func):
     current = self.get(key)
     try:
         records = [decode(x) for x in current.split(RedisDatabase.MAGIC_SPLIT)]
+        records = list(filter(None, records))
 
 Review comment:
   fixed




[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-17 Thread GitBox
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to 
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r380334499
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1026 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, unused-argument, unused-variable, broad-except
+# pylint: disable=import-outside-toplevel, simplifiable-if-expression, unnecessary-comprehension
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+from tvm.ir import module as _module
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ["from_pytorch"]
+
+# operator implementation
+def _elemwise(name):
+    def _impl(inputs, input_types):
+        # TODO: Figure out a better way to get typing to work for tensor + scalar
+        type0 = input_types[0]
+        if isinstance(inputs[1], _expr.Expr):
+            type0 = input_types[1]
+
+        type1 = input_types[1]
+        if isinstance(inputs[0], _expr.Expr):
+            type1 = input_types[0]
+
+        data0 = _convert_elemwise_input(inputs[0], type0)
+        data1 = _convert_elemwise_input(inputs[1], type1)
+
+        return get_relay_op(name)(data0, data1)
+    return _impl
+
+def _unsqueeze():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        axis = inputs[1]
+
+        return _op.transform.expand_dims(data, int(axis), 1)
+    return _impl
+
+def _concatenate():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        axis = inputs[1]
+
+        if isinstance(data, _expr.Expr):
+            data = [data]
+
+        return _op.tensor.concatenate(data, int(axis))
+    return _impl
+
+def _slice():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        strides = []
+
+        if isinstance(data, _expr.Expr):
+            inferred_shape = _infer_shape(data)
+            end = []
+            for infer in inferred_shape:
+                end.append(int(infer))
+            if isinstance(data, _expr.Var):
+                end = inferred_shape
+                end = list(end)
+        else:
+            end = data.shape
+
+        begin = [0]*len(end)
+        dim = int(inputs[1])
+        begin[dim] = int(inputs[2])
+
+        if isinstance(inputs[3], str) and inputs[3].isdigit():
+            end[dim] = min(end[dim], int(inputs[3]))
+        else:
+            end[dim] = inputs[3]
+
+        strides.append(int(inputs[4]))
+        return _op.transform.strided_slice(data, begin, end, strides)
+    return _impl
+
+def _select():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        dim = int(inputs[1])
+        index = int(inputs[2])
+
+        return _op.transform.take(data, _expr.const(index, dtype="int32"), axis=dim)
+    return _impl
+
+def _ones():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+
+        import torch
+        if isinstance(data, _expr.Expr):
+            shape = _infer_shape(data)
+        elif isinstance(data, list):
+            shape = data
+        elif isinstance(data, (torch.Tensor, np.ndarray)):
+            shape = data.shape
+        else:
+            assert "data type {} could not be parsed in ones op" % (type(data))
+
+        return _op.full(_expr.const(1), shape, dtype=_convert_data_type(input_types[0]))
+    return _impl
+
+def _zeros():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+
+        import torch
+        if isinstance(data, _expr.Expr):
+            shape = _infer_shape(data)
+        elif isinstance(data, list):
+            shape = data
+        elif isinstance(data, (torch.Tensor, np.ndarray)):
+            shape = data.shape
+        else:
+            assert "data type {} could not be parsed in zeros op" % (type(data))
+
+        return _op.full(_expr.const(0), shape, dtype=_convert_data_type(input_types[0]))
+    return _impl
+
+def _relu():
+    def 

[GitHub] [incubator-tvm] tmoreau89 commented on issue #4887: [VTA] YoloV3 Support

2020-02-17 Thread GitBox
tmoreau89 commented on issue #4887: [VTA] YoloV3 Support
URL: https://github.com/apache/incubator-tvm/pull/4887#issuecomment-587124568
 
 
   Hi @huajsj, thanks for adding YoloV3 support for VTA. Do you think that 
along with this PR you can also construct a tutorial similar to this one: 
https://github.com/apache/incubator-tvm/blob/master/vta/tutorials/frontend/deploy_vision_on_vta.py
   
   We could have one tutorial called deploy_classification_on_vta.py and another 
called deploy_detection_on_vta.py to differentiate between object classification 
and detection. In addition, this will exercise compilation support for Yolo to 
make sure that support does not break.




[GitHub] [incubator-tvm] tqchen edited a comment on issue #4899: [Relay][Frontend][Keras] NHWC import support.

2020-02-17 Thread GitBox
tqchen edited a comment on issue #4899: [Relay][Frontend][Keras] NHWC import 
support.
URL: https://github.com/apache/incubator-tvm/pull/4899#issuecomment-587154754
 
 
   Given that the model is natively NHWC, perhaps it is better to do it the 
other way around (note that the old approach converts to NCHW on the fly): 
ingest as NHWC as in this PR and then call ConvertLayout.




[GitHub] [incubator-tvm] tqchen commented on issue #4899: [Relay][Frontend][Keras] NHWC import support.

2020-02-17 Thread GitBox
tqchen commented on issue #4899: [Relay][Frontend][Keras] NHWC import support.
URL: https://github.com/apache/incubator-tvm/pull/4899#issuecomment-587154754
 
 
   Given that the model is natively NHWC, perhaps it is better to do it the 
other way around: ingest as NHWC as in this PR and then call ConvertLayout.




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay 
op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380383331
 
 

 ##
 File path: python/tvm/autotvm/task/relay_integration.py
 ##
 @@ -67,27 +65,22 @@ def extract_from_program(mod, params, ops, target, target_host=None,
         The module or function to tune
     params: dict of str to numpy array
         The associated parameters of the program
-    ops: List of relay op
-        List of relay ops to be tuned
     target: tvm.target.Target
         The compilation target
     target_host: tvm.target.Target
         The host compilation target
-    template_keys: dict of topi op to str
-        The tuning template keys map for schedules, default to None.
-        Example: {topi.nn.conv2d: 'direct'}
+    ops: List of relay.op.Op
+        List of relay ops to be tuned
 
     Returns
     -------
     task: Array of autotvm.task.Task
         collected tasks
     """
-    return extract_from_multiple_program([mod], [params], ops, target, target_host,
-                                         template_keys)
+    return extract_from_multiple_program([mod], [params], target, target_host, ops)
 
 
-def extract_from_multiple_program(mods, params, ops, target, target_host=None,
-                                  template_keys=None):
+def extract_from_multiple_program(mods, params, target, target_host=None, ops=None):
 
 Review comment:
   Yes, that's the current behavior. I'll update the doc to show this




[GitHub] [incubator-tvm] jwfromm commented on issue #4899: [Relay][Frontend][Keras] NHWC import support.

2020-02-17 Thread GitBox
jwfromm commented on issue #4899: [Relay][Frontend][Keras] NHWC import support.
URL: https://github.com/apache/incubator-tvm/pull/4899#issuecomment-587118058
 
 
   @comaniac I think I've fixed all the style issues you pointed out.




[GitHub] [incubator-tvm] zhiics merged pull request #4879: [Relay][Pass] Fix bug in re-processing call node in MergeComposite pass

2020-02-17 Thread GitBox
zhiics merged pull request #4879: [Relay][Pass] Fix bug in re-processing call 
node in MergeComposite pass
URL: https://github.com/apache/incubator-tvm/pull/4879
 
 
   




[incubator-tvm] branch master updated: [Relay][Pass] Fix bug in re-processing call node in MergeComposite pass (#4879)

2020-02-17 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 27a0284  [Relay][Pass] Fix bug in re-processing call node in 
MergeComposite pass (#4879)
27a0284 is described below

commit 27a02844cb52e883a4a66da68a527590d76f7d01
Author: Jon Soifer 
AuthorDate: Mon Feb 17 12:18:15 2020 -0800

[Relay][Pass] Fix bug in re-processing call node in MergeComposite pass 
(#4879)

* Fix bug in re-processing call node

* Add test

* Add to main

* temp changes to work from another machine

* fix rest of tests

* fix test_reuse_call_merge

* fix merge

Co-authored-by: Jon Soifer 
---
 src/relay/pass/merge_composite.cc   | 25 +---
 tests/python/relay/test_pass_merge_composite.py | 82 +
 2 files changed, 98 insertions(+), 9 deletions(-)

diff --git a/src/relay/pass/merge_composite.cc b/src/relay/pass/merge_composite.cc
index 28bf8fa..4e1094b 100644
--- a/src/relay/pass/merge_composite.cc
+++ b/src/relay/pass/merge_composite.cc
@@ -87,7 +87,7 @@ class MergeCompositeWrapper : public ExprMutator {
    * a new Relay expression ready to be wrapped into a composite function.
    */
   Expr ExtractPattern(const Call& pattern, const Call& root,
-                      Map<std::string, Array<Expr>>* var_map) {
+                      Map<std::string, Array<Expr>>* var_map, Map<Expr, Expr>* call_map) {
     // check to make sure both calls are to operators (not functions)
     if (!pattern->op->IsInstance<OpNode>() || !root->op->IsInstance<OpNode>())
       return Expr();
@@ -99,14 +99,20 @@ class MergeCompositeWrapper : public ExprMutator {
     for (const auto& arg : pattern->args) {
       Expr new_arg;
       if (arg->IsInstance<CallNode>()) {
-        // fail if the root argument is not also a call node
-        if (!root->args[i]->IsInstance<CallNode>()) {
-          return Expr();
+        // if we've already processed this call node, return the previous result
+        if (call_map->find(arg) != call_map->end()) {
+          new_arg = (*call_map)[arg];
+        } else {
+          // fail if the root argument is not also a call node
+          if (!root->args[i]->IsInstance<CallNode>()) {
+            return Expr();
+          }
+          // if it's a call node, recursively call this function
+          new_arg = ExtractPattern(Downcast<Call>(arg),
+                                   Downcast<Call>(root->args[i]),
+                                   var_map, call_map);
+          call_map->Set(arg, new_arg);
         }
-        // if it's a call node, recursively call this function
-        new_arg = ExtractPattern(Downcast<Call>(arg),
-                                 Downcast<Call>(root->args[i]),
-                                 var_map);
       } else if (arg->IsInstance<VarNode>()) {
         // if there's a var in the pattern, it must be a free var
         // so call the function to update the var_map
@@ -155,7 +161,8 @@ class MergeCompositeWrapper : public ExprMutator {
     Call pattern = Downcast<Call>(pattern_);
     CHECK(pattern.defined());
     Map<std::string, Array<Expr>> args_map;
-    auto extract = ExtractPattern(pattern, call, &args_map);
+    Map<Expr, Expr> call_map;
+    auto extract = ExtractPattern(pattern, call, &args_map, &call_map);
     if (extract.defined()) {
       auto free_vars = FreeVars(extract);
       // make the composite function
diff --git a/tests/python/relay/test_pass_merge_composite.py b/tests/python/relay/test_pass_merge_composite.py
index 4f5acc7..b96a89b 100644
--- a/tests/python/relay/test_pass_merge_composite.py
+++ b/tests/python/relay/test_pass_merge_composite.py
 @@ -110,6 +110,26 @@ def make_conv_bias_relu_pattern():
     return r
 
 
+def make_add_add_add_pattern():
+    """Create a pattern to match the following graph.
+       Useful for testing re-using a call node.
+
+        x    y
+      /  \  /
+      |  add
+       \  |  \
+        add  |
+         |  /
+         add
+    """
+    x = relay.var('x')
+    y = relay.var('y')
+    add_node = relay.add(x, y)
+    add_node_1 = relay.add(x, add_node)
+    r = relay.add(add_node_1, add_node)
+    return r
+
+
 def test_simple_merge():
 """Test composite function is correctly produced from simple graph.
 
 @@ -239,6 +259,67 @@ def test_branch_merge():
     assert relay.analysis.alpha_equal(result, expected)
 
 
+def test_reuse_call_merge():
+    """Test composite function is correctly produced from simple graph
+       which re-uses call nodes.
+
+    We could expect the pattern `make_add_add_add` to be merged
+    into a single op `add_add_add`.
+
+        x     y
+         \   / \
+          sub  |            x     y
+        /  |  /              \   / |
+        | add      ====>      sub  |
+         \ |  \                |  /
+          add  |          add_add_add
+           | /
+          add
+
+    """
+    pattern_table = [
+        ("add_add_add", make_add_add_add_pattern())
+    ]
+
+    def before():
+        a = relay.var('a', 

[GitHub] [incubator-tvm] zhiics commented on issue #4879: [Relay][Pass] Fix bug in re-processing call node in MergeComposite pass

2020-02-17 Thread GitBox
zhiics commented on issue #4879: [Relay][Pass] Fix bug in re-processing call 
node in MergeComposite pass
URL: https://github.com/apache/incubator-tvm/pull/4879#issuecomment-587144682
 
 
   Thanks @soiferj @mbaret @cbalint13 




[GitHub] [incubator-tvm] jwfromm commented on issue #4899: [Relay][Frontend][Keras] NHWC import support.

2020-02-17 Thread GitBox
jwfromm commented on issue #4899: [Relay][Frontend][Keras] NHWC import support.
URL: https://github.com/apache/incubator-tvm/pull/4899#issuecomment-587153715
 
 
   @anijain2305, I think you're right that for most cases ConvertLayout will 
achieve something similar. However, there are some benefits to having layers 
directly parsed as NHWC. I actually did this PR because I'm working on a 
project that uses some funky custom Keras layers and it's very useful to parse 
them directly in NHWC.




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
comaniac commented on a change in pull request #4644: [Relay][AutoTVM] Relay op 
strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380382775
 
 

 ##
 File path: python/tvm/autotvm/task/relay_integration.py
 ##
 @@ -67,27 +65,22 @@ def extract_from_program(mod, params, ops, target, target_host=None,
         The module or function to tune
     params: dict of str to numpy array
         The associated parameters of the program
-    ops: List of relay op
-        List of relay ops to be tuned
     target: tvm.target.Target
         The compilation target
     target_host: tvm.target.Target
         The host compilation target
-    template_keys: dict of topi op to str
-        The tuning template keys map for schedules, default to None.
-        Example: {topi.nn.conv2d: 'direct'}
+    ops: List of relay.op.Op
+        List of relay ops to be tuned
 
     Returns
     -------
     task: Array of autotvm.task.Task
         collected tasks
     """
-    return extract_from_multiple_program([mod], [params], ops, target, target_host,
-                                         template_keys)
+    return extract_from_multiple_program([mod], [params], target, target_host, ops)
 
 
-def extract_from_multiple_program(mods, params, ops, target, target_host=None,
-                                  template_keys=None):
+def extract_from_multiple_program(mods, params, target, target_host=None, ops=None):
 
 Review comment:
   Would it be better if we included all ops by default when `ops=None`?




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
comaniac commented on a change in pull request #4644: [Relay][AutoTVM] Relay op 
strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380340757
 
 

 ##
 File path: python/tvm/autotvm/database.py
 ##
 @@ -125,7 +125,7 @@ def load(self, inp, get_all=False):
     current = self.get(measure_str_key(inp))
     if current is not None:
         records = [decode(x) for x in current.split(RedisDatabase.MAGIC_SPLIT)]
-        results = [rec[1] for rec in records]
+        results = [rec[1] for rec in records if rec is not None]
         if get_all:
             return results
         return max(results, key=lambda result: result.timestamp)
 
 Review comment:
   `max` will throw `ValueError` if `results` is empty.




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
comaniac commented on a change in pull request #4644: [Relay][AutoTVM] Relay op 
strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380341368
 
 

 ##
 File path: python/tvm/autotvm/database.py
 ##
 @@ -167,6 +167,7 @@ def filter(self, func):
     current = self.get(key)
     try:
         records = [decode(x) for x in current.split(RedisDatabase.MAGIC_SPLIT)]
+        records = list(filter(None, records))
 
 Review comment:
   Better to use the same logic as above and avoid `filter`.




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
comaniac commented on a change in pull request #4644: [Relay][AutoTVM] Relay op 
strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380342661
 
 

 ##
 File path: python/tvm/autotvm/record.py
 ##
 @@ -130,9 +135,17 @@ def decode(row, protocol='json'):
 result: autotvm.tuner.MeasureResult
 
 Review comment:
   Add `Optional` or `None` to the return type.
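   For instance (a sketch of the suggested wording):
   ```python
   result: autotvm.tuner.MeasureResult, or None
       The decoded result, or None if the row is ill-formatted.
   ```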




[GitHub] [incubator-tvm] mbaret commented on issue #4879: [Relay][Pass] Fix bug in re-processing call node in MergeComposite pass

2020-02-17 Thread GitBox
mbaret commented on issue #4879: [Relay][Pass] Fix bug in re-processing call 
node in MergeComposite pass
URL: https://github.com/apache/incubator-tvm/pull/4879#issuecomment-587143463
 
 
   Looks good.




[GitHub] [incubator-tvm] tqchen opened a new pull request #4900: [REFACTOR][PY] Establish tvm.te and tvm.driver

2020-02-17 Thread GitBox
tqchen opened a new pull request #4900: [REFACTOR][PY] Establish tvm.te and 
tvm.driver
URL: https://github.com/apache/incubator-tvm/pull/4900
 
 
   - Move the related files to tvm.te
   - Move build_module.py to tvm.driver
   




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay 
op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380386912
 
 

 ##
 File path: include/tvm/relay/op_attr_types.h
 ##
 @@ -207,13 +216,137 @@ enum AnyCodegenStrategy {
   kVariableDimensions
 };
 
-/* \brief A runtime representation of shape. */
+/*! \brief A runtime representation of shape. */
 using Shape = Array<IndexExpr>;
 
 using FShapeFunc = runtime::TypedPackedFunc<
   Array<te::Tensor>(const Attrs& attrs,
-                     const Array<te::Tensor>& inputs,
-                     const Array<IndexExpr>& out_ndims)>;
+                    const Array<te::Tensor>& inputs,
+                    const Array<IndexExpr>& out_ndims)>;
+
+/*!
+ * \brief Operator implementation in TVM.
+ */
+class OpImplementNode : public Object {
 
 Review comment:
   Renamed to OpImplementation, and also updated the related APIs




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay op strategy

2020-02-17 Thread GitBox
icemelon9 commented on a change in pull request #4644: [Relay][AutoTVM] Relay 
op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r380386961
 
 

 ##
 File path: python/tvm/autotvm/database.py
 ##
 @@ -125,7 +125,7 @@ def load(self, inp, get_all=False):
 current = self.get(measure_str_key(inp))
 if current is not None:
 records = [decode(x) for x in 
current.split(RedisDatabase.MAGIC_SPLIT)]
-results = [rec[1] for rec in records]
+results = [rec[1] for rec in records if rec is not None]
 if get_all:
 return results
 return max(results, key=lambda result: result.timestamp)
 
 Review comment:
   fixed




[incubator-tvm] branch master updated (27a0284 -> 08338dd)

2020-02-17 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 27a0284  [Relay][Pass] Fix bug in re-processing call node in 
MergeComposite pass (#4879)
 add 08338dd  [REFACTOR][PY] Establish tvm.te and tvm.driver (#4900)

No new revisions were added by this update.

Summary of changes:
 python/tvm/__init__.py |  27 +-
 python/tvm/api.py  | 610 +
 python/tvm/arith.py|   5 +-
 python/tvm/autotvm/feature.py  |   3 +-
 python/tvm/autotvm/task/topi_integration.py|   7 +-
 python/tvm/{_ffi/_cy2 => driver}/__init__.py   |   3 +-
 python/tvm/{ => driver}/build_module.py| 272 +
 python/tvm/error.py|   2 +-
 python/tvm/hybrid/__init__.py  |   2 +-
 python/tvm/hybrid/parser.py|  13 +-
 python/tvm/hybrid/util.py  |   2 +-
 python/tvm/ir/expr.py  |  24 +-
 python/tvm/relay/backend/_backend.py   |   7 +-
 python/tvm/relay/op/op.py  |   2 +-
 python/tvm/relay/quantize/_calibrate.py|   1 +
 python/tvm/target/__init__.py  |   1 +
 python/tvm/target/build_config.py  | 254 +
 .../tvm/te/__init__.py |  16 +-
 python/tvm/{ir => te}/_ffi_api.py  |   4 +-
 python/tvm/{api.py => te/operation.py} | 366 +++--
 python/tvm/{ => te}/schedule.py|  86 ++-
 python/tvm/{ => te}/tag.py |   2 +-
 python/tvm/{ => te}/tensor.py  |  20 +-
 python/tvm/{ => te}/tensor_intrin.py   |  40 +-
 python/tvm/testing.py  |   5 +
 python/tvm/tir/__init__.py |   4 +-
 python/tvm/tir/expr.py |  51 ++
 python/tvm/tir/op.py   | 188 ++-
 src/api/api_arith.cc   |   8 +-
 src/api/api_lang.cc| 110 ++--
 src/api/api_schedule.cc|   4 +-
 src/api/api_test.cc|  14 +-
 src/target/target.cc   |  10 +-
 tests/python/unittest/test_lang_buffer.py  |   4 +-
 tests/python/unittest/test_lang_constructor.py |   4 +-
 tests/python/unittest/test_runtime_error.py|  13 +-
 tests/python/unittest/test_runtime_packed_func.py  |   4 +-
 tests/python/unittest/test_runtime_rpc.py  |   3 +-
 vta/python/vta/build_module.py |   4 +-
 39 files changed, 815 insertions(+), 1380 deletions(-)
 copy python/tvm/{_ffi/_cy2 => driver}/__init__.py (91%)
 rename python/tvm/{ => driver}/build_module.py (61%)
 create mode 100644 python/tvm/target/build_config.py
 copy cmake/modules/contrib/MicroStandaloneRuntime.cmake => 
python/tvm/te/__init__.py (67%)
 copy python/tvm/{ir => te}/_ffi_api.py (92%)
 copy python/tvm/{api.py => te/operation.py} (52%)
 rename python/tvm/{ => te}/schedule.py (86%)
 rename python/tvm/{ => te}/tag.py (98%)
 rename python/tvm/{ => te}/tensor.py (93%)
 rename python/tvm/{ => te}/tensor_intrin.py (81%)



[GitHub] [incubator-tvm] tqchen merged pull request #4892: [Bugfix] Fixed: Bitwise ops on floats causing wrong code generation and crashes.

2020-02-17 Thread GitBox
tqchen merged pull request #4892: [Bugfix] Fixed: Bitwise ops on floats causing 
wrong code generation and crashes. 
URL: https://github.com/apache/incubator-tvm/pull/4892
 
 
   




[incubator-tvm] branch master updated (08338dd -> 976c08a)

2020-02-17 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 08338dd  [REFACTOR][PY] Establish tvm.te and tvm.driver (#4900)
 add 976c08a  Fixed bugs that occurred when using bitwise operators on 
floating point type expressions. Further crash when using ops <<, >>, %. 
Finally added regression tests for both types of bug. (#4892)

No new revisions were added by this update.

Summary of changes:
 python/tvm/tir/expr.py   | 16 
 src/tir/ir/op.cc | 10 ++
 tests/python/unittest/test_lang_basic.py | 22 ++
 3 files changed, 48 insertions(+)



[GitHub] [incubator-tvm] tqchen commented on issue #4892: [Bugfix] Fixed: Bitwise ops on floats causing wrong code generation and crashes.

2020-02-17 Thread GitBox
tqchen commented on issue #4892: [Bugfix] Fixed: Bitwise ops on floats causing 
wrong code generation and crashes. 
URL: https://github.com/apache/incubator-tvm/pull/4892#issuecomment-587248067
 
 
   Thanks @dpankratz !




[GitHub] [incubator-tvm] tqchen opened a new pull request #4903: [CI] Update ci docker to add autodocsumm

2020-02-17 Thread GitBox
tqchen opened a new pull request #4903: [CI] Update ci docker to add autodocsumm
URL: https://github.com/apache/incubator-tvm/pull/4903
 
 
   source https://github.com/apache/incubator-tvm/pull/4902




[GitHub] [incubator-tvm] tqchen opened a new pull request #4902: [CI] Add autodocsum as dep

2020-02-17 Thread GitBox
tqchen opened a new pull request #4902: [CI] Add autodocsum as dep
URL: https://github.com/apache/incubator-tvm/pull/4902
 
 
   autodocsumm can generate a summary table for functions in a module 
automatically.




[GitHub] [incubator-tvm] tqchen merged pull request #4900: [REFACTOR][PY] Establish tvm.te and tvm.driver

2020-02-17 Thread GitBox
tqchen merged pull request #4900: [REFACTOR][PY] Establish tvm.te and tvm.driver
URL: https://github.com/apache/incubator-tvm/pull/4900
 
 
   




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-17 Thread GitBox
FrozenGene commented on a change in pull request #4847: Return empty 
CSourceModule when no lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#discussion_r380467063
 
 

 ##
 File path: src/relay/backend/build_module.cc
 ##
 @@ -437,28 +441,51 @@ class RelayBuildModule : public runtime::ModuleNode {
 ret_.params = graph_codegen_->GetParams();
 
 auto lowered_funcs = graph_codegen_->GetLoweredFunc();
+
+// When there is no lowered_funcs due to reasons such as optimization,
+// we first try to generate a dummy one if the target host is "llvm".
 if (lowered_funcs.size() == 0) {
-  LOG(WARNING) << "no lowered funcs exist in the compiled module";
+  // Decide first the target host
+  Target target_host_val = target_host_;
+  if (!target_host_.defined()) {
+    for (const auto& it : targets_) {
+      if (it.second->device_type == kDLCPU) {
+        target_host_val = it.second;
+        break;
+      }
+    }
+  }
+
+  // If no target_host has been set, we choose a default one, which is
+  // llvm if "codegen.build_llvm" is accessible.
+  const runtime::PackedFunc* pf = runtime::Registry::Get("codegen.build_llvm");
+  if (!target_host_val.defined())
+    target_host_val = (pf != nullptr) ? target::llvm() : target::stackvm();
+
+  if (target_host_val.defined() && target_host_val->target_name == "llvm")
+    lowered_funcs.Set(
+      target_host_val->str(),
+      Array<LoweredFunc>({
+        MakeAPI(EvaluateNode::make(0), "__dummy__", Array<ObjectRef>(), 0, false) }));
 
 Review comment:
   Adding a `__dummy__` function is not the most elegant solution IMO. If we have this 
LLVM IR:
   ```
   target datalayout = "e-m:o-i64:64-f80:128-n8:16:32:64-S128"
   target triple = "x86_64-apple-macosx10.14.0"
   ```
   Does this satisfy your need?
   
   If it is, I think we could create a packed function in `llvm_module.cc` to create this kind of empty LLVM module. Then we could have `ret_.mod = tvm::codegen::LLVMModuleCreate...`, like CSourceModuleCreate. 
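   
   For context, a rough Python analogue of the host-target fallback the diff above implements (a hedged sketch, not the PR's C++; `tvm.get_global_func` is the registry lookup on the Python side):
   ```
   import tvm

   # Prefer llvm when the LLVM codegen is registered, else fall back to stackvm,
   # mirroring the default-target-host selection quoted above.
   pf = tvm.get_global_func("codegen.build_llvm", allow_missing=True)
   target_host = "llvm" if pf is not None else "stackvm"
   print(target_host)
   ```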




[GitHub] [incubator-tvm] tqchen opened a new pull request #4904: [REFACTOR][PY] Establish tvm.arith

2020-02-17 Thread GitBox
tqchen opened a new pull request #4904: [REFACTOR][PY] Establish tvm.arith
URL: https://github.com/apache/incubator-tvm/pull/4904
 
 
   




[GitHub] [incubator-tvm] huajsj commented on issue #4887: [VTA] YoloV3 Support

2020-02-17 Thread GitBox
huajsj commented on issue #4887: [VTA] YoloV3 Support
URL: https://github.com/apache/incubator-tvm/pull/4887#issuecomment-587283268
 
 
   Hi @tmoreau89, thanks for the follow-up. Sure, I will work on a tutorial 
and upload it soon.
   
   Regards
   Hua
   
   > Hi @huajsj, thanks for adding YoloV3 support for VTA. Do you think that 
along with this PR you can also construct a tutorial similar to this one: 
https://github.com/apache/incubator-tvm/blob/master/vta/tutorials/frontend/deploy_vision_on_vta.py
   > 
   > We could have one tutorial called deploy_classification_on_vta.py, and 
deploy_detection_on_vta.py to differentiate between object classification vs. 
detection. In addition, this will exercise compilation support for Yolo to make 
sure that support does not break.
   
   




[GitHub] [incubator-tvm] soiferj commented on a change in pull request #4857: Windows Support for cpp_rpc

2020-02-17 Thread GitBox
soiferj commented on a change in pull request #4857: Windows Support for cpp_rpc
URL: https://github.com/apache/incubator-tvm/pull/4857#discussion_r380402254
 
 

 ##
 File path: apps/cpp_rpc/rpc_env.cc
 ##
 @@ -20,141 +20,139 @@
  * \file rpc_env.cc
  * \brief Server environment of the RPC.
  */
+#include 
 #include 
-#include 
-#ifndef _MSC_VER
-#include 
+#ifndef _WIN32
 #include 
+#include 
 #include 
 #else
 #include 
+#include 
+namespace {
+  int mkdir(const char* path, int /* ignored */) { return _mkdir(path); }
+}
 #endif
+#include 
 #include 
-#include 
 #include 
 #include 
-#include 
+#include 
+#include 
 
-#include "rpc_env.h"
 #include "../../src/support/util.h"
 #include "../../src/runtime/file_util.h"
+#include "rpc_env.h"
+
+namespace {
+#if defined(__linux__) || defined(__ANDROID__)
+  const std::string untar_cmd = "tar -C ";
+#elif defined(_WIN32)
+  const std::string untar_cmd = "wsl tar -C ";
 
 Review comment:
   I don't think it's ideal to have to force Windows users to enable WSL. Can 
we use Python 
[tarfile](https://docs.python.org/2/library/tarfile.html#examples) instead? Are 
there any other options?
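   
   For reference, a pure-Python replacement for the `tar -C` invocation could be as small as this (a sketch; error handling and path validation omitted):
   ```
   import tarfile

   def untar(tar_path, out_dir):
       # Equivalent of "tar -C <out_dir> -xf <tar_path>" with no external tool
       with tarfile.open(tar_path) as tar:
           tar.extractall(path=out_dir)
   ```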




[GitHub] [incubator-tvm] jmorrill commented on a change in pull request #4857: Windows Support for cpp_rpc

2020-02-17 Thread GitBox
jmorrill commented on a change in pull request #4857: Windows Support for 
cpp_rpc
URL: https://github.com/apache/incubator-tvm/pull/4857#discussion_r380410740
 
 

 ##
 File path: apps/cpp_rpc/rpc_env.cc
 ##
 @@ -20,141 +20,139 @@
  * \file rpc_env.cc
  * \brief Server environment of the RPC.
  */
+#include 
 #include 
-#include 
-#ifndef _MSC_VER
-#include 
+#ifndef _WIN32
 #include 
+#include 
 #include 
 #else
 #include 
+#include 
+namespace {
+  int mkdir(const char* path, int /* ignored */) { return _mkdir(path); }
+}
 #endif
+#include 
 #include 
-#include 
 #include 
 #include 
-#include 
+#include 
+#include 
 
-#include "rpc_env.h"
 #include "../../src/support/util.h"
 #include "../../src/runtime/file_util.h"
+#include "rpc_env.h"
+
+namespace {
+#if defined(__linux__) || defined(__ANDROID__)
+  const std::string untar_cmd = "tar -C ";
+#elif defined(_WIN32)
+  const std::string untar_cmd = "wsl tar -C ";
 
 Review comment:
   I considered some open-source C and C++ libraries, but shied away for 
license and time-to-implement reasons.
   
   I think Python tarfile would be a good pick, especially if it comes packaged 
with Python, as long as having Python is not a burden. I would like to test 
the speed though, mostly around Python startup time, to determine what is 
"acceptable" or not.
   
   As a side note, it seems you can just send a precompiled dll (no tar), by 
changing the build_func:
   
   `from tvm.contrib import cc`
   
   `builder=autotvm.LocalBuilder(n_parallel=18, timeout=25, 
build_func=cc.create_shared)`




[GitHub] [incubator-tvm] tqchen commented on issue #4899: [Relay][Frontend][Keras] NHWC import support.

2020-02-17 Thread GitBox
tqchen commented on issue #4899: [Relay][Frontend][Keras] NHWC import support.
URL: https://github.com/apache/incubator-tvm/pull/4899#issuecomment-587224358
 
 
   That sounds good. 




[incubator-tvm] branch master updated (8310b25 -> 38d1dd2)

2020-02-17 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 8310b25  [CI] Update ci docker to add autodocsumm (#4903)
 add 38d1dd2  [CI] Add autodocsum as dep (#4902)

No new revisions were added by this update.

Summary of changes:
 docker/install/ubuntu_install_sphinx.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[GitHub] [incubator-tvm] tqchen merged pull request #4903: [CI] Update ci docker to add autodocsumm

2020-02-17 Thread GitBox
tqchen merged pull request #4903: [CI] Update ci docker to add autodocsumm
URL: https://github.com/apache/incubator-tvm/pull/4903
 
 
   




[incubator-tvm] branch master updated: [CI] Update ci docker to add autodocsumm (#4903)

2020-02-17 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 8310b25  [CI] Update ci docker to add autodocsumm (#4903)
8310b25 is described below

commit 8310b2526e69d1761a67f6a8566691a0eeb2e652
Author: Tianqi Chen 
AuthorDate: Mon Feb 17 20:56:45 2020 -0800

[CI] Update ci docker to add autodocsumm (#4903)
---
 Jenkinsfile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 0230a1a..bb57abb 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -45,8 +45,8 @@
 //
 
 ci_lint = "tvmai/ci-lint:v0.60"
-ci_gpu = "tvmai/ci-gpu:v0.60"
-ci_cpu = "tvmai/ci-cpu:v0.55"
+ci_gpu = "tvmai/ci-gpu:v0.61"
+ci_cpu = "tvmai/ci-cpu:v0.60"
 ci_i386 = "tvmai/ci-i386:v0.52"
 
 // tvm libraries



[GitHub] [incubator-tvm] jmorrill commented on a change in pull request #4857: Windows Support for cpp_rpc

2020-02-17 Thread GitBox
jmorrill commented on a change in pull request #4857: Windows Support for 
cpp_rpc
URL: https://github.com/apache/incubator-tvm/pull/4857#discussion_r380492019
 
 

 ##
 File path: apps/cpp_rpc/rpc_env.cc
 ##
 @@ -20,141 +20,139 @@
  * \file rpc_env.cc
  * \brief Server environment of the RPC.
  */
+#include 
 #include 
-#include 
-#ifndef _MSC_VER
-#include 
+#ifndef _WIN32
 #include 
+#include 
 #include 
 #else
 #include 
+#include 
+namespace {
+  int mkdir(const char* path, int /* ignored */) { return _mkdir(path); }
+}
 #endif
+#include 
 #include 
-#include 
 #include 
 #include 
-#include 
+#include 
+#include 
 
-#include "rpc_env.h"
 #include "../../src/support/util.h"
 #include "../../src/runtime/file_util.h"
+#include "rpc_env.h"
+
+namespace {
+#if defined(__linux__) || defined(__ANDROID__)
+  const std::string untar_cmd = "tar -C ";
+#elif defined(_WIN32)
+  const std::string untar_cmd = "wsl tar -C ";
 
 Review comment:
   I just did a few tests in powershell:
   `Measure-Command {start-process "python tar-test.py" -Wait}`
   
   Where tar-test.py is just: "import tarfile"
   
   `Measure-Command {start-process "wsl tar --help" -Wait}`
   
   It doesn't look like Python startup is much worse than "wsl tar". Both are 
around ~30 ms.
   
   How would you suggest this be implemented?  Generate a python script at 
runtime in the tmp folder?
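
   One possible shape for this, avoiding a generated script entirely, is a `python -c` one-liner (a sketch, assuming `python` is on PATH):
   ```
   import subprocess

   def untar_via_python(tar_path, out_dir):
       # Spawn Python and run the tarfile extraction as a one-liner,
       # so no script file needs to be written to the tmp folder.
       subprocess.check_call([
           "python", "-c",
           "import sys, tarfile; tarfile.open(sys.argv[1]).extractall(sys.argv[2])",
           tar_path, out_dir])
   ```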
   




[GitHub] [incubator-tvm] cchung100m commented on a change in pull request #4891: [TEST][FLAKY] topi/tests/python/test_topi_sort.py::test_argsort

2020-02-17 Thread GitBox
cchung100m commented on a change in pull request #4891: [TEST][FLAKY] 
topi/tests/python/test_topi_sort.py::test_argsort
URL: https://github.com/apache/incubator-tvm/pull/4891#discussion_r380464642
 
 

 ##
 File path: topi/tests/python/test_topi_sort.py
 ##
 @@ -27,6 +27,12 @@ def verify_argsort(axis, is_ascend):
 data_dtype = "float32"
 data = tvm.placeholder(dshape, name="data", dtype=data_dtype)
 np_data = np.random.uniform(size=dshape).astype(data_dtype)
 
 Review comment:
   Do you mean the output `nd.array` of `numpy.random.uniform`?
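
   If the flakiness comes from ties in the uniformly sampled values, one way to make the data tie-free is to shuffle distinct values instead (a hedged sketch, not necessarily the fix this PR settles on):
   ```
   import numpy as np

   dshape = (20, 100)  # shape assumed from the test
   # Every element is distinct, so argsort has a unique correct answer
   np_data = np.arange(np.prod(dshape), dtype="float32")
   np.random.shuffle(np_data)
   np_data = np_data.reshape(dshape)
   ```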




[GitHub] [incubator-tvm] KindleHe edited a comment on issue #4884: [android_deploy] CRASH caused by `Module.load` func while running App on Android Device with

2020-02-17 Thread GitBox
KindleHe edited a comment on issue #4884: [android_deploy] CRASH caused by 
`Module.load` func while running App on Android Device with 
URL: https://github.com/apache/incubator-tvm/issues/4884#issuecomment-586919994
 
 
   Thanks for your quick reply! 
   
   As you said in [[REFACTOR][PY][API-Change] Polish tvm.runtime, 
tvm.runtime.module API 
update](https://github.com/apache/incubator-tvm/pull/4837#issue-372188411)
   ```
   API changes wrt to runtime.Module
   tvm.module.load -> tvm.runtime.load_module
   tvm.module.enabled -> tvm.runtime.enabled
   tvm.module.system_lib -> tvm.runtime.system_lib
   ```
   However, the [tvm.module related 
code](https://github.com/apache/incubator-tvm/blob/1dcf8a16ee3a93dff5ffc1ad1a66892eda03ef13/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java#L184)
 has not changed yet, even though I keep my dev branch in sync with the newest 
master branch. In `src/main/java/org/apache/tvm/android/demo/MainActivity.java`:
   ```
   Module modelLib = Module.load(libCacheFilePath);
   ```
   Finally, I get the same crash error, and I'm not sure what the problem is.
   Could you offer me some further help?
   > Can you see if #4871 resolved your problem, please make sure to rebuild 
the native app along with the java source
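   
   On the Python side, the renames quoted above are used like this (file name is a placeholder):
   ```
   import tvm

   # Old: tvm.module.load / tvm.module.enabled
   lib = tvm.runtime.load_module("deploy_lib.so")
   print(tvm.runtime.enabled("gpu"))
   ```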
   
   




[GitHub] [incubator-tvm] FrozenGene commented on issue #4857: Windows Support for cpp_rpc

2020-02-17 Thread GitBox
FrozenGene commented on issue #4857: Windows Support for cpp_rpc
URL: https://github.com/apache/incubator-tvm/pull/4857#issuecomment-586882243
 
 
   > @FrozenGene Thanks for taking a peek. I totally understand limited time.
   > 
   > You are right, I do not include a case for Android to skip linking with pthread. My only excuse is that I couldn't really find a solid answer to this in CMake. "if(ANDROID)" would be nice, if that's how it is done. I'll be very happy to implement any suggestion though!
   > 
   > I made this "first class" option from the main CMake because a) it's a much better implementation on Windows than the Python version, which took a bunch of hacks to get decent performance, and b) it was very easy to link to the tvm libs given it's a sub-CMake project.
   
   I agree about all the advantages of CMake. For Android, I think 
[CMAKE_SYSTEM_NAME](https://cmake.org/cmake/help/v3.4/variable/CMAKE_SYSTEM_NAME.html)
 could maybe help us. But it requires users to specify the build OS; I would 
prefer we do it automatically, i.e. we could use the compiler option 
`-dumpmachine` to find out whether we are on Android or Linux.
   
   For example, when I run:
   ` ~/ndk_tools/bin/clang -dumpmachine`, it will output: 
`aarch64-none-linux-android`
   If I run arm linux compiler, it will output: `aarch64-linux-gnueabi`.
   
   So I think we could use this to distinguish Android from Linux. If we cannot 
run this command successfully, we fall back to the default behavior. 
   
   Please make sure the [Android NDK 
tools](https://developer.android.com/ndk/downloads) and [Linaro Linux 
toolchain](https://www.linaro.org/downloads/) can compile cpp_rpc successfully, 
because they are important platforms for cpp_rpc. (Note: these two toolchains 
have Windows support too.)
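   
   In Python terms, the detection could look like this (a sketch; `compiler` is whatever C++ compiler the build invokes):
   ```
   import subprocess

   def is_android_toolchain(compiler):
       # Ask the compiler for its target triple, e.g. "aarch64-none-linux-android"
       triple = subprocess.check_output([compiler, "-dumpmachine"]).decode().strip()
       return "android" in triple
   ```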




[GitHub] [incubator-tvm] masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-17 Thread GitBox
masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-586890441
 
 
   ping @zhiics @kazum @FrozenGene please review the changes you requested. 
Your approvals are required to move forward.




[GitHub] [incubator-tvm] Orion34C commented on a change in pull request #4546: [CODEGEN] Support cuda tensorcore subbyte int data type in auto tensorcore

2020-02-17 Thread GitBox
Orion34C commented on a change in pull request #4546: [CODEGEN] Support cuda 
tensorcore subbyte int data type in auto tensorcore
URL: https://github.com/apache/incubator-tvm/pull/4546#discussion_r380072928
 
 

 ##
 File path: tutorials/autotvm/tensor_core_matmul_subbyte_int.py
 ##
 @@ -0,0 +1,231 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import logging
+import sys
+
+import numpy as np
+import tvm
+
+from tvm import autotvm
+
+
+def matmul_nn(A, B, L, dtype='int4', layout='TN'):
+    k = tvm.reduce_axis((0, L), name='k')
+    out_type = 'int'
+    return tvm.compute((N, M), lambda i, j: tvm.sum((A[i, k] * B[j, k]).astype(out_type), axis=k))
+
+@autotvm.template
+def test_gemm_nn(N, L, M, dtype, layout):
+    shape_a = (N, L)
+    shape_b = (M, L)
+    A = tvm.placeholder(shape_a, name='A', dtype=dtype)
+    B = tvm.placeholder(shape_b, name='B', dtype=dtype)
+    C = matmul_nn(A, B, L, dtype, layout)
+
+    s = tvm.create_schedule(C.op)
+    y, x = s[C].op.axis
+    k = s[C].op.reduce_axis[0]
+
+    # storage_align params
+    factor = 64
+    offset = 32
+    if dtype == 'int1':
+        factor = 256
+        offset = 128
+
+    AA = s.cache_read(A, "shared", [C])
+    s[AA].storage_align(AA.op.axis[0], factor, offset)
+    AL = s.cache_read(AA, "local", [C])
+    BB = s.cache_read(B, "shared", [C])
+    BL = s.cache_read(BB, "local", [C])
+    CL = s.cache_write(C, "local")
+
+    cfg = autotvm.get_config()
+    cfg.define_knob("bx", [4, 8])
+    cfg.define_knob("by", [8, 16, 32, 64])
+    cfg.define_knob("step_k", [1, 2, 4, 8, 16, 32])
+    cfg.define_knob("v", [8, 16, 32])
+    by = cfg['by'].val
+    bx = cfg['bx'].val
+    step_k = cfg['step_k'].val
+    v = cfg['v'].val
+    '''
+    bx = 4
+    by = 16
+    step_k = 32
+    '''
 
 Review comment:
   done




[GitHub] [incubator-tvm] masahi commented on issue #4790: Fast exponent

2020-02-17 Thread GitBox
masahi commented on issue #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-586890882
 
 
   ping @tqchen 




[GitHub] [incubator-tvm] masahi commented on issue #4821: [WINDOWS][AutoTVM] OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted and O

2020-02-17 Thread GitBox
masahi commented on issue #4821: [WINDOWS][AutoTVM] OSError: [WinError 10048] 
Only one usage of each socket address (protocol/network address/port) is 
normally permitted and OSError: [WinError 10049] The requested address is not 
valid in its context
URL: https://github.com/apache/incubator-tvm/issues/4821#issuecomment-586902457
 
 
   AutoTVM doesn't work on Win at the moment, @jmorrill  is working on it. See 
#4548




[GitHub] [incubator-tvm] masahi commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-17 Thread GitBox
masahi commented on a change in pull request #4497: [Relay] Add a PyTorch to 
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r380058573
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1026 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, 
no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, 
unused-argument, unused-variable, broad-except
+# pylint: disable=import-outside-toplevel, simplifiable-if-expression, 
unnecessary-comprehension
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+from tvm.ir import module as _module
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ["from_pytorch"]
+
+# operator implementation
+def _elemwise(name):
+    def _impl(inputs, input_types):
+        # TODO: Figure out a better way to get typing to work for tensor + scalar
+        type0 = input_types[0]
+        if isinstance(inputs[1], _expr.Expr):
+            type0 = input_types[1]
+
+        type1 = input_types[1]
+        if isinstance(inputs[0], _expr.Expr):
+            type1 = input_types[0]
+
+        data0 = _convert_elemwise_input(inputs[0], type0)
+        data1 = _convert_elemwise_input(inputs[1], type1)
+
+        return get_relay_op(name)(data0, data1)
+    return _impl
+
+def _unsqueeze():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        axis = inputs[1]
+
+        return _op.transform.expand_dims(data, int(axis), 1)
+    return _impl
+
+def _concatenate():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        axis = inputs[1]
+
+        if isinstance(data, _expr.Expr):
+            data = [data]
+
+        return _op.tensor.concatenate(data, int(axis))
+    return _impl
+
+def _slice():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        strides = []
+
+        if isinstance(data, _expr.Expr):
+            inferred_shape = _infer_shape(data)
+            end = []
+            for infer in inferred_shape:
+                end.append(int(infer))
+            if isinstance(data, _expr.Var):
+                end = inferred_shape
+                end = list(end)
+        else:
+            end = data.shape
+
+        begin = [0]*len(end)
+        dim = int(inputs[1])
+        begin[dim] = int(inputs[2])
+
+        if isinstance(inputs[3], str) and inputs[3].isdigit():
+            end[dim] = min(end[dim], int(inputs[3]))
+        else:
+            end[dim] = inputs[3]
+
+        strides.append(int(inputs[4]))
+        return _op.transform.strided_slice(data, begin, end, strides)
+    return _impl
+
+def _select():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        dim = int(inputs[1])
+        index = int(inputs[2])
+
+        return _op.transform.take(data, _expr.const(index, dtype="int32"), axis=dim)
+    return _impl
+
+def _ones():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+
+        import torch
+        if isinstance(data, _expr.Expr):
+            shape = _infer_shape(data)
+        elif isinstance(data, list):
+            shape = data
+        elif isinstance(data, (torch.Tensor, np.ndarray)):
+            shape = data.shape
+        else:
+            assert "data type {} could not be parsed in ones op" % (type(data))
+
+        return _op.full(_expr.const(1), shape, dtype=_convert_data_type(input_types[0]))
+    return _impl
+
+def _zeros():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+
+        import torch
+        if isinstance(data, _expr.Expr):
+            shape = _infer_shape(data)
+        elif isinstance(data, list):
+            shape = data
+        elif isinstance(data, (torch.Tensor, np.ndarray)):
+            shape = data.shape
+        else:
+            assert "data type {} could not be parsed in zeros op" % (type(data))
+
+        return _op.full(_expr.const(0), shape, dtype=_convert_data_type(input_types[0]))
+    return _impl
+
+def _relu():
+def 

[GitHub] [incubator-tvm] KindleHe closed issue #4884: [android_deploy] CRASH caused by `Module.load` func while running App on Android Device with

2020-02-17 Thread GitBox
KindleHe closed issue #4884: [android_deploy] CRASH caused by `Module.load` 
func while running App on Android Device with 
URL: https://github.com/apache/incubator-tvm/issues/4884
 
 
   




[GitHub] [incubator-tvm] Orion34C commented on issue #4546: [CODEGEN] Support cuda tensorcore subbyte int data type in auto tensorcore

2020-02-17 Thread GitBox
Orion34C commented on issue #4546: [CODEGEN] Support cuda tensorcore subbyte 
int data type in auto tensorcore
URL: https://github.com/apache/incubator-tvm/pull/4546#issuecomment-586994714
 
 
   @vinx13 Hi, I re-ran the tests several times; the same error happened in 
test_workspace_add, with simply a TVM error in the CPU env. Are there any docs 
or tutorials that can help me figure out what went wrong in my commit? Thanks!




[GitHub] [incubator-tvm] KindleHe commented on issue #4884: [android_deploy] CRASH caused by `Module.load` func while running App on Android Device with

2020-02-17 Thread GitBox
KindleHe commented on issue #4884: [android_deploy] CRASH caused by 
`Module.load` func while running App on Android Device with 
URL: https://github.com/apache/incubator-tvm/issues/4884#issuecomment-586994657
 
 
   @tqchen Yes! I finally found the answer! Your answer is right!
   
   TVM4J is the Java frontend for the TVM runtime, and the crash occurs when 
calling the Module.load func.
   
   As [#4871](https://github.com/apache/incubator-tvm/pull/4871) said, the 
runtime PackedFunc was changed in that commit; namely, the corresponding Java 
interface for the TVM runtime changed.
   
   So, even though I synced my branch with the newest remote master and rebuilt 
the Android APK, I got the same crash, because I forgot to rebuild TVM4J to 
pick up the new Java interface after 
[#4871](https://github.com/apache/incubator-tvm/pull/4871).
   
   The key is to **rebuild TVM4J** before **rebuilding the Android APK**.
   
   As a Java newcomer I am glad to learn more about Java, and thanks very much 
for your kind help!
   
   > Thanks for your quick reply!
   > 
   > As you said in [[REFACTOR][PY][API-Change] Polish tvm.runtime, 
tvm.runtime.module API 
update](https://github.com/apache/incubator-tvm/pull/4837#issue-372188411)
   > 
   > ```
   > API changes wrt to runtime.Module
   > tvm.module.load -> tvm.runtime.load_module
   > tvm.module.enabled -> tvm.runtime.enabled
   > tvm.module.system_lib -> tvm.runtime.system_lib
   > ```
   > 
   > However, the [tvm.module related 
code](https://github.com/apache/incubator-tvm/blob/1dcf8a16ee3a93dff5ffc1ad1a66892eda03ef13/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java#L184)
 has not changed yet, even though I keep my dev branch in sync with the newest 
master branch. In `src/main/java/org/apache/tvm/android/demo/MainActivity.java`:
   > 
   > ```
   > Module modelLib = Module.load(libCacheFilePath);
   > ```
   > 
   > Finally, I get the same crash error, and I'm not sure what the problem is.
   > Could you offer me some further help?
   > 
   > > Can you see if #4871 resolved your problem, please make sure to rebuild 
the native app along with the java source
   
   




[GitHub] [incubator-tvm] mbaret commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-02-17 Thread GitBox
mbaret commented on issue #4847: Return empty CSourceModule when no 
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-587005445
 
 
   I've tested the most recent commit with my use case (aarch64 with offloading 
to external codegen) and it seems to work well. It's still not a particularly 
elegant solution, but I can't propose a better one.

