[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-29 Thread GitBox
alexwong commented on a change in pull request #4497: [WIP] [Relay] Add a 
PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r372766377
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1098 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ['from_pytorch']
+
+# operator implementation
+def _elemwise(name):
+    def _impl(inputs, input_types):
+        data0 = _convert_elemwise_input(inputs[0])
+        data1 = _convert_elemwise_input(inputs[1])
+
+        return get_relay_op(name)(data0, data1)
+    return _impl
+
+def _unsqueeze():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        axis = inputs[1]
+
+        return _op.transform.expand_dims(data, int(axis), 1)
+    return _impl
+
+def _concatenate():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        axis = inputs[1]
+
+        if isinstance(data, (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+            data = [data]
+
+        return _op.tensor.concatenate(data, int(axis))
+    return _impl
+
+def _slice():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        strides = []
+
+        inferred_shape = _infer_shape(data)
+        end = []
+        for infer in inferred_shape:
+            end.append(int(infer))
+        if isinstance(data, _expr.Var):
+            end = _infer_shape(data)
+            end = list(end)
+
+        begin = [0]*len(end)
+        dim = int(inputs[1])
+        begin[dim] = int(inputs[2])
+
+        if isinstance(inputs[3], str) and inputs[3].isdigit():
+            end[dim] = min(end[dim], int(inputs[3]))
+        else:
+            end[dim] = inputs[3]
+
+        strides.append(int(inputs[4]))
+        return _op.transform.strided_slice(data, begin, end, strides)
+    return _impl
+
+def _select():
+    def _impl(inputs, input_types):
 
 Review comment:
   Yes, this op seems to be almost a 1-1 map with select.




[GitHub] [incubator-tvm] masahi commented on a change in pull request #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-29 Thread GitBox
masahi commented on a change in pull request #4497: [WIP] [Relay] Add a PyTorch 
to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r372752325
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1098 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ['from_pytorch']
+
+# operator implementation
+def _elemwise(name):
+    def _impl(inputs, input_types):
+        data0 = _convert_elemwise_input(inputs[0])
+        data1 = _convert_elemwise_input(inputs[1])
+
+        return get_relay_op(name)(data0, data1)
+    return _impl
+
+def _unsqueeze():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        axis = inputs[1]
+
+        return _op.transform.expand_dims(data, int(axis), 1)
+    return _impl
+
+def _concatenate():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        axis = inputs[1]
+
+        if isinstance(data, (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+            data = [data]
+
+        return _op.tensor.concatenate(data, int(axis))
+    return _impl
+
+def _slice():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        strides = []
+
+        inferred_shape = _infer_shape(data)
+        end = []
+        for infer in inferred_shape:
+            end.append(int(infer))
+        if isinstance(data, _expr.Var):
+            end = _infer_shape(data)
+            end = list(end)
+
+        begin = [0]*len(end)
+        dim = int(inputs[1])
+        begin[dim] = int(inputs[2])
+
+        if isinstance(inputs[3], str) and inputs[3].isdigit():
+            end[dim] = min(end[dim], int(inputs[3]))
+        else:
+            end[dim] = inputs[3]
+
+        strides.append(int(inputs[4]))
+        return _op.transform.strided_slice(data, begin, end, strides)
+    return _impl
+
+def _select():
+    def _impl(inputs, input_types):
 
 Review comment:
   can we use op.transform.take(...) for this?
   
https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/op/transform.py#L263
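A minimal sketch of what that mapping could look like in this frontend, assuming the converter receives `inputs = [data, dim, index]` the way the other `_impl` helpers in this file do (an illustration, not the PR's final code):

```python
def _select():
    def _impl(inputs, input_types):
        data = inputs[0]
        dim = int(inputs[1])
        index = _expr.const(int(inputs[2]), dtype="int32")
        # A scalar (0-d) index makes take() drop the selected axis,
        # which matches the semantics of aten::select.
        return _op.transform.take(data, index, axis=dim)
    return _impl
```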




[GitHub] [incubator-tvm] FrozenGene commented on issue #4785: [FFI][Windows] Parse additional exception strings

2020-01-29 Thread GitBox
FrozenGene commented on issue #4785: [FFI][Windows] Parse additional exception 
strings
URL: https://github.com/apache/incubator-tvm/pull/4785#issuecomment-580071366
 
 
   @soiferj if you have no other comment, please approve this change according 
to this doc: 
https://docs.tvm.ai/contribute/code_review.html#approve-and-request-changes-explicitly




[GitHub] [incubator-tvm] shoubhik commented on issue #4764: [CI] ci-gpu update blockers

2020-01-29 Thread GitBox
shoubhik commented on issue #4764: [CI] ci-gpu update blockers 
URL: https://github.com/apache/incubator-tvm/issues/4764#issuecomment-580066405
 
 
   @tqchen, in that case, can we create a new docker instance specifically for 
testing the qnn parser?




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4790: Fast exponent

2020-01-29 Thread GitBox
FrozenGene commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r372741774
 
 

 ##
 File path: topi/include/topi/elemwise.h
 ##
 @@ -360,5 +359,71 @@ inline Tensor full_like(const Tensor& x,
   }, name, tag);
 }
 
+ /*
+ * \brief Fast exponential function implementation from Eigen
+ * https://github.com/eigenteam/eigen-git-mirror/blob/master/Eigen/src/Core/arch/Default/GenericPacketMathFunctions.h#L183
 
 Review comment:
   What is the license of libeigen? If you understood it and wrote the code yourself, you could remove this reference.




[GitHub] [incubator-tvm] vinx13 commented on a change in pull request #4775: conv3d_ndhwc schedule

2020-01-29 Thread GitBox
vinx13 commented on a change in pull request #4775: conv3d_ndhwc schedule
URL: https://github.com/apache/incubator-tvm/pull/4775#discussion_r372740925
 
 

 ##
 File path: topi/python/topi/x86/conv3d.py
 ##
 @@ -0,0 +1,112 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-variable, too-many-locals
+# pylint: disable=unused-argument, redefined-builtin, no-else-return
+"""Conv3D operators"""
+import tvm
+from tvm import autotvm
+from .. import generic, tag
+from ..nn.conv3d import conv3d, conv3d_ndhwc, conv3d_ncdhw
+from ..generic.nn import schedule_conv3d_ndhwc
+
+@autotvm.register_topi_compute(conv3d, 'cpu', ['direct'])
+def conv3d_x86(cfg, input, filter, strides, padding, dilation, layout='NCDHW', out_dtype=None):
+    if layout == 'NCDHW':
+        return conv3d_ncdhw(input, filter, strides, padding, dilation, out_dtype)
+    elif layout == 'NDHWC':
+        return conv3d_ndhwc(input, filter, strides, padding, dilation, out_dtype)
+
+@autotvm.register_topi_schedule(schedule_conv3d_ndhwc, 'cpu', ['direct'])
+def schedule_conv3d_ndhwc_x86(cfg, outs):
+    """TOPI schedule callback for conv2d
 
 Review comment:
   @alexgl-github this comment hasn't been resolved




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4788: [FRONTEND][TFLITE]Gather, StridedSlice op support added

2020-01-29 Thread GitBox
FrozenGene commented on a change in pull request #4788: 
[FRONTEND][TFLITE]Gather, StridedSlice op support added
URL: https://github.com/apache/incubator-tvm/pull/4788#discussion_r372740391
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -244,6 +244,57 @@ def test_forward_slice():
     _test_slice(np.arange(8, dtype=np.int32).reshape((2, 4)), begin=[0, 1], size=[-1, -1])
     _test_slice(np.arange(5, dtype=np.int32).reshape((5, )), begin=[4], size=[-1])
 
+###
+# Gather
+# --
+
+def _test_gather(dshape, indices, axis, dtype):
+    """ One iteration of Gather """
+    data = np.random.uniform(1, 10, size=dshape).astype(dtype)
+    indices = np.asarray(indices).astype('int32')
+
+    with tf.Graph().as_default():
+        in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype)
+        out = array_ops.gather(in_data, indices, axis=axis)
+        compare_tflite_with_tvm(data, 'Placeholder:0', [in_data], [out])
+
+def test_forward_gather():
 
 Review comment:
   What happens when the input data is quantized?
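A hypothetical sketch of such a quantized test, assuming the fake_quant wrapping and the `quantized=True` convention used by the other quantized tests in this file (the helper keyword arguments here are assumptions, not part of the PR):

```python
def _test_gather_quantized(dshape, indices, axis):
    """One iteration of Gather on a fake-quantized input."""
    data = np.random.uniform(1, 10, size=dshape).astype('float32')
    indices = np.asarray(indices).astype('int32')

    with tf.Graph().as_default():
        in_data = array_ops.placeholder(shape=data.shape, dtype='float32', name='in')
        inq_data = tf.quantization.fake_quant_with_min_max_args(
            in_data, min=1, max=10, name='inq')
        out = array_ops.gather(inq_data, indices, axis=axis)
        out = tf.quantization.fake_quant_with_min_max_args(out, min=1, max=10, name='out')
        compare_tflite_with_tvm(data, 'inq:0', [inq_data], [out], quantized=True)
```

Since gather only rearranges values, the converted output should carry the input's quantization parameters unchanged.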




[GitHub] [incubator-tvm] jroesch merged pull request #4783: Make sure to visit the arguments of inlined functions

2020-01-29 Thread GitBox
jroesch merged pull request #4783: Make sure to visit the arguments of inlined 
functions
URL: https://github.com/apache/incubator-tvm/pull/4783
 
 
   




[incubator-tvm] branch master updated (1b8522e -> 6798ba8)

2020-01-29 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 1b8522e  [AUTOTVM] Fix a bug in generating the search space (#4779)
 add 6798ba8  Make sure to visit the arguments of inlined functions (#4783)

No new revisions were added by this update.

Summary of changes:
 src/relay/backend/vm/inline_primitives.cc | 14 --
 1 file changed, 12 insertions(+), 2 deletions(-)



[GitHub] [incubator-tvm] icemelon9 commented on issue #4764: [CI] ci-gpu update blockers

2020-01-29 Thread GitBox
icemelon9 commented on issue #4764: [CI] ci-gpu update blockers 
URL: https://github.com/apache/incubator-tvm/issues/4764#issuecomment-580047528
 
 
   @shoubhik @tqchen 
   I found out that the mxnet-mkl RNN layer has a bug. If we use mxnet, the RNN layer test passes. But if we install mxnet-mkl, even version 1.5.0, the mxnet result is wrong.




[GitHub] [incubator-tvm] zxy844288792 commented on a change in pull request #4787: [Relay] Conv2D padding representation

2020-01-29 Thread GitBox
zxy844288792 commented on a change in pull request #4787: [Relay] Conv2D 
padding representation
URL: https://github.com/apache/incubator-tvm/pull/4787#discussion_r372718146
 
 

 ##
 File path: python/tvm/relay/op/nn/nn.py
 ##
 @@ -202,6 +203,8 @@ def conv2d(data,
         dilation = (dilation, dilation)
     if isinstance(padding, int):
         padding = (padding, padding)
 
 Review comment:
   thanks, removed




[GitHub] [incubator-tvm] Laurawly commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-29 Thread GitBox
Laurawly commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r372695578
 
 

 ##
 File path: python/tvm/autotvm/task/task.py
 ##
 @@ -116,43 +149,134 @@ def __repr__(self):
             self.name, self.args, self.kwargs, self.workload
         )
 
-TASK_TABLE = {
-}
+TASK_TABLE2 = {}
 
 Review comment:
   Can we remove the `2` in `TASK_TABLE2` as well?




[GitHub] [incubator-tvm] Laurawly commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-29 Thread GitBox
Laurawly commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r372700311
 
 

 ##
 File path: python/tvm/relay/quantize/_annotate.py
 ##
 @@ -53,11 +53,11 @@ def simulated_quantize_compute(attrs, inputs, out_type, target):
     return [rdata]
 
 
-_reg.register_schedule("relay.op.annotation.simulated_quantize",
-                       _reg.schedule_injective)
+# _reg.register_schedule("relay.op.annotation.simulated_quantize",
 
 Review comment:
   Remove




[GitHub] [incubator-tvm] Laurawly commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-29 Thread GitBox
Laurawly commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r372704546
 
 

 ##
 File path: topi/python/topi/intel_graphics/conv2d.py
 ##
 @@ -443,8 +354,32 @@ def _schedule_cl_spatialpack_NCHWc(cfg, s, op):
 s[output].compute_inline()
 conv = s.outputs[0]
 SCHEDULE_OUTPUT = False
+else: # conv2d_NCHWc_unpack
+conv = op.input_tensors[0]
+temp = s[conv].op.input_tensors[0]
+kernel = s[conv].op.input_tensors[1]
+temp_W = s.cache_read(temp, "warp", [conv])
+conv_L = s.cache_write(conv, "local")
+SCHEDULE_OUTPUT = True
 kernel_L = s.cache_read(kernel, "local", [conv_L])
 
+if temp.name == "pad_temp":
+data = temp.op.input_tensors[0]
+# TODO(@Laurawly): Do we need to schedule pad op here?
 
 Review comment:
   Yeah we need to schedule temp.




[GitHub] [incubator-tvm] zxy844288792 commented on a change in pull request #4787: [Relay] Conv2D padding representation

2020-01-29 Thread GitBox
zxy844288792 commented on a change in pull request #4787: [Relay] Conv2D 
padding representation
URL: https://github.com/apache/incubator-tvm/pull/4787#discussion_r372702055
 
 

 ##
 File path: python/tvm/relay/op/nn/nn.py
 ##
 @@ -202,6 +203,8 @@ def conv2d(data,
         dilation = (dilation, dilation)
     if isinstance(padding, int):
         padding = (padding, padding)
+    # convert 2-way padding to 4-way padding
+    padding = get_pad_tuple(padding)
 
 Review comment:
   in topi/nn/util, we have get_pad_tuple1d and get_pad_tuple3d, so maybe I 
will rename it as get_pad_tuple2d for consistency
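A minimal sketch of what the renamed helper could look like, assuming it only needs to normalize int / 2-tuple / 4-tuple padding into (pad_top, pad_left, pad_bottom, pad_right); the actual PR may also handle the 'VALID' / 'SAME' string forms:

```python
def get_pad_tuple2d(padding):
    """Normalize conv2d padding to 4-way (top, left, bottom, right)."""
    if isinstance(padding, int):
        return padding, padding, padding, padding
    if isinstance(padding, (tuple, list)):
        if len(padding) == 2:
            pad_h, pad_w = padding
            return pad_h, pad_w, pad_h, pad_w
        if len(padding) == 4:
            return tuple(padding)
    raise ValueError("Unsupported padding: %s" % str(padding))
```

With such a helper, relay.nn.conv2d could call `padding = get_pad_tuple2d(padding)` unconditionally, which would also make the `isinstance(padding, int)` branch quoted above redundant.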




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-29 Thread GitBox
comaniac commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r372636908
 
 

 ##
 File path: python/tvm/relay/backend/compile_engine.py
 ##
 @@ -63,6 +83,316 @@ def _get_cache_key(source_func, target):
     return source_func
 
 
+def get_shape(shape):
+    """Convert the shape to correct dtype and vars."""
+    ret = []
+    for dim in shape:
+        if isinstance(dim, tvm.expr.IntImm):
+            val = int(dim)
+            assert val <= np.iinfo(np.int32).max
+            ret.append(tvm.expr.IntImm("int32", val))
+        elif isinstance(dim, tvm.expr.Any):
+            ret.append(tvm.var("any_dim", "int32"))
+        else:
+            ret.append(dim)
+    return ret
+
+
+def get_valid_implements(op, attrs, inputs, out_type, target):
+    """Get all valid implementations from the op strategy.
+
+    Note that this function doesn't support op that has symbolic input shapes.
+
+    Parameters
+    ----------
+    op : relay.op.Op
+        Relay operator.
+
+    attrs : object
+        The op attribute.
+
+    inputs : list of tvm.Tensor
+        Input tensors to the op.
+
+    out_type : relay.Type
+        The output type.
+
+    target : tvm.Target
+        The target to compile the op.
+
+    Returns
+    -------
+    ret : list of relay.op.OpImplement
+        The list of op implementations.
+    """
+    fstrategy = op.get_attr("FTVMStrategy")
+    assert fstrategy is not None, "%s doesn't have FTVMStrategy registered" % op.name
+    with target:
+        strategy = fstrategy(attrs, inputs, out_type, target)
+    ret = []
+    for spec in strategy.specializations:
+        if spec.condition:
+            flag = True
+            for clause in spec.condition.clauses:
+                clause = tvm.ir_pass.Simplify(clause)
+                if isinstance(clause, tvm.expr.IntImm) and clause.value:
 
 Review comment:
   Could you add comments to this statement, or use a better name for `flag`?  
I don't quite understand the logic here.
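One possible way to address this, sketched against the logic quoted above rather than the authors' final revision: give `flag` a descriptive name and comment the intent.

```python
for spec in strategy.specializations:
    if spec.condition:
        # A specialized implementation is usable only if every clause of its
        # condition simplifies to a non-zero constant, i.e. is statically true.
        all_clauses_true = True
        for clause in spec.condition.clauses:
            clause = tvm.ir_pass.Simplify(clause)
            if isinstance(clause, tvm.expr.IntImm) and clause.value:
                continue
            all_clauses_true = False
            break
        if all_clauses_true:
            ret.extend(spec.implements)
    else:
        # Unconditional specializations are always valid.
        ret.extend(spec.implements)
```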




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-29 Thread GitBox
comaniac commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r372683444
 
 

 ##
 File path: python/tvm/relay/backend/compile_engine.py
 ##
 @@ -63,6 +83,316 @@ def _get_cache_key(source_func, target):
     return source_func
 
 
+def get_shape(shape):
+    """Convert the shape to correct dtype and vars."""
+    ret = []
+    for dim in shape:
+        if isinstance(dim, tvm.expr.IntImm):
+            val = int(dim)
+            assert val <= np.iinfo(np.int32).max
+            ret.append(tvm.expr.IntImm("int32", val))
+        elif isinstance(dim, tvm.expr.Any):
+            ret.append(tvm.var("any_dim", "int32"))
+        else:
+            ret.append(dim)
+    return ret
+
+
+def get_valid_implements(op, attrs, inputs, out_type, target):
+    """Get all valid implementations from the op strategy.
+
+    Note that this function doesn't support op that has symbolic input shapes.
+
+    Parameters
+    ----------
+    op : relay.op.Op
+        Relay operator.
+
+    attrs : object
+        The op attribute.
+
+    inputs : list of tvm.Tensor
+        Input tensors to the op.
+
+    out_type : relay.Type
+        The output type.
+
+    target : tvm.Target
+        The target to compile the op.
+
+    Returns
+    -------
+    ret : list of relay.op.OpImplement
+        The list of op implementations.
+    """
+    fstrategy = op.get_attr("FTVMStrategy")
+    assert fstrategy is not None, "%s doesn't have FTVMStrategy registered" % op.name
+    with target:
+        strategy = fstrategy(attrs, inputs, out_type, target)
+    ret = []
+    for spec in strategy.specializations:
+        if spec.condition:
+            flag = True
+            for clause in spec.condition.clauses:
+                clause = tvm.ir_pass.Simplify(clause)
+                if isinstance(clause, tvm.expr.IntImm) and clause.value:
+                    continue
+                flag = False
+                break
+            if flag:
+                for impl in spec.implements:
+                    ret.append(impl)
+        else:
+            for impl in spec.implements:
+                ret.append(impl)
+    return ret
+
+
+def select_implement(op, attrs, inputs, out_type, target, use_autotvm=True):
+    """Select the best implement from the op strategy.
+
+    If use_autotvm is True, it'll first try to find the best implementation
+    based on AutoTVM profile results. If no AutoTVM profile result is found,
+    it'll choose the implementation with highest plevel.
+
+    If use_autotvm is False, it'll directly choose the implementation with
+    highest plevel.
+
+    Note that this function doesn't support op that has symbolic input shapes.
+
+    Parameters
+    ----------
+    op : relay.op.Op
+        Relay operator.
+
+    attrs : object
+        The op attribute.
+
+    inputs : list[tvm.Tensor]
+        Input tensors to the op.
+
+    out_type : relay.Type
+        The output type.
+
+    target : tvm.Target
+        The target to compile the op.
+
+    use_autotvm : bool
+        Whether query AutoTVM to pick the best.
+
+    Returns
+    -------
+    ret : tuple(relay.op.OpImplement, list[tvm.Tensor])
+        The best op implementation and the corresponding output tensors.
+    """
+    all_impls = get_valid_implements(op, attrs, inputs, out_type, target)
+
+    best_plevel_impl = None
+    for impl in all_impls:
+        if best_plevel_impl is None or int(impl.plevel) > int(best_plevel_impl.plevel):
 
 Review comment:
   Why do we need to cast `plevel` to int here?
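If the cast is only there because `plevel` is stored as an expression rather than a Python int, one alternative would be to convert once and pick the maximum directly (a sketch, assuming `all_impls` is non-empty):

```python
best_plevel_impl = max(all_impls, key=lambda impl: int(impl.plevel))
```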




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-29 Thread GitBox
comaniac commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r372640446
 
 

 ##
 File path: python/tvm/build_module.py
 ##
 @@ -425,6 +425,8 @@ def lower(sch,
         stmt = ir_pass.InstrumentBoundCheckers(stmt)
     if simple_mode:
         return stmt
+    # print('='*80)
+    # print(stmt)
 
 Review comment:
   Remove




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-29 Thread GitBox
comaniac commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r372693044
 
 

 ##
 File path: python/tvm/autotvm/task/dispatcher.py
 ##
 @@ -481,8 +412,12 @@ def _query_inside(self, target, workload):
 """
 if self._counter < len(self._records):
 cfg = self._records[self._counter][0].config
+wkl = self._records[self._counter][0].task.workload
+if workload is not None:
+assert wkl == workload
 self._counter += 1
-self.update(target, workload, cfg)
+self.update(target, wkl, cfg)
+cfg.workload = wkl
 
 Review comment:
   Where is `cfg.workload` initialized? I didn't find a definition in `ConfigSpace`. Also, what's the purpose of having a workload field in a config space?




[GitHub] [incubator-tvm] huajsj opened a new pull request #4791: [TOPI] upsample operator 'NCHWinic' format support.

2020-01-29 Thread GitBox
huajsj opened a new pull request #4791: [TOPI] upsample operator 'NCHWinic' 
format support.
URL: https://github.com/apache/incubator-tvm/pull/4791
 
 
   Some hardware accelerators require packed data formats like NCHWinic to fit their hardware resources. This PR adds 'NCHWinic' format support to the 'upsample' operator to meet that requirement.




[GitHub] [incubator-tvm] alexwong edited a comment on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-29 Thread GitBox
alexwong edited a comment on issue #4497: [WIP] [Relay] Add a PyTorch to Relay 
Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-579987071
 
 
   I cleaned up the code based on the simpler feedback and will focus on getting the CI to pass (with refactored test code based on @jwfromm's comment).




[GitHub] [incubator-tvm] mbarrett97 commented on issue #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-29 Thread GitBox
mbarrett97 commented on issue #4543: [FRONTEND][TFLITE] Add support for 
TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#issuecomment-579987271
 
 
   @FrozenGene are there any other changes you want?




[GitHub] [incubator-tvm] alexwong commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-29 Thread GitBox
alexwong commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-579987071
 
 
   I cleaned up all of the simpler fixes and will focus on getting the CI to pass (with refactored test code based on @jwfromm's comment).




[GitHub] [incubator-tvm] jmorrill commented on issue #4785: [FFI][Windows] Parse additional exception strings

2020-01-29 Thread GitBox
jmorrill commented on issue #4785: [FFI][Windows] Parse additional exception 
strings
URL: https://github.com/apache/incubator-tvm/pull/4785#issuecomment-579897309
 
 
   Clean fix was all you.  I just did the clerical work.
   
   Thanks man!




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4787: [Relay] Conv2D padding representation

2020-01-29 Thread GitBox
comaniac commented on a change in pull request #4787: [Relay] Conv2D padding 
representation
URL: https://github.com/apache/incubator-tvm/pull/4787#discussion_r372523989
 
 

 ##
 File path: python/tvm/relay/op/nn/nn.py
 ##
 @@ -202,6 +203,8 @@ def conv2d(data,
         dilation = (dilation, dilation)
     if isinstance(padding, int):
         padding = (padding, padding)
+    # convert 2-way padding to 4-way padding
+    padding = get_pad_tuple(padding)
 
 Review comment:
   Since we also have conv1d and conv3d, maybe it's better to rename this 
function to `get_2d_pad_tuple`?




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4787: [Relay] Conv2D padding representation

2020-01-29 Thread GitBox
comaniac commented on a change in pull request #4787: [Relay] Conv2D padding 
representation
URL: https://github.com/apache/incubator-tvm/pull/4787#discussion_r372528150
 
 

 ##
 File path: topi/python/topi/nn/conv2d.py
 ##
 @@ -62,6 +62,8 @@ def conv2d(input, filter, strides, padding, dilation, layout='NCHW', out_dtype=N
     output : tvm.Tensor
         4-D with shape [batch, out_channel, out_height, out_width]
     """
+    #only accepts 4-way padding
+    assert len(padding) == 4, "only accepts 4-way padding"
 
 Review comment:
   Change the function docstring accordingly.




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4787: [Relay] Conv2D padding representation

2020-01-29 Thread GitBox
comaniac commented on a change in pull request #4787: [Relay] Conv2D padding 
representation
URL: https://github.com/apache/incubator-tvm/pull/4787#discussion_r372525767
 
 

 ##
 File path: python/tvm/relay/op/nn/util.py
 ##
 @@ -0,0 +1,53 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-variable
+"""NN operator common utilities"""
+from __future__ import absolute_import
+
+def get_pad_tuple(padding):
+"""Common code to get the pad option
+Parameters
+--
+padding : int or str
+Padding size, or ['VALID', 'SAME']
 
 Review comment:
   The type should be `Union[int, Tuple[int, ...]]`. Also please change the 
description accordingly.




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4787: [Relay] Conv2D padding representation

2020-01-29 Thread GitBox
comaniac commented on a change in pull request #4787: [Relay] Conv2D padding 
representation
URL: https://github.com/apache/incubator-tvm/pull/4787#discussion_r372526415
 
 

 ##
 File path: python/tvm/relay/op/nn/nn.py
 ##
 @@ -202,6 +203,8 @@ def conv2d(data,
         dilation = (dilation, dilation)
     if isinstance(padding, int):
         padding = (padding, padding)
 
 Review comment:
   Consider removing this logic since `get_pad_tuple` can also accept `int`.




[GitHub] [incubator-tvm] alexgl-github opened a new pull request #4790: Fast exponent

2020-01-29 Thread GitBox
alexgl-github opened a new pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790
 
 
   Thanks for contributing to TVM!   Please refer to guideline 
https://docs.tvm.ai/contribute/ for useful information and tips. After the pull 
request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   




[GitHub] [incubator-tvm] alexgl-github commented on issue #4775: conv3d_ndhwc schedule

2020-01-29 Thread GitBox
alexgl-github commented on issue #4775: conv3d_ndhwc schedule
URL: https://github.com/apache/incubator-tvm/pull/4775#issuecomment-579823237
 
 
   > @alexgl-github Thanks for the contribution.
   > 
   > Almost every PR needs to have a test case in Apache TVM project. In this 
case, it can be a test case that uses this schedule and compares accuracy with 
some golden reference.
   
   There's already a topi conv3d_ndhwc test using this schedule:
   
https://github.com/apache/incubator-tvm/blob/1b8522e475e9a889b80f069a83928cafa3502a74/topi/tests/python/test_topi_conv3d_ndhwc.py#L60




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4695: [Relay][Frontend][TFlite] Add parser support for relational ops

2020-01-29 Thread GitBox
FrozenGene commented on a change in pull request #4695: 
[Relay][Frontend][TFlite] Add parser support for relational ops
URL: https://github.com/apache/incubator-tvm/pull/4695#discussion_r372449323
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -705,47 +710,87 @@ def convert_div(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized div operator is not supported yet.')
+                'TFlite quantized DIV operator is not supported yet.')
         return self._convert_elemwise(_op.divide, op)
 
     def convert_pow(self, op):
+        """Convert TFLite POW"""
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized pow operator is not supported yet.')
+                'TFlite quantized POW operator is not supported yet.')
         return self._convert_elemwise(_op.power, op)
 
+    def convert_squared_difference(self, op):
 
 Review comment:
   This part shouldn't appear in your PR if you had rebased onto master correctly. It seems the rebase wasn't handled correctly and this implementation was moved manually. Please rebase your PR onto master correctly, following this guide: https://docs.tvm.ai/contribute/git_howto.html




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4695: [Relay][Frontend][TFlite] Add parser support for relational ops

2020-01-29 Thread GitBox
FrozenGene commented on a change in pull request #4695: 
[Relay][Frontend][TFlite] Add parser support for relational ops
URL: https://github.com/apache/incubator-tvm/pull/4695#discussion_r372450103
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -843,6 +843,13 @@ def _test_pow(data):
 """ One iteration of power """
 return _test_elemwise(math_ops.pow, data)
 ###
+# Squared_difference
+# --
+
+def _test_squared_difference(data):
 
 Review comment:
   Same problem as previous comment. Please correct it.




[GitHub] [incubator-tvm] u99127 commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

2020-01-29 Thread GitBox
u99127 commented on issue #4696: [Relay][Frontend][TFlite] Add support for 
quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#issuecomment-579765000
 
 
   Ping for an approving review? @FrozenGene @anijain2305.




[GitHub] [incubator-tvm] u99127 commented on a change in pull request #4788: [FRONTEND][TFLITE]Gather, StridedSlice op support added

2020-01-29 Thread GitBox
u99127 commented on a change in pull request #4788: [FRONTEND][TFLITE]Gather, 
StridedSlice op support added
URL: https://github.com/apache/incubator-tvm/pull/4788#discussion_r372389879
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -747,6 +749,158 @@ def convert_squared_difference(self, op):
         out = _op.power(difference, relay.const(2, exp_type))
         return out
 
+    def convert_gather(self, op):
+        """Method to Convert TFLite Gather operator"""
+        # Check if the input tensor is quantized, call QNN op
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                'TFlite quantized gather operator is not supported yet.')
 
 Review comment:
   Since this is a tensor manipulation operation rather than something that 
requires any specific qnn dialect, I would prefer we handle the qnn ops here. 
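A sketch of that suggestion: because gather is a pure tensor-manipulation op, the quantized case could reuse the same lowering instead of raising, with the output simply inheriting the input tensor's quantization parameters (this mirrors the hunk above with the is_quantized guard dropped; it is not the PR's code):

```python
    def convert_gather(self, op):
        """Convert TFLite Gather, including quantized inputs."""
        try:
            from tflite.BuiltinOptions import BuiltinOptions
            from tflite.GatherOptions import GatherOptions
            from tflite.TensorType import TensorType
        except ImportError:
            raise ImportError("The tflite package must be installed")

        input_tensors = self.get_input_tensors(op)

        assert op.BuiltinOptionsType() == BuiltinOptions.GatherOptions
        op_options = op.BuiltinOptions()
        gather_options = GatherOptions()
        gather_options.Init(op_options.Bytes, op_options.Pos)
        axis = gather_options.Axis()

        data = self.get_expr(input_tensors[0].tensor_idx)

        indices = input_tensors[1]
        indices_type = indices.tensor.Type()
        assert indices_type in (TensorType.INT32, TensorType.INT64)
        indices_type_str = self.get_tensor_type_str(indices_type)
        indices = self.exp_tab.new_const(self.get_tensor_value(indices),
                                         dtype=indices_type_str)

        # take() only gathers existing values, so no requantization is needed:
        # the input's (scale, zero_point) remain valid for the output.
        return _op.take(data, indices, axis=axis)
```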




[GitHub] [incubator-tvm] u99127 commented on a change in pull request #4788: [FRONTEND][TFLITE]Gather, StridedSlice op support added

2020-01-29 Thread GitBox
u99127 commented on a change in pull request #4788: [FRONTEND][TFLITE]Gather, 
StridedSlice op support added
URL: https://github.com/apache/incubator-tvm/pull/4788#discussion_r372390838
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -747,6 +749,158 @@ def convert_squared_difference(self, op):
         out = _op.power(difference, relay.const(2, exp_type))
         return out
 
+    def convert_gather(self, op):
+        """Method to Convert TFLite Gather operator"""
+        # Check if the input tensor is quantized, call QNN op
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                'TFlite quantized gather operator is not supported yet.')
+        input_tensors = self.get_input_tensors(op)
+
+        try:
+            from tflite.BuiltinOptions import BuiltinOptions
+            from tflite.GatherOptions import GatherOptions
+            from tflite.TensorType import TensorType
+        except ImportError:
+            raise ImportError("The tflite package must be installed")
+
+        assert op.BuiltinOptionsType() == BuiltinOptions.GatherOptions
+        op_options = op.BuiltinOptions()
+        gather_options = GatherOptions()
+        gather_options.Init(op_options.Bytes, op_options.Pos)
+        axis = gather_options.Axis()
+
+        data = self.get_expr(input_tensors[0].tensor_idx)
+
+        indices = input_tensors[1]
+        indices_type = indices.tensor.Type()
+
+        assert indices_type in (TensorType.INT32, TensorType.INT64)
+        indices_type_str = self.get_tensor_type_str(indices_type)
+        indices = self.exp_tab.new_const(self.get_tensor_value(indices),
+                                         dtype=indices_type_str)
+        out = _op.take(data, indices, axis=axis)
+        return out
+
+    def convert_strided_slice(self, op):
+        """Method to Convert TFLite Strided Slice operator"""
+        # Check if the input tensor is quantized, call QNN op
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                'TFlite quantized strided slice operator is not supported yet.')
 
 Review comment:
   Same comment as above - this is a tensor manipulation op rather than 
anything that intrinsically requires quantization knowledge.




[GitHub] [incubator-tvm] inadob commented on issue #4789: [Frontend][TFLite] Dynamically calculate input_stats of any fake_quant range

2020-01-29 Thread GitBox
inadob commented on issue #4789: [Frontend][TFLite] Dynamically calculate 
input_stats of any fake_quant range
URL: https://github.com/apache/incubator-tvm/pull/4789#issuecomment-579739378
 
 
   @kevinthesun @anijain2305 can you please take a look?




[GitHub] [incubator-tvm] inadob opened a new pull request #4789: [Frontend][TFLite] Dynamically calculate input_stats of any fake_quant range

2020-01-29 Thread GitBox
inadob opened a new pull request #4789: [Frontend][TFLite] Dynamically 
calculate input_stats of any fake_quant range
URL: https://github.com/apache/incubator-tvm/pull/4789
 
 
   * pass the input range to the converter and calculate (mean, scale) there (a sketch of this relation follows below)
   * change the range of the second tensor in elemwise operations so that we test inputs with different quant params
   * change the possible output range for elemwise ops wrt the updated ranges
   * update the comments for (m, s) calculations
   * add input range dict to the tflite reduce_mean operation
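A hedged sketch of the (mean, scale) relation being computed, assuming the usual TFLite convention real_value = (quantized_value - mean) / std_dev for uint8 inputs in [qmin, qmax] = [0, 255]; the names below are illustrative, not the PR's helper:

```python
def input_stats_from_range(fmin, fmax, qmin=0, qmax=255):
    """Derive TFLite quantized_input_stats from a fake_quant range [fmin, fmax]."""
    scale = (fmax - fmin) / float(qmax - qmin)  # real units per quantized step
    mean = qmin - fmin / scale                  # quantized value that maps to real 0.0
    std_dev = 1.0 / scale
    return mean, std_dev

# e.g. for the range [-100, 100]: scale = 200/255, mean = 127.5, std_dev = 1.275
```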
   




[GitHub] [incubator-tvm] siju-samuel opened a new pull request #4788: [FRONTEND][TFLITE]Gather, StridedSlice op support added

2020-01-29 Thread GitBox
siju-samuel opened a new pull request #4788: [FRONTEND][TFLITE]Gather, 
StridedSlice op support added
URL: https://github.com/apache/incubator-tvm/pull/4788
 
 
   Thanks for contributing to TVM!   Please refer to guideline 
https://docs.tvm.ai/contribute/ for useful information and tips. After the pull 
request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   @FrozenGene @kevinthesun Could you please help to review this patch? TIA




[GitHub] [incubator-tvm] zxy844288792 opened a new pull request #4787: [Relay] Conv2D padding representation

2020-01-29 Thread GitBox
zxy844288792 opened a new pull request #4787: [Relay] Conv2D padding 
representation
URL: https://github.com/apache/incubator-tvm/pull/4787
 
 
   As discussed here: https://discuss.tvm.ai/t/rfc-conv2d-padding-representation/5394, we agree that topi.nn.conv2d should be enforced to accept 4-way padding, and that relay.nn.conv2d should legalize the padding to 4-way.
   
   get_pad_tuple is from topi util.py. I deleted some unused code and reused it for relay.op.nn.conv2d.
   
   Thanks for contributing to TVM!   Please refer to guideline 
https://docs.tvm.ai/contribute/ for useful information and tips. After the pull 
request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   

