mbaret commented on a change in pull request #6523:
URL: https://github.com/apache/incubator-tvm/pull/6523#discussion_r492584474
##########
File path: include/tvm/relay/attrs/nn.h
##########
@@ -596,11 +596,13 @@ struct Conv2DTransposeAttrs : public tvm::AttrsNode<Conv2DTransposeAttrs> {
/*! \brief Attributes used in dilate operator */
struct DilateAttrs : public tvm::AttrsNode<DilateAttrs> {
Array<IndexExpr> strides;
+ double dilation_value;
Review comment:
Why double vs float?
##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2809,7 +2809,7 @@ def convert_transpose_conv(self, op):
# Weights
weights_tensor_type = weights_tensor.tensor.Type()
# weights tensor type should be UINT8 (quantization) or FLOAT32
Review comment:
Update this comment to also include INT8.
##########
File path: python/tvm/topi/nn/dilate.py
##########
@@ -34,6 +34,9 @@ def dilate(data, strides, name="DilatedInput"):
strides : list / tuple of n ints
Dilation stride on each dimension, 1 means no dilation.
+ dilation_value : int/float
Review comment:
Document this parameter as 'optional' and note its default value.
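For what it's worth, the semantics being documented here can be sketched in plain NumPy (hypothetical `dilate_with_value` helper, not the TVM implementation): input elements land `stride` apart along each dimension, and the gaps are filled with `dilation_value`.

```python
import numpy as np

def dilate_with_value(a, strides, dilation_value=0.0):
    # Each output dimension is (n_i - 1) * stride_i + 1; input elements
    # are scattered stride_i apart and the gaps hold dilation_value.
    out_shape = tuple((n - 1) * s + 1 for n, s in zip(a.shape, strides))
    out = np.full(out_shape, dilation_value, dtype=a.dtype)
    out[tuple(slice(None, None, s) for s in strides)] = a
    return out

x = np.array([[1.0, 2.0], [3.0, 4.0]], dtype="float32")
print(dilate_with_value(x, (2, 2)))
# [[1. 0. 2.]
#  [0. 0. 0.]
#  [3. 0. 4.]]
```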
##########
File path: python/tvm/topi/testing/dilate_python.py
##########
@@ -30,6 +30,9 @@ def dilate_python(input_np, strides):
strides : list / tuple of n ints
Dilation stride on each dimension, 1 means no dilation.
+ dilation_value : int/float
Review comment:
Document this parameter as 'optional' and note its default value.
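For reference, a numpydoc-style sketch of how the 'optional' marker and default might read (illustrative signature and wording, not the actual file content):

```python
def dilate_python(input_np, strides, dilation_value=0.0):
    """Dilate operation (reference implementation stub for illustration).

    Parameters
    ----------
    input_np : numpy.ndarray
        n-D input, can be any layout.
    strides : list / tuple of n ints
        Dilation stride on each dimension, 1 means no dilation.
    dilation_value : int or float, optional
        Value used to fill the dilated positions. Default is 0.
    """
```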
##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2831,17 +2831,94 @@ def convert_transpose_conv(self, op):
else:
padding = (0, 0, 0, 0)
- out = _op.nn.conv2d_transpose(
- in_expr,
- weight_expr_iohw,
- strides=(stride_h, stride_w),
- padding=padding,
- channels=int(out_channels),
- kernel_size=(int(kernel_h), int(kernel_w)),
- data_layout="NHWC",
- kernel_layout="OIHW",
- out_dtype=output_tensor_type_str,
- )
+ if input_tensor.qnn_params:
+ # Making use of qnn.conv2d
Review comment:
May be useful to document the mathematical approach taken here.
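For anyone reading later, the usual identity in this kind of lowering (stated as my assumption about this implementation, not verified against it) is that a stride-s transposed convolution equals a stride-1 convolution over the stride-dilated, (kernel-1)-padded input with a spatially flipped kernel; for quantized tensors the dilation fill must be the input zero point so the inserted elements decode to real 0.0. A 1-D NumPy sketch of the identity (hypothetical helper names):

```python
import numpy as np

def correlate1d(x, k):
    # Valid cross-correlation at stride 1.
    m = len(k)
    return np.array([np.dot(x[i:i + m], k) for i in range(len(x) - m + 1)])

def conv1d_transpose(x, k, stride):
    # Dilate x by `stride` (fill 0 here; the zero point for quantized data),
    # pad both sides by len(k) - 1, then correlate with the flipped kernel.
    m = len(k)
    d = np.zeros((len(x) - 1) * stride + 1)
    d[::stride] = x
    d = np.pad(d, m - 1)
    return correlate1d(d, k[::-1])

print(conv1d_transpose(np.array([1.0, 2.0]), np.array([1.0, 1.0, 1.0]), 2))
# [1. 1. 3. 2. 2.]
```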
##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2831,17 +2831,94 @@ def convert_transpose_conv(self, op):
else:
padding = (0, 0, 0, 0)
- out = _op.nn.conv2d_transpose(
- in_expr,
- weight_expr_iohw,
- strides=(stride_h, stride_w),
- padding=padding,
- channels=int(out_channels),
- kernel_size=(int(kernel_h), int(kernel_w)),
- data_layout="NHWC",
- kernel_layout="OIHW",
- out_dtype=output_tensor_type_str,
- )
+ if input_tensor.qnn_params:
+ # Making use of qnn.conv2d
Review comment:
I suppose I was less interested in a proof, and more in a plain statement of
what manipulations are happening.
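To expand on that for the thread: the manipulation rests on TFLite's affine quantization scheme, real = scale * (q - zero_point), which is why dilated gaps should be filled with the input zero point rather than literal 0, since only the zero point decodes to real 0.0. A two-line sketch (example scale/zero_point values are made up):

```python
scale, zero_point = 0.05, 128  # illustrative quantization parameters

def dequantize(q):
    # Affine mapping used by TFLite quantized tensors.
    return scale * (q - zero_point)

print(dequantize(zero_point) == 0.0)  # True: a zero-point fill decodes to real zero
print(dequantize(140))                # approximately 0.6
```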
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]