jainris commented on a change in pull request #6523:
URL: https://github.com/apache/incubator-tvm/pull/6523#discussion_r492592531
##########
File path: include/tvm/relay/attrs/nn.h
##########
@@ -596,11 +596,13 @@ struct Conv2DTransposeAttrs : public tvm::AttrsNode<Conv2DTransposeAttrs> {
/*! \brief Attributes used in dilate operator */
struct DilateAttrs : public tvm::AttrsNode<DilateAttrs> {
Array<IndexExpr> strides;
+ double dilation_value;
Review comment:
This is parallel to `pad_value` of `PadAttrs`, which is a `double`.
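To illustrate the semantics being discussed, here is a minimal numpy sketch of a dilate with a configurable fill value: elements are placed `stride` apart and the gaps are filled with `dilation_value`, analogous to how `pad_value` fills padding in `pad`. The function name `dilate_1d` is hypothetical and only illustrates the idea; the actual operator works on multi-dimensional tensors.

```python
import numpy as np

def dilate_1d(x, stride, dilation_value=0.0):
    # Hypothetical 1-D sketch of dilate semantics: place the input
    # elements `stride` apart and fill the gaps with `dilation_value`
    # (the parallel to `pad_value` in PadAttrs discussed above).
    out = np.full((len(x) - 1) * stride + 1, dilation_value, dtype=x.dtype)
    out[::stride] = x
    return out

x = np.array([1., 2., 3.])
dilate_1d(x, 2, dilation_value=9.0)  # -> [1., 9., 2., 9., 3.]
```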
##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2831,17 +2831,94 @@ def convert_transpose_conv(self, op):
else:
padding = (0, 0, 0, 0)
- out = _op.nn.conv2d_transpose(
- in_expr,
- weight_expr_iohw,
- strides=(stride_h, stride_w),
- padding=padding,
- channels=int(out_channels),
- kernel_size=(int(kernel_h), int(kernel_w)),
- data_layout="NHWC",
- kernel_layout="OIHW",
- out_dtype=output_tensor_type_str,
- )
+ if input_tensor.qnn_params:
+ # Making use of qnn.conv2d
Review comment:
This is essentially the same as the implementation of the Relay op
`conv2d_transpose`, which applies the same transformations (with 0 in place of
the zero point) to the `input` and `kernel` and then performs a convolution. So
a mathematical approach would amount to proving that these transformations
followed by a convolution are equivalent to a transpose convolution, and that
proof might take an unreasonable amount of space.
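The equivalence being referred to can be checked numerically rather than proved here. The sketch below, in 1-D for brevity and using hypothetical helper names (`conv_transpose_1d`, `dilate`, `xcorr_valid`), verifies that a transpose convolution equals zero-dilating the input by the stride, padding by `kernel_size - 1`, and then correlating with the flipped kernel:

```python
import numpy as np

def conv_transpose_1d(x, w, stride):
    # Direct definition: scatter-add each input element times the kernel.
    n, k = len(x), len(w)
    out = np.zeros((n - 1) * stride + k)
    for i in range(n):
        out[i * stride : i * stride + k] += x[i] * w
    return out

def dilate(x, stride):
    # Insert stride-1 zeros between consecutive elements.
    out = np.zeros((len(x) - 1) * stride + 1)
    out[::stride] = x
    return out

def xcorr_valid(x, w):
    # Plain "valid" cross-correlation.
    k = len(w)
    return np.array([np.dot(x[i : i + k], w) for i in range(len(x) - k + 1)])

x = np.array([1., 2., 3.])
w = np.array([4., 5.])
stride = 2

direct = conv_transpose_1d(x, w, stride)
# Transformed path: dilate input, pad with k-1 zeros, correlate with flipped kernel.
padded = np.pad(dilate(x, stride), len(w) - 1)
equiv = xcorr_valid(padded, w[::-1])
assert np.allclose(direct, equiv)  # both give [4., 5., 8., 10., 12., 15.]
```

This is only a numerical illustration of the transformation the review comment describes, not the actual Relay/TFLite code path, which additionally handles zero points and layouts.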
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]