trevor-m opened a new pull request #6038:
URL: https://github.com/apache/incubator-tvm/pull/6038
The fully_connected converter used the shapes from the TFLite model to
reshape the data tensor. However, the TFLite model shapes do not reflect those
provided by the `data_shape` parameter in `from_tflite()`. The TFLite model
shape `input_tensor.tensor.ShapeAsNumpy()` will give a batch size of `1`
because the TFLite model obviously doesn't know about the `data_shape` dict the
user provided to the relay importer.
For this particular op, the reshape can always be set to `(-1, n_units)`
without needing to calculate a batch size.
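A minimal NumPy sketch (not the actual converter code; `n_units` and the shapes are taken from the inceptionv4 example below) of why `(-1, n_units)` is the safe reshape target:

```python
import numpy as np

# The fully_connected input must be flattened to 2-D before the dense op.
n_units = 1536
data = np.zeros((4, 1, 1, 1536), dtype="float32")  # runtime batch size is 4

# Using the batch size baked into the TFLite model (always 1 here) would be
# wrong: reshaping 4*1536 elements into (1, 1536) is not even a valid reshape.

# Letting -1 infer the leading dimension handles any batch size:
flattened = data.reshape((-1, n_units))
print(flattened.shape)  # (4, 1536)
```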
For inceptionv4, without this PR we incorrectly compute a batch size of 1 and
add a reshape from %515 `(4, 1, 1, 1536)` to %516 `(1, 1536)`. This ultimately
causes the model output to become `(1, 1001)` instead of `(4, 1001)`.
```
..
%515 = nn.avg_pool2d(%514, pool_size=[8, 8], padding=[0, 0, 0, 0],
layout="NHWC") /* ty=Tensor[(4, 1, 1, 1536), float32] */;
%516 = reshape(%515, meta[relay.Constant][0] /* ty=Tensor[(2), int32] */,
newshape=[1, 1536]) /* ty=Tensor[(1, 1536), float32] */;
%517 = nn.dense(%516, %v_param_299, units=None) /* ty=Tensor[(1, 1001),
float32] */;
%518 = nn.bias_add(%517, %v_param_300) /* ty=Tensor[(1, 1001), float32]
*/;
nn.softmax(%518, axis=1) /* ty=Tensor[(1, 1001), float32] */
}
```
Btw, should the reshape op have some validation to reject reshapes where the
number of elements doesn't match? `(4, 1, 1, 1536)` -> `(1, 1536)` shouldn't
be valid.
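A hedged sketch of the validation being suggested (`check_reshape` is a hypothetical helper, not an existing TVM function; for simplicity it ignores `-1` wildcard dimensions, which a real implementation would have to infer first):

```python
import numpy as np

def check_reshape(old_shape, new_shape):
    """Reject a reshape whose old and new shapes disagree on element count."""
    old_elems = int(np.prod(old_shape))
    new_elems = int(np.prod(new_shape))
    if old_elems != new_elems:
        raise ValueError(
            f"reshape from {old_shape} ({old_elems} elements) to "
            f"{new_shape} ({new_elems} elements) is invalid"
        )

check_reshape((4, 1, 1, 1536), (4, 1536))    # OK: 6144 == 6144
# check_reshape((4, 1, 1, 1536), (1, 1536))  # would raise ValueError
```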
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]