mshr-h commented on code in PR #16131:
URL: https://github.com/apache/tvm/pull/16131#discussion_r1393965464
##########
python/tvm/relay/frontend/pytorch.py:
##########
@@ -1546,6 +1546,16 @@ def flatten(self, inputs, input_types):
out = _op.squeeze(out, axis=squeeze_axes)
return out
+ def unflatten(self, inputs, input_types):
+ data = inputs[0]
+ dim = int(inputs[1])
+ unflattened_size = tuple(inputs[2])
+ dshape = get_const_tuple(self.infer_shape_with_prelude(data))
+ assert len(dshape) > dim
+ new_shape = dshape[:dim] + unflattened_size + dshape[dim + 1 :]
Review Comment:
@vvchernov Thanks!
I don't think we need to add that check here, because torch.jit.trace already performs it.
It raises a RuntimeError like the one below:
```
RuntimeError: unflatten: Provided sizes [3, 5, 3, -1] don't multiply up to
the size of dim 0 (60) in the input tensor
```
Should we add the check in TVM's PyTorch frontend?
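For reference, here is a minimal standalone sketch (not the TVM code, and the helper name is illustrative) of the shape computation in this hunk, with the size check that torch.jit.trace performs added explicitly:

```python
def unflatten_shape(dshape, dim, unflattened_size):
    """Compute the output shape of unflatten on `dim`, resolving one -1 entry
    and validating that the sizes multiply up to dshape[dim].
    Sketch only; mirrors torch.Tensor.unflatten semantics."""
    assert len(dshape) > dim
    sizes = list(unflattened_size)
    known = 1
    infer_idx = None
    for i, s in enumerate(sizes):
        if s == -1:
            infer_idx = i  # at most one -1 is inferred
        else:
            known *= s
    if infer_idx is not None:
        sizes[infer_idx] = dshape[dim] // known
    prod = 1
    for s in sizes:
        prod *= s
    if prod != dshape[dim]:
        # Same condition torch.jit.trace reports in the RuntimeError above.
        raise RuntimeError(
            f"unflatten: Provided sizes {list(unflattened_size)} don't multiply up to "
            f"the size of dim {dim} ({dshape[dim]}) in the input tensor"
        )
    return dshape[:dim] + tuple(sizes) + dshape[dim + 1:]
```

For example, `unflatten_shape((60,), 0, (3, 5, 3, -1))` raises exactly the mismatch shown above, while `unflatten_shape((2, 12, 7), 1, (3, -1))` resolves the -1 to 4.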
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]