sunggg commented on code in PR #14282:
URL: https://github.com/apache/tvm/pull/14282#discussion_r1141205096
##########
src/relax/op/op.cc:
##########
@@ -315,6 +315,32 @@ Expr MakeShapeOf(Expr expr) {
TVM_REGISTER_GLOBAL("relax.op.shape_of").set_body_typed(MakeShapeOf);
+// tensor_to_shape
+
+StructInfo ReturnTensorToShapeStructInfo(const Call& call, const BlockBuilder& ctx) {
+  ICHECK(call->args.size() == 1);
+  ICHECK(call->args[0]->struct_info_.defined());
+  const auto* tsinfo = GetStructInfoAs<TensorStructInfoNode>(call->args[0]);
+  ICHECK(tsinfo && tsinfo->shape.defined());
Review Comment:
The contract with lowering is that we need to know each shape value (it could be a symbolic var) for memory pre-allocation.
So this example wouldn't lower:
```python
from tvm.relax.transform import LegalizeOps
from tvm.script import ir as I
from tvm.script import relax as R


@I.ir_module
class Module:
    @R.function
    def main(x: R.Tensor((3,), dtype="int64")) -> R.Tensor((3,), dtype="int64"):
        gv: R.Shape(ndim=3) = R.call_packed(
            "vm.builtin.tensor_to_shape", x, sinfo_args=(R.Shape(ndim=3),)
        )
        gv3: R.Tensor(ndim=3, dtype="int64") = R.reshape(x, gv)
        gv_1: R.Tensor(ndim=3, dtype="int64") = gv3
        return gv_1


mod = LegalizeOps()(Module)
mod.show()
```
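For contrast, here is a sketch (mine, not part of the original comment) of the shape-value contract being satisfied: binding the packed-call result to symbolic vars through `R.match_cast` gives lowering per-dimension values to pre-allocate against. The module name `ModuleWithMatchCast` and the vars `a`, `b`, `c` are illustrative only, and the snippet assumes the usual TVMScript idiom for introducing symbolic shape vars.
```python
# Illustrative sketch only -- not from the PR.
from tvm.relax.transform import LegalizeOps
from tvm.script import ir as I
from tvm.script import relax as R
from tvm.script import tir as T


@I.ir_module
class ModuleWithMatchCast:
    @R.function
    def main(x: R.Tensor((3,), dtype="int64")) -> R.Tensor(ndim=3, dtype="int64"):
        # Symbolic vars that will carry the runtime shape values.
        a = T.int64()
        b = T.int64()
        c = T.int64()
        gv: R.Shape(ndim=3) = R.call_packed(
            "vm.builtin.tensor_to_shape", x, sinfo_args=(R.Shape(ndim=3),)
        )
        # match_cast binds each dimension of the runtime shape to a symbolic
        # var, so downstream lowering knows the values it pre-allocates with.
        gv1 = R.match_cast(gv, R.Shape((a, b, c)))
        gv3: R.Tensor((a, b, c), dtype="int64") = R.reshape(x, gv1)
        return gv3


mod = LegalizeOps()(ModuleWithMatchCast)
mod.show()
```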