jwfromm commented on code in PR #14282:
URL: https://github.com/apache/tvm/pull/14282#discussion_r1134343862
##########
tests/python/relax/test_relax_operators.py:
##########
@@ -193,5 +193,20 @@ def test_op_shape_of():
assert constrained_shape == tvm.runtime.ShapeTuple([1])
+@tvm.script.ir_module
+class TensorToShapeTest:
+    @R.function
+    def run_tensor_to_shape(t: R.Tensor(ndim=1, dtype="int64")) -> R.Shape((1, 2, 3)):
+        gv: R.Shape(ndim=3) = R.tensor_to_shape(t)
+        return gv
+
+
+def test_op_tensor_to_shape():
+    out_shape = run_cpu(
+        TensorToShapeTest, "run_tensor_to_shape", tvm.nd.array(np.array([1, 2, 3]).astype("int64"))
+    )
Review Comment:
Noting my comment above, I'm not sure this is the proper behavior. The input
tensor has `ndim=1`. Should the produced shape really have `ndim=3`?
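For context on the rank question, a minimal NumPy-style sketch (illustrative only, not the op's actual implementation): a `tensor_to_shape`-style conversion reads the *elements* of a rank-1 tensor as shape values, so a length-3 input tensor would plausibly yield a 3-entry shape.

```python
import numpy as np

# Illustrative only: a rank-1 tensor whose three elements are shape values.
t = np.array([1, 2, 3], dtype="int64")

# A tensor_to_shape-style conversion interprets the elements as a shape
# tuple, so the result has as many entries as t has elements.
shape = tuple(int(v) for v in t)

assert t.ndim == 1         # rank of the input tensor
assert shape == (1, 2, 3)  # the produced shape
assert len(shape) == 3     # i.e. the output shape's ndim
```

Under that reading, the input's `ndim=1` and the output's `ndim=3` describe different things: the tensor's rank versus the number of shape entries.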
##########
src/relax/op/op.cc:
##########
@@ -315,6 +315,32 @@ Expr MakeShapeOf(Expr expr) {
TVM_REGISTER_GLOBAL("relax.op.shape_of").set_body_typed(MakeShapeOf);
+// tensor_to_shape
+
+StructInfo ReturnTensorToShapeStructInfo(const Call& call, const BlockBuilder& ctx) {
+  ICHECK(call->args.size() == 1);
+  ICHECK(call->args[0]->struct_info_.defined());
+  const auto* tsinfo = GetStructInfoAs<TensorStructInfoNode>(call->args[0]);
+  ICHECK(tsinfo && tsinfo->shape.defined());
+  ShapeExpr shape_expr = Downcast<ShapeExpr>(tsinfo->shape.value());
+  ICHECK(shape_expr->values.size() == 1);
+  const IntImmNode* ndim = shape_expr->values[0].as<IntImmNode>();
Review Comment:
I don't think this logic is right. Shouldn't the number of dimensions and the output shape match the incoming shape? This seems like it may have been a remnant of `Reshape`'s shape function.
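To make the questioned derivation concrete, here is a hedged Python mirror of what the C++ above computes (the helper name is hypothetical): the input tensor is required to be rank-1, and its single static extent becomes the `ndim` of the resulting shape.

```python
def derive_output_ndim(tensor_shape_values):
    """Hypothetical Python mirror of ReturnTensorToShapeStructInfo's ndim logic.

    `tensor_shape_values` stands in for ShapeExpr::values, i.e. the static
    shape of the input tensor.
    """
    # Mirrors ICHECK(shape_expr->values.size() == 1): the tensor is rank-1.
    assert len(tensor_shape_values) == 1, "input tensor must be rank-1"
    # Mirrors reading values[0] as an IntImm: the element count of the
    # rank-1 tensor becomes the ndim of the output shape struct info.
    return int(tensor_shape_values[0])

# A tensor of static shape (3,) yields an output shape with ndim=3.
assert derive_output_ndim([3]) == 3
```

Whether that is the intended semantics is exactly what this review thread is questioning; the sketch only spells out what the current code does.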
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]