vinx13 commented on code in PR #14447:
URL: https://github.com/apache/tvm/pull/14447#discussion_r1154967695


##########
python/tvm/relax/op/base.py:
##########
@@ -253,6 +254,19 @@ def render_object(val: tvm.Object) -> str:
     return str(val)
 
 
+@tvm.register_func("relax.run.shape_to_tensor")
+def relax_shape_to_tensor(shape_tuple: tvm.runtime.ShapeTuple) -> tvm.nd.NDArray:
+    """
+    Convert a ShapeTuple to an NDArray at runtime.
+
+    Parameters
+    ----------
+    shape_tuple: tvm.runtime.ShapeTuple
+        Shape tuple that we want to convert to NDArray at runtime
+    """
+    return tvm.nd.array([int(v) for v in shape_tuple])

Review Comment:
   do we assume it's always on cpu?
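
   For context on the question: `tvm.nd.array` places the result on CPU when no device is passed, so the registered function above does bake in that assumption. As a device-agnostic illustration of the data conversion itself (pure Python, no TVM dependency; `shape_to_ints` is a hypothetical stand-in, not part of the PR):

   ```python
   def shape_to_ints(shape_tuple):
       """Flatten a shape tuple into a list of Python ints -- the data that
       tvm.nd.array would then copy into a (by default CPU-resident) NDArray."""
       return [int(v) for v in shape_tuple]

   # One possible answer to the review question (hedged sketch, not the PR's
   # code) would be to thread an explicit device through the PackedFunc, e.g.
   #   tvm.nd.array(shape_to_ints(shape_tuple), device=dev)
   print(shape_to_ints((2, 3, 4)))
   ```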



##########
src/relax/transform/fold_constant.cc:
##########
@@ -279,6 +279,25 @@ class ConstantFolder : public ExprMutator {
           }
           return ShapeExpr(shape_values);
         }
+      } else if (op->name == "relax.shape_to_tensor") {
+        // Special handling for "relax.shape_to_tensor" since it is implemented in PackedFunc.
+        // TODO(sunggg): revisit this when we extend ConstantFolding to fold PackedFunc.
+        Expr arg = post_call->args[0];
+        ShapeExpr shape = Downcast<ShapeExpr>(arg);
+        Array<PrimExpr> values = shape->values;
+        Array<Integer> arr;
+        bool isKnown = true;

Review Comment:
   nit
   ```suggestion
           bool is_known = true;
   ```
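
   For reference, the guard this flag implements -- fold only when every dimension is statically known (an `IntImm` in the C++ code) -- can be sketched in Python; `try_fold_shape_dims` is a hypothetical illustration, not TVM code:

   ```python
   def try_fold_shape_dims(shape_values):
       """Return the list of concrete dims if all are statically known,
       else None (mirrors the is_known flag in the C++ snippet)."""
       folded = []
       for v in shape_values:
           if isinstance(v, int):   # stands in for the IntImm check
               folded.append(v)
           else:
               return None          # symbolic dim: cannot constant-fold
       return folded
   ```

   A symbolic dimension anywhere in the shape aborts the fold, leaving the call to be evaluated at runtime by the PackedFunc instead.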



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
