cloud-fan commented on code in PR #53530:
URL: https://github.com/apache/spark/pull/53530#discussion_r2660357464


##########
sql/core/src/main/scala/org/apache/spark/sql/scripting/SqlScriptingExecutionContext.scala:
##########
@@ -244,3 +299,36 @@ class SqlScriptingExecutionScope(
     errorHandler
   }
 }
+
+/**
+ * Definition of a cursor in SQL scripting.
+ *
+ * @param name
+ *   Name of the cursor.
+ * @param query
+ *   The query that defines the cursor (LogicalPlan). For parameterized cursors,
+ *   this is updated with the analyzed plan when the cursor is opened.
+ * @param queryText
+ *   The original SQL text of the query (preserves parameter markers).
+ * @param isOpen
+ *   Whether the cursor is currently open.
+ * @param resultIterator
+ *   The iterator over result rows when the cursor is open. Uses toLocalIterator()
+ *   to avoid loading all data into memory at once.
+ */
+case class CursorDefinition(
+    name: String,
+    var query: org.apache.spark.sql.catalyst.plans.logical.LogicalPlan,
+    queryText: String,
+    var isOpen: Boolean = false,
+    var resultIterator: Option[java.util.Iterator[org.apache.spark.sql.Row]] = None) {

Review Comment:
   It's hacky to manage the cursor life cycle with 3 vars in a case class...
   
   I think the cursor definition should only contain the name and the SQL text. We
   should have an explicit cursor state to manage the life cycle: 1) opened: the
   query is parsed with parameter substitution. 2) fetched: the result iterator is
   created.
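   
   A rough sketch of the shape I have in mind (the names below are illustrative,
   not a concrete proposal):
   
   ```scala
   import org.apache.spark.sql.Row
   import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
   
   // Immutable definition: only the cursor name and the original SQL text.
   case class CursorDefinition(name: String, queryText: String)
   
   // Explicit life-cycle state, tracked separately from the definition.
   sealed trait CursorState
   object CursorState {
     // Declared but not opened yet.
     case object Closed extends CursorState
     // OPEN ran: the query text was parsed/analyzed with parameter substitution.
     case class Opened(plan: LogicalPlan) extends CursorState
     // FETCH ran: a row iterator was created over the opened query.
     case class Fetching(plan: LogicalPlan, rows: java.util.Iterator[Row])
       extends CursorState
   }
   ```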



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

