davidm-db commented on code in PR #48794:
URL: https://github.com/apache/spark/pull/48794#discussion_r1850878247


##########
sql/core/src/main/scala/org/apache/spark/sql/scripting/SqlScriptingExecutionNode.scala:
##########
@@ -636,3 +649,223 @@ class LoopStatementExec(
     body.reset()
   }
 }
+
+/**
+ * Executable node for ForStatement.
+ * @param query Executable node for the query.
+ * @param variableName Name of variable used for accessing current row during iteration.
+ * @param body Executable node for the body.
+ * @param label Label set to ForStatement by user or None otherwise.
+ * @param session Spark session that SQL script is executed within.
+ */
+class ForStatementExec(
+    query: SingleStatementExec,
+    variableName: Option[String],
+    body: CompoundBodyExec,
+    val label: Option[String],
+    session: SparkSession) extends NonLeafStatementExec {
+
+  private object ForState extends Enumeration {
+    val VariableAssignment, Body, VariableCleanup = Value
+  }
+  private var state = ForState.VariableAssignment
+  private var currRow = 0
+  private var areVariablesDeclared = false
+
+  // map of all variables created internally by the for statement
+  // (variableName -> variableExpression)
+  private var variablesMap: Map[String, Expression] = Map()
+
+  // compound body used for dropping variables while in ForState.VariableAssignment
+  private var dropVariablesExec: CompoundBodyExec = null
+
+  private var queryResult: Array[Row] = null
+  private var isResultCacheValid = false
+  private def cachedQueryResult(): Array[Row] = {
+    if (!isResultCacheValid) {
+      queryResult = query.buildDataFrame(session).collect()

Review Comment:
   Food for thought: does DataFrame have a mechanism to partially collect the data, so we don't pull all of the results into memory at once? Since we are already using the caching concept, this would be easy to add to the logic of `cachedQueryResult`.
   
   Quickly researching, we can do something like:
   ```
   sliced_df = df.offset(starting_index).limit(ending_index - starting_index)
   ```
   but there might be something better...
   
   I wouldn't block the PR on this, but I think we definitely need to consider 
something like this for a follow-up.
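   
   As a minimal sketch of what that follow-up could look like (assuming we keep the caching flag and only pull a window of rows per call; `fetchChunk`, `startIndex`, and `chunkSize` are illustrative names, not existing APIs, and `Dataset.offset` needs Spark 3.4+):
   
   ```scala
   import org.apache.spark.sql.{DataFrame, Row}
   
   // Hypothetical helper: collect only `chunkSize` rows starting at `startIndex`,
   // so the driver never materializes the full result of the FOR query.
   def fetchChunk(df: DataFrame, startIndex: Int, chunkSize: Int): Array[Row] = {
     // offset/limit are pushed into the query plan, so each call brings back
     // at most `chunkSize` rows instead of the whole result set.
     df.offset(startIndex).limit(chunkSize).collect()
   }
   ```
   
   One thing to watch: without persisting the DataFrame, each offset/limit call re-executes the query, so a non-deterministic query could produce inconsistent chunks; `df.persist()` (or the existing cache flag) would be needed to keep the iteration stable.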
   
   cc: @cloud-fan @MaxGekk 




