qlong commented on code in PR #53625:
URL: https://github.com/apache/spark/pull/53625#discussion_r2653172259


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala:
##########
@@ -67,10 +67,14 @@ class QueryExecution(
     val tracker: QueryPlanningTracker = new QueryPlanningTracker,
     val mode: CommandExecutionMode.Value = CommandExecutionMode.ALL,
     val shuffleCleanupMode: ShuffleCleanupMode = DoNotCleanup,
-    val refreshPhaseEnabled: Boolean = true) extends Logging {
+    val refreshPhaseEnabled: Boolean = true,
+    val queryId: UUID = UUIDv7Generator.generate()) extends Logging {

Review Comment:
   I misplaced my previous comment on the wrong file. In QueryExecution, I also think it should be queryId so it can be used in the analyzer/optimizer as well. For that, queryId should be stable. Example:
   
   ```
   val df = spark.range(10).filter($"id" > 5)
   // All these share the same queryId = uuid-1:
   [queryId=uuid-1] Analyzing query...
   [queryId=uuid-1] Optimizing query...
   [queryId=uuid-1] Planning query...
   [queryId=uuid-1, executionUUID=1] Executing df.count()
   [queryId=uuid-1, executionUUID=2] Executing df.show()
   ```
   but in **SQLExecution and SQLListener**, we should not call it queryId, as it really just tracks execution.
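   The distinction above can be sketched as follows. This is a minimal illustration, not Spark's actual implementation: the class and method names are hypothetical, and it uses `java.util.UUID.randomUUID()` (UUIDv4) for self-containment where the PR uses `UUIDv7Generator.generate()`. The point is that the query-level id is generated once and shared by every phase, while each action gets a fresh execution-level id:
   
   ```scala
   import java.util.UUID
   
   // Hypothetical sketch of the proposed id scheme (names are illustrative).
   class QueryExecutionSketch {
     // Generated once at construction; stable across all phases of this query.
     val queryId: UUID = UUID.randomUUID()
   
     def analyze(): String  = s"[queryId=$queryId] Analyzing query..."
     def optimize(): String = s"[queryId=$queryId] Optimizing query..."
   
     // A fresh id per action, which is what SQLExecution/SQLListener actually
     // track; it should not be called queryId.
     def execute(action: String): String = {
       val executionId = UUID.randomUUID()
       s"[queryId=$queryId, executionId=$executionId] Executing $action"
     }
   }
   ```
   
   With this shape, `df.count()` and `df.show()` on the same DataFrame would log the same `queryId` but different execution ids, matching the log example above.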
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

