ivoson commented on a change in pull request #33905:
URL: https://github.com/apache/spark/pull/33905#discussion_r703428815
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/Observation.scala
##########
@@ -150,10 +150,12 @@ class Observation(name: String) {
private[sql] case class ObservationListener(observation: Observation)
extends QueryExecutionListener {
-  override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit =
+  override def onSuccess(
+      funcName: String, executionId: Long, qe: QueryExecution, durationNs: Long): Unit =
Review comment:
If we want to unify these two, I think the id bound to a QueryExecution
should be the same as the query execution id we can already get from the UI,
so the timing for initializing the id also needs to match.
Since a QueryExecution is created for every Dataset, we cannot initialize
the id for every QueryExecution, only for the ones triggered by an action.
The logic and timing here are just like what we do in
SQLExecution.nextExecutionId.
In other words, we can simply attach the query execution id to a
QueryExecution at the point where newExecutionId is generated. That is what
I have in mind now, roughly sketched below.
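Roughly something like this (just a simplified sketch: the `QueryExecution`
and `SQLExecution` here are stand-ins, not the real classes, and
`setExecutionId` is a hypothetical helper):
```scala
import java.util.concurrent.atomic.AtomicLong

// Simplified stand-ins, only to show the timing: the id is assigned when an
// action actually triggers execution, not when the QueryExecution is
// constructed for a Dataset.
class QueryExecution {
  // Unset until an action runs this QueryExecution.
  @volatile private var _executionId: Option[Long] = None
  def executionId: Option[Long] = _executionId
  def setExecutionId(id: Long): Unit = _executionId = Some(id)
}

object SQLExecution {
  private val nextExecutionId = new AtomicLong(0)

  // Generate the id and attach it to the QueryExecution at the same time,
  // so listeners and the UI see the same value.
  def withNewExecutionId[T](qe: QueryExecution)(body: => T): T = {
    val executionId = nextExecutionId.getAndIncrement()
    qe.setExecutionId(executionId)
    body
  }
}
```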
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]