leanken edited a comment on pull request #29431:
URL: https://github.com/apache/spark/pull/29431#issuecomment-674004920
> > During AQE, when the sub-plan changes, LiveExecutionData uses the new sub-plan's SQLMetrics to override the old ones.
>
> This is expected. The old subquery should not be used anymore and should not have accumulator updates. How does the "final aggregateMetrics" fail exactly?

The failure shows up in the final aggregation at execution end: `liveStageMetrics` is built from `exec.stages`, which still includes the metrics of the query stages that already executed under the old sub-plan.
```
private def onExecutionEnd(event: SparkListenerSQLExecutionEnd): Unit = {
  val SparkListenerSQLExecutionEnd(executionId, time) = event
  Option(liveExecutions.get(executionId)).foreach { exec =>
    exec.completionTime = Some(new Date(time))
    update(exec)

    // Aggregating metrics can be expensive for large queries, so do it asynchronously. The end
    // event count is updated after the metrics have been aggregated, to prevent a job end event
    // arriving during aggregation from cleaning up the metrics data.
    kvstore.doAsync {
      exec.metricsValues = aggregateMetrics(exec)
      removeStaleMetricsData(exec)
      exec.endEvents.incrementAndGet()
      update(exec, force = true)
    }
  }
}

private def aggregateMetrics(exec: LiveExecutionData): Map[Long, String] = {
  val metricTypes = exec.metrics.map { m => (m.accumulatorId, m.metricType) }.toMap

  // ****************************************************************
  // this liveStageMetrics includes the executed queryStage metrics
  val liveStageMetrics = exec.stages.toSeq
    .flatMap { stageId => Option(stageMetrics.get(stageId)) }

  val taskMetrics = liveStageMetrics.flatMap(_.metricValues())

  val maxMetrics = liveStageMetrics.flatMap(_.maxMetricValues())

  val allMetrics = new mutable.HashMap[Long, Array[Long]]()

  val maxMetricsFromAllStages = new mutable.HashMap[Long, Array[Long]]()
```