Github user gengliangwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/23002#discussion_r232713761
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLAppStatusListener.scala ---
@@ -159,7 +159,7 @@ class SQLAppStatusListener(
}
private def aggregateMetrics(exec: LiveExecutionData): Map[Long, String] = {
- val metricIds = exec.metrics.map(_.accumulatorId).sorted
+ val metricIds = exec.metrics.map(_.accumulatorId).toSet
val metricTypes = exec.metrics.map { m => (m.accumulatorId, m.metricType) }.toMap
val metrics = exec.stages.toSeq
.flatMap { stageId => Option(stageMetrics.get(stageId)) }
--- End diff --
If the metrics collection is large, a while loop can reduce the number of traversals: the two separate `.map` calls each walk the whole collection, whereas a single loop can build both the id set and the id-to-type map in one pass. And it is not complicated to do here.
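A minimal sketch of the single-pass idea the comment suggests. The `MetricInfo` case class and `collectMetricInfo` helper below are hypothetical stand-ins for illustration, not the actual Spark types; they only assume each metric exposes `accumulatorId` and `metricType` as in the diff:

```scala
// Hypothetical stand-in for Spark's metric info type; the real listener
// uses SQLPlanMetric, which similarly carries an accumulator id and a
// metric type.
case class MetricInfo(accumulatorId: Long, metricType: String)

// Build both the accumulator-id set and the id -> metricType map in a
// single while-loop pass, instead of one .map/.toSet traversal plus a
// second .map/.toMap traversal.
def collectMetricInfo(metrics: Array[MetricInfo]): (Set[Long], Map[Long, String]) = {
  val ids = Set.newBuilder[Long]
  val types = Map.newBuilder[Long, String]
  var i = 0
  while (i < metrics.length) {
    val m = metrics(i)
    ids += m.accumulatorId
    types += (m.accumulatorId -> m.metricType)
    i += 1
  }
  (ids.result(), types.result())
}
```

The trade-off is verbosity versus one fewer traversal and no intermediate collections, which only matters when the metrics array is large.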
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]