squito commented on a change in pull request #23767: [SPARK-26329][CORE][WIP]
Faster polling of executor memory metrics.
URL: https://github.com/apache/spark/pull/23767#discussion_r258651270
##########
File path:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala
##########
@@ -267,12 +279,17 @@ private[spark] class EventLoggingListener(
   override def onExecutorMetricsUpdate(event: SparkListenerExecutorMetricsUpdate): Unit = {
     if (shouldLogStageExecutorMetrics) {
-      // For the active stages, record any new peak values for the memory metrics for the executor
-      event.executorUpdates.foreach { executorUpdates =>
-        liveStageExecutorMetrics.values.foreach { peakExecutorMetrics =>
-          val peakMetrics = peakExecutorMetrics.getOrElseUpdate(
-            event.execId, new ExecutorMetrics())
-          peakMetrics.compareAndUpdatePeakValues(executorUpdates)
+      event.executorUpdates.foreach { case (k1, peakUpdates) =>
+        liveStageExecutorMetrics.foreach { case (k2, peakExecutorMetrics) =>
+          // If the update came from the driver, the key k1 will be the dummy key (-1, -1),
+          // so record those peaks for all active stages.
+          // Otherwise, record the peaks for the matching stage.
+          val k0 = (-1, -1)
+          if (k1 == k0 || k1 == k2) {
Review comment:
similarly here, it doesn't seem like you need to iterate over all of
`liveStageExecutorMetrics` (in fact I'm a bit confused about what you want to
happen when it's an update from the driver -- are you updating the metrics for
every executor w/ the driver metrics?)
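To illustrate the reviewer's point, here is a minimal, self-contained sketch of how the non-driver case could avoid iterating over every active stage by looking up the matching stage key directly, while the driver's dummy key (-1, -1) still fans out to all stages. The names (`PeakMetricsSketch`, `recordPeaks`, the simplified `Metrics` map standing in for `ExecutorMetrics`) are hypothetical and not from the Spark codebase:

```scala
import scala.collection.mutable

object PeakMetricsSketch {
  // Hypothetical stand-in for ExecutorMetrics: peak value per metric name.
  type Metrics = mutable.Map[String, Long]

  // Dummy key used for updates that come from the driver.
  val DriverKey: (Int, Int) = (-1, -1)

  // peaks: (stageId, stageAttemptId) -> (execId -> peak metrics)
  def recordPeaks(
      peaks: mutable.Map[(Int, Int), mutable.Map[String, Metrics]],
      stageKey: (Int, Int),
      execId: String,
      update: Map[String, Long]): Unit = {
    // Driver updates apply to every active stage; otherwise only the
    // matching stage is touched, so a direct lookup suffices instead of
    // scanning all of the live stages.
    val targets: Iterable[mutable.Map[String, Metrics]] =
      if (stageKey == DriverKey) peaks.values
      else peaks.get(stageKey).toSeq
    targets.foreach { stagePeaks =>
      val metrics = stagePeaks.getOrElseUpdate(execId, mutable.Map.empty)
      // Keep the running maximum for each metric (the compare-and-update).
      update.foreach { case (name, value) =>
        if (value > metrics.getOrElse(name, Long.MinValue)) {
          metrics(name) = value
        }
      }
    }
  }
}
```

With this shape the per-executor update is O(1) in the number of live stages, and only the driver's broadcast update pays the full fan-out cost.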
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services