Github user mccheah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21221#discussion_r207002859
  
    --- Diff: core/src/main/scala/org/apache/spark/status/AppStatusListener.scala ---
    @@ -669,6 +686,34 @@ private[spark] class AppStatusListener(
             }
           }
         }
    +
    +    // check if there is a new peak value for any of the executor level memory metrics
    +    // for the live UI. SparkListenerExecutorMetricsUpdate events are only processed
    +    // for the live UI.
    +    event.executorUpdates.foreach { updates: ExecutorMetrics =>
    +      liveExecutors.get(event.execId).foreach { exec: LiveExecutor =>
    +        if (exec.peakExecutorMetrics.compareAndUpdatePeakValues(updates)) {
    +          maybeUpdate(exec, now)
    +        }
    +      }
    +    }
    +  }
    +
    +  override def onStageExecutorMetrics(executorMetrics: SparkListenerStageExecutorMetrics): Unit = {
    +    val now = System.nanoTime()
    +
    +    // check if there is a new peak value for any of the executor level memory metrics,
    +    // while reading from the log. SparkListenerStageExecutorMetrics are only processed
    +    // when reading logs.
    +    liveExecutors.get(executorMetrics.execId)
    +      .orElse(deadExecutors.get(executorMetrics.execId)) match {
    +      case Some(exec) =>
    --- End diff ---
    
    From the [Scaladoc](https://www.scala-lang.org/api/2.10.2/index.html#scala.Option):
    
    > The most idiomatic way to use an scala.Option instance is to treat it as a collection or monad and use map, flatMap, filter, or foreach
    
    It's probably better to follow the Scala convention here and use `foreach` (or `map`/`flatMap`) instead of pattern matching on the `Option`.
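    
    A minimal sketch of the suggested refactor (the types and field names here are hypothetical stand-ins, not the actual `AppStatusListener` internals): `orElse` still provides the fallback to the dead-executor map, and `foreach` replaces the `match`, running the body only when an executor is found.
    
    ```scala
    // Hypothetical simplified model of the executor-tracking maps.
    case class ExecSummary(id: String, var peakMem: Long)
    
    val liveExecutors = Map("exec-1" -> ExecSummary("exec-1", 100L))
    val deadExecutors = Map("exec-2" -> ExecSummary("exec-2", 50L))
    
    def updatePeak(execId: String, newValue: Long): Unit = {
      // Look up the live executor first, fall back to the dead one;
      // foreach does nothing if neither map contains the id, which is
      // exactly what the `case None =>` branch of a match would do.
      liveExecutors.get(execId)
        .orElse(deadExecutors.get(execId))
        .foreach { exec =>
          if (newValue > exec.peakMem) exec.peakMem = newValue
        }
    }
    
    updatePeak("exec-2", 80L)   // found in deadExecutors, peak raised
    updatePeak("exec-3", 999L)  // unknown id, silently ignored
    ```
    
    The `foreach` form avoids the boilerplate `case None => ()` branch while keeping the same behavior for missing executors.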

