GitHub user mridulm commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21221#discussion_r203489913
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/SparkListener.scala ---
    @@ -160,11 +160,29 @@ case class SparkListenerBlockUpdated(blockUpdatedInfo: BlockUpdatedInfo) extends
      * Periodic updates from executors.
      * @param execId executor id
      * @param accumUpdates sequence of (taskId, stageId, stageAttemptId, accumUpdates)
    + * @param executorUpdates executor level metrics updates
      */
     @DeveloperApi
     case class SparkListenerExecutorMetricsUpdate(
         execId: String,
    -    accumUpdates: Seq[(Long, Int, Int, Seq[AccumulableInfo])])
    +    accumUpdates: Seq[(Long, Int, Int, Seq[AccumulableInfo])],
    +    executorUpdates: Option[Array[Long]] = None)
    +  extends SparkListenerEvent
    +
    +/**
    + * Peak metric values for the executor for the stage, written to the history log at stage
    + * completion.
    + * @param execId executor id
    + * @param stageId stage id
    + * @param stageAttemptId stage attempt
    + * @param executorMetrics executor level metrics, indexed by MetricGetter.values
    + */
    +@DeveloperApi
    +case class SparkListenerStageExecutorMetrics(
    +    execId: String,
    +    stageId: Int,
    +    stageAttemptId: Int,
    +    executorMetrics: Array[Long])
    --- End diff --
    
    I am completely sold on the enum-like idea.
    My main concern was avoiding `MatchError`s in Scala, along with the other potential failures you elaborated on above.
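    
    For illustration, here is a minimal sketch of the enum-like pattern under discussion. It is not the PR's actual implementation; the metric names are hypothetical, and `MetricGetter` simply mirrors the identifier referenced in the diff above:
    
    ```scala
    import java.lang.management.ManagementFactory
    
    // Sketch of an enum-like type: a sealed trait with case objects.
    // Sealing lets the compiler check match exhaustiveness, so a
    // forgotten case surfaces as a compile-time warning rather than a
    // runtime MatchError.
    sealed trait MetricGetter {
      def getMetricValue: Long
    }
    
    case object JVMHeapMemory extends MetricGetter {
      def getMetricValue: Long =
        ManagementFactory.getMemoryMXBean.getHeapMemoryUsage.getUsed
    }
    
    case object JVMOffHeapMemory extends MetricGetter {
      def getMetricValue: Long =
        ManagementFactory.getMemoryMXBean.getNonHeapMemoryUsage.getUsed
    }
    
    object MetricGetter {
      // A fixed ordering gives each metric a stable position, matching
      // the "indexed by MetricGetter.values" doc comment in the diff.
      val values: IndexedSeq[MetricGetter] =
        IndexedSeq(JVMHeapMemory, JVMOffHeapMemory)
    
      // Iterating over values avoids pattern matching entirely, which
      // also sidesteps MatchError when producing the Array[Long].
      def sample(): Array[Long] = values.map(_.getMetricValue).toArray
    }
    ```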

