GitHub user jisookim0513 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16714#discussion_r101438216
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
    @@ -64,6 +64,12 @@ private[spark] class EventLoggingListener(
       private val shouldOverwrite = sparkConf.getBoolean("spark.eventLog.overwrite", false)
       private val testing = sparkConf.getBoolean("spark.eventLog.testing", false)
       private val outputBufferSize = sparkConf.getInt("spark.eventLog.buffer.kb", 100) * 1024
    +  // To reduce the size of event logs, we can omit logging all of internal accumulables for metrics.
    +  private val omitInternalAccumulables =
    --- End diff --
    
    I don't think this information is used to reconstruct the job UI. I am not
    sure how it ended up in the event logs in the first place, but some people
    might be relying on it to get a stage's internal metrics from the history
    server through its REST API. For example, CPU time metrics are not included
    in the stage metrics you get by querying the history server endpoint
    `/applications/[app-id]/stages/[stage-id]`.
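
    For illustration (not something from this PR), here is a minimal sketch of how
    such a check could look against the history server REST API. The history server
    address, application id, stage id, and the string-level check on the
    `internal.metrics.executorCpuTime` accumulator name are all assumptions made
    for the example.

    ```scala
    // Rough sketch: fetch a stage's details from the history server and check
    // whether executor CPU time is still visible through accumulable updates.
    import scala.io.Source

    object StageCpuTimeCheck {
      def main(args: Array[String]): Unit = {
        val historyServer = "http://localhost:18080"   // default history server port (assumption)
        val appId = "application_1487200000000_0001"   // placeholder application id
        val stageId = 3                                // placeholder stage id

        // Stage attempt details, including accumulable updates, are served under /api/v1.
        val url = s"$historyServer/api/v1/applications/$appId/stages/$stageId"
        val src = Source.fromURL(url)
        try {
          val json = src.mkString
          // Internal accumulables carry names like "internal.metrics.executorCpuTime";
          // if they were omitted from the event log, this should print false.
          val cpuTimeVisible = json.contains("internal.metrics.executorCpuTime")
          println(s"executorCpuTime visible via accumulables: $cpuTimeVisible")
        } finally {
          src.close()
        }
      }
    }
    ```

    This is only a string-level check rather than real JSON parsing, but it is
    enough to see whether the internal accumulables survive the proposed omission.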

