GitHub user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16714#discussion_r102336169
--- Diff: core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -64,6 +64,12 @@ private[spark] class EventLoggingListener(
   private val shouldOverwrite = sparkConf.getBoolean("spark.eventLog.overwrite", false)
   private val testing = sparkConf.getBoolean("spark.eventLog.testing", false)
   private val outputBufferSize = sparkConf.getInt("spark.eventLog.buffer.kb", 100) * 1024
+  // To reduce the size of event logs, we can omit logging all internal accumulables for metrics.
+  private val omitInternalAccumulables =
--- End diff ---
Actually, I see CPU time in both stage-level data and task-level data in the REST API...

Do you mind checking the code for when this was introduced, and whether it was a conscious decision (as in, whether it covers some use case we're not seeing)?
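For reference, the CPU time fields can be checked directly against a history server's REST API. A minimal sketch (the server URL, application id, and stage/attempt ids below are placeholders; adjust to your setup):

```scala
import scala.io.Source

// Placeholder history server URL and application id.
val base = "http://localhost:18080/api/v1/applications/app-20170220120000-0000"

// Stage-level data: each StageData entry carries executorCpuTime.
println(Source.fromURL(s"$base/stages").mkString)

// Task-level data: per-task metrics under a specific stage id and attempt id.
println(Source.fromURL(s"$base/stages/1/0/taskList").mkString)
```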
If possible, it's always better to avoid adding more config options, especially in this kind of situation: if this data is needed for something, the config would effectively be disabling that functionality. It would be better to instead figure out how to save the data in a way that does not waste so much space. And there's always compression.
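(On that last point: event log compression is already available through existing settings, with no new code needed. A sketch of the relevant configuration:)

```scala
import org.apache.spark.SparkConf

// Existing knobs: enable event logging, compress the log, and pick the codec.
val conf = new SparkConf()
  .set("spark.eventLog.enabled", "true")
  .set("spark.eventLog.compress", "true")
  .set("spark.io.compression.codec", "lz4")
```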