Github user tdas commented on a diff in the pull request:
    --- Diff: core/src/main/scala/org/apache/spark/util/logging/RollingFileAppender.scala ---
    @@ -142,6 +172,12 @@ private[spark] object RollingFileAppender {
       val SIZE_DEFAULT = (1024 * 1024).toString
       val DEFAULT_BUFFER_SIZE = 8192
    +  val ENABLE_COMPRESSION = "spark.executor.logs.rolling.enableCompression"
    +    "spark.executor.logs.rolling.fileUncompressedLengthCacheSize"
    --- End diff --
    This is not a configuration inside the executor; it's inside the worker. So why is it named "spark.executor"?
    It has nothing to do with the executor. The worker process (which manages executors) runs this code, and it is independent of the application-specific configuration in the executor.
    Spark worker configurations are named "spark.worker.*". See
    So how about renaming it to "spark.worker.ui.
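    For illustration only (the constant name, default value, and accessor below are assumptions, not part of this patch), a worker-scoped version of the cache-size setting could be declared and read roughly like this:

        import org.apache.spark.SparkConf

        // Hypothetical sketch: the cache-size setting is consumed by the worker
        // process, so it would live under "spark.worker.ui.*" instead of
        // "spark.executor.logs.rolling.*". Name and default are illustrative.
        object WorkerLogConf {
          val COMPRESSED_LOG_FILE_LENGTH_CACHE_SIZE_CONF =
            "spark.worker.ui.compressedLogFileLengthCacheSize"
          val DEFAULT_CACHE_SIZE = 100

          // Read the worker-side setting from SparkConf, falling back to the default.
          def cacheSize(conf: SparkConf): Int =
            conf.getInt(COMPRESSED_LOG_FILE_LENGTH_CACHE_SIZE_CONF, DEFAULT_CACHE_SIZE)
        }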
