Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13712#discussion_r68765722
  
    --- Diff: docs/running-on-yarn.md ---
    @@ -472,6 +472,29 @@ To use a custom metrics.properties for the application master and executors, upd
       Currently supported services are: <code>hive</code>, <code>hbase</code>
       </td>
     </tr>
    +<tr>
    +  <td><code>spark.yarn.rolledLog.includePattern</code></td>
    +  <td>(none)</td>
    +  <td>
    +  Java Regex to filter the log files which match the defined include pattern
    +  and those log files will be aggregated in a rolling fashion.
    +  This will be used with YARN's rolling log aggregation, to enable this feature in YARN side
    +  <code>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</code> should be
    +  configured in yarn-site.xml.
    +  Besides this feature can only be used with Hadoop 2.6.1+. And the log4j appender should be changed to
    +  File appender. Based on the file name configured in log4j configuration (like spark.log),
    --- End diff --
    
    Ah, ok that makes sense. 
    
    Can we change the wording slightly just to clarify, maybe something like:
    
    This feature can only be used with Hadoop 2.6.1+. The Spark log4j appender
    needs to be changed to use FileAppender or another appender that can handle
    the files being removed while it is running. Based on the file name
    configured in the log4j configuration (like spark.log), the user should set
    the regex (spark*) to include all the log files that need to be aggregated.
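
    For reference, a rough sketch of how the pieces could fit together (the
    appender name, the spark.log file name, and the 3600-second interval below
    are illustrative assumptions, not required values):

        # log4j.properties shipped to the containers (e.g. via --files)
        log4j.rootCategory=INFO, file
        # FileAppender writes to a fixed file name that the include regex can match
        log4j.appender.file=org.apache.log4j.FileAppender
        log4j.appender.file.File=${spark.yarn.app.container.log.dir}/spark.log
        log4j.appender.file.layout=org.apache.log4j.PatternLayout
        log4j.appender.file.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

        <!-- yarn-site.xml: enable YARN rolling log aggregation (Hadoop 2.6.1+) -->
        <property>
          <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
          <value>3600</value>
        </property>

        # spark-submit: aggregate all log files matching the pattern in a rolling fashion
        spark-submit --master yarn \
          --files log4j.properties \
          --conf spark.yarn.rolledLog.includePattern='spark*' \
          ...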


