I should mention that I'm doing this via Ambari 1.7. I replaced Ambari's
default log4j.properties with the version below, and it doesn't seem to take
effect. I understand this may be a basic RTFM question, but I want to rule
out any hierarchical intricacies of Hadoop logging.

Artem Ervits
On Mar 2, 2015 1:33 PM, "Artem Ervits" <[email protected]> wrote:

> Hello all,
>
> I am trying to replace the default DailyRollingFileAppender entry for
> hdfs-audit.log with RollingFileAppender so that I can set MaxBackupIndex.
> Basically, I am trying to purge hdfs-audit logs older than 7 days. Can
> someone suggest what might be wrong here?
>
> I have the following:
>
> #
> # hdfs audit logging
> #
> hdfs.audit.logger=INFO,console
> hdfs.audit.log.maxfilesize=256MB
> hdfs.audit.log.maxbackupindex=7
>
> log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
>
> log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
> log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
> log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
> log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
> log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
> log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
> log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
>
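One thing worth checking in the config above: `hdfs.audit.logger=INFO,console` attaches only the console appender to the audit logger, so the `RFAAUDIT` appender is defined but never referenced. A minimal sketch of the intended wiring, assuming the stock Hadoop log4j.properties layout (the `RFAAUDIT` name and property names follow the upstream defaults; adjust to your distribution):

```properties
#
# hdfs audit logging -- route audit events to the RFAAUDIT appender
# (sketch only: the key change from the original is INFO,RFAAUDIT
# instead of INFO,console, so the appender below is actually used)
#
hdfs.audit.logger=INFO,RFAAUDIT
hdfs.audit.log.maxfilesize=256MB
hdfs.audit.log.maxbackupindex=7

log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
```

Two caveats: hadoop-env.sh typically passes `-Dhdfs.audit.logger=...` in HADOOP_NAMENODE_OPTS, and a system property set there overrides the default in log4j.properties, which can make file edits appear to have no effect. Also, RollingFileAppender rolls by size, so MaxBackupIndex=7 keeps the seven most recent 256MB files rather than seven days of logs; strictly age-based purging would need an external job (e.g. a daily cron using `find`).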
