[
https://issues.apache.org/jira/browse/ARGUS-5?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ramesh Mani resolved ARGUS-5.
-----------------------------
Resolution: Fixed
Fix Version/s: 0.1.0
Patch available for review
> Ability to write audit log in HDFS
> ----------------------------------
>
> Key: ARGUS-5
> URL: https://issues.apache.org/jira/browse/ARGUS-5
> Project: Argus
> Issue Type: New Feature
> Reporter: Selvamohan Neethiraj
> Assignee: Ramesh Mani
> Fix For: 0.1.0
>
>
> {panel:title=Ability to write Logs into HDFS}
> HdfsFileAppender is a log4j appender used to write logs into HDFS.
> The following are its configuration parameters:
> # HDFS appender
> #
> hdfs.xaaudit.logger=INFO,console,HDFSLOG
> log4j.logger.xaaudit=${hdfs.xaaudit.logger}
> log4j.additivity.xaaudit=false
> log4j.appender.HDFSLOG=com.xasecure.authorization.hadoop.log.HdfsFileAppender
> log4j.appender.HDFSLOG.File=/grid/0/var/log/hadoop/hdfs/argus_audit.log
> log4j.appender.HDFSLOG.HdfsDestination=hdfs://ec2-54-88-128.112.compute.1.amazonaws.com:8020/audit/hdfs/%hostname%/argus_audit.log
> log4j.appender.HDFSLOG.layout=org.apache.log4j.PatternLayout
> log4j.appender.HDFSLOG.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n%X{LogPath}
> HdfsFileRollingInterval -> HDFS file rollover interval, e.g. 1min, 5min,
> ... 1hr, 2hrs, ... 1day, 2days, ... 1week, 2weeks, ... 1month, 2months, ...
> log4j.appender.HDFSLOG.HdfsFileRollingInterval=3min
> FileRollingInterval -> local .cache file rollover interval, in the same
> format.
> log4j.appender.HDFSLOG.FileRollingInterval=1min
> log4j.appender.HDFSLOG.HdfsLiveUpdate=true
> log4j.appender.HDFSLOG.HdfsCheckInterval=2min
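> The interval values above ("3min", "2hrs", "1day", ...) follow a
> number-plus-unit shorthand. A minimal sketch of how such values could be
> parsed into milliseconds (the units are taken from the config above; the
> helper class itself is hypothetical, not the appender's actual code):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IntervalParser {
    // Accepts values like "3min", "2hrs", "1day", "2weeks", "1month"
    private static final Pattern INTERVAL =
            Pattern.compile("(\\d+)\\s*(min|hr|hrs|day|days|week|weeks|month|months)");

    public static long toMillis(String value) {
        Matcher m = INTERVAL.matcher(value.trim().toLowerCase());
        if (!m.matches()) {
            throw new IllegalArgumentException("Bad interval: " + value);
        }
        long n = Long.parseLong(m.group(1));
        switch (m.group(2)) {
            case "min":                 return n * 60_000L;
            case "hr":   case "hrs":    return n * 3_600_000L;
            case "day":  case "days":   return n * 86_400_000L;
            case "week": case "weeks":  return n * 7 * 86_400_000L;
            // months approximated as 30 days for sketch purposes
            default:                    return n * 30 * 86_400_000L;
        }
    }

    public static void main(String[] args) {
        System.out.println(toMillis("3min")); // 180000
        System.out.println(toMillis("2hrs")); // 7200000
    }
}
```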
> 1) HdfsFileAppender logs into the given HdfsDestination path.
> 2) If the configured HDFS is unavailable, a local file at the path given by
> the log4j parameter File is created with the extension .cache.
> 3) This local .cache file is rolled over based on the FileRollingInterval
> parameter.
> 4) Once HDFS is available and ready again, logging resumes to the configured
> HdfsDestination.
> 5) The local .cache file is then moved into the HdfsDestination.
> 6) The log file created in the HDFS destination is rolled over based on the
> HdfsFileRollingInterval parameter.
> 7) When HdfsLiveUpdate is true, the appender sends logs to the HDFS file
> whenever HDFS is available. When false, local .cache files are created and
> moved periodically into the HdfsDestination.
> 8) HdfsCheckInterval is the interval at which HDFS availability is
> re-checked after the first failure; during that time the local .cache file
> holds the logs.
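> The failover behaviour in steps 1) through 8) can be sketched as a small
> state machine. This is an illustrative simplification only (in-memory sinks
> stand in for the real HDFS and local files; class and method names are
> hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the appender's failover logic: write to HDFS
// when reachable, otherwise buffer in a local .cache sink, and flush
// the buffered events to HDFS once it comes back.
public class FailoverSketch {
    final List<String> hdfsSink = new ArrayList<>();   // stands in for HdfsDestination
    final List<String> cacheSink = new ArrayList<>();  // stands in for the local .cache file
    boolean hdfsAvailable = true;

    void append(String event) {
        if (hdfsAvailable) {
            flushCache();          // step 5: move .cache contents to HDFS first
            hdfsSink.add(event);   // steps 1/4: log directly to HdfsDestination
        } else {
            cacheSink.add(event);  // step 2: fall back to the local .cache file
        }
    }

    void flushCache() {
        hdfsSink.addAll(cacheSink);
        cacheSink.clear();
    }

    public static void main(String[] args) {
        FailoverSketch a = new FailoverSketch();
        a.append("evt1");
        a.hdfsAvailable = false;   // simulate an HDFS outage
        a.append("evt2");
        a.hdfsAvailable = true;    // HDFS back (re-checked every HdfsCheckInterval)
        a.append("evt3");
        System.out.println(a.hdfsSink); // [evt1, evt2, evt3]
    }
}
```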
> Argus Audit Logging into HDFS:
> . For audit logs, the Policy Manager should exclude the HDFS audit file
> path from auditing, to avoid the recursive call that would otherwise occur
> when writing the audit log itself.
> . Configure the log4j parameters in xasecure-audit.xml and make the
> appender asynchronous. (Note that each agent has its own
> xasecure-audit.xml.)
> . For auditing the HDFS agent, add the appender to the NameNode and
> SecondaryNameNode.
> . For auditing the HBase agent, add the appender to the Master and
> RegionServer.
> . For auditing the Hive agent, add it to HiveServer2.
> Regular Logging Usage:
> To enable regular (non-audit) logging, configure the appender the same
> way other log4j appenders are configured.
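> For example, attaching the HDFSLOG appender to an ordinary application
> logger could look like the fragment below (the logger name com.example.myapp
> is a placeholder; HDFSLOG refers to the appender defined above):

```properties
log4j.logger.com.example.myapp=INFO,HDFSLOG
log4j.additivity.com.example.myapp=false
```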
> {panel}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)