-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/56335/#review165103
-----------------------------------------------------------


Ship it!




Please add the JIRA number and target branch names - I believe this patch is 
needed in both master and ranger-0.7.

- Velmurugan Periasamy


On Feb. 6, 2017, 6:36 p.m., Ramesh Mani wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/56335/
> -----------------------------------------------------------
> 
> (Updated Feb. 6, 2017, 6:36 p.m.)
> 
> 
> Review request for ranger, Don Bosco Durai, Abhay Kulkarni, Madhan Neethiraj, 
> and Velmurugan Periasamy.
> 
> 
> Repository: ranger
> 
> 
> Description
> -------
> 
> Ranger Audit framework enhancement to provide an option to allow audit 
> records to be spooled to local disk first before sending them to the destinations.
> 
> 
> Diffs
> -----
> 
>   agents-audit/src/main/java/org/apache/ranger/audit/destination/HDFSAuditDestination.java 7c37cfa 
>   agents-audit/src/main/java/org/apache/ranger/audit/provider/AuditFileCacheProvider.java PRE-CREATION 
>   agents-audit/src/main/java/org/apache/ranger/audit/provider/AuditProviderFactory.java e3c3508 
>   agents-audit/src/main/java/org/apache/ranger/audit/queue/AuditFileCacheProviderSpool.java PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/56335/diff/
> 
> 
> Testing
> -------
> 
> Tested all the plugins in a local VM.
> To enable the file cache provider for each of the components, please do the 
> following:
>       
> For HDFS Plugin
> ===============
>       mkdir -p  /var/log/hadoop/hdfs/audit/spool
>       cd /var/log/hadoop/hdfs/audit/
>       chown hdfs:hadoop spool
>       Add the following properties to "custom ranger-hdfs-audit" in Ambari 
> for HDFS (an XML sketch of the same properties for a manual install follows 
> the NOTE below).
>       xasecure.audit.provider.filecache.is.enabled=true
>       xasecure.audit.provider.filecache.filespool.file.rollover.sec=300
>       xasecure.audit.provider.filecache.filespool.dir=/var/log/hadoop/hdfs/audit/spool
> 
>    NOTE:
>       xasecure.audit.provider.filecache.is.enabled=true
>            Enables the file cache provider, which stores the audit records 
> locally first before sending them to the destinations, to avoid loss of data.
>       xasecure.audit.provider.filecache.filespool.file.rollover.sec=300
>            Closes each local spool file every 300 seconds (5 minutes) and 
> sends its contents to the destinations. For testing we set it to 30 seconds.
>       xasecure.audit.provider.filecache.filespool.dir=/var/log/hadoop/hdfs/audit/spool
>            The directory where the local audit cache files are kept.
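> 
>    For a manual (non-Ambari) install, the same three properties would go into 
> the plugin's audit configuration file in the usual Hadoop XML form - assuming 
> the standard ranger-hdfs-audit.xml file name - for example:
> 
>       <property>
>         <name>xasecure.audit.provider.filecache.is.enabled</name>
>         <value>true</value>
>       </property>
>       <property>
>         <name>xasecure.audit.provider.filecache.filespool.file.rollover.sec</name>
>         <value>300</value>
>       </property>
>       <property>
>         <name>xasecure.audit.provider.filecache.filespool.dir</name>
>         <value>/var/log/hadoop/hdfs/audit/spool</value>
>       </property>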
> 
> For Hive Plugin
> =============
> 
>       mkdir -p /var/log/hive/audit/spool
>       cd /var/log/hive/audit/
>       chown hive:hadoop spool
>       Add the following properties to "custom ranger-hive-audit" in Ambari 
> for Hive.
>       xasecure.audit.provider.filecache.is.enabled=true
>       xasecure.audit.provider.filecache.filespool.file.rollover.sec=300
>       xasecure.audit.provider.filecache.filespool.dir=/var/log/hive/audit/spool
> 
> Please repeat the same steps for all the components that need this audit 
> file cache provider.
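> 
> As a quick sanity check that spooling is active (using the Hive spool 
> directory configured above):
> 
>     # spool files should appear here while audit records are pending
>     ls -l /var/log/hive/audit/spool
>     # after a rollover interval (300 sec, or 30 sec in the test setup) the
>     # listing should change once the cached records have been pushed out
>     sleep 300; ls -l /var/log/hive/audit/spool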
> 
> 
> ---------------
> Issues:
>       - Audit to the HDFS destination gets a 0-byte file, or records missing 
> from the file, from the HDFS plugin when HDFS is restarted while audit from 
> the HDFS plugin is being logged to the destination.
> 
>       - Audit to the HDFS destination gets partial records from the 
> HIVE/HBASE/KNOX/STORM plugins when HDFS is restarted while active spooling 
> into HDFS is happening.
>       
> Scenarios to test
> 
> 1) Audit to HDFS / Solr destinations with FileCache enabled - 
> HDFS/HIVESERVER2/HBASE/KNOX/STORM/KAFKA.
>               - The issues mentioned above should not happen.
>               - Audit will be pushed every 5 minutes (we are setting the 
> rollover parameter to 300 sec).
> 
> 2) Audit to HDFS / Solr destinations with FileCache enabled, with one of the 
> destinations down and brought back up later.
>               - Audit from the local cache should be present in the 
> destination once the destination is back up (see the quick check after this 
> list).
>               - In the case of HDFS as the destination, the audit might show 
> up only at the next rollover of the HDFS file, or when the corresponding 
> component is restarted (e.g. for the HiveServer2 plugin, restarting 
> HiveServer2 closes the currently open HDFS file, at which point the audit 
> appears).
>               - The issues mentioned above should not be present.
> 
> 3) Repeat the same for each of the plugins (HBASE, STORM, KAFKA, KMS).
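> 
> A quick way to confirm scenario 2 once the destination is back up (the HDFS 
> path below is only an example - use whatever xasecure.audit.destination.hdfs.dir 
> is set to for the component):
> 
>     # list today's audit files written by the HiveServer2 plugin
>     hdfs dfs -ls /ranger/audit/hiveserver2/$(date +%Y%m%d)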
> 
> 
> Thanks,
> 
> Ramesh Mani
> 
>
