I might get some free time over the weekend. Let me see if I can try it on my 
cluster.

 

If it works well, we can commit the code, and hopefully you can try it out when
it becomes available to you.

 

Bosco

 

 

From: Sean Roberts <srobe...@hortonworks.com>
Reply-To: <user@ranger.apache.org>
Date: Thursday, March 22, 2018 at 4:14 AM
To: "user@ranger.apache.org" <user@ranger.apache.org>
Subject: Re: Max file size for HDFSAuditDestination writers?

 

Bosco – No sir. I'd like to, but all our test environments are deployed by
Ambari, so it's difficult to get a dev build in.

 

-- 

Sean Roberts

From: Don Bosco Durai <bo...@apache.org>
Reply-To: "user@ranger.apache.org" <user@ranger.apache.org>
Date: Thursday, 22 March 2018 at 08:31
To: "user@ranger.apache.org" <user@ranger.apache.org>
Subject: Re: Max file size for HDFSAuditDestination writers?

 

Sean, have you also tried out the new feature where the log files are written
in compressed ORC format?

 

Bosco

 

 

From: Ramesh Mani <rm...@hortonworks.com>
Reply-To: <user@ranger.apache.org>
Date: Tuesday, February 20, 2018 at 12:05 PM
To: "user@ranger.apache.org" <user@ranger.apache.org>
Subject: Re: Max file size for HDFSAuditDestination writers?

 

Sean,

 

The Ranger Audit Framework rolls over the file based on the following
parameters. Rolling over more frequently on a system that generates a lot of
audits will give you smaller files. There is currently no functionality to use
file size as the rollover threshold.

 

```

xasecure.audit.destination.hdfs.file.rollover.sec=3600    # every 1 hour

xasecure.audit.destination.hdfs.file.rollover.period=1h / 2h / 1d / 2d ...

```
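For reference, on an Ambari-managed cluster these settings typically live in the
service's audit configuration (e.g. a file like ranger-hdfs-audit.xml; the exact
file name varies per plugin, so treat this as an illustrative sketch, not an
exact recipe). A more aggressive 15-minute rollover might look like:

```
<!-- Illustrative fragment only; property name is from the thread above,
     the 900-second value is an example, and the file it belongs in
     depends on which Ranger plugin you are configuring. -->
<property>
  <name>xasecure.audit.destination.hdfs.file.rollover.sec</name>
  <value>900</value> <!-- roll over every 15 minutes for smaller files -->
</property>
```

Shorter intervals produce more, smaller files, so balance rollover frequency
against the HDFS small-file overhead.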





Refer to https://issues.apache.org/jira/browse/RANGER-1105 for more information
on this.

 

Regards,

Ramesh

 

From: Sean Roberts <srobe...@hortonworks.com>
Reply-To: "user@ranger.apache.org" <user@ranger.apache.org>
Date: Tuesday, February 20, 2018 at 8:03 AM
To: "user@ranger.apache.org" <user@ranger.apache.org>
Subject: Max file size for HDFSAuditDestination writers?

 

Ranger folks – When writing audits to HDFS, is it possible to limit the file 
size of the written files?

 

Instead of a single large file like:

```

300G /ranger/audit/SERVICE/YYYYMMDD/SERVICE_ranger_audit_hostname.log

```

 

Something like this is preferred:

```

5G /ranger/audit/SERVICE/YYYYMMDD/SERVICE_ranger_audit_hostname.1.log

5G /ranger/audit/SERVICE/YYYYMMDD/SERVICE_ranger_audit_hostname.2.log

5G /ranger/audit/SERVICE/YYYYMMDD/SERVICE_ranger_audit_hostname.3.log

... etc ...

```

 

-- 

Sean Roberts

@seano
