[ 
https://issues.apache.org/jira/browse/RANGER-3271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkat A updated RANGER-3271:
-----------------------------
    Description: 
I see the following error when Knox audits are being written to HDFS after the 
Ranger-Knox plugin is enabled.

 

2021-05-01 15:33:34,435 INFO destination.HDFSAuditDestination: Returning HDFS Filesystem Config: Configuration: core-default.xml, core-site.xml
2021-05-01 15:33:34,435 INFO destination.HDFSAuditDestination: Returning HDFS Filesystem Config: Configuration: core-default.xml, core-site.xml
2021-05-01 15:33:34,439 ERROR provider.BaseAuditHandler: Error writing to log file.
org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3332)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3352)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
	at org.apache.ranger.audit.destination.HDFSAuditDestination.getLogFileStream(HDFSAuditDestination.java:277)
	at org.apache.ranger.audit.destination.HDFSAuditDestination.access$000(HDFSAuditDestination.java:44)
	at org.apache.ranger.audit.destination.HDFSAuditDestination$1.run(HDFSAuditDestination.java:157)
	at org.apache.ranger.audit.destination.HDFSAuditDestination$1.run(HDFSAuditDestination.java:154)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.ranger.audit.provider.MiscUtil.executePrivilegedAction(MiscUtil.java:529)
	at org.apache.ranger.audit.destination.HDFSAuditDestination.logJSON(HDFSAuditDestination.java:154)
	at org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:879)
	at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:827)
	at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:757)
	at java.lang.Thread.run(Thread.java:748)
2021-05-01 15:33:34,439 INFO destination.HDFSAuditDestination: Flushing HDFS audit. Event Size:1
2021-05-01 15:33:34,439 ERROR queue.AuditFileSpool: Error sending logs to consumer. provider=knox.async.multi_dest.batch, consumer=knox.async.multi_dest.batch.hdfs
2021-05-01 15:33:34,440 INFO queue.AuditFileSpool: Destination is down. sleeping for 30000 milli seconds. indexQueue=0, queueName=knox.async.multi_dest.batch, consumer=knox.async.multi_dest.batch.hdfs

 

 

I have tried many options to avoid the above error, and I am not sure whether it 
is a bug or some sort of compatibility issue.

Environment:

HADOOP : 3.3.0

KNOX : 1.4.0

RANGER : 2.1.0

NOTE: 

Knox is able to write audits if I configure a local path for audit storage 
instead of the HDFS file system.

Appreciate your help on this.
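For reference, this "No FileSystem for scheme" error is typically raised when the HDFS FileSystem implementation class cannot be found — for example, when the hadoop-hdfs-client jars are missing from the plugin classpath, or when the ServiceLoader registration in META-INF/services was lost while assembling a merged jar. One workaround I have seen suggested is to map the scheme to its implementation explicitly in the core-site.xml that the plugin loads. This is only a sketch, assuming the Hadoop 3.3.0 HDFS client jars are actually present on the Knox classpath:

```xml
<!-- core-site.xml: explicitly map the "hdfs" scheme to its implementation
     class, bypassing the ServiceLoader lookup. Assumes hadoop-hdfs-client
     is on the Knox/Ranger plugin classpath. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
```

If the property is set but the class still cannot be loaded, that would point to the jars themselves being absent rather than the registration.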



> Ranger Knox Plugin Unable to Write  Knox Audits to HDFS
> -------------------------------------------------------
>
>                 Key: RANGER-3271
>                 URL: https://issues.apache.org/jira/browse/RANGER-3271
>             Project: Ranger
>          Issue Type: Bug
>          Components: Ranger
>    Affects Versions: 2.1.0
>         Environment: HADOOP : 3.3.0
> KNOX : 1.4.0
> RANGER : 2.1.0
>            Reporter: Venkat A
>            Priority: Blocker
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
