[ https://issues.apache.org/jira/browse/AMBARI-20369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mugdha Varadkar updated AMBARI-20369:
-------------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

Committed to branch-2.5: 
[bf4edcd76b4e78e0a90455fc8d56c053683ad1ba|https://github.com/apache/ambari/commit/bf4edcd76b4e78e0a90455fc8d56c053683ad1ba]
 and trunk: 
[b4da19ea0b1bf87d5b91ad0520b822f00da26ab8|https://github.com/apache/ambari/commit/b4da19ea0b1bf87d5b91ad0520b822f00da26ab8]

> Need hdfs-site for saving ranger audits to hdfs in namenode HA env
> ------------------------------------------------------------------
>
>                 Key: AMBARI-20369
>                 URL: https://issues.apache.org/jira/browse/AMBARI-20369
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.5.0
>            Reporter: Mugdha Varadkar
>            Assignee: Mugdha Varadkar
>              Labels: Ambari
>             Fix For: 2.5.0
>
>         Attachments: AMBARI-20369.patch
>
>
> For the {{KNOX}} and {{RANGER_KMS}} services, which support the Ranger plugin, 
> hdfs-site.xml needs to be available in each service's conf directory so that 
> Ranger audits can be saved to HDFS in a NameNode HA environment.
> The following errors are logged when hdfs-site.xml is not available:
> {noformat}
> 2017-03-01 18:48:50,150 ERROR provider.BaseAuditHandler (BaseAuditHandler.java:logError(327)) - Error writing to log file.
> java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster
>       at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:438)
>       at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:311)
>       at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:690)
>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:631)
>       at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:160)
>       at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2795)
>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
>       at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2829)
>       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2811)
>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
>       at org.apache.ranger.audit.destination.HDFSAuditDestination.getLogFileStream(HDFSAuditDestination.java:271)
>       at org.apache.ranger.audit.destination.HDFSAuditDestination.access$000(HDFSAuditDestination.java:43)
>       at org.apache.ranger.audit.destination.HDFSAuditDestination$1.run(HDFSAuditDestination.java:157)
>       at org.apache.ranger.audit.destination.HDFSAuditDestination$1.run(HDFSAuditDestination.java:154)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
>       at org.apache.ranger.audit.provider.MiscUtil.executePrivilegedAction(MiscUtil.java:523)
>       at org.apache.ranger.audit.destination.HDFSAuditDestination.logJSON(HDFSAuditDestination.java:154)
>       at org.apache.ranger.audit.queue.AuditFileSpool.sendEvent(AuditFileSpool.java:880)
>       at org.apache.ranger.audit.queue.AuditFileSpool.runLogAudit(AuditFileSpool.java:828)
>       at org.apache.ranger.audit.queue.AuditFileSpool.run(AuditFileSpool.java:758)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.UnknownHostException: mycluster
>       ... 24 more
> 2017-03-01 18:48:50,151 ERROR queue.AuditFileSpool (AuditFileSpool.java:logError(710)) - Error sending logs to consumer. provider=knox.async.multi_dest.batch, consumer=knox.async.multi_dest.batch.hdfs
> {noformat}
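> The {{UnknownHostException}} occurs because, without the HA client properties from hdfs-site.xml, the HDFS client treats the nameservice name ({{mycluster}}) as a literal hostname. As a minimal sketch of the standard Hadoop HA client settings that must be resolvable from the service's conf directory (the nameservice name is taken from the log above; the NameNode hostnames and ports are placeholders):
> {noformat}
> <property>
>   <name>dfs.nameservices</name>
>   <value>mycluster</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.mycluster</name>
>   <value>nn1,nn2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>   <value>namenode1.example.com:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>   <value>namenode2.example.com:8020</value>
> </property>
> <property>
>   <name>dfs.client.failover.proxy.provider.mycluster</name>
>   <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
> </property>
> {noformat}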



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
