You should be able to create your own account in Apache JIRA. Go to 
https://issues.apache.org/jira/login.jsp and sign in.

After that you should be able to create the JIRA. You should use the project 
“Ranger”. You won’t have permission to assign the owner, but that is fine.

Thanks

Bosco


From:  Dale Bradman <da...@profusion.com>
Reply-To:  <user@ranger.incubator.apache.org>
Date:  Friday, June 17, 2016 at 7:51 AM
To:  "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org>
Subject:  RE: HDFS Plugin - Unable to get listing of files for directory [/] 
from Hadoop environment

Yes, sure. How do I do that without a login?

 

 

From: Don Bosco Durai [mailto:bo...@apache.org] 
Sent: 16 June 2016 19:29
To: user@ranger.incubator.apache.org
Subject: Re: HDFS Plugin - Unable to get listing of files for directory [/] 
from Hadoop environment

 

If you don’t mind, can you create a JIRA for us to track using the local 
hdfs-site.xml and core-site.xml?

 

Thanks

 

Bosco

 

 

From: Dale Bradman <da...@profusion.com>
Reply-To: <user@ranger.incubator.apache.org>
Date: Thursday, June 16, 2016 at 1:43 AM
To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org>
Subject: RE: HDFS Plugin - Unable to get listing of files for directory [/] 
from Hadoop environment

 

Yes, that definitely works for me. Makes sense, as the properties are available 
on every node.

 

I configured the HDFS plugin with Velmurugan’s suggestions and this has worked. 

 

There is one warning on the xa_portal:

 

WARN  org.apache.hadoop.hdfs.DFSUtil (DFSUtil.java:689) - Namenode for tatooine 
remains unresolved for ID nn2.  Check your hdfs-site.xml file to ensure 
namenodes are configured properly.

 

Is this just because nn2 is currently the standby node?

 

Thanks Velmurugan & Bosco.

 

 

From: Don Bosco Durai [mailto:bo...@apache.org] 
Sent: 16 June 2016 08:16
To: user@ranger.incubator.apache.org
Subject: Re: HDFS Plugin - Unable to get listing of files for directory [/] 
from Hadoop environment

 

Sorry to hijack the thread…

 

Since it is common practice, and sometimes required, to have the HDFS 
properties available on each node, should we consider loading the HDFS-related 
properties directly from the HDFS properties files? That way, we would only 
need to provide the path to the HDFS configuration folder.

 

Dale, please confirm if this works for you.

 

Thanks

 

Bosco

 

 

From: Velmurugan Periasamy <vperias...@hortonworks.com>
Reply-To: <user@ranger.incubator.apache.org>
Date: Wednesday, June 15, 2016 at 10:42 AM
To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org>
Subject: Re: HDFS Plugin - Unable to get listing of files for directory [/] 
from Hadoop environment

 

Having the active NameNode in the repo config should work just fine. Only 
resource lookup is unavailable during failover, until the repo config is 
updated.

 

For the HA configuration to work, you need to add the properties below in the 
repo config (i.e. additional entries in the advanced section). They can be 
copied from hdfs-site.xml.

 

dfs.nameservices = <ha_name>

dfs.ha.namenodes.<ha_name> = <nn1,nn2>

dfs.namenode.rpc-address.<ha_name>.<nn1> = <nn1_host:8020>

dfs.namenode.rpc-address.<ha_name>.<nn2> = <nn2_host:8020>

dfs.client.failover.proxy.provider.<ha_name> = 
org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
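For example, with the nameservice (tatooine) and NameNode host (hdpmaster01) mentioned elsewhere in this thread, the entries could look like the following. Note that the rpc-address keys are qualified by both the nameservice and the NameNode ID, and the proxy-provider key by the nameservice; the second host name and the ports are assumptions.

```
dfs.nameservices = tatooine
dfs.ha.namenodes.tatooine = nn1,nn2
dfs.namenode.rpc-address.tatooine.nn1 = hdpmaster01.hadoop.local:8020
dfs.namenode.rpc-address.tatooine.nn2 = hdpmaster02.hadoop.local:8020
dfs.client.failover.proxy.provider.tatooine = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
```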

 

 

From: Dale Bradman <da...@profusion.com>
Reply-To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org>
Date: Wednesday, June 15, 2016 at 10:51 AM
To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org>
Subject: RE: HDFS Plugin - Unable to get listing of files for directory [/] 
from Hadoop environment

 

That did not work.

 

It works when I set:

hadoop.rpc.protection = -

 

Then in HDFS plugin:

Namenode URL = hdfs://hdpmaster01:8020

RPC Protection Type = Authentication

 

The above works. It seems it is the HA configuration that is the problem. Will 
it work with NameNode HA? Is there any risk in it not being configured for HA?

 

Thanks.

From: Velmurugan Periasamy [mailto:vperias...@hortonworks.com] 
Sent: 15 June 2016 14:31
To: user@ranger.incubator.apache.org
Subject: Re: HDFS Plugin - Unable to get listing of files for directory [/] 
from Hadoop environment

 

Dale:

 

Could you set hadoop.rpc.protection to authentication and try?

 

Thank you,

Vel

 

From: Dale Bradman <da...@profusion.com>
Reply-To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org>
Date: Wednesday, June 15, 2016 at 9:28 AM
To: "user@ranger.incubator.apache.org" <user@ranger.incubator.apache.org>
Subject: HDFS Plugin - Unable to get listing of files for directory [/] from 
Hadoop environment

 

Trying to configure the HDFS plugin for a Kerberised, HA, HDP 2.4.2 cluster.

I have followed this guide 
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_Security_Guide/content/hdfs_plugin_kerberos.html

I have created a “rangerrepouser” in AD and it is visible in the Ranger UI.

 

Advanced ranger-hdfs-plugin properties:

Ranger repository config user = rangerrepouser@AD.EXAMPLE

Ranger repository config password = password set in AD

hadoop.rpc.protection = 

 

 

HDFS Service props:

Username: rangerrepouser@AD.EXAMPLE

Namenode URL: hdfs://tatooine

Authorization enabled: Yes

Authentication type: Kerberos

hadoop.security.auth_to_local:

RULE:[1:$1@$0](ambari-qa-Tatooine@AD.EXAMPLE)s/.*/ambari-qa/
RULE:[1:$1@$0](hdfs-Tatooine@AD.EXAMPLE)s/.*/hdfs/
RULE:[1:$1@$0](.*@AD.EXAMPLE)s/@.*//
RULE:[2:$1@$0](amshbase@AD.EXAMPLE)s/.*/ams/
RULE:[2:$1@$0](amszk@AD.EXAMPLE)s/.*/ams/
RULE:[2:$1@$0](dn@AD.EXAMPLE)s/.*/hdfs/
RULE:[2:$1@$0](hive@AD.EXAMPLE)s/.*/hive/
RULE:[2:$1@$0](jhs@AD.EXAMPLE)s/.*/mapred/
RULE:[2:$1@$0](jn@AD.EXAMPLE)s/.*/hdfs/
RULE:[2:$1@$0](nm@AD.EXAMPLE)s/.*/yarn/
RULE:[2:$1@$0](nn@AD.EXAMPLE)s/.*/hdfs/
RULE:[2:$1@$0](rm@AD.EXAMPLE)s/.*/yarn/
RULE:[2:$1@$0](yarn@AD.EXAMPLE)s/.*/yarn/
DEFAULT

dfs.datanode.kerberos.principal = dn/hdpnode01.hadoop.local@AD.EXAMPLE

dfs.namenode.kerberos.principal = nn/hdpmaster01.hadoop.local@AD.EXAMPLE

dfs.secondary.namenode.kerberos.principal = nn/hdpmaster01.hadoop.local@AD.EXAMPLE

RPC Protection Type = 
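For reference, the mapping those auth_to_local rules perform can be sketched in Python. This is a simplified model (not Hadoop or Ranger code), covering only rules of the `[n:$1@$0]` form used above and a rough stand-in for DEFAULT:

```python
import re

# Each rule: (num_components, match_regex, sed_pattern, sed_replacement),
# modelling e.g. RULE:[2:$1@$0](nn@AD.EXAMPLE)s/.*/hdfs/
RULES = [
    (1, r".*@AD\.EXAMPLE", r"@.*", ""),
    (2, r"nn@AD\.EXAMPLE", r".*", "hdfs"),
    (2, r"dn@AD\.EXAMPLE", r".*", "hdfs"),
]

def auth_to_local(principal, rules=RULES):
    """Map a Kerberos principal (e.g. nn/host@REALM) to a local user name."""
    name, realm = principal.split("@")
    components = name.split("/")
    for num, match, pattern, replacement in rules:
        if len(components) != num:
            continue
        # "[n:$1@$0]" builds "<first-component>@<realm>" as the candidate
        candidate = components[0] + "@" + realm
        if re.fullmatch(match, candidate):
            return re.sub(pattern, replacement, candidate, count=1)
    return components[0]  # rough stand-in for the DEFAULT rule

print(auth_to_local("nn/hdpmaster01.hadoop.local@AD.EXAMPLE"))  # hdfs
print(auth_to_local("dale@AD.EXAMPLE"))  # dale
```

So the NameNode principal `nn/hdpmaster01.hadoop.local@AD.EXAMPLE` resolves to the local user `hdfs`, which is what the `RULE:[2:$1@$0](nn@AD.EXAMPLE)s/.*/hdfs/` rule above is for.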

 

 

Here is the xa_portal.log:

2016-06-15 14:21:05,037 [timed-executor-pool-0] INFO  
org.apache.ranger.plugin.client.BaseClient (BaseClient.java:100) - Init Login: 
using username/password

2016-06-15 14:21:05,194 [timed-executor-pool-0] ERROR 
apache.ranger.services.hdfs.client.HdfsResourceMgr (HdfsResourceMgr.java:48) - 
<== HdfsResourceMgr.testConnection Error: 
org.apache.ranger.plugin.client.HadoopException: Unable to get listing of files 
for directory [/] from Hadoop environment [Tatooine_hadoop].

2016-06-15 14:21:05,194 [timed-executor-pool-0] ERROR 
org.apache.ranger.services.hdfs.RangerServiceHdfs (RangerServiceHdfs.java:59) - 
<== RangerServiceHdfs.validateConfig 
Error:org.apache.ranger.plugin.client.HadoopException: Unable to get listing of 
files for directory [/] from Hadoop environment [Tatooine_hadoop].

2016-06-15 14:21:05,195 [timed-executor-pool-0] ERROR 
org.apache.ranger.biz.ServiceMgr$TimedCallable (ServiceMgr.java:434) - 
TimedCallable.call: Error:org.apache.ranger.plugin.client.HadoopException: 
Unable to get listing of files for directory [/] from Hadoop environment 
[Tatooine_hadoop].

2016-06-15 14:21:05,195 [http-bio-6080-exec-3] ERROR 
org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:120) - ==> 
ServiceMgr.validateConfig Error:java.util.concurrent.ExecutionException: 
org.apache.ranger.plugin.client.HadoopException: Unable to get listing of files 
for directory [/] from Hadoop environment [Tatooine_hadoop].

 

 

1. Any ideas as to why this is not working? Everything seems consistent.

2. Does the rangerrepouser have to be set up on the Ranger Admin server? It is 
visible in the Ranger UI but is only synchronised with my edge node, not the 
Admin server.

3. Does it matter that the namenode and secondary namenode point to the same 
Kerberos principal? It doesn’t work if I point them to their respective 
principals either.

 

Thanks,

Dale
