Hello again Ramesh.

I investigated my problem further, and I'm sorry for the confusion.
I checked the policy cache directory and the logs on the namenode.

The policy cache directory contains an empty file, and the namenode log
contains the following error message:
###
2016-05-18 08:53:50,129 ERROR client.RangerAdminRESTClient
(RangerAdminRESTClient.java:getServicePoliciesIfUpdated(79)) - Error
getting policies. request=https://<RANGER HOST FQDN>:<RANGER ADMIN
PORT>/service/plugins/policies/download/<HDFS
REPO>?lastKnownVersion=-1&pluginId=hdfs@<NAMENODE HOST FQDN>-<HDFS REPO>,
response={"httpStatusCode":400,"statusCode":1,"msgDesc":"Unauthorized
access - unable to get client
certificate","messageList":[{"name":"OPER_NOT_ALLOWED_FOR_ENTITY","rbKey":"xa.error.oper_not_allowed_for_state","message":"Operation
not allowed for entity"}]}, serviceName=<HDFS REPO>
2016-05-18 08:53:50,130 ERROR util.PolicyRefresher
(PolicyRefresher.java:loadPolicyfromPolicyAdmin(228)) -
PolicyRefresher(serviceName=<HDFS REPO>): failed to refresh policies. Will
continue to use last known version of policies (-1)
java.lang.Exception: Unauthorized access - unable to get client certificate
        at
org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:81)
        at
org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:205)
        at
org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:175)
        at
org.apache.ranger.plugin.util.PolicyRefresher.startRefresher(PolicyRefresher.java:132)
        at
org.apache.ranger.plugin.service.RangerBasePlugin.init(RangerBasePlugin.java:106)
        at
org.apache.ranger.authorization.hadoop.RangerHdfsPlugin.init(RangerHdfsAuthorizer.java:399)
        at
org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer.start(RangerHdfsAuthorizer.java:83)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:1062)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:763)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:687)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:896)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:880)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1586)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1652)
###

What does OPER_NOT_ALLOWED_FOR_ENTITY mean?
Which user is the operator for the HDFS plugin?
Is it the user created for the plugin (the one set in the "Ranger repository
config user" property)?

I enabled SSL for the HDFS plugin following the Hortonworks doc here:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_Security_Guide/content/ch04s18s02s04s01.html

Do you think my problem could come from an error in my SSL configuration?
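For reference, here is how I understand the plugin-side SSL properties from
the doc (the file and property names come from the Ranger plugin docs; the
expected values below are my assumption, matching the keystore and
truststore I describe below):
###
# sanity check on the namenode: the SSL client properties of the plugin
grep -A 1 'xasecure.policymgr.clientssl' /etc/hadoop/conf/ranger-policymgr-ssl.xml
# I expect (assumption):
#   xasecure.policymgr.clientssl.keystore   -> /etc/hadoop/conf/ranger-plugin-keystore.jks
#   xasecure.policymgr.clientssl.truststore -> /etc/hadoop/conf/ranger-plugin-truststore.jks
#   plus the matching keystore/truststore passwords
###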

To summarize what I did:

I have:
- one node with the namenode
- one node with Ranger (admin + usersync)

On the namenode host, I created a plugin keystore.
This keystore contains the certificate for the alias rangerHdfsAgent.
###
cd /etc/hadoop/conf
keytool -genkey -keyalg RSA -alias rangerHdfsAgent -keystore
/etc/hadoop/conf/ranger-plugin-keystore.jks -validity 3600 -keysize 2048
-dname
'cn=HdfsPlugin,ou=<mycompany>,o=<mycompany>,l=<mycity>,st=<mycountry>,c=<idcountry>'
chown hdfs:hdfs /etc/hadoop/conf/ranger-plugin-keystore.jks
chmod 400 /etc/hadoop/conf/ranger-plugin-keystore.jks
###
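To double-check, the new entry can be listed (a quick sanity check, not in
the doc; keytool prompts for the keystore password):
###
# verify the self-signed cert exists under the expected alias and CN
keytool -list -v -keystore /etc/hadoop/conf/ranger-plugin-keystore.jks -alias rangerHdfsAgent
###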

On the Ranger host, I exported the certificate for the alias rangeradmin
from the admin keystore.
###
keytool -export -keystore /etc/ranger/admin/conf/ranger-admin-keystore.jks
-alias rangeradmin -file /etc/ranger/admin/conf/ranger-admin-trust.cer
###

Then I transferred the .cer file from the Ranger host to the namenode host.
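(For example with scp; the ssh user here is just an assumption:)
###
scp /etc/ranger/admin/conf/ranger-admin-trust.cer root@<NAMENODE HOST FQDN>:/etc/hadoop/conf/
###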

On the namenode host, I imported the certificate for the alias rangeradmin
into the plugin truststore (the truststore did not exist yet).
###
keytool -import -file /etc/hadoop/conf/ranger-admin-trust.cer -alias
rangeradmintrust -keystore /etc/hadoop/conf/ranger-plugin-truststore.jks
chown hdfs:hdfs /etc/hadoop/conf/ranger-plugin-truststore.jks
chmod 400 /etc/hadoop/conf/ranger-plugin-truststore.jks
###
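To be sure the right certificate went in, the fingerprints can be compared
(a quick check, not in the doc):
###
# fingerprint of the exported admin certificate...
keytool -printcert -file /etc/hadoop/conf/ranger-admin-trust.cer
# ...should match the entry in the plugin truststore
keytool -list -keystore /etc/hadoop/conf/ranger-plugin-truststore.jks -alias rangeradmintrust
###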

On the namenode host, I exported the certificate for the alias
rangerHdfsAgent from the plugin keystore.
###
keytool -export -keystore /etc/hadoop/conf/ranger-plugin-keystore.jks
-alias rangerHdfsAgent -file /etc/hadoop/conf/ranger-hdfsAgent-trust.cer
###

Then I transferred the ranger-hdfsAgent-trust.cer file from the namenode
host to the Ranger host.

On the Ranger host, I imported the certificate for the alias
rangerHdfsAgent into the admin truststore (the truststore did not exist
yet).
###
keytool -import -file /etc/ranger/admin/conf/ranger-hdfsAgent-trust.cer
-alias rangerHdfsAgentTrust -keystore
/etc/ranger/admin/conf/ranger-admin-truststore.jks
chown ranger:ranger /etc/ranger/admin/conf/ranger-admin-truststore.jks
chmod 400 /etc/ranger/admin/conf/ranger-admin-truststore.jks
###
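Same kind of sanity check on this side:
###
keytool -list -keystore /etc/ranger/admin/conf/ranger-admin-truststore.jks -alias rangerHdfsAgentTrust
###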

In the Ambari UI, I added the CN HdfsPlugin to the property "Common Name
For Certificate".

In the Ranger Admin UI, I checked that the repository definition also
contains this property with the right value.
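One test I thought of to take the namenode out of the picture: convert the
plugin keystore to PEM and replay the policy download call with curl (a
sketch under my assumptions; keytool/openssl prompt for the passwords, and
the URL is the one from the namenode log):
###
# export the plugin key + cert to PEM through an intermediate PKCS12 file
keytool -importkeystore -srckeystore /etc/hadoop/conf/ranger-plugin-keystore.jks \
  -srcalias rangerHdfsAgent -destkeystore /tmp/agent.p12 -deststoretype PKCS12
openssl pkcs12 -in /tmp/agent.p12 -out /tmp/agent.pem -nodes
# the admin certificate exported by keytool is DER; curl wants PEM
openssl x509 -inform der -in /etc/hadoop/conf/ranger-admin-trust.cer -out /tmp/admin-ca.pem
# replay the exact call the plugin makes (URL from the log above)
curl --cacert /tmp/admin-ca.pem --cert /tmp/agent.pem \
  "https://<RANGER HOST FQDN>:<RANGER ADMIN PORT>/service/plugins/policies/download/<HDFS REPO>?lastKnownVersion=-1&pluginId=hdfs@<NAMENODE HOST FQDN>-<HDFS REPO>"
###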

Do you see something wrong in this setup?

BR.

Lune.


On Tue, May 17, 2016 at 3:45 PM, Lune Silver <lunescar.ran...@gmail.com>
wrote:

> Hello !
>
> I just enabled the HDFS plugin for Ranger.
> The repository was created by Ambari (2.2.1 with HDP cluster 2.3.2).
>
> In the Ranger Admin UI, in the repository edit window, when I click the
> button "test connection", I get the following error message:
> ###
> Unable to connect repository with given config for <MYCLUSTER>_hadoop
> ###
>
> And I can see this in the logs :
> ###
> 2016-05-17 15:41:49,895 [http-bio-6182-exec-5] ERROR
> org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:120) - ==>
> ServiceMgr.validateConfig Error:java.util.concurrent.ExecutionException:
> org.apache.ranger.plugin.client.HadoopException: listFilesInternal: Unable
> to get listing of files for directory /null] from Hadoop environment
> [<MYCLUSTER>_hadoop].
> ###
>
> Any idea why this test connection fails?
>
> BR.
>
> Lune.
>
