I enabled the DEBUG log in Ranger Admin and found this when I press the test
connection button in the repository definition window in the Ranger Admin UI.

###
2016-05-18 10:42:03,135 [timed-executor-pool-0] DEBUG org.apache.hadoop.security.SaslRpcClient (SaslRpcClient.java:264) - Get token info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:@org.apache.hadoop.security.token.TokenInfo(value=class org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector)
2016-05-18 10:42:03,139 [timed-executor-pool-0] DEBUG org.apache.hadoop.security.UserGroupInformation (UserGroupInformation.java:1681) - PrivilegedAction as:rangerhdfslookup (auth:null) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:648)
2016-05-18 10:42:03,139 [timed-executor-pool-0] WARN org.apache.hadoop.ipc.Client$Connection$1 (Client.java:680) - Exception encountered while connecting to the server : java.lang.NullPointerException
2016-05-18 10:42:03,140 [timed-executor-pool-0] DEBUG org.apache.hadoop.security.UserGroupInformation (UserGroupInformation.java:1661) - PrivilegedActionException as:rangerhdfslookup (auth:null) cause:java.io.IOException: java.lang.NullPointerException
2016-05-18 10:42:03,143 [timed-executor-pool-0] DEBUG org.apache.hadoop.ipc.Client$Connection (Client.java:1180) - closing ipc connection to <NAMENODE HOST FQDN>/<namenode host IP>:8020: java.lang.NullPointerException
java.io.IOException: java.lang.NullPointerException
        at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:685)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:648)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:735)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:373)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1493)
        at org.apache.hadoop.ipc.Client.call(Client.java:1397)
        at org.apache.hadoop.ipc.Client.call(Client.java:1358)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy90.getListing(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:573)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy91.getListing(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2094)
        at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2077)
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:791)
        at org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:849)
        at org.apache.ranger.services.hdfs.client.HdfsClient.listFilesInternal(HdfsClient.java:83)
        at org.apache.ranger.services.hdfs.client.HdfsClient.access$000(HdfsClient.java:41)
        at org.apache.ranger.services.hdfs.client.HdfsClient$1.run(HdfsClient.java:165)
        at org.apache.ranger.services.hdfs.client.HdfsClient$1.run(HdfsClient.java:162)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:356)
        at org.apache.ranger.services.hdfs.client.HdfsClient.listFiles(HdfsClient.java:169)
        at org.apache.ranger.services.hdfs.client.HdfsClient.testConnection(HdfsClient.java:211)
        at org.apache.ranger.services.hdfs.client.HdfsResourceMgr.testConnection(HdfsResourceMgr.java:46)
        at org.apache.ranger.services.hdfs.RangerServiceHdfs.validateConfig(RangerServiceHdfs.java:57)
        at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:484)
        at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:471)
        at org.apache.ranger.biz.ServiceMgr$TimedCallable.call(ServiceMgr.java:432)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
        at org.apache.hadoop.security.SaslRpcClient.createSaslClient(SaslRpcClient.java:227)
        at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:159)
        at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
        at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:558)
        at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:373)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:727)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:723)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:722)
        ... 39 more
2016-05-18 10:42:03,144 [timed-executor-pool-0] DEBUG org.apache.hadoop.ipc.Client$Connection (Client.java:1189) - IPC Client (1901255770) connection to <NAMENODE HOST FQDN>/<namenode host IP>:8020 from rangerhdfslookup: closed
2016-05-18 10:42:03,144 [timed-executor-pool-0] TRACE org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker (ProtobufRpcEngine.java:235) - 60: Exception <- <NAMENODE HOST FQDN>/<namenode host IP>:8020: getListing {java.io.IOException: Failed on local exception: java.io.IOException: java.lang.NullPointerException; Host Details : local host is: "<ranger host fqdn>/<ranger host IP>"; destination host is: "<NAMENODE HOST FQDN>":8020; }
2016-05-18 10:42:03,147 [timed-executor-pool-0] DEBUG apache.ranger.services.hdfs.client.HdfsClient (HdfsClient.java:140) - <== HdfsClient listFilesInternal Error : java.io.IOException: Failed on local exception: java.io.IOException: java.lang.NullPointerException; Host Details : local host is: "<ranger host fqdn>/<ranger host IP>"; destination host is: "<NAMENODE HOST FQDN>":8020;
2016-05-18 10:42:03,147 [timed-executor-pool-0] ERROR apache.ranger.services.hdfs.client.HdfsResourceMgr (HdfsResourceMgr.java:48) - <== HdfsResourceMgr.testConnection Error: org.apache.ranger.plugin.client.HadoopException: listFilesInternal: Unable to get listing of files for directory /null] from Hadoop environment [<HDFS REPO>].
###

The user rangerhdfslookup exists in my Kerberos DB, a kinit rangerhdfslookup
with the right password works fine, and this is the same password that I put
in the repository definition and in the Ambari UI.
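
For reference, here is the sanity check I ran on the Ranger host (a minimal
sketch; it assumes the principal is simply rangerhdfslookup in the default realm):
###
# obtain a TGT using the same password as in the repository definition
kinit rangerhdfslookup
# confirm a valid ticket was granted
klist
###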

BR.

Lune.

On Wed, May 18, 2016 at 10:15 AM, Lune Silver <lunescar.ran...@gmail.com>
wrote:

> Re Ramesh.
>
> So my SSL problem is solved, but I still have this error in my log:
> ###
>
> 2016-05-18 10:07:32,579 [timed-executor-pool-0] ERROR org.apache.ranger.services.hdfs.RangerServiceHdfs (RangerServiceHdfs.java:59) - <== RangerServiceHdfs.validateConfig Error:org.apache.ranger.plugin.client.HadoopException: listFilesInternal: Unable to get listing of files for directory /null] from Hadoop environment [<CLUSTERNAME>_hadoop].
> ###
>
> I already have a lot of files and folders in HDFS.
> What do you mean by "create an empty file"? With which user? In which
> folder?
>
> BR.
>
> Lune.
>
>
>
> On Wed, May 18, 2016 at 9:52 AM, Lune Silver <lunescar.ran...@gmail.com>
> wrote:
>
>> Phew.
>>
>> Indeed, the wrong truststore was my problem.
>> By using the JDK's one, I managed to get rid of the error.
>>
>> To get the JAVA_HOME location:
>> readlink -f /usr/bin/java | sed "s:bin/java::"
>>
>> Then the cacerts file is located in lib/security/.
>> And the default password is changeit.
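>>
>> To double-check which certificates that default truststore contains (a
>> quick sketch, assuming the JAVA_HOME computed above and the default password):
>> ###
>> keytool -list -keystore $JAVA_HOME/lib/security/cacerts -storepass changeit
>> ###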
>>
>> BR.
>>
>> Lune.
>>
>> On Wed, May 18, 2016 at 9:29 AM, Lune Silver <lunescar.ran...@gmail.com>
>> wrote:
>>
>>> In fact, it uses the JDK cacerts by default.
>>> https://issues.apache.org/jira/browse/AMBARI-15917
>>>
>>> So I'm wondering whether I'm actually using the wrong truststore for
>>> Ranger Admin.
>>>
>>> BR.
>>>
>>> Lune
>>>
>>> On Wed, May 18, 2016 at 9:27 AM, Lune Silver <lunescar.ran...@gmail.com>
>>> wrote:
>>>
>>>> In fact, I'm wondering:
>>>> what is the truststore used by default by Ranger Admin?
>>>>
>>>> I can find a property for the truststore of Ranger User-Sync, but not
>>>> for Ranger Admin.
>>>>
>>>> BR.
>>>>
>>>>
>>>> Lune.
>>>>
>>>> On Wed, May 18, 2016 at 9:16 AM, Lune Silver <lunescar.ran...@gmail.com> wrote:
>>>>
>>>>> Re Ramesh.
>>>>>
>>>>> I investigated my problem further, and I'm sorry for the confusion.
>>>>> I checked the policy cache directory on the namenode, and also the
>>>>> logs of the namenode.
>>>>>
>>>>> The policycache dir contains an empty file.
>>>>> And the namenode log contains the following error message:
>>>>> ###
>>>>> 2016-05-18 08:53:50,129 ERROR client.RangerAdminRESTClient (RangerAdminRESTClient.java:getServicePoliciesIfUpdated(79)) - Error getting policies. request=https://<RANGER HOST FQDN>:<RANGER ADMIN PORT>/service/plugins/policies/download/<HDFS REPO>?lastKnownVersion=-1&pluginId=hdfs@<NAMENODE HOST FQDN>-<HDFS REPO>, response={"httpStatusCode":400,"statusCode":1,"msgDesc":"Unauthorized access - unable to get client certificate","messageList":[{"name":"OPER_NOT_ALLOWED_FOR_ENTITY","rbKey":"xa.error.oper_not_allowed_for_state","message":"Operation not allowed for entity"}]}, serviceName=<HDFS REPO>
>>>>> 2016-05-18 08:53:50,130 ERROR util.PolicyRefresher (PolicyRefresher.java:loadPolicyfromPolicyAdmin(228)) - PolicyRefresher(serviceName=<HDFS REPO>): failed to refresh policies. Will continue to use last known version of policies (-1)
>>>>> java.lang.Exception: Unauthorized access - unable to get client certificate
>>>>>         at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:81)
>>>>>         at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:205)
>>>>>         at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:175)
>>>>>         at org.apache.ranger.plugin.util.PolicyRefresher.startRefresher(PolicyRefresher.java:132)
>>>>>         at org.apache.ranger.plugin.service.RangerBasePlugin.init(RangerBasePlugin.java:106)
>>>>>         at org.apache.ranger.authorization.hadoop.RangerHdfsPlugin.init(RangerHdfsAuthorizer.java:399)
>>>>>         at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer.start(RangerHdfsAuthorizer.java:83)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:1062)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:763)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:687)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:896)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:880)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1586)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1652)
>>>>> ###
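>>>>>
>>>>> To replay that policy download outside the namenode, I would try
>>>>> something like the sketch below (not verified; it assumes the plugin
>>>>> keystore created further down, and converts it to PEM because curl
>>>>> cannot read JKS directly):
>>>>> ###
>>>>> # convert the plugin JKS keystore to PKCS12, then to PEM
>>>>> keytool -importkeystore -srckeystore /etc/hadoop/conf/ranger-plugin-keystore.jks -destkeystore /tmp/plugin.p12 -deststoretype PKCS12
>>>>> openssl pkcs12 -in /tmp/plugin.p12 -out /tmp/plugin.pem -nodes
>>>>> # replay the download with the plugin's client certificate
>>>>> # (-k skips server cert verification; we only test the client cert here)
>>>>> curl -k --cert /tmp/plugin.pem "https://<RANGER HOST FQDN>:<RANGER ADMIN PORT>/service/plugins/policies/download/<HDFS REPO>?lastKnownVersion=-1&pluginId=hdfs@<NAMENODE HOST FQDN>-<HDFS REPO>"
>>>>> ###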
>>>>>
>>>>> What does OPER_NOT_ALLOWED_FOR_ENTITY mean?
>>>>> Which user is the operator for the hdfs plugin?
>>>>> Is it the user created for the plugin (in the property "Ranger
>>>>> repository config user")?
>>>>>
>>>>> I enabled SSL for the HDFS plugin following the HW doc here:
>>>>>
>>>>> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_Security_Guide/content/ch04s18s02s04s01.html
>>>>>
>>>>> Do you think my problem could come from an error in my SSL
>>>>> configuration?
>>>>>
>>>>> If I summarize what I did:
>>>>>
>>>>> I have:
>>>>> - one node with the namenode
>>>>> - one node with ranger (admin + usersync)
>>>>>
>>>>> On the namenode host, I created a plugin keystore.
>>>>> This keystore contains the certificate for the alias rangerHdfsAgent.
>>>>> ###
>>>>> cd /etc/hadoop/conf
>>>>> keytool -genkey -keyalg RSA -alias rangerHdfsAgent -keystore /etc/hadoop/conf/ranger-plugin-keystore.jks -validity 3600 -keysize 2048 -dname 'cn=HdfsPlugin,ou=<mycompany>,o=<mycompany>,l=<mycity>,st=<mycountry>,c=<idcountry>'
>>>>> chown hdfs:hdfs /etc/hadoop/conf/ranger-plugin-keystore.jks
>>>>> chmod 400 /etc/hadoop/conf/ranger-plugin-keystore.jks
>>>>> ###
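>>>>>
>>>>> To verify the entry was created as expected (it prompts for the store
>>>>> password chosen at -genkey time):
>>>>> ###
>>>>> keytool -list -v -keystore /etc/hadoop/conf/ranger-plugin-keystore.jks -alias rangerHdfsAgent
>>>>> ###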
>>>>>
>>>>> On the Ranger host, I exported the certificate for the alias
>>>>> rangeradmin from the admin keystore.
>>>>> ###
>>>>> keytool -export -keystore /etc/ranger/admin/conf/ranger-admin-keystore.jks -alias rangeradmin -file /etc/ranger/admin/conf/ranger-admin-trust.cer
>>>>> ###
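>>>>>
>>>>> To inspect the exported certificate (the same check works for the agent
>>>>> certificate exported further down):
>>>>> ###
>>>>> keytool -printcert -file /etc/ranger/admin/conf/ranger-admin-trust.cer
>>>>> ###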
>>>>>
>>>>> Then I transferred the cer file from the ranger host to the namenode
>>>>> host.
>>>>>
>>>>> On the namenode host, I imported the certificate of the alias
>>>>> rangeradmin into the plugin truststore (the truststore did not exist
>>>>> yet).
>>>>> ###
>>>>> keytool -import -file /etc/hadoop/conf/ranger-admin-trust.cer -alias rangeradmintrust -keystore /etc/hadoop/conf/ranger-plugin-truststore.jks
>>>>> chown hdfs:hdfs /etc/hadoop/conf/ranger-plugin-truststore.jks
>>>>> chmod 400 /etc/hadoop/conf/ranger-plugin-truststore.jks
>>>>> ###
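>>>>>
>>>>> And to confirm the import landed in the new truststore:
>>>>> ###
>>>>> keytool -list -keystore /etc/hadoop/conf/ranger-plugin-truststore.jks -alias rangeradmintrust
>>>>> ###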
>>>>>
>>>>> On the namenode host, I exported the certificate for the alias
>>>>> rangerHdfsAgent from the plugin keystore.
>>>>> ###
>>>>> keytool -export -keystore /etc/hadoop/conf/ranger-plugin-keystore.jks -alias rangerHdfsAgent -file /etc/hadoop/conf/ranger-hdfsAgent-trust.cer
>>>>> ###
>>>>>
>>>>> Then I transferred the ranger-hdfsAgent-trust.cer file from the
>>>>> namenode host to the ranger host.
>>>>>
>>>>> On the ranger host, I imported the certificate for the alias
>>>>> rangerHdfsAgent into the admin truststore (the truststore did not exist
>>>>> yet).
>>>>> ###
>>>>> keytool -import -file /etc/ranger/admin/conf/ranger-hdfsAgent-trust.cer -alias rangerHdfsAgentTrust -keystore /etc/ranger/admin/conf/ranger-admin-truststore.jks
>>>>> chown ranger:ranger /etc/ranger/admin/conf/ranger-admin-truststore.jks
>>>>> chmod 400 /etc/ranger/admin/conf/ranger-admin-truststore.jks
>>>>> ###
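>>>>>
>>>>> Given the earlier finding that Ranger Admin falls back to the JDK
>>>>> cacerts, I assume this truststore also has to be handed to the admin
>>>>> JVM explicitly, e.g. via the standard JSSE system properties (a guess
>>>>> on my side; the exact env script may differ per install):
>>>>> ###
>>>>> export JAVA_OPTS="$JAVA_OPTS -Djavax.net.ssl.trustStore=/etc/ranger/admin/conf/ranger-admin-truststore.jks -Djavax.net.ssl.trustStorePassword=<truststore password>"
>>>>> ###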
>>>>>
>>>>> In the Ambari UI, I added the CN HdfsPlugin in the property "Common
>>>>> Name For Certificate".
>>>>>
>>>>> In the Ranger Admin UI, I checked that, in the repository definition,
>>>>> there is also this property with the right value.
>>>>>
>>>>> Do you think there is something wrong?
>>>>>
>>>>> BR.
>>>>>
>>>>> Lune.
>>>>>
>>>>>
>>>>> On Tue, May 17, 2016 at 3:45 PM, Lune Silver <lunescar.ran...@gmail.com> wrote:
>>>>>
>>>>>> Hello !
>>>>>>
>>>>>> I just enabled the HDFS plugin for Ranger.
>>>>>> The repository was created by Ambari (2.2.1 with HDP cluster 2.3.2).
>>>>>>
>>>>>> In the Ranger Admin UI, in the repository edit window, when I click
>>>>>> on the button "test connection", I have the following error message:
>>>>>> ###
>>>>>> Unable to connect repository with given config for <MYCLUSTER>_hadoop
>>>>>> ###
>>>>>>
>>>>>> And I can see this in the logs:
>>>>>> ###
>>>>>> 2016-05-17 15:41:49,895 [http-bio-6182-exec-5] ERROR org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:120) - ==> ServiceMgr.validateConfig Error:java.util.concurrent.ExecutionException: org.apache.ranger.plugin.client.HadoopException: listFilesInternal: Unable to get listing of files for directory /null] from Hadoop environment [<MYCLUSTER>_hadoop].
>>>>>> ###
>>>>>>
>>>>>> Any idea why this test connection fails?
>>>>>>
>>>>>> BR.
>>>>>>
>>>>>> Lune.
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
