[ https://issues.apache.org/jira/browse/RANGER-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16079980#comment-16079980 ]

peng.jianhua edited comment on RANGER-1681 at 7/10/17 7:44 AM:
---------------------------------------------------------------

Hi [~vperiasamy], you are right. That is one way to solve this problem, but it has 
the following drawbacks:
1. The user must manually link these files, which makes Ranger harder to use.
2. Some of those files are not actually needed. Relying on irrelevant files can 
introduce hidden problems, and once a problem occurs it is very hard for the user 
to locate and analyze it.

I suspect this problem has troubled you as well.

The patch attached to this issue resolves both drawbacks.
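
For context, the idea behind the patch is to let Ranger carry its own Kerberos 
switch instead of inheriting Hadoop's. As a rough sketch only (the property name 
below is hypothetical and is not taken from 0001-RANGER-1681.patch), that could 
look like:
{code}
<!-- Hypothetical Ranger-side Kerberos switch, shown for illustration only;
     the actual property introduced by 0001-RANGER-1681.patch may differ.
     With such a switch, ranger-admin no longer needs to read
     hadoop.security.authentication from Hadoop's core-site.xml. -->
	<property>
		<name>ranger.admin.kerberos.enabled</name>
		<value>true</value>
	</property>
{code}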



> Ranger's Kerberos configuration currently relies directly on the configuration 
> of the Hadoop component. When HDFS enables additional features such as HA, the 
> HBase test connection fails
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: RANGER-1681
>                 URL: https://issues.apache.org/jira/browse/RANGER-1681
>             Project: Ranger
>          Issue Type: Bug
>          Components: admin, Ranger
>            Reporter: peng.jianhua
>            Assignee: peng.jianhua
>              Labels: patch
>         Attachments: 0001-RANGER-1681.patch
>
>
> Currently, ranger-admin enables the Kerberos switch as follows:
> 1. Configure the ranger-admin install.properties file:
> {code}
>       hadoop_conf=/etc/hadoop/conf
> {code}
> 2. ranger-admin then reads the configuration item from the Hadoop configuration file core-site.xml:
> {code}
>       <property>
>               <name>hadoop.security.authentication</name>
>               <value>kerberos</value>
>       </property>
> {code}
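> 
> For context (the value below is illustrative, not taken from this cluster): when 
> HDFS HA is enabled, core-site.xml usually points the default filesystem at a 
> logical nameservice rather than a real host, and that logical name is what later 
> fails to resolve in the log below.
> {code}
> <!-- Illustrative only: with HDFS HA, fs.defaultFS refers to the logical
>      nameservice id defined in hdfs-site.xml, not to a resolvable hostname. -->
>       <property>
>               <name>fs.defaultFS</name>
>               <value>hdfs://nameservice</value>
>       </property>
> {code}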
> However, when ranger-admin has Kerberos enabled and HDFS has HA enabled, the 
> hbase-plugin service's test connection fails. This is because Ranger and Hadoop 
> share the same Kerberos switch configuration file, which creates a series of 
> unnecessary dependencies:
> {code}
> 2017-06-22 08:14:44,518 INFO 
> org.apache.ranger.services.hbase.client.HBaseClient: HBase connection has 
> [zookeeper.znode.parent] with value [/hbase]
> 2017-06-22 08:14:44,520 INFO org.apache.ranger.plugin.client.BaseClient: Init 
> Login: security not enabled, using username
> 2017-06-22 08:14:44,581 INFO 
> org.apache.ranger.services.hbase.client.HBaseClient: getHBaseStatus: creating 
> default Hbase configuration
> 2017-06-22 08:14:44,582 INFO 
> org.apache.ranger.services.hbase.client.HBaseClient: getHBaseStatus: setting 
> config values from client
> 2017-06-22 08:14:44,582 INFO 
> org.apache.ranger.services.hbase.client.HBaseClient: getHBaseStatus: checking 
> HbaseAvailability with the new config
> 2017-06-22 08:14:44,923 WARN org.apache.zookeeper.ClientCnxn: SASL 
> configuration failed: javax.security.auth.login.LoginException: No JAAS 
> configuration section named 'Client' was found in specified JAAS 
> configuration file: '/dev/null'. Will continue connection to Zookeeper server 
> without SASL authentication, if Zookeeper server allows it.
> 2017-06-22 08:14:45,033 ERROR 
> org.apache.ranger.services.hbase.client.HBaseClient: getHBaseStatus: Unable 
> to check availability of Hbase environment [hbasedev].java.io.IOException: 
> java.lang.reflect.InvocationTargetException
> 2017-06-22 08:14:45,033 ERROR 
> org.apache.ranger.services.hbase.client.HBaseClient: <== 
> HBaseClient.testConnection(): Unable to retrieve any databases using given 
> parameters
> org.apache.ranger.plugin.client.HadoopException: getHBaseStatus: Unable to 
> check availability of Hbase environment [hbasedev].
>       at 
> org.apache.ranger.services.hbase.client.HBaseClient$1.run(HBaseClient.java:175)
>       at 
> org.apache.ranger.services.hbase.client.HBaseClient$1.run(HBaseClient.java:128)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:356)
>       at 
> org.apache.ranger.services.hbase.client.HBaseClient.getHBaseStatus(HBaseClient.java:128)
>       at 
> org.apache.ranger.services.hbase.client.HBaseClient.connectionTest(HBaseClient.java:100)
>       at 
> org.apache.ranger.services.hbase.client.HBaseResourceMgr.connectionTest(HBaseResourceMgr.java:47)
>       at 
> org.apache.ranger.services.hbase.RangerServiceHBase.validateConfig(RangerServiceHBase.java:59)
>       at 
> org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:560)
>       at 
> org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:547)
>       at 
> org.apache.ranger.biz.ServiceMgr$TimedCallable.call(ServiceMgr.java:508)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: java.lang.reflect.InvocationTargetException
>       at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
>       at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
>       at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
>       at 
> org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2916)
>       at 
> org.apache.ranger.services.hbase.client.HBaseClient$1.run(HBaseClient.java:138)
>       ... 14 more
> Caused by: java.lang.reflect.InvocationTargetException
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>       at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>       at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
>       ... 18 more
> Caused by: java.lang.ExceptionInInitializerError
>       at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
>       at 
> org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
>       at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
>       at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:880)
>       at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:636)
>       ... 23 more
> Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> nameservice
>       at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
>       at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:320)
>       at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:692)
>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:633)
>       at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
>       at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2694)
>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
>       at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2728)
>       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2710)
>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:384)
>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:178)
>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
>       at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
>       at 
> org.apache.hadoop.hbase.util.DynamicClassLoader.initTempDir(DynamicClassLoader.java:120)
>       at 
> org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:98)
>       at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:241)
>       ... 28 more
> Caused by: java.net.UnknownHostException: nameservice
>       ... 45 more
> 2017-06-22 08:14:45,034 ERROR 
> org.apache.ranger.services.hbase.client.HBaseResourceMgr: <== 
> HBaseResourceMgr.connectionTest() Error: 
> org.apache.ranger.plugin.client.HadoopException: getHBaseStatus: Unable to 
> check availability of Hbase environment [hbasedev].
> 2017-06-22 08:14:45,034 ERROR 
> org.apache.ranger.services.hbase.RangerServiceHBase: <== 
> RangerServiceHBase.validateConfig() 
> Error:org.apache.ranger.plugin.client.HadoopException: getHBaseStatus: Unable 
> to check availability of Hbase environment [hbasedev].
> 2017-06-22 08:14:45,034 ERROR org.apache.ranger.biz.ServiceMgr: 
> TimedCallable.call: Error:org.apache.ranger.plugin.client.HadoopException: 
> getHBaseStatus: Unable to check availability of Hbase environment [hbasedev].
> 2017-06-22 08:14:45,035 ERROR org.apache.ranger.biz.ServiceMgr: ==> 
> ServiceMgr.validateConfig 
> Error:org.apache.ranger.plugin.client.HadoopException: 
> org.apache.ranger.plugin.client.HadoopException: getHBaseStatus: Unable to 
> check availability of Hbase environment [hbasedev].
> {code}
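> 
> The root cause at the bottom of the trace, java.net.UnknownHostException: 
> nameservice, means the HDFS client inside the HBase connection treated the 
> logical HA nameservice name as a plain hostname. That happens when the HA 
> mapping that resolves the logical name, which normally lives in hdfs-site.xml, 
> is not visible to ranger-admin. As a hedged illustration (the nameservice id and 
> hostnames below are examples, not taken from this cluster), an HA-enabled 
> hdfs-site.xml defines the logical name roughly like this:
> {code}
> <!-- Illustrative HDFS HA settings; when these are not on the client classpath,
>      the logical name "nameservice" cannot be resolved and the test connection
>      fails as in the log above. -->
>       <property>
>               <name>dfs.nameservices</name>
>               <value>nameservice</value>
>       </property>
>       <property>
>               <name>dfs.ha.namenodes.nameservice</name>
>               <value>nn1,nn2</value>
>       </property>
>       <property>
>               <name>dfs.namenode.rpc-address.nameservice.nn1</name>
>               <value>namenode1.example.com:8020</value>
>       </property>
>       <property>
>               <name>dfs.namenode.rpc-address.nameservice.nn2</name>
>               <value>namenode2.example.com:8020</value>
>       </property>
>       <property>
>               <name>dfs.client.failover.proxy.provider.nameservice</name>
>               <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
>       </property>
> {code}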


