Hi All,
I've written a Storm topology with an HDFS and an HBase bolt, which runs
absolutely fine with Kerberos turned off in my Hadoop cluster.
With security enabled in the cluster, the HDFS bolt works like a charm, but
I run into authentication errors with the HBase bolt.
Accessing the HBase shell with a Kerberos ticket also works absolutely fine.
The Java exception in the Storm topology log is as follows:
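For context, the HBase bolt essentially just builds a Put and calls HTable.put()
in execute() (that is the HTable.put frame in the trace below). Here is a minimal
sketch of that write path, with placeholder table/column/field names rather than
my exact code:

import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

// Sketch of the bolt's HBase write path; names are placeholders.
public class HBaseWriteBoltSketch extends BaseRichBolt {

    private OutputCollector collector;
    private transient HConnection connection;
    private transient HTableInterface table;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        try {
            // Picks up hbase-site.xml from the worker classpath;
            // no explicit Kerberos/keytab login happens in this sketch.
            Configuration hbaseConf = HBaseConfiguration.create();
            connection = HConnectionManager.createConnection(hbaseConf);
            table = connection.getTable("events");
        } catch (Exception e) {
            throw new RuntimeException("Could not open HBase table", e);
        }
    }

    @Override
    public void execute(Tuple tuple) {
        try {
            Put put = new Put(Bytes.toBytes(tuple.getStringByField("rowkey")));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("payload"),
                    Bytes.toBytes(tuple.getStringByField("payload")));
            table.put(put);   // this is where the SASL/GSS error surfaces
            collector.ack(tuple);
        } catch (Exception e) {
            collector.reportError(e);
            collector.fail(tuple);
        }
    }

    @Override
    public void cleanup() {
        try {
            if (table != null) table.close();
            if (connection != null) connection.close();
        } catch (Exception e) {
            // ignore on shutdown
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // terminal bolt, no output fields
    }
}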
2015-03-15 12:21:11 o.a.h.h.i.RpcClient [WARN] Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSExcept$
2015-03-15 12:21:11 o.a.h.h.i.RpcClient [ERROR] SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212) ~[na:1.7.0_45]
    at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:177) ~[stormjar.jar:na]
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupSaslConnection(RpcClient.java:815) [stormjar.jar:na]
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.access$800(RpcClient.java:349) [stormjar.jar:na]
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:943) ~[stormjar.jar:na]
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:940) ~[stormjar.jar:na]
    at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_45]
    at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_45]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) ~[hadoop-common-2.6.0.2.2.0.0-2041.jar:na]
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:940) [stormjar.jar:na]
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1094) [stormjar.jar:na]
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.tracedWriteRequest(RpcClient.java:1061) [stormjar.jar:na]
    at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1516) [stormjar.jar:na]
    at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1724) [stormjar.jar:na]
    at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1777) [stormjar.jar:na]
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:30373) [stormjar.jar:na]
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1604) [stormjar.jar:na]
    at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:768) [stormjar.jar:na]
    at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:766) [stormjar.jar:na]
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126) [stormjar.jar:na]
    at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:772) [stormjar.jar:na]
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:160) [stormjar.jar:na]
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.prefetchRegionCache(ConnectionManager.java:1222) [stormjar.jar:na]
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1286) [stormjar.jar:na]
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1135) [stormjar.jar:na]
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:362) [stormjar.jar:na]
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:313) [stormjar.jar:na]
    at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:1066) [stormjar.jar:na]
    at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1344) [stormjar.jar:na]
    at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1001) [stormjar.jar:na]
    at com.mhp.bigdata.storm.gt6.MHPHBaseBolt2.execute(MHPHBaseBolt2.java:159) [stormjar.jar:na]
    at backtype.storm.daemon.executor$fn__5697$tuple_action_fn__5699.invoke(executor.clj:659) [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
    at backtype.storm.daemon.executor$mk_task_receiver$fn__5620.invoke(executor.clj:415) [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
    at backtype.storm.disruptor$clojure_handler$reify__1741.onEvent(disruptor.clj:58) [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
    at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:120) [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
    at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
    at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
    at backtype.storm.daemon.executor$fn__5697$fn__5710$fn__5761.invoke(executor.clj:794) [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
    at backtype.storm.util$async_loop$fn__452.invoke(util.clj:465) [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
    at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
    at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
    at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147) ~[na:1.7.0_45]
    at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121) ~[na:1.7.0_45]
    at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187) ~[na:1.7.0_45]
    at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223) ~[na:1.7.0_45]
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212) ~[na:1.7.0_45]
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179) ~[na:1.7.0_45]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193) ~[na:1.7.0_45]
    ... 40 common frames omitted
My Storm version is 0.9.3.2.2.0.0 and my HBase version is 0.98.4.2.2.0.0.
The Java JCE unlimited-strength policy files are installed, and Kerberos works
fine for all other components of the cluster.
Is this a known bug? Or is there any way to trace this issue down to its roots?
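To help narrow it down myself, I was thinking of dumping the worker's security
context from the bolt's prepare() method, roughly like this (just a sketch of the
idea, not code that is in the topology today):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch: print the security context the Storm worker actually runs with,
// to see whether the HBase client has any Kerberos credentials at all.
public class SecurityContextDump {
    public static void dump() throws IOException {
        Configuration hbaseConf = HBaseConfiguration.create();
        System.out.println("hbase.security.authentication = "
                + hbaseConf.get("hbase.security.authentication"));
        System.out.println("UGI security enabled = "
                + UserGroupInformation.isSecurityEnabled());
        UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
        System.out.println("current user = " + ugi);
        System.out.println("has Kerberos credentials = "
                + ugi.hasKerberosCredentials());
    }
}

I could also add -Dsun.security.krb5.debug=true to worker.childopts in storm.yaml
to get more detail from the JGSS layer, if that is the right knob.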
I'm looking forward to your responses.
Best regards
Frank