[
https://issues.apache.org/jira/browse/HBASE-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15353754#comment-15353754
]
Mujtaba Chohan commented on HBASE-16115:
----------------------------------------
{noformat}
Stack:
FATAL [ctions-1466815775283] ipc.RpcClient - SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
    at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupSaslConnection(RpcClient.java:774)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.access$600(RpcClient.java:360)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:895)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:892)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1706)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:892)
    at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1577)
    at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1476)
    at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1693)
    at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1760)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:32914)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1559)
    at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:747)
    at org.apache.hadoop.hbase.client.HTable$2.call(HTable.java:745)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:115)
    at org.apache.hadoop.hbase.client.HTable.getRowOrBefore(HTable.java:751)
    at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:144)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1261)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1323)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1179)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1136)
    at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:390)
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:335)
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:287)
    at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:1019)
    at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1395)
    at org.apache.hadoop.hbase.client.HTable.put(HTable.java:965)
    at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment$HTableWrapper.put(CoprocessorHost.java:478)
    at org.apache.phoenix.schema.stats.StatisticsWriter.commitLastStatsUpdatedTime(StatisticsWriter.java:227)
    at org.apache.phoenix.schema.stats.StatisticsWriter.newWriter(StatisticsWriter.java:83)
    at org.apache.phoenix.schema.stats.DefaultStatisticsCollector.<init>(DefaultStatisticsCollector.java:85)
    at org.apache.phoenix.schema.stats.StatisticsCollectorFactory.createStatisticsCollector(StatisticsCollectorFactory.java:51)
    at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.preCompact(UngroupedAggregateRegionObserver.java:614)
    at org.apache.hadoop.hbase.coprocessor.BaseRegionObserver.preCompact(BaseRegionObserver.java:197)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$9.call(RegionCoprocessorHost.java:584)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1670)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCompact(RegionCoprocessorHost.java:579)
    at org.apache.hadoop.hbase.regionserver.compactions.Compactor$3.run(Compactor.java:363)
    at org.apache.hadoop.hbase.regionserver.compactions.Compactor$3.run(Compactor.java:360)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1706)
    at org.apache.hadoop.hbase.regionserver.compactions.Compactor.postCreateCoprocScanner(Compactor.java:360)
    at org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:270)
    at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:64)
    at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:121)
    at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1135)
    at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1550)
    at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:502)
    at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:538)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
{noformat}
> Missing security context in RegionObserver coprocessor when a compaction is
> triggered through the UI
> ----------------------------------------------------------------------------------------------------
>
> Key: HBASE-16115
> URL: https://issues.apache.org/jira/browse/HBASE-16115
> Project: HBase
> Issue Type: Bug
> Affects Versions: 0.98.20
> Reporter: Lars Hofhansl
>
> We ran into an interesting phenomenon which can easily render a cluster
> unusable.
> We loaded some test data into a test table and forced a manual compaction
> through the UI. We have some compaction hooks implemented in a region
> observer, which writes back to another HBase table when the compaction
> finishes. We noticed that this coprocessor is not set up correctly; it
> seems the security context is missing.
> The interesting part is that this _only_ happens when the compaction is
> triggered through the UI. Automatic compactions (major or minor) and
> compactions triggered via the HBase shell (following a kinit) work fine.
> Only the UI-triggered compactions cause this issue and lead to essentially
> never-ending compactions, immovable regions, etc.
> Not sure what exactly the issue is, but I wanted to make sure I capture this.
> [~apurtell], [~ghelmling], FYI.
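For context, a minimal sketch of the kind of preCompact hook described above, with the cross-table write wrapped in the region server's login UGI so the RPC carries the server's own keytab credentials regardless of how the compaction was requested. The class, table, and column names are illustrative assumptions (in this report the actual writer is Phoenix's StatisticsWriter, invoked from UngroupedAggregateRegionObserver.preCompact per the stack trace), and the UserGroupInformation.getLoginUser().doAs() wrapper is one common workaround, not necessarily the fix for this issue.
{code:java}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.ScanType;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.security.UserGroupInformation;

/** Illustrative observer: records compaction activity in a second table. */
public class CompactionAuditObserver extends BaseRegionObserver {

  // Illustrative table/column names, not taken from this issue.
  private static final TableName AUDIT_TABLE = TableName.valueOf("COMPACTION_AUDIT");
  private static final byte[] CF = Bytes.toBytes("f");
  private static final byte[] QUAL = Bytes.toBytes("lastCompaction");

  @Override
  public InternalScanner preCompact(final ObserverContext<RegionCoprocessorEnvironment> c,
      Store store, InternalScanner scanner, ScanType scanType) throws IOException {
    final RegionCoprocessorEnvironment env = c.getEnvironment();
    final Put put = new Put(env.getRegion().getRegionName());
    put.add(CF, QUAL, Bytes.toBytes(System.currentTimeMillis()));
    try {
      // Without this wrapper the put runs in whatever security context the
      // compaction request carried; per this issue, a UI-triggered compaction
      // carries none and the RPC fails with the SASL/GSS error shown above.
      // Running as the login user makes the RPC use the server's keytab.
      UserGroupInformation.getLoginUser().doAs(new PrivilegedExceptionAction<Void>() {
        @Override
        public Void run() throws IOException {
          HTableInterface auditTable = env.getTable(AUDIT_TABLE);
          try {
            auditTable.put(put);
          } finally {
            auditTable.close();
          }
          return null;
        }
      });
    } catch (InterruptedException ie) {
      Thread.currentThread().interrupt();
      throw new IOException(ie);
    }
    return scanner;
  }
}
{code}
Whether coprocessor hooks should have to wrap their RPCs this way, or whether the UI-triggered compaction path should propagate the proper security context itself, is presumably what this issue needs to settle.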
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)