[ https://issues.apache.org/jira/browse/HBASE-18955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195222#comment-16195222 ]

Josh Elser commented on HBASE-18955:
------------------------------------

The problem is definitely that every time we try to read from hbase:meta, we 
pull the region location out of the cache without ever forcing a refresh of it. 
I've been able to verify this with a remote debugger attached to the Master.

{noformat}
2017-10-06 16:05:16,823 INFO  [MASTER_SERVER_OPERATIONS-hw10447:16000-1] handler.ServerShutdownHandler: Received exception accessing hbase:meta during server shutdown of hw10447.local,16202,1507317713261, retrying hbase:meta read
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=351, exceptions:
Fri Oct 06 16:05:16 EDT 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60105: row '' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hw10447.local,16202,1507317713261, seqNum=0
        at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:210)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
        at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
        at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
        at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
        at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
        at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:799)
        at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
        at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:156)
        at org.apache.hadoop.hbase.MetaTableAccessor.getServerUserRegions(MetaTableAccessor.java:555)
        at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:179)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60105: row '' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hw10447.local,16202,1507317713261, seqNum=0
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)
        at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
        ... 3 more
Caused by: java.io.IOException: Call to hw10447.local/10.200.31.19:16202 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=78, waitTime=60005, operationTimeout=60000 expired.
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:284)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1238)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:328)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32831)
        at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:379)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:201)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:63)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
        ... 4 more
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=78, waitTime=60005, operationTimeout=60000 expired.
        at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1212)
        ... 14 more
{noformat}
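
For reference, here is a minimal sketch, in the spirit of a fix rather than an actual patch, of the kind of cache invalidation that appears to be missing on this retry path. {{retryMetaRead}} and {{MetaRead}} are made-up names standing in for the meta read that ServerShutdownHandler performs; {{clearRegionCache(TableName)}} is the existing invalidation hook on {{ClusterConnection}}/{{HConnection}} in branch-1.

{noformat}
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ClusterConnection;

public class MetaRetrySketch {

  /** Placeholder for the meta read performed by ServerShutdownHandler. */
  interface MetaRead<T> {
    T call(ClusterConnection conn) throws IOException;
  }

  /** Retry a meta read, dropping the cached meta location between attempts. */
  static <T> T retryMetaRead(ClusterConnection conn, MetaRead<T> read, int maxAttempts)
      throws IOException, InterruptedException {
    IOException last = null;
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
      try {
        return read.call(conn);
      } catch (IOException ioe) {
        last = ioe;
        // Without this, every retry goes back to the same stale hbase:meta
        // location that the SIGSTOP'ed RegionServer can no longer serve.
        conn.clearRegionCache(TableName.META_TABLE_NAME);
        Thread.sleep(1000L);
      }
    }
    throw last;
  }
}
{noformat}

The point is only that the retry loop needs to invalidate (or re-look-up) the cached meta location before retrying, rather than hammering the old server until the retries are exhausted.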

> HBase client queries stale hbase:meta location with half-dead RegionServer
> --------------------------------------------------------------------------
>
>                 Key: HBASE-18955
>                 URL: https://issues.apache.org/jira/browse/HBASE-18955
>             Project: HBase
>          Issue Type: Bug
>          Components: Client
>    Affects Versions: 1.1.12
>            Reporter: Josh Elser
>            Assignee: Josh Elser
>            Priority: Critical
>             Fix For: 1.1.13
>
>
> Have been investigating a case with [~tedyu] where, when a RegionServer 
> becomes "hung" (for no specific reason -- not the point), the client becomes 
> stuck trying to talk to this RegionServer, never exiting. This was eventually 
> tracked down to HBASE-15645. However, in testing the fix, I found that there 
> is an additional problem which only affects branch-1.1.
> When the RegionServer in the "half-dead" state is also hosting meta, both the 
> HBase client trying to read data and the client in the Master trying to read 
> meta in the ServerShutdownHandler get stuck repeatedly trying to read meta 
> from the old location after meta has been reassigned.
> The general test outline goes like this:
> * Start at least 2 regionservers
> * Load some data into a table ({{hbase pe}} is great)
> * Find a region that is hosted by the same RS that is hosting meta
> * {{kill -SIGSTOP}} that RS hosting the user region and meta
> * Issue a {{get}} in the hbase-shell trying to read from that user region
> The expectation is that the ZK session for the STOP'ed RS will expire, meta will 
> be reassigned, then the user regions will be reassigned, and the client will 
> get the result of the get without seeing an error (as long as all of this happens 
> within the hbase.client.operation.timeout value, of course); a client-side 
> sketch of this expectation follows the description below.
> We see the expected behavior on HBase 1.2.4 and 1.3.2-SNAPSHOT, but on 
> 1.1.13-SNAPSHOT, the Master gets as far as re-assigning meta and then gets stuck 
> trying to read meta from the STOP'ed RS instead of from where it re-assigned it. 
> Because of this, the regions stay in transition until the Master is restarted 
> or the STOP'ed RS is CONT'ed. My best guess is that when the RS sees the 
> {{SIGCONT}}, it immediately begins stopping, which is enough to kick the 
> client into refreshing the region location cache.
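
To make the expectation above concrete, here is a minimal client-side sketch of the {{get}} that should eventually succeed once meta and the user region are reassigned. This is illustrative only and not part of the issue: the table name, row key, and timeout value are placeholders (the table name simply follows the {{hbase pe}} default).

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class StaleMetaRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Give the get enough budget to ride over the ZK session expiry plus the
    // meta and user-region reassignment (value chosen only for illustration).
    conf.setInt(HConstants.HBASE_CLIENT_OPERATION_TIMEOUT, 10 * 60 * 1000);

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("TestTable"))) {
      // Issued while the RS hosting both meta and this row is SIGSTOP'ed.
      // On 1.2.4 / 1.3.2-SNAPSHOT this returns once regions are reassigned;
      // on 1.1.13-SNAPSHOT it hangs against the stale meta location.
      Result result = table.get(new Get(Bytes.toBytes("row-0000000001")));
      System.out.println("Got: " + result);
    }
  }
}
{noformat}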


