[ https://issues.apache.org/jira/browse/HBASE-23739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17139052#comment-17139052 ]

Michael Stack commented on HBASE-23739:
---------------------------------------

I ran into this issue. We try 18 times to get the table descriptor from the Master
before we time out, about 3 minutes in all. We finally fail w/ this (something
interrupts us):

{code}
2020-06-17 21:20:37,469 WARN  [split-log-closeStream--pool6-t1] wal.BoundedRecoveredHFilesOutputSink: Failed to get table descriptor for hbase:meta
java.io.InterruptedIOException: Interrupted after 18 tries while maxAttempts=46
  at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:173)
  at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3007)
  at org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:550)
  at org.apache.hadoop.hbase.client.HBaseAdmin.getDescriptor(HBaseAdmin.java:365)
  at org.apache.hadoop.hbase.wal.BoundedRecoveredHFilesOutputSink.getTableDescriptor(BoundedRecoveredHFilesOutputSink.java:227)
  at org.apache.hadoop.hbase.wal.BoundedRecoveredHFilesOutputSink.lambda$createRecoveredHFileWriter$4(BoundedRecoveredHFilesOutputSink.java:203)
  at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
  at org.apache.hadoop.hbase.wal.BoundedRecoveredHFilesOutputSink.createRecoveredHFileWriter(BoundedRecoveredHFilesOutputSink.java:203)
  at org.apache.hadoop.hbase.wal.BoundedRecoveredHFilesOutputSink.append(BoundedRecoveredHFilesOutputSink.java:109)
  at org.apache.hadoop.hbase.wal.BoundedRecoveredHFilesOutputSink.lambda$writeRemainingEntryBuffers$3(BoundedRecoveredHFilesOutputSink.java:149)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
...
{code}
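For context, converting an interrupt into an {{InterruptedIOException}} mid-backoff is the usual retry-loop pattern. Here is a minimal, self-contained sketch of that pattern (my simplification, not the actual {{RpcRetryingCallerImpl}} code): each failed attempt sleeps before retrying, and an interrupt delivered during the sleep surfaces as an exception shaped like the one in the log above.

{code}
import java.io.InterruptedIOException;
import java.util.concurrent.Callable;

public class RetrySketch {
  // Simplified retry loop in the spirit of RpcRetryingCallerImpl.callWithRetries.
  static <T> T callWithRetries(Callable<T> call, int maxAttempts, long pauseMs)
      throws Exception {
    for (int tries = 1; tries <= maxAttempts; tries++) {
      try {
        return call.call();
      } catch (Exception e) {
        if (tries == maxAttempts) throw e; // out of attempts
      }
      try {
        Thread.sleep(pauseMs); // back off before the next attempt
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt(); // preserve the interrupt flag
        // Same shape as the message in the log above.
        throw new InterruptedIOException(
            "Interrupted after " + tries + " tries while maxAttempts=" + maxAttempts);
      }
    }
    throw new IllegalStateException("unreachable");
  }

  public static void main(String[] args) throws Exception {
    Thread caller = new Thread(() -> {
      try {
        callWithRetries(() -> { throw new RuntimeException("Master not up"); }, 46, 1000);
      } catch (InterruptedIOException e) {
        System.out.println(e.getMessage());
      } catch (Exception other) {
        // not reached in this demo
      }
    });
    caller.start();
    Thread.sleep(100);  // let the caller get into its retry sleep
    caller.interrupt(); // simulate the split being cancelled
    caller.join();
  }
}
{code}

Note the pattern never loses the interrupt: it re-sets the flag before throwing, which is how the caller further up the stack gets to see it too.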

... which seems to interrupt the split worker that was running on the
regionserver:

{code}
2020-06-17 21:20:37,469 WARN  [RS_LOG_REPLAY_OPS-regionserver/localhost:16020-0] regionserver.SplitLogWorker: Resigning, interrupted splitting of WAL file:/Users/stack/checkouts/hbase.apache.git/tmp/hbase/WALs/localhost,16020,1592440848604-splitting/localhost%2C16020%2C1592440848604.meta.1592440852959.meta
java.io.InterruptedIOException
  at org.apache.hadoop.hbase.wal.BoundedRecoveredHFilesOutputSink.writeRemainingEntryBuffers(BoundedRecoveredHFilesOutputSink.java:163)
  at org.apache.hadoop.hbase.wal.BoundedRecoveredHFilesOutputSink.close(BoundedRecoveredHFilesOutputSink.java:136)
  at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:391)
  at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:215)
  at org.apache.hadoop.hbase.regionserver.SplitLogWorker.splitLog(SplitLogWorker.java:102)
  at org.apache.hadoop.hbase.regionserver.SplitWALCallable.splitWal(SplitWALCallable.java:104)
  at org.apache.hadoop.hbase.regionserver.SplitWALCallable.call(SplitWALCallable.java:86)
  at org.apache.hadoop.hbase.regionserver.SplitWALCallable.call(SplitWALCallable.java:49)
  at org.apache.hadoop.hbase.regionserver.handler.RSProcedureHandler.process(RSProcedureHandler.java:49)
  at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
  at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
  at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
  at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
  at java.util.concurrent.ExecutorCompletionService.take(ExecutorCompletionService.java:193)
  at org.apache.hadoop.hbase.wal.BoundedRecoveredHFilesOutputSink.writeRemainingEntryBuffers(BoundedRecoveredHFilesOutputSink.java:156)
  ... 12 more
{code}
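The shape of that close path — blocking in {{ExecutorCompletionService.take()}} while waiting on the writer futures, then rethrowing the {{InterruptedException}} as an {{InterruptedIOException}} — reproduces standalone. A sketch (again my simplification of {{writeRemainingEntryBuffers}}, not the actual code):

{code}
import java.io.InterruptedIOException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CloseSketch {
  // Wait for all writer tasks; convert an interrupt into InterruptedIOException,
  // the same conversion seen in the "Caused by" chain above.
  static void waitForWriters(ExecutorService pool, int nTasks)
      throws InterruptedIOException {
    ExecutorCompletionService<Void> ecs = new ExecutorCompletionService<>(pool);
    for (int i = 0; i < nTasks; i++) {
      ecs.submit(() -> { Thread.sleep(10_000); return null; }); // slow "writer"
    }
    try {
      for (int i = 0; i < nTasks; i++) {
        ecs.take(); // blocks in LinkedBlockingQueue.take(), as in the trace
      }
    } catch (InterruptedException ie) {
      Thread.currentThread().interrupt();
      throw (InterruptedIOException) new InterruptedIOException().initCause(ie);
    }
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(1);
    Thread cur = Thread.currentThread();
    new Thread(() -> {
      try { Thread.sleep(50); } catch (InterruptedException e) {}
      cur.interrupt(); // simulate the worker being told to stand down
    }).start();
    try {
      waitForWriters(pool, 1);
    } catch (InterruptedIOException e) {
      System.out.println("Resigning: " + e); // the worker resigns on this
    } finally {
      pool.shutdownNow();
    }
  }
}
{code}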

The interrupt comes out here, which causes the RS to crash:
{code}
2020-06-17 21:20:37,472 ERROR [RS_LOG_REPLAY_OPS-regionserver/localhost:16020-0] handler.RSProcedureHandler: Error when call RSProcedureCallable:
java.io.IOException: Failed WAL split, status=RESIGNED, wal=file:/Users/stack/checkouts/hbase.apache.git/tmp/hbase/WALs/localhost,16020,1592440848604-splitting/localhost%2C16020%2C1592440848604.meta.1592440852959.meta
  at org.apache.hadoop.hbase.regionserver.SplitWALCallable.splitWal(SplitWALCallable.java:106)
  at org.apache.hadoop.hbase.regionserver.SplitWALCallable.call(SplitWALCallable.java:86)
  at org.apache.hadoop.hbase.regionserver.SplitWALCallable.call(SplitWALCallable.java:49)
  at org.apache.hadoop.hbase.regionserver.handler.RSProcedureHandler.process(RSProcedureHandler.java:49)
  at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
{code}

This is a standalone instance, so it kills the server. Let me make a subissue.
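Re the issue summary below ("Read from meta or filesystem?"): a direct filesystem read looks doable. Assuming the usual HBase utilities ({{FSTableDescriptors}}, {{CommonFSUtils}}; whether the eventual patch goes this route is my guess, not something decided here), something like this skips the Master RPC and its retry loop entirely. HBase classpath assumed; this is a sketch, not a committed fix:

{code}
// Hedged sketch of the "read from filesystem" option: load the descriptor for
// hbase:meta off the .tabledesc file under the table dir. No Master involved,
// so no RpcRetryingCallerImpl in the picture and nothing for the interrupt to
// cut short. 'conf' is the splitting regionserver's Configuration.
Path rootDir = CommonFSUtils.getRootDir(conf);
FileSystem fs = rootDir.getFileSystem(conf);
TableDescriptor td =
    FSTableDescriptors.getTableDescriptorFromFs(fs, rootDir, TableName.META_TABLE_NAME);
{code}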


> BoundedRecoveredHFilesOutputSink should read the table descriptor directly
> --------------------------------------------------------------------------
>
>                 Key: HBASE-23739
>                 URL: https://issues.apache.org/jira/browse/HBASE-23739
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Guanghao Zhang
>            Assignee: Guanghao Zhang
>            Priority: Major
>             Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> Read from meta or filesystem?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
