[ https://issues.apache.org/jira/browse/HBASE-26715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17502650#comment-17502650 ]

Bryan Beaudreault edited comment on HBASE-26715 at 3/9/22, 3:01 AM:
--------------------------------------------------------------------

I wonder if we could re-use hbase.regionserver.logroll.wait.timeout.ms, which was 
added in HBASE-26552. While that Jira used it for a different purpose, I think 
the intent still applies: it is a maximum time for a log roll to take. It just 
so happens that they tackled it from the perspective of bounding retries, but we 
also wouldn't want a single attempt to take that long either. Thoughts? 
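
Something like the following, as a rough sketch only (the class, method, and 
default value here are placeholders, and a plain CompletableFuture stands in for 
the async writer future):
{code:java}
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedLogRollSketch {

  // Placeholder default; in a real patch this would come from the HBase conf,
  // e.g. conf.getLong("hbase.regionserver.logroll.wait.timeout.ms", <default>).
  private static final long ROLL_WAIT_TIMEOUT_MS = 30_000L;

  // Bound a single roll attempt instead of waiting on it forever.
  static void waitForRollAttempt(CompletableFuture<Void> rollAttempt) throws Exception {
    try {
      rollAttempt.get(ROLL_WAIT_TIMEOUT_MS, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      throw new IOException(
          "WAL roll attempt did not complete within " + ROLL_WAIT_TIMEOUT_MS + " ms", e);
    }
  }
}
{code}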

Edit: fixed referenced Jira


was (Author: bbeaudreault):
I wonder if we could re-use hbase.regionserver.logroll.wait.timeout.ms, which was 
added in HBASE-26679. While that Jira used it for a different purpose, I think 
the intent still applies: it is a maximum time for a log roll to take. It just 
so happens that they tackled it from the perspective of bounding retries, but we 
also wouldn't want a single attempt to take that long either. Thoughts?

> Blocked on SyncFuture in AsyncProtobufLogWriter#write
> -----------------------------------------------------
>
>                 Key: HBASE-26715
>                 URL: https://issues.apache.org/jira/browse/HBASE-26715
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Bryan Beaudreault
>            Assignee: Andrew Kyle Purtell
>            Priority: Major
>              Labels: WAL
>             Fix For: 2.5.0, 2.6.0, 3.0.0-alpha-3, 2.4.11
>
>
> I ran into an issue on HBase 2.4.6 that I think is related to HBASE-26679. 
> Individual writes block on a SyncFuture that never gets completed. Eventually 
> (after 5 minutes) the writes time out and fail, but the regionserver stayed 
> hung like this indefinitely until I killed it about 14 hours later. While 
> HBASE-26679 may fix the underlying hang, I think we should have additional 
> protection against such zombie states. In this case I believe a WAL roll was 
> requested due to the failed appends, but it also hung forever. See the stack 
> trace below:
>  
> {code:java}
> Thread 240 (regionserver/host:60020.logRoller):
>   State: WAITING
>   Blocked count: 38
>   Waited count: 293
>   Waiting on java.util.concurrent.CompletableFuture$Signaller@13342c6d
>   Stack:
>     [email protected]/jdk.internal.misc.Unsafe.park(Native Method)
>     [email protected]/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
>     [email protected]/java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1796)
>     [email protected]/java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3128)
>     [email protected]/java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1823)
>     [email protected]/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1998)
>     app//org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.write(AsyncProtobufLogWriter.java:189)
>     app//org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.writeMagicAndWALHeader(AsyncProtobufLogWriter.java:202)
>     app//org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:170)
>     app//org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:113)
>     app//org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:669)
>     app//org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:130)
>     app//org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:841)
>     app//org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:268)
>     app//org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:187)
>  {code}
>  
> The WAL roller thread was stuck on this wait seemingly forever, so it was 
> never able to roll the WAL and get writes working again. I think we should 
> add a timeout here, and abort the regionserver if a WAL cannot be rolled in a 
> timely manner.
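> 
> For illustration only (not the actual HBase code paths; the helper and the 
> abort hook here are hypothetical), the unbounded wait in the trace above is a 
> plain CompletableFuture.get(); a bounded variant that aborts instead of 
> hanging forever could look roughly like:
> {code:java}
> import java.util.concurrent.CompletableFuture;
> import java.util.concurrent.TimeUnit;
> import java.util.concurrent.TimeoutException;
>
> class WalRollAbortSketch {
>
>   // Placeholder bound on how long a roll is allowed to take.
>   static final long ROLL_TIMEOUT_MS = 300_000L;
>
>   // Wait for the writer future with a deadline; abort rather than hang forever.
>   static void waitOrAbort(CompletableFuture<Long> writeFuture, Runnable abortRegionServer)
>       throws Exception {
>     try {
>       writeFuture.get(ROLL_TIMEOUT_MS, TimeUnit.MILLISECONDS);
>     } catch (TimeoutException e) {
>       abortRegionServer.run();
>       throw e;
>     }
>   }
> }
> {code}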



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
