[ https://issues.apache.org/jira/browse/HBASE-21564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16712290#comment-16712290 ]

Sergey Shelukhin commented on HBASE-21564:
------------------------------------------

[~stack] do you remember why one WAL reaching its target size causes all WALs to 
roll (in the normal non-multi-WAL case, only the meta WAL is additionally affected)? 
See the LogRoller walNeedsToRoll map before this patch - in the normal case, the 
value is set to true for a particular WAL when a roll is requested based on size, 
but when actually rolling WALs in run() it's not used as a filter, merely as the 
value of the "force" flag, and all WALs are rolled. It seems like an arbitrary 
thing to do, esp. when using multi-WAL.
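To make the distinction concrete, here is a minimal hypothetical sketch (the class and field names below model the behavior described, not HBase's actual LogRoller code) of using a per-WAL flag map as a mere "force" value versus as a filter:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model: a map from WAL name to "needs roll because of size".
class RollAllDemo {
    final Map<String, Boolean> walNeedsRoll = new HashMap<>();

    // Pre-patch behavior as described: the flag only feeds the "force"
    // argument, so every registered WAL is rolled on each pass.
    int rollUsingFlagAsForce() {
        int rolled = 0;
        for (Map.Entry<String, Boolean> e : walNeedsRoll.entrySet()) {
            boolean force = e.getValue(); // flag only toggles "force"
            // wal.roll(force) would happen here regardless of the flag
            rolled++;
        }
        return rolled; // all WALs, flagged or not
    }

    // Alternative: treat the flag as a filter and roll only flagged WALs,
    // which is what one would expect with multi-WAL.
    int rollUsingFlagAsFilter() {
        int rolled = 0;
        for (Map.Entry<String, Boolean> e : walNeedsRoll.entrySet()) {
            if (!e.getValue()) continue; // skip WALs that didn't request it
            rolled++;
        }
        return rolled; // only the WAL(s) that hit the size threshold
    }
}
```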

> race condition in WAL rolling resulting in size-based rolling getting stuck
> ---------------------------------------------------------------------------
>
>                 Key: HBASE-21564
>                 URL: https://issues.apache.org/jira/browse/HBASE-21564
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Sergey Shelukhin
>            Assignee: Sergey Shelukhin
>            Priority: Major
>         Attachments: HBASE-21564.master.001.patch
>
>
> Manifests at least with AsyncFsWriter.
> There's a window after LogRoller replaces the writer in the WAL, but before 
> it sets the rollLog boolean to false in the finally, where the WAL class can 
> request another log roll (it can happen in particular when the logs are 
> getting archived in the LogRoller thread, and there's high write volume 
> causing the logs to roll quickly).
> LogRoller will blindly reset the rollLog flag in finally and "forget" about 
> this request.
> AsyncWAL in turn never requests it again because its own rollRequested field 
> is set and it expects a callback. Logs don't get rolled until a periodic roll 
> is triggered after that.
> The acknowledgment of roll requests by LogRoller should be atomic.
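The lost-request window and an atomic acknowledgment can be sketched in a single-threaded model (all names below are hypothetical stand-ins for the LogRoller/WAL handshake, not the actual HBase fields; the concurrent request is simulated inline at the point in the window where it would land):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal model of the roll-request flag shared by the WAL (producer)
// and the LogRoller (consumer).
class RollFlagDemo {
    final AtomicBoolean rollLog = new AtomicBoolean(false);

    void requestRoll() { rollLog.set(true); }

    // Buggy pattern: roll the writer, then blindly reset the flag in
    // finally. A request arriving in between is silently dropped.
    boolean rollThenBlindReset() {
        try {
            // ... writer is replaced here ...
            requestRoll();          // concurrent request lands in the window
        } finally {
            rollLog.set(false);     // blind reset "forgets" that request
        }
        return rollLog.get();       // false: size-based rolling is stuck
    }

    // Atomic acknowledgment: consume the request *before* rolling, so any
    // request arriving during the roll stays pending for the next pass.
    boolean consumeThenRoll() {
        boolean pending = rollLog.getAndSet(false); // atomically ack
        // ... writer is replaced here (if pending) ...
        requestRoll();              // concurrent request lands during the roll
        return rollLog.get();       // true: request survives
    }
}
```

In the buggy variant the WAL side never re-requests the roll (its own rollRequested field stays set), so nothing happens until a periodic roll fires; consuming the flag atomically up front closes that window.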



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
