[
https://issues.apache.org/jira/browse/HBASE-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13175038#comment-13175038
]
stack commented on HBASE-5081:
------------------------------
@Jimmy Another tack would be ensuring splitLogDistributed has cleaned up after
itself before it returns, including cleaning up on an early-out because of an
exception. It seems we rerun the split only if we early-out because
OrphanHLogAfterSplitException is thrown (Is this what happened in your
scenario? You say three log splits failed? Was it because a new log file
showed up, i.e. OrphanHLogAfterSplitException? Or for some other reason? If
for some other reason, the split should have failed?). I'd think that if a new
file shows up while we are splitting, it's fine to redo the split, but I'd also
think that splitLogDistributed would make sure it had cleaned up after itself
before it returned... that it had completed the batch it had been asked to do.
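Something along these lines, as a rough sketch only (TaskBatch and the helper
methods are simplified stand-ins, not the actual SplitLogManager API):
{code:java}
import java.io.IOException;
import java.util.List;

// Rough sketch only -- not the real SplitLogManager. The point is that
// splitLogDistributed should not return, normally or via an exception,
// while tasks it installed are still lying around for a retry to trip over.
abstract class SplitLogCleanupSketch {
  static class TaskBatch { long installed; long done; long error; }

  public long splitLogDistributed(List<String> logDirs) throws IOException {
    TaskBatch batch = new TaskBatch();
    try {
      for (String logDir : logDirs) {
        // creates the task znode and records it in the internal tasks map
        enqueueSplitTask(logDir, batch);
      }
      waitForSplittingCompletion(batch);
    } finally {
      // Clean up whatever this invocation installed, even on an early-out
      // (e.g. OrphanHLogAfterSplitException), so a retry by
      // ServerShutdownHandler starts from a clean slate.
      deleteAllTasksForBatch(batch);
    }
    return batch.installed;
  }

  abstract void enqueueSplitTask(String logDir, TaskBatch batch) throws IOException;
  abstract void waitForSplittingCompletion(TaskBatch batch);
  abstract void deleteAllTasksForBatch(TaskBatch batch);
}
{code}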
I was waiting on this issue to be done before cutting the RC, but after looking
at the pieces, I think that while this is an important issue, it is rare, so I
won't hold up the RC.
Good stuff.
> Distributed log splitting deleteNode races against splitLog retry
> -------------------------------------------------------------------
>
> Key: HBASE-5081
> URL: https://issues.apache.org/jira/browse/HBASE-5081
> Project: HBase
> Issue Type: Bug
> Components: wal
> Affects Versions: 0.92.0, 0.94.0
> Reporter: Jimmy Xiang
> Assignee: Jimmy Xiang
> Fix For: 0.92.0
>
> Attachments: distributed-log-splitting-screenshot.png,
> hbase-5081-patch-v6.txt, hbase-5081-patch-v7.txt,
> hbase-5081_patch_for_92_v4.txt, hbase-5081_patch_v5.txt, patch_for_92.txt,
> patch_for_92_v2.txt, patch_for_92_v3.txt
>
>
> Recently, during 0.92 RC testing, we found distributed log splitting hangs
> forever. Please see the attached screenshot.
> I looked into it, and here is what I think happened:
> 1. One RS died; the ServerShutdownHandler found out and started the
> distributed log splitting;
> 2. All three tasks failed, so the three tasks were deleted, asynchronously;
> 3. The ServerShutdownHandler retried the log splitting;
> 4. During the retry, it created these three tasks again and put them in a
> hashmap (tasks);
> 5. The asynchronous deletion from step 2 finally happened for one task; in
> the callback, it removed that task from the hashmap;
> 6. The ZooKeeper watcher for one of the newly submitted tasks found that the
> task is unassigned and is not in the hashmap, so it created a new orphan task;
> 7. All three tasks failed, but the task created in step 6 is an orphan, so
> the batch.err counter was one short, and the log splitting hangs forever,
> waiting for the last task to finish, which is never going to happen (the race
> is sketched below).
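> Roughly, the race looks like this (a simplified sketch with illustrative
> names, not the real SplitLogManager internals; only the shared task map and
> the async ZooKeeper delete are shown):
> {code:java}
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ConcurrentMap;
> import org.apache.zookeeper.AsyncCallback;
> import org.apache.zookeeper.ZooKeeper;
>
> // Simplified illustration of the race; class and field names are illustrative.
> class SplitTaskRaceSketch {
>   static class Task { /* per-task bookkeeping */ }
>
>   private final ZooKeeper zk;
>   // Maps task znode path -> in-memory Task of the *current* attempt.
>   private final ConcurrentMap<String, Task> tasks =
>       new ConcurrentHashMap<String, Task>();
>
>   SplitTaskRaceSketch(ZooKeeper zk) { this.zk = zk; }
>
>   // Step 2: failed tasks are deleted asynchronously; the callback fires later.
>   void deleteNodeAsync(String path) {
>     zk.delete(path, -1, new AsyncCallback.VoidCallback() {
>       public void processResult(int rc, String p, Object ctx) {
>         // Step 5: by the time this runs, the retry (steps 3-4) may already
>         // have re-created the same znode and registered a NEW Task under this
>         // key, so this remove() throws away the retry's task, not the old one.
>         tasks.remove(p);
>       }
>     }, null);
>   }
>   // Step 6: the watcher on the re-created znode then finds no entry in tasks,
>   // treats it as an orphan, and batch.err is never bumped for it (step 7).
> }
> {code}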
> So I think the problem is step 2. The fix is to make the deletion synchronous
> instead of asynchronous, so that the retry has a clean start.
> An async deleteNode messes up the split log retry: in the extreme case, if the
> async deleteNode doesn't happen soon enough, a node created during the retry
> could be deleted. deleteNode should be synchronous (a sketch of that direction
> follows).
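> A minimal sketch of a synchronous deleteNode, assuming the task znode path is
> known; this shows only the direction of the fix, not the attached patch:
> {code:java}
> import org.apache.zookeeper.KeeperException;
> import org.apache.zookeeper.ZooKeeper;
>
> // Sketch only: delete the task znode synchronously so that a subsequent
> // splitLog retry starts from a clean slate.
> class SyncDeleteSketch {
>   private final ZooKeeper zk;
>
>   SyncDeleteSketch(ZooKeeper zk) { this.zk = zk; }
>
>   void deleteNodeSync(String path) throws InterruptedException {
>     try {
>       zk.delete(path, -1);  // blocks until the znode is actually gone
>     } catch (KeeperException.NoNodeException e) {
>       // already deleted -- fine for our purposes
>     } catch (KeeperException e) {
>       throw new RuntimeException("failed to delete " + path, e);
>     }
>   }
> }
> {code}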