[
https://issues.apache.org/jira/browse/HBASE-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174428#comment-13174428
]
Jimmy Xiang commented on HBASE-5081:
------------------------------------
Uploaded patch v6 to the review board. The updated assertion is:
+ assertTrue(ZKUtil.checkExists(zkw, tasknode) != -1);
The original is this:
+ assertTrue(ZKUtil.checkExists(zkw, tasknode) == -1);
Without this patch, the original assertion can be satisfied when there is no race.
With this patch, we no longer delete any failed task node, so the node should still
be there, which is what the new assertion checks.
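For context, here is a minimal sketch of what the two assertions check, assuming
the usual ZKUtil.checkExists contract (it returns the znode's version, which is
>= 0, when the node exists, and -1 when it does not); the class and helper names
below are illustrative, not the actual test code:

    import static org.junit.Assert.assertTrue;

    import org.apache.hadoop.hbase.zookeeper.ZKUtil;
    import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
    import org.apache.zookeeper.KeeperException;

    public class CheckExistsSketch {
      // Pre-patch expectation: the failed task node was deleted asynchronously,
      // so the test asserted checkExists(...) == -1 (node gone).
      // Post-patch expectation: the failed task node is kept, so the test asserts
      // checkExists(...) != -1 (node still present).
      static void assertTaskNodeStillPresent(ZooKeeperWatcher zkw, String tasknode)
          throws KeeperException {
        assertTrue(ZKUtil.checkExists(zkw, tasknode) != -1);
      }
    }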
I am still working on a patch to delete the node synchronously in this scenario.
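A minimal sketch of the difference between the asynchronous and synchronous
deletes, using the plain ZooKeeper client API rather than the actual
SplitLogManager code (the zk and path parameters here are illustrative):

    import org.apache.zookeeper.AsyncCallback;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;

    public class DeleteNodeSketch {
      // Asynchronous delete: returns immediately; the znode disappears some time
      // later, when ZooKeeper processes the request and the callback fires. A
      // retry that recreates the same path can race with this pending delete.
      static void deleteAsync(ZooKeeper zk, String path) {
        zk.delete(path, -1, new AsyncCallback.VoidCallback() {
          public void processResult(int rc, String p, Object ctx) {
            // runs later, possibly after a retry has already recreated p
          }
        }, null);
      }

      // Synchronous delete: does not return until ZooKeeper has handled the
      // request, so a subsequent retry starts from a clean state.
      static void deleteSync(ZooKeeper zk, String path)
          throws KeeperException, InterruptedException {
        zk.delete(path, -1);  // version -1 matches any version
      }
    }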
> Distributed log splitting deleteNode races against splitLog retry
> -------------------------------------------------------------------
>
> Key: HBASE-5081
> URL: https://issues.apache.org/jira/browse/HBASE-5081
> Project: HBase
> Issue Type: Bug
> Components: wal
> Affects Versions: 0.92.0, 0.94.0
> Reporter: Jimmy Xiang
> Assignee: Jimmy Xiang
> Attachments: distributed-log-splitting-screenshot.png,
> hbase-5081-patch-v6.txt, hbase-5081_patch_for_92_v4.txt,
> hbase-5081_patch_v5.txt, patch_for_92.txt, patch_for_92_v2.txt,
> patch_for_92_v3.txt
>
>
> Recently, during 0.92 RC testing, we found that distributed log splitting
> hangs forever; please see the attached screenshot.
> I looked into it, and here is what I think happened:
> 1. One region server died; the ServerShutdownHandler detected it and started
> distributed log splitting;
> 2. All three tasks failed, so the three task nodes were deleted asynchronously;
> 3. The ServerShutdownHandler retried the log splitting;
> 4. During the retry, it created these three tasks again and put them into a
> hashmap (tasks);
> 5. The asynchronous deletion from step 2 finally completed for one task; in the
> callback, it removed that task from the hashmap;
> 6. The ZooKeeper watcher for one of the newly submitted tasks found that the
> task was unassigned and not in the hashmap, so it created a new orphan task;
> 7. All three tasks failed again, but the task created in step 6 was an orphan,
> so the batch.err counter came up one short; the log splitting therefore hangs,
> waiting forever for a last task that is never going to finish.
> So I think the problem is step 2. The fix is to make the deletion synchronous
> instead of asynchronous, so that the retry starts from a clean state.
> An asynchronous deleteNode interferes with the split log retry: in the extreme
> case, if the async deleteNode does not complete soon enough, a node created
> during the retry could be deleted. deleteNode should be synchronous.
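> A minimal, self-contained sketch of the race described in steps 2-7 above,
> using a plain ConcurrentHashMap and a single-threaded executor to stand in for
> the tasks map and the ZooKeeper callback thread (names and timings are
> illustrative, not the actual SplitLogManager code):
>
>     import java.util.Map;
>     import java.util.concurrent.ConcurrentHashMap;
>     import java.util.concurrent.ExecutorService;
>     import java.util.concurrent.Executors;
>     import java.util.concurrent.TimeUnit;
>
>     public class DeleteNodeRaceSketch {
>       // Stand-in for SplitLogManager's "tasks" map, keyed by task znode path.
>       static final Map<String, String> tasks = new ConcurrentHashMap<String, String>();
>
>       public static void main(String[] args) throws Exception {
>         final String tasknode = "/hbase/splitlog/task-1";
>         ExecutorService zkCallbackThread = Executors.newSingleThreadExecutor();
>
>         // Steps 1-2: the first attempt failed and an asynchronous delete was
>         // issued; its cleanup callback has not run yet.
>         zkCallbackThread.submit(new Runnable() {
>           public void run() {
>             sleepQuietly(100);       // the delete completes "some time later" ...
>             tasks.remove(tasknode);  // step 5: ... and its callback removes the task
>           }
>         });
>
>         // Steps 3-4: the retry recreates the task and puts it back in the map.
>         tasks.put(tasknode, "retried task");
>
>         zkCallbackThread.shutdown();
>         zkCallbackThread.awaitTermination(1, TimeUnit.SECONDS);
>
>         // The late callback has removed the *retried* task, so a watcher that
>         // later sees the unassigned znode finds no map entry (step 6) and files
>         // it as an orphan, which the batch.err counter never accounts for (step 7).
>         System.out.println("retried task still tracked? " + tasks.containsKey(tasknode));
>       }
>
>       static void sleepQuietly(long ms) {
>         try {
>           Thread.sleep(ms);
>         } catch (InterruptedException e) {
>           Thread.currentThread().interrupt();
>         }
>       }
>     }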
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira