[ https://issues.apache.org/jira/browse/HBASE-5081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13179611#comment-13179611 ]

Jimmy Xiang commented on HBASE-5081:
------------------------------------

@Prakash, cool, that's great.

splitLog is called by the master when it starts up, and by the ServerShutdownHandler 
when a region server dies.  The master waits and then retries.  However, the 
ServerShutdownHandler doesn't wait.  Can we make it wait as the master does?  
See ServerShutdownHandler.process().
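
For what it's worth, a minimal sketch of the wait-then-retry idea (illustrative only: 
the wrapper class, attempt count, and backoff below are made up, not taken from the 
HBase code):

    import java.util.concurrent.Callable;
    import java.util.concurrent.TimeUnit;

    // Illustrative only: a small wait-and-retry wrapper that the
    // ServerShutdownHandler could use around its splitLog call, mirroring
    // the master's wait-then-retry behavior on startup.  The attempt count
    // and backoff are made-up values.
    public final class SplitLogRetry {
      public static void callWithRetries(Callable<Void> splitLogCall)
          throws Exception {
        final int maxAttempts = 3;        // illustrative
        final long waitMillis = 10000L;   // illustrative backoff between attempts
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
          try {
            splitLogCall.call();          // e.g. the splitLog call for the dead server
            return;                       // success, stop retrying
          } catch (Exception e) {
            last = e;
            if (attempt < maxAttempts) {
              TimeUnit.MILLISECONDS.sleep(waitMillis);  // wait before retrying
            }
          }
        }
        throw last;                       // all attempts failed
      }
    }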

Another thing is the resubmit() method: it is called by multiple threads, the 
monitor chore thread and the ZK event thread.  Access to the task's member fields 
should be synchronized, or the fields should be made volatile.
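
A rough sketch of that second point, using made-up field names rather than the 
actual Task fields:

    // Sketch of the idea only: if the monitor chore thread and the ZK event
    // thread both touch a task's state via resubmit(), the shared fields can
    // be declared volatile (for simple reads/writes) or guarded by a lock.
    // The field names below are illustrative, not the actual Task fields.
    class Task {
      volatile long lastUpdateTime;      // last time the task made progress
      volatile int lastVersion;          // last znode version we saw
      volatile String curWorker;         // worker currently assigned, if any
      volatile boolean deleted;          // set once the znode delete completes

      // For compound updates (read-modify-write), volatile alone is not
      // enough; synchronize the whole update instead.
      synchronized void heartbeat(long now, int version, String worker) {
        this.lastUpdateTime = now;
        this.lastVersion = version;
        this.curWorker = worker;
      }
    }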
                
> Distributed log splitting deleteNode races against splitLog retry 
> ------------------------------------------------------------------
>
>                 Key: HBASE-5081
>                 URL: https://issues.apache.org/jira/browse/HBASE-5081
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 0.92.0, 0.94.0
>            Reporter: Jimmy Xiang
>            Assignee: Prakash Khemani
>             Fix For: 0.92.0
>
>         Attachments: 
> 0001-HBASE-5081-jira-Distributed-log-splitting-deleteNode.patch, 
> 0001-HBASE-5081-jira-Distributed-log-splitting-deleteNode.patch, 
> 0001-HBASE-5081-jira-Distributed-log-splitting-deleteNode.patch, 
> 0001-HBASE-5081-jira-Distributed-log-splitting-deleteNode.patch, 
> distributed-log-splitting-screenshot.png, hbase-5081-patch-v6.txt, 
> hbase-5081-patch-v7.txt, hbase-5081_patch_for_92_v4.txt, 
> hbase-5081_patch_v5.txt, patch_for_92.txt, patch_for_92_v2.txt, 
> patch_for_92_v3.txt
>
>
> Recently, during 0.92 RC testing, we found that distributed log splitting hangs 
> forever.  Please see the attached screenshot.
> I looked into it, and here is what I think happened:
> 1. One region server died; the ServerShutdownHandler found out and started the 
> distributed log splitting;
> 2. All three tasks failed, so the three tasks were deleted, asynchronously;
> 3. The ServerShutdownHandler retried the log splitting;
> 4. During the retry, it created these three tasks again and put them in a 
> hashmap (tasks);
> 5. The asynchronous deletion from step 2 finally happened for one task; in the 
> callback, it removed that task from the hashmap;
> 6. One of the newly submitted tasks' ZooKeeper watchers found that the task was 
> unassigned and not in the hashmap, so it created a new orphan task.
> 7. All three tasks failed, but the task created in step 6 is an orphan, so the 
> batch.err counter was one short, and the log splitting hangs there forever, 
> waiting for the last task to finish, which is never going to happen.
> So I think the problem is step 2.  The fix is to make the deletion synchronous, 
> instead of asynchronous, so that the retry has a clean start.
> An async deleteNode races with the split log retry.  In an extreme situation, if 
> the async deleteNode doesn't happen soon enough, a node created during the 
> retry could be deleted.
> deleteNode should be synchronous.
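
To make the quoted fix concrete, here is a minimal standalone sketch of async versus 
sync deleteNode against the plain ZooKeeper client API (illustrative only; this is 
not the actual patch, and the path and version handling are simplified):

    import org.apache.zookeeper.AsyncCallback;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;

    public class DeleteNodeExample {

      // Async: returns immediately; the delete callback may fire only after a
      // retry has re-created a task node with the same name, which is the race
      // described above.
      static void deleteNodeAsync(ZooKeeper zk, String path) {
        zk.delete(path, -1, new AsyncCallback.VoidCallback() {
          public void processResult(int rc, String p, Object ctx) {
            // runs later, on the ZK event thread
          }
        }, null);
      }

      // Sync: blocks until ZooKeeper has processed the delete, so a subsequent
      // splitLog retry starts from a clean state.
      static void deleteNodeSync(ZooKeeper zk, String path)
          throws KeeperException, InterruptedException {
        zk.delete(path, -1);  // version -1 matches any version
      }
    }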
