[ https://issues.apache.org/jira/browse/HBASE-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13234592#comment-13234592 ]

Prakash Khemani commented on HBASE-5606:
----------------------------------------

@Chinna

It is the TimeoutMonitor that causes so many Deletes to be queued.

The fix will be the following:

In the TimeoutMonitor, do not call getDataSetWatch() if the task has already failed.

Ignore the call to getDataSetWatch() if there is already a pending 
getDataSetWatch() against the task.
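
A minimal sketch of the two guards described above, not the actual SplitLogManager code; the class and field names here (SimpleTask, getDataSetWatchPending, TaskState) are hypothetical stand-ins:

{noformat}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

public class TimeoutMonitorSketch {

  enum TaskState { IN_PROGRESS, SUCCESS, FAILURE }

  /** Hypothetical per-task bookkeeping. */
  static class SimpleTask {
    volatile TaskState state = TaskState.IN_PROGRESS;
    // set while an async getData-with-watch call is outstanding
    final AtomicBoolean getDataSetWatchPending = new AtomicBoolean(false);
  }

  private final Map<String, SimpleTask> tasks = new ConcurrentHashMap<>();

  /** Called periodically by the timeout chore. */
  void chore() {
    for (Map.Entry<String, SimpleTask> e : tasks.entrySet()) {
      SimpleTask task = e.getValue();
      // Guard 1: a task that has already failed gets no new watch
      // (its znode is being deleted asynchronously anyway).
      if (task.state == TaskState.FAILURE) {
        continue;
      }
      // Guard 2: at most one outstanding getDataSetWatch per task,
      // otherwise every chore run queues yet another async ZK call.
      if (task.getDataSetWatchPending.compareAndSet(false, true)) {
        getDataSetWatch(e.getKey(), task);
      }
    }
  }

  /** Placeholder for the async ZooKeeper getData(..., watch, callback) call. */
  void getDataSetWatch(String path, SimpleTask task) {
    // The real code issues an async ZK read here; its callback would
    // clear task.getDataSetWatchPending once it completes.
  }
}
{noformat}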

Thanks for finding this issue.
                
> SplitLogManager async delete node hangs log splitting when ZK connection is 
> lost 
> --------------------------------------------------------------------------------
>
>                 Key: HBASE-5606
>                 URL: https://issues.apache.org/jira/browse/HBASE-5606
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>    Affects Versions: 0.92.0
>            Reporter: Gopinathan A
>            Priority: Critical
>             Fix For: 0.92.2
>
>
> 1. One RS died; the ServerShutdownHandler detected it and started the 
> distributed log splitting.
> 2. All tasks failed due to the lost ZK connection, so all the tasks were 
> deleted asynchronously.
> 3. The ServerShutdownHandler retried the log splitting.
> 4. The asynchronous deletion from step 2 finally happened, but against the 
> new task (illustrated in the sketch after the log excerpts below).
> 5. This left the SplitLogManager in a hanging state.
> This leads to the .META. region not being assigned for a long time.
> {noformat}
> hbase-root-master-HOST-192-168-47-204.log.2012-03-14"(55413,79):2012-03-14 
> 19:28:47,932 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: put up 
> splitlog task at znode 
> /hbase/splitlog/hdfs%3A%2F%2F192.168.47.205%3A9000%2Fhbase%2F.logs%2Flinux-114.site%2C60020%2C1331720381665-splitting%2Flinux-114.site%252C60020%252C1331720381665.1331752316170
> hbase-root-master-HOST-192-168-47-204.log.2012-03-14"(89303,79):2012-03-14 
> 19:34:32,387 DEBUG org.apache.hadoop.hbase.master.SplitLogManager: put up 
> splitlog task at znode 
> /hbase/splitlog/hdfs%3A%2F%2F192.168.47.205%3A9000%2Fhbase%2F.logs%2Flinux-114.site%2C60020%2C1331720381665-splitting%2Flinux-114.site%252C60020%252C1331720381665.1331752316170
> {noformat}
> {noformat}
> hbase-root-master-HOST-192-168-47-204.log.2012-03-14"(80417,99):2012-03-14 
> 19:34:31,196 DEBUG 
> org.apache.hadoop.hbase.master.SplitLogManager$DeleteAsyncCallback: deleted 
> /hbase/splitlog/hdfs%3A%2F%2F192.168.47.205%3A9000%2Fhbase%2F.logs%2Flinux-114.site%2C60020%2C1331720381665-splitting%2Flinux-114.site%252C60020%252C1331720381665.1331752316170
> hbase-root-master-HOST-192-168-47-204.log.2012-03-14"(89456,99):2012-03-14 
> 19:34:32,497 DEBUG 
> org.apache.hadoop.hbase.master.SplitLogManager$DeleteAsyncCallback: deleted 
> /hbase/splitlog/hdfs%3A%2F%2F192.168.47.205%3A9000%2Fhbase%2F.logs%2Flinux-114.site%2C60020%2C1331720381665-splitting%2Flinux-114.site%252C60020%252C1331720381665.1331752316170
> {noformat}
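
The following is a standalone ZooKeeper sketch (not HBase code, and not the fix proposed above) of the race in steps 2-4: an async delete queued for the old task can fire after the same znode path has been recreated for the retried task, silently removing the new task. Passing the old znode's version to delete() is the standard ZooKeeper guard against deleting a recreated node; the task path and connection handling are placeholders.

{noformat}
import org.apache.zookeeper.AsyncCallback;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class AsyncDeleteRaceSketch {

  public static void deleteTaskZnode(ZooKeeper zk, String taskPath)
      throws KeeperException, InterruptedException {
    // Read the current version of the znode we intend to delete.
    Stat stat = zk.exists(taskPath, false);
    if (stat == null) {
      return; // already gone
    }
    int expectedVersion = stat.getVersion();

    // Version-conditioned async delete: if the znode was deleted and
    // recreated in the meantime, its version no longer matches and
    // ZooKeeper answers BADVERSION instead of removing the new task.
    zk.delete(taskPath, expectedVersion, new AsyncCallback.VoidCallback() {
      @Override
      public void processResult(int rc, String path, Object ctx) {
        KeeperException.Code code = KeeperException.Code.get(rc);
        if (code == KeeperException.Code.OK
            || code == KeeperException.Code.NONODE) {
          // deleted, or someone else deleted it first: both are fine
        } else if (code == KeeperException.Code.BADVERSION) {
          // the znode was recreated for a new task; do NOT retry the delete
        } else {
          // connection loss etc.: retry or give up per policy
        }
      }
    }, null);
  }
}
{noformat}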
