[ 
https://issues.apache.org/jira/browse/HBASE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13295311#comment-13295311
 ] 

stack commented on HBASE-6060:
------------------------------

bq. But I agree two systems, AM and SSH, make decisions based on that...

Sorry, I don't follow?  Are you saying we should rely on RegionState?  Or on 
RegionPlan?  (The plan is what we 'want' to happen; RegionState should be where 
regions actually 'are'.  I'd think we'd want to base the decision on current 
state rather than on planned state?)
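To make the distinction concrete, I mean something like this (illustrative 
only; forceOfflineAndReassign is a made-up helper for the sketch, not real 
AM code):

{code}
// Illustrative only: an SSH-style decision keys off where the region
// actually *is* (RegionState), not where a RegionPlan says it should go.
RegionState state = regionsInTransition.get(region.getEncodedName());
if (state != null && state.isOpening()) {
  // The open was in flight on the dead server; the master must re-own it.
  forceOfflineAndReassign(region);  // made-up helper for this sketch
}
// A RegionPlan for the same region only names the intended target server;
// it says nothing about whether that RS ever actually started the open.
{code}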

bq. One more thing on RegionState in RIT is that it is reactive... Every time 
assignment starts, RegionState in RIT goes through a set of steps, and maybe 
that is why we are not sure what step the RIT is in and who made that change.

In SSH, we need the answer to one basic question only; i.e., who owns the 
region, master or regionserver.  Unless you have a better idea, I think setting 
the znode to OPENING before returning from the open RPC is necessary to plug 
the gray area you fellas identified.
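Rough sketch of what I mean, i.e. move the transition up into the open rpc 
itself (imagine this inside HRegionServer; submitOpenRegionHandler is a 
stand-in name for the existing handler submit, not an actual method):

{code}
public RegionOpeningState openRegion(final HRegionInfo region)
    throws IOException {
  try {
    // OFFLINE -> OPENING while still inside the rpc.  If we crash after
    // this point, the OPENING znode tells SSH this server owned the open,
    // so it can reassign immediately instead of waiting out the timeout.
    ZKAssign.transitionNodeOpening(getZooKeeper(), region, getServerName());
  } catch (KeeperException ke) {
    throw new IOException("OFFLINE -> OPENING failed for "
        + region.getRegionNameAsString(), ke);
  }
  submitOpenRegionHandler(region);  // async open proceeds as it does today
  return RegionOpeningState.OPENED;
}
{code}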

bq. We also thought of one approach: can we remove the retry logic itself in 
assign?

The retry has been there a long time (originally we just recursed, calling 
assign on exception, but in HBASE-3263 we changed it to a bounded loop).  It 
seems like the retry is ok since we'll try a different server if we fail on the 
first plan?  We also like this single-assign method because it is more rigorous 
about state management than bulk-assign?  It's only a problem if there is a 
concurrent assign while we are unsure who is responsible for the region.
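For reference, the bounded loop from HBASE-3263 is roughly this shape 
(paraphrased from memory; maximumAssignmentAttempts, getNewPlan, and 
sendOpenRpc are illustrative names, not the exact code):

{code}
// Inside assign(): try up to N plans, picking a different server after a
// failure, instead of recursing without bound on every exception.
ServerName lastFailedServer = null;
for (int i = 0; i < maximumAssignmentAttempts; i++) {
  RegionPlan plan = getNewPlan(region, lastFailedServer);  // avoid bad server
  try {
    sendOpenRpc(plan.getDestination(), region);  // master -> RS open call
    return;  // accepted; the RS drives OFFLINE -> OPENING -> OPENED in zk
  } catch (IOException e) {
    lastFailedServer = plan.getDestination();
    LOG.info("Assign attempt " + i + " for "
        + region.getRegionNameAsString() + " failed on " + lastFailedServer
        + "; retrying with a new plan", e);
  }
}
throw new RuntimeException("Unable to assign "
    + region.getRegionNameAsString() + " after "
    + maximumAssignmentAttempts + " attempts");
{code}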

> Regions in OPENING state from failed regionservers take a long time to 
> recover
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-6060
>                 URL: https://issues.apache.org/jira/browse/HBASE-6060
>             Project: HBase
>          Issue Type: Bug
>          Components: master, regionserver
>            Reporter: Enis Soztutar
>            Assignee: rajeshbabu
>             Fix For: 0.96.0, 0.94.1, 0.92.3
>
>         Attachments: 6060-94-v3.patch, 6060-94-v4.patch, 6060-94-v4_1.patch, 
> 6060-94-v4_1.patch, 6060-trunk.patch, 6060-trunk.patch, 6060-trunk_2.patch, 
> 6060-trunk_3.patch, 6060_alternative_suggestion.txt, 
> 6060_suggestion2_based_off_v3.patch, 6060_suggestion_based_off_v3.patch, 
> 6060_suggestion_toassign_rs_wentdown_beforerequest.patch, 
> HBASE-6060-92.patch, HBASE-6060-94.patch, HBASE-6060-trunk_4.patch, 
> HBASE-6060_trunk_5.patch
>
>
> We have seen a pattern in tests: regions are stuck in OPENING state for a 
> very long time when the regionserver that is opening the region fails. 
> My understanding of the process: 
>  
>  - master calls rs to open the region. If rs is offline, a new plan is 
> generated (a new rs is chosen). RegionState is set to PENDING_OPEN (only in 
> master memory, zk still shows OFFLINE). See HRegionServer.openRegion(), 
> HMaster.assign()
>  - RegionServer starts opening the region and changes the state in the 
> znode, but that znode is not ephemeral (see ZKAssign).
>  - RS transitions the zk node from OFFLINE to OPENING. See 
> OpenRegionHandler.process()
>  - RS then opens the region and changes the znode from OPENING to OPENED.
>  - When the RS is killed between the OPENING and OPENED states, zk shows 
> OPENING, and the master just waits for the RS to change the region state; 
> since the RS is down, that won't happen. 
>  - There is an AssignmentManager.TimeoutMonitor, which guards against 
> exactly these kinds of conditions. It periodically checks (every 10 sec by 
> default) whether the regions in transition have timed out 
> (hbase.master.assignment.timeoutmonitor.timeout). The default timeout is 
> 30 min, which explains what you and I are seeing. 
>  - ServerShutdownHandler in Master does not reassign regions in OPENING 
> state, although it handles other states. 
> Lowering that threshold from the configuration is one option (sketched 
> below), but still I think we can do better. 
> Will investigate more. 
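> For example (a minimal sketch of where the knob lives; since the master 
> reads it at startup, the override belongs in the master's hbase-site.xml): 
> {code}
> // The TimeoutMonitor's RIT timeout, default 30 min (1800000 ms).  Lowering
> // it in the master's hbase-site.xml gets stuck-OPENING regions retried
> // sooner, at the cost of more false-positive timeouts.
> Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml
> int ritTimeoutMs = conf.getInt(
>     "hbase.master.assignment.timeoutmonitor.timeout", 1800000);
> {code}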

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
