[ https://issues.apache.org/jira/browse/HBASE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289945#comment-13289945 ]

ramkrishna.s.vasudevan commented on HBASE-6060:
-----------------------------------------------

@Stack and @Chunhui
Taken as a whole, this patch is a rebase of HBASE-5396. If you look at HBASE-5396 
(now in 0.90) there are no special states, but the fix there caused HBASE-5816.
Chunhui's suggestion will make the timeout work, but the problem in HBASE-5816 
will easily recur, i.e. two assignments will go on in parallel. And every time 
you hit this problem there is a high chance the master will abort:
{code}
    if (!hijack && !state.isClosed() && !state.isOffline()) {
      String msg = "Unexpected state : " + state + " .. Cannot transit it to OFFLINE.";
      this.master.abort(msg, new IllegalStateException(msg));
      return -1;
    }
{code}
Why does this double assignment happen? Because in the assign() code there is a 
retry mechanism if the first assign fails:
{code}
LOG.warn("Failed assignment of " +
          state.getRegion().getRegionNameAsString() + " to " +
          plan.getDestination() + ", trying to assign elsewhere instead; " +
          "retry=" + i, t);
        // Clean out plan we failed execute and one that doesn't look like it'll
        // succeed anyways; we need a new plan!
        // Transition back to OFFLINE
        state.update(RegionState.State.OFFLINE);
        // Force a new plan and reassign.  Will return null if no servers.
        if (getRegionPlan(state, plan.getDestination(), true) == null) {
          this.timeoutMonitor.setAllRegionServersOffline(true);
          LOG.warn("Unable to find a viable location to assign region " +
            state.getRegion().getRegionNameAsString());
          return;
        }
{code}
Now, because of the retry, we start an assignment while the ServerShutdownHandler 
(SSH) also tries to do an assignment (either with the solution in the patch or 
with Chunhui's suggestion), and this causes the master abort above. The same was 
reported in HBASE-5816 by Maryann. This is why we are trying to address both 
problems in this patch with some workarounds, and also why we use such a 
mechanism so that in future we can avoid double assignments.
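
To make the race concrete, here is a minimal standalone sketch (plain Java, not 
the HBase code path; the class and method names are illustrative). Two actors, 
the assign() retry and SSH, both try to force the same region to OFFLINE; a 
compare-and-set transition lets exactly one of them win, so the loser backs off 
instead of driving the master into the abort above:
{code}
import java.util.concurrent.atomic.AtomicReference;

public class DoubleAssignSketch {
  enum State { OPENING, OFFLINE }

  // Stand-in for the region's in-memory RegionState
  static final AtomicReference<State> regionState =
      new AtomicReference<State>(State.OPENING);

  // Guarded transition: succeeds for exactly one caller
  static boolean tryForceOffline(String who) {
    boolean won = regionState.compareAndSet(State.OPENING, State.OFFLINE);
    System.out.println(who + (won
        ? " forced OFFLINE and proceeds to assign"
        : " lost the race and backs off"));
    return won;
  }

  public static void main(String[] args) throws InterruptedException {
    Thread retry = new Thread(new Runnable() {
      public void run() { tryForceOffline("assign() retry"); }
    });
    Thread ssh = new Thread(new Runnable() {
      public void run() { tryForceOffline("SSH"); }
    });
    retry.start(); ssh.start();
    retry.join(); ssh.join();
    // Exactly one actor assigns; the parallel-assignment abort never fires.
  }
}
{code}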

bq. Can't we use zk znode sequence numbers to prevent this

I am not very clear on this. I think we already have code that checks the 
version number of the znode inside createOrForceNodeOffline() before setting 
the node to OFFLINE.
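
For what it is worth, ZooKeeper's setData() already gives a compare-and-set on 
the znode version: if someone else transitioned the node after we read it, the 
write fails with BadVersionException instead of silently double-transitioning. 
A minimal sketch with the raw ZooKeeper client (the helper name and the OFFLINE 
payload handling are illustrative, not HBase code):
{code}
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZkCasSketch {
  // Transition a region znode to OFFLINE only if nobody modified it
  // since we last read it; returns false if we lost the race.
  static boolean forceOfflineIfUnchanged(ZooKeeper zk, String znode,
      byte[] offlineData) throws KeeperException, InterruptedException {
    Stat stat = zk.exists(znode, false);
    if (stat == null) return false; // znode is gone; nothing to transition
    try {
      // setData() with an explicit expected version is a compare-and-set:
      // it fails if the znode was updated after our exists() call.
      zk.setData(znode, offlineData, stat.getVersion());
      return true;
    } catch (KeeperException.BadVersionException e) {
      // A concurrent transition won; back off instead of double-assigning.
      return false;
    }
  }
}
{code}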

> Regions in OPENING state from failed regionservers take a long time to recover
> ---------------------------------------------------------------------------------
>
>                 Key: HBASE-6060
>                 URL: https://issues.apache.org/jira/browse/HBASE-6060
>             Project: HBase
>          Issue Type: Bug
>          Components: master, regionserver
>            Reporter: Enis Soztutar
>            Assignee: Enis Soztutar
>             Fix For: 0.96.0, 0.94.1, 0.92.3
>
>         Attachments: 6060-94-v3.patch, 6060-94-v4.patch, 6060-94-v4_1.patch, 
> 6060-94-v4_1.patch, 6060-trunk.patch, 6060-trunk.patch, 6060-trunk_2.patch, 
> 6060-trunk_3.patch, HBASE-6060-92.patch, HBASE-6060-94.patch
>
>
> We have seen a pattern in tests where regions are stuck in the OPENING state 
> for a very long time when the region server that is opening the region fails. 
> My understanding of the process: 
>  
>  - The master calls the rs to open the region. If the rs is offline, a new 
> plan is generated (a new rs is chosen). RegionState is set to PENDING_OPEN 
> (only in master memory; zk still shows OFFLINE). See HRegionServer.openRegion(), 
> HMaster.assign().
>  - The RegionServer starts opening the region and changes the state in the 
> znode. But that znode is not ephemeral (see ZkAssign).
>  - The rs transitions the zk node from OFFLINE to OPENING. See 
> OpenRegionHandler.process().
>  - The rs then opens the region and changes the znode from OPENING to OPENED.
>  - When the rs is killed between the OPENING and OPENED states, zk shows the 
> OPENING state and the master just waits for the rs to change the region state; 
> but since the rs is down, that won't happen. 
>  - There is an AssignmentManager.TimeoutMonitor, which guards exactly against 
> these kinds of conditions. It periodically checks (every 10 sec by default) 
> the regions in transition to see whether they timed out 
> (hbase.master.assignment.timeoutmonitor.timeout). The default timeout is 
> 30 min, which explains what you and I are seeing. 
>  - The ServerShutdownHandler in the Master does not reassign regions in the 
> OPENING state, although it handles other states. 
> Lowering that threshold in the configuration is one option, but I still think 
> we can do better. 
> Will investigate more. 
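
As a stopgap while a proper fix lands, the timeout mentioned above can be 
lowered. A minimal sketch of a programmatic override (the 180000 ms value is 
only an illustration, not a recommendation):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class LowerAssignTimeout {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Default is 30 minutes; tighten it so stuck-OPENING regions
    // are retried sooner by the TimeoutMonitor.
    conf.setInt("hbase.master.assignment.timeoutmonitor.timeout", 180000);
    System.out.println("timeout = "
        + conf.getInt("hbase.master.assignment.timeoutmonitor.timeout", -1));
  }
}
{code}
The same key can of course be set in hbase-site.xml instead.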

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira