[ 
https://issues.apache.org/jira/browse/HBASE-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13060193#comment-13060193
 ] 

Ted Yu commented on HBASE-4060:
-------------------------------

Here is related log snippet:
{noformat}
2011-06-29 16:39:54,326 DEBUG
org.apache.hadoop.hbase.master.handler.OpenedRegionHandler: Opened region
gs_raw_events,GSLoad_1308518553_168_WEB204,1308533970928.584dac5cc70d8682f71c4675a843c309.
on hadoop1-s05.farm-ny.gigya.com,60020,1307349217076
2011-06-29 16:40:00,598 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign:
master:60000-0x13004a31d7804c4 Creating (or updating) unassigned node for
584dac5cc70d8682f71c4675a843c309 with OFFLINE state
2011-06-29 16:40:00,877 DEBUG
org.apache.hadoop.hbase.master.AssignmentManager: No previous transition
plan was found (or we are ignoring an existing plan) for
gs_raw_events,GSLoad_1308518553_168_WEB204,1308533970928.584dac5cc70d8682f71c4675a843c309.
so generated a random one;
hri=gs_raw_events,GSLoad_1308518553_168_WEB204,1308533970928.584dac5cc70d8682f71c4675a843c309.,
src=, dest=hadoop1-s05.farm-ny.gigya.com,60020,1307349217076; 5 (online=5,
exclude=null) available servers
{noformat}
The log indicates that the following code path was executed:
{code}
  private void assign(final RegionState state, final boolean setOfflineInZK,
      final boolean forceNewPlan) {
    for (int i = 0; i < this.maximumAssignmentAttempts; i++) {
      if (setOfflineInZK && !setOfflineInZooKeeper(state)) return;
{code}
The above would have been called from either:
{code}
  public void assign(HRegionInfo region, boolean setOfflineInZK) {
    assign(region, setOfflineInZK, false);
  }
{code}
or TimeoutMonitor.chore()
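
In isolation, that retry loop behaves roughly as follows (a minimal, self-contained sketch: the ZooKeeper call is faked with a boolean flag, and everything except the loop shape is a hypothetical stand-in, not the actual AssignmentManager internals):
{code}
// Simplified stand-in for the assignment retry loop quoted above.
// setOfflineInZooKeeper is faked; in real HBase it creates/updates the
// unassigned znode with OFFLINE state, as in the log snippet.
public class AssignSketch {
  static final int MAX_ATTEMPTS = 10; // stand-in for maximumAssignmentAttempts

  // Pretend ZK call: fails when ZK is unhealthy / the znode can't be written.
  static boolean setOfflineInZooKeeper(String region, boolean zkHealthy) {
    return zkHealthy;
  }

  /** Returns true if the region was assigned, false if assignment aborted. */
  static boolean assign(String region, boolean setOfflineInZK, boolean zkHealthy) {
    for (int i = 0; i < MAX_ATTEMPTS; i++) {
      // Mirrors the early return quoted above: if the OFFLINE znode cannot
      // be created, the entire assignment attempt is silently abandoned.
      if (setOfflineInZK && !setOfflineInZooKeeper(region, zkHealthy)) return false;
      // ... open-region RPC to the chosen server would go here; assume it
      // succeeds on the first attempt in this sketch.
      return true;
    }
    return false;
  }
}
{code}
Note the early {{return}}: once {{setOfflineInZooKeeper}} fails, no retry or cleanup happens, which is part of why assignment is fragile.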

> Making region assignment more robust
> ------------------------------------
>
>                 Key: HBASE-4060
>                 URL: https://issues.apache.org/jira/browse/HBASE-4060
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Ted Yu
>             Fix For: 0.92.0
>
>
> From Eran Kutner:
> My concern is that the region allocation process seems to rely too much on
> timing considerations and doesn't seem to take enough measures to guarantee
> that conflicts do not occur. I understand that in a distributed environment,
> when you don't get a timely response from a remote machine, you can't know
> for sure whether it did or did not receive the request; however, there are
> things that can be done to mitigate this and reduce the conflict time
> significantly. For example, when I run hbck it knows that some regions are
> multiply assigned; the master could do the same and try to resolve the
> conflict. Another approach would be to handle late responses: even if the
> response from the remote machine arrives after it was assumed to be dead,
> the master should have enough information to know it has created a conflict
> by assigning the region to another server. An even better solution, I think,
> is for the RS to periodically verify that it is indeed the rightful owner of
> every region it holds and relinquish control over the region if it is not.
> Obviously, a state where two RSs hold the same region is pathological and
> can lead to data loss, as demonstrated in my case. The system should be able
> to actively protect itself against such a scenario. It probably doesn't need
> saying, but there is really nothing worse for a data storage system than
> data loss.
> In my case the problem didn't happen in the initial phase but after
> disabling and enabling a table with about 12K regions.
> For more background information, see 'Errors after major compaction' 
> discussion on [email protected]
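
The periodic ownership check suggested in the quoted text could look roughly like this (a hedged sketch only: the meta-table lookup is faked with a Map, and every name here is hypothetical rather than real HBase code):
{code}
import java.util.*;

// Sketch of "RS periodically verifies it owns its regions": compare the
// regions this server is serving against the recorded owner and close any
// region that now belongs to someone else.
public class OwnershipCheck {
  // regionId -> server recorded as owner (stand-in for .META./ZooKeeper)
  final Map<String, String> metaOwner = new HashMap<String, String>();
  // regions this region server believes it is serving
  final Set<String> onlineRegions = new HashSet<String>();
  final String myServerName;

  OwnershipCheck(String myServerName) { this.myServerName = myServerName; }

  /** Relinquish every region whose recorded owner is another server.
   *  Returns the set of regions given up. */
  Set<String> verifyOwnership() {
    Set<String> relinquished = new HashSet<String>();
    for (Iterator<String> it = onlineRegions.iterator(); it.hasNext();) {
      String region = it.next();
      String owner = metaOwner.get(region);
      if (owner != null && !owner.equals(myServerName)) {
        it.remove();               // stop serving a region we no longer own
        relinquished.add(region);
      }
    }
    return relinquished;
  }
}
{code}
Run periodically (e.g. from a chore thread), this would bound the window during which two servers serve the same region instead of letting the conflict persist.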

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
