[ https://issues.apache.org/jira/browse/HBASE-4991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13219839#comment-13219839 ]

stack commented on HBASE-4991:
------------------------------

@Mubarak I see that this patch is modeled on HBASE-4213, the online schema-edit 
patch.  I'm not sure that is a good model to follow in the first place -- it's 
disabled because it does not currently work in the face of splits (though it has 
handler code that is supposed to manage this), and secondly, it's a bunch of 
custom code specific to the schema change only.  Your patch copy/pastes a lot 
from the schema patch, duplicating the model and repeating its code except for 
some changes in method names and in the znodes we wait on up in zk.  Rather, 
don't you think we should generalize the common facility and have these two 
features share it instead of each making a copy, especially since we now have 
two clients in need?  (It's actually three if you count merge, which IMO this 
feature should be built on.)  For example, in both cases we need to disable 
table splitting.  The schema patch does this with a waitForInflightSchemaChange 
check that looks at state in zk, and then in the splitRegion code we wait by 
invoking the below:

{code}
waitForSchemaChange(Bytes.toString(regionInfo.getTableName()));
{code}

You come along and do a repeat. You add to the splitRegion code:

{code}
+   waitForDeleteRegion(regionInfo.getEncodedName());
{code}

The list of things to check before we go ahead and split could get pretty long 
if we keep on down this route.

Instead we should have a generic disable-splitting facility that both the schema 
edit and this patch could use.
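
Something like the below is what I have in mind -- just a sketch, nothing here 
exists; the class name and the znode name are made up.  It is the simpler 
flag-in-zk variant (a table write lock would subsume it):

{code}
import org.apache.hadoop.hbase.zookeeper.ZKUtil;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
import org.apache.zookeeper.KeeperException;

/**
 * Sketch only: one shared "splits disabled" switch that schema change,
 * deleteRegion and merge would all flip, instead of each feature adding its
 * own waitFor* call into splitRegion.  Class and znode names are made up.
 */
public class SplitSwitch {
  private final ZooKeeperWatcher zkw;
  private final String znode;

  public SplitSwitch(ZooKeeperWatcher zkw) {
    this.zkw = zkw;
    this.znode = ZKUtil.joinZNode(zkw.baseZNode, "splits-disabled");
  }

  /** Called by whichever feature needs splits held off. */
  public void disableSplits() throws KeeperException {
    ZKUtil.createAndFailSilent(zkw, znode);
  }

  public void enableSplits() throws KeeperException {
    ZKUtil.deleteNodeFailSilent(zkw, znode);
  }

  /** The single check splitRegion would make (or watch/wait on). */
  public boolean splitsDisabled() throws KeeperException {
    return ZKUtil.checkExists(zkw, znode) != -1;
  }
}
{code}

splitRegion then makes this one check (or waits on the znode going away) rather 
than accumulating waitForSchemaChange, waitForDeleteRegion and whatever comes 
next.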

Going back to your design, I see this:

{code}
4. DeleteRegionTracker (new class in RS side) will process 
nodeChildrenChanged(), get the list of regions_to_be_deleted, check that those 
regions are being hosted by the RS, if yes then

doDeleteRegion
call deleteRegion() in HRegionServer
disable the region split
close the region
remove from META
bridge the hole in META (extending the span of the region before or after)
remove region directory from HDFS
update state in ZK 
(<zookeeper.znode.parent>/delete-region/<encoded-region-name>)
{code}

Does the above presume all regions for a range are on a single regionserver?  
(If not, how is the meta editing done -- in particular the bridging of the hole 
in .META.?)

I'm asking because I think it's not a good design to ask regionservers to do the 
merge; it makes this patch more complicated than it needs to be, IMO.
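
To make the complexity concrete, the "bridge the hole" step by itself is catalog 
surgery along these lines -- illustrative only, not code from your patch, using 
the current client API; note the widened neighbour is a brand-new HRegionInfo 
(new region id, new encoded name) that then has to be assigned someplace:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Writables;

/** Illustration of what bridging the hole left by a deleted region implies. */
public class BridgeMetaHole {
  public static void bridge(Configuration conf, HRegionInfo deleted,
      HRegionInfo previousNeighbour) throws Exception {
    // Widen the previous region so its end key covers the deleted range.
    HRegionInfo widened = new HRegionInfo(previousNeighbour.getTableName(),
        previousNeighbour.getStartKey(), deleted.getEndKey());
    HTable meta = new HTable(conf, HConstants.META_TABLE_NAME);
    try {
      // Drop the deleted region's row and the old neighbour's row...
      meta.delete(new Delete(deleted.getRegionName()));
      meta.delete(new Delete(previousNeighbour.getRegionName()));
      // ...and write the widened region in their place.
      Put p = new Put(widened.getRegionName());
      p.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER,
          Writables.getBytes(widened));
      meta.put(p);
      // The widened region still has to be opened/assigned by the master.
    } finally {
      meta.close();
    }
  }
}
{code}

Done from the master, under a table lock, this is manageable; done ad hoc by 
whichever regionserver happens to host pieces of the range, it gets messy fast.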

I suggest we go back to the design and work forward from there.  Your patch is 
fat and has a bunch of good stuff that we can repurpose once we have the design 
done.

I suggest a design below.  It has some prerequisites -- general facilities that 
this feature (and others) could use.  The prereqs, if you think them good, could 
be done outside of this JIRA.

Here's a suggested rough outline of how I think this feature should run.  What 
I'm describing below covers both merge and deleteRegion, since I see them as in 
essence the same thing.

# Client calls the merge or deleteRegion API; the argument is a range of rows.
# Master gets call.
# Master obtains a write lock on the table so it can't be disabled from under 
us.  The write lock will also disable splitting.  This is one of the prereqs I 
think; it's HBASE-5494.  (Or we could just do something simpler where we have a 
flag up in zk that splitRegion checks, but that's less useful I think; OR we do 
the dynamic configs issue and set splits to off via a config change.)  There'd 
be a timer for how long we wait on the table lock.
# If we get the lock, write the intent to merge a range up into zk.  Also hoist 
into zk whether it's a pure merge or a merge that drops the region data (a 
deleteRegion call).
# Return to the client either our failed attempt at locking the table or an id 
of some sort identifying this running operation; the client can use it to query 
status.
# Turn off the balancer.  TODO/prereq: do it in a way that is persisted.  The 
balancer switch is currently in memory only, so if the master crashes, the new 
master will come up in balancing mode.
# (If we had dynamic config we could hoist up to zk a config that disables the 
balancer rather than have a balancer-specific flag/znode; OR, if a write lock is 
outstanding on a table, the balancer does not balance regions in the locked 
table -- this latter might be the easiest to do.)
# Write into zk that we just turned off the balancer (if it was on).
# Get the regions that are involved in the span.
# Hoist the list up into zk.
# Create a region to span the range.
# Write that we did this up into zk.
# Close regions in parallel.  Confirm close in parallel.
# Write up into zk that the regions are closed (this might not be necessary 
since we can ask whether a region is open).
# If it's a merge and not a deleteRegion, move the files under the new region.  
We might multithread this (moves should go pretty fast).  If it's a 
deleteRegion, we skip this step.
# On completion, mark zk (though this may not be necessary since it's easy to 
look in the fs to see the state of the move).
# Edit .META.
# Confirm edits went in.
# Move the old regions to an hbase trash folder.  TODO: there is no trash folder 
under /hbase currently; we should add one.
# Enable the balancer (if it was on before we turned it off).
# Unlock the table.

Done

Above is a suggestion.  It'd get us merge and your deleteRegion.
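
As a skeleton of the master-side operation, roughly (nothing below exists -- the 
class and every helper hook are made up, and the zk bookkeeping, the status id 
handed back to the client, and error handling are all left out):

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.HRegionInfo;

/** Sketch of the flow above; dropData == true is deleteRegion, false is plain merge. */
public abstract class MergeRangeOperation {

  // Prereq hooks: HBASE-5494 table lock, a persisted balancer switch, a trash folder.
  protected abstract boolean tryWriteLockTable(byte[] tableName, long timeoutMs);
  protected abstract void unlockTable(byte[] tableName);
  protected abstract boolean setBalancerOn(boolean on);  // returns the previous state
  protected abstract void writeIntentToZk(byte[] tableName, byte[] startRow,
      byte[] endRow, boolean dropData);
  protected abstract List<HRegionInfo> regionsInRange(byte[] tableName,
      byte[] startRow, byte[] endRow) throws IOException;
  protected abstract void closeInParallel(List<HRegionInfo> regions) throws IOException;
  protected abstract void moveStoreFilesUnder(HRegionInfo merged,
      List<HRegionInfo> old) throws IOException;
  protected abstract void editMetaAndAssign(HRegionInfo merged,
      List<HRegionInfo> old) throws IOException;
  protected abstract void moveToTrash(List<HRegionInfo> old) throws IOException;

  public void run(byte[] tableName, byte[] startRow, byte[] endRow, boolean dropData)
      throws IOException {
    if (!tryWriteLockTable(tableName, 60 * 1000)) {
      throw new IOException("Timed out taking the write lock on the table");
    }
    boolean balancerWasOn = false;
    try {
      writeIntentToZk(tableName, startRow, endRow, dropData);
      balancerWasOn = setBalancerOn(false);
      List<HRegionInfo> regions = regionsInRange(tableName, startRow, endRow);
      // One new region spanning the whole range.
      HRegionInfo merged = new HRegionInfo(tableName,
          regions.get(0).getStartKey(),
          regions.get(regions.size() - 1).getEndKey());
      closeInParallel(regions);
      if (!dropData) {
        moveStoreFilesUnder(merged, regions);  // pure merge keeps the data
      }
      editMetaAndAssign(merged, regions);      // add merged row, remove old rows
      moveToTrash(regions);                    // prereq: a trash folder under /hbase
    } finally {
      if (balancerWasOn) {
        setBalancerOn(true);
      }
      unlockTable(tableName);
    }
  }
}
{code}

The various "write state up into zk" steps from the outline would hang off those 
hooks.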


                
> Provide capability to delete named region
> -----------------------------------------
>
>                 Key: HBASE-4991
>                 URL: https://issues.apache.org/jira/browse/HBASE-4991
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Ted Yu
>            Assignee: Mubarak Seyed
>             Fix For: 0.94.0
>
>         Attachments: HBASE-4991.trunk.v1.patch, HBASE-4991.trunk.v2.patch
>
>
> See discussion titled 'Able to control routing to Solr shards or not' on 
> lily-discuss
> User may want to quickly dispose of out of date records by deleting specific 
> regions. 
