[ https://issues.apache.org/jira/browse/HBASE-24664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149092#comment-17149092 ]

Duo Zhang commented on HBASE-24664:
-----------------------------------

It has been the old behavior for a long time. So the question here should not 
be 'I'm not sure if there is a scenario where the old one performs better'; it 
should be 'I'm not sure if the new one always performs better.'

We should have a way to let users just keep the old behavior. Changing this may 
lead to a split storm on an existing cluster...
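
For reference, one way to let users keep the old behavior is to pin the split 
policy explicitly, either cluster-wide via the 
hbase.regionserver.region.split.policy setting or per table. A minimal sketch 
(the table name and column family here are hypothetical examples, not part of 
this issue):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class PinOldSplitPolicy {
      public static void main(String[] args) throws Exception {
        try (Connection conn =
                 ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor td = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("example_table"))   // hypothetical table
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
              // Pin the pre-change policy so a new default cannot alter split
              // behavior for this table; the same class name can also be set
              // cluster-wide via hbase.regionserver.region.split.policy.
              .setRegionSplitPolicyClassName(
                  "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
              .build();
          admin.createTable(td);
        }
      }
    }

Whatever the new default ends up being, an explicit per-table policy like this 
should shield existing tables from a behavior change on upgrade.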

> Split regions by overall region size rather than by only one store's size
> --------------------------------------------------------------------------
>
>                 Key: HBASE-24664
>                 URL: https://issues.apache.org/jira/browse/HBASE-24664
>             Project: HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: Zheng Wang
>            Assignee: Zheng Wang
>            Priority: Major
>
> As a distributed system, HBase distributes load in units of regions, so if a 
> region grows too big, it brings some negative effects, such as:
>  1. Harder to homogenize disk usage (considering locality)
>  2. Might cost more time on region opening
>  3. After a split, the daughter regions might incur more compaction I/O for 
> a short time (if writes are spread evenly)
> HBASE-24530 introduced a new SteppingAllStoresSizeSplitPolicy, and as 
> discussed in its comments and the related 
> [thread|https://lists.apache.org/thread.html/r08a8103e2532eb667a0fcb4efa8a4117b3f82e6251bc4bd0bc157c26%40%3Cdev.hbase.apache.org%3E],
>  we should do the follow-on tasks in this new issue:
>  1. Set SteppingAllStoresSizeSplitPolicy as the default
>  2. Mark SteppingSplitPolicy and IncreasingToUpperBoundRegionSplitPolicy as 
> deprecated
>  3. Fix ConstantSizeRegionSplitPolicy to also split regions by overall 
> region size (a sketch of the size-check difference follows below)
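
To make the behavioral difference in items 1 and 3 concrete, here is a 
simplified, self-contained sketch (not HBase's actual implementation) 
contrasting the old check, which looks only at the largest single store, with 
the proposed overall-region-size check:

    import java.util.List;

    public class SplitCheckSketch {

      // Old behavior: split when the biggest single store crosses the threshold.
      static boolean shouldSplitByLargestStore(List<Long> storeSizes, long threshold) {
        long largest = storeSizes.stream().mapToLong(Long::longValue).max().orElse(0L);
        return largest > threshold;
      }

      // Proposed behavior: split when the overall region size
      // (sum of all stores) crosses the threshold.
      static boolean shouldSplitByOverallSize(List<Long> storeSizes, long threshold) {
        long total = storeSizes.stream().mapToLong(Long::longValue).sum();
        return total > threshold;
      }

      public static void main(String[] args) {
        // Three stores of 4 GB each against a 10 GB threshold: the old check
        // does not split (4 GB < 10 GB), the new one does (12 GB > 10 GB).
        // This is exactly the kind of case where flipping the default could
        // trigger extra splits on an existing cluster.
        List<Long> stores = List.of(4L << 30, 4L << 30, 4L << 30);
        long threshold = 10L << 30;
        System.out.println("largest-store check: "
            + shouldSplitByLargestStore(stores, threshold));
        System.out.println("overall-size check:  "
            + shouldSplitByOverallSize(stores, threshold));
      }
    }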


