[
https://issues.apache.org/jira/browse/HBASE-10501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13901172#comment-13901172
]
Lars Hofhansl commented on HBASE-10501:
---------------------------------------
Cubing could indeed be better. I was thinking not to change too much, but
maybe we should.
So with initialSize = 2*flushSize and cubing we'd get the following by default
(128m memstores, 10g regions):
256m, 2048m, 6912m, 10g
With squaring we'd get
256m, 1024m, 2304m, 4096m, 6400m, 9216m, 10g
With 4*flushSize and squaring it's:
512m, 2048m, 4608m, 8192m, 10g
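For concreteness, a minimal sketch that reproduces these schedules (the
initialSize/exponent parameters are illustrative, not actual policy fields):

  // Illustrative only: size a region must reach before we split, for a
  // given initial size and growth exponent, clamped at the max file size.
  static long splitSize(int regionCount, long initialSize, int exponent,
      long maxFileSize) {
    double size = initialSize * Math.pow(regionCount, exponent);
    return (long) Math.min(size, (double) maxFileSize);
  }

  // 2*flushSize + cubing (128m flush, 10g max) -> 256m, 2048m, 6912m, 10g:
  // splitSize(n, 2 * (128L << 20), 3, 10L << 30) for n = 1..4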
Not sure. Looks like 2*flushSize + cubing is best: when the cluster is
sparsely used we spread quickly, but we also grow quickly once we start seeing
multiple regions. Let's do that then?
As I said, this is fuzzy and there is no right or wrong :)
Do we have to worry about numerical overflow? We'd blow past 2^63 after a few
thousand regions, depending on flush size. Maybe clamp to the max file size
after 100 regions.
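Something like this would do (hypothetical helper; doubles sidestep long
overflow, and the clamp kicks in long before precision matters):

  // Hypothetical guard: past 100 regions the answer is the max file size
  // anyway, so short-circuit instead of risking overflow in the cubing.
  static long sizeToCheck(int regionCount, long initialSize, long maxFileSize) {
    if (regionCount > 100) {
      return maxFileSize;
    }
    double size = initialSize * Math.pow(regionCount, 3);
    return (long) Math.min(size, (double) maxFileSize);
  }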
One bit of information: in our test on a 9 RS/DN cluster we loaded 1bn rows
(250gb) and ended up with 171 regions, i.e. 1.4g on average and 19 per region
server. Definitely not good - and that is with a 256mb flush size and a 10g
max file size. Now, 250gb is not exactly a lot of data, but it illustrates the
point.
(That is far more regions than our math here predicts; presumably because
some of the RSs carry fewer regions at times, so those split even faster.)
Maybe I can get our perf folks to do some testing.
> Make IncreasingToUpperBoundRegionSplitPolicy configurable
> ---------------------------------------------------------
>
> Key: HBASE-10501
> URL: https://issues.apache.org/jira/browse/HBASE-10501
> Project: HBase
> Issue Type: Bug
> Reporter: Lars Hofhansl
> Attachments: 10501-0.94-v2.txt, 10501-0.94.txt
>
>
> During some (admittedly artificial) load testing we found a large amount of
> split activity, which we tracked down to the
> IncreasingToUpperBoundRegionSplitPolicy.
> The current logic is this (from the comments):
> "regions that are on this server that all are of the same table, squared,
> times the region flush size OR the maximum region split size, whichever is
> smaller"
> So with a flush size of 128mb and a max file size of 20gb, we'd need 13
> regions of the same table on an RS to reach the max size.
> With a 10gb max file size it is still 9 regions of the same table.
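> (A quick way to see this: the size to check is min(regionCount^2 *
> flushSize, maxFileSize), so we need ceil(sqrt(maxFileSize / flushSize))
> regions: ceil(sqrt(160)) = 13 for 20gb, ceil(sqrt(80)) = 9 for 10gb.)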
> Considering that the number of regions that an RS can carry is limited and
> there might be multiple tables, this should be more configurable.
> I think the squaring is smart and we do not need to change it.
> We could
> * Make the start size configurable and default it to the flush size
> * Add a multiplier for the initial size, i.e. start with n * flushSize
> * Also change the default to start with 2*flushSize
> Of course one can override the default split policy, but these seem like
> simple tweaks; a possible shape is sketched below.
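> A hypothetical shape for the first two (the property name is invented for
> illustration; conf is the usual Hadoop Configuration):
>
>   // in the split policy's configureForRegion(), with flushSize already
>   // read from the table descriptor:
>   initialSize = conf.getLong("hbase.increasing.policy.initial.size",
>       2 * flushSize);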
> Or we could instead set the goal of how many regions of the same table would
> need to be present in order to reach the max size. In that case we'd start
> with maxSize/goal^2. So if max size is 20gb and the goal is three we'd start
> with 20g/9 = 2.2g for the initial region size.
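> A sketch of that variant (the goal knob and method name are made up for
> illustration):
>
>   // goal = number of same-table regions needed to reach maxFileSize
>   static long goalBasedInitialSize(long maxFileSize, int goal) {
>     return maxFileSize / ((long) goal * goal);  // 20g / 3^2 = ~2.2g
>   }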
> [~stack], I'm especially interested in your opinion.