[
https://issues.apache.org/jira/browse/HBASE-11165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14137613#comment-14137613
]
Virag Kothari commented on HBASE-11165:
---------------------------------------
bq. fully compacted meta with single-versioned cells 1 row in meta takes up
7-10k
The 7GB for 1M regions is with 10 versions of meta. Meta keeps 10 versions by
default and we didn't change that for our experiments. But I see the confusion:
the attached pdf says 10 versions while the shared google doc says one version.
Sorry about that.
Also, 7GB was the size of the store file on HDFS. The table was created simply
with HexStringSplit and nothing extra. And this was on 0.98 (I think the master
code adds some region-replica info to meta) with zk-less assignment.
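As a back-of-envelope check of how these numbers line up (a sketch only; the 7GB and 1M figures are approximate, and the per-version byte count is derived, not measured):

```python
# Rough sizing check: 7 GB HDFS store file for ~1M meta rows, 10 versions each.
# All figures approximate; per-version size is derived from the totals above.
regions = 1_000_000
versions = 10                      # meta's default MAX_VERSIONS at the time
store_file_bytes = 7 * 1024**3     # ~7 GB on HDFS

bytes_per_row = store_file_bytes / regions       # ~7.5 KB per meta row
bytes_per_version = bytes_per_row / versions     # ~750 bytes per version

print(f"{bytes_per_row:.0f} bytes/row, {bytes_per_version:.0f} bytes/version")
```

So the ~7-10k per row quoted above is consistent with 10 versions at roughly 700-750 bytes each, not with single-versioned cells of that size.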
bq. I'd appreciate a lot if you could share a sample representative row from
your meta, so we can see the typical size of elements in it?
On prod, we currently run 0.94. I can check whether we can share a sample row
from there.
> Scaling so cluster can host 1M regions and beyond (50M regions?)
> ----------------------------------------------------------------
>
> Key: HBASE-11165
> URL: https://issues.apache.org/jira/browse/HBASE-11165
> Project: HBase
> Issue Type: Brainstorming
> Reporter: stack
> Attachments: HBASE-11165.zip, Region Scalability test.pdf,
> zk_less_assignment_comparison_2.pdf
>
>
> This discussion issue comes out of "Co-locate Meta And Master HBASE-10569"
> and comments on the doc posted there.
> A user -- our Francis Liu -- needs to be able to scale a cluster to do 1M
> regions maybe even 50M later. This issue is about discussing how we will do
> that (or if not 50M on a cluster, how otherwise we can attain same end).
> More detail to follow.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)