[
https://issues.apache.org/jira/browse/HBASE-11165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14120535#comment-14120535
]
Francis Liu commented on HBASE-11165:
-------------------------------------
{quote}
If we can get into the same scaling range as HDFS's namenode then I don't see
the urgency to split meta.
{quote}
[~eclark] We already can't scale the number of regions anywhere close to the
number of files HDFS can handle. See attached doc. Also, the HDFS folks
internally will be working to support HBase's scaling requirements.
{quote}
Is the NN you mentioned with 250M files is solely dedicated to HBase
installation?
{quote}
The NN I mentioned belongs to a different cluster, so its files-per-region
ratio doesn't carry over directly.
{quote}
I mean, could the assumption be made that an HBase cluster with 1M or more
regions consumes about 250M files in HDFS, so roughly 250 files per region, or
would that be too bold an assumption?
{quote}
Yeah, that wouldn't be accurate. It's hard to come up with a good estimate
because it's use-case dependent. Though even for our use case I'm making
estimates, as things are still in flux. I'll share something once we have more
data.
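For reference, the back-of-envelope arithmetic behind the question above can be sketched as follows. The 250 files/region ratio is only the questioner's assumption (250M files / 1M regions), which the comment explicitly says would not be an accurate general estimate; the function name and default here are illustrative, not from any HBase code:

```python
def estimated_nn_files(regions, files_per_region=250):
    """Rough NameNode file-count estimate for an HBase cluster.

    Illustrative only: files_per_region varies widely by use case
    (flush frequency, compaction policy, column-family count), so
    this default is the ratio implied in the question, not a rule.
    """
    return regions * files_per_region

# 1M regions at ~250 files/region implies ~250M NameNode file objects,
# the scale of the NN mentioned earlier in the thread.
print(estimated_nn_files(1_000_000))
```

The point of the exchange is that no single `files_per_region` constant holds across clusters, so any such estimate has to be recalibrated per use case.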
> Scaling so cluster can host 1M regions and beyond (50M regions?)
> ----------------------------------------------------------------
>
> Key: HBASE-11165
> URL: https://issues.apache.org/jira/browse/HBASE-11165
> Project: HBase
> Issue Type: Brainstorming
> Reporter: stack
> Attachments: HBASE-11165.zip, Region Scalability test.pdf,
> zk_less_assignment_comparison_2.pdf
>
>
> This discussion issue comes out of "Co-locate Meta And Master HBASE-10569"
> and comments on the doc posted there.
> A user -- our Francis Liu -- needs to be able to scale a cluster to do 1M
> regions maybe even 50M later. This issue is about discussing how we will do
> that (or if not 50M on a cluster, how otherwise we can attain same end).
> More detail to follow.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)