[ https://issues.apache.org/jira/browse/HBASE-11165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14104809#comment-14104809 ]

Francis Liu commented on HBASE-11165:
-------------------------------------

{quote}
If the 0.98 is different to what folks want for 2.0, as per Andy lets split 
this issue.
{quote}
We plan to start working on splitting meta the week after next, or maybe even 
next week. If there's no clear conclusion on the approach, we will likely bring 
back root for 0.98. If there's an agreed-upon solution before then, we'd be 
happy to work together and collaborate to get it done. I'm hoping to have 
splittable meta stable soon so we can avoid having to do two 
backward-incompatible rollouts. [~apurtell] hope you're okay with this.
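
To make the compatibility concern concrete, here's a rough, purely illustrative 
sketch of the client-side difference between the two layouts. The names 
({{CatalogClient}}, {{entryPoint}}, {{lookup}}, {{RegionLocation}}) are invented 
for the sketch and are not actual HBase APIs. Bringing root back adds a second 
catalog hop to every cold lookup, which is why rolling one layout into 0.98 and 
then a different one into 2.0 would mean two backward-incompatible rollouts.

{code:java}
// Hypothetical sketch only -- these are NOT HBase client APIs, just made-up
// names to illustrate the one-hop vs. two-hop catalog lookup.
interface CatalogClient {
  /** Well-known location of a catalog table's entry point (e.g. published somewhere like ZK). */
  RegionLocation entryPoint(String catalogTable);
  /** Ask the server hosting 'where' for the catalog row covering rowKey. */
  RegionLocation lookup(RegionLocation where, String catalogTable, byte[] rowKey);
}

final class RegionLocation {
  final String serverName;
  final byte[] startKey;
  RegionLocation(String serverName, byte[] startKey) {
    this.serverName = serverName;
    this.startKey = startKey;
  }
}

final class CatalogLookupSketch {
  // Without root: one catalog hop, since meta is a single unsplittable table location.
  static RegionLocation locateMetaOnly(CatalogClient c, byte[] userRow) {
    RegionLocation meta = c.entryPoint("hbase:meta");
    return c.lookup(meta, "hbase:meta", userRow);
  }

  // With root brought back: two catalog hops, since meta can now split and
  // root indexes which meta region covers the row.
  static RegionLocation locateViaRoot(CatalogClient c, byte[] userRow) {
    RegionLocation root = c.entryPoint("hbase:root");
    RegionLocation metaRegion = c.lookup(root, "hbase:root", userRow);
    return c.lookup(metaRegion, "hbase:meta", userRow);
  }
}
{code}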

{quote}
We need to put stake in grounds soon for hbase 2.0 cluster topology.
{quote}
So far it seems to me the driving requirements are:

+ scale
+ high availability
+ stop using ZooKeeper entirely, or at least not for persistence (?)

There are a lot of unknowns and specific requirements may change. Let's pile on 
the ideas and build a roadmap, iteratively experimenting and adding features 
with clear gains.


> Scaling so cluster can host 1M regions and beyond (50M regions?)
> ----------------------------------------------------------------
>
>                 Key: HBASE-11165
>                 URL: https://issues.apache.org/jira/browse/HBASE-11165
>             Project: HBase
>          Issue Type: Brainstorming
>            Reporter: stack
>         Attachments: HBASE-11165.zip, Region Scalability test.pdf, 
> zk_less_assignment_comparison_2.pdf
>
>
> This discussion issue comes out of "Co-locate Meta And Master HBASE-10569" 
> and comments on the doc posted there.
> A user -- our Francis Liu -- needs to be able to scale a cluster to do 1M 
> regions maybe even 50M later.  This issue is about discussing how we will do 
> that (or if not 50M on a cluster, how otherwise we can attain same end).
> More detail to follow.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
