[ 
https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13804212#comment-13804212
 ] 

Mark Miller commented on SOLR-5381:
-----------------------------------

bq. Do we have a choice of not scaling to a VERY LARGE cluster ? I think it 
will be suicidal.

My point is simple: I have said, yes, for a cluster of over 10k nodes, some 
extra hoops are necessary.

We are not currently stable at a cluster 1/10 that size or less. Jumping 
through 10k-node hoops when we can't properly scale to far fewer nodes just 
seems like introducing more complexity and more opportunity for new bugs 
before we are even stable at a much smaller scale - a scale that works very 
nicely with something close to the current architecture, and one that we 
have been slowly hardening.

> Split Clusterstate and scale 
> -----------------------------
>
>                 Key: SOLR-5381
>                 URL: https://issues.apache.org/jira/browse/SOLR-5381
>             Project: Solr
>          Issue Type: Improvement
>          Components: SolrCloud
>            Reporter: Noble Paul
>            Assignee: Noble Paul
>   Original Estimate: 2,016h
>  Remaining Estimate: 2,016h
>
> clusterstate.json is a single point of contention for all components in 
> SolrCloud. It would be hard to scale SolrCloud beyond a few thousand nodes 
> because there are too many updates and too many nodes that need to be 
> notified of the changes. As the number of nodes goes up, the size of 
> clusterstate.json keeps growing and will soon exceed the limit imposed by ZK.
> The first step is to store the shard information in separate ZK nodes so 
> that each Solr node can listen to just the shard node it belongs to. We may 
> also need to split each collection into its own ZK node, with 
> clusterstate.json holding just the names of the collections.
> This is an umbrella issue.
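
The split described above can be sketched as a pure data transformation: the 
monolithic clusterstate.json becomes one state document per collection, and 
the top-level file keeps only the collection names. This is a hypothetical 
illustration of the proposed layout, not Solr code; the function name, the 
"collections" key, and the sample shard layout are assumptions.

```python
import json

def split_clusterstate(clusterstate_json):
    """Split a monolithic clusterstate.json string into
    (top_level_json, {collection_name: per_collection_json})."""
    state = json.loads(clusterstate_json)
    # One standalone state document per collection, so a node only
    # needs to watch the collections (or shards) it participates in.
    per_collection = {
        name: json.dumps({name: coll_state})
        for name, coll_state in state.items()
    }
    # The top-level document shrinks to just the collection names.
    top_level = json.dumps({"collections": sorted(state)})
    return top_level, per_collection

# Example: two collections, each with a single shard.
monolithic = json.dumps({
    "coll1": {"shards": {"shard1": {"replicas": {}}}},
    "coll2": {"shards": {"shard1": {"replicas": {}}}},
})
top, parts = split_clusterstate(monolithic)
```

With this layout, an update to one collection touches only that collection's 
ZK node, so watchers on other collections are not notified.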



--
This message was sent by Atlassian JIRA
(v6.1#6144)
