If you have 15K collections, I guess you are doing custom sharding rather than 
using collection sharding.

My first approach was the same as yours; in fact, I had the same lot-of-cores 
issue. I used -Djute.maxbuffer without any problems.
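For reference, a minimal sketch of how that flag gets set (the 10 MB value and file paths are just examples for a typical Solr 4.x setup, adjust for yours). Note it has to match on both the ZooKeeper servers and the Solr clients:

```shell
# Raise jute.maxbuffer (default 1 MB) to 10 MB. The setting is only
# effective if ZooKeeper servers AND all clients (Solr) agree on it.

# ZooKeeper side: add to conf/java.env (picked up as JVMFLAGS)
export JVMFLAGS="$JVMFLAGS -Djute.maxbuffer=10485760"

# Solr side: pass the same value when starting Solr
java -Djute.maxbuffer=10485760 -jar start.jar
```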

In recent versions, Solr supports sharding using a prefix in your document ID 
(the compositeId router), so I replaced my lot of cores with a single 
collection with shards. With the SPLITSHARD feature you can now split any 
shard that reaches a considerable size.
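A rough sketch of what that workflow looks like via the Collections API (host, collection, and shard names are placeholders):

```shell
# Create a collection with 2 shards; compositeId routing is the
# default router when numShards is specified
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&router.name=compositeId"

# Index documents with a routing prefix in the ID: all documents
# sharing the prefix before the '!' land on the same shard, e.g.
#   id = "tenant1!doc42"

# Later, split a shard that has grown too large into two sub-shards
curl "http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1"
```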

The downside: I don't know whether the SPLITSHARD feature honors the 
compositeId routing defined at the collection's creation.

My recommendation: if you don't want the lot-of-cores issue to bite you with 
some weird or anomalous behavior, reduce the number of cores as much as 
possible and split shards as needed before performance hurts your environment.

-- 
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Monday, September 9, 2013 at 3:09 PM, diyun2008 wrote:

> I just found the option "-Djute.maxbuffer" in the zookeeper admin document.
> But it's listed under "Unsafe Options", and I can't really tell what that
> means. Could it cause stability problems? Does anyone have real practical
> experience using this parameter? I will have at least 15K collections,
> or I will have to merge them down to a smaller number.
> 
> 
> 
> 
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Solr4-4-or-zookeeper-3-4-5-do-not-support-too-many-collections-more-than-600-tp4088689p4088878.html
> Sent from the Solr - User mailing list archive at Nabble.com 
> (http://Nabble.com).
> 
> 

