Yes, we will think about how to reorganise the application.

Thanks
Christoph

-----Original Message-----
From: Joseph Obernberger [mailto:joseph.obernber...@gmail.com]
Sent: Sunday, 31 August 2014 16:58
To: solr-user@lucene.apache.org
Subject: Re: Scaling to large Number of Collections

Could you add one or more fields to your application and use those instead of 
creating collections/cores?  When you execute a search, instead of picking a 
core, search a single large core and add a field that contains some 
core ID.
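A minimal Java sketch of that idea: rather than routing each tenant to its own core, restrict queries on one large collection with a filter on a discriminator field. The field name `core_id` and the class/method names are assumptions for illustration, not part of the original setup.

```java
// Sketch: replace per-tenant cores with a discriminator field on one
// large collection. The "core_id" field name is an assumption.
public class CoreIdFilter {

    // Builds a Solr filter-query clause restricting results to one
    // logical "core"; would be passed as the fq parameter of a query.
    static String filterQuery(String coreId) {
        return "core_id:\"" + coreId + "\"";
    }

    public static void main(String[] args) {
        // e.g. q=title:report&fq=core_id:"customer42"
        System.out.println("fq=" + filterQuery("customer42"));
    }
}
```

Because `fq` clauses are cached independently by Solr's filter cache, repeated searches scoped to the same logical core stay cheap.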

-Joe
http://www.lovehorsepower.com


On Sun, Aug 31, 2014 at 8:23 AM, Mark Miller <markrmil...@gmail.com> wrote:

>
> > On Aug 31, 2014, at 4:04 AM, Christoph Schmidt <
> christoph.schm...@moresophy.de> wrote:
> >
> > we see at least two problems when scaling to a large number of
> > collections. I would like to ask the community whether they are known
> > and maybe already addressed in development:
> > We have a SolrCloud running with the following numbers:
> > - 5 servers (each 24 CPUs, 128 RAM)
> > - 13,000 collections with 25,000 SolrCores in the cloud
> > The cloud is working fine, but we see two problems if we try to scale
> > further:
> > 1. Resource consumption of native system threads
> > We see that each collection opens at least two threads: one for
> > ZooKeeper (coreZkRegister-1-thread-5154) and one for the searcher
> > (searcherExecutor-28357-thread-1). We will run into "OutOfMemoryError:
> > unable to create new native thread". Maybe the architecture could be
> > changed here to use thread pools?
> > 2. The shutdown and startup of one server in the SolrCloud takes
> > 2 hours, so a rolling restart takes about 10 hours. The problem seems
> > to be that leader election is "linear": the Overseer works core by
> > core, and the organisation of the cloud is not done in parallel or
> > distributed. Is this already addressed by
> > https://issues.apache.org/jira/browse/SOLR-5473, or is there more needed?
>
> 2. No, but it should have been fixed by another issue that will be in 4.10.
>
>
> - Mark
> http://about.me/markrmiller
