Brian:

Forget what I wrote about LotsOfCores, then; it was introduced in 4.2.

Erick

On Fri, Nov 7, 2014 at 4:39 PM, Brian Call
<brian.c...@soterawireless.com> wrote:
> Half of those indexes max out at about 1.3 GB; the other half will always stay 
> very small, under 5 MB total. We keep an index for “raw” data and another index 
> for events and “trended” data. Possible design changes may push this up to 
> 4-5 GB per index, but definitely no more than that.
>
> We’re indexing streaming patient vitals data from many different devices 
> simultaneously, hence the high number of necessary indices. There are also 
> constraints around patient identity confirmation, which require creating 
> multiple indices to keep unconfirmed patient data separate from confirmed 
> data.
>
> Also, we’re not using Solr, only raw Lucene. The indices remain open until 
> the streaming data has stopped and a user has removed the related session 
> from the UI.
>
> Yes, it’s a necessary kind of scary…
>
> -Brian
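
For the pattern Brian describes (one raw-Lucene index per streaming session, closed as soon as the session is removed from the UI), a minimal per-session holder might look like the sketch below. The SessionIndex name is hypothetical, the caller is assumed to supply a fresh IndexWriterConfig for each writer, and FSDirectory.open(File) assumes a Lucene 4.x API (5.x and later take a Path).

import java.io.File;
import java.io.IOException;

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Hypothetical holder for one streaming session's index: opened when the
// session starts, closed when the session is removed from the UI so its
// file handles and indexing buffers are released immediately.
class SessionIndex implements AutoCloseable {
    private final Directory dir;
    private final IndexWriter writer;

    // Note: an IndexWriterConfig instance cannot be shared between writers,
    // so the caller should create a new one per session.
    SessionIndex(File indexPath, IndexWriterConfig config) throws IOException {
        this.dir = FSDirectory.open(indexPath);
        this.writer = new IndexWriter(dir, config);
    }

    IndexWriter writer() {
        return writer;
    }

    @Override
    public void close() throws IOException {
        writer.close();  // commits pending changes and releases segment file handles
        dir.close();
    }
}

Keeping the holder's lifetime tied to the UI session is what bounds the open file handles and heap-resident indexing buffers to the number of live sessions.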
>
> On Nov 7, 2014, at 4:20 PM, Erick Erickson <erickerick...@gmail.com> wrote:
>
>> bq: Our server runs many hundreds (soon to be thousands) of indexes
>> simultaneously
>>
>> This is actually kind of scary. How do you expect to fit "many thousands"
>> of indexes into memory? Raising the per-process virtual memory limit to
>> unlimited still doesn't cover the RAM the Solr process needs for things
>> like caches (top-level and per-segment), sort lists, and so on. How many
>> GB of indexes are we talking about here? Note that raw index size is not
>> a great guide to RAM requirements, but I'm just trying to get a handle
>> on the scale you're at. You're not, for instance, going to handle
>> terabyte-scale indexes on a single machine satisfactorily, IMO.
>>
>> If your usage pattern is that a user signs on, works with their index for
>> a while, then signs off, you might get some joy out of the LotsOfCores
>> option. That said, this option has NOT been validated on cloud setups,
>> where I expect it'll have problems.
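
Solr's LotsOfCores option works by keeping a bounded, transient cache of open cores and closing the least-recently-used ones when the limit is hit. Since Brian is on raw Lucene rather than Solr, a rough analogue is sketched below; it assumes the hypothetical SessionIndex holder from earlier and only makes sense if the evicted sessions are idle (evicting an index that a live stream is still writing to would be wrong).

import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

// Rough raw-Lucene analogue of a transient-core cache: keep at most maxOpen
// indexes open at once, closing the least-recently-used one on overflow.
class OpenIndexCache {
    private final Map<String, SessionIndex> open;

    OpenIndexCache(final int maxOpen) {
        // Access-ordered LinkedHashMap: iteration order runs from least- to
        // most-recently used, so the eldest entry is the eviction candidate.
        this.open = new LinkedHashMap<String, SessionIndex>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, SessionIndex> eldest) {
                if (size() > maxOpen) {
                    try {
                        eldest.getValue().close();  // release heap and file handles
                    } catch (IOException e) {
                        // log and carry on; the entry is evicted either way
                    }
                    return true;
                }
                return false;
            }
        };
    }

    synchronized SessionIndex get(String sessionId) {
        return open.get(sessionId);
    }

    synchronized void put(String sessionId, SessionIndex index) {
        open.put(sessionId, index);
    }
}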
>>
>> FWIW,
>> Erick
>>
>> On Fri, Nov 7, 2014 at 2:24 PM, Uwe Schindler <u...@thetaphi.de> wrote:
>>> Hi,
>>>
>>>> That error can also be thrown when the number of open files exceeds the
>>>> given limit. "OutOfMemory" should really have been named
>>>> "OutOfResources".
>>>
>>> This was already changed: Lucene no longer reports it as an OOM (the 
>>> OutOfMemoryError is removed from the stack trace), and it adds useful 
>>> information instead. So I think the version of Lucene that produced this 
>>> exception is older than 4.9: 
>>> https://issues.apache.org/jira/browse/LUCENE-5673
>>>
>>> Uwe
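
On the open-file point quoted above: with the default codec each non-compound Lucene segment is roughly a dozen files, so hundreds of open indexes multiply into a lot of descriptors even before raising any limits. One knob that helps is forcing merged segments into compound (.cfs) format. The helper below is a hypothetical wrapper for illustration; TieredMergePolicy and setNoCFSRatio are standard Lucene APIs.

import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.TieredMergePolicy;

final class CompoundFileTuning {
    // Make the merge policy always produce compound-format (.cfs) segments,
    // trading a little indexing/search overhead for far fewer open files
    // per index.
    static IndexWriterConfig preferCompoundFiles(IndexWriterConfig config) {
        TieredMergePolicy mergePolicy = new TieredMergePolicy();
        mergePolicy.setNoCFSRatio(1.0);  // 1.0 = compound format for every merged segment
        config.setMergePolicy(mergePolicy);
        return config;
    }
}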
>>>
>>>
>>
>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscr...@lucene.apache.org
For additional commands, e-mail: java-user-h...@lucene.apache.org
