Oh ok. That makes sense. Thanks.

Otis Gospodnetic wrote:
> 
> Oleg, you can't explicitly say "N GB for the index".  Wunder was just
> estimating how much RAM each piece might need; the RAM "for the index" is
> simply left unallocated so the OS can use it to cache the index files.
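In practice that means capping the Solr JVM heap and leaving the rest of the machine's RAM free for the OS page cache. A minimal sketch, assuming the Jetty start.jar launcher that ships with Solr and a 2 GB heap (both illustrative):

```shell
# Cap the JVM heap at 2 GB; whatever RAM is left over is what the
# OS uses to cache the index files -- the "4 GB for the index".
java -Xms2g -Xmx2g -jar start.jar
```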
>  
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
> 
> ----- Original Message ----
> From: oleg_gnatovskiy <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Wednesday, April 16, 2008 2:05:23 PM
> Subject: Re: too many queries?
> 
> 
Hello. I am having a similar problem to the OP. I see that you recommended
setting 4GB for the index and 2 for Solr. How do I allocate memory for the
index? I was under the impression that Solr did not support a RAMIndex.
> 
> 
> Walter Underwood wrote:
>> 
>> Do it. 32-bit OSes went out of style five years ago in server-land.
>> 
>> I would start with 8GB of RAM. 4GB for your index, 2 for Solr, 1 for
>> the OS and 1 for other processes. That might be tight. 12GB would
>> be a lot better.
>> 
>> wunder
>> 
>> On 4/16/08 7:50 AM, "Jonathan Ariel" <[EMAIL PROTECTED]> wrote:
>> 
>>> In order to do that I have to change to a 64-bit OS so I can have more
>>> than 4 GB of RAM. Is there any way to see how long it takes Solr to
>>> warm up the searcher?
>>> 
>>> On Wed, Apr 16, 2008 at 11:40 AM, Walter Underwood
>>> <[EMAIL PROTECTED]>
>>> wrote:
>>> 
>>>> A commit every two minutes means that the Solr caches are flushed
>>>> before they even start to stabilize. Two things to try:
>>>> 
>>>> * commit less often, 5 minutes or 10 minutes
>>>> * have enough RAM that your entire index can fit in OS file buffers
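If the commits come from Solr's autocommit rather than from the client (an assumption; adjust wherever the commits are actually issued), the interval can be raised in solrconfig.xml, e.g.:

```xml
<!-- solrconfig.xml: commit at most every 10 minutes (value illustrative) -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>600000</maxTime> <!-- milliseconds -->
  </autoCommit>
</updateHandler>
```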
>>>> 
>>>> wunder
>>>> 
>>>> On 4/16/08 6:27 AM, "Jonathan Ariel" <[EMAIL PROTECTED]> wrote:
>>>> 
>>>>> So I counted the number of distinct values that I have for each field
>>>>> that I want to facet on. In total it's around 100,000. I tried a
>>>>> filterCache of 120,000, but it seems like too much because the server
>>>>> went down. I will try with less, around 75,000, and let you know.
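For reference, the filterCache is configured in solrconfig.xml, and each cached filter can cost up to maxDoc/8 bytes (a bitset over every document in the index), which is worth budgeting before picking a size. The numbers below are illustrative, not recommendations:

```xml
<!-- solrconfig.xml -->
<filterCache
    class="solr.LRUCache"
    size="75000"
    initialSize="75000"
    autowarmCount="10000"/>
```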
>>>>> 
>>>>> How do you partition the data into a static set and a dynamic set, and
>>>>> then combine them at query time? Do you have a link to read about that?
>>>>> 
>>>>> 
>>>>> 
>>>>> On Tue, Apr 15, 2008 at 7:21 PM, Mike Klaas <[EMAIL PROTECTED]>
>>>> wrote:
>>>>> 
>>>>>> On 15-Apr-08, at 5:38 AM, Jonathan Ariel wrote:
>>>>>> 
>>>>>>> My index is 4GB on disk. My servers have 8 GB of RAM each (the OS is
>>>>>>> 32 bits).
>>>>>>> It is optimized twice a day, and it takes around 15 minutes to
>>>>>>> optimize. The index is updated (commits) every two minutes. There are
>>>>>>> between 10 and 100 inserts/updates every 2 minutes.
>>>>>>> 
>>>>>> 
>>>>>> Caching could help--you should definitely start there.
>>>>>> 
The commit every 2 minutes could end up being an insurmountable
>>>> problem.
>>>>>>  You may have to partition your data into a large, mostly static set
>>>> and a
>>>>>> small dynamic set, combining the results at query time.
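One way to do that combining, sketched here as an assumption rather than something Mike specified, is Solr's distributed search: keep the static and dynamic sets in separate Solr instances and merge them per query with the shards parameter (host names below are hypothetical):

```shell
# Query both indexes and let Solr merge the two result sets.
curl 'http://static-host:8983/solr/select?q=foo&shards=static-host:8983/solr,dynamic-host:8983/solr'
```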
>>>>>> 
>>>>>> -Mike
>>>>>> 
>>>> 
>>>> 
>> 
>> 
>> 
> 
> -- 
> View this message in context:
> http://www.nabble.com/too-many-queries--tp16690870p16727264.html
> Sent from the Solr - User mailing list archive at Nabble.com.
> 

-- 
View this message in context: 
http://www.nabble.com/too-many-queries--tp16690870p16732932.html
Sent from the Solr - User mailing list archive at Nabble.com.
