Thanks, this is a very interesting idea. But my index folder is about 30 GB,
and the most RAM I could get is probably 16 GB. The rest would go to swap,
which I think would kill the whole idea. Maybe it would be useful to put just
some of the files from the index folder into RAM, if that is possible at all?
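
Something like this is what I have in mind (just a sketch, untested; the 2g
size and the choice of the .tii/.tis term-index files as the "hot" files are
guesses, not anything measured):

# Mount a small tmpfs for the hottest index files only (size is a placeholder).
mkdir /path/to/hot
mount -t tmpfs -o size=2g none /path/to/hot

# Copy the term-index files into RAM and symlink them back, so Lucene
# still sees a single index directory; everything else stays on disk.
cd /path/to/indexes
for f in *.tii *.tis; do
    cp "$f" /path/to/hot/"$f"
    mv "$f" "$f.disk"
    ln -s /path/to/hot/"$f" "$f"
done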


----- Original Message ----
From: Dennis Kubes <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Tuesday, December 4, 2007 12:00:55 PM
Subject: Re: Cache use

One way to do this, if you are running on Linux, is to create a tmpfs 
mount (which lives in RAM) and put the index on it.  The index then looks 
like a normal filesystem to the application but is essentially served from 
RAM.  This is how we serve the Nutch Lucene indexes on our web search 
engine (www.visvo.com), which covers ~100M pages.  Below is how you can 
achieve this, assuming your indexes are in /path/to/indexes:


# Move the on-disk index aside and create a fresh mount point.
mv /path/to/indexes /path/to/indexes.dist
mkdir /path/to/indexes

# Mount a RAM-backed tmpfs over the mount point.  size is in bytes
# (2684354560 = 2.5 GB); make it large enough to hold the whole index.
cd /path/to
mount -t tmpfs -o size=2684354560 none /path/to/indexes

# Copy the index into the tmpfs and fix ownership.
rsync --progress -aptv indexes.dist/* indexes/
chown -R user:group indexes
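
A couple of sanity checks afterwards (just a suggestion, not part of the
steps above).  Also note tmpfs is volatile: after a reboot you have to
remount and rsync again.

df -h /path/to/indexes   # should show a tmpfs of the requested size
free -m                  # confirm headroom is left for the JVM and the OS

# Optional: recreate the (empty) mount automatically at boot via /etc/fstab;
# the rsync from indexes.dist still has to be rerun afterwards:
#   tmpfs  /path/to/indexes  tmpfs  size=2684354560  0  0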

This would of course be limited by the amount of RAM you have on the 
machine.  But with this approach most searches are sub-second.

Dennis Kubes

Evgeniy Strokin wrote:
> Hello,
> we have a 110M-record index under Solr.  Some queries take a while, but we 
> need sub-second results.  I guess the only solution is a cache (or is there 
> something else?).
> We use the standard LRUCache.  The docs say (as far as I understood) that it 
> loads a view of the index into memory and from then on works with memory 
> instead of the hard drive.
> So, my question: hypothetically, we could have the whole index in memory if 
> we had enough memory, right?  In that case the results should come up very 
> fast.  We have very rare updates, so I think this could be a solution.
> How should I configure the cache to achieve this?
> Thanks for any advice.
> Gene
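
For what it's worth, those LRUCache settings live in the <query> section of
solrconfig.xml; a minimal sketch would look like the below (the sizes are
placeholders, not tuned values).  Note these caches hold filters, query
results, and documents rather than the raw index files, so they complement
rather than replace the tmpfs approach above.

<filterCache      class="solr.LRUCache" size="16384" initialSize="4096" autowarmCount="4096"/>
<queryResultCache class="solr.LRUCache" size="16384" initialSize="4096" autowarmCount="1024"/>
<documentCache    class="solr.LRUCache" size="16384" initialSize="4096" autowarmCount="0"/>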
