Please take a look at

http://issues.apache.org/jira/browse/SOLR-1379
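
For context, plain Lucene can already copy an on-disk index into memory
with RAMDirectory, which is the effect the issue above is about getting
easily from Solr. A rough sketch of the Lucene side (the index path is a
placeholder, and this assumes the 2009-era Lucene API):

import java.io.File;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.RAMDirectory;

public class RamIndexSketch {
    public static void main(String[] args) throws Exception {
        // Copy the whole on-disk index into RAM once at startup. Searches
        // then run against memory only; changes written to the disk index
        // are not visible until this copy is rebuilt.
        RAMDirectory ramDir = new RAMDirectory(
                FSDirectory.open(new File("/path/to/index"))); // placeholder path
        IndexSearcher searcher = new IndexSearcher(ramDir, true); // read-only
        TopDocs hits = searcher.search(new TermQuery(new Term("text", "solr")), 10);
        System.out.println("hits: " + hits.totalHits);
        searcher.close();
    }
}

Keep in mind the index has to fit comfortably in the JVM heap for this
to work.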

Alex.

On Wed, Sep 9, 2009 at 5:28 PM, Constantijn Visinescu <baeli...@gmail.com> wrote:

> Just wondering, is there an easy way to load the whole index into RAM?
>
> On Wed, Sep 9, 2009 at 4:22 PM, Alex Baranov <alex.barano...@gmail.com> wrote:
>
> > There is a good article on how to scale a Lucene/Solr solution:
> >
> > http://www.lucidimagination.com/Community/Hear-from-the-Experts/Articles/Scaling-Lucene-and-Solr
> >
> > Also, if the server is under heavy load (a large number of concurrent
> > requests), I'd suggest considering loading the index into RAM. That
> > worked well for me on a project with 140+ million documents and 30
> > concurrent user requests per second. If the index fits in RAM, you can
> > reduce the complexity of the architecture.
> >
> > Alex Baranov
> >
> > On Wed, Sep 9, 2009 at 5:10 PM, Elaine Li <elaine.bing...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > I have 20 million docs in Solr. If my query returns more than 10,000
> > > results, the response time gets very long. How can I resolve this
> > > problem? Can I slice my docs into pieces and let the query operate on
> > > one piece at a time, so that the response time and response data are
> > > more manageable? Thanks.
> > >
> > > Elaine
> > >
> >
>
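
On Elaine's question about slicing the docs: you can also slice the
*results* instead, paging through them with Solr's start/rows parameters
so each response stays small. A rough SolrJ sketch (the server URL,
query, and page size are placeholders):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

public class PagedQuerySketch {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrQuery query = new SolrQuery("text:foo"); // placeholder query
        int rows = 100;                              // one slice of results
        query.setRows(rows);
        long fetched = 0;
        long total;
        do {
            query.setStart((int) fetched);           // offset of the next slice
            QueryResponse rsp = server.query(query);
            SolrDocumentList page = rsp.getResults();
            total = page.getNumFound();              // total hits for the query
            fetched += page.size();
            // process this page of documents here ...
            if (page.isEmpty()) {
                break; // safety stop if a page comes back empty
            }
        } while (fetched < total);
    }
}

Note that very large start offsets get progressively more expensive, so
for paging deep into millions of hits it is usually better to narrow the
result set with filter queries first.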
