On Tue, 2008-03-25 at 18:13 +0530, Shailendra Mudgal wrote:
> We are using Lucene to search on an index of around 20G size with around 3
> million documents. We are facing performance issues loading large results
> from the index. [...]
> After all these changes, it seems to be taking around 90 secs to load 17k
> documents. [...]

That's fairly slow. Are you doing any warm-up? In my experience it helps
tremendously with performance.

I tried requesting a stored field from all hits, for all searches with
logged queries on our index (9 million documents, 37GB); no fancy
tricks, just Hits and hit.get(fieldname). For the first couple of
minutes, using standard hard disks, performance was about 200-300
field requests/second. After that, the speed increased to about
2,000-3,000 field requests/second.

Using solid state drives, the same pattern could be seen, just with much
lower warm-up time before the full speed kicked in.
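
For reference, warm-up can be as simple as replaying some representative
queries and touching a stored field for every hit before the searcher goes
live. A rough sketch against the 2.x-style Hits API from your snippet (the
list of logged query strings, the analyzer and the field name are
placeholders you would swap for your own):

    import java.util.List;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;

    public class SearcherWarmup {

        // Replays logged queries and reads one stored field per hit, so the
        // relevant index files end up in the OS disk cache before real
        // traffic arrives. The field values themselves are thrown away.
        public static void warmUp(IndexSearcher searcher,
                                  List<String> loggedQueries,
                                  String fieldName) throws Exception {
            QueryParser parser = new QueryParser(fieldName, new StandardAnalyzer());
            for (String queryString : loggedQueries) {
                Hits hits = searcher.search(parser.parse(queryString));
                for (int i = 0; i < hits.length(); i++) {
                    hits.doc(i).get(fieldName);
                }
            }
        }
    }

The exact queries matter less than making sure the same index files your
real traffic will read get pulled into the cache.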

> Here is the code:
> 
>                 public void collect(int id, float score) {
>                     try {
>                         MapFieldSelector selector = new MapFieldSelector(new String[] {COMPANY_ID, ID});
>                         doc = searcher.doc(id, selector);
>                         mappedCompanies = doc.getValues(COMPANY_ID);
>                     } catch (IOException e) {
>                         logger.debug("inside IDCollector.collect() :" + e.getMessage());
>                     }
>                 }
> 

There's no need to initialize the selector for every collect-call.
Try moving the initialization outside of the collect method.
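Something along these lines (reusing the names from your snippet, so
COMPANY_ID, ID, searcher and logger are assumed to be fields or constants
on the class, and imports are elided):

    public class IDCollector extends HitCollector {

        // Built once, instead of on every collect() call.
        private final MapFieldSelector selector =
            new MapFieldSelector(new String[] { COMPANY_ID, ID });

        public void collect(int id, float score) {
            try {
                Document doc = searcher.doc(id, selector);
                String[] mappedCompanies = doc.getValues(COMPANY_ID);
                // ... use mappedCompanies ...
            } catch (IOException e) {
                logger.debug("inside IDCollector.collect(): " + e.getMessage());
            }
        }
    }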

> [...] Also we observed that the searcher is actually using only 1G RAM though
>  we have 4G allocated to it.

The operating system will (hopefully) use the free RAM for the disk cache,
so the remaining 3GB are not wasted.

