I use several collections: one of 1,200,000 documents, one of 3,800,000, and
the biggest of 12,000,000 documents. Performance is quite good, except for
searches with wildcards.
Our machine has 1 GB of memory and 2 CPUs.
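
Wildcards are the slow case because Lucene has to expand the pattern against
every matching term in the index before it can run the query (a leading
wildcard is worst, since it forces a scan of the whole term dictionary).
For what it's worth, below is a rough sketch of the kind of timing loop that
would produce the numbers Mark is asking about, written against the 1.x API;
the index path, the "contents" field name, and the generated document text
are only placeholders, not anything from our setup:

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.index.Term;
  import org.apache.lucene.search.Hits;
  import org.apache.lucene.search.IndexSearcher;
  import org.apache.lucene.search.Query;
  import org.apache.lucene.search.WildcardQuery;

  public class IndexBench {
      public static void main(String[] args) throws Exception {
          // Create a fresh index (the path is hypothetical).
          IndexWriter writer = new IndexWriter("/tmp/bench-index",
                                               new StandardAnalyzer(), true);
          long start = System.currentTimeMillis();
          for (int i = 0; i < 1000000; i++) {
              Document doc = new Document();
              // "contents" is a made-up field; real documents go here.
              doc.add(Field.Text("contents", "sample text for document " + i));
              writer.addDocument(doc);
          }
          writer.optimize();   // merge segments so searches run faster
          writer.close();
          System.out.println("indexing took "
                  + (System.currentTimeMillis() - start) + " ms");

          // Wildcard search: Lucene rewrites the pattern into a query over
          // all matching terms, which is why these queries are the slow case.
          IndexSearcher searcher = new IndexSearcher("/tmp/bench-index");
          Query q = new WildcardQuery(new Term("contents", "docu*"));
          start = System.currentTimeMillis();
          Hits hits = searcher.search(q);
          System.out.println(hits.length() + " hits in "
                  + (System.currentTimeMillis() - start) + " ms");
          searcher.close();
      }
  }

Running that at a few collection sizes (say 1MM, 10MM, ...) and recording the
times plus the index directory size gives the profile of indexing/retrieval
time and storage against document count.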


----- Original Message ----- 
From: "Mark Devaney" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, March 10, 2004 4:26 PM
Subject: Large document collections?


> I'm looking for information on the largest document collection that Lucene
> has been used to index; the biggest benchmark I've been able to find so far
> is 1MM documents.
>
> I'd like to generate some benchmarks for large collections (1-100MM records)
> and would like to know if this is feasible without using distributed
> indexes, etc.  It's mostly to construct a performance profile relating
> indexing/retrieval time and storage requirements to the number of documents.
>
> Thanks.

