I have created indexes with 1.5 billion documents.

It was experimental: I took an index with 25 million documents and
merged it with itself many times. It isn't definitive, since only 25M
unique documents were duplicated, but it does suggest that Lucene
should be able to handle this number of (unique) documents.

That said, Lucene will need to support more than 2 billion documents,
so docids (and all associated internals) need to become 'long' fairly soon...
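
For reference, that ceiling is simply Java's signed 32-bit int; a
trivial check (nothing Lucene-specific):

    public class DocIdLimit {
        public static void main(String[] args) {
            // A Lucene docid is a signed 32-bit int, so the hard ceiling on
            // addressable documents is Integer.MAX_VALUE.
            System.out.println(Integer.MAX_VALUE);  // 2147483647, a little over 2 billion
        }
    }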

-Glen

2008/4/30 John Wang <[EMAIL PROTECTED]>:
> Lucene docids are represented as a Java int, so the maximum signed int would
>  be the limit, a little over 2 billion.
>
>  -John
>
>
>
>  On Wed, Apr 30, 2008 at 11:54 AM, Sebastin <[EMAIL PROTECTED]> wrote:
>
>  >
>  > Hi All,
>  > Does Lucene support billions of documents in a single index store of size
>  > 14 GB for every search? I have 3 index stores of 14 GB each, and I need to
>  > search these index stores and retrieve the results. It throws an
>  > out-of-memory error while searching these index stores.
