Index partitioning should be a good idea.
It'll save a lot of time on index merging and incremental indexing.

In my experience, partition size really depends on CPU, hard disk speed,
and memory size. Nowadays, with a Core 2 Duo, a chunk size of around 10 GB
should be good.
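
To make the per-period chunking idea concrete (the "Month_Year" scheme
mentioned in the question below): one small piece you need is a routine
that, given a query's date range, works out which partitions to open, so
you only search the chunks that can contain matches. This is just a
sketch — the class and method names are hypothetical, and it deliberately
uses no Lucene API:

```java
import java.time.YearMonth;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper for a "Month_Year"-partitioned index layout:
// maps a query date range to the list of partition names that must
// be searched, so the remaining chunks can be skipped entirely.
public class PartitionPlanner {

    // Returns partition names like "06_2007" for every month in
    // [from, to], inclusive.
    static List<String> partitionsFor(YearMonth from, YearMonth to) {
        List<String> names = new ArrayList<>();
        for (YearMonth ym = from; !ym.isAfter(to); ym = ym.plusMonths(1)) {
            names.add(String.format("%02d_%d", ym.getMonthValue(), ym.getYear()));
        }
        return names;
    }

    public static void main(String[] args) {
        // A query covering June through August 2007 touches 3 chunks.
        System.out.println(partitionsFor(YearMonth.of(2007, 6), YearMonth.of(2007, 8)));
        // [06_2007, 07_2007, 08_2007]
    }
}
```

Each returned name would correspond to one index directory; you'd open
only those and search across them together.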

-- 
Chris Lu
-------------------------
Instant Scalable Full-Text Search On Any Database/Application
site: http://www.dbsight.net
demo: http://search.dbsight.com
Lucene Database Search in 3 minutes:
http://wiki.dbsight.com/index.php?title=Create_Lucene_Database_Search_in_3_minutes

On 8/29/07, Michael J. Prichard <[EMAIL PROTECTED]> wrote:
>
> Hello All,
>
> I want to hear from those out there that have large (i.e. 50 GB+)
> indexes on how they have designed their architecture.  I currently have
> an index for email that is 10 GB and growing.  Right now there are no
> issues with it but I am about to get into an even bigger use for the
> software which will surely require access to much larger indexes.
> Should I begin to break off indexes into separate chunks and then search
> across them as needed?
>
> For example, maybe break it out by "Month_Year" or "Day_Month_Year"?
> Ideas and experienced practices welcome!
>
> Thanks,
> Michael
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
>
>
