Hi all

  I am using Lucene to index a large dataset. Indexing about 10% of the data
already yields an index of roughly 400 MB, so the full index will in all
likelihood grow to around 7 GB.

  My deployment will be on a Linux/Tomcat system. Which would be the better
approach:
  a) create one large index and hope Linux does not mind, or
  b) generate 7-10 smaller indexes based on some criterion and glue them
together using MultiReader? In that case, could I exceed the open file
handle limit under Tomcat?
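
  For what it's worth, option (b) looks roughly like the sketch below. The
directory paths are made up for illustration, and this assumes the classic
MultiReader constructor that takes an array of IndexReaders:

```java
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.search.IndexSearcher;

public class MultiIndexSearch {
    public static void main(String[] args) throws Exception {
        // Hypothetical locations of the sub-indexes, one per partition.
        String[] indexDirs = { "/indexes/part1", "/indexes/part2", "/indexes/part3" };

        // Open one reader per sub-index.
        IndexReader[] readers = new IndexReader[indexDirs.length];
        for (int i = 0; i < indexDirs.length; i++) {
            readers[i] = IndexReader.open(indexDirs[i]);
        }

        // MultiReader presents the sub-indexes as a single logical index,
        // so searching works exactly as it would against one big index.
        IndexReader multi = new MultiReader(readers);
        IndexSearcher searcher = new IndexSearcher(multi);

        // ... run queries as usual, then close the searcher/reader,
        // which releases the underlying file handles.
        searcher.close();
        multi.close();
    }
}
```

  Note that each sub-index keeps its own set of segment files open, so the
total number of open file handles does scale with the number of indexes;
running the optimizer on each sub-index keeps the per-index file count down.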

 regards
