Thanks,
Joel Shellman
John Moylan wrote:
As long as your kernel has Large File Support, you should be fine. Most modern distros support files larger than 2GB out of the box.
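If in doubt, large-file support can be verified from Java itself by seeking past the 2GB boundary in a sparse temp file. This is a minimal sketch, not part of the original thread; it uses only the standard library:

```java
import java.io.File;
import java.io.RandomAccessFile;

public class LargeFileCheck {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("lfs-test", ".bin");
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        // Seek 3GB into the file and write one byte; on a filesystem
        // without large-file support this throws an IOException.
        raf.seek(3L * 1024 * 1024 * 1024);
        raf.write(0);
        long size = raf.length();
        raf.close();
        f.delete();
        // Prints true when files larger than 2GB are supported.
        System.out.println(size > 2147483648L);
    }
}
```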
John
On Fri, 2004-07-23 at 13:44, Karthik N S wrote:
Hi
I think (a) would be the better choice. [I have done it on Linux up to 7GB; it's considerably faster than doing the same on Win2000 PF.]
With regards, Karthik
-----Original Message-----
From: Rupinder Singh Mazara [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 23, 2004 5:55 PM
To: Lucene Users List
Subject: Large index files
Hi all
I am using Lucene to index a large dataset. 10% of this data already yields an index of 400MB, so in all likelihood the full index will grow to around 7GB.
My deployment will be on a Linux/Tomcat system. Which would be the better solution:
a) create one large index and hope Linux does not mind, or
b) generate 7-10 indexes based on some criteria and glue them together using a MultiReader? In that case I may exceed Tomcat's open file handle limit.
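Option (b) can be sketched as follows. This is a minimal illustration assuming a Lucene 1.x-era API and hypothetical index paths, and it requires the Lucene jar on the classpath:

```java
// Sketch of option (b): several sub-indexes searched through one MultiReader.
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiReader;
import org.apache.lucene.search.IndexSearcher;

public class MultiIndexSearch {
    public static void main(String[] args) throws Exception {
        // Open each partial index (paths are examples only).
        IndexReader[] readers = new IndexReader[] {
            IndexReader.open("/data/index-part1"),
            IndexReader.open("/data/index-part2"),
        };
        // MultiReader presents the sub-indexes as one logical index,
        // so queries need no knowledge of the partitioning criteria.
        IndexReader combined = new MultiReader(readers);
        IndexSearcher searcher = new IndexSearcher(combined);
        // ... run queries against `searcher` as usual ...
        searcher.close(); // also closes the underlying readers
    }
}
```

Note that each sub-index keeps its own segment files open, which is why the file handle limit (`ulimit -n` on Linux) matters more for (b) than for a single optimized index.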
regards
--------------------------------------------------------------------- To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]