Since you're using all the results for a query and ignoring the score value, you might try doing the same thing with a relational database. But I would not expect that to be much faster, especially when a field cache is used on the Lucene side.
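To make the field cache point concrete, here is a rough, untested sketch (Lucene 2.x API) of collecting a field value for every hit through the FieldCache instead of loading each stored document. Your companyId field is multivalued, so this would not apply to it as-is; FieldCache only handles single-valued fields.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.FieldCache;
    import org.apache.lucene.search.HitCollector;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;

    public class FieldCacheCollector {

        // Collects the companyId of every matching document via the FieldCache,
        // so no stored fields are read per hit. getStrings() assumes a
        // single-valued field.
        public static List collectCompanyIds(IndexSearcher searcher, Query query)
                throws IOException {
            IndexReader reader = searcher.getIndexReader();
            // Loaded once per reader; later calls return the same cached array.
            final String[] companyIds = FieldCache.DEFAULT.getStrings(reader, "companyId");
            final List results = new ArrayList();
            searcher.search(query, new HitCollector() {
                public void collect(int doc, float score) {
                    // Plain array lookup; the score is ignored.
                    results.add(companyIds[doc]);
                }
            });
            return results;
        }
    }

The cached array is built on the first call and reused for the life of the IndexReader, which is why the second and subsequent queries should be fast.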
Other than that, you could also go the other way, and try to add more data to the Lucene index that can be used to reduce the number of results to be fetched. A rough sketch of what I mean is appended below the quoted mail.

Regards,
Paul Elschot

On Wednesday 26 March 2008 13:51:24, Shailendra Mudgal wrote:
> > The bottom line is that reading fields from docs is expensive.
> > FieldCache will, I believe, load fields for all documents but only
> > once - so the second and subsequent times it will be fast. Even
> > without using a cache it is likely that things will speed up
> > because of caching by the OS.
>
> As I mentioned in my previous mail, the companyId is a multivalued
> field, so caching it will consume a lot of memory. And this way we'll
> also have to keep the document-to-field mapping in memory.
>
> > If you've got plenty of memory vs index size you could look at
> > RAMDirectory or MMapDirectory. Or how about some solid state
> > disks? Someone recently posted some very impressive performance
> > stats.
>
> The index size is around 20G and the available memory is 4G, so
> keeping the entire index in memory is not possible. But as I
> mentioned earlier, it is using only 1G out of 4G, so is there a way
> to tell Lucene to cache more documents, say use 2G for caching the
> index?
>
> I'll appreciate more suggestions on the same problem.
>
> Regards,
> Vipin
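For illustration, here is a minimal sketch of that "more data in the index" idea, assuming, purely hypothetically, that some attribute (called "region" here; the name is made up for the example) is currently checked in application code after every hit is fetched. Indexing it and requiring it in the query means far fewer results have to be read at all (Lucene 2.x API):

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;

    public class NarrowingFieldSketch {

        // At index time: add the attribute as an indexed, untokenized field.
        public static void addRegion(Document doc, String region) {
            doc.add(new Field("region", region, Field.Store.NO, Field.Index.UN_TOKENIZED));
        }

        // At search time: require the attribute in the query itself, so only
        // documents with the wanted value are collected in the first place.
        public static Query narrowByRegion(Query original, String region) {
            BooleanQuery combined = new BooleanQuery();
            combined.add(original, BooleanClause.Occur.MUST);
            combined.add(new TermQuery(new Term("region", region)), BooleanClause.Occur.MUST);
            return combined;
        }
    }

Whether that helps depends on whether such an attribute exists in your data, of course.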