This is what I do with general search caches. It works very well. I
think the same approach would work great with the field cache.
I do think, though, that we might want direct support for this: using
a fixed-length field file (per segment).
E.g., you would configure keys of n bytes, and then you could do
seek(segment doc no * keyLength), read(byte[keyLength])
This would be very efficient when using external document storage.
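Roughly something like this (untested sketch only; the ".keys" file name and
the helper class are made up, the seek/read arithmetic is the point):

import java.io.IOException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IndexInput;

// Hypothetical reader for a per-segment file that holds exactly keyLength
// bytes per document, in segment document-number order.
public class FixedLengthKeyFile {
  private final IndexInput in;
  private final int keyLength;

  public FixedLengthKeyFile(Directory dir, String segment, int keyLength)
      throws IOException {
    this.in = dir.openInput(segment + ".keys"); // hypothetical extension
    this.keyLength = keyLength;
  }

  // Key for a segment-local doc number:
  // seek(segment doc no * keyLength), then read keyLength bytes.
  public byte[] key(int segmentDocNo) throws IOException {
    byte[] buf = new byte[keyLength];
    in.seek((long) segmentDocNo * keyLength);
    in.readBytes(buf, 0, keyLength);
    return buf;
  }

  public void close() throws IOException {
    in.close();
  }
}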
On Oct 18, 2007, at 11:34 AM, Mark Miller wrote:
Hoss has worked on a new FieldCache implementation that should
address this if finished and used with the new reopen. I have been
meaning to look at it in greater detail myself, but haven't gotten
to it. It sounds as if he has been a bit too busy to be able to
work on it himself. It would only require reloading the FieldCache
for the SegmentReaders that have changed, I believe... e.g.,
each SegmentReader would have its own FieldCache...
- Mark
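(To illustrate the per-segment idea only; this is a sketch, not Hoss's actual
patch: cache field values keyed by the segment-level reader, so after a
reopen only the new or changed segments have to be loaded.)

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.WeakHashMap;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.index.TermEnum;

public class PerSegmentFieldCache {
  // Keyed by the segment-level reader; entries for unchanged segments
  // survive a reopen, so only new/changed segments pay the load cost.
  private final Map<IndexReader, Map<String, String[]>> cache =
      new WeakHashMap<IndexReader, Map<String, String[]>>();

  public synchronized String[] getStrings(IndexReader reader, String field)
      throws IOException {
    Map<String, String[]> byField = cache.get(reader);
    if (byField == null) {
      byField = new HashMap<String, String[]>();
      cache.put(reader, byField);
    }
    String[] values = byField.get(field);
    if (values == null) {
      values = load(reader, field);
      byField.put(field, values);
    }
    return values;
  }

  // Fill one value per document by walking the field's terms.
  private String[] load(IndexReader reader, String field) throws IOException {
    String[] values = new String[reader.maxDoc()];
    TermEnum terms = reader.terms(new Term(field, ""));
    TermDocs docs = reader.termDocs();
    try {
      do {
        Term t = terms.term();
        if (t == null || !t.field().equals(field)) break;
        docs.seek(terms);
        while (docs.next()) {
          values[docs.doc()] = t.text();
        }
      } while (terms.next());
    } finally {
      terms.close();
      docs.close();
    }
    return values;
  }
}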
Doug Cutting wrote:
Erik Hatcher wrote:
2) Load/warm up the FieldCache (for a large corpus, loading up the
IndexReader can be slow)
With the new IndexReader#reopen(), the cost of opening a new
IndexReader is much reduced. However, loading a FieldCache is not
that much faster, so that may or may not be enough to make this
approach viable.
Doug
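(For reference, the reopen pattern itself is roughly this; just a sketch,
relying on reopen() returning the same instance when nothing has changed:)

import java.io.IOException;
import org.apache.lucene.index.IndexReader;

public class ReopenExample {
  // Swap in a refreshed reader; reopen() returns the same instance
  // when the index has not changed.
  static IndexReader refresh(IndexReader reader) throws IOException {
    IndexReader newReader = reader.reopen();
    if (newReader != reader) {
      reader.close(); // caches keyed on the old reader can now be released
    }
    return newReader;
  }
}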