If you run the same query again, the IndexSearcher will go all the way
to the index again - there is no caching. Your file system may do some
caching of its own, but that's it. Lucene is fast, so don't optimize
early.
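
Since Lucene itself provides no result cache, if you do end up needing
one it has to live in your application. Here is a minimal sketch of
what that might look like - a cache keyed on the query string, cleared
wholesale whenever content is published or removed (QueryResultCache
and all its method names are hypothetical, not Lucene API):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical application-level result cache; Lucene has no built-in one.
// Results are cached per query string and discarded together on any
// index change, matching the invalidation scheme described below.
public class QueryResultCache {
    private final Map<String, List<String>> cache = new HashMap<String, List<String>>();

    // Returns cached hits for a query string; null means "not cached".
    public List<String> get(String query) {
        return cache.get(query);
    }

    public void put(String query, List<String> hits) {
        cache.put(query, hits);
    }

    // Call this whenever content is published or removed.
    public void invalidateAll() {
        cache.clear();
    }

    public static void main(String[] args) {
        QueryResultCache c = new QueryResultCache();
        c.put("type:article", java.util.Arrays.asList("doc1", "doc2"));
        System.out.println(c.get("type:article")); // [doc1, doc2]
        c.invalidateAll();
        System.out.println(c.get("type:article")); // null
    }
}
```

A real implementation would also want to bound the cache size and make
it thread-safe, but the shape is the same.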
Otis
--- Ben Rooney <[EMAIL PROTECTED]> wrote:
> thanks chris,
>
> you are correct that i'm not sure if i need the caching ability. it is
> more to understand right now, so that if we do need to implement it, i
> am able to.
>
> the reason for the caching is that we will have listing pages for
> certain content types, for example a listing page of articles. this
> listing will be generated against the lucene engine using a basic
> query. the page will also have the ability to filter the articles
> based on a date range, as one example. so caching those results could
> be beneficial.
>
> however, we will also potentially want to cache the basic query so
> that subsequent queries will hit a cache. when new content is
> published or content is removed from the site, the caches will need
> to be invalidated so that new results are created.
>
> for the basic query, is there any caching mechanism built into the
> IndexSearcher, or do we need to build our own caching mechanism?
>
> thanks
> ben
>
> On Tue, 2004-07-12 at 12:29 -0800, Chris Hostetter wrote:
>
> > : > executes the search, i would keep a static reference to SearchIndexer
> > : > and then when i want to invalidate the cache, set it to null or create
> >
> > : design of your system. But, yes, you do need to keep a reference to it
> > : for the cache to work properly. If you use a new IndexSearcher
> > : instance (I'm simplifying here, you could have an IndexReader instance
> > : yourself too, but I'm ignoring that possibility) then the filtering
> > : process occurs for each search rather than using the cache.
> >
> > Assuming you have a finite number of Filters, and assuming those
> > Filters are expensive enough to be worth it...
> >
> > Another approach you can take to "share" the cache among multiple
> > IndexReaders is to explicitly call the bits method on your filter(s)
> > once, and then cache the resulting BitSet anywhere you want (i.e.
> > serialize it to disk if you so choose), and then implement a
> > "BitsFilter" class that you can construct directly from a BitSet,
> > regardless of the IndexReader. The down side of this approach is
> > that it will *ONLY* work if you are certain that the index is never
> > being modified. If any documents get added, or the index gets
> > re-optimized, you must regenerate all of the BitSets.
> >
> > (That's why the CachingWrapperFilter's cache is keyed off of the
> > IndexReader ... as long as you're re-using the same IndexReader, it
> > knows that the cached BitSet must still be valid, because an
> > IndexReader always sees the same index as when it was opened, even
> > if another thread/process modifies it.)
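
That per-reader keying can be sketched in plain Java. This is not
Lucene's actual CachingWrapperFilter source - PerReaderCache and
computeBits are hypothetical names, and a plain Object stands in for
the IndexReader - but the mechanism is the same: one cached BitSet per
reader instance, so reopening the index naturally starts fresh.

```java
import java.util.BitSet;
import java.util.Map;
import java.util.WeakHashMap;

// Hypothetical sketch of caching one BitSet per reader instance.
// Weak keys let an entry disappear once its reader is garbage collected.
public class PerReaderCache {
    private final Map<Object, BitSet> cache = new WeakHashMap<Object, BitSet>();

    public BitSet bits(Object reader) {
        BitSet cached = cache.get(reader);
        if (cached == null) {
            cached = computeBits(reader); // the expensive filter work runs once per reader
            cache.put(reader, cached);
        }
        return cached;
    }

    // Stand-in for Filter.bits(IndexReader); a real filter would walk the index.
    protected BitSet computeBits(Object reader) {
        BitSet b = new BitSet();
        b.set(0);
        return b;
    }

    public static void main(String[] args) {
        PerReaderCache cache = new PerReaderCache();
        Object reader = new Object(); // stands in for an IndexReader
        // Same reader -> same cached BitSet; computed once, then reused.
        System.out.println(cache.bits(reader) == cache.bits(reader)); // true
    }
}
```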
> >
> >
> > class BitsFilter extends Filter {
> >   private BitSet bits;
> >   public BitsFilter(BitSet bits) {
> >     this.bits = bits;
> >   }
> >   // return a copy so callers can't mutate the cached bits
> >   public BitSet bits(IndexReader r) {
> >     return (BitSet) bits.clone();
> >   }
> > }
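
On the "serialize it to disk" option mentioned above: java.util.BitSet
is Serializable, so plain object streams are enough. A minimal sketch
(BitSetStore is a hypothetical helper name, not part of Lucene):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.BitSet;

// Hypothetical helper for persisting a precomputed filter BitSet.
// Remember: a saved BitSet is only valid while the index is unmodified.
public class BitSetStore {
    public static void save(BitSet bits, File f) throws IOException {
        ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f));
        try {
            out.writeObject(bits);
        } finally {
            out.close();
        }
    }

    public static BitSet load(File f) throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(new FileInputStream(f));
        try {
            return (BitSet) in.readObject();
        } finally {
            in.close();
        }
    }

    public static void main(String[] args) throws Exception {
        BitSet bits = new BitSet();
        bits.set(3);
        bits.set(42);
        File f = File.createTempFile("filter", ".bits");
        save(bits, f);
        System.out.println(load(f).equals(bits)); // true
        f.delete();
    }
}
```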
> >
> >
> >
> >
> > -Hoss
> >
> >
> >
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]