On Mon, Jul 14, 2008 at 10:25 AM, Brian Eaton <[EMAIL PROTECTED]> wrote:

> On Mon, Jul 14, 2008 at 10:08 AM, Louis Ryan <[EMAIL PROTECTED]> wrote:
> > Brian
> > This was actually how the original implementation worked and was later
> > changed because we needed the ability to associate rewritten content with
> > the cacheable version of the request and not the executed version of the
> > request. See the OAuth and signing fetchers for examples of why this is
> > important. Having the chained fetchers do this just spreads the cache
> > lookups throughout the code, which isn't very DRY. A potential refactoring
> > would be to allow for injecting cache entry pre/post processors into the
> > cache.
>
> The way I'd like to do this would work as follows:
> - any content fetcher that needs special handling of cached content
> (which is pretty much all of them) exposes an interface to generate an
> appropriate cache key
> - we thwack a CacheContentFetcher on the front of the fetching chain
> that knows what to do with cache keys.
>
> So the CacheContentFetcher looks something like this:
>
>   key = nextFetcher.getCacheKey(request)
>   if (key in cache) {
>      return cached content
>   }
>   content = nextFetcher.fetch(request)
>   cache.put(key, content)
>   return content
>
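
If I'm reading this right, a minimal sketch of that caching fetcher would look
something like the following. The ContentFetcher interface and the String
request/response types here are hypothetical stand-ins for whatever we'd
actually use, not the real Shindig classes:

  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  interface ContentFetcher {
    // Every fetcher in the chain contributes to key generation.
    String getCacheKey(String request);
    String fetch(String request);
  }

  class CachingContentFetcher implements ContentFetcher {
    private final ContentFetcher nextFetcher;
    private final Map<String, String> cache =
        new ConcurrentHashMap<String, String>();

    CachingContentFetcher(ContentFetcher nextFetcher) {
      this.nextFetcher = nextFetcher;
    }

    public String getCacheKey(String request) {
      // Delegate key generation down the chain.
      return nextFetcher.getCacheKey(request);
    }

    public String fetch(String request) {
      String key = nextFetcher.getCacheKey(request);
      String cached = cache.get(key);
      if (cached != null) {
        return cached;                     // cache hit
      }
      String content = nextFetcher.fetch(request);
      cache.put(key, content);             // populate for the next request
      return content;
    }
  }

So only this one class ever touches the cache; everything downstream just
contributes to the key.
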
> The rewriting content fetcher would sit between the cached content
> fetcher and the other fetchers in the chain (e.g. signing, oauth, or
> anything else).  Its getCacheKey method would look something like
> this:
>
>   key = nextFetcher.getCacheKey(request)
>   if (shouldRewrite(request)) {
>       key.addParameter("rewritten", "true")
>   }
>   return key;
>
> So the fetching chain would look like this:
>   caching -> rewriting -> [authentication] -> remote content fetcher
>
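
Sticking with the same hypothetical ContentFetcher interface sketched above,
the rewriting fetcher's half of that would be roughly as follows; the
shouldRewrite() and rewrite() methods are placeholders for whatever the real
rewriter does, and wiring the chain up is just nesting the constructors with
caching outermost:

  class RewritingContentFetcher implements ContentFetcher {
    private final ContentFetcher nextFetcher;

    RewritingContentFetcher(ContentFetcher nextFetcher) {
      this.nextFetcher = nextFetcher;
    }

    public String getCacheKey(String request) {
      String key = nextFetcher.getCacheKey(request);
      // Keep rewritten content distinct from the raw response in the cache.
      return shouldRewrite(request) ? key + "&rewritten=true" : key;
    }

    public String fetch(String request) {
      String content = nextFetcher.fetch(request);
      return shouldRewrite(request) ? rewrite(content) : content;
    }

    private boolean shouldRewrite(String request) { return true; }  // placeholder
    private String rewrite(String content) { return content; }      // placeholder
  }
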
> This would be pleasant because then we wouldn't need to sprinkle cache
> lookups throughout all of the fetchers.  Each fetcher would need to
> know how to generate an appropriate cache key, but wouldn't need a
> handle to the actual cache and wouldn't need to worry about how to do
> lookups.


Further coupling rewriting to HTTP fetching just makes the problem worse.
There's really no reason why the only thing being cached has to be an
HttpResponse, or, indeed, why there can't be multiple caches for different
types of data.

Right now, the callers of the fetchers have to know a lot about every type
of request. That calls for separate interfaces, not globbing more stuff into
the same interface.
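
To make that concrete: nothing stops us from keeping the cache itself generic
and having one instance per data type. A sketch, with made-up names rather
than existing Shindig classes:

  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  interface Cache<K, V> {
    V get(K key);
    void put(K key, V value);
  }

  class MapCache<K, V> implements Cache<K, V> {
    private final Map<K, V> entries = new ConcurrentHashMap<K, V>();
    public V get(K key) { return entries.get(key); }
    public void put(K key, V value) { entries.put(key, value); }
  }

  class Caches {
    // Separate caches for separate kinds of data; the http fetchers never
    // need a handle to the rewritten-content cache, and the rewriter never
    // needs to know how HttpResponses are cached.
    final Cache<String, byte[]> httpResponses =
        new MapCache<String, byte[]>();
    final Cache<String, String> rewrittenContent =
        new MapCache<String, String>();
  }

The interface a caller sees stays specific to the data it cares about; the
sharing happens in the cache implementation, not in the fetcher interface.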


>
> Cheers,
> Brian
>
