Yeah, I've looked at that too.

One strategy:

Assume that the caching query can find the requested record, and pull
a block of N records that includes it. Every time there's a cache
miss, the cache fetches such a block of N results and adds all of
them to the cache.

The point of this is to populate the cache of commonly used records
with a regular, predictable pattern of DB accesses. DB access
architectures should be able to limit each access to N records
instead of doing one giant fetch.
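
Roughly, as a standalone sketch - the BlockCache name, the items
table, and the keyed-range query below are made up for illustration,
this isn't DataImportHandler code:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

// Sketch of the "fetch a block of N rows on a cache miss" idea.
public class BlockCache {
  private final Map<String, Map<String, Object>> cache =
      new HashMap<String, Map<String, Object>>();
  private final Connection conn;
  private final int blockSize; // N

  public BlockCache(Connection conn, int blockSize) {
    this.conn = conn;
    this.blockSize = blockSize;
  }

  public Map<String, Object> get(String id) throws SQLException {
    Map<String, Object> row = cache.get(id);
    if (row == null) {
      fetchBlock(id);      // miss: pull a block of N rows starting here
      row = cache.get(id); // the requested row is assumed to be in it
    }
    return row;
  }

  // Fetch the requested row plus its neighbors in key order,
  // and add every row in the block to the cache.
  private void fetchBlock(String id) throws SQLException {
    PreparedStatement ps = conn.prepareStatement(
        "SELECT id, data FROM items WHERE id >= ? ORDER BY id LIMIT ?");
    try {
      ps.setString(1, id);
      ps.setInt(2, blockSize);
      ResultSet rs = ps.executeQuery();
      while (rs.next()) {
        Map<String, Object> r = new HashMap<String, Object>();
        r.put("id", rs.getString("id"));
        r.put("data", rs.getObject("data"));
        cache.put(rs.getString("id"), r);
      }
    } finally {
      ps.close();
    }
  }
}

The only change from a one-row lookup is that the miss path runs a
">= ... LIMIT N" query and caches every row it returns.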

Does this make sense?

On 3/13/10, Mark Miller <markrmil...@gmail.com> wrote:
> On 03/13/2010 06:26 PM, Mark Miller wrote:
>> I don't really follow DataImportHandler, but it looks like it's using
>> an unbounded cache (simple HashMap).
>>
>> Perhaps we should make the cache size configurable?
>>
>> The impl seems a little odd - the caching occurs in the base class -
>> so caching impls that extend it don't really have full control - they
>> just kind of "turn on" the caching in the base class? Kind of an odd
>> approach - to cache you have to turn on the cache support in the base
>> class and impl a couple of custom methods as well?
>>
> Looking a little closer, really it seems like all of the caching support
> should be lifted out of EntityProcessorBase and into something like
> CachedEntityProcessorBase. Not a huge deal, but a cleaner design I
> think. There is no real need for anyone looking at EntityProcessorBase
> to think about caching.
>
> Then caching impls can either extend that for some base support, or just
> cache in a completely different way - without the "default caching" kind
> of always being in the chain (even though it's technically "off").
>
> --
> - Mark
>
> http://www.lucidimagination.com
>
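
Re the unbounded HashMap above: a size-configurable cache could be as
simple as an access-ordered LinkedHashMap that evicts its eldest
entry. A rough sketch - the BoundedCache name and where maxSize would
come from are just assumptions:

import java.util.LinkedHashMap;
import java.util.Map;

// Rough sketch of a size-bounded LRU map that a configurable
// DataImportHandler cache could be built on. Name and wiring are
// made up.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
  private final int maxSize;

  public BoundedCache(int maxSize) {
    super(16, 0.75f, true); // access order: least-recently-used first
    this.maxSize = maxSize;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > maxSize; // evict once we pass the configured limit
  }
}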


-- 
Lance Norskog
goks...@gmail.com
