On 2/10/06, Tim Williams <[EMAIL PROTECTED]> wrote:
> It seems to me that implementing CacheableProcessingComponent with an
> input module like the LM isn't feasible, since there's only one instance
> of it and it won't help us cache at the more granular level. I think I
> confirmed that by following it through its lifecycle this evening.
>
> Anyway, Ross is on to correcting the validity issue, so I thought I'd
> spend some time figuring out how to get away from our
> homegrown-hashmap cache. I think instead of trying to use the Cocoon
> cache, the answer is to manage validity ourselves and go directly to
> the store. This gets our little LM cache "managed" with the real
> Cocoon store, as I think it should be.
>
> The only problem I can foresee is that our current transient store is
> configured with:
>
>     <parameter name="maxobjects" value="100"/>
>
> 100 seems extremely small to me anyway, but I think if we started to
> use it for the LM, we'd find that it's definitely too small: it would
> spend so many resources cleaning itself that it wouldn't be worth it.
> Anyone know why it shouldn't be larger?
>
> Does storing LM cache hints in the transient store seem reasonable?
>
> Thanks,
> --tim
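[The "too small to be worth it" concern above can be made concrete with a minimal sketch. This is not Cocoon's actual store implementation; `LmHintStore` and its methods are hypothetical names. It uses a bounded LRU map (via `LinkedHashMap`'s access-order mode) to stand in for a transient store with a `maxobjects` cap, showing how a low cap silently evicts entries as soon as the working set exceeds it.]

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a bounded store for LM cache hints.
// A real Cocoon transient store is more involved; this only
// illustrates the eviction behavior a small "maxobjects" causes.
public class LmHintStore {
    private final int maxObjects;
    private final Map<String, Object> hints;

    public LmHintStore(final int maxObjects) {
        this.maxObjects = maxObjects;
        // accessOrder=true makes the map iterate least-recently-used
        // first, so removeEldestEntry gives us simple LRU eviction.
        this.hints = new LinkedHashMap<String, Object>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
                // Evict the least-recently-used entry once the cap is exceeded.
                return size() > LmHintStore.this.maxObjects;
            }
        };
    }

    public void store(String key, Object hint) {
        hints.put(key, hint);
    }

    public Object get(String key) {
        return hints.get(key);
    }

    public int size() {
        return hints.size();
    }
}
```

[With `maxobjects` set to 100 and an LM resolving more than 100 distinct requests, each new hint evicts an older one, so the store churns constantly and hit rates collapse; raising the cap above the working-set size is what avoids that.]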
FYI. I'm not very keen on implementing it yet, but we may be forced to
decide just how important this is to us. We'll see if I get any other
responses, but I'm not holding my breath...

http://marc.theaimsgroup.com/?t=113995323300001&r=1&w=2

--tim
