Tim Williams wrote:
It seems to me that implementing CacheableProcessingComponent with an
input module like the LM isn't feasible: there's only one instance of
it, so it won't help us cache at the more granular level.  I think I
confirmed that by following it through its lifecycle this evening.

Anyway, Ross is on to correcting the validity issue so I thought I'd
spend some time on figuring out how to get away from our
homegrown-hashmap-cache.

That's brilliant, I love Open Source :-))

 I think instead of trying to use the Cocoon
cache, the answer is to manage validity ourselves and go directly to
the store.  That gets our little lm cache "managed" by the real
Cocoon store, as I think it should be.

Cool.
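To make the "manage validity ourselves, go straight to the store" idea concrete, here's a rough sketch. It's only a sketch: `Store` below is a tiny stand-in for Cocoon's `org.apache.excalibur.store.Store` (the real one would be looked up through the ServiceManager), and `CachedEntry`, `resolve`, and `doRealMatch` are invented names for illustration, not actual Forrest or Cocoon API.

```java
public class LmCacheSketch {
    // Minimal stand-in for org.apache.excalibur.store.Store.
    public interface Store {
        Object get(Object key);
        void store(Object key, Object value);
        void remove(Object key);
    }

    // Cached location plus the data we need to judge validity ourselves.
    // (Invented holder -- real code might wrap a SourceValidity instead.)
    public static class CachedEntry {
        final String location;
        final long sourceLastModified;
        CachedEntry(String location, long lastModified) {
            this.location = location;
            this.sourceLastModified = lastModified;
        }
    }

    private final Store store;

    public LmCacheSketch(Store store) {
        this.store = store;
    }

    /** Return the cached location if still valid, else re-resolve and re-store. */
    public String resolve(String hint, long currentLastModified) {
        CachedEntry entry = (CachedEntry) store.get(hint);
        if (entry != null && entry.sourceLastModified == currentLastModified) {
            return entry.location;      // hit, and our validity check passed
        }
        store.remove(hint);             // stale or missing: drop and recompute
        String location = doRealMatch(hint);
        store.store(hint, new CachedEntry(location, currentLastModified));
        return location;
    }

    // Placeholder for the real locationmap matching logic.
    private String doRealMatch(String hint) {
        return "resolved:" + hint;
    }
}
```

The point of the pattern is that eviction policy stays the store's problem; we only own the validity decision.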

The only problem that I can foresee is our current transient store is set with:

<parameter name="maxobjects" value="100"/>

100 seems extremely small to me anyway, but if we started to use it
for the lm, I think we'd find it's definitely too small: the store
would spend so much effort evicting entries that caching wouldn't be
worth it. Does anyone know why it shouldn't be larger?

I'm afraid I know nothing here; just replying to show my support. If I do read anything relevant in the Cocoon docs, though, I'll be sure to comment.

Ross