Hi,
are you sure that the CachingCIncludeTransformer is really working?
As far as I can see, the validity object gets all its information
while the first response is generated (that is when the timestamps
are added). This hack works.
But when the second request comes in, the new validity object
for this response is empty. It is compared with the one from the
first run, and they are not equal.
The second validity object only gets its timestamps after the
response is generated (or during the generation).
So, as far as I understand your algorithm, the correct result
should be cached thanks to the hack, but it is never retrieved
from the cache.
Or did I overlook something?
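
To make concrete what I think I am seeing, here is a minimal,
self-contained sketch (the class and its fields are made up for
illustration, not the real IncludeCacheValidity) of why it matters
which of the two validity objects isValid() is invoked on when one
of them is still empty:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, stripped-down stand-in for IncludeCacheValidity.
class MiniValidity {
    final List<Long> timeStamps = new ArrayList<Long>();

    // Filled while the response is being generated.
    void add(long timeStamp) {
        timeStamps.add(timeStamp);
    }

    // Checks only *this* object's recorded timestamps; an empty
    // object has nothing to check and reports "valid" vacuously.
    boolean isValid(MiniValidity other) {
        for (Long ts : timeStamps) {
            if (!other.timeStamps.contains(ts)) {
                return false;
            }
        }
        return true;
    }
}

public class DirectionDemo {
    public static void main(String[] args) {
        MiniValidity cached = new MiniValidity();
        cached.add(1000L); // filled during the first run

        MiniValidity fresh = new MiniValidity(); // still empty on the second run

        System.out.println(cached.isValid(fresh)); // prints "false"
        System.out.println(fresh.isValid(cached)); // prints "true"
    }
}
```

If the cache asks the freshly created (empty) object, the check
passes trivially; if it asks the cached one, the empty newcomer
fails it. Which direction the real algorithm uses would explain the
behaviour I observe.
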
> Maciek Kaminski wrote:
>
> I am eager to work on the aggregation of pipelines. It seems
> that some redesign of the SourceResolver and Source concepts is
> necessary. Any suggestions?
No, the redesign of SourceResolver and Source should be finished.
To make this work, the only change needed should be in the
CocoonSourceFactory: it must calculate a valid last-modification
date for the whole pipeline.
As this is an EventPipeline, the last-modification date can only be
available/valid if the whole pipeline is cacheable.
The last-modification date could then be taken from the time the
response of this event pipeline was put into the cache?
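
A rough sketch of that rule (the interface and the names below are
hypothetical, not the actual CocoonSourceFactory API): the aggregate
date is only meaningful when every component of the pipeline is
cacheable, and otherwise the factory should report "unknown":

```java
import java.util.List;

// Hypothetical component view of an event pipeline; not the real
// Cocoon interfaces, just an illustration of the aggregation rule.
interface PipelineComponent {
    boolean isCacheable();
    long getLastModified(); // 0 means "unknown"
}

public class PipelineLastModified {
    // The pipeline's last-modification date is the newest component
    // date, but only if the whole pipeline is cacheable; otherwise 0.
    static long lastModified(List<PipelineComponent> components) {
        long latest = 0;
        for (PipelineComponent c : components) {
            if (!c.isCacheable()) {
                return 0; // one uncacheable stage spoils the whole pipeline
            }
            latest = Math.max(latest, c.getLastModified());
        }
        return latest;
    }
}
```

Using the cache-insertion time instead, as suggested above, would be
the conservative variant: it is always at least as new as any
component's own date.
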
Carsten
Open Source Group sunShine - b:Integrated
================================================================
Carsten Ziegeler, S&N AG, Klingenderstrasse 5, D-33100 Paderborn
www.sundn.de mailto: [EMAIL PROTECTED]
================================================================
>
>
> On 5 Jul 2001, at 10:27, Carsten Ziegeler wrote:
>
> > Hi,
> >
> > with the current caching algorithm implemented in Cocoon2 it is
> > not possible to cache either the CIncludeTransformer or the
> > XIncludeTransformer.
> >
>
> It is possible, although hacky.
>
> > Why is this so?
> >
> > Before the sax stream is generated (before the generator starts
> > reading its xml document), the caching algorithm builds the
> > key and collects the validity objects for the current request.
> >
> > At this stage, the CachingCIncludeTransformer has not received
> > any startElement() call yet, so it does not know which content
> > will be included later. So the key and the validity objects
> > cannot contain any information about this.
> >
>
> CachingCIncludeTransformer generates "empty" IncludeCacheValidity:
>
> <code>
> public CacheValidity generateValidity() {
>     try {
>         currentCacheValidity = new IncludeCacheValidity(sourceResolver);
>         return currentCacheValidity;
>     } catch (Exception e) {
>         getLogger().error("CachingCIncludeTransformer: could not generateKey", e);
>         return null;
>     }
> }
> </code>
>
> and fills it with data during transformation:
>
> <code>
> protected void processCIncludeElement(
>         String src, String element, String ns, String prefix) {
>     try {
>         currentCacheValidity.add(src,
>             sourceResolver.resolve(src).getLastModified());
>         ...
> </code>
>
> Validity of IncludeCacheValidity is calculated not by comparing it
> with a new IncludeCacheValidity object (as in AggregatedCacheValidity)
> but by comparing timestamps:
>
> <code>
> public boolean isValid(CacheValidity validity) {
>     if (validity instanceof IncludeCacheValidity) {
>         SourceResolver otherResolver =
>             ((IncludeCacheValidity) validity).resolver;
>
>         for (Iterator i = sources.iterator(),
>                       j = timeStamps.iterator(); i.hasNext();) {
>             String src = (String) i.next();
>             long timeStamp = ((Long) j.next()).longValue();
>             try {
>                 if (otherResolver.resolve(src).getLastModified() != timeStamp)
>                     return false;
>             } catch (Exception e) {
>                 return false;
>             }
>         }
>         return true;
>     }
>     return false;
> }
> </code>
>
> CachingCIncludeTransformer always generates the same key, since
> which documents are included depends only on the former
> generation/transformation stages:
>
> <code>
> public long generateKey() {
>     return 1;
> }
> </code>
>
>
> > So the key and the validity objects your transformer generates
> > are always the same, regardless of which documents are included.
> >
> > Sorry.
>
> Not at all ;->
>
> >
> > Regarding caching in content aggregation:
> > It is implemented and should work if you do not aggregate
> > pipelines. If you aggregate xml files, the result is cached. In
> > contrast to the Transformer, the content aggregation knows
> > beforehand which documents are aggregated (or included, if you
> > like), so the key and the validity objects generated contain all
> > this information.
> >
>
> I am eager to work on the aggregation of pipelines. It seems
> that some redesign of the SourceResolver and Source concepts is
> necessary. Any suggestions?
>
> Maciek Kaminski
> [EMAIL PROTECTED]
>
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, email: [EMAIL PROTECTED]