On 5 Jul 2001, at 10:27, Carsten Ziegeler wrote:

> Hi,
> 
> with the current caching algorithm implemented in Cocoon2 it is
> not possible to cache either the CIncludeTransformer or the
> XIncludeTransformer.
> 

It is possible, although hacky.

> Why is this so?
> 
> Before the sax stream is generated (before the generator starts
> reading its xml document), the caching algorithm builds the
> key and collects the validity objects for the current request.
> 
> At this stage, the CachingCIncludeTransformer has not yet received
> any startElement() calls, so it does not know which content will
> be included later. So the key and the validity
> objects cannot contain any information about this.
> 
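For context, a cacheable sitemap component in C2 implements roughly the
following interface (reconstructed here from the method signatures quoted
below, so treat it as a sketch); both methods are called before any SAX
events flow, which is exactly the limitation described above:

<code>
    // Assumed shape, inferred from the generateKey()/generateValidity()
    // implementations quoted below.
    public interface Cacheable {
        long generateKey();
        CacheValidity generateValidity();
    }
</code>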

CachingCIncludeTransformer generates an "empty" IncludeCacheValidity:

<code>
    public CacheValidity generateValidity() {

        try {
            // Start with an empty validity object; the included sources and
            // their timestamps are added to it later, during transformation.
            currentCacheValidity = new IncludeCacheValidity(sourceResolver);
            return currentCacheValidity;
        } catch (Exception e) {
            getLogger().error("CachingCIncludeTransformer: could not generate validity", e);
            return null;
        }
    }
</code>

and fills it with data during transformation:

<code>
    protected void processCIncludeElement(
        String src, String element, String ns, String prefix)
    {
        try {
            // Record every included source together with its current
            // last-modified timestamp in the validity object.
            currentCacheValidity.add(src,
                sourceResolver.resolve(src).getLastModified());
        ...
</code>
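
The add() method itself is not shown here; presumably it just records the
source URI and its timestamp in the two parallel lists (the same sources and
timeStamps fields that isValid() below iterates over), roughly:

<code>
    // Sketch only: inferred from the isValid() method quoted below,
    // not taken from the actual IncludeCacheValidity source.
    public void add(String src, long timeStamp) {
        sources.add(src);
        timeStamps.add(new Long(timeStamp));
    }
</code>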

The validity of an IncludeCacheValidity is determined not by comparing it with a
new IncludeCacheValidity object (as AggregatedCacheValidity does) but by comparing
timestamps:

<code>
    public boolean isValid(CacheValidity validity) {
        if (validity instanceof IncludeCacheValidity) {
            SourceResolver otherResolver = ((IncludeCacheValidity) validity).resolver;

            // Walk the recorded sources and their stored timestamps in parallel,
            // re-resolving each source through the new validity's resolver.
            for (Iterator i = sources.iterator(), j = timeStamps.iterator(); i.hasNext();) {
                String src = (String) i.next();
                long timeStamp = ((Long) j.next()).longValue();
                try {
                    if (otherResolver.resolve(src).getLastModified() != timeStamp)
                        return false;
                } catch (Exception e) {
                    return false;
                }
            }
            return true;
        }
        return false;
    }
</code>
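
In other words, on a cache hit the stored validity is handed a fresh, still-empty
IncludeCacheValidity carrying the current request's SourceResolver, and it
re-resolves every recorded source to compare last-modified dates. Something like
(illustration only, names made up):

<code>
    // Hypothetical check, roughly what the caching pipeline ends up doing:
    CacheValidity fresh = new IncludeCacheValidity(currentRequestResolver);
    boolean upToDate = storedValidity.isValid(fresh);
</code>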

CachingCIncludeTransformer always generates the same key, since which documents
get included depends only on the earlier generation/transformation stages:

<code>
    public long generateKey() {
        return 1;
    }
</code>


> So the key and the validity objects your transformer generates
> are always the same, regardless of which documents are included.
> 
> Sorry.

Not at all ;->

> 
> Regarding caching in content aggregation:
> It is implemented and should work, if you do not aggregate pipelines. If you
> aggregate xml files it is cached. In contrast to the Transformer, the content
> aggregation knows beforehand which documents are aggregated (or included if you
> like), so the key and the validity objects generated contain all this
> information.
> 

I am eager to work on aggregation of pipelines. It seems that some redesign of
the SourceResolver and Source concepts is necessary. Any suggestions?

Maciek Kaminski
[EMAIL PROTECTED]
