Hello Ellis,

> 
> Hi Ard,
> 
> I've not tried the double-aggregate thing (yet), but I've now attached
> to the bug a very simple repeatable test that demonstrates the lock up
> as I've experienced it.

I do not doubt your tests... I will try to find some time to verify your
findings (and the aggregate thing), and your solution.


> 
> Have fun!

I doubt it :-) 

> 
> Ellis.

Regards Ard

> 
> 
> Ard Schrijvers wrote:
> 
> >>Hi,
> >>
> >>The crux is that the sub-pipeline is called twice within the context
> >>of the master pipeline (once by the root pipeline, once by an include);
> >>thus the pipeline keys which are the same are those for the
> >>sub-pipeline, not the master pipeline.
> >>
> >>My 'broken' pipeline is too complex to explain, but it's basically 
> >>something like:
> >
> >If this results in a deadlock, then there is something basically wrong
> >with this locking. Does the master pipeline lock its sub-pipeline until
> >it is finished itself? That wouldn't make sense.
> >
> >I mean, for the thing below, the master would have a lock related to
> >the cache key, where the cache key is something like:
> >
> >PK_G-file-cocoon:/foo?pipelinehash=-3411761154931530775_T-xslt-/....page.xsl_S-xml-1
> >
> >Then, as I understand it, this key is locked. Now, the pipeline with
> >pattern="foo" gets its own pipeline cache key, which is also locked.
> >But after this one is finished, your problem indicates that the lock of
> >this sub-pipeline is not cleared until the master pipeline is finished?
> >This doesn't make sense to me.
> >
> >Furthermore, if this setup causes problems, then wouldn't
> >
> ><map:aggregate>
> >     <map:part src="cocoon:/foo"/>
> >     <map:part src="cocoon:/foo"/>
> ></map:aggregate>
> >
> >result in the same deadlock? I must be missing something trivial.
> >
> >Ard
> >
> >><map:match pattern="master">
> >>  <map:generate src="cocoon:/foo"/>
> >>  <map:transform src="page.xsl"/> <!-- generates include element for "cocoon:/included" -->
> >>  <map:transform type="include"/> <!-- includes "included" sub-pipeline -->
> >>  <map:serialize/>
> >></map:match>
> >>
> >><map:match pattern="included">
> >>  <map:generate src="cocoon:/foo"/>
> >>  <map:transform src="included-page.xsl"/>
> >>  <map:serialize/>
> >></map:match>
> >>
> >><map:match pattern="foo"> <!-- this gets called twice -->
> >>  <map:generate ... />
> >>  <map:serialize/>
> >></map:match>
> >>
> >>Ellis.
> >>
> >>
> >>Ard Schrijvers wrote:
> >>
> >>>Hello,
> >>>
> >>>>Cocoon 2.1.9 introduced the concept of a lock in
> >>>>AbstractCachingProcessingPipeline, an optimization to prevent two
> >>>>concurrent requests from generating the same cached content. The
> >>>>first request adds the pipeline key to the transient cache to 'lock'
> >>>>the cache entry for that pipeline; subsequent concurrent requests
> >>>>wait for the first request to cache the content (by synchronizing
> >>>>and wait()ing on the pipeline key entry) before proceeding, and can
> >>>>then use the newly cached content.
> >>>>
> >>>>However, this has introduced an incompatibility with the
> >>>>IncludeTransformer: if the inclusions access the same
> >>>>yet-to-be-cached content as the root pipeline, the whole assembly
> >>>>hangs, since the thread ends up waiting on a lock that it itself
> >>>>already holds and that can therefore never be released.
> >>>>
> >>>>e.g.
> >>>>i) The root pipeline generates using sub-pipeline cocoon:/foo.xml.
> >>>>ii) The cocoon:/foo.xml sub-pipeline adds its pipeline key to the
> >>>>transient store as a lock.
> >>>>iii) Subsequently in the root pipeline, the IncludeTransformer is run.
> >>>>iv) One of the inclusions also generates with cocoon:/foo.xml; this
> >>>>sub-pipeline blocks in AbstractProcessingPipeline.waitForLock()
> >>>>because the sub-pipeline key is already present.
> >>>>v) Deadlock (see the sketch below).
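> >>>>
> >>>>To make the failure mode concrete, here is a minimal, self-contained
> >>>>sketch of the locking scheme as described above (the class and method
> >>>>names below are illustrative only, not the actual Cocoon source; the
> >>>>Map stands in for the transient store):
> >>>>
> >>>>import java.util.Map;
> >>>>import java.util.concurrent.ConcurrentHashMap;
> >>>>
> >>>>class PipelineLockSketch {
> >>>>    // stands in for the transient store, keyed by pipeline key
> >>>>    private final Map<String, Object> store = new ConcurrentHashMap<>();
> >>>>
> >>>>    // the first request for a key installs the lock and goes on to generate
> >>>>    boolean generateLock(String pipelineKey) {
> >>>>        return store.putIfAbsent(pipelineKey, new Object()) == null;
> >>>>    }
> >>>>
> >>>>    // later requests for the same key block here until the entry is removed
> >>>>    boolean waitForLock(String pipelineKey) throws InterruptedException {
> >>>>        Object lock = store.get(pipelineKey);
> >>>>        if (lock == null) {
> >>>>            return true;          // nothing being generated, proceed
> >>>>        }
> >>>>        synchronized (lock) {
> >>>>            while (store.get(pipelineKey) == lock) {
> >>>>                lock.wait();      // a thread re-entering here via an include
> >>>>            }                     // waits for its own lock: deadlock (v)
> >>>>        }
> >>>>        return true;
> >>>>    }
> >>>>
> >>>>    // called once the content has been cached; wakes any waiters
> >>>>    void releaseLock(String pipelineKey) {
> >>>>        Object lock = store.remove(pipelineKey);
> >>>>        if (lock != null) {
> >>>>            synchronized (lock) {
> >>>>                lock.notifyAll();
> >>>>            }
> >>>>        }
> >>>>    }
> >>>>}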
> >>>>
> >>>I do not understand one part of it. If a sub-pipeline,
> >>>cocoon:/foo.xml, is called, a lock is generated for this sub-pipeline
> >>>separately, right? (If not, I do not understand why it is not like
> >>>this. I suppose a lock is generated for the root pipeline, but also
> >>>for every sub-pipeline individually. I am only supposing, though,
> >>>because I did not actually look at the code.)
> >>>
> >>>Now, if the include transformer calls this same sub-pipeline, which
> >>>has its own lock, I do not see why a deadlock can occur. The root
> >>>pipeline is locked, and the sub-pipeline is locked as well. The
> >>>include transformer wants to include the same sub-pipeline, waits
> >>>until this one is finished, and then includes it, right?
> >>>
> >>>I must be missing something,
> >>>
> >>>Regards Ard
> >>>
> >>>>I've found a (partial, see below) solution for this: instead of a
> >>>>plain Object being added to the transient store as the lock object,
> >>>>the Thread.currentThread() is added; when waitForLock() is called,
> >>>>if the lock object exists, it checks that it is not the same thread
> >>>>before attempting to lock it; if it is the same thread, then
> >>>>waitForLock() returns success, which allows generation to proceed.
> >>>>You lose the efficiency of generating the cache only once in this
> >>>>case, but at least it doesn't hang! With JDK1.5 this can be made
> >>>>neater by using Thread#holdsLock() instead of adding the thread
> >>>>object itself to the transient store.
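> >>>>
> >>>>In terms of the sketch above, the change would look roughly like this
> >>>>(again illustrative only, not the actual patch):
> >>>>
> >>>>// the first request now stores its own Thread as the lock marker
> >>>>boolean generateLock(String pipelineKey) {
> >>>>    return store.putIfAbsent(pipelineKey, Thread.currentThread()) == null;
> >>>>}
> >>>>
> >>>>// later requests check for re-entry by the same thread before waiting
> >>>>boolean waitForLock(String pipelineKey) throws InterruptedException {
> >>>>    Object lock = store.get(pipelineKey);
> >>>>    if (lock == null) {
> >>>>        return true;              // nothing in progress, proceed
> >>>>    }
> >>>>    if (lock == Thread.currentThread()) {
> >>>>        return true;              // same thread already holds the lock:
> >>>>    }                             // skip waiting and regenerate instead
> >>>>    synchronized (lock) {
> >>>>        while (store.get(pipelineKey) == lock) {
> >>>>            lock.wait();
> >>>>        }
> >>>>    }
> >>>>    return true;
> >>>>}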
> >>>>
> >>>>See patch file.
> >>>>
> >>>>However, even with this fix, parallel includes (when enabled) may
> >>>>still hang, because they pass the not-the-same-thread test but fail
> >>>>anyway: the root pipeline, which holds the initial lock, cannot
> >>>>complete (and therefore satisfy the lock condition for the parallel
> >>>>threads) before the threads themselves have completed, which results
> >>>>in a deadlock again.
> >>>>
> >>>>The complete solution is probably to avoid locking if the lock is
> >>>>held by the same top-level Request, but that requires more knowledge
> >>>>of Cocoon's processing than I (currently) have!
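> >>>>
> >>>>As a rough illustration of that idea only: if some token identifying
> >>>>the top-level Request were available (how to obtain one is exactly
> >>>>the open question), the check in the sketch above would compare
> >>>>against it rather than against the current thread, so that parallel
> >>>>include threads spawned by the same request would also pass:
> >>>>
> >>>>// hypothetical: requestToken identifies the top-level Request and is
> >>>>// shared by all threads working on it, including parallel includes
> >>>>boolean generateLock(String pipelineKey, Object requestToken) {
> >>>>    return store.putIfAbsent(pipelineKey, requestToken) == null;
> >>>>}
> >>>>
> >>>>boolean waitForLock(String pipelineKey, Object requestToken)
> >>>>        throws InterruptedException {
> >>>>    Object lock = store.get(pipelineKey);
> >>>>    if (lock == null || lock == requestToken) {
> >>>>        return true;              // free, or held by our own request
> >>>>    }
> >>>>    synchronized (lock) {
> >>>>        while (store.get(pipelineKey) == lock) {
> >>>>            lock.wait();
> >>>>        }
> >>>>    }
> >>>>    return true;
> >>>>}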
> >>>>
> >>>>IMHO, unless a complete solution to this is found, this
> >>>>optimization should be removed completely, or else made optional by
> >>>>configuration, since it renders the IncludeTransformer dangerous.