Re: JNet integration doubts

2008-03-25 Thread Carsten Ziegeler

I'll try to respond to this in more detail during the week :)
But as a first quick answer: jnet can be considered alpha, so it might 
have some rough edges, especially when it comes to integration.
I think it makes more sense to move the excalibur sourceresolver support 
into an optional module for jnet and keep jnet completely free from such 
references.
The abstraction we introduced with all these sub-interfaces from Source 
looked great in the beginning, but today I'm not sure that you really 
need it. Traversing over http urls is not working, for instance; if you 
want to traverse over files, well, use the file api etc.


More during the week

Carsten

Grzegorz Kossakowski wrote:

Grzegorz Kossakowski pisze:

AFAIU, you call

Installer.setURLStreamHandlerFactory(new SourceURLStreamHandlerFactory());

at the startup of your application.

Then you can use the SourceFactoriesManager to install and uninstall
source factories.

Yes, but when and where should I call SourceFactoriesManager to install 
SourceFactories?
That's the main problem here.
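[Editor's sketch: the JDK-level mechanism that JNet's Installer presumably wraps is java.net's own URLStreamHandlerFactory hook. The "demo" protocol, the factory class name, and everything except the java.net types are invented for illustration; notably, URL.setURLStreamHandlerFactory may be called only once per JVM, which is exactly why a pluggable installer/manager layer is attractive.]

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;
import java.net.URLStreamHandlerFactory;

// Minimal sketch of a custom protocol registered via the plain JDK API.
public class DemoHandlerFactory implements URLStreamHandlerFactory {

    public URLStreamHandler createURLStreamHandler(String protocol) {
        if ("demo".equals(protocol)) {
            return new URLStreamHandler() {
                protected URLConnection openConnection(URL u) {
                    return new URLConnection(u) {
                        public void connect() { /* nothing to do */ }
                        public InputStream getInputStream() throws IOException {
                            // Serve synthetic content for the demo protocol.
                            return new ByteArrayInputStream(
                                ("content of " + url).getBytes("UTF-8"));
                        }
                    };
                }
            };
        }
        return null; // fall back to the built-in handlers (http, file, ...)
    }

    public static void main(String[] args) throws Exception {
        // This hook may be installed only ONCE per JVM.
        URL.setURLStreamHandlerFactory(new DemoHandlerFactory());
        URL u = new URL("demo:/a/b/c");
        InputStream in = u.openStream();
        byte[] buf = new byte[256];
        int n = in.read(buf);
        in.close();
        System.out.println(new String(buf, 0, n, "UTF-8"));
    }
}
```

The once-per-JVM restriction is what makes installing/uninstalling individual source factories (as the SourceFactoriesManager described above apparently does) a separate problem from registering the factory itself.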


Ok, somehow solved and committed. The stuff I committed should be considered 
experimental (even though it works...), so don't be surprised to see lots of hacks.

After playing with the JNet idea for a while I'm more and more doubtful about the 
direction we have taken. I really like the Source and SourceFactory interfaces: 
they are clean, focused and obvious to use, contrary to the URL machinery from 
the Java API. Look at what I committed: there is no way to release the 
underlying Source object if the InputStream was not obtained.

Moreover, if you need some advanced functionality (e.g. a traversable source) you 
still need to switch back to the Excalibur interfaces. The same goes for modifiable, 
postable etc.

I'm going to invest my energy into implementing my original idea of providing a 
default SourceResolver for SSF internal needs so we can release SSF 1.1.0 ASAP. 
I'll wait with the JNet integration until someone else (Carsten?) chimes in and 
explains how everything should be glued together.

An abstract description explaining what the _real_ benefits of integrating JNet 
into SSF and Cocoon (Corona?) in general are would be good. I really need to get 
some roadmap if I'm going to continue.

Thanks for listening and have a happy post-Easter time!




--
Carsten Ziegeler
[EMAIL PROTECTED]


Re: JNet integration

2008-03-25 Thread Reinhard Poetz

Grzegorz Kossakowski wrote:

Grzegorz Kossakowski pisze:

AFAIU, you call

Installer.setURLStreamHandlerFactory(new
SourceURLStreamHandlerFactory());

at the startup of your application.

Then you can use the SourceFactoriesManager to install and uninstall 
source factories.

Yes, but when and where should I call SourceFactoriesManager to install
SourceFactories? That's the main problem here.


Ok, somehow solved and committed. The stuff I committed should be
considered experimental (even though it works...), so don't be surprised
to see lots of hacks.

After playing with the JNet idea for a while I'm more and more doubtful about the
direction we have taken. I really like the Source and SourceFactory interfaces: they
are clean, focused and obvious to use, contrary to the URL machinery from the Java
API. Look at what I committed: there is no way to release the underlying Source
object if the InputStream was not obtained.


Are there any other use cases for releasing a source than the SitemapSource 
(cocoon:/ protocol)?



Moreover, if you need some advanced functionality (e.g. a traversable source)
you still need to switch back to the Excalibur interfaces. The same goes for
modifiable, postable etc.


What's the problem with that? If you are happy with what the URL object can 
do for you, you don't need to depend on any external stuff. If you want more, 
you have to add some more dependencies to your code.


This sounds very familiar to me: If I want to use advanced logging, I have to 
add e.g. log4j. If I'm happy with what the JDK offers, I don't have to do 
anything.


What's so special in the case of Excalibur source?


I'm going to invest my energy into implementing my original idea of
providing a default SourceResolver for SSF internal needs so we can release SSF
1.1.0 ASAP. I'll wait with the JNet integration until someone else (Carsten?)
chimes in and explains how everything should be glued together.


I don't understand this. From a quick glance at your code I see that we 
are able to set the servlet context in the SSF without depending on Excalibur 
sourceresolve or Excalibur source.


Why and what exactly do you want to change?


An abstract description explaining what the _real_ benefits of integrating JNet
into SSF and Cocoon (Corona?) in general are would be good. 


With JNet set up correctly, Corona doesn't depend on any third-party 
library. E.g. if you want to create a simple pipeline, you don't have to provide 
a SourceResolver - using URLs is enough.



I really need to get
some roadmap if I'm going to continue.


I think that the main goal is making the SSF implementation usable without 
Cocoon core (2.2) and IMHO without having to set up a SourceResolver. A 
test case for this is when you can do


URL u = new URL("servlet:otherService:/a/b/c");

from within Corona and you get the expected input stream afterwards.

--
Reinhard PötzManaging Director, {Indoqa} GmbH
  http://www.indoqa.com/en/people/reinhard.poetz/

Member of the Apache Software Foundation
Apache Cocoon Committer, PMC member, PMC Chair[EMAIL PROTECTED]
_


org.apache.excalibur.source.SourceFactory/blockcontext

2008-03-25 Thread Reinhard Poetz

[EMAIL PROTECTED] wrote:

Modified: 
cocoon/trunk/core/cocoon-servlet-service/cocoon-servlet-service-impl/src/main/resources/META-INF/cocoon/spring/cocoon-servlet-service-block-servlet-map.xml
URL: 
http://svn.apache.org/viewvc/cocoon/trunk/core/cocoon-servlet-service/cocoon-servlet-service-impl/src/main/resources/META-INF/cocoon/spring/cocoon-servlet-service-block-servlet-map.xml?rev=640476&r1=640475&r2=640476&view=diff
==
--- 
cocoon/trunk/core/cocoon-servlet-service/cocoon-servlet-service-impl/src/main/resources/META-INF/cocoon/spring/cocoon-servlet-service-block-servlet-map.xml
 (original)
+++ 
cocoon/trunk/core/cocoon-servlet-service/cocoon-servlet-service-impl/src/main/resources/META-INF/cocoon/spring/cocoon-servlet-service-block-servlet-map.xml
 Mon Mar 24 10:27:24 2008
@@ -29,4 +29,15 @@
 type="javax.servlet.Servlet"
 has-properties="mountPath"
 key-property="mountPath"/>
+
+<bean id="org.apache.cocoon.servletservice.URLStreamFactoryInstaller" class="org.apache.cocoon.servletservice.URLStreamFactoryInstaller"
+  scope="singleton" init-method="init">
+  <property name="globalSourceFactories">
+    <!-- only blockcontext and file protocols are supported by SSF -->
+    <map>
+      <entry key="file" value-ref="org.apache.excalibur.source.SourceFactory/file"/>
+      <entry key="blockcontext" value-ref="org.apache.excalibur.source.SourceFactory/blockcontext"/>
+    </map>
+  </property>
+</bean>
 </beans>


Out of curiosity: where is 
org.apache.excalibur.source.SourceFactory/blockcontext being set up?


--
Reinhard PötzManaging Director, {Indoqa} GmbH
  http://www.indoqa.com/en/people/reinhard.poetz/

Member of the Apache Software Foundation
Apache Cocoon Committer, PMC member, PMC Chair[EMAIL PROTECTED]
_


Re: [GSoC_2008] Project ideas

2008-03-25 Thread Reinhard Poetz

Vadim Gritsenko wrote:

On Mar 22, 2008, at 1:14 PM, Lukas Lang wrote:
Yesterday I was introduced to an Austrian student who would be 
interested in

working on a GSoC for the Cocoon project this year.

The best idea we've had so far was an upgrade of cForms to Dojo 1.x (or 
replacing it with something else, if that is what the community is 
interested in).


Any other suggestions? (the deadline for project proposals is 
Monday, 17th of March)


hey everybody,

i'm the student who's interested in participating in a GSoC 
cocoon project.
two days ago i had a conversation with Reinhard and, as i read on the 
list, he told me that raising CForms from Dojo 0.4 up to Dojo 1.1 as a 
GSoC project would not be the best way to go, due to Jeremy's work on this.

he pointed out that several blocks and related examples don't work yet 
in cocoon-2.2, and a great part of cocoon's users would benefit from porting 
frequently used, cohesive blocks to version 2.2.


migrating the following blocks could be a realistic aim:

- cocoon-eventcache
- cocoon-jms
- cocoon-webdav
- cocoon-repository

my suggestion would consist of:

- creating a test-environment
- writing integration tests
- replacing avalon by spring
- making existing samples work
- developing new samples

what do you think?


I'd change the order a bit. First I'd suggest making sure the existing samples 
work (and fixing them if necessary). Once this is done, the block should be 
released. After that, you could start (as necessary) the avalon-to-spring 
migration and the development of new samples.


I'd like to rephrase this: make the existing samples work, which includes 
replacing all the XSP stuff. This could be the base for a 1.0.0 release.
Then we branch, and Lukas can continue with the Springification and writing 
integration tests, which can be the base for a 1.1.0 release.


Just to make it clear for Lukas: It's not your responsibility that something 
gets released. And don't add any dependency on some external event to your proposal.


Finally, one thing that I miss: Can you add 'documentation' to your list of 
deliverables?


--
Reinhard PötzManaging Director, {Indoqa} GmbH
  http://www.indoqa.com/en/people/reinhard.poetz/

Member of the Apache Software Foundation
Apache Cocoon Committer, PMC member, PMC Chair[EMAIL PROTECTED]
_


Re: [GSoC_2008] Project ideas

2008-03-25 Thread Dev at weitling



Vadim Gritsenko wrote:
I'd change the order a bit. First I'd suggest making sure the existing samples 
work (and fixing them if necessary). Once this is done, the block should be 
released. After that, you could start (as necessary) the avalon-to-spring 
migration and the development of new samples.


Concerning the samples: it would be a great improvement to have the 
related code visible alongside the sample in the browser. It's annoying when you 
have to manually grep whole directories for the appropriate sitemap or 
flowscript with a URL as the only piece of information.


Florian


Re: org.apache.excalibur.source.SourceFactory/blockcontext

2008-03-25 Thread Grzegorz Kossakowski
Reinhard Poetz pisze:
 
 Out of curiosity: Where is
 org.apache.excalibur.source.SourceFactory/blockcontext being setup?
 

At this point it's set up by cocoon-core[1] but it's already planned to move it 
to
cocoon-servlet-service-impl.

[1]
http://svn.apache.org/viewvc/cocoon/trunk/core/cocoon-core/src/main/resources/META-INF/cocoon/avalon/cocoon-core-source-factories.xconf?view=markup

-- 
Grzegorz


Re: Compiled vs. Interpreted Sitemap Engine (Cocoon 2.0.4)

2008-03-25 Thread Carsten Ziegeler

Robert La Ferla wrote:
I am using Cocoon 2.0.4.  Can someone please explain why the interpreted 
sitemap engine is faster than the compiled one?  Also, is it the class 
attribute that determines whether Cocoon is using the compiled vs. the 
interpreted sitemap engine?  It seems like if you omit the class attribute 
it should use the interpreted engine, but if you specify 
class="org.apache.cocoon.sitemap.SitemapManager", it uses the compiled 
engine??



The term interpreted is a little bit misleading, as the sitemap xml is not
read on each access. On the first access the sitemap is read and an 
object representation is created in memory, which is used for subsequent 
requests. This approach is way faster than the compiled one, which relies
on heavy xslt transformations etc. The interpreted sitemap has also been 
optimized, so you should try to use the interpreted one. Please also 
note that starting with 2.1 the compiled sitemap has been removed.


The class attribute on the sitemap element specifies the implementation 
of the sitemap. Without a class attribute the interpreted one is used, as 
this is the default for the class attribute. By specifying a different 
value, a different implementation (like the compiled one) can be used.


HTH
Carsten
--
Carsten Ziegeler
[EMAIL PROTECTED]


Re: JNet integration doubts

2008-03-25 Thread Vadim Gritsenko

On Mar 25, 2008, at 3:10 AM, Carsten Ziegeler wrote:
The abstraction we introduced with all these sub-interfaces from  
Source looked great in the beginning, but today I'm not sure that  
you really need it. Traversing over http urls is not working, for  
instance; if you want to traverse over files, well, use the file api etc.


Just to give an example: WebDAV, FTP and XML:DB are all traversable, and  
none of them implements the File API.


I don't even think it is possible to extend the File API with your own  
file systems (java.io.FileSystem is package private).


Vadim


Re: JNet integration

2008-03-25 Thread Grzegorz Kossakowski
Reinhard Poetz pisze:
 Are there any other use cases for releasing a source than the
 SitemapSource (cocoon:/ protocol)?

Hmmm. CachingSource has a non-trivial release() method as well. Anyway, I agree 
that most Sources do not need to be released at all.

 
 What's the problem with that? If you are happy with that what the URL
 object can do for you, you don't need to depend on any external stuff.
 If you want more, you have to add some more dependencies to your code.
 
 This sounds to me very familiar: If I want to use advanced logging, I
 have to add e.g. log4j. If I'm happy with that what the JDK offers, I
 don't have to do anything.
 
 What's so special in the case of Excalibur source?

I agree with your reasoning, but I have a feeling that the JDK API does not have 
counterparts for the most basic functionality found in Source/SourceFactory:

  * exists() - no counterpart
  * getInputStream() - openStream()
  * getURI() - toExternalForm() (Javadocs suggest it's not a counterpart, but practice suggests something else...)
  * getLastModified() - no counterpart
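[Editor's sketch: the roughest JDK-side approximations of these operations go through URLConnection rather than URL itself. The exists() emulation below is an ad-hoc assumption (try to open, treat IOException as absence), not an established idiom, which arguably supports the thread's point that these are awkward rather than clean counterparts.]

```java
import java.io.File;
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;

// Rough JDK approximations of the Source operations listed above.
// Note the asymmetry: everything except getURI()/toExternalForm()
// forces a connection to be opened first.
public class UrlVsSource {

    // Source.exists() has no direct counterpart; one crude approximation
    // is to try to open the stream and treat IOException as "absent".
    static boolean roughlyExists(URL url) {
        try {
            URLConnection c = url.openConnection();
            c.getInputStream().close();
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("url-vs-source", ".txt");
        URL url = f.toURI().toURL();

        System.out.println("exists: " + roughlyExists(url));
        // Source.getURI() ~ URL.toExternalForm()
        System.out.println("uri starts with file: "
            + url.toExternalForm().startsWith("file:"));
        // Source.getLastModified() ~ URLConnection.getLastModified(),
        // but only after opening a connection
        System.out.println("lastModified known: "
            + (url.openConnection().getLastModified() > 0));

        f.delete();
        System.out.println("exists after delete: " + roughlyExists(url));
    }
}
```

Whether failure-to-connect really means "does not exist" depends on the protocol, which is one reason a dedicated exists() is nicer than this workaround.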

Dropping the JDK API only to resolve relative URIs into absolute form feels 
strange. You will need to do that no matter what: in Corona (think caching 
pipelines), in SSF, and anywhere else you do something non-trivial with Sources.

 I'm going to invest my energy into implementing my original idea of
 providing a default SourceResolver for SSF internal needs so we can
 release SSF 1.1.0 ASAP. I'll wait with the JNet integration until someone
 else (Carsten?) chimes in and explains how everything should be glued together.
 
 I don't understand this. From a quick glance at your code I see that
 there we are able to set the servlet context in the SSF without
 depending on Excalibur sourceresolve or Excalibur source.
 
 Why and what exactly do you want to change?

The current way of installing JNet through the init() method of a dummy Spring 
bean is a very, very dirty hack. Moreover, since there is no way to resolve 
blockcontext: paths into absolute ones, I still need to obtain the underlying 
Source instance. If that's the case, I don't see how all these hacks pay off.

 Abstract description explaining what are _real_ benefits of
 integrating JNet
 into SSF and Cocoon (Corona?) in general would be good. 
 
 With JNet set up correctly, Corona doesn't depend on any
 third-party library. E.g. if you want to create a simple pipeline, you
 don't have to provide a SourceResolver - using URLs is enough.

Yep, until caching comes in. Or until you want to log the path of the file being 
processed in /absolute/ form. ;-)

 I really need to get
 some roadmap if I'm going to continue.
 
 I think that the main goal is making SSF implementation useable for the
 usage without Cocoon core (2.2) and IMHO without having to setup a
 SourceResolver. A test case for this is when you can do
 
 URL u = new URL("servlet:otherService:/a/b/c");
 
 from within Corona and you get the expected inputstream afterwards.
 

I think a little bit more should be expected. See above...

-- 
Grzegorz


Re: JNet integration

2008-03-25 Thread Reinhard Poetz

Grzegorz Kossakowski wrote:

Reinhard Poetz pisze:

Are there any other use cases for releasing a source than the SitemapSource
(cocoon:/ protocol)?


Hmmm. CachingSource has a non-trivial release() method as well. Anyway, I agree
that most Sources do not need to be released at all.

What's the problem with that? If you are happy with what the URL 
object can do for you, you don't need to depend on any external stuff. If
you want more, you have to add some more dependencies to your code.

This sounds very familiar to me: If I want to use advanced logging, I 
have to add e.g. log4j. If I'm happy with what the JDK offers, I don't
have to do anything.

What's so special in the case of Excalibur source?


I agree with your reasoning, but I have a feeling that the JDK API does not have
counterparts for the most basic functionality found in
Source/SourceFactory:

* exists() - no counterpart
* getInputStream() - openStream()
* getURI() - toExternalForm() (Javadocs suggest it's not a counterpart, but practice suggests something else...)
* getLastModified() - no counterpart

Dropping the JDK API only to resolve relative URIs into absolute form
feels strange. You will need to do that no matter what: in Corona (think
caching pipelines), in SSF, and anywhere else you do something non-trivial
with Sources.

I'm going to invest my energy into implementation of my original idea of 
providing default SourceResolver for SSF internal needs so we can release

SSF 1.1.0 ASAP. I'll wait with JNet integration until someone (Carsten?)
else chimes in and explains how everything should be glued.

I don't understand this. From a quick glance at your code I see that we
are able to set the servlet context in the SSF without depending on
Excalibur sourceresolve or Excalibur source.

Why and what exactly do you want to change?


The current way of installing JNet through the init() method of a dummy Spring
bean is a very, very dirty hack. Moreover, since there is no way to resolve
blockcontext: paths into absolute ones, I still need to obtain the underlying
Source instance. If that's the case, I don't see how all these hacks pay off.



Abstract description explaining what are _real_ benefits of integrating
JNet into SSF and Cocoon (Corona?) in general would be good.

With JNet set up correctly, Corona doesn't depend on any third-party
library. E.g. if you want to create a simple pipeline, you don't have to
provide a SourceResolver - using URLs is enough.


Yep, until caching comes in. Or until you want to log the path of the file being
processed in /absolute/ form. ;-)


I really need to get some roadmap if I'm going to continue.
I think that the main goal is making SSF implementation useable for the 
usage without Cocoon core (2.2) and IMHO without having to setup a 
SourceResolver. A test case for this is when you can do


URL u = new URL("servlet:otherService:/a/b/c");

from within Corona and you get the expected inputstream afterwards.



I think a little bit more should be expected. See above...



Once again, my goal is that if you use e.g. Corona in its simplest form, I don't 
want to make everybody and his dog depend on JNet/SourceResolve/Source. E.g. see 
the FileGenerator. Using the URL object is enough for simple use cases of a 
pipeline API.


Yes, I understand that when it comes to caching pipelines, you need more, but 
not everybody needs caching pipelines. For that purpose there could be a 
CacheableFileGenerator, etc.


If you are right and it is difficult or even impossible to remove the 
dependencies on source/sourceresolve/xmlutils/jnet, then so be it. I withdraw my 
example Url(servlet:...) from above. When we can switch to sourceresolve 3.0, 
the dependency graph will get smaller anyway.


The main benefit from using URLs (instead of the SourceResolver) comes from 
simple use cases, e.g. you need a pipeline in your Java application that reads 
in some XML file, performs some transformations and finally creates a PDF 
document. FWIW, using URLs should be all that you need.


--
Reinhard PötzManaging Director, {Indoqa} GmbH
  http://www.indoqa.com/en/people/reinhard.poetz/

Member of the Apache Software Foundation
Apache Cocoon Committer, PMC member, PMC Chair[EMAIL PROTECTED]
_


Re: JNet integration

2008-03-25 Thread Carsten Ziegeler

Reinhard Poetz wrote:



Once again, my goal is that if you use e.g. Corona in its simplest form, 
I don't want to make everybody and his dog depend on 
JNet/SourceResolve/Source. E.g. see the FileGenerator. Using the URL 
object is enough for simple use cases of a pipeline API.


Yes, I understand that when it comes to caching pipelines, you need 
more, but not everybody needs caching pipelines. For that purpose there 
could be a CacheableFileGenerator, etc.


If you are right and it is difficult or even impossible to remove the 
dependencies on source/sourceresolve/xmlutils/jnet, then so be it. I 
withdraw my example Url(servlet:...) from above. When we can switch to 
sourceresolve 3.0, the dependency graph will get smaller anyway.


The main benefit from using URLs (instead of the SourceResolver) comes 
from simple use cases, e.g. you need a pipeline in your Java application 
that reads in some XML file, performs some transformations and finally 
creates a PDF document. FWIW, using URLs should be all that you need.


I totally agree with Reinhard; for most use cases getting an input 
stream (or sax events) via a url is totally sufficient. With the source 
interface we created another abstraction, like the request/response 
abstraction in the cocoon environment, which seems nice and great 
but in the end is not really needed, creates problems in other places etc.
Let's forget jnet for a second and see if the java net api can be 
sufficient. The only other use case might really be caching. You need a 
way to find out if a resource might have changed or not, but I think 
that should be possible.
Using the java net api for Corona makes total sense to me; it keeps it 
simple and small.


Carsten

--
Carsten Ziegeler
[EMAIL PROTECTED]


Re: JNet integration

2008-03-25 Thread Grzegorz Kossakowski
Carsten Ziegeler pisze:
 Reinhard Poetz wrote:


 Once again, my goal is that if you use e.g. Corona in its simplest
 form, I don't want to make everybody and his dog depend on
 JNet/SourceResolve/Source. E.g. see the FileGenerator. Using the URL
 object is enough for simple use cases of a pipeline API.

 Yes, I understand that when it comes to caching pipelines, you need
 more, but not everybody needs caching pipelines. For that purpose
 there could be a CacheableFileGenerator, etc.

 If you are right and it is difficult or even impossible to remove the
 dependencies on source/sourceresolve/xmlutils/jnet, then be it. I
 withdraw my example Url(servlet:...) from above. When we can switch
 to sourceresolve 3.0, the dependency graph will get smaller anyway.

 The main benefit from using URLs (instead of the SourceResolver) comes
 from simple use cases, e.g. you need a pipeline in your Java
 application that reads in some XML file, performs some transformations
 and finally creates a PDF document. FWIW, using URLs should be all
 that you need.

 I totally agree with Reinhard; for most use cases getting an input
 stream (or sax events) via a url is totally sufficient. With the source
 interface we created another abstraction, like the request/response
 abstraction in the cocoon environment, which seems nice and great
 but in the end is not really needed, creates problems in other places etc.

I agree that our Environment abstraction was awkward - it introduced an 
abstraction that never was a real abstraction and mostly duplicated the 
existing, quite nice servlet API.

At least for now I fail to see a coherent, nice-to-use standard Java API that 
Excalibur's Source and SourceFactory duplicate. All I can see are obstacles like:

  new URL("blabla/foo");

which will fail with java.net.MalformedURLException: no protocol: blabla/foo,
so one must use:

  new URL(baseURL, "blabla/foo");

Who will judge whether a given path is relative and requires a baseURL 
instance? How will one get this baseURL instance?

Guys, it's nonsense...
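[Editor's sketch: the obstacle described above, made runnable with plain JDK classes; the base directory path is an arbitrary example.]

```java
import java.net.MalformedURLException;
import java.net.URL;

// Demonstrates the problem: a relative path alone cannot become a URL,
// so *some* component has to decide what the base is and carry it around.
public class RelativeUrlDemo {
    public static void main(String[] args) throws Exception {
        try {
            new URL("blabla/foo"); // no protocol -> fails
        } catch (MalformedURLException e) {
            System.out.println("relative alone: " + e.getMessage());
        }

        // Only with an explicit base URL can the relative form be resolved.
        URL base = new URL("file:///some/base/dir/");
        URL resolved = new URL(base, "blabla/foo");
        System.out.println("resolved path: " + resolved.getPath());
    }
}
```

The two-argument constructor answers *how* to resolve, but not the thread's actual question: who owns the base URL and how it reaches the code doing the resolution.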

 Let's forget jnet for a second and see if the java net api can be
 sufficient. The only other use case might really be caching. You need a
 way to find out if a resource might have changed or not, but I think
 that should be possible.
 Using java net api for Corona makes totally sense to me; it keeps it
 simple and small.

Yep, the idea sounds great - that's why I started to dig into JNet. As usual, 
the devil is in the details.

-- 
Grzegorz


Re: JNet integration

2008-03-25 Thread Reinhard Poetz

Grzegorz Kossakowski wrote:

Carsten Ziegeler pisze:

Reinhard Poetz wrote:


Once again, my goal is that if you use e.g. Corona in its simplest form,
I don't want to make everybody and his dog depend on 
JNet/SourceResolve/Source. E.g. see the FileGenerator. Using the URL 
object is enough for simple use cases of a pipeline API.


Yes, I understand that when it comes to caching pipelines, you need more,
but not everybody needs caching pipelines. For that purpose there could
be a CacheableFileGenerator, etc.

If you are right and it is difficult or even impossible to remove the 
dependencies on source/sourceresolve/xmlutils/jnet, then be it. I 
withdraw my example Url(servlet:...) from above. When we can switch to

sourceresolve 3.0, the dependency graph will get smaller anyway.

The main benefit from using URLs (instead of the SourceResolver) comes 
from simple use cases, e.g. you need a pipeline in your Java application

that reads in some XML file, performs some transformations and finally
creates a PDF document. FWIW, using URLs should be all that you need.


I totally agree with Reinhard; for most use cases getting an input stream
(or sax events) via a url is totally sufficient. With the source interface
we created another abstraction, like the request/response abstraction in the
cocoon environment, which seems nice and great but in the end is not
really needed, creates problems in other places etc.


I agree that our Environment abstraction was awkward - it introduced an
abstraction that never was a real abstraction and mostly duplicated the
existing, quite nice servlet API.

At least for now I fail to see a coherent, nice-to-use standard Java API that
Excalibur's Source and SourceFactory duplicate. All I can see are obstacles
like:

new URL("blabla/foo");

which will fail with java.net.MalformedURLException: no protocol: blabla/foo,
so one must use:

new URL(baseURL, "blabla/foo");

Who will judge whether a given path is relative and requires a baseURL
instance? How will one get this baseURL instance?


What about the developer? He could assemble pipelines this way:

URL baseUrl = new URL("file:///C:/temp/");
Pipeline pipeline = new NonCachingPipeline();
pipeline.addComponent(new FileGenerator(baseUrl, "xyz.xml");
pipeline.addComponent(new XSLTTransformer(baseUrl, "xyz.xslt");
pipeline.addComponent(new XMLSerializer());
pipeline.invoke(new InvocationImpl(System.out));

Any need for a Source object?

--
Reinhard PötzManaging Director, {Indoqa} GmbH
  http://www.indoqa.com/en/people/reinhard.poetz/

Member of the Apache Software Foundation
Apache Cocoon Committer, PMC member, PMC Chair[EMAIL PROTECTED]
_


Re: JNet integration

2008-03-25 Thread Reinhard Poetz

Reinhard Poetz wrote:

What about the developer? He could assemble pipelines this way:

URL baseUrl = new URL("file:///C:/temp/");
Pipeline pipeline = new NonCachingPipeline();
pipeline.addComponent(new FileGenerator(baseUrl, "xyz.xml");
pipeline.addComponent(new XSLTTransformer(baseUrl, "xyz.xslt");
pipeline.addComponent(new XMLSerializer());
pipeline.invoke(new InvocationImpl(System.out));


uuups, small correction:

URL baseUrl = new URL("file:///C:/temp/");
Pipeline pipeline = new NonCachingPipeline();
pipeline.addComponent(new FileGenerator(new URL(baseUrl, "xyz.xml")));
pipeline.addComponent(new XSLTTransformer(new URL(baseUrl, "xyz.xslt")));
pipeline.addComponent(new XMLSerializer());
pipeline.invoke(new InvocationImpl(System.out));

--
Reinhard PötzManaging Director, {Indoqa} GmbH
  http://www.indoqa.com/en/people/reinhard.poetz/

Member of the Apache Software Foundation
Apache Cocoon Committer, PMC member, PMC Chair[EMAIL PROTECTED]
_


Re: JNet integration

2008-03-25 Thread Grzegorz Kossakowski
Reinhard Poetz pisze:
 Reinhard Poetz wrote:
 What about the developer? He could assemble pipelines this way:

 URL baseUrl = new URL("file:///C:/temp/");
 Pipeline pipeline = new NonCachingPipeline();
 pipeline.addComponent(new FileGenerator(baseUrl, "xyz.xml");
 pipeline.addComponent(new XSLTTransformer(baseUrl, "xyz.xslt");
 pipeline.addComponent(new XMLSerializer());
 pipeline.invoke(new InvocationImpl(System.out));
 
 uuups, small correction:
 
 URL baseUrl = new URL("file:///C:/temp/");
 Pipeline pipeline = new NonCachingPipeline();
 pipeline.addComponent(new FileGenerator(new URL(baseUrl, "xyz.xml")));
 pipeline.addComponent(new XSLTTransformer(new URL(baseUrl, "xyz.xslt")));
 pipeline.addComponent(new XMLSerializer());
 pipeline.invoke(new InvocationImpl(System.out));

Hmm, getting back to more complicated scenarios: do you think that baseURL 
should be a scoped (call-scoped) Spring bean?


Another question is whether we should still support meta-protocols like the 
blockcontext: one. Currently it works the following way:
  * if you ask for blockcontext: or blockcontext:/ then you will get an 
instance of BlockContextSource, which implements TraversableSource (to list all 
blocks) and always returns null if asked for an InputSource.
  * if you ask for blockcontext:/block_name/directory then an ordinary 
FileSource is returned, pointing to the filesystem path of the block's root 
directory.

In the second case, blockcontext: works as a meta-protocol because its Factory 
never returns its own instance. This leads to the issue that 
getURI()/getExternalForm() returns a path beginning with file:/ instead of 
blockcontext:, which is bad IMHO.
I think one should always expect the same protocol in the canonical 
representation of a newly created URL, in order to avoid confusion.

WDYT?

-- 
Grzegorz


Re: JNet integration

2008-03-25 Thread Sylvain Wallez

Reinhard Poetz wrote:

Reinhard Poetz wrote:

What about the developer? He could assemble pipelines this way:

URL baseUrl = new URL("file:///C:/temp/");
Pipeline pipeline = new NonCachingPipeline();
pipeline.addComponent(new FileGenerator(baseUrl, "xyz.xml");
pipeline.addComponent(new XSLTTransformer(baseUrl, "xyz.xslt");
pipeline.addComponent(new XMLSerializer());
pipeline.invoke(new InvocationImpl(System.out));


uuups, small correction:

URL baseUrl = new URL("file:///C:/temp/");
Pipeline pipeline = new NonCachingPipeline();
pipeline.addComponent(new FileGenerator(new URL(baseUrl, "xyz.xml")));
pipeline.addComponent(new XSLTTransformer(new URL(baseUrl, "xyz.xslt")));
pipeline.addComponent(new XMLSerializer());
pipeline.invoke(new InvocationImpl(System.out));


Or even using method chaining:

new NonCachingPipeline()
    .setBaseURL(new URL("file:///C:/temp/"))
    .setGenerator(new FileGenerator("xyz.xml"))
    .addTransformer(new XSLTransformer("xyz.xslt"))
    .setSerializer(new XMLSerializer(new StreamResult(System.out)))
    .process();

Sylvain

--
Sylvain Wallez - http://bluxte.net



Re: JNet integration

2008-03-25 Thread Ralph Goers
I had to create a class at work for handling some files. I started with 
an input stream. What I needed, though, required caching and being able 
to check whether the file was still valid. In this case I soon realized 
that I would have to reinvent the Excalibur Source interface since I had 
to cache the Validity (or something like it) along with the information 
about the file.  In the end it made far more sense to just use the 
Source interface. I ended up extending the Excalibur Source 
implementations or creating my own though, as my Validity checking 
didn't match any of the existing implementations exactly.


The point is, if you are planning on caching your files and checking 
whether they are valid or not just using java.net, etc. isn't going to 
be sufficient.
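
A sketch of what Ralph describes, built on plain java.net: cache the content plus its last-modified stamp and re-read only when the stamp changes. Everything here (the CachedFile class, its field names) is illustrative; the point is that you end up hand-rolling Excalibur's Source/SourceValidity pair:

```java
import java.net.URL;
import java.net.URLConnection;

// Poor man's Source: content plus a "validity" (the last-modified stamp).
class CachedFile {
    private final URL url;
    private long lastModified = -1;  // stands in for SourceValidity
    private byte[] content;

    CachedFile(URL url) { this.url = url; }

    synchronized byte[] get() throws java.io.IOException {
        URLConnection conn = url.openConnection();
        long stamp = conn.getLastModified();
        if (content == null || stamp != lastModified) {
            // stale or never read: fetch the bytes and remember the stamp
            java.io.InputStream in = conn.getInputStream();
            java.io.ByteArrayOutputStream buf = new java.io.ByteArrayOutputStream();
            byte[] b = new byte[4096];
            for (int n; (n = in.read(b)) != -1; ) buf.write(b, 0, n);
            in.close();
            content = buf.toByteArray();
            lastModified = stamp;
        }
        return content;  // served from cache while the stamp is unchanged
    }
}
```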


Ralph

Grzegorz Kossakowski wrote:

Carsten Ziegeler pisze:
  


I agree that our Environment abstraction was awkward - it introduced an 
abstraction that was never a real abstraction and mostly duplicated the 
existing, quite nice servlet API.

At least for now I fail to see a coherent, nice-to-use standard Java API that 
Excalibur's Source and SourceFactory duplicate. As for now I can only see 
obstacles like:

  new URL("blabla/foo");

will fail with java.net.MalformedURLException: no protocol: blabla/foo
so one must use:

  new URL(baseURL, "blabla/foo");

Who will judge whether a given path is relative and requires a baseURL 
instance? How will one get this baseURL instance?

Guys, it's nonsense...
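
The complaint is easy to reproduce directly (a minimal demo; the base URL is the one used earlier in the thread):

```java
import java.net.MalformedURLException;
import java.net.URL;

public class RelativeUrlDemo {
    public static void main(String[] args) throws Exception {
        try {
            new URL("blabla/foo");               // no base, no protocol
        } catch (MalformedURLException e) {
            System.out.println(e.getMessage());  // no protocol: blabla/foo
        }
        // With a base URL the same string resolves fine -- but somebody
        // has to decide the path is relative and supply that base:
        URL base = new URL("file:///C:/temp/");
        System.out.println(new URL(base, "blabla/foo"));
    }
}
```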

  



Re: JNet integration

2008-03-25 Thread Ralph Goers


I think you are out of your mind. (Not seriously). 

I have to tell you, Cocoon without caching pipelines would suck so bad 
with performance problems you would give it the boot in very short 
order. Even without Cocoon, as soon as you start doing anything 
serious caching will become necessary.


I'll give you a trivial example. I wrote my own I18n implementation for 
use with JSF and used Excalibur Source to read an XML properties file 
containing the keys and values. The first implementation checked to see 
if the file was valid for every key that was read. This didn't perform 
well at all and I changed my Validity so that the file validity was only 
checked once per Request. This made it so the overhead of this utility 
was not noticeable. Now imagine that instead of just checking the 
validity I had been actually reading the file for every key!


Ralph

Reinhard Poetz wrote:



Once again, my goal is that if you use e.g. Corona in its simplest 
form, I don't want to make everybody and his dog depend on 
JNet/SourceResolve/Source. E.g. see the FileGenerator. Using the URL 
object is enough for simple use cases of a pipeline API.


Yes, I understand that when it comes to caching pipelines, you need 
more, but not everybody needs caching pipelines. For that purpose 
there could be a CacheableFileGenerator, etc.


If you are right and it is difficult or even impossible to remove the 
dependencies on source/sourceresolve/xmlutils/jnet, then so be it. I 
withdraw my example URL("servlet:...") from above. When we can switch 
to sourceresolve 3.0, the dependency graph will get smaller anyway.


The main benefit from using URLs (instead of the SourceResolver) comes 
from simple use cases, e.g. you need a pipeline in your Java 
application that reads in some XML file, performs some transformations 
and finally creates a PDF document. FWIW, using URLs should be all 
that you need.




unable to build cocoon forms 1.0 branch

2008-03-25 Thread Vadim Gritsenko

Hi All,

There is something wrong going on with Maven here. Trying to build Cocoon 
Forms from the 1.0 branch:


  $ svn info
  URL: 
https://svn.apache.org/repos/asf/cocoon/branches/cocoon-forms-1.0.0/cocoon-forms-impl

  $ mvn install
  [INFO] Installing ~/cocoon-forms-1.0.x/cocoon-forms-impl/target/cocoon-forms-impl-1.0.0-RC2-SNAPSHOT.jar to
  ~/.m2/repository/org/apache/cocoon/cocoon-forms-impl/1.1.0-SNAPSHOT/cocoon-forms-impl-1.1.0-SNAPSHOT.jar



Please note what it is doing: it installs the 1.0.0 version of the jar 
(cocoon-forms-impl-1.0.0-RC2-SNAPSHOT.jar) into the repository in place of the 
1.1.0 version (cocoon-forms-impl-1.1.0-SNAPSHOT.jar).


Can anybody take a look and fix this?

Thanks
Vadim


[continuum] BUILD FAILURE: Apache Cocoon [build root]

2008-03-25 Thread Continuum VMBuild Server

Online report : 
http://vmbuild.apache.org/continuum/buildResult.action?buildId=69109&projectId=51

Build statistics:
 State: Failed
 Previous State: Ok
 Started at: Tue 25 Mar 2008 13:29:39 -0700
 Finished at: Tue 25 Mar 2008 13:30:53 -0700
 Total time: 1m 13s
 Build Trigger: Schedule
 Build Number: 230
 Exit code: 1
 Building machine hostname: vmbuild.apache.org
 Operating system : Linux(unknown)
 Java Home version : 
 java version 1.4.2_15

 Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_15-b02)
 Java HotSpot(TM) Client VM (build 1.4.2_15-b02, mixed mode)
   
 Builder version :

 Maven version: 2.0.7
 Java version: 1.4.2_15
 OS name: linux version: 2.6.20-16-server arch: i386
   


SCM Changes:

No files changed


Dependencies Changes:

No dependencies changed



Build Definition:

POM filename: pom.xml
Goals: clean install   
Arguments: --batch-mode -P allblocks,it

Build Fresh: true
Always Build: false
Default Build Definition: true
Schedule: DEFAULT_SCHEDULE
Profile Name: Java 1.4, Large Memory
Description: 




Test Summary:

Tests: 0
Failures: 0
Total time: 0


Output:

[INFO] Scanning for projects...
Downloading: 
http://repo1.maven.org/maven2/org/apache/cocoon/cocoon/6/cocoon-6.pom
[INFO] 
[ERROR] FATAL ERROR
[INFO] 
[INFO] Failed to resolve artifact.

GroupId: org.apache.cocoon
ArtifactId: cocoon
Version: 6

Reason: Unable to download the artifact from any repository

 org.apache.cocoon:cocoon:pom:6

from the specified remote repositories:
 central (http://repo1.maven.org/maven2)


[INFO] 
[INFO] Trace
org.apache.maven.reactor.MavenExecutionException: Cannot find parent: 
org.apache.cocoon:cocoon for project: null:cocoon-core-modules:pom:6-SNAPSHOT 
for project null:cocoon-core-modules:pom:6-SNAPSHOT
at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:378)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:290)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:125)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:280)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:324)
at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
Caused by: org.apache.maven.project.ProjectBuildingException: Cannot find 
parent: org.apache.cocoon:cocoon for project: 
null:cocoon-core-modules:pom:6-SNAPSHOT for project 
null:cocoon-core-modules:pom:6-SNAPSHOT
at 
org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1261)
at 
org.apache.maven.project.DefaultMavenProjectBuilder.buildInternal(DefaultMavenProjectBuilder.java:747)
at 
org.apache.maven.project.DefaultMavenProjectBuilder.buildFromSourceFileInternal(DefaultMavenProjectBuilder.java:479)
at 
org.apache.maven.project.DefaultMavenProjectBuilder.build(DefaultMavenProjectBuilder.java:200)
at org.apache.maven.DefaultMaven.getProject(DefaultMaven.java:553)
at org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:467)
at org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:527)
at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:364)
... 11 more
Caused by: org.apache.maven.project.ProjectBuildingException: POM 
'org.apache.cocoon:cocoon' not found in repository: Unable to download the 
artifact from any repository

 org.apache.cocoon:cocoon:pom:6

from the specified remote repositories:
 central (http://repo1.maven.org/maven2)
for 

Re: JNet integration

2008-03-25 Thread Reinhard Poetz

Ralph Goers wrote:


I think you are out of your mind. (Not seriously).
I have to tell you, Cocoon without caching pipelines would suck so bad 
with performance problems you would give it the boot in very short 
order. Even without Cocoon, as soon as you start doing anything 
serious caching will become necessary.


Sure, caching is important, but this doesn't mean that we can't provide a *basic* 
pipeline API that works with URLs only 
(http://marc.info/?l=xml-cocoon-dev&m=120646488429681&w=2). If you need more, 
you can always build a layer on top of it, e.g. by using


Source source =
  (Source) new URL("file:///C:/Temp/foo.xml").getContent(new Class[] { Source.class });

or using a SourceResolver.
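
The getContent() trick works because URL.getContent(Class[]) delegates to the protocol handler's URLConnection.getContent(Class[]), so a handler installed through JNet can answer with a Source when asked for one. A minimal sketch of the mechanism (DemoSource and DemoConnection are illustrative stand-ins, not JNet's or Excalibur's actual classes):

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;

// Stand-in for the richer abstraction (e.g. Excalibur's Source).
class DemoSource { }

class DemoConnection extends URLConnection {
    DemoConnection(URL url) { super(url); }

    public void connect() throws IOException { }

    // URL.getContent(Class[]) ends up here for URLs of this protocol:
    // the caller lists the types it can handle, in order of preference.
    public Object getContent(Class[] classes) {
        for (int i = 0; i < classes.length; i++) {
            if (classes[i] == DemoSource.class) {
                return new DemoSource();  // hand back the richer object
            }
        }
        return null;                      // none of the requested types fit
    }
}
```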

--
Reinhard Pötz            Managing Director, {Indoqa} GmbH
              http://www.indoqa.com/en/people/reinhard.poetz/

Member of the Apache Software Foundation
Apache Cocoon Committer, PMC member, PMC Chair        [EMAIL PROTECTED]
_


[continuum] BUILD FAILURE: Apache Cocoon [build root]

2008-03-25 Thread Continuum VMBuild Server

Online report : 
http://vmbuild.apache.org/continuum/buildResult.action?buildId=69134&projectId=51

Build statistics:
 State: Failed
 Previous State: Failed
 Started at: Tue 25 Mar 2008 14:43:55 -0700
 Finished at: Tue 25 Mar 2008 14:45:04 -0700
 Total time: 1m 8s
 Build Trigger: Schedule
 Build Number: 230
 Exit code: 1
 Building machine hostname: vmbuild.apache.org
 Operating system : Linux(unknown)
 Java Home version : 
 java version 1.4.2_15

 Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_15-b02)
 Java HotSpot(TM) Client VM (build 1.4.2_15-b02, mixed mode)
   
 Builder version :

 Maven version: 2.0.7
 Java version: 1.4.2_15
 OS name: linux version: 2.6.20-16-server arch: i386
   


SCM Changes:

No files changed


Dependencies Changes:

No dependencies changed



Build Definition:

POM filename: pom.xml
Goals: clean install   
Arguments: --batch-mode -P allblocks,it

Build Fresh: true
Always Build: false
Default Build Definition: true
Schedule: DEFAULT_SCHEDULE
Profile Name: Java 1.4, Large Memory
Description: 




Test Summary:

Tests: 0
Failures: 0
Total time: 0


Output:

[INFO] Scanning for projects...
Downloading: 
http://repo1.maven.org/maven2/org/apache/cocoon/cocoon/6/cocoon-6.pom
[INFO] 
[ERROR] FATAL ERROR
[INFO] 
[INFO] Failed to resolve artifact.

GroupId: org.apache.cocoon
ArtifactId: cocoon
Version: 6

Reason: Unable to download the artifact from any repository

 org.apache.cocoon:cocoon:pom:6

from the specified remote repositories:
 central (http://repo1.maven.org/maven2)


[INFO] 
[INFO] Trace
org.apache.maven.reactor.MavenExecutionException: Cannot find parent: 
org.apache.cocoon:cocoon for project: null:cocoon-core-modules:pom:6-SNAPSHOT 
for project null:cocoon-core-modules:pom:6-SNAPSHOT
at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:378)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:290)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:125)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:280)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:324)
at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
Caused by: org.apache.maven.project.ProjectBuildingException: Cannot find 
parent: org.apache.cocoon:cocoon for project: 
null:cocoon-core-modules:pom:6-SNAPSHOT for project 
null:cocoon-core-modules:pom:6-SNAPSHOT
at 
org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1261)
at 
org.apache.maven.project.DefaultMavenProjectBuilder.buildInternal(DefaultMavenProjectBuilder.java:747)
at 
org.apache.maven.project.DefaultMavenProjectBuilder.buildFromSourceFileInternal(DefaultMavenProjectBuilder.java:479)
at 
org.apache.maven.project.DefaultMavenProjectBuilder.build(DefaultMavenProjectBuilder.java:200)
at org.apache.maven.DefaultMaven.getProject(DefaultMaven.java:553)
at org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:467)
at org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:527)
at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:364)
... 11 more
Caused by: org.apache.maven.project.ProjectBuildingException: POM 
'org.apache.cocoon:cocoon' not found in repository: Unable to download the 
artifact from any repository

 org.apache.cocoon:cocoon:pom:6

from the specified remote repositories:
 central (http://repo1.maven.org/maven2)
for 

[continuum] BUILD FAILURE: Apache Cocoon [build root]

2008-03-25 Thread Continuum VMBuild Server

Online report : 
http://vmbuild.apache.org/continuum/buildResult.action?buildId=69145&projectId=51

Build statistics:
 State: Failed
 Previous State: Failed
 Started at: Tue 25 Mar 2008 15:37:02 -0700
 Finished at: Tue 25 Mar 2008 15:40:35 -0700
 Total time: 3m 33s
 Build Trigger: Schedule
 Build Number: 230
 Exit code: 1
 Building machine hostname: vmbuild.apache.org
 Operating system : Linux(unknown)
 Java Home version : 
 java version 1.4.2_15

 Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_15-b02)
 Java HotSpot(TM) Client VM (build 1.4.2_15-b02, mixed mode)
   
 Builder version :

 Maven version: 2.0.7
 Java version: 1.4.2_15
 OS name: linux version: 2.6.20-16-server arch: i386
   


SCM Changes:

No files changed


Dependencies Changes:

No dependencies changed



Build Definition:

POM filename: pom.xml
Goals: clean install   
Arguments: --batch-mode -P allblocks,it

Build Fresh: true
Always Build: false
Default Build Definition: true
Schedule: DEFAULT_SCHEDULE
Profile Name: Java 1.4, Large Memory
Description: 




Test Summary:

Tests: 81
Failures: 0
Total time: 18885


Output:

[INFO] Scanning for projects...
[INFO] Reactor build order: 
[INFO]   Apache Cocoon

[INFO]   Cocoon Tools [modules]
[INFO]   Cocoon 2.2 Archetype: Block
[INFO]   Cocoon 2.2 Archetype: Block (plain)
[INFO]   Cocoon 2.2 Archetype: Web Application
[INFO]   Cocoon Integration Test Framework [maven-plugin]
[INFO]   Cocoon Maven Reports
[INFO]   Cocoon Maven 2 Plugin
[INFO]   Cocoon Maven Javadocs Script Report
[INFO]   Cocoon Maven Javadocs Script Report
[INFO]   Cocoon Reloading ClassLoader - Webapp Wrapper
[INFO]   Cocoon Reloading ClassLoader - Spring reloader
[INFO]   Cocoon Configuration API
[INFO]   Cocoon Spring Configurator
[INFO]   Cocoon Core [modules]
[INFO]   Cocoon Pipeline API
[INFO]   Cocoon Util
[INFO]   Cocoon XML API
[INFO]   Cocoon Expression Language API
[INFO]   Cocoon Pipeline Implementation
[INFO]   Cocoon XML Implementation
[INFO]   Cocoon Pipeline Components
[INFO]   Cocoon Sitemap API
[INFO]   Cocoon XML Utilities
[INFO]   Cocoon Expression Language Implementation.
[INFO]   Cocoon Thread API
[INFO]   Cocoon Sitemap Implementation
[INFO]   Cocoon Sitemap Components
[INFO]   Cocoon XML Resolver
[INFO]   Cocoon Store Implementation
[INFO]   Cocoon Thread Implementation
[INFO]   Cocoon Core
[INFO]   Cocoon Servlet Service Implementation
[INFO]   Cocoon Blocks [modules]
[INFO]   Cocoon Linkrewriter Block Implementation
[INFO]   Cocoon Servlet Service Components
[INFO]   Cocoon Ajax Block Implementation
[INFO]   Cocoon Template Framework Block Implementation
[INFO]   Cocoon Samples Style Default Block
[INFO]   Cocoon Ajax Block Sample
[INFO]   Cocoon Apples Block Implementation
[INFO]   Session Framework Implementation
[INFO]   XSP Block Implementation
[INFO]   Cocoon Main Core Sample Block
[INFO]   Cocoon Flowscript Block Implementation
[INFO]   Cocoon Forms Block Implementation
[INFO]   Cocoon Apples Block Samples
[INFO]   Cocoon Additional Sample Block
[INFO]   Cocoon Batik Block Implementation
[INFO]   Cocoon Forms Block Samples
[INFO]   Cocoon: Integration Tests Block
[INFO]   Cocoon Linkrewriter Block Samples
[INFO]   Cocoon Template Block Samples
[INFO]   Cocoon Batik Block Samples
[INFO]   Cocoon Welcome (Samples)
[INFO]   Asciiart Block Implementation
[INFO]   Asciiart Block Samples
[INFO]   cocoon-acegisecurity
[INFO]   Cocoon Authentication Block API
[INFO]   Cocoon Authentication Block Implementation
[INFO]   Cocoon Authentication Block Sample
[INFO]   Authentication Framework Implementation
[INFO]   Authentication Framework Sample Application
[INFO]   Axis Block Implementation
[INFO]   Axis Block Samples
[INFO]   Bsf Block Implementation
[INFO]   Bsf Block Samples
[INFO]   Cocoon Captcha Block Implementation
[INFO]   Cocoon Captcha Block Sample
[INFO]   Chaperon Block Implementation
[INFO]   Chaperon Block Samples
[INFO]   Cron block implementation
[INFO]   Cron Block Samples
[INFO]   Cocoon Database Block Mocks
[INFO]   Cocoon Database Block Bridge for Avalon components
[INFO]   Cocoon Database Block Implementation
[INFO]   Cocoon Hsqldb Server Block Implementation
[INFO]   Cocoon 

[jira] Commented: (COCOON-1985) AbstractCachingProcessingPipeline locking with IncludeTransformer may hang pipeline

2008-03-25 Thread Vadim Gritsenko (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12582098#action_12582098
 ] 

Vadim Gritsenko commented on COCOON-1985:
-

Both trunk and branch are patched for 'BIG SCARY UN-SYNCHRONIZED GAP' and 
pipeline locking is made optional (use 'locking' parameter), and with 
non-infinite wait time (use 'locking-timeout' parameter) defaulting to 7 
seconds. Please test and if there are no more issues with it, I'll close this 
bug report.

 AbstractCachingProcessingPipeline locking with IncludeTransformer may hang 
 pipeline
 ---

 Key: COCOON-1985
 URL: https://issues.apache.org/jira/browse/COCOON-1985
 Project: Cocoon
  Issue Type: Bug
  Components: * Cocoon Core
Affects Versions: 2.1.9, 2.1.10, 2.1.11, 2.2-dev (Current SVN)
Reporter: Ellis Pritchard
Priority: Critical
 Fix For: 2.1.12-dev (Current SVN), 2.2-dev (Current SVN)

 Attachments: caching-trials.patch, includer.xsl, patch.txt, 
 reproduceMultipleThreads.tar.gz, sitemap.xmap


 Cocoon 2.1.9 introduced the concept of a lock in 
 AbstractCachingProcessingPipeline, an optimization to prevent two concurrent 
 requests from generating the same cached content. The first request adds the 
 pipeline key to the transient cache to 'lock' the cache entry for that 
 pipeline, subsequent concurrent requests wait for the first request to cache 
 the content (by Object.lock()ing the pipeline key entry) before proceeding, 
 and can then use the newly cached content.
 However, this has introduced an incompatibility with the IncludeTransformer: 
 if the inclusions access the same yet-to-be-cached content as the root 
 pipeline, the whole assembly hangs, since a lock will be made on a lock 
 already held by the same thread, and which cannot be satisfied.
 e.g.
 i) Root pipeline generates using sub-pipeline cocoon:/foo.xml
 ii) the cocoon:/foo.xml sub-pipeline adds its pipeline key to the transient 
 store as a lock.
 iii) subsequently in the root pipeline, the IncludeTransformer is run.
 iv) one of the inclusions also generates with cocoon:/foo.xml, this 
 sub-pipeline locks in AbstractProcessingPipeline.waitForLock() because the 
 sub-pipeline key is already present.
 v) deadlock.
 I've found a (partial, see below) solution for this: instead of a plain 
 Object being added to the transient store as the lock object, the 
 Thread.currentThread() is added; when waitForLock() is called, if the lock 
 object exists, it checks that it is not the same thread before attempting to 
 lock it; if it is the same thread, then waitForLock() returns success, which 
 allows generation to proceed. You lose the efficiency of generating the 
 cache only once in this case, but at least it doesn't hang! With JDK1.5 this 
 can be made neater by using Thread#holdsLock() instead of adding the thread 
 object itself to the transient store.
 See patch file.
 However, even with this fix, parallel includes (when enabled) may still hang, 
 because they pass the not-the-same-thread test, but fail because the root 
 pipeline, which holds the initial lock, cannot complete (and therefore 
 satisfy the lock condition for the parallel threads) before the threads 
 themselves have completed, which then results in a deadlock again.
 The complete solution is probably to avoid locking if the lock is held by the 
 same top-level Request, but that requires more knowledge of Cocoon's 
 processing than I (currently) have!
 IMHO unless a complete solution is found to this, then this optimization 
 should be removed completely, or else made optional by configuration, since 
 it renders the IncludeTransformer dangerous.
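
Ellis's proposed fix can be sketched as follows. This is a hypothetical reconstruction from the description above, not the actual patch; names such as transientStore and pipelineKey are illustrative, not Cocoon's API:

```java
import java.util.HashMap;
import java.util.Map;

// Store the current Thread as the lock object, so a pipeline can detect
// that it already holds its own lock and proceed instead of deadlocking.
public class PipelineLockSketch {
    private final Map<Object, Object> transientStore = new HashMap<Object, Object>();

    /** Returns true if generation may proceed. */
    synchronized boolean waitForLock(Object pipelineKey) throws InterruptedException {
        Object lock = transientStore.get(pipelineKey);
        if (lock == null) {
            transientStore.put(pipelineKey, Thread.currentThread()); // take the lock
            return true;
        }
        if (lock == Thread.currentThread()) {
            // Same thread re-entering (e.g. via an include of the same
            // sub-pipeline): proceed without waiting. We lose the
            // generate-once efficiency, but we don't self-deadlock.
            return true;
        }
        wait(7000);  // another thread holds it: wait, with a timeout
        return transientStore.get(pipelineKey) == null;
    }

    synchronized void releaseLock(Object pipelineKey) {
        transientStore.remove(pipelineKey);
        notifyAll();  // wake any pipelines waiting on this key
    }
}
```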

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (COCOON-2173) AbstractCachingProcessingPipeline: Two requests can deadlock each other

2008-03-25 Thread Vadim Gritsenko (JIRA)

[ 
https://issues.apache.org/jira/browse/COCOON-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12582102#action_12582102
 ] 

Vadim Gritsenko commented on COCOON-2173:
-

Patched branch and trunk:
(1) Wait is now limited to 7 seconds by default (or specify timeout with 
'locking-timeout' parameter)
(2) Locking is optional, turned on by default but can be disabled by setting 
'locking' parameter to false.

I agree that ideally synchronization should be re-designed to include deadlock 
detection. I'll leave this for the next time - or somebody can provide a patch.

I don't like the short wait (250ms)-and-abort you are suggesting. It would be way 
too easy not to notice that something is going wrong with your pipelines if the 
wait is just 250ms. So I used a higher limit - 7s - so you notice that there is a 
problem, and then have the option of either configuring a lower limit or switching 
locking off completely for deadlocking pipelines. In your scenario, I would 
suggest moving the deadlocking pipelines into a separate map:pipeline with locking 
disabled. Or, if you have time, implementing deadlock detection :-)
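
Presumably the two parameters are set on the pipeline declaration along these lines (a sketch based only on the parameter names given above; the exact placement and whether 'locking-timeout' is in seconds or milliseconds are not stated here):

```xml
<map:pipeline type="caching">
  <!-- parameter names as given above; values are examples -->
  <map:parameter name="locking" value="true"/>
  <map:parameter name="locking-timeout" value="7000"/>
  <!-- matchers go here as usual -->
</map:pipeline>
```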

 AbstractCachingProcessingPipeline: Two requests can deadlock each other
 ---

 Key: COCOON-2173
 URL: https://issues.apache.org/jira/browse/COCOON-2173
 Project: Cocoon
  Issue Type: Bug
  Components: - Components: Sitemap
Affects Versions: 2.1.9, 2.1.10, 2.1.11, 2.2-dev (Current SVN)
Reporter: Alexander Daniel
 Attachments: patchFor2.1.11.txt, reproduceMultipleThreads.tar.gz, 
 reproduceMultipleThreads2.2RC3-SNAPSHOT.tar.gz


 Two requests can deadlock each other when they depend on the same resources 
 which they acquire in a different order. I can reproduce that in Cocoon 
 2.1.11 and Cocoon 2.2-RC3-SNAPSHOT:
 * request A: generating lock for 55933 
 * request B: generating lock for 58840 
 * request B: waiting for lock 55933 which is hold by request A 
 * request A: waiting for lock 58840 which is hold by request B 
 I can reproduce this behaviour with Apache Bench and following pipeline: 
 * terminal 1: Apache Bench request A (ab -k -n 1 -c 25 
 http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/55933/)
  
 * terminal 2: Apache Bench request B (ab -k -n 1 -c 25 
 http://localhost:/samples/reproduceMultipleThreads/productOfferForDevice/58840/)
  
 * terminal 3: touching the two data files every second to invalidate the 
 cache (while true; do echo -n .; touch 55933.xml 58840.xml; sleep 1; done) 
 * pipeline: 
 <map:pipeline type="caching">
   <map:match pattern="productOfferForDevice*/*/">
     <map:generate src="cocoon:/exists/{2}.xml" label="a"/>
     <map:transform type="xsltc" src="productOfferIncludeDevice.xsl" label="b">
       <map:parameter name="noInc" value="{1}"/>
     </map:transform>
     <map:transform type="include" label="c"/>
     <map:serialize type="xml"/>
   </map:match>
   <map:match pattern="exists/**">
     <map:act type="resource-exists">
       <map:parameter name="url" value="{1}"/>
       <map:generate src="{../1}"/>
       <map:serialize type="xml"/>
     </map:act>
     <!-- not found -->
     <map:generate src="dummy.xml"/>
     <map:serialize type="xml"/>
   </map:match>
 </map:pipeline>
 After some seconds the deadlock occurs ==> 
 * Apache Bench requests run into a timeout 
 * I can see following pipe locks in the default transient store: 
 PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/55933.xml?pipelinehash=-910770960103935149_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
  (class: org.mortbay.util.ThreadPool$PoolThread) 
 PIPELOCK:PK_G-file-cocoon://samples/reproduceMultipleThreads/exists/58840.xml?pipelinehash=-499603111986478_T-xsltc-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/productOfferIncludeDevice.xsl;noInc=_T-include-I_S-xml-1
  (class: org.mortbay.util.ThreadPool$PoolThread) 
 PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/55933.xml
  (class: org.mortbay.util.ThreadPool$PoolThread) 
 PIPELOCK:PK_G-file-file:///Users/alex/dev/cocoon/cocoon-2.1.11/build/webapp/samples/reproduceMultipleThreads/58840.xml
  (class: org.mortbay.util.ThreadPool$PoolThread) 
 I added some logging to AbstractCachingProcessingPipeline.java which 
 reconfirms the explanations above: 
 INFO (2008-03-13) 13:50.16:072 [sitemap] 
 (/samples/reproduceMultipleThreads/productOfferForDevice/55933/) 
 PoolThread-47/AbstractCachingProcessingPipeline: generating lock 
 

Re: JNet integration

2008-03-25 Thread Joerg Heinicke

On 25.03.2008 10:53, Reinhard Poetz wrote:

Once again, my goal is that if you use e.g. Corona in its simplest form, 
I don't want to make everybody and his dog depend on 
JNet/SourceResolve/Source. E.g. see the FileGenerator. Using the URL 
object is enough for simple use cases of a pipeline API.


Yes, I understand that when it comes to caching pipelines, you need 
more, but not everybody needs caching pipelines. For that purpose there 
could be a CacheableFileGenerator, etc.


If you are right and it is difficult or even impossible to remove the 
dependencies on source/sourceresolve/xmlutils/jnet, then so be it. I 
withdraw my example URL("servlet:...") from above. When we can switch to 
sourceresolve 3.0, the dependency graph will get smaller anyway.


The main benefit from using URLs (instead of the SourceResolver) comes 
from simple use cases, e.g. you need a pipeline in your Java application 
that reads in some XML file, performs some transformations and finally 
creates a PDF document. FWIW, using URLs should be all that you need.


Hmm, I don't see the advantages of dropping the Source abstractions. Why 
give up all the good things just to remove one dependency? What are 
the downsides of the Source abstraction? I never had the need to 
implement a Source, and for the mentioned simple cases I wonder where you 
have to cope with them at all. Cocoon used to be a framework for 
non-Java developers ... even if we introduce a pipeline API as in the 
examples in this thread, why do I need to care about URLs or Sources at 
all? Why should it be different from map:generate with its src 
attribute? And when I read CacheableFileGenerator, something tells me 
this approach is wrong.


Joerg