Re: [vote] Arje Cahn as a new Cocoon committer

2005-09-09 Thread Unico Hommes

Sylvain Wallez wrote:
 Please cast your votes!

+1, welcome :-)

--
Unico



Re: [VOTE] Switch to Maven NOW

2005-08-17 Thread Unico Hommes

Carsten Ziegeler wrote:
 So far the response was positive, so I think we should just vote about it
 and then do it. If you have any questions, please use the proposal thread.
 
 So please cast your votes for switching to Maven2 NOW as
 outlined/discussed in the proposal thread.

+1

--
Unico




Re: DirectoryGenerator using abstract Source

2005-07-13 Thread Unico Hommes

Michael Wechner wrote:
 Joerg Heinicke wrote:
 
 On 13.07.2005 00:28, Michael Wechner wrote:

 It seems to me that the directory generator is not really based
 on the abstract methods of an excalibur Source, but rather takes
 the source and maps it onto a java.io.File.

 Is that intended, or just not implemented for lack of time?

 I would like to make this more generic with regard to other sources,
 e.g.
 JCR or whatever. If this makes sense then I would patch the
 DirectoryGenerator,
 but otherwise I would write a DirectoryGenerator from scratch, e.g. a
 CollectionGenerator which makes use of the TraversableSource interface.



 You don't have to:
 $COCOON_HOME/src/blocks/repository/java/org/apache/cocoon/generation/TraversableGenerator.java

 
 
 
 thanks very much for the pointer. I think it would make sense to make a
 note within the DirectoryGenerator that the TraversableGenerator exists
 and is more generic.
 

In fact, IMHO it should be deprecated in favor of TraversableGenerator...

--
Unico




Re: Implementing map:mount... in Forrest Locationmap

2005-07-11 Thread Unico Hommes

Hi Ross! Great to see Forrest is finally making use of the locationmap.
I've been using it successfully myself for quite a while now.

Ross Gardler wrote:
 Quite some time ago Unico created a Locationmap module for Forrest.
 This allows us to specify where the source file for a request is
 located independently of the client request URL. This excellent code
 has sat in our SVN for far too long, waiting for someone with a
 strong enough itch to make real use of it. With the release of 0.7
 and therefore the start of 0.8 development it has come out of hiding
 and is now enabled in our core. It is a key part of integrating
 repositories and CMSs such as Slide, Daisy and Lenya.
 
 The Locationmap code [1] reuses code from Cocoon rather than starting
  from scratch. 

The parts of Cocoon that are reused are the Matchers and Selectors and
some ideas from the treeprocessor package, but not the treeprocessor's
own parsing and interpreting code, which among other things handles mounting.

 We now need to enhance the code to allow the mounting
 of sub-locationmaps in the same way that Cocoon can mount
 sub-sitemaps. We are hoping that this can be done through a simple
 reuse of Cocoon code. Unfortunately, nobody in Forrest knows the
 insides of Cocoon well enough at this time so it would seem now is
 time for some of us to learn...

No problem, I don't think it will be too difficult. I'll try to answer
any questions you may have as accurately as possible.

 A locationmap consists of a number of matchers, for example:
 
 <match pattern="remoteDemo/**.xml">
   <location src="http://svn.apache.org/repos/asf/forrest/trunk/site-author/content/xdocs/{1}.xml"/>
 </match>
 
 For a complete locationmap file see 
 http://svn.apache.org/viewcvs.cgi/*checkout*/forrest/trunk/site-author/content/xdocs/docs_0_80/locationmap.xml?rev=210059
 
 
 
 The map is built by the build(...) method in 
 org.apache.forrest.locationmap.lm.LocationMap (see 
 http://svn.apache.org/viewcvs.cgi/forrest/trunk/main/java/org/apache/forrest/locationmap/lm/LocationMap.java?view=markup
  )
 
 As can be seen in this code Unico reused much of the sitemap code
 from Cocoon. My question is, can we also leverage the map:mount...
 code? Any pointers as to how to do this would be greatly appreciated.

I mostly reused only the XML syntax, not the actual tree-building code
from the treeprocessor package. For instance, the treeprocessor
distinguishes between NodeBuilders and Nodes; I did not deem that
necessary for the locationmap because it is much simpler and more limited
in scope. But the mount mechanism should be similar.

Here's a rough list of what it involves to implement (a rough sketch
follows the list):

- Create a new class MountNode that represents a map:mount statement.
- Add code that creates and builds new MountNode objects whenever
map:mount is encountered (duh).
- Implement MountNode:locate. You'll need the same code that is in
LocationMapModule:getLocationMap(); probably factor it out into an
accessible static method for reuse. Then call LocationMap:locate with
the rest of the hint.
- There may be a need to pass the parent LocationMap's InvokeContext
along to the child LocationMap, but probably not. You'd need to change
the LocationMap:locate method signature for that and then pass the
parent InvokeContext when constructing the child InvokeContext (I think).
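To make that concrete, here is a very rough sketch in Java. Everything in
it is hypothetical: AbstractNode, the locate() signature and the static
getLocationMap() helper are guesses based on the list above, not the
actual Forrest locationmap API.

    import java.util.Map;

    // Hypothetical sketch only; not actual Forrest code.
    public class MountNode extends AbstractNode {

        /** Location of the sub-locationmap, taken from the src attribute. */
        private final String src;

        public MountNode(String src) {
            this.src = src;
        }

        public String locate(Map objectModel, String hint) throws Exception {
            // The logic of LocationMapModule#getLocationMap(), factored out
            // into an accessible static method as suggested above:
            LocationMap child = LocationMapModule.getLocationMap(this.src, objectModel);
            // Delegate to the child locationmap with the rest of the hint:
            return child.locate(hint, objectModel);
        }
    }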

Hope that helps :)

--
Unico



Re: Implementing map:mount... in Forrest Locationmap

2005-07-11 Thread Unico Hommes

Thorsten Scherler wrote:
 On Mon, 2005-07-11 at 12:57 +0200, Unico Hommes wrote:
 
 Hi Ross! Great to see Forrest is finally making use of the
 locationmap. I've been using it successfully myself for quite a while
 now.
 
 Ross Gardler wrote:
 
 Quite some time ago Unico created a Locationmap module for Forrest.
  This allows us to specify where the source file for a request is 
 located independently of the client request URL. This excellent
 code has sat in our SVN for far too long, waiting for someone with
 a strong enough itch to make real use of it. With the release of
 0.7 and therefore the start of 0.8 development it has come out of
 hiding and is now enabled in our core. It is a key part of
 integrating repositories and CMSs such as Slide, Daisy and Lenya.
 
 The Locationmap code [1] reuses code from Cocoon rather than
 starting from scratch.
 
 The part of Cocoon that is reused are Matchers and Selectors and some
  ideas in the treeprocessor package, but not the parsing and
 interpreting code of the treeprocessor itself which among other
 things concerns mounting.
 
 
 
 First of all great work Unico (OT BTW, in Spanish your name means
 'the only one'). :)

True, and my last name means 'men' in French. I guess it's better than
the previous Dutch prime minister's last name, which was Kok ;-)

 Talking about selectors in the lm: I gave it a shot last month
 with the select type="exist", but I was stumbling over the
 map:when/otherwise syntax that seems not to be implemented in the lm.

I have not used selectors in the locationmap that much myself, so that
part is probably the least tested. The thing with selectors in the context
of the locationmap is that the concept makes less sense in some
situations. Therefore the syntax is a bit different from the selector
syntax in a sitemap:


  <select type="exists">
    <location src="file://home/styles/site.xsl"/>
    <location src="file://forrest/styles/site.xsl"/>
  </select>


 How do we (I am from Forrest as well) have to extend that part?
 Maybe you would like to join forces again, because now that we have
 understood the power of your work, we would like to enhance it
 further. You are the creator and I guess you know how to get the most
 out of it. ;-)

Hopefully :-). I promise to monitor forrest-dev more closely from now on.

 
 Thanks for helping us out.

You're welcome :-)

--
Unico




Re: [vote] Give Max Pfingsthorn temporary and restricted commit privileges to our code repository

2005-07-10 Thread Unico Hommes

Reinhard Poetz wrote:
 
 As you all know, Max Pfingsthorn is one of our Google Summer of Code
 students and he will work on the implementation of the cforms library.
 
 In order to make his life and the life of his two mentors (Sylvain and
 me) easier, I want to give him *temporary* and *restricted*
 (http://svn.apache.org/repos/asf/cocoon/whiteboard/forms/**) commit
 privileges to our SVN code repository.
 
 After the project, Max's account will be closed and he will have to go the
 usual way of becoming a regular Apache Cocoon committer.
 (http://www.apache.org/foundation/how-it-works.html#roles).
 
 Please cast your votes!
 

+1

--
Unico




Re: FW: svn commit: r191131 - /cocoon/branches/BRANCH_2_1_X/src/blocks/mail/java/org/apache/cocoon/mail/transformation/SendMailTransformer.java

2005-06-17 Thread Unico Hommes
It does. Thanks, I'll fix it.

--
Unico

Nathaniel Alfred wrote:
 Twice static final int MODE_xxx = 8 looks like a copy-waste error?
 Cheers, Alfred.
 
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: Friday, 17 June 2005 13:46
 To: cvs@cocoon.apache.org
 Subject: svn commit: r191131 -
 /cocoon/branches/BRANCH_2_1_X/src/blocks/mail/java/org/apache/cocoon/mai
 l/transformation/SendMailTransformer.java
 
 
 Author: unico
 Date: Fri Jun 17 04:46:08 2005
 New Revision: 191131
 
 URL: http://svn.apache.org/viewcvs?rev=191131&view=rev
 Log:
 Make smtp port configurable
 
 Modified:
 
 cocoon/branches/BRANCH_2_1_X/src/blocks/mail/java/org/apache/cocoon/mail/transformation/SendMailTransformer.java
 
 Modified: 
 cocoon/branches/BRANCH_2_1_X/src/blocks/mail/java/org/apache/cocoon/mail/transformation/SendMailTransformer.java
 URL: 
 http://svn.apache.org/viewcvs/cocoon/branches/BRANCH_2_1_X/src/blocks/mail/java/org/apache/cocoon/mail/transformation/SendMailTransformer.java?rev=191131&r1=191130&r2=191131&view=diff
 ==============================================================================
 --- cocoon/branches/BRANCH_2_1_X/src/blocks/mail/java/org/apache/cocoon/mail/transformation/SendMailTransformer.java (original)
 +++ cocoon/branches/BRANCH_2_1_X/src/blocks/mail/java/org/apache/cocoon/mail/transformation/SendMailTransformer.java Fri Jun 17 04:46:08 2005
 @@ -188,6 +188,7 @@
  public static final String NAMESPACE  = "http://apache.org/cocoon/transformation/sendmail";
  public static final String ELEMENT_SENDMAIL   = "sendmail";
  public static final String ELEMENT_SMTPHOST   = "smtphost";
 +public static final String ELEMENT_SMTPPORT   = "smtpport";
  public static final String ELEMENT_MAILFROM   = "from";
  public static final String ELEMENT_MAILTO = "to";
  public static final String ELEMENT_REPLYTO= "reply-to";
 @@ -215,11 +216,13 @@
  protected static final int MODE_ATTACHMENT = 6;
  protected static final int MODE_ATTACHMENT_CONTENT = 7;
  protected static final int MODE_REPLY_TO   = 8;
 +protected static final int MODE_SMTPPORT   = 8;
 ^
  
 




Re: svn commit: r191129 - /cocoon/trunk/blocks.properties

2005-06-17 Thread Unico Hommes
Reinhard Poetz wrote:
 [EMAIL PROTECTED] wrote:
 
 Author: unico
 Date: Fri Jun 17 04:24:24 2005
 New Revision: 191129

 URL: http://svn.apache.org/viewcvs?rev=191129&view=rev
 Log:
 jms dependencies changed

 Modified:
 cocoon/trunk/blocks.properties
 
 
 Maybe I've overlooked your commit of gump.xml; if not, blocks.properties
 is generated out of gump.xml using some Ant task.
 

I guess you did overlook it; my version of gump.xml is up to date and I
used it to generate blocks.properties.

--
Unico



Re: [VOTE] Consensus about documentation location - revised version

2005-06-14 Thread Unico Hommes
Linden H van der (MI) wrote:
 Hi Bertrand,
 
 
P.S. Helma, it seems like your mailer is breaking threads sometimes,
but not always. For example, [2] starts a new thread although it is
obviously a reply to [3]. If it's easy to fix it might be good for our
archives.

[2] http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=111866680601135&w=2
[3] http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=111866271205873&w=2
 
 
 I have no idea. I'm using Outlook connecting to an Exchange server. I
changed the header myself, to reflect the content. So tell me what I
should and shouldn't do.

Outlook doesn't send the correct headers in order to track who replied
to whom. This was discussed a while ago here:
http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=107548477026275&w=2

I'd encourage you to start using a different mail client such as
Mozilla/Thunderbird because it *does* make it a lot easier to follow the
discussions. Perhaps our site should mention that the mailing lists are
best followed with a mailer that does not break the thread view, just as
we ask people to send their messages in plain text, for example.

--
Unico



Re: Eclipse 3.1's Unnecessary else statement

2005-06-09 Thread Unico Hommes
Carsten Ziegeler wrote:
 Sylvain Wallez wrote:
 
Hi all,

I've noticed for a while that many commits are related to making Eclipse
happier because of a new 3.1 feature that flags "unnecessary else"
statements.

These changes are for constructs such as:
if (condition) {
    return foo;
} else {
    return bar;
}

which are changed to:
if (condition) {
    return foo;
}
return bar;

 
 
So please, update your settings and leave unchanged what doesn't need to 
be changed :-)

 
 Oh no, you're starting one of the famous code formatting threads...I'm
 just waiting for someone to point out that the brackets should be on a
 new line...

Just for play, here is my religion: I believe only control statements
need to start on a new line; brackets are merely punctuation:

if (condition) {
  return foo;
}
else {
  return bar;
}

:-P

 Seriously, I think a method should only have *one single* return
 statement, which IMHO makes the code even more readable.

Seriously, you have a point. But I don't think it is worth the effort of
arguing it or making everyone abide. The "unnecessary else" fix that
Sylvain points out is indeed silly, as the improvement Eclipse proposes
results in code that is actually less clear.

--
Unico



Re: [VOTE] Document Editors, and a new Committer

2005-06-09 Thread Unico Hommes

Upayavira wrote:

<snip/>
 
 On this basis, I'd like to propose Helma Van Der Linden as a Cocoon
 committer, and thus our first 'publisher'. She has been participating
 regularly in our community, and has shown a quiet but steady interest in
 helping with our documentation, as well as an increasingly clear
 understanding of how our community works.
 
 As granting committership requires a vote, please cast your votes now:

[X] Helma Van Der Linden as a Cocoon committer

--
Unico



Re: Small CMS on top of the SVN repository

2005-05-25 Thread Unico Hommes

Reinhard Poetz wrote:
 Unico Hommes wrote:
 

 Sylvain Wallez wrote:
 <snip/>

 Yes it *is* about the technology. Why don't we have cool docs? Because
 writing them in XML is a major PITA. That's why I dislike writing docs
 so much.

 Give me an htmlarea in a webapp and I'll be happy to dump my brain. And
 we have this today with Cocoon.



 That gets me thinking that perhaps, instead of installing a ready-to-go
 CMS like Daisy or Lenya or what have you, we should just start right
 there. I mean, install a minimal Cocoon with Linotype at
 cocoon.zones.apache.org and start adding the features we want to it.

 As a start I'd volunteer to make Linotype use our SVN repository as its
 backend. I'm sure we can find some space in our repository for that.
 
 
 My initial idea was very similar (instead of Linotype I'd use Cocoon
 Forms with a widget styled using HTMLArea), see
 http://wiki.apache.org/cocoon/CocoonDocumentationSystem. The new flat
 structure of documents (every document is a directory with a bunch of
 documents) was designed to be easily integrated into a tiny,
 self-written CMS on top of it.
 

Exactly. The document editing side of this is really not very
complicated. We don't need a heavy enterprise-level CMS for that. I very
much like the proposal on that wiki page. And I see that some work has
already been done.

I have a little time on my hands to actually implement some of the
things mentioned. My only question is whether others are setting up some
other system in parallel. I'd rather not duplicate efforts if it turns
out someone has set up Daisy somewhere by the end of this week. If
someone's that close to a working system, I say we go with that.

--
Unico



Re: Block builder and deployer

2005-05-25 Thread Unico Hommes

Reinhard Poetz wrote:
 Nicola Ken Barozzi wrote:
 
 Unico Hommes wrote:
 ...

 Exactly my thinking. In fact the reason I asked is that I was thinking
 of starting a Maven2 plugin for cocoon. I've been looking at the
 emerging Maven2 effort that is due to come out this summer and I think
 it's going to be a killer. IIUC I can just start that in the whiteboard
 without a vote, right?



 Yes.

 Maven2 is very interesting: it seems that most architectural
 shortcomings of Maven 1 have been fixed.

 I have written an XSL that converts the gump descriptor into the
 block.xml files; I just need to test it.
 
 
 :-D
 
 I also want to use the Maven Ant tasks to download the jars needed, as
 already voted on this list. For this, and to be able to collaborate on
 transitive dependencies with the other projects, we will also need to
 create a pom.xml
 
 
 can we generate this? I want to avoid having more than one descriptor file

Good point, I'll keep that in mind. Though it's probably separate from
the core plugin functionality.

 for each block, which would help also your effort for Maven 2.

 BTW, the current schema is inadequate for Maven2:

   <libraries>
     <lib id="avalon-framework-api" location="core"/>
     <lib id="avalon-framework-impl" location="core"/>
     <lib id="excalibur-xmlutil" location="core"/>
     <lib id="excalibur-pool" location="core"/>
     <lib id="excalibur-sourceresolve" location="core"/>
   </libraries>

 I'll have to change it to mimic the Maven pom library entries.
 
 
 no problem
 
 OTOMH, it may also be beneficial to use the Maven2 project
 directory layout, at least for blocks.
 
 
 no problem
 
 The current block builder makes it possible to work on blocks that have
 a dependency on another *development version* of a block. What I don't
 want is having a build system that requires me to build all dependencies
 manually and put the build JARs into my project. This has to happen
 automatically (as it is done by the current block builder).

M2 does this automatically IIUC.

 The build system must also be able to resolve all the dependencies on
 *development versions* of blocks when it creates IDE descriptor files.

OK, M2 has support for generating Eclipse and IDEA project files, but I
don't know if it will take us all the way here.

--
Unico




Re: Block builder and deployer

2005-05-25 Thread Unico Hommes

Nicola Ken Barozzi wrote:
 Unico Hommes wrote:
 ...
 
 Exactly my thinking. In fact the reason I asked is that I was thinking
 of starting a Maven2 plugin for cocoon. I've been looking at the
 emerging Maven2 effort that is due to come out this summer and I think
 it's going to be a killer. IIUC I can just start that in the whiteboard
 without a vote, right?
 
 
 Yes.

I love that policy.

 Maven2 is very interesting: it seems that most architectural
 shortcomings of Maven 1 have been fixed.
 
 I have written an XSL that converts the gump descriptor into the block.xml
 files; I just need to test it.

If you upload it to Subversion I could help you out. :-)

--
Unico





Re: Small CMS on top of the SVN repository

2005-05-25 Thread Unico Hommes

Linden H van der (MI) wrote:
I have a little time on my hands to actually implement some 
of the things mentioned. My only question is what are others 
possibly doing in parallel setting up some other system. I'd 
rather not duplicate efforts when it turns out someone has 
set up Daisy somewhere by the end of this week. If someone's 
that close to a working system I say we go with that.
 
 
 Daisy already IS set up: www.cocoondev.org/handbook 
 
 I now use it for the tutorial only, but I really don't mind if it
 becomes the (temporary?) home of the entire new documentation.

I saw your mail about it right after I hit send :) Well, in that case
perhaps I'd better concentrate on other things. I have to say it looks
like exactly what is needed. And if Ross is working on a way to
integrate it with Forrest, it seems like our troubles will be over very soon.

--
Unico



Re: Planet Cocoon license and reuse of docs

2005-05-25 Thread Unico Hommes

Stefano Mazzocchi wrote:
 Unico Hommes wrote:
 
 
BTW. Where *is* Linotype? I found this
http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=110705988801725&w=2 but
looking at http://simile.mit.edu/ I can't seem to find it. Linotype in
the 2.1 branch seems to be somewhat broken.
 
 
 on betaversion.org SVN I host the version that runs my blog and the
 latest version, which should be RDF-based but currently doesn't work.

Does the repository have public access, and if so could you send me a
URL so I can take a look? I can't seem to find it.

--
Unico





Re: Small CMS on top of the SVN repository

2005-05-25 Thread Unico Hommes

Ross Gardler wrote:
 Unico Hommes wrote:
 

 Linden H van der (MI) wrote:

 I have a little time on my hands to actually implement some of the
 things mentioned. My only question is what are others possibly doing
 in parallel setting up some other system. I'd rather not duplicate
 efforts when it turns out someone has set up Daisy somewhere by the
 end of this week. If someone's that close to a working system I say
 we go with that.



 Daisy already IS set up: www.cocoondev.org/handbook
 I now use it for the tutorial only, but I really don't mind if it
 becomes the (temporary?) home of the entire new documentation.



 I saw your mail about it right after I hit send :) Well in that case,
 perhaps I should better concentrate on other things. I have to say it
 looks like exactly what is needed. And if Ross is working on a way to
 integrate it with Forrest it seems like our troubles will be over very
 soon.
 
 
 I would like to encourage you to create a CForms based editor that will
 write to SVN. There is already a plugin for Forrest that creates an
 htmlArea editor, however it is not CForms based and only writes to the
 filesystem. I've been using it for some time with great success on my
 Learning Object editor (http://www.burrokeet.org).
 
 If you create a CForms editor here I will replace the existing Forrest
 plugin with your CForms-based editor.
 
 This will mean that people will be able to do 'forrest run' and edit the
 content in SVN immediately. As the Daisy plugin matures the same will be
 true for content in a Daisy repository. Similarly, if PlanetCocoon
 succeeds we can integrate their content and editing in the same way.

I'd rather people didn't need to install anything at all in order to
edit documentation, and could instead log in to an online system.

 My vision is to integrate all the various efforts via a Forrest
 publishing system. Please, if you have time go ahead and write the
 htmlArea - SVN editor.

I enabled the htmlArea plugin you mention for the Forrest site to get a
better feel for what you mean, but I couldn't immediately get it working.
I'll have a better look at it tomorrow. If the editor is to be used
inside the plugin, then I'd rather just develop it in the context of
that plugin directly.

--
Unico



Re: modifiable source and .tmp files

2005-05-24 Thread Unico Hommes

Close the OutputStream.
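For illustration, a minimal sketch of what that means, assuming the
stream came from ModifiableSource.getOutputStream() (the 'os' field name
is made up; Jorg's actual code at [2] will differ):

    // Hypothetical excerpt from the TeeTransformer's endDocument():
    public void endDocument() throws SAXException {
        super.endDocument();
        try {
            this.os.flush();
            // Closing the stream is what commits the write, i.e. renames
            // myfile.log.tmp to the intended myfile.log:
            this.os.close();
        } catch (java.io.IOException e) {
            throw new org.xml.sax.SAXException("Could not close tee output stream", e);
        }
    }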

Jorg Heymans wrote:
 Hi,
 
 As per [1], I decided to have a go at implementing a SAX-based
 TeeTransformer.
 
 Now the code works [2], but Cocoon writes a .tmp file instead of the
 intended file (say myfile.log.tmp instead of myfile.log). Any clues as
 to why this is happening? When I try to close the output stream on the
 modifiable source in recycle() I get ConcurrentModificationExceptions
 on the file itself.
 
 
 Regards
 Jorg
 
 [1] http://article.gmane.org/gmane.text.xml.cocoon.user/49099
 [2] http://www.domek.be/TeeTransformer.java
 
 



Re: modifiable source and .tmp files

2005-05-24 Thread Unico Hommes

Reinhard Poetz wrote:
 Unico Hommes wrote:
 
 Nice to see you back Unico!

Thanks Reinhard! Lurker mode is off now, but I still have some catching
up to do. One of the things I'd been meaning to find out is the status
of your blocks builder and blocks deployer efforts. Especially: is the
blocks builder the intended build system for 2.2? I found the wiki
documentation on it.

--
Unico



Re: Planet Cocoon license and reuse of docs

2005-05-24 Thread Unico Hommes

Sylvain Wallez wrote:
<snip/>
 
 Yes it *is* about the technology. Why don't we have cool docs? Because
 writing them in XML is a major PITA. That's why I dislike writing docs
 so much.
 
 Give me an htmlarea in a webapp and I'll be happy to dump my brain. And
 we have this today with Cocoon.

That gets me thinking that perhaps, instead of installing a ready-to-go
CMS like Daisy or Lenya or what have you, we should just start right
there. I mean, install a minimal Cocoon with Linotype at
cocoon.zones.apache.org and start adding the features we want to it.

As a start I'd volunteer to make Linotype use our SVN repository as its
backend. I'm sure we can find some space in our repository for that.

This discussion has been gaining so much momentum it's clear that we
should just start to fucking implement the doko before all this energy
disperses because of some political standoff or people waiting for each
other to make the first move.

BTW. Where *is* Linotype? I found this
http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=110705988801725&w=2 but
looking at http://simile.mit.edu/ I can't seem to find it. Linotype in
the 2.1 branch seems to be somewhat broken.

--
Unico



Re: Block builder and deployer

2005-05-24 Thread Unico Hommes

Reinhard Poetz wrote:
 Unico Hommes wrote:
 

 Reinhard Poetz wrote:

 Unico Hommes wrote:

 Nice to see you back Unico!



 Thanks Reinhard! Lurker mode is off now, but still have some catching up
 to do. One of the things I'd been meaning to find out is what is the
 status of your blocks builder 
 
 
 the block-builder works as described on the wiki page. It uses the
 block.xml to generate an Ant script which builds the block, resolving
 all dependencies. Currently it requires more knowledge about how
 blocks will actually look. AFAIU only a little work is left, but it
 depends on more work on the implementation of real blocks (Daniel's
 current work on the Block(s)Manager, the OSGi integration).
 

Ah, OK, that explains the rather suspended state of that effort I was
sensing.

 and blocks deployer
 
 
 The interfaces are pretty stable and the unit tests are already able to
 deploy a block. My last commits dealt with updating the wiring.xml but
 this hasn't been finished. I guess that it will take about 5 working
 days to finish the missing parts and some additional time to adapt it
 according to the requirements that will arise with the actual block
 implementation.
 
 As we currently have the ongoing discussion about OSGi, I agree with
 Daniel and Sylvain that we will have to provide our own deployment tools
 as our needs are too special. But maybe I'm (we're) wrong here.
 
 Of course any help is very appreciated!
 
  efforts. Especially,
 
 is blocks builder the intented build system for 2.2?
 
 
 hmmm, IMO yes ;-) Some others want to use Maven. As said in some former
 discussions, it will not matter which build system will build a COB as
 long as it follows the (to be done) COB specification (block.xml,
 directory structure). I generated Ant scripts out of block descriptors
 (block.xml) as I know much more about Ant than about Maven. Finally, I
 think we will be able to provide support for both build systems which
 isn't a disadvantage IMO.

Exactly my thinking. In fact the reason I asked is that I was thinking
of starting a Maven2 plugin for cocoon. I've been looking at the
emerging Maven2 effort that is due to come out this summer and I think
it's going to be a killer. IIUC I can just start that in the whiteboard
without a vote, right?

--
Unico



Re: EHDefaultStore

2004-12-03 Thread Unico Hommes
On 3-dec-04, at 13:12, Jon Evans wrote:
Hi Unico,
On 2 Dec 2004, at 13:35, Unico Hommes wrote:
On 2-dec-04, at 13:13, Jon Evans wrote:
I needed a cache for part of my webapp.  Basically I can look up 
lists of things, based on a number of parameters.  I put all the 
parameters into a Serializable class which becomes the key.  If it 
isn't in the cache then I create a new object using a stored 
procedure and add it to the cache.  The stored procedure takes about 
500ms, hence the need for the cache.

I had to create my own version of EHDefaultStore, specific to my 
app, because I didn't want an eternal cache.  I expire items after 5 
minutes so that a db hit is forced (in case the data has been 
updated).  Although EHDefaultStore takes many config parameters, it 
doesn't have eternal, timeToLive or timeToIdle, and is hard coded to 
always create an eternal cache.

My questions:
1) Any reason why those parameters can't be added?  Then I could 
have just configured my own instance in cocoon.xconf with a 
different role name to the default.

EHDefaultStore is eternal by design. Having a store that would take 
care of expiration itself does not fit well with the way Cocoon uses 
and manages its stores. In order to open up the implementation to 
other uses than that of Cocoon it would be best to follow the same 
pattern as DefaultStore is doing. That is, move the current 
EHDefaultStore to AbstractEHStore and have EHDefaultStore extend 
 AbstractEHStore and hard-code / override some of the parameter
settings.
Sounds like a good idea.  Shall I do that & submit the patch? (stupid
question...) :-)

Yes please :-)
2) EHDefaultStore configures a CacheManager from an xml file, but 
then creates the Cache object itself using the long Cache() 
constructor.
From my understanding of the EHCache docs, the xml file is used to 
 set the defaults for when a Cache object is created by the
CacheManager itself, which isn't being done in EHDefaultStore.
We might just as well use

CacheManager cacheManager = CacheManager.create(); // or getInstance()

even if the values in ehcache.xml are changed, it will make no 
difference because each Cache is programmatically created using
specific values for each property.  So we don't need it.

Yes, we need it. IIRC some settings can only be specified in the 
descriptor file. Most notably the filesystem location of the 
disk-based cache. If you look at the EHDefaultStore code you can see 
that it relies on the fact that the disk-based cache location is
configured to be the java.io.tmpdir system property.
OK.  Incidentally, the code also includes:
System.setProperty("java.io.tmpdir", directoryPath);
Is that a good idea? Any component starting after EHDefaultStore
would see that new value for tmpdir, which could be different to the 
one set when the JVM started up.  I don't think it could cause 
problems but you could potentially end up with temp files in two 
different places.  Or have I misread it again?


No, you are absolutely right. This is a hazardous situation. 
Unfortunately, I don't think there is an easy way around this. The only 
thing I can think of ATM is to manipulate the configuration file's 
input stream to insert the right value.
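A rough sketch of that idea, with everything hypothetical (the @TMPDIR@
token, the helper shape, and the availability of
CacheManager.create(InputStream) in the ehcache version at hand):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import net.sf.ehcache.CacheManager;

    // Read the bundled descriptor, substitute the desired disk store path
    // for a placeholder token, and hand the patched stream to ehcache,
    // instead of mutating the global java.io.tmpdir property.
    private CacheManager createCacheManager(String directoryPath) throws Exception {
        InputStream in = getClass().getResourceAsStream("ehcache.xml");
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        for (int n; (n = in.read(chunk)) != -1;) {
            buffer.write(chunk, 0, n);
        }
        in.close();
        // @TMPDIR@ is an invented token the descriptor would need to carry:
        String xml = buffer.toString("UTF-8").replaceAll("@TMPDIR@", directoryPath);
        return CacheManager.create(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    }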

--
Unico


Re: EHDefaultStore

2004-12-03 Thread Unico Hommes
On 3-dec-04, at 13:27, Jon Evans wrote:
Hi Jorg,
On 2 Dec 2004, at 13:38, Jorg Heymans wrote:
Jon Evans wrote:
I had to create my own version of EHDefaultStore, specific to my 
app, because I didn't want an eternal cache.  I expire items after 5 
minutes so that a db hit is forced (in case the data has been 
updated).  Although EHDefaultStore takes many config parameters, it 
doesn't have eternal, timeToLive or timeToIdle, and is hard coded to 
always create an eternal cache.
I'm not sure on the exact role of EHDefaultStore (alright, I didn't
know it even existed).

I am using my own CacheManager created as follows:
CacheManager.create(new FileInputStream(new File("/WEB-INF/classes/ehcache.xml")));

In this file i configure my caches and also provide a default cache.
I then create my configured caches with
manager.getCache("myconfiguredcache1")

(a side effect of this is that Cocoon dumps its ehcache in the dir
configured in my ehcache.xml). This means it's using the default
cache, which is a bad thing IMHO.

I've just checked out the ehcache source and confirmed what Unico said 
in his reply: CacheManager is a singleton, so whichever component 
starts up first (yours or EHDefaultCache) will configure the cache 
manager.  The second one to start up will just end up using the 
existing instance, it won't be reconfigured.  I'm sure this will work 
fine 90% of the time, but I bet it would be hard to track down the 
reason why it's suddenly ignoring changes you've made in your config 
file (i.e. it already read the other one).

This is another reason why I think we need a system-wide ehcache 
component, which is used by EHDefaultCache and any other instances 
needed by specific applications...

I don't understand. What is wrong with the Store interface?
--
Unico


Re: EHDefaultStore

2004-12-02 Thread Unico Hommes
On 2-dec-04, at 13:13, Jon Evans wrote:
Hi,
I needed a cache for part of my webapp.  Basically I can look up lists 
of things, based on a number of parameters.  I put all the parameters 
into a Serializable class which becomes the key.  If it isn't in the 
cache then I create a new object using a stored procedure and add it 
to the cache.  The stored procedure takes about 500ms, hence the need 
for the cache.

I had to create my own version of EHDefaultStore, specific to my 
app, because I didn't want an eternal cache.  I expire items after 5 
minutes so that a db hit is forced (in case the data has been 
updated).  Although EHDefaultStore takes many config parameters, it 
doesn't have eternal, timeToLive or timeToIdle, and is hard coded to 
always create an eternal cache.

My questions:
1) Any reason why those parameters can't be added?  Then I could have 
just configured my own instance in cocoon.xconf with a different role 
name to the default.

EHDefaultStore is eternal by design. Having a store that would take 
care of expiration itself does not fit well with the way Cocoon uses 
and manages its stores. In order to open up the implementation to other 
uses than that of Cocoon it would be best to follow the same pattern as 
DefaultStore is doing. That is, move the current EHDefaultStore to 
AbstractEHStore and have EHDefaultStore extend AbstractEHStore and
hard-code / override some of the parameter settings.
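For illustration, a minimal sketch of that split. The class layout and
field names are mine; only the "long" ehcache Cache() constructor is the
real API:

    import net.sf.ehcache.Cache;

    // Sketch: the expiration knobs move up into an abstract base class...
    abstract class AbstractEHStore {
        protected boolean eternal = true;
        protected long timeToLiveSeconds = 0;
        protected long timeToIdleSeconds = 0;

        protected Cache createCache(String name, int maxObjects, boolean overflowToDisk) {
            // The "long" Cache() constructor EHDefaultStore uses today:
            return new Cache(name, maxObjects, overflowToDisk, this.eternal,
                    this.timeToLiveSeconds, this.timeToIdleSeconds);
        }
    }

    // ...the Cocoon default store hard-codes an eternal cache...
    class EHDefaultStore extends AbstractEHStore {
        EHDefaultStore() {
            this.eternal = true;
        }
    }

    // ...and an application-specific store can expose the expiration knobs:
    class ExpiringEHStore extends AbstractEHStore {
        void setTimeToLiveSeconds(long seconds) {
            this.eternal = false;
            this.timeToLiveSeconds = seconds;
        }
    }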

2) EHDefaultStore configures a CacheManager from an xml file, but then 
creates the Cache object itself using the long Cache() constructor.
From my understanding of the EHCache docs, the xml file is used to set 
the defaults for when a Cache object is created by the CacheManager
itself, which isn't being done in EHDefaultStore.
We might just as well use

CacheManager cacheManager = CacheManager.create(); // or getInstance()
even if the values in ehcache.xml are changed, it will make no 
difference because each Cache is programmatically created using
specific values for each property.  So we don't need it.

Yes, we need it. IIRC some settings can only be specified in the 
descriptor file. Most notably the filesystem location of the disk-based 
cache. If you look at the EHDefaultStore code you can see that it 
relies on the fact that the disk-based cache location is configured to be
the java.io.tmpdir system property.

3) a CacheManager can manage more than one Cache, yet we create one 
per instance of EHDefaultStore.
OK, at the moment there is only one instance of EHDefaultStore (I 
think?), but if it's made more generic (see 1) then there could be 
more.  We could have a static CacheManager shared between them all.
This does however mean that we'd need an instance count so that the 
last instance could call cacheManager.shutdown() (and the first client 
would call create()).  But then we already do have an instance count 
which is used in EHDefaultStore to generate a new name each time the 
constructor is called.

Look at the implementation of CacheManager.create(1): the CacheManager
is already a shared instance. Calling CacheManager.create() with a
different config file after it has already been initialized has no
effect. Another reason not to have the configuration file be
configurable.

1. http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=107066391625123&w=2
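For illustration, a tiny sketch of that shared-instance behaviour (the
descriptor file names are hypothetical):

    import net.sf.ehcache.CacheManager;

    public class SharedCacheManagerDemo {
        public static void main(String[] args) throws Exception {
            // The first call wins and initializes the shared instance:
            CacheManager first = CacheManager.create("ehcache.xml");
            // A later call with a different descriptor returns the same,
            // already-initialized instance; other-ehcache.xml is ignored:
            CacheManager second = CacheManager.create("other-ehcache.xml");
            System.out.println(first == second); // prints: true
            first.shutdown();
        }
    }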
--
Unico


Re: EHDefaultStore

2004-12-02 Thread Unico Hommes
On 2-dec-04, at 14:35, Unico Hommes wrote:
Look at the implementation of CacheManager.create(1)
1. http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=107066391625123&w=2
Sorry, this was not supposed to be related. The above link is an
explanation of the current Store pattern in Cocoon. The "1" in
CacheManager.create(1) is the number of arguments of the method.

--
Unico


Re: [BUILD SYSTEM] New eclipse-customized-project task

2004-11-23 Thread Unico Hommes
Antonio Gallardo wrote:
Hi:
I wrote a new task for the Cocoon build system. The new task builds the
Eclipse project files only for the blocks included in
[local.]blocks.properties.
 

Nice! Please commit it.
--
Unico


Re: svn commit: rev 76124 - cocoon/trunk

2004-11-18 Thread Unico Hommes
On 18-nov-04, at 22:21, Geoff Howard wrote:
On Wed, 17 Nov 2004 16:25:46 +0100, Unico Hommes [EMAIL PROTECTED] 
wrote:
Geoff Howard wrote:
...
I guess my question about optional dependencies gave the wrong
impression of the utility of jms in event cache. I don't think it's a
problem having a dependency on the jms api in this block because:
...
- This is not a new dependency - it's been there I think from the
first commit, though maybe shortly after and was always the intention
Actually, not quite true [1]. The JMS listener for eventcache used to be
located in the jms block. I moved that functionality to the eventcache
block as part of refactoring work I did on the jms block. The 
dependency
from eventcache to jms seemed more logical to me than the other way
around. Unfortunately, it turned out that the jms samples also rely on
the eventcache block, so that a virtual cyclic dependency broke the
build at that point. The temporary resolution I did was to comment out
the samples dependency from jms to eventcache in gump.xml.
ok, ok :)
What I meant to communicate was that someone didn't just recently add
some new functionality here and slip in a big dependency in the loosest
sense (not gump or even ant sense).
Of course. I didn't mean to provide an argument for or against your
previous points. In fact - I should have made that clear - I
completely agree with them.


Niclas' advice is the only way to fix gump.  Why this is the first time
it's come up is a mystery to me, but IMV the dependency should have
been there all along.
So, Unico, do you agree that this dependency (all meanings) is OK here
or do we need to try a conditional dependency as Stefano mentions in
another thread (thanks by the way S.)?
Yes, I think the dependency is correct. Of course it would be ideal to
have conditional dependencies in our build system. But that's a
separate issue IMO.

--
Unico


Re: NPE in AbstractCachingProcessingPipeline

2004-11-17 Thread Unico Hommes
The section fixes
http://issues.apache.org/bugzilla/show_bug.cgi?id=31012, which is a
serious bug in itself.

Anyway, I have found out why toCacheSourceValidities could be null when 
cachedResponse is not. Apparently this happens when the pipeline is not 
yet expired. I have committed a better fix.

--
Unico
Carsten Ziegeler wrote:
Hmm, this is IMHO not the best solution. Preventing NPEs
by null checking smells a little bit :)
If the section has been added recently then that author
should know if a null for the object is a valid case or
not; otherwise we had better remove the section for
the release.

Carsten 

 

-Original Message-
From: Unico Hommes [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 16, 2004 4:00 PM
To: [EMAIL PROTECTED]
Subject: Re: NPE in AbstractCachingProcessingPipeline

The section that causes the NPE was added recently. If you
say that there are situations where it causes an NPE then I
believe you immediately. The code is rather incomprehensible
to me as well, to say the least. Lots of side effects and null
states that are supposed to signal some sort of situation
that is probably not even clear to the original author
anymore. I guess I'll add a null check for this situation -
when SVN is up again, that is.

--
Unico
Jon Evans wrote:

Hi,
Bugzilla appears to be broken...
I've been chasing down a problem where the portal I'm developing would
work fine for a few iterations, then it would break and bits of it
would be replaced with "The coplet xxx is currently unavailable". The
NPE logged in error.log was pretty hard to track down; in the end I
had to step through the code.

I tracked it down to the function getValidityForEventPipeline(), the
first section of which reads:

    if (this.cachedResponse != null) {
        final AggregatedValidity validity = new AggregatedValidity();

        for (int i = 0; i < this.toCacheSourceValidities.length; i++) {
            validity.add(this.toCacheSourceValidities[i]);
        }
        return validity;

The problem seems to be that this.toCacheSourceValidities is null.
This would point to the fact that setupValidities() has never been
called - or its internal state is such that setupValidities() ends up
setting toCacheSourceValidities to null. I don't know anything about
the lifecycle of an AbstractCachingProcessingPipeline so I wouldn't
know where to track that down.

If a != null test is added to getValidityForEventPipeline():

    if (this.cachedResponse != null) {
        final AggregatedValidity validity = new AggregatedValidity();

        if (this.toCacheSourceValidities != null) {
            for (int i = 0; i < this.toCacheSourceValidities.length; i++) {
                validity.add(this.toCacheSourceValidities[i]);
            }
        }
        return validity;

then it seems to work (at least, my portal doesn't seem to keep
falling over any more).

If my patch isn't correct then I'd be happy to track this down
further, if someone could point me in the right direction with some
more information about how an AbstractCachingProcessingPipeline gets
constructed, and at what point in its lifecycle setupValidities()
should be called.

I'd appreciate some feedback / help on this one!
Thanks,
Jon



Re: svn commit: rev 76124 - cocoon/trunk

2004-11-17 Thread Unico Hommes
Geoff Howard wrote:
Please change what?  Remove what is viewed to be a key use-case of a
block to avoid a dependency?  Move one class into its own block?
(there are very few classes in the event cache block anyway).
The jms event cache item could also be moved into the jms block, but
that then introduces a deep dependency of the jms block to event
cache, which also is not required and people would complain there.
I guess my question about optional dependencies gave the wrong
impression of the utility of jms in event cache.  I don't think it's a
problem having a dependency on the jms api in this block because:
- It's not introducing a new jar into our cvs
- It exists for the primary envisioned use of the block (though there
are other possible uses as I noted)
- It's an API dependency, not an implementation dependency - you don't
have to have a full JMS server unless you want to use the JMS
functionality.
- This is not a new dependency - it's been there I think from the
first commit, though maybe shortly after and was always the intention
 

Actually, not quite true [1]. The JMS listener for eventcache used to be 
located in the jms block. I moved that functionality to the eventcache 
block as part of refactoring work I did on the jms block. The dependency 
from eventcache to jms seemed more logical to me than the other way 
around. Unfortunately, it turned out that the jms samples also rely on 
the eventcache block, so that a virtual cyclic dependency broke the 
build at that point. The temporary resolution I did was to comment out
the samples dependency from jms to eventcache in gump.xml.

Niclas' advice is the only way to fix gump.  Why this is the first time
it's come up is a mystery to me, but IMV the dependency should have
been there all along.
 

I see two possible reasons the problem only surfaces now. The first is
that it may be due to a recent change in the build system. Previously,
declaring a dependency on another block meant that it would inherit all
of *its* dependencies as well. Now that is no longer the case, at least
in the 2.1.x branch. Second, it has been a long time since gump last
built the blocks successfully. So the change in the direction of the
dependency could have gone unnoticed by gump until now.

1. http://cvs.apache.org/viewcvs.cgi/cocoon/trunk/src/blocks/eventcache/java/org/apache/cocoon/caching/impl/JMSEventMessageListener.java?root=Apache-SVN&rev=36568&view=log


Re: [VOTE] Release of 2.1.6

2004-11-17 Thread Unico Hommes
On 17-nov-04, at 22:32, Carsten Ziegeler wrote:
Please cast your votes for releasing the current SVN of 2.1.x as
2.1.6. The vote will end on Friday morning.
+1
--
Unico


Re: [URGENT] Fix status and sync branches before release

2004-11-11 Thread Unico Hommes
Vadim Gritsenko wrote:
Hi guys,
There are some changes which are claimed to be in 2.1 but are actually
not, and changes which are in 2.1 but not in 2.2. Please review the
list and help reconcile the discrepancies before a release:

--- cocoon-2.1.X\status.xml    2004-11-11 09:36:01.164552000 -0500
+++ cocoon-2.2.X\status.xml    2004-11-11 10:14:00.051430400 -0500
@@ -465,6 +619,16 @@
   AbstractMessageListener and AbstractMessagePublisher should be used as basis for
   custom publish/subscribe components.
 </action>
+<action dev="UH" type="add">
+  Still in the scratchpad area at the time of this writing, added a
+  CachedSource proxy subclass for Sources that implement TraversableSource and
+  InspectableSource (for instance WebDAVSource).
+</action>

Done.
--
Unico


Re: Problems with Quartz JobStore

2004-11-08 Thread Unico Hommes
What if, instead of using the short names "datasources" and "jdbc", you
declare the datasource like so:

<component role="org.apache.avalon.excalibur.datasource.DataSourceComponent/quartz"
           class="org.apache.avalon.excalibur.datasource.ResourceLimitingJdbcDataSource">
  <pool-controller max="10" min="5"/>
  <dburl>jdbc:postgresql://localhost/quartz</dburl>
  <user>cocoon</user>
  <password>*</password>
</component>

I was under the impression that ECM would transparently map the short
name form and the above form to the same lookup semantics
(serviceManager.lookup(DataSourceComponent.ROLE + "/quartz")). Perhaps I
was wrong and it is only Fortress that does that.

--
Unico
Jeremy Quinn wrote:
Hi All
Has anyone got any experience using a persistable JobStore with Quartz?
I am trying to use PostgreSQL to persist Quartz jobs across restarts
of Cocoon.

I have the following setup:
1) Using the script in docs/dbTables/tables_postgres.sql that comes
with the Quartz 1.4.2 distribution, create a database called 'quartz',
grant privileges on the tables, confirm that I can connect with user
'cocoon', using an external client (PostMan Query.app).

2) Add the postgres driver to:
BRANCH_2_1_X/build/webapp/WEB-INF/lib/postgresql.jar

3) Add the config to load the driver to web.xml:
<init-param>
  <param-name>load-class</param-name>
  <param-value>org.postgresql.Driver</param-value>
</init-param>

4) Add the datasource to cocoon.xconf:
<datasources>
  <jdbc logger="core.datasources.quartz" name="quartz">
    <pool-controller max="10" min="5"/>
    <dburl>jdbc:postgresql://localhost/quartz</dburl>
    <user>cocoon</user>
    <password>*</password>
  </jdbc>
  . . .
</datasources>

5) Adjust the Quartz setup in cocoon.xconf:
<!-- <store type="ram"/> -->
<store type="tx" delegate="org.quartz.impl.jdbcjobstore.PostreSQLDelegate">
  <datasource provider="excalibur">quartz</datasource>
</store>

When I start Cocoon, I get the following exception in cron.log:
ERROR   (2004-11-08) 11:50.10:449   [core.manager] (Unknown-URI) Unknown-thread/ExcaliburComponentManager: Caught an exception trying to initialize the component handler.
org.apache.avalon.framework.configuration.ConfigurationException: No datasource available by that name: quartz
    at org.apache.cocoon.components.cron.DataSourceComponentConnectionProvider.init(DataSourceComponentConnectionProvider.java:42)
    at org.apache.cocoon.components.cron.QuartzJobScheduler.createJobStore(QuartzJobScheduler.java:738)
    at org.apache.cocoon.components.cron.QuartzJobScheduler.initialize(QuartzJobScheduler.java:321)
    at org.apache.avalon.framework.container.ContainerUtil.initialize(ContainerUtil.java:283)
    at org.apache.avalon.excalibur.component.DefaultComponentFactory.newInstance(DefaultComponentFactory.java:277)
    at org.apache.avalon.excalibur.component.ThreadSafeComponentHandler.initialize(ThreadSafeComponentHandler.java:108)
    at org.apache.avalon.excalibur.component.ExcaliburComponentManager.initialize(ExcaliburComponentManager.java:522)
    at org.apache.cocoon.components.CocoonComponentManager.initialize(CocoonComponentManager.java:541)
    at org.apache.avalon.framework.container.ContainerUtil.initialize(ContainerUtil.java:283)
    at org.apache.cocoon.Cocoon.initialize(Cocoon.java:314)
    at org.apache.avalon.framework.container.ContainerUtil.initialize(ContainerUtil.java:283)
    at org.apache.cocoon.servlet.CocoonServlet.createCocoon(CocoonServlet.java:1382)
    at org.apache.cocoon.servlet.CocoonServlet.init(CocoonServlet.java:480)
    at org.mortbay.jetty.servlet.ServletHolder.start(ServletHolder.java:220)
    at org.mortbay.jetty.servlet.ServletHandler.initializeServlets(ServletHandler.java:445)
    at org.mortbay.jetty.servlet.WebApplicationHandler.initializeServlets(WebApplicationHandler.java:150)
    at org.mortbay.jetty.servlet.WebApplicationContext.start(WebApplicationContext.java:458)
    at org.mortbay.http.HttpServer.start(HttpServer.java:663)
    at org.mortbay.jetty.Server.main(Server.java:429)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:324)
    at Loader.invokeMain(Unknown Source)
    at Loader.run(Unknown Source)
    at Loader.main(Unknown Source)

DataSourceComponentConnectionProvider cannot find the specified  
datasource.

What have I done wrong?
Thanks for any help
regards Jeremy

  If email from this address is not signed
IT IS NOT FROM ME
Always check the label, folks !




Re: Problems with Quartz JobStore

2004-11-08 Thread Unico Hommes
Vadim Gritsenko wrote:
Unico Hommes wrote:
What if, instead of using the short names "datasources" and "jdbc",
you declare the datasource like so:

<component role="org.apache.avalon.excalibur.datasource.DataSourceComponent/quartz"
           class="org.apache.avalon.excalibur.datasource.ResourceLimitingJdbcDataSource">
  <pool-controller max="10" min="5"/>
  <dburl>jdbc:postgresql://localhost/quartz</dburl>
  <user>cocoon</user>
  <password>*</password>
</component>
I was under the impression that ECM would transparently map the short
name form and the above form to the same lookup semantics
(serviceManager.lookup(DataSourceComponent.ROLE + "/quartz")). Perhaps
I was wrong and it is only Fortress that does that.

You have to look up the selector first, and then 'quartz' from the selector.

Yeah, I gathered as much from Jeremy's description. I thought ECM would 
accept either method. I'll change it.
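For reference, a sketch of the two-step lookup Vadim describes. The
"Selector" role suffix is the usual ECM convention, and 'manager' is
assumed to be the ComponentManager in scope; this is not verified
against the cron block's actual code:

    import org.apache.avalon.excalibur.datasource.DataSourceComponent;
    import org.apache.avalon.framework.component.ComponentSelector;

    // Look up the datasources selector first, then select the named hint:
    ComponentSelector selector =
            (ComponentSelector) manager.lookup(DataSourceComponent.ROLE + "Selector");
    DataSourceComponent quartz = (DataSourceComponent) selector.select("quartz");
    try {
        // ... use quartz.getConnection() ...
    } finally {
        selector.release(quartz);
        manager.release(selector);
    }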

--
Unico


Re: Implementation of the Continuations checker

2004-10-30 Thread Unico Hommes
On 30-okt-04, at 18:46, Ralph Goers wrote:
Carsten Ziegeler wrote:
I think the key point is that we are using Quartz as the 
implementation,
but not as the interface. Now we already have the quartz block with
the required implementation, so it's easy to use that.
The first step is to do this move. If then someone thinks that an
own implementation would be better, this can simply be changed without
destroying compatibility.

Carsten
I'm not saying don't do this, but I am asking if this is really what 
you want.  After briefly looking at the Event interface and the Cron 
block, they appear to be very different.  The Cron block appears to be 
about job scheduling, which is fine if that is really what you want.  
But if you really want some sort of Event handling, I'm not sure Cron 
is what you want - mostly I guess because I'm not sure what that 
means.

I guess I would just like a confirmation that the interface that is 
going to be used is acceptable.

I agree. The focus of quartz seems very different than that provided by 
event and that needed by the use cases Vadim listed. Looking at the 
JobScheduler interface I can't shake off the feeling that cron is not 
the cannon (honestly it's quite a beast!) best fitted to shoot the flies 
we are dealing with.

I'd much prefer we implement our own excalibur event (why not just fork 
it? it's good code!)

--
Unico


Re: [Heads up] Change to build system in 2.1.x

2004-10-30 Thread Unico Hommes
On 30-okt-04, at 19:29, Stefano Mazzocchi wrote:
Ugo Cei wrote:
Il giorno 25/ott/04, alle 19:14, Unico Hommes ha scritto:
I've completed the changes to the build system discussed earlier 
[1]. In order to do so I have extended the gump descriptor with 
additional information that allows the build system to locate one or 
more dependency jars per depend project within ./lib/optional. See 
for an example the cocoon-block-axis project definition in gump.xml

Every block now *must* declare all the dependencies it requires to 
compile in gump.xml just in order for it to build properly.
Looks like this is not a backward-compatible change. Blocks which are 
distributed outside of Cocoon (like Fins or my Spring Petstore) must 
change their deployment instructions to add all those depend 
elements (and put dependencies in gump too, which wasn't required 
before, even though it might have been good practice).
Shouldn't we make this change in trunk only and leave 2.1 as is?
yes, this is a pretty big obstacle for a simple bugfix release. The 
block build system *is* not an API but it's a contract and we must 
honor it.

When I was working on this I didn't realize this would be an 
incompatible change, or even that compatibility concerns also affected 
our build system. In all fairness I think you have a good point.

Rather than undoing all the changes (would make me really feel bad 
about the wasted time I invested :-) I think it is no problem to 
provide compatibility. I think that all that is needed is that 
block/lib is added to the block compilation classpath and that those 
libraries are also copied to WEB-INF/lib.

WDYT?
--
Unico


Re: [Heads up] Change to build system in 2.1.x

2004-10-30 Thread Unico Hommes
On 30-okt-04, at 20:05, Unico Hommes wrote:
On 30-okt-04, at 19:29, Stefano Mazzocchi wrote:
Ugo Cei wrote:
Il giorno 25/ott/04, alle 19:14, Unico Hommes ha scritto:
I've completed the changes to the build system discussed earlier 
[1]. In order to do so I have extended the gump descriptor with 
additional information that allows the build system to locate one 
or more dependency jars per depend project within ./lib/optional. 
See for an example the cocoon-block-axis project definition in 
gump.xml

Every block now *must* declare all the dependencies it requires to 
compile in gump.xml just in order for it to build properly.
Looks like this is not a backward-compatible change. Blocks which 
are distributed outside of Cocoon (like Fins or my Spring Petstore) 
must change their deployment instructions to add all those depend 
elements (and put dependencies in gump too, which wasn't required 
before, even though it might have been good practice).
Shouldn't we make this change in trunk only and leave 2.1 as is?
yes, this is a pretty big obstacle for a simple bugfix release. The 
block build system *is* not an API but it's a contract and we must 
honor it.

When I was working on this I didn't realize this would be an 
incompatible change, or even that compatibility concerns also affected 
our build system. In all fairness I think you have a good point.

Rather than undoing all the changes (would make me really feel bad 
about the wasted time I invested :-) I think it is no problem to 
provide compatibility. I think that all that is needed is that 
block/lib is added to the block compilation classpath and that those 
libraries are also copied to WEB-INF/lib.

WDYT?
Not that I no longer care what you guys think ;-) but I went ahead and 
made the proposed changes. It should be compatible now.

--
Unico


Re: Implementation of the Continuations checker

2004-10-30 Thread Unico Hommes
On 30-okt-04, at 21:02, Giacomo Pati wrote:
On Sat, 30 Oct 2004, Unico Hommes wrote:
On 30-okt-04, at 18:46, Ralph Goers wrote:
Carsten Ziegeler wrote:
I think the key point is that we are using Quartz as the 
implementation,
but not as the interface. Now we already have the quartz block with
the required implementation, so it's easy to use that.
The first step is to do this move. If then someone thinks that an
own implementation would be better, this can simply be changed 
without
destroying compatibility.
Carsten
I'm not saying don't do this, but I am asking if this is really what 
you want.  After briefly looking at the Event interface and the Cron 
block, they appear to be very different.  The Cron block appears to 
be about job scheduling, which is fine if that is really what you 
want.  But if you really want some sort of Event handling, I'm not 
sure Cron is what you want - mostly I guess because I'm not sure 
what that means.
I guess I would just like a confirmation that the interface that is 
going to be used is acceptable.
I agree. The focus of quartz seems very different than that provided 
by event and that needed by the use cases Vadim listed. Looking at 
the JobScheduler interface I can't shake off the feeling that cron is 
not the cannon (honestly it's quite a beast!) best fitted to shoot the 
flies we are dealing with.
Vadim listed the use case for scheduling tasks to be done once or 
periodically. Looking at the JobScheduler interface there just is that 
functionality presented. So, why do you say "The focus of quartz seems 
very different than that"?
All core use cases would be met by a simple timeout and delay 
parameter; a complete cron spec like the one provided by quartz is overkill. 
I don't want the jobs that concurrently fetch the URLs used by the 
include transformer to be persisted to my RDBMS, yet that is exactly 
what will happen if I happen to naively use the same scheduler instance 
for my workflow subsystem.

We are not specifically talking about Quartz but about the 
JobScheduler interface. We can use the cron block as a replacement of 
the functionalities quickly. If you think the JobScheduler interface 
implementation using Quartz is not what you need just write another 
one.
Looking at what is required by the JobScheduler interface, I wouldn't 
want to be the one assigned to that job. And I would be surprised if 
there was a ready-to-use compatible replacement for Quartz for that 
matter.

I think that a functionality like that provided by excalibur event is 
better suited for the job at hand than is quartz, and I am concerned 
that the Avalon debacle not only caused the concrete crisis WRT 
Cocoon's containment framework (which is bad), but that it has scarred 
us up to the point that it hurts our very nature: our cooperative 
spirit itself (which IMHO is much worse).

--
Unico


Re: Implementation of the Continuations checker

2004-10-29 Thread Unico Hommes
Giacomo Pati wrote:
On Fri, 29 Oct 2004, Carsten Ziegeler wrote:
Giacomo Pati wrote:
On Fri, 29 Oct 2004, Carsten Ziegeler wrote:
The current implementation of our continuations manager uses the
excalibur event package for the background checker that checks for
expired continuations.
Now, this approach has the problem, that excalibur event is
deprecated. In addition we aren't using it anywhere else,
so it would
be great if we could remove this dependency.
Yesterday, I wrote a simple replacement which I checked into 2.2:
a simple background thread is initialized that sleeps for a
configured
period of time, checks the continuations, sleeps etc.
Now, this solution should work.
The question is now, should I port this to 2.1.x as well? Are there
better solutions?

Does this mean the CommandManager from the Context is gone?
Yes, at least for 2.2 - for 2.1.x we would have to decide if we 
remove it.

Are you using it?

Yes, we used the CommandManager in some projects. It is based on the 
PooledExecutor from Doug Lea's concurrent-utils package. It comes in 
quite handy as you can put tasks there you'd like to be done 
asynchronously (i.e. indexing an uploaded document with Lucene to speed 
up perceived performance).

I believe that the excalibur event package lives on at d-haven [1]. Why 
not use that?

1. http://api.d-haven.org/event/
--
Unico
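
For reference, a minimal sketch of the kind of simple background checker 
Carsten describes above - a daemon thread that sleeps for a configured 
interval and then triggers the expiration check. The names are 
illustrative only, not the committed code:

public class ContinuationsChecker implements Runnable {
    private final long intervalMillis;
    private final Runnable expireTask;
    private volatile boolean running = true;

    public ContinuationsChecker(long intervalMillis, Runnable expireTask) {
        this.intervalMillis = intervalMillis;
        this.expireTask = expireTask;
    }

    public void start() {
        Thread t = new Thread(this, "continuations-checker");
        t.setDaemon(true);
        t.start();
    }

    public void stop() {
        running = false;
    }

    public void run() {
        while (running) {
            try {
                Thread.sleep(intervalMillis);
                expireTask.run(); // e.g. expire old continuations here
            } catch (InterruptedException e) {
                return;
            } catch (RuntimeException e) {
                // log and keep the checker alive
            }
        }
    }
}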


Re: Implementation of the Continuations checker

2004-10-29 Thread Unico Hommes
Nicola Ken Barozzi wrote:
Unico Hommes wrote:
...
I believe that the excalibur event package lives on at d-haven [1]. 
Why not use that?

Oh oh. We had this discussion with the container, I hope we don't have 
to go over this again for every Avalon dependency we have ;-P

Nope just with Avalon/Excalibur *components* in general. The 
relationship between Cocoon and its container is much more integral than 
that between Cocoon and the components it uses. I was under the 
impression that we would be continuing to use excalibur components. Not 
knowing the history of the migration of event to d-haven I just wanted 
to find out whether this case is somehow special. BTW if I inadvertently 
raised a painful subject I apologize for that; it is not my intention to 
antagonize at all, but to discuss these issues openly.

--
Unico


Re: Persisting SimpleLuceneQuery [Long]

2004-10-29 Thread Unico Hommes
Jeremy Quinn wrote:

  If email from this address is not signed
IT IS NOT FROM ME
Always check the label, folks !


Aha! I'm on to you, person that wants to make believe he is Jeremy 
Quinn! ;-)

--
Unico


Re: Persisting SimpleLuceneQuery [Long]

2004-10-29 Thread Unico Hommes
Jeremy Quinn wrote:
On 29 Oct 2004, at 13:37, Unico Hommes wrote:
Jeremy Quinn wrote:

  If email from this address is not signed
IT IS NOT FROM ME
Always check the label, folks !


Aha! I'm on to you, person that wants to make believe he is Jeremy 
Quinn! ;-)

ROFL 

Objective reality is a synthetic construct, dealing with a 
hypothetical universalization of a multitude of subjective realities.
Philip K Dick - The Electric Ant

Ah see, I *knew* it was you after all. I'd better tune my 
intra-subjective antennae ;-)

--
Unico


Re: Blocker: Exception using cocoon:// protocol

2004-10-28 Thread Unico Hommes
Vadim Gritsenko wrote:
Hey all,
There is some issue with cocoon:// protocol that I can't wrap my head 
around... It is blocking 2.1.6 release. NullPointerException occurs 
when request's pipeline uses / includes another pipeline via cocoon:// 
protocol. It is reproducible using multiple samples:

  http://localhost:/samples/aggregation/aggregate
  http://localhost:/samples/aggregation/aggregate2
  http://localhost:/samples/modules/index.html
  http://localhost:/samples/test/reader-mime-type/test20.html
etc. Last one is the simplest. Stacktrace in the last case is below. 
When trying to debug I found out this sequence:
 * cocoon://test10.html reader: ResourceReader.recycle()
 * cocoon://test10.html source: SitemapSource.reset()
 * CocoonComponentManager.endProcessing()
 * EnvironmentWrapper instance: AbstractEnvironment.finishProcessing()
   Here sourceResolver is set to null!
 * Back to CocoonComponentManager.endProcessing() which calls:
 * EnvironmentDescription.release()
 * test10.html reader: ResourceReader.recycle()
 * EnvironmentWrapper instance: AbstractEnvironment.release()
   Here sourceResolver is already null!

Any ideas?

I just committed a fix for this. My rationale:
CocoonComponentManager.EnvironmentDescriptor depends on Environment. But 
in CocoonComponentManager.endProcessing EnvironmentDescriptor is 
released *before* Environment is cleaned. According to the direction of 
the dependency this should be the other way around.

--
Unico


Re: Locking the JS global scope to avoid implicit declarations

2004-10-28 Thread Unico Hommes
Carsten Ziegeler wrote:
Sylvain Wallez wrote:
Carsten Ziegeler wrote:
This might be a dumb question, but I thought that global variables are 
attached to the session of the current user. Is this wrong or are there 
different kinds of global variables?

Yes, the global scope in flowscript is attached to the session. That 
means implicitly declared variables are shared by all continuations of 
a given user instead of being local to a continuation. Things would be 
even worse if the global scope was unique to the application!!

Definitely, I got this impression from your description and was 
*really* scared :)

Me too. I got a *lot* of code to check.
So: Let's lock it without making it configurable - I don't see
any use case for it. If this would be a vote, I would say +1 .. :)

+1 for locking it.
--
Unico


Re: [VOTE] Leszek Gawron and Ralph Goers as committers

2004-10-28 Thread Unico Hommes
Torsten Curdt wrote:
Folks please cast your votes for:
[  ] Leszek
[  ] Ralph
as Apache Cocoon committers.

+1 for both.
--
Unico


Re: svn commit: rev 55619 - in cocoon/branches/BRANCH_2_1_X/src: test/anteater webapp/samples/test/reader-mime-type

2004-10-27 Thread Unico Hommes
Vadim Gritsenko wrote:
Unico Hommes wrote:
Now only one release blocker remains: NPE in 
AbstractEnvironment.release()

No, not exactly. As I see it, ResourceReader should return Source's 
mimeType in this particular case, as per:

public String getMimeType() {
    Context ctx = ObjectModelHelper.getContext(objectModel);
    if (ctx != null) {
        final String mimeType = ctx.getMimeType(source);
        if (mimeType != null) {
            return mimeType;
        }
    }
    return inputSource.getMimeType();
}
But in this case, it was not able to get mime type of the 
SitemapSource. SitemapSource, in its turn, was retrieving mimeType 
from the environment:

344:this.mimeType = this.environment.getContentType();
But for some reason it is not there. I think that is the problem and 
it was a valid test case for this problem. Unless I am mistaken... If 
I'm not, we should revert the removal of test case.

No don't say that! I thought we'd finally gotten rid of it. :-(
You are right though. My conclusion that the test case was no longer 
valid was premature. It used to be that SitemapSource would just return 
hard-coded text/xml as its mime type.

Anyway, the problem appears to be that the mime-type is only set on the 
environment by the processing pipeline during actual processing whereas 
at the time Reader.getMimeType is called the pipeline has only been 
setup and prepared.

Now I am wondering whether we can move setting the mime-type from the 
processing stage to the preparation stage. The reason it is currently 
being done during processing is that in the case of the caching pipeline 
the mime-type is held in the cached response and the retrieval of the 
cached response is delayed until the processing stage in order to 
optimize performance.

Is it really necessary or desirable to cache the mime-type? What 
happens if the effective mime-type is the one from the reader 
definition or the read node declaration (<map:reader class=".." 
mime-type="text/xml"/> or <map:read src="somewhere" mime-type="text/xml"/>)? 
From these examples it follows that the mime-type must be part of 
the cache-key, because if you were to change it in any of the attributes 
above the previously cached contents should not be served.

Btw. ResourceReader.getMimeType() method above looks a bit funny to me. 
The mime-type mapping from the context has priority over the mime-type 
of the individual source. Surely that should be the other way around?
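
For illustration, a sketch of that reordering (same fields as the quoted 
snippet; this is a suggestion, not committed code):

public String getMimeType() {
    // Give the source's own mime-type priority over the context mapping.
    final String sourceMimeType = inputSource.getMimeType();
    if (sourceMimeType != null) {
        return sourceMimeType;
    }
    Context ctx = ObjectModelHelper.getContext(objectModel);
    return (ctx != null) ? ctx.getMimeType(source) : null;
}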

Comments?
--
Unico


Re: When to set mime-type

2004-10-27 Thread Unico Hommes
Luigi Bai wrote:

On Wed, 27 Oct 2004, Unico Hommes wrote:
Vadim Gritsenko wrote:
Unico Hommes wrote:
Now only one release blocker remains: NPE in 
AbstractEnvironment.release()

No, not exactly. As I see it, ResourceReader should return Source's 
mimeType in this particular case, as per:

public String getMimeType() {
    Context ctx = ObjectModelHelper.getContext(objectModel);
    if (ctx != null) {
        final String mimeType = ctx.getMimeType(source);
        if (mimeType != null) {
            return mimeType;
        }
    }
    return inputSource.getMimeType();
}
But in this case, it was not able to get mime type of the 
SitemapSource. SitemapSource, in its turn, was retrieving mimeType 
from the environment:

344:this.mimeType = this.environment.getContentType();
But for some reason it is not there. I think that is the problem and 
it was a valid test case for this problem. Unless I am mistaken... 
If I'm not, we should revert the removal of test case.

No don't say that! I thought we'd finally gotten rid of it. :-(
You are right though. My conclusion that the test case was no longer 
valid was premature. It used to be that SitemapSource would just 
return hard-coded text/xml as its mime type.

Anyway, the problem appears to be that the mime-type is only set on 
the environment by the processing pipeline during actual processing 
whereas at the time Reader.getMimeType is called the pipeline has 
only been setup and prepared.

Now I am wondering whether we can move setting the mime-type from the 
processing stage to the preparation stage. The reason it is currently 
being done during processing is that in the case of the caching 
pipeline the mime-type is held in the cached response and the 
retrieval of the cached response is delayed until the processing 
stage in order to optimize performance.

I would like to consider allowing the mime-type to be set during 
processing, perhaps even as late as after endDocument(). The reason 
for this is: I store images in XML documents as Base64 encoded data, 
and its element tag looks like this:

<image mime-type="image/png">Base64data</image>
In this way I can store any kind of image in my files, and it is 
extracted correctly (I have a Base64Serializer to stream it out). 
However, in order to correctly extract it, I have to know, in the 
sitemap, /a priori/, what the mime-type is. Which leads to ugly hacks 
such as: You can only put PNG images in this document and JPG images 
in this document, so I know how to extract them, or perhaps the 
requester would have to know to ask for a particular mime-type. And 
adding support for a new image type (admittedly this does not happen 
frequently!) means updating the sitemap and adding new rules. If I 
could set the mime-type of the response at least after the pertinent 
startElement(), then I'd be fine. Of course, that would mean that the 
response couldn't start streaming until it was complete - in this 
case, setting the mime-type might be the last thing it needs before 
beginning the stream.

Exactly. This is the problem with delaying the setting of the mime type. 
For the general case we'd always have to buffer the whole response which 
isn't practical.

What you could do instead is to manually set the response header in the 
serializer and make sure the pipeline that processes the response has a 
large enough buffer. You can set the outputBufferSize parameter either 
during the pipe's configuration or using a sitemap parameter.
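
A rough illustration of why a large enough output buffer makes late 
header setting possible, in plain Servlet API terms (Cocoon's 
outputBufferSize parameter plays the analogous role; the helper methods 
below are hypothetical stubs):

import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LateHeaderServlet extends HttpServlet {
    protected void service(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        // As long as the buffer has not been flushed, the response is
        // not committed and headers may still be set.
        res.setBufferSize(64 * 1024);           // large enough for the body
        OutputStream out = res.getOutputStream();
        byte[] body = renderBody();             // hypothetical helper
        res.setContentType(detectMimeType(body)); // legal: not committed yet
        out.write(body);
    }

    private byte[] renderBody() { return new byte[0]; }  // stub
    private String detectMimeType(byte[] b) { return "application/octet-stream"; } // stub
}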

--
Unico


Re: When to set mime-type

2004-10-27 Thread Unico Hommes
Luigi Bai wrote:

On Wed, 27 Oct 2004, Unico Hommes wrote:
I would like to consider allowing the mime-type to be set during 
processing, perhaps even as late as after endDocument(). The reason 
for this is: I store images in XML documents as Base64 encoded data, 
and its element tag looks like this:

<image mime-type="image/png">Base64data</image>
In this way I can store any kind of image in my files, and it is 
extracted correctly (I have a Base64Serializer to stream it out). 
However, in order to correctly extract it, I have to know, in the 
sitemap, /a priori/, what the mime-type is. Which leads to ugly 
hacks such as: You can only put PNG images in this document and JPG 
images in this document, so I know how to extract them, or perhaps 
the requester would have to know to ask for a particular mime-type. 
And adding support for a new image type (admittedly this does not 
happen frequently!) means updating the sitemap and adding new 
rules. If I could set the mime-type of the response at least after 
the pertinent startElement(), then I'd be fine. Of course, that 
would mean that the response couldn't start streaming until it was 
complete - in this case, setting the mime-type might be the last 
thing it needs before beginning the stream.

Exactly. This is the problem with delaying the setting of the mime 
type. For the general case we'd always have to buffer the whole 
response which isn't practical.

Well, according to the Servlet spec, you really should set ContentType 
before beginning the output stream, especially if you're using a 
Writer and not an OutputStream (for charset to be handled correctly). 
So it seems orthogonal to buffering.

Huh? Probably we mean the same thing. In an HTTP response, headers come 
before the body, otherwise it's not a valid HTTP response. With buffering I 
mean the process of stalling the streaming of the response body for a 
specified amount of bytes in order to allow modification of the response 
headers. Be that by physically setting an output buffer size in the 
Servlet API or brewing up something external to it. So one has a lot to 
do with the other.

And Cocoon already has shouldSetContentLength(), which tells the 
pipeline that at least ContentLength happens later in the processing 
(and of course the output has to be buffered). If that is not set, the 
general case is to stream without contentLength.

shouldSetContentLength doesn't tell the pipeline that content length 
happens later in the pipeline, it tells the pipeline it ought to do 
whatever it can to determine the content length. It happens to do that 
by buffering the output.

Anyway, I think content length is a special case since there is no 
general mechanism - like there is with mime-type - to determine it in 
advance.

I'd propose a shouldSetContentType(), which would be a special case 
(not the general case); it would tell the pipeline to wait to send 
output until the contentType is set. This may or may not cause 
buffering; indeed, in the use-case I described, nothing is sent to the 
output stream by the time the image content-type is known.

What you could do instead is to manually set the response header in 
the serializer and make sure the pipeline that processes the response 
has a large enough buffer. You can set the outputBufferSize parameter 
either during the pipe's configuration or using a sitemap parameter.

Is it true that setting response headers in the Serializer will be 
respected? It's not clear that's true in all cases - various Wrapped 
Responses throw that away. I think it should be more explicitly part 
of the workflow.

A Serializer will never deal with WrappedEnvironments AFAICS. Internal 
xml pipeline processing is pipeline minus Serializer.

--
Unico


Re: When to set mime-type

2004-10-27 Thread Unico Hommes
Luigi Bai wrote:

On Wed, 27 Oct 2004, Unico Hommes wrote:
Luigi Bai wrote:

Exactly. This is the problem with delaying the setting of the mime 
type. For the general case we'd always have to buffer the whole 
response which isn't practical.

Well, according to the Servlet spec, you really should set 
ContentType before beginning the output stream, especially if you're 
using a Writer and not an OutputStream (for charset to be handled 
correctly). So it seems orthogonal to buffering.

Huh? Probably we mean the same thing. In an HTTP response, headers 
come before the body, otherwise it's not a valid HTTP response. With 
buffering I mean the process of stalling the streaming of the 
response body for a specified amount of bytes in order to allow 
modification of the response headers. Be that by physically setting 
an output buffer size in the Servlet API or brewing up something 
external to it. So one has a lot to do with the other.

Yes, we mean the same thing. I was just pointing out that in a 
pipeline, it's not always the case that _anything_ has streamed out 
(needs to be buffered) by the time the headers are all ready - even if 
the headers are set in the Serializer. So, it's an observation about 
the Cocoon workflow, not headers and buffering in general.

And Cocoon already has shouldSetContentLength(), which tells the 
pipeline that at least ContentLength happens later in the processing 
(and of course the output has to be buffered). If that is not set, 
the general case is to stream without contentLength.

shouldSetContentLength doesn't tell the pipeline that content length 
happens later in the pipeline, it tells the pipeline it ought to do 
whatever it can to determine the content length. It happens to do 
that by buffering the output.

Anyway, I think content length is a special case since there is no 
general mechanism - like there is with mime-type - to determine it in 
advance.

Well, there's only a general mechanism to determine mime-type in 
advance if you require it to be set in advance. That's a circular 
argument! :-) 

That's not what the argument was about though. The point is that there 
is no way to determine the content-length beforehand. In this way 
content length is different from mime-type.

My point is that in a data-driven model, the pipeline may not know 
until the Serializer has started processing the stream (and not just 
at setup() but after startDocument()) what the characteristics of the 
data are. Since the Response headers are supposed to reflect what the 
data is, shouldn't it be possible for the data to influence what 
headers are sent to represent it? 

OK. In some special cases, the data that flows through the pipeline 
determines the characteristics of the outputted data, i.e. it is only 
available by actually processing the pipeline, and it cannot be 
determined beforehand. In this case you want a mechanism to set the 
mime-type during that processing stage. But such a mechanism exists in 
Cocoon, just not by interfacing with the ProcessingPipeline but by using 
the Response object in the Serializer directly. I think the best way to 
solve this is using that.
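
A sketch of that idea, assuming the serializer has been handed the 
object model somehow (stock serializers are not set up with it, so 
treat this as an outline rather than drop-in code):

import java.util.Map;
import org.apache.cocoon.environment.ObjectModelHelper;
import org.apache.cocoon.environment.Response;

public class MimeTypeHelper {
    // Called from the serializer once the effective mime-type is known,
    // e.g. from the image/@mime-type attribute seen in startElement().
    public static void setContentType(Map objectModel, String mimeType) {
        Response response = ObjectModelHelper.getResponse(objectModel);
        response.setHeader("Content-Type", mimeType);
    }
}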

The Servlet spec, with all its redirects and includes, makes it 
possible, although with many cautions about flushing etc.

I'd propose a shouldSetContentType(), which would be a special case 
(not the general case); it would tell the pipeline to wait to send 
output until the contentType is set. This may or may not cause 
buffering; indeed, in the use-case I described, nothing is sent to 
the output stream by the time the image content-type is known.

What you could do instead is to manually set the response header in 
the serializer and make sure the pipeline that processes the 
response has a large enough buffer. You can set the 
outputBufferSize parameter either during the pipe's configuration 
or using a sitemap parameter.

Is it true that setting response headers in the Serializer will be 
respected? It's not clear that's true in all cases - various Wrapped 
Responses throw that away. I think it should be more explicitly part 
of the workflow.

A Serializer will never deal with WrappedEnvironments AFAICS. 
Internal xml pipeline processing is pipeline minus Serializer.

I didn't know that. However: if the content is being streamed to the 
client, at what point are the headers sent? Can it be said for certain 
that a Serializer can set headers after startDocument(), and these 
headers are sent to the client? 

Actually I think that in your case it is not a problem and you don't 
have to bother with buffer size. You'll always know the mime-type before 
you start streaming because it is an attribute of the element enclosing 
the contents you are going to stream.

--
Unico


Re: svn commit: rev 55619 - in cocoon/branches/BRANCH_2_1_X/src: test/anteater webapp/samples/test/reader-mime-type

2004-10-27 Thread Unico Hommes
Vadim Gritsenko wrote:
Unico Hommes wrote:
Vadim Gritsenko wrote:
Unico Hommes wrote:
Now only one release blocker remains: NPE in 
AbstractEnvironment.release()

No, not exactly. As I see it, ResourceReader should return Source's 
mimeType in this particular case, as per:

public String getMimeType() {
    Context ctx = ObjectModelHelper.getContext(objectModel);
    if (ctx != null) {
        final String mimeType = ctx.getMimeType(source);
        if (mimeType != null) {
            return mimeType;
        }
    }
    return inputSource.getMimeType();
}
But in this case, it was not able to get mime type of the 
SitemapSource. SitemapSource, in its turn, was retrieving mimeType 
from the environment:

344:this.mimeType = this.environment.getContentType();
But for some reason it is not there. I think that is the problem and 
it was a valid test case for this problem. Unless I am mistaken... 
If I'm not, we should revert the removal of test case.


No don't say that! I thought we'd finally gotten rid of it. :-(

Ooops. I'm sorry! :)

Never mind. It's not your fault ;-)
You are right though. My conclusion that the test case was no longer 
valid was premature. It used to be that SitemapSource would just 
return hard-coded text/xml as its mime type.

Anyway, the problem appears to be that the mime-type is only set on 
the environment by the processing pipeline during actual processing 
whereas at the time Reader.getMimeType is called the pipeline has 
only been setup and prepared.

Now I am wondering whether we can move setting the mime-type from the 
processing stage to the preparation stage.

Probably we should, if possible. 

It should be possible if the issue Luigi raised doesn't hold. 
Should the data stream in the pipeline have any say over its own 
characteristics? I am not sure. It would be a whole lot easier if we 
would just mandate it. But then perhaps the Response.setHeader should 
not even be available to sitemap components, or FOM even. Hmm.


The reason it is currently being done during processing is that in 
the case of the caching pipeline the mime-type is held in the cached 
response and the retrieval of the cached response is delayed until 
the processing stage in order to optimize performance.

If mime-type is always set during pipeline setup, it will not be 
necessary to store it in the cached response at all. Moreover, the current 
situation is flawed anyway: what about all the other headers? They are 
all lost on second request. If all headers are set during pipeline 
setup, we won't have the issue.

Is it really necessary or desirable to cache the mime-type?

See above.

What happens if the effective mime-type is the one from the 
reader definition or the read node declaration (<map:reader 
class=".." mime-type="text/xml"/> or <map:read src="somewhere" 
mime-type="text/xml"/>)? From these examples it follows that the 
mime-type must be part of the cache-key, because if you were to change 
it in any of the attributes above the previously cached contents 
should not be served.

Not if it is set as described above.

Btw. ResourceReader.getMimeType() method above looks a bit funny to 
me. The mime-type mapping from the context has priority over the 
mime-type of the individual source. Surely that should be the other 
way around?

There were some changes around mime type handling in 2.2...
http://issues.apache.org/bugzilla/show_bug.cgi?id=10277
ResourceReader implementation was not changed, though.

Yes, the conclusion at the end seems not to match the code in 
ResourceReader.

--
Unico



Re: getInputStream() in FOM_Cocoon.FOM_Request ?

2004-10-26 Thread Unico Hommes
On 24-okt-04, at 13:47, Unico Hommes wrote:
On 24-okt-04, at 1:38, Frédéric Glorieux wrote:
Hello,
As explained in another thread, I'm trying to implement PUT of 
binaries in the webdav/davmap demo. I wrote a simple RequestReader 
as expected by the sitemap sample; it works well for HttpRequest, 
because there is a getInputStream() method. But the sample sitemap 
is calling a pipeline from flow, and FOM_Cocoon.FOM_Request doesn't 
seem to expose an InputStream in its API. I have a simple 
question: why? There's a piece of the architecture I can't understand 
there between all these requests.

The Request interface itself does not have getInputStream method, only 
HttpRequest does. So first step would be to add getInputStream method 
to the Request and then add it to FOM.

Done. I've applied the changes to the trunk only for now because I had 
to do an incompatible change. HttpRequest.getInputStream method 
returned ServletInputStream which has been changed to InputStream. 
Should I port the changes to the branch anyway ? I've also deprecated 
ActionRequest.getPortletInputStream in the portlet environment.

--
Unico


Re: getInputStream() in FOM_Cocoon.FOM_Request ?

2004-10-26 Thread Unico Hommes
This seemed to be related to the removal of instrumentation from the 
axis block. Should be fixed now.

--
Unico
Tim Larson wrote:
On Tue, Oct 26, 2004 at 01:49:24PM +0200, Unico Hommes wrote:
 

On 24-okt-04, at 13:47, Unico Hommes wrote:
   

The Request interface itself does not have getInputStream method, only 
HttpRequest does. So first step would be to add getInputStream method 
to the Request and then add it to FOM.
 

Done. I've applied the changes to the trunk only for now because I had 
to do an incompatible change. HttpRequest.getInputStream method 
returned ServletInputStream which has been changed to InputStream. 
Should I port the changes to the branch anyway? I've also deprecated 
ActionRequest.getPortletInputStream in the portlet environment.
   

Accessing the root page from a clean build of the trunk gives:
Initialization Problem
Message: Error during configuration
Description: org.apache.avalon.framework.configuration.ConfigurationException: Error during configuration
Sender: org.apache.cocoon.servlet.CocoonServlet
Source: Cocoon Servlet
cause
java.lang.NullPointerException
request-uri
/
full exception chain stacktrace
org.apache.avalon.framework.configuration.ConfigurationException: Error during configuration
at org.apache.cocoon.components.axis.SoapServerImpl.configure(SoapServerImpl.java:207)
at org.apache.avalon.framework.container.ContainerUtil.configure(ContainerUtil.java:240)
at org.apache.cocoon.core.container.ComponentFactory.newInstance(ComponentFactory.java:131)
<snip/>
Caused by: java.lang.NullPointerException
at org.apache.excalibur.source.impl.ResourceSource.getInputStream(ResourceSource.java:97)
at org.apache.cocoon.components.axis.SoapServerImpl.setManagedServices(SoapServerImpl.java:366)
at org.apache.cocoon.components.axis.SoapServerImpl.configure(SoapServerImpl.java:201)
<snip/>
--Tim Larson
 




Re: svn commit: rev 55619 - in cocoon/branches/BRANCH_2_1_X/src: test/anteater webapp/samples/test/reader-mime-type

2004-10-26 Thread Unico Hommes
Now only one release blocker remains: NPE in AbstractEnvironment.release()
--
Unico
[EMAIL PROTECTED] wrote:
Author: unico
Date: Tue Oct 26 10:07:04 2004
New Revision: 55619
Modified:
  cocoon/branches/BRANCH_2_1_X/src/test/anteater/reader-mime-type.xml
  
cocoon/branches/BRANCH_2_1_X/src/webapp/samples/test/reader-mime-type/explain-test.xml
  cocoon/branches/BRANCH_2_1_X/src/webapp/samples/test/reader-mime-type/sitemap.xmap
Log:
remove testcase that no longer complies with expected behavior:
internal requests should not be able to alter response headers 

as discussed in http://marc.theaimsgroup.com/?t=10978326015r=1w=2
 

snip/


Re: getInputStream() in FOM_Cocoon.FOM_Request ?

2004-10-25 Thread Unico Hommes
Sylvain Wallez wrote:
Frédéric Glorieux wrote:

The Request interface itself does not have getInputStream method, 
only HttpRequest does. So first step would be to add getInputStream 
method to the Request and then add it to FOM.

DONE, it seems to work for me. Still problems for a real world WebDAV 
implementation, but this is for other threads.

I think this would be a good addition. What do others think?

YES, what do they think ?

+1.
But we have to define the behavior of getInputStream for environments 
where it has no meaning (e.g. background env, sitemap source, 
cocoonbean etc). IMO, we should throw an exception for these cases.

Agreed. java.lang.UnsupportedOperationException is appropriate I think.
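
A minimal sketch of that contract, assuming getInputStream() is added 
to the Request interface and stubbed out in environments that carry no 
body (the wrapper class name below is hypothetical):

import java.io.IOException;
import java.io.InputStream;

public interface Request {
    // New accessor for the raw request body.
    InputStream getInputStream() throws IOException;
    // ... existing Request methods elided ...
}

// Hypothetical implementation for environments without a meaningful
// request body (background env, sitemap source, cocoonbean, ...):
class BodylessRequest implements Request {
    public InputStream getInputStream() throws IOException {
        throw new UnsupportedOperationException(
            "this environment does not carry a request body");
    }
}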
--
Unico


[Heads up] Change to build system in 2.1.x

2004-10-25 Thread Unico Hommes
I've completed the changes to the build system discussed earlier [1]. In 
order to do so I have extended the gump descriptor with additional 
information that allows the build system to locate one or more 
dependency jars per depend project within ./lib/optional. See for an 
example the cocoon-block-axis project definition in gump.xml

Every block now *must* declare all the dependencies it requires to 
compile in gump.xml just in order for it to build properly.

Since I am not very familiar with gump.xml and I had to add a lot of 
information it is very probable that I made a mistake or two with the 
way local projects are declared. I guess we'll discover all that soon 
enough.

1. http://marc.theaimsgroup.com/?t=10982808125r=1w=2
--
Unico


Re: [Heads up] Change to build system in 2.1.x

2004-10-25 Thread Unico Hommes
On 25-okt-04, at 20:35, Vadim Gritsenko wrote:
Unico Hommes wrote:
I've completed the changes to the build system discussed earlier [1]. 
In order to do so I have extended the gump descriptor with additional 
information that allows the build system to locate one or more 
dependency jars per depend project within ./lib/optional. See for 
an example the cocoon-block-axis project definition in gump.xml
Every block now *must* declare all the dependencies it requires to 
compile in gump.xml just in order for it to build properly.
Since I am not very familiar with gump.xml and I had to add a lot of 
information it is very probable that I made a mistake or two with the 
way local projects are declared.
I thought you'd add a <library/> element which would be independent of 
the <depend/> element and thus avoid any possible conflicts with Gump. But 
now I see that you've added a bunch of new <depend/> elements - which are 
not currently required by Gump - I don't think we should do that.

I'd sleep better if instead of:
+<depend project="db-ojb"/>
+<depend project="antlr"/>
+<depend project="commons-dbcp"/>
+<depend project="commons-pool"/>
We'd have:
+<library project="db-ojb"/>
+<library project="antlr"/>
+<library project="commons-dbcp"/>
+<library project="commons-pool"/>
WDYT?
No problem, better even. Consider it done.

I guess we'll discover all that soon enough.
We won't - Gump does not build 2.1, but only 2.2, AFAIK.
Ah ok, makes sense sort of. Then we'll find out after I port the 
changes to the trunk. :-)

--
Unico


Re: webdav block, davmap, binaries PUT, request reader

2004-10-24 Thread Unico Hommes
Hi again,
I thought that would have worked :-/ The problem may be that 
getInputStream is not defined on the Request interface but only on the 
HttpRequest class. Hmm sorry, I guess a Reader is the best option ATM.

--
Unico
On 23-okt-04, at 20:30, Frédéric Glorieux wrote:
Hello Unico,
Back from family, and hungry to make it works.
You probably no longer
Now ?
Using a cocoon HEAD build on 16 october, among lots of others I got 
this exception

Caused by: org.apache.excalibur.source.SourceException: Exception 
during processing of 
cocoon://samples/blocks/webdav/davmap/request/read

  <map:match pattern="request/read">
    <map:read src="module:request:inputStream"/>
  </map:match>
Error during resolving of the input stream: 
org.apache.excalibur.source.SourceException:  The attribute: 
inputStream is empty

This comes from the webdav.js flowscript function:
function put() {
    var src  = cocoon.parameters["src"];
    var dest = cocoon.parameters["dest"];
    try {
        var status = repository.save(src, dest);
        sendStatus(status);
    }
    catch (e) {
        cocoon.log.error(e);
        sendStatus(500);
    }
}
where src points to your snippet
This happens when saving (PUT) with a WebDAV authoring tool (free 
XMLSpy, sorry, I'm still on Windows), which works correctly when src points to

  <map:match pattern="request/PUT">
    <map:generate type="stream">
      <map:parameter name="defaultContentType" value="text/xml"/>
    </map:generate>
    <map:serialize type="xml"/>
  </map:match>
Do you need more or did I point at the right thing?
Should I build something newer or did I miss something in using your 
snippet?


need to write a Reader for that. Instead you should be able to use 
the ModuleSource. This component exposes input module values as 
Sources. Syntax is like so: module:moduleName:moduleAttr . So in the 
case of davmap, it would become:
<map:read src="module:request:inputStream"/>
Fred.
--
Frédéric Glorieux (ingénieur documentaire, AJLSM)
http://www.ajlsm.com



Re: getInputStream() in FOM_Cocoon.FOM_Request ?

2004-10-24 Thread Unico Hommes
On 24-okt-04, at 1:38, Frédéric Glorieux wrote:
Hello,
As explained in another thread, I'm trying to implement PUT of 
binaries in the webdav/davmap demo. I wrote a simple RequestReader 
as expected by the sitemap sample; it works well for HttpRequest, 
because there is a getInputStream() method. But the sample sitemap is 
calling a pipeline from flow, and FOM_Cocoon.FOM_Request doesn't seem 
to expose an InputStream in its API. I have a simple question: 
why? There's a piece of the architecture I can't understand there between 
all these requests.

The Request interface itself does not have getInputStream method, only 
HttpRequest does. So first step would be to add getInputStream method 
to the Request and then add it to FOM.

I think this would be a good addition. What do others think?
Workarounds are easy (less flow, more Java), but this would be sad for 
this so-pure davmap, and also because it was my first flow try.

A request reader could perhaps be useful elsewhere?
IIRC it was mentioned a while ago that ModuleSource would deprecate 
StreamGenerator. I guess we found out it is not the case yet. But if we 
modify the Request object the way I described then I guess the combination 
of the ModuleSource and the ResourceReader would be equivalent to the 
Request Reader.

--
Unico


Re: webdav block, davmap, binaries PUT, request reader

2004-10-23 Thread Unico Hommes
You probably no longer need to write a Reader for that. Instead you 
should be able to use the ModuleSource. This component exposes input 
module values as Sources. Syntax is like so: 
module:moduleName:moduleAttr . So in the case of davmap, it would 
become:

<map:read src="module:request:inputStream"/>
--
Unico
On 23-okt-04, at 1:57, Frédéric Glorieux wrote:
Hi,
Sorry to disturb you with such a specific question, but today I 
evaluated and understood the webdav/davmap block.

Really amazing how it's a pretty way to get a quite full 
implementation of webdav, except for

PUT of binary files.
Does someone plan to write the reader needed for that?
Are there any blocking issues?
I can't stop myself from trying tomorrow, but I'm not the best guy to do 
the job.

Help welcome, and thanks again for all the work already done.
--
Frédéric Glorieux (ingénieur documentaire, AJLSM)
http://www.ajlsm.com



Re: [cron block] dependency question

2004-10-22 Thread Unico Hommes
Vadim Gritsenko wrote:
Unico Hommes wrote:
I'd like to add the ability to use an excalibur DataSourceComponent 
as the ConnectionProvider for the QuartzJobScheduler's JobStore. 
However the solution I had in mind results in an additional 
dependency on the cocoon databases block. Not because of a 
compilation dependency on the source code of that block but on a jar 
in its lib directory. Am I correct in assuming that the policy on 
this is that I move the excalibur-datasource jar to lib/optional?

Just an idea; can we move ALL libraries to lib/optional, and copy them 
into WEB-INF/lib on an as-needed basis, i.e. if a block which needs a 
library is included, only then is the library copied over? This should be 
possible using the info from gump.xml...

OK, I started working on this. Actually the changes to the build system 
were a breeze. The only difficulty I am currently having is with 
gump.xml because I am not very familiar with it. A lot of blocks project 
declarations have missing dependency information. In those cases I need 
to find out the project's gump name. How do I do that?

Another thing is that some external project names may not correspond to 
the name of the jar in our repository. Should I rename the jar in that 
case or is there a way to specify an alternative jar name in the descriptor?

--
Unico


Re: [cron block] dependency question

2004-10-22 Thread Unico Hommes
Vadim Gritsenko wrote:
Unico Hommes wrote:
Vadim Gritsenko wrote:
Unico Hommes wrote:
I'd like to add the ability to use an excalibur DataSourceComponent 
as the ConnectionProvider for the QuartzJobScheduler's JobStore. 
However the solution I had in mind results in an additional 
dependency on the cocoon databases block. Not because of a 
compilation dependency on the source code of that block but on a 
jar in its lib directory. Am I correct in assuming that the policy 
on this is that I move the excalibur-datasource jar to lib/optional?


Just an idea; can we move ALL libraries to lib/optional, and copy 
them into WEB-INF/lib on an as-needed basis, i.e. if a block which 
needs a library is included, only then is the library copied over? This 
should be possible using the info from gump.xml...

OK, I started working on this. Actually the changes to the build 
system were a breeze. The only difficulty I am currently having is 
with gump.xml because I am not very familiar with it. A lot of blocks 
project declarations have missing dependency information. In those 
cases I need to find out the project's gump name. How do I do that?

Another thing is that some external project names may not correspond 
to the name of the jar in our repository. Should I rename the jar in 
that case or is there a way to specify an alternative jar name in the 
descriptor?

I don't like renaming jars... Can we just extend gump.xml with the 
info we need? In this case, for a block, we need a list of jar files 
it requires from lib/optional.

The way I have it now is that for each depend I add an include 
[EMAIL PROTECTED]/ to the fileset. We could add a library 
attribute that overrides the project attribute. For instance:

<project name="cocoon-block-axis">
  <depend project="xml-axis" library="axis"/>
On the other hand, I think there are also cases (actually perhaps 
xml-axis is one) where one project delivers more than one jar. In that 
case perhaps the library attribute must become a comma-separated list. On 
second thought it may be better to create a library child element.

Thoughts?
--
Unico


Re: [cron block] dependency question

2004-10-21 Thread Unico Hommes
Carsten Ziegeler wrote:
Vadim Gritsenko wrote:
 

Just an idea; can we move ALL libraries to lib/optional, and 
copy them into WEB-INF/lib on an as-needed basis, i.e. if a 
block which needs a library is included, only then is the library 
copied over? This should be possible using the info from gump.xml...

This way we could eliminate several pseudo dependencies like this.
   

Yes, this has been discussed several times and I think as well this
is the easiest solution. But upto now noone had time to implement it :(
 

Ok ok, I get the hint ;-) But before I decide to do any work there is 
another issue with the blocks build system I'm dying to resolve. What 
about having only one repository location for blocks? I am so tired of 
all the duplicate effort we have to do for each and every change to 
blocks. It shouldn't be necessary.

There is probably no small amount of work involved in getting it 
working, but I'd like to know exactly what it would take to bite this 
bullet. Some of the steps involved that I can distinguish are:

1. Merge/sync the current 2.1.x and 2.2 blocks. What blocks have the 
biggest differences between their 2.1.x and 2.2 copy? If there are 
unresolvable differences, how to handle that? Have separate source 
locations for different versions in conflicting blocks? Define Cocoon 
target versions for individual blocks in gump.xml?

2. Move blocks to one location. /repos/asf/cocoon/trunk/blocks?
3. Separate blocks build system from core build system and let one drive 
the other.

Comments?
--
Unico


Re: [cron block] dependency question

2004-10-21 Thread Unico Hommes
Reinhard Poetz wrote:
Unico Hommes wrote:
Carsten Ziegeler wrote:

2. Move blocks to one location. /repos/asf/cocoon/trunk/blocks?

yes
3. Separate blocks build system from core build system and let one 
drive the other.

Comments?

Some thoughts I want to share:
goal: move towards real blocks - do as much work as can be reused later
- each block has its own build system

IIRC, Nicola already started an effort for an updated build system that 
features an individual build file for each block separately. I wonder 
what happened to that and whether it is usable already.

- each block has a public and a private directory

What does that do?
- each block has a deployment descriptor:
  * list of blocks that are necessary to make them compile/run
  * list of blocks that are necessary at runtime
(e.g. forms needs xsp because of the examples)
  * list of all libraries
(all libraries are in a single Cocoon library repository
 and this way we can make sure that we don't end up in jar hell
 -- real blocks and their shielding will finally solve this)
  -- create gump file
  -- create eclipse/idea project files
- each block only depends on
  - cocoon core
  - public sources of other blocks
This is the same file as blocks.xml for real blocks, right?
a complete build runs through the following steps:
- compile Cocoon core
- one build task that compiles all public directories
  at once (or can we make sure that there are no
  circular dependencies, which should be avoided of
  course)
- compile each block separately
- create web application
- deploy each block separately
  - copy samples
  - patch configuration files
but of course only a single block can be deployed too
It's probably a lot of work but sooner or later we have to do it 
anyway, so why should we suffer from our build system and a gigantic 
eclipse project any longer?

For splitting the eclipse project there is an additional requirement 
that the blocks and core directories don't overlap. Eclipse cannot deal with 
overlapping projects. So that would mean that the 2.2 core move to 
/repos/asf/cocoon/trunk/core .

--
Unico


Re: [cron block] dependency question

2004-10-21 Thread Unico Hommes
Carsten Ziegeler wrote:
Unico Hommes wrote:
 

Ok ok, I get the hint ;-) 
   

Oh, that wasn't targetted at you, Unico, but if you have time... :)
 

I didn't really feel that it was. It just seems opportune that I take up 
the issue since I raised it. I'll see what I can do.

But before I decide to do any work 
there is another issue with the blocks build system I'm dying 
to resolve. What about having only one repository location 
for blocks? I am so tired of all the duplicate effort we have 
to do for each and every change to blocks. It shouldn't be necessary.
   

Yes, and it should be simple.
 

There probably isn't a small amount of work involved to get 
it working but I'd like to know exactly what it would take to 
bite this bullet. 
Some of the steps involved that I can distinguish are:

1. Merge/sync the current 2.1.x and 2.2 blocks. What blocks 
have the biggest differences between their 2.1.x and 2.2 
copy? If there are unresolvable differences, how to handle 
that? Have separate source locations for different versions 
in conflicting blocks? Define Cocoon target versions for 
individual blocks in gump.xml?

   

We have to finish the syncing, The wiki still lists some blocks
that haven't been synced yet - but again this is simple work.
 

Ah thanks for the pointer. I see there is plenty I can do in that 
department.

Apart from that, some blocks depend (unfortunately) on some internal
things which have changed between 2.1.x and 2.2. The most 
prominent is of course XSP. But there are others that now
take advantage of some new features in 2.2 that aren't available
in 2.1.x.
So in the end this is not so easy.

We could start simple. First move blocks that don't have a difference
and leave the different ones in the two branches.
 

Hmm, that doesn't make the build system any simpler, but alas, we can 
clean up after the migration is complete.

But I would strongly suggest that we first finish syncing, then
do a painless 2.1.6 release and then spend energy on
this issue. I personally don't want to delay a 2.1.6 release
just because of a broken build system etc.
 

That is true. Is there anything holding back a 2.1.6 release btw?
--
Unico


Re: [cron block] dependency question

2004-10-21 Thread Unico Hommes
Reinhard Poetz wrote:
Unico Hommes wrote:

[snip]
For splitting the eclipse project there is an additional requirement 
that blocks and core directory don't overlap. Eclipse cannot deal 
with overlapping projects. So that would mean that the 2.2 core move 
to /repos/asf/cocoon/trunk/core .

IMO the eclipse project file for a block contains
- two source directories (public/private)
- Cocoon core lib
- public libraries of blocks it depends on
- external libraries (currently in lib/optional)
This way we don't have to deal with Eclipse project dependencies, do we?
Ah, so you mean to keep the notion of a single eclipse project but one 
whose project descriptor is split between a top level core one that 
references project fragments inside individual blocks using entity 
references? I am just guessing.

At least in eclipse this is illegal:
./.project.xml
./blocks/blockA/.project.xml
Opening both projects at the same time in the same workspace is not 
possible.

--
Unico


Re: [cron block] dependency question

2004-10-21 Thread Unico Hommes
Upayavira wrote:
Unico Hommes wrote:
Reinhard Poetz wrote:
Unico Hommes wrote:

[snip]
For splitting the eclipse project there is an additional 
requirement that blocks and core directory don't overlap. Eclipse 
cannot deal with overlapping projects. So that would mean that the 
2.2 core move to /repos/asf/cocoon/trunk/core .


IMO the eclipse project file for a block contains
- two source directories (public/private)
- Cocoon core lib
- public libraries of blocks it depends on
- external libraries (currently in lib/optional)
This way we don't have to deal with Eclipse project dependencies, do 
we?

Ah, so you mean to keep the notion of a single eclipse project but 
one whose project descriptor is split between a top level core one 
that references project fragments inside individual blocks using 
entity references? I am just guessing.

At least in eclipse this is illegal:
./.project.xml
./blocks/blockA/.project.xml
Opening both projects at the same time in the same workspace is not 
possible.

I understood him to mean one project per block. I'd prefer one project 
for all blocks.

Having said that, there's no reason why we couldn't have:
trunk/blocks/.project
trunk/blocks/forms/.project
trunk/blocks/databases/.project
etc
That way, you can do it which ever way suits you.
But the core project descriptor currently is trunk/.project; that should 
become trunk/core/.project in order to be able to open both core and 
blocks projects in the same workspace.

--
Unico


[cron block] dependency question

2004-10-20 Thread Unico Hommes
I'd like to add the ability to use an excalibur DataSourceComponent as 
the ConnectionProvider for the QuartzJobScheduler's JobStore. However 
the solution I had in mind results in an additional dependency on the 
cocoon databases block. Not because of a compilation dependency on the 
source code of that block but on a jar in its lib directory. Am I 
correct in assuming that the policy on this is that I move the 
excalibur-datasource jar to lib/optional?

--
Unico


Re: [RT] applying SoC on cocoon itself

2004-10-20 Thread Unico Hommes
Sylvain Wallez wrote:
Stefano Mazzocchi wrote:
Sylvain Wallez wrote:
Stefano Mazzocchi wrote:
snip/
and this solves *ALSO* the issue that was identified at the GT 
about virtual pipeline components resolving services locally or 
remotely (block-wise).

The current problem with VPCs is the context in which relative URIs 
must be resolved. We have not found a good solution for this as that 
depends on the point of view that is considered. What we're missing 
actually is the notion of typed parameter that would allow us to 
absolutize a URI at the closest point where we can determine that it 
is a URI and not a raw string.

In the syntax I proposed, the uri= becomes the identifier for the 
service (thus relative to the block that exposes the service) and the 
src= becomes the identifier for the instructions for the service 
(thus relative to the block that requires the service).

We already have this src= attribute which currently is a raw string. 
Does this mean that we will enforce the contract on this by 
explicitly stating that it's a URI that will be resolved in the local 
context where the instruction is written?

We have to check if all of our current components use src as a URI.
I've often thought that the signature of the setup() method was wrong. 
The src parameter is passed as a String, the component is free to 
interpret it as anything it wishes. But I think this parameter was 
really only meant to ever be interpreted as a Source object. So instead 
of setup(SourceResolver resolver, Map om, String src, Parameters pars) 
the method should be setup(Source source, Map om, Parameters pars). That 
also unambiguously defines the meaning of the src attribute.

I think this solves the issue.
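To make this concrete, here are the two signatures side by side. The first
mirrors the existing SitemapModelComponent contract; the second is only a
sketch of the proposal, not an existing interface:

import java.io.IOException;
import java.util.Map;
import org.apache.avalon.framework.parameters.Parameters;
import org.apache.cocoon.ProcessingException;
import org.apache.cocoon.environment.SourceResolver;
import org.apache.excalibur.source.Source;
import org.xml.sax.SAXException;

public interface SetupContractSketch {

    // current contract: src is an opaque String the component may
    // interpret any way it likes
    void setup(SourceResolver resolver, Map objectModel, String src,
               Parameters par)
        throws ProcessingException, SAXException, IOException;

    // proposed contract (sketch only): the sitemap resolves src up
    // front and always hands the component a ready-made Source
    void setup(Source source, Map objectModel, Parameters par)
        throws ProcessingException, SAXException, IOException;
}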

Mmhh... what if other parameters are also URIs?
Easy. There is no way for a sitemap to interpret it as anything but a 
plain string. If a component wants to interpret it as a Source it uses 
its SourceResolver. We'll have a special Source protocol that allows to 
get FileSources relative to the calling sitemap.

--
Unico


Re: [RT] applying SoC on cocoon itself

2004-10-20 Thread Unico Hommes
Sylvain Wallez wrote:
Unico Hommes wrote:
Sylvain Wallez wrote:

<snip/>
We already have this src= attribute which currently is a raw 
string. Does this mean that we will enforce the contract on this by 
explicitly stating that it's a URI that will be resolved in the 
local context where the instruction is written?

We have to check if all of our current components use src as a URI.
I've often thought that the signature of the setup() method was 
wrong. The src parameter is passed as a String, the component is free 
to interpret it as anything it wishes. But I think this parameter was 
really only meant to ever be interpreted as a Source object. So 
instead of setup(SourceResolver resolver, Map om, String src, 
Parameters pars) the method should be setup(Source source, Map om, 
Parameters pars). That also unambiguously defines the meaning of the 
src attribute.

Agree. But this parameter has been underspecified for so long that it 
may well be the case that some people use it as a raw string.

No pain, no gain ;-)
Mmhh... what if other parameters are also URIs?

Easy. There is no way for a sitemap to interpret it as anything but a 
plain string. If a component wants to interpret it as a Source it 
uses its SourceResolver.

Sure.
We'll have a special Source protocol that allows to get FileSources 
relative to the calling sitemap.

Er... what is the calling sitemap when there's a chain involving 
several sitemaps, e.g. in a virtual component defined in a parent 
sitemap that has been called through a block? IMO, an absolutizer 
input module is better suited as it is evaluated at sitemap execution 
time (and thus in the context of that sitemap) whereas we don't know 
when a component will decide to use the SourceResolver to get a Source 
object from a String.

Ah! I understand now. I was wondering about that in your other email. 
You are totally right.

--
Unico



Re: svn commit: rev 55002 - cocoon/trunk/src/java/org/apache/cocoon/environment/wrapper

2004-10-18 Thread Unico Hommes
Sylvain Wallez wrote:
[EMAIL PROTECTED] wrote:
Author: unico
Date: Mon Oct 18 06:10:24 2004
New Revision: 55002
Added:
  
cocoon/trunk/src/java/org/apache/cocoon/environment/wrapper/ResponseWrapper.java 

Modified:
  
cocoon/trunk/src/java/org/apache/cocoon/environment/wrapper/EnvironmentWrapper.java 

Log:
introduce WrappedResponse for preventing internal requests to modify 
the response headers
as discussed here: 
http://marc.theaimsgroup.com/?t=10978326015&r=1&w=2
 

I haven't followed that discussion, but I think these changes will 
break internal redirects for external requests, as it won't allow 
setting headers in that case.

Example :
<map:match pattern="*">
  <map:redirect-to uri="cocoon:/index.html"/>
</map:match>
The headers set by the called pipeline will be ignored although they 
should not be. A check that the wrapped environment is external, and 
avoiding the wrapping in that case, should be enough, I guess.

I see, the TreeProcessor wraps the environment in a 
ForwardEnvironmentWrapper in the case of cocoon redirects. Hmm, but 
would the check on whether the wrapped environment is an external one 
really make the correct distinction though? Wouldn't that check also 
match the scenario that started this:

<map:match pattern="transformation">
  <map:read src="xsl"/>
</map:match>
<map:match pattern="page">
  <map:generate src="xml"/>
  <map:transform src="cocoon:/transformation"/>
  <map:serialize/>
</map:match>
Isn't the environment in which the transformation pipeline is 
processed also an environment wrapping an external one?

--
Unico


Re: svn commit: rev 55002 - cocoon/trunk/src/java/org/apache/cocoon/environment/wrapper

2004-10-18 Thread Unico Hommes
Sylvain Wallez wrote:
Unico Hommes wrote:
Sylvain Wallez wrote:
[EMAIL PROTECTED] wrote:
Author: unico
Date: Mon Oct 18 06:10:24 2004
New Revision: 55002
Added:
  
cocoon/trunk/src/java/org/apache/cocoon/environment/wrapper/ResponseWrapper.java 

Modified:
  
cocoon/trunk/src/java/org/apache/cocoon/environment/wrapper/EnvironmentWrapper.java 

Log:
introduce WrappedResponse for preventing internal requests to 
modify the response headers
as discussed here: 
http://marc.theaimsgroup.com/?t=10978326015&r=1&w=2
 

I haven't followed that discussion, but I think these changes will 
break internal redirects for external requests, as it won't allow 
setting headers in that case.

Example :
<map:match pattern="*">
  <map:redirect-to uri="cocoon:/index.html"/>
</map:match>
The headers set by the called pipeline will be ignored although they 
should not be. A check that the wrapped environment is external, and 
avoiding the wrapping in that case, should be enough, I guess.

I see, the TreeProcessor wraps the environment in a 
ForwardEnvironmentWrapper in the case of cocoon redirects. Hmm, but 
would the check on whether the wrapped environment is an external one 
really make the correct distinction though? Wouldn't that check also 
match the scenario that started this:

<map:match pattern="transformation">
  <map:read src="xsl"/>
</map:match>
<map:match pattern="page">
  <map:generate src="xml"/>
  <map:transform src="cocoon:/transformation"/>
  <map:serialize/>
</map:match>
Isn't the environment in which the transformation pipeline is 
processed also an environment wrapping an external one?

Mmmh... you're right :-)
I guess we should make the distinction between wrappers for internal 
redirects, that would not wrap the response, and wrappers for cocoon: 
sources that must wrap the response. A simple additional boolean in 
the EnvironmentWrapper constructor should do the trick.

Yeah, that should do the trick. I'll take care of it. Thanks for 
catching this :-)
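For the record, a toy sketch of the distinction (class and flag names are
invented for illustration; the real change would just add the boolean to
the EnvironmentWrapper constructor):

// stand-in for org.apache.cocoon.environment.Response
interface SimpleResponse {
    void setHeader(String name, String value);
}

// internal redirects pass headers through, cocoon: sources swallow them
final class HeaderShieldingResponse implements SimpleResponse {
    private final SimpleResponse wrapped;
    private final boolean passThrough; // true for internal redirects

    HeaderShieldingResponse(SimpleResponse wrapped, boolean passThrough) {
        this.wrapped = wrapped;
        this.passThrough = passThrough;
    }

    public void setHeader(String name, String value) {
        if (passThrough) {
            wrapped.setHeader(name, value); // redirect: header still counts
        }                                   // cocoon: source: ignored
    }
}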

--
Unico


Re: container independent lifecycles?

2004-10-16 Thread Unico Hommes
Ugo Cei wrote:
On 16 Oct 2004, at 16:08, Tim Larson wrote:
All this talk about being independent from the container...but how do we
get lifecycles and still stay independent from the container?

<bean id="x" class="X" init-method="init" destroy-method="destroy"/>
public class X {
  public void init() { ... }
  public void destroy() { ... }
}
Now, there might be some cases where this is not enough, but until 
someone comes up with some real use cases ...

That was one of my questions also. I have not had time to look at the Spring 
framework so bear with me. One of the issues we currently have with ECM 
is that it does not do shutdown in order based on dependency 
information. At several places this leads to errors when component A, 
which relies on component B to do work during its shutdown phase, finds 
that component B has already been destroyed. Does Spring handle this case?

--
Unico


Re: container independent lifecycles?

2004-10-16 Thread Unico Hommes
Ugo Cei wrote:
On 16 Oct 2004, at 18:36, Ugo Cei wrote:
I posted a question about this to the Spring forum:

I could have avoided it: the answer is in the users' list archive:
http://sourceforge.net/mailarchive/message.php?msg_id=8578586
Looks like a very decent solution to me :-)
--
Unico


Re: Invalid content length, revisited

2004-10-15 Thread Unico Hommes
Rogier Peters wrote:
Guys,
On 18/5 Joerg asked a question about invalid content length errors[1] due to
readers. There is also a bug that is somewhat related[2], but it seems to be
WONTFIX.
 

The WONTFIX seems to apply to the original description of the bug only. 
There is a related issue that is mentioned in that thread that I think 
*is* a valid problem.

I have the following case :
 generator    reader
     |          |
 validator ... dtd
     |
 serializer
In this case the reader sets the content-length, and the serializer doesn't. So
if the length of the serializer's output is greater than that of the dtd, the
output is incomplete.
Although a quick fix is not to get the dtd through a reader, I'm sure there are
cases where that isn't a solution.
I didn't post this as a bug yet, because I am not sure whether this is just
unintended use of the reader.
 

It oughta work. I don't see how it doesn't. The environment that is 
associated with internal requests (EnvironmentWrapper) does not forward 
setContentLength() to its wrapped instance so it shouldn't reach the 
HttpEnvironment. Perhaps reading a cocoon pipeline is interpreted as an 
internal redirect in which case MutableEnvironmentFacade *does* forward it?

Also, I can't quite get my mind around what's the best way to solve this. Joerg
suggested in his original mail[1] to build some awareness in to the reader to
see if it is called as a cocoon-source or not. Another possible solution would
be setting content-length from all serializers, although Carsten suggests in the
closing of the bug that content-length can not be set repeatedly. 

 

The problem Carsten mentions is that once the HTTP headers are sent, 
setting the content-length header has no effect.

I agree with Joerg that it smells that the (resource) reader is setting 
all kinds of response headers where this should probably be handled by 
the processing pipeline instead. That is the IoC approach that is taken with 
setting the Content-Type header. By doing it in that way, you avoid the 
current problem that a ResourceReader in a caching pipeline produces 
different responses depending on whether the content is served from the 
cache or produced by a call to generate().

So perhaps we should think about enhancing the SitemapOutputComponent 
interface with several additional methods to allow the processing 
pipeline to set more response headers. I know of one problem with 
content encoding where the pipeline should be able to determine the 
encoding in order to return it in the Content-Type header.
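For instance, something along these lines (the method names are invented
here, just to illustrate the direction; SitemapOutputComponent is the
existing interface):

import org.apache.cocoon.sitemap.SitemapOutputComponent;

// Hypothetical extension: the processing pipeline, not the component,
// would read these values and put them on the real Response, so cached
// and freshly generated requests stay consistent.
public interface HeaderAwareOutputComponent extends SitemapOutputComponent {

    /** Content length the pipeline should announce, or -1 if unknown. */
    long getContentLength();

    /** Character encoding to report in the Content-Type header, or null. */
    String getEncoding();
}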

What do others think?



Re: Invalid content length, revisited

2004-10-15 Thread Unico Hommes
Unico Hommes wrote:
Rogier Peters wrote:
Guys,
On 18/5 Joerg asked a question about invalid content length errors[1] 
due to
readers. There is also a bug that is somewhat related[2], but it 
seems to be
WONTFIX.

 

The WONTFIX seems to apply to the original description of the bug 
only. There is a related issue that is mentioned in that thread that I 
think *is* a valid problem.

I have the following case :
 generator    reader
     |          |
 validator ... dtd
     |
 serializer
In this case the reader sets the content-length, and the serializer 
doesn't. So if the length of the serializer's output is greater than 
that of the dtd, the output is incomplete.
Although a quick fix is not to get the dtd through a reader, I'm sure 
there are cases where that isn't a solution. I didn't post this as a 
bug yet, because I am not sure whether this is just 
unintended use of the reader.  

It oughta work. I don't see how it doesn't. 

Oh I see now. The ResourceReader also sets the content length on the 
response. It *really* shouldn't do that IMHO. Anybody know why it does that?

--
Unico


Re: Cocoon 2.1.6 Release Plan, Re: Syncing 2.1.x and 2.2

2004-10-15 Thread Unico Hommes
Vadim Gritsenko wrote:
Carsten Ziegeler wrote:
According to the wiki we still have some open blocks/areas.
http://wiki.apache.org/cocoon/MergingBranches

In addition it seems that some new things have been checked in
only to one branch, either trunk or 2.1.x, but not to both.
Could everyone please verify that all patches, fixes etc. are
applied accordingly? Of course, there are features that
should only apply to 2.2.
I see a successful merging as a minimum requirement for the 2.1.6
release.

What else besides merging? I'd say checking out all tests and samples 
is another required step. Going through TODO/Bugzilla items to 
identify blockers could be the third thing. I'm aware of at least one 
open issue [1] which could be addressed in 2.1.6

Vadim
[1] http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=109596886721684

Both the test and the code seem to be wrong. IIUC the behavior should be 
that failing to redirect from flow should raise a ProcessingException. 
IOW a 500 response status code should be the correct behavior. However 
the test case tests for a 404 and the actual response code seems to be 200.

--
Unico


Re: Invalid content length, revisited

2004-10-15 Thread Unico Hommes
Vadim Gritsenko wrote:
Unico Hommes wrote:
Oh I see now. The ResourceReader also sets the content length on the 
response. It *really* shouldn't do that IMHO. Anybody know why it 
does that?

I see that EnvironmentWrapper ignores set content length. And it has 
RequestWrapper. BUT IT DOES NOT HAVE ResponseWrapper! I guess that's 
the real problem.

ResourceReader probably should be changed to set content length on the 
environment, but this is fix in only one case. ResponseWrapper seems 
to be the fix for all cases at once. Or, am I missing something?

I guess you are absolutely right. The Response object in the case of an 
internal request should be a ResponseWrapper object. I don't think 
ResourceReader should deal with Environment directly.

There still remains the question whether the ResourceReader should be 
setting the Content-Length itself though. In fact setting response 
headers while generating a pipeline may be a bad idea in general 
depending on the type of header. When the processing pipeline 
implementation is caching there is no way to reproduce the original 
response and hence the cached response is different from the original 
generated response.

--
Unico


Re: Invalid content length, revisited

2004-10-15 Thread Unico Hommes
Vadim Gritsenko wrote:
Unico Hommes wrote:
Oh I see now. The ResourceReader also sets the content length on the 
response. It *really* shouldn't do that IMHO. Anybody know why it 
does that?

I see that EnvironmentWrapper ignores set content length. And it has 
RequestWrapper. BUT IT DOES NOT HAVE ResponseWrapper! I guess that's 
the real problem.

ResourceReader probably should be changed to set content length on the 
environment, but this fixes only one case. ResponseWrapper seems 
to be the fix for all cases at once. Or, am I missing something?

What about FOM though? Doesn't a flowscript run within an 
EnvironmentWrapper? If so, having ResponseWrapper ignore calls to 
setHeader() and related methods would be undesirable.

--
Unico


Re: Invalid content length, revisited

2004-10-15 Thread Unico Hommes
Vadim Gritsenko wrote:
Unico Hommes wrote:
Vadim Gritsenko wrote:
Unico Hommes wrote:
Oh I see now. The ResourceReader also sets the content length on 
the response. It *really* shouldn't do that IMHO. Anybody know why 
it does that?


I see that EnvironmentWrapper ignores set content length. And it has 
RequestWrapper. BUT IT DOES NOT HAVE ResponseWrapper! I guess that's 
the real problem.

ResourceReader probably should be changed to set content length on 
the environment, but this fixes only one case. ResponseWrapper 
seems to be the fix for all cases at once. Or, am I missing something?

What about FOM though? Doesn't a flowscript run within an 
EnvironmentWrapper? If so, having ResponseWrapper ignore calls to 
setHeader() and related methods would be undesirable.

Shouldn't it / could it run under MutableEnvironmentFacade?
I think that in the following scenario there will be an 
EnvironmentWrapper in there somewhere:

sitemap

<map:match pattern="bar">
  <map:generate src="bar"/>
  <map:serialize/>
</map:match>
<map:match pattern="foo">
  <map:call function="foo"/>
</map:match>
<map:match pattern="foobar">
  <map:generate src="cocoon:/foo"/>
  <map:serialize/>
</map:match>
flowscript
--
function foo() {
  cocoon.response.setHeader("foo", "bar");
  cocoon.sendPage("bar");
}
Because function foo() is called via a SitemapSource its environment is 
an EnvironmentWrapper (don't know, but I am guessing) and the call to 
setHeader will be to the ResponseWrapper?

--
Unico


Re: Invalid content length, revisited

2004-10-15 Thread Unico Hommes
Vadim Gritsenko wrote:
Unico Hommes wrote:
Vadim Gritsenko wrote:
Unico Hommes wrote:
Vadim Gritsenko wrote:
Unico Hommes wrote:
Oh I see now. The ResourceReader also sets the content length on 
the response. It *really* shouldn't do that IMHO. Anybody know 
why it does that?



I see that EnvironmentWrapper ignores set content length. And it 
has RequestWrapper. BUT IT DOES NOT HAVE ResponseWrapper! I guess 
that's the real problem.

ResourceReader probably should be changed to set content length on 
the environment, but this is fix in only one case. ResponseWrapper 
seems to be the fix for all cases at once. Or, am I missing 
something?

What about FOM though? Doesn't a flowscript run within an 
EnvironmentWrapper? If so, having WrapperResponse ignore calls to 
setHeader() and related methods would be undesirable.


Shouldn't it / could it run under MutableEnvironmentFacade?
I think that in the following scenario there will be an 
EnvironmentWrapper in there somewhere:

sitemap

<map:match pattern="bar">
  <map:generate src="bar"/>
  <map:serialize/>
</map:match>
<map:match pattern="foo">
  <map:call function="foo"/>
</map:match>
<map:match pattern="foobar">
  <map:generate src="cocoon:/foo"/>
  <map:serialize/>
</map:match>
flowscript
--
function foo() {
  cocoon.response.setHeader("foo", "bar");
  cocoon.sendPage("bar");
}
Because function foo() is called via a SitemapSource its environment 
is an EnvironmentWrapper (don't know, but I am guessing) and the call 
to setHeader will be to the ResponseWrapper?

But that is good! foo should *not* set the content length in this scenario 
- you have further processing to do on the SAX events (in this case the 
default serializer) - and the content length *will* be different after 
the pipeline is complete.

I tried to find an example of an HTTP/1.1 header that does not depend on 
the top level processing environment but didn't find one. So I guess you 
are right that it is a good thing foo() cannot set headers. OK, I guess 
we should implement the ResponseWrapper solution then. Any other opinions?

--
Unico


Fw: [SourceForge.net Release] ehcache : ehcache

2004-09-28 Thread Unico Hommes
SourceForge.net wrote:
Project: ehcache  (ehcache)
Package: ehcache
Date   : 2004-09-28 07:03
Project ehcache ('ehcache') has released the new version of package 'ehcache'.
You can download it from SourceForge.net by following this link:
https://sourceforge.net/project/showfiles.php?group_id=93232&release_id=271146
or browse Release Notes and ChangeLog by visiting this link:
https://sourceforge.net/project/shownotes.php?release_id=271146
 

I believe this was to be our cue for moving the ehcache-based store to the 
core and making it our default, right? I have just updated our ehcache to 
the new release version but haven't yet moved it to the core. If no one 
objects I will move it to the core later.

There still remains one FIXME in the EHCache store implementation 
though. Method free() hasn't been implemented. AFAIK this one is called 
by the StoreJanitor to do its work. However, although it works a little 
differently from ours, EHCache has its own kind of janitor mechanism that 
operates on the basis of time-to-live and time-to-idle expiry 
strategies. Will this be sufficient or should we try to come up with 
some sort of implementation of the free() method?

Another thing to keep in mind is that EHCacheStore cannot be used in the 
role of a transient store. This is because it requires entries to be 
Serializable even if it is not configured to be persistent. I 
will file an enhancement request to the ehcache guys like we did with 
JCS at the time, to remove this requirement.

--
Unico


Re: Fw: [SourceForge.net Release] ehcache : ehcache

2004-09-28 Thread Unico Hommes
Vadim Gritsenko wrote:
Unico Hommes wrote:
SourceForge.net wrote:
Project ehcache ('ehcache') has released the new version of 
package 'ehcache'.

I believe this was to be our cue for moving the ehcache-based store to 
the core and making it our default, right? I have just updated our 
ehcache to the new release version but haven't yet moved it to the 
core. If no one objects I will move it to the core later.

One objection - below.

There still remains one FIXME in the EHCache store implementation 
though. Method free() hasn't been implemented. AFAIK this one is 
called by the StoreJanitor to do its work. However, although it works 
a little differently from ours, EHCache has its own kind of janitor 
mechanism that operates on the basis of time-to-live and time-to-idle 
expiry strategies. Will this be sufficient or should we try to come 
up with some sort of implementation of the free() method?

No, it won't be sufficient. The Janitor checks that the JVM is not 
starving, and the cache *must* react to a low-memory condition regardless 
of time-to-live and other bells-and-whistles.

Then what about implementing free() so that it removes a specified 
number of elements? Perhaps there is a possibility to find out the LRU 
ones. I believe StoreJanitor tries to free memory until the low-memory 
condition goes away, so such an implementation would suffice, right?
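Something like this, say (rough and untested; ehcache exposes no LRU
ordering, so this just evicts an arbitrary batch and relies on the
StoreJanitor calling free() again while memory stays low):

import java.io.Serializable;
import java.util.List;

// inside the EHCache store implementation; this.cache is the
// net.sf.ehcache.Cache instance
public void free() {
    try {
        List keys = this.cache.getKeys();
        int batch = Math.min(keys.size(), 10); // batch size is a guess
        for (int i = 0; i < batch; i++) {
            this.cache.remove((Serializable) keys.get(i));
        }
    } catch (Exception e) {
        getLogger().error("Failure while freeing store entries", e);
    }
}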

--
Unico


Re: svn commit: rev 47048 - cocoon/branches/BRANCH_2_1_X/src/blocks/cron/java/org/apache/cocoon/components/cron

2004-09-22 Thread Unico Hommes
Vadim Gritsenko wrote:
[EMAIL PROTECTED] wrote:
 Modified: 
cocoon/branches/BRANCH_2_1_X/src/blocks/cron/java/org/apache/cocoon/components/cron/QuartzJobScheduler.java
 
==
 --- 
cocoon/branches/BRANCH_2_1_X/src/blocks/cron/java/org/apache/cocoon/components/cron/QuartzJobScheduler.java   
(original)
 +++ 
cocoon/branches/BRANCH_2_1_X/src/blocks/cron/java/org/apache/cocoon/components/cron/QuartzJobScheduler.java   
Wed Sep 22 07:42:00 2004
 @@ -547,6 +555,7 @@
 
  jobDataMap.put(DATA_MAP_NAME, name);
  jobDataMap.put(DATA_MAP_LOGGER, getLogger());
 +jobDataMap.put(DATA_MAP_CONTEXT, this.applicationContext);
  jobDataMap.put(DATA_MAP_MANAGER, this.manager);
  jobDataMap.put(DATA_MAP_ENV_CONTEXT, this.environmentContext);
  jobDataMap.put(DATA_MAP_RUN_CONCURRENT, new 
Boolean(canRunConcurrently));

You'll also have to remove this entry from jobDataMap when Quartz is
about to write it into the database. See QuartzDriverDelegate, methods
removeTransientData() and selectJobDetail().
Oh, good one, thanks :-) Will take care of this.
--
Unico


Re: DO NOT REPLY [Bug 31234] - Event Aware cache does not remove registry key

2004-09-16 Thread Unico Hommes
Geoff Howard wrote:
On 15 Sep 2004 10:29:27 -, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 

http://issues.apache.org/bugzilla/show_bug.cgi?id=31234
Event Aware cache does not remove registry key
   

...
 

--- Additional Comments From [EMAIL PROTECTED]  2004-09-15 10:29 ---
Thanks Oliver, I applied your fix.
About the way AbstractCachingProcessingPipeline tries to find cached objects for
shorter cache keys, judging from your description, it makes sense to me to
always try a shorter key to find partially cached contents. I don't think this
should be the concern of the Cache implementation. To pursue this further I
suggest you either create a separate bugzilla entry for this or start a
discussion on [EMAIL PROTECTED]
   

Thanks, Unico and Oliver - I'm barely keeping up with the list these
days.  It looks like this was a result of some of the cache
re-factoring?  If so, hopefully this is the only remaining effect.
 

I think so, the bug seems to have been introduced by this change to the 
Cache interface: 
http://cvs.apache.org/viewcvs.cgi/cocoon-2.1/src/java/org/apache/cocoon/caching/Cache.java?r1=1.2&r2=1.3 

--
Unico


Re: EHCache in its own block?

2004-09-02 Thread Unico Hommes
Upayavira wrote:
Pier Fumagalli wrote:
On 2 Sep 2004, at 08:19, Ugo Cei wrote:
Pier Fumagalli wrote:
I think it might have been because in the code comments of the 
EHStore there's a mention that the store is not persistent across 
JVM restarts, but now it seems they fixed it...
http://ehcache.sourceforge.net/documentation/#mozTocId581616

But see also:
Note: This documentation is being updated for the forthcoming 
release of Version 1.0 of ehcache. The current release, version 0.9, 
contains most of the features documented here, with the main 
exception being persistent DiskStores.

Assuming I interpret this correctly, we could have EHCache as the 
default cache as soon as it reaches 1.0.

I agree... I'd say to separate it out of the ScratchPad so that it's 
easier to test (maybe in 2.1.6), and then when they hit 1.0, we can 
swap it with JCS.

All agree???

Why?
We have an unreleased JCS, so I don't see the problem with an 
unreleased EHCache. And, as JCS doesn't persit disc stores, the fact 
that EHCache can't yet isn't a problem either. To my mind, it is just 
a question of which performs best, and impression is that EHCache 
works better.

So, if one of the major problems with 2.1.5 is poor cache performance 
under load with JCS as default, and EHCache works under that load, I'd 
say switch to EHCache for 2.1.6.

+1, I have nothing but good experiences with EHCache and we have been 
using it in production sites for months now.

--
Unico


Re: Planning 2.1.6

2004-09-02 Thread Unico Hommes
Carsten Ziegeler wrote:
The 2.1.x branch contains imho enough important bug fixes for
a new release: 2.1.6.
What do you think of a release by the end of September?
 

+1
--
Unico


Re: EHCache in its own block?

2004-09-01 Thread Unico Hommes
Pier Fumagalli wrote:
On 1 Sep 2004, at 13:38, Unico Hommes wrote:
Pier Fumagalli wrote:
Could I move the EHCache-based store in its own block (unstable, for 
now, of course)? It seems to be behaving a lot better than JCS, and 
I seriously don't want to build the whole scratchpad just for one 
simple class and jar...

Some time ago I proposed the creation of a cache block [1]  but there 
wasn't much interest at the time. I don't know if eventcache should 
be part of it but at least there is enough stuff in scratchpad 
already to justify such a block (CachingSource, EHCache, 
ExpiresCachingProcessingPipeline).

EHCache is used to implement the Store (used by the cache, but not the 
cache in itself).

I think that the two concerns are different.

Fair enough, it did cross my mind as well. But I also thought it might 
be desirable to minimize the number of blocks. Another option is to put 
it in excalibur store (we should all have commit privileges there as well).

--
Unico


Re: EHCache in its own block?

2004-09-01 Thread Unico Hommes
Ralph Goers wrote:
At 9/1/2004  05:24 AM, you wrote:
Could I move the EHCache-based store in its own block (unstable, for 
now, of course)? It seems to be behaving a lot better than JCS, and I 
seriously don't want to build the whole scratchpad just for one 
simple class and jar...

Shouldn't both JCS and EHCache be in blocks so that only the one being 
used is built?

There should at least be a default when neither is included. And what 
happens if both blocks are included?

--
Unico


Re: Actual implementation of passthrough

2004-08-31 Thread Unico Hommes
Nicola Ken Barozzi wrote:
If the @passthrough attribute is to be put in the pipelines section 
of the mounted sitemap, it seems easy: make the PipelinesNodeBuilder 
set a passthrough variable in the PipelinesNode, and have the 
PipelinesNode tell the last PipelineNode whether or not it has to stop:

public void setChildren(ProcessingNode[] nodes) {
    // Mark the last pipeline so that it can throw a
    // ResourceNotFoundException
    // <-- put an if() here
    ((PipelineNode) nodes[nodes.length - 1]).setLast(true);
    // -->
    super.setChildren(nodes);
}
The point is that it makes sense for the mount node to set it, but I'm 
not sure which is the preferred way in the TreeProcessor to pass that 
info from the MountNodeBuilder to the PipelinesNode.

Suggestions?
What about letting MountNode catch the No pipeline matched request 
exception that is thrown during processor.buildPipeline() and 
processor.process(), and decide whether or not to rethrow it there? 
Would that work?
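Roughly like this (purely illustrative; the exception type, the passthrough
field and the processor field don't necessarily match the real treeprocessor
code):

// hypothetical sketch of MountNode.invoke()
public boolean invoke(Environment env, InvokeContext context) throws Exception {
    try {
        return this.mountedProcessor.process(env); // however the node calls it
    } catch (ResourceNotFoundException e) {        // "no pipeline matched"
        if (this.passthrough) {
            return false; // let the mounting sitemap continue matching
        }
        throw e;
    }
}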

--
Unico


Re: problem in 2.1 branch?

2004-08-18 Thread Unico Hommes
Antonio Gallardo wrote:
Hi:
I am trying to update libs. After doing an svn up and a full rebuild I
got:
java.lang.ClassNotFoundException:
org.apache.cocoon.components.jms.JMSEventListener
Am I missing a jar file?
 

No, I forgot to commit the removal of some obsolete xpatch files. Very 
sorry about that. Should be better now.

--
Unico


Subversion really slow

2004-08-12 Thread Unico Hommes
My experience with subversion since we switched is that the cocoon 
repository is painfully slow, at least using subclipse. I haven't much 
previous experience with subversion but I was under the impression that 
it should be more performant than CVS? So, I am wondering whether this 
is a problem with subclipse, the apache svn repository or subversion 
itself. I also noticed that web-based subversion browsing is equally slow.

Even worse, on my computer at home I haven't been able to check out 
Cocoon at all because it keeps crashing my computer. Both using the 
command line client and subclipse. Anybody have similar experience 
and/or advice?

--
Unico


Re: Subversion really slow

2004-08-12 Thread Unico Hommes
Well at least my applications don't freeze up and I can do other things 
in the meantime. But to give an impression to others of how serious this 
is on the Windows platform: the first mail I sent was just after I started 
'synchronize' in subclipse, and it is still running. I know this is 
probably in vain but I'd almost suggest that infrastructure install 1.1 RC2 
now, which was released yesterday. This is impairing development to the 
point that subversion in its current state is close to unusable for 
windows users.

--
Unico
Carsten Ziegeler wrote:
Yepp, similar problems here. It doesn't matter if I use subclipse or CLI.
My machine simply hangs for 10 minutes or more and all other applications
are blocked. So, in fact this is currently a major pita: I'm waiting
a quarter of my time for svn to do something :( and I'm not even able
to read mails or surf the net!
I read somewhere that performance problems with subclipse will be solved
as soon as svn 1.1 is out and they use it.
Carsten 

 

-Original Message-
From: Unico Hommes [mailto:[EMAIL PROTECTED] 
Sent: Thursday, August 12, 2004 3:43 PM
To: [EMAIL PROTECTED]
Subject: Subversion really slow

My experience with subversion since we switched is that the 
cocoon repository is painfully slow, at least using subclipse. 
I haven't much previous experience with subversion but I was 
under the impression that it should be more performant than 
CVS? So, I am wondering whether this is a problem with 
subclipse, the apache svn repository or subversion itself. I 
also noticed that web-based subversion browsing is equally slow.

Even worse, on my computer at home I haven't been able to 
check out Cocoon at all because it keeps crashing my computer. 
Both using the command line client and subclipse. Anybody 
have similar experience and/or advice?

--
Unico
   

 




Re: Subversion really slow

2004-08-12 Thread Unico Hommes
Adam R. B. Jack wrote:
On Thu, 12 Aug 2004, Unico Hommes wrote:
Well at least my applications don't freeze up and I can do other 
things in the meantime. But to give an impression to others of how 
serious this is on the Windows platform: the first mail I sent was just 
after I started 'synchronize' in subclipse, and it is still running. I 
know this is probably in vain but I'd almost suggest that infrastructure 
install 1.1 RC2 now, which was released yesterday. This is impairing 
development to the point that subversion in its current state is 
close to unusable for windows users.

Can you pinpoint what the problem is? Network IO, CPU, other? I use 
SVN over a modem on W2K, and although I don't checkout Cocoon, I do 
checkout some largish things. I've had all sorts of pain with  
Subclipse (especialyl when refactoring in Eclipse) but even that has 
reached a point I can tolerate it. Basically, I don't see the problems 
you are seeing. Something oddly non-SVN has to be happening, this 
isn't SVN as normal.

I am seeing about 4000 bytes/sec network traffic for the subversion job. 
Since I am on an SDSL internet connection here, that should really be a lot 
faster. CPU usage is almost 0.

Do you use HTTPS to get the data? Have you run SVN from the 
commandline and (manually) accepted the fact that the ASF cert isn't 
quite right (I forget the message)?

Yes, I did that.
BTW: What Eclipse are you using?
3.0 with latest subclipse.
regards
Adam



Re: Subversion really slow

2004-08-12 Thread Unico Hommes
Vadim Gritsenko wrote:
Adam R. B. Jack wrote:
On Thu, 12 Aug 2004, Unico Hommes wrote:
Well at least my applications don't freeze up and I can do other 
things in the meantime. But to give an impression to others of how 
serious this is on the Windows platform: the first mail I sent was just 
after I started 'synchronize' in subclipse, and it is still running. I 
know this is probably in vain but I'd almost suggest that infrastructure 
install 1.1 RC2 now, which was released yesterday. This is impairing 
development to the point that subversion in its current state is 
close to unusable for windows users.

Can you pinpoint what the problem is? Network IO, CPU, other? I use 
SVN over a modem on W2K, and although I don't checkout Cocoon, I do 
checkout some largish things. I've had all sorts of pain with  
Subclipse (especially when refactoring in Eclipse) but even that has 
reached a point I can tolerate it. Basically, I don't see the 
problems you are seeing. Something oddly non-SVN has to be happening, 
this isn't SVN as normal.

Do you use HTTPS to get the data?

Yes, I suppose all Cocoon committers use HTTPS.

Have you run SVN from the commandline and (manually) accepted the 
fact that the ASF cert isn't quite right (I forget the message)?

Yes.
I have a couple of suggestions for Carsten and Unico:
  http://blog.reverycodes.com/archives/28.html

Thanks. The disk operations you mention got me thinking it might be my 
On-Access virus scanner :-) I'll try it out.

--
Unico


Re: Subversion really slow

2004-08-12 Thread Unico Hommes
Vadim Gritsenko wrote:
Ugo Cei wrote:
On OS X Panther:
$ time svn update
At revision 36287.
real    1m6.448s
user    0m2.010s
sys     0m4.010s

On Windows XP, HDD 7200 RPM, last access timestamp disabled:
cocoon-2.2.X $ time svn up
At revision 36287.
real    0m29.125s
user    0m1.632s
sys     0m2.663s
OK, switching off my virus scanner did a lot:
$ time svn up
At revision 36287.
real    0m21.170s
user    0m0.015s
sys     0m0.016s
Thanks!
--
Unico


Re: Subversion really slow

2004-08-12 Thread Unico Hommes
Unico Hommes wrote:
Vadim Gritsenko wrote:
Ugo Cei wrote:
On OS X Panther:
$ time svn update
At revision 36287.
real    1m6.448s
user    0m2.010s
sys     0m4.010s

On Windows XP, HDD 7200 RPM, last access timestamp disabled:
cocoon-2.2.X $ time svn up
At revision 36287.
real    0m29.125s
user    0m1.632s
sys     0m2.663s
OK, switching off my virus scanner did a lot:
$ time svn up
At revision 36287.
real    0m21.170s
user    0m0.015s
sys     0m0.016s
Thanks!

I'm afraid my conclusion was too quick. Here are two runs, one with the 
virus scanner enabled, the other disabled:

$ time svn up
At revision 36290.
real    0m22.492s
user    0m0.015s
sys     0m0.000s
$ time svn up
At revision 36290.
real    0m22.202s
user    0m0.015s
sys     0m0.015s
So I guess it must be a problem with the subclipse plugin. Like Adam I get 
the plugin from loonsoft via eclipse update but unlike Adam it doesn't 
seem to work very well on my machine. I guess I will have to abandon my 
wysiwyg ways for now :-(

--
Unico



Re: Subversion really slow

2004-08-12 Thread Unico Hommes
Leszek Gawron wrote:
Unico Hommes wrote:
My experience with subversion since we switched is that the cocoon 
repository is painfully slow, at least using subclipse. I haven't much 
previous experience with subversion but I was under the impression 
that it should be more performant than CVS? So, I am wondering 
whether this is a problem with subclipse, the apache svn repository 
or subversion itself. I also noticed that web-based subversion browsing 
is equally slow.

Even worse, on my computer at home I haven't been able to checkout 
Cocoon at all because it keeps crashing my computer. Both using the 
command line client and subclipse. Anybody have similar experience 
and/or advice?

--
Unico
In my case the CLI works like a charm. I've given up on subclipse - had 
the same problems you have.

Yes, I've just installed the CLI and it works like you say. Could it be 
the JNI interface that causes it to be so slow?

--
Unico


Re: Upgrading 2.1.6-dev branch

2004-08-10 Thread Unico Hommes
Antonio Gallardo wrote:
Hi:
I started to upgrade the 2.1.6-dev branch and I don't see the point of
upgrading jars there. I feel like I am doing the same work again. My last
commits were just a copy-paste from the 2.2 branch:
http://marc.theaimsgroup.com/?l=xml-cocoon-cvs&m=109214252606989&w=2
http://marc.theaimsgroup.com/?l=xml-cocoon-cvs&m=109214290714483&w=2
http://marc.theaimsgroup.com/?l=xml-cocoon-cvs&m=109214378109796&w=2
AFAIK, the current 2.2-dev is bug fixes + minor enhancements since 2.1.5.1.
So what is the point of updating it? Does it really matter? Will we have
a 2.1.6 release at all?
 

2.2 does contain some non-trivial changes, esp. to the environment 
handling IIRC. There was a discussion [1] a few days ago about the 
roadmap for 2.2. I definitely think there will be releases in the 2.1.x 
branch (at least I hope so considering the proposed roadmap for 2.2) and 
I definitely think porting bugfixes, minor enhancements and jar upgrades 
to 2.1.x is a valuable thing.

1. http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=109136664115145&w=2
--
Unico


Re: CIncludeTransformer + event aware caching

2004-07-20 Thread Unico Hommes
Hi Corin,
I hadn't noticed this before but you seem to be correct. What we need to 
do is make CIncludeTransformer support a different cache validity 
generation method apart from the current expires-based one, similar to 
the one used by TraversableGenerator. The TG builds up an aggregated 
validity object as it progresses through its generation of events. Each 
time it encounters a new Source to include it adds the validity object 
from that source to the aggregated validity. Since EventValidity objects 
do not need a new validity for comparison when determining whether they 
are valid or not, this works out nicely.
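In code, the TG-style approach looks roughly like this (illustrative
fragment, not the actual CIncludeTransformer code; AggregatedValidity is
the existing excalibur class, while includedURIs and resolver are assumed
to come from the surrounding component):

import java.util.Iterator;
import org.apache.excalibur.source.Source;
import org.apache.excalibur.source.impl.validity.AggregatedValidity;

// build one validity object covering every included source
AggregatedValidity validity = new AggregatedValidity();
for (Iterator i = includedURIs.iterator(); i.hasNext();) {
    Source source = resolver.resolveURI((String) i.next());
    try {
        // works for EventValidity just as well as for timestamps
        validity.add(source.getValidity());
    } finally {
        resolver.release(source);
    }
}
// 'validity' then becomes the cached response's validity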

--
Unico
Corin Moss wrote:
Hi Guys,
We're implementing Event Aware caching at the moment, and it's working
well.  We're migrating from the Prism based cache which we've used until
now.  Our only hurdle at the moment is proving to be the
CIncludeTransformer.  Expires based caching is hard wired right into it
(and the DefaultIncludeCacheManager).  Has anyone done any work on
anything like this?  I've not been able to find anything in CVS.  I
suspect that it won't be easy, as the methods that need to be overridden
are split between the implementation, and its helper classes.
Any thoughts?
Thanks,
Corin


 




Re: [Vote] Marking internal classes

2004-07-19 Thread Unico Hommes
Sylvain Wallez wrote:
Unico Hommes wrote:
Reinhard Poetz wrote:

<snip/>
I also propose to separate the cocoon.jar into two parts: One that 
contains all interfaces that can be implemented and all classes that 
can be used or extended and a second jar that contains the rest 
which is only used internally. After migrating to Subversion this can 
be done without breaking the history because this will also require 
some file moving. IMO this step is necessary to separate our blocks 
from Cocoon core, isn't it?

Good idea, but IMO this complements marking the classes, as if you 
load the full Cocoon in an Eclipse project, completion will show all 
classes. A notice at the beginning of the javadoc helps in noticing 
that you use something that's forbidden.

I like this too. There could be more than two parts though. There is 
the API part that contains interfaces used to embed cocoon into a 
certain environment. The SPI interfaces developers implement to 
extend the functionality of Cocoon. The different API implementations 
that we have (CLI, Servlet). And finally the core components.

Can you elaborate on the difference between API and SPI?
This is the way they separate their code into modules at Avalon. I like 
it because it clearly marks the different functions separate parts of 
the code play in an application. Interfaces that function as SPI's are 
for instance the different sitemap component interfaces: Transformer, 
ProcessingPipeline, Reader, Matcher, etc. An example of an API level 
interface is for instance Processor and the different environment 
related interfaces: Request, Context, Environment, etc.

--
Unico


Re: [Vote] Marking internal classes

2004-07-19 Thread Unico Hommes
Reinhard Poetz wrote:
Unico Hommes wrote:
Sylvain Wallez wrote:
Unico Hommes wrote:
Reinhard Poetz wrote:

<snip/>
I also propose to separate the cocoon.jar into two parts: One that 
contains all interfaces that can be implemented and all classes 
that can be used or extended and a second jar that contains the 
rest which is only used internally. After migrating to Subversion 
this can be done without breaking the history because this will 
also require some file moving. IMO this step is necessary to 
separate our blocks from Cocoon core, isn't it?

Good idea, but IMO this complements marking the classes, as if you 
load the full Cocoon in an Eclipse project, completion will show all 
classes. A notice at the beginning of the javadoc helps in noticing 
that you use something that's forbidden.

I like this too. There could be more than two parts though. There 
is the API part that contains interfaces used to embed cocoon into 
a certain environment. The SPI interfaces developers implement to 
extend the functionality of Cocoon. The different API 
implementations that we have (CLI, Servlet). And finally the core 
components.

Can you elaborate on the difference between API and SPI?

This is the way they separate their code into modules at Avalon. I 
like it because it clearly marks the different functions separate 
parts of the code play in an application. Interfaces that function as 
SPI's are for instance the different sitemap component interfaces: 
Transformer, ProcessingPipeline, Reader, Matcher, etc. An example of 
an API level interface is for instance Processor and the different 
environment related interfaces: Request, Context, Environment, etc.

So we have
- cocoon-spi.jar
 Interfaces that function as SPI's are for instance the different 
sitemap
  component interfaces: Transformer, ProcessingPipeline, Reader, 
Matcher, etc.

- cocoon-api.jar
 An example of an API level interface is for instance Processor and 
the different
  environment related interfaces: Request, Context, Environment, etc.

- cocoon-core.jar
 Implementations of component interfaces for (re)use
- cocoon-internal.jar
 everything that should only be used internally
Perhaps we don't need a separate jar for internal classes. Marking them 
with javadoc tags should be enough.
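Something along these lines, say (the tag name is made up, purely for 
illustration):

/**
 * NOTE: this is an internal Cocoon class. It is not part of any
 * public contract and may change or disappear without deprecation.
 *
 * @cocoon.internal (hypothetical marker tag)
 */
public final class SomeInternalHelper {
    // hypothetical internal class, shown only to illustrate the marker
}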

Additionally we can create modules containing the different 
environment implementations:

- servlet-environment.module.jar
- commandline-environment.module.jar
- ...
A module compiles against cocoon-api.jar.
Exactly.
Everything else is part of a block. A block compiles against 
cocoon-core.jar and cocoon-spi.jar.

.. and also cocoon-api.jar but not because it contains classes that 
implement API interfaces.

I'm not sure where to put the Flow implementations, XSP and the 
different templating engines.

Hmm, yes. XSP is currently a block, I say we leave it like that for now. 
The flow implementation could be part of the core but is perhaps better 
manageable as a separate unit. The flow Interpreter interface is part of 
the SPI so by the above reasoning that blocks are SPI implementations 
this would mean it should be a block. OTOH it is definitely different 
from other blocks. Maybe we need another concept for this. Also because 
the core is still quite big. I remember some time ago Stefano talked 
about aspects in reference to XSP.

--
Unico

