On Monday, Jul 28, 2003, at 14:23 Europe/Rome, Sylvain Wallez wrote:


Stefano Mazzocchi wrote:

On Friday, Jul 25, 2003, at 11:44 Europe/Rome, [EMAIL PROTECTED] wrote:

"Europe/Rome" : Stefano is ba-a-ack !

Yep, I'm back.


Inspired by an email from Michael Homeijer (http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=103483890724049&w=2), I created a first version of a DAVmap: a sitemap serving as a WebDAV server.

If you extract the zip file to <cocoon-context>/samples/webdav, you can mount http://localhost:8888/samples/webdav/davmap/repo as a webfolder.

I tested it with Windows Explorer on win2k, with XML Spy, with an application built on the Slide WebDAV client library, and with the WebDAVSource (yes, that means you can use Cocoon as its own WebDAV repository :-)


Way cool, Guido!


But the bad thing about WebDAV on Windoze (aka "webfolders") is that it's not a real filesystem: you cannot open a file directly from there, only copy/paste it onto a "real" filesystem. Or did I miss something?

no, you are right, webfolders suuuuuuuuuuuuck! and they are as buggy as hell, especially if you do webdav over https with digital certificates. forget it, you have to use a commercial application (don't remember its name).


As this is again based on Cocoon's source resolving, you could expose your CVS repository via Sylvain's CVSSource, or a Xindice database (given someone implements TraversableSource and maybe ModifiableSource in XMLDBSource). You could even integrate some data from a SQL table, or just proxy another WebDAV server (to leverage its versioning or to plug in some workflow logic).
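To give an idea of what that XMLDBSource work would involve, here is a minimal sketch against the Excalibur TraversableSource contract. Only TraversableSource, Source and SourceException are real Excalibur names; the XmlDbBackend helper and the class itself are hypothetical, and the plain Source methods are left out (which is why the class is abstract):

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Iterator;

    import org.apache.excalibur.source.Source;
    import org.apache.excalibur.source.SourceException;
    import org.apache.excalibur.source.TraversableSource;

    /** Invented minimal backend contract, standing in for the XML:DB driver. */
    interface XmlDbBackend {
        boolean isCollection(String path);
        java.util.List list(String path);   // child names of a collection
    }

    public abstract class XmlDbTraversableSource implements TraversableSource {

        protected final XmlDbBackend db;
        protected final String path;   // e.g. "/db/cocoon/articles"

        public XmlDbTraversableSource(XmlDbBackend db, String path) {
            this.db = db;
            this.path = path;
        }

        // An XML:DB collection maps to a WebDAV "folder".
        public boolean isCollection() {
            return db.isCollection(path);
        }

        public String getName() {
            return path.substring(path.lastIndexOf('/') + 1);
        }

        public Source getParent() throws SourceException {
            return create(path.substring(0, path.lastIndexOf('/')));
        }

        public Source getChild(String name) throws SourceException {
            return create(path + "/" + name);
        }

        public Collection getChildren() throws SourceException {
            Collection children = new ArrayList();
            for (Iterator i = db.list(path).iterator(); i.hasNext();) {
                children.add(create(path + "/" + (String) i.next()));
            }
            return children;
        }

        // Concrete subclasses supply this, plus the plain Source methods
        // (getInputStream, getURI, getValidity, ...) and, for write access,
        // ModifiableSource's getOutputStream/delete.
        protected abstract Source create(String childPath) throws SourceException;
    }

With something like that in place, the DAVmap could crawl Xindice collections exactly as it now crawls the filesystem.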


Quick update about the CVSSource: I did a major rewrite for one of our projects (Gianugo: it's a CMS using dynamically-generated sxw files to edit content!), and it's now an Excalibur Source and supports crawling the version history (only on the main branch), tagging, cache validity, etc. I committed this new version this morning on cocoondev.org.

awesome!!


What's to be done now is handling branches and moving to Eclipse's CVS client library, in order to migrate it into Cocoon's CVS (it currently uses the JCVS client library, which is LGPL'ed).

maybe if you give us some pointers on what needs to be done, somebody might chime in.


<snip/>

I would love to see cocoon become a framework that can glue together everything on the web, from stateless publishing to stateful webdav applications. And yeah, why not: once you can do webdav applications you can do SOAP applications or XML-RPC applications; they are all, more or less, XML-over-HTTP stuff.

Now, is the sitemap ready for this?

No, it needs a new concept. Something I've been calling an "extractor": a component that is able to extract information from a pipeline and make it available to the sitemap for further selection of components.

why? because both WebDAV and SOAP have the terrible attitude of storing pipeline-routing information *inside* the pipeline.

It has been proposed in the past to allow selectors to have access to the pipeline, but I like the idea of "extractors" more.
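Just to make the shape of the thing concrete, here is what such a contract might look like. Nothing below exists in Cocoon: only the Generator interface is real; the Extractor name and its method are pure speculation:

    import java.util.Map;

    import org.apache.cocoon.generation.Generator;

    // Speculative: an extractor behaves like a generator (it still feeds
    // the SAX stream to the rest of the pipeline), but it additionally
    // publishes values pulled out of the payload, so that matchers,
    // selectors and the flow can act on them WITHOUT ever touching the
    // pipeline content directly.
    public interface Extractor extends Generator {

        // Values extracted from the payload, keyed by name: e.g. the
        // requested properties of a WebDAV PROPFIND body, or the
        // <methodName> of an XML-RPC call.
        Map getExtractedValues();
    }

The sitemap could then expose these values much like matcher results, say as {extractor:method}, though that syntax is guesswork too.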


Mmmh... "extractor" or pipeline-aware selectors somewhat imply a single processing pipeline.

not at all, actually, the opposite.


Where does the flow fit in this ?

just like any other pipeline.


What about handling this kind of request with a flowscript that would call an input pipeline (a term introduced by Daniel Fagerstrom) to extract the meat from the incoming stream (e.g. build a bean/hashmap/woody form/whatever from its content), and then call a regular response pipeline after having processed the incoming data?

sure, but it will be up to you to decide how to write your webdav app. that's the point: cocoon should provide the low-level components and you compose them as you like.


Today, cocoon cannot select parts of a pipeline depending on information contained inside the pipeline itself.

So far, we haven't missed this much because selection is never done with data that passes inside the pipeline. But in all xml-rpc-like applications (and SOAP and WebDAV fall into this category), the information needed for processing the request has to be *extracted* from the pipeline flow before being made available.

Note that I said "processing the request", and this impacts all matching, selecting and flow control.

Do we really need extractors? No: we could extend the StreamGenerator into a WebDAV StreamGenerator that extracts information from the stream and places it into the request parameters; then you can do the processing as if it were a normal request.
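As a toy illustration of the extraction itself (leaving out the Cocoon wiring entirely), here is a framework-free SAX handler that pulls the routing information out of a PROPFIND body: the root element name a sitemap would select on, and the requested properties. The class name and what gets extracted are my own choices; inside Cocoon this logic would sit in that WebDAV StreamGenerator and feed the values into the request instead of printing them:

    import java.io.ByteArrayInputStream;
    import java.util.ArrayList;
    import java.util.List;

    import javax.xml.parsers.SAXParserFactory;

    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class PropfindExtractor extends DefaultHandler {

        private String rootElement;            // e.g. "propfind" -> route to the PROPFIND pipeline
        private final List properties = new ArrayList();
        private boolean seenRoot = false;
        private boolean inProp = false;

        public void startElement(String uri, String local, String qName, Attributes atts) {
            if (!seenRoot) {
                seenRoot = true;
                rootElement = local;           // the WebDAV payload type
            } else if ("DAV:".equals(uri) && "prop".equals(local)) {
                inProp = true;
            } else if (inProp) {
                properties.add(local);         // a requested property, e.g. "getlastmodified"
            }
        }

        public void endElement(String uri, String local, String qName) {
            if ("DAV:".equals(uri) && "prop".equals(local)) {
                inProp = false;
            }
        }

        public static void main(String[] args) throws Exception {
            String body =
                "<?xml version='1.0'?>" +
                "<D:propfind xmlns:D='DAV:'><D:prop>" +
                "<D:displayname/><D:getlastmodified/>" +
                "</D:prop></D:propfind>";

            PropfindExtractor extractor = new PropfindExtractor();
            SAXParserFactory factory = SAXParserFactory.newInstance();
            factory.setNamespaceAware(true);
            factory.newSAXParser().parse(new ByteArrayInputStream(body.getBytes()), extractor);

            System.out.println("route on: " + extractor.rootElement);   // propfind
            System.out.println("wanted:   " + extractor.properties);    // [displayname, getlastmodified]
        }
    }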

But just as the aggregator is just a special generator, I was thinking of introducing the concept of an extractor: again just a generator, but one that expects serious payloads containing information that might be needed by the sitemap/flow to process the request. (NOTE: both the sitemap and the flow DO NOT have access to the pipeline content directly, and this should *NOT* change! This is the reason why we should introduce this 'extraction' concept.)

In other words, streamed requests aren't so different from regular requests: it's just that the incoming data is more complex and its decoding is not handled transparently by the servlet engine. Once decoded, the processing model can be the same as usual.

Hmmm, hmmmm, hmmmm, you are triggering second order thinking.... hmmmm... I need a whiteboard... I'll be back soon.


--
Stefano.


