Hunsberger, Peter wrote:

I'm not sure that this would hit the core?  It looks to me more like a
resource that wants to call other resources.  I plug this new thing into the
sitemap, it takes over for a transformer at some point, then it calls other
resources as defined in the sitemap.
Yeah, that's basically what cinclude does. No need to change anything if you follow that pattern.

After working through this discussion it seems that the primary concern is probably: sitemap+flowxslt instead of sitemap+flowscript. The problem now is that (like you?) I can't quite see how the two concepts could be pluggable replacements for each other.
Yes, I definitely can't see why anybody would want to use XSLT for the flow description... but that might well be my personal limitation kicking in, and I fully want to understand your point so that I might learn something that I failed to see before.

Hmm, I meant that I couldn't easily see how to replace a script engine with
a functional engine, not that I couldn't see how to use XSLT to drive
other transformations.
Sure, I got that.

Still, XSLT was *NOT* designed for that type of thing, and my gut feeling is that by twisting its model too much you might end up with something that uses the same XSLT syntax but is not XSLT anymore... so it might simply confuse people and appear as a 'golden-hammer' anti-pattern.

I'd have to disagree.  I think XSLT was designed precisely for this purpose:
sort through some data and determine what data should be created as a
result.
Hmmm, sure. But that is only a very small part of what flow management is about. How would you keep the state across requests in XSLT?

Ah, we are coming back to the XPathSelector!!!

<map:select type="xpath">
  <map:when test="count(data/*) = 0">
    <map:transform src="error.xsl"/>
  </map:when>
  ...
</map:select>

This was proposed a while ago but was voted down since the sitemap routing components shouldn't have a way to access the content of the pipeline.

But sitemap components already have ways to access the content of the
pipeline, it's just that currently you have to expose those contents as
parameters...
Selectors, matchers and actions have no way to get the content of a pipeline (by design!)
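(To make the "by design" part concrete: as far as I remember, the selection contract looks roughly like the sketch below -- check org.apache.cocoon.selection.Selector for the exact definition. It only gets the test expression, the object model and the sitemap parameters; there is no argument carrying the SAX stream flowing through the pipeline.)

// Rough, from-memory sketch of the Cocoon 2 selection contract.
public interface Selector {
    // 'expression' is the sitemap test attribute, 'objectModel' holds the
    // request/response/context objects, 'parameters' are sitemap parameters.
    // Nothing here gives access to the content flowing through the pipeline.
    boolean select(String expression,
                   java.util.Map objectModel,
                   org.apache.avalon.framework.parameters.Parameters parameters);
}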

<snip of example 2/>

Yeah, I think that all you want might be achieved with the XPathSelector.

Well, these are somewhat trivial examples, I'd think that you can see that
much more would be possible inside of a real XSLT...
Ok

The use of XSLT is *clearly* overkill. What you are describing is *NOT* XSLT: even if it uses the same syntax, the semantics are totally different (in fact, you are mixing XSLT semantics with the sitemap semantics... this cannot possibly be easier to learn/use/maintain than just the sitemap semantics)

That doesn't make sense: in that case you'd have to consider the production
of HTML from XML via XSLT a mixing of HTML semantics with XSLT semantics?
No, because that content is *passive*.

Here, you are mixing XSLT and the sitemap, which are both 'active' content: content that has server-side semantics and will not be passed on passively.

So, in fact, the complexity of sitemap+flowscript is repeated in XSLT+sitemap, except that your solution doesn't have a way to separate the two.

NOTE: I've been proposing that the flowscript might be generated out of a Cocoon pipeline. That implies that you might use whatever declarative markup you want to describe your rules (as people do in a workflow environment) and have a stylesheet transform it into flowscript.

That would:

1) give you a way to keep the rules separated from the resources
2) still use a declarative view of your rule engine (your rules can even be extracted out of a database and cached since I don't think they change that often)
3) remove the need for custom sitemap transformation components

This is how I would do it. Using XSLT as a pipeline componentization skeleton sounds golden-hammer all the way.

So if someone needs to make a pipeline decision based on metadata they are
out of luck?
Why? You can plug in your own selectors and matchers for that. That's exactly what they were built for (they can even connect via JDBC to your rule database if you want, using Java rather than some foolish XSLT extension).

Really, our entire model is metadata driven, there is no hard
data at any point outside of metadata selector patterns...
You have some pipeline metadata and you have to make some decisions based on that. Great. Write a matcher or a selector that reacts on that metadata.

There is *no* need to aggregate this into the pipeline and then extract it again from there.

This, instead of mixing stuff in the pipeline, enforces separation and clean design.

I'm still keeping the processing of the metadata separate from the pipeline.
You have to call the new thing to access the metadata and the metadata is
not exposed in the pipeline (as it would be with XPathSelector).
Then I'm lost.

Your bunch of data is always represented as nodes (since it's XML and must be processed that way). Your bunch of metadata is always represented as objects (since it's not XML and should not be processed that way).

But all my metadata is XML and I have good reasons for processing it that
way...
Oh, gosh, golden-hammer all over the place.

XML is a syntax, not a data model. Your data model in the pipeline is SAX events. Do you need to process your metadata as a stream of events? I'm positive you don't.

You are telling me that your metadata is contained in a tree-structured data model, that you'd like to keep it that way, and that you would like to use XPath to query it.

Great, write your own sitemap selector that connects to your rule database, gets an XML representation, creates a DOM, and queries that DOM with an XPath query layer. It's no more than 10 lines of Java code if you know which Avalon components to use.
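(Something along these lines -- just a sketch, not necessarily the Avalon components I have in mind: it implements the Selector contract shown earlier, loads the rules as XML from a source named by a made-up "rules-src" parameter instead of JDBC, caches the DOM, and evaluates the sitemap's test expression as an XPath boolean against it. Plain JAXP is used for the DOM and XPath parts; the class name and parameter name are invented for illustration.)

import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.apache.avalon.framework.parameters.Parameters;
import org.apache.cocoon.selection.Selector;
import org.w3c.dom.Document;

/**
 * Sketch of a rule-driven selector: the metadata stays outside the
 * pipeline, the sitemap only asks yes/no questions about it.
 */
public class MetadataRuleSelector implements Selector {

    // Cached rule document. Lazy init here is not synchronized; a real
    // Cocoon component would also handle thread-safety and invalidation.
    private Document rules;

    public boolean select(String expression, Map objectModel, Parameters parameters) {
        try {
            if (rules == null) {
                // Hypothetical "rules-src" parameter; a real version could
                // just as well pull the XML out of a database via JDBC.
                String src = parameters.getParameter("rules-src", "rules.xml");
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true);
                rules = dbf.newDocumentBuilder().parse(src);
            }
            // The sitemap's test expression is evaluated against the rule
            // document, not against the pipeline content.
            XPath xpath = XPathFactory.newInstance().newXPath();
            Boolean result =
                (Boolean) xpath.evaluate(expression, rules, XPathConstants.BOOLEAN);
            return result.booleanValue();
        } catch (Exception e) {
            return false; // a real component would log this properly
        }
    }
}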

I tend to think that I don't need a pipeline for things that are not stream-based. But that's me.

The use of the XPathSelector in the sitemap stops us from being able to separate flow from resource production, exactly because it merges two different datasets (pipeline data and pipeline metadata).

To me it's definitely not a cleaner model; it's an aggregation of concerns.

Well, that's why I see this new thing as being a separate component.  The
selection of resources via this new thing isn't any different from an action
spitting out a parameter and the sitemap acting on that.
Yeah, exactly. That's *exactly* why I hate actions and why I want flowscript to show people a cleaner way to achieve the same functionality without requiring this massive hack copied from other webapp frameworks (Turbine, that is).
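(Just so we're talking about the same pattern: the action contract is roughly the sketch below -- again from memory, and the class name, the computePhase() helper and the "phase" key are made up. The action computes a Map behind the sitemap's back and the sitemap then acts on {phase}; that indirection is exactly what I'd like flowscript to replace.)

import java.util.Collections;
import java.util.Map;
import org.apache.avalon.framework.parameters.Parameters;
import org.apache.cocoon.acting.Action;
import org.apache.cocoon.environment.Redirector;
import org.apache.cocoon.environment.SourceResolver;

/**
 * Sketch of the action pattern under discussion: the action looks at
 * whatever it likes (data, metadata, phase of the moon) and "spits out"
 * a parameter; the sitemap only ever sees the returned Map.
 */
public class MoonPhaseAction implements Action {

    public Map act(Redirector redirector, SourceResolver resolver,
                   Map objectModel, String source, Parameters parameters)
            throws Exception {
        String phase = computePhase(); // hypothetical helper
        // Entries of the returned Map become {phase} substitutions in the sitemap.
        return Collections.singletonMap("phase", phase);
    }

    private String computePhase() {
        return "full"; // placeholder logic for the sketch
    }
}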

Whether the action
looked at data, metadata or the phase of the moon to spit out the parameter
is not relevant to the pipeline.  Similarly, how this new component arrives
at its determination isn't relevant to the pipeline...
Cool.

So, basically, you are saying that everything you want can be obtained by writing a sitemap component (you tend to think of a transformer; I suggest you take a deeper look at selectors).

If that is the case, great, we don't need to do anything. The flowscript is there if you want it, but I won't force people to use it (or not to use actions). It's entirely up to you.

I was only concerned about your functional requirements needing sitemap changes, but if that is not the case, I'm happy, and in case you donate your new component to us, we might want to distribute it with Cocoon because others might find it interesting.

What do you think?

--
Stefano Mazzocchi <[EMAIL PROTECTED]>
--------------------------------------------------------------------


