Peter,

thanks so much for this; it's a great starting point for me.


Peter Hunsberger wrote:
On 9/30/05, Stefano Mazzocchi <[EMAIL PROTECTED]> wrote:

There was a moment where I could have been one of the first people to
respond to this thread; you just happened to ask this question when I
was watching the Cocoon mailing list for some reason or other. However, in spite of the fact that I've been debating whether we could run
our software without Cocoon for the last 6 months or so, I honestly
didn't know how to answer your central question: what's next?

I've got a gut feeling for what we need, and some of it resonates with
what you post here, but I've personally grown sort of attached to
Cocoon, so first off, I'd have to answer the subject line with a
resounding "no".

:-)

Well, I never really expected people on this list to say "yeah, it's crap, I moved on", because those who did would not be here to read that message anyway.

The real question is: does the Cocoon community realize that the tectonic plates of web technologies are shifting and that we might find ourselves in a completely new environment really soon? And if so, do we understand it? Do we defend our positions, or do we attack?

Read my comments below.

<snip/>

I do that for my latest web sites, and the more I learn how to drive the
client, the less I feel the need for advanced server frameworks. Is it
just me?


Define advanced?  For the foreseeable future we need:

1) low cost, robust, legacy system interfaces (canonical form is
cheap, distributed clients don't lend themselves to canonical form);

2) high speed, 100% dependable, atomic, global, data transaction
management through to the persistence layer (clients != dependable);

3) back end security (in addition to client authentication).

I'm sure there are others, but no, I still think we need good solid
server side capabilities.

One thing that turns me off about Cocoon *today* is the pretty steep *perceived* learning curve.

If packaged correctly, a naked (no blocks!) Cocoon would need no more than an improved version of Bertrand's SuperSonic Tour. We are getting there. Slowly, painfully, dragging our userbase with us without abrupt transitions... maybe too slowly, I don't know. But I know that revolutions are hard to manage, so I'm not unhappy about the way we are dealing with day-to-day evolution.

But I think we collectively lack a vision of what's coming next, and I feel this is a weakness.

Perhaps it's just that you started out mostly server side and now
you're discovering that the client is "fun"? (And that advanced
clients are even more fun.)

I did that with Linotype... which had more code in JavaScript than in XSLT... and it felt like a "good thing"(tm).

Moz and Cocoon have similar problems: good visions and architectures that are hard to explain, because novices don't hit those walls until really late in the game.

The climb is very steep, but the plateau up there is very flat and very high. We need to build a way to get people up there quickly. Ruby on Rails wrote wizards; we have Eclipse plugins (sort of).

We need to do more in that space and we are. It's great.

But both Moz and Cocoon face a similar problem: how are they going to handle a future of radical change around them? The web ecosystem is very solid and inertial, yet completely non-linear; things like del.icio.us are small butterflies that trigger hurricanes somewhere else.

Is there anything we can do to watch what's happening and draw conclusions about what we need to prepare for? or enable? or avoid? or deprecate? or influence? or push? or polish? or remove?

We are moving forward in many directions:

 1) real blocks
 2) build system
 3) binary distributions
 4) CMS for docs
 5) IDE (with Lepido)
 6) rail-ification and wizards
 7) solidification
 8) separation and identification of current and legacy features

which are all great, but is that enough?

<snip/>

But as a researcher, a scientist, and one who likes to push the edge, I
sense that Cocoon is kinda "done", not as in "finished, passé", but
more as in "been there, done that".

Sure, that makes sense.  But give it a couple of years: there are
many fundamental capability-enabling patterns embodied in Cocoon
(whew):

- ack-nak/controller-response

- translation/transformation

- iterative processing of small increments of work (the true
separation of concerns)

none of these are going to go away.  In 10 or so years you'll be
wondering "did I really understand what I was doing?" and/or thinking
"h*ly shit, that's coooool..."

This is correct.

Cocoon should never be done. IMO, the two big problems of generalized
graph traversal and graph merge/update will always require some
capability to handle orthogonal concerns at run time, because pure REST
can't map the entire universe. There are always ambiguities remaining
to be discovered that can't be named/identified (pick the concept du jour)
beforehand.

Here we go, closing in...

I want to process the work on the hard problems on the server, if
only so that I can use general consensus on whether any new discovery
means anything or not. I want a way to life-cycle the merged opinions
and discoveries at a central location so that someone can review the
results for longitudinal and retrospective discovery.  I want a way to
know when more than one person has a need to work on the same problem.
I want way more, but enough ranting for now...

The great thing about such an outstanding and alive community is that you have many objective judges doing this sort of Darwinian selection of ideas right away. In the academic world, unfortunately, peer review is done by no more than 3 or 4 people... all of them normally savvy, but old and established, with clear interests and agendas. Here it's a much bigger audience with a much more diverse set of agendas.

Then you have the open source model, which filters out the ideas that look good on paper but require more energy than they seem to be worth... this filtering does *NOT* happen in closed environments, where ideas that pass the first phase are pretty much always implemented (unless the developers fail to do so).

The mix of those two filters is radically different in the two environments. I am applying open source principles to the academic community, and it is resulting in incredible disruption: they can't figure out how we can do things so fast.

Fast enough to find Cocoon slowing us down (not the code, but its technological status).

Hence the head-scratching that brought about this thread.

IOW, the life of the researcher (in an institution) should not only
involve client interaction but also institutional management of the
process for the researcher, via some server that can merge and transform
all of the ongoing interactions so that all can benefit. (key words:
merge and transform)

Agreed.

Sure, lots of things to polish and little things to continue to improve,
but I wonder if the action is somewhere else.

How do you feel about this?

What do we really need?

- back end legacy connectors.  I can do that with JBoss;

- 100% 24/7 rock solid transaction management and data persistence.  I
can do that with JBoss and some commercial (or maybe even open source) RDB;

- flexible data translation and extraction.  I can do that with Saxon
(see the sketch after this list);

- involving, low error, client side interaction: I can do that with
AJAX and the rest of the browser stuff that you mention.
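
To make the Saxon point concrete, here is a minimal sketch through the
standard JAXP/TrAX API. The only assumptions are that Saxon is on the
classpath and the file names, which are purely illustrative:

  import javax.xml.transform.*;
  import javax.xml.transform.stream.*;

  public class Translate {
      public static void main(String[] args) throws Exception {
          // Ask JAXP to hand us Saxon as the TrAX implementation
          System.setProperty("javax.xml.transform.TransformerFactory",
                             "net.sf.saxon.TransformerFactoryImpl");
          Transformer t = TransformerFactory.newInstance()
                  .newTransformer(new StreamSource("extract.xsl"));
          // Translate/extract: run data.xml through extract.xsl to stdout
          t.transform(new StreamSource("data.xml"),
                      new StreamResult(System.out));
      }
  }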

So why do I want Cocoon (Java or similar glue is assumed)?  Because I
still need some sort of bridge from the first three things to the
last.

Correct. We ended up writing our own... and it was no more than 50Kb of Java code. Since size matters for us, Cocoon couldn't replace that within such a small footprint... but there were other turn-offs too... for instance, the fact that RDF and XML have a serious impedance mismatch.
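
To show what I mean by impedance mismatch, here's a minimal sketch
(assuming Jena 2 on the classpath; the resource and property are just
illustrative). The same RDF graph has more than one legal XML
serialization, so XPath/XSLT can't rely on the element structure:

  import com.hp.hpl.jena.rdf.model.*;

  public class Mismatch {
      public static void main(String[] args) {
          Model m = ModelFactory.createDefaultModel();
          Resource doc = m.createResource("http://example.org/doc");
          Property creator = m.createProperty(
                  "http://purl.org/dc/elements/1.1/", "creator");
          doc.addProperty(creator, "Stefano");

          // Two legal serializations of the *same* graph:
          m.write(System.out, "RDF/XML");        // striped/nested form
          m.write(System.out, "RDF/XML-ABBREV"); // attribute-abbreviated form
      }
  }

Run it and diff the two outputs: same triples, different trees.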

I have been tasked to find a solution for this problem and now I think I have found one.

I need a really efficient action dispatcher. In spite of the
fact that many might feel that this is one of the weakest parts of
Cocoon, this is what it does better than anything else:

client action -> map to handler -> run business rules -> persist
results -> (begin again)
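
In code, that cycle is roughly the following. This is a hypothetical,
minimal sketch, not Cocoon's actual API; all the names are illustrative:

  import java.util.HashMap;
  import java.util.Map;

  interface Handler {
      Object handle(Map params);       // run business rules
  }

  interface Store {
      void persist(Object result);     // persist results
  }

  class Dispatcher {
      private final Map handlers = new HashMap();

      void register(String action, Handler h) {
          handlers.put(action, h);
      }

      void dispatch(String action, Map params, Store store) {
          // (error handling for unknown actions omitted)
          Handler h = (Handler) handlers.get(action); // map to handler
          Object result = h.handle(params);           // run business rules
          store.persist(result);                      // persist results
      }                                               // (begin again)
  }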

Eventually, I expect some form of generalized ontology traversal
(perhaps the Semantic Web) to handle the second step

Yeah, people will push for that, but I doubt declarative, inference-based processing will ever be as predictable as explicit procedures.

but darned if I
see any real AI stepping up to handle the third step.

:-) [even if I'm sure many startups will try to do that... and fail ;-)]

For the
foreseeable future I need some kind of multi-technology mosh pit for
that process to work in, and currently that looks a lot like Cocoon.

I know. It looks "a lot like", but it's not that... and I want to bridge the two.

So
that tells us where the real work is: object mapping, pattern
recognition, etc. (and the great bugbear: distributed cache management
to keep the results fresh yet responsive).

Hmmm, not sure.

The client is starting to be able to talk to the humans, next we need
a way for the server to truly understand what that interaction means,
requires, and implies in a global sense.

Not only that, but don't worry: I'll come up with something here soon.

--
Stefano.
