On 31 Mar 2004, at 14:33, Leo Sutic wrote:

1. Singletons

I have repeatedly (both in my own development and based on
questions from users) seen the need for a singleton design
pattern. I.e. when a block instance corresponds to some
physical resource that can't be virtualized. For example,
a database connection (if you're only allowed one) can
be shared among several clients, but you must still have
a singleton that manages that single connection.

For Cocoon, one could argue that such blocks should be
outside of Cocoon, but this is not always possible.

The "Composer" is always a singleton within each block instance: there are never two Composers in a single block instance. You might argue that you want a "framework" singleton (or a block that cannot be instantiated more than once)... I would strongly oppose putting that in the block descriptor, because (as with any Java code) anyone who wants a "VM-wide" (or rather classloader-wide) singleton instance can use the usual "static" in the class. (Note: Instances share the same classloader just as Objects share the same Class.)

For a singleton instance object (a singleton which gets re-created in every new block Instance):

public class SingletonComposer implements Composer {
  private Object singleton; // the instance
  public Object acquire() {
    return singleton;
  }
  public void release() {
  }
  public void dispose() {
  }
}

For a framework singleton object (a singleton which is shared amongst several Instances of the same block):

public class SingletonComposer implements Composer {
  private static Object singleton; // the instance
  public Object acquire() {
    return SingletonComposer.singleton;
  }
  public void release() {
  }
  public void dispose() {
  }
}

The JVM already provides those, let's not over-design.
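To make the "just use static" point concrete, here is a minimal sketch of the plain-Java idiom being alluded to: a classloader-wide singleton via the initialization-on-demand holder. The class and member names are illustrative, not part of any Cocoon API.

```java
// A classloader-wide singleton needs no framework support: the JVM
// guarantees that a class is initialized at most once per classloader.
public class ConnectionHolder {

    private ConnectionHolder() {
        // private constructor: no outside instantiation
    }

    private static class Holder {
        // Created lazily, exactly once per classloader, thread-safe per the JLS.
        static final ConnectionHolder INSTANCE = new ConnectionHolder();
    }

    public static ConnectionHolder getInstance() {
        return Holder.INSTANCE;
    }
}
```

Every caller in the same classloader sees the same instance; a block loaded by a *different* classloader would get its own copy, which is exactly the distinction drawn above between instance-wide and classloader-wide singletons.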

2. Reloading

When I first read about this framework the plan was to
simply drop the block when it was being reloaded, no matter
if it was currently executing a method call, or whatever.

Then after some discussion the idea was to keep the new block
and the old block side by side and gradually phase out the
old block as it was released by clients.

However, I think this will lead to unpredictable behavior.
This type of lazy reloading will cause problems, even more
problems than the usual ones you get when reloading classes.

Can you give some examples of what could go wrong? I can't imagine any...

2a. Stale classes

One problem is when you retain a reference to a class that
is about to become reloaded. For example:

    interface Templates {
        public TransformerHandler getTransformerHandler ();
    }

    interface MyBlock {
        public Templates getTemplates ();
    }

and the impls: TemplatesImpl, MyBlockImpl.

Suppose you get a handle to MyBlock. Then you get the Templates
from it. Then the MyBlockImpl block is reloaded. When you call
getTransformerHandler on the Templates you just retrieved, will
you execute the old code or the new code? Obviously, the old
code.

Will this old code work with the new classes that have been reloaded
in the rest of the system?

Maybe.

Not maybe: yes... The classloading model associates every single class with the classloader that defined it. Old classes will retain their old classloaders and will keep working as if nothing has happened...

The JVM makes it easy to create classloader trees where the same class exists in two different places at the same time, while classes loaded higher up in the tree remain accessible to all of them. This is why interfaces (the public classes exposed by interface blocks) CANNOT be reloaded: they are shared amongst all users of the class.

It's how it works with servlet containers (for example).
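The "same bytecode, different class" behavior described above can be demonstrated directly. The sketch below (all names hypothetical, nothing Cocoon-specific) defines a child-first classloader that re-reads an already-loaded class's bytes and defines it again: the two resulting `Class` objects share a name but are not equal, just like an old and a new version of a reloaded block implementation.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class LoaderDemo {

    /** A child-first loader that redefines Payload from its own copy of the bytes. */
    public static class IsolatingLoader extends ClassLoader {
        public IsolatingLoader(ClassLoader parent) { super(parent); }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (!name.equals("LoaderDemo$Payload")) {
                // Everything else (java.lang.*, interfaces, ...) delegates to the parent,
                // which is why shared interfaces cannot be "reloaded" this way.
                return super.loadClass(name, resolve);
            }
            try (InputStream in = getParent().getResourceAsStream(name.replace('.', '/') + ".class")) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
                byte[] bytes = out.toByteArray();
                return defineClass(name, bytes, 0, bytes.length);
            } catch (Exception e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }

    public static class Payload {}

    public static void main(String[] args) throws Exception {
        Class<?> oldVersion = Payload.class;
        Class<?> newVersion = new IsolatingLoader(LoaderDemo.class.getClassLoader())
                .loadClass("LoaderDemo$Payload");
        // Identical bytecode, different defining loaders: NOT the same class.
        System.out.println(oldVersion == newVersion);                       // prints "false"
        System.out.println(oldVersion.getName().equals(newVersion.getName())); // prints "true"
    }
}
```

Code compiled against the old `Class` keeps running against it, untouched, while new callers resolve the new one; that is the mechanism behind the "old classes retain old classloaders" claim.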

Suppose you have an SSL block. Further suppose that a bug has
been discovered in it and you have to get it patched.

With the side-by-side running, I have *no guarantee* that the new
block is active.

???????? I don't understand what you're talking about. Describe the exact structure of the blocks, because I'm pretty sure that if you follow a correct block design, you'll be able to reload your SSL engine and everything will be absolutely guaranteed to work with no problems.

2b. Multiple blocks stepping on each other's toes

Suppose a block accesses some kind of resource that only accepts
one client, meaning that the block must serialize access to
that resource. This can be done via synchronized wrapper
methods, mutexes or whatever.

But if you suddenly have two instances of the block... Well, you've
pretty much had it then.

You risk not only a crash, but that ***wrong stuff*** is being done
***without a crash***. The resource may be stateful:

    interface AccountCursor {
        public void moveToAccount (String account);
        public void withdraw (int amountOfEuro);
        public void put (int amountOfEuro);
    }

What would happen if two block instances accessed this resource
in parallel? No errors, but a messed-up account DB.

I don't see how this is different from having TWO instances of the same Object... If you have to serialize access to a resource, simply declare it static in your class (as you would in a normal Java application) and synchronize on it...
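A minimal sketch of that suggestion, using the AccountCursor scenario above (the class name and the int-balance stand-in for the real resource are illustrative): the state lives in static fields, so every instance of the block sharing the classloader uses the same resource, and a static lock serializes access to it.

```java
// Serializing access to a single-client resource across ALL instances of a
// class: hold the resource in a static field and synchronize on a static lock.
public class AccountCursorGuard {

    private static final Object LOCK = new Object();
    private static int balance = 0; // stand-in for the stateful resource

    public void withdraw(int amountOfEuro) {
        synchronized (LOCK) {
            balance -= amountOfEuro;
        }
    }

    public void put(int amountOfEuro) {
        synchronized (LOCK) {
            balance += amountOfEuro;
        }
    }

    public static int balance() {
        synchronized (LOCK) {
            return balance;
        }
    }
}
```

Note the caveat that follows from the classloader discussion above: statics are only per-classloader, so this protects two instances created by the *same* loader; a reload that creates a new classloader produces a fresh static, which is exactly the case conceded in the next paragraph.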

When a block reload is triggered by a change in the block classes themselves (i.e. when a new classloader is created), then yes, you will have the same problem: two instances of the same object. But this is no different from reloading a web application in your servlet container.


3. Over-reliance on Administrator Omniscience

Linked to the "maybe" in 2a.

http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=108032323410266&w=2

    Thing, that (by the way) will _NEVER_ happen automagically, but only
    when the administrator decides that it's time to reload a block
    instance?

So, why should we bother with these problems? The administrator
knows when a block can be reloaded, right?

Wrong.

What sane admin will hit the reload button unless he knows
with some certainty what will happen, or that the system will
crash instead of trashing important databases if something goes
wrong?

Frankly, I'll be too scared to use this reloading functionality.

And then what's the point?

I can also answer the question immediately before the quote above:

http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=108032323410266&w=2
    What's the difference between that, and losing the connection
    for a second with a component?

When a connection is lost it is *lost*. It has well-defined behavior. It
has easily understood effects on the code. It has *visible* effects on
the code (Exceptions being thrown everywhere).

With this lazy reloading scheme, or any reloading scheme that doesn't
involve locking, you won't get any exceptions, but you'll get a whole
lot of nasty errors that will not leave any other trace except a trashed
server after a few hours of running bad code.

An administrator reloads a block when he wants to deploy a new version of it (HTMLGenerator version 1.0.1 is replaced by HTMLGenerator version 1.0.2), and don't tell me that you want me to shut down the ENTIRE VM to perform that change...

I'm confident that, if designed correctly, blocks won't have a problem rewiring themselves to newly deployed replacement instances, and I (the administrator) want control over it... Otherwise, what's the whole point of having blocks in the first place? Just write a better SourceResolver and load everything into your servlet container's web-application classloader...

Pier

