I'm a bit confused about how you envision this working, so please take my comments with a grain of salt :)

On Jan 3, 2005, at 1:05 AM, David Jencks wrote:

The current architecture of deployment might be considered to have some limitations. It is not clear how to extend the system to deploy more artifacts such as web services and portlets, let alone artifacts we don't know about yet. There is also a growing web of dependencies between module builders to take care of ejb references, connection factory references, and bits such as security and naming. I also think there may be situations where an incompletely processed artifact does not cause a deployment error. I wonder if there is a simpler more extensible architecture.

What I'm thinking of has two main new features:

1. a chain of builders with only one method, "build"

Do you really mean a chain? The word "chain" implies an order to me, so is there an order you are thinking of? If not, how about "set" or "bag"?
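For reference, this is the shape I picture when you say "only one method" -- boiled down so it needs nothing Geronimo-specific, and every name here is invented:

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    // a sketch only: one method per builder, a collection of builders, a loop
    public class SingleMethodDeployerSketch {

        interface Builder {
            // the single entry point: pull whatever you understand out of the
            // context and ignore the rest
            void build(Object context) throws Exception;
        }

        // a List gives you a true "chain" (fixed order); if the builders are
        // really independent of each other, a Set would say so more honestly
        private final List builders = new ArrayList();

        public void deploy(Object context) throws Exception {
            for (Iterator i = builders.iterator(); i.hasNext();) {
                ((Builder) i.next()).build(context);
            }
        }
    }

If the order in that loop matters, we are back to hidden dependencies between builders, which is the part I'd like to understand.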


2. some sort of "deferred work" object that allows one builder to ask another one, farther down the chain, to do some work for it.

Let's look at how builders interact with each other:

One type of interaction is for a builder to extract some information from the current deployment plan and cache it in the deployment context for use by itself or other builders later. For instance, the connector builder figures out the activation spec metadata and puts it in the context for use when deploying message driven beans.

This type of interaction can be handled by a chain of builders by simply having two builders: the first one adds the shared info to the context, and the second one(s) use it.

If I have this correct, you are suggesting we switch from a push (cache) system to a pull system.
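Just to check my reading of the activation spec example, here is the push/pull shape boiled down to a plain map (the key name and everything else is invented; the real code would go through the deployment context):

    import java.util.HashMap;
    import java.util.Map;

    // toy version of the "cache it in the context" interaction
    public class PushPullSketch {
        public static void main(String[] args) {
            Map context = new HashMap();

            // push: the connector builder computes the activation spec metadata
            // and caches it under a well-known key
            Map activationSpecInfos = new HashMap();
            activationSpecInfos.put("MyQueueMDB", "javax.jms.MessageListener");
            context.put("activationSpecInfos", activationSpecInfos);

            // pull: the ejb builder reads it back when it hits a message driven
            // bean -- which only works if the connector builder already ran
            Map infos = (Map) context.get("activationSpecInfos");
            if (infos == null) {
                throw new IllegalStateException("connector builder has not run yet");
            }
            System.out.println("activation specs visible to the ejb builder: " + infos);
        }
    }

The null check is the part that worries me: with a pull model we need a clear failure when the information was never put there.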


I'm not sure that will work in the case of an EJB ref. An EJB ref has matching rules that really need a holistic view of all EJBs in the deployment unit. Off the top of my head, we have the following precedence rules:

* Exact ejb-link specification of EJB module and EJB name
* No module in ejb-link, but an EJB in the current module has the same name as the ejb-link name
* No module in ejb-link, but only one EJB in the EAR that has the same name as the ejb-link name
* No ejb-link, but only one EJB in the EAR matches the type and interfaces of the ejb-ref


For most ejb-refs, you need to inspect every EJB in the EAR.
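To make that concrete, the resolution ends up looking roughly like the sketch below, and nearly every rule has to walk the full list of EJBs in the EAR. This is only an illustration -- EjbInfo and all the method names are invented, and the real rules (relative module paths in ejb-link, local vs remote interfaces, error handling) are messier:

    import java.util.Iterator;
    import java.util.List;

    // toy version of the precedence rules; every rule scans the whole EAR
    public class EjbRefResolverSketch {

        static class EjbInfo {
            final String module, name, home, remote;
            EjbInfo(String module, String name, String home, String remote) {
                this.module = module; this.name = name;
                this.home = home; this.remote = remote;
            }
        }

        static EjbInfo resolve(List allEjbsInEar, String currentModule,
                               String ejbLink, String home, String remote) {
            if (ejbLink != null && ejbLink.indexOf('#') >= 0) {
                // 1. exact "module#name" ejb-link
                String module = ejbLink.substring(0, ejbLink.indexOf('#'));
                String name = ejbLink.substring(ejbLink.indexOf('#') + 1);
                return findByModuleAndName(allEjbsInEar, module, name);
            }
            if (ejbLink != null) {
                // 2. an EJB in the current module with the ejb-link name wins
                EjbInfo local = findByModuleAndName(allEjbsInEar, currentModule, ejbLink);
                if (local != null) return local;
                // 3. otherwise the ejb-link name must be unique in the whole EAR
                return findUniqueByName(allEjbsInEar, ejbLink);
            }
            // 4. no ejb-link: the ejb-ref interfaces must match exactly one EJB
            return findUniqueByInterfaces(allEjbsInEar, home, remote);
        }

        static EjbInfo findByModuleAndName(List ejbs, String module, String name) {
            for (Iterator i = ejbs.iterator(); i.hasNext();) {
                EjbInfo e = (EjbInfo) i.next();
                if (e.module.equals(module) && e.name.equals(name)) return e;
            }
            return null;
        }

        static EjbInfo findUniqueByName(List ejbs, String name) {
            EjbInfo match = null;
            for (Iterator i = ejbs.iterator(); i.hasNext();) {
                EjbInfo e = (EjbInfo) i.next();
                if (e.name.equals(name)) {
                    if (match != null) return null; // ambiguous
                    match = e;
                }
            }
            return match;
        }

        static EjbInfo findUniqueByInterfaces(List ejbs, String home, String remote) {
            EjbInfo match = null;
            for (Iterator i = ejbs.iterator(); i.hasNext();) {
                EjbInfo e = (EjbInfo) i.next();
                if (e.home.equals(home) && e.remote.equals(remote)) {
                    if (match != null) return null; // ambiguous
                    match = e;
                }
            }
            return match;
        }
    }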

Another type of interaction is for a builder to extract some elements from the spec and/or vendor plan and immediately ask another builder to process it. For instance, ejb and resource references are immediately processed by the naming builder, the vendor security descriptor elements are immediately processed by the security builder, and dependency and gbean elements are processed by the service builder.

Why do we need "builders" for security, naming, and gbeans? Can't we get by with a utility class instead of a full-blown service?


Most of these are used to construct a gbean attribute value that is not used further during deployment. So perhaps the builder needing the work done could construct a "deferred work" object containing the element to be processed and the gbean and attribute name, and add this to the context. Then, for example, the naming builder could look in the context for deferred work objects that it understands, process them, and set the appropriate gbean attribute values. If any deferred work objects were left over unprocessed when it came time to serialize the gbean state, we would know there was a deployment problem.

Ya, you lost me here. Can you be more specific on where I would use "deferred work" and how it would be implemented?
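To make my question concrete, here is the only shape I can picture from your description, boiled down to plain Java with invented names -- please correct me if this is not what you mean:

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    public class DeferredWorkSketch {

        // what one builder leaves behind for another builder to finish later
        static class DeferredWork {
            final Object element;        // e.g. the ejb-ref element from the plan
            final String gbeanName;      // gbean that should receive the result
            final String attributeName;  // attribute to set on that gbean

            DeferredWork(Object element, String gbeanName, String attributeName) {
                this.element = element;
                this.gbeanName = gbeanName;
                this.attributeName = attributeName;
            }
        }

        public static void main(String[] args) {
            List context = new ArrayList();

            // the ejb builder hits an ejb-ref it does not resolve itself and
            // records what it wants done instead of doing it
            context.add(new DeferredWork("<ejb-ref>...</ejb-ref>", "MyEjbGBean", "componentContext"));

            // later, the naming builder sweeps the context for work it understands
            for (Iterator i = context.iterator(); i.hasNext();) {
                DeferredWork work = (DeferredWork) i.next();
                System.out.println("naming builder sets " + work.gbeanName + "." + work.attributeName);
                i.remove();
            }

            // anything still in the list when the gbean state is serialized is a
            // deployment error -- that's the "leftover work" check you describe
            if (!context.isEmpty()) {
                throw new IllegalStateException("unprocessed deferred work: " + context.size());
            }
        }
    }

If that is roughly it, my worry is that the coupling between the ejb builder and the naming builder doesn't go away; it just moves into the contents of the work object and whatever key the naming builder searches for.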


The dependency elements in the plans are used to construct the classpath, which is needed during deployment. Each builder could extract these elements and put them in deferred work objects: in this case the processing would not set a gbean attribute value, but would result in adding to the classpath.

Not sure how that is better than what we have today.
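If I follow, the classpath case is just another flavor of the same object, where "processing" means adding to the classpath rather than setting a gbean attribute -- something like this (again, names invented):

    import java.net.URI;

    // deferred work whose result is a classpath entry instead of a gbean
    // attribute value; whoever understands these adds the path to the
    // deployment classpath and removes them from the context
    public class ClassPathWork {
        final URI path;  // pulled from a dependency element in the plan

        public ClassPathWork(URI path) {
            this.path = path;
        }
    }

The leftover-work check would still apply to these.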

The ear deployer might fit well into this scheme. Instead of calling specific module builders for each type of module it recognizes, it could bundle up the info for each module into a work object that it adds to the context: the current module builders would then look in the context for the work objects they understand.

Again, I'm not sure how that is better, but I think it would be an easy change to make. Instead of having 4 builders, we have a set and ask each one if it can handle a specific module (or give it all the modules and say "do what you can"). I'm just not sure it buys us much. I'm mostly afraid that we will lose the ability to have good error messages, since we don't know who is supposed to be responsible for a module.
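Concretely, what I mean by "ask each one" is the loop below; at least there we still know which module nobody claimed, so the error message survives (interface and names invented):

    import java.util.Iterator;
    import java.util.List;

    // invented names; the point is the canHandle check and the error message
    public class EarDeployerSketch {

        interface ModuleBuilder {
            boolean canHandleModule(Object module);
            void buildModule(Object module) throws Exception;
        }

        private final List builders;

        public EarDeployerSketch(List builders) {
            this.builders = builders;
        }

        public void deploy(List modules) throws Exception {
            for (Iterator m = modules.iterator(); m.hasNext();) {
                Object module = m.next();
                boolean handled = false;
                for (Iterator b = builders.iterator(); b.hasNext();) {
                    ModuleBuilder builder = (ModuleBuilder) b.next();
                    if (builder.canHandleModule(module)) {
                        builder.buildModule(module);
                        handled = true;
                    }
                }
                if (!handled) {
                    // with anonymous work objects in a context we lose this:
                    // here we still know exactly which module nobody claimed
                    throw new Exception("no builder can handle module: " + module);
                }
            }
        }
    }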


Right now I'd characterize these ideas as speculation and wonder if anyone else thinks they are worth pursuing further. I also wonder whether limiting the objects in the deployer chain to a single method has too little structure, and whether giving them several methods to be called in a specified order, as with the current architecture, would provide better organization without limiting useful functionality.

I did consider using a single "doIt" method when working with deployment last time, but the problem is you trade coupling in methods (interface) for complexity in the parameters. For example, all of Java could be rewritten with each class having a single method doIt(Map data). I know that you are not implying that we go that far, but this demonstrates the complexity trade-off. I suggest something in the middle, a few methods with a few mildly complex parameters :)
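To illustrate the trade-off with a toy example (nothing Geronimo-specific, both interfaces invented):

    import java.util.Map;

    // extreme 1: one method, all structure hidden in the parameter -- the caller
    // and callee still have to agree on the map keys, so the coupling does not
    // go away, it just becomes invisible to the compiler and the reader
    interface DoItBuilder {
        void doIt(Map data);
    }

    // extreme 2: several methods called in a specified order -- more interface
    // coupling, but the expectations are visible and checkable
    interface PhasedBuilder {
        void collectDependencies(Object context);
        void addGBeans(Object context);
        void finishModule(Object context);
    }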


-dain


