> From: [EMAIL PROTECTED]
>
> Berin,
>
> I have one alternative as well. It is a variation on the last option
> you showed. I'm going to change the names to make things a little
> more obvious:
>
> interface TransactionSource
> {
>     Transaction getNextTransaction();
> }
>
> interface TransactionSink
> {
>     Response process( Transaction trans );
> }
>
> All Manipulators and Outputs will implement TransactionSink. The only
> difference is that a Manipulator will need to have a registered
> Manipulator or Output to send its stuff along to.
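
Concretely, I read that as something like the sketch below. The
setSink() method and the class name are my own invention; the proposal
only says the downstream sink is "registered" somehow:

    // Sketch of the registration approach as I read it. setSink() and
    // UpperCaseManipulator are illustrative; the proposal only says a
    // Manipulator has a registered Manipulator or Output after it.
    public class UpperCaseManipulator implements TransactionSink
    {
        private TransactionSink m_next; // a Manipulator or an Output

        // Registration call that chains this Manipulator to whatever
        // follows it in the pipeline.
        public void setSink( TransactionSink next )
        {
            m_next = next;
        }

        public Response process( Transaction trans )
        {
            // Manipulate the Transaction here (the actual mutation
            // methods on Transaction are hypothetical at this point),
            // then hand it down the chain.
            return m_next.process( trans );
        }
    }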
That is a variation on my second option. Essentially, we would have
registered everything behind the scenes. It is doable, but I think we
can avoid registration.

> This requires registration to chain everything together, but I don't
> know how that can be avoided.
>
> One big advantage over your third alternative is that Inputs don't
> need to know if they have a Manipulator or an Output. And
> manipulators are free to return Responses as they see fit.

Inputs don't know and don't care what follows them. All they know is
that they have to return a Transaction object. It is the Job, which
has intimate knowledge of the processing pipeline, that has to know
what follows what.

I also like the idea of having the Response object generated by the
Transaction object itself. That way we can change whether the
Transaction is successful as processing goes on, and only generate the
associated Response object when we really need it.

> One thing I didn't like about Berin's second alternative was that
> Manipulator extends Input, so it has a getNextTransaction() method
> which really doesn't make much sense. I don't really know what this
> method means in this context.

I tend to agree here. I like my third alternative the best.

> I'm afraid I can't seem to find the Morpher code to take a better
> look at. Which cvs module is it in?

It's in the Jakarta Commons Sandbox.

I am not in favor of integrating the Morpher code at this time. One
reason is that, in my experience, we need to start out with a specific
set of requirements and abstract out only when we find similar
requirements elsewhere. Each time we find a similar solution, we need
to weigh the strengths and weaknesses of each implementation.

One of the things I am trying to avoid with the InfoMover component
model is the mistake Cocoon made in forcing an order in which a
component's methods are called. That keeps those components from being
thread safe, and under load the extra component instances do weigh on
a server. It doesn't mean the implementations *have* to be thread
safe, just that they *can* be.

Besides, we need to think of the Job as the pipeline for the
information. Granted, we can sidestep the issue and have our
components act as factories that generate a new
Input/Manipulator/Output from the configuration information passed in.
However, I think it is much more elegant and easier to program if we
have several sub-containers (one for each Job) with isolated component
instances. We can easily do this with InfoMover's design.
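
To make the Job-as-pipeline idea concrete, here is a rough sketch of
what I have in mind. Everything beyond the TransactionSource and
TransactionSink interfaces quoted above (the class name, setInput(),
addSink(), Transaction.getResponse()) is an illustrative guess, not
settled API:

    // Rough sketch only. The Job, not the components, knows the
    // pipeline order, so Inputs and Manipulators stay ignorant of
    // what follows them and *can* be thread safe.
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    public class InfoMoverJob
    {
        private TransactionSource m_input;
        private final List m_sinks = new ArrayList(); // Manipulators, then the Output

        public void setInput( TransactionSource input ) { m_input = input; }
        public void addSink( TransactionSink sink ) { m_sinks.add( sink ); }

        public void run()
        {
            Transaction trans;
            while ( ( trans = m_input.getNextTransaction() ) != null )
            {
                for ( Iterator i = m_sinks.iterator(); i.hasNext(); )
                {
                    // We ignore the per-sink Response here; the
                    // Transaction itself tracks success or failure
                    // as it moves through the pipeline.
                    ( (TransactionSink) i.next() ).process( trans );
                }
                // getNextTransaction() returning null is assumed to
                // signal end of input. The Response is generated only
                // now, when we actually need it.
                Response response = trans.getResponse();
            }
        }
    }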