Jeremy,
I won't comment on your attacks at the bottom of this email. I was
hoping for a more constructive technical discussion. I added my answers
and comments on the specific technical issues inline.
Jeremy Boynes wrote:
On Jul 5, 2006, at 12:43 PM, Jean-Sebastien Delfino wrote:
My proposal is not to merge M1 and the core2 sandbox. I am proposing
to start a fresh code stream and build the runtime through baby
steps. We may be able to reuse some pieces of existing code, but what is
more important is to engage our community in this exercise and integrate
the new ideas that will emerge from it.
I don't believe the two issues are necessarily coupled. Quite a few
members of the community are engaged on the sandbox code already and
we could work with you to improve that rather than having to throw
everything out and start over with all new ideas.
Here's an example where I'm struggling with both M1 and the core2
sandbox, and where I think we can do better if we start with a fresh
stream: our (recursive) assembly metadata model.
- M1 does not implement the recursive composition model and would
require significant changes to support it. Core2 is an attempt to
implement it, but I'm not sure it's quite right, and I also think
it can be simplified.
It would really help if you could come up with concrete areas where it
is not right or where it could be simplified - for example, end user
scenarios that are not supported.
- M1 used Lists to represent relationships; Core2 uses Maps. I think
M1 was better since it preserved the order of the relationships.
There's nothing I remember in the assembly spec where order matters.
On the other hand there are many areas where things are keyed by a
name which has to be unique. This seems like a natural mapping (sorry)
to a Map. In M1 I started to move toward simple map structures but you
replaced it with what seemed like a fairly complicated specialized
List implementation that sent notifications that updated a Map anyway.
Given the desire for simplification, are there any end-user scenarios
that require ordering to be preserved and that can't be supported with
a basic HashMap or LinkedHashMap?
As an administrator, I'll want my administration tool to display
components in the order I declared them in SCDL. I'll also want a
configuration or admin tool that loads and saves modified SCDL to write
things back in their original order, not in a random order. As an
application developer, I'd like an SCA debugging tool to show me my
components in a list in the right order as well. Also, if I implement
the model defined by the XML schemas in the spec using any of the
DataBinding technologies out there, I'll end up with Lists, not Maps.
Finally, even if we decided to use Maps in some cases to provide keyed
access to some elements of the model, we'd have to do it differently:
for example, a single Map containing all the components, references and
services in a composite (according to the spec they cannot have the same
names), instead of the three Maps you have in Core2.
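To make that concrete, here's a rough sketch in Java of what I mean. The
names are hypothetical, this is not the core2 or M1 API, just an
illustration of a single order-preserving index keyed by name:

import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: one index for the named parts of a composite
// (components, services and references share a namespace per the spec),
// instead of three separate Maps, with SCDL declaration order preserved.
public class CompositeParts<T> {

    private final Map<String, T> partsByName = new LinkedHashMap<String, T>();

    public void add(String name, T part) {
        if (partsByName.put(name, part) != null) {
            throw new IllegalArgumentException("Duplicate name: " + name);
        }
    }

    public T get(String name) {
        return partsByName.get(name);
    }

    // Iteration order is the order in which parts were declared in SCDL.
    public Collection<T> inDeclarationOrder() {
        return partsByName.values();
    }
}

A plain List plus a keyed lookup would work just as well; the point is
that declaration order and a single namespace for names both need to be
preserved.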
- Core2 only defines implementation classes for the model, I think we
should have interfaces + default implementation classes instead, like
we had in M1, to allow for alternate implementations of the model.
One of the most complex things with the M1 model was all the
interfaces involved, the need to pass factory implementations around,
the number of different factories involved (one per extension
implementation) and the potential issues with code assuming its
implementation of the factory was the one used.
The core2 model uses concrete classes which are really just data
holders - there's no behaviour in them to be abstracted through the
interface. This gives a much simpler programming model for extensions
using the model.
Do you have any scenarios that would require different implementations
of the model? Are they so different that they might as well just use
different classes?
I don't think that having just implementation classes is much simpler.
If you interact with the model SPI, reading interfaces is simpler IMO
and more suitable for inclusion in a specification document... and it
allows multiple implementations of these interfaces. Also, we have to
support the whole lifecycle of an SCA application (development,
deploy/install, runtime, admin, etc.) and I'd like to allow some
flexibility for different tools, running at different times, to use
different implementations of the assembly model interfaces.
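To sketch what I mean (hypothetical names, not the existing M1 or core2
SPI), the interface-plus-default-implementation style would look roughly
like this:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the interface + default implementation style.
// The SPI exposes only the interfaces; tools running at different times
// (development, deployment, admin) can plug in their own implementations.
interface Service {
    String getName();
}

interface Component {
    String getName();
    List<Service> getServices();
}

class DefaultComponent implements Component {

    private final String name;
    private final List<Service> services = new ArrayList<Service>();

    DefaultComponent(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public List<Service> getServices() {
        return services;
    }
}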
- Overuse of Java Generics breaks flexibility in some cases; for
example, Component<I extends Implementation> will force you to
recreate an instance of Component to swap its implementation for an
implementation of a different type (and lose all the wires going
in/out of the component).
There may be cases where generics are overkill but I don't think
that really requires us to throw out the model. There are other cases
where the use of wildcards would be appropriate; for example, in the
scenario you give here you could just create a
Component<Implementation> to allow different types of implementation
to be used.
Then instead of:

Component<Implementation> {
    Implementation getImplementation();
}

I think we can just do:

Component {
    Implementation getImplementation();
}

What we have now in core2 is overkill IMO.
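Here's a hedged sketch of that non-generic form (again hypothetical
names, not core2 code), showing that a tool can then swap a component's
implementation in place without recreating the component or its wires:

// Hypothetical sketch: with a non-generic Component, the implementation
// can be replaced in place (e.g. swapping a Java implementation for a
// BPEL one) without recreating the component and losing its wires.
interface Implementation {
}

class JavaImplementation implements Implementation {
}

class BPELImplementation implements Implementation {
}

class Component {

    private Implementation implementation;

    Implementation getImplementation() {
        return implementation;
    }

    void setImplementation(Implementation implementation) {
        this.implementation = implementation;
    }
}

With Component<I extends Implementation>, the equivalent swap forces you
to create a new Component<BPELImplementation> and rewire it.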
- Core2 defines ReferenceDefinitions (without bindings) and
BoundReferenceDefinitions (with bindings). IMO there are Reference
types and Reference instances and both can have bindings.
I'm with you here - we need to refactor the way bindings are handled
for both Service and Reference. One thing the sandbox model is missing
is the ability to associate multiple bindings with a single
Service/Reference.
My main point is not about supporting multiple bindings on a Service or
Reference. I think that is secondary, and the interfaces I put in my
sandbox to support a design discussion don't even have it. My point is
that Services, References, and their instantiation by Components are at
the foundation of the SCA assembly model... and therefore need to be
modeled correctly. I'm proposing a different design, illustrated by the
interfaces I checked in.
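To illustrate the shape I have in mind (hypothetical names, just an
illustration, not the interfaces I checked in), bindings would live on
the base Reference, so both the reference type declared on a
componentType and the reference instance configured on a component can
carry them:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: bindings on the base Reference, so both the
// reference type (declared on a componentType) and the reference
// instance (configured on a component) can carry bindings.
interface Binding {
}

class Reference {

    private final String name;
    private final List<Binding> bindings = new ArrayList<Binding>();

    Reference(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }

    List<Binding> getBindings() {
        return bindings;
    }
}

// The instantiation of a reference by a component; it can add to or
// override the bindings declared on the componentType reference.
class ComponentReference extends Reference {

    ComponentReference(String name) {
        super(name);
    }
}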
- I think that Remotable should be on Interface and not Service.
I agree Service is wrong and that it should be on ServiceContract.
Thanks for catching it.
- Scope should be defined in the Java component implementation,
separate from the core model.
Scope is not a Java specific concept.
Interaction scope (stateless vs. stateful) can apply to any
ServiceContract.
Container scope is the contract between an implementation and a
ScopeContainer and applies to any implementation type that can support
stateful interactions. This would include JavaScript, Groovy, C++, ...
I think that means that support for state management (which is what
scope is configuring) belongs in the core with the configuration
metadata supplied by the implementation type.
I don't think that's quite right. First, interaction scopes are defined
on interfaces, not service contracts. Also, they control whether an
interface is conversational or not, independently of any state management.
Anyway, I was talking about a different scope: the implementation scope
defined in the Java C&I spec, which governs the lifecycle of Java
component implementation instances. I think the definition and
implementation of lifecycle management will vary greatly depending on
the component implementation type; for example, Java component
implementations and BPEL component implementations typically deal with
this in very different ways. Therefore, in my view, state/lifecycle
management should be left to the component implementation contributions
and not belong in the core.
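As a sketch of that separation (hypothetical names and scope values, not
an existing extension), the implementation scope would be metadata owned
by the Java extension rather than by the core model:

// Hypothetical sketch: the implementation scope is metadata owned by the
// Java implementation extension, not by the core assembly model. A BPEL
// extension would manage instance lifecycle its own way and simply would
// not carry this attribute. The scope values here are illustrative only.
enum JavaScope {
    STATELESS, REQUEST, CONVERSATION, COMPOSITE
}

class JavaImplementation {

    private final Class<?> implementationClass;
    private final JavaScope scope;

    JavaImplementation(Class<?> implementationClass, JavaScope scope) {
        this.implementationClass = implementationClass;
        this.scope = scope;
    }

    Class<?> getImplementationClass() {
        return implementationClass;
    }

    JavaScope getScope() {
        return scope;
    }
}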
- Java and WSDL interfaces should be defined separately from the core
model; we need to support multiple interface definition languages
through plugins, not in the core.
The model supports generic IDL through the use of ServiceContract.
Java and WSDL are two forms of IDL that are mandated by the
specification. This is really just a question of where those
implementations are packaged and again I don't think this warrants a
rewrite.
Packaging issues are important and often hide bigger dependency/coupling
problems. I think we should package the support for Java and WSDL
interfaces separately from the core, to prevent any coupling between the
two and to give the people who will have to support new interface
definition languages a better template to follow.
Individual issues do not warrant a rewrite. What about the sum of many
issues?
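To sketch the kind of separation I have in mind (hypothetical names, not
the current model), the core would only know an abstract Interface, with
the Java and WSDL flavors living in their own extension modules:

// Hypothetical sketch: the core model only knows an abstract Interface;
// Java and WSDL flavors live in separate extension modules, which is the
// template a new interface definition language extension would follow.
abstract class Interface {

    private boolean remotable;

    boolean isRemotable() {
        return remotable;
    }

    void setRemotable(boolean remotable) {
        this.remotable = remotable;
    }
}

// Packaged in a java-interface extension module, not in the core.
class JavaInterface extends Interface {

    private Class<?> javaClass;

    Class<?> getJavaClass() {
        return javaClass;
    }

    void setJavaClass(Class<?> javaClass) {
        this.javaClass = javaClass;
    }
}

// Packaged in a wsdl-interface extension module, not in the core.
class WSDLInterface extends Interface {

    private javax.xml.namespace.QName portType;

    javax.xml.namespace.QName getPortType() {
        return portType;
    }

    void setPortType(javax.xml.namespace.QName portType) {
        this.portType = portType;
    }
}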
- Implementation should extend ComponentType IMO instead of pointing
to it, and we may even be able to simplify and just remove
Implementation. Also I am not sure why we need to distinguish between
AtomicImplementation and CompositeImplementation.
One of the problems the assembly spec has is that it is difficult to
do top-down design because you cannot link a component to a
componentType without having an implementation. I agree this is an
area that we (and the spec) need to sort out.
IMO a component is associated with one componentType but may have
multiple implementations so I don't think they are quite the same
thing or that either can be removed.
AtomicImplementation is a marker for implementations that cannot have
children.
In my view a component has a type. The ComponentType is either abstract
(just defining the types of services offered, references used, and
properties that can be configured), or concrete. A POJO component
implementation is a concrete ComponentType.
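A hedged sketch of what I mean (hypothetical names, not a proposal for
the actual SPI): a concrete implementation extends ComponentType instead
of pointing at one:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: an abstract ComponentType only declares services,
// references and properties; a POJO implementation is a concrete
// ComponentType that adds the implementation class, instead of a
// separate Implementation object pointing at a ComponentType.
class Service { }
class Reference { }
class Property { }

class ComponentType {

    private final List<Service> services = new ArrayList<Service>();
    private final List<Reference> references = new ArrayList<Reference>();
    private final List<Property> properties = new ArrayList<Property>();

    List<Service> getServices() { return services; }
    List<Reference> getReferences() { return references; }
    List<Property> getProperties() { return properties; }
}

class JavaImplementation extends ComponentType {

    private Class<?> implementationClass;

    Class<?> getImplementationClass() { return implementationClass; }
    void setImplementationClass(Class<?> clazz) { this.implementationClass = clazz; }
}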
- Support for Composite Includes is missing; this is a significant
part of the recursive composition model (half of it, one of the two
ways to nest composites).
It's not really half - it's really just a very small part of the
model, comparable to the <import> element we used to support in M1.
Again, I don't see why we need to rewrite the model to add this in.
Quite the opposite: you've said you've been looking for a way to engage
and this could be it.
I disagree. Includes are a very significant part of the assembly model
(the other half is the ability to use a composite as a component
implementation). Two examples (see the sketch below):
- An included composite is the equivalent of a module fragment in the
0.9 spec. This concept is key to allowing a team to work on various
pieces of an application, split into multiple composites that are
included in a composite representing the application.
- When composites (formerly subsystems) get deployed to an SCA system,
they are actually included in that system, rather than being used as
component implementations.
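Here's a minimal sketch of how includes fit in the model (hypothetical
names, just to illustrate the two nesting mechanisms):

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a Composite nests other composites in two ways.
// An included composite is merged into the including one (the module
// fragment / system deployment case); a composite used as a component
// implementation stays a black box behind the component's services.
class Component { }

class Composite {

    private final List<Component> components = new ArrayList<Component>();
    private final List<Composite> includes = new ArrayList<Composite>();

    List<Component> getComponents() { return components; }

    // Composites whose contents are merged into this composite.
    List<Composite> getIncludes() { return includes; }
}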
This list is not exhaustive... Another idea would be to externalize
support for Composites in a separate plugin that is not part of the
core service model (since there may be other ways to compose services
in addition to an SCA composite, with Spring or other similar
programming models). I'd like to know what people think about that.
Having the composite implementation type in the core does not preclude
that - again, it's just packaging for ease-of-use.
I think it's more significant than packaging. Are you saying that we
could move the code supporting composites out of core2 without breaking
the code in core2?
You seem to have the impression that the core is sealed and that we
only support things that are included within it. That is not the case.
The only place we need things bundled with the core is in the
bootstrap phase - specifically, we need them bundled with the
primordial deployer. The actual runtime is created from the SCDL
passed to that primordial deployer; it can contain any mix of components
and need not contain any of the infrastructure used to boot it.
I just checked a set of new interfaces into
sandbox/sebastien/m2-design/model.spi. This is just an initial strawman
to trigger a constructive discussion and ideas on how to best represent
the recursive model. I also need help defining a scenario (not unit test
cases, but an end-to-end sample application) to help put the recursive
composition model in perspective and make sure we all understand it the
same way.
I am troubled that you have chosen to start on your own codebase at a
time when most of us have been trying to have a constructive discussion
on this list. Based on the approach you proposed in your original
email I would have hoped that we could have started with your end-user
scenarios and had a chance to explore how they could be supported by
M1, the sandbox, or some other code before starting another codebase.
I'm disappointed that, having started this very thread nearly a week
ago with the premise of community, your first response on it was to
commit a large chunk of independent code rather than follow up with
any of the other people who have already contributed to the discussion.
I think the discussion led to compromise and consensus on the
scenario-driven approach that you proposed. As shown above and in
other recent threads, there's plenty of room for improvements and/or
new features in our current code and a willingness to discuss them,
albeit in terms of technical merit rather than personal opinion. I
hope you can find a way to join in rather than forge your own path.
--
Jeremy
--
Jean-Sebastien
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]