On Nov 9, 2006, at 2:59 PM, Rick wrote:
I'm not convinced that DAS really requires it, but I'll let them be the judge of that. Just from what I remembered I didn't see the need, so I questioned it. I just think that, when possible, it's best to integrate at the highest level of the stack that suits your needs.
Jim, you lost me and piqued my interest at the same time with the following; could you elaborate a little more:
> Not all extensions need to implement Tuscany SPIs. A Tuscany extension
> could have no dependencies on a Tuscany or SCA API or annotation; it
> could be just a simple POJO. When it needs to access specific runtime
> capabilities, it may need to use injection or lower-level APIs.
Sure. The runtime extension model is based on SCA so I could have a
runtime extension that is just this:
public class Foo {
    public Foo() {
        // do something
    }
}
It's a bit contrived, but you wouldn't even need to write SCDL to get this to work, since there is a Java deployment API (we use this in the bootstrapper to deploy primordial extensions like some of the StAXElementLoaders). Most of the extensions really only use SCA annotations such as @Property and @Resource (or the Tuscany @Autowire).
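For example, a property-configured extension might look something like this (a rough sketch; the annotation package is the one I remember from the SCA Java spec and the property name is made up, so treat it as illustrative rather than exact):

import org.osoa.sca.annotations.Init;
import org.osoa.sca.annotations.Property;

public class FooExtension {

    // injected from the component definition, or defaulted if not configured
    @Property
    protected String jdbcUrl = "jdbc:hsqldb:mem:test";

    // called by the runtime once properties have been injected
    @Init
    public void init() {
        // set up whatever the extension needs, e.g. a connection pool
    }
}

There are no Tuscany SPI types anywhere in that; the runtime just instantiates it and injects the configured values.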
For things like component implementation type extensions, the minimum bar is to implement some interfaces. But for something like a transaction manager, I think it could theoretically be done as just a bunch of POJOs.
Does that make sense or am I just rambling?
Jim
Jim Marino wrote:
On Nov 9, 2006, at 11:34 AM, cr22rc wrote:
My understanding is that you would need to implement classes from tuscany-spi to implement your container. This is surely specific to Tuscany. The work that was started by Luciano was, as I understood it, a POJO that was an SCA component. That should be able to work in any other SCA implementation.
There may be a really good reason you need to go the container route and I just possibly missed it.
Both conceptually and practically, I think we are modeling a component implementation type. Conceptually, the implementation language is a query language (e.g. SQL). Practically, we are likely going to want to do things that are most appropriately done by an extension. For example, at some point I imagine DAS will need to be integrated with the DataSource and transactional infrastructure provided by the runtime.
One thing that has come up that may also have an impact on this is configuration information. Other O/R engines like Hibernate and OpenJPA have a model where there is a factory (in Hibernate it is the SessionFactory) that is fairly heavyweight and is responsible for processing configuration (i.e. mapping) information, hooking into the runtime transactional mechanisms, etc. This factory is generally thread-safe and is shared on a per-application basis. There are individual Sessions (or EntityManagers) which encapsulate a specific unit of work, are therefore lightweight, and are not thread-safe. They maintain object identity, track changes, and are provided by the factory. I don't know much about how DAS works, but if it is similar, then a container extension would be the way to go to accommodate this: there would be some sort of shared DAS object that was instantiated once per application composite and contained all of the configuration.
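In code, the split might look something like this (DasFactory and DasSession are made-up names for illustration; I don't know what the real DAS classes are):

// Thread-safe, one per application composite; holds the parsed
// configuration and is shared by all components in the composite.
public class DasFactory {

    private final Object mappingConfig;

    public DasFactory(Object mappingConfig) {
        this.mappingConfig = mappingConfig;
    }

    // hands out cheap, non-thread-safe units of work
    public DasSession newSession() {
        return new DasSession(mappingConfig);
    }
}

// Lightweight and not thread-safe; maintains object identity and tracks
// changes for a single unit of work.
class DasSession {

    private final Object mappingConfig;

    DasSession(Object mappingConfig) {
        this.mappingConfig = mappingConfig;
    }

    public void applyChanges(Object graph) {
        // flush the accumulated changes to the database
    }
}

The container extension would be what builds the factory once per composite and scopes the sessions to a request or unit of work.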
Kevin or Luciano, is there something analogous to what I outlined
in DAS?
Spring 1.0 took the approach that its container was primarily extended through additional beans, much like the "DAS as a POJO" approach. The problem with this was that the configuration began to introduce implementation details. For example, to expose a Spring bean as a remote service, I had to configure the bean and then create another bean whose implementation type was a Java class provided by the Spring container. This had the effect of coupling the application configuration to the internal implementation of the Spring container, as well as making the XML syntax less strongly typed. In 2.0, Spring fixed this by providing extensible namespace handling, where people could extend the container by implementing a custom XML parser for this configuration information, thereby eliminating the need for the extra bean and the reference to an implementation class. Basically, I view this as analogous to our extension SPI.
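For reference, the Spring 2.0 extension point looks roughly like this (from memory, so treat it as a sketch: the element name, the exporter class, and the property are made up, and the handler still has to be registered in META-INF/spring.handlers):

import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.xml.AbstractSingleBeanDefinitionParser;
import org.springframework.beans.factory.xml.NamespaceHandlerSupport;
import org.w3c.dom.Element;

// Maps a custom <myns:remote-service name="..."/> element onto a bean
// definition, so application config never names a container class.
public class MyNamespaceHandler extends NamespaceHandlerSupport {
    public void init() {
        registerBeanDefinitionParser("remote-service", new RemoteServiceParser());
    }
}

class RemoteServiceParser extends AbstractSingleBeanDefinitionParser {
    protected Class getBeanClass(Element element) {
        // the exporter is an internal detail hidden behind the namespace
        return MyServiceExporter.class;
    }

    protected void doParse(Element element, BeanDefinitionBuilder builder) {
        builder.addPropertyValue("serviceName", element.getAttribute("name"));
    }
}

// Stand-in for whatever container-provided class does the actual exporting.
class MyServiceExporter {
    private String serviceName;

    public void setServiceName(String serviceName) {
        this.serviceName = serviceName;
    }
}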
And that's all I'm asking for: the why.
My overall concern is whether every technology that comes along needs to implement tuscany-spi, and can't achieve it by creating reusable SCA components that can be configured through properties.
Not all extensions need to implement Tuscany SPIs. A Tuscany extension could have no dependencies on a Tuscany or SCA API or annotation; it could be just a simple POJO. When it needs to access specific runtime capabilities, it may need to use injection or lower-level APIs.
I'm not sure if I did a good job explaining this...
Jim
Kevin Williams wrote:
I absolutely want the best possible solution to integration, and this discussion is helping quite a bit. I may misunderstand, but if we develop implementation.das and an associated container, won't this be usable in another SCA implementation?
--Kevin
Rick wrote:
I quickly skimmed the threads on this and I didn't pick up what the advantage was of making this a "container". The only thing I saw was a tighter integration between DAS and SCA. But as I understand it, this is really a tighter integration between DAS and the Tuscany implementation of SCA. In general I think we should be promoting building services on SCA, not Tuscany. When new binding/container extensions are proposed for Tuscany, we should first consider why SCA bindings/components were inadequate and see if there is something the SCA architecture can improve that would have made it possible.
The major downside I can see to the container approach is that it can't be reused in another SCA implementation.
Jim Marino wrote:
Hi Luciano,
Is the DAS container a component or a binding? If it is the former, I would think it should go under container/container.das, otherwise binding/binding.das.
Also, the dependency in the module should be against a version of DAS and not the DAS project directly, so that it is treated as an external dependency like Axis or OpenEJB.
Jim
On Nov 8, 2006, at 5:08 PM, Luciano Resende wrote:
Hi Guys,
Amita has started work to expose DAS as an SCA container, and we are looking at what would be the best way to collaborate on continuing the task. One of the ways we were thinking of was to clean up what is available today and get it into the trunk, so we could share and continue contributing by submitting patches. Is that the way to go? Any other ideas? If that's the case, would the right place to have the container be around /java/sca/containers, and then we would have a client available in das/samples/das.container.client?
Thoughts?
- Luciano Resende