On Oct 24, 2006, at 2:11 PM, Kevin Williams wrote:
Jim Marino wrote:
When I first read the thread on this, I thought the DAS service
would be a component extension type (e.g. analogous to an
implementation.java or implementation.ejb) and not a component
implementation type, which would allow for dynamic and eventually
declarative configuration styles.
I am not very familiar with the terminology, so I am not sure what
a "component extension type" is. But I do think we eventually
want "implementation.rdbdas". Wouldn't that be a new
implementation type?
It looks like Amita chose to start with a POJO. I notice the use
of "implementation.java".
Yes, that's it: something like implementation.das or
implementation.rdbdas. Maybe there could just be one
implementation.das, which could be configured to work with a number of
different mediators for different types of data store?
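To make that a bit more concrete, here is a purely hypothetical sketch of what a pluggable mediator contract behind a single implementation.das type might look like. None of these names exist in the code base today; they are only meant to illustrate the idea:

// Hypothetical sketch only: one possible shape for a mediator SPI that a
// single implementation.das type could be configured against. These names
// are illustrative and are not taken from the Tuscany code base.
import commonj.sdo.DataObject;

interface DataStoreMediator {

    // Execute a named, declaratively configured read and return a
    // disconnected graph of DataObjects.
    DataObject executeRead(String commandName, Object... parameters);

    // Push the changes recorded in a modified graph back to the
    // underlying store (JDBC, JPA, Hibernate, an XML store, etc.).
    void applyChanges(DataObject root);
}

The point is that the component-level contract stays the same while the mediator behind it varies per data store.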
I'm playing devil's advocate a bit with DAS, but I think it is
important that we have a clear statement of when it is and is not
appropriate to use. One place to start would be to compare it to
JDBC 4 and JPA.
We can get started with JPA:
* JPA is Java-specific, container-based, built around a connected
data model, and offers a complete O/R mapping for POJOs
* The RDB DAS is a Java implementation of a language-neutral
concept (hopefully specified some day) that is containerless,
assumes a disconnected data model, and provides a simple, implicit
mapping of SDO DataObjects (dynamic or static) to relational
tables.
Anything else?
Hmm, JPA is also "containerless" in that it will work in a J2SE
environment (the JPA spec was separated out to accommodate this), has
a disconnected model (e.g. the "merge" operation has these
semantics), and contains (somewhat) implicit mapping capabilities
for POJOs (when greenfield databases are allowed).
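For example, here is a minimal sketch of that disconnected style in a plain Java SE environment. The persistence unit name "example-pu" and the Customer entity are made up for illustration, and the code assumes a CUSTOMER row with id 1 already exists:

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Id;
import javax.persistence.Persistence;

// Hypothetical entity used only for this sketch.
@Entity
class Customer {
    @Id Long id;
    String name;
    public Customer() {}
    public void setName(String name) { this.name = name; }
}

public class JpaDetachMergeExample {
    public static void main(String[] args) {
        // "example-pu" is an assumed persistence unit name, not from the thread.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("example-pu");

        // Load an entity, then close the EntityManager so the instance is detached.
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();
        Customer c = em.find(Customer.class, 1L);
        em.getTransaction().commit();
        em.close();

        // Modify the detached (disconnected) instance "offline".
        c.setName("New name");

        // Later, reattach it; merge() copies the changes back and they are
        // written to the database on commit.
        EntityManager em2 = emf.createEntityManager();
        em2.getTransaction().begin();
        em2.merge(c);
        em2.getTransaction().commit();
        em2.close();
        emf.close();
    }
}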
I thought the difference would have been something to the effect of:
- DAS provides a declarative veneer over heterogeneous mediators
such as JDBC, JPA, Hibernate, an XML store, etc.
- DAS is used for "remoting" data to clients, some of which may not
be Java-based.
So, I would expect Java component implementations to make heavy use
of "locally disconnected" persistence APIs such as Hibernate and JPA
when manipulating application data that does not flow "as-is" (i.e.
"shape not modified") to remote clients. DAS would be used when
someone wants a simple declarative way to get at data (and
performance is not the primary concern), or when they need to send
data to a remote client that is not necessarily Java. An interesting
case you mentioned would be to allow JPA, Hibernate, or JDBC 4 to
function as the store mediator. I think this would be valuable in
that it avoids having to re-specify mappings (I suspect most
applications have a need for "local" data) and allows for integration
with the persistence infrastructure for things such as cache
invalidation. In this case, DAS would be used to send data pulled
from Hibernate or JPA down to a client such as a web browser, a Swing
app, or a .NET app. Trying to do that today with those technologies
is either not possible or not trivial.
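As a rough illustration of the remoting case, the sort of thing I have in mind is reading a disconnected graph through the RDB DAS and serializing it with SDO's XMLHelper for a non-Java client. I'm writing the DAS calls from memory of the current samples, and the Derby URL and CUSTOMER table are invented, so treat the exact names as assumptions:

import java.io.FileOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.tuscany.das.rdb.Command;
import org.apache.tuscany.das.rdb.DAS;

import commonj.sdo.DataObject;
import commonj.sdo.helper.XMLHelper;

public class DasToXmlSketch {
    public static void main(String[] args) throws Exception {
        // Assumed embedded Derby database with a CUSTOMER table; not from the thread.
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection connection = DriverManager.getConnection("jdbc:derby:dastest");

        // Read a disconnected graph of dynamic DataObjects through the RDB DAS
        // (API names recalled from the DAS samples; treat them as approximate).
        DAS das = DAS.FACTORY.createDAS(connection);
        Command read = das.createCommand("SELECT * FROM CUSTOMER");
        DataObject root = read.executeQuery();

        // Ship the graph "as-is" to a non-Java client (browser, .NET, etc.) as XML.
        XMLHelper.INSTANCE.save(root, "http://example.com/customers", "customers",
                new FileOutputStream("customers.xml"));

        connection.close();
    }
}

The same shape would hold if JPA or Hibernate were plugged in as the mediator instead of raw JDBC; only the read step would change.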
Right now I'm obviously tied up in SCA; otherwise I probably would
have offered to create a prototype of this. As background, the reason
I am bringing this up is that I am getting asked why we are
introducing another persistence API. I'm getting this from Java
people as well as a colleague heavily involved in the .NET world. I
responded to him with the above description, but I still feel it is
not as simple a story as it should be (i.e. it doesn't fit in a sound
bite). What do you think?
Jim