Fuhwei Lwo wrote:
Hi Sebastien,
Here is my understanding of the requirements for getting rid of import.sdo and
switching to contribution -
1) A contribution will be created by the contribution processor for each
application. - Contribution processors are already in place for JAR and file system packaging.
Yes
2) The contribution processor will create an SDO scope (HelperContext instance)
to associate with the contribution. Currently calling
SDOUtil.createHelperContext() is enough.
That's what I was poking at in my previous email. Creating our own
context, different from the default SDO context, forces SCA to introduce
a new API to get to that context, and forces all SDO users to use that
new API. So I'm wondering if it wouldn't be better to play more nicely
with SDO, and have the SCA runtime just populate the default SDO context
in use in a particular application in the server environment.
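
To make the trade-off concrete, here is a rough sketch of the two options.
The class name, the SCAContextAccess API and the SDOUtil package name are
my assumptions for illustration; only SDOUtil.createHelperContext() and
HelperProvider.getDefaultContext() come from this thread.

import commonj.sdo.helper.HelperContext;
import commonj.sdo.impl.HelperProvider;
import org.apache.tuscany.sdo.util.SDOUtil;

public class ScopeOptions {

    public void contrast() {
        // Option A: a private per-contribution scope. The SCA runtime would
        // need to expose a new API (hypothetical here) so application code
        // can reach the context it created with SDOUtil.createHelperContext().
        HelperContext contributionScope = SDOUtil.createHelperContext();
        // HelperContext ctx = SCAContextAccess.getCurrentHelperContext();  // hypothetical new SCA API

        // Option B: have the SCA runtime populate the default SDO context
        // for the application, so plain SDO code keeps working unchanged,
        // inside or outside of SCA.
        HelperContext defaultScope = HelperProvider.getDefaultContext();
    }
}

With option B, plain SDO code in a servlet or JSP and SCA component code
would both resolve the same metadata without touching an SCA-specific API.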
3) Tuscany SCA needs to provide a way for the application to get hold of the
HelperContext associated with the contribution in step 2 above. Currently
the application is forced to use the SDO API - HelperProvider.getDefaultContext() -
which uses the TCCL.
I'm not getting this one :) Is it bad for an SDO user to be "forced to"
use an SDO API to get an SDO context? It seems better to me than forcing
an SDO user to use an SCA API, simply because his code may be used at
some point in an SCA environment... and then his code wouldn't work in a
JSP, a servlet, or any other non-SCA environment...
If the fact that HelperProvider.getDefaultContext() is using the TCCL to
find the correct SDO context is a problem, then we just need to fix
that. We went through the same discussion with SCA CompositeContext
about a year ago. Associating context with the TCCL is not always
convenient in a server environment, and it may be better to associate
context with the current thread (using a ThreadLocal or an
InheritableThreadLocal, for example). This is what we did for SCA
CompositeContext. Maybe SDO could provide a way to associate an SDO
context with the current thread instead of, or in addition to,
associating the SDO context with the TCCL?
This would seem a good thing to have anyway since these contexts are not
thread safe as far as I know :)
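
To illustrate, here is a minimal sketch of what a thread-based association
could look like. This is not an existing SDO API; the ThreadHelperContext
class and its methods are made up for the sake of discussion.

import commonj.sdo.helper.HelperContext;
import commonj.sdo.impl.HelperProvider;

// Hypothetical helper associating an SDO HelperContext with the current
// thread instead of (or in addition to) the thread context classloader.
public final class ThreadHelperContext {

    // An InheritableThreadLocal lets threads spawned by the application
    // see the same context as their parent thread.
    private static final InheritableThreadLocal<HelperContext> CURRENT =
            new InheritableThreadLocal<HelperContext>();

    private ThreadHelperContext() {
    }

    // Called by the hosting runtime before dispatching application code
    // on this thread.
    public static void setContext(HelperContext context) {
        CURRENT.set(context);
    }

    // Called by application or databinding code; falls back to the
    // TCCL-based default context when no thread association was made.
    public static HelperContext getContext() {
        HelperContext context = CURRENT.get();
        return context != null ? context : HelperProvider.getDefaultContext();
    }

    // Called by the hosting runtime after the dispatch completes, so a
    // pooled thread doesn't keep pointing at another application's context.
    public static void clear() {
        CURRENT.remove();
    }
}

Clearing the association after each dispatch matters in a server
environment, otherwise a pooled thread would leak one application's
context into the next request it serves.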
Thoughts?
I am not sure my understanding above is correct so please bear with me. Based
on my understanding above, currently there is no additional requirement from
SDO.
I wouldn't reach that conclusion so fast :) I think that there is a
requirement to provide a way to get to an SDO context independently of
the TCCL, if people don't like that association.
In the future, if we decide to support contribution import/export, that may
require SDO scope hierarchy support. But I think we should start using
contributions and getting rid of import.sdo as the first step.
Yes I'd like to get rid of import.sdo, as I indicated earlier in this
discussion thread.
I would like to support contribution import/export at some point. I'm
not sure that we'll be able to use SDO scope hierarchy support as an SCA
contribution import does not necessarily import the whole scope of
another SCA contribution, but I guess we'll know more when we start to
look at the details.
What do you think? Thanks for your reply.
Fuhwei Lwo
Jean-Sebastien Delfino <[EMAIL PROTECTED]> wrote: Fuhwei Lwo wrote:
Hi,
In my composite, I defined <import.sdo> in the default.scdl file to prompt the SCA
container to register my data types using the SDO databinding. The question I have
is what API I should use in my service implementation code to obtain the
registered data types. If I have two composites that are using two different
data type definitions but with the same namespace URI, I definitely don't want
to obtain the wrong data type definition. Thanks for your help.
Below is the previous message from Raymond Feng about associating databinding
type system context/scope with a composite. I think this is related to my
question but from a Tuscany SCA development perspective.
How to associate some context with a composite?
http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200702.mbox/[EMAIL PROTECTED]
Hi,
The short (and not perfect) answer to your question is: with the current
code in trunk, use:
commonj.sdo.impl.HelperProvider.getDefaultContext()
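
For example, something along these lines in your service implementation;
the namespace URI and type name below are made up:

import commonj.sdo.DataObject;
import commonj.sdo.Type;
import commonj.sdo.helper.HelperContext;
import commonj.sdo.impl.HelperProvider;

public class CustomerServiceImpl {

    public DataObject createCustomer() {
        // Get the SDO context currently associated with this application
        // (today this lookup is keyed off the thread context classloader).
        HelperContext context = HelperProvider.getDefaultContext();

        // Look up one of the types registered for this application and
        // instantiate it; the namespace and name here are hypothetical.
        Type customerType =
                context.getTypeHelper().getType("http://example.com/customer", "Customer");
        return context.getDataFactory().create(customerType);
    }
}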
But I thought about this a bit and your question triggered some
comments, and more questions :)
Import.sdo extension:
I think we should be able to remove that Tuscany extension to SCA
assembly XML, now that we have the SCA contribution service in place. We
know which WSDLs and XSDs are available in a given SCA contribution and,
with sca-contribution.xml import elements, we also know which XML
namespaces are imported from other SCA contributions or other locations
outside of an SCA domain. So we probably don't need another <import.sdo>
element duplicating part of this information in .composite files.
Scope of XML metadata:
My understanding of the SCA assembly spec is that the scope of XML
metadata is an SCA contribution (plus what it imports from outside) and
not an individual Composite.
Scope of metadata contributed by Java classes:
Our runtime currently supports SCA contributions packaged as JARs or
file system folders. With these packaging schemes an SCA contribution is
self-contained and cannot reference application classes in other SCA
contributions. At some point we'll probably want to support packaging of
SCA contributions as OSGI bundles and then leverage OSGI to allow an
OSGI bundle to see classes in another bundle, but we don't support that
OSGI packaging scheme yet. As a side comment I'd like to see if we could
reactivate some work on the OSGI extensions that we have under
java/sca/contrib/ and that are not integrated in our build at the moment. So,
the scope of Java metadata is an SCA contribution as well, with no
external import mechanism.
So the bottom line is:
References to types in SCA artifacts are resolved at the SCA
contribution level. There is no relationship between an SCA composite
and a metadata scope.
More comments, on databinding specific handling of metadata:
We need to support multiple databindings. Each databinding comes with
its own form of metadata and different APIs to get to that metadata and
define metadata scopes. I guess it's important for a databinding
technology to define a way to scope metadata if it wants to be
successfully used in a server environment, and isolate the metadata for
the different applications running on the server.
In such an environment, our SCA runtime should play nicely with the
other pieces of runtime and application code (not necessarily running as
SCA components), and use the metadata scoping mechanism defined by each
databinding in such a way that non-SCA code and SCA component code
running together in the server environment are able to see the same
metadata for a given application.
I'd like to start a discussion to cover this aspect for our various
databindings and make sure that the metadata story for each databinding
holds together.
To help feed this discussion with concrete data, could the SDO folks
jump in here, and describe the various ways of maintaining SDO metadata
scopes in a server environment, running with multiple classloaders and
threads?
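
To help anchor that, here is roughly what I mean by populating an SDO
metadata scope, assuming the runtime has already located an XSD in a
contribution. The class name, the schema location string and the SDOUtil
package name are my assumptions.

import java.io.InputStream;
import java.util.List;

import commonj.sdo.helper.HelperContext;
import org.apache.tuscany.sdo.util.SDOUtil;

public class MetadataScopeSketch {

    // Registers the types declared in one XSD into an isolated scope, so
    // two applications can use the same namespace URI with different
    // definitions without clashing.
    public HelperContext defineTypes(InputStream xsd) {
        HelperContext scope = SDOUtil.createHelperContext();
        // The schema location is only used to resolve relative references
        // inside the schema; "customer.xsd" is a placeholder.
        List definedTypes = scope.getXSDHelper().define(xsd, "customer.xsd");
        System.out.println("Defined " + definedTypes.size() + " types in this scope");
        return scope;
    }
}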
Thanks,
--
Jean-Sebastien