This thread is getting deep (and long), but I want to inject some ideas
into this part of the discussion.
I'd like to propose a target use case that might help drive us toward
a reasonable solution. Component c1 in composite cz1 invokes
component c2 in composite cz2, so c2's service is @Remotable. Also,
c2 is marked @AllowsPassByReference. Both cz1 and cz2 are deployed
into the same JVM. One variant could be whether or not
these two composites come from the same contribution. Given that
deployed composites get included into the domain, the composite
wrapper falls away from an SCA wiring perspective.
Now, the hard part: if we wanted to implement true pass-by-reference, the
params/results of the c1->c2 invocation would have to be loaded
by the same ClassLoader to avoid ClassCastExceptions.
While it is true that hosting environments will have their own
ClassLoader schemes, achieving the spec semantics will be hard with
an arbitrary ClassLoader scheme. Hosting environments may have to be
adapted to handle it.
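The ClassCastException point can be seen in a small self-contained sketch.
All names here (PassByRefDemo, Param) are made up for illustration; the
second, isolated URLClassLoader stands in for a second contribution's
loader holding its own copy of the parameter class:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class PassByRefDemo {
    // Hypothetical stand-in for a parameter type passed on a c1->c2 call.
    public static class Param {}

    public static void main(String[] args) throws Exception {
        // Where this class was loaded from (a classes directory or jar).
        URL codeBase = PassByRefDemo.class.getProtectionDomain()
                .getCodeSource().getLocation();
        // An isolated loader over the same code, as a second contribution
        // with its own copy of the parameter class might have.
        try (URLClassLoader other = new URLClassLoader(new URL[]{codeBase}, null)) {
            Class<?> otherParam = other.loadClass("PassByRefDemo$Param");
            // Same bytes, different defining loader => different runtime class.
            System.out.println(otherParam == Param.class);
            Object o = otherParam.getDeclaredConstructor().newInstance();
            try {
                Param p = (Param) o; // cast across loaders
                System.out.println("cast ok");
            } catch (ClassCastException e) {
                System.out.println("ClassCastException");
            }
        }
    }
}
```

Even though both Param classes come from identical bytes, the JVM keys
class identity on (name, defining loader), so the cast fails.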
BTW, I agree with starting simple and building up. I just wanted to take a
moment to propose something to build up to.
There were other questions raised below, like the lack of import/export
for Java package references in the SCA deployment design. I can tell
you that the spec group envisioned extensions for this purpose, but felt it
inappropriate to define them in a language-neutral spec.
Dave
----- Original Message -----
From: "Jean-Sebastien Delfino" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Tuesday, April 24, 2007 5:04 PM
Subject: Re: Require more context for URLArtifactProcessorExtension.read()
More comments inline.
Luciano Resende wrote:
Comments inline...
On 4/24/07, Jean-Sebastien Delfino <[EMAIL PROTECTED]> wrote:
Luciano Resende wrote:
>> I was thinking that Class loading would be the responsibility of the
>> ArtifactResolver. So, we wouldn't need a ClassLoader, we would call
>> ArtifactResolver.resolve() to get the Class, like for all other
>> artifacts resolved in an SCA contribution.
>
> I think that the packageProcessor should be the one creating the
> necessary
> contribution ClassLoader, but how will the resolver get a
> reference to
> it?
>
> I can have it done if you haven't started it yet....
>
I have started to put together a class that we can use as a key to
resolve classes. I'll commit it soon, and then if you want I'll leave the
rest to you :)
With respect to passing a classloader to the DefaultArtifactResolver,
here's what I think: SimpleRuntimeImpl currently creates a single
DefaultArtifactResolver, independent of any SCA Contribution. This works
for now as SimpleRuntimeImpl only handles a single Contribution, but
what we really need is:
- one ArtifactResolver per Contribution
How are you going to resolve things external to the contribution then?
Say, in the case of imports/exports?
- one ClassLoader per Contribution
+1
Good question, the short answer is: it depends... :)
My first cut of this email actually said "or a ClassLoader per
contribution graph when/if we support contribution dependencies at some
point", but I removed that sentence before sending it for a number of
reasons, including:
- Imports/exports do not cover classes at the moment, they cover XML
namespaces. If we want to cover imports of Java artifacts we'll have to
introduce something new, for example an <import.java>, and clearly define
what it means.
- Imports/exports do not point to contributions, they point to location
URIs, which can take any form, can be resolved in a runtime-specific way,
and may or may not point to other contributions :)
- An obvious way to support Java imports and contribution dependency
graphs for Java artifacts would be to use OSGi, but then maintaining a
graph of ClassLoaders ourselves would probably get in the way of OSGi; it
may be much simpler to let OSGi maintain the ClassLoader graph and have
our runtime just point to the top bundle ClassLoader, or not :) I guess
we'll have to figure this out when we look at that integration in detail
again.
- Finally, I think that the hosting environment has a role to play to
establish, select and maintain ClassLoaders. Different hosting
environments will probably come with very different ClassLoader management
schemes.
So I think that there are enough unknowns here not to try to boil the
ocean with our own homegrown ClassLoader hierarchy management scheme right
now... I suggest starting simple with one ClassLoader per SCA
Contribution. We can define the semantics of an <import.java> extension to
the spec later, assuming that we even need it, and worry then about how it
maps to ClassLoaders in the various hosting environments.
- and the ArtifactResolver can get the Contribution's ClassLoader passed
to its constructor.
+1
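As a sketch of that shape (all names here are illustrative assumptions,
not the actual Tuscany API), a per-contribution resolver could take the
contribution's ClassLoader in its constructor and resolve class artifacts
through it, alongside the other contributed artifacts:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only; class and method names are assumptions,
// not the actual Tuscany API.
class ContributionArtifactResolver {
    private final ClassLoader contributionClassLoader;
    private final Map<String, Object> artifacts = new HashMap<>();

    // One resolver per contribution, with that contribution's ClassLoader
    // handed in by whoever created it (package processor or host).
    ContributionArtifactResolver(ClassLoader contributionClassLoader) {
        this.contributionClassLoader = contributionClassLoader;
    }

    void add(String uri, Object artifact) {
        artifacts.put(uri, artifact);
    }

    // Class references resolve through the contribution's own loader,
    // so resolve() can serve classes like any other contributed artifact.
    Object resolve(String uri) throws ClassNotFoundException {
        Object found = artifacts.get(uri);
        if (found != null) {
            return found;
        }
        if (uri.startsWith("java:")) {
            return Class.forName(uri.substring("java:".length()),
                    false, contributionClassLoader);
        }
        return null;
    }
}

public class ResolverDemo {
    public static void main(String[] args) throws Exception {
        // For the sketch, the application loader plays the contribution loader.
        ContributionArtifactResolver resolver =
                new ContributionArtifactResolver(ResolverDemo.class.getClassLoader());
        Class<?> c = (Class<?>) resolver.resolve("java:java.util.ArrayList");
        System.out.println(c.getName());
    }
}
```

The "java:" URI scheme here is just a placeholder for however class
references end up being keyed.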
I'm not sure that having the PackageProcessor create the ClassLoader is
the best solution, as the hosting environment needs to have a say with
respect to which ClassLoader is used for what, but I guess we can start
with that approach for now to make progress, and change it later when
we have a better idea of how this fits with the various hosting
environments.
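Whoever ends up creating it, the per-contribution ClassLoader itself
could start out as simple as a URLClassLoader over the contribution's
location, with the host environment's loader as parent. A minimal sketch
(the helper and class names are made up):

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;

public class ContributionLoaders {
    // Hypothetical helper: one isolated ClassLoader per contribution
    // location, all sharing the host environment's loader as parent.
    static ClassLoader forContribution(URL location, ClassLoader host) {
        return new URLClassLoader(new URL[]{location}, host);
    }

    public static void main(String[] args) throws Exception {
        ClassLoader host = ContributionLoaders.class.getClassLoader();
        // Empty temp directories stand in for two contribution locations.
        URL a = Files.createTempDirectory("contributionA").toUri().toURL();
        URL b = Files.createTempDirectory("contributionB").toUri().toURL();
        ClassLoader clA = forContribution(a, host);
        ClassLoader clB = forContribution(b, host);
        // Distinct loaders isolate each contribution's own classes...
        System.out.println(clA != clB);
        // ...while classes from the shared parent stay a single runtime type.
        System.out.println(clA.loadClass("java.util.List")
                == clB.loadClass("java.util.List"));
    }
}
```

The parent-delegation choice is what keeps shared types (the SCA API,
JDK classes) cast-compatible across contributions while still isolating
each contribution's own code.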
--
Jean-Sebastien
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]