Simon Laws wrote:
Hi

A few clarifications in line.

Simon

On Fri, Apr 4, 2008 at 8:24 PM, Jean-Sebastien Delfino <[EMAIL PROTECTED]>
wrote:

Yang Lei wrote:

Hello,

I have the following usage scenarios that I currently accomplish with
EmbeddedSCADomain's ContributionService. Looking at the new set of
workspace modules, I wonder how they can be accomplished using the new
workspace-related APIs, and what the pros/cons are if I switch to the
workspace:

Scenario 1: I need to load an SCA contribution and iterate over its
deployables. Each deployable composite needs to resolve its componentType:
from Java annotations, from a componentType file, or from the QName of
another composite file, which may be imported from another contribution
using <import namespace>.

The way I support it today is like what itest/contribution-import-export
does:

       ContributionService contributionService =
           domain.getContributionService();
       ...
       Contribution consumerContribution =
           contributionService.contribute(...);
       Composite consumerComposite =
           consumerContribution.getDeployables().get(0);
       domain.getDomainComposite().getIncludes().add(consumerComposite);
       domain.buildComposite(consumerComposite);


Scenario 2: I need to start a contribution's deployable composite
within a domain.

Again I use the same approach as in itest/contribution-import-export;
in addition to the above code, I add the following:

       // Start Components from my composite
       domain.getCompositeActivator().activate(consumerComposite);
       domain.getCompositeActivator().start(consumerComposite);



Now I am looking into how to accomplish the above using the
workspace-related APIs. I started by looking at a workspace test case:

http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/itest/domain/src/test/java/org/apache/tuscany/sca/itest/domain/ContributionSPIsTestCase.java

I have the following observations:

1. The bootstrapping of Tuscany extension points is outside the
workspace.

I can see a lot of code in init() to do bootstrapping. I would prefer the
bootstrapping to be tied to a given domain, as all the workspace usage for
a given domain should have the same bootstrapping of the object model and
of which bindingTypes or implementationTypes are supported. If it can be
done that way, then I do not need to bootstrap every time I use the
workspace, and I can keep the bootstrapping of scenarios 1 and 2
consistent, even though it may happen that scenario 1's bootstrapping is
only a subset of scenario 2's.

If we are worried that one-size-fits-all bootstrapping is too much overhead
for scenario 1, maybe we can have two-stage bootstrapping: composite model
resolved, composite started. Or we can even break it into three stages:
composite model loaded from SCDL without resolving componentTypes,
composite model resolved, composite started...

I'm interested in what you say about bootstrapping being associated with a
domain. The code you have been looking at in the domain itest contains, I
believe, all the detailed steps you need to go through in order to read
contributions, understand the dependencies between them, read and resolve
them, and finally run some composite that is contained in the contributions.

Is your main concern here that these steps are just too complicated and that
you would like them wrapped up (which is, as Sebastien suggests, relatively
straightforward to do as long as we can agree that the steps are
fundamentally doing the right kinds of things)? Or is there some more
fundamental issue with the concepts that concerns you? In particular you say:

"I think I would
prefer the bootstrapping are tied with a given domain, as all the
workspace usage for a given domain should have the same bootstrapping
on the object model and what kind of bindingTypes or
implementationTypes are supported. If it can be done it that way, then
I do not need to bootstrap everytime I use workspace, and I can keep
both bootstrapping of scenario 1 and 2 consistent,"

But if I take the init code from the test you have been looking at and run
it twice, both copies of the runtime would have the same sets of extensions
and bindings, as the code loads these from the runtime classpath.

As Sebastien describes below, the workspace is independent of the rest of
the code in the init method, in that it just holds onto contributions and
doesn't care how those contributions were generated.


Makes sense. I am not sure that the bootstrap code should be 'tied to a
domain', but I can do the following:

- Provide a few pre-canned init methods that bootstrap the subset of a
Tuscany runtime required for your scenarios. I'll start with these (a
sketch follows below):
 a) list deployables in a contribution
 b) resolve deployables given the set of available contributions

- Come up with samples (easier to understand than test cases) showing how
to use the init methods and the current SPIs to implement these scenarios.
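
For illustration, a minimal sketch of what (a) could look like; the
WorkspaceHelper name here is just illustrative, not an actual Tuscany SPI:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.tuscany.sca.assembly.Composite;
    import org.apache.tuscany.sca.contribution.Contribution;

    // hypothetical helper, not a real Tuscany SPI
    public final class WorkspaceHelper {

        // (a) list the deployable composites declared by a contribution;
        // getDeployables() is the same call used in the EmbeddedSCADomain
        // snippet at the top of this thread
        public static List<Composite> listDeployables(Contribution contribution) {
            return new ArrayList<Composite>(contribution.getDeployables());
        }
    }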

I'll probably keep the init method in each sample to start with, and then
as we work through more usage scenarios I'm hoping that we can find common
init patterns that we can then push into proper SPIs for all to reuse.


 2. Some detailed questions related to what I see in the
ContributionSPIsTestCase:

I can see that a contribution can be added to the workspace by
workspace.getContributions().add(contribution);

I am not sure if at this stage I will be able to get the composite
model object that I need for scenario 1

I'm assuming that you're talking about the code in
testReadDependentContributions()?

Workspace is a model object, which you can use to represent the collection
of Contributions that you're working with. Workspace.getContributions()
simply returns a java.util.List for you to record and list contributions.

So workspace.getContributions().add(contribution) does not affect in any
way the contents or state of the contribution model object, or your ability
to get composites from it.
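
A minimal sketch of that usage; workspaceFactory here is assumed to be a
WorkspaceFactory obtained from the extension point registry, as in the
test case's init():

    // the Workspace is just a model object
    Workspace workspace = workspaceFactory.createWorkspace();

    // this only records the contribution in a plain java.util.List;
    // the contribution model object itself is untouched by the add() call
    workspace.getContributions().add(contribution);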

You should be able to just get a composite from a contribution, by going
through the list of artifacts returned by getArtifacts() or by using a
model resolver.
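
For example, a sketch of the first option, assuming Artifact.getModel()
returns the in-memory model for each artifact as in the test case:

    import org.apache.tuscany.sca.assembly.Composite;
    import org.apache.tuscany.sca.contribution.Artifact;

    // scan the contribution's artifacts for composite models
    for (Artifact artifact : contribution.getArtifacts()) {
        Object model = artifact.getModel();
        if (model instanceof Composite) {
            Composite composite = (Composite) model;
            // use the composite here, e.g. match it against the deployables
        }
    }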

or whether I need to go through extra steps to get the Composite model
resolved.

The test case does not seem to try to resolve anything, as it just reads
contributions and never calls resolve on the contribution processor. I'll
try to add code to one of the to-be-written samples to show how to resolve
a contribution.


It both reads and resolves contributions. It uses the list of contributions
returned by the dependency analyzer to determine which contributions should
be resolved in order to start a chosen composite.


e.g. I can see some code like:

    List<Contribution> dependencies =
        analyzer.buildContributionDependencies(workspace,
                                               workspace.getContributions().get(0));

Is it needed for me to get the resolved model, or is it just something to
play with to get a dependency graph?

No, it's not needed to resolve artifacts in a contribution. The
contribution dependency analyzer is a utility class which can be used to
get a contribution dependency graph, useful to have in hand if, for
example, you're building a contribution admin application and want to
display lists of dependencies.


Agreed, it's not absolutely required. I just added this in to work out
which contributions really need processing in order to start a chosen
composite. If you are in the situation where you are processing all the
contributions that are being added, then you don't need to work out how
they are related before trying to resolve them.
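
In other words, reusing the analyzer call quoted above (a sketch):

    // compute the dependency closure of the chosen contribution, then
    // read/resolve only the contributions that the closure contains
    List<Contribution> dependencies =
        analyzer.buildContributionDependencies(workspace,
                                               workspace.getContributions().get(0));
    for (Contribution dependency : dependencies) {
        // resolve this contribution; others in the workspace can be skipped
    }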



3. I can also see that getting a composite started takes more code than
using the domain.

The composite activator has not changed and still works the same way as
before. So if you have composite objects already in memory, ready to be
used, they can be given to the composite activator exactly as before (like
you showed at the top of this email).
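
For reference, that's the same two-call pattern from the top of this email:

    // activate and then start the composite, exactly as in scenario 2 above
    domain.getCompositeActivator().activate(consumerComposite);
    domain.getCompositeActivator().start(consumerComposite);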

If on the other hand you want to use SCANode2Factory, I think that the
test case code below and the FIXME statement in it are a little confusing,
and can be simplified. More comments inline.


I fixed that. My bad - I got confused about the intention of the interface.

My use of SCANode2 here is to show that the node can be used to run the
composite based on the information that comes out of the composite
processing we have already done. Of course the SCANode2 itself sets up a
whole set of runtime things, so you may want to avoid this to save on
memory footprint if you are trying to run the composite within the same
JVM in which you are doing your contribution processing.


One thing I realized is that there is no association of a Node to a Domain
(sorry if I missed it). I would assume the Node will be associated with an
SCADomain, as then we can call SCADomain.getService to locate the services
hosted on the Domain. It would also make it possible to have multiple
domains in a single JVM, each with different contributions, so that the
hosted services and behaviors are different, as there can be a different
definitions.xml in each contribution for intents, policies, or other
definitions.

           // ====================================================================
           // run the chosen composite

           SCANode2Factory nodeFactory = SCANode2Factory.newInstance();
           SCAContribution contribution0 =
               new SCAContribution(contributionsToDeploy.get(0).getURI(),
                                   contributionsToDeploy.get(0).getLocation());
           SCAContribution contribution1 =
               new SCAContribution(contributionsToDeploy.get(1).getURI(),
                                   contributionsToDeploy.get(1).getLocation());

           // FIXME - need a more flexible constructor on the node so we can
           //         pass in a dynamic list of contributions

The second parameter of createSCANode is a variable list of arguments: you
can pass contribution0, contribution1, or... just pass an array, giving you
the ability to provide a 'dynamic' list of contributions. I'm assuming that
by 'dynamic' we mean 'not determined at compile time'.

           SCANode2 node = nodeFactory.createSCANode(chosenDeployableLocation,
               contribution0, contribution1);

Like this:

    SCAContribution[] contributions = ...; // an array of contributions
    nodeFactory.createSCANode(chosenDeployableLocation, contributions);


I changed to this approach.




           node.start();

Here's a simpler example:

       SCANode2Factory factory = SCANode2Factory.newInstance();
       SCANode2 node = factory.createSCANode("Calculator.composite",
               new SCAContribution("calc", "/tmp/calc.jar"),
               new SCAContribution("math", "/tmp/math.jar"));
       node.start();

This gives you the ability to use multiple contributions in an application,
which DefaultSCADomain didn't support before.


I believe this is what the test is doing, except that the contribution
names and locations are not hardcoded in the call.



            SCAClient client = (SCAClient) node;
            CalculatorService calculatorService =
                client.getService(CalculatorService.class,
                                  "CalculatorServiceComponentA");

One more comment about SCANode2 and SCAClient: this is work in progress.
For example, we named SCANode2 that way to avoid a conflict with SCANode,
which we also have in the code at the moment. SCANode2 will probably be
renamed to SCANode at some point, or merged with it.

4. Another interest I have is model validation for contributions or
composites. I see a different thread discussing validation
(http://www.mail-archive.com/tuscany-dev@ws.apache.org/msg29510.html);
I wonder if the same answers apply when using the workspace, or whether
it is a different approach.

Validation is currently performed as part of CompositeBuilder. It doesn't
change at all from before, since CompositeBuilder does not know (and
doesn't need to know) how the models given to it were obtained.
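
So the same pattern applies however the models were loaded. A sketch,
assuming the CompositeBuilder SPI from the runtime modules:

    try {
        // validation happens inside build(), whether the composite came
        // from the workspace or from a domain's ContributionService
        compositeBuilder.build(resolvedComposite);
    } catch (CompositeBuilderException e) {
        // build/validation problems are reported here
    }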

Does that answer your question? I wasn't sure what the question was on
that one :)


Thank you. Looking forward to some answers.

Yang


Last comment: this is a pretty long email :) with 5 or 6 different subjects
piled into it. I've tried to respond to all of them inline here, but I'd
suggest spawning separate threads for further discussion of each specific
subject as necessary, if we want to keep this readable as others jump into
the discussion too.

Hope this helps.
--
Jean-Sebastien


To allow programs to work with the Tuscany model extensions without
dragging in a dependency on the runtime modules (which is one of the ideas
discussed here), I need to push the following interfaces/classes:
o.a.t.sca.core.ExtensionPointRegistry
o.a.t.sca.core.DefaultExtensionPointRegistry
o.a.t.sca.core.ModuleActivator
down from the core module to the extensibility module.

This is transparent to all modules that use these classes, as there is no name change and module core already has a compile dependency on module extensibility.
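
For example, existing code keeps compiling unchanged, since only the
containing module moves:

    // same imports before and after the move from core to extensibility
    import org.apache.tuscany.sca.core.DefaultExtensionPointRegistry;
    import org.apache.tuscany.sca.core.ExtensionPointRegistry;

    ExtensionPointRegistry registry = new DefaultExtensionPointRegistry();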

If there's no objection, I'll do that at the end of the day tomorrow.
--
Jean-Sebastien
