On Fri, Feb 29, 2008 at 5:37 PM, Jean-Sebastien Delfino <
[EMAIL PROTECTED]> wrote:

> Comments inline.
>
> >>>>> A) Contribution workspace (containing installed contributions):
> >>>>> - Contribution model representing a contribution
> >>>>> - Reader for the contribution model
> >>>>> - Workspace model representing a collection of contributions
> >>>>> - Reader/writer for the workspace model
> >>>>> - HTTP based service for accessing the workspace
> >>>>> - Web browser client for the workspace service
> >>>>> - Command line client for the workspace service
> >>>>> - Validator for contributions in a workspace
> >
> > I started looking at step D). Having a rest from URLs :-) In the
> > context of this thread the node can lose its connection to the domain,
> > and hence the factory, and the node interface slims down. So "Runtime
> > that loads a set of contributions and a composite" becomes:
> >
> > create a node
> > add some contributions (addContribution) and mark a composite for
> > starting (currently called addToDomainLevelComposite).
> > start the node
> > stop the node
> >
> > You could then recycle (destroy) the node and repeat if required.
> >
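The lifecycle above could be sketched roughly as follows. The operation names (addContribution, addToDomainLevelComposite, start, stop) come from this thread; the interface shape and the in-memory implementation are invented stand-ins for illustration, not the actual node2-impl API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the slimmed-down node lifecycle described above.
// Names come from the thread; everything else is a toy stand-in.
interface SCANodeSketch {
    void addContribution(String uri, String location);
    void addToDomainLevelComposite(String compositeName);
    void start();
    void stop();
    void destroy();
}

class ToyNode implements SCANodeSketch {
    final List<String> contributions = new ArrayList<>();
    final List<String> startedComposites = new ArrayList<>();
    boolean running;

    public void addContribution(String uri, String location) {
        contributions.add(uri + " -> " + location);
    }
    public void addToDomainLevelComposite(String compositeName) {
        startedComposites.add(compositeName);
    }
    public void start() { running = true; }
    public void stop() { running = false; }
    public void destroy() { contributions.clear(); startedComposites.clear(); }
}

public class NodeLifecycle {
    public static void main(String[] args) {
        ToyNode node = new ToyNode();                       // create a node
        node.addContribution("store", "file:./store.jar");  // add a contribution
        node.addToDomainLevelComposite("store.composite");  // mark a composite for starting
        node.start();                                       // start the node
        System.out.println("running=" + node.running
            + " composites=" + node.startedComposites);
        node.stop();                                        // stop the node
        node.destroy();                                     // recycle if required
    }
}
```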
> > This all sounds like a suggestion Sebastien made about 5 months ago ;-)
> > I have started to check in an alternative implementation of the node
> > (node2-impl). I haven't changed any interfaces yet so I don't break any
> > existing tests (and the code doesn't run yet!).
> >
> > Anyhow. I've been looking at the workspace code for parts A and B that
> > has recently been committed. It would seem to be fairly representative
> > of the motivating scenario [1]. I don't have detailed questions yet
> > but, interestingly, it looks like contributions, composites etc. are
> > exposed as HTTP resources. Sebastien, it would be useful to have a
> > summary of your thoughts on how it is intended to hang together and
> > how these will be used.
>
> I've basically created three services:
>
> workspace - Provides access to a collection of links to contributions,
> with their URIs and locations. Also provides functions to get the list
> of a contribution's dependencies and to validate a contribution.
>
> composites - Provides access to a collection of links to the composites
> present in the domain composite. Also provides a function returning a
> particular composite once it has been 'built' (by CompositeBuilder),
> i.e. its references, properties etc. have been resolved.
>
> nodes - Provides access to a collection of links to composites
> describing the <implementation.node> components which represent SCA nodes.
>
> There's another "file upload" service that I'm using to upload
> contribution files and other files to some storage area but it's just
> temporary.
>
> I'm using <binding.atom> to expose the above collections as editable
> ATOM-Pub collections (and ATOM feeds of contributions, composites, nodes).
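A client of one of these collections just reads links out of an ATOM feed. The sketch below parses a made-up feed of contribution links from a hard-coded string; a real client would GET the feed from the workspace service instead:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Toy illustration of reading one of the ATOM collections described
// above. The feed content and URLs are invented for the example.
public class AtomLinks {
    static final String FEED =
        "<feed xmlns='http://www.w3.org/2005/Atom'>" +
        "  <title>contributions</title>" +
        "  <entry><id>store</id>" +
        "    <link href='file:/home/me/workspace/store.jar'/></entry>" +
        "  <entry><id>assets</id>" +
        "    <link href='http://host:8080/files/assets.jar'/></entry>" +
        "</feed>";

    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document doc = f.newDocumentBuilder()
            .parse(new ByteArrayInputStream(FEED.getBytes(StandardCharsets.UTF_8)));
        // each entry carries a link to a contribution, local or remote
        NodeList links = doc.getElementsByTagNameNS(
            "http://www.w3.org/2005/Atom", "link");
        for (int i = 0; i < links.getLength(); i++) {
            System.out.println(((Element) links.item(i)).getAttribute("href"));
        }
    }
}
```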
>
> Here's how I'm using these services as an SCA domain administrator:
>
> 1. Add one or more links to contributions to the workspace. They can be
> anywhere accessible on the network through a URL, or local on disk. The
> workspace just keeps track of the list.
>
> 2. Add one or more composites to the composites collection. They become
> part of the domain composite.
>
> 3. Add one or more composites declaring SCA nodes to the nodes
> collection. The nodes are described as SCA components of type
> <implementation.node>. A node component names the application composite
> that is assigned to run on it (see implementation-node-xml for an
> example).
>
> 4. Point my Web browser to the various ATOM collections to get:
> - lists of contributions, composites and nodes
> - list of contributions that are required by a given contribution
> - the source of a particular composite
> - the output of a composite built by CompositeBuilder
>
> Here, I'm hoping that the work you've started to "assign endpoint info
> to domain model" [2] will help CompositeBuilder produce the correct
> fully resolved composite.
>
> 5. Pick a node, point my Web browser to its composite description and
> write down:
> - $node = URL of the composite describing the node
> - $composite = URL of the application composite that's assigned to it
> - $contrib = URL of the list of contribution dependencies.
>
> 6. When you have node2-impl ready :) from the command line do:
> sca-node $node $composite $contrib
> This should start the SCA node, which can get its description, composite
> and contributions from these URLs.
>
> or for (6) start the node directly from my Web browser as described in
> [1], but one step at a time... that can come later when we have the
> basic building blocks working OK :)
>
>
> >
> > I guess these HTTP resources bring a deployment dimension.
> >
> > Local - Give the node contribution URLs that point to the local file
> > system, from where the node reads the contributions (this is how it
> > has worked to date)
> > Remote - Give the node contribution URLs that point to HTTP resources,
> > so the node can read the contributions from where they are stored on
> > the network
> >
> > Was that the intention?
>
> Yes. I don't always want to have to upload contributions to some server
> or even have to copy them around. The collection of contributions should
> be able to point to contributions directly in my IDE workspace for
> example (and it supports that today).
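Since the workspace only stores links, the same node code path can be handed either kind of contribution URL; java.net.URI treats both uniformly. A small sketch (the paths and host names are made-up examples):

```java
import java.net.URI;

// The contribution collection can point at a contribution in a local IDE
// workspace or at an HTTP resource; the node just dereferences the URL.
// Both URIs below are invented for illustration.
public class ContributionUrls {
    public static void main(String[] args) {
        URI local  = URI.create("file:/home/me/ide-workspace/store/target/classes/");
        URI remote = URI.create("http://admin-host:8080/files/store.jar");
        System.out.println(local.getScheme());   // file
        System.out.println(remote.getScheme());  // http
    }
}
```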
>
> > [1] http://www.mail-archive.com/[email protected]/msg27362.html
> [2] http://marc.info/?l=tuscany-dev&m=120422784528176
>
> --
> Jean-Sebastien
>
Great summary Sebastien. Thank you.

I've been running the workspace code today with a view to integrating the
new code in assembly that calculates service endpoints, i.e. point 4 above.

I think we need to amend point 4 to make this work properly:

4. Point my Web browser to the various ATOM collections to get:
- lists of contributions, composites and nodes
- list of contributions that are required by a given contribution
- the source of a particular composite
- the output of a composite after the domain composite has been built by
CompositeBuilder

Looking at the code in DeployableCompositeCollectionImpl I see that
doGet() builds the requested composite. What the last point needs to do
is:

- read the whole domain
- set up all of the service URIs for each of the included composites taking
into account the node to which each composite is assigned
- build the whole domain using CompositeBuilder
- extract the required composite from the domain and serialize it out.
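The four steps above could be sketched as follows. The real work happens in Tuscany's CompositeBuilder over the assembly model; here a "composite" is reduced to a single service name, "building" only assigns each service a URI rooted at the base URI of the node the composite is assigned to, and all names and URIs are invented:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy walk-through of: read the whole domain, set service URIs per
// assigned node, build, then extract one composite. Illustration only.
public class DomainBuildSketch {
    public static void main(String[] args) {
        // 1. read the whole domain: composite -> its service name
        Map<String, String> domain = new LinkedHashMap<>();
        domain.put("store.composite", "Catalog");
        domain.put("client.composite", "Shopper");

        // composite -> base URI of the node it is assigned to
        // (in Tuscany this comes from <implementation.node> components)
        Map<String, String> nodeFor = new LinkedHashMap<>();
        nodeFor.put("store.composite", "http://node-a:8080");
        nodeFor.put("client.composite", "http://node-b:8080");

        // 2+3. "build" the whole domain: each service URI is rooted at
        // the base URI of the node its composite is assigned to
        Map<String, String> built = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : domain.entrySet()) {
            built.put(e.getKey(), nodeFor.get(e.getKey()) + "/" + e.getValue());
        }

        // 4. extract the required composite and "serialize" it out
        System.out.println("store.composite -> " + built.get("store.composite"));
    }
}
```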

Are you changing this code or can I put this in?

Simon
