On 7/27/07, Simon Laws <[EMAIL PROTECTED]> wrote:
>
>
>
> On 7/27/07, Mike Edwards <[EMAIL PROTECTED]> wrote:
> >
> > Simon Laws wrote:
> > > In the distributed domain, contributions and any updates have to be
> > > provisioned to each node. There are many ways of doing this (ftp,
> > > http, shared file system, etc.), to the extent that Tuscany shouldn't
> > > really care too much about how it is achieved. I would expect that at
> > > any given time a domain at a node can be notified of its configuration
> > > given the URL(s) of where to find its up-to-date contributions. For
> > > example, in the standalone case this could just resolve to the
> > > contributions in "file:///path to sca-contributions dir".
> > >
> > > However I'm a little unclear how to deal with configuration updates.
> > > There have been a number of posts recently about incremental updates
> > > to domains [1] and [2]. I can see that contributions can be added and
> > > removed, but the discussion in [1] and the assembly spec (section
> > > 1.10.4.1) imply that contributions can also be updated. Is there code
> > > to support this in place already? I'm assuming the current approach is
> > > to drop the current contribution and load the updated version rather
> > > than deal with deltas.
> > >
> > > Simon
> > >
> > > [1]
> > http://www.mail-archive.com/[email protected]/msg19979.html
> > > [2] http://issues.apache.org/jira/browse/TUSCANY-1379
> > >
> > Simon,
> >
> > I think it will pay to do a bit more thinking about all this.  There are
> > going to be a range of different configurations to support, and so
> > thinking through the structure of the runtime nodes and how they
> > interact with the contributions in the Domain will pay off over time.
> >
> > I think there is going to be too much to contain in one email, but I'll
> > start here.  I think capturing the concepts on Wiki pages will pay off
> > in the long run, since I think that sorting through a load of emails to
> > find them will get hard.
> >
> > So, if you are all seated comfortably, let us begin....
> > (Brits of a certain age will understand where that phrase comes from...)
> >
> > We have the SCA Domain.  This contains the configuration data, held as a
> > series of one or more contributions.  On a single node runtime, the way
> > in which the Domain is held can be very simple indeed.  Files on disk in
> > one or more directories will do fine.
> >
> > Once we get a distributed runtime, things rapidly get more complex.  The
> > one obvious thing is that it is almost inevitable that each runtime node
> > will need access to parts of the SCA Domain configuration that it
> > doesn't "own" - e.g. to make a wire from a component that one node runs
> > to a component running on another node.
> >
> > How the SCA Domain is implemented for a distributed runtime is also
> > variable - it could be done in a number of ways.  The trick is to
> > provide interfaces between the runtime node code and the "repository"
> > that allow for alternative implementations.  This must cover both the
> > initial configuration when the runtime node(s) start up and also what
> > happens when the Domain configuration changes.
> >
> > I think that we must design interfaces that separate the organization of
> > the SCA Domain repository from the runtime code.  These interfaces are
> > going to have to be two-way, in the sense that there are going to be
> > both pull and push aspects to them.  I.e. a node can go and pull
> > configuration information from the Domain, or it can have configuration
> > thrown at it - either as updates or by some provisioning manager that is
> > tossing out work to do.
> >
> > Service interfaces seem like the right kind of things to do under the
> > covers inside the implementation of the Domain repository.  Nodes in
> > principle need to talk with each other.  We need to think through which
> > interfaces are needed first and then decide how they are dealt with in
> > terms of concrete service interfaces.
> >
> >
> > Yours,  Mike.
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: [EMAIL PROTECTED]
> > For additional commands, e-mail: [EMAIL PROTECTED]
>
>
> +1 Mike that we start to work with interfaces. I've checked in some
> starters for 10 (sorry, another UK reference) [1] and, if you don't like
> the code version, I've started doing some updates to the wiki page that
> has been around for a while on this subject [2]. I'm happy to accept that
> we will have differing views on how these interfaces should look, but if
> we can iterate here to some common understanding that would be great.
>
> Simon
>
> [1] 
> http://svn.apache.org/repos/asf/incubator/tuscany/java/sca/modules/distributed/src/main/java/org/apache/tuscany/sca/distributed/management/
>
> [2]
> http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Distributed+Runtime
>
>
Mike, thinking a bit more about what you say, it would be good if we can
take the interfaces that result from this conversation and show how they
can be deployed in distributed runtimes of various forms. I say this as
there is always more than one way to bake a cake, as it were, and we
should aim for a degree of flexibility.

1/ Currently the sample/calculator-distributed relies on a set of
distributed nodes that are completely self-contained. Each reads the
complete set of contributions and decides which components to run based on
local, file-system-based configuration.

2/ Going forward it is useful to have a mechanism where the nodes running
components in a distributed domain can communicate with each other to
exchange information such as the endpoints at which their services are
exposed. This could be done via some central distributed domain manager or
registry.
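As a concrete sketch of 1/, each self-contained node might filter the domain's full component list against its local configuration. Everything here is a hypothetical illustration (the class and component names are invented, not taken from the sample code):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch of the "self-contained node" approach in 1/: every
// node loads the whole domain's component list, then local configuration
// (a file on disk in the sample; a hard-coded set here) says which of
// those components this particular node actually runs.
public class NodeComponentFilter {

    // Names of all components found in the domain's contributions
    // (invented examples for illustration).
    static final List<String> DOMAIN_COMPONENTS = List.of(
        "CalculatorServiceComponent",
        "AddServiceComponent",
        "SubtractServiceComponent");

    // Return only the components assigned to this node by its local config.
    static List<String> componentsToRun(Set<String> localAssignment) {
        return DOMAIN_COMPONENTS.stream()
            .filter(localAssignment::contains)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // prints [AddServiceComponent]
        System.out.println(componentsToRun(Set.of("AddServiceComponent")));
    }
}
```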

It would be good if the interfaces defined for this purpose work in both
situations, so that we don't have to define two sets, and I think it will
help us make better interfaces. For example, in case 1/, an interface that
provides access to service endpoint information could be implemented in
each node to deduce the information from configuration files on disk,
while in case 2/ the same interface could be provided as a remote service
on a separate registry component.
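To make that a bit more concrete, here is a rough sketch of what such an endpoint-information interface might look like, with an in-memory implementation standing in for the "deduce it from files on disk" case in 1/; in case 2/ the same interface would instead be bound to a remote registry service. All names here are invented for illustration and are not the actual Tuscany API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical contract for recording and looking up the endpoint at
// which a component's service is exposed. In case 1/ each node implements
// it over local configuration; in case 2/ the same contract could front a
// central registry reached over a remote binding.
interface ServiceEndpointRegistry {

    // Record the endpoint URL at which a component's service is exposed.
    void registerEndpoint(String componentName, String endpointUrl);

    // Return the endpoint URL for a component, or null if unknown.
    String findEndpoint(String componentName);
}

// Minimal in-memory implementation for the single-node / local-files
// case; a distributed version would delegate these calls to the registry
// component instead, behind the same interface.
class LocalEndpointRegistry implements ServiceEndpointRegistry {

    private final Map<String, String> endpoints = new ConcurrentHashMap<>();

    public void registerEndpoint(String componentName, String endpointUrl) {
        endpoints.put(componentName, endpointUrl);
    }

    public String findEndpoint(String componentName) {
        return endpoints.get(componentName);
    }
}

public class EndpointRegistryDemo {
    public static void main(String[] args) {
        ServiceEndpointRegistry registry = new LocalEndpointRegistry();
        registry.registerEndpoint("CalculatorServiceComponent",
                                  "http://nodeA:8080/CalculatorService");
        // prints http://nodeA:8080/CalculatorService
        System.out.println(
            registry.findEndpoint("CalculatorServiceComponent"));
    }
}
```

The point of the sketch is only that node code programs against the interface, so swapping the local implementation for a remote one doesn't disturb the callers.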

What and where the services are (I think you have used the term moving parts
in the past) depends on how we choose to implement the interfaces. On this
subject I've split the terminology section from the distributed page on the
wiki to try and get more input to the words being used (
http://cwiki.apache.org/confluence/display/TUSCANYWIKI/Terminology).

Thoughts?

Simon
