Nice :)
I'll respond inline.
Simon Laws wrote:
Thanks Hinde, I think I'm starting to get it;-)
Another question inline. I might find the answer in the paper you reference
so I'll go off and read that but I'll ask here anyhow just in case there is
an easy answer.
The paper does not include an implementation of the proposal. We are now
working on using SCA.
Simon
On Fri, Jun 20, 2008 at 4:27 PM, Hinde Bouziane <[EMAIL PROTECTED]> wrote:
Simon Laws wrote:
On Fri, Jun 20, 2008 at 1:30 PM, Hinde Bouziane <[EMAIL PROTECTED]>
wrote:
Simon Laws wrote:
On Fri, Jun 20, 2008 at 8:33 AM, Hinde Bouziane <[EMAIL PROTECTED]>
wrote:
Hi
This is an interesting thread. Can you give me an idea of:
1 - how frequently the logical components (as opposed to the instances of
those logical components) change?
Do you mean logical composition?
For me the "logical" composition is the application description that
represents the designer point of view, before execution. It kept the
same.
An instance of the composition will be the architecture of the
application
at execution. It can be different from the logical. In the example (A;B),
I
added the Proxy which is not in the logical composition. The result is an
instance of the original composition that have an architecture changing
at
execution. The frequency of changes depends on the both the execution
time
of tasks in components and the policy adopted to manage the evolution of
the
workflow.
For example:
- A policy can be the one that I cited in the (A;B) example, where a
component is created once the flow reaches it. The frequency of changes
depends on the flow and time execution of tasks. If the task of A takes
one
day, the manager of the application architecture creates B once this task
is
finished.
- A policy can be based on prediction mechanisms to create in advance
components that may be used in a near future in the flow. Before the
completion of the task of A, B can be created.
I hope that help understanding, Do not hesitate to tell me if something
is
still ambiguous.
Regards,
Hinde
2 - how frequently the workflow that directs the order of execution of these
components changes?
For example, is it every request, every minute, every six months, etc.?
Regards
Simon
Thanks Hinde
I don't know if this helps but let's talk about the distinction between
how
an application is assembled and the way that work flows through it at
runtime. From an SCA point of view each time work flows through an
assembled
application it may take a different route. This depends on the business
logic in each component in the application. This is true even though we
may
only change the description of the composite application every 6 months.
Yes, that is usual in the assembly models of existing component models. An
assembly model like SCA's does not capture the flow of an application's
execution. This flow is hidden both in the component implementations and
(usually) in a main code that directs the flow of execution/component usage. In
our case, we built an assembly model that is a combination of a classical
assembly model (to express reference-service dependencies) and a workflow
language (to express data-flow and control-flow dependencies between
components). A component in our model is a classical component, like an SCA
component, extended with the concepts of tasks and input/output data ports.
These concepts are projected onto a set of predefined/automatically generated
services/references (in the end we obtain an SCA component).
So to build an application, we define/develop SCA components, but we don't use
the SCA assembly model as-is; we use another one. A built assembly expresses
the flow of execution of tasks and the reference-service dependencies. From
this assembly we can extract the sequence of actions to create/destroy
components and to move data from one component to another during execution,
mainly according to the expressed flow.
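To make this a little more concrete, here is a very rough sketch in plain Java
(all names are hypothetical and I leave out the SCA annotations); the comments
indicate which parts would be projected onto generated SCA services and
references:

// Hypothetical component "B" from the (A;B) example; its task and data
// ports end up as an ordinary SCA component.

// Input data port of B: projected onto a generated SCA service.
interface BInput {
    void putData(String data);          // data arriving from A
}

// Control port of B: also projected onto a generated SCA service,
// invoked when the flow reaches B.
interface BTask {
    void start();
}

// Output data port of B: projected onto a generated SCA reference, wired to
// the input port of the next component in the flow.
interface BOutput {
    void putData(String data);
}

class BComponent implements BInput, BTask {
    private String input;
    private BOutput output;             // injected by the runtime (generated reference)

    public void putData(String data) { this.input = data; }

    public void start() {
        // the task body written by the component developer
        String result = input.toUpperCase();   // placeholder for the real task
        output.putData(result);                // move the output data along the flow
    }
}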
An example: a digital film rendering application which, on a frame-by-frame
basis, does the following (I'm just making this up b.t.w.):
1. Apply wire frame
2. Render background
3. If there are characters in the frame render characters.
In the SCA composite for such an application I could imagine 4 components:
A. FrameRenderingCoordinator
B. WireFrameProcessor
C. BackgroundRenderer
D. CharacterRenderer
Where the FrameRenderingCoordinator contains a piece of workflow that is
wired to components B, C & D, always calls components B & C, and then decides
whether or not to call D depending on whether there are characters in the
scene.
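In Java terms the coordinator's business logic might look something like this
sketch (the interfaces are just made up to illustrate the wiring):

// Hypothetical interfaces for the three rendering components.
interface WireFrameProcessor { Frame apply(Frame f); }
interface BackgroundRenderer { Frame render(Frame f); }
interface CharacterRenderer  { Frame render(Frame f); }

class Frame {
    boolean hasCharacters;
    boolean hasCharacters() { return hasCharacters; }
}

// A. FrameRenderingCoordinator: in the composite, the three fields below
// would be SCA references wired to components B, C and D.
class FrameRenderingCoordinator {
    private WireFrameProcessor wireFrame;   // -> B
    private BackgroundRenderer background;  // -> C
    private CharacterRenderer characters;   // -> D

    Frame renderFrame(Frame frame) {
        Frame f = wireFrame.apply(frame);   // 1. apply wire frame
        f = background.render(f);           // 2. render background
        if (f.hasCharacters()) {            // 3. call D only when needed
            f = characters.render(f);
        }
        return f;
    }
}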
To spread load across machines we could deploy these components into a
clustered environment that supports SCA (see demos/load-balancing-webapp).
Alternatively we could write some new binding infrastructure that is able to
spray work out based on some set of policies (we can't do this today).
In both of these cases the application developer just assembles the
business
application and pays no attention to how the work is distributed.
I suppose I'm trying to demonstrate here how I think about SCA
applications.
In your case you have some interesting infrastructure ideas, e.g.
predictively creating component instances if it is anticipated that flow
will reach them in the future. It's unclear to me how you want to
represent
these kinds of rules.
The flow is anticipated thanks to the assembly model that we propose. Because
it is difficult to draw figures by email, you can take a look at
https://www.irisa.fr/paris/bibadmin/uploads/pdf/0.21342000%201213723399_paper.pdf
Are you suggesting changes to the details of the Tuscany runtime
infrastructure to accommodate these improvements without the application
developer being aware?
Not exactly. We aim to use the SCA Tuscany runtime to realize our proposal.
The problem I'm facing is using it for dynamic creation/removal of
components, nodes, and components within nodes, without having to describe in
advance all the nodes, their placement and their content (as I explained in the
previous message). That should be decided during execution (for example,
deciding the placement of a component, and thus its node, once the flow
reaches it).
Are you treating components like a Job such as you might encounter in Grid
schedulers or, for example, in Globus or OMII?
I'm not an expert in Globus or OMII, but the principle should be similar, and
simpler. In our case, we are using the French experimental platform Grid'5000
(https://www.grid5000.fr/). To submit a job, we use the batch management
system OAR/OARGRID. It allows machine/resource reservations and job
submission.
I'm asking as I'd like to
understand the scenario in which a component is placed on a new node. Is
this just in the case where new physical computers are added to the network
or is this a general purpose idea where components are applied to available
resources?
Both situations are possible, but our aim is to optimize resource usage, so we
are in the latter case: we reserve a computer for a component when the
component is reached by the flow.
If the latter, do you think there is a difference between assigning a
component to all the nodes where it might be used and then later choosing
which one you are going to use (remember that assigning a component to a node
doesn't actually mean an instance will be created there) versus only assigning
components to nodes when the time comes to use the component?
No, there is no difference, but that assumes all resources are reserved in
advance.
Are you suggesting that the application itself should be making these
decisions? For example, we could rearrange our example flow to read:
1. If there are characters in the frame, ask the character renderer to get
ready while I do steps 2 & 3
2. Apply wire frame
3. Render background
4. If there are characters in the frame render characters.
It is not the application itself that makes the decisions, but a manager that
we have to develop, able to extract the actions to perform from the assembly
description of the application to be executed. This manager is expected to use
the Tuscany runtime to create/destroy/connect components/nodes. The manager
also decides on the placement of components and the content of nodes, which
can evolve during the execution.
So it sounds to me like the manager is a new piece of infrastructure that
decides where and when components are deployed and used.
Yes, it is two pieces of infrastructure: one to manage when components have to
be created/destroyed. For creation, it asks another piece responsible for
requesting a resource and deploying the component (the opposite for
destruction). As we are in the first step of implementing the assembly model,
the second piece can be an SCA component that, for instance, asks OAR for the
reservation of a computer, retrieves the computer name and deploys the
component on it.
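As a very rough sketch (all names are hypothetical, and the OAR and Tuscany
runtime calls are only placeholders), the two pieces could fit together like
this:

// Second piece: requests/releases a machine (e.g. an OAR reservation) and
// deploys/undeploys a component on it with the SCA runtime; split here into
// two small interfaces.
interface ResourceAllocator {
    String reserveNode();               // reserve one machine, return its host name
    void releaseNode(String hostName);
}

interface ComponentDeployer {
    void deploy(String componentName, String hostName);
    void undeploy(String componentName, String hostName);
}

// First piece: decides *when*, by following the flow expressed in the assembly.
class FlowManager {
    private final ResourceAllocator allocator;
    private final ComponentDeployer deployer;

    FlowManager(ResourceAllocator allocator, ComponentDeployer deployer) {
        this.allocator = allocator;
        this.deployer = deployer;
    }

    // Called when the flow reaches a component that does not exist yet.
    String createComponent(String componentName) {
        String host = allocator.reserveNode();   // ask OAR for a computer
        deployer.deploy(componentName, host);    // deploy the component on it
        return host;
    }

    // Called when a component's task is finished and it is no longer needed.
    void destroyComponent(String componentName, String host) {
        deployer.undeploy(componentName, host);
        allocator.releaseNode(host);
    }
}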
Can policies
attached to the assembly model affect the behaviour of this manager?
The assembly model does not define policy management, but a semantics that
allows extracting the behavior expressed by the assembly. Yes, policies affect
the behavior of the manager: for example, create components only when they are
reached by the flow, or create components which may be used soon, so as to
limit the overhead of deployment between execution steps in the flow.
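For example, such policies could be plugged into the manager through something
like the following (again, all names are hypothetical):

import java.util.HashSet;
import java.util.Set;

// Hypothetical view of the manager's knowledge of the flow at a given instant.
interface FlowState {
    Set<String> currentComponents();   // components the flow has just reached
    Set<String> likelySuccessors();    // components predicted to be used soon
}

// A policy decides which components should exist right now.
interface CreationPolicy {
    Set<String> componentsToCreate(FlowState state);
}

// Policy 1: create a component only when the flow actually reaches it.
class OnDemandPolicy implements CreationPolicy {
    public Set<String> componentsToCreate(FlowState state) {
        return state.currentComponents();
    }
}

// Policy 2: also create components predicted to be used soon, to limit the
// deployment overhead between execution steps in the flow.
class LookAheadPolicy implements CreationPolicy {
    public Set<String> componentsToCreate(FlowState state) {
        Set<String> result = new HashSet<String>(state.currentComponents());
        result.addAll(state.likelySuccessors());
        return result;
    }
}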
Regards,
Hinde
Regards,
Hinde
Regards
Simon