i don't think so. although i was somewhat unaware of this terminology,
this is not what i was getting at.

to be clearer, the choice is between a (relatively) static, globally shared information model, and a much more dynamic situation where a new service may bring a largely incompatible ontology with it; as long as the part needed for initialisation and activation meshes with the existing services, we're okay.


On Nov 20, 2007, at 1:11 AM, Steven Jenkins wrote:

On Nov 19, 2007 12:35 PM, Andrew Hume <[EMAIL PROTECTED]> wrote:
...
 it seems like there are just two answers:

a) there is a global model, populated by things like port numbers, main memory,
cpu load, available disk space, firewalls and so on. all services can state
their requirements and measure their usage and performance in terms
(predicates etc) of these entities.

b) the tool inherently knows nothing. it somehow discovers the services
extant in the cluster and then figures out what to do by grubbing around
through the ontologies for each service. so when the web service says it
needs a port number as part of its installation, the tool finds the entity
'port number' as part of the models belonging to the 'firewall' service
and the 'tcp/ip stack' service for a node.
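(for concreteness, here is a minimal sketch of option (b); the `Service` class, the `find_entity` helper, and the example ontology data are all hypothetical names, not anything from an actual tool:)

```python
# Sketch of option (b): the tool starts with no global model and resolves
# a requirement by searching each discovered service's own ontology.
# All names here (Service, find_entity, the example data) are illustrative.
from typing import Optional


class Service:
    def __init__(self, name: str, entities: set[str]):
        self.name = name
        self.entities = entities  # entities this service's ontology defines


def find_entity(services: list[Service], entity: str) -> Optional[Service]:
    """Return the first discovered service whose ontology defines `entity`."""
    for svc in services:
        if entity in svc.entities:
            return svc
    return None


# Discovered on the cluster at runtime, not known in advance:
discovered = [
    Service("firewall", {"port number", "rule"}),
    Service("tcp/ip stack", {"port number", "interface", "route"}),
]

# The web service's installation needs a 'port number'; the tool grubs
# through the per-service ontologies to find which service defines it.
owner = find_entity(discovered, "port number")
```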


You seem to be describing the debate between single and multiple
dispatch: e.g., http://en.wikipedia.org/wiki/Multiple_dispatch

That's not to say one or the other is better, but I think it would be
beneficial for people to take a look at the programming language
research on the subject.
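(to make the distinction concrete, here is a toy sketch of multiple dispatch; Python only ships single dispatch via functools.singledispatch, so this hand-rolled registry, and the Asteroid/Ship names borrowed from the classic example, are purely illustrative:)

```python
# Toy multiple-dispatch registry: the implementation chosen depends on the
# runtime types of BOTH arguments, not just the first (single dispatch).
_registry = {}


def multi(type_a, type_b):
    """Register a function as the implementation for a pair of argument types."""
    def wrap(fn):
        _registry[(type_a, type_b)] = fn
        return fn
    return wrap


def collide(a, b):
    """Dispatch on the types of both arguments."""
    fn = _registry.get((type(a), type(b)))
    if fn is None:
        raise TypeError(f"no method for ({type(a).__name__}, {type(b).__name__})")
    return fn(a, b)


class Asteroid: pass
class Ship: pass


@multi(Asteroid, Ship)
def _(a, b):
    return "ship takes damage"


@multi(Asteroid, Asteroid)
def _(a, b):
    return "asteroids shatter"
```

calling `collide(Asteroid(), Ship())` picks a different implementation than `collide(Asteroid(), Asteroid())`, which is exactly the behaviour single dispatch cannot express.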

Steven

------------------
Andrew Hume  (best -> Telework) +1 732-886-1886
[EMAIL PROTECTED]  (Work) +1 973-360-8651
AT&T Labs - Research; member of USENIX and LOPSA



_______________________________________________
lssconf-discuss mailing list
lssconf-discuss@inf.ed.ac.uk
http://lists.inf.ed.ac.uk/mailman/listinfo/lssconf-discuss
