On Sun, Apr 7, 2013 at 10:40 AM, Tristan Slominski <
[email protected]> wrote:
>
>
> Consider: You can't treat a group of two actors as a single actor.
>
>
> You can definitely group two actors as a single actor. It's almost trivial
> to do so. It requires creating another actor though :D.
>

There are design patterns that can allow developers to treat a cluster of
collaborating actors as a single actor [1], but they are not trivial, and
they require careful discipline in the design of the whole cluster.

I believe you imagine an actor simply demuxing or unzipping messages to two
or more other actors. But such a design does not result in the same
concurrency, consistency, or termination properties as a single actor,
which is why you cannot (correctly) treat the grouping as a single actor.

[1] http://www.dalnefre.com/wp/2010/05/composing-actors/
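To make the concern concrete, here is a toy sketch (hypothetical names, single-threaded mailboxes) of a facade actor demuxing to two workers. Messages sent through the facade are delivered, but the order in which the workers process them is a separate scheduling decision — unlike a single actor, which handles all its messages in arrival order:

```python
from collections import deque

# Toy mailbox-per-actor model (hypothetical; real actor runtimes differ).
class Actor:
    def __init__(self, handler):
        self.mailbox = deque()
        self.handler = handler

    def send(self, msg):
        self.mailbox.append(msg)

    def step(self):
        """Process one message, if any (scheduling made explicit)."""
        if self.mailbox:
            self.handler(self.mailbox.popleft())

log = []
a = Actor(lambda m: log.append(("a", m)))
b = Actor(lambda m: log.append(("b", m)))

# Facade: demuxes tagged messages to worker a or worker b.
facade = Actor(lambda m: (a if m[0] == "a" else b).send(m[1]))

facade.send(("a", 1))
facade.send(("b", 2))
facade.step(); facade.step()
# a and b each now hold one message, but a valid schedule may run them
# in either order -- the grouping does not behave like one actor.
b.step(); a.step()
print(log)  # [('b', 2), ('a', 1)] under this schedule
```

A single actor receiving `("a", 1)` then `("b", 2)` would have processed them in that order; the facade grouping admits schedules that a single actor cannot exhibit.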



>
> Also, you yourself, in "Life With Objects" mention object capability
> model. So to counter the statement of difficulty to reason about
> system-level properties, object capability model definitely makes it
> simpler to reason about the system-level property of security (by walking
> the actor "graph").
>

Object capabilities - and compositional reasoning about security properties
- are in my opinion the best contribution of OOP to CS. (Traits, and Object
Algebras, cf. William Cook's work [2], are up there, too.) I am, of course,
very interested in preserving this reasoning. But I'd also like to address
many other properties.

[2] http://lambda-the-ultimate.org/node/4572


> Correctly implemented actor systems (requirement is non-forgeable actor
> addresses) implement object capability model trivially.


The actors model is certainly capable of expressing object capability
patterns, but it is my understanding that most actor languages and
implementations are developed with an assumption of ambient authority
(i.e. the ability to 'import' new capabilities, including stateful ones
that would allow indirect communication between actors).

Developing a language with object capability constraints, or at least
making them practical, requires special attention to modularity and
integration. It also requires reconsidering many APIs where there must be
secure collaboration (e.g. for graphics and UI). This is, in my experience
and others', not a trivial hurdle.



> As far as standard composition, I've seen lambda calculus implemented in
> actors, and I've seen actors implemented in Kernel
>

Hmm. No, that doesn't count. You don't get to say "we can compose lambdas,
and we can implement lambdas with actors, therefore we can compose actors".
Such an argument, if accepted, would render composition meaningless.

To recognize composition, you should identify three sets: the set of
compositional operands (components), the set of composition operators, and
the set of compositional properties. And there is a minimum of one
compositional property, which is algebraic closure (the result of
composition can be further composed).

You can thus speak of composing lambdas with functional composition, or
composing diagrams with graphics composition operations. But neither says
anything at all about whether actors, or Kernel, or C, or whatever they're
implemented in, is composable. Composition is a property of abstractions.
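As a minimal illustration of the three sets in the lambda case: the operands are unary functions, the operator is functional composition, and the closure property holds because the result is again a unary function:

```python
def compose(f, g):
    """Composition operator: operands are unary functions, and the
    result is again a unary function (algebraic closure)."""
    return lambda x: f(g(x))

inc = lambda x: x + 1
dbl = lambda x: x * 2

h = compose(inc, dbl)   # a composed result...
k = compose(h, inc)     # ...can itself be further composed
print(k(3))             # inc(dbl(inc(3))) = inc(8) = 9
```

Nothing in this says anything about the composability of the Python interpreter that implements it — which is the point.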

We can, of course, create all sorts of ad-hoc abstractions, e.g. frameworks
or little-languages, that provide composition internally. A DSL or
framework might be much more composable than the language that implements
it. But so far the issue of integrating these solutions, e.g. of composing
frameworks, is poorly addressed.


> are you familiar with Storm ( https://github.com/nathanmarz/storm ) ? It
> is one example of composing things and providing guarantees at a slightly
> higher level of abstraction.
>

Weakly familiar; I haven't used it, but I've seen the presentation and
know Storm implements event stream processing, which (as a model) is much
more composable than generic message passing (since you can model external
composition of topologies), but still leaves a lot to be desired.


>
>  Much like my previous metaphor comments, it's important we get to pick
> and choose our metaphors for the task at hand.
>

I'm always wary of such assertions. There are cases where choice is
important or necessary, of course. But how do you weigh the freedom to make
choices for "the task at hand" against the fact that they may be bad
choices for the tasks NOT immediately at hand (such as integration and
maintenance)?

You use metaphors to inspire abstractions. Compositionality is a property
of, and therefore a constraint on, abstraction. There is some friction
here, though not a full contradiction (you can certainly find compositional
metaphors).


> there is always the one bright shining example of an event system that
> definitely counters, for example, your point that "Event systems lack
> generic resilience". As Alan likes to point out, the Internet was built by
> people.
>

Eh, the internet is certainly NOT a counter-example to that point.

At the low level of TCP/IP, there is no *generic* way to re-establish a
broken connection or recover from missing datagrams. Each application or
service has its own ad-hoc, inconsistent solution. The World Wide Web that
you might consider resilient is so primarily due to representational state
transfer (REST) styles and disciplines, which is effectively about giving
the finger to events. (Events involve communicating and effecting
*changes* in state, whereas REST involves communicating full
*representations* of state.) Also, by many metrics (dead links, behavior
under disruption, persistent inconsistency) neither the web nor the
internet is especially resilient (though both degrade gracefully).
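A toy sketch of that distinction (illustrative only, hypothetical message format): if updates are *deltas* (events), a single lost message leaves the receiver permanently wrong; if updates are full *representations* (REST-style), the receiver is correct again as soon as the next message arrives:

```python
# Sender's true state counts up to 3; the second message is lost in transit.
updates = [("set", 1), ("set", 2), ("set", 3)]   # full representations
deltas  = [("add", 1), ("add", 1), ("add", 1)]   # changes in state
lost = {1}                                       # index of the dropped message

rest_state, event_state = 0, 0
for i, ((_, full), (_, d)) in enumerate(zip(updates, deltas)):
    if i in lost:
        continue
    rest_state = full      # REST: replace with the full representation
    event_state += d       # events: apply the delta

print(rest_state)    # 3 -- recovered despite the loss
print(event_state)   # 2 -- permanently diverged from the true value 3
```

The full-representation receiver needs no ad-hoc recovery protocol; the delta receiver does.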



> So event systems can become powerful if we can get them right.
>

Their power was never in question. Our ability to control and comprehend
that power, OTOH...


>
> Think of a read caching system. As long as it has the capacity, the more
> requests are executed, the more they are cached, the better the system
> response overall. That's an example of getting better in face of
> variability and demand. This followed by hot-spot detection, horizontal
> scaling in response, and load balancing across are all examples of a system
> improving when stressed.
>

I agree, all of those are good adaptations. I tend to think of them in
terms of the properties that allow them, e.g. horizontal scaling is enabled
by idempotence and commutativity of certain operations, and similarly for
caching. If one has machine learning supporting a constraint satisfaction
model, one gains both stability (we learn to find the same or nearby
solutions again and again) and performance (we find solutions more quickly).
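For instance, a read cache is only sound because reads are idempotent: the same request can be answered from memory (or from any replica) without changing the outcome. A minimal sketch, with a hypothetical `fetch` standing in for the expensive backend read:

```python
import functools

calls = 0

@functools.lru_cache(maxsize=None)
def fetch(key):
    """Hypothetical idempotent read: the same key always yields the
    same value, so repeats may safely be served from the cache."""
    global calls
    calls += 1
    return f"value-of-{key}"

for _ in range(1000):
    fetch("hot-key")   # repeated demand...
print(calls)           # 1 -- ...makes the system cheaper, not slower
```

The same idempotence is what licenses routing the repeats to any of several replicas, i.e. horizontal scaling.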


> I often find myself deliberately disregarding "correctness", and thinking
> about how correctness can become an emergent property of the system.
>

People who focus on the latter often use phrases such as 'convergence',
'stability', 'asymptotic', 'bounded error', 'eventual consistency'. It's
all very interesting. But it is historically a mistake to disregard
correctness and then *handwave* about emergent properties; you really need
a mathematical justification even for weak correctness properties.
Biologically inspired models aren't always stable (there are all sorts of
predator-prey cycles, extinctions, etc.) and the transient stability we
observe is often anthropocentric.
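One example of the kind of mathematical justification I mean: state-based CRDTs earn their 'eventual consistency' from an actual proof obligation — the merge must be a join (commutative, associative, idempotent) — rather than from handwaving. A grow-only counter sketch:

```python
# Grow-only counter: per-node counts, merged by pointwise max.
def merge(a, b):
    """Join: commutative, associative, idempotent -- the algebraic
    facts that justify convergence, replica order notwithstanding."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(state):
    return sum(state.values())

n1 = {"n1": 5}
n2 = {"n2": 3}
# Replicas may merge in any order, any number of times...
left  = merge(merge(n1, n2), n2)
right = merge(n2, n1)
print(left == right, value(left))   # True 8 -- same state either way
```

The convergence claim here is checkable algebra, not an emergent hope.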


> Another inspiration for me in similar fashion was the Conscientious
> Software paper:
>
> http://pleiad.dcc.uchile.cl/_media/bic2007/papers/conscientioussoftwarecc.pdf
>

I'll take a look.

Regards,

Dave
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
