David Barbour <dmbarb...@gmail.com> writes:

> relying on global knowledge when designing an actor system seems, to me,
>> not to be the "right" way
>>
>
> In our earlier discussion, you mentioned that actors model can be used to
> implement lambda calculus. And this is true, given bog standard actors
> model. But do you believe you can explain it from your 'physics' view
> point? How can you "know" that you've implemented, say, a Fibonacci
> function with actors, if you forbid knowledge beyond what can be discovered
> with messages? Especially if you allow message loss?

<snip>

> Any "expressiveness issues" that can be attributed to actors model are a
> consequence of the model. It doesn't matter whether you implement it as a
> language or a framework or even use it as a design pattern.
>
> Of course, one might overcome certain expressiveness issues by stepping
> outside the model, which may be easy for a framework or certain
> multi-paradigm languages. But to the extent you do so, you can't claim
> you're benefiting from actors model. It's a little sad when we work around
> our models rather than with them.

I think in these kinds of discussions it's important to keep in mind
Goedel's limits on self-reference. Any Turing-complete system is
equivalent to any other in the sense that they can implement each other,
but when we reason about their properties it makes a difference whether
our 'frame of reference' is the implemented language or the
implementation language.

For example, let's say we implement lambda calculus in the actor model:
 - From the lambda calculus frame of reference I *know* that
   "λx. a x" == "a" whenever x is not free in "a" (eta-equivalence),
   which I can use in my reasoning.
 - From the actor model frame of reference I *cannot* know that an
   encoding of "λx. a x" == an encoding of "a", since it won't be. More
   importantly, reducing an encoding of "λx. a x" gives different
   computational behaviour from reducing an encoding of "a". I cannot
   substitute one for the other in an arbitrary expression and *know*
   that they'll reduce to the same thing, since deciding that in general
   would solve the halting problem.
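
To make the gap concrete, here is a minimal Haskell sketch (our own
illustration, not an actor encoding: it represents terms as a plain
syntax tree, but the same point applies to any encoding). The two terms
are unequal as data, and they take different reduction steps:

    -- Illustrative term representation; all names here are ours.
    data Term = Var String | Lam String Term | App Term Term
      deriving (Eq, Show)

    -- Substitution (capture-avoidance elided; safe here because "a"
    -- and "b" are free variables distinct from the binder "x").
    subst :: String -> Term -> Term -> Term
    subst x arg (Var y)   | x == y    = arg
                          | otherwise = Var y
    subst x arg (Lam y b) | x == y    = Lam y b
                          | otherwise = Lam y (subst x arg b)
    subst x arg (App f a) = App (subst x arg f) (subst x arg a)

    -- One head beta-reduction step, if any.
    step :: Term -> Maybe Term
    step (App (Lam x body) arg) = Just (subst x arg body)
    step _                      = Nothing

    etaExpanded, plain :: Term
    etaExpanded = Lam "x" (App (Var "a") (Var "x"))  -- λx. a x
    plain       = Var "a"                            -- a

    -- ghci> etaExpanded == plain
    -- False
    -- ghci> step (App etaExpanded (Var "b"))
    -- Just (App (Var "a") (Var "b"))
    -- ghci> step (App plain (Var "b"))
    -- Nothing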

We can step outside the actor model and prove eta-equivalence of the
encoded terms (eg. it would follow immediately from a proof that we've
correctly implemented lambda calculus), but such a proof can never be
carried out within the actor model itself.
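
As a sketch of that "step outside" move: in Lean 4, an external proof
assistant, eta-equivalence for Lean's own functions is a one-liner,
because Lean's definitional equality already includes eta (a
correctness proof of our encoding would then let us transport it to
the encoded terms):

    -- Lean 4: eta-equivalence, proved in an external system.
    theorem eta {α β : Type} (f : α → β) : (fun x => f x) = f := rfl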

To use David's analogy, there are some desirable properties that
programmers exploit which are inherently "3D" and cannot be represented
in the "2D" world. Of course, there are also "4D" properties which our
"3D" infrastructure cannot represent, for example correct refactorings
that our IDE will think are unsafe, correct optimisations which our
compiler will think are unsafe, etc. At some point we have to give up
and claim that the meta-meta-meta-....-system is "enough for practical
purposes" and "obviously correct" in its implementation.

The properties that David is interested in preserving under composition
(termination, maintainability, security, etc.) are very meta, so it's
easy for them to become unrepresentable and difficult to encode when a
language/system/model isn't designed with them in mind.

Note that the above argument is based on Goedel's first incompleteness
theorem: no consistent system (capable of expressing basic arithmetic)
can prove all true statements. The halting problem is just the most
famous source of such true-but-unprovable statements for
Turing-complete systems: for any such system there are programs which
never halt, but whose non-termination the system cannot prove.
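
For concreteness, the usual diagonal argument, sketched in Haskell (the
names 'halts' and 'spite' are ours; 'halts' stands for a hypothetical
total decider, which is exactly what cannot exist):

    -- Hypothetical: a total function deciding whether a computation
    -- halts. No such implementation is possible; 'spite' shows why.
    halts :: (() -> ()) -> Bool
    halts = error "no total implementation can exist"

    -- Diagonalisation: if 'halts' were total, 'spite' would halt
    -- exactly when 'halts' says it doesn't -- a contradiction either way.
    spite :: () -> ()
    spite () = if halts spite then spite () else ()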

Goedel's second incompleteness theorem is equally applicable here: no
consistent system (of sufficient expressive power) can prove its own
consistency, or that of a system with equivalent expressive power,
ie. anything which can emulate it. In other words, no system (eg. the
actor model, lambda calculus, etc.) can prove/reason about its own:

 - Consistency/correctness: a correct system could produce a correct
   proof that it's correct, or an incorrect system could produce an
   incorrect proof that it's correct; by trusting such a proof we become
   inconsistent.
 - Termination/productivity: a terminating/productive system could
   produce a proof that it's terminating/productive, or a divergent
   (non-terminating, non-productive) system could claim, but fail to
   produce, a proof that it's terminating/productive. By trusting such a
   proof we allow non-termination (or we could optimise it away,
   becoming inconsistent).
 - Security: a secure system could produce a proof that it's secure, or
   an insecure system could be tricked into accepting that it's
   secure. By trusting such a proof, we become insecure.
 - Reliability: a reliable system could produce a proof that it's
   reliable, or an unreliable system could fail in a way that produces
   an incorrect proof of its reliability. By trusting such a proof, we
   become unreliable.
 - Etc.

For such properties we *must* reason in an external language/system,
since Goedel showed that such loops cannot be closed without producing
inconsistency (or the analogous 'bad' outcome).
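
Schematically, in the standard notation (T any consistent, recursively
axiomatisable theory interpreting enough arithmetic; Con(T) its
arithmetised consistency statement):

    % Goedel's second incompleteness theorem:
    \text{if } T \text{ is consistent, then } T \nvdash \mathrm{Con}(T)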

Regards,
Chris