Re: [fonc] Layering, Thinking and Computing

2013-04-14 Thread Gath-Gealaich
On Sat, Apr 13, 2013 at 8:29 PM, David Barbour dmbarb...@gmail.com wrote:


 On this forum, 'Nile' is sometimes proffered as an example of the power of
 equational reasoning, but it is a domain-specific model.


Isn't one of the points of idst/COLA/Frank/whatever-it-is-called-today to
simplify the development of domain-specific models to such an extent that
their casual application becomes conceivable, and indeed even practical, as
opposed to designing a new one-size-fits-all language every decade or so?

I had another idea the other day that could profit from a domain-specific
model: a model for compiler passes. I stumbled upon the nanopass approach
[1] to compiler construction some time ago and found that I like it. Then
it occurred to me that if one could express the passes in some sort of a
domain-specific language, the total compilation pipeline could be assembled
from the individual passes in a much more efficient way than would be the
case if the passes were written in something like C++.

In order to do that, however, no matter what the intermediate values in the
pipeline would be (trees? annotated graphs?), the individual passes would
have to be analyzable in some way. For example, two passes may or may not
interfere with each other, and therefore may or may not be commutative,
associative, and/or fusable (in the same respect that, say, Haskell maps
over lists are fusable). I can't imagine that C++ code would be analyzable
in this way, unless one were to use some severely restricted subset of C++
code. It would be ugly anyway.

Composing the passes by fusing the traversals and transformations would
decrease the number of memory accesses, speed up the compilation process,
and encourage the compiler writer to write more fine-grained passes, in the
same sense that deep inlining in modern language implementations encourages
the programmer to write small and reusable routines, even higher-order
ones, without severe performance penalties. Lowering the barrier to
implementing such a problem-specific language seems to make such an
approach viable, perhaps even desirable, given how convoluted most
production compilers seem to be.
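
To make the idea concrete, here is a rough sketch of what such a pass
language might record (all names invented; this is neither Nile nor the
nanopass framework, just an illustration): each pass declares which
syntactic categories it reads and rewrites, so a pipeline assembler can
test two passes for interference and fuse non-interfering neighbours, in
the spirit of Haskell's map fusion law, map f . map g == map (f . g).

import Data.Set (Set)
import qualified Data.Set as Set

data Category = Binders | Calls | Literals deriving (Eq, Ord, Show)

-- A pass is a node-level rewrite plus a declaration of what it touches;
-- a generic engine would apply 'rewrite' bottom-up over the whole tree.
data Pass ast = Pass
  { reads_  :: Set Category   -- categories the pass inspects
  , writes_ :: Set Category   -- categories the pass rewrites
  , rewrite :: ast -> ast     -- the node-level transformation
  }

-- Bernstein-style condition: neither pass rewrites anything the other
-- reads or rewrites, so the two passes can be reordered freely.
commutes :: Pass a -> Pass a -> Bool
commutes p q =
  Set.null (writes_ p `Set.intersection` Set.union (reads_ q) (writes_ q))
    && Set.null (writes_ q `Set.intersection` reads_ p)

-- Fusion: a single bottom-up traversal applies both node-level rewrites,
-- saving one walk over the tree (sound here only when the declared
-- effects rule out interference, as checked above).
fuse :: Pass a -> Pass a -> Pass a
fuse p q = Pass
  { reads_  = Set.union (reads_ p) (reads_ q)
  , writes_ = Set.union (writes_ p) (writes_ q)
  , rewrite = rewrite q . rewrite p
  }

An assembler could then reorder commuting passes and greedily fuse
adjacent ones, cutting the number of full-tree traversals and the memory
traffic that goes with them.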

(If I've just written something that amounts to complete gibberish, please
shoot me. I just felt like writing down an idea that occurred to me
recently and bouncing it off somebody.)

- Gath

[1] Kent Dybvig, A nanopass framework for compiler education (2005),
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.72.5578


Re: [fonc] Layering, Thinking and Computing

2013-04-13 Thread David Barbour
On Sat, Apr 13, 2013 at 9:01 AM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 I think we don't know whether time exists in the first place.


That only matters to people who want a model as close to the Universe as
possible.

To the rare scientist who is not also a philosopher, it only matters
whether time is effective for describing and predicting the behavior of the
universe, and the same is true for notions of particles, waves, energy,
entropy, etc..

I believe our world is 'synchronous' in the sense of things happening at
 the same time in different places...


 It seems to me that you are describing a privileged frame of reference.


How is it privileged?

Would you consider your car mechanic to have a 'privileged' frame of
reference on our universe because he can look down at your vehicle's engine
and recognize when components are in or out of synch? Is it not obviously
the case that, even while out of synch, the different components are still
doing things at the same time?

Is there any practical or scientific merit for your claim? I believe there
is abundant scientific and practical merit to models and technologies
involving multiple entities or components moving and acting at the same
time.



 I've built a system that does what you mention is difficult above. It
 incorporates autopoietic and allopoietic properties, enables object
 capability security and has hints of antifragility, all guided by the actor
 model of computation.


Impressive.  But with Turing complete models, the ability to build a system
is not a good measure of distance. How much discipline (best practices,
boiler-plate, self-constraint) and foresight (or up-front design) would it
take to develop and use your system directly from a pure actors model?



I don't want programming to be easier than physics. Why? First, this
 implies that physics is somehow difficult, and that there ought to be a
 better way.


Physics is difficult. More precisely: setting up physical systems to
compute a value or accomplish a task is very difficult. Measurements are
noisy. There are many non-obvious interactions (e.g. heat, vibration,
covert channels). There are severe spatial constraints, locality
constraints, energy constraints. It is very easy for things to 'go wrong'.

Programming should be easier than physics so it can handle higher levels of
complexity. I'm not suggesting that programming should violate physics, but
programs shouldn't be subject to the same noise and overhead. If we had to
think about adding fans and radiators to our actor configurations to keep
them cool, we'd hardly get anything done.

I hope you aren't so hypocritical as to claim that 'programming shouldn't
be easier than physics' in one breath then preach 'use actors' in another.
Actors are already an enormous simplification from physics. It even
simplifies away the media for communication.



Whatever happened to the pursuit of Maxwell's equations for Computer
 Science? Simple is not the same as easy.


Simple is also not the same as physics.

Maxwell's equations are a metaphor that we might apply to a specific model
or semantics. Maxwell's equations describe a set of invariants and
relationships between properties. If you want such equations, you'll
generally need to design your model to achieve them.

On this forum, 'Nile' is sometimes proffered as an example of the power of
equational reasoning, but it is a domain-specific model.



 if we (literally, you and I in our bodies communicating via the Internet)
 did not get here through composition, integration, open extension and
 abstraction, then I don't know how to make a better argument to demonstrate
 those properties are a part of physics and layering on top of it


Do you even have an argument that we are here through composition,
integration, open extension, and abstraction? I'm a bit lost as to what
that would even mean unless you're liberally reinterpreting the words.

In any case, it doesn't matter whether physics has these properties, only
whether they're accessible to a programmer. It is true that any programming
model must be implemented within physics, of course, but that's not the
layer exposed to the programmers.


Re: [fonc] Layering, Thinking and Computing

2013-04-13 Thread John Nilsson
This discussion reminds me of
http://www.ageofsignificance.org/

It's a philosophical analysis of what computation means and how, or if, it
can be separated from the machine implementing it. The author argues that
it cannot.

If you haven't read it you might find it interesting. Unfortunately only
the introduction is published as of today.

BR
John
On 12 Apr 2013 20:08, Tristan Slominski tristan.slomin...@gmail.com wrote:

 I had this long response drafted criticizing Bloom/CALM and Lightweight
 Time Warps, when I realized that we are probably again not aligned as to
 which meta level we're discussing.

 (my main criticism of Bloom/CALM was assumption of timesteps, which is an
 indicator of a meta-framework relying on something else to implement it
 within reality; and my criticism of Lightweight Time Warps had to do with
 that it is a protocol for message-driven simulation, which also needs an
 implementor that touches reality; "synchronous reactive programming" has
 the word "synchronous" in it) - hence my assertion that this is more meta
 level than actors.

 I think you and I personally care about different things. I want a
 computational model that is as close to how the Universe works as possible,
 with a minimalistic set of constructs from which everything else can be
 built. Hence my references to cellular automata and Wolfram's hobby of
 searching for the Universe. Anything which starts as synchronous cannot
 be minimalistic because that's not what we observe in the world, our world
 is asynchronous, and if we disagree on this axiom, then so much for that :D

 But actors model fails with regards to extensibility(*) and reasoning


 Those are concerns of an imperator, are they not? Again, I'm not saying
 you're wrong, I'm trying to highlight that our goals differ.

 But, without invasive code changes or some other form of cheating (e.g.
 global reflection) it can be difficult to obtain the name of an actor that
 is part of an actor configuration.


 Again, this is ignorance of the power of Object Capability and the Actor
 Model itself. The above is forbidden in the actor model unless the
 configuration explicitly sends you an address in the message. My earlier
 comment about Akka refers to this same mistake.

 However, you do bring up interesting meta-level reasoning complaints
 against the actor model. I'm not trying to dismiss them away or anything.
 As I mentioned before, that list is a good guide as to what meta-level
 programmers care about when writing programs. It would be great if actors
 could make it easier... and I'm probably starting to get lost here between
 the meta-levels again :/

 Which brings me to a question. Am I the only one that loses track of which
 meta-level I'm reasoning in, or is this a common occurrence? Bringing it back
 to the topic somewhat, how do people handle reasoning about all the
 different layers (meta-levels) when thinking about computing?


 On Wed, Apr 10, 2013 at 12:21 PM, David Barbour dmbarb...@gmail.com wrote:

 On Wed, Apr 10, 2013 at 5:35 AM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 I think it's more of a pessimism about other models. [..] My
 non-pessimism about actors is linked to Wolfram's cellular automata Turing
 machine [..] overwhelming consideration across all those hints is
 unbounded scalability.


 I'm confused. Why would you be pessimistic about non-actor models when
 your argument is essentially that very simple, deterministic, non-actor
 models can be both Turing complete and address unbounded scalability?

 Hmm. Perhaps what you're really arguing is pessimism about 'procedural'
 - which today is the mainstream paradigm of choice. The imperial nature of
 procedures makes it difficult to compose or integrate them in any
 extensional or collaborative manner - imperative works best when there is
 exactly one imperator (emperor). I can agree with that pessimism.

 In practice, the limits of scalability are very often limits of reasoning
 (too hard to reason about the interactions, safety, security, consistency,
 progress, process control, partial failure) or limits of extensibility (to
 inject or integrate new behaviors with existing systems requires invasive
 changes that are inconvenient or unauthorized). If either of those limits
 exist, scaling will stall. E.g. pure functional programming fails to scale
 for extensibility reasons, even though it admits a lot of natural
 parallelism.

 Of course, scalable performance is sometimes the issue, especially in
 models that have global 'instantaneous' relationships (e.g. ad-hoc
 non-modular logic programming) or global maintenance issues (like garbage
 collection). Unbounded scalability requires a consideration for locality of
 computation, and that it takes time for information to propagate.

 Actors model is one (of many) models that provides some of the
 considerations necessary for unbounded performance scalability. But actors
 model fails with regards to extensibility(*) and 

Re: [fonc] Layering, Thinking and Computing

2013-04-12 Thread Tristan Slominski
I had this long response drafted criticizing Bloom/CALM and Lightweight
Time Warps, when I realized that we are probably again not aligned as to
which meta level we're discussing.

(my main criticism of Bloom/CALM was assumption of timesteps, which is an
indicator of a meta-framework relying on something else to implement it
within reality; and my criticism of Lightweight Time Warps had to do with
that it is a protocol for message-driven simulation, which also needs an
implementor that touches reality; "synchronous reactive programming" has
the word "synchronous" in it) - hence my assertion that this is more meta
level than actors.

I think you and I personally care about different things. I want a
computational model that is as close to how the Universe works as possible,
with a minimalistic set of constructs from which everything else can be
built. Hence my references to cellular automata and Wolfram's hobby of
searching for the Universe. Anything which starts as synchronous cannot
be minimalistic because that's not what we observe in the world, our world
is asynchronous, and if we disagree on this axiom, then so much for that :D

But actors model fails with regards to extensibility(*) and reasoning


Those are concerns of an imperator, are they not? Again, I'm not saying
you're wrong, I'm trying to highlight that our goals differ.

But, without invasive code changes or some other form of cheating (e.g.
 global reflection) it can be difficult to obtain the name of an actor that
 is part of an actor configuration.


Again, this is ignorance of the power of Object Capability and the Actor
Model itself. The above is forbidden in the actor model unless the
configuration explicitly sends you an address in the message. My earlier
comment about Akka refers to this same mistake.

However, you do bring up interesting meta-level reasoning complaints
against the actor model. I'm not trying to dismiss them away or anything.
As I mentioned before, that list is a good guide as to what meta-level
programmers care about when writing programs. It would be great if actors
could make it easier... and I'm probably starting to get lost here between
the meta-levels again :/

Which brings me to a question. Am I the only one that loses track of which
meta-level I'm reasoning in, or is this a common occurrence? Bringing it back
to the topic somewhat, how do people handle reasoning about all the
different layers (meta-levels) when thinking about computing?


On Wed, Apr 10, 2013 at 12:21 PM, David Barbour dmbarb...@gmail.com wrote:

 On Wed, Apr 10, 2013 at 5:35 AM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 I think it's more of a pessimism about other models. [..] My
 non-pessimism about actors is linked to Wolfram's cellular automata Turing
 machine [..] overwhelming consideration across all those hints is
 unbounded scalability.


 I'm confused. Why would you be pessimistic about non-actor models when
 your argument is essentially that very simple, deterministic, non-actor
 models can be both Turing complete and address unbounded scalability?

 Hmm. Perhaps what you're really arguing is pessimism about 'procedural'
 - which today is the mainstream paradigm of choice. The imperial nature of
 procedures makes it difficult to compose or integrate them in any
 extensional or collaborative manner - imperative works best when there is
 exactly one imperator (emperor). I can agree with that pessimism.

 In practice, the limits of scalability are very often limits of reasoning
 (too hard to reason about the interactions, safety, security, consistency,
 progress, process control, partial failure) or limits of extensibility (to
 inject or integrate new behaviors with existing systems requires invasive
 changes that are inconvenient or unauthorized). If either of those limits
 exist, scaling will stall. E.g. pure functional programming fails to scale
 for extensibility reasons, even though it admits a lot of natural
 parallelism.

 Of course, scalable performance is sometimes the issue, especially in
 models that have global 'instantaneous' relationships (e.g. ad-hoc
 non-modular logic programming) or global maintenance issues (like garbage
 collection). Unbounded scalability requires a consideration for locality of
 computation, and that it takes time for information to propagate.

 Actors model is one (of many) models that provides some of the
 considerations necessary for unbounded performance scalability. But actors
 model fails with regards to extensibility(*) and reasoning. So do most of
 the other models you mention - e.g. cellular automatons are even less
 extensible than actors (cells only talk to a fixed set of immediate
 neighbors), though one can address that with a notion of visitors (mobile
 agents).

 From what you say, I get the impression that you aren't very aware of
 other models that might compete with actors, that attempt to address not
 only unbounded performance scalability but some of the other limiting
 

Re: [fonc] Layering, Thinking and Computing

2013-04-12 Thread John Pratt

I feel like these discussions are tangential to the larger issues
brought up on FONC and just serve to indulge personal interest
discussions.  Aren't any of us interested in revolution?  It won't
start with digging into existing stuff like this.


On Apr 12, 2013, at 11:13 AM, Tristan Slominski wrote:

 oops, I forgot to edit this part:
 
  and my criticism of Lightweight Time Warps had to do with that it is a 
 protocol for message-driven simulation, which also needs an implementor that 
 touches reality
 
 It should have read:
 
 and my criticism of Lightweight Time Warps had to do with that it is a 
 protocol for message-driven simulation and (I think) actors are minimal 
 implementors of message-driven protocols
 
 
 On Fri, Apr 12, 2013 at 1:07 PM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:
 I had this long response drafted criticizing Bloom/CALM and Lightweight Time 
 Warps, when I realized that we are probably again not aligned as to which 
 meta level we're discussing. 
 
 (my main criticism of Bloom/CALM was assumption of timesteps, which is an 
 indicator of a meta-framework relying on something else to implement it 
 within reality; and my criticism of Lightweight Time Warps had to do with 
 that it is a protocol for message-driven simulation, which also needs an 
 implementor that touches reality; "synchronous reactive programming" has the
 word "synchronous" in it) - hence my assertion that this is more meta level
 than actors.
 
 I think you and I personally care about different things. I want a 
 computational model that is as close to how the Universe works as possible, 
 with a minimalistic set of constructs from which everything else can be 
 built. Hence my references to cellular automata and Wolfram's hobby of 
 searching for the Universe. Anything which starts as synchronous cannot be 
 minimalistic because that's not what we observe in the world, our world is 
 asynchronous, and if we disagree on this axiom, then so much for that :D
 
 But actors model fails with regards to extensibility(*) and reasoning
 
 Those are concerns of an imperator, are they not? Again, I'm not saying 
 you're wrong, I'm trying to highlight that our goals differ.
 
 But, without invasive code changes or some other form of cheating (e.g. 
 global reflection) it can be difficult to obtain the name of an actor that is 
 part of an actor configuration. 
 
 Again, this is ignorance of the power of Object Capability and the Actor 
 Model itself. The above is forbidden in the actor model unless the 
 configuration explicitly sends you an address in the message. My earlier 
 comment about Akka refers to this same mistake.
 
 However, you do bring up interesting meta-level reasoning complaints against 
 the actor model. I'm not trying to dismiss them away or anything. As I 
 mentioned before, that list is a good guide as to what meta-level programmers 
 care about when writing programs. It would be great if actors could make it 
 easier... and I'm probably starting to get lost here between the meta-levels 
 again :/
 
 Which brings me to a question. Am I the only one that loses track of which
 meta-level I'm reasoning in, or is this a common occurrence? Bringing it back to
 the topic somewhat, how do people handle reasoning about all the different 
 layers (meta-levels) when thinking about computing? 
 
 
 On Wed, Apr 10, 2013 at 12:21 PM, David Barbour dmbarb...@gmail.com wrote:
 On Wed, Apr 10, 2013 at 5:35 AM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:
 I think it's more of a pessimism about other models. [..] My non-pessimism 
 about actors is linked to Wolfram's cellular automata Turing machine [..]
 overwhelming consideration across all those hints is unbounded scalability. 
 
 I'm confused. Why would you be pessimistic about non-actor models when your 
 argument is essentially that very simple, deterministic, non-actor models can 
 be both Turing complete and address unbounded scalability? 
 
 Hmm. Perhaps what you're really arguing is pessimism about 'procedural' -
 which today is the mainstream paradigm of choice. The imperial nature of 
 procedures makes it difficult to compose or integrate them in any extensional 
 or collaborative manner - imperative works best when there is exactly one 
 imperator (emperor). I can agree with that pessimism.
 
 In practice, the limits of scalability are very often limits of reasoning 
 (too hard to reason about the interactions, safety, security, consistency, 
 progress, process control, partial failure) or limits of extensibility (to 
 inject or integrate new behaviors with existing systems requires invasive 
 changes that are inconvenient or unauthorized). If either of those limits 
 exist, scaling will stall. E.g. pure functional programming fails to scale 
 for extensibility reasons, even though it admits a lot of natural parallelism.
 
 Of course, scalable performance is sometimes the issue, especially in models 
 that have global 

Re: [fonc] Layering, Thinking and Computing

2013-04-12 Thread David Barbour
On Fri, Apr 12, 2013 at 11:07 AM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 my main criticism of Bloom/CALM was assumption of timesteps, which is an
 indicator of a meta-framework relying on something else to implement it
 within reality


At the moment, we don't know whether or not reality has discrete
timesteps [1]. I would not dismiss a model as being distinguishable from
'reality' on that basis. A meta-framework is necessary because we're
implementing Bloom/CALM not directly in reality, but rather upon a silicon
chip that enforces sequential computation even where it is not most
appropriate.

[1] http://en.wikipedia.org/wiki/Chronon


 ; and my criticism of Lightweight Time Warps had to do with that it is a
 protocol for message-driven simulation and (I think) actors are minimal
 implementors of message-driven protocols [from edit]


This criticism isn't relevant for the same reason that your "but you can
implement lambda calculus in actors" arguments weren't relevant. The
properties of the abstraction must be considered separately from the
properties of the model implementing it.

It is true that you can implement a time warp system with actors. It is
also the case that you can implement actors in a time warp system. Either
direction will involve a framework or global transform.



 I think you and I personally care about different things. I want a
 computational model that is as close to how the Universe works as possible,


You want more than "close to how the Universe works."

For example, you also want symbiotic autopoietic and allopoietic systems
and antifragility, and possibly even object capability security. Do you
deny this? But you should consider whether your wants might be in conflict
with one another. And, if so, you should weigh and consider what you want
more.

I believe they are in conflict with one another. Reality has a lot of
properties that make it a difficult programming model. There are reasons we
write software instead of hardware. There is a related discussion, "[fonc]
Physics and Types", from August 2011. There, I say:


Physics has a very large influence on a lot of language designs and
programming models, especially those oriented around concurrency and
communication. We cannot fight physics, because physics will ruthlessly
violate our abstractions and render our programming models unscalable, or
non-performant, or inconsistent (pick two).

But we want programming to be easier than physics. We need composition
(like Lego bricks), integration (without 'impedance mismatch'), open
extension, and abstraction (IMO, in roughly that order). So the trick is to
isolate behaviors that we can utilize and feasibly implement at arbitrary
scales (small and large), yet that support our other desiderata.


As I said earlier, the limits on growth, and thus on scalability, are often
limits of reasoning or extensibility. But it is not impossible to develop
programming models that align well with physical constraints but that do
not sacrifice reasoning and extensibility.

Based on our discussions so far, you seem to believe that if you develop a
model very near our universe, the rest will follow - that actor systems
will shine. But I am not aware of any sound argument that will take you
from "this model works just like our universe" to "this model is usable by
collaborating humans."



a minimalistic set of constructs from which everything else can be built


A minimum is just a model from which you can't take anything away. There are
lots of Turing complete minima. Also, there is no guarantee that our
universe is minimalistic.



Anything which starts as synchronous cannot be minimalistic because
 that's not what we observe in the world, our world is asynchronous, and if
 we disagree on this axiom, then so much for that :D


I believe our world is 'synchronous' in the sense of things happening at
the same time in different places. I believe our world is 'synchronous' in
the sense that two photons pointed in the same direction will (barring
interference) move the same distance over the same period. If you send
those photons at the same time, they will arrive at the same time.

It seems, at the physical layer, that 'asynchronous' only happens when you
have some sort of intermediate storage or non-homogeneous delay. And even
in those cases, if I were to model the storage, transport, and retrieval
processes down to the physical minutiae, every micro process would be
end-to-end synchronous - or close enough to reason about and model them
that way. (I'm not sure about the quantum layers.)

Asynchronous communication can be a useful *abstraction* because it allows
us to hide some of the physical minutiae and heterogeneous computations.
But asynchrony isn't the only choice in that role. E.g. if we model static
latencies, those can also hide flexible processing, while perhaps being
easier to reason about for real-time systems.
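
As a toy illustration of that alternative (my sketch, nothing more): if
every link carries a statically-known latency, the delivery time of a
message is computable at the instant it is sent, so an entire schedule
can be reasoned about up front, discrete-event style.

import Data.List (sortOn)

type Time = Int
data Event msg = Event { at :: Time, payload :: msg } deriving Show

-- A link with fixed latency: delivery time is known at the send.
send :: Time -> Time -> msg -> Event msg
send latency now m = Event (now + latency) m

-- A toy scheduler: handle events strictly in delivery-time order.
run :: [Event msg] -> [(Time, msg)]
run = map (\e -> (at e, payload e)) . sortOn at

main :: IO ()
main = print (run [send 3 0 "a", send 1 0 "b", send 2 1 "c"])
-- [(1,"b"),(3,"a"),(3,"c")], fully determined before anything "runs"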



 But, without invasive code changes or some other form of cheating (e.g.
 

Re: [fonc] Layering, Thinking and Computing

2013-04-12 Thread David Barbour
Existing stuff from outside the mainstream is exactly what you should be
digging into.
On Apr 12, 2013 12:08 PM, John Pratt jpra...@gmail.com wrote:


 I feel like these discussions are tangential to the larger issues
 brought up on FONC and just serve to indulge personal interest
 discussions.  Aren't any of us interested in revolution?  It won't
 start with digging into existing stuff like this.


 On Apr 12, 2013, at 11:13 AM, Tristan Slominski wrote:

 oops, I forgot to edit this part:

  and my criticism of Lightweight Time Warps had to do with that it is a
 protocol for message-driven simulation, which also needs an implementor
 that touches reality


 It should have read:

 and my criticism of Lightweight Time Warps had to do with that it is a
 protocol for message-driven simulation and (I think) actors are minimal
 implementors of message-driven protocols



 On Fri, Apr 12, 2013 at 1:07 PM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 I had this long response drafted criticizing Bloom/CALM and Lightweight
 Time Warps, when I realized that we are probably again not aligned as to
 which meta level we're discussing.

 (my main criticism of Bloom/CALM was assumption of timesteps, which is an
 indicator of a meta-framework relying on something else to implement it
 within reality; and my criticism of Lightweight Time Warps had to do with
 that it is a protocol for message-driven simulation, which also needs an
 implementor that touches reality; "synchronous reactive programming" has
 the word "synchronous" in it) - hence my assertion that this is more meta
 level than actors.

 I think you and I personally care about different things. I want a
 computational model that is as close to how the Universe works as possible,
 with a minimalistic set of constructs from which everything else can be
 built. Hence my references to cellular automata and Wolfram's hobby of
 searching for the Universe. Anything which starts as synchronous cannot
 be minimalistic because that's not what we observe in the world, our world
 is asynchronous, and if we disagree on this axiom, then so much for that :D

 But actors model fails with regards to extensibility(*) and reasoning


 Those are concerns of an imperator, are they not? Again, I'm not saying
 you're wrong, I'm trying to highlight that our goals differ.

 But, without invasive code changes or some other form of cheating (e.g.
 global reflection) it can be difficult to obtain the name of an actor that
 is part of an actor configuration.


 Again, this is ignorance of the power of Object Capability and the Actor
 Model itself. The above is forbidden in the actor model unless the
 configuration explicitly sends you an address in the message. My earlier
 comment about Akka refers to this same mistake.

 However, you do bring up interesting meta-level reasoning complaints
 against the actor model. I'm not trying to dismiss them away or anything.
 As I mentioned before, that list is a good guide as to what meta-level
 programmers care about when writing programs. It would be great if actors
 could make it easier... and I'm probably starting to get lost here between
 the meta-levels again :/

 Which brings me to a question. Am I the only one that loses track of
 which meta-level I'm reasoning in, or is this a common occurrence? Bringing it
 back to the topic somewhat, how do people handle reasoning about all the
 different layers (meta-levels) when thinking about computing?


 On Wed, Apr 10, 2013 at 12:21 PM, David Barbour dmbarb...@gmail.com wrote:

 On Wed, Apr 10, 2013 at 5:35 AM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 I think it's more of a pessimism about other models. [..] My
 non-pessimism about actors is linked to Wolfram's cellular automata Turing
 machine [..] overwhelming consideration across all those hints is
 unbounded scalability.


 I'm confused. Why would you be pessimistic about non-actor models when
 your argument is essentially that very simple, deterministic, non-actor
 models can be both Turing complete and address unbounded scalability?

 Hmm. Perhaps what you're really arguing is pessimism about
 'procedural' - which today is the mainstream paradigm of choice. The
 imperial nature of procedures makes it difficult to compose or integrate
 them in any extensional or collaborative manner - imperative works best
 when there is exactly one imperator (emperor). I can agree with that
 pessimism.

 In practice, the limits of scalability are very often limits of
 reasoning (too hard to reason about the interactions, safety, security,
 consistency, progress, process control, partial failure) or limits of
 extensibility (to inject or integrate new behaviors with existing systems
 requires invasive changes that are inconvenient or unauthorized). If either
 of those limits exist, scaling will stall. E.g. pure functional programming
 fails to scale for extensibility reasons, even though it admits a lot of
 natural parallelism.

 Of course, 

Re: [fonc] Layering, Thinking and Computing

2013-04-10 Thread Tristan Slominski
 I did not specify that there is only one bridge, nor that you finish
 processing a message from a bridge before we start processing another next.
 If you model the island as a single actor, you would fail to represent many
 of the non-deterministic interactions possible in the 'island as a set' of
 actors.


Ok, I think I see the distinction you're painting here from a meta
perspective of reasoning about an actor system. I keep on jumping back
into the message-only perspective, where the difference is (it seems)
unknowable. But with meta reasoning about the system, which is what I think
you've been trying to get me to see, the difference matters and complicates
reasoning about the thing as a whole.

I cannot fathom your optimism.


I think it's more of a pessimism about other models that leads me to be
non-pessimistic about actors :D. I have some specific goals I want to
achieve with computation, and actors are the only things right now that
seem to fit.

What we can say of a model is often specific to how we implemented it, the
 main exceptions being compositional properties (which are trivially a
 superset of invariants). Ad-hoc reasoning easily grows intractable and
 ambiguous to the extent the number of possibilities increases or depends on
 deep implementation details. And actors model seems to go out of its way to
 make reasoning difficult - pervasive state, pervasive non-determinism,
 negligible ability to make consistent observations or decisions involving
 the states of two or more actors.
 I think any goal to lower those comprehension barriers will lead to
 development of new models. Of course, they might first resolve as
 frameworks or design patterns that get used pervasively (~ global
 transformation done by hand, ugh). Before RDP, there were reactive design
 patterns I had developed in the actors model while pursuing greater
 consistency and resilience.


I think we're back to different reference points, and different goals. What
follows is not a comment on what you said but my attempt to communicate why
I'm going about it the way I am and continue to resist what I'm sure are
sound software meta-reasoning practices.

My non-pessimism about actors is linked to Wolfram's cellular automata
Turing machine (
http://blog.wolfram.com/2007/10/24/the-prize-is-won-the-simplest-universal-turing-machine-is-proved/).
My continuing non-pessimism about interesting computation being possible in
actors is his search for our universe (
http://blog.wolfram.com/2007/09/11/my-hobby-hunting-for-our-universe/).
Cellular automata are not actors, I get that, but these to me are the
hints. Another hint is the structure of HTMs and the algorithm reverse
engineered from the human neocortex (
https://www.numenta.com/htm-overview/education/HTM_CorticalLearningAlgorithms.pdf).
Another hint is what we call mesh networks. An overwhelming consideration
across all those hints is unbounded scalability.

Cheers,

Tristan

On Tue, Apr 9, 2013 at 6:25 PM, David Barbour dmbarb...@gmail.com wrote:

 On Tue, Apr 9, 2013 at 12:44 PM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 popular implementations (like Akka, for example) give up things such as
 Object Capability for nothing.. it's depressing.


 Indeed. Though, frameworks shouldn't rail too much against their hosts.



 I still prefer to model them as in "every message is delivered." It wasn't
 I who challenged this original guaranteed-delivery condition but Carl
 Hewitt himself.


 It is guaranteed in the original formalism, and even Hewitt can't change
 that. But you can model loss of messages (e.g. by explicitly modeling a
 lossy network).


 You've described composing actors into actor configurations :D; from the
 outside world, your island looks like a single actor.


 I did not specify that there is only one bridge, nor that you finish
 processing a message from a bridge before we start processing another next.
 If you model the island as a single actor, you would fail to represent many
 of the non-deterministic interactions possible in the 'island as a set' of
 actors.


 I don't think we have created enough tooling or understanding to fully
 grok the consequences of the actor model yet. Where's our math for emergent
 properties and swarm dynamics of actor systems? [..] Where is our reasoning
 about symbiotic autopoietic and allopoietic systems? This is, in my view,
  where the actor systems will shine


 I cannot fathom your optimism.

 What we can say of a model is often specific to how we implemented it, the
 main exceptions being compositional properties (which are trivially a
 superset of invariants). Ad-hoc reasoning easily grows intractable and
 ambiguous to the extent the number of possibilities increases or depends on
 deep implementation details. And actors model seems to go out of its way to
 make reasoning difficult - pervasive state, pervasive non-determinism,
 negligible ability to make consistent observations or decisions involving
 the 

Re: [fonc] Layering, Thinking and Computing

2013-04-10 Thread David Barbour
On Wed, Apr 10, 2013 at 5:35 AM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 I think it's more of a pessimism about other models. [..] My
 non-pessimism about actors is linked to Wolfram's cellular automata Turing
 machine [..] overwhelming consideration across all those hints is
 unbounded scalability.


I'm confused. Why would you be pessimistic about non-actor models when your
argument is essentially that very simple, deterministic, non-actor models
can be both Turing complete and address unbounded scalability?

Hmm. Perhaps what you're really arguing is pessimism about 'procedural' -
which today is the mainstream paradigm of choice. The imperial nature of
procedures makes it difficult to compose or integrate them in any
extensional or collaborative manner - imperative works best when there is
exactly one imperator (emperor). I can agree with that pessimism.

In practice, the limits of scalability are very often limits of reasoning
(too hard to reason about the interactions, safety, security, consistency,
progress, process control, partial failure) or limits of extensibility (to
inject or integrate new behaviors with existing systems requires invasive
changes that are inconvenient or unauthorized). If either of those limits
exist, scaling will stall. E.g. pure functional programming fails to scale
for extensibility reasons, even though it admits a lot of natural
parallelism.

Of course, scalable performance is sometimes the issue, especially in
models that have global 'instantaneous' relationships (e.g. ad-hoc
non-modular logic programming) or global maintenance issues (like garbage
collection). Unbounded scalability requires a consideration for locality of
computation, and that it takes time for information to propagate.

Actors model is one (of many) models that provides some of the
considerations necessary for unbounded performance scalability. But actors
model fails with regards to extensibility(*) and reasoning. So do most of
the other models you mention - e.g. cellular automatons are even less
extensible than actors (cells only talk to a fixed set of immediate
neighbors), though one can address that with a notion of visitors (mobile
agents).

From what you say, I get the impression that you aren't very aware of other
models that might compete with actors, that attempt to address not only
unbounded performance scalability but some of the other limiting factors on
growth. Have you read about Bloom and the CALM conjecture? Lightweight time
warp? What do you know of synchronous reactive programming?

There is a lot to be optimistic about, just not with actors.

(*) People tend to think of actors as extensible since you just need names
of actors. But, without invasive code changes or some other form of
cheating (e.g. global reflection) it can be difficult to obtain the name of
an actor that is part of an actor configuration. This wouldn't be a problem
except that actors pervasively encapsulate state, and ad-hoc extension of
applications often requires access to internal state [1], especially to
data models represented in that state [2].
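
To illustrate the footnote with a sketch (Haskell channels standing in
for actor addresses; the names are hypothetical): the inner actor's
address below never escapes the configuration, so no outside code can
reach the state it encapsulates without invasively editing
newConfiguration itself.

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Monad (forever)

-- Returns only the public address; the internal actor's address (and
-- hence the state behind it) is unreachable from outside.
newConfiguration :: IO (Chan String)
newConfiguration = do
  public   <- newChan
  internal <- newChan   -- the state-holding actor, fully encapsulated
  _ <- forkIO $ forever $ readChan internal >>= putStrLn
  _ <- forkIO $ forever $ do
         m <- readChan public
         writeChan internal ("seen: " ++ m)   -- the only route inward
  return public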

Regards,

Dave

[1] http://awelonblue.wordpress.com/2012/10/21/local-state-is-poison/
[2] http://awelonblue.wordpress.com/2011/06/15/data-model-independence/


Re: [fonc] Layering, Thinking and Computing

2013-04-09 Thread Chris Warburton
David Barbour dmbarb...@gmail.com writes:

 relying on global knowledge when designing an actor system seems, to me,
 not to be the right way


 In our earlier discussion, you mentioned that actors model can be used to
 implement lambda calculus. And this is true, given bog standard actors
 model. But do you believe you can explain it from your 'physics' view
 point? How can you know that you've implemented, say, a Fibonacci
 function with actors, if you forbid knowledge beyond what can be discovered
 with messages? Especially if you allow message loss?

snip

 Any expressiveness issues that can be attributed to actors model are a
 consequence of the model. It doesn't matter whether you implement it as a
 language or a framework or even use it as a design pattern.

 Of course, one might overcome certain expressiveness issues by stepping
 outside the model, which may be easy for a framework or certain
 multi-paradigm languages. But to the extent you do so, you can't claim
 you're benefiting from actors model. It's a little sad when we work around
 our models rather than with them.

I think in these kinds of discussions it's important to keep in mind
Goedel's limits on self-reference. Any Turing-complete system is
equivalent to any other in the sense that they can implement each other,
but when we reason about their properties it makes a difference whether
our 'frame of reference' is the implemented language or the
implementation language.

For example, let's say we implement lambda calculus in the actor model:
 - From the lambda calculus frame of reference I *know* that
   λx. a x == a (eta-equivalence), which I can use in my reasoning.
 - From the actor model frame of reference I *cannot* know that an
   encoding of λx. a x == an encoding of a, since it won't be. More
   importantly, reducing an encoding of λx. a x will give different
   computational behaviour to reducing an encoding of a. I cannot
   interchange one for the other in an arbitrary expression and *know*
   that they'll reduce to the same thing, since this would solve the
   halting problem.

We can step outside the actor model and prove eta-equivalence of the
encoded terms (eg. it would follow trivially if we proved that we've
correctly implemented lambda calculus), but the actor model itself will
never believe such a proof.
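
For a concrete taste of this in a real implementation (a Haskell aside,
not part of the argument above): eta-equivalent terms can be told apart
operationally, because eta-expansion changes what happens when a term is
forced.

import Control.Exception (evaluate)

f :: Int -> Int
f = undefined        -- plays the role of "a", a diverging term

etaF :: Int -> Int
etaF = \x -> f x     -- its eta-expansion, λx. a x

main :: IO ()
main = do
  _ <- evaluate etaF -- fine: a lambda is already a value
  _ <- evaluate f    -- blows up: forcing f hits undefined
  return ()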

To use David's analogy, there are some desirable properties that
programmers exploit which are inherently 3D and cannot be represented
in the 2D world. Of course, there are also 4D properties which our
3D infrastructure cannot represent, for example correct refactorings
that our IDE will think are unsafe, correct optimisations which our
compiler will think are unsafe, etc. At some point we have to give up
and claim that the meta-meta-meta-...-system is enough for practical
purposes and obviously correct in its implementation.

The properties that David is interested in preserving under composition
(termination, maintainability, security, etc.) are very meta, so it's
easy for them to become unrepresentable and difficult to encode when a
language/system/model isn't designed with them in mind.

Note that the above argument is based on Goedel's first incompleteness
theorem: no consistent system of sufficient expressive power can prove all true statements. The halting
problem is just the most famous example of such a statement for
Turing-complete systems.

Goedel's second incompleteness theorem is also equally applicable here:
no system can prove its own consistency (or that of a system with
equivalent expressive power, ie. anything which can emulate it). In
other words, no system (eg. the actor model, lambda calculus, etc.) can
prove/reason about its own:

 - Consistency/correctness: a correct system could produce a correct
   proof that it's correct, or an incorrect system could produce an
   incorrect proof that it's correct; by trusting such a proof we become
   inconsistent.
 - Termination/productivity: a terminating/productive system could
   produce a proof that it's terminating/productive, or a divergent
   (non-terminating, non-productive) system could claim, but fail to
   produce, a proof that it's terminating/productive. By trusting such a
   proof we allow non-termination (or we could optimise it away,
   becoming inconsistent).
 - Security: a secure system could produce a proof that it's secure, or
   an insecure system could be tricked into accepting that it's
   secure. By trusting such a proof, we become insecure.
 - Reliability: a reliable system could produce a proof that it's
   reliable, or an unreliable system could fail in a way that produces
   an incorrect proof of its reliability. By trusting such a proof, we
   become unreliable.
 - Etc.

For such properties we *must* reason in an external language/system,
since Goedel showed that such loops cannot be closed without producing
inconsistency (or the analogous 'bad' outcome).

Regards,
Chris

Re: [fonc] Layering, Thinking and Computing

2013-04-09 Thread David Barbour
On Tue, Apr 9, 2013 at 5:21 AM, Chris Warburton
chriswa...@googlemail.com wrote:


 To use David's analogy, there are some desirable properties that
 programmers exploit which are inherently 3D and cannot be represented
 in the 2D world. Of course, there are also 4D properties which our
 3D infrastructure cannot represent, for example correct refactorings
 that our IDE will think are unsafe, correct optimisations which our
 compiler will think are unsafe, etc. At some point we have to give up
 and claim that the meta-meta-meta--system is enough for practical
 purposes and obviously correct in its implementation.

 The properties that David is interested in preserving under composition
 (termination, maintainability, security, etc.) are very meta, so it's
 easy for them to become unrepresentable and difficult to encode when a
 language/system/model isn't designed with them in mind.


Well said.


Re: [fonc] Layering, Thinking and Computing

2013-04-09 Thread John Carlson
So it's message recognition and not actor recognition?  Can actors
collaborate to recognize a message?  I'm trying to put this in terms of
subjective/objective.  In a subjective world there are only messages
(waves).  In an objective world there are computers and routers and
networks (actors, locations, particles).
On Apr 8, 2013 4:52 PM, Tristan Slominski tristan.slomin...@gmail.com
wrote:

 Therefore, with respect to this property, you cannot (in general) reason
 about or treat groups of two actors as though they were a single actor.


 This is incorrect, well, it's based on a false premise.. this part is
 incorrect/invalid? (an appropriate word escapes me):

 But two actors can easily (by passing messages in circles) send out an
 infinite number of messages to other actors upon receiving a single message.


 I see it as the equivalent of saying: "I can write an infinite loop,
 therefore, I cannot reason about functions"

 As you note, actors are not unique in their non-termination. But that
 misses the point. The issue was our ability to reason about actors
 compositionally, not whether termination is a good property.


 The above statement, in my mind, sort of misunderstands reasoning about
 actors. What does it mean for an actor to terminate? The _only_ way you
 will know is if the actor sends you a message that it's done. Any
 reasoning about actors and their compositionality must be done in terms of
 messages sent and received. Reasoning in other ways does not make sense in
 the actor model (as far as I understand). This is how I model it in my
 head:

 It's sort of the analog of asking "what happened before the Big Bang."
 Well, there was no time before the Big Bang, so asking about "before"
 doesn't make sense. In a similar way, reasoning about actor systems with
 anything except messages doesn't make sense. To use another physics
 analogy, there is no privileged frame of reference in actors, you only get
 messages. It's actually a really well abstracted system that requires no
 other abstractions. Actors and actor configurations (groupings of actors)
 become indistinguishable, because they are logically equivalent for
 reasoning purposes. The only way to interact with either is to send it a
 message and to receive a message. Whether it's millions of actors or just
 one doesn't matter, because *you can't tell the difference* (remember,
 there's no privileged frame of reference). To instrument an actor
 configuration, you need to put actors in front of it. But to the user of
 such instrumented configuration, they won't be able to tell the difference.
 And so on and so forth, It's Actors All The Way Down.

 ...

 I think we found common ground/understanding on other things.


 On Sun, Apr 7, 2013 at 6:40 PM, David Barbour dmbarb...@gmail.com wrote:

 On Sun, Apr 7, 2013 at 2:56 PM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 stability is not necessarily the goal. Perhaps I'm more in the
 biomimetic camp than I think.


 Just keep in mind that the real world has quintillions of bugs. In
 software, humans are probably still under a trillion.  :)






Re: [fonc] Layering, Thinking and Computing

2013-04-09 Thread Tristan Slominski
I think I am now bogged down in a Meta Tarpit :D

A good question to ask is: can I correctly and efficiently implement
 actors model, given these physical constraints? One might explore the
 limitations of scalability in the naive model. Another good question to ask
 is: is there a not-quite actors model suitable for a more
 scalable/efficient/etc. implementation. (But note that the not-quite
 actors model will never quite be the actors model.)


The problem with the above is that popular implementations (like Akka, for
example) give up things such as Object Capability for nothing.. it's
depressing. From commentary by one of the framework's creators himself, as
far as I understand, this was not a conscious choice, but a result of
unfamiliarity with the model.

Actors makes a guarantee that every message is delivered (along with a nigh
 uselessly weak fairness property), but for obvious reasons guaranteed
 delivery is difficult to scale to distributed systems. And it seems you're
 entertaining whether *ad-hoc message loss* is suitable.


I still prefer to model them as in "every message is delivered." It wasn't I
who challenged this original guaranteed-delivery condition but Carl Hewitt
himself. (see:
http://letitcrash.com/post/20964174345/carl-hewitt-explains-the-essence-of-the-actor
at timestamp 14:00). I was quite surprised by this (
https://groups.google.com/d/msg/computational-actors-guild/Xi-aGdSotxw/nlq8Ib0fDaMJ)

Consider an alternative: explicitly model islands (within which no message
 loss occurs) and serialized connections (bridges) between them. Disruption
 and message loss could then occur in a controlled manner: a particular
 bridge is lost, with all of the messages beyond a certain point falling
 into the ether. Compared to ad-hoc message loss, the bridged islands design
 is much more effective for reasoning about and recovering from partial
 failure.


You've described composing actors into actor configurations :D; from the
outside world, your island looks like a single actor.
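
For what it's worth, here is how I would sketch the bridged-islands idea
(channels as actor addresses; all names hypothetical): within an island,
writeChan never loses a message, while the bridge is an explicit
forwarding actor that can be severed, with everything still queued behind
it falling into the ether.

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Concurrent.MVar (MVar, newMVar, readMVar, modifyMVar_)
import Control.Monad (forever, when)

-- A bridge to a remote island: returns the local end plus a "sever"
-- action that models losing the bridge.
newBridge :: Chan a -> IO (Chan a, IO ())
newBridge remote = do
  queue <- newChan
  up    <- newMVar True
  _ <- forkIO $ forever $ do
         m  <- readChan queue
         ok <- readMVar up
         when ok (writeChan remote m)  -- once severed, messages are dropped
  let sever = modifyMVar_ up (\_ -> return False)
  return (queue, sever)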

 I find "actors can only process one message at a time" is an interesting
 constraint on concurrency, and certainly a useful one for reasoning. And
 it's certainly relevant with respect to composition (ability to treat an
 actor configuration as an actor) and decomposition (ability to divide an
 actor into an actor configuration).


From the same video I linked above, at time 5:16, Carl Hewitt explains "one
message at a time."
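
In channel terms, the property is just a single serial loop draining a
mailbox (a minimal sketch, not Hewitt's formulation):

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan)
import Control.Monad (forever)

-- One thread drains the mailbox, so handling is serialized no matter
-- how many concurrent senders hold the address.
spawn :: (msg -> IO ()) -> IO (Chan msg)
spawn handle = do
  mailbox <- newChan
  _ <- forkIO $ forever (readChan mailbox >>= handle)
  return mailbox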

Do you also think zero and one are uninteresting numbers?


I've spent the equivalent of a semester reading/learning axiomatic set
theory and trying to understand why 0+1=1 and 1+1=2. I definitely don't
think they are uninteresting :D

It seems you misrepresent your true opinion and ignore difficult,
 topic-relevant issues by re-scoping discussion to one of explanatory power.


This may very well be the case, hence my earlier comment of being stuck in
a Meta Tarpit.

But to the extent you do so, you can't claim you're benefiting from actors
 model. It's a little sad when we work around our models rather than with
 them.


I don't think we have created enough tooling or understanding to fully grok
the consequences of the actor model yet. Where's our math for emergent
properties and swarm dynamics of actor systems? Where are our tools for
that? Even with large companies operating server farms, the obsession with
control of individual components is pervasive. Even ZooKeeper, a Java-world
all-time favorite distributed coordination system, has a *master* node that
is elected. If that's our pinnacle, no wonder we can't benefit from
the actor model. Where is our reasoning about symbiotic autopoietic and
allopoietic systems? (earlier reference to
http://pleiad.dcc.uchile.cl/_media/bic2007/papers/conscientioussoftwarecc.pdf)
This is, in my view, where the actor systems will shine, but I haven't
seen (it could be my ignorance) sustained community searching to discover
and command such actor configurations. This is what I'm trying to highlight
when I discuss the appropriate frame of reference for actor system
programming. (This can be rooted in my ignorance, in which case, I would
love some pointers to how this approach failed in the past).

That said, focusing on explanatory power could be interesting - e.g.
 comparing actors model with other physically-inspired models (e.g. time
 warp, cellular automata, synchronous reactive). To be honest, I think
 actors model will fare poorly. Where do 'references' occur in physics? What
 about fan-in and locality?


 Such issues include composition, decomposition, consistency, discovery,
 persistence, runtime update.



 But there are also a few systemic with respect to implementation - e.g.
 regarding garbage collection, process control, and partitioning or partial
 failure in distributed systems, and certain optimizations (inlining,
 mirroring). Actors really aren't as scalable as they promise without quite
 a few hacks.


That's some stuff that I'm interested in 

Re: [fonc] Layering, Thinking and Computing

2013-04-09 Thread David Barbour
On Tue, Apr 9, 2013 at 12:44 PM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 popular implementations (like Akka, for example) give up things such as
 Object Capability for nothing.. it's depressing.


Indeed. Though, frameworks shouldn't rail too much against their hosts.



 I still prefer to model them as in "every message is delivered." It wasn't I
 who challenged this original guaranteed-delivery condition but Carl Hewitt
 himself.


It is guaranteed in the original formalism, and even Hewitt can't change
that. But you can model loss of messages (e.g. by explicitly modeling a
lossy network).
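
A minimal sketch of that kind of explicit modeling (illustrative only;
channels as addresses, the drop probability made up): the network is
itself an actor that forwards each message only with some probability.

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Monad (forever, when)
import System.Random (randomRIO)

-- A lossy link in front of 'dest': each message is delivered with
-- probability deliverP and silently dropped otherwise.
lossyLink :: Double -> Chan a -> IO (Chan a)
lossyLink deliverP dest = do
  inbox <- newChan
  _ <- forkIO $ forever $ do
         m <- readChan inbox
         p <- randomRIO (0, 1 :: Double)
         when (p < deliverP) (writeChan dest m)
  return inbox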


 You've described composing actors into actor configurations :D; from the
 outside world, your island looks like a single actor.


I did not specify that there is only one bridge, nor that you finish
processing a message from a bridge before we start processing another next.
If you model the island as a single actor, you would fail to represent many
of the non-deterministic interactions possible in the 'island as a set' of
actors.


 I don't think we have created enough tooling or understanding to fully
 grok the consequences of the actor model yet. Where's our math for emergent
 properties and swarm dynamics of actor systems? [..] Where is our reasoning
 about symbiotic autopoietic and allopoietic systems? This is, in my view,
  where the actor systems will shine


I cannot fathom your optimism.

What we can say of a model is often specific to how we implemented it, the
main exceptions being compositional properties (which are trivially a
superset of invariants). Ad-hoc reasoning easily grows intractable and
ambiguous to the extent the number of possibilities increases or depends on
deep implementation details. And actors model seems to go out of its way to
make reasoning difficult - pervasive state, pervasive non-determinism,
negligible ability to make consistent observations or decisions involving
the states of two or more actors.

I think any goal to lower those comprehension barriers will lead to
the development of new models. Of course, they might first resolve as
frameworks or design patterns that get used pervasively (~ global
transformation done by hand, ugh). Before RDP, there were reactive design
patterns I had developed in the actors model while pursuing greater
consistency and resilience.

Regards,

Dave


Re: [fonc] Layering, Thinking and Computing

2013-04-08 Thread Tristan Slominski

 Therefore, with respect to this property, you cannot (in general) reason
 about or treat groups of two actors as though they were a single actor.


This is incorrect, well, it's based on a false premise.. this part is
incorrect/invalid? (an appropriate word escapes me):

But two actors can easily (by passing messages in circles) send out an
 infinite number of messages to other actors upon receiving a single message.


I see it as the equivalent of saying: I can write an infinite loop,
therefore, I cannot reason about functions
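
To make the disputed scenario concrete, here is a sketch (Python, invented
names, with a step-limited toy scheduler so the example itself halts) of
two actors passing messages in circles. Each behavior does strictly finite
work per message, yet the configuration never stops sending.

    from collections import deque

    mailbox = deque()

    def send(actor, msg):
        mailbox.append((actor, msg))

    def ping(msg):
        send(pong, msg + 1)   # finite work per message...

    def pong(msg):
        send(ping, msg + 1)   # ...but every receipt triggers another send

    send(ping, 0)             # one external message
    for _ in range(10):       # unbounded, this loop would never end
        actor, msg = mailbox.popleft()
        actor(msg)
    print("messages still in flight:", len(mailbox))   # never drains

Each actor, taken alone, terminates on every message; it is only the pair
that emits messages forever, which is exactly the property under dispute.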

As you note, actors are not unique in their non-termination. But that
 misses the point. The issue was our ability to reason about actors
 compositionally, not whether termination is a good property.


The above statement, in my mind, sort of misunderstands reasoning about
actors. What does it mean for an actor to terminate? The _only_ way you
will know is if the actor sends you a message that it's done. Any
reasoning about actors and their compositionality must be done in terms of
messages sent and received. Reasoning in other ways does not make sense in
the actor model (as far as I understand). This is how I model it in my
head:

It's sort of the analog of asking what happened before the Big Bang.
Well, there was no time before the Big Bang, so asking about before
doesn't make sense. In a similar way, reasoning about actor systems with
anything except messages doesn't make sense. To use another physics
analogy, there is no privileged frame of reference in actors, you only get
messages. It's actually a really well abstracted system that requires no
other abstractions. Actors and actor configurations (groupings of actors)
become indistinguishable, because they are logically equivalent for
reasoning purposes. The only way to interact with either is to send it a
message and to receive a message. Whether it's millions of actors or just
one doesn't matter, because *you can't tell the difference* (remember,
there's no privileged frame of reference). To instrument an actor
configuration, you need to put actors in front of it. But to the user of
such instrumented configuration, they won't be able to tell the difference.
And so on and so forth; It's Actors All The Way Down.

...

I think we found common ground/understanding on other things.


On Sun, Apr 7, 2013 at 6:40 PM, David Barbour dmbarb...@gmail.com wrote:

 On Sun, Apr 7, 2013 at 2:56 PM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 stability is not necessarily the goal. Perhaps I'm more in the biomimetic
 camp than I think.


 Just keep in mind that the real world has quintillions of bugs. In
 software, humans are probably still under a trillion.  :)




Re: [fonc] Layering, Thinking and Computing

2013-04-08 Thread David Barbour
On Mon, Apr 8, 2013 at 2:52 PM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 This is incorrect, well, it's based on a false premise.. this part is
 incorrect/invalid?


A valid argument with a false premise is called an 'unsound' argument. (
http://en.wikipedia.org/wiki/Validity#Validity_and_soundness)



 What does it mean for an actor to terminate. The _only_ way you will
 know, is if the actor sends you a message that it's done.


That is incorrect. One can also know things via static or global knowledge
- e.g. type systems, symbolic analysis, proofs, definitions. Actors happen
to be defined in such a manner as to guarantee progress and certain forms of
fairness at the message-passing level. From their definition, I can know
that a single actor will terminate (i.e. finish processing a message),
without ever receiving a response. If it doesn't terminate, then it isn't
an actor.

In any case, non-termination (and our ability or inability to reason about
it) was never the point. Composition is the point. If individual actors
were allowed to send an infinite number of messages in response to a single
message (thus obviating any fairness properties), then they could easily be
compositional with respect to that property.

Unfortunately, they would still fail to be compositional with respect to
other relevant properties, such as serializable state updates, or message
structure.



Any reasoning about actors and their compositionality must be done in terms
 of messages sent and received. Reasoning in other ways does not make sense
 in the actor model (as far as I understand).


Carl Hewitt was careful to include certain fairness and progress properties
in the model, in order to support a few forms of system-level reasoning.
Similarly, the notion that actor-state effectively serializes messages
(i.e. each message defines the behavior for processing the next message) is
important for safe concurrency within an actor. Do you really avoid all
such reasoning? Or is such reasoning simply at a level that you no longer
think about it consciously?



 there is no privileged frame of reference in actors, you only get messages


I'm curious what your IDE looks like. :-)

A fact is that programming is NOT like physics, in that we do have a
privileged frame of reference that is only compromised at certain
boundaries for open systems programming. It is this frame of reference that
supports abstraction, refactoring, static typing, maintenance,
optimizations, orthogonal persistence, process control (e.g. kill,
restart), live coding, and the like.

If you want an analogy, it's like having a 3D view of a 2D world. As
developers, we often use our privilege to examine our systems from frames that
no actor can achieve within our model.

This special frame of reference isn't just for humans, of course. It's just
as useful for metaprogramming, e.g. for those 'layered' languages with
which Julian opened this topic.


 Actors and actor configurations (groupings of actors)
 become indistinguishable, because they are logically equivalent for
 reasoning purposes. The only way to interact with either is to send it a
 message and to receive a message.


It is true that, from within the actor system, we cannot distinguish an
actor from an actor configuration.



 It's Actors All The Way Down.


Actors don't have clear phase separation or staging. There is no down,
just an ad-hoc graph. Also, individual actors often aren't decomposable
into actor configurations. A phrase I favored while developing actors
systems (before realizing their systemic problems) was It's actors all the
way out.


Re: [fonc] Layering, Thinking and Computing

2013-04-08 Thread Tristan Slominski
This helps a lot, thank you. Your arguments help me to understand how I
fail to communicate to others what I see in actor systems. Finding a way to
address the concerns you bring up will go a long way for my ability to
communicate what I see.

From their definition, I can know that a single actor will terminate
 (i.e. finish processing a message), without ever receiving a response.


The problem with this, that I see, is that (I'm going to jump way up in the
layers of abstraction) in my physics view of actors, that's similar to
saying I 'know' that the value was committed to a database, because I told
it to 'put'. I won't know unless I receive an 'ack' (whatever that 'ack'
is; in Dynamo it could be after reaching quorum, etc.). Jumping back down
in layers of abstraction: imagine an actor whose sole responsibility is to
store a single value. I send a {save 7} message to it. I know that it has
7... well... sort of. I won't know until I send something like {read} and
receive a response. This is that frame-of-reference thing I've been
describing. Messages could be lost.
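
A sketch of that storage actor (Python, invented names; synchronous calls
stand in for asynchronous sends): the only observable evidence that the 7
was committed is the reply to a later {read}.

    class StorageActor:
        """Holds a single value; answers only via messages."""
        def __init__(self):
            self.value = None

        def receive(self, message, customer=None):
            if message[0] == "save":
                self.value = message[1]           # no ack: sender learns nothing
            elif message[0] == "read":
                customer(("value", self.value))   # knowledge arrives as a message

    store = StorageActor()
    store.receive(("save", 7))
    store.receive(("read",), customer=print)      # prints ('value', 7)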

Going back to the idea of global knowledge that an actor will terminate:
well, yes, but I will only know that if it responds to another message.
Because if I never send another message to it, that actor is not relevant
to the system anymore; it effectively becomes a sink behavior, a black
hole.

important for safe concurrency within an actor.


This is another hint that we might have a different mental model. I don't
find concurrency within an actor interesting. Actors can only process one
message at a time. So concurrency is only relevant in that sending messages
to other actors happens in parallel. That's not an interesting property.

Do you really avoid all such reasoning? Or is such reasoning simply at a
 level that you no longer think about it consciously?


Actor behavior is a mapping function from a received message to the
creation of a finite number of actors, the sending of a finite number of
messages, and a change of its own behavior to process the next message.
This could be a simple
dictionary lookup in the degenerate case. What's there to reason about in
here? In my view, the properties of how these communicate are the
interesting parts.
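
That definition can be made concrete with a small sketch (Python, invented
names; a Context object stands in for a real runtime). A behavior maps one
received message to the three finite effects named above: create (spawn),
send, and become.

    class Context:
        """Toy stand-in for an actor runtime."""
        def __init__(self, behavior):
            self.behavior = behavior
        def become(self, behavior):      # change own behavior
            self.behavior = behavior
        def spawn(self, behavior):       # create a finite number of actors
            return Context(behavior)
        def send(self, target, msg):     # send a finite number of messages
            target(msg)                  # synchronous stand-in for async delivery
        def deliver(self, msg):
            self.behavior(msg, self)

    def counter(n):
        """A behavior closed over the current count."""
        def behavior(msg, ctx):
            if msg == "inc":
                ctx.become(counter(n + 1))
            elif msg == "get":
                ctx.send(print, n)
        return behavior

    c = Context(counter(0))
    c.deliver("inc"); c.deliver("inc"); c.deliver("get")   # prints 2

In the degenerate case the body of `behavior` really is just a dictionary
lookup from message to effects.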

A fact is that programming is NOT like physics,


This is a description of your opinion of best practice of how to program.
It's an assertion of a preference, not a proof. I believe this is the main
difference in our points of view. I like actors precisely because I CAN
make programming look like physics. A reference frame is really important
in thinking about actor systems. Assuming, or worse, relying on global
knowledge when designing an actor system seems, to me, not to be the
right way (unfortunately I cannot readily point to a proof of that last
sentence, so I acknowledge it as an unproven assertion, but it's where my
intuition about actors currently points me :/ ; I'll need to do better).

It is this frame of reference that supports abstraction, refactoring,
 static typing, maintenance, optimizations, orthogonal persistence, process
 control (e.g. kill, restart), live coding, and the like.


I think this highlights our different frames of reference when we discuss
actor systems. Refactoring, static typing, maintenance, optimizations, etc.
are language-level utilities... what language an actor system is written in
is, in my opinion, an implementation detail. It's relevant in a productivity
sense, but because actor behaviors are (or should be) simple, they are not
the interesting part of the actor system. Another way of putting it: the
actor model is a pattern, not a language. Notice how many actor libraries
there are in all the different languages. Most of the languages I work in,
I build an actor library for. After doing this a few times, there is a
clear distinction between the actor model and an implementation in a
language. For completeness, there are also actor languages, but that's
just taking the pattern all the way down because the model is powerful
if viewed from a certain point of view. Abstraction, maintenance,
optimizations, process control, live coding, etc... fit in as tools in my
view of the actor model as a running system.

Also, individual actors often aren't decomposable into actor configurations


In the physics point of view of programming actors, it's impossible to tell
the difference. Why would you want to decompose something that is already
encapsulated?

So, I think I've arrived at a realization, which I hinted above (perhaps
you've already seen this and I'm just now catching up). I think I'm
describing an actor model as in Turing machine/a model of computation (a
running system of actors communicating via messages, the platonic form of
it, if you will), whereas you're describing an actor model as in
language. Do you think that correctly frames our discussion so far?

I think we might have interesting things to talk about within the confines
of an actor _language_, but we should 

Re: [fonc] Layering, Thinking and Computing

2013-04-08 Thread David Barbour
On Mon, Apr 8, 2013 at 6:29 PM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 The problem with this, that I see, is that [..] in my physics view of
 actors [..] Messages could be lost.


Understanding computational physics is a good thing. More people should do
it. A couple times each year I end up in discussions with people who think
software and information aren't bound by physical laws, who have never
heard of Landauer's principle, who seem to mistakenly believe that
distinction of concepts (like information and representation, or mind and
body) implies they are separable.

However, it is not correct to impose physical law on the actors model.
Actors is its own model, apart from physics.

A good question to ask is: can I correctly and efficiently implement
actors model, given these physical constraints? One might explore the
limitations of scalability in the naive model. Another good question to ask
is: is there a not-quite actors model suitable for a more
scalable/efficient/etc. implementation? (But note that the not-quite
actors model will never quite be the actors model.) Actors makes a
guarantee that every message is delivered (along with a nigh uselessly weak
fairness property), but for obvious reasons guaranteed delivery is
difficult to scale to distributed systems. And it seems you're entertaining
whether *ad-hoc message loss* is suitable.

That doesn't mean ad-hoc message-loss is a good choice, of course. I've
certainly entertained that same thought, as have others, but we can't trust
every fool thought that enters our heads.

Consider an alternative: explicitly model islands (within which no message
loss occurs) and serialized connections (bridges) between them. Disruption
and message loss could then occur in a controlled manner: a particular
bridge is lost, with all of the messages beyond a certain point falling
into the ether. Compared to ad-hoc message loss, the bridged islands design
is much more effective for reasoning about and recovering from partial
failure.
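
A sketch of the bridged-islands idea (Python, invented names): intra-island
delivery stays reliable, and loss occurs only as a single, observable
severing event that takes the not-yet-transferred suffix of the traffic
with it.

    class Bridge:
        """Serialized connection between islands; lossy only when severed."""
        def __init__(self, remote):
            self.remote = remote
            self.queue = []      # in-order traffic awaiting transfer
            self.up = True

        def send(self, msg):
            if self.up:
                self.queue.append(msg)

        def pump(self):
            while self.up and self.queue:
                self.remote(self.queue.pop(0))   # reliable while the bridge is up

        def sever(self):
            self.up = False
            lost, self.queue = self.queue, []
            return lost          # the failure is a well-defined, local event

    bridge = Bridge(remote=lambda m: print("island B got:", m))
    bridge.send("a"); bridge.pump()
    bridge.send("b"); bridge.send("c")
    print("fell into the ether:", bridge.sever())    # ['b', 'c']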

One could *implement* either of those loss models within actors model,
perhaps requiring some global transforms. But, as we discussed earlier
regarding composition, the implementation is not relevant while reasoning
with abstractions.

Reason about the properties of each abstraction or model. Separately,
reason about whether the abstraction can be correctly (and easily,
efficiently, scalably) implemented. This is 'layering' at its finest.


 This is another hint that we might have a different mental model. I don't
 find concurrency within an actor interesting. Actors can only process one
 message at a time. So concurrency is only relevant in that sending messages
 to other actors happens in parallel. That's not an interesting property.


I find actors can only process one message at a time is an interesting
constraint on concurrency, and certainly a useful one for reasoning. And
it's certainly relevant with respect to composition (ability to treat an
actor configuration as an actor) and decomposition (ability to divide an
actor into an actor configuration).

Do you also think zero and one are uninteresting numbers? Well, de gustibus
non est disputandum.



 Actor behavior is a mapping function from a message that was received to
 creation of finite number of actors, sending finite number of messages, and
 changing own behavior to process the next message. This could be a simple
 dictionary lookup in the degenerate case. What's there to reason about in
 here?


Exactly what you said: finite, finite, sequential - useful axioms from
which we can derive theorems and knowledge.



 A fact is that programming is NOT like physics,


 This is a description


Indeed. See how easily we can create straw-man arguments with which we can
casually agree or disagree by stupidly taking sentence fragments out of
context? :-)



I like actors precisely because I CAN make programming look like physics.


I am fond of linear logic and stream processing for similar reasons. I
certainly approve, in a general sense, of developing models designed to
operate within physical constraints. But focusing on the aspects I enjoy,
or disregarding those I find uninteresting, would certainly put me at risk
of reasoning about an idealized model a few handwaves removed from the
original.



relying on global knowledge when designing an actor system seems, to me,
 not to be the right way


In our earlier discussion, you mentioned that actors model can be used to
implement lambda calculus. And this is true, given bog standard actors
model. But do you believe you can explain it from your 'physics'
viewpoint? How can you know that you've implemented, say, a Fibonacci
function with actors, if you forbid knowledge beyond what can be discovered
with messages? Especially if you allow message loss?
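
For concreteness, here is a sketch (Python, invented names, a toy mailbox
loop) of the kind of actor-based Fibonacci at issue: every request carries
a customer, and from within the system the only evidence that the
"function" works is that a reply message eventually arrives.

    from collections import deque

    mailbox = deque()
    def send(actor, *msg):
        mailbox.append((actor, msg))

    def fib(n, customer):
        if n < 2:
            send(customer, n)
        else:
            parts = []                     # a join actor summing two replies
            def join(value):
                parts.append(value)
                if len(parts) == 2:
                    send(customer, sum(parts))
            send(fib, n - 1, join)
            send(fib, n - 2, join)

    send(fib, 10, lambda v: print("fib(10) =", v))
    while mailbox:                         # drain: prints "fib(10) = 55"
        actor, msg = mailbox.popleft()
        actor(*msg)

Knowing that this computes Fibonacci (rather than observing that one reply
happened to be 55) takes exactly the kind of outside-the-model reasoning
being debated here.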



 I think this highlights our different frames of reference when we discuss
 actor systems. Refactoring, static typing, maintenance, optimizations, etc.
 are [..] 

Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Julian Leviston

On 07/04/2013, at 1:48 PM, Tristan Slominski tristan.slomin...@gmail.com 
wrote:

 a lot of people seem to have the opinion the language a person communicates 
 in locks them into a certain way of thinking.
 
 There is an entire book on the subject, Metaphors We Live By, which 
 profoundly changed how I think about thinking and what role metaphor plays in 
 my thoughts. Below is a link to what looks like an article by the same title 
 from the same authors.
 
 http://www.soc.washington.edu/users/brines/lakoff.pdf
 
 

Having studied linguistics, I can tell you there are MANY books on this 
subject. I can point to at least the following reference work on the topic:

http://mitpress.mit.edu/books/language-thought-and-reality

I wasn't interested in this discussion. I would agree that the semi-educated 
ordinary person definitely thinks in language, and that it definitely shapes 
the thoughts they're capable of; however, what I'm talking about is really 
only found in people who speak more than two languages beyond their native 
language, and/or in languages not touched by modern culture (such as the 
Australian Aborigines' dreamtime metalinguistic awareness).

I guess my question mostly relates to whether or not learning more languages 
than one, (perhaps when one gets to about three different languages to some 
level of proficiency and deep study), causes one to form a pre/post-linguistic 
awareness as I referenced in my original post.

I think learning only one language is bad for people who want to understand 
each other, and the same goes for programming languages. Fewer than three 
languages doesn't allow one to triangulate meaning very well, perhaps not 
even properly.

Julian


Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Ondřej Bílka
On Sat, Apr 06, 2013 at 09:00:26PM -0700, David Barbour wrote:
On Sat, Apr 6, 2013 at 7:10 PM, Julian Leviston jul...@leviston.net
wrote:
 
  LISP is perfectly precise. It's completely unambiguous. Of course,
  this makes it incredibly difficult to use or understand sometimes.
 
Ambiguity isn't necessarily a bad thing, mind. One can consider it an
opportunity: For live coding or conversational programming, ambiguity
enables a rich form of iterative refinement and conversational programming
styles, where the compiler/interpreter fills the gaps with something that
seems reasonable then the programmer edits if the results aren't quite
those desired. For mobile code, or portable code, ambiguity can provide
some flexibility for a program to adapt to its environment. One can
consider it a form of contextual abstraction. Ambiguity could even make a
decent target for machine-learning, e.g. to find optimal results or
improve system stability [1].
[1] http://awelonblue.wordpress.com/2012/03/14/stability-without-state/


IMO unambiguity is a property that looks good only on paper.

When you look for a perfect solution, you will get a perfect solution to the
wrong problem.

The purpose of language is to convey how to solve problems. You need to look
for a robust solution. You must deal with the fact that the real world is
imprecise. Just transforming a problem into words causes inaccuracy. When
you tell something to many parties, each of them wants to optimize something
different. You again need flexibility.


This is the problem with logicians: they did not go in this direction, but
in a direction that makes their results more and more brittle. Until one can
answer the questions above, along with how to choose which of two
contradictory data is more important, there is no chance of getting decent
AI.

What is important is the cost of knowledge. It has several important
properties; for example, in 99% of cases it is negative.

You can easily roll dice 50 times and make 50 statements about them that 
are completely unambiguous and completely useless.





Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread John Nilsson
Layering kind of implies a one-dimensional space: lower vs. higher
abstraction. Although we try hard to project other dimensions, such as the
why-how, onto this dimension, the end result is a complex mess of concepts
from different domains trying to fit into way too small a space. So besides
layering we should probably also spend some effort untangling those other
dimensions and formalizing how to map and compose things in all relevant
dimensions without complecting them.

BR
John
On 7 Apr 2013 at 06:00, David Barbour dmbarb...@gmail.com wrote:

 On Sat, Apr 6, 2013 at 7:10 PM, Julian Leviston jul...@leviston.netwrote:

 LISP is perfectly precise. It's completely unambiguous. Of course, this
 makes it incredibly difficult to use or understand sometimes.


 LISP isn't completely unambiguous. First, anything that is *implementation
 dependent* is ambiguous or non-deterministic from the pure language
 perspective. This includes a lot of things, such as evaluation order for
 modules. Second, anything that is *context dependent* (e.g. deep
 structure-walking macros, advanced uses of special vars, the common-lisp
 object system, OS and configuration-dependent structure) is not entirely
 specified or unambiguous when studied in isolation. Third, any operation
 that is non-deterministic (e.g. due to threading or heuristic search) is
 naturally ambiguous; to the extent you model and use such operations, your
 LISP code will be ambiguous.

 Ambiguity isn't necessarily a bad thing, mind. One can consider it an
 opportunity: For live coding or conversational programming, ambiguity
 enables a rich form of iterative refinement and conversational programming
 styles, where the compiler/interpreter fills the gaps with something that
 seems reasonable then the programmer edits if the results aren't quite
 those desired. For mobile code, or portable code, ambiguity can provide
 some flexibility for a program to adapt to its environment. One can
 consider it a form of contextual abstraction. Ambiguity could even make a
 decent target for machine-learning, e.g. to find optimal results or improve
 system stability [1].

 [1] http://awelonblue.wordpress.com/2012/03/14/stability-without-state/



 is it possible to build a series of tiny LISPs on top of each other such
 that we could arrive at incredibly precise and yet also incredibly concise,
 but [[easily able to be traversed] meanings].


 It depends on what you want to say. Look up Kolmogorov Complexity [2].
 There is a limit to how concise you can make a given meaning upon a given
 language, no matter how you structure things.

 [2] http://en.wikipedia.org/wiki/Kolmogorov_complexity

 If you want to say a broad variety of simple things, you can find a
 language that allows you to say them concisely.

 We can, of course, create a language for any meaning Foo that allows us to
 represent Foo concisely, even if Foo is complicated. But the representation
 of this language becomes the bulk of the program. Indeed, I often think of
 modular programming as language-design: abstraction is modifying the
 language from within - adding new words, structures, interpreters,
 frameworks - ultimately allowing me express my meaning in just one line.

 Of course, the alleged advantage of little problem-oriented languages is
 that they're reusable in different contexts. Consider that critically: it
 isn't often that languages easily work together because they often have
 different assumptions or models for cross-cutting concerns (such as
 concurrency, persistence, reactivity, security, dependency injection,
 modularity, live coding). Consider, for example, the challenge involved
 with fusing three or more frameworks, each of which have different callback
 models and concurrency.

 Your proposal is effectively that we develop a series (or stack) of little
 languages for specific problem. But even if you can express your meaning
 concisely on a given stack, you will encounter severe difficulty when it comes
 time to *integrate* different stacks that solve different problems.

 (Some people are seeking solutions to the general problem of language
 composition, e.g. with Algebraic Effects or modeling languages as
 Generalized Arrows.)


 one could replace any part of the system because one could understand any
 part of it


 Sadly, the former does not follow from the latter.

 The ability to understand any part of a system does not imply the
 ability to understand how replacing that part (in a substantive way) will
 affect the greater system. E.g. one might look into a Car engine and say:
 hmm. I understand this tube, and that bolt, and this piston... but not
 really grok the issues of vibration, friction and wear, pressure, heat,
 etc.. For such a person, a simple fix is okay, since the replacement part
 does the same thing as the original. But substantive modifications would be
 risky without more knowledge.

 Today, much software is monolithic for exactly that reason: to understand
 a block of code often 

Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Tristan Slominski
Thanks for the book reference, I'll check it out

I guess my question mostly relates to whether or not learning more
 languages than one, (perhaps when one gets to about three different
 languages to some level of proficiency and deep study), causes one to form
 a pre/post-linguistic awareness as I referenced in my original post.


Hmm.. I probably fit the criteria of multiple languages. I tried to expose
myself to different language families, so I have Slavic, Germanic, Asiatic,
and Arabic familiarity to various degrees (sorry if those aren't correct
groupings :D). I'm fluent in only two, but both of those were learned as a
kid.

I haven't thought about what you're describing. Perhaps it was the
fish-thinking-about-water phenomenon? I assumed that everyone thinks in
free form and then solidifies it into language when necessary for clarity
or communication. However, I do recall my surprise at experiencing first
hand all the different grammar structures that still allow people to
communicate well. From what I can tell about my thoughts, there's
definitely some abstraction going on, where the thought comes before the
words, and then the word shuffle needs to happen to fit the grammar
structure of the language being used.

So there appear to be at least two modes I think in. One being almost
intuitive and without form, the other being expressed through language.
(I've written poetry that other people liked, so I'm ok at both). However,
I thought that the free form thought had more to do with being good at
mathematics and the very abstract thought that promotes. I didn't link it
to knowing multiple languages.

So what about mathematical thinking? It seems it does more for my abstract
thinking than multiple languages. Trying to imagine mathematical
structures/concepts that are often impossible to present in the solidity of
the real world did more to loosen my thought boundaries and abandon
language structure than any language I learned, as far as I can tell.

Mathematics can also be considered a language. But there are also different
mathematical languages as well. I experienced this first hand. Perhaps I'm
not as smart as some people, but the biggest mental challenge, and one I
had to give up on to maintain my sanity (literally), was learning physics
and computer science at the same time and for the first time. It was
overwhelming. Where I studied it, physics math was all continuous
mathematics. In contrast, Computer Science math was all discrete
mathematics. On a physics quiz, my professor offered a little extra credit and
asked the students to describe how they would put together a model of the
solar system. Even though the quiz was anonymous, he knew exactly who I
was, because I was the only one to describe an algorithm for a computer
simulation. The others described a mechanical model. There was also
something very weird that was happening to my brain at the time. The
cognitive dissonance in switching between discrete and continuous math
paradigms was overwhelming, to the point where I ended up picking the
discrete/computer science path and gave up on physics, at least while an
undergrad.

I don't think knowing only one language is bad. It's sort of like saying,
oh, you're only a doctor, and you do nothing else. However, there appears
to be something to knowing multiple languages.

Part of the reason why I mentioned the metaphor stuff in the first place,
is because it resonates with me in what I understand about how the human
neocortex works. The most compelling explanation for the neocortex that I
found is Jeff Hawkins' Hierarchical Temporal Memory model. A similar
concept also came up in David Gelenter's Mirror Worlds. And that is, in
very general terms, that our neocortex is a pattern recognition machine
working on sequences in spatial and temporal neuron activation. There is
nothing in it at birth, and over our lifetimes, it fills up with memories
of more and more complex and abstract sequences. Relating this back to our
language discussion, with that as the background, it seems intuitive that
knowing another language, i.e. memorizing a different set of sequences,
will enable different patterns of thought, as well as more modes of
expression.

As to building a series of tiny LISPs: I see that as being similar to
arguing for knowing only one family of languages. We would be missing
entire structures and modes of expression by concentrating only on LISP
variants, would we not? The Actor Model resonates deeply with me, and
sometimes I have trouble explaining some obvious things that arise from
thinking in Actors to people unfamiliar with that model of computation. I
believe part of the reason is that lots of the computation happens as an
emergent property of the invisible web of message traffic, and not in the
procedural actor behavior. How would one program a flock in LISP?

On Sun, Apr 7, 2013 at 3:47 AM, Julian Leviston jul...@leviston.net wrote:


 On 07/04/2013, at 1:48 PM, Tristan Slominski 

Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Tristan Slominski
Very interesting David. I'm subscribed to the RSS feed but I don't think I
read that one yet.

I agree that largely, we can use more work on languages, but it seems that
making the programming language responsible for solving all programming
problems is somewhat narrow.

A friend of mine, Janelle Klein, is in the process of publishing a book
called The Idea Flow Method: Solving the Human Factor in Software
Development
https://leanpub.com/ideaflow . The method she arrived at, after leading
software teams for a while, in my mind, ended up mapping a software
organization onto how a human neocortex works (as opposed to the typical
Lean methodology of mapping a software organization onto a factory).

The Idea Flow Method, in my poor summary, focuses on what matters when
building software. And what appears to matter cannot be determined at the
moment of writing the software, but only after multiple iterations of
working with the software. So, for example, imagine that I write a really
crappy piece of code that works, in a corner of the program that nobody
ever ends up looking in, nobody understands it, and it just works. If
nobody ever has to touch it, and no bugs appear that have to be dealt with,
then as far as the broader organization is concerned, it doesn't matter how
beautiful that code is, or which level of Dante's Inferno it hails from. On
the other hand, if I write a really crappy piece of code that breaks in
ambiguous ways, and people have to spend a lot of time understanding it and
debugging, then it's really important how understandable that code is, and
time should probably be put into making it good. (Janelle's method
provides a tangible way of tracking this type of code importance).

Of course, I can only defend the deal with it if it breaks strategy only
so far. Every component that is built shapes its surface area, and other
components need to mold themselves to it. Thus, if one of them is wrong, it
gets non-linearly worse the more things are shaped to the wrong component,
and those shape to those, etc. We then end up thinking about protocols,
objects, actors, and so on.. and I end up agreeing with you that
composition becomes the most desirable feature of a software system. I
think in terms of actors/messages first, so no argument there :D

As far as applying metaphor to programming... from the book I referenced,
it appears that the crucial thing about metaphor is the ability to pick and
choose pieces from different metaphors to describe a new concept. Depending
on what we want to compute/communicate we can attribute to ideas the
properties of commodities, resources, money, plants, products, cutting
instruments. To me, the most striking thing about this is the absence of
a strict hierarchy at all, i.e., no strict hierarchical inheritance. The
ability to mix and match various attributes together as needed seems to
most closely resemble how we think. That's composition again, yes?

On Sat, Apr 6, 2013 at 11:04 PM, David Barbour dmbarb...@gmail.com wrote:


 On Sat, Apr 6, 2013 at 8:48 PM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 a lot of people seem to have the opinion the language a person
 communicates in locks them into a certain way of thinking.


 There is an entire book on the subject, Metaphors We Live By, which
 profoundly changed how I think about thinking and what role metaphor plays
 in my thoughts. Below is a link to what looks like an article by the same
 title from the same authors.

 http://www.soc.washington.edu/users/brines/lakoff.pdf


 I'm certainly interested in how metaphor might be applied to programming.
 I write, regarding 'natural language' programming [1] that metaphor and
 analogy might be addressed with a paraconsistent logic - i.e. enabling
 developers to apply wrong functions but still extract some useful meaning
 from them.

 [1]
 http://awelonblue.wordpress.com/2012/08/01/natural-programming-language/






Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Tristan Slominski

 The purpose of language is to convey how to solve problems. You need to
 look for a robust solution. You must deal with the fact that the real world
 is imprecise. Just transforming a problem into words causes inaccuracy.
 When you tell something to many parties, each of them wants to optimize
 something different. You again need flexibility.


Ondrej, have you come across Nassim Nicholas Taleb's Antifragility concept?
The reason I ask is that we seem to agree on what's important in
solving problems. However, robustness is a limited goal, and antifragility
seems a much more worthy one.

In short, the concept can be expressed in opposition to how we usually
think of fragility. And the opposite of fragility is not robustness. Nassim
argues that we really didn't have a name for the concept, so he called it
antifragility.

fragility - quality of being easily damaged or destroyed.
robust - 1. Strong and healthy; vigorous. 2. Sturdy in construction.

Nassim argues that the opposite of easily damaged or destroyed [in face of
variability] is actually getting better [in face of variability], not just
remaining robust and unchanging. This getting better is what he called
antifragility.

Below is a short summary of what antifragility is. (I would also encourage
reading Nassim Taleb directly, a lot of people, perhaps myself included,
tend to misunderstand and misrepresent this concept)

http://www.edge.org/conversation/understanding-is-a-poor-substitute-for-convexity-antifragility





On Sun, Apr 7, 2013 at 4:25 AM, Ondřej Bílka nel...@seznam.cz wrote:

 On Sat, Apr 06, 2013 at 09:00:26PM -0700, David Barbour wrote:
 On Sat, Apr 6, 2013 at 7:10 PM, Julian Leviston jul...@leviston.net
 wrote:
 
   LISP is perfectly precise. It's completely unambiguous. Of course,
   this makes it incredibly difficult to use or understand sometimes.
 
 Ambiguity isn't necessarily a bad thing, mind. One can consider it an
 opportunity: For live coding or conversational programming, ambiguity
 enables a rich form of iterative refinement and conversational
 programming
 styles, where the compiler/interpreter fills the gaps with something
 that
 seems reasonable then the programmer edits if the results aren't quite
 those desired. For mobile code, or portable code, ambiguity can
 provide
 some flexibility for a program to adapt to its environment. One can
 consider it a form of contextual abstraction. Ambiguity could even
 make a
 decent target for machine-learning, e.g. to find optimal results or
 improve system stability [1].
     [1] http://awelonblue.wordpress.com/2012/03/14/stability-without-state/
 

 IMO unambiguity is a property that looks good only on paper.

 When you look for a perfect solution, you will get a perfect solution to
 the wrong problem.

 The purpose of language is to convey how to solve problems. You need to
 look for a robust solution. You must deal with the fact that the real
 world is imprecise. Just transforming a problem into words causes
 inaccuracy. When you tell something to many parties, each of them wants
 to optimize something different. You again need flexibility.


 This is the problem with logicians: they did not go in this direction,
 but in a direction that makes their results more and more brittle. Until
 one can answer the questions above, along with how to choose which of two
 contradictory data is more important, there is no chance of getting
 decent AI.

 What is important is the cost of knowledge. It has several important
 properties; for example, in 99% of cases it is negative.

 You can easily roll dice 50 times and make 50 statements about them that
 are completely unambiguous and completely useless.






Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread David Barbour
On Sun, Apr 7, 2013 at 5:44 AM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 I agree that largely, we can use more work on languages, but it seems that
 making the programming language responsible for solving all of programming
 problems is somewhat narrow.


I believe each generation of languages should address a few more of the
cross-cutting problems relative to their predecessors, else why the new
language?

But to address a problem is not necessarily to automate the solution, just
to push solutions below the level of conscious thought, e.g. into a path of
least resistance, or into simple disciplines that (after a little
education) come as easily and habitually (no matter how unnaturally) as
driving a car or looking both ways before crossing a street.


 imagine that I write a really crappy piece of code that works, in a
 corner of the program that nobody ever ends up looking in, nobody
 understands it, and it just works. If nobody ever has to touch it, and no
 bugs appear that have to be dealt with, then as far as the broader
 organization is concerned, it doesn't matter how beautiful that code is, or
 which level of Dante's Inferno it hails from


Unfortunately, it is not uncommon that bugs are difficult to isolate, and
may even exhibit in locations far removed from their source. In such cases,
having code that nobody understands can be a significant burden - one you
pay for with each new bug, even if each time you eventually determine that
the cause is elsewhere.

Such can motivate use of theorem provers: if the code is so simple or so
complex that no human can readily grasp why it works, then perhaps such
understanding should be automated, with humans on the periphery asking for
proofs that various requirements and properties are achieved.



 Of course, I can only defend the deal with it if it breaks strategy only
 so far. Every component that is built shapes its surface area, and other
 components need to mold themselves to it. Thus, if one of them is wrong, it
 gets non-linearly worse the more things are shaped to the wrong component,
 and those shape to those, etc.


Yes. Of course, even being right in different ways can cause much
awkwardness - like a bridge built from both ends not quite meeting in the
middle.



We then end up thinking about protocols, objects, actors, and so on.. and I
 end up agreeing with you that composition becomes the most desirable
 feature of a software system. I think in terms of actors/messages first, so
 no argument there :D


Actors/messaging is much more about reasoning in isolation (understanding
'each part') than composition. Consider: You can't treat a group of two
actors as a single actor. You can't treat a sequence of two messages as a
single message. There are no standard composition operators for using two
actors or messages together, e.g. to pipe output from one actor as input to
another.

It is very difficult, with actors, to reason about system-level properties
(e.g. consistency, latency, partial failure). But it is not difficult to
reason about actors individually.
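
The contrast can be made concrete with a sketch (Python, invented names):
functions come with a uniform composition operator, while "piping" actors
means hand-writing yet another actor whose mailbox, ordering, and failure
behavior are new properties that neither component had.

    def compose(f, g):
        """Generic: works uniformly for any two functions."""
        return lambda x: g(f(x))

    double = lambda x: x * 2
    inc = lambda x: x + 1
    print(compose(double, inc)(10))    # 21

    # For actors there is no such operator in the model itself; a "pipe"
    # is a third actor, with its own behavior, written by hand each time.
    def pipe(downstream):
        def forwarder(msg):
            downstream(msg)            # plus any buffering, error handling...
        return forwarder

    pipe(print)("hello")               # hello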

I've a few articles on related issues:

[1] http://awelonblue.wordpress.com/2012/07/01/why-not-events/
[2] http://awelonblue.wordpress.com/2012/05/01/life-with-objects/
[3]
http://awelonblue.wordpress.com/2013/03/07/objects-as-dependently-typed-functions/




 To me, the most striking thing about this is the absence of a strict
 hierarchy at all, i.e., no strict hierarchical inheritance. The ability to
 mix and match various attributes together as needed seems to most closely
 resemble how we think. That's composition again, yes?


Yes, of sorts.

The ability to combine traits, flavors, soft constraints, etc. in a
standard way constitutes a form of composition. But they don't suggest rich
compositional reasoning (i.e. the set of compositional properties may be
trivial or negligible). Thus, trait composition, soft constraints, etc.
tend to be 'shallow'. Still convenient and useful, though.

I mention some related points in the 'life with objects' article (linked
above) and also in my stone soup programming article [4].

[4] http://awelonblue.wordpress.com/2012/09/12/stone-soup-programming/

(from a following message)


 robustness is a limited goal, and antifragility seems a much more worthy
 one.


Some people interpret 'robustness' rather broadly, cf. the essay 'Building
Robust Systems' from Gerald Jay Sussman [5]. In my university education,
the word opposite fragility was 'survivability' (e.g. 'survivable
networking' was a course).

I tend to break various survivability properties into robustness (resisting
damage, security), graceful degradation (breaking cleanly and predictably;
succeeding within new capabilities), resilience (recovering quickly; self
stabilizing; self healing or easy fixes). Of course, these are all passive
forms; I'm a bit wary about developing computer systems that 'hit back'
when attacked, at least as a default policy.

[5]

Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Tristan Slominski

 I believe each generation of languages should address a few more of the
 cross-cutting problems relative to their predecessors, else why the new
 language?


Well, there are schools of thought that every problem merits a domain
specific language to solve it :D. But setting my quick response aside, I
think I appreciate somewhat more what I think you're trying to communicate.
Your statement here:

But they don't suggest rich compositional reasoning (i.e. the set of
 compositional properties may be trivial or negligible). Thus, trait
 composition, soft constraints, etc. tend to be 'shallow'. Still convenient
 and useful, though.


helped to anchor this richer composition concept in my mind. It's
definitely something I will think about.

Actors/messaging is much more about reasoning in isolation (understanding
 'each part') than composition. Consider: You can't treat a group of two
 actors as a single actor. You can't treat a sequence of two messages as a
 single message. There are no standard composition operators for using two
 actors or messages together, e.g. to pipe output from one actor as input to
 another.
 It is very difficult, with actors, to reason about system-level properties
 (e.g. consistency, latency, partial failure). But it is not difficult to
 reason about actors individually.


You can definitely group two actors as a single actor. It's almost trivial
to do so. It requires creating another actor though :D.
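
A sketch of that grouping (Python, invented names; synchronous calls stand
in for mailboxes): a facade routes messages to two internal actors, so from
outside there is a single address.

    class Adder:
        def receive(self, msg, reply):
            reply(msg["a"] + msg["b"])

    class Logger:
        def receive(self, msg, reply):
            reply("logged: %s" % msg)

    class Facade:
        """One address in front of two actors."""
        def __init__(self):
            self.adder, self.logger = Adder(), Logger()
        def receive(self, msg, reply):
            if "a" in msg:
                self.adder.receive(msg, reply)
            else:
                self.logger.receive(msg, reply)

    f = Facade()
    f.receive({"a": 1, "b": 2}, reply=print)   # 3
    f.receive({"note": "hi"}, reply=print)     # logged: {'note': 'hi'}

Note, though, that this is the demuxing David's rebuttal (quoted further
down the thread) is aimed at: the facade serializes everything through one
mailbox, so the composite does not preserve the concurrency properties of
its two parts.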

Also, you yourself, in Life With Objects mention object capability model.
So, to counter the statement about the difficulty of reasoning about
system-level properties: the object capability model definitely makes it
simpler to reason about the system-level property of security (by walking
the actor graph).
Correctly implemented actor systems (requirement is non-forgeable actor
addresses) implement object capability model trivially. So, the level of
difficulty of reasoning seems to be dependent on what it is we are
reasoning about.

Additionally, we are getting better at understanding evented systems. As
the market pressure for always available services grows, the survivors
are very good at reasoning about system availability, inconsistency, and
unpredictability (Netflix, Etsy, AWS, Google, Facebook, etc.). Yes, this is
at the layer of entire machines (although often virtual machines), but
lessons from that layer can be readily applied to evented systems in layers
closer to concerns of programming individual applications/services.

I would agree with you on the general lack of tooling within the actor
computing paradigm itself.

As far as standard composition, I've seen lambda calculus implemented in
actors, and I've seen actors implemented in Kernel (which itself was
implemented in actors written in C). These are, after all, Turing
equivalent. Using both is probably the right approach. At some point, we
have to leave the functional world and deal with network communications and
emergent swarm behavior, actors FTW. Also, are you familiar with Storm (
https://github.com/nathanmarz/storm ) ? It is one example of composing
things and providing guarantees at a slightly higher level of abstraction.
Much like my previous metaphor comments, it's important we get to pick and
choose our metaphors for the task at hand.

From reading Why Not Events, a lot of the accidental complexity that
serves as examples seems to be poor implementation of event systems,
because it's a hard thing to do. This statement, in itself, may be
conceding the point. However, there is always the one bright shining
example of an event system that definitely counters, for example, your
point that *Event systems lack generic resilience.* As Alan likes to point
out, the Internet was built by people. So event systems can become powerful
if we can get them right. In having event system discussion, we can't start
our statements with Aside from the Internet There is something to
event systems that make them work incredibly well (Internet). Perhaps we
don't understand enough and that makes them difficult to understand right
now. But if we want to describe interesting things, sometimes we need
Quantum Dynamics, because other things will be wrong in profound ways deep
down.

Going back to Life With Objects, you mention stateless objects. That's a
compelling proposition, and something I will continue to think about. A
while back, I tried to wrap my mind around how to reason about stateless
actors when trying to see if one could layer an actor system on top of the
GPGPU paradigm.

I don't think I understand objects as dependently-typed functions, so I
can't comment on Objects as Dependently-Typed Functions article.

I remember reading the Stone Soup Programming post. It's interesting that
in my mind I saw actors interacting as if they were molecules/cells in a
cell/organism. I now think that in your mind you had perhaps a more
functional, compositional-as-in-function view of that model.

Lastly...

I'm a bit wary about developing computer systems 

Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Ondřej Bílka
On Sun, Apr 07, 2013 at 08:03:54AM -0500, Tristan Slominski wrote:
  The purpose of language is to convey how to solve problems. You need to
  look for a robust solution. You must deal with the fact that the real
  world is imprecise. Just transforming a problem into words causes
  inaccuracy. When you tell something to many parties, each of them wants
  to optimize something different. You again need flexibility.
 
Ondrej, have you come across Nassim Nicholas Taleb's Antifragility
concept? The reason I ask is that we seem to agree on what's important
in solving problems. However, robustness is a limited goal, and
antifragility seems a much more worthy one.

I did not. Yes, that is almost exactly what I meant. I did not have a word
that would fit exactly, so I described it as robustness, which was the
closest up to now.

In short, the concept can be expressed in opposition to how we usually
think of fragility. And the opposite of fragility is not robustness.
Nassim argues that we really didn't have a name for the concept, so he
called it antifragility.
fragility - quality of being easily damaged or destroyed.
robust - 1. Strong and healthy; vigorous. 2. Sturdy in construction.
Nassim argues that the opposite of easily damaged or destroyed [in face of
variability] is actually getting better [in face of variability], not just
remaining robust and unchanging. This getting better is what he called
antifragility.
Below is a short summary of what antifragility is. (I would also encourage
reading Nassim Taleb directly, a lot of people, perhaps myself included,
tend to misunderstand and misrepresent this concept)

 http://www.edge.org/conversation/understanding-is-a-poor-substitute-for-convexity-antifragility
 



-- 

new guy cross-connected phone lines with ac power bus.

Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread Tristan Slominski

 I believe you imagine an actor simply demuxing or unzipping messages to
 two or more other actors. But such a design does not result in the same
 concurrency, consistency, or termination properties as a single actor,
 which is why you cannot (correctly) treat the grouping as a single actor.


Well... composing multiple functions does not result in the same
termination properties as a single function either, does it? Especially
when we are composing nondeterministic computations? (real question, not
rhetorical) I'm having difficulty seeing how this is unique to actors.


 but it is my understanding that most actors languages and implementations
 are developed with an assumption of ambient authority


Well, yes, and it's a bad idea if you want object capability security. I
implemented object capability security in a system that hosted JavaScript
actors on Node.js. As long as you have a VM to interpret code in, you can
control what it can and cannot access. I'm looking toward working on
lower-level concepts in that direction. So far I've only addressed this
problem from a distributed-systems perspective; I've considered it (but
haven't implemented it) at the language/operating-system layer, so I may
be lacking a perspective that you already have. I'll know more in the
future.
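
As a flavor of that hosting pattern, here is a minimal sketch in
TypeScript on Node.js (the actor source, capability names, and outbox are
invented for illustration; note also that Node's built-in vm module is an
isolation convenience, not a hardened security boundary, so a production
ocap host would need a stronger membrane):

    import * as vm from "vm";

    // A capability is just an unforgeable reference. The hosted actor
    // can only affect the world through references we explicitly pass in.
    const outbox: Array<{ to: string; body: string }> = [];
    const sandbox = {
      message: "hello",
      log: (s: string) => console.log("[actor]", s),             // mediated
      send: (m: { to: string; body: string }) => outbox.push(m), // audited
    };

    // Untrusted actor behavior, as source text. It sees no ambient
    // authority: no require, no process, no filesystem; only the
    // capabilities placed in its context.
    const behavior = `
      log("received: " + message);
      send({ to: "printer", body: message });
    `;

    vm.runInNewContext(behavior, sandbox);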

You can thus speak of composing lambdas with functional composition, or
 composing diagrams with graphics composition operations. But neither says
 anything at all about whether actors, or Kernel, or C, or whatever they're
 implemented in, is composable. Composition is a property of abstractions.


That's fair. You're right.

 But how do you weigh freedom to make choices for the task at hand even
 if they're bad choices for the tasks NOT immediately at hand (such as
 integration, maintenance)?


For this, I think all of us fall back on heuristics as to what's a good
idea. But those heuristics come from past experience. This ties back
somewhat to what I mentioned about The Idea Flow method, and the difficulty
of being able to determine some of those things ahead of time. My personal
heuristic would argue that integration and maintenance should be given
greater consideration. But I learned that by getting hit in the face with
maintenance and integration problems. There are certainly approaches that
are better than a random walk of choices, but which one of those approaches
is best (Agile, XP, Lean, Lean Startup, Idea Flow Method, etc.) seems to
still be an open question.

At the low level of TCP/IP, there is no *generic* way to re-establish a
 broken connection or recover from missing datagrams. Each application or
 service has its own ad-hoc, inconsistent solution. The World Wide Web that
 you might consider resilient is so primarily due to representational state
 transfer (REST) styles and disciplines, which are effectively about giving
 the finger to events. (Events involve communicating and effecting
 *changes* in state, whereas REST involves communicating full
 *representations* of state.) Also, by many metrics (dead links, behavior
 under disruption, persistent inconsistency) neither the web nor the
 internet is especially resilient (though it does degrade gracefully).


Hmm. Based on your response, I think we define event systems differently.
I'm not saying I'm right, but I may be picking and choosing the levels of
abstraction at which events occur. The system as a whole seems resilient
to me, and I don't see how the lack of a generic way to re-establish a
connection at the TCP/IP level degrades that resilience. Multiple layers
are at play here, and they come together; no one layer would be
successful by itself. I still think this is a failure on my part to
describe my view of the Internet, not a shortcoming of the system itself.
I'll have to do some more thinking on how to express it better in light
of what you've presented.

People who focus on the latter often use phrases such as 'convergence',
 'stability', 'asymptotic', 'bounded error', 'eventual consistency'. It's
 all very interesting. But it is historically a mistake to disregard
 correctness then *handwave* about emergent properties; you really need a
 mathematical justification even for weak correctness properties.
 Biologically inspired models aren't always stable (there are all sorts of
 predator-prey cycles, extinctions, etc.) and the transient stability we
 observe is often anthropocentric.


Agreed that disregarding correctness and *handwaving* emergent properties
is a bad idea. It was more a comment on my starting state of mind to a
problem than a rigorous approach to solving it. That said, stability is not
necessarily the goal. Perhaps I'm more in the biomimetic camp than I think.

On Sun, Apr 7, 2013 at 3:47 PM, David Barbour dmbarb...@gmail.com wrote:


 On Sun, Apr 7, 2013 at 10:40 AM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:


 Consider: You can't treat a group of two actors as a single actor.


 You can 

Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread David Barbour
On Sun, Apr 7, 2013 at 2:56 PM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 Well... composing multiple functions does not result in the same
 termination properties as a single function either, does it? Especially
 when we are composing nondeterministic computations? (real question, not
 rhetorical) I'm having difficulty seeing how this is unique to actors.


An individual actor is guaranteed to terminate after receiving a message.
It's in the definition: upon receiving a message an actor can send out a
*finite* number of messages to other actors. But two actors can easily (by
passing messages in circles) send out an infinite number of messages to
other actors upon receiving a single message. Therefore, with respect to
this property, you cannot (in general) reason about or treat groups of two
actors as though they were a single actor.
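
To make the cycle concrete, a toy sketch in TypeScript (the Actor class,
deliver function, and cutoff are invented for this example, not any real
actor library):

    type Send = (to: Actor, msg: string) => void;

    class Actor {
      constructor(private behavior: (msg: string, send: Send) => void) {}
      receive(msg: string, send: Send) { this.behavior(msg, send); }
    }

    // Each actor alone satisfies the definition: exactly one outgoing
    // message (a finite number) per message received.
    const ping: Actor = new Actor((m, send) => send(pong, m));
    const pong: Actor = new Actor((m, send) => send(ping, m));

    // The group of two does not: one external message starts an endless
    // ping/pong cycle. The cutoff exists only so the sketch halts;
    // remove it and delivery never terminates.
    let delivered = 0;
    const deliver: Send = (to, m) => {
      if (++delivered > 10) return; // artificial cutoff for the sketch
      to.receive(m, deliver);
    };
    deliver(ping, "start");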

Similarly, there is no guarantee that two actors will finish processing
even a single message before they start processing the next one. A group of
actors thus lacks the atomicity and consistency properties of individual
actors.

(Of course, all this would be irrelevant if these weren't properties people
depend on when reasoning about system correctness. But they are.)

As you note, actors are not unique in their non-termination. But that
misses the point. The issue was our ability to reason about actors
compositionally, not whether termination is a good property.



 But how do you weigh freedom to make choices for the task at hand even
 if they're bad choices for the tasks NOT immediately at hand (such as
 integration, maintenance)?


 For this, I think all of us fall back on heuristics as to what's a good
 idea. But those heuristics come from past experience.


We also use models - e.g. actors/messaging, databases, frameworks, VMs,
pubsub, etc. - with the understanding (or hope) that they're supposed to
help. We can then focus our brainpower on the task at hand, knowing that
various cross-cutting concerns are either already addressed or can be
addressed at leisure.

And that's exactly what a good language should be doing - lowering the bar
for those cross-cutting concerns.


 Hmm. Based on your response, I think we define event systems
 differently.


I provided a definition for 'events' in my article.

By events, I include commands, messages, procedure calls, other
conceptually `instantaneous` values in the system that independently effect
or report changes in state.


The 'instantaneous', 'independent', and 'changes in state' are all relevant
to my arguments. If you mean something different by event systems, we might
be talking past one another.



The system as a whole seems to me to be resilient, and I don't see that
 even if there is no generic way to re-establish connection at TCP/IP level
 this degrades resiliency somehow.


We can develop systems that exhibit resilience due to ad-hoc mechanisms.
That doesn't necessarily degrade resiliency, but it certainly loses the
generic and creates a lot of repeated development work.

Review my assertion with an understanding that I was emphasizing the
generic (and certainly not saying event systems lack resilience):

*Event systems lack generic resilience.* Developers have built patterns for
resilient event systems – timeouts, retries, watchdogs, command patterns.
Unfortunately, these patterns require careful attention to the specific
events model – i.e. where retries are safe, where losing an event is safe,
which subsystems can be restarted. Many of these recovery models are not
compositional – e.g. timeouts are not compositional because we need to
understand the timeouts of each subsystem. Many are non-deterministic and
work poorly if replicated across views. By comparison, state models can
generically achieve simple resilience properties like eventual consistency
and snapshot consistency. Often, developers in event systems will
eventually reinvent a more declarative, RESTful approach – but typically in
a non-composable, non-reentrant, glitchy, lossy, inefficient, buggy,
high-overhead manner (like observer patterns).
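
A compressed illustration of the loss/retry part of that asymmetry, as a
TypeScript sketch (the two "replicas" and the dropped-message comments
are contrived for the example):

    // Event style: each message encodes a *change*. Losing or
    // duplicating one corrupts the replica permanently; safe retry
    // requires protocol-specific reasoning (idempotency keys,
    // deduplication, ordering, ...).
    let eventReplica = 0;
    const applyEvent = (delta: number) => { eventReplica += delta; };
    applyEvent(5);        // delivered
    /* applyEvent(3); */  // dropped by the network: wrong forever

    // REST style: each message encodes the *full representation*.
    // Updates are idempotent, so the generic recovery rule is simply
    // "send it again"; any successfully delivered update restores
    // consistency.
    let stateReplica = 0;
    const putState = (state: number) => { stateReplica = state; };
    putState(5);          // delivered
    /* putState(8); */    // dropped by the network...
    putState(8);          // ...so just resend: the replica converges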


To the extent that 'multiple layers are at play' - I agree, but I believe
it's the non-eventful layers that contribute to resilience even as the
eventful ones (and POST) do their level best to tear it away.

Regards,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread David Barbour
On Sun, Apr 7, 2013 at 2:56 PM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 stability is not necessarily the goal. Perhaps I'm more in the biomimetic
 camp than I think.


Just keep in mind that the real world has quintillions of bugs. In
software, humans are probably still under a trillion.  :)
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc