[fonc] Please feel free to change the subject line.

2013-04-09 Thread Casey Ransberger
What started off (at least ostensibly) as a conversation about NLP ended up 
being a conversation about the actor model, and the subject did change once, 
but to something not, AFAIK, related to actors. If I were less patient about 
wading through blah blah I might have missed interesting thoughts about actors, 
which is relevant to my interests. 

I'm not referring to the off topic origin of the thread so much as the fact 
that the subject line didn't track the context as it shifted. 

I'm trying not to be too much of a complainer, and I have to applaud the 
community for patiently trying to bring a thread kicking and screaming back 
on topic, as well as Kim Rose for putting the official foot down about 
what's too far off topic for discussion on this list, so thank you and thank 
you.

-- Casey
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-09 Thread Chris Warburton
David Barbour dmbarb...@gmail.com writes:

 relying on global knowledge when designing an actor system seems, to me,
 not to be the right way


 In our earlier discussion, you mentioned that actors model can be used to
 implement lambda calculus. And this is true, given the bog-standard actors
 model. But do you believe you can explain it from your 'physics' view
 point? How can you know that you've implemented, say, a Fibonacci
 function with actors, if you forbid knowledge beyond what can be discovered
 with messages? Especially if you allow message loss?

snip

 Any expressiveness issues that can be attributed to actors model are a
 consequence of the model. It doesn't matter whether you implement it as a
 language or a framework or even use it as a design pattern.

 Of course, one might overcome certain expressiveness issues by stepping
 outside the model, which may be easy for a framework or certain
 multi-paradigm languages. But to the extent you do so, you can't claim
 you're benefiting from actors model. It's a little sad when we work around
 our models rather than with them.

I think in these kinds of discussions it's important to keep in mind
Goedel's limits on self-reference. Any Turing-complete system is
equivalent to any other in the sense that they can implement each other,
but when we reason about their properties it makes a difference whether
our 'frame of reference' is the implemented language or the
implementation language.

For example, let's say we implement lambda calculus in the actor model:
 - From the lambda calculus frame of reference I *know* that
   λx. a x == a (eta-equivalence), which I can use in my reasoning.
 - From the actor model frame of reference I *cannot* know that an
   encoding of λx. a x == an encoding of a, since it won't be. More
   importantly, reducing an encoding of λx. a x will give different
   computational behaviour to reducing an encoding of a. I cannot
   interchange one for the other in an arbitrary expression and *know*
   that they'll reduce to the same thing, since deciding that in general
   would solve the halting problem.

We can step outside the actor model and prove eta-equivalence of the
encoded terms (eg. it would follow trivially if we proved that we've
correctly implemented lambda calculus), but the actor model itself will
never believe such a proof.
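The frame-of-reference gap can be sketched in a few lines of Python (purely illustrative; this is not an actor encoding, just the extensional-vs-operational distinction):

```python
# f and its eta-expansion agree on every argument (extensional equality),
# yet as concrete objects they are distinguishable, and one of them
# performs an extra reduction step (an extra call) when run.

def f(a):
    return a + 1

g = lambda x: f(x)  # eta-expansion of f

# From the 'implemented language' frame: indistinguishable by results.
assert all(f(n) == g(n) for n in range(100))

# From the 'implementation language' frame: different objects entirely.
assert f is not g
```

Deciding such equivalences for arbitrary terms, rather than spot-checking finitely many inputs as above, is what would require solving the halting problem.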

To use David's analogy, there are some desirable properties that
programmers exploit which are inherently 3D and cannot be represented
in the 2D world. Of course, there are also 4D properties which our
3D infrastructure cannot represent, for example correct refactorings
that our IDE will think are unsafe, correct optimisations which our
compiler will think are unsafe, etc. At some point we have to give up
and claim that the meta-meta-meta-...-system is enough for practical
purposes and obviously correct in its implementation.

The properties that David is interested in preserving under composition
(termination, maintainability, security, etc.) are very meta, so it's
easy for them to become unrepresentable and difficult to encode when a
language/system/model isn't designed with them in mind.

Note that the above argument is based on Goedel's first incompleteness
theorem: no consistent system (of sufficient expressive power) can prove
all true statements. The halting problem is just the most famous example
of such an unprovable statement for Turing-complete systems.

Goedel's second incompleteness theorem is equally applicable here:
no system can prove its own consistency (or that of a system with
equivalent expressive power, ie. anything which can emulate it). In
other words, no system (eg. the actor model, lambda calculus, etc.) can
prove/reason about its own:

 - Consistency/correctness: a correct system could produce a correct
   proof that it's correct, or an incorrect system could produce an
   incorrect proof that it's correct; by trusting such a proof we become
   inconsistent.
 - Termination/productivity: a terminating/productive system could
   produce a proof that it's terminating/productive, or a divergent
   (non-terminating, non-productive) system could claim, but fail to
   produce, a proof that it's terminating/productive. By trusting such a
   proof we allow non-termination (or we could optimise it away,
   becoming inconsistent).
 - Security: a secure system could produce a proof that it's secure, or
   an insecure system could be tricked into accepting that it's
   secure. By trusting such a proof, we become insecure.
 - Reliability: a reliable system could produce a proof that it's
   reliable, or an unreliable system could fail in a way that produces
   an incorrect proof of its reliability. By trusting such a proof, we
   become unreliable.
 - Etc.

For such properties we *must* reason in an external language/system,
since Goedel showed that such loops cannot be closed without producing
inconsistency (or the analogous 'bad' outcome).

Regards,
Chris

Re: [fonc] Layering, Thinking and Computing

2013-04-09 Thread David Barbour
On Tue, Apr 9, 2013 at 5:21 AM, Chris Warburton
chriswa...@googlemail.comwrote:


 To use David's analogy, there are some desirable properties that
 programmers exploit which are inherently 3D and cannot be represented
 in the 2D world. Of course, there are also 4D properties which our
 3D infrastructure cannot represent, for example correct refactorings
 that our IDE will think are unsafe, correct optimisations which our
 compiler will think are unsafe, etc. At some point we have to give up
 and claim that the meta-meta-meta-...-system is enough for practical
 purposes and obviously correct in its implementation.

 The properties that David is interested in preserving under composition
 (termination, maintainability, security, etc.) are very meta, so it's
 easy for them to become unrepresentable and difficult to encode when a
 language/system/model isn't designed with them in mind.


Well said.


Re: [fonc] Layering, Thinking and Computing

2013-04-09 Thread John Carlson
So it's message recognition and not actor recognition?  Can actors
collaborate to recognize a message?  I'm trying to put this in terms of
subjective/objective.  In a subjective world there are only messages
(waves).  In an objective world there are computers and routers and
networks (actors, locations, particles).
On Apr 8, 2013 4:52 PM, Tristan Slominski tristan.slomin...@gmail.com
wrote:

 Therefore, with respect to this property, you cannot (in general) reason
 about or treat groups of two actors as though they were a single actor.


 This is incorrect; well, it's based on a false premise... this part is
 incorrect/invalid? (an appropriate word escapes me):

 But two actors can easily (by passing messages in circles) send out an
 infinite number of messages to other actors upon receiving a single message.


 I see it as the equivalent of saying: I can write an infinite loop,
 therefore, I cannot reason about functions

 As you note, actors are not unique in their non-termination. But that
 misses the point. The issue was our ability to reason about actors
 compositionally, not whether termination is a good property.


 The above statement, in my mind, sort of misunderstands reasoning about
 actors. What does it mean for an actor to terminate? The _only_ way you
 will know is if the actor sends you a message that it's done. Any
 reasoning about actors and their compositionality must be done in terms of
 messages sent and received. Reasoning in other ways does not make sense in
 the actor model (as far as I understand). This is how I model it in my
 head:

 It's sort of the analog of asking what happened before the Big Bang.
 Well, there was no time before the Big Bang, so asking about before
 doesn't make sense. In a similar way, reasoning about actor systems with
 anything except messages, doesn't make sense. To use another physics
 analogy, there is no privileged frame of reference in actors, you only get
 messages. It's actually a really well abstracted system that requires no
 other abstractions. Actors and actor configurations (groupings of actors)
 become indistinguishable, because they are logically equivalent for
 reasoning purposes. The only way to interact with either is to send it a
 message and to receive a message. Whether it's millions of actors or just
 one doesn't matter, because *you can't tell the difference* (remember,
 there's no privileged frame of reference). To instrument an actor
 configuration, you need to put actors in front of it. But to the user of
 such instrumented configuration, they won't be able to tell the difference.
 And so on and so forth; it's Actors All The Way Down.
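Tristan's configuration-as-actor point above can be sketched as a toy (the names are hypothetical, and message handling is modeled synchronously for brevity):

```python
# Hypothetical sketch (not a real actor runtime): the only interface is
# "send a message"; a lone actor and a configuration of actors behind a
# router are indistinguishable to the client.

class EchoActor:
    def send(self, msg, reply_to):
        reply_to.append(("echo", msg))

class EchoConfiguration:
    """Two actors internally: a router forwarding to a worker."""
    def __init__(self):
        self._worker = EchoActor()

    def send(self, msg, reply_to):
        self._worker.send(msg, reply_to)  # router actor forwards

def client(target):
    replies = []
    target.send("hello", replies)
    return replies

# Identical observable message behaviour in both cases:
assert client(EchoActor()) == client(EchoConfiguration())
```

The client has no operation other than sending and receiving messages, so it has no way to count the actors behind the interface.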

 ...

 I think we found common ground/understanding on other things.


 On Sun, Apr 7, 2013 at 6:40 PM, David Barbour dmbarb...@gmail.com wrote:

 On Sun, Apr 7, 2013 at 2:56 PM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 stability is not necessarily the goal. Perhaps I'm more in the
 biomimetic camp than I think.


 Just keep in mind that the real world has quintillions of bugs. In
 software, humans are probably still under a trillion.  :)












Re: [fonc] When natural language fails!

2013-04-09 Thread David Barbour
On Tue, Apr 9, 2013 at 1:48 AM, Casey Ransberger
casey.obrie...@gmail.comwrote:


 The computer is going to keep getting smaller. How do you program a phone?
 It would be nice to be able to just talk to it, but it needs to be able --
 in a programming context -- to eliminate ambiguity by asking me questions
 about what I meant. Or *something.*


Well, once computers get small enough that we can easily integrate them
with our senses and gestures, it will become easier to program again.

Phones are an especially difficult target (big hands and fingers, small
screens, poor tactile feedback, noisy environments). But something like
Project Glass or AR glasses could project information onto different
surfaces - screens the size of walls, effectively - or perhaps the size of
our Moleskine notebooks [1]. Something like Myo [2] would support pointer
and gesture control without much interfering with our use of hands.

That said, I think supporting ambiguity and resolving it will be one of the
upcoming major revolutions in both HCI and software design. It has a rather
deep impact on software design [3].

(Your Siri conversation had me laughing out loud. Appreciated.)

[1]
http://awelonblue.wordpress.com/2012/10/26/ubiquitous-programming-with-pen-and-paper/
[2] https://getmyo.com/
[3] http://awelonblue.wordpress.com/2012/05/20/abandoning-commitment-in-hci/


Re: [fonc] When natural language fails!

2013-04-09 Thread Carl Gundel
LOL!  I love your example.  :-)

I used to work at a company working on natural language processing (in
Smalltalk no less).  We had more than a dozen doctoral linguists and
computational linguists working at LingoMotors.  Here's just one single and
overwhelming example of a challenge to overcome.  A perfectly grammatical
sentence in a human language can have many valid parse trees, and this
isn't a design fault of the parser.  Then you have to pick the one
that the speaker intended. This is no mean feat.

So, first correctly recognize all the spoken words (hard enough), being sure
to know where the sentence boundaries are (also hard), then parse them
correctly into the possible correct senses (much harder), and then finally
decide based on expert knowledge and context that may not be present which
sense is the correct one (really, really difficult).

Natural language wins?  Not anytime soon.

-Carl Gundel

-Original Message-
From: fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of
Casey Ransberger
Sent: Tuesday, April 09, 2013 4:49 AM
To: Fundamentals of New Computing
Subject: [fonc] When natural language fails!

Here's my example. 

Siri: Intruders detected on the tenth floor. 

Me: Okay Siri, seal off decks six through twelve. Open the airlocks. Number
one! Arrange a security detachment, let's light a fire under their asses!

Siri: Aye captain. Retracting cooling rods from primary and secondary
reactors. Fire should commence within minutes. 

Me: No no no no! Put the cooling rods back into the reactors, Siri! What the
[expletive deleted] were you thinking??

Siri: Got it. I've made an appointment with your dentist for Monday.
Approximately three minutes fourteen seconds to meltdown in primary and
secondary reactors. Your dentist says hello, by the way. 

(etcetera)

The computer is going to keep getting smaller. How do you program a phone?
It would be nice to be able to just talk to it, but it needs to be able --
in a programming context -- to eliminate ambiguity by asking me questions
about what I meant. Or *something.*

It's tragic that Siri can't tell me what you get when you multiply six by
nine. I think it's been crippled, based on stuff Woz has said about what
Apple did when they bought it up. 

So there are some really interesting angles without well understood
solutions wrt NLP (and of course the group is welcomed to slap me in the
face with my ignorance because I know there's stuff I don't know.)

The closest thing I've found to a natural language programming system is
Inform 7, which leaves things to be desired. At least it is unambiguous, but
I think that in natural language, what we need are ways to cope with
disambiguation.


Anyone want to point me at cool stuff to read? :D

-- Casey Ransberger





Re: [fonc] When natural language fails!

2013-04-09 Thread Chris Warburton
David Barbour dmbarb...@gmail.com writes:

 On Tue, Apr 9, 2013 at 1:48 AM, Casey Ransberger
 casey.obrie...@gmail.comwrote:


 The computer is going to keep getting smaller. How do you program a phone?
 It would be nice to be able to just talk to it, but it needs to be able --
 in a programming context -- to eliminate ambiguity by asking me questions
 about what I meant. Or *something.*


 Well, once computers get small enough that we can easily integrate them
 with our senses and gestures, it will become easier to program again.

 Phones are an especially difficult target (big hands and fingers, small
 screens, poor tactile feedback, noisy environments). But something like
 Project Glass or AR glasses could project information onto different
 surfaces - screens the size of walls, effectively - or perhaps the size of
 our Moleskine notebooks [1]. Something like Myo [2] would support pointer
 and gesture control without much interfering with our use of hands.

There is a distinction between programming a mobile phone and
programming when mobile.

I can easily program my phone right now: I plug it into a USB port,
bring up the USB network, SSH into it, apt-get some tools and off I
go.

I can't program easily when I'm in the middle of a field, with no
infrastructure around me other than the phone and possibly a wireless
network connection. Believe me, on-screen keyboards and bash don't play
well together ;) This is where ad-hoc programming like speech commands
become interesting.

Cheers,
Chris


Re: [fonc] When natural language fails!

2013-04-09 Thread Chris Warburton
Carl Gundel ca...@psychesystems.com writes:

 LOL!  I love your example.  :-)

 I used to work at a company working on natural language processing (in
 Smalltalk no less).  We had more than a dozen doctoral linguists and
 computational linguists working at LingoMotors.  Here's just one single and
 overwhelming example of a challenge to overcome.  A perfectly grammatical
 sentence in a human language can have many valid parse trees, and this
 isn't a design fault of the parser. Then you have to pick the one
 that the speaker intended. This is no mean feat.

 So, first correctly recognize all the spoken words (hard enough), being sure
 to know where the sentence boundaries are (also hard), then parse them
 correctly into the possible correct senses (much harder), and then finally
 decide based on expert knowledge and context that may not be present which
 sense is the correct one (really, really difficult).

 Natural language wins?  Not anytime soon.

 -Carl Gundel

My intuition, based on a very limited course on speech recognition at
university and my own heavy bias towards programming languages, is that
'serious' use of speech commands will end up evolving some terse,
phonetic, unambiguous vocal programming language. It would resemble
speech in the same way that a bash session resembles an email chain.
There are probably languages like this in the wild already.

My reasoning is by analogy with text-based programming. Even Excel users
are used to saying "SUM(A15:B20) / 1.5" rather than "the sum of the
range from A fifteen to B twenty, all divided by one and a half". I don't
imagine we'll be saying the equivalent "sum open-paren dollar one five
colon..." but something more phonetic, hopefully something that can be
strung together without becoming incomprehensible.
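Such a terse vocal language might amount to little more than a token map; a purely speculative sketch (the vocabulary here is invented):

```python
# Hypothetical "vocal token" grammar: each spoken word maps one-to-one
# onto a formula fragment, avoiding English-style ambiguity entirely.

TOKENS = {
    "sum": "SUM(", "to": ":", "end": ")",
    "over": "/", "point": ".",
    "ay": "A", "bee": "B",
    "one": "1", "two": "2", "five": "5", "zero": "0",
}

def transcribe(spoken):
    """Concatenate the formula fragment for each spoken word."""
    return "".join(TOKENS[w] for w in spoken.split())

utterance = "sum ay one five to bee two zero end over one point five"
assert transcribe(utterance) == "SUM(A15:B20)/1.5"
```

A real design would need phonetically distinct, hard-to-mishear words, but the structure - a small unambiguous lexicon rather than free speech - is the point.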

I also think that tonal audio output may be preferable to spoken output
as the amount of data increases. For example, imagine a service monitor
that hums along as requests are processed, becoming discordant when it
starts seeing error messages. This lets us internalise the status of the
system, noticing immediately when something is out of the ordinary.

An equivalent speech system could only alert us when certain conditions
are met, eg. "Warning, 10% error rate".
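The service-monitor idea could be sketched as a mapping from error rate to (dis)consonance; the specific frequencies and mapping below are invented for illustration:

```python
# Speculative sketch of sonified monitoring: a steady base hum plus a
# second partial whose detuning grows with the error rate, so a rising
# error rate literally makes the sound more discordant.

BASE_HZ = 220.0  # the service's steady hum

def monitor_tones(error_rate):
    """Return (base, partial) frequencies; error_rate clamped to [0, 1]."""
    # At 0% errors the partial is a consonant perfect fifth (3/2 ratio);
    # as errors rise, the interval slides flat and sounds increasingly sour.
    ratio = 1.5 - 0.1 * min(max(error_rate, 0.0), 1.0)
    return BASE_HZ, BASE_HZ * ratio

assert monitor_tones(0.0) == (220.0, 330.0)  # consonant: all healthy
base, partial = monitor_tones(0.5)
assert partial < 330.0                       # detuned: trouble brewing
```

Feeding these frequencies to any synthesis backend would give a continuous ambient signal, where a speech system can only emit discrete threshold alerts.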

Cheers,
Chris


Re: [fonc] When natural language fails!

2013-04-09 Thread John Carlson
Sometimes I think that something like http://leapmotion.com will use
something like Ameslan to revolutionize programming.  Maybe programming
will become less sedentary and more like dance dance revolution.


Re: [fonc] When natural language fails!

2013-04-09 Thread David Barbour
On Tue, Apr 9, 2013 at 9:19 AM, Chris Warburton
chriswa...@googlemail.comwrote:


 There is a distinction between programming a mobile phone and
 programming when mobile.


True enough! And there's also a distinction between programming WITH a
mobile phone and programming while mobile. As hard as it would be to use
bash with an on-screen keyboard while sitting in a noisy restaurant, it
would be a lot harder to program while jogging or skiing. (And HCI is very
closely related to programming...)

Regards,

Dave


Re: [fonc] When natural language fails!

2013-04-09 Thread Chris Warburton
John Carlson yottz...@gmail.com writes:

 Sometimes I think that something like http://leapmotion.com will use
 something like Ameslan to revolutionize programming.  Maybe programming
 will become less sedentary and more like dance dance revolution.

How sedentary programming is depends on the programmer. Now that
touchscreens are common, we should implement the "user has punched the
screen" event handler ;)

Cheers,
Chris


Re: [fonc] When natural language fails!

2013-04-09 Thread John Carlson
I thought the desktop metaphor was programming.
On Apr 9, 2013 12:08 PM, David Barbour dmbarb...@gmail.com wrote:

 On Tue, Apr 9, 2013 at 9:19 AM, Chris Warburton chriswa...@googlemail.com
  wrote:


 There is a distinction between programming a mobile phone and
 programming when mobile.


 True enough! And there's also a distinction between programming WITH a
 mobile phone and programming while mobile. As hard as it would be to use
 bash with an on-screen keyboard while sitting in a noisy restaurant, it
 would be a lot harder to program while jogging or skiing. (And HCI is very
 closely related to programming...)

 Regards,

 Dave






Re: [fonc] When natural language fails!

2013-04-09 Thread Nathan Sorenson

 My intuition, based on a very limited course on speech recognition at
 University and my own heavy bias towards programming languages, is that
 'serious' use of speech commands will end up evolving some terse,
 phonetic, unambiguous vocal programming language

Tavis Rudd has become quite proficient at voice programming due to a bout with 
RSI. He gives a fascinating talk on his emacs+speech recognition setup: 
http://lanyrd.com/2013/pycon/scdzbr/


Re: [fonc] When natural language fails!

2013-04-09 Thread Miles Fidelman



John Carlson yottz...@gmail.com writes:


Sometimes I think that something like http://leapmotion.com will use
something like Ameslan to revolutionize programming.  Maybe programming
will become less sedentary and more like dance dance revolution.


Two words: Minority Report


Re: [fonc] When natural language fails!

2013-04-09 Thread Chris Warburton
David Barbour dmbarb...@gmail.com writes:

 I also think that tonal audio output may be preferable to spoken output
 as the amount of data increases. For example, imagine a service monitor
 that hums along as requests are processed, becoming discordant when it
 starts seeing error messages. This lets us internalise the status of the
 system, noticing immediately when something is out of the ordinary.


 Indeed! Tonal output is something I've experienced when I was young, but I
 haven't seen much over the last couple decades. I would like tones for my
 on-screen phone keyboards, so I know what buttons I press without looking.
 I've also been thinking about applications in security systems - e.g.
 associating tones with faces or machine-recognition of behaviors.

I've not seen it much in 'serious' use, but there are lots of
games/interactive artworks which reward the player by building up more
elaborate soundtracks and punish them by making a racket.

A couple of examples I can think of are flOw and Bit Trip
Runner. Turning this on its head, Vib-Ribbon's *graphics* become a mess
when the user does badly and smooth out when doing well.

Cheers,
Chris


Re: [fonc] Layering, Thinking and Computing

2013-04-09 Thread Tristan Slominski
I think I am now bogged down in a Meta Tarpit :D

A good question to ask is: can I correctly and efficiently implement
 actors model, given these physical constraints? One might explore the
 limitations of scalability in the naive model. Another good question to ask
 is: is there a not-quite actors model suitable for a more
 scalable/efficient/etc. implementation. (But note that the not-quite
 actors model will never quite be the actors model.)


The problem with the above is that popular implementations (like Akka, for
example) give up things such as Object Capability for nothing... it's
depressing. From commentary by one of the creators of the framework
himself, as far as I understand, this was not a conscious choice but a
result of unfamiliarity with the model.

Actors makes a guarantee that every message is delivered (along with a nigh
 uselessly weak fairness property), but for obvious reasons guaranteed
 delivery is difficult to scale to distributed systems. And it seems you're
 entertaining whether *ad-hoc message loss* is suitable.


I still prefer to model them as in "every message is delivered". It wasn't I
who challenged this original guaranteed-delivery condition but Carl Hewitt
himself (see:
http://letitcrash.com/post/20964174345/carl-hewitt-explains-the-essence-of-the-actor
at timestamp 14:00). I was quite surprised by this (
https://groups.google.com/d/msg/computational-actors-guild/Xi-aGdSotxw/nlq8Ib0fDaMJ)

Consider an alternative: explicitly model islands (within which no message
 loss occurs) and serialized connections (bridges) between them. Disruption
 and message loss could then occur in a controlled manner: a particular
 bridge is lost, with all of the messages beyond a certain point falling
 into the ether. Compared to ad-hoc message loss, the bridged islands design
 is much more effective for reasoning about and recovering from partial
 failure.


You've described composing actors into actor configurations :D; from the
outside world, your island looks like a single actor.
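David's bridged-islands design (quoted above) might look roughly like this; a hypothetical sketch, not any real framework's API:

```python
import random

# Sketch of bridged islands (hypothetical design): delivery inside an
# island is reliable; loss happens only at bridges, and only as
# "everything after some point is dropped", never arbitrary holes.

class Island:
    def __init__(self):
        self.actors = {}  # name -> mailbox (a list of received messages)

    def send(self, name, msg):
        self.actors[name].append(msg)  # local delivery never fails

class Bridge:
    def __init__(self, remote, rng=None):
        self.remote = remote
        self.up = True
        self.rng = rng or random.Random(0)

    def send(self, name, msg):
        if self.up and self.rng.random() < 0.01:
            self.up = False  # bridge severed; later messages fall away
        if self.up:
            self.remote.send(name, msg)

island = Island()
island.actors["log"] = []
bridge = Bridge(island)
for i in range(5):
    bridge.send("log", i)

# Either a prefix of the messages arrived or all of them did:
assert island.actors["log"] == list(range(len(island.actors["log"])))
```

The recovery story is what differs from ad-hoc loss: a failed bridge is a single, observable event with a well-defined cut point, rather than a scattering of silently missing messages.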

I find actors can only process one message at a time is an interesting
 constraint on concurrency, and certainly a useful one for reasoning. And
 it's certainly relevant with respect to composition (ability to treat an
 actor configuration as an actor) and decomposition (ability to divide an
 actor into an actor configuration).


From the same video I linked above, at time 5:16 Carl Hewitt explains "one
message at a time".

Do you also think zero and one are uninteresting numbers?


I've spent the equivalent of a semester reading/learning axiomatic set
theory and trying to understand why 0+1=1 and 1+1=2. I definitely don't
think they are uninteresting :D

It seems you misrepresent your true opinion and ignore difficult,
 topic-relevant issues by re-scoping discussion to one of explanatory power.


This may very well be the case, hence my earlier comment of being stuck in
a Meta Tarpit.

But to the extent you do so, you can't claim you're benefiting from actors
 model. It's a little sad when we work around our models rather than with
 them.


I don't think we have created enough tooling or understanding to fully grok
the consequences of the actor model yet. Where's our math for emergent
properties and swarm dynamics of actor systems? Where are our tools for
that? Even with large companies operating server farms, the obsession with
control of individual components is pervasive. Even ZooKeeper, a Java-world
all-time favorite distributed coordination system, has a *master* node
that is elected. If that's our pinnacle, no wonder we can't benefit from
the actor model. Where is our reasoning about symbiotic autopoietic and
allopoietic systems? (earlier reference to
http://pleiad.dcc.uchile.cl/_media/bic2007/papers/conscientioussoftwarecc.pdf)
This is, in my view, where the actor systems will shine, but I haven't
seen (it could be my ignorance) sustained community searching to discover
and command such actor configurations. This is what I'm trying to highlight
when I discuss the appropriate frame of reference for actor system
programming. (This can be rooted in my ignorance, in which case, I would
love some pointers to how this approach failed in the past).

That said, focusing on explanatory power could be interesting - e.g.
 comparing actors model with other physically-inspired models (e.g. time
 warp, cellular automata, synchronous reactive). To be honest, I think
 actors model will fare poorly. Where do 'references' occur in physics? What
 about fan-in and locality?


 Such issues include composition, decomposition, consistency, discovery,
 persistence, runtime update.



 But there are also a few systemic with respect to implementation - e.g.
 regarding garbage collection, process control, and partitioning or partial
 failure in distributed systems, and certain optimizations (inlining,
 mirroring). Actors really aren't as scalable as they promise without quite
 a few hacks.


That's some stuff that I'm interested in 

Re: [fonc] When natural language fails!

2013-04-09 Thread Brendan Baldwin
Wasn't John McCarthy's Elephant programming language based on the metaphor
of conversation?  Perhaps voice based programming interactions are
addressed there?
On Apr 9, 2013 8:46 AM, David Barbour dmbarb...@gmail.com wrote:


 On Tue, Apr 9, 2013 at 1:48 AM, Casey Ransberger casey.obrie...@gmail.com
  wrote:


 The computer is going to keep getting smaller. How do you program a
 phone? It would be nice to be able to just talk to it, but it needs to be
 able -- in a programming context -- to eliminate ambiguity by asking me
 questions about what I meant. Or *something.*


 Well, once computers get small enough that we can easily integrate them
 with our senses and gestures, it will become easier to program again.

 Phones are an especially difficult target (big hands and fingers, small
 screens, poor tactile feedback, noisy environments). But something like
 Project Glass or AR glasses could project information onto different
 surfaces - screens the size of walls, effectively - or perhaps the size of
 our Moleskine notebooks [1]. Something like Myo [2] would support pointer
 and gesture control without much interfering with our use of hands.

 That said, I think supporting ambiguity and resolving it will be one of
 the upcoming major revolutions in both HCI and software design. It has a
 rather deep impact on software design [3].

 (Your Siri conversation had me laughing out loud. Appreciated.)

 [1]
 http://awelonblue.wordpress.com/2012/10/26/ubiquitous-programming-with-pen-and-paper/
 [2] https://getmyo.com/
 [3]
 http://awelonblue.wordpress.com/2012/05/20/abandoning-commitment-in-hci/






Re: [fonc] Natural Language Wins

2013-04-09 Thread Fernando Cacciola
On Sat, Apr 6, 2013 at 11:21 PM, David Barbour dmbarb...@gmail.com wrote:

 I think you're being optimistic about human rationality there. (I
 disagree. QED.)



Hmm, well, I'm afraid that indeed I would have only been right if we were
all consistently rational. And definitely we are not.

I find it interesting how you proved me wrong by being rational about our
irrationality though.

Best

-- 
Fernando Cacciola
SciSoft Consulting, Founder
http://www.scisoft-consulting.com


Re: [fonc] Layering, Thinking and Computing

2013-04-09 Thread David Barbour
On Tue, Apr 9, 2013 at 12:44 PM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 popular implementations (like Akka, for example) give up things such as
 Object Capability for nothing.. it's depressing.


Indeed. Though, frameworks shouldn't rail too much against their hosts.



 I still prefer to model them as in every message is delivered. It wasn't I
 who challenged this original guaranteed-delivery condition but Carl Hewitt
 himself.


It is guaranteed in the original formalism, and even Hewitt can't change
that. But you can model loss of messages (e.g. by explicitly modeling a
lossy network).


 You've described composing actors into actor configurations :D, from the
 outside world, your island looks like a single actor.


I did not specify that there is only one bridge, nor that you finish
processing a message from one bridge before you start processing the next.
If you model the island as a single actor, you would fail to represent many
of the non-deterministic interactions possible in the 'island as a set' of
actors.


 I don't think we have created enough tooling or understanding to fully
 grok the consequences of the actor model yet. Where's our math for emergent
 properties and swarm dynamics of actor systems? [..] Where is our reasoning
 about symbiotic autopoietic and allopoietic systems? This is, in my view,
  where the actor systems will shine


I cannot fathom your optimism.

What we can say of a model is often specific to how we implemented it, the
main exceptions being compositional properties (which are trivially a
superset of invariants). Ad-hoc reasoning easily grows intractable and
ambiguous to the extent the number of possibilities increases or depends on
deep implementation details. And actors model seems to go out of its way to
make reasoning difficult - pervasive state, pervasive non-determinism,
negligible ability to make consistent observations or decisions involving
the states of two or more actors.

I think any goal to lower those comprehension barriers will lead to the
development of new models. Of course, they might first resolve as
frameworks or design patterns that get used pervasively (~ global
transformation done by hand, ugh). Before RDP, there were reactive design
patterns I had developed in the actors model while pursuing greater
consistency and resilience.

Regards,

Dave