Re: [fonc] Fwd: Interaction Design for Languages

2013-08-30 Thread Chris Warburton
David Barbour dmbarb...@gmail.com writes:

 There are actually several ways to compose predictive/learning systems. The
 simple compositions: they can be chained, hierarchical, or run in parallel
 (combining independent estimates). More complicated composition: use
 speculative evaluation to feed predicted behaviors and sensory inputs back
 into the model. (Speculation is also useful in on-the-fly planning loops
 and constraint models.)

One important way to compose learning systems is co-evolution. This
couples the learning processes, rather than the predictions, and can
result in composite systems that are more powerful/accurate than the sum
of their parts.
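
A toy sketch of what I mean (the FrequencyLearner/best_response names are
just illustrative, not taken from any real system): two fictitious-play
learners in matching pennies, where each one's training signal is the
other's behaviour, so the learning processes themselves are coupled:

    class FrequencyLearner:
        """Estimates the opponent's move distribution by counting (fictitious play)."""
        def __init__(self):
            self.counts = {"H": 1, "T": 1}  # Laplace-smoothed counts

        def predict_opponent(self):
            total = sum(self.counts.values())
            return {m: c / total for m, c in self.counts.items()}

        def observe(self, opponent_move):
            self.counts[opponent_move] += 1

    def best_response(prediction, role):
        # the matcher copies the opponent's likely move; the mismatcher avoids it
        likely = max(prediction, key=prediction.get)
        return likely if role == "matcher" else ("H" if likely == "T" else "T")

    matcher, mismatcher = FrequencyLearner(), FrequencyLearner()
    for _ in range(1000):
        a = best_response(matcher.predict_opponent(), "matcher")
        b = best_response(mismatcher.predict_opponent(), "mismatcher")
        matcher.observe(b)      # each learner trains on the other's behaviour,
        mismatcher.observe(a)   # so the two learning processes are coupled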

Cheers,
Chris
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Fwd: Interaction Design for Languages

2013-08-29 Thread David Barbour
[fwd to fonc]

When I speak of merging programming and HCI, I refer more to the experience
than the communities. I'm fine with creating a new HCI experience - perhaps
one supporting ubiquitous computing or augmented reality.

I can't say my principles are right! Just that they are, and that I
believe they're useful.

If you don't like call/cc, I could offer similar stories about
futures/promises (popular in some systems), or adaptive pipelines that
adjust to downstream preferences (my original use case).
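
For concreteness, a minimal sketch of the futures/promises flavour using
Python's concurrent.futures (the fetch function is a made-up stand-in; this
is not the adaptive-pipeline use case itself):

    from concurrent.futures import ThreadPoolExecutor

    def fetch(x):
        return x * 2                  # stand-in for a slow, I/O-bound computation

    with ThreadPoolExecutor() as pool:
        fut = pool.submit(fetch, 21)  # start the work now, get a handle immediately
        # ... other work can proceed here while fetch runs ...
        print(fut.result())           # block only where the value is needed: 42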

There are actually several ways to compose predictive/learning systems. The
simple compositions: they can be chained, hierarchical, or run in parallel
(combining independent estimates). More complicated composition: use
speculative evaluation to feed predicted behaviors and sensory inputs back
into the model. (Speculation is also useful in on-the-fly planning loops
and constraint models.)
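
A rough sketch of these compositions, treating a predictive model as a plain
function (the chain/parallel/speculate names and the stand-in models are a
toy formulation of my own; hierarchical composition is omitted):

    from statistics import mean

    def chain(*models):
        """Sequential composition: each model refines the previous estimate."""
        def composed(x):
            for m in models:
                x = m(x)
            return x
        return composed

    def parallel(*models):
        """Parallel composition: combine independent estimates (here by averaging)."""
        def composed(x):
            return mean(m(x) for m in models)
        return composed

    def speculate(model, steps):
        """Speculative evaluation: feed the model's own predictions back in as inputs."""
        def composed(x):
            trajectory = [x]
            for _ in range(steps):
                trajectory.append(model(trajectory[-1]))
            return trajectory
        return composed

    # usage with stand-in models
    smooth = lambda x: 0.5 * x
    drift = lambda x: x + 1.0
    print(chain(smooth, drift)(10.0))                # 6.0
    print(parallel(smooth, drift)(10.0))             # 8.0
    print(speculate(chain(smooth, drift), 3)(10.0))  # [10.0, 6.0, 4.0, 3.0]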

I think plodding along in search of new knowledge is fine. But my
impression is that many PL projects are ignoring old research, plodding over
old ground, making old mistakes. We can't know everything before we begin,
but we don't need to. Just know what has succeeded or failed before, and
have a hypothesis for why.
On Aug 29, 2013 8:50 PM, Sean McDirmid smcd...@microsoft.com wrote:

  My own ideas are quite mundane on this subject since I take design at
 face value. It is not something that can be formally validated, and is more
 of an experiential practice. I disagree that formally “right” answers are
 even possible at this point, and would have a hard time arguing with
 someone who did, given that we would be talking past each other with two
 very different world views! 

 Also, this might be interesting: HCI started out as a way of figuring out
 how humans programmed computers, since there was very little interacting
 with computers otherwise. The visual programming field was then heavily
 intertwined with HCI in the 90s, before HCI broke off into its current
 CHI/UIST form today. But I have to wonder: is merging HCI and programming
 worth it? I say this, because I don’t believe the HCI community is a very
 good role model; their techniques can be incredibly dicey. They don’t have
 a formal notion of right, of course, but even their informal methodologies
 are often contrived and not very useful (CHI has lots of papers; a few
 good ones do well enough in their arguments).

 Composition is quite thorny outside of PL. There is no way to compose
 neural networks; most machine learning techniques result in models that are
 neither composable nor decomposable! There is a fundamental limitation here
 that we in math/PL haven’t had to deal with yet. But this limitation arises
 in nature: we can often decompose systems logically in our head (e.g. the
 various biological systems of an organism), but we can’t really compose
 them…they just arise organically. 

 I don’t think call/cc is a good example of flexibility vs. guarantees. Not
 many programmers use it directly, and those that do are very disciplined
 about it. I would love to see someone push linear types just to see how far
 they can go, and whether they promote rather than limit flexibility as you
 claim. I suspect not, but would be happy to be wrong.

 “worse is better” is very agile and prevents us from freezing up when our
 principles, intuitions, and formal theories fail to provide us with
 answers. Basically, we are arguing about how much we know and can know. My
 opinion is that there are just so many things we don’t know very well, and
 we’ll have to plod along in the pursuit of results as well as knowledge.
 This contrasts with the waterfall method, which assumes we can know
 everything before we begin. These different philosophies even surface in PL
 designs (e.g. Python vs. Haskell). 

 From: augmented-programm...@googlegroups.com
 [mailto:augmented-programm...@googlegroups.com] On Behalf Of David Barbour
 Sent: Friday, August 30, 2013 11:20 AM
 To: augmented-programm...@googlegroups.com
 Cc: Fundamentals of New Computing
 Subject: Re: Interaction Design for Languages

 Thanks for responding, Sean. But I hope you'll provide your own ideas and
 concepts as well, rather than just reacting to mine. :)

 On Thu, Aug 29, 2013 at 6:23 PM, Sean McDirmid smcd...@microsoft.com
 wrote:

  My response:

  

 1) Formal justification of human behavior is a lofty goal, especially with
 today’s technology. We know how to empirically measure simple reflexes (say
 Fitts’s law or Hick’s law), but anything complex gets pummeled by noise in
 the form of bias and diversity. And how do those simple processes compose
 into cognition? Focusing just on what can be measured will lead to very LCD
 (lowest common denominator) designs. And when we do finally figure it out,
 I’m afraid we’ll be at the point of singularity anyway when we’ve learned
 how to design something better