Re: [fonc] Programming Language Theory Stack Exchange

2014-09-28 Thread David Barbour
On Sun, Sep 28, 2014 at 8:36 AM, Miles Fidelman mfidel...@meetinghouse.net
wrote:

 You're assuming that Q&A is a good way to discuss a topic in depth.


I believe you're misreading Julian. AFAICT, he's said nothing about the
utility of the discussions on each site.

Q&A can be good for depth - see the Socratic method. But a site like
StackExchange doesn't support follow-up questions to take advantage of that
point.

I believe a PL StackExchange site can fill a niche that LtU does not.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Programming Language Theory Stack Exchange

2014-09-27 Thread David Barbour
A proposed stack exchange for programming language theory has reached
commitment phase. It needs two hundred people. If you're interested in PL,
please participate:

http://area51.stackexchange.com/proposals/65167?phase=commitment
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] POL/DSL for security

2014-01-06 Thread David Barbour
Look into F* from Microsoft Research. It isn't quite what you're asking
for, but it might be what you need.


On Mon, Jan 6, 2014 at 11:27 AM, John Carlson yottz...@gmail.com wrote:

 Is anyone interested in a POL/DSL for security?   What's out there now?  I
 am thinking of a language which describes attacks, defenses, and
 vulnerabilities (and perhaps superpowers), as well as faux-attacks,
 faux-defenses, and faux-vulnerabilities (and perhaps faux-superpowers).

 A Granovetter diagram would be a very high level statement in such a
 language.

 Thanks,

 John

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Task management in a world without apps.

2013-10-31 Thread David Barbour
Instead of 'applications', you have objects you can manipulate (compose,
decompose, rearrange, etc.) in a common environment. The state of the
system, the construction of the objects, determines not only how they
appear but how they behave - i.e. how they influence and observe the world.
Task management is then simply rearranging objects: if you want to turn an
object 'off', you 'disconnect' part of the graph, or perhaps you flip a
switch that does the same thing under the hood.

This has very physical analogies. For example, there are at least two ways
to task manage a light: you could disconnect your lightbulb from its
socket, or you could flip a lightswitch, which opens a circuit.
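
To make that concrete, here's a toy sketch (illustrative Python only; the
names are mine, not from any real system):

    # Objects in a common environment, wired into a graph; 'task management'
    # is just editing the wiring.
    class Obj:
        def __init__(self, name):
            self.name = name
            self.outputs = []            # the objects this one influences

        def connect(self, other):
            self.outputs.append(other)

        def disconnect(self, other):
            self.outputs.remove(other)   # turning something 'off' = unplugging it

    bulb, switch = Obj('bulb'), Obj('switch')
    switch.connect(bulb)                 # behavior is determined by the graph...
    switch.disconnect(bulb)              # ...and managed by rearranging it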

There are a few interesting classes of objects, which might be described as
'tools'. There are tools for your hand, like different paintbrushes in
Paint Shop. There are also tools for your eyes/senses, like a magnifying
glass, x-ray goggles, heads-up display, event notifications, or language
translation. And there are tools that touch both aspects - like a
projectional editor or lenses. If we extend the user-model with concepts like
'inventory', and programmable tools for both hand and eye, those can serve
as another form of task management. When you're done painting, put down the
paintbrush.

This isn't really the same as switching between tasks: you can still
get event notifications on your heads-up-display while you're editing an
image. It's closer to controlling your computational environment by direct
manipulation of structure that is interpreted as code (aka live
programming).

Best,

Dave



On Thu, Oct 31, 2013 at 10:29 AM, Casey Ransberger casey.obrie...@gmail.com
 wrote:

 A fun, but maybe idealistic idea: an application of a computer should
 just be what one decides to do with it at the time.

 I've been wondering how I might best switch between tasks (or really
 things that aren't tasks too, like toys and documentaries and symphonies)
 in a world that does away with most of the application level modality that
 we got with the first Mac.

 The dominant way of doing this with apps usually looks like either the OS
 X dock or the Windows 95 taskbar. But if I wanted less shrink wrap and more
 interoperability between the virtual things I'm interacting with on a
 computer, without forcing me to multitask (read: do more than one thing
 at once very badly), what does my best possible interaction language look like?

 I would love to know if these tools came from some interesting research
 once upon a time. I'd be grateful for any references that can be shared.
 I'm also interested in hearing any wild ideas that folks might have, or
 great ideas that fell by the wayside way back when.

 Out of curiosity, how does one change one's mood when interacting with
 Frank?

 Casey
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Task management in a world without apps.

2013-10-31 Thread David Barbour
Alan,

I appreciate the peek into history! I had to look up Fabrik and PARTS. I
love the idea of running presentations as live coding; in fact, I shall
endeavor to do so for any talks I give regarding my own system.

Smalltalk has a lot of good ideas, but they're sometimes mixed with
not-so-great ideas and difficult to separate. Even today, the idea of
applications as objects in the IDE results in a knee-jerk rejection
response from many people who fear a tight coupling (to share the app, I
need to share the whole IDE!) based largely on Smalltalk's example.
Language-layer security and an alternative state model could address this
issue, enabling easy decoupling of behavior from environment. Similarly,
MVC has several properties that I believe have been more harmful than
helpful. Models in MVC systems are neither compositional nor open, and
controllers are decoupled from views, which hinders direct manipulation and
physical metaphors. More modern variations such as MVVM are improvements,
but they're still a long way from collaborative projectional editors or
spreadsheets.

But the good ideas should be preserved, separated from the chaff, reused in
new contexts. It's interesting to pick apart history and hypothesize why
various good ideas have failed to gain traction.

Best,

Dave

On Thu, Oct 31, 2013 at 11:31 AM, Alan Kay alan.n...@yahoo.com wrote:

 It's worth noting that this was the scheme at PARC and was used heavily
 later in Etoys.

 This is why Smalltalk has unlimited numbers of Projects. Each one is a
 persistent environment that serves both as a place to make things and as a
 page of desktop media.

 There are no apps, only objects and any and all objects can be brought to
 any project which will preserve them over time. This avoids the stovepiping
 of apps. Dan Ingalls (in Fabrik) showed one UI and scheme to integrate the
 objects, and George Bosworth's PARTS system showed a similar but slightly
 different way.

 Also there is no presentation app in Etoys, just an object that allows
 projects to be put in any order -- and there can be many, many such orderings
 all preserved -- and there is an object that will move from one project to
 the next as you give your talk. Builds etc are all done via Etoy scripts.

 This allows the full power of the system to be used for everything,
 including presentations. You can imagine how appalled we were by the
 appearance of Persuasion and PowerPoint, etc.

 Etc.

 We thought we'd done away with both operating systems and with apps
 but we'd used the wrong wood in our stakes -- the vampires came back in the
 80s.

 One of the interesting misunderstandings was that Apple and then MS didn't
 really understand the universal viewing mechanism (MVC) so they thought
 views with borders around them were windows and views without borders were
 part of desktop publishing, but in fact all were the same. The Xerox Star
 confounded the problem by reverting to a single desktop and apps and missed
 the real media possibilities.

 They divided a unified media world into two regimes, neither of which is
 very good for end-users.

 Cheers,

 Alan


   --
  *From:* David Barbour dmbarb...@gmail.com
 *To:* Fundamentals of New Computing fonc@vpri.org
 *Sent:* Thursday, October 31, 2013 8:58 AM
 *Subject:* Re: [fonc] Task management in a world without apps.

 Instead of 'applications', you have objects you can manipulate (compose,
 decompose, rearrange, etc.) in a common environment. The state of the
 system, the construction of the objects, determines not only how they
 appear but how they behave - i.e. how they influence and observe the world.
 Task management is then simply rearranging objects: if you want to turn an
 object 'off', you 'disconnect' part of the graph, or perhaps you flip a
 switch that does the same thing under the hood.

 This has very physical analogies. For example, there are at least two ways
 to task manage a light: you could disconnect your lightbulb from its
 socket, or you could flip a lightswitch, which opens a circuit.

 There are a few interesting classes of objects, which might be described
 as 'tools'. There are tools for your hand, like different paintbrushes in
 Paint Shop. There are also tools for your eyes/senses, like a magnifying
 glass, x-ray goggles, heads-up display, event notifications, or language
 translation. And there are tools that touch both aspects - like a
 projectional editor or lenses. If we extend the user-model with concepts like
 'inventory', and programmable tools for both hand and eye, those can serve
 as another form of task management. When you're done painting, put down the
 paintbrush.

 This isn't really the same as switching between tasks: you can still
 get event notifications on your heads-up-display while you're editing an
 image. It's closer to controlling your computational environment by direct
 manipulation of structure that is interpreted as code (aka live
 programming).

 Best

Re: [fonc] Task management in a world without apps.

2013-10-31 Thread David Barbour
It can be depressing, certainly, to look at the difference between where
we are and where we could be, if we weren't short-sighted and greedy.
OTOH, if you look at where we are vs. where we were, I think you can
find a lot to be optimistic about. FP and types have slowly wormed their
way into many PLs. Publish-subscribe is gaining mindshare. WebRTC, HTML
Canvas, WebSockets, etc. have finally resulted in a widespread VM that people
are actually willing to use (even if it could be better).

On Thu, Oct 31, 2013 at 1:16 PM, David Leibs david.le...@oracle.com wrote:

 In the spirit of equivocation, when I look at the world we live in and
 note the trends, I feel worse, not better.

 -David Leibs

 On Oct 31, 2013, at 11:10 AM, David Barbour dmbarb...@gmail.com wrote:

 The phrase Worse is better involves an equivocation - the 'worse' and
 'better' properties are applied in completely different domains (technical
 quality vs. market success). But, hate it or not, it is undeniable that the
 worse is better philosophy has been historically successful.


 On Thu, Oct 31, 2013 at 12:50 PM, David Leibs david.le...@oracle.com wrote:

 Hi Chris,
 I get your point but I have really grown to dislike that phrase Worse is
 Better.  Worse is never better.  Worse is always worse, and worse never
 reduces to better under any set of natural rewrite rules. Yes, there are
 advantages in the short term to being first to market, and things that are
 worse can have more mindshare in the arena of public opinion.

 Worse is Better sounds like some kind of apology to me.

 cheers,
 -David Leibs

 On Oct 31, 2013, at 10:37 AM, Chris Warburton chriswa...@googlemail.com
 wrote:

 Unfortunately, a big factor is also the first-to-market pressure,
 otherwise known as 'Worse Is Better': you can reduce the effort required
 to implement a system by increasing the effort required to use it. The
 classic example is C vs LISP, but a common one these days is
 multithreading vs actors, coroutines, etc.



 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc



 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Task management in a world without apps.

2013-10-31 Thread David Barbour
On Thu, Oct 31, 2013 at 12:37 PM, Chris Warburton chriswa...@googlemail.com
 wrote:

 In the case of an OS, providing a dumb box to draw on is much easier
 than a complete, complementary suite of MVC/Morphic/etc. components,
 even though developers are forced to implement their own incompatible
 integration layers, if they bother at all.


 This is why I'm not a fan of HTML5 canvas, since it's a dumb box which
 strips away the precious-little semantics the Web has, and restricts
 mashups to little more than putting existing boxes next to each other.


There is worse is better, but there is also less is more. From a
limited perspective, it may be difficult to tell the difference. We should
be careful not to mistake one for the other. In this case, the other POV is that the
canvas is a humble box that doesn't arrogantly presume it knows better than
its users how to format a display. :)
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] What's wrong with storing programs as text?

2013-10-14 Thread David Barbour
On Tue, Oct 15, 2013 at 12:09 AM, Faré fah...@gmail.com wrote:


  Without going into too much detail on what becomes possible when programs
  are stored in domain specific data structures [...]
 Yes, yes, yes, but to beat a good text editor like Emacs, you have to
 fight this phenomenon identified long ago by Alan Perlis: It is
 better to have 100 functions operate on one data structure than 10
 functions on 10 data structures.
 [..]
 A good text editor can, in constant programming time, improve all
 the points of view at the same time, and delegate the finer
 non-trivial details of bridging to the human imagination. That
 requires fewer people with loose coupling.


As I see it, text is still structured in domain-specific ways, and text
editors make it very difficult to provide useful views on this data without
also integrating ad-hoc parsers, language specific environment models and
module systems, even type checkers - all of which is very painful to do.
Alan Perlis noted in the same epigrams you quote: The string is a stark
data structure and everywhere it is passed there is much duplication of
process. It is a perfect vehicle for hiding information.

Also, I think we can have 100s of functions for 100s of data structures if
we take a different approach to developing those functions, such as
programming-by-example. Especially if many data structures have common
classes that can immediately use many generic functions (like document,
graph, diagram, geometry, table, matrix, spreadsheet, etc.).



 Doing it will [take] a committed person at the heart who has a good
 architectural vision that makes sense and attracts hackers, can maybe
 communicate it well enough to convince other people to join


Whatever happened to your TUNES project?

I believe I've developed a new approach that can achieve the TUNES vision,
albeit with a different... point of view. I'd be interested in your
opinion. See my recent post on Personal Programming Environment.


  data structures should be able to represent any desired state such as
 partially invalid program in textual notation.
 
 That's an essential insight indeed. It should be possible to start
 thinking according to one point of view without having to keep
 coherence with other points of view at all times. But then, how does
 your system reconcile things? What if two people edit at the same
 time, breaking different invariants? Soon enough, you'll find you need
 version control, transactions, schema invariant enforcement, and
 tracking of temporarily broken invariants (where temporary all too
 often means till death do us part).



I think a more useful question might be: how should these conditions be
addressed in a live programming system? I think the answer is that
programmers should be able to control and understand the boundaries for
failure. Making failure too fine-grained is problematic, since it requires
handling a partial success. Making failure too coarse-grained is
problematic because it hinders reasoning about security properties.

Best,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] What's wrong with storing programs as text?

2013-10-12 Thread David Barbour
I agree with much of what you describe about the problem. We should support
many different views of the data describing the program. And not just for
different people. Even a single person can benefit from many views, e.g.
when debugging or addressing different problems. Text seems to be very
opaque for most analyses, requires too much non-local interpretation.

What model do you use for projectional editing? Lenses or something else?


On Sat, Oct 12, 2013 at 6:48 AM, Levente Mészáros 
levente.mesza...@gmail.com wrote:

 I think the short answer is that there's nothing wrong with it per se, but
 on the other hand I think there's a lot. I think the problem lies in that
 it's difficult to understand, develop and maintain a complex program. The
 real question is this: what can tools do to help people (and not just
 programmers) with that? This quote comes to my mind:

 “Programs must be written for people to read, and only incidentally for
 machines to execute.”
 -- Hal Abelson, Structure and Interpretation of Computer Programs

 I think people working on a complex program
  - have different domain specific knowledge (requirements, business model,
 aesthetics, usability, technical background, etc.)
  - have different roles (business analyst, domain expert, graphical
 designer, programmer, tester, technical writer, project manager, etc.)
  - understand different aspects (user interface, business processes, data
 structures, algorithms, etc.)
  - work on different parts (individual forms, queries, algorithms,
 graphics, etc.)
  - collaborate with each other (feature discussions, design issues, well
 defined APIs, bug tickets)
  - manage multiple versions (testing, production, backward compatibility)
  - etc.

 I think people need better tools that provide different presentations of
 the same complex program based on their knowledge, role, understanding,
 working area, etc. in short: their current presentation context. The
 presented parts may contain business requirements, documentations,
 discussions, graphical designs, business processes, design decisions,
 architectures, algorithms, data structures, version histories, APIs, code
 blocks, etc. and any combination of these. The presented parts should be
 interlinked, navigable and editable independently of the used notation, be
 it textual, tabular, graphical, etc. or any combination of these.
 Filtering, grouping, ordering and annotating information while editing
 should be provided as a generic feature.

 I think that it's easier to fulfill the above requirements if the program
 is stored in a complex mixture of domain specific data structures such as
 styled text, tables, trees, graphs, graphics, flow charts, programming
 language constructs, etc. Editing operations shouldn't be limited in any
 way: in other words the data structures should be able to represent any
 desired state such as a partially invalid program in textual notation. I
 strongly believe that it's possible to make a tool that also allows editing
 programs in various notations, including textual, while storing them in domain
 specific data structures, optionally resorting to storing some parts as text
 if needed. The interesting thing to note here is that whether the program
 is valid at any given time is not that important (see the quote above).

 Without going into too much detail on what becomes possible when programs
 are stored in domain specific data structures let me list a few things here:
  - styling with colors, fonts, sizes, etc. is possible
  - renaming anything becomes trivial
  - moving things around doesn't break references
  - multiple different orderings and groupings become possible
  - semantic analysis of changes results in fewer conflicts
  - discussing and annotating anything in place
  - mixing and using notations according to needs
  - presenting things in multiple locations while still being the same
  - literate programming is pushed to its extreme
  - filtering, grouping, ordering anything to your needs in multiple ways
  - cooperating with domain experts directly
  - versioning anything
  - etc.

 Let me present a Projectional Editor prototype I was working on as a hobby
 project in the past years (quite a few months in total working time).
 Unfortunately due to my day job I lack the time to make this more complete
 in the near future, but I think it already shows some interesting things.
 The editor (in textual notation) supports cursor navigation as normal text
 editors do, and this is despite the fact that the edited documents are
 highly domain specific data structures. There are some simple text editing
 operations such as changing string and numeric literals (by typing in
 place) and there are also some basic structure related operations. I
 created a somewhat complex demo that shows a document containing chapters,
 styled text, HTML, JavaScript, JSON and Common Lisp code mixed together.

 The function that creates the document:
 

Re: [fonc] hashes as names

2013-09-27 Thread David Barbour
The usual problem with this sort of handle this super-rare event when it
happens code is that it is poorly tested in practice. Should you trust it?


On Thu, Sep 26, 2013 at 11:46 PM, Robbert van Dalen 
robbert.van.da...@gmail.com wrote:

 Hi,

 ZFS has de-duplication built on top of SHA256 hashes.
 If the verify option is also enabled, it is possible for ZFS to detect and
 work around hash collisions (although this option slows things down
 further).

 But ZFS can be considered as a kind of 'central authority' so its
 de-duplication scheme may not work in a distributed setting.

 Regards,
 Robbert.

 On Sep 27, 2013, at 2:50 AM, Wolfgang Eder e...@generalmagic.at wrote:

  hi,
  in recent discussions on this list, the idea of using hashes to
  identify or even name things is often mentioned.
  in this context, hashes are treated as being unique;
 
  albeit unlikely, it *is* possible that hashes are equal
  for two distinct things. are there ideas about
  how to handle such a situation?
 
  thanks and kind regards
  wolfgang
  ___
  fonc mailing list
  fonc@vpri.org
  http://vpri.org/mailman/listinfo/fonc

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] hashes as names

2013-09-27 Thread David Barbour
I love these sorts of hashes. I call them fingerprints as in audio
fingerprints or visual fingerprints.  They're extremely useful for
augmented reality applications, for physical security applications, and
many others. Even better if fingerprints from multiple sources can be
combined.

Aliasing is always a major issue - e.g. these hashes should also be robust
to simple translation and motion (e.g. sounds are slightly different if
you're moving towards or away; faces are slightly different when caught at
an angle). It seems to me that ML-based approaches to turning features into
vectors are among the better ways to approach these hashes.
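
As a toy illustration (my own sketch, not a production scheme): an 'average
hash' reduces a grayscale image to a coarse bit pattern, so a small change in
lighting flips few or no bits:

    # Toy perceptual fingerprint in plain Python; all names are hypothetical.
    def average_hash(pixels):            # pixels: rows of brightness values
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        # one bit per pixel: brighter or darker than the image's own mean
        return ''.join('1' if p > mean else '0' for p in flat)

    def hamming(a, b):                   # small distance ~ 'same' content
        return sum(x != y for x, y in zip(a, b))

    scene = [[10, 200], [220, 30]]
    brighter = [[30, 220], [240, 50]]    # same scene, uniformly brighter
    print(hamming(average_hash(scene), average_hash(brighter)))  # prints 0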


On Fri, Sep 27, 2013 at 2:21 AM, Chris Warburton
chriswa...@googlemail.com wrote:

 Martin McClure martin.mccl...@gemtalksystems.com writes:

  Where a hash comes in is if you want the identifiers generated in
  different places to be the *same* if the content being identified is
  the same -- you hash the content, and the resulting hash is the
  identifier. If the identifiers must also be unique, it's important to
  use a strong cryptographic hash. These are designed so that you can't
  get collisions even if you know how they work and try really hard, so
  they have good uniqueness properties.

 There is another alternative, which is to use hash functions with high
 continuity[1] and embrace collisions. For example, the sparse
 distributed representations used by NuPIC[2] are basically hashes which
 attempt to work semantically (interpreting the data) rather than
 syntactically (treating the data as one huge int).

 This makes it likely that values with colliding hashes have the same
 'meaning', and those with similar hashes (eg. low edit distance) will
 have similar 'meaning'. This could overcome issues like re-encoding
 audio mentioned in the previous thread.

 The key to this approach is that it hand-waves all of the complexity
 into a magical hash function. In reality, hash functions which derive
 meaning from data will be limited in what they can spot and will always
 be domain-specific.

 This even applies to human senses: given two arbitrary files, we can
 only compare them in a limited number of ways. When our statistical
 tests can't spot similarities we might try sending them to imagemagick
 in case they're images of the same object, we might send them to VLC in
 case they're different encodings of the same audio, etc. but we'll
 always miss something. For example, they might turn out to be the same
 text saved in OOXML and ODF formats. Of course, these examples assume
 that the files are complete, valid files in some particular format;
 if we only have a fraction of a complete file, we're out of luck with
 these tools.

 [1] http://en.wikipedia.org/wiki/Hash_function#Continuity
 [2] http://numenta.org/

 Cheers,
 Chris
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] hashes as names

2013-09-27 Thread David Barbour
You can test in a test environment, sure. But I think that doesn't qualify
as 'in practice', unless you leave your rogue code in.

I agree that a 256-bit hash is unlikely to have accidental collisions
compared to other causes of error (see my first message on this subject). I
don't really trust it has a full 256 bits of security against intentional
collisions; almost every 'secure' hash has been at least partially
compromised.



On Fri, Sep 27, 2013 at 11:33 AM, Robbert van Dalen 
robbert.van.da...@gmail.com wrote:

 I believe such code can be tested in practice with great confidence.
 If I would test such kind of code, I would replace the SHA256 code with a
 rogue version that emits equal hashes for certain bit patterns.

 As a side note, I also don't trust my macbook's 16 gig of internal
 computer memory - there is a (rather big) chance that bit-errors will
 silently corrupt state.
 Even ECC memory suffers that fate (but of course with a much lower chance).

 It is impossible to build a system that achieves 100% data correctness:
 SHA256 will do fine for now.

 On Sep 27, 2013, at 4:37 PM, David Barbour dmbarb...@gmail.com wrote:

  The usual problem with this sort of handle this super-rare event when
 it happens code is that it is poorly tested in practice. Should you trust
 it?
 
 
  On Thu, Sep 26, 2013 at 11:46 PM, Robbert van Dalen 
 robbert.van.da...@gmail.com wrote:
  Hi,
 
  ZFS has de-duplication built on top of SHA256 hashes.
   If the verify option is also enabled, it is possible for ZFS to detect
  and work around hash collisions (although this option slows things down
 further).
 
  But ZFS can be considered as a kind of 'central authority' so its
 de-duplication scheme may not work in a distributed setting.
 
  Regards,
  Robbert.
 
  On Sep 27, 2013, at 2:50 AM, Wolfgang Eder e...@generalmagic.at wrote:
 
   hi,
   in recent discussions on this list, the idea of using hashes to
   identify or even name things is often mentioned.
   in this context, hashes are treated as being unique;
  
   albeit unlikely, it *is* possible that hashes are equal
   for two distinct things. are there ideas about
   how to handle such a situation?
  
   thanks and kind regards
   wolfgang
   ___

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-26 Thread David Barbour
On Thu, Sep 26, 2013 at 5:21 AM, Eran Meir eranm...@gmail.com wrote:

 This is my personal programming environment. There are many like it, but
 this one is mine.


Indeed. That's the same way I feel about my smart phone, and my Ubuntu
desktop. :)

Except those aren't nearly as casually personalizable as I want, due to the
coarse granularity for code distribution and maintenance. :(

Regarding the deep discussion of names seeming out of place for a tacit
model: yeah, I thought so too.  My own vision involves
programming-by-example extraction or workspace compilation into an
inventory of reusable AR/VR/GUI tools (mattock, wand, menus, etc.) or macro
assignments that will often be just as tacit and nameless as the objects
upon which they operate. Sharing values, even behaviors, should rarely
involve use of names.

But Sean and Matt are envisioning a very text-based programming
environment, due to their own experiences and their own development
efforts. I'm not going to take that away from them (it would be futile to
try). Also, text-based programming is undoubtedly more convenient for a
subset of domains. I'm still interested in supporting it (perhaps via
pen-and-paper (http://awelonblue.wordpress.com/2012/10/26/ubiquitous-programming-with-pen-and-paper/)
and AR), and text-based artifacts (documents, diagrams) are easily represented
in the model I propose. At least for these cases, I can usefully discuss
written names.

I agree with your position on pet names. But I can also understand Sean's
position; technology hasn't quite reached the point where we can easily
discuss code while pointing at it in a shared environment supporting
multiple views. I keep looking forward to Dennou Coil and other visions of
ubiquitous computing and an AR future. The technology is getting there very
quickly.

There will always be some common ground for people to meet, e.g. due to
formal structure, initial visualizers, and the sharing of values. But I'd
love to see different communities evolve, diverging and merging at
different points. I'd love to see children picking up metaphors, tools, and
macros from their parents. The formal structure can still support a lot of
integration and translation.

Warm Regards,

Dave





 With regard to naming (that's a lot of naming discussion for a *tacit*
 programming environment - don't you think?), I like the idea of personal
 sets of PetNames. After all, we're discussing *personal* programming
 sets of PetNames. After all, we're discussing *personal* programming
 environment as an extension of *self*. It should assist the person and
 extend their personal capabilities. I believe most users of such enhancing
 system will appreciate communicating with their personal assistant in their
 own language, even if it's just a slightly modified dialect of some common
 language.


 And when I re-read the original post, I wonder if debates of ambiguity are
 not going the wrong way. So I'd like to offer my own incomplete metaphor:
 Recall that every user action is an act of meta-programming. And user
 actions are inherently unambiguous - at least in the personal frame of
 reference. Thus, the problem is actually a problem of change in coordinates
 systems. As an example, consider how one's notion of naming is another's
 shifted notion of identity.

 This relativity of semantics can perhaps be practically reconciled using
 some rewriting protocols (transformations), helping communicating parties
 find common ground. On the other hand, a foundational problem with name
 reconciliation is that it's basically a unification problem - and this
 problem is undecidable for some logic theories/type systems.

 I'm not sure I understand enough of David's idea (or substructural logic) to
 tell if this is a real problem or not, but I wanted to chime in, since I
 find the thread fascinating.

 Best regards,
 Eran.


 On Thu, Sep 26, 2013 at 2:23 AM, David Barbour dmbarb...@gmail.com wrote:


 ...

  If I assume those responsibilities are handled, and also elimination of
 local variable or parameter names because of tacit programming, the
 remaining uses of 'names' I'm likely to encounter are:

 * names for dynamic scope, config, or implicit params
 * names for associative lookup in shared spaces
 * names as human short-hand for values or actions

 It is this last item that I think most directly corresponds to what Sean
 and Matt call names, though there might also be a bit of 'independent
 maintenance' (external state via the programming environment) mixed in.
 Regarding shorthand, I'm quite interested in alternative designs, such as
 binding human names to values based on pattern-matching (so when you write
 'foo' I might read 'bar'), but Sean's against this due to out-of-band
 communication concerns. To address those concerns, use of an extended
 dictionary that tracks different origins for words seems reasonable.


Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-26 Thread David Barbour
On Thu, Sep 26, 2013 at 10:03 AM, Sam Putman atmanis...@gmail.com wrote:

 The notion is to have a consistent way to map between a large sound file
 and the large sound file. From one perspective it's just a large number,
 and it's nice if two copies of that number are never treated as different
 things.


If we're considering the sound value, I think you cannot avoid having
multiple representations for the same meaning. There are different lossless
encodings (like Flac vs. Wav vs. 7zip'd Wav vs. self-extracting JavaScript)
and lossy encodings (Opus vs. MP3). There will be encodings more or less
suitable for streaming or security concerns. If we 'chunkify' a large sound
for streaming, there is some arbitrary aliasing regarding the size of each
chunk.

So when you discuss a sound file, you are not discussing the value or
meaning but rather a specific, syntactic representation of that meaning.

(A little philosophy.)

In my understanding, the difference between information (or data) and pure
mathematical values is that the information has origin, history, context,
inertia, physical spatial-temporal representation, and even physical mass
(related to Boltzmann's constant and Landauer's principle). Information is
something mechanical, and much of computer science might be more accurately
described as information mechanics. From this perspective (which is the
usual one I hold) copies of a number really are different. They have
different locations, different futures. Further, they can only be
considered 'copies' if there was an act of copying (at a specific
spatial-temporal location). A large number constructed by two independent
computations isn't a copy and may have unique meaning.



 For identity, I prefer to formally treat uniqueness as a semantic
 feature, not a syntactic one.


 I entirely agree! Hence the proposal of a function hash(foo) that produces
 a unique value for any given foo, where foo is an integer of arbitrary size
 (aka data). We may then compare the hashes as though they are the values,
 while saving time.


How often do we compare very large integers for equality?

I agree that keeping some summary information about a number, perhaps even
a hash, would be useful for quick comparisons for very large integers
(large enough that keeping the hash in memory is negligible). But I imagine
this would be a rather specialized use-case.


 Hashing is not associative per se but it may be made to behave
 associatively through various tweaks:

 http://en.wikipedia.org/wiki/Merkle_tree



Even a Merkle tree or a tiger tree hash has the same problems with aliasing
and associativity of the underlying data.
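
A quick sketch of that aliasing (plain Python, hashlib only; the chunk
boundaries here are arbitrary, which is exactly the point):

    import hashlib

    def h(b):
        return hashlib.sha256(b).digest()

    def merkle_root(chunks):
        level = [h(c) for c in chunks]
        while len(level) > 1:
            pairs = [level[i:i+2] for i in range(0, len(level), 2)]
            level = [h(b''.join(p)) for p in pairs]  # odd last node hashed alone
        return level[0]

    data = b'the same underlying sound data'
    print(merkle_root([data[:10], data[10:]]) ==     # one chunking
          merkle_root([data[:15], data[15:]]))       # another; prints False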

Best,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-26 Thread David Barbour
On Thu, Sep 26, 2013 at 3:05 PM, Sam Putman atmanis...@gmail.com wrote:




 On Thu, Sep 26, 2013 at 2:31 PM, David Barbour dmbarb...@gmail.com wrote:


  On Thu, Sep 26, 2013 at 11:58 AM, Sam Putman atmanis...@gmail.com wrote:



 How often do we compare very large integers for equality?


 Rather often, and still less than we should. Git does this routinely,
 and Datomic revolves around it. The lower-level the capability is built in,
 the more often, in general, we can benefit from it.


 In incremental systems, there is a tradeoff between recomputing vs.
 caching. At too fine a granularity, caching can be orders of magnitude
 worse than recomputing, due to the extra memory and branching overheads. At
 too coarse a granularity, recomputing can be orders of magnitude worse.
 More of a good thing can be a bad thing. This is made more complicated when
 different substructures update at different frequencies, or in different
 bursts, or with various feedback properties. In general, there are no easy
 answers (other than profile!) when it comes to optimizing performance.


 It's less about performance, per se, as it is about making certain actions
 easy. In Datomic, for example, one may request a hash that represents the
 complete state of a database at any given point in time, and then query
 that database using that hash. This is a pretty good trick. It's also
 difficult to imagine a globally distributed cache that didn't in some
 fashion follow this principle.


Use of a hash to identify states in a large persistent data structure is
indeed a nice trick. I can see the appeal of centralizing information about
updates over time. I do not find it difficult to think of alternatives,
though, such as keeping timestamps all the way down. The latter approach
weakens global serializability and offers greater expressiveness. These
properties can be leveraged for speculative evaluation or open systems
integration.

Anyhow, at this time I feel hashes should be used explicitly, and have
relatively specialized use-cases.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] hashes as names

2013-09-26 Thread David Barbour
The usual idea here is that you use very large hashes (e.g. 256 bits or
larger) such that the probability of a collision is less than, for example,
the probability of cosmic radiation causing the same issues over the course
of a couple years, or of a meteor striking your computer.

Then you stop worrying and love the bomb.

In practice, even when a collision does occur, it is very unlikely to occur
in a context where it is problematic. But, just in case it does happen in a
case where it might be problematic, you should have some other robustness
layers - e.g. the normal sort of checks and balances a system should
possess.
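
A back-of-envelope birthday bound gives the sense of scale (my arithmetic,
not a rigorous analysis):

    # Probability of any collision among n random k-bit hashes is roughly
    # n^2 / 2^(k+1). Even a quadrillion objects barely registers at k = 256.
    n = 10**15
    print(n * n / 2**257)   # ~4.3e-48, far below hardware failure rates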


On Thu, Sep 26, 2013 at 5:50 PM, Wolfgang Eder e...@generalmagic.at wrote:

 hi,
 in recent discussions on this list, the idea of using hashes to
 identify or even name things is often mentioned.
 in this context, hashes are treated as being unique;

 albeit unlikely, it *is* possible that hashes are equal
 for two distinct things. are there ideas about
 how to handle such a situation?

 thanks and kind regards
 wolfgang
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] hashes as names

2013-09-26 Thread David Barbour
I wouldn't trust anyone selling hash collision insurance.


On Thu, Sep 26, 2013 at 6:11 PM, mclelland.m...@gmail.com wrote:

 Buy insurance.

 On Sep 26, 2013, at 7:50 PM, Wolfgang Eder e...@generalmagic.at wrote:

  hi,
  in recent discussions on this list, the idea of using hashes to
  identify or even name things is often mentioned.
  in this context, hashes are treated as being unique;
 
  albeit unlikely, it *is* possible that hashes are equal
  for two distinct things. are there ideas about
  how to handle such a situation?
 
  thanks and kind regards
  wolfgang
  ___
  fonc mailing list
  fonc@vpri.org
  http://vpri.org/mailman/listinfo/fonc
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] hashes as names

2013-09-26 Thread David Barbour
On Thu, Sep 26, 2013 at 6:10 PM, Martin McClure 
martin.mccl...@gemtalksystems.com wrote:


 1) Have a single central authority that hands out identifiers.

 The central authority model works in some scenarios, but for widely
 distributed systems the reliability problems (the central authority may
 be down or unreachable) or scalability problems make it unworkable.


Creating unique IDs can scale quite well with a central authority. You
simply have the central authority hand out the first few bits of the
identity (e.g. a domain name), then the owner of the domain becomes an
authority for how the next few bits are distributed (e.g. the subdomains),
and so on. This 'splitting' technique can go to an arbitrary depth but
since it is exponential - and potentially very flat - it rarely needs to be
deep.
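
A minimal sketch of that splitting (illustrative Python, deliberately
DNS-flavored):

    # Each authority hands out a prefix; the recipient then allocates the
    # next segment on its own, with no further contact with the root.
    class Authority:
        def __init__(self, prefix=''):
            self.prefix = prefix
            self.count = 0

        def allocate(self):
            self.count += 1
            return Authority('%s%d.' % (self.prefix, self.count))

    root = Authority()        # the single central authority
    org = root.allocate()     # like being handed a domain
    dev = org.allocate()      # like minting a subdomain locally
    print(dev.prefix)         # '1.1.' - unique by construction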

The main point against central authority is the desire to avoid placing
oneself under another authority. I like the idea of marginalizing DNS,
switching to a Chord/DHT based lookup model using a randomly generated
large public key (or a signed secure hash thereof) as the identity for each
machine.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Urbit, Nock, Hoon

2013-09-25 Thread David Barbour
Yeah. Then I tried chapter two.

The idea of memoizing optimized functions (jets) is neat. As is his
approach to networking.
On Sep 24, 2013 10:54 PM, Julian Leviston jul...@leviston.net wrote:

 http://www.urbit.org/2013/08/22/Chapter-0-intro.html

 Interesting?

 Julian
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Urbit, Nock, Hoon

2013-09-25 Thread David Barbour
In chapter four, Guy Yarvin (author of Urbit) describes Hoon. He assigns
names to glyphs, e.g. `|` is bar and `=` is tis, so the digraph `|=` is
called `bartis` (or barts). The first character is a semantic category (bar
is for 'gates').

The idea of a 'speakable' PL does appeal to me. I've contemplated doing
something similar a few times, though I've never gotten much past fanciful
contemplation. For the environment I'm describing in the other thread, I
imagine use of voice control might become part of it. I also imagine this
would be part of the personal language between a user and the environment,
via a mix of machine learning and human learning - meeting half-way.

But I think a speakable PL also needs to operate at a level a human can
grok - i.e. higher artifact manipulations, raising menus, calling tools to
hand, refining gestures. There's no way anyone's going to sit there and
rattle off assembly, and even when we do use words they'll need to be
somewhat imprecise, allowing partial search for contextually relevant
semantics.

I find it interesting that Yarvin's view has remained pretty stable over
the last four years:

http://moronlab.blogspot.com/2010/01/urbit-functional-programming-from.html

Regarding 'jets', I'd be more interested if there was a way to easily guide
the machine to build new ones. As is, I'd hate to depend on them.
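
For reference, the mechanism as I understand it, sketched with a Python
stand-in (the code text and jet table here are hypothetical, not Nock):

    import hashlib

    def code_hash(code):
        return hashlib.sha256(code.encode()).hexdigest()

    # A 'jet': a trusted, hand-optimized equivalent of known slow code.
    SLOW_DECREMENT = 'count up from 0 until successor equals n'  # hypothetical
    JETS = {code_hash(SLOW_DECREMENT): lambda n: n - 1}

    def run(code, arg, interpret):
        jet = JETS.get(code_hash(code))
        return jet(arg) if jet else interpret(code, arg)  # else interpret slowly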

Regards,

Dave




On Tue, Sep 24, 2013 at 11:30 PM, David Barbour dmbarb...@gmail.com wrote:

 Yeah. Then I tried chapter two.

 The idea of memoizing optimized functions (jets) is neat. As is his
 approach to networking.
 On Sep 24, 2013 10:54 PM, Julian Leviston jul...@leviston.net wrote:

 http://www.urbit.org/2013/08/22/Chapter-0-intro.html

 Interesting?

 Julian
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] History of AR/VR Programming Environments? [was Re: Personal Programming Env...]

2013-09-25 Thread David Barbour
I would also be interested in a history for this subject.

I've read a few papers on the subject of VR programming. Well, I remember
the act of reading them, but I can't recall their subjects or authors or
being very impressed with them in PL terms.

Does anyone else have links?


On Wed, Sep 25, 2013 at 2:43 AM, Jb Labrune labr...@media.mit.edu wrote:


 oh! and since i post on fonc today, i would like to say that i'm very
 intrigued by the notion of AR programming (meaning programming in an actual
 VR/AR/MR environment) discussed in the recent mesh of emails. I would love
 to see references or historical notes on who/what/where was done on this
 topic. I mean, did Ivan Sutherland use his HMD system to program the
 specifications (EDM) & code of his own hardware? did the supercockpit VRD
 (virtual retinal display) system have a multimodal situational awareness (SA)
 real-time (RT) integrated development environment (IDE) to program directly
 with gaze & neuronal activity? :)))



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] History of AR/VR Programming Environments? [was Re: Personal Programming Env...]

2013-09-25 Thread David Barbour
As I said below, this is no longer part of my 'recall' memory. It's been
many years since I looked at the existing research on the subject, and I've
lost most of my old links.

A few related things I did find:

http://homes.cs.washington.edu/~landay/pubs/publication-list.htm
ftp://ftp.cc.gatech.edu/pub/people/blair/dissertation.pdf

Landay I had looked up regarding non-speech voice control (apparently, it's
150% faster), and I recall some of the experiments being similar to
programming. I've never actually read Blair's paper, just 'saved it for
later' then forgot about it. It looks fascinating.

VRML sucks. X3D sucks only marginally less.

If your interest is representation of structure, I suggest abandoning any
fixed-form meshes and focusing on procedural generation. Procedurally
generated scenegraphs - where 'nodes' can track rough size, occlusion, and
rough brightness/color properties (to minimize pop-in) - can be vastly more
efficient, reactive, interactive, have finer 'level of detail' steps.
(Voxels are also interactive, but have a relatively high memory overhead,
and they're ugly.) Most importantly, PG content can also be 'adaptive' -
i.e. pieces of art that partially cooperate with their context to fit
themselves in.
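
A hedged sketch of such a node (my illustration, not any engine's API):

    # A procedural scenegraph node: coarse metadata up front, children
    # generated on demand only when more detail is actually needed.
    class PGNode:
        def __init__(self, size, avg_color, generate):
            self.size = size            # rough bounds, for culling decisions
            self.avg_color = avg_color  # cheap approximation; minimizes pop-in
            self.generate = generate    # procedure that yields child nodes

        def render(self, min_size):
            if self.size < min_size:    # below the current level of detail
                return [self.avg_color] # draw the coarse stand-in instead
            return [c.render(min_size) for c in self.generate()]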

If I ever get back to this subject in earnest, I'll certainly be pursuing a
few hypotheses that I haven't found opportunity to test:


http://awelonblue.wordpress.com/2012/09/07/stateless-stable-arts-for-game-development/

http://awelonblue.wordpress.com/2012/07/18/unlimited-detail-for-large-animated-worlds/

But even if those don't work out, the procedural generation communities
have a lot of useful stuff to say on the subject of VR.

I haven't paid attention to VWF. If you haven't done so, you should look
into Croquet and OpenCobalt.

Best,

Dave




On Wed, Sep 25, 2013 at 10:30 AM, danm d...@zen3d.com wrote:

 Hi David,

 Moving this outside the FONC universe, although your response might also
 be of interest to other FONCers.

 Can you share with me your findings on VR programming? I'm aware of VRML
 and X3D (and its related tech.) as well as VWF (Virtual Worlds Framework),
 but I'm always interested in expanding my horizons, since this topic is
 near and dear to my heart.

 Thanks.

 cheers, danm


 On 9/25/13 10:22 AM, David Barbour wrote:

 I would also be interested in a history for this subject.

 I've read a few papers on the subject of VR programming. Well, I
 remember the act of reading them, but I can't recall their subjects or
 authors or being very impressed with them in PL terms.

 Does anyone else have links?


 On Wed, Sep 25, 2013 at 2:43 AM, Jb Labrune labr...@media.mit.edu wrote:


 oh! and since i post on fonc today, i would like to say that i'm
 very intrigued by the notion of AR programming (meaning programming
 in an actual VR/AR/MR environment) discussed in the recent mesh of
 emails. I would love to see references or historical notes on
  who/what/where was done on this topic. I mean, did Ivan Sutherland
  use his HMD system to program the specifications (EDM) & code of
  his own hardware? did the supercockpit VRD (virtual retinal display)
  system have a multimodal situational awareness (SA) real-time (RT)
  integrated development environment (IDE) to program directly with
  gaze & neuronal activity? :)))




 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-25 Thread David Barbour
What you first suggest is naming for compression and caching. I think
that's an okay performance hack (it's one I've contemplated before), but I
wouldn't call it naming. Names generally need to bind values that are
maintained independently or cannot be known at the local place or time. I
think that what you call identity, I might call naming.

It is not clear to me what you hope to gain from the global namespace, or
by hashing the identities (e.g. what do you gain relative to full URLs?).
Maybe if you're pursuing a DHT or Chord-like system, identity might be a
great way to avoid depending on centralized domain name services. But we
also need to be careful about any values we share through such models, due
to security concerns and the overheads involved. I would tend to imagine
only physical devices should be represented in this manner.

Any system that requires keeping a complete history for large,
automatically maintained objects has already doomed itself. We can handle
it for human-managed code - but only because humans are slow, our input is
low bandwidth, and the artifacts we build tend to naturally stabilize. None
of those apply to machine-managed objects. Exponential decay of history (
http://awelonblue.wordpress.com/2013/01/24/exponential-decay-of-history-improved/)
provides a better alternative for keeping a long-running history (for both
humans and devices).
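
The gist of that decay, sketched (illustrative Python; the linked post has
the real treatment):

    def decay(history, max_len):
        # Keep recent snapshots densely and old ones sparsely: whenever the
        # log is too long, drop every other entry from its older half.
        h = list(history)
        while len(h) > max_len:
            old, recent = h[:len(h)//2], h[len(h)//2:]
            h = old[::2] + recent
        return h

    print(decay(list(range(16)), 10))  # [0, 4, 8, 10, 11, 12, 13, 14, 15]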

Anyhow, can you explain what your global namespace offers?


On Wed, Sep 25, 2013 at 11:07 AM, Sam Putman atmanis...@gmail.com wrote:

 I've been kicking around a model that may be useful here, vis à vis naming
 and the difficulties it implies.

 In short, a language may have a single global namespace that is a
 collision-resistant hash function. Values below say 256 bits are referred
 to as themselves, those above are referred to as the 256 bit digest of
 their value.

 Identities are also hashes, across the 'initial' value of the identity and
 some metadata recording the 'what where when' of that identity. An identity
 has a pointer to the current state/value of the identity, which is, of
 course, a hash of the value or the value itself depending on size. We'd
 also want a complete history of all values the identity has ever had, for
 convenience, which might easily obtain git levels of complexity.

 Code always and only refers to these hashes, so there is never ambiguity
 as to which value is which. Symbols are pointer cells in the classic Lisp
 fashion, but the canonical 'symbol' is a hash and the text string
 associated with it is for user convenience. I've envisioned this as Lispy
 for my own convenience, though a concatenative language has much to
 recommend it.



 On Wed, Sep 25, 2013 at 3:04 AM, Eugen Leitl eu...@leitl.org wrote:

 On Wed, Sep 25, 2013 at 11:43:44AM +0200, Jb Labrune wrote:

  as a friend of some designers who think in space  colors, it always
  strucks me that many (not all of course!) of my programmers friends
 think
  like a turing-machine, in 1D, acting as if their code is a long vector,
 some
  kind of snake which unlikes the ouroboros does not eat its own tail...

 Today's dominating programming model still assumes human-generated and
 human-readable code.

 There are obvious ways where this is not working: GA-generated blobs
 for 3d-integration hardware, for instance. People are really lousy at
 dealing with massive parallelism and nondeterminism, yet this is not
 optional, at least according to known physics of this universe.

 So, let's say you have an Avogadro number of cells in a hardware CA
 crystal, with an Edge of Chaos rule. Granted, you can write the
 transformation rule down on the back of a napkin, but what about
 the state residing in the volume of said crystal? And the state
 is not really compressible, though you could of course write a
 seed that grows into something which does something interesting
 on a somewhat larger napkin, but there's no way a human could
 derive that seed, or even understand how that thing even
 works.

 Programmers of the future are more like gardeners and farmers
 than architects.

 Programmers of the far future deal with APIs that are persons,
 or are themselves integral parts of the API, and no longer people.
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc



 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-25 Thread David Barbour
If we're just naming values, I'd like to avoid the complexity and just
share the value directly. Rather than having foo function vs. bar
function, we'll just have a block of anonymous code. If we have a large
sound file that gets a lot of references, perhaps in that case explicitly
using a content-distribution and caching model would be appropriate, though
it might be better to borrow from Tahoe-LAFS for security reasons.

For identity, I prefer to formally treat uniqueness as a semantic feature,
not a syntactic one. Uniqueness can be formalized using substructural
types, i.e. we need an uncopyable (affine typed) source of unique values.
I envision a uniqueness source being used for:

1) creating unique sealer/unsealer pairs.
2) creating initially 'exclusive' bindings to external state.
3) creating GUID-like values that afford equality testing.

In a sense, this is three different responsibilities for identity. Each
involves different types. It seems what you're calling 'identity'
corresponds to item 2.
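
A rough sketch of such a source (Python cannot enforce the affine 'use once,
never copy' discipline, so here it is convention only):

    class UniqueSource:
        def __init__(self, path=()):
            self.path = path
            self.n = 0

        def fork(self):
            # split the source: each child gets a disjoint future
            self.n += 1
            return UniqueSource(self.path + (self.n,))

        def guid(self):
            # a GUID-like value affording equality tests (item 3 above)
            self.n += 1
            return self.path + (self.n,)

    src = UniqueSource()
    a, b = src.fork(), src.fork()
    print(a.guid() == b.guid())   # False: (1, 1) vs (2, 1)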

If I assume those responsibilities are handled, and also elimination of
local variable or parameter names because of tacit programming, the
remaining uses of 'names' I'm likely to encounter are:

* names for dynamic scope, config, or implicit params
* names for associative lookup in shared spaces
* names as human short-hand for values or actions

It is this last item that I think most directly corresponds to what Sean
and Matt call names, though there might also be a bit of 'independent
maintenance' (external state via the programming environment) mixed in.
Regarding shorthand, I'm quite interested in alternative designs, such as
binding human names to values based on pattern-matching (so when you write
'foo' I might read 'bar'), but Sean's against this due to out-of-band
communication concerns. To address those concerns, use of an extended
dictionary that tracks different origins for words seems reasonable.

Regarding your 'foo' vs. 'bar' equivalence argument, I believe hashing is
not associative. Ultimately, `foo bar baz` might have the same
expansion-to-bytecode as `nitwit blubber oddment tweak` due to different
factorings, but I think it will have a different hash, unless you
completely expand and rebuild the 'deep' hashes each time. Of course, we
might want to do that anyway, i.e. for optimization across words.
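
A quick check with hashlib illustrates the point:

    import hashlib

    def h(s):
        return hashlib.sha256(s.encode()).hexdigest()

    # Hashing different factorings of the same expansion yields different names:
    print(h('foo bar baz') == h(h('foo bar') + h('baz')))   # False
    # Equality only returns once we hash the fully expanded form itself.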


 If I were to enter 3 characters a second into a computer for 40 years,
 assuming a byte per character, I'd have generated ~3.8 GB of information,
 which would fit in memory on my laptop. I'd say that user input at least is
 well worth saving.


Huh, I think you underestimate how much data you generate, and how much
that will grow with different input devices. Entering characters in a
keyboard is minor compared to the info-dump caused by a LEAP motion. The
mouse is cheap when it's sitting still, but can model spatial-temporal
patterns. If you add information from your cell-phone - you've got GPS,
accelerometers, temperatures, touch, voice. If you get some AR setup,
you'll have six-axis motion for your head, GPS, voice, and gestures. It
adds up. But it's still small compared to what devices can input if we kept
a stream of microphone input or camera visual data.

I think any history will inevitably be lossy. But I agree that it would be
convenient to keep high-fidelity data available for a while, and preferably
extract the most interesting operations.
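
Back-of-envelope, with Python as the calculator (assuming ~12 bytes per
mouse sample; the exact figures hardly matter, only the magnitudes):

    GiB = 2.0 ** 30
    keys = 3 * 3600 * 24 * 365 * 40            # 3 chars/s, nonstop, 40 years
    print(keys / GiB)                          # ~3.5 GiB of keystrokes

    # a mouse sampled at 60 Hz, ~12 bytes/sample, 8 h/day for 40 years:
    mouse = 60 * 12 * 3600 * 8 * 365 * 40
    print(mouse / GiB)                         # ~280 GiB - two orders of magnitude more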




On Wed, Sep 25, 2013 at 2:45 PM, Sam Putman atmanis...@gmail.com wrote:

 Well, since we're talking about a concatenative bytecode, I'll try to
 speak Forthfully.

 Normally when we define a word in a stack language we make up an ASCII
 symbol and say this symbol refers to all these other symbols, in this
 definite order. Well and good, with two potential problems: we have to
 make up a symbol, and that symbol might conflict with someone else's
 symbol.

 Name clashes are an obvious problem. The fact that we must make up a symbol
 is less obviously a problem, except that the vast majority of our referents
 should be generated by a computer. A computer generated symbol may as well
 be a hash function, at which point, a user-generated symbol may as well be
 a hash also, in a special case where the data hashed includes an ASCII
 handle for user convenience.

 This is fine for immutable values, but for identities (references to a
 series of immutable values, essentially), we need slightly more than this:
 a master hash, taken from the first value the identity refers to, the time
 of creation, and perhaps other useful information. This master hash then
 points to the various values the identity refers to, as they change.

 There are a few things that are nice about this approach, all of which
 derive from the fact that identical values have identical names and that
 relatively complex relationships between identifies and values may be
 established and modified programmatically.

 As an example, if I define a foo function which is identical to 

Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-24 Thread David Barbour
Thanks for the ref, Chris. I'll take some time to absorb it.
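
In the meantime, a quick sketch of the leveled scheme you propose below, to
check my understanding (Python; note that a literal '\0}' occurring in raw
text would still need special treatment, so this is only a sketch):

    import re
    TOK = re.compile(r'\\(\d+)\}|(.)', re.S)

    def quote(text):
        # wrap in {...}: bump existing escape levels, escape raw '}' as '\0}';
        # growth is linear in nesting depth, not geometric
        out = []
        for m in TOK.finditer(text):
            if m.group(1) is not None:
                out.append('\\%d}' % (int(m.group(1)) + 1))
            elif m.group(2) == '}':
                out.append('\\0}')
            else:
                out.append(m.group(2))
        return '{' + ''.join(out) + '}'

    def unquote(quoted):
        # inverse: '\0}' becomes '}', other levels shift down by one
        out = []
        for m in TOK.finditer(quoted[1:-1]):
            if m.group(1) is not None:
                n = int(m.group(1))
                out.append('}' if n == 0 else '\\%d}' % (n - 1))
            else:
                out.append(m.group(2))
        return ''.join(out)

    assert quote('{}') == '{{\\0}}'
    assert quote(quote('{}')) == '{{{\\1}\\0}}'
    assert unquote(unquote(quote(quote('{}')))) == '{}'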

On Tue, Sep 24, 2013 at 1:46 AM, Chris Warburton
chriswa...@googlemail.com wrote:

 David Barbour dmbarb...@gmail.com writes:

  Text is also one of the problems I've been banging my head against since
  Friday. Thing is, I really hate escapes. They have this nasty geometric
  progression when dealing with deeply quoted code:
 
   {} -> {{\}} -> {{{\\\}\}} -> {{{{\\\\\\\}\\\}\}} -> ...
 
  I feel escapes are too easy to handle incorrectly, and too difficult to
  inspect for correctness. I'm currently contemplating a potential
 solution:
  require all literal text to use balanced `{` and `}` characters, and use
  post-processing in ABC to introduce any imbalance. This could be
 performed
  in a streaming manner. Inductively, all quoted code would be balanced.

 The geometric explosion comes from the unary nature of escaping. It
 wouldn't be too difficult to add a 'level', for example:

 {} -> {{\0}} -> {{{\1}\0}} -> {{{{\2}\1}\0}} -> {{{{{\3}\2}\1}\0}}

 The main problem with escaping is that it is homomorphic: ie. it is
 usually String -> String. This is basically the source of all code
 injection attacks. It wouldn't be too bad if escaping were idempotent,
 since we could add extra escapes just in case, but it's not so we end up
 keeping track manually, and failing.

 There's a good post on this at

 http://blog.moertel.com/posts/2006-10-18-a-type-based-solution-to-the-strings-problem.html

 It would be tricky to implement a solution to this in a way that's open
 and extensible; if we're be passing around first-class functions anyway,
 we could do Haskell's dictionary-passing manually.

 Cheers,
 Chris
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-24 Thread David Barbour
I have nothing against name resolution at edit-time. My concern is that
giving the user a list of 108 subtly different definitions of 'OOP' and
saying I can't resolve this in context. Which one do you mean here? every
single time would be insufferable, even if the benefit is that everyone
'sees' the same code.


On Sep 24, 2013 7:15 AM, Matt M mclelland.m...@gmail.com wrote:

   Person.draw(object) -- What do I mean by this? Am I drawing a picture?
 a gun? a curtain?

 And the way this works in conversation is that your partner stops you and
 says wait, what do you mean by 'draw'?  Similarly an IDE can underline
 the ambiguity and leave it to the user to resolve, either explicitly or
 implicitly by continuing to write more (often ambiguity is removed with
 further context).

 I completely agree with Sean's quote (of Johnathan Edwards?) that names
 are for people, and that name resolution should almost never be part of the
 dynamics of a language.  Names should be resolved statically (preferably at
 edit time).

 Matt


 On Monday, September 23, 2013 9:24:52 PM UTC-5, dmbarbour wrote:

 Ambiguity in English is often a problem. The Artist vs. Cowboy example
 shows that ambiguity is in some cases not a problem. I think it reasonable
 to argue that: when the context for two meanings of a word is obviously
 different, you can easily disambiguate using context. The same is true for
 types. But review my concern below: when words have meanings that are *
 subtly* but significantly different. In many cases the difference is *
 subtle* but important. It is these cases where ambiguity can be
 troublesome.

 Person.draw(object) -- What do I mean by this? Am I drawing a picture? a
 gun? a curtain?

 Regarding conversations with coworkers:

 I think in the traditional KVM programming environment, the common view
 does seem important - e.g. for design discussions or over-the-shoulder
 debugging. At the moment, there is no easy way to use visual aides and
 demonstrations when communicating structure or meaning.

 In an AR or VR environment, I hypothesize this pressure would be
 alleviated a great deal, since the code could be shown to each participant
 in his or her own form and allow various 
 meaning-by-demonstration/exploration
 forms of communication. I'm curious whether having different 'ways' of
 seeing the code might even help for debugging. Multiple views could also be
 juxtaposed if there are just a few people involved, enabling them to more
 quickly understand the other person's point of view.

 Best,

 Dave

On Mon, Sep 23, 2013 at 6:21 PM, Sean McDirmid smc...@microsoft.com wrote:

  Ambiguity is common in English and it’s not a big problem: words have
 many different definitions, but when read in context we can usually tell
 what they mean. For “Cowboy.Draw(Gun)” and “Artist.Draw(Picture)”, we can
 get a clue about what Draw means; ambiguity is natural! For my language,
 choosing which Draw is meant drives type inference, so I can’t rely on types
 driving name lookup. But really, the displayed annotation goes in the type
 of the variables surrounding the Draw call (Cowboy, Gun) rather than the
 Draw Call itself. 


 Language is an important part of society. Though I can use translation
 to talk to my Chinese speaking colleagues, that we all speak in English at
 work and share the names for things is very important for collaboration
 (and suffers when we don’t). For code, we might be talking about it even
 when we are not reading it, so standardizing the universe of names is still
 very important. 


 *From:* augmented-...@googlegroups.com [mailto:augmented-...@googlegroups.com]
 *On Behalf Of *David Barbour
 *Sent:* Tuesday, September 24, 2013 9:11 AM
 *To:* augmented-...@googlegroups.com; reactiv...@googlegroups.com;
 Fundamentals of New Computing

 *Subject:* Re: Personal Programming Environment as Extension of Self


 Okay, so if I understand correctly you want everyone to see the same
 thing, and just deal with the collisions when they occur. 


 You also plan to mitigate this by using some visual indicators when
 that word doesn't mean what you think it means.  This would require
 search before rendering, but perhaps it could be a search of the user's
 personal dictionary - i.e. ambiguity only within a learned set. I wonder if
 we could use colors or icons to help disambiguate.


 A concern I have about this design is when words have meanings that are
 subtly but significantly different. Selecting among these distinctions
 takes extra labor compared to using different words or parameterizing the
 distinctions. But perhaps this also could be mitigated, through automatic
 refactoring of the personal dictionary (such that future exposure to a
 given word will automatically translate it). 


 I titled this Personal Programming Environment as Extension of Self
 because I think it should reflect our own metaphors, our own thoughts

Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-24 Thread David Barbour
Even Lisp isn't parenthesis-balanced by definition, because you can do this:

'())

In that same sense my bytecode is block balanced, except in text.

  [ [ [] [] ] [] ]

 { ] }

The idea of using number of chars is possible, something like:

  {42|forty two characters here.

But that would also be very difficult to use in a streaming bytecode. It's
also difficult to read. I've always felt that bytecode should at least be
slightly legible, so I can easily get a sense of what's happening just by
looking at the raw streams, even though I don't expect people to program in
bytecode directly (nor read it that way often).

On Tue, Sep 24, 2013 at 12:49 AM, Pavel Bažant pbaz...@gmail.com wrote:

 I don't know the details of your language, but I see two possibilities:
 1) If the textual representation of your language is parenthesis-balanced
 by definition, just do what Lisp does.
 2) If not, format quoted text like quote number_of_chars char char char
 
 Yes I know 2) is fragile, but so is escaping.


 On Tue, Sep 24, 2013 at 7:24 AM, David Barbour dmbarb...@gmail.com wrote:

 Oh, I see. As I mentioned in the first message, I plan on UTF-8 text
 being one of the three basic types in ABC. There is text, rational numbers,
 and blocks. Even if I'm not using names, I think text is very useful for
 tagged values and such.

   {Hello, World!}

 Text is also one of the problems I've been banging my head against since
 Friday. Thing is, I really hate escapes. They have this nasty geometric
 progression when dealing with deeply quoted code:

  {} -> {{\}} -> {{{\\\}\}} -> {{{{\\\\\\\}\\\}\}} -> ...

 I feel escapes are too easy to handle incorrectly, and too difficult to
 inspect for correctness. I'm currently contemplating a potential solution:
 require all literal text to use balanced `{` and `}` characters, and use
 post-processing in ABC to introduce any imbalance. This could be performed
 in a streaming manner. Inductively, all quoted code would be balanced.

 Best,

 Dave





 On Mon, Sep 23, 2013 at 9:28 PM, John Carlson yottz...@gmail.com wrote:

 I don't really have a big concern.  If you just support numbers, people
 will find clever, but potentially incompatible ways of doing strings.  I
 recall in the pre-STL days supporting 6 different string classes.  I
 understand that a name is different than a string, but I come from a perl
 background.  People don't reinvent strings in perl to my knowledge.
  On Sep 23, 2013 11:15 PM, David Barbour dmbarb...@gmail.com wrote:

 I think it's fine if people model names, text, documents, association
 lists, wikis, etc. -- and processing thereof.

 And I do envision use of graphics as a common artifact structure, and
 just as easily leveraged for any explanation as text (though I imagine most
 such graphics will also have text associated).

 Can you explain your concern?
  On Sep 23, 2013 8:16 PM, John Carlson yottz...@gmail.com wrote:

 Don't forget that words can be images, vector graphics or 3D
 graphics.  If you have an open system, then people will incorporate
 names/symbols.  I'm not sure you want to avoid symbolic processing, but
 that's your choice.

 I'm reminded of the omgcraft ad for cachefly.
 John
 On Sep 23, 2013 8:11 PM, David Barbour dmbarb...@gmail.com wrote:

 Okay, so if I understand correctly you want everyone to see the same
 thing, and just deal with the collisions when they occur.

 You also plan to mitigate this by using some visual indicators when
 that word doesn't mean what you think it means.  This would require
 search before rendering, but perhaps it could be a search of the user's
 personal dictionary - i.e. ambiguity only within a learned set. I wonder 
 if
 we could use colors or icons to help disambiguate.

 A concern I have about this design is when words have meanings that
 are subtly but significantly different. Selecting among these 
 distinctions
 takes extra labor compared to using different words or parameterizing the
 distinctions. But perhaps this also could be mitigated, through automatic
 refactoring of the personal dictionary (such that future exposure to a
 given word will automatically translate it).

 I titled this Personal Programming Environment as Extension of Self
 because I think it should reflect our own metaphors, our own thoughts,
 while still being formally precise when we share values. Allowing me to 
 use
 your words, your meanings, your macros is one thing - a learning
 experience. Asking me to stick with it, when I have different subtle
 distinctions I favor, is something else.

 Personally, I think making the community see the same things is
 less important so long as they can share and discover by *meaning* of
 content rather than by the words used to describe it. Translator packages
 could be partially automated and further maintained implicitly with
 permission from the people who explore different projects and small
 communities.

 Can we

Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-24 Thread David Barbour
Hmm. Indentation - i.e. newline as a default escape, then using spacing
after newline as a sort of counter-escape - is a possibility I hadn't
considered. It seems a little awkward in context of a bytecode, but I won't
dismiss it out of hand. I'd need to change my open-quote character, of
course. I'll give this some thought. Thanks.
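
For reference, a minimal sketch of the indentation approach (Python; assumes
a fixed two-space indent per quotation level):

    INDENT = '  '

    def quote(text):
        # prefix every line; nesting adds one indent per level - linear growth
        return '\n'.join(INDENT + line for line in text.split('\n'))

    def unquote(quoted):
        return '\n'.join(line[len(INDENT):] for line in quoted.split('\n'))

    code = 'this is arbitrary code\nwith } unbalanced { braces'
    assert unquote(quote(code)) == code
    assert quote(quote(code)).startswith(INDENT * 2)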

On Tue, Sep 24, 2013 at 4:19 PM, Loup Vaillant-David 
l...@loup-vaillant.frwrote:

 One way of escaping is indentation, like Markdown.

 This is arbitrary code
 This is arbitrary code *in* arbitrary code.
 and so on.

 No more escape sequences in the quotation.  You just have the
 inconvenience of prefixing each line with a tab or something.

 Loup.


 On Mon, Sep 23, 2013 at 10:24:20PM -0700, David Barbour wrote:
  Text is also one of the problems I've been banging my head against since
  Friday. Thing is, I really hate escapes. They have this nasty geometric
  progression when dealing with deeply quoted code:
 
   {} -> {{\}} -> {{{\\\}\}} -> {{{{\\\\\\\}\\\}\}} -> ...
 
  I feel escapes are too easy to handle incorrectly, and too difficult to
  inspect for correctness. I'm currently contemplating a potential
 solution:
  require all literal text to use balanced `{` and `}` characters, and use
  post-processing in ABC to introduce any imbalance. This could be
 performed
  in a streaming manner. Inductively, all quoted code would be balanced.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-23 Thread David Barbour
Chris,

You offer a lot of good advice. I agree that dog-fooding early would be
ideal.

Though for UI, I currently favor one of two directions:
* web apps
* OpenGL (perhaps just a subset, the WebGL API)

I also want to address these in a manner more compatible with reactive
programming. Fortunately, UI is a relatively good fit for both pipelining
and reactive programming. I think I can make this work, but I might be
using GPipe or LambdaCube as bases for the GL API.

Best,

Dave

On Mon, Sep 23, 2013 at 2:59 AM, Chris Warburton
chriswa...@googlemail.com wrote:

 David Barbour dmbarb...@gmail.com writes:

  My own plan is to implement a streamable, strongly typed,
 capability-secure
  TC bytecode (Awelon Bytecode, ABC) and build up from there, perhaps
  targeting Unity and/or developing a web-app IDE for visualization. (Unity
  is a tempting target for me due to my interest in AR and VR environments,
  and Meta's support for Unity.)

 When bootstrapping pervasive systems like this I think it's important to
 'dog food' them as early as possible, since that makes it easier to work
 out which underlying feature should be added next (what would help the
 most common irritation?), and allows for large libraries of 'scratch an
 itch' scripts to build up.


 I would find out what worked (and what didn't) for other projects which
 required bootstrapping. Minimalist and low-level systems are probably
 good examples, since it's harder for them to fall back on existing
 software. I suppose I have to mention self-hosting languages like
 Smalltalk, Self and Factor. I'd also look at operating systems
 (MenuetOS, ReactOS, Haiku, etc.), desktop 'ecosystems' (suckless, ROX,
 GNUStep, etc.), as well as Not-Invented-Here systems like Unhosted. What
 was essential for those systems to be usable? Which areas were
 implemented prematurely and subsequently replaced?

 If it were me, I would probably bootstrap via a macro system (on Linux):
  * Log all X events, eg. with xbindkeys (togglable, for password entry)
  * Write these logs as concatenative programs, which just call out to
xte over and over again
  * Write commands for quickly finding, editing and replaying these
programs

 With this in place, I'd have full control of my machine, but in a very
 fragile, low-level way. However, this would be enough to start
 scratching itches.

 When controlling Ratpoison via simulated keystrokes becomes too tedious,
 I might write a few Awelon words to wrap Ratpoison's script API. I might
 hook into Selenium to make Web automation easier. As each layer starts
 to flake, I can go down a level and hook into GTK widgets, Imagemagick,
 etc. until some tasks can be achieved by composing purely 'native'
 Awelon components.

 It would be very hacky and non-ideological to begin with, but would be
 ever-present and useful enough to get some real usage.

 Cheers,
 Chris
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-23 Thread David Barbour
Pavel,

I'm interested in collaborators. But the very first help I'd need is
administrative - figuring out how to effectively use collaborators. ;)

Regarding names: I think it best if names have an explicit lookup
mechanism. I.e. names aren't documentation, they're more like an index in a
map. If we don't automate the use of names, they won't do us very much
good. But by making the automation explicit, I think their fragility and
the difficulties surrounding the names (e.g. with respect to closures,
messaging, drag-and-drop, etc.) also becomes more obvious and easier to
analyze.

In Awelon at the moment, I use 'named stacks' that enable load/store/goto.
But these are formally modeled within Awelon - i.e. as an association list.
True external names and capabilities require more explicit lookups using
capabilities or a powerblock.
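
In toy form (Python standing in for the Awelon model - load and store are
ordinary operations over an association list, so names remain second-class
data rather than a binding mechanism):

    def store(env, name, value):
        # rebind name; env is plain data, not a namespace
        return [(name, value)] + [(n, v) for (n, v) in env if n != name]

    def load(env, name):
        for n, v in env:
            if n == name:
                return v
        raise KeyError(name)

    env = store(store([], 'docs', 'stack A'), 'scratch', 'stack B')
    assert load(env, 'docs') == 'stack A'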

I agree with your point that many programmers probably aren't very
motivated to eliminate the boundary. Fortunately, we don't need the aide of
every programmer, just enough to get the project moving and past critical
mass. :)

Regards,

Dave


On Mon, Sep 23, 2013 at 3:21 AM, Pavel Bažant pbaz...@gmail.com wrote:

 Dear David,

 I am seriously interested in collaborating with you!

 I especially like the following points:
 1) Programming by text manipulation is not the only way to do programming
 I actually tend to have the more iconoclastic view that text-based
 programming is harmful -- see my previous rant on FONC, but you mentioned
 what should be done, whereas I only managed to point out what should not be
 done.
 2) I like the tacit idea. I always considered the omnipresent reliance on
 names as means of binding things together as extremely fragile. Do you
 think one could treat the names as annotations with documentation purpose,
 without them being the binding mechanism?
 3) Last but not least: There is no fundamental difference between
 programmers and users. Both groups are just using computers to create some
 digital content. Any sharp boundary between the way the two groups work is
 maybe unnatural. I think psychology is an important factor here. I actually
 do think that many programmers actually like the existence of such boundary
 and are not motivated to make it disappear, but this is really just an
 opinion.



 On Fri, Sep 20, 2013 at 7:35 AM, David Barbour dmbarb...@gmail.com wrote:

 Over the last month, I feel like I stumbled into something very simple
 and profound: a new perspective on an old idea, with consequences deeper
 and more pervasive than I had imagined.

 The idea is simply this: every user action is an act of meta-programming.

 More precisely:
 (1) Each user event appends a tacit concatenative program.
 (2) The output of the tacit concatenative program is another program.
 (3) We can understand the former as rewriting parts of the latter.
 (4) These rewrites include the user-model - navigation, clipboard, etc.
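
 In toy form (Python; a deliberately tiny model of points 1-3):

     history = []                                 # the tacit concatenative program

     def on_event(cmd):                           # (1) each event appends a command
         history.append(cmd)

     def replay(artifact):                        # (2)/(3) its output is another
         for cmd in history:                      # program: each command rewrites
             artifact = cmd(artifact)             # the artifact
         return artifact

     on_event(lambda doc: doc + ['widget'])       # e.g. a paste action
     on_event(lambda doc: doc[:-1] + ['button'])  # e.g. an edit action
     assert replay([]) == ['button']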

 I will further explain this idea, why it is powerful, how it is different.

 To clarify, this isn't another hand-wavy 'shalt' and 'must' proposal with
 no idea of how to achieve it. Hammering at a huge list of requirements for
 eight years got me to RDP. At this point, I have concrete ideas on how to
 accomplish everything I'm about to describe.

 Users Are Programmers.

 The TUNES vision is revived, and better than ever.

 *WHY TACIT CONCATENATIVE?*

 Concatenative programming is perhaps best known through FORTH. Most
 concatenative languages have followed in Charles Moore's forthsteps,
 sticking with the basic stack concept but focusing on higher-order
 programming, types, and other features.

 A stack would be an extremely impoverished and cramped environment for a
 user; even many programmers would not tolerate it. Fortunately, we can move
 beyond the stack environment. And I insist that we do! Concatenative
 programming can also be based upon such structures as trees, Huet zippers,
 and graphs. This proposal is based primarily on tree-structured data and
 zippers, with just a little indirect graph modeling through shared state or
 explicit labels (details later).

 A 'tacit' programming language is one that does not mention names for
 parameters or local variables. Many concatenative programming languages are
 also tacit, though the concepts don't fully intersect.

 A weakness of tacit concatenative programming is that, in a traditional
 text-based programming environment, users must visualize the environment
 (stack or other structure) in their head, and that they must memorize a
 bunch of arcane 'stack shuffling' words. By comparison, variable names in
 text are easy to visualize and review.

 My answer: change programming environments!

 Powerful advantages of tacit concatenative programming include:
 1. the environment has a precisely defined, visualizable value
 2. short strings of tacit concatenative code are easy to generate
 3. concatenative code is sequential, forming an implicit timeline
 4. code also subject to learning

Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-23 Thread David Barbour
Okay, so if I understand correctly you want everyone to see the same thing,
and just deal with the collisions when they occur.

You also plan to mitigate this by using some visual indicators when that
word doesn't mean what you think it means.  This would require search
before rendering, but perhaps it could be a search of the user's personal
dictionary - i.e. ambiguity only within a learned set. I wonder if we could
use colors or icons to help disambiguate.

A concern I have about this design is when words have meanings that are
subtly but significantly different. Selecting among these distinctions
takes extra labor compared to using different words or parameterizing the
distinctions. But perhaps this also could be mitigated, through automatic
refactoring of the personal dictionary (such that future exposure to a
given word will automatically translate it).

I titled this Personal Programming Environment as Extension of Self
because I think it should reflect our own metaphors, our own thoughts,
while still being formally precise when we share values. Allowing me to use
your words, your meanings, your macros is one thing - a learning
experience. Asking me to stick with it, when I have different subtle
distinctions I favor, is something else.

Personally, I think making the community see the same things is less
important so long as they can share and discover by *meaning* of content
rather than by the words used to describe it. Translator packages could be
partially automated and further maintained implicitly with permission from
the people who explore different projects and small communities.

Can we create systems that enable people to use the same words and
metaphors with subtly different meanings, but still interact efficiently,
precisely, and unambiguously?
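
One rough sketch of what such a translation layer might look like (Python;
hypothetical names, with meanings shared by hash of the underlying bytecode
rather than by word):

    import hashlib

    def key(bytecode):
        return hashlib.sha256(bytecode.encode()).hexdigest()

    class PetnameView:
        # each user keeps a private word <-> meaning table; content is
        # exchanged by key, so two users can render the same code differently
        def __init__(self, petnames):
            self.to_key = dict(petnames)
            self.to_word = {k: w for w, k in self.to_key.items()}

        def parse(self, words):
            return [self.to_key[w] for w in words.split()]

        def render(self, keys):
            return ' '.join(self.to_word.get(k, k[:8] + '?') for k in keys)

    k = key('vrwlc')                    # some shared block of bytecode
    alice = PetnameView({'swap': k})
    bob = PetnameView({'flip': k})
    assert bob.render(alice.parse('swap')) == 'flip'   # same meaning, their words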

Best,

Dave


On Mon, Sep 23, 2013 at 5:26 PM, Sean McDirmid smcd...@microsoft.com wrote:

  The names are for people, and should favor readability over uniqueness
 in the namespace; like ambiguous English words context should go a long way
 in helping the reader understand on their own (if not, they can do some
 mouse over). We can even do fancy things with the names when they are being
 rendered, like, if they are ambiguous, underlay them with a dis-ambiguating
 qualifier. The world is wide open once you’ve mastered how to build a code
 editor! Other possibilities include custom names, or multi-lingual names,
 but I’m worried about different developers “seeing” different things…we’d
 like to develop a community that sees the same things.


 The trick is mastering search and coming up with an interface so that it
 becomes as natural as identifier input. 


 *From:* augmented-programm...@googlegroups.com [mailto:
 augmented-programm...@googlegroups.com] *On Behalf Of *David Barbour
 *Sent:* Tuesday, September 24, 2013 5:10 AM
 *To:* augmented-programm...@googlegroups.com

 *Subject:* Re: Personal Programming Environment as Extension of Self


 It isn't clear to me what you're suggesting. That module names be subject
 to... edit-time lookups? Hyperlinks within the Wiki are effectively full
 URLs? That could work pretty well, I think, though it definitely favors the
 editor over the reader. 


 Maybe what we need is a way for each user to have a personal set of
 PetNames.


http://www.skyhunter.com/marcs/petnames/IntroPetNames.html


 This way the reader sees xrefs in terms of her personal petname list, and
 the writer writes xrefs in terms of his.


 I was actually contemplating this design at a more content-based layer:


 * a sequence of bytecode may be given a 'pet-name' by a user, i.e. as a
 consequence of documenting or explaining their actions. 

 * when an equivalent sequence of bytecode is seen, we name it by the
 user's pet-name.

 * rewriting can help search for equivalencies.

 * unknown bytecode can be classified by ML, animated, etc. to help
 highlight how it is different.  

 * we can potentially search in terms of code that 'does' X, Y, and Z at
 various locations. 

 * similarly, we can potentially search in terms of code that 'affords'
 operations X, Y, and Z.


 I think both ideas could work pretty well together, especially since
 '{xref goes here}{lookup}$' itself could be given a pet name.



 On Mon, Sep 23, 2013 at 1:41 PM, Sean McDirmid smcd...@microsoft.com
 wrote:

  Maybe think of it as a module rather than a namespace. I'm still quite
 against namespaces or name based resolution in the language semantics;
 names are for people, not compilers (subtext). Rather, search should be a
 fundamental part of the IDE, which is responsible for resolving strings
 into guids. 


 It will just be like google mixed in with Wikipedia, not much to be afraid
 of. 


 On Sep 24, 2013, at 4:32, David Barbour dmbarb...@gmail.com wrote:

  Sean, 


 I'm still interested

Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-23 Thread David Barbour
Ambiguity in English is often a problem. The Artist vs. Cowboy example
shows that ambiguity is in some cases not a problem. I think it reasonable
to argue that: when the context for two meanings of a word is obviously
different, you can easily disambiguate using context. The same is true for
types. But review my concern below: when words have meanings that are *
subtly* but significantly different. In many cases the difference is *
subtle* but important. It is these cases where ambiguity can be
troublesome.

Person.draw(object) -- What do I mean by this? Am I drawing a picture? a
gun? a curtain?

Regarding conversations with coworkers:

I think in the traditional KVM programming environment, the common view
does seem important - e.g. for design discussions or over-the-shoulder
debugging. At the moment, there is no easy way to use visual aides and
demonstrations when communicating structure or meaning.

In an AR or VR environment, I hypothesize this pressure would be alleviated
a great deal, since the code could be shown to each participant in his or
her own form and allow various meaning-by-demonstration/exploration forms
of communication. I'm curious whether having different 'ways' of seeing the
code might even help for debugging. Multiple views could also be juxtaposed
if there are just a few people involved, enabling them to more quickly
understand the other person's point of view.

Best,

Dave

On Mon, Sep 23, 2013 at 6:21 PM, Sean McDirmid smcd...@microsoft.com wrote:

  Ambiguity is common in English and it’s not a big problem: words have
 many different definitions, but when read in context we can usually tell
 what they mean. For “Cowboy.Draw(Gun)” and “Artist.Draw(Picture)”, we can
 get a clue about what Draw means; ambiguity is natural! For my language,
 choosing which Draw is meant drives type inference, so I can’t rely on types
 driving name lookup. But really, the displayed annotation goes in the type
 of the variables surrounding the Draw call (Cowboy, Gun) rather than the
 Draw Call itself. 


 Language is an important part of society. Though I can use translation to
 talk to my Chinese speaking colleagues, that we all speak in English at
 work and share the names for things is very important for collaboration
 (and suffers when we don’t). For code, we might be talking about it even
 when we are not reading it, so standardizing the universe of names is still
 very important. 


 *From:* augmented-programm...@googlegroups.com [mailto:
 augmented-programm...@googlegroups.com] *On Behalf Of *David Barbour
 *Sent:* Tuesday, September 24, 2013 9:11 AM
 *To:* augmented-programm...@googlegroups.com;
 reactive-dem...@googlegroups.com; Fundamentals of New Computing

 *Subject:* Re: Personal Programming Environment as Extension of Self


 Okay, so if I understand correctly you want everyone to see the same
 thing, and just deal with the collisions when they occur. 


 You also plan to mitigate this by using some visual indicators when that
 word doesn't mean what you think it means.  This would require search
 before rendering, but perhaps it could be a search of the user's personal
 dictionary - i.e. ambiguity only within a learned set. I wonder if we could
 use colors or icons to help disambiguate.


 A concern I have about this design is when words have meanings that are
 subtly but significantly different. Selecting among these distinctions
 takes extra labor compared to using different words or parameterizing the
 distinctions. But perhaps this also could be mitigated, through automatic
 refactoring of the personal dictionary (such that future exposure to a
 given word will automatically translate it). 


 I titled this Personal Programming Environment as Extension of Self
 because I think it should reflect our own metaphors, our own thoughts,
 while still being formally precise when we share values. Allowing me to use
 your words, your meanings, your macros is one thing - a learning
 experience. Asking me to stick with it, when I have different subtle
 distinctions I favor, is something else.  


 Personally, I think making the community see the same things is less
 important so long as they can share and discover by *meaning* of content
 rather than by the words used to describe it. Translator packages could be
 partially automated and further maintained implicitly with permission from
 the people who explore different projects and small communities. 


 Can we create systems that enable people to use the same words and
 metaphors with subtly different meanings, but still interact efficiently,
 precisely, and unambiguously?


 Best,


 Dave



 On Mon, Sep 23, 2013 at 5:26 PM, Sean McDirmid smcd...@microsoft.com
 wrote:

  The names are for people, and should favor readability over uniqueness
 in the namespace; like ambiguous English words context should go a long

Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-23 Thread David Barbour
I think it's fine if people model names, text, documents, association
lists, wikis, etc. -- and processing thereof.

And I do envision use of graphics as a common artifact structure, and just
as easily leveraged for any explanation as text (though I imagine most such
graphics will also have text associated).

Can you explain your concern?
 On Sep 23, 2013 8:16 PM, John Carlson yottz...@gmail.com wrote:

 Don't forget that words can be images, vector graphics or 3D graphics.  If
 you have an open system, then people will incorporate names/symbols.  I'm
 not sure you want to avoid symbolic processing, but that's your choice.

 I'm reminded of the omgcraft ad for cachefly.
 John
 On Sep 23, 2013 8:11 PM, David Barbour dmbarb...@gmail.com wrote:

 Okay, so if I understand correctly you want everyone to see the same
 thing, and just deal with the collisions when they occur.

 You also plan to mitigate this by using some visual indicators when that
 word doesn't mean what you think it means.  This would require search
 before rendering, but perhaps it could be a search of the user's personal
 dictionary - i.e. ambiguity only within a learned set. I wonder if we could
 use colors or icons to help disambiguate.

 A concern I have about this design is when words have meanings that are
 subtly but significantly different. Selecting among these distinctions
 takes extra labor compared to using different words or parameterizing the
 distinctions. But perhaps this also could be mitigated, through automatic
 refactoring of the personal dictionary (such that future exposure to a
 given word will automatically translate it).

 I titled this Personal Programming Environment as Extension of Self
 because I think it should reflect our own metaphors, our own thoughts,
 while still being formally precise when we share values. Allowing me to use
 your words, your meanings, your macros is one thing - a learning
 experience. Asking me to stick with it, when I have different subtle
 distinctions I favor, is something else.

 Personally, I think making the community see the same things is less
 important so long as they can share and discover by *meaning* of content
 rather than by the words used to describe it. Translator packages could be
 partially automated and further maintained implicitly with permission from
 the people who explore different projects and small communities.

 Can we create systems that enable people to use the same words and
 metaphors with subtly different meanings, but still interact efficiently,
 precisely, and unambiguously?

 Best,

 Dave


 On Mon, Sep 23, 2013 at 5:26 PM, Sean McDirmid smcd...@microsoft.com wrote:

  The names are for people, and should favor readability over uniqueness
 in the namespace; like ambiguous English words context should go a long way
 in helping the reader understand on their own (if not, they can do some
 mouse over). We can even do fancy things with the names when they are being
 rendered, like, if they are ambiguous, underlay them with a dis-ambiguating
 qualifier. The world is wide open once you’ve mastered how to build a code
 editor! Other possibilities include custom names, or multi-lingual names,
 but I’m worried about different developers “seeing” different things…we’d
 like to develop a community that sees the same things.


 The trick is mastering search and coming up with an interface so that it
 becomes as natural as identifier input. 


 *From:* augmented-programm...@googlegroups.com [mailto:
 augmented-programm...@googlegroups.com] *On Behalf Of *David Barbour
 *Sent:* Tuesday, September 24, 2013 5:10 AM
 *To:* augmented-programm...@googlegroups.com

 *Subject:* Re: Personal Programming Environment as Extension of Self


 It isn't clear to me what you're suggesting. That module names be
 subject to... edit-time lookups? Hyperlinks within the Wiki are effectively
 full URLs? That could work pretty well, I think, though it definitely
 favors the editor over the reader. 


 Maybe what we need is a way for each user to have a personal set of
 PetNames.


http://www.skyhunter.com/marcs/petnames/IntroPetNames.html


 This way the reader sees xrefs in terms of her personal petname list,
 and the writer writes xrefs in terms of his.


 I was actually contemplating this design at a more content-based layer:


 * a sequence of bytecode may be given a 'pet-name' by a user, i.e. as a
 consequence of documenting or explaining their actions. 

 * when an equivalent sequence of bytecode is seen, we name it by the
 user's pet-name.

 * rewriting can help search for equivalencies.

 * unknown bytecode can be classified by ML, animated, etc. to help
 highlight how it is different.  

 * we can potentially search in terms of code that 'does' X, Y, and Z at
 various locations. 

 * similarly, we can potentially search in terms of code that 'affords'
 operations X

Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-23 Thread David Barbour
Oh, I see. As I mentioned in the first message, I plan on UTF-8 text being
one of the three basic types in ABC. There is text, rational numbers, and
blocks. Even if I'm not using names, I think text is very useful for tagged
values and such.

  {Hello, World!}

Text is also one of the problems I've been banging my head against since
Friday. Thing is, I really hate escapes. They have this nasty geometric
progression when dealing with deeply quoted code:

 {} -> {{\}} -> {{{\\\}\}} -> {{{{\\\\\\\}\\\}\}} -> ...

I feel escapes are too easy to handle incorrectly, and too difficult to
inspect for correctness. I'm currently contemplating a potential solution:
require all literal text to use balanced `{` and `}` characters, and use
post-processing in ABC to introduce any imbalance. This could be performed
in a streaming manner. Inductively, all quoted code would be balanced.
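
In sketch form (Python; this just checks the invariant, with the
post-processing for unbalanced payloads left abstract):

    def is_balanced(text):
        depth = 0
        for c in text:
            if c == '{':
                depth += 1
            elif c == '}':
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    def quote(text):
        # unbalanced payloads would instead be rebuilt by explicit
        # post-processing operations after the literal is loaded
        assert is_balanced(text), 'rebuild imbalance via post-processing'
        return '{' + text + '}'

    # inductively, quoting balanced text yields balanced text - no escapes:
    s = 'hello {} world'
    for _ in range(3):
        s = quote(s)
        assert is_balanced(s)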

Best,

Dave





On Mon, Sep 23, 2013 at 9:28 PM, John Carlson yottz...@gmail.com wrote:

 I don't really have a big concern.  If you just support numbers, people
 will find clever, but potentially incompatible ways of doing strings.  I
 recall in the pre-STL days supporting 6 different string classes.  I
 understand that a name is different than a string, but I come from a perl
 background.  People don't reinvent strings in perl to my knowledge.
 On Sep 23, 2013 11:15 PM, David Barbour dmbarb...@gmail.com wrote:

 I think it's fine if people model names, text, documents, association
 lists, wikis, etc. -- and processing thereof.

 And I do envision use of graphics as a common artifact structure, and
 just as easily leveraged for any explanation as text (though I imagine most
 such graphics will also have text associated).

 Can you explain your concern?
  On Sep 23, 2013 8:16 PM, John Carlson yottz...@gmail.com wrote:

 Don't forget that words can be images, vector graphics or 3D graphics.
 If you have an open system, then people will incorporate names/symbols.
 I'm not sure you want to avoid symbolic processing, but that's your choice.

 I'm reminded of the omgcraft ad for cachefly.
 John
 On Sep 23, 2013 8:11 PM, David Barbour dmbarb...@gmail.com wrote:

 Okay, so if I understand correctly you want everyone to see the same
 thing, and just deal with the collisions when they occur.

 You also plan to mitigate this by using some visual indicators when
 that word doesn't mean what you think it means.  This would require
 search before rendering, but perhaps it could be a search of the user's
 personal dictionary - i.e. ambiguity only within a learned set. I wonder if
 we could use colors or icons to help disambiguate.

 A concern I have about this design is when words have meanings that are
 subtly but significantly different. Selecting among these distinctions
 takes extra labor compared to using different words or parameterizing the
 distinctions. But perhaps this also could be mitigated, through automatic
 refactoring of the personal dictionary (such that future exposure to a
 given word will automatically translate it).

 I titled this Personal Programming Environment as Extension of Self
 because I think it should reflect our own metaphors, our own thoughts,
 while still being formally precise when we share values. Allowing me to use
 your words, your meanings, your macros is one thing - a learning
 experience. Asking me to stick with it, when I have different subtle
 distinctions I favor, is something else.

 Personally, I think making the community see the same things is less
 important so long as they can share and discover by *meaning* of content
 rather than by the words used to describe it. Translator packages could be
 partially automated and further maintained implicitly with permission from
 the people who explore different projects and small communities.

 Can we create systems that enable people to use the same words and
 metaphors with subtly different meanings, but still interact efficiently,
 precisely, and unambiguously?

 Best,

 Dave


 On Mon, Sep 23, 2013 at 5:26 PM, Sean McDirmid 
 smcd...@microsoft.com wrote:

  The names are for people, and should favor readability over
 uniqueness in the namespace; like ambiguous English words context should go
 a long way in helping the reader understand on their own (if not, they can
 do some mouse over). We can even do fancy things with the names when they
 are being rendered, like, if they are ambiguous, underlay them with a
 dis-ambiguating qualifier. The world is wide open once you’ve mastered how
 to build a code editor! Other possibilities include custom names, or
 multi-lingual names, but I’m worried about different developers “seeing”
 different things…we’d like to develop a community that sees the same
 things.


 The trick is mastering search and coming up with an interface so that
 it becomes as natural as identifier input. 


 *From:* augmented-programm...@googlegroups.com [mailto:
 augmented

Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-22 Thread David Barbour
Mark,

You ask some good questions! I've been taking some concrete actions to
realize my vision, but I haven't much considered how easily others might
get involved.

As I've written, I think a tacit concatenative (TC) language is the key to
making it all work great. A TC language can provide a uniformly safe and
simple foundation for understanding and manipulating streaming updates.
User actions must be formally translated to TC commands, though I can start
at a higher level and work my way down. However, the artifacts constructed
and operated upon by this language must be concretely visualizable,
composable, and manipulable - e.g. documents, diagrams, graphs, geometries.
Homoiconic this is not.

My own plan is to implement a streamable, strongly typed, capability-secure
TC bytecode (Awelon Bytecode, ABC) and build up from there, perhaps
targeting Unity and/or developing a web-app IDE for visualization. (Unity
is a tempting target for me due to my interest in AR and VR environments,
and Meta's support for Unity.)

I would very much favor a lightweight toolkit approach, similar to what the
REBOL/Red community has achieved - fitting entire desktops and webservices
as tiny apps built upon portable OS/runtime (< 1MB). BTW, if you are a big
believer in tools, I strongly recommend you look into what the REBOL
(http://www.rebol.com/what-rebol.html) community has achieved, and its
offshoot Red (http://www.red-lang.org/p/about.html). These people have already
achieved and commercialized a fair portion of the FoNC ideals through their
use of dialects. They make emacs look like a bloated, outdated, arcane
behemoth.

(If REBOL/Red used capability-based security, pervasive reactivity, live
programming, strong types, substructural types, external state, and...
well, there are a lot of reasons I don't favor the languages. But what
they've accomplished is very impressive!)

I think the toolkit approach quite feasible. ABC is designed for continuous
reactive behaviors, but it turns out that it can be very effectively used
for one-off functions and imperative code, depending only on how the
capability invocations are interpreted. ABC can also be used for efficient
serialization, i.e. as the protocol to maintain values in a reactive model.
So it should be feasible to target Unity or build my own visualization/UI
toolkit. (ABC will be relatively inefficient until I have a good compiler
for it, but getting started should be easy once ABC is fully defined and
Agda-sanitized.)

Best,

Dave


On Sep 21, 2013 10:52 PM, Mark Haniford markhanif...@gmail.com wrote:

 David,

 Great Writeup.  To get down to more practical terms for laymen software
 engineers such as myself,  what can we do in immediate terms to realize
 your vision?

 I'm a big believer in tools( even though I'm installing emacs 24 and
 live-tool).  Is there currently a rich IDE environment core in which we can
 start exploring visualization tools?

 Here's what I'm getting at. We have rich IDEs (in relative terms),
 Intellij, Resharper, VS, Eclipse, whatever..  I think they are still very
 archaic in programmer productivity.  The problem I see is that we have a
 dichotomy with scripting environments (Emacs) as opposed to heavy IDEs.
  e.g. we can't easily script these IDEs for experimentation.

 Thoughts?




 I find myself agreeing with most of your intermediate reasoning and then
 failing to understand the jump to the conclusion of tacit concatenative
 programming and the appeal of viewing user interfaces as programs.


 Tacit concatenative makes it all work smoothly.

 TC is very effective for:
 * automatic visualization and animation
 * streaming programs
 * pattern detection (simple matching)
 * simple rewrite rules
 * search-based code generation
 * Markov model predictions (user anticipation)
 * genetic programming and tuning
 * typesafe dataflow for linear or modal types

 Individually, each of these may look like an incremental improvement that
 could be achieved without TC.

 But every little point, every little bit of complexity, adds up, pushing
 the system beyond viable accessibility and usability thresholds.

 Further, these aren't little points, and TC is not just marginally
 more effective. Visualization and animation are extremely important.
 Predicting and anticipating user actions is highly valuable. Code
 extraction from history, programming by example, then tuning and optimizing
 this code from history are essential. Streaming commands is the very
 foundation.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-21 Thread David Barbour
 I see no special relationship between that primary program and the other
 program we're implicitly building when we look at the IDE's UI as a
 programming language.


Some useful relationships:

(1) The value models are the same in both cases.
(2) Subprograms extracted from the latter are first-class values in the
former.
(3) Subprograms developed in the former can be used as tools in the latter.
(4) The set of user capabilities is exactly equal to the set of program
capabilities.
(5) Intuitions developed based on usage are directly effective for
automation.

These shouldn't be special relationships.



 My point is that if you start with a named compositional (values can only
 be composed) language, it looks like it would be easy to convert to tacit
 concatenative.  It's easy to replace uses of names with nonsense words that
 achieve the same data plumbing.


That's only true if you *assume* every dataflow you can express with the
names can also be expressed with concatenative 'nonsense' words.

My point is that concatenative allows for degrees of precise, typeful
control over dataflow that names do not. I can create tacit concatenative
models for which your named applicative (aka monadic) is generally too
expressive for translation, yet which are significantly more expressive
than pure applicative. Further, it's quite useful to do so for
correctness-by-construction involving substructural and modal types
(affine, relevant, linear, regional, staging) which are in turn useful for
security, resource control, and modeling heterogeneous and distributed
systems.

Concatenative with first-class names has the exact same problems. That's
why I argued even against John Purdy's use of local names. Granted, his
language doesn't have the greater expressiveness of substructural and modal
types. But now he couldn't add them even if he wanted to. My own modeling
of names using explicit association lists only avoids this issue due to its
second-class nature, a formal indirection between reference and referent
(such that capturing a reference does not imply capturing the referent).

Anyhow, the issue here is certainly named vs. tacit, not applicative vs.
concatenative.
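
A toy illustration of that typeful control (Python, not ABC - copying is an
explicit 'dup', so an affine value is trivially policed, where a named value
would be copied implicitly by its second mention):

    class Affine:
        # marks a value that may be dropped but never copied
        def __init__(self, v):
            self.v = v

    def run(program, stack=None):
        stack = [] if stack is None else stack
        for op in program:
            if op == 'dup':
                if isinstance(stack[-1], Affine):
                    raise TypeError('affine value: copy forbidden')
                stack.append(stack[-1])
            elif op == 'drop':
                stack.pop()
            elif op == 'swap':
                stack[-2], stack[-1] = stack[-1], stack[-2]
            else:
                stack.append(op)               # anything else is a literal
        return stack

    assert run([1, 2, 'swap', 'dup']) == [2, 1, 1]
    # run([Affine('key'), 'dup']) raises TypeError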






  On Fri, Sep 20, 2013 at 8:28 PM, David Barbour dmbarb...@gmail.com wrote:


 On Fri, Sep 20, 2013 at 2:57 PM, Matt McLelland mclelland.m...@gmail.com
  wrote:


 I would say that in my experience text is a much better construct form
 for most programs than those other forms, so I would expect text to be the
 95% case.   I'm including in that more structured forms of text like
 tables.


 If you look at PL, text has been more effective. Graphical
 programming is historically very first-order and ineffective at addressing
 a variety of problems.

  If you look at UI, text input for control has been much less effective,
 and even text output is often augmented by icons or images. The common case
 is buttons, sliders, pointing, and so on. Some applications also use
 concepts of tooled pointers like brushes, or tooled views like layering.

 I've tried to explain this before: I see UI as a form of PL, and vice
 versa. Thus, to me, the 95% case is certainly not text. Rather, most users
 today are using a really bad PL (<- the 95% case), and most programmers
 today are using a really unnatural UI (<- and thus need to be really
 serious about it), and this gap is not essential.



 What I'm still not understanding is how viewing the editing of an image
 or graph as a tacit concatenative program is a big win.


 Have you ever used a professional image editing tool?

 If you haven't, the process actually does involve quite a bit of
 automation. The artist implicitly constructs a small pipeline of layers and
 filters based on the actions they perform. This pipeline can often then be
 separated from the current image and applied to another. Essentially, you
 have implicit macros and a limited form of programming-by-example.
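
 In toy form (Python; the filters are stand-ins, but the point is that the
 pipeline is a value independent of any particular image):

     from functools import reduce

     def pipeline(*filters):
         # the artist's implicit macro, reified as a reusable value
         return lambda image: reduce(lambda img, f: f(img), filters, image)

     blur = lambda img: img + ['blur']          # stand-ins for real filters
     sepia = lambda img: img + ['sepia']

     touch_up = pipeline(blur, sepia)
     assert touch_up(['photo']) == ['photo', 'blur', 'sepia']
     assert touch_up(['web view']) == ['web view', 'blur', 'sepia']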

 But, with the way applications are designed today, this pipeline is
 trapped within the image editing application. You cannot, for example,
 casually apply a pipeline of filters to a view of a website. Conversely,
 you cannot casually incorporate an image search into layers or elements in
 an image. Either of these efforts would require a lot of file manipulation
 by hand, but that effort would not be reusable.

 If you are an artist with a hobby of hacking code, you could:
 * don your programmer hat
 * fire up your favorite IDE and textual PL
 * go through the steps of starting a new project
 * review the APIs for HTTP loading
 * review the APIs for Filesystem operations
 * review the APIs for your image-editing app
 * oh bleep! it doesn't have one!
 * review the APIs for image processing
 * export your pipeline
 * begin implementing a parser/interpreter for them
 * who the bleep designed this pipeline language?!
 * abort your effort to implement a general interpreter/parser
 * re-implement your pipeline in your language
 * ugh

Re: [fonc] Formation of Noctivagous, Inc.

2013-09-21 Thread David Barbour
Can you change the font on that website? My eyes are bleeding.


On Sat, Sep 21, 2013 at 9:49 AM, John Pratt jpra...@gmail.com wrote:


 In the process of learning programming to form Noctivagous, Inc.
 I came to question fundamental computing science, specifically
 how the programs are laid out.  I am, however, interested in anyone
 who wishes to work on my newest project, 1draw33, which I announced
 first on the FONC list, in 2012.  The website, if people are interested,
 is
 http://noctivagous.com
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-21 Thread David Barbour
On Sat, Sep 21, 2013 at 12:29 PM, Matt McLelland
mclelland.m...@gmail.com wrote:

  An image could be interpreted as a high level world-map to support
 procedural generation with colors indicating terrain types and heights.

 This is common practice in games, but it doesn't IMO make artists into
 programmers and it doesn't make the image into a program.


Not by itself, I agree. Just like one hair on the chin doesn't make a
beard, or one telephone doesn't make a social network.

But scale it up! One artist will eventually have dozens or hundreds of
data-objects representing different activities and interacting. In a
carefully designed environment, the relationships between these objects
also become accessible for observation, influence, and extension.

The only practical difference between what you're calling an 'artist' vs.
'programmer' is scale. And, really, it's your vision of an artist's role
that's failing to scale, not the artist's vision.  Artists are certainly
prepared to act as programmers if it means freedom to do their work (cf.
Unreal Kismet, or , for example). But they have this important
requirement that is not well addressed by most languages today: immediate
feedback, concreteness.

A team of artists can easily build systems with tens of thousands of
interactions, at which point they'll face all the problems a team of
programmers do. It is essential that they have better tools to modularize,
visualize, understand, and address these problems than do programmers
today.



 I think there is a useful distinction between user and programmer that
 should be maintained.


I think there should be a fuzzy continuum, no clear distinction. Sometimes
artists are more involved with concrete direct manipulations, sometimes
more involved with reuse or tooling, with smooth transitions between one
role and the other. No great gaps or barriers.

Do you have any convincing arguments for maintaining a clear distinction?
What precisely is useful about it?



How can you view playing a game of Quake as programming? What's to be
 gained?


Quake is a game with very simple and immutable mechanics. The act of
playing Quake does not alter the Quake world in any interesting ways.
Therefore, we would not develop a very interesting artifact-layer program.
 There would, however, be an implicit program developed by the act of
playing Quake: navigation, aiming, shooting. This implicit program would at
least be useful for developing action-scripts and Quake-bots so you can
cheat your way to the top. (If you aren't cheating, you aren't trying. :)

If you had a more mutable game world - e.g. Minecraft, Lemmings, Little Big
Planet 2, or even Pokemon Yellow
(http://aurellem.org/vba-clojure/html/total-control.html) -
then there is much more to gain by comprehending playing as programming,
since you can model interesting systems. The same is true for games
involving a lot of micromanagement: tower defense, city simulators,
real-time tactics and strategy. You could shift easily from micromanagement
to 'programming' higher level strategies.

Further, I believe there are many, many games we haven't been able to
implement effectively: real-time dungeon-mastering for D&D-like games, for
example, and the sort of live story-play children tend to perform -
changing the rules on-the-fly while swishing and swooping with dolls and
dinosaurs. There are whole classes of games we can't easily imagine today
because the tools for realizing them are awful and inaccessible to those
with the vision.

To comprehend user interaction as programming opens opportunities even for
games.

Of course, if you just want to play, you can do that.



 I find myself agreeing with most of your intermediate reasoning and then
 failing to understand the jump to the conclusion of tacit concatenative
 programming and the appeal of viewing user interfaces as programs.


Tacit concatenative makes it all work smoothly.

TC is very effective for:
* automatic visualization and animation
* streaming programs
* pattern detection (simple matching)
* simple rewrite rules
* search-based code generation
* Markov model predictions (user anticipation)
* genetic programming and tuning
* typesafe dataflow for linear or modal types

Individually, each of these may look like an incremental improvement that
could be achieved without TC.

You CAN get automatic visualization and animation with names, it's just
more difficult (no clear move vs. copy, and values held by names don't have
a clear location other than the text). You CAN do pattern recognition and
rewriting with names, it's just more difficult (TC can easily use regular
expressions). You CAN analyze for linear safety using names, it's just more
difficult (need to track names and scopes). You CAN predict actions using
names, it's just more difficult (machine-learning, Markov models, etc. are
very syntax/structure oriented). You CAN search logically for applicative
code or use genetic programming, it's just freakishly more difficult 

Re: [fonc] Personal Programming Environment as Extension of Self

2013-09-20 Thread David Barbour
Serious
programming is only needed 5% as often. And, when needed, costs only 5% as
much.

Programming should not be a career.

Programming should be the most basic form of computer literacy - such that
people don't even think about it as programming.

A scientist who knows how to get big-data into one application and process
it in another should be able to build a direct pipeline - one that
optimizes away the intermediate loading - without ever peeking under the
hood, without learning an API.

A musician who knows how to watch YouTube videos, and who has learned of a
cool new machine-learning tool to extract and characterize microsounds,
should be able to apply the latter to the sounds from the former without
learning about HTTP and video transfer and how to scrape sounds from a
video. Further, it's better for both the artist and servers if this
processing can automatically be shifted close to the resources and
eliminates the irrelevant video rendering.

Artists, scientists, musicians, anyone should be able to think in terms of
capabilities:

* I can get an X
* I can get a Y with an X
* therefore, I can get a Y

But today, because UIs are bad PLs, they cannot. Instead, we have this
modal illogic:

* [app1] I can get X
* [app2] I can get Y with X
* ???
* profit

UI (and programming) is much more difficult today than it should be, or can
be.


 isn't the hard difference more the concatenative vs. applicative than
 named vs. tacit?


(context: challenge of translating traditional PL to tacit concatenative)

No. The primary challenge is due to named vs. tacit, and the dataflows
implicitly expressed by use of names. If you have an applicative language
that doesn't use names, then there is a much more limited dataflow. It is
really, literally, just Applicative.

  class Functor pl where
    -- language supports pure functions
    fmap :: (a -> b) -> pl a -> pl b

  class Applicative pl where
    -- language supports pointy values
    pure :: a -> pl a

    -- language supports procedural sequencing
    ap :: pl (a -> b) -> pl a -> pl b

  -- (some thought has been given to separating pure and ap).

This much more constrained language is easy to express in a concatenative
language.

* `fmap` is implicit (you can express pure behaviors if you like)
* `pure` is modeled by literals (e.g. `42` puts a number on the stack)
* `ap` is a simple combinator.

But introducing names moves expressiveness from `Applicative` to `Monad`,
which supports ad-hoc deep bindings:

  foo = \x -> bar (\y -> x + y)

Concatenative languages are often somewhere in between Applicative and
Monad, since they require explicitly modeling the data-plumbing and hence
can control the expressiveness of bindings.
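
To make that gap concrete, here is a small Haskell sketch (mine, not from
the original message; the names appPair and monadPair are illustrative).
With Applicative the dataflow shape is fixed before any result is known;
with Monad a bound name can steer arbitrarily deep into later structure:

  -- Applicative: effects are sequenced, but later computations cannot
  -- inspect earlier results
  appPair :: Applicative f => f Int -> f Int -> f (Int, Int)
  appPair fx fy = (,) <$> fx <*> fy   -- dataflow shape fixed up front

  -- Monad: the name 'x' is in scope for everything that follows, which
  -- is exactly the ad-hoc deep binding described above
  monadPair :: Monad m => m Int -> (Int -> m Int) -> m (Int, Int)
  monadPair mx k = do
    x <- mx           -- bind a name
    y <- k x          -- a later step depends on x
    return (x, x + y)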

Regards,

Dave






 On Fri, Sep 20, 2013 at 2:15 PM, David Barbour dmbarb...@gmail.com wrote:


 On Sep 20, 2013 10:46 AM, Matt McLelland mclelland.m...@gmail.com
 wrote:
 
  The TUNES vision is revived, and better than ever.
 
 
  Do you have a link that would tell me what TUNES is?

 Try tunes.org

 
  What is it about continuous automatic visualization that you think
 requires tacit or concatenative programming?

 Not requires. Just orders of magnitude better at it. Some reasons:

 1. well defined environment structure at every step
 2. well defined, small step movement operators
 3. strong, local distinction between move and copy.
 4. linear, incremental timeline built right in
 5. much weaker coupling to underlying text

 
  My main question is:  what's the problem with text?

 Manipulating diagrams, graphs, geometries, images via text is analogous
 to writing text through a line editor. It's usable for a nerd like me, but
 is still harder than it could be.

 My goal is to lower the barrier for programming so that normal people do
 it as part of their every day lives.

 
  From my point of view an essential aspect of programming is that we are
 building up an artifact -- a program  -- that can be reasoned about
 independently of its edit history / method of construction.

 I agree!

 That gets back to the "meta" in "user actions are an act of
 metaprogramming."

 Yet there are many advantages to modeling the method of construction, and
 reasoning about it, too. It enables programming by example, formal macros,
 staged metaprogramming.

 Even better, if execution is fully consistent with method of
 construction, then the intuitions users gain during normal use will
 effectively inform them for higher order programming.

 
  Furthermore, I think it's a mistake to couple the ways of constructing
 / editing that artifact to its semantics as a program.

 I halfway agree with this.

 The programmatic 'meaning' of a graph, geometry, diagram, text, or other
 artifact should be controlled by the user of that artifact. Yet, it is
 ideal that the ways of manipulating the artifact also move it from one
 consistent meaning to another.

 In order to achieve both properties, it is necessary that the tools and
 macros for operating on a structure also

[fonc] Personal Programming Environment as Extension of Self

2013-09-19 Thread David Barbour
Over the last month, I feel like I stumbled into something very simple and
profound: a new perspective on an old idea, with consequences deeper and
more pervasive than I had imagined.

The idea is simply this: every user action is an act of meta-programming.

More precisely:
(1) Each user event appends to a tacit concatenative program.
(2) The output of the tacit concatenative program is another program.
(3) We can understand the former as rewriting parts of the latter.
(4) These rewrites include the user-model - navigation, clipboard, etc.
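
In type-signature form, points (1) through (3) might be sketched like this
(Haskell; the names are mine, purely illustrative):

  type Word'   = String              -- one tacit concatenative word
  type Program = [Word']             -- the artifact being edited
  type Event   = Program -> Program  -- (2)/(3): output is another program

  -- (1): a user session is the in-order application of its events
  interpret :: [Event] -> Program -> Program
  interpret es p = foldl (\q e -> e q) p es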

I will further explain this idea, why it is powerful, how it is different.

To clarify, this isn't another hand-wavy 'shalt' and 'must' proposal with
no idea of how to achieve it. Hammering at a huge list of requirements for
eight years got me to RDP. At this point, I have concrete ideas on how to
accomplish everything I'm about to describe.

Users Are Programmers.

The TUNES vision is revived, and better than ever.

*WHY TACIT CONCATENATIVE?*

Concatenative programming is perhaps best known through FORTH. Most
concatenative languages have followed in Charles Moore's forthsteps,
sticking with the basic stack concept but focusing on higher-order
programming, types, and other features.

A stack would be an extremely impoverished and cramped environment for a
user; even many programmers would not tolerate it. Fortunately, we can move
beyond the stack environment. And I insist that we do! Concatenative
programming can also be based upon such structures as trees, Huet zippers,
and graphs. This proposal is based primarily on tree-structured data and
zippers, with just a little indirect graph modeling through shared state or
explicit labels (details later).
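
For concreteness, here is the standard Huet zipper over a binary tree,
sketched in plain Haskell (names are mine):

  data Tree a = Leaf a | Node (Tree a) (Tree a)

  -- The context is the path back to the root, so the focus can move in
  -- small, precisely defined steps -- ideal for 'take', 'put', 'slide'.
  data Ctx a = Top
             | InL (Ctx a) (Tree a)  -- went left; right sibling saved
             | InR (Tree a) (Ctx a)  -- went right; left sibling saved

  type Zipper a = (Tree a, Ctx a)

  left, right, up :: Zipper a -> Maybe (Zipper a)
  left  (Node l r, c) = Just (l, InL c r)
  left  _             = Nothing
  right (Node l r, c) = Just (r, InR l c)
  right _             = Nothing
  up (t, InL c r)     = Just (Node t r, c)
  up (t, InR l c)     = Just (Node l t, c)
  up (_, Top)         = Nothing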

A 'tacit' programming language is one that does not mention names for
parameters or local variables. Many concatenative programming languages are
also tacit, though the concepts don't fully intersect.

A weakness of tacit concatenative programming is that, in a traditional
text-based programming environment, users must visualize the environment
(stack or other structure) in their head, and that they must memorize a
bunch of arcane 'stack shuffling' words. By comparison, variable names in
text are easy to visualize and review.

My answer: change programming environments!

Powerful advantages of tacit concatenative programming include:
1. the environment has a precisely defined, visualizable value
2. short strings of tacit concatenative code are easy to generate
3. concatenative code is sequential, forming an implicit timeline
4. code also subject to learning, pattern recognition, and rewrites
5. every step, small and large, is precisely defined and typed

Instead of an impoverished, text-based programming environment, we should
offer continuous automatic visualization. Rather than asking users to
memorize arcane words, we should offer direct manipulation: e.g. take, put,
slide, toss, drag, drop, copy, paste. Appropriate tacit concatenative code
is generated at every step, for every gesture. This code is easy to
generate because generators can focus on short vectors of learned 'known
useful' words without syntactic noise; this is subject to a variety of
well-known solutions (logical searches, genetic programming,
hill-climbing).

And then there are benefits that move beyond anything offered today, UI or
PL.

Not only can we visualize the environment, we can animate it. Users can
review and replay their actions, potentially from different perspectives or
highlighting different objects. Since even the smallest dataflow steps are
well defined, users can review at different temporal scales, based on the
granularity of their actions - zooming in to see precisely what taking an
object entails, or zooming out to see broad changes in an environment.

Rewrites can be used to make these animations smoother, more efficient, and
perhaps more aesthetically pleasing. And, in addition to undo, users can
rewrite parts of their history to better understand a decision or to fix a
mistake.

The programming environment can also help users construct macros: pattern
recognition is easy with tacit programming even if it were just in terms of
sequences of words. However, patterns are augmented further by looking at
context, the environment at the time a word was used. Proposed words can be
refined with very simple decision procedures to account for slight
context-sensitive variations. Discovered patterns can be used for simple
compression of history, or be used for programming-by-example.
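
Even the crudest version of this pattern detection is easy to sketch. The
following bigram (order-1 Markov) model is my sketch, not part of the
proposal: count word pairs in the history and suggest the most frequent
follower.

  import qualified Data.Map as M

  type W = String   -- one word in the user's action history

  bigrams :: [W] -> M.Map (W, W) Int
  bigrams ws = M.fromListWith (+) [ ((a, b), 1) | (a, b) <- zip ws (tail ws) ]

  -- the most frequent follower of the previous word, if any
  suggest :: M.Map (W, W) Int -> W -> Maybe W
  suggest m prev =
    case [ (n, b) | ((a, b), n) <- M.toList m, a == prev ] of
      [] -> Nothing
      xs -> Just (snd (maximum xs))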

An environment that recognizes a pattern might quietly and unobtrusively
offer a constructed tool or macro, that the user might refine a little
(e.g. clarifying the decision procedure) before using. The notion of
'dialecting' and 'DSLs' is replaced by problem-specific toolboxes and
macros, where a tool may act a lot like a paintbrush.

Further, there are advantages from the formalism and typing!

For one example, it is to guide user actions relevant to the typeful

Re: [fonc] Software Crisis (was Re: Final STEP progress report abandoned?)

2013-09-11 Thread David Barbour
Ah, yes, I had forgotten about nests and birds. IIRC, they're more like
channels than promises, but channels can serve a similar purpose. So again,
the idea seems to be to leave a token indicating where something should go,
and a token indicating where it should be picked up?

Aside: I haven't explicated it here, but I believe it much better to avoid
state embedded at the surface of the UI, for a lot of reasons (like
persistence, disruption tolerance, simpler consistency properties, simpler
undo or upgrade via history edits). Avoiding state can be achieved by
makePromise if I use linear types. Negative or fractional types can do a
similar job. A fractional type can model piping a time-varying signal
through a promise.

Best,

Dave


On Tue, Sep 10, 2013 at 10:35 PM, Darius Bacon wit...@gmail.com wrote:

 On Wed, Sep 11, 2013 at 12:24 AM, David Barbour dmbarb...@gmail.com
 wrote:
  One PL concept you didn't mention is promises/futures. How might those
 be realized in a UI?

 There's precedent: ToonTalk represents promises and resolvers in its
 UI as nests and birds. (Some reasons this may miss the mark: It's been
 many years since I played with ToonTalk; IIRC the system supports
 declarative concurrency and nothing more powerful; I don't understand
 you about negative and fractional types, though it sounds interesting:
 http://www.cs.indiana.edu/~sabry/papers/rational.pdf )

 Darius


  In the type system of a PL, promises can be modeled as a pair:
 
    makePromise :: 1 -> (resolver * future)
 
  Or we can potentially model promises using fractional or negative types,
 as developed by Amr Sabry, which has an advantage of addressing sums in a
 symmetric manner:
 
    receive :: 1 -> (1/a * a)
    return :: (1/a * a) -> 1
    receive+ :: 0 -> (-a + a)
    return+ :: (-a + a) -> 0
 
  But what would this look like in a UI model? My intuition is leaving
 some sort of IOU where a value is needed then going to some other location
 to provide it (perhaps after a copy and a few transforms). I suspect this
 behavior might be convenient for a user, but it potentially leaves parts of
 the UI or system in an indefinite limbo state while a promise is
 unfulfilled. Though, perhaps that could be addressed by requiring the
 promises to be fulfilled before operations will 'commit' (i.e. enforcing
 type-safe UI transactions).
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Software Crisis (was Re: Final STEP progress report abandoned?)

2013-09-10 Thread David Barbour
Thanks for this ref. It looks interesting.


On Mon, Sep 9, 2013 at 7:33 PM, K. K. Subramaniam kksubbu...@gmail.com wrote:

 On Tuesday 10 September 2013 06:24 AM, Alan Kay wrote:

 Check out Smallstar by Dan Halbert at Xerox PARC (written up in a PARC
 bluebook)


 Available online at http://danhalbert.org/pbe-html.htm

 BTW, Dan Halbert is the author of the more command in Unix.

 Regards .. Subbu

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Software Crisis (was Re: Final STEP progress report abandoned?)

2013-09-10 Thread David Barbour
I think we cannot rely on 'inspection' - ability to view source and so on -
except in a very shallow way - e.g. to find capabilities directly
underlying a form. Relying on deep inspection seems to have several
problems:

1) First it would take a lot more study and knowledge to figure out the
intention of code, to distinguish the significant behavior from the
insignificant. Intentions could be easily obfuscated.

2) Since it would be difficult to embed this 'study and knowledge' into our
programs, it would become very difficult to automate composition,
transclusion, view-transforms, and programmatic manipulation of UIs. We
would rely on too much problem-specific knowledge.

3) When so much logic is embedded in the surface of the UI, it becomes easy
for widgets to become entangled with the ambient logic and state. This
makes it infeasible to extract, at a fine granularity, a few specific
signals and capabilities from one form for use in another.

4) Relying on deep inspection can violate encapsulation and security
properties. It would be difficult to move beyond a closed system into the
wider world - cross-application mashups, agents that integrate independent
services, and so on.

If I'm to unify PL with UI, I cannot assume that I have access to the code
underlying the UI. Instead, I must ensure that the UI is a good PL at the
surface layer. We can understand UIs to be programming languages, but often
they are not very good languages with respect to composition, modularity,
appropriate level of abstraction. That's the problem to solve - at the
surface layer, not (just) under-the-hood.

In such a system, copy-and-paste code could be *exactly the same* as
copy-and-paste UI, though there may be different types of values
involved. We could have blocks of code that can be composed or directly
applied to UI elements - programmatically transforming or operating on
them. The moment users are forced to 'look under the hood' and extract
specification, the ideal fails. UI and PL are separated. There are now two
distinct surface syntaxes, two distinct meanings and semantics, and a gap
between them bridged with arcane logic.

To unify PL and UI, widgets must *be* values, behaviors, signals, code.

And any looking under the hood must be formally represented as reflection
or introspection, just as it would be in a PL.

On Mon, Sep 9, 2013 at 3:47 PM, John Carlson yottz...@gmail.com wrote:

 One thing you can do is create a bunch of named widgets that work together
 with copy and paste.  As long as you can do type safety, and can
 appropriately deal with variable explosion/collapsing.  You'll probably
 want to create very small functions, which can also be stored in widgets
 (lambdas).  Widgets will show up when their scope is entered, or you could
 have an inspect mode.
 On Sep 9, 2013 5:11 PM, David Barbour dmbarb...@gmail.com wrote:

 I like Paul's idea here - form a "pit of success" even for people who
 tend to copy-paste.

 I'm very interested in unifying PL with HCI/UI such that actions like
 copy-paste actually have formal meaning. If you copy a time-varying field
 from a UI form, maybe you can paste it as a signal into a software agent.
 Similarly with buttons becoming capabilities. (Really, if we can use a
 form, it should be easy to program something to use it for us. And vice
 versa.) All UI actions can be 'acts of programming', if we find the right
 way to formalize it. I think the trick, then, is to turn the UI into a good
 PL.

 To make copy-and-paste code more robust, what can we do?

 Can we make our code more adaptive? Able to introspect its environment?

 Can we reduce the number of environmental dependencies? Control namespace
 entanglement? Could we make it easier to grab all the dependencies for code
 when we copy it?

 Can we make it more provable?

 And conversely, can we provide IDEs that can help the kids understand
 the code they take - visualize and graph its behavior, see how it
 integrates with its environment, etc? I think there's a lot we can do. Most
 of my thoughts center on language design and IDE design, but there may also
 be social avenues - perhaps wiki-based IDEs, or Gist-like repositories that
 also make it easy to interactively explore and understand code before using
 it.


 On Sun, Sep 8, 2013 at 10:33 AM, Paul Homer paul_ho...@yahoo.ca wrote:


 These days, the kids do a quick google, then just copy&paste the
 results into the code base, mostly unaware of what the underlying 'magic'
 instructions actually do. So example code is possibly a bad thing?

 But even if that's true, we've let the genie out of the bottle and he
 isn't going back in. To fix the quality of software, for example, we can't
 just ban all cut&paste-able web pages.

 The alternate route out of the problem is to exploit these types of
 human deficiencies. If some programmers just want to cut&paste, then
 perhaps all we can do is to just make sure that what they are using is
 high enough quality. If someday they want

Re: [fonc] Software Crisis (was Re: Final STEP progress report abandoned?)

2013-09-10 Thread David Barbour
This is a good list of concept components.

I think branching should be open - I.e. modeled as a collection where only
one item is 'active' at a time. There is a clear duality between sums and
products, and interestingly a lot of the same UIs apply (i.e.
prisms/lenses, zippers for sum types). (But there can be some awkwardness
distributing sums over products.)

Recursion is an interesting case. One can model it as a closed value, or as
a fixpoint combinator. But to keep the UI/PL extensible, it might be better
to avoid closed loops (especially if they maintain state). Open loop
recursion happens easily and naturally enough if we have any shared state
resources, such as a database or tuple space... or the world itself (via
sensors and actuators).

Exceptions: in general, exceptions are not difficult to model as
choices/branches (a path of a sum type). I've usually considered this a
better way to model them. This can be combined with searching the
environment for some advice on how to handle the condition - I.e. in terms
of a dynamic scoped 'special' variable, or (in a concatenative language)
literally searching a stack or other environment model.

Keyboard/video/mouse/audio would be a good start for signals on the UI
side. I've been wondering how to get a lot of useful control signals
quickly... Maybe integrate with ROS from WillowGarage?
On Sep 10, 2013 10:54 AM, John Carlson yottz...@gmail.com wrote:

 To unify PL and UI:

 values: Date Calculator, String Calculator, Numeric Calculator,
 Zipper/Document Visualizer
 behavior, code:  Recorder (the container), Script,
 Branch/Table/Conditional/Recursion/Procedure/Function/Method (Unified
 Control Structure)
   Also, Exceptions (has anyone seen a UI for this?)
 signals:  Mouse, along with x,y coordinates
   Keyboard and Keystrokes
   Audio: waveform and controls
   Webcam:  video and controls
   Networking:  the extend/receive I/O operation
   System interface:  pipes, command prompt



  On Tue, Sep 10, 2013 at 12:25 PM, David Barbour dmbarb...@gmail.com wrote:

 I think we cannot rely on 'inspection' - ability to view source and so on
 - except in a very shallow way - e.g. to find capabilities directly
 underlying a form. Relying on deep inspection seems to have several
 problems:

 1) First it would take a lot more study and knowledge to figure out the
 intention of code, to distinguish the significant behavior from the
 insignificant. Intentions could be easily obfuscated.

 2) Since it would be difficult to embed this 'study and knowledge' into
 our programs, it would become very difficult to automate composition,
 transclusion, view-transforms, and programmatic manipulation of UIs. We
 would rely on too much problem-specific knowledge.

 3) When so much logic is embedded in the surface of the UI, it becomes
 easy for widgets to become entangled with the ambient logic and state. This
 makes it infeasible to extract, at a fine granularity, a few specific
 signals and capabilities from one form for use in another.

 4) Relying on deep inspection can violate encapsulation and security
 properties. It would be difficult to move beyond a closed system into the
 wider world - cross-application mashups, agents that integrate independent
 services, and so on.

 If I'm to unify PL with UI, I cannot assume that I have access to the
 code underlying the UI. Instead, I must ensure that the UI is a good PL at
 the surface layer. We can understand UIs to be programming languages, but
 often they are not very good languages with respect to composition,
 modularity, appropriate level of abstraction. That's the problem to solve -
 at the surface layer, not (just) under-the-hood.

 In such a system, copy-and-paste code could be *exactly the same* as
 copy-and-paste UI, though there may be different types of values
 involved. We could have blocks of code that can be composed or directly
 applied to UI elements - programmatically transforming or operating on
 them. The moment users are forced to 'look under the hood' and extract
 specification, the ideal fails. UI and PL are separated. There are now two
 distinct surface syntaxes, two distinct meanings and semantics, and a gap
 between them bridged with arcane logic.

 To unify PL and UI, widgets must *be* values, behaviors, signals, code.

 And any looking under the hood must be formally represented as
 reflection or introspection, just as it would be in a PL.

 On Mon, Sep 9, 2013 at 3:47 PM, John Carlson yottz...@gmail.com wrote:

 One thing you can do is create a bunch of named widgets that work
 together with copy and paste.  As long as you can do type safety, and can
 appropriately deal with variable explosion/collapsing.  You'll probably
 want to create very small functions, which can also be stored in widgets
 (lambdas).  Widgets will show up when their scope is entered, or you could
 have an inspect mode.
 On Sep 9, 2013 5:11 PM, David Barbour

Re: [fonc] Software Crisis (was Re: Final STEP progress report abandoned?)

2013-09-10 Thread David Barbour
Yes, sum (x + y) vs. product (x * y) corresponds to 'or vs. and' or 'union
vs. struct'.

For many purposes, they are very similar: associative, commutative,
identity elements, ability to focus attention and operations on just x or
just y. We can model zippers and lenses for each, which is kind of cool.

In widget form, products are obvious (typical hbox or vbox). Sums, OTOH,
are rarely represented directly because we tend to hide the inactive
values. One might model a sum in terms of a radio-button with an associated
active form - i.e. instead of hiding the inactive values, they're marked
inactive - but still accessible for manipulations. (We could have optional
views that hide the inactive forms.)

There are differences between sums and products:

products have `copy` x -> (x * x)  (adding 1 bit of info)
sums have `merge` (x + x) -> x (losing 1 bit of info)

And in general we want to deal with cases like:

  factoring :: (x*y) + (x*z) -> x*(y+z)
  distribution :: x * (y + z) -> (x * y) + (x * z)
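
All four of these, plus copy and merge above, are one-liners in plain
Haskell if we read products as pairs and sums as Either -- a sketch of
mine, with illustrative names:

  copy :: x -> (x, x)
  copy x = (x, x)

  merge :: Either x x -> x
  merge = either id id

  factor :: Either (x, y) (x, z) -> (x, Either y z)
  factor (Left  (x, y)) = (x, Left y)
  factor (Right (x, z)) = (x, Right z)

  distrib :: (x, Either y z) -> Either (x, y) (x, z)
  distrib (x, Left  y) = Left  (x, y)
  distrib (x, Right z) = Right (x, z)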

The problem with if/then/else statements is that they're closed to
extension: they force the merge, and any intermediate decisions, to be
syntactically local. This is inflexible. It is also, often, very
inefficient - i.e. in some cases it means we repeatedly branch our behavior
by observing the same values, rather than just keeping the same branch
available for extension.

I'm not sure what you were trying to explain with "the same number of
factors for each term" or what you mean by "true" as a factor. (My thought
is that you're using 'true' like I might have used the unit type, 1.) But I
think the main difficulties for sum vs. product regard something else
entirely: distribution, in a distributed system. An issue is that `x*(y+z)`
cannot really be distributed unless we know whether we are in y or z at the
*same place and time* as we know x. Perhaps this isn't as much a problem in
a UI, since we may be forced to move values to the same place and time to
get them into the same UI in the first place.


In other thoughts...

One PL concept you didn't mention is promises/futures. How might those be
realized in a UI?

In the type system of a PL, promises can be modeled as a pair:

  makePromise :: 1 -> (resolver * future)

Or we can potentially model promises using fractional or negative types, as
developed by Amr Sabry, which has an advantage of addressing sums in a
symmetric manner:

  receive :: 1 -> (1/a * a)
  return :: (1/a * a) -> 1
  receive+ :: 0 -> (-a + a)
  return+ :: (-a + a) -> 0

But what would this look like in a UI model? My intuition is leaving some
sort of IOU where a value is needed then going to some other location to
provide it (perhaps after a copy and a few transforms). I suspect this
behavior might be convenient for a user, but it potentially leaves parts of
the UI or system in an indefinite limbo state while a promise is
unfulfilled. Though, perhaps that could be addressed by requiring the
promises to be fulfilled before operations will 'commit' (i.e. enforcing
type-safe UI transactions).
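
In an ordinary PL setting, by contrast, makePromise is a few lines over an
empty MVar -- a Haskell sketch of mine, not the linear/fractional version:

  import Control.Concurrent.MVar

  -- the resolver writes exactly once; the future blocks until filled
  makePromise :: IO (a -> IO (), IO a)
  makePromise = do
    v <- newEmptyMVar
    return (putMVar v, readMVar v)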





On Tue, Sep 10, 2013 at 7:45 PM, John Carlson yottz...@gmail.com wrote:

 If your sum/product is or/and, I tend to agree there is difficulty.  We
 chose to use a normalized representation:  the same number of factors for
 each term, "true" used liberally as a factor.  In many cases, there were
 only two branches to take.  I spent a great deal of time coming up with a
 table which handled repetition, mandatory, optional and floating components
 in the product, but it got so difficult to be implemented and tested
 especially, that I gave up and implemented 997 acknowledgements in C++.  I
 think our translation analyst may have been unique among EDI/X12 analysts
 for not using components designed specifically for X12 beyond the 997
 acknowledge code.  Instead we used something like tab-separated values to
 parse the X12...which was much less flexible...we couldn't read the
 separators from X12 file--they had to be constants in the code.  However,
 it might be possible that he used my fancy table, but I never heard of any
 bug reports, so I doubt it.  That's where I learned the rule: don't make
 anything so complex you can't debug it, or automate tests for it.  This is
 likely why we see much more XML than X12 these days.  If you don't know
 what X12 is,  think of a mixture between s-expressions and comma separated
 values.
 On Sep 10, 2013 7:13 PM, David Barbour dmbarb...@gmail.com wrote:

 This is a good list of concept components.

 I think branching should be open - I.e. modeled as a collection where
 only one item is 'active' at a time. There is a clear duality between sums
 and products, and interestingly a lot of the same UIs apply (i.e.
 prisms/lenses, zippers for sum types). (But there can be some awkwardness
 distributing sums over products.)

 Recursion is an interesting case. One can model it as a closed value, or
 as a fixpoint combinator. But to keep the UI/PL extensible

Re: [fonc] Final STEP progress report abandoned?

2013-09-09 Thread David Barbour
On Fri, Sep 6, 2013 at 1:19 AM, Chris Warburton
chriswa...@googlemail.comwrote:

 David Barbour dmbarb...@gmail.com writes:

  But favoring a simpler programming model - e.g. one with only
  integers, and where the only operation is to add or compare them
  - might also help.

 If the problem domain is X then I agree a minimal X-specific DSL is a
 good idea, although purely numeric problems are often amenable to more
 direct solutions; eg. dynamic programming, gradient-based methods, etc.


A rather nice property: given any general purpose concatenative language,
we can create a DSL for genetic programming by developing a set of
high-level words... then using those as the primitives for the GP.
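
A sketch of why this is so convenient (Haskell; the names are mine): a
program is just a list of word names, so crossover is list splicing, and
any splice is still a syntactically valid program.

  import System.Random

  type Prog = [String]  -- a program is a sequence of high-level words

  -- one-point crossover: cut each parent at a random index and splice
  crossover :: StdGen -> Prog -> Prog -> Prog
  crossover g a b =
    let (i, g') = randomR (0, length a) g
        (j, _ ) = randomR (0, length b) g'
    in take i a ++ drop j b  -- still a well-formed word sequence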

The main issue, I think, is a more flexible environment model. The stack
doesn't offer very flexible interactions. A document (zipper) or graph
could be modeled as an object on the stack, though.

Best,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Final STEP progress report abandoned?

2013-09-09 Thread David Barbour
On Mon, Sep 9, 2013 at 2:11 AM, Chris Warburton
chriswa...@googlemail.com wrote:

 I think a quite modest improvement would be more powerful
 calculators.


Smart phones? :)

(But seriously.)

Honestly, one of the things I would really want in a more powerful
calculator is a powerful array of sensors that can turn parts of my
environment into usable numbers. What is the distance between these two
trees? What is the GPS coordinate? What chemicals are detected in the area?

Even better if this happened all the time, so I can ask questions about
recent events. Unfortunately, we lack the energy technologies for it.
(Storage is much less an issue; we have lots of storage, and useful
exponential decay and compression models to remove the boring stuff.)
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Software Crisis (was Re: Final STEP progress report abandoned?)

2013-09-09 Thread David Barbour
I like Paul's idea here - form a "pit of success" even for people who tend
to copy-paste.

I'm very interested in unifying PL with HCI/UI such that actions like
copy-paste actually have formal meaning. If you copy a time-varying field
from a UI form, maybe you can paste it as a signal into a software agent.
Similarly with buttons becoming capabilities. (Really, if we can use a
form, it should be easy to program something to use it for us. And vice
versa.) All UI actions can be 'acts of programming', if we find the right
way to formalize it. I think the trick, then, is to turn the UI into a good
PL.

To make copy-and-paste code more robust, what can we do?

Can we make our code more adaptive? Able to introspect its environment?

Can we reduce the number of environmental dependencies? Control namespace
entanglement? Could we make it easier to grab all the dependencies for code
when we copy it?

Can we make it more provable?

And conversely, can we provide IDEs that can help the kids understand the
code they take - visualize and graph its behavior, see how it integrates
with its environment, etc? I think there's a lot we can do. Most of my
thoughts center on language design and IDE design, but there may also be
social avenues - perhaps wiki-based IDEs, or Gist-like repositories that
also make it easy to interactively explore and understand code before using
it.


On Sun, Sep 8, 2013 at 10:33 AM, Paul Homer paul_ho...@yahoo.ca wrote:


 These days, the kids do a quick google, then just copy&paste the results
 into the code base, mostly unaware of what the underlying 'magic'
 instructions actually do. So example code is possibly a bad thing?

 But even if that's true, we've let the genie out of the bottle and he isn't
 going back in. To fix the quality of software, for example, we can't just
 ban all cut&paste-able web pages.

 The alternate route out of the problem is to exploit these types of human
 deficiencies. If some programmers just want to cut&paste, then perhaps all
 we can do is to just make sure that what they are using is high enough
 quality. If someday they want more depth, then it should be available in
 easily digestible forms, even if few will ever travel that route.

 If most people really don't want to think deeply about about their
 problems, then I think that the best we can do is ensure that their hasty
 decisions are based on as accurate knowledge as possible. It's far better
 than them just flipping a coin. In a sense it moves up our decision making
 to a higher level of abstraction. Some people lose the 'why' of the
 decision, but their underlying choice ultimately is superior, and the 'why'
 can still be found by doing digging into the data. In a way, isn't that
 what we've already done with micro-code, chips and assembler? Or machinery?
 Gradually we move up towards broader problems...



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Final STEP progress report abandoned?

2013-09-05 Thread David Barbour
On Thu, Sep 5, 2013 at 8:17 AM, Carl Gundel ca...@psychesystems.com wrote:

 I’m not sure why you think I’m attributing special reverence to
 computing.  Break all the rules, please.  ;-)


To say you're touching the hem generally implies you're also on your
knees and bowing your head.


 


 The claim that life is somehow inefficient so that computing should be
 different begs for qualification.  I’m sure there are a lot of ideas that
 can be gleaned for future computing technologies by studying biology, but
 living things are not computers in the sense of what people mean when they
 use the term computer.  It’s apples and oranges.


I agree we can gain some inspirations from life. Genetic programming,
neural networks, the development of robust systems in terms of reactive
cycles, focus on adaptive rather than abstractive computation.

But it's easy to forget that life had millions or billions of years to get
where it's at, and that it has burned through materials, that it fails to
recognize the awesomeness of many of the really cool 'programs' it has
created (like Wolfgang Amadeus Mozart ;).

A lot of logic must be encoded in the heuristic to evaluate some programs
as better than others. It can be difficult to recognize value that one did
not anticipate finding. It can be difficult to recognize how a particular
mutation might evolve into something great, especially if it causes
problems in the short term. The search space is unbelievably large, and it
can take a long time to examine it.

It isn't a matter of life being 'inefficient'. It's that, if we want to use
this 'genetic programming' technique that life used to create cool things
like Mozart, we need to be vastly more efficient than life at searching the
spaces, developing value, recognizing how small things might contribute to
a greater whole and thus should be preserved. In practice, this will often
require very special-purpose applications - e.g. genetic programming for
the procedural generation of cities in a video game might use a completely
different set of primitives than genetic programming for the facial
structures and preferred behaviors/habits of NPCs (and it still wouldn't
be easy to decide whether a particular habit contributes value).

Machine code - by which I mean x86 code and similar - would be a terribly
inefficient way to obtain value using genetic programming. It is far too
fragile (breaks easily under minor mutations), too fine grained (resulting
in a much bigger search space), and far too difficult to evaluate.

Though, we could potentially create a virtual-machine code suitable for
genetic programming.
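
For instance, such a VM might skip instructions that don't make sense in
the current state, so every mutation still runs -- a minimal Haskell
sketch of mine:

  data Op = Push Int | Add | Dup | Swap

  -- instructions without enough operands are skipped, never fault
  step :: [Int] -> Op -> [Int]
  step s       (Push n) = n : s
  step (x:y:s) Add      = (x + y) : s
  step (x:s)   Dup      = x : x : s
  step (x:y:s) Swap     = y : x : s
  step s       _        = s

  run :: [Op] -> [Int]
  run = foldl step []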
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Final STEP progress report abandoned?

2013-09-05 Thread David Barbour
Ah. Perhaps a more direct reference to the elephant would have worked
better. :)

Yeah, I'll grant the metaphor that we have a lot of different people
focused on different parts of the computational elephant.

On Thu, Sep 5, 2013 at 9:40 AM, Carl Gundel ca...@psychesystems.com wrote:

 By touching the hem in this sense I meant that we’ve got a blindfold on
 and we’re trying to guess what the elephant looks like by touching any one
 part of it.


 -Carl


 *From:* fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] *On Behalf
 Of *David Barbour
 *Sent:* Thursday, September 05, 2013 12:16 PM

 *To:* Fundamentals of New Computing
 *Subject:* Re: [fonc] Final STEP progress report abandoned?


 On Thu, Sep 5, 2013 at 8:17 AM, Carl Gundel ca...@psychesystems.com
 wrote:

 I’m not sure why you think I’m attributing special reverence to
 computing.  Break all the rules, please.  ;-)


 To say you're touching the hem generally implies you're also on your
 knees and bowing your head.


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Final STEP progress report abandoned?

2013-09-05 Thread David Barbour
All very good points, Chris.


On Thu, Sep 5, 2013 at 10:27 AM, Chris Warburton
chriswa...@googlemail.com wrote:

 David Barbour dmbarb...@gmail.com writes:

  I agree we can gain some inspirations from life. Genetic programming,
  neural networks, the development of robust systems in terms of reactive
  cycles, focus on adaptive rather than abstractive computation.
 
  But it's easy to forget that life had millions or billions of years to
 get
  where it's at, and that it has burned through materials, that it fails to
  recognize the awesomeness of many of the really cool 'programs' it has
  created (like Wolfgang Amadeus Mozart ;).

 Artificial neural networks and genetic programming are often grouped
 together, eg. as nature-inspired optimisation, but it's important to
 keep in mind that their natural counterparts work on very different
 timescales. Neural networks can take a person's lifetime to become
 proficient at some task, but genetics can take a planet's lifetime ;)
 (of course, there has been a lot of overlap as brains are the product of
 evolution and organisms must compete in a world full of brains).

  A lot of logic must be encoded in the heuristic to evaluate some programs
  as better than others. It can be difficult to recognize value that one
 did
  not anticipate finding. It can be difficult to recognize how a particular
  mutation might evolve into something great, especially if it causes
  problems in the short term. The search space is unbelievably large, and
 it
  can take a long time to examine it.

 There is interesting work going on in artificial curiosity, where
 regular rewards/fitness/reinforcement is treated as external, but
 there is also an internal reward, usually based on finding new
 patterns and how to predict/compress them. In theory this rewards a
 system for learning more about its domain, regardless of whether it
 leads to an immediate increase in the given fitness function.

 There are some less drastic departures from GP like Fitness Uniform
 Optimisation, which values population diversity rather than high
 fitness: we only need one fit individual, the rest can explore.

 Bayesian Exploration is also related: which addresses the
 exploration/exploitation problem explicitly by assuming that a more-fit
 solution exists and choosing our next candidate based on the highest
 expected fitness (this is known as 'optimism').

 These algorithms attempt to value unique/novel solutions, which may
 contribute to solving 'deceptive' problems; where high-fitness solutions
 may be surrounded by low-fitness ones.

  It isn't a matter of life being 'inefficient'. It's that, if we want to
 use
  this 'genetic programming' technique that life used to create cool things
  like Mozart, we need to be vastly more efficient than life at searching
 the
  spaces, developing value, recognizing how small things might contribute
 to
  a greater whole and thus should be preserved. In practice, this will
 often
  require very special-purpose applications - e.g. genetic programming for
  the procedural generation of cities in a video game might use a
 completely
  different set of primitives than genetic programming for the facial
  structures and preferred behaviors/habits of NPCs (and it still wouldn't
  be easy to decide whether a particular habit contributes value).

 You're dead right, but at the same time these kind of situations make me
 instinctively want to go up a level and solve the meta-problem. If I
 were programming Java, I'd want a geneticProgrammingFactory ;)

  Machine code - by which I mean x86 code and similar - would be a terribly
  inefficient way to obtain value using genetic programming. It is far too
  fragile (breaks easily under minor mutations), too fine grained
 (resulting
  in a much bigger search space), and far too difficult to evaluate.

 True. 'Optimisation' is often seen as the quest to get closer to machine
 code, when actually there are potentially bigger gains to be had by
 working at a level where we know enough about our code to eliminate lots
 of it. For example all of the fusion work going on in Haskell, or even
 something as everyday as constant folding. Whilst humans can scoff that
 'real' programmers would have written their assembly with all of these
 optimisations already-applied, it's far more likely that auto-generated
 code will be full of such high-level optimisation potentials. For
 example, we could evolve programs using an interpreter until they reach
 a desired fitness, then compile the best solution with a
 highly-aggressive optimising compiler for use in production.

  Though, we could potentially create a virtual-machine code suitable for
  genetic programming.

 This will probably be the best option for most online adaptation, where
 the system continues to learn over the course of its life. The search
 must use high-level code to be efficient, but compiling every candidate
 when most will only be run once usually won't be worth

Re: [fonc] Final STEP progress report abandoned?

2013-09-05 Thread David Barbour
On Thu, Sep 5, 2013 at 11:40 AM, Chris Warburton
chriswa...@googlemail.com wrote:

  to prevent type errors like true 5 + it uses a different stack for each
 type


I think these errors might not be essential to prevent. But we might want
to support some redundant structure, i.e. something like 'genes' where
multiple agents contribute to a property, such that if some of them error
out they don't ruin the entire model.

If we think in terms of a code-base, consider each sample in the population
having two definitions for each word in the code-base. Each time we use a
word, we apply both definitions, and if one of them doesn't make sense we
discard it; if both make some sense, we combine the results in some simple
way. The 'words' in this case would be like genes, and we could easily
model sexual recombinations.
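
A sketch of the two-definitions-per-word idea (Haskell; the blending rule
here is just one arbitrary choice of mine):

  type Stack = [Int]
  type Def   = Stack -> Maybe Stack   -- Nothing: this allele errored out

  applyGene :: (Def, Def) -> Stack -> Maybe Stack
  applyGene (f, g) s = case (f s, g s) of
    (Just s1, Just s2) -> Just (zipWith avg s1 s2)  -- both ok: blend
    (Just s1, Nothing) -> Just s1                   -- drop failed allele
    (Nothing, Just s2) -> Just s2
    (Nothing, Nothing) -> Nothing
    where avg x y = (x + y) `div` 2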

Usefully, we could also model hierarchical 'stages' such that lower-level
words use primitives, but higher-level words use lower-level words. This
would allow us to scale upwards: codons, proteins, organelles, cells,
organs, etc.

Anyhow, there are a lot of directions we can take such things. Avoiding
type-errors isn't the main issue; I think keeping it simple and supporting
some redundancy is much more useful. But favoring a simpler programming
model - e.g. one with only integers, and where the only operation is to add
or compare them - might also help.



 many languages designed for genetic programming
 actually get rid of errors completely (eg. by skipping nonsensical
 instructions);


I see. If you want to avoid errors completely, it is always possible to
ensure consistent input and output types for each named 'gene' or 'codon',
while allowing many implementations. The lowest level genes or codons could
be automatically generated outside the normal genetic mechanism (using
brute-force logic to find instances of a type), and occasionally injected
into a few members of the population (to model mutations and such).

Best,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Final STEP progress report abandoned?

2013-09-05 Thread David Barbour
On Thu, Sep 5, 2013 at 5:35 AM, Chris Warburton
chriswa...@googlemail.com wrote:

 there can often be a semantic cost in trying to assign meaning

to arbitrary combinations of tokens. This can complicate the runtime
 (eg. using different stacks for different datatypes) and require
 arbitrary/ad-hoc rules and special-cases (eg. empty stacks).


The concatenative language I'm developing uses multiple stacks, but it's
about different stacks for different tasks. I think this works well
conceptually, when dealing with concurrent dataflows or workflows.



 I think this semantic cost is often not appreciated, since it's hidden
 in the running time rather than being immediately apparent like
 malformed programs are.


Eh, that isn't an issue, really. Creating strongly type-safe concatenative
languages (where types are fully inferred) isn't difficult. We can ensure
it is immediately apparent that programs are malformed without actually
running them.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Final STEP progress report abandoned?

2013-09-04 Thread David Barbour
Life is, in some ways, less messy than binary. At least less fragile. DNA
cannot encode absolute offsets, for example. Closer to associative memory.

In any case, we want to reach useful solutions quickly. Life doesn't evolve
at a scale commensurate with human patience, despite having vastly more
parallelism and memory. So we need to design systems more efficient, and
perhaps more specialized, than life.
On Sep 4, 2013 5:37 PM, Casey Ransberger casey.obrie...@gmail.com wrote:

 John, you're right. I have seen raw binary used as DNA and I left that
 out. This could be my own prejudice, but it seems like a messy way to do
 things. I suppose I want to limit what the animal can do by constraining it
 to some set of safe primitives. Maybe that's a silly thing to worry
 about, though. If we're going to grow software, I suppose maybe I should
 expect the process to be as messy as life is:)


 On Wed, Sep 4, 2013 at 4:06 PM, John Carlson yottz...@gmail.com wrote:

 I meant to say you could perform and record operations while the program
 was running.

 I think people have missed machine language as syntaxless.
 On Sep 4, 2013 4:17 PM, John Carlson yottz...@gmail.com wrote:


 On Sep 3, 2013 8:25 PM, Casey Ransberger casey.obrie...@gmail.com
 wrote:

  It yields a kind of syntaxlessness that's interesting.

 Our TWB/TE language was mostly syntaxless.  Instead, you performed
 operations on desktop objects that were recorded (like AppleScript, but
 with an iconic language).  You could even record while the program was
 running.  We had a tiny bit of syntax in our predicates, stuff like range
 and set notation.

 Can anyone describe Minecraft's syntax and semantics?


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc




 --
 CALIFORNIA
 H  U  M  A  N

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Final STEP progress report abandoned?

2013-09-03 Thread David Barbour
 what will computing be in a hundred years?

We'll have singularity - i.e. software and technology will be developed by
AIs. But there will also be a lot of corporate influence on which direction
that goes; there will likely be repeated conflicts regarding privacy,
ownership, computational rights, the issue of 'patents' and 'copyrights' in
a world with high-quality 3D printers, high quality scanners, and
AI-created technologies. As always, big companies with deep pockets will
hang on through legal actions, lobbying, lashing out at the people and
suppressing what some people will argue to be rights or freedoms.

Computing will be much more widespread. Sensors and interactive elements
will be ubiquitous in our environments, whether we like them or not.
(Already, a huge portion of the population carries a multi-purpose sensor
device... smartphone. Later, they'll be out of the pockets, on the heads,
active all the time.) Before singularity, we'll be able to program
on-the-fly, while walking around, using augmented reality, gestures or
words, even pen-and-paper [1]. After singularity, programming will be aided
heavily by AI even when we want to write our own. Mr. Clippy might have
more street smarts and degrees than you.

And, yeah, we'll have lots of video games. Procedural generation is already
a thing - creating worlds larger than any human could. With AI support, we
can actually create on-the-fly, creative content - e.g. like a team of
live dungeon masters dedicated to keeping the story interesting, and
keeping you on the border between addicted and terrified (or whatever
experience the game designer decides for you).

Best,

Dave

[1]
http://awelonblue.wordpress.com/2013/07/18/programming-with-augmented-reality/




On Tue, Sep 3, 2013 at 12:04 PM, karl ramberg karlramb...@gmail.com wrote:

 So what will computing be in a hundred years?
 Will we still painstakingly construct systems with a keyboard interface
 one letter at a time ?
 And what systems will we use ?  And for what ?
 Will we use computers for slashing virtual fruits and post images of our
 breakfast on Facebook version 1000,2 ?

 What are the future man using computers for ?

 Karl


 On Tue, Sep 3, 2013 at 2:01 PM, Alan Kay alan.n...@yahoo.com wrote:

 Hi Kevin

 At some point I'll gather enough brain cells to do the needed edits and
 get the report on the Viewpoints server.

 Dan Amelang is in the process of writing his thesis on Nile, and we will
 probably put Nile out in a more general form after that. (A nice project
 would be to do Nile in the Chrome Native Client to get a usable speedy
 and very compact graphics system for web based systems.)

 Yoshiki's K-Script has been experimentally implemented on top of
 Javascript, and we've been learning a lot about this variant of
 stream-based FRP as it is able to work within someone else's
 implementation of a language.

 A lot of work on the cooperating solvers part of STEPS is going on
 (this was an add-on that wasn't really in the scope of the original
 proposal).

 We are taking another pass at the interoperating alien modules problem
 that was part of the original proposal, but that we never really got around
 to trying to make progress on it.

 And, as has been our pattern in the past, we have often alternated
 end-user systems (especially including children) with the deep systems
 projects, and we are currently pondering this 50+ year old problem again.

 A fair amount of time is being put into problem finding (the basic idea
 is that initially trying to manifest visions of desirable future states
 is better than going directly into trying to state new goals -- good
 visions will often help problem finding which can then be the context for
 picking actual goals).

 And most of my time right now is being spent in extending environments
 for research.

 Cheers

 Alan


   --
  *From:* Kevin Driedger linuxbox+f...@gmail.com
 *To:* Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
 fonc@vpri.org
 *Sent:* Monday, September 2, 2013 2:41 PM
 *Subject:* Re: [fonc] Final STEP progress report abandoned?

 Alan,

 Can you give us any more details or direction on these research projects?


 ]{evin ])riedger


 On Mon, Sep 2, 2013 at 1:45 PM, Alan Kay alan.n...@yahoo.com wrote:

 Hi Dan

 It actually got written and given to NSF and approved, etc., a while ago,
 but needs a little more work before posting on the VPRI site.

 Meanwhile we've been consumed by setting up a number of additional, and
 wider scale, research projects, and this has occupied pretty much all of my
 time for the last 5-6 months.

 Cheers,

 Alan

   --
  *From:* Dan Melchione dm.f...@melchione.com
 *To:* fonc@vpri.org
 *Sent:* Monday, September 2, 2013 10:40 AM
 *Subject:* [fonc] Final STEP progress report abandoned?

 Haven't seen much regarding this for a while.  Has it been been abandoned
 or put at such low priority that it is effectively abandoned?

 

Re: [fonc] Final STEP progress report abandoned?

2013-09-03 Thread David Barbour
I doubt there will be a clear instant of "oh, this, just now, was
singularity". The ability even of a great AI to improve technologies is
limited by its ability to hypothesize and experiment, and understand
requirements. More likely, we'll see a lot of automated thinking
(constraint solvers, probabilistic models, weighted logics, genetic
programming) slowly take over aspects of different products and tasks.
Indeed, I'm already seeing this. What humans might call 'real AI' will
initially just be the human interfaces - the pieces that automate call
centers, or support interactive storytelling.

Singularity won't be instantaneous from the POV of the people living within
it. Though, it might seem that way from a future historian's perspective.

I've been fascinated by the progress in machine learning and deep learning
over just the last few years. If you haven't followed them, there have been
quite a few strides forward over the last six years or so, in part due to
new processing technologies (programmable GPUs, et al.) and in part due to
new ways of thinking about algorithms (not really 'new' but they take some
time to gain traction) - e.g. the more recent focus on deep learning, and
alternatives to backwards propagation such as using genetic programming to
set weights and connectivity in neural networks.

Regarding the language under-the-hood: If we want to automate software
development, we would gain a great deal of efficiency and robustness by
focusing on languages whose programs are easy to evaluate, and that will
(a) be meaningful/executable by construction, and (b) avoid redundant
meanings (aka full abstraction, or near enough). Even better if the
languages are good for exploration by genetic programming - i.e. easily
sliced, spliced, rearranged, mutated. I imagine a developer who favors such
languages would have an advantage over one who sticks with C.

Though, it might still compile to C.
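
As a minimal sketch of the "easily sliced and spliced" property (the toy
instruction set and names here are mine, purely illustrative): in a
concatenative language every token sequence is a valid program, so
crossover is just list splicing, with no repair step.

data Tok = Dup | Drop | Swap | Add | Mul | Lit Int deriving (Show)

-- Total semantics over an Int stack; underflow is a no-op, so every
-- candidate program runs to completion.
step :: [Int] -> Tok -> [Int]
step (x:xs)   Dup     = x : x : xs
step (_:xs)   Drop    = xs
step (x:y:xs) Swap    = y : x : xs
step (x:y:xs) Add     = (x + y) : xs
step (x:y:xs) Mul     = (x * y) : xs
step xs       (Lit n) = n : xs
step xs       _       = xs  -- stack underflow: ignore the operation

run :: [Tok] -> [Int] -> [Int]
run prog stack = foldl step stack prog

-- One-point crossover at token positions i and j: the offspring is
-- well-formed and executable by construction.
crossover :: Int -> Int -> [Tok] -> [Tok] -> [Tok]
crossover i j a b = take i a ++ drop j b

-- run (crossover 2 0 [Lit 2, Lit 3] [Add]) []  ==>  [5]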



On Tue, Sep 3, 2013 at 1:13 PM, Carl Gundel ca...@psychesystems.com wrote:

 We will have singularity and real AI?  We may indeed, or perhaps the last
 50 years will replay itself.  Progress in artificial intelligence has moved
 along at a fraction of expectations.


 I expect that there will be an incredible increase of eye candy, and when
 you strip it down to the bottom there will still be languages derived from
 Java, C, Python, BASIC, etc.


 -Carl


 *From:* fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] *On Behalf
 Of *David Barbour
 *Sent:* Tuesday, September 03, 2013 3:50 PM

 *To:* Fundamentals of New Computing
 *Subject:* Re: [fonc] Final STEP progress report abandoned?


  what will computing be in a hundred years? 


 We'll have singularity - i.e. software and technology will be developed by
 AIs. But there will also be a lot of corporate influence on which direction
 that goes; there will likely be repeated conflicts regarding privacy,
 ownership, computational rights, the issue of 'patents' and 'copyrights' in
 a world with high-quality 3D printers, high quality scanners, and
 AI-created technologies. As always, big companies with deep pockets will
 hang on through legal actions, lobbying, lashing out at the people and
 suppressing what some people will argue to be rights or freedoms. 


 Computing will be much more widespread. Sensors and interactive elements
 will be ubiquitous in our environments, whether we like them or not.
 (Already, a huge portion of the population carries a multi-purpose sensor
 device... smartphone. Later, they'll be out of the pockets, on the heads,
 active all the time.) Before singularity, we'll be able to program
 on-the-fly, while walking around, using augmented reality, gestures or
 words, even pen-and-paper [1]. After singularity, programming will be aided
 heavily by AI even when we want to write our own. Mr. Clippy might have
 more street smarts and degrees than you.


 And, yeah, we'll have lots of video games. Procedural generation is
 already a thing - creating worlds larger than any human could. With AI
 support, we can actually create on-the-fly, creative content - e.g. like a
 team of dungeon live masters dedicated to keeping the story interesting,
 and keeping you on the border between addicted and terrified (or whatever
 experience the game designer decides for you). 


 Best,


 Dave


 [1]
 http://awelonblue.wordpress.com/2013/07/18/programming-with-augmented-reality/
 


 On Tue, Sep 3, 2013 at 12:04 PM, karl ramberg karlramb...@gmail.com
 wrote:

 So what will computing be in a hundred years? 

 Will we still painstakingly construct systems with a keyboard interface
 one letter at a time?

 And what systems will we use?  And for what?

 Will we use computers for slashing virtual fruits and posting images of
 our breakfast on Facebook version 1000,2?


 What are the future man using computers

Re: [fonc] Final STEP progress report abandoned?

2013-09-03 Thread David Barbour
Factor would be another decent example of a concatenative language.

But I think arrowized programming models would work better. They aren't
limited to a stack, and instead can compute rich types that can be
evaluated as documents or diagrams. Further, they're really easy to model
in a concatenative language. Further, subprograms can interact through the
arrow's model - e.g. sharing data or constraints - thus operating like
agents in a multi-agent system; we could feasibly model 'chromosomes' in
terms of different agents.

I've recently (mid August) started developing a language that has these
properties: arrowized, strongly typed, concatenative, reactive. I'm already
using Prolog to find functions to help me bootstrap (it seems bootstrap
functions are not always the most intuitive :). I look forward to trying
some genetic programming, once I'm further along.

Best,

Dave


On Tue, Sep 3, 2013 at 4:45 PM, Brian Rice briantr...@gmail.com wrote:

 With Forth, you are probably reaching for the definition of a
 concatenative language like Joy.

 APL, J, K, etc. would also qualify.


 On Tue, Sep 3, 2013 at 4:43 PM, Casey Ransberger casey.obrie...@gmail.com
  wrote:

 I've heavily abridged your message David; sorry if I've dropped important
 context. My words below...

 On Sep 3, 2013, at 3:04 PM, David Barbour dmbarb...@gmail.com wrote:

  Even better if the languages are good for exploration by genetic
 programming - i.e. easily sliced, spliced, rearranged, mutated.

 I've only seen this done with two languages. Certainly it's possible in
 any language with the right semantic chops but so far it seems like we're
 looking at Lisp (et al) and FORTH.

 My observation has been that the main quality that yields this (ease of
 recombination? I don't even know what it is for sure) is syntaxlessness.

 I'd love to know about other languages and qualities of languages that
 are conducive to this sort of thing, especially if anyone has seen
 interesting work done with one of the logic languages.
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc




 --
 -Brian T. Rice

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Final STEP progress report abandoned?

2013-09-03 Thread David Barbour
Arrows are essentially a formalization of box-and-wire paradigms.

http://en.wikibooks.org/wiki/Haskell/Understanding_arrows

Arrows represent a rigid structure for dataflow, but are just expressive
enough for non-linear composition of subprograms (i.e. parallel pipelines
that branch and merge). One might consider this a bitter-sweet spot. For
some people, it's too rigid. Fortunately, we can add just a little more
flexibility:

1) runtime-configurable boxes/arrows, that might even take another
box/arrow as input
2) metaprogramming - components execute in earlier stage than the runtime
arrows

I support both, but metaprogramming is my preferred approach to
flexibility. Box-and-wire paradigms, even arrows, usually run into a
problem where they get unwieldy for a single human to construct - too much
wiring, too much tweaking, too much temptation to bypass the model (e.g.
using a database or tuple space) to integrate different subprograms because
we don't want wires all over the place. Metaprogramming overcomes those
limitations, and enables structured approaches to deep entanglement where
we need them. :)
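
For a concrete taste of the box-and-wire reading, here is a minimal
sketch using plain Haskell functions as the arrow (Control.Arrow ships
with GHC's base; the two "boxes" are my own toy examples):

import Control.Arrow ((>>>), (&&&))

celsius :: Double -> Double            -- box 1
celsius f = (f - 32) * 5 / 9

describe :: Double -> String           -- box 2
describe c = if c > 30 then "hot" else "mild"

-- Wiring: pipe into a branch that runs two pipelines in parallel and
-- merges their outputs into a pair.
report :: Double -> (Double, String)
report = celsius >>> (id &&& describe)

-- report 100  ==>  (37.77..., "hot")

The rigidity shows up as soon as the wiring becomes non-local: every
path must thread explicitly through combinators like these.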

Best,

Dave



On Tue, Sep 3, 2013 at 6:33 PM, Casey Ransberger
casey.obrie...@gmail.comwrote:

 Sorry, I've missed a beat somewhere. Arrowized? What's this bit with
 arrows?

 I saw the term arrow earlier and I think I've assumed that it was some
 slang for the FRP thing (if you think about it, that makes some sense.) But
 starting with intuitive assumptions is usually a bad plan, so I'd love some
 clarification if possible.


 On Sep 3, 2013, at 5:30 PM, David Barbour dmbarb...@gmail.com wrote:

 Factor would be another decent example of a concatenative language.

 But I think arrowized programming models would work better. They aren't
 limited to a stack, and instead can compute rich types that can be
 evaluated as documents or diagrams. Further, they're really easy to model
 in a concatenative language. Further, subprograms can interact through the
 arrow's model - e.g. sharing data or constraints - thus operating like
 agents in a multi-agent system; we could feasibly model 'chromosomes' in
 terms of different agents.

 I've recently (mid August) started developing a language that has these
 properties: arrowized, strongly typed, concatenative, reactive. I'm already
 using Prolog to find functions to help me bootstrap (it seems bootstrap
 functions are not always the most intuitive :). I look forward to trying
 some genetic programming, once I'm further along.

 Best,

 Dave


 On Tue, Sep 3, 2013 at 4:45 PM, Brian Rice briantr...@gmail.com wrote:

 With Forth, you are probably reaching for the definition of a
 concatenative language like Joy.

 APL, J, K, etc. would also qualify.


 On Tue, Sep 3, 2013 at 4:43 PM, Casey Ransberger 
 casey.obrie...@gmail.com wrote:

 I've heavily abridged your message David; sorry if I've dropped
 important context. My words below...

 On Sep 3, 2013, at 3:04 PM, David Barbour dmbarb...@gmail.com wrote:

  Even better if the languages are good for exploration by genetic
 programming - i.e. easily sliced, spliced, rearranged, mutated.

 I've only seen this done with two languages. Certainly it's possible in
 any language with the right semantic chops but so far it seems like we're
 looking at Lisp (et al) and FORTH.

 My observation has been that the main quality that yields this (ease of
 recombination? I don't even know what it is for sure) is syntaxlessness.

 I'd love to know about other languages and qualities of languages that
 are conducive to this sort of thing, especially if anyone has seen
 interesting work done with one of the logic languages.
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc




 --
 -Brian T. Rice

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Fwd: Programmer Models integrating Program and IDE

2013-08-30 Thread David Barbour
Finally got a look at the Sublime editor. Looks cool! I suppose a few of
those tricks were using special keyboard macros, though... didn't really
'see' what was happening there. But I had an impression of programming
macros by example, which would be really neat.

Lenses for common structures would be a fun library to write in my tacit
language anyway. I do have an advantage over Haskell, here, in that I'm
actually traversing a heterogeneous type (which may contain rationals and
text values) rather than a homogeneous tree data structure: affine
traversals are easier to enforce (if you operate on them, you must have
left some other type in their place).

One thing I'm curious about is whether the following set of (arrowized)
data plumbing operators is complete (i.e. no need for 'first'):

assocl :: (a * (b * c)) ~ ((a * b) * c)
swap :: (a * b) ~ (b * a)
intro1 :: a ~ (1 * a)  -- 1 is unit
elim1 :: (1 * a) ~ a
rot3 :: (a * (b * (c * d))) ~ (c * (a * (b * d)))

I haven't figured out a good way to express the proof. But these have
worked for every pure data-plumbing function I've tried writing (or prodded
Prolog into writing for me).  There are advantages of a block-free encoding
for the pure data plumbing... especially in a tacit language which needs to
first load the 'block' into the type.

I suppose a block-free encoding of a tree-zipper, traversals, and lenses
would be a pretty good proof. :)
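
For anyone who wants to poke at the completeness question, here is a
sketch of the five operators as plain Haskell functions on nested pairs
(with -> standing in for ~), plus one derived shuffle I checked by
hand, which at least shows that 'first'-free stack manipulation works:

assocl :: (a, (b, c)) -> ((a, b), c)
assocl (a, (b, c)) = ((a, b), c)

swap :: (a, b) -> (b, a)
swap (a, b) = (b, a)

intro1 :: a -> ((), a)
intro1 a = ((), a)

elim1 :: ((), a) -> a
elim1 ((), a) = a

rot3 :: (a, (b, (c, d))) -> (c, (a, (b, d)))
rot3 (a, (b, (c, d))) = (c, (a, (b, d)))

-- Derived: swap the top two elements of a stack whose remainder is r,
-- built purely from the five primitives (composition reads right to
-- left; each step can be verified by chasing the pair shapes).
rot2 :: (a, (b, r)) -> (b, (a, r))
rot2 = swap . elim1 . swap . assocl . swap . assocl . rot3 . intro1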

Thanks for the advice, Michael.

On Thu, Aug 29, 2013 at 10:45 PM, Michael Sloan mgsl...@gmail.com wrote:

 Substructural type systems do seem to go hand in hand with the lens
 menagerie.  Speaking quite informally about quite formal topics:

 * Affine types seem to have something to do with traversals, as it
 violates traversal laws to visit something multiple times.  Of course, this
 is unchecked in its usage in Haskell.

 * Linear types seem to have something to do with isomorphisms.  If you
 show that all inputs / output possibilities are handled (like GHC's pattern
 match checker), then unless a variable has only one or zero possible
 values, it must be used (relevant types).  If it's only used once (linear),
 and the only functions used are isomorphisms, then you know the whole thing
 has to be an isomorphism.

 I'm doubting the deepness of this one, because you could certainly
 construct a valid isomorphism where values are not used in a linear
 fashion.  Still, it's nice that there's a large set of things that are
 straightforwardly isomorphic.

 * Prisms are somewhere between what's called an affine traversal and an
 isomorphism.  They're functions that can fail in one direction but not the
 other.  If it's not failing in one direction, then running it in the other
 must yield the input.  A good example of this is if you have a parser that
 preserves all syntax info - locations, comments, whitespace, etc.  The
 parser could fail, but if it doesn't, then the printer ought to take you
 back to the exact input.
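
A sketch of that parse/print prism with the lens package (readMaybe and
show stand in for a real parser and printer):

import Control.Lens (Prism', prism', preview, review)
import Text.Read (readMaybe)

-- Can fail in one direction (parsing), total in the other (printing).
intText :: Prism' String Int
intText = prism' show readMaybe

-- preview intText "42"  ==>  Just 42    (parse)
-- review  intText 42    ==>  "42"       (print)
-- Law: a successful print re-parses to the same value,
--   preview intText (review intText n) == Just n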

 Overall, lens seems like an excellent place to draw inspiration for a DSL
 that is in some ways very imperative (focused on mutation), while also
 playing nicely with the type system and providing nice abstractions for
 thinking about the properties of your code.



 On Thu, Aug 29, 2013 at 9:42 PM, David Barbour dmbarb...@gmail.comwrote:

 Thanks for the refs. I hadn't heard of multi-focus zippers. I'll give
 modeling those a try soon. Even if just for curiosity. I've used traversals
 but I could certainly use a refresher there.

 Lenses might also be worth modeling, if I can do so within the limits of
 not knowing which types might be linear. Probably not a problem. :)
 On Aug 29, 2013 8:58 PM, Michael Sloan mgsl...@gmail.com wrote:

 My current preferred text editor nicely supports multiple cursors:
 https://www.youtube.com/watch?v=E9QYlmvRVRQ

 It's extremely convenient because it's like having some of the power of
 macros without the necessity of premeditation.  In a sense, it's the
 speculative / live version of macros.  Now, sublime's implementation of
 this isn't perfect, but it's functional enough to obviate macros for my
 text editing needs.  My ideal multiple cursors implementation would also
 attempt to make all cursors visible by omitting intermediary lines when
 necessary.  Of course, this won't manage all cases - but it would be very
 cool to be able to edit a bunch of files at once, seeing all of the
 affected code.

 Of course, it's also possible with emacs:
 https://github.com/emacsmirror/multiple-cursors


 As for tree zippers and multiple selections, it turns out that higher
 order (higher derivative) zippers introduce multiple focuses:
 http://blog.poucet.org/2007/07/higher-order-zippers/

 I tried to build a minimal text editor on this idea, and it got very
 clumsy, so I'm not really recommending this approach beyond it being a
 curiosity.  A single zipper was of course beautiful for text editing, but
 when you got into the real world of having selections

Re: [fonc] Fwd: Programmer Models integrating Program and IDE

2013-08-30 Thread David Barbour
On Fri, Aug 30, 2013 at 4:40 AM, Ian Piumarta i...@vpri.org wrote:

 
  I'm doubting the deepness of this one, because you could certainly
 construct a valid isomorphism where values are not used in a linear fashion.

 Your second statement says only that implication is not commutative.


I had misread this as: values are not used (i.e. they are not consumed or
used at all) in a linear fashion. Seemed trivially true to me. :)

If Michael was saying that values are used in a non-linear fashion, yeah
that could be an issue for an isomorphism. Fortunately my not used
interpretation is vastly more relevant for lenses and zippers for
navigating an ad-hoc heterogeneously typed environment. The usual case one
would navigate to some part of the environment and perform a very specific
operation, then return (which is basically what lenses do: there and back
again!)

*Hypothesis:* Full model transforms will disrupt the developers' intuitions
of location and inertia. Humans will favor homeomorphism - i.e. an
isomorphism that is additionally constrained to protect continuity in the
small and modularity in the large.

cf. http://awelonblue.wordpress.com/2012/01/03/isomorphism-is-not-enough/




 Given the luxury of restricting your system to linear functions preserving
 the isomorphism you care about, then your first statement has deep
 consequences.  For example:
 http://home.pipeline.com/~hbaker1/LinearLisp.html


LinearLisp doesn't seem to have any real linear types. It provides COPY and
FREE operations universally. Consequently, it is not really restricted to
linear functions. Tracking copies precisely (so there is no sharing) is
useful for memory management... but if we want to model something like
linear call/cc, or prevent file handles from being copied, it seems we'd be
out of luck.

But, yeah, my language does require explicit (and type-safe) copy/drop. In
my language it's more logical (I can share structure if I want), since
values aren't stateful. But there is no need for GC. It's very nice. :)
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Programmer Models integrating Program and IDE

2013-08-29 Thread David Barbour
On Wed, Aug 28, 2013 at 5:57 PM, John Carlson yottz...@gmail.com wrote:

 Multi-threaded Object-Oriented Stack Environment ... MOOSE for short.


Would you mind pointing me to some documentation? I found your document on
A Visual Language for Data Mapping but it doesn't discuss MOOSE. From the
intro thread, my best guess is that you added objects to and arrays to your
RPN language? But I'm not sure how the multi-threading is involved.


 Also check out VIPR from Wayne Citrin and friends at UC Boulder. Also
 check out AgentSheets, AgentCubes and XMLisp while you are at it.  Not far
 from SimCity and friends. Also looking at videos from unreal kismet may be
 helpful if you haven't already seen them.


I've now checked these out.  I am curious what led you to recommend them.

To clarify, my interest in visual programming is about finding a way to
unify HCI with programming and vice versa. To make the 'programmer-model' a
formal part of the 'program' is, I now believe, the most promising step in
that direction after live programming. As I described (but did not clarify)
this enables the IDE to be very thin, primarily a way of rendering a
program and extending it. The bulk of the logic of the IDE, potentially
even the menu systems, is shifted into the program itself.

(While I am interested in game development, my mention of it was intended
more as a declaration of expressiveness than a purpose.)

Croquet - with its pervasively hackable user environment - is much closer
to what I'm looking for than AgentCubes. But even Croquet still has a
strong separation between 'interacting with objects' and 'programming'.

Other impressions:

VIPR - Visual Imperative PRogramming - seems to be exploring visual
representations. I was confused that they did not address acquisition or
assignment of data - those would be the most important edges in data-flow
systems. But I guess VIPR is more a control-flow model than a data-flow.
One good point, made repeatedly in the VIPR papers, is that we need to avoid
edges because they create complexity that is difficult to comprehend,
especially as we zoom away from the graph.

I do like that Kismet is making reactive computation accessible and useful
to a couple million people.




 I think you should replace stack with collection


I could model a number of different collections, within the limit that
each be constructed of products (pairs) to fit the arrowized semantics
(http://en.wikipedia.org/wiki/Arrow_(computer_science)). So far I've
modeled:

* one stack (operate only near top - take, put, roll; no navigation)
* list zippers (navigational interface in one dimension: stepLeft,
stepRight)
* tree zipper (two-dimensional navigation in a tree; up, down, left, right)
* list zipper of stacks (stepLeft, stepRight, take, put, roll)
* named stacks via metaprogramming (ad-hoc navigation: foo goto)

The tree-zipper is the most expressive I can achieve without
metaprogramming.
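
As a sketch, the list-zipper shape renders into plain Haskell pairs
like this (names mine; the typed environment encodes the same idea,
with signals mixed in):

-- A reversed left context paired with the focus and everything to its
-- right; one-step navigation is O(1).
type Zipper a = ([a], [a])

stepRight :: Zipper a -> Zipper a
stepRight (ls, x:rs) = (x : ls, rs)
stepRight z          = z            -- at the right edge: no-op

stepLeft :: Zipper a -> Zipper a
stepLeft (x:ls, rs) = (ls, x : rs)
stepLeft z          = z             -- at the left edge: no-op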

The more expressive collections, however, are not necessarily good. After
building the tree zipper, I couldn't figure out how I wanted to use it.
Same for the list zipper, though the 'hand' concept serves a similar role
(take and put instead of stepLeft and stepRight). For a list of anonymous
stacks: I tend to stick around on one stack for a while, and forget the
relative positions of other stacks. That's why I eventually went for named
stacks.



 Have you considered controlling stacks, program counters and iterators
 from the same basic metaphor? We used recorder buttons. Forward, Reverse,
 Stop, Fast Forward, and Fast Reverse.  Then undo (delete previous
 operation) and delete next operation. [..] You'd probably want to add copy
 and paste as well. [..] Along with the recorder metaphor we added
 breakpoints which worked travelling in either direction in the code.


My language doesn't have runtime stacks, program counters, or iterators.
But I've mentioned viewing and animating parts of the compile-time history.




I know you can make a recipe maker with a recipe,  but who decides what a
 recipe makes?


Another recipe maker; you need to bootstrap.



Can you make more than one type of thing at the same time?  Can a human
 make more than one type of thing at the same time?  Or a robot?


Living humans are always making more than one type of thing at a time. I
mean, unless you discount 'perspiration' and 'CO2' and 'heat' and 'sound'
and a bunch of other products I don't care to mention. I imagine the same
could be said for robots.  ;)

Humans can do a lot once it's shifted into their subconscious thoughts. But
their eye focus is about the size of a dime at arms length, and they aren't
very good at consciously focusing on more than one problem at a time.
Robots, however, are only limited by their mobility, sensors, actuators,
processors, programming, and resources. Okay... that's a lot of limits. But
if you had the funds and the time and the skills, you could build a robot
that can make more than one thing at a time.

Re: [fonc] Interaction Design for Languages

2013-08-29 Thread David Barbour
Thanks for responding, Sean. But I hope you provide your own ideas and
concepts, also, rather than just reacting to mine. :)

On Thu, Aug 29, 2013 at 6:23 PM, Sean McDirmid smcd...@microsoft.comwrote:

  My response:


 1) Formal justification of human behavior is a lofty goal, especially with
 today’s technology. We know how to empirically measure simple reflexes (say
 Fitts’ or Hick’s law), but anything complex gets pummeled by noise in the
 form of bias and diversity. And how do those simple processes compose into
 cognition? Focusing just on what can be will lead to very LCD (lowest
 common denominator) designs. And when we do finally figure it out, I’m
 afraid we’ll be at the point of singularity anyway when we’ve learned how
 to design something better than us (and they’ll have no problem
 programming!).


Well, my goal isn't to justify human behavior in the wild, but rather to
help guide it:

* I provide a basis for formally justifiable intuitions
* I provide an environment that supports exploratory programming and
learning
* I provide type-systems, design patterns, idioms that guide thoughts in
certain terms
* The valid intuitions will develop under these conditions.

Even more so if I accomplish my goal at integrating programming with HCI.
Under those conditions, the intuitions can have a very long time to
develop, many years as children grow to adolescence and adulthood. The
implicit understanding of composition, reactivity, authority, process
control, etc. will transfer from their user experience to their programming
experience.

I believe a very high LCD can be achieved. Compositional properties can
address: robustness and resilience, consistency and
inconsistency-tolerance, process and resource control, live update and
orthogonal persistence, security and safety, roles and responsibilities.
One area I have not been able to address is correctness. Actually, I'm
pretty sure that correctness cannot ever be addressed in a composable
manner, due to its highly context-sensitive nature.

I think even AIs will benefit from compositional properties. The thing is,
it won't be one big AI that has everyone's best interests at heart; I
believe it will be a million AIs... and making them integrate will still
require solving all these issues.


 


 I think enforcing rigid philosophies in languages is useful (languages
 shape thinking) but also comes at a cost, since you then alienate code (and
 their programmers) who cannot live up to your strict standards. The use
 cases for linear types, for example, is important but fairly niche; most
 programmers need flexibility over guarantees.

The perception that 'flexibility' and 'guarantees' are somehow in conflict
is an incorrect one, but is unfortunately widespread. I suppose a fear is
that the API makers will constrain the API users more than strictly
necessary. In practice, these features - fine-grained, formal, and precise
- result in greater flexibility and lower impedance, i.e. since the API
designers don't resort to gatekeeper patterns or DSL interpreters or other
overbearing mechanisms to protect their contracts or invariants.

Basically, without such guarantees, programmers are often forced to be
flexible in private, on their own turf. :(

As an example: call/cc patterns offer a lot of flexibility compared to the
traditional call-return paradigm. There are many use-cases where something
like call/cc would be convenient, e.g. for 'port based objects' - a design
pattern where framework code is interwoven with multiple functions in
client-object code. But the problem is that a current continuation might be
forgotten or called twice, which may wreak havoc on the framework's
invariants. If we could flag the cc as linear, we could address this, and
gain the flexibility of the cc - such as simpler threading or ability to
temporarily return to a different framework than the one that called us.

There are quite a few 'flexible' patterns - both for dataflow and
control-flow - that become safer and thus easier to express and integrate
in the presence of linear types. The weaker variations on linear types -
affine types (use at most once) and relevant types (use at least once) -
are also useful for various purposes. I especially like affine types to
control fan-in; e.g. I can create a bag of affine types like 'tickets' to a
limited resource - big enough to be flexible, small enough to keep
implementations simple (no 'zero one infinity' nonsense).
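
For a taste of what this checking looks like, here is a sketch using
GHC's LinearTypes extension (GHC 9.0+) - not David's system, but it
enforces the same "consume exactly once" discipline:

{-# LANGUAGE LinearTypes #-}
module LinearSketch where

-- The (%1 ->) arrow says the argument is consumed exactly once.
-- Accepted: each component of the pair is used exactly once.
swapL :: (a, b) %1 -> (b, a)
swapL (a, b) = (b, a)

-- Rejected by the compiler (left as a comment so the module builds):
-- dupL :: a %1 -> (a, a)
-- dupL x = (x, x)   -- error: 'x' is used more than once

-- A one-shot continuation would carry a type like  k :: K %1 -> IO (),
-- turning "forgot to call it" and "called it twice" into type errors.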

[next e-mail!]


 Accidents are merely unexpected innovations. Not everything good and
 useful will fall out of the principles we know about or follow. I will
 throw in one more essential principle that should act as a last-resort
 default “worse is better;” aka the New Jersey principle J. Sometimes we
 don’t know the “right” answer, but just putting in a plausible answer is
 better than pondering forever what the right answer would be. Sometimes we
 just “don’t know” and need more experience to 

Re: [fonc] Fwd: Programmer Models integrating Program and IDE

2013-08-29 Thread David Barbour
Thanks for clarifying. I can think of a few ways to model such cursors, but
I think this use-case would generally not fit Awelon's model.

* if we have 'reactive' text, we'll generally be limited to atomic text
signals (the small time-varying chunks of text) and whatever
post-processing we choose to perform.
* with static text, we have features similar to those for reactive text...
but there are stability concerns across updates (e.g. if we later have a
multi-language app, the text would change; how should the selection and
copy change?). So in general, we'd be operating on atomic chunks of
static text.
* I can, of course, create stateful copies of the text-as-it-is. But this
would essentially be external to Awelon's type system.




On Thu, Aug 29, 2013 at 7:10 PM, John Carlson yottz...@gmail.com wrote:

 Multiple cursors...like having one cursor at the beginning of a line and
 moving that cursor to the next line etc.   Then if the doc was a flat file,
 have two cursors which are at columns 5 and 9 relative to the first cursor
 and row 0 relative to the first cursor--this defines a selection which can
 be copied.  Then write a loop that processes every row, essentially moving
 the selection down the column of text.  You could have multiple selected
 text regions in the document.
 On Aug 29, 2013 5:47 PM, David Barbour dmbarb...@gmail.com wrote:

 [fwd to fonc]

 Use of tree zippers to model multi-media documents in the type system is
 an interesting possibility. It seems obvious in hindsight, but I had been
 focusing on other problem spaces.

 Hmm. I wonder if it might be intuitive to place the doc as an object on
 the stack, then use the stack for the up/down (inclusion/extrusion) zipper
 ops, allowing ops on sub-docs, as opposed to always keeping the full tree
 as the top stack item. OTOH, either approach would be limited to one
 cursor.

 What are you envisioning when you say multiple cursors? I can't think
 how to do that without picking the doc apart and essentially modeling
 hyperlinks (I.e. putting different divs on different named stacks so I can
 have a different cursor in each div, then using a logical href to docs on
 other stacks). This might or might not fit what you're imagining.

 (I can easily model full multi-stack environments as first-class types.
 This might also be a favorable approach to representing docs.)

 Model transform by example sounds like something this design could be
 very good for. Actually, I was imagining some of Bret Victor's drawing
 examples (where it builds a procedure) would also be a good fit.

 My language has a name: Awelon.  But thanks for offering the name of your
 old project. :)


 On Aug 29, 2013 2:11 PM, John Carlson yottz...@gmail.com wrote:

 I was suggesting MOOSE as a working name for your project.

 I used to keep a list of features for MOOSE that I wanted to develop.
 MOOSE (future) was the next step beyond TWB/TE (now) that never got
 funded.  TWB was single threaded for the most part.  I have done some work
 on creating multiple recorder desktop objects. MOOSE would have had a way
 to create new desktop objects as types, instead of creating them in C++.
 There would have been way to create aggregate desktop objects, either as
 lists or maps.  I would have provided better navigation for Forms, which
 are essentially used for XML  and EDI/X12.  One thing I recall wanting to
 add was some kind of parser for desktop objects in addition to text file
 parsers and C++ persistent object parsers.

 RPN was only for the calculator.  The other stack that we had was the
 undo stack for reversible debugging.

 I believe an extension to VIPR was to add object visualization to the
 pipeline.  The reason I pointed you at VIPR is that the programmer model is
 similar to ours.

 I found that document was the best implementation I had of a tree
 zipper.  You could focus the activity anywhere in the document.  I tried to
 do form as a tree zipper, but limited movement made it difficult to use.  I
 ruined a demo by focusing on the form too much.  At one point, I could kind
 of drag the icon on the form to the document and produce a text document
 from the form (and vica versa).  I think I also worked on dragging the
 recorder icon to the document.   This would have converted the iconic
 representation to the C++ representation.

 All the MOOSE extensions after TWB/TE left production were rather
 experimental in nature.

 I suggest you might use a multimedia document as the visualization of
 your tree zipper.  Then have multiple cursors which might rely on each
 other to manipulate the tree.

 Check out end-user programming and model transformation by demonstration
 for more recent ideas.
  On Aug 29, 2013 2:37 AM, David Barbour dmbarb...@gmail.com wrote:


 On Wed, Aug 28, 2013 at 5:57 PM, John Carlson yottz...@gmail.comwrote:

 Multi-threaded Object-Oriented Stack Environment ... MOOSE for short.


 Would you mind pointing me to some documentation? I found your document
 on A Visual

Re: [fonc] Fwd: Programmer Models integrating Program and IDE

2013-08-29 Thread David Barbour
Thanks for the refs. I hadn't heard of multi-focus zippers. I'll give
modeling those a try soon. Even if just for curiosity. I've used traversals
but I could certainly use a refresher there.

Lenses might also be worth modeling, if I can do so within the limits of
not knowing which types might be linear. Probably not a problem. :)
On Aug 29, 2013 8:58 PM, Michael Sloan mgsl...@gmail.com wrote:

 My current preferred text editor nicely supports multiple cursors:
 https://www.youtube.com/watch?v=E9QYlmvRVRQ

 It's extremely convenient because it's like having some of the power of
 macros without the necessity of premeditation.  In a sense, it's the
 speculative / live version of macros.  Now, sublime's implementation of
 this isn't perfect, but it's functional enough to obviate macros for my
 text editing needs.  My ideal multiple cursors implementation would also
 attempt to make all cursors visible by omitting intermediary lines when
 necessary.  Of course, this won't manage all cases - but it would be very
 cool to be able to edit a bunch of files at once, seeing all of the
 affected code.

 Of course, it's also possible with emacs:
 https://github.com/emacsmirror/multiple-cursors


 As for tree zippers and multiple selections, it turns out that higher
 order (higher derivative) zippers introduce multiple focuses:
 http://blog.poucet.org/2007/07/higher-order-zippers/

 I tried to build a minimal text editor on this idea, and it got very
 clumsy, so I'm not really recommending this approach beyond it being a
 curiosity.  A single zipper was of course beautiful for text editing, but
 when you got into the real world of having selections and such things got
 pretty hairy.

 Have you seen the lens library's zippers?  Not only does it provide
 generic zippers as a library, but it also provides for something like
 having many focuses by being able to move along a level within the
 tree.  Whereas lenses can refer to one component and be used to modify it,
 traversals refer to multiple targets and allow you to modify them.  Levels
 come from descending into the tree using different traversals


 http://hackage.haskell.org/packages/archive/lens/latest/doc/html/Control-Lens-Zipper.html

 The main property of these levels, a result of the traversal laws, is that
 the different focuses cannot contain each other.  Otherwise, there would be
 no reasonable way to mutate them concurrently.  Seems very related to your
 stack of stacks ideas.
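
A tiny illustration of the "many focuses, edited together" idea with
the lens package (Prelude's traverse already is a Traversal in the
lens encoding, so no extra definitions are needed):

import Control.Lens (over, toListOf)

-- toListOf traverse [1,2,3]   ==>  [1,2,3]   (read every focus)
-- over traverse (*2) [1,2,3]  ==>  [2,4,6]   (edit every focus)
demo :: [Int]
demo = over traverse (*2) [1, 2, 3]

-- The traversal laws keep the focuses disjoint, which is what makes
-- the simultaneous edit well-defined.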


 On Thu, Aug 29, 2013 at 8:36 PM, David Barbour dmbarb...@gmail.comwrote:

 Thanks for clarifying. I can think of a few ways to model such cursors,
 but I think this use-case would generally not fit Awelon's model.

 * if we have 'reactive' text, we'll generally be limited to atomic text
 signals (the small time-varying chunks of text) and whatever
 post-processing we choose to perform.
 * with static text, we have features similar to those for reactive
 text... but there are stability concerns across updates (e.g. if we later
 have a multi-language app, the text would change; how should the selection
 and copy change?). So in general, we'd be operating on atomic chunks of
 static text.
 * I can, of course, create stateful copies of the text-as-it-is. But this
 would essentially be external to Awelon's type system.


 Given your concern with stability / consistency / commutativity, CRDTs and
 their ilk might be interesting:
 http://hal.upmc.fr/docs/00/55/55/88/PDF/techreport.pdf

 They have nice properties for collaborative editing, and could help make
 it efficient to know what derived information needs to be recomputed.

 -Michael




 On Thu, Aug 29, 2013 at 7:10 PM, John Carlson yottz...@gmail.com wrote:

 Multiple cursors...like having one cursor at the beginning of a line and
 moving that cursor to the next line etc.   Then if the doc was a flat file,
 have two cursors which are at columns 5 and 9 relative to the first cursor
 and row 0 relative to the first cursor--this defines a selection which can
 be copied.  Then write a loop that processes every row, essentially moving
 the selection down the column of text.  You could have multiple selected
 text regions in the document.
  On Aug 29, 2013 5:47 PM, David Barbour dmbarb...@gmail.com wrote:

 [fwd to fonc]

 Use of tree zippers to model multi-media documents in the type system
 is an interesting possibility. It seems obvious in hindsight, but I had
 been focusing on other problem spaces.

 Hmm. I wonder if it might be intuitive to place the doc as an object on
 the stack, then use the stack for the up/down (inclusion/extrusion) zipper
 ops, allowing ops on sub-docs, as opposed to always keeping the full tree
 as the top stack item. OTOH, either approach would be limited to one
 cursor.

 What are you envisioning when you say multiple cursors? I can't think
 how to do that without picking the doc apart and essentially modeling
 hyperlinks (I.e. putting different divs on different named stacks so I can
 have a different cursor in each div

[fonc] Fwd: Interaction Design for Languages

2013-08-29 Thread David Barbour
[fwd to fonc]

When I speak of merging programming and HCI, I refer more to the experience
than the communities. I'm fine with creating a new HCI experience - perhaps
one supporting ubiquitous computing, augmented reality.

I can't say my principles are right! Just that they are, and that I
believe they're useful.

If you don't like call/cc, I could offer similar stories about
futures/promises (popular in some systems), or adaptive pipelines that
adjust to downstream preferences (my original use case).

There are actually several ways to compose predictive/learning systems. The
simple compositions: they can be chained, hierarchical, or run in parallel
(combining independent estimates). More complicated composition: use
speculative evaluation to feed predicted behaviors and sensory inputs back
into the model. (Speculation is also useful in on-the-fly planning loops
and constraint models.)

I think plodding along in search of new knowledge is fine. But my
impression is that many PL projects are ignoring old research, plodding the
old ground, making old mistakes. We can't know everything before we begin,
but we don't need to. Just know what has succeeded or failed before, and
have a hypothesis for why.
On Aug 29, 2013 8:50 PM, Sean McDirmid smcd...@microsoft.com wrote:

  My own ideas are quite mundane on this subject since I take design at
 face value. It is not something that can be formally validated, and is more
 of an experiential practice. I disagree that formally “right” answers are
 even possible at this point, and would have a hard time arguing with
 someone who did given that we would be talking past each other with two
 very different world views! 


 Also, this might be interesting: HCI started out as a way of figuring out
 how humans programmed computers, since there was very little interacting
 with computers otherwise. The visual programming field was then heavily
 intertwined with HCI in the 90s, before HCI broke off into its current
 CHI/UIST form today. But I have to wonder: is merging HCI and programming
 worth it? I say this, because I don’t believe the HCI community is a very
 good role model; their techniques can be incredibly dicey. They don’t have
 a formal notion of right, of course, but even their informal methodologies
 are often contrived and not very useful (CHI has lots of papers, there are
 a few good ones that do well enough in their arguments). 


 Composition is quite thorny outside of PL. There is no way to compose
 neural networks; most machine learning techniques result in models that are
 neither composable nor decomposable! There is a fundamental limitation here
 that we in math/PL haven’t had to deal with yet. But this limitation arises
 in nature: we can often decompose systems logically in our head (e.g. the
 various biological systems of an organism), but we can’t really compose
 them…they just arise organically. 


 I don’t think call/cc is a good example of flexibility vs. guarantees. Not
 many programmers use it directly, and those that do are very disciplined
 about it. I would love to see someone push linear types just to see how far
 they can go, and if they promote rather than limit flexibility like you
 claim. I’m suspecting not, but would be happy to be wrong. 


 “worse is better” is very agile and prevents us from freezing up when our
 principles, intuitions, and formal theories fail to provide us with
 answers. Basically, we are arguing about how much we know and can know. My
 opinion is that there are just so many things we don’t know very well, and
 we’ll have to plod along in the pursuit of results as well as knowledge.
 This contrasts with the waterfall method, which assumes we can know
 everything before we begin. These different philosophies even surface in PL
 designs (e.g. Python vs. Haskell). 


 *From:* augmented-programm...@googlegroups.com [mailto:
 augmented-programm...@googlegroups.com] *On Behalf Of *David Barbour
 *Sent:* Friday, August 30, 2013 11:20 AM
 *To:* augmented-programm...@googlegroups.com
 *Cc:* Fundamentals of New Computing
 *Subject:* Re: Interaction Design for Languages


 Thanks for responding, Sean. But I hope you provide your own ideas and
 concepts, also, rather than just reacting to mine. :) 


 On Thu, Aug 29, 2013 at 6:23 PM, Sean McDirmid smcd...@microsoft.com
 wrote:

  My response:

  

 1) Formal justification of human behavior is a lofty goal, especially with
 today’s technology. We know how to empirically measure simple reflexes (say
 Fitts’ or Hick’s law), but anything complex gets pummeled by noise in the
 form of bias and diversity. And how do those simple processes compose into
 cognition? Focusing just on what can be will lead to very LCD (lowest
 common denominator) designs. And when we do finally figure it out, I’m
 afraid we’ll be at the point of singularity anyway when we’ve learned how
 to design something better

[fonc] Programmer Models integrating Program and IDE

2013-08-28 Thread David Barbour
I understand 'user modeling' [1] to broadly address long-term details (e.g.
user preferences and settings), mid-term details (goals, tasks, workflow),
and short-term details (focus, attention, clipboards and cursors,
conversational context, history). The unifying principle is that we have
more context to make smart decisions, to make systems behave in ways their
users expect. This is a form of context sensitivity, where the user is
explicitly part of the context.

Programming can be understood as a form of user interface. But,
historically, user modeling (in this case 'programmer modeling') has been
kept carefully separate from the program itself; it is instead part of an
Integrated Development Environment (IDE).

*Hypothesis:* the separation of user-model from program has hindered both
programmers and the art of programming. There are several reasons for this:

1) Our IDEs are not sufficiently smart. The context IDEs keep is heuristic,
fragile, and can be trusted with only the simplest of tasks.
2) Poor integration with the IDE and visual environments: it is difficult
to assign formal meaning to gestures and programmer actions.
3) Programmer-layer goals, tasks, and workflows are generally opaque to the
IDE, the programs and the type system.
4) Our code must be explicit and verbose about many interactions that could
be implicit if we tracked user context.
5) Programmers cannot easily adjust their environment or language to know
what they mean, and act as they expect.

I believe we can do much better. I'll next provide a little background
about how this belief came to be, then what I'm envisioning.

*Background*

Recently, I started developing a tacit representation for an arrowized
reactive programming model. Arrows provide a relatively rigid 'structure'
to the program. In the tacit representation, this structure was represented
as a stack consisting of a mix of compile-time values (text, numbers,
blocks) and runtime signals (e.g. mouse position). Essentially, I can give
the stack a 'static type', but I still used FORTH-like idioms to roll and
pick items from the stack as though it were a dynamic structure. With
just a little static introspection, I could even model `7 pick` as copying
the seventh element of the stack to the top of the stack.

But I didn't like this single stack environment. It felt cramped.

Often, I desire to decompose a problem into multiple concurrent tasks or
workflows. And when I do so, I must occasionally integrate intermediate
results, which can involve some complex scattering and gathering
operations. On a single stack, this integration is terribly painful: it
involves rolling or copying intermediate signals and values upwards or
downwards, with relative positions that are difficult to remember.
Conclusion: a single stack is good only for a single, sequential task - a
single pipeline, in a dataflow model.

But then I realized: I'm not limited to modeling a stack. A stack is just
one possible way of organizing and conceptualizing the 'type' of the arrow.
I can model any environment I please! (I'm being serious. With the same
level of introspection needed for `7 pick`, I could model a MUD, MOO, or
interactive fiction in the type system.) After experimenting with tree
zippers [2] or a list of anonymous stacks [3], I'm kind of (hopefully)
settling on an easy-to-use environment [4] that consists of:

* current stack
* hand
* current stack name
* list of named stacks

The current stack serves the traditional role. The 'hand' enables
developers to 'take' and 'put' objects (and juggle a few of them, like
'roll' except for the hand) - it's really convenient even for operating on
a single stack, and also helps carry items between stacks (implicit data
plumbing). The list of named stacks is achieved using compile-time
introspection (~type matching for different stack names) and is very
flexible:

* different stacks for different tasks; ability to navigate to a different
stack (goto)
* programmers can 'load' and 'store' from a stack remotely (treat it like a
variable or register)
* programmers can use named stacks to record preferences and configuration
options
* programmers can use named stacks to store dynamic libraries of code (as
blocks)
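
A rough sketch of that environment's shape as ordinary Haskell
declarations (my rendering, not Awelon's; in the real design this
structure lives in the arrow's static type, not in runtime data):

data Item
  = Number Rational   -- compile-time number
  | Text String       -- compile-time text
  | Block [Op]        -- quoted code, e.g. a dynamic library entry

data Op = Take | Put | Roll Int | Goto String   -- illustrative ops

type Stack = [Item]

data Env = Env
  { current :: Stack             -- the task being worked on
  , hand    :: Stack             -- items carried between stacks
  , curName :: String            -- name of the current stack
  , others  :: [(String, Stack)] -- named stacks: tasks, registers,
  }                              --   preferences, code libraries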

As I developed this rich environment, it occurred to me that I had
essentially integrated a user-model with the program itself. Actually, my
first thought was closer to hey, I'm modeling a character in a game! Go go
Data Plumber! The programmer is manipulating an avatar, navigating from
task to task and stack to stack. The programmer has items in hand, plus a
potential inventory (e.g. an inventory stack). To push metaphors a bit: I
can model keyrings full of sealer/unsealer pairs, locked rooms with sealed
values, unique 'artifacts' and 'puzzles' in the form of affine and relevant
types [5], quest goals in the form of fractional types (representing
futures/promises) [6], and 'spellbooks' in the form of static capabilities
[7]. But in 

Re: [fonc] Deoptimization as fallback

2013-07-31 Thread David Barbour
When we speak of separating meaning from optimization, I get the impression
we want to automate the optimization. In that case, we should validate the
optimizer(s). But you seem to be assuming hand-optimized code with a
(simplified) reference implementation. That's a pretty good pattern for
validation and debugging, and I've seen it used several times (most
recently in a library called 'reactive banana', and most systematically in
Goguen's BOBJ). But I've never seen it used as a fallback. (I have heard of
cases of systematically running multiple implementations and comparing them
for agreement, e.g. with Airbus.)
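
A sketch of that validation pattern using QuickCheck (here base's sort
plays the "optimized" implementation and refSort the simple reference):

import Test.QuickCheck (quickCheck)
import Data.List (sort)

-- Reference: insertion sort - obviously correct, obviously slow.
refSort :: [Int] -> [Int]
refSort = foldr ins []
  where
    ins x []                 = [x]
    ins x (y:ys) | x <= y    = x : y : ys
                 | otherwise = y : ins x ys

-- Agreement property: a counterexample implicates the optimization,
-- not the specification.
prop_agree :: [Int] -> Bool
prop_agree xs = sort xs == refSort xs

main :: IO ()
main = quickCheck prop_agree   -- "+++ OK, passed 100 tests."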




On Tue, Jul 30, 2013 at 6:08 PM, Casey Ransberger
casey.obrie...@gmail.comwrote:

 Hi David, comments below.

 On Jul 30, 2013, at 5:46 PM, David Barbour dmbarb...@gmail.com wrote:

  I'm confused about what you're asking. If you apply an optimizer to an
 algorithm, it absolutely shouldn't affect the output. When we debug or
 report errors, it should always be in reference to the original source code.
 
  Or do you mean some other form of 'optimized'? I might rephrase your
 question in terms of 'levels of service' and graceful degradation (e.g.
 switching from video conferencing to teleconferencing gracefully if it
 turns out the video uses too much bandwidth), then there's a lot of
 research there. One course I took - survivable networks and systems -
 heavily explored that subject, along with resilience. Resilience involves
 quickly recovering to a better level of service once the cause for the
 fault is removed (e.g. restoring the video once the bandwidth is available).
 
  Achieving ability to fall back gracefully can be a challenge. Things
 can go wrong many more ways than they can go right. Things can break in
 many more ways than they can be whole. A major issue is 'partial failure' -
 because partial failure means partial success. Often some state has been
 changed before the failure occurs. It can be difficult to undo those
 changes.

 So I'm talking about the STEPS goal around separating meaning from
 optimization. The reason we don't have to count optimizations as a part of
 our complexity count is: the system can work flawlessly without them, it
 might just need a lot of RAM and a crazy fast CPU(s) to do it.

 The thought I had was just this: complex algs tend to have bugs more often
 than simple algs. So why not fail over if you have an architecture that
 always contains the simplest solution to the problem right *beside* the
 optimal solution that makes it feasible on current hardware?

 Of course you're right, I can imagine there being lots of practical
 problems with doing things this way, especially if you've got an e.g.
 Google sized fleet of machines all switching to a bubble sort at the same
 time! But it's still an interesting question to my mind, because one of the
 most fundamental questions I can ask is how do I make these systems less
 brittle?

 Does this clarify what I was asking about?

 Case
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Deoptimization as fallback

2013-07-30 Thread David Barbour
I'm confused about what you're asking. If you apply an optimizer to an
algorithm, it absolutely shouldn't affect the output. When we debug or
report errors, it should always be in reference to the original source code.

Or do you mean some other form of 'optimized'? I might rephrase your
question in terms of 'levels of service' and graceful degradation (e.g.
switching from video conferencing to teleconferencing gracefully if it
turns out the video uses too much bandwidth), then there's a lot of
research there. One course I took - survivable networks and systems -
heavily explored that subject, along with resilience. Resilience involves
quickly recovering to a better level of service once the cause for the
fault is removed (e.g. restoring the video once the bandwidth is
available).

Achieving ability to fall back gracefully can be a challenge. Things can
go wrong many more ways than they can go right. Things can break in many
more ways than they can be whole. A major issue is 'partial failure' -
because partial failure means partial success. Often some state has been
changed before the failure occurs. It can be difficult to undo those
changes.



On Tue, Jul 30, 2013 at 1:22 PM, Casey Ransberger
casey.obrie...@gmail.comwrote:

 Thought I had: when a program hits an unhandled exception, we crash;
 often there's a hook to log the crash somewhere.

 I was thinking: if a system happens to be running an optimized version of
 some algorithm, and hit a crash bug, what if it could fall back to the
 suboptimal but conceptually simpler Occam's explanation?

 All other things being equal, the simple implementation is usually more
 stable than the faster/less-RAM solution.

 Is anyone aware of research in this direction?
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The end of technology has arrived

2013-07-24 Thread David Barbour
Your pessimism does not seem justified. And I'm quite looking forward to
the future of heads up displays and augmented reality.

http://www.meta-view.com/
https://www.thalmic.com/myo/
http://awelonblue.wordpress.com/2013/07/18/programming-with-augmented-reality/




On Wed, Jul 24, 2013 at 3:18 PM, John Pratt jpra...@gmail.com wrote:


 In the first place, Steve was very conservative when it came
 to hardware and advances; relatively few things pushed the edge
 technologically, in terms of achieving some kind of science future.
 Three-dimensional displays exist, but no one ever explored that option.

 From 1999 onwards, the focus of Apple was to produce commerce,
 not advance the state of products overall.  Most everything
 he did from 1997 to 2011 simply leveraged the work that had been
 done previously and repackaged it.  Bitter failures at NeXT and
 massive success at Pixar led to the candy coating of Apple products,
 in which all progress underneath the covers ceased abruptly.

 But now Apple is unaware of this and they are still riding forward into
 a wall.  They don't know that they are riding into a wall because they
 are just rehashing and rehashing things written in the 1980's and 1990's,
 which weren't, in the first place, as advanced as people envisioned them
 to be able to be in the 1950's even.

 Since Microsoft follows Apple in large part and SGI is basically
 gone, no one leads the world except Apple.  So if Apple does not
 incorporate
 a technology, it will never become mainstream.  No major competing
 operating
 systems exist anymore and no one is even thinking like that.  And since
 Google is
 only splitting itself when it gets into hardware and not staying on track
 with web,
 it cannot really overcome this, either.

 This is really the end game, for all of technology.
 If Apple never improves itself in this regard, never thinks at a
 fundamental
 level, if it never examines the faults that Steve borrowed from PARC
 without
 examining the conceptual underpinnings, Apple will just decline, as is
 the case right now.

 Everything that was aspired in the 1990's is now a narrowly-defined
 reality:
 video exists in all formats and is available in any way possible.  Audio
 and
 music are consumable in all ways.  All information is basically
 transmittable
 as quickly as one really wants it given technology.  Speed the computers
 up by
 10x and it won't make much difference anymore.

 Stock analysts and news journalists can't see that the underpinnings of
 technology have now hit a wall.  Go ahead and make a watch or whatever.
 Real observers know that technology is over; it is just in its last throes.

 Once you define a tablet in the form of an iPad, no one can do anything
 else.
 Now that a mobile phone is synonymous with a touch pad, no one can think
 of anything else.  Mankind has boxed itself in and it is all over.




 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Theory vs practice [syntax]

2013-04-20 Thread David Barbour
How is that a theory? Sounds like a design principle.


On Sat, Apr 20, 2013 at 9:42 PM, John Carlson yottz...@gmail.com wrote:

 Here's my theory: reduce arguing with the compiler to a minimum.  This means
 reducing programmers' syntax errors.  Only add syntax to reduce errors (the
 famous FORTRAN do loop error).  The syntax that creates errors should be
 removed.
 On Apr 20, 2013 11:18 PM, John Carlson yottz...@gmail.com wrote:

 I think it's better to work from examples, ala JUnit and end-user
 programming than come up with a theory that solves nothing.  One can
 compare EGGG to GDL in scope and expressiveness.  One interesting part of
 gaming is arguing about rules.  What computer systems do that?
 On Apr 20, 2013 11:09 PM, John Carlson yottz...@gmail.com wrote:

 Practice or practical?  Maybe there's space for practical theory,
 instead of relying on things that don't exist.  Why do we distinguish
 practice from theory?  Seems like a fallacy there.
 On Apr 20, 2013 10:51 PM, David Barbour dmbarb...@gmail.com wrote:

 only in practice


 On Sat, Apr 20, 2013 at 8:23 PM, John Carlson yottz...@gmail.comwrote:

 Take my word for it, theory comes down to Monday Night Football on
 ESPN.
 On Apr 20, 2013 10:13 PM, John Carlson yottz...@gmail.com wrote:

 I think that concepts in some sense transcend the universe.  Are
 there more digits in pi than there are atoms  in the universe?  I guess 
 we
 are asking if there are transcendental volumes which are bigger or more
 complex than the universe.  If the universe contains the transcendental 
 as
 symbols then how many transcendental symbols are there?  I think you 
 still
 run into Russell's Paradox.
 On Apr 20, 2013 9:15 PM, Simon Forman forman.si...@gmail.com
 wrote:

 On 4/20/13, John Carlson yottz...@gmail.com wrote:
  Do you need one symbol for the number infinity and another for
 denoting
  that a set is infinite?  Or do you just reason about the size of
 the set?
  Is there a difference between a set that is countably infinite and
 one that
  isn't countable?  I barely know Russell's paradox... you're ahead
 of me.

 Well, for what it's worth, quoting from Meguire's 2007 Boundary
 Algebra: A Simple Notation for Boolean Algebra and the Truth
 Functors:

 Let U be the universal set, a,b∈U, and ∅ be the null set. Then the
 columns headed by “Sets” show how the algebra of sets and the pa are
 equivalent.

 Table 4-2. The 10 Nontrivial Binary Connectives (Functors).

 Name            Logic  Sets      BA

 Alternation     a∨b    a∪b       ab
 Conditional     a→b    a⊆b       (a)b
 Converse        a←b    a⊇b       a(b)
 Conjunction     a∧b    a∩b       ((a)(b))
 NOR             a↓b    (a∪b)′    (ab)
 Sheffer stroke  a|b    (a∩b)′    (a)(b)

 Biconditional   a↔b    a⊆b⊆a     (((a)b)(a(b))) -or- ((a)(b))(ab)

 (Here ′ marks the set complement.)

 (Apologies if the Unicode characters got mangled!)

 Check out http://www.markability.net/sets.htm also.
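
 A minimal sketch in Python - assuming the standard pa reading, where
 juxtaposition is OR and enclosure (x) is NOT - brute-force checks the
 BA column against the Logic column over {False, True}:

     # Brute-force check of Table 4-2, reading pa juxtaposition as OR
     # and enclosure (x) as NOT.
     def no(x):                     # enclosure: (x)
         return not x

     rows = [
         ("alternation",   lambda a, b: a or b,        lambda a, b: a or b),
         ("conditional",   lambda a, b: (not a) or b,  lambda a, b: no(a) or b),
         ("conjunction",   lambda a, b: a and b,
                           lambda a, b: no(no(a) or no(b))),
         ("NOR",           lambda a, b: not (a or b),  lambda a, b: no(a or b)),
         ("Sheffer",       lambda a, b: not (a and b), lambda a, b: no(a) or no(b)),
         ("biconditional", lambda a, b: a == b,
                           lambda a, b: no(no(no(a) or b) or no(a or no(b)))),
     ]
     for name, logic, ba in rows:
         assert all(logic(a, b) == ba(a, b)
                    for a in (False, True) for b in (False, True)), name
     print("all rows agree")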


 I don't know much about set theory but I think the Universal set
 stands for the set of everything, no?

 Cheers,
 ~Simon





 The history of mankind for the last four centuries is rather like
 that of
 an imprisoned sleeper, stirring clumsily and uneasily while the
 prison that
 restrains and shelters him catches fire, not waking but
 incorporating the
 crackling and warmth of the fire with ancient and incongruous
 dreams, than
 like that of a man consciously awake to danger and opportunity.
 --H. G. Wells, A Short History of the World
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc



 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] 90% glue code

2013-04-18 Thread David Barbour
Well, communicating with genuine aliens would probably best be solved by
multi-modal machine-learning techniques. The ML community already has
techniques for two machines to teach one another their vocabularies, and
thus build a strong correspondence. Of course, if we have space alien
visitors, they'll probably have a solution to the problem and already know
our language from media.

Natural language has a certain robustness to it, due to its probabilistic,
contextual, and interactive natures (offering much opportunity for
refinement and retroactive correction). If we want to support
machine-learning between software elements, one of the best things we could
do is to emulate this robustness
end-to-endhttp://awelonblue.wordpress.com/2012/05/20/abandoning-commitment-in-hci/.
Such things have been done before, but I'm a bit stuck on how to do so
without big latency, efficiency, and security sacrifices. (There are two
issues: the combinatorial explosion of possible models, and the modular
hiding of dependencies that are inherently related through shared
observation or influence.)

Fortunately, there are many other issues we can address
[http://awelonblue.wordpress.com/2011/06/15/data-model-independence/] to
facilitate communication that are peripheral to translation. Further, we
could certainly leverage code-by-example for type translations (if they're
close).
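
A minimal sketch of code-by-example type translation - assuming flat records
whose example values make the field correspondence unambiguous; the field
names and values below are hypothetical:

    # Infer a field-renaming translation from a single example pair.
    def infer_translation(src_example, dst_example):
        mapping = {}
        for dst_key, dst_val in dst_example.items():
            candidates = [k for k, v in src_example.items() if v == dst_val]
            if len(candidates) != 1:
                raise ValueError("ambiguous or missing source for %r" % dst_key)
            mapping[dst_key] = candidates[0]
        return lambda record: {dk: record[sk] for dk, sk in mapping.items()}

    # Two modules' hypothetical vocabularies for the same datum:
    to_theirs = infer_translation({"lat": 42.0, "lon": -71.1},
                                  {"latitude": 42.0, "longitude": -71.1})
    print(to_theirs({"lat": 51.5, "lon": -0.1}))
    # {'latitude': 51.5, 'longitude': -0.1}

A real correspondence-learner would need many examples and probabilistic
matching, but the shape is the same.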

Regards,

Dave


On Thu, Apr 18, 2013 at 8:06 AM, Alan Kay alan.n...@yahoo.com wrote:

 Hi David

 This is an interesting slant on a 50+ year old paramount problem (and one
 that is even more important today).

 Licklider called it the communicating with aliens problem. He said 50
 years ago this month that if we succeed in constructing the 'intergalactic
 network' then our main problem will be learning how to 'communicate with
 aliens'. He meant not just humans to humans but software to software and
 humans to software.

 (We gave him his intergalactic network but did not solve the communicating
 with aliens problem.)

 I think a key to finding better solutions is to -- as he did -- really
 push the scale beyond our imaginations -- intergalactic -- and then ask
 how can we *still* establish workable communications of overlapping
 meanings?.

 Another way to look at this is to ask: What kinds of prep *can* you do
 *beforehand* to facilitate communications with alien modules?

 Cheers,

 Alan



   --
  *From:* David Barbour dmbarb...@gmail.com
 *To:* Fundamentals of New Computing fonc@vpri.org
 *Sent:* Wednesday, April 17, 2013 6:13 PM
 *Subject:* Re: [fonc] 90% glue code

 Sounds like you want stone soup programming
 [http://awelonblue.wordpress.com/2012/09/12/stone-soup-programming/]. :D

 In retrospect, I've been disappointed with most techniques that involve
 providing information about module capabilities to some external
 configurator (e.g. linkers as constraint solvers). Developers are asked
 to grok at least two very different programming models. Hand annotations or
 hints become common practice because many properties cannot be inferred.
 The resulting system isn't elegantly metacircular, i.e. you need that
 'configurator' in the loop and the metadata with the inputs.

 An alternative I've been thinking about recently is to shift the link
 logic to the modules themselves. Instead of being passive bearers of
 information that some external linker glues together, the modules become
 active agents in a link environment that collaboratively construct the
 runtime behavior (which may afterwards be extracted). Developers would have
 some freedom to abstract and separate problem-specific link logic
 (including decision-making) rather than having a one-size-fits-all solution.

 Re: In my mind powerful languages thus means 98% requirements

 To me, power means something much more graduated: that I can get as much
 power as I need, that I can do so late in development without rewriting
 everything, that my language will grow with me and my projects.


 On Wed, Apr 17, 2013 at 2:04 PM, John Nilsson j...@milsson.nu wrote:

 Maybe not. If there is enough information about different modules'
 capabilities, suitability for solving various problems and requirements,
 such that the required glue can be generated or configured automatically
 at run time. Then what is left is the input to such a generator or
 configurator. At some level of abstraction the input should transition from
 being glue and better be described as design.
 Design could be seen as kind of a gray area if thought of mainly as
 picking what to glue together as it still involves a significant amount of
 gluing ;)
 But even design should be possible to formalize enough to minimize the
 amount of actual design decisions required to encode in the source and what
 decisions to leave to algorithms though. So what's left is to encode the
 requirements as input to the designer.
 In my mind powerful languages thus means 98% requirements, 2% design and
 0% glue.
 BR
 John
 Den 17

Re: [fonc] 90% glue code

2013-04-17 Thread David Barbour
On Wed, Apr 17, 2013 at 8:26 AM, Steve Wart st...@wart.ca wrote:

 It depends what you mean by 'glue' - I think if you're going to quantify
 something you should define it.


Glue code is reasonably well defined in the community.

http://en.wikipedia.org/wiki/Glue_code

A related term sometimes used is 'data plumbing'.

http://www.johndcook.com/blog/2011/11/15/plumber-programmers/



 Do you think accessors in Java and Smalltalk code qualify as 'glue'?


Yes.



I suppose object-relational mapping declarations would as well, likely any
 code traversing an object to obtain data for presentation to a UI.


Indeed. The traversal code is glue. The precise code deciding (during or
after traversal) what to display and how to format it would not be glue.



 Is all application code glue, and the only non-glue code is parsing,
 compilation or interpretation of glue?


No. Information, e.g. from sensors, is not glue. The logic that decides
actuation of a robotic arm or what to display on a UI is not glue code.
Music, art, character AIs, procedurally generated worlds, dialog trees,
etc. may consist of considerable quantities of data and code that is not
glue.

Of course, many of these things still involve much glue to integrate them
into one coherent application.



 Alternatively the only non-glue is the hardware :)


There is glue hardware, too. :D

http://en.wikipedia.org/wiki/Glue_logic



 Is glue code code devoid of semantic or computational intent? Are type
 systems purely glue code if they don't have any value at runtime? Does the
 term even have any meaning at all?


Glue code does have computational intent and purpose.

Every application must gather data from multiple sources (sensors, user
input, various databases), make decisions based on some logic, then effect
those decisions by distribution of control streams (robotic arms, monitors,
data maintenance).

In a world without glue code, at least as source code, only the middle step
would be explicit.

In state-of-the-art systems, every step is explicit, plus there's a lot of
overhead - e.g. explicitly managing local state and caches so we can
combine data streams so we can make decisions; ad-hoc recovery code after
partial failure;  data conversions from different sources or between
algorithms.

Type systems are not glue code because they don't connect different parts
of a system. (Though, they could be leveraged for glue code, e.g. using
type-matching for auto-wiring, as sketched below.)
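
As a minimal sketch of that parenthetical - using Python type hints, requiring
a unique match per parameter, and with hypothetical component names:

    import inspect

    # Auto-wire a function's parameters from a pool of components by
    # matching each parameter's annotated type against what's available.
    def autowire(fn, components):
        args = {}
        for name, param in inspect.signature(fn).parameters.items():
            matches = [c for c in components if isinstance(c, param.annotation)]
            if len(matches) != 1:
                raise TypeError("no unique component for parameter %r" % name)
            args[name] = matches[0]
        return fn(**args)

    class Sensor:                              # hypothetical components
        def read(self): return 21.5
    class Display:
        def show(self, x): print("reading:", x)

    def report(s: Sensor, d: Display):
        d.show(s.read())

    autowire(report, [Sensor(), Display()])    # prints: reading: 21.5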

Regards,

David
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] 90% glue code

2013-04-17 Thread David Barbour
On Wed, Apr 17, 2013 at 11:04 AM, Steve Wart st...@wart.ca wrote:

 The wikipedia definition is circular, but I agree that people know it when
 they see it :)


I don't believe it's circular. It does assume you already know the meaning
of glue and code independently.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] 90% glue code

2013-04-17 Thread David Barbour
Sounds like you want stone soup programming
[http://awelonblue.wordpress.com/2012/09/12/stone-soup-programming/]. :D

In retrospect, I've been disappointed with most techniques that involve
providing information about module capabilities to some external
configurator (e.g. linkers as constraint solvers). Developers are asked
to grok at least two very different programming models. Hand annotations or
hints become common practice because many properties cannot be inferred.
The resulting system isn't elegantly metacircular, i.e. you need that
'configurator' in the loop and the metadata with the inputs.

An alternative I've been thinking about recently is to shift the link logic
to the modules themselves. Instead of being passive bearers of information
that some external linker glues together, the modules become active agents
in a link environment that collaboratively construct the runtime behavior
(which may afterwards be extracted). Developers would have some freedom to
abstract and separate problem-specific link logic (including
decision-making) rather than having a one-size-fits-all solution.
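
A minimal sketch of the idea - every name below is hypothetical, and a real
link environment would need search, negotiation, and ordering rather than a
bare registry:

    # Link environment: shared state that modules actively manipulate.
    class LinkEnv:
        def __init__(self):
            self.offers = {}                   # topic -> provider
        def offer(self, topic, provider):
            self.offers[topic] = provider
        def need(self, topic):
            return self.offers[topic]

    # Modules are agents: link(env) runs their own link logic.
    class ClockModule:
        def link(self, env):
            env.offer("time", lambda: 1234567890)

    class LoggerModule:
        def link(self, env):
            now = env.need("time")             # this module wires itself
            self.log = lambda msg: print(now(), msg)

    env = LinkEnv()
    clock, logger = ClockModule(), LoggerModule()
    clock.link(env)
    logger.link(env)                           # naive: order matters here
    logger.log("linked")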

Re: In my mind powerful languages thus means 98% requirements

To me, power means something much more graduated: that I can get as much
power as I need, that I can do so late in development without rewriting
everything, that my language will grow with me and my projects.


On Wed, Apr 17, 2013 at 2:04 PM, John Nilsson j...@milsson.nu wrote:

 Maybe not. If there is enough information about different modules'
 capabilities, suitability for solving various problems and requirements,
 such that the required glue can be generated or configured automatically
 at run time. Then what is left is the input to such a generator or
 configurator. At some level of abstraction the input should transition from
 being glue and better be described as design.
 Design could be seen as kind of a gray area if thought of mainly as
 picking what to glue together as it still involves a significant amount of
 gluing ;)
 But even design should be possible to formalize enough to minimize the
 amount of actual design decisions required to encode in the source and what
 decisions to leave to algorithms though. So what's left is to encode the
 requirements as input to the designer.
 In my mind powerful languages thus means 98% requirements, 2% design and
 0% glue.
 BR
 John
 On 17 Apr 2013 05:04, Miles Fidelman mfidel...@meetinghouse.net wrote:

 So let's ask the obvious question, if we have powerful languages, and/or
 powerful libraries, is not an application comprised primarily of glue code
 that ties all the piece parts together in an application-specific way?

 David Barbour wrote:


 On Tue, Apr 16, 2013 at 2:25 PM, Steve Wart st...@wart.ca wrote:

  On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich
  In real systems, 90% of code (conservatively) is glue code.

 What is the origin of this claim?


 I claimed it from observation and experience. But I'm sure there are
 other people who have claimed it, too. Do you doubt its veracity?



 On Mon, Apr 15, 2013 at 12:15 PM, David Barbour
 dmbarb...@gmail.com wrote:


 On Mon, Apr 15, 2013 at 11:57 AM, David Barbour
 dmbarb...@gmail.com wrote:


 On Mon, Apr 15, 2013 at 10:40 AM, Loup Vaillant-David
 l...@loup-vaillant.fr wrote:

 On Sun, Apr 14, 2013 at 04:17:48PM -0700, David
 Barbour wrote:
  On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich
  In real systems, 90% of code (conservatively) is
 glue code.

 Does this *have* to be the case?  Real systems also
 use C++ (or
 Java).  Better languages may require less glue, (even
 if they require
 just as much core logic).


 Yes.

 The prevalence of glue code is a natural consequence of
 combinatorial effects. E.g. there are many ways to
 partition and summarize properties into data-structures.
 Unless we uniformly make the same decisions - and we won't
 (due to context-dependent variations in convenience or
 performance) - then we will eventually have many
 heterogeneous data models. Similarly can be said of event
 models.

 We can't avoid this problem. At best, we can delay it a
 little.


 I should clarify: a potential answer to the glue-code issue is
 to *infer* much more of it, i.e. auto-wiring, constraint
 models, searches. We could automatically build pipelines that
 convert one type to another, given smaller steps (though this
 does risk aggregate lossiness due to intermediate summaries or
 subtle incompatibilities).  Machine-learning could

Re: [fonc] 90% glue code

2013-04-16 Thread David Barbour
On Tue, Apr 16, 2013 at 2:25 PM, Steve Wart st...@wart.ca wrote:

  On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich
  In real systems, 90% of code (conservatively) is glue code.

 What is the origin of this claim?


I claimed it from observation and experience. But I'm sure there are other
people who have claimed it, too. Do you doubt its veracity?




 On Mon, Apr 15, 2013 at 12:15 PM, David Barbour dmbarb...@gmail.comwrote:


 On Mon, Apr 15, 2013 at 11:57 AM, David Barbour dmbarb...@gmail.comwrote:


 On Mon, Apr 15, 2013 at 10:40 AM, Loup Vaillant-David 
 l...@loup-vaillant.fr wrote:

 On Sun, Apr 14, 2013 at 04:17:48PM -0700, David Barbour wrote:
  On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich
  In real systems, 90% of code (conservatively) is glue code.

 Does this *have* to be the case?  Real systems also use C++ (or
 Java).  Better languages may require less glue, (even if they require
 just as much core logic).


 Yes.

 The prevalence of glue code is a natural consequence of combinatorial
 effects. E.g. there are many ways to partition and summarize properties
 into data-structures. Unless we uniformly make the same decisions - and we
 won't (due to context-dependent variations in convenience or performance) -
 then we will eventually have many heterogeneous data models. Similarly can
 be said of event models.

 We can't avoid this problem. At best, we can delay it a little.


 I should clarify: a potential answer to the glue-code issue is to *infer*
 much more of it, i.e. auto-wiring, constraint models, searches. We could
 automatically build pipelines that convert one type to another, given
 smaller steps (though this does risk aggregate lossiness due to
 intermediate summaries or subtle incompatibilities).  Machine-learning
 could be leveraged to find correspondences between structures, perhaps
 aiding humans. 90% or more of code will be glue-code, but it doesn't all
 need to be hand-written. I am certainly pursuing such techniques in my
 current language development.


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc



 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] 90% glue code

2013-04-16 Thread David Barbour
On Tue, Apr 16, 2013 at 4:28 PM, Loup Vaillant-David 
l...@loup-vaillant.frwrote:

 On Mon, Apr 15, 2013 at 12:15:10PM -0700, David Barbour wrote:
  On Mon, Apr 15, 2013 at 11:57 AM, David Barbour dmbarb...@gmail.com
 wrote:
 
  90% or more of code will be glue-code, but it doesn't all need to be
  hand-written. I am certainly pursuing such techniques in my current
  language development.

 Err, I may sound nitpicky, but when I say code, I usually mean
 *source* code.  Automatically generated code counts only if it becomes
 the preferred medium for modification.  Most of the time, it isn't:
 you'd tweak the generation procedure instead.


 So, maybe it's not hopeless.  Maybe we do have ways to reduce the
 amount (if not the proportion) of glue code.


If you have enough setup, parameters, and holes to fill for the generation
procedure, then that just becomes a higher layer of glue code. You can't
expect huge savings even with automatic methods.

OTOH, almost any reduction is worthwhile. One way to think of '90% glue
code' is I'm 10% efficient. So if we cut it down to 70% glue code, that's
like tripling efficiency - a naive analysis, unfortunately, but a
motivating one.



 Now, reducing the hardware ressources (CPU, memory…) needed to *run*
 glue code is another matter.


Ooh, that's where we can really win, if we approach it right - supporting
deep, cross-layer optimizations, fusing intermediate steps, optimize
memoization or caching, content distribution networks. But we need to model
glue code in higher level languages to achieve such things automatically.

Regards,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] 90% glue code

2013-04-16 Thread David Barbour
On Tue, Apr 16, 2013 at 8:03 PM, Miles Fidelman
mfidel...@meetinghouse.netwrote:

 So let's ask the obvious question, if we have powerful languages, and/or
 powerful libraries, is not an application comprised primarily of glue code
 that ties all the piece parts together in an application-specific way?


Yes. I believe so.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Actors, Light Cones and Epistemology (was Layering, Thinking and Computing)

2013-04-15 Thread David Barbour
On Mon, Apr 15, 2013 at 6:46 AM, Eugen Leitl eu...@leitl.org wrote:


 Few ns are effective eternities in terms of modern gate delays.
 I presume the conversation was about synchronization, which
 should be avoided in general unless absolutely necessary, and
 not done directly in hardware.


Synchronization always has bounded error tolerances - which may differ by
many orders of magnitude, based on application. Synchronized audio-video,
for example, generally has a tolerance of about 10 milliseconds - large
enough to accomplish it in software. But really good AV software tries to
push it below 1ms. Synchronization for modern CPUs has extremely tight
tolerances (just like everything else about modern CPUs). But you should
not only think about CPUs or hardware when you think 'synchronization'.

You say 'synchronization should be avoided unless absolutely necessary'. I
disagree; a blanket statement like that is too extreme. Sometimes
synchronized is more efficient even if it is not 'absolutely' necessary -
it reduces need to keep state, which has its own expense. It Depends.

In any case, the conversation wasn't even about synchronization (which
means to CAUSE to be synchronized). It was simply about 'synchronized' -
whether things can happen at the same time or rate (which often has natural
causes).

And synchronization is never about clocks. It's the reverse, really.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] 90% glue code

2013-04-15 Thread David Barbour
On Mon, Apr 15, 2013 at 11:57 AM, David Barbour dmbarb...@gmail.com wrote:


 On Mon, Apr 15, 2013 at 10:40 AM, Loup Vaillant-David 
 l...@loup-vaillant.frwrote:

 On Sun, Apr 14, 2013 at 04:17:48PM -0700, David Barbour wrote:
  On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich
  In real systems, 90% of code (conservatively) is glue code.

 Does this *have* to be the case?  Real systems also use C++ (or
 Java).  Better languages may require less glue, (even if they require
 just as much core logic).


 Yes.

 The prevalence of glue code is a natural consequence of combinatorial
 effects. E.g. there are many ways to partition and summarize properties
 into data-structures. Unless we uniformly make the same decisions - and we
 won't (due to context-dependent variations in convenience or performance) -
 then we will eventually have many heterogeneous data models. Similarly can
 be said of event models.

 We can't avoid this problem. At best, we can delay it a little.


I should clarify: a potential answer to the glue-code issue is to *infer*
much more of it, i.e. auto-wiring, constraint models, searches. We could
automatically build pipelines that convert one type to another, given
smaller steps (though this does risk aggregate lossiness due to
intermediate summaries or subtle incompatibilities).  Machine-learning
could be leveraged to find correspondences between structures, perhaps
aiding humans. 90% or more of code will be glue-code, but it doesn't all
need to be hand-written. I am certainly pursuing such techniques in my
current language development.
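
A minimal sketch of the pipeline-building idea - a breadth-first search over
registered single-step converters; the step table below is hypothetical:

    from collections import deque

    # Small conversion steps: (from_type, to_type) -> function.
    steps = {
        ("fahrenheit", "celsius"): lambda f: (f - 32) * 5 / 9,
        ("celsius", "kelvin"):     lambda c: c + 273.15,
        ("kelvin", "rankine"):     lambda k: k * 9 / 5,
    }

    # Breadth-first search for the shortest conversion pipeline.
    def build_pipeline(src, dst):
        frontier = deque([(src, lambda x: x)])
        seen = {src}
        while frontier:
            ty, fn = frontier.popleft()
            if ty == dst:
                return fn
            for (a, b), step in steps.items():
                if a == ty and b not in seen:
                    seen.add(b)
                    frontier.append((b, lambda x, f=fn, s=step: s(f(x))))
        raise LookupError("no pipeline from %s to %s" % (src, dst))

    f_to_r = build_pipeline("fahrenheit", "rankine")
    print(f_to_r(32.0))    # 0 C -> 273.15 K -> ~491.67 R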
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Actors, Light Cones and Epistemology (was Layering, Thinking and Computing)

2013-04-15 Thread David Barbour
On Mon, Apr 15, 2013 at 3:10 PM, Pascal J. Bourguignon 
p...@informatimago.com wrote:

 I think that one place where light cone considerations are involved is
 with caches in multi-processor systems. If all processors could have
 instantaneous knowledge of what the views of the other processors are
 about memory, there wouldn't be any cache coherence problem.  But light
 speed, or information transmission speed is not infinite, hence the
 appearance of light cones or light cones-like phenomena.


Many people seem to jump from one extremism to another - from
instantaneous transfer to unbounded delay - without seriously
considering the useful middle (predictable, bounded delay). The middle has
many models (including cellular automata) and is capable of supporting
synchronous/real-time distributed systems. It's also where you'll find
light cones... and many interesting, efficient synchronization patterns.
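
A toy sketch of that middle ground - a one-dimensional cellular automaton in
which information from a single event spreads at most one cell per step,
tracing out a discrete light cone:

    # Each cell becomes the OR of its three-cell neighborhood, so
    # influence propagates with a bounded, predictable delay.
    def step(cells):
        n = len(cells)
        return [cells[(i - 1) % n] or cells[i] or cells[(i + 1) % n]
                for i in range(n)]

    cells = [0] * 21
    cells[10] = 1                        # a single event at the center
    for t in range(6):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)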

Interestingly, cache coherence is not a problem if your programming model
*doesn't* assume instantaneous transfer, i.e. because you'd end up
explicitly modeling the delays and thus managing the distinct views in a
formal manner - using distinct locations in memory, and thus distinct cache
lines. (I believe this contributes to the success of modeling
multi-processor systems as distributed systems.)

Regards,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Meta-Reasoning in Actor Systems (was: Layering, Thinking and Computing)

2013-04-14 Thread David Barbour
I always miss a few when making such lists. The easiest way to find new
good questions is to try finding models that address the existing
questions, then figuring out why you should be disappointed with it. :D

On Sun, Apr 14, 2013 at 9:55 AM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 Impressive.  But with Turing complete models, the ability to build a
 system is not a good measure of distance. How much discipline (best
 practices, boiler-plate, self-constraint) and foresight (or up-front
 design) would it take to develop and use your system directly from a pure
 actors model?


 I don't know the answer to that yet. You've highlighted really good
 questions that a pure actor model system would have to answer (and I
 added a few). I believe they were:

 - composition
 - decomposition
 - consistency
 - discovery
 - persistence
 - runtime update
 - garbage collection
 - process control
 - configuration partitioning
 - partial failure
 - inlining? (optimization)
 - mirroring? (optimization)
 - interactions
 - safety
 - security
 - progress
 - extensibility
 - antifragility
 - message reliability
 - actor persistence

 Did I miss any?

 On Sat, Apr 13, 2013 at 1:29 PM, David Barbour dmbarb...@gmail.comwrote:


 On Sat, Apr 13, 2013 at 9:01 AM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 I think we don't know whether time exists in the first place.


  That only matters to people who want a model as close to the Universe
  as possible.

 To the rare scientist who is not also a philosopher, it only matters
 whether time is effective for describing and predicting behavior about the
 universe, and the same is true for notions of particles, waves, energy,
 entropy, etc..

 I believe our world is 'synchronous' in the sense of things happening at
 the same time in different places...


 It seems to me that you are describing a privileged frame of reference.


 How is it privileged?

 Would you consider your car mechanic to have a 'privileged' frame of
 reference on our universe because he can look down at your vehicle's engine
 and recognize when components are in or out of synch? Is it not obviously
 the case that, even while out of synch, the different components are still
 doing things at the same time?

 Is there any practical or scientific merit for your claim? I believe
 there is abundant scientific and practical merit to models and technologies
 involving multiple entities or components moving and acting at the same
 time.



 I've built a system that does what you mention is difficult above. It
 incorporates autopoietic and allopoietic properties, enables object
 capability security and has hints of antifragility, all guided by the actor
 model of computation.


 Impressive.  But with Turing complete models, the ability to build a
 system is not a good measure of distance. How much discipline (best
 practices, boiler-plate, self-constraint) and foresight (or up-front
 design) would it take to develop and use your system directly from a pure
 actors model?



 I don't want programming to be easier than physics. Why? First, this
 implies that physics is somehow difficult, and that there ought to be a
 better way.


 Physics is difficult. More precisely: setting up physical systems to
 compute a value or accomplish a task is very difficult. Measurements are
 noisy. There are many non-obvious interactions (e.g. heat, vibration,
 covert channels). There are severe spatial constraints, locality
 constraints, energy constraints. It is very easy for things to 'go wrong'.

 Programming should be easier than physics so it can handle higher levels
 of complexity. I'm not suggesting that programming should violate physics,
 but programs shouldn't be subject to the same noise and overhead. If we had
 to think about adding fans and radiators to our actor configurations to
 keep them cool, we'd hardly get anything done.

 I hope you aren't so hypocritical as to claim that 'programming shouldn't
 be easier than physics' in one breath then preach 'use actors' in another.
 Actors are already an enormous simplification from physics. It even
 simplifies away the media for communication.



 Whatever happened to the pursuit of Maxwell's equations for Computer
 Science? Simple is not the same as easy.


 Simple is also not the same as physics.

 Maxwell's equations are a metaphor that we might apply to a specific
 model or semantics. Maxwell's equations describe a set of invariants and
 relationships between properties. If you want such equations, you'll
 generally need to design your model to achieve them.

 On this forum, 'Nile' is sometimes proffered as an example of the power
 of equational reasoning, but is a domain specific model.



 if we (literally, you and I in our bodies communicating via the
 Internet) did not get here through composition, integration, open extension
 and abstraction, then I don't know how to make a better argument to
 demonstrate those properties are a part of physics and layering

Re: [fonc] Actors, Light Cones and Epistemology (was Layering, Thinking and Computing)

2013-04-14 Thread David Barbour
On Sun, Apr 14, 2013 at 1:23 PM, Pascal J. Bourguignon 
p...@informatimago.com wrote:

 David Barbour dmbarb...@gmail.com writes:

  On Apr 14, 2013 9:46 AM, Tristan Slominski 
  tristan.slomin...@gmail.com wrote:
 
  A mechanic is a poor example because frame of reference is almost
  irrelevant in Newtonian view of physics.
 
  The vast majority of information processing technologies allow you to
  place, with fair precision, every bit in the aether at any given
  instant. The so-called Newtonian view will serve more precisely and
  accurately than dubious metaphors to light cones.

 What are you talking about???


I don't know how to answer that without repeating myself, and in this case
it's a written conversation. Do you have a more specific question? Hmm. At
a guess, I'll provide an answer that might or might not be to the real
question you intended: The air-quotes around Newtonian are there because, if
we step back in context a bit, Tristan is claiming that any knowledge of
synchronization is somehow 'privileged'. (Despite the fact
nearly all our technology relies on this knowledge, and it's readily
available at a glance, and does not depend on Newtonian anything.)

And I've seen Grace Hopper's video on nanoseconds before. If you carry a
piece of wire of the right length, it isn't difficult to say where light
carrying information will be after a few nanoseconds. :D
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Compiler Passes

2013-04-14 Thread David Barbour
(Forwarded from Layers thread)

On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich
gath.na.geala...@gmail.comwrote:

 Isn't one of the points of idst/COLA/Frank/whatever-it-is-called-today to
 simplify the development of domain-specific models to such an extent that
 their casual application becomes conceivable, and indeed even practical, as
 opposed to designing a new one-size-fits-all language every decade or so?



I had another idea the other day that could profit from a domain-specific
 model: a model for compiler passes. I stumbled upon the nanopass approach
 [1] to compiler construction some time ago and found that I like it. Then
 it occurred to me that if one could express the passes in some sort of a
 domain-specific language, the total compilation pipeline could be assembled
 from the individual passes in a much more efficient way that would be the
 case if the passes were written in something like C++.

 In order to do that, however, no matter what the intermediate values in
 the pipeline would be (trees? annotated graphs?), the individual passes
 would have to be analyzable in some way. For example, two passes may or may
 not interfere with each other, and therefore may or may not be commutative,
 associative, and/or fusable (in the same respect that, say, Haskell maps
 over lists are fusable). I can't imagine that C++ code would be analyzable
 in this way, unless one were to use some severely restricted subset of C++
 code. It would be ugly anyway.

 Composing the passes by fusing the traversals and transformations would
 decrease the number of memory accesses, speed up the compilation process,
 and encourage the compiler writer to write more fine-grained passes, in the
 same sense that deep inlining in modern language implementations encourages
 the programmer to write small and reusable routines, even higher-order
 ones, without severe performance penalties. Lowering the barrier to
 implementing such a problem-specific language seems to make such an
 approach viable, perhaps even desirable, given how convoluted most
 production compilers seem to be.

 (If I've just written something that amounts to complete gibberish, please
 shoot me. I just felt like writing down an idea that occurred to me
 recently and bouncing it off somebody.)

 - Gath

 [1] Kent Dybvig, A nanopass framework for compiler education (2005),
 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.72.5578
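
 A minimal sketch of the fusion idea - assuming passes are node-local
 rewrites tagged with the fields they read and write (all names below are
 hypothetical); two passes that don't observe each other's effects can be
 fused into a single traversal:

     # A nanopass here is a node-local rewrite plus read/write sets.
     class Pass:
         def __init__(self, fn, reads, writes):
             self.fn, self.reads, self.writes = fn, set(reads), set(writes)

     def independent(p, q):
         # Fusable if neither pass reads or overwrites the other's writes.
         return not (p.writes & q.reads) and not (q.writes & p.reads) \
                and not (p.writes & q.writes)

     def fuse(p, q):
         # One traversal applying both rewrites to each node.
         return Pass(lambda n: q.fn(p.fn(n)),
                     p.reads | q.reads, p.writes | q.writes)

     def run(p, tree):                  # tree: nested dicts with "kids"
         node = p.fn(dict(tree))
         node["kids"] = [run(p, k) for k in node.get("kids", [])]
         return node

     lower_names = Pass(lambda n: dict(n, name=n.get("name", "").lower()),
                        reads={"name"}, writes={"name"})
     tag_arity   = Pass(lambda n: dict(n, arity=len(n.get("kids", []))),
                        reads={"kids"}, writes={"arity"})

     tree = {"name": "ADD", "kids": [{"name": "X"}, {"name": "Y"}]}
     if independent(lower_names, tag_arity):
         print(run(fuse(lower_names, tag_arity), tree))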


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-13 Thread David Barbour
On Sat, Apr 13, 2013 at 9:01 AM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 I think we don't know whether time exists in the first place.


That only matters to people who want as close to the Universe as
possible.

To the rare scientist who is not also a philosopher, it only matters
whether time is effective for describing and predicting behavior about the
universe, and the same is true for notions of particles, waves, energy,
entropy, etc..

I believe our world is 'synchronous' in the sense of things happening at
 the same time in different places...


 It seems to me that you are describing a privileged frame of reference.


How is it privileged?

Would you consider your car mechanic to have a 'privileged' frame of
reference on our universe because he can look down at your vehicle's engine
and recognize when components are in or out of synch? Is it not obviously
the case that, even while out of synch, the different components are still
doing things at the same time?

Is there any practical or scientific merit for your claim? I believe there
is abundant scientific and practical merit to models and technologies
involving multiple entities or components moving and acting at the same
time.



 I've built a system that does what you mention is difficult above. It
 incorporates autopoietic and allopoietic properties, enables object
 capability security and has hints of antifragility, all guided by the actor
 model of computation.


Impressive.  But with Turing complete models, the ability to build a system
is not a good measure of distance. How much discipline (best practices,
boiler-plate, self-constraint) and foresight (or up-front design) would it
take to develop and use your system directly from a pure actors model?



I don't want programming to be easier than physics. Why? First, this
 implies that physics is somehow difficult, and that there ought to be a
 better way.


Physics is difficult. More precisely: setting up physical systems to
compute a value or accomplish a task is very difficult. Measurements are
noisy. There are many non-obvious interactions (e.g. heat, vibration,
covert channels). There are severe spatial constraints, locality
constraints, energy constraints. It is very easy for things to 'go wrong'.

Programming should be easier than physics so it can handle higher levels of
complexity. I'm not suggesting that programming should violate physics, but
programs shouldn't be subject to the same noise and overhead. If we had to
think about adding fans and radiators to our actor configurations to keep
them cool, we'd hardly get anything done.

I hope you aren't so hypocritical as to claim that 'programming shouldn't
be easier than physics' in one breath then preach 'use actors' in another.
Actors are already an enormous simplification from physics. It even
simplifies away the media for communication.



Whatever happened to the pursuit of Maxwell's equations for Computer
 Science? Simple is not the same as easy.


Simple is also not the same as physics.

Maxwell's equations are a metaphor that we might apply to a specific model
or semantics. Maxwell's equations describe a set of invariants and
relationships between properties. If you want such equations, you'll
generally need to design your model to achieve them.

On this forum, 'Nile' is sometimes proffered as an example of the power of
equational reasoning, but is a domain specific model.



 if we (literally, you and I in our bodies communicating via the Internet)
 did not get here through composition, integration, open extension and
 abstraction, then I don't know how to make a better argument to demonstrate
 those properties are a part of physics and layering on top of it


Do you even have an argument that we are here through composition,
integration, open extension, and abstraction? I'm a bit lost as to what
that would even mean unless you're liberally reinterpreting the words.

In any case, it doesn't matter whether physics has these properties, only
whether they're accessible to a programmer. It is true that any programming
model must be implemented within physics, of course, but that's not the
layer exposed to the programmers.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] CodeSpells. Learn how to program Java by writing spells for a 3D environment.

2013-04-12 Thread David Barbour
Neat! I love how the IDE looks like a spellbook.

There is also an associated paper, On the Nature of Fires and How to Spark
Them
When You’re Not There [1].

[1]
http://db.grinnell.edu/sigcse/sigcse2013/Program/viewAcceptedProposal.pdf?sessionType=paper&sessionNumber=252

I've occasionally contemplated developing such a game: program the behavior
of your team of goblins (who may have different strengths, capabilities,
and some behavioral habits/quirks) to get through a series of puzzles, with
players building/managing a library as they go. My interest in PLs was
sparked from MOOs (in particular, the question of how to decentralize
them).

Today, I'm more interested in approaches that can be generalized beyond
gaming. But I might get back to game development one day.

CodeHero is another game that combines coding with gameplay, but it does so
to a much less integrated degree than CodeSpells [2][3].

[2] http://primerlabs.com/codehero0
[3]
http://www.howtogeek.com/106431/codehero-teaches-programming-via-first-person-shooter-game/

Regards,

Dave


On Fri, Apr 12, 2013 at 9:30 AM, John Carlson yottz...@gmail.com wrote:

 http://www.jacobsschool.ucsd.edu/news/news_releases/release.sfe?id=1347

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-12 Thread David Barbour
On Fri, Apr 12, 2013 at 11:07 AM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 my main criticism of Bloom/CALM was assumption of timesteps, which is an
 indicator of a meta-framework relying on something else to implement it
 within reality


At the moment, we don't know whether or not reality has discrete
timesteps [1]. I would not dismiss a model as being distinguishable from
'reality' on that basis. A meta-framework is necessary because we're
implementing Bloom/CALM not directly in reality, but rather upon a silicon
chip that enforces sequential computation even where it is not most
appropriate.

[1] http://en.wikipedia.org/wiki/Chronon


 ; and my criticism of Lightweight Time Warps had to do with that it is a
 protocol for message-driven simulation and (I think) actors are minimal
 implementors of message-driven protocols [from edit]


This criticism isn't relevant for the same reason that your but you can
implement lambda calculus in actors arguments weren't relevant. The
properties of the abstraction must be considered separately from the
properties of the model implementing it.

It is true that you can implement a time warp system with actors. It is
also the case that you can implement actors in a time warp system. Either
direction will involve a framework or global transform.



 I think you and I personally care about different things. I want a
 computational model that is as close to how the Universe works as possible,


You want more than close to how the Universe works.

For example, you also want symbiotic autopoietic and allopoietic systems
and antifragility, and possibly even object capability security. Do you
deny this? But you should consider whether your wants might be in conflict
with one another. And, if so, you should weigh and consider what you want
more.

I believe they are in conflict with one another. Reality has a lot of
properties that make it a difficult programming model. There are reasons we
write software instead of hardware. There is a related discussion [fonc]
Physics and Types in 2011 August. There, I say:


Physics has a very large influence on a lot of language designs and
programming models, especially those oriented around concurrency and
communication. We cannot fight physics, because physics will ruthlessly
violate our abstractions and render our programming models unscalable, or
non-performant, or inconsistent (pick two).

But we want programming to be easier than physics. We need composition
(like Lego bricks), integration (without 'impedance mismatch'), open
extension, and abstraction (IMO, in roughly that order). So the trick is to
isolate behaviors that we can utilize and feasibly implement at arbitrary
scales (small and large), yet that support our other desiderata.


As I said earlier, the limits on growth, and thus on scalability, are often
limits of reasoning or extensibility. But it is not impossible to develop
programming models that align well with physical constraints but that do
not sacrifice reasoning and extensibility.

Based on our discussions so far, you seem to believe that if you develop a
model very near our universe, the rest will follow - that actor systems
will shine. But I am not aware of any sound argument that will take you
from "this model works just like our universe" to "this model is usable by
collaborating humans".



a minimalistic set of constructs from which everything else can be built


A minima is just a model from which you can't take anything away. There are
lots of Turing complete minima. Also, there is no guarantee that our
universe is minimalistic.



Anything which starts as synchronous cannot be minimalistic because
 that's not what we observe in the world, our world is asynchronous, and if
 we disagree on this axiom, then so much for that :D


I believe our world is 'synchronous' in the sense of things happening at
the same time in different places. I believe our world is 'synchronous' in
the sense that two photons pointed in the same direction will (barring
interference) move the same distance over the same period. If you send
those photons at the same time, they will arrive at the same time.

It seems, at the physical layer, that 'asynchronous' only happens when you
have some sort of intermediate storage or non-homogeneous delay. And even
in those cases, if I were to model the storage, transport, and retrieval
processes down to the physical minutiae, every micro process would be
end-to-end synchronous - or close enough to reason about and model them
that way. (I'm not sure about the quantum layers.)

Asynchronous communication can be a useful *abstraction* because it allows
us to hide some of the physical minutiae and heterogeneous computations.
But asynchrony isn't the only choice in that role. E.g. if we model static
latencies, those can also hide flexible processing, while perhaps being
easier to reason about for real-time systems.
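
A minimal sketch of modeling static latencies - all names and numbers below
are hypothetical; the point is that composition adds declared delays, so
end-to-end timing is computable before anything runs:

    # A component is a transform plus a declared, fixed latency (ms).
    class Comp:
        def __init__(self, fn, latency_ms):
            self.fn, self.latency_ms = fn, latency_ms
        def then(self, other):
            # Sequential composition: behaviors compose, latencies add.
            return Comp(lambda x: other.fn(self.fn(x)),
                        self.latency_ms + other.latency_ms)

    sense  = Comp(lambda _: 21.5, latency_ms=2)        # hypothetical sensor
    smooth = Comp(lambda x: round(x), latency_ms=1)
    act    = Comp(lambda x: "set heater to %s" % x, latency_ms=5)

    pipeline = sense.then(smooth).then(act)
    print(pipeline.latency_ms)    # 8 - known statically
    print(pipeline.fn(None))      # set heater to 22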



 But, without invasive code changes or some other form of cheating (e.g.
 

Re: [fonc] Layering, Thinking and Computing

2013-04-12 Thread David Barbour
Existing stuff from outside of mainstream is exactly what you should be
digging into.
On Apr 12, 2013 12:08 PM, John Pratt jpra...@gmail.com wrote:


 I feel like these discussions are tangential to the larger issues
 brought up on FONC and just serve to indulge personal interest
 discussions.  Aren't any of us interested in revolution?  It won't
 start with digging into existing stuff like this.


 On Apr 12, 2013, at 11:13 AM, Tristan Slominski wrote:

 oops, I forgot to edit this part:

  and my criticism of Lightweight Time Warps had to do with that it is a
 protocol for message-driven simulation, which also needs an implementor
 that touches reality


 It should have read:

 and my criticism of Lightweight Time Warps had to do with that it is a
 protocol for message-driven simulation and (I think) actors are minimal
 implementors of message-driven protocols



 On Fri, Apr 12, 2013 at 1:07 PM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 I had this long response drafted criticizing Bloom/CALM and Lightweight
 Time Warps, when I realized that we are probably again not aligned as to
 which meta level we're discussing.

 (my main criticism of Bloom/CALM was assumption of timesteps, which is an
 indicator of a meta-framework relying on something else to implement it
 within reality; and my criticism of Lightweight Time Warps had to do with
 that it is a protocol for message-driven simulation, which also needs an
 implementor that touches reality; synchronous reactive programming has
 the word synchronous in it) - hence my assertion that this is more meta
 level than actors.

 I think you and I personally care about different things. I want a
 computational model that is as close to how the Universe works as possible,
 with a minimalistic set of constructs from which everything else can be
 built. Hence my references to cellular automata and Wolfram's hobby of
 searching for the Universe. Anything which starts as synchronous cannot
 be minimalistic because that's not what we observe in the world, our world
 is asynchronous, and if we disagree on this axiom, then so much for that :D

 But actors model fails with regards to extensibility(*) and reasoning


 Those are concerns of an imperator, are they not? Again, I'm not saying
 you're wrong, I'm trying to highlight that our goals differ.

 But, without invasive code changes or some other form of cheating (e.g.
 global reflection) it can be difficult to obtain the name of an actor that
 is part of an actor configuration.


 Again, this is ignorance of the power of Object Capability and the Actor
 Model itself. The above is forbidden in the actor model unless the
 configuration explicitly sends you an address in the message. My earlier
 comment about Akka refers to this same mistake.

 However, you do bring up interesting meta-level reasoning complaints
 against the actor model. I'm not trying to dismiss them away or anything.
 As I mentioned before, that list is a good guide as to what meta-level
 programmers care about when writing programs. It would be great if actors
 could make it easier... and I'm probably starting to get lost here between
 the meta-levels again :/

 Which brings me to a question. Am I the only one that loses track of
 which meta-level I'm reasoning in, or is this a common occurrence?  Bringing it
 back to the topic somewhat, how do people handle reasoning about all the
 different layers (meta-levels) when thinking about computing?


 On Wed, Apr 10, 2013 at 12:21 PM, David Barbour dmbarb...@gmail.comwrote:

 On Wed, Apr 10, 2013 at 5:35 AM, Tristan Slominski 
 tristan.slomin...@gmail.com wrote:

 I think it's more of a pessimism about other models. [..] My
 non-pessimism about actors is linked to Wolfram's cellular automata turing
 machine [..] overwhelming consideration across all those hints is
 unbounded scalability.


 I'm confused. Why would you be pessimistic about non-actor models when
 your argument is essentially that very simple, deterministic, non-actor
 models can be both Turing complete and address unbounded scalability?

 Hmm. Perhaps what you're really arguing is pessimism about
 procedural - which today is the mainstream paradigm of choice. The
 imperial nature of procedures makes it difficult to compose or integrate
 them in any extensional or collaborative manner - imperative works best
 when there is exactly one imperator (emperor). I can agree with that
 pessimism.

 In practice, the limits of scalability are very often limits of
 reasoning (too hard to reason about the interactions, safety, security,
 consistency, progress, process control, partial failure) or limits of
 extensibility (to inject or integrate new behaviors with existing systems
 requires invasive changes that are inconvenient or unauthorized). If either
 of those limits exist, scaling will stall. E.g. pure functional programming
 fails to scale for extensibility reasons, even though it admits a lot of
 natural parallelism.

 Of course

Re: [fonc] When natural language fails!

2013-04-12 Thread David Barbour
Elephant has nothing to do with voice, nor even with natural language, but
rather with a new approach to control (based on 'speech acts' -
requests, commitments, promises) and state (based on recording and
reviewing speech acts).  But it's still a good read.


On Fri, Apr 12, 2013 at 10:05 PM, Casey Ransberger casey.obrie...@gmail.com
 wrote:

 I had never heard of Elephant. Of course anything John McCarthy is worth a
 look, and this is relevant to my interests:) Also: thanks for pointing me
 at all the papers folks!


 On Tue, Apr 9, 2013 at 1:25 PM, Brendan Baldwin bren...@usergenic.comwrote:

 Wasn't John McCarthy's Elephant programming language based on the
 metaphor of conversation?  Perhaps voice based programming interactions are
 addressed there?
 On Apr 9, 2013 8:46 AM, David Barbour dmbarb...@gmail.com wrote:


 On Tue, Apr 9, 2013 at 1:48 AM, Casey Ransberger 
 casey.obrie...@gmail.com wrote:


 The computer is going to keep getting smaller. How do you program a
 phone? It would be nice to be able to just talk to it, but it needs to be
 able -- in a programming context -- to eliminate ambiguity by asking me
 questions about what I meant. Or *something.*


 Well, once computers get small enough that we can easily integrate them
 with our senses and gestures, it will become easier to program again.

 Phones are an especially difficult target (big hands and fingers, small
 screens, poor tactile feedback, noisy environments). But something like
 Project Glass or AR glasses could project information onto different
 surfaces - screens the size of walls, effectively - or perhaps the size of
 our Moleskine notebooks [1]. Something like myo [2] would support pointer
 and gesture control without much interfering with our use of hands.

 That said, I think supporting ambiguity and resolving it will be one of
 the upcoming major revolutions in both HCI and software design. It has a
 rather deep impact on software design [3].

 (Your Siri conversation had me laughing out loud. Appreciated.)

 [1]
 http://awelonblue.wordpress.com/2012/10/26/ubiquitous-programming-with-pen-and-paper/
 [2] https://getmyo.com/
 [3]
 http://awelonblue.wordpress.com/2012/05/20/abandoning-commitment-in-hci/


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc




 --
 Casey Ransberger

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-10 Thread David Barbour
On Wed, Apr 10, 2013 at 5:35 AM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 I think it's more of a pessimism about other models. [..] My
 non-pessimism about actors is linked to Wolfram's cellular automata turing
 machine [..] overwhelming consideration across all those hints is
 unbounded scalability.


I'm confused. Why would you be pessimistic about non-actor models when your
argument is essentially that very simple, deterministic, non-actor models
can be both Turing complete and address unbounded scalability?

Hmm. Perhaps what you're really arguing is pessimism about procedural -
which today is the mainstream paradigm of choice. The imperial nature of
procedures makes it difficult to compose or integrate them in any
extensional or collaborative manner - imperative works best when there is
exactly one imperator (emperor). I can agree with that pessimism.

In practice, the limits of scalability are very often limits of reasoning
(too hard to reason about the interactions, safety, security, consistency,
progress, process control, partial failure) or limits of extensibility (to
inject or integrate new behaviors with existing systems requires invasive
changes that are inconvenient or unauthorized). If either of those limits
exist, scaling will stall. E.g. pure functional programming fails to scale
for extensibility reasons, even though it admits a lot of natural
parallelism.

Of course, scalable performance is sometimes the issue, especially in
models that have global 'instantaneous' relationships (e.g. ad-hoc
non-modular logic programming) or global maintenance issues (like garbage
collection). Unbounded scalability requires a consideration for locality of
computation, and that it takes time for information to propagate.

Actors model is one (of many) models that provides some of the
considerations necessary for unbounded performance scalability. But actors
model fails with regards to extensibility(*) and reasoning. So do most of
the other models you mention - e.g. cellular automatons are even less
extensible than actors (cells only talk to a fixed set of immediate
neighbors), though one can address that with a notion of visitors (mobile
agents).

From what you say, I get the impression that you aren't very aware of other
models that might compete with actors, that attempt to address not only
unbounded performance scalability but some of the other limiting factors on
growth. Have you read about Bloom and the CALM conjecture? Lightweight time
warp? What do you know of synchronous reactive programming?

There is a lot to be optimistic about, just not with actors.

(*) People tend to think of actors as extensible since you just need names
of actors. But, without invasive code changes or some other form of
cheating (e.g. global reflection) it can be difficult to obtain the name of
an actor that is part of an actor configuration. This wouldn't be a problem
except that actors pervasively encapsulate state, and ad-hoc extension of
applications often requires access to internal state [1], especially to
data models represented in that state [2].

Regards,

Dave

[1] http://awelonblue.wordpress.com/2012/10/21/local-state-is-poison/
[2] http://awelonblue.wordpress.com/2011/06/15/data-model-independence/
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-09 Thread David Barbour
On Tue, Apr 9, 2013 at 5:21 AM, Chris Warburton
chriswa...@googlemail.comwrote:


 To use David's analogy, there are some desirable properties that
 programmers exploit which are inherently 3D and cannot be represented
 in the 2D world. Of course, there are also 4D properties which our
 3D infrastructure cannot represent, for example correct refactorings
 that our IDE will think are unsafe, correct optimisations which our
 compiler will think are unsafe, etc. At some point we have to give up
 and claim that the meta-meta-meta--system is enough for practical
 purposes and obviously correct in its implementation.

 The properties that David is interested in preserving under composition
 (termination, maintainability, security, etc.) are very meta, so it's
 easy for them to become unrepresentable and difficult to encode when a
 language/system/model isn't designed with them in mind.


Well said.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] When natural language fails!

2013-04-09 Thread David Barbour
On Tue, Apr 9, 2013 at 1:48 AM, Casey Ransberger
casey.obrie...@gmail.comwrote:


 The computer is going to keep getting smaller. How do you program a phone?
 It would be nice to be able to just talk to it, but it needs to be able --
 in a programming context -- to eliminate ambiguity by asking me questions
 about what I meant. Or *something.*


Well, once computers get small enough that we can easily integrate them
with our senses and gestures, it will become easier to program again.

Phones are an especially difficult target (big hands and fingers, small
screens, poor tactile feedback, noisy environments). But something like
Project Glass or AR glasses could project information onto different
surfaces - screens the size of walls, effectively - or perhaps the size of
our Moleskine notebooks [1]. Something like myo [2] would support pointer
and gesture control without much interfering with our use of hands.

That said, I think supporting ambiguity and resolving it will be one of the
upcoming major revolutions in both HCI and software design. It has a rather
deep impact on software design [3].

(Your Siri conversation had me laughing out loud. Appreciated.)

[1]
http://awelonblue.wordpress.com/2012/10/26/ubiquitous-programming-with-pen-and-paper/
[2] https://getmyo.com/
[3] http://awelonblue.wordpress.com/2012/05/20/abandoning-commitment-in-hci/
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] When natural language fails!

2013-04-09 Thread David Barbour
On Tue, Apr 9, 2013 at 9:19 AM, Chris Warburton
chriswa...@googlemail.comwrote:


 There is a distinction between programming a mobile phone and
 programming when mobile.


True enough! And there's also a distinction between programming WITH a
mobile phone and programming while mobile. As hard as it would be to use
bash with an on-screen keyboard while sitting in a noisy restaurant, it
would be a lot harder to program while jogging or skiing. (And HCI is very
closely related to programming...)

Regards,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-09 Thread David Barbour
On Tue, Apr 9, 2013 at 12:44 PM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 popular implementations (like Akka, for example) give up things such as
 Object Capability for nothing.. it's depressing.


Indeed. Though, frameworks shouldn't rail too much against their hosts.



 I still prefer to model them as in every message is delivered. It wasn't I
 who challenged this original guaranteed-delivery condition but Carl Hewitt
 himself.


It is guaranteed in the original formalism, and even Hewitt can't change
that. But you can model loss of messages (e.g. by explicitly modeling a
lossy network).


 You've described composing actors into actor configurations :D, from the
 outside world, your island looks like a single actor.


I did not specify that there is only one bridge, nor that the island finishes
processing one message from a bridge before it starts processing the next.
If you model the island as a single actor, you would fail to represent many
of the non-deterministic interactions possible in the 'island as a set' of
actors.


 I don't think we have created enough tooling or understanding to fully
 grok the consequences of the actor model yet. Where's our math for emergent
 properties and swarm dynamics of actor systems? [..] Where is our reasoning
 about symbiotic autopoietic and allopoietic systems? This is, in my view,
  where the actor systems will shine


I cannot fathom your optimism.

What we can say of a model is often specific to how we implemented it, the
main exceptions being compositional properties (which are trivially a
superset of invariants). Ad-hoc reasoning easily grows intractable and
ambiguous to the extent the number of possibilities increases or depends on
deep implementation details. And actors model seems to go out of its way to
make reasoning difficult - pervasive state, pervasive non-determinism,
negligible ability to make consistent observations or decisions involving
the states of two or more actors.

I think any goal to lower those comprehension barriers will lead to
development of new models. Of course, they might first resolve as
frameworks or design patterns that get used pervasively (~ global
transformation done by hand, ugh). Before RDP, there were reactive design
patterns I had developed in the actors model while pursuing greater
consistency and resilience.

Regards,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-08 Thread David Barbour
On Mon, Apr 8, 2013 at 2:52 PM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 This is incorrect, well, it's based on a false premise.. this part is
 incorrect/invalid?


A valid argument with a false premise is called an 'unsound' argument. (
http://en.wikipedia.org/wiki/Validity#Validity_and_soundness)



 What does it mean for an actor to terminate. The _only_ way you will
 know, is if the actor sends you a message that it's done.


That is incorrect. One can also know things via static or global knowledge
- e.g. type systems, symbolic analysis, proofs, definitions. Actors happen
to be defined in such a manner to guarantee progress and certain forms of
fairness at the message-passing level. From their definition, I can know
that a single actor will terminate (i.e. finish processing a message),
without ever receiving a response. If it doesn't terminate, then it isn't
an actor.

In any case, non-termination (and our ability or inability to reason about
it) was never the point. Composition is the point. If individual actors
were allowed to send an infinite number of messages in response to a single
message (thus obviating any fairness properties), then they could easily be
compositional with respect to that property.

Unfortunately, they would still fail to be compositional with respect to
other relevant properties, such as serializable state updates, or message
structure.



Any reasoning about actors and their compositionality must be done in terms
 of messages sent and received. Reasoning in other ways does not make sense
 in the actor model (as far as I understand).


Carl Hewitt was careful to include certain fairness and progress properties
in the model, in order to support a few forms of system-level reasoning.
Similarly, the notion that actor-state effectively serializes messages
(i.e. each message defines the behavior for processing the next message) is
important for safe concurrency within an actor. Do you really avoid all
such reasoning? Or is such reasoning simply at a level that you no longer
think about it consciously?



 there is no privileged frame of reference in actors, you only get messages


I'm curious what your IDE looks like. :-)

A fact is that programming is NOT like physics, in that we do have a
privileged frame of reference that is only compromised at certain
boundaries for open systems programming. It is this frame of reference that
supports abstraction, refactoring, static typing, maintenance,
optimizations, orthogonal persistence, process control (e.g. kill,
restart), live coding, and the like.

If you want an analogy, it's like having a 3D view of a 2D world. As
developers, we often use our privilege to examine our systems from frames that
no actor can achieve within our model.

This special frame of reference isn't just for humans, of course. It's just
as useful for metaprogramming, e.g. for those 'layered' languages with
which Julian opened this topic.


 Actors and actor configurations (groupings of actors)
 become indistinguishable, because they are logically equivalent for
 reasoning purposes. The only way to interact with either is to send it a
 message and to receive a message.


It is true that, from within the actor system, we cannot distinguish an
actor from an actor configuration.



 It's Actors All The Way Down.


Actors don't have clear phase separation or staging. There is no 'down',
just an ad-hoc graph. Also, individual actors often aren't decomposable
into actor configurations. A phrase I favored while developing actor
systems (before realizing their systemic problems) was 'It's actors all the
way out.'
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-08 Thread David Barbour
On Mon, Apr 8, 2013 at 6:29 PM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 The problem with this, that I see, is that [..] in my physics view of
 actors [..] Messages could be lost.


Understanding computational physics is a good thing. More people should do
it. A couple times each year I end up in discussions with people who think
software and information aren't bound by physical laws, who have never
heard of Landauer's principle, who seem to mistakenly believe that
distinction of concepts (like information and representation, or mind and
body) implies they are separable.

However, it is not correct to impose physical law on the actors model.
Actors is its own model, apart from physics.

A good question to ask is: can I correctly and efficiently implement
actors model, given these physical constraints? One might explore the
limitations of scalability in the naive model. Another good question to ask
is: is there a not-quite actors model suitable for a more
scalable/efficient/etc. implementation. (But note that the not-quite
actors model will never quite be the actors model.) Actors makes a
guarantee that every message is delivered (along with a nigh uselessly weak
fairness property), but for obvious reasons guaranteed delivery is
difficult to scale to distributed systems. And it seems you're entertaining
whether *ad-hoc message loss* is suitable.

That doesn't mean ad-hoc message-loss is a good choice, of course. I've
certainly entertained that same thought, as have others, but we can't trust
every fool thought that enters our heads.

Consider an alternative: explicitly model islands (within which no message
loss occurs) and serialized connections (bridges) between them. Disruption
and message loss could then occur in a controlled manner: a particular
bridge is lost, with all of the messages beyond a certain point falling
into the ether. Compared to ad-hoc message loss, the bridged islands design
is much more effective for reasoning about and recovering from partial
failure.
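
A rough sketch of that bridged-islands design in Python (hypothetical code):
delivery within an island is reliable, a bridge serializes traffic between
two islands, and severing the bridge is the only source of loss - one
discrete, observable event, with everything past the sever point falling
into the ether.

    class Island:
        # Within an island, delivery is reliable and ordered.
        def __init__(self, name):
            self.name, self.inbox = name, []

        def deliver(self, msg):
            self.inbox.append(msg)

    class Bridge:
        # A serialized connection between two islands; the ONLY place where
        # messages can be lost, so partial failure is localized and discrete.
        def __init__(self, a, b):
            self.a, self.b, self.up, self.ether = a, b, True, []

        def send(self, src, msg):
            dst = self.b if src is self.a else self.a
            if self.up:
                dst.deliver(msg)
            else:
                self.ether.append(msg)   # beyond the sever point: lost, but
                                         # recovery knows exactly where and when

        def sever(self):
            self.up = False

    a, b = Island('A'), Island('B')
    link = Bridge(a, b)
    link.send(a, 'hello')                # delivered to b.inbox
    link.sever()
    link.send(a, 'world')                # into the ether; b never sees it
    print(b.inbox, link.ether)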

One could *implement* either of those loss models within actors model,
perhaps requiring some global transforms. But, as we discussed earlier
regarding composition, the implementation is not relevant while reasoning
with abstractions.

Reason about the properties of each abstraction or model. Separately,
reason about whether the abstraction can be correctly (and easily,
efficiently, scalably) implemented. This is 'layering' at its finest.


 This is another hint that we might have a different mental model. I don't
 find concurrency within an actor interesting. Actors can only process one
 message at a time. So concurrency is only relevant in that sending messages
 to other actors happens in parallel. That's not an interesting property.


I find 'actors can only process one message at a time' an interesting
constraint on concurrency, and certainly a useful one for reasoning. And
it's certainly relevant with respect to composition (ability to treat an
actor configuration as an actor) and decomposition (ability to divide an
actor into an actor configuration).

Do you also think zero and one are uninteresting numbers? Well, de gustibus
non est disputandum.



 Actor behavior is a mapping function from a message that was received to
 creation of finite number of actors, sending finite number of messages, and
 changing own behavior to process the next message. This could be a simple
 dictionary lookup in the degenerate case. What's there to reason about in
 here?


Exactly what you said: finite, finite, sequential - useful axioms from
which we can derive theorems and knowledge.
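
As a sketch of those axioms (hypothetical encoding, in Python): model a
behavior as a pure function from one message to a triple of finitely many new
actors, finitely many outgoing messages, and the behavior that will process
the next message. The degenerate case really is a dictionary lookup, and
'changing own behavior' is how state change stays serialized.

    # A behavior maps one message to (new_actors, out_messages, next_behavior),
    # where new_actors is a finite list of behaviors and out_messages a finite
    # list of (target, message) pairs. Sketch only; no dispatcher included.

    def table_lookup(table):
        # Degenerate behavior: a plain dictionary lookup; same behavior next time.
        def beh(msg):
            reply_to, key = msg
            return ([], [(reply_to, table.get(key))], beh)
        return beh

    def counter(n):
        # Stateful behavior: 'become counter(n + 1)' is the only state change,
        # so message processing is trivially serialized, one at a time.
        def beh(reply_to):
            return ([], [(reply_to, n)], counter(n + 1))
        return beh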



 A fact is that programming is NOT like physics,


 This is a description


Indeed. See how easily we can create straw-man arguments with which we can
casually agree or disagree by stupidly taking sentence fragments out of
context? :-)



I like actors precisely because I CAN make programming look like physics.


I am fond of linear logic and stream processing for similar reasons. I
certainly approve, in a general sense, of developing models designed to
operate within physical constraints. But focusing on the aspects I enjoy,
or disregarding those I find uninteresting, would certainly put me at risk
of reasoning about an idealized model a few handwaves removed from the
original.



relying on global knowledge when designing an actor system seems, to me,
 not to be the right way


In our earlier discussion, you mentioned that actors model can be used to
implement lambda calculus. And this is true, given bog standard actors
model. But do you believe you can explain it from your 'physics' viewpoint?
How can you know that you've implemented, say, a Fibonacci
function with actors, if you forbid knowledge beyond what can be discovered
with messages? Especially if you allow message loss?



 I think this highlights our different frames of reference when we discuss
 actor systems. Refactoring, static typing, maintenance, optimizations, etc.
 are [..] 

Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread David Barbour
On Sun, Apr 7, 2013 at 5:44 AM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 I agree that largely, we can use more work on languages, but it seems that
 making the programming language responsible for solving all of programming
 problems is somewhat narrow.


I believe each generation of languages should address a few more of the
cross-cutting problems relative to their predecessors, else why the new
language?

But to address a problem is not necessarily to automate the solution, just
to push solutions below the level of conscious thought, e.g. into a path of
least resistance, or into simple disciplines that (after a little
education) come as easily and habitually (no matter how unnaturally) as
driving a car or looking both ways before crossing a street.


 imagine that I write a really crappy piece of code that works, in a
 corner of the program that nobody ever ends up looking in, nobody
 understands it, and it just works. If nobody ever has to touch it, and no
 bugs appear that have to be dealt with, then as far as the broader
 organization is concerned, it doesn't matter how beautiful that code is, or
 which level of Dante's Inferno it hails from


Unfortunately, it is not uncommon that bugs are difficult to isolate, and
may even exhibit in locations far removed from their source. In such cases,
having code that nobody understands can be a significant burden - one you
pay for with each new bug, even if each time you eventually determine that
the cause is elsewhere.

Such can motivate use of theorem provers: if the code is so simple or so
complex that no human can readily grasp why it works, then perhaps such
understanding should be automated, with humans on the periphery asking for
proofs that various requirements and properties are achieved.



 Of course, I can only defend the deal with it if it breaks strategy only
 so far. Every component that is built shapes it's surface area and other
 components need to mold themselves to it. Thus, if one of them is wrong, it
 gets non-linearly worse the more things are shaped to the wrong component,
 and those shape to those, etc.


Yes. Of course, even being right in different ways can cause much
awkwardness - like a bridge built from both ends not quite meeting in the
middle.



We then end up thinking about protocols, objects, actors, and so on.. and I
 end up agreeing with you that composition becomes the most desirable
 feature of a software system. I think in terms of actors/messages first, so
 no argument there :D


Actors/messaging is much more about reasoning in isolation (understanding
'each part') than composition. Consider: You can't treat a group of two
actors as a single actor. You can't treat a sequence of two messages as a
single message. There are no standard composition operators for using two
actors or messages together, e.g. to pipe output from one actor as input to
another.

It is very difficult, with actors, to reason about system-level properties
(e.g. consistency, latency, partial failure). But it is not difficult to
reason about actors individually.

I've a few articles on related issues:

[1] http://awelonblue.wordpress.com/2012/07/01/why-not-events/
[2] http://awelonblue.wordpress.com/2012/05/01/life-with-objects/
[3]
http://awelonblue.wordpress.com/2013/03/07/objects-as-dependently-typed-functions/




 To me, the most striking thing about this being the absence of a strict
 hierarchy at all, i.e., no strict hierarchical inheritance. The ability to
 mix and match various attributes together as needed seems to most closely
 resemble how we think. That's composition again, yes?


Yes, of sorts.

The ability to combine traits, flavors, soft constraints, etc. in a
standard way constitutes a form of composition. But they don't suggest rich
compositional reasoning (i.e. the set of compositional properties may be
trivial or negligible). Thus, trait composition, soft constraints, etc.
tend to be 'shallow'. Still convenient and useful, though.

I mention some related points in the 'life with objects' article (linked
above) and also in my stone soup programming article [4].

[4] http://awelonblue.wordpress.com/2012/09/12/stone-soup-programming/

(from a following message)


 robustness is a limited goal, and antifragility seems a much more worthy
 one.


Some people interpret 'robustness' rather broadly, cf. the essay 'Building
Robust Systems' from Gerald Jay Sussman [5]. In my university education,
the word opposite of fragility was 'survivability' (e.g. 'survivable
networking' was a course).

I tend to break various survivability properties into robustness (resisting
damage, security), graceful degradation (breaking cleanly and predictably;
succeeding within new capabilities), resilience (recovering quickly; self
stabilizing; self healing or easy fixes). Of course, these are all passive
forms; I'm a bit wary about developing computer systems that 'hit back'
when attacked, at least as a default policy.

[5]

Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread David Barbour
On Sun, Apr 7, 2013 at 2:56 PM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 Well... composing multiple functions does not result in the same
 termination properties as a single function either, does it? Especially
 when we are composing nondeterministic computations? (real question, not
 rhetorical) I'm having difficulty seeing how this is unique to actors.


An individual actor is guaranteed to terminate after receiving a message.
It's in the definition: upon receiving a message an actor can send out a
*finite* number of messages to other actors. But two actors can easily (by
passing messages in circles) send out an infinite number of messages to
other actors upon receiving a single message. Therefore, with respect to
this property, you cannot (in general) reason about or treat groups of two
actors as though they were a single actor.
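
A minimal sketch of that failure mode in Python (all names hypothetical):
each behavior below sends exactly one message per message received - finite,
as the definition demands - yet the two-actor configuration answers a single
external message with unbounded internal traffic, so no single actor can
stand in for it.

    from collections import deque

    class System:
        # Toy dispatcher: a behavior is a function (system, self_addr, msg).
        def __init__(self):
            self.behaviors = {}
            self.pending = deque()

        def create(self, name, behavior):
            self.behaviors[name] = behavior

        def send(self, target, msg):
            self.pending.append((target, msg))

        def run(self, max_steps):
            steps = 0
            while self.pending and steps < max_steps:
                target, msg = self.pending.popleft()
                self.behaviors[target](self, target, msg)
                steps += 1
            return steps

    def ping(sys, self_addr, msg):
        sys.send(msg, self_addr)       # exactly ONE message per message: finite

    sys = System()
    sys.create('a', ping)
    sys.create('b', ping)
    sys.send('a', 'b')                 # one external message...
    print(sys.run(max_steps=1000))     # ...never quiesces; prints 1000 (the cap)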

Similarly, there is no guarantee that two actors will finish processing
even a single message before they start processing the next one. A group of
actors thus lacks the atomicity and consistency properties of individual
actors.

(Of course, all this would be irrelevant if these weren't properties people
depend on when reasoning about system correctness. But they are.)

As you note, actors are not unique in their non-termination. But that
misses the point. The issue was our ability to reason about actors
compositionally, not whether termination is a good property.



 But how do you weigh freedom to make choices for the task at hand even
 if they're bad choices for the tasks NOT immediately at hand (such as
 integration, maintenance)?


 For this, I think all of us fall back on heuristics as to what's a good
 idea. But those heuristics come from past experience.


We also use models - e.g. actors/messaging, databases, frameworks, VMs,
pubsub, etc. - with understanding (or hope) that they're supposed to help.
We can then focus our brainpower on the task at hand with an
understanding that various cross-cutting concerns are either addressed or
can be at leisure.

And that's exactly what a good language should be doing - lowering the bar
for those cross-cutting concerns.


 Hmm. Based on your response, I think that we define event systems
 differently.


I provided a definition for 'events' in my article.

By events, I include commands, messages, procedure calls, other
conceptually `instantaneous` values in the system that independently effect
or report changes in state.


The 'instantaneous', 'independent', and 'changes in state' are all relevant
to my arguments. If you mean something different by event systems, we might
be talking past one another.



The system as a whole seems to me to be resilient, and I don't see that
 even if there is no generic way to re-establish connection at TCP/IP level
 this degrades resiliency somehow.


We can develop systems that exhibit resilience due to ad-hoc mechanisms.
That doesn't necessarily degrade resiliency, but it certainly loses the
generic and creates a lot of repeated development work.

Review my assertion with an understanding that I was emphasizing the
generic (and certainly not saying event systems lack resilience):

*Event systems lack generic resilience.* Developers have built patterns for
resilient event systems – timeouts, retries, watchdogs, command patterns.
Unfortunately, these patterns require careful attention to the specific
events model – i.e. where retries are safe, where losing an event is safe,
which subsystems can be restarted. Many of these recovery models are not
compositional – e.g. timeouts are not compositional because we need to
understand the timeouts of each subsystem. Many are non-deterministic and
work poorly if replicated across views. By comparison, state models can
generically achieve simple resilience properties like eventual consistency
and snapshot consistency. Often, developers in event systems will
eventually reinvent a more declarative, RESTful approach – but typically in
a non-composable, non-reentrant, glitchy, lossy, inefficient, buggy,
high-overhead manner (like observer patterns).


To the extent that 'multiple layers are at play' - I agree, but I believe
it's the non-eventful layers that contribute to resilience even as the
eventful ones (and POST) do their level best to tear it away.

Regards,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Layering, Thinking and Computing

2013-04-07 Thread David Barbour
On Sun, Apr 7, 2013 at 2:56 PM, Tristan Slominski 
tristan.slomin...@gmail.com wrote:

 stability is not necessarily the goal. Perhaps I'm more in the biomimetic
 camp than I think.


Just keep in mind that the real world has quintillions of bugs. In
software, humans are probably still under a trillion.  :)
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Design of web, POLs for rules. Fuzz testing nile

2013-02-15 Thread David Barbour
On Fri, Feb 15, 2013 at 12:10 AM, John Carlson yottz...@gmail.com wrote:

 The way I read rest over http post (wikipedia) is that you either create a
 new entry in a collection uri, or you create a new entry in the element
 uri, which becomes a collection.

There are other options. For example, if you have a URI that represents a
particular subset of records in a table (i.e. perhaps the URI itself
contains a WHERE clause) then you could GET or PUT (or DELETE) just that
subset of records in a sensible and RESTful manner.

(Alternatively, you can PUT a resource that defines a view, then PUT some
values into the view's results. But this requires code distribution, which
is unpopular for a lot of security reasons.)

For REST in general, the resource identifier schema can be a lot more
flexible or structured than HTTP's URIs (and aren't necessarily limited to
a few kilobytes). But even HTTP's URIs can be applied creatively.
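
A sketch of that WHERE-clause idea in Python (URIs, schema, and handlers all
hypothetical): let a URI's query string name a subset of records, so GET
returns exactly that subset and PUT replaces exactly that subset -
idempotently, and still RESTfully.

    from urllib.parse import urlparse, parse_qs

    TABLE = {                             # hypothetical table of records
        1: {'city': 'Austin', 'status': 'open'},
        2: {'city': 'Boston', 'status': 'open'},
        3: {'city': 'Austin', 'status': 'closed'},
    }

    def criteria(uri):
        # '/records?city=Austin' names the subset WHERE city = 'Austin'
        q = parse_qs(urlparse(uri).query)
        return {k: vs[0] for k, vs in q.items()}

    def get(uri):
        c = criteria(uri)
        return {k: r for k, r in TABLE.items()
                if all(r.get(f) == v for f, v in c.items())}

    def put(uri, body):
        # Replace exactly the named subset: applying it twice changes nothing
        # the second time, so it retries and pipelines safely.
        for k in list(get(uri)):
            del TABLE[k]
        TABLE.update(body)

    print(get('/records?city=Austin'))    # just the Austin rows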

I read in rfc2616 quoted on stackoverflow that clients should not pipeline
 non-idempotent methods or non-idempotent sequences of methods.  So
 basically that tosses pipelining of posts.  Face it, REST over HTTP is slow

PUTs are idempotent and can safely be pipelined. It isn't clear to me why
you're using POST as the measure for REST performance.

(POST is not very RESTful. The content of POST does not represent the state
of any existing resource. It takes discipline to consistently use POST in a
RESTful manner. POST is effectively HTTP's version of RPC, with all the
attendant problems. GET and PUT are both much more closely aligned with
REST.)
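
The idempotence distinction is easy to make concrete (a minimal sketch with a
toy state model, not any particular HTTP stack): replaying a PUT converges to
the same state, so a duplicated or pipelined PUT is harmless, while replaying
a POST-style append is not.

    state = {}

    def put(uri, value):
        state[uri] = value                       # put; put  ==  put

    def post(uri, value):
        state.setdefault(uri, []).append(value)  # post; post  !=  post

    put('/a', 1); put('/a', 1)
    assert state['/a'] == 1                      # duplicate delivery: invisible

    post('/b', 1); post('/b', 1)
    assert state['/b'] == [1, 1]                 # duplicate delivery: observable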
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Design of web, POLs for rules. Fuzz testing nile

2013-02-15 Thread David Barbour
On Fri, Feb 15, 2013 at 12:56 AM, J. Andrew Rogers and...@jarbox.orgwrote:

 REST is not a highly efficient protocol by any means


Indeed. REST isn't even a protocol.


 The largest total consumer of CPU time [.. is ..] parsing JSON source
 formats


While this is a good point, it isn't clear to me that it applies more to
RESTful communication systems than to others. Even SOAP or RPC often uses
structured text.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


  1   2   3   >