Hi Shaun

I'll take a look at this when I get a chance (probably a few weeks from now). 
As I think I mentioned back in 1993 when the history was written, I had to try 
to do a recreation of this because I couldn't find the original one pager, and 
there are likely to be bugs. In the original (done in '72) I committed the 
classic blunder of trying to be too clever and to accomplish too many goals in 
this one project -- in particular I wanted to show how this could be actually 
done on a sequential machine with a small memory. 


So instead of writing a recursive program in a recursive language using cons 
pairs, I wrote it as a loop with an explicit activation record structure, 
"states" for indicating where the different jumps to "eval" should be returned 
to, and arrays and indices instead of lists. This was like trying to write the 
second illustration of the Lisp interpreter (how it really ran on the 709) that 
is given in the Lisp 1.5 manual.
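
To give the flavor -- this is just an illustrative sketch in modern Python, 
not the original code -- the recursive eval turns into one loop driving an 
explicit stack of activation records, with a "state" field saying where each 
nested evaluation should return to:

    # Toy expression language: numbers, variable names, and [op, arg, arg] lists.
    # The point is the control structure, not the language.
    OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

    def eval_loop(expr, env):
        # each activation record: [state, pending exprs, operator, evaluated args]
        stack = [["EVAL", expr, None, None]]
        value = None                             # result of the most recent evaluation
        while stack:
            frame = stack[-1]
            state = frame[0]
            if state == "EVAL":
                e = frame[1]
                if isinstance(e, (int, float)):  # literal
                    value = e
                    stack.pop()
                elif isinstance(e, str):         # variable lookup
                    value = env[e]
                    stack.pop()
                else:                            # [op, e1, e2]: go evaluate the args
                    frame[0], frame[2], frame[3] = "ARGS", e[0], []
                    frame[1] = list(e[1:])       # args still to be evaluated
            elif state == "ARGS":
                if frame[1]:                     # more args: evaluate the next one
                    frame[0] = "COLLECT"         # ...and come back here with it
                    stack.append(["EVAL", frame[1].pop(0), None, None])
                else:                            # all args done: apply and "return"
                    value = OPS[frame[2]](*frame[3])
                    stack.pop()
            elif state == "COLLECT":             # a nested eval just returned
                frame[3].append(value)
                frame[0] = "ARGS"
        return value

    print(eval_loop(["+", 1, ["*", "x", 3]], {"x": 4}))   # -> 13

Even at toy scale you can see how easy it is to lose track of which state is 
supposed to return where.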


Another big complication in the original was that I tried to have "returns" be 
symmetric with sends (by interpreting the results in a similar fashion).

I also wanted this to be "network like" and "language like" at the same time, 
"function like" and "object like", "dynamically extensible with nice syntax", 
etc. etc. I wanted to use the Dave Fisher extensible control ideas as well as 
extensible forms.


I would have been much better off to have written it McCarthy-style first and 
then hand-compiled it to its loop and jump form (and John's several versions 
had bugs because it was hard to hold the whole thing in one's head, even in his 
amazing head). 


Or, even better, to have written it in the style of the Flex Machine, which was 
a takeoff on the way Wirth did his Euler byte-code compiler and B5000-like 
interpreter with a better parsing scheme. This latter would have been clearer 
-- and more like the eventual Smalltalk-76, but I was pretty sure that it 
wouldn't fit into a half page (or even one page). Wirth's didn't, even though 
it was nice and small.


One way to recreate this is to take a look at the logic of the approach, which 
in Lisp terms would be to use something like FEXPRs to be able to look at raw 
expressions, and to APPLY results from functions to the rest of an expression 
list to avoid having to parenthesize every expression individually.
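
As a sketch of that logic (illustrative Python again, not Lisp and not the 
original): each form gets the raw, unevaluated remainder of the expression 
list, takes as much of it as it needs, and hands back a value together with 
whatever is left over:

    # Each "form" is FEXPR-like: it sees raw expressions and returns
    # (value, rest-of-the-expression-list), so sub-expressions don't
    # each need their own parentheses.

    def f_quote(rest, env):
        return rest[0], rest[1:]           # value is the unevaluated form itself

    def f_add(rest, env):
        a, rest = interpret(rest, env)     # evaluate just enough of the stream
        b, rest = interpret(rest, env)
        return a + b, rest

    FORMS = {"quote": f_quote, "+": f_add}

    def interpret(exprs, env):
        head, rest = exprs[0], exprs[1:]
        if isinstance(head, (int, float)):
            return head, rest              # literal
        if head in FORMS:
            return FORMS[head](rest, env)  # APPLY the form to the rest of the list
        return env[head], rest             # plain variable

    value, leftover = interpret(["+", 1, "+", "x", 2], {"x": 4})
    print(value)                           # 7 -- "+ 1 + x 2" with no extra parens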


Once looked at this way the functions can be seen as also being like the 
"meta-linguistic procedures" of Meta II (and you have something that is like a 
parser being used as a direct interpreter). Meta II could make itself (but not 
its VM) in about half a page, so this was suggestive also.

A key to this last point is that it meant that the "ways of matching and 
binding" could be written as separate functions and did not have to be part of 
the actual interpreter. These functions would have to use the Fisher 
environment context to allow this to be done, and the idea was that one was 
making/extending "too powerful" control environments to make "still powerful 
but safe" control and environment structures.


The "grammar as interface" scheme also has some of the flavor of Irons' IMP. 


Objects would be like closures and you would be unifying functions and objects. 


In abstraction, one would thus have a "message send" to an object just notify 
the object that a message is there and to give the object just the pointer to 
the message. The object would then try to recognize and receive the message via 
pattern matching/parsing. This preserves complete encapsulation and implements 
a real "Internet" style of sending and receiving messages between perfectly 
protected virtual machines. "Extensible languages" are created via new object 
classes that implement new grammars.
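
Here is a tiny sketch of that protocol (again just illustrative Python, not 
how Smalltalk-71 was actually done): the sender does nothing but hand the 
object a reference to the raw message; the object recognizes it with its own 
matcher functions -- the separate matching/binding functions mentioned above 
-- so nothing of its insides leaks out:

    class Obj:
        """A perfectly protected little virtual machine."""
        def __init__(self, rules):
            self.rules = rules                 # list of (matcher, handler) pairs

        def send(self, message):
            # "send" = notify the object and give it a pointer to the raw message
            for matcher, handler in self.rules:
                bindings = matcher(message)    # pattern matching / parsing
                if bindings is not None:       # the object recognized the message
                    return handler(**bindings)
            return None                        # message not understood

    # Matchers are ordinary functions, outside the interpreter proper.
    def match_fact0(msg):
        return {} if msg == ["factorial", 0] else None

    def do_fact0():
        return 1

    def match_factn(msg):
        if len(msg) == 2 and msg[0] == "factorial" and isinstance(msg[1], int):
            return {"n": msg[1]}
        return None

    def do_factn(n):
        return n * fact.send(["factorial", n - 1])

    fact = Obj([(match_fact0, do_fact0), (match_factn, do_factn)])
    print(fact.send(["factorial", 5]))         # -> 120

Note that the matching doesn't "consume" anything; the message is still there 
after the object has recognized it.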

The logic of this approach heavily influenced the subsequent development of 
Hewitt's Actors formalisms (though they didn't seem to understand that the 
"APPLY" technique didn't "consume" anything -- so the metaphor I unfortunately 
used of "objects eating their messages" was vivid but not accurate and led to 
misinterpretations).


Dan Ingalls was a much better programmer than I was, so it only took him about 
a month to reimplement the "one-pager" in a real form to produce a real 
language. This history was interestingly like that of Lisp, where McCarthy was 
able to find a nice model, and Steve Russell reimplemented it to make a working 
Lisp.

Looking back: "Wow, why did I try to win the bet that way?"


Cheers,

Alan






>________________________________
> From: shaun gilchrist <shaunxc...@gmail.com>
>To: Fundamentals of New Computing <fonc@vpri.org> 
>Sent: Thursday, March 15, 2012 9:28 PM
>Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
> 
>
>Alan, 
>
>Regarding these quotes from "the early history of smalltalk" re ST 71:
>
>I had originally made the boast because McCarthy’s self-describing 
>LISP interpreter was written in itself. It was about “a page”, and as 
>far as power goes, LISP was the whole nine-yards for functional 
>languages. I was quite sure I could do the same for object-oriented 
>languages plus be able to do a reasonable syntax for the code a la some of the 
>FLEX machine techniques.
>
>Once I read that I HAD to see it. I finally tracked 
>down a PDF version of the history of smalltalk which contained the mythical 
>appendix III and thus the "one pager". The scan is pretty bad so I have 
>attempted to transcribe it into a plain text file, which I am attaching here. I 
>was wondering if you could take a look and see if it is roughly correctly 
>transcribed? For the sake of completeness I am working on SVG versions of the 
>diagrams. 
>
>My main questions specifically relating to the one pager are:
>
>1. Does my use of "." instead of the other symbol (e.g. e.MSG v.s. 
>e{other-char}MSG) present an issue?
>2. I had a hard time making out the single-quoted characters on lines 23 and 
>24 following the e.MSG.PC, my best guess was a comma and a period (which 
>relates to my first question, is the character actually the same as 
>{other-char})? 
>3. regarding line 54 and the "etc..." any idea what that ellipsis would be 
>once expanded haha? Would it just be a full definition of escapes or would it 
>be further definitions relating to the interpreter?
>
>4. line 62 where I put {?} what is that character meant to be? I believe it is 
>the same as what is on line 70 and also marked as {?}. 
>
>5. Is it implied that things like quote, set (<-), Table, null, atom, notlist, 
>escape, goto, if-then-else, select-case, +, > etc. would exist as primitives?
>
>Any other insight you could provide would be much appreciated. Thanks -Shaun
>
>
>On Thu, Mar 15, 2012 at 6:40 PM, Andre van Delft <andre.vande...@gmail.com> 
>wrote:
>
>The theory Algebra of Communicating Processes (ACP) 
>>offers non-determinism (as in Meta II) plus concurrency.
>>I will present a paper on extending Scala with ACP 
>>next month at Scala Days 2012. For an abstract, see
>>http://days2012.scala-lang.org/node/92
>>
>>
>>A non-final version of the paper is at 
>>http://code.google.com/p/subscript/downloads/detail?name=SubScript-TR2012.pdf
>>
>>
>>André
>>
>>Op 15 mrt. 2012, om 03:03 heeft Alan Kay het volgende geschreven:
>>
>>
>>Well, it was very much a "mythical beast" even on paper -- and you really 
>>have to implement programming languages and make a lot of things with them to 
>>be able to assess them ....
>>>
>>>
>>>
>>>But -- basically -- since meeting Seymour and starting to think about 
>>>children and programming, there were eight systems that I thought were 
>>>really nifty and cried out to be unified somehow:
>>>  1. Joss
>>>  2. Lisp
>>>  3. Logo -- which was originally a unification of Joss and Lisp, but I 
>>>thought more could be done in this direction.
>>>  4. Planner -- a big set of ideas (long before Prolog) by Carl Hewitt for 
>>>logic programming and "pattern directed inference" both forward and 
>>>backwards with backtracking
>>>  5. Meta II -- a super simple meta parser and compiler done by Val Schorre 
>>>at UCLA ca 1963
>>>  6. IMP -- perhaps the first real extensible language that worked well -- 
>>>by Ned Irons (CACM, Jan 1970)
>>>
>>>  7. The Lisp-70 Pattern Matching System -- by Larry Tesler, et al, with 
>>>some design ideas by me
>>>
>>>  8. The object and pattern directed extension stuff I'd been doing 
>>>previously with the Flex Machine and afterwards at SAIL (that also was 
>>>influenced by Meta II)
>>>
>>>
>>>
>>>One of the schemes was to really make the pattern matching parts of this 
>>>"work for everything" that eventually required "invocations and binding". 
>>>This was doable semantically but was a bear syntactically because of the 
>>>different senses of what kinds of matching and binding were intended for 
>>>different problems. This messed up the readability and desired "simple 
>>>things should be simple".
>>>
>>>Examples I wanted to cover included simple translations of languages 
>>>(English to Pig Latin, English to French, etc.; some of these had been done 
>>>in Logo), the Winograd robot block stacking and other examples done with 
>>>Planner, the making of the language the child was using, message sending and 
>>>receiving, extensions to Smalltalk-71, and so forth.
>>>
>>>I think today the way to try to do this would be with a much more graphical 
>>>UI than with text -- one could imagine tiles that would specify what to 
>>>match, and the details of the match could be submerged a bit.
>>>
>>>More recently, both OMeta and several of Ian's matchers can handle multiple 
>>>kinds of matching with binding and do backtracking, etc., so one could 
>>>imagine a more general language that could be based on this.
>>>
>>>On the other hand, trying to stuff 8 kinds of language ideas into one new 
>>>language in a graceful way could be a siren's song of a goal.
>>>
>>>Still ....
>>>
>>>Cheers,
>>>
>>>Alan
>>>
>>>
>>>
>>>
>>>>________________________________
>>>> From: shaun gilchrist <shaunxc...@gmail.com>
>>>>To: fonc@vpri.org 
>>>>Sent: Wednesday, March 14, 2012 11:38 AM
>>>>Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
>>>> 
>>>>
>>>>Alan, 
>>>>
>>>>"I would go way back to the never implemented Smalltalk-71"
>>>>
>>>>Is there a formal specification of what 71 should have been? I have only 
>>>>ever read about it in passing reference in the various histories of 
>>>>smalltalk as a step on the way to 72, 76, and finally 80. 
>>>>
>>>>I am very intrigued as to what sets 71 apart so dramatically. -Shaun
>>>>
>>>>
>>>>On Wed, Mar 14, 2012 at 12:29 PM, Alan Kay <alan.n...@yahoo.com> wrote:
>>>>
>>>>Hi Scott --
>>>>>
>>>>>
>>>>>1. I will see if I can get one of these scanned for you. Moore tended to 
>>>>>publish in journals and there is very little of his stuff available on 
>>>>>line.
>>>>>
>>>>>
>>>>>2.a. "if (a<b) { ... }" is easier to read than "if a<b then ..."? There is 
>>>>>no hint of the former being tweaked for decades to make it easier to read.
>>>>>
>>>>>
>>>>>Several experiments from the past cast doubt on the rest of the idea. At 
>>>>>Disney we did a variety of "code display" generators to see what kinds of 
>>>>>transformations we could do to the underlying Smalltalk (including 
>>>>>syntactic) to make it something that could be subsetted as a "growable 
>>>>>path from Etoys". 
>>>>>
>>>>>
>>>>>
>>>>>We got some good results from this (and this is what I'd do with 
>>>>>Javascript in both directions -- Alex Warth's OMeta is in Javascript and 
>>>>>is quite complete and could do this).
>>>>>
>>>>>
>>>>>However, the showstopper was all the parentheses that had to be rendered 
>>>>>in tiles. Mike Travers at MIT had done one of the first tile based editors 
>>>>>for a version of Lisp that he used, and this was even worse.
>>>>>
>>>>>
>>>>>More recently, Jens Moenig (who did SNAP) also did a direct renderer and 
>>>>>editor for Squeak Smalltalk (this can be tried out) and it really seemed 
>>>>>to be much too cluttered.
>>>>>
>>>>>
>>>>>One argument for some of this, is "well, teach the kids a subset that 
>>>>>doesn't use so many parens ...". This could be a solution.
>>>>>
>>>>>
>>>>>However, in the end, I don't think Javascript semantics is particularly 
>>>>>good for kids. For example, one of features of Etoys that turned out to be 
>>>>>very powerful for children and other Etoy programmers is the easy/trivial 
>>>>>parallel methods execution. And there are others in Etoys and yet others 
>>>>>in Scratch that are non-standard in regular programming languages but are 
>>>>>very powerful for children (and some of them are better than standard CS 
>>>>>language ideas).
>>>>>
>>>>>
>>>>>I'm encouraging you to do something better (that would be ideal). Or at 
>>>>>least as workable. Giving kids less just because that's what an existing 
>>>>>language for adults has is not a good tactic.
>>>>>
>>>>>
>>>>>
>>>>>2.c. Ditto 2.a. above
>>>>>
>>>>>
>>>>>2.d. Ditto above
>>>>>
>>>>>
>>>>>Cheers,
>>>>>
>>>>>
>>>>>Alan
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>________________________________
>>>>>> From: C. Scott Ananian <csc...@laptop.org>
>>>>>>To: Alan Kay <alan.n...@yahoo.com> 
>>>>>>Cc: IAEP SugarLabs <i...@lists.sugarlabs.org>; Fundamentals of New 
>>>>>>Computing <fonc@vpri.org>; Viewpoints Research <a...@vpri.org> 
>>>>>>Sent: Wednesday, March 14, 2012 10:25 AM
>>>>>>Subject: Re: [IAEP] [fonc] Barbarians at the gate! (Project Nell)
>>>>>> 
>>>>>>
>>>>>>
>>>>>>On Wed, Mar 14, 2012 at 12:54 PM, Alan Kay <alan.n...@yahoo.com> wrote:
>>>>>>
>>>>>>The many papers from this work greatly influenced the thinking about 
>>>>>>personal computing at Xerox PARC in the 70s. Here are a couple:
>>>>>>>
>>>>>>>
>>>>>>>-- O. K. Moore, Autotelic Responsive Environments and Exceptional 
>>>>>>>Children, Experience, Structure and Adaptabilty (ed. Harvey), Springer, 
>>>>>>>1966
>>>>>>>-- Anderson and Moore, Autotelic Folk Models, Sociological Quarterly, 
>>>>>>>1959
>>>>>>>
>>>>>>
>>>>>>
>>>>>>Thank you for these references.  I will chase them down and learn as much 
>>>>>>as I can.
>>>>>> 
>>>>>>2. Separating out some of the programming ideas here:
>>>>>>>
>>>>>>>
>>>>>>>a. Simplest one is that the most important users of this system are the 
>>>>>>>children, so it would be a better idea to make the tile scripting look 
>>>>>>>as easy for them as possible. I don't agree with the rationalization in 
>>>>>>>the paper about "preserving the code reading skills of existing 
>>>>>>>programmers".
>>>>>>
>>>>>>
>>>>>>I probably need to clarify the reasoning in the paper for this point.
>>>>>>
>>>>>>
>>>>>>"Traditional" text-based programming languages have been tweaked over 
>>>>>>decades to be easy to read -- for both small examples and large systems.  
>>>>>>It's somewhat of a heresy, but I thought it would be interesting to 
>>>>>>explore a tile-based system that *didn't* throw away the traditional text 
>>>>>>structure, and tried simply to make the structure of the traditional text 
>>>>>>easier to visualize and manipulate.
>>>>>>
>>>>>>
>>>>>>So it's not really "skills of existing programmers" I'm interested in -- 
>>>>>>I should reword that.  It's that I feel we have an existence proof that 
>>>>>>the traditional textual form of a program is easy to read, even for very 
>>>>>>complicated programs.  So I'm trying to scale down the thing that works, 
>>>>>>instead of trying to invent something new which proves unwieldy at scale.
>>>>>>
>>>>>>
>>>>>>b. Good idea to go all the way to the bottom with the children's language.
>>>>>>>
>>>>>>>
>>>>>>>c. Figure 2 introduces another -- at least equally important language -- 
>>>>>>>in my opinion, this one should be made kid usable and programmable -- 
>>>>>>>and I would try to see how it could fit with the TS language in some 
>>>>>>>way. 
>>>>>>>
>>>>>>
>>>>>>
>>>>>>This language is JSON, which is just the object-definition subset of 
>>>>>>JavaScript.  So it can in fact be expressed with TurtleScript tiles.  
>>>>>>(Although I haven't yet tackled quasiquote in TurtleScript.)
>>>>>>
>>>>>>
>>>>>>d. There is another language -- AIML -- introduced for recognizing 
>>>>>>things. I would use something much nicer, easier, more readable, etc., -- 
>>>>>>like OMeta -- or more likely I would go way back to the never implemented 
>>>>>>Smalltalk-71 (which had these and some of the above features in its 
>>>>>>design and also tried to be kid usable) -- and try to make a version that 
>>>>>>worked (maybe too hard to do in general or for the scope of this project, 
>>>>>>but you can see why it would be nice to have all of the mechanisms that 
>>>>>>make your system work be couched in kid terms and looks and feels if 
>>>>>>possible).
>>>>>>
>>>>>>
>>>>>>This I completely agree with.  The AIML will be translated to JSON on the 
>>>>>>device itself.  The use of AIML is a compromise: it exists and has 
>>>>>>well-defined semantics and does 90% of what I'd like it to do.  It also 
>>>>>>has an active community who have spent a lot of time building reasonable 
>>>>>>dialog rules in AIML.  At some point it will have to be extended or 
>>>>>>replaced, but I think it will get me through version 1.0 at least.
>>>>>> 
>>>>>>I'll probably translate the AIML example to JSON in the next revision of 
>>>>>>the paper, and state the relationship of JSON to JavaScript and 
>>>>>>TurtleScript more precisely.
>>>>>>
>>>>>>
>>>>>>3. It's out of the scope of your paper and these comments to discuss 
>>>>>>"getting kids to add other structures besides stories and narrative to 
>>>>>>think with". You have to start with stories, and that is enough for now. 
>>>>>>A larger scale plan (you may already have) would involve a kind of 
>>>>>>weaning process to get kids to add non-story thinking (as is done in math 
>>>>>>and science, etc.) to their skills. This is a whole curriculum of its own.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>I make these comments because I think your project is a good idea, on 
>>>>>>>the right track, and needs to be done
>>>>>>
>>>>>>
>>>>>>Thank you.  I'll keep your encouragement in mind during the hard work of 
>>>>>>implementation.
>>>>>>  --scott
>>>>>>
>>>>>>-- 
>>>>>>      ( http://cscott.net )
>>>>>>
>>>>>>
>>>>>>
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
