Re: [fonc] Modern General Purpose Programming Language (Was: Task management in a world without apps.)

2013-11-04 Thread Alan Kay
Each to their own, but we have always started with 10-20 example cases that 
we'd like to be really "well fitted" and "nice", plus a few possible "powerful 
principles". I.e., "expressiveness" is usually the main aim. If this seems to be 
promising then there are lots of ways to approach fast-enough implementation 
(including making new hardware or using FPGAs, etc.)

Cheers,

Alan




>________
> From: Loup Vaillant-David 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Cc: karl ramberg  
>Sent: Monday, November 4, 2013 4:00 PM
>Subject: Modern General Purpose Programming Language (Was: Task management in 
>a world without apps.)
> 
>
>On Sun, Nov 03, 2013 at 04:11:15AM -0800, Alan Kay wrote:
>
>> if we were to attempt an ultra high level general purpose language
>> today, we wouldn't use Squeak or any other Smalltalk as a model or a
>> starting place.
>
>May I ask what would be an acceptable starting point?  Maru, maybe?
>
>Loup.
>
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Task management in a world without apps.

2013-11-03 Thread Alan Kay
When I mention Smalltalk I always point to the 40 year ago past because it was 
then that the language and its implementation were significant. It was quite 
clear by the late 70s that many of the compromises (some of them wonderfully 
clever) that were made in order to run on the tiny machines of the day were not 
going to scale well.

It's worth noting that both the "radical" desire to "burn the disk packs", 
*and* the "sensible" desire to use "powers that are immediately available" make 
sense in their respective contexts. But we shouldn't confuse the two desires. 
I.e. if we were to attempt an ultra high level general purpose language today, 
we wouldn't use Squeak or any other Smalltalk as a model or a starting place.

Cheers,

Alan




>________
> From: karl ramberg 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Sunday, November 3, 2013 3:18 AM
>Subject: Re: [fonc] Task management in a world without apps.
> 
>
>
>One issue with the instance development in Squeak is that it is quite fragile. 
>It is easy to pull the building blocks apart and it all falls down like a 
>house of cards. 
>
>
>It's currently hard to work on different parts and individually version them 
>independent of the rest of the system. All parts are versioned by the whole 
>project.
>
>
>It is also quite hard to reuse separate parts and share them with others. Now 
>you must share a whole project and pull out the parts you want.
>
>
>I look forward to using more rugged tools for instance programming/creation 
>:-)
>
>
>Karl

Re: [fonc] Task management in a world without apps.

2013-10-31 Thread Alan Kay
It's worth noting that this was the scheme at PARC and was used heavily later 
in Etoys. 

This is why Smalltalk has unlimited numbers of "Projects". Each one is a 
persistent environment that serves both as a place to make things and as a 
"page" of "desktop media". 

There are no apps, only objects and any and all objects can be brought to any 
project which will preserve them over time. This avoids the stovepiping of 
apps. Dan Ingalls (in Fabrik) showed one UI and scheme to integrate the 
objects, and George Bosworth's PARTS system showed a similar but slightly 
different way.

Also there is no "presentation app" in Etoys, just an object that allows 
projects to be put in any order -- and there can be many such orderings, all 
preserved -- and there is an object that will move from one project to the next 
as you give your talk. "Builds" etc. are all done via Etoy scripts.

This allows the full power of the system to be used for everything, including 
presentations. You can imagine how appalled we were by the appearance of 
Persuasion and PowerPoint, etc.

Etc.

We thought we'd done away with both "operating systems" and with "apps" but 
we'd used the wrong wood in our stakes -- the vampires came back in the 80s.

One of the interesting misunderstandings was that Apple and then MS didn't 
really understand the universal viewing mechanism (MVC) so they thought views 
with borders around them were "windows" and views without borders were part of 
"desktop publishing", but in fact all were the same. The Xerox Star confounded 
the problem by reverting to a single desktop and apps and missed the real media 
possibilities.

They divided a unified media world into two regimes, neither of which are very 
good for end-users.

Cheers,

Alan




>
> From: David Barbour 
>To: Fundamentals of New Computing  
>Sent: Thursday, October 31, 2013 8:58 AM
>Subject: Re: [fonc] Task management in a world without apps.
> 
>
>
>Instead of 'applications', you have objects you can manipulate (compose, 
>decompose, rearrange, etc.) in a common environment. The state of the system, 
>the construction of the objects, determines not only how they appear but how 
>they behave - i.e. how they influence and observe the world. Task management 
>is then simply rearranging objects: if you want to turn an object 'off', you 
>'disconnect' part of the graph, or perhaps you flip a switch that does the 
>same thing under the hood. 
>
>
>This has very physical analogies. For example, there are at least two ways to 
>"task manage" a light: you could disconnect your lightbulb from its socket, or 
>you could flip a lightswitch, which opens a circuit.
>
>
>There are a few interesting classes of objects, which might be described as 
>'tools'. There are tools for your hand, like different paintbrushes in Paint 
>Shop. There are also tools for your eyes/senses, like a magnifying glass, 
>x-ray goggles, heads-up display, event notifications, or language translation. 
>And there are tools that touch both aspects - like a projectional editor or 
>lenses. If we extend the user-model with concepts like 'inventory', and 
>programmable tools for both hand and eye, those can serve as another form of 
>task management. When you're done painting, put down the paintbrush.
>
>
>This isn't really the same as switching between tasks. I.e. you can still get 
>event notifications on your heads-up-display while you're editing an image. 
>It's closer to controlling your computational environment by direct 
>manipulation of structure that is interpreted as code (aka live programming).
>
>
>Best,
>
>
>Dave
>
>
>
>
>
>
>On Thu, Oct 31, 2013 at 10:29 AM, Casey Ransberger  
>wrote:
>
>A fun, but maybe idealistic idea: an "application" of a computer should just 
>be what one decides to do with it at the time.
>>
>>I've been wondering how I might best switch between "tasks" (or really things 
>>that aren't tasks too, like toys and documentaries and symphonies) in a world 
>>that does away with most of the application level modality that we got with 
>>the first Mac.
>>
>>The dominant way of doing this with apps usually looks like either the OS X 
>>dock or the Windows 95 taskbar. But if I wanted less shrink wrap and more 
>>interoperability between the virtual things I'm interacting with on a 
>>computer, without forcing me to "multitask" (read: do more than one thing at 
>>once, very badly), what does my best possible interaction language look like?
>>
>>I would love to know if these tools came from some interesting research once 
>>upon a time. I'd be grateful for any references that can be shared. I'm also 
>>interested in hearing any wild ideas that folks might have, or great ideas 
>>that fell by the wayside way back when.
>>
>>Out of curiosity, how does one change one's "mood" when interacting with 
>>Frank?
>>
>>Casey
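Barbour's two ways to "task manage" a light, quoted above, can be sketched as operations on a toy object graph. This is only an illustrative sketch (Python; the `Environment`, `Switch`, and `Bulb` names are hypothetical, not from any real system): turning something "off" is either an edit to the graph itself or a state change in one node.

```python
class Environment:
    """A toy object graph: edges wire power sources to consumers."""
    def __init__(self):
        self.edges = set()                 # (source, sink) pairs

    def connect(self, source, sink):
        self.edges.add((source, sink))

    def disconnect(self, source, sink):    # "unplug the bulb"
        self.edges.discard((source, sink))

    def powered(self, sink):
        # A sink is on iff some closed source is wired to it.
        return any(s is sink and src.closed for src, s in self.edges)

class Switch:
    def __init__(self):
        self.closed = True

    def flip(self):                        # "flip the lightswitch"
        self.closed = not self.closed

class Bulb:
    pass

env, switch, bulb = Environment(), Switch(), Bulb()
env.connect(switch, bulb)
print(env.powered(bulb))       # True: wired, and the switch is closed

switch.flip()                  # way 1: open the circuit, graph unchanged
print(env.powered(bulb))       # False

switch.flip()
env.disconnect(switch, bulb)   # way 2: edit the graph itself
print(env.powered(bulb))       # False
```

Both routes have the same observable effect; the difference is whether the "off" state lives in the structure of the environment or inside an object, which is exactly the distinction in the quoted message.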

Re: [fonc] Software Crisis (was Re: Final STEP progress report abandoned?)

2013-09-09 Thread Alan Kay
Check out "Smallstar" by Dan Halbert at Xerox PARC (written up in a PARC 
"bluebook")

Cheers,

Alan



 From: John Carlson 
To: Fundamentals of New Computing  
Sent: Monday, September 9, 2013 3:47 PM
Subject: Re: [fonc] Software Crisis (was Re: Final STEP progress report 
abandoned?)
 


One thing you can do is create a bunch of named widgets that work together with 
copy and paste, as long as you can do type safety and can appropriately deal 
with variable explosion/collapsing. You'll probably want to create very small 
functions, which can also be stored in widgets (lambdas). Widgets will show up 
when their scope is entered, or you could have an inspect mode.
On Sep 9, 2013 5:11 PM, "David Barbour"  wrote:

I like Paul's idea here - form a "pit of success" even for people who tend to 
copy-paste.
>
>
>I'm very interested in unifying PL with HCI/UI such that actions like 
>copy-paste actually have formal meaning. If you copy a time-varying field from 
>a UI form, maybe you can paste it as a signal into a software agent. Similarly 
>with buttons becoming capabilities. (Really, if we can use a form, it should 
>be easy to program something to use it for us. And vice versa.) All UI actions 
>can be 'acts of programming', if we find the right way to formalize it. I 
>think the trick, then, is to turn the UI into a good PL.
>
>
>To make copy-and-paste code more robust, what can we do?
>
>
>Can we make our code more adaptive? Able to introspect its environment?
>
>
>Can we reduce the number of environmental dependencies? Control namespace 
>entanglement? Could we make it easier to grab all the dependencies for code 
>when we copy it? 
>
>Can we make it more provable?
>
>
>And conversely, can we provide IDEs that can help the "kids" understand the 
>code they take - visualize and graph its behavior, see how it integrates with 
>its environment, etc? I think there's a lot we can do. Most of my thoughts 
>center on language design and IDE design, but there may also be social avenues 
>- perhaps wiki-based IDEs, or Gist-like repositories that also make it easy to 
>interactively explore and understand code before using it.
>
>
>
>On Sun, Sep 8, 2013 at 10:33 AM, Paul Homer  wrote:
>
>
>>These days, the "kids" do a quick google, then just copy&paste the results 
>>into the code base, mostly unaware of what the underlying 'magic' 
>>instructions actually do. So example code is possibly a bad thing?
>>
>>But even if that's true, we've let the genie out of the bottle and he isn't 
>>going back in. To fix the quality of software, for example, we can't just ban 
>>all cut&paste-able web pages.
>>
>>The alternate route out of the problem is to exploit these types of human 
>>deficiencies. If some programmers just want to cut&paste, then perhaps all we 
>>can do is just make sure that what they are using is of high enough quality. 
>>If someday they want more depth, then it should be available in easily 
>>digestible forms, even if few will ever travel that route.
>>
>>If most people really don't want to think deeply about their problems, 
>>then I think that the best we can do is ensure that their hasty decisions are 
>>based on knowledge that is as accurate as possible. It's far better than them 
>>just flipping a coin. In a sense it moves our decision making up to a higher 
>>level of abstraction. Some people lose the 'why' of the decision, but their 
>>underlying choice ultimately is superior, and the 'why' can still be found by 
>>digging into the data. In a way, isn't that what we've already done with 
>>micro-code, chips and assembler? Or machinery? Gradually we move up 
>>towards broader problems...
>>
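Barbour's question above about grabbing all of a snippet's dependencies when copying it can be made concrete. The sketch below (Python; the function names and the toy `MODULE_IMPORTS` table are hypothetical, and a real tool would need a much richer model of the environment) walks the snippet's AST to find the names it uses but never binds, then prepends the import lines that provide them:

```python
import ast
import builtins

# Hypothetical table mapping free names to the import lines that provide them.
MODULE_IMPORTS = {
    "statistics": "import statistics",
    "np": "import numpy as np",
}

def free_names(source):
    """Names the snippet reads but never binds itself: its dependencies."""
    tree = ast.parse(source)
    bound, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            bound.add(node.name)
            bound.update(a.arg for a in node.args.args)
        elif isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                bound.add(node.id)
            else:
                used.add(node.id)
    return used - bound - set(dir(builtins))

def copy_with_deps(source):
    """Return the snippet with the import lines it depends on prepended."""
    needed = [MODULE_IMPORTS[name]
              for name in sorted(free_names(source))
              if name in MODULE_IMPORTS]
    return "\n".join(needed + [source])

snippet = ("def mean_ratio(xs, ys):\n"
           "    return statistics.mean(x / y for x, y in zip(xs, ys))\n")
print(copy_with_deps(snippet))   # the snippet, now carrying "import statistics"
```

Even this crude version makes the copied text more self-contained, which is one small answer to "control namespace entanglement"; an IDE could do the same analysis to visualize what a pasted fragment actually depends on.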


[fonc] People and evidence

2013-09-09 Thread Alan Kay
Kenneth Clark once remarked that "People in the Middle Ages were as 
passionately interested in truth as we are, but their sense of evidence was 
very different".

Marshall McLuhan said "I can't see it until I believe it".

Neil Postman once remarked that "People today have to accept twice as much on 
faith: *both* religion and science!"

In a letter to Kepler of August 1610, Galileo complained that some of the 
philosophers who opposed his discoveries had refused even to look through a 
telescope:
"My dear Kepler, I wish that we might laugh at the remarkable stupidity of the 
common herd. What do you have to say about the principal philosophers of this 
academy who are filled with the stubbornness of an asp and do not want to look 
at either the planets, the moon or the telescope, even though I have freely and 
deliberately offered them the opportunity a thousand times? Truly, just as the 
asp stops its ears, so do these philosophers shut their eyes to the light of 
truth."
Many of the commenters on this list have missed that "evidence" and "data" 
require a fruitful context -- even to consider them! -- and that better tools 
and data will only tend to help those who are already set up epistemologically 
to use them wisely. (And don't forget the scientists I mentioned who have been 
shown to be deeply influenced by the context of their own employers.)


The fault is not in our stars ...

Cheers,

Alan


Re: [fonc] Final STEP progress report abandoned?

2013-09-08 Thread Alan Kay
Hi Paul

When I said "even scientists go against their training" I was also pointing out 
really deep problems in humanity's attempts at thinking (we are quite terrible 
thinkers!).


If we still make most decisions without realizing why, and use conventional 
"thinking tools" as ways to rationalize them, then technologists providing 
vastly more efficient, wide and deep, sources for rationalizing is the opposite 
of a great gift.

Imagine a Google that also retrieves counter-examples. Or one that actively 
tries to help find chains of reasoning that are based on principles one -- or 
others -- claim to hold. Or one that looks at the system implications of local 
human desires and actions.

Etc.

I'm guessing that without a lot of training, most humans would not choose to 
use a real "thinking augmenter".

Best wishes,

Alan


____
 From: Paul Homer 
To: Alan Kay  
Cc: Fundamentals of New Computing  
Sent: Sunday, September 8, 2013 7:34 AM
Subject: Re: [fonc] Final STEP progress report abandoned?
 


Hi Alan,

I agree that there is, and probably will always be, a necessity to 'think 
outside of the box', although if the box was larger, it would be less 
necessary. But I wasn't really thinking about scientists and the pursuit of new 
knowledge, but rather the trillions? of mundane decisions that people regularly 
make on a daily basis. 

A tool like Wikipedia really helps in being able to access a refined chunk of 
knowledge, but the navigation and categorization are statically defined. 
Sometimes what I am trying to find is spread horizontally across a large number 
of pages. If, as a simple example, a person could have a dynamically generated 
Wikipedia page created just for them that factored in their current knowledge 
and the overall context of the situation then they'd be able to utilize that 
knowledge more appropriately. They could still choose to skim or ignore it, but 
if they wanted a deeper understanding, they could read the compiled research in 
a few minutes. 

The Web, particularly for programmers, has been a great tease for this. You can 
look up any coding example instantly (although you do have to sort through the 
bad examples and misinformation). The downside is that I find it far more 
common for people to not really understand what is actually happening 
underneath, but I suspect that that is driven by increasing time pressures and 
expectations rather than by a shift in the way we relate to knowledge.

What I think would really help is not just to allow access to the breadth of 
knowledge, but to also enable individuals to get to the depth as well. Also the 
ability to quickly recognize lies, myths, propaganda, etc. 

Paul.

Sent from my iPad


Re: [fonc] Final STEP progress report abandoned?

2013-09-08 Thread Alan Kay
Hi Paul

I'm sure you are aware that yours is a very "Engelbartian" point of view, and I 
think there is still much value in trying to make things better in this 
direction.

However, it's also worth noting the studies over the last 40 years (and 
especially recently) that show how often even scientists go against their 
training and knowledge in their decisions, and are driven more by desire and 
environment than they realize. More knowledge is not the answer here -- but 
it's possible that very different kinds of training could help greatly.

Best wishes,

Alan


____
 From: Paul Homer 
To: Alan Kay ; Fundamentals of New Computing 
; Fundamentals of New Computing  
Sent: Saturday, September 7, 2013 12:24 PM
Subject: Re: [fonc] Final STEP progress report abandoned?
 


Hi Alan,

I can't predict what will come, but I definitely have a sense of where I think 
we should go. Collectively as a species, we know a great deal, but individually 
people still make important choices based on too little knowledge. 


In a very abstract sense 'intelligence' is just a more dynamic offshoot of 
'evolution'. A sort of hyper-evolution. It allows a faster route towards 
reacting to changes in the environment, but it is still very limited by 
individual perspectives of the world. I don't think we need AI in the classic 
Hollywood sense, but we could enable a sort of hyper-intelligence by giving 
people easily digestible access to our collective understanding. Not a 'borg' 
style single intelligence, but rather just the tools that can be used to make 
decisions that are more "accurate" than an individual would have made 
normally. 


To me the path to get there lies within our understanding of data. It needs to 
be better organized, better understood and far more accessible. It can't keep 
getting caught up in silos, and it really needs ways to share it appropriately. 
The world changes dramatically when we've developed the ability to fuse all of 
our digitized information into one great structural model that has the 
capability to separate out fact from fiction. It's a long way off, but I've 
always thought it was possible...

Paul.






Re: [fonc] Final STEP progress report abandoned?

2013-09-03 Thread Alan Kay
Yes, the "communication with aliens" problem -- in many different aspects -- is 
going to be a big theme for VPRI over the next few years.

Cheers,

Alan



 From: Tristan Slominski 
To: Alan Kay ; Fundamentals of New Computing 
 
Sent: Tuesday, September 3, 2013 7:25 PM
Subject: Re: [fonc] Final STEP progress report abandoned?
 


Hey Alan,

With regards to "burning issues" and "better directions", I want to highlight 
the "communicating with aliens" problem as worthy of remembering: machines 
figuring out on their own a protocol and goals for communication. This might 
relate to the "cooperating solvers" aspect of your work.

Cheers,

Tristan





Re: [fonc] Final STEP progress report abandoned?

2013-09-03 Thread Alan Kay
Hi Kevin

At some point I'll gather enough brain cells to do the needed edits and get the 
report on the Viewpoints server.

Dan Amelang is in the process of writing his thesis on Nile, and we will 
probably put Nile out in a more general form after that. (A nice project would 
be to do Nile in the Chrome "Native Client" to get a usable speedy and very 
compact graphics system for web based systems.)

Yoshiki's K-Script has been experimentally implemented on top of Javascript, 
and we've been learning a lot about this variant of stream-based FRP as it is 
able to work within "someone else's implementation of a language".

A lot of work on the "cooperating solvers" part of STEPS is going on (this was 
an add-on that wasn't really in the scope of the original proposal).

We are taking another pass at the "interoperating alien modules" problem that 
was part of the original proposal, but that we never really got around to 
making progress on.

And, as has been our pattern in the past, we have often alternated end-user 
systems (especially including children) with the "deep systems" projects, and 
we are currently pondering this 50+ year old problem again.

A fair amount of time is being put into "problem finding" (the basic idea is 
that initially trying to manifest "visions" of desirable future states is 
better than going directly into trying to state new goals -- good visions will 
often help "problem finding" which can then be the context for picking actual 
goals).

And most of my time right now is being spent in extending environments for 
research.

Cheers,

Alan




 From: Kevin Driedger 
To: Alan Kay ; Fundamentals of New Computing 
 
Sent: Monday, September 2, 2013 2:41 PM
Subject: Re: [fonc] Final STEP progress report abandoned?
 


Alan,

Can you give us any more details or direction on these research projects?



]{evin ])riedger





Re: [fonc] Final STEP progress report abandoned?

2013-09-03 Thread Alan Kay
Hi Jonathan

We are not soliciting proposals, but we like to hear the opinions of others on 
"burning issues" and "better directions" in computing.

Cheers,

Alan



 From: Jonathan Edwards 
To: fonc@vpri.org 
Sent: Tuesday, September 3, 2013 4:44 AM
Subject: Re: [fonc] Final STEP progress report abandoned?
 


That's great news! We desperately need fresh air. As you know, the way a 
problem is framed bounds its solutions. Do you already know what problems to 
work on or are you soliciting proposals?

Jonathan





Re: [fonc] Final STEP progress report abandoned?

2013-09-02 Thread Alan Kay
Hi Dan

It actually got written and given to NSF and approved, etc., a while ago, but 
needs a little more work before posting on the VPRI site. 

Meanwhile we've been consumed by setting up a number of additional, and wider 
scale, research projects, and this has occupied pretty much all of my time for 
the last 5-6 months.

Cheers,

Alan



 From: Dan Melchione 
To: fonc@vpri.org 
Sent: Monday, September 2, 2013 10:40 AM
Subject: [fonc] Final STEP progress report abandoned?
 


Haven't seen much regarding this for a while.  Has it been abandoned or 
put at such low priority that it is effectively abandoned?
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Deoptimization as fallback

2013-07-30 Thread Alan Kay
This is how Smalltalk has always treated its primitives, etc.

Cheers,

Alan



 From: Casey Ransberger 
To: Fundamentals of New Computing  
Sent: Tuesday, July 30, 2013 1:22 PM
Subject: [fonc] Deoptimization as fallback
 

Thought I had: when a program hits an unhandled exception, we crash, often 
there's a hook to log the crash somewhere. 

I was thinking: if a system happens to be running an optimized version of some 
algorithm, and hit a crash bug, what if it could fall back to the suboptimal 
but conceptually simpler "Occam's explanation?"

All other things being equal, the simple implementation is usually more stable 
than the faster/less-RAM solution.

Is anyone aware of research in this direction?
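The fallback idea above can be sketched in a few lines. This is a minimal illustration, not a reference to any existing system; the function names (`fast_sum`, `simple_sum`, `with_fallback`) are invented for the example:

```python
# Sketch of "deoptimize on failure": try an optimized implementation,
# and if it raises, fall back to the conceptually simpler reference one.

def simple_sum(xs):
    """Obviously-correct, slow reference implementation."""
    total = 0
    for x in xs:
        total += x
    return total

def fast_sum(xs):
    """Stand-in for an optimized version that can fail on some inputs."""
    if any(not isinstance(x, int) for x in xs):
        raise TypeError("optimized path only handles ints")
    return sum(xs)

def with_fallback(fast, simple):
    """Wrap an optimized function so crashes fall back to the simple one."""
    def run(*args):
        try:
            return fast(*args)
        except Exception:
            # A real system would log the crash here before falling back.
            return simple(*args)
    return run

safe_sum = with_fallback(fast_sum, simple_sum)
print(safe_sum([1, 2, 3]))   # optimized path succeeds -> 6
print(safe_sum([1, 2.5]))    # optimized path raises; fallback -> 3.5
```

The interesting engineering question is keeping the two implementations in agreement, which is essentially what Smalltalk's primitive-with-fallback convention enforces.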
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] [HISTORY][MYSTERY][THRILLS][OT] Sunrise anyone?

2013-07-20 Thread Alan Kay
This was Xerox essentially following SONY (check out their Model 30 Word 
Processor -- early 80s -- and the portable keyboard capture device that you 
could get with it. I think SONY invented the 3.5" floppy for this machine). 

The Xerox Sunrise came years later but is very similar to the portable keyboard 
capture device. I'm not sure that the Sunrise was ever a full-fledged product.

Cheers,

Alan



 From: Casey Ransberger 
To: Fundamentals of New Computing  
Sent: Saturday, July 20, 2013 1:19 AM
Subject: [fonc] [HISTORY][MYSTERY][THRILLS][OT] Sunrise anyone?
 


There's a local computer/electronics store that basically resells old unloved 
gear in the neighborhood called Re-PC. While the band was buying adapters at 
the counter, your intrepid bassist wandered off to the little computer museum 
at the back in order to drool over the PDP-8 a little and saw this odd little 
thing:

[photo of the Xerox Sunrise machine omitted]

The card next to it just said, where there's usually an interesting blurb:

"Xerox Sunrise (???)
 
         19??

We used to know all about this.

Re-PC Certificate of Antiquity"

It looked like some kind of word processor with the long LCD display line, but 
the micro cassette drive on the right and accompanying speaker (which seemed a 
bit large just for a beeper) had me curious. There's also an accompanying disk 
drive thingie so I was sure it wasn't just a word processor. Googling tells me 
it ran CP/M and didn't live long as a product, but not much else, at least at 
first glance.

The tag seemed to indicate that at one point there was an interesting story 
associated with the machine. Anyone know anything about it or why it wasn't 
made for very long?

-- 
Casey Ransberger 
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] 90% glue code

2013-04-20 Thread Alan Kay
This is the essential benefit of a LINDA or some other kind of 
publish-subscribe approach. Each object has two billboards, one stating what 
they can do and the other what they need, etc.
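A toy sketch of the two-billboard idea in a LINDA-flavored shared space (Python; the `Space` class and the tag names are invented for illustration, and the tags are exactly the kind of local names whose global meaning is only accidental):

```python
# Each object posts an "offers" billboard (what it can do) into a shared
# space; another object consults its "needs" billboard against the space.

class Space:
    def __init__(self):
        self.offers = {}   # tag -> provider callable

    def advertise(self, tag, fn):
        """Billboard 1: what this object can do."""
        self.offers[tag] = fn

    def need(self, tag):
        """Billboard 2: what this object needs; a provider, or None."""
        return self.offers.get(tag)

space = Space()
space.advertise("double", lambda x: 2 * x)   # object A publishes a capability

doubler = space.need("double")               # object B looks for a match
print(doubler(21) if doubler else "no provider")  # -> 42
```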

The catch is how are these described (especially given that local tags have 
only accidental global meaning). There are several obvious ways to get beyond 
this that have their own levels of difficulties. And as David has pointed out, 
getting people to make reasonable interfaces even for their own local needs has 
not succeeded well. 

But ...

Cheers,

Alan



>
> From: John Nilsson 
>To: Fundamentals of New Computing  
>Sent: Saturday, April 20, 2013 7:32 AM
>Subject: Re: [fonc] 90% glue code
> 
>
>
>One approach I've been thinking about is to invert the "information hiding" 
>principle.
>
>
>The problem with information hiding is that the interface and properties 
>exposed by a module is determined by the module: "I am a..." And some line is 
>drawn between which properties are implementation details, and which are the 
>contract.
>  
>So I was thinking, what if the roles were swapped. What if modules could not 
>declare a public contract but instead just had to conform to any type, 
>interface or property that a client depending on it would care to declare as a 
>requirement. In effect changing the module description into a collection of 
>"You are a..." statements. Kind of similar to how a structural type allow any 
>module conforming to the interface without the module having to implement a 
>particular nominal type.
>
>For one, declaring a contract for a dependency is rather easy as it is based 
>on local reasoning: "What do I do, what do I need?" as compared to "What do I 
>do, what do others need?"
>
>Another benefit would be that there is no arbitrary reduction of the modules 
>full capabilities. For example a Java List only implementing Iterable couldn't 
>be used by clients requiring an ordered and finite sequence.
>
>I would expect this to encourage module writers to declare the smallest set of 
>properties possible to depend on so that there would be more focus on 
>"information shielding", what information to expose one self to, rather than 
>what information not to expose to others.
>
>
>
>The problem with this approach is that the proof of conformance can't come 
>from the module, and it's hardly productive to require each client to provide 
>one. I guess in some sense this is partly solved by a mechanism such as type 
>classes as done in Scala or Haskell. One problem with this scheme though is 
>that they do this by means of a static dispatch, making it impossible to 
>specialize implementations by runtime polymorphism. While I haven't played 
>with it, I do believe that Clojure has solved it while preserving runtime 
>polymorphism.
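The "You are a..." direction resembles structural typing. A rough Python sketch using `typing.Protocol`, in which the client, not the module, declares the contract it requires (all names here are invented for the example):

```python
# The client declares what it needs; the module never names the interface.
from typing import Protocol, runtime_checkable

@runtime_checkable
class Doubler(Protocol):
    """Declared by the *client*: 'you are a thing with double(int) -> int'."""
    def double(self, x: int) -> int: ...

# A module author writes this without ever hearing about Doubler.
class Widget:
    def double(self, x: int) -> int:
        return 2 * x

def client(dep: Doubler) -> int:
    # Conformance is structural (method presence), not nominal inheritance.
    assert isinstance(dep, Doubler)
    return dep.double(21)

print(client(Widget()))  # -> 42
```

Note that `runtime_checkable` only checks that the methods exist, not their behavior, which is one face of the proof-of-conformance problem raised above.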
>
>
>
>
>BR,
>John
>
>
>
>
>On Thu, Apr 18, 2013 at 3:13 AM, David Barbour  wrote:
>
>Sounds like you want stone soup programming. :D
>>
>>
>>In retrospect, I've been disappointed with most techniques that involve 
>>providing "information about module capabilities" to some external 
>>"configurator" (e.g. linkers as constraint solvers). Developers are asked to 
>>grok at least two very different programming models. Hand annotations or 
>>hints become common practice because many properties cannot be inferred. The 
>>resulting system isn't elegantly metacircular, i.e. you need that 
>>'configurator' in the loop and the metadata with the inputs.
>>
>>
>>An alternative I've been thinking about recently is to shift the link logic 
>>to the modules themselves. Instead of being passive bearers of information 
>>that some external linker glues together, the modules become active agents in 
>>a link environment that collaboratively construct the runtime behavior (which 
>>may afterwards be extracted). Developers would have some freedom to abstract 
>>and separate problem-specific link logic (including decision-making) rather 
>>than having a one-size-fits-all solution.
>>
>>
>>Re: In my mind "powerful languages" thus means 98% requirements
>>
>>
>>To me, "power" means something much more graduated: that I can get as much 
>>power as I need, that I can do so late in development without rewriting 
>>everything, that my language will grow with me and my projects.
>>
>>
>>
>>
>>On Wed, Apr 17, 2013 at 2:04 PM, John Nilsson  wrote:
>>
>>Maybe not. If there is enough information about different modules' 
>>capabilities, suitability for solving various problems and requirements, such 
>>that the required "glue" can be generated or configured automatically at run 
>>time. Then what is left is the input to such a generator or configurator. At 
>>some level of abstraction the input should transition from being glue and 
>>better be described as design.
>>>Design could be seen as kind of a gray area if thought of mainly as picking 
>>>what to glue together as it still involves a significant amount of gluing ;) 
>>>But even design should be possible to formalize

Re: [fonc] 90% glue code

2013-04-19 Thread Alan Kay
Wow, automatic spelling correctors suck, especially early in the morning 

The only really good -- and reasonably accurate -- book about the history of 
Lick, ARPA-IPTO (no "D", that is when things went bad), and Xerox PARC is 
"Dream Machines" by Mitchell Waldrop.


Cheers,

Alan



>________
> From: Alan Kay 
>To: Fundamentals of New Computing  
>Sent: Friday, April 19, 2013 5:53 AM
>Subject: Re: [fonc] 90% glue code
> 
>
>
>The only really good -- and reasonable accurate -- book about the history of 
>Lick, ARPA-IPTO (no "D", that is went things went bad), and Xerox PARC is 
>"Dream Machines" by Mitchel Waldrop.
>
>
>Cheers,
>
>
>Alan
>
>
>
>>
>> From: Miles Fidelman 
>>To: Fundamentals of New Computing  
>>Sent: Friday, April 19, 2013 5:45 AM
>>Subject: Re: [fonc] 90% glue code
>> 
>>
>>Casey Ransberger wrote:
>>> This Licklider guy is interesting. CS + psych = cool.
>>
>>A lot more than cool.  Lick was the guy who:
>>- MIT Professor
>>- pioneered timesharing (bought the first production PDP-1 for BBN) and AI 
>>work at BBN
>>- served as the initial Program Manager at DARPA/IPTO (the folks who funded 
>>the ARPANET)
>>- Director of Project MAC at MIT for a while
>>- wrote some really seminal papers - "Man-Computer Symbiosis" is right up 
>>there with Vannevar Bush's "As We May Think"
>>
>>/It seems reasonable to envision, for a time 10 or 15 years hence, a 
>>'thinking center' that will incorporate the functions of present-day 
>>libraries together with anticipated advances in information storage and 
>>retrieval./
>>
>>/The picture readily enlarges itself into a network of such centers, 
>>connected to one another by wide-band communication lines and to individual 
>>users by leased-wire services. In such a system, the speed of the computers 
>>would be balanced, and the cost of the gigantic memories and the 
>>sophisticated programs would be divided by the number of users./
>>
>>-  J.C.R. Licklider, Man-Computer Symbiosis 
>><http://memex.org/licklider.html>, 1960.
>>
>>- perhaps the earliest conception of the Internet:
>>In a 1963 memo to "Members and Affiliates of the Intergalactic Computer 
>>Network," Licklider theorized that a computer network could help researchers 
>>share information and even enable people with common interests to interact 
>>online.
>>(http://web.archive.org/web/20071224090235/http://www.today.ucla.edu/1999/990928looking.html)
>>
>>Outside the community he kept a very low profile. One of the greats.
>>
>>Miles Fidelman
>>
>>-- In theory, there is no difference between theory and practice.
>>In practice, there is.    Yogi Berra
>>
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] 90% glue code

2013-04-19 Thread Alan Kay
The only really good -- and reasonable accurate -- book about the history of 
Lick, ARPA-IPTO (no "D", that is went things went bad), and Xerox PARC is 
"Dream Machines" by Mitchel Waldrop.

Cheers,

Alan



>
> From: Miles Fidelman 
>To: Fundamentals of New Computing  
>Sent: Friday, April 19, 2013 5:45 AM
>Subject: Re: [fonc] 90% glue code
> 
>
>Casey Ransberger wrote:
>> This Licklider guy is interesting. CS + psych = cool.
>
>A lot more than cool.  Lick was the guy who:
>- MIT Professor
>- pioneered timesharing (bought the first production PDP-1 for BBN) and AI 
>work at BBN
>- served as the initial Program Manager at DARPA/IPTO (the folks who funded 
>the ARPANET)
>- Director of Project MAC at MIT for a while
>- wrote some really seminal papers - "Man-Computer Symbiosis" is right up there 
>with Vannevar Bush's "As We May Think"
>
>/It seems reasonable to envision, for a time 10 or 15 years hence, a 'thinking 
>center' that will incorporate the functions of present-day libraries together 
>with anticipated advances in information storage and retrieval./
>
>/The picture readily enlarges itself into a network of such centers, connected 
>to one another by wide-band communication lines and to individual users by 
>leased-wire services. In such a system, the speed of the computers would be 
>balanced, and the cost of the gigantic memories and the sophisticated programs 
>would be divided by the number of users./
>
>-  J.C.R. Licklider, Man-Computer Symbiosis , 
>1960.
>
>- perhaps the earliest conception of the Internet:
>In a 1963 memo to "Members and Affiliates of the Intergalactic Computer 
>Network," Licklider theorized that a computer network could help researchers 
>share information and even enable people with common interests to interact 
>online.
>(http://web.archive.org/web/20071224090235/http://www.today.ucla.edu/1999/990928looking.html)
>
>Outside the community he kept a very low profile. One of the greats.
>
>Miles Fidelman
>
>-- In theory, there is no difference between theory and practice.
>In practice, there is.    Yogi Berra
>
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] 90% glue code

2013-04-18 Thread Alan Kay
Hi David

We are being invaded by "stupid aliens" aka ours and other people's software. 
The gods who made their spaceships are on vacation and didn't care about 
intercommunication (is this a modern version of the Tower of Babel?).

Discovery can take a long time (and probably should) but might not be needed 
for most subsequent communications (per Joe Becker's "Phrasal Lexicon"). Maybe 
ML coupled to something more semantic (e.g. CYC) could make impressive inroads 
here.

I'm guessing that even the large range of ideas -- good and bad -- in CS today 
is a lot smaller (and mostly stupider) than the ones that need to be dealt with 
when trying for human to human or human to alien overlap.

Cheers,

Alan



>________
> From: David Barbour 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Thursday, April 18, 2013 9:25 AM
>Subject: Re: [fonc] 90% glue code
> 
>
>
>Well, communicating with genuine aliens would probably best be solved by 
>multi-modal machine-learning techniques. The ML community already has 
>techniques for two machines to "teach" one another their vocabularies, and 
>thus build a strong correspondence. Of course, if we have space alien 
>visitors, they'll probably have a solution to the problem and already know our 
>language from media. 
>
>
>Natural language has a certain robustness to it, due to its probabilistic, 
>contextual, and interactive natures (offering much opportunity for refinement 
>and retroactive correction). If we want to support machine-learning between 
>software elements, one of the best things we could do is to emulate this 
>robustness end-to-end. Such things have been done before, but I'm a bit stuck 
>on how to do so without big latency, efficiency, and security sacrifices. 
>(There are two issues: the combinatorial explosion of possible models, and the 
>modular hiding of dependencies that are inherently related through shared 
>observation or influence.)
>
>
>Fortunately, there are many other issues we can address to facilitate 
>communication that are peripheral to translation. Further, we could certainly 
>leverage code-by-example for type translations (if they're close). 
>
>
>Regards,
>
>
>Dave
>
>
>
>
>On Thu, Apr 18, 2013 at 8:06 AM, Alan Kay  wrote:
>
>Hi David
>>
>>
>>This is an interesting slant on a 50+ year old paramount problem (and one 
>>that is even more important today).
>>
>>
>>Licklider called it the "communicating with aliens problem". He said 50 years 
>>ago this month that "if we succeed in constructing the 'intergalactic 
>>network' then our main problem will be learning how to 'communicate with 
>>aliens'. He meant not just humans to humans but software to software and 
>>humans to software. 
>>
>>
>>(We gave him his intergalactic network but did not solve the communicating 
>>with aliens problem.)
>>
>>
>>
>>I think a key to finding better solutions is to -- as he did -- really push 
>>the scale beyond our imaginations -- "intergalactic" -- and then ask "how can 
>>we *still* establish workable communications of overlapping meanings?".
>>
>>
>>
>>Another way to look at this is to ask: "What kinds of prep *can* you do 
>>*beforehand* to facilitate communications with alien modules?"
>>
>>
>>Cheers,
>>
>>
>>Alan
>>
>>
>>
>>
>>
>>
>>
>>>
>>> From: David Barbour 
>>>To: Fundamentals of New Computing  
>>>Sent: Wednesday, April 17, 2013 6:13 PM
>>>Subject: Re: [fonc] 90% glue code
>>> 
>>>
>>>
>>>Sounds like you want stone soup programming. :D
>>>
>>>
>>>In retrospect, I've been disappointed with most techniques that involve 
>>>providing "information about module capabilities" to some external 
>>>"configurator" (e.g. linkers as constraint solvers). Developers are asked to 
>>>grok at least two very different programming models. Hand annotations or 
>>>hints become common practice because many properties cannot be inferred. The 
>>>resulting system isn't elegantly metacircular, i.e. you need that 
>>>'configurator' in the loop and the metadata with the inputs.
>>>
>>>
>>>An alternative I've been thinking about recently is to shift the link logic 
>>>to the modules themselves. Instead of being passive bearers of information 
>>>that some external

Re: [fonc] 90% glue code

2013-04-18 Thread Alan Kay
The basic idea is to find really fundamental questions about negotiating about 
meaning, and to invent mental and computer tools to help.

David is quite right to complain about the current state of things in this area 
-- but -- for example -- I don't know of anyone trying a "discovery system" 
like Lenat's Eurisko, or to imitate how a programmer would go about the alien 
module problem, or to e.g. look at how a linguist like Charles Hockett could 
learn a traditional culture's language well enough in a few hours to speak to 
them in it. (I recall some fascinating movies from my Anthro classes in 
linguistics that I think were made in the 50s showing (I think) Hockett put in 
the middle of a village and what he did to "find" their language).

There are certainly tradeoffs here about just what kind of overlap at what 
levels can be gained. This is similar to the idea that there are lots of 
wonderful things in Biology that are out of scale with our computer 
technologies. So we should find the things in both Bio and Anthro that will 
help us think.

Cheers,

Alan



>____
> From: Jeff Gonis 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Thursday, April 18, 2013 8:39 AM
>Subject: Re: [fonc] 90% glue code
> 
>
>
>Hi Alan,
>
>Your metaphor brought up a connection that I have been thinking about for a 
>while, but I unfortunately don't have enough breadth of knowledge to know if 
>the connection is worthwhile or not, so I am throwing it out there to this 
>list to see what people think.
>
>If figuring out module communication can be likened to communicating with 
>aliens, could we not look at how we go about communicating with "alien" 
>cultures right now?  Maybe trying to use "real-world" metaphors in this case 
>is foolish, but it seemed to work out pretty well when you used some of your 
>thoughts on biology to inform OOP.  
>
>So can we look to the real world and ask how linguists go about communicating 
>with unknown cultures or remote tribes of people?  Has this process occurred 
>frequently enough that there is some sort of protocol or process that is 
>followed by which concepts from one language are mapped onto those contained 
>in the indigenous language until communication can occur?  Could we use this 
>process as a source of metaphors to think about how to create a protocol for 
>"discovering" how two different software modules can map their own concepts 
>onto the other?
>
>Anyway, that was something that had been running in the background of my mind 
>for a while, since I saw you talk about the importance of figuring out ways to 
>mechanize the process modules figuring out how to communicate with each other.
>
>Thanks,
>Jeff
>
>
>
>
>On Thu, Apr 18, 2013 at 9:06 AM, Alan Kay  wrote:
>
>Hi David
>>
>>
>>This is an interesting slant on a 50+ year old paramount problem (and one 
>>that is even more important today).
>>
>>
>>Licklider called it the "communicating with aliens problem". He said 50 years 
>>ago this month that "if we succeed in constructing the 'intergalactic 
>>network' then our main problem will be learning how to 'communicate with 
>>aliens'. He meant not just humans to humans but software to software and 
>>humans to software. 
>>
>>
>>(We gave him his intergalactic network but did not solve the communicating 
>>with aliens problem.)
>>
>>
>>
>>I think a key to finding better solutions is to -- as he did -- really push 
>>the scale beyond our imaginations -- "intergalactic" -- and then ask "how can 
>>we *still* establish workable communications of overlapping meanings?".
>>
>>
>>
>>Another way to look at this is to ask: "What kinds of prep *can* you do 
>>*beforehand* to facilitate communications with alien modules?"
>>
>>
>>Cheers,
>>
>>
>>Alan
>
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] 90% glue code

2013-04-18 Thread Alan Kay
Hi David

This is an interesting slant on a 50+ year old paramount problem (and one that 
is even more important today).

Licklider called it the "communicating with aliens problem". He said 50 years 
ago this month that "if we succeed in constructing the 'intergalactic network' 
then our main problem will be learning how to 'communicate with aliens'." He 
meant not just humans to humans but software to software and humans to 
software. 

(We gave him his intergalactic network but did not solve the communicating with 
aliens problem.)


I think a key to finding better solutions is to -- as he did -- really push the 
scale beyond our imaginations -- "intergalactic" -- and then ask "how can we 
*still* establish workable communications of overlapping meanings?".


Another way to look at this is to ask: "What kinds of prep *can* you do 
*beforehand* to facilitate communications with alien modules?"

Cheers,

Alan





>
> From: David Barbour 
>To: Fundamentals of New Computing  
>Sent: Wednesday, April 17, 2013 6:13 PM
>Subject: Re: [fonc] 90% glue code
> 
>
>
>Sounds like you want stone soup programming. :D
>
>
>In retrospect, I've been disappointed with most techniques that involve 
>providing "information about module capabilities" to some external 
>"configurator" (e.g. linkers as constraint solvers). Developers are asked to 
>grok at least two very different programming models. Hand annotations or hints 
>become common practice because many properties cannot be inferred. The 
>resulting system isn't elegantly metacircular, i.e. you need that 
>'configurator' in the loop and the metadata with the inputs.
>
>
>An alternative I've been thinking about recently is to shift the link logic to 
>the modules themselves. Instead of being passive bearers of information that 
>some external linker glues together, the modules become active agents in a 
>link environment that collaboratively construct the runtime behavior (which 
>may afterwards be extracted). Developers would have some freedom to abstract 
>and separate problem-specific link logic (including decision-making) rather 
>than having a one-size-fits-all solution.
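The "active agents in a link environment" idea sketched above could look something like this minimal Python illustration; the `LinkEnv` protocol and all names are invented here, not drawn from any existing system:

```python
# Modules are agents that run against a shared link environment,
# registering what they provide and resolving what they require;
# the environment then collaboratively constructs the wiring.

class LinkEnv:
    def __init__(self):
        self.exports = {}   # name -> provided value
        self.pending = []   # (name, continuation) pairs awaiting linkage

    def provide(self, name, value):
        self.exports[name] = value

    def require(self, name, k):
        """Request a name; continuation k runs once it is available."""
        self.pending.append((name, k))

    def link(self):
        # Each module's own link logic has run; now satisfy requirements.
        for name, k in self.pending:
            k(self.exports[name])

env = LinkEnv()

# Module A's link logic: it provides an adder.
env.provide("add", lambda a, b: a + b)

# Module B's link logic: it decides how to wire itself to what it finds.
result = {}
env.require("add", lambda add: result.setdefault("sum", add(2, 3)))

env.link()
print(result["sum"])  # -> 5
```

The point of the sketch is only that link-time decision-making lives in the modules' own code (the lambdas) rather than in a one-size-fits-all external linker.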
>
>
>Re: In my mind "powerful languages" thus means 98% requirements
>
>
>To me, "power" means something much more graduated: that I can get as much 
>power as I need, that I can do so late in development without rewriting 
>everything, that my language will grow with me and my projects.
>
>
>
>
>On Wed, Apr 17, 2013 at 2:04 PM, John Nilsson  wrote:
>
>Maybe not. If there is enough information about different modules' 
>capabilities, suitability for solving various problems and requirements, such 
>that the required "glue" can be generated or configured automatically at run 
>time. Then what is left is the input to such a generator or configurator. At 
>some level of abstraction the input should transition from being glue and 
>better be described as design.
>>Design could be seen as kind of a gray area if thought of mainly as picking 
>>what to glue together as it still involves a significant amount of gluing ;) 
>>But even design should be possible to formalize enough to minimize the amount 
>>of actual design decisions required to encode in the source and what 
>>decisions to leave to algorithms though. So what's left is to encode the 
>>requirements as input to the designer. 
>>In my mind "powerful languages" thus means 98% requirements, 2% design and 0% 
>>glue. 
>>BR
>>John
>>Den 17 apr 2013 05:04 skrev "Miles Fidelman" :
>>
>>
>>So let's ask the obvious question, if we have powerful languages, and/or 
>>powerful libraries, is not an application comprised primarily of glue code 
>>that ties all the "piece parts" together in an application-specific way?
>>>
>>>David Barbour wrote:
>>>
>>>
On Tue, Apr 16, 2013 at 2:25 PM, Steve Wart wrote:

    > On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich
    > In real systems, 90% of code (conservatively) is glue code.

    What is the origin of this claim?


I claimed it from observation and experience. But I'm sure there are other 
people who have claimed it, too. Do you doubt its veracity?



    On Mon, Apr 15, 2013 at 12:15 PM, David Barbour
    mailto:dmbarb...@gmail.com>> wrote:


        On Mon, Apr 15, 2013 at 11:57 AM, David Barbour
        mailto:dmbarb...@gmail.com>> wrote:


            On Mon, Apr 15, 2013 at 10:40 AM, Loup Vaillant-David
            mailto:l...@loup-vaillant.fr>> wrote:

                On Sun, Apr 14, 2013 at 04:17:48PM -0700, David
                Barbour wrote:
                > On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich
                > In real systems, 90% of code (conservatively) is
                glue code.

                Does this *have* to be the case?  Real systems also
                use C++ (or
                Java). 

Re: [fonc] CodeSpells. Learn how to program Java by writing spells for a 3D environment.

2013-04-13 Thread Alan Kay
Bill is the son of Ralph Griswold of Snobol and Icon fame.



>
> From: John Carlson 
>To: Fundamentals of New Computing  
>Sent: Friday, April 12, 2013 8:38 PM
>Subject: Re: [fonc] CodeSpells. Learn how to program Java by writing spells 
>for a 3D environment.
> 
>
>
>Btw, is this the same Griswold of Snobol and Icon (programming languages) fame?
>On Apr 12, 2013 3:20 PM, "shaun gilchrist"  wrote:
>
>One of the "fundamentals" we are all still grasping at is how to teach 
>programming. These are links to people attempting to contribute something 
>meaningful in that direction rather than posting derisive comments and blatant 
>cult related wing nuttery which, in fact, have nothing to do with computing. 
>Good day sir!
>>
>>
>>
>>On Fri, Apr 12, 2013 at 2:12 PM, John Pratt  wrote:
>>
>>Fine, but what does that have to do with
>>>setting the fundamentals of new computing?
>>>
>>>Is this just a mailing list for computer scientist to jerk off?
>>>
>>>
>>>
>>>On Apr 12, 2013, at 1:00 PM, Josh Grams wrote:
>>>
 On 2013-04-12 11:11AM, David Barbour wrote:
> I've occasionally contemplated developing such a game: program the 
> behavior
> of your team of goblins (who may have different strengths, capabilities,
> and some behavioral habits/quirks) to get through a series of puzzles, 
> with
> players building/managing a library as they go.

 Forth Warrior? :)

 https://github.com/JohnEarnest/Mako/tree/master/games/Warrior2

 --Josh
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc
>>>
>>>___
>>>fonc mailing list
>>>fonc@vpri.org
>>>http://vpri.org/mailman/listinfo/fonc
>>>
>>
>>___
>>fonc mailing list
>>fonc@vpri.org
>>http://vpri.org/mailman/listinfo/fonc
>>
>>
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Old Boxer Paper

2013-03-27 Thread Alan Kay
Yep, it had some good ideas.

Cheers,

Alan



>
> From: Francisco Garau 
>To: "fonc@vpri.org"  
>Sent: Wednesday, March 27, 2013 12:51 AM
>Subject: [fonc] Old Boxer Paper
> 
>It reminds me of scratch & etoys
>
>http://www.soe.berkeley.edu/boxer/20reasons.pdf
>
>- Francisco
>
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Kernel & Maru

2013-03-26 Thread Alan Kay
The first ~100 pages are still especially good as food for thought.

Cheers,

Alan



>
> From: Duncan Mak 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Cc: "mo...@codetransform.com"  
>Sent: Tuesday, March 26, 2013 10:35 AM
>Subject: Re: [fonc] Kernel & Maru
> 
>
>This is great!
>
>
>Dirk - how did you find it?
>
>
>Duncan.
>
>
>
>On Tue, Mar 26, 2013 at 6:09 AM, Alan Kay  wrote:
>
>That's it -- a real classic!
>>
>>
>>Cheers,
>>
>>
>>Alan
>>
>>
>>
>>>
>>> From: Dirk Pranke 
>>>To: Fundamentals of New Computing ; mo...@codetransform.com 
>>>Sent: Monday, March 25, 2013 11:18 PM
>>>
>>>Subject: Re: [fonc] Kernel & Maru
>>> 
>>>
>>>In response to a long-dormant thread ... Fisher's thesis seems to have 
>>>surfaced on the web:
>>>
>>>
>>>http://reports-archive.adm.cs.cmu.edu/anon/anon/usr/ftp/usr0/ftp/scan/CMU-CS-70-fisher.pdf
>>>
>>>
>>>
>>>-- Dirk
>>>
>>>
>>>
>>>On Tue, Apr 10, 2012 at 11:34 AM, Monty Zukowski  
>>>wrote:
>>>
>>>If anyone finds an electronic copy of Fisher's thesis I'd love to know
>>>>about it.  My searches have been fruitless.
>>>>
>>>>Monty
>>>>
>>>>On Tue, Apr 10, 2012 at 10:04 AM, Alan Kay  wrote:
>>>>...
>>>>
>>>>> Dave Fisher's thesis "A Control Definition Language" CMU 1970 is a very
>>>>> clean approach to thinking about environments for LISP like languages. He
>>>>> separates the "control" path, from the "environment" path, etc.
>>>>...
>>>>
>>>>___
>>>>fonc mailing list
>>>>fonc@vpri.org
>>>>http://vpri.org/mailman/listinfo/fonc
>>>>
>>>
>>>___
>>>fonc mailing list
>>>fonc@vpri.org
>>>http://vpri.org/mailman/listinfo/fonc
>>>
>>>
>>>
>>___
>>fonc mailing list
>>fonc@vpri.org
>>http://vpri.org/mailman/listinfo/fonc
>>
>>
>
>
>
>-- 
>Duncan. 
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Kernel & Maru

2013-03-26 Thread Alan Kay
That's it -- a real classic!

Cheers,

Alan



>
> From: Dirk Pranke 
>To: Fundamentals of New Computing ; mo...@codetransform.com 
>Sent: Monday, March 25, 2013 11:18 PM
>Subject: Re: [fonc] Kernel & Maru
> 
>
>In response to a long-dormant thread ... Fisher's thesis seems to have 
>surfaced on the web:
>
>
>http://reports-archive.adm.cs.cmu.edu/anon/anon/usr/ftp/usr0/ftp/scan/CMU-CS-70-fisher.pdf
>
>
>
>-- Dirk
>
>
>
>On Tue, Apr 10, 2012 at 11:34 AM, Monty Zukowski  
>wrote:
>
>If anyone finds an electronic copy of Fisher's thesis I'd love to know
>>about it.  My searches have been fruitless.
>>
>>Monty
>>
>>On Tue, Apr 10, 2012 at 10:04 AM, Alan Kay  wrote:
>>...
>>
>>> Dave Fisher's thesis "A Control Definition Language" (CMU, 1970) is a very
>>> clean approach to thinking about environments for LISP-like languages. He
>>> separates the "control" path from the "environment" path, etc.
>>...
>>


Re: [fonc] Building blocks and use of text

2013-02-14 Thread Alan Kay
And of course, for some time there has been Croquet

http://en.wikipedia.org/wiki/Croquet_project


... and its current manifestation

http://en.wikipedia.org/wiki/Open_Cobalt


These are based on Dave Reed's 1978 MIT thesis and were first implemented about 
10 years ago at Viewpoints.

Besides allowing massively distributed computing without servers, the approach 
is interesting in just how widely it comprehends Internet sized systems.

Cheers,

Alan




>
> From: Casey Ransberger 
>To: Fundamentals of New Computing  
>Sent: Wednesday, February 13, 2013 10:52 PM
>Subject: Re: [fonc] Building blocks and use of text
> 
>
>The next big thing probably won't be some version of Minecraft, even if 
>Minecraft is really awesome. OTOH, you and your kids can prove me wrong today 
>with Minecraft Raspberry Pi Edition, which is free, and comes with _source 
>code_.
>
>
>http://mojang.com/2013/02/minecraft-pi-edition-is-available-for-download/
>
>
>
>
>
>
>
>On Wed, Feb 13, 2013 at 5:55 PM, John Carlson  wrote:
>
>Miles wrote:
>>> There's a pretty good argument to be made that what "works" are powerful 
>>> building blocks that can be combined in lots of different ways; 
>>
>>So the next big thing will be some version of minecraft?  Or perhaps the 
>>older toontalk?  Agentcubes?  What is the right 3D metaphor?  Does anyone 
>>have a comfortable metaphor?  It would seem like if there was an open, 
>>federated MMO system that supported object lifecycles, we would have 
>>something.  Do we have an "object web" yet, or are we stuck with text 
>>forever, with all the nasty security vulnerabilities involved?  Yes I agree 
>>that we lost something when we moved to the web.  Perhaps we need to step 
>>away from the document model purely for security reasons.
>>
>>What's the alternative?  Scratch and Alice?  Storing/transmitting ASTs?  Does 
>>our reliance on https/ssl/tls which is based on streams limit us? When are we 
>>going to stop making streams secure and start making secure network objects?  
>>Object-capability security anyone?
>>
>>Are we stuck with documents because they are the best thing for debugging?
>>
>>
>
>
>
>-- 
>Casey Ransberger 


Re: [fonc] Design of web, POLs for rules. Fuzz testing nile

2013-02-13 Thread Alan Kay
My suggestion is to learn a little about biology and anthropology and media as 
it intertwines with human thought, then check back in.



>
> From: Miles Fidelman 
>To: Fundamentals of New Computing  
>Sent: Wednesday, February 13, 2013 5:56 PM
>Subject: Re: [fonc] Design of web, POLs for rules. Fuzz testing nile
> 
>Hi Alan
>> 
>> First, my email was not about Ted Nelson, Doug Engelbart or what massively 
>> distributed media should be like. It was strictly about architectures that 
>> allow a much wider range of possibilities.
>
>Ahh... but my argument is that the architecture of the current web is SIMPLER 
>than earlier concepts but has proven more powerful (or at least more 
>effective).
>
>> 
>> Second, can you see that your argument really doesn't hold? This is because 
>> it even more justifies oral speech rather than any kind of writing -- and 
>> for hundreds of thousands of years rather than a few thousand. The invention 
>> of writing was very recent and unusual. Most of the humans who have lived on 
>> the earth never learned it. Using your logic, humans should have stuck with 
>> oral modes and not bothered to go through all the work to learn to read and 
>> write.
>
>Actually, no.  Oral communication begat oral communication at a distance - via 
>radio, telephone, VoIP, etc. - all of which have pretty much plateaued in 
>terms of functionality.  Written communication is (was) something new and 
>different, and the web is a technological extension of written communication.  
>My hypothesis is that, as long as we're dealing with interfaces that look a 
>lot like paper (i.e., screens), we may have plateaued as to what's effective 
>in augmenting written communication with technology.  Simple building blocks 
>that we can mix and match in lots of ways.
>
>Now... if we want to talk about new forms of communication (telepathy?), or 
>new kinds of interfaces (3d immersion, neural interfaces that align with some 
>of the kinds of parallel/visual thinking that we do internally), then we start 
>to need to talk about qualitatively different kinds of technological 
>augmentation.
>
>Of course there is a counter-argument to be made that our kids engage in a new 
>and different form of cognition - by dint of continual immersion in large 
>numbers of parallel information streams.  Then again, we seem to be talking 
>lots of short messages (twitter, texting), and there does seem to be a lot of 
>evidence that multi-tasking and information overload are counter-productive 
>(do we really need society-wide ADHD?).
>
>> 
>> There is also more than a tinge of "false Darwin" in your argument. 
>> Evolutionary-like processes don't optimize, they just find fits to the 
>> environment and ecology that exists.
>
>Umm.. that sounds a lot like optimizing to me.  In any case, there's the 
>question of what are we trying to optimize?  That seems to be both an 
>evolutionary question and one of emergent behaviors.
>> The real question here is not "what do humans want?" (consumerism finds this 
>> and supplies it to the general detriment of society), but "what do humans 
>> *need*?" (even if what we need takes a lot of learning to take on).
>
>Now that's truly a false argument.  "Consumerism," as we tend to view it, is 
>driven by producers, advertising, and creation of false needs.
>
>
>Cheers,
>
>Miles
>
>-- In theory, there is no difference between theory and practice.
>In practice, there is.    Yogi Berra
>


Re: [fonc] Design of web, POLs for rules. Fuzz testing nile

2013-02-13 Thread Alan Kay
Hi Miles

First, my email was not about Ted Nelson, Doug Engelbart or what massively 
distributed media should be like. It was strictly about architectures that 
allow a much wider range of possibilities.

Second, can you see that your argument really doesn't hold? This is because it 
even more justifies oral speech rather than any kind of writing -- and for 
hundreds of thousands of years rather than a few thousand. The invention of 
writing was very recent and unusual. Most of the humans who have lived on the 
earth never learned it. Using your logic, humans should have stuck with oral 
modes and not bothered to go through all the work to learn to read and write.

There is also more than a tinge of "false Darwin" in your argument. 
Evolutionary-like processes don't optimize, they just find fits to the 
environment and ecology that exists. The real question here is not "what do 
humans want?" (consumerism finds this and supplies it to the general detriment 
of society), but "what do humans *need*?" (even if what we need takes a lot of 
learning to take on).





>
> From: Miles Fidelman 
>To: Fundamentals of New Computing  
>Sent: Wednesday, February 13, 2013 4:58 PM
>Subject: Re: [fonc] Design of web, POLs for rules. Fuzz testing nile
> 
>Alan Kay wrote:
>> 
>> Or you could look at the actual problem "a web" has to solve, which is to 
>> present arbitrary information to a user that comes from any of several 
>> billion sources. Looked at from this perspective we can see that the current 
>> web design could hardly be more wrong headed. For example, what is the 
>> probability that we can make an authoring app that has all the features 
>> needed by billions of producers?
>
>Hmmm let me take an opposing view here, at least for the purpose of 
>playing devil's advocate:
>
>1. Paper and ink have served for 1000s of years, and with the addition of the 
>printing press and libraries have served to distribute and preserve 
>information, from several billion sources, for an awfully long time.
>
>2. If one actually looks at what people use when generating and distributing 
>information it tends to be essentially "smarter paper" - word processors, 
>spreadsheets, powerpoint slides; and when we look at distribution systems, it 
>comes down to email and the electronic equivalent of file rooms and libraries.
>
>Sure, we've added additional media types to the mix, but the basic model 
>hasn't changed all that much.  Pretty much all the more complicated 
>technologies people have come up with don't actually work that well, or get 
>used that much.  Even in the web world, single direction hyperlinks dominate 
>(remember all the complicated, bi-directional links that Ted Nelson came up 
>with).  And when it comes to "groupware," what dominates seems to be chat and 
>twitter.
>
>There's a pretty good argument to be made that what "works" are powerful 
>building blocks that can be combined in lots of different ways; driven by some 
>degree of natural selection and evolution. (Where the web is concerned, first 
>there was ftp, then techinfo, then gopher, and now the web.  Simple mashups 
>seem to have won out over more complicated service oriented architectures.  We 
>might well have plateaued.)
>
>Miles Fidelman
>
>-- In theory, there is no difference between theory and practice.
>In practice, there is.    Yogi Berra
>


Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

2013-02-13 Thread Alan Kay
Hi Barry

I like your characterization, and do think the next level also will require a 
qualitatively different approach

Cheers,

Alan



>
> From: Barry Jay 
>To: fonc@vpri.org 
>Sent: Wednesday, February 13, 2013 1:13 PM
>Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
> 
>
>Hi Alan,
>
>the phrase I picked up on was "doing experiments". One way to think of
>the problem is that we are trying to automate the scientific process,
>which is a blend of reasoning and experiments. Most of us focus on one
>or the other, as in deductive AI versus databases of common knowledge,
>but the history of physics etc suggests that we need to develop both
>within a single system, e.g. a language that supports both higher-order
>programming (for strategies, etc) and generic queries (for conducting
>experiments on newly met systems). 
>
>Yours,
>Barry
>
>
>On 02/14/2013 02:26 AM, Alan Kay wrote: 
>>Hi Thiago
>>
>>I think you are on a good path.
>>
>>One way to think about this problem is that the broker is a human programmer
>>who has received a module from half way around the world that claims to
>>provide important services. The programmer would confine it in an address
>>space and start doing experiments with it to try to discover what it does
>>(and/or perhaps how well its behavior matches up to its claims). Many of the
>>discovery approaches of Lenat in AM and Eurisko could be very useful here.
>>
>>Another part of the scaling of modules approach could be to require modules
>>to have much better models of the environments they expect/need in order to
>>run.
>>
>>For example, suppose a module has a variable that it would like to refer to
>>some external resource. Both static and dynamic typing are insufficient here
>>because they are only about kinds of results rather than meanings of results.
>>
>>But we could readily imagine a language in which the variable had associated
>>with it a "dummy" or "stand-in" model of what is desired. It could be a slow
>>version of something we are hoping to get a faster version of. It could be
>>sample values and tests, etc. All of these would be useful for debugging our
>>module -- in fact, we could make this a requirement of our module system,
>>that the modules carry enough information to allow them to be debugged with
>>only their own model of the environment.
>>
>>And the more information the model has, the easier it will be for a program
>>to see if the model of an environment for a module matches up to possible
>>modules out in the environment when the system is running for real.
>>
>>Cheers,
>>
>>Alan
>>
>>
>>>
>>> From: Thiago Silva 
>>>To: fonc  
>>>Sent: Wednesday, February 13, 2013 2:09 AM
>>>Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
>>> 
>>>Hello,
>>>
>>>as I was thinking over these problems today, here are some initial thoughts,
>>>just to get the conversation going...
>>>
>>>The first time I read about the Method Finder and Ted's memo, I tried to grasp
>>>the broader issue, and I'm still thinking of some interesting examples to
>>>explore.
>>>
>>>I can see the problem of finding operations by their meanings, the problem of
>>>finding objects by the services they provide and the overal structure of the
>>>discovery, negotiation and binding.
>>>
>>>My feeling is that, besides using worlds as mechanism, an explicit "discovery"
>>>context may be required (though I can't say much without further
>>>experimentations), specially when trying to figure out operations that don't
>>>produce a distinguishable value but rather change the state of computation
>>>(authenticating, opening a file, sending a message through the network, etc)
>>>or when doing remote discovery.
>>>
>>>For brokering (and I'm presuming the use of such entities, as I could not get
>>>rid of them in my mind so far), my first thought was that a chain of brokers
>>>of some sorts could be useful in the architecture where each could have
>>>specific ways of mediating discovery and negotiation through the "levels" (or
>>>narrowed options, providing isolation for some services. Worlds come to mind).
>>>
>>>During the "binding time", I think it would be important that some
>>>requirements of the client could be relaxed or even be tagged optional to
>>>allow th

Re: [fonc] Design of web, POLs for rules. Fuzz testing nile

2013-02-13 Thread Alan Kay
Or the (earlier) Smalltalk Models Views Controllers mechanism which had a 
dynamic language with dynamic graphics to allow quite a bit of flexibility with 
arbitrary "models". 



>
> From: David Harris 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Wednesday, February 13, 2013 7:44 AM
>Subject: Re: [fonc] Design of web, POLs for rules. Fuzz testing nile
> 
>
>Alan --
>
>
>Yes, we seem to slowly getting back the the NeWS (Network extensible Windowing 
>System) paradigm which used a modified Display Postscript to allow the 
>intelligence, including user input, to live in the terminal (as opposed to the 
>X-Windows model).  But I am sure I am teaching my grandmother to suck eggs, 
>here, sorry :-) .
>
>
>David
>[[ NeWS = Network extensible Windowing System 
>http://en.wikipedia.org/wiki/NeWS ]]
>
>
>On Wed, Feb 13, 2013 at 7:37 AM, Alan Kay  wrote:
>
>Hi John
>>
>>
>>Or you could look at the actual problem "a web" has to solve, which is to 
>>present arbitrary information to a user that comes from any of several 
>>billion sources. Looked at from this perspective we can see that the current 
>>web design could hardly be more wrong headed. For example, what is the 
>>probability that we can make an authoring app that has all the features 
>>needed by billions of producers?
>>
>>
>>One conclusion could be that the web/browser is not an app but should be a 
>>kind of operating system that should be set up to safely execute anything 
>>from anywhere and to present the results in forms understandable by the 
>>end-user.
>>
>>
>>After literally decades of trying to add more and more features and not yet 
>>matching up to the software that ran on the machines the original browser was 
>>done on, they are slowly coming around to the idea that they should be safely 
>>executing programs written by others. It has only been in the last few years 
>>-- with Native Client in Chrome -- that really fast programs can be safely 
>>downloaded as executables without having to have permission of a SysAdmin.
>>
>>
>>So another way to look at all this is to ask what such an "OS" really needs 
>>to have to allow all in the world to make their own media and have it used by 
>>others ...
>>
>>
>>Cheers,
>>
>>
>>Alan
>>
>>
>>
>>>
>>> From: John Carlson 
>>>To: Fundamentals of New Computing  
>>>Sent: Tuesday, February 12, 2013 9:00 PM
>>>Subject: [fonc] Design of web, POLs for rules. Fuzz testing nile
>>> 
>>>
>>>Although I have read very little about the design of the web, things are 
>>>starting to gel in my mind.  At the lowest level lies the static or 
>>>declarative part of the web.  The html, dom, xml and json are the main 
>>>languages used in the declarative part.  Layered on top of this is the 
>>>dynamic or procedural part of the web.  Javascript and xslt are the main 
>>>languages in the procedural part.   The final level is the constraints or 
>>>rule based part of the web, normally called stylesheets.  The languages in 
>>>the rule based web are css1, 2, 3 and xsl. Jquery provides a way to apply 
>>>operations in this arena.  I am excluding popular server side 
>>>languages...too many.
>>>What I am wondering is what is the best way to incorporate rules into a 
>>>language.  Vrml has routes.  Uml has ocl. Is avoiding if statements and 
>>>for/while loops the goal of rules languages--that syntax?  That is, do a 
>>>query or find, and apply the operations or rules to all returned values.
>>>Now, if I wanted to apply probabilistic or fuzzy rules to the dom, that 
>>>seems fairly straightforward.  Fuzz testing does this moderately well.  Has 
>>>there been attempts at better fuzz testing? Fuzz about fuzz?  Or is brute 
>>>force best?
>>>We've also seen probablistic parser generators, correct?
>>>But what about probablistic rules?  Can we design an ultimate website w/o a 
>>>designer?  Can we use statistics to create a great solitaire player--i have 
>>>a pretty good stochastic solitaire player for one version of solitaire...how 
>>>about others?  How does one create a great set of rules?  One can create 
>>>great rule POLs, but where are the authors?  Something like cameron browne's 
>>>thesis seems great for grid games.  He is quite prolific.  Can we apply the 
>>>same logic to card games? Web sites?  We have "The Nature of O

Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

2013-02-13 Thread Alan Kay
Unit tests are just a small part of the kinds of description that could be used 
and are needed. 



>
> From: David Harris 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Wednesday, February 13, 2013 7:39 AM
>Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
> 
>
>This sounds suspiciously like Unit Testing, which is basically "When I say 
>this, you should answer that."    Thos are precomputed answers, but could be 
>computed I suppose -- so a bit like your Postscript example ... you send the 
>Testing-Agent down the pipe.  
>
>
>David
>
>
>
>On Wed, Feb 13, 2013 at 7:26 AM, Alan Kay  wrote:
>
>Hi Thiago
>>
>>
>>I think you are on a good path.
>>
>>
>>One way to think about this problem is that the broker is a human programmer 
>>who has received a module from half way around the world that claims to 
>>provide important services. The programmer would confine it in an address 
>>space and start doing experiments with it to try to discover what it does 
>>(and/or perhaps how well its behavior matches up to its claims). Many of the 
>>discovery approaches of Lenat in AM and Eurisko could be very useful here.
>>
>>
>>Another part of the scaling of modules approach could be to require modules 
>>to have much better models of the environments they expect/need in order to 
>>run.
>>
>>
>>For example, suppose a module has a variable that it would like to refer to 
>>some external resource. Both static and dynamic typing are insufficient here 
>>because they are only about kinds of results rather than meanings of results. 
>>
>>
>>But we could readily imagine a language in which the variable had associated 
>>with it a "dummy" or "stand-in" model of what is desired. It could be a slow 
>>version of something we are hoping to get a faster version of. It could be 
>>sample values and tests, etc. All of these would be useful for debugging our 
>>module -- in fact, we could make this a requirement of our module system, 
>>that the modules carry enough information to allow them to be debugged with 
>>only their own model of the environment. 
>>
>>
>>And the more information the model has, the easier it will be for a program 
>>to see if the model of an environment for a module matches up to possible 
>>modules out in the environment when the system is running for real.
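
The "stand-in model" idea above can be sketched concretely. The following is a minimal, hypothetical Python illustration -- `ResourceModel`, its sample-based `matches` check, and the candidate names are invented for this sketch, not part of any system discussed in the thread:

```python
import math

class ResourceModel:
    """A 'stand-in' model attached to a module's external reference.

    It carries a description plus sample (args, expected) pairs, so the
    module can be debugged against the model alone, and a broker can
    later test whether a candidate provider behaves as desired.
    """
    def __init__(self, description, samples):
        self.description = description
        self.samples = samples

    def matches(self, candidate, tolerance=1e-9):
        # Experiment on the candidate: run every sample and compare.
        for args, expected in self.samples:
            try:
                result = candidate(*args)
            except Exception:
                return False
            if abs(result - expected) > tolerance:
                return False
        return True

# The model describes "sine of an angle in degrees" purely by examples.
degree_sin_model = ResourceModel(
    "sine of an angle given in degrees",
    samples=[((30,), 0.5), ((90,), 1.0), ((0,), 0.0)])

candidates = {
    "radian_sin": math.sin,
    "degree_sin": lambda d: math.sin(math.radians(d)),
}
found = {name: degree_sin_model.matches(f) for name, f in candidates.items()}
```

Because the model describes the resource only by sample values, a broker can discover that the degree-based candidate fits while the radian-based one does not -- the kind of experiment-driven matching Alan describes.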
>>
>>
>>Cheers,
>>
>>
>>Alan
>>
>>
>>
>>>
>>> From: Thiago Silva 
>>>To: fonc  
>>>Sent: Wednesday, February 13, 2013 2:09 AM
>>>Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
>>> 
>>>Hello,
>>>
>>>as I was thinking over these problems today, here are some initial thoughts,
>>>just to get the conversation going...
>>>
>>>
>>>The first time I read about the Method Finder and Ted's memo, I tried to 
>>>grasp
>>>the broader issue, and I'm still thinking of some interesting examples to
>>>explore.
>>>
>>>I can see the problem of finding operations by their meanings, the problem of
>>>finding objects by the services they provide and the overal structure of the
>>>discovery, negotiation and binding.
>>>
>>>My feeling is that, besides using worlds as mechanism, an explicit 
>>>"discovery"
>>>context may be required (though I can't say much without further
>>>experimentations), specially when trying to figure out operations that don't
>>>produce a distinguishable value but rather change the state of computation
>>>(authenticating, opening a file, sending a message through the network, etc)
>>>or when doing remote discovery.
>>>
>>>For brokering (and I'm presuming the use of such entities, as I could not get
>>>rid of them in my mind so far), my first thought was that a chain of brokers
>>>of some sorts could be useful in the architecture where each could have
>>>specific ways of mediating discovery and negotiation through the "levels" (or
>>>narrowed options, providing isolation for some services. Worlds come to 
>>>mind).
>>>
>>>During the "binding time", I think it would be important that some
>>>requirements of the client could be relaxed or even be tagged optional to
>>>allow the module to execute at least a subset of its features (or to execute

Re: [fonc] Design of web, POLs for rules. Fuzz testing nile

2013-02-13 Thread Alan Kay
Hi John

Or you could look at the actual problem "a web" has to solve, which is to 
present arbitrary information to a user that comes from any of several billion 
sources. Looked at from this perspective we can see that the current web design 
could hardly be more wrong headed. For example, what is the probability that we 
can make an authoring app that has all the features needed by billions of 
producers?

One conclusion could be that the web/browser is not an app but should be a kind 
of operating system that should be set up to safely execute anything from 
anywhere and to present the results in forms understandable by the end-user.

After literally decades of trying to add more and more features and not yet 
matching up to the software that ran on the machines the original browser was 
done on, they are slowly coming around to the idea that they should be safely 
executing programs written by others. It has only been in the last few years -- 
with Native Client in Chrome -- that really fast programs can be safely 
downloaded as executables without having to have permission of a SysAdmin.

So another way to look at all this is to ask what such an "OS" really needs to 
have to allow all in the world to make their own media and have it used by 
others ...

Cheers,

Alan



>
> From: John Carlson 
>To: Fundamentals of New Computing  
>Sent: Tuesday, February 12, 2013 9:00 PM
>Subject: [fonc] Design of web, POLs for rules. Fuzz testing nile
> 
>
>Although I have read very little about the design of the web, things are 
>starting to gel in my mind.  At the lowest level lies the static or 
>declarative part of the web.  The html, dom, xml and json are the main 
>languages used in the declarative part.  Layered on top of this is the dynamic 
>or procedural part of the web.  Javascript and xslt are the main languages in 
>the procedural part.   The final level is the constraints or rule based part 
>of the web, normally called stylesheets.  The languages in the rule based web 
>are css1, 2, 3 and xsl. Jquery provides a way to apply operations in this 
>arena.  I am excluding popular server side languages...too many.
>What I am wondering is what is the best way to incorporate rules into a 
>language.  Vrml has routes.  Uml has ocl. Is avoiding if statements and 
>for/while loops the goal of rules languages--that syntax?  That is, do a query 
>or find, and apply the operations or rules to all returned values.
>Now, if I wanted to apply probabilistic or fuzzy rules to the dom, that seems 
>fairly straightforward.  Fuzz testing does this moderately well.  Has there 
>been attempts at better fuzz testing? Fuzz about fuzz?  Or is brute force best?
>We've also seen probablistic parser generators, correct?
>But what about probablistic rules?  Can we design an ultimate website w/o a 
>designer?  Can we use statistics to create a great solitaire player--i have a 
>pretty good stochastic solitaire player for one version of solitaire...how 
>about others?  How does one create a great set of rules?  One can create great 
>rule POLs, but where are the authors?  Something like cameron browne's thesis 
>seems great for grid games.  He is quite prolific.  Can we apply the same 
>logic to card games? Web sites?  We have "The Nature of Order" by c. 
>Alexander.  Are there nile designers or fuzz testers/genetic algorithms for 
>nile?
>Is fuzz testing a by product of nile design...should it be?
>If you want to check out the state of the art for dungeons and dragons POLs 
>check out fantasy grounds...xml hell.  We can do better.


Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

2013-02-13 Thread Alan Kay
 are lots of facets, and one has to do with messaging. The idea that 
>> "sending a message" has scaling problems is one that has been around for 
>> quite a while. It was certainly something that we pondered at PARC 35 years 
>> ago, and it was an issue earlier for both the ARPAnet and its offspring: the 
>> Internet.
>> 
>> Several members of this list have pointed this out also.
>> 
>> There are similar scaling problems with the use of tags in XML and EMI etc. 
>> which have to be agreed on somehow
>> 
>> 
>> Part of the problem is that for vanilla sends, the sender has to know the 
>> receiver in some fashion. This starts requiring the interior of a module to 
>> know too much if this is a front line mechanism.
>> 
>> This leads to wanting to do something more like LINDA "coordination" or 
>> "publish and subscribe" where there are pools of producers and consumers who 
>> don't have to know explicitly about each other. A "send" is now a general 
>> request for a resource. But the vanilla approaches here still require that 
>> the "sender" and "receiver" have a fair amount of common knowledge (because 
>> the matching is usually done on "terms in common").
>> 
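
The LINDA-style coordination described here can be made concrete with a toy tuple space. This Python sketch is purely illustrative (class and method names are invented; a real tuple space is concurrent, and `take` blocks rather than returning None):

```python
class TupleSpace:
    """A toy LINDA-style tuple space: producers `out` tuples, consumers
    `take` them by pattern, and neither side ever names the other."""
    def __init__(self):
        self.tuples = []

    def out(self, *tup):
        self.tuples.append(tup)

    def take(self, pattern):
        # `pattern` mixes concrete values with None wildcards. Matching
        # works only on "terms in common" -- the shared-vocabulary
        # requirement the paragraph above points out.
        for tup in self.tuples:
            if len(tup) == len(pattern) and all(
                    p is None or p == t for p, t in zip(pattern, tup)):
                self.tuples.remove(tup)
                return tup
        return None

space = TupleSpace()
space.out("sine", 30, 0.5)               # a producer publishes a result
match = space.take(("sine", 30, None))   # a consumer requests by pattern
```

The producer and consumer are decoupled, but both still had to agree on the term "sine" -- which is exactly the limitation of matching on terms in common.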
>> For example, in order to invoke a module that will compute the sine of an 
>> angle, do you and the receiver both have to agree about the term "sine"? In 
>> APL I think the name of this function is "circle 1" and in Smalltalk it's 
>> "degreeSin", etc. 
>> 
>> Ted Kaehler solved this problem some years ago in Squeak Smalltalk with his 
>> "message finder". For example, if you enter 3. 4. 7 Squeak will instantly 
>> come back with:
>>    3 bitOr: 4 --> 7
>>    3 bitXor: 4 --> 7
>>    3 + 4 --> 7
>> 
>> For the sine example you would enter 30. 0.5 and Squeak will come up with: 
>>    30 degreeSin --> 0.5
>> 
>> The method finder is acting a bit like Doug Lenat's "discovery" systems. 
>> Simple brute force is used here (Ted executes all the methods that could fit 
>> in the system safely to see what they do.)
>> 
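
Ted Kaehler's Method Finder is easy to caricature in a few lines: brute-force every safely executable operation against the example inputs and outputs. This Python sketch is illustrative only (the operation registry is invented and tiny, whereas Squeak enumerates thousands of real methods):

```python
import math

# Candidate operations the finder may try; real Squeak enumerates every
# method it can execute safely, which is why brute force works there.
OPERATIONS = {
    "bitOr":     lambda args: args[0] | args[1],
    "bitXor":    lambda args: args[0] ^ args[1],
    "+":         lambda args: args[0] + args[1],
    "-":         lambda args: args[0] - args[1],
    "degreeSin": lambda args: math.sin(math.radians(args[0])),
}

def method_finder(args, expected, tolerance=1e-9):
    """Report every known operation whose result on `args` matches
    `expected` -- the brute-force idea behind Ted's Method Finder."""
    hits = []
    for name, op in OPERATIONS.items():
        try:
            if abs(op(args) - expected) <= tolerance:
                hits.append(name)
        except Exception:
            pass  # wrong arity or type for this operation; skip it
    return hits

hits = method_finder((3, 4), 7)     # -> ['bitOr', 'bitXor', '+']
sine = method_finder((30,), 0.5)    # -> ['degreeSin']
```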
>> One of the solutions at PARC for dealing with a part of the problem is the 
>> idea of "send an agent, not a message". It was quickly found that defining 
>> file formats for all the different things that could be printed on the new 
>> laser printer was not scaling well. The solution was to send a program that 
>> would just execute safely and blindly in the printer -- the printer would 
>> then just print out the bit bin. This was known as PostScript when it came 
>> out in the world.
>> 
>> The "Trickles" idea from Cornell has much of the same flavor.
>> 
>> One possible starting place is to notice that there are lots more terms that 
>> people can use than the few that are needed to make a powerful compact 
>> programming language. So why not try to describe meanings and match on 
>> meanings -- and let there be not just matching (which is like a password) 
>> but "negotiation", which is what a discovery agent does.
>> 
>> And so forth. I think this is a difficult but doable problem -- it's easier 
>> than AI, but has some tinges of it.
>> 
>> Got any ideas?
>> 
>> Cheers,
>> 
>> Alan
>> 
>> >
>> > From: Jeff Gonis 
>> >To: Alan Kay  
>> >Cc: Fundamentals of New Computing  
>> >Sent: Tuesday, February 12, 2013 10:33 AM
>> >Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
>> > 
>> >
>> >I see no one has taken Alan's bait and asked the million dollar question: 
>> >if you decided that messaging is no longer the right path for scaling, what 
>> >approach are you currently using?
>> >I would assume that FONC is the current approach, meaning, at the risk of 
>> >grossly over-simplifying and sounding ignorant, "problem oriented 
>> >languages" allowing for compact expression of meaning.  But even here, FONC 
>> >struck me as providing vastly better ways of creating code that, at its 
>> >core, still used messaging for robustness, etc, rather than using something 
>> >entirely different.
>> >Have I completely misread the FONC projects? And if not messaging, what 
>> >approach are you currently using to handle scalability?
>> >A little more history ...
>> >
>> >
>> >The first Smalltal

Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

2013-02-13 Thread Alan Kay
One of the original reasons for "message-based" was the simple relativistic 
one. What we decided is that trying to send messages to explicit receivers had 
real scaling problems, whereas "receiving messages" is a good idea.

Cheers,

Alan



>
> From: Eugen Leitl 
>To: Fundamentals of New Computing  
>Sent: Wednesday, February 13, 2013 5:11 AM
>Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
> 
>On Tue, Feb 12, 2013 at 11:33:04AM -0700, Jeff Gonis wrote:
>> I see no one has taken Alan's bait and asked the million dollar question:
>> if you decided that messaging is no longer the right path for scaling, what
>> approach are you currently using?
>
>Classical computation doesn't allow storing multiple bits
>in the same location, so relativistic signalling introduces
>latency. Asynchronous shared-nothing message passing is
>the only thing that scales, as it matches the way this 
>universe does things (try looking at light cones for consistent
>state for multiple writes to the same location -- this
>of course applies to cache coherency).
>
>Inversely, doing things in a different way will guarantee
>that you won't be able to scale. It's not just a good idea,
>it's the law. 
>___
>fonc mailing list
>fonc@vpri.org
>http://vpri.org/mailman/listinfo/fonc
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

2013-02-12 Thread Alan Kay
Hi Miles

I wouldn't characterize it that way.

Much of today's way of doing "object-oriented" programming with regard to 
simulating data-structures is due to C++ and Bjarne Stroustrup's explicit intent 
to "do to C, what Simula did to Algol". He says clearly in the first C++ 
documentation that he is going to follow Simula by making a preprocessor for C, 
and not try to do a late-bound integrated system like Smalltalk.

The idea of simulating data structures using procedures goes way back -- the 
B5000 had it in hardware! -- and I think it was a multiple invention to see 
this as useful in objects. My first investigation (also recounted in TEHOS) was 
to "fix" one thing the B5000 couldn't quite do, which was to simulate a "sparse 
array" efficiently using objects.
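
The sparse-array idea is easy to sketch in Python (a toy stand-in, not the B5000 or Smalltalk version): an object presents itself as an indexed collection while storing only the non-default entries.

```python
# A sparse "array" simulated by an object: callers see an indexed
# collection, but only the entries that differ from the default are
# actually stored.
class SparseArray:
    def __init__(self, length, default=0):
        self.length = length
        self.default = default
        self.cells = {}          # index -> value, only where set

    def __getitem__(self, i):
        if not 0 <= i < self.length:
            raise IndexError(i)
        return self.cells.get(i, self.default)

    def __setitem__(self, i, value):
        if not 0 <= i < self.length:
            raise IndexError(i)
        if value == self.default:
            self.cells.pop(i, None)   # stay sparse
        else:
            self.cells[i] = value

a = SparseArray(1_000_000)
a[123456] = 7
print(a[123456], a[0], len(a.cells))  # 7 0 1
```

Nothing outside the object needs to know whether a real array or a dictionary sits inside; that is the simulation idea in miniature.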

When Simula 67 later appeared, one of their examples was Class String.

My way of thinking about it was much more influenced by Sketchpad and Biology. 
Objects should be like active entities -- Dan Ingalls later characterized the 
feeling of OO programming as being more like training intelligent entities than 
as a puppet master having to pull all the wires directly. So my thought was 
that sending a message should be requesting a goal to be carried out, etc.


What seemed to be really nice -- and still does -- was the simulation idea. 
This meant that you could make one OO system and that you could simulate all 
the components. What if you didn't know how to make a particular smart object 
"the right way"? No problem, you could write a kluge for now, stick it inside 
the object to hide it, and make it look the way it should on the outside. What 
if you couldn't think of anything better than a data structure for a classic 
algorithm? No problem, you could simulate the data structure.

From my bio background, this seemed really nice. You want to do things in terms 
of cells and tissues, but you might have to resort to lesser organizations of 
atoms to make them. The idea in the OOP languages we did was that you should 
really take advantage of the simulation possibilities to design as strongly as 
possible, and then you had a universal material that could allow every 
qualitative level of structure and process to be done.

So this would again be very "actor-like" -- and I thought "actors" were just a 
great name for this way of doing things.

My own personal thoughts about what was accomplished are completely intertwined 
with what our entire group was able to do in a few years at PARC. I would give 
us credit for a very high level combination of "computer science" and "software 
engineering" and "human centered design" and "commingled software and 
hardware", etc. The accomplishment was the group's accomplishment. And this 
whole (to me at least) was a lot more interesting than just a language idea.

I hasten to redirect personal praise to the group accomplishment whenever it 
happens.


I think this is also true for the larger ARPA-PARC community, and why it was 
able to accomplish so much at so many levels.

The "awards to individuals" structure beloved of other fields and of 
journalists completely misses the nature of this process. Any recognition 
should be like "World Series" rings -- everybody gets one, and that's it.

Best wishes,

Alan



>
> From: Miles Fidelman 
>To: Fundamentals of New Computing  
>Sent: Tuesday, February 12, 2013 1:09 PM
>Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
> 
>Hi Alan,
>
>Is it fair to say that the path you took with Smalltalk led to today's object 
>model of data structures, associated methods, and inheritance, with either a 
>single thread-of-control, or small numbers of threads; while the Actor model 
>led (perhaps not directly) to massive concurrency and Erlang?  (I'm still 
>waiting for something that looks like Smalltalk meets Erlang.)
>
>Cheers,
>
>Miles
>
>Alan Kay wrote:
>> Hi Miles
>> 
>> (Again "The Early History of Smalltalk" has some of this history ...)
>> 
>> It is unfair to Carl Hewitt to say that "Actors were his reaction to 
>> Smalltalk-72" (because he had been thinking early thoughts from other 
>> influences). And I had been doing a lot of thinking about the import of his 
>> "Planner" language.
>> 
>> But that is the simplest way of stating the facts and the ordering.
>> 
>> ST-72 and the early Actors follow on were very similar. The Smalltalk that 
>> didn't get made, "-71", was a kind of merge of the object idea, Logo, and 
>> Carl's Planner system (which predated Prolog and was in many respects more 
>> powerful). Planner used &

Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

2013-02-12 Thread Alan Kay
Hi Miles

(Again "The Early History of Smalltalk" has some of this history ...)

It is unfair to Carl Hewitt to say that "Actors were his reaction to 
Smalltalk-72" (because he had been thinking early thoughts from other 
influences). And I had been doing a lot of thinking about the import of his 
"Planner" language.

But that is the simplest way of stating the facts and the ordering. 

ST-72 and the early Actors follow on were very similar. The Smalltalk that 
didn't get made, "-71", was a kind of merge of the object idea, Logo, and 
Carl's Planner system (which predated Prolog and was in many respects more 
powerful). Planner used "pattern-directed invocation" and I thought you could 
both receive messages with it if it were made the interface of an object, and 
also use it for deduction. Smalltalk-72 was a bit of an accident

The divergence later was that we got a bit dirtier as we made a real system 
that you could program a real system in. Actors got cleaner as they looked at 
many interesting theoretical possibilities for distributed computing etc. My 
notion of "object oriented" would now seem to be very actor-like.

Cheers,

Alan




>
> From: Miles Fidelman 
>To: Fundamentals of New Computing  
>Sent: Tuesday, February 12, 2013 11:05 AM
>Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
> 
>Alan Kay wrote:
>> A little more history ...
>> 
>> The first Smalltalk (-72) was "modern" (as used below), and similar to 
>> Erlang in several ways -- for example, messages were received with 
>> "structure and pattern matching", etc. The language was extended using the 
>> same mechanisms ...
>
>Alan,
>
>As I recall, some of your early writings on Smalltalk sounded very actor-like 
>- i.e., objects as processes, with lots of messages floating around, rather 
>than a sequential thread-of-control model. Or is my memory just getting fuzzy? 
> In any case, I'm surprised that the term "actor" hasn't popped up in this 
>thread, along with "object" and "messaging."
>
>Miles Fidelman
>
>
>
>-- In theory, there is no difference between theory and practice.
>In practice, there is. -- Yogi Berra
>


Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

2013-02-12 Thread Alan Kay
Hi Jeff

I think "intermodule communication schemes" that *really scale* is one of the 
most important open issues of the last 45 years or so.

It is one of the several "pursuits" written into the STEPS proposal that we 
didn't use our initial efforts on -- so we've done little to advance this over 
the last few years. But now that the NSF funded part of STEPS has concluded, we 
are planning to use much of the other strand of STEPS to look at some of these 
neglected issues.

There are lots of facets, and one has to do with messaging. The idea that 
"sending a message" has scaling problems is one that has been around for quite 
a while. It was certainly something that we pondered at PARC 35 years ago, and 
it was an issue earlier for both the ARPAnet and its offspring: the Internet.

Several members of this list have pointed this out also.

There are similar scaling problems with the use of tags in XML and EMI etc., 
which have to be agreed on somehow ...


Part of the problem is that for vanilla sends, the sender has to know the 
receiver in some fashion. This starts requiring the interior of a module to 
know too much if this is a front line mechanism.

This leads to wanting to do something more like LINDA "coordination" or 
"publish and subscribe" where there are pools of producers and consumers who 
don't have to know explicitly about each other. A "send" is now a general 
request for a resource. But the vanilla approaches here still require that the 
"sender" and "receiver" have a fair amount of common knowledge (because the 
matching is usually done on "terms in common").
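
A toy LINDA-style tuple space makes the point concrete (this sketch is invented for illustration and ignores concurrency): producers `put` tuples and consumers `take` by template, so neither side names the other -- yet the matching still depends on shared terms, exactly the limitation noted above.

```python
# A toy LINDA-style tuple space: producers put tuples, consumers take
# by template. Neither side knows the other, but matching still relies
# on "terms in common" ("sine" below).
class TupleSpace:
    def __init__(self):
        self.tuples = []

    def put(self, tup):
        self.tuples.append(tup)

    def take(self, template):
        # Template entries: None matches anything, else must be equal.
        for tup in self.tuples:
            if len(tup) == len(template) and all(
                t is None or t == v for t, v in zip(template, tup)
            ):
                self.tuples.remove(tup)
                return tup
        return None

space = TupleSpace()
space.put(("sine", 30, 0.5))            # a producer publishes a result
match = space.take(("sine", 30, None))  # a consumer asks by template
print(match)  # ('sine', 30, 0.5)
```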

For example, in order to invoke a module that will compute the sine of an 
angle, do you and the receiver both have to agree about the term "sine"? In APL 
I think the name of this function is "circle 1" and in Smalltalk it's 
"degreeSin", etc. 

Ted Kaehler solved this problem some years ago in Squeak Smalltalk with his 
"message finder". For example, if you enter 3. 4. 7 Squeak will instantly come 
back with:
   3 bitOr: 4 --> 7
   3 bitXor: 4 --> 7
   3 + 4 --> 7

For the sine example you would enter 30. 0.5 and Squeak will come up with: 
   30 degreeSin --> 0.5

The method finder is acting a bit like Doug Lenat's "discovery" systems. Simple 
brute force is used here (Ted executes all the methods that could fit in the 
system safely to see what they do.)

One of the solutions at PARC for dealing with a part of the problem is the idea 
of "send an agent, not a message". It was quickly found that defining file 
formats for all the different things that could be printed on the new laser 
printer was not scaling well. The solution was to send a program that would 
just execute safely and blindly in the printer -- the printer would then just 
print out the bit bin. This was known as PostScript when it came out in the 
world.
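
The "send an agent, not a message" move can be sketched as a tiny interpreter (an invented toy, not PostScript): instead of the two sides agreeing on a file format, the sender ships a small program and the receiver runs it against a restricted set of operations.

```python
# "Send an agent, not a message": the sender ships a tiny program and
# the receiver interprets it, exposing only a fixed set of operations
# so the agent can run safely and "blindly".
def run_agent(program, device):
    # device maps operation names to the only calls the agent may make
    for op, *args in program:
        device[op](*args)

canvas = []
device = {
    "moveto": lambda x, y: canvas.append(("at", x, y)),
    "lineto": lambda x, y: canvas.append(("line", x, y)),
}
agent = [("moveto", 0, 0), ("lineto", 10, 10)]
run_agent(agent, device)
print(canvas)  # [('at', 0, 0), ('line', 10, 10)]
```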

The "Trickles" idea from Cornell has much of the same flavor.

One possible starting place is to notice that there are lots more terms that 
people can use than the few that are needed to make a powerful compact 
programming language. So why not try to describe meanings and match on meanings 
-- and let there be not just matching (which is like a password) but 
"negotiation", which is what a discovery agent does.

And so forth. I think this is a difficult but doable problem -- it's easier 
than AI, but has some tinges of it.

Got any ideas?

Cheers,

Alan




>
> From: Jeff Gonis 
>To: Alan Kay  
>Cc: Fundamentals of New Computing  
>Sent: Tuesday, February 12, 2013 10:33 AM
>Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
> 
>
>I see no one has taken Alan's bait and asked the million dollar question: if 
>you decided that messaging is no longer the right path for scaling, what 
>approach are you currently using?
>I would assume that FONC is the current approach, meaning, at the risk of 
>grossly over-simplifying and sounding ignorant, "problem oriented languages" 
>allowing for compact expression of meaning.  But even here, FONC struck me as 
>providing vastly better ways of creating code that, at its core, still used 
>messaging for robustness, etc, rather than using something entirely different.
>Have I completely misread the FONC projects? And if not messaging, what 
>approach are you currently using to handle scalability?
>A little more history ...
>
>
>The first Smalltalk (-72) was "modern" (as used below), and similar to Erlang 
>in several ways -- for example, messages were received with "structure and 
>pattern matching", etc. The language was extended using the same mechanisms ...
>
>
>Cheers,
>
>
>

Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

2013-02-12 Thread Alan Kay
A little more history ...

The first Smalltalk (-72) was "modern" (as used below), and similar to Erlang 
in several ways -- for example, messages were received with "structure and 
pattern matching", etc. The language was extended using the same mechanisms ...

Cheers,


Alan



>
> From: Brian Rice 
>To: Fundamentals of New Computing  
>Sent: Tuesday, February 12, 2013 8:54 AM
>Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
> 
>
>Independently of the originally-directed historical intent, I'll pose my own 
>quick perspective.
>
>Perhaps a contrast with Steve Yegge's Kingdom of Nouns essay would help:
>http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html
>
>
>
>The modern post-Erlang sense of message-oriented computing has to do with 
>messages with structure and pattern-matching, where error-handling isn't about 
>sequential, nested access, but more about independent structures dealing with 
>untrusted noise.
>
>
>Anyway, treating the messages as first-class objects (in the Lisp sense) is 
>what gets you there:
>http://www.erlang.org/doc/getting_started/conc_prog.html
>
>
>
>
>
>
>On Tue, Feb 12, 2013 at 7:15 AM, Loup Vaillant  wrote:
>
>This question was prompted by a quote by Joe Armstrong about OOP[1].
>>It is for Alan Kay, but I'm totally fine with a relevant link.  Also,
>>"I don't know" and "I don't have time for this" are perfectly okay.
>>
>>Alan, when the term "Object oriented" you coined has been hijacked by
>>Java and Co, you made clear that you were mainly about messages, not
>>classes. My model of you even says that Erlang is far more OO than Java.
>>
>>Then why did you chose the term "object" instead of "message" in the
>>first place?  Was there a specific reason for your preference, or did
>>you simply not bother foreseeing any terminology issue? (20/20 hindsight and 
>>such.)
>>
>>Bonus question: if you had chosen "message" instead, do you think it
>>would have been hijacked too?
>>
>>Thanks,
>>Loup.
>>
>>
>>[1]: http://news.ycombinator.com/item?id=5205976
>>     (This is for reference, you don't really need to read it.)
>
>
>
>-- 
>-Brian T. Rice 


Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

2013-02-12 Thread Alan Kay
I left out the Lisp influences. I didn't know Lisp when the original thoughts 
were happening, but a few years later, Lisp turned out to be a huge positive 
factor in working out both how things could be done, and also provided ideas 
and contexts for this kind of thinking. For example, the first Smalltalk was 
really an exercise in "apply"

Cheers,

Alan



>
> From: David Hussman 
>To: 'Alan Kay' ; 'Fundamentals of New Computing' 
> 
>Sent: Tuesday, February 12, 2013 8:36 AM
>Subject: RE: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
> 
>
>Alan,
> 
>Thanks for the thoughtful words / history. I am a lurker on this group and I 
>dig seeing this kind of dialog during times when I am so often surrounded by 
>bright shiny object types.
> 
>David
> 
>From:fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of Alan Kay
>Sent: Tuesday, February 12, 2013 10:23 AM
>To: Fundamentals of New Computing
>Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
> 
>Hi Loup
> 
>I think how this happened has already been described in "The Early History of 
>Smalltalk". 
> 
>But 
> 
>In the Fall of 1966, Sketchpad was what got me started thinking about 
>"representing concepts as whole things". Simula, a week later, provided a 
>glimpse of how one could "deal with issues that couldn't be done wonderfully 
>with constraints and solving" (namely, you could hide procedures inside the 
>entities). 
> 
>This triggered off many thoughts in a few minutes, bringing in "ideas that 
>seemed similar" from biology, math (algebras), logic (Carnap's intensional 
>logic), philosophy (Plato's "Ideas"), hardware (running multiple active units 
>off a bus), systems design (the use of virtual machines in time-sharing), and 
>networking (the ARPA community was getting ready to do the ARPAnet). Bob 
>Barton had pronounced that "recursive design is making the parts have the same 
>powers as the wholes", which for the first time I was able to see was really 
>powerful if the wholes and the parts were entire computers, hardware or 
>software or some mixture.
> 
>The latter was hugely important to me because it allowed a "universal 
>simulation system" to be created from just a few ideas that would cover 
>everything and every other kind of thing.
> 
>During this period I had no label for what I was doing, including "this thing 
>I was doing", I was just doing.
> 
>A few months later someone asked me what I was doing, and I didn't think about 
>the answer -- I was still trying to see how the synthesis of ideas could be 
>pulled off without a lot of machinery (kind of the math stage of the process).
> 
>Back then, there was already a term in use called "data driven programming". 
>This is where "data" contains info that will help find appropriate procedures. 
> 
>And the term "objects" was also used for "composite data" i.e. blocks of 
>storage with different fields containing values of various kinds. This came 
>naturally from "card images" (punched cards were usually 80 or more characters 
>long and divided into fields). 
> 
>At some point someone (probably in the 50s) decided to use some of the fields 
>to help the logic of plug board programming and "drive" the processes off the 
>cards rather than "just processing" them.
> 
>So if you looked at how Sketchpad was implemented you would see, in the terms 
>of the day: "objects that were data driven". Ivan gives Doug Ross credit for 
>his "plex structures", which were an MIT way to think about these ideas. 
>Sketchpad also used "threaded lists" in its blocks (this was not a great idea 
>but it was popular back then -- Simula later took this up as well).
> 
>So I just said "object oriented programming" and went back to work.
> 
>Later I regretted this (and some of the other labels that were also put in 
>service) after the ideas worked out nicely and were very powerful for us at 
>PARC. 
> 
>The success of the ideas made what we were doing popular, and people wanted to 
>be a part of it. This led to using the term "object oriented" as a designer 
>jeans label for pretty much anything (there was even an "object-oriented" 
>COBOL!). This appropriation of labels without content is a typical pop culture 
>"fantasy football" syndrome.
> 
>PARC was an integral part of the ARPA community, the last gasp of which in the 
>70s was designing the Internet via a design

Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

2013-02-12 Thread Alan Kay
 was to not manifest a "message" unless someone wanted to see it -- this 
was part of the magic of Dan Ingalls' design).

So I think the problem with "messaging" was partly that it was the more subtle 
and invisible idea and this "verb part") got lost because the representational 
"noun part" got all the attention. (And we generally don't think of noun-like 
things as "in process".) This is part of the big problem in "OOP" today, 
because it is mostly a complicated way of making new data structures whose 
fields are munged by "setters".


Back to your original question, it *might* have helped to have better 
terminology. The Simula folks tried this in the first Simula, but their choice 
of English words was confusing (they used "Activity" for "Class" and "Process" 
for "Instance"). This is almost good and much more in keeping with what should 
be the philosophical underpinnings of this kind of design. 

After being told that no one had understood this (I and two other grad students 
had to read the machine code listing of the Simula compiler to understand its 
documentation!), Nygaard and Dahl chose "Class" and "Instance" for Simula 
67. I chose these for Smalltalk also because why multiply terms? (I should have 
chosen better terms here also.)

To sum up, besides the tiny computers we had to use back then, we didn't have a 
good enough theory of messaging -- we did have a start that was based on Dave 
Fisher's "Control Definition Language" CMU 1970 thesis. But then we got 
overwhelmed by the excitement of being able to make personal computing on the 
Alto. A few years later I decided that "sending messages" was not a good 
scaling idea, and that something more general to get needed resources "from the 
outside" needed to be invented.


Cheers,

Alan



>
> From: Loup Vaillant 
>To: Fundamentals of New Computing  
>Sent: Tuesday, February 12, 2013 7:15 AM
>Subject: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
> 
>This question was prompted by a quote by Joe Armstrong about OOP[1].
>It is for Alan Kay, but I'm totally fine with a relevant link.  Also,
>"I don't know" and "I don't have time for this" are perfectly okay.
>
>Alan, when the term "Object oriented" you coined has been hijacked by
>Java and Co, you made clear that you were mainly about messages, not
>classes. My model of you even says that Erlang is far more OO than Java.
>
>Then why did you chose the term "object" instead of "message" in the
>first place?  Was there a specific reason for your preference, or did
>you simply not bother foreseeing any terminology issue? (20/20 hindsight and 
>such.)
>
>Bonus question: if you had chosen "message" instead, do you think it
>would have been hijacked too?
>
>Thanks,
>Loup.
>
>
>[1]: http://news.ycombinator.com/item?id=5205976
>     (This is for reference, you don't really need to read it.)


Re: [fonc] yet another meta compiler compiler

2013-02-08 Thread Alan Kay
Looks nice to me!

But no ivory towers around to pillage. (However, planting a few seeds is almost 
always a good idea.)

Cheers,

Alan




>
> From: Charles Perkins 
>To: Fundamentals of New Computing  
>Sent: Friday, February 8, 2013 3:52 PM
>Subject: [fonc] yet another meta compiler compiler
> 
>While we're all waiting for the next STEP report I thought I'd share something 
>I've been working on, inspired by O'Meta and by the Meta-II paper and by the 
>discussions on this list from November.
>
>I've written up the construction of a parser generator and compiler compiler 
>here: https://github.com/charlesap/ibnf/blob/master/SyntaxChapter.pdf?raw=true
>
>The source code can be had here: https://github.com/charlesap/ibnf
>
>Don't be fooled by the footnotes and references, this is a piece of outsider 
>literature. I am a barbarian come to pillage the ivory tower. Yarr.
>
>Chuck


Re: [fonc] deriving a POL from existing code

2013-01-08 Thread Alan Kay
Yes indeed, I quite agree with David. 

One of the main points in the 2012 STEPS report (when I get around to finally 
finishing it and getting it out) is exactly David's -- that it is a huge design 
task to pull off a good DSL -- actually it is a double design task: you first 
need to come up with a great design of the domain area that is good enough to 
make it worth while, and to then try to design and "make a math" for the 
fruitful ways the domain area can be viewed.

This pays off if the domain area is very important (and often large and 
complicated). In the STEPS project both Nile (factors of 100 to 1000) and OMeta 
(wide spectrum flexibility and compactness) paid off really well. K-script also 
has paid off well because it enabled us to get about a 5-6 reduction in the 
Smalltalk code we had done the first scaffolding of the UI and Media in. Maru 
is working well as the backend for Nile and is very compact, etc.

In other words, the DSLs that really pay off are "actual languages" with all 
that implies.

But it's a little sobering to look at the many languages we did to learn about 
doing this, and ones that wound up being awkward, etc. and not used in the end.

On the other hand, our main point in doing STEPS was for both learning -- 
answering some "lunch questions" we've had for years -- and also to put a lot 
of effort into getting a handle on actual complexities vs complications. We 
purposely picked a well-mined set of domains -- "personal computing" -- so we 
would not have to invent fundamental concepts, but rather to take areas that 
are well known in one sense, and try to see how they could be captured in a 
very compact but readable form.

In other words, it's good to choose the battles to be fought and those to be 
avoided -- it's hard to "invent everything". This was even true at Xerox PARC 
-- even though it seems as though that is what we did. However, pretty much 
everything there in the computing research area had good -- but flawed -- 
precursors from the 60s (and from many of us who went to PARC). So the idea 
there was "brand new" HW-SW-UI-Networking but leaning on the "promising 
gestures and failures" of the 60s. This interesting paradox of "from scratch" 
but "don't forget" worked really well.

Cheers,

Alan





>
> From: David Barbour 
>To: Fundamentals of New Computing  
>Sent: Tuesday, January 8, 2013 8:19 AM
>Subject: Re: [fonc] deriving a POL from existing code
> 
>
>Take a look at the Inform 7 language (http://inform7.com/) and its modular 
>'rulebooks'.
>
>
>Creating good rules-based languages isn't trivial, mostly because ad-hoc rules 
>can interfere in ways that are awkward to reason about or optimize. Temporal 
>logic (e.g. Dedalus, Bloom) and constraint-logic techniques are both 
>appropriate and effective. I think my RDP will also be effective. 
>
>
>Creating a good POL can be difficult. (cf. 
>http://lambda-the-ultimate.org/node/4653)
>
>
>
>On Tue, Jan 8, 2013 at 7:33 AM, John Carlson  wrote:
>
>Has anyone ever attempted to automatically add meaning, semantics, longer 
>variable names, loops, and comments to code?  Just how good are 
>the beautifiers out there, and can we do better?
>>
>>No, I'm not asking anyone to break a license agreement.  Ideally, I would 
>>want this to work on code written by a human being--me.
>>
>>Yes, I realize that literate programming is the way to go.  I just have never 
>>explored other options before, and would like to know about the literature.
>>
>>
>>Say I am trying to derive language for requirements and use cases based on 
>>existing source code.
>>
>>I believe this may be beyond current reverse engineering techniques which 
>>stop at the UML models.  I don't want models, I want text, perhaps written in 
>>a Problem Oriented Language (POL).
>>
>>That is, how does one derive a good POL from existing code?  Is this the same 
>>as writing a scripting language on top of a library?
>>
>>What approaches have been tried?
>>
>>Here's the POL I want.  I want a POL to describe game rules at the same order 
>>of magnitude as English.  I am not speaking of animation or resources--just 
>>rules and constraints.  Does the Object Constraint Language suffice for this?
>>
>>Thanks,
>>
>>John
>>
>>
>
>
>
>-- 
>bringing s-words to a pen fight 


Re: [fonc] Final STEP progress report?

2013-01-04 Thread Alan Kay
Sliding deadlines very often allow other pursuits to creep in ...

Cheers,

Alan



>
> From: Dale Schumacher 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Friday, January 4, 2013 8:59 AM
>Subject: Re: [fonc] Final STEP progress report?
> 
>Kind of like "music starts at 9pm" :-)
>
>We're all anxious to see the results of your work.  Thanks (in
>advance) for sharing it.
>
>On Fri, Jan 4, 2013 at 10:51 AM, Alan Kay  wrote:
>> It turns out that the "due date" is actually a "due interval" that starts
>> Jan 1st and extends for a few months ... so we are working on putting the
>> report together amongst other activities ...
>>
>> Cheers,
>>
>> Alan
>>
>> 
>> From: Mathnerd314 
>> To: Fundamentals of New Computing 
>> Sent: Friday, January 4, 2013 8:43 AM
>> Subject: Re: [fonc] Final STEP progress report?
>>
>> On 11/7/2012 4:37 PM, Kim Rose wrote:
>>> Hello,
>>>
>>> For those of you interested and waiting -- the NSF (National Science
>>> Foundation) funding for the 5-year "STEPS" project has now finished (we
>>> stretched that funding to last for 6 years).  The final report on this work
>>> will be published and available on our website by the end of this calendar
>>> year.
>> It's four days past the end of the calendar year, and I don't see a final
>> report: http://www.vpri.org/html/writings.php
>>
>> Am I looking in the wrong place? Or will it be a few more days until it's
>> published?
>>
>> -- Mathnerd314


Re: [fonc] Final STEP progress report?

2013-01-04 Thread Alan Kay
It turns out that the "due date" is actually a "due interval" that starts Jan 
1st and extends for a few months ... so we are working on putting the report 
together amongst other activities ...

Cheers,

Alan



>
> From: Mathnerd314 
>To: Fundamentals of New Computing  
>Sent: Friday, January 4, 2013 8:43 AM
>Subject: Re: [fonc] Final STEP progress report?
> 
>On 11/7/2012 4:37 PM, Kim Rose wrote:
>> Hello,
>> 
>> For those of you interested and waiting -- the NSF (National Science 
>> Foundation) funding for the 5-year "STEPS" project has now finished (we 
>> stretched that funding to last for 6 years).  The final report on this work 
>> will be published and available on our website by the end of this calendar 
>> year.
>It's four days past the end of the calendar year, and I don't see a final 
>report: http://www.vpri.org/html/writings.php
>
>Am I looking in the wrong place? Or will it be a few more days until it's 
>published?
>
>-- Mathnerd314


Re: [fonc] Current topics

2013-01-03 Thread Alan Kay
Hi David

I think both of your essays are important, as is the general style of 
aspiration.

The "ingredients of a soup" idea is one of the topics we were supposed to work 
on in the STEPS project, but it counts as a shortfall: we wound up using our 
time on other parts. We gesture at it in some of the yearly reports.

The thought was that a kind of "semantic publish and subscribe" scheme -- that 
dealt in descriptions and avoided having to know names of functionalities as 
much as possible -- would provide a very scalable loose coupling mechanism. We 
were hoping to get beyond the pitfalls of attempts at program synthesis from 
years ago that used pre-conditions and post-conditions to help matchers paste 
things together.
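A minimal sketch of what such description-based matching might look like (all names here -- `Registry`, `offer`, `discover` -- are invented for illustration, not anything from the STEPS work):

```python
# Hypothetical sketch of "semantic publish and subscribe": components are
# found by matching descriptions (attribute sets) rather than by name.

class Registry:
    def __init__(self):
        self._offers = []          # (description, provider) pairs

    def offer(self, description, provider):
        """Publish a capability under a set of descriptive attributes."""
        self._offers.append((frozenset(description), provider))

    def discover(self, needs):
        """Find providers whose description covers the requested attributes."""
        needs = frozenset(needs)
        return [p for desc, p in self._offers if needs <= desc]

registry = Registry()
registry.offer({"sorts", "stable", "in-memory"}, provider=sorted)
registry.offer({"reverses", "in-memory"}, provider=lambda xs: xs[::-1])

# The subscriber never names `sorted`; it asks for "something that sorts".
match = registry.discover({"sorts"})
print(match[0]([3, 1, 2]))   # [1, 2, 3]
```

A real matcher would of course need richer descriptions than flat attribute sets -- which is where the "dynamic discovery" question comes in.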


I'm hoping that you can cast more light on this area. One of my thoughts is 
that a good matcher might be more like a "dynamic discovery system" (e.g. 
Lenat's "Eurisko") than a simple matcher.

It's interesting to think of what the commonalities of such a system should be 
like. A thought here was that a suitable descriptive language could and should 
be much smaller and simpler than a set of "standard conventions" and "tags" 
for functionality.

Joe Goguen was a good friend of mine, and his early death was a real tragedy. 
As you know, he spent many years trying to find sweet spots in formal semantics 
that could also be used in practical ways...

Best wishes,

Alan



>
> From: David Barbour 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Wednesday, January 2, 2013 11:09 PM
>Subject: Re: [fonc] Current topics
> 
>
>On Tue, Jan 1, 2013 at 7:53 AM, Alan Kay  wrote:
>
>As humans, we are used to being sloppy about message creation and sending, and 
>rely on negotiation and good will after the fact to deal with errors. 
>
>
>You might be interested in my article on avoiding commitment in HCI, and its 
>impact on programming languages. I address some issues of negotiation and 
>clarification after-the-fact. I'm interested in techniques that might make 
>this property more systematic and compositional, such as modeling messages or 
>signals as having probabilistic meanings in context.
>
>
>
>>
>>you are much better off making -- with great care -- a few kinds of 
>>relatively big modules as basic building blocks than to have zillions of 
>>different modules being constructed by vanilla programmers
>
>
>Large components are probably a good idea if humans are hand-managing the glue 
>between them. But what if there was another way? Instead of modules being 
>rigid components that we painstakingly wire together, they can be ingredients 
>of a soup - with the melding and combination process being largely automated.
>
>
>If the modules are composed automatically, they can become much smaller, more 
>specialized and reusable. Large components require a lot of inefficient 
>duplication of structure and computation (seen even in biology).
>
>
> 
>
>>
>>Note that desires for runable specifications, etc., could be quite harmonious 
>>with a viable module scheme that has great systems integrity.
>
>
>Certainly. Before his untimely departure, Joseph Goguen was doing a lot of 
>work on modular, runable specifications (the BOBJ - behavioral OBJ - language, 
>like a fusion of OOP and term rewriting). 
> 
>Regards,
>
>
>Dave
>


[fonc] Current topics

2013-01-01 Thread Alan Kay
The most recent discussions get at a number of important issues whose 
pernicious snares need to be handled better.

In an analogy to sending messages "most of the time successfully" through noisy 
channels -- where the noise also affects whatever we add to the messages to 
help (and we may have imperfect models of the noise) -- we have to ask: what 
kinds and rates of error would be acceptable?
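The circularity here -- the redundancy we add to protect a message is itself exposed to the same noise -- can be made concrete with a toy checksum (a generic illustration, not any particular protocol):

```python
# We append a checksum to detect corruption, but the noise can corrupt the
# checksum itself -- so detection is probabilistic, never absolute.
import random

def send(payload: bytes) -> bytearray:
    checksum = sum(payload) % 256
    return bytearray(payload + bytes([checksum]))

def noisy(frame, p=0.01, rng=random):
    out = bytearray(frame)
    for i in range(len(out)):
        if rng.random() < p:            # noise hits data AND checksum alike
            out[i] ^= 1 << rng.randrange(8)
    return out

def looks_ok(frame) -> bool:
    *payload, checksum = frame
    return sum(payload) % 256 == checksum

print(looks_ok(send(b"hello")))         # True: clean channel
received = noisy(send(b"hello"), p=0.2)
print(looks_ok(received))               # usually detects damage, but an
                                        # unlucky pair of flips can slip through
```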

We humans are a noisy species. And on both ends of the transmissions. So a 
message that can be proved perfectly "received as sent" can still be 
interpreted poorly by a human directly, or by software written by humans.


A wonderful "specification language" that produces runable code good enough to 
make a prototype, is still going to require debugging because it is hard to get 
the spec-specs right (even with a machine version of human level AI to help 
with "larger goals" comprehension).

As humans, we are used to being sloppy about message creation and sending, and 
rely on negotiation and good will after the fact to deal with errors. 

We've not done a good job of dealing with these tendencies within programming 
-- we are still sloppy, and we tend not to create negotiation processes to deal 
with various kinds of errors. 

However, we do see something that is "actual engineering" -- with both care in 
message sending *and* negotiation -- where "eventual failure" is not tolerated: 
mostly in hardware, and in a few vital low-level systems which have to scale 
pretty much "finally-essentially error-free" such as the Ethernet and Internet.

My prejudices have always liked dynamic approaches to problems with error 
detection and improvements (if possible). Dan Ingalls was (and is) a master at 
getting a whole system going in such a way that it has enough integrity to 
"exhibit its failures" and allow many of them to be addressed in the context of 
what is actually going on, even with very low level failures. It is interesting 
to note the contributions from what you can say statically (the higher the 
level of the language the better) -- what can be done with "meta" (the more 
dynamic and deep the integrity, the more powerful and safe "meta" becomes) -- 
and the tradeoffs of modularization (hard to sum up, but as humans we don't 
give all modules the same care and love when designing and building them).

Mix in real human beings and a world-wide system, and what should be done? (I 
don't know, this is a question to the group.)

There are two systems I look at all the time. The first is lawyers contrasted 
with engineers. The second is human systems contrasted with biological systems.

There are about 1.2 million lawyers in the US, and about 1.5 million engineers 
(some of them in computing). The current estimates of "programmers in the US" 
are about 1.3 million (US Dept of Labor counting "programmers and developers"). 
Also, the Internet and multinational corporations, etc., internationalizes the 
impact of programming, so we need an estimate of the "programmers world-wide", 
probably another million or two? Add in the ad hoc programmers, etc.? The 
populations are similar enough in size to make the contrasts in methods and 
results quite striking.

Looking for analogies, to my eye what is happening with programming is more 
similar to what has happened with law than with classical engineering. Everyone 
will have an opinion on this, but I think it is partly because nature is a 
tougher critic on human built structures than humans are on each other's 
opinions, and part of the impact of this is amplified by the simpler shorter 
term liabilities of imperfect structures on human safety than on imperfect laws 
(one could argue that the latter are much more of a disaster in the long run).

And, in trying to tease useful analogies from Biology, one I get is that the 
largest gap in complexity of atomic structures is the one from polymers to the 
simplest living cells. (One of my two favorite organisms is Pelagibacter 
ubique, which is the smallest non-parasitic standalone organism. Discovered 
just 10 years ago, it is the most numerous known bacterium in the world, and 
accounts for 25% of all of the plankton in the oceans. Still it has about 1300+ 
genes, etc.) 

What's interesting (to me) about cell biology is just how much stuff is 
organized to make "integrity" of life. Craig Venter thinks that a minimal 
hand-crafted genome for a cell would still require about 300 genes (and a 
tiniest whole organism still winds up with a lot of components).

Analogies should be suspect -- both the one to the law, and the one here should 
be scrutinized -- but this one harmonizes with one of Butler Lampson's 
conclusions/prejudices: that you are much better off making -- with great care 
-- a few kinds of relatively big modules as basic building blocks than to have 
zillions of different modules being constructed by vanilla programmers. One of 
my favorite examples of this was the "Beings" master's thesis by Doug Lenat at 
Stanford 

Re: [fonc] A META-II for C that fits in a half a sheet of paper

2012-11-22 Thread Alan Kay
Oh yes ... I'd forgotten that I'd given this paper to the 1401 restoration 
group at the Computer History Museum (the 1401 was my first computer more than 
50 years ago now -- it was "a bit odd" even relative to the more diverse 
designs of its day)

http://ibm-1401.info/AlanKay-META-II.html

Cheers,

Alan





>
> From: Christian Neukirchen 
>To: Fundamentals of New Computing  
>Sent: Thursday, November 22, 2012 7:06 AM
>Subject: Re: [fonc] A META-II for C that fits in a half a sheet of paper
> 
>Reuben Thomas  writes:
>
>> On 22 November 2012 07:54, Long Nguyen  wrote:
>>
>>> Hello everyone,
>>>
>>> I was very impressed with Val Schorre's META-II paper that Dr. Kay gave me
>>> to read,
>>
>>
>> A paper which, as far as I can tell, one still has to pay the ACM to read.
>> Sigh.
>
>Or not: http://ibm-1401.info/Meta-II-schorre.pdf
>
>-- 
>Christian Neukirchen    http://chneukirchen.org


Re: [fonc] Interview with Alan Kay

2012-11-16 Thread Alan Kay
Hi Jarek

This is the editor's hodgepodge of my oral hodgepodge -- and one could argue 
that the nature of orality militates against trying to simply extract quotes, 
try to fix them, and then print them. I find them pretty much impossible to 
read.


The "Cerf and prizes" part doesn't make sense (since Vint has been awarded just 
about everything that can be awarded). But he and I have both served on that 
committee (post our awards), and while I regard it as the best, most civilized 
committee I've ever been on (a real pleasure!), I'm not so happy about the 
larger process. 


The most important points here (I think) are that (a) giving recognition to 
*everyone* who deserves it just doesn't happen with any of the major awards (b) 
there are biases from faddism, popularity, etc. (c) most of computing's 
advances have to be implemented to assess them, and this almost always requires 
teams -- and so much is learned by team interactions that (I think) the whole 
team should be awarded (even the Draper Prize isn't comprehensive in this 
fashion). 


And, with regard to great hardware/system designers, the Turing Award has only 
been given a few times and there are at least 5 to 10 people who have deeply 
deserved it, including Bob Barton.

With regard to Barton, he was certainly one of the most amazing people I've 
ever worked with. As with some of the other greats, he was not bound in any way 
to mere opinion around him, and was able to identify great goals without regard 
to their difficulty (which was sometimes too high for a particular era).

The class was quite an experience and completely liberating because he forced 
us back to zero, but with the knowledge available "as perfume", just not in the 
way of real thinking things through. He was able to demolish his own 
accomplishments as well, so the destruction was total.

Cheers,

Alan


>
> From: Jarek Rzeszótko 
>To: Fundamentals of New Computing  
>Sent: Friday, November 16, 2012 6:59 AM
>Subject: Re: [fonc] Interview with Alan Kay
> 
>
>Hi,
>
>Very interesting, but:
>
>"[Digression on who, in addition to Cerf, should have won various computing 
>prizes…]"
>
>I guess that's not the best editing job ever, I for one would like to hear the 
>digression, and if they edit it out mentioning it at all is a bit 
>irritating... It would be interesting to hear more details about that Bob 
>Barton class, too. 
>
>Cheers,
>Jarosław Rzeszótko
>
>
>2012/11/16 Eugen Leitl 
>
>
>>http://www.drdobbs.com/architecture-and-design/interview-with-alan-kay/240003442#
>>
>>Interview with Alan Kay
>>
>>By Andrew Binstock, July 10, 2012
>>
>>The pioneer of object-orientation, co-designer of Smalltalk, and UI luminary
>>opines on programming, browsers, objects, the illusion of patterns, and how
>>Socrates could still make it to heaven.
>>
>>In June of this year, the Association of Computing Machinery (ACM) celebrated
>>the centenary of Alan Turing's birth by holding a conference with
>>presentations by more than 30 Turing Award winners. The conference was filled
>>with unusual lectures and panels (videos are available here) both about
>>Turing and present-day computing. During a break in the proceedings, I
>>interviewed Alan Kay — a Turing Award recipient known for many innovations
>>and his articulated belief that the best way to predict the future is to
>>invent it.
>>
>>[A side note: Re-creating Kay's answers to interview questions was
>>particularly difficult. Rather than the linear explanation in response to an
>>interview question, his answers were more of a cavalcade of topics, tangents,
>>and tales threaded together, sometimes quite loosely — always rich, and
>>frequently punctuated by strong opinions. The text that follows attempts to
>>create somewhat more linearity to the content. — ALB]
>>
>>Childhood As A Prodigy
>>
>>Binstock: Let me start by asking you about a famous story. It states that
>>you'd read more than 100 books by the time you went to first grade. This
>>reading enabled you to realize that your teachers were frequently lying to
>>you.
>>
>>Kay: Yes, that story came out in a commemorative essay I was asked to write.
>>
>>Binstock: So you're sitting there in first grade, and you're realizing that
>>teachers are lying to you. Was that transformative? Did you all of a sudden
>>view the whole world as populated by people who were dishonest?
>>
>>Kay: Unless you're completely, certifiably insane, or a special kind of
>>narcissist, you regard yourself as normal. So I didn't really think that much

Re: [fonc] Final STEP progress report?

2012-11-07 Thread Alan Kay
Hi Carl

Just to keep on saying it ... the STEPS project had/has completely different 
goals than the Smalltalk project -- STEPS really is a "science project" -- or a 
collection of science projects -- that has never been aimed at a deployable 
artifact, but instead is aimed at finding better and more compact ways to 
express meanings. 

We did make it possible to exhibit and permute a wide range of media in 
real-time -- it makes it much easier to show people the results of important 
pieces of code this way -- but this is quite a ways from the packaging and 
finishing that was done for the Smalltalks.

However, it would be possible to do something analogous to a Smalltalk from the 
current state of things, but we are instead pushing even more deeply into 
"Whats" rather than "Hows".

Cheers,

Alan





>
> From: Carl Gundel 
>To: 'Fundamentals of New Computing'  
>Sent: Wednesday, November 7, 2012 2:45 PM
>Subject: Re: [fonc] Final STEP progress report?
> 
>Thanks Kim.  Will there be enough documentation for interested people to be
>able to build on top of VPRI's STEPS project?  I'm thinking something like
>the blue book with a CDROM.  :-)
>
>-Carl
>
>-Original Message-
>From: fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of Kim
>Rose
>Sent: Wednesday, November 07, 2012 5:37 PM
>To: Fundamentals of New Computing
>Subject: Re: [fonc] Final STEP progress report?
>
>Hello,
>
>For those of you interested and waiting -- the NSF (National Science
>Foundation) funding for the 5-year "STEPS" project has now finished (we
>stretched that funding to last for 6 years).  The final report on this work
>will be published and available on our website by the end of this calendar
>year.
>
>We have received some more funding (although not to the extent of this
>original 5-year grant) and our work will carry on.   That said, we're always
>looking for more funding to maintain day to day operations so we welcome any
>support and donations at any time.  :-)
>
>Regards,
>Kim Rose
>Viewpoints Research Institute
>
>Viewpoints Research is a 501(c)(3) nonprofit organization dedicated to
>improving "powerful ideas education" for the world's children and advancing
>the state of systems research and personal computing. Please visit us online
>at www.vpri.org
>
>
>
>
>
>On Nov 8, 2012, at 5:23 AM, Carl Gundel wrote:
>
>> Well, I do hope that VPRI has managed to find more funding money so that
>this doesn't have to be a final STEP report.  ;-)
>> 
>> -Carl
>> 
>> -Original Message-
>> From: fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of
>Loup Vaillant
>> Sent: Wednesday, November 07, 2012 1:45 PM
>> To: Fundamentals of New Computing
>> Subject: [fonc] Final STEP progress report?
>> 
>> Hi,
>> 
>> The two last progress reports having being published in October, I was
>wondering if we will have the final one soon.  Have we an estimation of when
>this might be completed? As a special request, I'd like to know a bit about
>what to expect.
>> 
>> Unless of course it's all meant to be a surprise.  But please at least
>tell me you have decided not to disclose anything right now. In any case, I
>promise I'll be patient.
>> 
>> Cheers,
>> Loup.
>> 
>> PS: If I sound like a jumping impatient child, that's because right
>>     now, I feel like one.


Re: [fonc] Alan Kay in the news [german]

2012-07-19 Thread Alan Kay
Hi John

Sorry to hear about your nerve problems.

I got a variety of books to get started -- including Anton Shearer's and 
Christopher Parkening's. 


Then I started corresponding with a fabulous and wonderfully expressive player 
in the Netherlands I found on YouTube -- Enno Voorhorst.

Check out: http://www.youtube.com/watch?v=viVl-G4lFQ4

I like his approach very much -- part of it is that he started out as a violin 
player, and still does a fair amount of playing in string quartets, etc. You 
can hear that his approach to tremolo playing is that of a solo timbre rather 
than an effect.


And some of the violin ideas of little to no support for the left hand do work 
well on classical guitar. But many of the barres (especially the hinged ones) do 
require some thumb support. What has been interesting about this process is to 
find out how much of the basic classical guitar technique is quite different 
from steel string jazz chops -- it's taken a while to unlearn some "spinal 
reflexes" that were developed a lifetime ago.


Cheers,

Alan




>
> From: John Zabroski 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Thursday, July 19, 2012 5:40 PM
>Subject: Re: [fonc] Alan Kay in the news [german]
> 
>
>
>
>
>On Wed, Jul 18, 2012 at 2:01 PM, Alan Kay  wrote:
>
>Hi Long,
>>
>>
>>I can keep my elbows into my body typing on a laptop. My problem is that I 
>>can't reach out further for more than a few seconds without a fair amount of 
>>pain from all the ligament tendon and rotator cuff damage along that axis. If 
>>I get that close to the keys on an organ I still have trouble reaching the 
>>other keyboards and my feet are too far forward to play the pedals. Similar 
>>geometry with the piano, plus the reaches on the much wider keyboard are too 
>>far on the right side. Also at my age there are some lower back problems from 
>>trying to lean in at a low angle -- this doesn't work.
>>
>>
>>
>>But, after a few months I realized I could go back to guitar playing (which I 
>>did a lot 50 years ago) because you can play guitar with your right elbow in. 
>>After a few years of getting some jazz technique back and playing in some 
>>groups in New England in the summers, I missed the polyphonic classical music 
>>and wound up starting to learn classical guitar a little over a year ago. 
>>This has proved to be quite a challenge -- much more difficult than I 
>>imagined it would be -- and there was much less transfer from jazz/steel 
>>string technique that I would have thought. It not only feels very different 
>>physically, but also mentally, and has many extra dimensions of nuance and 
>>color that is both its charm, and also makes it quite a separate learning 
>>experience.
>>
>>
>>Cheers,
>>
>>
>>Alan
>
>
>
>
>Hey Alan,
>
>
>That's awesome that you are learning classical guitar.  Are you using Aaron 
>Shearer's texts to teach yourself?  One trick I have learned is to not support 
>my left hand at all when playing.  In this way, the dexterity in my fingers 
>increases and when I press down on the fretboard I am using only my finger 
>muscles.
>
>
>I've had bilateral ulnar nerve transposition, and for a whole year in college 
>could not type at all due to muscle atrophy from nerve compression!  I wrote 
>all my computer assignments on paper, and paid a "personal secretary" to type 
>them in for me.  I thought about everything the program would do before I 
>wrote anything on paper, since I hated crossing out code and writing editorial 
>arrows.
>
>
>Dragon Naturally Speaking is really quite good, although not good for 
>programming in most languages.  I've found Microsoft Visual Basic is somewhat 
>possible to speak.  I also experimented with various exotic keyboards, like 
>the DataHand keyboard in the movie The Fifth Element.  It was easily my 
>favorite keyboard, but the main problem and reason I don't use it after 
>getting better is that going to somebody else's desk and typing becomes a 
>lesson in learning how to type again.
>


Re: [fonc] Alan Kay in the news [german]

2012-07-18 Thread Alan Kay
Hi Long,

I can keep my elbows into my body typing on a laptop. My problem is that I 
can't reach out further for more than a few seconds without a fair amount of 
pain from all the ligament, tendon, and rotator cuff damage along that axis. If I 
get that close to the keys on an organ I still have trouble reaching the other 
keyboards and my feet are too far forward to play the pedals. Similar geometry 
with the piano, plus the reaches on the much wider keyboard are too far on the 
right side. Also at my age there are some lower back problems from trying to 
lean in at a low angle -- this doesn't work.


But, after a few months I realized I could go back to guitar playing (which I 
did a lot 50 years ago) because you can play guitar with your right elbow in. 
After a few years of getting some jazz technique back and playing in some 
groups in New England in the summers, I missed the polyphonic classical music 
and wound up starting to learn classical guitar a little over a year ago. This 
has proved to be quite a challenge -- much more difficult than I imagined it 
would be -- and there was much less transfer from jazz/steel string technique 
that I would have thought. It not only feels very different physically, but 
also mentally, and has many extra dimensions of nuance and color that is both 
its charm, and also makes it quite a separate learning experience.

Cheers,

Alan




>
> From: Long Nguyen 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Wednesday, July 18, 2012 10:47 AM
>Subject: Re: [fonc] Alan Kay in the news [german]
> 
>Dear Dr. Kay,
>
>May I ask, how would you type on a computer if you cannot play keyboards?
>
>Best,
>Long
>
>On Wed, Jul 18, 2012 at 10:44 AM, Alan Kay  wrote:
>> I should mention that there is both garbling and also lots of fabrication in
>> this report.
>>
>> I didn't say "abandon theory" -- I did urge doing more real experiments with
>> software (from which the first might have been incorrectly inferred).
>>
>> But where did all the organ stuff come from? I never mentioned it, so it
>> must have been gleaned from the net. And I suddenly became a better organist
>> than I ever was. And he had me touring around when I have not been able to
>> play keyboards for four years because of a severe shoulder trauma from a
>> tennis accident.
>>
>> But the University of Paderborn and faculty and students were very
>> hospitable, and it was fun to help them dedicate the building.
>>
>> Cheers,
>>
>> Alan
>>
>> 
>> From: Eugen Leitl 
>> To: Fundamentals of New Computing 
>> Sent: Wednesday, July 18, 2012 7:19 AM
>> Subject: [fonc] Alan Kay in the news [german]
>>
>>
>> http://www.heise.de/newsticker/meldung/Alan-Kay-Nicht-in-der-Theorie-der-Informatik-verharren-1644597.html


Re: [fonc] Alan Kay in the news [german]

2012-07-18 Thread Alan Kay
I should mention that there is both garbling and also lots of fabrication in 
this report.

I didn't say "abandon theory" -- I did urge doing more real experiments with 
software (from which the first might have been incorrectly inferred).

But where did all the organ stuff come from? I never mentioned it, so it must 
have been gleaned from the net. And I suddenly became a better organist than I 
ever was. And he had me touring around when I have not been able to play 
keyboards for four years because of a severe shoulder trauma from a tennis 
accident.

But the University of Paderborn and faculty and students were very hospitable, 
and it was fun to help them dedicate the building.

Cheers,

Alan




>
> From: Eugen Leitl 
>To: Fundamentals of New Computing  
>Sent: Wednesday, July 18, 2012 7:19 AM
>Subject: [fonc] Alan Kay in the news [german]
> 
>
>http://www.heise.de/newsticker/meldung/Alan-Kay-Nicht-in-der-Theorie-der-Informatik-verharren-1644597.html


Re: [fonc] Question about the Burroughs B5000 series and capability-based computing

2012-05-28 Thread Alan Kay
In a nutshell, the B5000 embodied a number of great ideas in its architecture. 
Design paper by Bob Barton in 1961, machine appeared ca 1962-3.

-- multiple CPUs
-- rotating drum secondary memory

-- automatic process switching

-- no assembly code programming, all was done in ESPOL (executive systems 
problem oriented language), an extended version of ALGOL 60.
-- this produced Polish postfix code from a "pass and a half" compiler.
-- Tron note: the OS built in ESPOL was called the "MCP" (master control 
program).


-- direct execution of Polish postfix code
-- code was reentrant

-- code in terms of 12 bit "syllables" (we would call them "bytes") 4 to a 48 
bit word

-- automatic stack
-- automatic stack frames for parameters and temporary variables
-- code did not have any kind of addresses
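The stack discipline sketched above can be illustrated in a few lines -- a rough model of direct postfix execution, leaving out the real machine's syllable encoding and descriptors:

```python
# A rough sketch of B5000-style direct execution of Polish postfix code:
# operand syllables push onto an automatic stack, operators pop their
# arguments -- so the code itself needs no addresses or registers.

def execute(syllables):
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for s in syllables:
        if s in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[s](a, b))
        else:
            stack.append(s)          # a value syllable: just push it
    return stack.pop()

# (2 + 3) * 4 compiled to postfix:
print(execute([2, 3, "+", 4, "*"]))   # 20
```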

The most unusual part was the treatment of memory and environments:

-- every word was marked with a "flag bit" -- which was "outside" of normal 
program space -- and determined whether a word was a "value" (a number) or a 
"descriptor"


-- code was "granted" an "environment" in the form of (1) a "program reference 
table" (essentially an object instance) containing values and descriptors, and 
(2) a stack with frame. Code could only reference these via offsets to 
registers that themselves were not in code's purview. (This is the basis of the 
first capability scheme)

-- the protected descriptors were the only way resources could be accessed and 
used.


-- fixed and floating point formats were combined for numbers


-- array descriptors automatically checked for bounds violations, and allowed 
automatic swapping (one of the bits in an array descriptor was "presence": 
if "not present" the address was a disk reference, if "present" the address was 
a core storage reference, etc.)

-- procedure descriptors pointed to code. 


-- a code syllable could ask for either a value to be fetched to the top of the 
stack ("right hand side" expressions), or a "name" (address) to be fetched 
(computing the "left hand side").

-- if the above ran into a procedure descriptor with a "value call" then the 
procedure would execute as we would expect. If it was a "name call" then a bit 
was set that the procedure could test so one could compute a "left hand side" 
for an expression. In other words, one could "simulate data". (The difficulty 
of simulating a sparse array efficiently was a clue for me of how to think of 
object-oriented simulation of classical computer structures.)
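That last point -- a procedure standing in where data is expected -- is echoed by modern accessor protocols. A sparse array as "simulated data" (a present-day sketch, not the B5000's actual mechanism):

```python
# Reads and writes look like ordinary indexing, but run code: the "value
# call" side computes a value, the "name call" side computes a place to store.

class SparseArray:
    def __init__(self, default=0):
        self._cells = {}
        self._default = default

    def __getitem__(self, i):        # the "value call" side
        return self._cells.get(i, self._default)

    def __setitem__(self, i, v):     # the "name call" side
        if v == self._default:
            self._cells.pop(i, None)  # stay sparse: drop default entries
        else:
            self._cells[i] = v

a = SparseArray()
a[1_000_000] = 7                     # no million-element storage needed
print(a[1_000_000], a[5])            # 7 0
```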

-


As for the Alto in 1973, it had a register bank, plus 16 program counters into 
microcode (which could execute 5-6 instructions within each 750ns main memory 
cycle). Conditions/signals in the machine were routed through separate logic to 
determine which program counter to use for the next microinstruction. There was 
no delay between instruction executions i.e. the low level hardware tasking was 
"zero-overhead".
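A toy model of that zero-overhead tasking (the two-task setup and names are invented for illustration): one program counter per task, and each microcycle the hardware simply resumes the highest-priority task requesting service.

```python
# No save/restore on a task switch -- switching is just picking a different
# program counter for the next microinstruction.

def run(tasks, requests):
    """tasks: instruction lists, index = priority (0 = lowest, the main CPU).
    requests: per-microcycle sets of task indices requesting service."""
    pcs = [0] * len(tasks)               # one program counter per task
    trace = []
    for wanting in requests:
        t = max(wanting | {0})           # highest-priority requester wins
        ins = tasks[t][pcs[t] % len(tasks[t])]
        trace.append((t, ins))
        pcs[t] += 1                      # that task just resumes where it was
    return trace

cpu     = ["FETCH", "EXECUTE"]
display = ["DMA_WORD"]
print(run([cpu, display], requests=[set(), {1}, set(), {1}]))
# [(0, 'FETCH'), (1, 'DMA_WORD'), (0, 'EXECUTE'), (1, 'DMA_WORD')]
```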

(The overall scheme designed by Chuck Thacker was a vast improvement over my 
earlier Flex Machine -- which had 4 such program counters, etc.)

The low level tasks replaced almost all the usual hardware on a normal 
computer: disk and display and keyboard controllers, general I/O, even refresh 
for the DRAM, etc. 


This was a great design: about 160 MSI chips plus memory.

Cheers,

Alan




>
> From: Shawn Morel 
>To: Kevin Jones ; Fundamentals of New Computing 
> 
>Cc: Alan Kay  
>Sent: Sunday, May 27, 2012 6:50 PM
>Subject: Re: [fonc] Question about the Burroughs B5000 series and 
>capability-based computing
> 
>Kevin, 
>
>I'll quote one of my earlier questions to the list - in it I had a few 
>pointers that you might find a useful starting place. 
>> In the videotaped presentation from HIPK 
>> (http://www.bradfuller.com/squeak/Kay-HPIK_2011.mp4) you made reference to 
>> the Burroughs 5000-series implementing capabilities.
>
>
>There's also a more detailed set of influences / references to Bob Barton and 
>the B* architectures in part 3 of the early history of smalltalk: 
>http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_III.html
>
>"I liked the B5000 scheme, but Butler did not want to have to decode bytes, 
>and pointed out that since an 8-bit byte had 256 total possibilities, what we 
>should do is map different meanings onto different parts of the "instruction 
>space." this would give us a "poor man's Huffman code" that would be both 
>flexible and simple. All subsequent emulators at PARC used this general 
>scheme." [Kay]
>
>You should take the time to read that entire essay, it's chock-full 

Re: [fonc] The problem with programming languages

2012-05-08 Thread Alan Kay
Hi Jarek

I think your main point is a good one ... this is why I used to urge Smalltalk 
programmers to initially stay away from the libraries full of features and just 
use the kernel language as a "runnable pseudo-code" to sketch an outline of a 
simple solution. As you say, this helps to gradually find a better architecture 
for the more detailed version, and perhaps gives some hints of where to look in 
the library when it is time to optimize. This was perhaps even easier and nicer 
in Smalltalk-72 where you could make up on the fly a "pseudo-code that actually 
ran and could be debugged" (and it had almost no library full of features).

This is one of many good arguments for finding ways to "separate meaning from 
optimization".
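One way to picture "separating meaning from optimization" (a generic illustration, not anything from Smalltalk itself): write the meaning as directly as the language allows, then check a faster version against it.

```python
# The "meaning" is practically the mathematical definition; the optimized
# version is a separate artifact that must agree with it.

def spec_fib(n):                 # runnable pseudo-code: slow but obviously right
    return n if n < 2 else spec_fib(n - 1) + spec_fib(n - 2)

def fast_fib(n):                 # a later optimization of the same meaning
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The spec doubles as a test oracle for the optimized version.
assert all(spec_fib(n) == fast_fib(n) for n in range(15))
print(fast_fib(40))   # 102334155
```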

Cheers,

Alan




>
> From: Jarek Rzeszótko 
>To: Fundamentals of New Computing  
>Sent: Tuesday, May 8, 2012 9:20 AM
>Subject: Re: [fonc] The problem with programming languages
> 
>
>Natural languages are commonly much more ambiguous and you could say "fuzzy" 
>(as in fuzzy logic) than (currently popular) programming languages and hence 
>switching between those two has to cause some difficulties.
>
>Example: I have been programming in Ruby for 7 years now, for 5 years 
>professionally, and yet when I face a really difficult problem the best way 
>still turns out to be to write out a basic outline of the overall algorithm in 
>pseudo-code. It might be a personal thing, but for me there are just too many 
>irrelevant details to keep in mind when trying to solve a complex problem 
>using a programming language right from the start. I cannot think of classes, 
>method names, arguments etc. until I get a basic idea of how the given 
>computation should work on a very high level (and with the low-level 
>details staying "fuzzy"). I know there are people who feel the same way, there 
>was an interesting essay from Paul Graham followed by a very interesting 
>comment on MetaFilter about this:
>
>http://www.paulgraham.com/head.html
>http://www.metafilter.com/64094/its-only-when-you-have-your-code-in-your-head-that-you-really-understand-the-problem#1810690
>
>There is also the Pseudo-code Programming Process from Steve McConnell and his 
>"Code Complete":
>
>http://www.coderookie.com/2006/tutorial/the-pseudocode-programming-process/
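The Pseudocode Programming Process mentioned above can be illustrated with a small routine: the comments are the pseudocode, written first, and the statements under them were filled in afterwards. The routine itself is an invented example, not one from McConnell's book:

```python
def median_of_chunks(data, chunk_size):
    # -- Validate inputs; fail fast on nonsense.
    if chunk_size <= 0:
        raise ValueError("chunk_size must be positive")
    if not data:
        return []

    # -- Split the data into fixed-size chunks.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # -- For each chunk: sort it, then take the middle element.
    medians = []
    for chunk in chunks:
        ordered = sorted(chunk)
        medians.append(ordered[len(ordered) // 2])
    return medians

assert median_of_chunks([3, 1, 2, 9, 7, 8], 3) == [2, 8]
```

Each comment survives as documentation of intent, which is part of the point: the "fuzzy" high-level outline stays visible next to the details that realize it.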
>
>Another thing is that the code tends to evolve quite rapidly as the 
>constraints of a given problem are explored. Plenty of things in almost any 
>program end up being the way they are because of those constraints that 
>frequently were not obvious in the start and might not be obvious from just 
>reading the code - that's why people often rush to do a complete rewrite of a 
>program just to run into the same problems they had with the original one. The 
>question now is how much more time would documenting those constraints in the 
>code take and how much time would it save with future maintenance of the code. 
>I guess the amount of this context that would be beneficial varies with 
>applications a lot.
>
>If you mention TeX, I think literate programming is pretty relevant to this 
>discussion too, and I am personally looking forward to trying it out one day. 
>Knuth himself said he would not be able to write TeX without literate 
>programming, and the technique is of course partially related to what I've 
>said above regarding pseudocode:
>
>http://www.literateprogramming.com/
>
>Cheers,
>Jarosław Rzeszótko
>
>
>
>2012/5/8 David Goehrig 
>
>
>>On May 8, 2012, at 2:56 AM, Julian Leviston  wrote:
>>
>>>
>>> Humans parsing documents without proper definitions are like coders trying 
>>> to read programming languages that have no comments
>>
>>One of the under appreciated aspects of system like TeX with the ability to 
>>do embedded programming, or a system like Self with its Annotations as part 
>>of the object, or even python's .__doc__ attributes is that they provide 
>>context for the programmer.
>>
>>A large part of the reason that these are under appreciated is that most 
>>programs aren't sufficiently well factored to take advantage of these 
>>capabilities.  As a human description of what the code does and why will 
>>invariably take about a paragraph of human text per line of code, a 20 line 
>>function requires a pamphlet of documentation to provide sufficient context.
>>
>>Higher order functions, objects, actors, complex messaging topologies, 
>>exception handling (and all manner of related nonlocal exits), and the like 
>>only compound the context problem as they are "non-obvious".  Most of the FP 
>>movement is a reaction against "non-obvious" programming. Ideally this would 
>>result in a positive "self-evident" model, but in the real world we end up 
>>with Haskell monads (non-obvious functional programming).
>>
>>In the end the practical art is to express your code in such a way as the 
>>interpretation of the written word and the effective semantics of the program 
>>are congruent. Or in human te

Re: [fonc] LightTable UI

2012-04-24 Thread Alan Kay
Fonc bounced me on sending the Balzer doc directly, but here is the link at RAND

http://www.rand.org/content/dam/rand/pubs/research_memoranda/2009/RM5772.pdf

A few more references below

Cheers,

Alan






>
> From: Alan Kay 
>To: Fundamentals of New Computing  
>Sent: Tuesday, April 24, 2012 9:52 AM
>Subject: Re: [fonc] LightTable UI
> 
>
>Check out Bob Balzer's EXDAMS from the late 60s (attached -- there is also an 
>AFIPS paper on this).
>
>
>Also take a look at Warren Teitelman's DWIM (and his earlier attempt at an 
>"Advice Taker" UI -- called "PILOT" -- his MIT PhD thesis).
>
>
>And there is Dan Swinehart's later Stanford PhD thesis that takes a further 
>step -- called "Copilot".
>
>
>And 
>
>
>... of course, there is the Viewpoints "Worlds" paper ...
>
>
>
>Cheers,
>
>
>Alan
>
>
>
>
>>
>> From: Jarek Rzeszótko 
>>To: Fundamentals of New Computing  
>>Sent: Tuesday, April 24, 2012 9:32 AM
>>Subject: Re: [fonc] LightTable UI
>> 
>>
>>On the other hand, "Those who cannot remember the past are condemned to repeat 
>>it."
>>
>>Also, please excuse me (especially Julian Leviston) for maybe sounding too 
>>pessimistic and too offensive, the idea surely is exciting, my point is just 
>>that it excited me and probably many other persons before Bret Victor or 
>>Chris Granger did (very interesting) demos of it and what would _really_ 
>>excite me now is any glimpse of any idea whatsoever on how to make such 
>>things work in a general enough domain. Maybe they have or will have such 
>>idea, that would be cool, but until that time I think it's not unreasonable 
>>to restrain a bit, especially those ideas are relatively easy to realize in 
>>special domains and very hard to generalize to the wide scope of software 
>>people create.
>>
>>I would actually also love to hear from someone more knowledgeable about 
>>interesting historic attempts at doing such things, e.g. reversible 
>>computations, because there certainly were some: for one I remember a few 
>>years ago "back in time debugging" was quite a fashionable topic of talks 
>>(just google the phrase for a sampling), from a more hardware/physical 
>>standpoint there is http://en.wikipedia.org/wiki/Reversible_computing etc.
>>
>>Cheers,
>>Jarosław Rzeszótko
>>
>>
>>2012/4/24 David Nolen 
>>
>>"The best way to predict the future is to invent it"
>>>
>>>
>>>On Tue, Apr 24, 2012 at 3:50 AM, Jarek Rzeszótko  
>>>wrote:
>>>
>>>You make it sound a bit like this was a working solution already, while it 
>>>seems to be a prototype at best, they are collecting funding right now: 
>>>http://www.kickstarter.com/projects/306316578/light-table. 
>>>>
>>>>I would love to be proven wrong, but I think given the state of the 
>>>>project, many people get over-excited about it: some of the things proposed aren't 
>>>>new, just wrapped into a nice modern design (you could try to create a new 
>>>>"skin" or UI toolkit for some Smalltalk IDE for a similar effect), while 
>>>>for the ones that would be new like the real-time evaluation or 
>>>>visualisation there is too little detail to say whether they are onto 
>>>>something or not - I am sure many people thought of such things in the 
>>>>past, but it is highly questionable to what extent those are actually 
>>>>doable, especially in an existing language like Clojure or JavaScript. I am 
>>>>not convinced if dropping $200,000 at the thing will help with coming up 
>>>>with a solution if there is no decent set of ideas to begin with. I would 
>>>>personally be much more enthusiastic if the people behind the project at 
>>>>least outlined possible approaches they might take, before trying to 
>>>>collect money. Currently it
>>>>sounds like they just plan to "hack" it until it handles a reasonable number 
>>>>of special cases, but tools that work only some of the time are favoured by 
>>>>few. I think we need good theoretical approaches to problems like this before 
>>>>we can make any progress in how the actual real tools work.
>>>>
>>>>Cheers,
>>>>Jarosław Rzeszótko
>>>>
>>>>
>>>>
>>>>2012/4/24 Julian Leviston 
>>>>
>>>>Thought this is worth a look as a next step after Bret Victor's work

Re: [fonc] LightTable UI

2012-04-24 Thread Alan Kay
(Hi Toby)

And don't forget that John McCarthy was one of the very first to try to 
automatically compute inverses of functions (this grew out of his PhD work at 
Princeton in the mid-50s ...)

Cheers,

Alan




>
> From: Toby Schachman 
>To: Fundamentals of New Computing  
>Sent: Tuesday, April 24, 2012 9:48 AM
>Subject: Re: [fonc] LightTable UI
> 
>Benjamin Pierce et al did some work on bidirectional computation. The
>premise is to work with bidirectional transformations (which they call
>"lenses") rather than (unidirectional) functions. They took a stab at
>identifying some primitives, and showing how they would work in some
>applications. Of course we can do all the composition tricks with
>lenses that we can do with functions :)
>http://www.seas.upenn.edu/~harmony/
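The lens idea can be sketched in a few lines: a pair of functions, a "get" that extracts a view from a source and a "put" that writes an updated view back, which compose in both directions. This is a toy rendition of the concept, not the Harmony project's actual combinator library:

```python
class Lens:
    """A minimal asymmetric lens: get extracts a view from a source,
    put writes an updated view back into the source."""
    def __init__(self, get, put):
        self.get = get          # source -> view
        self.put = put          # (new_view, source) -> new_source

    def compose(self, other):
        # Lenses compose like functions, but in both directions at once.
        return Lens(
            get=lambda s: other.get(self.get(s)),
            put=lambda v, s: self.put(other.put(v, self.get(s)), s),
        )

def key(k):
    """A lens focusing on one dictionary key (non-destructive put)."""
    return Lens(get=lambda d: d[k],
                put=lambda v, d: {**d, k: v})

addr_city = key("address").compose(key("city"))
person = {"name": "Ada", "address": {"city": "London", "zip": "N1"}}
assert addr_city.get(person) == "London"
updated = addr_city.put("Paris", person)
assert updated["address"] == {"city": "Paris", "zip": "N1"}
assert person["address"]["city"] == "London"   # original untouched
```

The appeal for editing and UI work is that an edit made on the view can be pushed back through the same lens that produced the view.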
>
>
>See also Gerald Sussman's essay Building Robust Systems,
>http://groups.csail.mit.edu/mac/users/gjs/6.945/readings/robust-systems.pdf
>
>In particular, he has a section called "Constraints Generalize
>Procedures". He gives an example of a system as a constraint solver
>(two-way information flow) contrasted with the system as a procedure
>(one-way flow).
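Sussman's contrast can be illustrated with a constraint that runs in whichever direction has the known value; a minimal two-way sketch, not his actual propagator framework:

```python
def thermo(celsius=None, fahrenheit=None):
    """The constraint 9*C = 5*(F - 32), solved in whichever direction
    has the known quantity -- a toy example of 'constraints generalize
    procedures' (two-way flow vs. a one-way procedure)."""
    if celsius is not None:
        return celsius, celsius * 9 / 5 + 32          # forward: C known
    if fahrenheit is not None:
        return (fahrenheit - 32) * 5 / 9, fahrenheit  # backward: F known
    raise ValueError("need at least one known value")

assert thermo(celsius=100) == (100, 212.0)
assert thermo(fahrenheit=32) == (0.0, 32)
```

A procedure fixes which variables are inputs and which are outputs; the constraint leaves that choice to the caller, which is what makes it "causally agnostic".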
>
>
>Also I submitted a paper for Onward 2012 which discusses this topic
>among other things,
>http://totem.cc/onward2012/onward.pdf
>
>My own interest is in programming interfaces for artists. I am
>interested in these "causally agnostic" programming ideas because I
>think they could support a more non-linear, improvisational approach
>to programming.
>
>
>Toby
>
>
>2012/4/24 Jarek Rzeszótko :
>> On the other hand, "Those who cannot remember the past are condemned to
>> repeat it."
>>
>> Also, please excuse me (especially Julian Leviston) for maybe sounding too
>> pessimistic and too offensive, the idea surely is exciting, my point is just
>> that it excited me and probably many other persons before Bret Victor or
>> Chris Granger did (very interesting) demos of it and what would _really_
>> excite me now is any glimpse of any idea whatsoever on how to make such
>> things work in a general enough domain. Maybe they have or will have such
>> idea, that would be cool, but until that time I think it's not unreasonable
>> to restrain a bit, especially since those ideas are relatively easy to realize in
>> special domains and very hard to generalize to the wide scope of software
>> people create.
>>
>> I would actually also love to hear from someone more knowledgeable about
>> interesting historic attempts at doing such things, e.g. reversible
>> computations, because there certainly were some: for one I remember a few
>> years ago "back in time debugging" was quite a fashionable topic of talks
>> (just google the phrase for a sampling), from a more hardware/physical
>> standpoint there is http://en.wikipedia.org/wiki/Reversible_computing etc.
>>
>> Cheers,
>> Jarosław Rzeszótko
>>
>>
>> 2012/4/24 David Nolen 
>>>
>>> "The best way to predict the future is to invent it"
>>>
>>> On Tue, Apr 24, 2012 at 3:50 AM, Jarek Rzeszótko 
>>> wrote:

 You make it sound a bit like this was a working solution already, while
 it seems to be a prototype at best, they are collecting funding right now:
 http://www.kickstarter.com/projects/306316578/light-table.

 I would love to be proven wrong, but I think given the state of the
 project, many people get over-excited about it: some of the things proposed aren't
 new, just wrapped into a nice modern design (you could try to create a new
 "skin" or UI toolkit for some Smalltalk IDE for a similar effect), while
 for the ones that would be new like the real-time evaluation or
 visualisation there is too little detail to say whether they are onto
 something or not - I am sure many people thought of such things in the 
 past,
 but it is highly questionable to what extent those are actually doable,
 especially in an existing language like Clojure or JavaScript. I am not
 convinced if dropping $200,000 at the thing will help with coming up with a
 solution if there is no decent set of ideas to begin with. I would
 personally be much more enthusiastic if the people behind the project at
 least outlined possible approaches they might take, before trying to 
 collect
 money. Currently it sounds like they just plan to "hack" it until it 
 handles
 a reasonable number of special cases, but tools that work only some of the
 time are favoured by few. I think we need good theoretical approaches to
 problems like this before we can make any progress in how the actual real
 tools work like.

 Cheers,
 Jarosław Rzeszótko


 2012/4/24 Julian Leviston 
>
> Thought this is worth a look as a next step after Bret Victor's work
> (http://vimeo.com/36579366) on UI for programmers...
>
> http://www.kickstarter.com/projects/ibdknox/light-table
>
> We're still not quite "there" yet IMHO, but that's getting towards

Re: [fonc] Smalltalk-75

2012-04-20 Thread Alan Kay
Ivan and Bert have been special in the development of some of the most 
important parts of computing. 


This year is the 50th anniversary of Sketchpad -- still one of the very few top 
conceptions in computing, still one of the greatest theses ever done, and still 
very much worth reading today.

Ivan is such a giant that we sometimes forget that Bert's thesis was a 
graphical programming language in which he invented and used the idea of 
dataflow. (This was done after Ivan's thesis even though Bert was the older 
brother, because Bert did a stint as a Navy pilot before going to grad school).

One experience that many "inquisitive children" had while living in the 
vicinity of New York in the 50s was to be able to visit and get stuff on "Radio 
Row" -- most of it on Cortlandt Street in lower Manhattan where the World 
Trade Center was later built. There were literally hundreds of shops on both 
sides of the street for what I recall was at least a mile full of nothing but 
second hand gear, much of it WWII surplus electronics and some mechanical gear. 
You could mow a few lawns and earn enough for a subway ride to and from (I 
lived in Queens at that time -- Ivan and Bert lived in Scarsdale I think) and 
still have enough left over to buy 15,000 volt transformers, RCA 811A 
transmitting triodes, etc., to make dandy Tesla coils, ham radios, little 
computers out of relays as set forth in Ed Berkeley's books, etc. 

Cheers,

Alan






>
> From: Jb Labrune 
>To: Fundamentals of New Computing  
>Sent: Friday, April 20, 2012 2:59 AM
>Subject: Re: [fonc] Smalltalk-75
> 
>about people that learned how to assemble a computer at a young age, i 
>remember talking with the Sutherland brothers once about their childhood. Ivan 
>explained to me that Ed Berkeley gave Sutherland's family a DIY computer 
>called SIMON in the 50's. Ivan and Bert were very creative for sure, but they 
>also benefited from great ressources in their environment! When will we see 
>STEPS, MARU and other foncabulous seeds in schools and DIY magazines ? :]
>
>about Simple Simon & SIMON
>http://www.cs.ubc.ca/~hilpert/e/simon/index.html
>http://en.wikipedia.org/wiki/Simon_%28computer%29
>
>ooh, and of course this video about the S's bros is so great! i would like to 
>watch one for each one of you guys on this list ^^
>
>http://www.youtube.com/watch?v=sM1bNR4DmhU  .:( Mom Loved Him Best - w/ Alan 
>in the audience! ):.
>
>cheers*
>Jb
>
>Le 20 avr. 2012 à 03:20, Fernando Cacciola a écrit :
>
>> On Thu, Apr 19, 2012 at 9:43 PM, Alan Kay  wrote:
>>> Well, part of it is that the 15 year old was exceptional -- his name is
>>> Steve Putz, and as with several others of our children programmers -- such
>>> as Bruce Horn, who was the originator of the Mac Finder -- became a very
>>> good professional.
>>> 
>>> And that Smalltalk (basically Smalltalk-72) was quite approachable for
>>> children. We also had quite a few exceptional 12 year old girls who did
>>> remarkable applications.
>>> 
>> I was curious, so I googled a bit (impressive how easy it is, these
>> days, to find something within a couple of minutes)
>> 
>> The girls you are most likely talking about would be: Marion Goldeen
>> and Susan Hamet, who created a painting and an OOP-illustration
>> system, respectively.
>> I've found some additional details and illustrations here:
>> http://www.manovich.net/26-kay-03.pdf
>> 
>> What is truly remarkable IMO, is Smalltalk (even -72). Because these
>> children might have been exceptional, but IIUC it is not like they were,
>> say, a fourth generation of mathematicians and programmers who learned
>> how to assemble a computer at age 3 :)
>> 
>> 
>> Best
>> 
>> -- 
>> Fernando Cacciola
>> SciSoft Consulting, Founder
>> http://www.scisoft-consulting.com
>> ___
>> fonc mailing list
>> fonc@vpri.org
>> http://vpri.org/mailman/listinfo/fonc
>
>--
>
>Jean-Baptiste Labrune
>MIT Media Laboratory
>20 Ames St / 75 Amherst St
>Cambridge, MA 02139, USA
>
>http://web.media.mit.edu/~labrune/
>
>___
>fonc mailing list
>fonc@vpri.org
>http://vpri.org/mailman/listinfo/fonc
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Smalltalk-75

2012-04-19 Thread Alan Kay
Well, part of it is that the 15 year old was exceptional -- his name is Steve 
Putz, and as with several others of our children programmers -- such as Bruce 
Horn, who was the originator of the Mac Finder -- became a very good 
professional.

And that Smalltalk (basically Smalltalk-72) was quite approachable for 
children. We also had quite a few exceptional 12 year old girls who did 
remarkable applications.

Even so, Steve Putz's circuit diagram drawing program was terrific! Especially 
the UI he designed and built for it.

Cheers,

Alan




>
> From: John Pratt 
>To: fonc@vpri.org 
>Sent: Thursday, April 19, 2012 4:05 PM
>Subject: [fonc] Smalltalk-75
> 
>
>
>How is it that a 15-year-old could program a schematic diagram drawing 
>application in the 1970's?  Is there any more information about this?
>
>I think I read that Smalltalk changed afterwards.  Isn't this kind of a big 
>deal, everyone?
>___
>fonc mailing list
>fonc@vpri.org
>http://vpri.org/mailman/listinfo/fonc
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Ask For Forgiveness Programming - Or How We'll Program 1000 Cores

2012-04-14 Thread Alan Kay
This is a good idea (and, interestingly, was a common programming style in the 
first Simula) ...

Cheers,

Alan




>
> From: David Barbour 
>To: Fundamentals of New Computing  
>Sent: Friday, April 13, 2012 11:03 PM
>Subject: Re: [fonc] Ask For Forgiveness Programming - Or How We'll Program 
>1000 Cores
> 
>
>Another option would be to introduce slack in the propagation delay itself. 
>E.g. if I send you a message indicating the meeting has been moved to 1:30pm, 
>it would be a good idea to send it a good bit in advance - perhaps at 10:30am 
>- so you can schedule and prepare. 
>
>
>With computers, the popular solution - sending a message *when* we want 
>something to happen - seems analogous to sending the message at 1:29:58pm, 
>leaving the computer always on the edge with little time to prepare even for 
>highly predictable events, and making the system much more vulnerable to 
>variations in communication latency.
>
>
>On Fri, Apr 13, 2012 at 8:34 AM, David Goehrig  wrote:
>
>There's a very simple concept that most of the world embraces in everything 
>from supply chain management, to personnel allocations, to personal 
>relationships. 
>>
>>
>>We call it *slack*
>>
>>
>>What Dan is talking about amounts to introducing slack into distributed 
>>models.  Particularly this version of the definition of slack:
>>
>>
>>": lacking in completeness, finish, or perfection <slack work>"
>>
>>
>>Which is a more realistic version of computation in a universe with 
>>propagation delay (finite speed of light). But it also introduces a concept 
>>familiar to anyone who has worked with ropes. You can't tie a knot without some 
>>slack. (computation being an exercise in binary sequence knot making). 
>>Finishing a computation is analogous to pulling the rope taut. 
>>
>>
>>Dave
>>
>>
>>
>>
>>
>>
>>
>>
>>-=-=- d...@nexttolast.com -=-=-
>>
>>On Apr 13, 2012, at 5:53 AM, Eugen Leitl  wrote:
>>
>>
>>
>>>http://highscalability.com/blog/2012/3/6/ask-for-forgiveness-programming-or-how-well-program-1000-cor.html
>>>
>>>Ask For Forgiveness Programming - Or How We'll Program 1000 Cores
>>>
>>>Tuesday, March 6, 2012 at 9:15AM
>>>
>>>The argument for a massively multicore future is now familiar: while clock
>>>speeds have leveled off, device density is increasing, so the future is cheap
>>>chips with hundreds and thousands of cores. That’s the inexorable logic
>>>behind our multicore future.
>>>
>>>The unsolved question that lurks deep in the dark part of a programmer’s mind
>>>is: how on earth are we to program these things? For problems that aren’t
>>>embarrassingly parallel, we really have no idea. IBM Research’s David Ungar
>>>has an idea. And it’s radical in the extreme...
>>>
>>>Grace Hopper once advised “It's easier to ask for forgiveness than it is to
>>>get permission.” I wonder if she had any idea that her strategy for dealing
>>>with human bureaucracy would be the same strategy David Ungar thinks will help
>>>us tame the technological bureaucracy of 1000+ core systems?
>>>
>>>You may recognize David as the co-creator of the Self programming language,
>>>inspiration for the HotSpot technology in the JVM and the prototype model
>>>used by Javascript. He’s also the innovator behind using cartoon animation
>>>techniques to build user interfaces. Now he’s applying that same creative
>>>zeal to solving the multicore problem.
>>>
>>>During a talk on his research, Everything You Know (about Parallel
>>>Programming) Is Wrong! A Wild Screed about the Future, he called his approach
>>>“anti-lock” or “race and repair” because the core idea is that the only way
>>>we’re going to be able to program the new multicore chips of the future is to
>>>sidestep Amdahl’s Law and program without serialization, without locks,
>>>embracing non-determinism. Without locks calculations will obviously be
>>>wrong, but correct answers can be approached over time using techniques like
>>>fresheners:
>>>
>>>   A thread that, instead of responding to user requests, repeatedly selects
>>>a cached value according to some strategy, and recomputes that value from its
>>>inputs, in case the value had been inconsistent. Experimentation with a
>>>prototype showed that on a 16-core system with a 50/50 split between workers
>>>and fresheners, fewer than 2% of the queries would return an answer that had
>>>been stale for at least eight mean query times. These results suggest that
>>>tolerance of inconsistency can be an effective strategy in circumventing
>>>Amdahl’s law.
>>>
>>>During his talk David mentioned that he’s trying to find a better name than
>>>“anti-lock” or “race and repair” for this line of thinking. Throwing my hat
>>>into the name game, I want to call it Ask For Forgiveness Programming (AFFP),
>>>based on the idea that using locks is “asking for permission” programming, so
>>>not using locks along with fresheners is really “asking for forgiveness.” I
>>>think it works, but it’s just a thought.
>>>
>>>No Shared Lock Goes Unpunished

Re: [fonc] Kernel & Maru

2012-04-12 Thread Alan Kay
Hi John

Yes you are right about Teitelbaum ...

As far as "Hansen" the time period is right -- but I think there were several 
such ... the one I remember seeing was "syntax driven ..." There was a feeling 
of being trapped ...


Cheers,

Alan




>
> From: John Zabroski 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Thursday, April 12, 2012 5:34 PM
>Subject: Re: [fonc] Kernel & Maru
> 
>
>By the way, I think you are referring to the work done by Reps's thesis 
>advisor, Tim Teitelbaum, and his Cornell Program Synthesizer.  Teitelbaum and 
>Reps still work together to this day.  Over a year ago I e-mailed Reps asking, 
>"Why did the Synthesizer Generator fail to become mainstream?"  Reps, from 
>what I could tell, hated my question.  My word choice was poor.  I think he 
>took the word "Fail" personally.  Also, he claims that his company still uses 
>the Synthesizer Generator internally for their static analysis tools, even 
>though they no longer sell the Synthesizer Generator.
>
>
>The only earlier approach I know of, superficially, is a 1971 paper linked to 
>on Wikipedia [1] that I have not read.
>
>
>[1] http://en.wikipedia.org/wiki/Structure_editor#cite_note-hansen-0 
>
>
>On Thu, Apr 12, 2012 at 7:31 PM, Alan Kay  wrote:
>
>Hi John 
>>
>>
>>
>>The simple answer is that Tom's stuff happened in the early 80s, and I was 
>>out of PARC working on things other than Smalltalk.
>>
>>
>>I'm trying to remember something similar that was done earlier (by someone 
>>can't recall who, maybe at CMU) that was a good convincer that this was not a 
>>great UI style for thinking about programming in.
>>
>>
>>An interesting side light on all this is that -- if one could avoid 
>>paralyzing nestings in program form -- the tile based approach allows 
>>language building and extensions *and* provides the start of a UI for doing 
>>the programming that feels "open". Both work at Disney and the later work by 
>>Jens Moenig show that tiles start losing their charm in a hurry if one builds 
>>nested expressions. An interesting idea via Marvin (in his Turing Lecture) is 
>>the idea of "deferred expressions", and these could be a way to deal with 
>>some of this. Also the ISWIM design of Landin uses another way to defer 
>>nestings to achieve better readability.
>>
>>
>>Cheers,
>>
>>
>>Alan
>>
>>
>>
>>
>>
>>>
>>> From: John Zabroski 
>>>To: Florin Mateoc ; Fundamentals of New Computing 
>>> 
>>>Sent: Thursday, April 12, 2012 3:59 PM
>>>
>>>Subject: Re: [fonc] Kernel & Maru
>>> 
>>>
>>>
>>>It depends what your goals are. If you want to automatically derive an IDE 
>>>from a grammar then the best work is the Synthesizer Generator but it is 
>>>limited to absolutely noncircular attribute grammars IIRC. But it had wicked 
>>>cool features like incremental live evaluation. Tom Reps won an ACM 
>>>Dissertation award for the work. The downside was scaling this approach to 
>>>so-called very large scale software systems. But there are two reasons why I 
>>>feel that concern is overblown: (1) nobody has brute forced the memory 
>>>exhaustion problem using the cloud (2) with systems like FONC we wouldn't be 
>>>building huge systems anyway.
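The synthesized-attribute computation at the heart of attribute-grammar tools like the Synthesizer Generator can be sketched as a bottom-up pass over an AST. This sketch simply recomputes everything; the Synthesizer Generator's contribution was doing this incrementally after each edit, which this does not attempt:

```python
def attrs(node):
    """Compute synthesized attributes 'val' and 'depth' bottom-up over a
    tree of ('num', n) and ('add', left, right) nodes. Each node's
    attributes depend only on its children's -- the defining property of
    synthesized attributes."""
    if node[0] == "num":
        return {"val": node[1], "depth": 1}
    left, right = attrs(node[1]), attrs(node[2])
    return {"val": left["val"] + right["val"],
            "depth": 1 + max(left["depth"], right["depth"])}

tree = ("add", ("num", 1), ("add", ("num", 2), ("num", 3)))
assert attrs(tree) == {"val": 6, "depth": 3}
```

An IDE generated from such a grammar keeps attributes like these (type errors, values, layout) live as the program is edited.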
>>>Alternatively, "grammarware" hasn't died simply because of the SG scaling 
>>>issue. Ralf Lammel, Eelco Visser and others have all contributed to ASF+SDF 
>>>and the Spoofax language environment. But none of these are as cool as SG 
>>>and with stuff like Spoofax you have to sift through Big And Irregular APIs for 
>>>IME hooking into Big And Irregular Eclipse APIs. Separating the intellectual 
>>>wheat from the chaff was a PITA. Although I did enjoy Visser's thesis on 
>>>scannerless parsing, which led me to appreciate boolean grammars.
>>>Alan,
>>>A question for you is: did the SG approach ever come up in design discussions or 
>>>prototypes for any Smalltalk? I always assumed no, due to selection bias... 
>>>Until OMeta there hasn't been a clear use case.
>>>Cheers,
>>>Z-Bo
>>>On Apr 11, 2012 10:21 AM, "Florin Mateoc"  wrote:
>>>
>>>Yes, these threads are little gems by themselves, thank you!
>>>>
>>>>
>>>>I hope I am not straying too much from the main topic when asking about 
>>>>

Re: [fonc] Kernel & Maru

2012-04-12 Thread Alan Kay
Yes, that was part of Tom's work ...

Cheers,

Alan




>
> From: John Zabroski 
>To: Fundamentals of New Computing ; Alan Kay 
> 
>Sent: Thursday, April 12, 2012 4:46 PM
>Subject: Re: [fonc] Kernel & Maru
> 
>
>What does it have to do with thinking about programming?  Are you referring to 
>editing an AST directly?
>On Apr 12, 2012 7:31 PM, "Alan Kay"  wrote:
>
>Hi John 
>>
>>
>>
>>The simple answer is that Tom's stuff happened in the early 80s, and I was 
>>out of PARC working on things other than Smalltalk.
>>
>>
>>I'm trying to remember something similar that was done earlier (by someone 
>>can't recall who, maybe at CMU) that was a good convincer that this was not a 
>>great UI style for thinking about programming in.
>>
>>
>>An interesting side light on all this is that -- if one could avoid 
>>paralyzing nestings in program form -- the tile based approach allows 
>>language building and extensions *and* provides the start of a UI for doing 
>>the programming that feels "open". Both work at Disney and the later work by 
>>Jens Moenig show that tiles start losing their charm in a hurry if one builds 
>>nested expressions. An interesting idea via Marvin (in his Turing Lecture) is 
>>the idea of "deferred expressions", and these could be a way to deal with 
>>some of this. Also the ISWIM design of Landin uses another way to defer 
>>nestings to achieve better readability.
>>
>>
>>Cheers,
>>
>>
>>Alan
>>
>>
>>
>>
>>
>>>
>>> From: John Zabroski 
>>>To: Florin Mateoc ; Fundamentals of New Computing 
>>> 
>>>Sent: Thursday, April 12, 2012 3:59 PM
>>>Subject: Re: [fonc] Kernel & Maru
>>> 
>>>
>>>It depends what your goals are. If you want to automatically derive an IDE 
>>>from a grammar then the best work is the Synthesizer Generator but it is 
>>>limited to absolutely noncircular attribute grammars IIRC. But it had wicked 
>>>cool features like incremental live evaluation. Tom Reps won an ACM 
>>>Dissertation award for the work. The downside was scaling this approach to 
>>>so-called very large scale software systems. But there are two reasons why I 
>>>feel that concern is overblown: (1) nobody has brute forced the memory 
>>>exhaustion problem using the cloud (2) with systems like FONC we wouldn't be 
>>>building huge systems anyway.
>>>Alternatively, "grammarware" hasn't died simply because of the SG scaling 
>>>issue. Ralf Lammel, Eelco Visser and others have all contributed to ASF+SDF 
>>>and the Spoofax language environment. But none of these are as cool as SG 
>>>and with stuff like Spoofax you have to sift through Big And Irregular APIs for 
>>>IME hooking into Big And Irregular Eclipse APIs. Separating the intellectual 
>>>wheat from the chaff was a PITA. Although I did enjoy Visser's thesis on 
>>>scannerless parsing, which led me to appreciate boolean grammars.
>>>Alan,
>>>A question for you is: did the SG approach ever come up in design discussions or 
>>>prototypes for any Smalltalk? I always assumed no, due to selection bias... 
>>>Until OMeta there hasn't been a clear use case.
>>>Cheers,
>>>Z-Bo
>>>On Apr 11, 2012 10:21 AM, "Florin Mateoc"  wrote:
>>>
>>>Yes, these threads are little gems by themselves, thank you!
>>>>
>>>>
>>>>I hope I am not straying too much from the main topic when asking about 
>>>>what I think is a related problem: a great help for playing with languages 
>>>>are the tools. Since we are talking about bootstrapping everything, we 
>>>>would ideally also be able to generate the tools together with all the 
>>>>rest. This is a somewhat different kind of language bootstrap, where 
>>>>actions and predicates in the language grammar have their own grammar, so 
>>>>they don't need to rely on any host language, but still allow one to 
>>>>flexibly generate a lot of boilerplate code, including for example classes 
>>>>(or other language specific structures) representing the AST nodes, 
>>>>including visiting code, formatters, code comparison tools, even 
>>>>abstract (ideally with a flexible level of abstraction) evaluation code over 
>>>>those AST nodes, and debuggers. This obviously goes beyond language syntax, 
>>>>one needs an executio

Re: [fonc] Kernel & Maru

2012-04-12 Thread Alan Kay
Hi John 


The simple answer is that Tom's stuff happened in the early 80s, and I was out 
of PARC working on things other than Smalltalk.

I'm trying to remember something similar that was done earlier (by someone 
can't recall who, maybe at CMU) that was a good convincer that this was not a 
great UI style for thinking about programming in.

An interesting side light on all this is that -- if one could avoid paralyzing 
nestings in program form -- the tile based approach allows language building 
and extensions *and* provides the start of a UI for doing the programming that 
feels "open". Both work at Disney and the later work by Jens Moenig show that 
tiles start losing their charm in a hurry if one builds nested expressions. An 
interesting idea via Marvin (in his Turing Lecture) is the idea of "deferred 
expressions", and these could be a way to deal with some of this. Also the 
ISWIM design of Landin uses another way to defer nestings to achieve better 
readability.

Cheers,

Alan




>
> From: John Zabroski 
>To: Florin Mateoc ; Fundamentals of New Computing 
> 
>Sent: Thursday, April 12, 2012 3:59 PM
>Subject: Re: [fonc] Kernel & Maru
> 
>
>It depends what your goals are. If you want to automatically derive an IDE 
>from a grammar then the best work is the Synthesizer Generator but it is 
>limited to absolutely noncircular attribute grammars IIRC. But it had wicked 
>cool features like incremental live evaluation. Tom Reps won an ACM 
>Dissertation award for the work. The downside was scaling this approach to 
>so-called very large scale software systems. But there are two reasons why I 
>feel that concern is overblown: (1) nobody has brute forced the memory 
>exhaustion problem using the cloud (2) with systems like FONC we wouldnt be 
>building huge systems anyway.
>Alternatively, "grammarware" hasn't died simply because of the SG scaling 
>issue. Ralf Lammel, Eelco Visser and others have all contributed to ASF+SDF 
>and the Spoofax language environment. But none of these are as cool as SG and 
>with stuff like Spoofax you have to sift thru Big And Irregular APIs for IME 
>hooking into Big And Irregular Eclipse APIs. Separating the intellectual wheat 
>from the chaff was a PITA, although I did enjoy Visser's thesis on 
>scannerless parsing, which led me to appreciate boolean grammars.
>Alan,
>A question for you is: did the SG approach ever come up in design discussions or 
>prototypes for any Smalltalk? I always assumed no, due to selection bias... 
>Until OMeta there hasn't been a clear use case.
>Cheers,
>Z-Bo
>On Apr 11, 2012 10:21 AM, "Florin Mateoc"  wrote:
>
>Yes, these threads are little gems by themselves, thank you!
>>
>>
>>I hope I am not straying too much from the main topic when asking about what 
>>I think is a related problem: a great help for playing with languages are the 
>>tools. Since we are talking about bootstrapping everything, we would ideally 
>>also be able to generate the tools together with all the rest. This is a 
>>somewhat different kind of language bootstrap, where actions and predicates 
>>in the language grammar have their own grammar, so they don't need to rely on 
>>any host language, but still allow one to flexibly generate a lot of 
>>boilerplate code, including for example classes (or other language specific 
>>structures) representing the AST nodes, including visiting code, formatters, 
>>code comparison tools, even abstract(ideally with a flexible level of 
>>abstraction)evaluation code over those AST nodes, and debuggers. This 
>>obviously goes beyond language syntax, one needs an execution model as well 
>>(perhaps in combination with a worlds-like approach). I am still not
 sure how far one can go, what can be succinctly specified and how. 
>>
>>
>>
>>I would greatly appreciate any pointers in this direction
>>
>>
>>Florin
>>
>>
>>
>>
>>
>> From: Monty Zukowski 
>>To: Fundamentals of New Computing  
>>Sent: Wednesday, April 11, 2012 12:20 AM
>>Subject: Re: [fonc] Kernel & Maru
>> 
>>Thank you everyone for the great references.  I've got some homework
>>to do now...
>>
>>Monty
>>
>>On Tue, Apr 10, 2012 at 2:54 PM, Ian Piumarta  wrote:
>>> Extending Alan's comments...
>>>
>>> A small, well explained, and easily understandable example of an iterative 
>>> implementation of a recursive language (Scheme) can be found in R. Kent 
>>> Dybvig's Ph.D. thesis.
>>>
>>> http://www.cs.unm.edu/~williams/cs491/three-imp.pdf
>>>
>>> Regards,
>>> Ian
>>>
>>> ___
>>> fonc mailing list
>>> fonc@vpri.org
>>> http://vpri.org/mailman/listinfo/fonc
>>___
>>fonc mailing list
>>fonc@vpri.org
>>http://vpri.org/mailman/listinfo/fonc
>>
>>
>>
>>
>___
>fonc mailing list

Re: [fonc] Migrating / syncing computation "live-documents"

2012-04-11 Thread Alan Kay
Croquet does replicated distributed computing. LOCUS did "freely migrating 
system nodes".

One actually needs both (though a lot can be done with today's capacities just 
using the Croquet techniques).

Cheers,

Alan




>
> From: Shawn Morel 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Wednesday, April 11, 2012 5:14 PM
>Subject: Migrating / syncing computation "live-documents"
> 
>
>
>
>Taken from the massive multi-thread "Error trying to compile COLA"
>
>And I've also mentioned Popek's LOCUS system as a nice model for migrating 
>processes over a network. It was Unix only, but there was nothing about his 
>design that required this.
>
>
>When thinking about storing and accessing documents, it's fairly 
>straightforward to think of some sort of migration / synchronization scheme.
>
>In thinking of objects more on par with VMs / computers / services, has there 
>been any work on process migration that's of any importance since LOCUS? Did 
>Croquet push the boundaries of distributing computations?
>
>
>shawn
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Smalltalk & Actors?

2012-04-11 Thread Alan Kay
Yes, the Object idea was "very like Actors" that Hewitt later introduced. 


As I mentioned in the "Early History of Smalltalk", a variety of influences -- 
biology, abstract algebras, the no-centers networking that ARPA was planning 
for the ARPAnet, the process architectures starting to appear in time-sharing, 
etc. -- gave rise to the desire for software entities that acted like whole 
computers, had very little overhead, could do loose coupling via 
pattern-matching and message passing, etc.

Our main drive at PARC was to invent and make a version of "personal computing" 
that had a kind of universality to it. 


We experimented with more mechanisms than we generally used (for example with 
highly parallel simulation control and scheduling subsystems) but in the end 
the Alto did not have enough size and horsepower to deal with the additional 
overheads needed to do things like this (and some of Fisher's other control 
structure ideas). The original plan called for a next gen computer after the 
Alto, but Xerox wasn't willing to fund it for quite a few years. This forced 
everyone to do optimization for their next-gen software and pretty much removed 
our invention hats.

However, the other abstraction mechanisms and the low overheads for Smalltalk 
objects -- via Dan Ingalls' and others' brilliant touch -- were enough to allow 
the big inventions to get done and built.

Cheers,

Alan





>________
> From: Miles Fidelman 
>To: Alan Kay  
>Cc: Fundamentals of New Computing  
>Sent: Wednesday, April 11, 2012 12:27 PM
>Subject: Smalltalk & Actors?
> 
>Hi Alan,
>
>Apropos some of the recent threads:
>
>As I recall, some of the very early Smalltalk versions had a more concurrent 
>view of the world, and inspired Hewitt's work on Actors, as is now perhaps 
>best embodied in Erlang.
>
>It's long occurred to me that I sure would love to have an environment that 
>felt like Smalltalk (say Squeak) but that allowed objects to behave like 
>actors - all the message passing is there, but message/event-driven flow of 
>control isn't.  It's always seemed to me that a merger of Smalltalk and an 
>Erlang-like run-time environment would be an interesting direction - but I've 
>yet to see any efforts toward concurrent or distributed smalltalk go very far 
>(well, maybe Croquet qualifies).
>
>I wonder if you might have any comments to offer on why Smalltalk took the 
>path it did re. flow-of-control, and/or future directions.
>
>Regards,
>
>Miles Fidelman
>
>-- In theory, there is no difference between theory and practice.
>In practice, there is.    Yogi Berra
>
>
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Kernel & Maru

2012-04-11 Thread Alan Kay
Take a look at the use of the "System Tracer" in the "Back to the Future" 
paper. It is an example of such a tool (it is a bit like a garbage collector 
except that it is actually a "new system finder" -- it can find and write out 
just the objects in the new system to make a fresh image).
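The heart of a "new system finder" is a reachability trace: start from the root 
of the new system, follow every reference, and keep exactly the objects found. 
A rough Python sketch of that idea (not the actual Squeak System Tracer; 
`references_of` is an illustrative stand-in for walking an object's real fields):

```python
def trace_system(root):
    """Walk the object graph from `root`, collecting every reachable
    object -- the 'new system' -- so just those can be written out
    to a fresh image. (A sketch of the idea, not the real tracer.)"""
    seen = set()
    stack = [root]
    order = []
    while stack:
        obj = stack.pop()
        if id(obj) in seen:
            continue
        seen.add(id(obj))
        order.append(obj)
        stack.extend(references_of(obj))
    return order  # everything the new system needs, and nothing else

def references_of(obj):
    # Hypothetical helper: enumerate the objects `obj` points to.
    # A real image tracer would walk instance variables, the class
    # pointer, method literals, etc.
    if isinstance(obj, dict):
        return list(obj.keys()) + list(obj.values())
    if isinstance(obj, (list, tuple)):
        return list(obj)
    return []
```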

Cheers,

Alan




>____
> From: Shawn Morel 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Cc: Florin Mateoc  
>Sent: Wednesday, April 11, 2012 11:52 AM
>Subject: Re: [fonc] Kernel & Maru
> 
>This thread is a real treasure trove! Thanks for all the pointers Alan :)
>
>> A nice feature of Smalltalk (which has been rarely used outside of a small 
>> group) is a collection of tools that can be used to create an entirely 
>> different language within it and then launch it without further needing 
>> Smalltalk. This was used 3 or 4 times at PARC to do radically different 
>> designs and implementations for the progression of Smalltalks 
>
>Could you elaborate more here? How might this compare to some of the work 
>happening with Racket these days?
>
>thanks
>shawn
>
>
>> Cheers,
>> 
>> Alan
>> 
>> From: Florin Mateoc 
>> To: Fundamentals of New Computing  
>> Sent: Wednesday, April 11, 2012 7:20 AM
>> Subject: Re: [fonc] Kernel & Maru
>> 
>> Yes, these threads are little gems by themselves, thank you!
>> 
>> I hope I am not straying too much from the main topic when asking about what 
>> I think is a related problem: a great help for playing with languages are 
>> the tools. Since we are talking about bootstrapping everything, we would 
>> ideally also be able to generate the tools together with all the rest. This 
>> is a somewhat different kind of language bootstrap, where actions and 
>> predicates in the language grammar have their own grammar, so they don't 
>> need to rely on any host language, but still allow one to flexibly generate 
>> a lot of boilerplate code, including for example classes (or other language 
>> specific structures) representing the AST nodes, including visiting code, 
>> formatters, code comparison tools, even abstract (ideally with a flexible 
>> level of abstraction) evaluation code over those AST nodes, and debuggers. 
>> This obviously goes beyond language syntax, one needs an execution model as 
>> well (perhaps in combination with a worlds-like approach). I am still
 not sure how far one can go, what can be succinctly specified and how. 
>> 
>> I would greatly appreciate any pointers in this direction
>> 
>> Florin
>> 
>> From: Monty Zukowski 
>> To: Fundamentals of New Computing  
>> Sent: Wednesday, April 11, 2012 12:20 AM
>> Subject: Re: [fonc] Kernel & Maru
>> 
>> Thank you everyone for the great references.  I've got some homework
>> to do now...
>> 
>> Monty
>> 
>> On Tue, Apr 10, 2012 at 2:54 PM, Ian Piumarta  wrote:
>> > Extending Alan's comments...
>> >
>> > A small, well explained, and easily understandable example of an iterative 
>> > implementation of a recursive language (Scheme) can be found in R. Kent 
>> > Dybvig's Ph.D. thesis.
>> >
>> > http://www.cs.unm.edu/~williams/cs491/three-imp.pdf
>> >
>> > Regards,
>> > Ian
>> >
>> > ___
>> > fonc mailing list
>> > fonc@vpri.org
>> > http://vpri.org/mailman/listinfo/fonc
>> ___
>> fonc mailing list
>> fonc@vpri.org
>> http://vpri.org/mailman/listinfo/fonc
>> 
>> 
>> 
>
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Kernel & Maru

2012-04-11 Thread Alan Kay
Take a look at the Squeak bootstrap process paper, which generated a "small 
everything", including tools and the ability to self bootstrap to other 
platforms.

http://www.vpri.org/pdf/tr1997001_backto.pdf

A few of us (at Apple at the time) did the original bootstrap (from an old 
Smalltalk that Apple owned) to the Mac. Andreas Raab (then in Germany) did the 
quick port to MS, and Ian Piumarta (then in France) did the quick port to Linux.

A nice feature of Smalltalk (which has been rarely used outside of a small 
group) is a collection of tools that can be used to create an entirely 
different language within it and then launch it without further needing 
Smalltalk. This was used 3 or 4 times at PARC to do radically different designs 
and implementations for the progression of Smalltalks 


Cheers,

Alan




>
> From: Florin Mateoc 
>To: Fundamentals of New Computing  
>Sent: Wednesday, April 11, 2012 7:20 AM
>Subject: Re: [fonc] Kernel & Maru
> 
>
>Yes, these threads are little gems by themselves, thank you!
>
>
>I hope I am not straying too much from the main topic when asking about what I 
>think is a related problem: a great help for playing with languages are the 
>tools. Since we are talking about bootstrapping everything, we would ideally 
>also be able to generate the tools together with all the rest. This is a 
>somewhat different kind of language bootstrap, where actions and predicates in 
>the language grammar have their own grammar, so they don't need to rely on any 
>host language, but still allow one to flexibly generate a lot of boilerplate 
>code, including for example classes (or other language specific structures) 
>representing the AST nodes, including visiting code, formatters, code 
>comparison tools, even abstract(ideally with a flexible level of 
>abstraction)evaluation code over those AST nodes, and debuggers. This 
>obviously goes beyond language syntax, one needs an execution model as well 
>(perhaps in combination with a worlds-like approach). I am still not
 sure how far one can go, what can be succinctly specified and how. 
>
>
>
>I would greatly appreciate any pointers in this direction
>
>
>Florin
>
>
>
>
>
> From: Monty Zukowski 
>To: Fundamentals of New Computing  
>Sent: Wednesday, April 11, 2012 12:20 AM
>Subject: Re: [fonc] Kernel & Maru
> 
>Thank you everyone for the great references.  I've got some homework
>to do now...
>
>Monty
>
>On Tue, Apr 10, 2012 at 2:54 PM, Ian Piumarta  wrote:
>> Extending Alan's comments...
>>
>> A small, well explained, and easily understandable example of an iterative 
>> implementation of a recursive language (Scheme) can be found in R. Kent 
>> Dybvig's Ph.D. thesis.
>>
>> http://www.cs.unm.edu/~williams/cs491/three-imp.pdf
>>
>> Regards,
>> Ian
>>
>> ___
>> fonc mailing list
>> fonc@vpri.org
>> http://vpri.org/mailman/listinfo/fonc
>___
>fonc mailing list
>fonc@vpri.org
>http://vpri.org/mailman/listinfo/fonc
>
>
>
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Kernel & Maru

2012-04-11 Thread Alan Kay
Yes, the work was done at Stanford (and Bill McKeeman did a lot of the systems 
programming for the implementation).

The CACM article is a cleaned up version of this.


Cheers,

Alan




>
> From: Monty Zukowski 
>To: Fundamentals of New Computing  
>Sent: Wednesday, April 11, 2012 9:06 AM
>Subject: Re: [fonc] Kernel & Maru
> 
>This one seems to be available as a technical report as well:
>
>http://infolab.stanford.edu/TR/CS-TR-65-20.html
>
>Monty
>
>On Wed, Apr 11, 2012 at 4:44 AM, Alan Kay  wrote:
>> One more that is fun (and one I learned a lot from when I was in grad school
>> in 1966) is Niklaus Wirth's "Euler" paper, published in two parts in CACM
>> Jan and Feb 1966.
>>
>> This is "a generalization of Algol" via some ideas of van Wijngaarden and
>> winds up with a very Lispish kind of language by virtue of consolidating and
>> merging specific features of Algol into a more general much smaller kernel.
>>
>> The fun of this paper is that Klaus presents a complete implementation that
>> includes a simple byte-code interpreter.
>>
>> This paper missed getting read enough historically (I think) because one
>> large part of it is a precedence parsing scheme invented by Wirth to allow a
>> mechanical transition between a BNF-like grammar and a parser. This part was
>> not very effective and it was very complicated.
>>
>> So just ignore this. You can use a Meta II type parser (or some modern PEG
>> parser like OMeta) to easily parse Euler directly into byte-codes.
>>
>> Everything else is really clear, including the use of the Dijkstra "display"
>> technique for quick access to the static nesting of contexts used by Algol
>> (and later by Scheme).
>>
>> Cheers,
>>
>> Alan
>>
>> 
>> From: Monty Zukowski 
>> To: Fundamentals of New Computing 
>> Sent: Tuesday, April 10, 2012 9:20 PM
>>
>> Subject: Re: [fonc] Kernel & Maru
>>
>> Thank you everyone for the great references.  I've got some homework
>> to do now...
>>
>> Monty
>>
>> On Tue, Apr 10, 2012 at 2:54 PM, Ian Piumarta 
>> wrote:
>>> Extending Alan's comments...
>>>
>>> A small, well explained, and easily understandable example of an iterative
>>> implementation of a recursive language (Scheme) can be found in R. Kent
>>> Dybvig's Ph.D. thesis.
>>>
>>> http://www.cs.unm.edu/~williams/cs491/three-imp.pdf
>>>
>>> Regards,
>>> Ian
>>>
>>> ___
>>> fonc mailing list
>>> fonc@vpri.org
>>> http://vpri.org/mailman/listinfo/fonc
>> ___
>> fonc mailing list
>> fonc@vpri.org
>> http://vpri.org/mailman/listinfo/fonc
>>
>>
>>
>>
>___
>fonc mailing list
>fonc@vpri.org
>http://vpri.org/mailman/listinfo/fonc
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Kernel & Maru

2012-04-11 Thread Alan Kay
The survey paper is just a survey. Dave's thesis is how to make all the control 
structure by extending a "McCarthy" like tiny kernel. Still gold today.

Cheers,

Alan




>
> From: Eugene Wallingford 
>To: Fundamentals of New Computing  
>Sent: Wednesday, April 11, 2012 9:02 AM
>Subject: Re: [fonc] Kernel & Maru
> 
>
>>> If anyone finds an electronic copy of Fisher's thesis I'd love to know
>>> about it.  My searches have been fruitless.
>> 
>> The title is not the same, but maybe these are variants of the same paper?
>> 
>> http://dl.acm.org/author_page.cfm?id=81100550987&coll=DL&dl=ACM&trk=0&cfid=76786786&cftoken=53955875
>> 
>> Also, I no longer have access to ACM digital library, so I can't post the 
>> PDFs.
>
>     The thesis link is bibliographic only.  The survey paper
>     is available as PDF, so I grabbed it.  If you'd like a
>     copy, let me know.
>
> Eugene
>___
>fonc mailing list
>fonc@vpri.org
>http://vpri.org/mailman/listinfo/fonc
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Kernel & Maru

2012-04-11 Thread Alan Kay
One more that is fun (and one I learned a lot from when I was in grad school in 
1966) is Niklaus Wirth's "Euler" paper, published in two parts in CACM Jan and 
Feb 1966.

This is "a generalization of Algol" via some ideas of van Wijngaarden and winds 
up with a very Lispish kind of language by virtue of consolidating and merging 
specific features of Algol into a more general much smaller kernel.

The fun of this paper is that Klaus presents a complete implementation that 
includes a simple byte-code interpreter.

This paper missed getting read enough historically (I think) because one large 
part of it is a precedence parsing scheme invented by Wirth to allow a 
mechanical transition between a BNF-like grammar and a parser. This part was 
not very effective and it was very complicated. 


So just ignore this. You can use a Meta II type parser (or some modern PEG 
parser like OMeta) to easily parse Euler directly into byte-codes. 


Everything else is really clear, including the use of the Dijkstra "display" 
technique for quick access to the static nesting of contexts used by Algol (and 
later by Scheme).
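For readers unfamiliar with the "display" technique: it is an array of frame 
pointers indexed by static (lexical) nesting level, so a variable compiled as a 
(level, offset) pair is reached in two indexing steps regardless of how deep the 
dynamic call chain is. A hedged Python sketch (names are illustrative, not 
Euler's actual implementation):

```python
def load(display, level, offset):
    # Two indexing steps: display -> frame -> slot.
    return display[level][offset]

def store(display, level, offset, value):
    display[level][offset] = value

def enter(display, level, nlocals):
    # Entering a procedure at static level `level`: remember whatever
    # frame the display held at that level, then install a new frame.
    saved = display[level] if level < len(display) else None
    while len(display) <= level:
        display.append(None)
    display[level] = [None] * nlocals
    return saved

def leave(display, level, saved):
    # On exit, restore the previous frame for that static level.
    display[level] = saved
```

Free-variable access from an inner procedure is then just `load(display, outer_level, offset)`, with no chain of static links to follow.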

Cheers,

Alan




>
> From: Monty Zukowski 
>To: Fundamentals of New Computing  
>Sent: Tuesday, April 10, 2012 9:20 PM
>Subject: Re: [fonc] Kernel & Maru
> 
>Thank you everyone for the great references.  I've got some homework
>to do now...
>
>Monty
>
>On Tue, Apr 10, 2012 at 2:54 PM, Ian Piumarta  wrote:
>> Extending Alan's comments...
>>
>> A small, well explained, and easily understandable example of an iterative 
>> implementation of a recursive language (Scheme) can be found in R. Kent 
>> Dybvig's Ph.D. thesis.
>>
>> http://www.cs.unm.edu/~williams/cs491/three-imp.pdf
>>
>> Regards,
>> Ian
>>
>> ___
>> fonc mailing list
>> fonc@vpri.org
>> http://vpri.org/mailman/listinfo/fonc
>___
>fonc mailing list
>fonc@vpri.org
>http://vpri.org/mailman/listinfo/fonc
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Kernel & Maru

2012-04-10 Thread Alan Kay
The first 110 pages of this are still deep gold

Cheers,

Alan




>
> From: Monty Zukowski 
>To: Fundamentals of New Computing  
>Sent: Tuesday, April 10, 2012 12:02 PM
>Subject: Re: [fonc] Kernel & Maru
> 
>Yes, it looks like this is it.  $37 for a PDF.  Thanks!
>
>CONTROL STRUCTURES FOR PROGRAMMING LANGUAGES
>by FISHER, DAVID ALLEN, Ph.D., Carnegie Mellon University, 1970, 215
>pages; AAT 7021590
>
>On Tue, Apr 10, 2012 at 11:54 AM, Duncan Mak  wrote:
>> On Tue, Apr 10, 2012 at 2:34 PM, Monty Zukowski 
>> wrote:
>>>
>>> If anyone finds an electronic copy of Fisher's thesis I'd love to know
>>> about it.  My searches have been fruitless.
>>
>>
>> The title is not the same, but maybe these are variants of the same paper?
>>
>> http://dl.acm.org/author_page.cfm?id=81100550987&coll=DL&dl=ACM&trk=0&cfid=76786786&cftoken=53955875
>>
>> Also, I no longer have access to ACM digital library, so I can't post the
>> PDFs.
>>
>> --
>> Duncan.
>>
>> ___
>> fonc mailing list
>> fonc@vpri.org
>> http://vpri.org/mailman/listinfo/fonc
>>
>___
>fonc mailing list
>fonc@vpri.org
>http://vpri.org/mailman/listinfo/fonc
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Kernel & Maru

2012-04-10 Thread Alan Kay
Hi Julian

(Adding to Ian's comments)

Doing as Ian suggests and trying out variants can be an extremely illuminating 
experience (for example, BBN Lisp (1.85) had three or four choices for what was 
meant by a "lambda closure" -- three of the options I remember were (a) "do 
Algol" -- this is essentially what Scheme wound up doing 15 years later, (b) 
make a private a-list for free variables, (c) lock the private a-list to the 
values of the free variables at the time of the closure).

I suggest not trying to write your eval in the style that McCarthy used (it's 
too convoluted and intertwined). The first thing to do is to identify and 
isolate separate cases that have to be taken care of -- e.g. what does it mean 
to eval the "function position" of an expression (LISP keeps on evaling until a 
lambda is found ...). Write these separate cases as separately as possible.
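As one illustration of writing the cases separately, here is a minimal 
evaluator sketch in Python: one isolated branch per form, and a loop that keeps 
evaluating the function position until a closure appears. This is a sketch of 
the exercise, not McCarthy's code:

```python
def eval_expr(x, env):
    # Each case handled separately, rather than intertwined.
    if isinstance(x, str):                      # variable reference
        return env[x]
    if not isinstance(x, list):                 # self-evaluating atom
        return x
    if x[0] == 'quote':
        return x[1]
    if x[0] == 'if':
        _, test, then, alt = x
        return eval_expr(then if eval_expr(test, env) else alt, env)
    if x[0] == 'lambda':                        # capture defining env
        return ('closure', x[1], x[2], env)
    # Application: keep evaling the function position until a
    # closure (lambda) is found, then apply it.
    f = eval_expr(x[0], env)
    while not (isinstance(f, tuple) and f[0] == 'closure'):
        f = eval_expr(f, env)
    args = [eval_expr(a, env) for a in x[1:]]
    return apply_closure(f, args)

def apply_closure(f, args):
    _, params, body, env = f
    # Extend the closure's environment with the argument bindings.
    return eval_expr(body, dict(env, **dict(zip(params, args))))
```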


Dave Fisher's thesis "A Control Definition Language" CMU 1970 is a very clean 
approach to thinking about environments for LISP like languages. He separates 
the "control" path, from the "environment" path, etc.


You have to think about whether "special forms" are a worthwhile idea (other 
ploys can be used to control when and if arguments are evaled).

You will need to think about the tradeoffs between a pure applicative style vs. 
being able to set values imperatively. For example, you could use "Strachey's 
device" to write "loops" as clean single assignment structures which are 
actually tail recursions. Couple this with "fluents" (McCarthy's "time 
management") and you get a very clean non-assignment language that can 
nonetheless traverse through "time". Variants of this idea were used in Lucid 
(Ashcroft and Wadge).

Even if you use a recursive language to write your eval in, you might also 
consider taking a second pass and writing the eval just in terms of loops -- 
this is also very illuminating.

What one gets from doing these exercises is a visceral feel for "great power 
with very little mechanics" -- this is obtained via "mathematical thinking" and 
it is obscured almost completely by the standard approaches to characterizing 
programming languages (as "things in themselves" rather than a simple powerful 
kernel "with decorations").

Cheers,

Alan





>
> From: Ian Piumarta 
>To: Julian Leviston  
>Cc: Fundamentals of New Computing  
>Sent: Monday, April 9, 2012 8:58 PM
>Subject: Re: [fonc] Kernel & Maru
> 
>Dear Julian,
>
>On Apr 9, 2012, at 19:40 , Julian Leviston wrote:
>
>> Also, simply, what are the "semantic inadequacies" of LISP that the "Maru 
>> paper" refers to (http://piumarta.com/freeco11/freeco11-piumarta-oecm.pdf)? 
>> I read the footnoted article (The Influence of the Designer on the Design—J. 
>> McCarthy and Lisp), but it didn't elucidate things very much for me.
>
>Here is a list that remains commented in my TeX file but which was never 
>expanded with justifications and inserted into the final version.  (The ACM 
>insisted that a paper published online, for download only, be strictly limited 
>to five pages -- go figure!)
>
>%%   Difficulties and omissions arise
>%%   involving function-valued arguments, application of function-valued
>%%   non-atomic expressions, inconsistent evaluation rules for arguments,
>%%   shadowing of local by global bindings, the disjoint value spaces for
>%%   functions and symbolic expressions, etc.
>
>IIRC these all remain in the evaluator published in the first part of the 
>LISP-1.5 Manual.
>
>> I have to say that all of these papers and works are making me feel like a 3 
>> year old making his first steps into understanding about the world. I guess 
>> I must be learning, because this is the feeling I've always had when I've 
>> been growing, yet I don't feel like I have any semblance of a grasp on any 
>> part of it, really... which bothers me a lot.
>
>My suggestion would be to forget everything that has been confusing you and 
>begin again with the LISP-1.5 Manual (and maybe "Recursive Functions of 
>Symbolic Expressions and Their Computation by Machine").  Then pick your 
>favourite superfast-prototyping programming language and build McCarthy's 
>evaluator in it.  (This step is not optional if you want to understand 
>properly.)  Then throw some expressions containing higher-order functions and 
>free variables at it, figure out why it behaves oddly, and fix it without 
>adding any conceptual complexity.
>
>A weekend or two should be enough for all of this.  At the end of it you will 
>understand profoundly why most of the things that bothered you were bothering 
>you, and you will never be bothered by them again.  Anything that remains 
>bothersome might be caused by trying to think of Common Lisp as a 
>dynamically-evaluated language, rather than a compiled one.
>
>(FWIW: Subsequently fixing every significant asymmetry, semantic irregularity 
>and immutable special case that you can find in your evaluator should lead you 
>to some

Re: [fonc] Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future

2012-04-09 Thread Alan Kay
Yes "time management" is a good idea. 


Looking at the documentation here I see no mention of the (likely) inventor of 
the idea -- John McCarthy ca 1962-3, or the most adventurous early design to 
actually use the idea (outside of AI robots/agents work) -- David Reed in his 
1978 MIT thesis "A Network Operating System". 


Viewpoints implemented a strong "real-time enough" version of Reed's ideas 
about 10 years ago -- "Croquet"


The ALSP blurb on Wikipedia does mention the PARC Pup Protocol and Network (the 
"Internet" before the Internet).

Cheers,

Alan




>
> From: David Barbour 
>To: Fundamentals of New Computing  
>Sent: Monday, April 9, 2012 9:44 AM
>Subject: Re: [fonc] Everything You Know (about Parallel Programming) Is 
>Wrong!: A Wild Screed about the Future
> 
>
>Going back to this post (to avoid distraction), I note that
>
>
>Aggregate Level Simulation Protocol
>   and its successor
>High Level Architecture
>
>
>Both provide "time management" to achieve consistency, i.e. "so that the times 
>for all simulations appear the same to users and so that event causality is 
>maintained – events should occur in the same sequence in all simulations."
>
>
>You should not conclude for simulations that it is easier to spawn a process 
>than to serialize things. You'll end up spawning a process AND serializing 
>things.
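The serializing step behind such "time management" reduces to a toy: release 
events strictly in timestamp order, with ties broken deterministically, so 
every replica that drains the same events sees the same sequence. A 
hypothetical sketch, not the actual ALSP/HLA machinery:

```python
import heapq

class TimeManagedQueue:
    """Toy sketch: events come out strictly in timestamp order
    (ties broken by arrival sequence number), so all replicas
    draining the same posts observe the same event sequence."""
    def __init__(self):
        self._heap = []
        self._seq = 0
    def post(self, timestamp, event):
        # The (timestamp, seq) pair makes ordering deterministic.
        heapq.heappush(self._heap, (timestamp, self._seq, event))
        self._seq += 1
    def drain(self):
        order = []
        while self._heap:
            _, _, event = heapq.heappop(self._heap)
            order.append(event)
        return order
```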
>
>
>Regards,
>
>
>Dave
>
>
>
>
>http://en.wikipedia.org/wiki/Aggregate_Level_Simulation_Protocol 
>http://en.wikipedia.org/wiki/High_Level_Architecture_(simulation) 
>
>
>
>The ALSP page goes into more detail on how this is achieved. HLA started as 
>the merging of Distributed Interactive Simulation (DIS) with ALSP. 
>
>
>
>On Tue, Apr 3, 2012 at 8:02 AM, Miles Fidelman  
>wrote:
>
>Steven Robertson wrote:
>>
>>On Tue, Apr 3, 2012 at 7:23 AM, Tom Novelli  wrote:
>>>
>>>Even if there does turn out to be a simple and general way to do parallel
programming, there'll always be tradeoffs weighing against it - energy usage
and design complexity, to name two obvious ones.

As to design complexity: you have to be kidding.  For huge classes of problems - 
anything that's remotely transactional or event driven, simulation, gaming come 
to mind immediately - it's far easier to conceptualize as spawning a process 
than trying to serialize things.  The stumbling block has always been context 
switching overhead.  That problem goes away as your hardware becomes massively 
parallel.
>>
>>Miles Fidelman
>>
>>-- 
>>In theory, there is no difference between theory and practice.
>>In practice, there is.    Yogi Berra
>>
>>
>>
>>___
>>fonc mailing list
>>fonc@vpri.org
>>http://vpri.org/mailman/listinfo/fonc
>>
>
>
>
>-- 
>bringing s-words to a pen fight
>
>___
>fonc mailing list
>fonc@vpri.org
>http://vpri.org/mailman/listinfo/fonc
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] O. K. Moore's talking typewriter: where is it now?

2012-03-20 Thread Alan Kay
This just came up recently on the IAEP and fonc lists via Scott Ananian's 
request for comments on his Nell proposal.

Moore's work continues to be impressive today (at least to me). Moore's 
thinking was wide, deep, and rich -- and much of it is quite relevant and useful 
today. There was a lot more to it than you suggest below.


IBM and John Henry Martin did "Writing to Read" in the mid-60s using the PC -- 
a very similar approach but with little or even no attribution to Moore's ideas.

From the technological standpoint, both of these ideas were very early and 
expensive. But the fact that they were both quite successful should have made 
them more memorable, and led them to be picked up in the last decade, when 
these ideas 
(and more) can be propagated quite inexpensively.

Of your reasons, "2" is the closest. One you didn't give was 

4. Things get easily forgotten in a pop-culture

Cheers,

Alan




>
> From: Mohamed Samy 
>To: Fundamentals of New Computing  
>Sent: Monday, March 19, 2012 8:48 PM
>Subject: [fonc] O. K. Moore's talking typewriter: where is it now?
> 
>
>In Alan Kay's original paper "A personal computer for children of all ages" in 
>1972, he described an experiment by Omar Khayyam Moore; the talking 
>typewriter: it was a device that spoke the words typed on it, but remained 
>silent for whatever was entered that isn't a word.
>
>The experiment was to leave the typewriter in a play area populated by 
>toddlers (about the age of 3) and eventually the devices taught them - more 
>precisely enabled them to teach themselves - reading and writing.
>
>My question is: why isn't everyone doing this now? You'd expect those results 
>would influence schools, nurseries, and parents. You'd expect tons of such 
>electronic devices to be for sale since decades ago. If there's something that 
>would sell to parents, it would be 'instant reading teacher'. So why didn't 
>this just spread?
>
>I have 3 guesses:
>1- The experiment was discredited for some reason or disproven by another 
>later experiment.
>2- It was scientifically sound, but no one simply cared. That's perfectly 
>possible since social and cultural aspects have much more influence than 
>expected.
>3- No one of the scientific community cared, so no further work was done to 
>prove or disprove it. It remains a hypothesis.
>
>I've tried to search online for papers or articles about the experiment, but 
>most of what I found was news about it from the 60s...I thought I'd ask here :)
>


Re: [fonc] Publish/subscribe vs. send

2012-03-20 Thread Alan Kay
One of the motivations is to handle some kinds of scaling more gracefully. If 
you think about things from a module's point of view, the fewer details it has 
to know about resources it needs (and about its environment in general) the 
better. 

It can be thought of as a next stage in going from explicit procedure calls 
(where you have to use the exact name) to message passing with polymorphism 
where the system uses context to actually choose the method, and the name you 
use is (supposed to be) a term denoting a kind of goal (whose specifics will be 
determined outside the module).

If you can specify what you *need* via a description, you can eliminate even 
having to know the tag for the goal; the system will still find it for you.

Another way to think of this is as a kind of "semantic typing".

This could be a great idea, because a language for writing descriptions will 
almost certainly have fewer things that have to be agreed on globally, and this 
should allow more graceful coordinations and better scaling.

However, this has yet to be exhibited -- so it needs to be done and critiqued 
before we should get too excited here.

I think a little $ and a lot of work in CYC or Genesereth's game language would 
be a good first place to start. For example, in CYC you should be able to write 
a description using the base relational language, and Cyc should be able to 
find you the local terms it uses for these.
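As a toy illustration of the direction (invented names, plain Javascript -- not CYC or any real system): providers advertise property descriptions, and a consumer finds what it needs by matching on those properties rather than by knowing a globally agreed name.

```javascript
// Toy registry: providers advertise property descriptions instead of names.
// A consumer states what it *needs*; the registry matches on properties,
// so neither side has to agree on a global tag in advance.
const registry = [];

function advertise(description, provider) {
  registry.push({ description, provider });
}

// A need matches an advertisement if every required property is satisfied.
function discover(need) {
  return registry
    .filter(({ description }) =>
      Object.entries(need).every(([k, v]) => description[k] === v))
    .map(({ provider }) => provider);
}

// Two providers describe themselves; no shared service name is involved.
advertise({ converts: "celsius->fahrenheit", pure: true },
          c => c * 9 / 5 + 32);
advertise({ converts: "fahrenheit->celsius", pure: true },
          f => (f - 32) * 5 / 9);

// The consumer asks for "something that converts C to F".
const [toF] = discover({ converts: "celsius->fahrenheit" });
console.log(toF(100)); // 212
```

The point of the sketch is only that the consumer's code mentions a description, never a provider's name -- the "fewer global agreements" property described above.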

Cheers,

Alan




>
> From: Casey Ransberger 
>To: Fundamentals of New Computing  
>Sent: Monday, March 19, 2012 3:35 PM
>Subject: [fonc] Publish/subscribe vs. send
> 
>Here's the real naive question...
>
>I'm fuzzy about why objects should receive messages but not send them. I think 
>I can see the mechanics of how it might work, I just don't grok why it's 
>important. 
>
>What motivates? Are we trying to eliminate the overhead of ST-style message 
>passing? Is publish/subscribe easier to understand? Does it lead to simpler 
>artifacts? Looser coupling? Does it simplify matters of concurrency?
>
>I feel like I'm still missing a pretty important concept, but I have a feeling 
>that once I've grabbed at it, several things might suddenly fit and make sense.


Re: [fonc] Naive question

2012-03-19 Thread Alan Kay
Hi Benoit

This is basically what "publish and subscribe" schemes are all about. Linda is 
a simple "coordination protocol" for organizing such loose couplings. There are 
sketches of such mechanisms in most of the STEPS reports 

Spreadsheets are simple versions of this


The Playground language for the Vivarium project was set up like this


For real scaling, one would like to move to more general semantic descriptions 
of "what is needed" and "what is supplied" ...
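The Linda coordination style mentioned above can be sketched in a few lines (a toy only, with invented function names -- not Linda itself or any of the systems above): tuples go into a shared space, and consumers retrieve them by template matching rather than by addressing a sender.

```javascript
// Minimal Linda-style tuple space: producers `out` tuples, consumers take
// them with `inp` (destructive) or read them with `rdp` (non-destructive)
// by matching a template; `null` in a template is a wildcard.
const space = [];

const out = tuple => space.push(tuple);

const matches = (template, tuple) =>
  template.length === tuple.length &&
  template.every((t, i) => t === null || t === tuple[i]);

function inp(template) {              // take: remove and return, or undefined
  const i = space.findIndex(t => matches(template, t));
  return i >= 0 ? space.splice(i, 1)[0] : undefined;
}

const rdp = template => space.find(t => matches(template, t)); // read only

out(["temperature", "lab", 21.5]);
out(["temperature", "roof", 18.0]);

// A consumer needs *a* temperature reading; it never names the producer.
const reading = rdp(["temperature", null, null]);
console.log(reading); // ["temperature", "lab", 21.5]
```

Producers and consumers are decoupled in both name and time, which is the loose-coupling property being discussed.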

Cheers,

Alan




>
> From: Benoît Fleury 
>To: Fundamentals of New Computing  
>Sent: Monday, March 19, 2012 1:10 PM
>Subject: [fonc] Naive question
> 
>
>Hi,
>
>
>I was wondering if there is any language out there that lets you describe the 
>behavior of an "object" as a grammar.
>
>
>An object would receive a stream of events. The rules of the grammar describe 
>the sequence of events the object can respond to. The "semantic actions" 
>inside these rules can change the internal state of the object or emit other 
>events.
>
>
>We don't want the objects to send message to each other. A bus-like structure 
>would collect events and dispatch them to all interested objects. To avoid 
>pushing an event to all objects, the "bus" would ask first to all objects what 
>kind of event they're waiting for. These events are the possible alternatives 
>in the object's grammar based on the current internal state of the object.
>
>
>It's different from object-oriented programming since objects don't talk 
>directly to each other.
>
>
>A few questions the come up when thinking about this:
> - do we want backtracking? probably not, if the semantic actions are 
>different, it might be awkward or impossible to undo them. If the semantic 
>actions are the same in the grammar, we might want to do some factoring to 
>remove repeated semantic actions.
> - how to represent time? Do all objects need to share the same clock? Do we 
>have to send "tick" events to all objects?
> - should we allow the parallel execution of multiple scenarios for the same 
>object? What does it make more complex in the design of the object's behavior? 
>What does it make simpler?
>
>
>If we assume an object receive a tick event to represent time, and using a 
>syntax similar to ometa, we could write a simplistic behavior of an ant this 
>way:
>
>
># the ant find food when there is a food event raised and the ant's position 
>is in the area of the food
># food indicates an event of type "food", the question mark starts a semantic 
>predicate
>findFood    = food ?(this.position.inRect(food.area))
>
>
># similar rule to find the nest
>findNest     = nest ?(this.position.inRect(nest.area))
>
>
># at every step, the ant move
>move         = tick (=> move 1 unit in current direction (or pick random 
>direction if no direction))
>
>
># the gatherFood scenario can then be described as finding food then finding 
>the nest
>gatherFood = findFood (=> pick up food, change direction to nest)
>                    findNest (=> drop food, change direction to food source)
>
>
>There is probably a lot of thing missing and not thought through.
>
>
>But I was just wondering if you know a language to do this kind of things?
>
>
>Thanks,
>Benoit.


Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-16 Thread Alan Kay
The WAM and other fast schemes for Prolog are worth looking at. But the 
Javascript version that Alex did using his and Stephen Murrell's design for 
compact Prolog semantics (about 90 lines of Javascript code) is very 
illuminating for those interested in "the logic of logic". 
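For readers who want the flavor of such a compact implementation: the heart of any small Prolog is unification. The sketch below is not the Warth/Murrell code, just an illustration, with variables represented as capitalized strings and compound terms as arrays (and no occurs check).

```javascript
// Unification: make two terms equal by binding variables.
const isVar = t => typeof t === "string" && /^[A-Z]/.test(t);

// Follow a variable's bindings in the environment until a value is reached.
const walk = (t, env) => (isVar(t) && env.has(t)) ? walk(env.get(t), env) : t;

function unify(a, b, env = new Map()) {
  a = walk(a, env); b = walk(b, env);
  if (a === b) return env;
  if (isVar(a)) return new Map(env).set(a, b);
  if (isVar(b)) return new Map(env).set(b, a);
  if (Array.isArray(a) && Array.isArray(b) && a.length === b.length) {
    for (let i = 0; i < a.length; i++) {
      env = unify(a[i], b[i], env);
      if (env === null) return null;
    }
    return env;
  }
  return null; // mismatch: unification fails
}

// parent(tom, X) unified with parent(tom, bob) binds X = bob.
const env = unify(["parent", "tom", "X"], ["parent", "tom", "bob"]);
console.log(env.get("X")); // "bob"
```

Most of the rest of a compact Prolog is a resolution loop that tries clauses against a goal, using exactly this operation.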


But Prolog has always had some serious flaws -- so it is worth looking at 
cleaned up and enhanced versions (such as the Datalog with negation and time 
variants I've mentioned). Also, Shapiro's Concurrent Prolog did quite a cleanup 
long ago.

I particularly liked the arguments of Bill Kornfield's "Prolog With Equality" 
paper from many years ago -- this is one of several seminal perspectives on 
where this kind of language should be taken.

The big flaw with most of the attempts I've seen to combine "Logic and Objects" 
is that what should be done about state is not taken seriously. The first sins 
were committed in Prolog itself by allowing an "assert" that is not 
automatically undone.
I've argued that it would be much better to use takeoffs of "situation 
calculus" and "pseudotime" to allow perfect 
deductions/implications/functional-relationships to be computed while still 
moving from one context to another to have a model of before, now, and after. 
These are not new ideas, and I didn't have them first.
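The situation-calculus flavor can be sketched as follows (invented API, for illustration only): instead of destructively asserting into one database, each assertion derives a new context that extends the old one, so "before" and "after" both remain queryable.

```javascript
// Non-destructive assertion: a situation is a chain of fact layers.
// Earlier situations are never mutated.
const emptySituation = { facts: [], parent: null };

// "Asserting" returns a *new* situation extending the old one.
const assertIn = (sit, fact) => ({ facts: [fact], parent: sit });

// A fact holds in a situation if it appears in that layer or any ancestor.
function holds(sit, fact) {
  for (let s = sit; s !== null; s = s.parent)
    if (s.facts.some(f => JSON.stringify(f) === JSON.stringify(fact)))
      return true;
  return false;
}

const s0 = emptySituation;
const s1 = assertIn(s0, ["on", "blockA", "table"]);
const s2 = assertIn(s1, ["on", "blockB", "blockA"]);

console.log(holds(s2, ["on", "blockB", "blockA"])); // true
console.log(holds(s1, ["on", "blockB", "blockA"])); // false: s1 is "before"
```

Because nothing is overwritten, deductions made in one context stay valid in that context even as the model "moves" to later ones.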

Cheers,

Alan




>
> From: Ryan Mitchley 
>To: Fundamentals of New Computing  
>Sent: Friday, March 16, 2012 5:26 AM
>Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
> 
>
>On 15/03/2012 14:20, Alan Kay wrote: 
>Alex Warth did both a standard Prolog and an English based language one using 
>OMeta in both Javascript, and in Smalltalk.
>>
>>
>>
>I must have a look at these. Thanks for all of the references. I was
>working my way through Warren Abstract Machine implementation
>details but it was truly headache-inducing (for me, anyway).
>
>A book I keep meaning to get is "Paradigms of Artificial
>Intelligence Programming: Case Studies in Common Lisp", which
>describes a Prolog-like implementation (and much more) in Lisp.
>
>The Minsky book would be very welcome!
>
>


Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-16 Thread Alan Kay
>
>Regarding these quotes from "the early history of smalltalk" re ST 71:
>
>I had originally made the boast because McCarthy’s self-describing
>LISP interpreter was written in itself. It was about “a page”, and as
>far as power goes, LISP was the whole nine-yards for functional
>languages. I was quite sure I could do the same for object-oriented
>languages plus be able to do a reasonable syntax for the code a la some of the
>FLEX machine techniques.
>
>Once I read that I HAD to see it. I finally tracked
>down a PDF version of the history of smalltalk which contained the mythical
>appendix III and thus the "one pager". The scan is pretty bad so I have
>attempted to transcribe it into a plain text file, which I am attaching here. I
>was wondering if you could take a look and see if it is roughly correctly
>transcribed? For the sake of completeness I am working on SVG versions of the
>diagrams.
>
>My main questions specifically relating to the one pager are:
>
>1. Does my use of "." instead of the other symbol (e.g. e.MSG v.s. 
>e{other-char}MSG) present an issue?
>2. I had a hard time making out the single-quoted characters on lines 23 and 
>24 following the e.MSG.PC, my best guess was a comma and a period (which 
>relates to my first question, is the character actually the same as 
>{other-char})? 
>3. regarding line 54 and the "etc..." any idea what that ellipsis would be 
>once expanded haha? Would it just be a full definition of escapes or would it 
>be further definitions relating to the interpreter?
>
>4. line 62 where I put {?} what is that character meant to be? I believe it is 
>the same as what is on line 70 and also marked as {?}. 
>
>5. Is it implied that things like quote, set (<-), Table, null, atom, notlist, 
>escape, goto, if-then-else, select-case, +, > etc. would exist as primitives?
>
>Any other insight you could provide would be much appreciated. Thanks -Shaun
>
>
>On Thu, Mar 15, 2012 at 6:40 PM, Andre van Delft  
>wrote:
>
>The theory Algebra of Communicating Processes (ACP) 
>>offers non-determinism (as in Meta II) plus concurrency.
>>I will present a paper on extending Scala with ACP 
>>next month at Scala Days 2012. For an abstract, see
>>http://days2012.scala-lang.org/node/92
>>
>>
>>A non-final version of the paper is at 
>>http://code.google.com/p/subscript/downloads/detail?name=SubScript-TR2012.pdf
>>
>>
>>André
>>
>>Op 15 mrt. 2012, om 03:03 heeft Alan Kay het volgende geschreven:
>>
>>
>>Well, it was very much a "mythical beast" even on paper -- and you really 
>>have to implement programming languages and make a lot of things with them to 
>>be able to assess them 
>>>
>>>
>>>
>>>But -- basically -- since meeting Seymour and starting to think about 
>>>children and programming, there were eight systems that I thought were 
>>>really nifty and cried out to be unified somehow:
>>>  1. Joss
>>>  2. Lisp
>>>  3. Logo -- which was originally a unification of Joss and Lisp (but I 
>>>thought more could be done in this direction).
>>>  4. Planner -- a big set of ideas (long before Prolog) by Carl Hewitt for 
>>>logic programming and "pattern directed inference", both forward and 
>>>backwards with backtracking
>>>  5. Meta II -- a super simple meta parser and compiler done by Val Schorre 
>>>at UCLA ca 1963
>>>  6. IMP -- perhaps the first real extensible language that worked well -- 
>>>by Ned Irons (CACM, Jan 1970)
>>>
>>>  7. The Lisp-70 Pattern Matching System -- by Larry Tesler, et al, with 
>>>some design ideas by me
>>>
>>>  8. The object and pattern directed extension stuff I'd been doing 
>>>previously with the Flex Machine and afterwards at SAIL (that also was 
>>>influenced by Meta II)
>>>
>>>
>>>
>>>One of the schemes was to really make the pattern matching parts of this 
>>>"work for everything" that eventually required "invocations and binding". 
>>>This was doable semantically but was a bear syntactically because of the 
>>>different senses of what kinds of matching and binding were intended for 
>>>different problems. This messed up the readability and desired "simple 
>>>things should be simple".
>>>
>>>Examples I wanted to cover included simple translations of languages 
>>>(English to Pig Latin, English to French, etc. some of these had been done 
>>>in Logo), the Winograd robot block stacking and other examples done with 
>>>Plan

Re: [fonc] Apple and hardware (was: Error trying to compile COLA)

2012-03-15 Thread Alan Kay
John Sculley was the ultimate champion of taking Hypercard from research 
project to product.

Cheers,

Alan




>
> From: Jecel Assumpcao Jr. 
>To: Fundamentals of New Computing  
>Sent: Thursday, March 15, 2012 4:03 PM
>Subject: Re: [fonc] Apple and hardware (was: Error trying to compile COLA)
> 
>Alan Kay wrote on Wed, 14 Mar 2012 16:44:33 -0700 (PDT)
>> The CRISP was too slow, and had other problems in its details. Sakoman liked 
>> it ...
>
>Thanks for the information! Just looking at the papers about it I had
>the impression that it would be reasonably faster than an ARM at the
>same clock frequency while having a VAX-like code density. I was going
>to suggest that implementing CRISP on an FPGA could be an interesting
>project for one of the grad students at my university, but that doesn't
>seem to be the case.
>
>A rather different processor for running C (for floating point intensive
>code) was the WM architecture:
>
>http://www.cs.virginia.edu/~wm/wm.html
>http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.45.1092
>
>I only heard about it because it was the inspiration for the memory
>interface in the version of Chuck Thacker's Tiny Computer used in the
>Beehive multicore project. This, unfortunately, is probably too complex
>for a student in my group.
>
>> Bill Atkinson did Hypercard ... Larry made many other contributions at Xerox 
>> and Apple
>
>I know that Bill was the developer, but had the impression that Larry
>had done what was needed to move from project to product. He certainly
>was the one promoting it pre-launch in the old Smalltalk forums at BIX
>(Byte Information eXchange).
>
>> To me the Dynabook has always been 95% a "service model" and 5% physical
>> specs (there were three main physical ideas for it, only one was the tablet).
>
>2015 is almost here - time to move to the "computer in glasses" model
>(lame "Back to the Future" reference, but I know you will agree).
>
>BGB wrote on Wed, 14 Mar 2012 17:23:07 -0700
>> the TSS?...
>> 
>> it is still usable on x86 in 32-bit Protected-Mode.
>
>I was thinking about about the LDT and GDT (Local Descriptor Table and
>Global Descriptor Table). These still work, of course, but the current
>implementations are so bad that it is faster to do the same thing 100%
>in software. You do lose the security aspect, however.
>
>It is funny that the wish to put the TSS to good use was the big
>motivation for Linux. The resulting non portability (which is a lot less
>important now than then) was one of the main complaints in Andrew
>Tanenbaum's famous early rant about the OS. The first attempt to port
>Linux (to the Alpha, if I remember correctly) required completely
>rewriting that part and the changes were quickly brought back to the
>Intel version.
>
>Marcel Weiher wrote on Thu, 15 Mar 2012 15:33:07 +0100
>> I have a little Postscript interpreter/scratchpad in the AppStore 
>> (TouchScript,
>> http://itunes.apple.com/en/app/touchscript/id398914579?mt=8 ).  Admittedly, 
>> it
>> was mostly a trial balloon to see if something like that would be accepted, 
>> and
>> it was (2nd revision so far).  And somewhat surprisingly a (very) few people
>> even seem to be using it!
>>
>> Sharing is via iTunes.
>
>Thanks for the tip! I see your description is "Use the Postscript(tm)
>language to express your ideas and see the results on your iPhone.
>Transfer your creations to your computer via iTunes sharing as either
>PNG or Postscript documents."
>
>It is likely that the reviewers considered that "Postscript documents"
>means a text file (like a .pdf or .doc). The user who gave you a bad
>review certainly did (another user corrected him/her). So this doesn't
>tell us what Apple would do with a language that allows you to share
>programs.
>
>David Harris wrote on Thu, 15 Mar 2012 08:35:06 -0700 about
>Wolfram|Alpha mobile
>
>Thanks, but that is exactly what I was calling "just a terminal" and "a
>waste".
>
>-- Jecel
>


Re: [fonc] Dynabook ideas

2012-03-15 Thread Alan Kay
The other two physical ideas were

-- via head mounted display (in glasses a la Ivan Sutherland's goggles -- ca. 
1968 -- but invisible) 


-- as embodied in the environment (a la Nicholas Negroponte's and Dick Bolt's 
"Dataland" and "Spatial Data Management System" of the 70s).

In the late 60s, many of us thought that it might be easier to do the tiny flat 
panels needed for a HMD than to make the big ones needed for the tablet form 
factor. But in fact no one with development funds in the US was interested in 
flat panel displays at that time. All we had were the main patents and 
knowledge for all the subsequent development work of each of the technologies 
required (liquid crystal, plasma, particle migration, thin film, amorphous 
semiconductors, etc.)

Cheers,

Alan




>
> From: Loup Vaillant 
>To: fonc@vpri.org 
>Sent: Thursday, March 15, 2012 3:59 PM
>Subject: [fonc] Dynabook ideas
> 
>Le 15/03/2012 00:44, Alan Kay a écrit :
>
>> To me the Dynabook has always been 95% a "service model" and 5% physical
>> specs (there were three main physical ideas for it, only one was the
>> tablet).
>
>Err, what those ideas were?  I have seen videos of you presenting it,
>but I can't see more than a tablet with a keyboard and a touch screen
>—wait, are the keyboard and the touch screen the other two ideas?
>
>Loup.


Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-15 Thread Alan Kay
It's in the book "Semantic Information Processing" that Marvin Minsky put 
together in the mid 60s. I will get it scanned and send around (it is paired 
with the even more classic "Advice Taker" paper that led to Lisp ...).

Cheers,

Alan




>
> From: Wesley Smith 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Thursday, March 15, 2012 10:13 AM
>Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
> 
>On Thu, Mar 15, 2012 at 5:23 AM, Alan Kay  wrote:
>> You don't want to use assert because it doesn't get undone during
>> backtracking. Look at the Alex Warth et al "Worlds" paper on the Viewpoints
>> site to see a better way to do this. (This is an outgrowth of the "labeled
>> situations" idea of McCarthy in 1963.)
>
>I found a reference to this paper in
>http://arxiv.org/pdf/1201.2430.pdf .  Looks like it's a paper called
>"Situations, actions and causal laws".  I'm not able to find any
>PDF/online version.  Anyone know how to get ahold of this document?
>
>thanks,
>wes


Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-15 Thread Alan Kay
Sure ... (and this is what Lisp-70 did also ... and a number of systems before 
it, etc.)

Cheers,

Alan




>
> From: Peter C. Marks 
>To: Fundamentals of New Computing  
>Sent: Thursday, March 15, 2012 6:09 AM
>Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
> 
>
>
>WRT: patterns, I wonder if the list is aware of the work by Barry Jay with his 
>Pattern Calculus, wherein he introduces patterns as first class citizens at 
>the lambda level. 
>
>
>Peter
>
>
>
>On Thu, Mar 15, 2012 at 8:23 AM, Alan Kay  wrote:
>
>You don't want to use assert because it doesn't get undone during 
>backtracking. Look at the Alex Warth et al "Worlds" paper on the Viewpoints 
>site to see a better way to do this. (This is an outgrowth of the "labeled 
>situations" idea of McCarthy in 1963.)
>>
>>
>>Cheers,
>>
>>
>>Alan
>>
>>
>>
>>
>>>
>>> From: Ryan Mitchley 
>>>
>>>To: Fundamentals of New Computing  
>>>Sent: Thursday, March 15, 2012 5:02 AM
>>>Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
>>> 
>>>
>>>On 15/03/2012 13:01, Ryan Mitchley wrote:
>>>>  It still doesn't fit well with a procedural model, in common with Prolog, 
>>>>though.
>>>> 
>>>> 
>>>
>>>Although, it has to be said that a procedural approach can be faked with a 
>>>combination of assertion and forward chaining.
>>>
>>>e.g.
>>>
>>>IsASquare(X, Y) iff line(X, blah), angle(blahblah) etc.
>>>assert IsASquare(100, 200).
>>>(System goes ahead and forward chains all of the subgoals, asserting facts 
>>>and creating a square as specified. Excuse the made-up syntax.)
>>>
>>>Forward chaining doesn't come standard with micro-PROLOG (or Prolog), but 
>>>can be added.
>>>
>>>
>>>Disclaimer: http://www.peralex.com/disclaimer.html
>>>
>>>


Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-15 Thread Alan Kay
You don't want to use assert because it doesn't get undone during backtracking. 
Look at the Alex Warth et al "Worlds" paper on the Viewpoints site to see a 
better way to do this. (This is an outgrowth of the "labeled situations" idea 
of McCarthy in 1963.)
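A sketch of why a raw assert misbehaves, and of the classic trail-based remedy (illustrative only, with invented names -- the Worlds paper does something more general): each assertion records how to undo itself, and a failed branch unwinds its portion of the trail.

```javascript
// A raw `assert` would leave its fact behind even after the branch that
// asserted it fails. A trail records an undo action per assertion so the
// search can retreat cleanly past a failed choice point.
const db = new Set();
const trail = [];

function assertFact(fact) {
  db.add(fact);
  trail.push(() => db.delete(fact)); // remember how to undo this assertion
}

function withChoicePoint(attempt) {
  const mark = trail.length;
  const ok = attempt();
  if (!ok)                            // branch failed: unwind its assertions
    while (trail.length > mark) trail.pop()();
  return ok;
}

withChoicePoint(() => {
  assertFact("square(100,200)");
  return false;                       // pretend a later subgoal failed
});

console.log(db.has("square(100,200)")); // false: assertion was undone
```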

Cheers,

Alan




>
> From: Ryan Mitchley 
>To: Fundamentals of New Computing  
>Sent: Thursday, March 15, 2012 5:02 AM
>Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
> 
>On 15/03/2012 13:01, Ryan Mitchley wrote:
>>  It still doesn't fit well with a procedural model, in common with Prolog, 
>>though.
>> 
>> 
>
>Although, it has to be said that a procedural approach can be faked with a 
>combination of assertion and forward chaining.
>
>e.g.
>
>IsASquare(X, Y) iff line(X, blah), angle(blahblah) etc.
>assert IsASquare(100, 200).
>(System goes ahead and forward chains all of the subgoals, asserting facts and 
>creating a square as specified. Excuse the made-up syntax.)
>
>Forward chaining doesn't come standard with micro-PROLOG (or Prolog), but can 
>be added.
>
>
>Disclaimer: http://www.peralex.com/disclaimer.html
>
>


Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-15 Thread Alan Kay
Alex Warth did both a standard Prolog and an English-based one, using OMeta, in 
both Javascript and Smalltalk.

Again, why just go with something that happens to be around? Why not try to 
make a language that fits the users and the goals?

A stronger version of this kind of language is Datalog, especially the "Datalog 
+ Time" language -- called Daedalus -- used in the BOOM project at Berkeley.
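Plain Datalog's bottom-up character (without the time extension that Daedalus adds) can be sketched as a naive fixpoint loop -- a toy, hard-coded here to the transitive-closure rule:

```javascript
// Datalog flavor: rules are applied bottom-up to a fact set until nothing
// new can be derived (a fixpoint) -- no assert, no explicit control flow.
// Hard-coded rule:  path(X,Z) :- path(X,Y), path(Y,Z).
function transitiveClosure(edges) {
  const facts = new Set(edges.map(e => JSON.stringify(e)));
  let changed = true;
  while (changed) {                   // iterate to fixpoint
    changed = false;
    for (const a of [...facts]) {
      for (const b of [...facts]) {
        const [x, y] = JSON.parse(a), [y2, z] = JSON.parse(b);
        const derived = JSON.stringify([x, z]);
        if (y === y2 && !facts.has(derived)) {
          facts.add(derived);
          changed = true;
        }
      }
    }
  }
  return [...facts].map(f => JSON.parse(f));
}

const paths = transitiveClosure([["a", "b"], ["b", "c"], ["c", "d"]]);
console.log(paths.length); // 6: the 3 edges plus a->c, b->d, a->d
```

The monotone, declarative character of this loop is what makes Datalog variants attractive as a cleaned-up logic core.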


Cheers,

Alan




>
> From: Ryan Mitchley 
>To: Fundamentals of New Computing  
>Sent: Thursday, March 15, 2012 4:01 AM
>Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
> 
>I wonder if micro-PROLOG isn't worth revisiting by someone:
>
>ftp://ftp.worldofspectrum.org/pub/sinclair/games-info/m/Micro-PROLOGPrimer.pdf
>
>You get pattern matching, backtracking and a "nicer" syntax than Prolog. It's 
>easy enough to extend with IsA and notions of classes of objects. It still 
>doesn't fit well with a procedural model, in common with Prolog, though.
>


Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-14 Thread Alan Kay
Well, it was very much a "mythical beast" even on paper -- and you really have 
to implement programming languages and make a lot of things with them to be 
able to assess them 


But -- basically -- since meeting Seymour and starting to think about children 
and programming, there were eight systems that I thought were really nifty and 
cried out to be unified somehow:
  1. Joss
  2. Lisp
  3. Logo -- which was originally a unification of Joss and Lisp (but I thought 
more could be done in this direction).
  4. Planner -- a big set of ideas (long before Prolog) by Carl Hewitt for 
logic programming and "pattern directed inference", both forward and backwards 
with backtracking
  5. Meta II -- a super simple meta parser and compiler done by Val Schorre at 
UCLA ca 1963
  6. IMP -- perhaps the first real extensible language that worked well -- by 
Ned Irons (CACM, Jan 1970)

  7. The Lisp-70 Pattern Matching System -- by Larry Tesler, et al, with some 
design ideas by me

  8. The object and pattern directed extension stuff I'd been doing previously 
with the Flex Machine and afterwards at SAIL (that also was influenced by Meta 
II)


One of the schemes was to really make the pattern matching parts of this "work 
for everything" that eventually required "invocations and binding". This was 
doable semantically but was a bear syntactically because of the different 
senses of what kinds of matching and binding were intended for different 
problems. This messed up the readability and desired "simple things should be 
simple".

Examples I wanted to cover included simple translations of languages (English 
to Pig Latin, English to French, etc. some of these had been done in Logo), the 
Winograd robot block stacking and other examples done with Planner, the making 
of the language the child was using, message sending and receiving, extensions 
to Smalltalk-71, and so forth.

I think today the way to try to do this would be with a much more graphical UI 
than with text -- one could imagine tiles that would specify what to match, and 
the details of the match could be submerged a bit.

More recently, both OMeta and several of Ian's matchers can handle multiple 
kinds of matching with binding and do backtracking, etc., so one could imagine 
a more general language that could be based on this.

On the other hand, trying to stuff 8 kinds of language ideas into one new 
language in a graceful way could be a siren's song of a goal.

Still 

Cheers,

Alan




>
> From: shaun gilchrist 
>To: fonc@vpri.org 
>Sent: Wednesday, March 14, 2012 11:38 AM
>Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
> 
>
>Alan, 
>
>"I would go way back to the never implemented Smalltalk-71"
>
>Is there a formal specification of what 71 should have been? I have only ever 
>read about it in passing reference in the various histories of smalltalk as a 
>step on the way to 72, 76, and finally 80. 
>
>I am very intrigued as to what sets 71 apart so dramatically. -Shaun
>
>
>On Wed, Mar 14, 2012 at 12:29 PM, Alan Kay  wrote:
>
>Hi Scott --
>>
>>
>>1. I will see if I can get one of these scanned for you. Moore tended to 
>>publish in journals and there is very little of his stuff available on line.
>>
>>
>>2.a. "if (a>hint of the former being tweaked for decades to make it easier to read.
>>
>>
>>Several experiments from the past cast doubt on the rest of the idea. At 
>>Disney we did a variety of "code display" generators to see what kinds of 
>>transformations we could do to the underlying Smalltalk (including syntactic) 
>>to make it something that could be subsetted as a "growable path from Etoys". 
>>
>>
>>
>>We got some good results from this (and this is what I'd do with Javascript 
>>in both directions -- Alex Warth's OMeta is in Javascript and is quite 
>>complete and could do this).
>>
>>
>>However, the showstopper was all the parentheses that had to be rendered in 
>>tiles. Mike Travers at MIT had done one of the first tile based editors for a 
>>version of Lisp that he used, and this was even worse.
>>
>>
>>More recently, Jens Moenig (who did SNAP) also did a direct renderer and 
>>editor for Squeak Smalltalk (this can be tried out) and it really seemed to 
>>be much too cluttered.
>>
>>
>>One argument for some of this, is "well, teach the kids a subset that doesn't 
>>use so many parens ...". This could be a solution.
>>
>>
>>However, in the end, I don't think Javascript semantics is particularly good 
>>for kids. For example, one of features of Etoys that turned o

Re: [fonc] Apple and hardware (was: Error trying to compile COLA)

2012-03-14 Thread Alan Kay
Hi Jecel

The CRISP was too slow, and had other problems in its details. Sakoman liked it 
...

Bill Atkinson did Hypercard ... Larry made many other contributions at Xerox 
and Apple

To me the Dynabook has always been 95% a "service model" and 5% physical specs 
(there were three main physical ideas for it, only one was the tablet).

Cheers,

Alan




>
> From: Jecel Assumpcao Jr. 
>To: Fundamentals of New Computing  
>Sent: Wednesday, March 14, 2012 3:55 PM
>Subject: Re: [fonc] Apple and hardware (was: Error trying to compile COLA)
> 
>Alan Kay wrote on Wed, 14 Mar 2012 11:36:30 -0700 (PDT)
>> Yep, I was there and trying to get the Newton project off the awful ATT chip
>> they had first chosen.
>
>Interesting - a few months ago I studied the datasheets for the Hobbit
>and read all the old CRISP papers and found this chip rather cute. It is
>even more C centric than RISCs (specially the ARM) so might not be a
>good choice for other languages. Another project that started out using
>this and then had to switch (to the PowerPC) was the BeBox. In the link
>I give below it says both projects were done by the same people (Jean
>Louis Gassée and Steve Sakoman), so in a way it was really just one
>project that used the chip.
>
>> Larry Tesler (who worked with us at PARC) finally wound up taking over this
>> project and doing a number of much better things with it.
>
>He was also responsible for giving us Hypercard, right?
>
>> Overall what happened with Newton was too bad -- it could have been much
>> better -- but there were many too many different opinions and power bases
>> involved.
>
>This looks like a reasonable history of the Newton project (though some
>parts that I know aren't quite right, so I can't guess how accurate the
>parts I didn't know are):
>
>http://lowendmac.com/orchard/06/john-sculley-newton-origin.html
>
>It doesn't mention NewtonScript nor Object Soups. I have never used it
>myself, only read about it and seen some demos. But my impression is
>that this was the closest thing we have had to the dynabook yet.
>
>> If you have a good version of confinement (which is pretty simple HW-wise) 
>> you
>> can use Butler Lampson's schemes for Cal-TSS to make a workable version of a
>> capability system.
>
>The 286 protected mode was good enough for this, and was extended in the
>386. I am not sure all modern x86 processors still implement these, and
>if they do it is likely that actually using them will hurt performance
>so much that it isn't an option in practice.
>
>> And, yep, I managed to get them to allow interpreters to run on the iPad, 
>> but was
>> not able to get Steve to countermand the "no sharing" rule.
>
>That is a pity, though at least having native languages makes these
>devices a reasonable replacement for my old Radio Shack PC-4 calculator.
>I noticed that neither Matlab nor Mathematica are available for the
>iPad, but only simple terminal apps that allow you to access these
>applications running on your PC. What a waste!
>
>-- Jecel
>
>___
>fonc mailing list
>fonc@vpri.org
>http://vpri.org/mailman/listinfo/fonc
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Talking Typwriter [was: Barbarians at the gate! (Project Nell)]

2012-03-14 Thread Alan Kay
You had to have a lot of moxie in the 60s to try to make Moore's ideas into 
real technology. It was amazing what they were able to do.

I wonder where this old junk is now? Should be in the Computer History Museum!

Cheers,

Alan




>
> From: Martin McClure 
>To: Fundamentals of New Computing  
>Cc: Viewpoints Research  
>Sent: Wednesday, March 14, 2012 11:26 AM
>Subject: Re: [fonc] Talking Typwriter [was:  Barbarians at the gate! (Project 
>Nell)]
> 
>On 03/14/2012 09:54 AM, Alan Kay wrote:
>> 
>> 1. Psychologist O.K. Moore in the early 60s at Yale and elsewhere
>> pioneered the idea of a "talking typewriter" to help children learn how
>> to read via learning to write. This was first a grad student in a closet
>> with a microphone simulating a smart machine -- but later the Edison
>> division of McGraw-Hill made a technology that did some of these things.
>
>Now that reference brings back some memories!
>
>As an undergrad I had a student job in the Computer Assisted Instruction
>lab. One day, a large pile of old parts arrived from somewhere, with no
>accompanying documentation, and I was told, "Put them together." It
>turned out to be two Edison talking typewriters. I got one fully
>functional; the other had a couple of minor parts missing. This was in
>late '77 or early '78, about the same time I was attempting
>(unsuccessfully) to learn something about Smalltalk.
>
>Regards,
>
>-Martin
>
>


Re: [fonc] Apple and hardware (was: Error trying to compile COLA)

2012-03-14 Thread Alan Kay
Yep, I was there and trying to get the Newton project off the awful AT&T chip 
they had first chosen. Larry Tesler (who worked with us at PARC) finally wound 
up taking over this project and doing a number of much better things with it. 
Overall what happened with Newton was too bad -- it could have been much better 
-- but there were many too many different opinions and power bases involved.

If you have a good version of confinement (which is pretty simple HW-wise) you 
can use Butler Lampson's schemes for Cal-TSS to make a workable version of a 
capability system.
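
The paragraph above pairs confinement with Lampson's Cal-TSS schemes to get a workable capability system. As a rough illustration only (this is not Cal-TSS itself, and every name below is invented for the sketch), a capability can be modeled as an unforgeable reference that bundles a resource with a fixed set of rights, so a confined process can touch only what it was explicitly handed:

```python
# Illustrative sketch of capability-style access control.
# Not Lampson's actual design; names and API are invented.

class Capability:
    """An unforgeable token granting specific rights to one resource."""
    def __init__(self, resource, rights):
        self._resource = resource
        self._rights = frozenset(rights)

    def invoke(self, op, *args):
        if op not in self._rights:
            raise PermissionError(f"capability lacks right: {op}")
        return getattr(self._resource, op)(*args)

    def attenuate(self, rights):
        """Derive a weaker capability: a subset of this one's rights."""
        if not set(rights) <= self._rights:
            raise PermissionError("cannot amplify rights")
        return Capability(self._resource, rights)

class File:
    def __init__(self, data=""):
        self.data = data
    def read(self):
        return self.data
    def write(self, s):
        self.data += s

# A confined process receives only an attenuated, read-only capability:
f = File("hello")
full = Capability(f, {"read", "write"})
read_only = full.attenuate({"read"})
assert read_only.invoke("read") == "hello"
try:
    read_only.invoke("write", "!")
except PermissionError:
    pass  # write is denied, as intended
```

The key property is that `attenuate` can only narrow rights, never widen them, so handing out a derived capability is always safe.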

And, yep, I managed to get them to allow interpreters to run on the iPad, but 
was not able to get Steve to countermand the "no sharing" rule.

Cheers,

Alan




>
> From: Jecel Assumpcao Jr. 
>To: Fundamentals of New Computing  
>Sent: Wednesday, March 14, 2012 9:17 AM
>Subject: [fonc] Apple and hardware (was: Error trying to compile COLA)
> 
>Alan Kay wrote on Wed, 14 Mar 2012 05:53:21 -0700 (PDT)
>> A hardware vendor with huge volumes (like Apple) should be able to get a CPU
>> vendor to make HW that offers real protection, and at a granularity that 
>> makes
>> more systems sense.
>
>They did just that when they founded ARM Ltd (with Acorn and VTI): the
>most significant change from the ARM3 to the ARM6 was a new MMU with a
>more fine-grained protection mechanism which was specially designed for
>the Newton OS. No other system used it and though I haven't checked, I
>wouldn't be surprised if this feature was eliminated from more recent
>versions of ARM.
>
>Compared to a real capability system (like the Intel iAPX432/BiiN/960XA
>or the IBM AS/400) it was a rather awkward solution, but at least they
>did make an effort.
>
>Having been created under Sculley, this technology did not survive Jobs'
>return.
>
>> But the main point here is that there are no technical reasons why a child 
>> should
>> be restricted from making an Etoys or Scratch project and sharing it with 
>> another
>> child on an iPad.
>> No matter what Apple says, the reasons clearly stem from strategies and 
>> tactics
>> of economic exclusion.
>> So I agree with Max that the iPad at present is really the anti-Dynabook
>
>They have changed their position a little. I have a "Hand Basic" on my
>iPhone which is compatible with the Commodore 64 Basic. I can write and
>save programs, but can't send them to another device or load new
>programs from the Internet. Except I can - there are applications for
>the iPhone that give you access to the filing system and let you
>exchange files with a PC or Mac. But that is beyond most users, which
>seems to be a good enough barrier from Apple's viewpoint.
>
>The same thing applies to this nice native development environment for
>Lua on the iPad:
>
>http://twolivesleft.com/Codea/
>
>You can program on the iPad/iPhone, but can't share.
>
>-- Jecel
>


Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-14 Thread Alan Kay
Hi Scott --

1. I will see if I can get one of these scanned for you. Moore tended to 
publish in journals and there is very little of his stuff available on line.

2.a. "if (a
> From: C. Scott Ananian 
>To: Alan Kay  
>Cc: IAEP SugarLabs ; Fundamentals of New Computing 
>; Viewpoints Research  
>Sent: Wednesday, March 14, 2012 10:25 AM
>Subject: Re: [IAEP] [fonc] Barbarians at the gate! (Project Nell)
> 
>
>On Wed, Mar 14, 2012 at 12:54 PM, Alan Kay  wrote:
>
>The many papers from this work greatly influenced the thinking about personal 
>computing at Xerox PARC in the 70s. Here are a couple:
>>
>>
>>-- O. K. Moore, Autotelic Responsive Environments and Exceptional Children, 
>>Experience, Structure and Adaptability (ed. Harvey), Springer, 1966
>>-- Anderson and Moore, Autotelic Folk Models, Sociological Quarterly, 1959
>>
>
>
>Thank you for these references.  I will chase them down and learn as much as I 
>can.
> 
>2. Separating out some of the programming ideas here:
>>
>>
>>a. Simplest one is that the most important users of this system are the 
>>children, so it would be a better idea to make the tile scripting look as 
>>easy for them as possible. I don't agree with the rationalization in the 
>>paper about "preserving the code reading skills of existing programmers".
>
>
>I probably need to clarify the reasoning in the paper for this point.
>
>
>"Traditional" text-based programming languages have been tweaked over decades 
>to be easy to read -- for both small examples and large systems.  It's 
>somewhat of a heresy, but I thought it would be interesting to explore a 
>tile-based system that *didn't* throw away the traditional text structure, and 
>tried simply to make the structure of the traditional text easier to visualize 
>and manipulate.
>
>
>So it's not really "skills of existing programmers" I'm interested in -- I 
>should reword that.  It's that I feel we have an existence proof that the 
>traditional textual form of a program is easy to read, even for very 
>complicated programs.  So I'm trying to scale down the thing that works, 
>instead of trying to invent something new which proves unwieldy at scale.
>
>
>b. Good idea to go all the way to the bottom with the children's language.
>>
>>
>>c. Figure 2 introduces another -- at least equally important language -- in 
>>my opinion, this one should be made kid usable and programmable -- and I 
>>would try to see how it could fit with the TS language in some way. 
>>
>
>
>This language is JSON, which is just the object-definition subset of 
>JavaScript.  So it can in fact be expressed with TurtleScript tiles.  
>(Although I haven't yet tackled quasiquote in TurtleScript.)
>
>
>d. There is another language -- AIML -- introduced for recognizing things. I 
>would use something much nicer, easier, more readable, etc., -- like OMeta -- 
>or more likely I would go way back to the never implemented Smalltalk-71 
>(which had these and some of the above features in its design and also tried 
>to be kid usable) -- and try to make a version that worked (maybe too hard to 
>do in general or for the scope of this project, but you can see why it would 
>be nice to have all of the mechanisms that make your system work be couched in 
>kid terms and looks and feels if possible).
>
>
>This I completely agree with.  The AIML will be translated to JSON on the 
>device itself.  The use of AIML is a compromise: it exists and has 
>well-defined semantics and does 90% of what I'd like it to do.  It also has an 
>active community who have spend a lot of time building reasonable dialog rules 
>in AIML.  At some point it will have to be extended or replaced, but I think 
>it will get me through version 1.0 at least.
> 
>I'll probably translate the AIML example to JSON in the next revision of the 
>paper, and state the relationship of JSON to JavaScript and TurtleScript more 
>precisely.
>
>
>3. It's out of the scope of your paper and these comments to discuss "getting 
>kids to add other structures besides stories and narrative to think with". You 
>have to start with stories, and that is enough for now. A larger scale plan 
>(you may already have) would involve a kind of weaning process to get kids to 
>add non-story thinking (as is done in math and science, etc.) to their skills. 
>This is a whole curriculum of its own.
>>
>>
>>
>>I make these comments because I think your project is a good idea, on the 
>>right track, and needs to be done
>
>
>Thank you.  I'll keep your encouragement in mind during the hard work of 
>implementation.
>  --scott
>
>-- 
>      ( http://cscott.net )
>
>


Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-14 Thread Alan Kay
Hi Scott

This seems like a plan that should be done and tried and carefully evaluated. I 
think the approach is good. It could be "not quite enough" to work, but it 
should give rise to a lot of useful information for further passes at this.


1. Psychologist O.K. Moore in the early 60s at Yale and elsewhere pioneered the 
idea of a "talking typewriter" to help children learn how to read via learning 
to write. This was first a grad student in a closet with a microphone 
simulating a smart machine -- but later the Edison division of McGraw-Hill made 
a technology that did some of these things. 


The significance of Moore's work is that he really thought things through, both 
with respect to what such a curriculum might be, but also to the nature of the 
whole environment made for the child. 


He first defined a *responsive environment* as one that:
a.   permits learners to explore freely
b.   informs learners immediately about the consequences of their actions
c.   is self-pacing, i.e. events happen within the environment at a rate 
determined by the learner
d.  permits the learners to make full use of their capacities to discover 
relations of various kinds
e.   has a structure such that learners are likely to make a series of 
interconnected discoveries about the physical, cultural or social world


He called a responsive environment: “*autotelic*, if engaging in it is done for 
its own sake rather than for obtaining rewards or avoiding punishments that 
have no inherent connection with the activity itself”. By “discovery” he meant 
“gently guided discovery” in the sense of Montessori, Vygotsky, Bruner and 
Papert (i.e. recognizing that it is very difficult for human beings to come up 
with good ideas from scratch—hence the need for forms of guidance—but that 
things are learned best if the learner puts in the effort to make the final 
connections themselves—hence the need for forms of discovery).

The many papers from this work greatly influenced the thinking about personal 
computing at Xerox PARC in the 70s. Here are a couple:

-- O. K. Moore, Autotelic Responsive Environments and Exceptional Children, 
Experience, Structure and Adaptability (ed. Harvey), Springer, 1966 
-- Anderson and Moore, Autotelic Folk Models, Sociological Quarterly, 1959

2. Separating out some of the programming ideas here:

a. Simplest one is that the most important users of this system are the 
children, so it would be a better idea to make the tile scripting look as easy 
for them as possible. I don't agree with the rationalization in the paper about 
"preserving the code reading skills of existing programmers".

b. Good idea to go all the way to the bottom with the children's language.

c. Figure 2 introduces another -- at least equally important language -- in my 
opinion, this one should be made kid usable and programmable -- and I would try 
to see how it could fit with the TS language in some way. 


d. There is another language -- AIML -- introduced for recognizing things. I 
would use something much nicer, easier, more readable, etc., -- like OMeta -- 
or more likely I would go way back to the never implemented Smalltalk-71 (which 
had these and some of the above features in its design and also tried to be kid 
usable) -- and try to make a version that worked (maybe too hard to do in 
general or for the scope of this project, but you can see why it would be nice 
to have all of the mechanisms that make your system work be couched in kid 
terms and looks and feels if possible).

3. It's out of the scope of your paper and these comments to discuss "getting 
kids to add other structures besides stories and narrative to think with". You 
have to start with stories, and that is enough for now. A larger scale plan 
(you may already have) would involve a kind of weaning process to get kids to 
add non-story thinking (as is done in math and science, etc.) to their skills. 
This is a whole curriculum of its own.


I make these comments because I think your project is a good idea, on the right 
track, and needs to be done

Best wishes

Alan




>
> From: C. Scott Ananian 
>To: IAEP SugarLabs  
>Sent: Tuesday, March 13, 2012 4:07 PM
>Subject: [IAEP] Barbarians at the gate! (Project Nell)
> 
>
>I read the following today:
>
>
>"A healthy [project] is, confusingly, one at odds with itself. There is a 
>healthy part which is attempting to normalize and to create predictability, 
>and there needs to be another part that is tasked with building something new 
>that is going to disrupt and eventually destroy that normality." 
>(http://www.randsinrepose.com/archives/2012/03/13/hacking_is_important.html)
>
>
>So, in this vein, I'd like to encourage Sugar-folk to read the short paper 
>Chris Ball, Michael Stone, and I just submitted (to IDC 2012) on Nell, our 
>design for XO-3 software for the reading project:
>
>
>     http://cscott.net/Publications/OLPC/idc2012.pdf
>
>
>You're expected not to like it: t

Re: [fonc] Error trying to compile COLA

2012-03-14 Thread Alan Kay
provenance, however I don't 
>see a future in which non-trivial unsigned code is generally exchanged.  This 
>is the beginning of a necessary trend.  I'd love to hear how I'm wrong about 
>this.
>
>
>My suspicion is that for the most part, Apple's current set up is as locked 
>down as it's ever going to be, and that over time the signing system will be 
>extended to allow more fine grained human relationships to be expressed.  
>
>
>For example at the moment, as an iOS developer, I can allow different apps 
>that I write to access the same shared data via iCloud.  That makes sense 
>because I am solely responsible for making sure that the apps share a common 
>understanding of the meaning of the data, and Apple's APIs permit multiple 
>independent processes to coordinate access to the same file.  
>
>
>I am curious to see how Apple plans to make it possible for different 
>developers to share data.  Will this be done by a network of cryptographic 
>permissions between apps?
>
>
>>-- Max
>>
>>
>>On Tue, Mar 13, 2012 at 9:28 AM, Mack  wrote:
>>
>>For better or worse, both Apple and Microsoft (via Windows 8) are attempting 
>>to rectify this via the "Terms and Conditions" route.
>>>
>>>
>>>It's been announced that both Windows 8 and OSX Mountain Lion will require 
>>>applications to be installed via download thru their respective "App Stores" 
>>>in order to obtain certification required for the OS to allow them access to 
>>>features (like an installed camera, or the network) that are outside the 
>>>default application sandbox.  
>>>
>>>
>>>The acceptance of the App Store model for the iPhone/iPad has persuaded them 
>>>that this will be (commercially) viable as a model for general public 
>>>distribution of trustable software.
>>>
>>>
>>>In that world, the Squeak plugin could be certified as safe to download in a 
>>>way that System Admins might believe.
>>>
>>>
>>>
>>>On Feb 29, 2012, at 3:09 PM, Alan Kay wrote:
>>>
>>>Windows (especially) is so porous that SysAdmins (especially in school 
>>>districts) will not allow teachers to download .exe files. This wipes out 
>>>the Squeak plugin that provides all the functionality.
>>>>
>>>>
>>>>But there is still the browser and Javascript. But Javascript isn't fast 
>>>>enough to do the particle system. But why can't we just download the 
>>>>particle system and run it in a safe address space? The browser people 
>>>>don't yet understand that this is what they should have allowed in the 
>>>>first place. So right now there is only one route for this (and a few years 
>>>>ago there were none) -- and that is Native Client on Google Chrome. 
>>>>
>>>>
>>>>
>>>> But Google Chrome is only 13% penetrated, and the other browser fiefdoms 
>>>>don't like NaCl. Google Chrome is an .exe file so teachers can't 
>>>>download it (and if they could, they could download the Etoys plugin).
>>>>
>>>


Re: [fonc] Error trying to compile COLA

2012-03-13 Thread Alan Kay
But we haven't wanted to program in Smalltalk for a long time.

This is a crazy non-solution (and is so on the iPad already)

No one should have to work around someone else's bad designs and 
implementations ...


Cheers,

Alan




>
> From: Mack 
>To: Fundamentals of New Computing  
>Sent: Tuesday, March 13, 2012 9:28 AM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>
>For better or worse, both Apple and Microsoft (via Windows 8) are attempting 
>to rectify this via the "Terms and Conditions" route.
>
>
>It's been announced that both Windows 8 and OSX Mountain Lion will require 
>applications to be installed via download thru their respective "App Stores" 
>in order to obtain certification required for the OS to allow them access to 
>features (like an installed camera, or the network) that are outside the 
>default application sandbox.  
>
>
>The acceptance of the App Store model for the iPhone/iPad has persuaded them 
>that this will be (commercially) viable as a model for general public 
>distribution of trustable software.
>
>
>In that world, the Squeak plugin could be certified as safe to download in a 
>way that System Admins might believe.
>
>
>
>On Feb 29, 2012, at 3:09 PM, Alan Kay wrote:
>
>Windows (especially) is so porous that SysAdmins (especially in school 
>districts) will not allow teachers to download .exe files. This wipes out the 
>Squeak plugin that provides all the functionality.
>>
>>
>>But there is still the browser and Javascript. But Javascript isn't fast 
>>enough to do the particle system. But why can't we just download the particle 
>>system and run it in a safe address space? The browser people don't yet 
>>understand that this is what they should have allowed in the first place. So 
>>right now there is only one route for this (and a few years ago there were 
>>none) -- and that is Native Client on Google Chrome. 
>>
>>
>>
>> But Google Chrome is only 13% penetrated, and the other browser fiefdoms 
>>don't like NaCl. Google Chrome is an .exe file so teachers can't download 
>>it (and if they could, they could download the Etoys plugin).
>>
>


[fonc] Chrome Penetration

2012-03-01 Thread Alan Kay
My friend Peter Norvig is the Director of Research at Google. 


I told him that I had heard of an "astounding jump" in the penetration of 
Chrome.

He says the best numbers they have at present is that Chrome is "20% to 30% 
penetrated" ...

Cheers,

Alan


Re: [fonc] Sorting the WWW mess

2012-03-01 Thread Alan Kay
Hi Loup

Someone else said that about links.

Browsing is about either knowing where you are (and going) and/or about dealing 
with a rough max of 100 items. After that, search is necessary.

However, Ted Nelson said a lot in each of the last 5 decades about what kinds 
of linking do the most good. (Chase down what he has to say about why one-way 
links are not what should be done.) He advocated from the beginning that the 
"provenance" of links must be preserved (which also means that you cannot copy 
what is being pointed to without also copying its provenance). This allows a 
much better way to deal with all manner of usage, embeddings, etc. -- including 
both fair use and also various forms of micropayments and subscriptions.

One way to handle this requirement is via protection mechanisms that "real 
objects" can supply.
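
Nelson's requirement can be caricatured in a few lines: an excerpt is a value that cannot be constructed or copied without its provenance coming along. This is only an illustrative sketch under invented names (it is not Nelson's design, nor any real system's API):

```python
# Hypothetical sketch of provenance-preserving links.
# Every excerpt carries a pointer back to its source; copying
# the text necessarily copies the provenance with it.

from dataclasses import dataclass

@dataclass(frozen=True)
class Provenance:
    author: str
    source_id: str

@dataclass(frozen=True)
class Excerpt:
    text: str
    provenance: Provenance

def quote(excerpt: Excerpt) -> Excerpt:
    # The only way to copy an excerpt; provenance travels with it.
    return Excerpt(excerpt.text, excerpt.provenance)

original = Excerpt("As We May Think...", Provenance("vbush", "doc-001"))
copy = quote(original)
assert copy.provenance == original.provenance  # provenance preserved
```

Because the types are frozen, there is no supported way to produce a provenance-free copy, which is the property that would let usage tracking, fair use, and micropayment schemes work downstream.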

Cheers,

Alan




>
> From: Loup Vaillant 
>To: fonc@vpri.org 
>Sent: Thursday, March 1, 2012 6:36 AM
>Subject: Re: [fonc] Sorting the WWW mess
> 
>Martin Baldan wrote:
>> That said, I don't see why you have an issue with search engines and
>> search services. Even on your own machine, searching files with complex
>> properties is far from trivial. When outside, untrusted sources are
>> involved, you need someone to tell you what is relevant, what is not,
>> who is lying, and so on. Google got to dominate that niche for the right
>> reasons, namely, being much better than the competition.
>
>I wasn't clear.  Actually, I didn't want to state my opinion.  I can't
>find the message, but I (incorrectly?) remembered Alan saying that
>one-way links basically created the need for big search engines.  As I
>couldn't imagine an architecture that could do away with centralized
>search engines, I wanted to ask about it.
>
>That said, I do have issues with Big Data search engines: they are
>centralized.  That alone gives them more power than I'd like them to
>have.  If we could remove the centralization while keeping the good
>stuff (namely, finding things), that would be really cool.
>
>Loup.


Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Alan Kay
Hi Duncan

The short answers to these questions have already been given a few times on 
this list. But let me try another direction to approach this.

The first thing to notice about the overlapping windows interface "personal 
computer experience" is that it is logically independent of the code/processes 
running underneath. This means (a) you don't have to have a single religion 
"down below" (b) the different kinds of things that might be running can be 
protected from each other using the address space mechanisms of the CPU(s), and 
(c) you can think about allowing "outsiders" to do pretty much what they want 
to create a really scalable really expandable WWW.

If you are going to put a "browser app" on an "OS", then the "browser" has to 
be a mini-OS, not an app. 


But "standard apps" are a bad idea (we thought we'd gotten rid of them in the 
70s) because what you really want to do is to integrate functionality visually 
and operationally using the overlapping windows interface, which can safely get 
images from the processes and composite them on the screen. (Everything is now 
kind of "super-desktop-publishing".) An "app" is now just a kind of integration.

But the route that was actually taken with the WWW and the browser was in the 
face of what was already being done.

Hypercard existed, and showed what a WYSIWYG authoring system for end-users 
could do. This was ignored.

Postscript existed, and showed that a small interpreter could be moved easily 
from machine to machine while retaining meaning. This was ignored.

And so forth.

19 years later we see various attempts at inventing things that were already 
around when the WWW was tacked together.

But the thing that is amazing to me is that in spite of the almost universal 
deployment of it, it still can't do what you can do on any of the machines it 
runs on. And there have been very few complaints about this from the mostly 
naive end-users (and what seem to be mostly naive computer folks who deal with 
it).

Some of the blame should go to Apple and MS for not making real OSs for 
personal computers -- or better, going the distance to make something better 
than the old OS model. In either case both companies blew doing basic 
protections between processes. 


On the other hand, the WWW and first browsers were originally done on 
workstations that had stronger systems underneath -- so why were they so blind?


As an aside I should mention that there have been a number of attempts to do 
something about "OS bloat". Unix was always "too little too late" but its one 
outstanding feature early on was its tiny kernel with a design that wanted 
everything else to be done in "user-mode-code". Many good things could have 
come from the later programmers of this system realizing that being careful 
about dependencies is a top priority. (And you especially do not want to have 
your dependencies handled by a central monolith, etc.)


So, this gradually turned into an awful mess. But Linus went back to square one 
and redefined a tiny kernel again -- the realization here is that you do have 
to arbitrate basic resources of memory and process management, but you should 
allow everyone else to make the systems they need. This really can work well if 
processes can be small and interprocess communication fast (not the way Intel 
and Motorola saw it ...).


And I've also mentioned Popek's LOCUS system as a nice model for migrating 
processes over a network. It was Unix only, but there was nothing about his 
design that required this.

Cutting to the chase with a current day example. We made Etoys 15 years ago so 
children could learn about math, science, systems, etc. It has a particle 
system that allows many interesting things to be explored.

Windows (especially) is so porous that SysAdmins (especially in school 
districts) will not allow teachers to download .exe files. This wipes out the 
Squeak plugin that provides all the functionality.

But there is still the browser and Javascript. But Javascript isn't fast enough 
to do the particle system. But why can't we just download the particle system 
and run it in a safe address space? The browser people don't yet understand 
that this is what they should have allowed in the first place. So right now 
there is only one route for this (and a few years ago there were none) -- and 
that is Native Client on Google Chrome. 


 But Google Chrome is only 13% penetrated, and the other browser fiefdoms don't 
like NaCl. Google Chrome is an .exe file so teachers can't download it (and 
if they could, they could download the Etoys plugin).

Just in from browserland ... there is now -- 19 years later -- an allowed route 
to put samples in your machine's sound buffer that works on some of the 
browsers.

Holy cow folks!

Alan






>__

Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Alan Kay
I think it is domain dependent -- for example, it is very helpful to have a 
debugger of some kind for a parser, but less so for a projection language like 
Nile. On the other hand, debuggers for making both of these systems are very 
helpful. Etoys doesn't have a debugger because the important state is mostly 
visible in the form of graphical objects. OTOH, having a capturing tracer (a la 
EXDAMS) could be nice for both reviewing and understanding complex interactions 
and also dealing with "unrepeatable events".

The topic of going from an idea for a useful POL to an actually mission usable 
POL is prime thesis territory.


Cheers,

Alan




>
> From: Loup Vaillant 
>To: fonc@vpri.org 
>Sent: Wednesday, February 29, 2012 5:43 AM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>Yes, I'm aware of that limitation.  I have the feeling however that
>IDEs and debuggers are overrated.  Sure, when dealing with a complex
>program in a complex language (say, tens of thousands of lines in C++),
>then sure, IDEs and debuggers are a must.  But I'm not sure their
>absence outweigh the simplicity potentially achieved with POLs. (I
>mean, I really don't know.  It could even be domain-dependent.)
>
>I agree however that having both (POLs + tools) would be much better,
>and is definitely worth pursuing.  I'll think about it.
>
>Loup.
>
>
>
>Alan Kay wrote:
>> With regard to your last point -- making POLs -- I don't think we are
>> there yet. It is most definitely a lot easier to make really powerful
>> POLs fairly quickly than it used to be, but we still don't have a nice
>> methodology and tools to automatically supply the IDE, debuggers, etc.
>> that need to be there for industrial-strength use.
>>
>> Cheers,
>>
>> Alan
>>
>>     *From:* Loup Vaillant 
>>     *To:* fonc@vpri.org
>>     *Sent:* Wednesday, February 29, 2012 1:27 AM
>>     *Subject:* Re: [fonc] Error trying to compile COLA
>>
>>     Alan Kay wrote:
>>      > Hi Loup
>>      >
>>      > Very good question -- and tell your Boss he should support you!
>>
>>     Cool, thank you for your support.
>>
>>
>>      > […] One general argument is
>>      > that "non-machine-code" languages are POLs of a weak sort, but
>>     are more
>>      > effective than writing machine code for most problems. (This was
>>     quite
>>      > controversial 50 years ago -- and lots of bosses forbade using any
>>      > higher level language.)
>>
>>     I hadn't thought about this historical perspective. I'll keep that in
>>     mind, thanks.
>>
>>
>>      > Companies (and programmers within) are rarely rewarded for saving
>>     costs
>>      > over the real lifetime of a piece of software […]
>>
>>     I think my company is. We make custom software, and most of the time
>>     also get to maintain it. Of course, we charge for both. So, when we
>>     manage to keep the maintenance cheap (less bugs, simpler code…), we win.
>>
>>     However, we barely acknowledge it: much code I see is a technical debt
>>     waiting to be paid, because the original implementer wasn't given the
>>     time to do even a simple cleanup.
>>
>>
>>      > An argument that resonates with some bosses is the "debuggable
>>      > requirements/specifications -> ship the prototype and improve it"
>>     whose
>>      > benefits show up early on.
>>
>>     But of course. I should have thought about it, thanks.
>>
>>
>>      > […] some of the most important POLs to be worked on are
>>      > the ones that are for making POLs quickly.
>>
>>     This is why I am totally thrilled by Ometa and Maru. I use them to point
>>     out that programming languages can be much cheaper to implement than
>>     most think they are. It is difficult however to get past the idea that
>>     implementing a language (even a small, specialized one) is by default a
>>     huge undertaking.
>>
>>     Cheers,
>>     Loup.
>>     ___
>>     fonc mailing list
>>    fonc@vpri.org <mailto:fonc@vpri.org>
>>     http://vpri.org/mailman/listinfo/fonc
>>
>>
>>
>>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Alan Kay
With regard to your last point -- making POLs -- I don't think we are there 
yet. It is most definitely a lot easier to make really powerful POLs fairly 
quickly  than it used to be, but we still don't have a nice methodology and 
tools to automatically supply the IDE, debuggers, etc. that need to be there 
for industrial-strength use.

Cheers,

Alan




>
> From: Loup Vaillant 
>To: fonc@vpri.org 
>Sent: Wednesday, February 29, 2012 1:27 AM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>Alan Kay wrote:
>> Hi Loup
>>
>> Very good question -- and tell your Boss he should support you!
>
>Cool, thank you for your support.
>
>
>> […] One general argument is
>> that "non-machine-code" languages are POLs of a weak sort, but are more
>> effective than writing machine code for most problems. (This was quite
>> controversial 50 years ago -- and lots of bosses forbade using any
>> higher level language.)
>
>I didn't think about this historical perspective. I'll keep that in
>mind, thanks.
>
>
>> Companies (and programmers within) are rarely rewarded for saving costs
>> over the real lifetime of a piece of software […]
>
>I think my company is.  We make custom software, and most of the time
>also get to maintain it.  Of course, we charge for both.  So, when we
>manage to keep the maintenance cheap (fewer bugs, simpler code…), we win.
>
>However, we barely acknowledge it: much code I see is a technical debt
>waiting to be paid, because the original implementer wasn't given the
>time to do even a simple cleanup.
>
>
>> An argument that resonates with some bosses is the "debuggable
>> requirements/specifications -> ship the prototype and improve it" whose
>> benefits show up early on.
>
>But of course.  I should have thought about it, thanks.
>
>
>> […] some of the most important POLs to be worked on are
>> the ones that are for making POLs quickly.
>
>This is why I am totally thrilled by Ometa and Maru. I use them to point
>out that programming languages can be much cheaper to implement than
>most think they are.  It is difficult however to get past the idea that
>implementing a language (even a small, specialized one) is by default a
>huge undertaking.
>
>Cheers,
>Loup.
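[Editorial note] Loup's point that a language can be far cheaper to implement than most people assume is easy to make concrete. The following is only an illustrative sketch (mine, not taken from Ometa or Maru; the names `tokenize` and `parse` are invented): a complete tokenizer, recursive-descent parser, and evaluator for arithmetic expressions, in well under fifty lines of plain Python.

```python
import re

# One regex recognizes every token: integers and the operators + - * / ( ).
TOKEN = re.compile(r"\s*(\d+|[-+*/()])")

def tokenize(src):
    """Split an arithmetic expression into number and operator tokens."""
    tokens, pos = [], 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at position {pos}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

def parse(tokens):
    """Recursive descent with the usual precedence:
    expr := term (('+'|'-') term)* ; term := factor (('*'|'/') factor)*"""
    def expr(i):
        value, i = term(i)
        while i < len(tokens) and tokens[i] in "+-":
            op, (rhs, i) = tokens[i], term(i + 1)
            value = value + rhs if op == "+" else value - rhs
        return value, i
    def term(i):
        value, i = factor(i)
        while i < len(tokens) and tokens[i] in "*/":
            op, (rhs, i) = tokens[i], factor(i + 1)
            value = value * rhs if op == "*" else value / rhs
        return value, i
    def factor(i):
        if tokens[i] == "(":
            value, i = expr(i + 1)
            return value, i + 1  # skip the closing ')'
        return int(tokens[i]), i + 1
    value, i = expr(0)
    assert i == len(tokens), "trailing input"
    return value

print(parse(tokenize("2 + 3 * (4 - 1)")))  # -> 11
```

This evaluates during parsing; building an AST instead would add only a few lines. The point stands either way: a small, specialized language need not be a huge undertaking.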
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Yes, this is why the STEPS proposal was careful to avoid "the current day 
world". 


For example, one of the many current day standards that was dismissed 
immediately is the WWW (one could hardly imagine more of a mess). 


But the functionality plus more can be replaced in our "ideal world" with 
encapsulated confined migratory VMs ("Internet objects") as a kind of next 
version of Gerry Popek's LOCUS. 

The browser and other storage confusions are all replaced by the simple idea of 
separating out the safe objects from the various modes one uses to send and 
receive them. This covers files, email, web browsing, search engines, etc. What 
is left in this model is just a UI that can integrate the visual etc., outputs 
from the various encapsulated VMs, and send them events to react to. (The 
original browser folks missed that a scalable browser is more like a kernel OS 
than an App)

These are old ideas, but the vendors etc didn't get it ...


Cheers,

Alan




>
> From: Reuben Thomas 
>To: Fundamentals of New Computing  
>Sent: Tuesday, February 28, 2012 1:01 PM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>On 28 February 2012 20:51, Niklas Larsson  wrote:
>>
>> But Linux contains much more duplication than drivers only, it
>> supports many filesystems, many networking protocols, and many
>> architectures of which only a few of each are widely used. It also
>> contains a lot of complicated optimizations of operations that would
>> be unwanted in a simple, transparent OS.
>
>Absolutely. And many of these cannot be removed, because otherwise you
>cannot interoperate with the systems that use them. (A similar
>argument can be made for hardware if you want your OS to be widely
>usable, but the software argument is rather more difficult to avoid.)
>
>> Let's put a number on that: the first public
>> release of Linux, 0.01, contains 5929 lines in C files and 2484 in
>> header files. I'm sure that is far closer to what a minimal viable OS
>> is than what current Linux is.
>
>I'm not sure that counts as "viable".
>
>A portable system will always have to cope with a wide range of
>hardware. Alan has already pointed to a solution to this: devices that
>come with their own drivers. At the very least, it's not unreasonable
>to assume something like the old Windows model, where drivers are
>installed with the device, rather than shipped with the OS. So that
>percentage of code can indeed be removed.
>
>More troublingly, an interoperable system will always have to cope
>with a wide range of file formats, network protocols &c. As FoNC has
>demonstrated with TCP/IP, implementations of these can sometimes be made much
>smaller, but many formats and protocols will not be susceptible to
>reimplementation, for technical, legal or simple lack of interest.
>
>As far as redundancy in the Linux model, then, one is left with those
>parts of the system that can either be implemented with less code
>(hopefully, a lot of it), or where there is already duplication (e.g.
>schedulers).
>
>Unfortunately again, one person's "little-used architecture" is
>another's bread and butter (and since old architectures are purged
>from Linux, it's a reasonable bet that there are significant numbers
>of users of each supported architecture), and one person's
>"complicated optimization" is another's essential performance boost.
>It's precisely due to heavy optimization of the kernel and libc that
>the basic UNIX programming model has remained stable for so long in
>Linux, while still delivering the performance of advanced hardware
>undreamed-of when UNIX was designed.
>
>-- 
>http://rrt.sc3d.org
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
As I mentioned, Smalltalk-71 was never implemented -- and rarely mentioned (but 
it was part of the history of Smalltalk so I put in a few words about it).

If we had implemented it, we probably would have cleaned up the look of it, and 
also some of the conventions. 

You are right that part of it is like a term rewriting system, and part of it 
has state (object state).

to ... do ... is an operation. The match is on everything between to and do.

For example, the first line with "cons" in it does the "car" operation (which 
here is "hd").

The second line with "cons" in it does "replaca". The value of "hd" is being 
replaced by the value of "c". 
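[Editorial note] To make this pattern-directed style concrete: the factorial rules Jakob quotes below can be transliterated into a few lines of Python. This is only an analogy under my own assumptions (Smalltalk-71 was never implemented; `to`, `match`, and `do` here are invented names), showing how each "to <pattern> do/is <body>" rule is tried in turn and the first match fires.

```python
# Each rule is a (pattern, action) pair; a pattern element starting with ':'
# binds a variable, anything else must match the message literally.
rules = []

def to(pattern, action):
    """Register a rule, in the spirit of 'to ... do ...'."""
    rules.append((pattern, action))

def match(pattern, expr):
    """Return a variable binding if pattern matches expr, else None."""
    if len(pattern) != len(expr):
        return None
    env = {}
    for p, e in zip(pattern, expr):
        if isinstance(p, str) and p.startswith(":"):
            env[p[1:]] = e        # bind :var to the actual value
        elif p != e:
            return None           # literal mismatch
    return env

def do(expr):
    """Evaluate by trying rules in order; the first match fires."""
    for pattern, action in rules:
        env = match(pattern, expr)
        if env is not None:
            return action(env)
    raise ValueError(f"no rule matches {expr!r}")

# to 'factorial' 0 is 1
to(["factorial", 0], lambda env: 1)
# to 'factorial' :n do 'n * factorial n-1'
to(["factorial", ":n"], lambda env: env["n"] * do(["factorial", env["n"] - 1]))

print(do(["factorial", 3]))  # -> 6
```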

One of the struggles with this design was to try to make something almost as 
simple as LOGO, but that could do language extensions, simple AI backward 
chaining inferencing (like Winograd's block stacking problem), etc.

The turgid punctuations (as I mentioned in the history) were attempts to find 
ways to do many different kinds of matching.

So we were probably lucky that Smalltalk-72 came along. Its pattern 
matching was less general, but quite a bit could be done as far as driving an 
extensible interpreter with it.

However, some of these ideas were done better later. I think by Leler, and 
certainly by Joe Goguen, and others.

Cheers,

Alan


>
> From: Jakob Praher 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Tuesday, February 28, 2012 12:52 PM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>
>Dear Alan,
>
>Am 28.02.12 14:54, schrieb Alan Kay: 
>Hi Ryan
>>
>>
>>Check out Smalltalk-71, which was a design to do just what you suggest -- it 
>>was basically an attempt to combine some of my favorite languages of the time 
>>-- Logo and Lisp, Carl Hewitt's Planner, Lisp 70, etc.
do you have a detailed documentation of Smalltalk 71 somewhere? Something like 
a Smalltalk 71 for Smalltalk 80 programmers :-)
>In the Early History of Smalltalk you mention it as:
>> It was a kind of parser with object-attachment that executed tokens
>> directly.
>From the examples I think that "do 'expr'" evaluates expr by using a
>previous "to 'ident' :arg1..:argN " definition.
>
>As an example "do 'factorial 3'" should evaluate to 6 considering:
>
>to 'factorial' 0 is 1
>to 'factorial' :n do 'n*factorial n-1'
>
>What about arithmetic and precedence: what part of the language was built
>into the system?
>- :var denotes variables, whereas var denotes the instantiated value
>  of :var in the expr, e.g. :n vs 'n-1'
>- '' denotes simple tokens (in the head) as well as expressions
>  (in the body)?
>- to, do are keywords
>- () can be used for precedence
>
>You described evaluation as straightforward pattern-matching.
>It somehow reminds me of a term rewriting system - e.g. "'hd' ('cons'
>:a :b) '<-' :c" is a structured term.
>I know rewriting systems which first parse into an abstract
>representation (e.g. prefix form) and transform on the abstract
>syntax - whereas in Smalltalk 71 the concrete syntax seems to be
>used in the rules.
>
>Also it seems redundant to both have:
>to 'hd' ('cons' :a :b) do 'a'
>and
>to 'hd' ('cons' :a :b) '<-' :c do 'a <- c'
>
>Is this made to make sure that the left hand side of <- has to be
>a hd (cons :a :b) expression?
>
>Best,
>Jakob
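[Editorial note] The structured-term matching Jakob asks about can be sketched as follows. This is illustrative Python under my own assumptions, not Smalltalk-71 semantics: a nested pattern like ('cons' :a :b) recursively binds the parts of a compound term, which is exactly what lets "to 'hd' ('cons' :a :b) do 'a'" select the head of a cons cell.

```python
def match(pattern, term, env=None):
    """Recursively match `pattern` against `term`; ':x' binds a variable.
    Returns a binding dict on success, None on failure.
    (On failure, partial bindings are simply discarded by the caller.)"""
    env = {} if env is None else env
    if isinstance(pattern, str) and pattern.startswith(":"):
        env[pattern[1:]] = term          # :var matches anything and binds it
        return env
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        if len(pattern) != len(term):
            return None
        for p, t in zip(pattern, term):  # match component-wise, sharing env
            if match(p, t, env) is None:
                return None
        return env
    return env if pattern == term else None  # literal token ('hd', 'cons', ...)

# to 'hd' ('cons' :a :b) do 'a'  --  the pattern takes the term apart:
env = match(("hd", ("cons", ":a", ":b")), ("hd", ("cons", 1, (2, "nil"))))
print(env["a"])  # -> 1
```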
>
>
>
>>
>>This never got implemented because of "a bet" that turned into Smalltalk-72, 
>>which also did what you suggest, but in a less comprehensive way -- think of 
>>each object as a Lisp closure that could be sent a pointer to the message and 
>>could then parse-and-eval that. 
>>
>>
>>A key to scaling -- that we didn't try to do -- is "semantic typing" (which I 
>>think is discussed in some of the STEPS material) -- that is: to be able to 
>>characterize the meaning of what is needed and produced in terms of a 
>>description rather than a label. Looks like we won't get to that idea this 
>>time either.
>>
>>
>>Cheers,
>>
>>
>>Alan
>>
>>
>>
>>
>>>
>>> From: Ryan Mitchley 
>>>To: fonc@vpri.org 
>>>Sent: Tuesday, February 28, 2012 12:57 AM
>>>Subject: Re: [fonc] Error trying to compile COLA
>>> 
>>>
>>> 
>>>On 27/02/2012 19:48, Tony Ga

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Reuben

Yep. One of the many "finesses" in the STEPS project was to point out that 
requiring OSs to have drivers for everything misses what being networked is all 
about. In a nicer distributed systems design (such as Popek's LOCUS), one would 
get drivers from the devices automatically, and they would not be part of any 
OS code count. Apple even did this in the early days of the Mac for its own 
devices, but couldn't get enough other vendors to see why this was a really big 
idea.

Eventually the OS melts away to almost nothing (as it did at PARC in the 70s).

Then the question starts to become "how much code has to be written to make the 
various functional parts that will be semi-automatically integrated to make 
'vanilla personal computing' " ?


Cheers,

Alan




>
> From: Reuben Thomas 
>To: Fundamentals of New Computing  
>Sent: Tuesday, February 28, 2012 9:33 AM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>On 28 February 2012 16:41, BGB  wrote:
>>>
>>>  - 1 order of magnitude is gained by removing feature creep.  I agree
>>>   feature creep can be important.  But I also believe most features
>>>   belong to a long tail, where each is needed by a minority of users.
>>>   It does matter, but if the rest of the system is small enough,
>>>   adding the few features you need isn't so difficult any more.
>>>
>>
>> this could help some, but isn't likely to result in an order of magnitude.
>
>Example: in Linux 3.0.0, which has many drivers (and Linux is often
>cited as being "mostly drivers"), actually counting the code reveals
>about 55-60% in drivers (depending how you count). So that even with
>only one hardware configuration, you'd save less than 50% of the code
>size, i.e. a factor of 2 at very best.
>
>-- 
>http://rrt.sc3d.org
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Loup

Very good question -- and tell your Boss he should support you!

If your boss has a math or science background, this will be an easy sell 
because there are many nice analogies that hold, and also some good examples in 
computing itself.

The POL approach is generally good, but for a particular problem area could be 
as difficult as any other approach. One general argument is that 
"non-machine-code" languages are POLs of a weak sort, but are more effective 
than writing machine code for most problems. (This was quite controversial 50 
years ago -- and lots of bosses forbade using any higher level language.)

Four arguments against POLs are the difficulties of (a) designing them, (b) 
making them, (c) creating IDE etc tools for them, and (d) learning them. (These 
are similar to the arguments about using math and science in engineering, but 
are not completely bogus for a small subset of problems ...).

Companies (and programmers within) are rarely rewarded for saving costs over 
the real lifetime of a piece of software (similar problems exist in the climate 
problems we are facing). These are social problems, but part of real 
engineering. However, at some point life-cycle costs and savings will become 
something that is accounted and rewarded-or-dinged. 

An argument that resonates with some bosses is the "debuggable 
requirements/specifications -> ship the prototype and improve it" whose 
benefits show up early on. However, these quicker track processes will often be 
stressed for time to do a new POL.

This suggests that some of the most important POLs to be worked on are the ones 
that are for making POLs quickly. I think this is a huge important area and 
much needs to be done here (also a very good area for new PhD theses!).


Taking all these factors (and there are more), I think the POL and extensible 
language approach works best for really difficult problems that small numbers 
of really good people are hooked up to solve (could be in a company, and very 
often in one of many research venues) -- and especially if the requirements 
will need to change quite a bit, both from learning curve and quick response to 
the outside world conditions.

Here's where a factor of 100 or 1000 (sometimes even a factor of 10) less code 
will be qualitatively powerful.

Right now I draw a line at *100. If you can get this or more, it is worth 
surmounting the four difficulties listed above. If you can get *1000, you are 
in a completely new world of development and thinking.


Cheers,

Alan





>
> From: Loup Vaillant 
>To: fonc@vpri.org 
>Sent: Tuesday, February 28, 2012 8:17 AM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>Alan Kay wrote:
>> Hi Loup
>>
>> As I've said and written over the years about this project, it is not
>> possible to compare features in a direct way here.
>
>Yes, I'm aware of that.  The problem arises when I do advocacy. A
>response I often get is "but with only 20,000 lines, they gotta
>leave features out!".  It is not easy to explain that a point by
>point comparison is either unfair or flatly impossible.
>
>
>> Our estimate so far is that we are getting our best results from the
>> consolidated redesign (folding features into each other) and then from
>> the POLs. We are still doing many approaches where we thought we'd have
>> the most problems with LOCs, namely at "the bottom".
>
>If I got it, what you call "consolidated redesign" encompasses what I
>called "feature creep" and "good engineering principles" (I understand
>now that they can't be easily separated). I originally estimated that:
>
>- You manage to gain 4 orders of magnitude compared to current OSes,
>- consolidated redesign gives you roughly 2 of those  (from 200M to 2M),
>- problem oriented languages give you the remaining 2.(from 2M  to 20K)
>
>Did I…
>- overstated the power of problem oriented languages?
>- understated the benefits of consolidated redesign?
>- forgot something else?
>
>(Sorry to bother you with those details, but I'm currently trying to
>  convince my Boss to pay me for a PhD on the grounds that PoLs are
>  totally amazing, so I'd better know real fast if I'm being
>  over-confident.)
>
>Thanks,
>Loup.
>
>
>
>> Cheers,
>>
>> Alan
>>
>>
>>     *From:* Loup Vaillant 
>>     *To:* fonc@vpri.org
>>     *Sent:* Tuesday, February 28, 2012 2:21 AM
>>     *Subject:* Re: [fonc] Error trying to compile COLA
>>
>>     Originally, the VPRI claims to be able to do a system that's 10,000
>>     times smaller than our current bloatware. That's going from roughly 200
>>     million lines to 20,000. (Or, as Alan Kay puts

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Loup

As I've said and written over the years about this project, it is not possible 
to compare features in a direct way here. The aim is to make something that 
feels like "vanilla personal computing" to an end-user -- that can do "a lot" 
-- and limit ourselves to 20,000 lines of code. We picked "personal computing" 
for three main reasons (a) we had some experience with doing this the first 
time around at PARC (in a very small amount of code), (b) it is something that 
people experience everyday, so they will be able to have opinions without 
trying to do a laborious point by point comparison, and (c) we would fail if we 
had to reverse engineer typical renditions of this (i.e. MS or Linux) -- we 
needed to do our own design to have a chance at this.

Our estimate so far is that we are getting our best results from the 
consolidated redesign (folding features into each other) and then from the 
POLs. We are still doing many approaches where we thought we'd have the most 
problems with LOCs, namely at "the bottom".

Cheers,

Alan




>
> From: Loup Vaillant 
>To: fonc@vpri.org 
>Sent: Tuesday, February 28, 2012 2:21 AM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>Originally, the VPRI claims to be able to do a system that's 10,000
>times smaller than our current bloatware.  That's going from roughly 200
>million lines to 20,000. (Or, as Alan Kay puts it, from a whole library
>to a single book.) That's 4 orders of magnitude.
>
>From the report, I made a rough break down of the causes for code
>reduction.  It seems that
>
>- 1 order of magnitude is gained by removing feature creep.  I agree
>   feature creep can be important.  But I also believe most features
>   belong to a long tail, where each is needed by a minority of users.
>   It does matter, but if the rest of the system is small enough,
>   adding the few features you need isn't so difficult any more.
>
>- 1 order of magnitude is gained by mere good engineering principles.
>   In Frank for instance, there is _one_ drawing system, that is used
>   everywhere.  Systematic code reuse can go a long way.
>     Another example is the  code I work with.  I routinely find
>   portions whose volume I can divide by 2 merely by rewriting a couple
>   of functions.  I fully expect to be able to do much better if I
>   could refactor the whole program.  Not because I'm a rock star (I'm
>   definitely not).  Very far from that.  Just because the code I
>   maintain is sufficiently abysmal.
>
>- 2 orders of magnitude are gained through the use of Problem Oriented
>   Languages (instead of C or C++). As examples, I can readily recall:
>    + Gezira vs Cairo    (÷95)
>    + Ometa  vs Lex+Yacc (÷75)
>    + TCP-IP             (÷93)
>   So I think this is not exaggerated.
>
>Looked at it this way, it doesn't seems so impossible any more.  I
>don't expect you to suddenly agree the "4 orders of magnitude" claim
>(It still defies my intuition), but you probably disagree specifically
>with one of my three points above.  Possible objections I can think of
>are:
>
>- Features matter more than I think they do.
>- One may not expect the user to write his own features, even though
>   it would be relatively simple.
>- Current systems may be not as badly written as I think they are.
>- Code reuse could be harder than I think.
>- The two orders of magnitude that seem to come from problem oriented
>   languages may not come from _only_ those.  It could come from the
>   removal of features, as well as better engineering principles,
>   meaning I'm counting some causes twice.
>
>Loup.
>
>
>BGB wrote:
>> On 2/27/2012 10:08 PM, Julian Leviston wrote:
>>> Structural optimisation is not compression. Lurk more.
>> 
>> probably will drop this, as arguing about all this is likely pointless
>> and counter-productive.
>> 
>> but, is there any particular reason for why similar rules and
>> restrictions wouldn't apply?
>> 
>> (I personally suspect that similar applies to nearly all forms of
>> communication, including written and spoken natural language, and a
>> claim that some X can be expressed in Y units does seem a fair amount
>> like a compression-style claim).
>> 
>> 
>> but, anyways, here is a link to another article:
>> http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem
>> 
>>> Julian
>>> 
>>> On 28/02/2012, at 3:38 PM, BGB wrote:
>>> 
>>>> granted, I remain a little skeptical.
>>>> 
>>>> I think there is a bit of a difference though between, say, a log
>>>> 

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Ryan

Check out Smalltalk-71, which was a design to do just what you suggest -- it 
was basically an attempt to combine some of my favorite languages of the time 
-- Logo and Lisp, Carl Hewitt's Planner, Lisp 70, etc.

This never got implemented because of "a bet" that turned into Smalltalk-72, 
which also did what you suggest, but in a less comprehensive way -- think of 
each object as a Lisp closure that could be sent a pointer to the message and 
could then parse-and-eval that. 

A key to scaling -- that we didn't try to do -- is "semantic typing" (which I 
think is discussed in some of the STEPS material) -- that is: to be able to 
characterize the meaning of what is needed and produced in terms of a 
description rather than a label. Looks like we won't get to that idea this time 
either.

Cheers,

Alan




>
> From: Ryan Mitchley 
>To: fonc@vpri.org 
>Sent: Tuesday, February 28, 2012 12:57 AM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>
> 
>On 27/02/2012 19:48, Tony Garnock-Jones wrote:
>
>
>>My interest in it came out of thinking about integrating
>>pub/sub (multi- and broadcast) messaging into the heart of a
>>language. What would a Smalltalk look like if, instead of a
>>strict unicast model with multi- and broadcast constructed
>>atop (via Observer/Observable), it had a messaging model
>>capable of natively expressing unicast, anycast, multicast,
>>and broadcast patterns?
>>
>I've wondered if pattern matching shouldn't be a foundation of
>method resolution (akin to binding with backtracking in Prolog) - if
>a multicast message matches, the "method" is invoked (with much less
>specificity than traditional method resolution by name/token). This
>is maybe closer to the biological model of a cell surface receptor.
>
>Of course, complexity is an issue with this approach (potentially
>NP-complete).
>
>Maybe this has been done and I've missed it.
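[Editorial note] Ryan's idea, method resolution as pattern matching over multicast messages, can be sketched in a few lines. This is an invented illustration (`on` and `publish` are my names, and the matching here is deliberately flat and linear, sidestepping the backtracking and NP-completeness concerns he raises): every receiver whose pattern matches the message gets invoked, rather than one method being looked up by name.

```python
# Registered receivers: (pattern, handler) pairs.
receivers = []

def on(pattern, handler):
    """Register a handler for messages matching `pattern`.
    `None` in a pattern position acts as a wildcard."""
    receivers.append((pattern, handler))

def publish(message):
    """Multicast dispatch: deliver to *every* matching receiver,
    collecting their replies in registration order."""
    replies = []
    for pattern, handler in receivers:
        if len(pattern) == len(message) and all(
            p is None or p == m for p, m in zip(pattern, message)
        ):
            replies.append(handler(message))
    return replies

on(("temperature", None), lambda m: f"logger saw {m[1]}")
on(("temperature", None), lambda m: f"display shows {m[1]}")
on(("pressure", None),    lambda m: f"alarm checks {m[1]}")

print(publish(("temperature", 21)))
# both temperature receivers fire; the pressure one does not
```

A richer version would allow nested patterns with variable binding (closer to Prolog unification), which is where the complexity Ryan mentions comes in.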
>
>
>___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


  1   2   3   >