Re: [fonc] Task management in a world without apps.

2013-11-03 Thread Alan Kay
When I mention Smalltalk I always point to the past of 40 years ago, because it 
was then that the language and its implementation were significant. It was quite 
clear by the late 70s that many of the compromises (some of them wonderfully 
clever) that were made in order to run on the tiny machines of the day were not 
going to scale well.

It's worth noting that both the radical desire to burn the disk packs, 
*and* the sensible desire to use powers that are immediately available make 
sense in their respective contexts. But we shouldn't confuse the two desires. 
I.e. if we were to attempt an ultra high level general purpose language today, 
we wouldn't use Squeak or any other Smalltalk as a model or a starting place.

Cheers,

Alan





 From: karl ramberg karlramb...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Sunday, November 3, 2013 3:18 AM
Subject: Re: [fonc] Task management in a world without apps.
 


One issue with the instance development in Squeak is that it is quite fragile. 
It is easy to pull the building blocks apart and it all falls down like a 
house of cards. 


It's currently hard to work on different parts and version them individually, 
independent of the rest of the system. All parts are versioned by the whole 
project.


It is also quite hard to reuse separate parts and share them with others. Now 
you must share a whole project and pull out the parts you want.


I look forward to using more rugged tools for instance programming/creation 
:-)


Karl



On Thu, Oct 31, 2013 at 5:31 PM, Alan Kay alan.n...@yahoo.com wrote:

It's worth noting that this was the scheme at PARC and was used heavily later 
in Etoys. 

This is why Smalltalk has unlimited numbers of Projects. Each one is a 
persistent environment that serves both as a place to make things and as a 
page of desktop media. 

There are no apps, only objects and any and all objects can be brought to any 
project which will preserve them over time. This avoids the stovepiping of 
apps. Dan Ingalls (in Fabrik) showed one UI and scheme to integrate the 
objects, and George Bosworth's PARTS system showed a similar but slightly 
different way.

Also there is no presentation app in Etoys, just an object that allows 
projects to be put in any order -- and there can be many, many such orderings, 
all preserved -- and there is an object that will move from one project to the 
next as you give your talk. Builds etc. are all done via Etoy scripts.

This allows the full power of the system to be used for everything, including 
presentations. You can imagine how appalled we were by the appearance of 
Persuasion and PowerPoint, etc.

Etc.

We thought we'd done away with both operating systems and with apps but 
we'd used the wrong wood in our stakes -- the vampires came back in the 80s.

One of the interesting misunderstandings was that Apple and then MS didn't 
really understand the universal viewing mechanism (MVC), so they thought views 
with borders around them were windows and views without borders were part of 
desktop publishing, but in fact all were the same. The Xerox Star 
confounded the problem by reverting to a single desktop and apps and missed 
the real media possibilities.

They divided a unified media world into two regimes, neither of which is very 
good for end-users.

Cheers,

Alan







 From: David Barbour dmbarb...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Thursday, October 31, 2013 8:58 AM
Subject: Re: [fonc] Task management in a world without apps.
 


Instead of 'applications', you have objects you can manipulate (compose, 
decompose, rearrange, etc.) in a common environment. The state of the 
system, the construction of the objects, determines not only how they appear 
but how they behave - i.e. how they influence and observe the world. Task 
management is then simply rearranging objects: if you want to turn an object 
'off', you 'disconnect' part of the graph, or perhaps you flip a switch that 
does the same thing under the hood. 


This has very physical analogies. For example, there are at least two ways 
to task manage a light: you could disconnect your lightbulb from its 
socket, or you could flip a lightswitch, which opens a circuit.


There are a few interesting classes of objects, which might be described as 
'tools'. There are tools for your hand, like different paintbrushes in Paint 
Shop. There are also tools for your eyes/senses, like a magnifying glass, 
x-ray goggles, heads-up display, event notifications, or language 
translation. And there are tools that touch both aspects - like a 
projectional editor, lenses. If we extend the user-model with concepts like 
'inventory', and programmable tools for both hand and eye, those can serve 
as another form of task management. When you're done painting, put down the 
paintbrush.


This isn't really the same as switching between tasks

Re: [fonc] Task management in a world without apps.

2013-10-31 Thread Alan Kay
It's worth noting that this was the scheme at PARC and was used heavily later 
in Etoys. 

This is why Smalltalk has unlimited numbers of Projects. Each one is a 
persistent environment that serves both as a place to make things and as a 
page of desktop media. 

There are no apps, only objects and any and all objects can be brought to any 
project which will preserve them over time. This avoids the stovepiping of 
apps. Dan Ingalls (in Fabrik) showed one UI and scheme to integrate the 
objects, and George Bosworth's PARTS system showed a similar but slightly 
different way.

Also there is no presentation app in Etoys, just an object that allows 
projects to be put in any order -- and there can be many, many such orderings, 
all preserved -- and there is an object that will move from one project to the 
next as you give your talk. Builds etc. are all done via Etoy scripts.

This allows the full power of the system to be used for everything, including 
presentations. You can imagine how appalled we were by the appearance of 
Persuasion and PowerPoint, etc.

Etc.

We thought we'd done away with both operating systems and with apps but 
we'd used the wrong wood in our stakes -- the vampires came back in the 80s.

One of the interesting misunderstandings was that Apple and then MS didn't 
really understand the universal viewing mechanism (MVC), so they thought views 
with borders around them were windows and views without borders were part of 
desktop publishing, but in fact all were the same. The Xerox Star confounded 
the problem by reverting to a single desktop and apps and missed the real media 
possibilities.

They divided a unified media world into two regimes, neither of which is very 
good for end-users.

Cheers,

Alan





 From: David Barbour dmbarb...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Thursday, October 31, 2013 8:58 AM
Subject: Re: [fonc] Task management in a world without apps.
 


Instead of 'applications', you have objects you can manipulate (compose, 
decompose, rearrange, etc.) in a common environment. The state of the system, 
the construction of the objects, determines not only how they appear but how 
they behave - i.e. how they influence and observe the world. Task management 
is then simply rearranging objects: if you want to turn an object 'off', you 
'disconnect' part of the graph, or perhaps you flip a switch that does the 
same thing under the hood. 


This has very physical analogies. For example, there are at least two ways to 
task manage a light: you could disconnect your lightbulb from its socket, or 
you could flip a lightswitch, which opens a circuit.
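
(A toy sketch of that, in Python -- hypothetical code with invented names, not 
anything from the thread: the same object can be task-managed either by 
unplugging a connection or by flipping a switch that stays in the graph.)

    class Wire:
        # A connection that can be unplugged (the bulb-and-socket case).
        def __init__(self, source):
            self.source = source
        def get(self):
            return self.source() if self.source else None
        def unplug(self):
            self.source = None

    class Switch:
        # A connection left in place but opened (the lightswitch case).
        def __init__(self, source):
            self.source = source
            self.closed = True
        def get(self):
            return self.source() if self.closed else None
        def flip(self):
            self.closed = not self.closed

    lamp = Wire(lambda: "light")
    lamp.unplug()              # disconnect part of the graph
    print(lamp.get())          # None

    lit = Switch(lambda: "light")
    lit.flip()                 # the same thing, "under the hood"
    print(lit.get())           # None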


There are a few interesting classes of objects, which might be described as 
'tools'. There are tools for your hand, like different paintbrushes in Paint 
Shop. There are also tools for your eyes/senses, like a magnifying glass, 
x-ray goggles, heads-up display, event notifications, or language translation. 
And there are tools that touch both aspects - like a projectional editor, 
lenses. If we extend the user-model with concepts like 'inventory', and 
programmable tools for both hand and eye, those can serve as another form of 
task management. When you're done painting, put down the paintbrush.


This isn't really the same as switching between tasks. I.e. you can still get 
event notifications on your heads-up-display while you're editing an image. 
It's closer to controlling your computational environment by direct 
manipulation of structure that is interpreted as code (aka live programming).


Best,


Dave






On Thu, Oct 31, 2013 at 10:29 AM, Casey Ransberger casey.obrie...@gmail.com 
wrote:

A fun, but maybe idealistic idea: an application of a computer should just 
be what one decides to do with it at the time.

I've been wondering how I might best switch between tasks (or really things 
that aren't tasks too, like toys and documentaries and symphonies) in a world 
that does away with most of the application level modality that we got with 
the first Mac.

The dominant way of doing this with apps usually looks like either the OS X 
dock or the Windows 95 taskbar. But if I wanted less shrink wrap and more 
interoperability between the virtual things I'm interacting with on a 
computer, without forcing me to multitask (read: do more than one thing at 
once very badly), what does my best possible interaction language look like?

I would love to know if these tools came from some interesting research once 
upon a time. I'd be grateful for any references that can be shared. I'm also 
interested in hearing any wild ideas that folks might have, or great ideas 
that fell by the wayside way back when.

Out of curiosity, how does one change one's mood when interacting with 
Frank?

Casey

[fonc] People and evidence

2013-09-09 Thread Alan Kay
Kenneth Clark once remarked that "People in the Middle Ages were as 
passionately interested in truth as we are, but their sense of evidence was 
very different."

Marshall McLuhan said "I can't see it until I believe it."

Neil Postman once remarked that "People today have to accept twice as much on 
faith: *both* religion and science!"

In a letter to Kepler of August 1610, Galileo complained that some of the 
philosophers who opposed his discoveries had refused even to look through a 
telescope:
My dear Kepler, I wish that we might laugh at the remarkable stupidity of the 
common herd. What do you have to say about the principal philosophers of this 
academy who are filled with the stubbornness of an asp and do not want to look 
at either the planets, the moon or the telescope, even though I have freely and 
deliberately offered them the opportunity a thousand times? Truly, just as the 
asp stops its ears, so do these philosophers shut their eyes to the light of 
truth.
Many of the commenters on this list have missed that evidence and data 
require a fruitful context -- even to consider them! -- and that better tools 
and data will only tend to help those who are already set up epistemologically 
to use them wisely. (And don't forget the scientists I mentioned who have been 
shown to be deeply influenced by the context of their own employers.)


The fault is not in our stars ...

Cheers,

Alan


Re: [fonc] Software Crisis (was Re: Final STEP progress report abandoned?)

2013-09-09 Thread Alan Kay
Check out SmallStar by Dan Halbert at Xerox PARC (written up in a PARC 
blue book).

Cheers,

Alan



 From: John Carlson yottz...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Monday, September 9, 2013 3:47 PM
Subject: Re: [fonc] Software Crisis (was Re: Final STEP progress report 
abandoned?)
 


One thing you can do is create a bunch of named widgets that work together with 
copy and paste, as long as you can do type safety and can appropriately deal 
with variable explosion/collapsing.  You'll probably want to create very small 
functions, which can also be stored in widgets (lambdas).  Widgets will show up 
when their scope is entered, or you could have an inspect mode.
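
(A minimal Python sketch of that widget idea -- hypothetical code, all names 
invented: a widget stores a very small function, and widgets show up when 
their scope is entered.)

    class Widget:
        def __init__(self, name, fn):
            self.name = name
            self.fn = fn          # a small function (lambda) stored in the widget
            self.visible = False

    class Scope:
        def __init__(self, widgets):
            self.widgets = widgets
        def __enter__(self):
            for w in self.widgets:    # widgets show up when their scope is entered
                w.visible = True
        def __exit__(self, *exc):
            for w in self.widgets:
                w.visible = False

    double = Widget("double", lambda x: x * 2)
    with Scope([double]):
        print(double.visible, double.fn(21))   # True 42
    print(double.visible)                      # False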
On Sep 9, 2013 5:11 PM, David Barbour dmbarb...@gmail.com wrote:

I like Paul's idea here - form a pit of success even for people who tend to 
copy-paste.


I'm very interested in unifying PL with HCI/UI such that actions like 
copy-paste actually have formal meaning. If you copy a time-varying field from 
a UI form, maybe you can paste it as a signal into a software agent. Similarly 
with buttons becoming capabilities. (Really, if we can use a form, it should 
be easy to program something to use it for us. And vice versa.) All UI actions 
can be 'acts of programming', if we find the right way to formalize it. I 
think the trick, then, is to turn the UI into a good PL.
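
(One possible concrete reading of that, sketched in Python -- hypothetical 
code, not an existing system: copying a time-varying field hands out a live 
read capability, so the paste behaves like a signal rather than a dead 
snapshot.)

    class Field:
        # A UI field whose value varies over time.
        def __init__(self, value):
            self._value = value
        def set(self, value):
            self._value = value
        def copy(self):
            # The clipboard gets a capability that reads the current
            # value, not a snapshot of it.
            return lambda: self._value

    temperature = Field(20)
    clipboard = temperature.copy()   # "copy" a time-varying field

    temperature.set(25)
    print(clipboard())               # 25, not 20: the paste stays live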


To make copy-and-paste code more robust, what can we do?


Can we make our code more adaptive? Able to introspect its environment?


Can we reduce the number of environmental dependencies? Control namespace 
entanglement? Could we make it easier to grab all the dependencies for code 
when we copy it? 

Can we make it more provable?


And conversely, can we provide IDEs that can help the kids understand the 
code they take - visualize and graph its behavior, see how it integrates with 
its environment, etc? I think there's a lot we can do. Most of my thoughts 
center on language design and IDE design, but there may also be social avenues 
- perhaps wiki-based IDEs, or Gist-like repositories that also make it easy to 
interactively explore and understand code before using it.



On Sun, Sep 8, 2013 at 10:33 AM, Paul Homer paul_ho...@yahoo.ca wrote:


These days, the kids do a quick google, then just copy & paste the results 
into the code base, mostly unaware of what the underlying 'magic' 
instructions actually do. So example code is possibly a bad thing?

But even if that's true, we've let the genie out of the bottle and he isn't 
going back in. To fix the quality of software, for example, we can't just ban 
all cut & paste-able web pages.

The alternate route out of the problem is to exploit these types of human 
deficiencies. If some programmers just want to cut & paste, then perhaps all we 
can do is to just make sure that what they are using is of high enough quality. 
If someday they want more depth, then it should be available in easily 
digestible forms, even if few will ever travel that route.

If most people really don't want to think deeply about their problems, 
then I think that the best we can do is ensure that their hasty decisions are 
based on knowledge that is as accurate as possible. It's far better than them 
just flipping a coin. In a sense it moves our decision making up to a higher 
level of abstraction. Some people lose the 'why' of the decision, but their 
underlying choice ultimately is superior, and the 'why' can still be found by 
digging into the data. In a way, isn't that what we've already done 
with micro-code, chips and assembler? Or machinery? Gradually we move up 
towards broader problems...





Re: [fonc] Final STEP progress report abandoned?

2013-09-08 Thread Alan Kay
Hi Paul

I'm sure you are aware that yours is a very Engelbartian point of view, and I 
think there is still much value in trying to make things better in this 
direction.

However, it's also worth noting the studies over the last 40 years (and 
especially recently) that show how often even scientists go against their 
training and knowledge in their decisions, and are driven more by desire and 
environment than they realize. More knowledge is not the answer here -- but 
it's possible that very different kinds of training could help greatly.

Best wishes,

Alan



 From: Paul Homer paul_ho...@yahoo.ca
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org; Fundamentals of New Computing fonc@vpri.org 
Sent: Saturday, September 7, 2013 12:24 PM
Subject: Re: [fonc] Final STEP progress report abandoned?
 


Hi Alan,

I can't predict what will come, but I definitely have a sense of where I think 
we should go. Collectively as a species, we know a great deal, but individually 
people still make important choices based on too little knowledge. 


In a very abstract sense, 'intelligence' is just a more dynamic offshoot of 
'evolution' -- a sort of hyper-evolution. It allows a faster route towards 
reacting to changes in the environment, but it is still very limited by 
individual perspectives of the world. I don't think we need AI in the classic 
Hollywood sense, but we could enable a sort of hyper-intelligence by giving 
people easily digestible access to our collective understanding. Not a 
'borg'-style single intelligence, but rather just the tools that can be used to 
make decisions that are more accurate than an individual would have made 
normally. 


To me the path to get there lies within our understanding of data. It needs to 
be better organized, better understood and far more accessible. It can't keep 
getting caught up in silos, and it really needs ways to share it appropriately. 
The world changes dramatically when we've developed the ability to fuse all of 
our digitized information into one great structural model that has the 
capability to separate out fact from fiction. It's a long way off, but I've 
always thought it was possible...

Paul.





 From: Alan Kay alan.n...@yahoo.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, September 3, 2013 7:48:22 AM
Subject: Re: [fonc] Final STEP progress report abandoned?
 


Hi Jonathan


We are not soliciting proposals, but we like to hear the opinions of others on 
burning issues and better directions in computing.


Cheers,


Alan




 From: Jonathan Edwards edwa...@csail.mit.edu
To: fonc@vpri.org 
Sent: Tuesday, September 3, 2013 4:44 AM
Subject: Re: [fonc] Final STEP progress report abandoned?
 


That's great news! We desperately need fresh air. As you know, the way a 
problem is framed bounds its solutions. Do you already know what problems to 
work on or are you soliciting proposals?


Jonathan



From: Alan Kay alan.n...@yahoo.com
To: Fundamentals of New Computing fonc@vpri.org
Cc: 
Date: Mon, 2 Sep 2013 10:45:50 -0700 (PDT)
Subject: Re: [fonc] Final STEP progress report abandoned?

Hi Dan


It actually got written and given to NSF and approved, etc., a while ago, but 
needs a little more work before posting on the VPRI site. 


Meanwhile we've been consumed by setting up a number of additional, and wider 
scale, research projects, and this has occupied pretty much all of my time 
for the last 5-6 months.


Cheers,


Alan




 From: Dan Melchione dm.f...@melchione.com
To: fonc@vpri.org 
Sent: Monday, September 2, 2013 10:40 AM
Subject: [fonc] Final STEP progress report abandoned?
 


Haven't seen much regarding this for a while.  Has it been abandoned or 
put at such low priority that it is effectively abandoned?


Re: [fonc] Final STEP progress report abandoned?

2013-09-03 Thread Alan Kay
Hi Jonathan

We are not soliciting proposals, but we like to hear the opinions of others on 
burning issues and better directions in computing.

Cheers,

Alan



 From: Jonathan Edwards edwa...@csail.mit.edu
To: fonc@vpri.org 
Sent: Tuesday, September 3, 2013 4:44 AM
Subject: Re: [fonc] Final STEP progress report abandoned?
 


That's great news! We desperately need fresh air. As you know, the way a 
problem is framed bounds its solutions. Do you already know what problems to 
work on or are you soliciting proposals?

Jonathan



From: Alan Kay alan.n...@yahoo.com
To: Fundamentals of New Computing fonc@vpri.org
Cc: 
Date: Mon, 2 Sep 2013 10:45:50 -0700 (PDT)
Subject: Re: [fonc] Final STEP progress report abandoned?

Hi Dan


It actually got written and given to NSF and approved, etc., a while ago, but 
needs a little more work before posting on the VPRI site. 


Meanwhile we've been consumed by setting up a number of additional, and wider 
scale, research projects, and this has occupied pretty much all of my time for 
the last 5-6 months.


Cheers,


Alan




 From: Dan Melchione dm.f...@melchione.com
To: fonc@vpri.org 
Sent: Monday, September 2, 2013 10:40 AM
Subject: [fonc] Final STEP progress report abandoned?
 


Haven't seen much regarding this for a while.  Has it been abandoned or 
put at such low priority that it is effectively abandoned?


Re: [fonc] Final STEP progress report abandoned?

2013-09-03 Thread Alan Kay
Hi Kevin

At some point I'll gather enough brain cells to do the needed edits and get the 
report on the Viewpoints server.

Dan Amelang is in the process of writing his thesis on Nile, and we will 
probably put Nile out in a more general form after that. (A nice project would 
be to do Nile in the Chrome Native Client to get a usable, speedy and very 
compact graphics system for web-based systems.)

Yoshiki's K-Script has been experimentally implemented on top of Javascript, 
and we've been learning a lot about this variant of stream-based FRP as it is 
able to work within someone else's implementation of a language.

A lot of work on the cooperating solvers part of STEPS is going on (this was 
an add-on that wasn't really in the scope of the original proposal).

We are taking another pass at the interoperating alien modules problem that 
was part of the original proposal, but that we never really got around to 
making progress on.

And, as has been our pattern in the past, we have often alternated end-user 
systems (especially including children) with the deep systems projects, and 
we are currently pondering this 50+ year old problem again.

A fair amount of time is being put into problem finding (the basic idea is 
that initially trying to manifest visions of desirable future states is 
better than going directly into trying to state new goals -- good visions will 
often help problem finding which can then be the context for picking actual 
goals).

And most of my time right now is being spent in extending environments for 
research.

Cheers

Alan




 From: Kevin Driedger linuxbox+f...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Monday, September 2, 2013 2:41 PM
Subject: Re: [fonc] Final STEP progress report abandoned?
 


Alan,

Can you give us any more details or direction on these research projects?



]{evin ])riedger



On Mon, Sep 2, 2013 at 1:45 PM, Alan Kay alan.n...@yahoo.com wrote:

Hi Dan


It actually got written and given to NSF and approved, etc., a while ago, but 
needs a little more work before posting on the VPRI site. 


Meanwhile we've been consumed by setting up a number of additional, and wider 
scale, research projects, and this has occupied pretty much all of my time for 
the last 5-6 months.


Cheers,


Alan




 From: Dan Melchione dm.f...@melchione.com
To: fonc@vpri.org 
Sent: Monday, September 2, 2013 10:40 AM
Subject: [fonc] Final STEP progress report abandoned?
 


Haven't seen much regarding this for a while.  Has it been abandoned or 
put at such low priority that it is effectively abandoned?



Re: [fonc] Final STEP progress report abandoned?

2013-09-03 Thread Alan Kay
Yes, the "communication with aliens" problem -- in many different aspects -- is 
going to be a big theme for VPRI over the next few years.

Cheers,

Alan



 From: Tristan Slominski tristan.slomin...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Tuesday, September 3, 2013 7:25 PM
Subject: Re: [fonc] Final STEP progress report abandoned?
 


Hey Alan,

With regards to burning issues and better directions, I want to highlight 
the communicating with aliens problem as worthy of remembering: machines 
figuring out on their own a protocol and goals for communication. This might 
relate to the cooperating solvers aspect of your work.

Cheers,

Tristan



On Tue, Sep 3, 2013 at 6:48 AM, Alan Kay alan.n...@yahoo.com wrote:

Hi Jonathan


We are not soliciting proposals, but we like to hear the opinions of others on 
burning issues and better directions in computing.


Cheers,


Alan




 From: Jonathan Edwards edwa...@csail.mit.edu
To: fonc@vpri.org 
Sent: Tuesday, September 3, 2013 4:44 AM

Subject: Re: [fonc] Final STEP progress report abandoned?
 


That's great news! We desperately need fresh air. As you know, the way a 
problem is framed bounds its solutions. Do you already know what problems to 
work on or are you soliciting proposals?


Jonathan



From: Alan Kay alan.n...@yahoo.com
To: Fundamentals of New Computing fonc@vpri.org
Cc: 
Date: Mon, 2 Sep 2013 10:45:50 -0700 (PDT)
Subject: Re: [fonc] Final STEP progress report abandoned?

Hi Dan


It actually got written and given to NSF and approved, etc., a while ago, but 
needs a little more work before posting on the VPRI site. 


Meanwhile we've been consumed by setting up a number of additional, and wider 
scale, research projects, and this has occupied pretty much all of my time 
for the last 5-6 months.


Cheers,


Alan




 From: Dan Melchione dm.f...@melchione.com
To: fonc@vpri.org 
Sent: Monday, September 2, 2013 10:40 AM
Subject: [fonc] Final STEP progress report abandoned?
 


Haven't seen much regarding this for a while.  Has it been abandoned or 
put at such low priority that it is effectively abandoned?


Re: [fonc] Final STEP progress report abandoned?

2013-09-02 Thread Alan Kay
Hi Dan

It actually got written and given to NSF and approved, etc., a while ago, but 
needs a little more work before posting on the VPRI site. 

Meanwhile we've been consumed by setting up a number of additional, and wider 
scale, research projects, and this has occupied pretty much all of my time for 
the last 5-6 months.

Cheers,

Alan



 From: Dan Melchione dm.f...@melchione.com
To: fonc@vpri.org 
Sent: Monday, September 2, 2013 10:40 AM
Subject: [fonc] Final STEP progress report abandoned?
 


Haven't seen much regarding this for a while.  Has it been abandoned or 
put at such low priority that it is effectively abandoned?


Re: [fonc] Deoptimization as fallback

2013-07-30 Thread Alan Kay
This is how Smalltalk has always treated its primitives, etc.

Cheers,

Alan



 From: Casey Ransberger casey.obrie...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, July 30, 2013 1:22 PM
Subject: [fonc] Deoptimization as fallback
 

Thought I had: when a program hits an unhandled exception, we crash; often 
there's a hook to log the crash somewhere. 

I was thinking: if a system happens to be running an optimized version of some 
algorithm, and hits a crash bug, what if it could fall back to the suboptimal 
but conceptually simpler Occam's explanation?

All other things being equal, the simple implementation is usually more stable 
than the faster/less-RAM solution.
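
(A toy Python sketch of the question -- assumptions mine: keep the simple 
reference implementation around, and deoptimize to it when the clever version 
throws.)

    def sum_fast(xs):
        # Stand-in for a clever, optimized implementation with a bug:
        # it assumes a non-empty list.
        total = xs[0]
        for x in xs[1:]:
            total += x
        return total

    def sum_simple(xs):
        # The "Occam" version: obviously correct, merely slower.
        total = 0
        for x in xs:
            total += x
        return total

    def sum_robust(xs):
        try:
            return sum_fast(xs)
        except Exception:
            # Log the crash here, then fall back to the simpler explanation.
            return sum_simple(xs)

    print(sum_robust([1, 2, 3]))   # 6, via the fast path
    print(sum_robust([]))          # 0, via the fallback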

Is anyone aware of research in this direction?


Re: [fonc] 90% glue code

2013-04-19 Thread Alan Kay
The only really good -- and reasonably accurate -- book about the history of 
Lick, ARPA-IPTO (no D; that is when things went bad), and Xerox PARC is 
The Dream Machine by M. Mitchell Waldrop.

Cheers,

Alan




 From: Miles Fidelman mfidel...@meetinghouse.net
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Friday, April 19, 2013 5:45 AM
Subject: Re: [fonc] 90% glue code
 

Casey Ransberger wrote:
 This Licklider guy is interesting. CS + psych = cool.

A lot more than cool.  Lick was the guy who:
- was an MIT Professor
- pioneered timesharing (bought the first production PDP-1 for BBN) and AI 
work at BBN
- served as the initial Program Manager at DARPA/IPTO (the folks who funded 
the ARPANET)
- was Director of Project MAC at MIT for a while
- wrote some really seminal papers - Man-Computer Symbiosis is right up there 
with Vannevar Bush's As We May Think

"It seems reasonable to envision, for a time 10 or 15 years hence, a 'thinking 
center' that will incorporate the functions of present-day libraries together 
with anticipated advances in information storage and retrieval.

The picture readily enlarges itself into a network of such centers, connected 
to one another by wide-band communication lines and to individual users by 
leased-wire services. In such a system, the speed of the computers would be 
balanced, and the cost of the gigantic memories and the sophisticated programs 
would be divided by the number of users."

- J.C.R. Licklider, Man-Computer Symbiosis (http://memex.org/licklider.html), 
1960.

- perhaps the earliest conception of the Internet:
In a 1963 memo to Members and Affiliates of the Intergalactic Computer 
Network, Licklider theorized that a computer network could help researchers 
share information and even enable people with common interests to interact 
online.
(http://web.archive.org/web/20071224090235/http://www.today.ucla.edu/1999/990928looking.html)

Outside the community he kept a very low profile. One of the greats.

Miles Fidelman

-- 
"In theory, there is no difference between theory and practice.
In practice, there is." -- Yogi Berra



Re: [fonc] 90% glue code

2013-04-18 Thread Alan Kay
Hi David

This is an interesting slant on a 50+ year old paramount problem (and one that 
is even more important today).

Licklider called it the "communicating with aliens" problem. He said 50 years 
ago this month that if we succeed in constructing the 'intergalactic network' 
then our main problem will be learning how to 'communicate with aliens'. He 
meant not just humans to humans but software to software and humans to 
software. 

(We gave him his intergalactic network but did not solve the communicating with 
aliens problem.)


I think a key to finding better solutions is to -- as he did -- really push the 
scale beyond our imaginations -- intergalactic -- and then ask "how can we 
*still* establish workable communications of overlapping meanings?"


Another way to look at this is to ask: What kinds of prep *can* you do 
*beforehand* to facilitate communications with alien modules?

Cheers,

Alan






 From: David Barbour dmbarb...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, April 17, 2013 6:13 PM
Subject: Re: [fonc] 90% glue code
 


Sounds like you want stone soup programming. :D


In retrospect, I've been disappointed with most techniques that involve 
providing information about module capabilities to some external 
configurator (e.g. linkers as constraint solvers). Developers are asked to 
grok at least two very different programming models. Hand annotations or hints 
become common practice because many properties cannot be inferred. The 
resulting system isn't elegantly metacircular, i.e. you need that 
'configurator' in the loop and the metada with the inputs.


An alternative I've been thinking about recently is to shift the link logic to 
the modules themselves. Instead of being passive bearers of information that 
some external linker glues together, the modules become active agents in a 
link environment that collaboratively construct the runtime behavior (which 
may afterwards be extracted). Developers would have some freedom to abstract 
and separate problem-specific link logic (including decision-making) rather 
than having a one-size-fits-all solution.
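
(A rough Python sketch of that alternative -- hypothetical code with invented 
names: each module runs its own link logic against a shared link environment, 
instead of a linker reading passive metadata. Link order is a real concern; 
this toy simply links providers first.)

    class LinkEnv:
        def __init__(self):
            self.offers = {}
        def offer(self, name, thing):
            self.offers[name] = thing
        def need(self, name):
            return self.offers.get(name)

    class LogModule:
        def link(self, env):
            env.offer("log", lambda msg: print("[log]", msg))

    class AppModule:
        def link(self, env):
            # This module's own decision-making: prefer a provided
            # logger, otherwise fall back to a no-op.
            log = env.need("log") or (lambda msg: None)
            self.run = lambda: log("app started")

    env = LinkEnv()
    app = AppModule()
    for m in (LogModule(), app):   # each module participates in linking
        m.link(env)
    app.run()                      # [log] app started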


Re: In my mind powerful languages thus means 98% requirements


To me, power means something much more graduated: that I can get as much 
power as I need, that I can do so late in development without rewriting 
everything, that my language will grow with me and my projects.




On Wed, Apr 17, 2013 at 2:04 PM, John Nilsson j...@milsson.nu wrote:

Maybe not. If there is enough information about different modules' 
capabilities, suitability for solving various problems and requirements, such 
that the required glue can be generated or configured automatically at run 
time, then what is left is the input to such a generator or configurator. At 
some level of abstraction the input should transition from being glue and be 
better described as design.
Design could be seen as kind of a gray area if thought of mainly as picking 
what to glue together, as it still involves a significant amount of gluing ;) 
But even design should be possible to formalize enough to minimize the number 
of actual design decisions that must be encoded in the source, leaving the 
rest to algorithms. So what's left is to encode the requirements as input to 
the designer. 
In my mind powerful languages thus means 98% requirements, 2% design and 0% 
glue. 
BR
John
Den 17 apr 2013 05:04 skrev Miles Fidelman mfidel...@meetinghouse.net:


So let's ask the obvious question, if we have powerful languages, and/or 
powerful libraries, is not an application comprised primarily of glue code 
that ties all the piece parts together in an application-specific way?

David Barbour wrote:


On Tue, Apr 16, 2013 at 2:25 PM, Steve Wart st...@wart.ca wrote:

     On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich
     In real systems, 90% of code (conservatively) is glue code.

    What is the origin of this claim?

I claimed it from observation and experience. But I'm sure there are other 
people who have claimed it, too. Do you doubt its veracity?

    On Mon, Apr 15, 2013 at 12:15 PM, David Barbour dmbarb...@gmail.com wrote:

        On Mon, Apr 15, 2013 at 11:57 AM, David Barbour dmbarb...@gmail.com wrote:

            On Mon, Apr 15, 2013 at 10:40 AM, Loup Vaillant-David
            l...@loup-vaillant.fr wrote:

                On Sun, Apr 14, 2013 at 04:17:48PM -0700, David Barbour wrote:
                 On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich
                 In real systems, 90% of code (conservatively) is glue code.

                Does this *have* to be the case?  Real systems also use C++ (or
                Java).  Better languages may require less glue, (even if they
                require just as much core 

Re: [fonc] 90% glue code

2013-04-18 Thread Alan Kay
The basic idea is to find really fundamental questions about negotiating 
meaning, and to invent mental and computer tools to help.

David is quite right to complain about the current state of things in this area 
-- but -- for example -- I don't know of anyone trying a discovery system 
like Lenat's Eurisko, or trying to imitate how a programmer would go about the 
alien module problem, or, e.g., looking at how a linguist like Charles Hockett 
could learn a traditional culture's language well enough in a few hours to 
speak to them in it. (I recall some fascinating movies from my Anthro classes 
in linguistics that I think were made in the 50s, showing (I think) Hockett put 
in the middle of a village and what he did to find their language.)

There are certainly tradeoffs here about just what kind of overlap at what 
levels can be gained. This is similar to the idea that there are lots of 
wonderful things in Biology that are out of scale with our computer 
technologies. So we should find the things in both Bio and Anthro that will 
help us think.

Cheers,

Alan




 From: Jeff Gonis jeff.go...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Thursday, April 18, 2013 8:39 AM
Subject: Re: [fonc] 90% glue code
 


Hi Alan,

Your metaphor brought up a connection that I have been thinking about for a 
while, but I unfortunately don't have enough breadth of knowledge to know if 
the connection is worthwhile or not, so I am throwing it out there to this 
list to see what people think.

If figuring out module communication can be likened to communicating with 
aliens, could we not look at how we go about communicating with alien 
cultures right now?  Maybe trying to use real-world metaphors in this case 
is foolish, but it seemed to work out pretty well when you used some of your 
thoughts on biology to inform OOP.  

So can we look to the real world and ask how linguists go about communicating 
with unknown cultures or remote tribes of people?  Has this process occurred 
frequently enough that there is some sort of protocol or process that is 
followed by which concepts from one language are mapped onto those contained 
in the indigenous language until communication can occur?  Could we use this 
process as a source of metaphors to think about how to create a protocol for 
discovering how two different software modules can map their own concepts 
onto the other?

Anyway, that was something that had been running in the background of my mind 
for a while, since I saw you talk about the importance of figuring out ways to 
mechanize the process of modules figuring out how to communicate with each 
other.

Thanks,
Jeff




On Thu, Apr 18, 2013 at 9:06 AM, Alan Kay alan.n...@yahoo.com wrote:

Hi David


This is an interesting slant on a 50+ year old paramount problem (and one 
that is even more important today).


Licklider called it the "communicating with aliens" problem. He said 50 years 
ago this month that if we succeed in constructing the 'intergalactic 
network' then our main problem will be learning how to 'communicate with 
aliens'. He meant not just humans to humans but software to software and 
humans to software. 


(We gave him his intergalactic network but did not solve the communicating 
with aliens problem.)



I think a key to finding better solutions is to -- as he did -- really push 
the scale beyond our imaginations -- intergalactic -- and then ask "how can 
we *still* establish workable communications of overlapping meanings?"



Another way to look at this is to ask: What kinds of prep *can* you do 
*beforehand* to facilitate communications with alien modules?


Cheers,


Alan



Re: [fonc] 90% glue code

2013-04-18 Thread Alan Kay
Hi David

We are being invaded by stupid aliens, aka ours and other people's software. 
The gods who made their spaceships are on vacation and didn't care about 
intercommunication (is this a modern version of the Tower of Babel?).

Discovery can take a long time (and probably should) but might not be needed 
for most subsequent communications (per Joe Becker's Phrasal Lexicon). Maybe 
ML coupled to something more semantic (e.g. CYC) could make impressive inroads 
here.

I'm guessing that even the large range of ideas -- good and bad -- in CS today 
is a lot smaller (and mostly stupider) than the ones that need to be dealt with 
when trying for human to human or human to alien overlap.

Cheers,

Alan




 From: David Barbour dmbarb...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Thursday, April 18, 2013 9:25 AM
Subject: Re: [fonc] 90% glue code
 


Well, communicating with genuine aliens would probably best be solved by 
multi-modal machine-learning techniques. The ML community already has 
techniques for two machines to teach one another their vocabularies, and 
thus build a strong correspondence. Of course, if we have space alien 
visitors, they'll probably have a solution to the problem and already know our 
language from media. 


Natural language has a certain robustness to it, due to its probabilistic, 
contextual, and interactive natures (offering much opportunity for refinement 
and retroactive correction). If we want to support machine-learning between 
software elements, one of the best things we could do is to emulate this 
robustness end-to-end. Such things have been done before, but I'm a bit stuck 
on how to do so without big latency, efficiency, and security sacrifices. 
(There are two issues: the combinatorial explosion of possible models, and the 
modular hiding of dependencies that are inherently related through shared 
observation or influence.)


Fortunately, there are many other issues we can address to facilitate 
communication that are peripheral to translation. Further, we could certainly 
leverage code-by-example for type translations (if they're close). 


Regards,


Dave




On Thu, Apr 18, 2013 at 8:06 AM, Alan Kay alan.n...@yahoo.com wrote:

Hi David


This is an interesting slant on a 50+ year old paramount problem (and one 
that is even more important today).


Licklider called it the "communicating with aliens" problem. He said 50 years 
ago this month that if we succeed in constructing the 'intergalactic 
network' then our main problem will be learning how to 'communicate with 
aliens'. He meant not just humans to humans but software to software and 
humans to software. 


(We gave him his intergalactic network but did not solve the communicating 
with aliens problem.)



I think a key to finding better solutions is to -- as he did -- really push 
the scale beyond our imaginations -- intergalactic -- and then ask "how can 
we *still* establish workable communications of overlapping meanings?"



Another way to look at this is to ask: What kinds of prep *can* you do 
*beforehand* to facilitate communications with alien modules?


Cheers,


Alan








 From: David Barbour dmbarb...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, April 17, 2013 6:13 PM
Subject: Re: [fonc] 90% glue code
 


Sounds like you want stone soup programming. :D


In retrospect, I've been disappointed with most techniques that involve 
providing information about module capabilities to some external 
configurator (e.g. linkers as constraint solvers). Developers are asked to 
grok at least two very different programming models. Hand annotations or 
hints become common practice because many properties cannot be inferred. The 
resulting system isn't elegantly metacircular, i.e. you need that 
'configurator' in the loop and the metadata with the inputs.


An alternative I've been thinking about recently is to shift the link logic 
to the modules themselves. Instead of being passive bearers of information 
that some external linker glues together, the modules become active agents 
in a link environment that collaboratively construct the runtime behavior 
(which may afterwards be extracted). Developers would have some freedom to 
abstract and separate problem-specific link logic (including 
decision-making) rather than having a one-size-fits-all solution.


Re: In my mind powerful languages thus means 98% requirements


To me, power means something much more graduated: that I can get as much 
power as I need, that I can do so late in development without rewriting 
everything, that my language will grow with me and my projects.




On Wed, Apr 17, 2013 at 2:04 PM, John Nilsson j...@milsson.nu wrote:

Maybe not. If there is enough information about different modules' 
capabilities, suitability for solving various problems and requirements, 
such that the required glue can

Re: [fonc] Old Boxer Paper

2013-03-27 Thread Alan Kay
Yep, it had some good ideas.

Cheers,

Alan




 From: Francisco Garau fga...@gmail.com
To: fonc@vpri.org fonc@vpri.org 
Sent: Wednesday, March 27, 2013 12:51 AM
Subject: [fonc] Old Boxer Paper
 
It reminds me of scratch & etoys

http://www.soe.berkeley.edu/boxer/20reasons.pdf

- Francisco



Re: [fonc] Kernel & Maru

2013-03-26 Thread Alan Kay
That's it -- a real classic!

Cheers,

Alan




 From: Dirk Pranke dpra...@chromium.org
To: Fundamentals of New Computing fonc@vpri.org; mo...@codetransform.com 
Sent: Monday, March 25, 2013 11:18 PM
Subject: Re: [fonc] Kernel & Maru
 

In response to a long-dormant thread ... Fisher's thesis seems to have 
surfaced on the web:


http://reports-archive.adm.cs.cmu.edu/anon/anon/usr/ftp/usr0/ftp/scan/CMU-CS-70-fisher.pdf



-- Dirk



On Tue, Apr 10, 2012 at 11:34 AM, Monty Zukowski mo...@codetransform.com 
wrote:

If anyone finds an electronic copy of Fisher's thesis I'd love to know
about it.  My searches have been fruitless.

Monty

On Tue, Apr 10, 2012 at 10:04 AM, Alan Kay alan.n...@yahoo.com wrote:
...

 Dave Fisher's thesis A Control Definition Language (CMU, 1970) is a very
 clean approach to thinking about environments for LISP-like languages. He
 separates the control path from the environment path, etc.
...



Re: [fonc] Kernel & Maru

2013-03-26 Thread Alan Kay
The first ~100 pages are still especially good as food for thought.

Cheers,

Alan




 From: Duncan Mak duncan...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Cc: mo...@codetransform.com mo...@codetransform.com 
Sent: Tuesday, March 26, 2013 10:35 AM
Subject: Re: [fonc] Kernel & Maru
 

This is great!


Dirk - how did you find it?


Duncan.



On Tue, Mar 26, 2013 at 6:09 AM, Alan Kay alan.n...@yahoo.com wrote:

That's it -- a real classic!


Cheers,


Alan




 From: Dirk Pranke dpra...@chromium.org
To: Fundamentals of New Computing fonc@vpri.org; mo...@codetransform.com 
Sent: Monday, March 25, 2013 11:18 PM

Subject: Re: [fonc] Kernel & Maru
 

In response to a long-dormant thread ... Fisher's thesis seems to have 
surfaced on the web:


http://reports-archive.adm.cs.cmu.edu/anon/anon/usr/ftp/usr0/ftp/scan/CMU-CS-70-fisher.pdf



-- Dirk



On Tue, Apr 10, 2012 at 11:34 AM, Monty Zukowski mo...@codetransform.com 
wrote:

If anyone finds an electronic copy of Fisher's thesis I'd love to know
about it.  My searches have been fruitless.

Monty

On Tue, Apr 10, 2012 at 10:04 AM, Alan Kay alan.n...@yahoo.com wrote:
...

  Dave Fisher's thesis A Control Definition Language (CMU, 1970) is a very
  clean approach to thinking about environments for LISP-like languages. He
  separates the control path from the environment path, etc.
...






-- 
Duncan. 



Re: [fonc] Building blocks and use of text

2013-02-14 Thread Alan Kay
And of course, for some time there has been Croquet

http://en.wikipedia.org/wiki/Croquet_project


... and its current manifestation

http://en.wikipedia.org/wiki/Open_Cobalt


These are based on Dave Reed's 1978 MIT thesis and were first implemented about 
10 years ago at Viewpoints.

Besides allowing massively distributed computing without servers, the approach 
is interesting in just how widely it comprehends Internet-sized systems.

Cheers,

Alan





 From: Casey Ransberger casey.obrie...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, February 13, 2013 10:52 PM
Subject: Re: [fonc] Building blocks and use of text
 

The next big thing probably won't be some version of Minecraft, even if 
Minecraft is really awesome. OTOH, you and your kids can prove me wrong today 
with Minecraft Raspberry Pi Edition, which is free, and comes with _source 
code_.


http://mojang.com/2013/02/minecraft-pi-edition-is-available-for-download/



/fanboy



On Wed, Feb 13, 2013 at 5:55 PM, John Carlson yottz...@gmail.com wrote:

Miles wrote:
 There's a pretty good argument to be made that what works are powerful 
 building blocks that can be combined in lots of different ways; 
So the next big thing will be some version of minecraft?  Or perhaps the 
older toontalk?  Agentcubes?  What is the right 3D metaphor?  Does anyone 
have a comfortable metaphor?  It would seem like if there was an open, 
federated MMO system that supported object lifecycles, we would have 
something.  Do we have an object web yet, or are we stuck with text 
forever, with all the nasty security vulnerabilities involved?  Yes I agree 
that we lost something when we moved to the web.  Perhaps we need to step 
away from the document model purely for security reasons.
What's the alternative?  Scratch and Alice?  Storing/transmitting ASTs?  Does 
our reliance on https/ssl/tls which is based on streams limit us? When are we 
going to stop making streams secure and start making secure network objects?  
Object-capability security anyone?
Are we stuck with documents because they are the best thing for debugging?





-- 
Casey Ransberger 


Re: [fonc] Terminology: Object Oriented vs Message Oriented

2013-02-13 Thread Alan Kay
One of the original reasons for message-based was the simple relativistic 
one. What we decided is that trying to send messages to explicit receivers had 
real scaling problems, whereas receiving messages is a good idea.

Cheers,

Alan




 From: Eugen Leitl eu...@leitl.org
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, February 13, 2013 5:11 AM
Subject: Re: [fonc] Terminology: Object Oriented vs Message Oriented
 
On Tue, Feb 12, 2013 at 11:33:04AM -0700, Jeff Gonis wrote:
 I see no one has taken Alan's bait and asked the million dollar question:
 if you decided that messaging is no longer the right path for scaling, what
 approach are you currently using?

Classical computation doesn't allow storing multiple bits
in the same location, so relativistic signalling introduces
latency. Asynchronous shared-nothing message passing is
the only thing that scales, as it matches the way this 
universe does things (try looking at light cones for consistent
state for multiple writes to the same location -- this
of course applies to cache coherency).

Inversely, doing things in a different way will guarantee
that you won't be able to scale. It's not just a good idea,
it's the law. 


Re: [fonc] Terminology: Object Oriented vs Message Oriented

2013-02-13 Thread Alan Kay
 ... which have to be agreed on somehow
 
 
 Part of the problem is that for vanilla sends, the sender has to know the 
 receiver in some fashion. This starts requiring the interior of a module to 
 know too much if this is a front line mechanism.
 
 This leads to wanting to do something more like LINDA coordination or 
 publish and subscribe where there are pools of producers and consumers who 
 don't have to know explicitly about each other. A send is now a general 
 request for a resource. But the vanilla approaches here still require that 
 the sender and receiver have a fair amount of common knowledge (because 
 the matching is usually done on terms in common).
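
(A toy tuple-space sketch of that, in Python -- hypothetical code in the LINDA 
spirit: producers and consumers match on the shape of a tuple in a shared pool 
rather than naming each other; note they still share the vocabulary used 
inside the tuples.)

    space = []

    def put(tup):
        space.append(tup)

    def take(pattern):
        # None in the pattern is a wildcard; everything else must match.
        for tup in space:
            if len(tup) == len(pattern) and all(
                    p is None or p == t for p, t in zip(pattern, tup)):
                space.remove(tup)
                return tup
        return None

    put(("sine", 30, 0.5))            # a producer publishes a result
    print(take(("sine", 30, None)))   # a consumer asks by shape: ('sine', 30, 0.5)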
 
 For example, in order to invoke a module that will compute the sine of an 
 angle, do you and the receiver both have to agree about the term sine? In 
 APL I think the name of this function is circle 1 and in Smalltalk it's 
 degreeSin, etc. 
 
 Ted Kaehler solved this problem some years ago in Squeak Smalltalk with his 
 message finder. For example, if you enter 3. 4. 7 Squeak will instantly 
 come back with:
    3 bitOr: 4 --> 7
    3 bitXor: 4 --> 7
    3 + 4 --> 7
 
 For the sine example you would enter 30. 0.5 and Squeak will come up with: 
    30 degreeSin --> 0.5
 
 The method finder is acting a bit like Doug Lenat's discovery systems. 
 Simple brute force is used here (Ted executes all the methods that could fit 
 in the system safely to see what they do.)
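
(A brute-force sketch in that spirit, in Python rather than Squeak -- 
hypothetical code with only a handful of candidates: given example arguments 
and a desired result, try each candidate safely and report what fits.)

    import operator

    candidates = {
        "bitOr":  operator.or_,
        "bitXor": operator.xor,
        "+":      operator.add,
        "*":      operator.mul,
    }

    def find_methods(a, b, expected):
        found = []
        for name, fn in candidates.items():
            try:                    # "safely": a failure just doesn't match
                if fn(a, b) == expected:
                    found.append(name)
            except Exception:
                pass
        return found

    print(find_methods(3, 4, 7))    # ['bitOr', 'bitXor', '+']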
 
 One of the solutions at PARC for dealing with a part of the problem is the 
 idea of "send an agent, not a message". It was quickly found that defining 
 file formats for all the different things that could be printed on the new 
 laser printer was not scaling well. The solution was to send a program that 
 would just execute safely and blindly in the printer -- the printer would 
 then just print out the bit bin. This was known as PostScript when it came 
 out in the world.
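
(A toy Python sketch of "send an agent, not a message" -- hypothetical code: 
instead of agreeing on a file format, the sender ships a program that the 
receiver executes blindly against a small, safe vocabulary.)

    def printer(job):
        # The printer exposes only a tiny safe vocabulary to jobs.
        page = []
        primitives = {"draw": page.append}
        job(primitives)            # execute the visiting program blindly
        return page

    # The sender encodes its document as a program, not as a file format.
    def my_document(ops):
        ops["draw"]("line from (0,0) to (10,10)")
        ops["draw"]("text 'hello' at (2,5)")

    print(printer(my_document))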
 
 The Trickles idea from Cornell has much of the same flavor.
 
 One possible starting place is to notice that there are lots more terms that 
 people can use than the few that are needed to make a powerful compact 
 programming language. So why not try to describe meanings and match on 
 meanings -- and let there be not just matching (which is like a password) 
 but negotiation, which is what a discovery agent does.
 
 And so forth. I think this is a difficult but doable problem -- it's easier 
 than AI, but has some tinges of it.
 
 Got any ideas?
 
 Cheers,
 
 Alan
 
 
  From: Jeff Gonis jeff.go...@gmail.com
 To: Alan Kay alan.n...@yahoo.com 
 Cc: Fundamentals of New Computing fonc@vpri.org 
 Sent: Tuesday, February 12, 2013 10:33 AM
 Subject: Re: [fonc] Terminology: Object Oriented vs Message Oriented
  
 
 I see no one has taken Alan's bait and asked the million dollar question: 
 if you decided that messaging is no longer the right path for scaling, what 
 approach are you currently using?
 I would assume that FONC is the current approach, meaning, at the risk of 
 grossly over-simplifying and sounding ignorant, problem oriented 
 languages allowing for compact expression of meaning.  But even here, FONC 
 struck me as providing vastly better ways of creating code that, at its 
 core, still used messaging for robustness, etc, rather than using something 
 entirely different.
 Have I completely misread the FONC projects? And if not messaging, what 
 approach are you currently using to handle scalability?
 A little more history ...
 
 
 The first Smalltalk (-72) was modern (as used below), and similar to 
 Erlang in several ways -- for example, messages were received with 
 structure and pattern matching, etc. The language was extended using the 
 same mechanisms ...
 
 
 Cheers,
 
 
 
 Alan
 
 
 
 
  From: Brian Rice briantr...@gmail.com
 To: Fundamentals of New Computing fonc@vpri.org 
 Sent: Tuesday, February 12, 2013 8:54 AM
 Subject: Re: [fonc] Terminology: Object Oriented vs Message Oriented
  
 
 Independently of the originally-directed historical intent, I'll pose my 
 own quick perspective.
 
 Perhaps a contrast with Steve Yegge's Kingdom of Nouns essay would help:
 http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html
 
 
 
 The modern post-Erlang sense of message-oriented computing has to do with 
 messages with structure and pattern-matching, where error-handling isn't 
 about sequential, nested access, but more about independent structures 
 dealing with untrusted noise.
 
 
 Anyway, treating the messages as first-class objects (in the Lisp sense) 
 is what gets you there:
 http://www.erlang.org/doc/getting_started/conc_prog.html
 
 
 
 
 
 
 On Tue, Feb 12, 2013 at 7:15 AM, Loup Vaillant l...@loup-vaillant.fr 
 wrote:
 
 This question was prompted by a quote by Joe Armstrong about OOP[1].
 It is for Alan Kay, but I'm totally fine with a relevant link.  Also,
 I don't know and I don't have time

Re: [fonc] Design of web, POLs for rules. Fuzz testing nile

2013-02-13 Thread Alan Kay
Hi John

Or you could look at the actual problem a web has to solve, which is to 
present arbitrary information to a user that comes from any of several billion 
sources. Looked at from this perspective we can see that the current web design 
could hardly be more wrong headed. For example, what is the probability that we 
can make an authoring app that has all the features needed by billions of 
producers?

One conclusion could be that the web/browser is not an app but should be a kind 
of operating system that should be set up to safely execute anything from 
anywhere and to present the results in forms understandable by the end-user.

After literally decades of trying to add more and more features and not yet 
matching up to the software that ran on the machines the original browser was 
done on, they are slowly coming around to the idea that they should be safely 
executing programs written by others. It has only been in the last few years -- 
with Native Client in Chrome -- that really fast programs can be safely 
downloaded as executables without having to have permission of a SysAdmin.

So another way to look at all this is to ask what such an OS really needs to 
have to allow all in the world to make their own media and have it used by 
others ...

Cheers,

Alan




 From: John Carlson yottz...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, February 12, 2013 9:00 PM
Subject: [fonc] Design of web, POLs for rules. Fuzz testing nile
 

Although I have read very little about the design of the web, things are 
starting to gel in my mind.  At the lowest level lies the static or 
declarative part of the web.  HTML, the DOM, XML and JSON are the main 
languages used in the declarative part.  Layered on top of this is the dynamic 
or procedural part of the web.  JavaScript and XSLT are the main languages in 
the procedural part.   The final level is the constraints or rule based part 
of the web, normally called stylesheets.  The languages in the rule based web 
are CSS 1, 2, 3 and XSL. jQuery provides a way to apply operations in this 
arena.  I am excluding popular server side languages...too many.
What I am wondering is what is the best way to incorporate rules into a 
language.  VRML has routes.  UML has OCL. Is avoiding if statements and 
for/while loops the goal of rules languages--that syntax?  That is, do a query 
or find, and apply the operations or rules to all returned values.
Now, if I wanted to apply probabilistic or fuzzy rules to the DOM, that seems 
fairly straightforward.  Fuzz testing does this moderately well.  Have there 
been attempts at better fuzz testing? Fuzz about fuzz?  Or is brute force best?
We've also seen probabilistic parser generators, correct?
But what about probabilistic rules?  Can we design an ultimate website w/o a 
designer?  Can we use statistics to create a great solitaire player--I have a 
pretty good stochastic solitaire player for one version of solitaire...how 
about others?  How does one create a great set of rules?  One can create great 
rule POLs, but where are the authors?  Something like Cameron Browne's thesis 
seems great for grid games.  He is quite prolific.  Can we apply the same 
logic to card games? Web sites?  We have "The Nature of Order" by C. 
Alexander.  Are there Nile designers or fuzz testers/genetic algorithms for 
Nile?
Is fuzz testing a by-product of Nile design...should it be?
If you want to check out the state of the art for Dungeons and Dragons POLs 
check out Fantasy Grounds...XML hell.  We can do better.


Re: [fonc] Design of web, POLs for rules. Fuzz testing nile

2013-02-13 Thread Alan Kay
Or the (earlier) Smalltalk Models-Views-Controllers mechanism, which had a 
dynamic language with dynamic graphics to allow quite a bit of flexibility with 
arbitrary models. 




 From: David Harris dphar...@telus.net
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Wednesday, February 13, 2013 7:44 AM
Subject: Re: [fonc] Design of web, POLs for rules. Fuzz testing nile
 

Alan --


Yes, we seem to be slowly getting back to the NeWS (Network extensible Windowing 
System) paradigm, which used a modified Display PostScript to allow the 
intelligence, including user input, to live in the terminal (as opposed to the 
X-Windows model).  But I am sure I am teaching my grandmother to suck eggs 
here, sorry :-) .


David
[[ NeWS = Network extensible Windowing System 
http://en.wikipedia.org/wiki/NeWS ]]




Re: [fonc] Terminology: Object Oriented vs Message Oriented

2013-02-13 Thread Alan Kay
Hi Barry

I like your characterization, and do think the next level will also require a 
qualitatively different approach.

Cheers,

Alan




 From: Barry Jay barry@uts.edu.au
To: fonc@vpri.org 
Sent: Wednesday, February 13, 2013 1:13 PM
Subject: Re: [fonc] Terminology: Object Oriented vs Message Oriented
 

Hi Alan,

the phrase I picked up on was "doing experiments". One way to think of
the problem is that we are trying to automate the scientific process,
which is a blend of reasoning and experiments. Most of us focus on one
or the other, as in deductive AI versus databases of common knowledge,
but the history of physics etc suggests that we need to develop both
within a single system, e.g. a language that supports both higher-order
programming (for strategies, etc) and generic queries (for conducting
experiments on newly met systems). 

Yours,
Barry


On 02/14/2013 02:26 AM, Alan Kay wrote: 
Hi Thiago

I think you are on a good path.

One way to think about this problem is that the broker is a human programmer 
who has received a module from half way around the world that claims to 
provide important services. The programmer would confine it in an address 
space and start doing experiments with it to try to discover what it does 
(and/or perhaps how well its behavior matches up to its claims). Many of the 
discovery approaches of Lenat in AM and Eurisko could be very useful here.

Another part of the scaling of modules approach could be to require modules 
to have much better models of the environments they expect/need in order to 
run.

For example, suppose a module has a variable that it would like to refer to 
some external resource. Both static and dynamic typing are insufficient here 
because they are only about kinds of results rather than meanings of results.

But we could readily imagine a language in which the variable had associated 
with it a dummy or stand-in model of what is desired. It could be a slow 
version of something we are hoping to get a faster version of. It could be 
sample values and tests, etc. All of these would be useful for debugging our 
module -- in fact, we could make this a requirement of our module system, 
that the modules carry enough information to allow them to be debugged with 
only their own model of the environment.

And the more information the model has, the easier it will be for a program 
to see if the model of an environment for a module matches up to possible 
modules out in the environment when the system is running for real.
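
One way to render this in code -- a sketch with invented names, not an actual 
VPRI mechanism -- is for the module's external reference to carry a slow 
reference implementation plus sample values, so a candidate provider is 
accepted only if it agrees with the model on the samples:

class NeedsSort:
    # A module's stand-in model for an external 'sort' resource:
    # a slow-but-trusted version, plus sample value/test pairs.
    samples = [([3, 1, 2], [1, 2, 3]), ([], [])]

    @staticmethod
    def slow_reference(xs):  # obviously correct, obviously slow
        out = list(xs)
        for i in range(len(out)):
            for j in range(i + 1, len(out)):
                if out[j] < out[i]:
                    out[i], out[j] = out[j], out[i]
        return out

def bind(model, candidate):
    # Accept a provider only if it matches the model on the samples;
    # the module can also be debugged against slow_reference alone.
    for arg, expected in model.samples:
        if candidate(arg) != expected:
            return model.slow_reference  # fall back to the stand-in
    return candidate

fast_sort = bind(NeedsSort, sorted)  # 'sorted' agrees with the model
print(fast_sort([5, 4, 6]))          # [4, 5, 6]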


Cheers,


Alan




 From: Thiago Silva tsi...@sourcecraft.info
To: fonc fonc@vpri.org 
Sent: Wednesday, February 13, 2013 2:09 AM
Subject: Re: [fonc] Terminology: Object Oriented vs Message Oriented
 
Hello,

as I was thinking over these problems today, here are some initial thoughts, 
just to get the conversation going...

The first time I read about the Method Finder and Ted's memo, I tried to 
grasp the broader issue, and I'm still thinking of some interesting examples 
to explore.

I can see the problem of finding operations by their meanings, the problem of 
finding objects by the services they provide, and the overall structure of 
the discovery, negotiation and binding.

My feeling is that, besides using worlds as mechanism, an explicit discovery 
context may be required (though I can't say much without further 
experimentation), especially when trying to figure out operations that don't 
produce a distinguishable value but rather change the state of computation 
(authenticating, opening a file, sending a message through the network, etc.) 
or when doing remote discovery.

For brokering (and I'm presuming the use of such entities, as I could not get 
rid of them in my mind so far), my first thought was that a chain of brokers 
of some sort could be useful in the architecture, where each could have 
specific ways of mediating discovery and negotiation through the levels (or 
narrowed options, providing isolation for some services. Worlds come to mind).

During the binding time, I think it would be important that some requirements 
of the client could be relaxed or even be tagged optional, to allow the 
module to execute at least a subset of its features (or to execute features 
with suboptimal operations) when full binding isn't possible -- though this 
might require special attention to guarantee that e.g. disabling optional 
features doesn't break the execution.

Further, different versions of services may require different kinds of 
pre/post-processing (e.g. initialization and finalization routines). When 
abstracting a service (e.g. storage) like this, I think it's when the glue 
code starts to require sophistication (because it needs to fill more 
blanks)...and to have it automated, the provider will need to make 
requirements to the client as well. This is where I think a common vocabulary 
will be more necessary.

--
Thiago

Excerpts from Alan Kay's message of 2013-02-12 16:12:40 -0300:
 Hi Jeff

Re: [fonc] Design of web, POLs for rules. Fuzz testing nile

2013-02-13 Thread Alan Kay
Hi Miles

First, my email was not about Ted Nelson, Doug Engelbart or what massively 
distributed media should be like. It was strictly about architectures that 
allow a much wider range of possibilities.

Second, can you see that your argument really doesn't hold? This is because it 
even more justifies oral speech rather than any kind of writing -- and for 
hundreds of thousands of years rather than a few thousand. The invention of 
writing was very recent and unusual. Most of the humans who have lived on the 
earth never learned it. Using your logic, humans should have stuck with oral 
modes and not bothered to go through all the work to learn to read and write.

There is also more than a tinge of false Darwin in your argument. 
Evolutionary-like processes don't optimize, they just find fits to the 
environment and ecology that exists. The real question here is not "what do 
humans want?" (consumerism finds this and supplies it to the general detriment 
of society), but "what do humans *need*?" (even if what we need takes a lot of 
learning to take on).






 From: Miles Fidelman mfidel...@meetinghouse.net
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, February 13, 2013 4:58 PM
Subject: Re: [fonc] Design of web, POLs for rules. Fuzz testing nile
 
Alan Kay wrote:
 
 Or you could look at the actual problem a web has to solve, which is to 
 present arbitrary information to a user that comes from any of several 
 billion sources. Looked at from this perspective we can see that the current 
 web design could hardly be more wrong headed. For example, what is the 
 probability that we can make an authoring app that has all the features 
 needed by billions of producers?

Hmmm let me take an opposing view here, at least for the purpose of 
playing devil's advocate:

1. Paper and ink have served for 1000s of years, and with the addition of the 
printing press and libraries have served to distribute and preserve 
information, from several billion sources, for an awfully long time.

2. If one actually looks at what people use when generating and distributing 
information it tends to be essentially smarter paper - word processors, 
spreadsheets, powerpoint slides; and when we look at distribution systems, it 
comes down to email and the electronic equivalent of file rooms and libraries.

Sure, we've added additional media types to the mix, but the basic model 
hasn't changed all that much.  Pretty much all the more complicated 
technologies people have come up with don't actually work that well, or get 
used that much.  Even in the web world, single direction hyperlinks dominate 
(remember all the complicated, bi-directional links that Ted Nelson came up 
with).  And when it comes to groupware, what dominates seems to be chat and 
twitter.

There's a pretty good argument to be made that what works are powerful 
building blocks that can be combined in lots of different ways; driven by some 
degree of natural selection and evolution. (Where the web is concerned, first 
there was ftp, then techinfo, then gopher, and now the web.  Simple mashups 
seem to have won out over more complicated service oriented architectures.  We 
might well have plateaued.)

Miles Fidelman

-- In theory, there is no difference between theory and practice.
In practice, there is. -- Yogi Berra



Re: [fonc] Design of web, POLs for rules. Fuzz testing nile

2013-02-13 Thread Alan Kay
My suggestion is to learn a little about biology and anthropology and media as 
it intertwines with human thought, then check back in.




 From: Miles Fidelman mfidel...@meetinghouse.net
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, February 13, 2013 5:56 PM
Subject: Re: [fonc] Design of web, POLs for rules. Fuzz testing nile
 
Hi Alan
 
 First, my email was not about Ted Nelson, Doug Engelbart or what massively 
 distributed media should be like. It was strictly about architectures that 
 allow a much wider range of possibilities.

Ahh... but my argument is that the architecture of the current web is SIMPLER 
than earlier concepts but has proven more powerful (or at least more 
effective).

 
 Second, can you see that your argument really doesn't hold? This is because 
 it even more justifies oral speech rather than any kind of writing -- and 
 for hundreds of thousands of years rather than a few thousand. The invention 
 of writing was very recent and unusual. Most of the humans who have lived on 
 the earth never learned it. Using your logic, humans should have stuck with 
 oral modes and not bothered to go through all the work to learn to read and 
 write.

Actually, no.  Oral communication begat oral communication at a distance - via 
radio, telephone, VoIP, etc. - all of which have pretty much plateaued in 
terms of functionality.  Written communication is (was) something new and 
different, and the web is a technological extension of written communication.  
My hypothesis is that, as long as we're dealing with interfaces that look a 
lot like paper (i.e., screens), we may have plateaued as to what's effective 
in augmenting written communication with technology.  Simple building blocks 
that we can mix and match in lots of ways.

Now... if we want to talk about new forms of communication (telepathy?), or 
new kinds of interfaces (3d immersion, neural interfaces that align with some 
of the kinds of parallel/visual thinking that we do internally), then we start 
to need to talk about qualitatively different kinds of technological 
augmentation.

Of course there is a counter-argument to be made that our kids engage in a new 
and different form of cognition - by dint of continual immersion in large 
numbers of parallel information streams.  Then again, we seem to be talking 
lots of short messages (twitter, texting), and there does seem to be a lot of 
evidence that multi-tasking and information overload are counter-productive 
(do we really need society-wide ADHD?).

 
 There is also more than a tinge of false Darwin in your argument. 
 Evolutionary-like processes don't optimize, they just find fits to the 
 environment and ecology that exists.

Umm.. that sounds a lot like optimizing to me.  In any case, there's the 
question of what are we trying to optimize?  That seems to be both an 
evolutionary question and one of emergent behaviors.
 The real question here is not "what do humans want?" (consumerism finds this 
 and supplies it to the general detriment of society), but "what do humans 
 *need*?" (even if what we need takes a lot of learning to take on).

Now that's truly a false argument.  Consumerism, as we tend to view it, is 
driven by producers, advertising, and creation of false needs.


Cheers,

Miles

-- In theory, there is no difference between theory and practice.
In practice, there is. -- Yogi Berra



Re: [fonc] Terminology: Object Oriented vs Message Oriented

2013-02-12 Thread Alan Kay
.) This is part of the big problem in OOP today, 
because it is mostly a complicated way of making new data structures whose 
fields are munged by setters.


Back to your original question, it *might* have helped to have better 
terminology. The Simula folks tried this in the first Simula, but their choice 
of English words was confusing (they used "Activity" for "Class" and "Process" 
for "Instance"). This is almost good and much more in keeping with what should 
be the philosophical underpinnings of this kind of design. 

After being told that no one had understood this (I and two other grad students 
had to read the machine code listing of the Simula compiler to understand its 
documentation!), Nygaard and Dahl chose "Class" and "Instance" for Simula 
67. I chose these for Smalltalk also because why multiply terms? (I should have 
chosen better terms here also.)

To sum up, besides the tiny computers we had to use back then, we didn't have a 
good enough theory of messaging -- we did have a start that was based on Dave 
Fisher's "Control Definition Language" (CMU 1970 thesis). But then we got 
overwhelmed by the excitement of being able to make personal computing on the 
Alto. A few years later I decided that "sending messages" was not a good 
scaling idea, and that something more general to get needed resources from the 
outside needed to be invented.


Cheers,

Alan




 From: Loup Vaillant l...@loup-vaillant.fr
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, February 12, 2013 7:15 AM
Subject: [fonc] Terminology: Object Oriented vs Message Oriented
 
This question was prompted by a quote by Joe Armstrong about OOP[1].
It is for Alan Kay, but I'm totally fine with a relevant link.  Also,
"I don't know" and "I don't have time for this" are perfectly okay.

Alan, when the term "object oriented" you coined was hijacked by
Java and Co., you made clear that you were mainly about messages, not
classes. My model of you even says that Erlang is far more OO than Java.

Then why did you choose the term "object" instead of "message" in the
first place?  Was there a specific reason for your preference, or did
you simply not bother foreseeing any terminology issue? (20/20 hindsight and 
such.)

Bonus question: if you had chosen "message" instead, do you think it
would have been hijacked too?

Thanks,
Loup.


[1]: http://news.ycombinator.com/item?id=5205976
     (This is for reference, you don't really need to read it.)


Re: [fonc] Terminology: Object Oriented vs Message Oriented

2013-02-12 Thread Alan Kay
Hi Jeff

I think intermodule communication schemes that *really scale* is one of the 
most important open issues of the last 45 years or so.

It is one of the several pursuits written into the STEPS proposal that we 
didn't use our initial efforts on -- so we've done little to advance this over 
the last few years. But now that the NSF funded part of STEPS has concluded, we 
are planning to use much of the other strand of STEPS to look at some of these 
neglected issues.

There are lots of facets, and one has to do with messaging. The idea that 
"sending a message" has scaling problems is one that has been around for quite 
a while. It was certainly something that we pondered at PARC 35 years ago, and 
it was an issue earlier for both the ARPAnet and its offspring: the Internet.

Several members of this list have pointed this out also.

There are similar scaling problems with the use of tags in XML and EMI etc. 
which have to be agreed on somehow


Part of the problem is that for vanilla sends, the sender has to know the 
receiver in some fashion. This starts requiring the interior of a module to 
know too much if this is a front line mechanism.

This leads to wanting to do something more like LINDA "coordination" or 
"publish and subscribe", where there are pools of producers and consumers who 
don't have to know explicitly about each other. A "send" is now a general 
request for a resource. But the vanilla approaches here still require that the 
sender and receiver have a fair amount of common knowledge (because the 
matching is usually done on terms in common).
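
A minimal tuple-space sketch in Python (illustrative only): producers put 
tuples into a shared space, consumers take anything matching a template, and 
neither side names the other -- though, as noted, both still share the 
vocabulary used in the tuples themselves.

WILD = object()   # wildcard slot in a template

class TupleSpace:
    # LINDA-style coordination: out() publishes a tuple, take() removes
    # and returns a matching one. Producers and consumers never
    # reference each other directly.
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        self.tuples.append(tup)

    def take(self, template):
        for tup in self.tuples:
            if len(tup) == len(template) and all(
                t is WILD or t == v for t, v in zip(template, tup)
            ):
                self.tuples.remove(tup)
                return tup
        return None

space = TupleSpace()
space.out(("sine", 30, 0.5))            # some producer, somewhere
print(space.take(("sine", 30, WILD)))   # ('sine', 30, 0.5)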

For example, in order to invoke a module that will compute the sine of an 
angle, do you and the receiver both have to agree about the term "sine"? In APL 
I think the name of this function is "circle 1" and in Smalltalk it's 
"degreeSin", etc. 

Ted Kaehler solved this problem some years ago in Squeak Smalltalk with his 
"message finder". For example, if you enter "3. 4. 7" Squeak will instantly come 
back with:
   3 bitOr: 4 --> 7
   3 bitXor: 4 --> 7
   3 + 4 --> 7

For the sine example you would enter "30. 0.5" and Squeak will come up with: 
   30 degreeSin --> 0.5

The method finder is acting a bit like Doug Lenat's discovery systems. Simple 
brute force is used here (Ted safely executes all the methods in the system 
that could fit, to see what they do.)

One of the solutions at PARC for dealing with a part of the problem is the idea 
of "send an agent, not a message". It was quickly found that defining file 
formats for all the different things that could be printed on the new laser 
printer was not scaling well. The solution was to send a program that would 
just execute safely and blindly in the printer -- the printer would then just 
print out the bit bin. This was known as PostScript when it came out in the 
world.

The Trickles idea from Cornell has much of the same flavor.

One possible starting place is to notice that there are lots more terms that 
people can use than the few that are needed to make a powerful compact 
programming language. So why not try to describe meanings and match on meanings 
-- and let there be not just matching (which is like a password) but 
negotiation, which is what a discovery agent does.

And so forth. I think this is a difficult but doable problem -- it's easier 
than AI, but has some tinges of it.

Got any ideas?

Cheers,

Alan





 From: Jeff Gonis jeff.go...@gmail.com
To: Alan Kay alan.n...@yahoo.com 
Cc: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, February 12, 2013 10:33 AM
Subject: Re: [fonc] Terminology: Object Oriented vs Message Oriented
 

I see no one has taken Alan's bait and asked the million dollar question: if 
you decided that messaging is no longer the right path for scaling, what 
approach are you currently using?
I would assume that FONC is the current approach, meaning, at the risk of 
grossly over-simplifying and sounding ignorant, "problem oriented languages 
allowing for compact expression of meaning".  But even here, FONC struck me as 
providing vastly better ways of creating code that, at its core, still used 
messaging for robustness, etc., rather than using something entirely different.
Have I completely misread the FONC projects? And if not messaging, what 
approach are you currently using to handle scalability?
A little more history ...


The first Smalltalk (-72) was "modern" (as used below), and similar to Erlang 
in several ways -- for example, messages were received with structure and 
pattern matching, etc. The language was extended using the same mechanisms ...


Cheers,



Alan




 From: Brian Rice briantr...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, February 12, 2013 8:54 AM
Subject: Re: [fonc] Terminology: Object Oriented vs Message Oriented
 

Independently of the originally-directed historical intent, I'll pose my own 
quick perspective.

Perhaps a contrast with Steve Yegge's

Re: [fonc] Terminology: Object Oriented vs Message Oriented

2013-02-12 Thread Alan Kay
Hi Miles

(Again, "The Early History of Smalltalk" has some of this history ...)

It is unfair to Carl Hewitt to say that Actors were his reaction to 
Smalltalk-72 (because he had been thinking early thoughts from other 
influences). And I had been doing a lot of thinking about the import of his 
Planner language.

But that is the simplest way of stating the facts and the ordering. 

ST-72 and the early Actors follow on were very similar. The Smalltalk that 
didn't get made, -71, was a kind of merge of the object idea, Logo, and 
Carl's Planner system (which predated Prolog and was in many respects more 
powerful). Planner used pattern-directed invocation and I thought you could 
both receive messages with it if it were made the interface of an object, and 
also use it for deduction. Smalltalk-72 was a bit of an accident

The divergence later was that we got a bit dirtier as we made a real system 
that you could program a real system in. Actors got cleaner as they looked at 
many interesting theoretical possibilities for distributed computing etc. My 
notion of "object oriented" would now seem to be very actor-like.

Cheers,

Alan





 From: Miles Fidelman mfidel...@meetinghouse.net
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, February 12, 2013 11:05 AM
Subject: Re: [fonc] Terminology: Object Oriented vs Message Oriented
 
Alan Kay wrote:
 A little more history ...
 
 The first Smalltalk (-72) was "modern" (as used below), and similar to 
 Erlang in several ways -- for example, messages were received with 
 structure and pattern matching, etc. The language was extended using the 
 same mechanisms ...

Alan,

As I recall, some of your early writings on Smalltalk sounded very actor-like 
- i.e., objects as processes, with lots of messages floating around, rather 
than a sequential thread-of-control model. Or is my memory just getting fuzzy? 
 In any case, I'm surprised that the term "actor" hasn't popped up in this 
thread, along with "object" and "messaging".

Miles Fidelman



-- In theory, there is no difference between theory and practice.
In practice, there is. -- Yogi Berra



Re: [fonc] yet another meta compiler compiler

2013-02-08 Thread Alan Kay
Looks nice to me!

But no ivory towers around to pillage. (However planting a few seeds is almost 
always a good idea)

Cheers,

Alan





 From: Charles Perkins ch...@kuracali.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Friday, February 8, 2013 3:52 PM
Subject: [fonc] yet another meta compiler compiler
 
While we're all waiting for the next STEP report I thought I'd share something 
I've been working on, inspired by OMeta and by the META-II paper and by the 
discussions on this list from November.

I've written up the construction of a parser generator and compiler compiler 
here: https://github.com/charlesap/ibnf/blob/master/SyntaxChapter.pdf?raw=true

The source code can be had here: https://github.com/charlesap/ibnf

Don't be fooled by the footnotes and references, this is a piece of outsider 
literature. I am a barbarian come to pillage the ivory tower. Yarr.

Chuck


Re: [fonc] deriving a POL from existing code

2013-01-08 Thread Alan Kay
Yes indeed, I quite agree with David. 

One of the main points in the 2012 STEPS report (when I get around to finally 
finishing it and getting it out) is exactly David's -- that it is a huge design 
task to pull off a good DSL -- actually it is a double design task: you first 
need to come up with a great design of the domain area that is good enough to 
make it worthwhile, and then to try to design and make a "math" for the 
fruitful ways the domain area can be viewed.

This pays off if the domain area is very important (and often large and 
complicated). In the STEPS project both Nile (factors of 100 to 1000) and OMeta 
(wide spectrum flexibility and compactness) paid off really well. K-script also 
has paid off well because it enabled us to get about a 5-6x reduction in the 
Smalltalk code we had done the first scaffolding of the UI and Media in. Maru 
is working well as the backend for Nile and is very compact, etc.

In other words, the DSLs that really pay off are actual languages with all 
that implies.

But it's a little sobering to look at the many languages we did to learn about 
doing this, and ones that wound up being awkward, etc. and not used in the end.

On the other hand, our main point in doing STEPS was for both learning -- 
answering some lunch questions we've had for years -- and also to put a lot 
of effort into getting a handle on actual complexities vs complications. We 
purposely picked a well-mined set of domains -- personal computing -- so we 
would not have to invent fundamental concepts, but rather to take areas that 
are well known in one sense, and try to see how they could be captured in a 
very compact but readable form.

In other words, it's good to choose the battles to be fought and those to be 
avoided -- it's hard to invent everything. This was even true at Xerox PARC 
-- even though it seems as though that is what we did. However, pretty much 
everything there in the computing research area had good -- but flawed -- 
precursors from the 60s (and from many of us who went to PARC). So the idea 
there was brand new HW-SW-UI-Networking but leaning on the promising 
gestures and failures of the 60s. This interesting paradox of "from scratch 
but don't forget" worked really well.

Cheers,

Alan






 From: David Barbour dmbarb...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, January 8, 2013 8:19 AM
Subject: Re: [fonc] deriving a POL from existing code
 

Take a look at the Inform 7 language (http://inform7.com/) and its modular 
'rulebooks'.


Creating good rules-based languages isn't trivial, mostly because ad-hoc rules 
can interfere in ways that are awkward to reason about or optimize. Temporal 
logic (e.g. Dedalus, Bloom) and constraint-logic techniques are both 
appropriate and effective. I think my RDP will also be effective. 


Creating a good POL can be difficult. (cf. 
http://lambda-the-ultimate.org/node/4653)



On Tue, Jan 8, 2013 at 7:33 AM, John Carlson yottz...@gmail.com wrote:

Has anyone ever attempted to automatically add meaning, semantics, longer 
variable names, loops, and comments to code?  Just how good are 
the beautifiers out there, and can we do better?

No, I'm not asking anyone to break a license agreement.  Ideally, I would 
want this to work on code written by a human being--me.

Yes, I realize that literate programming is the way to go.  I just have never 
explored other options before, and would like to know about the literature.


Say I am trying to derive language for requirements and use cases based on 
existing source code.

I believe this may be beyond current reverse engineering techniques which 
stop at the UML models.  I don't want models, I want text, perhaps written in 
a Problem Oriented Language (POL).

That is, how does one derive a good POL from existing code?  Is this the same 
as writing a scripting language on top of a library?

What approaches have been tried?

Here's the POL I want.  I want a POL to describe game rules at the same order 
of magnitude as English.  I am not speaking of animation or resources--just 
rules and constraints.  Does the Object Constraint Language suffice for this?

Thanks,

John






-- 
bringing s-words to a pen fight 


Re: [fonc] Final STEP progress report?

2013-01-04 Thread Alan Kay
It turns out that the due date is actually a "due interval" that starts Jan 
1st and extends for a few months ... so we are working on putting the report 
together amongst other activities ...

Cheers,

Alan




 From: Mathnerd314 mathnerd314@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Friday, January 4, 2013 8:43 AM
Subject: Re: [fonc] Final STEP progress report?
 
On 11/7/2012 4:37 PM, Kim Rose wrote:
 Hello,
 
 For those of you interested and waiting -- the NSF (National Science 
 Foundation) funding for the 5-year STEPS project has now finished (we 
 stretched that funding to last for 6 years).  The final report on this work 
 will be published and available on our website by the end of this calendar 
 year.
It's four days past the end of the calendar year, and I don't see a final 
report: http://www.vpri.org/html/writings.php

Am I looking in the wrong place? Or will it be a few more days until it's 
published?

-- Mathnerd314


Re: [fonc] Final STEP progress report?

2013-01-04 Thread Alan Kay
Sliding deadlines very often allow other pursuits to creep in ...

Cheers,

Alan




 From: Dale Schumacher dale.schumac...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Friday, January 4, 2013 8:59 AM
Subject: Re: [fonc] Final STEP progress report?
 
Kind of like "music starts at 9pm" :-)

We're all anxious to see the results of your work.  Thanks (in
advance) for sharing it.

On Fri, Jan 4, 2013 at 10:51 AM, Alan Kay alan.n...@yahoo.com wrote:
 It turns out that the due date is actually a "due interval" that starts
 Jan 1st and extends for a few months ... so we are working on putting the
 report together amongst other activities ...

 Cheers,

 Alan

 
 From: Mathnerd314 mathnerd314@gmail.com
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Friday, January 4, 2013 8:43 AM
 Subject: Re: [fonc] Final STEP progress report?

 On 11/7/2012 4:37 PM, Kim Rose wrote:
 Hello,

 For those of you interested and waiting -- the NSF (National Science
 Foundation) funding for the 5-year STEPS project has now finished (we
 stretched that funding to last for 6 years).  The final report on this work
 will be published and available on our website by the end of this calendar
 year.
 It's four days past the end of the calendar year, and I don't see a final
 report: http://www.vpri.org/html/writings.php

 Am I looking in the wrong place? Or will it be a few more days until it's
 published?

 -- Mathnerd314


Re: [fonc] Current topics

2013-01-03 Thread Alan Kay
Hi David

I think both of your essays are important, as is the general style of 
aspiration.

The "ingredients of a soup" idea is one of the topics we were supposed to work 
on in the STEPS project, but it counts as a shortfall: we wound up using our 
time on other parts. We gesture at it in some of the yearly reports.

The thought was that a kind of "semantic publish and subscribe" scheme -- that 
dealt in descriptions and avoided having to know names of functionalities as 
much as possible -- would provide a very scalable loose coupling mechanism. We 
were hoping to get beyond the pitfalls of attempts at program synthesis from 
years ago that used pre-conditions and post-conditions to help matchers paste 
things together.
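
As a cartoon of dealing in descriptions rather than names (an invented scheme, 
purely illustrative): providers advertise property sets plus runnable 
examples, and a requester matches on the description and then verifies the 
examples -- a first step away from password-like matching toward something 
that could grow into negotiation.

import math

# Providers advertise descriptions (property sets) and runnable
# examples, never a canonical name consumers must already know.
providers = [
    {
        "offers": {"trig", "sine", "degrees"},
        "examples": [((30,), 0.5)],
        "impl": lambda d: math.sin(math.radians(d)),
    },
]

def discover(wanted, required_examples):
    # Match on described meaning, then verify behavior on examples.
    for p in providers:
        if wanted <= p["offers"] and all(
            abs(p["impl"](*args) - out) < 1e-9
            for args, out in required_examples
        ):
            return p["impl"]
    return None

sine = discover({"sine", "degrees"}, [((30,), 0.5)])
print(sine(90))   # 1.0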


I'm hoping that you can cast more light on this area. One of my thoughts is 
that a good matcher might be more like a dynamic discovery system (e.g. 
Lenat's Eurisko) than a simple matcher 

It's interesting to think of what the commonalities of such a system should be 
like. A thought here was that a suitable descriptive language would be -- could 
be -- should be -- lots smaller and simpler than a set of standard conventions 
and tags for functionality.

Joe Goguen was a good friend of mine, and his early death was a real tragedy. 
As you know, he spent many years trying to find sweet spots in formal semantics 
that could also be used in practical ways...

Best wishes,

Alan




 From: David Barbour dmbarb...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Wednesday, January 2, 2013 11:09 PM
Subject: Re: [fonc] Current topics
 

On Tue, Jan 1, 2013 at 7:53 AM, Alan Kay alan.n...@yahoo.com wrote:

As humans, we are used to being sloppy about message creation and sending, and 
rely on negotiation and good will after the fact to deal with errors. 


You might be interested in my article on avoiding commitment in HCI, and its 
impact on programming languages. I address some issues of negotiation and 
clarification after-the-fact. I'm interested in techniques that might make 
this property more systematic and compositional, such as modeling messages or 
signals as having probabilistic meanings in context.




you are much better off making -- with great care -- a few kinds of 
relatively big modules as basic building blocks than to have zillions of 
different modules being constructed by vanilla programmers


Large components are probably a good idea if humans are hand-managing the glue 
between them. But what if there was another way? Instead of modules being 
rigid components that we painstakingly wire together, they can be ingredients 
of a soup - with the melding and combination process being largely automated.


If the modules are composed automatically, they can become much smaller, more 
specialized and reusable. Large components require a lot of inefficient 
duplication of structure and computation (seen even in biology).
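
A toy of the automated melding (invented typing scheme, purely illustrative): 
small modules declare what they consume and produce, and a planner chains them 
by matching those declarations, with no hand-written glue between particular 
pairs.

# Each small module declares consumed -> produced "types" (strings here).
modules = [
    ("parse", "text",   "tokens", lambda s: s.split()),
    ("count", "tokens", "counts", lambda ts: {t: ts.count(t) for t in ts}),
    ("top",   "counts", "answer", lambda c: max(c, key=c.get)),
]

def compose(start_type, goal_type):
    # Automatically chain modules from start to goal by type matching.
    chain, current = [], start_type
    while current != goal_type:
        if len(chain) > len(modules):   # no route; give up
            return None
        step = next((m for m in modules if m[1] == current), None)
        if step is None:
            return None
        chain.append(step)
        current = step[2]
    def run(value):
        for _, _, _, fn in chain:
            value = fn(value)
        return value
    return run

pipeline = compose("text", "answer")
print(pipeline("a b a c a b"))   # 'a'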


 


Note that desires for runnable specifications, etc., could be quite harmonious 
with a viable module scheme that has great systems integrity.


Certainly. Before his untimely departure, Joseph Goguen was doing a lot of 
work on modular, runnable specifications (the BOBJ - behavioral OBJ - language, 
like a fusion of OOP and term rewriting). 
 
Regards,


Dave



[fonc] Current topics

2013-01-01 Thread Alan Kay
The most recent discussions get at a number of important issues whose 
pernicious snares need to be handled better.

In an analogy to sending messages most of the time successfully through noisy 
channels -- where the noise also affects whatever we add to the messages to 
help (and we may have imperfect models of the noise) -- we have to ask: what 
kinds and rates of error would be acceptable?

We humans are a noisy species. And on both ends of the transmissions. So a 
message that can be proved "perfectly received as sent" can still be 
interpreted poorly by a human directly, or by software written by humans.


A wonderful specification language that produces runnable code good enough to 
make a prototype is still going to require debugging, because it is hard to get 
the spec-specs right (even with a machine version of human-level AI to help 
with comprehension of larger goals).

As humans, we are used to being sloppy about message creation and sending, and 
rely on negotiation and good will after the fact to deal with errors. 

We've not done a good job of dealing with these tendencies within programming 
-- we are still sloppy, and we tend not to create negotiation processes to deal 
with various kinds of errors. 

However, we do see something that is actual engineering -- with both care in 
message sending *and* negotiation -- where eventual failure is not tolerated: 
mostly in hardware, and in a few vital low-level systems which have to scale 
pretty much error-free, such as the Ethernet and Internet.

My prejudices have always liked dynamic approaches to problems with error 
detection and improvements (if possible). Dan Ingalls was (and is) a master at 
getting a whole system going in such a way that it has enough integrity to 
exhibit its failures and allow many of them to be addressed in the context of 
what is actually going on, even with very low level failures. It is interesting 
to note the contributions from what you can say statically (the higher the 
level the language the better) -- what can be done with "meta" (the more 
dynamic and deep the integrity, the more powerful and safe "meta" becomes) -- 
and the tradeoffs of modularization (hard to sum up, but as humans we don't 
give all modules the same care and love when designing and building them).

Mix in real human beings and a world-wide system, and what should be done? (I 
don't know, this is a question to the group.)

There are two systems I look at all the time. The first is lawyers contrasted 
with engineers. The second is human systems contrasted with biological systems.

There are about 1.2 million lawyers in the US, and about 1.5 million engineers 
(some of them in computing). The current estimates of programmers in the US 
are about 1.3 million (US Dept of Labor counting programmers and developers). 
Also, the Internet and multinational corporations, etc., internationalizes the 
impact of programming, so we need an estimate of the programmers world-wide, 
probably another million or two? Add in the ad hoc programmers, etc? The 
populations are similar in size enough to make the contrasts in methods and 
results quite striking.

Looking for analogies, to my eye what is happening with programming is more 
similar to what has happened with law than with classical engineering. Everyone 
will have an opinion on this, but I think it is partly because nature is a 
tougher critic on human built structures than humans are on each other's 
opinions, and part of the impact of this is amplified by the simpler shorter 
term liabilities of imperfect structures on human safety than on imperfect laws 
(one could argue that the latter are much more of a disaster in the long run).

And, in trying to tease useful analogies from Biology, one I get is that the 
largest gap in complexity of atomic structures is the one from polymers to the 
simplest living cells. (One of my two favorite organisms is Pelagibacter 
unique, which is the smallest non-parasitic standalone organism. Discovered 
just 10 years ago, it is the most numerous known bacterium in the world, and 
accounts for 25% of all of the plankton in the oceans. Still it has about 1300+ 
genes, etc.) 

What's interesting (to me) about cell biology is just how much stuff is 
organized to make integrity of life. Craig Venter thinks that a minimal 
hand-crafted genome for a cell would still require about 300 genes (and a 
tiniest whole organism still winds up with a lot of components).

Analogies should be suspect -- both the one to the law, and the one here should 
be scrutinized -- but this one harmonizes with one of Butler Lampson's 
conclusions/prejudices: that you are much better off making -- with great care 
-- a few kinds of relatively big modules as basic building blocks than to have 
zillions of different modules being constructed by vanilla programmers. One of 
my favorite examples of this was the "Beings" master's thesis by Doug Lenat at 
Stanford in the 70s. And this 

Re: [fonc] A META-II for C that fits in a half a sheet of paper

2012-11-22 Thread Alan Kay
Oh yes ... I'd forgotten that I'd given this paper to the 1401 restoration 
group at the Computer History Museum (the 1401 was my first computer more than 
50 years ago now -- it was a bit odd even relative to the more diverse 
designs of its day)

http://ibm-1401.info/AlanKay-META-II.html

Cheers,

Alan






 From: Christian Neukirchen chneukirc...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Thursday, November 22, 2012 7:06 AM
Subject: Re: [fonc] A META-II for C that fits in a half a sheet of paper
 
Reuben Thomas r...@sc3d.org writes:

 On 22 November 2012 07:54, Long Nguyen cgb...@gmail.com wrote:

 Hello everyone,

 I was very impressed with Val Schorre's META-II paper that Dr. Kay gave me
 to read,


 A paper which, as far as I can tell, one still has to pay the ACM to read.
 Sigh.

Or not: http://ibm-1401.info/Meta-II-schorre.pdf

-- 
Christian Neukirchen  chneukirc...@gmail.com  http://chneukirchen.org
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Interview with Alan Kay

2012-11-16 Thread Alan Kay
Hi Jarek

This is the editor's hodgepodge of my oral hodgepodge -- and one could argue 
that the nature of orality militates against trying to simply extract quotes, 
try to fix them, and then print them. I find them pretty much impossible to 
read.


The Cerf and prizes part doesn't make sense (since Vint has been awarded just 
about everything that can be awarded). But he and I have both served on that 
committee (post our awards), and while I regard it as the best most civilized 
committee I've ever been on (a real pleasure!), I'm not so happy about the 
larger process. 


The most important points here (I think) are that (a) giving recognition to 
*everyone* who deserves it just doesn't happen with any of the major awards (b) 
there are biases from faddism, popularity, etc. (c) most of computing's 
advances have to be implemented to assess them, and this almost always requires 
teams -- and so much is learned by team interactions that (I think) the whole 
team should be awarded (even the Draper Prize isn't comprehensive in this 
fashion). 


And, with regard to great hardware/system designers, the Turing Award has only 
been given a few times and there are at least 5 to 10 people who have deeply 
deserved it, including Bob Barton.

With regard to Barton, he was certainly one of the most amazing people I've 
ever worked with. As with some of the other greats, he was not bound in any way 
to mere opinion around him, and was able to identify great goals without regard 
to their difficulty (which was sometimes too high for a particular era).

The class was quite an experience and completely liberating because he forced 
us back to zero, but with the knowledge available as perfume, just not in the 
way of real thinking things through. He was able to demolish his own 
accomplishments as well, so the destruction was total.

Cheers,

Alan



 From: Jarek Rzeszótko jrzeszo...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Friday, November 16, 2012 6:59 AM
Subject: Re: [fonc] Interview with Alan Kay
 

Hi,

Very interesting, but:

[Digression on who, in addition to Cerf, should have won various computing 
prizes…]

I guess that's not the best editing job ever; I for one would like to hear the 
digression, and if they edit it out, mentioning it at all is a bit 
irritating... It would be interesting to hear more details about that Bob 
Barton class, too. 

Cheers,
Jarosław Rzeszótko


2012/11/16 Eugen Leitl eu...@leitl.org


http://www.drdobbs.com/architecture-and-design/interview-with-alan-kay/240003442#

Interview with Alan Kay

By Andrew Binstock, July 10, 2012

The pioneer of object-orientation, co-designer of Smalltalk, and UI luminary
opines on programming, browsers, objects, the illusion of patterns, and how
Socrates could still make it to heaven.

In June of this year, the Association of Computing Machinery (ACM) celebrated
the centenary of Alan Turing's birth by holding a conference with
presentations by more than 30 Turing Award winners. The conference was filled
with unusual lectures and panels (videos are available here) both about
Turing and present-day computing. During a break in the proceedings, I
interviewed Alan Kay — a Turing Award recipient known for many innovations
and his articulated belief that the best way to predict the future is to
invent it.

[A side note: Re-creating Kay's answers to interview questions was
particularly difficult. Rather than the linear explanation in response to an
interview question, his answers were more of a cavalcade of topics, tangents,
and tales threaded together, sometimes quite loosely — always rich, and
frequently punctuated by strong opinions. The text that follows attempts to
create somewhat more linearity to the content. — ALB]

Childhood As A Prodigy

Binstock: Let me start by asking you about a famous story. It states that
you'd read more than 100 books by the time you went to first grade. This
reading enabled you to realize that your teachers were frequently lying to
you.

Kay: Yes, that story came out in a commemorative essay I was asked to write.

Binstock: So you're sitting there in first grade, and you're realizing that
teachers are lying to you. Was that transformative? Did you all of a sudden
view the whole world as populated by people who were dishonest?

Kay: Unless you're completely, certifiably insane, or a special kind of
narcissist, you regard yourself as normal. So I didn't really think that much
of it. I was basically an introverted type, and I was already following my
own nose, and it was too late. I was just stubborn when they made me go
along.

Binstock: So you called them on the lying.

Kay: Yeah. But the thing that traumatized me occurred a couple years later,
when I found an old copy of Life magazine that had the Margaret Bourke-White
photos from Buchenwald. This was in the 1940s — no TV, living on a farm.
That's when I realized that adults were dangerous. Like, really dangerous. I

Re: [fonc] Final STEP progress report?

2012-11-07 Thread Alan Kay
Hi Carl

Just to keep on saying it ... the STEPS project had/has completely different 
goals than the Smalltalk project -- STEPS really is a science project -- or a 
collection of science projects -- that has never been aimed at a deployable 
artifact, but instead is aimed at finding better and more compact ways to 
express meanings. 

We did make it possible to exhibit and permute a wide range of media in 
real-time -- it makes it much easier to show people the results of important 
pieces of code this way -- but this is quite a ways from the packaging and 
finishing that was done for the Smalltalks.

However, it would be possible to do something analogous to a Smalltalk from the 
current state of things, but we are instead pushing even more deeply into 
"Whats" rather than "Hows".

Cheers,

Alan






 From: Carl Gundel ca...@psychesystems.com
To: 'Fundamentals of New Computing' fonc@vpri.org 
Sent: Wednesday, November 7, 2012 2:45 PM
Subject: Re: [fonc] Final STEP progress report?
 
Thanks Kim.  Will there be enough documentation for interested people to be
able to build on top of VPRI's STEPS project?  I'm thinking something like
the blue book with a CDROM.  :-)

-Carl

-Original Message-
From: fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of Kim
Rose
Sent: Wednesday, November 07, 2012 5:37 PM
To: Fundamentals of New Computing
Subject: Re: [fonc] Final STEP progress report?

Hello,

For those of you interested and waiting -- the NSF (National Science
Foundation) funding for the 5-year STEPS project has now finished (we
stretched that funding to last for 6 years).  The final report on this work
will be published and available on our website by the end of this calendar
year.

We have received some more funding (although not to the extent of this
original 5-year grant) and our work will carry on.   That said, we're always
looking for more funding to maintain day to day operations so we welcome any
support and donations at any time.  :-)

Regards,
Kim Rose
Viewpoints Research Institute

Viewpoints Research is a 501(c)(3) nonprofit organization dedicated to
improving powerful ideas education for the world's children and advancing
the state of systems research and personal computing. Please visit us online
at www.vpri.org





On Nov 8, 2012, at 5:23 AM, Carl Gundel wrote:

 Well, I do hope that VPRI has managed to find more funding money so that
this doesn't have to be a final STEP report.  ;-)
 
 -Carl
 
 -----Original Message-----
 From: fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of
Loup Vaillant
 Sent: Wednesday, November 07, 2012 1:45 PM
 To: Fundamentals of New Computing
 Subject: [fonc] Final STEP progress report?
 
 Hi,
 
 The last two progress reports having been published in October, I was
wondering if we will have the final one soon.  Do we have an estimate of when
this might be completed? As a special request, I'd like to know a bit about
what to expect.
 
 Unless of course it's all meant to be a surprise.  But please at least
tell me you have decided not to disclose anything right now. In any case, I
promise I'll be patient.
 
 Cheers,
 Loup.
 
 PS: If I sound like a jumping impatient child, that's because right
     now, I feel like one.


Re: [fonc] Alan Kay in the news [german]

2012-07-19 Thread Alan Kay
Hi John

Sorry to hear about your nerve problems.

I got a variety of books to get started -- including Aaron Shearer's and 
Christopher Parkening's. 


Then I started corresponding with a fabulous and wonderfully expressive player 
in the Netherlands I found on YouTube -- Enno Voorhorst.

Check out: http://www.youtube.com/watch?v=viVl-G4lFQ4

I like his approach very much -- part of it is that he started out as a violin 
player, and still does a fair amount of playing in string quartets, etc. You 
can hear that his approach to tremolo playing is that of a solo timbre rather 
than an effect.


And some of the violin ideas of little to no support for the left hand do work 
well on classical guitar. But many of the barres (especially the hinged ones) do 
require some thumb support. What has been interesting about this process is to 
find out how much of the basic classical guitar technique is quite different 
from steel string jazz chops -- it's taken a while to unlearn some spinal 
reflexes that were developed a lifetime ago.


Cheers,

Alan





 From: John Zabroski johnzabro...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Thursday, July 19, 2012 5:40 PM
Subject: Re: [fonc] Alan Kay in the news [german]
 




On Wed, Jul 18, 2012 at 2:01 PM, Alan Kay alan.n...@yahoo.com wrote:

Hi Long,


I can keep my elbows into my body typing on a laptop. My problem is that I 
can't reach out further for more than a few seconds without a fair amount of 
pain from all the ligament, tendon, and rotator cuff damage along that axis. If 
I get that close to the keys on an organ I still have trouble reaching the 
other keyboards and my feet are too far forward to play the pedals. Similar 
geometry with the piano, plus the reaches on the much wider keyboard are too 
far on the right side. Also at my age there are some lower back problems from 
trying to lean in at a low angle -- this doesn't work.



But, after a few months I realized I could go back to guitar playing (which I 
did a lot 50 years ago) because you can play guitar with your right elbow in. 
After a few years of getting some jazz technique back and playing in some 
groups in New England in the summers, I missed the polyphonic classical music 
and wound up starting to learn classical guitar a little over a year ago. 
This has proved to be quite a challenge -- much more difficult than I 
imagined it would be -- and there was much less transfer from jazz/steel 
string technique that I would have thought. It not only feels very different 
physically, but also mentally, and has many extra dimensions of nuance and 
color that are both its charm and also make it quite a separate learning 
experience.


Cheers,


Alan




Hey Alan,


That's awesome that you are learning classical guitar.  Are you using Aaron 
Shearer's texts to teach yourself?  One trick I have learned is to not support 
my left hand at all when playing.  In this way, the dexterity in my fingers 
increases and when I press down on the fretboard I am using only my finger 
muscles.


I've had bilateral ulnar nerve transposition, and for a whole year in college 
could not type at all due to muscle atrophy from nerve compression!  I wrote 
all my computer assignments on paper, and paid a personal secretary to type 
them in for me.  I thought about everything the program would do before I 
wrote anything on paper, since I hated crossing out code and writing editorial 
arrows.


Dragon Naturally Speaking is really quite good, although not good for 
programming in most languages.  I've found Microsoft Visual Basic is somewhat 
possible to speak.  I also experimented with various exotic keyboards, like 
the DataHand keyboard in the movie The Fifth Element.  It was easily my 
favorite keyboard, but the main problem and reason I don't use it after 
getting better is that going to somebody else's desk and typing becomes a 
lesson in learning how to type again.



Re: [fonc] Alan Kay in the news [german]

2012-07-18 Thread Alan Kay
I should mention that there is both garbling and also lots of fabrication in 
this report.

I didn't say "abandon theory" -- I did urge doing more real experiments with 
software (from which the first might have been incorrectly inferred).

But where did all the organ stuff come from? I never mentioned it, so it must 
have been gleaned from the net. And I suddenly became a better organist than I 
ever was. And he had me touring around when I have not been able to play 
keyboards for four years because of a severe shoulder trauma from a tennis 
accident.

But the University of Paderborn and faculty and students were very hospitable, 
and it was fun to help them dedicate the building.

Cheers,

Alan





 From: Eugen Leitl eu...@leitl.org
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, July 18, 2012 7:19 AM
Subject: [fonc] Alan Kay in the news [german]
 

http://www.heise.de/newsticker/meldung/Alan-Kay-Nicht-in-der-Theorie-der-Informatik-verharren-1644597.html


Re: [fonc] Alan Kay in the news [german]

2012-07-18 Thread Alan Kay
Hi Long,

I can keep my elbows into my body typing on a laptop. My problem is that I 
can't reach out further for more than a few seconds without a fair amount of 
pain from all the ligament, tendon, and rotator cuff damage along that axis. If I 
get that close to the keys on an organ I still have trouble reaching the other 
keyboards and my feet are too far forward to play the pedals. Similar geometry 
with the piano, plus the reaches on the much wider keyboard are too far on the 
right side. Also at my age there are some lower back problems from trying to 
lean in at a low angle -- this doesn't work.


But, after a few months I realized I could go back to guitar playing (which I 
did a lot 50 years ago) because you can play guitar with your right elbow in. 
After a few years of getting some jazz technique back and playing in some 
groups in New England in the summers, I missed the polyphonic classical music 
and wound up starting to learn classical guitar a little over a year ago. This 
has proved to be quite a challenge -- much more difficult than I imagined it 
would be -- and there was much less transfer from jazz/steel string technique 
that I would have thought. It not only feels very different physically, but 
also mentally, and has many extra dimensions of nuance and color that are both 
its charm and also make it quite a separate learning experience.

Cheers,

Alan





 From: Long Nguyen cgb...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Wednesday, July 18, 2012 10:47 AM
Subject: Re: [fonc] Alan Kay in the news [german]
 
Dear Dr. Kay,

May I ask, how would you type on a computer if you cannot play keyboards?

Best,
Long

On Wed, Jul 18, 2012 at 10:44 AM, Alan Kay alan.n...@yahoo.com wrote:
 I should mention that there is both garbling and also lots of fabrication in
 this report.

 I didn't say "abandon theory" -- I did urge doing more real experiments with
 software (from which the first might have been incorrectly inferred).

 But where did all the organ stuff come from? I never mentioned it, so it
 must have been gleaned from the net. And I suddenly became a better organist
 than I ever was. And he had me touring around when I have not been able to
 play keyboards for four years because of a severe shoulder trauma from a
 tennis accident.

 But the University of Paderborn and faculty and students were very
 hospitable, and it was fun to help them dedicate the building.

 Cheers,

 Alan

 
 From: Eugen Leitl eu...@leitl.org
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Wednesday, July 18, 2012 7:19 AM
 Subject: [fonc] Alan Kay in the news [german]


 http://www.heise.de/newsticker/meldung/Alan-Kay-Nicht-in-der-Theorie-der-Informatik-verharren-1644597.html


Re: [fonc] Question about the Burroughs B5000 series and capability-based computing

2012-05-28 Thread Alan Kay
In a nutshell, the B5000 embodied a number of great ideas in its architecture. 
Design paper by Bob Barton in 1961, machine appeared ca 1962-3.

-- multiple CPUs
-- rotating drum secondary memory

-- automatic process switching

-- no assembly code programming, all was done in ESPOL (executive systems 
problem oriented language), an extended version of ALGOL 60.
-- this produced Polish postfix code from a pass-and-a-half compiler.
-- Tron note: the OS built in ESPOL was called the MCP (master control 
program).


-- direct execution of Polish postfix code
-- code was reentrant

-- code in terms of 12-bit syllables (we would call them bytes), 4 to a 48-bit 
word

-- automatic stack
-- automatic stack frames for parameters and temporary variables
-- code did not have any kind of addresses
The most unusual part was the treatment of memory and environments

-- every word was marked with a flag bit -- which was outside of normal 
program space -- and determined whether a word was a value (a number) or a 
descriptor


-- code was granted an environment in the form of (1) a program reference 
table (essentially an object instance) containing values and descriptors, and 
(2) a stack with frame. Code could only reference these via offsets to 
registers that themselves were not in code's purview. (This is the basis of the 
first capability scheme)

-- the protected descriptors were the only way resources could be accessed and 
used.


-- fixed and floating point formats were combined for numbers


-- array descriptors automatically checked for bounds violations, and allowed 
automatic swapping (one of the bits in an array descriptor was "presence": 
if not present, the address was a disk reference; if present, the address was 
a core storage reference; etc.)

-- procedure descriptors pointed to code. 


-- a code syllable could ask for either a value to be fetched to the top of the 
stack (right hand side expressions), or a name (address) to be fetched 
(computing the left hand side).

-- if the above ran into a procedure descriptor with a value call then the 
procedure would execute as we would expect. If it was a name call then a bit 
was set that the procedure could test so one could compute a left hand side 
for an expression. In other words, one could simulate data. (The difficulty 
of simulating a sparse array efficiently was a clue for me of how to think of 
object-oriented simulation of classical computer structures.)
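
[Ed.: a minimal sketch, in Python and with invented names throughout, of that 
last point -- a "procedure descriptor" referenced either for a value 
(right-hand side) or for a name (left-hand side), which is what lets a 
procedure simulate data such as a sparse array:]

    class ProcedureDescriptor:
        def __init__(self, proc):
            self.proc = proc
        def value_call(self, *args):          # right-hand side: fetch a value
            return self.proc(False, *args)
        def name_call(self, *args):           # left-hand side: fetch a "name",
            return self.proc(True, *args)     # i.e. something to store into

    def make_sparse_array(default=0):
        cells = {}                            # only non-default entries stored
        def proc(name_call, index):
            if name_call:                     # caller is computing a left-hand side
                return lambda value: cells.__setitem__(index, value)
            return cells.get(index, default)  # caller wanted the current value
        return ProcedureDescriptor(proc)

    a = make_sparse_array()
    a.name_call(1000000)(42)                  # reads like a[1000000] := 42
    print(a.value_call(1000000), a.value_call(7))   # -> 42 0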

-


As for the Alto in 1973, it had a register bank, plus 16 program counters into 
microcode (which could execute 5-6 instructions within each 750ns main memory 
cycle). Conditions/signals in the machine were routed through separate logic to 
determine which program counter to use for the next microinstruction. There was 
no delay between instruction executions, i.e., the low-level hardware tasking 
was zero-overhead.

(The overall scheme designed by Chuck Thacker was a vast improvement over my 
earlier Flex Machine -- which had 4 such program counters, etc.)

The low level tasks replaced almost all the usual hardware on a normal 
computer: disk and display and keyboard controllers, general I/O, even refresh 
for the DRAM, etc. 


This was a great design: about 160 MSI chips plus memory.
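
[Ed.: a minimal sketch of the zero-overhead tasking, under loose assumptions 
about the priority scheme: sixteen program counters, one per hardware task, 
and between every microinstruction the wakeup signals select which counter 
supplies the next instruction -- nothing is saved or restored:]

    # each task is just a program counter plus a wakeup signal
    tasks = [{"pc": 0, "code": [], "wakeup": False} for _ in range(16)]

    def step():
        # highest-numbered task with its wakeup raised runs ONE microinstruction
        for i in range(15, -1, -1):
            t = tasks[i]
            if t["wakeup"] and t["pc"] < len(t["code"]):
                t["code"][t["pc"]]()      # execute the microinstruction
                t["pc"] += 1              # the PC is the only per-task state
                return i
        return None

    tasks[0]["wakeup"] = True             # e.g. an always-ready emulator task
    tasks[0]["code"] = [lambda: print("emulator microinstruction")] * 3
    while step() is not None:
        pass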

Cheers,

Alan





 From: Shawn Morel shawnmo...@me.com
To: Kevin Jones investtcart...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Cc: Alan Kay alan.n...@yahoo.com 
Sent: Sunday, May 27, 2012 6:50 PM
Subject: Re: [fonc] Question about the Burroughs B5000 series and 
capability-based computing
 
Kevin, 

I'll quote one of my earlier questions to the list - in it I had a few 
pointers that you might find a useful starting place. 
 In the videotaped presentation from HPIK 
 (http://www.bradfuller.com/squeak/Kay-HPIK_2011.mp4) you made reference to 
 the Burroughs 5000-series implementing capabilities.


There's also a more detailed set of influences / references to Bob Barton and 
the B* architectures in part 3 of the early history of smalltalk: 
http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_III.html

I liked the B5000 scheme, but Butler did not want to have to decode bytes, 
and pointed out that since an 8-bit byte had 256 total possibilities, what we 
should do is map different meanings onto different parts of the instruction 
space. this would give us a poor man's Huffman code that would be both 
flexible and simple. All subsequent emulators at PARC used this general 
scheme. [Kay]

You should take the time to read that entire essay, it's chock-full of great 
idea launching points :)

Note that the Alto could simulate (I believe) 16 instances. Not quite a full 
on bare metal VM the way VMware grossly virtualized an entire x86 system, but 
much more capable than what you'd call a hardware thread (e.g. processor cores 
or hyper threading).

 Could you elaborate on how capabilities were structured, stored and 
 processed in the B5000

Re: [fonc] LightTable UI

2012-04-24 Thread Alan Kay
(Hi Toby)

And don't forget that John McCarthy was one of the very first to try to 
automatically compute inverses of functions (this grew out of his PhD work at 
Princeton in the mid-50s ...)

Cheers,

Alan





 From: Toby Schachman t...@alum.mit.edu
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, April 24, 2012 9:48 AM
Subject: Re: [fonc] LightTable UI
 
Benjamin Pierce et al did some work on bidirectional computation. The
premise is to work with bidirectional transformations (which they call
lenses) rather than (unidirectional) functions. They took a stab at
identifying some primitives, and showing how they would work in some
applications. Of course we can do all the composition tricks with
lenses that we can do with functions :)
http://www.seas.upenn.edu/~harmony/
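
[Ed.: a minimal sketch of the lens idea in Python, assuming records-as-dicts 
(not Pierce et al's actual combinators): a lens is a get/put pair obeying the 
usual round-trip laws, and lenses compose just like functions:]

    class Lens:
        def __init__(self, get, put):
            self.get = get                # get(source) -> view
            self.put = put                # put(source, view) -> new source

        def compose(self, inner):         # focus deeper: self, then inner
            return Lens(
                lambda s: inner.get(self.get(s)),
                lambda s, v: self.put(s, inner.put(self.get(s), v)))

    def field(name):                      # a lens onto one dict field
        return Lens(lambda d: d[name],
                    lambda d, v: {**d, name: v})

    city = field("address").compose(field("city"))
    p = {"name": "Ada", "address": {"city": "London", "zip": "N1"}}
    print(city.get(p))                    # -> London
    print(city.put(p, "Cambridge"))       # updated copy; everything else intact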


See also Gerald Sussman's essay Building Robust Systems,
http://groups.csail.mit.edu/mac/users/gjs/6.945/readings/robust-systems.pdf

In particular, he has a section called Constraints Generalize
Procedures. He gives an example of a system as a constraint solver
(two-way information flow) contrasted with the system as a procedure
(one-way flow).


Also I submitted a paper for Onward 2012 which discusses this topic
among other things,
http://totem.cc/onward2012/onward.pdf

My own interest is in programming interfaces for artists. I am
interested in these causally agnostic programming ideas because I
think they could support a more non-linear, improvisational approach
to programming.


Toby


2012/4/24 Jarek Rzeszótko jrzeszo...@gmail.com:
 On the other hand, Those who cannot remember the past are condemned to
 repeat it.

 Also, please excuse me (especially Julian Leviston) for maybe sounding too
 pessimistic and too offensive, the idea surely is exciting, my point is just
 that it excited me and probably many other persons before Bret Victor or
 Chris Granger did (very interesting) demos of it and what would _really_
 excite me now is any glimpse of any idea whatsoever on how to make such
 things work in a general enough domain. Maybe they have or will have such
 idea, that would be cool, but until that time I think it's not unreasonable
 to restrain a bit, especially those ideas are relatively easy to realize in
 special domains and very hard to generalize to the wide scope of software
 people create.

 I would actually also love to hear from someone more knowledgeable about
 interesting historic attempts at doing such things, e.g. reversible
 computations, because there certainly were some: for one I remember a few
 years ago back in time debugging was quite a fashionable topic of talks
 (just google the phrase for a sampling), from a more hardware/physical
 standpoint there is http://en.wikipedia.org/wiki/Reversible_computing etc.

 Cheers,
 Jarosław Rzeszótko


 2012/4/24 David Nolen dnolen.li...@gmail.com

 The best way to predict the future is to invent it

 On Tue, Apr 24, 2012 at 3:50 AM, Jarek Rzeszótko jrzeszo...@gmail.com
 wrote:

 You make it sound a bit like this was a working solution already, while
 it seems to be a prototype at best, they are collecting funding right now:
 http://www.kickstarter.com/projects/306316578/light-table.

 I would love to be proven wrong, but I think given the state of the
 project, many people get overexcited about it: some of the things proposed aren't
 new, just wrapped into a nice modern design (you could try to create a new
 skin or UI toolkit for some Smalltalk IDE for a similar effect), while
 for the ones that would be new like the real-time evaluation or
 visualisation there is too little detail to say whether they are onto
 something or not - I am sure many people thought of such things in the 
 past,
 but it is highly questionable to what extent those are actually doable,
 especially in an existing language like Clojure or JavaScript. I am not
 convinced that dropping $200,000 at the thing will help with coming up with a
 solution if there is no decent set of ideas to begin with. I would
 personally be much more enthusiastic if the people behind the project at
 least outlined possible approaches they might take, before trying to 
 collect
 money. Currently it sounds like they just plan to hack it until it 
 handles
 a reasonable number of special cases, but tools that work only some of the
 time are favoured by few. I think we need good theoretical approaches to
 problems like this before we can make any progress on how the actual real
 tools work.

 Cheers,
 Jarosław Rzeszótko


 2012/4/24 Julian Leviston jul...@leviston.net

 Thought this is worth a look as a next step after Bret Victor's work
 (http://vimeo.com/36579366) on UI for programmers...

 http://www.kickstarter.com/projects/ibdknox/light-table

 We're still not quite there yet IMHO, but that's getting towards the
 general direction... tie that in with a tile-script like language, and I
 think we might have something really useful.

 Julian
 

Re: [fonc] Smalltalk-75

2012-04-20 Thread Alan Kay
Ivan and Bert have been special in the development of some of the most 
important parts of computing. 


This year is the 50th anniversary of Sketchpad -- still one of the very few top 
conceptions in computing, still one of the greatest theses ever done, and still 
very much worth reading today.

Ivan is such a giant that we sometimes forget that Bert's thesis was a 
graphical programming language in which he invented and used the idea of 
dataflow. (This was done after Ivan's thesis even though Bert was the older 
brother, because Bert did a stint as a Navy pilot before going to grad school).

One experience that many inquisitive children had while living in the 
vicinity of New York in the 50s was to be able to visit and get stuff on Radio 
Row -- most of it on Courtlandt Street in lower Manhattan where the World 
Trade Center was later built. There were literally hundreds of shops on both 
sides of the street for what I recall was at least a mile full of nothing but 
second hand gear, much of it WWII surplus electronics and some mechanical gear. 
You could mow a few lawns and earn enough for a subway ride to and from (I 
lived in Queens at that time -- Ivan and Bert lived in Scarsdale I think) and 
still have enough left over to buy 15,000 volt transformers, RCA 811A 
transmitting triodes, etc., to make dandy Tesla coils, ham radios, little 
computers out of relays as set forth in Ed Berkeley's books, etc. 

Cheers,

Alan







 From: Jb Labrune labr...@media.mit.edu
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Friday, April 20, 2012 2:59 AM
Subject: Re: [fonc] Smalltalk-75
 
about people that learned how to assemble a computer at a young age, i 
remember talking with the Sutherland brothers once about their childhood. Ivan 
explained to me that Ed Berkeley gave Sutherland's family a DIY computer 
called SIMON in the 50's. Ivan and Bert were very creative for sure, but they 
also benefited from great resources in their environment! When will we see 
STEPS, MARU and other foncabulous seeds in schools and DIY magazines ? :]

about Simple Simon & SIMON
http://www.cs.ubc.ca/~hilpert/e/simon/index.html
http://en.wikipedia.org/wiki/Simon_%28computer%29

ooh, and of course this video about the S's bros is so great! i would like to 
watch one for each one of you guys on this list ^^

http://www.youtube.com/watch?v=sM1bNR4DmhU  .:( Mom Loved Him Best - w/ Alan 
in the audience! ):.

cheers*
Jb

On 20 Apr 2012, at 03:20, Fernando Cacciola wrote:

 On Thu, Apr 19, 2012 at 9:43 PM, Alan Kay alan.n...@yahoo.com wrote:
 Well, part of it is that the 15 year old was exceptional -- his name is
 Steve Putz, and as with several others of our children programmers -- such
 as Bruce Horn, who was the originator of the Mac Finder -- became a very
 good professional.
 
 And that Smalltalk (basically Smalltalk-72) was quite approachable for
 children. We also had quite a few exceptional 12 year old girls who did
 remarkable applications.
 
 I was curious, so I googled a bit (impressive how easy it is, these
 days, to find something within a couple of minutes)
 
 The girls you are most likely talking about would be: Marion Goldeen
 and Susan Hamet, who created a painting and an OOP-illustration
 system, respectively.
 I've found some additional details and illustrations here:
 http://www.manovich.net/26-kay-03.pdf
 
 What is truly remarkable IMO, is Smalltalk (even -72). Because these
 children might have been exceptional, but IIUC it's not like they were,
 say, a fourth generation of mathematicians and programmers who learned
 how to assemble a computer at age 3 :)
 
 
 Best
 
 -- 
 Fernando Cacciola
 SciSoft Consulting, Founder
 http://www.scisoft-consulting.com

--

Jean-Baptiste Labrune
MIT Media Laboratory
20 Ames St / 75 Amherst St
Cambridge, MA 02139, USA

http://web.media.mit.edu/~labrune/



Re: [fonc] Smalltalk-75

2012-04-19 Thread Alan Kay
Well, part of it is that the 15 year old was exceptional -- his name is Steve 
Putz, and as with several others of our children programmers -- such as Bruce 
Horn, who was the originator of the Mac Finder -- became a very good 
professional.

And that Smalltalk (basically Smalltalk-72) was quite approachable for 
children. We also had quite a few exceptional 12 year old girls who did 
remarkable applications.

Even so, Steve Putz's circuit diagram drawing program was terrific! Especially 
the UI he designed and built for it.

Cheers,

Alan





 From: John Pratt jpra...@gmail.com
To: fonc@vpri.org 
Sent: Thursday, April 19, 2012 4:05 PM
Subject: [fonc] Smalltalk-75
 


How is it that a 15-year-old could program a schematic diagram drawing 
application in the 1970's?  Is there any more information about this?

I think I read that Smalltalk changed afterwards.  Isn't this kind of a big 
deal, everyone?


Re: [fonc] Ask For Forgiveness Programming - Or How We'll Program 1000 Cores

2012-04-14 Thread Alan Kay
This is a good idea (and, interestingly, was a common programming style in the 
first Simula) ...

Cheers,

Alan





 From: David Barbour dmbarb...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Friday, April 13, 2012 11:03 PM
Subject: Re: [fonc] Ask For Forgiveness Programming - Or How We'll Program 
1000 Cores
 

Another option would be to introduce slack in the propagation delay itself. 
E.g. if I send you a message indicating the meeting has been moved to 1:30pm, 
it would be a good idea to send it a good bit in advance - perhaps at 10:30am 
- so you can schedule and prepare. 


With computers, the popular solution - sending a message *when* we want 
something to happen - seems analogous to sending the message at 1:29:58pm, 
leaving the computer always on the edge with little time to prepare even for 
highly predictable events, and making the system much more vulnerable to 
variations in communication latency.


On Fri, Apr 13, 2012 at 8:34 AM, David Goehrig d...@nexttolast.com wrote:

There's a very simple concept that most of the world embraces in everything 
from supply chain management, to personnel allocations, to personal 
relationships. 


We call it *slack*


What Dan is talking about amounts to introducing slack into distributed 
models.  Particularly this version of the definition of slack:


: lacking in completeness, finish, or perfection -- "a very slack piece of 
work"


Which is a more realistic version of computation in a universe with 
propagation delay (finite speed of light). But it also introduces a concept 
familiar to anyone who has worked with ropes: you can't tie a knot without some 
slack (computation being an exercise in binary sequence knot making). 
Finishing a computation is analogous to pulling the rope taut. 


Dave








-=-=- d...@nexttolast.com -=-=-

On Apr 13, 2012, at 5:53 AM, Eugen Leitl eu...@leitl.org wrote:



http://highscalability.com/blog/2012/3/6/ask-for-forgiveness-programming-or-how-well-program-1000-cor.html

Ask For Forgiveness Programming - Or How We'll Program 1000 Cores

Tuesday, March 6, 2012 at 9:15AM

The argument for a massively multicore future is now familiar: while clock
speeds have leveled off, device density is increasing, so the future is cheap
chips with hundreds and thousands of cores. That’s the inexorable logic
behind our multicore future.

The unsolved question that lurks deep in the dark part of a programmer’s mind
is: how on earth are we to program these things? For problems that aren’t
embarrassingly parallel, we really have no idea. IBM Research’s David Ungar
has an idea. And it’s radical in the extreme...

Grace Hopper once advised “It's easier to ask for forgiveness than it is to
get permission.” I wonder if she had any idea that her strategy for dealing
with human bureaucracy would be the same strategy David Ungar thinks will help
us tame  the technological bureaucracy of 1000+ core systems?

You may recognize David as the co-creator of the Self programming language,
inspiration for the HotSpot technology in the JVM and the prototype model
used by Javascript. He’s also the innovator behind using cartoon animation
techniques to build user interfaces. Now he’s applying that same creative
zeal to solving the multicore problem.

During a talk on his research, “Everything You Know (about Parallel
Programming) Is Wrong! A Wild Screed about the Future,” he called his approach
“anti-lock” or “race and repair” because the core idea is that the only way
we’re going to be able to program the new multicore chips of the future is to
sidestep Amdhal’s Law and program without serialization, without locks,
embracing non-determinism. Without locks calculations will obviously be
wrong, but correct answers can be approached over time using techniques like
fresheners:

   A thread that, instead of responding to user requests, repeatedly selects
a cached value according to some strategy, and recomputes that value from its
inputs, in case the value had been inconsistent. Experimentation with a
prototype showed that on a 16-core system with a 50/50 split between workers
and fresheners, fewer than 2% of the queries would return an answer that had
been stale for at least eight mean query times. These results suggest that
tolerance of inconsistency can be an effective strategy in circumventing
Amdahl’s law.
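
[Ed.: a minimal sketch of the worker/freshener split, with invented names and 
Python threads standing in for cores: workers read a shared cache with no 
locking at all, while a freshener repeatedly recomputes the cached value from 
its inputs, so stale answers are tolerated briefly instead of locked out:]

    import random, threading, time

    inputs = {"a": 1, "b": 2}     # ground truth, mutated concurrently
    cache  = {"sum": 3}           # derived value; may be briefly stale

    def worker():                 # answers queries; never takes a lock
        for _ in range(5):
            print("query sees sum =", cache["sum"])
            time.sleep(0.01)

    def freshener():              # repeatedly repairs the cache
        for _ in range(100):
            cache["sum"] = inputs["a"] + inputs["b"]
            time.sleep(0.001)

    def mutator():                # the outside world changing the inputs
        for _ in range(5):
            inputs[random.choice("ab")] = random.randint(0, 9)
            time.sleep(0.01)

    threads = [threading.Thread(target=f) for f in (worker, freshener, mutator)]
    for t in threads: t.start()
    for t in threads: t.join()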

During his talk David mentioned that he’s trying  to find a better name than
“anti-lock” or “race and repair” for this line of thinking. Throwing my hat
into the name game, I want to call it Ask For Forgiveness Programming (AFFP),
based on the idea that using locks is “asking for permission” programming, so
not using locks along with fresheners is really “asking for forgiveness.” I
think it works, but it’s just a thought.

No Shared Lock Goes Unpunished

Amdahl’s Law is used to understand why simply having more cores won’t save us
for a large class of problems. The idea is that any program is made up of a
serial 

Re: [fonc] Kernel & Maru

2012-04-12 Thread Alan Kay
Hi John 


The simple answer is that Tom's stuff happened in the early 80s, and I was out 
of PARC working on things other than Smalltalk.

I'm trying to remember something similar that was done earlier (by someone 
can't recall who, maybe at CMU) that was a good convincer that this was not a 
great UI style for thinking about programming in.

An interesting side light on all this is that -- if one could avoid paralyzing 
nestings in program form -- the tile based approach allows language building 
and extensions *and* provides the start of a UI for doing the programming that 
feels open. Both work at Disney and the later work by Jens Moenig show that 
tiles start losing their charm in a hurry if one builds nested expressions. An 
interesting idea via Marvin (in his Turing Lecture) is the idea of deferred 
expressions, and these could be a way to deal with some of this. Also the 
ISWIM design of Landin uses another way to defer nestings to achieve better 
readability.

Cheers,

Alan





 From: John Zabroski johnzabro...@gmail.com
To: Florin Mateoc fmat...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Thursday, April 12, 2012 3:59 PM
Subject: Re: [fonc] Kernel & Maru
 

It depends what your goals are. If you want to automatically derive an IDE 
from a grammar then the best work is the Synthesizer Generator, but it is 
limited to absolutely noncircular attribute grammars IIRC. But it had wicked 
cool features like incremental live evaluation. Tom Reps won an ACM 
Dissertation award for the work. The downside was scaling this approach to 
so-called very large scale software systems. But there are two reasons why I 
feel that concern is overblown: (1) nobody has brute-forced the memory 
exhaustion problem using the cloud; (2) with systems like FONC we wouldn't be 
building huge systems anyway.
Alternatively, grammarware hasn't died simply because of the SG scaling 
issue. Ralf Lammel, Eelco Visser and others have all contributed to ASF+SDF 
and the Spoofax language environment. But none of these are as cool as SG, and 
with stuff like Spoofax you have to sift through Big And Irregular APIs for IME 
hooking into Big And Irregular Eclipse APIs. Separating the intellectual wheat 
from the chaff was a PITA. Although I did enjoy Visser's thesis on 
scannerless parsing, which led me to appreciate boolean grammars.
Alan,
A question for you: did the SG approach ever come up in design discussions or 
prototypes for any Smalltalk? I always assumed no, due to selection bias... 
Until OMeta there hasn't been a clear use case.
Cheers,
Z-Bo
On Apr 11, 2012 10:21 AM, Florin Mateoc fmat...@yahoo.com wrote:

Yes, these threads are little gems by themselves, thank you!


I hope I am not straying too much from the main topic when asking about what 
I think is a related problem: a great help for playing with languages are the 
tools. Since we are talking about bootstrapping everything, we would ideally 
also be able to generate the tools together with all the rest. This is a 
somewhat different kind of language bootstrap, where actions and predicates 
in the language grammar have their own grammar, so they don't need to rely on 
any host language, but still allow one to flexibly generate a lot of 
boilerplate code, including for example classes (or other language specific 
structures) representing the AST nodes, including visiting code, formatters, 
code comparison tools, even abstract (ideally with a flexible level of 
abstraction) evaluation code over those AST nodes, and debuggers. This 
obviously goes beyond language syntax, one needs an execution model as well 
(perhaps in combination with a worlds-like approach). I am still not
 sure how far one can go, what can be succinctly specified and how. 



I would greatly appreciate any pointers in this direction


Florin





 From: Monty Zukowski mo...@codetransform.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, April 11, 2012 12:20 AM
Subject: Re: [fonc] Kernel & Maru
 
Thank you everyone for the great references.  I've got some homework
to do now...

Monty

On Tue, Apr 10, 2012 at 2:54 PM, Ian Piumarta piuma...@speakeasy.net wrote:
 Extending Alan's comments...

 A small, well explained, and easily understandable example of an iterative 
 implementation of a recursive language (Scheme) can be found in R. Kent 
 Dybvig's Ph.D. thesis.

 http://www.cs.unm.edu/~williams/cs491/three-imp.pdf

 Regards,
 Ian




Re: [fonc] Kernel & Maru

2012-04-12 Thread Alan Kay
Yes, that was part of Tom's work ...

Cheers,

Alan





 From: John Zabroski johnzabro...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org; Alan Kay 
alan.n...@yahoo.com 
Sent: Thursday, April 12, 2012 4:46 PM
Subject: Re: [fonc] Kernel & Maru
 

What does it have to do with thinking about programming?  Are you referring to 
editing an AST directly?
On Apr 12, 2012 7:31 PM, Alan Kay alan.n...@yahoo.com wrote:

Hi John 



The simple answer is that Tom's stuff happened in the early 80s, and I was 
out of PARC working on things other than Smalltalk.


I'm trying to remember something similar that was done earlier (by someone 
can't recall who, maybe at CMU) that was a good convincer that this was not a 
great UI style for thinking about programming in.


An interesting side light on all this is that -- if one could avoid 
paralyzing nestings in program form -- the tile based approach allows 
language building and extensions *and* provides the start of a UI for doing 
the programming that feels open. Both the work at Disney and the later work by 
Jens Moenig show that tiles start losing their charm in a hurry if one builds 
nested expressions. An interesting idea via Marvin (in his Turing Lecture) is 
the idea of deferred expressions, and these could be a way to deal with 
some of this. Also the ISWIM design of Landin uses another way to defer 
nestings to achieve better readability.


Cheers,


Alan






 From: John Zabroski johnzabro...@gmail.com
To: Florin Mateoc fmat...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Thursday, April 12, 2012 3:59 PM
Subject: Re: [fonc] Kernel & Maru
 

It depends what your goals are. If you want to automatically derive an IDE 
from a grammar then the best work is the Synthesizer Generator, but it is 
limited to absolutely noncircular attribute grammars IIRC. But it had wicked 
cool features like incremental live evaluation. Tom Reps won an ACM 
Dissertation award for the work. The downside was scaling this approach to 
so-called very large scale software systems. But there are two reasons why I 
feel that concern is overblown: (1) nobody has brute-forced the memory 
exhaustion problem using the cloud; (2) with systems like FONC we wouldn't be 
building huge systems anyway.
Alternatively, grammarware hasn't died simply because of the SG scaling 
issue. Ralf Lammel, Eelco Visser and others have all contributed to ASF+SDF 
and the Spoofax language environment. But none of these are as cool as SG, 
and with stuff like Spoofax you have to sift through Big And Irregular APIs for 
IME hooking into Big And Irregular Eclipse APIs. Separating the intellectual 
wheat from the chaff was a PITA. Although I did enjoy Visser's thesis on 
scannerless parsing, which led me to appreciate boolean grammars.
Alan,
A question for you: did the SG approach ever come up in design discussions or 
prototypes for any Smalltalk? I always assumed no, due to selection bias... 
Until OMeta there hasn't been a clear use case.
Cheers,
Z-Bo
On Apr 11, 2012 10:21 AM, Florin Mateoc fmat...@yahoo.com wrote:

Yes, these threads are little gems by themselves, thank you!


I hope I am not straying too much from the main topic when asking about 
what I think is a related problem: a great help for playing with languages 
are the tools. Since we are talking about bootstrapping everything, we 
would ideally also be able to generate the tools together with all the 
rest. This is a somewhat different kind of language bootstrap, where 
actions and predicates in the language grammar have their own grammar, so 
they don't need to rely on any host language, but still allow one to 
flexibly generate a lot of boilerplate code, including for example classes 
(or other language specific structures) representing the AST nodes, 
including visiting code, formatters, code comparison tools, even 
abstract (ideally with a flexible level of abstraction) evaluation code over 
those AST nodes, and debuggers. This obviously goes beyond language syntax, 
one needs an execution model as well (perhaps in combination with a 
worlds-like approach). I am still
 not sure how far one can go, what can be succinctly specified and how. 



I would greatly appreciate any pointers in this direction


Florin





 From: Monty Zukowski mo...@codetransform.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, April 11, 2012 12:20 AM
Subject: Re: [fonc] Kernel & Maru
 
Thank you everyone for the great references.  I've got some homework
to do now...

Monty

On Tue, Apr 10, 2012 at 2:54 PM, Ian Piumarta piuma...@speakeasy.net 
wrote:
 Extending Alan's comments...

 A small, well explained, and easily understandable example of an 
 iterative implementation of a recursive language (Scheme) can be found in 
 R. Kent Dybvig's Ph.D. thesis.

 http://www.cs.unm.edu/~williams/cs491/three-imp.pdf

 Regards,
 Ian

Re: [fonc] Kernel & Maru

2012-04-11 Thread Alan Kay
The survey paper is just a survey. Dave's thesis is how to make all the control 
structures by extending a McCarthy-like tiny kernel. Still gold today.

Cheers,

Alan





 From: Eugene Wallingford walli...@cs.uni.edu
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, April 11, 2012 9:02 AM
Subject: Re: [fonc] Kernel & Maru
 

 If anyone finds an electronic copy of Fisher's thesis I'd love to know
 about it.  My searches have been fruitless.
 
 The title is not the same, but maybe these are variants of the same paper?
 
 http://dl.acm.org/author_page.cfm?id=81100550987&coll=DL&dl=ACM&trk=0&cfid=76786786&cftoken=53955875
 
 Also, I no longer have access to ACM digital library, so I can't post the 
 PDFs.

     The thesis link is bibliographic only.  The survey paper
     is available as PDF, so I grabbed it.  If you'd like a
     copy, let me know.

 Eugene


Re: [fonc] Kernel & Maru

2012-04-11 Thread Alan Kay
Yes, the work was done at Stanford (and Bill McKeeman did a lot of the systems 
programming for the implementation).

The CACM article is a cleaned up version of this.


Cheers,

Alan





 From: Monty Zukowski mo...@codetransform.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, April 11, 2012 9:06 AM
Subject: Re: [fonc] Kernel & Maru
 
This one seems to be available as a technical report as well:

http://infolab.stanford.edu/TR/CS-TR-65-20.html

Monty

On Wed, Apr 11, 2012 at 4:44 AM, Alan Kay alan.n...@yahoo.com wrote:
 One more that is fun (and one I learned a lot from when I was in grad school
 in 1966) is Niklaus Wirth's Euler paper, published in two parts in CACM
 Jan and Feb 1966.

 This is a generalization of Algol via some ideas of van Wijngaarden and
 winds up with a very Lispish kind of language by virtue of consolidating and
 merging specific features of Algol into a more general much smaller kernel.

 The fun of this paper is that Klaus presents a complete implementation that
 includes a simple byte-code interpreter.

 This paper missed getting read enough historically (I think) because one
 large part of it is a precedence parsing scheme invented by Wirth to allow a
 mechanical transition between a BNF-like grammar and a parser. This part was
 not very effective and it was very complicated.

 So just ignore this. You can use a Meta II type parser (or some modern PEG
 parser like OMeta) to easily parse Euler directly into byte-codes.

 Everything else is really clear, including the use of the Dijkstra display
 technique for quick access to the static nesting of contexts used by Algol
 (and later by Scheme).
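
[Ed.: a minimal sketch of the display technique, with invented names: an array 
indexed by static nesting level, each entry pointing at the active frame for 
that level, so a variable compiled as (level, offset) is reached in two 
indexing steps no matter how deep the call stack is:]

    display = {}                       # level -> currently active frame

    def enter(level, nslots):
        saved = display.get(level)     # remember the outer frame at this level
        display[level] = [None] * nslots
        return saved

    def leave(level, saved):
        display[level] = saved

    def load(level, offset):           # a compiled variable reference
        return display[level][offset]

    def store(level, offset, value):
        display[level][offset] = value

    enter(0, 1); store(0, 0, "outer x")
    saved = enter(1, 1); store(1, 0, "inner y")
    print(load(0, 0), load(1, 0))      # -> outer x inner y
    leave(1, saved)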

 Cheers,

 Alan

 
 From: Monty Zukowski mo...@codetransform.com
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Tuesday, April 10, 2012 9:20 PM

 Subject: Re: [fonc] Kernel & Maru

 Thank you everyone for the great references.  I've got some homework
 to do now...

 Monty

 On Tue, Apr 10, 2012 at 2:54 PM, Ian Piumarta piuma...@speakeasy.net
 wrote:
 Extending Alan's comments...

 A small, well explained, and easily understandable example of an iterative
 implementation of a recursive language (Scheme) can be found in R. Kent
 Dybvig's Ph.D. thesis.

 http://www.cs.unm.edu/~williams/cs491/three-imp.pdf

 Regards,
 Ian



Re: [fonc] Kernel & Maru

2012-04-11 Thread Alan Kay
Take a look at the use of the System Tracer in the "Back to the Future" 
paper. It is an example of such a tool (it is a bit like a garbage collector 
except that it is actually a new system finder -- it can find and write out 
just the objects in the new system to make a fresh image).

Cheers,

Alan





 From: Shawn Morel shawnmo...@me.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Cc: Florin Mateoc fmat...@yahoo.com 
Sent: Wednesday, April 11, 2012 11:52 AM
Subject: Re: [fonc] Kernel & Maru
 
This thread is a real treasure trove! Thanks for all the pointers Alan :)

 A nice feature of Smalltalk (which has been rarely used outside of a small 
 group) is a collection of tools that can be used to create an entirely 
 different language within it and then launch it without further needing 
 Smalltalk. This was used 3 or 4 times at PARC to do radically different 
 designs and implementations for the progression of Smalltalks 

Could you elaborate more here? How might this compare to some of the work 
happening with Racket these days?

thanks
shawn


 Cheers,
 
 Alan
 
 From: Florin Mateoc fmat...@yahoo.com
 To: Fundamentals of New Computing fonc@vpri.org 
 Sent: Wednesday, April 11, 2012 7:20 AM
 Subject: Re: [fonc] Kernel & Maru
 
 Yes, these threads are little gems by themselves, thank you!
 
 I hope I am not straying too much from the main topic when asking about what 
 I think is a related problem: a great help for playing with languages are 
 the tools. Since we are talking about bootstrapping everything, we would 
 ideally also be able to generate the tools together with all the rest. This 
 is a somewhat different kind of language bootstrap, where actions and 
 predicates in the language grammar have their own grammar, so they don't 
 need to rely on any host language, but still allow one to flexibly generate 
 a lot of boilerplate code, including for example classes (or other language 
 specific structures) representing the AST nodes, including visiting code, 
 formatters, code comparison tools, even abstract (ideally with a flexible 
 level of abstraction) evaluation code over those AST nodes, and debuggers. 
 This obviously goes beyond language syntax, one needs an execution model as 
 well (perhaps in combination with a worlds-like approach). I am still
 not sure how far one can go, what can be succinctly specified and how. 
 
 I would greatly appreciate any pointers in this direction
 
 Florin
 
 From: Monty Zukowski mo...@codetransform.com
 To: Fundamentals of New Computing fonc@vpri.org 
 Sent: Wednesday, April 11, 2012 12:20 AM
 Subject: Re: [fonc] Kernel & Maru
 
 Thank you everyone for the great references.  I've got some homework
 to do now...
 
 Monty
 
 On Tue, Apr 10, 2012 at 2:54 PM, Ian Piumarta piuma...@speakeasy.net wrote:
  Extending Alan's comments...
 
  A small, well explained, and easily understandable example of an iterative 
  implementation of a recursive language (Scheme) can be found in R. Kent 
  Dybvig's Ph.D. thesis.
 
  http://www.cs.unm.edu/~williams/cs491/three-imp.pdf
 
  Regards,
  Ian
 


Re: [fonc] Migrating / syncing computation live-documents

2012-04-11 Thread Alan Kay
Croquet does replicated distributed computing. LOCUS did freely migrating 
system nodes.

One actually needs both (though a lot can be done with today's capacities just 
using the Croquet techniques).

Cheers,

Alan





 From: Shawn Morel shawnmo...@me.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Wednesday, April 11, 2012 5:14 PM
Subject: Migrating / syncing computation live-documents
 



Taken from the massive multi-thread "Error trying to compile COLA"

And I've also mentioned Popek's LOCUS system as a nice model for migrating 
processes over a network. It was Unix only, but there was nothing about his 
design that required this.


When thinking about storing and accessing documents, it's fairly 
straightforward to think of some sort of migration / synchronization scheme.

In thinking of objects more on par with VMs / computers / services, has there 
been any work on process migration that's of any importance since LOCUS? Did 
Croquet push the boundaries of distributing computations?


shawn



Re: [fonc] Kernel & Maru

2012-04-10 Thread Alan Kay
Hi Julian

(Adding to Ian's comments)

Doing as Ian suggests and trying out variants can be an extremely illuminating 
experience (for example, BBN Lisp (1.85) had three or four choices for what was 
meant by a lambda closure -- three of the options I remember were (a) do 
Algol -- this is essentially what Scheme wound up doing 15 years later, (b) 
make a private a-list for free variables, (c) lock the private a-list to the 
values of the free variables at the time of the closure).
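
[Ed.: a minimal Python rendering, under loose assumptions, of those three 
choices for a free variable x:]

    env = {"x": 1}                     # the defining environment

    # (a) "do Algol": the closure shares the live defining environment
    shared = lambda: env["x"]

    # (b) a private a-list for the free variables: the closure owns a copy,
    #     insulated from the original (and vice versa), but still mutable
    private_env = dict(env)
    private = lambda: private_env["x"]

    # (c) lock the a-list to the values at closure time: a frozen snapshot
    x_at_closure = env["x"]
    locked = lambda: x_at_closure

    env["x"] = 99
    print(shared(), private(), locked())   # -> 99 1 1
    private_env["x"] = 5                    # (b) can still be updated ...
    print(private(), locked())              # -> 5 1  ... but (c) cannot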

I  suggest not trying to write your eval in the style that McCarthy used (it's 
too convoluted and intertwined). The 
first thing to do is to identify and isolate separate cases that have to be 
taken care of -- e.g. what does it mean to eval the function position of an 
expression (LISP keeps on evaling until a lambda is found ...). Write these 
separate cases as separately as possible.
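
[Ed.: a minimal sketch of writing the cases separately, for a toy s-expression 
language in Python (all names invented); note how "eval the function position" 
gets its own loop that keeps evaling until a lambda-derived closure appears:]

    def m_eval(exp, env):
        if isinstance(exp, str):                 # a variable
            return env[exp]
        if not isinstance(exp, list):            # self-evaluating
            return exp
        if exp[0] == "quote":
            return exp[1]
        if exp[0] == "lambda":                   # ["lambda", params, body]
            return ("closure", exp[1], exp[2], env)
        args = [m_eval(a, env) for a in exp[1:]]
        return m_apply(exp[0], args, env)

    def m_apply(f, args, env):
        while not (isinstance(f, tuple) and f[0] == "closure"):
            f = m_eval(f, env)                   # keep evaling the function
        _, params, body, defenv = f              # position until a lambda
        return m_eval(body, {**defenv, **dict(zip(params, args))})

    env = {"f": "g", "g": ("closure", ["x"], "x", {})}
    print(m_eval(["f", ["quote", "hi"]], env))   # -> hi  (f -> g -> a lambda)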


Dave Fisher's thesis "A Control Definition Language" (CMU, 1970) is a very clean 
approach to thinking about environments for LISP-like languages. He separates 
the control path from the environment path, etc.


You have to think about whether 
"special forms" are a worthwhile idea (other ploys can be used to 
control when and if arguments are evaled).

You will need to think about the tradeoffs between a pure applicative style vs. 
being able to set values imperatively. For example, you could use Strachey's 
device to write loops as clean single assignment structures which are 
actually tail recursions. Couple this with fluents (McCarthy's time 
management) and you get a very clean non-assignment language that can 
nonetheless traverse through time. Variants of this idea were used in Lucid 
(Ashcroft and Wadge).
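
[Ed.: a minimal sketch, under loose assumptions about what is meant here: the 
mutating loop below is rewritten as a single-assignment tail recursion, where 
the only "assignment" is the binding of the next call's arguments:]

    def sum_loop(xs):              # imperative: total is re-assigned each pass
        total = 0
        for x in xs:
            total = total + x
        return total

    def sum_rec(xs, total=0):      # single assignment: a NEW total is bound
        if not xs:                 # on every (tail) recursive call
            return total
        return sum_rec(xs[1:], total + xs[0])

    print(sum_loop([1, 2, 3]), sum_rec([1, 2, 3]))   # -> 6 6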

Even if you use a recursive language to write your eval in, you might also 
consider taking a second pass and writing the eval just in terms of loops -- 
this is also very illuminating.

What one gets from doing these exercises is a visceral feel for great power 
with very little mechanics -- this is obtained via mathematical thinking and 
it is obscured almost completely by the standard approaches to characterizing 
programming languages (as things in themselves rather than a simple powerful 
kernel with decorations).

Cheers,

Alan






 From: Ian Piumarta piuma...@speakeasy.net
To: Julian Leviston jul...@leviston.net 
Cc: Fundamentals of New Computing fonc@vpri.org 
Sent: Monday, April 9, 2012 8:58 PM
Subject: Re: [fonc] Kernel & Maru
 
Dear Julian,

On Apr 9, 2012, at 19:40 , Julian Leviston wrote:

 Also, simply, what are the semantic inadequacies of LISP that the Maru 
 paper refers to (http://piumarta.com/freeco11/freeco11-piumarta-oecm.pdf)? 
 I read the footnoted article (The Influence of the Designer on the Design—J. 
 McCarthy and Lisp), but it didn't elucidate things very much for me.

Here is a list that remains commented in my TeX file but which was never 
expanded with justifications and inserted into the final version.  (The ACM 
insisted that a paper published online, for download only, be strictly limited 
to five pages -- go figure!)

%%   Difficulties and omissions arise
%%   involving function-valued arguments, application of function-valued
%%   non-atomic expressions, inconsistent evaluation rules for arguments,
%%   shadowing of local by global bindings, the disjoint value spaces for
%%   functions and symbolic expressions, etc.

IIRC these all remain in the evaluator published in the first part of the 
LISP-1.5 Manual.

 I have to say that all of these papers and works are making me feel like a 3 
 year old making his first steps into understanding about the world. I guess 
 I must be learning, because this is the feeling I've always had when I've 
 been growing, yet I don't feel like I have any semblance of a grasp on any 
 part of it, really... which bothers me a lot.

My suggestion would be to forget everything that has been confusing you and 
begin again with the LISP-1.5 Manual (and maybe "Recursive Functions of 
Symbolic Expressions and Their Computation by Machine").  Then pick your 
favourite superfast-prototyping programming language and build McCarthy's 
evaluator in it.  (This step is not optional if you want to understand 
properly.)  Then throw some expressions containing higher-order functions and 
free variables at it, figure out why it behaves oddly, and fix it without 
adding any conceptual complexity.

A weekend or two should be enough for all of this.  At the end of it you will 
understand profoundly why most of the things that bothered you were bothering 
you, and you will never be bothered by them again.  Anything that remains 
bothersome might be caused by trying to think of Common Lisp as a 
dynamically-evaluated language, rather than a compiled one.

(FWIW: Subsequently fixing every significant asymmetry, semantic irregularity 
and immutable special case that you can find in your evaluator should lead you 
to something not entirely unlike the tiny evaluator at 

Re: [fonc] Publish/subscribe vs. send

2012-03-20 Thread Alan Kay
One of the motivations is to handle some kinds of scaling more gracefully. If 
you think about things from a module's point of view, the fewer details it has 
to know about resources it needs (and about its environment in general) the 
better. 

It can be thought of as a next stage in going from explicit procedure calls 
(where you have to use the exact name) to message passing with polymorphism 
where the system uses context to actually choose the method, and the name you 
use is (supposed to be) a term denoted a kind of goal (whose specifics will be 
determined outside the module).

If you can specify what you *need* via a description, you can eliminate even 
having to know the tag for the goal, the system will still find it for you.

Another way to think of this is as a kind of "semantic typing".

This could be a great idea, because a language for writing descriptions will 
almost certainly have fewer things that have to be agreed on globally, and this 
should allow more graceful coordinations and better scaling.
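
[Ed.: a toy Python sketch of the flavor of this, all names invented, with 
attribute matching standing in for real semantic description: providers 
publish descriptions of what they supply, and a module states what it *needs* 
as a description rather than as a globally agreed name:]

    registry = []    # (description, provider) pairs

    def publish(description, provider):
        registry.append((dict(description), provider))

    def need(requirements):
        # any provider whose description satisfies every requirement will do
        for desc, provider in registry:
            if all(desc.get(k) == v for k, v in requirements.items()):
                return provider
        raise LookupError("nothing matches %r" % requirements)

    publish({"sorts": "numbers", "order": "ascending", "stable": True}, sorted)
    f = need({"sorts": "numbers", "order": "ascending"})   # no name agreed on
    print(f([3, 1, 2]))                                    # -> [1, 2, 3]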

However, this has yet to be exhibited -- so it needs to be done and critiqued 
before we should get too excited here.

I think a little $ and a lot of work in CYC or Genesereth's game language would 
be a good first place to start. For example, in CYC you should be able to write 
a description using the base relational language, and Cyc should be able to 
find you the local terms it uses for these

Cheers,

Alan





 From: Casey Ransberger casey.obrie...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Monday, March 19, 2012 3:35 PM
Subject: [fonc] Publish/subscribe vs. send
 
Here's the real naive question...

I'm fuzzy about why objects should receive messages but not send them. I think 
I can see the mechanics of how it might work, I just don't grok why it's 
important. 

What motivates? Are we trying to eliminate the overhead of ST-style message 
passing? Is publish/subscribe easier to understand? Does it lead to simpler 
artifacts? Looser coupling? Does it simplify matters of concurrency?

I feel like I'm still missing a pretty important concept, but I have a feeling 
that once I've grabbed at it, several things might suddenly fit and make sense.


Re: [fonc] Naive question

2012-03-19 Thread Alan Kay
Hi Benoit

This is basically what publish and subscribe schemes are all about. Linda is 
a simple coordination protocol for organizing such loose couplings. There are 
sketches of such mechanisms in most of the STEPS reports 

Spreadsheets are simple versions of this


The Playground language for the Vivarium project was set up like this


For real scaling, one would like to move to more general semantic descriptions 
of what is needed and what is supplied ...
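
[Ed.: a minimal single-process sketch of Linda's coordination primitives -- 
out (publish a tuple), rd (read a match), in (read and remove) -- with None as 
a wildcard; real Linda blocks until a match arrives, which is elided here:]

    space = []                                # the shared tuple space

    def out(*tup):
        space.append(tup)

    def _match(tup, pattern):
        return len(tup) == len(pattern) and all(
            p is None or p == t for t, p in zip(tup, pattern))

    def rd(*pattern):                         # read without removing
        return next(t for t in space if _match(t, pattern))

    def in_(*pattern):                        # read and remove ("in" is
        t = rd(*pattern)                      #  a reserved word in Python)
        space.remove(t)
        return t

    out("temperature", "room-7", 21.5)
    print(rd("temperature", "room-7", None))  # -> ('temperature', 'room-7', 21.5)
    in_("temperature", None, None)            # consume it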

Cheers,

Alan





 From: Benoît Fleury benoit.fle...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Monday, March 19, 2012 1:10 PM
Subject: [fonc] Naive question
 

Hi,


I was wondering if there is any language out there that lets you describe the 
behavior of an object as a grammar.


An object would receive a stream of events. The rules of the grammar describe 
the sequence of events the object can respond to. The semantic actions 
inside these rules can change the internal state of the object or emit other 
events.


We don't want the objects to send message to each other. A bus-like structure 
would collect events and dispatch them to all interested objects. To avoid 
pushing an event to all objects, the bus would first ask all objects what 
kind of event they're waiting for. These events are the possible alternatives 
in the object's grammar based on the current internal state of the object.


It's different from object-oriented programming since objects don't talk 
directly to each other.


A few questions that come up when thinking about this:
 - do we want backtracking? probably not, if the semantic actions are 
different, it might be awkward or impossible to undo them. If the semantic 
actions are the same in the grammar, we might want to do some factoring to 
remove repeated semantic actions.
 - how to represent time? Do all objects need to share the same clock? Do we 
have to send tick events to all objects?
 - should we allow the parallel execution of multiple scenarios for the same 
object? What does it make more complex in the design of the object's behavior? 
What does it make simpler?


If we assume an object receives a tick event to represent time, then, using a 
syntax similar to OMeta, we could write a simplistic behavior of an ant this 
way:


# the ant finds food when there is a food event raised and the ant's position 
is in the area of the food
# food indicates an event of type food, the question mark starts a semantic 
predicate
findFood    = food ?(this.position.inRect(food.area))


# similar rule to find the nest
findNest     = nest ?(this.position.inRect(nest.area))


# at every step, the ant moves
move         = tick (= move 1 unit in current direction (or pick random 
direction if no direction))


# the gatherFood scenario can then be described as finding food then finding 
the nest
gatherFood = findFood (= pick up food, change direction to nest)
                    findNest (= drop food, change direction to food source)
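
A rough JavaScript sketch of the bus idea (names invented; the semantic 
predicates and actions are elided): each object reports which event types its 
grammar can currently accept, and the bus dispatches only those.

// The object's position in its grammar determines what it will accept;
// the bus asks before pushing.
function makeAnt() {
  let state = 'findFood';
  return {
    expecting() {                  // current alternatives in the grammar
      return state === 'findFood' ? ['food'] : ['nest'];
    },
    accept(event) {                // semantic actions elided
      if (state === 'findFood' && event.type === 'food') state = 'findNest';
      else if (state === 'findNest' && event.type === 'nest') state = 'findFood';
    }
  };
}

const objects = [makeAnt()];

function publish(event) {
  for (const obj of objects)
    if (obj.expecting().includes(event.type)) obj.accept(event);
}

publish({ type: 'food' });   // the ant picks up food and now waits for 'nest'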


There is probably a lot of things missing and not thought through.


But I was just wondering if you know a language to do this kind of thing?


Thanks,
Benoit.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-16 Thread Alan Kay
The WAM and other fast schemes for Prolog are worth looking at. But the 
Javascript version that Alex did using his and Stephen Murrell's design for 
compact Prolog semantics (about 90 lines of Javascript code) is very 
illuminating for those interested in the logic of logic. 


But Prolog has always had some serious flaws -- so it is worth looking at 
cleaned up and enhanced versions (such as the Datalog with negation and time 
variants I've mentioned). Also, Shapiro's Concurrent Prolog did quite a cleanup 
long ago.

I particularly liked the arguments of Bill Kornfield's Prolog With Equality 
paper from many years ago -- this is one of several seminal perspectives on 
where this kind of language should be taken.

The big flaw with most of the attempts I've seen to combine Logic and Objects 
is that what should be done about state is not taken seriously. The first sins 
were committed in Prolog itself by allowing an assert that is not automatically 
undone. 
I've argued that it would be much better to use takeoffs of situation 
calculus and pseudotime to allow perfect 
deductions/implications/functional-relationships to be computed while still 
moving from one context to another to have a model of before, now, and after. 
These are not new ideas, and I didn't have them first.
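
A minimal JavaScript sketch of one such scheme, loosely in the spirit of the 
Worlds work mentioned elsewhere in these threads (the API here is an invented 
simplification): changes are scoped to a world and reach the parent only on an 
explicit commit, so a speculative assert is undone simply by discarding the 
child world.

// Reads fall through to the parent world; writes stay local until commit(),
// so nothing is destructively lost mid-speculation.
function makeWorld(parent = null) {
  const local = new Map();
  return {
    set(key, value) { local.set(key, value); },
    get(key) {
      return local.has(key) ? local.get(key)
                            : parent ? parent.get(key) : undefined;
    },
    sprout() { return makeWorld(this); },
    commit() { for (const [k, v] of local) parent.set(k, v); }
  };
}

const top = makeWorld();
top.set('blockA-on', 'table');
const w = top.sprout();            // speculate in a child world
w.set('blockA-on', 'blockB');
console.log(top.get('blockA-on')); // 'table'  -- the parent is untouched
w.commit();
console.log(top.get('blockA-on')); // 'blockB' -- only after an explicit commit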

Cheers,

Alan





 From: Ryan Mitchley ryan.mitch...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Friday, March 16, 2012 5:26 AM
Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
 

On 15/03/2012 14:20, Alan Kay wrote: 
Alex Warth did both a standard Prolog and an English based language one using 
OMeta in both Javascript, and in Smalltalk.



I must have a look at these. Thanks for all of the references. I was
working my way through Warren Abstract Machine implementation
details but it was truly headache-inducing (for me, anyway).

A book I keep meaning to get is Paradigms of Artificial
Intelligence Programming: Case Studies in Common Lisp, which
describes a Prolog-like implementation (and much more) in Lisp.

The Minsky book would be very welcome!


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-15 Thread Alan Kay
It's in the book Semantic Information Processing that Marvin Minsky put 
together in the mid 60s. I will get it scanned and send around (it is paired 
with the even more classic Advice Taker paper that led to Lisp ...)

Cheers,

Alan





 From: Wesley Smith wesley.h...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Thursday, March 15, 2012 10:13 AM
Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
 
On Thu, Mar 15, 2012 at 5:23 AM, Alan Kay alan.n...@yahoo.com wrote:
 You don't want to use assert because it doesn't get undone during
 backtracking. Look at the Alex Warth et al Worlds paper on the Viewpoints
 site to see a better way to do this. (This is an outgrowth of the labeled
 situations idea of McCarthy in 1963.)

I found a reference to this paper in
http://arxiv.org/pdf/1201.2430.pdf .  Looks like it's a paper called
Situations, actions and causal laws.  I'm not able to find any
PDF/online version.  Anyone know how to get ahold of this document?

thanks,
wes


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Dynabook ideas

2012-03-15 Thread Alan Kay
The other two physical ideas were

-- via head mounted display (in glasses a la Ivan Sutherland's goggles -- ca. 
1968 -- but invisible) 


-- as embodied in the environment (a la Nicholas Negroponte's and Dick Bolt's 
Dataland and Spatial Data Management System of the 70s).

In the late 60s, many of us thought that it might be easier to do the tiny flat 
panels needed for a HMD than to make the big ones needed for the tablet form 
factor. But in fact no one with development funds in the US was interested in 
flat panel displays at that time. All we had were the main patents and 
knowledge for all the subsequent development work of each of the technologies 
required (liquid crystal, plasma, particle migration, thin film, amorphous 
semiconductors, etc.)

Cheers,

Alan





 From: Loup Vaillant l...@loup-vaillant.fr
To: fonc@vpri.org 
Sent: Thursday, March 15, 2012 3:59 PM
Subject: [fonc] Dynabook ideas
 
On 15/03/2012 00:44, Alan Kay wrote:

 To me the Dynabook has always been 95% a service model and 5% physical
 specs (there were three main physical ideas for it, only one was the
 tablet).

Err, what were those ideas?  I have seen videos of you presenting it,
but I can't see more than a tablet with a keyboard and a touch screen
—wait, are the keyboard and the touch screen the other two ideas?

Loup.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-14 Thread Alan Kay
 as it's ever going to be, and that over time the signing system will be 
extended to allow more fine grained human relationships to be expressed.  


For example at the moment, as an iOS developer, I can allow different apps 
that I write to access the same shared data via iCloud.  That makes sense 
because I am solely responsible for making sure that the apps share a common 
understanding of the meaning of the data, and Apple's APIs permit multiple 
independent processes to coordinate access to the same file.  


I am curious to see how Apple plans to make it possible for different 
developers to share data.  Will this be done by a network of cryptographic 
permissions between apps?


-- Max


On Tue, Mar 13, 2012 at 9:28 AM, Mack m...@mackenzieresearch.com wrote:

For better or worse, both Apple and Microsoft (via Windows 8) are attempting 
to rectify this via the Terms and Conditions route.


It's been announced that both Windows 8 and OSX Mountain Lion will require 
applications to be installed via download thru their respective App Stores 
in order to obtain certification required for the OS to allow them access to 
features (like an installed camera, or the network) that are outside the 
default application sandbox.  


The acceptance of the App Store model for the iPhone/iPad has persuaded them 
that this will be (commercially) viable as a model for general public 
distribution of trustable software.


In that world, the Squeak plugin could be certified as safe to download in a 
way that System Admins might believe.



On Feb 29, 2012, at 3:09 PM, Alan Kay wrote:

Windows (especially) is so porous that SysAdmins (especially in school 
districts) will not allow teachers to download .exe files. This wipes out 
the Squeak plugin that provides all the functionality.


But there is still the browser and Javascript. But Javascript isn't fast 
enough to do the particle system. But why can't we just download the 
particle system and run it in a safe address space? The browser people 
don't yet understand that this is what they should have allowed in the 
first place. So right now there is only one route for this (and a few years 
ago there were none) -- and that is Native Client on Google Chrome. 



 But Google Chrome is only 13% penetrated, and the other browser fiefdoms 
don't like NaCl. Google Chrome is an .exe file so teachers can't 
download it (and if they could, they could download the Etoys plugin).


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-14 Thread Alan Kay
Hi Scott

This seems like a plan that should be done and tried and carefully evaluated. I 
think the approach is good. It could be not quite enough to work, but it 
should give rise to a lot of useful information for further passes at this.


1. Psychologist O.K. Moore in the early 60s at Yale and elsewhere pioneered the 
idea of a talking typewriter to help children learn how to read via learning 
to write. This was first a grad student in a closet with a microphone 
simulating a smart machine -- but later the Edison division of McGraw-Hill made 
a technology that did some of these things. 


The significance of Moore's work is that he really thought things through, both 
with respect to what such a curriculum might be, but also to the nature of the 
whole environment made for the child. 


He first defined a *responsive environment* as one that:
a.   permits learners to explore freely
b.   informs learners immediately about the consequences of their actions
c.   is self-pacing, i.e. events happen within the environment at a rate 
determined by the learner
d.  permits the learners to make full use of their capacities to discover 
relations of various kinds
e.   has a structure such that learners are likely to make a series of 
interconnected discoveries about the physical, cultural or social world


He called a responsive environment: “*autotelic*, if engaging in it is done for 
its own sake rather than for obtaining rewards or avoiding punishments that 
have no inherent connection with the activity itself”. By “discovery” he meant 
“gently guided discovery” in the sense of Montessori, Vygotsky, Bruner and 
Papert (i.e. recognizing that it is very difficult for human beings to come up 
with good ideas from scratch—hence the need for forms of guidance—but that 
things are learned best if the learner puts in the effort to make the final 
connections themselves—hence the need for forms of discovery).

The many papers from this work greatly influenced the thinking about personal 
computing at Xerox PARC in the 70s. Here are a couple:

-- O. K. Moore, Autotelic Responsive Environments and Exceptional Children, 
Experience, Structure and Adaptability (ed. Harvey), Springer, 1966
-- Anderson and Moore, Autotelic Folk Models, Sociological Quarterly, 1959

2. Separating out some of the programming ideas here:

a. Simplest one is that the most important users of this system are the 
children, so it would be a better idea to make the tile scripting look as easy 
for them as possible. I don't agree with the rationalization in the paper about 
preserving the code reading skills of existing programmers.

b. Good idea to go all the way to the bottom with the children's language.

c. Figure 2 introduces another -- at least equally important language -- in my 
opinion, this one should be made kid usable and programmable -- and I would try 
to see how it could fit with the TS language in some way. 


d. There is another language -- AIML -- introduced for recognizing things. I 
would use something much nicer, easier, more readable, etc., -- like OMeta -- 
or more likely I would go way back to the never implemented Smalltalk-71 (which 
had these and some of the above features in its design and also tried to be kid 
usable) -- and try to make a version that worked (maybe too hard to do in 
general or for the scope of this project, but you can see why it would be nice 
to have all of the mechanisms that make your system work be couched in kid 
terms and looks and feels if possible).

3. It's out of the scope of your paper and these comments to discuss getting 
kids to add other structures besides stories and narrative to think with. You 
have to start with stories, and that is enough for now. A larger scale plan 
(you may already have) would involve a kind of weaning process to get kids to 
add non-story thinking (as is done in math and science, etc.) to their skills. 
This is a whole curriculum of its own.


I make these comments because I think your project is a good idea, on the right 
track, and needs to be done

Best wishes

Alan





 From: C. Scott Ananian csc...@laptop.org
To: IAEP SugarLabs i...@lists.sugarlabs.org 
Sent: Tuesday, March 13, 2012 4:07 PM
Subject: [IAEP] Barbarians at the gate! (Project Nell)
 

I read the following today:


A healthy [project] is, confusingly, one at odds with itself. There is a 
healthy part which is attempting to normalize and to create predictability, 
and there needs to be another part that is tasked with building something new 
that is going to disrupt and eventually destroy that normality. 
(http://www.randsinrepose.com/archives/2012/03/13/hacking_is_important.html)


So, in this vein, I'd like to encourage Sugar-folk to read the short paper 
Chris Ball, Michael Stone, and I just submitted (to IDC 2012) on Nell, our 
design for XO-3 software for the reading project:


     http://cscott.net/Publications/OLPC/idc2012.pdf


You're expected not to like 

Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-14 Thread Alan Kay
Hi Scott --

1. I will see if I can get one of these scanned for you. Moore tended to 
publish in journals and there is very little of his stuff available on line.

2.a. if (a<b) { ... } is easier to read than if a<b then ...? There is no 
hint of the former being tweaked for decades to make it easier to read.

Several experiments from the past cast doubt on the rest of the idea. At Disney 
we did a variety of code display generators to see what kinds of 
transformations we could do to the underlying Smalltalk (including syntactic) 
to make it something that could be subsetted as a growable path from Etoys. 


We got some good results from this (and this is what I'd do with Javascript in 
both directions -- Alex Warth's OMeta is in Javascript and is quite complete 
and could do this).

However, the showstopper was all the parentheses that had to be rendered in 
tiles. Mike Travers at MIT had done one of the first tile based editors for a 
version of Lisp that he used, and this was even worse.

More recently, Jens Moenig (who did SNAP) also did a direct renderer and editor 
for Squeak Smalltalk (this can be tried out) and it really seemed to be much 
too cluttered.

One argument for some of this is "well, teach the kids a subset that doesn't 
use so many parens ...". This could be a solution.

However, in the end, I don't think Javascript semantics is particularly good 
for kids. For example, one of the features of Etoys that turned out to be very 
powerful for children and other Etoy programmers is the easy/trivial parallel 
methods execution. And there are others in Etoys and yet others in Scratch 
that are non-standard in regular programming languages but are very powerful 
for children (and some of them are better than standard CS language ideas).

I'm encouraging you to do something better (that would be ideal). Or at least 
as workable. Giving kids less just because that's what an existing language for 
adults has is not a good tactic.


2.c. Ditto 2.a. above

2.d. Ditto above above

Cheers,

Alan







 From: C. Scott Ananian csc...@laptop.org
To: Alan Kay alan.n...@yahoo.com 
Cc: IAEP SugarLabs i...@lists.sugarlabs.org; Fundamentals of New Computing 
fonc@vpri.org; Viewpoints Research a...@vpri.org 
Sent: Wednesday, March 14, 2012 10:25 AM
Subject: Re: [IAEP] [fonc] Barbarians at the gate! (Project Nell)
 

On Wed, Mar 14, 2012 at 12:54 PM, Alan Kay alan.n...@yahoo.com wrote:

The many papers from this work greatly influenced the thinking about personal 
computing at Xerox PARC in the 70s. Here are a couple:


-- O. K. Moore, Autotelic Responsive Environments and Exceptional Children, 
Experience, Structure and Adaptability (ed. Harvey), Springer, 1966
-- Anderson and Moore, Autotelic Folk Models, Sociological Quarterly, 1959



Thank you for these references.  I will chase them down and learn as much as I 
can.
 
2. Separating out some of the programming ideas here:


a. Simplest one is that the most important users of this system are the 
children, so it would be a better idea to make the tile scripting look as 
easy for them as possible. I don't agree with the rationalization in the 
paper about preserving the code reading skills of existing programmers.


I probably need to clarify the reasoning in the paper for this point.


Traditional text-based programming languages have been tweaked over decades 
to be easy to read -- for both small examples and large systems.  It's 
somewhat of a heresy, but I thought it would be interesting to explore a 
tile-based system that *didn't* throw away the traditional text structure, and 
tried simply to make the structure of the traditional text easier to visualize 
and manipulate.


So it's not really skills of existing programmers I'm interested in -- I 
should reword that.  It's that I feel we have an existence proof that the 
traditional textual form of a program is easy to read, even for very 
complicated programs.  So I'm trying to scale down the thing that works, 
instead of trying to invent something new which proves unwieldy at scale.


b. Good idea to go all the way to the bottom with the children's language.


c. Figure 2 introduces another -- at least equally important language -- in 
my opinion, this one should be made kid usable and programmable -- and I 
would try to see how it could fit with the TS language in some way. 



This language is JSON, which is just the object-definition subset of 
JavaScript.  So it can in fact be expressed with TurtleScript tiles.  
(Although I haven't yet tackled quasiquote in TurtleScript.)


d. There is another language -- AIML -- introduced for recognizing things. I 
would use something much nicer, easier, more readable, etc., -- like OMeta -- 
or more likely I would go way back to the never implemented Smalltalk-71 
(which had these and some of the above features in its design and also tried 
to be kid usable) -- and try to make a version that worked (maybe too hard to 
do in general

Re: [fonc] Apple and hardware (was: Error trying to compile COLA)

2012-03-14 Thread Alan Kay
Yep, I was there and trying to get the Newton project off the awful AT&T chip 
they had first chosen. Larry Tesler (who worked with us at PARC) finally wound 
up taking over this project and doing a number of much better things with it. 
Overall what happened with Newton was too bad -- it could have been much better 
-- but there were many too many different opinions and power bases involved.

If you have a good version of confinement (which is pretty simple HW-wise) you 
can use Butler Lampson's schemes for Cal-TSS to make a workable version of a 
capability system.

And, yep, I managed to get them to allow interpreters to run on the iPad, but 
was not able to get Steve to countermand the no sharing rule.

Cheers,

Alan





 From: Jecel Assumpcao Jr. je...@merlintec.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, March 14, 2012 9:17 AM
Subject: [fonc] Apple and hardware (was: Error trying to compile COLA)
 
Alan Kay wrote on Wed, 14 Mar 2012 05:53:21 -0700 (PDT)
 A hardware vendor with huge volumes (like Apple) should be able to get a CPU
 vendor to make HW that offers real protection, and at a granularity that 
 makes
 more systems sense.

They did just that when they founded ARM Ltd (with Acorn and VTI): the
most significant change from the ARM3 to the ARM6 was a new MMU with a
more fine-grained protection mechanism which was designed specially for
the Newton OS. No other system used it and though I haven't checked, I
wouldn't be surprised if this feature was eliminated from more recent
versions of ARM.

Compared to a real capability system (like the Intel iAPX432/BiiN/960XA
or the IBM AS/400) it was a rather awkward solution, but at least they
did make an effort.

Having been created under Scully, this technology did not survive Jobs'
return.

 But the main point here is that there are no technical reasons why a child 
 should
 be restricted from making an Etoys or Scratch project and sharing it with 
 another
 child on an iPad.
 No matter what Apple says, the reasons clearly stem from strategies and 
 tactics
 of economic exclusion.
 So I agree with Max that the iPad at present is really the anti-Dynabook

They have changed their position a little. I have a Hand Basic on my
iPhone which is compatible with the Commodore 64 Basic. I can write and
save programs, but can't send them to another device or load new
programs from the Internet. Except I can - there are applications for
the iPhone that give you access to the filing system and let you
exchange files with a PC or Mac. But that is beyond most users, which
seems to be a good enough barrier from Apple's viewpoint.

The same thing applies to this nice native development environment for
Lua on the iPad:

http://twolivesleft.com/Codea/

You can program on the iPad/iPhone, but can't share.

-- Jecel

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Talking Typewriter [was: Barbarians at the gate! (Project Nell)]

2012-03-14 Thread Alan Kay
You had to have a lot of moxie in the 60s to try to make Moore's ideas into 
real technology. It was amazing what they were able to do.

I wonder where this old junk is now? Should be in the Computer History Museum!

Cheers,

Alan





 From: Martin McClure martin.mccl...@vmware.com
To: Fundamentals of New Computing fonc@vpri.org 
Cc: Viewpoints Research a...@vpri.org 
Sent: Wednesday, March 14, 2012 11:26 AM
Subject: Re: [fonc] Talking Typewriter [was:  Barbarians at the gate! (Project 
Nell)]
 
On 03/14/2012 09:54 AM, Alan Kay wrote:
 
 1. Psychologist O.K. Moore in the early 60s at Yale and elsewhere
 pioneered the idea of a talking typewriter to help children learn how
 to read via learning to write. This was first a grad student in a closet
 with a microphone simulating a smart machine -- but later the Edison
 division of McGraw-Hill made a technology that did some of these things.

Now that reference brings back some memories!

As an undergrad I had a student job in the Computer Assisted Instruction
lab. One day, a large pile of old parts arrived from somewhere, with no
accompanying documentation, and I was told, "Put them together." It
turned out to be two Edison talking typewriters. I got one fully
functional; the other had a couple of minor parts missing. This was in
late '77 or early '78, about the same time I was attempting
(unsuccessfully) to learn something about Smalltalk.

Regards,

-Martin


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Apple and hardware (was: Error trying to compile COLA)

2012-03-14 Thread Alan Kay
Hi Jecel

The CRISP was too slow, and had other problems in its details. Sakoman liked it 
...

Bill Atkinson did Hypercard ... Larry made many other contributions at Xerox 
and Apple

To me the Dynabook has always been 95% a service model and 5% physical specs 
(there were three main physical ideas for it, only one was the tablet).

Cheers,

Alan





 From: Jecel Assumpcao Jr. je...@merlintec.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Wednesday, March 14, 2012 3:55 PM
Subject: Re: [fonc] Apple and hardware (was: Error trying to compile COLA)
 
Alan Kay wrote on Wed, 14 Mar 2012 11:36:30 -0700 (PDT)
 Yep, I was there and trying to get the Newton project off the awful AT&T chip
 they had first chosen.

Interesting - a few months ago I studied the datasheets for the Hobbit
and read all the old CRISP papers and found this chip rather cute. It is
even more C-centric than RISCs (especially the ARM) so might not be a
good choice for other languages. Another project that started out using
this and then had to switch (to the PowerPC) was the BeBox. In the link
I give below it says both projects were done by the same people (Jean
Louis Gassée and Steve Sakoman), so in a way it was really just one
project that used the chip.

 Larry Tesler (who worked with us at PARC) finally wound up taking over this
 project and doing a number of much better things with it.

He was also responsible for giving us Hypercard, right?

 Overall what happened with Newton was too bad -- it could have been much
 better -- but there were many too many different opinions and power bases
 involved.

This looks like a reasonable history of the Newton project (though some
parts that I know aren't quite right, so I can't guess how accurate the
parts I didn't know are):

http://lowendmac.com/orchard/06/john-sculley-newton-origin.html

It doesn't mention NewtonScript nor Object Soups. I have never used it
myself, only read about it and seen some demos. But my impression is
that this was the closest thing we have had to the dynabook yet.

 If you have a good version of confinement (which is pretty simple HW-wise) 
 you
 can use Butler Lampson's schemes for Cal-TSS to make a workable version of a
 capability system.

The 286 protected mode was good enough for this, and was extended in the
386. I am not sure all modern x86 processors still implement these, and
if they do it is likely that actually using them will hurt performance
so much that it isn't an option in practice.

 And, yep, I managed to get them to allow interpreters to run on the iPad, 
 but was
 not able to get Steve to countermand the no sharing rule.

That is a pity, though at least having native languages makes these
devices a reasonable replacement for my old Radio Shack PC-4 calculator.
I noticed that neither Matlab nor Mathematica are available for the
iPad, but only simple terminal apps that allow you to access these
applications running on your PC. What a waste!

-- Jecel

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)

2012-03-14 Thread Alan Kay
Well, it was very much a mythical beast even on paper -- and you really have 
to implement programming languages and make a lot of things with them to be 
able to assess them 


But -- basically -- since meeting Seymour and starting to think about children 
and programming, there were eight systems that I thought were really nifty and 
cried out to be unified somehow:
  1. Joss
  2. Lisp
  3. Logo -- which was originally a unification of Joss and Lisp, but I thought 
more could be done in this direction.
  4. Planner -- a big set of ideas (long before Prolog) by Carl Hewitt for 
logic programming and pattern directed inference both forward and backwards 
with backtracking
  5. Meta II -- a super simple meta parser and compiler done by Val Schorre at 
UCLA ca 1963
  6. IMP -- perhaps the first real extensible language that worked well -- by 
Ned Irons (CACM, Jan 1970)

  7. The Lisp-70 Pattern Matching System -- by Larry Tesler, et al, with some 
design ideas by me

  8. The object and pattern directed extension stuff I'd been doing previously 
with the Flex Machine and afterwards at SAIL (that also was influenced by Meta 
II)


One of the schemes was to really make the pattern matching parts of this work 
for everything that eventually required invocations and binding. This was 
doable semantically but was a bear syntactically because of the different 
senses of what kinds of matching and binding were intended for different 
problems. This messed up readability and the desire that "simple things should 
be simple".

Examples I wanted to cover included simple translations of languages (English 
to Pig Latin, English to French, etc.; some of these had been done in Logo), the 
Winograd robot block stacking and other examples done with Planner, the making 
of the language the child was using, message sending and receiving, extensions 
to Smalltalk-71, and so forth.

I think today the way to try to do this would be with a much more graphical UI 
than with text -- one could imagine tiles that would specify what to match, and 
the details of the match could be submerged a bit.

More recently, both OMeta and several of Ian's matchers can handle multiple 
kinds of matching with binding and do backtracking, etc., so one could imagine 
a more general language that could be based on this.

On the other hand, trying to stuff 8 kinds of language ideas into one new 
language in a graceful way could be a siren's song of a goal.

Still 

Cheers,

Alan





 From: shaun gilchrist shaunxc...@gmail.com
To: fonc@vpri.org 
Sent: Wednesday, March 14, 2012 11:38 AM
Subject: Re: [fonc] [IAEP] Barbarians at the gate! (Project Nell)
 

Alan, 

I would go way back to the never implemented Smalltalk-71

Is there a formal specification of what 71 should have been? I have only ever 
read about it in passing reference in the various histories of Smalltalk as a 
step on the way to 72, 76, and finally 80. 

I am very intrigued as to what sets 71 apart so dramatically. -Shaun


On Wed, Mar 14, 2012 at 12:29 PM, Alan Kay alan.n...@yahoo.com wrote:

Hi Scott --


1. I will see if I can get one of these scanned for you. Moore tended to 
publish in journals and there is very little of his stuff available on line.


2.a. if (a<b) { ... } is easier to read than if a<b then ...? There is no 
hint of the former being tweaked for decades to make it easier to read.


Several experiments from the past cast doubt on the rest of the idea. At 
Disney we did a variety of code display generators to see what kinds of 
transformations we could do to the underlying Smalltalk (including syntactic) 
to make it something that could be subsetted as a growable path from Etoys. 



We got some good results from this (and this is what I'd do with Javascript 
in both directions -- Alex Warth's OMeta is in Javascript and is quite 
complete and could do this).


However, the showstopper was all the parentheses that had to be rendered in 
tiles. Mike Travers at MIT had done one of the first tile based editors for a 
version of Lisp that he used, and this was even worse.


More recently, Jens Moenig (who did SNAP) also did a direct renderer and 
editor for Squeak Smalltalk (this can be tried out) and it really seemed to 
be much too cluttered.


One argument for some of this is "well, teach the kids a subset that doesn't 
use so many parens ...". This could be a solution.


However, in the end, I don't think Javascript semantics is particularly good 
for kids. For example, one of the features of Etoys that turned out to be very 
powerful for children and other Etoy programmers is the easy/trivial parallel 
methods execution. And there are others in Etoys and yet others in Scratch 
that are non-standard in regular programming languages but are very powerful 
for children (and some of them are better than standard CS language ideas).


I'm encouraging you to do something better (that would be ideal). Or at least 
as workable. Giving kids less

Re: [fonc] Error trying to compile COLA

2012-03-13 Thread Alan Kay
But we haven't wanted to program in Smalltalk for a long time.

This is a crazy non-solution (and is so on the iPad already)

No one should have to work around someone else's bad designs and 
implementations ...


Cheers,

Alan





 From: Mack m...@mackenzieresearch.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, March 13, 2012 9:28 AM
Subject: Re: [fonc] Error trying to compile COLA
 

For better or worse, both Apple and Microsoft (via Windows 8) are attempting 
to rectify this via the Terms and Conditions route.


It's been announced that both Windows 8 and OSX Mountain Lion will require 
applications to be installed via download thru their respective App Stores 
in order to obtain certification required for the OS to allow them access to 
features (like an installed camera, or the network) that are outside the 
default application sandbox.  


The acceptance of the App Store model for the iPhone/iPad has persuaded them 
that this will be (commercially) viable as a model for general public 
distribution of trustable software.


In that world, the Squeak plugin could be certified as safe to download in a 
way that System Admins might believe.



On Feb 29, 2012, at 3:09 PM, Alan Kay wrote:

Windows (especially) is so porous that SysAdmins (especially in school 
districts) will not allow teachers to download .exe files. This wipes out the 
Squeak plugin that provides all the functionality.


But there is still the browser and Javascript. But Javascript isn't fast 
enough to do the particle system. But why can't we just download the particle 
system and run it in a safe address space? The browser people don't yet 
understand that this is what they should have allowed in the first place. So 
right now there is only one route for this (and a few years ago there were 
none) -- and that is Native Client on Google Chrome. 



 But Google Chrome is only 13% penetrated, and the other browser fiefdoms 
don't like NaCl. Google Chrome is an .exe file so teachers can't download 
it (and if they could, they could download the Etoys plugin).


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Sorting the WWW mess

2012-03-01 Thread Alan Kay
Hi Loup

Someone else said that about links.

Browsing is about either knowing where you are (and going) and/or about dealing 
with a rough max of 100 items. After that, search is necessary.

However, Ted Nelson said a lot in each of the last 5 decades about what kinds 
of linking do the most good. (Chase down what he has to say about why one-way 
links are not what should be done.) He advocated from the beginning that the 
provenance of links must be preserved (which also means that you cannot copy 
what is being pointed to without also copying its provenance). This allows a 
much better way to deal with all manner of usage, embeddings, etc. -- including 
both fair use and also various forms of micropayments and subscriptions.

One way to handle this requirement is via protection mechanisms that real 
objects can supply.
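
A toy JavaScript sketch of what the requirement implies for the data structure 
(everything here is invented, purely illustrative): the link and its 
provenance form one immutable value, and the only copy operation carries both.

// Provenance travels with the link; copying cannot shed it.
function makeLink(target, provenance) {
  return Object.freeze({ target, provenance: Object.freeze(provenance) });
}

function copyLink(link) {
  // the pointed-to material is never copied without its provenance
  return makeLink(link.target, link.provenance);
}

const original = makeLink('doc://essay#para3',
                          { author: 'ted', firstPublished: 1974 });
const quoted = copyLink(original);
console.log(quoted.provenance.author);   // 'ted' -- attribution preserved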

Cheers,

Alan





 From: Loup Vaillant l...@loup-vaillant.fr
To: fonc@vpri.org 
Sent: Thursday, March 1, 2012 6:36 AM
Subject: Re: [fonc] Sorting the WWW mess
 
Martin Baldan wrote:
 That said, I don't see why you have an issue with search engines and
 search services. Even on your own machine, searching files with complex
 properties is far from trivial. When outside, untrusted sources are
 involved, you need someone to tell you what is relevant, what is not,
 who is lying, and so on. Google got to dominate that niche for the right
 reasons, namely, being much better than the competition.

I wasn't clear.  Actually, I didn't want to state my opinion.  I can't
find the message, but I (incorrectly?) remembered Alan saying that
one-way links basically created the need for big search engines.  As I
couldn't imagine an architecture that could do away with centralized
search engines, I wanted to ask about it.

That said, I do have issues with Big Data search engines: they are
centralized.  That alone gives them more power than I'd like them to
have.  If we could remove the centralization while keeping the good
stuff (namely, finding things), that would be really cool.

Loup.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Chrome Penetration

2012-03-01 Thread Alan Kay
My friend Peter Norvig is the Director of Research at Google. 


I told him that I had heard of an astounding jump in the penetration of 
Chrome.

He says the best numbers they have at present are that Chrome is 20% to 30% 
penetrated ...

Cheers,

Alan
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Alan Kay
I think it is domain dependent -- for example, it is very helpful to have a 
debugger of some kind for a parser, but less so for a projection language like 
Nile. On the other hand, debuggers for making both of these systems are very 
helpful. Etoys doesn't have a debugger because the important state is mostly 
visible in the form of graphical objects. OTOH, having a capturing tracer (a la 
EXDAMS) could be nice for both reviewing and understanding complex interactions 
and also dealing with unrepeatable events.

The topic of going from an idea for a useful POL to an actually mission usable 
POL is prime thesis territory.


Cheers,

Alan





 From: Loup Vaillant l...@loup-vaillant.fr
To: fonc@vpri.org 
Sent: Wednesday, February 29, 2012 5:43 AM
Subject: Re: [fonc] Error trying to compile COLA
 
Yes, I'm aware of that limitation.  I have the feeling however that
IDEs and debuggers are overrated.  Sure, when dealing with a complex
program in a complex language (say, tens of thousands of lines in C++),
then sure, IDEs and debuggers are a must.  But I'm not sure their
absence outweigh the simplicity potentially achieved with POLs. (I
mean, I really don't know.  It could even be domain-dependent.)

I agree however that having both (POLs + tools) would be much better,
and is definitely worth pursuing.  I'll think about it.

Loup.



Alan Kay wrote:
 With regard to your last point -- making POLs -- I don't think we are
 there yet. It is most definitely a lot easier to make really powerful
 POLs fairly quickly than it used to be, but we still don't have a nice
 methodology and tools to automatically supply the IDE, debuggers, etc.
 that need to be there for industrial-strength use.

 Cheers,

 Alan

     *From:* Loup Vaillant l...@loup-vaillant.fr
     *To:* fonc@vpri.org
     *Sent:* Wednesday, February 29, 2012 1:27 AM
     *Subject:* Re: [fonc] Error trying to compile COLA

     Alan Kay wrote:
       Hi Loup
      
       Very good question -- and tell your Boss he should support you!

     Cool, thank you for your support.


        […] One general argument is that non-machine-code languages are POLs
        of a weak sort, but are more effective than writing machine code for
        most problems. (This was quite controversial 50 years ago -- and lots
        of bosses forbade using any higher level language.)

      I hadn't thought about this historical perspective. I'll keep that in
     mind, thanks.


        Companies (and programmers within) are rarely rewarded for saving
        costs over the real lifetime of a piece of software […]

     I think my company is. We make custom software, and most of the time
     also get to maintain it. Of course, we charge for both. So, when we
     manage to keep the maintenance cheap (less bugs, simpler code…), we win.

     However, we barely acknowledge it: much code I see is a technical debt
     waiting to be paid, because the original implementer wasn't given the
     time to do even a simple cleanup.


        An argument that resonates with some bosses is the "debuggable
        requirements/specifications - ship the prototype and improve it",
        whose benefits show up early on.

     But of course. I should have thought about it, thanks.


       […] one of the most important POLs to be worked on are
       the ones that are for making POLs quickly.

      This is why I am totally thrilled by Ometa and Maru. I use them to point
     out that programming languages can be much cheaper to implement than
     most think they are. It is difficult however to get past the idea that
     implementing a language (even a small, specialized one) is by default a
     huge undertaking.

     Cheers,
     Loup.
     ___
     fonc mailing list
    fonc@vpri.org mailto:fonc@vpri.org
     http://vpri.org/mailman/listinfo/fonc




 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-29 Thread Alan Kay
Hi Duncan

The short answers to these questions have already been given a few times on 
this list. But let me try another direction to approach this.

The first thing to notice about the overlapping windows interface personal 
computer experience is that it is logically independent of the code/processes 
running underneath. This means (a) you don't have to have a single religion 
down below (b) the different kinds of things that might be running can be 
protected from each other using the address space mechanisms of the CPU(s), and 
(c) you can think about allowing outsiders to do pretty much what they want 
to create a really scalable really expandable WWW.

If you are going to put a browser app on an OS, then the browser has to 
be a mini-OS, not an app. 


But standard apps are a bad idea (we thought we'd gotten rid of them in the 
70s) because what you really want to do is to integrate functionality visually 
and operationally using the overlapping windows interface, which can safely get 
images from the processes and composite them on the screen. (Everything is now 
kind of super-desktop-publishing.) An app is now just a kind of integration.

But the route that was actually taken with the WWW and the browser flew in the 
face of what was already being done.

Hypercard existed, and showed what a WYSIWYG authoring system for end-users 
could do. This was ignored.

Postscript existed, and showed that a small interpreter could be moved easily 
from machine to machine while retaining meaning. This was ignored.

And so forth.

19 years later we see various attempts at inventing things that were already 
around when the WWW was tacked together.

But the thing that is amazing to me is that in spite of the almost universal 
deployment of it, it still can't do what you can do on any of the machines it 
runs on. And there have been very few complaints about this from the mostly 
naive end-users (and what seem to be mostly naive computer folks who deal with 
it).

Some of the blame should go to Apple and MS for not making real OSs for 
personal computers -- or better, going the distance to make something better 
than the old OS model. In either case both companies blew doing basic 
protections between processes. 


On the other hand, the WWW and first browsers were originally done on 
workstations that had stronger systems underneath -- so why were they so blind?


As an aside I should mention that there have been a number of attempts to do 
something about OS bloat. Unix was always too little too late but its one 
outstanding feature early on was its tiny kernel with a design that wanted 
everything else to be done in user-mode-code. Many good things could have 
come from the later programmers of this system realizing that being careful 
about dependencies is a top priority. (And you especially do not want to have 
your dependencies handled by a central monolith, etc.)


So, this gradually turned into an awful mess. But Linus went back to square one 
and redefined a tiny kernel again -- the realization here is that you do have 
to arbitrate basic resources of memory and process management, but you should 
allow everyone else to make the systems they need. This really can work well if 
processes can be small and interprocess communication fast (not the way Intel 
and Motorola saw it ...).


And I've also mentioned Popek's LOCUS system as a nice model for migrating 
processes over a network. It was Unix only, but there was nothing about his 
design that required this.

Cutting to the chase with a current day example. We made Etoys 15 years ago so 
children could learn about math, science, systems, etc. It has a particle 
system that allows many interesting things to be explored.

Windows (especially) is so porous that SysAdmins (especially in school 
districts) will not allow teachers to download .exe files. This wipes out the 
Squeak plugin that provides all the functionality.

But there is still the browser and Javascript. But Javascript isn't fast enough 
to do the particle system. But why can't we just download the particle system 
and run it in a safe address space? The browser people don't yet understand 
that this is what they should have allowed in the first place. So right now 
there is only one route for this (and a few years ago there were none) -- and 
that is Native Client on Google Chrome. 


 But Google Chrome is only 13% penetrated, and the other browser fiefdoms don't 
like NaCl. Google Chrome is an .exe file so teachers can't download it (and 
if they could, they could download the Etoys plugin).

Just in from browserland ... there is now -- 19 years later -- an allowed route 
to put samples in your machine's sound buffer that works on some of the 
browsers.

Holy cow folks!

Alan







 From: Duncan Mak duncan...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Wednesday, February 29, 2012 11:50 AM
Subject: Re

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Loup

Very good question -- and tell your Boss he should support you!

If your boss has a math or science background, this will be an easy sell 
because there are many nice analogies that hold, and also some good examples in 
computing itself.

The POL approach is generally good, but for a particular problem area could be 
as difficult as any other approach. One general argument is that 
non-machine-code languages are POLs of a weak sort, but are more effective 
than writing machine code for most problems. (This was quite controversial 50 
years ago -- and lots of bosses forbade using any higher level language.)

Four arguments against POLs are the difficulties of (a) designing them, (b) 
making them, (c) creating IDE etc tools for them, and (d) learning them. (These 
are similar to the arguments about using math and science in engineering, but 
are not completely bogus for a small subset of problems ...).

Companies (and programmers within) are rarely rewarded for saving costs over 
the real lifetime of a piece of software (similar problems exist in the climate 
problems we are facing). These are social problems, but part of real 
engineering. However, at some point life-cycle costs and savings will become 
something that is accounted and rewarded-or-dinged. 

An argument that resonates with some bosses is the "debuggable 
requirements/specifications - ship the prototype and improve it", whose 
benefits show up early on. However, these quicker-track processes will often be 
stressed for time to do a new POL.

This suggests that one of the most important POLs to be worked on are the ones 
that are for making POLs quickly. I think this is a huge important area and 
much needs to be done here (also a very good area for new PhD theses!).


Taking all these factors (and there are more), I think the POL and extensible 
language approach works best for really difficult problems that small numbers 
of really good people are hooked up to solve (could be in a company, and very 
often in one of many research venues) -- and especially if the requirements 
will need to change quite a bit, both from learning curve and quick response to 
the outside world conditions.

Here's where a factor of 100 or 1000 (sometimes even a factor of 10) less code 
will be qualitatively powerful.

Right now I draw a line at *100. If you can get this or more, it is worth 
surmounting the four difficulties listed above. If you can get *1000, you are 
in a completely new world of development and thinking.


Cheers,

Alan






 From: Loup Vaillant l...@loup-vaillant.fr
To: fonc@vpri.org 
Sent: Tuesday, February 28, 2012 8:17 AM
Subject: Re: [fonc] Error trying to compile COLA
 
Alan Kay wrote:
 Hi Loup

 As I've said and written over the years about this project, it is not
 possible to compare features in a direct way here.

Yes, I'm aware of that.  The problem arises when I do advocacy. A
response I often get is "but with only 20,000 lines, they gotta
leave features out!".  It is not easy to explain that a
point-by-point comparison is either unfair or flatly impossible.


 Our estimate so far is that we are getting our best results from the
 consolidated redesign (folding features into each other) and then from
 the POLs. We are still doing many approaches where we thought we'd have
 the most problems with LOCs, namely at the bottom.

If I got it, what you call "consolidated redesign" encompasses what I
called "feature creep" and "good engineering principles" (I understand
now that they can't be easily separated). I originally estimated that:

- You manage to gain 4 orders of magnitude compared to current OSes,
- consolidated redesign gives you roughly 2 of those (from 200M to 2M),
- problem oriented languages give you the remaining 2 (from 2M to 20K).

Did I…
- overstate the power of problem oriented languages?
- understate the benefits of consolidated redesign?
- forget something else?

(Sorry to bother you with those details, but I'm currently trying to
  convince my Boss to pay me for a PhD on the grounds that PoLs are
  totally amazing, so I'd better know real fast if I'm being
  over-confident.)

Thanks,
Loup.



 Cheers,

 Alan


     *From:* Loup Vaillant l...@loup-vaillant.fr
     *To:* fonc@vpri.org
     *Sent:* Tuesday, February 28, 2012 2:21 AM
     *Subject:* Re: [fonc] Error trying to compile COLA

      Originally, the VPRI claims to be able to do a system that's 10,000 times
      smaller than our current bloatware. That's going from roughly 200
     million lines to 20,000. (Or, as Alan Kay puts it, from a whole library
     to a single book.) That's 4 orders of magnitude.

 From the report, I made a rough breakdown of the causes for code
     reduction. It seems that

     - 1 order of magnitude is gained by removing feature creep. I agree
      feature creep can be important. But I also believe most features
     belong to a long tail, where each is needed by a minority of users.
     It does matter

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Reuben

Yep. One of the many finesses in the STEPS project was to point out that 
requiring OSs to have drivers for everything misses what being networked is all 
about. In a nicer distributed systems design (such as Popek's LOCUS), one would 
get drivers from the devices automatically, and they would not be part of any 
OS code count. Apple even did this in the early days of the Mac for its own 
devices, but couldn't get enough other vendors to see why this was a really big 
idea.

Eventually the OS melts away to almost nothing (as it did at PARC in the 70s).

Then the question starts to become how much code has to be written to make the 
various functional parts that will be semi-automatically integrated to make 
'vanilla personal computing'?


Cheers,

Alan





 From: Reuben Thomas r...@sc3d.org
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, February 28, 2012 9:33 AM
Subject: Re: [fonc] Error trying to compile COLA
 
On 28 February 2012 16:41, BGB cr88...@gmail.com wrote:

  - 1 order of magnitude is gained by removing feature creep.  I agree
   feature creep can be important.  But I also believe most features
   belong to a long tail, where each is needed by a minority of users.
   It does matter, but if the rest of the system is small enough,
   adding the few features you need isn't so difficult any more.


 this could help some, but isn't likely to result in an order of magnitude.

Example: in Linux 3.0.0, which has many drivers (and Linux is often
cited as being mostly drivers), actually counting the code reveals
about 55-60% in drivers (depending how you count). So that even with
only one hardware configuration, you'd save less than 50% of the code
size, i.e. a factor of 2 at very best.

-- 
http://rrt.sc3d.org
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
As I mentioned, Smalltalk-71 was never implemented -- and rarely mentioned (but 
it was part of the history of Smalltalk so I put in a few words about it).

If we had implemented it, we probably would have cleaned up the look of it, and 
also some of the conventions. 

You are right that part of it is like a term rewriting system, and part of it 
has state (object state).

to ... do ... is an operation. The match is on everything between to and do.

For example, the first line with cons in it does the car operation (which 
here is hd).

The second line with cons in it does replaca. The value of hd is being 
replaced by the value of c. 
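
A toy JavaScript rendering of that reading of the two factorial rules (the 
representation is invented; Smalltalk-71 itself was never implemented):

// to 'factorial' 0  is 1
// to 'factorial' :n do 'n * factorial (n - 1)'
// Evaluation as straightforward pattern matching: the first rule whose
// head matches the message wins, and its body runs with the bindings.
const rules = [
  { match: n => n === 0, body: _ => 1 },
  { match: _ => true,    body: n => n * factorial(n - 1) }
];

function factorial(arg) {
  return rules.find(r => r.match(arg)).body(arg);
}

console.log(factorial(3));   // 6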

One of the struggles with this design was to try to make something almost as 
simple as LOGO, but that could do language extensions, simple AI backward 
chaining inferencing (like Winograd's block stacking problem), etc.

The turgid punctuations (as I mentioned in the history) were attempts to find 
ways to do many different kinds of matching.

So we were probably lucky that Smalltalk-72 came along. Its pattern 
matching was less general, but quite a bit could be done as far as driving an 
extensible interpreter with it.

However, some of these ideas were done better later. I think by Leler, and 
certainly by Joe Goguen, and others.

Cheers,

Alan



 From: Jakob Praher ja...@praher.info
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Tuesday, February 28, 2012 12:52 PM
Subject: Re: [fonc] Error trying to compile COLA
 

Dear Alan,

On 28.02.12 14:54, Alan Kay wrote: 
Hi Ryan


Check out Smalltalk-71, which was a design to do just what you suggest -- it 
was basically an attempt to combine some of my favorite languages of the time 
-- Logo and Lisp, Carl Hewitt's Planner, Lisp 70, etc.
do you have detailed documentation of Smalltalk 71 somewhere? Something like 
a Smalltalk 71 for Smalltalk 80 programmers :-)
In the Early History of Smalltalk you mention it as "a kind of parser with 
object-attachment that executed tokens directly". From the examples I think 
that do 'expr' is evaluating expr by using a previous to 'ident' :arg1..:argN 
body.

As an example, do 'factorial 3' should evaluate to 6 considering:

to 'factorial' 0 is 1
to 'factorial' :n do 'n*factorial n-1'
(The Early History of Smalltalk)

What about arithmetic and precedence: what part of the language was built into 
the system?
- :var denote variables, whereas var denotes the instantiated value
of :var in the expr, e.g. :n vs 'n-1'
- '' denote simple tokens (in the head) as well as expressions
(in the body)?
- to, do are keywords
- () can be used for precedence

You described evaluation as straightforward pattern-matching.
It somehow reminds me of a term rewriting system - e.g. 'hd' ('cons'
:a :b) '←' :c is a structured term.
I know rewriting systems which first parse into an abstract
representation (e.g. prefix form) and transforms on the abstract
syntax - whereas in Smalltalk 71 the concrete syntax seems to be
used in the rules.

Also it seems redundant to both have:
to 'hd' ('cons' :a :b) do 'a' 
and
to 'hd' ('cons' :a :b) '←' :c do 'a ← c'

Is this made to make sure that the left hand side of ← has to be
a hd (cons :a :b) expression?
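
To make the rewriting reading concrete, here is a minimal sketch in Python
(hypothetical, of course, since Smalltalk-71 was never implemented; matching
is simplified to prefix terms, with :n-style strings as pattern variables):

def match(pattern, term, env):
    # ':n'-style strings bind to whatever term they face
    if isinstance(pattern, str) and pattern.startswith(":"):
        env[pattern[1:]] = term
        return True
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        return all(match(p, t, env) for p, t in zip(pattern, term))
    return pattern == term

def substitute(body, env):
    if isinstance(body, str) and body.startswith(":"):
        return env[body[1:]]
    if isinstance(body, tuple):
        return tuple(substitute(b, env) for b in body)
    return body

def evaluate(term):
    if not isinstance(term, tuple):
        return term
    term = tuple(evaluate(t) for t in term)        # arguments first
    if term[0] == "*": return term[1] * term[2]    # built-in arithmetic
    if term[0] == "-": return term[1] - term[2]
    for pattern, body in RULES:                    # then the rules, in order
        env = {}
        if match(pattern, term, env):
            return evaluate(substitute(body, env))
    return term

RULES = [
    (("factorial", 0), 1),                         # to 'factorial' 0 is 1
    (("factorial", ":n"),                          # to 'factorial' :n do ...
     ("*", ":n", ("factorial", ("-", ":n", 1)))),
]

print(evaluate(("factorial", 3)))                  # -> 6

Note that the first rule is tried first, so 'factorial 0' stops the
recursion, which seems to be exactly the role of ordering the to-definitions.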

Best,
Jakob




This never got implemented because of a bet that turned into Smalltalk-72, 
which also did what you suggest, but in a less comprehensive way -- think of 
each object as a Lisp closure that could be sent a pointer to the message and 
could then parse-and-eval that. 


A key to scaling -- that we didn't try to do -- is semantic typing (which I 
think is discussed in some of the STEPS material) -- that is: to be able to 
characterize the meaning of what is needed and produced in terms of a 
description rather than a label. Looks like we won't get to that idea this 
time either.


Cheers,


Alan





 From: Ryan Mitchley ryan.mitch...@gmail.com
To: fonc@vpri.org 
Sent: Tuesday, February 28, 2012 12:57 AM
Subject: Re: [fonc] Error trying to compile COLA
 

 
On 27/02/2012 19:48, Tony Garnock-Jones wrote:


My interest in it came out of thinking about
  integrating pub/sub (multi- and broadcast)
  messaging into the heart of a language. What
  would a Smalltalk look like if, instead of a
  strict unicast model with multi- and broadcast
  constructed atop (via Observer/Observable), it
  had a messaging model capable of natively
  expressing unicast, anycast, multicast, and
  broadcast patterns? 

I've wondered if pattern matching shouldn't be a
foundation of method resolution (akin to binding
with backtracking in Prolog) - if a multicast

Re: [fonc] Error trying to compile COLA

2012-02-27 Thread Alan Kay
Hi Julian

I should probably comment on this, since it seems that the STEPS reports 
haven't made it clear enough.

STEPS is a science experiment not an engineering project. 


It is not at all about making and distributing an operating system etc., but 
about trying to investigate the tradeoffs between problem oriented languages 
that are highly fitted to problem spaces vs. what it takes to design them, 
learn them, make them, integrate them, add pragmatics, etc.

Part of the process is trying many variations in interesting (or annoying) 
areas. Some of these have been rather standalone, and some have had some 
integration from the start.

As mentioned in the reports, we made Frank -- tacking together some of the POLs 
that were done as satellites -- to try to get a better handle on what an 
integration language might be like that is much better than the current use of 
Squeak. It has been very helpful to get something that is evocative of the 
whole system working in a practical enough manner to use it (instead of PPT 
etc) to give talks that involve dynamic demos. We got some good ideas from this.


But this project is really about following our noses, partly via getting 
interested in one facet or another (since there are really too many for just a 
few people to cover all of them). 


For example, we've been thinking for some time that the pretty workable DBjr 
system that is used for visible things - documents, UI, etc. -- should be 
almost constructable by hand if we had a better constraint system. This would 
be the third working DBjr made by us ...


And -- this year is the 50th anniversary of Sketchpad, which has also got us 
re-thinking about some favorite old topics, etc.

This has led us to start putting constraint engines into STEPS, thinking about 
how to automatically organize various solvers, what kinds of POLs would be nice 
to make constraint systems with, UIs for same, and so forth. Intellectually 
this is kind of interesting because there are important overlaps between the 
functions + time stamps approach of many of our POLs and with constraints and 
solvers.

This looks very fruitful at this point!


As you said at the end of your email: this is not an engineering project, but a 
series of experiments.


One thought we had about this list is that it might lead others to conduct 
similar experiments. Just to pick one example: Reuben Thomas' thesis Mite (ca 
2000) has many good ideas that apply here. To quote from the opening:Mite is a 
virtual machine intended to provide fast language and machine-neutral 
just-in-time
translation of binary-portable object code into high quality native code, with 
a formal foundation. So one interesting project could be to try going from 
Nile down to a CPU via Mite. Nile is described in OMeta, so this could be a 
graceful transition, etc.

In any case, we spend most of our time trying to come up with ideas that might 
be powerful for systems design and ways to implement them. We occasionally 
write a paper or an NSF report. We sometimes put out code so people can see 
what we are doing. But what we will put out at the end of this period will be 
very different -- especially in the center -- than what we did for the center 
last year.

Cheers and best wishes,

Alan






 From: Julian Leviston jul...@leviston.net
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Saturday, February 25, 2012 6:48 PM
Subject: Re: [fonc] Error trying to compile COLA
 

As I understand it, Frank is an experiment that is an extended version of DBJr 
that sits atop lesserphic, which sits atop gezira which sits atop nile, which 
sits atop maru, all of which utilise ometa and the worlds idea.


If you look at the http://vpri.org/html/writings.php page you can see a 
pattern of progression that has emerged to the point where Frank exists. From 
what I understand, maru is the finalisation of what began as pepsi and coke. 
Maru is a simple s-expression language, in the same way that pepsi and coke 
were. In fact, it looks to have the same syntax. Nothing is the layer 
underneath that is essentially a symbolic computer - sitting between maru and 
the actual machine code (sort of like an LLVM assembler if I've understood it 
correctly).


They've hidden Frank in plain sight. He's a patch-together of all their 
experiments so far... which I'm sure you could do if you took the time to 
understand each of them and had the inclination. They've been publishing as 
much as they could all along. The point, though, is you have to understand 
each part. It's no good if you don't understand it.


If you know anything about Alan & VPRI's work, you'd know that their focus is 
on getting this stuff in front of as many children as possible, because 
they have so much more ability to connect to the heart of a problem than 
adults. (Nothing to do with age - talking about minds, not bodies here). 
Adults usually get in the way with their stuff - their 

Re: [fonc] Error trying to compile COLA

2012-02-27 Thread Alan Kay
Hi Tony

Yes, I've seen it. As Gerry says, it is an extension of Guy Steele's thesis. 
When I read this, I wished for a more interesting, comprehensive and 
wider-ranging and -scaling example to help think with. 


One reason to put up with some of the problems of defining things using 
constraints is that if you can organize things well enough, you get super 
clarity and simplicity and power.

They definitely need a driving example that has these traits. There is a 
certain tinge of the Turing Tarpit to this paper.

With regard to objects, my current prejudice is that objects should be able to 
receive messages, but should not have to send to explicit receivers. This is a 
kind of multi-cast I guess (but I think of it more like publish/subscribe).


Cheers,

Alan






 From: Tony Garnock-Jones tonygarnockjo...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Monday, February 27, 2012 9:48 AM
Subject: Re: [fonc] Error trying to compile COLA
 

Hi Alan,


On 27 February 2012 11:32, Alan Kay alan.n...@yahoo.com wrote:

[...] a better constraint system. [...] This has led us to start putting 
constraint engines into STEPS, thinking about how to automatically organize 
various solvers, what kinds of POLs would be nice to make constraint systems 
with, UIs for same, and so forth.

Have you looked into the Propagators of Radul and Sussman? For example, 
http://dspace.mit.edu/handle/1721.1/44215. His approach is closely related to 
dataflow, with a lattice defined at each node in the graph for integrating the 
messages that are sent to it. He's built FRP systems, type checkers, type 
inferencers, abstract interpretation systems and lots of other fun things in a 
nice, simple way, out of this core construct that he's placed near the heart 
of his language's semantics.
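
The core construct is small enough to sketch in a few lines of Python
(illustrative only; these names are not Radul's actual API). Each cell merges
incoming information with a lattice join and wakes its listeners whenever the
merge refines what it knows:

class Cell:
    def __init__(self, merge):
        self.content = None                        # None = nothing known yet
        self.merge = merge                         # the lattice join
        self.listeners = []
    def add_content(self, value):
        new = value if self.content is None else self.merge(self.content, value)
        if new != self.content:                    # propagate only refinements
            self.content = new
            for run in self.listeners:
                run()

def adder(a, b, out):                              # interval-addition propagator
    def run():
        if a.content and b.content:
            out.add_content((a.content[0] + b.content[0],
                             a.content[1] + b.content[1]))
    a.listeners.append(run)
    b.listeners.append(run)

intersect = lambda p, q: (max(p[0], q[0]), min(p[1], q[1]))  # intervals: a lattice
a, b, c = Cell(intersect), Cell(intersect), Cell(intersect)
adder(a, b, c)
a.add_content((1, 5)); b.add_content((2, 4))       # c becomes (3, 9)
a.add_content((2, 3))                              # refining a re-fires: c = (4, 7)

Iterate-to-fixpoint falls out for free: propagation stops exactly when the
joins stop refining anything.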

My interest in it came out of thinking about integrating pub/sub (multi- and 
broadcast) messaging into the heart of a language. What would a Smalltalk look 
like if, instead of a strict unicast model with multi- and broadcast 
constructed atop (via Observer/Observable), it had a messaging model capable 
of natively expressing unicast, anycast, multicast, and broadcast patterns? 
Objects would be able to collaborate on responding to requests... anycast 
could be used to provide contextual responses to requests... concurrency would 
be smoothly integrable... more research to be done :-)
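
As a toy rendering of that messaging model (hypothetical Python, just to fix
the vocabulary; not a real VM):

import random

class Exchange:
    def __init__(self):
        self.subs = {}                             # topic -> handlers

    def subscribe(self, topic, handler):
        self.subs.setdefault(topic, []).append(handler)

    def send(self, topic, msg, mode="multicast"):
        targets = self.subs.get(topic, [])
        if not targets:
            return
        if mode == "unicast":
            targets[0](msg)                        # one fixed receiver
        elif mode == "anycast":
            random.choice(targets)(msg)            # any capable responder
        elif mode == "multicast":
            for t in targets: t(msg)               # everyone on the topic
        elif mode == "broadcast":
            for ts in self.subs.values():          # everyone, period
                for t in ts: t(msg)

ex = Exchange()
ex.subscribe("dns", lambda m: print("resolver A:", m))
ex.subscribe("dns", lambda m: print("resolver B:", m))
ex.send("dns", "lookup example.org", mode="anycast")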

Regards,
  Tony
-- 
Tony Garnock-Jones
tonygarnockjo...@gmail.com
http://homepages.kcbbs.gen.nz/tonyg/




Re: [fonc] Error trying to compile COLA

2012-02-27 Thread Alan Kay
Hi Tony

I like what the BOOM/BLOOM people are doing quite a bit. Their version of 
Datalog + Time is definitely in accord with lots of our prejudices ...

Cheers,

Alan





 From: Tony Garnock-Jones tonygarnockjo...@gmail.com
To: Alan Kay alan.n...@yahoo.com 
Cc: Fundamentals of New Computing fonc@vpri.org 
Sent: Monday, February 27, 2012 1:44 PM
Subject: Re: [fonc] Error trying to compile COLA
 

On 27 February 2012 15:09, Alan Kay alan.n...@yahoo.com wrote:

Yes, I've seen it. As Gerry says, it is an extension of Guy Steele's thesis. 
When I read this, I wished for a more interesting, comprehensive and 
wider-ranging and -scaling example to help think with.

For me, the moment of enlightenment was when I realized that by using a 
lattice at each node, they'd abstracted out the essence of 
iterate-to-fixpoint that's disguised within a number of the examples I 
mentioned in my previous message. (Particularly the frameworks of abstract 
interpretation.)

I'm also really keen to try to relate propagators to Joe Hellerstein's recent 
work on BOOM/BLOOM. That team has been able to implement the Chord DHT in 
fewer than 50 lines of code. The underlying fact-propagation system of their 
language integrates with a Datalog-based reasoner to permit terse, dense 
reasoning about distributed state.
 
One reason to put up with some of the problems of defining things using 
constraints is that if you can organize things well enough, you get super 
clarity and simplicity and power.

Absolutely. I think Hellerstein's Chord example shows that very well. So I 
wish it had been an example in Radul's thesis :-)
 
With regard to objects, my current prejudice is that objects should be able 
to receive messages, but should not have to send to explicit receivers. This 
is a kind of multi-cast I guess (but I think of it more like 
publish/subscribe).


I'm nearing the point where I can write up the results of a chunk of my 
current research. We have been using a pub/sub-based virtual machine for 
actor-like entities, and have found a few cool uses of non-point-to-point 
message passing that simplify implementation of complex protocols like DNS and 
SSH.

Regards,
  Tony
-- 
Tony Garnock-Jones
tonygarnockjo...@gmail.com
http://homepages.kcbbs.gen.nz/tonyg/




Re: [fonc] Raspberry Pi

2012-02-08 Thread Alan Kay
Hi Loup

Actually, your last guess was how we thought most of the optimizations would be 
done (as separate code guarded by the meanings). For example, one idea was 
that Cairo could be the optimizations of the graphics meanings code we would 
come up with. But Dan Amelang did such a great job at the meanings that they 
ran fast enough to tempt us to use them directly (rather than on a supercomputer, 
etc.). In practice, the optimizations we did do are done in the translation 
chain and in the run-time, and Cairo never entered the picture.


However, this is a great area for developing more technique for how math can 
be made practical -- because the model is so pretty and compact -- and 
there is much more that could be done here.

Why can't a Nile backend for the GPU board be written? Did I miss something?

Cheers,

Alan






 From: Loup Vaillant l...@loup-vaillant.fr
To: fonc@vpri.org 
Sent: Wednesday, February 8, 2012 1:29 AM
Subject: Re: [fonc] Raspberry Pi
 
Jecel Assumpcao Jr. wrote:
 Alan Kay wrote:
 We have done very little of this so far, and very few optimizations. We can give
 live dynamic demos in part because Dan Amelang's Nile graphics system turned
 out to be more efficient than we thought with very few optimizations.

 Here is where the binary blob thing in the Raspberry Pi would be a
 problem. A Nile backend for the board's GPU can't be written, and the
 CPU can't compare to the PCs you use in your demos.

Maybe as a temporary workaround, it would be possible to use OpenGL (or
OpenCL, if available) as the back-end?  It would require loading a
whole Linux kernel, but maybe this could work? /wild_speculation


 I think it could be a valuable project for interested parties to see about
 how to organize the separate optimization spaces that use the meanings as
 references.

 I didn't get the part about meanings as references.

I understood that meanings meant the concise version of Frank.  The
optimization space would then be a set of correctness-preserving
compilers passes or interpreters. (I believe Frank already features
some of that.)

Or, re-written versions that are somehow guaranteed to behave the same
as the concise version they derive from, only faster.  But I'm not sure
that's the spirit.

Loup.


Re: [fonc] Raspberry Pi

2012-02-08 Thread Alan Kay
Yes, the annotation scheme you mention is essentially how we were going to 
do it. The idea was that in the optimization space there would be a variety of 
ways to do X -- e.g. there are lots of ways to do sorting -- and there would be 
conditions attached to these ways that would allow the system to choose the 
most appropriate solutions at the most appropriate times. This would include 
hints etc. The rule here is that the system had to be able to run correctly 
with all the optimizations turned off.
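
A little sketch of the shape of that scheme (Python, with invented conditions
and ways; the point is only the dispatch rule):

def sort_meaning(xs):
    return sorted(xs)                              # the meaning: plainly correct

def insertion_sort(xs):                            # one optimized "way"
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

SORT_WAYS = [                                      # (condition, way)
    (lambda xs: all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)),
     lambda xs: list(xs)),                         # already sorted: pass through
    (lambda xs: len(xs) < 32, insertion_sort),     # small inputs
]
OPTIMIZATIONS_ON = True

def sort(xs):
    if OPTIMIZATIONS_ON:
        for applies, way in SORT_WAYS:
            if applies(xs):
                return way(xs)
    return sort_meaning(xs)                        # still correct with them off

assert sort([3, 1, 2]) == sort_meaning([3, 1, 2])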

And your notions about some of the merits of DSLs (known in the 60s as POLs -- 
for Problem Oriented Languages) is why we took this approach.

Cheers,

Alan





 From: Loup Vaillant l...@loup-vaillant.fr
To: fonc@vpri.org 
Sent: Wednesday, February 8, 2012 9:24 AM
Subject: Re: [fonc] Raspberry Pi
 
Alan Kay wrote:
 Hi Loup
 
 Actually, your last guess was how we thought most of the optimizations
 would be done (as separate code guarded by the meanings).  […]
 In practice, the  optimizations we did do are done in the translation
 chain and in the run-time, […]


Okay, thanks.

I can't recall the exact reference, but I have read once about a middle
ground: mechanical optimization passes that are brittle in the face of
meaning change.  I mean, if you change the concise version of your
program, you may have to re-think your optimizations passes, but you
don't necessarily have to re-write your optimized version directly.

Example
{
  The guy was at the university, and was assigned to write optimized
  multiplication for big numbers.  Each student would be graded
  according to the speed of their program.  No restriction on the
  programming language.

  Everyone started coding in C, but _he_ preferred to start with
  Scheme.  He coded a straightforward version of the algorithm, then
  set out to manually (but mostly mechanically) apply a set of
  correctness-preserving transformations, most notably a CPS
  transformation, and a direct translation to C with gotos.  His final
  program, written in pure C, was the second fastest of his class (and
  very close to the first, which used assembly heavily).  Looking back
  at what he could have done better, he saw that his program spent most
  of his time in malloc().  He didn't know how to do it at the time,
  but had he managed his memory directly, his program would have been
  first.

  Oh, and of course, he had much less trouble dealing with mistakes
  than his classmates.

  So, his conclusion was that speed comes from beautiful programs, not
  prematurely optimized ones.
}


About Frank, we may imagine using this method in a more automated way,
for instance by annotating the source and intermediate code with
specialized optimizations that would only work in certain cases.  It
could be something roughly like this:

Nile Source            - Annotations to optimize the Nile program
   |
   |                      Compiler pass that check the validity of the
   |                      optimizations then applies them.
   v
Maru Intermediate code - Annotations to optimize that maru program
   |
   |                      Compiler pass like the above
   v
C Backend code         - Annotations (though at this point…)
   |
   |                   - GCC
   v
Assembly               - (no, I won't touch that :-)

(Note that instead of annotating the programs, you could manually
control the compilers.)

Of course, the second you change the Nile source is the second your
annotations at every level won't work any more.  But (i) you would only
have to redo your annotations, and (ii), maybe not all of them anyway,
for there is a slight chance that your intermediate representation
didn't change too much when you changed your source code.

I can think of one big caveat however: if the generated code is too big
or too cryptic, this approach may not be feasible any more.  And I
forgot about profiling your program first.



 But Dan Amelang did such a great job at the meanings that they ran
 fast enough to tempt us to use them directly [so] Cairo never entered
 the picture.

If I had to speculate from an outsider's point of view, I'd say these
good surprises probably apply to almost any domain specific language.
The more specialized a language is, the more domain knowledge you can
embed in the compiler, the more efficient the optimizations may be. I
know it sounds like magic, but I recall a similar example with Haskell,
applied to bioinformatics (again, can't find the paper).

Example
{
  The goal was to implement a super-fast algorithm.  The catch was, the
  resulting program had to accept a rather big number of parameters.
  Written directly in C, the fact that those parameters changed meant
  the main loop had to make several checks, slowing the whole thing
  down.

  So they did an EDSL based on monads that basically generated a C
  program after the parameters were read and known, then ran it.  Not
  quite Just-In-Time compilation, but close

Re: [fonc] Raspberry Pi

2012-02-07 Thread Alan Kay
Hi Jecel

In the "difference between research and engineering" department I think I would 
first port a version of Smalltalk to this system. 


One of the fun side-projects done in the early part of the Squeak system was 
when John Maloney and a Berkeley grad student ported Squeak to a luggage tag 
-- that is to the Mitsubishi hybrid computer on a chip that existed back ca 
1997. This was an ARM-like 32 bit microprocessor plus 4MB (or more) memory on a 
single die. This plus one ASIC constituted the whole computer. 


Mitsubishi did this nice system for mobile use. Motorola bought the rights to 
this technology and completely buried it to kill competition.

(We call it the luggage tag because they would embed failed chips in Lucite 
to make luggage tags!)

Anyway, for fun John and the grad student ported Squeak to this bare chip 
(including having to write the BIOS code). It worked fine, and I was able to do 
a large scale Etoy demo on it.

Although Squeak was quite small in those days, a number of effective 
optimizations had been done at various levels, and so it was quite efficient, 
and all plus Etoys fit easily into 4MB. 

In the earliest days of the OLPC XO project we made an offer to make Squeak the 
entire OS of the XO, etc., but you can imagine the resistance!


Frank on the other hand has very few optimizations -- it is about lines of code 
that carry meaning. It is a goal of the project to separate optimizations from 
the meanings so it would still run with the optimizations turned off but 
slower. We have done very little of this so far, and very few optimizations. We 
can give live dynamic demos in part because Dan Amelang's Nile graphics system 
turned out to be more efficient than we thought with very few optimizations.

I think it could be a valuable project for interested parties to see about how 
to organize the separate optimization spaces that use the meanings as 
references.

Cheers,

Alan







 From: Jecel Assumpcao Jr. je...@merlintec.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, February 7, 2012 9:42 AM
Subject: Re: [fonc] Raspberry Pi
 
Reuben Thomas wrote:
 On 7 February 2012 11:34, Ryan Mitchley wrote:
 
  I think the limited capabilities would be a great visceral demonstration of
  the efficiencies learned during the FONC research.
 
  I was thinking in terms of replacing the GNU software, using it as a cheap
  hardware target... some FONC-based system should blow the GNU stack out of
  the water when resources are restricted.
 
 Now that's an exciting idea.

People complain about *only* having 256MB (128MB in the A model) but
that is way more than is needed for SqueakNOS and, I imagine, Frank.
Certainly the boot time for SqueakNOS would be a second or less on this
hardware, which should impress a few people when compared to the various
Linux systems on the same board.

Fortunately, some information needed to port an OS to the Raspberry Pi
was released yesterday:

 http://dmkenr5gtnd8f.cloudfront.net/wp-content/uploads/2012/02/BCM2835-ARM-Peripherals.pdf

The GPU stuff is still secret but I don't think the current version of
Frank would make use of it anyway.

-- Jecel



Re: [fonc] PARC founder Jacob Goldman dies at 90

2011-12-22 Thread Alan Kay
Yes, Jack was a driving force and quite a character in so many ways.

Cheers,

Alan





 From: Long Nguyen cgb...@gmail.com
To: fonc@vpri.org 
Sent: Thursday, December 22, 2011 9:47 AM
Subject: [fonc] PARC founder Jacob Goldman dies at 90
 
http://www.nytimes.com/2011/12/22/business/jacob-e-goldman-founder-of-xerox-lab-dies-at-90.html?_r=2pagewanted=all



Re: [fonc] History of computing talks at SJSU

2011-12-17 Thread Alan Kay
This idea was tried by the Engelbartians with chord keyboards integrated with 
the mouse mechanism. In their design, there wasn't enough stability to do 
positioning well (although one could imagine other technologies that would do 
both good pointing with both hands and allow all fingers to be used).

Cheers,

Alan





 From: Casey Ransberger casey.obrie...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Friday, December 16, 2011 9:07 PM
Subject: Re: [fonc] History of computing talks at SJSU
 
Below. 

On Dec 16, 2011, at 3:19 PM, Alan Kay alan.n...@yahoo.com wrote:

 And what Engelbart was upset about was that the hands out -- hands 
 together style did not survive. The hands out had one hand with the 5 
 finger keyboard and the other with the mouse and 3 buttons -- this allowed 
 navigation and all commands and typing to be done really efficiently 
 compared to today. Hands together on the regular keyboard only happened 
 when you had bulk typing to do.

Are you talking about the so-called chording keyboard?

I had an idea years ago to have a pair of twiddlers (the one chording 
keyboard I'd seen was called a twiddler) which tracked movement of both hands 
over the desktop, basically giving you two pointing devices and a keyboarding 
solution at the same time. 

Now it's all trackpads and touch screens, and my idea seems almost Victorian:)



Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread Alan Kay
I hope I didn't say there was absolutely nothing worth talking about in the 
'personal computing' space in the past 30 years (and don't think I did say 
that).

Let us all share in the excitement of Discovery without vain attempts to claim 
priority -- Goethe

So some recent manifestations of ideas and technologies such as multitouch, 
mouseless, and SixthSense, should be praised.

However, it is also interesting to discover where ideas came from and who came 
up with them first -- this helps us understand and differentiate high 
creativity with low context from low creativity with high context.

I don't know who did the mouseless idea first, but certainly Dick Shoup at 
Xerox PARC and later at Interval, conceived and showed something very similar. 
One of the central parts of this was to use image recognition to track people, 
hands, and fingers.

Similarly, the SixthSense idea has much in common with Nicholas Negroponte's 
(and many in his Arch-Mac group at MIT) idea in the 70s that we would wear 
things that would let computers know where we are and where we are pointing, 
and that there will be displays everywhere (from a variety of means) and the 
Internet will also be everywhere by then, and there will be embedded computers 
everywhere, etc., so that one's helper agents will have the effect of 
following us around and responding to our gestures and commands. There are 
several terrific movies of their prototypes.


Multitouch, similarly, is hard to trace to its first inventor, but again Nicholas' 
Arch-Mac group certainly did it (Chris Herot as I recall) in the 70s.

And what Engelbart was upset about was that the hands out -- hands together 
style did not survive. The hands out had one hand with the 5 finger keyboard 
and the other with the mouse and 3 buttons -- this allowed navigation and all 
commands and typing to be done really efficiently compared to today. Hands 
together on the regular keyboard only happened when you had bulk typing to do.

It should be clear that being able to sense all the fingers in some way that 
allows piano keyboard like fluency/polyphony is still a good idea. Musical 
instruments require some training and practice but then allow many more degrees 
of freedom to be controlled.


And, though Nick Sheridon was the leader of the PARC electrophoretic 
migration display project, it was colloidal chemist Ann Chiang who 
accomplished many of the breakthroughs in the 70s. That Xerox didn't follow 
through with this technology was a great disappointment for me. It was really 
nice, and even the prototype had higher contrast ratios than the e-ink displays 
of today (different approach, different kinds of particles).

And a few things have happened since 1980 ... but the talk was supposed to be 
about the Dynabook idea ...

Best wishes,

Alan





 From: John Zabroski johnzabro...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Friday, December 16, 2011 1:12 PM
Subject: Re: [fonc] History of computing talks at SJSU
 
I disagree with the tone in Alan's talk here.  While it is great to
see what was happening in the 50-70s, he makes it sound like there is
absolutely nothing worth talking about in the personal computing
space in the past 30 years.

Pranav Mistry's work on sixth sense technology and the mouseless
mouse alone raise legitimate counterpoints to much of what is
suggested by this talk.  For example, Alan touches upon Engelbart's
fury over what happened with the mouse and how the needs of mass
market commercialization trump utility.  Yet, I see a future where we
are far less dependent on mechanical tools like the mouse.

But progress takes time.  For example, the first e-ink technologies
were developed at PARC in the 70s by Nicholas K. Sheridon as a
prototype for a future Alto computer (not mentioned at all by Alan in
his talk).  Reducing the cost to manufacture such displays has been a
long-running process and one I follow intently.  For example, only
recently has a consortium of researchers gotten together and come up
with a fairly brilliant idea to use the same techniques found in
inkjet printing to print PHOLED screens, making the construction of
flexible e-paper as cost effective as the invention of inkjet printing
to the paper medium.

With these newer mediums we will also need greater automation in
analyzing so-called big data.  Today most analysis is not automated
by computers, and so scientists are separated from truly interacting
with their massive datasets.  They have to talk to project managers,
who then talk to programmers, who then write code that gets deployed
to QA, etc.  The human social process here is fraught with error.

On Tue, Dec 13, 2011 at 3:02 PM, Kim Rose kim.r...@vpri.org wrote:
 For those of you looking to hear more from Alan Kay -- you'll find a talk
 from him and several other big names in computer science here -- thanks to
 San Jose State University.

  http://www.sjsu.edu/atn/services

Re: [CAG] Re: [fonc] Fexpr the Ultimate Lambda

2011-11-26 Thread Alan Kay
Hi Carl

Yes, I have always felt that you got nowhere near the coverage and recognition 
you deserved for PLANNER (the whole enchilada) -- to me it was a real landmark 
of a set of very powerful insights and perspectives. Definitely one of the very 
top gems of the late 60s!


I recall there was lots of good wine at that Pajaro Dunes meeting! (And Jeff 
Rulifson helped me pull off that Beef Wellington with the three "must have" 
sauces). That was also a great group. Cordell Green was there, Richard 
Waldinger, Rulifson, (Bob Yates?), Bob Balzer, etc. Can you remember any of the 
others? That one must have been in 1970.

And it was indeed the second -- and sequential -- evaluator (from the Lisp 
1.5 manual) that I had in mind when I did the ST-72 eval. Another influence on 
that scheme was the tiny Meta II parser-compiler that Val Schorre did at UCLA in 
1963 (for an 8K byte 1401!). I loved that little system. This led to the ST-72 
eval really being a kind of cascaded apply ...


And there's no question that once you aim at real objects a distributed eval 
makes great sense.

Cheers,

Alan






 From: Carl Hewitt hew...@concurrency.biz
To: Alan Kay alan.n...@yahoo.com 
Cc: Programming Language Design pi...@googlegroups.com; Dale Schumacher 
dale.schumac...@gmail.com; The Friday Morning Applied Complexity Coffee 
Group fr...@redfish.com; computational-actors-gu...@googlegroups.com 
computational-actors-gu...@googlegroups.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Saturday, November 26, 2011 9:24 AM
Subject: RE: [CAG] Re: [fonc] Fexpr the Ultimate Lambda
 

Hi Alan,
 
Yes, Smalltalk-71 had lots of potential!  Unfortunately, the way it developed 
was that Kowalski picked up a subset of micro-Planner (backward chaining only 
along with backtracking only control structure) and PROLOG was made into 
something of an ideology.  I have published a history in ArXiv titled “Middle 
History of Logic Programming: Resolution, Planner, Edinburgh LCF, Prolog, and 
the Japanese Fifth Generation Project” at
http://arxiv.org/abs/0904.3036
 
I have fond memories of the Beef Wellington that you prepared at one of the 
Pajaro Dunes meetings of the “DARPA Junior Over-achievers” society!  Before 
McCarthy developed his meta-circular definition, the developers of Lisp 1 took 
a similar approach to yours by developing a looped sequential program that 
mimicked their assembly language implementation.
 
Using “eval” as a message instead of the Lisp procedure is an interesting way 
to do language extension. For example lambda notation could be added to 
ActorScript as follows:


"lambda" "(" id|-Identifier ")" body|-Expression   ~~   eval(env) -- 
(argument) -- body.eval(Environment(id, argument, env))
 
  where 
 
   Environment(iden, value, enviro)   ~~   lookup(iden2) -- 
iden2=iden ?~ true -- value ?? false -- enviro.lookup(iden2) ~?
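
The same idea can be rendered in Python (hypothetical names, not ActorScript
syntax): every expression object answers an eval(env) message, and environments
answer lookup, so new forms are added by defining new objects rather than by
editing a central eval procedure:

class Environment:
    def __init__(self, iden, value, outer):
        self.iden, self.value, self.outer = iden, value, outer
    def lookup(self, iden):
        return self.value if iden == self.iden else self.outer.lookup(iden)

class EmptyEnv:
    def lookup(self, iden):
        raise NameError(iden)

class Identifier:
    def __init__(self, name): self.name = name
    def eval(self, env): return env.lookup(self.name)

class Lambda:                                  # "lambda" "(" id ")" body
    def __init__(self, iden, body): self.iden, self.body = iden, body
    def eval(self, env):
        return lambda argument: self.body.eval(
            Environment(self.iden, argument, env))

identity = Lambda("x", Identifier("x")).eval(EmptyEnv())
print(identity(42))                            # 42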
 
Cheers,
Carl
 
From: Alan Kay [mailto:alan.n...@yahoo.com] 
Sent: Friday, November 25, 2011 15:37
To: Carl Hewitt; Dale Schumacher
Cc: Programming Language Design; The Friday Morning Applied Complexity Coffee 
Group; computational-actors-gu...@googlegroups.com; Fundamentals of New 
Computing
Subject: Re: [CAG] Re: [fonc] Fexpr the Ultimate Lambda
 
Hi Carl
 
I've always wished that we had gotten around to doing Smalltalk-71 -- which in 
many ways was a more interesting approach because it was heavily influenced by 
Planner -- kind of a Planner Logo with objects -- it was more aimed at the 
child users we were thinking about. ST-72 did the job at a more primitive 
level.
 
P.S. The characterizations of ST-71 and ST-72 in your paper are not quite 
accurate  -- but this doesn't matter -- but it is certainly true that we did 
not put concurrency in at the lowest level, nor did we have a truly formal 
model (I wrote the first interpreter scheme using a McCarthy-like approach -- 
it was a short one-pager -- but I wrote it as a looped sequential program 
rather than metacircularly because I wanted to show how it could be 
implemented).
 
Cheers,
 
Alan
 
 



From: Carl Hewitt hew...@concurrency.biz
To: Dale Schumacher dale.schumac...@gmail.com 
Cc: Programming Language Design pi...@googlegroups.com; The Friday Morning 
Applied Complexity Coffee Group fr...@redfish.com; 
computational-actors-gu...@googlegroups.com 
computational-actors-gu...@googlegroups.com; Alan Kay 
alan.n...@yahoo.com; Fundamentals of New Computing fonc@vpri.org 
Sent: Friday, November 25, 2011 12:11 PM
Subject: RE: [CAG] Re: [fonc] Fexpr the Ultimate Lambda
I have started a discussion topic on Lambda the Ultimate so that others can 
participate here:  Actors all the way down
 
How SmallTalk-72 influenced the development of Actors is discussed in Actor 
Model of Computation: Scalable Robust Information Systems.
 
Cheers,
Carl
 
PS. Kristen Nygaard and I had some fascinating late night discussions over 
such matters in Aarhus lubricated with Linie

Re: [CAG] Re: [fonc] Fexpr the Ultimate Lambda

2011-11-25 Thread Alan Kay
Hi Carl

I've always wished that we had gotten around to doing Smalltalk-71 -- which in 
many ways was a more interesting approach because it was heavily influenced by 
Planner -- kind of a Planner Logo with objects -- it was more aimed at the 
child users we were thinking about. ST-72 did the job at a more primitive level.

P.S. The characterizations of ST-71 and ST-72 in your paper are not quite 
accurate  -- but this doesn't matter -- but it is certainly true that we did 
not put concurrency in at the lowest level, nor did we have a truly formal 
model (I wrote the first interpreter scheme using a McCarthy-like approach -- 
it was a short one-pager -- but I wrote it as a looped sequential program 
rather than metacircularly because I wanted to show how it could be 
implemented).

Cheers,

Alan





 From: Carl Hewitt hew...@concurrency.biz
To: Dale Schumacher dale.schumac...@gmail.com 
Cc: Programming Language Design pi...@googlegroups.com; The Friday Morning 
Applied Complexity Coffee Group fr...@redfish.com; 
computational-actors-gu...@googlegroups.com 
computational-actors-gu...@googlegroups.com; Alan Kay alan.n...@yahoo.com; 
Fundamentals of New Computing fonc@vpri.org 
Sent: Friday, November 25, 2011 12:11 PM
Subject: RE: [CAG] Re: [fonc] Fexpr the Ultimate Lambda
 

I have started a discussion topic on Lambda the Ultimate so that others can 
participate here:  Actors all the way down
 
How SmallTalk-72 influenced the development of Actors is discussed in Actor 
Model of Computation: Scalable Robust Information Systems.
 
Cheers,
Carl
 
PS. Kristen Nygaard and I had some fascinating late night discussions over 
such matters in Aarhus lubricated with Linie aquavit :-)   I miss him dearly 
:-(
 
-Original Message-
From: computational-actors-gu...@googlegroups.com 
[mailto:computational-actors-gu...@googlegroups.com] On Behalf Of Dale 
Schumacher
Sent: Friday, November 25, 2011 11:54
To: Alan Kay; Fundamentals of New Computing
Cc: CAG; Programming Language Design; The Friday Morning Applied Complexity 
Coffee Group
Subject: [CAG] Re: [fonc] Fexpr the Ultimate Lambda
 
Yes, absolutely!  I've read that paper numerous times.  Unfortunately, I 
wasn't able to cite all of the branches of the LISP family tree.
 
I _did_ cite Piumarta's work on Maru.  His extensible base is much smaller than 
Shutt's, but Kernel provided a better illustration of actor-based evaluation 
techniques.
 
On Fri, Nov 25, 2011 at 1:19 PM, Alan Kay alan.n...@yahoo.com wrote:
 Hi Dale
 Check out The Early History of Smalltalk to see the same insight 
 about Lisp and how it was used to think about and define and implement 
 Smalltalk-72.
 Cheers,
 Alan
  
 
 From: Dale Schumacher dale.schumac...@gmail.com
 To: Fundamentals of New Computing fonc@vpri.org; CAG 
 computational-actors-gu...@googlegroups.com; Programming Language 
 Design pi...@googlegroups.com; The Friday Morning Applied Complexity 
 Coffee Group fr...@redfish.com
 Sent: Friday, November 25, 2011 10:05 AM
 Subject: [fonc] Fexpr the Ultimate Lambda
  
 Fexpr the Ultimate Lambda (http://bit.ly/v6yTju) a treatise on 
 evaluation, in honor of John McCarthy.
  
 John Shutt’s Kernel language, and its underlying Vau-calculus, is a 
 simplified reformulation of the foundations of the LISP/Scheme family 
 of languages. It is based on the notion that evaluation should be 
 explicit, patterned after Fexprs, rather than implicit, using Lambda.
 The result is a powerful well-behaved platform for building 
 extensible languages. Not extensible in syntax, but in semantics. We 
 have implemented the key mechanisms of Vau-calculus using actors. The 
 actor-based evaluation strategy introduces inherent concurrency 
 pervasively throughout the evaluation process.
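
 A toy sketch of that idea in Python (illustrative; not Kernel itself): an
 operative receives its operands unevaluated, together with the caller's
 environment, so even "if" needs no special case inside the evaluator:

 def evaluate(expr, env):
     if isinstance(expr, str):                  # symbol
         return env[expr]
     if isinstance(expr, list):                 # combination
         op = evaluate(expr[0], env)
         return op(expr[1:], env)               # operands passed unevaluated
     return expr                                # literal

 def op_if(operands, env):                      # an operative
     test, consequent, alternative = operands
     branch = consequent if evaluate(test, env) else alternative
     return evaluate(branch, env)

 def applicative(f):                            # wrap: evaluate operands first
     return lambda operands, env: f(*[evaluate(o, env) for o in operands])

 env = {"if": op_if, "<": applicative(lambda a, b: a < b), "x": 1}
 print(evaluate(["if", ["<", "x", 2], 10, 20], env))    # 10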
  
  
  
  
  
  
 
 



Re: [fonc] Tension between meta-object protocol and encapsulation

2011-09-07 Thread Alan Kay
We've already discussed this in other contexts. This is what I meant when I 
talked about levels of meta and why invoking a function is more benign than 
using a global assignment (which is tantamount to redefining a function under 
program control), etc. 


And certainly to allow unprotected rewiring under program control without 
possible worlds reasoning makes a very fragile system indeed (and so many of 
them out there are).


Generally speaking, the amount and depth of required design increases greatly 
as one traverses into meta. (And just who should be doing the designs at each 
level will (should) become a narrower more expert group.)

One of the interesting metafences that has been discussed in STEPS (and it 
would be great to move from talk to some examples) is the enforced separation 
of meaning from optimization. If they are in different worlds (where any and 
all of the optimizations could be turned off and the system would still run) 
then this sideways augmentation can be hugely useful and still very safe.

The MOP example in the book could be considered to be one of these, but the 
book omitted this set of ideas. Still, you could imagine setting things up so 
that the new variant would be automatically checked by running the meaning in 
parallel, and this would make everything much less fragile.
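
One could picture that check roughly like this (Python sketch, invented
example): the meaning runs alongside the optimized variant, and any divergence
is caught immediately:

def checked(meaning):
    def accept(variant):
        def run(*args):
            expected = meaning(*args)
            actual = variant(*args)
            assert actual == expected, "variant diverges from the meaning"
            return actual
        return run
    return accept

def mean_meaning(xs):
    return sum(xs) / len(xs)                   # the meaning

@checked(mean_meaning)
def mean_fast(xs):                             # a supposedly faster variant
    total = 0.0
    for x in xs:
        total += x
    return total / len(xs)

print(mean_fast([1.0, 2.0, 3.0]))              # 2.0, verified as it runs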

I think there is real gold here if these sets of domains and fences could be 
worked out.

Cheers,

Alan





From: Casey Ransberger casey.obrie...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Wednesday, September 7, 2011 12:48 PM
Subject: [fonc] Tension between meta-object protocol and encapsulation


This has been bugging me for awhile. This seems like the best forum to ask 
about it.


With great power comes great responsibility. This quotation is hard to pin 
down. There are several versions of it to be found. This particular phrasing 
of the statement probably belongs to Stan Lee, but I think the phrase, in 
another form, is older than that.


It seems to me that there is tension here, forces pulling in orthogonal 
directions. In systems which include a MOP, it seems as though encapsulation 
is sort of hosed at will. Certainly I'm not arguing against the MOP, it's one 
of the most powerful ideas in programming. For some things, it seems 
absolutely necessary, but then... there's the abuse of the MOP. 


I've done it to spectacular effect :D and when one is under constant pressure 
to ship, one tends to reach for the longest lever in the room.


On one hand, I can avoid writing a lot of code at times this way, but on the 
other hand, what I've done is liable to confuse the hell out of whatever poor 
bastard is maintaining my code now.


I've also had to wade through some very confusing code that also did it. You 
know, as long as it's me doing it, it's fine, but there's only one of me, and 
there's in the neighborhood of 6,775,235,700 of you!


Is this tension irreconcilable?


-- 
Casey Ransberger



Re: [fonc] Tension between meta-object protocol and encapsulation

2011-09-07 Thread Alan Kay
Yep.

Cheers,

Alan





From: David Barbour dmbarb...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Wednesday, September 7, 2011 1:50 PM
Subject: Re: [fonc] Tension between meta-object protocol and encapsulation


On Wed, Sep 7, 2011 at 12:48 PM, Casey Ransberger casey.obrie...@gmail.com 
wrote:

It seems to me that there is tension here, forces pulling in orthogonal 
directions. In systems which include a MOP, it seems as though encapsulation 
is sort of hosed at will. Certainly I'm not arguing against the MOP, it's one 
of the most powerful ideas in programming. For some things, it seems 
absolutely necessary, but then... there's the abuse of the MOP. 


Is this tension irreconcilable?


There are patterns for meta-object protocol that protect encapsulation (and 
security). 


Gilad Bracha discusses capability-secure reflection and MOP by use of 
'Mirrors' [1][2]. The basic idea with mirrors is that the authority to reflect 
on an object can be kept separate from the authority to otherwise interact 
with the object - access to the MOP is not universal.
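
A toy rendering of the mirror idea (Python; not Bracha's actual API, and Python
cannot truly enforce the separation -- the sketch only shows the shape):

class Account:
    def __init__(self, balance):
        self._balance = balance
    def deposit(self, amount):
        self._balance += amount

class Mirror:
    def __init__(self, subject):               # reflection is a distinct
        self._subject = subject                # capability on the subject
    def slot_names(self):
        return list(vars(self._subject))
    def slot_get(self, name):
        return vars(self._subject)[name]

acct = Account(100)                            # interaction authority only
acct.deposit(5)
mirror = Mirror(acct)                          # reflective authority
print(mirror.slot_names())                     # ['_balance']
print(mirror.slot_get("_balance"))             # 105

Only code that holds the mirror can look inside; code that merely holds acct
can only send it ordinary messages.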


Maude's reflective towers [3] - which I feel are better described as 'towers 
of interpreters' - are also a secure basis. Any given interpreter is 
constrained to the parts of the model it is interpreting. By extending the 
interpreter model, developers are able to achieve ad-hoc extensions similar to 
a MOP.


These two classes of mechanisms are actually quite orthogonal, and can be used 
together. For example, developers can provide frameworks or interpreters in a 
library, and each interpreter 'instance' can easily export a set of mirror 
capabilities (which may then be fed to the application as arguments). 


[1]  Mirrors: Design Principles for Meta-level Facilities of Object-Oriented 
Programming Language. Gilad Bracha and David 
Ungar. http://bracha.org/mirrors.pdf
[2] The Newspeak Programming 
Platform. http://bracha.org/newspeak.pdf (sections 3,4).
[3] Maude Manual Chapter 11: Reflection, Metalevel Computation, and 
Strategies. http://maude.cs.uiuc.edu/maude2-manual/html/maude-manualch11.html


We can use power responsibly. The trick is to control who holds it, so that 
power is isolated to the 'responsible' regions of code.


Regards,


Dave




Re: [fonc] Re: a little more FLEXibility

2011-09-05 Thread Alan Kay
I hate to be the one to bring this up, but this has always been a feature of 
all the Smalltalks ... one has to ask, what is there about current general 
practice that makes this at all remarkable? ...

Cheers,

Alan





From: Murat Girgin gir...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Monday, September 5, 2011 11:21 AM
Subject: Re: [fonc] Re: a little more FLEXibility




not sure if this is relevant:


one nifty feature I recently noticed which exists in SQL Server Management 
Studio was the ability to select things, hit a key, and evaluate only the 
selected code.


this seemed to combine some of the merits of entry in a text editor, with 
those of immediate evaluation (and allowing more convenient ways to deal with 
longer multi-line commands)


F# REPL in Visual Studio also supports this. Pretty nice feature.

On Mon, Sep 5, 2011 at 1:01 AM, BGB cr88...@gmail.com wrote:

On 9/4/2011 11:38 PM, Michael Haupt wrote: 
Hi Jecel, 


On 02.09.2011 at 20:51, Jecel Assumpcao Jr. wrote:
Michael,

your solution is a little more indirect than dragging arrows in Self
since you have to create a global, which is what I would like to avoid.



ah, but instead of Smalltalk>>#at:put: you can use any object member's 
setter. I was just too lazy to write that. :-)

Not to mention that one solution is direct manipulation while the other
is typing and evaluating an expression. But between your solution and
Bert's it is obvious that the system can do what I want but the
limitation is in the GUI.

Of course; I see the deficiencies.


not sure if this is relevant:

one nifty feature I recently noticed which exists in SQL Server
Management Studio was the ability to select things, hit a key, and
evaluate only the selected code.

this seemed to combine some of the merits of entry in a text editor,
with those of immediate evaluation (and allowing more convenient
ways to deal with longer multi-line commands).



Best,


Michael

-- 


Dr. Michael Haupt | Principal Member of Technical Staff
Phone: +49 331 200 7277 | Fax: +49 331 200 7561
Oracle Labs 
Oracle Deutschland B.V. & Co. KG, Schiffbauergasse
14 | 14467 Potsdam, Germany 
 Oracle is committed to developing practices and products that help protect 
 the environment  








Re: a little more FLEXibility (was: [fonc] Re: Ceres and Oberon)

2011-09-02 Thread Alan Kay
Hi Jecel

I think both these sections were reactions to some of the current hardware, and 
hardware problems of the time. 


I remember the second one better than the first. In those days cutting dies out 
of the wafers often damaged them, and the general yield was not great even 
before cutting the die out. This idea was to lay down regions of memory on the 
wafer, run bus metalization over them, test them, and zap a few bits if they 
didn't work. The key here was that the working ones would each have a register 
that had its name (actually its base address and range) and all could look at 
the bus to see if their address came up. If it did, it would seize the bus and 
do what ever. So this was a kind of distributed small pages and MMUs scheme. 
And the yield would be much higher because the wafers remained intact. I don't 
think any of these tradeoffs obtain today, though one could imagine other kinds 
of schemes for distributed memory and memory management that would be more 
sensible than current schemes.

The first one I really don't remember. But it probably was partially the result 
of the head per track small disk that the FLEX machine used -- and probably was 
influenced by Paul Rovner's scheme at Lincoln Labs for doing Jerry Feldman's 
software associative triples memory. 


I think this was not about Denis Seror's later and really interesting thesis 
(under Barton) to make a lambda calculus machine -- really a combinator 
machine (to replace variables by paths) and to have the computation on the 
disk and just pull in and reduce as possible as the disk zoomed by. All was 
done in parallel and eventually all would be reduced. Denis came up with a nice 
little language that had a bit of an APL feeling for humans to program this 
system in. He (and his wife) wound up making an animated movie to show people 
who didn't know about lambda expressions and combinators (which was pretty much 
everyone in CS in those days) what they were and how they reduced.

There's no question that Bob Taylor was the prime key for PARC (and he also had 
paid for most of our PhDs in the 60s when he was an ARPA funder).

Cheers,

Alan



From: Jecel Assumpcao Jr. je...@merlintec.com
To: Alan Kay alan.n...@yahoo.com
Cc: Fundamentals of New Computing fonc@vpri.org
Sent: Thursday, September 1, 2011 3:17 PM
Subject: a little more FLEXibility (was: [fonc] Re: Ceres and Oberon)

Alan,

 The Flex Machine was the omelet you have to throw away to clean the pan,
 so I haven't put any effort into saving that history.

Fair enough! Having the table of contents but not the text made me think
that perhaps the section B.6.b.ii The Disk as a Serial Associative
Memory and B.6.c. An Associatively Mapped LSI Memory might be
interesting in light of Ian's latest paper. Or the first part might be
more related to OOZE instead.

 But there were 4 or 5 pretty good things and 4 or 5 really bad things that
 helped the Alto-Smalltalk effort a few years later.

Was being able to input drawings one of the good things? There was one
Lisp GUI that put a lot of effort into allowing you to input objects
instead of just text. It did that by outputting text but keeping track
of where it came from. So if you pointed to the text generated by
listing the contents of a disk directory while there was some program
waiting for input, that program would read the actual entry object.

It is frustrating for me that while the Squeak VM could easily handle an
expression like

myView add: yellowEllipseMorph copy.

I have no way of typing that. I can't use any object as a literal nor as
input. In Etoys I can get close enough by getting  a tile representing
the yellowEllpiseMorph from its halo and use that in expressions. In
Self I could add a constant slot with some easy to type value, like 0,
and then drag the arrow from that slot to point to the object I really
wanted. It was a bit indirect but it worked and I used this a lot. The
nice thing about having something like this is that you never need
global variables again.

 I'd say that the huge factors after having tried to do one of these were two
 geniuses: Chuck Thacker (who was an infinitely better hardware designer and
 builder than I was), and Dan Ingalls (who was infinitely better at most phases
 of software design and implementation than I was).

True. You were lucky to have them, though perhaps we might say Bob
Taylor had built that luck into PARC.

-- Jecel





P.S. Re: a little more FLEXibility (was: [fonc] Re: Ceres and Oberon)

2011-09-02 Thread Alan Kay
Here's a link to Denis' 1970 thesis

He was one of the incredible group of French grad students at Utah in the late 
60s, including Henri Gouraud, Patrick Baudelaire, Bob Mahl, and Denis. They 
were really good!


http://books.google.com/books/about/DCPL_A_Distributed_Control_Programming_L.html?id=93gCOAAACAAJ

Cheers,

Alan





From: Alan Kay alan.n...@yahoo.com
To: Jecel Assumpcao Jr. je...@merlintec.com
Cc: Fundamentals of New Computing fonc@vpri.org
Sent: Friday, September 2, 2011 8:23 AM
Subject: Re: a little more FLEXibility (was: [fonc] Re: Ceres and Oberon)


Hi Jecel


I think both these sections were reactions to some of the current hardware, 
and hardware problems of the time. 



I remember the second one better than the first. In those days cutting dies 
out of the wafters often damaged them, and the general yield was not great 
even before cutting the die out. This idea was to lay down regions of memory 
on the wafer, run bus metalization over them, test them, and zap a few bits if 
they didn't work. The key here was that the working ones would each have a 
register that had its name (actually its base address and range) and all 
could look at the bus to see if their address came up. If it did, it would 
seize the bus and do what ever. So this was a kind of distributed small pages 
and MMUs scheme. And the yield would be much higher because the wafers 
remained intact. I don't think any of these tradeoffs obtain today, though one 
could imagine other kinds of schemes for distributed memory and memory 
management that would be more sensible than current schemes.


The first one I really don't remember. But it probably was partially the 
result of the head per track small disk that the FLEX machine used -- and 
probably was influenced by Paul Rovner's scheme at Lincoln Labs for doing 
Jerry Feldman's software associative triples memory. 



I think this was not about Denis Seror's later and really interesting thesis 
(under Barton) to make a lambda calculus machine -- really a combinator 
machine (to replace variables by paths) and to have the computation on the 
disk and just pull in and reduce as possible as the disk zoomed by. All was 
done in parallel and eventually all would be reduced. Denis came up with a 
nice little language that had a bit of an APL feeling for humans to program 
this system in. He (and his wife) wound up making an animated movie to show 
people who didn't know about lambda expressions and combinators (which was 
pretty much everyone in CS in those days) what they were and how they reduced.


There's no question that Bob Taylor was the prime key for PARC (and he also 
had paid for most of our PhDs in the 60s when he was an ARPA funder).


Cheers,


Alan



From: Jecel Assumpcao Jr. je...@merlintec.com
To: Alan Kay alan.n...@yahoo.com
Cc: Fundamentals of New Computing fonc@vpri.org
Sent: Thursday, September 1, 2011 3:17 PM
Subject: a little more FLEXibility (was: [fonc] Re: Ceres and Oberon)

Alan,

 The Flex Machine was the omelet you have to throw away to clean the pan,
 so I haven't put any effort into saving that history.

Fair enough! Having the table of contents but not the text made me think
that perhaps the section B.6.b.ii The Disk as a Serial Associative
Memory and B.6.c. An Associativeley Mapped LSI Memory might be
interesting in light of Ian's latest paper. Or the first part might be
more related to OOZE instead.

 But there
 were 4 or 5 pretty good things and 4 or 5 really bad things that
 helped the Alto-Smalltalk effort a few years later.

Was being able to input drawings one of the good things? There was one
Lisp GUI that put a lot of effort into allowing you to input objects
instead of just text. It did that by outputting text but keeping track
of where it came from. So if you pointed to the text generated by
listing the contents of a disk directory while there was some program
waiting for input, that program would read the actual entry object.
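A sketch of that provenance trick in Squeak terms (my illustration, not the
actual Lisp system): print text as usual, but keep each chunk paired with the
object it came from, so a reader can recover the object rather than the
string:

    | listing entry |
    listing := #('README' 'kernel.st' 'notes.txt') collect: [:name |
        name -> (Dictionary new at: #name put: name; yourself)].
    listing do: [:chunk | Transcript show: chunk key; cr].    "display uses only the text side"
    entry := (listing at: 2) value.    "a program waiting for input gets the entry object"
    Transcript show: (entry at: #name); cr.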

It is frustrating for me that while the Squeak VM could easily handle an
expression like

myView add: yellowEllipseMorph copy.

I have no way of typing that. I can't use any object as a literal nor as
input. In Etoys I can get close enough by getting a tile representing the
yellowEllipseMorph from its halo and using that in expressions. In Self I
could add a constant slot with some easy-to-type value, like 0, and then
drag the arrow from that slot to point to the object I really wanted. It was
a bit indirect, but it worked and I used it a lot. The nice thing about
having something like this is that you never need global variables again.
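A rough Squeak analogue of that Self workaround (hypothetical names; the idea
is to bind the live object to a typable name first, since the language still
has no object literals):

    | yellowEllipseMorph myView |
    yellowEllipseMorph := EllipseMorph new color: Color yellow; yourself.
    myView := Morph new.
    myView addMorph: yellowEllipseMorph copy.
    myView openInWorld.

The workspace variable plays the role of the easy-to-type slot; it is still
indirect, which is exactly the frustration.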

 I'd say that the huge factors after having tried to do one of these were two
 geniuses: Chuck Thacker (who was an infinitely better hardware designer and
 builder than I was), and Dan Ingalls (who was infinitely better at most
 phases of software design and implementation than I was).

True. You were lucky

Re: [fonc] Re: Ceres and Oberon

2011-09-01 Thread Alan Kay
I'm so glad I never read this before (and am looking for ways to forget that I
just did).

Cheers,

Alan





From: John Zabroski johnzabro...@gmail.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org
Cc: Jecel Assumpcao Jr. je...@merlintec.com
Sent: Thursday, September 1, 2011 10:31 AM
Subject: Re: [fonc] Re: Ceres and Oberon


Has [1] been mentioned yet?  If so, apologies.

I think many here are implicitly referencing this when bringing up Oberon.

[1] http://c2.com/cgi/wiki?HeInventedTheTerm


On Wed, Aug 31, 2011 at 2:25 PM, Alan Kay alan.n...@yahoo.com wrote:

The Flex Machine was the omelet you have to throw away to clean the pan, so 
I haven't put any effort into saving that history. But there were 4 or 5 
pretty good things and 4 or 5 really bad things that helped the 
Alto-Smalltalk effort a few years later. I'd say that the huge factors after 
having tried to do one of these were two geniuses: Chuck Thacker (who was an 
infinitely better hardware designer and builder than I was), and Dan Ingalls 
(who was infinitely better at most phases of software design and 
implementation than I was).


Cheers,


Alan





From: Jecel Assumpcao Jr. je...@merlintec.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org
Sent: Wednesday, August 31, 2011 3:09 PM
Subject: Re: [fonc] Re: Ceres and Oberon


Alan,

thanks for the detailed history!


 1966 was the year I entered grad school (having programmed for 4-5 years,
 but essentially knowing nothing about computer science). Shortly after
 encounters with and lightning bolts from the sky induced by Sketchpad and
 Simula, I found the Euler papers and thought you could make something with
 objects that would be nicer if you used Euler for a basis rather than how
 Simula was built on Algol. That turned out to be the case and I built this
 into the table-top plus display plus pointing device personal computer Ed
 Cheadle and I made over the next few years.


Is this available anywhere beyond the small fragments at

http://www.mprove.de/diplom/gui/kay68.html

and

http://www.mprove.de/diplom/gui/kay69.html

?

Though you often mention the machine itself, I have never seen you put
these texts in the list of what people should read like you do with
Ivan's thesis.


 The last time I looked at Oberon (at Apple more than 15 years ago) it did
 not impress, and did not resemble anything I would call an
 object-oriented
 language -- or an advance on anything that was already done in the 70s.
 But that's just my opinion. And perhaps it has improved since then.


It was an attempt to step back from the complexity of Modula-2, which is
a good thing. It has the FONC goal of being small enough to be
completely read and understood by one person (in the talk he does mention
that this takes the form of a 600-page book).

In the early 1990s I was trying to build a really low cost computer
around the Self language and a professor who always had interesting
insights suggested that something done with Oberon would require fewer
hardware resources. I studied the language and saw that they had
recently made it object oriented:

http://en.wikipedia.org/wiki/Oberon-2_%28programming_language%29

But it turned out that this was a dead end and the then-current system
was built with the original, non-object-oriented version of the language
(as it is to this day -- the OO programming Wirth mentioned in the talk
is the kind of thing you can do in plain C). I liked the size of the
system, but the ALL CAPS code hurt my eyes and the user interface was
awkward (both demonstrators in the movie had problems using it, though
Wirth had the excuse that he hadn't used it in a long time).

-- Jecel




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc






Re: [fonc] Re: Ceres and Oberon

2011-08-31 Thread Alan Kay
 and that are smaller and 
much more convenient and powerful.)

I think of what we did at PARC in the 70s as highly derivative of the best 
ideas of the 60s -- a great new recipe rather than brand new ingredients, and 
some of those came from Klaus.


The last time I looked at Oberon (at Apple more than 15 years ago) it did not 
impress, and did not resemble anything I would call an object-oriented language 
-- or an advance on anything that was already done in the 70s. But that's just 
my opinion. And perhaps it has improved since then.


Best wishes,


Alan





From: Eduardo Cavazos wayo.cava...@gmail.com
To: fonc@vpri.org
Sent: Wednesday, August 31, 2011 12:54 AM
Subject: [fonc] Re: Ceres and Oberon

Alan Kay wrote:

 I'm glad that he has finally come to appreciate OOP.

There are two kinds of people on this list. Those who can tell when Alan
is joking and those that can't. :-D

Don't know which I am but I can at least say that the OOP that is in
Oberon is not what Alan had in mind when he invented the term.

Sorry if you were being sincere Alan... :-)

At any rate, I do appreciate the Oberon system and the evolution of
Wirth's language through Pascal, Modula-2, and Oberon. *Somebody* had to
do the experiment that is: take a classical systems programming
language, and implement a small, understandable, system and environment
in that one language. I think the Oberon system is more or less what you
get when you take a C-like language and play it grand on a
workstation.

As an aside, I think it's crazy that C hasn't at least grown a module
system yet.

Changing the subject a bit...

We can all look to the past for great OS and language designs; each of
us knows a few. However, I'm not so sure about the network aspects and
the approaches to distributed computing. Can we look to the past for
inspiring distributed computing environments? Or are the truly great and
timeless ones yet to be invented? I guess we'll have to nail down and
agree upon a decent node first.

Ed


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc




P.S. Re: [fonc] Re: Ceres and Oberon

2011-08-31 Thread Alan Kay
P.S. I should have directly pointed out that there were many earlier systems 
that did the experiment of taking a single language and writing everything in 
it.

(Some earlier than Smalltalk, etc., although it was interesting for being 
early, small, and very high level)


Cheers,

Alan





From: Eduardo Cavazos wayo.cava...@gmail.com
To: fonc@vpri.org
Sent: Wednesday, August 31, 2011 12:54 AM
Subject: [fonc] Re: Ceres and Oberon

Alan Kay wrote:

 I'm glad that he has finally come to appreciate OOP.

There are two kinds of people on this list. Those who can tell when Alan
is joking and those that can't. :-D

Don't know which I am but I can at least say that the OOP that is in
Oberon is not what Alan had in mind when he invented the term.

Sorry if you were being sincere Alan... :-)

At any rate, I do appreciate the Oberon system and the evolution of
Wirth's language through Pascal, Modula-2, and Oberon. *Somebody* had to
do the experiment that is: take a classical systems programming
language, and implement a small, understandable, system and environment
in that one language. I think the Oberon system is more or less what you
get when you take a C-like language and play it grand on a
workstation.

As an aside, I think it's crazy that C hasn't at least grown a module
system yet.

Changing the subject a bit...

We can all look to the past for great OS and language designs; each of
us knows a few. However, I'm not so sure about the network aspects and
the approaches to distributed computing. Can we look to the past for
inspiring distributed computing environments? Or are the truly great and
timeless ones yet to be invented? I guess we'll have to nail down and
agree upon a decent node first.

Ed


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc




Re: [fonc] Re: Ceres and Oberon

2011-08-31 Thread Alan Kay
The Flex Machine was the omelet you have to throw away to clean the pan, so I 
haven't put any effort into saving that history. But there were 4 or 5 pretty 
good things and 4 or 5 really bad things that helped the Alto-Smalltalk 
effort a few years later. I'd say that the huge factors after having tried to 
do one of these were two geniuses: Chuck Thacker (who was an infinitely better 
hardware designer and builder than I was), and Dan Ingalls (who was infinitely 
better at most phases of software design and implementation than I was).

Cheers,

Alan





From: Jecel Assumpcao Jr. je...@merlintec.com
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org
Sent: Wednesday, August 31, 2011 3:09 PM
Subject: Re: [fonc] Re: Ceres and Oberon

Alan,

thanks for the detailed history!

 1966 was the year I entered grad school (having programmed for 4-5 years,
 but essentially knowing nothing about computer science). Shortly after
 encounters with and lightning bolts from the sky induced by Sketchpad and
 Simula, I found the Euler papers and thought you could make something with
 objects that would be nicer if you used Euler for a basis rather than how
 Simula was built on Algol. That turned out to be the case and I built this
 into the table-top plus display plus pointing device personal computer Ed
 Cheadle and I made over the next few years.

Is this available anywhere beyond the small fragments at

http://www.mprove.de/diplom/gui/kay68.html

and

http://www.mprove.de/diplom/gui/kay69.html

?

Though you often mention the machine itself, I have never seen you put
these texts in the list of what people should read like you do with
Ivan's thesis.

 The last time I looked at Oberon (at Apple more than 15 years ago) it did
 not impress, and did not resemble anything I would call an object-oriented
 language -- or an advance on anything that was already done in the 70s.
 But that's just my opinion. And perhaps it has improved since then.

It was an attempt to step back from the complexity of Modula-2, which is
a good thing. It has the FONC goal of being small enough to be
completely read and understood by one person (in the talk he does mention
that this takes the form of a 600-page book).

In the early 1990s I was trying to build a really low cost computer
around the Self language and a professor who always had interesting
insights suggested that something done with Oberon would require fewer
hardware resources. I studied the language and saw that they had
recently made it object oriented:

http://en.wikipedia.org/wiki/Oberon-2_%28programming_language%29

But it turned out that this was a dead end and the then-current system
was built with the original, non-object-oriented version of the language
(as it is to this day -- the OO programming Wirth mentioned in the talk
is the kind of thing you can do in plain C). I liked the size of the
system, but the ALL CAPS code hurt my eyes and the user interface was
awkward (both demonstrators in the movie had problems using it, though
Wirth had the excuse that he hadn't used it in a long time).

-- Jecel



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Ceres and Oberon

2011-08-30 Thread Alan Kay
I'm glad that he has finally come to appreciate OOP.

Cheers,

Alan



From: Jakob Praher ja...@praher.info
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org
Sent: Tuesday, August 30, 2011 2:23 PM
Subject: Re: [fonc] Ceres and Oberon


On 30.08.11 22:38, Alan Kay wrote:
 Sure. He was invited to spend a year in CSL in the mid 70s and decided to do
 an Alto-like machine with an Alto-like UI that ran Alto-like languages
 (it turned out to be an odd combination of Mesa and Smalltalk).
Did you exchange some ideas? He really appreciated object orientation when he
designed his drawing program. Did you explain Smalltalk to him?

The talk makes me think about the complexity of software and the ability to
understand code.

I think there are two sides:
a) no abstraction at all (assembly code): complicated, since simple
things become huge
b) over-use of abstraction: complicated, since it is hard to see where the
real stuff is going on

Maybe it also has something to do with bottom up vs top down.

Cheers.
Jakob




Cheers,


Alan





From: Jakob Praher j...@hapra.at
To: Fundamentals of New Computing fonc@vpri.org
Sent: Tuesday, August 30, 2011 1:02 PM
Subject: Re: [fonc] Ceres and Oberon

On 30.08.11 21:46, Jakob Praher wrote:
 Dear Eduardo,

 Thanks for sharing this. There is a great overlap between Alan's and
 Niklaus Wirth's sentiments.
 Very inspiring and to the point. Is anybody using Oberon currently as a
 working environment?

 @Alan: Can you remember the discussion with Niklaus from the PARC days?

 Best,
 Jakob


 On 30.08.11 20:25, Eduardo Cavazos wrote:
 Presentation from earlier this year by Niklaus Wirth on Oberon:

 http://www.multimedia.ethz.ch/conferences/2011/oberon/?doi=10.3930/ETHZ/AV-5879ee18-554a-4775-8292-3cf0293f5956&autostart=true

 Towards the end Niklaus demos an actual Ceres workstation.


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc







___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Messages

2011-08-20 Thread Alan Kay
(For example)

Try to imagine a system where the parts only receive messages but never 
explicitly send them.

This is one example of what I meant when I requested that computer people pay
more attention to what is in between the parts than to the parts themselves --
the Japanese have a great short word for it: "ma" -- we don't have one, and
that's a clue that we have a hard time with what is in between.
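One way to make the thought experiment concrete (a toy sketch in Squeak,
nothing more): each part below is a pure receiver -- just a handler -- and
every act of sending lives in the in-between, here a little router:

    | parts deliver |
    parts := Dictionary new.
    parts at: #greeter put: [:msg | Transcript show: 'hello, ', msg; cr].
    parts at: #doubler put: [:msg | Transcript show: (msg * 2) printString; cr].
    deliver := [:who :msg | (parts at: who) value: msg].    "all sending is here, in the 'ma'"
    deliver value: #greeter value: 'fonc'.
    deliver value: #doubler value: 21.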

Cheers,

Alan
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Extending object oriented programming in Smalltalk

2011-08-18 Thread Alan Kay
One way to try to think about "the idea of Lisp" and the larger interesting
issues is to read the "Advice Taker" paper by John McCarthy (ca. 56-58,
"Programs With Common Sense"), which is what got him thinking about
interactive intelligent agents, and got him to start thinking about creating
a programming language that such agents could be built in.

The context is that John was/is an excellent mathematician and this is one of 
his main perspectives for thinking about things.

So he wanted the agent to be able to reason.

He wanted to be able to reason about the agent.

He wanted to be able to reason about the programming language and its programs.

The agent operates in a world that has places and times and acquisition of new 
knowledge/state. So the reasoning has to be in a logical system where change 
happens, and this means that valid deductions at one time could be different if 
rededuced at some other time. However, he didn't want to violate the logic that 
worked earlier, etc.
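To make the time-indexed part concrete, here is a textbook-style
situation-calculus fragment (a paraphrase in the McCarthy & Hayes notation,
not a quotation from the papers):

    holds(at(robot, roomA), S0)
    holds(at(robot, roomB), result(go(roomB), S0))

The deduction "the robot is in roomA" is valid in S0 and invalid in the later
situation, yet nothing that held in S0 has been contradicted -- each
conclusion stays indexed by the situation in which it was drawn.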

He wanted a real mathematical apparatus (possibly a new one) that had 
desirable properties (including parsimony and compactness).

This led to several grand schemes that were all part of the same set of visions 
and insights.

Some of the best papers ever in CS were written by John (sometimes with
collaborators) in the 60s -- for example "Situations, Actions and Causal
Laws" (1963), which started off many of these lines of thought.

One of the many good ones is the McCarthy & Hayes paper about the situation
calculus and its modal logic. It represents a minority view about how to
regard computations, reasoning, change, and race conditions. A few others
around then also had this POV (including -- for some of it -- the Simula
inventors). It is also a summation paper, with a very good references section
that covers most of the good work being done in this area in the 60s.

Lisp was just one part of larger more important ideas.

Another more trivial but telling point is that John did not like the use of
S-expressions for programming -- he invented them to have a way to represent
collections and to serve as an internal form for interpretation. But as any
decent mathematician would, he wanted a more readable notation for
programming in, for reading programs, and for learning to think in this more
mathematical style. And he experimented with a number of these over the years.

Cheers,

Alan___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Extending object oriented programming in Smalltalk

2011-08-18 Thread Alan Kay
One of the few real science fiction movies ever made. Full of real ideas,
many very interesting. The follow-on book by W. J. Stuart was richer than the
movie (which is rarely the case).

I think it was an MGM movie, but they hired Disney studios to do the design
(including the refraction of colors in the air of the planet) and the
generally excellent special effects. The story form was based on
Shakespeare's Tempest. The setting was adapted from A. E. van Vogt's 1940s
classic, "The Voyage of the Space Beagle". Like "20,000 Leagues Under The
Sea", which I think was the same year or the year before, there were several
sets of writers -- the ones with the ideas, and the ones hired to "punch and
gag it up" for what Hollywood calls the "mouth breathers". So in both movies
you get "Chateaubriand with ketchup" -- you just have to scrape the ketchup
off to enjoy it.

Both of these heavily influenced the later Star Trek franchise (which still
had a few ideas, but lighter ones).

Cheers,

Alan




From: David Leibs david.le...@oracle.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Thursday, August 18, 2011 12:43 PM
Subject: Re: [fonc] Extending object oriented programming in Smalltalk

Old Timer Alert!

Ah, 1956. I was seven years old and Robby the Robot from the science fiction
movie "Forbidden Planet" had just leaped into popular culture. Robby was an
awesome automatous AI. The movie was really quite something for 1956:
faster-than-light travel, a cool space ship, 3D printers, an alien
super-brain race that had disappeared (the Krell), monsters from the Id.

To me Lisp is like something created by the Krell. "As though my ape's brain
could contain the secrets of the Krell."

I asked John if he had seen the movie and he had. John is "Krell Smart".

-David Leibs

On Aug 18, 2011, at 10:15 AM, Alan Kay wrote:

One way to try to think about "the idea of Lisp" and the larger interesting
issues, is to read the "Advice Taker" paper by John McCarthy (ca. 56-58
"Programs With Common Sense") which is what got him thinking about
interactive intelligent agents, and got him to start thinking about creating
a programming language that such agents could be built in.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Extending object oriented programming in Smalltalk

2011-08-17 Thread Alan Kay
Take a look at Landin's papers, especially ISWIM ("The Next 700 Programming
Languages").

You don't so much want to learn Lisp as to learn "the idea of Lisp".

Cheers,

Alan




From: karl ramberg karlramb...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Wednesday, August 17, 2011 12:00 PM
Subject: Re: [fonc] Extending object oriented programming in Smalltalk


Hi,
Just reading a Lisp book myself.
Lisp seems to be very pure at the bottom level.
The nesting in parentheses is hard to read and comprehend / debug.
Things get not so pretty when all sorts of DSLs are made to make it more
powerful.
The REPL gives it a kind of wing-clipped aura; there is more to computing than
text I/O.


Karl




On Wed, Aug 17, 2011 at 8:00 PM, DeNigris Sean s...@clipperadams.com wrote:

Alan,


While we're on the subject: you finally got to me and I started learning
LISP, but I'm finding an entire world rather than a cohesive language or
philosophy (Scheme -- which itself has many variants -- Common LISP, etc.).
What would you recommend to get it in the way that changes your thinking?
What should I be reading, downloading, coding, etc.?


Thanks.
Sean DeNigris

You wouldn't say that the Lisp 1.5 Programmer's Manual is outdated, would
you? :-)
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc





___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Physics and Types

2011-08-05 Thread Alan Kay
That was my thought when I first saw what Seymour Papert was doing with 
children and LOGO in the 60s. I was thinking about going back into Molecular 
Biology, but Seymour showed that computers could *really* be important as 
unique vehicles for teaching powerful ideas and powerful thinking to young 
children -- this was like a light going on.

There is more on this at http://www.vpri.org/html/writings.php under the
category "Teaching and Learning Powerful Ideas".


Cheers,

Alan





From: Simon Forman forman.si...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Thursday, August 4, 2011 10:33 PM
Subject: Re: [fonc] Physics and Types

Oh awesome! Thank you both. That's got to be one of the most profound uses of
computers I've ever run across.

Warm regards,
~Simon

On Thu, Aug 4, 2011 at 6:19 PM, Alan Kay alan.n...@yahoo.com wrote:
 Here's the link to the paper
 http://www.vpri.org/pdf/rn2005001_learning.pdf
 Cheers,
 Alan

 
 From: Martin McClure martin.mccl...@vmware.com
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Thursday, August 4, 2011 3:46 PM
 Subject: Re: [fonc] Physics and Types

 On 08/03/2011 08:10 PM, Simon Forman wrote:

 On the other hand, there's a story (I believe it's in one of the VPRI
 documents but I couldn't locate it just now) about children using
 their machines to take pictures of a falling object and then analyzing
 the pictures and deducing for themselves the constant-acceleration
 rule for gravity.

 IIRC, this is one of the things on the Squeakers DVD.
 http://squeakland.org/resources/audioVisual/#cat547

 Regards,

 -Martin

 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc








-- 
I live on a Pony Farm: http://fertilefuture.blogspot.com/
My blog: http://firequery.blogspot.com/

The history of mankind for the last four centuries is rather like
that of an imprisoned sleeper, stirring clumsily and uneasily while
the prison that restrains and shelters him catches fire, not waking
but incorporating the crackling and warmth of the fire with ancient
and incongruous dreams, than like that of a man consciously awake to
danger and opportunity.  --H. G. Wells, A Short History of the
World

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc




Re: [fonc] Physics and Types

2011-08-05 Thread Alan Kay
If you look carefully at the paper, you will see that differential
relationships are just what we taught the children a few months before doing
the ball-drop experiment.


This is very powerful when you have a computer because you can go directly at 
the relationships and let the computer do the incremental additions that 
perform the integrations. This gets at the heart of what calculus is actually 
about (and of course it is what Babbage aimed at with his first difference 
engine). 

Both Newton and Einstein liked the idea of exploring mathematical relationships 
first, because they form a basis (and a language) for helping to think about 
and articulate what one might be observing.


The nice thing is that children can readily deal with 1st and 2nd order 
differences and their calculations, and this covers a lot of the practical 
calculations of elementary Galilean and Newtonian physics.
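In plain Squeak terms the whole ball-drop script is a couple of additions per
tick (a sketch assuming unit time steps, the way Etoys scripts run; the
variable names are mine):

    | y speed gravity |
    y := 0.  speed := 0.  gravity := 9.8.
    1 to: 5 do: [:tick |
        speed := speed + gravity.    "the constant 2nd difference feeds the 1st difference"
        y := y + speed.              "the 1st difference feeds the position"
        Transcript show: 'tick ', tick printString, '  y = ', y printString; cr].

The successive y values show a constant second difference, which is just the
constant-acceleration rule the children rediscovered from the frames of the
falling ball.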

Underneath every Etoy object is a turtle, and a turtle is a vector (and there
are vector math operations between objects lurking in the Etoys viewers). But
this problem is one-dimensional, and even simple 2D follow-ons such as rolling
a ball off a table, "shoot the alien", etc., can easily be handled informally
and also acted out with the children's bodies, etc.

Cheers,

Alan





From: Ondřej Bílka nel...@seznam.cz
To: Fundamentals of New Computing fonc@vpri.org
Sent: Friday, August 5, 2011 6:13 AM
Subject: Re: [fonc] Physics and Types

On Fri, Aug 05, 2011 at 03:43:04AM -0700, BGB wrote:
    On 8/4/2011 6:19 PM, Alan Kay wrote:
 
      Here's the link to the paper
      [1]http://www.vpri.org/pdf/rn2005001_learning.pdf
 
    inference:
    it is not that basic math and physics are fundamentally so difficult to
    understand...
    but that many classes portray them as such a confusing and incoherent mess
    of notation and gobbledygook that no one can really make sense of it...
 
    old stale/dead rant follows:
 
    it is like, one year, with the help of a physics book,
    google+wikipedia+mathworld, and good old trial and error, I proceeded to
    write a (basically functional, but not particularly good) rigid body
    physics engine.
 
    several years later, I took a physics class, with a teacher that comes off
    like Q (calling everyone stupid, comparing the students with dogs, ...)
    and writes out esoteric mathematical gobbledygook beyond my abilities to
    make much sense of (filled with set-notation and other unrecognized
    symbols and notations, some in common with first-order logic, like the
    inverted A and backwards E, ..., and others unknown...).
 
... 
    granted, I have also seen in introductory programming classes just how
    poorly many of the students seem to grasp some of the basics of
    programming (struggling with things like variable declarations, loops,
    understanding why never-called functions fail to do anything, ...), so I
    guess ultimately it is kind of similar (in an almost sad way, programming
    really doesn't seem like it should be all that difficult from the POV of
    someone with a fair amount of experience with it).
 
    but, at the same time, there would also be nothing good to be gained by
    belittling or being condescending towards newbies...
 
Well, I faced the opposite problem: in classes, people unnecessarily
complicate things by trying to make them accessible for newbies.
One of my experiences is that high school physics could be three times
easier and simpler if students learned differential equations.

-- 

Pentium FDIV bug

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc




Re: [fonc] HotDraw's Tool State Machine Editor

2011-07-30 Thread Alan Kay
By the way, a wonderful example of the QWERTY phenomenon is that both the
Greeks and the Romans actually did calculations with an on-table or
on-the-ground abacus that did have a zero (the term for the small stone
employed was a "calculus"), but they used a much older set of conventions for
writing numbers down.

(One can imagine the different temperaments involved in the odd arrangement
above -- which is very much like many such odd arrangements around us in the
world today ...)

Cheers,

Alan





From: K. K. Subramaniam kksubbu...@gmail.com
To: fonc@vpri.org
Cc: Alan Kay alan.n...@yahoo.com
Sent: Sat, July 30, 2011 3:09:39 PM
Subject: Re: [fonc] HotDraw's Tool State Machine Editor

On Thursday 28 Jul 2011 10:27:26 PM Alan Kay wrote:
 Well, we don't absolutely need music notation, but it really helps many 
 things. We don't need the various notations of mathematics (check out
 Newton's  use of English for complex mathematical relationships in the
 Principia), but it really helps things.
I would consider notions and notations distinct entities, but they often
feed each other in a symbiotic relationship. Take the decimal system, for
instance: the invention and refinement of decimal numerals made many higher
notions possible. These would be hindered with Roman numerals. Notations need
to be carefully designed to assist in communicating notions to others or to
connect notions together.

BTW, the bias ;-) towards written forms in computing should not blind us to
the fact that speech is a form of notation too. The speech-notion connection
was studied thousands of years before written notations (cf. the Sphota,
Logos, or Vakyapadiya entries in Wikipedia).

Subbu
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc

