Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-26 Thread BGB

On 7/25/2011 4:28 PM, David Barbour wrote:
On Mon, Jul 25, 2011 at 3:20 PM, BGB cr88...@gmail.com wrote:


too bad there is no standardized bytecode or anything though, but
then I guess it would at this point be more like
browser-integrated Flash or something, as well as be potentially
more subject to awkward versioning issues, or the bytecode ends up
being lame/inflexible/awkward/...


Bytecode is a *bad* idea - all it ever does is reduce our ability to 
reason about, secure, and optimize code. Bytecodes have not achieved 
their proposed cross-language benefits - i.e. they tend to be very language 
specific anyway, so you might as well compile to an intermediate 
application language.




well, there are pros and cons.

pros:
more compact;
better at hiding one's source code (decompilers are necessary);
can be executed directly if using an interpreter (no parser/... needed);
...

cons:
often less flexible than the source language;
lots of capabilities may require a decent number of opcodes (say, 500 to 1000);
are typically language specific;
are often sensitive to version issues (absent special care, which often 
leads to cruft);
are generally VM-specific;
...


If you want compiled JavaScript, try Google's Closure compiler 
(JavaScript -> JavaScript).


But I do agree that JavaScript is not an ideal target for compilation!



the main merit of a bytecode format is that it could shorten the path in 
getting to native code, potentially allowing it to be faster.


note that having a bytecode does not preclude having 'eval()' and 
similar (in fact, most VMs with eval tend to at least internally use 
bytecode anyways).
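
as a rough illustration of that (purely illustrative; the opcodes and 
names below are made up, not BGBScript's actual format), a toy eval() 
might compile a small expression language to a flat bytecode array and 
then run it on a stack:

    // toy eval(): compile an arithmetic expression to a flat bytecode
    // array, then run it on a small stack machine. opcodes and names are
    // invented for illustration only.
    function toyEval(src) {
      var toks = src.match(/\d+|[-+*/()]/g);     // tokenize numbers and operators
      var pos = 0, code = [];

      function parsePrimary() {
        if (toks[pos] === "(") { pos++; parseAdd(); pos++; return; }
        code.push("PUSH", Number(toks[pos++]));  // literal -> PUSH n
      }
      function parseMul() {
        parsePrimary();
        while (toks[pos] === "*" || toks[pos] === "/") {
          var op = toks[pos++]; parsePrimary();
          code.push(op === "*" ? "MUL" : "DIV");
        }
      }
      function parseAdd() {
        parseMul();
        while (toks[pos] === "+" || toks[pos] === "-") {
          var op = toks[pos++]; parseMul();
          code.push(op === "+" ? "ADD" : "SUB");
        }
      }
      parseAdd();                                // compile step (parser -> bytecode)

      var stack = [];                            // interpret step (no parser needed here)
      for (var ip = 0; ip < code.length; ip++) {
        switch (code[ip]) {
          case "PUSH": stack.push(code[++ip]); break;
          case "ADD":  stack.push(stack.pop() + stack.pop()); break;
          case "MUL":  stack.push(stack.pop() * stack.pop()); break;
          case "SUB":  { var b = stack.pop(), a = stack.pop(); stack.push(a - b); } break;
          case "DIV":  { var d = stack.pop(), c = stack.pop(); stack.push(c / d); } break;
        }
      }
      return stack.pop();
    }

    // toyEval("1 + 2 * (3 + 4)")  -->  15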


even my C compiler internally used a bytecode at one stage, although for 
historical reasons a textual representation of the IL was used between 
the frontend and backend.


the reason: I initially created a textual IL mostly to allow me to more 
easily test the codegen, but I wrote the frontend and backend 
separately, and ran into a bit of a problem: they didn't fit together. 
so, I modified the frontend to spit out the textual format instead of 
raw bytecode, and the problem was fixed (in the backend, the textual 
format was converted relatively directly into the bytecode format).


however, I have traditionally not had a serialized/canonical bytecode 
format (loading things from source has generally been more common). my 
current bytecode loading/saving (for BGBScript) has not been well tested, 
nor is it necessarily even a stable format (it is based on using the binary 
data serialization mechanism).



not that it all needs to be one or the other though.


or such...




Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-26 Thread David Barbour
On Mon, Jul 25, 2011 at 11:16 PM, BGB cr88...@gmail.com wrote:

 well, there are pros and cons.

 pros:
 more compact;
 better at hiding ones' source code (decompilers are necessary);
 can be executed directly if using an interpreter (no parser/... needed);
 ...


Counters:
* We can minify source, or zip it. There are tools that do this for
JavaScript (see the small sketch after this list).
* Hiding code is usually a bad thing. A pretense of security is always a bad
thing. But, if someone were to insist on an equal capability, I'd point them
at an 'obfuscator' (such as http://javascriptobfuscator.com/default.aspx). A
tool dedicated to befuddling your users will do a much better job in this
role than a simple bytecode compiler.
* We rarely execute bytecode directly; there is a lot of non-trivial setup
for linking and making sure we call the right code.
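
As a small sketch of the 'zip it' option (assuming Node.js and its
built-in zlib module; the sample source string is made up):

    // rough sketch: compressing JS source with Node's built-in zlib.
    // the sample source string is made up; ratios on real code will vary.
    var zlib = require("zlib");

    var source = "function add(a, b) { return a + b; }\n".repeat(200);
    var gzipped = zlib.gzipSync(Buffer.from(source, "utf8"));

    console.log("source bytes: ", Buffer.byteLength(source));
    console.log("gzipped bytes:", gzipped.length);
    // typically the gzipped form is a small fraction of the original,
    // which undercuts the "bytecode is more compact" argument.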

Besides, the real performance benefits come from compiling the code - even
bytecode is typically JIT'd. Higher-level source can allow more effective
optimizations, especially across library boundaries. We'll want to cache and
reuse the compiled code, in a format suitable for immediate execution.
JavaScript did a poor job here due to its lack of a module system (to
prevent name shadowing and such), but they're fixing that for ES.next.



 the main merit of a bytecode format is that it could shorten the path in
 getting to native code, potentially allowing it to be faster.


Well, it is true that one might save a few cycles for a straightforward
conversion.

The use of a private IL by a compiler isn't the same. You aren't forced to
stabilize a private IL the way you need to stabilize the JVM ops.

Regards,

Dave


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-26 Thread John Nilsson
On Tue, Jul 26, 2011 at 8:16 AM, BGB cr88...@gmail.com wrote:

 the main merit of a bytecode format is that it could shorten the path in
 getting to native code, potentially allowing it to be faster.


It seems to me that there is a lot of derivation of information going on
when interpreting source code. First a one dimensional stream of characters
is transformed into some kind of structured representation, possibly in
several steps. From the structural representation a lot of inference about
the program happens to deduce types and other properties of the structure.
Once inside a VM even more information is gathered such as determining if
call sites are typically monomorphic or not, and so on.

In other words a _lot_ of CPU cycles are spent on deriving the same
information, again and again, each time a program is loaded. Not only is
this a waste of energy, it also means that each interpreter of the program
needs to be able to derive all this information on its own, which leads to
very complex programs (expensive to develop).

Would it not be a big improvement if we could move from representing
programs as text-strings into representing them in some format capable of
representing all this derived information? Does anyone know of attempts in
this direction?
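
A low-tech sketch of the 'derive once, persist, reuse' direction (the
analysis below is deliberately trivial and the cache-file layout is made
up; real systems would cache much richer data such as bytecode, types,
or call-site feedback):

    // sketch: derive information about a program once, persist it, and
    // reload it later instead of re-deriving it on every load.
    // the "analysis" here (counting declared functions) is deliberately trivial.
    var fs = require("fs");

    function analyze(source) {
      var names = source.match(/\bfunction\s+(\w+)/g) || [];
      return { source: source, declaredFunctions: names.length };
    }

    function loadProgram(path) {
      var cachePath = path + ".analysis.json";
      if (fs.existsSync(cachePath) &&
          fs.statSync(cachePath).mtimeMs >= fs.statSync(path).mtimeMs) {
        return JSON.parse(fs.readFileSync(cachePath, "utf8"));  // reuse derived info
      }
      var info = analyze(fs.readFileSync(path, "utf8"));        // derive it once
      fs.writeFileSync(cachePath, JSON.stringify(info));
      return info;
    }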

BR,
John


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-26 Thread BGB

On 7/26/2011 5:34 AM, Igor Stasenko wrote:

On 26 July 2011 05:21, Alan Kay alan.n...@yahoo.com wrote:

Again good points.

Java itself could have been fixed if it were not for the Sun marketing
people who rushed the electronic toaster language out where it was not fit
to go. Sun was filled with computerists who knew what they were doing, but
it was quickly too late.

And you are right about Mark Miller.

My complaint is not about JS per se, but about whether it is possible to get
all the cycles the computer has for certain purposes. One of the main
unnecessary violations of the spirit of computing in the web is that it
wasn't set up to allow safe access to the whole machine -- despite this
being the goal of good OS design since the mid-60s.


Indeed. And the only lucky players in the field who can access raw
machine power are plugins like Flash.
And only because they gained enough trust as being well safe.
As for the rest of developers (if they are not using existing
mechanisms in the browser), the computer's resources are still closed behind
a solid fence.

Another interesting fact is that while we have hardware which can do
virtualization (not to mention software),
the only application of it which is widely adopted is to run one
operating system inside another, host one.

But hey, things could be much more lightweight!
For instance, look at the SqueakNOS project. Its boot time is like 10-15
seconds (and most of it belongs not to SqueakNOS itself but to the BIOS
and boot loader).

So, it remains a mystery to me why we cannot use web+virtualization.
It seems like a good balance between accessing raw machine power and
being safe at the same time.

I hope that NaCl is partly using it, but then I wonder why they are spending
effort to validate native code, because if something went wrong,
you could just kill it or freeze it (or do anything which virtualization
allows you to do),
without any chance of putting the host system in danger.


because NaCl was not built on virtualization... from what I heard it was 
built on using segmented memory.


the problem then is that unverified code could potentially far-pointer 
its way right out of the sandbox.
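
as a toy illustration of what "confining" memory access looks like in 
general (not NaCl's actual mechanism, which on 32-bit x86 leaned on 
hardware segment registers plus the code verifier): if every address the 
sandboxed code can form is masked into a fixed heap, there is nothing to 
far-pointer out to:

    // toy software-fault-isolation flavor: the sandbox only ever sees a
    // fixed heap, and every address is masked into that heap's range.
    // this is an illustration of the idea, not NaCl's real implementation.
    var HEAP_SIZE = 1 << 20;              // 1 MiB sandbox heap
    var ADDR_MASK = HEAP_SIZE - 1;        // power-of-two size -> simple mask
    var heap = new Uint8Array(HEAP_SIZE);

    function load(addr)         { return heap[addr & ADDR_MASK]; }
    function store(addr, value) { heap[addr & ADDR_MASK] = value & 0xff; }

    // even a deliberately "far" pointer stays inside the sandbox:
    store(0xdeadbeef, 42);
    console.log(load(0xdeadbeef));        // 42, read back from inside the heap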






Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-26 Thread BGB

On 7/26/2011 6:43 AM, John Nilsson wrote:
On Tue, Jul 26, 2011 at 8:16 AM, BGB cr88...@gmail.com wrote:


the main merit of a bytecode format is that it could shorten the
path in getting to native code, potentially allowing it to be faster.


It seems to me that there is a lot of derivation of information going 
on when interpreting source code. First a one dimensional stream of 
characters is transformed into some kind of structured representation, 
possibly in several steps. From the structural representation a lot of 
inference about the program happens to deduce types and other 
properties of the structure. Once inside a VM even more information is 
gathered such as determining if call sites are typically monomorphic 
or not, and so on.


In other words a _lot_ of CPU cycles are spend on deriving the same 
information, again and again, each time a program is loaded. Not only 
is this a waste of energy it also means that each interpreter of the 
program needs to be able to derive all this information on their own, 
which leads to very complex programs (expensive to develop).


Would it not be a big improvement if we could move from representing 
programs as text-strings into representing them in some format capable 
of representing all this derived information? Does any one know of 
attempts in this direction?




there are merits to using source-code and a multi-stage translation 
process, and to representing programs as source-code.



however, ideally, this shouldn't happen every time the program 
starts, and source-code is not the ideal format for:

use as the main/sole distribution format;
being the IR of another process, such as an interpreter or compiler for 
another language.


for example, JavaScript and "LangX" (some language X) should be able 
to run, more or less, side-by-side (without requiring plugins and/or 
compromising security), with the LangX code essentially being able to 
target the underlying VM (ideally in a semi-portable manner).


if LangX basically needs to rewrite the code into JavaScript and feed it 
through eval, this works, but essentially makes LangX a lower-class 
citizen than JS is.
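
a tiny sketch of that "rewrite into JavaScript and feed it through eval" 
path (the mini LangX syntax here is invented purely for illustration):

    // sketch: a made-up "LangX" expression is translated to JavaScript
    // source text and handed to eval(). it works, but LangX only exists
    // at the level of strings; it never gets to talk to the VM directly.
    function compileLangXToJS(expr) {
      // LangX here is just prefix arithmetic, e.g. "(add 1 (mul 2 3))"
      var toks = expr.replace(/[()]/g, " $& ").trim().split(/\s+/), pos = 0;
      function emit() {
        if (toks[pos] !== "(") return toks[pos++];        // number literal
        pos++;                                            // skip "("
        var op = toks[pos++], a = emit(), b = emit();
        pos++;                                            // skip ")"
        var jsOp = { add: "+", sub: "-", mul: "*", div: "/" }[op];
        return "(" + a + " " + jsOp + " " + b + ")";
      }
      return emit();
    }

    var js = compileLangXToJS("(add 1 (mul 2 3))");  // -> "(1 + (2 * 3))"
    console.log(eval(js));                            // 7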





Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-26 Thread BGB

On 7/26/2011 9:05 AM, David Barbour wrote:
On Tue, Jul 26, 2011 at 1:50 AM, BGB cr88...@gmail.com wrote:


whether or not compiling to bytecode is itself an actually
effective security measure, it is the commonly expected security
measure.


Is it? I've not heard anyone speak that way in many, many years. I 
think people are getting used to JavaScript.


for web-apps maybe, but it will likely be a long time before it becomes 
adopted by commercial application software (where the source-code is 
commonly regarded as a trade-secret).





a compiler may be expected (as part of the process) even if it
could be technically more correctly called an archiver or similar
(people may not let go of the established process easily).


We can benefit from 'compilers' even if we distribute source. For 
example, JavaScript-to-JavaScript compilers can optimize code, eliminate 
dead code, provide static warnings, and so on. We should also be able 
to compile other languages into the distribution language. I don't 
mind having a compiler be part of 'the process'. The issue regards the 
distribution language, not how you reach it.




yes, but why do we need an HLL distribution language, rather than, say, 
a low-level distribution language, such as bytecode or a VM-level 
ASM-like format, or something resembling Forth or PostScript?...



That said, it would often be preferable to distribute source, and use 
a library or module to parse and compile it. This would allow us to 
change our implementation without redistributing our intention. A 
language with good support for 'staging' would be nice.


potentially.



granted, yes, there are different ways to approach JIT (whether or
not to inline things, blocks vs traces, ...).


Hotspot, too. It is possible to mix interpretation with compilation.



yeah. my present strategy is to assume mixed compilation and 
interpretation.
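
a crude sketch of what "mixed" can mean in practice - interpret by 
default, and only pay for compilation once a function proves hot (the 
threshold, interpreter, and compiler below are stand-ins, not any 
particular VM's policy):

    // sketch of count-based tiering: run an interpreted version until a
    // call-count threshold is hit, then swap in a "compiled" version.
    // the interpreter, compiler, and threshold are all stand-ins.
    function tiered(ast, interpret, compile, threshold) {
      var calls = 0, compiled = null;
      return function (args) {
        if (compiled) return compiled(args);
        if (++calls >= threshold) {
          compiled = compile(ast);          // pay the compile cost only for hot code
          return compiled(args);
        }
        return interpret(ast, args);        // cheap start-up path
      };
    }

    // e.g. (hypothetical names): var f = tiered(myAst, myInterpreter, myJit, 1000);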


back with my C compiler, I tried to migrate to a pure-compilation 
strategy (there was no interpreter, only the JIT). this ultimately 
created far more problems than it solved.


the alternative was its direct ancestor, a prior version of my BGBScript 
VM, which at the time had used a combined interpreter+JIT strategy 
(sadly, in later versions the JIT has broken, as the VM has been too 
much in flux and I haven't kept up on keeping it working).




also, depending on language it may matter:


Agreed. We certainly should *design* the distribution language with an 
eye on distribution, not just pick an arbitrary language.


yeah. such a language should be capable of expressing a wide range of 
languages and semantics.



a basic model which has been working acceptably in my case can be 
described roughly as:

sort of like PostScript but also with labels and conditional jumps.

pretty much the entire program representation can be in terms of blocks 
and a stack machine.
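
a minimal sketch of that kind of model (opcode names invented for 
illustration): one block of stack-machine ops, plus labels and a 
conditional jump for control flow:

    // minimal "PostScript plus labels and conditional jumps" flavor:
    // one block, a value stack, and an instruction pointer.
    function runBlock(block, arg) {
      var labels = {}, stack = [arg], vars = {};
      for (var i = 0; i < block.length; i++)          // first pass: resolve labels
        if (block[i][0] === "label") labels[block[i][1]] = i;

      for (var ip = 0; ip < block.length; ip++) {
        var op = block[ip];
        switch (op[0]) {
          case "label": break;                        // no-op at run time
          case "push":  stack.push(op[1]); break;
          case "load":  stack.push(vars[op[1]]); break;
          case "store": vars[op[1]] = stack.pop(); break;
          case "add":   stack.push(stack.pop() + stack.pop()); break;
          case "lt":    { var b = stack.pop(), a = stack.pop(); stack.push(a < b); } break;
          case "jmp":   ip = labels[op[1]]; break;
          case "jz":    if (!stack.pop()) ip = labels[op[1]]; break;
          case "ret":   return stack.pop();
        }
      }
    }

    // sum the integers 1..arg:
    var sumBlock = [
      ["store", "n"],                 // the argument starts on the stack
      ["push", 0], ["store", "sum"],
      ["push", 1], ["store", "i"],
      ["label", "loop"],
      ["load", "i"], ["load", "n"], ["push", 1], ["add"], ["lt"], ["jz", "done"],
      ["load", "sum"], ["load", "i"], ["add"], ["store", "sum"],
      ["load", "i"], ["push", 1], ["add"], ["store", "i"],
      ["jmp", "loop"],
      ["label", "done"],
      ["load", "sum"], ["ret"]
    ];
    // runBlock(sumBlock, 10)  -->  55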





but, I guess the question that can be made is whether or not the
bytecode is intended to be a stable distribution format (of the
same sort as JBC or CIL), or intended more as a transient format
which may depend somewhat on the currently running VM (and may
change from one version to the next).


We should not tie our users to a particular distribution of the VM. If 
you distribute bytecode, or any language, it really should be stable, 
so that other people can compete with the implementation.




what I meant may have been misinterpreted.

it could be restated more as: should the bytecode even be used for 
program distribution?


if not, then it can be used more internal to the VM and languages 
running on the VM, such as for implementing lightweight eval mechanisms 
for other languages, ...


hence "currently running VM", basically in this sense meaning "which VM 
are we running on right now?". if done well, a program, such as a 
language compiler, can target the underlying VM without getting tied too 
much into how the VM's IL works, allowing both some level of portability 
for the program, as well as reasonably high performance and flexibility 
for the VM to change its IL around as-needed (or potentially bypass the 
IL and send the code directly to native code).


most likely though, the above would largely boil down to emitting code 
via an API.

granted, yes, there are good and bad points to API-driven code generation.

an analogy would be something sort of like OpenGL, but for compilers.

(side note:
actually, at the moment the thought of an OpenGL-like codegen interface 
seems interesting. but I am thinking more in the context of using it as 
a means of emitting native code. however, sadly, most of my prior 
attempts at separating the codegen from the higher-level IL mechanics, 
... have not gone well. ultimately some structure may be necessary. )
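
a rough sketch of what an OpenGL-like emit API might feel like (all 
names here are hypothetical): the frontend streams instructions through 
begin/emit/end calls, and whatever the backend does with them - IL, 
bytecode, or native code - stays hidden behind the interface:

    // hypothetical immediate-mode codegen API, loosely in the spirit of
    // glBegin()/glEnd(): the front end streams ops; what the backend does
    // with them (IL, bytecode, native code) is hidden behind the interface.
    function Codegen() { this.fns = {}; this.cur = null; }

    Codegen.prototype.beginFunction = function (name) {
      this.cur = { name: name, ops: [] };
    };
    Codegen.prototype.emit = function (op /*, ...args */) {
      this.cur.ops.push([].slice.call(arguments));   // record the instruction stream
    };
    Codegen.prototype.endFunction = function () {
      // a real backend might lower this.cur.ops to bytecode or native code here
      this.fns[this.cur.name] = this.cur;
      this.cur = null;
    };

    // usage: the caller never sees the backend's internal IL
    var cg = new Codegen();
    cg.beginFunction("addTwo");
    cg.emit("loadArg", 0);
    cg.emit("pushConst", 2);
    cg.emit("add");
    cg.emit("ret");
    cg.endFunction();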



as for the alternative case, note ARM:
ARM and Thumb machine code are often used as distribution formats.

however, ARM is also fairly model specific, and 

Re: Growth, Popularity and Languages - was Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-26 Thread Casey Ransberger
I want to try using a fluffy pop song to sell a protest album... it worked for 
others before me:) If you're really lucky, some people will accidentally listen 
to your other songs. 

(metaphorically speaking)

A spoonful of sugar 
--
Casey


On Jul 25, 2011, at 10:47 PM, Alan Kay alan.n...@yahoo.com wrote:

 The trivial take on computing today by both the consumers and most of the 
 professionals would just be another pop music to wince at most of the 
 time, if it weren't so important for how future thinking should be done.



Re: Growth, Popularity and Languages - was Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-26 Thread Julian Leviston

On 26/07/2011, at 3:47 PM, Alan Kay wrote:

 But the dilemma is: what happens if this is the route and the children and 
 adults reject it for the much more alluring human universals? Even if almost 
 none of them lead to a stable, thriving, growth inducing and prosperous 
 civilization?
 
 These are the issues I care about.


You seem to be seeing these two as orthogonal. I see them as mutually 
complementary (i.e. we drive people to what they need through what they want...).

Julian.


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-26 Thread Casey Ransberger
On Jul 26, 2011, at 3:28 PM, BGB cr88...@gmail.com wrote:

 On 7/26/2011 9:05 AM, David Barbour wrote:
 
 On Tue, Jul 26, 2011 at 1:50 AM, BGB cr88...@gmail.com wrote:
 whether or not compiling to bytecode is itself an actually effective 
 security measure, it is the   commonly expected security measure.
 
 Is it? I've not heard anyone speak that way in many, many years. I think 
 people are getting used to JavaScript.
  
 
 for web-apps maybe, but it will likely be a long time before it becomes 
 adopted by commercial application software (where the source-code is commonly 
 regarded as a trade-secret).

Worth pointing out that server side JS dodges this problem. Now that Node is 
out there, people are actually starting to do stuff with JS that doesn't run on 
the client, so it's happening... whether or not it's a real qualitative 
improvement for anyone.


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-26 Thread Igor Stasenko
On 26 July 2011 23:04, BGB cr88...@gmail.com wrote:
 On 7/26/2011 5:34 AM, Igor Stasenko wrote:

 On 26 July 2011 05:21, Alan Kayalan.n...@yahoo.com  wrote:

 Again good points.

 Java itself could have been fixed if it were not for the Sun marketing
 people who rushed the electronic toaster language out where it was not
 fit
 to go. Sun was filled with computerists who knew what they were doing,
 but
 it was quickly too late.

 And you are right about Mark Miller.

 My complaint is not about JS per se, but about whether it is possible to
 get
 all the cycles the computer has for certain purposes. One of the main
 unnecessary violations of the spirit of computing in the web is that it
 wasn't set up to allow safe access to the whole machine -- despite this
 being the goal of good OS design since the mid-60s.

 Indeed. And the only lucky players in the field who can access raw
 machine power is plugins like Flash.
 And only because they gained enough trust as being well safe.
 As for the rest of developers (if they are not using existing
 mechanisms in browser) the computer's resources still closed behind
 solid fence.

 Another interesting fact that while we having a hardware which can do
 virtualization (not saying about software),
 the only application of it which adopted widely is to run one
 operating system inside another, host one.

 But hey, things could be much more lightweight!
 For instance , look at SqueakNOS project. Its boot time is like 10-15
 seconds (and most of it not belongs to SqueakNOS itself but to bios
 and boot loader).

 So, it remains a mystery to me, why we cannot use web+virtualization.
 It seems like a good balance between accessing raw machine power and
 being safe at the same time.

 I hope that NaCl partly using it , but then i wonder why they spending
 an effort to validate native code, because if something vent wrong,
 you can just kill it or freeze it (or do anything which virtualization
 allow you to do),
 without any chance of putting host system in danger.

 because NaCl was not built on virtualization... from what I heard it was
 built on using segmented memory.

 the problem then is that unverified code could potentially far-pointer its
 way right out of the sandbox.


Hmm. As they mention on this page:
--
http://code.google.com/games/technology-nacl.html

Native Client is a sandboxing system. It runs code in a virtual
environment where all OS calls are intercepted by the NaCl runtime.
This has two benefits. First, it enhances security by preventing
untrusted code from making dangerous use of the operating system.
Second, because OS calls are virtualized, NaCl code is OS-independent.
You can run the same binary executable on MacOS, Linux, and Windows.

But syscall virtualization by itself wouldn't be as secure as
Javascript, because clever hackers can always find ways to exit the
sandbox. NaCl's real contribution is a software verification system
that scans each executable module before it runs. The verifier imposes
a set of constraints on the program that prevent the code from exiting
the sandbox. This security comes at a relatively small performance
price, with NaCl code generally running at about 95% the speed of
equivalent compiled code.
--

so, it leaves me clueless how it can escape the sandbox if you are
intercepting all system calls.
And what level of safety will static analysis give you, if your NaCl
module could be a virtual machine with a JIT - so potentially it could generate
and run arbitrary native code. Or it can download code from the internet
and then execute it.






-- 
Best regards,
Igor Stasenko AKA sig.



Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-26 Thread John Zabroski
On Tue, Jul 26, 2011 at 9:04 PM, Igor Stasenko siguc...@gmail.com wrote:

 On 26 July 2011 23:04, BGB cr88...@gmail.com wrote:
  On 7/26/2011 5:34 AM, Igor Stasenko wrote:
 
  On 26 July 2011 05:21, Alan Kayalan.n...@yahoo.com  wrote:
 
  Again good points.
 
  Java itself could have been fixed if it were not for the Sun marketing
  people who rushed the electronic toaster language out where it was
 not
  fit
  to go. Sun was filled with computerists who knew what they were doing,
  but
  it was quickly too late.
 
  And you are right about Mark Miller.
 
  My complaint is not about JS per se, but about whether it is possible
 to
  get
  all the cycles the computer has for certain purposes. One of the main
  unnecessary violations of the spirit of computing in the web is that it
  wasn't set up to allow safe access to the whole machine -- despite this
  being the goal of good OS design since the mid-60s.
 
  Indeed. And the only lucky players in the field who can access raw
  machine power is plugins like Flash.
  And only because they gained enough trust as being well safe.
  As for the rest of developers (if they are not using existing
  mechanisms in browser) the computer's resources still closed behind
  solid fence.
 
  Another interesting fact that while we having a hardware which can do
  virtualization (not saying about software),
  the only application of it which adopted widely is to run one
  operating system inside another, host one.
 
  But hey, things could be much more lightweight!
  For instance , look at SqueakNOS project. Its boot time is like 10-15
  seconds (and most of it not belongs to SqueakNOS itself but to bios
  and boot loader).
 
  So, it remains a mystery to me, why we cannot use web+virtualization.
  It seems like a good balance between accessing raw machine power and
  being safe at the same time.
 
  I hope that NaCl partly using it , but then i wonder why they spending
  an effort to validate native code, because if something vent wrong,
  you can just kill it or freeze it (or do anything which virtualization
  allow you to do),
  without any chance of putting host system in danger.
 
  because NaCl was not built on virtualization... from what I heard it was
  built on using segmented memory.
 
  the problem then is that unverified code could potentially far-pointer
 its
  way right out of the sandbox.
 
 
 Hmm. As they mentioning on this page:
 --
 http://code.google.com/games/technology-nacl.html

 Native Client is a sandboxing system. It runs code in a virtual
 environment where all OS calls are intercepted by the NaCl runtime.
 This has two benefits. First, it enhances security by preventing
 untrusted code from making dangerous use of the operating system.
 Second, because OS calls are virtualized, NaCl code is OS-independent.
 You can run the same binary executable on MacOS, Linux, and Windows.

 But syscall virtualization by itself wouldn't be as secure as
 Javascript, because clever hackers can always find ways to exit the
 sandbox. NaCl's real contribution is a software verification system
 that scans each executable module before it runs. The verifier imposes
 a set of constraints on the program that prevent the code from exiting
 the sandbox. This security comes at a relatively small performance
 price, with NaCl code generally running at about 95% the speed of
 equivalent compiled code.
 --

 so, it leaves me clueless, how it can escape the sandbox if you are
 intercepting all system calls.
 And what level of safety will give you a static analyzis, if your NaCl
 could be a Virtual Machine with JIT - so potentially it could generate
 and run arbitrary native code. Or can download the code from internet
 and then execute it.


Try [1] instead of a marketing summary.  What they do is severely constrain
the x86 execution model and limit interaction with OS interfaces.

Is it 100% safe?  Probably not.

[1] http://www.chromium.org/nativeclient/reference/research-papers







Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Alan Kay
Hi Thiago

To me, there is not nearly enough context to publish this outside this list. I 
like arguments and complaints that are well supported. I don't like the all too 
general practice on the web of mere opinions about any and all things.

One of the most interesting aspects to me about the reactions to the web is 
that 
the glaring mistakes in systems design from the very beginning were hardly 
noticed and complained about. The mess that constitutes the current so-called 
standards is astounding -- and worse -- is hugely inconvenient and blocks any 
number of things that are part of personal computing.

When we did Squeak ca 1996 this was not such a problem because one could 
generally provide executable plugins and helpers that would allow getting 
around 
the problems of the browsers and web. This is still possible, except that more 
and more since then, many SysAdmins in important destinations for software -- 
such as school districts and many companies -- will not allow anyone to 
download 
an executable plugin when needed. This is largely because they fear that these 
cannot be run safely by their MS OS.

This means that what can be done in the browser by combinations of the standard 
tools -- especially JavaScript -- now becomes mission critical. 


For example, some of our next version of Etoys for children could be done in 
JS, 
but not all -- e.g. the Kedama massively parallel programmable particle system 
made by Yoshiki cannot be implemented to run fast enough in JS. It needs 
something much faster and lower level -- and this something has not existed 
until the Chrome native client (and this only in Chrome which is only about 11% 
penetrated). 


So today there is no general solution for this intolerable situation. We've got 
Microsoft unable to make a trusted OS, so the SysAdmins ban executables. And 
we've got the unsophisticated browser and web folks who don't understand 
operating systems at all. And this on machines whose CPUs have address space 
protection built in and could easily run many such computations completely 
safely! Yikes! Where are we? In some Danteish 9th Circle of Fumbling?

Cheers,

Alan




From: Thiago Silva tsi...@sourcecraft.info
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org
Sent: Sun, July 24, 2011 1:41:33 PM
Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam

Hello Dr. Alan,

Since access to fonc list archives is closed to members, would you allow me to 
publish your email below elsewhere for public access? It is the most rich and 
informative critique I've found about the web (plus the non-authoring nature 
of the browser you've mentioned before).

Cheers,
Thiago

On Sunday 24 July 2011 14:24:20 Alan Kay wrote:
 Hi Marcel
 
 I think I've already said a bit about the Web on this list -- mostly about
 the complete misunderstanding of the situation the web and browser
 designers had.
 
 
 All the systems principles needed for a good design were already extant,
 but I don't think they were known to the designers, even though many of
 them were embedded in the actual computers and operating systems they
 used.
 
 The simplest way to see what I'm talking about is to notice the many-many
 things that could be done on a personal computer/workstation that couldn't
 be done in the web  browser running on the very same personal
 computer/workstation. There was never any good reason for these
 differences.
 
 Another way to look at this is from the point of view of separation of
 concerns. A big question in any system is how much does 'Part A' have to
 know about 'Part B' (and vice versa) in order to make things happen? The
 web and browser designs fail on this really badly, and have forced set
 after set of weak conventions into larger and larger, but still weak
 browsers and, worse, onto zillions of web pages on the net.
 
 
 Basically, one of the main parts of good systems design is to try to find
 ways to finesse safe actions without having to know much. So -- for
 example -- Squeak runs everywhere because it can carry all of its own
 resources with it, and the OS processes/address-spaces allow it to run
 safely, but do not have to know anything about Squeak to run it. Similarly
 Squeak does not have to know much to run on every machine - just how to
 get events, a display buffer, and to map its file conventions onto the
 local ones. On a bare machine, Squeak *is* the OS, etc. So much for old
 ideas from the 70s!
 
 The main idea here is that a windowing 2.5 D UI can compose views from many
 sources into a page. The sources can be opaque because they can even do
 their own rendering if needed. Since the sources can run in protected
 address-spaces their actions can be confined, and we the mini-OS running
 all this do not have to know anything about them. This is how apps work on
 personal computers, and there is no reason why things shouldn't work this
 way when the address-spaces come from other parts

Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Igor Stasenko
On 25 July 2011 09:47, Alan Kay alan.n...@yahoo.com wrote:
 Hi Thiago

 To me, there is not nearly enough context to publish this outside this list.
 I like arguments and complaints that are well supported. I don't like the
 all too general practice on the web of mere opinions about any and all
 things.

 One of the most interesting aspects to me about the reactions to the web is
 that the glaring mistakes in systems design from the very beginning were
 hardly noticed and complained about. The mess that constitutes the current
 so-called standards is astounding -- and worse -- is hugely inconvenient
 and blocks any number of things that are part of personal computing.

 When we did Squeak ca 1996 this was not such a problem because one could
 generally provide executable plugins and helpers that would allow getting
 around the problems of the browsers and web. This is still possible, except
 that more and more since then, many SysAdmins in important destinations for
 software -- such as school districts and many companies -- will not allow
 anyone to download an executable plugin when needed. This is largely because
 they fear that these cannot be run safely by their MS OS.


I think that there is only one successful example of a browser plugin
which earned enough trust from sysadmins
to get itself installed - Flash.
If you watch its evolution over the years, you can see it evolved
from a quite simple graphics and animation addon
into a full-fledged ecosystem - which Squeak and Smalltalk had from the
very beginning.

It is really a pity that we have systems which have no trust from the users' side.

Interestingly, many of today's trendy and popular things (which we
know today as the web) were invented as temporary solutions without any
systematic approach.
I think I am right in saying that most of these technologies
(like PHP, JavaScript, Ruby, Sendmail etc.) are the result of random
choice instead of planning and a deep study of the problem field
before doing anything.
And that's why it is no surprise they are failing to grow.
And now people are trying to fill the gaps in those technologies with
security, scalability and so on... because they are now well-established
standards, while originally they were not meant to be used in
such a form from the beginning.

In contrast, as you mentioned, the TCP/IP protocol which is the backbone of
today's internet has a much better design.
But I think this is a general problem of software evolution. No matter
how hard you try, you cannot foresee all kinds of interactions,
features and use cases for your system when you are designing it at the
beginning.
Because 20 years ago systems had completely different requirements
compared to today's ones. So what was good enough 20 years ago
is not very good today.
And here is the problem: it is hard to radically change the software,
especially core concepts, because everyone is using it, has gotten used to it,
and it has been made a standard.
So you have to maintain compatibility and invent workarounds, patches
and fixes on top of existing things, rather than radically change the
landscape.


 This means that what can be done in the browser by combinations of the
 standard tools -- especially JavaScript -- now becomes mission critical.

 For example, some of our next version of Etoys for children could be done in
 JS, but not all -- e.g. the Kedama massively parallel programmable particle
 system made by Yoshiki cannot be implemented to run fast enough in JS. It
 needs something much faster and lower level -- and this something has not
 existed until the Chrome native client (and this only in Chrome which is
 only about 11% penetrated).

 So today there is no general solution for this intolerable situation. We've
 got Microsoft unable to make a trusted OS, so the SysAdmins ban executables.
 And we've got the unsophisticated browser and web folks who don't understand
 operating systems at all. And this on machines whose CPUs have address space
 protection built in and could easily run many such computations completely
 safely! Yikes! Where are we? In some Danteish 9th Circle of Fumbling?

 Cheers,

 Alan

 
 From: Thiago Silva tsi...@sourcecraft.info
 To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing
 fonc@vpri.org
 Sent: Sun, July 24, 2011 1:41:33 PM
 Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam

 Hello Dr. Alan,

 Since access to fonc list archives is closed to members, would you allow me
 to
 publish your email below elsewhere for public access? It is the most rich
 and
 informative critique I've found about the web (plus the non-authoring nature
 of the browser you've mentioned before).

 Cheers,
 Thiago

 On Sunday 24 July 2011 14:24:20 Alan Kay wrote:
 Hi Marcel

 I think I've already said a bit about the Web on this list -- mostly about
 the complete misunderstanding of the situation the web and browser
 designers had.


 All the systems principles needed for a good design were already extant,
 but I don't

Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Julian Leviston

On 26/07/2011, at 12:03 AM, Igor Stasenko wrote:

 In contrast, as you mentioned, TCP/IP protocol which is backbone of
 today's internet having much better design.
 But i think this is a general problem of software evolution. No matter
 how hard you try, you cannot foresee all kinds of interactions,
 features and use cases for your system, when you designing it from the
 beginning.
 Because 20 years ago, systems has completely different requirements,
 comparing to today's ones. So, what was good enough 20 years ago,
 today is not very good.

That makes no sense to me at all. How were the requirements radically different?

I still use my computer to play games, communicate with friends and family, 
solve problems, author text, make music and write programs. That's what I did 
with my computer twenty years ago. My requirements are the same. Of course, the 
sophistication and capacity of the programs has grown considerably... so has 
the hardware... but the actual requirements haven't changed much at all.

 And here the problem: is hard to radically change the software,
 especially core concepts, because everyone using it, get used to it ,
 because it made standard.
 So you have to maintain compatibility and invent workarounds , patches
 and fixes on top of existing things, rather than radically change the
 landscape.

I disagree with this entirely. Apple manage to change software radically... by 
tying it with hardware upgrades (speed/capacity in hardware) and other things 
people want (new features, ease of use). Connect something people want with 
shifts in software architecture, or make the shift painless and give some kind 
of advantage and people will upgrade, so long as the upgrade doesn't somehow 
detract from the original, that is. Of course, if you don't align something 
people want with software, people won't generally upgrade.




Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Dethe Elza

On 2011-07-25, at 12:47 AM, Alan Kay wrote:

 For example, some of our next version of Etoys for children could be done in 
 JS, but not all -- e.g. the Kedama massively parallel programmable particle 
 system made by Yoshiki cannot be implemented to run fast enough in JS. It 
 needs something much faster and lower level -- and this something has not 
 existed until the Chrome native client (and this only in Chrome which is only 
 about 11% penetrated). 

You don't have to wait for Chrome Native Client to have native levels of 
performance. Most of the current crop of browsers (i.e. not IE) use tracing JIT 
compilers to get close to native performance (in this experiment writing a CPU 
emulator in JS, one emulated instruction took approximately 20 native 
instructions: 
http://weblogs.mozillazine.org/roc/archives/2010/11/implementing_a.html). 
Javascript is fast and getting faster, with array operations coming soon and 
Web Workers for safe parallelism (purely message-based threads) available now.
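
For the Web Workers point, a small browser-side sketch of the
message-based model (the worker body is inlined via a Blob URL just to
keep the example self-contained):

    // message-based parallelism with a Web Worker: no shared state, just
    // postMessage() in and a message event back out.
    var workerSource =
      "onmessage = function (e) {" +
      "  var n = e.data, sum = 0;" +
      "  for (var i = 1; i <= n; i++) sum += i;" +  // stand-in for real work
      "  postMessage(sum);" +
      "};";

    var worker = new Worker(URL.createObjectURL(
      new Blob([workerSource], { type: "application/javascript" })));

    worker.onmessage = function (e) {
      console.log("worker result:", e.data);   // 55 for the call below
    };
    worker.postMessage(10);                     // runs off the main thread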

You can play 3D shooters, edit video, synthesize audio, and run Linux on an 
emulated CPU in Javascript. I'm not sure what part of that is not fast enough.

Some of it is cruft and some of it is less than elegant. But having higher 
level primitives (like what SVG and Canvas provide) isn't all bad.

--Dethe


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Igor Stasenko
On 25 July 2011 16:16, Julian Leviston jul...@leviston.net wrote:

 On 26/07/2011, at 12:03 AM, Igor Stasenko wrote:

 Interestingly that many today's trendy and popular things (which we
 know today as web) were invented as a temporary solution without any
 systematic approach.
 I think that i will be right by saying that most of these technologies
 (like PHP, Javascript, Ruby, Sendmail etc) is a result of random
 choice instead making planning and a deep study of problem field
 before doing anything.
 And that's why no surprise, they are failing to grow.
 And now, people trying to fill the gaps in those technologies with
 security, scalability and so on.. Because they are now well
 established standards.. while originally was not meant to be used in
 such form from the beginning.


 Wow... really? PHP, JavaScript, Ruby and Sendmail are the result of random
 choice?

Random. Because how could something which was done to satisfy minute needs
(i wanna script some interactions, so lets do it quick)
grow into something mature without a solid foundation?
If a conceptual flaw is there from the very beginning, how can you fix it?

Here is what the author says about JS:

JS had to “look like Java” only less so, be Java’s dumb kid brother or
boy-hostage sidekick. Plus, I had to be done in ten days or something
worse than JS would have happened
Brendan Eich

Apparently the missing systematic approach then strikes back, once it is
deployed, becomes popular and is used by millions..


 Javascript, PHP, Ruby and Sendmail failing to grow? Seriously? What do you
 mean by grow? It can't surely be popularity...

Grow not in popularity of course.
Grow in serving our needs.

 Julian.



-- 
Best regards,
Igor Stasenko AKA sig.



Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Igor Stasenko
On 25 July 2011 17:40, Yoshiki Ohshima yosh...@vpri.org wrote:
  Well said, Igor!

 At Mon, 25 Jul 2011 16:03:57 +0200,
 Igor Stasenko wrote:

 It is really a pity, that we have a systems which has no trust from users 
 side.

  Following the logic, maybe making provably secure system is not
 enough but it still takes some luck?


As Alan mentioned, there was at least one great example of a system
which has this potential: the B5000
(http://en.wikipedia.org/wiki/Burroughs_large_systems).

But today's systems are still not there. Why?

I think the answer would be the same as for why today we have to use these
strange and flawed technologies like JavaScript or PHP:
random picks of different choices made by people over the years, with
most of them based on minute needs (or even worse - marketing)
rather than
conscious decisions based on serious evaluation and study.
Apparently those techs have nothing to do with computer science,
because if you did it that way in the first place, then you would never end up
with things like PHP :)

So, it is really "And so it goes".


-- 
Best regards,
Igor Stasenko AKA sig.



Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Wesley Smith
 Why all those emerging technologies is just reproducing the same
 which were available for desktop apps for years?
 Doesn't it rings a bell that it is something fundamentally wrong with
 this technology?

Which technology?  The technical software one or the human
organization social one?



Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Igor Stasenko
On 25 July 2011 18:29, Wesley Smith wesley.h...@gmail.com wrote:
 Why all those emerging technologies is just reproducing the same
 which were available for desktop apps for years?
 Doesn't it rings a bell that it is something fundamentally wrong with
 this technology?

 Which technology?  The technical software one or the human
 organization social one?


I think both.
But since I am a technician I can clearly tell that JavaScript is a failure.

Why, 20 years ago, weren't we able to use a script to draw something
directly on a page?
It took 20 years for this particular software to evolve to do
something ridiculously basic..
What great progress we made! :)

Why are things like the Lively Kernel (http://www.lively-kernel.org/)
only possible to deliver today?
It really strikes me.

-- 
Best regards,
Igor Stasenko AKA sig.



Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Jakob Praher

Dear Alan,
Dear List,

the following very recent announcement might be of interest to this 
discussion: 
http://groups.google.com/group/mozilla.dev.platform/browse_thread/thread/7668a9d46a43e482


To quote Andreas et al.:

 Mozilla believes that the web can displace proprietary,
   single-vendor stacks for application development.  To make open web
   technologies a better basis for future applications on mobile and
   desktop alike, we need to keep pushing the envelope of the web to
   include --- and in places exceed --- the capabilities of the
   competing stacks in question. 

Though there is not much there yet (just a kind of manifesto and a 
readme file on github, https://github.com/andreasgal/B2G), I think this 
is an encouraging development. As the web becomes more and more a walled 
garden of giants, I think we desperately need to have open APIs. Strong 
open client APIs hopefully bring more power to individuals. What do you 
think?


Cheers,
-- Jakob



On 24.07.2011 19:24, Alan Kay wrote:

Hi Marcel

I think I've already said a bit about the Web on this list -- mostly 
about the complete misunderstanding of the situation the web and 
browser designers had.


All the systems principles needed for a good design were already 
extant, but I don't think they were known to the designers, even 
though many of them were embedded in the actual computers and 
operating systems they used.


The simplest way to see what I'm talking about is to notice the 
many-many things that could be done on a personal computer/workstation 
that couldn't be done in the web  browser running on the very same 
personal computer/workstation. There was never any good reason for 
these differences.


Another way to look at this is from the point of view of separation 
of concerns. A big question in any system is how much does 'Part A' 
have to know about 'Part B' (and vice versa) in order to make things 
happen? The web and browser designs fail on this really badly, and 
have forced set after set of weak conventions into larger and larger, 
but still weak browsers and, worse, onto zillions of web pages on the 
net.


Basically, one of the main parts of good systems design is to try to 
find ways to finesse safe actions without having to know much. So -- 
for example -- Squeak runs everywhere because it can carry all of its 
own resources with it, and the OS processes/address-spaces allow it to 
run safely, but do not have to know anything about Squeak to run it. 
Similarly Squeak does not have to know much to run on every machine - 
just how to get events, a display buffer, and to map its file 
conventions onto the local ones. On a bare machine, Squeak *is* the 
OS, etc. So much for old ideas from the 70s!


The main idea here is that a windowing 2.5 D UI can compose views from 
many sources into a page. The sources can be opaque because they can 
even do their own rendering if needed. Since the sources can run in 
protected address-spaces their actions can be confined, and we the 
mini-OS running all this do not have to know anything about them. This 
is how apps work on personal computers, and there is no reason why 
things shouldn't work this way when the address-spaces come from other 
parts of the net. There would then be no difference between local 
and global apps.


Since parts of the address spaces can be externalized, indexing as 
rich (and richer) to what we have now still can be done.


And so forth.

The Native Client part of Chrome finally allows what should have been 
done in the first place (we are now about 20+ years after the first 
web proposals by Berners-Lee).  However, this approach will need to be 
adopted by most of the already existing multiple browsers before it 
can really be used in a practical way in the world of personal 
computing -- and there are signs that there is not a lot of agreement 
or understanding why this would be a good thing.


The sad and odd thing is that so many people in the computer field 
were so lacking in systems consciousness that they couldn't see 
this, and failed to complain mightily as the web was being set up and 
a really painful genii was being let out of the bottle.


As Kurt Vonnegut used to say And so it goes.

Cheers,

Alan


*From:* Marcel Weiher marcel.wei...@gmail.com
*To:* Fundamentals of New Computing fonc@vpri.org
*Cc:* Alan Kay alan.n...@yahoo.com
*Sent:* Sun, July 24, 2011 5:39:26 AM
*Subject:* Re: [fonc] Alan Kay talk at HPI in Potsdam

Hi Alan,

as usual, it was inspiring talking to your colleagues and hearing you 
speak at Potsdam.  I think I finally got the Model-T image, which 
resonated with my fondness for Objective-C:  a language that a 17 year 
old with no experience with compilers or runtimes can implement and 
that manages to boil down dynamic OO/messaging to a single special 
function can't be all bad :-)


There was one question I had on the scaling issue that would

Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Dethe Elza

On 2011-07-25, at 9:25 AM, Igor Stasenko wrote:

 But don't you see a problem:
 it evolving from simple 'kiddie' scripting language into a full
 fledged system.

First off, JS was done in a hurry, but by Brendan Eich, who was hired by 
Netscape because he had implemented languages before and knew something about 
what he was doing (and could work fast). JS itself had a marketing requirement 
to have C-like syntax (curly braces), but the language itself was influenced 
more by Self and Lisp than any of the C lineage.

And the JS we use today has been evolving (what's wrong with evolving?) since 
1995. What is in browsers today was not designed in 10 days; it has been put 
through the wringer of day-to-day use, standardization processes, and 
deployment in an extremely wide range of environments. That doesn't make it 
perfect, and I'm not saying it doesn't have its warts (it does), but to 
disparage it as kiddie scripting reeks to me of trolling, not discussion.

 It is of course a good direction and i welcome it. But how different
 our systems would be, if guys who started it 20 years back would think
 a bit about future?

I don't think we would even be having this discussion if they didn't think 
about the future, and I think they've spent the intervening years continuing to 
think about (and implement) the future.

 Why all those emerging technologies is just reproducing the same
 which were available for desktop apps for years?

Security, for one. Browsers (and distributed systems generally) are a hostile 
environment and the ability to run arbitrary code on a user's machine has to be 
tempered by not allowing rogue code to erase their files or install a virus. In 
the meantime, desktops have also become distributed systems, and browser 
technology is migrating into the OS. That's not an accident.

 Doesn't it rings a bell that it is something fundamentally wrong with
 this technology?

Well, I doubt we could name a technology there isn't something fundamentally 
wrong with. I've been pushing Javascript as far as I could for more than a 
decade now. Browsers (and JS) really were crap back then, no doubt about it. 
But they are starting to become a decent foundation in the past couple of 
years, with more improvements to come. And there is something to be said for a 
safe language with first-class functions that is available anywhere a web 
browser can run (and further).

Anyhow, not going to spend more time defending JS. Just had to put in my $0.02 
CAD.

--Dethe
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Marcel Weiher
 by 
 Berners-Lee).  However, this approach will need to be adopted by most of the 
 already existing multiple browsers before it can really be used in a 
 practical way in the world of personal computing -- and there are signs that 
 there is not a lot of agreement or understanding why this would be a good 
 thing. 
 
 The sad and odd thing is that so many people in the computer field were so 
 lacking in systems consciousness that they couldn't see this, and failed to 
 complain mightily as the web was being set up and a really painful genii was 
 being let out of the bottle.
 
 As Kurt Vonnegut used to say And so it goes.
 
 Cheers,
 
 Alan
 
 From: Marcel Weiher marcel.wei...@gmail.com
 To: Fundamentals of New Computing fonc@vpri.org
 Cc: Alan Kay alan.n...@yahoo.com
 Sent: Sun, July 24, 2011 5:39:26 AM
 Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam
 
 [..]
 There was one question I had on the scaling issue that would not have fitted 
 in the QA:   while praising the design of the Internet, you spoke less well 
 of the World Wide Web, which surprised me a bit.   Can you elaborate?


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Igor Stasenko
On 25 July 2011 19:01, Dethe Elza de...@livingcode.org wrote:

 On 2011-07-25, at 9:25 AM, Igor Stasenko wrote:

 But don't you see a problem:
 it evolving from simple 'kiddie' scripting language into a full
 fledged system.

 First off, JS was done in a hurry, but by Brendan Eich who was hired by 
 Netscape because he had implemented languages before and knew something about 
 what he was doing (and could work fast). JS itself had a marketing 
 requirement to be have C-like syntax (curly braces), but the language itself 
 was influenced more by Self and Lisp than any of the C lineage.

 And the JS we use today has been evolving (what's wrong with evolving?) since 
 1995. What is in browsers today was not designed in 10 days, it has been 
 beaten through the wringer of day to day use, standardization processes, and 
 deployment in an extremely wide range of environments. That doesn't make it 
 perfect, and I'm not saying it doesn't have it's warts (it does), but to 
 disparage it as kiddie scripting reeks to me of trolling, not discussion.


There was no intent of any disrespect or disparagement.
For me, it's a fact that the original implementations were started (as
many other popular projects were) in a form of 'kiddie scripting' and then
evolved into something bigger/better.

After all, a starting point defines the way you go.

 It is of course a good direction and i welcome it. But how different
 our systems would be, if guys who started it 20 years back would think
 a bit about future?

 I don't think we would even be having this discussion if they didn't think 
 about the future, and I think they've spent the intervening years continuing 
 to think about (and implement) the future.

 Why all those emerging technologies is just reproducing the same
 which were available for desktop apps for years?

 Security, for one. Browsers (and distributed systems generally) are a hostile 
 environment and the ability to run arbitrary code on a user's machine has to 
 be tempered by not allowing rogue code to erase their files or install a 
 virus. In the meantime, desktops have also become distributed systems, and 
 browser technology is migrating into the OS. That's not an accident.

Yeah... and the only difference I see in today's systems is that before
running a downloaded executable, the system asks "are you sure you want
to run something downloaded from the internet?".
So, we're still not there. Our systems are still not as secure as we
want them to be (otherwise why ask the user such questions?).
:)

From today's perspective, how would you explain to people why drawing
on a canvas (as in HTML5) is available only today and not starting from
HTML 1.0?

As Julian said before in this thread, 20 years ago we had almost the same
requirements. So, assuming that 20 years back we wanted to deliver
dynamic content which draws things on screen, why did it take 20 years to
implement it?

I think the only answer could be that we've changed our view of what
'web content' is. While 20 years back it was mostly static content
with simple markup text and a couple of images, today it is completely
different.
So, I think it is more a lack of vision than technical/security issues.


 Doesn't it rings a bell that it is something fundamentally wrong with
 this technology?

 Well, I doubt we could name a technology there isn't something fundamentally 
 wrong with. I've been pushing Javascript as far as I could for more than a 
 decade now. Browsers (and JS) really were crap back then, no doubt about it. 
 But they are starting to become a decent foundation in the past couple of 
 years, with more  improvements to come. And there is something to be said for 
 a safe language with first-class functions that is available anywhere a web 
 browser can run (and further).


Yes. But wait. Why, if I want to run something on a web page, does it have
to be JavaScript?
Is JavaScript a universal answer to every possible problem we have?
I doubt it.

Because now I have to rewrite my own applications in JavaScript, just
because it is the only technology which allows you to reach your
user base.
Everyone jumps on the wagon and follows the hype, without even considering
alternatives.
And it's a pity.

So, it is good to hear about Google's NaCl. Maybe eventually it will
free us from 20-year-old shackles.

 Anyhow, not going to spend more time defending JS. Just had to put in my 
 $0.02 CAD.


I wouldn't say that I hate it that much. I just wanted to say that it's a
pity watching how painfully it has been evolving from the beginning. The
world could have spent that effort on something else over those 20 years :)

 --Dethe
 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc




-- 
Best regards,
Igor Stasenko AKA sig.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Benoît Fleury
So, i think it is more a lack of vision, than technical/security issues.

There might not have been a technical vision in the www but there is I
think a political statement which is that the information must be
open. Papers like "The Rule of Least Power" [1] make it very clear.
This is, in my opinion, the essence of the web and companies like
Google are built on it.

I think we're moving away today from this vision with technologies
like HTML5/JavaScript to respond to the application model of the
iPhone/iPad (more business friendly). I don't know if it will allow
us to keep this open philosophy or not.

- Benoit


[1] http://www.w3.org/2001/tag/doc/leastPower.html


On Mon, Jul 25, 2011 at 11:20 AM, Igor Stasenko siguc...@gmail.com wrote:
 On 25 July 2011 19:01, Dethe Elza de...@livingcode.org wrote:

 On 2011-07-25, at 9:25 AM, Igor Stasenko wrote:

 But don't you see a problem:
 it evolving from simple 'kiddie' scripting language into a full
 fledged system.

 First off, JS was done in a hurry, but by Brendan Eich who was hired by 
 Netscape because he had implemented languages before and knew something 
 about what he was doing (and could work fast). JS itself had a marketing 
 requirement to be have C-like syntax (curly braces), but the language itself 
 was influenced more by Self and Lisp than any of the C lineage.

 And the JS we use today has been evolving (what's wrong with evolving?) 
 since 1995. What is in browsers today was not designed in 10 days, it has 
 been beaten through the wringer of day to day use, standardization 
 processes, and deployment in an extremely wide range of environments. That 
 doesn't make it perfect, and I'm not saying it doesn't have it's warts (it 
 does), but to disparage it as kiddie scripting reeks to me of trolling, 
 not discussion.


 There was no intent of any disrespect or disparage.
 For me, its a fact that the original implementation were started (as
 many other popular projects) in a form of kiddie scripting and then
 evolved into something bigger/better.

 After all, a starting point defines the way you go.

 It is of course a good direction and i welcome it. But how different
 our systems would be, if guys who started it 20 years back would think
 a bit about future?

 I don't think we would even be having this discussion if they didn't think 
 about the future, and I think they've spent the intervening years continuing 
 to think about (and implement) the future.

 Why all those emerging technologies is just reproducing the same
 which were available for desktop apps for years?

 Security, for one. Browsers (and distributed systems generally) are a 
 hostile environment and the ability to run arbitrary code on a user's 
 machine has to be tempered by not allowing rogue code to erase their files 
 or install a virus. In the meantime, desktops have also become distributed 
 systems, and browser technology is migrating into the OS. That's not an 
 accident.

 Yeah.. And the only difference i see today in systems is before
 running a downloaded executable a system asking are you sure you want
 to run something downloaded from internet?.
 So, we're still not there. Our systems are still not as secure as we
 want them to be (otherwise why asking user such kind of questions?).
 :)

 From today's perspective, how you would explain to people, why drawing
 on canvas (as in HTML5) are available only today but not starting from
 HTML1.0?

 As Julian said before in this thread,  20 years ago we had almost same
 requirements.. So, assuming that 20 years back we wanted to deliver
 dynamic content which draws things on screen, why it took 20 years to
 implement it?

 I think the only answer could be, that we're changed the view on what
 'web content' are. While 20 years back it was mostly static content
 with simple markup text and couple of images, today it is completely
 different.
 So, i think it is more a lack of vision, than technical/security issues.


 Doesn't it rings a bell that it is something fundamentally wrong with
 this technology?

 Well, I doubt we could name a technology there isn't something fundamentally 
 wrong with. I've been pushing Javascript as far as I could for more than a 
 decade now. Browsers (and JS) really were crap back then, no doubt about it. 
 But they are starting to become a decent foundation in the past couple of 
 years, with more  improvements to come. And there is something to be said 
 for a safe language with first-class functions that is available anywhere a 
 web browser can run (and further).


 Yes. But wait. Why if i want to run something on a web page it has to
 be a javascript?
 Is javascript an universal answer to every possible problems we have?
 I doubt it.

 Because now, i have to rewrite own applications in javascript, just
 because it is the only technology which allows you to reach your
 user base.
 Everyone jumps into wagon and follows a hype. Without even considering
 alternatives.
 And its a pity.

 So, it is 

Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread David Barbour
On Sun, Jul 24, 2011 at 10:24 AM, Alan Kay alan.n...@yahoo.com wrote:

 The main idea here is that a windowing 2.5 D UI can compose views from many
 sources into a page. The sources can be opaque because they can even do
 their own rendering if needed. Since the sources can run in protected
 address-spaces their actions can be confined, and we the mini-OS running
 all this do not have to know anything about them.


This idea of 'opaque' applications in a sandbox may result in a flexible UI,
but not an especially composable or accessible one.

Consider the following desiderata:
* Accessibility - for screen-readers, search engines, language translators.
* Zoomability - we should be able to constrain client-side resource
consumption (CPU, bandwidth, memory) such that it is commensurate with user
attention (as measured in terms of screen real-estate for visuals, or volume
for audibles).
* Service mashups and customization - grease-monkey scripts and extensions
that modify an app require it to have a clear structure.
* Occasionally connected computing - links fail, power isn't always up, we
might persist an app we aren't zoomed on at the moment.
* Mobility - an app should be able to follow users from one computer to
another.
* Bookmarking, Sharing, CSCW - users should be able to share access to
specific elements of their applications.
* Optimization - the later we perform optimizations, the more we can
achieve, especially if the language is designed for it. Access to the
underlying code can allow us to achieve higher levels of specialization.



This is how apps work on personal computers, and there is no reason why
 things shouldn't work this way when the address-spaces come from other parts
 of the net.


Except, being opaque is also part of how apps consistently *fail their users* on
personal computers. I.e. we should not be striving for apps as they
exist on personal computers. Something better is needed.

I agree that NaCl for Chromium is a promising technology. Though, I wonder
if hardware virtualization might be more widely accepted.

But the promise I see for NaCl isn't just rendering flexible apps to screen;
rather, I see potential for upgrading the basic web abstractions, breaking
away from HTTP+HTML+DOM+JS+CSS for something more robust and consistent (I'm
especially interested in a better DOM and communications protocol for live
documents). I.e. it would be easy to create 'portal' sites that effectively
grant access to a new Internet. This offers a 'gradual transition' strategy
that is not accessible today.


 this [NaCl] approach will need to be adopted by most of the already
 existing multiple browsers before it can really be used in a practical way
 in the world of personal computing


I agree that this is a problem for apps that can be easily achieved in JS.
(And the domain of such applications will grow, given WebGL and WebCL and
improved quality for SVG.)

But for the applications where JS and WebGL are insufficient, mandating
Chrome as a platform/VM for an application should rarely be 'worse' than
providing your own executable and installer.



 The sad and odd thing is that so many people in the computer field were so
 lacking in systems consciousness that they couldn't see this


What's with the past tense?

Regards,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Bert Freudenberg

On 25.07.2011, at 19:13, Jakob Praher wrote:

 Dear Alan, 
 Dear List,
 
 the following very recent announcement might be of interest to this 
 discussion: 
 http://groups.google.com/group/mozilla.dev.platform/browse_thread/thread/7668a9d46a43e482
 
 To quote Andreas et al.: 
  Mozilla believes that the web can displace proprietary, single-vendor 
 stacks for application development.  To make open web technologies a better 
 basis for future applications on mobile and desktop alike, we need to keep 
 pushing the envelope of the web to include --- and in places exceed --- the 
 capabilities of the competing stacks in question. 
 Though there is not much there yet (just a kind of manifesto and a readme 
 file on github) https://github.com/andreasgal/B2G, I think this is a 
 encouragning development, as the web becomes more and more a walled garden of 
 giants, I think we desperately need to have open APIs. Strong open client 
 APIs hopefully bring more power to individuals. What do you think?
 
 Cheers,
 -- Jakob

I did ask in that thread about exposing the CPU, a la NativeClient. (It's a 
usenet group so you can post without subscribing, nice)

Short answer is that they don't see a need for it.

- Bert -
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Thiago Silva
On Monday 25 July 2011 11:03:57 Igor Stasenko wrote:
 But i think this is a general problem of software evolution. No matter
 how hard you try, you cannot foresee all kinds of interactions,
 features and use cases for your system, when you designing it from the
 beginning.
 Because 20 years ago, systems has completely different requirements,
 comparing to today's ones. So, what was good enough 20 years ago,
 today is not very good.
 And here the problem: is hard to radically change the software,
 especially core concepts, because everyone using it, get used to it ,
 because it made standard.
 So you have to maintain compatibility and invent workarounds , patches
 and fixes on top of existing things, rather than radically change the
 landscape.

Now, why is it hard to radically change the software?

Is it the failure to foresee all kinds of interactions that creates the 
problems? Maybe it is not what we are leaving behind in the design of the 
solution, but what the design assumes (whether we are aware of it or not): the 
hundreds and hundreds of little assumptions that have no relation to the 
actual solution description...

Take imperative instructions: when writing a solution in an imperative 
language, we are imposing a chronological order on the instructions even when 
that particular ordering is not a requirement of the solution.
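
To make that concrete, here is a tiny sketch (TypeScript, hypothetical names)
of the same point: the imperative version fixes an order between two steps
that have no real dependency on each other, while the second formulation only
records the dependency the solution actually requires.

// Written imperatively, the text forces an order between two steps that
// do not depend on each other -- the "a then b" sequence is an assumption
// the solution never asked for.
function imperative(x: number): number {
  const a = x * 2;   // could just as well come second
  const b = x + 10;  // could just as well come first
  return a + b;
}

// The same solution as independent definitions: the only ordering left is
// the real data dependency (both results feed the final sum).
const double = (x: number): number => x * 2;
const addTen = (x: number): number => x + 10;
const declarative = (x: number): number => double(x) + addTen(x);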

So, we are not called upon to change the software when the solution changes. We 
are called upon when something, anything, changes and breaks any of the 
assumptions carried by the software. We seem to be writing software that 
doesn't appear to be so soft...



Cheers,
Thiago

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Jakob Praher
On 07/25/2011 09:35 PM, Bert Freudenberg wrote:
 I did ask in that thread about exposing the CPU, a la NativeClient. (It's a 
 usenet group so you can post without subscribing, nice)

 Short answer is that they don't see a need for it.
I somehow have mixed feelings about NaCl. I think that safe execution of
native code is a great achievement. Yet the current implementation
somehow still feels a bit like a safer reincarnation of the ActiveX
technology. It defines a kind of abstract toolkit (like ActiveX used the
WIN32 API) that enables you to interact with the user in a definite way
(graphics, audio, events).

I think it fails to achieve a common low-level representation of data
that can be safely used to compose powerful applications. From this
point of view I think that e.g. message passing (in a pepsi/cola way)
with capability-based security is a much more interesting concept for
hiding powerful computation than having to rely on an IPC (the Pepper
interface in NaCl). Also, people should address introspectability and
debuggability right at the core - e.g. enforce debugging symbols in the
applications. I think introspectability (the right to "View Source") is
one of the biggest improvements of Javascript compared to e.g. Java.
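
As a rough illustration of the capability idea mentioned above (a sketch with
made-up names, not how NaCl or Pepper actually work): authority travels as an
ordinary reference passed in a message or argument, so a component can only do
what it has explicitly been handed. A minimal TypeScript version:

// Hypothetical host interface: handing this whole object to an untrusted
// component would be ambient authority.
interface Host {
  readFile(path: string): string;
  deleteFile(path: string): void;
}

// A capability is just an ordinary reference that carries exactly one power.
type ReadCap = (path: string) => string;

function makeReadCap(host: Host, allowedDir: string): ReadCap {
  return (path) => {
    if (!path.startsWith(allowedDir + "/")) {
      throw new Error("capability does not cover " + path);
    }
    return host.readFile(path);
  };
}

// The component receives the capability by message/argument passing. It
// never sees `host`, so deleting files is simply not expressible here.
function untrustedComponent(read: ReadCap): string {
  return read("/sandbox/config.txt");
}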

Cheers,
Jakob

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread BGB

On 7/25/2011 12:59 PM, David Barbour wrote:
On Mon, Jul 25, 2011 at 9:25 AM, Igor Stasenko siguc...@gmail.com wrote:


how different our systems would be, if guys who started it 20
years back would think a bit about future?


The guys who spend their time thinking about it lose, just as they 
always do. Worse is better wins on the market. Brendan Eich was right 
to fear something even worse than his rapidly hacked brainstorm child 
- i.e. if it were not JavaScript/EcmaScript, we might be using 
proprietary VBScript from Microsoft.




or, that happens...

then later ends up disabled by default as MS can't manage to prevent 
computer from being pwnt (owned) by viruses.


later on someone else recreates web-scripting in a more secure form 
using a 3rd-party browser plugin which gives a semi-programmatic 
interface based on XSLT and regexes.


...


Do you remember those battles between behemoths trying to place 
proprietary technologies in our browsers? I do. 'Embrace and extend' 
was a strategy discussed and understood even in grade school. I'm a 
bit curious whether Google will be facing an EOLAS patent suit for 
NaCl, or whether that privilege will go to whomever uses NaCl and 
WebSockets to connect browsers together.




although NaCl is an interesting idea, the fact that it was not 
originally designed to be binary-compatible between targets is a drawback.


it is sad though that there is no really good compromise between 
higher level VMs (Flash, .NET, JVM, ...) and sandboxed native code.



and, no one was like: "hell, why don't we just write a VM that allows us 
to run sandboxed C and C++ apps in a browser" (probably with some added 
metadata and a validation system).


an example would be, say, if the VM didn't allow accessing external 
memory via forged pointers, ...
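
One classic way to get exactly that property is address masking, in the spirit
of software fault isolation (a sketch of the general trick, not NaCl's actual
mechanism): if the sandbox region is a power of two in size, every load and
store can be masked so that even a forged pointer stays inside it. A toy
TypeScript version:

// A toy 64 KiB sandbox "heap". Because the size is a power of two, any
// address -- including a forged one -- can be masked into range, so no
// load or store can ever reach memory outside the region.
const HEAP_SIZE = 64 * 1024;   // must be a power of two
const MASK = HEAP_SIZE - 1;
const heap = new Uint8Array(HEAP_SIZE);

function load(addr: number): number {
  return heap[addr & MASK];    // a forged address wraps; it never escapes
}

function store(addr: number, value: number): void {
  heap[addr & MASK] = value & 0xff;
}

// Even a wildly out-of-range "pointer" stays confined to the sandbox:
store(0xdeadbeef, 42);
console.log(load(0xdeadbeef)); // 42, read back from inside the heap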



It is interesting to see JS evolve in non-backwards-compatible ways to 
help eliminate some of the poisons of its original design - 
eliminating the global namespace, dropping callee/caller/arguments, 
development of a true module system that prevents name shadowing and 
allows effective caching, and so on. Mark Miller, who has performed 
significant work on object capability security, has also started to 
shape JavaScript to make it into a moderately sane programming 
language... something that could be used as a more effective 
compilation target for other languages.




fair enough.

too bad there is no standardized bytecode or anything though, but then I 
guess it would at this point be more like browser-integrated Flash or 
something, as well as be potentially more subject to awkward versioning 
issues, or the bytecode ends up being lame/inflexible/awkward/...


or such...


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Growth, Popularity and Languages - was Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Julian Leviston

On 26/07/2011, at 1:33 AM, Igor Stasenko wrote:

 On 25 July 2011 16:16, Julian Leviston jul...@leviston.net wrote:
 
 On 26/07/2011, at 12:03 AM, Igor Stasenko wrote:
 
 Interestingly that many today's trendy and popular things (which we
 know today as web) were invented as a temporary solution without any
 systematic approach.
 I think that i will be right by saying that most of these technologies
 (like PHP, Javascript, Ruby, Sendmail etc) is a result of random
 choice instead making planning and a deep study of problem field
 before doing anything.
 And that's why no surprise, they are failing to grow.
 And now, people trying to fill the gaps in those technologies with
 security, scalability and so on.. Because they are now well
 established standards.. while originally was not meant to be used in
 such form from the beginning.
 
 
 Wow... really? PHP, JavaScript, Ruby and Sendmail are the result of random
 choice?
 
 Random. Because how something , which was done to satisfy minute needs
 (i wanna script some interactions, so lets do it quick),
 could grow into something mature without solid foundation?
 If conceptual flaw is there from the very beginning, how you can fix it?
 
 Here what author says about JS:
 
 JS had to “look like Java” only less so, be Java’s dumb kid brother or
 boy-hostage sidekick. Plus, I had to be done in ten days or something
 worse than JS would have happened
 Brendan Eich
 

Except that JavaScript is one of the only common, popular prototype-based 
object-oriented languages, which it turns out is an amazingly flexible system. 
I don't think this is random. Maybe 'rushed' is what you mean here.

Apart from this, Ruby was DEFINITELY not random, or rushed. It's a delicate 
balance between form and pragmatic functionality. I'll grant you that the 
internals of the standard interpreter leave a lot to be desired, but I think 
this is perhaps less to do with randomness and more to do with the fact that 
perhaps Matz was entirely out of his depth when it came to best of breed for 
internal language structuring.

I think to say that these languages served as a temporary solution is not 
really very fair on most of them. PHP was basically designed to be an easy way 
to build dynamic web pages, and popularity drove it to where it is today.

I guess where you're coming from is you're attempting to say that none of these 
languages are being used for what they were originally designed for... possibly 
(I'd put my weight on saying hopefully) with the exception of Ruby, because 
Ruby was designed to be beautiful to code in, and to make programmers happy. 
Ruby is a general purpose language. I really don't know why you include 
Sendmail in this bunch.

I think you're kind of missing the point of the web not being structured 
properly, though... I think Alan's point is more the case that the fact that we 
had to use server side languages, as well as languages such as VBScript and 
JavaScript which the interpreter executes, is an illustration of the fact that 
STRUCTURALLY, the web is fairly broken. It has nothing to do with language 
choice (server- or client-side), really, but rather the fact that there is no 
set of conventions and readily usable standard for programming across the web 
in such a way that code is run in a protected way on machines where code needs 
to run.

I think as computer programmers, we get quite hung up on the specifics of 
languages and other potentially somewhat irrelevant details when perhaps 
they're not the most apposite concerns to be interested in.


 Apparently a missing systematical approach then strikes back, once it
 deployed, became popular and used by millions..
 
 
 Javascript, PHP, Ruby and Sendmail failing to grow? Seriously? What do you
 mean by grow? It can't surely be popularity...
 
 Grow not in popularity of course.
 Grow in serving our needs.

Perhaps you miss the point of why things become large and popular...? :) 
They're driven by people. And not just some people - by *most* people. 
Everybody wants to share photos and search for things on the web. Everyone 
wants their content, and purchases, and the things they want.

These people do not care about structural perfection in any way. They care 
about doing the things they want to do and naught else.

Look at Apple if you want to understand a group of people who get this (or 
maybe only Steve gets this, I don't really know, but I do know someone at Apple 
fully understand this, and possibly Apple *didn't* understand this when Steve 
wasn't there). The only way you can drive the future is if you get everyone to 
come along with you.

The only way you can get everyone to come along with you is to play to their 
understanding level. You make the general lives of everyone on the planet 
easier, and you will become popular.

Say, for example, like making a telephone that is vastly more easy to use than 
all other telephones on the planet. Now, for tech geeks, it's not really *that* 
much easier to 

Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread David Barbour
On Mon, Jul 25, 2011 at 3:20 PM, BGB cr88...@gmail.com wrote:

 too bad there is no standardized bytecode or anything though, but then I
 guess it would at this point be more like browser-integrated Flash or
 something, as well as be potentially more subject to awkward versioning
 issues, or the bytecode ends up being lame/inflexible/awkward/...


Bytecode is a *bad* idea - all they ever do is reduce our ability to reason
about, secure, and optimize code. Bytecodes have not achieved proposed
cross-language benefits - i.e. they tend to be very language specific
anyway, so you might as well compile to an intermediate application
language.

If you want compiled JavaScript, try Google's Closure compiler (JavaScript
- JavaScript).

But I do agree that JavaScript is not an ideal target for compilation!
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Intention Implementation - was Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Julian Leviston

On 26/07/2011, at 1:43 AM, Igor Stasenko wrote:

 (quotes are broken)
 
 On 25 July 2011 16:26, Julian Leviston jul...@leviston.net wrote:
 
 On 26/07/2011, at 12:03 AM, Igor Stasenko wrote:
 
 In contrast, as you mentioned, TCP/IP protocol which is backbone of
 today's internet having much better design.
 But i think this is a general problem of software evolution. No matter
 how hard you try, you cannot foresee all kinds of interactions,
 features and use cases for your system, when you designing it from the
 beginning.
 Because 20 years ago, systems has completely different requirements,
 comparing to today's ones. So, what was good enough 20 years ago,
 today is not very good.
 
 That makes no sense to me at all. How were the requirements radically
 different?
 I still use my computer to play games, communicate with friends and family,
 solve problems, author text, make music and write programs. That's what I
 did with my computer twenty years ago. My requirements are the same. Of
 course, the sophistication and capacity of the programs has grown
 considerably... so has the hardware... but the actual requirements haven't
 changed much at all.
 
 
 If capacity of programs has grown, then there was a reason for it
 (read requirements)?
 Because if you stating that you having same requirements as 20 years
 ago, then why you don't using those old systems,
 but instead using today's ones?
 

Well, Igor, if something more efficient comes along, I will use it, and it will 
*probably* work just fine on 20 year old hardware... because *my* requirements 
haven't changed much. I will grant you that it's probably going to be quite 
hard to get a Commodore 64 connected to a router, because it's not very 
compatible, but what I'm trying to say here is that most of the requirements 
you're talking about are actually self-imposed by our computing system. Having 
something that can do 2.5 million instructions per second is ludicrous if all I 
want to do is type my document, isn't it? Surely any machine should be able to 
handle typing a document. ;-) (Note here, I'm obviously ignoring the fact that 
nowadays, we have unicode).

What I'm getting at is *MY* requirements haven't changed much. I still want to 
send a communication to my mother every now and then, and I still want to play 
games. In fact, some of my favourite games, I actually use emulators to play... 
emulators that run 20 year old hardware emulation so I can play the games which 
will not run on today's machines ;-)

One of my favourite games is Tetris Attack, which me and my friend play on his 
XBOX (original, not 360) in a Super Nintendo Emulator...

Do you find that amusing? I sure as hell do. :)

But I digress - my intentions are relatively similar to what they were 20 years 
ago... I like to write programs, and I like to use programs to draw, and I like 
to listen to music, solve problems, create texts, make music... etc. The 
IMPLEMENTATIONS of how I went about this are vastly different, and so if you 
like you can bend requirements to a systems-view of requirements... and then 
I will agree with you... my requirements that I have today of my computer in 
terms of TECHNICAL requirements are vastly different, but in terms of 
interpersonal requirements, they're not at all different - maybe slightly...

Making music satisfies a creative impulse in me, and I can make it using my 
$10,000 computer system that I have today, or I can satisfy it using a 
synthesizer from the 80's. One of them does a vastly better job for me, but 
this is a qualitative issue, not a requirements issue ;-)

 Speaking of requirements,  a tooday's browser (Firefox) running on my
 machine takes more than 500Mb of system memory.
 I have no idea, why it consuming that much.. the fact is that you
 cannot run it on any 20-years old personal computer.
 
 

Well this is the point of the STEPS project and the like - get rid of the 
cruft, and we will have an optimized system that will run like lightning on our 
current day processors with all their amazing amount of memory. 

 And here the problem: is hard to radically change the software,
 especially core concepts, because everyone using it, get used to it ,
 because it made standard.
 So you have to maintain compatibility and invent workarounds , patches
 and fixes on top of existing things, rather than radically change the
 landscape.
 
 I disagree with this entirely. Apple manage to change software radically...
 by tying it with hardware upgrades (speed/capacity in hardware) and other
 things people want (new features, ease of use). Connect something people
 want  with shifts in software architecture, or make the shift painless and
 give some kind of advantage and people will upgrade, so long as the upgrade
 doesn't somehow detract from the original, that is. Of course, if you don't
 align something people want with software, people won't generally upgrade.
 
 
 Apple can do whatever they want with their own proprietary hardware
 and 

Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Alan Kay
I agree there are better ways to do things than NaCl, but Yoshiki was able to 
get Squeak running in it, and that was a milestone benchmark that points the 
way 
for better systems than Squeak.

Cheers,

Alan





From: Jakob Praher j...@hapra.at
To: fonc@vpri.org
Sent: Mon, July 25, 2011 1:59:03 PM
Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam

On 07/25/2011 09:35 PM, Bert Freudenberg wrote:
 I did ask in that thread about exposing the CPU, a la NativeClient. (It's a 
usenet group so you can post without subscribing, nice)

 Short answer is that they don't see a need for it.
I somehow have mixed feelings about NaCL. I think that safe execution of
native code is a great achievement. Yet the current implementation
somehow still feels a bit like safer reincarnation of the ActiveX
technology. It defines a kind of abstract toolkit (like ActiveX used
WIN32 API) that enables you to interact with the user in a definite way
(graphics, audio, events).

I think it fails to achieve a common low level representation of data
that can be safely used to compose powerful applications. From this
point of view I think that e.g. message passing (in a pepsi/cola way)
with capabilities based security is much more interesting concept to
hide powerful computation than having to rely on a IPC (the pepper
interface in NaCl).  Also people should address introspectabilty and
debuggability right at the core - e.g. enforce symbols for debugging
into the applications. I think introspecabilty (the right to View
Source) is one of the biggest improvements of Javascript compared to
e.g. Java.

Cheers,
Jakob

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: Growth, Popularity and Languages - was Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Julian Leviston

On 26/07/2011, at 12:20 PM, Igor Stasenko wrote:

 You lost me here. My attitude to Ruby is same as to Perl: lets take
 bit from here, bit from there, mix well everything and voila! , we
 having new programming language.
 It may be good for cooking recipe, but definitely not very good for
 programming language.
 I find it strange that many today's mainstream languages evolution is
 driven by taking same approach: mix  blend things together, rather
 than focusing on completeness, conciseness and clarity.

I don't think you understand Ruby very well. Perl and Ruby are quite different.
Sure, Ruby borrowed some stuff from Perl (such as built-in regexps, etc.) but at 
its heart, it's pure objects, and it behaves very much how you'd expect it to. 
It's also incredibly compact and beautiful looking, easy to read, and nice. I 
fell in love with it similarly to how I fell in love with Smalltalk...

Julian.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: Growth, Popularity and Languages - was Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Julian Leviston

On 26/07/2011, at 12:20 PM, Igor Stasenko wrote:

 But for programming its a bit different: you giving to people a tool
 which they will use to craft their own products. And depending on how
 good/bad this tool are, the end product's quality will vary.
 
 And also, it would be too good to be true: if people would have to
 choose between java and smalltalk based on easy to use criteria, i
 doub't they would choose java.
 Marketing takes its toll, the worse one. :)

But they *did* choose Java over smalltalk precisely because it's easier to use.

You make the mistake of assuming easier to use between experts, but that's not 
how people adopt languages.

One of the reasons Rails became an overnight success for the Ruby and web 
development communities is because of a 15 minute blog screencast... and a 
bunch of simple evangelizing the creator of Rails did... he basically showed 
how easy it was to create a Blog in 15 minutes with comments... Mind you it 
wasn't a particularly beautiful Blog, but it functioned, nonetheless, and the 
kicker is...

... it was about twice as easy, twice as fast, and twice as nice to code than 
in any other comparative programming environment at the time.

People adopted Java because it was readily available to learn, and easy to 
grok in comparison with what they knew, and because it had spunk in the 
same way that Rails did - it had an attitude, and was perceived as a funky 
thing. This has to do with marketing and the way our society works. Smalltalk 
is incredibly simple, incredibly powerful, but also INCREDIBLY unapproachable 
for most people not welcoming to abstract thought.

Contrast that with how it took me weeks to understand Smalltalk when I first 
saw it - even vaguely understand it, I mean - but it only took me days to 
understand Java, given that I'd programmed in Basic and C before.

This has to do with the sub-cultural context more than anything. 

Julian.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: Growth, Popularity and Languages - was Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Julian Leviston

On 26/07/2011, at 12:20 PM, Igor Stasenko wrote:

 
 Say, for example, like making a telephone that is vastly more easy to use 
 than all other telephones on the planet. Now, for tech geeks, it's not 
 really *that* much easier to use... For example, when the iPhone came out, I 
 got one, and the only really useful and different thing in terms of 
 technical specification and features that I could do that I previously 
 couldn't do easily was synchronise my contacts... but everything was quite a 
 bit EASIER to do. In the process, Apple are pushing next gen technologies 
 (next gen for the public is not necessarily next gen for us, mind :)). Mind 
 you, it comes wrapped around their bank account, but it's still coming.
 
 Look at Twitter for an example of what people like... this is a ridiculously 
 clear example... it simply allows people to write small messages to whoever 
 is listening. Brilliantly simple, brilliantly clear. Most people want to do 
 this, and so it is popular.  The thing with twitter is, though, they're not 
 using this popularity at all. They don't really know what to do with it.
 
 Now, what we want to do is make something compelling enough such that it 
 goes off like a rocket. Smalltalk was designed pretty amazingly well, and 
 it had an amazingly large amount of influence, but if you ask most 
 programmers what smalltalk is, they usually haven't heard of it... contrast 
 this to asking people about Java, and they know what that is. :) You even 
 ask them what Object Oriented programming is, and they know that, but you 
 say Heard of Alan Kay? and they give you a blank look. Ask them about 
 Steve Jobs and everyone knows all about him. Hell, what other company has 
 fanboys keeping track of their ADS? 
 (http://www.macrumors.com/2011/07/24/new-apple-ipad-ad-well-always/ )
 
 What I'm trying to get at here, is that I see no reason why something free 
 can't be popular (facebook? twitter?), but for that to take place, it has to 
 provide something that you simply can't get elsewhere. The advantage the web 
 has had is that it has moved quite quickly and continues to move at whatever 
 pace we like to go at. Nothing else has come along that has outpaced or out 
 innovated it FROM THE POINT OF VIEW OF THE AVERAGE PUNTER. So what is needed 
 is something along the lines of Frank, which when people see what is 
 possible (BY USING IT ONLY, I'd wager), they'll stop using everything else 
 because they simply can't go back to the old way because it feels like the 
 past too much. :)
 
 Make something better than all the user or developer experiences out there, 
 and developers like me will evangelise the shit out of it... and other users 
 who care about things will jump on the bandwagon, curators of experience 
 will jump on board, and overnight, a Windows 95 like experience will happen 
 (in terms of market share effect), or perhaps an iPod effect will happen. 
 Remember, it has to be just better than what is possible now, so if you 
 make something infinitely better but just show off how it's just better, 
 and also make it easy to migrate to and easier to use, then you will have 
 already won as the new way of doing things before you've started.
 
 Even Apple, our current purveyors of fine user experience and curators of 
 style and design, haven't managed to build a device or user experience in 
 software that allows primarily convention, ease of use and unclutteredness, 
 and yet then the total ability to configure things for people who want 
 things to do exactly what they want them to do (ie coders, programmers, and 
 advanced users). They hit the 80/20 rule quite well in terms of giving 80 
 percent of people everything they need, while leaving 20% of people sort of 
 out in the cold.
 
 
 I don't think its a good to drive an analogy between end product and tool(s).
 The main difference between them lies in the fact that tools are made
 for professionals, while end products are made for everyone.
 You don't have to graduate college to know how to use microwave, you
 just need to read a short instruction.
 Professionals who basing their choice on popularity are bad
 professionals, the good ones basing their choice on quality of tools.
 Because everyone knows that popularity has a temporary effect.
 Something which is popular today, will be forgotten tomorrow.

That's just silly. Products vs Tools? A toaster is a device that I can use to 
toast bread. A coffee machine is a device I can use to make coffee. 
Professional people who create coffee or create toasted sandwiches for a living 
use different ones, but they're still coffee machines and toasters, and mostly 
they're just based around higher volume, and higher quality in terms of 
controls.

Popularity doesn't always have a temporary effect. Consider the iPod. It's not 
forgotten is it? It's been popular for decades.

Consider the personal computer! The laptop - this is a very popular device. 
It's been more than 2 decades 

Re: Intention Implementation - was Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread David Barbour
On Mon, Jul 25, 2011 at 4:40 PM, Julian Leviston jul...@leviston.net wrote:

 I guess my question is... what's stopping an alternative, replacement,
 backwardly-compatible protocol from taking over where http and https
 leave off?


HTTP and HTTPS are not very good protocols if your goals relate to
low-latency, security, and composition.



And what would that protocol do?


Here's what the protocol I'm working on would do:

* chord/pastry/tapestry distributed replacement for DNS  (free us from
ICANN; easier configuration)
* identifiers for hosts = secure hash of RSA public key (easy validation,
orthogonal to trust; see the sketch after this list)
* logical connections (easier composition, independent disruption, potential
'restore' and use cache)
* logical objects - flat, usually opaque object identifier (favor object
capability security idioms)
* extensible protocol (just add objects); supports new overlays and network
abstractions.
* efficient orchestration; forward responses multiple steps without
centralized routing
* wait-free idioms, i.e. 'install' a new object then start using it - new
object references created locally
* reactive behaviors: focus on models involving continuous queries or
control.
* batching semantics - send multiple updates then 'apply' all at once.
* temporal semantics - send updates that apply in future.



One of the issues is surely the way our router-system structure is in
 place... if there was going to be a replacement for the web, it would *have*
 to end up being properly web based (down to the packet level)


I think if we replaced the protocols for the web, we'd still call it 'the
web', and it would therefore still be 'web-based'. ;-)

But we don't need to follow the same protocols we currently do.



 I simply hate the fact that if three people in my house request the front
 page of the financial times, our computers all have to go get it separately.
 Why don't the other two get it off the first one, or at the very least, off
 the router?


You're rather stingy with bandwidth. Maybe you should try a Squid
server. ;-)

Support for ad-hoc content distribution networks is designed into my
reactive demand programming model.



 We don't even have languages of intention - just languages of
 implementation. We're left to abstract out the intention from reading the
 implementation.


Have you ever used an executable specification language (such as Maude or
Coq)?

Regards,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: Growth, Popularity and Languages - was Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-25 Thread Alan Kay
The argument about mass popularity is good if all you want to do is triumph 
in 
the consumer products business (c.f. many previous raps I've done about the 
anthropological human universals and how and why technological amplifiers for 
them have been and will be very popular).

This is because marketeers are generally interested in what people *want* and 
desire to supply those wants and to get those wants to intersect with products.

Educators are more interested in what people *need*, and many of these *needs* 
in my opinion are coextensive with human non-universals -- inventions (not 
genetic built-ins) such as reading and writing, modern scientific thinking and 
mathematics, deep historical perspectives, the law, equal rights, and many 
other 
rather difficult to learn and difficult to invent ideas.

One of the most important points here is that becoming fluently skilled in a 
hard to learn area produces an odd -- but I think better -- kind of human ... 
one who has not just the inborn drives -- for example, revenge and vendetta are 
human universals -- but also has an overlay of other kinds of thinking that can 
in many cases moderate and sometimes head off impulses that might have been 
workable 200,000 years ago but are not good actions now.

As far as can be ascertained, humans had been on the planet for almost 200,000 
years before any of these were invented, and modern science was created only 
about 400 years ago. We are still trying to invent and teach and learn human 
rights. These are not only not obvious to our genetic brains, they are 
virtually 
invisible!

A mass market place will have to be above high thresholds in knowledge before 
it 
can make good choices about these.

Societies have always had to decide how to educate children into adults (though 
most have not been self-conscious about this).

If ways could be found to make the learning of the really important stuff 
popular and wanted, then things are easier and simpler. 


But the dilemma is: what happens if this is the route and the children and 
adults reject it for the much more alluring human universals? Even if almost 
none of them lead to a stable, thriving, growth inducing and prosperous 
civilization?

These are the issues I care about.

If we look in the small at computing, and open it to a popular culture, we will 
get a few good things (as we do in pop music), but most of what is rich in most 
invented and developed areas will be not even seen, will not be learned, and a 
few things will be re-invented in much worse forms (reinventing the flat 
tire).

This is partly because knowledge is generally more powerful than cleverness, 
and 
point of view is more powerful than either.

I think education at the highest possible levels has always been the main 
issues 
for human beings, especially after the difficult to learn powerful inventions 
started to appear.

For example, what was most important about writing was not that it could take 
down oral discourse, but that it led to new ways of thinking, arguing and 
discourse, and was one of the main underpinings of many other inventions. 
Similarly, what is important about computing is not that it can take down old 
media, useful as that is,  or provide other conveniences through simple 
scripting, but that it constitutes a new and much more powerful way to think 
about, embody, argue and invent powerful ideas that can help us gain 
perspective 
on the dilemmas created by being humans who are amplified by technologies. If 
the legacy of the last several centuries is to automate the Pleistocene via 
catering to and supplying great power to human universals, then monumental 
disaster is not far off. As H.G. Wells pointed out, "We are in a race between 
Education and Catastrophe." It is hard to see that real education is ahead at 
this point.

One of the great dilemmas of equal rights and other equalities is how to deal 
with the "Tyranny of the Commons". The "American Plan" was to raise the commons 
to be able to participate in the same levels of conversations as the best 
thinkers. I think this is far from the situation at the current time.

Much of this is quite invisible to any culture that is trying to get by and 
lacks systems and historical consciousness.

The trivial take on computing today by both the consumers and most of the 
professionals would just be another pop music to wince at most of the time, 
if it weren't so important for how future thinking should be done.

Best wishes,

Alan
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-24 Thread Marcel Weiher
Hi Alan,

as usual, it was inspiring talking to your colleagues and hearing you speak at 
Potsdam.  I think I finally got the Model-T image, which resonated with my 
fondness for Objective-C:  a language that a 17 year old with no experience 
with compilers or runtimes can implement and that manages to boil down dynamic 
OO/messaging to a single special function can't be all bad :-) 

There was one question I had on the scaling issue that would not have fitted in 
the Q&A: while praising the design of the Internet, you spoke less well of 
the World Wide Web, which surprised me a bit.   Can you elaborate?

Thanks,

Marcel



On Jul 22, 2011, at 6:29 , Alan Kay wrote:

 To All,
 
 This wound up being a talk to several hundred students, so most of the 
 content is about ways to think about things, with just a little about 
 scaling and STEPS at the end.
 
 Cheers,
 
 Alan
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-24 Thread Alan Kay
Hi Marcel

I think I've already said a bit about the Web on this list -- mostly about the 
complete misunderstanding of the situation the web and browser designers had. 


All the systems principles needed for a good design were already extant, but I 
don't think they were known to the designers, even though many of them were 
embedded in the actual computers and operating systems they used.

The simplest way to see what I'm talking about is to notice the many-many 
things 
that could be done on a personal computer/workstation that couldn't be done in 
the web  browser running on the very same personal computer/workstation. There 
was never any good reason for these differences.

Another way to look at this is from the point of view of separation of 
concerns. A big question in any system is how much does 'Part A' have to know 
about 'Part B' (and vice versa) in order to make things happen? The web and 
browser designs fail on this really badly, and have forced set after set of 
weak 
conventions into larger and larger, but still weak browsers and, worse, onto 
zillions of web pages on the net. 


Basically, one of the main parts of good systems design is to try to find ways 
to finesse safe actions without having to know much. So -- for example -- 
Squeak 
runs everywhere because it can carry all of its own resources with it, and the 
OS processes/address-spaces allow it to run safely, but do not have to know 
anything about Squeak to run it. Similarly Squeak does not have to know much to 
run on every machine - just how to get events, a display buffer, and to map its 
file conventions onto the local ones. On a bare machine, Squeak *is* the OS, 
etc. So much for old ideas from the 70s!
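
A sketch of how small that contract can be (TypeScript, invented names; in the
spirit of the description above, not Squeak's actual VM interface): the image
brings everything else with it and talks to the machine only through this
narrow port.

// Roughly the whole surface a self-contained image needs from the machine it
// lands on: events in, a display buffer out, and a way to map its own file
// conventions onto the local ones.
interface HostPort {
  nextEvent(): { kind: "key" | "mouse" | "tick"; data: number } | null;
  displayBuffer(width: number, height: number): Uint8ClampedArray; // RGBA pixels
  mapPath(imagePath: string): string; // e.g. "/image/sources" -> a local path
}

// Because the image only ever goes through this port, the machine does not
// need to know anything about the image in order to run it.
function runImage(host: HostPort): void {
  const screen = host.displayBuffer(640, 480);
  for (let step = 0; step < 10; step++) {   // stand-in for the real interpreter loop
    const e = host.nextEvent();
    if (e) screen[0] = e.data & 0xff;       // react to input in some visible way
  }
}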

The main idea here is that a windowing 2.5 D UI can compose views from many 
sources into a page. The sources can be opaque because they can even do their 
own rendering if needed. Since the sources can run in protected address-spaces 
their actions can be confined, and we -- the mini-OS running all this -- do not 
have to know anything about them. This is how apps work on personal computers, and 
there is no reason why things shouldn't work this way when the address-spaces 
come from other parts of the net. There would then be no difference between 
local and global apps.
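
A rough sketch of the shape of that separation (TypeScript, invented names,
ignoring the actual confinement machinery): the compositor asks each opaque
source for a rectangle of pixels and places it on the page, and that is all it
ever needs to know about them.

// What the "mini-OS" needs from each source: a rectangle of pixels.
// How those pixels were produced is the source's own business.
interface ViewSource {
  render(width: number, height: number): Uint8ClampedArray; // RGBA, w*h*4 bytes
}

interface Placement {
  source: ViewSource;
  x: number; y: number; w: number; h: number;
}

// Compose a page out of opaque sources. In a real system each render call
// would run in its own protected address-space; the compositor's knowledge
// of the parts stays this small either way.
function composePage(pageW: number, pageH: number, views: Placement[]): Uint8ClampedArray {
  const page = new Uint8ClampedArray(pageW * pageH * 4);
  for (const { source, x, y, w, h } of views) {
    if (x < 0 || y < 0 || x + w > pageW || y + h > pageH) continue; // keep views on the page
    const pixels = source.render(w, h);
    for (let row = 0; row < h; row++) {
      for (let col = 0; col < w; col++) {
        const src = (row * w + col) * 4;
        const dst = ((y + row) * pageW + (x + col)) * 4;
        page.set(pixels.subarray(src, src + 4), dst);
      }
    }
  }
  return page;
}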

Since parts of the address spaces can be externalized, indexing as rich as (and 
richer than) what we have now can still be done.

And so forth.

The Native Client part of Chrome finally allows what should have been done in 
the first place (we are now about 20+ years after the first web proposals by 
Berners-Lee).  However, this approach will need to be adopted by most of the 
already existing multiple browsers before it can really be used in a practical 
way in the world of personal computing -- and there are signs that there is not 
a lot of agreement or understanding why this would be a good thing. 


The sad and odd thing is that so many people in the computer field were so 
lacking in systems consciousness that they couldn't see this, and failed to 
complain mightily as the web was being set up and a really painful genii was 
being let out of the bottle.

As Kurt Vonnegut used to say, "And so it goes."

Cheers,

Alan




From: Marcel Weiher marcel.wei...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Cc: Alan Kay alan.n...@yahoo.com
Sent: Sun, July 24, 2011 5:39:26 AM
Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam


Hi Alan,

as usual, it was inspiring talking to your colleagues and hearing you speak at 
Potsdam.  I think I finally got the Model-T image, which resonated with my 
fondness for Objective-C:  a language that a 17 year old with no experience 
with 
compilers or runtimes can implement and that manages to boil down dynamic 
OO/messaging to a single special function can't be all bad :-) 

There was one question I had on the scaling issue that would not have fitted in 
the QA:   while praising the design of the Internet, you spoke less well of 
the 
World Wide Web, which surprised me a bit.   Can you elaborate?

Thanks,

Marcel



On Jul 22, 2011, at 6:29 , Alan Kay wrote:

To All,

This wound up being a talk to several hundred students, so most of the content 
is about ways to think about things, with just a little about scaling and 
STEPS at the end.

Cheers,

Alan
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-24 Thread Thiago Silva
Hello Dr. Alan,

Since access to the fonc list archives is restricted to members, would you allow me to 
publish your email below elsewhere for public access? It is the richest and most 
informative critique I've found about the web (plus the non-authoring nature 
of the browser you've mentioned before).

Cheers,
Thiago

On Sunday 24 July 2011 14:24:20 Alan Kay wrote:
 Hi Marcel
 
 I think I've already said a bit about the Web on this list -- mostly about
 the complete misunderstanding of the situation the web and browser
 designers had.
 
 
 All the systems principles needed for a good design were already extant,
 but I don't think they were known to the designers, even though many of
 them were embedded in the actual computers and operating systems they
 used.
 
 The simplest way to see what I'm talking about is to notice the many-many
 things that could be done on a personal computer/workstation that couldn't
 be done in the web  browser running on the very same personal
 computer/workstation. There was never any good reason for these
 differences.
 
 Another way to look at this is from the point of view of separation of
 concerns. A big question in any system is how much does 'Part A' have to
 know about 'Part B' (and vice versa) in order to make things happen? The
 web and browser designs fail on this really badly, and have forced set
 after set of weak conventions into larger and larger, but still weak
 browsers and, worse, onto zillions of web pages on the net.
 
 
 Basically, one of the main parts of good systems design is to try to find
 ways to finesse safe actions without having to know much. So -- for
 example -- Squeak runs everywhere because it can carry all of its own
 resources with it, and the OS processes/address-spaces allow it to run
 safely, but do not have to know anything about Squeak to run it. Similarly
 Squeak does not have to know much to run on every machine - just how to
 get events, a display buffer, and to map its file conventions onto the
 local ones. On a bare machine, Squeak *is* the OS, etc. So much for old
 ideas from the 70s!
 
 The main idea here is that a windowing 2.5 D UI can compose views from many
 sources into a page. The sources can be opaque because they can even do
 their own rendering if needed. Since the sources can run in protected
 address-spaces their actions can be confined, and we (the mini-OS running
 all this) do not have to know anything about them. This is how apps work on
 personal computers, and there is no reason why things shouldn't work this
 way when the address-spaces come from other parts of the net. There would
 then be no difference between local and global apps.
 
 Since parts of the address spaces can be externalized, indexing as rich as
 (and richer than) what we have now can still be done.
 
 And so forth.
 
 The Native Client part of Chrome finally allows what should have been done
 in the first place (we are now about 20+ years after the first web
 proposals by Berners-Lee).  However, this approach will need to be adopted
 by most of the already existing multiple browsers before it can really be
 used in a practical way in the world of personal computing -- and there
 are signs that there is not a lot of agreement or understanding why this
 would be a good thing.
 
 
 The sad and odd thing is that so many people in the computer field were so
 lacking in systems consciousness that they couldn't see this, and failed
 to complain mightily as the web was being set up and a really painful
 genie was being let out of the bottle.
 
 As Kurt Vonnegut used to say, "And so it goes."
 
 Cheers,
 
 Alan
 
 
 
 
 From: Marcel Weiher marcel.wei...@gmail.com
 To: Fundamentals of New Computing fonc@vpri.org
 Cc: Alan Kay alan.n...@yahoo.com
 Sent: Sun, July 24, 2011 5:39:26 AM
 Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam
 
 
 Hi Alan,
 
 as usual, it was inspiring talking to your colleagues and hearing you speak
 at Potsdam.  I think I finally got the Model-T image, which resonated with
 my fondness for Objective-C: a language that a 17-year-old with no
 experience with compilers or runtimes can implement and that manages to
 boil down dynamic OO/messaging to a single special function can't be all
 bad :-)
 
 There was one question I had on the scaling issue that would not have
 fitted in the Q&A: while praising the design of the Internet, you spoke
 less well of the World Wide Web, which surprised me a bit.   Can you
 elaborate?
 
 Thanks,
 
 Marcel
 
 
 
 On Jul 22, 2011, at 6:29 , Alan Kay wrote:
 
 To All,
 
 This wound up being a talk to several hundred students, so most of the
 content is about ways to think about things, with just a little about
 scaling and STEPS at the end.
 
 Cheers,
 
 Alan

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-22 Thread Matthias Berth
Alan Kay: Programming and Scaling

http://tele-task.de/archive/lecture/overview/5819/



On Fri, Jul 22, 2011 at 14:57, DeNigris Sean s...@clipperadams.com wrote:
 Bert, that link was to a 2-minute clip of Alan receiving an award.

 Sean

 On Jul 22, 2011, at 8:03 AM, Bert Freudenberg wrote:

 Recording:

       http://tele-task.de/archive/lecture/overview/5820/

 - Bert -

 On 22.07.2011, at 06:29, Alan Kay wrote:

 To All,

 This wound up being a talk to several hundred students, so most of the 
 content is about ways to think about things, with just a little about 
 scaling and STEPS at the end.

 Cheers,

 Alan

 From: John Zabroski johnzabro...@gmail.com
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Thu, July 21, 2011 5:47:15 PM
 Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam

 Ian,

 When will the recording be online?

 Please let us know!

 Thanks,
 Z-Bo

 On Fri, Jul 8, 2011 at 7:32 PM, Ian Piumarta i...@vpri.org wrote:
 Title: Next steps for qualitatively improving programming

 Venue: Lecture Hall 1, Hasso-Plattner-Institut Potsdam, Germany

 Date and time: July 21 (Thu) 2011, 16:00-17:00

 Additional information:

 http://www.vpri.org/html/people/founders.htm
 http://www.hpi.uni-potsdam.de/hpi/anfahrt?L=1
 http://www.hpi.uni-potsdam.de/news/beitrag/computerpionier-alan-kay-wird-hpi-fellow.html

 This talk will be recorded and made available online.



 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Alan Kay talk at HPI in Potsdam

2011-07-22 Thread Kim Rose

thanks, Bert!  Hope all is going well and you're having fun.
See you soon in Lancaster
cheers,
Kim
On Jul 22, 2011, at 5:03 AM, Bert Freudenberg wrote:


Recording:

http://tele-task.de/archive/lecture/overview/5820/

- Bert -

On 22.07.2011, at 06:29, Alan Kay wrote:


To All,

This wound up being a talk to several hundred students, so most of  
the content is about ways to think about things, with just a  
little about scaling and STEPS at the end.


Cheers,

Alan


From: John Zabroski johnzabro...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Thu, July 21, 2011 5:47:15 PM
Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam

Ian,

When will the recording be online?

Please let us know!

Thanks,
Z-Bo

On Fri, Jul 8, 2011 at 7:32 PM, Ian Piumarta i...@vpri.org wrote:

Title: Next steps for qualitatively improving programming

Venue: Lecture Hall 1, Hasso-Plattner-Institut Potsdam, Germany

Date and time: July 21 (Thu) 2011, 16:00-17:00

Additional information:

http://www.vpri.org/html/people/founders.htm
http://www.hpi.uni-potsdam.de/hpi/anfahrt?L=1
http://www.hpi.uni-potsdam.de/news/beitrag/computerpionier-alan-kay-wird-hpi-fellow.html

This talk will be recorded and made available online.





___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


[fonc] Alan Kay talk at HPI in Potsdam

2011-07-08 Thread Ian Piumarta
Title: Next steps for qualitatively improving programming

Venue: Lecture Hall 1, Hasso-Plattner-Institut Potsdam, Germany

Date and time: July 21 (Thu) 2011, 16:00-17:00

Additional information:

http://www.vpri.org/html/people/founders.htm
http://www.hpi.uni-potsdam.de/hpi/anfahrt?L=1
http://www.hpi.uni-potsdam.de/news/beitrag/computerpionier-alan-kay-wird-hpi-fellow.html

This talk will be recorded and made available online.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc