Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Dale Schumacher
On Mon, Feb 27, 2012 at 5:23 PM, Charles Perkins  wrote:
> I think of the code size reduction like this:
>
> A book of logarithm tables may be hundreds of pages in length and yet the 
> equation producing the numbers can fit on one line.
>
> VPRI is exploring "runnable math" and is seeking key equations from which the 
> functionality of those 1MLOC, 10MLOC, 14MLOC can be automatically produced.
>
> It's not about code compression, it's about functionality expansion.

This reminds me of Gregory Chaitin's concept of algorithmic
complexity, leading to his results relating to compression, logical
irreducibility and "understanding" [1].
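To make the analogy concrete, here is a toy sketch (Python, purely
illustrative): the tabulated values are pure redundancy once you have the
generating rule, which is exactly the algorithmic-complexity point -- the
"size" of the table is the length of the shortest program that
regenerates it.

import math

# "the book": hundreds of pages of tabulated logarithms
log_table = {x / 100: math.log10(x / 100) for x in range(100, 1000)}

# "the equation": the one-line rule that regenerates any entry on demand
def rule(x):
    return math.log10(x)

assert abs(log_table[2.0] - rule(2.0)) < 1e-12  # the book adds no information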

[1] G. Chaitin.  Meta Math! The Quest for Omega, Vintage Books 2006.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread BGB

On 2/28/2012 5:36 PM, Julian Leviston wrote:


On 29/02/2012, at 10:29 AM, BGB wrote:


On 2/28/2012 2:30 PM, Alan Kay wrote:
Yes, this is why the STEPS proposal was careful to avoid "the 
current day world".


For example, one of the many current day standards that was 
dismissed immediately is the WWW (one could hardly imagine more of a 
mess).




I don't think "the web" is entirely horrible:
HTTP basically works, and XML is "ok" IMO, and an XHTML variant could 
be ok.


Hypertext as a structure is neither beautiful nor incredibly useful. 
Google exists because of how incredibly flawed the web is, and if 
you look at their process for organising it, you start to find 
yourself laughing a lot. The general computing experience these days 
is an absolute shambles and completely crap. Computers are very very 
hard to use. Perhaps you don't see it - perhaps you're in the trees - 
you can't see the forest... but it's intensely bad.




I am not saying it is particularly "good", just that it is "ok" and "not 
completely horrible...".


it is, as are most things in life, generally adequate for what it does...

it could be better, and it could probably also be worse...


It's like someone crapped their pants and google came around and said 
hey you can wear gas masks if you like... when what we really need to 
do is clean up the crap and make sure there's a toilet nearby so that 
people don't crap their pants any more.




IMO, this is more when one gets into the SOAP / WSDL area...


granted, moving up from this, stuff quickly turns terrible (poorly 
designed, and with many "shiny new technologies" which are almost 
absurdly bad).



practically though, the WWW is difficult to escape, as a system 
lacking support for this is likely to be rejected outright.


You mean like email? A system that doesn't have anything to do with 
the WWW per se and that is used daily by millions upon millions of people? 
:P I disagree intensely. In exactly the same way that facebook was 
taken up because it was a slightly less crappy version of myspace, 
something better than the web would be taken up in a heartbeat if it 
was accessible and obviously better.


You could, if you chose to, view this mailing group as a type of 
"living document" where you can peruse its contents through your email 
program... depending on what you see the web as being... maybe if you 
squint your eyes just the right way, you could envisage the web as 
simply being a means of sharing information to other humans... and 
this mailing group could simply be a different kind of web...


I'd hardly say that email hasn't been a great success... in fact, I 
think email, even though it, too, is fairly crappy, has been more of a 
success than the world wide web.




I don't think email and the WWW are mutually exclusive, by any means.

yes, one probably needs email as well, as well as probably a small 
mountain of other things, to make a viable end-user OS...



however, technically, many people do use email via webmail interfaces 
and similar.
nevermind that many people use things like "Microsoft Outlook Web 
Access" and similar.


so, it is at least conceivable that a future exists where people read 
their email via webmail and access usenet almost entirely via Google 
Groups and similar...


not that it would be necessarily a good thing though...



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Julian Leviston

On 29/02/2012, at 10:29 AM, BGB wrote:

> On 2/28/2012 2:30 PM, Alan Kay wrote:
>> 
>> Yes, this is why the STEPS proposal was careful to avoid "the current day 
>> world". 
>> 
>> For example, one of the many current day standards that was dismissed 
>> immediately is the WWW (one could hardly imagine more of a mess). 
>> 
> 
> I don't think "the web" is entirely horrible:
> HTTP basically works, and XML is "ok" IMO, and an XHTML variant could be ok.

Hypertext as a structure is neither beautiful nor incredibly useful. Google 
exists because of how incredibly flawed the web is, and if you look at their 
process for organising it, you start to find yourself laughing a lot. The 
general computing experience these days is an absolute shambles and completely 
crap. Computers are very very hard to use. Perhaps you don't see it - perhaps 
you're in the trees - you can't see the forest... but it's intensely bad.

It's like someone crapped their pants and google came around and said hey you 
can wear gas masks if you like... when what we really need to do is clean up 
the crap and make sure there's a toilet nearby so that people don't crap their 
pants any more.

> 
> granted, moving up from this, stuff quickly turns terrible (poorly designed, 
> and with many "shiny new technologies" which are almost absurdly bad).
> 
> 
> practically though, the WWW is difficult to escape, as a system lacking 
> support for this is likely to be rejected outright.

You mean like email? A system that doesn't have anything to do with the WWW per 
se and that is used daily by millions upon millions of people? :P I disagree 
intensely. In exactly the same way that facebook was taken up because it was a 
slightly less crappy version of myspace, something better than the web would be 
taken up in a heartbeat if it was accessible and obviously better.

You could, if you chose to, view this mailing group as a type of "living 
document" where you can peruse its contents through your email program... 
depending on what you see the web as being... maybe if you squint your eyes 
just the right way, you could envisage the web as simply being a means of 
sharing information to other humans... and this mailing group could simply be a 
different kind of web...

I'd hardly say that email hasn't been a great success... in fact, I think 
email, even though it, too, is fairly crappy, has been more of a success than 
the world wide web.

Julian

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread BGB

On 2/28/2012 2:30 PM, Alan Kay wrote:
Yes, this is why the STEPS proposal was careful to avoid "the current 
day world".


For example, one of the many current day standards that was dismissed 
immediately is the WWW (one could hardly imagine more of a mess).




I don't think "the web" is entirely horrible:
HTTP basically works, and XML is "ok" IMO, and an XHTML variant could be ok.

granted, moving up from this, stuff quickly turns terrible (poorly 
designed, and with many "shiny new technologies" which are almost 
absurdly bad).



practically though, the WWW is difficult to escape, as a system lacking 
support for this is likely to be rejected outright.



But the functionality plus more can be replaced in our "ideal world" 
with encapsulated confined migratory VMs ("Internet objects") as a 
kind of next version of Gerry Popek's LOCUS.


The browser and other storage confusions are all replaced by the 
simple idea of separating out the safe objects from the various modes 
one uses to send and receive them. This covers files, email, web 
browsing, search engines, etc. What is left in this model is just a UI 
that can integrate the visual etc., outputs from the various 
encapsulated VMs, and send them events to react to. (The original 
browser folks missed that a scalable browser is more like a kernel OS 
than an App)


it is possible.

in my case, I had mostly assumed file and message passing.
theoretically, script code could be passed along as well, but the 
problem with passing code is how to best ensure that things are kept secure.



in some of my own uses, an option is to throw a UID/GID+privileges 
system into the mix, but there are potential drawbacks with this 
(luckily, the performance impact seems to be relatively minor). granted, 
a more comprehensive system (making use of ACLs and/or "keyrings") could 
potentially be a little more costly than simple UID/GID rights 
checking, but all this shouldn't be too difficult to mostly optimize 
away in most cases.
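as a rough sketch, simple UID/GID rights checking might look about like 
this (toy Python, names invented for illustration, not my actual VM code):

ROOT_UID = 0

class Context:                  # who is asking
    def __init__(self, uid, gid):
        self.uid, self.gid = uid, gid

class Resource:                 # what is being accessed
    def __init__(self, owner_uid, owner_gid, mode):
        # mode holds rwxrwxrwx permission bits, Unix-style
        self.owner_uid, self.owner_gid, self.mode = owner_uid, owner_gid, mode

def may_access(ctx, res, want):      # want: bitmask, e.g. 0b100 = read
    if ctx.uid == ROOT_UID:
        return True                  # root bypasses the check entirely
    if ctx.uid == res.owner_uid:
        return bool((res.mode >> 6) & want)
    if ctx.gid == res.owner_gid:
        return bool((res.mode >> 3) & want)
    return bool(res.mode & want)

guest = Context(uid=1000, gid=100)
secret = Resource(owner_uid=ROOT_UID, owner_gid=0, mode=0o600)
print(may_access(guest, secret, 0b100))   # False: cheap per-call check

the check is just a few compares, which is why the performance impact can 
stay minor; an ACL or "keyring" walk would cost more, hence wanting to 
optimize it away where possible.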


the big issue is mostly to set up all the security in a "fairly secure" way.

currently, by default, nearly everything defaults to requiring root 
access. unprivileged code would thus require interfaces to be exposed to 
it directly (probably via "setuid" functions). however, as-is, it is 
defeated by most application code defaulting to "root".


somehow though, I think I am probably the only person I know who thinks 
this system is "sane".


however, it did seem like it would probably be easier to set up and 
secure than one based on scoping and visibility.



otherwise, yeah, maybe one can provide a bunch of APIs, and "apps" can 
be mostly implemented as scripts which invoke these APIs?...




These are old ideas, but the vendors etc didn't get it ...



maybe:
browser vendors originally saw the browser merely as a document viewing 
app (rather than as a "platform").



support for usable network file systems and "applications which aren't 
raw OS binaries" are slow-coming.


AFAIK, the main current contenders in the network filesystem space are 
SMB2/CIFS and WebDAV.


it could possibly be useful to integrate things in a form which is not 
terrible, for example:

OS has a basic HTML layout engine (doesn't care where the data comes from);
the OS's VFS can directly access HTTP, ideally without having to "mount" 
things first;

...

in this case, the "browser" is essentially just a fairly trivial script, 
say:
creates a window, and binds an HTML layout object into a form with a few 
other widgets;
passes any HTTP requests to the OS's filesystem API, with the OS 
managing getting the contents from the servers.


a partial advantage then is that other apps may not have to deal with 
libraries or sockets or similar to get files from web-servers, and 
likewise shell utilities would work, by default, with web-based files.


"cp http://someserver/somefile ~/myfiles/"

or similar...
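a rough userspace sketch of that dispatch idea (toy Python; names 
invented, and a real implementation would live in the OS's VFS layer 
rather than in the app):

import shutil, urllib.request

def vfs_open(path, mode='rb'):
    # the VFS dispatches on the path scheme; apps just see a file object
    if path.startswith(('http://', 'https://')):
        return urllib.request.urlopen(path)
    return open(path, mode)

def cp(src, dst):
    with vfs_open(src) as s, open(dst, 'wb') as d:
        shutil.copyfileobj(s, d)

# cp("http://someserver/somefile", "myfiles/somefile")

the app never touches sockets or an HTTP library directly, which is the 
point: shell utilities get web access "for free".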


actually, IIRC, my OS project may have done this (or it was 
planned, either way). I do remember though that sockets were available 
as part of the filesystem (under "/dev/" somewhere), so no sockets API 
was needed (it was instead based on opening the socket and using 
"ioctl()" calls...).



side note: what originally killed my OS project was, at the time, 
reaching the conclusion that it wouldn't likely have been possible for 
me to compete on equal terms with Windows and Linux, rendering the 
effort pointless vs instead developing purely in userspace. it does bring 
up some interesting thoughts though.



or such...



Cheers,

Alan






*From:* Reuben Thomas 
*To:* Fundamentals of New Computing 
*Sent:* Tuesday, February 28, 2012 1:01 PM
*Subject:* Re: [fonc] Error trying to compile COLA

On 28 February 2012 20:51, Niklas Larsson  wrote:
>
> But Linux contains much more duplication than drivers only, it
> supports 

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Yes, this is why the STEPS proposal was careful to avoid "the current day 
world". 


For example, one of the many current day standards that was dismissed 
immediately is the WWW (one could hardly imagine more of a mess). 


But the functionality plus more can be replaced in our "ideal world" with 
encapsulated confined migratory VMs ("Internet objects") as a kind of next 
version of Gerry Popek's LOCUS. 

The browser and other storage confusions are all replaced by the simple idea of 
separating out the safe objects from the various modes one uses to send and 
receive them. This covers files, email, web browsing, search engines, etc. What 
is left in this model is just a UI that can integrate the visual etc., outputs 
from the various encapsulated VMs, and send them events to react to. (The 
original browser folks missed that a scalable browser is more like a kernel OS 
than an App)

These are old ideas, but the vendors etc didn't get it ...


Cheers,

Alan




>
> From: Reuben Thomas 
>To: Fundamentals of New Computing  
>Sent: Tuesday, February 28, 2012 1:01 PM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>On 28 February 2012 20:51, Niklas Larsson  wrote:
>>
>> But Linux contains much more duplication than drivers only, it
>> supports many filesystems, many networking protocols, and many
>> architectures of which only a few of each are widely used. It also
>> contains a lot of complicated optimizations of operations that would
>> be unwanted in a simple, transparent OS.
>
>Absolutely. And many of these cannot be removed, because otherwise you
>cannot interoperate with the systems that use them. (A similar
>argument can be made for hardware if you want your OS to be widely
>usable, but the software argument is rather more difficult to avoid.)
>
>> Let's put a number on that: the first public
>> release of Linux, 0.01, contains 5929 lines in C-files and 2484 in
>> header files. I'm sure that is far closer to what a minimal viable OS
>> is than what current Linux is.
>
>I'm not sure that counts as "viable".
>
>A portable system will always have to cope with a wide range of
>hardware. Alan has already pointed to a solution to this: devices that
>come with their own drivers. At the very least, it's not unreasonable
>to assume something like the old Windows model, where drivers are
>installed with the device, rather than shipped with the OS. So that
>percentage of code can indeed be removed.
>
>More troublingly, an interoperable system will always have to cope
>with a wide range of file formats, network protocols &c. As FoNC has
>demonstrated with TCP/IP, implementations of these can sometimes be made
>much smaller, but many formats and protocols will not be susceptible to
>reimplementation, for technical, legal or simple lack of interest.
>
>As far as redundancy in the Linux model, then, one is left with those
>parts of the system that can either be implemented with less code
>(hopefully, a lot of it), or where there is already duplication (e.g.
>schedulers).
>
>Unfortunately again, one person's "little-used architecture" is
>another's bread and butter (and since old architectures are purged
>from Linux, it's a reasonable bet that there are significant numbers
>of users of each supported architecture), and one person's
>"complicated optimization" is another's essential performance boost.
>It's precisely due to heavy optimization of the kernel and libc that
>the basic UNIX programming model has remained stable for so long in
>Linux, while still delivering the performance of advanced hardware
>undreamed-of when UNIX was designed.
>
>-- 
>http://rrt.sc3d.org
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread BGB

On 2/28/2012 10:33 AM, Reuben Thomas wrote:

On 28 February 2012 16:41, BGB  wrote:

  - 1 order of magnitude is gained by removing feature creep.  I agree
   feature creep can be important.  But I also believe most feature
   belong to a long tail, where each is needed by a minority of users.
   It does matter, but if the rest of the system is small enough,
   adding the few features you need isn't so difficult any more.


this could help some, but isn't likely to result in an order of magnitude.

Example: in Linux 3.0.0, which has many drivers (and Linux is often
cited as being "mostly drivers"), actually counting the code reveals
about 55-60% in drivers (depending how you count). So that even with
only one hardware configuration, you'd save less than 50% of the code
size, i.e. a factor of 2 at very best.



yeah, kind of the issue here.

one can shave code, reduce redundancy, increase abstraction, ... but 
this will only buy so much.


then one can start dropping features, but how many can one drop and 
still have everything work?...


one can be like, "well, maybe we will make something like MS-DOS, but in 
long-mode?" (IOW: single-big address space, with no user/kernel 
separation, or conventional processes, and all kernel functionality is 
essentially library functionality).



ok, how small can this be made?
maybe 50 kloc, assuming one is sparing with the drivers.


I once wrote an OS kernel (long-dead project, ended nearly a decade 
ago); running a line counter on the whole project, I get about 
84 kloc. further investigation: 44 kloc of this was due to a copy of 
NASM sitting in the apps directory (I tried to port NASM to my OS at the 
time, but it didn't really work correctly, very possibly due to a 
quickly kludged-together C library...).



so, a 40 kloc OS kernel, itself at the time bordering on "barely worked".

what sorts of HW drivers did it have: ATA / IDE, console, floppy, VESA, 
serial mouse, RS232, RTL8139. how much code went to drivers: 11 kloc.


how about VFS: 5 kloc, which included (FS drivers): "BSHFS" (IIRC, a 
TFTP-like shared filesystem), FAT (12/16/32), RAMFS.


another 5 kloc goes into the network code, which included TCP/IP, ARP, 
PPP, and an HTTP client+server.


boot loader was 288 lines (ASM), "setup" was 792 lines (ASM).

boot loader: copies boot files ("setup.bin" and "kernel.sys") into RAM 
(in the low 640K). seems hard-coded for FAT12.


"setup" was mostly responsible for setting up the kernel (copying it to 
the desired address) and entering protected mode (jumping into the 
kernel). this is commonly called a "second-stage" loader, partly because 
it does a lot of stuff which is too bulky to do in the boot loader 
(which is limited to 512 bytes, whereas "setup" can be up to 32kB).


"setup" magic: Enable A20, load GDT, enter big-real mode, check for "MZ" 
and "PE" markers (kernel was PE/COFF it seems), copies kernel image to 
VMA base, pushes kernel entry point to stack, remaps IRQs, executes 
32-bit return (jumps into protected mode).


around 1/2 of the "setup" file is code for jumping between real and 
protected mode and for interfacing with VESA.


note: I was using PE/COFF for apps and libraries as well.
IIRC, I was using a naive process-based model at the time.


could a better HLL have made the kernel drastically smaller? I have my 
doubts...



add the need for maybe a compiler, ... and the line count is sure to get 
larger quickly.


based on my own code, one could probably get a basically functional C 
compiler in around 100 kloc, but maybe smaller could be possible (this 
would include the compiler+assembler+linker).


apps/... would be a bit harder.


in my case, the most direct path would be just dumping all of my 
existing libraries on top of my original OS project, and maybe dropping 
the 3D renderer (since it would be sort of useless without GPU support, 
OpenGL, ...). this would likely weigh in at around 750-800 kloc (but 
could likely be made into a self-hosting OS, since a C compiler would be 
included, and as well there is a C runtime floating around).


this is all still a bit over the stated limit.


maybe, one could try to get a functional GUI framework and some basic 
applications and similar in place (probably 100-200 kloc more, at 
least).


probably, by this point, one is looking at something like a Windows 3.x 
style disk footprint (maybe using 10-15 MB of HDD space or so for all 
the binaries...).



granted, in my case, the vast majority of the code would be C, probably 
with a smaller portion of the OS and applications being written in 
BGBScript or similar...



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
As I mentioned, Smalltalk-71 was never implemented -- and rarely mentioned (but 
it was part of the history of Smalltalk so I put in a few words about it).

If we had implemented it, we probably would have cleaned up the look of it, and 
also some of the conventions. 

You are right that part of it is like a term rewriting system, and part of it 
has state (object state).

to ... do ... is an operation. The match is on everything between to and do.

For example, the first line with "cons" in it does the "car" operation (which 
here is "hd").

The second line with "cons" in it does "replaca". The value of "hd" is being 
replaced by the value of "c". 

One of the struggles with this design was to try to make something almost as 
simple as LOGO, but that could do language extensions, simple AI backward 
chaining inferencing (like Winograd's block stacking problem), etc.

The turgid punctuations (as I mentioned in the history) were attempts to find 
ways to do many different kinds of matching.

So we were probably lucky that Smalltalk-72 came along.  Its pattern 
matching was less general, but quite a bit could be done as far as driving an 
extensible interpreter with it.

However, some of these ideas were done better later. I think by Leler, and 
certainly by Joe Goguen, and others.

Cheers,

Alan


>
> From: Jakob Praher 
>To: Alan Kay ; Fundamentals of New Computing 
> 
>Sent: Tuesday, February 28, 2012 12:52 PM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>
>Dear Alan,
>
>Am 28.02.12 14:54, schrieb Alan Kay: 
>Hi Ryan
>>
>>
>>Check out Smalltalk-71, which was a design to do just what you suggest -- it 
>>was basically an attempt to combine some of my favorite languages of the time 
>>-- Logo and Lisp, Carl Hewitt's Planner, Lisp 70, etc.
>do you have a detailed documentation of Smalltalk 71 somewhere? Something like 
>a Smalltalk 71 for Smalltalk 80 programmers :-)
>In the early history of Smalltalk you mention it as:
>> It was a kind of parser with object-attachment that executed tokens 
>> directly.
>
>From the examples I think that "do 'expr'" is evaluating expr by using 
>previous "to 'ident' :arg1..:argN ".
>
>As an example "do 'factorial 3'" should evaluate to 6 considering:
>
>to 'factorial' 0 is 1
>to 'factorial' :n do 'n*factorial n-1'
>
>What about arithmetic and precedence: What part of the language was built 
>into the system?
>- :var denotes variables, whereas var denotes the instantiated value
>  of :var in the expr, e.g. :n vs 'n-1'
>- '' denotes simple tokens (in the head) as well as expressions
>  (in the body)?
>- to, do are keywords
>- () can be used for precedence
>
>You described evaluation as straightforward pattern-matching.
>It somehow reminds me of a term rewriting system - e.g. "'hd' ('cons'
>:a :b) '<-' :c" is a structured term.
>I know rewriting systems which first parse into an abstract
>representation (e.g. prefix form) and transform on the abstract
>syntax - whereas in Smalltalk 71 the concrete syntax seems to be
>used in the rules.
>
>Also it seems redundant to both have:
>to 'hd' ('cons' :a :b) do 'a'
>and
>to 'hd' ('cons' :a :b) '<-' :c do 'a <- c'
>
>Is this made to make sure that the left hand side of <- has to be
>a hd (cons :a :b) expression?
>
>Best,
>Jakob
>
>
>
>>
>>This never got implemented because of "a bet" that turned into Smalltalk-72, 
>>which also did what you suggest, but in a less comprehensive way -- think of 
>>each object as a Lisp closure that could be sent a pointer to the message and 
>>could then parse-and-eval that. 
>>
>>
>>A key to scaling -- that we didn't try to do -- is "semantic typing" (which I 
>>think is discussed in some of the STEPS material) -- that is: to be able to 
>>characterize the meaning of what is needed and produced in terms of a 
>>description rather than a label. Looks like we won't get to that idea this 
>>time either.
>>
>>
>>Cheers,
>>
>>
>>Alan
>>
>>
>>
>>
>>>
>>> From: Ryan Mitchley 
>>>To: fonc@vpri.org 
>>>Sent: Tuesday, February 28, 2012 12:57 AM
>>>Subject: Re: [fonc] Error trying to compile COLA
>>> 
>>>
>>> 
>>>On 27/02/2012 19:48, Tony Garnock-Jones wrote:
>>>
>>>
My interest in it came out of thinking about integrating pub/sub 
(multi- and broadcast) messaging into the heart of a language. What 
would a Smalltalk look like if, instead of a strict unicast model 
with multi- and broadcast constructed atop (via Observer/Observable), 
it had a messaging model capable of natively expressing unicast, 
anycast, multicast, and broadcast patterns? 

>>>I've wondered if pattern matching shouldn't be a foundation of 
method resolution (akin to binding with backtracking in Prolog)

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Reuben Thomas
On 28 February 2012 20:51, Niklas Larsson  wrote:
>
> But Linux contains much more duplication than drivers only, it
> supports many filesystems, many networking protocols, and many
> architectures of which only a few of each are widely used. It also
> contains a lot of complicated optimizations of operations that would
> be unwanted in a simple, transparent OS.

Absolutely. And many of these cannot be removed, because otherwise you
cannot interoperate with the systems that use them. (A similar
argument can be made for hardware if you want your OS to be widely
usable, but the software argument is rather more difficult to avoid.)

> Let's put a number on that: the first public
> release of Linux, 0.01, contains 5929 lines in C-files and 2484 in
> header files. I'm sure that is far closer to what a minimal viable OS
> is than what current Linux is.

I'm not sure that counts as "viable".

A portable system will always have to cope with a wide range of
hardware. Alan has already pointed to a solution to this: devices that
come with their own drivers. At the very least, it's not unreasonable
to assume something like the old Windows model, where drivers are
installed with the device, rather than shipped with the OS. So that
percentage of code can indeed be removed.

More troublingly, an interoperable system will always have to cope
with a wide range of file formats, network protocols &c. As FoNC has
demonstrated with TCP/IP, implementations of these can sometimes be made
much smaller, but many formats and protocols will not be susceptible to
reimplementation, for technical, legal or simple lack of interest.

As far as redundancy in the Linux model, then, one is left with those
parts of the system that can either be implemented with less code
(hopefully, a lot of it), or where there is already duplication (e.g.
schedulers).

Unfortunately again, one person's "little-used architecture" is
another's bread and butter (and since old architectures are purged
from Linux, it's a reasonable bet that there are significant numbers
of users of each supported architecture), and one person's
"complicated optimization" is another's essential performance boost.
It's precisely due to heavy optimization of the kernel and libc that
the basic UNIX programming model has remained stable for so long in
Linux, while still delivering the performance of advanced hardware
undreamed-of when UNIX was designed.

-- 
http://rrt.sc3d.org
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Jakob Praher
Dear Alan,

Am 28.02.12 14:54, schrieb Alan Kay:
> Hi Ryan
>
> Check out Smalltalk-71, which was a design to do just what you suggest
> -- it was basically an attempt to combine some of my favorite
> languages of the time -- Logo and Lisp, Carl Hewitt's Planner, Lisp
> 70, etc.
do you have a detailed documentation of Smalltalk 71 somewhere?
Something like a Smalltalk 71 for Smalltalk 80 programmers :-)
In the early history of Smalltalk you mention it as

> It was a kind of parser with object-attachment that executed tokens
> directly.

From the examples I think that "do 'expr'" is evaluating expr by using
previous "to 'ident' :arg1..:argN ".

As an example "do 'factorial 3'" should  evaluate to 6 considering:

to 'factorial' 0 is 1
to 'factorial' :n do 'n*factorial n-1'
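To check my reading, here is a toy sketch of such rule-based evaluation
(Python, my own invention for illustration - certainly not Smalltalk-71
itself; rules are tried in order and the first match wins):

rules = [
    (('factorial', 0),    lambda env: 1),     # to 'factorial' 0 is 1
    (('factorial', ':n'),                     # to 'factorial' :n do 'n*factorial n-1'
     lambda env: env['n'] * ev(('factorial', env['n'] - 1))),
]

def match(pattern, term, env):
    if isinstance(pattern, str) and pattern.startswith(':'):
        env[pattern[1:]] = term          # :var binds the matched value
        return True
    if isinstance(pattern, tuple):
        return (isinstance(term, tuple) and len(pattern) == len(term)
                and all(match(p, t, env) for p, t in zip(pattern, term)))
    return pattern == term               # anything else matches literally

def ev(term):
    for pattern, body in rules:
        env = {}
        if match(pattern, term, env):
            return body(env)
    return term                          # no rule applies: already a value

print(ev(('factorial', 3)))              # -> 6, as in "do 'factorial 3'"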

What about arithmetic and precedence: What part of the language was built
into the system?
- :var denotes variables, whereas var denotes the instantiated value of
:var in the expr, e.g. :n vs 'n-1'
- '' denotes simple tokens (in the head) as well as expressions (in
the body)?
- to, do are keywords
- () can be used for precedence

You described evaluation as straightforward pattern-matching.
It somehow reminds me of a term rewriting system - e.g. "'hd' ('cons' :a
:b) '<-' :c" is a structured term.
I know rewriting systems which first parse into an abstract
representation (e.g. prefix form) and transform on the abstract syntax
- whereas in Smalltalk 71 the concrete syntax seems to be used in the rules.

Also it seems redundant to both have:
to 'hd' ('cons' :a :b) do 'a'
and
to 'hd' ('cons' :a :b) '<-'  :c  do 'a <- c'

Is this made to make sure that the left hand side of <- has to be a hd
(cons :a :b) expression?

Best,
Jakob

>
> This never got implemented because of "a bet" that turned into
> Smalltalk-72, which also did what you suggest, but in a less
> comprehensive way -- think of each object as a Lisp closure that could
> be sent a pointer to the message and could then parse-and-eval that. 
>
> A key to scaling -- that we didn't try to do -- is "semantic typing"
> (which I think is discussed in some of the STEPS material) -- that is:
> to be able to characterize the meaning of what is needed and produced
> in terms of a description rather than a label. Looks like we won't get
> to that idea this time either.
>
> Cheers,
>
> Alan
>
> 
> *From:* Ryan Mitchley 
> *To:* fonc@vpri.org
> *Sent:* Tuesday, February 28, 2012 12:57 AM
> *Subject:* Re: [fonc] Error trying to compile COLA
>
> On 27/02/2012 19:48, Tony Garnock-Jones wrote:
>>
>> My interest in it came out of thinking about integrating pub/sub
>> (multi- and broadcast) messaging into the heart of a language.
>> What would a Smalltalk look like if, instead of a strict unicast
>> model with multi- and broadcast constructed atop (via
>> Observer/Observable), it had a messaging model capable of
>> natively expressing unicast, anycast, multicast, and broadcast
>> patterns? 
>>
>
> I've wondered if pattern matching shouldn't be a foundation of
> method resolution (akin to binding with backtracking in Prolog) -
> if a multicast message matches, the "method" is invoked (with much
> less specificity than traditional method resolution by
> name/token). This is maybe closer to the biological model of a
> cell surface receptor.
>
> Of course, complexity is an issue with this approach (potentially
> NP-complete).
>
> Maybe this has been done and I've missed it.
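A toy sketch of that receptor idea (Python, names invented; not from any
real system): every receptor whose pattern matches a broadcast message
fires, so "resolution" is by match rather than by a unique method name.

receptors = []   # (pattern, handler) pairs, like cell-surface receptors

def receptor(pattern):
    def register(handler):
        receptors.append((pattern, handler))
        return handler
    return register

def broadcast(message):
    for pattern, handler in receptors:
        if all(message.get(k) == v for k, v in pattern.items()):
            handler(message)             # every matching receptor fires

@receptor({'type': 'temperature'})
def log_any(msg):
    print('log:', msg['value'])

@receptor({'type': 'temperature', 'zone': 'core'})
def core_alarm(msg):
    print('core alarm:', msg['value'])

broadcast({'type': 'temperature', 'zone': 'core', 'value': 451})
# both handlers run; with patterns richer than equality, the matching cost
# is where the potential NP-completeness mentioned above comes in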


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Niklas Larsson
On 28 February 2012 18:33, Reuben Thomas  wrote:
> On 28 February 2012 16:41, BGB  wrote:
>>>
>>>  - 1 order of magnitude is gained by removing feature creep.  I agree
>>>   feature creep can be important.  But I also believe most feature
>>>   belong to a long tail, where each is needed by a minority of users.
>>>   It does matter, but if the rest of the system is small enough,
>>>   adding the few features you need isn't so difficult any more.
>>>
>>
>> this could help some, but isn't likely to result in an order of magnitude.
>
> Example: in Linux 3.0.0, which has many drivers (and Linux is often
> cited as being "mostly drivers"), actually counting the code reveals
> about 55-60% in drivers (depending how you count). So that even with
> only one hardware configuration, you'd save less than 50% of the code
> size, i.e. a factor of 2 at very best.
>

But Linux contains much more duplication than drivers only, it
supports many filesystems, many networking protocols, and many
architectures of which only a few of each are widely used. It also
contains a lot of complicated optimizations of operations that would
be unwanted in a simple, transparent OS.

And not much code is actually needed to make a basic Unix clone. Once
upon a time Linux was only a couple of thousand lines of C code, and was
even then a functional OS, capable of running a Unix userland and soon
gaining the ability to bootstrap itself by running the build
environment for itself. Let's put a number on that: the first public
release of Linux, 0.01, contains 5929 lines in C-files and 2484 in
header files. I'm sure that is far closer to what a minimal viable OS
is than what current Linux is.

Niklas
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Reuben

Yep. One of the many "finesses" in the STEPS project was to point out that 
requiring OSs to have drivers for everything misses what being networked is all 
about. In a nicer distributed systems design (such as Popek's LOCUS), one would 
get drivers from the devices automatically, and they would not be part of any 
OS code count. Apple even did this in the early days of the Mac for its own 
devices, but couldn't get enough other vendors to see why this was a really big 
idea.

Eventually the OS melts away to almost nothing (as it did at PARC in the 70s).

Then the question starts to become "how much code has to be written to make the 
various functional parts that will be semi-automatically integrated to make 
'vanilla personal computing' " ?


Cheers,

Alan




>
> From: Reuben Thomas 
>To: Fundamentals of New Computing  
>Sent: Tuesday, February 28, 2012 9:33 AM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>On 28 February 2012 16:41, BGB  wrote:
>>>
>>>  - 1 order of magnitude is gained by removing feature creep.  I agree
>>>   feature creep can be important.  But I also believe most feature
>>>   belong to a long tail, where each is needed by a minority of users.
>>>   It does matter, but if the rest of the system is small enough,
>>>   adding the few features you need isn't so difficult any more.
>>>
>>
>> this could help some, but isn't likely to result in an order of magnitude.
>
>Example: in Linux 3.0.0, which has many drivers (and Linux is often
>cited as being "mostly drivers"), actually counting the code reveals
>about 55-60% in drivers (depending how you count). So that even with
>only one hardware configuration, you'd save less than 50% of the code
>size, i.e. a factor of 2 at very best.
>
>-- 
>http://rrt.sc3d.org
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Loup

Very good question -- and tell your Boss he should support you!

If your boss has a math or science background, this will be an easy sell 
because there are many nice analogies that hold, and also some good examples in 
computing itself.

The POL approach is generally good, but for a particular problem area could be 
as difficult as any other approach. One general argument is that 
"non-machine-code" languages are POLs of a weak sort, but are more effective 
than writing machine code for most problems. (This was quite controversial 50 
years ago -- and lots of bosses forbade using any higher level language.)

Four arguments against POLs are the difficulties of (a) designing them, (b) 
making them, (c) creating IDE etc tools for them, and (d) learning them. (These 
are similar to the arguments about using math and science in engineering, but 
are not completely bogus for a small subset of problems ...).

Companies (and programmers within) are rarely rewarded for saving costs over 
the real lifetime of a piece of software (similar problems exist in the climate 
problems we are facing). These are social problems, but part of real 
engineering. However, at some point life-cycle costs and savings will become 
something that is accounted and rewarded-or-dinged. 

An argument that resonates with some bosses is the "debuggable 
requirements/specifications -> ship the prototype and improve it" approach, 
whose benefits show up early on. However, these quicker-track processes will 
often be stressed for time to do a new POL.

This suggests that among the most important POLs to be worked on are the ones 
that are for making POLs quickly. I think this is a huge important area and 
much needs to be done here (also a very good area for new PhD theses!).


Taking all these factors (and there are more), I think the POL and extensible 
language approach works best for really difficult problems that small numbers 
of really good people are hooked up to solve (could be in a company, and very 
often in one of many research venues) -- and especially if the requirements 
will need to change quite a bit, both from learning curve and quick response to 
the outside world conditions.

Here's where a factor of 100 or 1000 (sometimes even a factor of 10) less code 
will be qualitatively powerful.

Right now I draw a line at *100. If you can get this or more, it is worth 
surmounting the four difficulties listed above. If you can get *1000, you are 
in a completely new world of development and thinking.


Cheers,

Alan





>
> From: Loup Vaillant 
>To: fonc@vpri.org 
>Sent: Tuesday, February 28, 2012 8:17 AM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>Alan Kay wrote:
>> Hi Loup
>>
>> As I've said and written over the years about this project, it is not
>> possible to compare features in a direct way here.
>
>Yes, I'm aware of that.  The problem arises when I do advocacy. A
>response I often get is "but with only 20,000 lines, they gotta
>leave features out!".  It is not easy to explain that a point by
>point comparison is either unfair or flatly impossible.
>
>
>> Our estimate so far is that we are getting our best results from the
>> consolidated redesign (folding features into each other) and then from
>> the POLs. We are still doing many approaches where we thought we'd have
>> the most problems with LOCs, namely at "the bottom".
>
>If I got it right, what you call "consolidated redesign" encompasses what I
>called "feature creep" and "good engineering principles" (I understand
>now that they can't be easily separated). I originally estimated that:
>
>- You manage to gain 4 orders of magnitude compared to current OSes,
>- consolidated redesign gives you roughly 2 of those  (from 200M to 2M),
>- problem oriented languages give you the remaining 2 (from 2M to 20K).
>
>Did I…
>- overstate the power of problem oriented languages?
>- understate the benefits of consolidated redesign?
>- forget something else?
>
>(Sorry to bother you with those details, but I'm currently trying to
>  convince my Boss to pay me for a PhD on the grounds that PoLs are
>  totally amazing, so I'd better know real fast if I'm being
>  over-confident.)
>
>Thanks,
>Loup.
>
>
>
>> Cheers,
>>
>> Alan
>>
>>
>>     *From:* Loup Vaillant 
>>     *To:* fonc@vpri.org
>>     *Sent:* Tuesday, February 28, 2012 2:21 AM
>>     *Subject:* Re: [fonc] Error trying to compile COLA
>>
>>     Originally, the VPRI claims to be able to do a system that's 10,000
>>     times smaller than our current bloatware. That's going from roughly 200
>>     million lines to 20,000. (Or, as Alan Kay puts it, from a whole library
>>     to a single book.) That's 4 orders of magnitude.
>>
>>      From the report, I made a rough break down of the causes for code
>>     reduction. It seems that
>>
>>     - 1 order of magnitude is gained by removing feature creep. I agree
>>     feature creep can be important. But I also believe most feature
>>     belong to a long tail, where each is needed by a minority of users.

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Reuben Thomas
On 28 February 2012 16:41, BGB  wrote:
>>
>>  - 1 order of magnitude is gained by removing feature creep.  I agree
>>   feature creep can be important.  But I also believe most feature
>>   belong to a long tail, where each is needed by a minority of users.
>>   It does matter, but if the rest of the system is small enough,
>>   adding the few features you need isn't so difficult any more.
>>
>
> this could help some, but isn't likely to result in an order of magnitude.

Example: in Linux 3.0.0, which has many drivers (and Linux is often
cited as being "mostly drivers"), actually counting the code reveals
about 55-60% in drivers (depending how you count). So that even with
only one hardware configuration, you'd save less than 50% of the code
size, i.e. a factor of 2 at very best.
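Spelling that bound out (a toy calculation; the 10% of code kept for the
one configuration's own drivers is just an assumed figure):

driver_share = 0.60       # upper end of the 55-60% estimate
kept_drivers = 0.10       # assumption: one configuration still needs some
removed = driver_share - kept_drivers
print(1 / (1 - removed))  # -> 2.0, i.e. a factor of 2 at very best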

-- 
http://rrt.sc3d.org
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread BGB

On 2/28/2012 3:21 AM, Loup Vaillant wrote:

Originally,  the VPRI claims to be able to do a system that's 10,000
times smaller than our current bloatware.  That's going from roughly 200
million lines to 20,000. (Or, as Alan Kay puts it, from a whole library
to a single book.) That's 4 orders of magnitude.

From the report, I made a rough break down of the causes for code
reduction.  It seems that

 - 1 order of magnitude is gained by removing feature creep.  I agree
   feature creep can be important.  But I also believe most feature
   belong to a long tail, where each is needed by a minority of users.
   It does matter, but if the rest of the system is small enough,
   adding the few features you need isn't so difficult any more.



this could help some, but isn't likely to result in an order of magnitude.

it is much the same as thinking:
"this OS running on a single HW configuration will eliminate nearly all 
of the code related to drivers";
very likely, it would eliminate a lot of the HW-specific parts, but much 
of the "common" functionality may still need to be there.




 - 1 order of magnitude is gained by mere good engineering principles.
   In Frank for instance, there is _one_ drawing system, that is used
   everywhere.  Systematic code reuse can go a long way.
 Another example is the  code I work with.  I routinely find
   portions whose volume I can divide by 2 merely by rewriting a couple
   of functions.  I fully expect to be able to do much better if I
   could refactor the whole program.  Not because I'm a rock star (I'm
   definitely not).  Very far from that.  Just because the code I
   maintain is sufficiently abysmal.



this is elimination of redundancy, which is where I figured one would 
have the most results.
I suspect that, at least on the small scale, programmers tend to reduce 
redundancy where possible.


on the macro-scale, it is still a bit of an issue, as apps will tend to 
implement their own versions of things.


consider, for example, said OS has a 3D FPS style game in it:
maybe, the VFS code can be shared with the kernel, and with other apps;
likewise for memory management, and things like reflection and types 
management;

...

one may still have a big chunk of code for the 3D renderer.
ok, this is factored out, maybe a lot is shared with the GUI subsystem 
(which now deals with things like texture management, lighting, 
materials and shaders, ...);

...


but, one has a problem: whatever functionality is moved out of the 3D engine 
into other components in turn makes those components more complex, and 
leaving the functionality in the 3D engine means the engine itself has 
to deal with it.


someone could be like "well, this engine doesn't really need shadows and 
normal mapping", but people will notice.


if people are instead like, "this OS will not have or allow any sort of 
3D games", people will likely not go for it.



I suspect, at best, all this is likely to result in around a 25-50% 
reduction.




 - 2 orders of magnitude are gained through the use of Problem Oriented
   Languages (instead of C or C++). As examples, I can readily recall:
   + Gezira vs Cairo    (÷95)
   + Ometa  vs Lex+Yacc (÷75)
   + TCP-IP             (÷93)
   So I think this is not exaggerated.



I suspect this may be over-estimating it a bit.
say, eliminating C and C++ "may" eliminate some amount of bulk: 
people endlessly re-rolling their own linked lists, or typing verbose 
syntax.


but, a 100x average-case seems a bit of a stretch, even for this.
likewise: one still needs the logic to run said POLs.

a bigger problem though could be that it could become 
counter-productive, if the overuse of POLs renders the code unworkable 
due either to very few people being able to understand the POLs, or to 
few being able to change the code in them without catastrophic results, like "don't touch 
this or your whole system will blow up in your face".




Looked at it this way, it doesn't seem so impossible any more.  I
don't expect you to suddenly agree the "4 orders of magnitude" claim
(It still defies my intuition), but you probably disagree specifically
with one of my three points above.  Possible objections I can think of
are:

 - Features matter more than I think they do.
 - One may not expect the user to write his own features, even though
   it would be relatively simple.
 - Current systems may be not as badly written as I think they are.
 - Code reuse could be harder than I think.
 - The two orders of magnitude that seem to come from problem oriented
   languages may not come from _only_ those.  It could come from the
   removal of features, as well as better engineering principles,
   meaning I'm counting some causes twice.



possibly.

I think it more likely that a few things will result:
each thing has an effect, but not nearly so much, say if each stage only 
works out to maybe on average 25-50%;
one runs into a diminishing-returns curve (making one thing smaller 
makes it harder to make some

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Loup Vaillant

Alan Kay wrote:

Hi Loup

As I've said and written over the years about this project, it is not
possible to compare features in a direct way here.


Yes, I'm aware of that.  The problem arises when I do advocacy. A
response I often get is "but with only 20,000 lines, they gotta
leave features out!".  It is not easy to explain that a point by
point comparison is either unfair or flatly impossible.



Our estimate so far is that we are getting our best results from the
consolidated redesign (folding features into each other) and then from
the POLs. We are still doing many approaches where we thought we'd have
the most problems with LOCs, namely at "the bottom".


If I got it right, what you call "consolidated redesign" encompasses what I
called "feature creep" and "good engineering principles" (I understand
now that they can't be easily separated). I originally estimated that:

- You manage to gain 4 orders of magnitude compared to current OSes,
- consolidated redesign gives you roughly 2 of those (from 200M to 2M),
- problem oriented languages give you the remaining 2 (from 2M to 20K).
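(Spelling out the arithmetic of that breakdown, just for concreteness:)

bloatware_loc = 200_000_000
consolidated_redesign = 100   # ~2 orders of magnitude (200M -> 2M)
problem_oriented_langs = 100  # ~2 orders of magnitude (2M -> 20K)

print(bloatware_loc // (consolidated_redesign * problem_oriented_langs))
# -> 20000, the claimed single-book size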

Did I…
- overstate the power of problem oriented languages?
- understate the benefits of consolidated redesign?
- forget something else?

(Sorry to bother you with those details, but I'm currently trying to
 convince my Boss to pay me for a PhD on the grounds that PoLs are
 totally amazing, so I'd better know real fast if I'm being
 over-confident.)

Thanks,
Loup.




Cheers,

Alan


*From:* Loup Vaillant 
*To:* fonc@vpri.org
*Sent:* Tuesday, February 28, 2012 2:21 AM
*Subject:* Re: [fonc] Error trying to compile COLA

Originally, the VPRI claims to be able to do a system that's 10,000
times smaller than our current bloatware. That's going from roughly 200
million lines to 20,000. (Or, as Alan Kay puts it, from a whole library
to a single book.) That's 4 orders of magnitude.

 From the report, I made a rough break down of the causes for code
reduction. It seems that

- 1 order of magnitude is gained by removing feature creep. I agree
feature creep can be important. But I also believe most feature
belong to a long tail, where each is needed by a minority of users.
It does matter, but if the rest of the system is small enough,
adding the few features you need isn't so difficult any more.

- 1 order of magnitude is gained by mere good engineering principles.
In Frank for instance, there is _one_ drawing system, that is used
everywhere. Systematic code reuse can go a long way.
Another example is the code I work with. I routinely find
portions whose volume I can divide by 2 merely by rewriting a couple
of functions. I fully expect to be able to do much better if I
could refactor the whole program. Not because I'm a rock star (I'm
definitely not). Very far from that. Just because the code I
maintain is sufficiently abysmal.

- 2 orders of magnitude are gained through the use of Problem Oriented
Languages (instead of C or C++). As examples, I can readily recall:
+ Gezira vs Cairo (÷95)
+ Ometa vs Lex+Yacc (÷75)
+ TCP-IP (÷93)
So I think this is not exaggerated.

Looked at it this way, it doesn't seem so impossible any more. I
don't expect you to suddenly agree the "4 orders of magnitude" claim
(It still defies my intuition), but you probably disagree specifically
with one of my three points above. Possible objections I can think of
are:

- Features matter more than I think they do.
- One may not expect the user to write his own features, even though
it would be relatively simple.
- Current systems may be not as badly written as I think they are.
- Code reuse could be harder than I think.
- The two orders of magnitude that seem to come from problem oriented
languages may not come from _only_ those. It could come from the
removal of features, as well as better engineering principles,
meaning I'm counting some causes twice.

Loup.


BGB wrote:
 > On 2/27/2012 10:08 PM, Julian Leviston wrote:
 >> Structural optimisation is not compression. Lurk more.
 >
 > probably will drop this, as arguing about all this is likely
pointless
 > and counter-productive.
 >
 > but, is there any particular reason for why similar rules and
 > restrictions wouldn't apply?
 >
 > (I personally suspect that similar applies to nearly all forms of
 > communication, including written and spoken natural language, and a
 > claim that some X can be expressed in Y units does seem a fair amount
 > like a compression-style claim).
 >
 >
 > but, anyways, here is a link to another article:
 > http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem
 >
 >> Julian
 >>
 >> On 28/02/2012, at 3:38 PM, BGB wrote:
 >>
 >>> granted, I remain a little skeptical.
 >>>
 >>> I think there is a bit 

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Loup Vaillant

Julian Leviston wrote:

Two things spring out of this at me (inline):

On 28/02/2012, at 9:21 PM, Loup Vaillant wrote:


- Features matter more than I think they do.
- One may not expect the user to write his own features, even though
it would be relatively simple.


What about when using software becomes "writing" features - see etoys.
Is clicking and dragging tiles still "writing" software? :)


Good point.  By default, I think of building software and using
software as two separate activities (though we do use software to
implement others). But if some software is built in such a way that the
default use case is to extend it, then my assumption goes right out the
window, and my objection doesn't work.

I did expect however that building software could be made much easier
than it is right now, if only because 20 Kloc can be taught before the
end of high school.  Plus, some spreadsheet users already do this.




- Current systems may be not as badly written as I think they are.
- Code reuse could be harder than I think.


It's not that they're written badly, it's just that, so many years
on, no one has really understood some of the powerful ideas of
yesteryear. Even those powerful ideas allowed a certain level of
magnification... but the powerful ideas of these days in addition to the
past allow an incredibly large possibility of magnification of thought...


OK, "badly written" is probably too dismissal.  What I meant was more
along the line of "right now I can do better".  The thing is, I regard
the clarity and terseness of my own code as the default, While anything
worse "sucks".  (I don't know if I am able to perceive when code is
better than my own.)  Of course, even my own code tend to "suck" if
it's old enough.

So I know I should be more forgiving.  It's just that when I see a
nested "if" instead of a conjunction, empty "else" branches, or
assignment-happy interfaces, I feel the urge to deny oxygen to the
brain that produced this code.

Loup.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Loup

As I've said and written over the years about this project, it is not possible 
to compare features in a direct way here. The aim is to make something that 
feels like "vanilla personal computing" to an end-user -- that can do "a lot" 
-- and limit ourselves to 20,000 lines of code. We picked "personal computing" 
for three main reasons (a) we had some experience with doing this the first 
time around at PARC (in a very small amount of code), (b) it is something that 
people experience everyday, so they will be able to have opinions without 
trying to do a laborious point by point comparison, and (c) we would fail if we 
had to reverse engineer typical renditions of this (i.e. MS or Linux) -- we 
needed to do our own design to have a chance at this.

Our estimate so far is that we are getting our best results from the 
consolidated redesign (folding features into each other) and then from the 
POLs. We are still doing many approaches where we thought we'd have the most 
problems with LOCs, namely at "the bottom".

Cheers,

Alan




>
> From: Loup Vaillant 
>To: fonc@vpri.org 
>Sent: Tuesday, February 28, 2012 2:21 AM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>Originally,  the VPRI claims to be able to do a system that's 10,000
>times smaller than our current bloatware.  That's going from roughly 200
>million lines to 20,000. (Or, as Alan Kay puts it, from a whole library
>to a single book.) That's 4 orders of magnitude.
>
>From the report, I made a rough break down of the causes for code
>reduction.  It seems that
>
>- 1 order of magnitude is gained by removing feature creep.  I agree
>   feature creep can be important.  But I also believe most feature
>   belong to a long tail, where each is needed by a minority of users.
>   It does matter, but if the rest of the system is small enough,
>   adding the few features you need isn't so difficult any more.
>
>- 1 order of magnitude is gained by mere good engineering principles.
>   In Frank for instance, there is _one_ drawing system, that is used
>   everywhere.  Systematic code reuse can go a long way.
>     Another example is the  code I work with.  I routinely find
>   portions whose volume I can divide by 2 merely by rewriting a couple
>   of functions.  I fully expect to be able to do much better if I
>   could refactor the whole program.  Not because I'm a rock star (I'm
>   definitely not).  Very far from that.  Just because the code I
>   maintain is sufficiently abysmal.
>
>- 2 orders of magnitude are gained through the use of Problem Oriented
>   Languages (instead of C or C++). As examples, I can readily recall:
>    + Gezira vs Cairo    (÷95)
>    + Ometa  vs Lex+Yacc (÷75)
>    + TCP-IP             (÷93)
>   So I think this is not exaggerated.
>
>Looked at it this way, it doesn't seem so impossible any more.  I
>don't expect you to suddenly agree the "4 orders of magnitude" claim
>(It still defies my intuition), but you probably disagree specifically
>with one of my three points above.  Possible objections I can think of
>are:
>
>- Features matter more than I think they do.
>- One may not expect the user to write his own features, even though
>   it would be relatively simple.
>- Current systems may be not as badly written as I think they are.
>- Code reuse could be harder than I think.
>- The two orders of magnitude that seem to come from problem oriented
>   languages may not come from _only_ those.  It could come from the
>   removal of features, as well as better engineering principles,
>   meaning I'm counting some causes twice.
>
>Loup.
>
>
>BGB wrote:
>> On 2/27/2012 10:08 PM, Julian Leviston wrote:
>>> Structural optimisation is not compression. Lurk more.
>> 
>> probably will drop this, as arguing about all this is likely pointless
>> and counter-productive.
>> 
>> but, is there any particular reason for why similar rules and
>> restrictions wouldn't apply?
>> 
>> (I personally suspect that similar applies to nearly all forms of
>> communication, including written and spoken natural language, and a
>> claim that some X can be expressed in Y units does seem a fair amount
>> like a compression-style claim).
>> 
>> 
>> but, anyways, here is a link to another article:
>> http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem
>> 
>>> Julian
>>> 
>>> On 28/02/2012, at 3:38 PM, BGB wrote:
>>> 
 granted, I remain a little skeptical.
 
 I think there is a bit of a difference though between, say, a log
 table, and a typical piece of software.
 a log table is, essentially, almost pure redundancy, hence why it can
 be regenerated on demand.
 
 a typical application is, instead, a big pile of logic code for a
 wide range of behaviors and for dealing with a wide range of special
 cases.
 
 
 "executable math" could very well be functionally equivalent to a
 "highly compressed" program, but note in this case that one needs to
 count both the size of the "compressed" program, and also the size of
 the program needed to "decompress" it (so, the size of the system
 would also need to account for the compiler and runtime).

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Ryan

Check out Smalltalk-71, which was a design to do just what you suggest -- it 
was basically an attempt to combine some of my favorite languages of the time 
-- Logo and Lisp, Carl Hewitt's Planner, Lisp 70, etc.

This never got implemented because of "a bet" that turned into Smalltalk-72, 
which also did what you suggest, but in a less comprehensive way -- think of 
each object as a Lisp closure that could be sent a pointer to the message and 
could then parse-and-eval that. 
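
A rough sketch of that shape, in Python rather than anything Smalltalk-72
actually looked like (the counter object and its message vocabulary below
are invented purely for illustration):

# Each "object" is just a closure handed the raw message; the object
# itself parses and evaluates the message, instead of the system
# selecting a method by name on its behalf.
def make_counter():
    state = {"n": 0}
    def receive(message):
        op, *args = message          # the object's own "parse" step
        if op == "inc":
            state["n"] += int(args[0]) if args else 1
            return state["n"]
        if op == "value":
            return state["n"]
        raise ValueError("counter does not understand %r" % (message,))
    return receive

counter = make_counter()
counter(["inc", "5"])         # the object decides what "inc 5" means
print(counter(["value"]))     # -> 5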

A key to scaling -- that we didn't try to do -- is "semantic typing" (which I 
think is discussed in some of the STEPS material) -- that is: to be able to 
characterize the meaning of what is needed and produced in terms of a 
description rather than a label. Looks like we won't get to that idea this time 
either.
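
One speculative reading of "semantic typing", again sketched in Python (the
description dictionaries and the provides/find names are invented here, not
anything taken from the STEPS material):

# Services are registered under a *description* of what they need and
# produce, and are looked up by matching that description, not a label.
services = []

def provides(needs, produces):
    def register(fn):
        services.append((needs, produces, fn))
        return fn
    return register

@provides(needs={"quantity": "temperature", "unit": "celsius"},
          produces={"quantity": "temperature", "unit": "fahrenheit"})
def c_to_f(x):
    return x * 9 / 5 + 32

def find(needs, produces):
    # A real system would match descriptions by subsumption or by
    # predicates; exact equality is the crudest possible stand-in.
    for n, p, fn in services:
        if n == needs and p == produces:
            return fn
    raise LookupError("no service matches that description")

convert = find({"quantity": "temperature", "unit": "celsius"},
               {"quantity": "temperature", "unit": "fahrenheit"})
print(convert(100.0))   # -> 212.0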

Cheers,

Alan




>
> From: Ryan Mitchley 
>To: fonc@vpri.org 
>Sent: Tuesday, February 28, 2012 12:57 AM
>Subject: Re: [fonc] Error trying to compile COLA
> 
>
> 
>On 27/02/2012 19:48, Tony Garnock-Jones wrote:
>
>
>> My interest in it came out of thinking about integrating pub/sub
>> (multi- and broadcast) messaging into the heart of a language. What
>> would a Smalltalk look like if, instead of a strict unicast model
>> with multi- and broadcast constructed atop (via Observer/Observable),
>> it had a messaging model capable of natively expressing unicast,
>> anycast, multicast, and broadcast patterns?
>>
> I've wondered if pattern matching shouldn't be a foundation of method
> resolution (akin to binding with backtracking in Prolog) - if a
> multicast message matches, the "method" is invoked (with much less
> specificity than traditional method resolution by name/token). This is
> maybe closer to the biological model of a cell surface receptor.
>
> Of course, complexity is an issue with this approach (potentially
> NP-complete).
>
> Maybe this has been done and I've missed it.
>
>
>___
>fonc mailing list
>fonc@vpri.org
>http://vpri.org/mailman/listinfo/fonc
>
>
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alexis Read
I've only been looking at Maru, but as I understand it Maru is supposed to
be an evolution of COLA (i.e. Coke), and both an object and a lambda language.
The self-hosting is important in that it can be treated as a first-order
entity in the system, and I believe it's the smallest self-hosting system
available (650 LOC when optimised). I think the current plan is to port
Nile/Gezira to Maru, and do away with Nothing - Maru is flexible enough to
act as a VM, cross-compiler, etc.
On Feb 28, 2012 11:35 AM, "Martin Baldan"  wrote:

> Guys, there are so many lines of inquiry in this thread that I'm getting lost.
> Here's a little summary.
>
>
>
>
>
> [message]
> Author: Julian Leviston 
> http://vpri.org/mailman/private/fonc/2012/003081.html
>
> As I understand it, Frank is an experiment that is an extended version of
> DBJr that sits atop lesserphic, which sits atop gezira which sits atop
> nile, which sits atop maru, all of which utilise ometa and the
> "worlds" idea.
>
> If you look at the http://vpri.org/html/writings.php page you can see a
> pattern of progression that has emerged to the point where Frank exists.
> From what I understand, maru is the finalisation of what began as pepsi and
> coke. Maru is a simple s-expression language, in the same way that pepsi
> and coke were. In fact, it looks to have the same syntax. Nothing is the
> layer underneath that is essentially a symbolic computer - sitting between
> maru and the actual machine code (sort of like an LLVM assembler if I've
> understood it correctly).
> [..]
> http://www.vpri.org/vp_wiki/index.php/Main_Page
>
> On the bottom of that page, you'll see a link to the tinlizzie site that
> references "experiment" and the URL has dbjr in it... as far as I
> understand it, this is as much frank as we've been shown.
>
> http://tinlizzie.org/dbjr/
>
> [/message]
>
>
> About the DBJR repository:
>
> http://tinlizzie.org/dbjr/
>
> It contains ".lbox" files. Including "frank.lbox"
>
>
> About the "lbox" files:
>
> [message]
> Author: Josh Grams 
> http://vpri.org/mailman/private/fonc/2012/003089.html
>
> DBJr seems to be a Squeak thing.  Each of those .lbox directories has a
> SISS file which seems to be an S-expression serialization of Smalltalk
> objects.  Sounds like probably what you need is the stuff under "Text
> Field Spec for LObjects" on the VPRI wiki page.
>
> Not that I know *anything* about this whatsoever...
>
>
> [/message]
>
> Josh, do you mean I need the Squeak VM, image, and changes files?
> The active essay itself works (except that updates break it),
> but it doesn't seem to be an actual editor. Maybe I'm missing something.
>
> I think the lbox files are meant to be read with Lesserphic.
> According to the Lesserphic tutorial, I would need a Moshi Squeak Image.
> But I'm not sure where to get a generic, pristine one.
>
> The "Text Field Spec for LObjects" image says it is a Moshi image. Anyway,
> when I run
> the updates, it breaks. I don't know how to get out of the "Text Field"
> project
> and open a new Morphic project to try the tutorial.
> Also, I see a lot of Etoys mentions. Is it related? Maybe an Etoys image
> also works for
> Lesserphic?
>
>
>
>
> About Maru:
>
> [message]
> Author: David Girle davidgirle at gmail.com
> http://vpri.org/mailman/private/fonc/2012/003088.html
>
> Take a look at the page:
>
> http://piumarta.com/software/maru/
>
> it has the original version you have + current.
> There is a short readme in the current version with some examples that
> will get you going.
>
> [/message]
>
> This works, thanks. Maru is well worth further exploration.
> Still no GUI though ;)
>
> More to the point. So, Maru is Coke, right? I still don't get why there is
> so much emphasis is placed on the fact that it can compile its own
> implementation language. Isn't that the definition of a self-hosting
> language, and isn't that feature relatively common these days, especially in
> Lisp-like languages? I'm just trying to understand what it's all about.
> Another doubt is that, IIRC, COLA is supposed to have two languages,
> an object language to implement computations, and a lambda language to
> describe their meaning. Does Maru fit both roles at the same time?
>
>
> [message]
> Author: Julian Leviston
> http://vpri.org/mailman/private/fonc/2012/003090.html
>
> In the tinlizzie.org/updates/exploratory/packages you'll find Monticello
> packages that contain some of the experiments, I'm fairly sure, one of which
> is: (yep, you guessed it)
>
> FrankVersion-yo.16.mcz
> [/message]
>
> I loaded this Monticello package from a Squeak image, but nothing
> happened. It's described as "a dummy package to keep monotonic update
> numbers" or something like that. I also tried to load Gezira, but there are
> dependency issues and other errors. Maybe someone with more Squeak
> experience can clear that up. Maybe we need a Moshi Squeak image.
>
> An aside:
>
> By the way, my commentary about Jedi elitism was largely in jest. Pretty
> much all great visionaries have their quirks and I don't mind it at all
> that they prefer to focus on children.

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Julian Leviston
Two things spring out of this at me (inline): 

On 28/02/2012, at 9:21 PM, Loup Vaillant wrote:

> - Features matter more than I think they do.
> - One may not expect the user to write his own features, even though
>   it would be relatively simple.

What about when using software becomes "writing" features - see Etoys. Is
clicking and dragging tiles still "writing" software? :)

> - Current systems may not be as badly written as I think they are.
> - Code reuse could be harder than I think.

It's not that they're written badly, it's just that, so many years on, no
one has really understood some of the powerful ideas of yesteryear. Even those
powerful ideas allowed a certain level of magnification... but the powerful
ideas of these days, in addition to those of the past, allow an incredibly
large magnification of thought...

A good comparison would be:

- Engineer "A" understands what a lever does, therefore with that simple 
understanding can apply this knowledge to any number of concrete examples - it 
takes him almost no time to work out how to implement a lever. He teaches his 
apprentices this simple rule and law of physics, quite quickly, and they can 
write it down in a couple of sentences on a single piece of paper and also 
utilise it whenever and wherever they see fit. The Engineer "A" charges about 
$40 to implement a lever.

- Engineer "B" doesn't understand what a lever does, but he does have a 1000 
page book that illustrates almost every possible use of a lever, so when he 
finds a need, he looks up his well indexed book, which only takes a few minutes 
at the most... and then he can effectively do 90% of what Engineer "A" can do, 
but not actually understanding it, his implementations aren't as good. His 
apprentices get a copy of this book which only costs them $40 and which they 
have to familiarise themselves with. The book weighs two pounds, and they have 
to take it everywhere. The Engineer "B" charges only $50 to implement one of 
his 1000-page-book ideas... he also charges $10 per minute that it takes to 
look it up.

> - The two orders of magnitude that seem to come from problem oriented
>   languages may not come from _only_ those.  It could come from the
>   removal of features, as well as better engineering principles,
>   meaning I'm counting some causes twice.
> 
> Loup.

Julian
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Martin Baldan
Guys, there are so many lines of inquiry in this thread that I'm getting lost.
Here's a little summary.





[message]
Author: Julian Leviston 
http://vpri.org/mailman/private/fonc/2012/003081.html

As I understand it, Frank is an experiment that is an extended version of
DBJr that sits atop lesserphic, which sits atop gezira which sits atop
nile, which sits atop maru, all of which utilise ometa and the
"worlds" idea.

If you look at the http://vpri.org/html/writings.php page you can see a
pattern of progression that has emerged to the point where Frank exists.
From what I understand, maru is the finalisation of what began as pepsi and
coke. Maru is a simple s-expression language, in the same way that pepsi
and coke were. In fact, it looks to have the same syntax. Nothing is the
layer underneath that is essentially a symbolic computer - sitting between
maru and the actual machine code (sort of like an LLVM assembler if I've
understood it correctly).
[..]
http://www.vpri.org/vp_wiki/index.php/Main_Page

On the bottom of that page, you'll see a link to the tinlizzie site that
references "experiment" and the URL has dbjr in it... as far as I
understand it, this is as much frank as we've been shown.

http://tinlizzie.org/dbjr/

[/message]


About the DBJR repository:

http://tinlizzie.org/dbjr/

It contains ".lbox" files. Including "frank.lbox"


About the "lbox" files:

[message]
Author: Josh Grams 
http://vpri.org/mailman/private/fonc/2012/003089.html

DBJr seems to be a Squeak thing.  Each of those .lbox directories has a
SISS file which seems to be an S-expression serialization of Smalltalk
objects.  Sounds like probably what you need is the stuff under "Text
Field Spec for LObjects" on the VPRI wiki page.

Not that I know *anything* about this whatsoever...


[/message]

Josh, do you mean I need the Squeak VM, image, and changes files?
The active essay itself works (except that updates break it),
but it doesn't seem to be an actual editor. Maybe I'm missing something.

I think the lbox files are meant to be read with Lesserphic.
According to the Lesserphic tutorial, I would need a Moshi Squeak Image.
But I'm not sure where to get a generic, pristine one.

The "Text Field Spec for LObjects" image says it is a Moshi image. Anyway,
when I run
the updates, it breaks. I don't know how to get out of the "Text Field"
project
and open a new Morphic project to try the tutorial.
Also, I see a lot of Etoys mentions. Is it related? Maybe an Etoys image
also works for
Lesserphic?




About Maru:

[message]
Author: David Girle davidgirle at gmail.com
http://vpri.org/mailman/private/fonc/2012/003088.html

Take a look at the page:

http://piumarta.com/software/maru/

it has the original version you have + current.
There is a short readme in the current version with some examples that
will get you going.

[/message]

This works, thanks. Maru is well worth further exploration.
Still no GUI though ;)

More to the point. So, Maru is Coke, right? I still don't get why there is so
much emphasis placed on the fact that it can compile its own implementation
language. Isn't that the definition of a self-hosting language,
and isn't that feature relatively common these days, especially in
Lisp-like languages? I'm just trying to understand what it's all about.
Another doubt is that, IIRC, COLA is supposed to have two languages,
an object language to implement computations, and a lambda language to
describe their meaning. Does Maru fit both roles at the same time?
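
As a rough illustration of what "self-hosting" means at its smallest, here is
a toy in Python -- nowhere near Maru itself, and every name below is invented:

# A tiny s-expression evaluator.  The point of self-hosting is that once
# eval can run the language that eval itself is written in, the tower
# can stand on its own implementation.
def evaluate(expr, env):
    if isinstance(expr, str):               # variable reference
        return env[expr]
    if not isinstance(expr, list):          # literal (number, etc.)
        return expr
    head, *rest = expr
    if head == "lambda":                    # (lambda (params) body)
        params, body = rest
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    if head == "if":                        # (if test then else)
        test, then, alt = rest
        return evaluate(then if evaluate(test, env) else alt, env)
    fn = evaluate(head, env)                # application
    return fn(*(evaluate(a, env) for a in rest))

env = {"+": lambda a, b: a + b, "<": lambda a, b: a < b}
prog = [["lambda", ["n"], ["if", ["<", "n", 1], 0, ["+", "n", 1]]], 41]
print(evaluate(prog, env))   # -> 42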


[message]
Author: Julian Leviston
http://vpri.org/mailman/private/fonc/2012/003090.html

In the tinlizzie.org/updates/exploratory/packages you'll find Monticello
packages that contain some of the experiments, I'm fairly sure, one of which
is: (yep, you guessed it)

FrankVersion-yo.16.mcz
[/message]

I loaded this Monticello package from a Squeak image, but nothing happened.
It's described as "a dummy package to keep monotonic update numbers" or
something like that. I also tried to load Gezira, but there are dependency
issues and other errors. Maybe someone with more Squeak experience can
clear that up. Maybe we need a Moshi Squeak image.

An aside:

By the way, my commentary about Jedi elitism was largely in jest. Pretty
much all great visionaries have their quirks and I don't mind it at all
that they prefer to focus on children. OTOH, there's a number of reasons
why I humbly think the emphasis on children may not yield the expected
benefits. The underlying assumption seems to be that people have a strong
tendency to stick to their childhood toys, so if only
those toys were based on powerful tools, a new generation of programmers
would build their adult tools from their childhood toys.

In practice, I see two problems with this idea. First, when children become
teenagers they are usually more than happy to throw their toys away and
grab the adult stuff, to feel like adults. If there's no adult-friendly
version of the powerful technology, they will
simply use the crappy technology other

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Loup Vaillant

Originally, the VPRI claims to be able to build a system that's 10,000 times
smaller than our current bloatware.  That's going from roughly 200
million lines to 20,000. (Or, as Alan Kay puts it, from a whole library
to a single book.) That's 4 orders of magnitude.

From the report, I made a rough breakdown of the causes for code
reduction.  It seems that

 - 1 order of magnitude is gained by removing feature creep.  I agree
   feature creep can be important.  But I also believe most features
   belong to a long tail, where each is needed by a minority of users.
   It does matter, but if the rest of the system is small enough,
   adding the few features you need isn't so difficult any more.

 - 1 order of magnitude is gained by mere good engineering principles.
   In Frank for instance, there is _one_ drawing system, that is used
   everywhere.  Systematic code reuse can go a long way.
 Another example is the code I work with.  I routinely find
   portions whose volume I can divide by 2 merely by rewriting a couple
   of functions.  I fully expect to be able to do much better if I
   could refactor the whole program.  Not because I'm a rock star (I'm
   definitely not).  Very far from that.  Just because the code I
   maintain is sufficiently abysmal.

 - 2 orders of magnitude are gained through the use of Problem Oriented
   Languages (instead of C or C++). As examples, I can readily recall:
+ Gezira vs Cairo    (÷95)
+ Ometa  vs Lex+Yacc (÷75)
+ TCP-IP             (÷93)
   So I think this is not exaggerated.
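
As a quick sanity check of the arithmetic, the three factors above compose
multiplicatively (numbers taken straight from this breakdown; a
back-of-envelope only):

# 1 + 1 + 2 orders of magnitude, expressed as factors:
feature_creep = 10    # removing feature creep
engineering   = 10    # good engineering / systematic reuse
pol           = 95    # problem-oriented languages, e.g. Gezira vs Cairo

print(feature_creep * engineering * pol)   # 9500 -- roughly the 10,000x claimed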

Looked at it this way, it doesn't seem so impossible any more.  I
don't expect you to suddenly agree the "4 orders of magnitude" claim
(It still defies my intuition), but you probably disagree specifically
with one of my three points above.  Possible objections I can think of
are:

 - Features matter more than I think they do.
 - One may not expect the user to write his own features, even though
   it would be relatively simple.
 - Current systems may not be as badly written as I think they are.
 - Code reuse could be harder than I think.
 - The two orders of magnitude that seem to come from problem oriented
   languages may not come from _only_ those.  It could come from the
   removal of features, as well as better engineering principles,
   meaning I'm counting some causes twice.

Loup.


BGB wrote:

On 2/27/2012 10:08 PM, Julian Leviston wrote:

Structural optimisation is not compression. Lurk more.


probably will drop this, as arguing about all this is likely pointless
and counter-productive.

but, is there any particular reason for why similar rules and
restrictions wouldn't apply?

(I personally suspect that similar applies to nearly all forms of
communication, including written and spoken natural language, and a
claim that some X can be expressed in Y units does seem a fair amount
like a compression-style claim).


but, anyways, here is a link to another article:
http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem


Julian

On 28/02/2012, at 3:38 PM, BGB wrote:


granted, I remain a little skeptical.

I think there is a bit of a difference though between, say, a log
table, and a typical piece of software.
a log table is, essentially, almost pure redundancy, hence why it can
be regenerated on demand.

a typical application is, instead, a big pile of logic code for a
wide range of behaviors and for dealing with a wide range of special
cases.


"executable math" could very well be functionally equivalent to a
"highly compressed" program, but note in this case that one needs to
count both the size of the "compressed" program, and also the size of
the program needed to "decompress" it (so, the size of the system
would also need to account for the compiler and runtime).

although there is a fair amount of redundancy in typical program code
(logic that is often repeated, duplicated effort between programs,
...), eliminating this redundancy would still yield only a bounded
reduction in total size.

increasing abstraction is likely to, again, be ultimately bounded
(and, often, abstraction differs primarily in form, rather than in
essence, from that of moving more of the system functionality into
library code).


much like with data compression, the concept commonly known as the
"Shannon limit" may well still apply (itself setting an upper limit
to how much is expressible within a given volume of code).
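
To make the bound being invoked concrete, here is the source coding theorem
in miniature -- a toy Python illustration of the textbook formula, nothing
specific to program code:

# Shannon: no lossless code can average fewer than H(X) bits per symbol.
from math import log2

probs = [0.5, 0.25, 0.125, 0.125]            # a toy 4-symbol source
entropy = -sum(p * log2(p) for p in probs)
print(entropy)   # 1.75 -- the floor, in bits/symbol, for any lossless code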

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Ryan Mitchley

On 27/02/2012 19:48, Tony Garnock-Jones wrote:


My interest in it came out of thinking about integrating pub/sub 
(multi- and broadcast) messaging into the heart of a language. What 
would a Smalltalk look like if, instead of a strict unicast model with 
multi- and broadcast constructed atop (via Observer/Observable), it 
had a messaging model capable of natively expressing unicast, anycast, 
multicast, and broadcast patterns?




I've wondered if pattern matching shouldn't be a foundation of method 
resolution (akin to binding with backtracking in Prolog) - if a 
multicast message matches, the "method" is invoked (with much less 
specificity than traditional method resolution by name/token). This is 
maybe closer to the biological model of a cell surface receptor.
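
As a sketch of what that dispatch might look like (Python, with an entirely
invented registration API -- not a claim about any existing system):

# Handlers declare a pattern over message fields; every handler whose
# pattern matches an incoming message fires, receptor-style, rather
# than one method being selected by name.
handlers = []

def on(pattern):
    def register(fn):
        handlers.append((pattern, fn))
        return fn
    return register

def matches(pattern, message):
    # A field matches if it is equal, or if the pattern supplies a
    # predicate for it (a crude stand-in for Prolog-style binding).
    return all(v(message.get(k)) if callable(v) else message.get(k) == v
               for k, v in pattern.items())

def broadcast(message):
    for pattern, fn in handlers:
        if matches(pattern, message):
            fn(message)              # no single receiver: all matches run

@on({"kind": "temperature", "value": lambda v: v is not None and v > 30})
def warn(msg):
    print("hot:", msg["value"])

broadcast({"kind": "temperature", "value": 35})   # -> hot: 35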


Of course, complexity is an issue with this approach (potentially 
NP-complete).


Maybe this has been done and I've missed it.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc