Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Loup Vaillant

Originally,  the VPRI claims to be able to do a system that's 10,000
smaller than our current bloatware.  That's going from roughly 200
million lines to 20,000. (Or, as Alan Kay puts it, from a whole library
to a single book.) That's 4 orders of magnitude.

From the report, I made a rough break down of the causes for code
reduction.  It seems that

 - 1 order of magnitude is gained by removing feature creep.  I agree
   feature creep can be important.  But I also believe most feature
   belong to a long tail, where each is needed by a minority of users.
   It does matter, but if the rest of the system is small enough,
   adding the few features you need isn't so difficult any more.

 - 1 order of magnitude is gained by mere good engineering principles.
   In Frank for instance, there is _one_ drawing system, that is used
   for everywhere.  Systematic code reuse can go a long way.
 Another example is the  code I work with.  I routinely find
   portions whose volume I can divide by 2 merely by rewriting a couple
   of functions.  I fully expect to be able to do much better if I
   could refactor the whole program.  Not because I'm a rock star (I'm
   definitely not).  Very far from that.  Just because the code I
   maintain is sufficiently abysmal.

 - 2 orders of magnitude are gained through the use of Problem Oriented
   Languages (instead of C or C++). As examples, I can readily recall:
+ Gezira vs Cairo(÷95)
+ Ometa  vs Lex+Yacc (÷75)
+ TCP-IP (÷93)
   So I think this is not exaggerated.

Looked at it this way, it doesn't seems so impossible any more.  I
don't expect you to suddenly agree the 4 orders of magnitude claim
(It still defies my intuition), but you probably disagree specifically
with one of my three points above.  Possible objections I can think of
are:

 - Features matter more than I think they do.
 - One may not expect the user to write his own features, even though
   it would be relatively simple.
 - Current systems may be not as badly written as I think they are.
 - Code reuse could be harder than I think.
 - The two orders of magnitude that seem to come from problem oriented
   languages may not come from _only_ those.  It could come from the
   removal of features, as well as better engineering principles,
   meaning I'm counting some causes twice.

Loup.


BGB wrote:

On 2/27/2012 10:08 PM, Julian Leviston wrote:

Structural optimisation is not compression. Lurk more.


probably will drop this, as arguing about all this is likely pointless
and counter-productive.

but, is there any particular reason for why similar rules and
restrictions wouldn't apply?

(I personally suspect that similar applies to nearly all forms of
communication, including written and spoken natural language, and a
claim that some X can be expressed in Y units does seem a fair amount
like a compression-style claim).


but, anyways, here is a link to another article:
http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem


Julian

On 28/02/2012, at 3:38 PM, BGB wrote:


granted, I remain a little skeptical.

I think there is a bit of a difference though between, say, a log
table, and a typical piece of software.
a log table is, essentially, almost pure redundancy, hence why it can
be regenerated on demand.

a typical application is, instead, a big pile of logic code for a
wide range of behaviors and for dealing with a wide range of special
cases.


executable math could very well be functionally equivalent to a
highly compressed program, but note in this case that one needs to
count both the size of the compressed program, and also the size of
the program needed to decompress it (so, the size of the system
would also need to account for the compiler and runtime).

although there is a fair amount of redundancy in typical program code
(logic that is often repeated, duplicated effort between programs,
...), eliminating this redundancy would still have a bounded
reduction in total size.

increasing abstraction is likely to, again, be ultimately bounded
(and, often, abstraction differs primarily in form, rather than in
essence, from that of moving more of the system functionality into
library code).


much like with data compression, the concept commonly known as the
Shannon limit may well still apply (itself setting an upper limit
to how much is expressible within a given volume of code).

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Reuben Thomas
On 28 February 2012 16:41, BGB cr88...@gmail.com wrote:

  - 1 order of magnitude is gained by removing feature creep.  I agree
   feature creep can be important.  But I also believe most feature
   belong to a long tail, where each is needed by a minority of users.
   It does matter, but if the rest of the system is small enough,
   adding the few features you need isn't so difficult any more.


 this could help some, but isn't likely to result in an order of magnitude.

Example: in Linux 3.0.0, which has many drivers (and Linux is often
cited as being mostly drivers), actually counting the code reveals
about 55-60% in drivers (depending how you count). So that even with
only one hardware configuration, you'd save less than 50% of the code
size, i.e. a factor of 2 at very best.

-- 
http://rrt.sc3d.org
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Loup

Very good question -- and tell your Boss he should support you!

If your boss has a math or science background, this will be an easy sell 
because there are many nice analogies that hold, and also some good examples in 
computing itself.

The POL approach is generally good, but for a particular problem area could be 
as difficult as any other approach. One general argument is that 
non-machine-code languages are POLs of a weak sort, but are more effective 
than writing machine code for most problems. (This was quite controversial 50 
years ago -- and lots of bosses forbade using any higher level language.)

Four arguments against POLs are the difficulties of (a) designing them, (b) 
making them, (c) creating IDE etc tools for them, and (d) learning them. (These 
are similar to the arguments about using math and science in engineering, but 
are not completely bogus for a small subset of problems ...).

Companies (and programmers within) are rarely rewarded for saving costs over 
the real lifetime of a piece of software (similar problems exist in the climate 
problems we are facing).These are social problems, but part of real 
engineering. However, at some point life-cycle costs and savings will become 
something that is accounted and rewarded-or-dinged. 

An argument that resonates with some bosses is the debuggable 
requirements/specifications - ship the prototype and improve it whose 
benefits show up early on. However, these quicker track processes will often be 
stressed for time to do a new POL.

This suggests that one of the most important POLs to be worked on are the ones 
that are for making POLs quickly. I think this is a huge important area and 
much needs to be done here (also a very good area for new PhD theses!).


Taking all these factors (and there are more), I think the POL and extensible 
language approach works best for really difficult problems that small numbers 
of really good people are hooked up to solve (could be in a company, and very 
often in one of many research venues) -- and especially if the requirements 
will need to change quite a bit, both from learning curve and quick response to 
the outside world conditions.

Here's where a factor of 100 or 1000 (sometimes even a factor of 10) less code 
will be qualitatively powerful.

Right now I draw a line at *100. If you can get this or more, it is worth 
surmounting the four difficulties listed above. If you can get *1000, you are 
in a completely new world of development and thinking.


Cheers,

Alan






 From: Loup Vaillant l...@loup-vaillant.fr
To: fonc@vpri.org 
Sent: Tuesday, February 28, 2012 8:17 AM
Subject: Re: [fonc] Error trying to compile COLA
 
Alan Kay wrote:
 Hi Loup

 As I've said and written over the years about this project, it is not
 possible to compare features in a direct way here.

Yes, I'm aware of that.  The problem rises when I do advocacy. A
response I often get is but with only 20,000 lines, they gotta
leave features out!.  It is not easy to explain that a point by
point comparison is either unfair or flatly impossible.


 Our estimate so far is that we are getting our best results from the
 consolidated redesign (folding features into each other) and then from
 the POLs. We are still doing many approaches where we thought we'd have
 the most problems with LOCs, namely at the bottom.

If I got it, what you call consolidated redesign encompasses what I
called feature creep and good engineering principles (I understand
now that they can't be easily separated). I originally estimated that:

- You manage to gain 4 orders of magnitude compared to current OSes,
- consolidated redesign gives you roughly 2 of those  (from 200M to 2M),
- problem oriented languages give you the remaining 2.(from 2M  to 20K)

Did I…
- overstated the power of problem oriented languages?
- understated the benefits of consolidated redesign?
- forgot something else?

(Sorry to bother you with those details, but I'm currently trying to
  convince my Boss to pay me for a PhD on the grounds that PoLs are
  totally amazing, so I'd better know real fast If I'm being
  over-confident.)

Thanks,
Loup.



 Cheers,

 Alan


     *From:* Loup Vaillant l...@loup-vaillant.fr
     *To:* fonc@vpri.org
     *Sent:* Tuesday, February 28, 2012 2:21 AM
     *Subject:* Re: [fonc] Error trying to compile COLA

     Originally, the VPRI claims to be able to do a system that's 10,000
     smaller than our current bloatware. That's going from roughly 200
     million lines to 20,000. (Or, as Alan Kay puts it, from a whole library
     to a single book.) That's 4 orders of magnitude.

      From the report, I made a rough break down of the causes for code
     reduction. It seems that

     - 1 order of magnitude is gained by removing feature creep. I agree
     feature creep can be important. But I also believe most feature
     belong to a long tail, where each is needed by a minority of users.
     It does matter, 

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
Hi Reuben

Yep. One of the many finesses in the STEPS project was to point out that 
requiring OSs to have drivers for everything misses what being networked is all 
about. In a nicer distributed systems design (such as Popek's LOCUS), one would 
get drivers from the devices automatically, and they would not be part of any 
OS code count. Apple even did this in the early days of the Mac for its own 
devices, but couldn't get enough other vendors to see why this was a really big 
idea.

Eventually the OS melts away to almost nothing (as it did at PARC in the 70s).

Then the question starts to become how much code has to be written to make the 
various functional parts that will be semi-automatically integrated to make 
'vanilla personal computing'  ?


Cheers,

Alan





 From: Reuben Thomas r...@sc3d.org
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Tuesday, February 28, 2012 9:33 AM
Subject: Re: [fonc] Error trying to compile COLA
 
On 28 February 2012 16:41, BGB cr88...@gmail.com wrote:

  - 1 order of magnitude is gained by removing feature creep.  I agree
   feature creep can be important.  But I also believe most feature
   belong to a long tail, where each is needed by a minority of users.
   It does matter, but if the rest of the system is small enough,
   adding the few features you need isn't so difficult any more.


 this could help some, but isn't likely to result in an order of magnitude.

Example: in Linux 3.0.0, which has many drivers (and Linux is often
cited as being mostly drivers), actually counting the code reveals
about 55-60% in drivers (depending how you count). So that even with
only one hardware configuration, you'd save less than 50% of the code
size, i.e. a factor of 2 at very best.

-- 
http://rrt.sc3d.org
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Jakob Praher
Dear Alan,

Am 28.02.12 14:54, schrieb Alan Kay:
 Hi Ryan

 Check out Smalltalk-71, which was a design to do just what you suggest
 -- it was basically an attempt to combine some of my favorite
 languages of the time -- Logo and Lisp, Carl Hewitt's Planner, Lisp
 70, etc.
do you have a detailled documentation of Smalltalk 71 somewhere?
Something like a Smalltalk 71 for Smalltalk 80 programmers :-)
In the early history of Smalltalk you mention it as

 It was a kind of parser with object-attachment that executed tokens
directly.

From the examples I think that do 'expr' is evaluating expr by using
previous to 'ident' :arg1..:argN body.

As an example do 'factorial 3' should  evaluate to 6 considering:

to 'factorial' 0 is 1
to 'factorial' :n do 'n*factorial n-1'

What about arithmetic and precendence: What part of language was built
into the system?
- :var denote variables, whereas var denotes the instantiated value of
:var in the expr, e.g. :n vs 'n-1'
- '' denote simple tokens (in the head) as well as expressions (in
the body)?
- to, do are keywords
- () can be used for precedence

You described evaluation as straightforward pattern-matching.
It somehow reminds me of a term rewriting system -  e.g 'hd' ('cons' :a
:b) '-'  :c  is a structured term.
I know rewriting systems which first parse into an abstract
representation (e.g. prefix form) and transforms on the abstract syntax
- whereas in Smalltalk 71 the concrete syntax seems to be used in the rules.

Also it seems redundant to both have:
to 'hd' ('cons' :a :b) do 'a'
and
to 'hd' ('cons' :a :b) '-'  :c  do 'a - c'

Is this made to make sure that the left hand side of - has to be a hd
(cons :a :b) expression?

Best,
Jakob


 This never got implemented because of a bet that turned into
 Smalltalk-72, which also did what you suggest, but in a less
 comprehensive way -- think of each object as a Lisp closure that could
 be sent a pointer to the message and could then parse-and-eval that. 

 A key to scaling -- that we didn't try to do -- is semantic typing
 (which I think is discussed in some of the STEPS material) -- that is:
 to be able to characterize the meaning of what is needed and produced
 in terms of a description rather than a label. Looks like we won't get
 to that idea this time either.

 Cheers,

 Alan

 
 *From:* Ryan Mitchley ryan.mitch...@gmail.com
 *To:* fonc@vpri.org
 *Sent:* Tuesday, February 28, 2012 12:57 AM
 *Subject:* Re: [fonc] Error trying to compile COLA

 On 27/02/2012 19:48, Tony Garnock-Jones wrote:

 My interest in it came out of thinking about integrating pub/sub
 (multi- and broadcast) messaging into the heart of a language.
 What would a Smalltalk look like if, instead of a strict unicast
 model with multi- and broadcast constructed atop (via
 Observer/Observable), it had a messaging model capable of
 natively expressing unicast, anycast, multicast, and broadcast
 patterns? 


 I've wondered if pattern matching shouldn't be a foundation of
 method resolution (akin to binding with backtracking in Prolog) -
 if a multicast message matches, the method is invoked (with much
 less specificity than traditional method resolution by
 name/token). This is maybe closer to the biological model of a
 cell surface receptor.

 Of course, complexity is an issue with this approach (potentially
 NP-complete).

 Maybe this has been done and I've missed it.


 ___
 fonc mailing list
 fonc@vpri.org mailto:fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc




 ___
 fonc mailing list
 fonc@vpri.org
 http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Alan Kay
As I mentioned, Smalltalk-71 was never implemented -- and rarely mentioned (but 
it was part of the history of Smalltalk so I put in a few words about it).

If we had implemented it, we probably would have cleaned up the look of it, and 
also some of the conventions. 

You are right that part of it is like a term rewriting system, and part of it 
has state (object state).

to ... do ... is a operation. The match is on everything between toand do

For example, the first line with cons in it does the car operation (which 
here is hd).

The second line with cons in it does replaca. The value of hd is being 
replaced by the value of c. 

One of the struggles with this design was to try to make something almost as 
simple as LOGO, but that could do language extensions, simple AI backward 
chaining inferencing (like Winograd's block stacking problem), etc.

The turgid punctuations (as I mentioned in the history) were attempts to find 
ways to do many different kinds of matching.

So we were probably lucky that Smalltalk-72 came along  It's pattern 
matching was less general, but quite a bit could be done as far as driving an 
extensible interpreter with it.

However, some of these ideas were done better later. I think by Leler, and 
certainly by Joe Goguen, and others.

Cheers,

Alan



 From: Jakob Praher ja...@praher.info
To: Alan Kay alan.n...@yahoo.com; Fundamentals of New Computing 
fonc@vpri.org 
Sent: Tuesday, February 28, 2012 12:52 PM
Subject: Re: [fonc] Error trying to compile COLA
 

Dear Alan,

Am 28.02.12 14:54, schrieb Alan Kay: 
Hi Ryan


Check out Smalltalk-71, which was a design to do just what you suggest -- it 
was basically an attempt to combine some of my favorite languages of the time 
-- Logo and Lisp, Carl Hewitt's Planner, Lisp 70, etc.
do you have a detailled documentation of Smalltalk 71 somewhere? Something like 
a Smalltalk 71 for Smalltalk 80 programmers :-)
In the early history of Smalltalk you mention it as 
 It was a kind of parser with object-attachment that executed tokens 
 directly. The Early History of Smalltalk From the examples I think that do 
 'expr' is evaluating expr by using previous to 'ident' :arg1..:argN 
 body.

As an example do 'factorial 3' should  evaluate to 6 considering:

to 'factorial' 0 is 1
to 'factorial' :n do 'n*factorial n-1' The Early History of Smalltalk What 
about arithmetic and precendence: What part of language was built into the 
system? 
- :var denote variables, whereas var denotes the instantiated value
of :var in the expr, e.g. :n vs 'n-1'
- '' denote simple tokens (in the head) as well as expressions
(in the body)?
- to, do are keywords
- () can be used for precedence

You described evaluation as straightforward pattern-matching.
It somehow reminds me of a term rewriting system -  e.g 'hd' ('cons'
:a :b) '-'  :c  is a structured term.
I know rewriting systems which first parse into an abstract
representation (e.g. prefix form) and transforms on the abstract
syntax - whereas in Smalltalk 71 the concrete syntax seems to be
used in the rules.

Also it seems redundant to both have:
to 'hd' ('cons' :a :b) do 'a' 
and
to 'hd' ('cons' :a :b) '-'  :c  do 'a - c'

Is this made to make sure that the left hand side of - has to be
a hd (cons :a :b) expression?

Best,
Jakob




This never got implemented because of a bet that turned into Smalltalk-72, 
which also did what you suggest, but in a less comprehensive way -- think of 
each object as a Lisp closure that could be sent a pointer to the message and 
could then parse-and-eval that. 


A key to scaling -- that we didn't try to do -- is semantic typing (which I 
think is discussed in some of the STEPS material) -- that is: to be able to 
characterize the meaning of what is needed and produced in terms of a 
description rather than a label. Looks like we won't get to that idea this 
time either.


Cheers,


Alan





 From: Ryan Mitchley ryan.mitch...@gmail.com
To: fonc@vpri.org 
Sent: Tuesday, February 28, 2012 12:57 AM
Subject: Re: [fonc] Error trying to compile COLA
 

 
On 27/02/2012 19:48, Tony Garnock-Jones wrote:


My interest in it came out of thinking about
  integrating pub/sub (multi- and broadcast)
  messaging into the heart of a language. What
  would a Smalltalk look like if, instead of a
  strict unicast model with multi- and broadcast
  constructed atop (via Observer/Observable), it
  had a messaging model capable of natively
  expressing unicast, anycast, multicast, and
  broadcast patterns? 

I've wondered if pattern matching shouldn't be a
foundation of method resolution (akin to binding
with backtracking in Prolog) - if a multicast

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread BGB

On 2/28/2012 10:33 AM, Reuben Thomas wrote:

On 28 February 2012 16:41, BGBcr88...@gmail.com  wrote:

  - 1 order of magnitude is gained by removing feature creep.  I agree
   feature creep can be important.  But I also believe most feature
   belong to a long tail, where each is needed by a minority of users.
   It does matter, but if the rest of the system is small enough,
   adding the few features you need isn't so difficult any more.


this could help some, but isn't likely to result in an order of magnitude.

Example: in Linux 3.0.0, which has many drivers (and Linux is often
cited as being mostly drivers), actually counting the code reveals
about 55-60% in drivers (depending how you count). So that even with
only one hardware configuration, you'd save less than 50% of the code
size, i.e. a factor of 2 at very best.



yeah, kind of the issue here.

one can shave code, reduce redundancy, increase abstraction, ... but 
this will only buy so much.


then one can start dropping features, but how many can one drop and 
still have everything still work?...


one can be like, well, maybe we will make something like MS-DOS, but in 
long-mode? (IOW: single-big address space, with no user/kernel 
separation, or conventional processes, and all kernel functionality is 
essentially library functionality).



ok, how small can this be made?
maybe 50 kloc, assuming one is sparing with the drivers.


I once wrote an OS kernel (long-dead project, ended nearly a decade 
ago), going and running a line counter on the whole project, I get about 
84 kloc. further investigation: 44 kloc of this was due to a copy of 
NASM sitting in the apps directory (I tried to port NASM to my OS at the 
time, but it didn't really work correctly, very possibly due to a 
quickly kludged-together C library...).



so, a 40 kloc OS kernel, itself at the time bordering on barely worked.

what sorts of HW drivers did it have: ATA / IDE, console, floppy, VESA, 
serial mouse, RS232, RTL8139. how much code as drivers: 11 kloc.


how about VFS: 5 kloc, which include (FS drivers): BSHFS (IIRC, a 
TFTP-like shared filesystem), FAT (12/16/32), RAMFS.


another 5 kloc goes into the network code, which included TCP/IP, ARP, 
PPP, and an HTTP client+server.


boot loader was 288 lines (ASM), setup was 792 lines (ASM).

boot loader: copies boot files (setup.bin and kernel.sys) into RAM 
(in the low 640K). seems hard-coded for FAT12.


setup was mostly responsible for setting up the kernel (copying it to 
the desired address) and entering protected mode (jumping into the 
kernel). this is commonly called a second-stage loader, partly because 
it does a lot of stuff which is too bulky to do in the boot loader 
(which is limited to 512 bytes, whereas setup can be up to 32kB).


setup magic: Enable A20, load GDT, enter big-real mode, check for MZ 
and PE markers (kernel was PE/COFF it seems), copies kernel image to 
VMA base, pushes kernel entry point to stack, remaps IRQs, executes 
32-bit return (jumps into protected mode).


around 1/2 of the setup file is code for jumping between real and 
protected mode and for interfacing with VESA.


note: I was using PE/COFF for apps and libraries as well.
IIRC, I was using a naive process-based model at the time.


could a better HLL have made the kernel drastically smaller? I have my 
doubts...



add the need for maybe a compiler, ... and the line count is sure to get 
larger quickly.


based on my own code, one could probably get a basically functional C 
compiler in around 100 kloc, but maybe smaller could be possible (this 
would include the compiler+assembler+linker).


apps/... would be a bit harder.


in my case, the most direct path would be just dumping all of my 
existing libraries on top of my original OS project, and maybe dropping 
the 3D renderer (since it would be sort of useless without GPU support, 
OpenGL, ...). this would likely weigh in at around 750-800 kloc (but 
could likely be made into a self-hosting OS, since a C compiler would be 
included, and as well there is a C runtime floating around).


this is all still a bit over the stated limit.


maybe, one could try to get a functional GUI framework and some basic 
applications and similar in place (probably maybe 100-200 kloc more, at 
least).


probably, by this point, one is looking at something like a Windows 3.x 
style disk footprint (maybe using 10-15 MB of HDD space or so for all 
the binaries...).



granted, in my case, the vast majority of the code would be C, probably 
with a smaller portion of the OS and applications being written in 
BGBScript or similar...



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread BGB

On 2/28/2012 5:36 PM, Julian Leviston wrote:


On 29/02/2012, at 10:29 AM, BGB wrote:


On 2/28/2012 2:30 PM, Alan Kay wrote:
Yes, this is why the STEPS proposal was careful to avoid the 
current day world.


For example, one of the many current day standards that was 
dismissed immediately is the WWW (one could hardly imagine more of a 
mess).




I don't think the web is entirely horrible:
HTTP basically works, and XML is ok IMO, and an XHTML variant could 
be ok.


Hypertext as a structure is not beautiful nor is it incredibly useful. 
Google exists because of how incredibly flawed the web is and if 
you look at their process for organising it, you start to find 
yourself laughing a lot. The general computing experience these days 
is an absolute shambles and completely crap. Computers are very very 
hard to use. Perhaps you don't see it - perhaps you're in the trees - 
you can't see the forest... but it's intensely bad.




I am not saying it is particularly good, just that it is ok and not 
completely horrible


it is, as are most things in life, generally adequate for what it does...

it could be better, and it could probably also be worse...


It's like someone crapped their pants and google came around and said 
hey you can wear gas masks if you like... when what we really need to 
do is clean up the crap and make sure there's a toilet nearby so that 
people don't crap their pants any more.




IMO, this is more when one gets into the SOAP / WSDL area...


granted, moving up from this, stuff quickly turns terrible (poorly 
designed, and with many shiny new technologies which are almost 
absurdly bad).



practically though, the WWW is difficult to escape, as a system 
lacking support for this is likely to be rejected outright.


You mean like email? A system that doesn't have anything to do with 
the WWW per se that is used daily by millions upon millions of people? 
:P I disagree intensely. In exactly the same was that facebook was 
taken up because it was a slightly less crappy version of myspace, 
something better than the web would be taken up in a heartbeat if it 
was accessible and obviously better.


You could, if you chose to, view this mailing group as a type of 
living document where you can peruse its contents through your email 
program... depending on what you see the web as being... maybe if you 
squint your eyes just the right way, you could envisage the web as 
simply being a means of sharing information to other humans... and 
this mailing group could simply be a different kind of web...


I'd hardly say that email hasn't been a great success... in fact, I 
think email, even though it, too is fairly crappy, has been more of a 
success than the world wide web.




I don't think email and the WWW are mutually exclusive, by any means.

yes, one probably needs email as well, as well as probably a small 
mountain of other things, to make a viable end-user OS...



however, technically, many people do use email via webmail interfaces 
and similar.
nevermind that many people use things like Microsoft Outlook Web 
Access and similar.


so, it is at least conceivable that a future exists where people read 
their email via webmail and access usenet almost entirely via Google 
Groups and similar...


not that it would be necessarily a good thing though...



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-02-28 Thread Dale Schumacher
On Mon, Feb 27, 2012 at 5:23 PM, Charles Perkins ch...@memetech.com wrote:
 I think of the code size reduction like this:

 A book of logarithm tables may be hundreds of pages in length and yet the 
 equation producing the numbers can fit on one line.

 VPRI is exploring runnable math and is seeking key equations from which the 
 functionality of those 1MLOC, 10MLOC, 14MLOC can be automatically produced.

 It's not about code compression, its about functionality expansion.

This reminds me of Gregory Chaitin's concept of algorithmic
complexity, leading to his results relating to compression, logical
irreducibility and understanding [1].

[1] G. Chaitin.  Meta Math! The Quest for Omega, Vintage Books 2006.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc