Hi Loup
Very good question -- and tell your Boss he should support you!
If your boss has a math or science background, this will be an easy sell
because there are many nice analogies that hold, and also some good examples in
computing itself.
The POL approach is generally good, but for a particular problem area it
could be as difficult as any other approach. One general argument is that
"non-machine-code" languages are POLs of a weak sort, but are more effective
than writing machine code for most problems. (This was quite controversial 50
years ago -- and lots of bosses forbade using any higher-level language.)
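(A made-up illustration, not from our work, of what "more effective than
machine code" looks like: the same small computation written with all the
bookkeeping spelled out, and then as a direct statement of intent. Python
here is just a stand-in for "any higher-level language"; the contrast with
real machine code is of course far larger.)

    # Bookkeeping spelled out, machine-code style: indices, counters, steps.
    def total_of_squares_spelled_out(data):
        total = 0
        i = 0
        while i < len(data):
            x = data[i]
            total = total + x * x
            i = i + 1
        return total

    # Higher-level style: state what is wanted, let the language do the rest.
    def total_of_squares(data):
        return sum(x * x for x in data)

The second version is not just shorter -- it is closer to the problem
statement, which is the whole point of a POL.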
Four arguments against POLs are the difficulties of (a) designing them, (b)
making them, (c) creating IDEs and other tools for them, and (d) learning
them. (These are similar to the arguments about using math and science in
engineering, but are not completely bogus for a small subset of problems ...).
Companies (and the programmers within them) are rarely rewarded for saving
costs over the real lifetime of a piece of software (similar problems exist
with the climate issues we are facing). These are social problems, but they
are part of real engineering. However, at some point life-cycle costs and
savings will become something that is accounted for and rewarded-or-dinged.
An argument that resonates with some bosses is the "debuggable
requirements/specifications -> ship the prototype and improve it" process,
whose benefits show up early on. However, these quicker-track processes will
often be stressed for time to do a new POL.
This suggests that some of the most important POLs to work on are the ones
for making POLs quickly. I think this is a hugely important area, and much
needs to be done here (it is also a very good area for new PhD theses!).
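To give the flavor (a toy sketch I am making up for this message, in
Python -- it is not OMeta or anything from STEPS): a few dozen lines of
parsing combinators already act as a crude "language for making languages",
and a new little language can then be defined in a couple of lines on top
of them.

    # A toy "POL for making POLs": three combinators, then a tiny language.
    def lit(s):
        # Match the literal string s at position i.
        def p(text, i):
            return (s, i + len(s)) if text.startswith(s, i) else None
        return p

    def number():
        # Match one or more digits, producing an int.
        def p(text, i):
            j = i
            while j < len(text) and text[j].isdigit():
                j += 1
            return (int(text[i:j]), j) if j > i else None
        return p

    def seq(combine, *parsers):
        # Run parsers one after another and combine their results.
        def p(text, i):
            values = []
            for q in parsers:
                r = q(text, i)
                if r is None:
                    return None
                v, i = r
                values.append(v)
            return (combine(*values), i)
        return p

    # The "new language": sums like "10+20+12", defined in two lines.
    def expr(text, i=0):
        r = seq(lambda a, _op, b: a + b, number(), lit("+"), expr)(text, i)
        return r if r is not None else number()(text, i)

    print(expr("10+20+12"))   # -> (42, 8): the value, and how far we parsed

OMeta and the STEPS work carry this much further, but the leverage shows up
even at this scale: once the meta-tool exists, the cost of the next little
language drops to almost nothing.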
Taking all these factors into account (and there are more), I think the POL
and extensible-language approach works best for really difficult problems
that small numbers of really good people are hooked up to solve (this could
be in a company, and very often it is in one of many research venues) -- and
especially if the requirements will need to change quite a bit, both because
of the learning curve and because of the need to respond quickly to
conditions in the outside world.
Here's where a factor of 100 or 1000 (sometimes even a factor of 10) less code
will be qualitatively powerful.
Right now I draw a line at *100. If you can get this or more, it is worth
surmounting the four difficulties listed above. If you can get *1000, you are
in a completely new world of development and thinking.
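To put rough numbers on that, using the figures from your breakdown below
(the characterizations in parentheses are mine):

    2,000,000 lines / 100  =  20,000 lines  (the "single book" of your analogy)
    2,000,000 lines / 1000 =   2,000 lines  (more like a long paper)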
Cheers,
Alan
>________________________________
> From: Loup Vaillant <l...@loup-vaillant.fr>
>To: fonc@vpri.org
>Sent: Tuesday, February 28, 2012 8:17 AM
>Subject: Re: [fonc] Error trying to compile COLA
>
>Alan Kay wrote:
>> Hi Loup
>>
>> As I've said and written over the years about this project, it is not
>> possible to compare features in a direct way here.
>
>Yes, I'm aware of that. The problem arises when I do advocacy. A
>response I often get is "but with only 20,000 lines, they gotta
>leave features out!". It is not easy to explain that a point-by-point
>comparison is either unfair or flatly impossible.
>
>
>> Our estimate so far is that we are getting our best results from the
>> consolidated redesign (folding features into each other) and then from
>> the POLs. We are still doing many approaches where we thought we'd have
>> the most problems with LOCs, namely at "the bottom".
>
>If I got it right, what you call "consolidated redesign" encompasses what I
>called "feature creep" and "good engineering principles" (I understand
>now that they can't be easily separated). I originally estimated that:
>
>- You manage to gain 4 orders of magnitude compared to current OSes,
>- consolidated redesign gives you roughly 2 of those (from 200M to 2M),
>- problem oriented languages give you the remaining 2 (from 2M to 20K).
>
>Did I…
>- overstate the power of problem oriented languages?
>- understate the benefits of consolidated redesign?
>- forget something else?
>
>(Sorry to bother you with those details, but I'm currently trying to
> convince my Boss to pay me for a PhD on the grounds that POLs are
> totally amazing, so I'd better know real fast if I'm being
> over-confident.)
>
>Thanks,
>Loup.
>
>
>
>> Cheers,
>>
>> Alan
>>
>>
>> *From:* Loup Vaillant <l...@loup-vaillant.fr>
>> *To:* fonc@vpri.org
>> *Sent:* Tuesday, February 28, 2012 2:21 AM
>> *Subject:* Re: [fonc] Error trying to compile COLA
>>
>> Originally, VPRI claimed to be able to build a system that's 10,000 times
>> smaller than our current bloatware. That's going from roughly 200
>> million lines to 20,000. (Or, as Alan Kay puts it, from a whole library
>> to a single book.) That's 4 orders of magnitude.
>>
>> From the report, I made a rough breakdown of the causes of the code
>> reduction. It seems that
>>
>> - 1 order of magnitude is gained by removing feature creep. I agree
>> feature creep can be important. But I also believe most features
>> belong to a long tail, where each is needed by only a minority of users.
>> It does matter, but if the rest of the system is small enough,
>> adding the few features you need isn't so difficult any more.
>>
>> - 1 order of magnitude is gained by mere good engineering principles.
>> In Frank, for instance, there is _one_ drawing system, which is used
>> everywhere. Systematic code reuse can go a long way.
>> Another example is the code I work with. I routinely find
>> portions whose volume I can divide by 2 merely by rewriting a couple
>> of functions. I fully expect I could do much better if I
>> could refactor the whole program. Not because I'm a rock star (I'm
>> definitely not, far from it), just because the code I
>> maintain is sufficiently abysmal.
>>
>> - 2 orders of magnitude are gained through the use of Problem Oriented
>> Languages (instead of C or C++). As examples, I can readily recall:
>> + Gezira vs Cairo (÷95)
>> + Ometa vs Lex+Yacc (÷75)
>> + TCP-IP (÷93)
>> So I think this is not exaggerated.
>>
>> Looked at this way, it doesn't seem so impossible any more. I
>> don't expect you to suddenly agree with the "4 orders of magnitude" claim
>> (it still defies my intuition), but you probably disagree specifically
>> with one of my three points above. Possible objections I can think of
>> are:
>>
>> - Features matter more than I think they do.
>> - One may not expect the user to write his own features, even though
>> it would be relatively simple.
>> - Current systems may not be as badly written as I think they are.
>> - Code reuse could be harder than I think.
>> - The two orders of magnitude that seem to come from problem oriented
>> languages may not come from _only_ those. They could come from the
>> removal of features, as well as from better engineering principles,
>> meaning I'm counting some causes twice.
>>
>> Loup.
>>
>>
>> BGB wrote:
>> > On 2/27/2012 10:08 PM, Julian Leviston wrote:
>> >> Structural optimisation is not compression. Lurk more.
>> >
>> > probably will drop this, as arguing about all this is likely
>> > pointless and counter-productive.
>> >
>> > but, is there any particular reason for why similar rules and
>> > restrictions wouldn't apply?
>> >
>> > (I personally suspect that something similar applies to nearly all forms
>> > of communication, including written and spoken natural language, and a
>> > claim that some X can be expressed in Y units does seem a fair amount
>> > like a compression-style claim).
>> >
>> >
>> > but, anyways, here is a link to another article:
>> > http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem
>> >
>> >> Julian
>> >>
>> >> On 28/02/2012, at 3:38 PM, BGB wrote:
>> >>
>> >>> granted, I remain a little skeptical.
>> >>>
>> >>> I think there is a bit of a difference though between, say, a log
>> >>> table, and a typical piece of software.
>> >>> a log table is, essentially, almost pure redundancy, hence why it can
>> >>> be regenerated on demand.
>> >>>
>> >>> a typical application is, instead, a big pile of logic code for a
>> >>> wide range of behaviors and for dealing with a wide range of special
>> >>> cases.
>> >>>
>> >>>
>> >>> "executable math" could very well be functionally equivalent to a
>> >>> "highly compressed" program, but note in this case that one
>> needs to
>> >>> count both the size of the "compressed" program, and also the
>> size of
>> >>> the program needed to "decompress" it (so, the size of the system
>> >>> would also need to account for the compiler and runtime).
>> >>>
>> >>> although there is a fair amount of redundancy in typical program code
>> >>> (logic that is often repeated, duplicated effort between programs,
>> >>> ...), eliminating this redundancy would still give only a bounded
>> >>> reduction in total size.
>> >>>
>> >>> increasing abstraction is likely to, again, be ultimately bounded
>> >>> (and, often, abstraction differs primarily in form, rather than in
>> >>> essence, from that of moving more of the system functionality into
>> >>> library code).
>> >>>
>> >>>
>> >>> much like with data compression, the concept commonly known as the
>> >>> "Shannon limit" may well still apply (itself setting an upper limit
>> >>> to how much is expressible within a given volume of code).
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc