much agreed.

pardon my likely inelegant extension:

seemingly, nearly any problem can be abstracted, and more elegant solutions 
can be devised for long-standing problems.

for example, to abstract over the HW, there was the CPU instruction set;
to abstract over the instruction set, there was the assembler;
to abstract over assembler, there are compilers (in this context I will 
include low-level languages and many earlier HLLs);
to abstract further, there are modern HLLs and OOP;
to abstract over HLLs, there may be DSLs and specialized code-generating 
tools;
...
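as a toy illustration of that last rung (the code-generating tools), a few lines of Python can act as a trivial "compiler" from a small declarative spec down to executable source; the spec format and class name here are invented purely for the example:

```python
# toy code generator: "compiles" a tiny declarative spec (a class name
# plus a list of field names) into Python source for an accessor class.
# the spec format is made up for this example.

def gen_class(name, fields):
    lines = [f"class {name}:"]
    lines.append("    def __init__(self, " + ", ".join(fields) + "):")
    for f in fields:
        lines.append(f"        self.{f} = {f}")
    return "\n".join(lines)

src = gen_class("Point", ["x", "y"])
ns = {}
exec(src, ns)               # "load" the generated source
p = ns["Point"](3, 4)
print(p.x, p.y)             # -> 3 4
```

the point being that each such generator is itself another small abstraction layer sitting on top of the HLL.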

AFAICT, most modern compilers involve at least 2 major translation stages in 
getting from the HLL down to the ASM level: the input HLL is processed, 
usually into some compiler-specific IL or IR;
this IL or IR is then further processed, and the target ASM code is emitted 
(sometimes an ASM post-optimizer is used as well);
...
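a minimal sketch of that two-stage shape, using a deliberately tiny made-up "HLL" (infix arithmetic) and an invented stack-machine IR and pseudo-ASM; nothing here reflects any particular real compiler:

```python
import ast

# stage 1: "HLL" (an arithmetic expression) -> stack-machine IR
def to_ir(src):
    ir = []
    def walk(node):
        if isinstance(node, ast.BinOp):
            walk(node.left)
            walk(node.right)
            op = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL"}[type(node.op)]
            ir.append((op, None))
        elif isinstance(node, ast.Constant):
            ir.append(("PUSH", node.value))
    walk(ast.parse(src, mode="eval").body)
    return ir

# stage 2: IR -> (invented) target "ASM" text
def to_asm(ir):
    out = []
    for op, arg in ir:
        out.append(f"push {arg}" if op == "PUSH" else op.lower())
    return "\n".join(out)

print(to_asm(to_ir("1 + 2 * 3")))
# push 1
# push 2
# push 3
# mul
# add
```

a real compiler's IR carries far more (types, control flow, etc.), but the split is the same: front end to IR, back end from IR to target code.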


in terms of using these systems, each level is (usually) a little nicer and a 
little cleaner than the one below it.

granted, there is a cost:
usually the total cost (in terms of system complexity) is larger with these 
facilities available (for example, an assembler plus an app's ASM code is 
larger than a binary-coded app by itself, and a compiler is often far from 
being a small or simple piece of code).

but, in the larger sense, it is a good tradeoff, since a small amount of HLL 
code is a much smaller price to pay than a mountain of ASM.

but, one should also not forget that a lot of work has gone into all these 
little things which can be almost trivially overlooked: the many man-years 
which have gone into the operating systems and compiler technologies one can 
so easily take for granted.


it is an interesting question though, namely: what would the most "minimally 
redundant yet maximally capable" system look like? that is, whether one can 
increase the overall abstraction of a system while at the same time reducing 
its overall complexity (vs what it would have been otherwise), although I have 
my doubts that this can be done in the general case.

making an observation from data compression:
usually there is a roughly fixed ratio between the compressed size and the 
entropy of the data, and pushing beyond this ratio brings rapidly increasing 
costs for rapidly diminishing returns, ultimately hitting a hard limit (the 
Shannon limit, i.e. the entropy of the source).
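the Shannon bound can be made concrete in a few lines of Python; zlib is used here just as a convenient stand-in for "a typical compressor", and the entropy computed is only the order-0 (byte-frequency) estimate:

```python
import math
import zlib
from collections import Counter

def entropy_bits(data):
    """order-0 Shannon entropy of the byte distribution, in bits per byte."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

data = b"abracadabra " * 100

# order-0 entropy bound, in bytes; a compressor that models repetition
# (as zlib does) can beat this order-0 figure, but no lossless coder
# can beat the *true* entropy of the source.
bound = entropy_bits(data) * len(data) / 8
packed = len(zlib.compress(data, 9))
print(f"raw={len(data)}  zlib={packed}  order-0 bound~{bound:.0f}")
```

the qualitative shape is the point: better modeling buys a bounded amount, at rapidly increasing cost, and the entropy itself is the wall.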


a similar notion could apply to systems complexity:
the minimally redundant system may be only some margin smaller than a typical 
system, and given programmers tend to be relatively self-compressing, it is 
possible this margin is fairly small (meaning that, very possibly, the more 
abstracted system will always be more internally complex than the lower-level 
system).

like, one will always need more logic gates to have a full CPU than to drive an 
ALU directly via switches and lights.


note:
none of this is meant to denigrate tools research, as there is still plenty of 
room for research, and for seeing what works and what doesn't.

like, I believe in utility, but utility is the goal, not the imperative...

often, the thing which seems useless at the outset may turn out to be 
valuable, and one's "innovative" idea may turn out to be useless, so an open 
mind is needed I think. often, the best strategy may be to try and see: if 
it works, it works, and if it doesn't, it doesn't.

and, yes, in my case I am well experienced with people casting the opinion that 
all of what I am doing is useless, and hell, maybe they are right, but at the 
moment, it isn't too much of a loss.



side note:
on this front, recently I was beating together an idea-spec for an "unusual" 
bytecode design (basically, the idea is that opcode binding is symbolic, 
rather than using fixed pre-assigned opcode numbers, and that tables define 
their own structure), ... if anyone cares I could make it available (like 
putting it online and posting a link or whatever).
(note: the overall structure was largely inspired by IFF, WAD, and MSIL, and 
follows fairly straightforwardly from them).
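not having seen the spec, the following is purely a guess at what "symbolic opcode binding" could look like in miniature (every name and format detail below is invented): the module carries its own table mapping locally chosen opcode numbers to symbolic operation names, and the loader binds those names to handlers at load time, rather than relying on globally fixed opcode values:

```python
# hypothetical sketch of symbolic opcode binding (all names invented):
# each module declares its own opcode table, mapping whatever local
# numbers it uses to symbolic operation names; the VM resolves names
# to handlers at load time instead of hard-coding opcode numbers.

HANDLERS = {
    "push": lambda st, arg: st.append(arg),
    "add":  lambda st, arg: st.append(st.pop() + st.pop()),
}

def load(module):
    # bind the module's local opcode numbers to handlers by name
    bound = {num: HANDLERS[name] for num, name in module["optab"].items()}
    def run():
        stack = []
        for num, arg in module["code"]:
            bound[num](stack, arg)
        return stack[-1]
    return run

# a module is free to number its opcodes however it likes
mod = {
    "optab": {7: "push", 3: "add"},
    "code":  [(7, 1), (7, 2), (3, None)],
}
print(load(mod)())  # -> 3
```

the appeal of such a scheme would be that opcode numbering stops being a global compatibility contract: old VMs can reject (or interpret around) names they don't know, and modules don't collide over number assignments.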

granted, as I haven't tried implementing or using the thing yet, I don't really 
know if the design "works" (really, it could just be destined for the dustbin 
of pointless ideas, FWIW...).



or such...


  ----- Original Message ----- 
  From: Alan Kay 
  To: Fundamentals of New Computing 
  Sent: Thursday, July 08, 2010 6:01 PM
  Subject: Re: [fonc] goals


  Once a project gets going it usually winds up with a few more goals than 
those that got it started -- partly because the individual researchers bring 
their own perspectives to the mix.

  But the original goals of STEPS were pretty simple and longstanding. They 
came from thinking that the size of many large systems in terms of amount of 
code written seemed very far out of balance -- by many orders of magnitude -- 
with intuitions about their actual "mathematical content". This led to a 
"science of the artificial" approach of taking the phenomena of already 
produced artifacts in the general area of personal computing, and seeing if 
very compact "runable maths" could be invented and built to model very similar 
behaviors. 

  If this could be done, then some of the approaches in the new models would 
represent better ways to design and program to complex behaviors -- which could 
be very illuminating about systems designs and representations -- and some of 
these would likely be advances in programming in general.

  I think of this as "scientific exploration via coevolving mathematics and 
engineering".

  Cheers,

  Alan





------------------------------------------------------------------------------


  _______________________________________________
  fonc mailing list
  [email protected]
  http://vpri.org/mailman/listinfo/fonc
