On 7/28/2011 9:57 AM, Alan Kay wrote:
Well, we don't absolutely *need* music notation, but it really helps
many things. We don't *need* the various notations of mathematics
(check out Newton's use of English for complex mathematical
relationships in the Principia), but it really helps things.
I do think the hard problem is "design" and all that goes along with
it (and this is true in music and math too). But that is not the end
of it, nor is ignoring the design of visual representations that help
grokking and thinking a good idea.
I think you are confusing/convolving the fact of being able to do
something with the ease of it. This confusion is rampant in computer
science ....
yes, agreed...
(possible tangent time...).
even though many mainstream programs involve huge amounts of code, much
of this code is written with relatively little thinking involved (one
throws together some code in an ad-hoc manner, and for the next task
that comes up, just throws some more code out there, ...).
do this enough and one has a lot of code.
cleaner design, factoring things out, ... can help reduce the volume of
code, but at a cost of potentially requiring far more thinking and
mental effort to produce.
DSLs can also help reduce code because they, by their basic nature,
factor out a lot of things (the whole "domain" thing), but the creation
of a good DSL similarly involves a lot of mental factoring work, as well
as the effort of actually building it.
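as a contrived little sketch of the "factoring out the domain" point
(python just for illustration; the rule table and all the names here
are made up):

RULES = [
    ("age",   int,   lambda v: 0 <= v < 150),
    ("email", str,   lambda v: "@" in v),
    ("score", float, lambda v: 0.0 <= v <= 1.0),
]

def validate(record):
    # interpret the rule table: each entry is (field, type, predicate)
    errors = []
    for field, ftype, ok in RULES:
        v = record.get(field)
        if not isinstance(v, ftype) or not ok(v):
            errors.append(field)
    return errors

print(validate({"age": 30, "email": "x@y.z", "score": 0.5}))  # []
print(validate({"age": -1, "email": "nope", "score": 2.0}))
# -> ['age', 'email', 'score']

adding a new rule is then one line in the table, versus yet another
hand-rolled if/raise block; that is, roughly, the kind of compression a
DSL buys, scaled up.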
so, then a lot ends up boiling down to a large set of cost/benefit
tradeoffs.
so, say, as a hypothetical example:
programmer A can partly turn off their brain, and spew out a solution to
a problem which is, say, 10 kloc, in about a week;
programmer B then goes about thinking about it, and produces a more
finely crafted 250 lines after about a month.
now, which is better?...
then, assume sometime later, the original developers are no longer
around, and maintenance is needed (say, because requirements have
changed, or new features were demanded by their superiors).
one may find that, although bigger, programmer A's code is generally
easier to understand and modify (it just sort of meanders along and does
its thing).
meanwhile, maybe programmer B's code is not so easy to understand, and
will tend to blow up in the face of anyone who dares try to alter it.
now, which is better?...
a partial analogy is "entropy" from data compression, which would
roughly correspond to the internal complexity of a system: making code
bigger or smaller may not necessarily change its total complexity, only
its relative density.
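a crude way to make the analogy concrete (python; the two snippets are
invented, and an off-the-shelf compressor stands in for the "true"
information measure):

import zlib

# two invented snippets that notionally do "the same thing"
verbose = b"""
if kind == 'circle':
    area = 3.14159 * r * r
    total = total + area
if kind == 'square':
    area = side * side
    total = total + area
if kind == 'rect':
    area = w * h
    total = total + area
"""
terse = b"total = sum(shape.area() for shape in shapes)"

for name, src in [("verbose", verbose), ("terse", terse)]:
    # compressed size as a rough stand-in for information content
    print(name, len(src), "raw ->", len(zlib.compress(src, 9)))

typically the verbose snippet is several times larger raw, but the gap
shrinks a lot after compression: much of its bulk was redundancy rather
than actual complexity.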
striving for simplicity can also help, but even simplicity can have costs:
sometimes, simplicity in one place may lead to much higher complexity
somewhere else.
for example, simplicity at the lower-levels (towards the "leaves" of a
dependency graph) tends to push complexity "up" the tree (towards the
"root" of the tree).
for example, a person creates a very simplistic compiler IL, which then
pushes the work onto the writer of the compiler's upper end;
the compiler writer doesn't want to deal with it, so then it is pushed
onto the programmer;
the programmer is less happy having to worry about all these added edge
cases, and so they want more pay;
...
then potentially, many levels of an organization are made less happy,
..., mostly because someone near the bottom didn't want to add a number
of "sugar" operations, and took it on faith that the level directly
above them would cover for it.
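a toy sketch of this (python; the IL and its opcodes are invented):

def emit_add_const_v1(reg, k):
    # minimal IL: no immediate forms, constants detour through a temp
    return [("loadk", "tmp", k), ("add", reg, reg, "tmp")]

def emit_add_const_v2(reg, k):
    # IL with one "sugar" op: a single add-immediate instruction
    return [("addk", reg, reg, k)]

# the idiom recurs all over the upper layers (loop counters, array
# indexing, field offsets, ...), so the 2x cost multiplies out:
print(emit_add_const_v1("r1", 4))
# [('loadk', 'tmp', 4), ('add', 'r1', 'r1', 'tmp')]
print(emit_add_const_v2("r1", 4))
# [('addk', 'r1', 'r1', 4)]

and often a peephole pass near the bottom then fuses the 2-op idiom
back together anyway, so the complexity was not removed, just relocated
twice.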
so, simplification is not necessarily a cure-all either; rather, it is
necessary to try to figure out which complexities belong where, with the
goal of finding the lowest overall cost.
for example:
is Java bytecode fairly simple? I would say yes.
what about the JVM as a whole? I would say probably not.
for example, had the JVM used a much more powerful, if likely more
complex, bytecode, it is possible that its overall architectural
complexity would now be lower.
but, then one may find that there are many different possibilities with
differing tradeoffs, and possibly there is a lack of any "ideal"
front-runner.
not that simplicity is a bad thing either, though; it is just better to
try to find a simple way to handle issues, rather than sweep them under
the carpet or push them somewhere else.
or, at least, this is my thinking at the moment...
Cheers,
Alan
------------------------------------------------------------------------
*From:* Quentin Mathé <qma...@gmail.com>
*To:* Fundamentals of New Computing <fonc@vpri.org>
*Sent:* Thu, July 28, 2011 12:32:53 PM
*Subject:* Re: [fonc] HotDraw's Tool State Machine Editor
Hi Alan,
On 25 July 2011, at 10:08, Alan Kay wrote:
> I don't know of another attempt to build a whole system with wide
properties in DSLs. But it wouldn't surprise me if there were some
others around. It requires more design effort, and the tools to make
languages need to be effective and as easy as possible, but the
payoffs are worth it. I was asked this question after the HPI talk:
what about the "Tower of Babel" from using DSLs -- isn't there a
learning curve problem?
>
> My answer was: yes there is, but if you can get factors of 100s to
1000s of decrease in size and increase in clarity, the tradeoff will
be more like "you have to learn 7 languages, but then there are only a
few hundred pages of code in the whole system -- vs -- you only have
to learn one language but the system is 4 million pages of code, so
you will never come close to understanding it".
>
> (Hint: try to avoid poor language designs -- like perl etc. -- for
your DSLs ...)
>
> This is kind of a "mathematics is a plural" situation that we
already have. Maths are made up as DSLs to efficiently represent and
allow thinking about many different kinds of domains. One of the
things one learns while learning math is how to learn new representations.
>
> This used to be the case 50 years ago when most programming was done
in machine code. When I was a journeyman programmer at that time, I
had to learn 10 or 12 different instruction sets and macro-assembler
systems for the many different computers I had to program in the Air
Force and then at NCAR. We also had to learn a variety of mid-level
languages such as Fortran, COBOL, RPG, etc. This was thought of as no
big deal back then, it was just part of the process.
>
> So when people started talking in the 60s about "POL"s in research
(Problem Oriented Languages -- what are called DSLs today) this seemed
like a very good idea to most people (provided that you could get them
to be efficient enough). This led partly to Ted Steele's idea of an
"UNCOL" (Universal Computer Oriented Language) which was a relatively
low-level target for higher level languages whose back-end could be
optimized just once for each cpu. Historically, C wound up filling
this role about 10 years later for people who wanted a universal
target with an optimizer attached.
>
> Overall, I would say that the biggest difficulties -- in general --
are still the result of not knowing how to design each and every level
of software well enough.
As you mention, it looks to me like the really hard problem is the
design and how to push OOP to its boundaries. From this perspective,
I'm not convinced that DSLs are really critical.
DSLs could matter more in the lower levels. For example, a DSL such as
the s-expression language described in 'PEG-based transformer provides
front-, middle and back-end stages in a simple compiler' seems very
convincing, at least the overall result is very impressive. I was able
to understand an entire non-trivial compiler for the first time in my
life :-)
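(To make that concrete, here is a toy sketch in Python, nothing like
the actual report's code: the appeal is that each stage is a small
recursive function over s-expressions. All names are invented.)

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # front end: turn the token list into nested lists (the "AST")
    tok = tokens.pop(0)
    if tok == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)  # drop the ")"
        return node
    return int(tok) if tok.lstrip("-").isdigit() else tok

def compile_expr(node):
    # back end: emit postfix code for a toy stack machine
    if isinstance(node, int):
        return [("push", node)]
    op, a, b = node
    return compile_expr(a) + compile_expr(b) + [(op,)]

print(compile_expr(parse(tokenize("(add 1 (mul 2 3))"))))
# [('push', 1), ('push', 2), ('push', 3), ('mul',), ('add',)]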
But the closer we get to the user, the less critical they seem to be, imo.
So I get the impression that STEPS could be written in Smalltalk or
some improved dialect with a marginal impact on the code base size.
Compared to a normal operating system that weighs several million loc,
with an entirely rethought design but no DSLs, it might be possible to
reduce the whole system to 100,000 or 50,000 loc.
Then using DSLs would allow compressing the code a bit more and going
down to 20,000 loc, but the real gain would come from the new design
approach rather than the DSLs.
imo there is a tension between DSLs and frameworks/libraries. The more
a framework's design is refined, the more the framework stands as its
"own distinct language". Once the point is reached where using a
framework feels close to writing in a dedicated language, it's
relatively easy to add a DSL as syntactic sugar, but the expressivity
or code-compression gains then seem limited in most cases. If you
implement the DSL earlier during the framework's development, the gains
can be larger, because the DSL will cover up the framework's design
limitations, but these will probably manifest elsewhere at a later time.
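(A toy sketch in Python of what I mean, with all names invented: once
the framework API already reads like a language, the "DSL" is a thin
desugaring onto it, so the extra compression is small.)

class Query:
    def __init__(self):
        self.ops = []
    def where(self, field, value):
        self.ops.append(("where", field, value))
        return self
    def order_by(self, field):
        self.ops.append(("order_by", field))
        return self

# framework style: already reads almost like a sentence
q1 = Query().where("status", "open").order_by("date")

# "DSL" on top: a few lines of sugar desugaring to the same calls
def parse_query(text):
    q = Query()
    for clause in text.split(","):
        head, *args = clause.split()
        getattr(q, head)(*args)
    return q

q2 = parse_query("where status open, order_by date")
print(q1.ops == q2.ops)  # True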
To take a concrete example, what looks important in OMeta is the
concept, not OMeta as a DSL. For instance, Newspeak executable grammars
or PetitParser appear to do almost the same as OMeta, but without a
dedicated DSL.
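(Again a toy Python sketch, invented code rather than PetitParser's
actual API: the grammar rules are ordinary values and functions in the
host language, with no dedicated syntax.)

def char(c):
    # parser for a single expected character
    def p(s, i):
        return (c, i + 1) if i < len(s) and s[i] == c else None
    return p

def many1(p):
    # one-or-more repetition, built from any parser p
    def q(s, i):
        out = []
        r = p(s, i)
        while r:
            out.append(r[0])
            i = r[1]
            r = p(s, i)
        return ("".join(out), i) if out else None
    return q

def alt(*ps):
    # ordered choice, as in PEGs
    def q(s, i):
        for p in ps:
            r = p(s, i)
            if r:
                return r
        return None
    return q

digit = alt(*[char(d) for d in "0123456789"])
number = many1(digit)
print(number("42x", 0))  # ('42', 2): the grammar is just host values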
So I'd be curious to know what your take is on this DSL vs. framework issue.
I also wonder if you have studied how big Nile or some other STEPS
subprojects using DSLs would become if they were rewritten in Smalltalk...
Cheers,
Quentin.
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc