Well, we don't absolutely *need* music notation, but it really helps
many things. We don't *need* the various notations of mathematics
(check out Newton's use of English for complex mathematical
relationships in the Principia), but they really help things.


I do think the hard problem is "design" and all that goes along with it (and 
this is true in music and math too). But that is not the end of it, nor is 
ignoring the design of visual representations that help grokking and thinking a 
good idea.

I think you are confusing/convolving the fact of being able to do
something with the ease of it. This confusion is rampant in computer
science ....

Cheers,

Alan

________________________________
From: Quentin Mathé <qma...@gmail.com>
To: Fundamentals of New Computing <fonc@vpri.org>
Sent: Thu, July 28, 2011 12:32:53 PM
Subject: Re: [fonc] HotDraw's Tool State Machine Editor

Hi Alan,

On 25 July 2011, at 10:08, Alan Kay wrote:

> I don't know of another attempt to build a whole system with wide
> properties in DSLs. But it wouldn't surprise me if there were some
> others around. It requires more design effort, and the tools to make
> languages need to be effective and as easy as possible, but the
> payoffs are worth it. I was asked this question after the HPI talk:
> what about the "Tower of Babel" from using DSLs -- isn't there a
> learning curve problem?
>
> My answer was: yes there is, but if you can get factors of 100s to
> 1000s of decrease in size and increase in clarity, the tradeoff will
> be more like "you have to learn 7 languages, but then there are only
> a few hundred pages of code in the whole system -- vs -- you only
> have to learn one language but the system is 4 million pages of
> code, so you will never come close to understanding it".
>
> (Hint: try to avoid poor language designs -- like perl etc. -- for
> your DSLs ...)
>
> This is kind of a "mathematics is a plural" situation that we
> already have. Maths are made up as DSLs to efficiently represent and
> allow thinking about many different kinds of domains. One of the
> things one learns while learning math is how to learn new
> representations.
>
> This used to be the case 50 years ago when most programming was done
> in machine code. When I was a journeyman programmer at that time, I
> had to learn 10 or 12 different instruction sets and macro-assembler
> systems for the many different computers I had to program in the Air
> Force and then at NCAR. We also had to learn a variety of mid-level
> languages such as Fortran, COBOL, RPG, etc. This was thought of as
> no big deal back then; it was just part of the process.
>
> So when people started talking in the 60s about "POL"s in research
> (Problem Oriented Languages -- what are called DSLs today) this
> seemed like a very good idea to most people (provided that you could
> get them to be efficient enough). This led partly to Ted Steele's
> idea of an "UNCOL" (Universal Computer Oriented Language), which was
> a relatively low-level target for higher-level languages whose
> back-end could be optimized just once for each CPU. Historically, C
> wound up filling this role about 10 years later for people who
> wanted a universal target with an optimizer attached.
>
> Overall, I would say that the biggest difficulties -- in general --
> are still the result of not knowing how to design each and every
> level of software well enough.


As you mention, it looks to me like the really hard problem is the
design and how to push OOP to its boundaries. From this perspective,
I'm not convinced that DSLs are really critical.

DSLs could matter more at the lower levels. For example, a DSL such
as the s-expression language described in 'PEG-based transformer
provides front-, middle and back-end stages in a simple compiler'
seems very convincing; at least the overall result is very
impressive. I was able to understand an entire non-trivial compiler
for the first time in my life :-)
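
Just to illustrate the scale I mean, here is a rough sketch (in
Python, with a made-up grammar and stack-machine target -- not the
memo's actual code) of how a few PEG-style rules can take an
expression from source text to instructions, with the whole
front-to-back pipeline on one screen:

    # Rough sketch only -- invented for illustration, not the VPRI code.
    import re

    def tokenize(src):
        # front end, stage 1: split source into numbers and operators
        return re.findall(r'\d+|[-+*/()]', src)

    def parse(tokens):
        # front end, stage 2: one function per PEG-style rule
        #   expr <- term (('+' | '-') term)*
        #   term <- atom (('*' | '/') atom)*
        #   atom <- '(' expr ')' | number
        def expr(i):
            node, i = term(i)
            while i < len(tokens) and tokens[i] in '+-':
                op = tokens[i]
                rhs, i = term(i + 1)
                node = (op, node, rhs)
            return node, i
        def term(i):
            node, i = atom(i)
            while i < len(tokens) and tokens[i] in '*/':
                op = tokens[i]
                rhs, i = atom(i + 1)
                node = (op, node, rhs)
            return node, i
        def atom(i):
            if tokens[i] == '(':
                node, i = expr(i + 1)
                return node, i + 1        # skip the closing ')'
            return int(tokens[i]), i + 1
        return expr(0)[0]

    def emit(node):
        # back end: flatten the tree into stack-machine instructions
        if isinstance(node, int):
            return [('push', node)]
        op, lhs, rhs = node
        return emit(lhs) + emit(rhs) + [(op,)]

    print(emit(parse(tokenize('1 + 2 * (3 + 4)'))))
    # [('push', 1), ('push', 2), ('push', 3), ('push', 4),
    #  ('+',), ('*',), ('+',)]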

But the closer we get to the user, the less critical DSLs seem to be,
imo.

So I get the impression that STEPS could be written in Smalltalk, or
some improved dialect, with only a marginal impact on the code base
size. Compared to a normal operating system that weighs several
million lines of code, an entirely rethought design with no DSLs
might make it possible to reduce the whole system to 100,000 or
50,000 loc.

Then using DSLs would make it possible to compress the code a bit
more and go down to 20,000 loc, but the real gain would come from the
new design approach rather than from the DSLs.


imo there is a tension between DSLs and frameworks/libraries. As a
framework design is refined more and more, the framework increasingly
stands as its "own distinct language". Once using the framework feels
close to writing in a dedicated language, it's relatively easy to add
a DSL as syntactic sugar on top, but the expressivity or code
compression gains then seem limited in most cases. If you implement
the DSL earlier in the framework's development, the gains can be
greater, because the DSL will paper over the framework's design
limitations, but those limitations will probably show up elsewhere
later on.

To take a concrete example, what looks important in OMeta is the
concept, not OMeta as a DSL. For instance, Newspeak executable
grammars or PetitParser appear to do almost the same as OMeta, but
without a dedicated DSL.
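
A toy sketch of what I mean (in Python rather than Smalltalk; the
Lit/Seq combinators and the rule() helper are invented for
illustration and are not PetitParser's actual API): the fluent
combinator objects already read like grammar rules, and a string
DSL on top amounts to a one-line translator into the same objects.

    # Invented for illustration only -- not PetitParser's real API.
    class Lit:
        def __init__(self, s): self.s = s
        def match(self, text, i=0):
            # succeed by returning the next position, or None on failure
            return i + len(self.s) if text.startswith(self.s, i) else None

    class Seq:
        def __init__(self, *parts): self.parts = parts
        def match(self, text, i=0):
            for p in self.parts:
                i = p.match(text, i)
                if i is None:
                    return None
            return i

    # Framework style: the combinator objects already read like a rule.
    greeting = Seq(Lit('hello'), Lit(' '), Lit('world'))

    # DSL style: a one-line "compiler" from surface syntax into the
    # very same framework objects.
    def rule(spec):
        return Seq(*[Lit(tok) for tok in spec.split('~')])

    greeting2 = rule('hello~ ~world')

    print(greeting.match('hello world'))    # 11
    print(greeting2.match('hello world'))   # 11

At that point the DSL buys a nicer surface syntax, but the real
expressive work is already done by the framework underneath.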

So I'd be curious to know your take on this DSL vs framework issue.
I also wonder if you have studied how big Nile, or some other STEPS
subprojects using DSLs, would become if they were rewritten in
Smalltalk…

Cheers,
Quentin.
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc