On Sun, Apr 14, 2013 at 1:44 PM, Gath-Gealaich
<[email protected]> wrote:

> On Sat, Apr 13, 2013 at 8:29 PM, David Barbour <[email protected]> wrote:
>>
>>
>> On this forum, 'Nile' is sometimes proffered as an example of the power
>> of equational reasoning, but is a domain specific model.
>>
>
> Isn't one of the points of idst/COLA/Frank/whatever-it-is-called-today to
> simplify the development of domain-specific models to such an extent that
> their casual application becomes conceivable, and indeed even practical, as
> opposed to designing a new one-size-fits-all language every decade or so?
>

Aha! And you bring us back on topic! Good for you, man with an awesome
name.

Unfortunately, as I described on Apr 6, DSLs are insufficient. There are
many "non-separable" cross-cutting concerns such as security, maintenance,
and integration. Each of these non-separable concerns cuts across EVERY
layer and domain, i.e. the 'maintenance' issue is just as valid whether
you're considering data models, program code, or what is rendered to a
screen. And so is the 'security' issue. By their very nature, such concerns
cannot be addressed within a single domain-specific model; you would need
to address them in each domain-specific model. But there are a few problems:

* we lack the discipline to address these issues in each domain-specific
model; our brains focus on "the problem at hand"
* we shouldn't be asked to hold a list of twenty properties in our heads;
developing a good DSL is hard enough on its own
* even if we had the discipline and brainpower, it's unlikely that we'd
decide and address these issues consistently across DSLs

Addressing these issues inconsistently is often vastly more frustrating
than *consistently* ignoring them. By analogy, it'd be like having two
countries build a bridge from both ends, only to discover that the halves
don't meet in the middle: now there's a lot of work to undo and redo, a lot
of finger-pointing and politics, and no clear authority or funding to fix
things. But if there were no bridge at all, it'd be easy to add a ferry.

We need those one-size-fits-all languages to alleviate the need for
discipline and potential for inconsistent decision-making. This isn't to
say we can't also have good support for domain-specific models. But there
is no "instead of" or "as opposed to". We need a good platform so that,
when we develop our DSLs, all those non-separable
cross-cutting/infrastructural issues are addressed implicitly.

Until we address these broader issues, we'll continue to develop elegant,
self-contained, domain-specific models like Nile... then scratch our heads
when it comes time to use them in a larger system. The only reason Nile
seems to work in FoNC without bloating other code is that the FoNC
workgroup is a small monastery of computational monks who achieve
consistency by force of will and self-discipline alone. It's easy to
overlook this when enthusiastically reading end-of-year reports or
watching Alan Kay talk.

In real systems, 90% of code (conservatively) is glue code. (An excellent
picture to demonstrate the issues:
http://www.johndcook.com/blog/2011/11/15/plumber-programmers/) It should
come as no surprise that 'scaling' beyond toy projects is often difficult
when our infrastructure is 90% low-grade glue.



>
> I had another idea the other day that could profit from a domain-specific
> model: a model for compiler passes. I stumbled upon the nanopass approach
> [1] to compiler construction some time ago and found that I like it. Then
> it occurred to me that if one could express the passes in some sort of a
> domain-specific language, the total compilation pipeline could be
> assembled from the individual passes in a much more efficient way than
> would be the case if the passes were written in something like C++.
>

> In order to do that, however, no matter what the intermediate values in
> the pipeline would be (trees? annotated graphs?), the individual passes
> would have to be analyzable in some way. For example, two passes may or may
> not interfere with each other, and therefore may or may not be commutative,
> associative, and/or fusable (in the same respect that, say, Haskell maps
> over lists are fusable). I can't imagine that C++ code would be analyzable
> in this way, unless one were to use some severely restricted subset of C++
> code. It would be ugly anyway.
>
> Composing the passes by fusing the traversals and transformations would
> decrease the number of memory accesses, speed up the compilation process,
> and encourage the compiler writer to write more fine-grained passes, in the
> same sense that deep inlining in modern language implementations encourages
> the programmer to write small and reusable routines, even higher-order
> ones, without severe performance penalties. Lowering the barrier to
> implementing such a problem-specific language seems to make such an
> approach viable, perhaps even desirable, given how convoluted most
> "production compilers" seem to be.
>
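This fusion idea can be made concrete with a toy sketch. If each pass is
restricted to a purely local, bottom-up node rewrite, then two passes
compose into a single traversal, exactly like the Haskell law
`map f . map g == map (f . g)`. A minimal sketch in Python follows; all
names here (`transform`, `fuse`, the example passes) are hypothetical
illustrations, not part of the nanopass framework:

```python
# Toy sketch: a compiler "pass" is a node-level rewrite applied by one
# generic bottom-up traversal. Fusing passes = composing node rewrites,
# so N passes cost one tree walk instead of N.

def transform(tree, rewrite):
    """Apply `rewrite` to every node, children first (bottom-up)."""
    if isinstance(tree, tuple):              # interior node: (op, *children)
        op, *kids = tree
        tree = (op, *[transform(k, rewrite) for k in kids])
    return rewrite(tree)

def fuse(*rewrites):
    """Compose node-level rewrites so they run in a single traversal."""
    def fused(node):
        for r in rewrites:
            node = r(node)
        return node
    return fused

# Two example passes: fold constant additions, then strip '+ 0'.
def fold_add(node):
    if isinstance(node, tuple) and node[0] == '+' \
            and all(isinstance(k, int) for k in node[1:]):
        return node[1] + node[2]
    return node

def drop_zero(node):
    if isinstance(node, tuple) and node[0] == '+' and node[2] == 0:
        return node[1]
    return node

ast = ('+', ('+', 1, 2), ('+', 'x', 0))
# Unfused: two tree walks. Fused: one walk, same result.
assert transform(transform(ast, fold_add), drop_zero) == \
       transform(ast, fuse(fold_add, drop_zero))
print(transform(ast, fuse(fold_add, drop_zero)))   # ('+', 3, 'x')
```

Of course, fusing like this is only sound when the node rewrites don't
interfere with one another, which is exactly why the passes need to be
analyzable, as you say; opaque C++ code gives a framework no way to check
that.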
> (If I've just written something that amounts to complete gibberish, please
> shoot me. I just felt like writing down an idea that occurred to me
> recently and bouncing it off somebody.)
>
> - Gath
>
> [1] Kent Dybvig, A nanopass framework for compiler education (2005),
> http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.72.5578
>
>
> _______________________________________________
> fonc mailing list
> [email protected]
> http://vpri.org/mailman/listinfo/fonc
>
>