I’m responding to my own message, because (thanks to Andy Keep) I’ve now 
discovered a big chunk of the answer.

Specifically, it looks like Jeremy Siek’s compilers class includes a textbook 
written by him and Ryan Newton whose preface appears to answer all of my 
questions: they did indeed merge Ghuloum’s approach with nanopasses.

https://iu.instructure.com/courses/1735985

https://github.com/IUCompilerCourse/Essentials-of-Compilation

> On Feb 9, 2019, at 10:33, John Clements <cleme...@brinckerhoff.org> wrote:
> 
> 
> 
>> On Feb 8, 2019, at 15:01, George Neuner <gneun...@comcast.net> wrote:
>> 
>> On Fri, 8 Feb 2019 08:37:33 -0500, Matthias Felleisen
>> <matth...@felleisen.org> wrote:
>> 
>>> 
>>>> On Feb 6, 2019, at 3:19 PM, George Neuner <gneun...@comcast.net> wrote:
>> 
>>>> 
>>>> The idea that a compiler should be structured as multiple passes each
>>>> doing just one clearly defined thing is quite old.  I don't have
>>>> references, but I recall some of these ideas being floated in the late
>>>> 80's, early 90's [when I was in school].
>>>> 
>> Interestingly, LLVM began (circa 2000) with similar notions that the
>>>> compiler should be highly modular and composed of many (relatively
>>>> simple) passes.  Unfortunately, they quickly discovered that, for a C
>>>> compiler at least, having too many passes makes the compiler very slow
>>>> - even on fast machines.  Relatively quickly they started combining
>>>> the simple passes to reduce the running time. 
>>> 
>>> 
>>> I strongly recommend that you read the article(s) to find out how
>>> different nanopasses are from the multiple-pass compilers, which
>>> probably date back to the late 60s at least. — Matthias
>> 
>> I did read the article and it seems to me that the "new idea" is the
>> declarative tool generator framework rather than the so-called
>> "nanopass" approach.  
>> 
>> The distinguishing characteristics of "nanopass" are said to be:
>> 
>> (1) the intermediate-language grammars are formally specified and
>>    enforced;
>> (2) each pass needs to contain traversal code only for forms that
>>    undergo meaningful transformation; and
>> (3) the intermediate code is represented more efficiently, as records.
>> 
>> 
>> IRs implemented using records/structs go back to the 1960s (if not
>> earlier).
>> 
>> 
>> Formally specified IR grammars go back at least to Algol (1958). I
>> concede that I am not aware of any (non-academic) compiler that
>> actually has used this approach: AFAIAA, even the Algol compilers
>> internally were ad hoc.  But the *idea* is not new.
>> 
>> I can recall as a student in the late 80's reading papers about
>> language translation and compiler implementation using Prolog
>> [relevant to this in the sense of being declarative programming]. I
>> don't have cites available, but I was spending a lot of my library
>> time reading CACM and IEEE ToPL so it probably was in one of those.
>> 
>> 
>> I'm not sure what #2 actually refers to.  I may be (probably am)
>> missing something, but it would seem obvious to me that one does not
>> write a whole lot of unnecessary code.
> 
> 
> Hmm… I think I disagree.  In particular, I think you’re missing the notion of 
> a DSL that allows these intermediate languages to be specified much more 
> concisely by allowing users to write, in essence, “this language is just like 
> that one, except that this node is added and this other one is removed.” I 
> think it’s this feature, and its associated 
> automatic-translation-of-untouched-nodes code, that makes it possible to 
> consider writing a 50-pass compiler that would otherwise have about 50 x 10 = 
> 500 “create a node by applying the transformation to the sub-elements” 
> visitor clauses. Right?
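> 
> Concretely, a language diff plus a pass looks something like the sketch 
> below, using the Racket `nanopass` package. (This is a toy example of my 
> own, just to show the shape of the thing; the languages and forms are 
> made up for illustration, not taken from the papers.)
> 
> (require nanopass/base)
> 
> ;; A tiny source language: variables, numbers, and one-armed `let`.
> (define-language L0
>   (terminals
>     (symbol (x))
>     (number (n)))
>   (Expr (e)
>     x
>     n
>     (let ([x e0]) e1)))
> 
> ;; "This language is just like that one, except...":
> ;; L1 removes `let` and adds `lambda` and application.
> (define-language L1
>   (extends L0)
>   (Expr (e)
>     (- (let ([x e0]) e1))
>     (+ (lambda (x) e)
>        (e0 e1))))
> 
> ;; The pass mentions only the form that actually changes; the framework
> ;; generates the traversal code for every untouched node.
> (define-pass remove-let : L0 (e) -> L1 ()
>   (Expr : Expr (e) -> Expr ()
>     [(let ([,x ,[e0]]) ,[e1])
>      `((lambda (,x) ,e1) ,e0)]))
> 
> ;; Building an L0 term by hand and running the pass:
> ;; (remove-let (with-output-language (L0 Expr) `(let ([y 1]) y)))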
> 
> In fact, as someone who’s about to teach a compilers class starting in April 
> and who’s almost fatally prone to last-minute pivots, I have to ask: is 
> anyone that you know (o great racket users list) currently using this 
> approach or these tools? Last year I went with what I think of as the Aziz 
> Ghuloum via Ben Lerner approach, starting with a trivial language and 
> widening it gradually. I see now that Ghuloum was actually teaching at IU 
> when he wrote his 2006 Scheme Workshop paper, and that although he cites 
> about fifteen Dybvig papers, the nanopass papers don’t seem to be among them.
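> 
> For anyone who hasn’t seen it, the starting point in Ghuloum’s paper is 
> tiny: step one is a “compiler” that accepts only an integer literal and 
> emits the assembly to return it. Roughly like this (my paraphrase; his 
> actual code emits the same x86 instruction and links against a small C 
> runtime that calls scheme_entry):
> 
> ;; Step 1 of the incremental approach: the whole language is a single
> ;; integer literal, and the whole compiler is a few lines of assembly.
> (define (compile-program x out)
>   (unless (integer? x)
>     (error 'compile-program "later steps handle more than integers"))
>   (fprintf out "    .globl scheme_entry\n")
>   (fprintf out "scheme_entry:\n")
>   (fprintf out "    movl $~a, %eax\n" x)
>   (fprintf out "    ret\n"))
> 
> Each later step widens the accepted language (immediates, unary 
> primitives, conditionals, and so on) while keeping the compiler runnable 
> end-to-end at every stage.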
> 
> Hmm…
> 
> John
> 


