Over the course of the last year or so I have begun working on a
family of Scheme language designs as the basis for other research in
which I am currently interested and, in the grand Lisp tradition, I
have started off with meta-circular experimental systems. While
driving to my day-job this morning I had a thought about the
macro-timing problem, one that may have been lost in our long-standing
efforts to have a "nice" static description of the language semantics.
Of course, an actual working interpreter provides a very high-level
(or possibly low-level, depending on your point of view) formal
description of the semantics, with the implementation platform
supplying the set of semantic primitives. So while I expect that most of
the people interested in Scheme standardization will be familiar with
the structure of a meta-circular interpreter, I think it is worthwhile
to provide a brief outline.

A meta-circular interpreter (MCI) is typically defined by two mutually
recursive functions: APPLY and EVAL. Broadly speaking, APPLY has the
job of marshalling arguments and invoking abstractions, while EVAL has
the job of finding values for primitive syntactic forms. If the reader
is alert, you will note that there are probably *four* separate
functions involved (binding management, syntactic reduction, argument
marshalling, abstraction invocation), but the natural synergies
between them have largely dictated the binary breakdown of the MCI
into APPLY and EVAL.
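
To make that "four functions in two" structure concrete, here is a
minimal sketch of such a core. The environment is a naive association
list, and the names MC-EVAL, MC-APPLY, MAKE-CLOSURE and friends are
just placeholders for whatever a real MCI would use:

  (define (make-closure params body env) (list 'closure params body env))
  (define (closure? p)       (and (pair? p) (eq? (car p) 'closure)))
  (define (closure-params p) (cadr p))
  (define (closure-body p)   (caddr p))
  (define (closure-env p)    (cadddr p))

  (define (mc-eval exp env)
    (cond ((symbol? exp) (cdr (assq exp env)))        ; binding management
          ((not (pair? exp)) exp)                     ; self-evaluating
          ((eq? (car exp) 'quote)  (cadr exp))
          ((eq? (car exp) 'lambda)                    ; abstraction
           (make-closure (cadr exp) (caddr exp) env))
          ((eq? (car exp) 'if)
           (if (mc-eval (cadr exp) env)
               (mc-eval (caddr exp) env)
               (mc-eval (cadddr exp) env)))
          (else                                       ; syntactic reduction
           (mc-apply (mc-eval (car exp) env)
                     (map (lambda (e) (mc-eval e env)) (cdr exp))))))

  (define (mc-apply proc args)                        ; argument marshalling
    (if (closure? proc)                               ; + abstraction invocation
        (mc-eval (closure-body proc)
                 (append (map cons (closure-params proc) args)
                         (closure-env proc)))
        (apply proc args)))                           ; punt to the host

  ;; e.g. (mc-eval '((lambda (x) (+ x 1)) 41) (list (cons '+ +)))  => 42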

Which is all well and good, but what does any of this have to do with
macros? Early Lisps treated macros as syntactic abstractions and
therefore incorporated their reduction into the APPLY function. This
led us to FEXPRs, and ultimately (I haven't done a deep study of it
yet) to John Shutt's vau-calculus as implemented in the Kernel
programming language. However, it seems that macros could also be
viewed as syntactic *extensions* and as such be more properly
considered as part of EVAL's functional remit. From this view it seems
that we ultimately derive hygienic macros - as their reduction can be
accomplished entirely in terms of textual manipulation, with only
minimal (actually zero, I think) reference to anything outside of the
syntactic reduction machinery in the MCI.
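
By way of illustration, the "syntactic extension" reading looks
roughly like this in code: a purely textual rewriter consulted from
inside EVAL. The table and its accessors are hypothetical names; the
point is only that the transformer maps source to source and never
sees the environment (a FEXPR-style macro, by contrast, would hand the
unevaluated operands - and typically the dynamic environment - over to
APPLY):

  (define *macro-table* '())              ; alist: keyword -> transformer

  (define (define-macro! keyword transformer)
    (set! *macro-table* (cons (cons keyword transformer) *macro-table*)))

  (define (macro-keyword? exp)
    (and (pair? exp) (symbol? (car exp)) (assq (car exp) *macro-table*)))

  (define (expand-macro exp)
    (let ((transformer (cdr (assq (car exp) *macro-table*))))
      (transformer exp)))                 ; source in, source out - no ENV

  ;; In MC-EVAL this would sit just before the application clause:
  ;;   ((macro-keyword? exp) (mc-eval (expand-macro exp) env))

  ;; example: OR2 as a purely textual rewrite (it naively evaluates its
  ;; first operand twice, but it shows the shape)
  (define-macro! 'or2
    (lambda (form)
      (list 'if (cadr form) (cadr form) (caddr form))))

  (expand-macro '(or2 (null? x) (pair? x)))
  ;; => (if (null? x) (null? x) (pair? x))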

I think it can be safely assumed that hygienic macros - SYNTAX-RULES
in particular - have found a certain "sweet spot" in terms of macro
semantics. The problems come in when we try to break hygiene. Then we
start having to cross not only EVAL's internal distinction between
binding management and syntactic reduction, but also the stronger
border between EVAL and APPLY.
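
The standard illustration of what goes wrong is an anaphoric form -
say a hypothetical AIF that wants to bind the user-visible identifier
IT. Written naively with SYNTAX-RULES, hygiene renames the introduced
binding, so the IT the user writes in the consequent never sees it:

  (define-syntax aif
    (syntax-rules ()
      ((_ test consequent alternative)
       (let ((it test))                 ; this IT is renamed by hygiene
         (if it consequent alternative)))))

  ;; (aif (assq 'x alist) (cdr it) #f)
  ;; => error: IT is unbound at the use site, because the IT above is
  ;;    not the IT the user wrote. Making it the same IT means reaching
  ;;    into binding management, i.e. crossing the borders just described.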

Now it is interesting to note that hygienic macros float the syntactic
extension functionality outside of the entire EVAL/APPLY loop and into
a conceptually separate pass. This makes environmental manipulations
completely impossible for the macro processor. But my question (and
challenge) is: is there a semantically sound way to model macros that
returns them to their place inside EVAL? I am thinking in terms of an
operator that can extend the reduction rules of the operational
semantics of the language - perhaps that's daft in the way that I frame
it, but it is certainly how the code *could* be (and certainly was)
structured in some MCIs.
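
For what it's worth, the kind of structure I have in mind looks
something like the sketch below: a run-time-extensible table of
reduction rules consulted from inside a cut-down variant of the
earlier MC-EVAL, where each transformer is handed the current
environment. Everything here (*REDUCTIONS*, ADD-REDUCTION!, the naive
alist environment) is a placeholder for illustration, not a proposal
for surface syntax:

  (define *reductions* '())       ; alist: keyword -> (lambda (form env) ...)

  (define (add-reduction! keyword transformer)
    (set! *reductions* (cons (cons keyword transformer) *reductions*)))

  (define (mc-eval exp env)
    (cond ((symbol? exp) (cdr (assq exp env)))
          ((not (pair? exp)) exp)
          ((assq (car exp) *reductions*)
           => (lambda (entry)
                ;; the transformer sees ENV - exactly the environmental
                ;; access that a separate expansion pass rules out
                (mc-eval ((cdr entry) exp env) env)))
          ((eq? (car exp) 'quote) (cadr exp))
          (else (apply (mc-eval (car exp) env)
                       (map (lambda (e) (mc-eval e env)) (cdr exp))))))

  ;; example: a WHEN-BOUND form whose reduction inspects the environment
  (add-reduction! 'when-bound
    (lambda (form env)
      (if (assq (cadr form) env) (caddr form) (cadddr form))))

  (mc-eval '(when-bound x x 'no) (list (cons 'x 42)))   ; => 42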

Something to think about, especially w/rt Thing1 and first-class
environments &cet...

david rush
-- 
GPG Public key at http://cyber-rush.org/drr/gpg-public-key.txt
