Stephen Wilson wrote:
> Hi Tim,
>
> On Sun, Nov 15, 2009 at 11:35:00PM -0500, Tim Daly wrote:
>> This is a request for design discussion for those who are interested.
>
> Your message is very interesting.
>
>> I have some plans on how to get "from here to there" in a slow,
>> incremental fashion. These plans involve things like finding the
>> basis set of the algebra in terms of Lisp functions and then trying
>> to find a closure of this basis set so that the algebra embeds
>> cleanly in this set. The compiler should compile that basis set into
>> an embedding in the prior layer. Lisp can support this by defining
>> domain-specific languages for each layer and macros from one layer
>> to another.
>
> Could you please expand on this point? How do you propose to do this
> incrementally from an implementation point of view? I am asking these
> questions from an engineering standpoint. In particular, are you
> suggesting a rewrite (essentially from scratch, using the current
> implementation as a guide)?
>
> --
> steve
A rewrite from scratch? Well, more like a "remolding of the clay" than a
rewrite. The idea is to move from one working system to another working
system with each change, but eventually restructure the internals
cleanly. At the moment I am restructuring the interpreter/compiler into
literate form (book volume 5). As I do this I am rewriting the code to
be functionally equivalent.
The above comment about finding the basis set for the algebra amounts to
finding every Lisp-source-level function call made by every Lisp algebra
object. I wrote a "calls" function to walk Lisp code and extract the
non-Common-Lisp function calls. This set will be gathered, arranged, and
studied. The idea is to find the "right level" of abstraction (à la
"On Lisp" and "Structure and Interpretation of Computer Programs"). This
level of abstraction forms the current base language hosting the
algebra. It forms a design target for the top-level embedded layer. Next
we rearrange the system to make this layer explicit (define an API, so
to speak). Then we recurse the process with the new layer.
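To give a feel for the walker, here is a deliberately naive sketch. This is not Axiom's actual "calls" function; a real walker must also understand QUOTE, lambda lists, and other binding forms, which this version does not, so it over-collects.

```lisp
(defun calls (form)
  "Collect symbols in operator position anywhere in FORM that are
not in the COMMON-LISP package.  Naive sketch: assumes proper lists
and does not special-case QUOTE or binding forms."
  (let ((acc '()))
    (labels ((walk (f)
               (when (consp f)
                 (let ((head (car f)))
                   (when (and (symbolp head)
                              (not (eq (symbol-package head)
                                       (find-package :common-lisp))))
                     (pushnew head acc))
                   (mapc #'walk (cdr f))))))
      (walk form)
      (nreverse acc))))
```

Running this over a defun such as `(defun f (y) (my-helper (car y)))` reports `MY-HELPER` while filtering out `DEFUN` and `CAR`, which is the raw material for studying the basis set.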
Code in the system that uses nothing but Common Lisp calls forms the
other end of the spectrum. The game is to build the two layers toward
each other in a disciplined way so that each layer embeds properly in
the prior one. Because Axiom was written by so many different people,
the internals are more like "tent poles" of functionality, where each
one builds from nothing all the way to the top, reproducing common
ideas in different ways.
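The embedding idea is easy to sketch with a macro (hypothetical names, not actual Axiom layers): an upper-layer operation is defined entirely in terms of the layer below it, and macroexpansion makes the embedding explicit and inspectable.

```lisp
;; Lower layer: a primitive the layer above is allowed to call.
(defun prim-add (a b)
  (+ a b))

;; Upper layer: PLUS is expressed only in terms of the layer below.
(defmacro plus (&rest args)
  (reduce (lambda (x y) `(prim-add ,x ,y)) args))

;; (plus 1 2 3) expands to (prim-add (prim-add 1 2) 3)
```

With each layer's vocabulary defined this way, the "API" between layers is just the set of lower-layer functions that the upper layer's macros expand into.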
Along the way the code needs major cleanup work. On Lisp and SICP stress
functional programming, but the Axiom internals are wildly coupled
through the use of special variables. Some of these can be eliminated by
passing extra arguments, and most of those that cannot can at least be
confined using lexical closures.
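The cleanup pattern looks roughly like this (hypothetical names, not actual Axiom code): a special variable is replaced by an explicit argument, which a closure then captures once, so no caller can be affected by rebinding a global.

```lisp
;; Before: behavior coupled to global state.
(defvar *trace-stream* *standard-output*)
(defun old-emit (x)
  (format *trace-stream* "~a~%" x))

;; After: the stream is an argument, confined behind a closure.
(defun make-emitter (stream)
  (lambda (x) (format stream "~a~%" x)))
```

A caller then does `(let ((emit (make-emitter *standard-output*))) (funcall emit 'done))`, and the coupling is visible at exactly one place: where the closure is made.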
In addition, the use of Boot code led to a fantastic amount of list
processing where it is not appropriate. Major structures in the system
should be defstruct objects or some other kind of structured
representation. Because of the Boot idioms it is next to impossible to
find all the places a data structure is modified; side-effecting of
global state happens all over the place. Now that the Boot code is gone,
the data can take a more natural and better-disciplined shape.
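The point about defstruct is mostly about findability. A sketch (the record shape here is hypothetical, not an actual Axiom structure): a raw-list record whose fields are reached by CAR/CADDR and mutated anywhere becomes a structure whose accessors can be located with a simple grep.

```lisp
(defstruct entry
  name        ; was (car rec)
  signature   ; was (cadr rec)
  value)      ; was (caddr rec)

;; Every read and write now goes through ENTRY-NAME, ENTRY-SIGNATURE,
;; or ENTRY-VALUE, so modification sites are easy to locate.
```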
Even better for the long term would be to make each structure immutable
(see "Purely Functional Data Structures" by Chris Okasaki). This would
make it much easier to move the system to a parallel code base. This
level of rewrite is on the queue to think about and study, but it might
take some experimenting.
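The immutable style can be sketched with read-only defstruct slots (hypothetical names again): "updates" return a fresh structure, old versions stay valid, and that is what makes sharing data across parallel threads safe without locks.

```lisp
(defstruct (pt (:constructor make-pt (x y)))
  (x 0 :read-only t)
  (y 0 :read-only t))

(defun move-x (p dx)
  "Return a new PT shifted by DX; P itself is never mutated."
  (make-pt (+ (pt-x p) dx) (pt-y p)))
```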
I have roughly 140 files still to merge into the interpreter volume, and
each one takes about a week of
moving/testing/documenting/rewriting/xrefing/latexing, so this is going
to take a while.
Tim
_______________________________________________
Axiom-developer mailing list
[email protected]
http://lists.nongnu.org/mailman/listinfo/axiom-developer