SJS wrote:
The beginnings of Smalltalk were not serene. . . part 5 covers a
major rewrite, where Alan Kay wanted to throw it all out and start
over, and also covers the Apple folks showing up. . .
http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltalk_V.html
Kay's always had a computers-for-kids thing going, so those of you
who have the same sort of itch should find his viewpoint interesting.
He also didn't care much for the microprocessor revolution . . . :)
Concrete information at bottom, ranting starts in 3 ... 2 ... 1 ...
<rant>
The functional programming guys need something to blame for the fact
that all their pet languages failed horribly.
As for Lisp (and probably anything prior to 1988 or so), I'm beginning
to think that it failed for one very significant reason:
*Documentation*
Lisp lore was passed by word of mouth and got *lost*.
As for word of mouth, if you didn't know somebody who knew the Lisp-way
of doing something, where were you going to find it? The Lisp folks who
disdained the assembly and C hackers who were spreading information to
one another in the 80's via BBS's and articles never achieved a critical
mass because they didn't spread their information out to the "unworthy".
If someone had written a Lisp ROM cartridge and articles/book for the
old TRS-80 CoCo that I could have coded games in (and with some of the
Lisp on Lego and PIC work being done, it was eminently possible), I
probably would have learned that instead. As it was, I had a 6809
assembly cartridge, so I learned that.
As for getting lost, even now, with a lot more age, experience, and
resources available, it's really hard to dig backwards for *simple*
Lisp/Scheme implementation things, because you are groveling through the
simple stuff of the 1960's, 1970's, and 1980's, where there is no modern
indexing, searching, etc. (the papers are PDF images, if available at
all). If you *do* find those implementation details, they are wired to
the hardware of the day and are hard to abstract (this is prior to the
release and victory of C). The worst is the number of "private
communications" that spread some piece of information that is now lost
to time.
In addition, things for Lisp were written in Lisp ("Only a heathen would
implement a Lisp in FORTRAN!"). This meant that nobody without a Lisp
machine could use a Lisp paper. It also obfuscated understanding by
confusing what was implemented with what was borrowed from the
underlying environment. Take a look at the original Lambda papers and
try to figure out what you will have to reimplement if you don't have
Lisp available as the implementation language. It's not an easy task.
Metacircular interpreters are clever resource-saving hacks for
academics, but they prevent others from using the results.
In addition, Lisp has a couple of subtleties, but they're *not that
bad*. Closures and continuations aren't that mystical if you can look
at a simple Lisp/Scheme implementation. Just turn off the portion of
your brain that thinks about what gets allocated on the stack and what
gets allocated on the heap and replace it with "everything gets
allocated on the garbage-collected heap". Suddenly, closures are a
pointer to a function combined with a pointer to an environment (on the
heap)--not much different from a stack frame (just allocated on the heap).
Continuations are roughly a closure combined with a "register
save"--much like a process context switch on an OS. Finally, "simple"
macros aren't too bad either, just an extra case statement in the
interpreter's top-level loop, except that *you can't find
anything about simple macros because all the academics were writing
about hygienic macros* starting about 1984.
Of course, finding such a simple implementation to look at is amazingly
difficult (I have found one, details at bottom). The moment such a
simple implementation springs into existence, it starts accreting
features. Without source repositories, you can't unwind the later
optimizations.
Basically, anything that came of age right in the late 80's and early
90's had a huge information advantage over anything before
("prehistory") and a big momentum advantage over anything that came after.
We are rediscovering some of these "prehistory" languages. People are
looking to embedded systems which resemble the systems of yore much more
strongly than modern microprocessors. So, some of the old ways are
looking good again. However, the process of reconstituting that diffuse
information is very painful.
</rant>
Now, on to useful stuff.
So far, the best pedagogical Scheme I have found is minischeme
(sometimes called miniscm). It was the original basis for TinyScheme.
It's about 2500 lines of C code:
http://www.cs.indiana.edu/pub/scheme-repository/imp/minischeme.tar.gz
This implementation is supposedly based upon something published in Japan:
    This Mini-Scheme Interpreter is based on "SCHEME Interpreter in
    Common Lisp" in Appendix of T.Matsuda & K.Saigo, Programming of LISP,
    archive No5 (1987) p6 - p42 (published in Japan).
If you can help me find this original (even if it is in Japanese I can
probably decode it), I would appreciate it.
In addition, I would have liked to look at the papers for PC-Scheme, but
I can't seem to find them easily. I may have to go to one of the
local university libraries and see if they actually have the journals.
-a
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg