Re: [fonc] Eternal computing

2011-06-29 Thread Chris Warburton
On Sat, 2011-06-25 at 09:39 -0700, Steve Wart wrote:
 I've been thinking about eternal computing not so much in the context
 of software, but more from a cultural level.
 
 Software ultimately runs on some underlying physical computing
 machine, and physical machines are always changing. If you want a
 program to run for a long time, the software needs to be flexible
 enough to move from host to host without losing its state. That's more
 of a requirements statement than an insight, and it's not a
 particularly steep hurdle (given some expectation of down time), so
 I'll leave it at that for now.

 If you consider that life itself is computational in nature (not a big
 leap given what we know about DNA), it's instructive to think about
 the amount of energy most organisms expend on the activities
 surrounding sexual reproduction. As our abilities to perform
 artificial computations increase, it seems that more and more of our
 economic life will be driven by computing activities. Computation is
 an essential part of what we are.
 
 In this context, I wonder what to make of the 10,000 year clock:
 
 http://www.10000yearclock.net/learnmore.html
 
 First, I'm skeptical that something made of metal will last 10,000
 years. But suppose it would be possible to build a clock that lasts
 that long. If in a fraction of a second I have a device that can
 execute billions of instructions, what advantage does stone-age (or
 iron-age) technology offer beyond longevity?
 
 I think the key advantage is that no computation takes place in
 isolation. Every time you calculate a result, the contextual
 assumptions that held at the start of that calculation have changed.
 Other computations by other devices may have obviated your result or
 provided you with new inputs that can allow you to continue
 processing. Which means running for a long time is no longer a simple
 matter of saving your state and jumping to a new host, since all the
 other hosts that you are interacting with have made assumptions about
 you too. It starts to look like a model of life, where the best way to
 free up resources is to allow obsolete hosts to die, so that new
 generations can continue once they've learned everything their parents
 can teach them.

Whilst the mentions of parents, teaching, etc. are insightful, I think
there is a more fundamental comparison between an eternal computing
system and life; namely, an organism like an animal. The concept of an
animal seems natural and obvious, but every animal is really a huge
collection of cells. These cells provide the link to the physical world
(they are where all of the interesting chemistry goes on), so they are
the hardware, whilst the animal itself is the arrangement and
collective activity of those cells, so it is the software.

Whilst all organisms eventually die, the link to eternal computing is
that animals generally live far longer than their cells, so the software
carries on running with no downtime as the hardware is continually
replaced.

The key difference to your point, I feel, is that this allows 'eternal'
systems to exist despite the fact that the underlying engineering is
only designed to last for the short term. In fact, the constant need for
renewal is what makes the system so flexible and robust, as opposed to
trying to build a robust artifact by making it as rigid and inflexible
as possible.
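
To make that concrete, here is a minimal sketch in Python (entirely
hypothetical names, my own toy model rather than any real system): a
long-lived 'organism' keeps computing continuously even though each of
its 'cells' is engineered to last only a few ticks and is constantly
being replaced.

import itertools

CELL_LIFESPAN = 5      # each cell is short-lived by design
POOL_SIZE = 4          # the organism keeps this many cells alive

class Cell:
    _ids = itertools.count()
    def __init__(self, born):
        self.id = next(Cell._ids)
        self.born = born
    def expired(self, now):
        return now - self.born >= CELL_LIFESPAN
    def work(self, item):
        return item * item          # stand-in for real processing

def run(items):
    pool = [Cell(born=0) for _ in range(POOL_SIZE)]
    out = []
    for now, item in enumerate(items):
        # swap out worn-out cells; the organism itself never pauses
        pool = [c if not c.expired(now) else Cell(born=now)
                for c in pool]
        out.append(pool[now % POOL_SIZE].work(item))
    return out

print(run(range(20)))   # continuous output, yet no single cell
                        # survives the whole run

The point of the sketch is only that the 'software' (the arrangement
and activity of the pool) outlives every piece of 'hardware' in it.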

Here are a couple of examples that spring to mind:

Self-assembling solar cells. These use components which are very
efficient but degrade very quickly. However, the components can be
broken apart and self-assembled over and over by adding and removing a
surfactant. The extra efficiency allows old components to be removed,
disassembled, reassembled and reintroduced without impacting the output
of the system too much.
http://www.nature.com/nchem/journal/v2/n11/full/nchem.822.html

Viral programming and RGLL. Given an 'amorphous computer' (ie. no fixed
architecture, just an arbitrarily arranged network of unreliable,
low-resource devices), how can it be programmed? The idea of viral
programming, of which RGLL is presented as an example, is to program a
parallel, distributed algorithm and package the code into a capsule.
The computing nodes send and receive capsules, and each node executes
every capsule it receives (unless it has already received that
particular capsule). As part of its execution, a capsule can send
copies of itself, or modified versions of itself, to neighbouring
nodes. Computation is redundant, addressing is emergent (eg.
number-of-hops gradient formation), etc.
Since the nodes are assumed to be failure-prone, this is a very direct
example of an 'eternal' system built out of short-lived components.
http://people.csail.mit.edu/jrb/Projects/rseam.pdf
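
For the flavour of it, here is a toy simulation in Python (my own
sketch, not the RGLL code itself): a single capsule floods outward
through a randomly wired network of failure-prone nodes, each node
executes it at most once, and a number-of-hops gradient emerges as a
side effect.

import random
from collections import deque

def random_network(n, degree=3):
    # arbitrarily arranged network of n unreliable nodes
    return {i: random.sample([j for j in range(n) if j != i], degree)
            for i in range(n)}

def spread(net, source, failure_rate=0.1):
    gradient = {}                       # node -> hops from source
    executed = set()
    in_flight = deque([(source, 0)])    # capsules: (node, hop count)
    while in_flight:
        node, hops = in_flight.popleft()
        if node in executed:
            continue                # already received this capsule
        if random.random() < failure_rate:
            continue                # node failed; redundancy covers it
        executed.add(node)
        gradient[node] = hops
        for nbr in net[node]:       # capsule re-sends copies of itself
            in_flight.append((nbr, hops + 1))
    return gradient

net = random_network(30)
print(spread(net, source=0))        # emergent hop-count addressing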

I wonder what those with real knowledge of biology think of this?

Thanks,
Chris Warburton




Re: [fonc] Eternal computing

2011-06-29 Thread Alan Kay
Hi Chris

I think looking at the way biology works is a good perspective. By the
way, we recycle not just the 10 trillion cells that contain our DNA
(and the 90 trillion cells we have with microbial DNA/RNA), but all our
*atoms* are replaced about every 7 years (with the exception of
inorganic pigments from tattooing, etc., which take quite a bit
longer).

The only human artifact that is remotely like this is the Internet, which has 
been able to grow and replace most parts large and small without having to ever 
be stopped.

It is worth considering the scaling differences between biological
structures and those we can make with computer hardware and software.
These are some of several reasons why going directly after how bio does
it is not very feasible. Similarly, evolution takes a very long time to
figure things out compared to brains, so we should be interested not
just in eternal computing as microbes do it, but in eternal computing
with goals at human levels.

The Master's thesis looks interesting!

Cheers,

Alan


Re: [fonc] Eternal computing

2011-06-29 Thread Wesley Smith
Related to the bio perspective on computation, has anyone on this list
explored the ideas of Tibor Ganti's Chemoton Theory in relation to
computation and programming? It's a really interesting example of how
to abstract out the essence of biological systems in a way that
simplifies without losing touch with reality.  In one of his books, he
even sketches out what a computational system based on such fluid
automata might look like.

In my own view, I see biological systems as an abstraction over
chemical networks such that the biological system is constantly
computing its own identity despite the continual material and
energetic fluxes of its chemical components.  In a sense, chemistry is
functional (not unlike lambda calculus, see Fontana's Alchemy
http://fontana.med.harvard.edu/www/Documents/WF/Papers/objects.pdf)
and biology is akin to a virtual machine.  The key difference between
the two is self-referentiality.  The biological system, based on an
auto-catalytic metabolic cycle housed in a membrane, can support worlds
of chemistry that simply can't otherwise exist due to the kinetics
involved.  Metabolism provides both the perception of, and the reaction
to, environmental conditions that together enable biological systems to
bootstrap themselves.
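
As a cartoon of the Alchemy idea (my own toy sketch in Python, not
Fontana's actual system): a soup of combinator terms in which random
collisions apply one term to another, and the bounded reduction that
follows plays the role of kinetics.

import random

S, K, I = 'S', 'K', 'I'     # terms: atom or (function, argument) pair

def step(t):
    # one normal-order reduction step; None if t is in normal form
    if isinstance(t, str):
        return None
    f, a = t
    if f == I:
        return a                                   # I x -> x
    if isinstance(f, tuple) and f[0] == K:
        return f[1]                                # K x y -> x
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == S:
        x, y, z = f[0][1], f[1], a
        return ((x, z), (y, z))                    # S x y z -> x z (y z)
    r = step(f)
    if r is not None:
        return (r, a)
    r = step(a)
    return None if r is None else (f, r)

def normalize(t, limit=50):
    for _ in range(limit):      # kinetics: only bounded time to react
        r = step(t)
        if r is None:
            return t
        t = r
    return t

soup = [random.choice([S, K, I]) for _ in range(20)]
for _ in range(100):            # random collisions in the soup
    a, b = random.sample(range(len(soup)), 2)
    soup[a] = normalize((soup[a], soup[b]))   # reaction product
print(soup)

Even this caricature has the right shape: the identity of the soup is
whatever survives its own flux of reactions.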

I think there are a lot of lessons to be taken from these ideas that
can inform the philosophy and structure of computational systems,
particularly from a Bergsonian perspective.  Actually building them
directly will require exchanging wired connections for something that
more closely mimics how fluids mediate the ad hoc and dynamic scaling
of communication channels.


http://www.chemoton.com/
http://home.planet.nl/~gkorthof/korthof66.htm



Re: [fonc] Eternal computing

2011-06-29 Thread Alan Kay
Thanks for the references to The Chemoton Theory -- I hadn't seen this before.

But I didn't understand your reference to Bergson -- wasn't he an
adherent of the Elan Vital as a necessary part of what life is, and
didn't that also drive evolution in particular directions?


Cheers,

Alan


Re: [fonc] Eternal computing

2011-06-29 Thread Casey Ransberger
I can't help wondering whether or not it was any easier to keep a system 
running when systems were big enough to climb inside of. When my tablet bricks 
and refuses to take a flash, I can open the machine (I mean I can break it 
open) but the part that computes and remembers is all one piece now. 

I enjoyed swapping parts out of desktop machines, looking for defective 
components. It was like a meditation. Of course I would have to power them down 
first, and I can only imagine this has been generally true for all electronic 
computers. 

I used to take a pair of broken computers, and use the best (working) parts 
from both to make a computer that would often be better overall than either 
machine was when they still worked. I liked doing this, and so people started 
bringing me a lot of old broken computers. Usually whenever I built a new one, 
I would give the old one away, and this motivated people, as it happened, to 
keep bringing me their junk, so I could be perpetually looking for a better 
machine. I hadn't ever looked for a biological metaphor in what I was doing 
with those obsolete junkers, but I think I can see one now.

This is a great thread.

On Jun 25, 2011, at 9:39 AM, Steve Wart st...@wart.ca wrote:

 I've been thinking about eternal computing not so much in the context
 of software, but more from a cultural level.
 
 Software ultimately runs on some underlying physical computing
 machine, and physical machines are always changing. If you want a
 program to run for a long time, the software needs to be flexible
 enough to move from host to host without losing its state. That's more
 of a requirements statement than an insight, and it's not a
 particularly steep hurdle (given some expectation of down time), so
 I'll leave it at that for now.
 
 I recently stumbled across the work of Quinlan Terry, whom I had never
 heard of until I did a search for an inscription in a print that
 caught my eye. I found that this essay helps capture what makes him
 different from most people designing buildings today:
 
 http://www.qftarchitects.com/essays/sevenmisunderstandings.php
 
 I don't make any claims that these observations have anything to do
 with software, except in a more general sense of the cultural values
 that influence design. I suppose the pitfall of trivializing something
 because it seems familiar applies to software as well as any other
 design discipline.
 
 We have an engineering culture that pursues change at an ever
 increasing rate. The loss of eternal values in physical architecture
 is sad indeed, especially in the context of urban sprawl and the now
 rampant deterioration of buildings that were built a generation ago,
 to last only a single generation. The ongoing global financial mess is
 arguably a result of short-term thinking.
 
 Economics matters. One of the intriguing facets of computing is the
 incredible amount of money the industry generates and consumes. And
 nowhere is short-term thinking more generously rewarded than in the
 continual churn of new computing devices and software. Personally I
 find it overwhelming and I have been trying to keep up for 30 years.
 Clearly it's not slowing down.
 
 I think there's a good reason for the ever-increasing rate of change
 in computer technology, and that it is the nature of computation
 itself.
 
 Seth Lloyd has a very interesting perspective on revolutions in
 information processing:
 
 http://www.edge.org/3rd_culture/lloyd06/lloyd06_index.html
 
 If you consider that life itself is computational in nature (not a big
 leap given what we know about DNA), it's instructive to think about
 the amount of energy most organisms expend on the activities
 surrounding sexual reproduction. As our abilities to perform
 artificial computations increase, it seems that more and more of our
 economic life will be driven by computing activities. Computation is
 an essential part of what we are.
 
 In this context, I wonder what to make of the 10,000 year clock:
 
 http://www.10000yearclock.net/learnmore.html
 
 First, I'm skeptical that something made of metal will last 10,000
 years. But suppose it would be possible to build a clock that lasts
 that long. If in a fraction of a second I have a device that can
 execute billions of instructions, what advantage does stone-age (or
 iron-age) technology offer beyond longevity?
 
 I think the key advantage is that no computation takes place in
 isolation. Every time you calculate a result, the contextual
 assumptions that held at the start of that calculation have changed.
 Other computations by other devices may have obviated your result or
 provided you with new inputs that can allow you to continue
 processing. Which means running for a long time is no longer a simple
 matter of saving your state and jumping to a new host, since all the
 other hosts that you are interacting with have made assumptions about
 you too. It starts to look like a model of life, where the best way to
 free up resources is to allow obsolete hosts to die, so that new
 generations can continue once they've learned everything their parents
 can teach them.

Re: [fonc] Eternal computing

2011-06-29 Thread Wesley Smith
On Wed, Jun 29, 2011 at 12:38 PM, Alan Kay alan.n...@yahoo.com wrote:
 Thanks for the references to The Chemoton Theory -- I hadn't seen this
 before.

 But I didn't understand your reference to Bergson -- wasn't he an adherent
 of the Elan Vital as a necessary part of what is life? and that also drove
 evolution in particular directions.


You're welcome.  The interesting part about Chemoton Theory is that
the first papers were written contemporaneously with Eigen's RNA world
theory and Maturana and Varela's autopoiesis ideas.

The Bergson reference was cryptic.  Sorry about that!  He did write
about Élan Vital, but in my understanding it doesn't represent a
transcendental category but is rather a name for a self-referential
process by which objects/virtualities/... differentiate.  The clearest
exposition I've found on this is the last chapter of Deleuze's
Bergsonism.

The aspect of Bergson that I was thinking about, though, was the
concept of duration, particularly that of the cerebral interval (the
time between a received movement and an executed movement), which
generates perception.  Yet perception is both matter (made up of
neurons, cells, chemical networks, sensors, ...) and the perception of
matter.  It's a self-loop of something perceiving itself.  We see the
same kind of self-loop pattern in von Foerster's Cybernetics of
Epistemology and Notes on an Epistemology of Living Things, where
computation is understood as com + putare, or thinking together.

Where Bergson was talking about human perception, I think his ideas
can be taken all the way down to the basic (theoretical) units of life
that Ganti describes in Chemoton Theory, where instead of a cerebral
interval there's a metabolic interval.  The metabolic interval is the
time of adjustment and reaction to environmental conditions (the cell
shrinks, grows, chemicals flow with varying degrees and directions)
that is a direct result of the structure of an auto-catalytic loop.
By virtue of this self-loop, novel conditions develop through
differentiating patterns of chemical flow that hook on to the
metabolism, over time developing into more and more complex structures
with new hierarchical levels.

I should point out that I'm not saying this is how life happened, but
rather that I believe it's a compelling way to approach conceptualizing
how computational systems could be cast in a biological perspective.  I
tend to think of computation as mathematics + duration and biology as
chemistry + duration.  Computational systems do not have to mimic in a
literal way what biology does, which is what I see most systems doing.

wes



Re: [fonc] Eternal computing

2011-06-29 Thread Casey Ransberger
On Jun 29, 2011, at 2:03 PM, Wesley Smith wesley.h...@gmail.com wrote:

 The aspect of Bergson that I was thinking about, though, was the
 concept of duration, particularly that of the cerebral interval (the
 time between a received movement and an executed movement), which
 generates perception.  Yet perception is both matter (made up of
 neurons, cells, chemical networks, sensors, ...) and the perception of
 matter.  It's a self-loop of something perceiving itself.  We see the
 same kind of self-loop pattern in von Foerster's Cybernetics of
 Epistemology and Notes on an Epistemology of Living Things, where
 computation is understood as com + putare, or thinking together.

Thinking together is a really interesting thought. Have you ever read
Minsky's Society of Mind? I want to quote it, but I lent my copy to a
curious stranger two days ago and I don't want to misquote, so I'm just
going to have to recommend it :)

It's one of my favorite books to lend people. They always come back with stars 
in their eyes.


Re: [fonc] Eternal computing

2011-06-29 Thread Max OrHai
A couple more references in this vein:

Robert Rosen's work in theoretical biology predates the autopoiesis
theory of Maturana and Varela by a couple of decades, and is somewhat
more general and mathematically rigorous. He's not as well known, but
his book *Life Itself* is well worth reading, although one of his major
points is that the essential character of living systems is *not*
computable.

More immediately on topic, I've just read a particularly thoughtful
essay from Richard Gabriel, titled Conscientious Software, which
directly addresses these issues of scalability and adaptability in
pervasive software systems. Some here may find it interesting:
http://dreamsongs.com/Files/ConscientiousSoftwareCC.pdf

-- Max


[fonc] Formal Methods in STEPS

2011-06-29 Thread Alexis Read
I've been looking at integrating formal methods, advanced typing and
provability into STEPS, mainly as a good way to go from specifications
to testable prototypes easily. I've surveyed a number of different
languages and proof assistants that could fill the role.
From the (little) reading I've done, there seems to be a consensus that
higher-order languages/provers are more useful than first-order ones;
in particular, pattern matching for types/solutions appears to be more
complete under higher-order logic.
For example, Epigram http://www.e-pig.org/darcs/Pig09/web/ is based on
dependent types, but uses first-order matching and is incomplete (see
the literate programming manual, Epitome, p163), ie. it doesn't
necessarily match all types/terms. I'm of the opinion that, although
having a sound and complete parser/matcher is pretty useless day-to-day
(most computer languages and systems are unambiguous), it is important
to be sure you'll catch all types/terms for proofs and for execution.

Coq, Isabelle and PVS all now use rewriting to some extent, though in
a higher-order fashion (ie. more general than lambda calculus, eg.
Curry-Howard calculi, dependent types). They do appear to use
first-order logic for some proofs where decidability is a problem
(www.lix.polytechnique.fr/~jouannaud/articles/tyl.pdf). Having seen
some higher-order developments within Maude, I'd venture that it is
useful to build higher-order provers on first-order environments -
case in point, Epigram compiles down to Haskell constructs.
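
For readers who haven't met rewriting-based provers, here is a toy
first-order term rewriter in Python (a sketch only - Maude's matching,
sorts and strategies are vastly more sophisticated). Terms are
(operator, arguments) pairs and variables are strings starting with '?'.

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def match(pat, term, subst):
    # first-order matching: extend subst so that pat equals term
    if is_var(pat):
        if pat in subst:
            return subst if subst[pat] == term else None
        return {**subst, pat: term}
    if pat[0] != term[0] or len(pat[1]) != len(term[1]):
        return None
    for p, t in zip(pat[1], term[1]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def substitute(t, subst):
    if is_var(t):
        return subst[t]
    return (t[0], [substitute(a, subst) for a in t[1]])

def rewrite(term, rules):
    # innermost rewriting to a fixpoint
    if is_var(term):
        return term
    term = (term[0], [rewrite(a, rules) for a in term[1]])
    for lhs, rhs in rules:
        s = match(lhs, term, {})
        if s is not None:
            return rewrite(substitute(rhs, s), rules)
    return term

# Peano addition: plus(0, n) -> n ; plus(s(m), n) -> s(plus(m, n))
zero = ('0', [])
def s(n): return ('s', [n])
rules = [(('plus', [zero, '?n']), '?n'),
         (('plus', [s('?m'), '?n']), s(('plus', ['?m', '?n'])))]
print(rewrite(('plus', [s(s(zero)), s(zero)]), rules))   # s(s(s(0)))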

There's a very good paper on pure type systems in Maude, with a view
to implementing a higher-order (Open) Calculus of Constructions. The
paper mentions a nice CINNI notation which allows you to use named
types, rather than, say, de Bruijn indices, and still avoid accidental
type/term hiding. The proof assistant for OCC has been designed for use
in a universe hierarchy
(http://www.informatik.uni-hamburg.de/TGI/mitarbeiter/wimis/stehr/occ_eng.html),
which does sound similar to Alessandro Warth's Worlds POV.
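
To show the kind of nameless representation CINNI saves you from, here
is the textbook conversion of named lambda terms to de Bruijn indices
in Python (a generic illustration; CINNI's actual calculus works
differently):

# named terms:    'x' | ('lam', 'x', body) | ('app', f, a)
# nameless terms: index | ('lam', body) | ('app', f, a)

def to_de_bruijn(term, env=()):
    # env is the stack of binder names, innermost binder first
    if isinstance(term, str):
        return env.index(term)        # distance to the binding lambda
    if term[0] == 'lam':
        _, name, body = term
        return ('lam', to_de_bruijn(body, (name,) + env))
    _, f, a = term
    return ('app', to_de_bruijn(f, env), to_de_bruijn(a, env))

# \x. \y. x y  becomes  ('lam', ('lam', ('app', 1, 0)))
print(to_de_bruijn(('lam', 'x', ('lam', 'y', ('app', 'x', 'y')))))

Indices make capture impossible but are unreadable and must be shifted
on every substitution; named schemes like CINNI aim to keep readable
names without reintroducing accidental capture.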

A later paper on the system is here:
http://www.arnetminer.org/viewpub.do?pid=882439
It looks interesting, but I don't have a direct link for download,
unfortunately.
Lastly, I've a couple of links to a declarative debugger in Maude - yes you
can debug the specifications! (http://maude.sip.ucm.es/debugging/ and
http://maude.cs.uiuc.edu/papers/pdf/Riesco-et-al-debugging-tr.pdf)

Fairly recently, the Maude team (and contributors) wrote a formal
analyser for Java programs (JavaFAN - see the slides in the Maude intro
http://maude.cs.uiuc.edu/talks/maude-padl10-slides.pdf). The basic
strategy I've come up with involves writing a version of Maude on top
of Squeak (Smalltalk), then adapting JavaFAN to analyse Smalltalk
bytecodes, so you can prove the version of Maude itself, the VM, and
thus the entire software stack from the top down to the metal, by
virtue of running on the VM (ok, SqueakNOS isn't strictly bare metal,
but STEPS/FRANK/NotSqueak should be, and I'd try to move the Maude
runtime across).

The Maude port would be a straight one - I don't know Smalltalk that
well - and I'm starting work by moving the BuDDy library over into
Squeak. I'd appreciate people's thoughts on my programming direction.

Cheers,
Alexis.