On Sat, 2011-06-25 at 09:39 -0700, Steve Wart wrote:
> I've been thinking about eternal computing not so much in the context
> of software, but more from a cultural level.
> 
> Software ultimately runs on some underlying physical computing
> machine, and physical machines are always changing. If you want a
> program to run for a long time, the software needs to be flexible
> enough to move from host to host without losing its state. That's more
> of a requirements statement than an insight, and it's not a
> particularly steep hurdle (given some expectation of "down time"), so
> I'll leave it at that for now.

> If you consider that life itself is computational in nature (not a big
> leap given what we know about DNA), it's instructive to think about
> the amount of energy most organisms expend on the activities
> surrounding sexual reproduction. As our abilities to perform
> artificial computations increase, it seems that more and more of our
> economic life will be driven by computing activities. Computation is
> an essential part of what we are.
> 
> In this context, I wonder what to make of the 10,000 year clock:
> 
> http://www.10000yearclock.net/learnmore.html
> 
> First, I'm skeptical that something made of metal will last 10,000
> years. But suppose it would be possible to build a clock that lasts
> that long. If in a fraction of a second I have a device that can
> execute billions of instructions, what advantage does stone-age (or
> iron-age) technology offer beyond longevity?
> 
> I think the key advantage is that no computation takes place in
> isolation. Every time you calculate a result, the contextual
> assumptions that held at the start of that calculation have changed.
> Other computations by other devices may have obviated your result or
> provided you with new inputs that can allow you to continue
> processing. Which means running for a long time is no longer a simple
> matter of saving your state and jumping to a new host, since all the
> other hosts that you are interacting with have made assumptions about
> you too. It starts to look like a model of life, where the best way to
> free up resources is to allow obsolete hosts to die, so that new
> generations can continue once they've learned everything their parents
> can teach them.

Whilst the mentions of parents, teaching, etc. are insightful, I think
there is a more fundamental comparison between an eternal computing
system and life; namely with an organism like an animal. The concept of
"an animal" seems natural and obvious, but really every animal is a huge
collection of cells. These cells provide the link to the physical world
(they are where all of the interesting chemistry goes on) so they are
the hardware, whilst the animal itself is the arrangement and collective
activity of the cells, so it is the software.

Whilst all organisms eventually die, the link to eternal computing is
that animals generally live far longer than their cells, so the software
carries on running with no downtime as the hardware is continually
replaced.
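
This mechanism can be caricatured in a few lines of code (a toy model,
with names of my own invention): the running program persists while the
objects holding its state are continually retired and replaced, with no
downtime.

```python
# Toy model: "the animal" is a running counter; "cells" are the
# short-lived objects that hold its state at any given moment.
class Cell:
    def __init__(self, state):
        self.state = state  # this cell is the current hardware

def run(generations):
    cell = Cell(0)
    for _ in range(generations):
        cell.state += 1          # the computation keeps running...
        cell = Cell(cell.state)  # ...while the old cell "dies" and a
                                 # fresh one takes over its state
    return cell.state

print(run(100))  # the count survives 100 hardware replacements
```

The point is that no individual Cell needs to be long-lived; continuity
belongs to the arrangement, not the parts.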

The key difference from your point, I feel, is that this allows
'eternal' systems to exist even though the underlying engineering is
only designed to last for the short term. In fact, the constant need for
renewal is what makes the system so flexible and robust, as opposed to
trying to build a robust artifact by making it as rigid and inflexible
as possible.

Here are a couple of examples that spring to mind:

Self-assembling solar cells. These use components which are very
efficient but degrade very quickly. However, the components can be
broken apart and self-assembled over and over by adding and removing a
surfactant. The extra efficiency allows old components to be removed,
disassembled, reassembled and reintroduced without impacting the output
of the system too much.
http://www.nature.com/nchem/journal/v2/n11/full/nchem.822.html

Viral programming and RGLL. Given an 'amorphous computer' (ie. no fixed
architecture, just an arbitrarily arranged network of unreliable,
low-resource devices), how can it be programmed? The idea of viral
programming, of which RGLL is presented as an example, is to program a
parallel, distributed algorithm and package the code into a "capsule".
The computing nodes send and receive capsules, and execute each one they
receive (unless they have already executed that particular capsule). As
part of its execution, a capsule can send copies of itself, or modified
versions of itself, to neighbouring nodes. Computation is redundant,
addressing is emergent (eg. number-of-hops gradient formation), etc.
Since the nodes are assumed to be failure-prone, this is a very direct
example of an 'eternal' system built out of short-lived components.
http://people.csail.mit.edu/jrb/Projects/rseam.pdf
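
A minimal sketch of the capsule idea (my own hypothetical names, not
RGLL itself): nodes in an arbitrarily wired network execute each capsule
at most once, and the capsule's body forwards modified copies of itself
outward, so a hop-count gradient emerges as a by-product.

```python
import random

random.seed(0)

N = 30
# Amorphous network: each node gets a few arbitrary neighbour links.
neighbours = {i: random.sample([j for j in range(N) if j != i], 4)
              for i in range(N)}

hops = {}    # emergent addressing: hop distance from the source node
seen = set() # each node executes this capsule at most once

def gradient_capsule(node, hop):
    """Capsule body: record my distance, then send copies of myself
    (with hop+1) to my neighbours."""
    if node in seen:
        return []  # already executed here; do not re-run
    seen.add(node)
    hops[node] = hop
    return [(nbr, hop + 1) for nbr in neighbours[node]]

# Inject the capsule at node 0 and let it spread breadth-first.
queue = [(0, 0)]
while queue:
    node, hop = queue.pop(0)
    queue.extend(gradient_capsule(node, hop))

print(sorted(hops.items())[:5])
```

Killing any subset of nodes just leaves holes in the gradient that the
next injection routes around, which is the sense in which the system
outlives its components.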

I wonder what those with real knowledge of biology think of this?

Thanks,
Chris Warburton


_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
