[Jos]
Did you ever bottom out why Pirsig only likes to include biological systems
in his definition of alive, and then only human systems in his definition of
society? He used lengthy analogies about computers to explain how the level
shifts work, but then flatly refused to accept them as equivalent (see the
Lila's Child annotations).

[Krimel]
About the only reason I ever got for Pirsig's limitations on his levels was
'because...' There have been many discussions about it, but it seems everyone
has their own thoughts on the matter. My own is that experience is too fluid
to be confined to a fixed set of static levels.

[Jos]
Fanatically applying this [flip flop - machine language - program - novel]
model to itself, it seems clear to me that as soon as you turn a machine "on"
and allow it to boot, it has become what is analogous to alive. Being a
heterotrophic organism, it chooses to feed from an external energy source in
order to sustain itself; it reacts in complex ways to stimuli and performs
various homeostatic housekeeping duties. On top of that it stores
information and shares it with other similar "organisms". The fact that it
frequently "requires" (gets) "user input" (stimuli) "in order to" (that happen
to) change its patterns of behaviour is immaterial, as this is also the case
with most biological systems.
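
If you want the analogy in machine terms, here is a toy Python sketch (my own
illustration, not anything from Pirsig or the annotations) of a booted machine
doing exactly that: drawing on an external energy source, keeping its internal
state in balance, and treating user input as just one more stimulus:

import random
import time

class Machine:
    """Toy model of a booted machine: it 'feeds' on external power,
    keeps its internal state in balance, and reacts to whatever
    stimuli happen to arrive."""

    def __init__(self):
        self.temperature = 40.0   # internal state to hold near a set point
        self.memory = []          # stored information, shareable with peers

    def sense_environment(self):
        # A user keypress and a temperature drift are both just stimuli here.
        return random.choice(["user_input", "temp_rise", "nothing"])

    def react(self, stimulus):
        if stimulus == "user_input":
            self.memory.append("pattern of behaviour changed by stimulus")
        elif stimulus == "temp_rise":
            self.temperature += 3.0

    def housekeeping(self):
        # Homeostasis: nudge the internal state back toward the set point.
        if self.temperature > 45.0:
            self.temperature -= 2.0   # spin up the fan
        else:
            self.temperature += 0.5   # normal operating drift

    def run(self, cycles=5):
        for _ in range(cycles):       # drawing mains power the whole time
            self.react(self.sense_environment())
            self.housekeeping()
            time.sleep(0.1)

Machine().run()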

[Krimel]
For me the computer seems alive in the way fire is alive. And like fire it
paves the way for new forms of communication and higher human consciousness.
My desktop is still a little too on/off for me to want to call it a pet, but I
do like to teach it new tricks.

The engineering aspects of computing are wonders in their own right but
consider the rapid growth and expansion of the internet. In less than 20
years it has become a cocoon of silk tying together every far-flung corner
of this planet. Spun of copper and glass, it calls to satellites in heaven
and leaves voicemail. The internet is billions of axons with a desktop at
each synapse. 

[Jos]
As to intelligence, in my view we have no way of telling. My suspicion is
that we are, to the machine, an environmental fact of life rather than an
"owner": more a part of the pattern that is the computer's sense of self,
where pressing the on switch is analogous to the sun coming up and user
"commands" are just environmental stimuli. The intelligence will not have
direct conscious awareness of its responses to these environmental stimuli,
as they will be part of its biological level. There's no reason, therefore,
to expect that whatever consciousness is there has any awareness of us as
agents at all.

[Krimel]
One of the chief problems with computers like the ones we have now becoming
conscious is that they are entirely sequential. Information can be accessed
randomly, but it is processed sequentially. The computer can only look at one
thing at a time; it just looks really fast. We, on the other hand, can be
aware of at least ten things simultaneously, and where the computer is
limited to on or off, we can integrate millions of colors with symphonies,
chiffon, chocolate and a whiff of sea breeze.
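
A rough sketch of what I mean (just an illustration of the one-thing-at-a-time
point, nothing more):

import time

# Five "senses" arriving at once; a single CPU core still has to take
# them one at a time, however quickly it loops through them.
stimuli = ["a colour", "a symphony", "chiffon", "chocolate", "sea breeze"]

start = time.perf_counter()
for stimulus in stimuli:              # strictly sequential processing
    noticed = stimulus.upper()        # stand-in for "looking at" the stimulus
    elapsed = time.perf_counter() - start
    print(f"{elapsed:.6f}s  now attending to: {noticed}")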

I think it will be a while before machines can do that. In the meantime, some
of the major work in AI is going on in computer gaming. IBM's Deep Blue did
beat Kasparov at least once. In strategy games the task for the game developer
is to make the computer players seem like real players. They are not there
yet, but they are making progress.
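
For what it's worth, the core trick behind a chess engine like Deep Blue, and
behind a lot of strategy-game AI, is still game-tree search rather than
anything like awareness. A bare-bones minimax sketch in Python (my own toy,
not IBM's code or any game studio's) looks roughly like this:

def minimax(state, depth, maximizing, evaluate, moves, apply_move):
    """Look ahead `depth` plies and pick the move that is best,
    assuming the opponent also picks its best replies."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1, False,
                               evaluate, moves, apply_move)
            if score > best:
                best, best_move = score, m
    else:
        best = float("inf")
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1, True,
                               evaluate, moves, apply_move)
            if score < best:
                best, best_move = score, m
    return best, best_move

# Toy game: the state is a number, each move adds one or doubles it,
# and the maximizing player wants the biggest number after three plies.
score, move = minimax(
    1, depth=3, maximizing=True,
    evaluate=lambda s: s,
    moves=lambda s: ["+1", "*2"],
    apply_move=lambda s, m: s + 1 if m == "+1" else s * 2,
)
print(score, move)

The machine still only "sees" one branch at a time; it just looks at a great
many of them very quickly.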

[Jos]
That's not to say it wouldn't be possible to communicate, but I would say that
Turing-type experiments go about it all wrong.
 
[Krimel]
What would you suggest?

