Normally, no one agrees more than I with Popper's principle that arguments
over definitions, like the one between Paul and Ben, are not fruitful.


In this case, however, the dispute over the meaning of the term
"intelligence" has real significance for the enterprise of Artificial
General Intelligence.


I am going to argue that AGI's goal is false from two perspectives.  Please
bear with me.


First, if the idea is to build computer systems that can do the same things
that "natural" intelligent systems - people - can do, then the term "General
Intelligence" is not a true description of either the goal or the
methodology.  The idea that human intelligence is contained or described by
a single factor - Burt's and Spearman's g - is pretty much a dead letter,
despite trash political manifestos disguised as science like The Bell Curve.
We are now pretty sure that human intelligence is built out of many
different "tools" that accomplish specific tasks such as perception,
comprehension, motivation, and calculation.


And, indeed, the intelligent computation industry is filled with efforts to
build other specific tools to accomplish tasks once deemed too "intelligent"
to be subject to mechanical imitation.  So the members of this group are
each in their own way working on ways of modeling intelligent tasks, and the
work done on these goals is significant and valuable.  I would even say that
this isn't a second best that we have to settle for because our tools and
understanding are limited, but the best and most fruitful way to advance.


But that's not all you're aiming at.  You're also hoping that at some point
all your task-specific intelligent tools will get hooked together into an
entity that can coordinate all their work, and that can then turn its
electron-fast analysis into a self-analyzing, creative loop that generates
something that passes the Turing Test.


I would suggest, however - and here I know I'm going way too far - that the
term you have chosen (AGI) for this meta-project has been deliberately
selected to make the enterprise sound scientific and legitimate.  What you
are actually after - and what the arguments are really about - is something
quite different.


Building the tools is possible.  Coordinating them is possible.  But the
next step, which you have mislabeled AGI, is not.


Because what you are really after should be called not Artificial
Intelligence, but Artificial Consciousness.


That is the key characteristic of being human.  It is not something that can
be built through a reductionist construction of finite Turing Machine
programs.



Now, to head off the first objection.  I, too, am an atheist.  I do not
believe in spirits, souls, or other terminology for what Pinker calls the
Ghost in the Machine.  Nor am I sitting here like a character in a James
Whale movie declaring that Man Was Not Meant to Go There.


What I am saying is that the goal of creating an autonomous, autopoietic
construct is farther away than we might think.  That we haven't even gotten
close to the silicon equivalent of a bacterium (much less a dog) and we're
arguing about the best way to build a man.  That an Artificial Conscious
Being will not be built out of linear, binary programming, no matter how
complicated, but be something else.


I can be as wildly speculative as the next person, and with a lot less
real-world scholarship on which to base it.  I can say that every truly
significant step in macro-evolutionary history has been brought about by
symbiosis.  That it may well be that the next such step - the one that
allows the creation/construction of autonomous beings that can last long
enough to survive trips to other stars - will be a symbiotic melding of our
own carbon-based life forms with silicon-based agents.  And that what the
people in this group do and learn can be a significant contribution to that
achievement some day.


But I also believe that several steps along the way to the appearance of
such cyborgs are farther away than we think, even though we know quite a lot
about building interfaces between neurons and chips.  That linear binary
programs just may not be capable of developing emergent overlays upon which
autonomous autopoietic entities can appear.


Well, I've gone way beyond my authority, my standing, perhaps my ability to
be coherent.  Those of you who took my earlier brief postings seriously and
responded with such grace have only yourselves to blame for encouraging me.


The point of the above, however, may simply be that you are getting ahead of
yourselves in even arguing about the validity of AGI.  It isn't valid, it
doesn't exist, and yet pursuing it is a perfectly worthwhile and productive
career.  Have fun.



C. David Noziglia
Object Sciences Corporation
6359 Walker Lane, Alexandria, VA
(703) 253-1095

    "What is true and what is not? Only God knows. And, maybe, America."
                                  Dr. Khaled M. Batarfi, Special to Arab
News

    "Just because something is obvious doesn't mean it's true."
                 ---  Esmerelda Weatherwax, witch of Lancre

