Hi there,

I am coming at AGI from an apparently unique perspective. Back in 2001 I
contracted an "incurable" illness (idiopathic atrial fibrillation). Having
been involved in a couple of medical research projects in the long distant
past, I simply took this as another project and dived in to find a cure.
After 4 months of researching during every conscious moment (my AF left me
unconscious most afternoons), I did indeed find a one-day cure (followed by
a year of recovery). The cure (resetting my body temperature back to 98.6F)
isn't what is interesting here. What IS interesting here is why this took me
4 months instead of ~1 hour. Certainly there was SOMETHING wrong with the
Internet as it presently exists. Continuing my story...

Just days later, I got a project repairing some "unrepairable" circuit
boards for ~$20M military aircraft simulators. I brought
my 18-year-old daughter into the project as my apprentice to perform some of
the lengthy and boring testing. She had observed my curing of my AF and seen
some other successful medical research projects, and after a week on the job
commented that the process of repairing circuit boards is just like curing
illnesses, only the very specific activities (e.g. screening for errant
metabolic parameters vs. screening for nodes with errant dynamic impedances)
were different. We talked about this for several weeks, and she was exactly
right.

I then decided to write an AI program to solve very difficult problems,
which gradually morphed into the Dr. Eliza that has been presented and
demonstrated by my kids at various past WORLDCOMP AI conferences. Dr. Eliza
does NOT deal in direct questions, but rather takes problem statements and
drills down into whatever it was that the author needed NOT to know to have
such a problem. After all, the only reason that we have problems is that
there is something important that we don't already understand or haven't
already applied.
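
To make that concrete, here is a minimal Python sketch of the general
idea. The record format, patterns, and cause chain are my own
illustration for this posting, NOT Dr. Eliza's actual internals:

    import re

    # Hypothetical knowledge records: a slightly variable but easily
    # describable text snippet, and the condition it points at.
    PATTERNS = [
        (re.compile(r"(irregular|fluttering|racing) heart", re.I),
         "atrial fibrillation"),
    ]

    # Hypothetical cause chain to drill down through.
    CAUSED_BY = {
        "atrial fibrillation": "errant metabolic parameter",
        "errant metabolic parameter": "sub-normal body temperature",
    }

    def drill_down(problem_statement):
        """Recognize a condition in a free-text problem statement, then
        follow the cause chain toward what the author needed NOT to know."""
        for pattern, condition in PATTERNS:
            if pattern.search(problem_statement):
                chain = [condition]
                while chain[-1] in CAUSED_BY:
                    chain.append(CAUSED_BY[chain[-1]])
                return chain
        return []

    print(drill_down("For two years my racing heart has ruined my afternoons."))
    # ['atrial fibrillation', 'errant metabolic parameter',
    #  'sub-normal body temperature']

Note that no direct question is ever posed; the unretouched problem
statement alone drives the drill-down.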

Meanwhile, I was participating heavily in Yahoo's WS-Forum, helping others
with similar internal regulatory problems. I loaded Dr. Eliza with my own
essential knowledge about errant body temperatures and "idiopathic" atrial
fibrillation and started throwing unretouched postings into Dr. Eliza.
Surprisingly, Dr. Eliza often noticed subtle indirect references to
contributory factors that I had missed in my own readings of the postings.
In short, with only ~200 of my own knowledge records in its knowledge base,
it was serious competition for ME.

I then added the ability to exchange knowledge via USENET with other
incarnations of Dr. Eliza, so that many authors could contribute to a
knowledge base that would WAY outperform any of the individual authors.
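
The mechanics are simple: each record travels as an ordinary USENET
article that any incarnation can parse back into its knowledge base.
Here is a minimal sketch of how such an exchange might look, assuming a
hypothetical newsgroup name and a JSON record format of my own invention
(not necessarily what Dr. Eliza actually uses):

    import json
    import nntplib  # in the Python standard library through 3.12

    GROUP = "alt.dr-eliza.knowledge"  # hypothetical newsgroup name

    def record_to_article(record, author):
        """Serialize one knowledge record (a dict with at least a
        'condition' key) as a plain-text article for other incarnations."""
        return (f"From: {author}\n"
                f"Newsgroups: {GROUP}\n"
                f"Subject: KNOWLEDGE {record['condition']}\n"
                f"\n{json.dumps(record, indent=2)}\n")

    def fetch_records(server):
        """Pull each article in the group and parse its body back into
        a knowledge record."""
        with nntplib.NNTP(server) as news:
            _resp, _count, first, last, _name = news.group(GROUP)
            for num in range(first, last + 1):
                try:
                    _resp, info = news.article(str(num))
                except nntplib.NNTPError:
                    continue
                lines = [l.decode("utf-8", "replace") for l in info.lines]
                yield json.loads("\n".join(lines[lines.index("") + 1:]))

    article = record_to_article(
        {"condition": "atrial fibrillation",
         "caused_by": "errant metabolic parameter"},
        "steve@example.com")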

OK, so why aren't I rich? For the same reasons that most smart guys aren't
rich. People simply don't trust anyone or anything that may be smarter than
they are. I have a good friend who is the Director of Research for a major
university's medical center. I discussed Dr. Eliza at length with him, and
he flatly stated that there was NO POLITICAL WAY to integrate such a product
into any major medical environment, simply because no doctor is going to
stand aside and watch a computer run circles around them.

BTW, the principles behind Dr. Eliza are rather unusual. I'd be glad to send
some papers to anyone who is interested. Briefly, Joseph Weizenbaum's
original Eliza was built on two concepts, one good and one bad, that no one
had previously separated. The good concept was that individual links in complex
cause and effect chains could be recognized by the occurrence of slightly
variable but easily describable snippets of text/speech. The bad concept was
that text/speech could be usefully manipulated by juggling words around.
Weizenbaum
then wrote a book discrediting his own Eliza (with its unseparated
concepts), thereby causing AI research to take a wrong turn 40 years ago
from which it never recovered. However, the internals of Dr. Eliza aren't
really the subject of this posting, other than to demonstrate that AGI
already exists, at least in this one potentially useful form.
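
Still, to show the good concept standing on its own: a slightly variable
snippet is just a pattern, and each pattern that fires asserts one link
in a cause-and-effect chain, with no juggling of the input's words (the
bad concept). A toy sketch, again my own and not Dr. Eliza's code:

    import re

    # One describable-but-variable snippet per cause -> effect link.
    LINKS = [
        (re.compile(r"\b(always|often) (cold|chilly)\b", re.I),
         ("low body temperature", "impaired enzyme activity")),
        (re.compile(r"\bheart (skips|flutters|races)\b", re.I),
         ("impaired enzyme activity", "atrial fibrillation")),
    ]

    def links_in(text):
        """Recognize individual cause -> effect links from snippet
        matches, never rearranging the words of the input itself."""
        return [link for pattern, link in LINKS if pattern.search(text)]

    for cause, effect in links_in("I'm often chilly, and my heart flutters."):
        print(cause, "->", effect)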

Extrapolating into the future: I see no hope for this group to mitigate the
impact of AGI, nor should it, any more than the Luddites were able to blunt
the entry of modern mechanized manufacturing. The one thing that writing Dr.
Eliza drilled into me is that people, even PhDs, even me, are REALLY
REALLY STUPID compared with computationally combining the expertise of many
people. That the Dr. Eliza project included the discovery of Reverse
Reductio ad Absurdum (RRAA) reasoning, which is crucial to resolving
apparently irresolvable disputes, hammers this home: society's failure to
understand RRAA underlies nearly every dispute in world history, yet it
somehow went undiscovered through millennia of wars and other conflicts.
Providing some mechanized intelligence is a service and a hope for mankind's
future sanity, and the only "threat" that I see is that some true believers
in the truly absurd will truly get ground under AGI's wheels, as they truly
should.

Any thoughts on all this?

Steve Richfield
