>> Any thoughts?
My first thought is that you put way too much in a single post . . . .
>> The process that we call "thinking" is VERY different in various people.
Or even markedly different from one occasion to the next in the same person. I
am subject to a *very* strong Seasonal Affective Disorder effect (call it
seasonal-cycle manic-depression, though not quite that extreme). After many
years, I recognize that I think *entirely* differently in the summer as opposed
to the middle of winter.
>> Once they adopted an erroneous model and "stored" some information based on
>> it, they were stuck with it and its failures for the remainder of their
>> lives.
While true in many (and possibly the majority of cases), this is nowhere near
universally true. This is like saying that you can't unlearn old, bad habits.
>> Superstitious learning is absolutely and theoretically unavoidable.
No. You are conflating multiple things here. Yes, we always start learning by
combination -- but then we use science to weed things out. The problem is --
most people aren't good scientists or cleaners.
>> Certainly, no one has suggested ANY reason to believe that the great
>> ultimate AGI of the long distant future will be immune to it.
I believe that, with the ability to have its beliefs transparent and open to
inspection by itself and others, the great ultimate AGI of the near future will
be able to perform scientific clean-up *much* better than you can possibly
imagine.
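To make the idea concrete: here is a toy sketch of what "beliefs transparent and open to inspection" could buy you. Everything here (the class, its methods, the evidence labels) is my own illustration, not a description of any actual AGI design. The point is that if every belief records the evidence it rests on, then discrediting one piece of evidence lets you mechanically find and retract everything built on it -- the "scientific clean-up" pass.

```python
# Hypothetical sketch: a transparent belief store with recorded provenance.
# When a piece of supporting evidence is discredited, every belief resting
# on it can be located and retracted -- unlike opaque human memory, where
# conclusions outlive the discredited premises they were built on.

class BeliefStore:
    def __init__(self):
        self.beliefs = {}  # claim -> set of evidence labels supporting it

    def add(self, claim, evidence):
        """Record a belief together with the evidence it depends on."""
        self.beliefs[claim] = set(evidence)

    def retract_evidence(self, bad_evidence):
        """Discredit one evidence source; retract every dependent belief."""
        removed = [c for c, ev in self.beliefs.items() if bad_evidence in ev]
        for claim in removed:
            del self.beliefs[claim]
        return removed


store = BeliefStore()
store.add("thunder is the gods bowling", {"childhood story"})
store.add("thunder follows lightning", {"observation", "physics text"})

# Discredit the childhood story; only the belief built on it is retracted,
# while the independently supported belief survives the clean-up.
retracted = store.retract_evidence("childhood story")
```

A human cannot run this pass because the provenance was never stored; a system with inspectable beliefs can run it continuously.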
Mark
----- Original Message -----
From: Steve Richfield
To: [email protected]
Sent: Monday, April 21, 2008 11:54 PM
Subject: [agi] Random Thoughts on Thinking...
The process that we call "thinking" is VERY different in various people. In
my own case, I was mercury poisoned (which truncates neural tubes) as a baby,
was fed a low/no fat diet (which impairs myelin growth), and then at the age of
5, I had my metabolism trashed by general anesthesia (causing "brain fog"). I
have since corrected my metabolic problems, I now eat LOTS of fat, and I
flushed the mercury out of my system.
However, the result of all of this was dramatic - I tested beyond genius in
some ways (first tested at the age of 6), and below average in others. I could
solve complex puzzles at lightning speed, but had the memory of an early
Alzheimer's patient. However, one thing was quite clear - whatever it was that
went on behind my eyeballs was VERY different from other people. No, I don't
mean "better" or "worse" than others, but completely different. My horrible
memory FORCED me to resort to understanding many things that other people
simply remembered, as at least for me, those understandings lasted a lifetime,
while my memory would probably be gone before the sun went down. This pushed me
into a complex variable-model version of reality, from which I could see that
nearly everyone operated from fixed models. Once they adopted an erroneous
model and "stored" some information based on it, they were stuck with it and
its failures for the remainder of their lives. This apparently underlies most
religious belief, as children explain the unknown in terms of God, and are then
stuck with this long after they realize that neither God nor Santa Claus can
exist as "conscious" entities.
Superstitious learning is absolutely and theoretically unavoidable.
Certainly, no one has suggested ANY reason to believe that the great ultimate
AGI of the long distant future will be immune to it. Add some trusted
misinformation (that we all get) and you have the makings of a system that is
little better than us, other than it will have vastly superior abilities to
gain superstitious learning and spout well-supported but erroneous conclusions
based on it.
My efforts on Dr. Eliza were to create a system that was orthogonal to our
biologically-based problem solving abilities. No, it usually did NOT solve
problems in the traditional way of telling the user what is broken (except in
some simplistic cases where this was indeed possible), but rather it focused on
just what it was that the user apparently did NOT know to have such a problem.
Inform the user of whatever it is that they did not know, and their "problem"
will evaporate through obviation - something subtly different than being
"solved". Of course, some of that "knowledge" will be wrong, but hopefully
users have the good sense to skip over "Steve's snake oil will cure all
illnesses" and consider other "facts".
One job I had was as the in-house computer and numerical analysis consultant
for the Physics and Astronomy departments of a major university. There it
gradually soaked in that the symbol manipulation of Algebra and "higher"
mathematics itself made some subtle mis-assumptions that often led people
astray. For example, if you pass a value with some uncertainty (as all values
have) through a function with a discontinuity (as many interesting functions
have), then when the range of uncertainty includes the discontinuity, it is
pretty hard to compute any useful result. Add to this the UNLIMITED ultimate
range of uncertainty of many things - the "uncertainty" being a statistical
statement involving standard deviations and NOT an absolute limit - and what is
the value of any computation unless you address such issues? Of course, the
answer often
involves many iterations to develop the "space" of results. It is the lack of
presumption of these iterations in algebra and other mathematics that greatly
reduces their value in solving complex real-world problems by simply giving the
naive the wrong solutions to their equations.
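The discontinuity point is easy to demonstrate with the kind of iteration mentioned above. The following sketch (my own illustration; the step function and the Gaussian error model are assumptions chosen for simplicity) propagates an uncertain input through a discontinuous function by Monte Carlo sampling. When the uncertainty range sits clear of the jump, the answer is tight; when it straddles the jump, the computed "mean" is a value the function never actually produces, and the spread explodes.

```python
import random
import statistics

def f(x):
    # A function with a jump discontinuity at x = 1.0
    return 0.0 if x < 1.0 else 10.0

def propagate(mean, sigma, n=100_000, seed=0):
    """Monte Carlo propagation of a Gaussian-uncertain input through f.

    Returns the sample mean and standard deviation of the outputs --
    the iterative development of the "space" of results, rather than a
    single symbolic answer that hides the discontinuity.
    """
    rng = random.Random(seed)
    samples = [f(rng.gauss(mean, sigma)) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

# Uncertainty well away from the discontinuity: a tight, meaningful result.
m_far, s_far = propagate(mean=0.0, sigma=0.1)

# Uncertainty straddling the discontinuity: the output is split between
# the two branches, the mean lands on a value f never takes, and the
# standard deviation is as large as the jump itself.
m_near, s_near = propagate(mean=1.0, sigma=0.1)
```

Plugging the nominal value 1.0 into f symbolically gives a confident 10.0 and hides the fact that roughly half the plausible inputs land on the other branch; only the iterated sampling exposes that.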
If even our mathematics is questionable, and it is ABSOLUTELY IMPOSSIBLE to
understand our world without also incorporating unrecognized superstitious
learning, then just what is it that AGI is supposed to do for us? Add (or
multiply) the apparent ability of some software (like an extended Dr. Eliza) to
deal with things that we are really bad at (like skeptically absorbing all of
human knowledge by giving the world's population a platform to explain what it
is that they "know"), and it is pretty clear to me that just about any
intelligent person, given such tools, will outperform any near-term conceivable
AGI that lacks those tools.
OK, so let's look at an AGI that has those tools. Basically, we are putting
the knowledge of thousands, and perhaps millions of people into a single box.
No reasonable amount of personal experience by any human or single AGI will
ever be able to compete with such a vast body of knowledge. Once the knowledge
is fully interrelatable so that a human can do anything with it that an AGI can
do, then there should be no significant difference in problem-solving
performance.
OK, so how about having thousands/millions of AGIs all interrelating
together, instead of having humans interrelating as with Dr. Eliza like
approaches. This may indeed produce superior results - but only after we have
first built a world full of AGIs. Further, I suspect that a world full of the
SAME AGIs won't be nearly as powerful as a world full of DIFFERENT people, who
think and see things from different points of view (something we frown on here
in America). Hence, I simply do not see the impact/usefulness of any small
number of AGIs that some people seem to be excited or concerned about, but
instead see this as something that only people growing up in America would
think valuable.
Consider Iraq for a moment. Democracy absolutely REQUIRES a consensual view
of reality. This simply isn't achievable in many parts of the world, and is of
HIGHLY questionable value even here in America. Most Americans don't even
understand and really don't care why it is that Muslims are willing to die
rather than adopt our ways. Unless Sun Tzu (author of The Art of War) is
completely wrong (probably for the first time), America and Americans' dream of
AGI will be long gone before there is any opportunity to build a real AGI.
I suspect that the REAL issue is that some people here just want to build
their projects and play with them, regardless of whether they have any real
impact on the world, and are probably annoyed by postings like this that
question the potential value of such efforts. However, in doing this, you may
be poisoning the well for future people who really DO want to change things,
hopefully for the better.
Obviously, no one here would ever invest 10 cents into a Dr. Eliza like
approach, even if it involves a comparatively trivial effort and maybe promises
more "value" than any foreseeable AGI, for the same reason that I am not a
stone mason - it just isn't what I want to do.
Any thoughts?
Steve Richfield
------------------------------------------------------------------------------
agi | Archives | Modify Your Subscription