Jim

My understanding is that a Novamente-like system would have a process of
natural selection that tends to favor the retention and use of patterns
(perceptual, cognitive, behavioral) that prove themselves useful in
achieving goals in the world in which it is embodied.

It seems to me that such a process of natural selection would naturally tend
to limit how out-of-touch many of an AGI's patterns could become, at least
with regard to patterns about things for which the AGI has had considerable
experience from the world in which it is embodied.
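
To make that concrete, here is a minimal toy sketch of the kind of selection
process I mean. The names (Pattern, PatternPool, prune) and the
exponential-decay scoring are my own illustrative assumptions, not anything
taken from Novamente itself:

class Pattern:
    """A perceptual, cognitive, or behavioral pattern with a running usefulness score."""
    def __init__(self, name):
        self.name = name
        self.utility = 0.0  # exponentially weighted record of goal success

    def record_outcome(self, goal_achieved, rate=0.1):
        # Patterns that keep helping achieve goals accumulate utility;
        # patterns that stop matching the embodied world decay toward zero.
        self.utility = (1 - rate) * self.utility + rate * (1.0 if goal_achieved else 0.0)


class PatternPool:
    """Retains only the patterns that keep proving useful in the agent's world."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.patterns = []

    def add(self, pattern):
        self.patterns.append(pattern)

    def prune(self):
        # The "natural selection" step: keep the highest-utility patterns, drop the rest.
        self.patterns.sort(key=lambda p: p.utility, reverse=True)
        del self.patterns[self.capacity:]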

However, we humans often get pretty out of touch with real-world
probabilities, as shown by the recent bubble in housing prices and by the
commonly repeated, although historically inaccurate, claim of several years
ago that housing prices never go down on a national basis.

It would be helpful to make AGIs a little more accurate than we humans are
in their evaluation of the evidence for many of their assumptions, and of
what that evidence really says.

Ed Porter

-----Original Message-----
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Saturday, November 29, 2008 10:49 AM
To: agi@v2.listbox.com
Subject: [agi] Mushed Up Decision Processes

One of the problems that comes with the casual use of analytical
methods is that the user becomes inured to their habitual misuse. When
casual familiarity is combined with habitual ignorance of the
consequences of that misuse, the user can become over-confident or
unwisely dismissive of criticism, regardless of how on the mark it
might be.

The most proper use of statistical and probabilistic methods is to
base results on a strong association with the data they were derived
from.  The problem is that the AI community cannot afford that strong a
connection to the original sources, because they are trying to emulate
the mind in some way, and it is not reasonable to assume that the mind
is capable of storing all the data it has used to derive insight.

This is a problem any AI method has to deal with; it is not just a
probability thing.  What is wrong with the mind-set of the
AI-probability group is that very few of its proponents ever consider
the problem of statistical ambiguity and its obvious consequences.

All AI programmers have to consider the problem.  Most theories about
the mind posit the use of similar experiences to build up theories
about the world (or to derive methods to deal effectively with the
world).  So even though the methods for dealing with the data
environment are detached from the original sources of those methods,
they can still be reconnected by examining similar experiences that
subsequently occur.
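
A minimal sketch of what I mean by reconnecting a derived belief to later
similar experiences; the names (Estimate, reconnect) and the Laplace
smoothing are just illustrative assumptions on my part:

class Estimate:
    """A derived belief that keeps a compressed summary of its evidence,
    so it can be rechecked against similar experiences later even though
    the original raw data is gone."""
    def __init__(self, description):
        self.description = description
        self.successes = 0
        self.trials = 0

    def probability(self):
        # Laplace-smoothed estimate, so an untested belief is not treated as certain.
        return (self.successes + 1) / (self.trials + 2)

    def reconnect(self, similar_outcomes):
        # Re-evaluate the belief against newly encountered similar experiences,
        # instead of letting it drift away from the situations it came from.
        for outcome in similar_outcomes:
            self.trials += 1
            if outcome:
                self.successes += 1

# Example: a belief derived long ago, rechecked against recent observations.
belief = Estimate("housing prices never fall nationally")
belief.reconnect([False, False, True, False])
print(belief.description, "->", round(belief.probability(), 2))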

But it is still important to be able to recognize the significance and
necessity of doing this from time to time.  It is important to be able
to re-evaluate parts of your theories about things.  We are not just
making little modifications to our internal theories when we react to
ongoing events; we must also be making some sort of re-evaluation of
our insights about the kind of thing we are dealing with.

I realize now that most people in these groups probably do not
understand where I am coming from, because their idea of AI programming
is based on a model of programming that is flat.  You have the program
at one level, and the possible reactions to the data that is input as
the values of the program variables are carefully constrained by that
level.  You can imagine a more complex model of programming by
appreciating the possibility that the program can react to IO data by
rearranging its subprograms to make new kinds of programs.  Although a
subtle argument can be made that any program that conditionally reacts
to input data is rearranging the execution of its subprograms, the
explicit recognition by the programmer that this is a useful tool in
advanced programming is probably highly correlated with its more
effective use.  (I mean, of course it is highly correlated with its
effective use!)  I believe that casually constructed learning methods
(and decision processes) can lead to even more uncontrollable results
when used with this self-programming aspect of advanced AI programs.
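
Here is a toy sketch of the difference I am pointing at; the particular
stages and rearrangement rules are invented purely for illustration:

def smooth(data):
    return [round(v, 1) for v in data]

def drop_nonpositive(data):
    return [v for v in data if v > 0]

def summarize(data):
    return sum(data) / len(data) if data else 0.0

class ReconfigurableProgram:
    """A non-flat program: it reacts to IO data not only by branching on
    values, but by rearranging which subprograms run, and in what order."""
    def __init__(self):
        self.pipeline = [smooth, drop_nonpositive, summarize]

    def react(self, data):
        # The self-programming step: extreme values promote filtering to the
        # front, and very short inputs drop the filtering stage entirely.
        if any(abs(v) > 100 for v in data):
            self.pipeline = [drop_nonpositive, smooth, summarize]
        if len(data) < 3:
            self.pipeline = [s for s in self.pipeline if s is not drop_nonpositive]

    def run(self, data):
        self.react(data)
        for stage in self.pipeline[:-1]:
            data = stage(data)
        return self.pipeline[-1](data)

prog = ReconfigurableProgram()
print(prog.run([0.5, 250.0, -3.2, 7.7]))  # the pipeline was rearranged before running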

The consequence, then, of failing to recognize this problem, and of
relying on mushed-up decision processes that are never compared against
the data (or kinds of situations) they were derived from, will be the
inevitable emergence of inherently illogical decision processes that
mush up an AI system long before it gets any traction.

Jim Bromer

