Ed,
You appear to be moving into the FUD camp yourself. However . . .
Richard,
I'm afraid that you have successfully talked me out of the complex systems
camp.
>> 6) A system is deemed "complex" if the smallest size of a theory that will
>> explain that system is so large that, for today's human minds, the discovery
>> of that theory is simply not practical. Notice that this definition does not
>> imply that there are any such systems in the real world; it just says that if
>> the theory size were ever to go off the scale, then the system would (by
>> definition) be complex.
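Richard's quoted definition can be loosely caricatured in code. The sketch below is entirely my own analogy, not anything from his blog: it uses compressed size as a crude stand-in for "smallest theory size," so a highly regular system gets a short theory while a messier one does not.

```python
import zlib

def theory_size_proxy(observations: bytes) -> int:
    # Crude stand-in: the compressed length of a system's observed
    # behavior approximates the size of the smallest theory explaining it.
    return len(zlib.compress(observations))

regular = b"ab" * 500                                           # highly regular system
irregular = bytes((i * i * 89 + 7) % 256 for i in range(1000))  # much messier one

# The regular system needs a far smaller "theory" than the irregular one.
print(theory_size_proxy(regular) < theory_size_proxy(irregular))
```

On this caricature, a system would be "complex" in Richard's sense only when no theory small enough for humans to find exists at all, which is exactly the claim in dispute below.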
I just don't believe that the core of intelligence is complex according to
this definition. The combination of the core of intelligence *plus* the world
is clearly complex, but I don't believe that a boot-strap intelligence need be
complex (by this definition).
This definition is *not* what I understand to be complex.
----- Original Message -----
From: Ed Porter
To: [email protected]
Sent: Thursday, April 24, 2008 10:48 AM
Subject: [agi] DO RICHARD'S FOUR FEATURES OF DESIGN DOOM ACTUALLY PREVENT
DESIGNABILITY
As I have quoted below, in his susaro.com blog, Richard Loosemore states that
any system with MEMORY, ADAPTATION, IDENTITY (individuals within a type), and
NON-LINEARITY cannot be understood, nor can it be designed to have a desired
overall behavior.
I WOULD APPRECIATE IT IF OTHERS ON THIS LIST WOULD CHIP IN WITH THEIR EVIDENCE
ONE WAY OR THE OTHER ON THIS IMPORTANT TOPIC --- because it is a key issue in
determining whether or not we should believe much of the FUD (Fear,
Uncertainty, and Doubt --- an old IBM sales term for denigration of competitive
products) that Richard has been spreading to say traditional approaches to AGI
design, including those used by Ben et al. for Novamente, are dead meat because
of unsolvable problems with the type of complexity he defines (i.e.,
RL-complexity).
It is my strong hunch that Richard's statement about these four features of
design doom is provably false. It is my hunch that many AI systems with these
four features have been built and have worked roughly as designed --- but in my
post copied below I said that, off the top of my head, I could not think of
any; by that I meant any that I knew for sure had been built, had worked
roughly as planned, and had all four features of doom.
I believe that Novamente, if it were built, would have all four features of
design doom, as apparently does Richard, judging from his many anti-Novamente
statements. So, I am guessing, would Joscha Bach's MicroPSI, Stan Franklin's
LIDA, and Laird et al.'s SOAR --- all of which have been built and, as I
understand it, work --- presumably with a fair amount of experimentation thrown
in --- somewhat as designed.
I would not even be surprised if the fluid grammar Stephen Reed is working on
has all four of these features of doom. (Stephen, please tell me if this is
true or not.)
It appears from Stephen's Apr 21 2008 - 5:16pm post about fluid grammar that
it has (1) MEMORY, because it records individual new words and phrases it has
seen occurring in text --- (2) DEVELOPMENT, because its ability to properly
parse adapts over time, through learning from the text --- (3) IDENTITY,
because I assume it classifies its individual word forms, words, and/or
phrases within classes (here I am guessing; Stephen, please correct me if I am
wrong) --- and (4) NON-LINEARITY, because it presumably performs many of the
types of non-linear functions, such as thresholding and yes/no decision
making, that would be used in almost any AGI such as Novamente.
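For what it is worth, the four features are easy to exhibit together even in a trivial program. The sketch below is entirely my own toy construction --- it is not Stephen's fluid grammar, Novamente, or anything from Richard's blog --- but it shows memory, development, identity, and non-linearity coexisting in a few lines:

```python
import math

class ToyLearner:
    """A made-up toy exhibiting all four 'features of doom' at once."""

    def __init__(self, name):
        self.name = name   # IDENTITY: each instance is a distinct individual
        self.memory = {}   # MEMORY: counts of tokens seen so far

    def observe(self, token):
        # DEVELOPMENT: behavior adapts as counts accumulate over time
        self.memory[token] = self.memory.get(token, 0) + 1

    def accepts(self, token):
        # NON-LINEARITY: a sigmoid squashed to a hard yes/no threshold
        count = self.memory.get(token, 0)
        confidence = 1.0 / (1.0 + math.exp(-(count - 2)))
        return confidence > 0.5

a, b = ToyLearner("a"), ToyLearner("b")
for _ in range(3):
    a.observe("cat")                 # only 'a' learns "cat"
print(a.accepts("cat"), b.accepts("cat"))  # the two identities now diverge
```

Whether such a program is "understood" is exactly the question: its behavior here is fully predictable by design, which is the opposite of what the four-features argument would seem to require.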
Richard has been using notions of RL-complexity to spread "FUD" against many
other people's approaches to AGI. After much asking, he has now tried to
justify his denigration of others' work on his susaro.com blog. So far a
significant part of his objection to such work is based on the above four
features of design doom.
SO PLEASE SPEAK UP THOSE OF YOU ON THIS LIST WITH ANY EVIDENCE OR SOUND
ARGUMENTS --- PRO OR CON --- ABOUT WHETHER RICHARD'S "FOUR FEATURES OF DESIGN
DOOM" ACTUALLY DO DOOM ENGINEERING OF AGI SYSTEMS, SUCH AS NOVAMENTE.
-----Original Message-----
From: Ed Porter [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 23, 2008 9:06 PM
To: [email protected]
Subject: RE: [agi] Adding to the extended essay on the complex systems problem
Richard,
In your blog you said:
"- Memory. Does the mechanism use stored information about what it was doing
fifteen minutes ago, when it is making a decision about what to do now? An
hour ago? A million years ago? Whatever: if it remembers, then it has memory.
"- Development. Does the mechanism change its character in some way over
time? Does it adapt?
"- Identity. Do individuals of a certain type have their own unique
identities, so that the result of an interaction depends on more than the type
of the object, but also the particular individuals involved?
"- Nonlinearity. Are the functions describing the behavior deeply nonlinear?
"These four characteristics are enough. Go take a look at a natural system in
physics, or an engineering system, and find one in which the components of the
system interact with memory, development, identity and nonlinearity. You will
not find any that are understood."
"Notice, above all, that no engineer has ever tried to persuade one of these
artificial systems to conform to a pre-chosen overall behavior."
I am quite sure there have been many AI systems that have had all four of
these features, that have worked pretty much as planned, whose behavior is
reasonably well understood (although not totally understood --- but neither is
anything that is truly complex in the non-Richard sense), and whose overall
behavior has been as chosen by design (with a little experimentation thrown
in). To be fair, I can't remember any specific ones off the top of my head,
because I have read about so many AI systems over the years. But recording
episodes is very common in many prior AI systems. So is adaptation.
Nonlinearity is almost universal, and Identity as you define it would be
pretty common.
So, please --- other people on this list, help me out --- but I am quite sure
systems have been built that prove the above quoted statement to be false.
Ed Porter
-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 23, 2008 4:11 PM
To: [email protected]
Subject: [agi] Adding to the extended essay on the complex systems problem
Yesterday and today I have added more posts (susaro.com) relating to the
definition of complex systems and why this should be a problem for AGI
research.
Richard Loosemore
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com