====== Mark Waser's Fri 4/25/2008 9:22 AM post said======> 
I have to side with Richard on this.  The truth is *not* obvious.  I have my
beliefs which I will express when I get the time later today or tomorrow --
but -- there is absolutely no reason for you to be dismissive like this.
Richard is just *doing science* while you appear to me to be *doing
politics*.  Your ideas *might* be better but his methods are MUCH
better.======end of quote======

WHY ARE RICHARD'S METHODS --- WHICH PRODUCE FALSE STATEMENTS --- "MUCH
BETTER" THAN MY IDEAS --- which you say are "better"?  That is contrary to
any reasonable notion of what science is supposed to be about.  Showing that
statements on a scientific subject are false --- when there is strong reason
to believe they are false --- *is* "doing science."   It contributes more to
scientific understanding than making such false statements does.

AND WITH REGARD TO "THE TRUTH IS *NOT* OBVIOUS" --- obviously at some level
there is no clear "truth" about anything.  All of reality could be a
deception.  But Science is based on the idea that there is, in fact,
something to the notion that the "truth", or at least significant aspects of
it, can be learned and known about certain subjects based on observations
and experiments.  

And by any reasonable standard of truth it is clear Richard's statement in
his blog --- that the four features of design doom make it impossible to
design a system that has them --- was false as originally written.  It is
false because AI systems have been designed which contain these four
features.  

Furthermore, it is clear Richard made another false statement when he (1)
claimed I had unfairly characterized his four-feature argument by implying
it was broad enough to cover non-AGI AI systems as well as AGI systems, and
then (2), when I pointed out that this distinction was not included in his
original statement of his four-features argument, implied I was falsely
describing what he had written.  

The fact is, it is clearly Richard who gave the false description, because
in his original statement of his four-features-of-design-doom argument he
said:

***"These four characteristics are enough. Go take a look at a natural
system in physics, or an engineering system, and find one in which the
components of the system interact with memory, development, identity and
nonlinearity. You will not find any that are understood."***

This makes it quite clear he intended his four features argument to cover
not only AGI systems, but also any AI system, any computer system, or any
engineered system, whatsoever, that contained the four features.

So Mark if you prefer Richard's false and sometimes dishonest statements to
my attempts to point out the truth, that's your right --- but please don't
imply it promotes science.  

And with regard to your allegation of "politics", please appreciate that
Richard is one of the most extreme people on this list in attacking the
arguments and intelligence of other people.  And when he does so based on
reasoning that is false and dishonest, I and others have a right to point
out the falseness and dishonesty of his arguments and attacks.  

It's not only science.  It's fairness. 

Ed Porter

P.S. You can have the last say in response to this post.  I have already
spent too much time on this particular subject.


-----Original Message-----
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 24, 2008 5:58 PM
To: [email protected]
Subject: [agi] THE NEWEST REVELATIONS ABOUT RICHARD'S COMPLEXITY THEORIES


Richard,

The most important point in your below post is your newly introduced
limitation that your four features of design doom don't necessarily prevent
design of many AI systems --- but are --- you believe --- very likely to
cause design doom in very large AGI systems --- such as human-level systems
--- that are extremely complex (in the old fashioned sense) ---
particularly, once they have done a tremendous amount of self modification,
in the sense of automatic learning and adaptation.

It actually makes sense that as a system becomes vastly more complex --- as
I believe any human level AGI's will have to be --- that complexity --- in
the sense of systems that are hard to properly control --- might become a
problem.  

But since you now have admitted the four features of design doom haven't
doomed design in many current AI systems --- which presumably include
current large AIs like SOAR and LIDA --- there is nothing in your blog (at
least as of last night) or your response below to indicate how large a
system has to be --- or how much of each of the four factors is required ---
for design doom to occur.  And there does not appear to be anything other
than a hunch --- on your part --- as to the size at which design doom
becomes nearly or actually inevitable.

The Googleplex arguably has the four design features of doom and it has run
for roughly a decade adapting its indexes to arguably more information than
many full AGI systems may over their lifetimes, and it has remained
remarkably stable.  It has memory.  It has development in the sense of
automatically adapting its indexes.  It has identity, at least in the sense
of identifying individual users and presumably types of users for use in
placing ads.  Finally, it has nonlinearity in much of its decision making,
such as in handling word forms, etc.  
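To make the four features concrete: here is a toy sketch (my own
hypothetical illustration, not any real system, certainly not Google's)
of a program that plainly has memory, development, identity and
nonlinearity --- and yet whose behavior is trivially understood, which
is exactly the kind of counterexample at issue:

```python
import math

class ToyAgent:
    """A minimal system exhibiting all four 'design doom' features."""

    def __init__(self, agent_id, bias):
        self.agent_id = agent_id  # Identity: each agent is a unique individual
        self.bias = bias          # Identity: outcomes depend on which individual interacts
        self.history = []         # Memory: a stored record of past inputs

    def respond(self, x):
        self.history.append(x)            # Memory: it remembers every input
        self.bias += 0.01 * x             # Development: its character changes over time
        recent = sum(self.history[-5:]) / min(len(self.history), 5)
        return math.tanh(self.bias + recent)  # Nonlinearity: a deeply nonlinear response

a = ToyAgent("a", 0.0)
b = ToyAgent("b", 1.0)
# Same input, different individuals -> different outputs (identity matters),
# and repeated inputs drift the response (development) --- yet the whole
# system is completely analyzable.
print(a.respond(0.5), b.respond(0.5))
```

The point is not that this toy is an AGI, but that possessing the four
features, by itself, clearly does not make a system's behavior impossible
to understand or design.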

The Googleplex is less likely to have design-doom problems than some AGIs
--- but it is not clear by how much, since your blog provides no math for
estimating where --- in a design space having as dimensions the four
features of design doom (and perhaps other parameters) --- design doom is
likely to kick in, and to what degree.

So Richard, it does not appear your theory of Richard-complexity --- with
its four features of design doom and its concept of "untouchability" --- as
it relates to AGI has added anything solid to the AGI community's
understanding, other than that when we build large automatically running
AGIs there might well be complexity problems that present very real, and
possibly extremely difficult, challenges.

I have said for years --- long before I ever heard of Richard-complexity ---
that the only really big problem I know of in making human level AI ---
(other than getting the massively parallel and highly interconnected
hardware and the software tools to program on it) --- is getting it all to
work together well automatically.  It doesn't appear your theory adds
anything to that except a greater degree of doubt about whether we humans
are smart enough to solve that problem. 

----------
Now I will indicate my comments on other parts of your response.

>====RICHARD====>
Ed,

You have put words into my mouth:  I have never tried to argue that a 
narrow-AI system cannot work at all.

(Narrow AI is what you are referring to above:  it must be narrow AI, 
because there have not been any fully functioning *AGI* systems 
delivered yet, and you refer to systems that have been built).

====ED=========>
I did not put words in your mouth.  When I used "AI" I was using the term to
include both narrow AI and AGI, since both are AI systems (so it is not a
strained interpretation I was intending).

There have been multiple systems, such as SOAR and, I think, LIDA, which
seem to apply a common automatic learning and behaving architecture to many
different types of problems, and thus can be considered AGIs.

>====RICHARD====>
The point of my argument is to claim that such narrow AI systems CANNOT 
BE EXTENDED TO BECOME AGI SYSTEMS.  The complex systems problem predicts 
that when people allow those four factors listed above to operate in a 
full AGI context, where the system is on its own for a lifetime, the 
complexity effects will then dominate.

====ED=========>
Well, I wish you had said this in your blog.  The blog does not state any
limitation making design doom depend on a system being as large as a huge
AGI system.  In fact it says, speaking of the four features of design doom:

"These four characteristics are enough. Go take a look at a natural system
in physics, or an engineering system, and find one in which the components
of the system interact with memory, development, identity and nonlinearity.
You will not find any that are understood."

This language implies design doom would occur in any system with these four
features.  Thus, it implies just the opposite of what you call the "point of
your argument" in your paragraph above.





-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 23, 2008 10:15 PM
To: [email protected]
Subject: Re: [agi] Adding to the extended essay on the complex systems
problem

Ed Porter wrote:
> Richard,
> In your blog you said:
> 
> "- Memory.  Does the mechanism use stored information about what it was 
> doing fifteen minutes ago, when it is making a decision about what to do 
> now?  An hour ago?  A million years ago?  Whatever:  if it remembers, 
> then it has memory.
> 
> "- Development.  Does the mechanism change its character in some way 
> over time?  Does it adapt?
> 
> "- Identity.  Do individuals of a certain type have their own unique 
> identities, so that the result of an interaction depends on more than 
> the type of the object, but also the particular individuals involved?
> 
> "- Nonlinearity.  Are the functions describing the behavior deeply 
> nonlinear?
> 
> These four characteristics are enough. Go take a look at a natural 
> system in physics, or an engineering system, and find one in which the 
> components of the system interact with memory, development, identity and 
> nonlinearity.  You will not find any that are understood.
> 
> ".
> 
> "Notice, above all, that no engineer has ever tried to persuade one of 
> these artificial systems to conform to a pre-chosen overall behavior.."
> 
>  
> 
>  
> 
> I am quite sure there have been many AI system that have had all four of 
> these features and that have worked pretty much as planned and whose 
> behavior is reasonably well understood (although not totally understood, 
> as is nothing that is truly complex in the non-Richard sense), and whose 
> overall behavior has been as chosen by design (with a little 
> experimentation thrown in) .  To be fair I can't remember any off the 
> top of my head, because I have read about many AI systems over the 
> years.  But recording episodes is very common in many prior AI systems.  
> So is adaptation.  Nonlinearity is almost universal, and Identity as you 
> define it would be pretty common.
> 
>  
> 
> So, please --- other people on this list help me out --- but I am quite 
> sure system have been built that prove the above quoted statement to be 
> false.

Ed,

You have put words into my mouth:  I have never tried to argue that a 
narrow-AI system cannot work at all.

(Narrow AI is what you are referring to above:  it must be narrow AI, 
because there have not been any fully functioning *AGI* systems 
delivered yet, and you refer to systems that have been built).

The point of my argument is to claim that such narrow AI systems CANNOT 
BE EXTENDED TO BECOME AGI SYSTEMS.  The complex systems problem predicts 
that when people allow those four factors listed above to operate in a 
full AGI context, where the system is on its own for a lifetime, the 
complexity effects will then dominate.

In effect, what I am claiming is that people have been masking the 
complexity effects by mollycoddling their systems in various ways, and 
by not allowing them to run for long periods of time, or in general 
environments, or to ground their own symbols.

I would predict that when people do this "mollycoddling" of their AI 
systems, the complex systems effects would not become apparent very soon.

Guess what?  That exactly fits the observed history of AI.  When people 
try to make these AI systems operate in ways that brings out the 
complexity, the systems fail.



Richard Loosemore




P.S.  Please don't call it "Richard-complexity" .... it has nothing to 
do with me:  this is "complexity" the way that lots of people understand 
the term.  If you need to talk about the concept that is the opposite of 
simple, it would be better to use "complicated".  Personalizing it just 
creates confusion.










-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com
