I agree, Richard, let's bury the hatchet on this one.  I think the truth is
obvious to anyone who has read your blog and followed the various threads of
this argument on this list.   Ed Porter


-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 24, 2008 6:36 PM
To: [email protected]
Subject: Re: [agi] THE NEWEST REVELATIONS ABOUT RICHARD'S COMPLEXITY THEORIES


Ed,

This is just garbled.  And insulting, too:  you try to imply that I have 
changed my stance, when in fact I have said the same thing throughout, 
but you keep finding new ways to get completely confused about what I 
have said.

My point has always been that all those systems only work to a limited 
extent, and one reason they only work to a limited extent is that they 
try to pretend that cognition is not complex.

I am not going to explain anything else to you, because whatever I say, 
you get confused and then launch even more accusations than before.

Sorry, but this is too silly.  Bye.


Richard Loosemore





Ed Porter wrote:
> Richard,
> 
> The most important point in your post below is your newly introduced
> limitation: your four features of design doom don't necessarily prevent
> the design of many AI systems, but are, you believe, very likely to cause
> design doom in very large AGI systems --- such as human-level systems ---
> that are extremely complex (in the old-fashioned sense), particularly once
> they have done a tremendous amount of self-modification, in the sense of
> automatic learning and adaptation.
> 
> It actually makes sense that as a system becomes vastly more complex ---
> as I believe any human-level AGI will have to be --- complexity, in the
> sense of the system becoming hard to control properly, might become a
> problem.
> 
> But since you have now admitted that the four features of design doom
> haven't doomed design in many current AI systems --- which presumably
> include large systems like SOAR and LIDA --- there is nothing in your blog
> (at least as of last night) or your response below to indicate how large a
> system has to be, or how much of each of the four factors is required, for
> design doom to occur.  And there does not appear to be anything other than
> a hunch on your part as to the size at which design doom becomes nearly or
> actually inevitable.
> 
> The Googleplex arguably has the four design features of doom, and it has
> run for roughly a decade, adapting its indexes to arguably more
> information than many full AGI systems may process over their lifetimes,
> and it has remained remarkably stable.  It has memory.  It has
> development, in the sense of automatically adapting its indexes.  It has
> identity, at least in the sense of identifying individual users, and
> presumably types of users, for use in placing ads.  Finally, it has
> nonlinearity in much of its decision making, such as in handling word
> forms.
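> 
> To see how trivially a running program can exhibit all four features at
> once, here is a minimal toy sketch (my own illustration in Python, under
> my own assumptions --- not a model of anything Google actually runs):
> 
>     # Toy system whose components interact with memory, development,
>     # identity, and nonlinearity all at once.
>     import math
>     import random
> 
>     class Component:
>         def __init__(self, name):
>             self.name = name               # identity: each unit is distinguishable
>             self.history = []              # memory: past interactions are stored
>             self.weight = random.random()  # development: this drifts over time
> 
>         def interact(self, other, signal):
>             # Identity: the outcome depends on *which* partner is involved,
>             # not merely on its type.
>             bias = 0.1 if other.name == "hub" else 0.0
>             # Memory: the last stored value feeds into the present decision.
>             last = self.history[-1] if self.history else 0.0
>             # Nonlinearity: a deeply nonlinear (sigmoidal) response.
>             out = math.tanh(self.weight * signal + 0.5 * last + bias)
>             self.history.append(out)
>             # Development: the component adapts its character as it runs.
>             self.weight += 0.01 * out
>             return out
> 
>     units = [Component("hub")] + [Component("u%d" % i) for i in range(9)]
>     for step in range(1000):
>         a, b = random.sample(units, 2)
>         a.interact(b, random.gauss(0.0, 1.0))
> 
> Every AI programmer has written something of this shape; the question is
> whether such features doom design, not whether they can be built.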
> 
> The Googleplex is less likely to have design-doom problems than some AGIs
> --- but it is not clear by how much, since your blog provides no math for
> estimating where, in a design space whose dimensions are the four features
> of design doom (and perhaps other parameters), design doom is likely to
> kick in, and to what degree.
> 
> So, Richard, it does not appear that your theory of Richard-complexity
> --- with its four features of design doom and its concept of
> "untouchability" --- has, as it relates to AGI, added anything solid to
> the AGI community's understanding, other than that when we build large,
> automatically running AGIs there may well be complexity problems that
> present very real, and possibly extremely difficult, challenges.
> 
> I have said for years --- long before I ever heard of Richard-complexity
> --- that the only really big problem I know of in making human-level AI
> (other than getting the massively parallel, highly interconnected hardware
> and the software tools to program it) is getting it all to work together
> well automatically.  It doesn't appear your theory adds anything to that
> except a greater degree of doubt about whether we humans are smart enough
> to solve the problem.
> 
> ----------
> Now I will indicate my comments on other parts of your response.
> 
>> ====RICHARD====>
> Ed,
> 
> You have put words into my mouth:  I have never tried to argue that a 
> narrow-AI system cannot work at all.
> 
> (Narrow AI is what you are referring to above:  it must be narrow AI, 
> because there have not been any fully functioning *AGI* systems 
> delivered yet, and you refer to systems that have been built).
> 
> ====ED=========>
> I did not put words in your mouth.  When I wrote "AI" I was using the
> term to include both narrow AI and AGI, since both are AI systems (so mine
> was not a strained interpretation).
> 
> There have been multiple systems, such as SOAR and, I think, LIDA, which
> seem to apply a common automatic learning-and-behaving architecture to
> many different types of problems, and thus can be considered AGIs.
> 
>> ====RICHARD====>
> The point of my argument is to claim that such narrow AI systems CANNOT 
> BE EXTENDED TO BECOME AGI SYSTEMS.  The complex systems problem predicts 
> that when people allow those four factors listed above to operate in a 
> full AGI context, where the system is on its own for a lifetime, the 
> complexity effects will then dominate.
> 
> ====ED=========>
> Well, I wish you had said this in your blog.  The blog does not appear to
> state any limitation making design doom depend on a system being as large
> as a huge AGI system.  In fact it says, speaking of the four features of
> design doom:
> 
> "These four characteristics are enough. Go take a look at a natural system
> in physics, or an engineering system, and find one in which the components
> of the system interact with memory, development, identity and
nonlinearity.
> You will not find any that are understood."
> 
> This language implies design doom would occur in any system with these
> four features.  Thus, it implies just the opposite of what you call the
> "point of your argument" in the paragraph above.
> 
>> ====RICHARD====>
> ..
> 
> 
> 
> 
> -----Original Message-----
> From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, April 23, 2008 10:15 PM
> To: [email protected]
> Subject: Re: [agi] Adding to the extended essay on the complex systems problem
> 
> Ed Porter wrote:
>> Richard,
>> In your blog you said:
>>
>> "- Memory.  Does the mechanism use stored information about what it was 
>> doing fifteen minutes ago, when it is making a decision about what to do 
>> now?  An hour ago?  A million years ago?  Whatever:  if it remembers, 
>> then it has memory.
>>
>> "- Development.  Does the mechanism change its character in some way 
>> over time?  Does it adapt?
>>
>> "- Identity.  Do individuals of a certain type have their own unique 
>> identities, so that the result of an interaction depends on more than 
>> the type of the object, but also the particular individuals involved?
>>
>> "- Nonlinearity.  Are the functions describing the behavior deeply 
>> nonlinear?
>>
>> These four characteristics are enough. Go take a look at a natural 
>> system in physics, or an engineering system, and find one in which the 
>> components of the system interact with memory, development, identity and 
>> nonlinearity.  You will not find any that are understood.
>>
>> ".
>>
>> "Notice, above all, that no engineer has ever tried to persuade one of 
>> these artificial systems to conform to a pre-chosen overall behavior.."
>>
>>  
>>
>>  
>>
>> I am quite sure there have been many AI systems that have had all four of 
>> these features, that have worked pretty much as planned, whose behavior is 
>> reasonably well understood (although not totally understood, as nothing 
>> that is truly complex in the non-Richard sense ever is), and whose overall 
>> behavior has been as chosen by design (with a little experimentation 
>> thrown in).  To be fair, I can't remember any off the top of my head, 
>> because I have read about many AI systems over the years.  But recording 
>> episodes is very common in many prior AI systems.  So is adaptation.  
>> Nonlinearity is almost universal, and identity as you define it would be 
>> pretty common.
>>
>>  
>>
>> So, please --- other people on this list, help me out --- but I am quite 
>> sure systems have been built that prove the above-quoted statement false.
> 
> Ed,
> 
> You have put words into my mouth:  I have never tried to argue that a 
> narrow-AI system cannot work at all.
> 
> (Narrow AI is what you are referring to above:  it must be narrow AI, 
> because there have not been any fully functioning *AGI* systems 
> delivered yet, and you refer to systems that have been built).
> 
> The point of my argument is to claim that such narrow AI systems CANNOT 
> BE EXTENDED TO BECOME AGI SYSTEMS.  The complex systems problem predicts 
> that when people allow those four factors listed above to operate in a 
> full AGI context, where the system is on its own for a lifetime, the 
> complexity effects will then dominate.
> 
> In effect, what I am claiming is that people have been masking the 
> complexity effects by mollycoddling their systems in various ways, and 
> by not allowing them to run for long periods of time, or in general 
> environments, or to ground their own symbols.
> 
> I would predict that when people do this "mollycoddling" of their AI 
> systems, the complex systems effects would not become apparent very soon.
> 
> Guess what?  That exactly fits the observed history of AI.  When people 
> try to make these AI systems operate in ways that bring out the 
> complexity, the systems fail.
> 
> 
> 
> Richard Loosemore
> 
> 
> 
> 
> P.S.  Please don't call it "Richard-complexity" .... it has nothing to 
> do with me:  this is "complexity" the way that lots of people understand 
> the term.  If you need to talk about the concept that is the opposite of 
> simple, it would be better to use "complicated".  Personalizing it just 
> creates confusion.
> 
