Richard,

(P.S. (pre-script) I am posting this in this form to see if my system will
properly mark the distinction between my writing and Richard's, which it is
doing now as I view this screen.)

In response to the post below I have made the following interparagraph
comments: 

-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 24, 2008 1:34 PM
To: [email protected]
Subject: Re: [agi] DO RICHARD'S FOUR FEATURES OF DESIGN DOOM ACTUALLY
PREVENT DESIGNABILITY


Ouch!  Please don't shout, Ed.


[Ed Porter] Sorry, Richard, I didn't realize you were so fragile.  
 
I have to say that you have once again reflected back the statements I 
made with terrible inaccuracy.


[Ed Porter] How can we reflect back your statements accurately when you
almost always claim the alleged inaccuracy relates to meanings of individual
words or lines of text that are uncommon or not reasonably implied?

I cannot deal with all of it, but I'll grab the first item.  Those four 
characteristics are about the mechanisms inside individual symbols. (Or 
the equivalent to those mechanisms, when people put them inside).



You are shouting about the possibility that those four characteristics 
might appear in some other context.  Not relevant.

[Ed Porter] Richard, do you really mean that?  Are you saying that none of
the four features of design doom can contribute to the type of
un-designability you are talking about if they occur anywhere within an AGI
other than within individual symbols?

If so, why?  What is the magic about symbols that causes the four features of
design doom to produce un-designability within them but nowhere else in a
complicated system?

It is not clear that the nodes shown and described in your blog's "An
Informal Illustration of Complexity" are symbols.  If they are not, their
complexity would --- according to your statement above --- be "Not
relevant."  If they are, why do you discuss them in your blog as if they are
relevant?
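For concreteness, the four-feature checklist is easy to satisfy in a toy
program of a dozen lines.  The Python sketch below is purely illustrative
--- the class, names, and update rules are my own invention, not your model
or any actual AGI component --- but each of its nodes has memory,
development (adaptation), identity, and nonlinearity, whether or not one
regards such nodes as "symbols":

```python
import math
import random

class Node:
    """Toy component exhibiting all four features at once."""

    def __init__(self, node_id, seed):
        self.node_id = node_id                # IDENTITY: each instance is unique
        rng = random.Random(seed)
        self.weight = rng.uniform(-1.0, 1.0)  # a per-individual parameter
        self.history = []                     # MEMORY: a record of past inputs

    def interact(self, other, signal):
        # MEMORY: the decision uses stored past signals, not just the current one
        recalled = sum(self.history[-5:])
        # NONLINEARITY: a squashing function followed by a hard threshold
        activation = math.tanh(self.weight * signal + 0.1 * recalled)
        output = 1.0 if activation > 0.0 else -1.0
        self.history.append(signal)
        # DEVELOPMENT: the weight adapts over time as a function of experience
        self.weight += 0.01 * output * signal
        # IDENTITY: the result depends on which individuals met, not just their type
        return output if self.node_id < other.node_id else -output
```

Whether a system built of such components has an overall behavior that is
predictable in practice is exactly the question at issue; the point here is
only that the four features can occur together in places no one would call a
symbol.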

[Ed Porter] The frustration I feel is not so much that much of what you say
seems to be false --- it is more that there may be real issues involving the
potential for complexity to hamper the development of AGI --- and, thus, it
could be an important issue --- but you appear to take so little care in
properly reasoning through your writing that reading it provides much more
in the way of needless confusion and misleading statements than it does in
meaningful information.

I really think you could benefit from developing your ideas with someone
else who could help you clarify your thinking.  Of course, if you do that,
there is always the danger they could prove your ideas are not nearly as
special, powerful, or even correct as you think.  But at least it would
spare readers from text containing as many apparent errors as your current
blog.  And if there is anything to your ideas, such a person might help you
express them in a way that makes sense to ordinary mortals like most of the
people on this list.
  


Richard Loosemore








Ed Porter wrote:
> As I have quoted below, in his susaro.com blog, Richard Loosemore states 
> that any system with MEMORY, ADAPTATION, IDENTITY (individuals within a 
> type), and NON-LINEARITY cannot be understood, nor can it be designed to 
> have a desired overall behavior.
> 
>  
> 
> I WOULD APPRECIATE IT IF OTHERS ON THIS LIST WOULD CHIP IN WITH THEIR 
> EVIDENCE ONE WAY OR THE OTHER ON THIS IMPORTANT TOPIC --- because it is 
> a key issue in determining whether or not we should believe much of the 
> FUD (Fear, Uncertainty, and Doubt --- an old IBM sales term for 
> denigration of competitive products) Richard has been spreading, which 
> says traditional approaches to AGI design, including those used by Ben et 
> al. for Novamente, are dead meat because of unsolvable problems with the 
> type of complexity he defines (i.e., RL-complexity).
> 
>  
> 
> It is my strong hunch that Richard's statement about these four features of 
> design doom is provably false.  It is my hunch that many AI systems with 
> these four features have been built and have worked roughly as designed 
> --- but in my post copied below I said that, off the top of my head, I 
> could not think of any, and by that I meant any that I knew had been 
> built, had worked roughly as planned, and for sure had all four features 
> of doom.
> 
>  
> 
> I believe that Novamente, if it were built, would have all four 
> features of design doom, as apparently does Richard, judging from his many 
> anti-Novamente statements.  So, I am guessing, would Joscha Bach's 
> MicroPsi, Stan Franklin's LIDA, and Laird et al.'s Soar --- all of which 
> have been built and, as I understand it, work --- presumably with a fair 
> amount of experimentation thrown in --- somewhat as designed.
> 
>  
> 
> I would not even be surprised if the fluid grammar Stephen Reed is 
> working on has all four of these features of doom.  (Stephen, please 
> tell me whether this is true.)
> 
>  
> 
> It appears from Stephen's Apr 21 2008 - 5:16pm post about fluid grammar 
> that it has (1) MEMORY, because it records individual new words and 
> phrases it sees occurring in text --- (2) DEVELOPMENT, because its 
> ability to properly parse adapts over time through learning from the 
> text --- (3) IDENTITY, because I assume it classifies its individual word 
> forms, words, and/or phrases within classes (here I am guessing; 
> Stephen, please correct me if I am wrong) --- and (4) NON-LINEARITY, 
> because it presumably performs many of the types of non-linear functions, 
> such as thresholding and yes/no decision making, that would be used in 
> almost any AGI, such as Novamente.
> 
>  
> 
> Richard has been using notions of RL-complexity to spread "FUD" against 
> many other people's approaches to AGI.  After much asking, he has now 
> tried to justify his denigration of others' work on his susaro.com blog.  
> So far a significant part of his objection to such work is based on the 
> above four features of design doom. 
> 
>  
> 
> SO PLEASE SPEAK UP THOSE OF YOU ON THIS LIST WITH ANY EVIDENCE OR SOUND 
> ARGUMENTS --- PRO OR CON --- ABOUT WHETHER RICHARD'S "FOUR FEATURES OF 
> DESIGN DOOM" ACTUALLY DO DOOM ENGINEERING OF AGI SYSTEMS, SUCH AS 
> NOVAMENTE.
> 
>  
> 
>  
> 
> -----Original Message-----
> *From:* Ed Porter [mailto:[EMAIL PROTECTED]
> *Sent:* Wednesday, April 23, 2008 9:06 PM
> *To:* [email protected]
> *Subject:* RE: [agi] Adding to the extended essay on the complex systems 
> problem
> 
>  
> 
> Richard,
> 
>  
> 
> In your blog you said:
> 
>  
> 
> "- Memory.  Does the mechanism use stored information about what it was 
> doing fifteen minutes ago, when it is making a decision about what to do 
> now?  An hour ago?  A million years ago?  Whatever:  if it remembers, 
> then it has memory.
> 
>  
> 
> "- Development.  Does the mechanism change its character in some way 
> over time?  Does it adapt?
> 
>  
> 
> "- Identity.  Do individuals of a certain type have their own unique 
> identities, so that the result of an interaction depends on more than 
> the type of the object, but also the particular individuals involved?
> 
>  
> 
> "- Nonlinearity.  Are the functions describing the behavior deeply 
> nonlinear?
> 
>  
> 
> "These four characteristics are enough. Go take a look at a natural 
> system in physics, or an engineering system, and find one in which the 
> components of the system interact with memory, development, identity and 
> nonlinearity.  You will not find any that are understood.
> 
> "Notice, above all, that no engineer has ever tried to persuade one of 
> these artificial systems to conform to a pre-chosen overall behavior."
> 
>  
> 
>  
> 
> I am quite sure there have been many AI systems that have had all four of 
> these features, that have worked pretty much as planned, whose behavior 
> is reasonably well understood (although not totally understood, as is 
> nothing that is truly complex in the non-Richard sense), and whose 
> overall behavior has been as chosen by design (with a little 
> experimentation thrown in).  To be fair, I can't remember any off the 
> top of my head, because I have read about so many AI systems over the 
> years.  But recording episodes is very common in many prior AI systems.  
> So is adaptation.  Nonlinearity is almost universal, and Identity as you 
> define it would be pretty common.
> 
>  
> 
> So, please --- other people on this list, help me out --- but I am quite 
> sure systems have been built that prove the above quoted statement to be 
> false.
> 
>  
> 
> Ed Porter   


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com
