When I first read Omohundro's paper, my reaction was . . . Wow!  That's
awesome.

Then, when I tried to build on it, I found myself picking it apart instead.  My
previous e-mails from today should explain why.  He is trying to extrapolate
from first principles and toy experiments to the behavior of a very large and
complex system -- when there are just too many additional variables and too
much emergent behavior to do so successfully.  It is a great attempt, and the
paper is worth spending a lot of time with.  My biggest criticism is that he
should have recognized that his statements about "all" goal-driven systems
don't apply to the prototypical example (humans), and he should have offered at
least some explanation of why he believed they don't.

In a way, Omohundro's paper is the prototypical/archetypal example of
Richard's arguments about many AGIers trying to design complex systems through
decomposition and toy examples, and expecting the results to self-assemble and
scale up to full intelligence.

I disagree entirely with Richard's argument that Omohundro's errors have
*anything* to do with architecture.  I am even tempted to argue that Richard is
so enamored with/ensnared in his MES vision that he may well be running afoul
of his own concerns about building complex systems.
  ----- Original Message ----- 
  From: Jim Bromer 
  To: agi@v2.listbox.com 
  Sent: Sunday, May 25, 2008 2:22 PM
  Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

  ----- Original Message -----
  From: "J Storrs Hall, PhD" <[EMAIL PROTECTED]>

  The paper can be found at
  http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf

  Read the appendix, p37ff. He's not making arguments -- he's explaining, with
  a few pointers into the literature, some parts of completely standard and
  accepted economics and game theory. It's all very basic stuff.

  ----------------------------

  I think Omohundro is making arguments, or providing reasoning, to support his
view that the application of rational economic theory and game theory would
tend to make an advanced AGI system capable of self-improvement.  I don't think
anyone would say that is an accepted viewpoint!  (I may not know what you are
talking about; that has actually happened on a few occasions, believe it or
not.  And this may be a different paper from the one that was previously being
discussed.)

  I am not in complete disagreement with Loosemore, because I do not believe
that Omohundro's view is well founded.  But my main disagreement with Loosemore
is that I object to his exaggerated claims, such as his statement that
Omohundro is just pulling conclusions out of thin air.  That argument can be
made against any and all of us until someone actually produces a truly advanced
AI program.  I think Omohundro is pulling some assumptions out of thin air, but
that is acceptable in a conjectural discussion.

  So far, I have found Omohundro's paper to be one of the more enjoyable papers
I have read recently.  But that does not mean I agree with what he says.  I
think Omohundro should apply a slightly higher level of criticism to his own
ideas, but on the other hand, there is also a need to occasionally express
opinions that might not survive that higher level of criticism.

  The more general a comment is, the more it tends to be an opinion.  So the
views I expressed here are really opinions that I have not supported.  I would
have to work much harder to discuss one of Omohundro's ideas in any detail.  If
I wanted to attack (or support) something he wrote, I would have to do at least
a little extra work to make sure that I actually understand him, and I would
draw a few quotes from his paper to argue for or against the apparent intent
and perspective I felt he was expressing.

  But maybe I found a different paper from the one being discussed.  I noticed
that the abstract he wrote for his paper was not written very well (in my
opinion).

  Jim Bromer