I think this is a topic for the singularity list, but I agree that RSI could 
happen very quickly.  There is already more than enough computing power on the 
Internet to support a superhuman AGI.  One possibility is that it could take 
the form of a worm that spreads the way SQL Slammer did:

http://en.wikipedia.org/wiki/SQL_slammer_(computer_worm)

An AGI of this type would be far more dangerous than an ordinary worm because 
it could analyze code, discover large numbers of vulnerabilities, and exploit 
them all at once.  As the Internet gets bigger, faster, and more complex, the 
risk increases.
 
-- Matt Mahoney, [EMAIL PROTECTED]

----- Original Message ----
From: Hank Conn <[EMAIL PROTECTED]>
To: agi <agi@v2.listbox.com>
Sent: Thursday, November 16, 2006 3:33:08 PM
Subject: [agi] RSI - What is it and how fast?

Here are some of my attempts at explaining RSI...

(1)

Take an instance of intelligence, defined as an algorithm by which an agent 
achieves complex goals in complex environments. As such an algorithm approaches 
the theoretical limits of efficiency for its class, its intelligence approaches 
infinity. Since increasing the computational resources available to an 
algorithm is itself a complex goal in a complex environment, the more 
intelligent an instance becomes, the more capable it is of increasing the 
computational resources available to its algorithm, and the more capable it is 
of optimizing that algorithm for maximum efficiency, thus increasing its 
intelligence in a positive feedback loop.
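
A crude way to see what the feedback loop implies: if the rate of improvement 
grows with current intelligence, dI/dt = k*I^p, then p = 1 gives ordinary 
exponential growth, while any p > 1 reaches infinity in finite time. Here is a 
toy model in Python (the functional form and all the constants are my own 
illustrative assumptions, not anything derived from a real AGI design):

# Toy model of the self-improvement feedback loop: dI/dt = k * I**p.
# p = 1 gives ordinary exponential growth; p > 1 reaches the cap in
# finite time ("intelligence approaches infinity").  All constants
# here are illustrative assumptions, not properties of any real AGI.

def simulate(p, k=0.1, i0=1.0, dt=0.01, t_max=100.0, cap=1e12):
    """Integrate dI/dt = k * I**p with Euler steps; stop at the cap."""
    i, t = i0, 0.0
    while t < t_max and i < cap:
        i += k * (i ** p) * dt
        t += dt
    return t, i

for p in (1.0, 1.2, 1.5):
    t, i = simulate(p)
    print("p = %.1f: I = %.3g at t = %.1f" % (p, i, t))

With p = 1.0 the toy intelligence is still modest (about 2e4) at t = 100; with 
p = 1.5 it passes the cap shortly after t = 20. Whether anything like p > 1 
holds for a real system is exactly what is in question; the point is only that 
a loop whose output feeds its own input behaves qualitatively differently from 
steady exponential progress.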

(2)

Suppose an instance of a mind has direct access to some means of improving and 
expanding both the hardware and the software of its particular implementation. 
Suppose also that the mind's goal system produces a strong goal that directs 
its behavior to aggressively exploit these means. With each increase in the 
capability of its implementation, the mind could:

(1) increase the speed at which its hardware is upgraded and expanded;
(2) more quickly, cleverly, and elegantly optimize its existing software base 
to maximize capability;
(3) develop better cognitive tools and functions, more quickly and in greater 
quantity; and
(4) optimize its implementation at successively lower levels by researching 
and developing better, smaller, more advanced hardware.

This creates a positive feedback loop: the more capable its implementation, 
the more capable the mind is at improving that implementation.
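
Here is the same kind of toy sketch with the four channels made explicit 
(again, the multiplicative combination and all of the rates are assumptions I 
am making purely for illustration):

# Toy sketch of the four channels above: each channel improves at a
# rate proportional to overall capability, and overall capability is
# the product of the channel levels.  The rates and the multiplicative
# form are illustrative assumptions only.

RATES = {
    "hardware_upgrades": 0.02,  # (1) faster hardware upgrade/expansion
    "software_opt":      0.03,  # (2) optimizing the existing software base
    "cognitive_tools":   0.02,  # (3) better cognitive tools and functions
    "lower_level_hw":    0.01,  # (4) smaller, more advanced hardware
}

def capability(levels):
    """Overall capability as the product of the channel levels."""
    c = 1.0
    for level in levels.values():
        c *= level
    return c

levels = {name: 1.0 for name in RATES}
dt = 0.1
for step in range(2001):
    c = capability(levels)
    if step % 50 == 0:
        print("t = %5.1f  capability = %.3g" % (step * dt, c))
    if c > 1e12:
        print("runaway at t = %.1f" % (step * dt))
        break
    # Each channel improves faster the more capable the whole system is.
    for name in RATES:
        levels[name] += RATES[name] * c * dt

Because every channel's progress feeds back into every other channel through 
the product, the run stays nearly flat for a while and then goes vertical, 
which is the shape people have in mind when they argue about soft versus hard 
takeoff.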

How fast could RSI plausibly happen? Is RSI inevitable, and if so, how soon? 
How do we truly maximize the benefit to humanity?

It is my opinion that this could happen extremely quickly once a completely 
functional AGI is achieved. I think it's plausible that it could happen against 
the will of the designers (and go on to pose an existential risk). It is quite 
likely that it would proceed in line with the designers' intentions; however, 
this opens the door to existential disasters in the form of so-called Failures 
of Friendliness. I think it's fairly implausible that the designers would 
suppress this process, except for those who are concerned with completely 
working out issues of Friendliness in the AGI design first.



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
