On Nov 30, 2006, at 10:15 PM, Hank Conn wrote:

Yes. The point is that if you have an AGI and you aren't in a sufficiently fast RSI loop, there is a good chance that if someone else launches an AGI with a faster RSI loop, your AGI will lose control to the other AGI wherever that AGI's goals differ from yours.


Are you sure that "control" would be a high priority for such systems?


What I'm saying is that the outcome of the Singularity is going to be exactly the target goal state of the AGI with the strongest RSI curve.

The further the actual target goal state of that particular AGI is from the actual target goal state of humanity, the worse the outcome.


What on earth is "the actual target goal state of humanity"? AFAIK there is no such thing. For that matter, I doubt very much that there is or can be an unchanging target goal state for any real AGI.


The goal of ... humanity ... is that the AGI implemented with the strongest RSI curve will also be such that its actual target goal state is exactly congruent with the actual target goal state of humanity.


This seems rather circular and ill-defined.

- samantha


