--- "Edward W. Porter" <[EMAIL PROTECTED]> wrote: > To Derek Zahn > > You're 9/30/2007 10:58 AM post is very interesting. It is the type of > discussion of this subject -- potential dangers of AGI and how and when do > we deal with them -- that is probably most valuable. > > In response I have the following comments regarding selected portions of > your post's (shown in all-caps). > > "ONE THING THAT COULD IMPROVE SAFETY IS TO REJECT THE NOTION THAT AGI > PROJECTS SHOULD BE FOCUSED ON, OR EVEN CAPABLE OF, RECURSIVE SELF > IMPROVEMENT IN THE SENSE OF REPROGRAMMING ITS CORE IMPLEMENTATION." > > Sounds like a good idea to me, although I don't fully understand the > implications of such a restriction.
The implication is that you would have to ban intelligent software
productivity tools. You cannot do that. You can make strong arguments for
the need for tools that prove software security, but any tool capable of
analysis and testing at a human level of intelligence is also capable of
recursive self-improvement.

> "BUT THERE'S AN EASY ANSWER TO THIS: DON'T BUILD AGI THAT WAY. IT IS
> CLEARLY NOT NECESSARY FOR GENERAL INTELLIGENCE"

Yes it is. In my last post I mentioned Legg's proof that a system cannot
predict (understand) a system of greater algorithmic complexity. RSI is
necessarily an evolutionary algorithm. The problem is that any goal other
than rapid reproduction and acquisition of computing resources is
unstable. The first example of this was the 1988 Morris worm.

It doesn't matter if Novamente is a "safe" design. Others will not be.
The first intelligent worm would mean the permanent end of being able to
trust your computers.

Suppose we somehow come up with a superhumanly intelligent intrusion
detection system able to match wits with a superhumanly intelligent worm.
How would you know it was working? Your computer says "all is OK." Is
that the IDS talking, or the worm?

-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email