Matt Mahoney wrote:
> --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > No computer is going to start writing and debugging software faster and
> > more accurately than we can UNLESS we design it to do so, and during the
> > design process we will have ample opportunity to ensure that the machine
> > will never be able to pose a danger of any kind.

> Perhaps, but the problem is like trying to design a safe gun.

It is 100% NOT like trying to design a safe gun. There is no resemblance whatsoever to that problem.

> Maybe you can program it with a moral code, so it won't write malicious
> code.  But the two sides of the security problem require almost identical
> skills.  Suppose you ask the AGI to examine some operating system or
> server software to look for security flaws.  Is it supposed to guess
> whether you want to fix the flaws or write a virus?

If it has a moral code (it does), then why on earth would it have to guess whether you want it to fix the flaws or write a virus? By asking that question you are implicitly assuming that this "AGI" is not an AGI at all, but something so incredibly stupid that it cannot tell the difference between the two. If you make that assumption, we have nothing to worry about: a system that stupid would be too stupid to be a "general" intelligence, and therefore not even potentially dangerous.
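To put that concretely: "having a moral code" just means the system vets every proposed action against an explicit policy, instead of guessing at intent. A toy sketch in Python (every name in it is hypothetical, not anybody's real design):

# Toy sketch: a system with an explicit moral code does not guess at
# intent, it checks each proposed action against that code before acting.
# All names here are illustrative only.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str  # what the system intends to do
    effect: str       # "repair", "report", or "exploit"

# The "moral code", made explicit as a whitelist of permissible effects.
PERMITTED_EFFECTS = {"repair", "report"}

def vet(action: ProposedAction) -> bool:
    """Refuse any action whose effect falls outside the permitted set."""
    return action.effect in PERMITTED_EFFECTS

fix = ProposedAction("patch the buffer overflow in the login handler", "repair")
worm = ProposedAction("weaponize the buffer overflow as a worm", "exploit")
print(vet(fix))   # True: consistent with the code, so nothing to guess
print(vet(worm))  # False: refused outright

The point of the sketch is only that the check is architectural: the question "fix or exploit?" never arises as a guess, because the policy answers it before any action is taken.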



> Suppose you ask it to write a virus for the legitimate purpose of testing
> the security of your system.  It downloads copies of popular software from
> the internet and analyzes it for vulnerabilities, finding several.  As
> instructed, it writes a virus, a modified copy of itself running on the
> infected system.  Due to a bug, it continues spreading.  Oops... Hard
> takeoff.

Again, you implicitly assume that this "AGI" is so stupid that it makes a copy of itself and inserts it into a virus when asked to make an experimental virus. Any system that stupid does not have a general intelligence, and it will never cause a hard takeoff, because an absolute prerequisite for hard takeoff is that the system have the wits to handle no-brainer [:-)] questions like these.
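Even leaving intelligence aside, the precaution involved is utterly mundane: any competent tester, human or machine, runs an experimental virus in a disposable, network-isolated sandbox, so that even a buggy one cannot spread. A minimal sketch, assuming Docker is installed (the image, path, and function name are illustrative only):

# Minimal sketch of the obvious containment step, assuming Docker is
# available on the test machine; every name and path here is illustrative.

import subprocess

def run_poc_in_sandbox(poc_path: str) -> int:
    """Run a proof-of-concept exploit in a throwaway, isolated container."""
    result = subprocess.run(
        ["docker", "run",
         "--rm",                       # discard the container afterwards
         "--network", "none",          # no network: nothing can spread
         "-v", f"{poc_path}:/poc:ro",  # mount the PoC read-only
         "alpine", "/poc"],            # hypothetical base image and entry
        capture_output=True,
        timeout=60,                    # kill it if it hangs
    )
    return result.returncode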

This kind of Stupid-AGI scenario comes up all the time: the SL4 list was absolutely full of them when last I was wasting my time over there, and when I last encountered anyone from SIAI they were still spouting them, without the slightest understanding of the incoherence of what they were saying.

Richard Loosemore
