Randall Randall wrote:
> To bring this slightly back to AGI:
>
> Richard Loosemore wrote:
>> The thrust of my reply was that this entire idea of Matt's made no sense, since the AGI could not be a "general" intelligence if it could not see the full implications of the request.
>
> I'm sure you know that most humans fail to see the full
> implications of *most* things.  Is it your opinion, then,
> that a human is not a general intelligence?

I would agree that humans do not always see the full implications of their actions.

But the context of my dispute with Matt was this statement:

"It is obvious that AGIs would be dangerous because it is *clear* that they would be capable of doing things like accidentally building a catastrophic computer virus."

Where I was going with my reply was this line of reasoning:

1) The potential damage that an AGI could do would be proportional to its intelligence ... and what Matt was trying to do (in effect) was scare us with the immense potential for damage implicit in a powerful, intelligent AGI.

2) However, by the same token, a powerful and intelligent AGI would also be able to see the implications of actions at least as well as we could.

In fact, I would go further and say that, because of recursive self-improvement, the AGI could have the power to investigate the dangers in truly immense detail, and to guard against them in extremely sophisticated ways.

You can see my problem with Matt's argument: he wanted to have his cake and eat it too. The AGI is supposed to be unthinkably powerful when it comes to the damage it could do, but at the same time it is supposed to be incredibly naive (even the stupidest human security expert could see that building AGI potential into a virus would be fabulously risky).

This conjunction of "It's SO Dangerous!!!" with "It could do this REALLY Stupid thing!" is something that I have noticed many times in discussions of the future of AGI. It happens a lot in science fiction films, for sure, but it also happens when AI researchers talk about the subject. This is something that I once referred to as the "Dumbtelligence" mistake.

I think we need to fight against that very strongly, because it preys on people's fears in an irresponsible way.

Anyhow, going back to your question: in principle any intelligence (human or AGI) could make mistakes, but the issue here is what kind of mistakes they would make in the particular context in which they are likely to exist. Because of the context surrounding the creation of AGIs, I find it deeply implausible that in practice they could be both dumb and dangerous.



Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=90641921-b11fd0
