Derek Zahn wrote:
[snip]
Surely certain AGI efforts are more dangerous than others, and the "opaqueness" that Yudkowsky writes about is, at this point, not the primary danger. However, in that context, I think that Novamente is, to an extent, opaque in the sense that its actions may not be reducible to anything clear (call such things "emergent" if you like, or just "complex"). If I understand Loosemore's argument, he might say that AGI without this type of opaqueness is inherently impossible, which could mean that Friendly AI is impossible. Suppose that's true... what do we do then? Minimize risks, I suppose. Perhaps certain protocols could be developed and agreed to. As an example:

Derek,

No, I would not argue that at all.

The question of whether complex AI is or is not more opaque than 'conventional' AI is not meaningful by itself: the whole point of talking about the complex-systems approach to AGI is that it *cannot* be done without making the systems complex. There is not going to be a conventional AGI that works well enough for anyone to ask if it is opaque or not.

Now, is the particular approach to AGI that I espouse "opaque" in the sense that you cannot understand its friendliness?

No: it is much less opaque than the alternatives.

I have argued that this is the ONLY way I know of to build AGI in a manner that allows safety/friendliness to be guaranteed.

I will have more to say about that tomorrow, when I hope to make an announcement.


Richard Loosemore
