Kaj Sotala wrote:
On 1/24/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Theoretically yes, but behind my comment was a deeper analysis (which I
have posted before, I think) according to which it will actually be very
difficult for a negative-outcome singularity to occur.

I was really trying to make the point that a statement like "The
singularity WILL end the human race" is completely ridiculous.  There is
no WILL about it.

Richard,

I'd be curious to hear your opinion of Omohundro's "The Basic AI
Drives" paper at
http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
(apparently, a longer and more technical version of the same can be
found at
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf,
but I haven't read it yet). I found the arguments made relatively
convincing, and to me, they implied that we do indeed have to be
/very/ careful not to build an AI which might end up destroying
humanity. (I'd thought that was the case before, but reading the paper
only reinforced my view...)

Kaj,

I have only had time to look at it briefly this evening, but it looks like Omohundro is talking about "Goal Stack" systems.

I made a distinction, once before, between Standard-AI "Goal Stack" (GS) systems and another type that has a diffuse motivation system instead.

Summary of the difference:

1) I am not even convinced that an AI driven by a GS will ever actually become generally intelligent, because of the self-contradictions built into the idea of a goal stack. I am fairly sure that whenever anyone tries to scale one of those things up to a real AGI (something that has never been done, not by a long way), the AGI will become so unstable that it will be an idiot.

2) A motivation-system AGI would have a completely different set of properties, and among them would be extreme stability. It would be possible to ensure that the thing stayed locked on to a human-empathic goal set, and remained locked on to it. (I sketch the contrast between the two designs just below.)
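To make the contrast concrete, here is a rough toy sketch of my own in Python. Nothing in it comes from Omohundro's paper or from any actual system; every class, function, and drive name is invented purely for illustration:

    from dataclasses import dataclass, field
    from typing import Callable, List, Optional

    @dataclass
    class GoalStackAgent:
        # Standard-AI "Goal Stack" (toy version): one explicit goal
        # controls behavior at any moment, with subgoals pushed on top.
        stack: List[str] = field(default_factory=list)

        def current_goal(self) -> Optional[str]:
            return self.stack[-1] if self.stack else None

        def push_subgoal(self, subgoal: str) -> None:
            # A single bad goal decomposition here redirects ALL behavior,
            # because nothing outside the stack constrains what is pushed.
            self.stack.append(subgoal)

    @dataclass
    class Drive:
        # One diffuse motivation: a weight plus a function scoring how
        # well a candidate action satisfies this drive.
        name: str
        weight: float
        score: Callable[[str], float]

    @dataclass
    class MotivationSystemAgent:
        # Diffuse motivation system (toy version): every drive votes on
        # every action, so no single goal representation can seize
        # control of behavior.
        drives: List[Drive] = field(default_factory=list)

        def choose(self, actions: List[str]) -> str:
            # An action that badly violates one strongly weighted drive
            # (say, human empathy) is outvoted even if it serves some
            # other drive well.
            return max(actions,
                       key=lambda a: sum(d.weight * d.score(a)
                                         for d in self.drives))

The only point of the toy is this: in the first design, stability depends on every single goal decomposition being correct, whereas in the second, stability is a global property of the whole set of drives acting together.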

Omohundro's analysis is all predicated on the Goal Stack approach, so my response is that nothing he says has any relevance to the type of AGI that I talk about (which, as I say, is probably going to be the only type ever created).

I will try to go into this in more depth as soon as I get a chance.



Richard Loosemore
