Ben Goertzel wrote:
> What if iterative self-revision causes the system's goal G to "drift"
> over time... 

I think this is inevitable - it's just evolution carrying on as it always 
will.  The key issue, then, is what processes can be set in train, operating 
over time, to keep evolution re-inventing and re-committing AGIs (and 
humans too) to ethical behaviour.  Maybe communities of AGIs can 
create this dynamic.

Can isolated, non-socialised AGIs be ethical in relation to the whole?

A book that I found fascinating on the ethics issue in earlier evolutionary 
stages is:

Good Natured: The Origins of Right and Wrong in Humans and Other 
Animals 
by Frans de Waal (Paperback - October 1997) 
Harvard University Press; ISBN: 0674356616; reprint edition (October 1997) 

It's well worth a read.

Cheers, Philip


Of course, one can seek to architect one's AGI system to mitigate goal
drift under iterative self-revisions.

But algorithmic information theory comes up again, here.

At some point, a self-revising AGI system, which periodically adds new
hardware onto itself, will achieve a complexity (in the algorithmic
information theory sense) greater than that of the human brain.  At this
point, one can formally argue, it becomes *impossible for humans to predict
exactly what it will do* in the general case.  We just don't have the
compute power in our measly little brains....  So we certainly can't be
sure that goal drift won't occur in a system of superhuman complexity...
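
To make the intuition a bit more concrete -- this is only a rough sketch,
not a rigorous theorem, and the notation K(.) for Kolmogorov complexity is
my own gloss on the argument, not something Ben spelled out:

   Let K(S) be the algorithmic (Kolmogorov) complexity of the
   self-revising system S, and K(H) that of a human brain H plus
   any fixed external aids.  Any predictor that reproduces S's
   exact behaviour on all inputs must itself have complexity
   >= K(S) - O(1), since a description of S can be recovered from
   such a predictor.  So if K(S) > K(H) + O(1), no human-complexity
   predictor can exactly reproduce S's behaviour in general --
   though coarse, statistical, or partial predictions may of course
   still be possible.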

This is an issue to be rethought again & again as AGI gets closer &
closer...

-- Ben


