Tim Freeman wrote:
> My point is that if one is worried about a self-improving Seed AI exploding, one should also be worried about any AI that competently writes software exploding.
There *is* a slight gap between competently writing software and competently writing minds. Large by human standards, not much by interspecies standards. It does involve new math issues, which is why some of us are much impressed by it. Anyone with even a surface grasp of the basic concept on a math level will realize that there's no difference between self-modifying and writing an outside copy of yourself, but *either one* involves the sort of issues I've been calling "reflective".
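The claimed equivalence between self-modification and writing an outside copy of yourself can be sketched in toy form. In the sketch below (all names are illustrative assumptions, nothing here is from the post), an "agent" is just a record holding a policy function; mutating that policy in place and constructing a fresh successor agent with the modified policy yield identical behavior, so the same reflective issues arise either way.

```python
# Toy sketch (assumed names, not from the original discussion):
# an agent is a dict holding a policy function "step".

def make_agent(step):
    """Build a minimal agent around a policy function."""
    return {"step": step}

def improved(step):
    """A hypothetical 'self-improvement': compose the policy with itself."""
    return lambda x: step(step(x))

# Path A: self-modification -- the agent rewrites its own policy in place.
agent_a = make_agent(lambda x: x + 1)
agent_a["step"] = improved(agent_a["step"])

# Path B: external copy -- the agent emits a modified successor of itself.
agent_b = make_agent(lambda x: x + 1)
successor_b = make_agent(improved(agent_b["step"]))

# Both paths produce a successor with identical behavior.
print(agent_a["step"](0))      # 2
print(successor_b["step"](0))  # 2
```

The point of the sketch is only that the two construction routes are behaviorally indistinguishable; reasoning about whether the "improvement" is actually an improvement is the reflective problem in both cases.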
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-----
This list is sponsored by AGIRI: http://www.agiri.org/email