On May 27, 2007, at 12:37 PM, Abram Demski wrote:

Alright, that's sensible. The reason I asked was that it seemed to me it would need to keep humans around to build hardware, feed it mathematical info, et cetera.

It is not at all sensible. Today we have no real idea how to build a working AGI. We have theories, some of which seem plausible, or at least not obviously flawed, to a few of the qualified people who have studied such things. But we don't have anything working that looks remotely close. We don't have any software system that self-improves in really interesting ways, except perhaps in some very, very constrained genetic programming domains. Nada. We don't have anything that can pick concepts out of the literature at large and integrate them.
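To make concrete how narrow that kind of "self-improvement" is, here is a minimal sketch (my own illustration, not anything from the post or any existing system) of a constrained genetic-programming loop: the candidate programs are tiny arithmetic expression trees, the fitness target is fixed in advance by the human, and nothing outside that narrow search ever changes.

```python
# Minimal sketch of a "very, very constrained" genetic-programming loop.
# All names and the target function are hypothetical illustration only.
import random
import operator

OPS = [operator.add, operator.sub, operator.mul]

def random_program(depth=2):
    """Build a random expression tree over one input variable x."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(-5, 5)])
    return (random.choice(OPS), random_program(depth - 1), random_program(depth - 1))

def evaluate(prog, x):
    """Recursively evaluate an expression tree at the point x."""
    if prog == "x":
        return x
    if isinstance(prog, int):
        return prog
    op, left, right = prog
    return op(evaluate(left, x), evaluate(right, x))

def fitness(prog):
    """Human-fixed goal: approximate x**2 + 1 on a handful of points."""
    return -sum((evaluate(prog, x) - (x * x + 1)) ** 2 for x in range(-3, 4))

def evolve(generations=50, pop_size=100):
    """Keep the better half each generation; 'improvement' never leaves this search space."""
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [random_program() for _ in survivors]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))
```

The system never redefines its own goal, never touches its own code, and never learns anything outside the toy expression language it was handed; that is the gap between what exists and the scenario being worried about.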

To think that a machine that is a glorified theorem prover is going to spontaneously extrapolate all the ways it might improve itself, spontaneously sprout perfect general concept-extraction and learning algorithms, spontaneously come to understand software and hardware in depth, develop a will to self-improvement stronger than every other consideration whatsoever, and somehow get or be given all the resources needed to eventually convert everything into a highly efficient computational matrix is utterly and completely bizarre when you think about it. It is far more unlikely than gray goo.

Can we stop wasting very valuable time and brains on protecting against the most unlikely of possibilities and get on with actually increasing intelligence on this poor besotted rock?

- samantha
