My first attempt at writing something Singularity-related that
somebody might actually take seriously. Comments appreciated.

--------------------------------

http://www.saunalahti.fi/~tspro1/artificial.html

In recent years, some thinkers have raised the possibility of a
so-called "superintelligence" being developed within our lifetimes and
radically transforming society. A case has been made (see, for
instance, [Vinge, 1993] [Bostrom, 2000] [Yudkowsky, 2006]) that once
we have a human-equivalent artificial intelligence, it is likely to
rapidly improve itself to become far more intelligent than humans -
with unpredictable results.

Often, people seem to have less trouble with the idea of machine
superiority than with the idea of an artificial intelligence actually
being developed within our lifetimes - to most people, true machine
intelligence currently seems very remote. This text will argue that
there are several different paths by which artificial intelligence
could be developed in the near future, and that the probability of
this happening is high enough that the possibility needs to be taken
into account when making plans for the future.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email