Matt Mahoney wrote:
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
You suggest that a collection of *sub-intelligent* (this is crucial) computer programs can add up to full intelligence just in virtue of their existence.

This is not the same as a collection of *already-intelligent* humans appearing more intelligent because they have access to a lot more information than they did before.

[dumb machine] + Google = dumb machine.

[smart human] + Google = smarter human.

My point of concern is when individual machines (not the whole network) exceed
individual brains in intelligence.  They can't yet, but they will.  Google
already knows more than any human, and can retrieve the information faster,
but it can't launch a singularity.  When your computer can write and debug
software faster and more accurately than you can, then you should worry.

I think this conversation is going nowhere: your above paragraph once again ignores everything I have said up to now.

No computer is going to start writing and debugging software faster and more accurately than we can UNLESS we design it to do so, and during the design process we will have ample opportunity to ensure that the machine will never be able to pose a danger of any kind.



Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
