I fail to see why it would not at least be considered likely that a
mechanical brain that could do all the major useful mental processes the
human mind does, but do them much faster over a much, much larger recorded
body of experience and learning, would be capable of greater
intelligence than humans, by most reasonable definitions of "intelligence."

By "super-human intelligence" I mean an AGI able to learn and perform a
large diverse set of complex tasks in complex environments faster and better
than humans, such as being able:

        - to read information more quickly and understand its implications
more deeply;

        - to interpret visual scenes faster and in greater depth;

        - to draw and learn appropriate and/or more complex generalizations
more quickly;

        - to remember, and appropriately recall from, a store of knowledge
hundreds or millions of times larger, more quickly;

        - to instantiate behaviors and mental models in a context-appropriate
way more quickly, deeply, and completely;

        - to respond to situations in a manner that appropriately takes into
account more of the relevant context in less time;

        - to consider more of the implications, interconnections, analogies,
and possible syntheses of all the recorded knowledge in all the fields
studied by all the world's PhDs;

        - to program computers to perform more complex and appropriate tasks
more quickly and reliably;

        - etc.

I have seen no compelling reasons on this list to believe such machines
cannot be built within 5 to 20 years -- although it is not an absolute
certainty they can.  For example, Richard Loosemore's complexity concerns
cannot be totally swept away at this time, but the success of small
controlled-chaos programs like Copycat in dealing with such concerns, using
what I have called "guiding-hand" techniques (techniques similar to those of
Adam Smith's invisible hand), indicates such issues can be successfully
dealt with.

Given the hypothetical assumption such an AGI could be made, I am just
amazed by the narrow-mindedness of those who deny it would be reasonable
to call a machine with such a collection of talents a form of superhuman
intelligence.

It seems we not only need to break the small-hardware mindset but also the
small-mind mindset.

Ed Porter

-----
This list is sponsored by AGIRI: http://www.agiri.org/email