On Saturday, 26 April 2008 17:00, Pei Wang [mailto:[EMAIL PROTECTED]]
wrote:

> to many people, including me, this is exactly what AGI is
> after: a baby with all kinds of potentials, not an adult that can do
> everything.

I understand AGI in the same way, but even the phrase "all kinds of
potentials" seems to me a wish that cannot be fulfilled, and one that no
human baby really has either.

Let's take the halting problem
(http://en.wikipedia.org/wiki/Halting_problem). It is a fact that no
Turing machine (http://en.wikipedia.org/wiki/Turing_machine) can solve
it for all program-input pairs.
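The classic diagonalization argument behind that fact can be sketched in a few lines of Python. Everything here is illustrative: `halts` stands for a hypothetical, always-correct halting oracle (which provably cannot exist), and `paradox` is the program that defeats it.

```python
def halts(program, data):
    """Hypothetical oracle: would return True iff program(data) halts.
    No Turing machine can implement this correctly for all inputs,
    so here it is only a placeholder."""
    raise NotImplementedError("the halting problem is undecidable")

def paradox(program):
    """Do the opposite of what the oracle predicts about us."""
    if halts(program, program):
        while True:      # oracle said we halt, so loop forever
            pass
    return               # oracle said we loop, so halt immediately
```

If `halts` existed, asking it about `paradox(paradox)` would force a contradiction: whatever answer it gives, `paradox` does the opposite.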

I think your argument is that an AGI system (e.g., a human being) can
solve any halting problem because it can change over time by gaining
more and more experience. But even the "experience-gaining" human being
can be regarded as a Turing machine with a fixed and finite algorithm.
All knowledge it can obtain is already implicitly there, in the
universe. The universe can be modeled as part of the infinite tape of
the Turing machine; the computer or the brain is the finite table of the
Turing machine. Every experience the AGI gains can be modeled as reading
data from the tape.
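To make the picture concrete, here is a minimal Turing-machine sketch: a fixed, finite transition table (the "brain") reading and writing an unbounded tape (the "universe"). The function name, table encoding, and example machine are all my own illustrative choices, not anything from the quoted discussion.

```python
def run_tm(table, tape, state="start", head=0, max_steps=1000):
    """Run a Turing machine whose finite table maps
    (state, symbol) -> (new_state, symbol_to_write, move),
    where move is "L" or "R". Returns (final_state, tape_dict)."""
    cells = dict(enumerate(tape))            # sparse, unbounded tape
    for _ in range(max_steps):
        if state == "halt":
            return state, cells
        symbol = cells.get(head, "_")        # "_" = blank cell
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return state, cells                      # step budget exhausted

# Example machine: flip a run of bits, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
state, cells = run_tm(flip, "0110")
# state == "halt"; cells 0..3 now read "1001"
```

The point of the sketch is only that the table is finite and fixed; everything the machine ever "learns" arrives by reading cells that were already on the tape.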

Even if you think of a human being who uses more and more pieces of
paper to expand his knowledge and behavior, or of a computer that grows
out into space, then simply model the whole universe as the finite table
of the Turing machine and as the tape at the same time.

So even if you point out that AGI should mean "having ALL potentials"
rather than "having ALL abilities", AGI is impossible and can only be
approximated. I therefore prefer the term "human-level AGI".

My point is that these theoretical considerations are strong evidence
for fundamental limitations of human intelligence, and that perhaps
(an assumption) these and further limitations are to a certain degree
even necessary to cope with the huge demands on time, memory, and
learning steps.

Our universe is not only well built for life to evolve; it has also
allowed human-level AGI to evolve from very narrow forms of
intelligence. I like the goal of creating AGI, but I fear we want to
make it too general and therefore cannot overcome the problems of
complexity.



-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now