In my opinion, you can apply Gödel's theorem to argue that 100% AGI is not
possible in this world
if you apply it not to a hypothetical machine or human being but to the
whole universe, which can be assumed to be a closed system.

The axioms are the laws of physics.
Everything that happens in this world is then an application of these
axioms.
And every step of this application is without any fault, if we suppose that
our universe does not change its
laws of physics.

So, following Gödel, the whole universe cannot generate a set of statements
about itself that is both complete and free of contradictions.
And therefore a machine that is part of the universe cannot have this
ability either.

But my main point was not the question of whether perfect AGI is
possible.
I mainly wanted to point out that we ourselves have strong limits and are,
in a sense, narrow AI systems rather than AGI systems.
The example with the visualized sound wave shows that we use very
specialized pattern algorithms instead of general ones.
And of course biology uses them for reasons of performance.

Perhaps it is possible for a human to see the patterns in a sound wave if he
has enough time.
But this would be a thousand times slower than the specialized pattern
recognizer for sound signals.

This shows that human intelligence is not built from general pattern
algorithms in the brain but from algorithms that are specialized for the
patterns of a specific environment, or at least are tuned for specific
patterns. From this arises the question of whether it makes sense to think
about pattern algorithms that work with most patterns in this world. This
question is mainly a question of performance.
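The performance gap can be made concrete with a toy sketch (hypothetical code, not from the original message): a recognizer specialized for periodic signals only has to test the n candidate periods of a length-n signal, whereas a fully general recognizer that treats every binary string of length n as a distinct candidate pattern faces 2**n hypotheses.

```python
def detect_period(signal):
    """Specialized recognizer: try each candidate period p and check
    whether the signal repeats with that period. Only n hypotheses,
    compared with the 2**n a fully general pattern matcher would face."""
    n = len(signal)
    for p in range(1, n):
        if all(signal[i] == signal[i % p] for i in range(n)):
            return p
    return n  # no shorter period: the signal is its own pattern

wave = [0, 1, 1, 0] * 8       # a toy "sound wave" repeating with period 4
print(detect_period(wave))    # 4

for n in (8, 16, 32, 64):
    # hypotheses to test: specialized recognizer vs. fully general one
    print(n, n, 2 ** n)
```

Even for a signal of only 64 samples, the general hypothesis space is already astronomically larger than the specialized one, which is the performance point made above.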

And my point was the assumption that we can buy general intelligence only
at hopelessly high costs in time and memory.

Another example:
Imagine a child who has its first experience of pain when touching a hot
hotplate in a kitchen.
The child will learn not to touch the hotplate. But this task is very hard
if you want to solve it with a general algorithm with little domain knowledge.

If you feel pain in your hand, what was the reason?
If you think in the AGI way, it could be:
the open window in the kitchen,
the sandwich you ate an hour ago,
the fly on the desk,
your blue shirt,

...

and, after trillions of other possible reasons,
the collision of the hand with the hotplate, which is obvious for us but
not obvious for any algorithm without domain knowledge.

Well, you can find the reason with more experience. But how many tries do you
need with AGI? Trillions! Because with the AGI approach you cannot, by
definition, rule anything out. At the very least, you have to use AGI learning
algorithms with massive predefined rules of generalization. So even if we
find clever AGI algorithms, their power will mainly depend on tuning them to
work on specific real-world problems.
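The hypothesis explosion in the hotplate example can be sketched as a toy simulation (the feature names and counts below are hypothetical, chosen only to illustrate the argument): after a single painful trial, every co-present feature is a consistent candidate cause for a learner with no domain knowledge, while a learner with a simple "physical contact" prior narrows the candidates immediately.

```python
N_IRRELEVANT = 1_000_000  # open window, blue shirt, fly on the desk, ...

def candidates_after_one_trial(use_contact_prior: bool) -> int:
    """Count single-cause hypotheses still consistent after one trial.
    Each feature is (name, involves_physical_contact)."""
    features = [("irrelevant_%d" % i, False) for i in range(N_IRRELEVANT)]
    features.append(("hand_touches_hotplate", True))
    if use_contact_prior:
        # Domain knowledge: pain in the hand needs physical contact.
        features = [f for f in features if f[1]]
    return len(features)

print(candidates_after_one_trial(False))  # 1000001 hypotheses survive
print(candidates_after_one_trial(True))   # 1 hypothesis survives
```

Without the prior, the learner would need many further trials just to eliminate the spurious candidates; with it, one trial suffices, which is the tuning-to-the-domain point above.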


-----Original Message-----
From: Pei Wang [mailto:[EMAIL PROTECTED]]
Sent: Saturday, 26 April 2008 14:16
To: [email protected]
Subject: Re: [agi] How general can be and should be AGI?

From http://nars.wang.googlepages.com/wang-goertzel.AGI_06.pdf page 5:
---
In the current context, when we say that the human mind or an AGI system is
"general purpose", we do not mean that it can solve all kinds of problems
in all kinds of domains, but that it has the potential to solve any problem
in any domain, given proper experience. Non-AGI systems lack such a
potential.
---
That paper also addressed the issue of "general potential" vs. "domain
knowledge".

I agree with you that an intelligence "solving all kinds of problems in all
kinds of domains" is impossible, though I don't think the conclusions of
Gödel and Turing are the major reason (or even that relevant) here. My
arguments are in
http://nars.wang.googlepages.com/wang.AI_Misconceptions.pdf

Pei


