Ben Goertzel [mailto:[EMAIL PROTECTED]] wrote on 26 April 2008, 19:54:

> Yes, truly general AI is only possible in the case of infinite
> processing power, which is likely not physically realizable.
> How much generality can be achieved with how much processing
> power is not yet known -- math hasn't advanced that far yet.


My point is not only that 'general intelligence without any limits' would
need infinite resources of time and memory. That much is trivial, of course.
What I wanted to say is that any intelligence has to be narrow in a sense if
it wants to be powerful and useful. There must always be strong assumptions
about the world built deep into any algorithm of useful intelligence.

Let me explain this point in more detail:

By useful and powerful intelligence I mean algorithms that do not need
resources which grow exponentially with the size of the state and action
spaces.

Let's take the credit assignment problem of reinforcement learning.
The agent has several sensor inputs which build the perceived state space
of its environment. So if the algorithm is truly general, the state space
grows exponentially with the number of sensor inputs and the number of past
time steps it considers. Every pixel of the eye's retina is part of the
state description if you are truly general. And every tiny detail of the
past may be important if you are truly general.
And even if you are less general and describe your environment not by
pixels but by words of common language, the state space is huge.
For example, a state description could be:

 ...I am in a kitchen. The door is open. It has two windows. There is a
sink. And three cupboards. Two chairs. A fly is on
the right window. The sun is shining. The color of the chair is... etc. etc.
...

Even this far less general state description would fill pages. 
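
To make the arithmetic concrete, here is a minimal Python sketch (the
sensor counts and the function are mine, purely for illustration): with n
sensors of v possible values each, observed over t past time steps, a fully
general agent must in principle distinguish v^(n*t) states.

    def state_space_size(n_sensors, history, values_per_sensor=2):
        """States a fully general agent must distinguish."""
        return values_per_sensor ** (n_sensors * history)

    # Even a toy 'retina' of 100 binary pixels over 10 time steps:
    print(state_space_size(100, 10))   # 2**1000, roughly 10**301

No physically realizable amount of memory covers a table of that size,
which is the point of the paragraph above.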

So an AGI agent acts in huge state spaces and huge action spaces. It always
has to solve the credit assignment problem: which action in which state is
responsible for the current outcome in the current situation? And which
action in which state will give me the best outcome? A truly general AI
algorithm, without much predefined domain knowledge and suitable for
arbitrary state spaces, will have to explore the complete state-action
space, which as I said grows exponentially with sensor inputs and time.
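
As an illustration of why that exploration blows up, here is a standard
tabular Q-learning update (textbook reinforcement learning, not anything
specific to this argument): the table needs one entry per (state, action)
pair, so memory and exploration time scale with |S| * |A|.

    from collections import defaultdict

    def q_update(q, state, action, reward, next_state, actions,
                 alpha=0.1, gamma=0.9):
        # One temporal-difference step of credit assignment.
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])

    # Grows toward |S| * |A| entries when states are raw sensor histories:
    q = defaultdict(float)

The update itself is cheap; it is the size of the table, and the number of
visits needed to fill it, that is exponential in the sensor history.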

I think every useful intelligence algorithm must avoid the pitfall of
exponential costs, and the only way to do this is to be less general and to
give the agent more predefined domain knowledge (implicit or explicit,
symbolic or non-symbolic, procedural or non-procedural).
Even if you say human-level AI is able to generate its own state spaces,
there is still the problem that the initial sensory state space is of
exponential extent.

So in every useful AGI algorithm there must be certain strong limits, in
the form of explicit or implicit rules, on how to represent the world
initially and/or how to generalize and build a world representation from
experiences.
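
A minimal sketch of what such a built-in rule looks like (the features are
invented for illustration): a hand-coded abstraction maps the huge raw
sensor history onto a few designer-chosen features.

    def abstract_state(raw_pixels):
        # The designer's assumptions about what matters (illustrative):
        brightness = sum(raw_pixels) / len(raw_pixels)
        return (brightness > 0.5,      # light or dark overall?
                raw_pixels[0] > 0.5)   # activity at the first pixel?

    # 2**len(raw_pixels) raw states collapse to just 4 abstract states.

The feature list itself is the hard limit: distinctions it throws away can
never be recovered by any amount of later learning.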

This means that the only way to avoid the problem of exponential growth is
to hard-code implicit or explicit assumptions about the world.
And these initial assumptions are the most important limits of any useful
intelligence. They are much more important than the restrictions of time
and memory, because with these limits it will probably no longer be true
that you can learn everything and solve any solvable problem if you only
get enough resources. The algorithm itself must have fixed inner limits to
be useful in real-world domains. These limits cannot be overcome with
experience.

Even an algorithm that guesses new algorithms and replaces itself whenever
it can prove that it has found something more useful than itself has fixed
statements that it cannot overcome. More important: if you want to make
such an algorithm practically useful, you have to give it predefined rules
for how to reduce the huge space of possible algorithms. And again, these
rules are a more important problem than the lack of memory and time.
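
A toy sketch of this restriction (the primitive set and names are mine):
even a self-rewriting searcher must enumerate candidates from some fixed,
hard-coded space -- here, compositions of a small primitive set. Programs
outside this space are invisible to it, no matter how long it runs.

    import itertools

    PRIMITIVES = [lambda x: x + 1, lambda x: x * 2, lambda x: -x]

    def candidate_programs(max_depth):
        # All compositions of the primitives, up to max_depth deep.
        for depth in range(1, max_depth + 1):
            for combo in itertools.product(PRIMITIVES, repeat=depth):
                def program(x, fs=combo):
                    for f in fs:
                        x = f(x)
                    return x
                yield program

The search space has len(PRIMITIVES)**depth members per depth; the choice
of PRIMITIVES is exactly the kind of built-in rule described above.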

One could argue that the algorithm can change these rules from its own
experience. But you can only prove that changing the rules algorithmically
enhances performance if the agent has good experiences with the new rules.
You cannot prove that certain algorithms would not improve your performance
if you don't know those algorithms at all. Remember: the rules do not
define a certain state or algorithm; they define a reduction of the whole
algorithm space the agent can consider while trying to become more
powerful. The rules within the algorithm contain knowledge of what the
learning agent does not know itself and cannot learn.
Even if you can learn to learn. And learn to learn to learn. And ...
Every recursive procedure has to have a non-reducible base, and it is
clear that the overall performance and abilities depend crucially on that
basic non-reducible procedure. If this procedure is too general, the
performance degrades exponentially with the size of the space on which
this basic procedure works.
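
To make the recursion picture concrete, a toy sketch (the update rules are
invented for illustration): each meta-level only tunes a parameter of the
level below it, and everything bottoms out in a fixed, hand-coded base
procedure.

    def base_update(params, grads, lr):
        # Level 0: the non-reducible, hard-coded rule nothing else adapts.
        return [p - lr * g for p, g in zip(params, grads)]

    def meta_update(params, grads, lr, meta_lr, level):
        if level == 0:
            return base_update(params, grads, lr)  # recursion bottoms out
        # Level k merely adjusts a parameter used by level k-1:
        return meta_update(params, grads, lr * (1 + meta_lr),
                           meta_lr, level - 1)

However many levels you stack, the abilities of the whole tower are fixed
by what base_update can and cannot express.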

