Abram Demski wrote:
It seems as if we are beginning to talk past each other. I think the
problem may be that we have different implicit conceptions of the sort
of AI being constructed. My implicit conception is that of an
optimization problem. The AI is given the challenge of formulating the
best response to its input that it can muster within real-world time
constraints. This is in some sense always a search problem; it just
might be "all heuristic", so that it doesn't look much like a search.
In designing an AI, I am implicitly assuming that we have some exact
definition of intelligence, so that we know what we are looking for.
This makes the optimization problem well-defined: the search space is
that of all possible responses to the input, and the utility function
is our definition of intelligence. *Our* problem is to find (1)
efficient optimal search strategies and, where those fail, (2) good
heuristics.
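
To make that framing concrete, here is a minimal sketch of the kind of
optimization I have in mind (Python; `candidates` and `score` are
hypothetical stand-ins for the space of possible responses and for the
exact definition of intelligence I am assuming we have):

    import time

    def best_response(input_data, candidates, score, time_budget_s=1.0):
        """Anytime search: return the best response found within a
        real-world time budget. `score` plays the role of the assumed
        exact definition of intelligence; `candidates` enumerates the
        space of possible responses, possibly in heuristic order."""
        deadline = time.monotonic() + time_budget_s
        best, best_score = None, float("-inf")
        for response in candidates(input_data):
            s = score(input_data, response)
            if s > best_score:
                best, best_score = response, s
            if time.monotonic() >= deadline:
                break  # out of time: return the best answer so far
        return best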

I'll admit that the general Conway analogy applies, because we are
looking for heuristics with the property of giving good answers most
of the time, and the math is sufficiently complicated as to be
intractable in most cases. But your more recent variation, where
Conway goes amiss, does not seem analogous.

The confusion in our discussion has to do with the assumption you listed above: "...I am implicitly assuming that we have some exact definition of intelligence, so that we know what we are looking for..."

This is precisely what we do not have, and quite possibly never will have.

The reason? If existing intelligent systems are complex systems, then when we look at one of them and say "That is my example of what is meant by 'intelligence'", we are pointing at a global property of a complex system. If anyone thinks that the intelligence of existing intelligent systems is completely independent of all complex global properties of those systems, the ball is in their court: they must show good reason to believe that this is the case - and so far, in the history of philosophy, psychology and AI, nobody has come close to showing such a thing. In other words, nobody can give a non-circular, practical definition that is demonstrably identical to the intelligence exhibited by natural systems. All the evidence - the tangled nature of the mechanisms that appear to be necessary to build an intelligence - points to intelligence being a complex, global property.

Now, if intelligence *is* a global property of a complex system, it will not be possible to simply write down a clear definition of it and then optimize for it. That is the point of the Conway analogy: we would be in the same boat he was in.

So, in a way, when you wrote down that assumption, what you did was implicitly assert that human-level intelligence can definitely be achieved without a system that is complex. That is an extremely strong assertion, and unfortunately there is no evidence (beyond the intuition of some people) that it is a valid assumption. Quite the contrary: all the evidence appears to point the other way.

So that one statement is really the crunch point. All the rest is downhill from that point on.


Richard Loosemore




On Tue, Jun 24, 2008 at 9:02 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Abram Demski wrote:
I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
"The problem is starting to become AI-complete, so at this point I
should include a meta-level search to find a good heuristic for me,
rather than trying to hard-code one..."
And at that point, your lab and my lab are essentially starting to do
the same thing.  You need to start searching the space of possible
heuristics in a systematic way, rather than just picking a hunch and
going with it.
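
In sketch form, that systematic search over heuristics might look
like this (a minimal sketch; `heuristics`, `base_search` and
`evaluate` are hypothetical placeholders, not anybody's actual code):

    def meta_search(heuristics, training_problems, base_search, evaluate):
        """Meta-level search: run the base-level search under each
        candidate heuristic and keep whichever one scores best on a
        set of training problems."""
        best_h, best_score = None, float("-inf")
        for h in heuristics:
            total = sum(evaluate(base_search(p, h))
                        for p in training_problems)
            if total > best_score:
                best_h, best_score = h, total
        return best_h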

The problem, though, is that you might already have gotten yourself into
a You Can't Get There By Starting From Here situation.  What if your
choice of basic logical formalism and knowledge representation format
(and the knowledge acquisition methods that MUST come along with that
formalism) has boxed you into a corner in which there does not exist any
choice of heuristic control mechanism that will get your system up into
human-level intelligence territory?
If the underlying search space is sufficiently general, we are OK;
there is no way to get boxed in except by the heuristic.

Wait: we are not talking about the same thing here.

Analogous situation.  Imagine that John Horton Conway is trying to invent a
cellular automaton with particular characteristics - say, he has already
decided that the basic rules MUST show the global characteristic of having a
thing like a glider and a thing like a glider gun.  (This is equivalent to
us saying that we want to build a system that has the particular
characteristics that we colloquially call 'intelligence', and we will do it
with a system that is complex).

But now Conway boxes himself into a corner:  he decides, a priori, that the
cellular automaton MUST have three sexes, instead of the two sexes that we
are familiar with in Game of Life.  So three states for every cell.  But now
(we will suppose, for the sake of the argument), it just happens to be the
case that there do not exist ANY 3-sex cellular automata in which there are
emergent patterns equivalent to the glider and glider gun.  Now, alas,
Conway is up poop creek without an instrument of propulsion - he can search
through the entire space of 3-sex automata until the end of the universe,
and he will never build a system that satisfies his requirement.

This is the boxed-in corner that I am talking about.  We decide that
intelligence must be built with some choice of logical formalism, plus
heuristics, and we assume that we can always keep jiggling the heuristics
until the system as a whole shows a significant degree of intelligence.  But
there is nothing in the world that says that this is possible.  We could be
in exactly the same position as our hypothetical Conway, trying to find a
solution in a part of the space of all possible systems in which there do
not exist any solutions.

The real killer is that, unlike the example you mention below, mathematics
cannot possibly tell you that this part of the space does not contain any
solutions.  That is the whole point of complex systems, n'est-ce pas?  No
analysis will let you know what the global properties are without doing a
brute-force exploration of (simulations of) the system.
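
To see what that brute-force exploration looks like, here is a toy
version of the Conway scenario: a 1-dimensional, 3-state totalistic
cellular automaton, where the only way to learn whether any rule in
the family supports a glider-like pattern is to simulate every rule
and watch. All of the details (the rule family, the seed, the test)
are illustrative, not a claim about real cellular automata:

    from itertools import product

    def step(cells, rule):
        # One tick of a 1-D, 3-state, nearest-neighbour totalistic CA:
        # each cell's next state depends only on the sum (0..6) of its
        # three-cell neighbourhood; boundaries stay quiescent (0).
        padded = [0] + cells + [0]
        return [rule[padded[i - 1] + padded[i] + padded[i + 1]]
                for i in range(1, len(padded) - 1)]

    def has_glider(rule, seed=(1, 2, 1), width=40, max_steps=30):
        # The "global property" test: does a small seed evolve into an
        # exact copy of itself shifted sideways? No analysis here; we
        # just run the system and look.
        cells = [0] * width
        for j, s in enumerate(seed):
            cells[width // 2 + j] = s
        start = cells[:]
        for _ in range(max_steps):
            cells = step(cells, rule)
            for shift in range(1, width // 4):
                if cells == start[shift:] + [0] * shift:   # moved left
                    return True
                if cells == [0] * shift + start[:-shift]:  # moved right
                    return True
        return False

    # Brute-force exploration of the whole family: every rule mapping
    # neighbourhood sums 0..6 to states 0..2, keeping sum 0 quiescent.
    # If this loop finds nothing, no amount of jiggling the rules
    # within this family will ever produce the property.
    hits = [r for r in product(range(3), repeat=7)
            if r[0] == 0 and has_glider(r)]
    print(len(hits), "of", 3 ** 6, "rules show a glider-like pattern")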


Richard Loosemore



This is what the mathematics is good for. An experiment, I think, will
not tell you this, since a formalism can cover almost everything but
not everything. For example, is a given notation for functions
Turing-complete, or merely primitive recursive? Primitive recursion is
amazingly expressive, so I think it would be easy to be fooled. But a
proof of Turing-completeness will suffice.
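
A standard concrete instance of being fooled this way: Ackermann's
function is total and easy to compute with general recursion, yet it
is provably not primitive recursive, because it eventually outgrows
every primitive recursive function. A notation limited to primitive
recursion would handle every everyday function you threw at it, and
still be unable to define this one (a Python sketch):

    import sys
    sys.setrecursionlimit(100_000)  # the recursion gets deep quickly

    def ackermann(m, n):
        # Total and computable, but not primitive recursive: it grows
        # faster than any function a primitive recursive notation can
        # express, so small test inputs would never reveal the gap.
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m - 1, n - 1))

    print(ackermann(2, 3))  # 9
    print(ackermann(3, 3))  # 61; innocuous-looking, then it explodes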



