My apologies for the delay in responding. I was busy...
but I think there is still a lot of confusion on the list about
NP-hardness, so here goes another attempt. I'm taking a portion from a
different thread and changing the subject; Eliezer, when I get time
I'll try to respond a bit more on the original subject...
Eric Baum wrote:
I respectfully suggest that the human programmer couldn't do that
unless he knew a lot, in fact unless he had most of the program (in
chunks, not exactly assembled, capable of being assembled in
different ways to solve different problems) already in his head
before attacking the problem.
Eliezer> But that is not knowledge *specifically* about the blocks
Eliezer> problem. It is not like having a giant lookup table in your
Eliezer> head that says how to solve all possible blocks problems up
Eliezer> to 10 blocks. The existence of "knowledge" that is very
Eliezer> generally applicable, far beyond the domains over which it
Eliezer> was generalized, is what makes general intelligence possible.
Eliezer> It is why the problem is not NP-hard.
What is NP-hard is the problem of building the knowledge, of finding
the chunks of code.
I am using the term NP-hard somewhat metaphorically, but I am
drawing on real complexity notions, namely that problems can really
be hard. I'm not claiming that constructing an intelligence is a
decision problem with a yes-no answer; in fact I'm not even claiming
it's an infinite class of problems, which is what you need in order
to talk about asymptotic behavior at all. It's a particular instance--
we are trying to construct one particular program that works in
this particular world, meaning it solves a large collection of
problems of certain types.
Eliezer> Okay, problems *can* be hard; what reason do you have to
Eliezer> believe that this particular problem *is* hard?
Well, this is discussed in my book at some length.
What you say in the paragraph above, and which we agree on, is that
intelligence comes from having a program that generalizes.
Computational learning theory has studied this problem, and the
basic conclusion is that generalization comes from Occam's Razor:
by finding a compact program that explains a bunch of data, you get
a program that will generalize to new data drawn from the same
distribution.
I extrapolate from this work to the claim that by finding a compact
program that solves a bunch of problems dealing with a complex world,
you will obtain a program that solves new problems arising in the
world. That is how intelligence arose, and roughly speaking I don't
think it will be possible to get intelligence without finding such
programs. Otherwise, there is no reason to expect it to generalize.
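
To make the Occam's Razor point concrete, here is a minimal toy
sketch (my own illustration, not a construction from the
learning-theory literature): data are generated by a simple
underlying rule plus noise, and the most compact polynomial model
that accounts for the training data generalizes to fresh data from
the same distribution, while a needlessly complex model that fits
the training data just as well generalizes worse.

import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# The "world": a simple underlying rule observed through noise.
def world(x):
    return 0.5 * x**3 - x + 2.0

x_train = rng.uniform(-3, 3, 30)
y_train = world(x_train) + rng.normal(0, 1.0, x_train.size)
x_test = rng.uniform(-3, 3, 200)
y_test = world(x_test) + rng.normal(0, 1.0, x_test.size)

def fit_and_score(degree):
    # Hypothesis "compactness" here is just polynomial degree.
    p = Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((p(x_train) - y_train) ** 2)
    test_mse = np.mean((p(x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (1, 2, 3, 5, 10, 15):
    train_mse, test_mse = fit_and_score(degree)
    print(f"degree {degree:2d}: train MSE {train_mse:6.2f}, test MSE {test_mse:6.2f}")

Typically the most compact degree that drives the training error down
to the noise level also gives the best error on the held-out data;
piling on extra degrees keeps the training error low but hurts
generalization. That, in miniature, is the Occam's Razor point.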
Now, computational learning theory has studied a large number of
test problems of this nature, such as finding a compact neural net
that agrees with the predictions produced by another compact neural
net-- and a large number of other such problems.
And basically all of the interesting problems of this type are
NP-hard.
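
To give a feel for the combinatorial flavor of these results (this is
only a toy illustration of mine, not one of the theorems themselves),
consider "loading" a tiny 2-2-1 threshold net: even with each of its
9 weights restricted to {-1, 0, +1}, finding a setting consistent
with a handful of examples already means searching a space of
3^9 = 19683 candidates, and that space grows exponentially as the net
and the data grow.

import itertools
import numpy as np

# Four training examples: the XOR function on two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
Y = np.array([0, 1, 1, 0])

def step(z):
    return (z > 0).astype(int)

def net(x, w):
    # A 2-2-1 threshold network.  w holds 9 numbers: a 2x2 block of
    # input->hidden weights, 2 hidden biases, 2 hidden->output
    # weights, and 1 output bias.
    W1 = np.array(w[0:4]).reshape(2, 2)
    b1 = np.array(w[4:6])
    W2 = np.array(w[6:8])
    b2 = w[8]
    h = step(x @ W1.T + b1)
    return step(h @ W2 + b2)

# Brute-force the "loading" problem: try every weight setting in
# {-1, 0, 1}^9 until one agrees with all the training examples.
tried = 0
for w in itertools.product((-1, 0, 1), repeat=9):
    tried += 1
    if all(net(x, w) == y for x, y in zip(X, Y)):
        print(f"consistent net found after {tried} of {3**9} settings: {w}")
        break
else:
    print("no consistent net in this discretization")

For nets and data sets of realistic size there is no known way to do
fundamentally better than this kind of search in the worst case;
roughly speaking, that is what the NP-hardness results formalize.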
(I don't buy into the notion of "general intelligence" that solves
any possible world or any possible problem.)
Eliezer> I agree. An AI is supposed to work in the unusual special
Eliezer> case of our own low-entropy universe, not all possible worlds.
Eliezer> No-Free-Lunch theorems etc.
I think the problem of constructing the right code for intelligence
is a problem like finding a very short tour in a particular huge
TSP instance. A human can't solve it by hand (for reasons that are
best understood by thinking about complexity-theory results about
infinite problem classes and in-the-limit behavior, which is why I
appeal to that understanding). To solve it, you are going to have
to construct a good algorithm, *and run it for a long time*. If you
do that, you can get a better and better solution, just as, if you
run Lin-Kernighan on a huge TSP instance, you will find a pretty
short tour.
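
To illustrate the "good algorithm run for a long time" point, here is
a minimal sketch using plain 2-opt local search (a much simpler
relative of Lin-Kernighan; the instance size and parameters are just
placeholders I picked for illustration). Each sweep applies every
improving move it finds, and the tour keeps getting shorter the
longer you let it run, even though finding the truly optimal tour is
out of reach.

import numpy as np

rng = np.random.default_rng(1)
n = 200                                   # number of cities in this toy instance
pts = rng.random((n, 2))                  # random points in the unit square
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # distance matrix

def tour_length(tour):
    return sum(D[tour[i], tour[(i + 1) % n]] for i in range(n))

# Start from a random tour and repeatedly apply improving 2-opt moves:
# reverse a segment whenever doing so shortens the tour.
tour = list(rng.permutation(n))
print("initial length:", round(tour_length(tour), 2))

for sweep in range(1, 16):
    improved = False
    for i in range(n - 1):
        for j in range(i + 2, n):
            a, b = tour[i], tour[i + 1]
            c, d = tour[j], tour[(j + 1) % n]
            # 2-opt move: replace edges (a,b),(c,d) with (a,c),(b,d).
            if D[a, c] + D[b, d] < D[a, b] + D[c, d] - 1e-12:
                tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                improved = True
    print(f"after sweep {sweep}: length {tour_length(tour):.2f}")
    if not improved:
        break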
Eliezer> Finding a *short*, but not *optimal*, tour in a particular
Eliezer> huge TSP instance, is not an NP-hard problem - there are
Eliezer> algorithms that do it, as you mention. And much more
Eliezer> importantly from the perspective of AI design, it was not an
Eliezer> NP-hard problem for a programmer to find those algorithms.
The relevant point is that a human cannot solve NP-hard problems by
hand. For example, humans cannot solve large TSP instances by hand.
If you think producing the code for an AI is NP-hard (as I've
been arguing), then maybe it follows that a human cannot produce
the code for the AI by hand.
I am not claiming that humans cannot produce programs that, when run
on computers, will generate intelligent programs. I am suggesting,
however, that that may be what it takes. Lower down in the
email you replied to, I discussed that at some length.
(I'm going to cut it here for conciseness, but go back and look if you
want. The thread was: "Re: strong and weakly improving processes")
And it's possible we will not have the computational power to run
the algorithms long enough. I'm not saying that we won't--
I think and hope we will; I'm just pointing out that it is possible,
because evolution had way more computational power than we do, and it
may be smarter than you give it credit for (since I suggest evolution
effectively builds in design capabilities).
Eliezer> Actually, I very much like the idea of running simple
Eliezer> programs for a long time to boot up an intelligence. Not
Eliezer> because it's the only way to get intelligence, or even
Eliezer> because it's convenient, but because it means that the humans
Eliezer> have less complexity to potentially get wrong. I wouldn't
Eliezer> use an evolutionary program because then I'd lose control of
Eliezer> the resulting complexity, thus obviating the whole point of
Eliezer> starting out simple.
Actually, I think humans should build as much into the program as
they can, because otherwise we will not be able to compete with
evolution. Evolution had this massive hardware advantage. Our
only hope of competing is to jump-start a lot of it by using our
brains. So I am suggesting we should run fairly complex programs
to boot up intelligence.