Thanks for the comments. My replies:


> It does happen to be the case that I
> believe that logic-based methods are mistaken, but I could be wrong about
> that, and it could turn out that the best way to build an AGI is with a
> completely logic-based AGI, along with just one small mechanism that was
> Complex.

Logical methods are themselves quite Complex. This was part of my point. Logical
deduction in any sufficiently rich formalism exhibits both
types of global-local disconnect that I mentioned (undecidability and
computational irreducibility). If this were not the case, it seems
your argument would be much less appealing. (In particular, there
would be one less argument for the mind being complex; we could not
say "logic has some subset of the mind's capabilities; a brute-force
theorem prover is a complex system; therefore the mind is probably a
complex system.")
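To make the "brute-force theorem prover" point concrete: even in the decidable propositional fragment, the only fully general procedure is to check every assignment, and the work grows exponentially with the number of variables. A minimal Python sketch (my own illustration, not something from the original post):

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

def tautology(formula, num_vars):
    """Brute-force propositional prover: test all 2^n assignments.

    Propositional logic is decidable, but this exhaustive check is
    exponential in num_vars; the global answer offers no shortcut
    past the local, case-by-case work. (Full first-order logic is
    not even decidable.)
    """
    return all(formula(*vals)
               for vals in product([False, True], repeat=num_vars))

# Peirce's law ((p -> q) -> p) -> p holds under every assignment.
print(tautology(lambda p, q: implies(implies(implies(p, q), p), p), 2))
```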

> Similarly, you suggest that I "have an image of an AGI that is built out of
> totally dumb pieces, with intelligence emerging unexpectedly."  Some people
> have suggested that that is my view of AGI, but whether or not those people
> are correct in saying that [aside:  they are not!]

Apologies. But your arguments do appear to point in that direction.

> In your original blog post, also, you mention the way that AGI planning
> [...] The problem is that you have portrayed the
> distinction between 'pure' logical mechanisms and 'messy' systems that have
> heuristics riding on their backs, as equivalent to a distinction that you
> thought I was making between non-complex and complex AGI systems.  I hope
> you can see now that this is not what I was trying to argue.

You are right; that characterization was poor. I think that is
part of what was making me uneasy about my conclusion. My intention
was not that approximation should always amount to a logical search
with messy heuristics stacked on top of it. In fact, I had two
conflicting images in mind:

- A logical search with logical heuristics (such as greedy methods for
NP-complete problems, which come with provable approximation guarantees)

- A "messy" method (such as a neural net or swarm) that somehow gives
you an answer without precise logic
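As an example of the first image, here is a sketch of the textbook greedy 2-approximation for Vertex Cover (my own illustration, chosen because the guarantee is easy to state, not an example from the post):

```python
def greedy_vertex_cover(edges):
    """Greedy 2-approximation for Vertex Cover (NP-complete).

    Scan the edges; whenever an edge is uncovered, add BOTH of its
    endpoints to the cover. Every chosen edge forces at least one of
    its endpoints into any optimal cover, so the result is provably
    at most twice the optimum.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Triangle plus a pendant edge; the optimal cover {1, 3} has size 2.
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
cover = greedy_vertex_cover(edges)
print(cover)
```

The "logical heuristic" here is crude, but its error is bounded by a theorem rather than by experiment, which is exactly the property that gets harder to preserve as problems grow more demanding.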

A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
distinction between the heuristic and the underlying search becomes
unimportant; the method is all heuristic, so to speak. At this point
we are simply using "messy" methods.

I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
"The problem is starting to become AI-complete, so at this point I
should include a meta-level search to find a good heuristic for me,
rather than trying to hard-code one..."
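To illustrate what I mean by a meta-level search, here is a toy sketch (Python, with hypothetical names of my own). It assumes a greedy knapsack heuristic with a single tunable knob, alpha, and finds a good setting by random search instead of hard-coding one:

```python
import random

def greedy_knapsack(items, capacity, alpha):
    """Greedy heuristic: sort by value / weight**alpha, pack greedily.

    items is a list of (value, weight) pairs with positive weights.
    alpha = 0 sorts by raw value; alpha = 1 by value density.
    """
    total_value = total_weight = 0
    for value, weight in sorted(items,
                                key=lambda it: it[0] / it[1] ** alpha,
                                reverse=True):
        if total_weight + weight <= capacity:
            total_value += value
            total_weight += weight
    return total_value

def meta_search(instances, capacity, trials=200, seed=0):
    """Meta-level search: tune alpha on training instances
    instead of hand-coding a heuristic."""
    rng = random.Random(seed)
    best_alpha, best_score = 1.0, -1
    for _ in range(trials):
        alpha = rng.uniform(0.0, 2.0)
        score = sum(greedy_knapsack(inst, capacity, alpha)
                    for inst in instances)
        if score > best_score:
            best_alpha, best_score = alpha, score
    return best_alpha
```

The point of the sketch is only the division of labor: the object-level search stays simple and logical, while the messy, experiment-driven part is pushed up into the meta-level tuning loop.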

> Finally, I should mention one general misunderstanding about mathematics.
>  This argument has a superficial similarity to Godel's theorem, but you
> should not be deceived by that.  Godel was talking about formal deductive
> systems, and the fact that there are unreachable truths within such systems.
>  My argument is about the feasibility of scientific discovery, when applied
> to systems of different sorts.  These are two very different domains.

I think it is fair to say that I accounted for this. In particular, I
said: "It's this second kind of irreducibility, computational
irreducibility, that I see as more relevant to AI." (Actually, I do
see Godel's theorem as relevant to AI; I should have been more
specific and said "relevant to AI's global-local disconnect".)


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/