Mark,

"It makes sense but I'm arguing that you're making my point for me . . . ."

I'm making the point "natural language is incompletely defined" for
you, but *not* the point "natural language suffers from Godelian
incompleteness", unless you specify what concept of "proof" applies to
natural language.

"It emphatically does *not* tell us anything about "any approach that
can be implemented on normal computers" and this is where all the
people who insist that "because computers operate algorithmically,
they will never achieve true general intelligence" are going wrong."

It tells us that any approach implementable on a normal computer will
fail to give correct answers to at least some halting-problem
questions (along with other questions that suffer from
incompleteness).
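
(For concreteness, here is the usual diagonal argument as a small
Python sketch. This is my own illustration, not anything from your
message; halts() is a purely hypothetical oracle, and the point of the
sketch is that no correct, total implementation of it can exist.)

    def halts(program_source, program_input):
        """Hypothetical oracle: True iff program_source, run on
        program_input, eventually halts."""
        raise NotImplementedError  # no correct, total version can exist

    def diagonal(program_source):
        # Feed the program its own source. If halts() says this call
        # halts, loop forever; if it says we loop, halt immediately.
        # Run on its own source, diagonal() defeats any halts() oracle.
        if halts(program_source, program_source):
            while True:
                pass
        return "halted"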

"You are correct in saying that Godel's theory has been improperly
overused and abused over the years but my point was merely that AGI is
Godellian Incomplete, natural language is Godellian Incomplete, "

Please specify what "truth" and "proof" mean in these domains before
applying the theorem. For AGI I am OK, since "X is provable" would
mean "the AGI will come to believe X", and "X is true" would mean
something close to what it intuitively means. But for natural
language? "Natural language will come to believe X" makes no sense, so
it can't be our definition of proof...

Really, it is a small objection, and I'm only making it because I
don't want the theorem abused. You could fix your statement just by
saying "any proof system we might want to provide" will be incomplete
for "any well-defined subset of natural language semantics that is
large enough to talk fully about numbers". Doing this just seems
pointless, because the real point you are trying to make is that the
semantics is ill-defined in general, *not* that some hypothetical
proof system is incomplete.

"and effectively AGI-Complete most probably pretty much exactly means
Godellian-Incomplete. (Yes, that is a radically new phrasing and not
necessarily quite what I mean/meant but . . . . )."

I used to agree that Godelian incompleteness was enough to show that
the semantics of a knowledge representation was strong enough for AGI.
But that alone doesn't seem to guarantee that a knowledge
representation can faithfully reflect concepts like "continuously
differentiable function" (which gets back to the whole discussion with
Ben).

Have you heard of Tarski's undefinability theorem? It is relevant to
this discussion.
http://en.wikipedia.org/wiki/Indefinability_theory_of_truth
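
Roughly stated (my paraphrase from memory, so check the article): for
the standard model of arithmetic there is no formula True(x) in the
language of arithmetic such that

    \mathcal{N} \models \mathrm{True}(\ulcorner\varphi\urcorner)
        \leftrightarrow \varphi
    \quad \text{for every arithmetic sentence } \varphi

where \ulcorner\varphi\urcorner is the Godel number of phi. So
arithmetic truth is not even definable within arithmetic, which is
part of why pinning down "semantically true" is so hard in the first
place.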

--Abram

On Fri, Oct 24, 2008 at 9:19 AM, Mark Waser <[EMAIL PROTECTED]> wrote:
>> I'm saying Godelian completeness/incompleteness can't be easily
>> defined in the context of natural language, so it shouldn't be applied
>> there without providing justification for that application
>> (specifically, unambiguous definitions of "provably true" and
>> "semantically true" for natural language). Does that make sense, or am
>> I still confusing?
>
> It makes sense but I'm arguing that you're making my point for me . . . .
>
>> agree with. Godel's incompleteness theorem tells us important
>> limitations of the logical approach to AI (and, indeed, any approach
>> that can be implemented on normal computers). It *has* however been
>> overused and abused throughout the years... which is one reason I
>> jumped on Mark...
>
> Godel's incompleteness theorem tells us important limitations of all formal
> *and complete* approaches and systems (like logic).  It clearly means that
> any approach to AI is going to have to be open-ended (Godellian-incomplete?
> ;-)
>
> It emphatically does *not* tell us anything about "any approach that can be
> implemented on normal computers" and this is where all the people who insist
> that "because computers operate algorithmically, they will never achieve
> true general intelligence" are going wrong.
>
> The latter argument is similar to saying that because an inductive
> mathematical proof always operates on just the next number, it will
> never successfully prove anything about infinity.  I'm a firm believer
> in inductive proofs and in the fact that general intelligences can be
> implemented on the computers that we have today.
>
> You are correct in saying that Godel's theory has been improperly overused
> and abused over the years but my point was merely that AGI is Godellian
> Incomplete, natural language is Godellian Incomplete, and effectively
> AGI-Complete most probably pretty much exactly means Godellian-Incomplete.
> (Yes, that is a radically new phrasing and not necessarily quite what I
> mean/meant but . . . . ).

