Abram,

I find it more useful to think in terms of Chaitin's reformulation of
Goedel's Theorem:

http://www.cs.auckland.ac.nz/~chaitin/sciamer.html

Roughly: any computer program (or formal axiom system) with algorithmic
information content less than K cannot prove theorems whose algorithmic
information content is greater than K.
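
Stated a bit more formally (my own paraphrase of Chaitin's version, not a
quote; K(s) denotes the Kolmogorov complexity of the string s):

```latex
% Chaitin's incompleteness theorem, paraphrased:
% for any consistent, recursively axiomatized formal system F there is
% a constant c_F (on the order of the complexity of F's axioms) such
% that F proves K(s) > c_F for no string s at all -- even though
% K(s) > c_F is in fact true for almost every s.
\exists\, c_F \;\; \forall s :\quad F \nvdash \big( K(s) > c_F \big)
```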

Put simply, there are some things our brains are not big enough to prove
true or false...

This is true for quantum computers just as it's true for classical
computers.  Penrose hypothesized it would NOT hold for "quantum gravity
computers", but IMO this is a fairly impotent hypothesis because quantum
gravity computers don't exist (even theoretically, I mean: since there is no
unified quantum gravity theory yet).

Penrose assumes that humans don't have this sort of limitation, but I'm not
sure why.

On the other hand, this limitation can be overcome somewhat if you allow the
program P to interact with the external world in a way that lets it be
modified into P1 such that P1 is not computable by P.  In this case P needs
to have a guru (or should I say an oracle ;-) that it trusts to modify
itself in ways it can't understand, or else to be a gambler-type...
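
As a toy illustration of that last point (the names and structure are mine,
invented for this sketch, not from any real system): a program whose step
function gets rewritten by an external "guru" callback, producing behavior
the original program could not have derived for itself -- it can only
trust, or gamble:

```python
# Toy sketch: a program P that accepts self-modifications from a
# trusted external "oracle" it cannot verify. The oracle is just a
# callback supplying a new step function; P accepts it on faith.

def run_with_oracle(state, oracle, steps=3):
    """Run P, letting the oracle rewrite P's step function each round."""
    # P's initial, self-understood behavior: increment by 1.
    step = lambda x: x + 1
    for _ in range(steps):
        proposal = oracle(step)   # a modification P cannot derive itself
        if proposal is not None:
            step = proposal       # accept on trust (the gamble)
        state = step(state)
    return state

# A "guru" that wraps P's current step in something stronger.
def guru(current_step):
    return lambda x: current_step(x) * 2  # P1's behavior, opaque to P

print(run_with_oracle(1, guru))  # -> 168
```

The point of the sketch is only that the accepted modifications compound
into behavior outside what the original step function could produce.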

You seem almost confused when you say that an AI can't reason about
uncomputable entities.  Of course it can.  An AI can manipulate math symbols
in a certain formal system, and then associate these symbols with the words
"uncomputable entities", and with its own self ... or us.  This is what we
do.
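
A minimal sketch of that point (a toy forward-chaining system made up for
illustration -- the predicates and rule format are hypothetical): the
program derives a fact *about* Chaitin's Omega by pure symbol-pushing,
without ever computing a digit of it:

```python
# Toy symbolic reasoner. "Omega" is just a token here; the program
# manipulates statements about the uncomputable number, never the
# number itself.

facts = {("uncomputable", "Omega"), ("real_number", "Omega")}

# One inference rule: anything uncomputable has no algorithm
# enumerating all of its digits.
rules = [("uncomputable", "digits_not_enumerable")]

def derive(facts, rules):
    """Forward-chain the rules once over the known facts."""
    derived = set(facts)
    for pred_in, pred_out in rules:
        for pred, subject in facts:
            if pred == pred_in:
                derived.add((pred_out, subject))
    return derived

closure = derive(facts, rules)
print(("digits_not_enumerable", "Omega") in closure)  # -> True
```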

An AI program can't actually manipulate the uncomputable entities
directly, but what makes you think *we* can, either?


-- Ben G




On Sat, Oct 18, 2008 at 9:54 PM, Abram Demski <[EMAIL PROTECTED]> wrote:

> Matt,
>
> I suppose you don't care about Steve's "do not comment" request? Oh
> well, I want to discuss this anyway. 'Tis why I posted in the first
> place.
>
> No, I do not claim that computer theorem-provers cannot prove Goedel's
> Theorem. It has been done. The objection applies specifically to
> AIXI -- AIXI cannot prove Goedel's theorem. More generally, all AIXI's
> world-models are computable.
>
> What do I mean when I say "to reason about non-computable entities"?
> Well, Goedel's Incompleteness Theorem is a fine example. Another
> example is the way humans can talk about whether a particular program
> will halt. This sort of thing can be done in logical systems by adding
> basic non-computable primitives. A common choice is to add numbers.
> (Numbers may seem like the prototypical computable thing, but any
> logic of numbers is incomplete, as Goedel showed of course.)
>
> The broader issue is that *in general* given any ideal model of
> intelligence similar to AIXI, with a logically defined class of
> world-models, it will be possible to point out something that the
> intelligence cannot possibly reason about-- namely, its own semantics.
> This follows from Tarski's indefinability theorem, and hinges on a few
> assumptions about the meaning of "logically defined".
>
> I am not altogether sure that the Novamente/OCP design is really an
> approximation of AIXI anyway, *but* I think it is a serious concern.
> If the Novamente/OCP design really solves the (broader) problem, then
> it also solves some key problems in epistemology (specifically, in
> formal theories of truth), so it would be very interesting to see it
> worked out in these terms.
>
> --Abram
>
> On Sat, Oct 18, 2008 at 9:12 PM, Matt Mahoney <[EMAIL PROTECTED]>
> wrote:
> > --- On Sat, 10/18/08, Abram Demski <[EMAIL PROTECTED]> wrote:
> >
> >> Non-Constructive Logic: Any AI method that approximates AIXI
> >> will lack the human capability to reason about non-computable
> >> entities.
> >
> > Then how is it that humans can do it? According to the AIXI theorem, if
> we can do this, it makes us less able to achieve our goals because AIXI is
> provably optimal.
> >
> > Exactly what do you mean by "reason about" non-computable entities?
> >
> > Do you claim that a computer could not discover a proof of Goedel's
> incompleteness theorem by brute force search?
> >
> > -- Matt Mahoney, [EMAIL PROTECTED]
> >
> >
> >
> > -------------------------------------------
> > agi
> > Archives: https://www.listbox.com/member/archive/303/=now
> > RSS Feed: https://www.listbox.com/member/archive/rss/303/
> > Modify Your Subscription: https://www.listbox.com/member/?&;
> > Powered by Listbox: http://www.listbox.com
> >
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome" - Dr Samuel Johnson


