But either you're just wrong or I don't understand your wording ... of
course AIXI *can* reason about uncomputable entities.  If you showed AIXI
the axioms of, say, ZF set theory (including the Axiom of Choice), and
reinforced it for correctly proving theorems about uncomputable entities as
defined in ZF, then after enough reinforcement signals it could learn to
prove such theorems.
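
To make that concrete, here is a toy sketch -- purely illustrative, not AIXI
and not real ZF; the axiom strings and predicate names (e.g.
"Computable(HaltingSet)") are invented placeholders.  It shows that checking
a derivation whose conclusion is a statement *about* an uncomputable object
is itself an ordinary computable operation on finite symbols:

# Toy modus-ponens proof checker -- illustration only.  Formulas are plain
# strings; names like "Computable(HaltingSet)" are hypothetical placeholders.
def check_proof(axioms, steps):
    """Each step is ('axiom', f) or ('mp', i, j): step i proved A and step j
    proved 'A -> B', so conclude B.  Returns the list of proved formulas."""
    proved = []
    for step in steps:
        if step[0] == 'axiom':
            formula = step[1]
            assert formula in axioms, "not an axiom: %s" % formula
            proved.append(formula)
        else:  # ('mp', i, j)
            _, i, j = step
            a, implication = proved[i], proved[j]
            antecedent, _, consequent = implication.partition(' -> ')
            assert antecedent == a, "modus ponens does not apply"
            proved.append(consequent)
    return proved

# Hypothetical axioms mentioning an uncomputable entity (the halting set).
axioms = {
    "Computable(HaltingSet)",  # assumption to be refuted
    "Computable(HaltingSet) -> SolvesHalting(SomeProgram)",
    "SolvesHalting(SomeProgram) -> Contradiction",
}

proof = [
    ('axiom', "Computable(HaltingSet)"),
    ('axiom', "Computable(HaltingSet) -> SolvesHalting(SomeProgram)"),
    ('mp', 0, 1),
    ('axiom', "SolvesHalting(SomeProgram) -> Contradiction"),
    ('mp', 2, 3),
]

print(check_proof(axioms, proof)[-1])  # prints: Contradiction

Rewarding a system for producing derivations like this one is, in principle,
all that's needed for it to learn to output them; nothing uncomputable ever
has to be computed along the way.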

ben g

On Sun, Oct 19, 2008 at 10:42 AM, Abram Demski <[EMAIL PROTECTED]> wrote:

> Ben,
>
> I don't know what sounded "almost confused", but anyway it is apparent
> that I didn't make my position clear. I am not saying we can
> manipulate these things directly via exotic (non)computing.
>
> First, I am very specifically saying that AIXI-style AI (meaning, any
> AI that approaches AIXI as resources increase) cannot reason about
> uncomputable entities. This is because AIXI entertains only computable
> models.
>
> Second, I am suggesting a broader problem that will apply to a wide
> class of formulations of idealized intelligence such as AIXI: if their
> internal logic obeys a particular set of assumptions, it will be
> subject to Tarski's Undefinability Theorem. Therefore, we humans will be
> able to point out a particular class of concepts that it cannot reason
> about; specifically, the very concepts used in describing the ideal
> intelligence in the first place.
>
> One reasonable way of avoiding the "humans are magic" explanation of
> this (or "humans use quantum gravity computing", etc) is to say that,
> OK, humans really are an approximation of an ideal intelligence
> obeying those assumptions. Therefore, we cannot understand the math
> needed to define our own intelligence. Therefore, we can't engineer
> human-level AGI. I don't like this conclusion! I want a different way
> out.
>
> I'm not sure the "guru" explanation is enough... who was the Guru for
> Humankind?
>
> Thanks,
>
> --Abram
>
>
> On Sun, Oct 19, 2008 at 5:39 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> > Abram,
> >
> > I find it more useful to think in terms of Chaitin's reformulation of
> > Godel's Theorem:
> >
> > http://www.cs.auckland.ac.nz/~chaitin/sciamer.html
> >
> > Given any computer program with algorithmic information capacity less
> > than K, it cannot prove theorems whose algorithmic information content
> > is greater than K.
> >
> > Put simply, there are some things our brains are not big enough to prove
> > true or false....
> >
> > This is true for quantum computers just as it's true for classical
> > computers.  Penrose hypothesized it would NOT hold for "quantum gravity
> > computers", but IMO this is a fairly impotent hypothesis because quantum
> > gravity computers don't exist (even theoretically, I mean: since there
> > is no unified quantum gravity theory yet).
> >
> > Penrose assumes that humans don't have this sort of limitation, but I'm
> > not sure why.
> >
> > On the other hand, this limitation can be overcome somewhat if you allow
> > the program P to interact with the external world in a way that lets it
> > be modified into P1 such that P1 is not computable by P.  In this case P
> > needs to have a guru (or should I say an oracle ;-) that it trusts to
> > modify itself in ways it can't understand, or else to be a gambler-type...
> >
> > You seem almost confused when you say that an AI can't reason about
> > uncomputable entities.  Of course it can.  An AI can manipulate math
> > symbols in a certain formal system, and then associate these symbols
> > with the words "uncomputable entities", and with its own self ... or us.
> > This is what we do.
> >
> > An AI program can't actually manipulate the uncomputable entities
> > directly, but what makes you think *we* can, either?
> >
> >
> > -- Ben G
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson


