Ben,

Just to clarify my opinion: I think an actual implementation of the
Novamente/OCP design is likely to overcome this difficulty. However,
to the extent that it approximates AIXI, I think it will run into
problems of this sort.

The main reason I think OCP/Novamente would *not* approximate AIXI is
that these systems are capable of a greater degree of self-reference,
as well as a very different sort of adaptation. Self-reference gives
the system a very direct reason to think *about* processes (and hence
about halting, convergence, and other uncomputable properties of
them). Self-adaptation could allow the system to adopt new sorts of
reasoning (such as uncomputable models) simply because they "seem to
work". (This is different from AIXI being trained to prove theorems
about uncomputable things, because here the system starts actually
making use of the theorems internally.)
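(As a concrete reminder of why halting is in this list, here is the
standard diagonalization sketch, in Python; `halts` is a hypothetical
decider I'm positing for the argument, not anything real.)

```python
def make_diag(halts):
    """Given a purported halting decider halts(f, x) -> bool
    (True iff f(x) halts), build a function it must misjudge."""
    def diag(f):
        if halts(f, f):   # if the decider claims f(f) halts...
            while True:   # ...loop forever
                pass
        return None       # ...otherwise, halt immediately
    return diag

# For any candidate decider, feeding diag to itself is contradictory:
#   halts(diag, diag) == True  -> diag(diag) loops   (decider wrong)
#   halts(diag, diag) == False -> diag(diag) halts   (decider wrong)
```

So no computable `halts` exists, yet a sufficiently self-referential
reasoner has every incentive to reason about exactly such properties.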

If I could formalize that intuition, I would be happy.

--Abram

On Sun, Oct 19, 2008 at 9:33 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Ben,
>
> How so? Also, do you think it is nonsensical to put some probability
> on noncomputable models of the world?
>
> --Abram


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/