On Wed, Oct 22, 2008 at 3:11 AM, Abram Demski <[EMAIL PROTECTED]> wrote:
> I agree with you there. Our disagreement is about what formal systems
> a computer can understand.

I'm also not quite sure what the problem is, but suppose we put it this way:

I think the most useful way to understand the family of algorithms of
which AIXI is the best-known member is this: they effectively amount
to "create (by perfect simulation) all possible universes and select
the one that exhibits the desired behavior".

Suppose we took a bunch of data from our universe as input. If the
amount of data were large enough to be sufficiently specific, our
universe (or at least one with the same physical laws) would be
created, and selected as producing results that match the data.
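To make the "simulate and select" idea concrete, here is a toy sketch in Python. The real AIXI/Solomonoff enumeration runs over all programs (an infinite, uncomputable search, with survivors weighted by 2^-length); as an illustrative assumption I substitute a tiny parametric family of "universes" (linear congruential rules) so the enumeration actually terminates:

```python
from itertools import product

def generate(params, length):
    # Toy "universe": a linear congruential rule. This stands in for
    # AIXI's space of all programs, which we obviously can't enumerate.
    a, b, m, seed = params
    xs, x = [], seed
    for _ in range(length):
        x = (a * x + b) % m
        xs.append(x)
    return xs

def select_matching_universes(observed):
    # Enumerate every candidate "universe" in the (small, assumed)
    # parameter space and keep those whose simulated history matches
    # the observed data -- the selection step described above.
    space = product(range(1, 8), range(8), range(2, 16), range(8))
    return [p for p in space
            if generate(p, len(observed)) == list(observed)]

# Data "taken from our universe": generated by one particular rule.
observed = generate((5, 3, 7, 2), 6)
matches = select_matching_universes(observed)
# The true rule is among the selected universes (possibly alongside
# other rules with "the same physical laws", i.e. the same output).
assert (5, 3, 7, 2) in matches
```

The point the sketch preserves is that the selected universes contain everything needed to reproduce the data, even though none of that structure is written into the search loop itself.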

So the universe thus created would contain humans, and therefore
contain all the understanding of mathematics that actual humans have.

Of course, this understanding would not be contained in the original kernel.

But this should not be surprising. Consider a realistic AI which can't
create whole universes, but can learn about mathematics. Suppose the
kernel of the AI is written in Lisp: does the Lisp compiler understand
uncomputable numbers? No, but that's no reason the AI as a whole
can't, at least to the extent that we humans do.

Does this help at all?


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/