Ben Goertzel wrote:
> hi,

>> No, the challenge can be posed in a way that refers to an arbitrary agent
>> A which a constant challenge C accepts as input.
>
> But the problem with saying it this way is that the "constant challenge"
> has to have an infinite memory capacity.
>
> So in a sense, it's an infinite constant ;)
Infinite Turing tapes are a pretty routine assumption in operations like these. I think Hutter's AIXI-tl is supposed to be able to handle constant environments (as opposed to constant challenges, a significant formal difference) that contain infinite Turing tapes. Though maybe that'd violate separability? Come to think of it, the Clone challenge might violate separability as well, since AIXI-tl (and hence its Clone) builds up state.

>> No, the charm of the physical challenge is exactly that there exists a
>> physically constant cavern which defeats any AIXI-tl that walks into it,
>> while being tractable for wandering tl-Corbins.
>
> No, this isn't quite right.
>
> If the cavern is physically constant, then there must be an upper limit to
> the t and l for which it can clone AIXI-tls.
Hm, this doesn't strike me as a fair qualifier. One, if an AIXI-tl exists in the physical universe at all, there are probably infinitely powerful processors lying around like sunflower seeds. And two, if you apply this same principle to any other physically realized challenge, it means that people could start saying "Oh, well, AIXI-tl can't handle *this* challenge because there's an upper bound on how much computing power you're allowed to use." If Hutter's theorem is allowed to assume infinite computing power inside the Cartesian theatre, then the magician's castle should be allowed to assume infinite computing power outside the Cartesian theatre.

Anyway, a constant cave with an infinite tape seems like a constant challenge to me, and a finite cave that breaks any {AIXI-tl, tl-human} contest up to l=googlebyte also still seems interesting, especially as AIXI-tl is supposed to work for any tl, not just sufficiently high tl.

> Well, yes, as a special case of AIXI-tls being unable to carry out
> reasoning where their internal processes are correlated with the
> environment.
Agreed...

> (See, it IS actually possible to convince me of something, when it's
> correct; I'm actually not *hopelessly* stubborn ;)
Yes, but it takes t·2^l operations.

(Sorry, you didn't deserve it, but a straight line like that only comes along once.)
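(For anyone not in on the joke: t·2^l is, on the usual reading of Hutter's construction, the order of AIXI-tl's computational cost per decision, since it considers every program of length up to l bits and runs each for at most t steps. A minimal back-of-the-envelope sketch; the function name is mine, not Hutter's notation:)

```python
# Rough per-decision cost of AIXI-tl, assuming the standard reading:
# enumerate the ~2^l programs of length <= l bits and run each for at
# most t steps, giving on the order of t * 2^l operations per decision.
# The function name is illustrative, not from Hutter's paper.

def aixitl_ops_per_decision(t: int, l: int) -> int:
    """Order-of-magnitude operation count for one AIXI-tl decision."""
    num_programs = 2 ** l       # programs of length <= l bits, up to a constant factor
    return t * num_programs     # each candidate program run for at most t steps

# Even toy parameters are astronomical:
print(aixitl_ops_per_decision(t=1000, l=64))  # 18446744073709551616000
```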

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
