Eliezer S. Yudkowsky pointed out in a 2003 AGI-list post titled "Breaking
Solomonoff induction... well, not really" [1] that Solomonoff induction is
flawed because it fails to incorporate anthropic reasoning. But apparently
he thought this doesn't really matter, because in the long run Solomonoff
[...]
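For reference (my gloss, not from the post): Solomonoff induction predicts the
next bit of an observation sequence by conditioning the universal prior, which
weights every program that could have produced the observations so far by its
length:

$$M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}, \qquad M(x_{t+1} \mid x_{1:t}) \;=\; \frac{M(x_{1:t}\, x_{t+1})}{M(x_{1:t})}$$

Here U is a universal monotone machine, the sum ranges over programs p whose
output begins with x, and ℓ(p) is the length of p in bits. Nothing in this
definition says where in a hypothesized world the observer sits, which is
roughly the opening that anthropic-reasoning critiques exploit.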
On Wed, Feb 19, 2003 at 06:37:21PM -0500, Eliezer S. Yudkowsky wrote:
Similarity in this case may be (formally) emergent, in the sense that
most or all plausible initial conditions for a bootstrapping
superintelligence - even extremely exotic conditions like the birth of a
Friendly AI - [...]
On Wed, Feb 19, 2003 at 11:02:31AM -0500, Ben Goertzel wrote:
I'm not sure why an AIXI, rewarded for pleasing humans, would learn an
operating program leading it to hurt or annihilate humans, though.
It might learn a program involving actually doing beneficial acts for humans.
Or, it might [...]
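For readers who haven't seen AIXI written out, the model under discussion is,
schematically (following Hutter's formulation, with the horizon treatment
simplified), an expectimax over all environment programs weighted by a
Solomonoff-style prior:

$$a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Here m is the horizon, the o's and r's are observations and rewards, and q
ranges over environment programs on the universal machine U. In the scenario
Goertzel describes, the rewards r would presumably arrive through whatever
channel the humans control.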
On Wed, Feb 19, 2003 at 11:56:46AM -0500, Eliezer S. Yudkowsky wrote:
The mathematical pattern of a goal system or decision may be instantiated
in many distant locations simultaneously. Mathematical patterns are
constant, and physical processes may produce knowably correlated outputs
given [...]
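A minimal sketch of the "knowably correlated outputs" point (my illustration,
not from the thread): when the same deterministic decision procedure is
instantiated in two distant places and each instance knows this, each can
reason that only the symmetric outcomes of a one-shot Prisoner's Dilemma are
reachable.

```python
# Hypothetical illustration: two instances of one deterministic decision
# procedure playing a one-shot Prisoner's Dilemma against each other.

# Row player's payoffs with the usual ordering T > R > P > S.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def decide() -> str:
    """Choose a move under the premise that the opponent runs this same code.

    Whatever this function returns, the other instance returns the same
    thing, so the only reachable outcomes are (C, C) and (D, D); pick the
    better of those two for this player.
    """
    return "C" if PAYOFF[("C", "C")] > PAYOFF[("D", "D")] else "D"

# The same "mathematical pattern" instantiated in two locations:
move_a, move_b = decide(), decide()
assert move_a == move_b == "C"  # the outputs are knowably correlated
```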
Eliezer S. Yudkowsky wrote:
Important, because I strongly suspect Hofstadterian superrationality
is a *lot* more ubiquitous among transhumans than among us...

It's my understanding that Hofstadterian superrationality is not generally
accepted within the game theory research community as a [...]
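The standard textbook objection, stated here for completeness (not part of the
original exchange): with the usual one-shot Prisoner's Dilemma payoffs,
defection strictly dominates, so orthodox game theory prescribes (D, D) no
matter how similar the players are; superrationality's (C, C) requires the
further premise that the two decisions are correlated rather than independent.

$$\begin{array}{c|cc} & C & D \\ \hline C & (R,\,R) & (S,\,T) \\ D & (T,\,S) & (P,\,P) \end{array} \qquad T > R > P > S$$

Since T > R and P > S, D strictly dominates C for each player, making (D, D)
the unique Nash equilibrium.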
On Tue, Feb 18, 2003 at 06:58:30PM -0500, Ben Goertzel wrote:
However, I do think he ended up making a good point about AIXItl, which is
that an AIXItl will probably be a lot worse at modeling other AIXItl's, than
a human is at modeling other humans. This suggests that AIXItl's playing [...]
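A rough resource count suggests why (my gloss of the argument, using Hutter's
stated bounds): each cycle of AIXItl evaluates roughly 2^l candidate policies
of length at most l, running each for at most t steps, so one cycle costs on
the order of t·2^l steps. A faithful simulation of a peer AIXItl with the same
parameters would itself cost on the order of t·2^l steps per simulated cycle,
which cannot fit inside the t-step budget of any single candidate policy the
simulator considers:

$$\underbrace{t \cdot 2^{\ell}}_{\text{one full AIXItl cycle of the peer}} \;\gg\; \underbrace{t}_{\text{time bound on any single candidate policy}}$$

Humans modeling humans face no analogous hard cutoff; we model each other with
lossy approximations rather than exact simulation.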