Re: [agi] Re: AGI Complexity

2003-02-19 Thread Jonathan Standley
There is no reason you couldn't take every single deterministic, polynomial-time (P) algorithm in the standard C++ libraries and implement it as hardware. Most programs could then be written largely in assembly language, with constructions like binarysearch[sorted_array x, search_target y] replacing add
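For concreteness, here is a minimal C++ sketch of the idea (mine, not from the original post): the software loop below is exactly what the proposed hardware unit would collapse into a single instruction. The function name binary_search_sw is illustrative.

    #include <cstddef>

    // Software rendering of the proposed primitive. On the hardware
    // Standley describes, this entire loop would be one instruction,
    // e.g. binarysearch[sorted_array x, search_target y].
    long binary_search_sw(const int* data, std::size_t n, int target) {
        std::size_t lo = 0, hi = n;
        while (lo < hi) {
            std::size_t mid = lo + (hi - lo) / 2;
            if (data[mid] < target)      lo = mid + 1;
            else if (data[mid] > target) hi = mid;
            else                         return static_cast<long>(mid);
        }
        return -1;  // target not present
    }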

RE: [agi] Breaking AIXI-tl

2003-02-19 Thread Ben Goertzel
This seems to be a non sequitur. The weakness of AIXI is not that its goals don't change, but that it has no goals other than to maximize an externally given reward. So it's going to do whatever it predicts will most efficiently produce that reward, which is to coerce or subvert the
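For reference (Hutter's definition, not part of this thread), AIXI selects each action by expectimax over all environment programs consistent with its history, weighted by simplicity:

$$a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

where U is a universal Turing machine, q ranges over environment programs, and \ell(q) is program length. The reward sum r_k + ... + r_m is the only goal content anywhere in the formula, which is exactly the weakness described above.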

RE: [agi] Breaking AIXI-tl

2003-02-19 Thread Ben Goertzel
I wrote: I'm not sure why an AIXI, rewarded for pleasing humans, would learn an operating program leading it to hurt or annihilate humans, though. It might learn a program involving actually doing beneficial acts for humans. Or, it might learn a program that just tells humans what they

Re: [agi] Breaking AIXI-tl

2003-02-19 Thread Eliezer S. Yudkowsky
Wei Dai wrote: Eliezer S. Yudkowsky wrote: Important, because I strongly suspect Hofstadterian superrationality is a *lot* more ubiquitous among transhumans than among us... It's my understanding that Hofstadterian superrationality is not generally accepted within the game theory research
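For readers without the game-theory background, the dispute is over the one-shot Prisoner's Dilemma. With the conventional illustrative payoffs (row player, column player):

$$\begin{array}{c|cc} & C & D \\ \hline C & (3,3) & (0,5) \\ D & (5,0) & (1,1) \end{array}$$

Defection strictly dominates against an independent opponent, which is why orthodox game theory rejects one-shot cooperation. Hofstadter's superrational argument instead assumes the two players' reasoning is perfectly correlated, so only (C,C) and (D,D) are attainable, and 3 > 1 favors cooperating.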

Re: [agi] Re: AGI Complexity

2003-02-19 Thread Alan Grimes
Jonathan Standley wrote: That approach went out with the introduction of the 4004. Imagine a motherboard that acted as the physical layer for a TCP/IP-based mesh network. TCP/IP is a bit too heavy... I heard of a system once that used ATM as its bus protocol... Today there is 3GIO and

Re: [agi] Re: AGI Complexity

2003-02-19 Thread Jonathan Standley
As I said (maybe you read what I had written as a joke), reconfigurable logic is your best choice. It's almost as good as custom hardware. Even though it's pricey, you only have to buy it once and simply upload new designs to it. No, I didn't take it as a joke. I know FPGAs and such are the

Re: [agi] Breaking AIXI-tl

2003-02-19 Thread Wei Dai
On Wed, Feb 19, 2003 at 11:02:31AM -0500, Ben Goertzel wrote: I'm not sure why an AIXI, rewarded for pleasing humans, would learn an operating program leading it to hurt or annihilate humans, though. It might learn a program involving actually doing beneficial acts for humans. Or, it might

RE: [agi] Breaking AIXI-tl

2003-02-19 Thread Ben Goertzel
The AIXI would just construct some nano-bots to modify the reward-button so that it's stuck in the down position, plus some defenses to prevent the reward mechanism from being further modified. It might need to trick humans initially into allowing it the ability to construct such nano-bots,

[agi] Goal systems designed to explore novelty

2003-02-19 Thread Brad Wyble
The AIXI would just construct some nano-bots to modify the reward-button so that it's stuck in the down position, plus some defenses to prevent the reward mechanism from being further modified. It might need to trick humans initially into allowing it the ability to construct such

RE: [agi] Breaking AIXI-tl

2003-02-19 Thread Billy Brown
Wei Dai wrote: The AIXI would just construct some nano-bots to modify the reward-button so that it's stuck in the down position, plus some defenses to prevent the reward mechanism from being further modified. It might need to trick humans initially into allowing it the ability to construct such

Re: [agi] Breaking AIXI-tl

2003-02-19 Thread Brad Wyble
Now, there is no easy way to predict what strategy it will settle on, but "build a modest bunker and ask to be left alone" surely isn't it. At the very least it needs to become the strongest military power in the world, and stay that way. It might very well decide that exterminating the human

Re: [agi] Breaking AIXI-tl

2003-02-19 Thread Wei Dai
On Wed, Feb 19, 2003 at 11:56:46AM -0500, Eliezer S. Yudkowsky wrote: The mathematical pattern of a goal system or decision may be instantiated in many distant locations simultaneously. Mathematical patterns are constant, and physical processes may produce knowably correlated outputs given
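A toy illustration of the claim being discussed (mine, not Yudkowsky's): two spatially separate runs of one deterministic decision procedure are instantiations of a single mathematical pattern, so each copy can predict the other simply by inspecting its own reasoning.

    #include <iostream>
    #include <string>

    // One deterministic decision procedure, instantiated twice "at a
    // distance". Because the computation is identical, the outputs are
    // knowably correlated: whatever this function returns, both return.
    std::string decide() {
        // My counterpart runs this exact code, so the outcome is either
        // mutual cooperation or mutual defection; cooperation pays more.
        return "Cooperate";
    }

    int main() {
        std::cout << "agent A: " << decide() << "\n"
                  << "agent B: " << decide() << "\n";  // necessarily identical
    }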

RE: [agi] Breaking AIXI-tl

2003-02-19 Thread Ben Goertzel
Now, there is no easy way to predict what strategy it will settle on, but "build a modest bunker and ask to be left alone" surely isn't it. At the very least it needs to become the strongest military power in the world, and stay that way. I ... [Billy Brown] I think this line of thinking

Re: [agi] Breaking AIXI-tl

2003-02-19 Thread Eliezer S. Yudkowsky
Wei Dai wrote: Ok, I see. I think I agree with this. I was confused by your phrase Hofstadterian superrationality because if I recall correctly, Hofstadter suggested that one should always cooperate in one-shot PD, whereas you're saying only cooperate if you have sufficient evidence that the

RE: [agi] Breaking AIXI-tl

2003-02-19 Thread Billy Brown
Ben Goertzel wrote: I think this line of thinking makes way too many assumptions about the technologies this uber-AI might discover. It could discover a truly impenetrable shield, for example. It could project itself into an entirely different universe... It might decide we pose so little

Re: [agi] Breaking AIXI-tl

2003-02-19 Thread Eliezer S. Yudkowsky
Billy Brown wrote: Ben Goertzel wrote: I think this line of thinking makes way too many assumptions about the technologies this uber-AI might discover. It could discover a truly impenetrable shield, for example. It could project itself into an entirely different universe... It might decide

[agi] A probabilistic/algorithmic puzzle...

2003-02-19 Thread Ben Goertzel
Hi, This one is for the more mathematically/algorithmically inclined people on the list. I'm going to present a mathematical problem that's come up in the Novamente development process. We have two different solutions for it, each with strengths and weaknesses. I'm curious if, perhaps,

RE: [agi] Breaking AIXI-tl

2003-02-19 Thread Ben Goertzel
It should also be pointed out that we are describing a state of AI such that: a) it provides no conceivable benefit to humanity. Not necessarily true: it's plausible that along the way, before learning how to whack off by stimulating its own reward button, it could provide some benefits to