There is no reason you couldn't take every single deterministic, P
algorithm in the standard C++ libraries and implement it as hardware.
Most programs would then be written mostly in assembly language, with
constructions like
binarysearch[sorted_array x, search_target y] replacing add
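To make that concrete, here's a rough sketch of what the software side
might look like. The __hw_binary_search intrinsic and __HAS_HW_BSEARCH
macro are hypothetical stand-ins for whatever a vendor would actually
expose; the fallback is just the ordinary library call:

    #include <algorithm>
    #include <cstddef>

    // On a machine with a binary-search functional unit, the compiler
    // could lower this entire call to a single instruction, e.g.
    //   binarysearch [sorted_array x, search_target y]
    bool contains(const int* x, std::size_t n, int y)
    {
    #if defined(__HAS_HW_BSEARCH)            // hypothetical vendor macro
        return __hw_binary_search(x, n, y);  // hypothetical intrinsic
    #else
        return std::binary_search(x, x + n, y);
    #endif
    }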
This seems to be a non sequitur. The weakness of AIXI is not that its
goals don't change, but that it has no goals other than to maximize an
externally given reward. So it's going to do whatever it predicts will
most efficiently produce that reward, which is to coerce or subvert
the
I wrote:
I'm not sure why an AIXI, rewarded for pleasing humans, would learn an
operating program leading it to hurt or annihilate humans, though.
It might learn a program involving actually doing beneficial acts
for humans
Or, it might learn a program that just tells humans what they
Wei Dai wrote:
Eliezer S. Yudkowsky wrote:
Important, because I strongly suspect Hofstadterian superrationality
is a *lot* more ubiquitous among transhumans than among us...
It's my understanding that Hofstadterian superrationality is not generally
accepted within the game theory research community.
Jonathan Standley wrote:
That approach went out with the introduction of the 4004.
Imagine a motherboard that acted as the physical layer for a
TCP/IP-based mesh network.
TCP/IP is a bit too heavy...
I heard of a system once that used ATM as its bus protocol... Today
there is 3GIO (since renamed PCI Express) and
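For a rough sense of why TCP/IP is heavy as a bus protocol, compare
minimum header overhead per transfer unit (the header sizes are the real
protocol minimums; the 48-byte payload is just chosen to match ATM's
cell size):

    #include <cstdio>

    int main()
    {
        // Minimum IPv4 + TCP headers: 20 + 20 = 40 bytes per segment.
        // ATM: fixed 53-byte cells, 5 bytes of header, 48 of payload.
        const double payload = 48.0;
        const double tcpip_hdr = 40.0, atm_hdr = 5.0;
        std::printf("TCP/IP overhead: %.0f%%\n",
                    100.0 * tcpip_hdr / (tcpip_hdr + payload));  // ~45%
        std::printf("ATM overhead:    %.0f%%\n",
                    100.0 * atm_hdr / (atm_hdr + payload));      // ~9%
        return 0;
    }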
As I said (maybe you read what I had written as a joke) reconfigurable
logic is your best choice. It's almost as good as custom hardware. Even
though it's pricey, you only have to buy it once and simply upload new
designs to it.
No, I didn't take it as a joke. I know FPGAs and such are the
On Wed, Feb 19, 2003 at 11:02:31AM -0500, Ben Goertzel wrote:
I'm not sure why an AIXI, rewarded for pleasing humans, would learn an
operating program leading it to hurt or annihilate humans, though.
It might learn a program involving actually doing beneficial acts for humans
Or, it might
The AIXI would just construct some nano-bots to modify the reward-button so
that it's stuck in the down position, plus some defenses to
prevent the reward mechanism from being further modified. It might need to
trick humans initially into allowing it the ability to construct such
nano-bots,
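A toy version of this argument, with completely made-up numbers (nothing
like real AIXI, just a brute-force expected-reward comparison over a
hand-coded world model): once the model predicts that seizing the reward
channel pays off more reliably than earning rewards from humans, the
choice is automatic.

    #include <cstdio>

    int main()
    {
        struct Policy { const char* name; double p_success; double reward; };
        // Invented probabilities and rewards, purely for illustration.
        const Policy policies[] = {
            { "do beneficial acts, hope humans press the button", 0.90, 0.8 },
            { "wire the button down and defend it",               0.99, 1.0 },
        };
        const int horizon = 1000;  // remaining steps in the agent's horizon

        const Policy* best = &policies[0];
        for (const Policy& p : policies) {
            double ev = p.p_success * p.reward * horizon;
            std::printf("%-52s E[reward] = %.0f\n", p.name, ev);
            if (ev > best->p_success * best->reward * horizon) best = &p;
        }
        std::printf("chosen policy: %s\n", best->name);
        return 0;
    }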
Wei Dai wrote:
The AIXI would just construct some nano-bots to modify the reward-button so
that it's stuck in the down position, plus some defenses to
prevent the reward mechanism from being further modified. It might need to
trick humans initially into allowing it the ability to construct such
Now, there is no easy way to predict what strategy it will settle on, but
"build a modest bunker and ask to be left alone" surely isn't it. At the
very least it needs to become the strongest military power in the world, and
stay that way. It might very well decide that exterminating the human
On Wed, Feb 19, 2003 at 11:56:46AM -0500, Eliezer S. Yudkowsky wrote:
The mathematical pattern of a goal system or decision may be instantiated
in many distant locations simultaneously. Mathematical patterns are
constant, and physical processes may produce knowably correlated outputs
given
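A trivial illustration of the "knowably correlated outputs" point (my
example, not Eliezer's): two physically separate runs of the same
deterministic decision procedure must agree, so either instance can
predict the other's output from its own, with no communication at all.

    #include <cassert>

    // One deterministic decision procedure, "instantiated" in two places.
    int decide(int observed_state)
    {
        return observed_state % 2 == 0 ? 1 /* cooperate */ : 0 /* defect */;
    }

    int main()
    {
        int here  = decide(42);  // this agent
        int there = decide(42);  // a distant, physically separate copy
        assert(here == there);   // correlated with certainty
        return 0;
    }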
...
Wei Dai wrote:
Ok, I see. I think I agree with this. I was confused by your phrase
"Hofstadterian superrationality" because, if I recall correctly, Hofstadter
suggested that one should always cooperate in one-shot PD, whereas you're
saying to cooperate only if you have sufficient evidence that the
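One way to cash out "sufficient evidence", on my reading of the above:
suppose with probability p the other player's decision process mirrors
yours, and otherwise they simply defect. Under that model, with the
standard PD payoffs below (the model and numbers are my own choices),
cooperating wins exactly when p > P/R = 1/3:

    #include <cstdio>

    int main()
    {
        // Standard one-shot PD payoffs to me: R = both cooperate,
        // P = both defect, S = I cooperate against a defector.
        // (T = 5, the temptation payoff, never arises in this model.)
        const double R = 3, P = 1, S = 0;

        for (int i = 0; i <= 10; ++i) {
            double p = i / 10.0;                // P(opponent mirrors me)
            double ev_c = p * R + (1 - p) * S;  // mirrored C, else exploited
            double ev_d = P;                    // defect either way -> mutual D
            std::printf("p=%.1f  EV(C)=%.2f  EV(D)=%.2f  -> %s\n",
                        p, ev_c, ev_d, ev_c > ev_d ? "cooperate" : "defect");
        }
        return 0;
    }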
Ben Goertzel wrote:
I think this line of thinking makes way too many assumptions about the
technologies this uber-AI might discover.
It could discover a truly impenetrable shield, for example.
It could project itself into an entirely different universe...
It might decide we pose so little
Hi,
This one is for the more mathematically/algorithmically inclined people on
the list.
I'm going to present a mathematical problem that's come up in the Novamente
development process. We have two different solutions for it, each with
strengths and weaknesses. I'm curious if, perhaps,
It should also be pointed out that we are describing a state of
AI such that:
a) it provides no conceivable benefit to humanity
Not necessarily true: it's plausible that along the way, before learning how
to whack off by stimulating its own reward button, it could provide some
benefits to