Really, when has a computer (with the exception of certain Microsoft
products) ever been able to disobey its human masters?
It's easy to get caught up in the romance of superpowers, but come on,
there's nothing to worry about.
-Daniel
Hi Daniel,
Clearly there is nothing to worry about...
C. David Noziglia wrote:
The problem with the issue we are discussing here is that the worst-case
scenario for handing power to unrestricted, super-capable AI entities is
very bad, indeed. So what we are looking for is not really building an
ethical structure or moral sense at all. Failure is
Hi David,
The problem here, I guess, is the conflict between Platonic expectations of
perfection and the messiness of the real world.
I never said perfection, and in my book make it clear that
the task of a super-intelligent machine learning behaviors
to promote human happiness will be very messy.
Even if a (grown) human is playing PD2, it outperforms AIXI-tl playing
PD2.
Well, in the long run, I'm not at all sure this is the case. You haven't
proved this to my satisfaction.
In the short run, it certainly is the case. But so what? AIXI-tl is damn
slow at learning, we know that.
The
Brad Wyble wrote:
3) A society of selfish AIs may develop certain (not really
primatelike) rules for enforcing cooperative interactions among
themselves; but you cannot prove for any entropic specification, and
I will undertake to *disprove* for any clear specification, that this
creates
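[The "rules for enforcing cooperative interactions" under debate here are the territory of Axelrod-style iterated Prisoner's Dilemma results. A minimal illustrative sketch, my own example rather than anything from the thread: a reciprocating strategy such as tit-for-tat sustains cooperation among purely selfish players while never being exploited for long.]

```python
# Illustrative iterated Prisoner's Dilemma (standard payoffs assumed).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(a, b, rounds=10):
    hist_a, hist_b = [], []          # moves each player has made so far
    score_a = score_b = 0
    for _ in range(rounds):
        ma = a(hist_b)               # each strategy sees the opponent's past
        mb = b(hist_a)
        pa, pb = PAYOFF[(ma, mb)]
        score_a += pa
        score_b += pb
        hist_a.append(ma)
        hist_b.append(mb)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then retaliates: (9, 14)
```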
There are simple external conditions that provoke protective tendencies in
humans following chains of logic that seem entirely natural to us. Our
intuition that reproducing these simple external conditions serves to
provoke protective tendencies in AIs is knowably wrong, failing an
Brad Wyble wrote:
There are simple external conditions that provoke protective
tendencies in humans following chains of logic that seem entirely
natural to us. Our intuition that reproducing these simple external
conditions serves to provoke protective tendencies in AIs is knowably
wrong,
Ben Goertzel wrote:
Even if a (grown) human is playing PD2, it outperforms AIXI-tl
playing PD2.
Well, in the long run, I'm not at all sure this is the case. You
haven't proved this to my satisfaction.
PD2 is very natural to humans; we can take for granted that humans excel
at PD2. The
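[For readers following the thread: "PD2" here is the one-shot Prisoner's Dilemma played against an exact copy of yourself. A minimal sketch of why a reflective reasoner finds it easy; the payoffs and the agent policy below are illustrative assumptions, not anyone's actual formalism.]

```python
# One-shot Prisoner's Dilemma payoffs for the row player (assumed values).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}

def reflective_agent(opponent_source, my_source):
    """Cooperate iff the opponent is literally my own code.

    A human-like reasoner can notice "my opponent is a copy of me, so we
    will choose identically" and pick the better diagonal outcome (C, C).
    """
    return "C" if opponent_source == my_source else "D"

SRC = reflective_agent.__code__.co_code  # stand-in for "my own source code"
a = reflective_agent(SRC, SRC)
b = reflective_agent(SRC, SRC)           # the clone runs the same computation
print(PAYOFF[(a, b)])                    # both cooperate: each scores 3
```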
Hey Eliezer, my name is Hibbard, not Hubbard.
On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote:
Bill Hibbard wrote:
I never said perfection, and in my book make it clear that
the task of a super-intelligent machine learning behaviors
to promote human happiness will be very messy. That's
Bill Hibbard wrote:
On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote:
It *could* do this but it *doesn't* do this. Its control process is such
that it follows an iterative trajectory through chaos which is forbidden
to arrive at a truthful solution, though it may converge to a stable
attractor.
Bill Hibbard wrote:
Hey Eliezer, my name is Hibbard, not Hubbard.
*Argh* (sound of hand whapping forehead) sorry.
On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote:
*takes deep breath*
This is probably the third time you've sent a message
to me over the past few months where you make some
Eliezer S. Yudkowsky asked Ben Goertzel:
Do you have a non-intuitive mental simulation mode?
LOL --#:^D
It *is* a valid question, Eliezer, but it makes me laugh.
Michael Roy Ames
[Who currently estimates his *non-intuitive mental simulation mode* to
contain about 3 iterations of 5
Strange that there would be someone on this list with a
name so similar to mine.
Cheers,
Bill
--
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706
[EMAIL PROTECTED] 608-263-4427 fax: 608-263-6738
Bill Hibbard wrote:
Strange that there would be someone on this list with a
name so similar to mine.
I apologize, dammit! I whack myself over the head with a ballpeen hammer!
Now let me ask you this: Do you want to trade names?
--
Eliezer S. Yudkowsky
I'll read the rest of your message tomorrow...
But we aren't *talking* about whether AIXI-tl has a mindlike operating
program. We're talking about whether the physically realizable
challenge,
which definitely breaks the formalism, also breaks AIXI-tl in practice.
That's what I originally
Hmmm... My friend, I think you've pretty much convinced me with this last
batch of arguments. Or, actually, I'm not sure if it was your excellently
clear arguments or the fact that I finally got a quiet 15 minutes to really
think about it (the three kids, who have all been out sick from
Ben Goertzel wrote:
I'll read the rest of your message tomorrow...
But we aren't *talking* about whether AIXI-tl has a mindlike
operating program. We're talking about whether the physically
realizable challenge, which definitely breaks the formalism, also
breaks AIXI-tl in practice. That's
Eliezer S. Yudkowsky wrote:
But if this isn't immediately obvious to you, it doesn't seem like a top
priority to try and discuss it...
Argh. That came out really, really wrong and I apologize for how it
sounded. I'm not very good at agreeing to disagree.
Must... sleep...
--
Eliezer S.
Bill,
Gulp... who was the Yank who said "it was I"? Johnny Appleseed
or something?
Well, it's my turn to fess up. I'm pretty certain that it was my slip of the
keyboard that started it all. Sorry.
:)
My only excuse is that in my area of domain knowledge King Hubbard
is very famous.