On 9/17/06, Brian Atkins <[EMAIL PROTECTED]> wrote:
> > Eliezer thought that I was trying to prove that the AI would or could save us.
> > This was not correct, as I pointed out. The objection above is based on this
> > misunderstanding and thus, if I understand the objection correctly, once this
> > misunderstanding is corrected the objection is no longer relevant.
> >
> > Does this answer your question?
> >
> > Shane
> Shane, after reading the comments, I don't see that you have addressed another
> issue Eliezer brought up:
>
> "My stance would be that you are trying to interpret a mathematically correct
> theorem in a semantically incorrect way; the formal proof is okay but the
> attached informal English doesn't say the same thing the math does.
>
> Generally, when I talk about Friendly AI, I talk about shaping or directing some
> amount of optimization power - rather than guaranteeing the optimization power
> itself."

This was not correct, as I pointed out. The objection above is based on this
misunderstanding, so, if I understand the objection correctly, once the
misunderstanding is corrected the objection is no longer relevant.
Does this answer your question?
Shane
