Shane, after reading the comments, I don't see that you have addressed another issue Eliezer brought up:

"I furthermore note that you have not demonstrated that Friendly AI is bunk, any more than Penrose has demonstrated that AI is bunk. You have demonstrated a task on which a provably tries-to-be-Friendly AI cannot knowably-to-us succeed."

To which you replied:

"No, what I have proven is different. I have shown that if A is very powerful (e.g. powerful enough to deal with B) then we cannot prove F(A). Reversing this statement: If we can prove F(A), then A must be weak (e.g. is unable to deal with B). This is not the same as your statement above."

And then Eliezer said:

"My stance would be that you are trying to interpret a mathematically correct theorem in a semantically incorrect way; the formal proof is okay but the attached informal English doesn’t say the same thing the math does.

Generally, when I talk about Friendly AI, I talk about shaping or directing some amount of optimization power - rather than guaranteeing the optimization power itself. In other words, we are not concerned so much with whether the AI has the power to e.g. rain loaves of bread down upon starving North Korea, but rather, whether the AI will do so if it has the capability. This is an important point where the interpretation of the math is concerned."
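
If I try to put that distinction into the same loose shorthand (this is my paraphrase of Eliezer, not his notation), Shane's construction concerns proving that a powerful A actually succeeds at a specific task, something like

  $\mathrm{Provable}\big(\mathrm{Succeeds}(A,\ \text{task involving } B)\big)$

whereas the Friendliness property Eliezer says he cares about is a conditional claim about how A directs whatever power it happens to have:

  $\mathrm{Provable}\big(\mathrm{Capable}(A, X) \Rightarrow \mathrm{Does}(A, X)\big)$ for the things $X$ it ought to do.

The first can fail to be provable without saying anything about the second, which seems to be why he calls the English gloss on the theorem semantically incorrect.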

And then you did not reply, because by that time the whole attempted proof had gone poof. Any further comments on this rather important divide between the two of you? Basically, Eliezer seems to be saying that even if your "FAI is bunk" proof had held up, it still would not really be relevant to what he is actively working on.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/
