Hank Conn <[EMAIL PROTECTED]> wrote:
> I think the question, "will the AI be Friendly?", is only possible to answer AFTER you have the source code of the conscious algorithms sitting on your computer screen, and have rigorous prior theoretical knowledge of exactly how to make an AI Friendly.
Then I'm afraid it is hopeless. There are several problems.
1. It is not possible for a less intelligent entity (a human) to predict the behavior of a more intelligent one. A state machine cannot simulate another machine with more states than itself: by a simple counting argument, it has too few configurations to track all of the larger machine's distinct states (see the sketch after this list).
2. A rigorous proof that an AI will be friendly requires a rigorous definition of "friendly".
3. Assuming (2), proving this property runs into Gödel's incompleteness theorem for any AI system with a Kolmogorov complexity over about 1000 bits; see http://www.vetta.org/documents/IDSIA-12-06-1.pdf and the note after this list.
4. There is no experimental evidence that consciousness exists. You believe that it does because animals that lacked an instinct for self-preservation and a fear of death were eliminated by natural selection.
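
A toy sketch of the counting argument behind (1) (my own illustration in Python, not anything from Legg's paper cited in (3)): the target is a 3-state counter that outputs 1 every third step, and a brute-force search over all 32 deterministic 2-state machines on a one-symbol input finds none that reproduces it, because a 2-state machine is stuck in a cycle of period at most 2.

from itertools import product

# Target: a 3-state counter that outputs 1 exactly when it wraps to 0.
def target_outputs(n):
    s, out = 0, []
    for _ in range(n):
        s = (s + 1) % 3
        out.append(1 if s == 0 else 0)
    return out

N = 9
goal = target_outputs(N)  # [0, 0, 1, 0, 0, 1, 0, 0, 1]

# Enumerate every deterministic 2-state Moore machine over a unary input:
# a transition table t, an output table o, and a start state s0.
matches = 0
for t0, t1, o0, o1, s0 in product((0, 1), repeat=5):
    t, o = (t0, t1), (o0, o1)
    s, out = s0, []
    for _ in range(N):
        s = t[s]          # one step of the 2-state machine
        out.append(o[s])  # its output after that step
    if out == goal:
        matches += 1

print(matches)  # prints 0: no 2-state machine tracks the 3-state counter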
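On (3), the standard result of this flavor is Chaitin's version of incompleteness (my gloss, not a quote from the paper above): for any consistent formal system S there is a constant c, on the order of the length of S's own axioms, such that

    S proves "K(x) > n"  implies  n < c.

So a formal system of bounded complexity can certify complexity lower bounds only up to roughly its own size; if that reading carries over to verifying program properties, a proof system of around 1000 bits cannot pin down the behavior of an AI whose complexity is far larger.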
-- Matt Mahoney, [EMAIL PROTECTED]
