On Tue, Aug 27, 2019, 9:30 AM Stefan Reich via AGI <[email protected]>
wrote:

Please point me to the code being written as a result of this talk then :-)

http://mattmahoney.net/autobliss.txt

Actually, I wrote it in 2007 in response to this same conversation, because
consciousness is a topic that just won't die. The program is the opposite
of AGI: it is a simple reinforcement learner that passes all the tests we
use to determine that animals feel pain. It says "ouch" and modifies its
behavior to avoid negative stimuli. Too much torture will kill it.
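For anyone who doesn't want to read the source, here is a minimal sketch
of that kind of learner in Python. It is not Mahoney's actual autobliss
code (see the link above for that); the class name, constants, and reward
scheme below are illustrative assumptions. It shows the same behaviors:
it says "ouch" on negative stimuli, shifts its action preferences to
avoid them, and dies if punished too much.

import random

class PainLearner:
    def __init__(self, n_actions=2, learning_rate=0.5, death_threshold=-20.0):
        self.values = [0.0] * n_actions     # estimated value of each action
        self.lr = learning_rate
        self.health = 0.0                   # cumulative reward; too negative is fatal
        self.death_threshold = death_threshold
        self.alive = True

    def act(self):
        # Mostly greedy, with a little exploration.
        if random.random() < 0.1:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def reward(self, action, r):
        if not self.alive:
            return
        if r < 0:
            print("ouch")                   # the behavioral test for pain
        # Incremental update of the action's value toward the observed reward.
        self.values[action] += self.lr * (r - self.values[action])
        self.health += r
        if self.health < self.death_threshold:
            self.alive = False
            print("dead")                   # too much torture kills it

# Environment: action 0 is punished, action 1 is rewarded.
agent = PainLearner()
for step in range(50):
    if not agent.alive:
        break
    a = agent.act()
    agent.reward(a, -1.0 if a == 0 else 1.0)
print("final action values:", agent.values)

Run it and it says "ouch" a few times, then settles on the rewarded
action. By the usual behavioral criteria, it avoids pain.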

For any simple behavioral test or criterion for consciousness that you can
come up with, I can write a simple program that passes it. What do you
expect to happen if you believe that intelligence depends on a property
that, by definition, has no behavioral effect? That AGI must be magic?

Oh yeah, it's vibrations. That's it.

Meanwhile Google, Siri, and Alexa fail the Turing test mostly by being too
smart and too helpful.


