--- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote:

> Autobliss has no grounding, no internal feedback, and no volition.
> By what definitions does it feel pain?
Now you are making up new rules to decide that autobliss doesn't feel pain. My definition of pain is negative reinforcement in a system that learns. There is no other requirement.

You stated that machines can feel pain, and you stated that we don't get to decide which ones. So can you precisely define grounding, internal feedback, and volition (as properties of Turing machines), and prove that these criteria are valid? And just to avoid confusion, my question has nothing to do with ethics.

-- Matt Mahoney, [EMAIL PROTECTED]
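P.S. To show how little that definition requires, here is a minimal sketch of a system that learns from negative reinforcement. This is only an illustration, not autobliss itself; the TrivialLearner class and its two-action task are invented for the example.

# A hypothetical minimal "learner": not autobliss, just an illustration of
# "negative reinforcement in a system that learns." Names are invented.
import random

class TrivialLearner:
    def __init__(self):
        # Two possible actions, initially equally likely.
        self.weights = {"A": 1.0, "B": 1.0}

    def act(self):
        # Pick an action with probability proportional to its weight.
        total = self.weights["A"] + self.weights["B"]
        return "A" if random.uniform(0, total) < self.weights["A"] else "B"

    def punish(self, action, amount=0.5):
        # Negative reinforcement: the punished action becomes less likely.
        self.weights[action] = max(0.01, self.weights[action] - amount)

if __name__ == "__main__":
    learner = TrivialLearner()
    for _ in range(20):
        a = learner.act()
        if a == "A":              # "A" is always punished
            learner.punish(a)
    print(learner.weights)        # weight on "A" has collapsed; it now avoids "A"

Under the definition above, this program "feels pain" whenever punish() is called; the burden is on the additional criteria to say precisely why it should be excluded.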
