On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Tue, 11/18/08, Mark Waser <[EMAIL PROTECTED]> wrote:
>
> > Autobliss has no grounding, no internal feedback, and no
> > volition.  By what definitions does it feel pain?
>
> Now you are making up new rules to decide that autobliss doesn't feel pain.
> My definition of pain is negative reinforcement in a system that learns.
> There is no other requirement.
>
> You stated that machines can feel pain, and you stated that we don't get to
> decide which ones. So can you precisely define grounding, internal feedback
> and volition (as properties of Turing machines)


Clearly, this can be done, and has largely been done already ... though
cutting and pasting or summarizing the relevant literature in emails would
not be a productive use of time.


> and prove that these criteria are valid?
>

That is a different issue, as it depends on the criteria of validity, of
course...

I think one can argue that these properties are necessary for a
finite-resources AI system to display intense systemic patterns correlated
with its goal-achieving behavior in the context of diverse goals and
situations.  So, one can argue that these properties are necessary for **the
sort of consciousness associated with general intelligence** ... but that's
a bit weaker than saying they are necessary for consciousness per se (and I
don't think they are).

ben



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/