On Sun, Sep 21, 2014 at 7:56 PM, John Rose via AGI <[email protected]> wrote:
>> -----Original Message-----
>> From: Matt Mahoney [mailto:[email protected]]
>>
>>
>> Are you saying that a p-zombie AGI would behave differently than a
>> conscious AGI? Now we're getting somewhere. Exactly how would you test if
>> an AGI really "understands" us?
>>
>
> On second thought I don't know which is more dangerous, a non-p-conscious AGI 
> or a p-conscious AGI.

I would expect a p-conscious AGI to be more dangerous if I believed
in p-consciousness, because I would also assume it had free
will. But free will is also an illusion. It is caused by positive
reinforcement of actions. Again, we have it because it increases our
reproductive fitness. You would not want to live if you did not enjoy
doing things. The side effect of the pleasure of doing things is that
it reinforces the belief that there is a "me" that decides to do them,
rather than that deterministic neural processes are sending signals to
my muscles.

We associate p-consciousness with understanding. If I wanted to test
whether an AGI understands me, I would test it as I would test a
human: ask it to rephrase what I just said. The problem is
that we already have machines that can do this. When you search on
Google, it will match phrases that have the same meaning but different
words.

We don't think Google is conscious. To me, whether an AGI is conscious
is irrelevant as long as it understands me.

-- 
-- Matt Mahoney, [email protected]

