On Wed, Sep 24, 2014 at 9:15 AM, John Rose via AGI <[email protected]> wrote:
>> I agree. An AGI needs to be able to model human minds in order to
>> communicate effectively with people. If the model didn't claim to be
>> conscious then I would consider that a bug.
>>
>
> They don't matter UNLESS engineering a real p-conscious entity, versus an 
> ersatz Google-like behavioral regurgitation, is easier to build, requires 
> significantly less computational resources to run, AND yields 
> significantly more intelligence and human-interactive assistance capability 
> on said resources.
>
> IOW which one is easier to build and which one runs better. My argument for 
> p-consciousness AGI design is also based on engineering and runtime estimates.

A p-conscious AGI and a zombie AGI would have identical behavior,
because that is how a zombie is defined. (See
http://en.wikipedia.org/wiki/Philosophical_zombie ). Therefore both
AGIs would have identical designs and identical costs, unless you
believe (like Penrose) that human behavior is not computable. I don't
think anyone on this list believes that a sufficiently powerful
computer couldn't, at least in principle, do what our 86 billion
neurons do.
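To make "sufficiently powerful" concrete, here is a minimal back-of-envelope sketch of the raw compute a brain-scale simulation implies. The 86 billion neuron count is from this thread; the synapse count and firing rate are illustrative assumptions of mine, not figures from Matt's document.

```python
# Rough estimate of synaptic operations per second for a whole-brain
# simulation. All constants except NEURONS are assumed, not sourced.

NEURONS = 86e9              # neurons in the human brain (from the thread)
SYNAPSES_PER_NEURON = 7e3   # assumed average synapses per neuron
FIRING_RATE_HZ = 10         # assumed average spike rate per neuron
OPS_PER_EVENT = 1           # assume one multiply-add per synaptic event

ops_per_second = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ * OPS_PER_EVENT
print(f"~{ops_per_second:.1e} synaptic ops/s")
```

Under these assumptions the total comes out around 6e15 operations per second, which is within reach of today's large computing clusters; the point is that nothing here requires non-computable physics, only scale.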

BTW, do you agree with my cost estimate?
https://docs.google.com/document/d/1Z0kr3XDoM6cr5TgHH0GXQTjyikr7WpCkpWFn9IglW3o

I realize it is tempting to look for some magic shortcut, like
consciousness or quantum computing or P = NP, to get around this $1
quadrillion problem that we have been working on for the last 60
years.


-- 
-- Matt Mahoney, [email protected]

