On Sunday, May 19, 2019 at 2:19:31 AM UTC-5, Philip Thrift wrote:
>
>
>
> On Sunday, May 19, 2019 at 1:50:03 AM UTC-5, Brent wrote:
>>
>>
>>
>> On 5/18/2019 11:25 PM, Philip Thrift wrote:
>>
>>
>> No I can't *prove *we aren't simulations, or that a simulation running 
>> in a big computer made of Intel Cores can't be conscious.
>>
>>
>> Nor can you give a reply to Chalmers's fading-qualia argument.
>>
>>
>>
> http://consc.net/papers/qualia.html :
>
> *for a system to be conscious it must have the right sort of biochemical 
> makeup; if so, a metallic robot or a silicon-based computer could never 
> have experiences, no matter what its causal organization *
>
> That, from David Chalmers's paper, is the only good takeaway. 
>
> And it's the only thing engineers need to pay attention to. Right now, AI 
> engineers just want to make smart robots, not conscious robots. But if they 
> did want conscious ones, then the passage above is all that matters.
>
> (In any case, I don't think Chalmers himself still believes what he wrote 
> in papers 25 years ago, per Philip Goff.)
>
> @philipthrift 
>
 

I should say above: AI engineers want to make functionally-smart robots. 
That's a better term.

Back in the '80s I was working on autonomous smart weapons, or autonomous 
smart missiles, which could "see" on their own and make decisions (I sort 
of hate to say). That was DARPA's name for them.

If a smart missile were conscious, it would be committing suicide.

@philipthrift

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/468a71b4-87ca-4c95-b84f-5034afddaf3c%40googlegroups.com.