On Monday, October 22, 2012 3:08:14 AM UTC-4, rclough wrote:
>
> Hi Craig Weinberg   
>
> OK, you can program anything to emulate a particular human act, 
> and perhaps allow multiple options. But how would your computerized 
> zombie know which option to take in any given situation? 
>

If you believed that our brains were already nothing but computers, then 
you would say that it would know which option to take the same way that 
Google knows which options to show you. I argue that this can only get 
you so far, and that authentic humanity is, in such a replacement scheme, 
a perpetually receding horizon. Speech synthesizers, for example, have 
improved cosmetically over the last 30 years to the point that we can use 
them for Siri or GPS narration, but they have not improved at conveying 
intention and personal presence.
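
On the purely functional picture, "knowing which option to take" is just 
scoring candidates against the current context and returning the top 
scorer. A minimal sketch in Python (the tag-overlap scoring and all the 
names here are invented for illustration):

    # Purely functional option selection: score each candidate against
    # the current context and return the highest scorer. Nothing in this
    # mechanism requires, or produces, intention or personal presence.
    def choose_option(options, context, score):
        return max(options, key=lambda opt: score(opt, context))

    # Hypothetical scoring function: count shared tags.
    def overlap_score(option, context):
        return len(set(option["tags"]) & set(context["tags"]))

    context = {"tags": {"restaurant", "nearby", "open-now"}}
    options = [
        {"name": "A", "tags": {"restaurant", "nearby"}},
        {"name": "B", "tags": {"restaurant", "nearby", "open-now"}},
    ]
    print(choose_option(options, context, overlap_score)["name"])  # -> B

The mechanism answers "which option?" without anyone being home to ask 
the question; that is the sense in which it can only get you so far.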

Unlike some others on this list, I suspect that our feeling for who is 
human and who isn't, while deeply flawed, is not limited to interpreting 
logical observations of behavior. What we feel is alive or sentient 
depends more on what we like, and what we like depends on what is like 
us. None of these criteria, however, gives us any reason to believe that 
a given thing actually has human-like experiences.

Craig

 

> I don't think the options would be sophisticated enough to fool 
> anybody. But perhaps I am being too demanding. 
>
> Roger Clough, rcl...@verizon.net 
> 10/22/2012   
> "Forever is a long time, especially near the end." -Woody Allen 
>
>
> ----- Receiving the following content -----   
> From: Craig Weinberg   
> Receiver: everything-list   
> Time: 2012-10-21, 16:53:03 
> Subject: Re: Re: Solipsism = 1p 
>
> On Sunday, October 21, 2012 3:39:11 PM UTC-4, rclough wrote: 
>
>
> BRUNO: Keep in mind that zombie, here, is a technical term. By 
> definition it behaves like a human. No humans at all can tell the 
> difference. Only God knows, if you want. 
>
> ROGER: I claim that it is impossible for any kind of zombie 
> that has no mind to act like a human. IMHO that would 
> be an absurdity, because without a mind you cannot know 
> anything. You would run into walls, for example, and 
> couldn't know what to do in any event. Etc. 
> You couldn't understand language. 
>
>
>
> Roger, I agree that your intuition is right - a philosophical zombie 
> cannot exist in reality, but not for the reasons you are coming up with. 
> Anything can be programmed to act like a human at some level of 
> description. A scarecrow may act like a human in the eyes of a crow - 
> well enough that the crow might be less likely to land nearby. You can 
> make robots which won't run into walls, or chatbots which respond to 
> some range of vocabulary and sentence construction. The idea behind 
> philosophical zombies is that we assume there is nothing stopping us, 
> in theory, from assembling all of the functions of a human being as a 
> single machine, and that such a machine, it is thought, will either 
> have some kind of human-like experience or else no experience at all. 
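>
> A toy illustration of "acting like a human at some level of 
> description" is an ELIZA-style chatbot: a few pattern-and-response 
> rules with no model of meaning behind them. A minimal sketch in Python 
> (the rules are invented examples): 
>
>     import re
>
>     # Each rule maps a regex over the input to a canned reply template.
>     # The program responds to a range of vocabulary and sentence
>     # construction without understanding any of it.
>     RULES = [
>         (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
>         (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
>     ]
>
>     def reply(text):
>         for pattern, template in RULES:
>             match = pattern.search(text)
>             if match:
>                 return template.format(match.group(1))
>         return "Tell me more."
>
>     print(reply("I am worried about zombies"))
>     # -> Why do you say you are worried about zombies?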
>
> The "Absent Qualia, Fading Qualia" paper concerns a thought experiment 
> which tries to take the latter scenario seriously from the point of 
> view of a person whose brain is gradually taken over by these 
> substitute sub-brain functional units. Would they see blue as being 
> less and less blue as more of their brain is replaced, or would blue 
> just suddenly disappear at some point? Either answer seems absurd, 
> given that the sum of the remaining brain functions plus the sum of 
> the replaced brain functions must, by definition of the thought 
> experiment, produce no change in observed behavior. 
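>
> The structure of the replacement scenario can be made explicit in a 
> few lines: if each substitute unit is stipulated to compute exactly 
> the same input/output mapping as the unit it replaces, then the 
> composite's behavior is identical at every replacement fraction, by 
> construction. A sketch (all names and the toy mapping are invented): 
>
>     # Original and substitute units are stipulated to compute the same
>     # function, so swapping in any fraction changes nothing observable.
>     def neuron(signal):
>         return signal * 2 + 1          # stand-in for biological behavior
>
>     def functional_substitute(signal):
>         return signal * 2 + 1          # identical mapping, by stipulation
>
>     def brain(units, signal):
>         for unit in units:
>             signal = unit(signal)
>         return signal
>
>     original = [neuron] * 10
>     for k in range(11):                # replace 0 through 10 units
>         hybrid = [functional_substitute] * k + [neuron] * (10 - k)
>         assert brain(hybrid, 1) == brain(original, 1)
>
>     # Observed behavior is invariant at every fraction because the
>     # premise guarantees it, so behavior alone cannot reveal whether
>     # qualia fade, vanish, or persist.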
>
> This was my response to that thought experiment, addressed to Stathis: 
>
> Stathis: In a thought experiment we can say that the imitation 
> stimulates the surrounding neurons in the same way as the original. 
>
> Craig: Then the thought experiment is garbage from the start. It begs 
> the question. Why not just say we can have an imitation human being 
> that stimulates the surrounding human beings in the same way as the 
> original? Ta-da! That makes it easy. Now all we need to do is make a 
> human being that stimulates their social matrix in the same way as the 
> original, and we have perfect AI without messing with neurons or brains 
> at all. Just make a whole person out of person stuff - like, as a 
> thought experiment, suppose there is some stuff X which makes things 
> that human beings take to be another human being. Like marzipan. We can 
> put the right pheromones in it and dress it up nice, and according to 
> the thought experiment, let's say that works. 
>
> You aren't allowed to deny this, because then you don't understand the 
> thought experiment, see? Don't you get it? You have to accept this 
> flawed pretext to have the discussion that I will engage in now. See 
> how it works? Now we can talk for six or eight months about how human 
> marzipan is inevitable, because it wouldn't make sense, if you replaced 
> a city gradually with marzipan people, that New York would gradually 
> fade into less of a New York, or that New York would suddenly become 
> absent. It's a fallacy. The premise screws up the result. 
>
> Craig 
>
