On Tuesday, September 10, 2013 4:09:22 AM UTC-4, stathisp wrote:
>
>
>
> On Tuesday, September 10, 2013, Craig Weinberg wrote:
>
>>
>>
>> On Monday, September 9, 2013 11:39:31 PM UTC-4, stathisp wrote:
>>>
>>> (Resending complete email - trying to do this on a phone.)
>>>
>>> On Tuesday, September 10, 2013, Stathis Papaioannou wrote:
>>>
>>>>
>>>>
>>>> On Thursday, September 5, 2013, Craig Weinberg wrote:
>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> My position would suggest that the more mechanistic the conditions of 
>>>>> the test, the more it stacks the test in favor of not being able to tell 
>>>>> the difference. If you want to fool someone into thinking an AI is alive, 
>>>>> get a small group of people who lean toward Asperger's traits and show 
>>>>> them short, unrelated examples in a highly controlled context. 
>>>>>
>>>>
>>>> You accept, of course, that people with Asperger's have feelings even 
>>>> though they don't express them like everyone else?
>>>>
>>>
>> Certainly. I was using the idea of selecting for Asperger's traits as a 
>> way of stacking the deck toward a result that de-emphasizes emotional 
>> discernment of others' behavior.
>>  
>>
>>>  
>>>>
>>>>> If you want to really bring out the differences between the two, use a 
>>>>> diverse audience and have them interact freely for a long time in many 
>>>>> different contexts, often without oversight. What you are looking for is 
>>>>> aesthetic cues that may not even be able to be named - intuitions of 
>>>>> something about the AI being off or untrustworthy, continuity gaps, 
>>>>> non-fluidity, etc. It's sort of like taking a video screen out into the 
>>>>> sunlight. You get a better view of what it isn't when you can see more of 
>>>>> what it is.
>>>>>
>>>>
>>> It sounds like you're proposing a variant of the Turing Test. What would 
>>> you say if the diverse audience decided the AI probably had feelings, or 
>>> probably had feelings but different to most people's, like the Asperger's 
>>> case?
>>>
>>
>> In contrasting the two tests, I'm after the opposite of what is typically 
>> intended by the Turing Test. I am proposing a way to test the extent to 
>> which any given Turing-type test reflects the bias of the interpreter 
>> rather than any intrinsic quality of the target of the test.
>>
>> It's hard to say for sure that a positive outcome for the test has any 
>> meaning. It's mainly to prove a negative. Maybe only one person out of ten 
>> million can pick up on the subtle cues that give away the simulation, and 
>> maybe they are too shy to speak up in public. Maybe only dogs can tell it's 
>> not a person. My hunch though is that this is academic. I expect that 
>> simulations will always be pretty easy to figure out given enough time and 
>> diversity of audience and interaction. If at some point in time that is no 
>> longer the case, the ability to tell the difference will probably be 
>> available as an app for our own augmented human systems.
>>
>> Craig
>>
>
> You are assuming the entities around you either are or aren't conscious, 
> but you have no way of telling. If you have no way of telling, then how do 
> you know those around you are conscious, and how do you know that computers 
> aren't? By analogy with your own experience, you can say that those like 
> you are conscious, but you do this on the basis of their behaviour being 
> like yours, 
>

Not necessarily. Behavior that I am consciously aware of is perhaps the 
dominant factor, but our sensitivity transcends conscious attention. I may 
not be able to tell mentally that a person is an impostor, but my skin may 
feel the difference. I may experience that difference in a subtle way which 
I might ignore by default, but it might be a sensitivity that I could 
train myself to develop. Maybe it's not the skin which can tell the 
difference; maybe it's a history of personal experience, a familiarity with 
death on the battlefield, or a cultural background which is highly attuned 
to emotion.
 

> not on the basis of any special tests let alone dissection to see what 
> they are composed of. You say this test is invalid, but you presumably use 
> it all the time. 
>

It is an assumption that anyone uses any 'tests' at all to determine that 
someone else is alive. This presumes a default state of uncertainty where 
none necessarily exists. Just as Libet's experiments can show how our naive 
experience of our own will may not match reality, the assumption that we 
have no idea what is like us and unlike us except through a logic tree 
based on observed behaviors is not necessarily valid. It's a toy model of 
sentience and perception.
 

> You also claim to know that a computer is not conscious regardless of its 
> behaviour, but you need a test for consciousness and you have admitted you 
> don't have one. 
>

Because one is not necessary. Nothing could be more obvious about machines 
than the fact that they are utterly devoid of sentience.
 

> The best test you can propose is an intuition, but you admit that only one 
> in ten million might have this intuition; and it would not be possible to 
> know if this one in ten million were right, nor if the many others who 
> falsely claimed to have the intuition were wrong.
>

It wouldn't matter if it were one in 100 trillion. As long as something can 
correctly tell the difference between a living person and a computer 
program, we cannot believe that a computer program is alive in any way. In 
the worst-case scenario, nobody could tell, and even the person themselves 
could be talked out of their own humanity. That still doesn't make them 
right. If I think that I know that I am a machine, it doesn't mean that I'm 
right. If every test I submit myself to agrees with my belief that I am a 
machine, it still does not mean I'm right. That's because private awareness 
is not the same thing as what can be measured publicly - presence is not 
representation.
 

>
>
> The way you talk implies that at least in principle there is a definitive 
> test for consciousness, but there is no such test.
>

Tests are not relevant. They can help clarify or distort, but sensitivity 
to sentient peers is ultimately grounded in the concrete authenticity of 
aesthetics, not abstract formalism.

Thanks,
Craig 
