On Monday, September 9, 2013 11:39:31 PM UTC-4, stathisp wrote:
>
> (Resending complete email - trying to do this on a phone.)
>
> On Tuesday, September 10, 2013, Stathis Papaioannou wrote:
>
>>
>>
>> On Thursday, September 5, 2013, Craig Weinberg wrote:
>>
>>>
>>> My position suggests that the more mechanistic the conditions of the 
>>> test, the more the test is stacked against being able to tell the 
>>> difference. If you want to fool someone into thinking an AI is alive, 
>>> select a small group of people who lean toward Asperger's traits and show 
>>> them short, unrelated examples in a highly controlled context. 
>>>
>>
>> You accept, of course, that people with Asperger's have feelings even 
>> though they don't express them the way everyone else does?
>>
>
Certainly. I was using the idea of selecting for Asperger's traits as a way 
of stacking the deck toward a result that de-emphasizes emotional 
discernment of others' behavior.

>  
>>
>>> If you really want to bring out the differences between the two, use a 
>>> diverse audience and have them interact freely for a long time in many 
>>> different contexts, often without oversight. What you are looking for is 
>>> aesthetic cues that may not even be nameable: intuitions that something 
>>> about the AI is off or untrustworthy, continuity gaps, non-fluidity, etc. 
>>> It's sort of like taking a video screen out into the sunlight. You get a 
>>> better view of what it isn't when you can see more of what it is.
>>>
>>
> It sounds like you're proposing a variant of the Turing Test. What would 
> you say if the diverse audience decided the AI probably had feelings, or 
> probably had feelings different from most people's, as in the Asperger's 
> case?
>

By contrasting the two tests, I'm showing the opposite of what is typically 
intended by the Turing Test: I am proposing a way to measure the extent to 
which any given Turing-type test reflects the bias of the interpreter 
rather than any intrinsic quality of the test's target.

It's hard to say for sure that a positive outcome of the test has any 
meaning; it's mainly there to prove a negative. Maybe only one person out 
of ten million can pick up on the subtle cues that give away the 
simulation, and maybe they are too shy to speak up in public. Maybe only 
dogs can tell it's not a person. My hunch, though, is that this is 
academic. I expect that simulations will always be fairly easy to see 
through given enough time and diversity of audience and interaction. If at 
some point that is no longer the case, the ability to tell the difference 
will probably be available as an app for our own augmented human systems.

Craig

 
