On Tuesday, September 10, 2013, Craig Weinberg wrote:

>
>
> On Monday, September 9, 2013 11:39:31 PM UTC-4, stathisp wrote:
>>
>> (Resending complete email - trying to do this on a phone.)
>>
>> On Tuesday, September 10, 2013, Stathis Papaioannou wrote:
>>
>>>
>>>
>>> On Thursday, September 5, 2013, Craig Weinberg wrote:
>>>
>>>>
>>>> My position would suggest that the more mechanistic the conditions of
>>>> the test, the more the test is stacked in favor of the difference being
>>>> undetectable. If you want to fool someone into thinking an AI is alive,
>>>> get a small group of people who lean toward Asperger's traits and show
>>>> them short, unrelated examples in a highly controlled context.
>>>>
>>>
>>> You accept, of course, that people with Asperger's have feelings even
>>> though they don't express them the way everyone else does?
>>>
>>
> Certainly. I was using the idea of selecting for Asperger's traits as a way
> of stacking the deck toward a result that de-emphasizes emotional
> discernment of others' behavior.
>
>
>>
>>>
>>>> If you want to really bring out the differences between the two, use a
>>>> diverse audience and have them interact freely for a long time in many
>>>> different contexts, often without oversight. What you are looking for is
>>>> aesthetic cues that may not even be nameable - intuitions that
>>>> something about the AI is off or untrustworthy, continuity gaps,
>>>> non-fluidity, etc. It's sort of like taking a video screen out into the
>>>> sunlight. You get a better view of what it isn't when you can see more of
>>>> what it is.
>>>>
>>>
>> It sounds like you're proposing a variant of the Turing Test. What would
>> you say if the diverse audience decided the AI probably had feelings, or
>> probably had feelings different from most people's, as in the Asperger's
>> case?
>>
>
> Between the two tests, I'm aiming at the opposite of what is typically
> intended by the Turing Test. I am proposing a way to test the extent to
> which any given Turing-type test reflects the bias of the interpreter
> rather than any intrinsic quality of the target of the test.
>
> It's hard to say for sure that a positive outcome for the test has any
> meaning. It's mainly to prove a negative. Maybe only one person out of ten
> million can pick up on the subtle cues that give away the simulation, and
> maybe they are too shy to speak up in public. Maybe only dogs can tell it's
> not a person. My hunch, though, is that this is academic. I expect that
> simulations will always be pretty easy to figure out given enough time and
> diversity of audience and interaction. If at some point that is no
> longer the case, the ability to tell the difference will probably be
> available as an app for our own augmented human systems.
>
> Craig
>

You are assuming that the entities around you either are or aren't conscious,
but you have no way of telling. If you have no way of telling, then how do
you know that those around you are conscious, and how do you know that
computers aren't? By analogy with your own experience, you can say that those
like you are conscious, but you do this on the basis of their behaviour being
like yours, not on the basis of any special tests, let alone dissection to
see what they are composed of. You say this test is invalid, yet you
presumably use it all the time. You also claim to know that a computer is
not conscious regardless of its behaviour, but that requires a test for
consciousness, and you have admitted you don't have one. The best test you
can propose is an intuition, but you concede that only one in ten million
might have this intuition; and it would not be possible to know whether this
one in ten million were right, nor whether the many others claiming to have
the intuition were mistaken.


The way you talk implies that at least in principle there is a definitive
test for consciousness, but there is no such test.
