On Tuesday, March 27, 2018 at 11:34:41 AM UTC-4, agrays...@gmail.com wrote:
>
> On Monday, March 26, 2018 at 5:25:59 PM UTC-4, agrays...@gmail.com wrote:
>>
>> On Monday, March 26, 2018 at 4:20:02 PM UTC-4, Brent wrote:
>>>
>>> On 3/26/2018 10:17 AM, John Clark wrote:
>>>
>>> Brent Meeker wrote:
>>>
>>>> *> It seems to me there's something fishy about making behavior and 
>>>> conscious thought functionally equivalent so neither can change without a 
>>>> corresponding change in the other.  My intuition is that there is a lot of 
>>>> my thinking that doesn't show up as observable behavior.  No doubt it's 
>>>> observable at the micro-level in my brain; but not at the external level.*
>>>
>>> The behavior of your neurons at the micro-level is what I’m talking 
>>> about. A change in the brain corresponds with a change in consciousness 
>>> and a change in consciousness corresponds with a change in the brain. So 
>>> mind is what the brain does. So unless there is some mystical reason that 
>>> carbon is conscious but silicon is not, an intelligent computer is also 
>>> conscious.
>>
>>>
>>> I don't doubt that.  But does equal intelligence imply equivalent 
>>> consciousness?
>>>
>>
>>
>> *IMO, the way you pose the question confuses the issue. You could have 
>> two Rovers which do different tasks, and conclude they have different 
>> intelligences based on some well-defined criterion. But how could you 
>> ascertain whether either is conscious?  AFAICT, there is no understanding 
>> of what "conscious" means. I suppose one can say it involves the perception 
>> of sensation, pain, pleasure, etc. If you tore off a Rover's arm, it might 
>> be programmed to complain or otherwise register the adverse modification of 
>> its body. But if it did, wouldn't it just be simulating or mimicking a 
>> human response without being "conscious"? What the hell are we talking 
>> about? TIA, AG*
>>
>
> *You could program both Rovers to do arithmetic, but only one to do 
> calculus. So you could say one is more intelligent than the other. Or you 
> could program both to see in visible wavelengths, but only one to see in 
> IR. So you could say one has better vision than the other. But what you 
> can never do, IMO, is determine whether either Rover, in any circumstance, 
> has self-knowledge or self-perception, or can experience rudimentary or 
> complex sensations. So I don't think we're any closer to an explanation or 
> understanding of consciousness than when we started, however long ago that 
> was. AG*
>

*If we had a clue how self-reference could result from a neural network 
such as the human brain, we could, perhaps, duplicate it in a Rover or 
whatever. But I see no evidence that we have the insight needed to do the 
modeling. CMIIAW. AG *
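
To make the capability-vs-consciousness point concrete, here is a toy 
sketch (in Python, with purely hypothetical names; a sketch of the thought 
experiment above, not an actual Rover design). A behavioral test can settle 
which Rover has the extra "calculus" capability, but nothing analogous 
tests for consciousness:

# Toy sketch, hypothetical names: capability differences are programmable
# and behaviorally testable; consciousness has no such test.

class RoverA:
    def arithmetic(self, a, b):
        return a + b

class RoverB(RoverA):
    def derivative(self, f, x, h=1e-6):
        # A "calculus" capability RoverA lacks: central-difference
        # numeric differentiation.
        return (f(x + h) - f(x - h)) / (2 * h)

a, b = RoverA(), RoverB()
print(hasattr(a, "derivative"), hasattr(b, "derivative"))  # False True
print(b.derivative(lambda x: x * x, 3.0))                  # ~6.0
# No inspection of either object answers "is it conscious?"; that is the
# asymmetry being pointed at.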

>
>>
>>> In other words, could I design two Mars Rovers that behaved very 
>>> similarly (as similar as two different humans) and yet, because of the 
>>> way I implemented their memory or computers, their consciousness was very 
>>> different?  Of course this is related to the question of how I know that 
>>> other people have consciousness like mine; except in that case one relies 
>>> in part on knowing that other people are constructed similarly.
>>>
>>> Brent
>>>
>>
