On Thursday, October 25, 2012 2:01:44 AM UTC-4, Brent wrote:
>
>  On 10/24/2012 10:48 PM, Craig Weinberg wrote: 
>
>
>
> On Thursday, October 25, 2012 1:29:24 AM UTC-4, Brent wrote: 
>>
>>  On 10/24/2012 10:19 PM, Craig Weinberg wrote: 
>>
>>
>>
>> On Thursday, October 25, 2012 1:10:24 AM UTC-4, Brent wrote: 
>>>
>>>  On 10/24/2012 9:23 PM, Craig Weinberg wrote: 
>>>
>>>> Or what if we don't care?  We don't care about slaughtering cattle,
>>>> which are pretty smart 
>>>> as computers go.  We manage not to think about starving children in 
>>>> Africa, and they *are* 
>>>> humans.  And we ignore the looming disasters of oil depletion, water 
>>>> pollution, and global 
>>>> warming which will beset humans who are our children. 
>>>>
>>>
>>> Sure, yeah, I wouldn't expect mainstream society to care, except maybe 
>>> some people. I am mainly focused on what seems like an astronomically 
>>> unlikely prospect: that we will someday find it possible to make a person 
>>> out of a program, but won't be able to just make the program itself with 
>>> no person attached. 
>>>
>>>
>>> Right. John McCarthy (inventor of LISP) worried and wrote about that 
>>> problem decades ago.  He cautioned that we should not make robots 
>>> conscious with emotions like humans, because then it would be unethical 
>>> to use them like robots.
>>>  
>>
>> It's arbitrary to think only of robots, though. It can be anything that 
>> represents computation to something: an abacus, a card game, anything. 
>> Otherwise it's prejudice based on form. 
>>  
>>>  
>>>  Especially given that we have never made a computer program that can 
>>> do anything whatsoever other than reconfigure whatever materials are able 
>>> to execute the program, I find it implausible that there will be a magical 
>>> line of code which cannot be executed without an experience happening to 
>>> someone. 
>>>
>>>
>>> So it's a non-problem for you.  You think that only man-born-of-woman or 
>>> wetware can be conscious and have qualia.  Or are you concerned that we are 
>>> inadvertently offending atoms all the time?
>>>  
>>
>> Everything has qualia, but only humans have human qualia. Animals have 
>> animal qualia, organisms have biological qualia, etc.
>>  
>>
>> So computers have computer qualia.
>>
>
> I would say that computer parts have silicon qualia. 
>
>
> Is it good or bad? Do they hurt when they lose an electron hole?
>

It's only speculation until we can connect our brains up to a chip. I 
suspect that good or bad, pain or pleasure, is more of an animal level of 
qualitative significance. I imagine it as more of a holding or releasing of 
a monotonous tension.

I don't think the computer parts cohere into a computer except in our minds.
 

Racist!

Not at all, it's just that I understand what it actually is. Is it racist 
to think that Bugs Bunny isn't really an objectively real entity?


>   Do their qualia depend on whether they are solid-state or vacuum-tube?  
> Germanium or silicon?  PNP or NPN?  Do they feel different when they run 
> LISP or C++?
>

Nah, my guess is it's all inorganic, low-level qualia: temperature, density, 
electronic tension and release.
 

They feel good when they beat you at chess.
>

If I change a line of code, then they will try to lose at chess. They feel 
nothing either way. There is no 'they' there.


>  Do you have Craig qualia? 
>  

 Sure. All the time.
 

Probably just low-energy, water-soluble chemistry.
>

I would agree if I could, but since I experience sensory reality firsthand, 
I know that is not the case. I also know, through my sensory reality, 
that there is a difference between being alive and being dead, between 
animals and minerals, between willful human beings and mechanical 
automatons. If any computer ever built gave me any reason to doubt this, 
then I would have to consider it, but unless and until that happens, I 
don't need to pretend that it is a possibility.


  
>   
>  
>>  
>>  No matter how hard we try, we can never make a mere drawing of these 
>> functions just to check our math without invoking the power of life and 
>> death. It's really silly. It's not even good sci-fi, it's just too lame.
>>  
>>
>> I think we can, because although I like Bruno's theory, I think the MGA 
>> is wrong, or at least incomplete.  I think the simulated intelligence 
>> needs a simulated environment, essentially another world, in which to 
>> *be* intelligent.  And that's where your chalkboard consciousness fails.  
>> It needs to be able to interact within a chalkboard world.  So it's not 
>> just a question of going to a low enough level, it's also a question of 
>> going to a high enough level.
>>  
>
> A chalkboard world just involves a larger chalkboard.
>  
>
> Right.  And it involves great chalkboard sex - but none we need worry 
> about.
>  

To me, there is no chalkboard world. It's all dusty and flat. Not much 
sexiness going on, except maybe for the beaten erasers.
 

To you maybe, but what about the chalk-people's qualia?
>

There aren't any chalk people, only particles of chalk and slate. They may 
not feel anything except every few thousand of our years when they are 
broken down.

Craig
