On Wednesday, October 2, 2013 2:59:17 PM UTC-4, Brent wrote:
>
>  On 10/1/2013 11:49 PM, Pierz wrote:
>  
>
>
> On Wednesday, October 2, 2013 3:15:01 PM UTC+10, Brent wrote: 
>>
>> On 10/1/2013 9:56 PM, Pierz wrote: 
>> > Yes, I understand that to be Chalmers's main point. Although, if the 
>> qualia can be 
>> > different, it does present issues - how much and in what way can they 
>> vary? 
>>
>> Yes, that's a question that interests me because I want to be able to 
>> build intelligent 
>> machines and so I need to know what qualia they will have, if any.  I 
>> think it will depend 
>> on their sensors and on their values/goals.  If I build a very 
>> intelligent Mars Rover, 
>> capable of learning and reasoning, with a goal of discovering whether 
>> there was once life 
>> on Mars; then I expect it will experience pleasure in finding evidence 
>> regarding this.   
>> But no matter how smart I make it, it won't experience lust. 
>>
>>  "Reasoning" being what exactly? The ability to circumnavigate an 
> obstacle for instance? There are no "rewards" in an algorithm. There are 
> just paths which do or don't get followed depending on inputs. Sure, the 
> argument that there must be qualia in a sufficiently sophisticated computer 
> seems compelling. But the argument that there can't be seems equally so. As 
> a programmer I have zero expectation that the computer I am programming 
> will feel pleasure or suffering. It's just as happy to throw an exception 
> as it is to complete its assigned task. *I* am the one who experiences pain 
> when it hits an error! I just can't conceive of the magical point at which 
> the computer goes from total indifference to giving a damn. That's the 
> point Craig keeps pushing and which I agree with. Something is missing from 
> our understanding.
>  
>
> What's missing is that you're considering a computer, not a robot.  A 
> robot has to have values and goals in order to act and react in the world.  
>

Not necessarily. I think that a robot could be programmed to simulate goals 
or to avoid goals entirely, both with equal chance of success. A robot 
could be programmed to imitate the behaviors of others in their 
environment. Even in the case where a robot would naturally accumulate 
goal-like circuits, there is no reason to presume that there is any binding 
of those circuits into an overall goal. What we think of as a robot could 
just as easily be thousands of unrelated sub-units, just as a person with 
multiple personalities could navigate a single life if each personality 
handed off information to the next.
 

> It has complex systems and subsystems that may have conflicting subgoals, 
> and in order to learn from experience it keeps a narrative history about 
> what it considers significant events. 
>

We don't know that there is any such thing as 'it' though. To me it seems 
more likely that assuming such a unified and intentional presence is 
1) succumbing to the pathetic fallacy, and 2) begging the question. It is 
to say, "We know that robots must be alive because otherwise they would not 
be as happy as we know they are."
 

> At that level it may have the consciousness of a mouse.  If it's a social 
> robot, one that needs to cooperate and compete in a society of other 
> persons, then it will need a self-image and model of other people.  In that 
> case it's quite reasonable to suppose it also has qualia.
>

It's no more reasonable than supposing that a baseball diamond is rooting 
for the home team. Machines need not have any kind of model or self-image 
which is experienced in any way. It doesn't necessarily appear like 'Field 
of Dreams'. What is needed is simply a complex tree of unconscious logical 
relations. There is no image or model, only records which are compressed to 
the point of arithmetic generalization - this is the opposite of any kind 
of aesthetic presence (qualia). If that were not the case, we wouldn't need 
sense organs; we would simply collect data in its native form, compress it 
quantitatively, and execute reactions against it with Bayesian regressions. 
No qualia required.
 

>
>   
>> > I'm curious what the literature has to say about that. And if 
>> functionalism means 
>> > reproducing more than the mere functional output of a system, if it 
>> potentially means 
>> > replication down to the elementary particles and possibly their quantum 
>> entanglements, 
>> > then duplication becomes impossible, not merely technically but in 
>> principle. That seems 
>> > against the whole point of functionalism - as the idea of "function" is 
>> reduced to 
>> > something almost meaningless. 
>>
>> I think functionalism must be confined to the classical functions, 
>> discounting the quantum 
>> level effects.  But it must include some behavior that is almost entirely 
>> internal - e.g. 
>> planning, imagining.  Excluding quantum entanglements isn't arbitrary; 
>> there cannot have 
>> been any evolution of goals and values based on quantum entanglement 
>> (beyond the 
>> statistical effects that produce decoherence and quasi-classical 
>> behavior). 
>>
>>  But what do "planning" and "imagining" mean except their functional 
> outputs? It shouldn't matter to you how the planning occurs - it's an 
> "implementation detail" in development speak. 
>  
>
> You can ask a person about plans and imaginings, and speech in response is 
> an action.
>
>  Your argument may be valid regarding quantum entanglement, but it is 
> still an argument based on what "seems to make sense" rather than on 
> genuine understanding of the relationship between functions and their 
> putative qualia.
>  
>
> But I suspect that there is no understanding that would satisfy Craig as 
> "genuine".  Do we have a "genuine" understanding of electrodynamics?  of 
> computation?  What we have is the ability to manipulate them for our 
> purposes.  So when we can make an intelligent robot that interacts with 
> people AS IF it experiences qualia and we can manipulate and anticipate 
> that behavior, then we'll have just as genuine an understanding of qualia 
> as we do of electrodynamics.
>

That's a deeper question, because it assumes that there is no such thing as 
the genuine, which would make sense if you assume arithmetic supremacy. If 
you turn that assumption upside down, and instead begin from the position 
that the definition of genuine cannot be doubted to begin with, and that 
qualia are already understood far better than we could ever understand 
anything in physics, then the proposition of understanding qualia as well 
as we understand electrodynamics becomes absurd, since our understanding of 
electrodynamics is itself robotic. To get at the nature of qualia and 
quanta requires stripping down the whole of nature to Absolute fundamentals 
- beyond language and beyond measurement. We must question sense itself, 
and we must rehabilitate our worldview with ourselves inside of it, not 
just the cells of our brain or the mechanics of our evolved behavior in the 
world. The toy model of consciousness that modal logic and computation 
deliver is a good start in the wrong direction. To progress beyond that, I 
think, requires the greatest 180 since Galileo.

Craig



> Brent
>
>  

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.