On Tuesday, February 26, 2013 7:15:53 AM UTC-5, Bruno Marchal wrote:
>
>
> On 25 Feb 2013, at 21:31, Craig Weinberg wrote:
>
>
>
> On Monday, February 25, 2013 12:53:47 PM UTC-5, Bruno Marchal wrote:
>>
>>
>> On 25 Feb 2013, at 01:41, Craig Weinberg wrote:
>>
>> You'll forgive me if I don't jump at the chance to shell out $51.96 for 
>> 300+ pages of the same warmed-over cog-sci behaviorism-cum-functionalism 
>> that I have been hearing from everyone.
>>
>>
>> By making the level of digital substitution explicit, functionalism is 
>> made less trivial than in many accounts that you can find. And comp is a 
>> stronger hypothesis than behavioral mechanism (which is agnostic on 
>> zombies).
>>
>
> To me it's like arguing Episcopalian vs Presbyterian. Sure, there are 
> differences, but the problem I have is that they all approach consciousness 
> from the outside in while failing to recognize that the idea of there being 
> an exterior to consciousness is only something which we have come to expect 
> through consciousness itself.
>
>
> Computationalism explains consciousness from the inside. It explains 
> physics from inside as well, despite being refutable from the outside.
>
>
It explains numerical relations which might remind us of consciousness, but 
where does computation actually become conscious? As I have pointed out, 
computation never even becomes geometry, let alone colors, flavors, 
feelings, etc. Comp has no need for consciousness or physics - you can see 
this in basic programming, where things like collision detection between 
avatars have to be intentionally defined. There is no physics inherent in a 
graphic avatar, and it has no preference for developing physics.
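
A minimal sketch of that point in Python (all names invented for 
illustration): two avatars sit at overlapping coordinates indefinitely, 
and "collision" exists only because a programmer chose to define and 
test for it.

    # Hypothetical sketch: avatars are just coordinates. Nothing
    # collides unless collision is explicitly defined and checked.

    class Avatar:
        def __init__(self, x, y, size=1.0):
            self.x, self.y, self.size = x, y, size

    def collides(a, b):
        # "Collision" is a convention we impose on the numbers, not a
        # property they have: here, centers closer than the summed sizes.
        return (abs(a.x - b.x) < a.size + b.size
                and abs(a.y - b.y) < a.size + b.size)

    a, b = Avatar(0, 0), Avatar(0.5, 0.5)
    # Until collides() is called, both avatars coexist in the same
    # place forever -- no physics intervenes on its own.
    print(collides(a, b))  # True only because we defined it so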
 

>
>>
>> The preview includes a couple of pages that tell me all that I need to 
>> know: (p.22) 
>>
>> 'Building in self-representation and value, with the goal of constructing 
>> a system that could have feelings, will result in a robot that also has 
>> the capacity for emotions and complex social behavior.'
>>
>>
>> I can agree with you, in the sense that I don't believe we can emulate 
>> emotion for sure. Well, we should see the algorithm to decide. If emotion 
>> comes from the use of diverse explorations made from the data, they might 
>> be correct, but loose in the way they present what they have done.
>>
>
> What prevents you from believing that we might not be able to emulate 
> emotion? 
>
>
> The study of biology and molecular biology, and then of computer science. 
> If you say something negative, like we can't do this or that, it is up to 
> you to explain why, and to do so without referring to your sense or 
> opinion.
>

There is nothing that doesn't refer to sense. To imagine that there is 
means projecting your own sense onto an omniscient scale where it doesn't 
belong.
 

>
> I have no sentimental reason for believing that, but am guided more by the 
> observation that interaction with machines leaves me and most others with 
> the distinct impression of dealing with an impersonal presentation. 
>
>
> But the distinction between Bp and Bp & p refutes that kind of argument. 
> Machines, too, already find it impossible to be identified with anything 
> having a 3p description. But we have a complete explanation of why 
> machines are deluded on this, and so of why they have consciousness and 
> delusion.
>
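
For reference, a minimal gloss of that notation (my sketch in 
provability-logic terms, not a quote from Bruno), where B is Gödel's 
"beweisbar", the provability predicate of a machine such as PA:

    % Provability vs. knowledge, after Gödel and Löb:
    %   Bp : "p is provable by the machine" (arithmetical, hence 3p).
    % Provable does not imply true inside the machine:
    %   \nvdash_{PA} \; Bp \to p \quad \text{for arbitrary } p
    % The Theaetetus move defines the knower as true belief:
    %   Kp := Bp \land p
    % Unlike B, the knower K admits no arithmetical (3p) definition,
    % which is the sense in which the machine cannot identify itself
    % with any 3p description.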

What does that have to do with personal and impersonal qualities though? 
Why do I find other people to be clearly persons, while I find machines to 
be clearly non-persons?
 

>
> It would seem that if emotions were harder to produce than logic, 
>
>
> They are simpler to produce. They are harder to justify. That's different.
>
> Emotions are indeed very simple to produce, with very high-level goals, 
> once automated, like "help yourself", or "do anything you can to 
> survive", etc.
>

What do these goals have to do with the experience of emotion? Why would 
"do anything you can to survive" be anything other than a methodical 
process of analyzing options by priority? There is obviously no emotion 
there, any more than there are singers of songs inside a microchip to keep 
the circuits' spirits up.
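
A minimal sketch of that reading, in Python (names invented for 
illustration): the "survival" goal reduces to a priority score and an 
argmax, with no feeling anywhere in the loop.

    # Hypothetical sketch: "do anything you can to survive" as nothing
    # but methodical, priority-ordered analysis of options.

    def survival_priority(option):
        # Score = expected gain in survival minus the risk incurred.
        return option["expected_gain"] - option["risk"]

    def choose_action(options):
        # The whole "drive to survive": rank the options, pick the max.
        return max(options, key=survival_priority)

    actions = [
        {"name": "flee",  "expected_gain": 10.0, "risk": 2.0},
        {"name": "hide",  "expected_gain":  6.0, "risk": 0.5},
        {"name": "fight", "expected_gain": 12.0, "risk": 9.0},
    ]

    print(choose_action(actions)["name"])  # -> flee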


>
>
>
> then the cortex would be our brain stem, and the limbic system would be 
> something that only higher primates have.
>  
>
>>
>> No, it won't. And a simulation of water won't make plants grow.
>>
>>
>> OK, but you might just mix levels, and so be trivially correct. A 
>> simulation of water, made at some level, can make a simulation of a growing 
>> plant, at some level of description. 
>>
>
> But it isn't necessary to simulate water to make a simulated plant appear 
> to grow. You can just simulate the growth directly, without cause, or with 
> whatever cause you choose, even if it is invisible and arithmetic.
>
> But let's talk about levels. What is it about the bottom level - the one 
> which keeps simulated water from watering 'real' plants - that makes it 
> different from the other levels, in which the same simulated water could 
> be used? Why is it that no simulated presence can interface with our 
> bodies unless it is passed through a physical mechanism, while no such 
> mechanism is required in virtual environments?
>
>
> Comp entails that there is no physical mechanism needed at all. Physical 
> mechanisms emerge from the many "virtual", that is, arithmetical, 
> relations. 
>

Why and how would that happen? You are saying that if you have enough 
virtual water, real water will somehow 'emerge'?
 

> That's the main interest of comp: it explains the why and how of physics, 
> without assuming any primary physical reality. And it explains why it 
> looks like there are quanta and qualia.
>

It only explains them retrospectively. If you already have qualia and 
quanta and physics, then you can see how they are computable, but there is 
no suggestion that computation can ever appear independently of qualia.
 

>
>> And that plant can be smelt by a person supported by a simulation, at 
>> some correct level. 
>> If that is not possible, it means that consciousness requires some 
>> infinite machinery, whose infinities are not recoverable by first person 
>> indeterminacy (and thus require something different from a quantum 
>> computer, for example). That would make you and Penrose correct. But we 
>> still wait to hear which process you have in mind, as such infinite 
>> machinery, not quantum emulable, remains speculation. Penrose does 
>> speculate on a collapse of the wave related to a quantum theory of 
>> gravitation. Well, you need such speculation if you want to make comp 
>> false. 
>>
>
> There doesn't need to be infinite machinery if we assume a sense totality 
> from the start. 
>
>
> Then either you should accept a finite machinery, and this entails comp, 
> or you assume the existence of finite things which are not Turing 
> emulable, and this assumes non-comp and begs the question.
>

I assume that finity and infinity are qualities within sense.
 

>
>
>
> Machines are only necessary to manipulate isolated forms in space and 
> transitive functions through time, but if sense pre-figures spacetime, 
> then the machine becomes a second-order construction. 
>
>
> But that again is a consequence of comp. Physical machines are a 
> second-order reality. They belong to the numbers' dreams, like time and 
> space. We do agree more than it might seem, but you derive from this the 
> contrary of the assumption I start from. 
>

I don't see any way to get from comp to experienced senses, but it's 
obvious that sense could generate computation by nesting through 
self-detachment. We can see examples of senseless signs which we use to 
make sense, but I don't see any compelling examples of any sign which has 
ever generated sense in and of itself.
 

>
>
>
> Machines are orthogonal to the absolute orientation of nature 
> (experiences through time) in that they are derived from functional 
> clichés which cut across nature horizontally. A machine simulates a 
> snapshot of the tree at a certain moment in time, but it misses the 
> longitudinal flow from acorn to forest. 
>
>
> No, it does not. The machine, relative to a computation, simulates the 
> snapshot indeed, but the 1p of the machine is related to the infinities 
> of computations below its substitution level, and that gives the 
> 'longitudinal flow from acorn to forest'.
>

If you have that flow, though, then you don't need the snapshot, and if 
you have infinite snapshots, then you don't need the flow. Having both 
only makes sense if what you are actually after is the variety and 
richness of the sensory presentation.
 

>
>
>
> It's that superficiality that will always make comp fall apart. 
>
>
> That superficiality is in the mind of those who either miss Gödel, or more 
> simply, the first person indeterminacy.
>

Why does Gödel or indeterminacy suggest sense?


>
>
> Contrary to our expectations, the more facades that are constructed, the 
> more the rootless qualities are subtly exposed, and the more the AI 
> becomes inconsistent - flashing apparent brilliance one moment, obvious 
> cluelessness the next. The lack of personhood becomes more uncanny and 
> difficult to put your finger on.
>
>
> That's a defect or a misinterpretation, by you, of your own "theory". A 
> part of AI is naive, sure, but that part still extends conventional 
> programming. The other parts need only time, or the copying of nature. 
>

That doesn't hold true for other copies of nature though. Copies of water 
aren't water. Jokes can't pretend to be funny. Copying is an illusion. Only 
signs can be copies, because they are figments of our expectation to begin 
with.

> And nobody can predict what it will give. Personally I think that the big 
> singularity is in the discovery of the universal machine. If we don't get 
> that, it will not help tomorrow's machine to believe that humans can think.
>

Why would the knowledge that humans can think make it any less likely that 
they would be recognized as the obvious threat that they will always be to 
the UM?

Craig
 

>
> Bruno
>
> Craig
>
>> Bruno
>>
>> Craig
>>
>> On Sunday, February 24, 2013 1:17:53 AM UTC-5, Brent wrote:
>>>
>>>  Here's a book Craig should read 
>>>
>>> Jean-Marc Fellous and Michael A. Arbib (2005). Who Needs Emotions? The 
>>> Brain Meets the 
>>> Robot<http://books.google.com/books?id=TvDi5V03b4IC&printsec=frontcover&client=safari&sig=ACfU3U1mSbp_Rp0SNLeLbdi3RHMox3Iagw>
>>>
>>> Here's the table of contents. [table-of-contents image not preserved]
>>>
>>> Or at least he should write to the authors and tell them they are 
>>> wasting their time and explain to them why robots cannot have emotions.  
>>> They are apparently unaware of his definitive wisdom on the question.
>>>
>>> Brent
>>>  