On Tuesday, October 15, 2013 3:45:38 AM UTC-4, Bruno Marchal wrote:
>
>
> On 14 Oct 2013, at 22:04, Craig Weinberg wrote:
>
>
>
> On Monday, October 14, 2013 3:17:06 PM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 14 Oct 2013, at 20:13, Craig Weinberg wrote:
>>
>>
>>
>> On Sunday, October 13, 2013 5:03:45 AM UTC-4, Bruno Marchal wrote:
>>>
>>>
>>>
>>> All objects are conscious?
>>>
>>
>> No objects are conscious.
>>
>>
>> We agree on this.
>>
>>
>>
>>>
>>> Not at all. It is here and now. I have already interviewed such machines. 
>>>
>>
>> Are there any such machines available to interview online?
>>
>>
>> I can give you the code in Lisp, and it is up to you to find a good free 
>> Lisp. But don't mind too much: AUDA is an integral description of the 
>> interview. Today, such interviews are done with paper and pencil, and 
>> appear in books and papers.
>> You would do better to buy Boolos 1979, or 1993, but you have to study 
>> more logic too.
>>
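Bruno's "interview" takes place inside the modal logic G of provability. For
reference, here is a minimal Common Lisp sketch of the Kripke-semantics side
of that logic (my illustration only, not AUDA: the frame, the names, and the
formulas are all invented for the example). It checks a modal formula at a
world of a finite transitive, irreflexive frame, the kind of frame that
characterizes G:

    ;; A toy Kripke frame: worlds 0..3; *ACCESSIBLE* maps each world to
    ;; the worlds it can "see".  The relation is transitive and
    ;; irreflexive, as required for the provability logic G.
    (defparameter *accessible*
      '((0 . (1 2 3)) (1 . (3)) (2 . (3)) (3 . ())))

    (defun successors (w)
      (cdr (assoc w *accessible*)))

    ;; Formulas: the symbols TRUE and FALSE, or lists (BOX p), (DIA p),
    ;; (NOT p), (IMP p q).
    (defun holds-p (formula world)
      "Does FORMULA hold at WORLD in the toy frame?"
      (cond ((eq formula 'true) t)
            ((eq formula 'false) nil)
            ((eq (first formula) 'box)
             (every (lambda (v) (holds-p (second formula) v))
                    (successors world)))
            ((eq (first formula) 'dia)
             (some (lambda (v) (holds-p (second formula) v))
                   (successors world)))
            ((eq (first formula) 'not)
             (not (holds-p (second formula) world)))
            ((eq (first formula) 'imp)
             (or (not (holds-p (second formula) world))
                 (holds-p (third formula) world)))))

With this in hand, candidate theorems can be tested mechanically on finite
models; for instance (holds-p '(imp (box false) (box true)) 0) evaluates to
T, and does so at every world of the frame.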
>
> Doesn't it seem odd that there isn't much out there that is newer than 20 
> years old, 
>
>
> That is simply wrong, and I don't see why you say that. But even if it 
> were true, that would prove nothing.
>

It still seems odd. There are a lot of good programmers out there. If this 
is the frontier of machine intelligence, where is the interest? I am not 
saying it proves anything, but it doesn't instill much confidence that this 
is as fertile an area as you imply.
 

>
>
> and that paper and pencil are the preferred instruments?
>
>
> Maybe I was premature in saying it was promissory...it would appear that 
> there has not been any promise for it in quite some time.
>  
>
>>
>>
>>>
>>> It is almost applicable, but the hard part is that it is blind to its 
>>> own blindness, so that the certainty offered by mathematics comes at a cost 
>>> which mathematics has no choice but to deny completely. Because mathematics 
>>> cannot lie, 
>>>
>>>
>>> G* proves <>[]f
>>>
>>> Even Peano Arithmetic can lie.  
>>> Mathematical theories (sets of beliefs) can lie.
>>>
>>> Only truth cannot lie, but nobody knows the truth as such.
>>>
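A gloss on that notation, to fix ideas (the unpacking is mine, not Bruno's):
[]p abbreviates "the machine proves p", <>p abbreviates ~[]~p, and f and t
are the constants false and true. Then:

    []f                        "I prove the false"        (inconsistency)
    <>t  =  ~[]f               "I do not prove the false" (consistency)
    <>[]f = ~[]~[]f = ~[]<>t   "I do not prove my own consistency"

By Gödel's second incompleteness theorem, a consistent machine never proves
its own consistency, so ~[]<>t is true of any such machine. G* is the logic
of what is true about the machine's provability, while G is the logic of
what the machine can itself prove about it; that is why G* proves <>[]f
while G does not.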
>>
>> Something that is a paradox or inconsistent is not the same thing as an 
>> intentional attempt to deceive. I'm not sure what 'G* proves <>[]f' means, 
>> but I think it will mean the same thing to anyone who understands it, and 
>> not something different to the boss than it does to the neighbor.
>>
>>
>> Actually it will have as many meanings as there are correct machines (a 
>> lot), but the laws remain the same. Then adding the non-monotonic 
>> umbrella, saving the Löbian machines from the constant mistakes and lies 
>> they make, provides different interpretations of []f, like
>>
>> I dream,
>> I die,
>> I get mad,
>> I am in a cul-de-sac,
>> I am wrong,
>>
>> etc.
>>
>> It will depend on the intensional nuances in play.
>>
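Incidentally, the "cul-de-sac" reading of []f can be seen directly in the
little Kripke sketch earlier in this thread: []f holds at exactly the worlds
with no successors, since the EVERY over an empty successor list is
vacuously true. Dead-end worlds are where "provable falsity" comes out true:

    ;; World 3 has no successors: a cul-de-sac. []f holds there vacuously.
    (holds-p '(box false) 3)   ; => T
    ;; World 0 still reaches other worlds, so []f fails there.
    (holds-p '(box false) 0)   ; => NIL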
>
> Couldn't the machine output the same product as musical notes or colored 
> pixels instead?
>
>
> Why not. Humans can do that too.
>

If I asked a person to turn some data into music or art, no two people 
would agree on what that output should be, and no one person's output would 
be decipherable as input by another. Computers, on the other hand, can 
automatically reverse any kind of i/o in the same way. One computer could 
play a file as a song, and another could make a graphic file out of the 
audio line-out data, and that graphic would be fully reversible to the 
original binary file.
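A minimal Common Lisp sketch of that reversibility claim (my example: no
real audio or image format is involved, only the mechanical point that a
machine's re-encoding can be exactly invertible):

    ;; Read raw bytes as signed "samples", then recover the bytes exactly.
    (defun bytes->samples (bytes)
      (map 'list (lambda (b) (- b 128)) bytes))

    (defun samples->bytes (samples)
      (map 'vector (lambda (s) (+ s 128)) samples))

    ;; The round trip is lossless, bit for bit:
    (let ((original #(72 101 108 108 111)))
      (equalp (samples->bytes (bytes->samples original)) original))
    ;; => T

Any two machines running these two functions agree automatically; that is
the sense in which the computer's i/o is reversible where two people's
renderings are not.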


>
>>>
>>> it cannot intentionally tell the truth either, and no matter how 
>>> sophisticated and self-referential a logic it is based on, it can never 
>>> transcend its own alienation from feeling, physics, and authenticity. 
>>>
>>>
>>> That is correct, but again, that is justifiable by all correct 
>>> sufficiently rich machines.
>>>
>>
>> Not sure I understand. Are you saying that we, as rich machines, cannot 
>> intentionally lie or tell the truth either?
>>
>>
>> No, I am saying that all correct machines can eventually justify that if 
>> they are correct they can't express it, and that if they are consistent, 
>> it is consistent that they are wrong. So it means they can eventually 
>> exploit the false locally. Teams of universal numbers get entangled in 
>> very subtle prisoner dilemmas. 
>> Universal machines can lie, and can crash.
>>
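The "can crash" half is easy to exhibit with any universal interpreter, for
instance Lisp's own EVAL (a sketch, deliberately left commented out):

    ;; A universal interpreter must accept this program, and it never
    ;; returns; in practice the stack blows up -- the "crash".
    ;; (eval '(labels ((run () (run))) (run)))

No finite screening of inputs can rule such programs out without giving up
universality; that is one face of the halting problem.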
>
> That sounds like they can lie only when they calculate that they must, not 
> that they can lie intentionally because they enjoy it or out of sadism.
>
>
> That sounds like an opportunistic inference.
>

I think that computationalism maintains the illusion of legitimacy by 
seducing us into playing only by its rules. It says that we must give the 
undead a chance to be alive - that we cannot know for sure whether a 
machine is not at least as worthy of our love as a newborn baby. To fight 
this seduction, we must use what is our birthright as living beings. We can 
be opportunistic, we can cheat, and lie, and unplug machines whenever we 
want, because that is what makes us superior to recorded logic. We are 
alive, so we get to do whatever we want to that which is not alive.

Craig
 

>
> Bruno
>
>
>
> Craig
>  
>
>>
>> Bruno
>>
>>
>>
>> http://iridia.ulb.ac.be/~marchal/
>>
>>
>>
>>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>
