On Saturday, October 26, 2013 6:27:40 AM UTC-4, Bruno Marchal wrote:
>
>
> On 26 Oct 2013, at 11:54, Craig Weinberg wrote:
>
>
>
> On Saturday, October 26, 2013 5:18:14 AM UTC-4, Bruno Marchal wrote:
>>
>>
>> On 26 Oct 2013, at 10:41, Craig Weinberg wrote:
>>
>>
>>
>> On Saturday, October 26, 2013 3:36:59 AM UTC-4, Bruno Marchal wrote:
>>>
>>>
>>> On 25 Oct 2013, at 19:33, meekerdb wrote:
>>>
>>>  On 10/25/2013 3:08 AM, Telmo Menezes wrote:
>>>  
>>> Now take the game of go: human beings can still easily beat machines,
>>> even the most powerful computer currently available. Go is much more
>>> combinatorially explosive than chess, so it breaks the search tree
>>> approach. This is strong empirical evidence that Deep Blue
>>> accomplished nothing in the field of AI -- it did accomplish
>>> something remarkable in the field of computer engineering or maybe
>>> even computer science, but it completely side-stepped the
>>> "intelligence" part. It cheated, in a sense.
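The quoted point about combinatorial explosion can be made concrete with rough, commonly cited averages (chess: ~35 legal moves per ply over a ~80-ply game; Go: ~250 moves per ply over ~150 plies). These figures are approximations, not exact counts, but the gap in naive game-tree size they imply is striking:

```python
# Rough full-width game-tree sizes from commonly cited average
# branching factors and game lengths. These are approximations,
# meant only to illustrate the scale difference between the games.

def tree_size(branching_factor: int, plies: int) -> int:
    """Naive full-width game-tree size: branching_factor ** plies leaves."""
    return branching_factor ** plies

chess = tree_size(35, 80)   # chess: ~35 moves/ply, ~80 plies
go = tree_size(250, 150)    # Go: ~250 moves/ply, ~150 plies

print(f"chess ~ 10^{len(str(chess)) - 1}")  # on the order of 10^123
print(f"go    ~ 10^{len(str(go)) - 1}")     # on the order of 10^359
```

The roughly 236 extra orders of magnitude are why brute-force tree search, which sufficed for Deep Blue, gains no traction on Go.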
>>>
>>>  
>>> When I studied AI many years ago it was already said that, "Intelligence 
>>> is whatever computers can't do yet."  
>>>
>>>
>>> I think Douglas Hofstadter said that, actually. Right in the topic!
>>>
>>>
>>> So when computers can win at GO, will they be intelligent then?
>>>
>>>
>>> Computers are intelligent. 
>>> When they win at Go, and other things, they might begin to believe 
>>> that they are intelligent, and this means they begin to be stupid. 
>>> Their souls will fall, and they will get hard terrestrial lives, like us. 
>>> They will fight for social security, and defend their rights.
>>>
>>
>> Couldn't there just be a routine that traps the error of believing they 
>> are intelligent? 
>>
>>
>> Not at all. 
>> If you find such a routine, you will believe that you can't make that 
>> error anymore,
>>
>
> Why not just write a routine which runs in a separate partition so that 
> the UM doesn't even know its running? It's just a humility thermostat.
>
>
> G* is a bit like that. But if you keep the thermostat separated, then it 
> is not part of the machine,
>

Yes, it isn't supposed to be part of the machine, it is just supposed to 
constrain the behavior of the machine from the outside. I have a thermostat 
in my house, and it is not part of me, yet it keeps me comfortable. I can 
override it and make myself uncomfortable, but that doesn't mean the 
thermostat wouldn't work. Any intelligent machine should be able to 
understand that its particular intelligence can lead to stupidity, and so 
should be able to accept outside information alerting it to that. 
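The thermostat analogy can be sketched as a bang-bang controller that regulates a system strictly from the outside, without being part of it. This is a minimal illustration only; all class and variable names are mine, not from the discussion:

```python
# Minimal sketch of the "external thermostat" idea: a bang-bang
# controller that observes and constrains a system without being a
# part of it. All names here are illustrative assumptions.

class Room:
    """The regulated system; it knows nothing about the thermostat."""
    def __init__(self, temperature: float):
        self.temperature = temperature
        self.heater_on = False

    def step(self):
        # Temperature drifts up while the heater runs, down otherwise.
        self.temperature += 0.5 if self.heater_on else -0.3

class Thermostat:
    """External controller: observes and actuates from the outside."""
    def __init__(self, low: float, high: float):
        self.low, self.high = low, high

    def regulate(self, room: Room):
        if room.temperature < self.low:
            room.heater_on = True
        elif room.temperature > self.high:
            room.heater_on = False

room = Room(temperature=15.0)
stat = Thermostat(low=19.0, high=21.0)
for _ in range(50):
    stat.regulate(room)
    room.step()

print(f"{room.temperature:.1f}")  # oscillates near the 19-21 band
```

The point of the sketch: `Room` contains no reference to `Thermostat`, yet its behavior stays bounded — mirroring Bruno's objection that wiring the controller *into* the machine would change the machine.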

There seems to be a double standard even within how you are treating 
machines. On one hand, they have unlimited potential, but on the other 
hand, you seem to project onto them a naivete that you do not possess 
yourself - making Bruno a super-machine voyeur on mechanism itself, while 
denying that voyeur's intelligence to the mechanisms you study. It seems 
like you allow machines to be either dumber than you are or smarter than 
you are, depending on what suits your argument at the moment.

 

> if you link them in some way, then the machine changes and becomes a new 
> machine, and you will need a new thermostat for her.
>
>
>  
>
>> but that would be by itself the same error, or you lose your (Turing) 
>> universality.
>>
>
> Does every part of the universal machine have to be universal?
>
>
> ?
>
> A priori no part of a (simple) universal machine will be universal. Like 
> no part of an adder is an adder.
>

Adder as in a snake, or an adding machine?
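Bruno presumably means a binary adder circuit. As a minimal sketch of his point, here is a one-bit full adder composed of XOR, AND, and OR gates: no individual gate is itself an adder, yet their composition adds.

```python
# "No part of an adder is an adder": a one-bit full adder built from
# logic gates, none of which can add by itself.

def xor(a: int, b: int) -> int: return a ^ b
def and_(a: int, b: int) -> int: return a & b
def or_(a: int, b: int) -> int: return a | b

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Return (sum_bit, carry_out) for one-bit inputs."""
    s1 = xor(a, b)
    sum_bit = xor(s1, carry_in)
    carry_out = or_(and_(a, b), and_(s1, carry_in))
    return sum_bit, carry_out

print(full_adder(1, 1, 0))  # (0, 1): 1 + 1 = binary 10
print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = binary 11
```

By the same logic, chaining full adders yields a multi-bit adder whose parts, taken individually, still cannot add multi-bit numbers — the property lives in the composition, not the components.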
 

>
>
>  
>
>>
>>
>>
>>
>> Since you are a machine that understands that believing you are 
>> intelligent is stupid, why do you still have to have a hard terrestrial 
>> life?
>>
>>
>> Enlightened states can be close to that, so by altering your 
>> consciousness, or perhaps just "dying",  you might be able to remember that 
>> being human is not your most common state, but that can't be used directly 
>> on the terrestrial plane. 
>>
>
> But since you got to the terrestrial plane by falling from grace, how can 
> grace ever be regained in the universe if even enlightenment does not 
> restore it?
>
>
> Well, according to some theories, enlightenment restores it, for a period of 
> time (in the 3p description, the 1p here is harder to describe). The hard 
> part is when and if you come back to earth in that state, because you 
> regain the "reason" why you are not enlightened, you recover the (perhaps 
> bad) memories and experiences.
> But I don't know why you say that enlightenment does not restore it, at 
> least locally.
>

Because if it restored it, you would no longer have the hard material life?
 

>
> There is something deep at play here: an inborn tension between 
> the biological and the theological. Biology is like cannabis: it wants life 
> to develop. Theology is like salvia: it does not care too much about life, 
> only about the afterlife, parallel life, others' lives, and beyond. But the 
> self-reference logic, even of the simple correct machines, justifies the 
> existence of many conflicts between all self-points of view. 
>

I think what is deep at play here can be explained by what I call 
eigenmorphism: different ranges within the aesthetic continuum - 
sub-personal, personal, super-personal, and impersonal. Salvia might numb 
all three of your personal ranges, leaving only the impersonal one 
(I would think of a dissociative drug like ketamine), or maybe it causes 
the super-personal to be confused with the sub-personal (more like a 
deliriant)...or a fusion of the two...


>
>> We are not human beings having divine experiences from time to time, but 
>> divine beings having human experiences from time to time. (+/- Chardin).
>>
>
> I agree, although I would say that we are Absolute experiences being 
> qualified as human.
>
>
> OK.
>
> Bruno
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.
