Jason Resch-2 wrote:
>
> On Aug 22, 2012, at 1:57 PM, benjayk <benjamin.jaku...@googlemail.com> wrote:
>
>>
>>
>> Jason Resch-2 wrote:
>>>
>>> On Wed, Aug 22, 2012 at 1:07 PM, benjayk
>>> <benjamin.jaku...@googlemail.com> wrote:
>>>
>>>>
>>>>
>>>> Jason Resch-2 wrote:
>>>>>
>>>>> On Wed, Aug 22, 2012 at 10:48 AM, benjayk
>>>>> <benjamin.jaku...@googlemail.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> Bruno Marchal wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> Imagine a computer without an output. Now, if we look at what the
>>>>>>>> computer is doing, we cannot infer what it is actually doing in
>>>>>>>> terms of high-level activity, because this is only defined at the
>>>>>>>> output/input. For example, no video exists in the computer - the
>>>>>>>> data of the video could just as well be other data. We would
>>>>>>>> indeed just find computation.
>>>>>>>> At the level of the chip, notions like definition, proving, and
>>>>>>>> inductive inference don't exist. And if we believe the
>>>>>>>> Church-Turing thesis, they can't exist in any computation (since
>>>>>>>> all computations are equivalent to a computation of a Turing
>>>>>>>> machine, which doesn't have those notions); they would be merely
>>>>>>>> labels that we use in our programming language.
>>>>>>>
>>>>>>> All computers are equivalent with respect to computability. This
>>>>>>> does not entail that all computers are equivalent with respect to
>>>>>>> provability. Indeed, the PA machines prove much more than the RA
>>>>>>> machines, and the ZF machines prove much more than the PA
>>>>>>> machines. But they do prove, in the operational meaning of the
>>>>>>> term: they actually give proofs of statements, just as you can
>>>>>>> say that a computer can play chess.
>>>>>>> Computability is closed under the diagonal procedure, but
>>>>>>> provability, games, definability, etc. are not.
>>>>>>>
>>>>>> OK, this makes sense.
>>>>>>
>>>>>> In any case, the problem still exists, though it may not be enough
>>>>>> to say that the answer to the statement is not computable. The
>>>>>> original form still holds (saying "solely using a computer").
>>>>>>
>>>>>>
>>>>> For this to work, as Gödel did, you need to perfectly define the
>>>>> elements in the sentence using a formal language like mathematics.
>>>>> English is too ambiguous. If you try to perfectly define what you
>>>>> mean by computer, in a formal way, you may find that you have
>>>>> trouble coming up with a definition that includes computers but
>>>>> doesn't also include human brains.
>>>>>
>>>>>
>>>> No, this can't work, since the whole point of the sentence is to
>>>> express something that cannot be precisely defined and to show that
>>>> it is intuitively true.
>>>>
>>>> Actually, even the most precise definitions do exactly the same at
>>>> the root, since there is no such thing as a fundamentally precise
>>>> definition. Take 0, for example: you might say it is the smallest
>>>> non-negative integer, but this begs the question, since "integer" is
>>>> meaningless without defining 0 first. So ultimately we just rely on
>>>> our intuitive, fuzzy understanding of 0 as nothing, and as being one
>>>> less than one of something (which again is an intuitive notion
>>>> derived from our experience of objects).
>>>>
>>>>
>>>
>>> So what is your definition of computer, and what is your
>>> evidence/reasoning
>>> that you yourself are not contained in that definition?
>>>
>> There is no perfect definition of computer. I take computer to mean  
>> the
>> usual physical computer,
> 
> Why not use the notion of a universal Turing machine, which has a
> rather well-defined and widely understood definition?
Because it is an abstract model, not an actual computer. Taking a computer
to be a Turing machine would be like taking a human to be a picture or a
description of a human.
It is a major confusion of levels, a confusion between description and
actuality.

Also, if we accept your definition, then a Turing machine can't do anything.
It is a concept. It doesn't actually compute anything, any more than a plan
for how to build a car drives.
You can use the concept of a Turing machine to do actual computations based
on the concept, though, just as you can use a plan for how to build a car to
build a car and drive it.
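
To make the distinction concrete, here is a minimal sketch in Python (the
little increment machine in RULES is just a made-up illustrative example).
The table RULES is inert data - the plan of the car - while only the
interpreter loop run() actually computes anything:

# A Turing machine *description*: inert data, like a plan for a car.
# This illustrative table increments a unary number: it scans right
# over 1s and writes a 1 on the first blank cell.
# Format: (state, symbol) -> (new_state, written_symbol, head_move)
RULES = {
    ("scan", "1"): ("scan", "1", +1),
    ("scan", "_"): ("done", "1", 0),
}

def run(rules, tape, state="scan", pos=0, halt_state="done"):
    """The actual computation: a concrete process that interprets the
    description step by step. Without a process like this, the table
    above computes nothing at all."""
    cells = dict(enumerate(tape))  # sparse tape, blank cells read as "_"
    while state != halt_state:
        state, written, move = rules[(state, cells.get(pos, "_"))]
        cells[pos] = written
        pos += move
    return "".join(cells[i] for i in sorted(cells))

print(run(RULES, "111"))  # -> "1111": three becomes four, in unary

You can print, copy or mail the RULES table; it is a description. Only
run() - or a chip, or a person with pencil and paper - is an actual
computation.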


Jason Resch-2 wrote:
> 
>> since this is all that is required for my argument.
>>
>> I (if I take myself to be human) can't be contained in that definition
>> because a human is not a computer according to the everyday  
>> definition.
> 
> A human may be something a computer can perfectly emulate; therefore a
> human could fall within the definition of a computer. Computers are
> very powerful and flexible in what they can do.
That is an assumption that I don't buy into at all.

Actually, it can't be true, because of self-observation.
A human that observes its own brain observes something entirely different
from what a digital brain observing itself would see (the former will see
flesh and blood, while the latter will see computer chips and wires), so
their behaviour will diverge if they look at their own brains - that is,
the digital brain can't be an exact emulation, because emulation means
behavioural equivalence.
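
To be explicit about what I mean by behavioural equivalence, here is a
minimal sketch (the functions human and emulation are of course just
hypothetical stand-ins for the argument, not models of anyone's brain):

def behaviourally_equivalent(system_a, system_b, inputs):
    """Two systems are behaviourally equivalent (on the tested inputs)
    iff they produce identical outputs for every input."""
    return all(system_a(x) == system_b(x) for x in inputs)

# Hypothetical stand-ins: both answer alike on everything except
# being asked to inspect their own substrate.
human = lambda q: "flesh and blood" if q == "inspect substrate" else "same"
emulation = lambda q: "chips and wires" if q == "inspect substrate" else "same"

print(behaviourally_equivalent(human, emulation,
                               ["ordinary question", "inspect substrate"]))
# -> False: the self-observation input makes the behaviour diverge

The emulation can match on every ordinary input and still fail to be
exact, because a single input - looking at one's own brain - already
distinguishes the two.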


Jason Resch-2 wrote:
> 
> Short of injecting infinities, true randomness, or halting-type  
> problems, you won't find a process that a computer cannot emulate.
Really? How come we have never emulated anything that isn't already
digital?
What is the evidence for your statement (or alternatively, why would you
think it is true for other reasons)?

We have no reason to believe that nature is finite. It just seems to go on
in every direction; we have never found an edge. I am not saying it contains
a completed infinity (in my opinion that's pretty much an oxymoron), but it
appears to be inherently incomplete. There are many places where our
equations *completely* break down, which implies that there might never be
an accurate description there.
Occam's razor is not an argument against this. It doesn't say "Assume as few
entities as possible" (otherwise we would have to deny the existence of
everything we can't directly observe, like planets that are far away). It
says "Make the fewest and the simplest assumptions".
We don't need to assume fundamental finiteness to explain anything, so we
shouldn't.
I am not saying that nature is infinite in the way we picture it. It may not
fit into these categories at all.

Quantum mechanics includes true subjective randomness already, so by your
own standards nothing that physically exists can be emulated.
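
This is easy to make concrete. A deterministic emulation can only ever
reproduce pseudorandomness. Here is a minimal sketch using only the Python
standard library (with the obvious hedge that whether the operating
system's entropy is "truly" random in the quantum sense is exactly what is
in question; the code only shows the difference in reproducibility):

import os
import random

# A deterministic "emulation" of randomness: fully reproducible.
# Anyone who knows the seed can replay the exact same sequence.
seed = 42
rng_a = random.Random(seed)
rng_b = random.Random(seed)
assert [rng_a.random() for _ in range(5)] == [rng_b.random() for _ in range(5)]

# An entropy source: no seed exists that lets another machine (or even
# a second run of this very program) replay these draws.
print(os.urandom(8).hex())
print(os.urandom(8).hex())  # almost surely different from the line above

A computer can imitate the statistics of true randomness, but it cannot
replay the actual outcomes, and exact emulation would require the
outcomes, not just the statistics.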


Jason Resch-2 wrote:
> 
> Do you believe humans are hypercomputers? If not, then we are just
> special cases of computers. The particular case can be defined by a
> program, which may be executed on any Turing machine.
Nope. We are neither computers nor hypercomputers.

And please don't ask me to prove that. The burden of proof is on the one
claiming that something exists in a particular way or is a particular thing
(just as atheists rightly say that the burden of proof is on those claiming
that a Christian God with very particular properties exists).

