On Thu, Aug 23, 2012 at 8:12 AM, benjayk <benjamin.jaku...@googlemail.com>wrote:

> Jason Resch-2 wrote:
> >
> >
> >
> > On Aug 22, 2012, at 1:57 PM, benjayk <benjamin.jaku...@googlemail.com>
> > wrote:
> >
> >>
> >>
> >> Jason Resch-2 wrote:
> >>>
> >>> On Wed, Aug 22, 2012 at 1:07 PM, benjayk
> >>> <benjamin.jaku...@googlemail.com>wrote:
> >>>
> >>>>
> >>>>
> >>>> Jason Resch-2 wrote:
> >>>>>
> >>>>> On Wed, Aug 22, 2012 at 10:48 AM, benjayk
> >>>>> <benjamin.jaku...@googlemail.com>wrote:
> >>>>>
> >>>>>>
> >>>>>>
> >>>>>> Bruno Marchal wrote:
> >>>>>>>
> >>>>>>>>
> >>>>>>>> Imagine a computer without an output. Now, if we look at what
> >>>>>>>> the computer is doing, we cannot infer what it is actually
> >>>>>>>> doing in terms of high-level activity, because that is defined
> >>>>>>>> only at the output/input. For example, no video exists in the
> >>>>>>>> computer - the data of the video could just as well be other
> >>>>>>>> data. We would indeed just find computation.
> >>>>>>>> At the level of the chip, notions like definition, proving, and
> >>>>>>>> inductive inference don't exist. And if we believe the
> >>>>>>>> Church-Turing thesis, they can't exist in any computation
> >>>>>>>> (since all are equivalent to a computation of a Turing machine,
> >>>>>>>> which doesn't have those notions); they would merely be labels
> >>>>>>>> that we use in our programming language.
> >>>>>>>
> >>>>>>> All computers are equivalent with respect to computability.
> >>>>>>> This does not entail that all computers are equivalent with
> >>>>>>> respect to provability. Indeed the PA machine proves much more
> >>>>>>> than the RA machine. The ZF machine proves much more than the PA
> >>>>>>> machine. But they do prove in the operational meaning of the
> >>>>>>> term. They actually give proofs of statements, just as you can
> >>>>>>> say that a computer can play chess.
> >>>>>>> Computability is closed under the diagonal procedure, but not
> >>>>>>> provability, games, definability, etc.
> >>>>>>>
> >>>>>> OK, this makes sense.
> >>>>>>
> >>>>>> In any case, the problem still exists, though it may not be
> >>>>>> enough to say that the answer to the statement is not computable.
> >>>>>> The original form still holds (saying "solely using a computer").
> >>>>>>
> >>>>>>
> >>>>> For this to work, as Gödel did, you need to perfectly define the
> >>>>> elements in the sentence using a formal language like mathematics.
> >>>>> English is too ambiguous.  If you try to perfectly define what you
> >>>>> mean by computer, in a formal way, you may find that you have
> >>>>> trouble coming up with a definition that includes computers but
> >>>>> doesn't also include human brains.
> >>>>>
> >>>>>
> >>>> No, this can't work, since the sentence is exactly supposed to
> >>>> express something that cannot be precisely defined and show that it
> >>>> is intuitively true.
> >>>>
> >>>> Actually even the most precise definitions do exactly the same at
> >>>> the root, since there is no such thing as a fundamentally precise
> >>>> definition. For example 0: You might say it is the smallest
> >>>> non-negative integer, but this begs the question, since integer is
> >>>> meaningless without defining 0 first. So ultimately we just rely on
> >>>> our intuitive fuzzy understanding of 0 as nothing, and as one less
> >>>> than one of something (which again is an intuitive notion derived
> >>>> from our experience of objects).
> >>>>
> >>>>
> >>>
> >>> So what is your definition of computer, and what is your
> >>> evidence/reasoning
> >>> that you yourself are not contained in that definition?
> >>>
> >> There is no perfect definition of computer. I take computer to mean
> >> the
> >> usual physical computer,
> >
> > Why not use the notion of a Turing universal machine, which has a
> > rather well defined and widely understood definition?
> Because it is an abstract model, not an actual computer.

It doesn't have to be abstract.  It could be any physical machine that
has the property of being Turing universal.  It could be your cell
phone, for example.

> Taking a computer to be a Turing machine would be like taking a human
> to be a picture or a description of a human.
> It is a major confusion of level, a confusion between description and
> actuality.
> Also, if we accept your definition, then a Turing machine can't do
> anything. It is a concept. It doesn't actually compute anything any
> more than a plan for how to build a car drives.
> You can use the concept of a Turing machine to do actual computations
> based on the concept, though, just as you can use a plan for how to
> build a car to build a car and drive it.
> Jason Resch-2 wrote:
> >
> >> since this is all that is required for my argument.
> >>
> >> I (if I take myself to be human) can't be contained in that definition
> >> because a human is not a computer according to the everyday
> >> definition.
> >
> > A human may be something a computer can perfectly emulate, therefore a
> > human could exist with the definition of a computer.  Computers are
> > very powerful and flexible in what they can do.
> That is an assumption that I don't buy into at all.

Have you ever done any computer programming?  If you have, you might
realize that the possibilities for programs go beyond your imagination.

Computers are universal tools: they can become anything and emulate
anything, in the same way that a CD player is a universal sound-emitting
system that can mimic any voice or instrument.  You may not buy into
this, but the overwhelming majority of computer scientists do.  If you
have no opinion one way or the other, and don't wish to investigate it
yourself, for what reason do you reject the mainstream expert opinion?

> Actually it can't be true due to self-observation.
> A human that observes its own brain observes something entirely
> different from a digital brain observing itself (the former will see
> flesh and blood while the latter will see computer chips and wires), so
> their behaviour will diverge if they look at their own brains - that
> is, the digital brain can't be an exact emulation, because emulation
> means behavioural equivalence.

It could be a brain (computer) in a vat.

But even if it weren't, let's say it was an android.  Why would knowledge
of being an android make it less capable than any biological human?

The computer might be miniature and fit inside a biological person's skull,
and since most people never see their brain in the flesh, there would be
little reason to suspect one had an artificial brain.

> Jason Resch-2 wrote:
> >
> > Short of injecting infinities, true randomness, or halting-type
> > problems, you won't find a process that a computer cannot emulate.
> Really? How come we have never emulated anything that isn't already
> digital?

Non-digital processes are emulated all the time.  Any continuous/real
number can be simulated to any desired degree of accuracy.  It is only when
you need infinite accuracy that it becomes impossible for a computer.  This
is an injection of an infinity.

Note that humans cannot add, or multiply real numbers with infinite
precision either.
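The point about finite precision can be made concrete with a few lines of Python using the standard decimal module (a minimal sketch; the function name sqrt2 is illustrative). Each call is a finite computation that gets as close to the real number as you ask; only *infinite* precision is out of reach:

```python
from decimal import Decimal, getcontext

def sqrt2(digits):
    # Approximate the irrational number sqrt(2) to `digits` significant
    # figures -- any finite accuracy is reachable by a finite computation.
    getcontext().prec = digits
    return Decimal(2).sqrt()

print(sqrt2(10))  # 1.414213562
print(sqrt2(50))  # 50 significant digits of sqrt(2)
```

Raising the precision only raises the (finite) cost of the computation; no step in it requires completing an infinity.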

> What is the evidence for your statement (or alternatively, why would
> you think it is true for other reasons)?

Sit for a few minutes and try to come up with a process that cannot be
replicated by a computer program, which does not involve one of the three
things I mentioned.  You may soon become frustrated by the seeming
impossibility of the task, and develop an intuition for what is meant by
Turing universality.

The reasoning is: anything that can be described algorithmically, and
does not require an infinite number of steps to solve, can be solved by
a computer following that algorithm.  No one has found or constructed an
algorithm that cannot be followed by a computer.
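A computer "following an algorithm" can itself be sketched as a toy Turing machine simulator (the rule-table format and the name run_tm below are my own illustration, not a standard interface). Any procedure expressible as such a finite table of (state, symbol) rules can be run this way:

```python
def run_tm(rules, tape, state="start", blank="_"):
    # rules maps (state, symbol) -> (new_state, written_symbol, head_move)
    tape = dict(enumerate(tape))
    pos = 0
    while state != "halt":
        sym = tape.get(pos, blank)
        state, tape[pos], move = rules[(state, sym)]
        pos += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example table: flip every bit of a binary tape, then halt.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(flip, "0110"))  # -> 1001
```

The simulator itself is an ordinary program, which is the intuition behind universality: one machine can follow any such table.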


> We have no reason to believe that nature is finite. It just seems to go on
> in every direction, we never found an edge. I am not saying it contains a
> completed infinity (in my opinion that's pretty much an oxymoron), but it
> appears to be inherently incomplete.

I agree, our universe is probably infinite in size, and there are probably
infinitely many such structures that could be called universes.

But are humans infinite?  Do our brains or neurons need to process
continuous variables to infinite precision to function accurately?

> There are many places where our equations *completely* break down,
> which implies that there might never be an accurate description there.
> Occam's razor is not an argument against this. It doesn't say "Assume
> as few entities as possible" (otherwise we would have to deny the
> existence of everything we can't directly observe, like planets that
> are far away). It says "Make the fewest and simplest assumptions."
> We don't need to assume fundamental finiteness to explain anything, so we
> shouldn't.

Nor should we assume infinities without reason.  There are some physical
reasons to assume there are no infinities involved in the brain, however:

The holographic principle places a finite bound on the amount of
physical information that can exist in a fixed volume.  This implies
there is a finite number of possible brain states, and that infinite
precision cannot be a requirement for the operation of the brain.
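For a rough sense of scale, the related Bekenstein bound I <= 2*pi*R*E / (hbar*c*ln 2) bits can be evaluated with back-of-envelope figures (the radius and mass below are illustrative assumptions for a human brain, not measurements):

```python
import math

# Physical constants (SI units).
hbar, c = 1.0546e-34, 2.998e8

# Illustrative assumed figures: brain radius ~0.1 m, mass ~1.5 kg.
R, m = 0.1, 1.5
E = m * c**2  # rest-mass energy

# Bekenstein bound on information content, in bits.
bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
print(f"{bits:.1e}")  # on the order of 10**42 bits
```

However large, the bound is finite, which is the point at issue.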


> I am not saying that nature is infinite in the way we picture it. It may
> not
> fit into these categories at all.
> Quantum mechanics includes true subjective randomness already, so by your
> own standards nothing that physically exists can be emulated.

The UD also contains subjective randomness, which is at the heart of
Bruno's argument.

Subjective randomness occurs anytime a subject is duplicated into two
distinguishable locations.  To the subject, this duplication seems like
a teleportation, with the probability of ending up in location A vs.
location B being truly random.

In the third person view of the UD, or quantum mechanics, however, it is
entirely deterministic.

> Jason Resch-2 wrote:
> >
> > Do you believe humans are hyper computers?  If not, then we are just
> > special cases of computers.  The particular case can be defined by a
> > program, which may be executed on any Turing machine.
> Nope. We are not computers and also not hyper-computers.

That is a bit like saying we are not X, but we are also not (not X).
Hyper computers are these imagined things that can do everything normal
computers cannot.  So together, there is nothing the two could not be
capable of.  What is this magic that makes a human brain more capable
than any machine?  Do you not believe the human brain is fundamentally
mechanical?


> And please don't ask me to prove that. The burden of proof is on the
> one claiming that something exists in any particular way or is a
> particular thing (just like atheists rightfully say that the burden of
> proof is on the ones claiming that a Christian God with very particular
> properties exists).

> --
> View this message in context:
> http://old.nabble.com/Simple-proof-that-our-intelligence-transcends-that-of-computers-tp34330236p34339323.html
> Sent from the Everything List mailing list archive at Nabble.com.
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To post to this group, send email to everything-list@googlegroups.com.
> To unsubscribe from this group, send email to
> everything-list+unsubscr...@googlegroups.com.
> For more options, visit this group at
> http://groups.google.com/group/everything-list?hl=en.
