Jason Resch-2 wrote:
> On Thu, Aug 23, 2012 at 11:11 AM, benjayk
> <benjamin.jaku...@googlemail.com>wrote:
>> Jason Resch-2 wrote:
>> >
>> >> >>> So what is your definition of computer, and what is your
>> >> >>> evidence/reasoning
>> >> >>> that you yourself are not contained in that definition?
>> >> >>>
>> >> >> There is no perfect definition of computer. I take computer to mean
>> >> >> the
>> >> >> usual physical computer,
>> >> >
>> >> > Why not use the notion of a Turing universal machine, which has a
>> >> > rather well defined and widely understood definition?
>> >> Because it is an abstract model, not an actual computer.
>> >
>> >
>> > It doesn't have to be abstract.  It could be any physical machine that
>> has
>> > the property of being Turing universal.  It could be your cell phone,
>> for
>> > example.
>> >
>> OK, then no computers exist, because no computer can actually emulate all
>> programs that run on a universal Turing machine, due to lack of memory.
> If you believe the Mandelbrot set, or the infinite digits of Pi, exist, then
> so too do Turing machines with inexhaustible memory.
They exist as useful abstractions, but not as physical objects (which is
what we practically deal with when we talk about computers).

Jason Resch-2 wrote:
>> But let's say we mean "except for memory and unlimited accuracy".
>> This would mean that we are computers, but not that we are ONLY
>> computers.
> Is this like saying our brains are atoms, but we are more than atoms?  I
> can agree with that, our minds transcend the simple description of
> interacting particles.
> But if atoms can serve as a platform for minds and consciousness, is there
> a reason that computers cannot?
Not absolutely. Indeed, I believe mind is all there is, so necessarily
computers are an aspect of mind and are even conscious in a sense already.

Jason Resch-2 wrote:
> Short of adopting some kind of dualism (such as
> http://en.wikipedia.org/wiki/Biological_naturalism , or the idea that God
> has to put a soul into a computer to make it alive/conscious), I don't see
> how atoms can serve as this platform but computers could not, since
> computers seem capable of emulating everything atoms do.
OK. We have a problem of level here. On some level, computers can emulate
everything atoms can do computationally, I'll admit that.  But that's simply
the wrong level, since it is not about what something can do in the sense of
transforming input/output.
It is about what something IS (or is like).

A boulder that falls on your foot may not be computationally more powerful
than a computer, but it can do something important that a computer running a
simulation of a boulder dropping on your foot can't: actually hurt your foot.
Even if you grant that a simulated boulder could create pain for someone
plugged into the simulation (I agree), it still doesn't do the same thing,
namely create the pain by dropping on your physical foot.
See, the accuracy of the simulation does not help in bridging the levels.

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >> Jason Resch-2 wrote:
>> >> >
>> >> >> since this is all that is required for my argument.
>> >> >>
>> >> >> I (if I take myself to be human) can't be contained in that
>> definition
>> >> >> because a human is not a computer according to the everyday
>> >> >> definition.
>> >> >
>> >> > A human may be something a computer can perfectly emulate, therefore a
>> >> > human could exist within the definition of a computer.  Computers are
>> >> > very powerful and flexible in what they can do.
>> >> That is an assumption that I don't buy into at all.
>> >>
>> >>
>> > Have you ever done any computer programming?  If you have, you might
>> > realize that the possibilities for programs go beyond your
>> imagination.
>> Yes, I studied computer science for one semester, so I have programmed a
>> fair amount.
>> Again, you are misinterpreting me. Of course programs go beyond our
>> imagination. Can you imagine the Mandelbrot set without computing it on a
>> computer? It is very hard.
>> I never said that they can't.
>> I just said that they lack some capability that we have. For example they
>> can't fundamentally decide which programs to use and which not and which
>> axioms to use (they can do this relatively, though). There is no
>> computational way of determining that.
> There are experimental ways, which is how we determined which axioms to
> use.
Nope, since for the computer no experimental way exists if we haven't
determined a program first.
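(As an aside, the Mandelbrot remark above is easy to make concrete. A minimal
sketch in Python; the escape radius 2 is the standard criterion, while the
100-iteration cap is an arbitrary cutoff I chose for illustration:)

```python
# Membership test for the Mandelbrot set: iterate z -> z^2 + c and check
# whether the orbit stays bounded (|z| <= 2) for a fixed number of steps.
def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # orbit escaped: c is outside the set
    return True           # orbit stayed bounded so far: c is (likely) inside

# c = 0 stays at 0 forever; c = 1 escapes quickly (0, 1, 2, 5, ...).
print(in_mandelbrot(0))   # True
print(in_mandelbrot(1))   # False
```

Nobody could foresee the set's intricate boundary from these few lines, which
is precisely the point about programs exceeding imagination.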

Jason Resch-2 wrote:
>> For example how can you computationally determine whether to use the
>> axiom
>> true=not(false) or use the axiom true=not(true)?
> Some of them are more useful, or lead to theories of a richer complexity.
Yes, but how do you determine that with a computer?
If you program it to embrace bad axioms that lead to bad theories and don't
have much use, it will still carry out your instructions. So the computer
by itself will not notice whether it does something useful (except if you
programmed it to, in which case you get the same problem with the creation
of that program).
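The point can be sketched in a few lines of Python (the function names are
hypothetical, purely for illustration): the machine applies whichever
negation rule it is given, sensible or not, and nothing in the execution
itself flags the useless one.

```python
# A machine applies whatever rule it is given; it does not judge the rule.
def sane_not(x):
    return not x          # the axiom true = not(false)

def broken_not(x):
    return x              # the "axiom" true = not(true)

def run(rule, inputs):
    # The computer dutifully carries out the rule, good or bad.
    return [rule(x) for x in inputs]

print(run(sane_not, [True, False]))    # [False, True]
print(run(broken_not, [True, False]))  # [True, False]
# Both runs complete without error: the judgment that the second rule is
# useless comes from outside the computation.
```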

Jason Resch-2 wrote:
>  If the computer program had a concept for desiring novelty/surprises, it
> would surely find some axiomatic systems more interesting than others.
Sure. But it could be programmed not to have such a concept, and there is
no way of determining whether to use it or not if we haven't already
programmed an algorithm for that (which again has the same problem).

In effect you get an infinite regress:
How do we determine which program to use? -> use a program to determine it
But which program? -> use a program to determine it
But which program? -> use a program to determine it
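As a toy illustration (all names hypothetical): each level defers the choice
to yet another chooser, and the recursion only terminates because an
arbitrary cutoff is imposed from outside.

```python
# Toy sketch of the regress: choosing a program by means of a program.
def choose_program(candidates, depth=0, max_depth=5):
    # Each level defers the choice to another chooser one level up.
    if depth == max_depth:
        # The regress never bottoms out on its own; we cut it off by fiat,
        # i.e. with an unjustified choice made from outside the system.
        return candidates[0]
    return choose_program(candidates, depth + 1, max_depth)

print(choose_program(["program_a", "program_b"]))  # program_a, only by fiat
```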

Jason Resch-2 wrote:
>> Or how can you determine whether to program a particular program or not?
>> To
>> do this computationally you would need another program, but how do you
>> determine if this is the correct one?
> How do we?
"How" may be the wrong question, since it would seem to imply that there is
an algorithm that we use for that.

Jason Resch-2 wrote:
>> Or to put it more rudely: Many computer scientists are deluded by their
>> own
>> dogma of computation being all important (or even real beyond an idea),
>> just
>> like many priests are deluded about God being all important (or even real
>> beyond an idea). Inside their respective system, there is nothing to
>> suggest
>> the contrary, and most are unwilling to step out of the system because
>> they
>> want to be comfortable and not be rejected by their peers.
> Most consciousness researchers (who often are not computer scientists)
> subscribe to the functionalist/computational theory of mind.
> It is better than dualism, because it does not require violations of
> physics for a mental event to cause a physical event.
> It is better than epiphenomenalism, because it explains how we can express
> our own puzzlement over consciousness.
> It is better than idealism, because it explains why we observe a physical
> universe that seems to follow certain laws.
> It is better than physicalism, because it explains how creatures with
> different neural anatomy can experience pain.
> If not functionalist/computationalist, what theory of consciousness do
> you subscribe to?
Idealism (which really isn't a theory of consciousness, more a stance).

It is better than computationalism because it explains why we don't
subjectively perceive ourselves as computers or as doing computations (we
aren't), it explains where computations come from in the first place (from
subjective meaningfulness), it explains why there is any experience at all
(it is irreducibly existent), and it is spiritually meaningful (all there is
is the infinite profundity of experiencing, which isn't constrained by
objective pre-defined things).
Your argument against idealism isn't valid, since idealism does explain why
we observe a physical universe that follows certain laws (approximately):
because order (that can be described using laws) and matter are subjectively
meaningful.

Jason Resch-2 wrote:
> Do you, like Craig, believe that certain materials have to be used in the
> construction of a brain to realize certain mental states?
I think the question is invalid. A brain doesn't realize mental states. It
appears in our mind (and has some correlation to our mind). Experience can't
be attributed to any entity or thing or activity.
Just on a relative level they can be, but then we are talking about "mental
states" and not really the experience itself. On this level, we can also
attribute mental states to computers (we already do).

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >> What is the evidence for your statement (or alternatively, why would
>> it
>> >> think it is true for other reasons)?
>> >>
>> >
>> > Sit for a few minutes and try to come up with a process that cannot be
>> > replicated by a computer program, which does not involve one of the
>> three
>> > things I mentioned.  You may soon become frustrated by the seeming
>> > impossibility of the task, and develop an intuition for what is meant
>> by
>> > Turing universality.
>> ???
>> Well, actually I can't find any actual process that can be replicated by
>> a
>> computer program.
>> If it could be, then I could use virtual things and processes like I use
>> actual things and processes. But this is empirically obviously not true.
>> If you want an example, take my heart beating. I can't substitute my
>> heart
>> even with the best simulation of a heart beating, because the simulation
>> doesn't ACTUALLY pump my blood. Even if it is completely accurate, this
>> doesn't help at all with the problem of pumping my blood because all it
>> does
>> is generate information as output. We would still have the problem of
>> using
>> that information to actually pump the blood, and this would pretty much
>> still require a real heart (or another pump).
> You're crossing contexts and levels.  Certainly, a heart inside a computer
> simulation of some reality isn't going to do you any good if you exist on
> a
> different level, in a different reality.
So you are actually agreeing with me? This is exactly the point I am
trying to make.
Digital models exist on a different level than what they represent, and it
doesn't matter how good/accurate they are because that doesn't bridge the
gap between model and reality.

Jason Resch-2 wrote:
>> Computers can't go beyond symbol manipulation,
>> simply because that is exactly how we built them. That is the very
>> definition of a computer. Receive symbols, transform them in the stated
>> way,
>> output symbols.
> Computers can do more than manipulate symbols, they can generate reality.
Only through us. If we let a computer compute and don't look at what it
does, then as far as we can see, it doesn't generate anything.

Jason Resch-2 wrote:
>  Consider that your entire life, all your experience are created by some
> gelatinous blob resting in the darkness of your skull.  If this blob can
> create your reality, why can't this box sitting under my desk do the same?
I don't buy your assumption. My experience is just there. I can't actually
find something that produces it. I can only imagine that.
As far as I am concerned, the brain just manifests and represents subjective
processes on an objective level inside a skull; they aren't made there (there
is no evidence for that, nor any reason to believe it is so, or even
meaningful).

Yes, the computer can do that as well (though on a different level, if only
because its representation is different).

Jason Resch-2 wrote:
>> If you say that only computers exists, you say that only symbol
>> manipulation
>> exists. The problem with that is that symbols don't make sense on their
>> own,
>> as the very definition of a symbol is that it represents something other
>> than itself. So you CAN'T have only symbols and symbols manipulation
>> because
>> the symbols are meaningless without something outside of them and symbol
>> manipulation is meaningless if symbols are meaningless.
> The squirting of neurotransmitters between neurons are no more than
> symbols.  Yet they have meaning in the context of your brain.
No, the squirting has measurable objective qualities like energy (a symbol
doesn't have that because it isn't a unique physical entity).

Jason Resch-2 wrote:
> The act of comparing one symbol to another, and doing something different
> because it was one value and not another is the most elemental form of
> meaning.
I am a bit sorry for you if that is the most elemental form of meaning for
you. Often closing your eyes and stopping comparing and symbolizing can be
much more meaningful and fulfilling.

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> > There are many places where our equations
>> >> *completely* break down, which implies that there might never be an
>> >> accurate description there.
>> >> Occams razor is not an argument against this. It doesn't say "Assume
>> as
>> >> little entities as possible" (otherwise we had to deny the existence
>> of
>> >> everything we can't directly observe like planets that are far away).
>> It
>> >> says "Make the least and the simplest assumptions".
>> >> We don't need to assume fundamental finiteness to explain anything, so
>> we
>> >> shouldn't.
>> >>
>> >
>> > Nor should we assume infinities without reason.  There are some
>> physical
>> > reasons to assume there are no infinities involved in the brain,
>> however:
>> >
>> > The holographic principle places a finite bound on the amount of
>> physical
>> > information that there can be in a fixed volume.  This implies there is
>> a
>> > finite number of possible brain states and infinite precision cannot be
>> a
>> > requirement for the operation of the brain.
>> >
>> >
>> http://en.wikipedia.org/wiki/Holographic_principle#Limit_on_information_density
>> >
>> That argument does not work if the human brain is entangled with the rest
>> of the cosmos (because then you can't separate it as an entity having a
>> fixed volume).
> Okay, let's say it is a bubble of 1000 light years surrounding you.  There
> is a finite quantity of information in this bubble, and only so much can
> reach its center (your brain) over the next 1,000 years.
I mean it is literally entangled with the rest of infinite existence, not
just our universe.
Really, even according to the multiverse theory it is: there is no absolute
decoherence in it, and there are infinitely many universes.
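For scale, the holographic bound quoted above can be made concrete. A rough
sketch, assuming a brain-sized sphere of radius 0.1 m (an illustrative
figure, not a measurement):

```python
import math

# Holographic bound: the information in a region is at most A / (4 * l_p^2)
# nats, where A is the bounding surface area and l_p is the Planck length.
l_p = 1.616e-35            # Planck length in metres
r = 0.1                    # illustrative radius of a brain-sized sphere, m
area = 4 * math.pi * r**2  # surface area of the bounding sphere
bound_bits = area / (4 * l_p**2 * math.log(2))  # convert nats to bits
print(f"{bound_bits:.3e} bits")  # about 1.7e68 bits: enormous, but finite
```

This is the sense in which the quoted argument claims brain states are
finite; the dispute above is over whether the brain can be bounded by any
such fixed volume at all.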

Jason Resch-2 wrote:
>> And this seems to be empirically true because there is pretty much no
>> other
>> way to explain psi.
> What do you mean by psi?
Telepathy, for example.

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >> I am not saying that nature is infinite in the way we picture it. It
>> may
>> >> not
>> >> fit into these categories at all.
>> >>
>> >> Quantum mechanics includes true subjective randomness already, so by
>> your
>> >> own standards nothing that physically exists can be emulated.
>> >>
>> >>
>> > The UD also contains subjective randomness, which is at the heart of
>> > Bruno's argument.
>> No, it doesn't even contain a subject.
>> Bruno assumes COMP, which I don't buy at all.
> Okay.  What is your theory of mind?
I don't have any. Mind cannot be captured, or even described, at the
fundamental level at all.

Jason Resch-2 wrote:
>> evidence for that.
>> The notion of entanglement doesn't make sense for machines, since they can
>> only process information/symbols, but entanglement is not informational.
>> Also, machines necessarily work in steps (that's how we built them), yet
>> entanglement is instantaneous. If you have two machines, then they both
>> have to do a step to know the state of the other one.
>> And indeed entanglement is somewhat magical, but nevertheless we know it
>> exists.
> Effects from entanglement are not instantaneous under many worlds.
> From: http://www.anthropic-principle.com/preprints/manyworlds.html
> To recap. Many-worlds is local and deterministic. Local measurements
> split local systems (including observers) in a subjectively random
> fashion; distant systems are only split when the causally transmitted
> effects of the local interactions reach them. We have not assumed any
> non-local FTL effects, yet we have reproduced the standard predictions
> of QM.
Well, OK, it doesn't really matter (though I don't buy into many-worlds much
more than I buy into a single world).
The thing is that a perfect simulation of entanglement still wouldn't be
actual entanglement (since that requires there to be no gap of level - a
classical computer simulating entanglement is not actually entangled with
its environment).
Sent from the Everything List mailing list archive at Nabble.com.
