I just respond to some parts of your posts, because I'd rather discuss the
main points than get sidetracked with issues that are less fundamental.

Jason Resch-2 wrote:
>> I admit that for numbers this is not so relevant because number relations
>> can be quite clearly expressed using numerous symbols (they have very few
>> and simple relations), but it is much more relevant for more complex
>> relations.
> Complex relation can be expressed in terms of a series of interrelated
> simpler relations (addition, multiplication, comparison, etc.).  You are
> focused on the very lowest level and it is no wonder you cannot see the
> higher-level possibilities for meaning, relations, intelligence,
> consciousness, etc. that a machine can create.
The complex relations can often only be expressed as simple relations on a
meta-level (which is a very big step of abstraction). You can express
rational numbers using natural numbers, but only using an additional layer
of interpretation (which is a *huge* abstraction - it's the difference
between description and what is being described).

The natural numbers themselves don't lead to the rational numbers (except by
adding additional relations, like the inverse of multiplication).
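To make that interpretational layer concrete, here is a minimal sketch (the names are my own, purely illustrative): rationals encoded as pairs of naturals, with all of the "rational" meaning living in the operations defined on top of the pairs, not in the pairs themselves.

```python
from math import gcd

# A "rational" here is just a pair of naturals (num, den).  The pair
# itself is not a rational number; it only becomes one relative to the
# interpretation layer below.

def make_rat(num, den):
    # Canonical form: reduce by the greatest common divisor.
    g = gcd(num, den)
    return (num // g, den // g)

def rat_add(a, b):
    # (p/q) + (r/s) = (p*s + r*q) / (q*s), using only natural-number ops.
    (p, q), (r, s) = a, b
    return make_rat(p * s + r * q, q * s)

def rat_mul(a, b):
    # (p/q) * (r/s) = (p*r) / (q*s)
    (p, q), (r, s) = a, b
    return make_rat(p * r, q * s)

half = make_rat(1, 2)
third = make_rat(1, 3)
print(rat_add(half, third))  # (5, 6) -- "5/6" only under our interpretation
print(rat_mul(half, third))  # (1, 6)
```

Nothing about the pair (5, 6) is rational on its own; it is the surrounding layer of operations and conventions that makes it so.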

Jason Resch-2 wrote:
> The relation of hot vs. cold as experienced by you is also the
> production of a long series of multiplications, additions, comparisons,
> and
> other operations. 
You assume reductionism or emergentism here. Of course you can defend the CT
thesis if you assume that the lowest level can magically lead to higher
levels (or the higher levels are not real in the first place).
The problem is that this magic would precisely be the higher levels that you
wanted to derive.

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >> For example it cannot directly compute
>> >> >> -1*-1=1. Machine A can only be used to use an encoded input value
>> and
>> >> >> encoded description of machine B, and give an output that is
>> correct
>> >> >> given
>> >> >> the right decoding scheme.
>> >> >>
>> >> >
>> >> > 1's or 0's, X's or O's, what the symbols are don't have any bearing
>> on
>> >> > what
>> >> > they can compute.
>> >> >
>> >> That's just an assertion of the belief I am trying to question here.
>> >> In reality, it *does* matter which symbols/things we use to compute. A
>> >> computer that only uses one symbol (for example a computer that adds
>> >> using
>> >> marbles) would be pretty useless.
>> >> It does matter in many different ways: Speed of computations,
>> efficiency
>> >> of
>> >> computation, amount of memory, efficiency of memory, ease of
>> programming,
>> >> size of programs, ease of interpreting the result, amount of layers of
>> >> programming to interpret the result and to program efficiently, ease
>> of
>> >> introspecting into the state of a computer...
>> >>
>> >
>> > Practically they might matter but not theoretically.
>> In the right theoretical model, it does matter. I am precisely doubting
>> the
>> value of adhering to our simplistic theoretical model of computation as
>> the
>> essence of what computation means.
> What model do you propose to replace it?
> The Church-Turing thesis plays a similar role in computer science as the
> fundamental theorem of arithmetic does in number theory.
None. There is no single correct model of computation. There are infinitely
many models that express different facets of what computation is. Different
Turing machines express different things, super-recursive Turing machines
express yet another thing, and so on.
I think computer scientists just don't want to accept this, because it takes
their bible away. We like to have an easy answer, even if it is the wrong one.

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >>
>> >> Why would we abstract from all that and then reduce computation to our
>> >> one
>> >> very abstract and incomplete model of computation?
>> >> If we do this we could as well abstract from the process of
>> computation
>> >> and
>> >> say every string can be used to emulate any machine, because if you
>> know
>> >> what program it expresses, you know what it would compute (if
>> correctly
>> >> interpreted). There's no fundamental difference. Strings need to be
>> >> interpreted to make sense as a program, and a turing machine without
>> >> negative numbers needs to be interpreted to make sense as a program
>> >> computing the result of an equation using negative numbers.
>> >>
>> >
>> > I agree, strings need to be interpreted.  This is what the Turing
>> machine
>> > does.  The symbols on the tape become interrelated in the context of
>> the
>> > machine that interprets the symbols and it is these relations that
>> become
>> > equivalent.
>> That is like postulating some magic in the turing machine. It just
>> manipulates symbols.
> No, it is not magic.  It is equivalent to saying the laws of physics
> interrelate every electron and quark to each other.
It is more like saying that the laws of physics show how to create humans
from atoms.
This is not the case. Nothing in the laws of nature says that some atoms
form a human. Still, it is evidently the case that there are humans, meaning
that the laws of nature just don't describe the higher levels.

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >> First, our modern computers are pretty much strictly more
>> computationally
>> >> powerful in every practical and theoretical way.
>> >
>> >
>> > They aren't any more capable.  Modern computers have more memory and
>> are
>> > faster, sure.  But if their memory could be extended they could emulate
>> > any
>> > computer that exists today.
>> Using the right interpretational layer, meaning right output and input
>> conversion, right memory content, correct user interface etc....
>> What is the justification that all of this doesn't matter?
> No program can determine its hardware.  This is a consequence of the
> Church
> Turing thesis.  The particular machine at the lowest level has no bearing
> (from the program's perspective).
If that is true, we can show that the CT thesis must be false, because we *can* define
a "meta-program" that has access to (part of) its own hardware (which still
is intuitively computable - we can even implement it on a computer).
Actually I will make another post about this, because it seems to be an
important argument.
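As a rough sketch of what such a meta-program could look like (this uses Python's standard library; whether introspection at this level counts as genuine hardware access is of course part of what is under debate):

```python
import platform
import sys

def describe_host():
    # A program that observes (part of) the machine it runs on and can
    # branch on what it finds -- its behaviour is not fixed by the
    # abstract program text alone.
    return {
        "machine": platform.machine(),  # e.g. 'x86_64' or 'arm64'
        "system": platform.system(),    # e.g. 'Linux' or 'Windows'
        "byteorder": sys.byteorder,     # 'little' or 'big'
    }

print(describe_host())
```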

Jason Resch-2 wrote:
>> Note that I am not saying it doesn't make sense to abstract from that. I
>> am
>> just saying it doesn't make sense to reduce our notion of computation to
>> (near) the highest level of abstraction (which the CT thesis asserts).
>> It is the same mistake as saying that all use of language is equivalent
>> because you can map all strings to all other strings, and every word can
>> in
>> principle represent everything. On some level, it is correct, but it is
>> not
>> a useful level to think of as the foundation of language, because it
>> takes
>> away most of what actually matters about language.
> Okay.  I can see your point that when looking all the way down at the
> bottom layers, you can see a difference.  However, I am not sure how this
> matters.  If our universe were a giant emulation on some computer, the
> particular architecture of the computer could make no difference to us. 
> So
> long as they emulated the same laws of physics there is no possible way,
> even in theory, that we could ever discern which architecture was running
> our universe.
Do you realize that what you said is just a restatement of the belief in
"only low level computation matters"? I think if the universe were an
emulation we could indeed see no difference, because we wouldn't be in the
emulation at all (though our behaviour may be mirrored in some way there).

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >>
>> >> Even if we grant that what you say is true, why would we define
>> >> computation
>> >> as being completely abstracted from the way something is expressed?
>> >> Especially if languages are very different (and programming languages
>> can
>> >> be
>> >> *very* different) the way we express actually does matter so much that
>> it
>> >> is
>> >> quite meaningless to even say the express the same thing.
>> >>
>> >> Tell me, does "00000000000000000000000000000000000000000000" really
>> >> practically express the same thing as "44"?
>> >
>> >
>> > It depends on the interpreter.
>> Right. And this means that the strings practically will express different
>> things and are thus not equivalent in general. The same is true for
>> computations.
> A Turing machine along with its tape has a unique definition and future
> evolution.  All the meaning it has is also uniquely defined (though
> perhaps
> implicitly), anyone can follow it and see what it does.
That's not true. The low-level action of a Turing machine may have many
high-level meanings, which can't be derived from the lower levels. For
example, the same operation on data may represent a graphical
transformation, or a change to the source code of a program, or a change to
an audio file, etc.

In practice this mostly isn't a problem, because we encode, store,
manipulate, and decode the data in a way that is mostly not very ambiguous
(to us!). But that is a function of a higher level. The computer itself
doesn't know what the data represents. We can use computers to display
images as text, for example, and if we lack the right interpretational
layer, then a given piece of data, or a given computation, is just
meaningless rubbish.
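A small illustration of this ambiguity: the very same four bytes, read under four different interpretational layers, "are" a text, two different numbers, or a row of grey pixels.

```python
# The same four bytes under different interpretation layers.
data = bytes([0x41, 0x42, 0x43, 0x44])

as_text = data.decode("ascii")                 # 'ABCD'
as_uint32_le = int.from_bytes(data, "little")  # 1145258561
as_uint32_be = int.from_bytes(data, "big")     # 1094861636
as_pixels = list(data)                         # grey values [65, 66, 67, 68]

print(as_text, as_uint32_le, as_uint32_be, as_pixels)
```

Nothing in the bytes themselves selects one of these readings; the meaning comes from the layer doing the decoding.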

Jason Resch-2 wrote:
> Whereas with a lone bit string (with no definition of its interpreter),
> there is no
> inherent or definite meaning.
Yes, just as with Turing machines.
The only inherent meaning of a bit string like 01 is: the first bit is zero,
the second bit is one.

Very long bit strings can have quite unique high-level meaning for us, just
like long computations.

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >> will be easy to read, will be easily interpreted without error,
>> >> will be easier to correctly use, etc...
>> >> So using different symbols will expand what the system can express on
>> a
>> >> very
>> >> relevant level.
>> >>
>> >
>> > At the lowest level but not at higher levels.  You are using a computer
>> to
>> > type an email which uses a "tape" that has only 2 states.  Yet you are
>> > still able to type "44".
>> ???
>> Did you mean at the higher level, but not at the lowest level?
> By lowest level I mean the raw hardware.  At the lowest level your
> computer's memory can only represent 2 states, often labeled '1' and '0'.
> But at the higher levels built upon this, you can have programs with much
> larger symbol sets.
> Maybe this is the source of our confusion and disagreement?
Yes, it seems like it. You say that the higher levels are contained in the
lower level, while I argue that they are clearly not, though they may be
relative to a representational meta-level (but only because we use the low
levels in the right way, which is a big feat in itself).
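A toy version of that layering: on a "tape" that has only the two symbols '0' and '1', the string "44" exists only relative to an encoding convention we bring to it (here, 8-bit ASCII).

```python
def encode(text):
    # High level -> low level: each character becomes 8 tape symbols.
    return "".join(format(ord(c), "08b") for c in text)

def decode(bits):
    # Low level -> high level: only meaningful given the same convention.
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

tape = encode("44")
print(tape)          # '0011010000110100' -- just 0s and 1s on its own
print(decode(tape))  # '44' -- but only under the ASCII convention
```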

Jason Resch-2 wrote:
>> > The computer (any computer) can do the interpretation for us.  You can
>> > enter a description of the machine at one point in time, and the state
>> of
>> > the machine at another time, and ask the computer is this the state the
>> > machine will be in N steps from now.  Where 0 is no and 1 is yes, or A
>> is
>> > no and B is yes, or X is no and Y is yes.  Whatever symbols it might
>> use,
>> > any computer can be setup to answer questions about any other machine
>> in
>> > this way.
>> The computer will just output zeroes and ones, and the screen will
>> convert
>> this into pixels. Without your interpretation the pixels (and thus the
>> answers) are meaningless.
> When things make a difference, they aren't meaningless.  The register
> containing a value representing a plane's altitude isn't meaningless to
> the
> autopilot program, nor to those on board.
Right, but it is meaningless on the level we are speaking about. If you use
a Turing machine to emulate another, more complex one, then its output is
meaningless until you interpret it the right way.
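A minimal sketch of that point, echoing the -1 * -1 example from earlier in the thread: a "machine" that manipulates only non-negative numbers can stand in for signed multiplication, but only under an encoding and decoding scheme the machine itself knows nothing about.

```python
def machine_mul(a, b):
    # The "machine": it only ever sees non-negative numbers.
    assert a >= 0 and b >= 0
    return a * b

def encode_signed(n):
    # Represent a signed integer as (sign bit, magnitude).
    return (0 if n >= 0 else 1, abs(n))

def decode_signed(sign, mag):
    return mag if sign == 0 else -mag

def signed_mul(x, y):
    sx, mx = encode_signed(x)
    sy, my = encode_signed(y)
    # The sign rule lives entirely in the interpretation layer.
    return decode_signed(sx ^ sy, machine_mul(mx, my))

print(signed_mul(-1, -1))  # 1 -- the machine itself never saw a negative number
```

Without the encoding and decoding steps, the machine's bare output (the magnitude 1) says nothing at all about signs.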

Jason Resch-2 wrote:
>> If you don't know how to encode and decode the symbols (ie interpret them
>> on
>> a higher level than the level of the interpretation the machine is doing)
>> the "interpretation" is useless.
> Useless to the one who failed to interpret them, but perhaps not
> generally.  If you were dropped off in a foreign land, your speech would
> be
> meaningless to others who heard you, but not to you, or others  who know
> how to interpret it.
Right. I am not objecting to this. But this is precisely why we can't
dismiss the higher levels as less important (or even irrelevant) than the
low-level language / computation.
Unless we postulate some independent higher level, the lower levels don't
make sense in a high-level context (just as emulation only makes sense to an
observer who knows of the existence of different machines).

Jason Resch-2 wrote:
>> We always have to have some information beforehand (though it may be
>> implicit, without being communicated first). Otherwise every signal is
>> useless because it could mean everything and nothing.
> How do infants learn language if they start with none?
Because they still have something, even though it is not a language in our
sense.
Of course we can get from no information to some information in some
relative realm.


Sent from the Everything List mailing list archive at Nabble.com.
