Jason Resch-2 wrote:
> On Thu, Sep 6, 2012 at 12:47 PM, benjayk
> <benjamin.jaku...@googlemail.com>wrote:
>> Jason Resch-2 wrote:
>> >
>> > On Tue, Aug 28, 2012 at 2:57 PM, benjayk
>> > <benjamin.jaku...@googlemail.com>wrote:
>> >
>> >>
>> >> It seems that the Church-Turing thesis, which states that a universal
>> >> Turing machine can compute everything that is intuitively computable,
>> >> has near-universal acceptance among computer scientists.
>> >>
>> >> I really wonder why this is so, given that there are simple cases
>> >> where we can compute something that an arbitrary Turing machine
>> >> cannot compute, using a notion of computation that is not
>> >> extraordinary at all (and quite relevant in reality).
>> >> For example, suppose you have a universal Turing machine A that uses
>> >> the alphabet {1,0} and a universal Turing machine B that uses the
>> >> alphabet {-1,0,1}.
>> >> Now it is quite clear that machine A cannot directly answer any
>> >> question that relates to -1.
> I don't see this as being the case at all.  What is the symbol for -1
> supposed to look like?  Do you agree that a turing machine that used A, B,
> and C as symbols could work the same as one that used -1, 0, and 1?
Well, the symbol for -1 could be "-1"?
To answer your latter question: no, not necessarily. I don't take the
symbols to be mere symbols, but to contain meaning (which they do), and
so it matters which symbols the machine uses, because that changes the
meaning of its computation. Oftentimes the meaning of the symbols also
constrains the possible relations (for example, -1 * -1 normally needs
to be 1, while A * A could be A, B or C).

The CT thesis wants to abstract away from things like meaning, but I don't
really see the great value in acting as if this were necessarily the
correct theoretical way of thinking about computations. It is only
valuable as one possible, very strongly abstracted, limited and
representational model of computation with respect to emulability.
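To make the encoding point concrete, here is a rough Python sketch; the
two-bit pairing of {-1, 0, 1} is an illustrative assumption, not any
standard scheme:

```python
# Sketch: a machine restricted to the alphabet {0, 1} can only reach
# -1 * -1 = 1 through an external encode/decode convention.
# The two-bit pairing below is an illustrative assumption.

ENCODE = {-1: "10", 0: "00", 1: "01"}          # ternary symbol -> bit pair
DECODE = {bits: sym for sym, bits in ENCODE.items()}

# Build a multiplication table that operates purely on bit pairs.
PRODUCT = {
    (ENCODE[x], ENCODE[y]): ENCODE[x * y]
    for x in ENCODE
    for y in ENCODE
}

# Inside the table, ("10", "10") -> "01"; only the decoding step tells
# us this *means* -1 * -1 = 1.
result_bits = PRODUCT[(ENCODE[-1], ENCODE[-1])]
print(DECODE[result_bits])  # 1
```

The machine-side table never mentions -1; the meaning lives entirely in
the encode/decode convention, which is exactly the layer under dispute.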

Jason Resch-2 wrote:
> Everything is a representation, but what is important is that the Turing
> machine preserves the relationships.  E.g., if ABBBABAA is greater than
> AAABBAAB then 01110100 is greater than 00011001, and all the other
> properties can hold, irrespective of what symbols are used.
The problem is that relationships don't make sense apart from symbols. We
can theoretically express the natural numbers using an infinite number of
unique symbols for both numbers and operations (like A or B or C or X for
10, ´ or ? or [ or ° for +), but in this case it won't be clear that we
are expressing natural numbers at all (without a lengthy explanation of
what the symbols mean).
Or if we are using binary numbers to express the natural numbers, it will
also not be very clear that we mean numbers, because binary expressions
often mean something else entirely. If we then add 1 to this "number", it
will not be clear whether we actually added one or just flipped a bit.

I admit that for numbers this is not so relevant, because number relations
can be quite clearly expressed using numerous symbols (they have very few
and simple relations), but it is much more relevant for more complex
computations.
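For concreteness, the relation-preservation claim Jason makes above can
be sketched in Python (reading both alphabets as base-2 numerals is an
assumed interpretation):

```python
# Substituting symbols (A -> 0, B -> 1) preserves the "greater than"
# relation between the strings, though *which* relation we mean
# (here: value as a base-2 numeral) still lives in the interpretation.

table = {"A": "0", "B": "1"}

def translate(s: str) -> str:
    return "".join(table[c] for c in s)

x, y = "ABBBABAA", "AAABBAAB"
assert translate(x) == "01110100" and translate(y) == "00011001"

# Under the base-2 reading, the order relation carries over intact.
print(int(translate(x), 2) > int(translate(y), 2))  # True
```

The substitution is mechanical; whether "greater than" is the relation
we care about is fixed by the chosen reading, not by the symbols.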

Jason Resch-2 wrote:
>> >> For example, it cannot directly compute -1*-1=1. Machine A can only
>> >> take an encoded input value and an encoded description of machine B,
>> >> and give an output that is correct given the right decoding scheme.
>> >>
>> >
>> > 1's or 0's, X's or O's: what the symbols are has no bearing on what
>> > they can compute.
>> >
>> That's just an assertion of the belief I am trying to question here.
>> In reality, it *does* matter which symbols/things we use to compute. A
>> computer that only uses one symbol (for example a computer that adds
>> using marbles) would be pretty useless.
>> It matters in many different ways: speed of computation, efficiency of
>> computation, amount of memory, efficiency of memory, ease of
>> programming, size of programs, ease of interpreting the result, number
>> of layers of programming needed to interpret the result and to program
>> efficiently, ease of introspecting into the state of a computer...
> Practically they might matter but not theoretically.
In the right theoretical model, it does matter. I am precisely doubting the
value of adhering to our simplistic theoretical model of computation as the
essence of what computation means.

Jason Resch-2 wrote:
>> Why would we abstract from all that and then reduce computation to our
>> one very abstract and incomplete model of computation?
>> If we do this, we could just as well abstract from the process of
>> computation and say every string can be used to emulate any machine,
>> because if you know what program it expresses, you know what it would
>> compute (if correctly interpreted). There's no fundamental difference.
>> Strings need to be interpreted to make sense as a program, and a Turing
>> machine without negative numbers needs to be interpreted to make sense
>> as a program computing the result of an equation using negative
>> numbers.
> I agree, strings need to be interpreted.  This is what the Turing machine
> does.  The symbols on the tape become interrelated in the context of the
> machine that interprets the symbols and it is these relations that become
> equivalent.
That is like postulating some magic in the Turing machine. It just
manipulates symbols.
No Turing machine can interpret something it can't even express (except
on a representational meta-level, which is only "interpretation").

Jason Resch-2 wrote:
>> First, our modern computers are pretty much strictly more computationally
>> powerful in every practical and theoretical way.
> They aren't any more capable.  Modern computers have more memory and are
> faster, sure.  But if their memory could be extended they could emulate
> any computer that exists today.
Using the right interpretational layer, meaning the right output and
input conversion, the right memory contents, the correct user interface,
and so on...
What is the justification that all of this doesn't matter?

Jason Resch-2 wrote:
>> It would be more of an argument if you could simulate Windows on a
>> Nintendo (but you can't). I am not saying that a Turing machine using
>> 0, 1 and -1 can't emulate a machine using only 0 and 1.
>> Secondly, even these emulations are only correct as far as our playing
>> experience goes (well, at least if you are not nostalgic about
>> hardware). The actual process going on in the computer is very
>> different,
> It is different if you focus on what happens at the lower level, but it is
> equivalent if you look at the abstractions.
...at the right level. Yes, I agree. But this is not what I am talking
about. At the right level of abstraction everything can be everything. I
object to this kind of abstraction into meaninglessness.

Jason Resch-2 wrote:
>> Jason Resch-2 wrote:
>> >
>> >> I realize that it all comes down to the notion of computation. But
>> >> why do most choose to use such a weak notion of computation? How
>> >> does machine B not compute something that A doesn't, by any
>> >> reasonable standard?
>> >> Saying that A can compute what B computes is like saying that
>> >> "orange" can express the same as the word "apple", because we can
>> >> encode the word "apple" as "orange".
>> >
>> >
>> > System A (using its own language of representation for system A), can
>> > predict exactly all future states of another system B (and vice versa).
>> Nope, it can't even *express* the future states.
> Are you saying it is important whether one Turing machine uses a red
> marker and one uses a blue, or whether one is located in a basement and
> another in an attic?  (The one in the attic doesn't have the capability
> of expressing '1' in a basement).
Ultimately, yes, though the latter matters much less than the former.

Note that I am not saying it doesn't make sense to abstract from that. I am
just saying it doesn't make sense to reduce our notion of computation to
(near) the highest level of abstraction (which the CT thesis asserts).
It is the same mistake as saying that all use of language is equivalent
because you can map all strings to all other strings, and every word can in
principle represent everything. On some level, it is correct, but it is not
a useful level to think of as the foundation of language, because it takes
away most of what actually matters about language.

Jason Resch-2 wrote:
>> We can just use it to predict a future state (if we are clever enough).
>> But why would that be the standard for what is computed?
>> Jason Resch-2 wrote:
>> >
>> >   A and B have different symbols, states, instructions, etc., so
>> > perhaps this is why you think system A can't perfectly emulate system
>> > B, but this is a little like saying there are things that can only be
>> > described by Spanish speakers that no other language (French, English,
>> > etc.) could describe.  Sure, a translation needs to occur to
>> > communicate a Spanish idea into an English one, but just because
>> > Spanish and English speakers use a different language doesn't mean
>> > there are problems only speakers of one language can solve.
>> If the languages are similar in capability, yes. If they are not, no.
>> Some
>> languages may completely lack concepts that others have, making them
>> inadequate for certain purposes (especially languages of some native
>> people
>> come to mind - they probably wouldn't be able to grasp certain concepts -
>> like eg logarithm - without adding words to their language).
> Do you think there are universal languages, which are not so limited?  Is
> English a universal language in your opinion?
In the most important sense, no, there are no universal languages. Every
language is only relatively expressive.

Sure, we can abstract all meaning away and say that the language
consisting only of the word "blurgh" is completely universal, because you
can map the sentences of any other language onto strings of this
language. This just completely misses that this only makes sense from the
perspective of the higher language, and that it is practically utterly
wrong.

Jason Resch-2 wrote:
> Even though there are not words for everything in English, can English be
> used to approximate the meaning of any word (in another language) to
> arbitrary levels of approximation?
In some ways that may be true for our current languages (they are quite
similar). But not in general.
Some languages may have concepts that are simply not expressible in
English, especially if the syntax or symbols of the language itself
contain meaning.
Jason Resch-2 wrote:
>> Even if we grant that what you say is true, why would we define
>> computation as being completely abstracted from the way something is
>> expressed? Especially if languages are very different (and programming
>> languages can be *very* different), the way we express things actually
>> matters so much that it is quite meaningless to even say they express
>> the same thing.
>> Tell me, does "00000000000000000000000000000000000000000000" really
>> practically express the same thing as "44"?
> It depends on the interpreter.
Right. And this means that the strings will practically express different
things, and are thus not equivalent in general. The same is true for
computations.
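As a concrete sketch of this interpreter-dependence (the tally and
base-2 readings below are illustrative assumptions):

```python
# The same string denotes different things under different interpreters:
# a run of 44 zeros equals "44" only under a tally (unary) reading;
# under a positional base-2 reading it is simply 0.

s = "0" * 44

def as_tally(string: str) -> int:
    return len(string)            # count the marks

def as_base2(string: str) -> int:
    return int(string, 2)         # read as a binary numeral

print(as_tally(s), as_base2(s))   # 44 0
```

Nothing in the string itself selects one of the two readings; the
"interpreter" carries that information.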

Jason Resch-2 wrote:
>> will be easy to read, will be easily interpreted without error,
>> will be easier to correctly use, etc...
>> So using different symbols will expand what the system can express on
>> a very relevant level.
> At the lowest level but not at higher levels.  You are using a computer to
> type an email which uses a "tape" that has only 2 states.  Yet you are
> still able to type "44".
Did you mean at the higher level, but not at the lowest level?

Yes, at the lowest level - *in conjunction with the higher levels* - it does
not matter. 

It only works because we mirror the actual symbol "44" in the memory of
the computer using 2 states. That is, we are actually still using the
higher level, just expressing it on a lower level.

So even if we grant that it does not matter on the lowest level, we can't
reduce everything to the lowest level (because even the lowest level is
meaningless then).

Jason Resch-2 wrote:
>> This is even more obvious if we take a more extreme example: Does
>> 10001101001010101010111100... (goes on for 10000000 more symbols) really
>> express the same as the photo it represents?
> If we sent it to an alien civilization, they might be able to interpret it
> as a picture, but in general I agree that arbitrary bit strings can be
> interpreted in many different ways.
And the use of the strings often determines what it will be interpreted as.
Hence practically the computations mean something entirely different, and
are different computations for all intents and purposes.

Jason Resch-2 wrote:
>> Hardly at all. For one thing, it only makes sense as a photo because
>> we know it is supposed to be a photo and use the data accordingly -
>> but this information itself is not contained in the symbols.
>> Jason Resch-2 wrote:
>> >
>> >> It is true in a very limited sense, but it seems mad to treat it as
>> >> the foundation of what it means for words to express something (and
>> >> the same goes for computation).
>> >> If we use such trivial notions of computation, why not say that the
>> >> program "return input" emulates all Turing machines, because given
>> >> the right input it gives the right output (we just give it the
>> >> solution as input).
>> >>
>> >
>> > Many programs have no input and/or no output, but they still can be
>> > rightfully said to perform different computations.
>> OK. Then we use the same program but demand as input the correct
>> development of the execution of the program (use it multiple times if
>> you haven't emulated enough steps).
>> You see? We can trivialize the notion of computation as much as we
>> like, by adding more and more layers of interpretation, or more
>> demands on the right usage of the program. It just doesn't make sense
>> to pretend that this trivialization actually is what computation is.
>> Sure, my example is a lot more useless than the CT notion of
>> computation, but it is the same principle. Abstracting and
>> interpreting until we arrive at a trivialization of the actual
>> phenomenon.
> The computer (any computer) can do the interpretation for us.  You can
> enter a description of the machine at one point in time, and the state of
> the machine at another time, and ask the computer is this the state the
> machine will be in N steps from now.  Where 0 is no and 1 is yes, or A is
> no and B is yes, or X is no and Y is yes.  Whatever symbols it might use,
> any computer can be setup to answer questions about any other machine in
> this way.
The computer will just output zeroes and ones, and the screen will
convert these into pixels. Without your interpretation, the pixels (and
thus the answers) are meaningless.
If you don't know how to encode and decode the symbols (i.e., interpret
them on a higher level than the interpretation the machine is doing),
the "interpretation" is useless.
The computer only ever does low-level interpretations.
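For what it's worth, the setup Jason describes (simulate a machine
description for N steps, then answer with a single symbol) can be
sketched in a few lines; the one-state bit-flipping machine and its
encoding are illustrative assumptions:

```python
# Simulate a Turing-style machine for a fixed number of steps, then
# answer a question about its tape with a single symbol ("1"/"0").
# The transition format and the one-state bit-flipper are illustrative.

def run(transitions, tape, state, steps):
    """transitions maps (state, symbol) -> (new_state, write, move)."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(steps):
        symbol = cells.get(head, "0")
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return state, cells

# One-state machine that flips each bit and moves right.
flip = {("q", "0"): ("q", "1", 1), ("q", "1"): ("q", "0", 1)}

_, cells = run(flip, "0110", "q", 4)
predicted = "".join(cells[i] for i in range(4))

# "Will the tape read 1001 after 4 steps?" answered as one symbol:
print("1" if predicted == "1001" else "0")  # 1
```

The simulator emits "1" or "0" faithfully, but knowing that "1" answers
a question about another machine is exactly the higher-level decoding at
issue.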

Jason Resch-2 wrote:
>> A Turing machine by itself simply doesn't emulate anything. We just
>> provide it input and interpret its output to emulate something *using*
>> a Turing machine.
> So computers don't add, don't compare, don't increment numbers unless a
> particular person looks at it and gives a notarized stamp attesting to the
> action being performed?
I talked about emulating, not computing in general.

Jason Resch-2 wrote:
>> By correctly interpreting, we can use any symbols for anything. We
>> "just" have to interpret them correctly. So this symbol tells you how
>> to build a very advanced AI: "°". Of course, you still have to
>> interpret what it means. ;)
> There are limits though.  You can't communicate a message that contains
> more than a certain amount of informational entropy without sending a
> certain number of symbols.
Nope. If I have the right information beforehand, I can send an unlimited
amount of information using only one bit (via an agreement like "if you
send me one bit in the next hour, then this means XYZ...").
We always have to have some information beforehand (though it may be
implicit, without being communicated first). Otherwise every signal is
useless because it could mean everything and nothing.
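A minimal sketch of this, with a hypothetical pre-agreed codebook:

```python
# With a codebook fixed in advance, a single received bit can select an
# arbitrarily large message; the information rides on the prior
# agreement, not on the bit itself. The codebook is hypothetical.

codebook = {
    "0": "no meeting today",
    "1": "the full plan XYZ: " + "details... " * 1000,  # arbitrarily long
}

received_bit = "1"
message = codebook[received_bit]
print(len(message) > 1)  # one bit received, far more conveyed
```

Of course, the codebook itself had to be communicated (or shared
implicitly) beforehand, which is the point about prior information.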

Jason Resch-2 wrote:
>> The same really applies to turing machines on a different level. A turing
>> machine is indeed utterly useless to perform actual, complex, real life
>> computation.
> Can you provide an example of such an actual, complex, real life
> computation?
Playing a modern computer game. You won't find a Turing machine that you
can play such a game on.


Sent from the Everything List mailing list archive at Nabble.com.

You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.