Jason Resch-2 wrote:
> 
> On Tue, Aug 28, 2012 at 2:57 PM, benjayk
> <benjamin.jaku...@googlemail.com>wrote:
> 
>>
>> It seems that the Church-Turing thesis, that states that an universal
>> turing
>> machine can compute everything that is intuitively computable, has near
>> universal acceptance among computer scientists.
>>
>> I really wonder why this is so, given that there are simple cases where
>> we
>> can compute something that an abitrary turing machine can not compute
>> using
>> a notion of computation that is not extraordinary at all (and quite
>> relevant
>> in reality).
>> For example, given you have a universal turing machine A that uses the
>> alphabet {1,0} and a universal turing machine B that uses the alphabet
>> {-1,0,1}.
>> Now it is quite clear that the machine A cannot directly answer any
>> questions that relates to -1. For example it cannot directly compute
>> -1*-1=1. Machine A can only be used to use an encoded input value and
>> encoded description of machine B, and give an output that is correct
>> given
>> the right decoding scheme.
>>
> 
> 1's or 0's, X's or O's, what the symbols are don't have any bearing on
> what
> they can compute.
> 
That's just an assertion of the belief I am trying to question here.
In reality, it *does* matter which symbols/things we use to compute. A
computer that only uses one symbol (for example, a computer that adds using
marbles) would be pretty useless.
It matters in many different ways: speed of computation, efficiency of
computation, amount of memory, efficiency of memory, ease of programming,
size of programs, ease of interpreting the result, number of layers of
programming needed to interpret the result and to program efficiently, ease
of introspecting the state of the computer...

Why would we abstract from all of that and then reduce computation to one
very abstract and incomplete model of computation?
If we do this, we might as well abstract from the process of computation and
say that every string can be used to emulate any machine, because if you know
what program it expresses, you know what it would compute (if correctly
interpreted). There's no fundamental difference. Strings need to be
interpreted to make sense as a program, and a Turing machine without
negative numbers needs to be interpreted to make sense as a program
computing the result of an equation involving negative numbers.
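
To make concrete what I mean by the encoding and decoding doing the real
work, here is a rough Python sketch (entirely my own toy illustration; the
two-bit codes are an arbitrary convention I made up). The "machine" below
only ever manipulates strings over {0,1}; the fact that it "computes -1*-1"
exists only in the encode/decode convention wrapped around it:

# Hypothetical encoding of the alphabet {-1, 0, 1} into two-bit codes.
ENCODE = {-1: "10", 0: "00", 1: "01"}
DECODE = {v: k for k, v in ENCODE.items()}

def binary_machine(tape: str) -> str:
    """Operates purely on 0/1 strings: it never 'sees' a -1 anywhere."""
    a, b = tape[:2], tape[2:4]
    # Multiplication table expressed entirely over the binary codes.
    product = {("10", "10"): "01", ("10", "01"): "10", ("01", "10"): "10",
               ("01", "01"): "01", ("00", "00"): "00", ("00", "01"): "00",
               ("01", "00"): "00", ("00", "10"): "00", ("10", "00"): "00"}
    return product[(a, b)]

# The question "-1 * -1 = ?" only exists at the level of encode/decode:
tape = ENCODE[-1] + ENCODE[-1]         # "1010"
answer = DECODE[binary_machine(tape)]  # 1
print(answer)

Strip away ENCODE and DECODE and all that is left is a lookup over bit
strings; whether it "is" a computation about -1 is not in the machine.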


Jason Resch-2 wrote:
> 
> Consider: No physical computer today uses 1's or 0's, they use voltages,
> collections of more or fewer electrons.
OK, but in this case the abstraction makes sense for computer scientists,
because programmers don't have access to that level. You are right, though,
that a chip engineer shouldn't abstract from that level if he actually wants
to build a computer.


Jason Resch-2 wrote:
> 
> This doesn't mean that our computers can only directly compute what
> electrons do.
In fact they do much more. Electrons express strictly more than just 0 and
1. So it's not a good analogy, because 0 and 1 express *less* than 0, 1 and
-1.


Jason Resch-2 wrote:
> 
> But for me this already makes clear that machine A is less computationally
>> powerful than machine B. Its input and output when emulating B do only
>> make
>> sense with respect to what the machine B does if we already know what
>> machine B does, and if it is known how we chose to reflect this in the
>> input
>> of machine A (and the interpretation of its output). Otherwise we have no
>> way of even saying whether it emulates something, or whether it is just
>> doing a particular computation on the alphabet {1,0}.
>>
> 
> These are rather convincing:
> http://en.wikipedia.org/wiki/Video_game_console_emulator
> 
> There is software that emulates the unique architectures of an Atari,
> Nintendo, Supernintendo, PlayStation, etc. systems.  These emulators can
> also run on any computer, whether its Intel X86, x86_64, PowerPC, etc. 
> You
> will have a convincing experience of playing an old Atari game like space
> invaders, even though the original creators of that program never intended
> it to run on a computer architecture that wouldn't be invented for another
> 30 years, and the original programmers didn't have to be called in to
> re-write their program to do so.
Yes, I use them as well. They are indeed convincing. But this doesn't really
relate to the question very much.
First, our modern computers are pretty much strictly more computationally
powerful in every practical and theoretical way. It would be more of an
argument if you could simulate Windows on a Nintendo (but you can't). I am
not saying that a Turing machine using 0, 1 and -1 can't emulate a machine
using only 0 and 1.
Secondly, even these emulations are only correct as far as our playing
experience goes (well, at least if you are not nostalgic about the hardware).
The actual process going on in the computer is very different, and thus it
makes sense to say that it computes something else. Its computations merely
have similar results in terms of experience, but they need vastly more
resources and compute something more (all the virtual layers required to
make it work).
I don't see why we would abstract from that. I can see why we do it in some
circumstances, but the CT-thesis is a general statement, and as such it is
completely unwarranted, IMO.
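
To illustrate what I mean by the underlying process being different even
when the visible result is "the same", here is a little Python toy (my own
example, nothing to do with real console emulators): adding two numbers
natively versus by pushing symbols around on a unary "tape" gives the same
answer, but the process and the resources it uses are nothing alike.

def tape_add(a: int, b: int):
    """Add by rewriting a unary tape, counting the explicit rewrite steps."""
    tape = list("1" * a + "0" + "1" * b)   # unary: a ones, a separator, b ones
    steps = 0
    # Shift the separator to the right end, one cell at a time.
    while tape.index("0") != len(tape) - 1:
        i = tape.index("0")
        tape[i], tape[i + 1] = tape[i + 1], tape[i]
        steps += 1
    return tape.count("1"), steps

print(17 + 25)            # 42, done natively by the host
print(tape_add(17, 25))   # (42, 25): same answer, via a very different process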


Jason Resch-2 wrote:
> 
>> I realize that it all comes down to the notion of computation. But why do
>> most choose to use such a weak notion of computation? How does machine B
>> not
>> compute something that A doesn't by any reasonable standard?
>> Saying that A can compute what B computes is like saying that "orange"
>> can
>> express the same as the word "apple", because we can encode the word
>> "apple"
>> as "orange".
> 
> 
> System A (using its own language of representation for system A), can
> predict exactly all future states of another system B (and vice versa).
Nope, it can't even *express* the future states. We can just use it to
predict a future state (if we are clever enough). But why would that be the
standard for what is computed?


Jason Resch-2 wrote:
> 
>   A and B have different symbols, states, instructions, etc., so perhaps
> this
> is why you think system A can't perfectly emulate system B, but this is a
> little like saying there are things that can only be described by Spanish
> speakers that no other language (French, English, etc.) could describe.
>  Sure, a translation needs to occur to communicate a Spanish idea into an
> English one, but just because spanish and english speakers use a different
> language doesn't mean there are problems only speakers of one language can
> solve.
If the languages are similar in capability, yes. If they are not, no. Some
languages may completely lack concepts that others have, making them
inadequate for certain purposes (the languages of some indigenous peoples
come to mind - their speakers probably couldn't grasp certain concepts, like
the logarithm, without adding words to their language).

Even if we grant that what you say is true, why would we define computation
as completely abstracted from the way something is expressed?
Especially when languages are very different (and programming languages can
be *very* different), the way we express things matters so much that it is
quite meaningless to even say they express the same thing.

Tell me, does "00000000000000000000000000000000000000000000" really,
practically, express the same thing as "44"? For all intents and purposes, it
does not. "44" makes clear to everyone that you mean a number (even without
context), is easy to read, is easily interpreted without error, is easier to
use correctly, etc...
So using different symbols expands what the system can express on a very
relevant level.
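
Here is the same point as a small Python sketch (my own illustration, taking
the string above to be 44 zeros, as the comparison with "44" suggests): the
two strings only "express the same number" relative to a decoding
convention, and that convention is nowhere in the symbols themselves. Apply
the wrong one and you get a different answer.

unary = "0" * 44      # assumed to stand for 44, read as a unary tally
decimal = "44"

def decode_unary(s: str) -> int:
    return len(s)     # only gives 44 if you already know the string is unary

def decode_decimal(s: str) -> int:
    return int(s)     # only gives 44 if you already know it is base ten

print(decode_unary(unary), decode_decimal(decimal))  # 44 44, given the right conventions
print(int(unary))  # 0 -- the "wrong" convention reads 44 zeros as the number zero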

This is even more obvious if we take a more extreme example: Does
10001101001010101010111100... (goes on for 10000000 more symbols) really
express the same as the photo it represents? Hardly at all. For one thing,
it only makes sense as a photo because we know it is supposed to be a
photo and use the data accordingly - but this information itself is not
contained in the symbols.


Jason Resch-2 wrote:
> 
>> It is true in a very limited sense, but it seems mad to treat
>> it as the foundation of what it means for words to express something (and
>> the same goes for computation).
>> If we use such trivial notions of computation, why not say that the
>> program
>> "return input" emulates all turing-machines because given the right input
>> it
>> gives the right output (we just give it the solution as input).
>>
> 
> Many programs have no input and/or no output, but they still can be
> rightfully said to perform different computations.
OK. Then we use the same program but demand as input the correct development
of the execution of the program (use it multiple times if you haven't
emulated enough steps).
You see? We can trivialize the notion of computation as much as we like, by
adding more and more layers of interpretation, or more demands on the right
usage of the program. It just doesn't make sense to pretend that this
trivialization actually is what computation is.
Sure, my example is a lot more useless than the CT notion of computation,
but it is the same principle: abstracting and interpreting until we arrive
at a trivialization of the actual phenomenon.
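
For what it's worth, here is the trivialized "emulator" I am talking about,
as a Python sketch (the names and the example trace are made up, of course):
the program itself does nothing; all the work lives in preparing the input
and in interpreting the output.

def trivial_emulator(precomputed_trace):
    return precomputed_trace   # all of the actual computation happened elsewhere

# To "emulate" some machine M on input x, the user has to hand over exactly
# what the emulator was supposed to produce in the first place:
trace_of_M_on_x = ["state 0", "state 1", "state 2: halt, output = 42"]
print(trivial_emulator(trace_of_M_on_x))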

A Turing machine by itself simply doesn't emulate anything. We just provide
it input and interpret its output to emulate something *using* a Turing
machine.
Only by senselessly abstracting away the crucial steps of encoding, decoding
and interpreting do we arrive at the CT thesis.
By interpreting correctly, we can use any symbols for anything. We "just"
have to interpret them correctly. So this symbol tells you how to build a
very advanced AI: "°". Of course you still have to interpret what it means.
;)


Jason Resch-2 wrote:
> 
>> I get that we can simply use the Church-turing as the definition of
>> computation means. But why is it (mostly) treated as being the one and
>> only
>> correct notion of computation (especially in a computer science context)?
> 
> 
> I think it more comes into play in the a definition of a universal
> machine,
> than in the definition of computation.
> 
> It is useful because it makes it easy to prove.  All you need to do is
> show
> how some machine can be used to emulate any other known turning universal
> machine.
Well, every machine that can output its input can emulate any Turing
machine, only in an utterly trivial and useless way (just give it the correct
emulation as input).
The same really applies to Turing machines on a different level. A Turing
machine is indeed utterly useless for performing actual, complex, real-life
computation.


Jason Resch-2 wrote:
> 
>>
> 
> The only explanation I have is that it is dogma. To question it would
> change
>> to much and would be too "complicated" and uncomfortable. It would make
>> computation an irreducibly complex and relative notion or - heaven forbid
>> -
>> even an inherently subjective notion (computation from which
>> perspective?).
> 
> 
> Reverse engineering machine language code is very difficult, but there are
> automated programs for doing this that can provide much more readable
> program code.  Code too, can be difficult to grasp, (see
> http://www.ioccc.org/ for example), but in other cases, code is easy to
> understand.  I often prefer a snippet of example code to a notation-heavy
> mathematical formula.
> 
OK, but I don't get the relation to what I wrote.

benjayk
