Re: Subjective measure and turing machine terminology

2004-01-25 Thread Stephen Paul King
Dear Friends,

Also see Svozil:

http://tph.tuwien.ac.at/~svozil/publ/publ.html

Stephen

- Original Message - 
From: "CMR" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, January 25, 2004 10:25 PM
Subject: Re: Subjective measure and turing machine terminology


> > Again, I really suggest that you read the book. It's very good and will
> > explain all this for you. If you don't have ready access to the book,
> > there are some online introductions to algorithmic information theory that
> > you could try (see http://www.idsia.ch/~marcus/kolmo.htm#tutorials) but
> > the book will review all of the prerequisites (such as Turing machines)
> > for you and give a much more complete overview of the field.
> >
>
> And Chaitin's web site has a lot of articles and material he's written on
> the subject:
>
> http://www.cs.auckland.ac.nz/CDMTCS/chaitin/
>
> (And it doubles as a bit of a Leibniz fanzine as well)
>
> Calude's site also has some good stuff on AIT:
>
> http://www.cs.auckland.ac.nz/~cristian/
>
>




Re: Subjective measure and turing machine terminology

2004-01-25 Thread CMR
> Again, I really suggest that you read the book. It's very good and will
> explain all this for you. If you don't have ready access to the book,
> there are some online introductions to algorithmic information theory that
> you could try (see http://www.idsia.ch/~marcus/kolmo.htm#tutorials) but
> the book will review all of the prerequisites (such as Turing machines)
> for you and give a much more complete overview of the field.
>

And Chaitin's web site has a lot of articles and material he's written on
the subject:

http://www.cs.auckland.ac.nz/CDMTCS/chaitin/

(And it doubles as a bit of a Leibniz fanzine as well)

Calude's site also has some good stuff on AIT:

http://www.cs.auckland.ac.nz/~cristian/



Re: Subjective measure and turing machine terminology

2004-01-25 Thread Wei Dai
On Sat, Jan 24, 2004 at 07:31:50PM -0800, Eric Hawthorne wrote:
>I took some small smattering of that stuff in comp sci undergrad, but

Algorithmic information theory is generally not taught in undergrad
courses. At least I didn't see it during my CS undergrad years.
(Which is too bad because it's really interesting and I would have had a 
lot of fun learning it in school. Instead I had to learn about compilers 
and graphics. A lot of good those courses did me...)

>essentially
>what it lets me understand is that some algorithms are O(1), O(n),
>O(n log n), O(n^2), O(e^n), etc.

The complexity that you're talking about is resource complexity (running
time or space), whereas I'm talking about information complexity (the
algorithmic, or Kolmogorov, complexity of a string).
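Roughly: a highly regular string and a random string of the same length
take the same time to print out, but the regular one has a far shorter
description. A crude illustration (only a sketch; Kolmogorov complexity
itself is uncomputable, so zlib's compressed size is just standing in as
a computable upper bound):

    import os
    import zlib

    n = 10000
    regular = b"01" * (n // 2)    # producible by a very short program
    random_ish = os.urandom(n)    # no description much shorter than itself

    # Both take about the same work to print out, but their shortest
    # descriptions differ enormously.  Compressed size is only an upper
    # bound on the true algorithmic complexity.
    print(len(zlib.compress(regular)))     # tiny
    print(len(zlib.compress(random_ish)))  # roughly n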

>I'm also generally familiar with Turing machine concepts, but I'm
>rusty on the details.
>I'm a bit confused as to what is meant by a string having a lower
>algorithmic complexity.

Again, I really suggest that you read the book. It's very good and will
explain all this for you. If you don't have ready access to the book,
there are some online introductions to algorithmic information theory that
you could try (see http://www.idsia.ch/~marcus/kolmo.htm#tutorials) but
the book will review all of the prerequisites (such as Turing machines)  
for you and give a much more complete overview of the field.



Re: Subjective measure and turing machine terminology

2004-01-24 Thread Eric Hawthorne

Wei Dai wrote:

  On Sat, Jan 24, 2004 at 12:21:40PM -0800, Eric Hawthorne wrote:

    Can you explain briefly why the choice of measure is subjective? I
    haven't read any of the books you mentioned (will try to get to them)
    but am familiar with computability theory and decision theory.

  Since you do not mention that you're familiar with the theory of
  algorithmic complexity, I suggest that you read the first book on that
  list ASAP. The following response might not make sense until you do.

I took some small smattering of that stuff in comp sci undergrad, but
essentially what it lets me understand is that some algorithms are O(1),
O(n), O(n log n), O(n^2), O(e^n), etc. I'm also generally familiar with
Turing machine concepts, but I'm rusty on the details.

I'm a bit confused as to what is meant by a string having a lower
algorithmic complexity. Does that mean that the shortest program that
could result in a symbol string of that form has a certain algorithmic
complexity that is lower than the complexity of the shortest program
that could compute some other string? And what are these strings anyway?
Symbol strings that make up a finite portion of the Turing machine's
tape, conceptually?

A question that would arise with that definition of "the algorithmic
complexity of a string" is: the shortest program that could generate the
string starting with what as its input? Surely if the input were a
string that was, say, just one value in one tape position different from
the output string, then any output string could be computed from that
special input by a trivial Turing machine program (one step or so). So
how do you define what the input is when assessing "the algorithmic
complexity of a string"?
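If I had to guess at the convention (rough notation, which I may well be
mangling), it would be something like

    K(x) = \min \{ |p| : U(p) = x \}
    K(x \mid y) = \min \{ |p| : U(p, y) = x \}

for some fixed reference machine U started on an otherwise blank tape,
with the second, conditional form covering the case where an auxiliary
input y is supplied. In which case my one-value-different example would
only show that the conditional quantity can be tiny while the plain one
stays large. Is that right?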

Or is the string a sequence of instructions and data-store positions
comprising the Turing machine program itself, so that we're discussing
the inherent computational complexity of that particular program, for
any (or average, or whatever) input?

I guess I have more trouble mapping directly in my head from Turing
machine programs to multiverse states than
I do mapping raw bitstrings to multiverse states.

The general question I asked above would seem to come down to "isn't
the complexity of getting to
some subsequent information state determined by what the previous
information state is?"


Second terminology thing: when you say "each universal Turing machine",
I again get confused. Isn't "a Turing machine" just the abstraction
consisting of the movable read/write head and a tape? Isn't the correct
terminology "each Turing machine PROGRAM" that is NP-complete, or that
is "universal"? How can we have different machines themselves? Or is it
conventional to say that "a Turing machine" is the movable head, plus
its current position, plus a particular set of values on the tape (i.e.
a particular program)? In normal computing terminology, the machine is
the machine, the software program is the software program, and the data
is the data.
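For what it's worth, here is the picture I'm guessing at (a loose sketch
only; Python's exec is just an analogy for a universal machine's fixed
transition table, and the encoding is made up):

    # A particular Turing machine has a fixed transition table and
    # computes one fixed function of its tape.  A *universal* machine is
    # one particular machine whose fixed transition table interprets a
    # (program, input) pair written on its tape.  "Different universal
    # machines" would then mean different choices of interpreter and of
    # program encoding, not different programs.
    def universal(encoded_program, data):
        namespace = {}
        exec(encoded_program, namespace)   # decode and run the "program"
        return namespace["run"](data)

    print(universal("def run(x): return x[::-1]", "abc"))   # -> "cba"

Is that roughly what's meant, or does "universal Turing machine" carry
some other technical load here?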

If you can just help me a little with these terminology stumbling
blocks, I'm sure I (and other computational-complexity-theory tourists
on the list) can understand the concepts.




Basically, all of the sensible proposed measures are based on the
universal distribution, which assigns larger probabilities to strings
that have lower algorithmic complexity. However, there is actually an
infinite class of universal distributions, one for each universal Turing
machine, and there are no objective criteria for determining which one
should be used.
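In symbols (one standard formulation; the notation here is assumed
rather than quoted from the book): a universal prefix machine U induces

    m_U(x) = \sum_{p : U(p) = x} 2^{-|p|}  \approx  2^{-K_U(x)}

so strings with shorter minimal descriptions get exponentially more
weight. For two universal machines U and V the invariance theorem only
gives K_U(x) \le K_V(x) + c_{U,V} for a constant depending on U and V,
so the corresponding distributions agree up to a machine-dependent
multiplicative factor and no further, which is where the freedom of
choice comes from.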

Another problem is that using the universal distribution forces you to
assume that non-computable universes do not exist. If one does not want
to make this assumption, then a more dominant measure needs to be used
(for example, one based on a TM with an oracle for the halting problem,
or on the complexity of a string's logical definition), but then there
are even more measures to choose from (how high up the computability
hierarchy do you go? how high up the set-theoretic hierarchy?).

Now suppose that two people, Alice and Bob, somehow agree that a measure M
is the objectively correct measure, but Bob insists on using measure M' in
making decisions. He says "So what if universe A has a bigger measure than
universe B according to M? I just care more about what happens in universe
B than in universe A, so I'll use M', which assigns a bigger measure to
universe B." What can Alice say to Bob to convince him that he is
not being rational? I don't see what the answer could be.