Brent Meeker writes:
> What's Wolfram's criterion for "randomness"? How it looks? It would seem
> that he's using something different from Chaitin if he thinks a simple
> system can produce random output. Chaitin *defines* random as an output
> that can't be reproduced by a system simpler than the output itself.
That's a good point. It's curious that Chaitin used the primes as an
example of mathematical randomness when they must have inherently low
algorithmic complexity. A handful of relatively short axioms define the
integers and the operations of addition, multiplication, etc., and this
is sufficient to produce the "random" spacing of the primes.
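To make the contrast concrete, here is a toy sketch (my own illustration, not anything from Chaitin or Wolfram): a few lines of trial-division code generate the primes, so their algorithmic complexity is low even though the gaps between them look irregular.

```python
# A tiny program suffices to generate the primes, illustrating their low
# algorithmic complexity despite the "random"-looking spacing.
def primes(n):
    """Return all primes below n by trial division."""
    found = []
    for k in range(2, n):
        if all(k % p for p in found):  # k is prime if no earlier prime divides it
            found.append(k)
    return found

ps = primes(100)
gaps = [b - a for a, b in zip(ps, ps[1:])]
print(ps[:10])    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(gaps[:10])  # [1, 2, 2, 4, 2, 4, 2, 4, 6, 2]
```

The gap sequence passes many intuitive tests of irregularity, yet the program above is far shorter than any long prefix of it -- exactly why the primes fail Chaitin's definition of randomness.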
> > Another connection between Chaitin and Wolfram has to do with the
> > underpinnings of the multiverse. From Chaitin and AIT we know that any
> > universal computer can emulate any other using a fixed size program.
> > This means that universes agree on their relative measures to within a
> > constant factor (whose value depends on which system is emulating which
> > other system). Therefore, in the limit of large programs, where this
> > constant factor becomes insignificant,
> How's that? Isn't a constant factor of the same significance for all
> sizes? Are you thinking of an additive term instead of a factor (but that
> doesn't apply to Turing emulations)?
Yes, I meant a constant additive term. Any UTM can emulate any other
TM using a fixed-size prefix in front of the other TM's input tape.
(This assumes the two TMs use the same basic alphabet of characters
for their input tapes.)
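A toy sketch of that point (the "machines" here are hypothetical Python stand-ins, nothing like real Turing machines; the only thing being illustrated is that the emulation overhead is a fixed prefix, independent of program size):

```python
# Toy illustration of emulation by a fixed-size prefix. "Machines" map a
# program string to an output; U is "universal" in the toy sense that a
# fixed prefix on its tape selects which machine to simulate.
def machine_v(prog):
    # An arbitrary toy machine: output depends on its whole program.
    return "x" * len(prog)

def machine_u(prog):
    # A toy universal machine: the fixed 2-character prefix "V:" tells it
    # to emulate machine_v on the rest of the tape.
    if prog.startswith("V:"):
        return machine_v(prog[2:])
    return prog  # default behavior: identity

# Any program p for V runs on U as "V:" + p -- a constant additive
# overhead of 2 characters, no matter how long p is.
p = "abcde"
assert machine_u("V:" + p) == machine_v(p)
print(len("V:" + p) - len(p))  # 2
```

The constant (here, 2 characters) depends only on which machine is emulating which, not on the program being run -- which is why it washes out in the limit of large programs.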
However, I may have been too general in making this assertion about all
universal computers (a larger set than UTMs). It's possible that some
UCs would need to scale the size of the program in order to emulate
others, so in those cases the overhead is not just a fixed additive term.
I would have to look more closely at other models of universal computation
to get a better understanding of this point.