On Sat, Oct 20, 2012 at 09:16:54PM +0200, Alberto G. Corona  wrote:
> Is this not a consequence of Shannon optimal coding, in which the
> coding size of a symbol is inversely proportional to the logarithm
> of the frequency of the symbol?

Not quite. Traditional Shannon entropy uses the probability of each
symbol, whereas algorithmic complexity uses the probability of the
whole sequence. The two agree only when the symbols are independently
and identically distributed (i.i.d.), and in most messages they are
not.
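To illustrate the gap, here is a small Python sketch. It compares the
symbol-wise Shannon entropy of a sequence with its compressed size; zlib is
only a crude computable stand-in for algorithmic complexity, which is
uncomputable, and the sequence is an invented example.

```python
import math
import zlib
from collections import Counter

def per_symbol_entropy(s):
    # Shannon entropy computed from individual symbol frequencies (bits/symbol)
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A highly correlated sequence: 'a' and 'b' each occur with frequency 1/2,
# so the symbol-wise entropy is exactly 1 bit/symbol...
seq = "ab" * 5000
h = per_symbol_entropy(seq)

# ...but the sequence as a whole is trivially compressible, so its
# description length per symbol is far below 1 bit.
bits_per_symbol = 8 * len(zlib.compress(seq.encode())) / len(seq)
print(h, bits_per_symbol)
```

The symbol-wise figure overstates the information content precisely because
the symbols are not i.i.d.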

> 
> What is exactly the comp measure problem?

A UD generates and executes all programs, many of which are
equivalent, so some computations are represented by more programs than
others. The COMP measure is a function over programs that captures
this variation in program representation.
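As a toy illustration of such a measure (everything here is invented for
the example: the "machine" and its equivalence classes are stand-ins), one
can weight each output by the total 2^-length mass of the programs that
produce it, Solomonoff-style:

```python
from collections import defaultdict
from itertools import product

# Toy "machine": a program is a bitstring; its output is the program with
# trailing zeros stripped (a hypothetical rule, chosen only so that many
# distinct programs are equivalent).
def run(program):
    return program.rstrip("0") or "0"

# Weight each output x by sum of 2^-len(p) over programs p producing x,
# enumerating all programs up to max_len.
def measure(max_len):
    m = defaultdict(float)
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            m[run("".join(bits))] += 2.0 ** -n
    return dict(m)

print(measure(3))
```

Outputs reachable by many short programs (here "0" and "1") accumulate far
more weight than outputs with essentially one representation.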

Why should this measure be unique, independent of the UD or of the
universal Turing machine it runs on? Because the UD executes every
other UD, as well as itself, the measure will be a limit over
contributions from all UDs.
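The generate-and-execute loop of a UD can be sketched as a dovetailing
schedule (a minimal sketch; a real UD interleaves actual machine steps
rather than yielding abstract pairs):

```python
# Toy dovetailing schedule: at stage n, program n is started and each of
# programs 1..n advances one step, so every (program, step) pair is reached
# after finitely many stages -- even for programs that never halt.
def dovetail(stages):
    for n in range(1, stages + 1):
        for p in range(1, n + 1):
            yield (p, n - p + 1)  # program p performs its (n-p+1)-th step
```

Each pair (p, s) appears exactly once, at stage p + s - 1, which is why no
non-halting program can block the execution of the others.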

Cheers
-- 

----------------------------------------------------------------------------
Prof Russell Standish                  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics      hpco...@hpcoders.com.au
University of New South Wales          http://www.hpcoders.com.au
----------------------------------------------------------------------------

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.