With respect to definitions of information (Shannon, von Neumann, Kolmogorov, etc.), there is the completely opposite approach of Michael Leyton. He defines information as causal explanation. This is very powerful because it is driven by a meaning-making system, i.e., a cognitive system. With respect to quantitative issues, his work uses group-theoretic methods based on levels of wreath-product sequences. The wreath products come from structural characterizations of intelligent causal explanation.
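(For readers who want the algebra: the wreath product here is, as far as I can tell, the standard group-theoretic construction; the display below is the textbook definition, not a claim about the details of Leyton's own formulation.)

$$G \wr H \;=\; G^{H} \rtimes H,$$

where $H$ acts on the direct power $G^{H}$ by permuting coordinates. A hierarchy of description levels then corresponds, roughly, to an iterated sequence $G_1 \wr G_2 \wr \cdots \wr G_n$, with each factor acting as a control group over the ones before it.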
best,
Jim Johnson
----- Original Message -----
Sent: Saturday, June 10, 2006 2:22 PM
Subject: Re: [Fis] Reply to Ted Goranson: levels of description
At 08:20 AM 6/7/2006, Andrei Khrennikov wrote:

> My comment: Yes, >>deeply about the nature of information>> This is the
> crucial point. But as far as I know, there are only two ways to define
> information rigorously: classical Shannon information and quantum von
> Neumann information. In fact, all my discussion was about the possibility
> (if it would be possible at all) of reducing the second one to the first
> one.
>
> I understood that very often people speak about information in some
> heuristic sense, but then we are not able to proceed rigorously with a
> mathematical definition of information. And the only definitions I know
> are based on different kinds of entropy, and hence on probability.
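Before my reply, a concrete illustration of the two definitions Andrei mentions may help; this is only a toy numerical sketch (the states and probabilities are invented for illustration):

```python
import numpy as np

def shannon_entropy(p):
    """Classical Shannon entropy H(p) = -sum_i p_i log2 p_i."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                          # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))

def von_neumann_entropy(rho):
    """Quantum von Neumann entropy S(rho) = -Tr(rho log2 rho)."""
    eigenvalues = np.linalg.eigvalsh(rho)
    return shannon_entropy(eigenvalues)   # Shannon entropy of the spectrum

# Diagonal (classical) density matrix: the two entropies coincide.
p = [0.5, 0.25, 0.25]
print(shannon_entropy(p))                 # 1.5 bits
print(von_neumann_entropy(np.diag(p)))    # 1.5 bits

# Pure superposition state: S(rho) = 0, yet measuring in the computational
# basis gives outcome probabilities with a full bit of Shannon entropy.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)
print(von_neumann_entropy(rho))           # ~0 bits
print(shannon_entropy(np.abs(psi) ** 2))  # 1.0 bit
```

The last two lines show why any reduction of the quantum notion to the classical one is delicate: S(rho) equals the Shannon entropy of rho's eigenvalues, but measurement in a non-eigenbasis yields classical distributions of strictly greater entropy.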
Hmm. You should read Barwise and Seligman, Information Flow: The Logic of Distributed Systems. It is very important for understanding quantum information. Also, I assume that you are familiar with algorithmic complexity theory, which is certainly rigorous, and with the Minimum Description Length (Rissanen) and Minimum Message Length (Wallace and Dowe) methods, which apply Kolmogorov's and Chaitin's ideas very rigorously.
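As a flavour of how these methods quantify information rigorously, here is a toy two-part-code MDL sketch (a deliberately crude illustration with an invented parameter grid, not Rissanen's actual formulation):

```python
import math

def data_bits(data, theta):
    # Ideal code length of a binary string under a Bernoulli(theta) model.
    ones = sum(data)
    zeros = len(data) - ones
    return -(ones * math.log2(theta) + zeros * math.log2(1 - theta))

def two_part_mdl(data):
    # Two-part code: bits to describe the model, plus bits to describe the
    # data given the model, minimized over a coarse parameter grid.
    grid = [k / 100 for k in range(1, 100)]
    model_bits = math.log2(len(grid))     # bits to name one grid point
    return min((model_bits + data_bits(data, t), t) for t in grid)

data = [1] * 90 + [0] * 10                # a heavily biased coin
total, theta = two_part_mdl(data)
print(f"theta = {theta}: {total:.1f} bits vs. {len(data)} raw bits")
```

The biased string compresses to roughly half its raw length; regularity in the data shows up as a shorter total description, which is exactly the sense in which these frameworks make "information" a rigorous quantity.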
If you don't like the computational approaches for some reason, then you might want to look at Ingarden et al. (1997), Information Dynamics and Open Systems (Dordrecht: Kluwer). They show how probability can be derived from Boolean structures, which are based on the fundamental notion of information theory: that of making a binary distinction.
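To give just the direction of that derivation (a toy construction of my own for illustration, much simpler than Ingarden et al.'s): treat outcomes as leaves of a tree of binary distinctions, each distinction splitting the remaining possibilities, and a measure falls out of the counting.

```python
# Outcomes identified by the sequence of binary distinctions (bits) needed
# to isolate them; the induced measure weights each outcome by 2^-depth.
outcomes = {"a": "0", "b": "10", "c": "11"}   # a prefix-free code tree

measure = {x: 2 ** -len(bits) for x, bits in outcomes.items()}
print(measure)                  # {'a': 0.5, 'b': 0.25, 'c': 0.25}
print(sum(measure.values()))    # 1.0 -- a normalized probability measure
```

Nothing here mentions probability at the start: there are only distinctions, and the additive, normalized measure is constructed from them afterwards.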
So probability is based in information theory, not the other way around (there are other ways to show this, but I take the Ingarden et al. approach as conclusive -- Chaitin and Kolmogorov and various commentators have observed the same thing). If you think about the standard foundations of probability theory, whether Bayesian subjective approaches or the various objective approaches (frequency approaches fail for a number of reasons, so they are out, though they could be a counterexample to what I say next), then you will see that making distinctions, and/or the idea of information that is present but not accessible, are the grounds for probability theory. Information theory is the more fundamental notion: logically it is more general, and it includes probability theory as a special case. Information can be defined directly in terms of distinctions alone; probability cannot. We need to construct a measure to do that.
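In symbols, the asymmetry can be put like this (a standard Hartley-style observation, offered as illustration rather than as Ingarden et al.'s construction): distinguishing among $n$ alternatives needs only counting,

$$I = \log_2 n \ \text{bits},$$

with no measure in sight, whereas Shannon's

$$H = -\sum_i p_i \log_2 p_i$$

already presupposes a normalized, additive measure $\mu$ on the field of distinctions ($\mu(\Omega) = 1$ and $\mu(A \cup B) = \mu(A) + \mu(B)$ for disjoint $A$, $B$), reducing to $\log_2 n$ only in the equiprobable case.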
John

Professor John Collier
[EMAIL PROTECTED]
Philosophy and Ethics, University of KwaZulu-Natal
Durban 4041, South Africa
T: +27 (31) 260 3248 / 260 2292
F: +27 (31) 260 3031
http://www.nu.ac.za/undphil/collier/index.html
_______________________________________________
fis mailing list
fis@listas.unizar.es
http://webmail.unizar.es/mailman/listinfo/fis