To ALL discussants:

Please take into account that posting to this list is restricted to two messages per week. It is the Second Rule of our info club...

Fis List moderator

On 30/03/2017 at 11:12, John Collier wrote:

Dear Hector,

Personally, I agree that algorithmic information theory and the related concepts of randomness and Bennett’s logical depth are the best way to go. I have used them in many of my own works. When I met Chaitin a few years back we talked mostly about how unrewarding and controversial our work on information theory has been. When I wrote an article on information for the Stanford Encyclopaedia of Philosophy it was rejected in part because of fierce divisions between supporters of Chaitin and supporters of Kolmogorov! The material I included on Spencer Brown was criticized because “he was some sort of Buddhist, wasn’t he?” It sounds like you have run into similar problems.

That is why I suggested a realignment of what this group should be aiming for. I think the end result would justify our thinking, and your work certainly furthers it. But it does need to be worked out. Personally, I don’t have the patience for it.

John Collier

Emeritus Professor and Senior Research Associate

Philosophy, University of KwaZulu-Natal

*From:* Hector Zenil []
*Sent:* Thursday, 30 March 2017 10:48 AM
*To:* John Collier <>; fis <>
*Subject:* Re: [Fis] Causation is transfer of information

Dear John et al. Some comments below:

On Thu, Mar 30, 2017 at 9:47 AM, John Collier <> wrote:

    I think we should try to categorize and relate information
    concepts rather than trying to decide which is the “right one”. I
    have tried to do this by looking at various uses of information in
    science, arguing that the main uses show progressive containment:
    “Kinds of Information in Scientific Use”, /cognition,
    communication, co-operation/ 9(2), 2011.

    There are various mathematical formulations of information as
    well, and I think the same strategy is required here. Sometimes
    they are equivalent, sometimes close to equivalent, and sometimes
    quite different in form and motivation. Work on the foundations of
    information science needs to make these relations clear. A few
    years back (more than a decade) a mathematician on a list
    (newsgroup) argued that there were dozens of different
    mathematical definitions of information. I thought this was a bit
    excessive, and argued with him about convergences, but he was
    right that they were mathematically different. We need to look at
    information theory structures and their models to see where they
    are equivalent and where (and if) they overlap. Different
    mathematical forms can have models in common, sometimes all of them.

The agreement among professional mathematicians is that the correct definition of randomness, as opposed to information, is Martin-Löf's definition for the infinite asymptotic case, and Kolmogorov-Chaitin's for the finite case. Algorithmic probability (Solomonoff, Levin) is the theory of optimal induction and thus gives a formal, universal meaning to the value of information. The general agreement is also that Bennett's logical depth separates the concept of randomness from that of information structure. There is not much controversy there on the nature of classical information as algorithmic information. Notice that 'algorithmic information' is not just one more definition of information; it IS the definition of mathematical information (again, by way of defining algorithmic randomness). So adding 'algorithmic' to information is not to single out a special case that can then be ignored by philosophy of information.

All the above builds on (and goes well beyond) Shannon entropy, which is not even properly discussed in philosophy of information beyond its most basic definition (we rarely, if ever, see discussions of mutual information, conditional information, Judea Pearl's interventionist approach and counterfactuals, etc.), let alone anything from the more advanced areas mentioned above, or any discussion of the now well-established area of quantum information, which is also completely ignored.
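
To make the contrast concrete, here is a minimal Python sketch (a toy illustration added for this discussion, not taken from any of the papers mentioned) of two of the quantities rarely discussed: Shannon entropy estimated from symbol frequencies, and mutual information computed from joint counts.

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy H(X) in bits, from the empirical symbol frequencies."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(pairs):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), from a sequence of (x, y) pairs."""
    xs, ys = zip(*pairs)
    return entropy(xs) + entropy(ys) - entropy(pairs)

# When Y is a copy of X (a fair coin), knowing Y tells you everything
# about X: the mutual information equals the full 1 bit of H(X).
pairs = [(0, 0), (1, 1)] * 50
print(mutual_information(pairs))  # → 1.0
```

The same function applied to independent pairs, e.g. the uniform joint distribution over {0,1}×{0,1}, returns 0 bits, which is the sanity check one would expect.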

This is like trying to do philosophy of cosmology by discussing Gamow and Hubble while ignoring relativity, or philosophy of language today by discussing Locke and Hume but not Chomsky, or philosophy of mind by discussing the findings of Ramón y Cajal and claiming that his theories are not enough to explain the brain. It is a sort of strawman fallacy, constructing an opponent living in the 1940s in order to claim in 2017 that it fails to explain everything about information. Shannon entropy is a symbol-counting function with interesting applications, as Shannon himself knew. It makes no sense to expect a symbol-counting function to tell us anything new about information after 60 years. I refer again to my paper on how Entropy deceives:

I do not blame philosophers alone on this one; physicists seem to assign Shannon entropy some mystical power, which is why I wrote a paper proving that it cannot be used for graph complexity as some physicists have recently suggested (e.g. Bianconi via Barabási). But this is the kind of discussion we should be having: telling physicists not to go back to the 1940s when it comes to characterizing new objects. If Shannon entropy fails to characterize sequences, it will not work for other objects (graphs!).
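
The point about entropy failing to characterize sequences can be illustrated in a few lines of Python (a toy demonstration added here, not the construction from the paper): a perfectly periodic string and a shuffled copy of it have identical symbol frequencies, hence identical Shannon entropy, yet a lossless compressor, serving as a crude stand-in for an upper bound on algorithmic information, separates them immediately.

```python
import math
import random
import zlib
from collections import Counter

def shannon_entropy(s):
    """Per-symbol Shannon entropy in bits, from symbol frequencies alone."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

periodic = "01" * 5000            # maximally structured: a short program generates it
chars = list(periodic)
random.seed(0)
random.shuffle(chars)             # same symbol counts, structure destroyed
shuffled = "".join(chars)

# The counting function cannot tell the two apart:
print(shannon_entropy(periodic), shannon_entropy(shuffled))  # → 1.0 1.0

# A compressor can: the periodic string collapses to a few dozen bytes,
# while the shuffled one stays incompressible (roughly 1 bit per symbol).
print(len(zlib.compress(periodic.encode())))
print(len(zlib.compress(shuffled.encode())))
```

Compressed length is only an upper-bound proxy for Kolmogorov-Chaitin complexity, but it already suffices to expose what frequency counting misses.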

I think the field of philosophy of information cannot be taken seriously until serious discussion of the topics above starts to take place. Right now the field is small and carried out by a few mathematicians and physicists. Philosophers are left behind because they are choosing to ignore all the theory developed in the last 50 to 60 years. I hope this is taken constructively. I think we philosophers need to step up; if we are not leading the discussion, at least we should not be 50 or 60 years behind. I have tried to close that gap, but usually I too get conveniently ignored =)

    I have argued that information originates in symmetry breaking
    (making a difference, if you like, but I see it as a dynamic
    process rather than merely as a representation): Information
    Originates in Symmetry Breaking
    <> (/Symmetry/, 1996).

Very nice paper. I agree on symmetry breaking; I have similar ideas:

(published in the journal /Natural Computing/)

On how symmetric rules can produce asymmetric information.


Hector Zenil

    I adopt what I call dynamical realism, that anything that is real
    is either dynamical or interpretable in dynamical terms. Not
    everyone will agree.

    John Collier

    Emeritus Professor and Senior Research Associate

    Philosophy, University of KwaZulu-Natal

    *From:* Guy A Hoelzer
    *Sent:* Wednesday, 29 March 2017 1:44 AM
    *To:* Sungchul Ji <>; Terry Deacon <>; John Collier <>;
    Foundations of Information Science <>
    *Subject:* Re: [Fis] Causation is transfer of information

    Greetings all,

    It seems that the indigestion from competing definitions of
    ‘information’ is hard to resolve, and I agree with Terry and
    others that a broad definition is preferable.  I also think it is
    not a problem to allow multiple definitions that can be
    operationally adopted in appropriate contexts.  In some respects,
    apparently competing definitions are actually reinforcing.  For
    example, I prefer to use ‘information’ to describe any difference
    (a distinction or contrast), and it is also true that a subset of
    all differences are ones that ‘make a difference’ to an observer.
    When we restrict ‘information’ to differences that make a
    difference it becomes inherently subjective.  That is certainly
    not a problem if you are interested in subjectivity, but it would
    eliminate the rationality of studying objective ‘information’,
    which I think holds great promise for understanding dynamical
    systems.  I don’t see any conflict between ‘information’ as
    negentropy and ‘information’ as a basis for decision making.  On
    the other hand, semantics and semiotics involve the attachment of
    meaning to information, which strikes me as a separate and
    complementary idea.  Therefore, I think it is important to sustain
    this distinction explicitly in what we write.  Maybe there is a
    context in which ‘information’ and ‘meaning’ are so intertwined
    that they cannot be isolated, but I can’t think of one.  I’m sure
    there are plenty of contexts in which the important thing is
    ‘meaning’, and where the (more general, IMHO) term ‘information’
    is used instead.  I think it is fair to say that you can have
    information without meaning, but you can’t have meaning without
    information. Can anybody think of a way in which it might be
    misleading if this distinction was generally accepted?



        On Mar 28, 2017, at 3:26 PM, Sungchul Ji <> wrote:

        Hi Fisers,

        I agree with Terry that "information" has three
        irreducible aspects --- /amount/, /meaning/, and /value/. These
        somehow may be related to another triadic relation called
        the ITR as depicted below, although I don't know the exact
        rule of mapping between the two triads. Perhaps 'amount' = f,
        'meaning' = g, and 'value' = h?

                         f                     g
            Object --------------> Sign --------------> Interpretant
               |                                              ^
               |                                              |
               +----------------------------------------------+
                                      h


        *Figure 1.* The /Irreducible Triadic Relation/ (ITR) of
        semiosis (also called sign process or communication), first
        clearly articulated by Peirce to the best of my
        knowledge. /Warning/: Peirce often replaces Sign with
        Representamen and represents the whole triad, i.e., Figure 1
        itself (although he did not use such a figure in his
        writings), as the Sign. Not distinguishing between these two
        very different uses of the same word "Sign" can lead to
        semiotic confusions.  The three processes are defined as
        follows: f = sign production, g = sign interpretation, h =
        information flow (other ways of labeling the arrows are
        not excluded).  Each process or arrow reads "determines",
        "leads to", "is presupposed by", etc., and the three
        arrows constitute a /commutative triangle/ of category theory,
        i.e., f x g = h, meaning f followed by g leads to the same
        result as h.
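
Read as code, the commutative triangle is simply function composition. A minimal Python sketch (the function bodies are hypothetical placeholders added for illustration, not part of the ITR itself):

```python
# Toy stand-ins for the three ITR processes of Figure 1:
#   f: Object -> Sign          (sign production)
#   g: Sign   -> Interpretant  (sign interpretation)
#   h: Object -> Interpretant  (information flow)

def f(obj):
    return f"sign({obj})"           # sign production

def g(sign):
    return f"interpretant({sign})"  # sign interpretation

def h(obj):
    return g(f(obj))                # information flow as the composite path

# The triangle commutes: f followed by g leads to the same result as h.
assert h("object") == g(f("object"))
print(h("object"))  # → interpretant(sign(object))
```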

        I started using the so-called ITR template, *Figure 1*,
        about 5 years ago, and the main reason I am bringing it up
        here is to ask your critical opinion on my
        suggestion published in 2012 (Molecular Theory of the Living
        Cell: Concepts, Molecular Mechanisms, and Biomedical
        Applications, Springer, New York, p ~100 ?) that there are two
        kinds of causality -- (i) the energy-dependent causality
        (identified with /Processes f/ and /g/ in *Figure 1*) and (ii)
        the information (and hence code)-dependent causality
        (identified with /Process h/).  For convenience, I coined the
        term 'codality' to refer to the latter, to contrast it with
        the traditional term causality.

        I wonder if we can view John's idea of the relation between
        'information' and 'cause' as an alternative way of
        expressing the same ideas as the "energy-dependent causality"
        or the "codality" defined in *Figure 1*.

        All the best.



        *From:* Fis <> on behalf of Terrence W. DEACON <>
        *Sent:* Tuesday, March 28, 2017 4:23:14 PM
        *To:* John Collier
        *Subject:* Re: [Fis] Causation is transfer of information

        Corrected typos (in case the intrinsic redundancy didn't
        compensate for these minor corruptions of the text):

         information-beqaring medium =  information-bearing medium

        appliction = application

         conceptiont =  conception

        On Tue, Mar 28, 2017 at 10:14 PM, Terrence W. DEACON <> wrote:

            Dear FIS colleagues,

            I agree with John Collier that we should not assume to
            restrict the concept of information to only one subset of
            its potential applications. But to work with this breadth
            of usage we need to recognize that 'information' can refer
            to intrinsic statistical properties of a physical medium,
            extrinsic referential properties of that medium (i.e.
            content), and the significance or use value of that
            content, depending on the context.  A problem arises when
            we demand that only one of these uses should be given
            legitimacy. As I have repeatedly suggested on this
            listserv, it will be a source of constant useless
            argument to make the assertion that someone is wrong in
            their understanding of information if they use it in one
            of these non-formal ways. But to fail to mark which
            conception of information is being considered, or worse,
            to use equivocal conceptions of the term in the same
            argument, will ultimately undermine our efforts to
            understand one another and develop a complete general
            theory of information.

            This nominalization of 'inform' has been in use for
            hundreds of years in legal and literary contexts, in all
            of these variant forms. But there has been a slowly
            increasing tendency to use it to refer to the
            information-bearing medium itself, in substantial terms.
            This reached its greatest extreme with the restricted
            technical usage formalized by Claude Shannon. Remember,
            however, that this was only introduced a little over a
            half century ago. When one of his mentors (Hartley)
            initially introduced a logarithmic measure of signal
            capacity he called it 'intelligence' — as in the gathering
            of intelligence by a spy organization. So had Shannon
            chosen to stay with that usage, the confusions could have
            been worse (think about how confusing it would have been
            to talk about the entropy of intelligence). Even so,
            Shannon himself was to later caution against assuming that
            his use of the term 'information' applied beyond its
            technical domain.

            So despite the precision and breadth of application that
            was achieved by setting aside the extrinsic relational
            features that characterize the more colloquial uses of the
            term, this does not mean that these other uses are in some
            sense non-scientific. And I am not alone in the belief
            that these non-intrinsic properties can also (eventually)
            be strictly formalized and thereby contribute insights to
            such technical fields as molecular biology and cognitive
            science.

            As a result I think that it is legitimate to argue that
            information (in the referential sense) is only in use
            among living forms, that an alert signal sent by the
            computer in an automobile engine is information (in both
            senses, depending on whether we include a human
            interpreter in the loop), or that information (in the
            intrinsic sense of a medium property) is lost within a
            black hole or that it can be used  to provide a more
            precise conception of physical cause (as in Collier's
            sense). These different uses aren't unrelated to each
            other. They are just asymmetrically dependent on one
            another, such that medium-intrinsic properties can be
            investigated without considering referential properties,
            but not vice versa.

            It's time we move beyond terminological chauvinism so that
            we can further our dialogue about the entire domain in
            which the concept of information is important. To succeed
            at this, we only need to be clear about which conception
            of information we are using in any given context.

            — Terry

            On Tue, Mar 28, 2017 at 8:32 PM, John Collier wrote:

                I wrote a paper some time ago arguing that causal
                processes are the transfer of information. Therefore I
                think that physical processes can and do convey
                information. Cause can be dispensed with.

                  * There is a copy at: Causation is the Transfer of
                    Information, in Howard Sankey (ed.), /Causation,
                    Natural Laws and Explanation/ (Dordrecht: Kluwer,
                    1999).

                Information is a very powerful concept. It is a shame
                to restrict oneself to only a part of its possible
                applications.

                John Collier

                Emeritus Professor and Senior Research Associate

                Philosophy, University of KwaZulu-Natal


                Fis mailing list


            Professor Terrence W. Deacon
            University of California, Berkeley


        Professor Terrence W. Deacon
        University of California, Berkeley


Pedro C. Marijuán
Grupo de Bioinformación / Bioinformation Group
Instituto Aragonés de Ciencias de la Salud
Centro de Investigación Biomédica de Aragón (CIBA)
Avda. San Juan Bosco, 13, planta 0
50009 Zaragoza, Spain
Tel. +34 976 71 3526 (& 6818)
