On 10/12/07, bob logan <[EMAIL PROTECTED]> wrote:

> Loet et al - I guess I am not convinced that information and entropy
> are connected. Entropy in physics has the dimension of energy divided
> by temperature. Shannon entropy has no physical dimension - it is
> missing the Boltzmann constant. Therefore, how can entropy and Shannon
> entropy be compared, let alone connected?


Dear Bob:

I agree. Thermodynamic entropy has a physical meaning, while probabilistic
entropy is a purely mathematical concept. A mathematical concept is formal;
bits are dimensionless. Probabilistic entropy, Shannon-type information and
uncertainty are different words for the same concept.

> I am talking about information not entropy - an organized collection
> of organic chemicals must have more meaningful info than an
> unorganized collection of the same chemicals.

However, in this next paragraph, you shift to "meaningful information".
Meaning can only be provided to the (Shannon-type) information by a system
of reference. The differences (i.e., the distribution) can then make a
difference for this observing system.

Let's elaborate the "difference which makes a difference." For example, the
first difference is "on/off" or [0,1]. For the purpose of the example, let's
assume that the second difference is [1,0]. Cross-tabulation leads to a
matrix with four possible combinations. More generally: if we have N classes
in the first distribution and M classes in the second, we obtain a matrix
with N x M classes and hence a maximum (Shannon) entropy of log(N x M).
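
As a small sketch in Python (the numbers are purely illustrative), the
cross-tabulation of two binary differences gives four cells and hence a
maximum entropy of two bits:

    import math

    # Two binary "differences" (illustrative): N classes in the first
    # distribution, M classes in the second.
    N, M = 2, 2

    # Cross-tabulation yields a matrix with N x M possible combinations.
    cells = N * M                    # 4 combinations
    h_max = math.log2(cells)         # maximum Shannon entropy: 2 bits

    # In general, log(N x M) = log(N) + log(M).
    assert math.isclose(h_max, math.log2(N) + math.log2(M))
    print(cells, h_max)              # 4 2.0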

The difference which has not yet made a difference, that is, the Shannon-type
information of the random soup, involved only N classes (N being the number of
different chemical molecules in the soup). The organization in the living
organism has increased the redundancy by log(M), and the Shannon-type
information has therefore decreased.

This is just a straightforward answer to your question in a previous email.
Organization decreases the expected information content of the distribution
because a range of new possibilities is made available by adding a second
dimension or, as John Collier called it, a second degree of freedom.
Organization can always be written as the organization of a previously
unorganized distribution. The matrix contains more redundancy than the
vector.
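
A minimal sketch in Python (with made-up frequencies, and assuming an
extreme form of organization in which each molecule belongs to exactly one
group, so the observed entropy stays the same while the maximum rises)
shows that the redundancy of the matrix exceeds that of the vector:

    import numpy as np

    rng = np.random.default_rng(0)

    def shannon_entropy(p):
        """Shannon entropy (bits) of a probability distribution."""
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    # Unorganized "soup": N molecular classes, roughly uniform.
    N = 8
    soup = rng.dirichlet(np.ones(N) * 50)
    h_vec, h_vec_max = shannon_entropy(soup), np.log2(N)

    # Organized case: a grouping variable with M categories is added;
    # here each molecule sits in exactly one group (assumed, extreme
    # organization).
    M = 4
    matrix = np.zeros((N, M))
    for i in range(N):
        matrix[i, i % M] = soup[i]
    h_mat, h_mat_max = shannon_entropy(matrix.ravel()), np.log2(N * M)

    # Redundancy R = 1 - H / H_max.
    print(f"vector: R = {1 - h_vec / h_vec_max:.2f}")
    print(f"matrix: R = {1 - h_mat / h_mat_max:.2f}")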

The advantage of this approach is that it remains mathematical and can be
provided with an appreciation in different discourses. For example, one
expects the appreciation in biological discourse to be different from the
appreciation in economics. It seems to me that your definition of
organization is *a priori* biological. This is unnecessarily reductionistic.
Biology is a special theory which provides us with heuristics to study
fields like the social sciences. These heuristics can be formalized by using
a non-substantive, but formal apparatus. This enables us to specify the
differences in the non-linear dynamics of different systems of reference.

Furthermore, I would not know how to measure a "difference which makes a
difference" other than in this way, that is, by using probabilistic entropy
as a methodology. I did not get that from your paper.
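
For instance, one way to make such a measure concrete (a sketch only, with
made-up observations) is the mutual information between the two
distributions:

    import math
    from collections import Counter

    def entropy(counts):
        """Shannon entropy (bits) of a frequency table."""
        total = sum(counts)
        return -sum(c / total * math.log2(c / total)
                    for c in counts if c > 0)

    # Hypothetical joint observations (chemical class, group); the data
    # are made up purely for illustration.
    observations = [("A", 1), ("A", 1), ("B", 1), ("B", 2),
                    ("C", 2), ("C", 2), ("D", 2), ("D", 1)]

    h_joint = entropy(Counter(observations).values())
    h_x     = entropy(Counter(x for x, _ in observations).values())
    h_g     = entropy(Counter(g for _, g in observations).values())

    # The mutual information H(X) + H(G) - H(X,G) measures how much
    # "difference" the grouping makes for the first distribution.
    mutual_info = h_x + h_g - h_joint
    print(f"I(X;G) = {mutual_info:.3f} bits")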

Let me add, for good order, that in the above I lean heavily on Henri
Theil's *Statistical Decomposition Analysis* (Amsterdam: North-Holland,
1972) and on Brooks & Wiley's (1986) *Evolution as Entropy*.

Best wishes,  Loet

On 11-Oct-07, at 5:34 PM, Loet Leydesdorff wrote:

>> Loet - if your claim is true, then how do you explain that a random
>> soup of organic chemicals has more Shannon info than an equal number
>> of organic chemicals organized as a living cell, where knowledge of
>> some chemicals automatically implies the presence of others and hence
>> less surprise than in the soup of random organic chemicals? - Bob
>
> Dear Bob and colleagues,
>
> In the case of the random soup of organic chemicals, the maximum
> entropy of the system is set by the number of chemicals involved (N).
> The maximum entropy is therefore log(N). (Because of the randomness of
> the soup, the Shannon entropy will not be much lower.)
>
> If a grouping variable with M categories is added the maximum entropy
> is log(N * M). Ceteris paribus, the redundancy in the system increases
> and the Shannon entropy can be expected to decrease.
>
> In class, I sometimes use the example of comparing Calcutta with New
> York in terms of sustainability. Both have a similar number of
> inhabitants, but the organization of New York is more complex to the
> extent that the value of the grouping variables (the systems of
> communication) becomes more important than the grouped variable (N).
> When M is extended to M+1, N possibilities are added.
>
> I hope that this is convincing, or that it provokes your next reaction.
>
> Best wishes,
>
>
> Loet





-- 
Loet Leydesdorff
Amsterdam School of Communications Research (ASCoR)
Kloveniersburgwal 48, 1012 CX Amsterdam
Tel.: +31-20- 525 6598; fax: +31-20- 525 3681
[EMAIL PROTECTED] ; http://www.leydesdorff.net/
---------------------------------------
Now available: The Knowledge-Based Economy: Modeled, Measured, Simulated,
385 pp.; US$ 18.95;
_______________________________________________
fis mailing list
fis@listas.unizar.es
http://webmail.unizar.es/mailman/listinfo/fis
