On 08.02.2012 22:44 Russell Standish said the following:
On Wed, Feb 08, 2012 at 08:32:16PM +0100, Evgenii Rudnyi wrote:
What I observe personally is that there is information in
informatics and information in physics (if we say that the
thermodynamic entropy is the information). If you agreed
that these two kinds of information are different, that would be fine
with me; I am flexible about definitions.
Yet, if I understand you correctly you mean that the information
in informatics and the thermodynamic entropy are the same. This
puzzles me, as I believe that the same physical quantity should have
the same numerical value. Hence my wish to understand what you
mean. Unfortunately, you do not want to disclose it; you do not want
to apply your theory to the examples that I present.
Given the above paragraph, I would say we're closer than you've suggested.
Of course there is information in informatics, and there is
information in physics, just as there's information in biology and
so on. These are all the same concept (logarithm of a probability).
Numerically, they differ, because the context differs in each case.
Entropy is related in a very simple way to information. S=S_max - I.
So provided an S_max exists (which it will for any finite system), so
does entropy. In the example of a hard drive, the informatics S_max
is the capacity of the drive, e.g. 100GB for a 100GB drive. If you
store 10GB of data on it, the entropy of the drive is 90GB. That's
the informatics context.
Just as information is context dependent, so must entropy be.
Thermodynamics is just one use (one context) of entropy and
information. Usually, the context is one of homogeneous bulk
materials. If you decide to account for surface effects, you change
the context, and entropy should change accordingly.
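As a sketch of the S = S_max - I relation in the hard-drive example (all names are illustrative, not from the original discussion):

```python
# Entropy as S = S_max - I, in the informatics context of a hard drive.
# Units here are gigabytes of storage capacity.

def drive_entropy(capacity_gb, stored_gb):
    """Entropy of a drive: maximum possible information minus information stored."""
    s_max = capacity_gb         # S_max: the drive's total capacity
    information = stored_gb     # I: data actually stored on it
    return s_max - information  # S = S_max - I

print(drive_entropy(100, 10))  # 90, matching the 100GB drive with 10GB stored
```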
Let me ask you the same question that I have recently asked Brent. Could
you please tell me the thermodynamic entropy of what is discussed in
Jason's example below?
On 03.02.2012 00:14 Jason Resch said the following:
> Sure, I could give a few examples as this somewhat intersects with my
> line of work.
> The NIST 800-90 recommendation (
> http://csrc.nist.gov/publications/nistpubs/800-90A/SP800-90A.pdf )
> for random number generators is a document for engineers implementing
> secure pseudo-random number generators. An example of where it is
> important is when considering entropy sources for seeding a random
> number generator. If you use something completely random, like a
> fair coin toss, each toss provides 1 bit of entropy. The formula is
> -log2(predictability). With a coin flip, you have at best a .5
> chance of correctly guessing it, and -log2(.5) = 1. If you used a
> die roll, then each die roll would provide -log2(1/6) = 2.58 bits of
> entropy. The ability to measure unpredictability is necessary to
> ensure, for example, that predicting the random inputs that went
> into generating a cryptographic key is at least as difficult as
> brute-forcing the key itself.
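As a sketch, the -log2(predictability) figures above can be checked directly (function name is illustrative):

```python
import math

def entropy_bits(predictability):
    """Bits of entropy per event: -log2(best chance of guessing the outcome)."""
    return -math.log2(predictability)

print(entropy_bits(1 / 2))  # fair coin toss: 1.0 bit
print(entropy_bits(1 / 6))  # fair die roll: ~2.585 bits
```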
> In addition to security, entropy is also an important concept in the
> field of data compression. The amount of entropy in a given bit
> string represents the theoretical minimum number of bits it takes to
> represent the information. If 100 bits contain 100 bits of entropy,
> then there is no compression algorithm that can represent those 100
> bits with fewer than 100 bits. However, if a 100 bit string contains
> only 50 bits of entropy, you could compress it to 50 bits. For
> example, let's say you had 100 coin flips from an unfair coin that
> comes up heads 90% of the time. Each flip then carries on average
> -(0.9*log2(0.9) + 0.1*log2(0.1)) = 0.469 bits of entropy. Thus, a
> sequence of 100 flips of this biased coin could be represented with
> about 47 bits. There is only about 46.9 bits of information /
> entropy contained in that 100 bit long sequence.
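As a sketch, the biased-coin compression bound can be checked with the Shannon entropy formula for a two-outcome source (function names are illustrative):

```python
import math

def shannon_entropy(p_heads):
    """Average bits of entropy per flip of a coin that lands heads with probability p_heads."""
    p, q = p_heads, 1.0 - p_heads
    # Average the surprisal -log2(outcome probability) over both outcomes.
    return -(p * math.log2(p) + q * math.log2(q))

per_flip = shannon_entropy(0.9)  # ~0.469 bits per flip of the 90% coin
print(per_flip)
print(100 * per_flip)            # ~46.9 bits: the compression limit for 100 flips
```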
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to email@example.com.