Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-23 Thread John Denker

Aram Perez wrote:

* How do you measure entropy? I was under the (false) impression that  
Shannon gave a formula that measured the entropy of a message (or  
information stream).


Entropy is defined in terms of probability.  It is a measure of
how much you don't know about the situation.  If by message
you mean a particular message that you are looking at, it has
zero entropy.  It is what it is;  no probability required.

If by stream you mean a somewhat abstract notion of an ensemble
of messages or symbols that you can only imperfectly predict, then
it has some nonzero entropy.
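
As an illustrative sketch (not part of the original post), here is the
Shannon formula applied to a toy ensemble in Python; a message you already
hold is a distribution with a single outcome of probability 1, and carries
no entropy:

    import math

    def shannon_entropy(probs):
        # Shannon entropy, in bits, of a probability distribution.
        return sum(p * math.log2(1 / p) for p in probs if p > 0)

    # An ensemble of four equally likely messages: 2 bits of entropy.
    print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0

    # A particular message you are looking at: probability 1, zero entropy.
    print(shannon_entropy([1.0]))                      # 0.0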

* Can you measure the entropy of a random oracle? Or is that what  both 
Victor and Perry are saying is intractable?


That is a tricky question for a couple of reasons.
 a) It will never be completely answerable because the question hinges
  on the word random, which means different things to different people.
  Thoughtful experts use the word in multiple inconsistent ways.
 b) It also depends on just what you mean by measure.  Often it
  is possible to _ascertain_ the entropy of a source ... but direct
  empirical measurements of the output are usually not the best way.


* Are there units of entropy?


Yes.  It is naturally _dimensionless_, but dimensionless quantities
often have nontrivial units.  Commonly used units for entropy
include
 ++ bits
 ++ joules per kelvin.  One J/K equals 1.04×10^23 bits
For more on this, see
  http://www.av8n.com/physics/thermo-laws.htm#sec-s-units
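
As a quick numerical check of the conversion quoted above (an illustrative
sketch; the only outside ingredient is Boltzmann's constant):

    import math

    k_B = 1.380649e-23                     # J/K, Boltzmann's constant
    bits_per_J_per_K = 1.0 / (k_B * math.log(2))
    print(bits_per_J_per_K)                # ~1.04e23 bits per J/K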


* What is the relationship between randomness and entropy?


They are neither exactly the same nor completely unrelated.
A pseudorandom sequence may be random enough for many
applications, yet has asymptotically zero entropy density.
A sequence of _odd_ bytes is obviously not entirely random,
yet may have considerable entropy density.
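
A small Python sketch of those two cases (illustrative only; the 128-bit
seed and SHA-256 expansion are just stand-ins for any PRNG):

    import hashlib, secrets

    # Odd bytes: the low bit is fixed, the other 7 bits are unpredictable,
    # so the entropy density is roughly 7 bits per 8-bit byte.
    odd_bytes = bytes((secrets.randbits(7) << 1) | 1 for _ in range(32))

    # A pseudorandom stream expanded from a 128-bit seed: however long the
    # output grows, the ensemble carries at most 128 bits of entropy, so
    # its entropy density tends to zero.
    seed = secrets.token_bytes(16)
    stream = b"".join(hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
                      for i in range(1000))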

* (Apologies to the original poster) When the original poster  requested 
passphrases with more than 160 bits of entropy, what was he requesting?


When you apply a mathematical function to an ensemble of inputs, it
is common to find that the ensemble of outputs has less entropy than
the ensemble of inputs. A simple pigeonhole argument suffices to show
that a function whose output is represented in 160 bits cannot possibly
represent more than 160 bits of entropy per output.  So if you want
the ensemble of outputs to have more than 160 bits of entropy, it is
necessary to do something fancier than a single instance of SHA-1.
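
The pigeonhole point can be made visible with a toy hash whose output is
small enough to exhaust (a sketch; the 16-bit truncation of SHA-1 is purely
for illustration):

    import hashlib, secrets

    def tiny_hash(data):
        # Stand-in for SHA-1 with its output truncated to 16 bits.
        return hashlib.sha1(data).digest()[:2]

    # Hash inputs drawn from a much larger space; the set of outputs can
    # never exceed 2^16 values, i.e. at most 16 bits of output entropy.
    outputs = {tiny_hash(secrets.token_bytes(32)) for _ in range(200000)}
    print(len(outputs))        # bounded by 65536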

* Does processing an 8 character password with a process similar to  
PKCS#5 increase the entropy of the password?


No.


* Can you add or increase entropy?


Shuffling a deck of cards increases the entropy of the deck.

For more on this, see
  http://www.av8n.com/physics/thermo-laws.htm#sec-entropy



Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-23 Thread Jack Lloyd
On Wed, Mar 22, 2006 at 03:29:07PM -0800, Aram Perez wrote:

 * How do you measure entropy? I was under the (false) impression that  
 Shannon gave a formula that measured the entropy of a message (or  
 information stream).

He did give a formula for the entropy of a source; however, the calculation is
based on the probabilities of each symbol appearing. Unless you know those, you
can't actually apply the formula. So it is computable in theory, just not in
practice for any source that is at all interesting.
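
A sketch of the practical difficulty (illustrative; the biased two-symbol
source is made up): frequencies estimated from observed output are not the
source's true probabilities, and they say nothing about correlations
between symbols, so the formula's answer is only as good as your model:

    import collections, math, random

    def shannon_entropy(probs):
        return sum(p * math.log2(1 / p) for p in probs if p > 0)

    true_probs = [0.9, 0.1]                       # the source's real bias
    print(shannon_entropy(true_probs))            # ~0.469 bits/symbol

    sample = random.choices("ab", weights=true_probs, k=100)
    freqs = collections.Counter(sample).values()
    print(shannon_entropy([c / 100 for c in freqs]))   # only an estimate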

 * Can you measure the entropy of a random oracle? Or is that what  
 both Victor and Perry are saying is intractable?

A random oracle, by definition, produces a completely random output. However,
since random oracles don't actually exist that does not seem to be a terribly
interesting thing to be measuring the entropy of.

 * Are there units of entropy?

Bits are usually the most intuitive/useful unit.

 * What is the relationship between randomness and entropy?

I have a vague feeling this question requires a deeper answer than I'm able to
provide.

 * Does processing an 8 character password with a process similar to  
 PKCS#5 increase the entropy of the password?

No, because there are no additional secrets. Knowledge of the password is all
you need to rederive the final output, thus clearly there is no additional
information (ie, entropy) in the output that was not in the original password.

This ignores the salt, iteration count, and the specification of the algorithm
itself, but those are all (usually) public. So they contribute to the entropy,
they do not contribute to the conditional entropy, which is what we are usually
interested in when thinking about entropy and crypto.
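
A short sketch of that point using PBKDF2 as the PKCS#5-style function
(the passwords and salt are made up; the point is only that the output is
a deterministic function of guessable inputs):

    import hashlib

    salt, iters = b"public-salt", 10000

    def derive(password):
        return hashlib.pbkdf2_hmac("sha1", password, salt, iters, dklen=20)

    key = derive(b"hunter2")

    # An attacker still only has to walk the password space; the stretched
    # key contains no secret that was not already in the password.
    for guess in (b"password", b"letmein", b"hunter2"):
        if derive(guess) == key:
            print("recovered password:", guess)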

 * Can you add or increase entropy?

Yes. Let's say the contents of tomorrow's NY Times has n bits of entropy (we
probably can't actually calculate n, but we know it is a finite and fixed
value). And the LA Times from the same day will also have some amount of
entropy (call it n'). However, the entropy of the two papers combined would
(probably) not be n+n', but some number x such that min(n,n') <= x <= n+n', because
the two papers will report on many of the same issues, carry some of the same
AP stories, and so forth. This redundant information doesn't increase the
entropy (reading the same AP story in a second newspaper wouldn't give you any
new information).
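
A toy joint distribution makes the bound concrete (illustrative numbers
only; "war"/"markets" are stand-ins for tomorrow's headlines):

    import math

    def H(probs):
        return sum(p * math.log2(1 / p) for p in probs if p > 0)

    # Two papers that usually carry the same lead story.
    joint = {("war", "war"): 0.45, ("war", "markets"): 0.05,
             ("markets", "war"): 0.05, ("markets", "markets"): 0.45}

    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p

    print(H(px.values()), H(py.values()))   # H(X) = H(Y) = 1.0 bit
    print(H(joint.values()))                # H(X,Y) ~ 1.47 < 1.0 + 1.0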

A book you may find interesting is Elements of Information Theory by Cover &
Thomas.

-Jack



Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-23 Thread Sandy Harris
Aram Perez [EMAIL PROTECTED] wrote:

 So, if you folks care to educate me, I have several questions related
 to entropy and information security (apologies to any physicists):

I'll answer the easier questions. I'll leave the harder ones for someone
with a better grounding in information theory.

 * What is the relationship between randomness and entropy?

Roughly, they both measure unpredictability. Something that is hard
to predict is random, or has high entropy. There are mathematical
formulations that make this a lot more precise, but that's the basic
idea.

 * Does processing an 8 character password with a process similar to
 PKCS#5 increase the entropy of the password?

Absolutely not!

At best, you preserve the original entropy, just distributing it
differently. If you get the processing wrong, you can reduce or
even entirely destroy it, but no amount of any kind of processing
can increase it.
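
A sketch of the contrast (illustrative transforms, not anything PKCS#5
actually does):

    import secrets

    x = secrets.token_bytes(8)              # 64 bits of entropy in

    # Preserving: XOR with a public constant is a bijection -- the same
    # 64 bits, just distributed differently.
    y = bytes(b ^ 0x5A for b in x)

    # Destroying: collapsing each byte to its low bit leaves an output
    # space of only 2^8 values, whatever went in.
    z = bytes(b & 1 for b in x)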

 * Can you add or increase entropy?

You can add more entropy, either from another source or more
from the same source. That is the only way to increase it.

--
Sandy Harris
Zhuhai, Guangdong, China



Re: Linux RNG paper

2006-03-23 Thread Travis H.
I have examined the LRNG paper and have a few comments.

CC'd to the authors so mind the followups.

1) In the paper, he mentions that the state file could be altered by
an attacker, and then he'd know the state when it first came up.  Of
course, if he could do that, he could simply install a trojan in the
OS itself, so this is not really that much of a concern.  If your hard
drives might be altered by malicious parties, you should be using some
kind of cryptographic integrity check on the contents before using
them.  This often comes for free when encrypting the contents.

2) His objection against using keyboard data is perhaps just an
indication that reseeding of the pool should occur with sufficient
entropy that the values cannot efficiently be guessed via brute force
search and forward operation of the PRNG.  If the reseeding is of
insufficient to deter brute force input space search, other bad things
can happen.  For example, in the next paragraph the author mentions
that random events may reseed the secondary pool directly if the
primary pool is full.  If an attacker were to learn the contents of
the secondary pool, he could guess the incremental updates to its
contents and compare results with the real PRNG, resulting in an
incremental state-tracking attack breaking backward security until a
reseed from the primary is generated (which appears to have a minimum
of 8 bytes, also perhaps too low).  The answer is more input, not
less.
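
A toy model of that state-tracking step (a sketch only; the hashing here
is not the actual LRNG construction, and the 16-bit reseed is deliberately
undersized to make the brute force visible):

    import hashlib, secrets

    pool = secrets.token_bytes(32)                    # assumed already known
    reseed = secrets.randbits(16).to_bytes(2, "big")  # too little fresh entropy
    new_pool = hashlib.sha256(pool + reseed).digest()
    output = hashlib.sha256(new_pool + b"out").digest()

    # The attacker tries every possible reseed and checks against the
    # observed output, recovering the new state and continuing to track it.
    for guess in range(2 ** 16):
        cand = hashlib.sha256(pool + guess.to_bytes(2, "big")).digest()
        if hashlib.sha256(cand + b"out").digest() == output:
            print("state re-synchronised after", guess + 1, "guesses")
            break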

It's annoying that the random number generator code calls the
unpredictable stuff entropy.  It's unpredictability that we're
concerned with, and Shannon entropy is just an upper bound on the
unpredictability.  Unpredictability cannot be measured based on outputs
of sources, it must be based on models of the source and attacker
themselves.  But we all know that.  Maybe we should invent a term? 
Untropy?

And now a random(3) tangent:

While we're on the subject of randomness, I was hoping that random(3)
used the old (TYPE_0) implementation by default... lots of DoS tools
use it to fill spoofed packet fields, and one 32-bit output defines
the entire state of the generator --- meaning that I could distinguish
DoS packets which had at least 32 bits of state in them from other
packets.  However, it appears that Linux and BSD both use a TYPE_3
pool, which makes such simple techniques invalid, and would probably
require identification of a packet stream, instead of testing packets
one by one.  Since use of a real pool has put it beyond my interest
and perhaps my ability, I'm giving the idea away.  Email me if you
find a really good use for PRNG analysis of this sort.

For a TYPE_0 generator, the equation is:
i' = (i * 1103515245 + 12345) & 0x7fffffff

As far as low-hanging fruit goes, the higher generator types still
never set the highest order bit (RAND_MAX is 0x7fffffff), and the
outputs are unaltered pool contents.
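
A sketch of why that recurrence gives the whole game away (illustrative;
real random(3) output details vary between C libraries, but for TYPE_0 the
output is essentially the state itself):

    def type0_next(i):
        return (i * 1103515245 + 12345) & 0x7fffffff

    observed = 123456789           # one output seen in, say, a spoofed packet
    state = observed
    predicted = []
    for _ in range(5):
        state = type0_next(state)
        predicted.append(state)
    print(predicted)               # the generator's next five outputs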
--
Security Guru for Hire http://www.lightconsulting.com/~travis/ --
GPG fingerprint: 9D3F 395A DAC5 5CCC 9066  151D 0A6B 4098 0C55 1484



Re: Greek officials were tapped using law enforcement back door

2006-03-23 Thread Adam Fields
On Thu, Mar 23, 2006 at 09:30:30AM -0500, Perry E. Metzger wrote:
 A while ago, you may recall that members of the Greek government were
 wiretapped, and at the time, I speculated that the bad guys may have
 abused the built in CALEA software in the switch to do it. Well, it
 now appears that that was precisely what happened. Unfortunately, the
 article below is short on detail -- anyone have access to primary
 sources? (I know there are at least a couple of Greek cryptographers
 on this list...)
 
 http://www.deccanherald.com/deccanherald/mar162006/update71652006316.asp

Schneier posted this a few weeks ago:

http://www.schneier.com/blog/archives/2006/03/more_on_greek_w.html

-- 
- Adam

** Expert Technical Project and Business Management
 System Performance Analysis and Architecture
** [ http://www.adamfields.com ]

[ http://www.aquick.org/blog ]  Blog
[ http://www.adamfields.com/resume.html ].. Experience
[ http://www.flickr.com/photos/fields ] ... Photos
[ http://www.aquicki.com/wiki ].Wiki



Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-23 Thread John Kelsey
From: Jack Lloyd [EMAIL PROTECTED]
Sent: Mar 22, 2006 11:30 PM
To: cryptography@metzdowd.com
Subject: Re: Entropy Definition (was Re: passphrases with more than 160 bits 
of entropy)

...

As an aside, this whole discussion is confused by the fact that there
are a bunch of different domains in which entropy is defined.  The
algorithmic information theory sense of entropy (how long is the
shortest program that produces this sequence?) is miles away from the
information theory sense of entropy, and even that has a bunch of
different flavors.  

For the information theory meanings of entropy, what you're really
talking about is a property of a probability distribution.  You can do
that in terms of Shannon entropy if you want to know about bounds on
average bandwidth requirements for transmitting symbols from that
distribution.  You can look at guessing entropy if you want to know
the -lg(expected work to guess the next symbol).  You can look at
min-entropy if you want a hard bound on how many symbols you need to
sample to derive a 128-bit key that won't ever expose you to
weaknesses based on how you selected it.  And so on.  

Shannon entropy is the one most people know, but it's all wrong for
deciding how many samples you need to derive a key.  The classic
illustration of this is the probability distribution:

0 occurs with probability 1/2
each other number from 1 to 2^{160}+1 happens with probability
2^{-161}.

The Shannon entropy on this distribution is 81.5 bits.  But if you
tried to sample it once to generate an 80-bit Skipjack key, half the
time, I'd guess your key on my first try.  
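
Working that example through numerically (a sketch; it takes 2^160 other
values so the probabilities sum to one):

    import math

    p0, pr, n = 0.5, 2.0 ** -161, 2 ** 160

    shannon = p0 * math.log2(1 / p0) + n * pr * math.log2(1 / pr)
    min_entropy = -math.log2(max(p0, pr))

    print(shannon)        # ~81 bits of Shannon entropy
    print(min_entropy)    # 1 bit of min-entropy: guessing 0 wins half the time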

...

Yes. Let's say the contents of tomorrow's NY Times has n bits of entropy (we
probably can't actually calculate n, but we know it is a finite and fixed
value). And the LA Times from the same day will also have some amount of
entropy (call it n'). However, the entropy of the two papers combined would
(probably) not be n+n', but some number x such that min(n,n') <= x <= n+n', because
the two papers will report on many of the same issues, carry some of the same
AP stories, and so forth. This redundant information doesn't increase the
entropy (reading the same AP story in a second newspaper wouldn't give you any
new information).

Right.  If the probabilities are independent, you can add the
entropies.  That's why they're defined in log terms.  

--John




Re: Greek officials were tapped using law enforcement back door

2006-03-23 Thread Nikos Mavrogiannopoulos
On Thu 23 Mar 2006 15:30, Perry E. Metzger wrote:

 A while ago, you may recall that members of the Greek government were
 wiretapped, and at the time, I speculated that the bad guys may have
 abused the built in CALEA software in the switch to do it. Well, it
 now appears that that was precisely what happened. Unfortunately, the
 article below is short on detail -- anyone have access to primary
 sources? (I know there are at least a couple of Greek cryptographers
 on this list...)

 http://www.deccanherald.com/deccanherald/mar162006/update71652006316.
asp

Hello,
 The article above is a good summary of what has happened so far.
If you are asking for technical details, I'm not sure whether any have
been released (although I only have access to online material since I
don't live in Greece any more).

Some links to articles that summarize the testimonies of Vodafone's and
Ericsson's CEOs to the Greek parliament follow.

Vodafone's position:
http://www.athensnews.gr/athweb/nathens.print_unique?e=Cf=13173m=A03aa=1eidos=A

Ericsson's:
http://www.athensnews.gr/athweb/nathens.prnt_article?e=Cf=13174t=01m=A03aa=1


regards,
Nikos



Re: Creativity and security

2006-03-23 Thread Dave Korn
Olle Mulmo wrote:
 On Mar 20, 2006, at 21:51, [EMAIL PROTECTED] wrote:

 I was tearing up some old credit card receipts recently - after all
 these years, enough vendors continue to print full CC numbers on
 receipts that I'm hesitant to just toss them as is, though I doubt
 there
 are many dumpster divers looking for this stuff any more - when I
 found
 a great example of why you don't want people applying their
 creativity
 to security problems, at least not without a great deal of review.

 You see, most vendors these days replace all but the last 4 digits of
 the CC number on a receipt with X's.  But it must be boring to do the
 same as everyone else, so some bright person at one vendor(*) decided
 they were going to do it differently:  They X'd out *just the last
 four
 digits*.  After all, who could guess the number from the 10,000
 possibilities?

 Ahem.
  -- Jerry

 (*) It was Build-A-Bear.  The receipt was at least a year old, so for
 all I know they've long since fixed this.

 Unfortunately, they haven't. In Europe I get receipts with different
 crossing-out patterns almost every week.

 And, with they I mean the builders of point-of-sale terminals: I
 don't think individual store owners are given a choice.

 Though I believe I have noticed a good trend in that I get receipts
 where *all but four* digits are crossed out more and more often
 nowadays.

  In the UK, that is now the almost universal practice.  And it's equally 
almost universally the /last/ four digits across all retailers.  Which is 
good.

  What is not so good, however, is another example of a
not-as-clever-as-it-thinks-it-is clever new idea for addressing the problem
of receipts.

  As we all know, when you pay with a credit or debit card at a store, it's 
important to take the receipt with you, because it contains vital 
information - even when most of the card number is starred out, the expiry 
date is generally shown in full.  So we're all encouraged to take them with 
us, take them home, and shred or otherwise securely dispose of them under 
our own control.

  Of course, this is a) a nuisance and b) wasteful of paper.  And obviously 
enough, someone's been trying to come up with a 'bright idea' to solve these 
issues.

  So what they've been doing at my local branch of Marks & Spencer for the 
past few weeks is, at the end of the transaction after the (now always 
chip'n'pin-based) card reader finishes authorizing your transaction, the 
cashier at the till asks you whether you actually /want/ the receipt or not; 
if you say yes, they press a little button and the till prints out the 
receipt same as ever and they hand it to you, but if you say no they don't 
press the button, the machine doesn't even bother to print a receipt, and 
you wander away home, safe in the knowledge that there is no wasted paper 
and no leak of security information  ...

  ... Of course, three seconds after your back is turned, the cashier can 
still go ahead and press the button anyway, and then /they/ can have your 
receipt.  With the expiry date on it.  And the last four digits of the card 
number.  And the name of the card issuer, which allows you to narrow the 
first four digits down to maybe three or four possible combinations.  OK, 
10^8 still ain't easy, but it's a lot easier than what we started with.

  The risk could perhaps be fixed with an interlock which makes it 
impossible to print the receipt out after the card has been withdrawn from 
the reader, but I think the better solution would still be for the receipt 
to be printed out every single time and the staff trained in the importance 
of not letting customers leave without taking their receipts with them.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today 






Re: Linux RNG paper

2006-03-23 Thread David Malone
On Thu, Mar 23, 2006 at 01:55:30AM -0600, Travis H. wrote:
 It's annoying that the random number generator code calls the
 unpredictable stuff entropy.  It's unpredictability that we're
 concerned with, and Shannon entropy is just an upper bound on the
 unpredictability.  Unpredictability cannot be measured based on outputs
 of sources, it must be based on models of the source and attacker
 themselves.  But we all know that.  Maybe we should invent a term? 
 Untropy?

The problem is that we have to decide what our metric is before we
can give it a name. Shannon entropy is about the asymptotic amount
of data needed to encode an average message. Kolmogorov's entropy
(which got a mention here today) is about the shortest program that
can produce this string. These things aren't often important for
a PRNG or /dev/random-like device.

One metric might be guessability (mean number of guesses required,
or moments thereof).  As you point out, Arikan and Massey have
shown that Shannon entropy is not a particularly good estimate of
guessability. Generalisations of entropy, like Rényi entropy, do seem
to have meaning. The min-entropy mentioned in RFC 4086 seems
reasonable, though I don't think the rationale given in the RFC is
actually correct.

David.



Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-23 Thread John Denker

John Kelsey wrote:


As an aside, this whole discussion is confused by the fact that there
are a bunch of different domains in which entropy is defined.  The
algorithmic information theory sense of entropy (how long is the
shortest program that produces this sequence?) is miles away from the
information theory sense of entropy, and even that has a bunch of
different flavors.  


I would have said almost the opposite.  The great power and beauty
of the entropy idea is that it has the *same* meaning across many
domains.  Entropy is defined in terms of probability, period.  Any
other definition is either
  a) exactly equivalent,
  b) an approximation, valid under this-or-that restrictive conditions, or
  c) wrong.

With some slight fiddling to get the normalization right, 1/2
raised to the power of (program length) defines a probability
measure.  This may not be the probability you want, but it
is a probability, and you can plug it into the entropy definition.

The problem is almost never understanding the definition of
entropy.  The problem is almost always ascertaining what is
the correct probability measure applicable to the problem
at hand.

Don't get me started about universal probability measures.
That has an arcane narrow technical meaning that is verrry widely
misunderstood.



0 occurs with probability 1/2
each other number from 1 to 2^{160}+1 happens with probability
2^{-161}.

The Shannon entropy on this distribution is 81.5 bits. 


I think it is very close to 81 bits, not 81.5, but that is a minor
point that doesn't change the conclusion:


 But if you
tried to sample it once to generate an 80-bit Skipjack key, half the
time, I'd guess your key on my first try.  


Yeah, but if I sample it N times, with high probability I can generate
a large number of very good keys.  This problem is faced by (and solved
by) any decent TRNG.




Re: Creativity and security

2006-03-23 Thread J. Bruce Fields
On Thu, Mar 23, 2006 at 08:15:50PM -, Dave Korn wrote:
   So what they've been doing at my local branch of Marks & Spencer for the 
 past few weeks is, at the end of the transaction after the (now always 
 chip'n'pin-based) card reader finishes authorizing your transaction, the 
 cashier at the till asks you whether you actually /want/ the receipt or not; 
 if you say yes, they press a little button and the till prints out the 
 receipt same as ever and they hand it to you, but if you say no they don't 
 press the button, the machine doesn't even bother to print a receipt, and 
 you wander away home, safe in the knowledge that there is no wasted paper 
 and no leak of security information  ...
 
   ... Of course, three seconds after your back is turned, the cashier can 
 still go ahead and press the button anyway, and then /they/ can have your 
 receipt.  With the expiry date on it.  And the last four digits of the card 
 number.  And the name of the card issuer, which allows you to narrow the 
 first four digits down to maybe three or four possible combinations.  OK, 
  10^8 still ain't easy, but it's a lot easier than what we started with.

If all that information's printed on the outside of the card, then isn't
this battle kind of lost the moment you hand the card to them?

--b.



Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-23 Thread Greg Rose

At 22:09  -0500 2006/03/22, John Denker wrote:

Aram Perez wrote:


* Can you add or increase entropy?


Shuffling a deck of cards increases the entropy of the deck.


As a minor nit, shuffling *in an unpredictable manner* adds entropy, 
because there is extra randomness being brought into the process. If 
I was one of those people who can do a perfect riffle shuffle, 
reordering the cards in this entirely predictable manner does not 
increase or decrease the existing entropy.


So in one sense, the answer is a simple no... nothing you can do to 
a passphrase can increase its (that is, the passphrase's) entropy. 
You can add randomness from another source, and increase the total 
entropy, but I don't think that is relevant to the original question.


Greg.



Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-23 Thread John Kelsey
From: John Denker [EMAIL PROTECTED]
Sent: Mar 23, 2006 1:44 PM
To: John Kelsey [EMAIL PROTECTED], cryptography@metzdowd.com
Subject: Re: Entropy Definition (was Re: passphrases with more than 160 bits 
of entropy)

...
With some slight fiddling to get the normalization right, 1/2
raised to the power of (program length) defines a probability
measure.  This may not be the probability you want, but it
is a probability, and you can plug it into the entropy definition.

No, this isn't right for the algorithmic information theory meaning at
all.  For that measure, we can intelligently discuss the entropy of a
specific random string, without reference to a probability model.
Indeed, what's the probability distribution of the sequence of bits
defined by Chaitin's Omega?  

You can certainly complain that they should have used a different term
than entropy here, but you're not going to get these to be the same.  

The problem is almost never understanding the definition of
entropy.  The problem is almost always ascertaining what is
the correct probability measure applicable to the problem
at hand.

Well, you need to decide which of the probability distribution kinds
of entropy measures to use, and that differs in different
applications.  If you use min-entropy or guessing entropy to estimate
the limits on how well some sequence of symbols will compress, you'll
get a pretty lousy answer.  The same goes for using Shannon entropy to
determine whether you have collected enough entropy in your pool to
generate a 128-bit AES key.  

...
 0 occurs with probability 1/2
 each other number from 1 to 2^{160}+1 happens with probability
 2^{-161}.
 
 The Shannon entropy on this distribution is 81.5 bits. 

I think it is very close to 81 bits, not 81.5, but that is a minor
point that doesn't change the conclusion:

  But if you
 tried to sample it once to generate an 80-bit Skipjack key, half the
 time, I'd guess your key on my first try.  

Yeah, but if I sample it N times, with high probability I can generate
a large number of very good keys.  This problem is faced by (and solved
by) any decent TRNG.

Hmmm.  I've seen plenty of people get this wrong.  If you use Shannon
entropy as your measure, and then say when I have collected 128 bits
of Shannon entropy in my pool, I can generate a 128-bit AES key, you
will generate keys that aren't as secure as you think they are.  Now,
most TRNGs seem to evade this and many other problems by designing the
hardware to give a relatively simple probability model.  If your
probability model is close to uniform, then all these probability
distribution based entropy measurements converge (with a constant
difference).   

In the case above, we had better specify N.  If you sample it 16
times, then you have a 2^{-16} chance of still being in a weak state.
Once you get enough samples that the probability of being in the
pathological worst case is negligible (whatever that means in your
application), then you can start generating output bits.  That's
probably somewhere between N=32 and N=64.  
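
The arithmetic behind those numbers, as a one-line sketch (assuming the
samples are independent and the bad symbol has probability 1/2):

    for n in (16, 32, 64):
        print(n, "samples: P[still in the weak case] = 2^-%d = %g" % (n, 2.0 ** -n))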

--John



Re: Creativity and security

2006-03-23 Thread brucee
Blanking out all but the last 4 digits is foolish.  The last is a checksum
and the first four are determined by the merchant.  This greatly reduces
the possibilities for the other 8 digits.  I'd rather just Bank Name or even
the first 4 digits.  (I know that amex use only 15, even worse.)

brucee



Re: Linux RNG paper

2006-03-23 Thread John Kelsey
From: David Malone [EMAIL PROTECTED]
Sent: Mar 23, 2006 3:43 PM
To: Travis H. [EMAIL PROTECTED]
Cc: Heyman, Michael [EMAIL PROTECTED], cryptography@metzdowd.com, [EMAIL 
PROTECTED], [EMAIL PROTECTED]
Subject: Re: Linux RNG paper

...
One metric might be guessability (mean number of guesses required,
or moments thereof).  As you point out, Arikan and Massey have
shown that Shannon entropy is not a particularly good estimate of
guessability. Generalisations of entropy, like Rényi entropy, do seem
to have meaning. The min-entropy mentioned in RFC 4086 seems
reasonable, though I don't think the rationale given in the RFC is
actually correct.

Starting clarification:  

Min-entropy of a probability distribution is 

-lg ( P[max] ), 

minus the base-two log of the maximum probability.  

The nice thing about min-entropy in the PRNG world is that it leads to
a really clean relationship between how many bits of entropy we need
to seed the PRNG, and how many bits of security (in terms of
resistance to brute force guessing attack) we can get.

Suppose I have a string S with 128 bits of min-entropy.  That means
that the highest-probability guess of S has probability 2^{-128}.  I
somehow hash S to derive a 128-bit key.  The question to ask is, could
you guess S more cheaply than you guess the key?  When the min-entropy
is 128, it can't be any cheaper to guess S than to guess the key.
That's true whether we're making one guess, two guesses, ten guesses,
or 2^{127} guesses.   

To see why this is so, consider the best case for an attacker: S is a
128-bit uniform random string.  Now, all possible values have the same
probability, and guessing S is exactly the same problem as guessing
the key.  (I'm ignoring any bad things that happen when we hash down
to a key, but those can be important to think about in a different
context.)  

Now, why is this the best case for an attacker?  Because it gives the
highest probability of guessing right on the nth guess.  If the
min-entropy is 128, then the highest probability symbol must have
prob. 2^{-128}.  If the next highest probability symbol has lower than
2^{-128} probability, then his second guess has lower probability.
And then the next highest probability symbol is constrained in the
same way.   

This makes it really easy, once you're working in min-entropy terms,
to answer questions like do I have enough entropy in this string to
initialize a PRNG based on running AES-128 in counter mode?
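
A toy check of that bound (illustrative distributions only): after n
guesses an attacker's success probability is at most n * 2^-(min-entropy),
with equality exactly in the uniform case:

    import math

    def min_entropy(probs):
        return -math.log2(max(probs))

    def best_n_guesses(probs, n):
        return sum(sorted(probs, reverse=True)[:n])

    skewed  = [1/8] * 4 + [1/24] * 12      # min-entropy 3 bits
    uniform = [1/16] * 16                  # min-entropy 4 bits

    for name, dist in (("skewed", skewed), ("uniform", uniform)):
        print(name, min_entropy(dist),
              [best_n_guesses(dist, n) for n in (1, 2, 4)])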

   David.

--John Kelsey, NIST




Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-23 Thread John Denker

I wrote:

With some slight fiddling to get the normalization right, 1/2
raised to the power of (program length) defines a probability
measure.  This may not be the probability you want, but it
is a probability, and you can plug it into the entropy definition.

John Kelsey wrote:


No, this isn't right for the algorithmic information theory meaning at
all.  


OK, in a moment we will have gone through four plies of no-it-isn't
yes-it-is no-it-isn't yes-it-is.  Let's get serious.  The axiomatic
definition of a measure is
  -- a mapping from sets to numbers
  -- positive
  -- additive on the countable union of disjoint sets

And a probability measure has the further property of being
  -- bounded above

I have checked that -- with due attention to trivial details --
.5 ^ (program length) satisfies this definition.  If you wish to
renew the assertion that there is no such probability measure, please
explain which of the axiomatic requirements is not met.  Please be
specific.

 For that measure, we can intelligently discuss the entropy of a
 specific random string, without reference to a probability model.

That's like saying we can talk about three-dimensional force, velocity,
and acceleration without reference to vectors.

Measure theory is the tried-and-true formalism for dealing with random
strings.  It would be spectacularly contrary to ignore the formalism,
and just plain wrong to say the formalism is inapplicable.


Indeed, what's the probability distribution of the sequence of bits
defined by Chaitin's Omega?  


Huh?  Omega is so obviously a probability that usually the word probability
is used in its definition.  See e.g.
  http://mathworld.wolfram.com/ChaitinsConstant.html
I suppose a masochistic nitpicker could demand to see a proof that this word
is justified, but I'm not going to bother, for reasons given above.




ECC Wit and Wisdom (Fwd)

2006-03-23 Thread Vin McLellan

Pithy wit and wisdom from New Zealand. lol.

_Vin

-Original Message-
From: Peter Gutmann  [EMAIL PROTECTED]
Sent: Thursday, 23 March 2006 12:41 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: RE: [Cfrg] Defining inter operable ECC keys in for IETF protocols

Blumenthal, Uri [EMAIL PROTECTED] writes:

I would MUCH prefer ECC - but my lawyers (yuk!) are telling me that there are
licensing problems, and supposed NSA contacts don't call them back.

Anybody knows anything useful about licensing of ECC GF(p), that he can
share with me?

Certicom have done such a good job of creating FUD about ECC legal 
issues that, unless you're a Certicom licensee, it's easier to not 
use it at all.


So far every time I've been asked about ECC support (which admittedly 
is once in a blue moon anyway), I've asked the organisation who want it 
to come back to me with either proof of a Certicom license or a clear 
statement of which non-infringing mechanisms they want me to 
implement.  In every case, after looking at what's involved, they've 
decided they didn't need ECC that much anyway.


It's like the conclusion from WarGames: the only way to win is not to play.

Peter.

___
Cfrg mailing list
[EMAIL PROTECTED]
https://www1.ietf.org/mailman/listinfo/cfrg





Re: Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-23 Thread Hal Finney
This is getting pretty far afield from cryptography but it is a topic
I find very interesting so I can't resist jumping in.

John Denker writes:
 OK, in a moment we will have gone through four plies of no-it-isn't
 yes-it-is no-it-isn't yes-it-is.  Let's get serious.  The axiomatic
 definition of a measure is
-- a mapping from sets to numbers
-- positive
-- additive on the countable union of disjoint sets

 And a probability measure has the further property of being
-- bounded above

 I have checked that -- with due attention to trivial details --
 .5 ^ (program length) satisfies this definition.  If you wish to
 renew the assertion that there is no such probability measure, please
 explain which of the axiomatic requirements is not met.  Please be
 specific.

This is true, in fact it is sometimes called the universal distribution
or universal measure.  In more detail, it is a distribution over all
finite-length strings.  The measure for a particular string X is defined
as the sum over all programs that output X of 1/2^L_i, where L_i is the
length of each such program.

Often the algorithmic complexity of a string is defined as the length
of the shortest program to output the string.  The universal measure is
based on the same idea, but takes into consideration that there may be
multiple programs that output the same string.  Each program of length L_i
contributes 1/2^L_i to the string's measure.  If there is only one short
program and all others are much longer, then the probability measure
is essentially 1/2^C where C is the length of this shortest program,
i.e. the algorithmic complexity.

The universal measure for a string can also be thought of as the
probability that, given a fixed Universal Turing Machine (UTM), a randomly
chosen input program will output that string.

So this is clearly a probability distribution (with some technicalities
regarding issues of program lengths being glossed over here) as John
Denker says.  However to go from this to a notion of entropy is more
questionable.

There are a countably infinite number of finite strings, and all of
them have non-zero probabilities under this distribution.  This means
that for most strings, the probability must be very low, asymptotically
approaching zero.  In fact you can show that for most strings of length n,
their measure is 1/2^n; this is equivalent to saying that most strings
are effectively random and have no shorter program to output them.

Shannon entropy is defined over a probability distribution.  That is,
given a probability distribution we can find its Shannon entropy by
summing -p_i * log2(p_i) over all members of the distribution.  If we
approximate the universal distribution by 1/2^n for strings of length
n, this sum is clearly divergent.  So there is not really a meaningful
Shannon entropy for this probability distribution, or in other words
the entropy is infinite.
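
A worked sketch of that divergence (using the rough 1/2^n approximation
from the paragraph above): the strings of length n together contribute
about 2^n * 2^-n * n = n bits, so the partial sums grow without bound:

    total = 0.0
    for n in range(1, 101):
        total += (2 ** n) * (2.0 ** -n) * n     # contribution of length n
        if n % 25 == 0:
            print(n, total)                     # 25 325.0, 50 1275.0, ...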

John Kelsey asked:
  Indeed, what's the probability distribution of the sequence of bits
  defined by Chaitin's Omega?  

This probability distribution is defined only over finite strings and so
Omega is not within the universe of this distribution.  It should also be
noted that it is impossible for an n bit program to output more than n
bits of Omega (plus or minus an additive constant specific to the UTM).
Hence even if we consider successive approximations to Omega of ever
increasing length, their measures would tend asymptotically to zero.

Hal Finney
