Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread Taral
On 3/21/06, Travis H. [EMAIL PROTECTED] wrote:
> Does anyone have a good idea on how to OWF passphrases without
> reducing them to lower entropy counts?

I've frequently seen constructs of this type:

H(P), H(0 || P), H(0 || 0 || P), ...

If entropy(P) > entropy(H), the entries will be independent, theoretically.
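
A minimal sketch of that construct in Python (hashlib's SHA-1 standing in
for H; the passphrase value and number of outputs are purely illustrative):

    import hashlib

    def derive(passphrase: bytes, n_outputs: int) -> bytes:
        # H(P), H(0 || P), H(0 || 0 || P), ...: prepend i zero bytes
        # before hashing to get the i-th 160-bit output.
        out = b""
        for i in range(n_outputs):
            out += hashlib.sha1(b"\x00" * i + passphrase).digest()
        return out

    key_material = derive(b"example passphrase", 4)   # 4 * 160 bits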

--
Taral [EMAIL PROTECTED]
You can't prove anything.
-- Gödel's Incompetence Theorem



Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread Joseph Ashwood
- Original Message - 
From: Travis H. [EMAIL PROTECTED]

Subject: passphrases with more than 160 bits of entropy



> I was thinking that one could hash the first block, copy the
> intermediate state, finalize it, then continue the intermediate result
> with the next block, and finalize that.  Is this safe?  Is there a
> better alternative?


Use a bigger hash: SHA-512 should be good to basically 512 bits, and the
same goes for Whirlpool. If those aren't enough for you, use the chaining
mode from Whirlpool together with Rijndael at an appropriate block size.
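
For the first suggestion, a one-line sketch (Python's hashlib; the
passphrase value is just an example):

    import hashlib

    digest = hashlib.sha512(b"example passphrase").digest()   # 64 bytes = 512 bits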
   Joe 






Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread Stefan Lucks
> Does anyone have a good idea on how to OWF passphrases without
> reducing them to lower entropy counts?  That is, I've seen systems
> which hash the passphrase then use a PRF to expand the result --- I
> don't want to do that.  I want to have more than 160 bits of entropy
> involved.

What kind of application are you running, such that 150 bits of entropy
is insufficient?

> I was thinking that one could hash the first block, copy the
> intermediate state, finalize it, then continue the intermediate result
> with the next block, and finalize that.  Is this safe?  Is there a
> better alternative?

As I understand your proposal, you split up the passphrase P into L blocks
P_1, ..., P_L (padding the last block P_L as necessary), and then you
output L*160 bits like this:

F(P) = ( H(P_1), H(P_1,P_2), ..., H(P_1, P_2, ..., P_L) ).
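
In code, a rough sketch of this F (with SHA-1 standing in for H and
64-byte blocks; both choices, and the omission of padding, are only
illustrative):

    import hashlib

    def F(P: bytes, block_size: int = 64) -> bytes:
        # Split P into blocks and hash each successive prefix:
        # F(P) = H(P_1) || H(P_1,P_2) || ... || H(P_1,...,P_L)
        # (padding of the last block omitted for brevity)
        blocks = [P[i:i + block_size] for i in range(0, len(P), block_size)]
        return b"".join(
            hashlib.sha1(b"".join(blocks[:i + 1])).digest()
            for i in range(len(blocks))
        )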

This does not look like a good idea to me:

   1.   If the size of the P_i is the internal block size of the hash
function (e.g., 512 bits for SHA-1, SHA-256, or RIPEMD-160) and
your message P=P_1 is just one block long, you definitely end up
with (at most) 160 bits of entropy, however large the entropy in
P_1 is (it could be 512 bits).

   2.   If the local entropy in each block P_i (or even the conditional
entropy in P_i, given all of P_{i-1}, ..., P_1) is low, then one
can find P step by step. This function F(P) is thus *much* *worse*
than its simple and straightforward counterpart H(P).

   3.   In fact, to estimate the entropy of F(P), you can take the
conditional entropy of each P_i: the entropy of F(P) is close to
the maximum of these conditional entropies ...


Any better solution I can think of will be significantly less performant
than just applying H(P). One idea of mine would be the function G:

   * Let i be a counter of some fixed size, say 32 bits.

   * Let J+1 be the number of 160-bit values you need (e.g., J = 4*L).

   * G(P) = ( H(P_1, 0, P_1,  P_2, 0, P_2,  ...,  P_L, 0, P_L),
              H(P_2, 1, P_2,  ...,  P_L, 1, P_L,  P_1, 1, P_1),
              ...,
              H(P_L, J, P_L,  P_1, J, P_1,  ...,  P_{L-1}, J, P_{L-1}) )

Would that be OK for your application?
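
A rough sketch of G along the same lines (SHA-1 as H, 64-byte blocks, a
32-bit big-endian counter; these parameters and the rotation order are one
reading of the construction above, not prescribed by it):

    import hashlib

    def G(P: bytes, J: int, block_size: int = 64):
        # Split P into blocks P_1..P_L (padding of the last block omitted).
        blocks = [P[i:i + block_size] for i in range(0, len(P), block_size)]
        L = len(blocks)
        outputs = []
        for j in range(J + 1):
            ctr = j.to_bytes(4, "big")                     # the 32-bit counter
            rotated = blocks[j % L:] + blocks[:j % L]      # start at a different block each round
            data = b"".join(b + ctr + b for b in rotated)  # each block appears as P_i, j, P_i
            outputs.append(hashlib.sha1(data).digest())
        return outputs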

In any case, I think that using a 160-bit hash function as a building
block for a universal one-way function with (potentially) much more than
160 bits of entropy is a bit shaky.




-- 
Stefan Lucks  Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
e-mail: [EMAIL PROTECTED]
home: http://th.informatik.uni-mannheim.de/people/lucks/
--  I  love  the  taste  of  Cryptanalysis  in  the  morning!  --




Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread Aram Perez

On Mar 22, 2006, at 4:28 AM, Thierry Moreau wrote:


> Travis H. wrote:
>
>> Hi,
>> Does anyone have a good idea on how to OWF passphrases without
>> reducing them to lower entropy counts?  That is, I've seen systems
>> which hash the passphrase then use a PRF to expand the result --- I
>> don't want to do that.  I want to have more than 160 bits of entropy
>> involved.
>
> More than 160 bits is a wide-ranging requirement.
>
> Entropy is a highly discussed unit of measure.


And very often confused. While you do want maximum entropy, maximum
entropy is not sufficient. The sequence of consecutive numbers 0 - 255
has maximum entropy but no randomness (although there is a finite
probability that an RNG will produce the sequence).


Regards,
Aram Perez




Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread Aram Perez

On Mar 22, 2006, at 9:04 AM, Perry E. Metzger wrote:



> Aram Perez [EMAIL PROTECTED] writes:
>
>> Entropy is a highly discussed unit of measure.
>
>> And very often confused.
>
> Apparently.
>
>> While you do want maximum entropy, maximum entropy is not sufficient.
>> The sequence of consecutive numbers 0 - 255 has maximum entropy but no
>> randomness (although there is a finite probability that an RNG will
>> produce the sequence).
>
> One person might claim that the sequence of numbers 0 to 255 has 256
> bytes of entropy.


It could be, but Shannon would not.


> Another person will note "the sequence of numbers 0-255" completely
> describes that sequence and is only 30 bytes long.


I'm not sure I see how you get 30 bytes long.


> Indeed, more
> compact ways yet of describing that sequence probably
> exist. Therefore, we know that the sequence 0-255 does not, in fact,
> have maximum entropy in the sense that the entropy of the sequence
> is far lower than 256 bytes and probably far lower than even 30 bytes.


Let me rephrase my sequence. Create a sequence of 256 consecutive  
bytes, with the first byte having the value of 0, the second byte the  
value of 1, ... and the last byte the value of 255. If you measure  
the entropy (according to Shannon) of that sequence of 256 bytes, you  
have maximum entropy.



> Entropy is indeed often confusing. Perhaps that is because both the
> Shannon and the Kolmogorov-Chaitin definitions do not provide a good
> way of determining the lower bound of the entropy of a datum, and
> indeed no such method can exist.


No argument from me.

Regards,
Aram Perez



Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread leichter_jerrold
| Let me rephrase my sequence. Create a sequence of 256 consecutive  
| bytes, with the first byte having the value of 0, the second byte the  
| value of 1, ... and the last byte the value of 255. If you measure  
| the entropy (according to Shannon) of that sequence of 256 bytes, you  
| have maximum entropy.
Shannon entropy is a property of a *source*, not a particular sequence
of values.  The entropy is derived from a sum of equivocations about
successive outputs.

If we read your "create a sequence...", then you've described a source -
a source with exactly one possible output.  All the probabilities will
be 1 for the actual value, 0 for all other values; the equivocations are
all 0.  So the resulting Shannon entropy is precisely 0.
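
A quick check of that arithmetic (a minimal Python sketch, nothing more):

    import math

    # Shannon entropy of a source: H = -sum(p_i * log2(p_i)).
    # A source with exactly one possible output (probability 1) gives H = 0.
    def shannon_entropy(probs):
        return sum(-p * math.log2(p) for p in probs if p > 0)

    print(shannon_entropy([1.0]))   # 0.0 bits: one certain outcome carries no information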

-- Jerry



Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread Perry E. Metzger

Aram Perez [EMAIL PROTECTED] writes:
> On Mar 22, 2006, at 9:04 AM, Perry E. Metzger wrote:
>
>> Aram Perez [EMAIL PROTECTED] writes:
>>> Entropy is a highly discussed unit of measure.
>>
>>> And very often confused.
>>
>> Apparently.
>>
>>> While you do want maximum entropy, maximum entropy is not sufficient.
>>> The sequence of consecutive numbers 0 - 255 has maximum entropy but no
>>> randomness (although there is a finite probability that an RNG will
>>> produce the sequence).
>>
>> One person might claim that the sequence of numbers 0 to 255 has 256
>> bytes of entropy.
>
> It could be, but Shannon would not.

No, he wouldn't. You did, however. The maximum entropy a string of 256
bytes could have would be 256*8 bits. Since we're talking about a
sequence with far less entropy than 256*8 bits, it is not a sequence
of maximum entropy. There are, of course, trivially produced sequences
of maximum entropy.

>> Another person will note "the sequence of numbers 0-255" completely
>> describes that sequence and is only 30 bytes long.
>
> I'm not sure I see how you get 30 bytes long.

$ echo "the sequence of numbers 0-255" | wc -c
  30

Now, of course, there are probably not more than 1.5 bits of entropy
per letter in that sentence fragment, so really we're probably talking
about ~6 bytes of information. Doubtless, though, cleverer people
could do better.

>> Indeed, more
>> compact ways yet of describing that sequence probably
>> exist. Therefore, we know that the sequence 0-255 does not, in fact,
>> have maximum entropy in the sense that the entropy of the sequence
>> is far lower than 256 bytes and probably far lower than even 30 bytes.
>
> Let me rephrase my sequence. Create a sequence of 256 consecutive
> bytes, with the first byte having the value of 0, the second byte the
> value of 1, ... and the last byte the value of 255.

We heard you the first time.

The Shannon information of a message is the negative of the log (base
2) of the probability of the message. Of course, that definition is
only really useful if you're talking about a sequence of messages. The
Kolmogorov-Chaitin information of a text is (roughly) the smallest
program that can generate the text.

Both of these definitions are getting at what can informally be described
as "the smallest representation of a piece of information."

If Alice asks Bob "what color are your eyes", Bob could send a 10M
high resolution image of his eye, a precise measurement of the
spectrum reflected by his eyes, the word "blue", or perhaps even
something shorter, like a couple of bits that, according to a
pre-agreed table, represent an eye color from a table of eye
colors. The smallest possible representation of just the eye color,
the couple of bits from a table of eye color codes, is likely closest
to the information content of someone's eye color, though a precise
measurement is impossible since it is highly context dependent.
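
To make that concrete (the table of eight equally likely colors is an
assumption purely for illustration):

    import math

    # With a pre-agreed table of 8 equally likely eye colors, one color
    # carries -log2(1/8) = 3 bits of Shannon information, no matter how
    # many megabytes a photograph of the eye takes up.
    print(-math.log2(1 / 8))   # 3.0 bits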

> If you measure the entropy (according to Shannon) of that sequence
> of 256 bytes, you have maximum entropy.

Clearly you don't, since the sequence can be described with far less
information than 256 bytes. A completely random and incompressible
sequence of 256 bytes would have maximum entropy, since it is
impossible to compress to less than 256*8 bits, but the sequence
0..255 has very little entropy because it is easily compressed to a
smaller size. This should be obvious. If I start reciting to you
"zero. one. two..." then for the first few iterations the probability
of the next byte will be 1/256 or close to it. Shortly, though, anyone
who isn't an idiot will guess (with probability nearly 1) what the next
value in the sequence is, and the information content of subsequent
bytes falls to far less than one bit per byte. This is just another way
of saying that the smallest program that generates 0..255 is quite
small, or that you can easily compress 0..255 into a description that
fits in far less than 256 bytes.
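
A minimal illustration of the "small program" point (a sketch only, not a
formal Kolmogorov-Chaitin measurement):

    import os

    seq = bytes(range(256))           # the sequence 0..255
    program = "bytes(range(256))"     # a 17-character description that regenerates it
    print(len(seq), len(program))     # 256 vs 17: the data has a far shorter description

    # A random 256-byte string, by contrast, generally has no description
    # shorter than the 256 bytes themselves.
    random_seq = os.urandom(256)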

>> Entropy is indeed often confusing. Perhaps that is because both the
>> Shannon and the Kolmogorov-Chaitin definitions do not provide a good
>> way of determining the lower bound of the entropy of a datum, and
>> indeed no such method can exist.
>
> No argument from me.

I thought you were, indeed, still arguing.


Perry



Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread Perry E. Metzger

[EMAIL PROTECTED] writes:
> | Let me rephrase my sequence. Create a sequence of 256 consecutive
> | bytes, with the first byte having the value of 0, the second byte the
> | value of 1, ... and the last byte the value of 255. If you measure
> | the entropy (according to Shannon) of that sequence of 256 bytes, you
> | have maximum entropy.
>
> Shannon entropy is a property of a *source*, not a particular sequence
> of values.  The entropy is derived from a sum of equivocations about
> successive outputs.
>
> If we read your "create a sequence...", then you've described a source -
> a source with exactly one possible output.  All the probabilities will
> be 1 for the actual value, 0 for all other values; the equivocations are
> all 0.  So the resulting Shannon entropy is precisely 0.

Shannon information certainly falls to zero as the probability with
which a message is expected approaches 1. Kolmogorov-Chaitin
information cannot fall to zero, though it can get exceedingly small.

In either case, though, I suspect we're in agreement on what entropy
means, but Mr. Perez is not familiar with the same definitions that
the rest of us are using.

Perry



Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread Matt Crawford
> Let me rephrase my sequence. Create a sequence of 256 consecutive
> bytes, with the first byte having the value of 0, the second byte
> the value of 1, ... and the last byte the value of 255. If you
> measure the entropy (according to Shannon) of that sequence of 256
> bytes, you have maximum entropy.


I so often get irritated when non-physicists discuss entropy.  The
word is almost always misused. I looked at Shannon's definition and
it is fine, from a physics point of view.  But if you apply it
thoughtfully to a single fixed sequence, you correctly get the answer
zero.


If your sequence is defined to be { 0, 1, 2, ..., 255 }, the  
probability of getting that sequence is 1 and of any other sequence,  
0.  Plug it in.


If you have a generator of 8-bit random numbers and every sample is  
independent and uniformly distributed, and you ran this for a  
gazillion iterations and wrote to the list one day saying the special  
sequence { 0, 1, 2, ..., 255 } had appeared in the output, that's a  
different story.  But still, we would talk about the entropy of your  
generator, not of one particular sequence of outputs.
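
Plugging the uniform generator into Shannon's formula (a quick sketch, for
comparison with the single-fixed-sequence case above):

    import math

    # A generator of independent, uniformly distributed 8-bit values:
    # H = -sum_{i=0..255} (1/256) * log2(1/256) = 8 bits per sample,
    # a property of the generator, not of any one output sequence.
    print(-sum((1 / 256) * math.log2(1 / 256) for _ in range(256)))   # 8.0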





PayPad

2006-03-22 Thread leichter_jerrold
PayPad (www.paypad.com) is an initiative that seems to have JPMorgan Chase
behind it to provide an alternative method for paying for transactions
online.  You buy a PayPad device, a small card reader with an integrated
keypad.  It connects to your PC using USB.  To pay using PayPad at
a merchant that supports it, you select that as an option, swipe your
card, enter your PIN, and the data is (allegedly) sent encrypted
from the PayPad device directly to the merchant.

Advantage to the merchant:  It's a debit card transaction, and they
claim the transaction fees are half those of a credit card. Of course,
the consumer pays for everything:  The device itself (about $60), the
lack of float.  It's not clear what kind of recourse you might have
in case of fraud.

It's sold as the secure alternative to using your credit card
online.  Unfortunately, it has the problems long discussed on
this list:  The PayPad itself has no display.  It authorizes a
transaction the details of which are on your computer screen.
You have only the software's word for it that there is any
connection between what's on the screen and what's sent to the
merchant (or to someone else entirely).

Realistically, it's hard to see how this is any more secure than
a standard credit card transaction in an SSL session.  It's not
even clear that the card data is encrypted in the device - for
all we know, card data and PIN are transferred over the USB to the
application you have to run on your PC, ready to be stolen by,
say, a targeted virus.  They do claim that you are protected in
another way: "Your sensitive data never goes to the merchant or
into a database that can be hacked.  The encrypted transaction
is handled directly with your bank."  (I guess banks don't
keep databases...)

Anyone know anything more about this effort?

-- Jerry




Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread Perry E. Metzger

Victor Duchovni [EMAIL PROTECTED] writes:
> Actually calculating the entropy for real-world functions and generators
> may be intractable...

It is, in fact, generally intractable.

1) Kolmogorov-Chaitin entropy is just plain intractable -- finding the
   smallest possible Turing machine to generate a sequence is not
   computable.
2) Shannon entropy requires a precise knowledge of the probability of
   all symbols, and in any real world situation that, too, is
   impossible.

Usually, the best you can do is produce (bad) bounds, and sometimes
not even that.
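
For instance, the obvious frequency-based ("plug-in") estimate is only
that, an estimate from a finite sample rather than a bound of any kind
(a minimal Python sketch; the test value is purely illustrative):

    import math
    from collections import Counter

    def empirical_entropy(data: bytes) -> float:
        # Per-symbol Shannon entropy computed from observed byte frequencies.
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

    # Reports 8.0 bits/byte for the fully predictable sequence 0..255,
    # wildly overstating what an attacker would need to guess it.
    print(empirical_entropy(bytes(range(256))))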

One thing that can be done, of course, is that you can prove, under
certain assumptions, that it would take an intractable amount of
computation to distinguish a particular PRNG from a true random
sequence with greater than 50% probability. However, this is very much
NOT the same as showing that the PRNG sequence contains an endless
stream of entropy -- in fact, the unicity distance very clearly shows
you how much real entropy you have, and it usually isn't
much. Merely because too much computation would be needed does not
mean that you've created entropy -- you've just made it hard for the
opponent to get at your PRNG seed.

"Anyone who considers arithmetical methods of producing random digits
is, of course, in a state of sin." -- John von Neumann

Perry



Re: Linux RNG paper

2006-03-22 Thread Bill Frantz
On 3/21/06, [EMAIL PROTECTED] (Heyman, Michael) wrote:

> Gutterman, Pinkas, and Reinman have produced a nice as-built specification
> and analysis of the Linux random number generator.

From http://eprint.iacr.org/2006/086.pdf:

...

"Since randomness is often consumed in a multi-user environment, it makes
sense to generalize the BH model to such environments. Ideally, each user
should have its own random-number generator, and these generators should
be refreshed with different data which is all derived from the entropy
sources available to the system (perhaps after going through an additional
PRNG). This architecture should prevent denial-of-service attacks, and
prevent one user from learning about the randomness used by other users."

One of my pet peeves: The idea that the user is the proper atom of
protection in an OS.

My threat model includes different programs run by one (human) user.  If
a Trojan, running as part of my userID, can learn something about the
random numbers harvested by my browser/gpg/ssh etc., then it can start
to attack the keys used by those applications, even if the OS does a
good job of keeping the memory spaces separate and protected.

Cheers - Bill

-
Bill Frantz| The first thing you need   | Periwinkle 
(408)356-8506  | when using a perimeter | 16345 Englewood Ave
www.pwpconsult.com | defense is a perimeter.| Los Gatos, CA 95032



Entropy Definition (was Re: passphrases with more than 160 bits of entropy)

2006-03-22 Thread Aram Perez

On Mar 22, 2006, at 2:05 PM, Perry E. Metzger wrote:


> Victor Duchovni [EMAIL PROTECTED] writes:
>> Actually calculating the entropy for real-world functions and generators
>> may be intractable...
>
> It is, in fact, generally intractable.
>
> 1) Kolmogorov-Chaitin entropy is just plain intractable -- finding the
>    smallest possible Turing machine to generate a sequence is not
>    computable.
> 2) Shannon entropy requires a precise knowledge of the probability of
>    all symbols, and in any real world situation that, too, is
>    impossible.


I'm neither a cryptographer nor a mathematician, so I stand duly
corrected/chastised ;-)


So, if you folks care to educate me, I have several questions related  
to entropy and information security (apologies to any physicists):


* How do you measure entropy? I was under the (false) impression that  
Shannon gave a formula that measured the entropy of a message (or  
information stream).
* Can you measure the entropy of a random oracle? Or is that what  
both Victor and Perry are saying is intractable?

* Are there units of entropy?
* What is the relationship between randomness and entropy?
* (Apologies to the original poster) When the original poster  
requested passphrases with more than 160 bits of entropy, what was  
he requesting?
* Does processing an 8 character password with a process similar to  
PKCS#5 increase the entropy of the password?

* Can you add or increase entropy?

Thanks in advance,
Aram Perez



Re: Linux RNG paper

2006-03-22 Thread Victor Duchovni
On Wed, Mar 22, 2006 at 02:31:37PM -0800, Bill Frantz wrote:

> One of my pet peeves: The idea that the user is the proper atom of
> protection in an OS.
>
> My threat model includes different programs run by one (human) user.  If
> a Trojan, running as part of my userID, can learn something about the
> random numbers harvested by my browser/gpg/ssh etc., then it can start
> to attack the keys used by those applications, even if the OS does a
> good job of keeping the memory spaces separate and protected.

Why would a trojan running in your security context bother with attacking
a PRNG? It can just read your files, record your keystrokes, change your
browser proxy settings, ...

If the trojan is in a sand-box of some sort, the sand-box is a different
security context, and in that case, perhaps a different RNG view is
justified.

Some applications that consume a steady stream of RNG data maintain
their own random pool, and use the public pool to periodically mix in
some fresh state. These are less vulnerable to snooping/exhaustion of
the public stream.

The Postfix tlsmgr(8) process proxies randomness for the rest of the
system in this fashion...
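
A minimal sketch of that pattern (this is not Postfix's tlsmgr(8) code,
and the hash-based pool below is only illustrative, not a vetted
generator):

    import hashlib
    import os

    class LocalPool:
        """An application-local pool, stirred occasionally from the public source."""
        def __init__(self):
            self.state = os.urandom(64)     # initial seed from the public pool
            self.counter = 0

        def mix_in_fresh_state(self):
            # Periodically fold fresh public-pool output into the local state.
            self.state = hashlib.sha512(self.state + os.urandom(64)).digest()

        def get(self, n: int) -> bytes:
            out = b""
            while len(out) < n:
                self.counter += 1
                out += hashlib.sha512(self.state + self.counter.to_bytes(8, "big")).digest()
            return out[:n]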

-- 

 /\ ASCII RIBBON  NOTICE: If received in error,
 \ / CAMPAIGN Victor Duchovni  please destroy and notify
  X AGAINST   IT Security, sender. Sender does not waive
 / \ HTML MAILMorgan Stanley   confidentiality or privilege,
   and use is prohibited.



Re: passphrases with more than 160 bits of entropy

2006-03-22 Thread John Denker

Matt Crawford wrote:

> I so often get irritated when non-physicists discuss entropy.  The word
> is almost always misused.


Yes, the term entropy is often misused ... and we have seen some
remarkably wacky misusage in this thread already.  However, physicists
do not have a monopoly on correct usage.  Claude S was not a physicist,
yet he definitely knew what he was talking about.  Conversely, I know
more than a few card-carrying physicists who have no real feel for what
entropy is.

> I looked at Shannon's definition and it is
> fine, from a physics point of view.


Indeed.

> But if you apply it thoughtfully to a
> single fixed sequence, you correctly get the answer zero.


I agree with all that, except for the "But".  Shannon well knew that
the entropy was zero in such a situation.

> If your sequence is defined to be { 0, 1, 2, ..., 255 }, the
> probability of getting that sequence is 1 and of any other sequence,
> 0.  Plug it in.


Indeed.

> If you have a generator of 8-bit random numbers and every sample is
> independent and uniformly distributed, and you ran this for a gazillion
> iterations and wrote to the list one day saying the special sequence
> { 0, 1, 2, ..., 255 } had appeared in the output, that's a different
> story.  But still, we would talk about the entropy of your generator,
> not of one particular sequence of outputs.


Yes.  Shannon called it the source entropy, i.e. the entropy of
the source, i.e. the entropy of the generator.


Perry Metzger wrote:


> Usually, the best you can do is produce (bad) bounds, and sometimes
> not even that.


Huh?  What's your metric for "usually"?  I'll agree as a matter of
principle that whatever you're doing, you can always do it wrong.
But that doesn't prevent me from doing it right.  I can use physics
to produce good bounds, that is,
  http://www.av8n.com/turbid/



===

The problem posed by the OP is trivial, and good solutions have already
been posted.  To recap: SHA-512 exists, and if that isn't good enough,
you can concatenate the output of several different one-way functions.
You can create new hash functions at the drop of a hat by prepending
something (a counter suffices) to the input to the hash.

Example:  result = hash(1 || pw)  ||  hash(2 || pw)  ||  hash(3 || pw)
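
A minimal sketch of that recap in Python (SHA-512 as the hash; the
one-byte counter encoding and the passphrase are assumptions for
illustration only):

    import hashlib

    def expand(pw: bytes, n_blocks: int = 3) -> bytes:
        # result = hash(1 || pw) || hash(2 || pw) || hash(3 || pw) || ...
        return b"".join(
            hashlib.sha512(bytes([i]) + pw).digest()
            for i in range(1, n_blocks + 1)
        )

    result = expand(b"example passphrase")   # 3 * 512 bits of derived output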

