Re: going around the crypto

1999-08-17 Thread Enzo Michelangeli

----- Original Message -----
From: Peter Gutmann [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, August 14, 1999 6:03 AM
Subject: Re: going around the crypto


[...]
 Smart cards with thumbprint readers are one step in this
 direction, although they're currently prohibitively expensive.

Siemens is experimenting with such a device in mobile phones, so it can't
cost that much:

http://www.siemens.com.hk/press/icp01.htm

Enzo





Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-17 Thread John Denker

Hi Ted --

At 11:41 PM 8/14/99 -0400, you wrote:

 standard Mathematician's style --- encrypted by formulae
 guaranteed to make it opaque to all but those who are trained in the
 peculiar style of Mathematics' papers.
 ...
 someone tried to persuade me to use Maurer's test
 ...
 too memory intensive and too CPU intensive

You are very wise to be skeptical of mathematical mumbo-jumbo.

You mentioned questions about efficiency, but I would like to call into
question whether the entropy estimate provided by Maurer's Universal
Statistical Test (MUST) would be suitable for our purposes, even if it
could be computed for free.

Don't be fooled by the name "Universal".  If you look it up in a real-world
dictionary, you might conclude that Universal means all-purpose or
general-purpose.  But if you look it up in a mathematical dictionary, you
will find that a Universal probability distribution has the property that
if we compare it to some other distribution, it is not lower by more than
some constant factor.  Alas, the "constant" depends on which two
distributions are being compared, and there is no uniform bound on it!  Oooops!

In the language of entropy, a Universal entropy-estimator overestimates the
entropy by no more than a constant -- but beware, there is no uniform upper
bound on the constant.

To illustrate this point, I have invented Denker's Universal Statistical
Test (DUST) which I hereby disclose and place in the public domain:
According to DUST, the entropy of a string is equal to its length.  That's
it!  Now you may not *like* this test, and you may quite rightly decide
that it is not suitable for your purposes, but my point is that according
to the mathematical definitions, DUST is just as Universal as MUST.
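
To make that concrete in code: here is a toy sketch (mine, in Python; the
parameter choices and names are not from anyone's post) of DUST next to a
bare-bones version of Maurer's test with L = 8.  Fed a plain counter -- a
string whose preparation rule makes its entropy essentially zero -- both
estimators cheerfully report full entropy:

import math

def dust_entropy(data):
    """DUST: the entropy of a string equals its length (here, in bits)."""
    return 8 * len(data)

def maurer_statistic(data, q=2560):
    """Bare-bones Maurer test, L = 8 (one block = one byte).  Returns the
    per-block statistic fc; an ideal random source gives about 7.1836656.
    (The test-block count here is far below Maurer's recommended minimum;
    this is only an illustration.)"""
    last_seen = {}
    for i in range(q):                    # initialization blocks
        last_seen[data[i]] = i
    total = 0.0
    k = len(data) - q                     # number of test blocks
    for i in range(q, len(data)):         # average log2 of the gap since
        total += math.log2(i - last_seen.get(data[i], -1))
        last_seen[data[i]] = i            # ...each block value last occurred
    return total / k

counter = bytes(i % 256 for i in range(40960))  # perfectly predictable input

print(dust_entropy(counter))      # 327680 "bits" -- absurd, but Universal
print(maurer_statistic(counter))  # 8.0 bits/byte, *above* the ~7.18 expected
                                  # of true randomness: MUST is fooled too

Every byte value in the counter recurs after exactly 256 blocks, so each gap
is 256 and the statistic comes out to exactly log2(256) = 8 bits per byte.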

There are profound theoretical reasons to believe it is impossible to
calculate a useful lower bound on the entropy of a string without knowing
how it was prepared.  There simply *cannot* be an all-purpose statistical test.

If you were to make the mistake of treating a Universal estimator as an
all-purpose estimator, and then applying it in a situation where the input
might (in whole or in part) be coming from an adversary, you would lay
yourself open to a chosen-seed attack (analogous to a chosen-plaintext attack).

On the other side of the same coin, if you *do* know something about how
the input was prepared, there obviously are things you can do to improve
your estimate of its entropy.  For example, in the early stages of a
hardware RNG, you could use two input channels, sending the
differential-mode signal to the next stage, and using the common-mode
signal only for error checking.  This is a good way to get rid of a certain
type of interference, and could be quite useful in the appropriate
circumstances.  Returning to the ugly side of the coin, you can see that a
small change in the way the inputs were prepared would make this
differencing scheme worthless, possibly leading to wild overestimates of
the entropy.
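
A toy sketch of the two-channel idea (my construction; the tolerance and the
error-check rule are made up for illustration):

def next_stage(chan_a, chan_b, tolerance=100):
    """Difference two sampled channels: interference picked up equally by
    both cancels in the difference.  The common-mode signal is used only
    as an error check, never as entropy."""
    diff   = [a - b for a, b in zip(chan_a, chan_b)]  # sent downstream
    common = [a + b for a, b in zip(chan_a, chan_b)]  # error checking only
    mean = sum(common) / len(common)
    if max(abs(c - mean) for c in common) > tolerance:
        raise RuntimeError("common-mode anomaly: interference or bad channel")
    return diff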

BOTTOM LINE:  
 *) Incorporating an all-purpose entropy-estimator into /dev/random is
impossible.
 *) Incorporating something that *pretends* to be an all-purpose estimator
is a Really Bad Idea.
 *) The present design appears to be the only sound design:  whoever
provides the inputs is responsible for providing the estimate of the
entropy thereof.  If no estimate is provided, zero entropy is attributed.

Cheers --- jsd




semantics of /dev/{u}random

1999-08-17 Thread William Allen Simpson

-----BEGIN PGP SIGNED MESSAGE-----

Catching up, and after talking with John Kelsey and Sandy Harris at
SAC'99, it seems clear that there is some consensus on these lists that
the semantics of /dev/urandom need improvement, and that some principles
of Yarrow should be incorporated.  I think that most posters can be
satisfied by making the functionality of /dev/random and /dev/urandom
more orthogonal.

/dev/random would be essentially the same as today:
 * provide TRNG stream to kernel/system-wide processes
 * maintain entropy "accumulator" pool(s)
 * estimate "available" entropy
 * block when more requested than available

/dev/urandom would be updated:
 * provide PRNG stream to userland processes
 * counter mode
 * non-blocking
 * fast re-seed in smaller chunks periodically
 * slow re-seed in large chunks when TRNG is "full"
 * force ("explicit") re-seed function at critical times

By dividing the responsibilities, each can be better analysed, and
future changes made without disturbing applications.
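
To make the division concrete, here is a toy user-space sketch (mine, not
proposed kernel code) of the /dev/urandom side; SHA-1 over a key and counter
stands in for whatever block function is actually chosen (Yarrow-160 itself
uses 3DES in counter mode, keyed via SHA-1):

import hashlib, struct

class UrandomSketch:
    """Counter-mode PRNG: output is hash(key || counter), so the output
    stream never feeds back into the generator state."""

    def __init__(self, seed):
        self.key = seed              # replaced wholesale on each re-seed
        self.counter = 0

    def read(self, n):
        out = b""
        while len(out) < n:
            out += hashlib.sha1(self.key +
                                struct.pack(">Q", self.counter)).digest()
            self.counter += 1
        return out[:n]

    def reseed(self, trng_bits):
        # Fold fresh /dev/random output into the key; the counter runs on.
        self.key = hashlib.sha1(self.key + trng_bits).digest()

prng = UrandomSketch(b"initial seed drawn from /dev/random")
data = prng.read(1024)               # never blocks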

There does not seem to be consensus on whether to limit /dev/random to
mode "rw-------" (owner access only), as this might affect current
applications.  I think that we should bite the bullet and make this
change, to make future changes cleaner and to prevent "entropy
exhaustion" attacks from userland.  Consider this a security bug fix for
a known attack.

I suggest that fast re-seed occur when estimated new entropy (since the
last fast re-seed) from all sources reaches 128 bits, but that only 64
bits be used.  This differs from Yarrow-160, as a concession to the
blocking nature of /dev/random.  Leaving half the entropy will allow a
build-up to the slow re-seed, and accommodate other uses of /dev/random.

I suggest that slow re-seed occur when estimated new entropy (since the
last slow re-seed) from all sources reaches 320 bits, with at least TWO
sources of 160 bits each (see Yarrow-160 section 5.4), but that only 160
bits be used.

I suggest that the force re-seed function be rewind(), and that only
min(available bits / 2, 64 bits) be used, counting as a fast re-seed.
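
In the same sketch language, the suggested scheduling might look like this
(the pool accounting interface is hypothetical; the thresholds are the ones
above):

FAST_THRESHOLD = 128   # bits of new entropy since the last fast re-seed
SLOW_THRESHOLD = 320   # bits of new entropy since the last slow re-seed

def maybe_reseed(prng, pool):
    if (pool.bits_since_slow >= SLOW_THRESHOLD
            and pool.sources_at_or_above(160) >= 2):
        prng.reseed(pool.extract(160))   # slow re-seed: use 160 bits
    elif pool.bits_since_fast >= FAST_THRESHOLD:
        prng.reseed(pool.extract(64))    # fast re-seed: use only 64 bits,
                                         # leaving the rest to build toward
                                         # the slow re-seed and other readers

def rewind(prng, pool):
    # Forced ("explicit") re-seed at critical times; counts as a fast re-seed.
    prng.reseed(pool.extract(min(pool.available_bits // 2, 64)))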

The re-seed thresholds probably have to be implemented in /dev/random,
requiring a close coupling with /dev/urandom.  But as long as the
defined semantics are clearly delineated, details can more easily be
changed transparently.

-----BEGIN PGP SIGNATURE-----
Version: PGP 6.5.1

iQCVAwUBN7bHjtm/qMj6R+sxAQH9iQP/bLCHCV5LrkehICGQzoGchC8lB0OL6lRC
Ut4uDxUZ6/zGSP4nAnwE3MqPuNOf2R16y90CR6LwsF9kPI9yr90SbCJL/aaJsXI7
xilXSdYAyatfZd3ETWzBmuYwdG63Gchxu6v2xU7NqVPIvy9q1Xz8hhaAFEFgCmml
Ee0RCu12bDw=
=4XF/
-----END PGP SIGNATURE-----




Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-17 Thread Thomas Roessler

On 1999-08-14 12:27:30 -0700, Bill Frantz wrote:

 It bothers me when people who are in favor of strong crypto
 automatically assume that anything which makes strong crypto easier
 will automatically be export controlled.  This assertion is clearly
 wrong.  The thing which most makes strong crypto easier is the
 (slow) general purpose CPU.  These have never been export
 controlled.

In DuD 2/1998 (as I recall, in one of the Roth articles on export
control), a case is quoted in which re-exporting a US-manufactured
i386 PC to Poland in 1990 is said to have led to a conviction.