Re: chip-level randomness?

2001-09-24 Thread Ben Laurie

Bram Cohen wrote:
 
 On Wed, 19 Sep 2001, Peter Fairbrother wrote:
 
  Bram Cohen wrote:
 
   You only have to do it once at startup to get enough entropy in there.
 
  If your machine is left on for months or years the seed entropy would become
  a big target. If your PRNG status is compromised then all future uses of
  PRNG output are compromised, which means pretty much everything crypto.
  Other attacks on the PRNG become possible.
 
 Such attacks can be stopped by reseeding once a minute or so, at much less
 computational cost than doing it 'continuously'. I think periodic
 reseedings are worth doing, even though I've never actually heard of an
 attack on the internal state of a PRNG which was launched *after* it had
 been seeded properly once already.

There was a bug in OpenSSL's PRNG (and in BSAFE's) which permitted
recovery of the internal state from a largish number of small outputs.
It has been fixed, of course.

Cheers,

Ben.

--
http://www.apache-ssl.org/ben.html

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: chip-level randomness?

2001-09-22 Thread Bram Cohen

On Thu, 20 Sep 2001, Nomen Nescio wrote:

 If the internal circuitry did output a 60Hz sine wave then regularities
 would still be visible after this kind of whitener.  It is a rather
 mild cleanup of the signal.

It does mask patterns to an extent, possibly pushing them inside the
margin of error of whatever sample size you happen to use in a test.

 It doesn't seem right to object to them including a bias remover.
 They have done other things to reduce bias.  For example they use a pair
 of thermal resistors located next to each other on the chip and use the
 difference of the values from each of them, to reduce sensitivity to
 environmental influences.  This reduces bias, but should they have left
 the differencing out so that you could more easily measure a possible
 influence?

It's important to have the two of them to get a good estimate of the
amount of entropy it's outputting, although it would also be good if both
raw values were available to the CPU, for diagnostic purposes if nothing
else.

-Bram Cohen

Markets can remain irrational longer than you can remain solvent
-- John Maynard Keynes







Re: chip-level randomness?

2001-09-20 Thread Nomen Nescio

Ted Tso writes:
 It turns out that with the Intel 810 RNG, it's even worse because
 there's no way to bypass the hardware whitening which the 810 chip
 uses.  Hence, if the 810 random number generator fails, and starts
 sending something that's close to a pure 60 HZ sine wave to the
 whitening circuitry, it may be very difficult to detect that this has
 happened.

The whitener is just a slightly improved von Neumann bias remover.

The traditional vN state machine looks at pairs of bits and does something
like this:

0 0  -  discard
0 1  -  output 1
1 0  -  output 0
1 1  -  discard

This removes a static bias: if the source produces, say, 55% 0's and
45% 1's, the whitener's output will be 50% 0's and 50% 1's.  However,
this comes at the cost of discarding a considerable fraction of the
bits.
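
The state machine above is easy to sketch in a few lines of Python (an
illustrative toy, not Intel's circuit); with a source emitting 55% zeros
it shows both the bias removal and the cost in discarded bits:

```python
import random

def von_neumann(bits):
    """Classic von Neumann debiaser: look at non-overlapping pairs,
    emit a bit for 01/10 (mapping as in the post), discard 00/11."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if (a, b) == (0, 1):
            out.append(1)
        elif (a, b) == (1, 0):
            out.append(0)
        # 00 and 11 are discarded
    return out

rng = random.Random(42)
# A source emitting 55% zeros and 45% ones, independently per bit.
raw = [0 if rng.random() < 0.55 else 1 for _ in range(200_000)]
clean = von_neumann(raw)

ones = sum(clean) / len(clean)          # close to 0.5: bias removed
yield_rate = len(clean) / len(raw)      # close to p*(1-p) = 0.2475:
                                        # ~75% of the input bits are lost
```

Only p*(1-p) output bits per input bit survive, which is why the 3-bit
window mentioned below (which discards somewhat less) is attractive.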

The improved version in the Intel RNG has a 3 bit window and this
lets it remove the bias just as well while discarding somewhat
fewer bits.

If the internal circuitry did output a 60Hz sine wave then regularities
would still be visible after this kind of whitener.  It is a rather
mild cleanup of the signal.
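
A toy demonstration of that point (again the plain two-bit von Neumann
scheme, not Intel's actual three-bit variant): a failed source whose
comparator output simply alternates 0,1,0,1 sails through the whitener
as a perfectly "unbiased" but constant stream:

```python
def von_neumann(bits):
    """Two-bit von Neumann debiaser, same mapping as in the post."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if (a, b) == (0, 1):
            out.append(1)
        elif (a, b) == (1, 0):
            out.append(0)
    return out

# A failed source locked to an oscillation: the comparator output just
# alternates 0,1,0,1,... (think of it as tracking mains interference).
oscillating = [i % 2 for i in range(10_000)]
whitened = von_neumann(oscillating)
# Every pair is (0,1), so nothing is discarded and the "whitened"
# output is a constant run of ones: exactly balanced by the debiaser's
# own measure, yet trivially non-random to any downstream test.
```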

It doesn't seem right to object to them including a bias remover.
They have done other things to reduce bias.  For example they use a pair
of thermal resistors located next to each other on the chip and use the
difference of the values from each of them, to reduce sensitivity to
environmental influences.  This reduces bias, but should they have left
the differencing out so that you could more easily measure a possible
influence?

Suppose the voltage were to drop in this part of the chip; the
differencing would hide this fact and prevent you from detecting that
maybe some other parts aren't working well.  Here is an example of a
possible partial-failure mode which the chip's internal design will tend
to hide.
It should not be considered a design flaw for the chip to do this.
It improves the random numbers which the chip produces.  And similarly,
the digital bias remover does the same thing.

The bottom line is the quality of the random numbers produced by the
device.  It is designed internally to withstand various kinds of noise
and bias, so as to produce the best random numbers possible.

See http://www.cryptography.com/intelRNG.pdf for information on the
design of the RNG.  See if you can identify a plausible failure mode
which could be detected if the whitener was not present, but which will
be undetectable with the vN whitener in place.






Re: chip-level randomness?

2001-09-20 Thread David Wagner

Bill Frantz  wrote:
At 2:17 PM -0700 9/19/01, Theodore Tso wrote:
It turns out that with the Intel 810 RNG, it's even worse because
there's no way to bypass the hardware whitening which the 810 chip
uses.

Does anyone know what algorithm the whitening uses?

Just like von Neumann's unbiasing procedure, but with a few bits of
state instead of just one.  See Paul Kocher's analysis for the details.

In short, the whitening is only enough to reduce any biases in the raw
generator, not to remove them.






Re: chip-level randomness?

2001-09-19 Thread Bram Cohen

On Tue, 18 Sep 2001, Pawel Krawczyk wrote:

 On Mon, Sep 17, 2001 at 01:44:57PM -0700, Bram Cohen wrote:
 
   What is important, it *doesn't* feed the built-in Linux kernel PRNG
   available in /dev/urandom and /dev/random, so you have either to only
   use the hardware generator or feed /dev/urandom yourself.
  That's so ... stupid. Why go through all the work of making the thing run
  and then leave it unplugged?
 
 It's not that stupid, as feeding the PRNG from i810_rng at the kernel
 level would be resource intensive,

You only have to do it once at startup to get enough entropy in there.

 not necessary in general case

Since most applications reading /dev/random don't want random numbers
anyway?

 and would require to invent some defaults without any reasonable
 arguments to rely on. Like how often to feed the PRNG, with how much
 data etc.

At startup and with 200 bits of data would be fine.

Of course, there's the religion of people who say that /dev/random output
'needs' to contain 'all real' entropy, despite the absolute zero increase
in security this results in and the disastrous effect it can have on
performance.
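
By way of illustration, "seed once at startup" could look something like
the following sketch: a counter-mode generator over SHA-256, keyed once
with a couple hundred bits pulled from the hardware RNG at boot (a toy
construction, not a vetted design; the seed here is a fixed placeholder
standing in for hardware output):

```python
import hashlib

class HashPRNG:
    """Toy counter-mode generator over SHA-256, seeded once at startup.
    A sketch only: real designs (Yarrow and friends) do considerably
    more bookkeeping."""
    def __init__(self, seed: bytes):
        self.key = hashlib.sha256(seed).digest()
        self.counter = 0

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            out += hashlib.sha256(
                self.key + self.counter.to_bytes(8, "big")).digest()
            self.counter += 1
        return out[:n]

# At boot you would read ~200 bits (say 32 bytes) from the hardware RNG
# device; a fixed placeholder is used here so the sketch is runnable.
seed = bytes(range(32))
prng = HashPRNG(seed)
block = prng.read(64)
```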

-Bram Cohen

Markets can remain irrational longer than you can remain solvent
-- John Maynard Keynes







Re: chip-level randomness?

2001-09-19 Thread Pawel Krawczyk

On Wed, Sep 19, 2001 at 01:12:44AM -0700, Bram Cohen wrote:

  not necessary in general case
 Since most applications reading /dev/random don't want random numbers
 anyway?

Here I meant exactly what you said about the /dev/random religion. On the
other hand, feeding /dev/random from the i810 during normal system
operation is not a bad idea, as /dev/random is not a PRNG but a pool,
which can be emptied if not fed enough by other semi-random events
(interrupts, keyboard).

 At startup and with 200 bits of data would be fine.
 Of course, there's the religion of people who say that /dev/random output
 'needs' to contain 'all real' entropy, despite the absolute zero increase
 in security this results in and the disastrous effect it can have on
 performance.

Ok, I get your point now. I'm not sure that reading a blocking device
(i810) from the kernel is a very good idea, however. That's the sort of
thing that is well suited to userland, once the system goes multiuser
and multiprocess.

Actually, it would be quite a good idea for the Linux distribution
vendors to add a "dd if=/dev/intel_rng of=/dev/random bs=1k count=1" to
the PRNG initialization scripts. If it fails, then you probably don't
have an i810 and everything works the old way... Maybe it's even already
done, as the author of the i810 daemon seems to be from MandrakeSoft.

-- 
Paweł Krawczyk *** home: http://ceti.pl/~kravietz/
security: http://ipsec.pl/  *** fidonet: 2:486/23






Re: chip-level randomness?

2001-09-19 Thread Bill Frantz

At 1:12 AM -0700 9/19/01, Bram Cohen wrote:
Of course, there's the religion of people who say that /dev/random output
'needs' to contain 'all real' entropy, despite the absolute zero increase
in security this results in and the disastrous effect it can have on
performance.

If I am generating one time pads, I would certainly prefer /dev/random
output to /dev/urandom output.  There is much less algorithm exposure.
(Although I do still have to worry about the whitening and combining
algorithms.)

Cheers - Bill


-
Bill Frantz   | The principal effect of| Periwinkle -- Consulting
(408)356-8506 | DMCA/SDMI is to prevent| 16345 Englewood Ave.
[EMAIL PROTECTED] | fair use.  | Los Gatos, CA 95032, USA








Re: chip-level randomness?

2001-09-19 Thread Peter Fairbrother

 Bram Cohen wrote:

 On Tue, 18 Sep 2001, Pawel Krawczyk wrote:
[..]
 It's not that stupid, as feeding the PRNG from i810_rng at the kernel
 level would be resource intensive,
 
 You only have to do it once at startup to get enough entropy in there.

If your machine is left on for months or years the seed entropy would become
a big target. If your PRNG status is compromised then all future uses of
PRNG output are compromised, which means pretty much everything crypto.
Other attacks on the PRNG become possible.

 and would require to invent some defaults without any reasonable
 arguments to rely on. Like how often to feed the PRNG, with how much
 data etc.

The Intel rng outputs about 8kB/s (I have heard of higher rates). Using
all this entropy to reseed a PRNG on a reasonably modern machine would
not take up _that_ many resources. And it would pretty much defeat any
likely attacks on the PRNG.

 At startup and with 200 bits of data would be fine.

So you need a cryptographically-secure PRNG that takes a 200-bit seed. As
the output is used by programs that may use strange and not-yet-invented
algorithms which may interact with and weaken the PRNG, how are you going to
design it? And what happens if your PRNG is broken? Everything is lost, the
attacker has got root so to speak.

 Of course, there's the religion of people who say that /dev/random output
 'needs' to contain 'all real' entropy, despite the absolute zero increase
 in security this results in and the disastrous effect it can have on
 performance.

Sometimes it may have no effect on security, but it can also affect it
badly. Brute-force attacks on the PRNG could be more efficient than
attacks on the cipher if 256-bit or larger keys were used. With the
possible introduction of quantum computers looming, it might well be
advisable to use such key lengths for data that requires long-term
security.

I agree that performance hits arise if an all-real-random approach is used,
but personally I am in favour of using all the entropy that can easily be
collected without taking those hits. The Intel rng can do this nicely
(although I would use other sources of entropy as well).


-- Peter Fairbrother







Re: chip-level randomness?

2001-09-19 Thread John Gilmore

The real-RNG in the Intel chip generates something like 75 kbits/sec
of processed random bits.  These are simply wasted if nobody reads them
before it generates 75 kbits more in the next second.

I suggest that if application programs don't read all of these bits
out of /dev/intel-rng (or whatever it's called), and the kernel
/dev/random pool isn't fully charged with entropy, then the real-RNG
driver should feed some of the excess random bits into the /dev/random
pool periodically.  When and how it siphons off bits from the RNG is a
separate issue; but can we agree that feeding otherwise-wasted bits
into a depleted /dev/random would be a good idea?

A better way to structure this might be for /dev/intel-rng to register
with /dev/random as a source of entropy that /dev/random can call upon
if it depletes its pool.  /dev/random would then be making decisions
about when to stir more entropy into the pool (either in response to a
read on /dev/random, or to read ahead to increase the available pool
in between such reads).  Thus, when demand on /dev/random is high, it
would become one of the application programs that would compete to
read from /dev/intel-rng.  Since /dev/random is the defined interface
for arbitrary applications to get unpredictable bits out of the
kernel, I would expect that in general, /dev/random is likely to be
the MAJOR consumer of /dev/intel-rng bits.

(Linux IPSEC uses /dev/random or /dev/urandom for keying material.  It
can easily consume many thousands of random bits per second in doing
IKE's Diffie-Hellman to set up dozens of tunnels.  Today this surge
demand occurs at boot time when setting up preconfigured tunnels -- a
particularly bad time since the system hasn't been collecting entropy
for very long.  /dev/intel-rng's high-speed stream can significantly
improve the quality of this keying material, by replenishing the entropy
pool almost as fast as IPSEC consumes it.  Over time, IPSEC's
long-term demand for random bits will increase, since opportunistic
encryption allows many more tunnels to be created, with much less
effort per tunnel by the system administrator.)

Also, the PRNG in /dev/random and /dev/urandom may someday be broken
by analytical techniques.  The more diverse sources of true or
apparent randomness that we can feed into it, the less likely it is
that a successful theoretical attack on the PRNG will be practically
successful.  If even a single entropy source of sufficiently high
speed is feeding it, even a compromised PRNG may well be unbreakable.
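
A sketch of that "many diverse sources" idea: a hash-based pool where
every contribution is folded into the state, so mixing in a bad or even
attacker-known source cannot cancel entropy that is already there
(illustrative only; the real /dev/random pool uses a different mixing
function):

```python
import hashlib

class Pool:
    """Toy entropy pool: each input is hashed into the running state,
    so the output depends on every contribution ever mixed in."""
    def __init__(self):
        self.state = b"\x00" * 32

    def mix(self, data: bytes):
        self.state = hashlib.sha256(self.state + data).digest()

    def extract(self) -> bytes:
        out = hashlib.sha256(b"out" + self.state).digest()
        self.mix(b"stir-after-extract")   # never reuse the same state
        return out

pool = Pool()
pool.mix(b"timer jitter sample")     # low-rate kernel-style source
pool.mix(b"hardware RNG block")      # high-rate i810-style source
a = pool.extract()
pool.mix(b"hardware RNG block 2")
b = pool.extract()
```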

John







Re: chip-level randomness?

2001-09-19 Thread Bram Cohen

On Wed, 19 Sep 2001, Peter Fairbrother wrote:

 Bram Cohen wrote:
 
  You only have to do it once at startup to get enough entropy in there.
 
 If your machine is left on for months or years the seed entropy would become
 a big target. If your PRNG status is compromised then all future uses of
 PRNG output are compromised, which means pretty much everything crypto.
 Other attacks on the PRNG become possible.

Such attacks can be stopped by reseeding once a minute or so, at much less
computational cost than doing it 'continuously'. I think periodic
reseedings are worth doing, even though I've never actually heard of an
attack on the internal state of a PRNG which was launched *after* it had
been seeded properly once already.
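
A minimal sketch of what periodic reseeding buys (a toy SHA-256
counter construction, not a vetted design): once fresh entropy is folded
into the key, knowledge of the old state stops helping an attacker:

```python
import hashlib

class ReseedingPRNG:
    """Toy counter-mode SHA-256 generator with a reseed operation.
    Periodic reseeds bound how long a captured state stays useful."""
    def __init__(self, seed: bytes):
        self.key = hashlib.sha256(seed).digest()
        self.counter = 0

    def reseed(self, fresh: bytes):
        # The new key depends on both the old key and the fresh input.
        self.key = hashlib.sha256(self.key + fresh).digest()

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            out += hashlib.sha256(
                self.key + self.counter.to_bytes(8, "big")).digest()
            self.counter += 1
        return out[:n]

prng = ReseedingPRNG(b"boot-time hardware entropy")
before = prng.read(32)
# e.g. driven by a once-a-minute timer feeding in hardware RNG output:
prng.reseed(b"one minute's worth of i810 output")
after = prng.read(32)
# An attacker who captured the pre-reseed key cannot predict `after`
# without also knowing the fresh input.
```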

-Bram Cohen

Markets can remain irrational longer than you can remain solvent
-- John Maynard Keynes







Re: chip-level randomness?

2001-09-19 Thread Enzo Michelangeli

- Original Message -
From: Theodore Tso [EMAIL PROTECTED]
To: John Gilmore [EMAIL PROTECTED]
Cc: Pawel Krawczyk [EMAIL PROTECTED]; Bram Cohen
[EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, September 20, 2001 5:17 AM
Subject: Re: chip-level randomness?


[...]
 On the other hand, for most people, on balance it's probably better
 for the kernel to just blindly trust the 810 random number generator
 to be free from faults (either deliberate or accidentally induced),
 since the alternative (an incompletely seeded RNG) is probably worse
 for most folks.

Not only that: I don't think that feeding predictable input to the entropy
pool is going to make the PRNG's output any worse. If you don't bump up the
entropy estimator (risking a misleading estimate), it's a sort of Pascal's
Wager: you may or may not be better off, but surely you won't be worse off.
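
The Pascal's Wager point can be made concrete with a toy hash-based
mixer: an attacker who knows (or even chose) the input still cannot
predict the pool's output without the prior state, and the pool is no
worse off for having absorbed the data:

```python
import hashlib

def mix(state: bytes, data: bytes) -> bytes:
    """Fold new data into a pool state via hashing."""
    return hashlib.sha256(state + data).digest()

known = b"predictable, attacker-chosen input"

# The honest pool mixes the known data into a state the attacker lacks.
secret_state = hashlib.sha256(b"256 bits the attacker does not know").digest()
honest = mix(secret_state, known)

# The attacker tries the same mix with a guessed state:
forged = mix(hashlib.sha256(b"wrong guess").digest(), known)
# Knowing the input contributed nothing; without the prior state the
# result is still unpredictable, so mixing it in cannot hurt -- as long
# as the entropy estimate was not bumped up on its account.
```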

Enzo








Re: chip-level randomness?

2001-09-19 Thread Peter Fairbrother

Bram,

I need _lots_ of random-looking bits to use as cover traffic, so I'm using
continuous reseeding (of a BBS PRNG) with i810_rng output on the i386
platform, as well as other sources (the usual suspects, plus CD latency,
plus an optional USB feed-through rng device, a bit like a dongle). I
don't use a hardware rng on Apple, 'cos it doesn't have one. Others would
perhaps not need so many bits.

I do hash them, but I don't really trust any hash, algorithm, or rng, so I
use all the entropy I can get from anywhere and mix it up. I try to arrange
things so each source is sufficient by itself to provide decent protection.

It might be a better idea to schedule reseeding of the PRNG depending on
usage rather than time for more everyday use. Actually I don't disagree with
you much, except I'd like to see reseeding more often than once a minute.

There is another reason to use a PRNG rather than a real-rng, which is to
deliberately repeat random output for debugging, replaying games, etc. Not
very relevant to crypto, except perhaps as part of an attack strategy.
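
For reference, the BBS construction mentioned above squares modulo
N = p*q and emits low-order bits; a toy-sized sketch (the primes here
are far too small for real use, where N should be hundreds of digits
and the seed chosen coprime to N):

```python
# Tiny Blum Blum Shub sketch.  Security requires a large hard-to-factor
# modulus N = p*q with p, q = 3 (mod 4); these are toy values only.
P, Q = 10007, 10039            # both prime, both = 3 (mod 4)
N = P * Q

def bbs_bits(seed: int, count: int):
    x = (seed * seed) % N       # x0 is a quadratic residue mod N
    out = []
    for _ in range(count):
        x = (x * x) % N
        out.append(x & 1)       # emit the least-significant bit
    return out

run1 = bbs_bits(123456, 64)
run2 = bbs_bits(123456, 64)     # same seed, same stream: handy for the
                                # debugging/replay use mentioned above
```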

-- Peter


 On Wed, 19 Sep 2001, Peter Fairbrother wrote:
 
 Bram Cohen wrote:
 
 You only have to do it once at startup to get enough entropy in there.
 
 If your machine is left on for months or years the seed entropy would become
 a big target. If your PRNG status is compromised then all future uses of
 PRNG output are compromised, which means pretty much everything crypto.
 Other attacks on the PRNG become possible.
 
 Such attacks can be stopped by reseeding once a minute or so, at much less
 computational cost than doing it 'continuously'. I think periodic
 reseedings are worth doing, even though I've never actually heard of an
 attack on the internal state of a PRNG which was launched *after* it had
 been seeded properly once already.







Re: chip-level randomness?

2001-09-18 Thread Pawel Krawczyk

On Mon, Sep 17, 2001 at 01:44:57PM -0700, Bram Cohen wrote:

  What is important, it *doesn't* feed the built-in Linux kernel PRNG
  available in /dev/urandom and /dev/random, so you have either to only
  use the hardware generator or feed /dev/urandom yourself.
 That's so ... stupid. Why go through all the work of making the thing run
 and then leave it unplugged?

It's not that stupid: feeding the PRNG from i810_rng at the kernel
level would be resource-intensive, isn't necessary in the general case,
and would require inventing some defaults without any reasonable
arguments to rely on, like how often to feed the PRNG, with how much
data, etc.

On the other hand, the authors provide a `rngd' daemon, running in
userland, that reads the i810_rng device and feeds the data into the
kernel PRNG. It seems to be reasonably written, with all the possible
caveats in mind, and you can control the feeding interval, block size
and other parameters.

URI: http://sourceforge.net/project/showfiles.php?group_id=3242&release_id=28349

-- 
Paweł Krawczyk *** home: http://ceti.pl/~kravietz/
security: http://ipsec.pl/  *** fidonet: 2:486/23






Re: chip-level randomness?

2001-09-17 Thread Pawel Krawczyk

On Sat, Sep 15, 2001 at 10:16:27AM -0700, Carl Ellison wrote:

 I'm told that the LINUX 2.4 kernel comes with the RNG driver
 built-in, but I haven't tried that.

It works almost out of the box: the kernel detects the chip, and if you
have the necessary device file created (character device 10,183 AFAIK)
you can use it to read streams of random data. It sometimes blocks when
you read long blocks, but that's quite obvious, and it returns as soon as
it collects enough data to satisfy your request. What is important, it
*doesn't* feed the built-in Linux kernel PRNG available in /dev/urandom
and /dev/random, so you have to either use only the hardware generator
or feed /dev/urandom yourself.

-- 
Paweł Krawczyk *** home: http://ceti.pl/~kravietz/
security: http://ipsec.pl/  *** fidonet: 2:486/23






Re: chip-level randomness?

2001-09-15 Thread Sandy Harris

R. A. Hettinga wrote:
 
 I'm rooting around for stuff on hardware random number generation.

RFC 1750 is a standard reference. There's a draft of a rewrite on ietf.org.
 
 More specificially, I'm looking to see if anyone has done any
 entropy-collection at the chip-architecture level as part of the logic of a
 chip.
 
 I saw somewhere the intel had done it as part of the Pentium, for instance,
 but I can't find out whether it's an actual entropy collector, or just a
 PRNG.

http://www.cryptography.com/intelRNG.pdf


