Bram writes:

> Paul Kocher has said the design looks sound, which I believe, but
> unfortunately the raw output of Intel's RNG just plain can't be accessed
> without it going through whitening first. Unsurprisingly, all the output
> passes all statistical tests. Well, duh, it's been sent through SHA-1. All
> that proves is that there's enough entropy in each block being hashed that
> none of them got repeated in the tests, and even a measly 20 bits are
> likely to do that.

Actually the data analyzed by Jun and Kocher was from _before_ the SHA-1
whitening.  Bram is correct that it would be meaningless to analyze the
output after SHA-1.  None of the statistical results which are reported
in Kocher's writeup are from after SHA-1 (as described in
http://www.cryptography.com/intelRNG.pdf):

: Because subtle output correlations are always a possibility, the
: verification process has included a wide array of statistical tests by
: Cryptography Research and by Intel. These tests are designed to detect
: nonrandom characteristics by comparing statistical distributions in
: large samples of actual RNG outputs against distributions expected from
: a perfect random source.
:
: Tests were performed both before and after the digital
: post-processing. Tests on pre-corrected data help to identify
: characteristics that might be difficult to detect after the correction
: process. All statistical tests were performed on data prior to the
: software library's SHA-1 mixing, as the SHA operation would mask
: nonrandom characteristics.

Bram asks:

> If Intel's RNG really is producing a reliable one bit of entropy per one
> bit of output, why don't they just make it accessible without whitening?

There are a number of reasonable possibilities for why Intel prefers to
provide the post-whitened output:

The main one is that they want people to access the chip via a standard
API which provides high quality random bits.  This is normal software
engineering practice.  It gives Intel freedom in the future to make
changes to the chip interface and accommodate them in the driver.  For
example, they could move the von Neumann bias remover into software if
they desired, and the change would be transparent to software which used
the chip.  Or perhaps they could go in the other direction and put some
kind of SHA-like whitener onto the chip in order to reduce the software
load.  Using a standard API for high quality random bits allows this
kind of design flexibility without concerns about breaking applications
which rely on the previous architecture.
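For reference, the von Neumann bias remover mentioned above is simple enough that moving it into software would be trivial. A generic sketch of the technique (plain Python, not Intel's driver code): look at non-overlapping bit pairs, emit 0 for 01, emit 1 for 10, and discard 00 and 11. If the input bits are independent, the output is unbiased no matter how skewed the source is.

```python
import random

def von_neumann(bits):
    """Von Neumann corrector: scan non-overlapping bit pairs,
    emit 0 for a 01 pair, 1 for a 10 pair, discard 00 and 11."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

# A source biased 80/20 toward 1 still yields balanced output
# (at the cost of discarding most of the raw bits).
random.seed(1)
raw = [1 if random.random() < 0.8 else 0 for _ in range(100000)]
corrected = von_neumann(raw)
print(sum(corrected) / len(corrected))  # close to 0.5
```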

They are also concerned that, with the current architecture, naive users
may use the output of the chip directly without passing it through the
software layers which are necessary to make it fully random.  Of course
most people would hopefully not be foolish enough to do this, but Intel
may be worried about liability issues if they publish the internal
interface to a "random number generator" which is not fully random.

Intel is probably also motivated by profit.  Got to keep that stock
going up, you know.  Apparently they are charging a great deal of money
(six figures) for access to the RNG library.  If they openly published
the interface to the chip they would not be able to make this kind of
money off of their software driver.

Now, although these reasons are all valid to varying degrees, it is
likely that the interface will be published despite these concerns.
There is little that Intel can do to stop people from reverse
engineering their driver and publishing the interface (anonymously, if
necessary).  This issue will therefore probably be moot in a few months.