I've been following this thread for a couple of weeks now, and so
far virtually none of it makes any sense to me.

Back on 10/12/2005 Travis H. wrote:
I am thinking of making a userland entropy distribution system, so
that expensive HWRNGs may be shared securely amongst several machines.

What evidence is there that HRNGs are expensive?  How many machines do
you have?  How many of them already have soundcards?  How much entropy
do they need (bits per second)?

The obvious solution is to put a high-performance low-cost HRNG in each machine.
Is there some reason why this cannot be done?  If so, please explain.

Otherwise, this whole discussion seems like a futile exercise, i.e. trying
to find the optimal way of doing the wrong thing.

]] ABSTRACT: We discuss the principles of a High-Entropy Randomness Generator
]] (also called a True Random Number Generator) that is suitable for a wide
]] range of applications, including cryptography, high-stakes gaming, and
]] other highly adversarial applications.  It harvests entropy from physical
]] processes, and uses that entropy efficiently.  The hash saturation
]] principle is used to distill the data, resulting in virtually 100% entropy
]] density.  This is calculated, not statistically estimated, and is provably
]] correct under mild assumptions.  In contrast to a Pseudo-Random Number
]] Generator, it has no internal state to worry about, and does not depend on
]] unprovable assumptions about “one-way functions”.  We also describe a
]] low-cost high-performance implementation, using the computer’s audio I/O
]] system.

For details, see

 Here's the algorithm from generation to use:

1) Entropy harvested from HWRNG.

OK so far.

2) Entropy mixed with PRNG output to disguise any biases present in
source.  ...   (Is XOR sufficient and desirable?)

If it were a decent HRNG it would have this built in.  XOR is not even
remotely sufficient.

3) Entropy used as "truly random" input in an extractor to map
"somewhat random" input (interrupt timing, memory contents, disk head
settling times) into "strongly random" output.

What's an extractor?  What is needed is a compressor.
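The compressor idea can be sketched as follows. This is not turbid's actual code: `std::hash` stands in for a real cryptographic hash such as SHA-256, and the entropy-per-byte rate is an assumed calibrated lower bound, not a measured one:

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Hash-saturation sketch: keep absorbing raw samples until the
// accumulated entropy credit (a lower bound established by
// calibration) comfortably exceeds the output width, then emit one
// distilled word.  No state is carried across outputs.
class Distiller {
public:
    explicit Distiller(double bits_per_byte) : rate_(bits_per_byte) {}

    void absorb(uint8_t raw) {
        pool_.push_back(static_cast<char>(raw));
        credit_ += rate_;
    }

    bool saturated() const {
        // Demand more input entropy than output width, so the output
        // entropy density is driven arbitrarily close to 100%.
        return credit_ >= 64.0 + margin_;
    }

    uint64_t emit() {
        // std::hash is a placeholder for a cryptographic hash.
        uint64_t out = std::hash<std::string>{}(pool_);
        pool_.clear();
        credit_ = 0.0;
        return out;
    }

private:
    std::string pool_;
    double rate_;
    double credit_ = 0.0;
    double margin_ = 10.0;  // extra bits of input entropy, for safety
};
```

The point of the sketch is the accounting: the output entropy density is calculated from the calibrated input rate and the compression ratio, rather than statistically estimated from the output.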

4) Entropy passed through OWF to occlude state of previous systems in
this chain.

A decent HRNG is stateless and does not need any one-way functions.

5?) Entropy ciphered with a randomly-generated key (taken from the
previous step), rotated periodically.

A decent HRNG does not need any such encipherment.


Similarly, I also would like to use ID Quantique's HWRNG based on


but their modules are sealed and opaque.  What I want to do is
explore what kind of assurances I can make about the output, based on
assumptions about the attacker's ability to control, predict, or
observe one of the sources.

Such assurances are discussed at:

5) Do it in a language not as prone to security-relevant errors as C
and containing support for large numbers and bitstrings as first-class
objects.

turbid is already written in C++ for this reason.  Strings and suchlike
are part of the language, defined in the Standard Template Library.

1) Lack of standardization in the naming or semantics of kernel
facilities, such as the names of devices in /dev.

The semantics is just broken ... which is why turbid defines and
implements /dev/hrandom with its own semantics.  Optionally it can
feed entropy to /dev/[u]random for the benefit of legacy applications
under certain limited conditions.

2) Lack of support for sockets in the target language.

Really not a problem with turbid.

3) The use of ioctls for interfacing to sources of entropy in the kernel.

Really not a problem with turbid.

4) The use of tty devices to interface to HWRNGs

Really not a problem with turbid.

5) Multiple clients petitioning the daemon for random bits at once.

Really not a problem with turbid.

The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
