In an earlier message I wrote:

> Conclusions I've reached that I hope there's agreement on:
> 
> More analysis is needed, especially in the area of how
> to estimate input entropy.
> 
[SNIP]
> 
> Yarrow's two-stage design, where the output from
> hashing the pool seeds a pseudo-random number
> generator based on a strong block cipher, offers
> significant advantages over the one-stage design
> in /dev/random which delivers hash results as
> output. ...
> 
[SNIP]
> 
> /dev/random should become two-stage, ...
> 
> I'll make a detailed proposal for that in
> another post.

So here I'll assume agreement on the above and
outline the design I've come up with, more as a
starting point for further discussion than in
hopes I've got it right.

I'll ignore questions of inputs to the pool here.
/dev/random has working code for that, based on
design assumptions that seem plausible.

The question, then, is how best to make it
into a two-stage design: mainly, choose a
block cipher and modify the hashing to suit.

The obvious cipher to choose would be one of the
AES candidates with a 256-bit key. Offhand, I'd
say Rijndael, but this needs detailed analysis,
especially in the area of re-keying costs.
Twofish, Serpent, and CAST-256 are all freely
available as well.
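
To make the two-stage shape concrete, here
is a rough C sketch of the output path. The
primitive names are hypothetical, not taken
from the existing driver; the point is only
that the pool hash sets the cipher key, and
all visible output comes from the cipher
run in counter mode as in Yarrow, never
from the hash itself.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical 256-bit-key block cipher; any of
     * the AES candidates above could stand behind it. */
    void cipher256_encrypt(const uint8_t key[32],
                           const uint8_t in[16],
                           uint8_t out[16]);

    struct twostage {
        uint8_t key[32];     /* set from the pool hash */
        uint8_t counter[16]; /* counter-mode block */
    };

    /* Produce len bytes; hash results only ever land
     * in ts->key, so they are never directly visible. */
    static void twostage_output(struct twostage *ts,
                                uint8_t *out, size_t len)
    {
        uint8_t block[16];
        size_t i, n;

        while (len > 0) {
            cipher256_encrypt(ts->key, ts->counter, block);
            /* increment counter as a big-endian number */
            for (i = 16; i-- > 0 && ++ts->counter[i] == 0; )
                ;
            n = len < 16 ? len : 16;
            for (i = 0; i < n; i++)
                out[i] = block[i];
            out += n;
            len -= n;
        }
    }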

A 256-bit key would be nice, but the Yarrow
paper says security is limited to min(k,m)
for cipher keysize k and hash output size m.
I think that can be fixed.
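
To put numbers on that: with a 256-bit
cipher key and a 160-bit hash such as SHA-1,
the limit works out to

    min(k, m) = min(256, 160) = 160 bits

well short of the keysize.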

Consider a large entropy pool, with N blocks
of 512 bytes available for hashing. Yarrow and
/dev/random each hash the whole thing every time,
Yarrow to re-key and /dev/random to generate
output. I doubt this is necessary when the pool
contains plenty of entropy. In that case, you
can safely produce output after every block,
at least in a two-stage design where hashing
outputs are never made directly visible to the
attacker.
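
As a sketch of what hashing and producing
output block by block might look like (all
the names below are placeholders, not the
existing driver's code):

    #include <stdint.h>
    #include <stddef.h>

    #define POOL_BLOCKS 16   /* N; whatever memory allows */
    #define BLOCK_BYTES 512
    #define HASH_BYTES  24   /* e.g. a 192-bit hash */

    /* Hypothetical: hash one block; what to do with
     * each digest is discussed below. */
    void hash_block(const uint8_t *block, size_t len,
                    uint8_t digest[HASH_BYTES]);
    void use_digest(const uint8_t digest[HASH_BYTES]);

    static uint8_t pool[POOL_BLOCKS][BLOCK_BYTES];

    /* With plenty of entropy in the pool, each block is
     * hashed and used on its own rather than hashing
     * all N blocks for every output. */
    static void per_block_round(void)
    {
        uint8_t digest[HASH_BYTES];
        int i;

        for (i = 0; i < POOL_BLOCKS; i++) {
            hash_block(pool[i], BLOCK_BYTES, digest);
            use_digest(digest);
        }
    }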

With output after every block, the limit on
delivered security becomes min(k,Nm), so we
can use a 256-bit key with some hope of
actually delivering 256-bit security,
provided of course that the pool holds m
bits of entropy per block.
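
For instance, with 16 blocks of 512 bytes
and a 192-bit hash, that becomes

    min(k, Nm) = min(256, 16 * 192)
               = min(256, 3072) = 256 bits

so the 256-bit key is no longer wasted.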

I'd like to use Tiger:

http://www.cs.technion.ac.il/~biham/Reports/Tiger/

as the hash for the revised /dev/random.
It is freely available, apparently secure,
faster than MD5 or SHA-1 on at least some
machines, and produces a 192-bit output.

The one disadvantage of Tiger is that it
uses 8K bytes of S-boxes, quite a lot for
a kernel device driver. The /dev/random
author says he limited the default pool
to 512 bytes to save kernel memory. I'm
not sure my suggestion, with Tiger plus
a large pool, is plausible there.

One way to reduce the load on kernel memory
would be to find a way to make Tiger use the
CAST-128 S-boxes. They're the right total
size, though a different shape, and they'll
be in some kernels anyway if we add CAST-128
to the FreeS/WAN code, which I think we should.
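
Purely to illustrate the size-versus-shape
point: Tiger wants four 256-entry tables of
64-bit words, CAST-128 carries eight
256-entry tables of 32-bit words, 8K bytes
either way. A lookup could in principle pair
the CAST tables rather than carry a second
copy; whether Tiger stays secure with those
tables is exactly the analysis that would be
needed. The names below are made up.

    #include <stdint.h>

    /* Assumed present once CAST-128 is in the kernel:
     * eight 256-entry tables of 32-bit words (8K bytes). */
    extern const uint32_t cast128_sbox[8][256];

    /* Tiger's table i, expressed over CAST-128's tables
     * 2i and 2i+1, so no extra 8K of S-boxes is needed.
     * Shape-matching only; not a claim of soundness. */
    static inline uint64_t tiger_sbox(int i, uint8_t x)
    {
        return ((uint64_t)cast128_sbox[2 * i][x] << 32) |
               cast128_sbox[2 * i + 1][x];
    }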

The HMACs used in IPSEC use 128 or 160-bit
hashes, but only 96 bits are actually sent
and used for authentication. This makes an
attack on the hashes harder.

Currently, if I've understood right, Yarrow
does not feed hash output back into the pool
and /dev/random feeds back the same bytes it
outputs.

My suggestion, using Tiger: When you have
plenty of entropy, use 128 bits of the hash
for rekeying and stir the other 64 back
into the pool. When entropy gets lower,
reverse the numbers and produce only 64
bits of output per block hashed, stirring
128 back into the pool.
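
In code the split might look something like
this; the threshold test and the function
names are placeholders, and 24 bytes is
Tiger's 192-bit digest:

    #include <stdint.h>
    #include <stddef.h>

    #define DIGEST_BYTES 24  /* Tiger: 192 bits */

    /* Hypothetical consumers of the two pieces. */
    void rekey_cipher(const uint8_t *bytes, size_t len);
    void stir_into_pool(const uint8_t *bytes, size_t len);

    static void split_digest(const uint8_t digest[DIGEST_BYTES],
                             int plenty_of_entropy)
    {
        /* plenty: 128 bits rekey, 64 bits stirred back;
         * low:     64 bits rekey, 128 bits stirred back */
        size_t rekey_len = plenty_of_entropy ? 16 : 8;

        rekey_cipher(digest, rekey_len);
        stir_into_pool(digest + rekey_len,
                       DIGEST_BYTES - rekey_len);
    }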

This is intended both to make attacking
the hash harder, since the attacker (even
if he's broken the block cipher!) does
not see all hash output, and to make
iterative guessing attacks harder.

Whether you're taking 128 or 64 bits per
block, the minimum rekeying operation is
to change 128 bits of the block cipher key.
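
A minimal sketch of that, assuming a 256-bit
key; exactly how 64-bit takes accumulate
into a 128-bit change is left open, and the
key schedule call is a placeholder:

    #include <stdint.h>

    /* Hypothetical key schedule for the chosen cipher. */
    void cipher256_setkey(const uint8_t key[32]);

    static uint8_t cipher_key[32];  /* current 256-bit key */

    /* Minimal rekey: mix 128 fresh bits into one half
     * of the key, then rerun the key schedule. */
    static void partial_rekey(const uint8_t fresh[16])
    {
        int i;

        for (i = 0; i < 16; i++)
            cipher_key[i] ^= fresh[i];
        cipher256_setkey(cipher_key);
    }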
  
Treat a call to /dev/random as if it were
a Yarrow explicit_reseed() followed by a
call to /dev/urandom. Reseed before
calculating any output and set a flag so
that another reseed occurs shortly after.
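
So the read path would look roughly like
this, with hypothetical helper names that
follow the Yarrow terminology:

    #include <stdint.h>
    #include <stddef.h>
    #include <sys/types.h>

    /* Hypothetical: force a reseed now, like Yarrow's
     * explicit_reseed(), and the two-stage generator
     * that sits behind /dev/urandom. */
    void explicit_reseed(void);
    ssize_t urandom_read(uint8_t *buf, size_t len);

    static int reseed_pending;  /* picked up by the driver later */

    static ssize_t random_read(uint8_t *buf, size_t len)
    {
        explicit_reseed();      /* reseed before any output */
        reseed_pending = 1;     /* another reseed shortly after */
        return urandom_read(buf, len);
    }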
