Brian Fundakowski Feldman wrote:
> On Mon, 24 Jul 2000, Jeroen C. van Gelderen wrote:
> > > > What I meant with that point is that the user may get, say an extra few
> > > > hundred bits out of it with no new entropy before the scheduled reseed
> > > > task kicks in.
> > >
> > > How does he know which bits are which? His analysis task just got a whole
> > > lot more difficult.
> >
> > Again, not entirely correct but not relevant either...
> >
> > Kris is simply right in that the /dev/random semantics change
> > and that more bits can be output by Yarrow than there is entropy
> > gathered. *In theory* the complexity of an attack on our Yarrow
> > has an upper bound of 2^256 and *in theory* this is less than
> > the complexity of an attack on our current /dev/random. This is
> > a hard fact, no way around that.
> Even if the attack on a single non-blocking read from Yarrow is only
> of 2^256 complexity, it is designed to be much more expensive than
> just cracking a single block cipher.  Blowfish has a very large keying
> step, and Yarrow is designed to exploit having large keying steps and
> then adding more complexity in its setup in addition.  This makes it
> infeasible to mount attacks on Yarrow, and the security is really not
> as weak as just cracking 20-round Blowfish-256.

Actually, it is. The low key agility doesn't add anything in
terms of practical security because it only affects brute-force
attacks and can be optimized out in a pipelined
implementation. Expensive, yes, but it can be done. There are
some more details on this in the Yarrow paper, I think...

Anyway, not that it matters: Yarrow was designed to be as
secure as the underlying block cipher, and Blowfish is
generally considered to be reasonably secure.

So, the security still is 2^256 maximum, no way around that.

> However, none of this makes Yarrow useless for getting many bits of
> high-quality random data for, e.g., generation of an RSA key.

Well, you will need to back that up with arguments if you want
to convince the more sceptical (not me). A mere statement will
not do; you need proof, or at least arguments :-)

The question that Kris posed basically boiled down to: "Does 2^256
complexity equal 2^x (x > 256) complexity in practice?". I can't
think of a practical system where 2^256 wouldn't be sufficient,
but that's just me. Well, you seem to agree, and MarkM seems
to as well.

Hmm, maybe the complainers should provide proof that they do
need more than 2^256 complexity. That would make it easier for
us proponents ;-/

Also, since a 1024-bit RSA key only has ~2^77 complexity, it isn't a
very good example. A more interesting question is: what if you
generate a couple of 256-bit symmetric keys in a row? Their
total complexity is 2^256, which is less than it could be.
Does that matter in practice?
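To make that arithmetic concrete, here is a toy illustration (a sketch in Python; the function name and the assumption of a 256-bit generator state are mine, not from the implementation):

```python
# An attacker who can enumerate the generator's 2^256 internal states
# learns every key derived from a given state, so k 256-bit keys drawn
# in a row cost at most 2^256 to recover jointly, not 2^(256*k).
STATE_BITS = 256   # size of the generator's internal state (assumption)
KEY_BITS = 256     # size of each symmetric key generated

def joint_complexity_bits(num_keys):
    # The effective joint search space is capped by the generator state.
    return min(STATE_BITS, KEY_BITS * num_keys)

print(joint_complexity_bits(1))  # 256
print(joint_complexity_bits(3))  # still 256, where 768 would be ideal
```

So three keys in a row are jointly "worth" 256 bits rather than 768; whether that gap matters in practice is exactly the question above.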

> Mark already stated that in *practicality*, Yarrow-BF-cbc-256 1.0
> (I guess that's the proper name for this :-) 

According to Bruce S. one would call it
  Yarrow-256 (implementation details go here)
You would definitely spell out Blowfish completely.

Btw, how exactly is the hash actually constructed in our Yarrow?
I wonder how one constructs a 256-bit hash out of Blowfish with
a 64-bit block size. A quick explanation would be appreciated.
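I don't know what our implementation actually does, but one textbook way to get a wide hash out of a 64-bit block cipher is to run several Davies-Meyer chains from distinct IVs and concatenate them. A sketch (the cipher here is a toy Feistel stand-in, NOT Blowfish, just to make it runnable; note that parallel 64-bit chains do not in general buy full 256-bit collision resistance):

```python
import struct

MASK32 = 0xFFFFFFFF

def toy_cipher(key, block):
    # Stand-in 64-bit block cipher (NOT Blowfish): a few Feistel rounds
    # keyed from `key`, purely so the construction below executes.
    l, r = struct.unpack(">II", block)
    words = [int.from_bytes(key[i:i + 4], "big")
             for i in range(0, len(key) - 3, 4)] or [0]
    for k in 4 * words:
        l, r = r, l ^ ((r * 2654435761 + k) & MASK32)
    return struct.pack(">II", l, r)

def wide_hash(data, chains=4):
    # Davies-Meyer: H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}, with the message
    # block used as the cipher key; one chain per 64-bit output slice,
    # each chain started from a distinct IV.
    data += b"\x80" + b"\x00" * (-(len(data) + 1) % 8)  # pad to 8-byte multiple
    state = [struct.pack(">Q", iv) for iv in range(1, chains + 1)]
    for off in range(0, len(data), 8):
        m = data[off:off + 8]
        state = [bytes(a ^ b for a, b in zip(toy_cipher(m, h), h))
                 for h in state]
    return b"".join(state)  # chains * 64 bits, e.g. 256 for chains=4
```

Again, this is only one plausible construction; a quick explanation of what our code really does would still be appreciated.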

> is complex enough and
> generates good enough output.  If you /really/ want to make the attack
> on it much harder, how about this: if you're going to read 1024 bits
> of entropy from Yarrow on /dev/random, you will request it all at once
> and block just as the old random(4) used to block; the blocking can
> occur at 256 bit intervals and sleep until there is a reseed.  Waiting
> to reseed for each read will ensure a much larger amount of "real"
> entropy than it "maybe" happening at random times.

Sounds like a good idea. It looks like it would be reasonably
easy to then add an extra entropy counter to the pool, from which
you subtract the number of bits that are output and to which you
add the number of entropy bits that are mixed in.

You can then extract bytes until that counter hits 0 and then block
until it goes positive again, which would ensure that the entropy
output through /dev/random is not re-used for output through
/dev/urandom. This would not affect the security of Yarrow at all
but would preserve the semantics of /dev/random almost completely.
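A sketch of that accounting in Python (an illustration of the proposed semantics only, not the actual kernel code; all names are made up):

```python
import threading

class EntropyCounter:
    # Proposed /dev/random accounting: credit bits when gathered entropy
    # is mixed into the pool, debit bits when they are read out, and
    # block readers while the counter sits at zero.
    def __init__(self):
        self._bits = 0
        self._cond = threading.Condition()

    def credit(self, nbits):
        # Called when nbits of gathered entropy are mixed into the pool.
        with self._cond:
            self._bits += nbits
            self._cond.notify_all()

    def read(self, nbits):
        # Blocks until the full request is covered by gathered entropy,
        # so /dev/random never outputs more bits than were collected and
        # never hands the same entropy bits out twice.
        granted = 0
        with self._cond:
            while granted < nbits:
                while self._bits == 0:
                    self._cond.wait()
                take = min(self._bits, nbits - granted)
                self._bits -= take
                granted += take
        return granted
```

A /dev/urandom read would simply bypass the counter, so its output is unaffected; only /dev/random blocks.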

> Can you really find anything wrong with doing what I propose *in
> practice*?  

No. Although I think you can make it nearly perfect by
incorporating the above suggestion. You would then *never*
return more bits than you have gathered entropy for, and you
would never use those entropy bits twice.

> I've
> already implemented this as well as some other bugfixes, so see the
> attached diff.


> > [1] And, should we decide not to change /dev/random semantics,
> >     can we still back /dev/random with a modified Yarrow?
> I think it makes sense :)

Me too, especially with your changes (or modification thereof)
to preserve current /dev/random semantics.

Jeroen C. van Gelderen          o      _     _         _
[EMAIL PROTECTED]  _o     /\_   _ \\o  (_)\__/o  (_)
                      _< \_   _>(_) (_)/<_    \_| \   _|/' \/
                     (_)>(_) (_)        (_)   (_)    (_)'  _\o_

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-current" in the body of the message
