Mark writes:
> > I'm hoping to persuade the yarrow designers of the importance of
> > supporting /dev/random semantics for the unix community acceptance.
> > John Kelsey and I had some discussions along the lines of feeding
> > /dev/random output into yarrow, which I notice someone on here
> > considered.
> 
> By "/dev/random semantics", are you referring to a blocking model
> that "counts" entropy and blocks when it believes it is "empty"?

Yes.  See the mbox archive I just put up at:

        http://www.cypherspace.org/~adam/yarrow.txt

and [1] below for my arguments of why I agree with Kris and Jeroen's
earlier comments.

You really can't use yarrow to implement /dev/random as it stands.
Even waiting for reseeds doesn't cut it.  The issue is that everything
goes through the yarrow output function, which restricts yarrow to
offering computational security with at most a 2^n work factor to
break, because it hands the attacker a known plaintext: the first
output block is E_k( 0 ), the encryption of the all-zero counter.
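A toy sketch of that known-plaintext attack (this is not Yarrow itself; an HMAC-SHA1 stand-in plays the block cipher, and an 8-bit key keeps the brute force instant, where the real parameter would be 160 or more bits):

```python
# The output function is E_k(counter), so the very first block is
# E_k(0): ciphertext for a known plaintext.  With an n-bit key an
# attacker can recover k from that one block in at most 2^n trials.
import hmac
import hashlib

def E(key: bytes, counter: int) -> bytes:
    """Stand-in PRF for the block cipher E_k(C) in the output function."""
    return hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha1).digest()

def first_output(key: bytes) -> bytes:
    return E(key, 0)          # counter starts at 0 -> known plaintext

def brute_force(observed: bytes) -> bytes:
    # The 2^n work factor: try every key until E_k(0) matches.
    for k in range(256):      # toy 8-bit key space
        guess = bytes([k])
        if E(guess, 0) == observed:
            return guess
    raise ValueError("key not found")

secret = bytes([0xA7])
recovered = brute_force(first_output(secret))
```

However large the entropy fed in, the attacker's cost is capped by the key size of the output function, which is the whole point of the objection.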

> I am against the blocking model, as I believe that it goes against
> what Yarrow is trying to do. If the Yarrow authors argued otherwise,
> I'd listen.

Niels and John Kelsey were against it too, initially, on the grounds
that computational security (160 bits -- or whatever the parameter is
with the ciphers you have plugged in) is in fact "good enough" in
practice, even for 1024 bit RSA key generation.

(The argument has some validity; in practice a brute force attack
against 1024 bit RSA takes significantly fewer than 2^160 operations,
though the memory requirements are higher.)

However it is not fair to impose that view on everyone.  People can
have legitimate reasons to need more entropy.  Another very concrete
example: say someone is using a yarrow-160 (3DES and SHA1)
implementation and wants to use an AES cipher with a 256 bit key --
without the /dev/random API you can't get 256 bit security; with it
you can.

OTPs and some other constructions offer information theoretic
security, or you may be using a symmetric construct with a larger key
space than the yarrow output size (moving to 256 doesn't solve that --
then they want a 512 bit key).  Worse, people may already be relying
on /dev/random for these assumptions, so you risk breaking existing
code's security by replacing /dev/random with yarrow.
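The "/dev/random semantics" being argued for can be sketched as an entropy-counting pool that debits its estimate on every read and refuses to produce output once the estimate is exhausted.  A minimal illustrative model (names and structure are mine, not FreeBSD's implementation; blocking is modeled as raising):

```python
# Minimal model of blocking /dev/random semantics: credit an entropy
# estimate on input, debit it on output, and refuse reads that would
# exceed the remaining estimate.
import hashlib

class BlockingPool:
    def __init__(self):
        self.pool = hashlib.sha1()
        self.entropy_bits = 0        # conservative running estimate

    def add_entropy(self, sample: bytes, est_bits: int):
        self.pool.update(sample)
        self.entropy_bits += est_bits

    def read(self, nbytes: int) -> bytes:
        need = nbytes * 8
        if need > self.entropy_bits:
            # a real /dev/random would block here until more arrives
            raise BlockingIOError("insufficient entropy")
        self.entropy_bits -= need    # debit, /dev/random-style
        out = b""
        counter = 0
        while len(out) < nbytes:
            out += hashlib.sha1(self.pool.digest() +
                                counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:nbytes]

pool = BlockingPool()
pool.add_entropy(b"timing jitter samples...", est_bits=64)
eight = pool.read(8)                 # exactly 64 bits credited: succeeds
```

Under conservative estimates each output bit is backed by a bit of collected entropy, which is what lets callers treat the output as OTP-grade material -- the property yarrow's output function cannot provide.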

> > - yarrow design specifically calls for a hash function and a block
> >   cipher -- you may easily be violating some of its security
> >   assumptions by plugging in the above.
> 
> If I construct a specific hash function, is this still a problem?

No.  Note that my other comments on the list about CBC-MAC were
confused -- I misread your code.  It appears to be a keyless hash
function, and as Jeroen noted it has some similarities to
Davies-Meyer, but it's not quite the same, for the reasons he noted.

The main argument is against using constructs which haven't received
lots of peer review -- most crypto constructs are very fragile to
small design changes.

Adam

[1]
======================================================================
Here's a cut and paste from discussions on yarrow list summarising my
view:

| we still have a community acceptance problem: there remains the
| possibility that /dev/random could produce unconditionally secure
| output [IFF entropy estimates are conservative]; if we replace this
| with something which _can not_ be unconditionally secure, we face
| complaints.

and:

| So given that, it doesn't seem quite fair to pull the rug from under
| /dev/random users and replace it with a PRNG with quite different
| security assumptions.  Users would have similar reasons to be upset if
| someone removed their /dev/random and symlinked it to /dev/urandom.

and after more arguments, more formally argued:

| Let's imagine we have a radioactive decay card, and we run its
| outputs through a software de-skewing function to hide biases due to
| detector dead time and the expected distribution being different to
| that which we desire.  Say we're convinced that the de-skewing
| function makes each bit of output uniformly distributed, to the extent
| that we're confident in using its outputs as an OTP.
| 
| Now what Yarrow-160 does is it takes k bits of OTP output, resets a 64
| bit counter (C) to 0 and uses counter mode from there.  You can't get
| at the OTP outputs.
| 
| Now my issue is if I had access to the OTP I could have had as many
| uniformly distributed bits as I wanted, subject to their rate of
| production.  But going through the Yarrow-160 output function I can
| never get information theoretic security.  If I use it naively I'll
| get at best a 2^160 work factor if no reseeds occur, and I may
| share these bits across multiple applications and with other users.
| 
| Even if I have a mechanism to wait for a reseed after each output and
| reserve that output for me, I force at best R*2^160 attack work for R
| reseeds, rather than the 2^{R*160} I wanted.
| 
| Note the yarrow-160 API and design don't allow me to wait for and
| reserve the output of a reseed in a multi-tasking OS -- /dev/random
| does.
======================================================================
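The reseed accounting above can be made concrete with a toy sketch (again an HMAC-SHA1 stand-in for the block cipher, and an 8-bit key per reseed epoch instead of 160 bits): because each epoch's stream starts at E_k(0), an attacker breaks each epoch independently, so R epochs cost about R*2^8 trials in total rather than the (2^8)^R a genuinely 8*R-bit secret would cost.

```python
# Each reseed epoch has its own key, but every epoch's first block is
# E_k(0), so epochs can be attacked one at a time: total cost is the
# SUM of the per-epoch key spaces, not their PRODUCT.
import hmac
import hashlib

def E(key: bytes, counter: int) -> bytes:
    # stand-in PRF for the block cipher in counter mode
    return hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha1).digest()

def break_epoch(first_block: bytes):
    """Brute-force one epoch's key from its first output block."""
    for k in range(256):                 # toy 8-bit key space
        if E(bytes([k]), 0) == first_block:
            return bytes([k]), k + 1     # (key, trials used)
    raise ValueError("key not found")

epoch_keys = [bytes([17]), bytes([200]), bytes([5])]   # R = 3 reseeds
observed = [E(k, 0) for k in epoch_keys]               # one block per epoch

total_trials = 0
recovered = []
for blk in observed:
    key, trials = break_epoch(blk)
    recovered.append(key)
    total_trials += trials
# total_trials is bounded by R * 2^8 = 768, far below (2^8)^R = 16777216
```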


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-current" in the body of the message