On Thu, Jul 19, 2018 at 10:27:56AM +0100, Richard Melville wrote:
> On 19 July 2018 at 04:47, Bruce Dubbs <[email protected]> wrote:
>
> > On 07/18/2018 08:04 PM, Ken Moffat wrote:
> >> Finally, making slow progress on this.  The problem is caused by the
> >> fix for CVE-2018-1108.  A little while ago Ted Ts'o offered a patch,
> >> possibly as an RFC, to use entropy from the hwrng (unsafe for
> >> critical things like key generation, but it allows less-important
> >> things, e.g. in systemd units, to run, and therefore it lets the box
> >> boot in the absence of real entropy).
> >>
> >> Apparently he did this because Fedora are starting to derive
> >> "entropy" from jitter so that e.g. VMs can boot in a meaningful
> >> time.
> >> And that generation of jitter sounds very similar to what haveged
> >> claims to provide.
>
> > Have you tried using haveged?  Its boot order is S21, so it will start
> > slightly before unbound.  That still leaves the problem of unbound
> > using /dev/urandom, but it may help.
>
> I already suggested that -- Ken doesn't like it.  I use SSDs and it
> works for me.
>
> Richard

Yeah, if you google haveged you will quickly find links noting that
almost all of its tests still pass when it is fed a constant stream of
'1' bits, e.g. as mentioned in https://lwn.net/Articles/525459/

And note also the reference there to Debian's past OpenSSH problem:
at one time they generated only 32,767 possible SSH keys.

As long as the result is NOT used for important things (crypto key
generation, perhaps generating UUIDs), the quality of the randomness
does not usually matter too much.

I now contend that generating a random number to use when validating
DNS responses does not require high-quality randomness, and as
evidence I refer to the code I posted (taken originally from OpenBSD,
according to its documentation, so I will describe it as "paranoid by
preference").  It tries to read /dev/random, and only falls back to
/dev/urandom if that read failed.  But the correct behaviour of
/dev/random *on linux* is to block until the kernel determines it can
provide the requested entropy.

By adding something from haveged, unbound will probably be able to
start quickly, as it did before the kernel correctly checked that
initialisation was complete.  But any subsequent request for a
high-quality random number will then get lower-quality randomness.
Using /dev/urandom seems a better way of preserving quality randomness
for when it is needed, which is why I am reluctant to use haveged (and
since my kaveri doesn't have an rng, I can't use rng-tools).

And (probably like most other people here) thinking through the
details makes my brain hurt, and I might have missed or misunderstood
something.
Fortunately I don't have to worry about the more severe issues, such
as generating cryptographic keys in a VM, and people who do have to
deal with that have my respect.

ĸen
-- 
Entropy not found, thump keyboard to continue
-- 
http://lists.linuxfromscratch.org/listinfo/blfs-support
FAQ: http://www.linuxfromscratch.org/blfs/faq.html
Unsubscribe: See the above information page
