Like everyone else, I've been bitten by /dev/random blocking because it
didn't have enough entropy. I was most recently bitten after booting the
system single-user to do some work - a case that none of the discussion
about when/where/how to save and restore the entropy information has
addressed.

It seems like we're being a bit paranoid about this - arguments that
you can't be too paranoid notwithstanding. I mean - if I mount a
few dozen file systems using inode numbers drawn from a PRNG instead
of from a cryptographic-quality source, then *quit using the PRNG*,
how much extra exposure do I really have?
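
To make the scenario concrete, here's a rough sketch - the names are
made up for illustration, and no real filesystem does exactly this -
of handing out inode numbers from a PRNG that is seeded once at mount
time and never touched again:

    #include <stdint.h>
    #include <stdlib.h>
    #include <time.h>

    /* Seed the PRNG once, at mount time; weak, but we quit using it after. */
    void
    mount_seed_prng(void)
    {
            srandom((unsigned)time(NULL));
    }

    /* Hand out an inode number from the PRNG instead of /dev/random. */
    uint32_t
    next_inode_number(void)
    {
            return ((uint32_t)random());
    }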

It's clear to me it would be less painful if we had two bike sheds -
uh, behaviors for /dev/random. One would be active at system boot
time, and hence while the system was running single-user. It wouldn't
require much entropy, and wouldn't be of cryptographic quality
(though that would be nice). The other would be the high-quality
system now being built, but it would only be enabled by some userland
action, and once enabled it couldn't be turned off. The obvious
userland action is writing to /dev/random to give it entropy.
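
As a sketch of the shape of the idea (all names are hypothetical and
the back ends are stubs - this is not driver code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical back ends; real code would stir an entropy pool
     * and drive a proper cryptographic generator. */
    static void stir_entropy_pool(const void *buf, size_t len) { (void)buf; (void)len; }
    static void crypto_quality_bytes(void *buf, size_t len) { memset(buf, 0, len); /* stand-in */ }
    static void
    boot_time_prng_bytes(void *buf, size_t len)
    {
            unsigned char *p = buf;

            while (len--)
                    *p++ = (unsigned char)random();  /* cheap, never blocks */
    }

    static bool crypto_mode = false;        /* the one-way switch */

    /* Userland writes entropy into /dev/random: stir it in, throw the switch. */
    void
    random_write_entropy(const void *buf, size_t len)
    {
            stir_entropy_pool(buf, len);
            crypto_mode = true;             /* once set, never cleared */
    }

    /* Reads from /dev/random: weak but non-blocking before the switch,
     * cryptographic quality after it. */
    void
    random_read(void *buf, size_t len)
    {
            if (crypto_mode)
                    crypto_quality_bytes(buf, len);
            else
                    boot_time_prng_bytes(buf, len);
    }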

However, someone who actually understands the issues should go through
the rc sequence and figure out where we need cryptographic-quality
randomness (or we could add a "CRYPTORANDOM" keyword to the NetBSD-like
rc framework, so the things that require it can be flagged as such).
That's the step needed to decide whether doing this kind of split has
any advantage at all.
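
For instance, an rc step flagged CRYPTORANDOM might refuse to run
until the switch has been thrown. Assuming some way to ask the kernel
about it - say a hypothetical sysctl kern.random.crypto_mode, which
doesn't exist today - the check could be as small as:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            int mode = 0;
            size_t len = sizeof(mode);

            /* "kern.random.crypto_mode" is hypothetical. */
            if (sysctlbyname("kern.random.crypto_mode", &mode, &len, NULL, 0) == -1) {
                    perror("sysctlbyname");
                    return (2);
            }
            if (mode == 0) {
                    fprintf(stderr, "random device not yet in cryptographic mode\n");
                    return (1);     /* rc can refuse to start the service */
            }
            return (0);
    }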

One problem I see is preventing someone from shooting themselves
in the foot by, for instance, creating ssh host keys using the
low-quality /dev/random. There are certainly others.

Just some thoughts, possibly useful, probably not - but I thought
them worth sharing.

        <mike

