In case this subject varies based on context: I'm specifically asking about
generating private keys / certs via the "openssl" command-line tools on Linux
(RHEL/CentOS), for use with HTTPS, etc.
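
For concreteness, the sort of commands I mean (file names are just
placeholders):

    # generate a 2048-bit RSA private key
    openssl genrsa -out server.key 2048
    # generate a self-signed certificate from that key, valid one year
    openssl req -new -x509 -key server.key -out server.crt -days 365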

 

My question is: assuming servers are created from VM snapshots or clones,
or restored from backups, and are therefore relatively starved of entropy
early in their lives, how can you ensure you have good entropy at the time
you generate your keys?
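
(For what it's worth, Linux does expose the kernel's estimate of available
entropy, though I'm not sure how much weight to give the number:)

    # kernel's estimate of available entropy, in bits
    cat /proc/sys/kernel/random/entropy_avail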

 

I guess I'm trying either to learn to trust /dev/urandom, or to find a good
way to get better randomness, given my distrust of /dev/urandom.

 

Based on my understanding of the FAQ and manual, I thought the RANDFILE
~/.rnd was used as the seed for the openssl internal PRNG, and that ~/.rnd
would be overwritten on each use to become the seed for next time.  Right?
Apparently not...
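
If I've read the docs right, the knob here is the RANDFILE environment
variable, which defaults to ~/.rnd; my mental model was something like:

    # point openssl at a specific seed file
    export RANDFILE=$HOME/.rnd
    # write 1024 fresh random bytes into it for next time
    openssl rand -out $HOME/.rnd 1024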

 

I ran md5sum on ~/.rnd, then generated a key and checked the md5sum again.
It had changed, which verifies I'm looking at the right RANDFILE.  So far
so good...
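
In other words, the test was roughly:

    md5sum ~/.rnd                        # note the checksum
    openssl genrsa -out test.key 2048
    md5sum ~/.rnd                        # checksum has changed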

 

I tried copying .rnd, generating some keys, restoring .rnd, and then
regenerating the keys, just to see if my understanding was right.  But the
keys came out different, which suggests my understanding is wrong.  What am
I missing?  Does openssl mix the .rnd file and urandom together?
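
That experiment, roughly:

    cp ~/.rnd /tmp/rnd.saved
    openssl genrsa -out test1.key 2048
    cp /tmp/rnd.saved ~/.rnd
    openssl genrsa -out test2.key 2048
    # if .rnd were the only seed input, test1.key and test2.key should
    # match; they don't, so something else must be mixed in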

 

Naturally, this question implies distrust of /dev/urandom.  But according
to the FAQ & manual, it seems openssl trusts urandom to be sufficiently
random.  Why?  Doesn't this defy conventional logic?  Isn't that the whole
point of differentiating random from urandom?  As I've always understood
it, /dev/urandom only appears random upon cursory inspection, which is good
enough to create some variability in video games, but /dev/random should
appear random no matter how closely you examine it, which is what matters
for cryptography, isn't it?

 

Does it increase entropy if you overwrite the .rnd file with 1024 bytes from
/dev/random?
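
Something like this is what I have in mind (noting that /dev/random can
block until entropy accumulates, and dd can return short reads from it,
hence iflag=fullblock):

    dd if=/dev/random of=$HOME/.rnd bs=1024 count=1 iflag=fullblock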

 

Thanks...
