> In my program, I have calls to RSA_generate_key and also RAND_bytes, which 
> is used to generate a DES key. It's obvious that these functions require a 
> source of randomness, and the documentation also says to seed the PRNG 
> before calling them. However, because I couldn't figure out exactly what 
> to use to do the seeding, I simply called them without any seeding for 
> now. When I run the program, nothing seems to be wrong 
> (i.e. RAND_bytes() returns 1, etc.) 

RAND_bytes will return 1 if it *thinks* it has enough randomness.
However, if you seeded it with a meg of '0's, it'll think it has
enough because you lied to it.
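
You can at least ask the PRNG whether it believes it's been seeded
before trusting the output.  Something like this (just a sketch,
error handling trimmed):

        #include <openssl/rand.h>
        #include <stdio.h>

        int main(void)
        {
                unsigned char key[8];   /* a DES key is 8 bytes */

                /* RAND_status() returns 1 if the PRNG thinks it has
                 * been seeded with enough entropy -- "thinks" being
                 * the key word; it can't tell good seed from bad. */
                if (!RAND_status())
                        fprintf(stderr, "warning: PRNG not seeded\n");

                if (RAND_bytes(key, sizeof(key)) != 1) {
                        fprintf(stderr, "RAND_bytes failed\n");
                        return 1;
                }
                return 0;
        }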

> My program works on both Windows and Linux right now, but can I assume it 
> will always be able to find the source?

On Linux, or anything else with /dev/urandom, you're OK (though some
will argue you should use stronger entropy if you're going to be
generating keys, such as manually snagging a few (~16) bytes from
/dev/random as well).
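
If you want to do the /dev/random trick, a rough sketch (note that
/dev/random can block, so only grab a handful of bytes):

        #include <openssl/rand.h>
        #include <fcntl.h>
        #include <unistd.h>

        static void add_strong_seed(void)
        {
                unsigned char buf[16];
                int fd = open("/dev/random", O_RDONLY);

                if (fd >= 0) {
                        ssize_t n = read(fd, buf, sizeof(buf));
                        if (n > 0)
                                /* claim full entropy -- reasonable
                                 * for /dev/random output */
                                RAND_add(buf, n, (double)n);
                        close(fd);
                }
        }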

On Windows it'll use the screen contents, which may or may not be very
random.  On a server, where the icons are always in the same place,
or worse yet at a login screen, you don't really have much randomness
on the screen.

When developing for Windows, I prefer to manually include a random
seed file.  I generate this seed file from something trusted (like
my Linux box...) and then rewrite it with new seed data periodically,
so each new invocation has fresh seed data to start from.

A good way to create one would just be

        $ dd if=/dev/urandom of=seed_data bs=1024 count=1
        $ scp seed_data windows_box:/path/to/put/it

then have your app read the seed data with RAND_load_file, and be
sure to call RAND_write_file immediately thereafter (to ensure it's
different for the next process starting up), and then again later
when you have a chance (so it differs by more than just that
initial rewrite.)
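
In code that startup dance looks roughly like this (a sketch; the
seed file path is whatever you picked above):

        #include <openssl/rand.h>

        #define SEED_FILE "/path/to/put/it/seed_data"

        static void init_prng(void)
        {
                /* -1 = read the whole file into the pool */
                RAND_load_file(SEED_FILE, -1);

                /* overwrite immediately so the next process to start
                 * doesn't begin from the same seed state */
                RAND_write_file(SEED_FILE);
        }

        static void checkpoint_prng(void)
        {
                /* call again later, once the pool is better stirred */
                RAND_write_file(SEED_FILE);
        }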

If you have the option, keep sticking more bytes into the pool, such
as passing in the data you're shuttling around (I assume your app
reads/writes *something*) with an appropriate entropy estimate.  If
you're reading/writing lots of similar data, your entropy estimate
should be low; if you're dealing with lots of genuinely random data,
set it higher, but I'd never go above 50% unless you've done some
analysis to verify that your data really is high entropy.

For example, my most recent OpenSSL project involved sending data
over SSL that came from patients' vital signs (heart rate, EKG, etc).
Some were certainly more random than others: body temperature is only
a few bits of entropy, whereas an EKG produces millions of bytes of
data whose random quality can vary radically depending on the state
of the patient.

I fed all this into the pool when possible, with an entropy estimate
of 2%, even though much of it was highly random.  The amount of data
I had available was more than sufficient to keep the pool stirred.
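
In OpenSSL terms that boils down to RAND_add() with a pessimistic
third argument (a sketch of the idea):

        #include <openssl/rand.h>

        /* Stir application data into the pool.  The third argument
         * is your estimate of how many bytes of true entropy the
         * buffer holds -- keep it pessimistic. */
        static void stir_pool(const unsigned char *data, int len)
        {
                RAND_add(data, len, len * 0.02);  /* claim 2% entropy */
        }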

At least, that's my method.



--
Brian Hatch                  "Zathras work here. Zathras
   Systems and                were born here. You work up
   Security Engineer          there, Zathras work down
http://www.ifokr.org/bri/     here. You dress like that,
                              Zathras dress like this."
Every message PGP signed
