/dev/random requires real entropy: timing jitter between packets on the
network, between keystrokes, or between disk reads.  It will block when
nothing genuinely random is happening.  That is how it is supposed
to work.  /dev/urandom falls back to pseudo-random generation when
there isn't any entropy.  1.8K/s does seem a bit slow; bigger blocks,
or double buffering, might speed it up.  Try 'dd if=/dev/urandom obs=512
ibs=5120'.  Both devices err on the side of making the data really random.
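To see whether block size matters on your box, you could time a fixed amount
of data read at two different block sizes (a rough benchmark sketch; the byte
counts here are just picked to finish quickly on a 486):

```shell
# Read 1 MiB from /dev/urandom in small blocks, then in large blocks,
# discarding the output; compare the elapsed times.
time dd if=/dev/urandom of=/dev/null bs=512 count=2048
time dd if=/dev/urandom of=/dev/null bs=65536 count=16
```

If the large-block run is not noticeably faster, the bottleneck is the
generator itself rather than per-read overhead.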

/dev/null isn't an input, only an output; it doesn't produce bytes.
/dev/zero is a common choice if you don't need cryptographic security.
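The difference is easy to demonstrate (output paths here are just
illustrative temp files):

```shell
# Reading /dev/null returns EOF immediately, so dd copies 0 bytes;
# /dev/zero supplies an endless stream of zero bytes, so dd copies
# the full count requested.
dd if=/dev/null of=/tmp/null.out bs=1024 count=4
dd if=/dev/zero of=/tmp/zero.out bs=1024 count=4
```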

Nothing you describe is surprising, and these devices are all stock,
unmodified 2.2.20 in tomsrtbt...  Older kernels would probably be
faster, and newer ones even more obsessive about being as random as
possible.

-Tom



> dd if=/dev/random of=/dev/hda bs=1024
> and I get a trickle of random out of /dev/random. Basically, I get a few records (4
> or 5) when I break out of it after a few seconds. If I use /dev/urandom I seem to
> get a little more data, but still _very_ slow. Using /dev/null doesn't work at all, but
> /dev/zero does.
> Timing the process, urandom gives me about 1.8K/s. This running on ancient
> hardware (486). Is the random device that processor-hungry? Or is something
> else amiss?
