On 2/11/24 00:11, Thomas Schmitt wrote:
Hi,

David Christensen wrote:
$ time dd if=/dev/urandom bs=8K count=128K | wc -c
[...]
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.30652 s, 249 MB/s

This looks good enough for practical use on spinning rust and slow SSD.


Yes.


Maybe the "wc" pipe slows it down?
... not much on a 4 GHz Xeon with Debian 11:

   $ time dd if=/dev/urandom bs=8K count=128K | wc -c
   ...
   1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.13074 s, 260 MB/s
   $ time dd if=/dev/urandom bs=8K count=128K of=/dev/null
   ...
   1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.95569 s, 271 MB/s


My CPU has a Max Turbo Frequency of 3.3 GHz. I would expect a 4 GHz processor to be ~21% faster, but apparently not.


Baseline with a pipeline and wc(1), using bs=8K because of an unknown Bash pipeline bottleneck (?):

2024-02-11 01:18:33 dpchrist@laalaa ~
$ dd if=/dev/urandom bs=8K count=128K | wc -c
1073741824
131072+0 records in
131072+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.27283 s, 251 MB/s


Eliminate pipeline and wc(1):

2024-02-11 01:18:44 dpchrist@laalaa ~
$ dd if=/dev/urandom of=/dev/null bs=8K count=128K
131072+0 records in
131072+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.75946 s, 286 MB/s


Increase block size:

2024-02-11 01:18:51 dpchrist@laalaa ~
$ dd if=/dev/urandom of=/dev/null bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.62874 s, 296 MB/s
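
The block-size comparison can be scripted. The sweep below is a sketch with the totals cut to 8 MiB per run so it finishes quickly; scale count up for a real measurement:

```shell
# Sweep dd block sizes against /dev/urandom and print each summary line.
# Sizes here are illustrative (8 MiB per run), not the 1 GiB used above.
sweep=$(
    for bs in 8K 64K 1M; do
        case "$bs" in
            8K)  count=1024 ;;   # 1024 * 8 KiB = 8 MiB
            64K) count=128  ;;   # 128 * 64 KiB = 8 MiB
            1M)  count=8    ;;   # 8 * 1 MiB    = 8 MiB
        esac
        printf 'bs=%s: ' "$bs"
        # dd writes its "... copied, X s, Y MB/s" summary to stderr.
        dd if=/dev/urandom of=/dev/null bs="$bs" count="$count" 2>&1 \
            | grep 'copied'
    done
)
echo "$sweep"
```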


Concurrency:

threads throughput
1       296 MB/s
2       285+286=571 MB/s
3       271+264+266=801 MB/s
4       249+250+241+262=1,002 MB/s
5       225+214+210+224+225=1,098 MB/s
6       223+199+199+204+213+205=1,243 MB/s
7       191+209+210+204+213+201+197=1,425 MB/s
8       205+198+180+195+205+184+184+189=1,540 MB/s
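
The concurrency runs above can be scripted; below is a sketch that starts N dd readers in parallel and collects each reader's summary line. N and COUNT are illustrative and much smaller than the 1 GiB per reader used for the table:

```shell
# Run N parallel dd readers of /dev/urandom and collect each summary line.
# N and COUNT are illustrative placeholders, not values from the table.
N=2
COUNT=8                       # 8 MiB per reader keeps the demo quick
tmpdir=$(mktemp -d)
i=1
while [ "$i" -le "$N" ]; do
    dd if=/dev/urandom of=/dev/null bs=1M count="$COUNT" \
        2>"$tmpdir/dd_$i.log" &
    i=$((i + 1))
done
wait
# Each log's "... copied, X s, Y MB/s" line holds that reader's rate;
# sum the rates by hand or with awk, as in the table above.
rates=$(grep -h 'copied' "$tmpdir"/dd_*.log)
echo "$rates"
rm -rf "$tmpdir"
```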


Last time I tested /dev/urandom it was much slower on comparable machines
and also became slower as the amount grew.


Did you figure out why the Linux random number subsystem slowed, and at what data volume?


Therefore I still have my amateur RNG, which works with a little bit of
MD5 and a lot of XOR. It produces about 1100 MiB/s on the 4 GHz machine.
No cryptographic strength, but chaotic enough to avoid any systematic
pattern which could be recognized by a cheater and represented with
some high compression factor.
The original purpose was to avoid any systematic interference with the
encoding of data blocks on optical media.

I am sure there are faster RNGs around with better random quality.


I assume the Linux kernel in Debian 11 is new enough to support RDRAND (?):

https://en.wikipedia.org/wiki/RdRand


But, my processor is too old to have Secure Key.
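
Whether a given CPU advertises RDRAND can be checked from the flags in /proc/cpuinfo (Linux-only; a minimal sketch):

```shell
# Report whether the CPU advertises the rdrand flag (Intel "Secure Key").
# On CPUs without the feature (or non-x86), the flag is simply absent.
if grep -qw rdrand /proc/cpuinfo 2>/dev/null; then
    rdrand_status="rdrand present"
else
    rdrand_status="rdrand absent"
fi
echo "$rdrand_status"
```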


$ time perl -MMath::Random::ISAAC::XS -e \
    '$i=Math::Random::ISAAC::XS->new(12345678); print pack "L", $i->irand while 1' \
    | dd bs=8K count=128K | wc -c
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 82.6523 s, 13.0 MB/s

Now that's merely sufficient for shredding the content of a BD-RE medium
or a slow USB stick.


Okay.


I suggest using /dev/urandom and tee(1) to write the same CSPRNG
stream to all of the devices, and cmp(1) to validate.

I'd propose using a checksummer like md5sum or sha256sum instead of cmp:

  $random_generator | tee $target | $checksummer
  dd if=$target bs=... count=... | $checksummer

This way one can use unreproducible random streams and does not have to
store the whole stream on a reliable device for comparison.
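
That scheme can be sketched as follows; here a temp file stands in for the real target device, sha256sum is the checksummer, and the sizes are small placeholders for a quick demo:

```shell
# Sketch of the tee-plus-checksummer idea: checksum the random stream
# while writing it to the target, then re-read the target and compare.
# $target is a temp file standing in for a real block device.
target=$(mktemp)
count=256                      # 256 * 8 KiB = 2 MiB for a quick demo
write_sum=$(dd if=/dev/urandom bs=8K count="$count" 2>/dev/null \
            | tee "$target" | sha256sum | awk '{print $1}')
read_sum=$(dd if="$target" bs=8K count="$count" 2>/dev/null \
           | sha256sum | awk '{print $1}')
if [ "$write_sum" = "$read_sum" ]; then
    verify=OK
else
    verify=MISMATCH
fi
echo "$verify"
rm -f "$target"
```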


TIMTOWTDI.  :-)


David
