On 10/18/11 16:47, James Hozier wrote:

> I'm doing dd if=/dev/random of=/dev/wd0c

and your bottleneck was anything but, uh... (/dev/)random. :)

Doing it that way, you can't even push zeros out rapidly.

Add a block size flag.  Long ago, someone who should know assured me (or
maybe the mailing list?) that a bs greater than 32k doesn't buy you
anything, but my tests a year or two ago showed non-trivial improvements
up to around bs=1m.  Your results may vary.
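
If you want numbers from your own hardware, a quick-and-dirty comparison
like this will show it (destructive, obviously -- the device name is just
an example, point it at a disk you're wiping anyway; rwd0c is the raw
device, more on that in a second):

# same amount of data, two block sizes -- compare the wall-clock times
time dd if=/dev/zero of=/dev/rwd0c bs=32k count=32768   # 1GB in 32k chunks
time dd if=/dev/zero of=/dev/rwd0c bs=1m count=1024     # 1GB in 1m chunks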

Use the raw device -- /dev/rwd0c

So...something like this:
dd if=/dev/random of=/dev/rwd0c bs=256k
(Note: I'm not sure what happens if the last block to be written is only,
say, 250k in size -- it may clear those sectors, it may stop short.
Probably good to test if you are concerned about it.)
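
One safe way to test, instead of experimenting on a real disk: back a
vnd pseudo-disk with a file whose size is NOT a multiple of your block
size and watch what dd does at the end.  A sketch from memory -- the
vnconfig invocation may differ a bit between releases:

dd if=/dev/zero of=disk.img bs=512 count=2001   # 2001 sectors, not a multiple of 256k
vnconfig vnd0 disk.img
dd if=/dev/zero of=/dev/rvnd0c bs=256k          # does the short last block get written?
vnconfig -u vnd0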

You will see a huge improvement from EACH of those two tricks, and
combined, they are even better.  Another advantage of bs=1m is that if
you pkill -INFO dd, you can see at a glance how many megabytes you've
cleared so far (ok, you can do the math with any block size, but 1m is
the one I can do in my head).
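
For example (the record counts below are made up, and the output format
is from memory, not verbatim):

pkill -INFO dd
# dd answers SIGINFO with its record counts, something like
# "5000+0 records out" -- with bs=1m that's 5000MB written so far;
# with bs=256k you'd be multiplying the record count by 1/4 instead.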

But, repeating what someone else said -- if zeros aren't good enough
for you, shred or melt down the disks.  No one will get usable data off
the good spots on your disk after a pass of zeros, and random data
doesn't clear the locked-out bad blocks either.

Some time back, I made an OpenBSD boot disk with the install script
replaced by some dd commands to zero disks.  No prompting, just blow
away everything.  Cleared a few hundred machines that way.  Look, ma!
No keyboard! :) (Only blew away one machine by accident. ;)
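
For the curious, the guts of such a script needn't be more than this (a
from-memory sketch, not the actual script -- the disk list is hardcoded
here just for illustration):

#!/bin/sh
# zero every listed disk, no questions asked
for d in wd0 sd0; do
	dd if=/dev/zero of=/dev/r${d}c bs=1m
done
halt -p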

Nick.
