Re: Filling a 4TB Disk with Random Data

2020-06-10 Thread STeve Andre'
Even easier: have stty status set to ^T, and run dd.

When you want to know where you are in the process, hit ^T.  Lots (most?)
of programs will respond to a SIGINFO request.
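As a sketch of that setup (the device name is a placeholder; note that GNU dd on Linux responds to SIGUSR1 rather than SIGINFO):

```shell
# One-time terminal setup: bind ^T to SIGINFO (BSD; GNU dd uses SIGUSR1).
# stty status ^T
# dd if=/dev/random of=/dev/rsdXc bs=1048576
#
# While dd runs, press ^T -- or, from another terminal:
# pkill -INFO dd        # OpenBSD
# pkill -USR1 dd        # GNU dd equivalent on Linux
```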

--STeve Andre'

On Jun 10, 2020, at 12:48, Luke Small wrote:
>if you have access to packages, you could "pkg_add pv"
>
>and:
>
>"dd if=/dev/random | pv | dd of=/dev/rsdXc bs=1m"
>
>It will show you in real time how much random
>
>data has been written to disk.
>
>-Luke
>
>
>On Wed, Jun 10, 2020 at 11:43 AM Luke Small 
>wrote:
>
>> I mean: "dd if=/dev/random | pv | dd of=/dev/rsdXc bs=1m"
>>
>> -Luke
>>
>>
>> On Wed, Jun 10, 2020 at 11:41 AM Luke Small 
>wrote:
>>
>>> if you have access to packages, you could "pkg_add pv"
>>>
>>> and:
>>>
>>> "dd if=/dev/random | pv | of=/dev/rsdXc bs=1m"
>>>
>>> It will show you in real time how much random
>>>
>>> data has been written to disk.
>>>
>>> -Luke
>>>
>>


Re: Filling a 4TB Disk with Random Data

2020-06-10 Thread Luke Small
if you have access to packages, you could "pkg_add pv"

and:

"dd if=/dev/random | pv | dd of=/dev/rsdXc bs=1m"

It will show you in real time how much random

data has been written to disk.
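As a safe dry run before pointing the pipeline at a real disk, the sketch below pushes a few MiB through the same pipe into a scratch file (pv is optional here; bs=1048576 is the portable spelling of 1 MiB, since GNU dd wants 1M where OpenBSD dd takes 1m):

```shell
# Same shape as the real pipeline, but against a temp file instead of
# /dev/rsdXc, so it is harmless to run anywhere.
out=$(mktemp)
if command -v pv >/dev/null 2>&1; then
    dd if=/dev/urandom bs=1048576 count=8 2>/dev/null | pv | dd of="$out" bs=1048576 2>/dev/null
else
    dd if=/dev/urandom bs=1048576 count=8 2>/dev/null | dd of="$out" bs=1048576 2>/dev/null
fi
size=$(wc -c < "$out")    # 8 MiB = 8388608 bytes made it through the pipe
rm -f "$out"
echo "$size"
```

On the real run, pv's -s option (e.g. `pv -s 4t`) lets it show an ETA as well as throughput.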

-Luke


On Wed, Jun 10, 2020 at 11:43 AM Luke Small  wrote:

> I mean: "dd if=/dev/random | pv | dd of=/dev/rsdXc bs=1m"
>
> -Luke
>
>
> On Wed, Jun 10, 2020 at 11:41 AM Luke Small  wrote:
>
>> if you have access to packages, you could "pkg_add pv"
>>
>> and:
>>
>> "dd if=/dev/random | pv | of=/dev/rsdXc bs=1m"
>>
>> It will show you in real time how much random
>>
>> data has been written to disk.
>>
>> -Luke
>>
>


Re: Filling a 4TB Disk with Random Data

2020-06-08 Thread Ian Darwin
On Fri, Jun 05, 2020 at 12:49:41PM -0500, Ed Ahlsen-Girard wrote:
> On Mon, 01 Jun 2020 13:38:55 -0400
> "Eric Furman"  wrote:
> 
> > On Mon, Jun 1, 2020, at 10:28 AM, Paul de Weerd wrote:
> >  [...]  
> > 
> > This is why if you are serious you use a degausser.
> > 
> 
> The truly serious use a smelter. I am not making a joke.

And, to reduce the risk of the disks being intercepted on the way to the smelter:

https://prodevice.eu/media-destroyers-shredders/data-media-shredder/



Re: Filling a 4TB Disk with Random Data

2020-06-05 Thread Ed Ahlsen-Girard
On Mon, 01 Jun 2020 13:38:55 -0400
"Eric Furman"  wrote:

> On Mon, Jun 1, 2020, at 10:28 AM, Paul de Weerd wrote:
>  [...]  
> 
> This is why if you are serious you use a degausser.
> 

The truly serious use a smelter. I am not making a joke.

-- 

Edward Ahlsen-Girard
Ft Walton Beach, FL




Re: Filling a 4TB Disk with Random Data

2020-06-05 Thread Christian Weisgerber
On 2020-06-05, Roderick  wrote:

>> I'd think that a degausser would also erase the servo tracks which will make
>> the disk irrevocably unusable. If that's what you want then just drill holes
>> through the disk - it's quicker.
>
> Or perhaps to put it on an induction cooktop?

I always keep a vat of molten steel at hand so I can easily dispose
of old disk drives, killer robots from the future, etc.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Re: Filling a 4TB Disk with Random Data

2020-06-05 Thread Roderick



On Mon, 1 Jun 2020, Eike Lantzsch wrote:


> I'd think that a degausser would also erase the servo tracks which will make
> the disk irrevocably unusable. If that's what you want then just drill holes
> through the disk - it's quicker.


Or perhaps to put it on an induction cooktop?



Re: Filling a 4TB Disk with Random Data

2020-06-05 Thread Eike Lantzsch
On Monday, 1 June 2020 13:38:55 -04 Eric Furman wrote:
> On Mon, Jun 1, 2020, at 10:28 AM, Paul de Weerd wrote:
> > storage medium.  Due to smart disks remapping your data in case of
> > 'broken' sectors, some old data can never be properly overwritten.
>
> This is why if you are serious you use a degausser.

I'd think that a degausser would also erase the servo tracks which will make
the disk irrevocably unusable. If that's what you want then just drill holes
through the disk - it's quicker.

--
Eike Lantzsch ZP6CGE

Paradox: Getting live-updates about fatalities





Re: Filling a 4TB Disk with Random Data

2020-06-05 Thread Roderick



On Fri, 5 Jun 2020, Janne Johansson wrote:


> Then again, if you count how many hours it will take to securely erase a
> disk, one might doubt the option of "just run this command and it will do
> the same in 10 seconds".


Not 10 seconds, but there will surely be a difference if the task is done
by the disk hardware/firmware instead of the CPU/OS/software.

Rod.



Re: Filling a 4TB Disk with Random Data

2020-06-05 Thread Martin Schröder
Am Fr., 5. Juni 2020 um 09:21 Uhr schrieb Roderick :
> Is not there a SCSI command "sanitize" for that?

Secure erase: 
https://en.wikipedia.org/wiki/Parallel_ATA#HDD_passwords_and_security

Or you encrypt your device and throw away the key.
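On OpenBSD, that pre-emptive approach looks roughly like this sketch (destructive, with sd1 as a placeholder for the target disk; not something to paste blindly):

```shell
# sd1 is a placeholder; all data on it is lost.
# fdisk -iy sd1                   # initialize an MBR with one OpenBSD partition
# disklabel -E sd1                # interactively create a RAID-type 'a' partition
# bioctl -c C -l sd1a softraid0   # attach it as a softraid(4) CRYPTO volume
# ...newfs or install onto the resulting sdN device; later, discarding the
# passphrase (or detaching with 'bioctl -d') leaves only ciphertext behind.
```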

Best
Martin



Re: Filling a 4TB Disk with Random Data

2020-06-05 Thread Janne Johansson
Den fre 5 juni 2020 kl 09:23 skrev Roderick :

> Is not there a SCSI command "sanitize" for that?
> Can be issued with OpenBSD?
> Perhaps his disc supports it.
>

Then again, considering how many hours it takes to securely erase a
disk, one might be skeptical of an option that claims to "just run this
command and it will do the same in 10 seconds". Might work, might not.
Both will result in a drive whose old data is hard to read back, but
which option inspires more confidence?

-- 
May the most significant bit of your life be positive.


Re: Filling a 4TB Disk with Random Data

2020-06-05 Thread Roderick



Isn't there a SCSI command "sanitize" for that?

Can it be issued from OpenBSD?

Perhaps his disk supports it.

Rod.



Re: Filling a 4TB Disk with Random Data

2020-06-05 Thread Otto Moerbeek
On Thu, Jun 04, 2020 at 08:39:24PM -0700, Justin Noor wrote:

> Thanks you @misc.
> 
> Using dd with a large block size will likely be the course of action.
> 
> I really need to refresh my memory on this stuff. This is not something we
> do, or need to do, everyday.
> 
> Paul your example shows:
> 
> bs=1048576
> 
> How did you choose that number? Could you have gone even bigger? Obviously
> it is a multiple of 512.
> 
> The disks in point are 4TB Western Digital Blues. They have 4096 sector
> sizes.
> 
> I used a 16G USB stick as a sacrificial lamb to experiment with dd.
> Interestingly, there is no difference in time between 1m, 1k, and 1g. How
> is that possible? Obviously this will not be an accurate comparison of the
> WD disks, but it was still a good practice exercise.

Did you write to the raw device? That makes a big difference.

At some point increasing the buffer size will not help, since you are
already hitting some other (hw or sw) limit on bandwidth.

-Otto

> 
> Also Paul, to clarify a point you made, did you mean forget the random data
> step, and just encrypt the disks with softraid0 crypto? I think I like that
> idea because this is actually a traditional pre-encryption step. I don't
> agree with it, but I respect the decision. For our purposes, encryption
> only helps if the disks are off the machine, and someone is trying to
> access them. This automatically implies that they were stolen. The chances
> of disk theft around here are slim to none. We have no reason to worry
> about forensics either - we're not storing nuclear secrets.
> 
> Thanks for your time
> 
> 
> On Mon, Jun 1, 2020 at 7:28 AM Paul de Weerd  wrote:
> 
> > On Mon, Jun 01, 2020 at 06:58:01AM -0700, Justin Noor wrote:
> > | Hi Misc,
> > |
> > | Has anyone ever filled a 4TB disk with random data and/or zeros with
> > | OpenBSD?
> >
> > I do this before disposing of old disks.  Have written random data to
> > several sizes of disk, not sure if I ever wiped a 4TB disk.
> >
> > | How long did it take? What did you use (dd, openssl)? Can you share the
> > | command that you used?
> >
> > It takes quite some time, but OpenBSD (at least on modern hardware)
> > can generate random numbers faster than you can write them to spinning
> > disks (may be different with those fast nvme(4) disks).
> >
> > I simply used dd, with a large block size:
> >
> > dd if=/dev/random of=/dev/sdXc bs=1048576
> >
> > And then you wait.  The time it takes really depends on two factors:
> > the size of the disk and the speed at which you write (whatever the
> > bottleneck).  If you start, you can send dd the 'INFO' signal (`pkill
> > -INFO dd` (or press Ctrl-T if your shell is set up for it with `stty
> > status ^T`))  This will give you output a bit like:
> >
> > 30111+0 records in
> > 30111+0 records out
> > 31573671936 bytes transferred in 178.307 secs (177074202 bytes/sec)
> >
> > Now take the size of the disk in bytes, divide it by that last number
> > and subtract the second number.  This is a reasonable ball-park
> > indication of time remaining.
> >
> > Note that if you're doing this because you want to prevent others from
> > reading back even small parts of your data, you are better of never
> > writing your data in plain text (e.g. using softraid(4)'s CRYPTO
> > discipline), or (if it's too late for that), to physically destroy the
> > storage medium.  Due to smart disks remapping your data in case of
> > 'broken' sectors, some old data can never be properly overwritten.
> >
> > Cheers,
> >
> > Paul 'WEiRD' de Weerd
> >
> > --
> > >[<++>-]<+++.>+++[<-->-]<.>+++[<+
> > +++>-]<.>++[<>-]<+.--.[-]
> >  http://www.weirdnet.nl/
> >



Re: Filling a 4TB Disk with Random Data

2020-06-05 Thread Paul de Weerd
Hi Justin,

On Thu, Jun 04, 2020 at 08:39:24PM -0700, Justin Noor wrote:
| Thanks you @misc.
| 
| Using dd with a large block size will likely be the course of action.
| 
| I really need to refresh my memory on this stuff. This is not something we
| do, or need to do, everyday.
| 
| Paul your example shows:
| 
| bs=1048576
| 
| How did you choose that number? Could you have gone even bigger? Obviously
| it is a multiple of 512.

It's just 1m.  Yes, I could've gone bigger, but that wouldn't add
much.  1m is just my default so I can more easily tell how much has
been done upon SIGINFO, as the records are then 1m large.  So in my
sample output 30111 MB had been written.
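Since each full record at bs=1048576 is exactly 1 MiB, the record count converts straight to bytes; checking against the sample SIGINFO output:

```shell
records=30111                     # "records out" from dd's report
bytes=$(( records * 1048576 ))    # 1 MiB per record
echo "$bytes"                     # 31573671936, matching dd's byte count
```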

| The disks in point are 4TB Western Digital Blues. They have 4096 sector
| sizes.

1m is of course a multiple of 4k :)

| I used a 16G USB stick as a sacrificial lamb to experiment with dd.
| Interestingly, there is no difference in time between 1m, 1k, and 1g. How
| is that possible? Obviously this will not be an accurate comparison of the
| WD disks, but it was still a good practice exercise.
| 
| Also Paul, to clarify a point you made, did you mean forget the random data
| step, and just encrypt the disks with softraid0 crypto? I think I like that
| idea because this is actually a traditional pre-encryption step. I don't
| agree with it, but I respect the decision. For our purposes, encryption
| only helps if the disks are off the machine, and someone is trying to
| access them. This automatically implies that they were stolen. The chances
| of disk theft around here are slim to none. We have no reason to worry
| about forensics either - we're not storing nuclear secrets.

Well, you didn't mention the why: what are you trying to accomplish by
overwriting your 4TB disk with random data?  If it is to prevent
others from accessing the data after you dispose of the disk then you
should be aware of the caveat I mentioned.

I get rid of old computers by overwriting the disk(s) and installing
the latest snapshot.  That's why I do this .. but it's not clear why
you want to do it.

Cheers,

Paul

| Thanks for your time
| 
| 
| On Mon, Jun 1, 2020 at 7:28 AM Paul de Weerd  wrote:
| 
| > On Mon, Jun 01, 2020 at 06:58:01AM -0700, Justin Noor wrote:
| > | Hi Misc,
| > |
| > | Has anyone ever filled a 4TB disk with random data and/or zeros with
| > | OpenBSD?
| >
| > I do this before disposing of old disks.  Have written random data to
| > several sizes of disk, not sure if I ever wiped a 4TB disk.
| >
| > | How long did it take? What did you use (dd, openssl)? Can you share the
| > | command that you used?
| >
| > It takes quite some time, but OpenBSD (at least on modern hardware)
| > can generate random numbers faster than you can write them to spinning
| > disks (may be different with those fast nvme(4) disks).
| >
| > I simply used dd, with a large block size:
| >
| > dd if=/dev/random of=/dev/sdXc bs=1048576
| >
| > And then you wait.  The time it takes really depends on two factors:
| > the size of the disk and the speed at which you write (whatever the
| > bottleneck).  If you start, you can send dd the 'INFO' signal (`pkill
| > -INFO dd` (or press Ctrl-T if your shell is set up for it with `stty
| > status ^T`))  This will give you output a bit like:
| >
| > 30111+0 records in
| > 30111+0 records out
| > 31573671936 bytes transferred in 178.307 secs (177074202 bytes/sec)
| >
| > Now take the size of the disk in bytes, divide it by that last number
| > and subtract the second number.  This is a reasonable ball-park
| > indication of time remaining.
| >
| > Note that if you're doing this because you want to prevent others from
| > reading back even small parts of your data, you are better of never
| > writing your data in plain text (e.g. using softraid(4)'s CRYPTO
| > discipline), or (if it's too late for that), to physically destroy the
| > storage medium.  Due to smart disks remapping your data in case of
| > 'broken' sectors, some old data can never be properly overwritten.
| >
| > Cheers,
| >
| > Paul 'WEiRD' de Weerd
| >
| > --
| > >[<++>-]<+++.>+++[<-->-]<.>+++[<+
| > +++>-]<.>++[<>-]<+.--.[-]
| >  http://www.weirdnet.nl/
| >

-- 
>[<++>-]<+++.>+++[<-->-]<.>+++[<+
+++>-]<.>++[<>-]<+.--.[-]
 http://www.weirdnet.nl/ 



Re: Filling a 4TB Disk with Random Data

2020-06-04 Thread Justin Noor
Thank you, @misc.

Using dd with a large block size will likely be the course of action.

I really need to refresh my memory on this stuff. This is not something we
do, or need to do, every day.

Paul your example shows:

bs=1048576

How did you choose that number? Could you have gone even bigger? Obviously
it is a multiple of 512.

The disks in question are 4TB Western Digital Blues. They have 4096-byte
sectors.

I used a 16G USB stick as a sacrificial lamb to experiment with dd.
Interestingly, there is no difference in time between 1m, 1k, and 1g. How
is that possible? Obviously this will not be an accurate comparison of the
WD disks, but it was still a good practice exercise.

Also Paul, to clarify a point you made, did you mean forget the random data
step, and just encrypt the disks with softraid0 crypto? I think I like that
idea because this is actually a traditional pre-encryption step. I don't
agree with it, but I respect the decision. For our purposes, encryption
only helps if the disks are off the machine, and someone is trying to
access them. This automatically implies that they were stolen. The chances
of disk theft around here are slim to none. We have no reason to worry
about forensics either - we're not storing nuclear secrets.

Thanks for your time


On Mon, Jun 1, 2020 at 7:28 AM Paul de Weerd  wrote:

> On Mon, Jun 01, 2020 at 06:58:01AM -0700, Justin Noor wrote:
> | Hi Misc,
> |
> | Has anyone ever filled a 4TB disk with random data and/or zeros with
> | OpenBSD?
>
> I do this before disposing of old disks.  Have written random data to
> several sizes of disk, not sure if I ever wiped a 4TB disk.
>
> | How long did it take? What did you use (dd, openssl)? Can you share the
> | command that you used?
>
> It takes quite some time, but OpenBSD (at least on modern hardware)
> can generate random numbers faster than you can write them to spinning
> disks (may be different with those fast nvme(4) disks).
>
> I simply used dd, with a large block size:
>
> dd if=/dev/random of=/dev/sdXc bs=1048576
>
> And then you wait.  The time it takes really depends on two factors:
> the size of the disk and the speed at which you write (whatever the
> bottleneck).  If you start, you can send dd the 'INFO' signal (`pkill
> -INFO dd` (or press Ctrl-T if your shell is set up for it with `stty
> status ^T`))  This will give you output a bit like:
>
> 30111+0 records in
> 30111+0 records out
> 31573671936 bytes transferred in 178.307 secs (177074202 bytes/sec)
>
> Now take the size of the disk in bytes, divide it by that last number
> and subtract the second number.  This is a reasonable ball-park
> indication of time remaining.
>
> Note that if you're doing this because you want to prevent others from
> reading back even small parts of your data, you are better of never
> writing your data in plain text (e.g. using softraid(4)'s CRYPTO
> discipline), or (if it's too late for that), to physically destroy the
> storage medium.  Due to smart disks remapping your data in case of
> 'broken' sectors, some old data can never be properly overwritten.
>
> Cheers,
>
> Paul 'WEiRD' de Weerd
>
> --
> >[<++>-]<+++.>+++[<-->-]<.>+++[<+
> +++>-]<.>++[<>-]<+.--.[-]
>  http://www.weirdnet.nl/
>


Re: Filling a 4TB Disk with Random Data

2020-06-01 Thread Jordan Geoghegan




On 2020-06-01 06:58, Justin Noor wrote:

> Hi Misc,
>
> Has anyone ever filled a 4TB disk with random data and/or zeros with
> OpenBSD?
>
> How long did it take? What did you use (dd, openssl)? Can you share the
> command that you used?
>
> Thank you so much



I've used OpenBSD to overwrite up to 8TB disks. I use a large block size 
with 'dd' and make sure to use /dev/rsdX (the 'r' makes things much 
faster).




Re: Filling a 4TB Disk with Random Data

2020-06-01 Thread Diana Eichert
or, if a degausser isn't available, use 7.62 x 51

sorry, couldn't help myself

On Mon, Jun 1, 2020 at 11:41 AM Eric Furman  wrote:
>
> On Mon, Jun 1, 2020, at 10:28 AM, Paul de Weerd wrote:
> > storage medium.  Due to smart disks remapping your data in case of
> > 'broken' sectors, some old data can never be properly overwritten.
>
> This is why if you are serious you use a degausser.
>



Re: Filling a 4TB Disk with Random Data

2020-06-01 Thread Eric Furman
On Mon, Jun 1, 2020, at 10:28 AM, Paul de Weerd wrote:
> storage medium.  Due to smart disks remapping your data in case of
> 'broken' sectors, some old data can never be properly overwritten.

This is why if you are serious you use a degausser.



Re: Filling a 4TB Disk with Random Data

2020-06-01 Thread Daniel Jakots
On Mon, 1 Jun 2020 14:33:44 - (UTC), Christian Weisgerber
 wrote:

> Take care to pick the proper device corresponding to the drive you
> want to overwrite.

Don't make people miss a good opportunity to test their backups!



Re: Filling a 4TB Disk with Random Data

2020-06-01 Thread Christian Weisgerber
On 2020-06-01, Justin Noor  wrote:

> Has anyone ever filled a 4TB disk with random data and/or zeros with
> OpenBSD?

Yes.

> How long did it take?

I don't remember.  Hours.
At a plausible 100 MB/s write speed it will take 11 hours.
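That figure checks out with simple integer arithmetic (decimal "marketing" units assumed: 4 TB = 4*10^12 bytes):

```shell
secs=$(( 4000000000000 / 100000000 ))   # 4 TB at 100 MB/s
echo "$secs s = $(( secs / 3600 )) h $(( secs % 3600 / 60 )) min"
# 40000 s, i.e. about 11 hours
```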

> What did you use (dd, openssl)? Can you share the command that you used?

# dd if=/dev/random of=/dev/rsd1c bs=64k  # random data
# dd if=/dev/zero of=/dev/rsd1c bs=64k  # zeros

Take care to pick the proper device corresponding to the drive you
want to overwrite.

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Re: Filling a 4TB Disk with Random Data

2020-06-01 Thread Paul de Weerd
On Mon, Jun 01, 2020 at 06:58:01AM -0700, Justin Noor wrote:
| Hi Misc,
| 
| Has anyone ever filled a 4TB disk with random data and/or zeros with
| OpenBSD?

I do this before disposing of old disks.  Have written random data to
several sizes of disk, not sure if I ever wiped a 4TB disk.

| How long did it take? What did you use (dd, openssl)? Can you share the
| command that you used?

It takes quite some time, but OpenBSD (at least on modern hardware)
can generate random numbers faster than you can write them to spinning
disks (may be different with those fast nvme(4) disks).

I simply used dd, with a large block size:

dd if=/dev/random of=/dev/sdXc bs=1048576

And then you wait.  The time it takes really depends on two factors:
the size of the disk and the speed at which you write (whatever the
bottleneck).  Once it's running, you can send dd the 'INFO' signal
(`pkill -INFO dd`, or press Ctrl-T if your terminal is set up for it
with `stty status ^T`).  This will give you output a bit like:

30111+0 records in
30111+0 records out
31573671936 bytes transferred in 178.307 secs (177074202 bytes/sec)

Now take the size of the disk in bytes, divide it by that last number
and subtract the second number.  This is a reasonable ball-park
indication of time remaining.
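Plugging the sample numbers into that rule of thumb, and assuming a nominal 4 TB disk of 4000787030016 bytes (a typical capacity for such drives):

```shell
disk_bytes=4000787030016   # assumed size of a nominal 4 TB drive
rate=177074202             # bytes/sec from the sample dd report
elapsed=178                # seconds elapsed so far (rounded)
remaining=$(( disk_bytes / rate - elapsed ))
echo "about $remaining seconds (~$(( remaining / 3600 )) hours) left"
```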

Note that if you're doing this because you want to prevent others from
reading back even small parts of your data, you are better off never
writing your data in plain text (e.g. using softraid(4)'s CRYPTO
discipline), or (if it's too late for that), to physically destroy the
storage medium.  Due to smart disks remapping your data in case of
'broken' sectors, some old data can never be properly overwritten.

Cheers,

Paul 'WEiRD' de Weerd

-- 
>[<++>-]<+++.>+++[<-->-]<.>+++[<+
+++>-]<.>++[<>-]<+.--.[-]
 http://www.weirdnet.nl/ 



Re: Filling a 4TB Disk with Random Data

2020-06-01 Thread Janne Johansson
Den mån 1 juni 2020 kl 16:01 skrev Justin Noor :

> Hi Misc,
> Has anyone ever filled a 4TB disk with random data and/or zeros with
> OpenBSD?
> How long did it take? What did you use (dd, openssl)? Can you share the
> command that you used?
>

My /dev/random on decent x86_64 gives out more or less the same amount of
data (around 200MB/s) as spinning drives will accept, so you might as well
just dd random data to the raw device. At this speed, you are looking at ~5
hours of fun.

https://www.wolframalpha.com/input/?i=4+terabyte+at+200MB%2Fs

-- 
May the most significant bit of your life be positive.


Re: Filling a 4TB Disk with Random Data

2020-06-01 Thread STeve Andre'
The write speed depends on the rotational speed of the disk and the
I/O bandwidth of the system.

You want to do

   dd if=/dev/zero of=/dev/rsd1c bs=1m

Note that this writes to the sd1 disk!  Carefully,
carefully look at your disks and write to the correct
one.  Writing to sd0 is likely to be disastrous.

Do this on a test system.  dd is as efficient as it is ruthless.  You can 
irrevocably damage a system with it.

---STeve Andre'



On Jun 1, 2020, 09:58, at 09:58, Justin Noor  wrote:
>Hi Misc,
>
>Has anyone ever filled a 4TB disk with random data and/or zeros with
>OpenBSD?
>
>How long did it take? What did you use (dd, openssl)? Can you share the
>command that you used?
>
>Thank you so much


Filling a 4TB Disk with Random Data

2020-06-01 Thread Justin Noor
Hi Misc,

Has anyone ever filled a 4TB disk with random data and/or zeros with
OpenBSD?

How long did it take? What did you use (dd, openssl)? Can you share the
command that you used?

Thank you so much