Re: [gentoo-user] Securely deletion of an HDD

2015-07-15 Thread Rich Freeman
On Tue, Jul 14, 2015 at 6:21 PM, R0b0t1 r03...@gmail.com wrote:

 On Sun, Jul 12, 2015 at 7:18 PM, Rich Freeman ri...@gentoo.org wrote:
 I think that assumes that the two get averaged together in some way
 and cannot be separated.  If you could determine the orientation of
 individual magnetic domains it is possible that you might be able to
 determine which ones are which.  For example, if in a given location
 you found 90% of the grains had one orientation, and 10% had another,
 you might be able to infer that the 10% was the previous value of that
 location.

 Every bit on the disk will have this ghost inverse behind it. If you
 flip bits at random - what overwriting the drive with random data
 effectively does - then it's impossible to tell which ones were
 flipped recently and which ones were flipped before the last write.

If a disk head moves across a track and lays down a pattern of
magnetic fields, I imagine that the intensity of those fields will
vary with distance from the head.  If the head makes a second pass
writing a different pattern of magnetic fields following a path not
identical to the first, I imagine that those field intensities will
also vary with distance from the head, but particles on one side of
the track will probably retain more of the former pattern and
particles on the other side of the track would tend to retain more of
the second pattern.

I'm just not seeing anything that suggests that such an attack is
physically impossible.  It might be impractical today.  It might be
impractical forever.  However, impossible is a very high bar to clear.

Whether somebody with a technical capability so advanced that its very
existence is debatable today fits within your threat model is a different story.
Clearly these techniques are not available commercially/etc.  If
you're afraid of the NSA and you have unencrypted data on a disk in
the first place, they've probably already defeated your security in 100
different ways.  So, I'll agree there is a practical argument
to be made.

However, I can't really agree that something is physically impossible
unless you can prove it from first principles.

--
Rich



Re: [gentoo-user] Securely deletion of an HDD

2015-07-14 Thread R0b0t1
On Sun, Jul 12, 2015 at 7:18 PM, Rich Freeman ri...@gentoo.org wrote:
I think that assumes that the two get averaged together in some way
and cannot be separated.  If you could determine the orientation of
individual magnetic domains it is possible that you might be able to
determine which ones are which.  For example, if in a given location
you found 90% of the grains had one orientation, and 10% had another,
you might be able to infer that the 10% was the previous value of that
location.

Every bit on the disk will have this ghost inverse behind it. If you
flip bits at random - what overwriting the drive with random data
effectively does - then it's impossible to tell which ones were
flipped recently and which ones were flipped before the last write.

That probably isn't practical with current technology, but I see no
reason that it should be impossible.

Magnetic force microscopy has a resolution fine enough to read any
disk that can be created - it's just really expensive.


On Sun, Jul 12, 2015 at 8:50 PM, Thomas Mueller
mueller6...@bellsouth.net wrote:
All that has been said on this thread supposes that the hard drive is still 
readable and writable.

On Sun, Jul 12, 2015 at 6:22 PM, R0b0t1 r03...@gmail.com wrote:
If you need to destroy a platter drive take it apart and sand the platters 
(probably the easiest). If it's solid state heat the drive over 150C-250C for 
an extended period of time or mechanically destroy the chips.



Re: [gentoo-user] Securely deletion of an HDD

2015-07-13 Thread Marc Joliet
On Sun, 12 Jul 2015 22:43:44 +0200, Volker Armin Hemmann volkerar...@googlemail.com wrote:

 http://www.howtogeek.com/115573/htg-explains-why-you-only-have-to-wipe-a-disk-once-to-erase-it/

Yeah, that was linked from the Arch wiki I looked at.

 http://www.vidarholen.net/~vidar/overwriting_hard_drive_data.pdf

FWIW, Peter Gutmann doesn't have much good to say about that article
(specifically, he wrote about the related blog article at [0] in his Further
Epilogue at [1]). Regardless, the summary still seems to be: with
modern high-density drives, there is *no* wiggle room for remnants of
data to stick around after overwriting it, outside of some potential future
method that is probably a) far enough away into the future that the data on the
drive is uninteresting by then (if it ever was interesting to begin with!) and
b) prohibitively expensive (at least at the start), which pushes the earliest
time someone might ever look at my old hard drives even further back.  This
assumes that anybody is interested in developing something like that, if it's
even possible.

I can't help but wonder what the situation is like with tape, which is still
commonly used for backups. ISTR that huge densities are also the norm there, but
that's about all I know.

[0]
https://web.archive.org/web/20090722235051/http://sansforensics.wordpress.com/2009/01/15/overwriting-hard-drive-data
[1] https://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html
-- 
Marc Joliet
--
People who think they know everything really annoy those of us who know we
don't - Bjarne Stroustrup




Re: [gentoo-user] Securely deletion of an HDD

2015-07-13 Thread Rich Freeman
On Mon, Jul 13, 2015 at 4:05 AM, Volker Armin Hemmann
volkerar...@googlemail.com wrote:
 On 12.07.2015 23:30, Rich Freeman wrote:
 Impossible is a pretty bold claim.  You need proof, not evidence that
 a particular recovery technique didn't work.  I can demonstrate very
 clearly that I'm unable to crack DES, but that doesn't make it secure.


 they gave you the proof. Others have found the same. If you are unable
 to understand what they wrote, just say so.


By all means point out more specifically where you think they made a
theoretical argument.  I see lots of talk of measurements and lots of
empirical-looking numbers.  Theoretical arguments tend to involve lots
of h-bars over pis and such.

As far as others finding the same goes, that also tends to
characterize this as an experimental/practical argument.  You
generally don't see publications reproducing theoretical arguments,
since about all you can do is either point out an error in the math or
extend it.

Such experiments are useful, but they're not airtight.  It is the
difference between AES and a one-time pad.  The former has no known
method of circumvention and seems really hard to attack; the latter is
theoretically impossible to attack if correctly implemented, but
probably impossible to truly implement correctly.  I don't worry about
using AES, but I'm not under any illusions that it is completely
unbreakable.

-- 
Rich



Re: [gentoo-user] Securely deletion of an HDD

2015-07-13 Thread Joerg Schilling
Marc Joliet mar...@gmx.de wrote:

 Hi,

 I have two failed drives that I want to give away for recycling purposes, but
 want to be sure to properly clear them first.  They used to be part of a btrfs

The test patterns used on Solaris and marked with federal requirements are:

int purge_patterns[] = {        /* patterns to be written */
        0xaaaaaaaa,             /* 10101010... */
        0x55555555,             /* 01010101...  == ... */
        0xaaaaaaaa,             /* 10101010... */
        0xaaaaaaaa,             /* 10101010... */
};

Jörg

-- 
 EMail: jo...@schily.net (home)  Jörg Schilling  D-13353 Berlin
        joerg.schill...@fokus.fraunhofer.de (work)  Blog: http://schily.blogspot.com/
 URL:   http://cdrecord.org/private/ http://sourceforge.net/projects/schilytools/files/



Re: [gentoo-user] Securely deletion of an HDD

2015-07-13 Thread Marc Joliet
On Mon, 13 Jul 2015 01:50:57 +, Thomas Mueller mueller6...@bellsouth.net wrote:

 All that has been said on this thread supposes that the hard drive is still 
 readable and writable.
 
 But the original post stated this was a failed drive.
 
 Then you might not be able to dd if=/dev/zero of=/dev/sdx .. or whatever else.
 
 You would be stopped by bad sectors.

The two drives I'm referring to here failed in the sense that they have no more
spare sectors available for reallocation.  Perhaps that will make it difficult to
wipe them properly, but they were fine mechanically when I removed them.

-- 
Marc Joliet
--
People who think they know everything really annoy those of us who know we
don't - Bjarne Stroustrup




Re: [gentoo-user] Securely deletion of an HDD

2015-07-13 Thread Marc Joliet
On Sun, 12 Jul 2015 18:32:39 +0200, Volker Armin Hemmann volkerar...@googlemail.com wrote:

 On 12.07.2015 14:35, Marc Joliet wrote:
  Hi,
 
  I have two failed drives that I want to give away for recycling purposes, but
  want to be sure to properly clear them first.  They used to be part of a btrfs
  RAID10 array, but needed to be replaced (with btrfs replace).  (In the
  meantime I converted the array to RAID1 with only two drives.)
 
  My question is how precisely the disks should be cleared.  From various 
  sources
  I know that overwriting them with random data a few times is enough to 
  render
  old versions of data unreadable.  I'm guessing 3 times ought to be enough, 
  but
  maybe even that small amount is overly paranoid these days?
 
  As to the actual command, I would suspect something like dd if=/dev/urandom
  of=/dev/sdx bs=4096 should suffice, and according to
  https://wiki.archlinux.org/index.php/Random_number_generation#.2Fdev.2Furandom,
  /dev/urandom ought to be random enough for this task.  Or are cat/cp that 
  much
  faster?
 
  Any thoughts?
 
  Greetings
 
 actually 1 time is enough. With zeros. Or ones. Does not matter at all.

If you look at my initial response to Rich, I already concluded that one time
is enough, although I'm going to stick with whatever random data shred(1)
produces.

-- 
Marc Joliet
--
People who think they know everything really annoy those of us who know we
don't - Bjarne Stroustrup




Re: [gentoo-user] Securely deletion of an HDD

2015-07-13 Thread Volker Armin Hemmann
On 12.07.2015 23:30, Rich Freeman wrote:
 On Sun, Jul 12, 2015 at 5:20 PM, Volker Armin Hemmann
 volkerar...@googlemail.com wrote:
 read the second link I provided.

 I did.  It contains no theoretical arguments against the possibility

yes it does.

 of data recovery.  Theoretical limits would be ones like the
 uncertainty principle.  If a given amount of matter could only store a
 certain number of bits, and that number of bits is already being
 stored, then it would be clearly impossible to recover more.

 And then google for yourself.
 For what?

 Back then it was very hard. Today it is impossible. You toss a coin for
 every bit. And that is your chance to extract anything.

 Impossible is a pretty bold claim.  You need proof, not evidence that
 a particular recovery technique didn't work.  I can demonstrate very
 clearly that I'm unable to crack DES, but that doesn't make it secure.


they gave you the proof. Others have found the same. If you are unable
to understand what they wrote, just say so.



Re: [gentoo-user] Securely deletion of an HDD

2015-07-13 Thread Volker Armin Hemmann
On 13.07.2015 03:50, Thomas Mueller wrote:
 All that has been said on this thread supposes that the hard drive is still 
 readable and writable.

 But the original post stated this was a failed drive.

 Then you might not be able to dd if=/dev/zero of=/dev/sdx .. or whatever else.

 You would be stopped by bad sectors.

 Or a hard drive might not be accessible at all through the computer interface.

 I heard something that sounded like a modem dialing, but had no such modem.  

 Going around with my eyes and ears led me to determine that it was a hard 
 drive whining in an external eSATA enclosure, no longer recognized or 
 accessible from the computer.

 That was a Western Digital Green 3 TB hard drive that replaced, under 
 warranty, a WD Green 3 TB hard drive that developed bad sectors.

 Fortunately I had no confidential data on that hard drive.

 So everything in this thread says nothing about the case where the hard drive 
 failed due to a mechanical problem.

 Then the data could not be overwritten by ordinary means, but could still be 
 read by techniques such as those used by Drive Savers.

in case of mechanical failure: open case, rub platters on the carpet. Done.



Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Rich Freeman
On Sun, Jul 12, 2015 at 10:39 AM, Marc Joliet mar...@gmx.de wrote:

 On Sun, 12 Jul 2015 08:48:48 -0400, Rich Freeman ri...@gentoo.org wrote:

 If it weren't painful to set up and complicated for rescue attempts,
 I'd just use full-disk encryption with a strong key on a flash drive
 or similar.  Then the disk is as good as wiped if separated from the
 key already.

 Plus you don't have to worry about reallocated sectors (which might only
 contain single bit errors). Currently I'm planning on waiting for btrfs to
 support it. Chris Mason recently mentioned that it's definitely something they
 want to look at (https://youtu.be/W3QRWUfBua8?t=631), and it's not something
 that is so important to me personally that I have to have it right this 
 instant.


While some kind of native support would be nice, and likely more
efficient in some ways, you could just layer btrfs on top of an
encrypted loopback device.  The problem is you'll need various scripts
in your initramfs (or root partition if you don't bother to encrypt
it) to actually set that up.  In the event of a recovery situation
you'll need to do all that setting up before you can run something
like fsck on the disks and so on.  In the event of a power loss I'd
have to think through the failure modes, but I think you'd be fine as
long as everything respected barriers, and btrfs/zfs already do
checksumming.

The typical approach is to derive the key from a keyed-in password using
many rounds of key stretching.  That is a pretty good approach but obviously not
nearly as secure as just using a completely random key with the full
amount of entropy.  A hand-keyed password with more entropy than the
cipher uses would also be fine, but that would be a very long password
(we're not just talking battery horse staple here).  I guess you could
just use a USB drive as your boot partition with the keys on it and
keep a few copies of it, and with a decent grub setup on it that would
also work for rescue purposes.
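
A minimal sketch of that kind of setup (assuming dm-crypt/LUKS via cryptsetup
rather than a classic loopback device, with /dev/sdb as the data disk and the
keyfile kept on a USB stick mounted at /mnt/usbkey; all of those names are
just placeholders):

  # full-entropy keyfile on the removable stick
  dd if=/dev/urandom of=/mnt/usbkey/disk.key bs=512 count=1

  # set up and open the encrypted container using only that keyfile
  cryptsetup luksFormat --key-file /mnt/usbkey/disk.key /dev/sdb
  cryptsetup open --key-file /mnt/usbkey/disk.key /dev/sdb cryptdisk

  # then put btrfs (or anything else) on the mapped device
  mkfs.btrfs /dev/mapper/cryptdisk

Destroy the keyfile and the disk is effectively wiped; for rescue, any live
system with cryptsetup plus a copy of the keyfile can open it the same way.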

-- 
Rich



Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Neil Bothwick
On Sun, 12 Jul 2015 15:21:41 -0400, Rich Freeman wrote:

 While some kind of native support would be nice, and likely more
 efficient in some ways, you could just layer btrfs on top of an
 encrypted loopback device. 

The problem with that approach, if you use RAID, is that all writes must
be encrypted multiple times, once for each disk, unless you use MD RAID
between the disk and the encryption layer.

 The problem is you'll need various scripts
 in your initramfs (or root partition if you don't bother to encrypt
 it) to actually set that up.

With a single device, Dracut handles all this automatically. I have such
a setup on my laptop and used to use custom scripts to call cryptsetup at
boot time, until I got fed up with you and Canek banging on about Dracut
and decided to give it another go. With the right boot options, it just
works.


-- 
Neil Bothwick

Any sufficiently advanced bug is indistinguishable from a feature.




Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Volker Armin Hemmann
On 12.07.2015 21:14, Rich Freeman wrote:
 On Sun, Jul 12, 2015 at 12:32 PM, Volker Armin Hemmann
 volkerar...@googlemail.com wrote:
 actually 1 time is enough. With zeros. Or ones. Does not matter at all.

 That depends on your threat model.

nope. It doesn't.

You believe in some urban legend you never dared to question.


 If you're concerned about somebody reading the contents of the drive
 using the standard ATA commands, then once with zeros is just fine.
 Secure erase is probably easier/faster.

 If you're concerned about somebody removing the disks from the drive
 and reading them with specialized equipment then you really want
 multiple rounds of complete overwrites with random data.  Even then
 you run the risk of relocation blocks and all that stuff, so the
 secure erase at the end is still a wise move but it may or may not
 completely do the job.

even then one time is enough. Links are below.



 If you're concerned about somebody leaving the disks in the drive but
 having access to directly manipulate the drive heads to possibly
 access data not accessible using the standard ATA commands then one
 pass is probably good enough, but I'd still use random data instead of
 zeros.  The reason is that a clever firmware (especially on an SSD)
 might not actually record zeros to the regular disk space, but instead
 just mark the block range as containing zeros, leaving the actual data
 intact.  For random data the drive has to actually store the contents
 as it cannot be represented in any more concise way.

 If I'm not in a rush I prefer to just do the multiple passes.  Why
 take a chance?

if you do it, it is your problem, but recommending something stupid is
something else altogether.


 And of course full-disk encryption is the solution to all of the
 above, as it defeats any kind of attack at the level of the drive and
 is proactive in nature.


cute.

Unlike you, I read some stuff before posting. This is OLD NEWS:

http://www.howtogeek.com/115573/htg-explains-why-you-only-have-to-wipe-a-disk-once-to-erase-it/

http://www.vidarholen.net/~vidar/overwriting_hard_drive_data.pdf

to quote:


Resultantly, if there is less than a 1% chance of determining each
character to be
recovered correctly, the chance of a complete 5-character word being
recovered drops
exponentially to 8.463E-11 (or less on a used drive and who uses a new
raw drive
format). This results in a probability of less than 1 chance in 10Exp50
of recovering
any useful data. So close to zero for all intents and definitely not
within the realm of
use for forensic presentation to a court.


10^50. Think about that for a moment. And that is not 'all the data' but
'any useful data'.



Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread R0b0t1
@topic: I would strongly suggest using a hardware key that also utilizes a
passphrase. To delete, remove the key and/or don't tell anyone the
passphrase. If you need to destroy a platter drive take it apart and sand
the platters (probably the easiest). If it's solid state heat the drive
over 150C-250C for an extended period of time or mechanically destroy the
chips.


It contains no theoretical arguments against the possibility of data
recovery.

The superparamagnetic limit sets the upper bound for storage density. It is
impossible to store information at a scale finer than a single grain of the
medium, because the grain acts as if its magnetic moment is the sum of the
moments of all of the atoms in it. Below a certain grain size, the polarity
can flip at random, depending on the temperature. For ~2005 drives that was about 1Tbit/in^2
with ~850Gbit/in^2 used. Newer drives continue to have higher numbers but
unless the efficiency drops there is not enough room to shadow all the data
(you will need to calculate or find these numbers for each drive you are
interested in). At best you could hope to recover some portion of it with
magnetic force microscopy, which you can/should assume will read back at
the maximum density available on the medium.

But, simpler: if you combine a random stream of data with what is on the
drive, the result looks just like random data. You need only overwrite the
drive once.


Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Rich Freeman
On Sun, Jul 12, 2015 at 4:43 PM, Volker Armin Hemmann
volkerar...@googlemail.com wrote:

 Unlike you, I read some stuff before posting. This is OLD NEWS:

No need to be rude.


 http://www.howtogeek.com/115573/htg-explains-why-you-only-have-to-wipe-a-disk-once-to-erase-it/

 http://www.vidarholen.net/~vidar/overwriting_hard_drive_data.pdf

 to quote:

 
 Resultantly, if there is less than a 1% chance of determining each
 character to be
 recovered correctly, the chance of a complete 5-character word being
 recovered drops
 exponentially to 8.463E-11 (or less on a used drive and who uses a new
 raw drive
 format). This results in a probability of less than 1 chance in 10Exp50
 of recovering
 any useful data. So close to zero for all intents and definitely not
 within the realm of
 use for forensic presentation to a court.
 

 10^50. Think about that for a moment. And that is not 'all the data' but
 'any useful data'.


This really looks like a pragmatic argument, and not a theoretical
one.  I see no arguments based on hard laws of physics.  This argument
basically says that because this lab couldn't read the data with their
equipment/methods, it is impossible for anybody to do it at any time
in the future using any equipment.

I'd say Schneier's Law applies.

-- 
Rich



Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Rich Freeman
On Sun, Jul 12, 2015 at 12:32 PM, Volker Armin Hemmann
volkerar...@googlemail.com wrote:

 actually 1 time is enough. With zeros. Or ones. Does not matter at all.


That depends on your threat model.

If you're concerned about somebody reading the contents of the drive
using the standard ATA commands, then once with zeros is just fine.
Secure erase is probably easier/faster.

If you're concerned about somebody removing the disks from the drive
and reading them with specialized equipment then you really want
multiple rounds of complete overwrites with random data.  Even then
you run the risk of relocation blocks and all that stuff, so the
secure erase at the end is still a wise move but it may or may not
completely do the job.

If you're concerned about somebody leaving the disks in the drive but
having access to directly manipulate the drive heads to possibly
access data not accessible using the standard ATA commands then one
pass is probably good enough, but I'd still use random data instead of
zeros.  The reason is that a clever firmware (especially on an SSD)
might not actually record zeros to the regular disk space, but instead
just mark the block range as containing zeros, leaving the actual data
intact.  For random data the drive has to actually store the contents
as it cannot be represented in any more concise way.
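
(As a quick aside, the incompressibility point is easy to see for yourself;
this is just an illustration on ordinary files, nothing drive-specific:

  head -c 1M /dev/zero    > zeros.bin
  head -c 1M /dev/urandom > random.bin
  gzip -k zeros.bin random.bin
  ls -l zeros.bin.gz random.bin.gz
  # the all-zeros file shrinks to about a kilobyte; the random one does not
  # shrink at all and even grows slightly from the gzip overhead

A drive's firmware faces the same situation: zeros can be summarized, random
data has to be physically written out.)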

If I'm not in a rush I prefer to just do the multiple passes.  Why
take a chance?

And of course full-disk encryption is the solution to all of the
above, as it defeats any kind of attack at the level of the drive and
is proactive in nature.

-- 
Rich



Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Volker Armin Hemmann
On 12.07.2015 23:10, Rich Freeman wrote:
 On Sun, Jul 12, 2015 at 4:43 PM, Volker Armin Hemmann
 volkerar...@googlemail.com wrote:
 Unlike you, I read some stuff before posting. This is OLD NEWS:
 No need to be rude.

 http://www.howtogeek.com/115573/htg-explains-why-you-only-have-to-wipe-a-disk-once-to-erase-it/

 http://www.vidarholen.net/~vidar/overwriting_hard_drive_data.pdf

 to quote:

 
 Resultantly, if there is less than a 1% chance of determining each
 character to be
 recovered correctly, the chance of a complete 5-character word being
 recovered drops
 exponentially to 8.463E-11 (or less on a used drive and who uses a new
 raw drive
 format). This results in a probability of less than 1 chance in 10Exp50
 of recovering
 any useful data. So close to zero for all intents and definitely not
 within the realm of
 use for forensic presentation to a court.
 

 10^50. Think about that for a moment. And that is not 'all the data' but
 'any useful data'.

 This really looks like a pragmatic argument, and not a theoretical
 one.  I see no arguments based on hard laws of physics.  This argument
 basically says that because this lab couldn't read the data with their
 equipment/methods, it is impossible for anybody to do it at any time
 in the future using any equipment.

 I'd say Schneier's Law applies.


read the second link I provided.

And then google for yourself.

All that 'overwrite many times' crap came from people who never read
Gutmann's original paper closely.

Back then it was very hard. Today it is impossible. You toss a coin for
every bit. And that is your chance to extract anything.

There are better chances to extract the key you used to encrypt your
data from RAM than to extract any useful data from a harddisk that was
overwritten once.



Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Rich Freeman
On Sun, Jul 12, 2015 at 5:20 PM, Volker Armin Hemmann
volkerar...@googlemail.com wrote:
 read the second link I provided.


I did.  It contains no theoretical arguments against the possibility
of data recovery.  Theoretical limits would be ones like the
uncertainty principle.  If a given amount of matter could only store a
certain number of bits, and that number of bits is already being
stored, then it would be clearly impossible to recover more.

 And then google for yourself.

For what?


 Back then it was very hard. Today it is impossible. You toss a coin for
 every bit. And that is your chance to extract anything.


Impossible is a pretty bold claim.  You need proof, not evidence that
a particular recovery technique didn't work.  I can demonstrate very
clearly that I'm unable to crack DES, but that doesn't make it secure.

-- 
Rich



Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Thomas Mueller
All that has been said on this thread supposes that the hard drive is still 
readable and writable.

But the original post stated this was a failed drive.

Then you might not be able to dd if=/dev/zero of=/dev/sdx .. or whatever else.

You would be stopped by bad sectors.

Or a hard drive might not be accessible at all through the computer interface.

I heard something that sounded like a modem dialing, but had no such modem.  

Going around with my eyes and ears led me to determine that it was a hard drive 
whining in an external eSATA enclosure, no longer recognized or accessible from 
the computer.

That was a Western Digital Green 3 TB hard drive that replaced, under warranty, 
a WD Green 3 TB hard drive that developed bad sectors.

Fortunately I had no confidential data on that hard drive.

So everything in this thread says nothing about the case where the hard drive 
failed due to a mechanical problem.

Then the data could not be overwritten by ordinary means, but could still be 
read by techniques such as those used by Drive Savers.

Tom




Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Rich Freeman
On Sun, Jul 12, 2015 at 6:22 PM, R0b0t1 r03...@gmail.com wrote:

 But, simpler: if you combine a random stream of data with what is on the
 drive, the result looks just like random data. You need only overwrite the
 drive once.

I think that assumes that the two get averaged together in some way
and cannot be separated.  If you could determine the orientation of
individual magnetic domains it is possible that you might be able to
determine which ones are which.  For example, if in a given location
you found 90% of the grains had one orientation, and 10% had another,
you might be able to infer that the 10% was the previous value of that
location.

That probably isn't practical with current technology, but I see no
reason that it should be impossible.

-- 
Rich



Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Rich Freeman
On Sun, Jul 12, 2015 at 8:35 AM, Marc Joliet mar...@gmx.de wrote:

 My question is how precisely the disks should be cleared.  From various 
 sources
 I know that overwriting them with random data a few times is enough to render
 old versions of data unreadable.  I'm guessing 3 times ought to be enough, but
 maybe even that small amount is overly paranoid these days?

 As to the actual command, I would suspect something like dd if=/dev/urandom
 of=/dev/sdx bs=4096 should suffice, and according to
 https://wiki.archlinux.org/index.php/Random_number_generation#.2Fdev.2Furandom,
 /dev/urandom ought to be random enough for this task.  Or are cat/cp that much
 faster?

I'd probably just use a tool like shred/wipe, but you have the general idea.

I'd probably follow it up with an ATA secure erase - for an SSD it is
probably the only way to be sure (well, to the extent that you trust
the firmware authors).

If it weren't painful to set up and complicated for rescue attempts,
I'd just use full-disk encryption with a strong key on a flash drive
or similar.  Then the disk is as good as wiped if separated from the
key already.

-- 
Rich



Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Mick
On Sunday 12 Jul 2015 13:35:25 Marc Joliet wrote:
 Hi,
 
 I have two failed drives that I want to give away for recycling purposes,
 but want to be sure to properly clear them first.  They used to be part of a
 btrfs RAID10 array, but needed to be replaced (with btrfs replace).  (In
 the meantime I converted the array to RAID1 with only two drives.)
 
 My question is how precisely the disks should be cleared.  From various
 sources I know that overwriting them with random data a few times is
 enough to render old versions of data unreadable.  I'm guessing 3 times
 ought to be enough, but maybe even that small amount is overly paranoid
 these days?
 
 As to the actual command, I would suspect something like dd
 if=/dev/urandom of=/dev/sdx bs=4096 should suffice, and according to
 https://wiki.archlinux.org/index.php/Random_number_generation#.2Fdev.2Furandom,
 /dev/urandom ought to be random enough for this task.  Or are cat/cp
 that much faster?
 
 Any thoughts?
 
 Greetings

I use urandom a few times (3 to 5), because random takes too long and I 
don't store state secrets on my disks.  Then I dd onto it a final round of 
/dev/zero.  Finally, I run hdparm to securely erase it for good measure.[1]  All 
of this could be overkill, but I do it out of habit these days.
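
Spelled out, that habit looks roughly like this (a sketch only; /dev/sdX
stands in for the target drive, and the secure-erase step follows the
procedure described in [1]):

  # a few passes of pseudo-random data, then a final pass of zeros
  for i in 1 2 3; do
      dd if=/dev/urandom of=/dev/sdX bs=4M
  done
  dd if=/dev/zero of=/dev/sdX bs=4M

  # ATA secure erase for good measure (the drive must not be "frozen")
  hdparm --user-master u --security-set-pass Eins /dev/sdX
  hdparm --user-master u --security-erase Eins /dev/sdX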

It is worth saying that I use haveged to increase entropy:

[I] sys-apps/haveged
     Available versions:  1.5 ~1.7a 1.7a-r1 ~1.9.1
     Installed versions:  1.7a-r1(12:46:23 04/21/14)
     Homepage:            http://www.issihosts.com/haveged/
     Description:         A simple entropy daemon using the HAVEGE algorithm

I should clarify that disks which contained financial data are dealt with using a 
high speed angle grinder, after I remove the outer casing of the drive and don 
a pair of goggles.[2]  *Only then* do I recycle the bits left.  ;-)


[1] https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase

[2] You can also use a hammer, a drill, or any similar implement which will 
completely break the physical disk platters to bits.

-- 
Regards,
Mick




Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Francisco Ares
On 12/07/2015 10:03, Mick michaelkintz...@gmail.com wrote:

 On Sunday 12 Jul 2015 13:35:25 Marc Joliet wrote:
  Hi,
 
  I have two failed drives that I want to give away for recycling purposes,
  but want to be sure to properly clear them first.  They used to be part of a
  btrfs RAID10 array, but needed to be replaced (with btrfs replace).  (In
  the meantime I converted the array to RAID1 with only two drives.)
 
  My question is how precisely the disks should be cleared.  From various
  sources I know that overwriting them with random data a few times is
  enough to render old versions of data unreadable.  I'm guessing 3 times
  ought to be enough, but maybe even that small amount is overly paranoid
  these days?
 
  As to the actual command, I would suspect something like dd
  if=/dev/urandom of=/dev/sdx bs=4096 should suffice, and according to
  https://wiki.archlinux.org/index.php/Random_number_generation#.2Fdev.2Furandom,
  /dev/urandom ought to be random enough for this task.  Or are cat/cp
  that much faster?
 
  Any thoughts?
 
  Greetings

 I use urandom a few times (3 to 5), because random takes too long and I
 don't store state secrets on my disks.  Then I dd onto it a final round of
 /dev/zero.  Finally, I run hdparm to securely erase it for good measure.[1]  All
 of this could be overkill, but I do it out of habit these days.

 It is worth saying that I use haveged to increase entropy:

 [I] sys-apps/haveged
      Available versions:  1.5 ~1.7a 1.7a-r1 ~1.9.1
      Installed versions:  1.7a-r1(12:46:23 04/21/14)
      Homepage:            http://www.issihosts.com/haveged/
      Description:         A simple entropy daemon using the HAVEGE algorithm

 I should clarify that disks which contained financial data are dealt with using a
 high speed angle grinder, after I remove the outer casing of the drive and don
 a pair of goggles.[2]  *Only then* do I recycle the bits left.  ;-)


 [1] https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase

 [2] You can also use a hammer, a drill, or any similar implement which will
 completely break the physical disk platters to bits.

 --
 Regards,
 Mick

Physical damage is what I guess to be the best choice for sensitive data.

I usually disassemble the HDD and rub a strong magnet over the disks'
surfaces.

Just my 2c.

Best regards,
Francisco


Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Marc Joliet
(Thanks to everyone for the replies so far!)

On Sun, 12 Jul 2015 08:48:48 -0400, Rich Freeman ri...@gentoo.org wrote:

 On Sun, Jul 12, 2015 at 8:35 AM, Marc Joliet mar...@gmx.de wrote:
 
  My question is how precisely the disks should be cleared.  From various 
  sources
  I know that overwriting them with random data a few times is enough to 
  render
  old versions of data unreadable.  I'm guessing 3 times ought to be enough, 
  but
  maybe even that small amount is overly paranoid these days?
 
  As to the actual command, I would suspect something like dd if=/dev/urandom
  of=/dev/sdx bs=4096 should suffice, and according to
  https://wiki.archlinux.org/index.php/Random_number_generation#.2Fdev.2Furandom,
  /dev/urandom ought to be random enough for this task.  Or are cat/cp that 
  much
  faster?
 
 I'd probably just use a tool like shred/wipe, but you have the general idea.

Ah, I overlooked that shred can operate on device files!  Thanks.  I especially
trust shred, since my main source was an article by its author
(https://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html).

With regards to the other replies: I think physical destruction is unnecessary,
and I don't really want to go through the trouble.  The key bit in the above
article is:

[...]. If these drives require sophisticated signal processing just to read
the most recently written data, reading overwritten layers is also
correspondingly more difficult. A good scrubbing with random data will do about
as well as can be expected.

And this was in 1996!  Drives have only gotten denser since then (e.g.,
perpendicular recording), and the epilogues (which reiterate the above) suggest
that nothing has changed to make old data more recoverable.  I noticed that the
info manual to shred even says:

On modern disks, a single pass should be adequate, and it will take one third
the time of the default three-pass approach.

The Arch wiki also arrives at the same conclusion (see
https://wiki.archlinux.org/index.php/Securely_wipe_disk#Residual_magnetism),
and provides some additional references.

 I'd probably follow it up with an ATA secure erase - for an SSD it is
 probably the only way to be sure (well, to the extent that you trust
 the firmware authors).

Yeah, that sounds like a good idea.  In the case of HDDs, even if I can't trust
the firmware, I've already wiped what I can.  With regards to SSDs, I've been
meaning to read http://www.cypherpunks.to/~peter/usenix01.pdf.

So my intermediate summary is:  I'll probably use shred with one pass, followed
by ATA (Enhanced) Secure Erase to erase the reallocated sectors (though I'll
have to fiddle with my BIOS to do that). I'll be sure to read
https://ata.wiki.kernel.org/index.php/ATA_Secure_Erase first.
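
Concretely, I expect that to boil down to something like this (a sketch,
with /dev/sdX as a placeholder and the hdparm steps taken from the kernel.org
wiki page above; the drive must support Enhanced Secure Erase and must not be
"frozen"):

  # one pass of shred's pseudo-random data over the whole device
  shred -v -n 1 /dev/sdX

  # check security support / frozen state, then do the enhanced erase
  hdparm -I /dev/sdX
  hdparm --user-master u --security-set-pass Eins /dev/sdX
  hdparm --user-master u --security-erase-enhanced Eins /dev/sdX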

 If it weren't painful to set up and complicated for rescue attempts,
 I'd just use full-disk encryption with a strong key on a flash drive
 or similar.  Then the disk is as good as wiped if separated from the
 key already.

Plus you don't have to worry about reallocated sectors (which might only
contain single bit errors). Currently I'm planning on waiting for btrfs to
support it. Chris Mason recently mentioned that it's definitely something they
want to look at (https://youtu.be/W3QRWUfBua8?t=631), and it's not something
that is so important to me personally that I have to have it right this instant.

-- 
Marc Joliet
--
People who think they know everything really annoy those of us who know we
don't - Bjarne Stroustrup




Re: [gentoo-user] Securely deletion of an HDD

2015-07-12 Thread Volker Armin Hemmann
On 12.07.2015 14:35, Marc Joliet wrote:
 Hi,

 I have two failed drives that I want to give away for recycling purposes, but
 want to be sure to properly clear them first.  They used to be part of a btrfs
 RAID10 array, but needed to be replaced (with btrfs replace).  (In the
 meantime I converted the array to RAID1 with only two drives.)

 My question is how precisely the disks should be cleared.  From various 
 sources
 I know that overwriting them with random data a few times is enough to render
 old versions of data unreadable.  I'm guessing 3 times ought to be enough, but
 maybe even that small amount is overly paranoid these days?

 As to the actual command, I would suspect something like dd if=/dev/urandom
 of=/dev/sdx bs=4096 should suffice, and according to
 https://wiki.archlinux.org/index.php/Random_number_generation#.2Fdev.2Furandom,
 /dev/urandom ought to be random enough for this task.  Or are cat/cp that much
 faster?

 Any thoughts?

 Greetings

actually 1 time is enough. With zeros. Or ones. Does not matter at all.