> To the best of my knowledge, Peter Guttman(sp?) has demonstrated for years
> now that there is no form of over-writing which makes any substantial
> difference to the ability to recover previously written data from a
> hard disk.
Gutmann's paper, "Secure Deletion of Data from Magnetic and Solid-State
Memory" (available at
http://www-tac.cisco.com/Support_Library/field_alerts/fn13070.html ) has
become something of a classic, and for good reason: It's absolutely
fascinating reading, describing in detail what most of us suspected and some
of us never imagined.
The paper, however, is five years old, and quite frankly needs to be
understood in that context.
Now, I'm *not* saying that Gutmann's points are flawed, just that the
mechanisms used to recover data from 300 Megabyte drives probably don't
scale to 80 Gigabyte disks using GMR (Giant Magnetoresistance) technology.
The extra surface area and analog bit density once used to divine past
generations of data have almost certainly been consumed by the 16x explosion
in data density since Gutmann's paper was written.
My *guess*, however, is that as drive densities have increased, the
requirement for ever more advanced error correction (to increase yields on
platters with minuscule deformities) has led to greater redundancy, with
more platter space devoted to entirely redundant data--and redundant data
is, by definition, recoverable from more than one place.
Furthermore, it's impossible that drive scanning technology hasn't advanced
in sync with drive capacity--the bottom line is, somebody needed to design
the sensors and work out the kinks for each generation of disk. Companies
like OnTrack (who, incidentally, worked very well for me) have made rather
successful businesses of proving that what's gone is not necessarily gone.
So essentially, Gutmann was right, and Gutmann is probably still right. But
the technologies used to recover deleted data have probably advanced just as
much as the technologies used to store the data in the first place.
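To make the mechanism under discussion concrete, here is a minimal sketch of
the kind of multi-pass overwrite Gutmann's paper analyzes. The
`overwrite_file` helper is hypothetical, and--per the whole argument
above--nothing about it should be assumed to defeat forensic recovery:
spare-sector remapping, journaling, and caching all keep copies of the data
out of its reach.

```python
import os

def overwrite_file(path, passes=3):
    """Overwrite a file's contents in place, then unlink it.

    Illustrative only: this rewrites the logical blocks the filesystem
    currently maps to the file.  It says nothing about remapped sectors,
    journal entries, or earlier physical copies of the data.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # one pass of random data
            f.flush()
            os.fsync(f.fileno())        # force the pass out to the device
    os.remove(path)
```

Even taken at face value, note how much machinery sits between this loop and
the magnetic surface it is supposedly scrubbing.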
> My understanding of current "high security" standards wrt the re-use of
> disks which previously contained classified materials is that they only be
> re-used in similarly classified systems, or, are destroyed beyond any form
> of molecular reconstruction (e.g. melted).
Exactly. It is the job of the medium to store information. It is the job
of the incinerator to delete it. Violation of the barriers between
establishing functionality and enforcing security leads to systems that
allow too much access to an unstable service.
> So to suggest that your perceived EFS flaw can be resolved by over-writing
> is naive. The only solution is to encrypt in memory or use some removable
> partition as the temp space.
Russ, you're absolutely correct about the need for memory encryption, though
removable media has equivalent risks (with the possible exception of being
more conveniently incinerated). The correct behavior is for a disk to never
receive anything that gives it plaintext-equivalent access to any of the
actual information contained within the encrypted data. That means no
decryption keys ever get written, no passwords get saved, and most
importantly, *no plaintext data gets stored, not even "temporarily"*. The
moment an "Encrypted File System" writes a plaintext version of the data to
the disk, all is lost--whether or not an apparently laughable delete
operation (really, "dab white-out over the page number in the index at the
back of the book") is actually carried out.
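The rule above--only ciphertext ever reaches the disk, never keys and never
plaintext--can be sketched in a few lines. The `encrypt_in_memory` helper is
hypothetical, and a one-time pad is used purely because it fits in stdlib
Python; a real system would use an authenticated cipher and proper key
management:

```python
import os

def encrypt_in_memory(plaintext: bytes):
    """Encrypt plaintext entirely in RAM with a one-time pad.

    The key is generated and held only in memory; the caller hands
    *only* the ciphertext to the disk.
    """
    key = os.urandom(len(plaintext))    # never written to any medium
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Recover the plaintext, again entirely in memory."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

With this discipline, what the disk stores is worthless without key material
that never touched it--which is exactly the property an encrypted file
system is supposed to provide.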
Let's not forget--an encrypted file system exists for *no other reason* but
to resist attack. Encryption does not add speed. It does not add
stability. It does not add anything *but* resistance against an attacker
who lacks the key material. If Rickard's analysis is correct--something
that should be independently verified--EFS offers attackers a rich array of
simple attacks that do not require discovery of the key material. You can
draw your own conclusions from that.
Cisco Systems, Inc.