Sandy Harris <sandyinch...@gmail.com> writes:

>Yes, but that paper is over ten years old. In the meanwhile, disk designs and
>perhaps encoding schemes have changed, journaling file systems have become
>much more common and, for all I know the attack technology may have changed
>too.
It's nearly fifteen years old (it was written in 1995, when the very first PRML drives were just starting to appear; there's a reference in there to a Quantum whitepaper published the same year) and refers to technology from the early 1990s (and leftover stuff from the late 1980s, which was still around at the time). I've had an epilogue attached to the paper for, oh, at least ten of those fifteen years, saying:

In the time since this paper was published, some people have treated the 35-pass overwrite technique described in it more as a kind of voodoo incantation to banish evil spirits than the result of a technical analysis of drive encoding techniques. As a result, they advocate applying the voodoo to PRML and EPRML drives even though it will have no more effect than a simple scrubbing with random data.

In fact, performing the full 35-pass overwrite is pointless for any drive, since it targets a blend of scenarios involving all types of (normally-used) encoding technology, which covers everything back to 30+-year-old MFM methods (if you don't understand that statement, re-read the paper). If you're using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes. For any modern PRML/EPRML drive, a few passes of random scrubbing is the best you can do. As the paper says, "A good scrubbing with random data will do about as well as can be expected". This was true in 1996, and is still true now.

Looking at this from the other point of view, with the ever-increasing data density on disk platters and a corresponding reduction in feature size and use of exotic techniques to record data on the medium, it's unlikely that anything can be recovered from any recent drive except perhaps a single level via basic error-cancelling techniques.
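As a concrete illustration of what "a good scrubbing with random data" amounts to in practice, here is a minimal Python sketch (the `scrub` function and its parameters are mine, not from the paper) that overwrites a file in place with a few passes of random data. Note that it only reaches the file's currently-allocated blocks: journal entries, remapped sectors, and flash wear levelling keep copies outside its reach, which is exactly the sort of caveat raised above.

```python
import os

def scrub(path, passes=3, chunk_size=1024 * 1024):
    """Overwrite a file in place with random data, several times.

    A hypothetical sketch of "a good scrubbing with random data":
    each pass writes cryptographically random bytes over the file's
    existing contents and forces them out to the device.  It does NOT
    touch filesystem journals, remapped sectors, or wear-levelled
    flash blocks.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk_size, remaining)
                f.write(os.urandom(n))  # random data, not fixed patterns
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # push the pass out to the device
```

For a whole drive you would run the same loop over the raw block device rather than a file, which sidesteps the filesystem-level caveats but requires root and an unmounted disk.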
In particular, the drives in use at the time that this paper was originally written have mostly fallen out of use, so the methods that applied specifically to the older, lower-density technology don't apply any more. Conversely, with modern high-density drives, even if you've got 10kB of sensitive data on a drive and can't erase it with 100% certainty, the chances of an adversary being able to find the erased traces of that 10kB in 80GB of other erased traces are close to zero.

(The second paragraph is slightly newer than the first one.)

The reason why I haven't updated the paper is that there really isn't much more to say than what's in those two paragraphs. EPRML and perpendicular recording are nothing like the technology that the paper discusses; for these more modern techniques a good scrubbing is about the best you can do, and you have to balance the amount of effort you're prepared to expend against the likelihood of anyone even trying to pull 10kB of data from a (well, at the time 80GB was the largest drive, today 1TB) drive. I made the paper as forward-looking as I could with the information available at the time (i.e. the projection to PRML/EPRML read channels and so on in the original paper), but didn't realise that people would skip that bit and just religiously quote the same old stuff fifteen years later.

(I've been working on a talk on "Defending where the Attacker Isn't" where I look at this sort of thing. In some areas, like password "best practices", this phenomenon is even more pronounced, because organisations are religiously following "best practices" designed to defend shared mainframes connected to 029 keypunches and model 33 teletypes. I hope the data erasure thing doesn't follow the same lifecycle :-).

Peter.

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com