Title: RE: disk encryption modes

With a 4096-byte cluster size, 1 GB of drive space would require 4 MB of temporary key file storage. At this ratio, a 128 MB compact flash card could hold the key file for 32 GB of hard drive space. The key file could be stored on the same physical drive if you wanted, but putting it on separate, easily microwaveable media gives you the "wipe all the data without touching the actual hard drive" capability. If you trust the reliability of the storage hardware, you could send the main drive the encrypted data and the keyfile drive the temporary key data concurrently, and let the drive buffering do its magic without a major performance hit. Reliability would be a significant issue, since losing keyfile data would mean losing a proportionally larger amount of data on the main storage device. If operational reliability is really super-important, keeping two copies of the key file on separate CD-RWs would up the warm-and-fuzzy factor, but would require the destruction of both discs (or CF cards, or whatever) to securely destroy the data.
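The sizing arithmetic above is easy to check. A minimal Python sketch (the key-combining function here is my own assumption for illustration; the proposal doesn't specify how the two halves are mixed):

```python
import hashlib

CLUSTER = 4096        # bytes per disk cluster
HALF_KEY = 16         # 128-bit temporary half-key stored per cluster

def keyfile_size(drive_bytes: int) -> int:
    """Temporary key file bytes needed for a drive of the given size."""
    return (drive_bytes // CLUSTER) * HALF_KEY

def cluster_key(permanent_half: bytes, temp_half: bytes) -> bytes:
    """One plausible way to combine the two 128-bit halves into a
    working key; hashing avoids structural relationships between
    keys that simple concatenation would leave.  (Assumption.)"""
    return hashlib.sha256(permanent_half + temp_half).digest()

# 1 GB of clusters needs 4 MB of temporary key storage,
# so a 128 MB CF card covers 128 MB / 4 MB = 32 GB of drive.
assert keyfile_size(1 << 30) == 4 * (1 << 20)
assert (128 * (1 << 20)) // keyfile_size(1 << 30) == 32
```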

The main feature I was going for was the ability to give a semi-trusted third party, out of the reach of your local men-with-guns, the ability to irrevocably destroy your data in an emergency, without giving the third party any of your actual data. If the "I need you to destroy the keyfile NOW" signal were sent automatically by the encryption driver after N failed login attempts (by writing a pre-arranged random value to a pre-arranged random section of the key file), you wouldn't even have to be conscious. And your (semi-)trusted third party could have a similar arrangement with you, to covertly warn you if he was compromised. This design is intended primarily for applications where complete loss of the data is less dire than disclosure of the data to the wrong parties. For those applications, security considerations would probably matter more than absolute cutting-edge performance, but since the keyfile data would be about 0.4% of the actual stored data, I think it could be done reasonably reliably without a noticeable performance hit.
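The dead-man trigger could look something like this sketch, assuming the destruction is carried out by overwriting a pre-arranged keyfile region (the function names, attempt threshold, and region layout are all illustrative, not part of the original proposal):

```python
import os

MAX_ATTEMPTS = 3  # illustrative threshold for "N failed logins"

def destroy_keyfile_region(path: str, offset: int, panic_value: bytes) -> None:
    """Overwrite the pre-arranged region of the key file with the
    pre-arranged value, making the corresponding clusters
    undecryptable, and force the write to stable storage."""
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(panic_value)
        f.flush()
        os.fsync(f.fileno())

def record_login_failure(state: dict, path: str, offset: int,
                         panic_value: bytes) -> None:
    """Hypothetical hook called by the encryption driver on each
    failed login; fires the trigger once the threshold is reached."""
    state["failures"] = state.get("failures", 0) + 1
    if state["failures"] >= MAX_ATTEMPTS:
        destroy_keyfile_region(path, offset, panic_value)
```

The `os.fsync` matters: a destruction signal that sits in a write-back cache when the power is cut has destroyed nothing.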

One real-world application that comes to mind for this idea is encryption for a corporate laptop. The laptop has an encrypted partition containing the sensitive corporate data, and the keyfile for that partition is stored at corporate HQ. For the encrypted partition to be accessed, the laptop has to have a live connection to corporate HQ. Even if that connection were a 33.6-kilobit dialup, you could still encrypt and decrypt at over 800 kilobytes per second, which is fast enough to open most files in a reasonable amount of time. (The laptop/HQ connection would need to be end-to-end encrypted and authenticated, to prevent an attacker from gradually acquiring the keyfile.) If the laptop is stolen, the thief gets none of the encrypted data, and runs the risk of having the computer tattle on his location via caller ID, GPS, or other means when it phones home. You could also use this concept for pay-per-view digital content, though of course it doesn't address the unsolvable problem of making the consumer play nice with the content, once decrypted, and not redistribute it.
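The dialup figure checks out: only the 16-byte temporary half-key has to cross the link per 4096-byte cluster, so each key byte fetched from HQ unlocks 256 bytes of local ciphertext. A quick sanity check:

```python
LINK_BPS = 33_600   # modem line rate, bits/s
CLUSTER = 4096      # bytes per cluster, decrypted locally
HALF_KEY = 16       # bytes of temporary half-key fetched per cluster

key_bytes_per_sec = LINK_BPS // 8                  # 4200 key bytes/s
clusters_per_sec = key_bytes_per_sec / HALF_KEY    # 262.5 clusters/s
plaintext_bps = clusters_per_sec * CLUSTER         # ~1.05 MB/s

# Over 800 KB/s even before protocol overhead eats into the link.
assert plaintext_bps > 800 * 1024
```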

-----Original Message-----
From: Bill Stewart [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 29, 2002 2:16 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: disk encryption modes


At 01:13 AM 04/29/2002 -0700, [EMAIL PROTECTED] wrote:
>  [each cluster has 128 bits permanent half-key, 128 bits nonce half-key...]
>  are for the second cluster, and so on. Each time a disk cluster is
> written to, a new temporary half-key is pulled from the (P)RNG and used
> to encrypt the cluster data, and then is stored in the temporary key
> file. When a cluster is read, the appropriate temporary key half is read
> from the temporary key file, combined with the permanent key half, and
> then the data is decrypted.

At least it's big enough to prevent searches through the space.
But it not only requires managing the extra key-file (which could be pretty large, and needs to be kept somewhere, apparently not in the same file system), it potentially requires two disk reads per block instead of just one, which is a major performance hit unless you're good at predictive caching, and more seriously it requires two writes that both succeed. If you write the key first and don't write out the block, you can't decrypt the old block that was there, while if you write the block first and don't succeed in writing the key, you can't decrypt the new block. This makes depending on caching writes much more difficult - it's already one of the things that helps make systems fast and either reliable or unreliable, and you've made it tougher, as well as requiring two disk spins. You can get some relief using non-volatile memory (the way the Legato Prestoserve did for NFS acceleration - first cache the write in battery-backed RAM, send your ACK, and then write the block out to disk), but that's hardware.

It's cute, though...
