| > I guess perhaps the reason they don't do integrity checking is that it
| > involves redundant data, so the encrypted volume would be smaller, or
| > the block offsets don't line up, and perhaps that's trickier to handle
| > than a 1:1 correspondence.
|
| Exactly, many file systems rely on block devices with atomic single block
| (sector) writes. If sector updates are not atomic, the file system needs
| to be substantially more complex (unavoidable transaction logs to support
| roll-back and roll-forward). Encrypted block device implementations that
| are file system agnostic cannot violate block update atomicity and so
| MUST not offer integrity.

That's way too strong. Here's an implementation that preserves block-level
atomicity while providing integrity:

Corresponding to each block, there are *two* checksums, A and B.

Read algorithm: Read the block, A, and B. If the block's checksum matches
either A or B, return the block's value; otherwise, declare the block
invalid.

Write algorithm: Read the current value of the block. If its checksum
matches A, write the checksum of the new data to B; otherwise, write the
checksum of the new data to A. After the checksum is known to be on the
disk, write the data block. Writes to a given block must be atomic with
respect to each other. (No synchronization is needed between reads and
writes.)
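To make this concrete, here's a minimal sketch in Python (my choice of
language; the disk is modeled as an in-memory list, SHA-256 stands in for
the checksum, and names like TwoChecksumStore are purely illustrative, not
taken from any real encrypted-volume implementation):

import hashlib

BLOCK_SIZE = 512  # assumed sector size


def checksum(data: bytes) -> bytes:
    # The scheme doesn't depend on a particular checksum; SHA-256 is a
    # stand-in (a keyed MAC would be needed against an active attacker).
    return hashlib.sha256(data).digest()


class TwoChecksumStore:
    """Models a disk as a list of [data, A, B] triples, where A and B
    are the two checksum slots described above."""

    def __init__(self, nblocks: int) -> None:
        empty = bytes(BLOCK_SIZE)
        # Initially, slot A vouches for the (all-zero) initial contents.
        self.blocks = [[empty, checksum(empty), b""]
                       for _ in range(nblocks)]

    def read(self, i: int) -> bytes:
        data, a, b = self.blocks[i]
        c = checksum(data)
        if c == a or c == b:
            return data
        raise IOError("block %d fails integrity check" % i)

    def write(self, i: int, new_data: bytes) -> None:
        assert len(new_data) == BLOCK_SIZE
        data, a, _b = self.blocks[i]
        new_c = checksum(new_data)
        if checksum(data) == a:
            # A vouches for the current data: park the new checksum in B.
            self.blocks[i][2] = new_c
        else:
            # B vouches for the current data: park the new checksum in A.
            self.blocks[i][1] = new_c
        # A real device would flush here, so the checksum is on disk
        # before the data block is overwritten.
        self.blocks[i][0] = new_data


if __name__ == "__main__":
    store = TwoChecksumStore(8)
    store.write(0, b"x" * BLOCK_SIZE)
    assert store.read(0) == b"x" * BLOCK_SIZE

The point of the slot selection in write() is that the slot vouching for
the *current* data is never overwritten: a crash between the checksum
write and the data write leaves the old data still verifiable, and a
crash after leaves the new data verifiable.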
Granted, this algorithm has other problems. But it shows that the three
requirements - user block size matches disk block size; block-level
atomicity; and authentication - are not mutually exclusive.

(Actually, I suppose one should add a fourth requirement, which this
scheme also realizes: the size of a user block identifier is the same as
the size of the block id passed to the disk. Otherwise, one could keep
the checksum with each "block identifier".)

                                                        -- Jerry

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]