Date:        Sun, 7 May 2000 15:12:49 +0200
   From: Thomas Pornin <[EMAIL PROTECTED]>

   I am asking myself whether it would be easy to maintain at the
   filesystem level a per-file checksum of the data. I am thinking about a
   64-bit value which would be the exclusive or of all 64-bit words in the
   file (the file being padded with 0 up to the next block limit).

   If my understanding of the structure of the ext2 filesystem is correct,
   we would need the following:
   -- 64 bits free in each inode, in order to store the checksum
   -- whenever some data is written over old data in the same file, the old
      data must be reread, so that the checksum could be updated accordingly
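The proposed scheme can be sketched in user space as follows (the function
name and buffer-based interface are illustrative, not ext2 code): the file
contents are treated as a sequence of 64-bit words, a short tail is padded
with zero bytes, and all words are XORed together.

```c
#include <stdint.h>
#include <string.h>

/* XOR-of-64-bit-words checksum over a byte buffer, as proposed above.
 * A partial final word is padded with zero bytes; since zero words do
 * not change an XOR sum, padding to the word boundary gives the same
 * result as padding the file with 0 up to the next block limit. */
uint64_t xor_checksum(const uint8_t *data, size_t len)
{
    uint64_t sum = 0;

    while (len >= 8) {
        uint64_t word;
        memcpy(&word, data, 8);   /* memcpy avoids unaligned access */
        sum ^= word;
        data += 8;
        len -= 8;
    }
    if (len > 0) {                /* pad the final partial word with 0 */
        uint64_t word = 0;
        memcpy(&word, data, len);
        sum ^= word;
    }
    return sum;
}
```

Note that the value fits in 64 bits, matching the "64 bits free in each
inode" requirement above.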

If the checksum is updated automatically when the file is modified, then
it doesn't protect you in the case where the attacker has breached root
(or some other user/group privileges) and then uses that privilege to
modify the file.  This is why all of the programs which provide this kind
of crypto checksum store the checksums off-line, on some trusted media.
(Example: on a CD-ROM, so that the attacker can't modify the checksum).

By storing the checksum in the filesystem, the only thing this protects
you against is attackers who modify the filesystem via the block
device.  This is actually a relatively rare attack path.  (However,
storing the checksums off-line on a secure, trusted store protects you
against both attacks).

   Besides encryption, the main cost is that overwritten data needs to be
   read again. I am under the impression that, under typical disk usage,
   files are truncated to zero size and rewritten from scratch, which does
   not imply additional cost in my design.
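The overwrite bookkeeping behind that cost can be sketched as follows (a
hypothetical helper, not actual ext2 code): because XOR is its own inverse,
the old word must first be read back so its contribution can be cancelled
out of the running sum before the new word is folded in.

```c
#include <stdint.h>

/* Incremental checksum update for an in-place overwrite, assuming the
 * XOR-of-64-bit-words scheme above.  The old word has to be re-read
 * from disk: XORing it in a second time cancels its contribution, and
 * then the new word is folded in.  A truncate-and-rewrite from scratch
 * avoids this read, since the sum is rebuilt starting from zero. */
uint64_t xor_update(uint64_t sum, uint64_t old_word, uint64_t new_word)
{
    return sum ^ old_word ^ new_word;
}
```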

There are a number of files where this isn't the case.  Database files,
and ELF files written by the GCC toolchain come to mind as immediate
examples.  There are also many files for which you are constantly
appending to the file --- log files, for example.   If each time you
open and append to a file, you have to reread the entire log (which
could in some cases grow to hundreds of megabytes), the resulting
performance degradation would be somewhat non-optimal.  :-)

                                                        - Ted
