Hello,

I am wondering whether it would be easy to maintain, at the
filesystem level, a per-file checksum of the data. I am thinking of a
64-bit value which would be the exclusive or (XOR) of all 64-bit words
in the file (the file being padded with zeros up to the next block
limit).
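To make the scheme concrete, here is a minimal sketch (in userland C, not
ext2 kernel code) of such a checksum; the function name `xor_checksum` is
mine, not part of any existing implementation. Note that because XOR with
zero is a no-op, the zero padding up to the block limit never changes the
result, so only the actual file bytes matter:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* XOR together all 64-bit words of the file contents.  A final
 * partial word is padded with zero bytes; padding further, up to
 * the block limit, adds only zero words and leaves the XOR intact. */
static uint64_t xor_checksum(const unsigned char *data, size_t len)
{
    uint64_t sum = 0;
    uint64_t word;
    size_t i;

    /* full 64-bit words */
    for (i = 0; i + 8 <= len; i += 8) {
        memcpy(&word, data + i, 8);
        sum ^= word;
    }

    /* zero-padded tail */
    if (i < len) {
        unsigned char tail[8] = {0};
        memcpy(tail, data + i, len - i);
        memcpy(&word, tail, 8);
        sum ^= word;
    }
    return sum;
}
```

(In the kernel this would of course run per block as data passes through
the buffer cache, rather than over a contiguous user buffer.)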

If my understanding of the structure of the ext2 filesystem is correct,
we would need the following:
-- 64 bits free in each inode, in order to store the checksum
-- whenever some data is written over old data in the same file, the old
   data must be reread, so that the checksum can be updated accordingly
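The second point is cheap once the old data is in hand: since XOR is its
own inverse, the stored checksum can be updated without rescanning the
rest of the file. A sketch (again with a name of my own choosing):

```c
#include <stdint.h>

/* Update the per-file checksum when one aligned 64-bit word is
 * overwritten: XOR out the old word, XOR in the new one.  This is
 * exactly why the old data must be reread before the overwrite. */
static uint64_t update_checksum(uint64_t checksum,
                                uint64_t old_word, uint64_t new_word)
{
    return checksum ^ old_word ^ new_word;
}
```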

The reason for such a construct would be to provide more security in
a cryptographic filesystem. I have looked at existing implementations
of such filesystems under several OSes, and while all of them provide
confidentiality, none includes any way of detecting active intrusion.
Even without understanding what is in a filesystem, an intruder might
corrupt data, and there should be a way to detect this.

My idea is that, if all sectors are physically encrypted on the
disk, a simple checksum such as the one described above would provide
authentication. The checksum is per file (and per directory), so that
it can be checked "on the fly".

Besides encryption, the main cost is that overwritten data needs to be
read again. I am under the impression that, under typical disk usage,
files are truncated to zero size and rewritten from scratch, which
incurs no additional cost in my design.

Am I just dumb? Am I reinventing the wheel?

        --Thomas Pornin
