> The short answer is that fsck can detect the bad inodes and fix or
> delete them.  Assuming no programming errors, you don't have to worry
> about a file containing bogus data after fsck has run.  Unfortunately,
> if write-caching is enabled on your hard drive (and it probably is,
> for speed), then the drive may internally re-order the writes and the
> carefully crafted sequence of writes disappears, so there are no
> guarantees (or at least, not as many).  Whether this is actually a
> problem depends on the brand, model, and firmware version of the
> drive, because some drives claim that data has been written to the
> disk when it is actually only in the drive buffer, while other drives
> are more honest.
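For what it's worth, if you want to take the drive's write cache out of the equation on FreeBSD, the handbook's disk-tuning chapter describes a loader tunable for ATA drives. A minimal sketch (assuming an ATA disk; check the handbook for your controller):

```
# /boot/loader.conf -- disable the ATA drive's write cache at boot.
# Writes get noticeably slower, but the drive can no longer claim
# data is on the platter while it is still only in its buffer.
hw.ata.wc="0"
```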

Let's suppose block A contains user A's private data. User A
deletes the file, so the metadata referring to that file is
updated synchronously, but the data block still contains the sensitive
information. Now user B creates a new file B. Let's suppose the data
block allocated for file B is block A. The right thing to do in
terms of security is first update block A with the new data, and then
update the metadata referring to it. But if metadata is updated
synchronously first (the free block bitmap says that block A is
allocated and the inode of file B points to it) and the system
crashes, user B has access to user A's private data. In this case
(asynchronous data block updates), fsck cannot fix the problem
(if I'm missing something, please correct me).
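To make the ordering concrete, here is a toy simulation (plain Python, nothing FreeBSD-specific; the block and inode names are made up) of the two write orderings with a crash between the writes:

```python
# A toy model (not FreeBSD code) of the scenario above: the "disk" is
# a dict, and a crash between two pending writes means only the first
# write actually reaches the disk.

def crash_after_first_write(disk, writes):
    """Apply only the first pending write, then 'crash'."""
    key, value = writes[0]
    disk = dict(disk)
    disk[key] = value
    return disk

# Before the crash: block 7 still holds user A's deleted data, and
# file B's inode does not point anywhere yet.
initial = {"block7": "user A's secret", "inodeB": None}

# Wrong ordering: the inode pointer (metadata) is written first.
unsafe = crash_after_first_write(
    initial,
    [("inodeB", "block7"),          # metadata lands on disk...
     ("block7", "user B's data")])  # ...but the data write is lost

# File B now points at a block still holding user A's secret, and
# fsck has no way to know that the contents are stale.
assert unsafe["inodeB"] == "block7"
assert unsafe["block7"] == "user A's secret"

# Right ordering: initialize the data block before the pointer.
safe = crash_after_first_write(
    initial,
    [("block7", "user B's data"),   # data lands on disk...
     ("inodeB", "block7")])         # ...the pointer write is lost

# Worst case is an unreferenced block, which fsck can reclaim.
assert safe["inodeB"] is None
assert safe["block7"] == "user B's data"
print("ok")
```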

The following is extracted from "Soft Updates: A Technique for
Eliminating Most Synchronous Writes in the Fast Filesystem":

"When a new block is allocated, its bitmap location is updated to
reflect that it is in use and the block's contents are
initialized with newly written data or zeros. In addition, a
pointer to the new block is added to an inode or indirect block
(see below). To ensure that the on-disk bitmap always reflects
allocated resources, the bitmap must be written to disk before
the pointer. Also, because the contents of the newly allocated
disk location are unknown, rule #1 specifies an update dependency
between the new block and the pointer to it. Because enforcing
this update dependency with synchronous writes can reduce data
creation throughput by a factor of two [Ganger & Patt, 1994],
many implementations ignore it for regular data blocks. This
implementation decision reduces integrity and security, since
newly allocated blocks generally contain previously deleted file
data."

The following is extracted from "Metadata Update Performance in
File Systems" [Ganger & Patt, 1994]:

"For example, a pointer to a newly allocated block should not be
added to a file's inode before the block is initialized on stable
storage. If this ordering is not enforced, a system failure could
result in the file containing data from some previously deleted
file, presenting both an integrity weakness and a security hole."

One can read this in the man page for fsck_ffs:

"The kernel takes care that only a restricted class of
innocuous file system inconsistencies can happen unless
hardware or software failures intervene.  These are
limited to the following:

       Unreferenced inodes
       Link counts in inodes too large
       Missing blocks in the free map
       Blocks in the free map also in files
       Counts in the super-block wrong"

so I assume FreeBSD is doing the correct thing.

Is this assumption correct?

> More details are found in
> http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/configtuning-disk.html

Thanks, that entry in the handbook is really interesting.