On Thu, Mar 18, 1999 at 12:37:42PM -0800, Steve Gehlbach wrote:
> Journaling file systems record a log of disk activity so that un-sync'ed
> cache activity can be reconstructed and the disk recovered on power up, by
> rolling back or rolling forward the disk transactions, much like a data base
> rolls back/forward data transactions.
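The quoted idea can be sketched in miniature (a toy write-ahead log in Python, with hypothetical file names — real journaling filesystems log block/metadata updates inside the kernel, not like this):

```python
import os

JOURNAL = "journal.log"   # hypothetical journal file
DATA = "data.bin"         # hypothetical data file

def commit(offset, payload):
    """Log the intended write, make the log durable, then apply it."""
    with open(JOURNAL, "wb") as j:
        j.write(b"%d %s" % (offset, payload))
        j.flush()
        os.fsync(j.fileno())      # journal entry hits disk first
    mode = "r+b" if os.path.exists(DATA) else "w+b"
    with open(DATA, mode) as d:
        d.seek(offset)
        d.write(payload)
        d.flush()
        os.fsync(d.fileno())      # then the data write hits disk
    os.remove(JOURNAL)            # transaction complete

def recover():
    """On power-up, roll forward any logged-but-unapplied write."""
    if os.path.exists(JOURNAL):
        offset, payload = open(JOURNAL, "rb").read().split(b" ", 1)
        commit(int(offset), payload)

recover()
commit(0, b"hello")
```

If the machine dies between the two fsync() calls, the journal survives and recover() replays the write; if it dies before the first fsync(), the write simply never happened — either way the on-disk state is consistent.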
It's very important to note that journaling filesystems only
guarantee coherency of the _filesystem_, i.e., inodes, directories,
etc.; they can't guarantee the contents of the files. For
example, suppose you are writing to random locations in a large
file, say 10000 blocks long. If you write to block 1000 and then
to block 2000, the OS may still write block 2000 to disk before
block 1000. With a journaled filesystem you may end up with a
consistent filesystem but a corrupted database. With ext2fs,
fsck just corrects the filesystem inconsistency on reboot.
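A sketch of the point (hypothetical file name): the only portable way to make sure block 1000 reaches the disk before block 2000 is to fsync() in between — otherwise the kernel flushes dirty pages in whatever order it likes:

```python
import os

BLOCK = 4096
# hypothetical large file written at random offsets
fd = os.open("bigfile.db", os.O_RDWR | os.O_CREAT, 0o644)

os.pwrite(fd, b"A" * BLOCK, 1000 * BLOCK)  # only dirties the page cache
os.fsync(fd)                               # force block 1000 to disk now
os.pwrite(fd, b"B" * BLOCK, 2000 * BLOCK)  # may reach disk much later
os.close(fd)
```

Without the fsync(), a journaling filesystem still keeps its own metadata consistent across a crash, but makes no promise about which of these two data writes survived.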
There have been people interested in a journaling filesystem
for Linux, but since Linux/ext2fs is so stable, there isn't much
need. It's much better to have a UPS, so that your application
has time to save its data in a consistent state and have it all
written to persistent media. Doing anything else invites getting
screwed by special cases you didn't think of in the design.
Plus, it's faster, since the OS and the application don't have
to spend all their time being paranoid.
But I will agree that under Linux, applications generally lack
control to encourage/force certain blocks of data to get written
to disk quickly or synchronously. This is important if you want
applications to be robust across power failures without a UPS.
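For what it's worth, the POSIX knobs that do exist look like this (a sketch with a hypothetical file name):

```python
import os

# O_SYNC: every write() returns only after the data reaches the disk
fd = os.open("critical.dat", os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
os.write(fd, b"must survive a power failure\n")
os.close(fd)

# Alternatively, write normally and flush on demand:
fd = os.open("critical.dat", os.O_WRONLY | os.O_APPEND)
os.write(fd, b"batched update\n")
os.fdatasync(fd)   # flush file data; cheaper than fsync (skips some metadata)
os.close(fd)
```

These are coarse, whole-file hammers; what's missing is finer-grained control over *which* blocks get flushed and in what order.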
dave...
--- [rtl] ---
To unsubscribe:
echo "unsubscribe rtl" | mail [EMAIL PROTECTED] OR
echo "unsubscribe rtl <Your_email>" | mail [EMAIL PROTECTED]
----
For more information on Real-Time Linux see:
http://www.rtlinux.org/~rtlinux/