Sean Murphy <[EMAIL PROTECTED]> wrote:
I have read up on soft updates and have some questions.
The way I understand it, the purpose of soft updates is to allow file
systems to be mounted dirty after an unclean shutdown of the system.
That's not the purpose. The purpose is to improve performance by taking
advantage of delayed writes much the way an asynchronous filesystem does,
while preventing horrendous data corruption by ordering those writes,
the way a journalling filesystem does.
The fact that you can generate filesystem snapshots is a side-benefit,
and the fact that you can use those snapshots to validate the filesystem
after it's been mounted is a further side-benefit.
If this is a safe way to restore consistency, why is it not used on /?
Because writes are delayed, it's possible for data to be lost in the
event of a crash -- it acts like a database: either the entire transaction
is committed, or it's rolled back. Either way, the data is guaranteed not
to be corrupted.
Also, on heavily used filesystems, softupdates can lead to the filesystem
temporarily having less space available than it really does. I.e. if you
replace /kernel, softupdates completely replaces the file with a new one, but
the blocks for the old file haven't been reclaimed yet. For a short time, you
might have 1 kernel file, but there's 2x that much space allocated for it.
For these two reasons, / is traditionally _not_ mounted with softupdates
enabled, since it's critical to system startup.
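The transient space shortfall can be shown with a toy accounting model (an
illustration only, nothing like actual FFS code; the `ToyFS` class and its
block counts are invented for this sketch):

```python
# Toy model of softupdates-style block accounting (illustrative only):
# replacing a file allocates new blocks before the old file's blocks
# have been reclaimed, so peak usage is briefly ~2x the file's size.

class ToyFS:
    def __init__(self, total_blocks):
        self.total = total_blocks
        self.allocated = 0       # blocks currently charged as "in use"
        self.pending_free = 0    # old blocks still awaiting reclamation

    def replace_file(self, size):
        self.allocated += size       # the new copy is written first...
        self.pending_free += size    # ...while the old copy lingers

    def flush(self):
        # delayed reclamation finally frees the old blocks
        self.allocated -= self.pending_free
        self.pending_free = 0

fs = ToyFS(total_blocks=1000)
fs.allocated = 100          # say /kernel occupies 100 blocks
fs.replace_file(100)        # install a new /kernel
print(fs.allocated)         # 200: briefly double the file's size
fs.flush()
print(fs.allocated)         # 100: back to normal once blocks are reclaimed
```

This is why a nearly full / can transiently look out of space during an
upgrade, even though the "real" usage never grew.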
If a file system is not heavily written to, is it better not to use
soft updates?
Weigh the good vs. the bad:
*) A synchronously mounted filesystem is almost guaranteed to keep your
data intact at all times, but is abysmally slow.
*) softupdates _may_ lose some data if your system crashes before all
writes are flushed, but will never _corrupt_ it. Additionally, you get a LOT
more speed.
*) Asynchronous is a little faster than softupdates, but your filesystem is
damn near guaranteed to be corrupt in the event of a crash.
When file systems are mounted dirty and are being used while the
background fsck is running on them, how does it prevent
files from being lost?
It doesn't. It guarantees that your filesystem will always be consistent,
never corrupt, but it doesn't guarantee against data loss.
Here's a simplified example:
Let's say you're saving a big file and the power goes out. When the power
comes back on, there are basically 3 states that file can be in:
A) It was fully written to disk -- you got lucky.
B) Nothing had been written to disk yet -- "data loss"
C) It was partially written to disk -- your filesystem is corrupt, and you
   need to allow a filesystem repair program to fix it (fsck, or its
   equivalent on Windows, for example) or you'll have weird problems with
   it until you do so.
Softupdates guarantees against C. It does this by (essentially) ordering
the writes:
1) it writes all the data to data blocks, and once that's done
2) _then_ it creates a directory entry for the file.
If the system crashes between #1 and #2, it looks like B happened, but you
never get scenario C, where the filesystem is corrupt and gets more corrupt
as you continue to use it. Instead, when fsck runs (in the background) it
notices that there are data blocks in use that don't belong to any file, and
frees them up for the filesystem to use.
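That background cleanup pass can be sketched as a simple set difference (a
toy model invented for this example, nothing like real fsck internals; the
block numbers and file names are made up):

```python
# Toy model of the background-fsck pass described above: the on-disk
# bitmap says some blocks are in use, but no directory entry references
# them (the crash hit between step 1 and step 2), so they can be freed.
allocated_blocks = {10, 11, 12, 20, 21, 30}   # marked "in use" on disk
files = {
    "a.txt": [10, 11, 12],    # reachable via directory entries
    "b.txt": [20, 21],
}
referenced = {b for blocks in files.values() for b in blocks}
orphans = allocated_blocks - referenced
print(sorted(orphans))        # [30] -- block 30 belongs to no file
allocated_blocks -= orphans   # returned to the free pool
```

Block 30 is the data written in step 1 of a transaction whose step 2 never
happened; reclaiming it loses that (never-published) data but leaves the
filesystem consistent.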
That's somewhat simplified, but it gives you the basic idea.
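The same two-step discipline -- all data first, name last -- can be imitated
at the application level with the familiar write-then-rename pattern. A
minimal Python sketch (an analogy to the ordering idea, not softupdates
itself; `safe_write` and the file names are invented for the example):

```python
# Application-level analogue of "data blocks first, directory entry
# second": write to a temp file, force it to disk, then atomically
# rename it into place. A crash before the rename looks like case B
# (old contents intact); you never see a half-written file (case C).
import os
import tempfile

def safe_write(path, data):
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)             # step 1: write all the data...
            f.flush()
            os.fsync(f.fileno())      # ...and push it to stable storage
        os.replace(tmp, path)         # step 2: _then_ publish the name
    except BaseException:
        os.unlink(tmp)                # crash analogue: orphan cleaned up
        raise

safe_write("demo.txt", b"hello")
print(open("demo.txt", "rb").read())  # b'hello'
```

`os.replace` is atomic on POSIX filesystems, so readers see either the old
file or the complete new one, never a partial write.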