James G. Sack (jim) wrote:
> ZFS was also mentioned earlier in this thread.
> What is the right question to ask?
> Is ZFS inherently immune to these (sudden power loss?) problems?
Define "immune".
The basic trick with ZFS is that live data never gets overwritten in
place. So you may lose new data, but you won't lose anything that was
already consistent before the power loss. Thus the filesystem and its
associated data should never end up trashed. However, if you were
writing a file when the power went out, the new data may not appear on
disk even though the old version of the file is intact.
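
To make that concrete, here is a toy copy-on-write sketch in C. The
names and the single root pointer are made up for illustration; real
ZFS walks a whole block tree and commits by rewriting its uberblock,
but the shape of the trick is the same:

/* Toy copy-on-write update.  Not ZFS code; just the idea. */
#include <stdio.h>

#define NBLOCKS 8

static char blocks[NBLOCKS][64];   /* fake disk blocks              */
static int  root = 0;              /* "uberblock": names the current
                                      consistent version             */

/* An update never touches the live block: it writes a new copy and
 * then flips the root pointer in one step.  If power dies before the
 * flip, the old version is still intact; if it dies after, the new
 * one is.  There is no half-written state to fsck.
 */
static void cow_update(const char *data)
{
    int fresh = (root + 1) % NBLOCKS;   /* pick an unused block      */
    snprintf(blocks[fresh], sizeof blocks[fresh], "%s", data);
    /* on real hardware a cache flush / write barrier belongs here   */
    root = fresh;                       /* the atomic commit point   */
}

int main(void)
{
    cow_update("version 1");
    cow_update("version 2");
    printf("current: %s\n", blocks[root]);
    return 0;
}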
One of the goals of ZFS was to be "fsck-less". You don't have to fsck
to get a usable filesystem, but it seems you do have to run some
periodic vacuuming to keep performance up.
> Is a write-barrier strategy an unnecessary kludge in the ZFS architecture?
> (or what?)
Not necessarily. You still want certain actions to occur as close to
atomically as possible, in order to minimize the loss of newly
written data.
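
The same ordering concern shows up one level above the filesystem. As
a minimal C sketch, here's the classic application-level atomic
replace: write a temp file, fsync it, then rename over the old name.
The fsync is the barrier that keeps the commit (the rename) from
reaching disk before the data does. Paths are invented and error
handling is kept minimal:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int atomic_replace(const char *path, const char *tmp, const char *data)
{
    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, data, strlen(data)) < 0 ||
        fsync(fd) < 0) {             /* barrier: data before commit */
        close(fd);
        return -1;
    }
    close(fd);
    return rename(tmp, path);        /* the atomic commit           */
}

int main(void)
{
    return atomic_replace("config", "config.tmp", "new contents\n");
}

Drop the fsync and the disk is free to make the rename durable before
the data blocks, which is exactly the "new data missing after power
loss" case above.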
-a