I'll add my anecdotes :)

On Tue, Apr 23, 2013 at 3:40 PM, Alan McKinnon <[email protected]> wrote:
> In over 10 years, I have never had a file system failure with any of
> these (all used a lot):
>
> ext2
> ext3
> ext4
> zfs
> reiser3

ext2, ext3, ext4, btrfs here.

I've run ext4 for years (ever since it lost the dev suffix in the
kernel) without a single hiccup, and btrfs on a laptop with no
battery monitor, meaning the battery would die with no warning
(unclean shutdowns x1000), and it never had an issue that prevented
it from mounting on the next reboot.

I've also used btrfs on a mobile phone running Mer development
snapshots, which tend to crash, reboot, freeze and require pulling
the battery; it also never failed to remount after that constant
abuse.

btrfs overlaps in features with zfs, reiser, lvm, dm... (subvolumes,
snapshots, checksums, multi-device pooling). I still haven't decided
whether that feature-creep makes me think "oh cool!" or "oh no!" :)

> I have had failures with these (used a lot):
>
> Oh wait, there aren't any of those.

JFS is on my "never again" list. I used it on a few drives and two
of them ended in catastrophic failure after an unexpected shutdown.
"journal replay failed" is a phrase I still see in my nightmares...
The recovery stripped the names from the inodes, resulting in
millions of files named something like I01039130.RCN, not sorted
into directories or anything, though strangely the timestamps
survived. That was several years ago and I've avoided JFS ever
since.

I actually had a third JFS incident, but by then I had disabled
auto-fsck. I was unable to mount it read-only, but found a shareware
tool for OS/2 that was able to recover files from a corrupt JFS
volume, complete with filenames and directories. I slapped the drive
into an OS/2 machine and it took several DAYS to complete the
recovery, but it did in fact complete and I happily sent the guy ten
dollars. It looks like nowadays there is an open-source tool for
Linux called jfsrec which does the same kind of recovery from broken
JFS volumes.
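
For anyone in the same hole: the failed attempt was just the plain
read-only mount (device name hypothetical),

    mount -o ro /dev/sdb1 /mnt/rescue

and when that errors out, a tool like jfsrec works on the raw device
instead of going through the kernel driver; I don't remember the
exact invocation, so check its own docs.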

I used XFS on a drive which had a bad cable, and it wound up being
unmountable and unfixable by fsck, though (after replacing the
cable) I was able to do a read-only dump of all the files from it
using the XFS utilities, after which I reformatted and copied
everything back. I can't fault the filesystem for a bad cable, but
any time fsck is unable to fix an unmountable filesystem, it scares
me.
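
I don't remember the exact commands any more, but the usual
read-only rescue with the XFS tools goes something like this (device
and paths are made up):

    # mount read-only and skip log replay so a damaged log can't
    # block the mount
    mount -o ro,norecovery /dev/sdc1 /mnt/rescue

    # level-0 dump to a file on a healthy disk, then restore it
    xfsdump -l 0 -f /backup/rescue.xfsdump /mnt/rescue
    xfsrestore -f /backup/rescue.xfsdump /mnt/newdisk

After that it was a fresh mkfs.xfs and copying everything back.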

So, for me the rule of thumb is: ext4 on "important" drives (servers,
my main desktop system, RAID array, backups), and btrfs on drives
where I'm more willing to experiment and take a chance at something
weird happening (laptop, web surfing workstation, mobile phone,
virtual machines).
