> too many words wasted, but not a single word on how to restore the data.
>
> I have read the man pages carefully.  But again: nothing there says
> that on USB drives a zfs umount of the pool is not allowed.

You misunderstand.  This particular point has nothing to do with USB;
it's the same in any ZFS environment.  You're allowed to do a zfs
umount on a filesystem; there's no problem with that.  But remember
that ZFS is not just a filesystem in the way that reiserfs and UFS are
filesystems.  It's an integrated storage pooling system and filesystem.
When you umount a filesystem, you're not taking any storage offline;
you're just removing the filesystem from the VFS hierarchy.
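
For example (the pool and filesystem names here are hypothetical),
after the umount the pool is still imported and its devices are still
open:

    zfs umount tank/data    # the filesystem leaves the VFS hierarchy
    zpool status tank       # ...but the pool is still ONLINE
    zpool list              # tank is still imported and writable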

You umounted a zfs filesystem, without touching the pool, then removed
the device.  This is analogous to setting up an external hardware RAID,
creating one or more filesystems on it, using them for a while,
umounting one of them, and then powering down the RAID.  You did
nothing to protect the other filesystems or the RAID's r/w cache.
Everything on the RAID is now inconsistent and suspect.  And since your
"RAID" was a single striped volume, there's no mirror or parity
information with which to reconstruct the data.
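
What you want before pulling any device is to take the whole pool
offline cleanly with zpool export (pool name again hypothetical):

    zpool export tank    # unmounts the filesystems, flushes all
                         # outstanding writes, closes the devices
    # ...now it is safe to unplug the USB disk...
    zpool import tank    # later, after reattaching it

export marks the pool as cleanly exported, so a later import, on the
same machine or another, starts from a consistent state.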

ZFS is capable of detecting these problems, where other filesystems are
often not.  But no filesystem can tell what the data should have been
when the only copy of the data is damaged.
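
You can make ZFS go looking for this kind of damage explicitly; a
scrub reads and verifies every block in the pool (pool name
hypothetical):

    zpool scrub tank         # read and checksum-verify every block
    zpool status -v tank     # show progress and list any files with
                             # unrecoverable errors

On a mirrored or raidz pool a scrub repairs bad blocks from the
redundant copies; on a single stripe like yours it can only tell you
which files are gone.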

This is documented in ZFS.  It's not about USB, it's just that USB
devices can be more vulnerable to this kind of treatment than other
kinds of storage are.

> And again: why should a two-week-old Seagate HDD suddenly be damaged
> if there was no shock, hit, or any other event like that?

It happens all the time.  We just don't always know about it.

-- 
 -D.    d...@uchicago.edu    NSIT    University of Chicago