On February 11, 2009 2:07:47 AM -0800 Gino <dandr...@gmail.com> wrote:
I agree, but I'd like to point out that the MAIN problem with ZFS is that
after a corruption you'll lose ALL your data, with no way to recover it.
Consider an example where you have 100TB of data and an FC switch fails,
or some other hardware problem occurs during I/O on a single file. With
UFS you'll probably get corruption on that single file. With ZFS you'll
lose all your data.  I totally agree that ZFS is theoretically much, much
better than UFS, but in real-world applications the risk of losing access
to an entire pool is not acceptable.

If you have 100TB of data, wouldn't you have a completely redundant
storage network -- dual FC switches on different electrical supplies,
etc.?  I've never designed or implemented a storage network before, but
such designs seem common in the literature and are well supported by
Solaris.  I have done such designs with data networks, and that kind of
redundancy is quite common.
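For what it's worth, the redundancy can also live in the pool itself, not just the fabric. A minimal sketch, assuming two independent FC fabrics and hypothetical device names (c1* reached via switch A, c2* via switch B -- your controller numbers will differ):

```shell
# Mirror each vdev across the two fabrics, so losing one FC switch
# takes out only one side of each mirror and the pool stays online.
# Device names are hypothetical examples, not from the original post.
zpool create tank \
    mirror c1t0d0 c2t0d0 \
    mirror c1t1d0 c2t1d0

# Confirm both sides of each mirror are ONLINE before trusting it:
zpool status tank
```

With that layout a single switch failure degrades the mirrors instead of corrupting writes in flight, which is exactly the single-device failure case above.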

I mean, that's a lot of data to go missing due to a single device
failing -- which it will.

Not to say it's not a problem with ZFS, just that in the real world it
should be mitigated, since your storage network design would overcome a
single failure *anyway* -- regardless of ZFS.

-frank
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
