[sigh, here we go again... isn't this in a FAQ somewhere, it certainly is
in the archives...]

Ed Spencer wrote:
> I find this thread both interesting and disturbing. I'm fairly new to
> this list so please excuse me if my comments/opinions are simplistic or
> just incorrect.
>
> I think there's been too much FC SAN bashing so let me change the
> example.
>
> What if you buy a 7000 Series server (complete with ZFS) and set up
> an IP SAN. You create a LUN and share it out to a Solaris 10 host.
> On the Solaris host you create a ZFS pool with that iSCSI LUN.
>   

You are certainly able to implement ZFS redundancy on the
Solaris 10 host.
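
For example, the Solaris 10 host can mirror two such LUNs (the pool
and device names below are hypothetical):

  # mirror two iSCSI LUNs presented to the host
  zpool create tank mirror c2t0d0 c3t0d0

  # check the health of both sides of the mirror
  zpool status tank

With a mirror, when ZFS detects a checksum error on one LUN it can
read the good copy from the other LUN and rewrite the bad block.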

> Now my understanding is that you will not be able to correct errors on
> the zpool of the Solaris 10 machine because ZFS on the Solaris 10
> machine is not doing the RAID.
>   

No, this is not a completely true statement (more below).

> Another example would be if you were sharing out a LUN to a VMware
> server, from your iSCSI SAN or FC SAN, and creating Solaris 10 virtual
> machines with ZFS booting.
>   

You are certainly able to implement ZFS redundancy on the
Solaris 10 VM.

> Another example would be Solaris 10 booting a ZFS file system from a
> hardware-mirrored pair of drives.
>   

You are certainly able to implement ZFS redundancy on the mirrored
pair of drives.

> Now these are examples of standard implementations of machines in a
> datacenter, specifically ones I have installed.
>   

I presume you are saying that you implemented only the default ZFS
data protection for a single vdev.  You have more options, including
copies, mirroring, raidz, etc.
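
A sketch of those options (pool, dataset, and device names are
hypothetical):

  # keep two copies of user data, even on a single LUN
  zfs set copies=2 tank/data

  # mirror a pair of LUNs
  zpool create tank mirror c2t0d0 c3t0d0

  # single-parity raidz across three LUNs
  zpool create tank raidz c2t0d0 c3t0d0 c4t0d0

Note that copies=2 guards against isolated bad blocks but not loss of
the whole LUN; mirror and raidz survive a device failure.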

> From following this thread I now feel that if I have uncorrectable "data
> errors" on the zfs pools there will be no way to easily repair the pool.
>   

Untrue.  ZFS will attempt to repair what it can repair.  More below.

> I see no reason why, if I do detect errors as I scrub the zfs pool,
> I shouldn't be able to run a simple utility to fix the pool as I would
> a ufs filesystem and then recover the corrupted files from tape.
>   

There is no utility for UFS that will repair corrupted data.  UFS is
blissfully unaware of data corruption.  fsck will attempt to reconcile
metadata problems, which were very common before logging was
added, because UFS does not have an always-consistent on-disk
format (ZFS does).

By default, ZFS uses copies=2 for metadata, and uberblocks are 4x
redundant.  If data corruption is detected in a file, zpool status -v
will show exactly which files are corrupted, which lets you decide
how you want to handle each broken file.
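
For example (pool name hypothetical):

  # read every block, verify checksums, repair where redundancy allows
  zpool scrub tank

  # list any files with unrecoverable errors
  zpool status -v tank

If anything is beyond repair, the -v output names the affected files
under "Permanent errors have been detected in the following files:",
so you know exactly what to restore from backup.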

IMHO, you are getting hung up on the fact that if data corruption
is detected in a file and ZFS has no way to repair the file, then
you will probably want to do something about it manually.  With UFS,
you'll never know, though you might see symptoms such as your
apps crashing or your spreadsheet holding the wrong numbers.

> I believe that for zfs to be used as a general purpose filesystem
> there has to be support built into zfs for these standard data
> center implementations, otherwise it will just become a specialized
> filesystem, like NetApp's WAFL, and there are a lot more servers than
> storage appliances in the datacenter.
>   

I disagree.  ZFS will be the preferred boot file system for Solaris
systems; it is already the only boot file system available for
OpenSolaris.  Features like snapshots (which, unlike UFS snapshots,
actually work in most cases) and cloning are extremely useful for
managing OSes, patches, and upgrades.  ZFS is the future general
purpose file system for Solaris; UFS is not (which will become
readily apparent when you buy a 1.5 TByte disk).
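
As a sketch of that workflow (dataset and snapshot names are
hypothetical):

  # snapshot the root file system before patching
  zfs snapshot rpool/ROOT/s10@prepatch

  # roll back if the patch goes badly
  zfs rollback rpool/ROOT/s10@prepatch

  # or clone the snapshot to try an upgrade side by side
  zfs clone rpool/ROOT/s10@prepatch rpool/ROOT/s10-test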

> I think this thread has put zfs in a negative light.  I don't actually
> believe that I will experience many of these problems in an
> enterprise-class data center, but I still don't look forward to dealing
> with the consequences of encountering these types of problems.
>   

One reason you may never have experienced data corruption with
UFS (which I find hard to believe, having used UFS for 20+ years)
is that UFS has no way to detect data corruption.  Are you trying to
kill the canary? :-)

> Maybe zfs is not ready to be considered a general purpose filesystem.
>   

I'd say maybe UFS is not ready to be considered a general purpose file
system, by today's standards :-)
 -- richard
