[EMAIL PROTECTED] said:
> That's the one that's been an issue for me and my customers - they get billed
> back for GB allocated to their servers by the back-end arrays.  To be more
> explicit about the 'self-healing properties':  to deal with any fs
> corruption situation that would traditionally require an fsck on UFS (SAN
> switch crash, multipathing issues, cables going flaky or getting pulled,
> server crash that corrupts filesystems), ZFS needs some disk redundancy in
> place so it has parity and can recover (raidz, zfs mirror, etc.).  Which
> means that to use ZFS, a customer has to pay more to get the back-end
> storage redundancy they need to recover from anything that would cause an
> fsck on UFS.  I'm not saying it's a bad implementation or that the gains
> aren't worth it, just that cost-wise, ZFS is more expensive in this
> particular bill-back model.

If your back-end array implements RAID-0, you need not suffer the extra
expense.  Allocate one RAID-0 LUN per physical drive, then use ZFS to
make raidz or mirrored pools as appropriate.
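As a rough sketch (the cNtNdN device names here are hypothetical; substitute
whatever your RAID-0 LUNs show up as), it's just the usual pool creation:

    # Five RAID-0 LUNs, one per physical drive, presented as c2t0d0..c2t4d0.
    # Let ZFS provide the redundancy on top of them:
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

    # Or a mirrored pool, trading capacity for faster resilvers:
    zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0

That way the array is doing nothing but striping, and ZFS has the parity it
needs to self-heal.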

To add to the other anecdotes on this thread:  We have non-redundant
ZFS pools on SAN storage, in production use for about a year, replacing
some SAM-QFS filesystems which were formerly on the same arrays.  We
have had the "normal" ZFS panics occur in the presence of I/O errors
(SAN zoning mistakes, cable issues, switch bugs), but no ZFS corruption
or data loss as a result.  We run S10U4 and S10U5, both SPARC and x86.
MPxIO works fine, once you have the OS and arrays configured properly.
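For what it's worth, on S10 the OS side of enabling MPxIO is just the stock
stmsboot/mpathadm tools (a minimal sketch; the array-side host-type settings
vary by vendor, so check your array docs):

    # Enable MPxIO on supported FC controller ports; this sets
    # mpxio-disable="no" in fp.conf and requires a reboot:
    stmsboot -e

    # After the reboot, check the old-name -> multipath-name mappings
    # and confirm both paths are visible per LUN:
    stmsboot -L
    mpathadm list lu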

Note that I'd much prefer to have ZFS-level redundancy, but our equipment
doesn't support a useful RAID-0, and our customers want cheap storage.  But
we also charge them for tape backups....

Regards,

Marion


