Dennis Clarke wrote:
>> Dennis Clarke wrote:
>>> While ZFS may do a similar thing, *I don't know* if there is a published
>>> document yet that shows conclusively that ZFS will survive multiple disk
>>> failures.
>>
>> ??  why not?  Perhaps this is just too simple and therefore doesn't get
>> explained well.
>
> That is not what I wrote.
>
> Once again, for the sake of clarity, I don't know if there is a published
> document, anywhere, that shows, by way of a concise experiment, that ZFS
> will actually perform RAID 1+0 and survive multiple disk failures gracefully.
>
> I do not see why it would not.  But there is no conclusive proof that it will.

Will add it to the solarisinternals ZFS wiki.

For an easy proof, I created a RAID-1+0 set with ramdisks and clobbered
two of the ramdisks, one from each mirror.
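
For anyone who wants to try this at home, the setup can be reproduced with
something like the following.  Treat it as a sketch rather than the exact
commands used: the ramdisk names match the status output below, but the
sizes and the test data are arbitrary choices.

  # ramdiskadm -a set1-0 100m
  # ramdiskadm -a set1-1 100m
  # ramdiskadm -a set2-0 100m
  # ramdiskadm -a set2-1 100m
  # zpool create rampool \
        mirror /dev/ramdisk/set1-0 /dev/ramdisk/set1-1 \
        mirror /dev/ramdisk/set2-0 /dev/ramdisk/set2-1
  # cp /usr/dict/words /rampool/

One way to make two of the devices unopenable would be to export the pool,
destroy one ramdisk from each mirror with 'ramdiskadm -d', and import the
pool again with 'zpool import -d /dev/ramdisk rampool'.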
  # zpool status rampool
    pool: rampool
   state: DEGRADED
  status: One or more devices could not be opened.  Sufficient replicas exist for
          the pool to continue functioning in a degraded state.
  action: Attach the missing device and online it using 'zpool online'.
     see: http://www.sun.com/msg/ZFS-8000-D3
   scrub: resilver completed with 0 errors on Mon Oct 23 10:58:55 2006
  config:

          NAME                     STATE     READ WRITE CKSUM
          rampool                  DEGRADED     0     0     0
            mirror                 DEGRADED     0     0     0
              /dev/ramdisk/set1-0  ONLINE       0     0     0
              /dev/ramdisk/set1-1  UNAVAIL      0     0     0  cannot open
            mirror                 DEGRADED     0     0     0
              /dev/ramdisk/set2-1  ONLINE       0     0     0
              /dev/ramdisk/set2-0  UNAVAIL      0     0     0  cannot open

  errors: No known data errors
  #
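
To get back to a healthy pool from here, the obvious recovery (again a
sketch, not part of the session above) is to recreate the two missing
ramdisks and let ZFS resilver onto them.  Because the recreated ramdisks are
blank, 'zpool replace' is probably what is needed rather than the
'zpool online' suggested in the status output, which applies when the same
device comes back:

  # ramdiskadm -a set1-1 100m
  # ramdiskadm -a set2-0 100m
  # zpool replace rampool /dev/ramdisk/set1-1
  # zpool replace rampool /dev/ramdisk/set2-0
  # zpool status rampool

Once the resilver finishes, the pool should report ONLINE again with no
known data errors.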

 -- richard