I'm curious, how often do you scrub the pool?

On Mon, 2009-06-22 at 15:33, Ross wrote:
> Hey folks,
>
> Well, I've had a disk fail in my home server, so I've had my first experience
> of hunting down the faulty drive and replacing it (a damn sight easier on Sun
> kit than on a home-built box, I can tell you!).
>
> All seemed well: I replaced the faulty drive, imported the pool again, and
> kicked off the repair with:
> # zpool replace zfspool c1t1d0
>
> But then a few minutes later I noticed this:
>
> # zpool status
>   pool: zfspool
>  state: DEGRADED
> status: One or more devices has experienced an unrecoverable error.  An
>         attempt was made to correct the error.  Applications are unaffected.
> action: Determine if the device needs to be replaced, and clear the errors
>         using 'zpool clear' or replace the device with 'zpool replace'.
>    see: http://www.sun.com/msg/ZFS-8000-9P
>  scrub: resilver in progress for 0h5m, 1.35% done, 6h46m to go
> config:
>
>         NAME                        STATE     READ WRITE CKSUM
>         zfspool                     DEGRADED     0     0     0
>           raidz2                    DEGRADED     0     0     0
>             replacing               DEGRADED     0     0 68.0K
>               15299378891435382892  FAULTED      0  212K     0  was /dev/dsk/c1t1d0s0/old
>               c1t1d0                ONLINE       0     0     0  2.89G resilvered
>             c1t2d0                  ONLINE       0     0     0
>             c1t3d0                  ONLINE       0     0     0
>             c1t4d0                  ONLINE       0     0     0
>             c1t5d0                  ONLINE       0     0     1  43K resilvered
>
> errors: No known data errors
>
>
> A checksum error on one of the other disks!  Thank god I went with raid-z2.
>
> Ross
> --
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Ed Spencer
UNIX System Administrator
Information Services and Technology, Infrastructure Group
The University of Manitoba
Winnipeg, Manitoba, Canada R3T 2N2
EMail: ed_spen...@umanitoba.ca
telephone: (204) 474-8311
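P.S. For anyone following the thread who doesn't scrub regularly: one common way to automate it is a root crontab entry that kicks off a periodic scrub. A sketch only (the monthly schedule and the pool name `zfspool` from Ross's output are just examples; pick a frequency that suits your hardware and workload):

```shell
# Root crontab entry: scrub the pool at 02:00 on the 1st of each month.
# 'zpool scrub' starts the scrub in the background and returns immediately;
# check progress afterwards with 'zpool status zfspool'.
0 2 1 * * /usr/sbin/zpool scrub zfspool
```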