Hello,

I've been getting warnings that my ZFS pool is degraded. At first it was 
complaining about a few corrupt files, which were listed as hex numbers instead 
of filenames, e.g.

VOL1:<0x0>

After a scrub, a couple of the filenames appeared; it turned out they were in 
snapshots I don't really need, so I destroyed those snapshots and started a new 
scrub. I then typed "zpool status -v VOL1" ... and the machine rebooted. When I 
could log on again, I looked at /var/log/messages, but found nothing interesting 
prior to the reboot. I typed "zpool status -v VOL1" again, whereupon the machine 
rebooted once more. When the machine was back up, I stopped the scrub, waited a 
while, then typed "zpool status -v VOL1" again, and this time got:


r...@nexenta1:~# zpool status -v VOL1
  pool: VOL1
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
  scan: scrub canceled on Wed Aug 11 11:03:15 2010
config:

        NAME        STATE     READ WRITE CKSUM
        VOL1        DEGRADED     0     0     0
          raidz1-0  DEGRADED     0     0     0
            c2d0    DEGRADED     0     0     0  too many errors
            c3d0    DEGRADED     0     0     0  too many errors
            c4d0    DEGRADED     0     0     0  too many errors
            c5d0    DEGRADED     0     0     0  too many errors
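
For reference, the rough sequence of commands leading up to this was something 
like the following (the snapshot names are placeholders; I'm going from memory):

    # destroy the snapshots that contained the corrupt files (names made up here)
    zfs destroy VOL1/data@old-snap1
    zfs destroy VOL1/data@old-snap2
    # start a fresh scrub
    zpool scrub VOL1
    # this is the command that triggered the reboot (twice)
    zpool status -v VOL1
    # after the second reboot: stop the scrub, wait, then check again
    zpool scrub -s VOL1
    zpool status -v VOL1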

So, I have the following questions:

1) How do I find out which file is corrupt when all I get is something like 
"VOL1:<0x0>"?
2) What could be causing these reboots?
3) How can I fix my pool? My best guess is sketched below.
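
Regarding (3): based on the 'action' text in the status output above, I assume 
it's something like the commands below, but I'm not sure that's the whole story 
when all four disks are showing "too many errors":

    # clear the error counters, then re-scrub and check the result
    zpool clear VOL1
    zpool scrub VOL1
    zpool status -v VOL1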

Thanks!