First, let me say I am NOT an expert on ZFS - far from it.
But I have tried to duplicate your scenario using files
in place of real hard drives, and I found I could fix the
error in my test scenario by replacing the UNAVAIL drives - see below.
Of course, there is no guarantee that I have duplicated
your problem, that this would fix it, or that this is an
appropriate course of action, and I would recommend you do
nothing to your ZFS pool until you have heard from a ZFS expert.
But I thought it an interesting experiment, worth posting
here in the hope of comments from some ZFS experts.
Regards
Nigel Smith
# mkfile 128m /vdev/disk1
# mkfile 128m /vdev/disk2
# mkfile 128m /vdev/disk3
# zpool create tank raidz /vdev/disk1 /vdev/disk2 /vdev/disk3
(--- copy data to /tank ---)
# df -h /tank
Filesystem             size   used  avail capacity  Mounted on
tank                   214M   207M   7.3M    97%    /tank
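As a sanity check on those numbers (my own arithmetic, not from the transcript): in a raidz1 vdev of N disks, one disk's worth of space goes to parity, so the three 128 MB file vdevs should give roughly 2 x 128 MB = 256 MB usable, of which df sees 214M once ZFS takes its own metadata/reservation overhead.

```shell
# raidz1 usable space ~ (N - 1) disks of data; 1 disk's worth is parity.
disks=3
size_mb=128
usable=$(( (disks - 1) * size_mb ))
echo "expected usable: ${usable} MB"   # ~256 MB before ZFS overhead; df reports 214M
```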
# mkfile 128m /vdev/disk4
# mkfile 128m /vdev/disk5
# mkfile 128m /vdev/disk6
# zpool add tank raidz /vdev/disk4 /vdev/disk5 /vdev/disk6
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        tank             ONLINE       0     0     0
          raidz1         ONLINE       0     0     0
            /vdev/disk1  ONLINE       0     0     0
            /vdev/disk2  ONLINE       0     0     0
            /vdev/disk3  ONLINE       0     0     0
          raidz1         ONLINE       0     0     0
            /vdev/disk4  ONLINE       0     0     0
            /vdev/disk5  ONLINE       0     0     0
            /vdev/disk6  ONLINE       0     0     0

errors: No known data errors
# dd if=/dev/random of=/vdev/disk5 bs=2048k count=1
# dd if=/dev/random of=/vdev/disk6 bs=2048k count=1
# zpool scrub tank
# zpool status -v tank
  pool: tank
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid.  There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-5E
 scrub: scrub completed with 3 errors on Sun Dec 30 11:27:13 2007
config:

        NAME             STATE     READ WRITE CKSUM
        tank             UNAVAIL      0     0     0  insufficient replicas
          raidz1         ONLINE       0     0     0
            /vdev/disk1  ONLINE       0     0     0
            /vdev/disk2  ONLINE       0     0     0
            /vdev/disk3  ONLINE       0     0     0
          raidz1         UNAVAIL      0     0     0  insufficient replicas
            /vdev/disk4  ONLINE       0     0     0
            /vdev/disk5  UNAVAIL      0     0     0  corrupted data
            /vdev/disk6  UNAVAIL      0     0     0  corrupted data

errors: Permanent errors have been detected in the following files:

        <metadata>:<0xf>
        <metadata>:<0x18>
        <metadata>:<0xb5>
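A side note on why both "disks" end up UNAVAIL rather than merely dirty: the dd commands do more than scribble over the first 2MB of each backing file. Because dd truncates its output file unless conv=notrunc is given, the files also shrink from 128MB to 2MB, which destroys the ZFS labels at both ends of each "device". That is easy to check on a scratch file (my own reproduction, using dd in place of mkfile and /dev/urandom so the read cannot block):

```shell
# Stand-in for 'mkfile 128m': a 128 MB file of zeros.
dd if=/dev/zero of=/tmp/disk.img bs=1024k count=128 2>/dev/null
# The corruption step from the transcript.  Note: no conv=notrunc,
# so dd truncates the file as well as overwriting its start.
dd if=/dev/urandom of=/tmp/disk.img bs=2048k count=1 2>/dev/null
wc -c < /tmp/disk.img    # 2097152 - the "disk" is now only 2 MB
rm -f /tmp/disk.img
```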
# mkfile 128m /vdev/disk7
# zpool replace tank /vdev/disk5 /vdev/disk7
# zpool scrub tank
# zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: resilver completed with 0 errors on Sun Dec 30 11:30:23 2007
config:

        NAME             STATE     READ WRITE CKSUM
        tank             DEGRADED     0     0     0
          raidz1         ONLINE       0     0     0
            /vdev/disk1  ONLINE       0     0     0
            /vdev/disk2  ONLINE       0     0     0
            /vdev/disk3  ONLINE       0     0     0
          raidz1         DEGRADED     0     0     0
            /vdev/disk4  ONLINE       0     0     0
            /vdev/disk7  ONLINE       0     0     0
            /vdev/disk6  UNAVAIL      0     0     0  corrupted data

errors: No known data errors
# mkfile 128m /vdev/disk8
# zpool replace tank /vdev/disk6 /vdev/disk8
# zpool scrub tank
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: scrub completed with 0 errors on Sun Dec 30 11:32:36 2007
config:

        NAME             STATE     READ WRITE CKSUM
        tank             ONLINE       0     0     0
          raidz1         ONLINE       0     0     0
            /vdev/disk1  ONLINE       0     0     0
            /vdev/disk2  ONLINE       0     0     0
            /vdev/disk3  ONLINE       0     0     0
          raidz1         ONLINE       0     0     0
            /vdev/disk4  ONLINE       0     0     0
            /vdev/disk7  ONLINE       0     0     0
            /vdev/disk8  ONLINE       0     0     0

errors: No known data errors
# zpool history
History for 'tank':
2007-12-30.10:58:52 zpool create tank raidz /vdev/disk1 /vdev/disk2 /vdev/disk3
2007-12-30.11:25:41 zpool add tank raidz /vdev/disk4 /vdev/disk5 /vdev/disk6
2007-12-30.11:26:59 zpool scrub tank
2007-12-30.11:30:09 zpool replace tank /vdev/disk5 /vdev/disk7
2007-12-30.11:30:23 zpool scrub tank
2007-12-30.11:31:06 zpool replace tank /vdev/disk6 /vdev/disk8
2007-12-30.11:32:23 zpool scrub tank
# uname -a
SunOS solaris 5.11 snv_70 i86pc i386 i86pc
#
This message posted from opensolaris.org
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss