I have a ZFS pool that has become corrupted. The pool consists of a single
vdev, which is actually a file sitting on a UFS filesystem. The machine was
accidentally halted, and the pool is now corrupt. There are (of course) no
backups, and I've been asked to recover the pool. The system panics whenever I
try to do anything with the pool:
root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed: dmu_read(os,
smo->smo_object, offset, size, entry_map) == 0 (0x5 == 0x0), file:
../../common/fs/zfs/space_map.c, line: 319
<system reboots>
I've booted single user and moved /etc/zfs/zpool.cache out of the way.
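For the record, that was nothing more exotic than the following (the .bad
suffix is just the name I picked):

  # mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad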
With the cache file out of the picture I can get to the pool from the command
line again. However, zdb fails with a similar assertion:
root@kestrel:/opt$ zdb -U -bcv zones
Traversing all blocks to verify checksums and verify nothing leaked ...
Assertion failed: dmu_read(os, smo->smo_object, offset, size, entry_map) == 0
(0x5 == 0x0), file ../../../uts/common/fs/zfs/space_map.c, line 319
Abort (core dumped)
I've read Victor's suggestion to invalidate the active uberblock, forcing ZFS
to fall back to an older uberblock and thereby recover the pool. However, I
don't know how to work out the on-disk offset of the active uberblock. I have
the following information from zdb:
root@kestrel:/opt$ zdb -U -uuuv zones
Uberblock
magic = 0000000000bab10c
version = 4
txg = 1504158
guid_sum = 10365405068077835008
timestamp = 1229142108 UTC = Sat Dec 13 15:21:48 2008
rootbp = [L0 DMU objset] 400L/200P DVA[0]=<0:52e3edc00:200>
DVA[1]=<0:6f9c1d600:200> DVA[2]=<0:16e280400:200> fletcher4 lzjb LE contiguous
birth=1504158 fill=172 cksum=b0a5275f3:474e0ed6469:e993ed9bee4d:205661fa1d4016
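From my reading of the on-disk format document, each 256K label holds a 128K
uberblock ring starting 128K into the label, split into 1K slots when
ashift=9, and the active uberblock should sit in slot (txg % 128). If I have
that right, then for label 0 (which starts at byte 0 of the file) the
arithmetic is:

  slot   = 1504158 % 128              = 30
  offset = (128 * 1024) + (30 * 1024) = 161792 bytes, i.e. 158 x 1K

As a sanity check I'd expect that slot to start with the uberblock magic
0000000000bab10c:

  dd if=/opt/zpool.zones bs=1024 skip=158 count=1 2>/dev/null | od -x | head

Please correct me if the ring layout or the slot size is wrong; this part is
guesswork on my end.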
I've also checked the labels:
root@kestrel:/opt$ zdb -U -lv zpool.zones
--------------------------------------------
LABEL 0
--------------------------------------------
version=4
name='zones'
state=0
txg=4
pool_guid=17407806223688303760
top_guid=11404342918099082864
guid=11404342918099082864
vdev_tree
type='file'
id=0
guid=11404342918099082864
path='/opt/zpool.zones'
metaslab_array=14
metaslab_shift=28
ashift=9
asize=42944954368
--------------------------------------------
LABELs 1 through 3 are identical to LABEL 0, so I've trimmed them here.
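For completeness, my understanding of label placement on a file vdev is two
labels at the front of the file and two in the last 512K, so with the 161792
slot offset from above, the four copies of the active uberblock would be at:

  label 0:  0                   + 161792
  label 1:  262144              + 161792
  label 2:  (filesize - 524288) + 161792
  label 3:  (filesize - 262144) + 161792

where filesize is the real size of /opt/zpool.zones as reported by ls -l, not
the asize above (asize excludes the label areas). Again, that's my reading of
the on-disk spec, not something I've verified.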
I'm hoping somebody here can give me direction on calculating the offset of
the active uberblock, and the dd parameters I'd need to deliberately
invalidate it and force an earlier uberblock into service.
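To make the question concrete: working on a copy of the file, would something
like the following be the right way to knock out the active slot in labels 0
and 1 (with the same done at the label 2 and 3 offsets above)? The seek values
come straight from my guessed arithmetic, so treat them as suspect:

  # label 0: slot 30 of the uberblock ring, 158 x 1K from the start of the file
  dd if=/dev/zero of=/opt/zpool.zones.copy bs=1024 seek=158 count=1 conv=notrunc
  # label 1: the same slot, one full label (256 x 1K) further in
  dd if=/dev/zero of=/opt/zpool.zones.copy bs=1024 seek=414 count=1 conv=notrunc

conv=notrunc matters here because the vdev is a plain file and dd would
otherwise truncate it; zpool.zones.copy is a scratch copy, not the live vdev.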
The pool is currently on Solaris 10 5/08; however, I'll move it to OpenSolaris
if necessary.