Thanks for the response, Matthew.

> > I don't think it's a hardware issue because it seems to be still
> > working fine, and has been for months.
> 
> "Working fine", except that you can't access your pool, right? :-)

Well, the computer and disk controller worked fine when I tried them in Linux with a 
different set of disks.  Even if one of the disks or the controller had failed, I 
do not think that should destroy the pool, should it?

> We might be able to figure out more exactly what went wrong if you can
> send the output of:
> 
> zpool status -x
> zdb -v tank
>    (which might not work)
> zdb -l /dev/dsk/c0t1d0
> zdb -l /dev/dsk/... (for each of the other devices in the pool)
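
Here is the output.  (Aside: to run the zdb -l step against every disk in one
go, I assume a small loop along these lines would do it; the device names are
just the ones from the pool config below.)

for d in c0t0d0 c0t1d0 c0t2d0 c0t4d0; do
    echo "=== $d ==="      # separator so each disk's label output is easy to spot
    zdb -l /dev/dsk/$d     # dump the four ZFS labels for that disk
done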

# zpool status -x
  pool: tank
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-CS
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        FAULTED      0     0     6  corrupted data
          raidz     ONLINE       0     0     6
            c0t0d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0

# zdb -v tank
    version=2
    name='tank'
    state=0
    txg=4
    pool_guid=13701012839440790608
    vdev_tree
        type='root'
        id=0
        guid=13701012839440790608
        children[0]
                type='raidz'
                id=0
                guid=4302888056402016629
                metaslab_array=13
                metaslab_shift=33
                ashift=9
                asize=1280238944256
                children[0]
                        type='disk'
                        id=0
                        guid=15193057146179069576
                        path='/dev/dsk/c0t0d0s0'
                        whole_disk=1
                children[1]
                        type='disk'
                        id=1
                        guid=5113010407673419800
                        path='/dev/dsk/c0t1d0s0'
                        whole_disk=1
                children[2]
                        type='disk'
                        id=2
                        guid=11908389868437050464
                        path='/dev/dsk/c0t2d0s0'
                        whole_disk=1
                children[3]
                        type='disk'
                        id=3
                        guid=800140628824658935
                        path='/dev/dsk/c0t4d0s0'
                        devid='id1,[EMAIL PROTECTED]/a'
                        whole_disk=1

# zdb -l /dev/dsk/c0t0d0
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3


The result is the same for the other three disks (c0t1d0, c0t2d0 and c0t4d0).  I am 
new to ZFS, so I am not sure what these results mean, but they do not look good. 
 What I find strange is that zdb -v tank shows a devid for the fourth child, but not 
for the others.
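
As far as I can tell from the on-disk documentation, ZFS keeps four 256 KB labels
per vdev, two at the front of the device and two at the end, so I was thinking of
poking at the start of the raw slice to see whether anything recognisable is left
there.  Something like this is what I had in mind (just a guess on my part, since
I am new to this; using the s0 slice is an assumption based on whole_disk=1 in the
config above):

# read the first two label areas (2 x 256 KB) from the start of the slice
# and look for the pool name in whatever text is in there
dd if=/dev/rdsk/c0t0d0s0 bs=256k count=2 2>/dev/null | strings | grep tank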

Any ideas?


Thanks,
Siegfried
 
 