Hi,

I found some time and was able to test again:

 - verify with a unique GUID on the device
 - verify with autoreplace = off

Indeed, autoreplace was set to on for the pools, so I disabled it:

VOL     PROPERTY       VALUE       SOURCE
nxvol2  autoreplace    off         default
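
For reference, a minimal sketch of the commands used to toggle and verify the
property (pool name nxvol2 taken from the output above):

```shell
# Disable automatic replacement of devices for this pool
zpool set autoreplace=off nxvol2

# Confirm the property took effect (VALUE should now read "off")
zpool get autoreplace nxvol2
```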

I erased the labels on the cache disk and added it back to the pool. Now the
two cache disks have different GUIDs:
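
The erase/re-add step can be sketched as below (device names are the ones from
this setup; `zpool labelclear` may not exist on older builds, in which case
overwriting the label areas with dd is the usual workaround):

```shell
# Remove the cache device from the pool first
zpool remove nxvol2 c0t2d0

# Clear the on-disk ZFS labels so no stale pool config survives
# (if labelclear is unavailable on this build, dd over the start
# and end of the slice achieves the same)
zpool labelclear -f /dev/rdsk/c0t2d0s0

# Re-add the disk as a cache device; it receives a fresh GUID
zpool add nxvol2 cache c0t2d0
```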

# cache device in node1
r...@nex1:/volumes# zdb -l -e /dev/rdsk/c0t2d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=14
    state=4
    guid=15970804704220025940

# cache device in node2
r...@nex2:/volumes# zdb -l -e /dev/rdsk/c0t2d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=14
    state=4
    guid=2866316542752696853

The GUIDs are different.
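
A quick way to pull just the GUID out of the zdb output for comparison across
nodes (a sketch; the awk pattern assumes the `guid=` line format shown above):

```shell
# Extract the guid= value from the first label printed by zdb -l
zdb -l -e /dev/rdsk/c0t2d0s0 | awk -F= '/^ *guid=/ { print $2; exit }'
```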

However, after switching the pool nxvol2 to node1 (where nxvol1 was already
active), the same disk still shows up as a cache device in both pools:

# nxvol2 switched to this node ... 
volume: nxvol2
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        nxvol2       ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t10d0  ONLINE       0     0     0
            c4t13d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t9d0   ONLINE       0     0     0
            c4t12d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t8d0   ONLINE       0     0     0
            c4t11d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t18d0  ONLINE       0     0     0
            c4t22d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t17d0  ONLINE       0     0     0
            c4t21d0  ONLINE       0     0     0
        cache
          c0t2d0     FAULTED      0     0     0  corrupted data

# nxvol1 was active here before ...
n...@nex1:/$ show volume nxvol1 status
volume: nxvol1
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        nxvol1       ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t15d0  ONLINE       0     0     0
            c4t18d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t14d0  ONLINE       0     0     0
            c4t17d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t13d0  ONLINE       0     0     0
            c4t16d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t12d0  ONLINE       0     0     0
            c4t15d0  ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c3t11d0  ONLINE       0     0     0
            c4t14d0  ONLINE       0     0     0
        cache
          c0t2d0     ONLINE       0     0     0

So this happens both with and without autoreplace, and with different GUIDs on
the devices.
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
