Hi,

mi...@r600:/rpool/tmp# zpool status test
 pool: test
state: ONLINE
scrub: none requested
config:

   NAME             STATE     READ WRITE CKSUM
   test             ONLINE       0     0     0
     /rpool/tmp/f1  ONLINE       0     0     0

errors: No known data errors

Let's add a cache device:

mi...@r600:/rpool/tmp# zfs create -V 100m rpool/tmp/ssd2
mi...@r600:/rpool/tmp# zpool add test cache /dev/zvol/dsk/rpool/tmp/ssd2
mi...@r600:/rpool/tmp# zpool status test
 pool: test
state: ONLINE
scrub: none requested
config:

   NAME                            STATE     READ WRITE CKSUM
   test                            ONLINE       0     0     0
     /rpool/tmp/f1                 ONLINE       0     0     0
   cache
     /dev/zvol/dsk/rpool/tmp/ssd2  ONLINE       0     0     0

errors: No known data errors
mi...@r600:/rpool/tmp#

Now let's export the pool, re-create the zvol, and then import the pool again:

mi...@r600:/rpool/tmp# zpool export test
mi...@r600:/rpool/tmp# zfs destroy rpool/tmp/ssd2
mi...@r600:/rpool/tmp# zfs create -V 100m rpool/tmp/ssd2
mi...@r600:/rpool/tmp# zpool import -d /rpool/tmp/ test

mi...@r600:/rpool/tmp# zpool status test
 pool: test
state: ONLINE
scrub: none requested
config:

   NAME                            STATE     READ WRITE CKSUM
   test                            ONLINE       0     0     0
     /rpool/tmp/f1                 ONLINE       0     0     0
   cache
     /dev/zvol/dsk/rpool/tmp/ssd2  ONLINE       0     0     0

errors: No known data errors
mi...@r600:/rpool/tmp#


No complaint here...
I'm not entirely sure it should behave that way - in some circumstances it could be risky. For example, what if a zvol/SSD/disk that is used on one server as a cache device has the same path on another server, and a pool is then imported there? Would L2ARC just blindly start using it as a cache device, overwriting someone else's data?

Shouldn't L2ARC devices have a label/signature, or at least record the UUID of the disk, and be checked during import to verify it is the same device? Or maybe they do, and there is some other issue here with re-creating the zvol...
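One way to poke at this (just a sketch, using the paths from the example above - these commands need root and a live pool, so treat the expectations as guesses) would be to dump the vdev labels on the re-created zvol with zdb and see whether the import actually stamped it as a cache device or simply trusted the path:

```shell
# Dump the four ZFS vdev labels from the re-created zvol.
# If the import wrote (or required) a proper L2ARC label, we'd
# expect to see one here with a guid; if the labels are empty or
# unreadable, it looks like the device was matched by path alone.
zdb -l /dev/zvol/dsk/rpool/tmp/ssd2
```

Comparing the guid in that label (if any) against what the pool config expects would show whether any identity check is happening at import time.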

btw: x86, snv_127

--
Robert Milkowski
http://milek.blogspot.com





_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
