Hi Lutz,

On Jan 20, 2010, at 3:17 AM, Lutz Schumann wrote:

> Hello, 
> 
> we tested clustering with ZFS and the setup looks like this: 
> 
> - 2 head nodes (nodea, nodeb)
> - head nodes contain l2arc devices (nodea_l2arc, nodeb_l2arc)

This makes me nervous. I suspect this is not in the typical QA 
test plan.

> - two external jbods
> - two mirror zpools (pool1, pool2)
>   - each mirror consists of one disk from each jbod
> - no ZIL (does anyone know of a well-priced SAS SSD?)
> 
> We want active/active and added the l2arc to the pools. 
> 
> - pool1 has nodea_l2arc as cache
> - pool2 has nodeb_l2arc as cache
> 
> Everything is great so far. 
> 
> One thing to note is that nodea_l2arc and nodeb_l2arc are named identically!
> (c0t2d0 on both nodes).
> 
> What we found is that during tests, the pool just picked up the device
> nodeb_l2arc automatically, although it was never explicitly added to the pool
> pool1.

This is strange. Each vdev is supposed to be uniquely identified by its GUID.
This is how ZFS can identify the proper configuration when two pools have 
the same name. Can you check the GUIDs (using zdb) to see if there is a
collision?
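For example (assuming the cache device really does show up as c0t2d0 on both
heads, and that the ZFS label sits on slice 0 -- adjust the path if not),
something like this on each node should dump the vdev labels so you can
compare the guid fields:

    # dump the ZFS labels (including the guid fields) of the l2arc device
    zdb -l /dev/rdsk/c0t2d0s0 | grep guid

If the cache devices on nodea and nodeb report different GUIDs, then pool1
picking up the nodeb device purely by name would indeed be unexpected.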
 -- richard

> We had a setup stage when pool1 was configured on nodea with nodea_l2arc and
> pool2 was configured on nodeb without an l2arc. Then we did a failover. Then
> pool1 picked up the (until then) unconfigured nodeb_l2arc.
> 
> Is this intended? Why is an L2ARC device automatically picked up if the
> device name is the same?
> 
> In a later stage we had both pools configured with the corresponding l2arc
> device (po...@nodea with nodea_l2arc and po...@nodeb with nodeb_l2arc). Then
> we also did a failover. The l2arc device of the pool failing over was marked
> as "too many corruptions" instead of "missing".
> 
> So from these tests it looks like ZFS just picks up the device with the same
> name and replaces the l2arc, without checking the device signatures to ensure
> that only devices actually belonging to the pool are used.
> 
> We have not tested this with a data disk named "c0t2d0", but if the same
> behaviour occurs there - God save us all.
> 
> Can someone clarify the logic behind this ? 
> 
> Can someone also give a hint on how to rename SAS disk devices in OpenSolaris?
> (As a workaround I would like to rename c0t2d0 on nodea (nodea_l2arc) to
> c0t24d0 and c0t2d0 on nodeb (nodeb_l2arc) to c0t48d0.)
> 
> P.S. The release is build 104 (NexentaCore 2).
> 
> Thanks!
> -- 
> This message posted from opensolaris.org

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
