Re: gmultipath, ses and shared disks / cant seem to share between local nodes

2013-04-17 Thread Teske, Devin

On Apr 17, 2013, at 3:26 PM, Outback Dingo wrote:

 OK, maybe I'm at a loss here in the way my brain is viewing this.
 
 We have a box with 2 nodes in the chassis and 32 SATA drives attached to a
 SATA/SAS backplane via 4 (2 per node) LSI MPT SAS2 cards. Shouldn't I
 logically be seeing 4 controllers x drive count?
 
 camcontrol devlist shows 32 devices, daX,passX and sesX,passX:
 
 SEAGATE ST33000650SS 0004 at scbus0 target 9 lun 0 (da0,pass0)
 STORBRICK-3 1400 at scbus0 target 10 lun 0 (ses0,pass1)
 SEAGATE ST33000650SS 0004 at scbus0 target 11 lun 0 (da1,pass2)
 STORBRICK-1 1400 at scbus0 target 12 lun 0 (ses1,pass3)
 SEAGATE ST33000650SS 0004 at scbus0 target 13 lun 0 (da2,pass4)
 STORBRICK-2 1400 at scbus0 target 14 lun 0 (ses2,pass5)
 SEAGATE ST33000650SS 0004 at scbus0 target 15 lun 0 (da3,pass6)
 STORBRICK-4 1400 at scbus0 target 16 lun 0 (ses3,pass7)
 SEAGATE ST33000650SS 0004 at scbus0 target 17 lun 0 (da4,pass8)
 STORBRICK-6 1400 at scbus0 target 18 lun 0 (ses4,pass9)
 SEAGATE ST33000650SS 0004 at scbus0 target 19 lun 0 (da5,pass10)
 STORBRICK-0 1400 at scbus0 target 20 lun 0 (ses5,pass11)
 SEAGATE ST33000650SS 0004 at scbus0 target 21 lun 0 (da6,pass12)
 STORBRICK-7 1400 at scbus0 target 22 lun 0 (ses6,pass13)
 SEAGATE ST33000650SS 0004 at scbus0 target 23 lun 0 (da7,pass14)
 STORBRICK-5 1400 at scbus0 target 24 lun 0 (ses7,pass15)
 SEAGATE ST9300605SS 0004 at scbus1 target 0 lun 0 (da8,pass16)
 SEAGATE ST9300605SS 0004 at scbus1 target 1 lun 0 (da9,pass17)
 STORBRICK-3 1400 at scbus8 target 10 lun 0 (ses8,pass19)
 SEAGATE ST33000650SS 0004 at scbus8 target 11 lun 0 (da11,pass20)
 STORBRICK-1 1400 at scbus8 target 12 lun 0 (ses9,pass21)
 SEAGATE ST33000650SS 0004 at scbus8 target 13 lun 0 (da12,pass22)
 STORBRICK-2 1400 at scbus8 target 14 lun 0 (ses10,pass23)
 SEAGATE ST33000650SS 0004 at scbus8 target 15 lun 0 (da13,pass24)
 STORBRICK-4 1400 at scbus8 target 16 lun 0 (ses11,pass25)
 SEAGATE ST33000650SS 0004 at scbus8 target 17 lun 0 (da14,pass26)
 STORBRICK-6 1400 at scbus8 target 18 lun 0 (ses12,pass27)
 SEAGATE ST33000650SS 0004 at scbus8 target 19 lun 0 (da15,pass28)
 STORBRICK-0 1400 at scbus8 target 20 lun 0 (ses13,pass29)
 SEAGATE ST33000650SS 0004 at scbus8 target 21 lun 0 (da16,pass30)
 STORBRICK-7 1400 at scbus8 target 22 lun 0 (ses14,pass31)
 SEAGATE ST33000650SS 0004 at scbus8 target 23 lun 0 (da17,pass32)
 STORBRICK-5 1400 at scbus8 target 24 lun 0 (ses15,pass33)
 USB 2.0 Flash Drive 8.07 at scbus9 target 0 lun 0 (da18,pass34)
 
 
 We would like to create a zpool from all the devices so that, in theory, if
 nodeA failed, nodeB could force-import the pool.

gmultipath (which you mention in the subject) is the appropriate tool for this, 
but there's no need for an import of the pool if you build the pool out of 
multipath devices. In our experience, we can pull a cable and ZFS continues 
working just fine.

In other words, don't build the pool out of the raw devices; put a gmultipath 
label on each device and then use /dev/multipath/LABEL for the zpool devices.


 nodeA and nodeB are attached through dual LSI controllers to the SATA/SAS
 backplane, but I can't seem to create a zpool from sesX or passX devices.
 I can, however, create a 16-drive zpool on either node from any daX device.
 What did I miss? I've looked at gmirror and also the SES documents. Any
 insight is appreciated; thanks in advance.

gmirror is the wrong tool; gmultipath is what you want. The basic task is to 
run "gmultipath label FOO da#" to write a cookie onto the disk (used to 
identify new/existing paths during GEOM taste events, for example).

After you've labeled the da# devices with gmultipath, run "gmultipath status" 
to see the components of each label, and use multipath/LABEL as your disk name 
when creating the zpool (these correspond directly to /dev/multipath/LABEL, 
but zpool create … and zpool add … allow you to omit the leading /dev).
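
A minimal sketch of that workflow as a script (assumptions: FreeBSD with the 
geom_multipath module loaded; the four-disk list, the DISK# label names, and 
the pool name "tank" are invented for illustration, not from this thread). 
On a machine without gmultipath the commands are only echoed, so it is safe 
to dry-run:

```shell
#!/bin/sh
# Echo-only mode when gmultipath isn't available (e.g. outside FreeBSD).
command -v gmultipath >/dev/null 2>&1 || DRYRUN=1

# Run a command, or just print it when DRYRUN is set.
run() { if [ -n "$DRYRUN" ]; then echo "$@"; else "$@"; fi; }

mpath_setup() {
    i=0
    for disk in da0 da1 da2 da3; do
        # Write a multipath cookie onto the disk so every path to the
        # same physical disk is recognized during GEOM tasting.
        run gmultipath label "DISK$i" "/dev/$disk"
        i=$((i + 1))
    done

    run gmultipath status   # each label should list one component per path

    # Build the pool from the multipath nodes, not the raw da# devices.
    run zpool create tank raidz2 multipath/DISK0 multipath/DISK1 \
        multipath/DISK2 multipath/DISK3
}

mpath_setup
```

Because the pool is built on multipath/DISK# names, a path failure is handled 
below ZFS and no export/import is needed for the surviving path to keep serving.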
-- 
Devin

_
The information contained in this message is proprietary and/or confidential. 
If you are not the intended recipient, please: (i) delete the message and all 
copies; (ii) do not disclose, distribute or use the message in any manner; and 
(iii) notify the sender immediately. In addition, please be aware that any 
message addressed to our domain is subject to archiving and review by persons 
other than the intended recipient. Thank you.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: gmultipath, ses and shared disks / cant seem to share between local nodes

2013-04-17 Thread Outback Dingo
On Wed, Apr 17, 2013 at 6:39 PM, Teske, Devin devin.te...@fisglobal.com wrote:


 [...]

Sanity-check me. On node A I did:

zpool destroy master

gmultipath label FOO da0

gmultipath status
                    Name    Status  Components
           multipath/FOO  DEGRADED  da0 (ACTIVE)
 multipath/FOO-619648737  DEGRADED  da1 (ACTIVE)
 multipath/FOO-191725652  DEGRADED  da2 (ACTIVE)
multipath/FOO-1539342315  DEGRADED  da3 (ACTIVE)
multipath/FOO-1276041606  DEGRADED  da4 (ACTIVE)
multipath/FOO-2000832198  DEGRADED  da5 (ACTIVE)
multipath/FOO-1285640577  DEGRADED  da6 (ACTIVE)
multipath/FOO-1816092574  DEGRADED  da7 (ACTIVE)
    multipath/FOO-110225  DEGRADED  da8 (ACTIVE)
 multipath/FOO-330300690  DEGRADED  da9 (ACTIVE)
  multipath/FOO-92140635  DEGRADED  da10 (ACTIVE)
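
Each label above shows DEGRADED with only a single da# component, which 
presumably means gmultipath currently sees just one path per label. A 
hypothetical helper (my own sketch, not from the thread) that flags such 
single-path labels by parsing "gmultipath status" output:

```shell
#!/bin/sh
# Flag any gmultipath label with fewer than two path components.
# Reads status text on stdin so it can be exercised without real hardware.
check_paths() {
    awk '$1 ~ /^multipath\// {
        n = (NF - 2) / 2          # components appear as "daX (STATE)" pairs
        if (n < 2) print $1 " has " n " path(s); expected 2"
    }'
}

# Example with a line like the output above
# (on a live system: gmultipath status | check_paths):
printf '           multipath/FOO  DEGRADED  da0 (ACTIVE)\n' | check_paths
# prints: multipath/FOO has 1 path(s); expected 2
```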
 
