iscsiadm -m iface and routing

2009-07-09 Thread Hoot, Joseph

Hi all,

I'm currently attempting to implement a Dell EqualLogic iSCSI solution
connected through m1000e switches to Dell m610 blades with (2) iSCSI
dedicated nics in each blade running Oracle VM Server v2.1.5 (which, I
believe, is based off of RHEL5.1).

[r...@oim6102501 log]# rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.868-0.7.el5
[r...@oim6102501 log]# uname -a
Linux oim6102501 2.6.18-8.1.15.3.1.el5xen #1 SMP Tue May 12 19:21:30 EDT
2009 i686 i686 i386 GNU/Linux

I initially set up a bond with the two nics to test this out and had
no problems.  This allows for failover (active-passive bond).  I
discovered my targets, logged into the sessions, and recognized everything
from dm-multipath.  I fdisked, formatted the drives, and mounted them.  I
then ran lots of dd's to and from the disks, with speeds around 90MB/sec
when running `dd if=/dev/mapper/ovm-1-lun0p1 of=/dev/null bs=1M count=1000`
Since this is going to be one of many VM servers in our OVM cluster,
with multiple VMs running on it (many of which are database servers), I
wanted to try to make this more efficient.  I read that by using the
`iscsiadm -m iface` syntax you can, instead of bonding, set up the two
nics individually, each with an IP on the same segment as your iSCSI
storage.  From what I understand, this allows two sessions to be
created to each volume, which should give you a little more throughput.
I did this:

iscsiadm -m iface -I eth2 --op=new
iscsiadm -m iface -I eth3 --op=new
iscsiadm -m iface -I eth2 --op=update -n iface.hwaddress -v 00:10:18:3A:5B:6C
iscsiadm -m iface -I eth3 --op=update -n iface.hwaddress -v 00:10:18:3A:5B:6E
iscsiadm -m discovery -t st -p 192.168.0.19 -P 1
iscsiadm -m node --loginall=all

dm-multipath now sees 2 paths to each volume.  If I run `iscsiadm -m
session -P 3` I can see which /dev/sdX device is used by which multipath
device.  I have multipath set up to load balance across both paths with
rr_min_io set to 10 in /etc/multipath.conf (my understanding is that
this sends 10 I/Os down one path and then switches to the other path).
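
For reference, the relevant multipath section looks roughly like the
fragment below.  This is an illustrative sketch, not my exact production
file; the EQLOGIC vendor/product strings and the defaults shown are what
I believe EqualLogic arrays report, so double-check against your own
`multipath -ll` output:

# /etc/multipath.conf -- illustrative fragment only
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor "EQLOGIC"
                product "100E-00"
                path_grouping_policy multibus    # both paths in one group
                path_selector "round-robin 0"    # alternate between paths
                rr_min_io 10                     # 10 I/Os per path before switching
        }
}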

my eth2 is 192.168.0.151
my eth3 is 192.168.0.161

my group is 192.168.0.19
my eql interface#1 is 192.168.0.30
my eql interface#2 is 192.168.0.31

[r...@oim6102501 log]# netstat -rn
Kernel IP routing table
Destination   Gateway       Genmask         Flags  MSS Window  irtt Iface
10.0.10.0     0.0.0.0       255.255.255.0   U        0 0          0 vlan10
169.254.0.0   0.0.0.0       255.255.0.0     U        0 0          0 eth3
192.168.0.0   0.0.0.0       255.255.0.0     U        0 0          0 eth2
192.168.0.0   0.0.0.0       255.255.0.0     U        0 0          0 eth3
0.0.0.0       10.0.10.254   0.0.0.0         UG       0 0          0 vlan10

PROBLEM:
========
I'm now having some odd issues.  Everything appears to work fine: I
can mount drives and dd stuff around.  But then I will occasionally get
"Reset received on the connection" in my EqualLogic logs.  I see the
same thing in /var/log/messages from the kernel SCSI layer as well as
the dm-multipath layer.  When I look at `iscsiadm -m session -P 2` I
see the following:

iSCSI Connection State: TRANSPORT WAIT
iSCSI Session State: Unknown
Internal iscsid Session State: REPOEN

By the way, is that a bug?  REPOEN?  Should it be REOPEN?

Within about 1-2 minutes it will reconnect, but I'm a bit baffled as to
what would cause this.

QUESTION:
=========
Since I am creating two iSCSI sessions (one out eth2 and one out eth3),
I'm wondering how routing plays into sessions.  Since iscsiadm is given
the hwaddress, does iscsid need to care much about routing?  In other
words, say that, for whatever reason, the session I had through eth3
(which, by the way, is connected to 192.168.0.31 on the EqualLogic)
times out.  iscsid sees this and attempts to REOPEN the session.  Since
my routing table shows eth2 above the route for eth3, IP-wise eth2 is
the interface that would typically be chosen for that traffic.  Is
iscsid smart enough to ignore that route and instead select the iface
(based on hwaddress) to use for that reconnection?
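
In case it helps anyone answer, this is how I've been checking which
iface each session is bound to (these are standard iscsiadm invocations
as I understand them; the grep is just to narrow the output):

# List configured ifaces and their bound hwaddresses
iscsiadm -m iface
# Per-session detail; the "Iface Name:" and "Iface HWaddress:" lines
# show which bound interface each session is using
iscsiadm -m session -P 1 | grep -i iface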

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
open-iscsi group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: iscsiadm -m iface and routing

2009-07-09 Thread Pasi Kärkkäinen

On Thu, Jul 02, 2009 at 09:41:30AM -0400, Hoot, Joseph wrote:
 
 [snip -- original message quoted in full above]
 
 QUESTION:
 =
 Since I am creating two iscsi sessions (one out eth2 and eth3), I'm
 wondering how routing plays into sessions.  Since iscsiadm is given the
 hwaddress, does iscsid need to care much about routing?  In other words,
 let's say that, for whatever reason, my session that I had through eth3
 (that session, by the way, is connected to 192.168.0.31 on the
 EqualLogic) timesout.  iscsid sees this and attempts to REOPEN the
 session.  Since my routing table shows eth2 above the route for eth3,
 IP-wise, eth2 will be the interface that would typically be chosen for
 that traffic to route out of.  However, is iscsid smart enough to not
 use that route and instead select the iface (based on hwaddress) to use
 for that reconnection?
 

You can also use the ethernet interface name to make sure the correct
iface is always used.

# iscsiadm -m iface -I iface3 -o new
New interface iface3 added

# iscsiadm -m iface -I iface3 --op=update -n iface.net_ifacename -v eth1.234
iface3 updated.

Replace the eth1.234 VLAN with whatever you use, for example eth3.

You can also specify these things in the /var/lib/iscsi/ifaces/
directory.  Create a file called ifaceX and put something like this in it:

iface.iscsi_ifacename = ifaceX
iface.transport_name = tcp
iface.net_ifacename = eth0.xyz

or you could replace the iface.net_ifacename with

iface.hwaddress = 00:DE:AD:BE:EF:00

if needed.
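
Once the ifaces are defined, you would then (re)run discovery through
them so the node records get bound to each iface, and log in again.
Something like the following -- the iface names here are examples, and
passing -I more than once during discovery depends on your iscsiadm
version supporting it, so check your man page:

# Discover targets through both bound ifaces; this creates one node
# record per (target, iface) pair
iscsiadm -m discovery -t st -p 192.168.0.19 -I iface2 -I iface3
# Log in to all discovered nodes (one session per iface per target)
iscsiadm -m node --loginall=all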

-- Pasi