Hi all,

I’m currently attempting to implement a Dell EqualLogic iSCSI solution
connected through M1000e switch modules to Dell M610 blades, each with
(2) iSCSI-dedicated NICs, running Oracle VM Server v2.1.5 (which, I
believe, is based on RHEL 5.1).

[r...@oim6102501 log]# rpm -qa | grep iscsi
[r...@oim6102501 log]# uname -a
Linux oim6102501 2.6.18- #1 SMP Tue May 12 19:21:30
EDT 2009 i686 i686 i386 GNU/Linux

I initially set up a bond with the two NICs to test this out and had
no problems.  This allows for failover (active-passive bond).  I
discovered my targets, logged into the sessions, and dm-multipath
recognized everything.  I fdisked, formatted, and mounted the drives.
I then ran lots of dd’s to and from the disks, with speeds around
90MB/sec for `dd if=/dev/mapper/ovm-1-lun0p1 of=/dev/null bs=1M
count=1000`.
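For reference, the active-passive bond I tested looked roughly like the
RHEL5-style config below (the bond address is a placeholder for my real
iSCSI-segment IP):

```shell
# /etc/modprobe.conf -- load the bonding driver for bond0
alias bond0 bonding

# /etc/sysconfig/network-scripts/ifcfg-bond0
# mode=active-backup gives failover only (no aggregation);
# miimon=100 checks link state every 100 ms
DEVICE=bond0
IPADDR=192.168.10.10        # placeholder iSCSI-segment address
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth2 (ifcfg-eth3 is identical)
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```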

Since this is going to be one of many VM servers in our OVM cluster
with multiple VMs running on it (many of which are database servers),
I wanted to make this more efficient.  I read that by using the
`iscsiadm -m iface` syntax you can (instead of bonding) set up the
(2) NICs individually, each with an IP on the same segment as your
iSCSI storage.  From what I understand, this allows (2) sessions to be
created to each volume, which should give you a little more
throughput.  I did this:

iscsiadm -m iface -I eth2 --op=new
iscsiadm -m iface -I eth3 --op=new
iscsiadm -m iface -I eth2 --op=update -n iface.hwaddress -v 00:10:18:3A:5B:6C
iscsiadm -m iface -I eth3 --op=update -n iface.hwaddress -v 00:10:18:3A:5B:6E
iscsiadm -m discovery -t st -p -P 1
iscsiadm -m node --loginall=all
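For anyone reproducing this, the bindings and resulting sessions can be
sanity-checked afterwards with something like (a sketch, using the same
iface names as above):

```shell
# List the iface definitions and confirm each carries the
# intended hwaddress binding
iscsiadm -m iface

# After login there should be two sessions per target --
# one through each iface
iscsiadm -m session

# Per-session detail: which iface and which /dev/sdX each
# session ended up with
iscsiadm -m session -P 3 | grep -E 'Iface Name|Attached scsi disk'
```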

dm-multipath now sees 2 paths to each volume.  If I run `iscsiadm -m
session -P 3` I can see which /dev/sdX device is used by which
multipath device.  I have multipath set up to load-balance across both
paths with rr_min_io set to 10 in /etc/multipath.conf (my
understanding is that this sends 10 I/Os down one path and then
switches to the other).
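The relevant part of the multipath.conf looks roughly like this (the
WWID and alias below are placeholders, not my real values):

```shell
# /etc/multipath.conf (fragment) -- placeholder WWID and alias
defaults {
    path_grouping_policy multibus   # both paths in one active group
    rr_min_io            10         # 10 I/Os per path before switching
}

multipaths {
    multipath {
        wwid   36090a028e0f3aabbccdd00112233     # placeholder
        alias  ovm-1-lun0
    }
}
```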

my eth2 is
my eth3 is

my group is
my eql interface#1 is
my eql interface#2 is

[r...@oim6102501 log]# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window
irtt Iface   U         0 0
0 vlan10     U         0 0
0 eth3     U         0 0
0 eth2     U         0 0
0 eth3         UG        0 0
0 vlan10

I'm now having some odd issues where everything appears to work fine.
I can mount drives and dd stuff around.  But then I will occasionally
get "Reset received on the connection" from my EqualLogic logs.  I
will see the same thing in /var/log/messages from my kernel scsi layer
as well as dm-multipath layer.  When I look at `iscsiadm -m session -P
2` I can see the following:

iSCSI Connection State: TRANSPORT WAIT
iSCSI Session State: Unknown
Internal iscsid Session State: REPOEN

By the way, is that a bug?  "REPOEN"?  should it be "REOPEN"?

Within about 1-2 minutes it will reconnect.  But I'm a bit baffled
what would cause this.

Since I am creating two iSCSI sessions (one out eth2 and one out
eth3), I'm wondering how routing plays into sessions.  Since iscsiadm
is given the hwaddress, does iscsid need to care much about routing?
In other words, let's say that, for whatever reason, the session I had
through eth3 (that session, by the way, is connected to
on the EqualLogic) times out.  iscsid sees this and attempts to REOPEN
the session.  Since my routing table shows eth2 above the route for
eth3, IP-wise eth2 is the interface that would typically be chosen for
that traffic.  However, is iscsid smart enough to ignore that route
and instead select the iface (based on hwaddress) to use for the
reconnection?
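In case it's relevant: the usual way to keep two NICs on one subnet
from fighting over routes/ARP is per-interface policy routing plus
loosening reverse-path filtering.  A sketch with placeholder addresses
(table numbers 2 and 3 are arbitrary):

```shell
# Placeholders: eth2=192.168.10.11, eth3=192.168.10.12,
# iSCSI subnet 192.168.10.0/24

# Give each NIC its own routing table...
ip route add 192.168.10.0/24 dev eth2 src 192.168.10.11 table 2
ip route add 192.168.10.0/24 dev eth3 src 192.168.10.12 table 3

# ...and select the table by source address
ip rule add from 192.168.10.11 table 2
ip rule add from 192.168.10.12 table 3

# Disable strict reverse-path filtering so replies arriving on
# the "other" NIC aren't dropped
sysctl -w net.ipv4.conf.eth2.rp_filter=0
sysctl -w net.ipv4.conf.eth3.rp_filter=0
```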
