Thanks for the response, Mike.  I've gone through so many changes over
the last week, and the environment has pretty much been peeled back to
the core.  We're just now building everything back up.  If we run into
the issue again, I'll be sure to check the logs for more specifics as
well as pay attention to those tunables in iscsid.conf.

-----Original Message-----
From: open-iscsi@googlegroups.com [mailto:open-is...@googlegroups.com]
On Behalf Of Mike Christie
Sent: Thursday, July 09, 2009 2:49 AM
To: open-iscsi@googlegroups.com
Subject: Re: iscsiadm -m iface + routing


hootjr29 wrote:
> Hi all,
> 
> I'm currently attempting to implement a Dell EqualLogic iSCSI solution
> connected through m1000e switches to Dell m610 blades, with (2)
> dedicated iSCSI NICs in each blade, running Oracle VM Server v2.1.5
> (which, I believe, is based on RHEL 5.1).
> 
> [r...@oim6102501 log]# rpm -qa | grep iscsi
> iscsi-initiator-utils-6.2.0.868-0.7.el5
> [r...@oim6102501 log]# uname -a
> Linux oim6102501 2.6.18-8.1.15.3.1.el5xen #1 SMP Tue May 12 19:21:30
> EDT 2009 i686 i686 i386 GNU/Linux
> 
> To test this out initially, I set up a bond with the two NICs and had
> no problems.  This allows for failover (active-passive bond).  I
> discovered my targets, logged into sessions, and everything was
> recognized by dm-multipath.  I fdisked, formatted the drives, and
> mounted them.  I then ran lots of dd's to and from the disks, with
> speeds around 90MB/sec when running `dd if=/dev/mapper/ovm-1-lun0p1
> of=/dev/null bs=1M count=1000`.
> 
> Since this is going to be one of many VM servers in our OVM cluster
> with multiple VMs running on it (many of which are database servers),
> I wanted to try to make this more efficient.  I read that by using
> the `iscsiadm -m iface` syntax you can (instead of bonding) set up
> (2) NICs individually, each with an IP on the same segment as your
> iSCSI storage.  From what I understand, this allows (2) sessions to
> be created to each volume, which should give you a little more
> throughput.  I did this:
> 
> iscsiadm -m iface -I eth2 --op=new
> iscsiadm -m iface -I eth3 --op=new
> iscsiadm -m iface -I eth2 --op=update -n iface.hwaddress -v 00:10:18:3A:5B:6C
> iscsiadm -m iface -I eth3 --op=update -n iface.hwaddress -v 00:10:18:3A:5B:6E
> iscsiadm -m discovery -t st -p 192.168.0.19 -P 1
> iscsiadm -m node --loginall=all
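> 
> (A per-iface login would also be possible with something like the
> following; the IQN here is only a placeholder for one of the
> discovered targets:)
> 
> iscsiadm -m node -T <target_iqn> -I eth2 --login
> iscsiadm -m node -T <target_iqn> -I eth3 --login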
> 
> dm-multipath now sees 2 paths to each volume.  If I run `iscsiadm -m
> session -P 3` I can see which /dev/sdX device is used by which
> multipath device.  I have multipath set up to load balance across
> both paths, with rr_min_io set to 10 in /etc/multipath.conf (my
> understanding is that this sends 10 I/Os down one path and then
> switches to the other).
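> 
> (For reference, a rough sketch of the kind of /etc/multipath.conf
> settings meant here; the values are illustrative, not the exact file:)
> 
> defaults {
>         path_grouping_policy    multibus
>         rr_min_io               10
> }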
> 
> my eth2 is 192.168.0.151
> my eth3 is 192.168.0.161
> 
> my group is 192.168.0.19
> my eql interface#1 is 192.168.0.30
> my eql interface#2 is 192.168.0.31
> 
> [r...@oim6102501 log]# netstat -rn
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
> 10.0.10.0       0.0.0.0         255.255.255.0   U         0 0          0 vlan10
> 169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth3
> 192.168.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth2
> 192.168.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth3
> 0.0.0.0         10.0.10.254     0.0.0.0         UG        0 0          0 vlan10
> 
> PROBLEM:
> ========
> I'm now having some odd issues where everything appears to work fine.
> I can mount drives and dd stuff around.  But then I will occasionally
> get "Reset received on the connection" in my EqualLogic logs.  I
> will see the same thing in /var/log/messages from my kernel SCSI
> layer as well as the dm-multipath layer.  When I look at `iscsiadm -m
> session -P 2` I can see the following:
> 
> iSCSI Connection State: TRANSPORT WAIT
> iSCSI Session State: Unknown
> Internal iscsid Session State: REPOEN
> 
> By the way, is that a bug?  "REPOEN"?  Should it be "REOPEN"?

Yes. I put up a fix on open-iscsi.org.

> 
> Within about 1-2 minutes it will reconnect.  But I'm a bit baffled as
> to what would cause this.

In /var/log/messages do you see something about a nop or ping timing 
out, or do you just see something about a host reset succeeding?

The initiator sends an iSCSI ping (a nop command) to the target every
node.conn[0].timeo.noop_out_interval seconds. If we do not get a
response within node.conn[0].timeo.noop_out_timeout seconds, then we
will drop the connection and reconnect.

If we reconnected within a couple of seconds, then it might have been
the driver being too aggressive, and you may want to increase those
nop values. If it is taking a couple of minutes to reconnect, then
maybe something is temporarily wrong with the network.
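
For example, something like the following in /etc/iscsi/iscsid.conf
(the values are only illustrative; what fits depends on your network):

node.conn[0].timeo.noop_out_interval = 10
node.conn[0].timeo.noop_out_timeout = 30

Note that iscsid.conf only affects newly discovered nodes; existing
node records can be updated in place, e.g.:

iscsiadm -m node -T <target_iqn> --op=update -n node.conn[0].timeo.noop_out_interval -v 10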


If you are only seeing the host reset succeed message, then you are
probably sending more IO than the target device can handle. You should
try lowering node.session.cmds_max and node.session.queue_depth.
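
As an illustration only (if I remember right, the stock defaults are
128 and 32; suitable values depend on the target):

node.session.cmds_max = 64
node.session.queue_depth = 16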


> 
> QUESTION:
> =========
> Since I am creating two iSCSI sessions (one out eth2 and one out
> eth3), I'm wondering how routing plays into sessions.  Since iscsiadm
> is given the hwaddress, does iscsid need to care much about routing?
> In other words, let's say that, for whatever reason, my session
> through eth3 (that session, by the way, is connected to 192.168.0.31
> on the EqualLogic) times out.  iscsid sees this and attempts to
> REOPEN the session.  Since my routing table shows eth2 above the
> route for eth3, IP-wise eth2 is the interface that would typically be
> chosen for that traffic to route out of.  However, is iscsid smart
> enough to not use that route and instead select the iface (based on
> hwaddress) to use for that reconnection?
> 

When you use the iface binding, iscsid is going to tell the network
layer to ignore the route tables and bind the session to whatever NIC
you specified in the iface/target binding.
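
A quick way to confirm which NIC each session ended up bound to is the
session overview (the exact fields vary a little between versions):

iscsiadm -m session -P 1

Each session should show an Iface Name / Iface HWaddress matching what
you set with iscsiadm -m iface.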


