I'm cross-posting here from linux-iscsi-users since I've seen no
traffic in the weeks since I posted this.

Hi, I need a little help or advice with my setup.  I'm trying to
configure multipathed iSCSI on a CentOS 5.4 (RHEL 5.4 clone) box.

Very short version: One server with two NICs for iSCSI sees storage on
an EMC array.  The storage shows up as four disks, but only one works.

So far single connections work: if I set up the box to use one NIC, I
get one connection and can use it just fine.

When I set up multiple connections, I have problems...
I created two interfaces and assigned each one to a NIC:
iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth2
iscsiadm -m iface -I iface1 --op=new
iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth3
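
(As a sanity check, the bindings can be listed afterwards; something
like the following should show iface0 tied to eth2 and iface1 to eth3:)
iscsiadm -m iface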

Each interface saw two paths to its storage, four total; so far so
good.
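
(For completeness, the discovery step was something along these lines;
192.168.0.10 is just a placeholder for the array's portal IP:)
iscsiadm -m discovery -t sendtargets -p 192.168.0.10 -I iface0 -I iface1
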
I logged all four of them in with:
iscsiadm -m node -T <long ugly string here>  -l

I could see I was connected to all four via
iscsiadm -m session
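
(If the session-to-device mapping matters, the verbose form should show
which sdX got attached to which session/NIC:)
iscsiadm -m session -P 3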

At this point I thought I was set; I had four new devices:
/dev/sdb /dev/sdc /dev/sdd /dev/sde

Ignoring multipath for now, here's where the problem started.  I have
all four devices, but I can only communicate through one of them:
/dev/sdc.

As a quick test I tried to fdisk all four devices, to see if I saw
the same thing in each place, and only /dev/sdc works.
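
(Concretely, that quick test was just something like:
fdisk -l /dev/sdb
fdisk -l /dev/sdc
fdisk -l /dev/sdd
fdisk -l /dev/sde
with only the /dev/sdc run behaving as expected.)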

Turning on multipath, I got a multipathed device consisting of sdb, sdc,
sdd, and sde, but sdb, sdd, and sde are marked failed with the message:
checker msg is "emc_clariion_checker: Logical Unit is unbound or LUNZ"
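
(In case it helps with diagnosis: multipath -ll lists the per-path
states, and if those three paths really are LUNZ place-holders, a raw
inquiry on one of them, e.g. sg_inq /dev/sdb from sg3_utils, should
report LUNZ in the product field rather than the real LUN.)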


I'm in the dark here.  Is this expected behavior, or am I doing
something obviously wrong?

Thanks
--Kyle
