Re: Need help with multipath and iscsi in CentOS 5.4

2010-01-08 Thread Kyle Schmitt
Using a single path (without MPIO) as a baseline:

With bonding I saw, on average, 99-100% of the speed (worst case 78%)
of the single path.
With MPIO (2 NICs) I saw, on average, 82% of the speed (worst case 66%)
of the single path.
With MPIO using one NIC (the second ifconfig'd down), I saw, on average,
86% of the speed (worst case 66%) of the single path.

There were situations where bonding and MPIO both scored slightly
higher than the single path, but that is most likely due to
differences on the array, since the tests weren't run back to back.
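
For reference, an iozone run along these lines exercises sequential
write/rewrite and read/reread; the exact flags and file path are
assumptions, since the command used for these tests wasn't posted:

    # sequential write/rewrite (-i 0) and read/reread (-i 1), 8 GB file with
    # 1 MB records; /mnt/iscsi/testfile is a placeholder for the LUN's mount point
    iozone -i 0 -i 1 -s 8g -r 1m -f /mnt/iscsi/testfile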

--Kyle

On 1/6/10, Mike Christie micha...@cs.wisc.edu wrote:
 On 12/30/2009 11:48 AM, Kyle Schmitt wrote:
 On Wed, Dec 9, 2009 at 8:52 PM, Mike Christie <micha...@cs.wisc.edu> wrote:
 So far single connections work: If I setup the box to use one NIC, I
 get one connection and can use it just fine.
 Could you send the /var/log/messages for when you run the login command
 so I can see the disk info?

 Sorry for the delay.  In the meantime I tore down the server and
 re-configured it using ethernet bonding.  It worked and, according to
 iozone, provided moderately better throughput than the single
 connection I got before.  Moderately.  Measurably.  Not significantly.

 I tore it down after that and reconfigured again using MPIO, and funny
 enough, this time it worked.  I can access the lun now using two
 devices (sdb and sdd), and both ethernet devices that connect to iscsi
 show traffic.

 The weird thing is that, aside from writing, bonding was measurably
 faster than MPIO.  Does that seem right?


 With MPIO are you seeing the same throughput you would see if you only
 used one path at a time?





Re: Need help with multipath and iscsi in CentOS 5.4

2010-01-06 Thread Mike Christie

On 12/30/2009 11:48 AM, Kyle Schmitt wrote:

On Wed, Dec 9, 2009 at 8:52 PM, Mike Christie <micha...@cs.wisc.edu> wrote:

So far single connections work: If I setup the box to use one NIC, I
get one connection and can use it just fine.

Could you send the /var/log/messages for when you run the login command
so I can see the disk info?


Sorry for the delay.  In the meantime I tore down the server and
re-configured it using ethernet bonding.  It worked and, according to
iozone, provided moderately better throughput than the single
connection I got before.  Moderately.  Measurably.  Not significantly.

I tore it down after that and reconfigured again using MPIO, and funny
enough, this time it worked.  I can access the lun now using two
devices (sdb and sdd), and both ethernet devices that connect to iscsi
show traffic.

The weird thing is that, aside from writing, bonding was measurably
faster than MPIO.  Does that seem right?



With MPIO are you seeing the same throughput you would see if you only 
used one path at a time?




Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-31 Thread Pasi Kärkkäinen
On Wed, Dec 30, 2009 at 11:48:31AM -0600, Kyle Schmitt wrote:
 On Wed, Dec 9, 2009 at 8:52 PM, Mike Christie micha...@cs.wisc.edu wrote:
  So far single connections work: If I setup the box to use one NIC, I
  get one connection and can use it just fine.
  Could you send the /var/log/messages for when you run the login command
  so I can see the disk info?
 
 Sorry for the delay.  In the meantime I tore down the server and
 re-configured it using ethernet bonding.  It worked and, according to
 iozone, provided moderately better throughput than the single
 connection I got before.  Moderately.  Measurably.  Not significantly.


If you have just a single iscsi connection/login from the initiator to the
target, then you'll have only one tcp connection, and that means bonding
won't help you at all - you'll be only able to utilize one link of the
bond.

bonding needs multiple tcp/ip connections for being able to give more
bandwidth.
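
A quick way to confirm that on the initiator side (standard open-iscsi
and net-tools commands; port 3260 is assumed to be the target's portal
port):

    # each iscsiadm login creates one session, and each session rides on a
    # single TCP connection; -P 1 prints the sessions with their portal and iface
    iscsiadm -m session -P 1
    # count the TCP connections actually established to the iSCSI portal(s)
    netstat -tn | grep :3260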

 I tore it down after that and reconfigured again using MPIO, and funny
 enough, this time it worked.  I can access the lun now using two
 devices (sdb and sdd), and both ethernet devices that connect to iscsi
 show traffic.
 
 The weird thing is that, aside from writing, bonding was measurably
 faster than MPIO.  Does that seem right?
 

That seems a bit weird.

How did you configure multipath? Please paste your multipath settings. 

-- Pasi

 
 Here's the dmesg, if that lends any clues.  Thanks for any input!
 
 --Kyle
 
  156 lines of dmesg follows 
 
 cxgb3i: tag itt 0x1fff, 13 bits, age 0xf, 4 bits.
 iscsi: registered transport (cxgb3i)
 device-mapper: table: 253:6: multipath: error getting device
 device-mapper: ioctl: error adding target to table
 device-mapper: table: 253:6: multipath: error getting device
 device-mapper: ioctl: error adding target to table
 Broadcom NetXtreme II CNIC Driver cnic v2.0.0 (March 21, 2009)
 cnic: Added CNIC device: eth0
 cnic: Added CNIC device: eth1
 cnic: Added CNIC device: eth2
 cnic: Added CNIC device: eth3
 Broadcom NetXtreme II iSCSI Driver bnx2i v2.0.1e (June 22, 2009)
 iscsi: registered transport (bnx2i)
 scsi3 : Broadcom Offload iSCSI Initiator
 scsi4 : Broadcom Offload iSCSI Initiator
 scsi5 : Broadcom Offload iSCSI Initiator
 scsi6 : Broadcom Offload iSCSI Initiator
 iscsi: registered transport (tcp)
 iscsi: registered transport (iser)
 bnx2: eth0: using MSIX
 ADDRCONF(NETDEV_UP): eth0: link is not ready
 bnx2i: iSCSI not supported, dev=eth0
 bnx2i: iSCSI not supported, dev=eth0
 bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
 ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
 bnx2: eth2: using MSIX
 ADDRCONF(NETDEV_UP): eth2: link is not ready
 bnx2i: iSCSI not supported, dev=eth2
 bnx2i: iSCSI not supported, dev=eth2
 bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex
 ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
 bnx2: eth3: using MSIX
 ADDRCONF(NETDEV_UP): eth3: link is not ready
 bnx2i: iSCSI not supported, dev=eth3
 bnx2i: iSCSI not supported, dev=eth3
 bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex
 ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready
 eth0: no IPv6 routers present
 eth2: no IPv6 routers present
 scsi7 : iSCSI Initiator over TCP/IP
 scsi8 : iSCSI Initiator over TCP/IP
 scsi9 : iSCSI Initiator over TCP/IP
 scsi10 : iSCSI Initiator over TCP/IP
   Vendor: DGC   Model: RAID 5      Rev: 0429
   Type:   Direct-Access  ANSI SCSI revision: 04
 sdb : very big device. try to use READ CAPACITY(16).
 SCSI device sdb: 7693604864 512-byte hdwr sectors (3939126 MB)
 sdb: Write Protect is off
 sdb: Mode Sense: 7d 00 00 08
 SCSI device sdb: drive cache: write through
 sdb : very big device. try to use READ CAPACITY(16).
 SCSI device sdb: 7693604864 512-byte hdwr sectors (3939126 MB)
 sdb: Write Protect is off
 sdb: Mode Sense: 7d 00 00 08
 SCSI device sdb: drive cache: write through
  sdb:5  Vendor: DGC   Model: RAID 5      Rev: 0429
   Type:   Direct-Access  ANSI SCSI revision: 04
   Vendor: DGC   Model: RAID 5      Rev: 0429
   Type:   Direct-Access  ANSI SCSI revision: 04
 sdc : very big device. try to use READ CAPACITY(16).
 SCSI device sdc: 7693604864 512-byte hdwr sectors (3939126 MB)
 sdc: test WP failed, assume Write Enabled
 sdc: asking for cache data failed
 sdc: assuming drive cache: write through
   Vendor: DGC   Model: RAID 5      Rev: 0429
   Type:   Direct-Access  ANSI SCSI revision: 04
 sdc : very big device. try to use READ CAPACITY(16).
 SCSI device sdc: 7693604864 512-byte hdwr sectors (3939126 MB)
 sdc: test WP failed, assume Write Enabled
 sde : very big device. try to use READ CAPACITY(16).
 sdc: asking for cache data failed
 sdc: assuming drive cache: write through
  sdc:5SCSI device sde: 7693604864 512-byte hdwr sectors (3939126 MB)
 sd 8:0:0:0: Device not ready: 6: Current: sense key: Not Ready
 Add. Sense: Logical 

Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-31 Thread Kyle Schmitt
On Thu, Dec 31, 2009 at 8:23 AM, Pasi Kärkkäinen pa...@iki.fi wrote:
 If you have just a single iscsi connection/login from the initiator to the
 target, then you'll have only one tcp connection, and that means bonding
 won't help you at all - you'll be only able to utilize one link of the
 bond.

 bonding needs multiple tcp/ip connections for being able to give more
 bandwidth.

That's what I thought, but I figured it was one of the following three
possibilities:
MPIO was (mis)configured and using more overhead than bonding,
OR the initiator was firing multiple concurrent requests (which you
say it doesn't; I'll believe you),
OR the SAN was under massively different load between the test runs
(not too likely, but possible; only one other LUN is in use).

 That seems a bit weird.
That's what I thought, otherwise I would have just gone with it.

 How did you configure multipath? Please paste your multipath settings.

 -- Pasi

Here's the /etc/multipath.conf.  Were there other config options that
you'd need to see?

devnode_blacklist {
        devnode "^sda[0-9]*"
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z][[0-9]*]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
devices {
        device {
                vendor "EMC"
                product "SYMMETRIX"
                path_grouping_policy multibus
                getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
                path_selector "round-robin 0"
                features "0"
                hardware_handler "0"
                failback immediate
        }
        device {
                vendor "DGC"
                product "*"
                path_grouping_policy group_by_prio
                getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout "/sbin/mpath_prio_emc /dev/%n"
                hardware_handler "1 emc"
                features "1 queue_if_no_path"
                no_path_retry 300
                path_checker emc_clariion
                failback immediate
        }
}
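
Two commands that would show how this file actually gets applied on a
CentOS 5 host (output omitted here; a sketch, not taken from the
original thread):

    # list the multipath maps with their path groups and per-path status
    multipath -ll
    # verbose dry run: prints the discovered paths, their priorities, and the
    # map (path groups) that would be built, without changing the existing maps
    multipath -v3 -d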





Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-31 Thread Kyle Schmitt
Note, the EMC-specific bits of that multipath.conf were just copied
from boxes that connect to the SAN over FC and use MPIO successfully.





Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-30 Thread Kyle Schmitt
On Wed, Dec 9, 2009 at 8:52 PM, Mike Christie micha...@cs.wisc.edu wrote:
 So far single connections work: If I setup the box to use one NIC, I
 get one connection and can use it just fine.
 Could you send the /var/log/messages for when you run the login command
 so I can see the disk info?

Sorry for the delay.  In the meantime I tore down the server and
re-configured it using ethernet bonding.  It worked and, according to
iozone, provided moderately better throughput than the single
connection I got before.  Moderately.  Measurably.  Not significantly.

I tore it down after that and reconfigured again using MPIO, and funny
enough, this time it worked.  I can access the lun now using two
devices (sdb and sdd), and both ethernet devices that connect to iscsi
show traffic.

The weird thing is that, aside from writing, bonding was measurably
faster than MPIO.  Does that seem right?


Here's the dmesg, if that lends any clues.  Thanks for any input!

--Kyle

 156 lines of dmesg follows 

cxgb3i: tag itt 0x1fff, 13 bits, age 0xf, 4 bits.
iscsi: registered transport (cxgb3i)
device-mapper: table: 253:6: multipath: error getting device
device-mapper: ioctl: error adding target to table
device-mapper: table: 253:6: multipath: error getting device
device-mapper: ioctl: error adding target to table
Broadcom NetXtreme II CNIC Driver cnic v2.0.0 (March 21, 2009)
cnic: Added CNIC device: eth0
cnic: Added CNIC device: eth1
cnic: Added CNIC device: eth2
cnic: Added CNIC device: eth3
Broadcom NetXtreme II iSCSI Driver bnx2i v2.0.1e (June 22, 2009)
iscsi: registered transport (bnx2i)
scsi3 : Broadcom Offload iSCSI Initiator
scsi4 : Broadcom Offload iSCSI Initiator
scsi5 : Broadcom Offload iSCSI Initiator
scsi6 : Broadcom Offload iSCSI Initiator
iscsi: registered transport (tcp)
iscsi: registered transport (iser)
bnx2: eth0: using MSIX
ADDRCONF(NETDEV_UP): eth0: link is not ready
bnx2i: iSCSI not supported, dev=eth0
bnx2i: iSCSI not supported, dev=eth0
bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
bnx2: eth2: using MSIX
ADDRCONF(NETDEV_UP): eth2: link is not ready
bnx2i: iSCSI not supported, dev=eth2
bnx2i: iSCSI not supported, dev=eth2
bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex
ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
bnx2: eth3: using MSIX
ADDRCONF(NETDEV_UP): eth3: link is not ready
bnx2i: iSCSI not supported, dev=eth3
bnx2i: iSCSI not supported, dev=eth3
bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex
ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready
eth0: no IPv6 routers present
eth2: no IPv6 routers present
scsi7 : iSCSI Initiator over TCP/IP
scsi8 : iSCSI Initiator over TCP/IP
scsi9 : iSCSI Initiator over TCP/IP
scsi10 : iSCSI Initiator over TCP/IP
  Vendor: DGC   Model: RAID 5      Rev: 0429
  Type:   Direct-Access  ANSI SCSI revision: 04
sdb : very big device. try to use READ CAPACITY(16).
SCSI device sdb: 7693604864 512-byte hdwr sectors (3939126 MB)
sdb: Write Protect is off
sdb: Mode Sense: 7d 00 00 08
SCSI device sdb: drive cache: write through
sdb : very big device. try to use READ CAPACITY(16).
SCSI device sdb: 7693604864 512-byte hdwr sectors (3939126 MB)
sdb: Write Protect is off
sdb: Mode Sense: 7d 00 00 08
SCSI device sdb: drive cache: write through
 sdb:5  Vendor: DGC   Model: RAID 5      Rev: 0429
  Type:   Direct-Access  ANSI SCSI revision: 04
  Vendor: DGC   Model: RAID 5      Rev: 0429
  Type:   Direct-Access  ANSI SCSI revision: 04
sdc : very big device. try to use READ CAPACITY(16).
SCSI device sdc: 7693604864 512-byte hdwr sectors (3939126 MB)
sdc: test WP failed, assume Write Enabled
sdc: asking for cache data failed
sdc: assuming drive cache: write through
  Vendor: DGC   Model: RAID 5      Rev: 0429
  Type:   Direct-Access  ANSI SCSI revision: 04
sdc : very big device. try to use READ CAPACITY(16).
SCSI device sdc: 7693604864 512-byte hdwr sectors (3939126 MB)
sdc: test WP failed, assume Write Enabled
sde : very big device. try to use READ CAPACITY(16).
sdc: asking for cache data failed
sdc: assuming drive cache: write through
 sdc:5SCSI device sde: 7693604864 512-byte hdwr sectors (3939126 MB)
sd 8:0:0:0: Device not ready: 6: Current: sense key: Not Ready
Add. Sense: Logical unit not ready, manual intervention required

end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
sd 8:0:0:0: Device not ready: 6: Current: sense key: Not Ready
Add. Sense: Logical unit not ready, manual intervention required

end_request: I/O error, dev sdc, sector 0

... that repeats a bunch of times 

Buffer I/O error on device sdc, logical block 0
 unable to read partition table
sdd : very big device. try to use READ CAPACITY(16).
sd 8:0:0:0: Attached scsi disk sdc
SCSI device sdd: 7693604864 512-byte hdwr sectors (3939126 

RE: Need help with multipath and iscsi in CentOS 5.4

2009-12-10 Thread berthiaume_wayne
Hi Kyle.

For your configuration, do you have a LUN in the storage group
assigned to your server? If so, the LUN is presented to both SPs
(storage processors) but only owned by one. If your storage group for
the server has been configured for the default mode, the LUN is presented
in PNR mode; therefore, only one SP owns it and can be used for the IO.
In order to use the LUN through the other SP, the LUN needs to be
trespassed to that SP. If the storage group for the server is configured
for ALUA mode then you can read and write through both SPs without a
trespass command.
If there are no LUNs assigned in a storage group for your
server, then the LUN that is presented to the server is a LUNZ and you
will not be able to access it.
You indicate below that you are able to fdisk one of the SCSI
devices but not all four. If you are configured in the manner you
described, you should be able to access the LUN through the two NICs
attached to the SP that owns it.
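
One host-side way to see which paths land on the owning SP is to run the
same EMC priority callout that the multipath.conf posted elsewhere in
this thread points at; a sketch, assuming /sbin/mpath_prio_emc is
installed (it ships with device-mapper-multipath on RHEL/CentOS 5) and
that sdb-sde are the iSCSI path devices:

    # print the CLARiiON path priority for each path device; paths on the
    # SP that currently owns the LUN should report the higher priority
    for d in sdb sdc sdd sde; do
        echo -n "$d: "
        /sbin/mpath_prio_emc /dev/$d
    done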

Regards,
Wayne.
EMC Corp

-Original Message-
From: open-iscsi@googlegroups.com [mailto:open-is...@googlegroups.com]
On Behalf Of Mike Christie
Sent: Wednesday, December 09, 2009 9:52 PM
To: open-iscsi@googlegroups.com
Subject: Re: Need help with multipath and iscsi in CentOS 5.4

Kyle Schmitt wrote:
 I'm cross-posting here from linux-iscsi-users since I've seen no

linux-iscsi-users would be for CentOS 4. CentOS 5 uses a different
initiator, but now you are in the right place :)

 traffic in the weeks since I posted this.
 
 Hi, I needed a little help or advice with my setup.  I'm trying to
 configure multipathed iscsi on a CentOS 5.4 (RHEL 5.4 clone) box.
 
 Very short version: One server with two NICs for iSCSI sees storage on
 EMC.  Storage shows up as four discs, but only one works.
 
 So far single connections work: If I setup the box to use one NIC, I
 get one connection and can use it just fine.
 
 When I setup multiple connections I have problems...
 I created two interfaces, and assigned each one to a NIC
 iscsiadm -m iface -I iface0 --op=new
 iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth2
 iscsiadm -m iface -I iface1 --op=new
 iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth3
 
 Each interface saw two paths to their storage, four total, so far so
 good.
 I logged all four of them in with:
 iscsiadm -m node -T <long ugly string here> -l
 
 I could see I was connected to all four via
 iscsiadm -m session
 
 At this point, I thought I was set, I had four new devices
 /dev/sdb /dev/sdc /dev/sdd /dev/sde
 
 Ignoring multipath at this point for now, here's where the problem
 started.  I have all four devices, but I can only communicate through
 one of them: /dev/sdc.
 
 As a quick test I tried to fdisk all four partitions, to see if I saw
 the same thing in each place, and only /dev/sdc works.

What do you mean by works? Can you dd it, or fdisk it?

 
 Turning on multipath, I got a multipathed device consisting of sdb sdc
 sdd and sde, but sdb sdd and sde are failed with a message of
 checker msg is emc_clariion_checker: Logical Unit is unbound or LUNZ
 

Have you created a lun that is accessible to the initiator defined in 
/etc/iscsi/initiatorname.iscsi?

Could you send the /var/log/messages for when you run the login command 
so I can see the disk info?








Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-10 Thread Kyle Schmitt
On Wed, Dec 9, 2009 at 8:52 PM, Mike Christie micha...@cs.wisc.edu wrote:
 Kyle Schmitt wrote:
 What do you mean by works? Can you dd it, or fdisk it?

By most any measure, sdc works: I can dd it, fdisk it, mkfs.ext3 it,
run iozone on it, etc.

In contrast, sdb, sdd, and sde can't be fdisked, dd'ed, or even less -f'ed.
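
A minimal version of that check, against each of the four path devices
(bs/count chosen arbitrarily; reads only, nothing is written):

    # read the first 100 MB from each path device; paths that land on the
    # non-owning SP are expected to error out rather than return data
    for d in sdb sdc sdd sde; do
        echo "== /dev/$d =="
        dd if=/dev/$d of=/dev/null bs=1M count=100
    done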



 Have you created a lun that is accessible to the initiator defined in
 /etc/iscsi/initiatorname.iscsi?

Yup, I've got a lun, and can read/write to it.

 Could you send the /var/log/messages for when you run the login command
 so I can see the disk info?

Sure, let me dig it out... it'll be a few minutes.





Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-09 Thread Mike Christie
Kyle Schmitt wrote:
 I'm cross-posting here from linux-iscsi-users since I've seen no

linux-iscsi-users would be for CentOS 4. CentOS 5 uses a different
initiator, but now you are in the right place :)

 traffic in the weeks since I posted this.
 
 Hi, I needed a little help or advice with my setup.  I'm trying to
 configure multipathed iscsi on a CentOS 5.4 (RHEL 5.4 clone) box.
 
 Very short version: One server with two NICs for iSCSI sees storage on
 EMC.  Storage shows up as four discs, but only one works.
 
 So far single connections work: If I setup the box to use one NIC, I
 get one connection and can use it just fine.
 
 When I setup multiple connections I have problems...
 I created two interfaces, and assigned each one to a NIC
 iscsiadm -m iface -I iface0 --op=new
 iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth2
 iscsiadm -m iface -I iface1 --op=new
 iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth3
 
 Each interface saw two paths to their storage, four total, so far so
 good.
 I logged all four of them in with:
 iscsiadm -m node -T <long ugly string here> -l
 
 I could see I was connected to all four via
 iscsiadm -m session
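
For reference, the iface definitions and which session was established
through which iface can be checked with standard iscsiadm modes (no
output shown here):

    # list the defined ifaces and the network interfaces they are bound to
    iscsiadm -m iface
    # print each session together with the iface it was created through
    iscsiadm -m session -P 1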
 
 At this point, I thought I was set, I had four new devices
 /dev/sdb /dev/sdc /dev/sdd /dev/sde
 
 Ignoring multipath at this point for now, here's where the problem
 started.  I have all four devices, but I can only communicate through
 one of them: /dev/sdc.
 
 As a quick test I tried to fdisk all four partitions, to see if I saw
 the same thing in each place, and only /dev/sdc works.

What do you mean by works? Can you dd it, or fdisk it?

 
 Turning on multipath, I got a multipathed device consisting of sdb sdc
 sdd and sde, but sdb sdd and sde are failed with a message of
 checker msg is emc_clariion_checker: Logical Unit is unbound or LUNZ
 

Have you created a lun that is accessible to the initiator defined in 
/etc/iscsi/initiatorname.iscsi?
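
The initiator name that has to be registered on the array side can be
read straight from that file:

    # the IQN this host presents; the CLARiiON storage group must have this
    # initiator registered (and a LUN added) for anything other than LUNZ to appear
    cat /etc/iscsi/initiatorname.iscsi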

Could you send the /var/log/messages for when you run the login command 
so I can see the disk info?
