Re: can't disconnect, can't login, what's missing?

2010-07-16 Thread Christopher Barry
On Thu, 2010-07-15 at 16:10 -0500, Mike Christie wrote:
 On 07/15/2010 12:21 PM, Christopher Barry wrote:
  All,
 
  having a perplexing problem: logging in 'kinda' works, but the device
  never shows up (fdisk -l), even though the /dev/disk/by-path link is there.
 
 
 If you just fdisk /dev/sdX does it work? Does just the -l command not 
 work? Are there any I/O errors in /var/log/messages when you run that command?
 
 
  when I try to logout, then back in I get this:
 
 
 This looks like a sysfs compat issue.
 
 What version of the iscsi tools are you using, and what are the 
 CONFIG_SYSFS_DEPRECATED and CONFIG_SYSFS_DEPRECATED_V2 settings in your 
 kernel's .config?
 
 Older tools need them set to y. Newer tools work with them either on or off.

Thanks Mike, I'll investigate and get back.
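
For reference, here is roughly what I plan to check (a sketch; the
/boot/config path assumes a distro kernel that installs its config there):

# sysfs-compat settings in the kernel config
grep SYSFS_DEPRECATED /boot/config-$(uname -r)
# or, if the running kernel was built with /proc/config.gz support:
zcat /proc/config.gz | grep SYSFS_DEPRECATED
# and the version of the iscsi tools
iscsiadm --version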

 
 
  thebox # iscsiadm -m node -T iqn.1992-01.com.lsi:1535.600a0b8000370de14b7bc1df -p 192.168.111.50 -u
  iscsiadm: Could not get host for sid 1.
  iscsiadm: could not get host_no for session 6.
  iscsiadm: could not find session info for session1
 
  thebox # iscsiadm -m node -T iqn.1992-01.com.lsi:1535.600a0b8000370de14b7bc1df -p 192.168.111.50 -l
  Logging in to [iface: default, target: iqn.1992-01.com.lsi:1535.600a0b8000370de14b7bc1df, portal: 192.168.111.50,3260]
  iscsiadm: Could not login to [iface: default, target: iqn.1992-01.com.lsi:1535.600a0b8000370de14b7bc1df, portal: 192.168.111.50,3260]:
  iscsiadm: initiator reported error (15 - already exists)
 
 
  Any ideas on where to begin investigating this?
 
 
  regards,
  -C
 
 
 





can't disconnect, can't login, what's missing?

2010-07-15 Thread Christopher Barry
All,

having a perplexing problem: logging in 'kinda' works, but the device
never shows up (fdisk -l), even though the /dev/disk/by-path link is there.

when I try to logout, then back in I get this:

thebox # iscsiadm -m node -T iqn.1992-01.com.lsi:1535.600a0b8000370de14b7bc1df -p 192.168.111.50 -u
iscsiadm: Could not get host for sid 1.
iscsiadm: could not get host_no for session 6.
iscsiadm: could not find session info for session1

thebox # iscsiadm -m node -T iqn.1992-01.com.lsi:1535.600a0b8000370de14b7bc1df -p 192.168.111.50 -l
Logging in to [iface: default, target: iqn.1992-01.com.lsi:1535.600a0b8000370de14b7bc1df, portal: 192.168.111.50,3260]
iscsiadm: Could not login to [iface: default, target: iqn.1992-01.com.lsi:1535.600a0b8000370de14b7bc1df, portal: 192.168.111.50,3260]:
iscsiadm: initiator reported error (15 - already exists)


Any ideas on where to begin investigating this?


regards,
-C




discovery error, config path question

2010-07-13 Thread Christopher Barry
All,

I'm running iscsiadm discovery from an initramfs, rather than
iscsistart, and I have a couple of questions.

I'm starting iscsid without any parameters; the man page says the default
iscsid.conf used when nothing is specified lives in /etc/iscsi/, and that
file is present there.
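
As a minimal sketch, what I'm running amounts to the following (explicit
paths shown for clarity; they are the documented defaults anyway):

# start the daemon pointing at the default config and initiator-name files
iscsid -c /etc/iscsi/iscsid.conf -i /etc/iscsi/initiatorname.iscsi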

I've successfully worked around the well-known nss issues, but I'm
getting a weird error I do not understand during discovery:

iscsiadm: Could not read iface info for bond0. Make sure a iface config with the file name and iface.iscsi_ifacename bond0 is in /var/lib/iscsi/ifaces

I have an iface file named /etc/iscsi/ifaces/bond0; here are its
contents:
# /etc/iscsi/ifaces/bond0
iface.iscsi_ifacename = bond0
iface.net_ifacename = bond0
iface.transport_name = tcp

Now, if I copy the /etc/iscsi/ifaces/bond0 file
to /var/lib/iscsi/ifaces/ then discovery works.
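
For reference, the workaround amounts to this (a sketch; the portal
address is a placeholder for whatever I'm discovering against, and I'm
binding the discovery to the bond0 iface with -I):

cp /etc/iscsi/ifaces/bond0 /var/lib/iscsi/ifaces/bond0
iscsiadm -m discovery -t sendtargets -p <portal-ip> -I bond0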

Why? What am I missing that the configs in /etc/iscsi/* are not being
used?


Thanks,
-C




Re: need some clarity, if anyone has a minute

2010-06-29 Thread Christopher Barry
On Fri, 2010-06-25 at 11:06 -0700, Patrick J. LoPresti wrote:
 On Jun 23, 12:41 pm, Christopher Barry
 christopher.ba...@rackwareinc.com wrote:
 
  Absolutely correct. What I was looking for were comparisons of the
  methods below, and wanted subnet stuff out of the way while discussing
  that.
 
 Ah, I see.
 
 Well, that is fine (even necessary) for the port bonding approach, but
 for multi-path I/O (whether device-mapper or proprietary) it will
 probably not do what you expect.  When Linux has two interfaces on the
 same subnet, in my experience it tends to send all traffic through
 just one of them.  So you will definitely want to split up the subnets
 before testing multi-path I/O.
 
  Here I do not understand your reasoning. My understanding was I would
  need a session per iface to each portal to survive a controller port
  failure. If this assumption is wrong, please explain.
 
 I may have misunderstood your use of portal.  I was thinking in the
 RFC 3720 sense of IP address.
 
 So you have four IP addresses on the RAID, and four IP addresses on
 the Linux host.  You have made all of your SCSI target devices visible
 as logical units on all four addresses on the RAID.  So to get fully
 redundant paths, you only need to connect each of the four IP
 addresses on the Linux host to a single IP address on the RAID.  (So
 Linux will see each logical unit four times.)
 
 I thought you were saying you would initiate a connection from each
 host IP address to every RAID IP address (16 connections).  That would
 cause each LU to show up 16 times, thus being harder to manage,
 with no advantages in performance or fault-tolerance.  But now it
 sounds like that is not what you meant :-).

Thanks for your reply Pat.

Actually, you were correct in your first assumption - I was indeed
thinking that I would need to 'login' from each iface to each portal in
order for the initiator to know about all of the paths. That was likely
because I was 'simplifying' :) by using a single subnet in my example. In
reality there would be multiple subnets, so this could not occur
efficiently, as routing would come into play. Thank you for clearing that
up for me.

At the end of the day, I am trying to automagically find the optimal
configuration for the type of storage available (i.e. does it support
the proprietary MPIO driver I need to work with, dm-multipath, or just a
straight connection), which NICs on which subnets are available on the
host, how those relate to the portals the host can see, and whether
bonding would be desirable. The matrix of possibilities is somewhat
daunting...
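
As a rough sketch of the per-NIC binding I have in mind (iface name, NIC
name, portal address, and target IQN below are placeholders, not my
actual config):

# create an iface record bound to a specific NIC
iscsiadm -m iface -I iface-eth2 --op=new
iscsiadm -m iface -I iface-eth2 --op=update -n iface.net_ifacename -v eth2

# discover and log in through that iface against its matching portal only
iscsiadm -m discovery -t sendtargets -p 10.0.2.50 -I iface-eth2
iscsiadm -m node -T <target-iqn> -p 10.0.2.50 -I iface-eth2 -l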

 
  This is also something I am uncertain about. For instance, in
  balance-alb mode, each slave will communicate with a given remote IP
  consistently. In the case of two slaves and two portals, how would the
  traffic be apportioned? Would it write to both simultaneously? Could
  this corrupt the disk in any way? Would it always only use a single
  slave/portal?
 
 This is what I meant by being at the mercy of the load balancing
 performed by the bonding.
 
 If I understand the description of balance-alb correctly, outgoing
 traffic will be more-or-less round-robin; it tries to balance the load
 among the available interfaces, without worrying about keeping packets
 in order.  If packets wind up out of order, TCP will put them back in
 order at the other end, possibly (probably?) at the cost of some
 performance.
 
 Inbound traffic from any particular portal will go to a single slave.
 But there is no guarantee that the traffic will then be properly
 balanced.
 
 The advantage of multipath I/O is that it can balance the traffic at
 the level of SCSI commands.  I suspect this will be both faster and
 more consistent, but again, I have not actually tried using bonding.
 
  - Pat
 






need some clarity, if anyone has a minute

2010-06-23 Thread Christopher Barry
Hello,

I'm implementing some code to automagically configure iscsi connections
to a proprietary array. This array has its own specific MPIO drivers,
and does not support DM-Multipath. I'm trying to get a handle on the
differences in redundancy provided by the various layers involved in the
connection from host to array, in a generic sense.

The array has two iSCSI ports per controller, and two controllers. The
targets can be seen through any of the ports. For simplicity, all ports
are on the same subnet.

I'll describe a series of scenarios, and maybe someone can speak to their
level of usefulness, redundancy, gotchas, nuances, etc:

scenario #1
Single NIC, default iface, login to all controller portals.

scenario #2
Dual NIC, iface per NIC, login to all controller portals from each iface

scenario #3
Two bonded NICs in mode balance-alb
Single NIC, default iface, login to all controller portals.

scenario #4
Dual NIC, iface per NIC, MPIO driver, login to all controller portals
from each iface


Appreciate any advice,
Thanks,
-C




Re: need some clarity, if anyone has a minute (correction)

2010-06-23 Thread Christopher Barry
correction inline:

On Wed, 2010-06-23 at 10:28 -0400, Christopher Barry wrote:
 Hello,
 
 I'm implementing some code to automagically configure iscsi connections
 to a proprietary array. This array has its own specific MPIO drivers,
 and does not support DM-Multipath. I'm trying to get a handle on the
 differences in redundancy provided by the various layers involved in the
 connection from host to array, in a generic sense.
 
 The array has two iSCSI ports per controller, and two controllers. The
 targets can be seen through any of the ports. For simplicity, all ports
 are on the same subnet.
 
 I'll describe a series of scenarios, and maybe someone can speak to their
 level of usefulness, redundancy, gotchas, nuances, etc:
 
 scenario #1
 Single NIC, default iface, login to all controller portals.
 
 scenario #2
 Dual NIC, iface per NIC, login to all controller portals from each iface
 
 scenario #3
 Two bonded NICs in mode balance-alb
 Single NIC, default iface, login to all controller portals.
single bonded interface, not single NIC.
 
 scenario #4
 Dual NIC, iface per NIC, MPIO driver, login to all controller portals
 from each iface
 
 
 Appreciate any advice,
 Thanks,
 -C
 






Re: need some clarity, if anyone has a minute

2010-06-23 Thread Christopher Barry
Thanks Patrick. please see inline.

On Wed, 2010-06-23 at 08:04 -0700, Patrick wrote: 
 On Jun 23, 7:28 am, Christopher Barry
 christopher.ba...@rackwareinc.com wrote:
  This array has its own specific MPIO drivers,
  and does not support DM-Multipath. I'm trying to get a handle on the
  differences in redundancy provided by the various layers involved in the
  connection from host to array, in a generic sense.
 
 What kind of array is it?  Are you certain it does not support
 multipath I/O?  Multipath I/O is pretty generic...
 
  For simplicity, all ports are on the same subnet.
 
 I actually would not do that.  The design is cleaner and easier to
 visualize (IMO) if you put the ports onto different subnets/VLANs.
 Even better is to put each one on a different physical switch so you
 can tolerate the failure of a switch.

Absolutely correct. What I was looking for were comparisons of the
methods below, and wanted subnet stuff out of the way while discussing
that.

 
  scenario #1
  Single (bonded) NIC, default iface, login to all controller portals.
 
 Here you are at the mercy of the load balancing performed by the
 bonding, which is probably worse than the load-balancing performed at
 higher levels.  But I admit I have not tried it, so if you decide to
 do some performance comparisons, please let me know what you
 find.  :-)
 
 I will skip right down to...
 
  scenario #4
  Dual NIC, iface per NIC, MPIO driver, login to all controller portals
  from each iface
 
 Why log into all portals from each interface?  It buys you nothing and
 makes the setup more complex.  Just log into one target portal from
 each interface and do multi-pathing among them.  This will also make
 your automation (much) simpler.

Here I do not understand your reasoning. My understanding was I would
need a session per iface to each portal to survive a controller port
failure. If this assumption is wrong, please explain.

 
 Again, I would recommend assigning one subnet to each interface.  It
 is hard to convince Linux to behave sanely when you have multiple
 interfaces connected to the same subnet.  (Linux will tend to send all
 traffic for that subnet via the same interface.  Yes, you can hack
 around this.  But why?)
 
 In other words, I would do eth0 -> subnet 0 -> portal 0, eth1 ->
 subnet 1 -> portal 1, eth2 -> subnet 2 -> portal 2, etc.  This is very
 easy to draw, explain, and reason about.  Then set up multipath I/O
 and you are done.
 
 In fact, this is exactly what I am doing myself.  I have multiple
 clients and multiple hardware iSCSI RAID units (Infortrend); each
 interface on each client and RAID connects to a single subnet.  Then I
 am using cLVM to stripe among the hardware RAIDs.  I am obtaining
 sustained read speeds of ~1200 megabytes/second (yes, sustained; no
 cache).  Plus I have the redundancy of multipath I/O.
 
 Trying the port bonding approach is on my to do list, but this setup
 is working so well I have not bothered yet.

This is also something I am uncertain about. For instance, in
balance-alb mode, each slave will communicate with a given remote IP
consistently. In the case of two slaves and two portals, how would the
traffic be apportioned? Would it write to both simultaneously? Could
this corrupt the disk in any way? Would it always only use a single
slave/portal?
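
For concreteness, the kind of bond I have in mind is along these lines (a
sketch only; NIC names, address, and miimon value are examples, not my
actual config):

# load bonding in balance-alb mode, bring the bond up, then enslave the NICs
modprobe bonding mode=balance-alb miimon=100
ifconfig bond0 10.0.0.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1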

 
  - Pat
 







Re: mc/s - not yet in open-iscsi?

2010-06-10 Thread Christopher Barry
On Thu, 2010-06-10 at 13:36 +0400, Vladislav Bolkhovitin wrote:
 Christopher Barry, on 06/10/2010 03:09 AM wrote:
  Greetings everyone,
  
  Had a question about implementing mc/s using open-iscsi today, and
  wasn't really sure exactly what it was. From googling around, I can't
  find any references to people doing it with open-iscsi, although I see
  a few references to people asking about it. Anyone know the status on
  that?
 
 http://scst.sourceforge.net/mc_s.html. In short, there's no benefit to it 
 worth the implementation and maintenance effort.
 
 Vlad
 

Thank you Vlad - that completely answered my question.

-C





mc/s - not yet in open-iscsi?

2010-06-09 Thread Christopher Barry
Greetings everyone,

Had a question about implementing mc/s (multiple connections per session)
using open-iscsi today, and wasn't really sure exactly what it was. From
googling around, I can't find any references to people doing it with
open-iscsi, although I see a few references to people asking about it.
Anyone know the status on that?


Thanks,
-C




Re: [PATCH 1/2] RFC: iscsi ibft: separate ibft parsing from sysfs interface

2010-04-14 Thread Christopher Barry
On Tue, 2010-04-13 at 14:57 -0500, Mike Christie wrote:
 On 04/12/2010 01:38 PM, Christopher Barry wrote:
  On Mon, 2010-04-12 at 13:06 -0500, micha...@cs.wisc.edu wrote:
  From: Mike Christie micha...@cs.wisc.edu
 
  Not all iscsi drivers support ibft. For drivers like be2iscsi
  that do not, but are bootable through a vendor firmware specific
  format/process, this patch moves the sysfs interface from the ibft code
  to a lib module. This then allows userspace tools to search for iscsi
  boot info in a common place and in a common format.
 
  ibft iscsi boot info is exported in the same place as it was
  before: /sys/firmware/ibft.
 
  vendor/fw boot info gets exported in /sys/firmware/iscsi_bootX, where X
  is the scsi host number of the HBA. Underneath these parent dirs, the
  target, ethernet, and initiator dirs are the same as they were before.
  [... patch snipped ...]
 
 
  Mike,
  To be clear, this patch will put ibft data into /var/firmware/ibft only
 
 Not /var. You meant /sys, right? (I am not moving it to /var; it is 
 staying in /sys.)

Yes - I meant /sys, sorry for the typo.

 
  for those devices that actually have it, but not create a tree for,
  say, NICs that do not currently support it? Wondering if this is a universal
 
 It does not change ibft behavior in any way. If a device supports ibft 
 and it is set up correctly, then when you load iscsi_ibft it gets exported 
 in the exact same place, with the exact same files, and the files have the 
 same format.
 
 The patches:
 1. separate the interface from the ibft parsing, so we could add a 
 different interface, like bsg, if you wanted.
 2. allow drivers that do not support ibft, but support iscsi boot using 
 some vendor specific process, to be able to export their iscsi boot info 
 in the same format. These drivers just use a different root dir. Instead 
 of /sys/firmware/ibft, they use /sys/firmware/iscsi_bootX where X is the 
 host number of the iscsi HBA that was booted from.
 
 
  gizmo that will always populate the tree that I can rely on from
  userspace during boot.
 
 With these patches, and patches that are being worked on by vendors like 
 ServerEngines that do not support ibft and use some vendor specific 
 process, if you just load the iscsi driver, like be2iscsi or qla4xxx, 
 then they will load the iscsi_boot_sysfs module in the other patch sent 
 in this patchset, and /sys/firmware/iscsi_bootX will all get populated 
 with the boot info automagically for you.
 
 The iscsi tools (iscsistart and iscsiadm) will then parse and use this 
 data like it was ibft data and boot from disks or create records or 
 whatever. I am attaching the iscsi tools patches here. I am still 
 working with the be2iscsi guys to test it out.
 
 So with the patches
 
 iscsistart -b
 
 will look for ibft data. If it is not found, it will look for vendor 
 specific boot info. If that is found, it will create a session using that 
 driver's offload engine.
 

Thanks much for the clarification, and for all of your hard work.
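
For my part, the kind of userspace check I plan to script against this is
roughly the following (a sketch; attribute names are from the existing ibft
sysfs layout as I understand it, and assume the firmware actually populated
the table):

# load the parser (if built as a module) and see what got exported
modprobe iscsi_ibft
ls /sys/firmware/ibft
# e.g. initiator, ethernet0, target0 - then read individual attributes
cat /sys/firmware/ibft/target0/ip-addr
cat /sys/firmware/ibft/target0/target-name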


 
  Thanks,
  -C
 
 

