Re: iscsiadm and bonding

2010-03-10 Thread aclhkaclhk aclhkaclhk
1. ping -I bond0 192.168.123.174
yes, ok

2. A week ago, I set up the iSCSI initiator Linux host without eth0.
Only bond0 (mode 4, over eth1 and eth2) was assigned an IP address; eth0 was
unassigned and not connected to the switch. Login to the iSCSI target over
bond0 worked, and there was no need to create iface0.

The reason for using eth0 now is that if there is any problem with the bonding
configuration, I can still reach the host via eth0. Furthermore, I want to
separate dom0 traffic (eth0) from domU traffic (bond0).
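
For reference, the bond is defined in the usual RHEL/CentOS ifcfg style. This is only a sketch of how mode 4 bonding is typically configured; the exact files and options on my hosts may differ (on older initscripts the bonding options go in /etc/modprobe.conf instead of BONDING_OPTS):

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
IPADDR=192.168.123.178
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100"

/etc/sysconfig/network-scripts/ifcfg-eth1 (and likewise for eth2):
DEVICE=eth1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes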

# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.122.0   *               255.255.255.0   U     0      0        0 virbr0
192.168.123.0   *               255.255.255.0   U     0      0        0 eth0
192.168.123.0   *               255.255.255.0   U     0      0        0 bond0
169.254.0.0     *               255.255.0.0     U     0      0        0 bond0
default         192.168.123.1   0.0.0.0         UG    0      0        0 eth0
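
Note that both eth0 and bond0 carry a route to 192.168.123.0/24 and the default route points at eth0, so I can check which interface the kernel actually picks for the target when nothing is bound (just a quick sanity check):

# shows the interface and source address the kernel would use
ip route get 192.168.123.174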



On Mar 10, 3:34 am, Mike Christie micha...@cs.wisc.edu wrote:
 On 03/09/2010 04:19 AM, aclhkaclhk aclhkaclhk wrote:



  thanks, i could discover using bond0 but could not login.

  iscsi-target (openfiler): eth0 (192.168.123.174)
  iscsi-initiator: eth0 (192.168.123.176), bond0-mode4 (192.168.123.178)

  /var/lib/iscsi/ifaces/iface0
  # BEGIN RECORD 2.0-871
  iface.iscsi_ifacename = iface0
    iface.net_ifacename = bond0
    iface.transport_name = tcp
    # END RECORD

    [r...@pc176 ifaces]# iscsiadm -m discovery -t st -p 192.168.123.174 --interface iface0
    192.168.123.174:3260,1 192.168.123.174-vg0drbd-iscsi0
    [r...@pc176 ifaces]# iscsiadm --mode node --targetname 192.168.123.174-vg0drbd-iscsi0 --portal 192.168.123.174:3260 --login --interface iface0
    Logging in to [iface: iface0, target: 192.168.123.174-vg0drbd-iscsi0,
    portal: 192.168.123.174,3260]
    iscsiadm: Could not login to [iface: iface0, target: 192.168.123.174-
    vg0drbd-iscsi0, portal: 192.168.123.174,3260]:
    iscsiadm: initiator reported error (8 - connection timed out)

    i could use eth0 to discover and login

    ping 192.168.123.174 from iscsi-initiator is ok
    ping 192.168.123.178 from iscsi-target is ok

 I am not sure if I have ever tried bonding with ifaces.

 Can you do

 ping -I bond0 192.168.123.174

 ? If that does not work then the iscsi iface bonding will not either (we
 use the same syscalls to bind to the interface).

 Is it possible to just set up the routing table so the iscsi traffic
 goes through the bonded interface to the target?




Re: iscsiadm and bonding

2010-03-10 Thread Mike Christie

On 03/10/2010 02:14 AM, aclhkaclhk aclhkaclhk wrote:

1. ping -I bond0 192.168.123.174
yes, ok



Could you run iscsid manually in debugging mode and send me all the log 
output.


Instead of doing "service iscsi start", just do:

# this starts iscsid with debugging output going to the console
iscsid -d 8 -f

# run the login command
iscsiadm --mode node --targetname 192.168.123.174-vg0drbd-iscsi0 --portal 192.168.123.174:3260 --login



Then send me everything that gets spit out.

If you can also take an ethereal/wireshark trace when you run this, it 
might be helpful.
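
If wireshark is not handy, a plain tcpdump capture of just the iSCSI traffic should also do; this is only a sketch, adjust the interface and address as needed:

# full-size packets to/from the target on port 3260, written to a pcap file
tcpdump -i bond0 -s 0 -w iscsi-bond0.pcap host 192.168.123.174 and port 3260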





Re: iscsiadm and bonding

2010-03-09 Thread aclhkaclhk aclhkaclhk
Thanks. I could discover using bond0 but could not log in.

iscsi-target (openfiler): eth0 (192.168.123.174)
iscsi-initiator: eth0 (192.168.123.176), bond0-mode4 (192.168.123.178)

/var/lib/iscsi/ifaces/iface0:
# BEGIN RECORD 2.0-871
iface.iscsi_ifacename = iface0
iface.net_ifacename = bond0
iface.transport_name = tcp
# END RECORD

[r...@pc176 ifaces]# iscsiadm -m discovery -t st -p 192.168.123.174 --interface iface0
192.168.123.174:3260,1 192.168.123.174-vg0drbd-iscsi0
[r...@pc176 ifaces]# iscsiadm --mode node --targetname 192.168.123.174-vg0drbd-iscsi0 --portal 192.168.123.174:3260 --login --interface iface0
Logging in to [iface: iface0, target: 192.168.123.174-vg0drbd-iscsi0, portal: 192.168.123.174,3260]
iscsiadm: Could not login to [iface: iface0, target: 192.168.123.174-vg0drbd-iscsi0, portal: 192.168.123.174,3260]:
iscsiadm: initiator reported error (8 - connection timed out)

I could use eth0 to discover and log in.

Ping 192.168.123.174 from the iSCSI initiator is OK.
Ping 192.168.123.178 from the iSCSI target is OK.

192.168.123.178 is authorised on the iSCSI target to log in.


On Mar 8, 6:30 pm, Pasi Kärkkäinen pa...@iki.fi wrote:
 On Mon, Mar 08, 2010 at 02:08:43AM -0800, aclhkaclhk aclhkaclhk wrote:
  my server has eth0 (onboard), eth1 and eth2 (intel lan card). eth1 and
  eth2 are bonded as bond0

  i want to login iscsi target using bond0 instead of eth0.

  iscsiadm --mode node --targetname 192.168.123.1-vg0drbd-iscsi0 --
  portal 192.168.123.1:3260 --login --interface bond0
  iscsiadm: Could not read iface info for bond0.

  with interface, eth0 is used.

  the bonding was setup correctly. it could be used by xen.

 You need to create the open-iscsi 'iface0' first, and bind it to 'bond0'.
 Then you can login using 'iface0' interface.

 Like this (on the fly):

 # iscsiadm -m iface -I iface0 -o new
 New interface iface0 added

 # iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v bond0
 iface0 updated.

 You can set up this permanently in /var/lib/iscsi/ifaces/ directory,
 by creating a file called 'iface0' with this content:

 iface.iscsi_ifacename = iface0
 iface.transport_name = tcp
 iface.net_ifacename = bond0

 Hopefully that helps.

 -- Pasi




Re: iscsiadm and bonding

2010-03-09 Thread aclhkaclhk aclhkaclhk
The bonding mode is 802.3ad (mode 4). The bonding itself is functioning,
because I have created a Xen bridge on top of the bond and a domU using
that bridge can reach outside hosts and vice versa:

bond0 - xenbr1 - domU - outside hosts

Please advise.

On Mar 9, 6:19 pm, aclhkaclhk aclhkaclhk aclhkac...@gmail.com wrote:
 thanks, i could discover using bond0 but could not login.

 iscsi-target (openfiler): eth0 (192.168.123.174)
 iscsi-initiator: eth0 (192.168.123.176), bond0-mode4 (192.168.123.178)

 /var/lib/iscsi/ifaces/iface0
 # BEGIN RECORD 2.0-871
 iface.iscsi_ifacename = iface0
  iface.net_ifacename = bond0
  iface.transport_name = tcp
  # END RECORD

 [r...@pc176 ifaces]# iscsiadm -m discovery -t st -p 192.168.123.174 --interface iface0
 192.168.123.174:3260,1 192.168.123.174-vg0drbd-iscsi0
 [r...@pc176 ifaces]# iscsiadm --mode node --targetname 192.168.123.174-vg0drbd-iscsi0 --portal 192.168.123.174:3260 --login --interface iface0
  Logging in to [iface: iface0, target: 192.168.123.174-vg0drbd-iscsi0,
  portal: 192.168.123.174,3260]
  iscsiadm: Could not login to [iface: iface0, target: 192.168.123.174-
  vg0drbd-iscsi0, portal: 192.168.123.174,3260]:
  iscsiadm: initiator reported error (8 - connection timed out)

  i could use eth0 to discover and login

  ping 192.168.123.174 from iscsi-initiator is ok
  ping 192.168.123.178 from iscsi-target is ok

  192.168.123.178 is authorised in iscsi-target to login

 On Mar 8, 6:30 pm, Pasi Kärkkäinen pa...@iki.fi wrote:

  On Mon, Mar 08, 2010 at 02:08:43AM -0800, aclhkaclhk aclhkaclhk wrote:
   my server has eth0 (onboard), eth1 and eth2 (intel lan card). eth1 and
   eth2 are bonded as bond0

   i want to login iscsi target using bond0 instead of eth0.

   iscsiadm --mode node --targetname 192.168.123.1-vg0drbd-iscsi0 --
   portal 192.168.123.1:3260 --login --interface bond0
   iscsiadm: Could not read iface info for bond0.

   with interface, eth0 is used.

   the bonding was setup correctly. it could be used by xen.

  You need to create the open-iscsi 'iface0' first, and bind it to 'bond0'.
  Then you can login using 'iface0' interface.

  Like this (on the fly):

  # iscsiadm -m iface -I iface0 -o new
  New interface iface0 added

  # iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v bond0
  iface0 updated.

  You can set up this permanently in /var/lib/iscsi/ifaces/ directory,
  by creating a file called 'iface0' with this content:

  iface.iscsi_ifacename = iface0
  iface.transport_name = tcp
  iface.net_ifacename = bond0

  Hopefully that helps.

  -- Pasi






Re: iscsiadm and bonding

2010-03-09 Thread aclhkaclhk aclhkaclhk
I do not want to use multipath, as I have set up an HA iSCSI target;
192.168.123.174 is the cluster IP.

On Mar 9, 6:25 pm, aclhkaclhk aclhkaclhk aclhkac...@gmail.com wrote:
 the bonding is 802.3ad. the bonding is functioning ok because i have
 created a xen bridge over the bonding. the domu using the xen bridge
 can connect to outside and vice versa

 bond0 - xenbr1 - domu - outside hosts

 pls advise

 On Mar 9, 6:19 pm, aclhkaclhk aclhkaclhk aclhkac...@gmail.com wrote:

  thanks, i could discover using bond0 but could not login.

  iscsi-target (openfiler): eth0 (192.168.123.174)
  iscsi-initiator: eth0 (192.168.123.176), bond0-mode4 (192.168.123.178)

  /var/lib/iscsi/ifaces/iface0
  # BEGIN RECORD 2.0-871
  iface.iscsi_ifacename = iface0
   iface.net_ifacename = bond0
   iface.transport_name = tcp
   # END RECORD

   [r...@pc176 ifaces]# iscsiadm -m discovery -t st -p 192.168.123.174 --interface iface0
   192.168.123.174:3260,1 192.168.123.174-vg0drbd-iscsi0
   [r...@pc176 ifaces]# iscsiadm --mode node --targetname 192.168.123.174-vg0drbd-iscsi0 --portal 192.168.123.174:3260 --login --interface iface0
   Logging in to [iface: iface0, target: 192.168.123.174-vg0drbd-iscsi0,
   portal: 192.168.123.174,3260]
   iscsiadm: Could not login to [iface: iface0, target: 192.168.123.174-
   vg0drbd-iscsi0, portal: 192.168.123.174,3260]:
   iscsiadm: initiator reported error (8 - connection timed out)

   i could use eth0 to discover and login

   ping 192.168.123.174 from iscsi-initiator is ok
   ping 192.168.123.178 from iscsi-target is ok

   192.168.123.178 is authorised in iscsi-target to login

  On Mar 8, 6:30 pm, Pasi Kärkkäinen pa...@iki.fi wrote:

   On Mon, Mar 08, 2010 at 02:08:43AM -0800, aclhkaclhk aclhkaclhk wrote:
my server has eth0 (onboard), eth1 and eth2 (intel lan card). eth1 and
eth2 are bonded as bond0

i want to login iscsi target using bond0 instead of eth0.

iscsiadm --mode node --targetname 192.168.123.1-vg0drbd-iscsi0 --
portal 192.168.123.1:3260 --login --interface bond0
iscsiadm: Could not read iface info for bond0.

with interface, eth0 is used.

the bonding was setup correctly. it could be used by xen.

   You need to create the open-iscsi 'iface0' first, and bind it to 'bond0'.
   Then you can login using 'iface0' interface.

   Like this (on the fly):

   # iscsiadm -m iface -I iface0 -o new
   New interface iface0 added

   # iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v bond0
   iface0 updated.

   You can set up this permanently in /var/lib/iscsi/ifaces/ directory,
   by creating a file called 'iface0' with this content:

   iface.iscsi_ifacename = iface0
   iface.transport_name = tcp
   iface.net_ifacename = bond0

   Hopefully that helps.

   -- Pasi






Re: iscsiadm and bonding

2010-03-09 Thread Ciprian Marius Vizitiu (GBIF)

On 03/09/2010 01:49 PM, Hoot, Joseph wrote:

I had a similar issue, just not using bonding.  The gist of my problem was 
that, when connecting a physical network card to a bridge, iscsiadm will not 
login through that bridge (at least in my experience).  I could discover just 
fine, but wasn't ever able to login.  I am no longer attempting (at least for 
the moment because of time) to get it working this way, but I would love to 
change our environment in the future if a scenario such as this would work, 
because it gives me the flexibility to pass a virtual network card through to 
the guest and allow the guest to initiate its own iSCSI traffic instead of me 
doing it all at the dom0 level and then passing those block devices through.
   
Out of curiosity, why would you do that? Why let the guest bear the 
iSCSI load instead of the host OS offering block devices? After all, the 
host OS could use hardware acceleration (assuming that works)?


Anybody care to give an argument? From what I've seen, iSCSI load 
gets distributed to the various CPUs in funny ways. Assuming KVM and no 
hardware iSCSI: have the host do iSCSI and give the guests emulated Realtek 
cards, and the iSCSI CPU load gets distributed. Have the guest do the iSCSI, 
again with an emulated Realtek card, and only the CPUs allocated to that guest 
are used, and used heavily. But switch to virtio for the network and 
the iSCSI load is once again spread across multiple CPUs no matter 
who is doing the iSCSI. At least for one guest... so which poison to choose?





Re: iscsiadm and bonding

2010-03-09 Thread aclhkaclhk aclhkaclhk
I created a domU (using bond0) on the iSCSI initiator Linux host. It could
connect to the iSCSI target.

On Mar 9, 6:29 pm, aclhkaclhk aclhkaclhk aclhkac...@gmail.com wrote:
 i do not want to use multipath as i have setup a HA iscsi target.
 192.168.123.174 is the cluster ip

 On Mar 9, 6:25 pm, aclhkaclhk aclhkaclhk aclhkac...@gmail.com wrote:

  the bonding is 802.3ad. the bonding is functioning ok because i have
  created a xen bridge over the bonding. the domu using the xen bridge
  can connect to outside and vice versa

  bond0 - xenbr1 - domu - outside hosts

  pls advise

  On Mar 9, 6:19 pm, aclhkaclhk aclhkaclhk aclhkac...@gmail.com wrote:

   thanks, i could discover using bond0 but could not login.

   iscsi-target (openfiler): eth0 (192.168.123.174)
   iscsi-initiator: eth0 (192.168.123.176), bond0-mode4 (192.168.123.178)

   /var/lib/iscsi/ifaces/iface0
   # BEGIN RECORD 2.0-871
   iface.iscsi_ifacename = iface0
    iface.net_ifacename = bond0
    iface.transport_name = tcp
    # END RECORD

    [r...@pc176 ifaces]# iscsiadm -m discovery -t st -p 192.168.123.174 --interface iface0
    192.168.123.174:3260,1 192.168.123.174-vg0drbd-iscsi0
    [r...@pc176 ifaces]# iscsiadm --mode node --targetname 192.168.123.174-vg0drbd-iscsi0 --portal 192.168.123.174:3260 --login --interface iface0
    Logging in to [iface: iface0, target: 192.168.123.174-vg0drbd-iscsi0,
    portal: 192.168.123.174,3260]
    iscsiadm: Could not login to [iface: iface0, target: 192.168.123.174-
    vg0drbd-iscsi0, portal: 192.168.123.174,3260]:
    iscsiadm: initiator reported error (8 - connection timed out)

    i could use eth0 to discover and login

    ping 192.168.123.174 from iscsi-initiator is ok
    ping 192.168.123.178 from iscsi-target is ok

    192.168.123.178 is authorised in iscsi-target to login

   On Mar 8, 6:30 pm, Pasi Kärkkäinen pa...@iki.fi wrote:

On Mon, Mar 08, 2010 at 02:08:43AM -0800, aclhkaclhk aclhkaclhk wrote:
 my server has eth0 (onboard), eth1 and eth2 (intel lan card). eth1 and
 eth2 are bonded as bond0

 i want to login iscsi target using bond0 instead of eth0.

 iscsiadm --mode node --targetname 192.168.123.1-vg0drbd-iscsi0 --
 portal 192.168.123.1:3260 --login --interface bond0
 iscsiadm: Could not read iface info for bond0.

 with interface, eth0 is used.

 the bonding was setup correctly. it could be used by xen.

You need to create the open-iscsi 'iface0' first, and bind it to 
'bond0'.
Then you can login using 'iface0' interface.

Like this (on the fly):

# iscsiadm -m iface -I iface0 -o new
New interface iface0 added

# iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v 
bond0
iface0 updated.

You can set up this permanently in /var/lib/iscsi/ifaces/ directory,
by creating a file called 'iface0' with this content:

iface.iscsi_ifacename = iface0
iface.transport_name = tcp
iface.net_ifacename = bond0

Hopefully that helps.

-- Pasi






Re: iscsiadm and bonding

2010-03-09 Thread Hoot, Joseph
Agreed, I would probably continue to do it at the dom0 level for now and pass 
through the block devices.  If this solution were used, it would give me the 
flexibility to initiate in the guest if I decided to do so.  I believe there would 
definitely be CPU and network overhead.  Given today's CPUs, however, I 
don't know if I would worry too much about that.  Most of the issues I run into 
with virtualization are either storage- or memory-related, not CPU.   Also, given 
the issues I've heard of with TCP offloading and the bnx2 drivers, I have 
disabled that in our environment (again, with CPU being more of a commodity 
these days).
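
For what it's worth, those offload features are normally toggled per interface with ethtool; the exact feature names depend on the driver, so this is only a sketch:

# show current offload settings, then turn off TCP segmentation / generic segmentation offload
ethtool -k eth0
ethtool -K eth0 tso off gso off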

Although I don't know the performance tradeoffs, I definitely think it is worth 
investigating, namely because of the flexibility it gives the admin.   In 
addition to enabling guest initiation, it allows me to provision a 
volume on our iSCSI storage such that the only system that needs to access it 
is the guest VM.  I don't have to allow ACLs for multiple Xen or KVM systems to 
connect to it.  Another issue is specific to the EqualLogic, but may show up 
in other iSCSI systems as well: I can only have, I think, 15-20 
ACLs per volume.  If I have a 10-node cluster of dom0s for my Xen environment 
and each node has 2 iSCSI interfaces, that is 20 ACLs that may be needed per volume 
(depending on how you write your ACLs).  If the iSCSI volume were initiated in 
the guest, I would just need to include two ACLs for the guest's virtual NICs.

Also, when doing this, I have to do all the `iscsiadm -m discovery` and `iscsiadm 
-m node -T iqn -l` runs, and then go adjust /etc/multipath.conf on all my dom0 nodes 
before I can finally pass the volume's block device through to the 
guest.  With the iSCSI guest initiation solution, I would just need to do 
this on the guest alone.
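
For reference, the per-volume dom0 steps look roughly like this; the group IP, IQN, WWID and alias below are placeholders, not our real values:

# on every dom0 node
iscsiadm -m discovery -t st -p 192.168.10.10
iscsiadm -m node -T iqn.2001-05.com.equallogic:vol-example -l

# plus a stanza per volume in /etc/multipath.conf
multipaths {
    multipath {
        wwid  36090a0280000example0000000000000
        alias vm-example-data
    }
}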

Another issue I have today is that if I ever want to move a VM from one 
Xen cluster to another, I would need to not only rsync that base iSCSI image 
over to the other cluster (because we still use image files for our root disk and 
swap), but also change around the ACLs so that the other cluster has access to 
those iSCSI volumes.  Again, see the last couple of paragraphs 
regarding ACLs.  If the guest were doing the initiation, I would just rsync the 
root img file over to the other cluster and start up the guest. It would still 
be the only one with the ACLs to connect.

However, one thing that would be worse with guest-initiated iSCSI is that the 
iSCSI network becomes less secure, because the initiation is being done inside 
the guest rather than the guest simply being handed a block device.  So this is 
something to be concerned about.

Thanks,
Joe

===
Joseph R. Hoot
Lead System Programmer/Analyst
joe.h...@itec.suny.edu
GPG KEY:   7145F633
===

On Mar 9, 2010, at 8:06 AM, Ciprian Marius Vizitiu (GBIF) wrote:

 On 03/09/2010 01:49 PM, Hoot, Joseph wrote:
 I had a similar issue, just not using bonding.  The gist of my problem was 
 that, when connecting a physical network card to a bridge, iscsiadm will not 
 login through that bridge (at least in my experience).  I could discover 
 just fine, but wasn't ever able to login.  I am no longer attempting (at 
 least for the moment because of time) to get it working this way, but I 
 would love to change our environment in the future if a scenario such as 
 this would work, because it gives me the flexibility to pass a virtual 
 network card through to the guest and allow the guest to initiate its own 
 iSCSI traffic instead of me doing it all at the dom0 level and then passing 
 those block devices through.
 
 Out of curiosity, why would you do that? Why let the guest bear the 
 iSCSI load instead of the host OS offering block devices? Eventually the 
 host OS could use hardware acceleration (assuming that works)?
 
 Anybody care to give an argument? Because from what I've seen iSCSI load 
 gets distributed to various CPUs in funny ways. Assuming KVM and no 
 hardware iSCSI, have the host do iSCSI and the guests with Realtek emu 
 cards, the iSCSI CPU load gets distributed. Have the guest do the iSCSI, 
 again with Realtek emu one can see only the CPUs allocated to that guest 
 being used; and heavy usage. But then switch to virtio for network and 
 the iSCSI load is once again spread through multiple CPUs no matter 
 who's doing the iSCSI. At least for 1 guest... so which poison to choose?
 


Re: iscsiadm and bonding

2010-03-09 Thread Pasi Kärkkäinen
On Tue, Mar 09, 2010 at 07:49:00AM -0500, Hoot, Joseph wrote:
 I had a similar issue, just not using bonding.  The gist of my problem was 
 that, 
 when connecting a physical network card to a bridge, iscsiadm will not login 
 through 
 that bridge (at least in my experience).  I could discover just fine, but 
 wasn't ever able to login.  
 

This sounds like a configuration problem to me. 

Did you remove the IP addresses from eth0/eth1, and make sure only bond0 has 
the IP? 
Was the routing table correct? 

As long as the kernel routing table is correct, open-iscsi shouldn't care which
interface you're using.

(Unless you bind the open-iscsi iface to some physical interface).
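
A quick way to verify both, as a sketch:

# only bond0 should own the IP; the slaves should show no inet address
ip addr show bond0
ip addr show eth0
ip addr show eth1

# and the route towards the target's subnet should go out via bond0
ip route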


-- Pasi


 I am no longer attempting (at least for the moment because of time) to get it 
 working this way, but I would love to change our environment in the future if 
 a scenario such as this would work, because it gives me the flexibility to 
 pass a virtual network card through to the guest and allow the guest to 
 initiate its own iSCSI traffic instead of me doing it all at the dom0 level 
 and then passing those block devices through.
 
 I've attached a network diagram that explains my situation.  The goal is to 
 give the administrator flexibility to have fiber or iSCSI storage at the xen 
 dom0 as well as being able to pass-through that storage to the guest and 
 allow the guest to initiate iSCSI sessions.  This gives the guest the 
 flexibility to be able to run snapshot-type commands and things using the 
 EqualLogic HIT kits.
 

 Thanks,
 Joe
 
 ===
 Joseph R. Hoot
 Lead System Programmer/Analyst
 joe.h...@itec.suny.edu
 GPG KEY:   7145F633
 ===
 
 On Mar 9, 2010, at 5:19 AM, aclhkaclhk aclhkaclhk wrote:
 
  thanks, i could discover using bond0 but could not login.
  
  iscsi-target (openfiler): eth0 (192.168.123.174)
  iscsi-initiator: eth0 (192.168.123.176), bond0-mode4 (192.168.123.178)
  
  /var/lib/iscsi/ifaces/iface0
  # BEGIN RECORD 2.0-871
  iface.iscsi_ifacename = iface0
  iface.net_ifacename = bond0
  iface.transport_name = tcp
  # END RECORD
  
  [r...@pc176 ifaces]# iscsiadm -m discovery -t st -p 192.168.123.174 --interface iface0
  192.168.123.174:3260,1 192.168.123.174-vg0drbd-iscsi0
  [r...@pc176 ifaces]# iscsiadm --mode node --targetname 192.168.123.174-vg0drbd-iscsi0 --portal 192.168.123.174:3260 --login --interface iface0
  Logging in to [iface: iface0, target: 192.168.123.174-vg0drbd-iscsi0,
  portal: 192.168.123.174,3260]
  iscsiadm: Could not login to [iface: iface0, target: 192.168.123.174-
  vg0drbd-iscsi0, portal: 192.168.123.174,3260]:
  iscsiadm: initiator reported error (8 - connection timed out)
  
  i could use eth0 to discover and login
  
  ping 192.168.123.174 from iscsi-initiator is ok
  ping 192.168.123.178 from iscsi-target is ok
  
  192.168.123.178 is authorised in iscsi-target to login
  
  
  On Mar 8, 6:30 pm, Pasi Kärkkäinen pa...@iki.fi wrote:
  On Mon, Mar 08, 2010 at 02:08:43AM -0800, aclhkaclhk aclhkaclhk wrote:
  my server has eth0 (onboard), eth1 and eth2 (intel lan card). eth1 and
  eth2 are bonded as bond0
  
  i want to login iscsi target using bond0 instead of eth0.
  
  iscsiadm --mode node --targetname 192.168.123.1-vg0drbd-iscsi0 --
  portal 192.168.123.1:3260 --login --interface bond0
  iscsiadm: Could not read iface info for bond0.
  
  with interface, eth0 is used.
  
  the bonding was setup correctly. it could be used by xen.
  
  You need to create the open-iscsi 'iface0' first, and bind it to 'bond0'.
  Then you can login using 'iface0' interface.
  
  Like this (on the fly):
  
  # iscsiadm -m iface -I iface0 -o new
  New interface iface0 added
  
  # iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v bond0
  iface0 updated.
  
  You can set up this permanently in /var/lib/iscsi/ifaces/ directory,
  by creating a file called 'iface0' with this content:
  
  iface.iscsi_ifacename = iface0
  iface.transport_name = tcp
  iface.net_ifacename = bond0
  
  Hopefully that helps.
  
  -- Pasi
  



Re: iscsiadm and bonding

2010-03-09 Thread Mike Christie

On 03/09/2010 04:19 AM, aclhkaclhk aclhkaclhk wrote:

thanks, i could discover using bond0 but could not login.

iscsi-target (openfiler): eth0 (192.168.123.174)
iscsi-initiator: eth0 (192.168.123.176), bond0-mode4 (192.168.123.178)

/var/lib/iscsi/ifaces/iface0
# BEGIN RECORD 2.0-871
iface.iscsi_ifacename = iface0
  iface.net_ifacename = bond0
  iface.transport_name = tcp
  # END RECORD

  [r...@pc176 ifaces]# iscsiadm -m discovery -t st -p 192.168.123.174 --interface iface0
  192.168.123.174:3260,1 192.168.123.174-vg0drbd-iscsi0
  [r...@pc176 ifaces]# iscsiadm --mode node --targetname 192.168.123.174-vg0drbd-iscsi0 --portal 192.168.123.174:3260 --login --interface iface0
  Logging in to [iface: iface0, target: 192.168.123.174-vg0drbd-iscsi0,
  portal: 192.168.123.174,3260]
  iscsiadm: Could not login to [iface: iface0, target: 192.168.123.174-
  vg0drbd-iscsi0, portal: 192.168.123.174,3260]:
  iscsiadm: initiator reported error (8 - connection timed out)

  i could use eth0 to discover and login

  ping 192.168.123.174 from iscsi-initiator is ok
  ping 192.168.123.178 from iscsi-target is ok



I am not sure if I have ever tried bonding with ifaces.

Can you do

ping -I bond0 192.168.123.174

? If that does not work then the iscsi iface bonding will not either (we 
use the same syscalls to bind to the interface).


Is it possible to just set up the routing table so the iscsi traffic 
goes through the bonded interface to the target?
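
For example, something along these lines might do it (untested sketch, using the addresses from your mail, and only if you drop the iface binding):

# prefer bond0 for traffic to the target
ip route add 192.168.123.174/32 dev bond0 src 192.168.123.178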





Re: iscsiadm and bonding

2010-03-08 Thread Pasi Kärkkäinen
On Mon, Mar 08, 2010 at 02:08:43AM -0800, aclhkaclhk aclhkaclhk wrote:
 my server has eth0 (onboard), eth1 and eth2 (intel lan card). eth1 and
 eth2 are bonded as bond0
 
 i want to login iscsi target using bond0 instead of eth0.
 
 iscsiadm --mode node --targetname 192.168.123.1-vg0drbd-iscsi0 --portal 192.168.123.1:3260 --login --interface bond0
 iscsiadm: Could not read iface info for bond0.
 
 with interface, eth0 is used.
 
 the bonding was setup correctly. it could be used by xen.
 

You need to create the open-iscsi 'iface0' first, and bind it to 'bond0'.
Then you can login using 'iface0' interface.

Like this (on the fly):

# iscsiadm -m iface -I iface0 -o new
New interface iface0 added

# iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v bond0
iface0 updated.

You can set up this permanently in /var/lib/iscsi/ifaces/ directory, 
by creating a file called 'iface0' with this content:

iface.iscsi_ifacename = iface0 
iface.transport_name = tcp
iface.net_ifacename = bond0
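
After that, discovery and login can be bound to the new iface, roughly like this (using the portal and target name from your mail):

iscsiadm -m discovery -t st -p 192.168.123.1:3260 --interface iface0
iscsiadm -m node -T 192.168.123.1-vg0drbd-iscsi0 -p 192.168.123.1:3260 --interface iface0 --login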

Hopefully that helps.

-- Pasi
