Re: iscsiadm and bonding

2010-03-09 Thread aclhkaclhk aclhkaclhk
Thanks, I could discover using bond0 but could not log in.

iSCSI target (Openfiler): eth0 (192.168.123.174)
iSCSI initiator: eth0 (192.168.123.176), bond0 mode 4 (192.168.123.178)

/var/lib/iscsi/ifaces/iface0:
# BEGIN RECORD 2.0-871
iface.iscsi_ifacename = iface0
iface.net_ifacename = bond0
iface.transport_name = tcp
# END RECORD

[r...@pc176 ifaces]# iscsiadm -m discovery -t st -p 192.168.123.174 --interface iface0
192.168.123.174:3260,1 192.168.123.174-vg0drbd-iscsi0

[r...@pc176 ifaces]# iscsiadm --mode node --targetname 192.168.123.174-vg0drbd-iscsi0 --portal 192.168.123.174:3260 --login --interface iface0
Logging in to [iface: iface0, target: 192.168.123.174-vg0drbd-iscsi0, portal: 192.168.123.174,3260]
iscsiadm: Could not login to [iface: iface0, target: 192.168.123.174-vg0drbd-iscsi0, portal: 192.168.123.174,3260]:
iscsiadm: initiator reported error (8 - connection timed out)

I could use eth0 to discover and log in.

Pinging 192.168.123.174 from the iSCSI initiator is OK.
Pinging 192.168.123.178 from the iSCSI target is OK.

192.168.123.178 is authorised on the iSCSI target to log in.


On Mar 8, 6:30 pm, Pasi Kärkkäinen pa...@iki.fi wrote:
 On Mon, Mar 08, 2010 at 02:08:43AM -0800, aclhkaclhk aclhkaclhk wrote:
  My server has eth0 (onboard), plus eth1 and eth2 (Intel LAN card). eth1 and
  eth2 are bonded as bond0.

  I want to log in to the iSCSI target using bond0 instead of eth0.

  iscsiadm --mode node --targetname 192.168.123.1-vg0drbd-iscsi0 --portal 192.168.123.1:3260 --login --interface bond0
  iscsiadm: Could not read iface info for bond0.

  With the default interface, eth0 is used.

  The bonding was set up correctly; it can be used by Xen.

 You need to create the open-iscsi 'iface0' first, and bind it to 'bond0'.
 Then you can log in using the 'iface0' interface.

 Like this (on the fly):

 # iscsiadm -m iface -I iface0 -o new
 New interface iface0 added

 # iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v bond0
 iface0 updated.

 You can set up this permanently in /var/lib/iscsi/ifaces/ directory,
 by creating a file called 'iface0' with this content:

 iface.iscsi_ifacename = iface0
 iface.transport_name = tcp
 iface.net_ifacename = bond0
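
 After that, discovery and login can be pointed at the bound iface explicitly.
 A minimal sketch, using the target name and portal from your earlier command
 (untested against your setup, so adjust as needed):

 # iscsiadm -m discovery -t st -p 192.168.123.1 -I iface0
 # iscsiadm -m node -T 192.168.123.1-vg0drbd-iscsi0 -p 192.168.123.1:3260 -I iface0 --login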

 Hopefully that helps.

 -- Pasi




Re: iscsiadm and bonding

2010-03-09 Thread aclhkaclhk aclhkaclhk
The bonding is 802.3ad (mode 4). The bonding is functioning OK, because I have
created a Xen bridge over the bond: a domU using that Xen bridge can connect to
outside hosts and vice versa.

bond0 - xenbr1 - domU - outside hosts

Please advise.




Re: iscsiadm and bonding

2010-03-09 Thread aclhkaclhk aclhkaclhk
I do not want to use multipath, as I have set up an HA iSCSI target;
192.168.123.174 is the cluster IP.




Re: iscsiadm and bonding

2010-03-09 Thread Ciprian Marius Vizitiu (GBIF)

On 03/09/2010 01:49 PM, Hoot, Joseph wrote:

I had a similar issue, just not using bonding.  The gist of my problem was 
that, when connecting a physical network card to a bridge, iscsiadm will not 
login through that bridge (at least in my experience).  I could discover just 
fine, but wasn't ever able to login.  I am no longer attempting (at least for 
the moment because of time) to get it working this way, but I would love to 
change our environment in the future if a scenario such as this would work, 
because it gives me the flexibility to pass a virtual network card through to 
the guest and allow the guest to initiate its own iSCSI traffic instead of me 
doing it all at the dom0 level and then passing those block devices through.
   
Out of curiosity, why would you do that? Why let the guest bear the iSCSI load
instead of the host OS offering block devices? Eventually the host OS could use
hardware acceleration (assuming that works)?

Anybody care to give an argument? From what I've seen, iSCSI load gets
distributed to the various CPUs in funny ways. Assuming KVM and no hardware
iSCSI: have the host do iSCSI and give the guests emulated Realtek cards, and
the iSCSI CPU load gets spread around. Have the guest do the iSCSI, again with
an emulated Realtek card, and only the CPUs allocated to that guest are used,
and used heavily. But switch to virtio for the network and the iSCSI load is
once again spread across multiple CPUs, no matter who is doing the iSCSI. At
least for one guest... so which poison to choose?





Re: iscsiadm and bonding

2010-03-09 Thread aclhkaclhk aclhkaclhk
I created a domU (using bond0) on the Linux iSCSI initiator host. It could
connect to the iSCSI target.




Re: iscsiadm and bonding

2010-03-09 Thread Hoot, Joseph
Agreed, I would probably continue to do it on the dom0 for now and pass through
the block devices.  If this solution were used, it would give me the flexibility
to initiate in the guest if I decided to do so.  I believe there would definitely
be CPU and network overhead.  Given today's CPUs, however, I don't know if I
would worry too much about that.  Most of the issues I run into with
virtualization are storage- or memory-related, not CPU-related.  Also, given the
issues I've heard of with regard to TCP offloading and the bnx2 driver, I have
disabled offloading in our environment (again, with CPU being more of a
commodity these days).

Although I don't know the performance tradeoffs, I definitely think it is worth
investigating, mainly because of the flexibility it gives the admin.  In
addition to enabling guest initiation, it allows me to provision a volume on our
iSCSI storage such that the only system that needs to access it is the guest VM;
I don't have to allow ACLs for multiple Xen or KVM hosts to connect to it.
Another issue, specific to the EqualLogic but likely to show up in other iSCSI
systems too, is that I can only have, I think, 15-20 ACLs per volume.  If I have
a 10-node cluster of dom0s for my Xen environment and each node has 2 iSCSI
interfaces, that is 20 ACLs that may be needed per volume (depending on how you
write your ACLs).  If the iSCSI volume were initiated in the guest, I would just
need to include two ACLs for each virtual NIC of the guest.

Also, when doing it this way, I have to run `iscsiadm -m discovery` and
`iscsiadm -m node -T iqn -l`, and then adjust /etc/multipath.conf, on all my
dom0 nodes before I can finally pass the volume's block device through to the
guest.  With guest-initiated iSCSI, I would just need to do this on the guest
alone.
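
For concreteness, the per-dom0 dance looks roughly like this (the portal, IQN,
and multipath alias below are made-up placeholders, and the multipath.conf
snippet is only a sketch of the kind of entry I mean):

# on every dom0 node:
iscsiadm -m discovery -t st -p 192.168.100.10
iscsiadm -m node -T iqn.2001-05.com.equallogic:example-volume -l

# then describe the device in /etc/multipath.conf on each node, e.g.
#   multipath {
#       wwid   <wwid reported by "multipath -ll">
#       alias  guest01-data
#   }
# and reload the maps:
multipath -r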

Another issue I have today is that if I ever want to move a VM from one Xen
cluster to another, I need to not only rsync the base image over to the other
cluster (because we still use image files for our root disk and swap), but also
change around the ACLs so that the other cluster has access to those iSCSI
volumes.  Again, see the last couple of paragraphs regarding ACLs.  If the guest
were doing the initiation, I would just rsync the root image file over to the
other cluster and start up the guest; it would still be the only one with the
ACLs to connect.

However, one drawback of passing iSCSI traffic through into the guest is that
the iSCSI network becomes less secure (because the initiation is being done
inside the guest rather than the storage being handed to it as a block device).
That is something to be concerned with.

Thanks,
Joe

===
Joseph R. Hoot
Lead System Programmer/Analyst
joe.h...@itec.suny.edu
GPG KEY:   7145F633
===


Re: open-iscsi against sun storage tek 2500 fails: 1011 error.

2010-03-09 Thread Oriol Morell




Hello Mike,

We have applied the following lines, but the problem persists. Please check
the info placed in this email.

 Oops, this is a mistake. Do not set the abort timeout to zero. Set these:

 node.conn[0].timeo.noop_out_interval = 0
 node.conn[0].timeo.noop_out_timeout = 0

 To make them stick you have to set them in iscsid.conf, then rerun discovery
 and relogin, or you can run:

 iscsiadm -m node -T your_target -o update -n node.conn[0].timeo.noop_out_interval -v 0
 iscsiadm -m node -T your_target -o update -n node.conn[0].timeo.noop_out_timeout -v 0
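
(As a side note: the node record can be dumped to double-check that the values
actually took effect; 'your_target' below is the same placeholder as in the
commands above, not verified against this particular array:)

iscsiadm -m node -T your_target | grep noop_out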

Oriol

kabuki07 open-iscsi-2.0-871 # iscsiadm -m host -P 4
iSCSI Transport Class version 2.0-870
version 2.0-871
Host Number: 2
 State: running
 Transport: tcp
 Initiatorname: (null)
 IPaddress: 192.168.192.22
 HWaddress: (null)
 Netdev: (null)
 *
 Sessions:
 *
 Target: iqn.1986-03.com.sun:2510.600a0b800049cc7a4a0d4c7b
 Current Portal: 192.168.192.136:3260,2
 Persistent Portal: 192.168.192.136:3260,2
 **
 Interface:
 **
 Iface Name: default
 Iface Transport: tcp
 Iface Initiatorname: iqn.2010-03.kabuki07:openiscsi-68d8cacb96efe440162d3567ebde79e2
 Iface IPaddress: 192.168.192.22
 Iface HWaddress: (null)
 Iface Netdev: (null)
 SID: 2
 iSCSI Connection State: LOGGED IN
 iSCSI Session State: LOGGED_IN
 Internal iscsid Session State: NO CHANGE
 
 Negotiated iSCSI params:
 
 HeaderDigest: None
 DataDigest: None
 MaxRecvDataSegmentLength: 262144
 MaxXmitDataSegmentLength: 65536
 FirstBurstLength: 8192
 MaxBurstLength: 262144
 ImmediateData: Yes
 InitialR2T: Yes
 MaxOutstandingR2T: 1
 
 Attached SCSI devices:
 
 scsi2 Channel 00 Id 0 Lun: 0
 Attached scsi disk sdb State: running
 scsi2 Channel 00 Id 0 Lun: 31
kabuki07 open-iscsi-2.0-871 # 
  
kabuki07 open-iscsi-2.0-871 # iscsiadm -m host -P 3
Host Number: 2
 State: running
 Transport: tcp
 Initiatorname: (null)
 IPaddress: 192.168.192.22
 HWaddress: (null)
 Netdev: (null)
 *
 Sessions:
 *
 Target: iqn.1986-03.com.sun:2510.600a0b800049cc7a4a0d4c7b
 Current Portal: 192.168.192.136:3260,2
 Persistent Portal: 192.168.192.136:3260,2
 **
 Interface:
 **
 Iface Name: default
 Iface Transport: tcp
 Iface Initiatorname: iqn.2010-03.kabuki07:openiscsi-68d8cacb96efe440162d3567ebde79e2
 Iface IPaddress: 192.168.192.22
 Iface HWaddress: (null)
 Iface Netdev: (null)
 SID: 2
 iSCSI Connection State: LOGGED IN
 iSCSI Session State: LOGGED_IN
 Internal iscsid Session State: NO CHANGE
 
 Negotiated iSCSI params:
 
 HeaderDigest: None
 DataDigest: None
 MaxRecvDataSegmentLength: 262144
 MaxXmitDataSegmentLength: 65536
 FirstBurstLength: 8192
 MaxBurstLength: 262144
 ImmediateData: Yes
 InitialR2T: Yes
 MaxOutstandingR2T: 1
  
kabuki07 open-iscsi-2.0-871 # iscsiadm -m session -P 1
Target: iqn.1986-03.com.sun:2510.600a0b800049cc7a4a0d4c7b
 Current Portal: 192.168.192.136:3260,2
 Persistent Portal: 192.168.192.136:3260,2
 **
 Interface:
 **
 Iface Name: default
 Iface Transport: tcp
 Iface Initiatorname: iqn.2010-03.kabuki07:openiscsi-68d8cacb96efe440162d3567ebde79e2
 Iface IPaddress: 192.168.192.22
 Iface HWaddress: (null)
 Iface Netdev: (null)
 SID: 2
 iSCSI Connection State: LOGGED IN
 iSCSI Session State: LOGGED_IN
 Internal iscsid Session State: NO CHANGE
  

kabuki07 open-iscsi-2.0-871 # iscsiadm -m host -P 1
Host Number: 2
 State: running
 Transport: tcp
 Initiatorname: (null)
 IPaddress: 192.168.192.22
 HWaddress: (null)
 Netdev: (null)
  
/var/log/messages:
Mar 9 08:28:23 kabuki07 kernel: scsi1 : iSCSI Initiator over TCP/IP
Mar 9 08:28:23 kabuki07 kernel: scsi 1:0:0:0: Direct-Access SUN LCSM100_I 0735 PQ: 0 ANSI: 5
Mar 9 08:28:23 kabuki07 kernel: sd 1:0:0:0: Attached scsi generic sg1 type 0
Mar 9 08:28:23 kabuki07 kernel: sd 1:0:0:0: [sdb] 2147483648 512-byte logical blocks: (1.09 TB/1.00 TiB)
Mar 9 08:28:23 kabuki07 kernel: scsi 1:0:0:31: Direct-Access SUN Universal Xport 0735 PQ: 0 ANSI: 5
Mar 9 08:28:23 kabuki07 kernel: scsi 1:0:0:31: Attached scsi generic sg2 type 0
Mar 9 08:28:23 kabuki07 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Mar 9 08:28:23 kabuki07 kernel: sd 1:0:0:0: [sdb] Mode Sense: 77 00 10 08
Mar 9 08:28:23 kabuki07 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
Mar 9 08:28:24 kabuki07 iscsid: connection1:0 is operational now
Mar 9 08:29:15 kabuki07 kernel: sdb:
Mar 9 08:29:15 kabuki07 kernel: connection1:0: detected conn error (1020)
Mar 9 08:29:16 kabuki07 iscsid: Kernel reported iSCSI connection 1:0 error (1020) state (3)
Mar 9 08:29:19 kabuki07 iscsid: connection1:0 is operational after recovery (1 attempts)
Mar 9 08:30:10 kabuki07 kernel: connection1:0: detected conn error (1020)
Mar 9 08:30:10 kabuki07 kernel: sd 1:0:0:0: [sdb] Unhandled error code
Mar 9 08:30:10 

Re: iscsiadm and bonding

2010-03-09 Thread Pasi Kärkkäinen
On Tue, Mar 09, 2010 at 07:49:00AM -0500, Hoot, Joseph wrote:
 I had a similar issue, just not using bonding.  The gist of my problem was 
 that, 
 when connecting a physical network card to a bridge, iscsiadm will not login 
 through 
 that bridge (at least in my experience).  I could discover just fine, but 
 wasn't ever able to login.  
 

This sounds like a configuration problem to me. 

Did you remove the IP addresses from eth0/eth1, and make sure only bond0 has 
the IP? 
Was the routing table correct? 

As long as the kernel routing table is correct, open-iscsi shouldn't care which
interface you're using.

(Unless you bind the open-iscsi iface to some physical interface).
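
A few quick sanity checks along those lines (generic commands, with the portal
address left as a placeholder; the rp_filter check is just my guess at another
usual suspect, not something you reported):

ip addr show bond0                     # only bond0 should carry the IP, not the slaves
ip route get <portal-ip>               # should come back with "dev bond0"
cat /proc/net/bonding/bond0            # bond mode and slave state
sysctl net.ipv4.conf.bond0.rp_filter   # strict reverse-path filtering can bite bound ifaces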


-- Pasi


 I am no longer attempting (at least for the moment because of time) to get it 
 working this way, but I would love to change our environment in the future if 
 a scenario such as this would work, because it gives me the flexibility to 
 pass a virtual network card through to the guest and allow the guest to 
 initiate its own iSCSI traffic instead of me doing it all at the dom0 level 
 and then passing those block devices through.
 
 I've attached a network diagram that explains my situation.  The goal is to 
 give the administrator flexibility to have fiber or iSCSI storage at the xen 
 dom0 as well as being able to pass-through that storage to the guest and 
 allow the guest to initiate iSCSI sessions.  This gives the guest the 
 flexibility to be able to run snapshot-type commands and things using the 
 EqualLogic HIT kits.
 

Re: Using IFACE_SUBNET_MASK and IFACE_VLAN

2010-03-09 Thread Mike Christie

On 03/08/2010 04:27 PM, Benjamin Li wrote:

Hi Mike,

I was wondering about your thoughts on using the IFACE_SUBNET_MASK and
IFACE_VLAN definitions in the iface file.  In version 0.5.6 of uIP we have
started to add VLAN support, but with the current iface file only one VLAN is
supported at a time per iSCSI offload interface, because the code takes the
VLAN from the interface as passed by CNIC.  When looking through the iscsid
code in the 'usr' directory we see the following definitions:

--snip snip from 'usr/idbm_fields.h'--
#define IFACE_SUBNET_MASK   "iface.subnet_mask"
#define IFACE_VLAN          "iface.vlan"
--snip snip from 'usr/idbm_fields.h'--

I don't see these referenced anywhere else in the 'usr' directory by
iscsid or iscsiadm.  Were they intended to be used?  Were these fields
in the iface file going to be implemented and kept in the iscsid records
at some point?


They are used in /open-iscsi/utils/fwparam_ibft/fw_entry.c. Right now we just
use them for printing out the VLAN and subnet mask in some standard way.

They are not really used beyond that, since the rest of the code does not yet
support them. You can change the code to whatever is needed.
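
As a rough illustration of that printing path (only meaningful on a box whose
firmware/iBFT actually exports iSCSI boot settings, and the exact field names
may differ between versions):

# dump the firmware-provided boot records; the subnet mask and VLAN fields
# discussed above are among what gets printed when the firmware supplies them
iscsiadm -m fw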





Re: open-iscsi against sun storage tek 2500 fails: 1011 error.

2010-03-09 Thread Mike Christie

On 03/09/2010 07:05 AM, Oriol Morell wrote:

Hello Mike,

we have applied the following lines, but the problem persists. Please check info
placed in this email.




 Mar 9 08:29:15 kabuki07 kernel: connection1:0: detected conn error (1020)


Sort of a new problem, maybe. It looks like the target is dropping the
connection on us here. It might be that the target thinks we have screwed
something up. Do you see anything in the target logs? Maybe something about a
protocol violation, a bad PDU, a bad command, or something being formatted
incorrectly?





Re: iscsiadm and bonding

2010-03-09 Thread Mike Christie

On 03/09/2010 04:19 AM, aclhkaclhk aclhkaclhk wrote:

 Thanks, I could discover using bond0 but could not log in.
 [...]
 iscsiadm: initiator reported error (8 - connection timed out)

 I could use eth0 to discover and log in.

 Ping 192.168.123.174 from the iSCSI initiator is OK.
 Ping 192.168.123.178 from the iSCSI target is OK.

I am not sure if I have ever tried bonding with ifaces.

Can you do

ping -I bond0 192.168.123.174

? If that does not work, then binding the iSCSI iface to bond0 will not work
either (we use the same syscalls to bind to the interface).


Is it possible to just set up the routing table so the iSCSI traffic goes
through the bonded interface to the target?
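
A rough sketch of that routing-table approach, using the addresses from earlier
in the thread (untested here, and it would need an ifcfg/route file or similar
to survive a reboot):

# send traffic for the target portal out bond0, sourced from the bond's address
ip route add 192.168.123.174/32 dev bond0 src 192.168.123.178

# confirm which interface the kernel would now pick
ip route get 192.168.123.174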

