Re: iscsi device with multipath

2022-06-29 Thread The Lee-Man
I'm sorry I didn't see this earlier. I don't see any replies. I can't quite parse your question, though. What are you asking about? On Thursday, June 16, 2022 at 11:57:20 PM UTC-7 zhuca...@gmail.com wrote: > How can we reproduce the error with "Multiply-claimed blocks"? …

iscsi device with multipath

2022-06-16 Thread can zhu
How can we reproduce the error with "Multiply-claimed blocks"? …
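
e2fsck prints "Multiply-claimed block(s)" when two inodes end up pointing at the same block — classic fallout from writing one LUN through two uncoordinated paths. A deliberately destructive sketch of one way this can be provoked, assuming /dev/sdb and /dev/sdc are two plain SCSI paths to the same scratch LUN with no dm-multipath on top:

    # DESTRUCTIVE — scratch LUN only
    mkfs.ext4 /dev/sdb
    mount /dev/sdb /mnt/a
    mount /dev/sdc /mnt/b            # same LUN via the second path, independent page cache
    cp -a /usr/share/doc /mnt/a &    # write through both mounts concurrently
    cp -a /usr/share/man /mnt/b &
    wait; umount /mnt/a /mnt/b
    e2fsck -fn /dev/sdb              # expect multiply-claimed block errors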

Re: possible data corruption with iscsi_tcp and dm-multipath

2018-04-12 Thread Chris Leech
On Thu, Apr 12, 2018 at 09:43:34AM -0700, donald...@gmail.com wrote: > Hi Chris. > > Have you confirmed this is indeed data corruption that can be caused > by the open-iscsi implementation? > > I am looking at using multipath for availability purposes, but if there > is a risk of data corruption it is a no go. …

possible data corruption with iscsi_tcp and dm-multipath

2018-04-12 Thread donald . lu
Hi Chris. Have you confirmed this is indeed data corruption that can be caused by the open-iscsi implementation? I am looking at using multipath for availability purposes, but if there is a risk of data corruption it is a no go. If you have got to the bottom of this I will really appreciate your sharing …

possible data corruption with iscsi_tcp and dm-multipath

2016-03-03 Thread Chris Leech
When using multipath with iscsi_tcp, there is a possibility of data corruption when failing requests over from one path to another during a short network interruption. The situation looks like this: 1. A write is being sent on pathA, and the data is in the TCP transmit queue. 2. The network …

Re: iSCSI and multipath failover

2013-06-11 Thread Bubuli Nayak
Thanks Mike. I have a few follow-up questions, please. Why is this default replacement timeout so high? If I understand correctly, in a multipath environment it is perfectly alright to set it to 0 without any side effects. The other question is: how early can iSCSI detect a session error, be it with …

Re: iSCSI and multipath failover

2013-06-11 Thread Mike Christie
On 06/07/2013 12:55 AM, Bubuli Nayak wrote: > Hello experts, > > I have learnt from Mike's and others' comments that multipath failover would > be driven by nop timeout + nop interval + replacement_timeout seconds. > > My question is: what is the impact if I set replacement_timeout …

iSCSI and multipath failover

2013-06-11 Thread bubuli nayak
Hello experts, I have learnt from Mike's and others' comments that multipath failover would be driven by nop timeout + nop interval + replacement_timeout seconds. My question is: what is the impact if I set replacement_timeout to 0? I know if the NOP-OUT interval is low, the iSCSI initiator will more frequently …
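
The settings this thread keeps returning to live in /etc/iscsi/iscsid.conf. A minimal fast-failover sketch for dm-multipath setups — the values are illustrative, not the thread's exact numbers:

    node.conn[0].timeo.noop_out_interval = 5     # send an iSCSI NOP-OUT ping every 5 seconds
    node.conn[0].timeo.noop_out_timeout = 5      # declare the ping failed after 5 seconds with no reply
    node.session.timeo.replacement_timeout = 15  # then fail I/O up to dm-multipath after 15 seconds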

Re: Multipath or not ?

2013-03-14 Thread Guillaume
On 03/09/2013 05:58 AM, Guillaume wrote: >>>> Hello, >>>> I have a virtual tape library and an iscsi SAN. All have multiple >>>> ethernet interfaces. This will result in multiple sessions to the >>>> target…

Re: Multipath or not ?

2013-03-14 Thread Guillaume
>>> I have a virtual tape library and an iscsi SAN. All have multiple >>> ethernet interfaces. This will result in multiple sessions to the >>> targets. So I wonder if I must use dm-multipath or not? Does the current …

Re: Multipath or not ?

2013-03-13 Thread Hannes Reinecke
…in multiple sessions to the targets. So I wonder if I must use dm-multipath or not? Does the current… Does the device show up as a tape device or a block device? The VTL device emulates robotics, LTO cartridges and LTO5 tape drives. The SAN is a block device. Are you using SCST or TGT or LIO …

Re: Multipath or not ?

2013-03-12 Thread Mike Christie
…interfaces. This will result in multiple sessions to the >>> targets. So I wonder if I must use dm-multipath or not? Does the current >> >> Does the device show up as a tape device or a block device? >> > > The VTL device emulates robotics, LTO cartridges and LTO5 tape drives…

Re: Multipath or not ?

2013-03-12 Thread Guillaume
…targets. So I wonder if I must use dm-multipath or not? Does the current > > Does the device show up as a tape device or a block device? > The VTL device emulates robotics, LTO cartridges and LTO5 tape drives. The SAN is a block device. > > Does the current iscsi layer handle the multiple paths to an iqn or …

Re: Multipath or not ?

2013-03-11 Thread Mike Christie
On 03/09/2013 05:58 AM, Guillaume wrote: > Hello, > > I have a virtual tape library and an iscsi SAN. All have multiple > ethernet interfaces. This will result in multiple sessions to the > targets. So I wonder if I must use dm-multipath or not? Does the current… Does the device …

Re: Multipath or not ?

2013-03-11 Thread Guillaume
…virtual tape library and an iscsi SAN. All have multiple > > ethernet interfaces. This will result in multiple sessions to the > > targets. So I wonder if I must use dm-multipath or not? Does the > > Typically I only use multipath if I have one initiator talking to one target…

Re: Multipath or not ?

2013-03-11 Thread Donald Williams
…interfaces. This will result in multiple sessions to the targets. So I > wonder if I must use dm-multipath or not? Does the current iscsi layer > handle the multiple paths to an iqn or not? > > Another question about the output of "iscsiadm -m session": the lines of > output …

Re: Multipath or not ?

2013-03-11 Thread Mark Lehrer
I have a virtual tape library and an iscsi SAN. All have multiple ethernet interfaces. This will result in multiple sessions to the targets. So I wonder if I must use dm-multipath or not? Does the… Typically I only use multipath if I have one initiator talking to one target. Otherwise I just …

Multipath or not ?

2013-03-09 Thread Guillaume
Hello, I have a virtual tape library and an iscsi SAN. All have multiple ethernet interfaces. This will result in multiple sessions to the targets. So I wonder if I must use dm-multipath or not? Does the current iscsi layer handle the multiple paths to an iqn or not? Another question about …
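
On the second question, iscsiadm -m session prints one line per session in the form "transport: [sid] portal,tpgt target-iqn"; the portal addresses and IQN below are made up. Two lines naming the same IQN are two paths to one target — exactly what dm-multipath would coalesce into one device:

    # iscsiadm -m session
    tcp: [1] 192.168.10.1:3260,1 iqn.2001-04.com.example:storage.lun1
    tcp: [2] 192.168.20.1:3260,2 iqn.2001-04.com.example:storage.lun1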

Re: Ubuntu server + Open-iscsi + multipath + ocfs2. Connectivity loss causes immediate server reboot.

2012-06-14 Thread Mike Christie
On 06/14/2012 06:22 AM, Jiří Červenka wrote: > Hi, > on Ubuntu server 12.04 (3.2.0-24-generic) I use open-iscsi (2.0-871), > multipath-tools (v0.4.9) and ocfs2 (1.6.3-4ubuntu1) to access shared > storage HP P2000 G3 iscsi. Even short network connectivity loss is > causing immediate …

Ubuntu server + Open-iscsi + multipath + ocfs2. Connectivity loss causes immediate server reboot.

2012-06-14 Thread Jiří Červenka
Hi, on Ubuntu server 12.04 (3.2.0-24-generic) I use open-iscsi (2.0-871), multipath-tools (v0.4.9) and ocfs2 (1.6.3-4ubuntu1) to access shared storage HP P2000 G3 iscsi. Even short network connectivity loss is causing an immediate server crash and reboot. In syslog I cannot find any clue as to what …

Re: Q: multipath not recovering after device was offline

2012-03-26 Thread Mike Christie
On 03/26/2012 03:32 AM, Rene wrote: > Mike Christie writes: >> I think you need to rescan the devices at the scsi layer level (like >> doing an echo 1 > /sys/block/sdX/device/rescan), then run some multipath >> tool command, then run some FS and LVM commands if needed.

Re: Q: multipath not recovering after device was offline

2012-03-26 Thread Rene
Mike Christie writes: > I think you need to rescan the devices at the scsi layer level (like > doing an echo 1 > /sys/block/sdX/device/rescan), then run some multipath > tool command, then run some FS and LVM commands if needed. Hi, I'm having a similar problem and stumbled over …
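
A sketch of the recovery sequence Mike describes, assuming sdb is the stuck path — the sysfs writes are standard SCSI-layer knobs, and the multipathd invocation is one common way to poke the daemon, not the thread's exact commands:

    echo running > /sys/block/sdb/device/state    # clear an offlined SCSI device
    echo 1 > /sys/block/sdb/device/rescan         # rescan it at the SCSI layer
    multipathd -k"reinstate path sdb"             # ask multipathd to re-add the path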

Re: iSCSI and multipath performance issue

2011-09-23 Thread netz-haut - stephan seitz
…summaries: - Channel bonding (802.3ad) does not really help to get more throughput, even in a multiserver setup; it's only failover stuff. - More than 2 NICs on the initiator side are not helpful; multipath context switching and IRQ consumption are expensive. - 9k etherframes give (depending on workload) …

iSCSI and multipath performance issue

2011-09-16 Thread --[ UxBoD ]--
Hello all, we are about to configure a new storage system that utilizes the Nexenta OS with sparsely allocated ZVOLs. We wish to present 4TB of storage to a Linux system that has four NICs available to it. We are unsure whether to present one large ZVOL or four smaller ones to maximize the use

Re: Multipath or OpeniSCSI Issue ?

2011-05-20 Thread --[ UxBoD ]--
- Original Message - > On 05/20/2011 10:45 AM, --[ UxBoD ]-- wrote: > > - Original Message - > >> On 05/20/2011 10:29 AM, --[ UxBoD ]-- wrote: > >>> Not sure where this should go so cross-posting: > >>> > >>> CentOS 5.6 with kernel 2.6.37.6 …

Re: Multipath or OpeniSCSI Issue ?

2011-05-20 Thread Mike Christie
On 05/20/2011 10:45 AM, --[ UxBoD ]-- wrote: - Original Message - On 05/20/2011 10:29 AM, --[ UxBoD ]-- wrote: Not sure where this should go so cross-posting: CentOS 5.6 with kernel 2.6.37.6 and device-mapper-multipath-0.4.7-42.el5_6.2. When I run multipath -v9 -d I get: I think …

Re: Multipath or OpeniSCSI Issue ?

2011-05-20 Thread --[ UxBoD ]--
- Original Message - > On 05/20/2011 10:29 AM, --[ UxBoD ]-- wrote: > > Not sure where this should go so cross-posting: > > > > CentOS 5.6 with kernel 2.6.37.6 and > > device-mapper-multipath-0.4.7-42.el5_6.2. > > > > When I run multipath -v9 -d I get …

Re: Multipath or OpeniSCSI Issue ?

2011-05-20 Thread Mike Christie
On 05/20/2011 10:29 AM, --[ UxBoD ]-- wrote: Not sure where this should go so cross-posting: CentOS 5.6 with kernel 2.6.37.6 and device-mapper-multipath-0.4.7-42.el5_6.2. When I run multipath -v9 -d I get: I think you need to post to the dm-devel list. I do not see any iscsi issues in …

Multipath or OpeniSCSI Issue ?

2011-05-20 Thread --[ UxBoD ]--
Not sure where this should go so cross-posting: CentOS 5.6 with kernel 2.6.37.6 and device-mapper-multipath-0.4.7-42.el5_6.2. When I run multipath -v9 -d I get: sdg: not found in pathvec sdg: mask = 0x1f Segmentation fault If I strace the command I see: stat("/sys/block/sdg/d…

Re: Q: multipath not recovering after device was offline

2011-05-12 Thread Mike Christie
On 05/12/2011 01:30 AM, Ulrich Windl wrote: Hi! This is not exactly an open-iscsi question, but tightly related: On a SAN using FibreChannel I had a 4-way multipath device. The basic configuration (without aliases for devices) is: devices { device { vendor "HP" …

Q: multipath not recovering after device was offline

2011-05-11 Thread Ulrich Windl
Hi! This is not exactly an open-iscsi question, but tightly related: On a SAN using FibreChannel I had a 4-way multipath device. The basic configuration (without aliases for devices) is: devices { device { vendor "HP" product …

Re: blacklisting some paths in multipath environment

2010-12-04 Thread Arkadiusz Miskiewicz
> echo 1 > /sys/block/sdXYZ/device/delete Works quite nicely. First I blacklist devices in multipath based on wwid: blacklist { wwid 3600a0b80005bd40802dd4ce20897 wwid 3600a0b80005bd628035b4ce2107e } Then the actual deletion rules are executed by udev: # more …
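
Unflattened, the quoted blacklist is an ordinary /etc/multipath.conf stanza:

    blacklist {
        wwid 3600a0b80005bd40802dd4ce20897
        wwid 3600a0b80005bd628035b4ce2107e
    }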

Re: blacklisting some paths in multipath environment

2010-12-03 Thread Mike Christie
On 12/03/2010 06:20 AM, Arkadiusz Miskiewicz wrote: Now more complicated. Can I blacklist specific devices? My array is limited to only 4 initiator-storage pool mappings (which are used to allow access to logical disk X only from initiator A). Unfortunately I have 5 hosts. So I have 5 logical …

Re: blacklisting some paths in multipath environment

2010-12-03 Thread Arkadiusz Miskiewicz
On Tue, Nov 30, 2010 at 12:35 AM, Mike Christie wrote: > On 11/27/2010 06:23 PM, Arkadiusz Miskiewicz wrote: >> How can I blacklist some paths and still use automatic node.startup? >> > > You can set specific paths to not get logged into automatically by doing > > iscsiadm -m node -T target -p ip …
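
The truncated command is presumably the stock iscsiadm record update; with placeholder target and portal it would look like:

    iscsiadm -m node -T <target-iqn> -p <portal-ip:port> \
        -o update -n node.startup -v manual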

Re: blacklisting some paths in multipath environment

2010-11-29 Thread Mike Christie
On 11/27/2010 06:23 PM, Arkadiusz Miskiewicz wrote: Hello, I'm trying to use open-iscsi with a DS3300 array. The DS has two controllers, each with 2 ethernet ports. Unfortunately I use some SATA disks that aren't capable of being connected to two controllers (only one path on the SATA connector). This causes …

blacklisting some paths in multipath environment

2010-11-29 Thread Arkadiusz Miskiewicz
Hello, I'm trying to use open-iscsi with a DS3300 array. The DS has two controllers, each with 2 ethernet ports. Unfortunately I use some SATA disks that aren't capable of being connected to two controllers (only one path on the SATA connector). This causes disks to be accessible only through one controller …

Re: Ubuntu Root iSCSI with Debian IET plus ChannelBonding and Multipath

2010-06-09 Thread Ian MacDonald
On Wed, 2010-06-09 at 00:04 -0500, Mike Christie wrote: > 0 for replacement_timeout means that we wait for re-establishment > forever. I recently added it since trying to guess or tell people what > is a sufficiently long time was a pain. > Great, this explains the only difference in your …

Re: Ubuntu Root iSCSI with Debian IET plus ChannelBonding and Multipath

2010-06-08 Thread Mike Christie
On 06/08/2010 11:09 AM, Ian MacDonald wrote: Thanks Mike, I am a bit confused as to where I apply these; initially I assumed in iscsid.conf on the initiator. However, it seems that these can also apply to the target configuration in ietd.conf (and apply to all initiators). I do not think you …

Re: Ubuntu Root iSCSI with Debian IET plus ChannelBonding and Multipath

2010-06-08 Thread Ian MacDonald
…SCSI layer. Basically you want the opposite of when using dm-multipath. For this setup, you can turn off iSCSI pings by setting: node.conn[0].timeo.noop_out_interval = 0 node.conn[0].timeo.noop_out_timeout = 0 And you can set the replacement timer to a very long value: node.session.timeo.replacement_timeout …
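
Completing the preview's last line, the whole single-path (root-on-iSCSI) stanza for /etc/iscsi/iscsid.conf reads like this — 86400 is just one example of a "very long" value, not the thread's exact figure:

    node.conn[0].timeo.noop_out_interval = 0        # disable iSCSI pings
    node.conn[0].timeo.noop_out_timeout = 0
    node.session.timeo.replacement_timeout = 86400  # wait a very long time for reconnect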

Re: Ubuntu Root iSCSI with Debian IET plus ChannelBonding and Multipath

2010-06-07 Thread Mike Christie
On 06/07/2010 01:02 PM, Ian MacDonald wrote: Mike, This is an oldie; we finally found some time to review the config on this one box I described previously. In your previous thread, http://groups.google.com/group/open-iscsi/browse_thread/thread/5da7c08dd95211e6?pli=1, for non-multipath you …

Re: Ubuntu Root iSCSI with Debian IET plus ChannelBonding and Multipath

2010-06-07 Thread Ian MacDonald
Mike, This is an oldie; we finally found some time to review the config on this one box I described previously. In your previous thread, http://groups.google.com/group/open-iscsi/browse_thread/thread/5da7c08dd95211e6?pli=1, for non-multipath you suggested …

Re: open-iscsi 2.0.871 w/ multipath ping timeout on connect

2010-05-25 Thread Peter Lieven
…Attached SCSI disk May 19 17:15:06 172.21.55.20 multipathd: sde: add path (uevent) May 19 17:15:06 172.21.55.20 multipathd: iqn.2001-05.com.equallogic:0-8a0906-daa61b105-866000e7e774bea7-kvm-irle-test: load table [0 41963520 multipath 1 queue_if_no_path … May 19 17:15:06 172.21.55.20 multipathd: sde …

Re: open-iscsi 2.0.871 w/ multipath ping timeout on connect

2010-05-25 Thread Peter Lieven
…the weekend. I now use 4 iscsi connections to a target (2 via each multipath interface). Every connection is to the physical address of each interface of the EQL array. This makes connecting to the array a lot faster because there is no redirect. The load sharing is also quite nice at traffic …

Re: open-iscsi 2.0.871 w/ multipath ping timeout on connect

2010-05-24 Thread Mike Christie
0:0:0: [sde] Attached SCSI disk May 19 17:15:06 172.21.55.20 multipathd: sde: add path (uevent) May 19 17:15:06 172.21.55.20 multipathd: iqn.2001-05.com.equallogic:0-8a0906-daa61b105-866000e7e774bea7-kvm-irle-test: load table [0 41963520 multipath 1 queue_if_no_path … May 19 17:15:06 172.21.55.20 multipathd …

Re: open-iscsi 2.0.871 w/ multipath ping timeout on connect

2010-05-24 Thread Taylor
…userspace tools and modules from kernel > 2.6.33.3. > > I encounter a strange problem sometimes when I connect using multipath > to an equallogic > array. > > I have interfaces eth1 and eth3 in the iSCSI network, and if I connect > to a target on the equallogic > array it sometimes happens …

open-iscsi 2.0.871 w/ multipath ping timeout on connect

2010-05-24 Thread pli
Hi, I'm running open-iscsi 2.0.871 userspace tools and modules from kernel 2.6.33.3. I encounter a strange problem sometimes when I connect using multipath to an equallogic array. I have interfaces eth1 and eth3 in the iSCSI network, and if I connect to a target on the equallogic array …

Re: pvdisplay shows "Found duplicate PV" on multipath device

2010-05-18 Thread Ulrich Windl
On 3 May 2010 at 15:45, James Hammer wrote: > On 05/03/10 15:30, James Hammer wrote: > > On 05/03/10 13:39, Romeo Theriault wrote: > >> How can I resolve the "Found duplicate PV" warning/error? > >> It looks like this link should be able to help you. > >> http://kbase.redhat.com/faq/docs/DOC-2991 …

Re: Ubuntu Root iSCSI with Debian IET plus ChannelBonding and Multipath

2010-05-14 Thread Mike Christie
…make sure you are using the right iscsi settings at least. > multipath at the initiator (I also fear messing up my md devices). I > assume multipath is enabled by default even with a single NIC from what > I read here: > http://groups.google.com/group/open-iscsi/bro…

Re: Ubuntu Root iSCSI with Debian IET plus ChannelBonding and Multipath

2010-05-14 Thread Ian MacDonald
On Fri, 2010-05-14 at 11:46 -0500, Mike Christie wrote: > > We had some issues with the initiator losing connections to the > > target in this new Karmic rootfs-on-iSCSI setup. The problem is that > > after some time the filesystem switches to a read-only mount following > > I/O errors after …

Re: Ubuntu Root iSCSI with Debian IET plus ChannelBonding and Multipath

2010-05-14 Thread Mike Christie
On 05/12/2010 12:56 PM, Ian MacDonald wrote: We have the following new setup; Karmic with root on iSCSI (local boot partition since the NIC doesn't support native iSCSI). This was surprisingly easy following the vanilla Ubuntu iSCSI installer and a few post-install tweaks from /usr/share/doc/open-is…

Ubuntu Root iSCSI with Debian IET plus ChannelBonding and Multipath

2010-05-13 Thread Ian MacDonald
…the email. The problems only started after the install (upon real boot and login to the system). Since the installation (which is I/O intensive) worked flawlessly several times, my initial feeling was that multipath on the initiator and/or channel bonding on the target were not playing nice together …

Re: pvdisplay shows "Found duplicate PV" on multipath device

2010-05-04 Thread James Hammer
On 05/03/10 22:01, dave wrote: And delete the lvm cache before the scan, if you haven't already. On May 3, 8:33 pm, dave wrote: IIRC, LVM was wonky when I tried to fix something similar to this. Assuming you only want the partitions on sda, sdb, sdc and the multipath devices, try: …

Re: pvdisplay shows "Found duplicate PV" on multipath device

2010-05-03 Thread dave
And delete the lvm cache before the scan, if you haven't already. -- Dave On May 3, 8:33 pm, dave wrote: > IIRC, LVM was wonky when I tried to fix something similar to this. > Assuming you only want the partitions on sda, sdb, sdc and the > multipath devices, try: …

Re: pvdisplay shows "Found duplicate PV" on multipath device

2010-05-03 Thread dave
IIRC, LVM was wonky when I tried to fix something similar to this. Assuming you only want the partitions on sda, sdb, sdc and the multipath devices, try: filter = [ "a|/dev/dm-*|", "a|/dev/sda[0-9]|", "a|/dev/sdb[0-9]|", "a|/dev/sdc[0-9]|", "r|.*|" ] …
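
The same filter as it would sit in /etc/lvm/lvm.conf (note LVM filters are regular expressions, so dm-.* is the stricter spelling of the dm-* glob above), followed by the cache removal dave mentions in the follow-up — the cache path is the RHEL5-era default and varies by distro:

    filter = [ "a|/dev/dm-.*|", "a|/dev/sda[0-9]|", "a|/dev/sdb[0-9]|",
               "a|/dev/sdc[0-9]|", "r|.*|" ]

    rm /etc/lvm/cache/.cache    # drop the cached device list
    pvscan                      # rescan with the new filter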

Re: pvdisplay shows "Found duplicate PV" on multipath device

2010-05-03 Thread James Hammer
On 05/03/10 15:30, James Hammer wrote: On 05/03/10 13:39, Romeo Theriault wrote: How can I resolve the "Found duplicate PV" warning/error? It looks like this link should be able to help you. http://kbase.redhat.com/faq/docs/DOC-2991 Essentially you'll want to create a filter in your lvm.conf …

Re: pvdisplay shows "Found duplicate PV" on multipath device

2010-05-03 Thread James Hammer
On 05/03/10 13:39, Romeo Theriault wrote: How can I resolve the "Found duplicate PV" warning/error? It looks like this link should be able to help you. http://kbase.redhat.com/faq/docs/DOC-2991 Essentially you'll want to create a filter in your lvm.conf file so it only scans your multipath devices …

Re: pvdisplay shows "Found duplicate PV" on multipath device

2010-05-03 Thread Romeo Theriault
From your error you can see it is actually choosing one of the paths: # pvdisplay > Found duplicate PV tU6s0t1wfQNQufnOqtN1KGux5JftKJSi: using /dev/sdr not > /dev/sdq > You want it to use the multipath link. But from the linked article the error is only a warning which can be ignored if it's …

pvdisplay shows "Found duplicate PV" on multipath device

2010-05-03 Thread James Hammer
On a linux server I connect to 5 iscsi disks hosted by a SAN. I have multipathing set up. On each multipath device I used pvcreate to create a physical volume with no problems. When I create the 6th disk I get the following from pvdisplay: # pvcreate /dev/dm-18 Physical volume "/dev/…

Re: Reboot hangs on failing multipath devices

2010-03-31 Thread James Hammer
Mike Christie wrote: On 03/23/2010 10:13 AM, James Hammer wrote: Mike Christie wrote: On 03/22/2010 03:38 PM, James Hammer wrote: Every time I reboot my server it hangs on the multipath devices. The server is Debian based. I've had this problem with all kernels I've tried (2.6.18, 2.6.24, 2.6.32) …

RE: Reboot hangs on failing multipath devices

2010-03-26 Thread netz-haut - stephan seitz
This is a reported bug in the device-mapper on Debian. There's a patch available in Debian's bug tracker but, as far as I remember, it has been refused by the upstream developers. We're also running the open-iscsi/dm-multipath/lvm/clvm stack on virtualization hosts. Due to this behavior one big problem …

Re: Reboot hangs on failing multipath devices

2010-03-25 Thread Mike Christie
On 03/23/2010 10:13 AM, James Hammer wrote: Mike Christie wrote: On 03/22/2010 03:38 PM, James Hammer wrote: Every time I reboot my server it hangs on the multipath devices. The server is Debian based. I've had this problem with all kernels I've tried (2.6.18, 2.6.24, 2.6.32) …

Re: Reboot hangs on failing multipath devices

2010-03-23 Thread James Hammer
Mike Christie wrote: On 03/22/2010 03:38 PM, James Hammer wrote: Every time I reboot my server it hangs on the multipath devices. The server is Debian based. I've had this problem with all kernels I've tried (2.6.18, 2.6.24, 2.6.32). In /etc/multipath.conf, no_path_retry is set to queue …

Re: Reboot hangs on failing multipath devices

2010-03-22 Thread Mike Christie
On 03/22/2010 03:38 PM, James Hammer wrote: Every time I reboot my server it hangs on the multipath devices. The server is Debian based. I've had this problem with all kernels I've tried (2.6.18, 2.6.24, 2.6.32). In /etc/multipath.conf, no_path_retry is set to queue. Here are snippets …

Re: Reboot hangs on failing multipath devices

2010-03-22 Thread James Hammer
James Hammer wrote: Every time I reboot my server it hangs on the multipath devices. The server is Debian based. I've had this problem with all kernels I've tried (2.6.18, 2.6.24, 2.6.32). In /etc/multipath.conf, no_path_retry is set to queue. I found that if I set no_path_retry …

Reboot hangs on failing multipath devices

2010-03-22 Thread James Hammer
Every time I reboot my server it hangs on the multipath devices. The server is Debian based. I've had this problem with all kernels I've tried (2.6.18, 2.6.24, 2.6.32). In /etc/multipath.conf, no_path_retry is set to queue. Here are snippets from the reboot log: Stopping multipath …
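
The hang is what "no_path_retry queue" is designed to do: queue I/O indefinitely once no path is left, including at shutdown. A minimal /etc/multipath.conf sketch of the bounded alternative the follow-up hints at — the retry count here is illustrative:

    defaults {
        no_path_retry 5    # retry for 5 path-checker intervals, then fail outstanding I/O
    }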

Re: Failover time of iSCSI multipath devices.

2010-03-16 Thread bennyturns
Thanks Mike, that explains it :) On Mar 16, 5:27 pm, Mike Christie wrote: > On 03/16/2010 04:02 PM, bennyturns wrote: > > I am trying to work out a formula for the total failover time of my > > multipathed iSCSI device; so far I have: > > failover time = nop timeout + nop interval + replacement_timeout …

Re: Failover time of iSCSI multipath devices.

2010-03-16 Thread Mike Christie
On 03/16/2010 04:02 PM, bennyturns wrote: I am trying to work out a formula for the total failover time of my multipathed iSCSI device; so far I have: failover time = nop timeout + nop interval + replacement_timeout seconds + scsi block device timeout (/sys/block/sdX/device/timeout). /sys/block/sdX/devi…

Re: Failover time of iSCSI multipath devices.

2010-03-16 Thread bennyturns
I am trying to work out a formula for the total failover time of my multipathed iSCSI device; so far I have: failover time = nop timeout + nop interval + replacement_timeout seconds + scsi block device timeout (/sys/block/sdX/device/timeout). Is there anything else that I am missing? -b On Mar 15, 4:53 …
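
As a worked example: with settings like noop_out_interval = 5, noop_out_timeout = 5 and replacement_timeout = 15, a dead path costs roughly 5 + 5 + 15 = 25 seconds before I/O is failed up to dm-multipath — which lines up with the 25-30 second figure reported elsewhere in this thread.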

Re: Failover time of iSCSI multipath devices.

2010-03-16 Thread Mike Christie
On 03/16/2010 04:50 AM, Alex Zeffertt wrote: Mike Christie wrote: On 03/15/2010 05:56 AM, Alex Zeffertt wrote: The bugzilla ticket requests a merge of two git commits, but neither of those contains the libiscsi.c change that addresses bug #2. Was this a mistake, or did you deliberately omit that …

Re: Failover time of iSCSI multipath devices.

2010-03-16 Thread Alex Zeffertt
Mike Christie wrote: On 03/15/2010 05:56 AM, Alex Zeffertt wrote: The bugzilla ticket requests a merge of two git commits, but neither of those contains the libiscsi.c change that addresses bug #2. Was this a mistake, or did you deliberately omit that part of your speed-up-conn-fail-take3.patch when …

Re: Failover time of iSCSI multipath devices.

2010-03-15 Thread Mike Christie
On 03/15/2010 05:56 AM, Alex Zeffertt wrote: The bugzilla ticket requests a merge of two git commits, but neither of those contains the libiscsi.c change that addresses bug #2. Was this a mistake, or did you deliberately omit that part of your speed-up-conn-fail-take3.patch when you raised the ticket …

Re: Failover time of iSCSI multipath devices.

2010-03-15 Thread Alex Zeffertt
Mike Christie wrote: On 03/07/2010 07:46 AM, Pasi Kärkkäinen wrote: On Fri, Mar 05, 2010 at 05:07:53AM -0600, Mike Christie wrote: On 03/01/2010 08:53 PM, Mike Christie wrote: On 03/01/2010 12:06 PM, bet wrote: 1. Based on my timeouts I would think that my session would time out. Yes. It should …

Re: Failover time of iSCSI multipath devices.

2010-03-08 Thread Pasi Kärkkäinen
On Mon, Mar 08, 2010 at 02:07:14PM -0600, Mike Christie wrote: > On 03/07/2010 07:46 AM, Pasi Kärkkäinen wrote: >> On Fri, Mar 05, 2010 at 05:07:53AM -0600, Mike Christie wrote: >>> On 03/01/2010 08:53 PM, Mike Christie wrote: On 03/01/2010 12:06 PM, bet wrote: > 1. Based on my timeouts I

Re: Failover time of iSCSI multipath devices.

2010-03-08 Thread Mike Christie
On 03/07/2010 07:46 AM, Pasi Kärkkäinen wrote: On Fri, Mar 05, 2010 at 05:07:53AM -0600, Mike Christie wrote: On 03/01/2010 08:53 PM, Mike Christie wrote: On 03/01/2010 12:06 PM, bet wrote: 1. Based on my timeouts I would think that my session would time out. Yes. It should timeout about 15 secs …

Re: Failover time of iSCSI multipath devices.

2010-03-07 Thread Pasi Kärkkäinen
On Fri, Mar 05, 2010 at 05:07:53AM -0600, Mike Christie wrote: > On 03/01/2010 08:53 PM, Mike Christie wrote: >> On 03/01/2010 12:06 PM, bet wrote: >>> 1. Based on my timeouts I would think that my session would time out >> >> Yes. It should timeout about 15 secs after you see >> > Mar 1 07:14:27

Re: Failover time of iSCSI multipath devices.

2010-03-05 Thread Mike Christie
On 03/01/2010 08:53 PM, Mike Christie wrote: On 03/01/2010 12:06 PM, bet wrote: 1. Based on my timeouts I would think that my session would time out. Yes. It should timeout about 15 secs after you see > Mar 1 07:14:27 bentCluster-1 kernel: connection4:0: ping timeout of > 5 secs expired, recv timeout 5 …

Re: Failover time of iSCSI multipath devices.

2010-03-02 Thread bennyturns
I can track its progress? If not, should I open a BZ? 3. In a best-case scenario what kind of failover time can I expect with multipath and iSCSI? I see about 25-30 seconds; is this accurate? I saw 3-second failover times using bonded NICs instead of dm-multipath; is there any specific reason to use …

Re: Failover time of iSCSI multipath devices.

2010-03-01 Thread Or Gerlitz
Mike Christie wrote: > You might be hitting a bug where the network layer gets stuck trying to > send data. I attached a patch that should fix the problem. Doing some multipath testing with iscsi/tcp I didn't hit this bug; any hint on what it takes to have this come into play? I did …

Re: Failover time of iSCSI multipath devices.

2010-03-01 Thread guy keren
Mike Christie wrote: On 03/01/2010 12:06 PM, bet wrote: 1. Based on my timeouts I would think that my session would time out. Yes. It should timeout about 15 secs after you see > Mar 1 07:14:27 bentCluster-1 kernel: connection4:0: ping timeout of > 5 secs expired, recv timeout 5, last rx 4884304, last ping 488930…

Re: Failover time of iSCSI multipath devices.

2010-03-01 Thread Mike Christie
On 03/01/2010 12:06 PM, bet wrote: 1. Based on my timeouts I would think that my session would time out. Yes. It should timeout about 15 secs after you see > Mar 1 07:14:27 bentCluster-1 kernel: connection4:0: ping timeout of > 5 secs expired, recv timeout 5, last rx 4884304, last ping 48893…

Failover time of iSCSI multipath devices.

2010-03-01 Thread bet
Hi all. I am going through some testing of my multipathed iSCSI devices and I am seeing some longer-than-expected delays. I am running the latest RHEL 5.4 packages as of this morning. I am seeing the failure of the iSCSI sessions take about 67 seconds. After the iSCSI failure the multipath …

Re: Need help with multipath and iscsi in CentOS 5.4

2010-01-08 Thread Kyle Schmitt
Using a single path (without MPIO) as a baseline: with bonding I saw, on average, 99-100% of the speed (worst case 78%) of a single path. With MPIO (2 NICs) I saw, on average, 82% of the speed (worst case 66%) of the single path. With MPIO with one NIC (ifconfig downed the second), I saw, on average …

Re: Need help with multipath and iscsi in CentOS 5.4

2010-01-06 Thread Mike Christie
On 12/30/2009 11:48 AM, Kyle Schmitt wrote: On Wed, Dec 9, 2009 at 8:52 PM, Mike Christie wrote: So far single connections work: if I set up the box to use one NIC, I get one connection and can use it just fine. Could you send the /var/log/messages for when you run the login command so I can see …

Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-31 Thread Kyle Schmitt
Note, the EMC-specific bits of that multipath.conf were just copied from boxes that use FC to the SAN, and use MPIO successfully. …

Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-31 Thread Kyle Schmitt
…multiple concurrent requests (which you say it doesn't; I'll believe you) OR the SAN was under massively different load between the test runs (not too likely, but possible; only one other LUN is in use). > That seems a bit weird. That's what I thought, otherwise I would have just …

Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-31 Thread Pasi Kärkkäinen
devices that connect to iscsi > show traffic. > > The weird thing is that, aside from writing, bonding was measurably > faster than MPIO. Does that seem right? > That seems a bit weird. How did you configure multipath? Please paste your multipath settings. -- Pasi > > Here …

Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-30 Thread Kyle Schmitt
…Here's the dmesg, if that lends any clues. Thanks for any input! --Kyle 156 lines of dmesg follow: cxgb3i: tag itt 0x1fff, 13 bits, age 0xf, 4 bits. iscsi: registered transport (cxgb3i) device-mapper: table: 253:6: multipath: error getting device device-mapper: ioctl: error adding …

Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-10 Thread Kyle Schmitt
On Wed, Dec 9, 2009 at 8:52 PM, Mike Christie wrote: > Kyle Schmitt wrote: > What do you mean by works? Can you dd it, or fdisk it? It works by most any measure. sdc: I can dd it, fdisk it, mkfs.ext3 it, run iozone, etc. In contrast sdb, sdd and sde can't be fdisked, dd'ed, or even less -f'ed. …

RE: Need help with multipath and iscsi in CentOS 5.4

2009-12-10 Thread berthiaume_wayne
…open-iscsi@googlegroups.com Subject: Re: Need help with multipath and iscsi in CentOS 5.4 Kyle Schmitt wrote: > I'm cross-posting here from linux-iscsi-users since I've seen no… linux-scsi-users would be for CentOS 4. CentOS 5 uses a different initiator, but you are in the right place finally …

Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-09 Thread Mike Christie
iscsiadm -m node -T -l > > I could see I was connected to all four via > iscsiadm -m session > > At this point, I thought I was set; I had four new devices > /dev/sdb /dev/sdc /dev/sdd /dev/sde > > Ignoring multipath at this point for now, here's where the problem …

Need help with multipath and iscsi in CentOS 5.4

2009-12-08 Thread Kyle Schmitt
…good. I logged all four of them in with: iscsiadm -m node -T -l I could see I was connected to all four via iscsiadm -m session At this point, I thought I was set; I had four new devices /dev/sdb /dev/sdc /dev/sdd /dev/sde Ignoring multipath at this point for now, here's where the problem …
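
With the elided target names filled in by a hypothetical IQN and portal, the login-and-verify sequence is:

    # log in to a discovered node record (target name and portal are placeholders)
    iscsiadm -m node -T iqn.1992-04.com.example:lun0 -p 10.0.0.1:3260 -l
    # confirm the sessions: one "tcp: [sid] ..." line per logged-in path
    iscsiadm -m session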

RHEL 4 Update 7 + iSCSI + multipath

2009-12-07 Thread bilimorian
Hello, This is my first post on this group so please excuse my ignorance. I am using the combination of RHEL, iSCSI and multipath to a NetApp SAN. I attach to 9 LUNs. On one path I can connect to all LUNs, but on the second path I am unable to see 3 of the 9 LUNs. I have tried 'googling' …

SLES yaboot behaviour with multipath iSCSI installs

2009-07-27 Thread malahal
Did anyone install SLES11 on a multipath iSCSI device on a POWER-based host? I installed on a single path to begin with. It booted fine with the /etc/yaboot.conf file. I created another path with the "iscsiadm" command (set node startup to onboot and configured multipath) and ran "mkinitrd" …

Re: SLES11 install on multipath iSCSI

2009-06-17 Thread Hannes Reinecke
…chkconfig > boot.open-iscsi on && chkconfig open-iscsi on' and then ran 'mkinitrd' to > create a new initrd with multipath support at this point. > > Is this a recommended procedure? > > With that, the system boots fine but the shutdown/reboot hangs. Upon …

SLES11 install on multipath iSCSI

2009-06-15 Thread malahal
…chkconfig open-iscsi on' and then ran 'mkinitrd' to create a new initrd with multipath support at this point. Is this a recommended procedure? With that, the system boots fine but the shutdown/reboot hangs. Upon closer inspection, I found that /etc/init.d/boot.open-iscsi sets node.conn[0].startup to …

Re: Building a multipath setup with DRBD in dual-Primary

2009-05-30 Thread PCextreme B.V. - Wido den Hollander
Hi Bart, At the moment this is only a proof of concept which seems to be working fine. I had some thoughts about it: * The iSCSI target should not run in Write-Back mode * DRBD should run in synchronous mode (protocol C) * I should use the failover policy with dm-multipath instead of round-robin …
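
The failover-instead-of-round-robin point maps to a single multipath.conf knob; a minimal sketch, with everything else left at defaults:

    defaults {
        path_grouping_policy failover    # use one active path; the rest stay standby
    }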

Re: Building a multipath setup with DRBD in dual-Primary

2009-05-29 Thread Bart Van Assche
On Fri, May 29, 2009 at 5:21 PM, Wido wrote: > Last night I had the idea of building a multipath iSCSI setup with DRBD > in Primary/Primary. > [ ... ] > Is my setup a possibility, or should I stay with the old regular setup of > a Primary/Standby with heartbeat? I strongly recommend …

Re: Building a multipath setup with DRBD in dual-Primary

2009-05-29 Thread PCextreme B.V. - Wido den Hollander
…On Fri, 2009-05-29 at 11:13 -0500, Mike Christie wrote: > Wido wrote: > > Hello, > > > > Last night I had the idea of building a multipath iSCSI setup with > > DRBD in Primary/Primary …
