I'm sorry I didn't see this earlier. I don't see any replies. I can't quite
parse your question, though. What are you asking about?
On Thursday, June 16, 2022 at 11:57:20 PM UTC-7 zhuca...@gmail.com wrote:
> How can we reproduce the error with "Multiply-claimed blocks"?
How can we reproduce the error with "Multiply-claimed blocks"?
On Thu, Apr 12, 2018 at 09:43:34AM -0700, donald...@gmail.com wrote:
> Hi Chris.
>
> Have you confirmed this is indeed data corruption that can be caused
> by the open-iscsi implementation?
>
> I am looking at using multi-path for availability purposes, but if there
> is a risk of data corruption it is a
Hi Chris.
Have you confirmed this is indeed data corruption that can be caused by
the open-iscsi implementation?
I am looking at using multi-path for availability purposes, but if there is a risk
of data corruption it is a no-go. If you have got to the bottom of this I would
really appreciate you sharing it.
When using multipath with iscsi_tcp, there is a possibility of
data corruption when requests are failed over between paths during a short
network interruption.
The situation looks like this.
1. A write is being sent on pathA, and the data is in the TCP transmit
queue.
2. The network
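One rough way to try to provoke the scenario described above and then check for
"Multiply-claimed blocks" (a sketch only, not a verified reproducer; the portal
address 192.168.1.10, the mount point and the map name mpatha are placeholders):

# Assumes two iSCSI sessions to the same LUN, aggregated by dm-multipath,
# with an ext3/ext4 filesystem on /dev/mapper/mpatha mounted at /mnt/test.
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=4096 &    # write-heavy load

# Briefly break path A so outstanding requests fail over to path B.
iptables -A OUTPUT -d 192.168.1.10 -p tcp --dport 3260 -j DROP
sleep 20
iptables -D OUTPUT -d 192.168.1.10 -p tcp --dport 3260 -j DROP

wait
umount /mnt/test
e2fsck -fn /dev/mapper/mpatha    # look for "Multiply-claimed block(s)" in the output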
Thanks Mike.
I have a few follow-up questions, please. Why is the default replacement
timeout so high?
If I understand correctly, in a multipath environment it is perfectly
alright to set it to 0 without
any side effect.
The other question is: how early can iSCSI detect a session error, be it
with
On 06/07/2013 12:55 AM, Bubuli Nayak wrote:
> Hello experts,
>
> I have learnt from Mike and others' comments that multipath failover would
> be driven by nop timeout + nop interval + replacement_timeout seconds.
>
> My question is what is the impact if I set replacement_timeout
Hello experts,
I have learnt from Mike and others' comments that multipath failover would be
driven by nop timeout + nop interval + replacement_timeout seconds.
My question is: what is the impact if I set replacement_timeout to 0? I know if
the NOPOUT interval is low, then more frequently the iSCSI initiator
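For reference, the timeouts in that formula come from iscsid.conf (or can be
changed per node with iscsiadm). A commonly used starting point when dm-multipath
handles the path failures, so that a dead path produces SCSI errors quickly instead
of queueing for the default 120 seconds, looks roughly like this (the exact values
are illustrative, not a recommendation from this thread):

node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.session.timeo.replacement_timeout = 15

Without multipath you generally want the opposite (pings disabled and a very long
replacement_timeout), as described further down in this list.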
On 03/09/2013 05:58 AM, Guillaume wrote:
>>>> Hello,
>>>>
>>>> I have a virtual tape library and an iSCSI SAN. All have multiple
>>>> ethernet interfaces. This will result in multiple sessions to the
>>>> targets. So I wonder if I must use dm-multipath or not? Does the current
in multiple sessions to the
targets. So I wonder if I must use dm-multipath or not? Does the current
Does the device show up as a tape device or a block device?
The VTL device emulates robotics, LTO cartridges and LTO5 tape drives.
The SAN is a block device.
Are you using SCST or TGT or LIO
interfaces. This will result in multiple sessions to the
>>> targets. So I wonder if I must use dm-multipath or not? Does the current
>>
>> Does the device show up as a tape device or a block device?
>>
>
> The VTL device emulates robotics, LTO cartridges and LT
gets. So I wonder if I must use dm-multipath or not? Does the current
>
> Does the device show up as a tape device or a block device?
>
The VTL device emulates robotics, LTO cartridges and LTO5 tape drives.
The SAN is a block device.
> > iscsi layer handle the multiple paths to an iqn or
On 03/09/2013 05:58 AM, Guillaume wrote:
> Hello,
>
> I have a virtual tape library and an iSCSI SAN. All have multiple
> ethernet interfaces. This will result in multiple sessions to the
> targets. So I wonder if I must use dm-multipath or not? Does the current
Does the devi
virtual tape library and an iSCSI SAN. All have multiple
> > ethernet interfaces. This will result in multiple sessions to the
> > targets. So I wonder if I must use dm-multipath or not? Does the
>
> Typically I only use multipath if I have one initiator talking to one
interfaces. This will result in multiple sessions to the targets. So I
> wonder if I must use dm-multipath or not? Does the current iscsi layer
> handle the multiple paths to an iqn or not?
>
> Another question about the output of "iscsiadm -m session" : the lines of
> output
I have a virtual tape library and an iSCSI SAN. All have multiple
ethernet interfaces. This will result in multiple sessions to the
targets. So I wonder if I must use dm-multipath or not? Does the
Typically I only use multipath if I have one initiator talking to one
target. Otherwise I just
Hello,
I have a virtual tape library and an iSCSI SAN. All have multiple ethernet
interfaces. This will result in multiple sessions to the targets. So I
wonder if I must use dm-multipath or not? Does the current iscsi layer
handle the multiple paths to an iqn or not?
Another question about
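To make the division of labour concrete: the iscsi layer just creates one session
(and one /dev/sdX) per interface/portal pair, and it is dm-multipath that ties those
back together into a single block device; dm-multipath only manages block devices,
not tape drives. A minimal sketch, assuming two initiator-side NICs eth1 and eth2
and a portal at 192.168.1.10 (all names here are hypothetical):

iscsiadm -m iface -I iface-eth1 --op new
iscsiadm -m iface -I iface-eth1 --op update -n iface.net_ifacename -v eth1
iscsiadm -m iface -I iface-eth2 --op new
iscsiadm -m iface -I iface-eth2 --op update -n iface.net_ifacename -v eth2
iscsiadm -m discovery -t st -p 192.168.1.10 -I iface-eth1 -I iface-eth2
iscsiadm -m node --login
multipath -ll    # the two sdX paths should show up under one mpath device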
On 06/14/2012 06:22 AM, Jiří Červenka wrote:
> Hi,
> on Ubuntu server 12.04 (3.2.0-24-generic) I use open-iscsi (2.0-871),
> multipath-tools (v0.4.9) and ocfs2 (1.6.3-4ubuntu1) to access shared
> storage HP P2000 G3 iscsi. Even a short network connectivity loss is
> causing immedia
Hi,
on Ubuntu server 12.04 (3.2.0-24-generic) I use open-iscsi (2.0-871),
multipath-tools (v0.4.9) and ocfs2 (1.6.3-4ubuntu1) to access shared
storage HP P2000 G3 iscsi. Even a short network connectivity loss causes an
immediate server crash and reboot. In syslog I cannot find any clue about what
On 03/26/2012 03:32 AM, Rene wrote:
> Mike Christie writes:
>> I think you need to rescan the devices at the scsi layer level (like
>> doing an echo 1 > /sys/block/sdX/device/rescan), then run some multipath
>> tool command, then run some FS and LVM commands if needed.
Mike Christie writes:
> I think you need to rescan the devices at the scsi layer level (like
> doing an echo 1 > /sys/block/sdX/device/rescan), then run some multipath
> tool command, then run some FS and LVM commands if needed.
Hi,
I'm having a similar problem and stumbled over
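For the common case of a grown LUN behind dm-multipath, the sequence being described
might look roughly like this (device and map names are placeholders; a sketch only,
not taken from this thread):

echo 1 > /sys/block/sdb/device/rescan     # repeat for every path of the LUN
echo 1 > /sys/block/sdc/device/rescan
multipathd -k"resize map mpatha"          # let multipath pick up the new size
pvresize /dev/mapper/mpatha               # if the device is an LVM PV
lvextend -l +100%FREE /dev/vg_data/lv_data
resize2fs /dev/vg_data/lv_data            # finally grow the filesystem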
Summaries:
- Channel bonding (802.3ad) does not really help to get more throughput, even in
a multi-server setup. It is only useful for failover.
- More than 2 NICs on the initiator side are not helpful; multipath context
switching and IRQ consumption are expensive.
- 9k Ethernet frames give (depending on workload)
Hello all,
we are about to configure a new storage system that utilizes the Nexenta OS
with sparsely allocated ZVOLs. We wish to present 4TB of storage to a Linux
system that has four NICs available to it. We are unsure whether to present one
large ZVOL or four smaller ones to maximize the use
- Original Message -
> On 05/20/2011 10:45 AM, --[ UxBoD ]-- wrote:
> > - Original Message -
> >> On 05/20/2011 10:29 AM, --[ UxBoD ]-- wrote:
> >>> Not sure where this should go so cross posting:
> >>>
> >>> CentOS 5.6 with k
On 05/20/2011 10:45 AM, --[ UxBoD ]-- wrote:
- Original Message -
On 05/20/2011 10:29 AM, --[ UxBoD ]-- wrote:
Not sure where this should go so cross posting:
CentOS 5.6 with kernel 2.6.37.6 and
device-mapper-multipath-0.4.7-42.el5_6.2.
When I run multipath -v9 -d I get:
I think
- Original Message -
> On 05/20/2011 10:29 AM, --[ UxBoD ]-- wrote:
> > Not sure where this should go so cross posting:
> >
> > CentOS 5.6 with kernel 2.6.37.6 and
> > device-mapper-multipath-0.4.7-42.el5_6.2.
> >
> > When I run multipath -v9 -d I ge
On 05/20/2011 10:29 AM, --[ UxBoD ]-- wrote:
Not sure where this should go so cross posting:
CentOS 5.6 with kernel 2.6.37.6 and device-mapper-multipath-0.4.7-42.el5_6.2.
When I run multipath -v9 -d I get:
I think you need to post to the dm devel list. I do not see any iscsi
issues in
Not sure where this should go so cross posting:
CentOS 5.6 with kernel 2.6.37.6 and device-mapper-multipath-0.4.7-42.el5_6.2.
When I run multipath -v9 -d I get:
sdg: not found in pathvec
sdg: mask = 0x1f
Segmentation fault
If I strace the command I see:
stat("/sys/block/sdg/d
On 05/12/2011 01:30 AM, Ulrich Windl wrote:
Hi!
This is not exactly an open-iscsi question, but it is tightly related:
On a SAN using FibreChannel I had a 4-way multipath device. The basic
configuration (without aliases for devices) is:
devices {
  device {
    vendor "
Hi!
This is not exactly an open-iscsi question, but it is tightly related:
On a SAN using FibreChannel I had a 4-way multipath device. The basic
configuration (without aliases for devices) is:
devices {
  device {
    vendor "HP"
    pro
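For context, a complete device entry of that shape usually carries a handful of path
and failback settings; the values below are illustrative guesses for an HP array, not
the poster's actual configuration:

devices {
  device {
    vendor                "HP"
    product               "HSV2.*"
    path_grouping_policy  group_by_prio
    path_checker          tur
    failback              immediate
    no_path_retry         12
  }
}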
> echo 1 > /sys/block/sdXYZ/device/delete
Works quite nicely.
First I blacklist devices in multipath based on wwn
blacklist {
wwid 3600a0b80005bd40802dd4ce20897
wwid 3600a0b80005bd628035b4ce2107e
}
Then actual deletion rules executed by udev:
# more
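The actual rules were cut off above; a rule in that spirit for a RHEL 5-era udev might
look roughly like the following one-liner (the rule file name and the scsi_id invocation
are assumptions, not the poster's real rules):

# /etc/udev/rules.d/39-delete-blacklisted.rules (hypothetical)
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd*[!0-9]", PROGRAM=="/sbin/scsi_id -g -u -s /block/%k", RESULT=="3600a0b80005bd40802dd4ce20897", RUN+="/bin/sh -c 'echo 1 > /sys/block/%k/device/delete'"

i.e. when a whole disk whose wwid matches one of the blacklisted values appears, it is
immediately deleted again at the SCSI layer.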
On 12/03/2010 06:20 AM, Arkadiusz Miskiewicz wrote:
Now more complicated. Can I blacklist specific devices? My array is limited only
to 4 initiator-storage poll mappings (which are used to allow access to logical
disk X only from initiator A). Unfortunately I have 5 hosts.
So I have 5 logical
On Tue, Nov 30, 2010 at 12:35 AM, Mike Christie wrote:
> On 11/27/2010 06:23 PM, Arkadiusz Miskiewicz wrote:
>> How I can blacklist some paths and still use automatic node.startup?
>>
>
> You can set specific paths to not get logged into automatically by doing
>
> iscsiadm -m node -T target -p ip
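The rest of that command was cut off; the usual form for marking a single node record
manual, so it is skipped by automatic login while everything else keeps
node.startup = automatic, is along these lines (target and portal are placeholders):

iscsiadm -m node -T <target_iqn> -p <ip:port> -o update -n node.startup -v manual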
On 11/27/2010 06:23 PM, Arkadiusz Miskiewicz wrote:
Hello,
I'm trying to use open-iscsi with a DS3300 array. The DS has two controllers, each
with 2 ethernet ports.
Unfortunately I use some SATA disks that aren't capable of being connected to
two controllers (only one path on the SATA connector). This caus
Hello,
I'm trying to use open-iscsi with a DS3300 array. The DS has two controllers, each
with 2 ethernet ports.
Unfortunately I use some SATA disks that aren't capable of being connected to
two controllers (only one path on the SATA connector). This causes disks to be
accessible only through one controll
On Wed, 2010-06-09 at 00:04 -0500, Mike Christie wrote:
> 0 for replacement_timeout means that we wait for re-establishment
> forever. I recently added it since trying to guess or tell people what
> is a sufficiently long time was a pain.
>
Great, this explains the only difference in you
On 06/08/2010 11:09 AM, Ian MacDonald wrote:
Thanks Mike,
I am a bit confused as to where I apply these; Initially I assumed in
the iscsid.conf on the initiator. However it seems that these can also
apply to the target configuration in ietd.conf (and apply to all
initiators).
I do not think yo
SCSI layer. Basically you want the opposite of when using
dm-multipath.
For this setup, you can turn off iSCSI pings by setting:
node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0
And you can set the replacement_timeout to a very long value:
node.session.timeo.replaceme
On 06/07/2010 01:02 PM, Ian MacDonald wrote:
Mike,
This is an oldie; We finally found some time to review the config on
this one box I described previously
In your previous thread,
http://groups.google.com/group/open-iscsi/browse_thread/thread/5da7c08dd95211e6?pli=1
for non-multipath you
Mike,
This is an oldie; We finally found some time to review the config on
this one box I described previously
In your previous thread,
http://groups.google.com/group/open-iscsi/browse_thread/thread/5da7c08dd95211e6?pli=1
for non-multipath you suggested
Attached SCSI disk
May 19 17:15:06 172.21.55.20 multipathd: sde: add path (uevent)
May 19 17:15:06 172.21.55.20 multipathd: iqn.2001-05.com.equallogic:
0-8a0906-daa61b105-866000e7e774bea7-kvm-irle-test: load table [0
41963520 multipath 1 queue_if_no_pa
May 19 17:15:06 172.21.55.20 multipathd: sde
g the weekend.
I now use 4 iscsi connections to a target (2 via each multipath interface).
Every connection is to the physical address of each interface of the EQL
array. This makes connecting to the array a lot faster because
there is no redirect.
The load sharing is also quite nice at traff
ls and modules from kernel
> 2.6.33.3.
>
> I encounter a strange problem sometimes when I connect using multipath
> to an equallogic
> array.
>
> I have interfaces eth1 and eth3 in the ISCSI network and if I connect
> to a target on the equallogic
> array it sometimes happens
Hi,
I'm running open-iscsi 2.0.871 userspace tools and modules from kernel
2.6.33.3.
I encounter a strange problem sometimes when I connect using multipath
to an equallogic
array.
I have interfaces eth1 and eth3 in the ISCSI network and if I connect
to a target on the equallogic
arr
On 3 May 2010 at 15:45, James Hammer wrote:
> On 05/03/10 15:30, James Hammer wrote:
> > On 05/03/10 13:39, Romeo Theriault wrote:
> >>
> >>
> >> How can I resolve the "Found duplicate PV" warning/error?
> >>
> >>
> >> It looks like this link should be able to help you.
> >>
> >> http://kbase.
make sure you are using the right iscsi
settings at least.
> multipath at the initiator (I also fear messing up my md devices). I
> assume multipath is enabled by default even with a single NIC from what
> I read here:
>
http://groups.google.com/group/open-iscsi/bro
On Fri, 2010-05-14 at 11:46 -0500, Mike Christie wrote:
> > We had some issues with the initiator losing connections with the
> > target in this new Karmic rootfs on iSCSI setup. The problem is
> that
> > after some time the filesystem switches to a read-only mount
> following
> > I/O errors afte
On 05/12/2010 12:56 PM, Ian MacDonald wrote:
We have the following new setup; Karmic with root on iSCSI (local boot
partition since the NIC doesn't support native iSCSI). This was
surprisingly easy following vanilla iSCSI Ubuntu installer and a few
post-install tweaks from /usr/share/doc/open-is
the email.
The problems only started after the install (upon real boot and login to
the system).
Since the installation (which is I/O intensive) worked flawlessly
several times, my initial feeling was that multipath on the initiator
and/or channel bonding on the target were not playing nice together,
fi
On 05/03/10 22:01, dave wrote:
And delete the lvm cache before the scan, if you haven't already.
On May 3, 8:33 pm, dave wrote:
IIRC, LVM was wonky when I tried to fix something similar to this.
Assuming you only want the partitions on sda, sdb, sdc and the
multipath devices, try:
f
And delete the lvm cache before the scan, if you haven't already.
--
Dave
On May 3, 8:33 pm, dave wrote:
> IIRC, LVM was wonky when I tried to fix something similar to this.
> Assuming you only want the partitions on sda, sdb, sdc and the
> multipath devices, try:
>
> f
IIRC, LVM was wonky when I tried to fix something similar to this.
Assuming you only want the partitions on sda, sdb, sdc and the
multipath devices, try:
filter = [ "a|/dev/dm-*|", "a|/dev/sda[0-9]|", "a|/dev/sdb[0-9]|", "a|/
dev/sdc[0-9]|", "r|.*|
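Combined with the "delete the lvm cache before the scan" advice above, the whole fix
might look something like this (the cache path is the RHEL/CentOS 5 default and is an
assumption here):

# after editing the filter in /etc/lvm/lvm.conf
rm -f /etc/lvm/cache/.cache
pvscan
vgscan
lvs    # verify the PVs now resolve to the dm-*/multipath devices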
On 05/03/10 15:30, James Hammer wrote:
On 05/03/10 13:39, Romeo Theriault wrote:
How can I resolve the "Found duplicate PV" warning/error?
It looks like this link should be able to help you.
http://kbase.redhat.com/faq/docs/DOC-2991
Essentially you'll want to create a filter in your lv
On 05/03/10 13:39, Romeo Theriault wrote:
How can I resolve the "Found duplicate PV" warning/error?
It looks like this link should be able to help you.
http://kbase.redhat.com/faq/docs/DOC-2991
Essentially you'll want to create a filter in your lvm.conf file so it
only scans your multi
om your error you see it is actually
choosing one of the paths:
# pvdisplay
> Found duplicate PV tU6s0t1wfQNQufnOqtN1KGux5JftKJSi: using /dev/sdr not
> /dev/sdq
>
You want it to use the multipath link. But from the linked article the error
is only a warning which can be ignored if it
On a Linux server I connect to 5 iscsi disks hosted by a SAN. I have
multipathing set up. On each multipath device I used pvcreate to create
a physical volume with no problems. When I create the 6th disk I get
the following from pvdisplay:
# pvcreate /dev/dm-18
Physical volume "/dev/
Mike Christie wrote:
On 03/23/2010 10:13 AM, James Hammer wrote:
Mike Christie wrote:
On 03/22/2010 03:38 PM, James Hammer wrote:
Every time I reboot my server it hangs on the multipath devices.
The server is Debian based. I've had this problem with all kernels
I've
tried (2.6.
This is a reported bug in device-mapper on Debian.
There's a patch available in Debian's bug tracker, but as far as I remember,
it has been rejected by the upstream developers.
We're also running an open-iscsi/dm-multipath/lvm/clvm stack on virtualization
hosts. Due to this behavior one big p
On 03/23/2010 10:13 AM, James Hammer wrote:
Mike Christie wrote:
On 03/22/2010 03:38 PM, James Hammer wrote:
Every time I reboot my server it hangs on the multipath devices.
The server is Debian based. I've had this problem with all kernels I've
tried (2.6.18, 2.6.24, 2.6.32
Mike Christie wrote:
On 03/22/2010 03:38 PM, James Hammer wrote:
Every time I reboot my server it hangs on the multipath devices.
The server is Debian based. I've had this problem with all kernels I've
tried (2.6.18, 2.6.24, 2.6.32). In /etc/multipath.conf, no_path_retry is
set to q
On 03/22/2010 03:38 PM, James Hammer wrote:
Every time I reboot my server it hangs on the multipath devices.
The server is Debian based. I've had this problem with all kernels I've
tried (2.6.18, 2.6.24, 2.6.32). In /etc/multipath.conf, no_path_retry is
set to queue
Here are snippet
James Hammer wrote:
Every time I reboot my server it hangs on the multipath devices.
The server is Debian based. I've had this problem with all kernels
I've tried (2.6.18, 2.6.24, 2.6.32). In /etc/multipath.conf,
no_path_retry is set to queue
I found that if I set no_path_re
Every time I reboot my server it hangs on the multipath devices.
The server is Debian based. I've had this problem with all kernels I've
tried (2.6.18, 2.6.24, 2.6.32). In /etc/multipath.conf, no_path_retry
is set to queue
Here are snippets from the reboot log:
Stopping multip
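The usual ways around that kind of reboot hang (sketches only; the map name is a
placeholder, and the right choice depends on whether I/O errors are acceptable when
all paths are down) are either to cap the queueing in /etc/multipath.conf or to turn
queueing off on the maps before networking/iscsi is stopped:

# /etc/multipath.conf: retry for a while (roughly 12 checks) instead of queueing forever
defaults {
    no_path_retry 12
}

# or, from a shutdown script that runs before open-iscsi is stopped:
dmsetup message mpatha 0 "fail_if_no_path"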
Thks Mike, that explains it :)
On Mar 16, 5:27 pm, Mike Christie wrote:
> On 03/16/2010 04:02 PM, bennyturns wrote:
>
> > I am trying to work out a formula for the total failover time of my
> > multipathed iSCSI device; so far I have:
>
> > failover time = nop timeout + nop interval + replacement_timeout
On 03/16/2010 04:02 PM, bennyturns wrote:
I am trying to work out a formula for the total failover time of my
multipathed iSCSI device; so far I have:
failover time = nop timeout + nop interval + replacement_timeout
seconds + scsi block device timeout (/sys/block/sdX/device/timeout)
/sys/block/sdX/devi
I am trying to work out a formula for the total failover time of my
multipathed iSCSI device; so far I have:
failover time = nop timeout + nop interval + replacement_timeout
seconds + scsi block device timeout (/sys/block/sdX/device/timeout)
Is there anything else that I am missing?
-b
On Mar 15, 4:53
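Plugging illustrative multipath-oriented values into that formula (these numbers are
not from this thread): with noop_out_interval = 5, noop_out_timeout = 5 and
replacement_timeout = 15,

failover time ~= 5 + 5 + 15 = 25 seconds

which is in line with the 25-30 second failovers reported elsewhere in this list.
Whether the /sys/block/sdX/device/timeout term adds to this depends on whether
commands were already dispatched when the path died.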
On 03/16/2010 04:50 AM, Alex Zeffertt wrote:
Mike Christie wrote:
On 03/15/2010 05:56 AM, Alex Zeffertt wrote:
The bugzilla ticket requests a merge of two git commits, but neither of
those contain the libiscsi.c change that addresses bug #2. Was this a
mistake, or did you deliberately omit that
Mike Christie wrote:
On 03/15/2010 05:56 AM, Alex Zeffertt wrote:
The bugzilla ticket requests a merge of two git commits, but neither of
those contain the libiscsi.c change that addresses bug #2. Was this a
mistake, or did you deliberately omit that part of your
speed-up-conn-fail-take3.patch w
On 03/15/2010 05:56 AM, Alex Zeffertt wrote:
The bugzilla ticket requests a merge of two git commits, but neither of
those contain the libiscsi.c change that addresses bug #2. Was this a
mistake, or did you deliberately omit that part of your
speed-up-conn-fail-take3.patch when you raised the ti
Mike Christie wrote:
On 03/07/2010 07:46 AM, Pasi Kärkkäinen wrote:
On Fri, Mar 05, 2010 at 05:07:53AM -0600, Mike Christie wrote:
On 03/01/2010 08:53 PM, Mike Christie wrote:
On 03/01/2010 12:06 PM, bet wrote:
1. Based on my timeouts I would think that my session would time out
Yes. It shou
On Mon, Mar 08, 2010 at 02:07:14PM -0600, Mike Christie wrote:
> On 03/07/2010 07:46 AM, Pasi Kärkkäinen wrote:
>> On Fri, Mar 05, 2010 at 05:07:53AM -0600, Mike Christie wrote:
>>> On 03/01/2010 08:53 PM, Mike Christie wrote:
On 03/01/2010 12:06 PM, bet wrote:
> 1. Based on my timeouts I
On 03/07/2010 07:46 AM, Pasi Kärkkäinen wrote:
On Fri, Mar 05, 2010 at 05:07:53AM -0600, Mike Christie wrote:
On 03/01/2010 08:53 PM, Mike Christie wrote:
On 03/01/2010 12:06 PM, bet wrote:
1. Based on my timeouts I would think that my session would time out
Yes. It should timeout about 15 s
On Fri, Mar 05, 2010 at 05:07:53AM -0600, Mike Christie wrote:
> On 03/01/2010 08:53 PM, Mike Christie wrote:
>> On 03/01/2010 12:06 PM, bet wrote:
>>> 1. Based on my timeouts I would think that my session would time out
>>
>> Yes. It should timeout about 15 secs after you see
>> > Mar 1 07:14:27
On 03/01/2010 08:53 PM, Mike Christie wrote:
On 03/01/2010 12:06 PM, bet wrote:
1. Based on my timeouts I would think that my session would time out
Yes. It should timeout about 15 secs after you see
> Mar 1 07:14:27 bentCluster-1 kernel: connection4:0: ping timeout of
> 5 secs expired, recv
I can track its progress? If not should I open
a BZ?
3. In a best case scenario what kind of failover time can I expect
with multipath and iSCSI? I see about 25-30 seconds, is this
accurate? I saw 3 second failover time using bonded NICs instead of
dm-multipath, is there any specific reason to u
Mike Christie wrote:
> You might be hitting a bug where the network layer gets stuck trying to
> send data. I attached a patch that should fix the problem
Doing some multipath testing with iscsi/tcp I didn't hit this bug; any hint
on what it takes to have this come into play? I did
Mike Christie wrote:
On 03/01/2010 12:06 PM, bet wrote:
1. Based on my timeouts I would think that my session would time out
Yes. It should timeout about 15 secs after you see
> Mar 1 07:14:27 bentCluster-1 kernel: connection4:0: ping timeout of
> 5 secs expired, recv timeout 5, last rx 4
On 03/01/2010 12:06 PM, bet wrote:
1. Based on my timeouts I would think that my session would time out
Yes. It should timeout about 15 secs after you see
> Mar 1 07:14:27 bentCluster-1 kernel: connection4:0: ping timeout of
> 5 secs expired, recv timeout 5, last rx 4884304, last ping 488930
Hi all. I am going through some testing of my multipathed iSCSI
devices and I am seeing some longer than expected delays. I am
running the latest RHEL 5.4 packages as of this morning. I am seeing
the failure of the iSCSI sessions take about 67 seconds. After the
iSCSI failure the multipath
Using a single path (without MPIO) as a baseline:
With bonding I saw, on average 99-100% of the speed (worst case 78%)
of a single path.
With MPIO (2 nics) I saw, on average 82% of the speed (worst case 66%)
of the single path.
With MPIO with one nic (ifconfig downed the second), I saw, on average
On 12/30/2009 11:48 AM, Kyle Schmitt wrote:
On Wed, Dec 9, 2009 at 8:52 PM, Mike Christie wrote:
So far single connections work: If I set up the box to use one NIC, I
get one connection and can use it just fine.
Could you send the /var/log/messages for when you run the login command
so I can se
Note, the EMC specific bits of that multipath.conf were just copied
from boxes that use FC to the SAN, and use MPIO successfully.
ltiple concurrent requests (which you
say it doesn't, I'll believe you)
OR the san was under massively different load between the test runs
(not too likely, but possible. Only one other lun is in use).
> That seems a bit weird.
That's what I thought, otherwise I would have ju
devices that connect to iscsi
> show traffic.
>
> The weird thing is that, aside from writing, bonding was measurably
> faster than MPIO. Does that seem right?
>
That seems a bit weird.
How did you configure multipath? Please paste your multipath settings.
-- Pasi
>
> Here
re's the dmesg, if that lends any clues. Thanks for any input!
--Kyle
156 lines of dmesg follows
cxgb3i: tag itt 0x1fff, 13 bits, age 0xf, 4 bits.
iscsi: registered transport (cxgb3i)
device-mapper: table: 253:6: multipath: error getting device
device-mapper: ioctl: error adding
On Wed, Dec 9, 2009 at 8:52 PM, Mike Christie wrote:
> Kyle Schmitt wrote:
> What do you mean by works? Can you dd it, or fdisk it?
sdc works by most any measure: I can dd it, fdisk it, mkfs.ext3 it,
run iozone, etc.
In contrast sdb, sdd and sde can't be fdisked, dd'ed, or even less -f'ed.
> H
-iscsi@googlegroups.com
Subject: Re: Need help with multipath and iscsi in CentOS 5.4
Kyle Schmitt wrote:
> I'm cross-posting here from linux-iscsi-users since I've seen no
linux-scsi-users would be for centos 4. Centos 5 uses a different
initiator, but you are in the right place finally
iscsiadm -m node -T -l
>
> I could see I was connected to all four via
> iscsiadm -m session
>
> At this point, I thought I was set, I had four new devices
> /dev/sdb /dev/sdc /dev/sdd /dev/sde
>
> Ignoring multipath at this point for now, here's where the problem
good.
I logged all four of them them in with:
iscsiadm -m node -T -l
I could see I was connected to all four via
iscsiadm -m session
At this point, I thought I was set, I had four new devices
/dev/sdb /dev/sdc /dev/sdd /dev/sde
Ignoring multipath at this point for now, here's where the prob
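The target names were stripped by the archive above; with hypothetical values filled in
(the IQN and portal below are placeholders, not the poster's), the login-and-verify
sequence looks like:

iscsiadm -m node -T iqn.2004-01.com.example:storage.lun0 -p 10.0.0.10:3260 --login
iscsiadm -m session    # one line per logged-in session
multipath -ll          # each session's sdX should then appear as a path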
Hello,
This is my first post on this group so please excuse my ignorance. I
am using the combination of RHEL, iSCSI and multipath to a NetApp SAN.
I attach to 9 LUNs. On one path I can connect to all LUNs but on the
second path I am unable to see 3 of the 9 LUNs. I have tried
'googling
Did anyone install SLES11 on a multipath iSCSI device on a POWER-based
host? I installed on a single path to begin with. It booted fine with the
/etc/yaboot.conf file. I created another path with the "iscsiadm" command
(set node startup to onboot and configured multipath) and ran "mkinitrd"
config
> boot.open-iscsi on && chkconfig open-iscsi on' and then ran 'mkinitrd' to
> create a new initrd with multipath support at this point.
>
> Is this a recommended procedure?
>
> With that, the system boots fine but the shutdown/reboot hangs. Upon
-iscsi on' and then ran 'mkinitrd' to
create a new initrd with multipath support at this point.
Is this a recommended procedure?
With that, the system boots fine but the shutdown/reboot hangs. Upon
closer inspection, I found that /etc/init.d/boot.open-iscsi sets
node.conn[0].startup t
Hi Bart,
At the moment this is only a proof of concept which seems to be working
fine.
I had some thoughts about it:
* The iSCSI target should not run in Write-Back mode
* DRBD should run in synchronous mode (protocol C)
* I should use the failover policy with dm-multipath instead of
round-robin (see the sketch below)
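For that last point, the relevant knob in /etc/multipath.conf is path_grouping_policy;
a minimal sketch (whether to set it globally or per LUN is a matter of taste, and the
wwid is a placeholder):

defaults {
    path_grouping_policy    failover
}

# or only for the DRBD-backed LUN:
multipaths {
    multipath {
        wwid                    <wwid-of-the-drbd-backed-lun>
        path_grouping_policy    failover
    }
}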
On Fri, May 29, 2009 at 5:21 PM, Wido wrote:
> Last night I had the idea of building a multipath iSCSI setup with DRBD
> in Primary/Primary.
> [ ... ]
> Is my setup a possibility or should I stay with the old regular setup of
> a Primary/Standby with heartbeat?
I strongly recomme
On Fri, 2009-05-29 at 11:13 -0500, Mike Christie wrote:
> Wido wrote:
> > Hello,
> >
> > Last night I had the idea of building a multipath iSCSI setup with
> > DRBD in Primary/Prima