iSCSI and multipath performance issue

2011-09-16 Thread --[ UxBoD ]--
Hello all, we are about to configure a new storage system that utilizes the Nexenta OS with sparsely allocated ZVOLs. We wish to present 4TB of storage to a Linux system that has four NICs available to it. We are unsure whether to present one large ZVOL or four smaller ones to maximize the use
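For the four-NIC case, a common pattern is one open-iscsi interface per NIC, so every presented LUN shows up with four paths for dm-multipath to aggregate. A minimal sketch of the login sequence, assuming hypothetical NIC names, target IQN, and portal address (the script only prints the commands rather than running them):

```shell
#!/bin/sh
# Sketch: create one open-iscsi iface per NIC and log in through each,
# giving dm-multipath four paths per LUN. The target IQN, portal, and
# NIC names below are hypothetical placeholders; commands are echoed,
# not executed, so they can be reviewed first.
TARGET="iqn.1986-03.com.sun:02:example"   # placeholder IQN
PORTAL="172.16.0.10:3260"                 # placeholder portal

emit_login_cmds() {
    for nic in eth0 eth1 eth2 eth3; do
        echo "iscsiadm -m iface -I iface-$nic --op new"
        echo "iscsiadm -m iface -I iface-$nic --op update -n iface.net_ifacename -v $nic"
        echo "iscsiadm -m node -T $TARGET -p $PORTAL -I iface-$nic --login"
    done
}
emit_login_cmds
```

Whether one large ZVOL or four smaller ones works better then depends mostly on how well round-robin spreads I/O across the sessions, which is worth benchmarking both ways.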

Re: Multipath or OpeniSCSI Issue ?

2011-05-20 Thread --[ UxBoD ]--
- Original Message - > On 05/20/2011 10:45 AM, --[ UxBoD ]-- wrote: > > - Original Message - > >> On 05/20/2011 10:29 AM, --[ UxBoD ]-- wrote: > >>> Not sure where this should go so cross posting: > >>> > >>> CentOS 5.6 with k

Re: Multipath or OpeniSCSI Issue ?

2011-05-20 Thread --[ UxBoD ]--
- Original Message - > On 05/20/2011 10:29 AM, --[ UxBoD ]-- wrote: > > Not sure where this should go so cross posting: > > > > CentOS 5.6 with kernel 2.6.37.6 and > > device-mapper-multipath-0.4.7-42.el5_6.2. > > > > When I run multipath -v9 -d I ge

Multipath or OpeniSCSI Issue ?

2011-05-20 Thread --[ UxBoD ]--
Not sure where this should go so cross posting: CentOS 5.6 with kernel 2.6.37.6 and device-mapper-multipath-0.4.7-42.el5_6.2. When I run multipath -v9 -d I get: sdg: not found in pathvec sdg: mask = 0x1f Segmentation fault If I strace the command I see: stat("/sys/block/sdg/device", {st
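If the crash is tied to that one device, a stopgap while the real cause is investigated (the lasting fix is more likely a newer device-mapper-multipath build) is to blacklist the device so the remaining maps can still be built. A sketch of an /etc/multipath.conf fragment, assuming sdg is the offender as in the trace above:

```
# /etc/multipath.conf fragment (sketch): temporarily blacklist the
# device that trips the segfault so multipath can build the other maps.
# "sdg" is taken from the report above; adjust to the failing device.
blacklist {
    devnode "^sdg$"
}
```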

Re: Change in /dev/disk behaviour ?

2011-05-20 Thread --[ UxBoD ]--
- Original Message - > On 05/19/2011 09:22 AM, --[ UxBoD ]-- wrote: > > Hi, > > > > > > On a previous release of OpeniSCSI devices were created in the > > format: > > > > > > lrwxrwxrwx 1 root root 9 May 8 03:35 > > ip-17

Change in /dev/disk behaviour ?

2011-05-19 Thread --[ UxBoD ]--
Hi, On a previous release of Open-iSCSI, devices were created in the format: lrwxrwxrwx 1 root root 9 May 8 03:35 ip-172.XXX.XXX.XXX:3260-iscsi-iqn.1986-03.com.sun:02:XXX-lun-9 -> ../../sdl whereas with the latest release they are: [root@kvm02 ~]# ls -l /dev/disk/by-path/ip-172.XXX.XX
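If the by-path format keeps shifting between releases, the /dev/disk/by-id names (keyed on the LUN's WWID) are generally more stable references; alternatively, the fields can be parsed straight out of a by-path name. A small sketch using shell parameter expansion, with a placeholder address and IQN modeled on the format quoted above:

```shell
#!/bin/sh
# Sketch: extract the target IQN and LUN number from a
# /dev/disk/by-path symlink name. The example name below is a
# placeholder modeled on the format quoted in the post.
name="ip-172.16.0.1:3260-iscsi-iqn.1986-03.com.sun:02:example-lun-9"

# Strip everything up to the last "-lun-" to get the LUN number.
lun_of() { echo "${1##*-lun-}"; }

# Drop the "ip-...-iscsi-" prefix, then the "-lun-N" suffix.
iqn_of() { t="${1#*-iscsi-}"; echo "${t%-lun-*}"; }

lun_of "$name"    # → 9
iqn_of "$name"    # → iqn.1986-03.com.sun:02:example
```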

Re: CentOS 5.6

2011-05-11 Thread --[ UxBoD ]--
- Original Message - > On 05/10/2011 05:11 AM, --[ UxBoD ]-- wrote: > > As it would appear CentOS 6 is a while downstream before release we > > have > > used 5.6 on a new server build. That uses kernel 2.6.18-238.9.1.el5 > > which is pretty darn old and has a

CentOS 5.6

2011-05-10 Thread --[ UxBoD ]--
As it appears CentOS 6 is still a while from release, we have used 5.6 on a new server build. That uses kernel 2.6.18-238.9.1.el5, which is pretty darn old and has a nasty bug when working with iSCSI and OpenSolaris SANs. What kernel and Open-iSCSI release would you recommend for a ne

Re: Odd iSCSI connection issue

2011-05-10 Thread --[ UxBoD ]--
- Original Message - > On 05/09/2011 10:52 AM, --[ UxBoD ]-- wrote: > > Hi, > > > > Over the weekend we had to make some room on our core switches for > > a couple of new disk systems. We pulled a couple of cables from > > our current disk and updated the

Odd iSCSI connection issue

2011-05-09 Thread --[ UxBoD ]--
Hi, Over the weekend we had to make some room on our core switches for a couple of new disk systems. We pulled a couple of cables from our current disk and updated the paths so that they were no longer in use. We are now starting to see the following message, plus disk paths not available, and
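When portals go away on the switch side, open-iscsi keeps retrying the recorded node entries until they are explicitly logged out and deleted. A sketch that prints (rather than runs) the cleanup for the removed paths, with a hypothetical target IQN and portal list:

```shell
#!/bin/sh
# Sketch: emit the iscsiadm commands that log out of and delete the
# node records for portals that no longer exist, so open-iscsi stops
# retrying them. Target and portals are hypothetical placeholders;
# commands are echoed for review, not executed.
TARGET="iqn.1986-03.com.sun:02:example"
STALE_PORTALS="172.16.1.10:3260 172.16.1.11:3260"

emit_cleanup() {
    for p in $STALE_PORTALS; do
        echo "iscsiadm -m node -T $TARGET -p $p --logout"
        echo "iscsiadm -m node -T $TARGET -p $p --op delete"
    done
}
emit_cleanup
```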

Re: One device showing faulty

2010-12-09 Thread --[ UxBoD ]--
- Original Message - > On 12/08/2010 06:36 AM, --[ UxBoD ]-- wrote: > > Hi, > > > > Have noticed that one of our iscsi disks is showing as faulty: > > > > imonitor01 (3600144f0e28249004a81fc5d0005) dm-4 NEXENTA,COMSTAR > > [size=1024G][features=0

One device showing faulty

2010-12-08 Thread --[ UxBoD ]--
Hi, Have noticed that one of our iscsi disks is showing as faulty: imonitor01 (3600144f0e28249004a81fc5d0005) dm-4 NEXENTA,COMSTAR [size=1024G][features=0][hwhandler=0][rw] \_ round-robin 0 [prio=0][enabled] \_ 10:0:0:2 sdaa 65:160 [failed][faulty] \_ round-robin 0 [prio=49][active] \_ 11:
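To pick the failed path device out of output like that programmatically, a short awk filter over `multipath -ll` works; the sample here mirrors the map quoted above, and in use you would pipe `multipath -ll` in instead of the here-document:

```shell
#!/bin/sh
# Sketch: print the device name of every path flagged [failed][faulty]
# in multipath -ll output. In the path lines, field 3 is the sdX name.
failed_paths() {
    awk '/\[failed\]\[faulty\]/ { print $3 }'
}

# Sample input mirroring the map quoted in the post.
failed_paths <<'EOF'
imonitor01 (3600144f0e28249004a81fc5d0005) dm-4 NEXENTA,COMSTAR
[size=1024G][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=0][enabled]
 \_ 10:0:0:2 sdaa 65:160 [failed][faulty]
EOF
```

Once the device is known, rescanning it (`echo 1 > /sys/block/sdaa/device/rescan`) or restarting that iSCSI session often lets multipathd reinstate the path; whether that helps here depends on why COMSTAR failed it in the first place.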

Re: Which O-iSCSI version ?

2010-07-09 Thread --[ UxBoD ]--
- Original Message - > On 07/09/2010 09:21 AM, --[ UxBoD ]-- wrote: > > Hi, > > > > Am about to build a new server, based on CentOS 5.5, and we would > > like to use a newer kernel. Should we go straight for 2.6.34.1 ? or > > which would be the best to

Which O-iSCSI version ?

2010-07-09 Thread --[ UxBoD ]--
Hi, am about to build a new server based on CentOS 5.5, and we would like to use a newer kernel. Should we go straight to 2.6.34.1, or which kernel would be best for the latest release of O-iSCSI? -- Thanks, Phil -- You received this message because you are subscribed to the Google Groups

NIC Offload Engine

2010-05-18 Thread --[ UxBoD ]--
Hi, this is more of a research question, if anybody could help please: has anybody performed testing with Open-iSCSI and TCP Offload Engine NICs? Would be very interested to see the difference with and without, and how it affects CPU and memory load. -- Thanks, Phil
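Full TOE needs a vendor-specific driver, but the partial offloads that ordinary NICs expose can be toggled with ethtool for an A/B comparison while sampling CPU during a test run. A sketch that only prints the commands; eth0 is a placeholder, and the feature list is the common set rather than anything TOE-specific:

```shell
#!/bin/sh
# Sketch: emit the ethtool commands to disable the common NIC offloads
# on a placeholder interface, plus a CPU-sampling command to run while
# the iSCSI workload is in flight. Commands are echoed for review.
NIC=eth0

emit_offload_test() {
    for feat in tso gso gro rx tx; do
        echo "ethtool -K $NIC $feat off"
    done
    echo "sar -u 1 30   # sample CPU while the test workload runs"
}
emit_offload_test
```

Running the same workload with the features back on then gives the with/without comparison the question asks about.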

Re: iSCSI kernel upgrade

2010-04-12 Thread --[ UxBoD ]--
- Original Message - > On 04/09/2010 08:03 AM, --[ UxBoD ]-- wrote: > > Hi, > > > > we have been having a few issues with iSCSI lockups under kernel > > 2.5.29 on CentOS. We would like to update the kernel iSCSI modules > > are would like to ask whether w

iSCSI kernel upgrade

2010-04-09 Thread --[ UxBoD ]--
Hi, we have been having a few issues with iSCSI lockups under kernel 2.6.29 on CentOS. We would like to update the kernel iSCSI modules and would like to ask whether we should use the code from Git or the latest, semi-stable (April 2009), bundle from the website? -- Thanks, Phil

Re: CPU consumes 100% and freezes KVM guests when mkfs.ext4 is run

2010-02-05 Thread --[ UxBoD ]--
- "Mike Christie" wrote: > On 02/05/2010 07:42 AM, --[ UxBoD ]-- wrote: > > - "Mike Christie" wrote: > > > >> On 02/04/2010 06:57 AM, --[ UxBoD ]-- wrote: > >>> Linux kvm01..xxx 2.6.29.1 #1 SMP > >> > >&

Re: CPU consumes 100% and freezes KVM guests when mkfs.ext4 is run

2010-02-05 Thread --[ UxBoD ]--
- "Mike Christie" wrote: > On 02/04/2010 06:57 AM, --[ UxBoD ]-- wrote: > > Linux kvm01..xxx 2.6.29.1 #1 SMP > > I can replicate this on 2.6.29, but not on newer kernels. I remember > some bug fixes in the scsi or block layer went in for handling

Re: CPU consumes 100% and freezes KVM guests when mkfs.ext4 is run

2010-02-04 Thread --[ UxBoD ]--
- "Mike Christie" wrote: > On 02/04/2010 06:57 AM, --[ UxBoD ]-- wrote: > > Hi, > > > > we have recently experienced a problem when running a mkfs.ext4 on a > freshly presented LUN spirals the CPU of out control and makes all the > KVM guests on the same

CPU consumes 100% and freezes KVM guests when mkfs.ext4 is run

2010-02-04 Thread --[ UxBoD ]--
Hi, we have recently experienced a problem where running mkfs.ext4 on a freshly presented LUN sends the CPU spiralling out of control and makes all the KVM guests on the same host freeze. On checking /var/log/messages when it happens I see: --/ snip /--