Re: [Linux-cluster] Howto? Remove volumes

2007-10-26 Thread Steve Rigler
On Fri, 2007-10-26 at 09:29 -0500, [EMAIL PROTECTED] wrote: After using LUN masking and zoning to make sure that hosts didn't access other hosts' data, I find myself with lost volumes on some of the hosts. Since I didn't remove them before making the changes, I am now stuck with the

[Linux-cluster] Howto? Remove volumes

2007-10-26 Thread [EMAIL PROTECTED]
After using LUN masking and zoning to make sure that hosts didn't access other hosts' data, I find myself with lost volumes on some of the hosts. Since I didn't remove them before making the changes, I am now stuck with the following: # lvscan /dev/sda: read failed after 0 of 4096 at 0:

[Linux-cluster] Fencing Race Question

2007-10-26 Thread Scott Becker
In the FAQ, Fencing Questions, 13: "What's the right way to configure fencing when I have redundant power supplies?" I'm going to set up the second example. My concern is about a race condition with the two devices: <device name="pwr01" option="off" switch="1" port="1"/> <device name="pwr02" option="off"
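The second FAQ example referred to above can be sketched as a cluster.conf fence method that switches both outlets off before switching either back on, so a node with redundant supplies is never left powered mid-fence. This is an illustrative fragment only; the device names, switch, and port numbers are hypothetical and must match your actual fencedevices section.

```xml
<!-- Hypothetical sketch of dual-power-switch fencing: all "off" actions
     come before any "on" action, so the node loses both feeds at once. -->
<fence>
  <method name="1">
    <device name="pwr01" option="off" switch="1" port="1"/>
    <device name="pwr02" option="off" switch="1" port="1"/>
    <device name="pwr01" option="on"  switch="1" port="1"/>
    <device name="pwr02" option="on"  switch="1" port="1"/>
  </method>
</fence>
```

Ordering all the off actions first is what prevents the node from riding through the fence on its second power supply.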

Re: [Linux-cluster] Howto? Remove volumes

2007-10-26 Thread [EMAIL PROTECTED]
Hi Steve, Usually you'd want to do this prior to zoning away a device, Ya, I know :). I should have done it before but I was testing stuff and it just happened to work out as I needed it so left it that way. but echo 1 > /sys/block/sda/device/delete should get rid of it. There is no delete

[Linux-cluster] How to take down a CS/GFS setup with minimum downtime

2007-10-26 Thread Sævaldur Arnar Gunnarsson
I've got five RHEL4 systems with CS and about 800 GB of data on a shared GFS filesystem. I've been tasked to take down the cluster and divide the content of the shared GFS filesystem to the local disks on each system with minimum downtime. I've removed two nodes from the cluster already and am
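For a migration like the one described above, one common low-downtime pattern is a two-pass copy: a bulk copy while the GFS filesystem is still mounted and in service, then a short final sync with the application stopped. This is only a sketch; the paths and the idea of per-node subdirectories are assumptions, not from the original post.

```shell
# Pass 1: bulk copy from the shared GFS mount to local disk while the
# service is still running (no downtime yet; paths are examples).
rsync -aHx /mnt/gfs/node1-share/ /data/local/

# Pass 2: stop the application briefly, then catch up on anything that
# changed since the bulk copy, deleting files removed in the meantime.
rsync -aHx --delete /mnt/gfs/node1-share/ /data/local/
```

The downtime window is only as long as the second, incremental pass.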

Re: [Linux-cluster] Howto? Remove volumes

2007-10-26 Thread [EMAIL PROTECTED]
Reference removed by going to the real path; Symlink path was: /sys/block/sda/device/delete Real path was: device -> ../../devices/pci0000:00/0000:00:11.0/host0/target0:0:0/0:0:0:0 From within the real path: echo 1 > /sys/block/sda/device/delete OK - that should be the correct file. Does it still
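The removal step discussed in this thread, plus the usual follow-ups, can be sketched as below. The device and host numbers (sda, host0) are examples from this thread; check /proc/scsi/scsi or lsscsi on your own system first, as deleting the wrong entry detaches a live disk.

```shell
# Tell the kernel to drop a stale SCSI device left behind after re-zoning
# (sda here is the dead device from the thread; verify before running).
echo 1 > /sys/block/sda/device/delete

# If LVM still caches the vanished PV, refresh its view of the devices:
pvscan
vgscan

# To pick up newly zoned LUNs on the same HBA without a reboot,
# trigger a rescan of the SCSI host (host0 is an example):
echo "- - -" > /sys/class/scsi_host/host0/scan
```

The delete attribute is write-only, which is why it does not show up in a directory listing the way a regular file would.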

Re: [Linux-cluster] Howto? Remove volumes

2007-10-26 Thread Bryn M. Reeves
[EMAIL PROTECTED] wrote: Reference removed by going to the real path; Symlink path was: /sys/block/sda/device/delete Real path was: device -> ../../devices/pci0000:00/0000:00:11.0/host0/target0:0:0/0:0:0:0 From within the real path: echo 1

Re: [Linux-cluster] Fencing Race Question

2007-10-26 Thread Scott Becker
To clarify, example #2 in the faq: redundant power feeds, redundant APCs - no single point of failure. Provides High availability but complicates fencing and race prevention. This necessitates two power switch logins to successfully fence. I'll test by manually logging in to one device and

[Linux-cluster] updates to fence_apc

2007-10-26 Thread James Fidell
Although the fence_apc script in cman-2.0.64-1.0.1.el5 has 79xx support, it wouldn't work with a 7920 I'm using. I have patches to fix that and also allow the port to be named rather than using the default Outlet N numbering scheme. Has this already been fixed in the development stream? And if

[Linux-cluster] Problem with fence_ilo

2007-10-26 Thread James Fidell
Again in cman-2.0.64-1.0.1.el5, fence_ilo reports an error from an iLO2 session saying power cannot be turned back on without TOGGLE = Yes being set. The unit reports that it uses RIBCL 2.22, which appears to be supported in the code. Can anyone confirm that their fence_ilo script works without

Re: [Linux-cluster] How to configure resource dependency in service

2007-10-26 Thread Lon Hohberger
On Thu, 2007-10-25 at 18:55 -0700, Roger Peña wrote: --- Vlad Seagal [EMAIL PROTECTED] wrote: is the active cluster.conf version number the same as the one in the cluster.conf file? A manual change of that number and a forced propagation of the new conf to the whole cluster should fix that. Possibly; to

Re: [Linux-cluster] Fencing Race Question

2007-10-26 Thread Lon Hohberger
On Fri, 2007-10-26 at 08:47 -0700, Scott Becker wrote: To clarify, example #2 in the faq: redundant power feeds, redundant APCs - no single point of failure. Provides High availability but complicates fencing and race prevention. This necessitates two power switch logins to successfully

RE: [Linux-cluster] strange modprobe drops VIP fails service

2007-10-26 Thread Lon Hohberger
On Fri, 2007-10-26 at 11:09 -0400, Jeff Liu wrote: I haven't received any replies, so I'm bumping the question again. *bump* Thanks in advance. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jeff Liu Sent: Thursday, October 25, 2007

Re: [Linux-cluster] Fencing Race Question

2007-10-26 Thread Lon Hohberger
On Fri, 2007-10-26 at 08:21 -0700, Scott Becker wrote: In the FAQ, Fencing Questions 13. What's the right way to configure fencing when I have redundant power supplies? I'm going to setup the second example. My concern is about a race condition with the two devices: device name=pwr01

Re: [Linux-cluster] Problem with fence_ilo

2007-10-26 Thread Ryan McCabe
On Fri, Oct 26, 2007 at 07:37:49PM +0100, James Fidell wrote: Can anyone confirm that their fence_ilo script works without modification in such a configuration? I've changed the script to send <HOLD_PWR_BTN TOGGLE="Yes"/> and that does work for me, but I don't know if it's valid for all
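For context, the command discussed above lives inside a RIBCL request. A minimal sketch of where <HOLD_PWR_BTN TOGGLE="Yes"/> sits in such a request is below; the login credentials are placeholders, and the exact behavior depends on the iLO2 firmware's RIBCL version.

```xml
<!-- Hypothetical RIBCL request matching the change described above:
     press-and-hold power with TOGGLE, so iLO2 can power the server
     back on afterwards. USER_LOGIN/PASSWORD are placeholders. -->
<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="admin" PASSWORD="password">
    <SERVER_INFO MODE="write">
      <HOLD_PWR_BTN TOGGLE="Yes"/>
    </SERVER_INFO>
  </LOGIN>
</RIBCL>
```

Power commands like this one must be issued with SERVER_INFO in write mode, or the iLO rejects them.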

[Linux-cluster] GFS RG size (and tuning)

2007-10-26 Thread Jos Vos
Hi, The gfs_mkfs manual page (RHEL 5.0) says: If not specified, gfs_mkfs will choose the RG size based on the size of the file system: average size file systems will have 256 MB RGs, and bigger file systems will have bigger RGs for better performance. My 3 TB filesystems still seem
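If the automatic choice is not producing larger RGs on a big filesystem, the size can be forced at mkfs time with the -r option (in megabytes). The cluster name, filesystem name, journal count, and device below are hypothetical examples, not taken from the original post.

```shell
# Sketch: force 2048 MB resource groups instead of relying on the
# automatic sizing (all names and the journal count are examples).
gfs_mkfs -p lock_dlm -t mycluster:bigfs -j 5 -r 2048 /dev/vg0/bigfs

# After mounting, gfs_tool can be used to inspect the filesystem,
# including its resource-group layout:
gfs_tool df /mnt/bigfs
```

Note that the RG size is fixed at mkfs time; changing it on an existing filesystem means recreating it.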

Re: [Linux-cluster] Fencing Race Question

2007-10-26 Thread Scott Becker
I think I understand how it works. It's good to know that the loser of the first race doesn't immediately try fence device 2. If it's really a race then the delay in node 2's retry attempt is necessary for it to be killed before it retries. The ssh handshaking when logging into the APC does

Re: [Linux-cluster] GFS RG size (and tuning)

2007-10-26 Thread Wendy Cheng
Jos Vos wrote: Hi, The gfs_mkfs manual page (RHEL 5.0) says: If not specified, gfs_mkfs will choose the RG size based on the size of the file system: average size file systems will have 256 MB RGs, and bigger file systems will have bigger RGs for better performance. My 3 TB

[Linux-cluster] Maximum number of nodes in RHCS 4/5

2007-10-26 Thread Jiho Hahm
Hi, the cluster suite product page on the Red Hat site states the maximum number of nodes supported is 128 for RHEL5 and 16 for RHEL4. Is this accurate for a cman/rgmanager cluster? Thanks, -Jiho

Re: [Linux-cluster] Howto? Remove volumes

2007-10-26 Thread [EMAIL PROTECTED]
Anyone have any additional thoughts on this? It's a leftover volume after I zoned everything off. Mike --- SCSI subsystem initialized QLogic Fibre Channel HBA Driver qla2200 0000:00:11.0: Found an ISP2200, irq 11, iobase 0xe0816000 qla2200 0000:00:11.0: Configuring PCI space... qla2200