On Fri, 2007-10-26 at 09:29 -0500, [EMAIL PROTECTED] wrote:
After using LUN masking and zoning to make sure that hosts didn't access other
hosts' data space, I find myself with lost volumes on some of the hosts. Since
I didn't remove them before making the changes, I am now stuck with the
following:
# lvscan
/dev/sda: read failed after 0 of 4096 at 0:
In the FAQ, Fencing Questions
13. What's the right way to configure fencing when I have redundant
power supplies?
I'm going to setup the second example.
My concern is about a race condition with the two devices:
<device name="pwr01" option="off" switch="1" port="1"/>
<device name="pwr02" option="off"
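For what it's worth, the shape the FAQ's second example takes in cluster.conf is roughly the following (device names, switch and port numbers here are placeholders, not taken from your config): both switches get the "off" action before either gets "on", so a node with redundant supplies never keeps a live feed mid-fence.

  <fence>
    <method name="power">
      <!-- cut both feeds first -->
      <device name="pwr01" option="off" switch="1" port="1"/>
      <device name="pwr02" option="off" switch="1" port="1"/>
      <!-- only then restore power on both -->
      <device name="pwr01" option="on" switch="1" port="1"/>
      <device name="pwr02" option="on" switch="1" port="1"/>
    </method>
  </fence>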
Hi Steve,
Usually you'd want to do this prior to zoning away a device,
Ya, I know :). I should have done it before but I was testing stuff and it
just happened to work out as I needed it so left it that way.
but echo 1 > /sys/block/sda/device/delete should get rid of it.
There is no delete
I've got five RHEL4 systems with CS and about 800 GB of data on a shared
GFS filesystem.
I've been tasked to take down the cluster and divide the content of the
shared GFS filesystem to the local disks on each system with minimum
downtime.
I've removed two nodes from the cluster already and am
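One way to keep the downtime to a short cutover window is a live first copy followed by a final delta pass; a rough sketch, with source and destination paths as placeholders:

  # first pass while the GFS filesystem is still mounted and in use
  rsync -aH --numeric-ids /mnt/gfs/project1/ /data/project1/
  # during the cutover window, stop writers and run a final delta pass
  rsync -aH --delete --numeric-ids /mnt/gfs/project1/ /data/project1/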
Reference removed by going to the real path:
Symlink path was:
/sys/block/sda/device/delete
Real path was:
device -> ../../devices/pci0000:00/0000:00:11.0/host0/target0:0:0/0:0:0:0
From within the real path:
echo 1 > /sys/block/sda/device/delete
OK - that should be the correct file. Does it still
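For anyone hitting the same thing, a minimal sketch of the whole removal, assuming the stale device really is sda on host0 and nothing is still using it (host and device names are placeholders):

  # drop the stale SCSI device the kernel still remembers
  echo 1 > /sys/block/sda/device/delete
  # rescan the HBA so the LUNs that should still be visible come back with fresh state
  echo "- - -" > /sys/class/scsi_host/host0/scan
  # then re-check what LVM sees
  pvscan
  lvscan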
[EMAIL PROTECTED] wrote:
Reference removed by going to the real path:
Symlink path was:
/sys/block/sda/device/delete
Real path was:
device -> ../../devices/pci0000:00/0000:00:11.0/host0/target0:0:0/0:0:0:0
From within the real path:
echo 1
To clarify, example #2 in the FAQ: redundant power feeds, redundant APCs
- no single point of failure. Provides high availability but complicates
fencing and race prevention.
This necessitates two power switch logins to successfully fence. I'll
test by manually logging in to one device and
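A quick way to test that by hand is to drive the agent directly against each switch; a sketch, with addresses, credentials and outlet numbers as placeholders:

  # switch 1, then switch 2 - both must succeed for the node to count as fenced
  fence_apc -a 10.0.0.11 -l apc -p secret -n 1 -o off
  fence_apc -a 10.0.0.12 -l apc -p secret -n 1 -o off
  # power the node back up only after both "off" operations have returned success
  fence_apc -a 10.0.0.11 -l apc -p secret -n 1 -o on
  fence_apc -a 10.0.0.12 -l apc -p secret -n 1 -o on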
Although the fence_apc script in cman-2.0.64-1.0.1.el5 has 79xx support,
it wouldn't work with a 7920 I'm using. I have patches to fix that and
also allow the port to be named rather than using the default Outlet N
numbering scheme.
Has this already been fixed in the development stream? And if
Again in cman-2.0.64-1.0.1.el5, fence_ilo reports an error from an iLO2
session saying power cannot be turned back on without TOGGLE = Yes
being set. The unit reports that it uses RIBCL 2.22, which appears to
be supported in the code.
Can anyone confirm that their fence_ilo script works without
On Thu, 2007-10-25 at 18:55 -0700, Roger Peña wrote:
--- Vlad Seagal [EMAIL PROTECTED] wrote:
is the active cluster.conf version number the same as the one in the
cluster.conf file?
a manual change of that number and a forced propagation of the new conf
to the whole cluster should fix that
Possibly; to
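For the record, the usual sequence on CS4/CS5 is to bump config_version in /etc/cluster/cluster.conf and push it out; a sketch (the version number is a placeholder and must be higher than what every node currently reports):

  # after editing config_version="42" in /etc/cluster/cluster.conf on one node:
  ccs_tool update /etc/cluster/cluster.conf    # propagate the file to the other nodes
  cman_tool version -r 42                      # tell cman the running config version changed
  cman_tool status | grep -i "config version"  # confirm every node agrees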
On Fri, 2007-10-26 at 08:47 -0700, Scott Becker wrote:
To clarify, example #2 in the FAQ: redundant power feeds, redundant APCs
- no single point of failure. Provides high availability but complicates
fencing and race prevention.
This necessitates two power switch logins to successfully
On Fri, 2007-10-26 at 11:09 -0400, Jeff Liu wrote:
I haven't received any replies, so just to bump the question again.
*bump*
Thanks in advance.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jeff Liu
Sent: Thursday, October 25, 2007
On Fri, 2007-10-26 at 08:21 -0700, Scott Becker wrote:
In the FAQ, Fencing Questions
13. What's the right way to configure fencing when I have redundant
power supplies?
I'm going to setup the second example.
My concern is about a race condition with the two devices:
<device name="pwr01"
On Fri, Oct 26, 2007 at 07:37:49PM +0100, James Fidell wrote:
Can anyone confirm that their fence_ilo script works without
modification in such a configuration? I've changed the script to
send
<HOLD_PWR_BTN TOGGLE="Yes"/>
and that does work for me, but I don't know if it's valid for all
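For context, the RIBCL request the modified script sends would sit inside a structure roughly like this (the login wrapper and credentials are assumptions based on the RIBCL format, not taken from the fence_ilo source):

  <RIBCL VERSION="2.0">
    <LOGIN USER_LOGIN="Administrator" PASSWORD="password">
      <SERVER_INFO MODE="write">
        <!-- the element changed to carry the TOGGLE attribute -->
        <HOLD_PWR_BTN TOGGLE="Yes"/>
      </SERVER_INFO>
    </LOGIN>
  </RIBCL>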
Hi,
The gfs_mkfs manual page (RHEL 5.0) says:
If not specified, gfs_mkfs will choose the RG size based on the size
of the file system: average size file systems will have 256 MB RGs,
and bigger file systems will have bigger RGs for better performance.
My 3 TB filesystems still seem
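If the automatically chosen size looks wrong for a 3 TB filesystem, the RG size can be forced at creation time with -r; everything below is an illustrative placeholder, not your actual cluster, journal count or device names:

  # 2048 MB resource groups; -t is clustername:fsname, -j is the number of journals
  gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 5 -r 2048 /dev/vg_shared/lv_gfs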
I think I understand how it works. It's good to know that the loser of
the first race doesn't immediately try fence device 2. If it's really a
race then the delay in node 2's retry attempt is necessary for it to be
killed before it retries. The ssh handshaking when logging into the APC
does
Jos Vos wrote:
Hi,
The gfs_mkfs manual page (RHEL 5.0) says:
If not specified, gfs_mkfs will choose the RG size based on the size
of the file system: average size file systems will have 256 MB RGs,
and bigger file systems will have bigger RGs for better performance.
My 3 TB
Hi, the Cluster Suite product page on the Red Hat site states that the maximum
number of nodes supported is 128 for RHEL5 and 16 for RHEL4. Is this accurate
for a cman/rgmanager cluster? Thanks,
-Jiho
Anyone have any additional thoughts on this? It's a leftover volume after I
zoned everything off.
Mike
---
SCSI subsystem initialized
QLogic Fibre Channel HBA Driver
qla2200 0000:00:11.0: Found an ISP2200, irq 11, iobase 0xe0816000
qla2200 0000:00:11.0: Configuring PCI space...
qla2200