On Fri, Feb 06, 2009 at 12:10:34AM +0530, Rajeev P wrote:
> Hi,
>
> Consider a two node RHEL5 cluster that is configured with SCSI PR fencing
> but without a qdisk. In the event of a network split, each node will attempt to
> fence the other. The key of the node that lost the race will be
>
Can you dump the registered keys and reservation key?
# list the keys registered for a device
sg_persist -i -k /dev/sda
# show which key holds the reservation
sg_persist -i -r /dev/sda
It appears that this node is trying to access /dev/sda but is not
registered with the device. WERO reservations wi
On Tue, Feb 17, 2009 at 04:23:20PM -0700, Gary Romo wrote:
>
> We had this issue a long time ago.
> What we did was remove the sg3_utils rpm and then did a chkconfig
> scsi_reserve off
Ahh, yes. If you don't intend to use SCSI-3 reservations, you
definitely need to turn off scsi_reserve.
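A minimal sketch of that cleanup, assuming the stock RHEL5 init script names (removing the sg3_utils rpm, as suggested elsewhere in this thread, is an alternative to disabling the service):

```shell
# Stop the scsi_reserve init script and keep it from starting at boot.
# Run this on every cluster node; RHEL5 service/chkconfig syntax assumed.
service scsi_reserve stop
chkconfig scsi_reserve off
# Or remove the package that ships the script entirely:
# rpm -e sg3_utils
```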
Thanks
On Wed, Feb 18, 2009 at 04:30:04PM -0600, Alan A wrote:
> Hello all!
> Thanks for the help. I have removed the sg3_utils package and rebooted all the
> nodes. I also removed any SCSI fencing entries from the cluster.conf file.
>
> I still have a problem getting GFS up on one of the nodes. I checked the
On Thu, Feb 19, 2009 at 11:07:12AM -0600, Alan A wrote:
>
> Thank you Ryan. This was totally correct - yet it will not let me remove the
> reservation. Here is the output of several commands I tried:
>
> [r...@fendev04 ~]# sg_persist -o -C -K 0xb0b40001 /dev/gfs_acct61/acct61
> HPOPEN-V
On Thu, Feb 19, 2009 at 04:00:21PM -0600, Alan A wrote:
> Thank you very much, that did the trick. I will remove sg3_utils package
> now. Hopefully this will never happen again. I had to remove SCSI
> reservations from 4 different volumes. Any suggestions on how to avoid this
> in the future?
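For reference, clearing a stale reservation follows the pattern quoted above. A hedged sketch, reusing the key and device path from the earlier output (substitute your own registered key and device):

```shell
# Clear all registrations and any reservation on the device
# (sg_persist "clear" service action; the key passed via --param-rk
# must be a key that is currently registered).
sg_persist --out --clear --param-rk=0xb0b40001 /dev/gfs_acct61/acct61
# Verify that nothing remains registered:
sg_persist -i -k /dev/gfs_acct61/acct61
```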
I'm
On Wed, Jun 10, 2009 at 02:11:21PM +0530, Rajeev P wrote:
> I wanted to know if fence_scsi is supported in a multipath environment for
> RHEL5.3 release.
Yes, it is supported.
> In earlier releases of RHEL5, fence_scsi was not supported in a multipath
> environment. If I am no
> multipathd: 8:176: mark as failed
> Jun 10 23:32:03 rmamseslab07 multipathd: mpath0: remaining active paths: 3
>
>
>
> -Original Message-
> From: linux-cluster-boun...@redhat.com
> [mailto:linux-cluster-boun...@redhat.com] On Behalf Of Ryan O'Hara
> Sent: Wednesday
On Sun, Jun 14, 2009 at 11:21:08AM +1000, yu song wrote:
> Hi,
>
> I am planning to build a 2-node cluster on RHEL 5.3, and am looking into which
> fencing method I could use.
>
> On the storage side, it is an EMC CLARiiON, which supports SCSI-3 reservations.
>
> So I'm thinking to use fence_scsi agent to d
On Tue, Jan 12, 2010 at 11:21:14AM -0500, Evan Broder wrote:
> On Tue, Jan 12, 2010 at 3:54 AM, Christine Caulfield
> wrote:
> > On 11/01/10 09:38, Christine Caulfield wrote:
> >>
> >> On 11/01/10 09:32, Evan Broder wrote:
> >>>
> >>> On Mon, Jan 11, 2010 at 4:03 AM, Christine Caulfield
> >>> wro
On Wed, Sep 01, 2010 at 10:48:23AM -0400, Ben Turner wrote:
> Here is a kbase on fence scsi:
>
> https://access.redhat.com/kb/docs/DOC-17809
>
> It should answer any questions you have:
>
> https://access.redhat.com/kb/docs/DOC-17809
>
> Usually I try the fence_scsi_test to be sure my devices a
On Mon, Feb 28, 2011 at 12:43:10PM +0530, Parvez Shaikh wrote:
> Hi all,
>
> I have a question related to fence agents and SNMP alarms.
>
> A fence agent can fail to fence the failed node for various reasons; e.g. with
> my bladecenter fencing agent, I sometimes get message saying bladecenter
> fenc
On 04/12/2012 10:18 AM, emmanuel segura wrote:
That's right
you'll find your cluster partitioned, and if you "" as Red Hat suggests for setting up your cluster, maybe you get data
corruption
How? What fence agent are you using? I've used this configuration for
years and never had data corruption.
Because eve
Thanks. I applied this patch to the upstream git repo this morning.
Ryan
On 05/07/2012 04:08 AM, Masatake YAMATO wrote:
Signed-off-by: Masatake YAMATO
diff --git a/fence/agents/kdump/fence_kdump_send.8
b/fence/agents/kdump/fence_kdump_send.8
index 4cec124..ab95836 100644
--- a/fence/agents/kd
On 06/28/2012 09:32 PM, Zama Ques wrote:
Hi All,
I need to set up HA clustering using Red Hat Cluster Suite on two nodes, the primary
concern being high availability. Before trying it in production, I am trying
to configure the setup on two desktop machines. For storage, I am creating a
parti
On 09/06/2012 03:45 PM, Chip Burke wrote:
Now that ricci is figured out, I am having some issues with fencing.
It seems VMWare Fence works very well, but our GFS2 volume is not
available until it receives a "success" status. This gives us maybe
30-60 seconds of time where we cannot access the G
[EMAIL PROTECTED] wrote:
I am having issues with a server running gfs and an SELinux error. When
/etc/init.d/gfs start or service gfs start is run, it results in a
SELinux denial. If mount -a -t gfs is run as root it works fine. The
scripts also work if setenforce 0 is used. Running setsebool -P
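A common way to confirm whether SELinux is the culprit in a case like this (a generic diagnostic sketch, not specific to this report):

```shell
# Temporarily switch SELinux to permissive mode and retry the service.
setenforce 0
service gfs start
# Look for the AVC denials that were being enforced:
ausearch -m avc -ts recent
# Re-enable enforcing mode afterwards.
setenforce 1
```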
Christopher McCrory wrote:
Has anyone tried GFS with the Dell MD3000 array? This is a SAS
hardware raid array that can support four server connections including
shared access. (very nice piece of hardware). I have no experience
with GFS (yet).
Yes. I've tested GFS with this array using th
is problem.
Let me know if you encounter any other problem with gfs/selinux.
Ryan
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ryan O'Hara
Sent: Tuesday, August 28, 2007 1:44 PM
To: linux clustering
Subject: Re: [Linux-cluster] GFS, SELinux
Roger Peña wrote:
is this related to the fact that selinux policy
stated
this:
genfscon gfs / system_u:object_r:nfs_t
Yes. This is what would be used for a filesystem that does not support
selinux xattrs. In RHEL4.5, SELinux xattr support was added to GFS.
However...
should I follow wha
Ozarchuk, John D wrote:
I have two nodes on the same subnet, can ping each other, are both
alive, both are members of a two-node cluster. When I start cman on
both nodes at the same time it says “X not a cluster member after 60 sec
post_join_delay”. The output of clustat shows that the other
Sadek, Abdel wrote:
Is CLVM a requirement for SCSI-3 Persistent Reservations to work on a
RHEL 5.0 native cluster?
Yes.
I have a 2-node cluster. If I build my file system directly on the
devices /dev/sdb /dev/sdd etc.., there are no Persistent Reservations
being established on my storage arr
Christopher Barry wrote:
Greetings all,
I have 2 vmware esx servers, each hitting a NetApp over FS, and each
with 3 RHCS cluster nodes trying to mount a gfs volume.
All of the nodes (1,2,& 3) on esx-01 can mount the volume fine, but none
of the nodes in the second esx box can mount the gfs volu
Just out of curiosity ... was scsi_reserve enabled by default? It
shouldn't be. If it is I'll have to fix that.
Christopher Barry wrote:
Are you intentionally trying to use scsi reservations as a
fence method?
No. In fact I thought the scsi_reservation service may be
*causing* the
issue,
Christopher Barry wrote:
On Wed, 2007-10-31 at 10:44 -0500, Ryan O'Hara wrote:
Christopher Barry wrote:
Greetings all,
I have 2 vmware esx servers, each hitting a NetApp over FS, and each
with 3 RHCS cluster nodes trying to mount a gfs volume.
All of the nodes (1,2,& 3) on esx-01
Christopher Barry wrote:
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ryan O'Hara
Sent: Friday, November 02, 2007 1:43 PM
To: linux clustering
Subject: Re: [Linux-cluster] scsi reservation issue
Just out of curiosity ... what scsi_re
Christopher Barry wrote:
Okay. I had some other issues to deal with, but now I'm back to this,
and let me get you all up to speed on what I have done, and what I do
not understand about all of this.
status:
esx-01: contains nodes 1 thru 3
esx-02: contains nodes 4 thru 6
esx-01: all 3 cluster n
[EMAIL PROTECTED] wrote:
Ryan,
Thank you so much for your replies.
I tracked down the registration and reserve to the first cluster node,
by converting the hex value to the IP per your instructions. All nodes
reported only this one registration.
On that node, I try:
# sg_persist -C --out /dev
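The hex-to-IP conversion mentioned above can be sketched as follows. This assumes, per the thread, that the registration key is simply the node's IPv4 address packed big-endian (as older fence_scsi versions did); `key_to_ip` is an illustrative helper, not part of any tool:

```python
def key_to_ip(key: str) -> str:
    """Decode a SCSI-3 PR registration key into a dotted-quad IPv4
    address, assuming the key is the address packed big-endian."""
    value = int(key, 16)
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(key_to_ip("0xc0a8010a"))  # 192.168.1.10
```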
[EMAIL PROTECTED] wrote:
I didn't notice that something had changed until now after I upgraded all of
the nodes. I see in /etc/cluster the following;
-rw-r- 1 root root 1249 Nov 10 18:40 .cluster.conf
-rw-r- 1 root root 1249 Nov 13 09:50 cluster.conf
-rw-r--r-- 1 root root34
you
want scsi_watchdog to run. I'm guessing that neither of these applies to
you, and therefore scsi_watchdog is having no effect. It is disabled
by default.
On Wed, 21 Nov 2007 10:31:32 -0600, Ryan O'Hara wrote:
[EMAIL PROTECTED] wrote:
I didn't notice that something had c
jr wrote:
Hi Guys,
does GFS not work with SELinux at all, even though SELinux seems to
initialize the filesystem correctly right after the mount, and the files
show labels? (ls -lZ) (this is CentOS 5.1 with the most recent packages,
using GFS non2).
it seems as if i ran into something like that.
Did you pull the source from cvs or did you grab one of the tar.gz files?
Ryan
Alexandre Racine wrote:
You are right, that package was not installed.
So now I installed the package, and recompiled "fence", but "fence_scsi" is
still not there in /sbin/
Any more idea? (Thanks for the first hi
m.. mm.. wrote:
I don't get it working:
This error message comes..when i try to fence node2.
Jan 12 20:38:57 xxx4n1 fence_node[20776]: agent "fence_ipmilan" reports:
Rebooting machine @ IPMI:192.168.69.4...ipmilan: Failed to connect after 30 seconds Failed
Jan 12 20:38:57 xxx4n1 ccsd[3239]: Att
Attached is the latest version of the "Using SCSI Persistent
Reservations with Red Hat Cluster Suite" document for review.
Feel free to send questions and comments.
-Ryan
Using SCSI Persistent Reservations with Red Hat Cluster Suite
Ryan O'Hara <[EMAIL PROTECTED]>
fence_scsi), using an msdos partitioned disk
seems to work fine.
This is only in testing but I haven't seen any issues as of yet.
Ryan O'Hara wrote:
Attached is the latest version of the "Using SCSI Persistent
Reservations with Red Hat Cluster Suite" document for review.
Feel fr
Tomasz Sucharzewski wrote:
Hello,
Ryan O'Hara wrote:
4 - Limitations
...
- Multipath devices are not currently supported.
What is the reason? Using at least two HBAs is strongly required in a SAN
environment, which makes SCSI reservation fencing useless without multipath support.
It has to do with h
I went back and investigated why this might happen. Seems that I had
seen it before but could not recall how this sort of thing happens.
For 4.6, the scsi_reserve script should only be run if you intend to use
SCSI reservations as a fence mechanism, as you correctly pointed out at
the end of
m.. mm.. wrote:
Hi Ryan (or somebody else),
I have a question about the documentation you have written on Red Hat and the
scsi_reservation fence.
About the Storage Requirements:
You write: "all shared storage must use LVM2 cluster volumes"
If I have 2 cluster nodes and shared /da
Fabio M. Di Nitto wrote:
ccs_test(8): not fully completed yet (another email will follow).
ccs_test should go away. It was never intended to be used as a
production tool, it was simply intended to be a tool to test ccs.
Furthermore, the fact that you must create connections and then use those