-----Original Message-----
From: "Moralejo, Alfredo"
Date: Wed, 10 Jun 2009 23:33:14
To: linux clustering
Subject: RE: [Linux-cluster] fence_scsi support in multipath env in RHEL5.3
Is anyone using it successfully?
I'm testing it with a clariion storage frame on RHEL 5.3, and as soon as I
Hi.
I'm testing a 3-node CentOS 5.3 cluster.
When 3 nodes had joined and one node then left the cluster, the expected
votes did not decrease.
So when 2 nodes had left (cman_tool leave), the one remaining node's status
changed to "activity blocked".
How can I avoid the "activity blocked" state?
With 3 nodes joined, everything is normal.
Version: 6.1.0
Co
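For what it's worth, `cman_tool leave` on its own does not lower the cluster's expected votes, so the quorum calculation still assumes three nodes. A sketch of the commands involved (see cman_tool(8); adjust the vote count to your setup):

```
# On the node that is leaving: leave AND reduce expected votes
cman_tool leave remove

# Or, on a remaining node, lower expected votes by hand
# (here to 1, so a single node stays quorate)
cman_tool expected -e 1

# Check the result
cman_tool status | grep -i votes
```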
I'm using the default config provided by Red Hat:
device {
    vendor "DGC"
    product ".*"
    product_blacklist "LUN_Z"
    getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
p
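For comparison, the RHEL 5 built-in hardware table for DGC/Clariion arrays looks roughly like the following (values can differ between minor releases -- verify against the multipath.conf.defaults file shipped with your device-mapper-multipath package):

```
device {
    vendor                  "DGC"
    product                 ".*"
    product_blacklist       "LUN_Z"
    getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
    prio_callout            "/sbin/mpath_prio_emc /dev/%n"
    hardware_handler        "1 emc"
    path_grouping_policy    group_by_prio
    failback                immediate
    features                "1 queue_if_no_path"
    path_checker            emc_clariion
}
```

The path_checker setting is worth comparing in particular, given the report elsewhere in this thread that certain path checkers cause problems with fence_scsi.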
On Wed, Jun 10, 2009 at 11:33:14PM +0200, Moralejo, Alfredo wrote:
> Is anyone using it successfully?
What path checker are you using? I've heard that certain path checkers
cause problems, but I honestly don't know enough about dm-multipath
to understand the reason for this.
I have successfully
Indeed, SAN replication could be another way to partially address this.
To make it work, one should be able to add a sort of external resource to the
cluster that monitors the synchronization status between the source LUNs and
the target ones, and automatically reverses the synchronization in
On Wed, Jun 10, 2009 at 02:21:41PM -0700, Ian Hayes wrote:
> Have you tried changing clean_start="0" to 1?
Nope, will do. I misinterpreted the fenced(8) man page thinking that
clean_start="0" was the way to do this.
Thanks,
Ray
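For reference, clean_start is an attribute of the fence_daemon element in cluster.conf. A minimal sketch (semantics per fenced(8): clean_start="1" tells fenced to assume all nodes start in a clean state and to skip startup fencing of nodes that have not yet joined):

```
<fence_daemon clean_start="1"/>
```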
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
Is anyone using it successfully?
I'm testing it with a Clariion storage frame on RHEL 5.3, and as soon as I
enable scsi_reserve, multipath starts failing: a path flips between good and
bad in a loop, and SCSI fencing sometimes fails. Should I configure
multipath.conf in a specific way?
Jun 10 23:31:
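When debugging this, it can help to look at the persistent reservations and registrations directly with sg3_utils (device names here are hypothetical; with dm-multipath it is worth checking the underlying sd path devices, not just the mpath device, since registrations are made per initiator path):

```
# List the registered reservation keys on one path of the LUN
sg_persist --in --read-keys --device=/dev/sdc

# Show the current reservation (type and holder)
sg_persist --in --read-reservation --device=/dev/sdc
```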
Have you tried changing clean_start="0" to 1?
On Wed, Jun 10, 2009 at 1:54 PM, Ray Van Dolson wrote:
> I'm setting up a simple 5 node "cluster" basically just for using a
> shared GFS2 filesystem between the nodes.
>
> I'm not really concerned about HA, I just want to be able to have all
> the n
I'm setting up a simple 5 node "cluster" basically just for using a
shared GFS2 filesystem between the nodes.
I'm not really concerned about HA, I just want to be able to have all
the nodes accessing the same block device (iSCSI)
Subhendu Ghosh wrote:
Would it be possible to look at migrating this agent to SSH (more secure)
I started with the idea of doing it over SSH, but the Net::SSH module seemed
to be a lot less forgiving about the terminal quirkiness. I can have
another go. There's also the issue of manual intervent
On Wed, Jun 10, 2009 at 02:11:21PM +0530, Rajeev P wrote:
> I wanted to know if fence_scsi is supported in a multipath environment for
> RHEL5.3 release.
Yes, it is supported.
> In earlier releases of RHEL5 fence_scsi was not supported in a multipath
> environment for RHEL5.3 release. If I am no
Gordan Bobic wrote:
> As the subject line says. The agent is attached.
> As all currently included fencing agents, this one is also written in
> Perl, and has the same requirements and dependencies as the DRAC fencing
> agent (Net::Telnet, Getopt::Std).
>
> What does it take to get it included in
Yes, assuming you have sufficient free extents. Just remember to add any needed
journals first.
-paul
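For the archive, the usual online-grow sequence looks something like the following (the VG/LV and mount point names are made up; gfs_grow runs on one node only, with the filesystem mounted, and journals should be added while the free space is still available, since gfs_grow will consume it):

```
# 1. Grow the underlying logical volume
lvextend -L +20G /dev/vg_cluster/lv_gfs

# 2. Add any extra journals first (e.g. when adding nodes) --
#    here one additional journal
gfs_jadd -j 1 /mnt/gfs

# 3. Grow the filesystem into the remaining new space, online
gfs_grow /mnt/gfs
```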
-----Original Message-----
From: Gary Romo
Date: Wed, 10 Jun 2009 12:17:01
To:
Subject: [Linux-cluster] gfs_grow
On Wed, 2009-06-10 at 13:16 +0200, Maciej Bogucki wrote:
> Rajeev P pisze:
> > I wanted to know if fence_scsi is supported in a multipath environment
> > for RHEL5.3 release.
> >
> > In earlier releases of RHEL5 fence_scsi was not supported in a
> > multipath environment for RHEL5.3 release. I
Can you increase GFS file systems on the fly, without unmounting or
stopping processes?
-Gary
On Wed, 2009-06-10 at 09:13 -0500, David Teigland wrote:
> On Wed, Jun 10, 2009 at 09:33:33AM -0400, William A. (Andy) Adamson wrote:
> > On Tue, Jun 9, 2009 at 3:36 PM, David Teigland wrote:
> > > On Tue, Jun 09, 2009 at 03:14:09PM -0400, William A. (Andy) Adamson wrote:
> > >> Hi David
> > >>
> >
On Wed, Jun 10, 2009 at 09:33:33AM -0400, William A. (Andy) Adamson wrote:
> On Tue, Jun 9, 2009 at 3:36 PM, David Teigland wrote:
> > On Tue, Jun 09, 2009 at 03:14:09PM -0400, William A. (Andy) Adamson wrote:
> >> Hi David
> >>
> >> Thanks for looking at this. The kernel does report a recursive lo
As the cluster wiki says:
http://sources.redhat.com/cluster/wiki/SCSI_FencingConfig
"Multipath devices are currently only supported for RHEL 5.0 and later with the
use of device-mapper-multipath."
Additionally, I found information in an HP document about how to set up the
cluster. According to that information it'
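For completeness, the wiki page's setup boils down to a cluster.conf fragment along these lines (node and device names are illustrative; the scsi_reserve init service must also be enabled at boot on each node, e.g. with chkconfig scsi_reserve on):

```
<clusternode name="node1" nodeid="1">
    <fence>
        <method name="1">
            <device name="scsi" node="node1"/>
        </method>
    </fence>
</clusternode>
...
<fencedevices>
    <fencedevice agent="fence_scsi" name="scsi"/>
</fencedevices>
```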
Tom Lanyon wrote:
Posting it to the Red Hat Bugzilla under the appropriate component
would also help.
http://bugzilla.redhat.com
It's not a bug fix, it's a feature addition.
The bugzilla is for "defects" which, along with bugs, includes requests
for enhancements, etc.
A quick search of th
Rajeev P pisze:
I wanted to know if fence_scsi is supported in a multipath environment
for RHEL5.3 release.
In earlier releases of RHEL5 fence_scsi was not supported in a
multipath environment for RHEL5.3 release. If I am not wrong, this was
because the DM-MPIO driver forwarded the registrat
On 10/06/2009, at 5:57 PM, Gordan Bobic wrote:
Subhendu Ghosh wrote:
Posting it to the Red Hat Bugzilla under the appropriate component
would also help.
http://bugzilla.redhat.com
It's not a bug fix, it's a feature addition.
The bugzilla is for "defects" which, along with bugs, includes
I wanted to know if fence_scsi is supported in a multipath environment for
RHEL5.3 release.
In earlier releases of RHEL5 fence_scsi was not supported in a multipath
environment for RHEL5.3 release. If I am not wrong, this was because the
DM-MPIO driver forwarded the registration/unregistration co
Subhendu Ghosh wrote:
Ideally, you want to post the patch to the upstream component.
Thanks for responding. It's mostly an initscript patch that checks for
file systems mounted on tmpfs (e.g. if we put /var/lock, /var/run or
similar there to save hitting the disk) and saves and restores s