Hi,
I am using the bladecenter fencing method, and mostly it works fine.
Every now and then (I am still in pre-production testing) it fails.
What happens is this:
* It kills the power for the blade (the blade goes down and power status
in my Management Module is visibly off).
* It does not
Hi,
I am setting up a high-availability cluster with RHEL 5.
I am doing it with VMware; I give each of the virtual machines a disk that
will be shared between the two, formatted with gfs2, so that it can be
written and read simultaneously by both cluster nodes.
The problem
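For a shared GFS2 disk like the one described to stay coherent between nodes, the filesystem has to be created with the DLM lock protocol and a lock table matching the cluster name. A minimal sketch, assuming a two-node cluster named "vmcluster" and a shared disk at /dev/sdb (both names are hypothetical):

```shell
# Sketch only; cluster name "vmcluster" and device /dev/sdb are assumptions.
# GFS2 keeps the nodes coherent only when created with lock_dlm; using
# lock_nolock (or mounting without cman running) gives each node its own
# incoherent view, which matches the "writes not visible" symptom.
mkfs.gfs2 -p lock_dlm -t vmcluster:shared_gfs2 -j 2 /dev/sdb

# Mount on each node once cman is up:
mount -t gfs2 /dev/sdb /mnt/shared
```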
hi everybody,
we wrote a custom fence script for a powerswitch that wasn't supported
by the rhcs fence tools yet.
it can be used the same way as the other fence scripts (it works both via
command-line switches and via stdin).
what do we have to do to make cman recognize it?
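As a hedged sketch of one way this is typically wired up: fenced simply execs the agent named in the `agent` attribute of cluster.conf and passes the device attributes on stdin as name=value pairs, so a script that already speaks the stdin protocol mostly just needs to be installed where the agents live and referenced by name. The agent and device names below are hypothetical:

```xml
<!-- Sketch; "fence_mypower", "pswitch1", and the attribute values are
     hypothetical. The script would be installed as /sbin/fence_mypower
     alongside the stock agents. fenced passes these attributes (plus the
     per-node "port") to the agent on stdin. -->
<fencedevices>
  <fencedevice agent="fence_mypower" name="pswitch1" ipaddr="10.0.0.5"
               login="admin" passwd="secret"/>
</fencedevices>
...
<clusternode name="node1" nodeid="1">
  <fence>
    <method name="1">
      <device name="pswitch1" port="3"/>
    </method>
  </fence>
</clusternode>
```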
Hi Bob,
We now have gfs2_edit up and running and have been able to get an idea of
how the on-disk structure works. We are modifying values as you suggest,
then running fsck. We are finding that after running fsck, the values we
modified have been reverted to their previous state.
On Thu, 2008-03-27 at 12:15 +0100, Manuel Fernandez Panzuela wrote:
The problem comes when I mount the partition at its mount point; it gives
no error, but when I write from one node, I don't see what was written
from the other node.
Hi Manuel,
Can you post your error please?
--
Tiago Cruz
How are your cluster connections made? (i.e. are you using a hub, a
switch, or directly connected heartbeat cables?)
Dalton, Maurice wrote:
Still having the problem; I can't figure it out.
I just upgraded to the latest 5.1 cman. No help!
-Original Message-
From: [EMAIL
Hi Manuel,
I only speak a little Spanish. Do you speak English? People here speak
English.
Alexandre Racine
514-461-1300 poste 3304
[EMAIL PROTECTED]
-Original Message-
From: [EMAIL PROTECTED] on behalf of Manuel Fernandez Panzuela
Sent: Thu 2008-03-27 07:15
To:
First of all, Wendy is right. If you can use hardware snaps
to restore the data, you'll be much better off. If that's not
possible for some reason, keep reading.
I hadn't realised NetApp SANs could do this until I saw Wendy's post. Once
I'd talked to our sysadmin, I found they had turned this
Cisco 3550
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bennie Thomas
Sent: Thursday, March 27, 2008 9:53 AM
To: linux clustering
Subject: Re: [Linux-cluster] 3 node cluster problems
What is the switch brand? I have read that RHCS has problems
with some switches. Are you using a private VLAN for your cluster
communications? If not, you should be. The communication between the
clustered nodes is very chatty. Just my opinion.
These are my opinions and experiences.
I believe some of the Cisco switches do not have multicast enabled by
default, which would prevent some of the cluster communications from
getting through properly.
http://kbase.redhat.com/faq/FAQ_51_11755
John
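For what it's worth, a common variant of this on Catalyst-class switches is IGMP snooping running without any querier on the VLAN, so group memberships time out and the switch stops forwarding the cluster's multicast. A sketch of the fix (exact syntax varies by IOS version; treat this as an assumption to verify against your switch documentation):

```
! Sketch for a Catalyst-class switch; verify against your IOS version.
! With snooping on but no multicast router or querier on the VLAN,
! memberships expire and cluster multicast traffic gets dropped.
conf t
 ip igmp snooping querier
end
```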
Bennie Thomas wrote:
Are you using a private vlan for your cluster
Ours are enabled. I have verified that.
Thanks
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of John Ruemker
Sent: Thursday, March 27, 2008 10:41 AM
To: linux clustering
Subject: Re: [Linux-cluster] 3 node cluster problems
I believe some of the cisco
I have removed the 3rd server. As long as I am running with 2 nodes and
qdisk, I am not seeing any problems.
When I add the 3rd server, my problems begin.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bennie Thomas
Sent: Thursday, March 27, 2008 10:28 AM
denis wrote:
Hi all,
This happens whenever I try to ack a manual fence (which I have to
whenever the fence_bladecenter fails, as described in my previous post) :
# fence_ack_manual -n host1.domain.com
Warning: If the node host1.domain.com has not been manually fenced
(i.e. power
On Wed, 2008-03-26 at 16:18 -0300, Davi Baldin wrote:
Hello All,
I am thinking of trying LVM2 on my cluster servers and have some questions
about it. Some answers I found on Google, but some I did not:
1. Is cluster LVM stable for production on Red Hat 5.1?
Always run the latest errata,
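As a hedged sketch of the usual setup on RHEL 5 (package and service names here are the stock ones, but verify against your install): clustered LVM2 needs clvmd running on every node and cluster locking enabled in lvm.conf:

```shell
# Sketch for RHEL 5; verify names against your install. lvmconf rewrites
# /etc/lvm/lvm.conf to set locking_type = 3 (DLM-backed cluster locking).
lvmconf --enable-cluster
service clvmd start
chkconfig clvmd on
```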
Sorry to bump my own post again (again), but I've loaded my config up in
system-config-cluster and rebuilt it to the best of my knowledge, and am
still experiencing the same issue (my child scripts aren't being
launched). See the rg_test output:
Running in test mode.
Loaded 17 resource rules
===
16 matches
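For child resources to be started, they have to be nested inside the parent resource in the service definition, and each child's type must match one of the resource rules rg_test loaded; a typo in a child's type name makes rgmanager silently skip it. A sketch of the nesting, with hypothetical names:

```xml
<!-- Sketch; "myservice", "child1", and the paths are hypothetical.
     rgmanager starts the <script> child only after its parent <ip>
     resource starts successfully. -->
<service name="myservice" autostart="1">
  <ip address="10.0.0.20" monitor_link="1">
    <script name="child1" file="/etc/init.d/myapp"/>
  </ip>
</service>
```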