that
might help the cluster to promote the preferred site.
But my procedure comes without any warranty or further support, sorry.
Maloja01
On Mon, Apr 7, 2014 at 4:16 PM, Maloja01 maloj...@arcor.de wrote:
On 04/07/2014 03:00 PM, Ammar Sheikh Saleh wrote:
thanks for your help ... can you guide me
FIRST you need to set up fencing (STONITH) - I do not see any stonith
resource in your cluster - that WILL be a problem in your cluster.
You cannot migrate a Master/Slave resource. You should use crm_master to
score the master placement, and you should remove all
client-preferred location rules which
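For illustration, a rough sketch of that approach (the resource and
constraint names below are invented): crm_master is normally called from
inside the resource agent, but the master score is just a node attribute
named master-<instance>, so the same effect can be had with crm_attribute:

    # raise the master score for ms_db on node1 (attribute master-ms_db)
    crm_attribute -l reboot -N node1 -n master-ms_db -v 100
    # drop the client-side preference rule so the master score decides placement
    crm configure delete loc-prefer-node1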
On 03/14/2014 08:50 PM, David Vossel wrote:
- Original Message -
From: Maloja01 maloj...@arcor.de
To: Linux-HA linux-ha@lists.linux-ha.org
Sent: Friday, March 14, 2014 5:32:34 AM
Subject: [Linux-HA] How to tell pacemaker to process a new event during a
long-running resource operation
Hi all,
I have a resource which could in special cases have a very long-running
start operation.
If I have a new event (like switching a standby node back to online)
during the already running transition (the cluster is still in
S_TRANSITION_ENGINE), I would like the cluster to process it as soon
On 01/13/2012 11:04 AM, Niclas Müller wrote:
I've grouped both as www-services and now it is running like I want.
Time to take over is 4-6 seconds. That's good, but I want to get to 1-3
seconds if possible. There will not be many processes running, because I
only made a school project with Linux-HA,
hours to get the end-customer's service working. In such a case
even pacemaker could not do anything.
Kind regards
Fabian
But nice tip!
Thx
Erkan :)
On Sat, Jan 07, 2012 at 10:22:32AM +0100, Maloja01 wrote:
In another customer setup we decided to set a resource to status
unmanaged when it has to do some special work which should not be
interrupted. After the replication (in our case redo logs in a backup DB)
we set the resource to be managed again.
I have never tried to change already triggered
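In crm shell terms that pattern is just two commands around the critical
section (the resource name below is made up):

    # keep the cluster from touching the resource during the redo-log work
    crm resource unmanage res_backup_db
    # ... run the replication that must not be interrupted ...
    # then hand control back to the cluster
    crm resource manage res_backup_db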
On 08/18/2011 12:19 PM, Ulrich Windl wrote:
Hi!
Reading the docs, I learned that pacemaker understands more complex
dependencies than groups, where resources are strictly sequential. For
example, one could start a set of resources in parallel, then wait until all
are done, then start
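What is described here maps to ordered resource sets; a hedged crm-shell
sketch (resource names invented, syntax approximate for crmsh of that era):

    # rscA, rscB and rscC may start in parallel; rscD starts only after all three
    crm configure order ord_parallel_then_d inf: ( rscA rscB rscC ) rscD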
The order constraints do work as I assume, but I guess that
you run into a pitfall:
a clone is marked as up if one instance in the cluster is started
successfully. The order does not say that the clone on the same node
must be up.
Kind regards
Fabian
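If the dependent resource is itself a clone, one common remedy (an
assumption here, not something stated above) is interleaving, which ties
each instance to the peer instance on the same node instead of to the
clone as a whole:

    # the dependent clone waits only for the local instance of cl_storage
    crm configure clone cl_app p_app meta interleave=true
    crm configure order ord_storage_app inf: cl_storage cl_app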
On 08/10/2011 01:43 PM,
On 08/05/2011 08:30 AM, Ulrich Windl wrote:
Maloja01 maloj...@arcor.de wrote on 04.08.2011 at 18:49 in message
4e3acd86.1020...@arcor.de:
Hi Ulrich,
I did not follow the complete thread, just jumped in - sorry. Is the
resource inside a resource group? In this case the stickiness
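The message breaks off, but the usual point about groups is that their
stickiness is additive: each active member contributes its own stickiness,
so a default of 100 gives a five-member group an effective stickiness of
500. For example:

    # every resource contributes this value; group stickiness is the sum
    crm configure rsc_defaults resource-stickiness=100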
in this thread, it would be nice
for me ...
Yes you are right - so I will rewind the thread beginning from message
1 :)
Thanks a lot anyway.
Alain
From: Maloja01 maloj...@arcor.de
To: linux-ha@lists.linux-ha.org
Date: 05/08/2011 11:02
Subject: Re: [Linux-HA] Antw: Re: location
On 08/02/2011 05:06 PM, alain.mou...@bull.net wrote:
Hi
I have this simple configuration of locations and orders between resources
group-1 , group-2 and clone-1
(on a two nodes ha cluster with Pacemaker-1.1.2-7 /corosync-1.2.3-21) :
location loc1-group-1 group-1 +100: node2
location
Hi,
processes in state D look like they are locked in a kernel call/device
request. Do you have a problem with your storage? This is not cluster
related.
Kind regards
Fabian
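To see which processes are stuck and in which kernel routine they wait, a
quick check with standard procps options:

    # list uninterruptible (D-state) processes and their kernel wait channel
    ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'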
On 08/05/2011 01:55 PM, Ulrich Windl wrote:
Hi,
we run a cluster that has about 30 LVM VGs that are monitored every minute
Hope it helps to clarify ...
Thanks again
Alain
From: Maloja01 maloj...@arcor.de
To: linux-ha@lists.linux-ha.org
Date: 05/08/2011 11:40
Subject: Re: [Linux-HA] location and orders : Question about a behavior ...
Sent by: linux-ha-boun...@lists.linux-ha.org
On 08/04/2011 08:28 AM, Ulrich Windl wrote:
Hi!
Isn't the stickiness effectively based on the failcount? We have one resource
that has a location constraint for one node with a weight of 50 and a
stickiness of 10. The resource runs on a different node and shows no
tendency of
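The placement follows from the score arithmetic: the preferred node offers
50 from the location constraint, the current node only the stickiness of
10, so the constraint should win unless failures subtract from the score.
The computed allocation scores can be inspected directly:

    # show the allocation scores from the live CIB (ptest -sL on older installs)
    crm_simulate -sL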
Maloja01 maloj...@arcor.de wrote on 04.08.2011 at 12:58 in message
4e3a7b5c.1030...@arcor.de:
On 08/04/2011 08:28 AM, Ulrich Windl wrote:
Hi!
Isn't the stickiness effectively based on the failcount? We have one
resource
that has a location constraint for one node with a weight of 50
Are there other nodes with the same multicast address?
On 08/02/2011 12:38 AM, Hai Tao wrote:
I reinstalled the OS on node1 (in a two-node HA cluster; node1 had a disk
error) and reconfigured HA. However, after restarting heartbeat, I see
many string2msg_ll errors: node [?]
STONITH
You could use
- iLO system management boards
- IPMI system management boards
- power switches,
You can even run stonith -l to figure out a proper set of
stonith devices.
And yes, you can set up more than one heartbeat link.
Just add another link directive to /etc/ha.d/ha.cf
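A hedged ha.cf sketch (interface names and multicast group invented);
every additional communication line is simply one more directive:

    # /etc/ha.d/ha.cf -- two redundant heartbeat links
    mcast eth0 239.0.0.42 694 1 0
    bcast eth1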
Access rights to the directory? Is the directory available (created)?
- Original message -
From: Lars Marowsky-Bree [EMAIL PROTECTED]
To: General Linux-HA mailing list linux-ha@lists.linux-ha.org
Date: 29.08.2008 17:35
Subject: Re: [Linux-HA] Getting Heartbeat OK
Split-brain situations are *very* critical for a two-node setup, especially
when you are using shared media like disks, DRBD syncs and so on.
For bigger clusters the problem is a bit easier, because you get a quorum
loss if half the nodes are down or disconnected. You can use the directive
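The sentence is cut off, so which directive was meant is open; in CRM-based
clusters the knob for behaviour on quorum loss is typically
no-quorum-policy (my assumption here, for illustration only):

    # in a two-node cluster quorum loss is routine, so it is often ignored
    crm configure property no-quorum-policy=ignore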
Did you use the correct cn (the certificate attribute cn must be equal to
the cluster name)?
If you use the cluster name mycluster and your quorum server can
be reached under a special name (I don't remember it now, but you can strace
it easily), you can also use quorumdtest as a client test program to
I am searching for an I/O fencing method like SCSI(-3) reservation.
Is there any method implemented yet for use with heartbeat to avoid
accidentally mounting the same file system multiple times from different
nodes? Of course I could configure heartbeat not to mount twice, and I could
use a quorum server
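SCSI-3 persistent reservations can be driven with sg_persist from
sg3_utils; a rough sketch of the raw mechanics (device and keys invented):

    # register a key for this node, then take a "write exclusive,
    # registrants only" reservation; unregistered nodes can no longer write
    sg_persist --out --register --param-sark=0x1a2b /dev/sdb
    sg_persist --out --reserve --param-rk=0x1a2b --prout-type=5 /dev/sdb
    # inspect the current reservation
    sg_persist --in --read-reservation /dev/sdb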
I hope my email is not shipped twice, but my last mail seems not to have
reached the list.
My message was:
Is it possible to extend a running cluster with
new cluster nodes?
The extension should be done without stopping
any resource placed on nodes which are running
in the cluster before we extend the cluster.
If it is possible, can I use the is_managed attribute
to leave the resources
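The is_managed idea in a hedged form (old crm_resource syntax, resource
name invented):

    # freeze the resource so the extension cannot cause a stop/restart
    crm_resource --resource res_app --meta --set-parameter is-managed --parameter-value false
    # ... add and join the new nodes ...
    crm_resource --resource res_app --meta --set-parameter is-managed --parameter-value true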