Hi Andreas,

Many thanks for your suggestion. That is in fact the last configuration I 
tried, but it still didn’t seem to do the job:



pcs property set stonith-enabled=false

pcs property set no-quorum-policy=ignore

pcs resource defaults resource-stickiness=100

pcs resource create LDAP_Cluster_IP ocf:heartbeat:IPaddr2 ip=192.168.26.100 cidr_netmask=32 op monitor interval=5s

pcs resource create dirsrv lsb:dirsrv op monitor interval="6s" role="Master" timeout="2s"

pcs resource clone dirsrv

pcs constraint order LDAP_Cluster_IP then dirsrv-clone

pcs constraint colocation add dirsrv-clone with LDAP_Cluster_IP INFINITY
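
In case it’s useful, the resulting resource placement after loading the above 
can be checked with the usual status commands, for example:

pcs status resources

crm_mon -1

These show the clone instances on each node and which node currently holds the 
floating IP.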



Regards,

Bernie



From: Andreas Kurz [mailto:[email protected]]
Sent: 20 February 2016 14:02
To: Cluster Labs - All topics related to open-source clustering welcomed
Subject: Re: [ClusterLabs] Pacemaker for 389 directory server with multi-master 
replication



Hello,



On Sat, Feb 20, 2016 at 1:50 PM, Bernie Jones 
<[email protected]> wrote:

Hi all,



I’m new to this list and fairly new to Pacemaker, and have just spent a couple 
of days trying unsuccessfully to solve a configuration challenge.



I have seen a relevant post on this list from around four years ago but it 
doesn’t seem to have helped – here’s what I want to do.



I have 389 Directory Server running on two CentOS servers. It’s configured for 
MMR and my plan is to use one replica as the primary LDAP server, failing over 
to the secondary only if there’s a problem. This is to avoid frequent writes to 
both replicas causing high levels of bi-directional replication traffic. So I’m 
looking for failover rather than load balancing.



This works fine using a traditional load balancer configured appropriately for 
weighting and stickiness, with a simple heartbeat check against the LDAP 
server, but I’d like to see if I can use Pacemaker instead, with a floating IP 
across the two LDAP servers and appropriate monitoring to control switchover.



I’ve configured a floating IP resource OK but am struggling with the question 
of how to monitor the 389 server.



If I create a resource using lsb:dirsrv then I find that the server is started 
on the primary cluster node but not on the second node, which is understandable 
but not what I need.



What I would like to achieve is to have the 389 instances monitored but not 
controlled, so that the floating IP address switches across when required 
without the cluster stopping or starting the 389 instances.



I'd say you want to create a clone resource from your 389 resource, so there is 
one instance running on each node.
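
As a rough sketch (hypothetical resource names here: dirsrv for the 389 
resource and ldap_vip for the floating IP; adjust to whatever you have 
configured), something along these lines gives one instance per node and ties 
the IP to a node with a running instance:

pcs resource clone dirsrv clone-max=2 clone-node-max=1

pcs constraint colocation add ldap_vip with dirsrv-clone INFINITY

pcs constraint order dirsrv-clone then ldap_vip

The colocation keeps the IP on a node where a dirsrv clone instance is healthy, 
and the ordering brings the IP up only after dirsrv is running there.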



Regards,

Andreas



Right now I’m not sure whether I should be using the dirsrv resource or 
looking for some kind of simple ‘LDAP ping’ resource instead.
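
By ‘LDAP ping’ I just mean an anonymous base search against the local 
instance, roughly along these lines (ldap://localhost and the root-DSE search 
are only an illustration, nothing I have wired up yet):

ldapsearch -x -H ldap://localhost -s base -b "" "(objectclass=*)" >/dev/null 2>&1

i.e. exit status 0 when the server answers and non-zero otherwise, wrapped in 
whatever agent ends up doing the monitoring.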



Any advice would be hugely appreciated.



Kind regards,

Bernie



Tel: 01308 488392

Mob: 07770 587118

Profile: https://www.linkedin.com/in/berniejones





_______________________________________________
Users mailing list: [email protected]
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
