> If you make the LDAP daemon listen on all available interfaces, it will accept
> connections on the on-demand activated floating IP.
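
For OpenLDAP I take that to mean starting slapd with a wildcard listener URL instead of binding it to one specific address. On my boxes that would look roughly like this (the exact flags probably differ per distro, so treat it as a sketch):

# listen on all interfaces, so connections to the floating IP are accepted too
slapd -h "ldap:///" -u ldap -g ldap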

Well, I'm trying to get this to work and running into a wall... I've got 3 servers; I want LDAP to run on testvm2 and testvm3. I've configured LDAP on those two servers, then configured crm from scratch as follows:

configure property stonith-enabled=false
configure primitive LDAP-IP ocf:heartbeat:IPaddr2 params ip="10.1.1.80" cidr_netmask="16" op monitor interval="30s"
configure primitive LDAP lsb:ldap op monitor interval="40s"
configure ms LDAP-clone LDAP meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1"
configure colocation LDAP-with-IP inf: LDAP-IP LDAP-clone:Master
configure order LDAP-after-IP inf: LDAP-IP:start LDAP-clone:promote
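
I entered those one at a time from the shell; the same thing can presumably be batched in a single interactive session and committed at the end, roughly like this (prompt text from memory, so don't hold me to it):

# crm configure
crm(live)configure# primitive LDAP lsb:ldap op monitor interval="40s"
crm(live)configure# ms LDAP-clone LDAP meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1"
crm(live)configure# commit
crm(live)configure# quit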

Here's the crm config at the moment:

node testvm1
node testvm2
node testvm3
primitive LDAP lsb:ldap \
        op monitor interval="40s"
primitive LDAP-IP ocf:heartbeat:IPaddr2 \
        params ip="10.1.1.79" cidr_netmask="16" \
        op monitor interval="30s"
ms LDAP-clone LDAP \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1"
colocation LDAP-with-IP inf: LDAP-IP LDAP-clone:Master
order LDAP-after-IP inf: LDAP-IP:start LDAP-clone:promote
property $id="cib-bootstrap-options" \
        dc-version="1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="3" \
        stonith-enabled="false"

But when I check the status, the IP isn't started anywhere, and LDAP is running on testvm2 and testvm3 (as I wanted), but both instances are listed as slaves:

============
Last updated: Mon Feb  1 14:29:47 2010
Stack: openais
Current DC: testvm3 - partition with quorum
Version: 1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7
3 Nodes configured, 3 expected votes
2 Resources configured.
============

Online: [ testvm3 testvm2 testvm1 ]

Master/Slave Set: LDAP-clone
        Slaves: [ testvm2 testvm3 ]

Failed actions:
LDAP_monitor_0 (node=(null), call=4, rc=5, status=complete): not installed
LDAP:1_monitor_0 (node=(null), call=5, rc=5, status=complete): not installed
[r...@testvm3 ~]#

So I guess my 3 questions are:

1: Why isn't one of the LDAP servers being promoted to master?
2: Is the floating IP down because I've specified that it stick to the master, but there is no master?
3: I'd like the LDAP master to live on testvm3 (with the floating IP) at all times, only failing over to testvm2 if testvm3 goes down, but I'm not clear on how to specify a preference for which node becomes master (my best guess is sketched below).
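
For question 3, my best guess from the docs is a location constraint with a role-based rule, something along these lines (the constraint id and the score of 100 are just placeholders I picked):

configure location LDAP-master-on-testvm3 LDAP-clone rule $role=Master 100: #uname eq testvm3

If that's the right approach, the existing colocation should then pull the floating IP to wherever the master ends up.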

The "Failed actions" exist, I'm guessing, because I didn't install an ldap server on testvm1, but I don't care about that because I only want LDAP to stay on testvm2 and testvm3.

I feel like I'm close but just lack a little understanding. The docs have me almost there, but parts of this are obviously still blurry to me. Any help is much appreciated! I think Pacemaker will do great things for us once I get it working as expected.

_______________________________________________
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
