OK - it seems I've achieved what I want via the following configuration:

node testvm1
node testvm2
node testvm3
primitive LDAP lsb:ldap \
        op monitor interval="40s" \
        op monitor interval="41s" role="Master"
primitive LDAP-IP ocf:heartbeat:IPaddr2 \
        params ip="10.1.1.80" cidr_netmask="16" \
        op monitor interval="30s"
ms LDAP-clone LDAP \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1"
location LDAP-IP-master LDAP-IP 10: testvm3
location LDAP-IP-slave LDAP-IP 5: testvm2
location LDAP-master LDAP-clone 10: testvm3
location LDAP-slave LDAP-clone 5: testvm2
colocation LDAP-with-IP inf: LDAP-IP LDAP-clone
order LDAP-after-IP inf: LDAP-IP:start LDAP-clone:promote
property $id="cib-bootstrap-options" \
        dc-version="1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="3" \
        stonith-enabled="false" \
        symmetric-cluster="false"

testvm3 is the preferred server for LDAP and the floating IP; if testvm3 goes down, the IP floats over to testvm2. However, when I check the status I see this:

============
Last updated: Mon Feb  1 19:51:34 2010
Stack: openais
Current DC: testvm1 - partition with quorum
Version: 1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7
3 Nodes configured, 3 expected votes
2 Resources configured.
============

Online: [ testvm3 testvm2 testvm1 ]

LDAP-IP (ocf::heartbeat:IPaddr2):       Started testvm3
Master/Slave Set: LDAP-clone
        Slaves: [ testvm3 testvm2 ]

Both testvm2 and testvm3 are still "Slaves". Does anyone know what this means? Why isn't one of them a Master? I'd rather configure the IP to follow the "Master" than do what I'm doing now, which is simply telling the IP and LDAP that they prefer testvm3. I have LDAP-clone and LDAP-IP colocated, but since the clone can exist on either node that doesn't mean much. I think the IP is following the preference I set rather than "following the master", since there is no master?
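(To be clear, by "follow the Master" I mean the role-based colocation from my earlier attempt, quoted below:

colocation LDAP-with-IP inf: LDAP-IP LDAP-clone:Master

but as far as I can tell that only does something useful once one of the clone instances is actually promoted.)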

How does one promote a slave to master automatically?
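My understanding so far, and I may well be wrong: Pacemaker only promotes an instance when the resource agent supports the promote/demote actions and reports a master preference, which a plain LSB init script like lsb:ldap doesn't do. For a manual test with an agent that does support promotion, I believe the crm shell can at least request it (though I haven't verified this against my setup):

crm resource promote LDAP-clone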

Thanks again for any insight!

-erich

Erich Weiler wrote:
If you make the LDAP daemon listen on all available interfaces, it will accept connections on the on-demand-activated floating IP.

Well, I'm trying to get this to work and I'm running into a wall... I've got 3 servers, and I want LDAP to run on testvm2 and testvm3. I've configured LDAP on those 2 servers. Then I configured crm from scratch like this:

configure property stonith-enabled=false
configure primitive LDAP-IP ocf:heartbeat:IPaddr2 params ip="10.1.1.80" cidr_netmask="16" op monitor interval="30s"
configure primitive LDAP lsb:ldap op monitor interval="40s"
configure ms LDAP-clone LDAP meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1"
configure colocation LDAP-with-IP inf: LDAP-IP LDAP-clone:Master
configure order LDAP-after-IP inf: LDAP-IP:start LDAP-clone:promote

Here's the crm config at the moment:

node testvm1
node testvm2
node testvm3
primitive LDAP lsb:ldap \
        op monitor interval="40s"
primitive LDAP-IP ocf:heartbeat:IPaddr2 \
        params ip="10.1.1.79" cidr_netmask="16" \
        op monitor interval="30s"
ms LDAP-clone LDAP \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1"
colocation LDAP-with-IP inf: LDAP-IP LDAP-clone:Master
order LDAP-after-IP inf: LDAP-IP:start LDAP-clone:promote
property $id="cib-bootstrap-options" \
        dc-version="1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="3" \
        stonith-enabled="false"

But when I check the status, I see the IP is not started anywhere; LDAP is started on testvm2 and testvm3 (like I wanted), but both are listed as slaves:

============
Last updated: Mon Feb  1 14:29:47 2010
Stack: openais
Current DC: testvm3 - partition with quorum
Version: 1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7
3 Nodes configured, 3 expected votes
2 Resources configured.
============

Online: [ testvm3 testvm2 testvm1 ]

Master/Slave Set: LDAP-clone
        Slaves: [ testvm2 testvm3 ]

Failed actions:
LDAP_monitor_0 (node=(null), call=4, rc=5, status=complete): not installed
LDAP:1_monitor_0 (node=(null), call=5, rc=5, status=complete): not installed
[r...@testvm3 ~]#

So I guess my 3 questions are:

1: Why isn't one of the LDAP servers being promoted to master?
2: Is the floating IP down because I've specified that it stick to the master, but there is no master?
3: I'd like the LDAP master to exist on testvm3 (with the floating IP) at all times, only failing over to testvm2 if testvm3 goes down, but I'm not clear on how to specify a preference for a node to be a master (see the sketch just below this list).
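For question 3, the best guess I have so far is a role-scoped location rule along these lines, where the constraint name is just illustrative and I haven't actually tried it:

location LDAP-master-on-vm3 LDAP-clone rule $role="Master" 10: #uname eq testvm3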

The "Failed actions" exist, I'm guessing, because I didn't install an ldap server on testvm1, but I don't care about that because I only want LDAP to stay on testvm2 and testvm3.

I feel like I'm close but just lack a little understanding. The docs have me almost there, but I'm obviously still a bit blurry. Any help is much appreciated!! I think Pacemaker will do great things for us if I can get it working as expected....

_______________________________________________
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
