Look at the order.
The master/replication VIP is started after promote and stopped after demote.
listen_addresses = "*" means PostgreSQL does not bind to specific interfaces; it listens on all interfaces, no matter whether they come up or go down after PostgreSQL has started.
The master sets each slave's promotion score to either CAN_PROMOTE=100 or CAN_NOT_PROMOTE=-INF: 100 for every slave that is in sync, and -INF for every slave that has lost sync.
When the master is gone, all slaves fall back to the function have_master_right and the normal election begins: every slave writes its own xlog location, and the one with the highest value sets its promotion score to 1000. Slaves that are not in sync no longer participate in the master election; this is the starting code of the function have_master_right.
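A rough sketch of that election in Python (illustrative only; the node names, attribute values, and function names below are assumptions for the example, not the resource agent's actual code):

```python
# Illustrative sketch of the pgsql RA election (NOT the real agent code).
# Each in-sync slave publishes its last replayed xlog location as a node
# attribute (PGSQL-xlog-loc); the slave holding the highest location gets
# master-score 1000, the others 100. Out-of-sync slaves are excluded.

def parse_lsn(lsn):
    """Convert an xlog location like '0/3000060' to a comparable integer."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) + int(lo, 16)

def elect(xlog_locations, in_sync):
    """xlog_locations: {node: 'X/Y'}; in_sync: nodes still eligible."""
    candidates = {n: parse_lsn(loc)
                  for n, loc in xlog_locations.items() if n in in_sync}
    best = max(candidates, key=candidates.get)
    return {n: (1000 if n == best else 100) for n in candidates}

# Hypothetical node names and positions for illustration:
scores = elect(
    {"p1": "0/3000060", "p2": "0/3000148", "p3": "0/2FFFFE0"},
    in_sync={"p1", "p2"},
)
# p2 holds the highest location among the in-sync slaves, so it wins;
# p3 is out of sync and takes no part in the election.
```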
Rainer
Sent: Friday, 29 March 2013 at 12:33
From: "Steven Bambling" <smbambl...@arin.net>
To: "The Pacemaker cluster resource manager" <pacemaker@oss.clusterlabs.org>
Subject: Re: [Pacemaker] PGSQL resource promotion issue
On Mar 28, 2013, at 8:13 AM, Rainer Brestan <rainer.bres...@gmx.net> wrote:
Hi Steve, I think you have misunderstood how IP addresses are used with this setup; PGVIP should start after promotion. Take a look at Takatoshi's wiki.
I see that he has the master/replication VIPs with a resource order to force promotion before moving the VIPs to the new master.
I don't get how the postgres service is going to listen on those interfaces if they have not already migrated to the new master, even with listen_addresses = "*" set.
The promotion sequence is very simple. When no master exists, all slaves write their current replay xlog location into the node attribute PGSQL-xlog-loc during the monitor call.
Does this also hold true if a Master fails?
From the looks of it, if there was a master before the failure, the master score is set by the function that grabs the data_status from the master (STREAMING|SYNC, STREAMING|ASYNC, STREAMING|POTENTIAL, etc.).
The reason I ask is that if the master fails and the slaves don't then compare their xlog locations, there is a potential for data loss if the wrong slave is promoted.
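To make the data-loss risk concrete, here is a small illustrative calculation (the positions are made up for the example): PostgreSQL xlog locations are "hi/lo" hex pairs, and the byte difference between two standbys' replay positions is WAL that the lagging one never saw.

```python
# Illustrative: why promoting the wrong slave loses data. The difference
# in bytes between two standbys' replay positions is WAL that the lagging
# standby never replayed and would be discarded on promotion.

def lsn_to_int(lsn):
    """'X/Y' hex pair -> single comparable integer."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) + int(lo, 16)

# Hypothetical replay positions at the moment the master dies:
ahead = lsn_to_int("0/5000000")   # slave A, further along
behind = lsn_to_int("0/4FF0000")  # slave B, lagging

lost_bytes = ahead - behind
print(lost_bytes)  # 65536 bytes of WAL lost if B is promoted instead of A
```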
You can see all of them with crm_mon -A1f. Each slave gets these attributes from all nodes configured in the parameter node_list (hopefully your node names in Pacemaker are the same as in node_list) and compares them to find the highest. If the highest in this list is its own, it sets the master-score to 1000; on the other nodes it is set to 100. Pacemaker then selects the node with the highest master score and promotes it.

Rainer

Sent: Wednesday, 27 March 2013 at 14:37
From: "Steven Bambling" <smbambl...@arin.net>
To: "The Pacemaker cluster resource manager" <pacemaker@oss.clusterlabs.org>
Subject: Re: [Pacemaker] PGSQL resource promotion issue

In talking with andreask on IRC, I misunderstood the need to include the op monitor; I figured it was pulled from the resource script by default. I used pcs to add the new attributes and one node was then promoted to master:

pcs resource add_operation PGSQL monitor interval=5s role=Master
pcs resource add_operation PGSQL monitor interval=7s

v/r
STEVE

On Mar 27, 2013, at 7:08 AM, Steven Bambling <smbambl...@arin.net> wrote:

I've built and installed the latest resource-agents from GitHub on CentOS 6 and configured two resources.

1 primitive PGVIP:

pcs resource create PGVIP ocf:heartbeat:IPaddr2 ip=10.1.22.48 cidr_netmask=25 op monitor interval=1

Before setting up the PGSQL resource I manually configured sync/streaming replication on the three nodes with p1.example.com as the master and verified that replication was working. I then removed the synchronous_standby_names setting from my postgresql.conf and stopped all postgres services on all nodes.

1 master/slave PGSQL (I've configured the resource to use sync replication; I am using PGSQL 9.2.3):

pcs resource create PGSQL ocf:heartbeat:pgsql params pgctl="/usr/pgsql-9.2/bin/pg_ctl" pgdata="/var/lib/pgsql/9.2/data" config="/var/lib/pgsql/9.2/data/postgresql.conf" stop_escalate="5" rep_mode="sync" node_list="p1.example.com p2.example.com p3.example.com" restore_command='cp /var/lib/pgsql/9.2/archive/%f "%p"' master_ip="10.1.22.48" repuser="postgres" restart_on_promote="true" tmpdir="/var/lib/pgsql/9.2/tmpdir" xlog_check_count="3" crm_attr_timeout="5" check_wal_receiver="true" --master

I'm able to successfully get all the nodes in the cluster started; the PGVIP resource starts on the 1st node and the PGSQL:[012] resources start on each node in the cluster.
The one thing I don't understand is why none of the slaves is taking over the master role. Also, how would I go about force-promoting one of the slaves into the master role via the pcs command-line utility?

v/r
STEVE
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org