> So, then instead of powering off the VM in vSphere, I instead tried a 
> `killall -9 corosync` on the primary.  This resulted in the VIP coming up on 
> node 3, and node 1 being rebooted.  Great!

Unfortunately, things don't work at all with the PostgreSQL resource 
agent...  When I `killall -9 corosync` on the primary node (node 3 in 
this case), the cluster ends up in the following state, with the VIP and 
PostgreSQL down indefinitely.  Node 3 is reset and comes back up, but as I 
don't have pacemaker and corosync set to autostart, nothing happens until I log 
on and issue a `pcs cluster start` on that node, which causes it to resume its 
former primary status.  What do I need to do to make the resources fail over to 
one of the standby nodes when the primary goes offline?  I feel like I must be 
missing something obvious...

 vfencing       (stonith:external/vcenter):     Started d-gp2-dbpg0-1
 postgresql-master-vip  (ocf::heartbeat:IPaddr2):       Stopped
 Master/Slave Set: postgresql-ha [postgresql-10-main]
     Slaves: [ d-gp2-dbpg0-1 d-gp2-dbpg0-2 ]
     Stopped: [ d-gp2-dbpg0-3 ]
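
In case it helps anyone spot the problem, here's what I plan to check next. 
The diagnostics are standard, but the constraints at the end are only my guess 
at what a working setup needs (resource names are the ones from the status 
above; the colocation/order pair and the attribute name are my assumptions, 
not something I've confirmed is in my CIB):

    # One-shot status including node attributes and fail counts, to see
    # whether the standbys are publishing any promotion scores
    crm_mon -Afr1

    # List the constraints currently in the CIB
    pcs constraint

    # Promotion scores live in transient node attributes; the
    # master-postgresql-10-main name is my assumption based on the
    # usual master-<instance> convention
    crm_attribute -N d-gp2-dbpg0-1 -l reboot -n master-postgresql-10-main -G

    # Constraints I'd expect to need so the VIP follows the promoted
    # instance and only starts after promotion:
    pcs constraint colocation add postgresql-master-vip with master postgresql-ha INFINITY
    pcs constraint order promote postgresql-ha then start postgresql-master-vip

If the standbys show no master-* attribute once the primary is fenced, I'd 
guess that alone would keep Pacemaker from promoting either of them.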

Thanks,
-- 
Casey