As per nacc's comment, "Wants=" seems to be the recommended way to
hook the start-up of one unit to the start-up of another unit.[1]

[1] -
https://www.freedesktop.org/software/systemd/man/systemd.unit.html#Wants=

I have tested two scenarios (with "Wants=pacemaker.service" added to
the corosync unit) and both look good so far.
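For reference, here is a minimal sketch of the change under test. It assumes the directive is applied as a drop-in override; the actual packaging fix may patch the shipped unit file instead:

```shell
# Sketch only: add Wants=pacemaker.service to corosync.service via a
# drop-in override (hypothetical path; "systemctl edit corosync.service"
# creates the same file interactively).
sudo mkdir -p /etc/systemd/system/corosync.service.d
sudo tee /etc/systemd/system/corosync.service.d/override.conf <<'EOF'
[Unit]
# Pull in pacemaker whenever corosync is started. Unlike Requires=,
# Wants= tolerates pacemaker.service being missing or failing.
Wants=pacemaker.service
EOF
sudo systemctl daemon-reload
```

Ordering (pacemaker after corosync) does not need to be repeated here, since pacemaker.service typically declares After=corosync.service itself.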

---------------------------------------
* Scenario #1
[Both corosync & pacemaker installed]
---------------------------------------

- pacemaker starts, stops, and restarts along with corosync.

root@xenialcorosyncpacemaker:~# systemctl status corosync | egrep "PID|Active:"
   Active: active (running) since Mon 2018-01-08 19:29:44 UTC; 21s ago
 Main PID: 445 (corosync)

root@xenialcorosyncpacemaker:~# systemctl status pacemaker | egrep "PID|Active:"
   Active: active (running) since Mon 2018-01-08 19:29:44 UTC; 27s ago
 Main PID: 447 (pacemakerd)

root@xenialcorosyncpacemaker:~# systemctl stop corosync

root@xenialcorosyncpacemaker:~# systemctl status corosync | egrep "PID|Active:"
   Active: inactive (dead) since Mon 2018-01-08 19:30:29 UTC; 1s ago
 Main PID: 445 (code=exited, status=0/SUCCESS)

root@xenialcorosyncpacemaker:~# systemctl status pacemaker | egrep "PID|Active:"
   Active: inactive (dead) since Mon 2018-01-08 19:30:29 UTC; 3s ago
 Main PID: 447 (code=exited, status=0/SUCCESS)

root@xenialcorosyncpacemaker:~# systemctl start corosync

root@xenialcorosync:~# systemctl status corosync | egrep "PID|Active:"
   Active: active (running) since Mon 2018-01-08 19:30:56 UTC; 1s ago
 Main PID: 474 (corosync)

root@xenialcorosyncpacemaker:~# systemctl status pacemaker | egrep "PID|Active:"
   Active: active (running) since Mon 2018-01-08 19:30:56 UTC; 3s ago
 Main PID: 476 (pacemakerd)

---------------------------------------
* Scenario #2
[corosync installed & pacemaker not installed]
---------------------------------------

- There don't seem to be any side effects when pacemaker isn't installed:
the Wants= option is simply ignored since pacemaker.service is not present.
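To double-check that the dependency is recorded but harmless, systemd can be queried directly (a hypothetical verification step, not part of the original test run):

```shell
# Wants= dependencies of corosync; the pacemaker.service name should
# still be listed even when no such unit file exists on disk.
systemctl show -p Wants corosync.service
# Confirm whether the pacemaker unit file is actually present:
systemctl list-unit-files pacemaker.service
```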

root@xenialcorosyncnopacemaker:~# systemctl status corosync | egrep "PID|Active:"
   Active: active (running) since Mon 2018-01-08 19:32:11 UTC; 53s ago
 Main PID: 1284 (corosync)


root@xenialcorosyncnopacemake:~# systemctl status pacemaker
● pacemaker.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)

root@v:~# systemctl stop corosync

root@xenialcorosyncnopacemake:~# systemctl status pacemaker
● pacemaker.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)

root@xenialcorosyncnopacemake:~# systemctl status corosync | egrep "PID|Active:"
   Active: inactive (dead) since Mon 2018-01-08 19:33:17 UTC; 4s ago
 Main PID: 1284 (code=exited, status=0/SUCCESS)


root@xenialcorosyncnopacemake:~# systemctl start corosync

root@xenialcorosyncnopacemake:~# systemctl status corosync | egrep "PID|Active:"
   Active: active (running) since Mon 2018-01-08 19:33:26 UTC; 1s ago
 Main PID: 1378 (corosync)

- Eric

-- 
You received this bug notification because you are a member of Ubuntu
High Availability Team, which is subscribed to pacemaker in Ubuntu.
https://bugs.launchpad.net/bugs/1740892

Title:
  corosync upgrade on 2018-01-02 caused pacemaker to fail

Status in OpenStack hacluster charm:
  Invalid
Status in corosync package in Ubuntu:
  In Progress
Status in pacemaker package in Ubuntu:
  New

Bug description:
  During upgrades on 2018-01-02, corosync and its libs were upgraded:

  (from a trusty/mitaka cloud)

  Upgrade: libcmap4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  corosync:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libcfg6:amd64
  (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libcpg4:amd64 (2.3.3-1ubuntu3,
  2.3.3-1ubuntu4), libquorum5:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  libcorosync-common4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4),
  libsam4:amd64 (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libvotequorum6:amd64
  (2.3.3-1ubuntu3, 2.3.3-1ubuntu4), libtotem-pg5:amd64 (2.3.3-1ubuntu3,
  2.3.3-1ubuntu4)

  During this process, it appears that the pacemaker service is restarted
  and errors out:

  syslog:Jan  2 16:09:33 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node juju-machine-1-lxc-3[1001] - state is now lost (was member)
  syslog:Jan  2 16:09:34 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node juju-machine-1-lxc-3[1001] - state is now member (was lost)
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:    error: cfg_connection_destroy: Connection destroyed
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: pcmk_shutdown_worker: Shuting down Pacemaker
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:   notice: stop_child: Stopping crmd: Sent -15 to process 2050
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:    error: pcmk_cpg_dispatch: Connection to the CPG API failed: Library error (2)
  syslog:Jan  2 16:14:32 juju-machine-0-lxc-4 pacemakerd[1994]:    error: mcp_cpg_destroy: Connection destroyed

  
  xenial/ocata is also affected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-hacluster/+bug/1740892/+subscriptions
