Whenever I deploy a new unit, the 'install' and 'config-changed' hooks run and the unit configures itself as a standalone unit. If there are other units in the service, the peer relation-joined and relation-changed hooks run and the unit reconfigures itself as a member of the cluster.
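For reference, the unit-name workaround I describe below looks roughly like this. It's only a sketch: `wal_container_name` is a hypothetical helper of mine, and I'm assuming unit names of the usual `service/N` form (Swift container names can't contain '/', so it gets replaced):

```python
import os
import re

def wal_container_name(unit_name):
    """Derive a Swift container name unique to this unit.

    Hypothetical helper; assumes unit names look like 'service/0'.
    Replaces any character Swift won't like with '-'.
    """
    return 'wal-' + re.sub(r'[^A-Za-z0-9]+', '-', unit_name)

# Inside a hook, the unit name is available in the environment:
# container = wal_container_name(os.environ['JUJU_UNIT_NAME'])
```

The unit would write WAL logs to this per-unit container until it has joined the peer relation and a proper shared container can be agreed on.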
The problem I am having is with access to an external resource defined by the service's YAML configuration (in my case, a Swift container used to store WAL logs for PITR). There is always a period where two (or more) units each believe they are standalone and in control of the shared resource.

I think I have a workaround: use the unit name to generate a unique Swift container name, and use that until the unit has joined the peer relation. There are a few drawbacks to this approach, so it isn't ideal. Can anyone think of a better model?

I do notice from the logs that a standalone unit joins the peer relation, although no peer relation-joined hook runs until at least one more unit has joined. If the unit were joined to the peer relation right at the start, before the initial config-changed hook runs, then it could detect that there are other units in the service (though I suspect this would still be racy).

I think I recall a helper being suggested to allow a master unit to be elected safely, which would solve my problem properly (and also remove a lot of complexity from my charm). Did anything like this make it onto the roadmap?

-- 
Stuart Bishop <[email protected]>

-- 
Juju mailing list
[email protected]
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju
