On 02/12/2015 07:41 PM, Jorge Niedbalski wrote:
>> While typing up https://bugs.launchpad.net/juju-core/+bug/1417874 I
>> realized that your proposed solution of a pre-departure hook is the
>> only one that can work. Once -departed hooks start firing, both the
>> doomed unit and the leader have already lost the access needed to
>> decommission the departing node.
>
> I have been struggling for the last few hours with the exact same
> issue while trying to add replication to memcached.
>
> The problem is that there is no point at which I can identify the
> exact departing unit.
>
> This leads to manual operator intervention, which is _highly_
> undesirable in a Juju-deployed environment.
>
> +1 for having this feature implemented.
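To make the proposal concrete, here is a minimal sketch of what such a pre-departure hook might look like. Everything here is an assumption for illustration: Juju does not currently ship a `-about-to-depart` hook, and the `JUJU_DEPARTING_UNIT` environment variable is likewise hypothetical; only `JUJU_UNIT_NAME` exists today.

```shell
#!/bin/sh
# Hypothetical hook: cluster-relation-about-to-depart (NOT a real Juju
# hook at the time of writing). The premise is that it fires while the
# departing unit still has relation access, and that an assumed
# JUJU_DEPARTING_UNIT variable names the unit about to leave.

# Helper: decide whether this unit is the one departing.
is_departing_unit() {
    departing="$1"   # value of the assumed JUJU_DEPARTING_UNIT
    local_unit="$2"  # value of JUJU_UNIT_NAME
    [ "$departing" = "$local_unit" ]
}

if is_departing_unit "${JUJU_DEPARTING_UNIT:-}" "${JUJU_UNIT_NAME:-self}"; then
    echo "this unit is departing: decommission while access remains"
    # e.g. remove ourselves from the replica set, or stop percona here
else
    echo "peer ${JUJU_DEPARTING_UNIT:-unknown} is departing: no local action"
fi
```

With this distinction available, the charm could cleanly decommission the doomed unit (or have the leader do so) before the -departed hooks fire and access is revoked.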
Hola! I'm bumping this thread to get some discussion going. We hit a
similar issue with the percona-cluster charm, reported in this bug:
https://bugs.launchpad.net/charms/+source/percona-cluster/+bug/1514472

The issue is similar to the one with mongodb: when a unit is leaving a
relation (triggered by 'juju remove-unit'), the charm should first shut
down the Percona server on the departing unit. Failing to do so results
in a 'lost quorum' situation where the remaining nodes think the
network has partitioned. Unfortunately, there is no way for a
relation's -departed hook to know whether it is executing on the
departing unit or on a surviving one, so it cannot decide whether or
not to stop the Percona server. Implementing an -about-to-depart hook
would solve this issue.

Mario

--
Juju-dev mailing list
[email protected]
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/juju-dev
