I'll be tracking these kinds of feature requests here: https://github.com/juju/juju/wiki/Feature-Requests
There's not much there yet, but we'll be filling it out as we go through the various other requests we've gotten. Thanks!

-Cheryl

On Mon, Nov 23, 2015 at 6:13 PM, Rick Harding <[email protected]> wrote:
> Thank you Cheryl!
>
> On Mon, Nov 23, 2015, 6:33 PM Cheryl Jennings <[email protected]> wrote:
>
>> This is already on my list. I'm still figuring out a good way to
>> organize / record these in a publicly viewable place.
>>
>> I'll send out a link once I get something together. (Hopefully tonight!)
>>
>> Thanks!
>>
>> -Cheryl
>> On Nov 23, 2015 5:20 PM, "Rick Harding" <[email protected]> wrote:
>>
>>> Thanks for the feedback. I think this is something we should try to
>>> make some time for. I've copied Alexis and we'll see what can be done
>>> with the team.
>>>
>>> Alexis, can you put this on the list of things to investigate for the
>>> future roadmap?
>>>
>>> Thanks
>>>
>>> On Tue, Nov 10, 2015 at 8:22 AM Mario Splivalo <[email protected]> wrote:
>>>
>>>> On 02/12/2015 07:41 PM, Jorge Niedbalski wrote:
>>>> >> While typing up https://bugs.launchpad.net/juju-core/+bug/1417874 I
>>>> >> realized that your proposed solution of a pre-departure hook is the
>>>> >> only one that can work. Once -departed hooks start firing, both the
>>>> >> doomed unit and the leader have already lost the access needed to
>>>> >> decommission the departing node.
>>>> >
>>>> > I have been struggling for the last few hours with the exact same
>>>> > issue, trying to add replication to memcached.
>>>> >
>>>> > The problem is that there is no point at which I can identify the
>>>> > exact departing unit.
>>>> >
>>>> > This leads to manual operator intervention, which is _highly_
>>>> > undesirable in a Juju-deployed environment.
>>>> >
>>>> > +1 for having this feature implemented.
>>>>
>>>> Hello!
>>>>
>>>> I'm bumping this thread to get some chatter going - we hit a similar
>>>> issue with the percona-cluster charm, which is reported in this bug:
>>>>
>>>> https://bugs.launchpad.net/charms/+source/percona-cluster/+bug/1514472
>>>>
>>>> The issue is similar to the one with mongodb: when a unit is leaving a
>>>> relation (via 'juju remove-unit'), the charm should first shut down the
>>>> percona server on the departing unit. Failing to do so results in a
>>>> 'lost quorum' situation where the remaining node thinks the network has
>>>> partitioned. Unfortunately, there is no way for a relation's -departed
>>>> hook to know whether it is executing on the departing unit or on a
>>>> remaining one, so it can't know whether or not to stop the percona
>>>> server. Implementing an -about-to-depart hook would solve this issue.
>>>>
>>>> Mario
>>>>
>>>> --
>>>> Juju-dev mailing list
>>>> [email protected]
>>>> Modify settings or unsubscribe at:
>>>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
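The decision Mario's charm cannot currently make might be sketched as follows. This is a minimal illustration, not existing Juju behavior: `JUJU_UNIT_NAME` is a real hook environment variable, but `JUJU_DEPARTING_UNIT` is hypothetical here, standing in for whatever context a pre-departure (-about-to-depart) hook would supply.

```python
import os


def should_stop_server(local_unit, departing_unit):
    """Return True when this hook runs on the unit being removed.

    With today's -departed hooks there is no reliable way to learn
    `departing_unit`; a pre-departure hook would supply it, letting the
    doomed unit stop its percona server before the cluster loses quorum.
    """
    return bool(departing_unit) and local_unit == departing_unit


def main():
    # JUJU_UNIT_NAME exists in real hook environments; JUJU_DEPARTING_UNIT
    # is the hypothetical piece of context this thread is asking for.
    local = os.environ.get("JUJU_UNIT_NAME", "")
    departing = os.environ.get("JUJU_DEPARTING_UNIT", "")
    if should_stop_server(local, departing):
        print("stopping percona on the departing unit")
    else:
        print("a peer is departing; nothing to stop here")


if __name__ == "__main__":
    main()
```

With that one extra piece of context, the hook on the departing unit would stop its server and the hooks on the surviving units would leave theirs running, avoiding the manual intervention described above.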
