Re: 10.0.1.0/24

2015-11-10 Thread Pshem Kowalczyk
Hi,

I used 10.0.0.0/23 as my MAAS range.

kind regards
Pshem


On Tue, 10 Nov 2015 at 21:38 Andrew McDermott <
andrew.mcderm...@canonical.com> wrote:

> Hi,
>
> Where did you specify your base range of "10.0.0.0/23"?
>
> On 10 November 2015 at 03:03, Pshem Kowalczyk  wrote:
>
>> Hi,
>>
>> I've just re-created my environment from MAAS and noticed that my lxc
>> containers can't talk out to the world, although the world can still
>> talk to them: outbound ICMP would not work, but inbound ICMP from a
>> different machine on the same L2 broadcast domain would. That
>> obviously broke provisioning, since the containers couldn't curl
>> anything.
>>
>> After a little bit of looking around I found this iptables rule (in
>> the nat table) on a host freshly deployed by juju.
>>
>> Chain POSTROUTING (policy ACCEPT 102 packets, 10926 bytes)
>>  pkts bytes target      prot opt in   out  source         destination
>>    42  2807 MASQUERADE  all  --  *    *    10.0.1.0/24    !10.0.1.0/24
>>
>> Since I used 10.0.0.0/23 as my base range and the LXC containers were
>> getting 10.0.1.x/23 addresses, this rule ended up NATing the
>> containers' outbound requests to the host's own IP - not good.
>>
>> What creates this rule, and what is it for in the first place?
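>>
>> For reference, a rough and untested sketch of a workaround would be to
>> rescope the rule to the full /23 on an affected host (this assumes the
>> rule can simply be replaced by hand, which is part of what I'm asking
>> about):
>>
>>   # delete the /24-scoped rule that appeared on the host
>>   sudo iptables -t nat -D POSTROUTING -s 10.0.1.0/24 ! -d 10.0.1.0/24 -j MASQUERADE
>>   # re-add it scoped to the actual MAAS range
>>   sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/23 ! -d 10.0.0.0/23 -j MASQUERADE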
>>
>>
>> kind regards
>> Pshem
>>
>>
>
>
> --
> Andrew McDermott 
> Juju Core Sapphire team 
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Latest on the LXD Provider!

2015-11-10 Thread Simon Davy
On 9 November 2015 at 18:19, Rick Harding  wrote:
> Thanks Katherine. That's looking great. One request: for the next demo
> I'd be curious to see how easy it is to run multiple lxd environments
> locally. I know it's been possible with lxc before, with a bunch of config.

Just an FYI, I have a tool to manage multiple local provider environments.

https://github.com/bloodearnest/plugins/blob/juju-add-local-env/juju-add-local-env

I have ~12 local environments that I switch between for my
day-to-day work, usually with 3-4 bootstrapped at once. I couldn't work
effectively without multiple environments.

Hopefully the above utility will be made obsolete by the lxd provider,
but it might be useful in the meantime.
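
For the curious, the kind of workflow it is meant to make easier is
roughly the following (a sketch with made-up environment names, assuming
juju 1.x with named local environments defined in environments.yaml):

  juju bootstrap -e dev-api          # bring up one local environment
  juju bootstrap -e dev-worker       # and another, side by side
  juju switch dev-api                # make it the default environment
  juju deploy -e dev-worker ubuntu   # or target one explicitly per command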

> Ideally we'd
> just be able to create a new named section and say it's lxd and boom, I can
> bootstrap the new one and have it distinct from the first.

Yes! I hope we'll be able to have one lxd container running a
multi-environment state server that can manage multiple lxd
environments on your host (or remotely?)! That would be a great dev
experience.

> Keep up the great work!

And it is indeed great work :)

We've been using lxd with the manual provider, and we've been really
impressed with what lxd brings to the table.

A couple of questions:

 - do you have plans to support applying user-defined lxd profiles to
the lxd containers that juju creates? This would be great in dev, and
in special cases (e.g. giving your charm access to the GPU, or
bind-mounting a host dir) - see the sketch below

 - likewise, will users be able to specify the base lxd image to use?
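
To illustrate the first question, the kind of profile I have in mind
looks roughly like this (the profile name, paths and device names are
made up, the device syntax may vary by lxd version, and whether juju
could apply such a profile is exactly what I'm asking):

  lxc profile create charm-dev
  # bind-mount a host directory into the container
  lxc profile device add charm-dev code disk source=/home/ubuntu/src path=/srv/src
  # expose a host device node, e.g. for GPU access
  lxc profile device add charm-dev nvidia0 unix-char path=/dev/nvidia0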

Many thanks for this work; it has the potential to really benefit our
daily usage of juju.

Thanks



-- 
Simon

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Feature Request: -about-to-depart hook

2015-11-10 Thread Mario Splivalo
On 02/12/2015 07:41 PM, Jorge Niedbalski wrote:
>> While typing up https://bugs.launchpad.net/juju-core/+bug/1417874 I
>> realized that your proposed solution of a pre-departure hook is the
>> only one that can work. Once -departed hooks start firing, both the
>> doomed unit and the leader have already lost the access needed to
>> decommission the departing node.
> 
> I have been struggling the last hours with the same exact issue trying
> to add replication to memcached.
> 
> The problem is that there is no point at which I can identify
> which unit is actually the one departing.
> 
> And this leads to manual operator intervention, which is _highly_
> undesirable for a juju-deployed environment.
> 
> +1 for having this feature implemented.

Hola!

I'm bumping this thread to get some chatter going - we hit a similar
issue with the percona-cluster charm, which is reported in this bug:

https://bugs.launchpad.net/charms/+source/percona-cluster/+bug/1514472

The issue is somewhat similar to the mongodb one - when a unit is
leaving a relation (triggered by 'juju remove-unit'), the charm should
first shut down the percona server on the departing unit. Failing to do
so results in a 'lost quorum' situation where the remaining nodes think
the network has partitioned. Unfortunately there is no way for a
relation's -departed hook to know whether it is executing on the
departing unit or on one of the remaining ones, so it can't know
whether or not to stop the percona server. Implementing an
-about-to-depart hook would solve this issue (a small sketch of the
problem is below).
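
For illustration, this is roughly what the hook in question looks like
today and why it can't decide what to do (a simplified sketch - it
assumes the peer relation is named 'cluster' and that percona runs as
the 'mysql' service, both of which are just assumptions here):

  #!/bin/bash
  # hooks/cluster-relation-departed (illustrative sketch only)
  #
  # This hook fires on the departing unit *and* on every remaining unit,
  # and nothing in the hook environment says which role this unit plays.
  juju-log "departed: local=${JUJU_UNIT_NAME} remote=${JUJU_REMOTE_UNIT}"

  # What the charm would like to do, but cannot decide safely today:
  # if <this unit is the one being removed>; then   # no way to test this
  #     service mysql stop   # leave the cluster cleanly, preserve quorum
  # fi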

Mario


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev