Re: Handling uninstallation upon subordinate removal in reactive

2017-11-27 Thread Stuart Bishop
On 22 November 2017 at 01:58, Junien Fridrick
 wrote:

> I was also thinking that maybe the snap layer could save which
> application installed which snap, and if said application is removed,
> then remove said snap. Would that be a good idea?

Your subordinate charm doesn't know that it is the only charm using the
snap on that unit, so this sort of cleanup is dangerous. Right now it is
fairly safe because snaps in charms are rare, but I expect that to change
over time. It is exactly the same as automatically removing deb packages,
which charms rarely if ever do. Also note that snaps can contain data,
and removing them can destroy that data. Subordinates need to tread
lightly to avoid victimizing other charms on the unit; primary charms
don't bother, because removing them means the machine should ideally be
rebuilt.
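
To illustrate the kind of bookkeeping being proposed, here is a rough,
untested sketch using charmhelpers' unitdata key/value store (the key name
and helper are made up). Note it only records snaps *this* unit installed,
so it still cannot tell whether another charm on the same machine also
depends on them:

from charmhelpers.core import unitdata

def record_installed_snap(snap_name):
    # Remember that *this* unit installed the snap. Other charms colocated
    # on the machine keep their own records (or none), so this alone cannot
    # tell us whether the snap is safe to remove.
    db = unitdata.kv()
    installed = db.get('snap.installed-by-this-unit', [])
    if snap_name not in installed:
        installed.append(snap_name)
        db.set('snap.installed-by-this-unit', installed)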

It would be possible to have the snap layer automatically remove
snaps, but the behaviour would need to be explicitly enabled using a
layer.yaml option.
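
A rough, untested sketch of what that opt-in could look like, assuming the
layer-basic options API; the 'remove_snaps_on_stop' option name is made up,
and this is not how the snap layer currently behaves:

import subprocess
from charmhelpers.core import unitdata
from charms import layer            # layer-basic options API
from charms.reactive import hook

@hook('stop')
def maybe_remove_snaps():
    opts = layer.options('snap')                  # options set in this charm's layer.yaml
    if not opts.get('remove_snaps_on_stop'):      # hypothetical opt-in, off by default
        return                                    # never remove by default: another charm may use the snap
    for snap_name in unitdata.kv().get('snap.installed-by-this-unit', []):
        # 'snap remove' also deletes any data the snap holds.
        subprocess.check_call(['snap', 'remove', snap_name])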

-- 
Stuart Bishop 



Re: Is there a universal interface I can use?

2017-11-27 Thread Stuart Bishop
On 23 November 2017 at 21:37, Tilman Baumann
 wrote:
> On 22.11.2017 23:26, Haw Loeung wrote:
>> Hi Tilman,
>>
>> On Wed, Nov 22, 2017 at 04:02:08PM +0100, Tilman Baumann wrote:
>>> However, that doesn't seem to work. Juju complains the relation doesn't
>>> exist.
>>> $ juju add-relation cassandra-backup:database cassandra:database
>>> ERROR no relations found
>>>
>>> So, is there an interface that I can (ab)use in a similar way?
>>>
>>> I don't want to build a full-blown Cassandra interface and add it to the
>>> list.
>>
>> Not sure if you've seen this, but I did some work recently with
>> something similar to backup Cassandra DBs:
>>
>> | https://jujucharms.com/u/hloeung/cassandra-backup/
>
> I didn't want to talk about it before it's usable. I think I might be
> working on something similar.
>
> https://github.com/tbaumann/jujucharm-layer-cassandra-backup
>
> It seems to only use "nodetool snapshot".
> I'm integrating this for a 3rd party, so I don't quite know what is going
> on there. But it looks like the intent is pretty much the same.

I think this charm needs to remain a subordinate, because 'nodetool
snapshot' requires a JMX connection, and JMX should be blocked (it is
not secured).

I'd be happy to have actions on the main Cassandra charm to manage
snapshots, and cronned snapshots would also be a feature suitable for
the main charm. But you would still need some way to ship the
snapshots to your backup host which should be a subordinate.
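
As a rough, untested sketch (not existing charm code), such an action
handler might look like this; the action and its 'tag' parameter are
hypothetical:

import subprocess
from charmhelpers.core import hookenv

def create_snapshot():
    # Runs on the unit itself, so the (unsecured) JMX port never needs to
    # be reachable from outside the machine.
    tag = hookenv.action_get('tag') or 'juju-snapshot'   # hypothetical action parameter
    try:
        # 'nodetool snapshot -t <tag>' snapshots all keyspaces on this node.
        subprocess.check_call(['nodetool', 'snapshot', '-t', tag])
        hookenv.action_set({'snapshot-tag': tag})
    except subprocess.CalledProcessError as exc:
        hookenv.action_fail('nodetool snapshot failed: {}'.format(exc))

if __name__ == '__main__':
    create_snapshot()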

Ideally the Cassandra charm would support multiple DCs, which would
allow you to back up only a subset of your nodes and still get a complete
copy of your data, but that is going to have to wait for a
charms.reactive rewrite.

-- 
Stuart Bishop 



Re: ownership and sharing of models via team namespace

2017-11-27 Thread John Meinel
So models shared with you are currently possible, but I believe everything
is still owned by an individual user rather than at a team scope. You
should be able to add other users as 'admin' on an existing model, but
we're still working through a backlog of places that treat the "owner"
differently from a user marked as "admin" on the model.
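
For example, something along these lines, which just wraps the Juju 2.x
'juju grant <user> <access> <model>' command (the user and model names are
placeholders):

import subprocess

def share_model(user, model, access='admin'):
    # e.g. share_model('jane', 'mymodel') lets 'jane' administer 'mymodel'.
    subprocess.check_call(['juju', 'grant', user, access, model])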

I do believe creating a team-namespaced model is still on the drawing board.

John
=:->


On Mon, Nov 27, 2017 at 8:08 PM, James Beedy  wrote:

> Looking at the jujucharms page for my team namespace [0], I see a place
> where models may be shared with the team. Is this something that is in the
> works, or is there existing functionality where models can be shared
> with/owned by a team?
>
> Thanks
>
> [0] https://imgur.com/a/7xint
>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju-dev
>
>


ownership and sharing of models via team namespace

2017-11-27 Thread James Beedy
Looking at the jujucharms page for my team namespace [0], I see a place
where models may be shared with the team. Is this something that is in the
works, or is there existing functionality where models can be shared
with/owned by a team?

Thanks

[0] https://imgur.com/a/7xint


Re: Endpoints trials + MAAS charms

2017-11-27 Thread James Beedy
Oo, possibly I'm on to something (last item / very bottom):
https://imgur.com/a/6Wyd0

On Mon, Nov 27, 2017 at 5:53 AM, James Beedy  wrote:

> | I can also see that it took a few attempts to get there (the last
> machine is #40 :)
>
> It was a trying process, to say the least - possibly I am one of the few
> users who would stick around to see it through, which is actually great
> because it creates a market for people to provide the service of providing
> MAAS!
>
> At least, with the MAAS charm, you can 1) create and add your vms, 2) juju
> deploy bundle, 3) profit.
>
> Instead of deploy #40 (which is probably #100 by now) + tears + 10 trips to
> the datacenter + 
>
>
> On Mon, Nov 27, 2017 at 5:31 AM, John Meinel 
> wrote:
>
>> It does seem like Juju operating the LXD provider, but spanning multiple
>> machines, would be an answer. I do believe that LXD itself is introducing
>> clustering into its API in the 18.04 cycle, which would probably need
>> some updates on the Juju side to handle its targeted provisioning (create
>> a container for maas-postgresql on the 3rd machine in the LXD cluster).
>> But that might get you out of manual provisioning of a bunch of machines
>> and VMs to target everything.
>> We're certainly not at a point where you could just-do-that today.
>> I can also see that it took a few attempts to get there (the last machine
>> is #40 :)
>> John
>> =:->
>>
>>
>> On Mon, Nov 27, 2017 at 5:17 PM, James Beedy 
>> wrote:
>>
>>> Dmitrii,
>>>
>>> Thanks for the response.
>>>
>>> When taking into account the overhead to get MAAS deployed as charms, it
>>> definitely makes the LXD provider method you have described seem very
>>> appealing. Some issues I've experienced trying to get MAAS HA deployed are
>>> such that it seems like just a ton of infrastructure is needed to get MAAS
>>> HA standing using the manual provider deploying the MAAS charms. You have
>>> to provision/maintain the manual Juju controller cluster underneath MAAS,
>>> just to have MAAS  ugh
>>>
>>> I found not only the sheer quantity of servers needed to get this
>>> working quite unnerving, but also the manual ops I had to undergo to get it
>>> all standing #snowflakes #custom.
>>>
>>> I iterated on this a few times to see if I could come up with something
>>> more manageable, and this is where I landed (brace yourself) [0]
>>>
>>> What's going on there?
>>>
>>> I created a model in JAAS, manually added 3 physical hosts across
>>> different racks, then bootstrapped 4 virtual machines on each physical host
>>> (1 vm for each postgresql, maas-region, maas-rack [1], juju-controller
>>> (juju controllers for maas provider, to be checked into maas)).
>>>
>>> I then also added my vms to my JAAS model so I could deploy the charms
>>> to them (just postgresql and ubuntu at the time - the MAAS stuff got
>>> manually installed and configured after the machines had ubuntu deployed to
>>> them in the model).
>>>
>>> (ASIDE: I'm seeing I could probably do this one better by combining my
>>> region and rack controller to the same vm, and colocating the postgresql
>>> charm with the region+rack on the same vm, giving me HA of all components
>>> using a single vm on each host.)
>>>
>>> I know there are probably a plethora of issues with what I've done here,
>>> but it solved a few primary issues that seemed to outweigh the misuse.
>>>
>>> The issues were:
>>>
>>> 1) Too many machines needed to get MAAS HA
>>> 2) Chicken or egg?
>>> 3) Snowflakes!!!
>>> 4) No clear path to support MAAS HA
>>>
>>> My idea behind this was such that by using JAAS I am solving the chicken
>>> or the egg issue, and reducing the machines/infra needed to get MAAS HA.
>>> Deploying the MAAS infra with Juju eliminates the snowflake/lack of
>>> tracking and chips at the "No clear path to support MAAS HA".
>>>
>>> Thanks,
>>>
>>> James
>>>
>>> [0] http://paste.ubuntu.com/25891429/
>>> [1] http://paste.ubuntu.com/26058033/
>>>
>>> On Mon, Nov 27, 2017 at 12:09 AM, Dmitrii Shcherbakov <
>>> dmitrii.shcherba...@canonical.com> wrote:
>>>
 Hi James,

 This is an interesting approach, thanks for taking a shot at solving
 this problem!

 I thought of doing something similar a few months ago. The problematic
 aspect here is the assumption of having a provider/substrate already
 present for MAAS to be deployed - this is the chicken or the egg type of
 problem.

 If you would like to take the MAAS charm route, manual provider could
 be used with Juju to do that with pre-created hosts (which may be
 containers/VMs/hosts all in a single model with this provider, regardless
 of how they were deployed). There would be hosts for a Juju controller(s)
 and MAAS region/rack controllers in the end.

 If you put both Juju controller and MAAS into containers, it gives you
 some flexibility. If you are careful, you can even 

Re: Endpoints trials + MAAS charms

2017-11-27 Thread James Beedy
| I can also see that it took a few attempts to get there (the last machine
is #40 :)

It was a trying process, to say the least - possibly I am one of the few
users who would stick around to see it through, which is actually great
because it creates a market for people to provide the service of providing
MAAS!

At least, with the MAAS charm, you can 1) create and add your vms, 2) juju
deploy bundle, 3) profit.

Instead of deploy #40 (which is probably #100 by now) + tears + 10 trips to
the datacenter + 


On Mon, Nov 27, 2017 at 5:31 AM, John Meinel  wrote:

> It does seem like Juju operating the LXD provider, but spanning multiple
> machines, would be an answer. I do believe that LXD itself is introducing
> clustering into its API in the 18.04 cycle, which would probably need
> some updates on the Juju side to handle its targeted provisioning (create
> a container for maas-postgresql on the 3rd machine in the LXD cluster).
> But that might get you out of manual provisioning of a bunch of machines
> and VMs to target everything.
> We're certainly not at a point where you could just-do-that today.
> I can also see that it took a few attempts to get there (the last machine
> is #40 :)
> John
> =:->
>
>
> On Mon, Nov 27, 2017 at 5:17 PM, James Beedy  wrote:
>
>> Dmitrii,
>>
>> Thanks for the response.
>>
>> When taking into account the overhead to get MAAS deployed as charms, it
>> definitely makes the LXD provider method you have described seem very
>> appealing. Some issues I've experienced trying to get MAAS HA deployed are
>> such that it seems like just a ton of infrastructure is needed to get MAAS
>> HA standing using the manual provider deploying the MAAS charms. You have
>> to provision/maintain the manual Juju controller cluster underneath MAAS,
>> just to have MAAS  ugh
>>
>> I found not only the sheer quantity of servers needed to get this working
>> quite unnerving, but also the manual ops I had to undergo to get it all
>> standing #snowflakes #custom.
>>
>> I iterated on this a few times to see if I could come up with something
>> more manageable, and this is where I landed (brace yourself) [0]
>>
>> What's going on there?
>>
>> I created a model in JAAS, manually added 3 physical hosts across
>> different racks, then bootstrapped 4 virtual machines on each physical host
>> (1 vm for each postgresql, maas-region, maas-rack [1], juju-controller
>> (juju controllers for maas provider, to be checked into maas)).
>>
>> I then also added my vms to my JAAS model so I could deploy the charms to
>> them (just postgresql and ubuntu at the time - the MAAS stuff got manually
>> installed and configured after the machines had ubuntu deployed to them in
>> the model).
>>
>> (ASIDE: I'm seeing I could probably do this one better by combining my
>> region and rack controller to the same vm, and colocating the postgresql
>> charm with the region+rack on the same vm, giving me HA of all components
>> using a single vm on each host.)
>>
>> I know there are probably a plethora of issues with what I've done here,
>> but it solved a few primary issues that seemed to outweigh the misuse.
>>
>> The issues were:
>>
>> 1) Too many machines needed to get MAAS HA
>> 2) Chicken or egg?
>> 3) Snowflakes!!!
>> 4) No clear path to support MAAS HA
>>
>> My idea behind this was such that by using JAAS I am solving the chicken
>> or the egg issue, and reducing the machines/infra needed to get MAAS HA.
>> Deploying the MAAS infra with Juju eliminates the snowflake/lack of
>> tracking and chips at the "No clear path to support MAAS HA".
>>
>> Thanks,
>>
>> James
>>
>> [0] http://paste.ubuntu.com/25891429/
>> [1] http://paste.ubuntu.com/26058033/
>>
>> On Mon, Nov 27, 2017 at 12:09 AM, Dmitrii Shcherbakov <
>> dmitrii.shcherba...@canonical.com> wrote:
>>
>>> Hi James,
>>>
>>> This is an interesting approach, thanks for taking a shot at solving
>>> this problem!
>>>
>>> I thought of doing something similar a few months ago. The problematic
>>> aspect here is the assumption of having a provider/substrate already
>>> present for MAAS to be deployed - this is the chicken or the egg type of
>>> problem.
>>>
>>> If you would like to take the MAAS charm route, manual provider could be
>>> used with Juju to do that with pre-created hosts (which may be
>>> containers/VMs/hosts all in a single model with this provider, regardless
>>> of how they were deployed). There would be hosts for a Juju controller(s)
>>> and MAAS region/rack controllers in the end.
>>>
>>> If you put both Juju controller and MAAS into containers, it gives you
>>> some flexibility. If you are careful, you can even migrate those
>>> containers. Running MAAS in an unprivileged container should be perfectly
>>> possible https://github.com/CanonicalLtd/maas-docs/issues/700 - I am
>>> not sure that the instructions that require a privileged container with
>>> loop devices passed to it are relevant anymore.
>>>
>>> 

Re: Endpoints trials + MAAS charms

2017-11-27 Thread John Meinel
It does seem like Juju operating the LXD provider, but spanning multiple
machines, would be an answer. I do believe that LXD itself is introducing
clustering into its API in the 18.04 cycle, which would probably need
some updates on the Juju side to handle its targeted provisioning (create
a container for maas-postgresql on the 3rd machine in the LXD cluster).
But that might get you out of manual provisioning of a bunch of machines
and VMs to target everything.
We're certainly not at a point where you could just-do-that today.
I can also see that it took a few attempts to get there (the last machine
is #40 :)
John
=:->


On Mon, Nov 27, 2017 at 5:17 PM, James Beedy  wrote:

> Dmitrii,
>
> Thanks for the response.
>
> When taking into account the overhead to get MAAS deployed as charms, it
> definitely makes the LXD provider method you have described seem very
> appealing. Some issues I've experienced trying to get MAAS HA deployed are
> such that it seems like just a ton of infrastructure is needed to get MAAS
> HA standing using the manual provider deploying the MAAS charms. You have
> to provision/maintain the manual Juju controller cluster underneath MAAS,
> just to have MAAS  ugh
>
> I found not only the sheer quantity of servers needed to get this working
> quite unnerving, but also the manual ops I had to undergo to get it all
> standing #snowflakes #custom.
>
> I iterated on this a few times to see if I could come up with something
> more manageable, and this is where I landed (brace yourself) [0]
>
> What's going on there?
>
> I created a model in JAAS, manually added 3 physical hosts across
> different racks, then bootstrapped 4 virtual machines on each physical host
> (1 vm for each postgresql, maas-region, maas-rack [1], juju-controller
> (juju controllers for maas provider, to be checked into maas)).
>
> I then also added my vms to my JAAS model so I could deploy the charms to
> them (just postgresql and ubuntu at the time - the MAAS stuff got manually
> installed and configured after the machines had ubuntu deployed to them in
> the model).
>
> (ASIDE: I'm seeing I could probably do this one better by combining my
> region and rack controller to the same vm, and colocating the postgresql
> charm with the region+rack on the same vm, giving me HA of all components
> using a single vm on each host.)
>
> I know there are probably a plethora of issues with what I've done here,
> but it solved a few primary issues that seemed to outweigh the misuse.
>
> The issues were:
>
> 1) Too many machines needed to get MAAS HA
> 2) Chicken or egg?
> 3) Snowflakes!!!
> 4) No clear path to support MAAS HA
>
> My idea behind this was such that by using JAAS I am solving the chicken
> or the egg issue, and reducing the machines/infra needed to get MAAS HA.
> Deploying the MAAS infra with Juju eliminates the snowflake/lack of
> tracking and chips at the "No clear path to support MAAS HA".
>
> Thanks,
>
> James
>
> [0] http://paste.ubuntu.com/25891429/
> [1] http://paste.ubuntu.com/26058033/
>
> On Mon, Nov 27, 2017 at 12:09 AM, Dmitrii Shcherbakov <
> dmitrii.shcherba...@canonical.com> wrote:
>
>> Hi James,
>>
>> This is an interesting approach, thanks for taking a shot at solving this
>> problem!
>>
>> I thought of doing something similar a few months ago. The problematic
>> aspect here is the assumption of having a provider/substrate already
>> present for MAAS to be deployed - this is the chicken or the egg type of
>> problem.
>>
>> If you would like to take the MAAS charm route, manual provider could be
>> used with Juju to do that with pre-created hosts (which may be
>> containers/VMs/hosts all in a single model with this provider, regardless
>> of how they were deployed). There would be hosts for a Juju controller(s)
>> and MAAS region/rack controllers in the end.
>>
>> If you put both Juju controller and MAAS into containers, it gives you
>> some flexibility. If you are careful, you can even migrate those
>> containers. Running MAAS in an unprivileged container should be perfectly
>> possible https://github.com/CanonicalLtd/maas-docs/issues/700 - I am not
>> sure that the instructions that require a privileged container with loop
>> devices passed to it are relevant anymore.
>>
>> An alternative is to use the lxd provider (which can connect to a remote
>> daemon, not only localhost) but this is only one daemon per provider. For
>> HA purposes you would need several LXDs on different hosts and for this
>> provider to support network spaces because you may have MAAS hosts located
>> in different layer 2 networks with different subnets used. Cross-model
>> relations could be used to have a model per LXD provider but I am not sure
>> this is the best approach - units would be on different models with no
>> shared unit-level leadership.
>>
>> https://github.com/juju/juju/tree/develop/provider/lxd
>>
>> With the new LXD clustering work it might be possible to overcome this
>> 

juju attach unnecessary interfaces to lxd container

2017-11-27 Thread Vladimir Burlakov
Hello everyone!
I am trying to deploy an application to an LXD container on a manually
attached machine (ubuntu-xenial), and the container does not start, with the
error "Missing parent 'docker_gwbridge' for nic 'eth2'".
I can see in the logs that Juju tries to add Docker interfaces to the
container, even though the default LXD profile on the target machine has no
such interfaces. Can anyone help me fix this issue?

-
$ juju --version: 
2.2.6-xenial-amd64

-
YAML file of the application that I am trying to deploy:
$ cat keystone.yaml
keystone:
  admin-password: openstack
  openstack-origin: 'cloud:xenial-pike'
  worker-multiplier: 0.25

-
Default profile on a target machine: 
—
### Note that the name is shown but cannot be changed
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
name: default
—

-
juju machines: 
—
Machine  State    DNS            Inst id               Series  AZ  Message
0        started  10.199.200.17  manual:10.199.200.17  xenial      Manually provisioned machine
0/lxd/2  down                    pending               xenial      Missing parent 'docker_gwbridge' for nic 'eth2'
—

juju log, from target machine: 
—
2017-11-24 11:48:48 INFO juju.container.lxd lxd.go:176 instance "juju-99d66a-0-lxd-2" configured with map[eth0:map[nictype:bridged name:eth0 parent:br0 hwaddr:00:16:3e:d3:42:fc mtu:1500 type:nic] eth1:map[mtu:1500 type:nic nictype:bridged name:eth1 parent:docker0 hwaddr:00:16:3e:40:4a:7e] eth2:map[name:eth2 parent:docker_gwbridge hwaddr:00:16:3e:e6:fb:5a mtu:1500 type:nic nictype:bridged]] network devices
2017-11-24 11:48:48 INFO juju.container.lxd lxd.go:187 starting instance "juju-99d66a-0-lxd-2" (image "juju/xenial/amd64")...
2017-11-24 11:48:51 WARNING juju.provisioner provisioner_task.go:747 failed to start instance (Missing parent 'docker_gwbridge' for nic 'eth2'), retrying in 10s (10 more attempts)
—

Thanks, 
Vladimir Burlakov


Re: Endpoints trials + MAAS charms

2017-11-27 Thread James Beedy
Dmitrii,

Thanks for the response.

When taking into account the overhead to get MAAS deployed as charms, it
definitely makes the LXD provider method you have described seem very
appealing. Some issues I've experienced trying to get MAAS HA deployed are
such that it seems like just a ton of infrastructure is needed to get MAAS
HA standing using the manual provider deploying the MAAS charms. You have
to provision/maintain the manual Juju controller cluster underneath MAAS,
just to have MAAS  ugh

I found not only the sheer quantity of servers needed to get this working
quite unnerving, but also the manual ops I had to undergo to get it all
standing #snowflakes #custom.

I iterated on this a few times to see if I could come up with something
more manageable, and this is where I landed (brace yourself) [0]

What's going on there?

I created a model in JAAS, manually added 3 physical hosts across different
racks, then bootstrapped 4 virtual machines on each physical host (1 vm for
each postgresql, maas-region, maas-rack [1], juju-controller (juju
controllers for maas provider, to be checked into maas)).

I then also added my vms to my JAAS model so I could deploy the charms to
them (just postgresql and ubuntu at the time - the MAAS stuff got manually
installed and configured after the machines had ubuntu deployed to them in
the model).

(ASIDE: I'm seeing I could probably do this one better by combining my
region and rack controller to the same vm, and colocating the postgresql
charm with the region+rack on the same vm, giving me HA of all components
using a single vm on each host.)

I know there are probably a plethora of issues with what I've done here,
but it solved a few primary issues that seemed to outweigh the misuse.

The issues were:

1) Too many machines needed to get MAAS HA
2) Chicken or egg?
3) Snowflakes!!!
4) No clear path to support MAAS HA

My idea behind this was such that by using JAAS I am solving the chicken or
the egg issue, and reducing the machines/infra needed to get MAAS HA.
Deploying the MAAS infra with Juju eliminates the snowflake/lack of
tracking and chips at the "No clear path to support MAAS HA".

Thanks,

James

[0] http://paste.ubuntu.com/25891429/
[1] http://paste.ubuntu.com/26058033/

On Mon, Nov 27, 2017 at 12:09 AM, Dmitrii Shcherbakov <
dmitrii.shcherba...@canonical.com> wrote:

> Hi James,
>
> This is an interesting approach, thanks for taking a shot at solving this
> problem!
>
> I thought of doing something similar a few months ago. The problematic
> aspect here is the assumption of having a provider/substrate already
> present for MAAS to be deployed - this is the chicken or the egg type of
> problem.
>
> If you would like to take the MAAS charm route, manual provider could be
> used with Juju to do that with pre-created hosts (which may be
> containers/VMs/hosts all in a single model with this provider, regardless
> of how they were deployed). There would be hosts for a Juju controller(s)
> and MAAS region/rack controllers in the end.
>
> If you put both Juju controller and MAAS into containers, it gives you
> some flexibility. If you are careful, you can even migrate those
> containers. Running MAAS in an unprivileged container should be perfectly
> possible https://github.com/CanonicalLtd/maas-docs/issues/700 - I am not
> sure that the instructions that require a privileged container with loop
> devices passed to it are relevant anymore.
>
> An alternative is to use the lxd provider (which can connect to a remote
> daemon, not only localhost) but this is only one daemon per provider. For
> HA purposes you would need several LXDs on different hosts and for this
> provider to support network spaces because you may have MAAS hosts located
> in different layer 2 networks with different subnets used. Cross-model
> relations could be used to have a model per LXD provider but I am not sure
> this is the best approach - units would be on different models with no
> shared unit-level leadership.
>
> https://github.com/juju/juju/tree/develop/provider/lxd
>
> With the new LXD clustering work it might be possible to overcome this
> limitation as well. I would assume LXD clustering to work on a per-rack
> basis due to latency constraints while with MAAS in a data center you would
> surely place different region controllers and rack controllers on different
> racks (availability zones).
>
> https://insights.ubuntu.com/2017/10/23/lxd-weekly-status-20-authentication-conferences-more/
> "Distributed database for LXD clustering"
>
> If, by the time of LXD clustering release, there was support for
> availability zones it would have solved the problem with a single control
> plane for a Juju provider in the absence of MAAS.
>
> An alternative to the above is just usage of bootstrap automation to set
> up MAAS and then usage of Juju with charms for the rest of what you need.
>
>
> Best Regards,
> Dmitrii Shcherbakov
>
> Field Software Engineer
> IRC (freenode):