Re: lxd and constraints

2017-01-16 Thread Reed O'Brien
+1


Re: lxd and constraints

2017-01-16 Thread Merlijn Sebrechts
+1


Re: lxd and constraints

2017-01-16 Thread Aaron Bentley
ISTM that
 - constraints are used to ensure that a workload runs well.  Minimum
   constraints serve this, and maximum constraints do not.  (Maximum
   constraints may be useful to ensure that a workload does not swamp
   processes outside its container.)

 - Juju cannot enforce a minimum constraint.  LXD could potentially add
   support for this, and then Juju would be able to leverage it.

 - Given that Juju cannot enforce a minimum constraint on LXD at this
   time, it would make sense to emit a warning that it is ignoring the
   constraint.  This would retain the portability of bundles that use
   constraints while keeping the user informed.


Re: lxd and constraints

2017-01-14 Thread John Meinel
So I think it is a fair point that if you did:
  juju deploy application --constraints mem=4GB
and then did something like:
  juju add-unit application --to lxd:XXX

That those constraints would end up interpreted differently. And also that:
  juju add-unit application --to YYY
would similarly just ignore the constraints.

That said, Juju does *not* do scheduling today, aside from explicit
placement and put-on-another-machine. Still, it is worth giving people a
way to express *something* about constraining their containers, instead of
forcing them to do a "juju deploy" and then a "juju run" and know all the
details of how to apply the limits themselves.

It may be perfectly fine for someone to do "juju add-unit -n 8 --to 5
--constraints mem=2GB" even though they only have 8GB on that machine, as
their expectation is that any one container might peak at 2GB, but the
expected steady state is <1GB and they just don't want runaway allocation
in one container to starve all the others.

I know we're also mulling over how we might propose syntax that would let
you declare "this must have direct underlay access" vs "it's ok if this is
behind NAT", if we can find something nice there it might also work for
memory, etc.

John
=:->




Re: lxd and constraints

2017-01-13 Thread Nate Finch
I just feel like we're entering a minefield that our application and CLI
aren't really built to handle.  I think we *should* handle it, but it needs
to be well planned out, instead of just doing a tiny piece at a time and
only figuring out later if we did the right thing.

There are a few problems I can see:

1.) you can have 10 lxd containers with memory limit of 2GB on a machine
with 4GB of RAM.  Deploying 10 applications to those containers that each
have a constraint of mem=2GB will not work as you expect.  We could add
extra bookkeeping for this, and warn you that you appear to be
oversubscribing the host, but that's more work.
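The "extra bookkeeping" described above amounts to summing per-container limits against host RAM. A minimal shell sketch, with the numbers from the example (the check itself is hypothetical; juju does no such accounting today):

```shell
# Sketch of the bookkeeping: 10 containers with a 2GB limit each on a
# 4GB host add up to far more memory than the host actually has.
host_mb=4096
limit_mb=2048
containers=10
total_mb=$(( containers * limit_mb ))
if [ "$total_mb" -gt "$host_mb" ]; then
  echo "warning: ${total_mb}MB of container limits on a ${host_mb}MB host"
fi
```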

2.) What happens if you try to deploy a container without a memory limit on
a host that already has a container on it?

For example:
4GB host
2GB lxd container
try to deploy a new service in a container on this machine.
Do we warn?  We have no clue how much RAM the service will use.  Maybe
it'll be fine, maybe it won't.

3.) Our CLI doesn't really work well with constraints on containers:

juju deploy mysql --constraints mem=2G --to lxd

Does this deploy a machine with 2GB of ram and a container with a 2GB ram
limit on it?  Or does it deploy a machine with 2GB of ram and a container
with no limit on it?  It has to be one or the other, and currently we have
no way of indicating which we want to do, and no way to do the other one
without using multiple commands.

This is a more likely use case, creating a bigger machine that can hold
multiple containers:
juju add-machine --constraints mem=4GB
// adds machine, let's say 5
// create a container on machine 5 with 2GB memory limit
juju deploy mysql --constraints mem=2GB --to lxd:5

At least in this case, the deploy command is clear; there's only one thing
it can possibly mean.  Usually, the placement directive would override
the constraint, but in this case, it does what you would want... though it
is a little weird that --to lxd:5 uses the constraint, but --to 5 ignores it.

Note that you can't just write a simple script to do the above, because the
machine number is variable, so you have to parse our output and then use
that for the next command.  It's still scriptable, obviously, but it's a
more complicated script than just two lines of bash.
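Concretely, the parsing step might look like the sketch below. The "created machine N" output line is an assumption about juju's CLI output format and may differ between versions:

```shell
# Sketch: capture the new machine's number, then target a container on it.
# In real use the next line would be:
#   out=$(juju add-machine --constraints mem=4GB 2>&1)
out="created machine 5"        # stand-in for juju's output (assumed format)
machine="${out##* }"           # take the trailing token: the machine number
echo "juju deploy mysql --constraints mem=2GB --to lxd:${machine}"
```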

Also note that using this second method, you can't deploy more than one
unit at a time, unless you want multiple units on containers on the same
machine (which I think would be pretty odd).




Re: lxd and constraints

2017-01-13 Thread Rick Harding
In the end, you say you want an instance with 2GB of ram, and if the cloud
has an instance with that exact limit, it is in fact an exact limit. The key
thing here is that clouds don't have infinitely malleable instance types the
way containers do (this holds for kvm and for lxd). So I'm not sure the
mismatch is as far apart as it seems. root-disk means give me a disk this
big; if you ask for 2 cores, as long as you can match an instance type with
2 cores, that's exactly the max you get.

It seems part of this may just be adjusting the language from "minimum" to
something closer to "requested X", where a request conveys "I want X"
without the min/max boundaries.



-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: lxd and constraints

2017-01-13 Thread John Meinel
So we could make it so that constraints are actually 'exactly' for LXD,
which would then conform to both minimum and maximum, and would still be
actually useful for people deploying to containers. We could certainly
probe the host machine and say "you asked for 48 cores, and the host
machine doesn't have it".

However, note that explicit placement also takes precedence over
constraints anyway. If you do:
  juju deploy mysql --constraints mem=4G
today, and then do:
 juju add-unit --to 2
We don't apply the constraint limitations to that specific unit. Arguably
we should at *least* be warning that the constraints for the overall
application don't appear to be valid for this instance.

I guess I'd rather see constraints still set limits for containers, because
people really want that functionality, and have us warn any time you do a
direct placement and the constraints aren't satisfied (warn, but don't fail
the attempt).
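For reference, the container limit under discussion corresponds to LXD's `limits.memory` config key, which is a hard ceiling rather than a reservation. A sketch of the manual equivalent of what a constraint-as-limit would do (container name hypothetical; the command is echoed rather than run since it needs a live LXD):

```shell
# The LXD-side knob a mem constraint would map to: limits.memory caps the
# container's memory use; it does not reserve or guarantee that memory.
container="mysql-0"            # hypothetical container name
cmd="lxc config set ${container} limits.memory 2GB"
echo "$cmd"
```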

John
=:->



Re: lxd and constraints

2017-01-13 Thread Stuart Bishop
On 13 January 2017 at 02:20, Nate Finch  wrote:

> I'm implementing constraints for lxd containers and provider... and
> stumbled on an impedance mismatch that I don't know how to handle.
>


> I'm not really sure how to resolve this problem.  Maybe it's not a
> problem.  Maybe constraints just have a different meaning for containers?
> You have to specify the machine number you're deploying to for any
> deployment past the first anyway, so you're already manually choosing the
> machine, at which point, constraints don't really make sense anyway.
>

I don't think Juju can handle this. Either constraints have different
meanings with different cloud providers, or lxd needs to accept minimum
constraints (along with any other cloud providers with this behavior).

If you decide constraints need to consistently mean minimum, then I'd argue
it is best to not pass them to current-gen lxd at all. Enforcing that
containers are restricted to the minimum viable resources declared in a
bundle does not seem helpful, and Juju does not have enough information to
choose suitable maximums (and if it did, would not know if they would
remain suitable tomorrow).

-- 
Stuart Bishop 


Re: lxd and constraints

2017-01-12 Thread Mike Pontillo
On Thu, Jan 12, 2017 at 12:40 PM, Nate Finch 
wrote:

> The problem with trying to figure out how much "unused" RAM a host has is
> that it gets thrown out the window if you ever deploy any unit to the host
> machine, or if you deploy a unit in a container without a RAM constraint.
> Those units may then use as much RAM as they want.
>

Right, that's why I said the idea might be naive. But I would ask how
difficult it would be to change the design so that the amount of "unused"
resources is persisted. (Probably persisted along with the ratios, so the
calculation is consistent across deployments. You might want some ratios to
be application-specific, such as if you know you want to deploy a very
large database, but that use case can probably be accomplished with
constraints?)

In any case (depending on where you specify the ratio), you may always have
a situation where new units can use as much RAM as they want.

Regards,
Mike


Re: lxd and constraints

2017-01-12 Thread James Beedy
>
>
> I'm implementing constraints for lxd containers and provider... and
> stumbled on an impedance mismatch that I don't know how to handle.
>
> It seems as though lxd limits (what in juju we would call constraints) are
> implemented as maximums, not minimums.  For containers sharing a host, this
> makes sense.  If you say "don't use more than 2 gigs of ram" then you know
> the max that container will use, and thus how much leftover you can expect
> the host to have for other containers.
>


The problem of lxd controllers using 50% - 1GB of the host RAM (the default
for MongoDB's WiredTiger storage engine) should fall into this category of
use cases - "don't use more than X gigs of ram".

https://docs.mongodb.com/manual/core/wiredtiger/#memory-use
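For reference, the default James describes can be sketched straight from the
MongoDB docs (a minimal illustration, not Juju code): WiredTiger's default
internal cache is the larger of 50% of (RAM - 1 GB) or 256 MB.

```python
def wiredtiger_default_cache_gb(host_ram_gb: float) -> float:
    """Documented WiredTiger default cache size, in GB: the larger of
    50% of (host RAM - 1 GB) or 256 MB (0.25 GB)."""
    return max(0.5 * (host_ram_gb - 1.0), 0.25)

# On a 16 GB host the default cache is 7.5 GB; on a 1 GB host it
# bottoms out at the 256 MB floor.
print(wiredtiger_default_cache_gb(16))  # 7.5
print(wiredtiger_default_cache_gb(1))   # 0.25
```

So without an explicit maximum, a single mongod in a container can claim
most of a large host's RAM.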



>
> However, in Juju's case, we expect constraints to be minimums, so that we
> know the unit on that machine will have enough RAM to function (e.g. "give
> me a machine with at least 2GB of RAM, since I know my application won't
> work well with less").
>
> This impedance mismatch is tricky to untangle.  With a naive implementation
> of Juju constraints for lxd as a straight up setting of lxd limits, then
> you can add a lxd container and specify a memory constraint that is higher
> than the host machine's memory, and lxd will happily let you do that
> because it knows that container won't exceed that amount of memory (by
> definition it cannot).  But it means that juju will then let you deploy a
> unit that needs more memory than the container has access to.
>
> Note that this is also the case for setting cores.  On my local lxd
> environment I can juju add-machine --constraints cores=48 and the container
> will be created just fine.
>
> I'm not really sure how to resolve this problem.  Maybe it's not a
> problem.  Maybe constraints just have a different meaning for containers?
> You have to specify the machine number you're deploying to for any
> deployment past the first anyway, so you're already manually choosing the
> machine, at which point, constraints don't really make sense anyway.
>
> -Nate
>


Re: lxd and constraints

2017-01-12 Thread Nate Finch
Merlijn:
I definitely agree that having the same term mean different things on
different platforms is a really bad idea.  I don't think we can change the
concept of constraints as minimums at this point, but maybe a new concept
of limits (to match lxd terminology) could be added.  Limits really only
make sense for containers, so we would either have to error out, or warn
the user and ignore it if you specified a limit when deploying a base
machine.

*Mike:*
The problem with trying to figure out how much "unused" RAM a host has is
that it gets thrown out the window if you ever deploy any unit to the host
machine, or if you deploy a unit in a container without a RAM constraint.
Those units may then use as much RAM as they want.


On Thu, Jan 12, 2017 at 3:22 PM Mike Pontillo 
wrote:

On Thu, Jan 12, 2017 at 11:20 AM, Nate Finch 
wrote:

I'm implementing constraints for lxd containers and provider... and
stumbled on an impedance mismatch that I don't know how to handle.

It seems as though lxd limits (what in juju we would call constraints) are
implemented as maximums, not minimums.  For containers sharing a host, this
makes sense.  If you say "don't use more than 2 gigs of ram" then you know
the max that container will use, and thus how much leftover you can expect
the host to have for other containers.


Is the second part (how much leftover you can expect the host to have for
other containers) captured somewhere? Because it seems to me that the
important question Juju needs to be asking is, "how over-provisioned is the
host I'm about to deploy on?", so that containers can be intelligently
load-balanced across the infrastructure.

Assuming Juju has full control over the hosts it is deploying containers
onto[2], I think one thing to do might be to allow the admin to specify
ratios (maybe separate for each of CPU, RAM, disk) to indicate how
over-provisioned hosts and containers are allowed to be.

Let's take as an example a host with 16 GB of RAM, where you want to deploy
16 containers with a constraint of "at least 1G of RAM". There could be two
relevant over-provisioning ratios: one to specify how over-provisioned a
container hypervisor can be, and the other to specify how much more RAM
than the constraint specifies the container can be allowed to use. This
idea is perhaps a little naive; I'm not sure where one would specify these
values.

If that sounds confusing, maybe it's easier to look at an example[1]:

Host RAM  Min RAM          Host Over-    Container Over-  Allowed     RAM Limit per
(GB)      Constraint (GB)  prov. Ratio   prov. Ratio      Containers  Container (GB)
16        2                1.00          1.00             8           2
16        1                1.00          1.00             16          1
16        1                2.00          1.00             32          1
16        1                1.00          2.00             16          2
16        1                2.00          2.00             32          2
16        1                2.00          4.00             32          4
16        1                16.00         4.00             256         4
16        1                16.00         1.00             256         1


Regards,
Mike

[1]:
https://docs.google.com/spreadsheets/d/1j6-98nB5AA_viHK9nF42MPpbZf35wv2N2gDi8DrhepU/view


[2]: (so that the numbers aren't thrown off by other juju deployments, or
non-juju deployments to the same hypervisor)


Re: lxd and constraints

2017-01-12 Thread Mike Pontillo
On Thu, Jan 12, 2017 at 11:20 AM, Nate Finch 
wrote:

> I'm implementing constraints for lxd containers and provider... and
> stumbled on an impedance mismatch that I don't know how to handle.
>
> It seems as though lxd limits (what in juju we would call constraints) are
> implemented as maximums, not minimums.  For containers sharing a host, this
> makes sense.  If you say "don't use more than 2 gigs of ram" then you know
> the max that container will use, and thus how much leftover you can expect
> the host to have for other containers.
>
>
Is the second part (how much leftover you can expect the host to have for
other containers) captured somewhere? Because it seems to me that the
important question Juju needs to be asking is, "how over-provisioned is the
host I'm about to deploy on?", so that containers can be intelligently
load-balanced across the infrastructure.

Assuming Juju has full control over the hosts it is deploying containers
onto[2], I think one thing to do might be to allow the admin to specify
ratios (maybe separate for each of CPU, RAM, disk) to indicate how
over-provisioned hosts and containers are allowed to be.

Let's take as an example a host with 16 GB of RAM, where you want to deploy
16 containers with a constraint of "at least 1G of RAM". There could be two
relevant over-provisioning ratios: one to specify how over-provisioned a
container hypervisor can be, and the other to specify how much more RAM
than the constraint specifies the container can be allowed to use. This
idea is perhaps a little naive; I'm not sure where one would specify these
values.

If that sounds confusing, maybe it's easier to look at an example[1]:

Host RAM  Min RAM          Host Over-    Container Over-  Allowed     RAM Limit per
(GB)      Constraint (GB)  prov. Ratio   prov. Ratio      Containers  Container (GB)
16        2                1.00          1.00             8           2
16        1                1.00          1.00             16          1
16        1                2.00          1.00             32          1
16        1                1.00          2.00             16          2
16        1                2.00          2.00             32          2
16        1                2.00          4.00             32          4
16        1                16.00         4.00             256         4
16        1                16.00         1.00             256         1
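The table follows from two simple formulas; a small sketch (a hypothetical
helper, not proposed Juju code) makes the arithmetic explicit:

```python
def overprovision(host_ram_gb, min_constraint_gb, host_ratio, container_ratio):
    """Return (allowed container count, per-container RAM limit in GB)
    under the proposed over-provisioning ratios: the host ratio scales
    how many minimum-sized containers fit; the container ratio scales
    each container's limit above its minimum constraint."""
    allowed = int(host_ram_gb * host_ratio // min_constraint_gb)
    limit_gb = min_constraint_gb * container_ratio
    return allowed, limit_gb

# Reproduces the first and last table rows:
print(overprovision(16, 2, 1.00, 1.00))   # (8, 2.0)
print(overprovision(16, 1, 16.00, 1.00))  # (256, 1.0)
```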


Regards,
Mike

[1]:
https://docs.google.com/spreadsheets/d/1j6-98nB5AA_viHK9nF42MPpbZf35wv2N2gDi8DrhepU/view


[2]: (so that the numbers aren't thrown off by other juju deployments, or
non-juju deployments to the same hypervisor)


Re: lxd and constraints

2017-01-12 Thread Merlijn Sebrechts
A few thoughts that pop into my mind:

Having constraints be "min requirements" on one and "max usage" on another
provider is an issue because this would mean bundles behave differently on
different clouds. Bundles should behave the same on all clouds to improve
portability and reduce vendor lock-in.

Specifying "max usage" constraints for LXD containers is something that's
very useful, and the behavior you're describing seems logical for max
constraints: if the maximum constraint is more than the available
resources, the container will simply never hit it.

Maybe we can make the constraints clearer by making the difference between
min and max explicit? `--constraints cores=>4 cores=<12`.
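To make that concrete, here is a rough sketch of how an explicit min/max
syntax might be parsed. The `=>`/`=<` notation is Merlijn's hypothetical
suggestion, not anything Juju actually supports, and units are omitted for
brevity:

```python
import re

# Hypothetical grammar: "cores=>4" means minimum, "cores=<12" means
# maximum, and plain "cores=8" keeps today's exact/minimum meaning.
CONSTRAINT_RE = re.compile(r"^(?P<key>[a-z-]+)=(?P<op>[<>])?(?P<value>\d+)$")

def parse_constraint(token):
    """Parse one constraint token into (key, kind, value)."""
    m = CONSTRAINT_RE.match(token)
    if not m:
        raise ValueError("unrecognized constraint: %r" % (token,))
    kind = {None: "exact", ">": "min", "<": "max"}[m.group("op")]
    return m.group("key"), kind, int(m.group("value"))

print(parse_constraint("cores=>4"))   # ('cores', 'min', 4)
print(parse_constraint("cores=<12"))  # ('cores', 'max', 12)
```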


2017-01-12 20:20 GMT+01:00 Nate Finch :

> I'm implementing constraints for lxd containers and provider... and
> stumbled on an impedance mismatch that I don't know how to handle.
>
> It seems as though lxd limits (what in juju we would call constraints) are
> implemented as maximums, not minimums.  For containers sharing a host, this
> makes sense.  If you say "don't use more than 2 gigs of ram" then you know
> the max that container will use, and thus how much leftover you can expect
> the host to have for other containers.
>
> However, in Juju's case, we expect constraints to be minimums, so that we
> know the unit on that machine will have enough RAM to function (e.g. "give
> me a machine with at least 2GB of RAM, since I know my application won't
> work well with less").
>
> This impedance mismatch is tricky to untangle.  With a naive
> implementation of Juju constraints for lxd as a straight up setting of lxd
> limits, then you can add a lxd container and specify a memory constraint
> that is higher than the host machine's memory, and lxd will happily let you
> do that because it knows that container won't exceed that amount of
> memory (by definition it cannot).  But it means that juju will then let you
> deploy a unit that needs more memory than the container has access to.
>
> Note that this is also the case for setting cores.  On my local lxd
> environment I can juju add-machine --constraints cores=48 and the container
> will be created just fine.
>
> I'm not really sure how to resolve this problem.  Maybe it's not a
> problem.  Maybe constraints just have a different meaning for containers?
> You have to specify the machine number you're deploying to for any
> deployment past the first anyway, so you're already manually choosing the
> machine, at which point, constraints don't really make sense anyway.
>
> -Nate
>
>
>


lxd and constraints

2017-01-12 Thread Nate Finch
I'm implementing constraints for lxd containers and provider... and
stumbled on an impedance mismatch that I don't know how to handle.

It seems as though lxd limits (what in juju we would call constraints) are
implemented as maximums, not minimums.  For containers sharing a host, this
makes sense.  If you say "don't use more than 2 gigs of ram" then you know
the max that container will use, and thus how much leftover you can expect
the host to have for other containers.

However, in Juju's case, we expect constraints to be minimums, so that we
know the unit on that machine will have enough RAM to function (e.g. "give
me a machine with at least 2GB of RAM, since I know my application won't
work well with less").

This impedance mismatch is tricky to untangle.  With a naive implementation
of Juju constraints for lxd as a straight up setting of lxd limits, then
you can add a lxd container and specify a memory constraint that is higher
than the host machine's memory, and lxd will happily let you do that
because it knows that container won't exceed that amount of memory (by
definition it cannot).  But it means that juju will then let you deploy a
unit that needs more memory than the container has access to.
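In toy Python (an illustration of the mismatch, not Juju internals), the two
semantics validate the same request in opposite ways:

```python
def satisfies_minimum(host_mem_gb, constraint_gb):
    """Minimum semantics: the host must have at least this much memory,
    so an impossible request should be rejected up front."""
    return host_mem_gb >= constraint_gb

def lxd_accepts_limit(host_mem_gb, limit_gb):
    """Maximum semantics: an LXD limit above host memory is accepted,
    because the container simply can never reach it."""
    return True  # any limit value is valid as a cap

# "give me at least 8 GB" fails on a 4 GB host...
print(satisfies_minimum(4, 8))   # False
# ...but "use at most 8 GB" on the same host is accepted without complaint.
print(lxd_accepts_limit(4, 8))   # True
```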

Note that this is also the case for setting cores.  On my local lxd
environment I can juju add-machine --constraints cores=48 and the container
will be created just fine.

I'm not really sure how to resolve this problem.  Maybe it's not a
problem.  Maybe constraints just have a different meaning for containers?
You have to specify the machine number you're deploying to for any
deployment past the first anyway, so you're already manually choosing the
machine, at which point, constraints don't really make sense anyway.

-Nate