Re: [Openstack] quota question

2012-07-23 Thread Blake Yeager

 snip

 (BTW, I'd like to point out the Boson proposal and thread…)


Good point, we also need to think through how distributed quotas will work
across multiple cells.  I can see a lot of overlap between these two use
cases: "I want to limit users to a specific quota for a specific flavor" and
"I want to limit users to a specific quota for a given cell" - both of which
would be independent of a user's overall quota.

IMHO, we need to address both of these use cases at the same time.

-Blake


Re: [Openstack] quota question

2012-07-20 Thread Eoghan Glynn


 We're running a system with a really wide variety of node types. This
 variety (nodes with 24GB, 48GB, GPU nodes, and 1TB mem nodes) causes
 some real trouble with quotas. Basically, for any tenant that is going
 to use the large memory nodes (even in smaller slices), we need to set
 quotas that are high enough that they are useless when it comes to all
 other instance types. Also, instances aren't comparable across our
 range of hardware either.
 
 Is there a clever solution to this problem that I'm missing? Is
 anyone
 else running into this sort of problem operationally? I'm a little
 bit worried that a slightly too simple notion of quotas has been baked
 into OpenStack at a fairly deep level.
 Thanks for any insight.

Hi Narayan,

I had the idea previously of applying a weighting function to the
resource usage being allocated from the quota, as opposed to simply
counting raw instances.

The notion I had in mind was more related to image usage in glance,
where the image footprint can vary very widely. However I think it
could be useful for some nova resources also.

Now for some resource types, for example say volumes, usage can be
controlled along multiple axes (i.e. number of volumes and total size),
so that gives more flexibility.

But if I'm hearing you correctly, you'd want to apply a lower weighting
to instances that are scheduled onto one of the higher-memory compute
nodes, and vice versa a higher weighting to instances that happen to
be run on lower-memory nodes.
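
Just to sketch the shape of it, the weighting could be applied as usage
is drawn down from the quota, something like this (the weights and names
here are entirely made up):

    # Hypothetical per-node-class weights: usage on the big-memory nodes
    # draws down the quota more slowly than the raw numbers would suggest.
    QUOTA_WEIGHTS = {
        'bigmem': 0.25,
        'gpu': 0.5,
        'standard': 1.0,
    }

    def weighted_usage(instances):
        """Sum quota consumption, scaling each instance by its node class."""
        total = 0.0
        for inst in instances:
            weight = QUOTA_WEIGHTS.get(inst['node_class'], 1.0)
            total += inst['memory_mb'] * weight
        return total

    def allow_request(existing, new_instance, limit_mb):
        """Accept the new instance only if weighted usage stays within quota."""
        return weighted_usage(existing + [new_instance]) <= limit_mb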

Does that sum it up, or have I misunderstood?

BTW what kind of nova-scheduler config are you using?

Cheers,
Eoghan



Re: [Openstack] quota question

2012-07-20 Thread Narayan Desai
On Fri, Jul 20, 2012 at 4:38 AM, Eoghan Glynn egl...@redhat.com wrote:

 Hi Narayan,

 I had the idea previously of applying a weighting function to the
 resource usage being allocated from the quota, as opposed to simply
 counting raw instances.

 The notion I had in mind was more related to image usage in glance,
 where the image footprint can vary very widely. However I think it
 could be useful for some nova resources also.

 Now for some resource types, for example say volumes, usage can be
 controlled along multiple axes (i.e. number of volumes and total size),
 so that gives more flexibility.

 But if I'm hearing you correctly, you'd want to apply a lower weighting
 to instances that are scheduled onto one of the higher-memory compute
 nodes, and vice versa a higher weighting to instances that happen to
 be run on lower-memory nodes.

 Does that sum it up, or have I misunderstood?

I think you've got it. I hadn't really asked with a particular
solution in mind; I was mainly looking for ideas.

I think that weighting would help. Effectively we need to discount
memory usage on the bigmem nodes, or something like that.

The harder part is that we need to be able to specify
independent/orthogonal quota constraints on different flavors. It
would be really useful to be able to say, basically, "you can have 2TB
of memory from this flavor, and 4TB of memory from that flavor". This
would allow saying something like "you can have up to three 1TB
instances, and independently up to 3TB of small instances as well".
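i.e. something along the lines of (numbers and names purely illustrative):

    # one tenant's limits, each flavor counted against its own bucket
    per_flavor_memory_quota_mb = {
        'bigmem-flavor': 2 * 1024 * 1024,   # 2TB of big-memory instances
        'small-flavor':  4 * 1024 * 1024,   # 4TB of small instances
    }

where usage under one flavor never eats into the allowance for the other.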

 BTW what kind of nova-scheduler config are you using?

We're using the filter scheduler. We've defined a bunch of custom
flavors, in addition to the stock ones, that allow us to fill up all
of our node types. So for each node type, we define flavors for the
complete node (minus a GB of memory for the hypervisor), and for 3/4,
1/2, 1/4, 1/8, 1/16, and 1/32 of the node. We've used a machine-type
prefix for each one. The compute nodes are IBM iDataPlex, so we have
idp.{100,75,50,25,12,6,3}. We've done this for each machine type, so
we have idp.*, mem.*, gpu.*, etc. Each machine type has a unique
hostname prefix (cc for the idp nodes, cm for the bigmem nodes, cg for
gpu nodes, etc.), and the filter scheduler is set up to route requests
for these custom flavors only to nodes with the appropriate hostname
prefix. This isn't an ideal solution, but it minimizes the risk of
fragmentation. (With the default flavors, we'd see a lot of cases
where there was idle capacity left on nodes that wasn't usable because
the ratio was wrong for those flavors.)
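
The routing rule itself is simple; stripped of the nova plumbing it
amounts to something like this (hypothetical helper, not our actual
filter code):

    # Each custom flavor prefix maps to a hostname prefix; a host only
    # passes for flavors of its own machine type.
    FLAVOR_TO_HOST_PREFIX = {
        'idp': 'cc',   # iDataPlex nodes
        'mem': 'cm',   # bigmem nodes
        'gpu': 'cg',   # GPU nodes
    }

    def host_passes(hostname, flavor_name):
        """Return True if this host may run instances of this flavor."""
        flavor_prefix = flavor_name.split('.', 1)[0]
        required = FLAVOR_TO_HOST_PREFIX.get(flavor_prefix)
        if required is None:
            return True  # flavors without a mapping aren't restricted here
        return hostname.startswith(required)

The real thing is wired into a filter-scheduler filter, but the mapping
above is essentially all it does.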

So far, this scheduling scheme has worked pretty well, aside from
leaving some instances in a weird state when you try to start a bunch
(20-50) at a time. I haven't had time to track that down yet.
 -nld



Re: [Openstack] quota question

2012-07-20 Thread Eoghan Glynn


 Sounds like one solution alright..
 
 But - what about making quotas pluggable, like the scheduler?

Yeah, that could certainly be a direction to head in the longer term.

The way things stand though, the decision as to which quota is being
checked against is made at the enforcement point, and the question
posed of the quotas engine is really just a dict mapping resource names
to the numbers requested (i.e. there isn't any further context provided).

So in order to allow the quotas engine to ask more detailed questions
about the kind of resource required (e.g. is the requested instance
SSD-backed or whatever), we'd have to provide a lot more context than
we currently do.
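
i.e. today the quota layer basically just sees something like:

    {'instances': 1, 'cores': 2, 'ram': 4096}

whereas making flavor- or backing-store-aware decisions would mean
passing the wider request context (flavor, image, scheduler hints, etc.)
down to the driver as well.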

Cheers,
Eoghan

 This would allow for even more complex quotas, like limiting the
 number of SSD backed instances across the entire cloud per tenant,
 while still keeping the core implementation lean.
 
 Kiall
 On Jul 20, 2012 3:48 PM, Eoghan Glynn  egl...@redhat.com  wrote:
 
 
 
  The harder part is that we need to be able to specify
  independent/orthogonal quota constraints on different flavors. It
  would be really useful to be able to say basically, you can have 2TB
  of memory from this flavor, and 4TB of memory from that flavor. This
  would allow saying something like you can have up to 3 1TB instances,
  and independently have up to 3TB of small instances as well.
 
 OK, so it's the "as well" aspect that's problematic here.
 
 (If it were an either-or situation rather than a both-and, then
 obviously a combination of the instances and RAM quotas would get you
 part of the way, at least.)
 
 So just thinking aloud, we could potentially add new per-flavor quota
 resources, so that for each existing instance type there was the option
 to define a new quota limiting *only* that instance type (and maybe keep
 the existing instances quota as an over-arching limit).
 
 For example, if the following quotas were set:
 
 instances: 100
 instances-m1.xlarge: 10
 instances-m1.large: 20
 instances-m1.small: 50
 instances-m1.tiny: 100
 
 and a user requested an additional xlarge instance, we'd first check
 if we had headroom on the instances-m1.xlarge quota, and then if we
 also had headroom on the over-arching instances quota (before going on
 to check the ram & cores quotas if necessary). Whereas, if a medium
 instance was requested, we would only check the over-arching limit, as
 there is no instances-m1.medium quota defined.
 
 This would require some change to the quotas logic, to allow the set
 of
 resources that may be limited by quotas to be more dynamic (currently
 we have a fairly fixed set, whereas new instance types may be defined
 at any time).
 
 Would that address your requirement?
 
 Cheers,
 Eoghan
 


Re: [Openstack] quota question

2012-07-20 Thread Kevin L. Mitchell
On Fri, 2012-07-20 at 15:59 +0100, Kiall Mac Innes wrote:
 But - what about making quotas pluggable, like the scheduler?

They are; see the quota_driver configuration option.  However…

 This would allow for even more complex quotas, like limiting the
 number of SSD backed instances across the entire cloud per tenant,
 while still keeping the core implementation lean.

As Eoghan points out, a lot more context would need to be provided than
the current quota system uses, and you'd end up with something
considerably more complex.
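
(For reference, the quota_driver option mentioned above is an ordinary
nova.conf setting, e.g.

    quota_driver = nova.quota.DbQuotaDriver

where the default DbQuotaDriver can be replaced with your own class.)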

(BTW, I'd like to point out the Boson proposal and thread…)
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com




Re: [Openstack] quota question

2012-07-20 Thread Eoghan Glynn


  Would that address your requirement?
 
 I think so. If these acted as a hard limit in conjunction with
 existing quota constraints, I think it would do the trick.

I've raised this as a nova blueprint, so let's see if it gets any traction:

  https://blueprints.launchpad.net/nova/+spec/flavor-specific-instance-quotas
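
For the record, the check order I have in mind is roughly this
(pseudo-code, names illustrative):

    def check_instance_quota(quotas, usage, flavor_name):
        per_flavor = 'instances-%s' % flavor_name
        # A flavor-specific quota, if one is defined, must have headroom...
        if per_flavor in quotas:
            if usage.get(per_flavor, 0) + 1 > quotas[per_flavor]:
                return False
        # ...and the over-arching instances quota still applies regardless,
        # before the usual ram/cores checks would kick in.
        return usage.get('instances', 0) + 1 <= quotas['instances']

e.g. with instances: 100 and instances-m1.xlarge: 10 set, an eleventh
xlarge would be refused even if the tenant were nowhere near 100
instances overall.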

Cheers,
Eoghan
