Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-14 Thread Alex Glikson
Khanh-Toan Tran wrote on 14/11/2013 06:27:39 PM:

> It is interesting to see the development of the CPU entitlement 
> blueprint that Alex mentioned. It was registered in Jan 2013.
> Any idea whether it is still going on?

Yes. I hope we will be able to rebase and submit for review soon.

Regards,
Alex

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-14 Thread Khanh-Toan Tran
>Step 1: use flavors so nova can tell between the two workloads, and
>configure them differently
>
>Step 2: find capacity for your workload given your current cloud usage
>
>At the moment, most of our solutions involve reserving bits of your
>cloud capacity for different workloads, generally using host
>aggregates.

>The issue with claiming back capacity from other workloads is a bit
>trickier. The issue is that I don't think you have defined where you
>would get that capacity back from. Maybe you want to look at giving
>some workloads a higher priority over the constrained CPU resources?
>But you will probably starve the little people out at random, which
>seems bad. Maybe you want to have a concept of "spot instances" where
>they can use your "spare capacity" until you need it, and you can just
>kill them?
>
>But maybe I am misunderstanding your use case; it's not totally clear
>to me.



Yes, currently we can only reserve some hosts for particular workloads. But
«reservation» is done by an admin's operation, not «on-demand», as I
understand it. Anyway, this is just speculation from what I make of
Alexander's use case. Or maybe I misunderstand Alexander?

It is interesting to see the development of the CPU entitlement blueprint
that Alex mentioned. It was registered in Jan 2013.

Any idea whether it is still going on?




Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-14 Thread Alex Glikson
In fact, there is a blueprint which would enable supporting this scenario 
without partitioning -- 
https://blueprints.launchpad.net/nova/+spec/cpu-entitlement 
The idea is to annotate flavors with CPU allocation guarantees, and enable 
differentiation between instances potentially running on the same host.
The implementation augments the CoreFilter code to factor in the 
differentiation. Hopefully this will be out for review soon.
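To make the annotation idea concrete, here is a minimal sketch of how a per-flavor guarantee could feed a CoreFilter-style check. The extra-spec key `quota:cpu_entitlement` and all names here are hypothetical illustrations; the actual blueprint implementation may differ.

```python
# Hypothetical sketch of flavor-level CPU entitlement (illustration only;
# the real cpu-entitlement blueprint code may look quite different).

def entitled_vcpus(flavor):
    """vCPUs weighted by the flavor's guarantee: an entitlement of 100
    means fully dedicated cores, 25 means a 1:4 share."""
    extra = flavor.get('extra_specs', {})
    pct = float(extra.get('quota:cpu_entitlement', 100))
    return flavor['vcpus'] * pct / 100.0

def host_passes(host_free_pcpus, flavor):
    # Instead of one global overcommit ratio, each instance consumes
    # physical CPU in proportion to its own entitlement.
    return host_free_pcpus >= entitled_vcpus(flavor)

hadoop = {'vcpus': 8, 'extra_specs': {'quota:cpu_entitlement': '100'}}
batch = {'vcpus': 8, 'extra_specs': {'quota:cpu_entitlement': '25'}}
print(host_passes(8, hadoop))  # True: needs 8 dedicated cores
print(host_passes(4, hadoop))  # False: only 4 free cores
print(host_passes(2, batch))   # True: 8 vCPUs at 25% -> 2 cores
```

This lets a fully entitled Hadoop flavor and overcommitted general-purpose flavors share the same pool, which is the point of avoiding partitioning.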

Regards,
Alex







Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-14 Thread John Garbutt
On 13 November 2013 14:51, Khanh-Toan Tran wrote:
> Well, I don't know what John means by "modify the over-commit calculation in
> the scheduler", so I cannot comment.

I was talking about this code:
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/core_filter.py#L64

But I am not sure that's what you want.
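For context, the linked check is short; a simplified sketch of that over-commit calculation (paraphrased here as a standalone function, not the actual filter class) is:

```python
# Simplified sketch of nova's CoreFilter logic (not the actual class);
# cpu_allocation_ratio is the global overcommit factor from nova.conf.

def host_passes(vcpus_total, vcpus_used, requested_vcpus,
                cpu_allocation_ratio=16.0):
    """Return True if the host can take the instance under overcommit."""
    if vcpus_total <= 0:
        # Hosts that report no CPU data are not filtered out.
        return True
    limit = vcpus_total * cpu_allocation_ratio
    return limit - vcpus_used >= requested_vcpus

# With the default 1:16 ratio, an 8-core host can hold 128 vCPUs:
print(host_passes(8, 120, 8))   # True: 128 - 120 >= 8
print(host_passes(8, 121, 8))   # False
# With cpu_allocation_ratio=1.0 there is no overcommit at all:
print(host_passes(8, 4, 4, cpu_allocation_ratio=1.0))   # True
print(host_passes(8, 4, 5, cpu_allocation_ratio=1.0))   # False
```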

> The idea of choosing a free host for Hadoop on the fly is rather complicated
> and involves several operations, namely: (1) ensuring the host never gets
> past 100% CPU load; (2) identifying a host that already has a Hadoop VM
> running on it, or is already at 100% CPU commitment; (3) releasing the host
> from 100% CPU commitment once the Hadoop VM stops; (4) possibly preventing
> other applications from using the host (to conserve the host's resources).
>
> - You'll need (1) because otherwise your Hadoop VM would come up short of
> resources after the host gets overloaded.
> - You'll need (2) because you don't want to restrict a new host while one of
> your 100% CPU committed hosts still has free resources.
> - You'll need (3) because otherwise your host would be forever restricted,
> and that is no longer "on the fly".
> - You may need (4) because otherwise it'd be a waste of resources.
>
> The problem with changing CPU overcommit on the fly is that while your
> Hadoop VM is still running, someone else can add another VM on the same host
> with a higher CPU overcommit (e.g. 200%), violating (1) and thus affecting
> your Hadoop VM as well.
> The idea of putting the host in the aggregate gives you (1) and (2); (4)
> is done by AggregateInstanceExtraSpecsFilter. However, it does not give you
> (3), which can be done with pCloud.

Step 1: use flavors so nova can tell between the two workloads, and
configure them differently

Step 2: find capacity for your workload given your current cloud usage

At the moment, most of our solutions involve reserving bits of your
cloud capacity for different workloads, generally using host
aggregates.

The issue with claiming back capacity from other workloads is a bit
trickier. The issue is that I don't think you have defined where you
would get that capacity back from. Maybe you want to look at giving some
workloads a higher priority over the constrained CPU resources? But
you will probably starve the little people out at random, which seems
bad. Maybe you want to have a concept of "spot instances" where they
can use your "spare capacity" until you need it, and you can just kill
them?

But maybe I am misunderstanding your use case; it's not totally clear to me.

John



Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-13 Thread Khanh-Toan Tran
Well, I don't know what John means by "modify the over-commit calculation in 
the scheduler", so I cannot comment. 

The idea of choosing a free host for Hadoop on the fly is rather complicated 
and involves several operations, namely: (1) ensuring the host never gets past 
100% CPU load; (2) identifying a host that already has a Hadoop VM running on 
it, or is already at 100% CPU commitment; (3) releasing the host from 100% CPU 
commitment once the Hadoop VM stops; (4) possibly preventing other 
applications from using the host (to conserve the host's resources). 

- You'll need (1) because otherwise your Hadoop VM would come up short of 
resources after the host gets overloaded. 
- You'll need (2) because you don't want to restrict a new host while one of 
your 100% CPU committed hosts still has free resources. 
- You'll need (3) because otherwise your host would be forever restricted, and 
that is no longer "on the fly". 
- You may need (4) because otherwise it'd be a waste of resources. 

The problem with changing CPU overcommit on the fly is that while your Hadoop 
VM is still running, someone else can add another VM on the same host with a 
higher CPU overcommit (e.g. 200%), violating (1) and thus affecting your 
Hadoop VM as well. 
The idea of putting the host in the aggregate gives you (1) and (2); (4) is 
done by AggregateInstanceExtraSpecsFilter. However, it does not give you (3), 
which can be done with pCloud. 
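Conditions (1) and (2) can be sketched as a host-eligibility check; the data model below is hypothetical and purely illustrative, not nova scheduler code. Condition (4) would be enforced separately (e.g. by AggregateInstanceExtraSpecsFilter), and (3) is exactly the step no static filter provides.

```python
# Illustrative sketch of conditions (1) and (2) as a host-eligibility
# check; hypothetical data model, not actual nova scheduler code.

def eligible_for_hadoop(host, requested_vcpus):
    # (2) prefer a host already committed at 100% (ratio 1.0) that still
    #     has room, before restricting a fresh host.
    committed = host['cpu_allocation_ratio'] == 1.0
    free = host['pcpus'] * host['cpu_allocation_ratio'] - host['vcpus_used']
    # (1) never go past 100% CPU commitment on such a host.
    return committed and free >= requested_vcpus

hosts = [
    {'name': 'a', 'pcpus': 16, 'vcpus_used': 12, 'cpu_allocation_ratio': 1.0},
    {'name': 'b', 'pcpus': 16, 'vcpus_used': 40, 'cpu_allocation_ratio': 16.0},
]
# Host 'b' is overcommitted general-purpose capacity, so it is skipped:
print([h['name'] for h in hosts if eligible_for_hadoop(h, 4)])  # ['a']
# (3) would mean raising a host's ratio back above 1.0 once its Hadoop
#     VM stops -- the dynamic step the filters alone cannot do.
```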


- Original Message -

From: "Alexander Kuznetsov"  
To: "OpenStack Development Mailing List (not for usage questions)" 
 
Sent: Wednesday, November 13, 2013 3:09:40 PM 
Subject: Re: [openstack-dev] [nova] Configure overcommit policy 

Toan and Alex: having separate compute pools for Hadoop is not suitable if we 
want to use the unused capacity of an OpenStack cluster to run Hadoop analytic 
jobs. Possibly in this case it is better to modify the over-commit calculation 
in the scheduler, per John's suggestion. 



Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-13 Thread Alexander Kuznetsov
Toan and Alex: having separate compute pools for Hadoop is not suitable if we
want to use the unused capacity of an OpenStack cluster to run Hadoop analytic
jobs. Possibly in this case it is better to modify the over-commit calculation
in the scheduler, per John's suggestion.




Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-12 Thread Khanh-Toan Tran
FYI, by default OpenStack overcommits CPU at 1:16, meaning a host can allocate 
16 times the number of cores it possesses. As Alex mentioned, you can change 
this by enabling AggregateCoreFilter in nova.conf: 
scheduler_default_filters = <your filters, adding AggregateCoreFilter here> 

and modifying the overcommit ratio by adding: 
cpu_allocation_ratio=1.0 

Just a suggestion: think of isolating the hosts for the tenant that uses 
Hadoop so that they will not serve other applications. You have several 
filters at your disposal: 
AggregateInstanceExtraSpecsFilter 
IsolatedHostsFilter 
AggregateMultiTenancyIsolation 

Best regards, 

Toan 



Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-12 Thread Alex Glikson
You can consider having a separate host aggregate for Hadoop, and use a 
combination of AggregateInstanceExtraSpecsFilter (with a special flavor 
mapped to this host aggregate) and AggregateCoreFilter (overriding 
cpu_allocation_ratio for this host aggregate to be 1).
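A rough sketch of that setup follows; the aggregate and flavor names are invented, and the filter list should be merged with whatever your deployment already enables.

```ini
# nova.conf on the scheduler node (sketch) -- enable both filters:
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,AggregateInstanceExtraSpecsFilter,AggregateCoreFilter

# Then, via the CLI (illustrative names):
#   nova aggregate-create hadoop-hosts
#   nova aggregate-add-host hadoop-hosts compute-07
#   # AggregateCoreFilter reads cpu_allocation_ratio from aggregate metadata:
#   nova aggregate-set-metadata hadoop-hosts cpu_allocation_ratio=1.0 hadoop=true
#   # Tie the special flavor to the aggregate's metadata:
#   nova flavor-key hadoop.xlarge set hadoop=true
```

Instances of `hadoop.xlarge` then land only on the aggregate's hosts, where CPU is not overcommitted.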

Regards,
Alex






Re: [openstack-dev] [nova] Configure overcommit policy

2013-11-12 Thread John Garbutt
On 11 November 2013 12:04, Alexander Kuznetsov wrote:
> Hi all,
>
> While studying Hadoop performance in a virtual environment, I found an
> interesting problem with Nova scheduling. In an OpenStack cluster, we have
> an overcommit policy, allowing more VMs to be placed on one compute node
> than there are resources available for them. While this might be suitable
> for general types of workload, it is definitely not the case for Hadoop
> clusters, which usually consume 100% of system resources.
>
> Is there any way to tell Nova to schedule specific instances (the ones which
> consume 100% of system resources) without overcommitting resources on the
> compute node?

You could have a flavor with a "no-overcommit" extra spec, and modify
the over-commit calculation in the scheduler in that case, but I don't
remember seeing that in there.
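A minimal sketch of that modification (the `no_overcommit` extra-spec key is invented for illustration; no such spec exists in nova):

```python
# Sketch: per-flavor override of the overcommit ratio (hypothetical
# 'no_overcommit' extra spec; not present in nova's scheduler).

def effective_ratio(flavor, default_ratio=16.0):
    """Flavors flagged no-overcommit force a 1:1 ratio on their host."""
    extra = flavor.get('extra_specs', {})
    if extra.get('no_overcommit') == 'true':
        return 1.0
    return default_ratio

def host_passes(pcpus, vcpus_used, flavor):
    limit = pcpus * effective_ratio(flavor)
    return limit - vcpus_used >= flavor['vcpus']

hadoop = {'vcpus': 8, 'extra_specs': {'no_overcommit': 'true'}}
web = {'vcpus': 8, 'extra_specs': {}}
print(host_passes(16, 8, hadoop))   # True: 16 - 8 >= 8
print(host_passes(16, 9, hadoop))   # False under 1:1
print(host_passes(16, 9, web))      # True under the default 1:16
```

Note the gap discussed later in the thread: this only constrains the no-overcommit instance itself; a regular flavor scheduled afterwards could still overcommit the same host.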

John
