Re: [openstack-dev] [heat][nova]: anti-affinity policy via heat in IceHouse?

2015-05-25 Thread Dimitri Mazmanov
Here’s one way:

heat_template_version: 2013-05-23
parameters:
  image:
type: string
default: TestVM
  flavor:
type: string
default: m1.micro
  network:
type: string
default: cirros_net2

resources:
  serv_1:
type: OS::Nova::Server
properties:
  image: { get_param: image }
  flavor: { get_param: flavor }
  networks:
- network: {get_param: network}
  scheduler_hints: {different_host: {get_resource: serv_2}}
  serv_2:
type: OS::Nova::Server
properties:
  image: { get_param: image }
  flavor: { get_param: flavor }
  networks:
- network: {get_param: network}
  scheduler_hints: {different_host: {get_resource: serv_1}}

Note: In order for the above-mentioned scheduler hints to work, the following 
scheduler filters must be enabled for the nova scheduler:
  SameHostFilter and
  DifferentHostFilter

There’s another way of doing it using OS::Nova::ServerGroup, but it’s available 
only since Juno.
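For reference, a minimal sketch of that Juno-era alternative (resource and property values hypothetical), using a single OS::Nova::ServerGroup and pointing each server's scheduler hint at it:

```yaml
resources:
  anti_affinity_group:
    type: OS::Nova::ServerGroup
    properties:
      name: anti-affinity-group
      policies: [anti-affinity]

  serv_1:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - network: { get_param: network }
      # All servers carrying this hint land on different hosts
      scheduler_hints: { group: { get_resource: anti_affinity_group } }
```

This avoids the pairwise different_host hints and scales to any number of servers in the group.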

-
Dimitri

From: Daniel Comnea comnea.d...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Sunday 24 May 2015 12:24
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [heat][nova]: anti-affinity policy via heat in 
IceHouse?

Thanks Kevin !

Would you have an example?

Much appreciated,
Dani


On Sun, May 24, 2015 at 12:28 AM, Fox, Kevin M 
kevin@pnnl.gov wrote:
It works with heat. You can use a scheduler hint on the instance and the server 
group resource to make a new one.

Thanks,
Kevin


From: Daniel Comnea
Sent: Saturday, May 23, 2015 3:17:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [heat][nova]: anti-affinity policy via heat in 
IceHouse?

Hi,

I'm aware of the anti-affinity policy which you can create via the nova CLI 
and associate instances with it.
I'm also aware of the default policies in nova.conf

When creating instances via Heat, are there any alternatives for creating 
instances as part of an anti-affinity group?

Thx,
Dani

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] Quota management and enforcement across projects

2015-03-24 Thread Dimitri Mazmanov
Hi,
Interesting thread. I’ve just started looking at Boson myself to manage quotas 
across multiple regions. I think one case where having quota management 
as a separate service is justified is the multi-region case [1].

1) Some deployments can share Keystone, so there’s a need to synchronise 
resource usage across multiple OpenStacks and enforce quota in a distributed 
manner. Architecturally this will mean that there will be one Boson instance 
with the “quota management” and “admin” roles exposing REST API endpoint and 
talking REST to other client specific Boson instances. This use case can be an 
example of why it can be reasonable to have a separate service for managing 
quotas, reservations, usages.

2) As I understand it, the intention of Boson is to synchronise usage, not own 
this information. The “Usage synchronisation” paragraph in the wiki [2] 
describes one possible approach:
“…Boson will keep freshness information on the usage data; should it determine 
that the usage information it has is not fresh enough, it will reject 
reservation creation requests with a special response code, which tells the 
service to send fresh usage information for certain resources along with 
resending the reservation creation requests”.
Further, the service can reject reservation creation requests in case of 
communication failure, tracker unavailability, etc.
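The rejection-and-resync protocol described in the wiki could look roughly like the following sketch (all names are hypothetical; this is not Boson's actual API):

```python
import time

STALE_AFTER = 30  # seconds; hypothetical freshness threshold

class ReservationService:
    """Toy sketch of the usage-freshness protocol described above."""

    def __init__(self):
        self.usage = {}  # resource -> (used_amount, last_report_timestamp)

    def report_usage(self, resource, used):
        self.usage[resource] = (used, time.time())

    def reserve(self, resource, amount, limit):
        record = self.usage.get(resource)
        if record is None or time.time() - record[1] > STALE_AFTER:
            # Special response code: the calling service must resend the
            # reservation request together with fresh usage information.
            return "USAGE_STALE"
        used, reported_at = record
        if used + amount > limit:
            return "OVER_QUOTA"
        self.usage[resource] = (used + amount, reported_at)
        return "RESERVED"

svc = ReservationService()
print(svc.reserve("instances", 2, limit=10))  # USAGE_STALE: nothing reported yet
svc.report_usage("instances", 3)
print(svc.reserve("instances", 2, limit=10))  # RESERVED
```

The same "stale" response doubles as the error path for communication failures or tracker unavailability: the client's recovery action is identical in both cases.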

3) Have you looked at Blazar [3] as a possible reservation mechanism instead of 
implementing a new reservation interface in Boson? Does it at all make sense to 
use it in the context of quota management?

Regards,
Dimitri

[1] http://dc636.4shared.com/download/Z6O_jJSGba/multiregion_arch.png?lgfp=3000
[2] https://wiki.openstack.org/wiki/Boson#Usage_Synchronization
[3] https://wiki.openstack.org/wiki/Blazar

From: Salvatore Orlando sorla...@nicira.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday 15 January 2015 02:22
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Quota management and enforcement across projects

I'm resurrecting this thread to provide an update on this effort.

I have been looking at Boson [1] as a base for developing a service for 
managing quotas and reservations [2].
Boson's model would satisfy most of the requirements for this service, and 
implementing additional requirements, such as hierarchical multi-tenancy, should 
be quite easy (at least from the high-level design perspective). In the rest of 
this post I'm going to refer to this service both as "Boson" and "quota 
service".

Without going into too many technical details (which I am happy to discuss 
separately), the quota service will need to provide the 4 interfaces depicted 
in [3].
1) The admin interface has the main purpose of keeping track of the services 
which use Boson, registering resources to track, and managing their lifecycle.
2) The quota mgmt interface does pretty much what the quota extension does 
for many OpenStack projects - manage resource limits per project/user, 
configure quota classes, etc.
3) The reservation interface handles the request/commit/cancel process 
supported by many OpenStack projects.
4) The usage interface provides an abstraction to access the resource usage 
tracker and feed information to it.
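The request/commit/cancel cycle in (3) is the familiar two-phase pattern; a minimal sketch under assumed semantics (names hypothetical, not from Boson's design):

```python
import uuid

class Reservations:
    """Toy two-phase tracker for the reserve/commit/cancel cycle."""

    def __init__(self, limit):
        self.limit = limit
        self.committed = 0      # resources actually consumed
        self.pending = {}       # reservation id -> amount on hold

    def reserve(self, amount):
        # Both committed usage and outstanding holds count against the limit.
        if self.committed + sum(self.pending.values()) + amount > self.limit:
            raise ValueError("over quota")
        rid = str(uuid.uuid4())
        self.pending[rid] = amount
        return rid

    def commit(self, rid):
        # The consuming service created the resource: make the hold permanent.
        self.committed += self.pending.pop(rid)

    def cancel(self, rid):
        # Creation failed or timed out: release the hold.
        self.pending.pop(rid)

r = Reservations(limit=10)
rid = r.reserve(4)
r.commit(rid)
try:
    r.reserve(8)                # 4 committed + 8 > 10
except ValueError as e:
    print(e)                    # over quota
```

Counting pending holds against the limit is what prevents two concurrent requests from together exceeding quota between reserve and commit.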

These interfaces are then implemented by the Boson object model, depicted in 
[4].
This object model is not very different from the one originally proposed for 
Boson [5].
The proposed object model simplifies the original one a bit, merging the 
reservation and request concepts, as well as doing without the 
SpecificResource concept (at least for the moment). However, the proposed 
object model adds the quota class concept and introduces the possibility of 
having child-parent relationships between Resources and User Info. The former 
will allow for applying quotas to resources which are scoped within a parent 
resource (e.g.: static routes per logical router), whereas the latter should 
enable the hierarchical multi-tenancy use case.

Keeping in mind the interfaces discussed in [3], the component diagram [6] can 
be devised. There is a distinct component for each interface - plus one 
component for DB management. Most of the interactions are, of course, with the 
DB manager. Component design should be done in a way that the various 
components are as independent as possible. There are some interactions among 
components, but they can likely be replaced with interactions with the DB 
manager component.

The Boson quota service therefore represents a centralized endpoint for 
managing quotas, tracking resource usage, and performing resource reservation.
Conceptually, this is all good; however, would it scale from an architectural 
perspective? The main problem with this approach 

Re: [openstack-dev] OVF/OVA support

2014-11-11 Thread Dimitri Mazmanov
Hello,
I’m also interested in having OVF package support for applications.
As it was described earlier, there are two main tracks to this – OVF artefacts 
in Glance (corresponding driver), and translating OVF descriptors to Heat 
templates.
Is there any etherpad from the session?
-
Dimitri

From: Bhandaru, Malini K 
malini.k.bhand...@intel.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Friday 7 November 2014 15:18
To: Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com
Cc: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] OVF/OVA support

Gosha – this is wonderful news. It complements Intel's interest.
I am in the Glance area; I stopped by a couple of times, and the room was 
available from 2 pm onwards.
Contact made and can continue via email and IRC.
Malini

From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
Sent: Friday, November 07, 2014 8:20 AM
To: Bhandaru, Malini K
Cc: OpenStack Development Mailing List
Subject: Re: OVF/OVA support


Hi Malini,

I am interested in OVA support for applications. Specifically OVA to Heat, as 
this is what we usually do in the Murano project.

When is the free-format session for Glance? Should we add this to the session etherpad?

Thanks,
Gosha
On Nov 5, 2014 6:06 PM, Bhandaru, Malini K 
malini.k.bhand...@intel.com wrote:
Please join us on Friday in the Glance track – free format session, to discuss 
supporting OVF/OVA in OpenStack.

Poll:

1)  How interested are you in this feature? 0 – 10

2)  Interested enough to help develop the feature?



Artifacts are ready for use.

We are considering defining an artifact for OVF/OVA.
What should the scope of this work be? Who are our fellow travelers?
Intel is interested in parsing OVF meta data associated with images – to ensure 
that a VM image lands on the most appropriate hardware in the cloud instance, 
to ensure optimal performance.
The goal is to remove the need to manually specify image meta data, allow the 
appliance provider to specify HW requirements, and in so doing reduce human 
error.
Are any partners interested in writing an OVF/OVA artifact-to-stack deployment, 
along the lines of Heat?
As a first pass, we (Intel) could at least:

1)  Define an artifact for OVA, parse the OVF in it, pull out the 
images therein, store them in the Glance image database, and attach metadata 
to the same.

2)  Not imply that OpenStack supports OVA/OVF -- we need to be 
clear on this.

3)  An OpenStack user could create a heat template using the images 
registered in step 1.

4)  OVA to Heat – there may be a loss in translation! Should we attempt 
this?

5)  What should we do with multiple volume artifacts?

6)  Are volumes read-only? Or on cloning, make copies of them?
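Step 1 above — opening the OVA and reading the OVF descriptor — is mostly standard-library work, since an OVA is a tar archive holding an .ovf XML descriptor plus disk images. A sketch (not a Glance artifact implementation; function name and scope are illustrative only):

```python
import tarfile
import xml.etree.ElementTree as ET

# OVF 1.x envelope namespace (the DMTF standard one).
OVF_NS = "{http://schemas.dmtf.org/ovf/envelope/1}"

def inspect_ova(path):
    """Return the file names referenced by the OVF descriptor inside an OVA.

    Sketch only: a real Glance artifact handler would also upload the disks
    and attach the parsed hardware requirements as image metadata.
    """
    with tarfile.open(path) as ova:
        # The descriptor is the .ovf member of the archive.
        ovf_name = next(n for n in ova.getnames() if n.endswith(".ovf"))
        tree = ET.parse(ova.extractfile(ovf_name))
        # ovf:File elements under ovf:References name the packaged disks.
        return [f.get(OVF_NS + "href") for f in tree.iter(OVF_NS + "File")]
```

The hardware requirements mentioned above live in the descriptor's VirtualHardwareSection and could be walked the same way.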


[openstack-dev] [Heat] stack-update with existing parameters

2014-09-24 Thread Dimitri Mazmanov
TL;DR Is there any reason why stack-update doesn't reuse the existing
parameters when I extend my stack definition with a resource that uses
them?

I have created a stack from the hello_world.yaml template
(https://github.com/openstack/heat-templates/blob/master/hot/hello_world.yaml).
It has the following parameters: key_name, image, flavor, admin_pass,
db_port.

heat stack-create hello_world -P "key_name=test_keypair;image=test_image_cirros;flavor=m1.test_heat;admin_pass=Openst1" -f hello_world.yaml

Then I added one more nova server resource with a new name (server1); all
the other details are untouched.

I get the following when I use this new template without specifying any of
the parameter values.

heat --debug stack-update hello_world -f hello_world_modified.yaml

On debugging it throws the below exception.
The resource was found at
http://localhost:8004/v1/7faee9dd37074d3e8896957dc4a52e22/stacks/hello_world/85a0bc2c-1a20-45c4-a8a9-7be727db6a6d;
you should be redirected automatically.
DEBUG (session) RESP: [400] CaseInsensitiveDict({'date': 'Wed, 24 Sep 2014
10:08:08 GMT', 'content-length': '961', 'content-type': 'application/json;
charset=UTF-8'})
RESP BODY: {"explanation": "The server could not comply with the request
since it is either malformed or otherwise incorrect.", "code": 400,
"error": {"message": "The Parameter (admin_pass) was not provided.",
"traceback": "Traceback (most recent call last):
  File \"/opt/stack/heat/heat/engine/service.py\", line 63, in wrapped
    return func(self, ctx, *args, **kwargs)
  File \"/opt/stack/heat/heat/engine/service.py\", line 576, in update_stack
    env, **common_params)
  File \"/opt/stack/heat/heat/engine/parser.py\", line 109, in __init__
    context=context)
  File \"/opt/stack/heat/heat/engine/parameters.py\", line 403, in validate
    param.validate(validate_value, context)
  File \"/opt/stack/heat/heat/engine/parameters.py\", line 215, in validate
    raise exception.UserParameterMissing(key=self.name)
UserParameterMissing: The Parameter (admin_pass) was not provided.",
"type": "UserParameterMissing"}, "title": "Bad Request"}

When I specify all the parameters, the stack updates properly:

heat --debug stack-update hello_world -P "key_name=test_keypair;image=test_image_cirros;flavor=m1.test_heat;admin_pass=Openst1" -f hello_world_modified.yaml

Any reason why I can't reuse the existing parameters during the
stack-update if I don't want to specify them again?

-
Dimitri





[openstack-dev] [heat] Stack adopt after handle_create

2014-07-18 Thread Dimitri Mazmanov
Hi,
I'm working on the following use-case:
I have a stack template containing a custom resource - my_res - that upon
handle_create invokes creation of another set of resources [r1, r2, r3,
...] (not expressed in the template). These newly created resources are
not associated with the stack. The goal is to include them into the
existing stack.

My idea was to perform stack adopt on r1, r2, r3, etc inside handle_create
of my_res.

handle_create:
  create my_res
  find r1, r2, ...
  for each r in [r1, r2, ...]:
    my_res.stack.adopt(r)

Is this ok or a terrible, terrible idea?

Thanks!
-
Dimitri




Re: [openstack-dev] [heat] Stack adopt after handle_create

2014-07-18 Thread Dimitri Mazmanov

On 18/07/14 11:20, Steven Hardy sha...@redhat.com wrote:

On Fri, Jul 18, 2014 at 09:02:33AM +, Dimitri Mazmanov wrote:
 Hi,
 I'm working on the following use-case:
 I have a stack template containing a custom resource - my_res - that
upon
 handle_create invokes creation of another set of resources [r1, r2, r3,
 ...] (not expressed in the template). These newly created resources are
 not associated with the stack. The goal is to include them into the
 existing stack.
 
 My idea was to perform stack adopt on r1, r2, r3, etc inside
handle_create
 of my_res.
 
 handle_create 
   create my_res
   find r1, r2, ...
   for each r in [r1, r2, ...]
  my_res.stack.adopt r
 
 Is this ok or a terrible, terrible idea?

Pretty much sounds like a terrible idea to me ;)

Doh :)


More information is needed to fully understand what you're trying to do,
but IMO having a hidden nested stack created like this is highly likely to
end up in a buggy mess, and having heat create resources not defined in
any template is basically always the wrong thing to do.

I should have clarified this part. Heat creates only the custom resource
(my_res). The r1/r2/r3 set is created as a consequence, not by Heat. So
Heat has no idea what’s happening in the background.
Why would it end up in a buggy mess? Heat creates a stack and then calls
stack adopt on a few resources. If not this, what is the proper usage of
stack adopt operation then?

If you want to create a stack which contains some existing resources, I'd
do something like create a stack containing everything except r1/r2/r3,
then abandon it, modify the abandon data to add the existing resources you
want to adopt, then adopt the stack using the modified data.

The goal is to keep the create and adopt operations in a single call to
make the flow look like normal stack deployment (the user clicks Deploy)
without having to perform heat calls manually.


Of course it's much better to just create everything via a heat template,
but if you have to adopt these existing resources, that may be one way to
do it.  More information on the specifics of what you're trying to achieve
would help us advise more :)

I have a custom neutron port. When neutron creates this port it also
creates several networks associated with it. As a result, the created
networks theoretically belong to the created stack, but in reality are not
related to it until the stack adopts them.
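Steve's abandon/modify/adopt flow amounts to editing the abandon JSON between the two CLI calls. A sketch of that editing step (the resource names, physical id, and exact field names are hypothetical; inspect real `heat stack-abandon` output for your Heat version before relying on them):

```python
import json

def add_existing_resource(abandon_data, name, resource_type, physical_id):
    """Inject an already-existing resource into abandon data before re-adopting.

    The field names loosely follow the shape of stack-abandon output; treat
    this as a sketch, not Heat's documented format.
    """
    abandon_data.setdefault("resources", {})[name] = {
        "type": resource_type,
        "resource_id": physical_id,  # id of the existing physical resource
        "status": "COMPLETE",
        "action": "CREATE",
    }
    # The template must also declare the resource, or a later stack-update
    # would treat it as removed.
    abandon_data["template"].setdefault("resources", {})[name] = {
        "type": resource_type,
    }
    return abandon_data

# Hypothetical minimal abandon data for a stack named "mystack".
data = {"name": "mystack", "template": {"resources": {}}, "resources": {}}
add_existing_resource(data, "net1", "OS::Neutron::Net", "3f2a0011")
print(json.dumps(data, indent=2))
```

The modified JSON would then be fed back to the stack-adopt call in Steve's flow.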

-
Dimitri


Steve



Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-20 Thread Dimitri Mazmanov
Hi!
Comments inline.

On 20/05/14 21:58, Zane Bitter zbit...@redhat.com wrote:

On 20/05/14 12:17, Jay Pipes wrote:
 Hi Zane, sorry for the delayed response. Comments inline.

 You are assuming a public cloud provider use case above. As much as I
 tend to focus on the utility cloud model, where the incentives are
 around maximizing the usage of physical hardware by packing in as many
 paying tenants into a fixed resource, this is only one domain for
 OpenStack.

I was assuming the use case advanced in this thread, which sounded like
a semi-public cloud model.

However, I'm actually trying to argue from a higher level of abstraction
here. In any situation where there are limited resources, optimal
allocation of those resources will occur when the incentives of the
suppliers and consumers of said resources are aligned, independently of
whose definition of optimal you use. This applies equally to public
clouds, private clouds, lemonade stands, and the proverbial two guys
stranded on a desert island. In other words, it's an immutable property
of economies, not anything specific to one use case.

This makes perfect sense. I'd add one tiny bit though: "…optimal allocation
of those resources will *eventually* occur…"
For clouds, by rounding up to the nearest flavour you actually leave no
space for optimisation. Even for the lemonade stands you'd first observe
what people prefer most before deciding on optimal allocation of water or
soda bottles :)


 There are, for good or bad, IT shops and telcos that frankly are willing
 to dump money into an inordinate amount of hardware -- and see that
 hardware be inefficiently used -- in order to appease the demands of
 their application customer tenants. The impulse of onboarding teams for
 these private cloud systems is to just say yes, with utter disregard
 to the overall cost efficiency of the proposed customer use cases.

+1. I'd also add support of legacy applications as another reason for "the
utter disregard".


Fine, but what I'm saying is that you can just give the customer _more_
than they really wanted (i.e. round up to the nearest flavour). You can
charge them the same if you want - you can even decouple pricing from
the flavour altogether if you want. But what you can't do is assume
that, just because you gave the customer exactly what they needed and
not one kilobyte more, you still get to use/sell the excess capacity you
didn't allocate to them. Because you may not.

Like I said above, if you round up you most definitely don't get to use
the excess capacity.
Also, where exactly would you place this rounding-up functionality? Heat?
Nova? A custom script that runs before deployment? Assume the tenant
doesn't know what flavours are available, because template creation is
done automatically outside of the cloud environment.


 If there was a simple switching mechanism that allowed a deployer to
 turn on or off this ability to allow tenants to construct specialized
 instance type configurations, then who really loses here? Public or
 utility cloud providers would simply leave the switch to its default of
 off and folks who wanted to provide this functionality to their users
 could provide it. Of course, there are clear caveats around lack of
 portability to other clouds -- but let's face it, cross-cloud
 portability has other challenges beyond this particular point ;)

 The insight of flavours, which is fundamental to the whole concept of
 IaaS, is that users must pay the *opportunity cost* of their resource
 usage. If you allow users to opt, at their own convenience, to pay only
 the actual cost of the resources they use regardless of the opportunity
 cost to you, then your incentives are no longer aligned with your
 customers.

 Again, the above assumes a utility cloud model. Sadly, that isn't the
 only cloud model.

-
Dimitri




Re: [openstack-dev] [Heat] Special session on heat-translator project at Atlanta summit

2014-05-06 Thread Dimitri Mazmanov
Great effort! One question: will this session relate to the overall Heat vision 
[1] (Model Interpreters, API Relay, etc.)?

-
Dimitri

[1] https://wiki.openstack.org/wiki/Heat/Vision


On 05/05/14 19:21, Thomas Spatzier 
thomas.spatz...@de.ibm.com wrote:


Hi all,

I mentioned in some earlier mail that we have started to implement a TOSCA
YAML to HOT translator on stackforge as project heat-translator. We have
been lucky to get a session allocated in the context of the Open source @
OpenStack program for the Atlanta summit, so I wanted to share this with
the Heat community to hopefully attract some interested people. Here is the
session link:

http://openstacksummitmay2014atlanta.sched.org/event/c94698b4ea2287eccff8fb743a358d8c#.U2e-zl6cuVg

While there is some focus on TOSCA, the goal of discussions would also be
to find a reasonable design for sitting such a translation layer on-top of
Heat, but also identify the relations and benefits for other projects, e.g.
how Murano use cases that include workflows for templates (which is part of
TOSCA) could be addressed long term. So we hope to see a lot of interested
folks there!

Regards,
Thomas

PS: Here is a more detailed description of the session that we submitted:

1) Project Name:
heat-translator

2) Describe your project, including links to relevent sites, repositories,
bug trackers and documentation:
We have recently started a stackforge project [1] with the goal to enable
the deployment of templates defined in standard formats such as OASIS TOSCA
on top of OpenStack Heat. The Heat community has been implementing a native
template format 'HOT' (Heat Orchestration Templates) during the Havana and
Icehouse cycles, but it is recognized that supporting other standard
formats that are sufficiently aligned with HOT is also desirable.
Therefore, the goal of the heat-translator project is to enable such
support by translating such formats into Heat's native format and thereby
enable a deployment on Heat. Current focus is on OASIS TOSCA. In fact, the
OASIS TOSCA TC is currently working on a TOSCA Simple Profile in YAML [2]
which has been greatly inspired by discussions with the Heat team, to help
getting TOSCA adoption in the community. The TOSCA TC and the Heat team
have also been in close discussion to keep HOT and TOSCA YAML aligned. Thus,
the first goal of heat-translator will be to enable deployment of TOSCA
YAML templates thru Heat.
Development had been started in a separate public github repository [3]
earlier this year, but we are currently in the process of moving all code
to the stackforge projects

[1] https://github.com/stackforge/heat-translator
[2]
https://www.oasis-open.org/committees/document.php?document_id=52571&wg_abbrev=tosca
[3] https://github.com/spzala/heat-translator

3) Please describe how your project relates to OpenStack:
Heat has been working on a native template format HOT to replace the
original CloudFormation format as the primary template of the core Heat
engine. CloudFormation shall continue to be supported as one possible
format (to protect existing content), but it is desired to move such
support out of the core engine into a translation layer. This is one
architectural move that can be supported by the heat-translator project.
Furthermore, there is a desire to enable standardized formats such as OASIS
TOSCA to run on Heat, which will also be possible thru heat-translator.

In addition, recent discussions [4] in the large OpenStack orchestration
community have shown that several groups (e.g. Murano) are looking at
extending orchestration capabilities beyond Heat functionality, and in the
course of doing this also extend current template formats. It has been
suggested in mailing list posts that TOSCA could be one potential format to
center such discussions around instead of several groups developing their
own orchestration DSLs. The next version of TOSCA with its simple profile
in YAML is very open for input from the community, so there is a great
opportunity to shape the standard in a way to address use cases brought up
by the community. Willingness to join discussions with the TOSCA TC have
already been indicated by several companies contributing to OpenStack.
Therefore we think the heat-translator project can help to focus such
discussions.

[4]
http://lists.openstack.org/pipermail/openstack-dev/2014-March/028957.html

4) How do you plan to use the time and space?
Give attendees an overview of current developments of the TOSCA Simple
Profile in YAML and how we are aligning this with HOT.
Give a quick summary of current code.
Discuss next steps and long term direction of the heat-translator project:
alignment with Heat, parts that could move into Heat, parts that would stay
outside of Heat etc.
Collect use cases from other interested groups (e.g. Murano), and discuss
that as potential input for the project and also ongoing TOSCA standards
work.
Discuss if and how this project could 

Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-06 Thread Dimitri Mazmanov
Hi Solly,

On 06/05/14 19:16, Solly Ross sr...@redhat.com wrote:

For your first question, I'll probably create a BP sometime today.

Great. Thanks. Happy to help with implementation.


For your second question, allowing tenants to create flavors
prevents one of the main parts of the flavor idea from working --
having flavors that nicely fit together to prevent wasted host
resources.  For instance suppose the normal system flavors used
memory in powers of 2GB (2, 4, 8, 16, 32).  Now suppose someone
came in, created a private flavor that used 3GB of RAM.  We now
have 1GB of RAM that can never be used, unless someone decides
to come along and create a 1GB flavor (actually, since RAM has
even more granularity than that, you could have someone specify
that they wanted 1.34GB of RAM, for instance, and then you have
all sorts of weird stuff going on).

When I said "create custom flavor" I never meant allowing the users such
nonsense as specifying 1.34GB of RAM (this can be controlled by having
constraints). Although some can be very meticulous :)
Is this still an issue?
My basic idea was to let the users think in terms of physical resources
and based on that let them create the configuration they need (if they
don’t find the right flavor in the global list).


Best Regards,
Solly Ross

- Original Message -
From: Dimitri Mazmanov dimitri.mazma...@ericsson.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Monday, May 5, 2014 3:40:08 PM
Subject: Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation
through Heat (pt.2)

This is good! Is there a blueprint describing this idea? Or any plans
describing it in a blueprint?
Would happily share the work.

Should we mix it with flavors in horizon though? I'm thinking of having a
separate "Resources" page, wherein the user can "define" resources. I'm not
a UX expert though.

But let me come back to the project-scoped flavor creation issues.
Why do you think it's such a bad idea to let tenants create flavors for
their project-specific needs?

I'll refer again to Steve Hardy's proposal:
- Normal user : Can create a private flavor in a tenant where they
  have the Member role (invisible to any other users)
- Tenant Admin user : Can create public flavors in the tenants where they
  have the admin role (visible to all users in the tenant)
- Domain admin user : Can create public flavors in the domains where they
  have the admin role (visible to all users in all tenants in that domain)


 If you actually have 64 flavors, though, and it's overwhelming
 your users, ...

The users won't see all 64 flavors, only those they have defined and the
public ones.



Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Dimitri Mazmanov
I guess I need to describe the use-case first.
An example of a Telco application is the IP Multimedia Subsystem (IMS) [1],
which is a fairly complex beast. Each component of IMS can have very
different requirements on the computing resources. If we try to capture
everything in terms of flavors, the list of flavors can grow very quickly
and still be specific to one single application. There are also many more
apps to deploy. Agreed, one can say: just round to the best matching
flavor! That will work, but it is not the most efficient solution (a set of
4-5 global flavors will not provide the best fitting model for every VM we
need to spawn). For such applications a flavor is not the lowest level of
granularity. RAM, CPU, Disk is. Hence the question. In OpenStack, tenants
are bound to think in terms of flavors. And if this model is the lowest
level of granularity, then dynamic creation of flavors actually supports
this model and allows non-trivial applications to use flavors (I guess this
is why this question was raised last year by NSN). But there are some
issues related to this :) and these issues I have written down in my first
mail.

Dimitri 

[1] http://en.wikipedia.org/wiki/IP_Multimedia_Subsystem


On 05/05/14 17:23, Solly Ross sr...@redhat.com wrote:

Just to expand a bit on this, flavors are supposed to be the lowest level
of granularity,
and the general idea is to round to the nearest flavor (so if you have a
VM that requires
3GB of RAM, go with a 4GB flavor).  Hence, in my mind, it doesn't make
any sense to create
flavors on the fly; you should have enough flavors to suit your needs,
but I can't really
think of a situation where you'd need so much granularity that you'd need
to create new
flavors on the fly (assuming, of course, that you planned ahead and
created enough flavors
that you don't have VMs that are extremely over-allocated).
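The "round to the nearest flavor" rule Solly describes is simple to sketch: pick the smallest flavor that satisfies every requirement (the flavor list here is hypothetical):

```python
# Hypothetical flavor list: (name, ram_gb, vcpus)
FLAVORS = [
    ("m1.tiny", 1, 1),
    ("m1.small", 2, 1),
    ("m1.medium", 4, 2),
    ("m1.large", 8, 4),
]

def nearest_flavor(ram_gb, vcpus):
    """Smallest flavor satisfying both requirements, or None if nothing fits."""
    candidates = [f for f in FLAVORS if f[1] >= ram_gb and f[2] >= vcpus]
    return min(candidates, key=lambda f: (f[1], f[2]), default=None)

print(nearest_flavor(3, 2))  # ('m1.medium', 4, 2): the 3 GB request rounds up
```

The gap between the request (3 GB) and the chosen flavor (4 GB) is exactly the over-allocation the thread argues about.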

Best Regards,
Solly Ross

- Original Message -
From: Serg Melikyan smelik...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Sent: Monday, May 5, 2014 2:18:21 AM
Subject: Re: [openstack-dev] [Nova][Heat] Custom Nova Flavor creation
through Heat (pt.2)

 Having project-scoped flavors will rid us of the identified issues, and
will allow a more fine-grained way of managing physical resources.

The flavors concept was introduced in clouds to solve the issue of
effective physical resource usage: 8 GB of physical memory can be
effectively split into two m2.my_small VMs with 4 GB RAM each, or eight
m1.my_tiny VMs with 1 GB each.

Let's consider an example where your cloud has only 2 compute nodes with
8 GB RAM each:
vm1 (m1.my_tiny) - node1
vm2 (m1.my_tiny) - node1
vm3 (m2.my_small) - node1
vm4 (m2.my_small) - node2 (since we could not spawn on node1)

This leaves the ability to predictably spawn 2 more VMs with the
m1.my_tiny flavor on node1, and 2 m1.my_tiny VMs or 1 m2.my_small VM on
node2. If the user has the ability to create any flavor he likes, he can
create a flavor like mx.my_flavor with 3 GB of RAM that cannot be spawned
on node1 at all, and that leaves 1 GB under-used on node2 when spawned
there. If you multiply the number of nodes to 100 or even 1000 (like some
of the OpenStack deployments), you will have very significant memory
under-usage.
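The packing argument above can be sketched with a toy first-fit
simulation (pure illustration; the real Nova scheduler uses filters and
weighers, not this code):

```python
# First-fit placement over two 8 GB nodes, mirroring the example above.
# Shows how an arbitrary 3 GB flavor strands memory that fixed 1 GB and
# 4 GB flavors would have packed cleanly. Sizes are in GB.
def place(vms, node_capacity_gb=8, nodes=2):
    """First-fit: return per-node free memory after placing all VMs."""
    free = [node_capacity_gb] * nodes
    for ram in vms:
        for i in range(nodes):
            if free[i] >= ram:
                free[i] -= ram
                break
        else:
            raise RuntimeError(f"no room for a {ram} GB VM")
    return free

# Fixed flavors (1 GB tiny, 4 GB small) pack with no waste:
print(place([1, 1, 4, 4, 1, 1, 4]))  # [0, 0]

# An arbitrary 3 GB flavor strands 2 GB on each node:
print(place([3, 3, 3, 3]))           # [2, 2]
```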

Do we really need the ability to allocate an arbitrary amount of physical
resources for a VM? If yes, I suggest providing two ways to define
physical resource allocation for VMs: with flavors and dynamically,
leaving it to cloud admins/owners to decide how they prefer their cloud
resources to be allocated. Small clouds may prefer flavors, while big
clouds may allow dynamic resource allocation (the impact of under-usage
will not be so big). As a transition plan, project-scoped flavors may do
the job.


On Fri, May 2, 2014 at 5:35 PM, Dimitri Mazmanov 
dimitri.mazma...@ericsson.com  wrote:



This topic has already been discussed last year and a use-case was
described (see [1]).
Here's a Heat blueprint for a new OS::Nova::Flavor resource: [2].
Several issues have been brought up after posting my implementation for
review [3], all related to how flavors are defined/implemented in nova:


* Only admin tenants can manage flavors due to the default admin rule
in policy.json. 
* Per-stack flavor creation will pollute the global flavor list
* If two stacks create a flavor with the same name, collision will
occur, which will lead to the following error: ERROR (Conflict): Flavor
with name dupflavor already exists. (HTTP 409)
These and the ones described by Steven Hardy in [4] are related to the
flavor scoping in Nova.

Is there any plan/discussion to allow project-scoped flavors in nova,
similar to Steven's proposal for role-based scoping (see [4])?
Currently the only purpose of the is_public flag is to hide the flavor
from users without the admin role, but it's still visible in all
projects. Any plan to change this?

Having project-scoped flavors will rid us of the identified issues, and
will allow a more fine-grained way of managing physical resources.

Re: [openstack-dev] [Nova] [Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-05 Thread Dimitri Mazmanov
This is good! Is there a blueprint describing this idea, or any plans to
describe it in one?
I would happily share the work.

Should we mix it with flavors in horizon though? I'm thinking of having a
separate "Resources" page,
wherein the user can "define" resources. I'm not a UX expert though.

But let me come back to the project-scoped flavor creation issues.
Why do you think it's such a bad idea to let tenants create flavors for
their project-specific needs?

I'll refer again to Steve Hardy's proposal:
- Normal user : Can create a private flavor in a tenant where they
  have the Member role (invisible to any other users)
- Tenant Admin user : Can create public flavors in the tenants where they
  have the admin role (visible to all users in the tenant)
- Domain admin user : Can create public flavors in the domains where they
  have the admin role (visible to all users in all tenants in that domain)
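As a rough illustration (not real Nova/Keystone policy code, and the
role/scope model is heavily simplified), the three visibility rules above
could be modeled like this:

```python
# Sketch of the visibility rules quoted above; all names are made up.
def flavor_visible(flavor, user):
    """Return True if `user` should see `flavor` under the proposal."""
    scope, owner = flavor["scope"], flavor["owner"]
    if scope == "private":   # normal user's flavor: owner only
        return user["id"] == owner
    if scope == "tenant":    # tenant-admin's flavor: tenant members
        return owner in user["tenants"]
    if scope == "domain":    # domain-admin's flavor: whole domain
        return user["domain"] == owner
    return False

alice = {"id": "alice", "tenants": ["t1"], "domain": "d1"}
bob = {"id": "bob", "tenants": ["t2"], "domain": "d1"}

private = {"scope": "private", "owner": "alice"}
tenant_wide = {"scope": "tenant", "owner": "t1"}
domain_wide = {"scope": "domain", "owner": "d1"}

print(flavor_visible(private, alice), flavor_visible(private, bob))
print(flavor_visible(tenant_wide, bob))  # bob is not in tenant t1
print(flavor_visible(domain_wide, bob))  # bob is in domain d1
```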


 If you actually have 64 flavors, though, and it's overwhelming
 your users, ...

The users won't see all 64 flavors, only those they have defined plus the public ones.

-

Dimitri

On 05/05/14 20:18, Chris Friesen chris.frie...@windriver.com wrote:

On 05/05/2014 11:40 AM, Solly Ross wrote:
 One thing that I was discussing with @jaypipes and @dansmith over
 on IRC was the possibility of breaking flavors down into separate
 components -- i.e have a disk flavor, a CPU flavor, and a RAM flavor.
 This way, you still get the control of the size of your building blocks
 (e.g. you could restrict RAM to only 2GB, 4GB, or 16GB), but you avoid
 exponential flavor explosion by separating out the axes.

I like this idea because it allows for greater flexibility, but I think
we'd need to think carefully about how to expose it via horizon--maybe
separate tabs within the overall flavors page?

As a simplifying view you could keep the existing flavors which group
all of them, while still allowing instances to specify each one
separately if desired.
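The per-axis idea described above can be sketched as follows; the flavor
names and sizes are hypothetical, and this is not an existing Nova API:

```python
# Sketch of the "separate flavor axes" idea: pick one option per axis
# instead of one row from the full cross product.
CPU_FLAVORS = {"c1": 1, "c2": 2, "c4": 4}             # vcpus
RAM_FLAVORS = {"r2": 2048, "r4": 4096, "r16": 16384}  # MB
DISK_FLAVORS = {"d10": 10, "d40": 40}                 # GB

def compose(cpu, ram, disk):
    """Combine one flavor per axis into a full resource spec."""
    return {
        "vcpus": CPU_FLAVORS[cpu],
        "ram_mb": RAM_FLAVORS[ram],
        "disk_gb": DISK_FLAVORS[disk],
    }

# 3 + 3 + 2 = 8 admin-managed objects cover 3 * 3 * 2 = 18 combinations:
print(compose("c2", "r4", "d10"))
# {'vcpus': 2, 'ram_mb': 4096, 'disk_gb': 10}
```

Admins manage a short list per axis instead of the cross product, which
is exactly the anti-explosion property being argued for.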

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [Nova][Heat] Custom Nova Flavor creation through Heat (pt.2)

2014-05-02 Thread Dimitri Mazmanov
This topic has already been discussed last year and a use-case was described 
(see [1]).
Here's a Heat blueprint for a new OS::Nova::Flavor resource: [2].
Several issues have been brought up after posting my implementation for review 
[3], all related to how flavors are defined/implemented in nova:

  *   Only admin tenants can manage flavors due to the default admin rule in 
policy.json.
  *   Per-stack flavor creation will pollute the global flavor list
  *   If two stacks create a flavor with the same name, collision will occur, 
which will lead to the following error: ERROR (Conflict): Flavor with name 
dupflavor already exists. (HTTP 409)

These and the ones described by Steven Hardy in [4] are related to the flavor 
scoping in Nova.

Is there any plan/discussion to allow project scoped flavors in nova, similar 
to the Steven’s proposal for role-based scoping (see [4])?
Currently the only purpose of the is_public flag is to hide the flavor from 
users without the admin role, but it’s still visible in all projects. Any plan 
to change this?

Having project-scoped flavors will rid us of the identified issues, and will 
allow a more fine-grained way of managing physical resources.

Dimitri

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-November/018744.html
[2] https://wiki.openstack.org/wiki/Heat/Blueprints/dynamic-flavors
[3] https://review.openstack.org/#/c/90029
[4] http://lists.openstack.org/pipermail/openstack-dev/2013-November/019099.html