Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling
I favor the second option for the same reasons as Zane described, but I also don't think we need a LaunchConfiguration resource. How about just adding an attribute to a resource so that the engine knows it is not meant to be handled in the usual way, and is instead really a template (sorry for the overloaded term) used by a scaling group. For example:

  group:
    type: OS::Heat::ScalingGroup
    properties:
      scaled_resource: server_for_scaling

  server_for_scaling:
    use_for_scaling: true  # the name of this attribute is clearly up for discussion ;-)
    type: OS::Nova::Server
    properties:
      image: my_image
      flavor: m1.large

When the engine sees use_for_scaling set to true, it does not call things like handle_create. Anyway, that's the general idea. I'm sure there are many other ways to achieve a similar effect.

Edmund Troche

From: Zane Bitter zbit...@redhat.com
To: openstack-dev@lists.openstack.org
Date: 01/30/2014 09:43 AM
Subject: Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

On 30/01/14 06:01, Thomas Herve wrote:
> Hi all,
>
> While talking to Zane yesterday, he raised an interesting question about whether or not we want to keep a LaunchConfiguration object for the native autoscaling resources.
>
> The LaunchConfiguration object basically holds properties to be able to fire new servers in a scaling group. In the new design, we will be able to start arbitrary resources, so we can't keep a strict LaunchConfiguration object as it exists, as we can have arbitrary properties. It may still be interesting to store it separately to be able to reuse it between groups.
>
> So either we do this:
>
>   group:
>     type: OS::Heat::ScalingGroup
>     properties:
>       scaled_resource: OS::Nova::Server
>       resource_properties:
>         image: my_image
>         flavor: m1.large
>
> The main advantages of this that I see are:
> * It's one less resource.
> * We can verify properties against the scaled_resource at the place the LaunchConfig is defined.
(Note: in _both_ models these would be verified at the same place the _ScalingGroup_ is defined.)

> Or:
>
>   group:
>     type: OS::Heat::ScalingGroup
>     properties:
>       scaled_resource: OS::Nova::Server
>       launch_configuration: server_config
>
>   server_config:
>     type: OS::Heat::LaunchConfiguration
>     properties:
>       image: my_image
>       flavor: m1.large

I favour this one for a few reasons:

* A single LaunchConfiguration can be re-used by multiple scaling groups. Reuse is good, and is one of the things we have been driving toward with e.g. software deployments.
* Assuming the Autoscaling API and Resources use the same model (as they should), in this model the Launch Configuration can be defined in a separate template to the scaling group, if the user so chooses. Or it can even be defined outside Heat and passed in as a parameter.
* We can do the same with the LaunchConfiguration for the existing AWS-compatibility resources. That will allow us to fix the current broken implementation that goes magically fishing in the local stack for launch configs[1]. If we pick a model that is strictly less powerful than stuff we already know we have to support, we will likely be stuck with broken hacks forever :(

> (Not sure we can actually define dynamic properties, in which case it'd be behind a top property.)

(This part is just a question of how the resource would look in Heat, and the answer would not really affect the API.)

I think this would be possible, but it would require working around the usual code we have for managing/validating properties. Probably not a show-stopper, but it is more work. If we can do this there are a couple more benefits to this way:

* Extremely deeply nested structures are unwieldy to deal with, both for us as developers[2] and for users writing templates; shallower hierarchies are better.
* You would be able to change an OS::Nova::Server resource into a LaunchConfiguration, in most cases, just by changing the resource type.
(This also opens up the possibility of switching between them using the environment, although I don't know how useful that would be.)

cheers,
Zane.

[1] https://etherpad.openstack.org/p/icehouse-summit-heat-exorcism
[2] https://github.com/openstack/heat/blob/master/contrib/rackspace/heat/engine/plugins/auto_scale.py

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
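For illustration, Edmund's use_for_scaling marker from the top of this thread is easy to mock up. A minimal sketch, assuming the parsed template is a plain dict of resource definitions; none of this is real Heat engine code:

```python
# Hypothetical sketch: the engine walks the parsed template and only
# instantiates resources that are NOT flagged as scaling-group templates.
# The flag name `use_for_scaling` comes from the discussion above.
SCALING_FLAG = "use_for_scaling"

def resources_to_create(template_resources):
    """Names of the resources the engine should actually instantiate."""
    return [
        name
        for name, definition in template_resources.items()
        if not definition.get(SCALING_FLAG, False)
    ]

# The example from Edmund's message, as a parsed-template dict:
template = {
    "group": {
        "type": "OS::Heat::ScalingGroup",
        "properties": {"scaled_resource": "server_for_scaling"},
    },
    "server_for_scaling": {
        "use_for_scaling": True,
        "type": "OS::Nova::Server",
        "properties": {"image": "my_image", "flavor": "m1.large"},
    },
}
```

Here only `group` would get handle_create called; the flagged server definition is consumed by the scaling group instead.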
Re: [openstack-dev] [heat] [glance] Heater Proposal
I agree with what seems to also be the general consensus: that Glance can become Heater+Glance (the service that manages images in OpenStack today). Clearly, if someone looks at the Glance DB schema, APIs and service type (as returned by keystone service-list), all of the terminology is about images, so we would need to more formally define what the characteristics of image, template, maybe assembly, component etc. are, and find a good generalization.

When looking at the attributes for image (the image table), I can see that a few would be generic enough to apply to image, template etc., so those could be taken to be the base set of attributes, and then based on the type (image, template, etc.) we could have attributes that are type-specific (maybe by leveraging what is today image_properties).

As I read through the discussion, the one thing that came to mind is asset management. I can see that if someone bothers to create an image or a template, then it is for a good reason, and perhaps you'd like to maintain it as an IT asset. Along those lines, it occurred to me that maybe what we need is to make Glance some sort of asset management service that can be leveraged by Service Catalogs, Nova, etc. Instead of storing images and templates, we store assets of one kind or another, with artifacts (like files, image content, etc.) and associated metadata. There is some work we could borrow from, conceptually at least, in OSLC's Asset Management specification: http://open-services.net/wiki/asset-management/OSLC-Asset-Management-2.0-Specification/. Looking at this spec, it probably has more than we need, but there's plenty we could borrow from it.
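The base-attributes-plus-type-specific-properties split described above could be sketched like this (all field names are illustrative, not the actual Glance schema):

```python
# Hypothetical sketch of the "asset" generalization: a small base record
# holding the attributes common to all asset types, plus a free-form
# properties dict playing the role image_properties plays for images today.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    asset_type: str                 # "image", "template", "assembly", ...
    owner: str
    size: int = 0                   # bytes; meaningful mostly for images
    properties: dict = field(default_factory=dict)  # type-specific metadata

# An image and a template share the base attributes but carry
# completely different type-specific properties:
image = Asset(name="fedora-20", asset_type="image", owner="edmund",
              size=210_000_000,
              properties={"min_ram": 512, "disk_format": "qcow2"})
template = Asset(name="wordpress-single", asset_type="template", owner="edmund",
                 properties={"heat_template_version": "2013-05-23"})
```

The point of the sketch is that a field like min_ram never leaks into the base schema; it lives in the per-type properties, which is roughly what leveraging image_properties would give us.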
Edmund Troche

From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 12/06/2013 01:34 PM
Subject: Re: [openstack-dev] [heat] [glance] Heater Proposal

As the Murano team we will be happy to contribute to Glance. Our Murano metadata repository is a standalone component (with its own git repository) which is not tightly coupled with Murano itself. We can easily add our functionality to Glance as a new component/subproject.

Thanks
Georgy

On Fri, Dec 6, 2013 at 11:11 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:
On Dec 6, 2013, at 10:38 AM, Clint Byrum cl...@fewbar.com wrote:
Excerpts from Jay Pipes's message of 2013-12-05 21:32:54 -0800:
On 12/05/2013 04:25 PM, Clint Byrum wrote:
Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:
Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com wrote:
Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:

> Why not just use glance?

I've asked that question a few times, and I think I can collate the responses I've received below. I think enhancing Glance to do these things is on the table:

1. Glance is for big blobs of data, not tiny templates.
2. Versioning of a single resource is desired.
3. Tagging/classifying/listing/sorting.
4. Glance is designed to expose the uploaded blobs to Nova, not users.

My responses:

1: Irrelevant. Smaller things will fit in it just fine.

Fitting is one thing; optimizations around particular assumptions about the size of data and the frequency of reads/writes might be an issue, but I admit to ignorance about those details in Glance.

Optimizations can be improved for various use cases. The design, however, has no assumptions that I know about that would invalidate storing blobs of yaml/json vs. blobs of kernel/qcow2/raw image.

I think we are getting out into the weeds a little bit here.
It is important to think about these APIs in terms of what they actually do before the decision of combining them or not can be made.

I think of HeatR as a template storage service: it provides extra data and operations on templates. HeatR should not care about how those templates are stored. Glance is an image storage service: it provides extra data and operations on images (not blobs), and it happens to use swift as a backend.

If HeatR and Glance were combined, it would result in taking two very different types of data (template metadata vs image metadata) and mashing them into one service. How would adding the complexity of HeatR benefit Glance, when they are dealing with conceptually two very different types of data? For instance, should a template ever care about the field minRam that is stored with an image? Combining them adds a huge development complexity with a very small operations payoff, and OpenStack is already so operationally complex that the extra burden of HeatR as a separate service would be negligible. Only clients of Heat will ever care about data and operations on templates, so I move
Re: [openstack-dev] [heat] [glance] Heater Proposal
I thought about that, i.e. as a first step in the implementation just adding templates, but like you said, you might end up duplicating 5 of the 7 tables in the Glance database for every new asset type (image, template, etc). Then you would do a similar thing for the endpoints. So, I'm not sure what's a better way to approach this. For all I know, doing a s/image/asset/g for *.py, adding an images.type attribute, and a little more refactoring might get us 80% of the asset management functionality that we would need initially ;-) Not knowing the Glance code base, I'm only going by the surface footprint, so I'll leave it to the experts to comment on what would be a good approach to take Glance to the next level.

Edmund Troche

From: Randall Burt randall.b...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 12/06/2013 04:47 PM
Subject: Re: [openstack-dev] [heat] [glance] Heater Proposal

I too have warmed to this idea but wonder about the actual implementation around it. While I like where Edmund is going with this, I wonder if it wouldn't be valuable in the short-to-mid-term (I/J) to just add /templates to Glance (/assemblies, /applications, etc) alongside /images. Initially, we could have separate endpoints and data structures for these different asset types, refactoring the easy bits along the way and leveraging the existing data storage and caching bits, but leaving more disruptive changes alone. That can get the functionality going, prove some concepts, and allow all of the interested parties to better plan a more general v3 API.

On Dec 6, 2013, at 4:23 PM, Edmund Troche edmund.tro...@us.ibm.com wrote:

I agree with what seems to also be the general consensus, that Glance can become Heater+Glance (the service that manages images in OS today).
snip
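Randall's short-term suggestion earlier in the thread (separate /templates and /images endpoints sharing the existing storage and caching bits) can be sketched in a few lines. This is a toy model only; the endpoint and handler names are hypothetical and not Glance's actual API:

```python
# Hypothetical sketch: per-type endpoints built over one shared backing
# store, so new asset types can be added without reworking the schema.
STORE = {}  # shared backing store, keyed by (asset_type, name)

def make_endpoint(asset_type):
    """Build upload/list handlers for one asset type over the shared store."""
    def upload(name, data):
        STORE[(asset_type, name)] = data
    def listing():
        return sorted(n for (t, n) in STORE if t == asset_type)
    return upload, listing

# Roughly, /images and /templates as separate endpoints:
upload_image, list_images = make_endpoint("image")
upload_template, list_templates = make_endpoint("template")

upload_image("fedora-20", b"\x00qcow2-bytes")
upload_template("wordpress", "heat_template_version: 2013-05-23")
```

Each endpoint only sees its own asset type, while the storage layer stays common, which is the "refactor the easy bits, leave the disruptive changes alone" idea.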
Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap
You bring up a good point, Thomas. I think some of the discussions are mixing template and stack perspectives; they are not the same thing: stack == instance of a template. There is likely room for tagging stacks, all under the control of the user and meant for user consumption, vs. the long-running discussion on template-level metadata. This may be yet another use case ;-)

Edmund Troche
Senior Software Engineer
IBM Software Group | 11501 Burnet Rd. | Austin, TX 78758
+1.512.286.8977 | T/L 363.8977 | edmund.tro...@us.ibm.com

From: Thomas Spatzier thomas.spatz...@de.ibm.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 11/27/2013 11:00 AM
Subject: Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

Thanks, that clarified the use case a bit. But looking at the use case now, isn't this stack tagging instead of template tagging? I.e. assume that for each stack a user creates, he/she can assign one or more tags so you can do better queries to find stacks later?

Regards,
Thomas

Tim Schnell tim.schn...@rackspace.com wrote on 27.11.2013 16:24:18:

From: Tim Schnell tim.schn...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 27.11.2013 16:28
Subject: Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

Ok, I just re-read my example and that was a terrible example. I'll try to create the user story first and hopefully answer Clint's and Thomas's concerns.

If the only use case for adding keywords to the template were to help organize the template catalog, then I would agree the keywords should go outside of Heat. The second purpose for keywords is why I think they belong in the template, so I'll cover that.

Let's assume that an end-user of Heat has spun up 20 stacks and has now requested help from a Support Operator of Heat.
In this case, the end-user did not have a solid naming convention for naming his stacks (they are all named tim1, tim2, etc.) and his request to the Support Operator was really vague, like "My Wordpress stack is broken."

The first thing the Support Operator would do is pull up the end-user's stacks in either Horizon or via the heat client API. In both cases, at the moment, he would then have to either run stack-show on each stack to look at its description, or ask the end-user for a stack-id/stack-name. This currently gets the job done, but a better experience would be for stack-list to already display some keywords about each stack, so the Support Operator would have to do less digging. In this case the end-user only has one Wordpress stack, so he would have been annoyed if the Support Operator requested more information from him. (Or maybe he has more than one Wordpress stack, but only one currently in CREATE_FAILED state.)

As a team, we have already encountered this exact situation just doing team testing, so I imagine that others would find value in a consistent way to determine at least the general purpose of a stack from the stack-list page. Putting the stack description in the stack-list table would take up too much room from a design standpoint. Once keywords have been added to the template, part of the blueprint would be to return them with the stack-list information.

The previous example I attempted to explain is really more of an edge case, so let's ignore it for now.

Thanks,
Tim

On 11/27/13 3:19 AM, Thomas Spatzier thomas.spatz...@de.ibm.com wrote:

Excerpts from Tim Schnell's message on 27.11.2013 00:44:04:

From: Tim Schnell tim.schn...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Date: 27.11.2013 00:47
Subject: Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

snip

That is not the use case that I'm attempting to make, let me try again.
For what it's worth, I agree that in this use case ("I want a mechanism to tag particular versions of templates") your solution makes sense, and it will probably be necessary as the requirements for the template catalog start to become defined. What I am