Re: [openstack-dev] [Glare][Heat][TripleO] Heat artifact type

2016-07-20 Thread Randall Burt
FWIW, option 2 is almost required unless we plan to be able to bundle multiple 
environments with a single template. While having a single environment for a 
single template can be useful, the even *more* useful scenario (and the primary 
one driving the development of environments initially) is when you have options 
as to how a template behaves (use Trove for the backend, or boot VMs and use 
software config to install a database). IMO, you'd want to de-couple 
environments from the templates, given that multiple environments could work 
for the same template.
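To make the de-coupling concrete, here's a rough sketch using python-heatclient 
of launching one template under two different environments, each mapping the 
same abstract type to a different implementation (the endpoint, token, and file 
names are made up):

    from heatclient.client import Client

    TEMPLATE = open('app.yaml').read()
    # Two interchangeable environments for the same template; each maps the
    # abstract database type to a different concrete nested template.
    ENV_TROVE = 'resource_registry:\n  "My::App::Database": trove_db.yaml\n'
    ENV_VMS = 'resource_registry:\n  "My::App::Database": vm_db.yaml\n'

    heat = Client('1', endpoint=HEAT_ENDPOINT, token=TOKEN)
    heat.stacks.create(stack_name='app-on-trove', template=TEMPLATE,
                       environment=ENV_TROVE)
    heat.stacks.create(stack_name='app-on-vms', template=TEMPLATE,
                       environment=ENV_VMS)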
 
On Jul 20, 2016, at 8:58 AM, "Mikhail Fedosin" wrote:

> 
> 
> On Wed, Jul 20, 2016 at 5:12 AM, Qiming Teng wrote:
> On Tue, Jul 19, 2016 at 06:44:06PM +0300, Oleksii Chuprykov wrote:
> > Hello!
> >
> > Today it was announced that Glare is ready for public review:
> > http://lists.openstack.org/pipermail/openstack-dev/2016-July/099553.html
> > So we are ready to start working on integrating Heat with Glare and
> > implementing a POC. After discussions with the Glare team, we see two
> > design options:
> >
> > 1) Create one artifact type that will contain the template, nested
> > templates, and environments.
> > Pros: It is easy to maintain integrity. Since an artifact is immutable, we
> > can guarantee consistency and prevent accidental removal of a dependent
> > environment.
> > Cons: If we need to add new environments for use with a template, we need
> > to create a new artifact.
> >
> > 2) Create two artifact types: environment and template.
> > Pros: It is easy to add new environments. You just need to create a new
> > dependency from the template artifact to the environment artifact.
> > Cons: An environment can be (mistakenly) removed, leaving templates that
> > depend on it in an inconsistent state.
> 
> Option 2 looks more flexible to me. I'm not sure we are encouraging
> users to introduce or rely on a hard dependency from a template to an
> environment file. That said, it is still good to know whether Glare
> supports the concept of a 'reference', where a referenced artifact cannot
> be deleted.
> 
> Hey! 
> 
> Indeed, option 2 is more flexible, but in this case users have to manually
> control dependencies, which may be hard sometimes. Also, initially Glare
> won't support 'hard' dependencies; this feature will be added in the next
> version, because it requires additional discussion. For this reason I
> recommend option 1: let Glare control template consistency for you, so that
> users can't break anything.
> 
> Best,
> Mike
>  
> 
>  - Qiming
> 
> > So we want to hear your opinions and suggestions on the matter. Thanks in
> > advance!
> >
> > Best regards,
> > Oleksii Chuprykov
> 
> 




Re: [openstack-dev] [heat] upgrade options for custom heat resource plug-ins

2016-04-11 Thread Randall Burt
There is a mechanism to mark them as support status "hidden" so that they don't 
show up in resource-type-show and aren't allowed in new templates, but older 
templates should still work. Eventually they may go away altogether but that 
should be far in the future. For your custom resources, you can decide when or 
if to ever remove them.
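For a custom plug-in, a minimal sketch of what that looks like (the resource 
and version strings are made up; the support module is Heat's in-tree one):

    from heat.engine import resource
    from heat.engine import support


    class MyCustomThing(resource.Resource):
        """Hypothetical custom resource being retired."""

        # HIDDEN keeps the type out of listings and blocks it in new
        # templates, while stacks that already use it keep working.
        support_status = support.SupportStatus(
            status=support.HIDDEN,
            version='6.0.0',
            message='Use My::Newer::Thing instead.')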

On Apr 11, 2016, at 3:58 PM, "Praveen Yalagandula" <yprav...@avinetworks.com> wrote:

> Randall,
> 
> Thanks for your reply.
> I was wondering especially about those "deprecated" properties. What happens
> after several releases? Do you just remove them at that point? If the
> expected maximum lifespan of a stack is shorter than the span for which those
> "deprecated" properties are maintained, then removing them works. But what
> happens if it is longer?
> 
> Cheers,
> Praveen
> 


Re: [openstack-dev] [heat] upgrade options for custom heat resource plug-ins

2016-04-11 Thread Randall Burt
Not really. Ideally, you need to write your resource such that these changes 
are backwards compatible. We do this for the resources we ship with Heat (add 
new properties while supporting deprecated properties for several releases).
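For example, a property rename can be kept backwards compatible with something 
like this (a sketch; the resource, names, and client call are made up, but the 
schema/support machinery is what the in-tree resources use):

    from heat.engine import properties
    from heat.engine import resource
    from heat.engine import support


    class MyThing(resource.Resource):
        PROPERTIES = (OLD_NAME, NEW_NAME) = ('old_name', 'new_name')

        properties_schema = {
            OLD_NAME: properties.Schema(
                properties.Schema.STRING,
                support_status=support.SupportStatus(
                    status=support.DEPRECATED,
                    version='5.0.0',
                    message='Use new_name instead.')),
            NEW_NAME: properties.Schema(properties.Schema.STRING),
        }

        def handle_create(self):
            # Honor either property for several releases before removal.
            name = (self.properties[self.NEW_NAME] or
                    self.properties[self.OLD_NAME])
            return self.client().things.create(name=name)  # made-up client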

On Apr 11, 2016, at 1:06 PM, "Praveen Yalagandula" wrote:

> Hi,
> 
> We are developing a custom heat resource plug-in and wondering about how to 
> handle plug-in upgrades. As our product's object model changes with new 
> releases, we will need to release updated resource plug-in code too. However, 
> the "properties" stored in the heat DB for the existing resources, whose 
> definitions have been upgraded, need to be updated too. Was there any 
> discussion on this?
> 
> Thanks,
> Praveen Yalagandula
> Avi Networks


Re: [openstack-dev] [heat] Rico Lin for heat-core

2015-12-07 Thread Randall Burt
+1

 Original message 
From: Sergey Kraynev
Date:12/07/2015 6:41 AM (GMT-06:00)
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [heat] Rico Lin for heat-core

Hi all.

I'd like to nominate Rico Lin for heat-core. He did an awesome job providing
useful and valuable reviews, and his contribution is really high [1].

[1] http://stackalytics.com/report/contribution/heat-group/60

Heat core-team, please vote with:
 +1 - if you agree
  -1 - if you disagree

--
Regards,
Sergey.


Re: [openstack-dev] [Heat] core team nomination

2015-10-20 Thread Randall Burt
+1

 Original message 
From: Sergey Kraynev
Date:10/20/2015 8:42 AM (GMT-06:00)
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [Heat] core team nomination

I'd like to propose new candidates for heat core-team:
Rabi Mishra
Peter Razumovsky

According to the statistics, both candidates have made a big effort in Heat as
reviewers and as contributors [1][2].
They have been involved in Heat community work during the last several releases
and have shown a good understanding of the Heat code.
I think that they are ready to become core reviewers.

Heat-cores, please vote with +/- 1.

[1] http://stackalytics.com/report/contribution/heat-group/180
[2] http://stackalytics.com/?module=heat-group&metric=person-day
--
Regards,
Sergey.



Re: [openstack-dev] [Heat] creating a stack with a config_drive

2015-08-07 Thread Randall Burt
config_drive: true just tells the instance to mount the drive. You pass data 
via the user_data property.
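The nova boot equivalent makes the split clear; a rough python-novaclient 
sketch (credentials and IDs are placeholders):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    auth = v3.Password(auth_url=AUTH_URL, username=USER, password=PASSWORD,
                       project_name=PROJECT, user_domain_id='default',
                       project_domain_id='default')
    nova = nova_client.Client('2', session=session.Session(auth=auth))

    # config_drive only asks Nova to attach the drive; what actually shows
    # up on it is the user_data (plus Nova-generated metadata).
    nova.servers.create('demo-server', IMAGE_ID, FLAVOR_ID,
                        userdata='#cloud-config\nruncmd: ["touch /tmp/hi"]',
                        config_drive=True)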

 Original message 
From: Maish Saidel-Keesing
Date:08/07/2015 8:08 AM (GMT-06:00)
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Heat] creating a stack with a config_drive

I have been looking for a working example of creating a Heat stack with a
config_drive attached.

I know it is possible to deploy a nova instance with the CLI [1]

I see that OS::Nova::Server has a config_drive property that is a
Boolean value [2]

What I cannot find is how this can be used. Where is the path defined
for the config file?
Or am I completely missing what and how this should be used?

Anyone with more info on this - I would be highly grateful.

Thanks.

[1] http://docs.openstack.org/user-guide/cli_config_drive.html
[2]
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server


--
Best Regards,
Maish Saidel-Keesing



Re: [openstack-dev] [Heat] creating a stack with a config_drive

2015-08-07 Thread Randall Burt
The drive will contain the user data. It's an alternative to the metadata
service and isn't a normal drive. It's created, mounted, and populated by Nova.

 Original message 
From: Maish Saidel-Keesing
Date:08/07/2015 8:35 AM (GMT-06:00)
To: Randall Burt , maishsk+openst...@maishsk.com, OpenStack Development 
Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] creating a stack with a config_drive

On 08/07/15 16:22, Randall Burt wrote:
config_drive: true just tells the instance to mount the drive. You pass data 
via the user_data property.

Thanks Randall, that is what I was thinking.

But I am confused.

When booting an instance with nova boot, I can configure a local file/directory
to be mounted as a config drive on the instance upon boot. I can also provide
information and commands regularly through the user_data.

Through Heat I can provide configuration through user_data. And I can also 
mount a config_drive.

Where do I define what that config_drive contains?



--
Best Regards,
Maish Saidel-Keesing


Re: [openstack-dev] [Heat] [app-catalog] conditional resource exposure - second thoughts

2015-07-14 Thread Randall Burt
Making users complain to admins who may have little to no control over what is 
and isn't available isn't a healthy strategy for user experience. Purposefully 
engineering hardship to try and influence operators to do the right thing in 
someone else's opinion sounds pretty counterproductive to adoption as well.

FWIW, as a user, I don't want to see things I can't use because it just wastes 
my time. I agree that the docs are a good place to see all the things, while 
querying the service should tell me what's available to me at the time.
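That query is already easy to script; a sketch with python-heatclient (auth 
details are placeholders):

    from heatclient.client import Client

    heat = Client('1', endpoint=HEAT_ENDPOINT, token=TOKEN)

    # Only the resource types this deployment actually supports come back,
    # so a user can check up front instead of failing at create time.
    available = set()
    for rt in heat.resource_types.list():
        # Depending on the client version, entries may be plain strings
        # or objects wrapping the type name.
        available.add(getattr(rt, 'resource_type', str(rt)))

    for missing in sorted({'OS::Trove::Instance', 'OS::Neutron::Net'} -
                          available):
        print('not available on this cloud: %s' % missing)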

On Jul 14, 2015, at 4:20 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 We're kind of debating the same thing for the app catalog. Do we hide 
 templates that won't work on a given cloud, potentially making useful tools 
 hard to discover, or do we view it as an opportunity for users to complain 
 to their admins that they need X feature in order to do what they need to 
 do? Last time we talked about it, we were leaning towards the latter.
 
 Maybe a happy middle ground is to have enough smarts in the system to show 
 the templates, identify what parts won't work, gray out the template but 
 provide a UI to notify the admin of the desire for X to work. That way users 
 can easily feed back their desires.
 
 Thanks,
 Kevin
 From: Pavlo Shchelokovskyy [pshchelokovs...@mirantis.com]
 Sent: Tuesday, July 14, 2015 11:34 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Heat] conditional resource exposure - second 
 thoughts
 
 Hi Heaters,
 currently we already expose to the user only resources for services deployed 
 in the cloud [1], and soon we will do the same based on whether the actual 
 user's roles allow creating specific resources [2]. Here I would like to get 
 your opinion on some of my thoughts regarding the behavior of 
 resource-type-list, resource-type-show and template-validate with these new 
 features.
 resource-type-list
 We already (or soon will) hide resources that are unavailable in the cloud or 
 to the user from the listing. But what if we add an API flag, e.g. --all, to 
 show all resources registered in the engine? That would give any user a 
 glimpse of what their Orchestration service can manage in principle, so they 
 can nag the cloud operator to install additional OpenStack components or give 
 them the required roles :)
 resource-type-show
 Right now the plan is to disable showing resources that are unavailable to 
 the user. But maybe we should leave this as it is, for the same purpose as 
 above (or again add a --all flag or such)?
 template-validate
 Right now Heat fails validation for templates containing resource types not 
 registered in the engine (e.g. a typo). Should we also make this call 
 services- and roles-sensitive? Or should we leave a way for a user to check 
 the validity of any template with any in-principle supported resources?
 The bottom line is that we are doing well in making the Heat service as 
 self-documented via its own API as possible - let's keep doing that and make 
 any Heat deployment the Heat primer :)
 Eager to hear your opinions.
 [1] http://specs.openstack.org/openstack/heat-specs/specs/liberty/conditional-resource-exposure-services.html
 [2] http://specs.openstack.org/openstack/heat-specs/specs/liberty/conditional-resource-exposure-roles.html
 Best regards,
 -- 
 Dr. Pavlo Shchelokovskyy
 Senior Software Engineer
 Mirantis Inc
 www.mirantis.com


Re: [openstack-dev] [Heat] [app-catalog] conditional resource exposure - second thoughts

2015-07-14 Thread Randall Burt
A feedback mechanism for users is obviously a good thing, but IMO, not 
germane to the thread's original purpose of how and when to expose supported 
resources in Heat. I cannot imagine us implementing such a feature directly.

This may be a good discussion to have in the context of app catalog exclusively 
and not in the context of Heat since we seem to be discussing a generally 
available catalog vs Heat running in a specific environment. These two issues 
are quite different in terms of what's supported. 

In the Heat case, the documentation in OpenStack is good enough IMO for "what 
are all the things Heat can possibly let me do", while the chosen endpoint is 
the place to answer "what are the things *this* installation of Heat will let 
me do". If the answer to the latter is unsatisfactory, then the user should 
already have mechanisms to encourage the operator to provide what's missing.

On Jul 14, 2015, at 5:31 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 Without a feedback loop of some kind, how does one solve issues like the 
 following:
 
 * Operator decides users don't need Neutron NaaS because it's too complicated 
 and they don't need it (seen on the mailing list multiple times)
 * Software developer writes cloud templates that deploy software that needs 
 private networks to work (for example, ElasticSearch)
 * User wants to deploy said software but can't discover a working version.
 
 User is sad because they can't find a working template to do what they want. 
 They either reinvent the wheel, or give up and don't use the cloud for that 
 task.
 
 Being able to close the loop and let the operator easily know the users 
 actually need something they aren't providing gives them the opportunity to 
 fix the issue, benefiting all 3 parties.
 
 Thanks,
 Kevin
 
 

Re: [openstack-dev] [Heat] Show attribute is a collection of other attributes or not?

2015-07-02 Thread Randall Burt
Maybe use "all" for all attributes in the schema and use "show" for the raw 
output from the service (as is done today for server and neutron stuff).
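Roughly, for a hypothetical resource (the backing client call is made up; the 
schema and attribute plumbing is Heat's):

    from heat.engine import attributes
    from heat.engine import resource


    class Widget(resource.Resource):
        attributes_schema = {
            'name': attributes.Schema('Name of the widget.'),
            'status': attributes.Schema('Status of the widget.'),
            'show': attributes.Schema('Raw view from the backing service.'),
        }

        def _resolve_attribute(self, name):
            thing = self.client().widgets.get(self.resource_id)  # made up
            if name == 'show':
                # "show": transparent passthrough of the client payload.
                return thing.to_dict()
            return getattr(thing, name, None)

        def all_attributes(self):
            # "all": a map built by walking the declared attribute schema.
            return dict((a, self._resolve_attribute(a))
                        for a in self.attributes_schema)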

On Jul 2, 2015, at 12:46 PM, Steven Hardy sha...@redhat.com wrote:

 On Thu, Jul 02, 2015 at 04:40:49PM +0300, Sergey Kraynev wrote:
   Hi Heaters.
   I don't think that my question is very huge for openstack-dev, but it
   affects a lot of Heat resources, and I need to collect more opinions
   before applying one of the following approaches.
   I recently uploaded an initial approach for implementing a common 'show'
   attribute [1].
   On this review one interesting suggestion was raised:
   the 'show' attribute should return a map of all the resource's
   attributes, i.e.:
   for attr in self.attributes_schema:
       outputs[attr] = self._resolve_attribute(attr)
   return outputs
   I agree that this is easier than a separate show_resource method for each
   resource, and it's the same as what the Neutron API returns on a show
   request.
   However, we already have an opposite example: the OS::Nova::Server
   resource has a bunch of attributes which are not reflected in the current
   'show' attribute output:
   https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/nova/server.py#L918
   I suppose that the same situation holds for other resources.
   So I want to ask which way we would like to follow:
   [1] show as a collection of attributes
   [2] show as the same output as the client's own show command
 
 I think [2] is the most useful, and most consistent with both the nova and
 all neutron resources:
 
 https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/neutron.py#L129
 
 Another advantage of this transparent passthrough of the data returned by
 the client is that folks have a workaround in the event the heat attributes
 schema lacks some new value that the client returns.  Obviously when it's
 added to the attributes schema, it'll be better to use that instead.
 
 Steve
 


Re: [openstack-dev] [Heat] Show attribute is a collection of other attributes or not?

2015-07-02 Thread Randall Burt
On Jul 2, 2015, at 2:35 PM, Steve Baker sba...@redhat.com wrote:

 On 03/07/15 06:03, Randall Burt wrote:
 Maybe use "all" for all attributes in the schema and use "show" for the raw 
 output from the service (as is done today for server and neutron stuff).
 Instead of "all", how about allowing a special form of {get_attr: 
 [resource_name]} with no extra arguments to return a dict of all attributes? 
 This would be consistent with how extra arguments traverse attribute data.

+1 (Hope you can read this despite my bobo client).
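Roughly, the resolution could work like this (a sketch, not the actual 
intrinsic-function code):

    def resolve_get_attr(rsrc, path):
        # {get_attr: [name]} with no further arguments yields the whole
        # attribute dict; extra arguments traverse into it as they do today.
        attrs = dict((a, rsrc.FnGetAtt(a)) for a in rsrc.attributes_schema)
        if not path:
            return attrs
        value = attrs[path[0]]
        for key in path[1:]:
            value = value[key]
        return value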





Re: [openstack-dev] [Solum] [ Supporting swift downloads for operator languagepacks

2015-06-17 Thread Randall Burt
A bit of a tangent, but it seems like the URL would point to a public Swift 
system. I am unclear if a source git repo would be relevant but, assuming 
Swift would be optional, perhaps users could host catalog LPs in git or some 
other distribution mechanism and have a method by which solum could import them 
from the catalog into that deployment's object store.

On Jun 17, 2015, at 1:58 PM, Fox, Kevin M kevin@pnnl.gov wrote:

 This question may be off on a tangent, or may be related.
 
 As part of the application catalog project, (http://apps.openstack.org/) 
 we're trying to provide globally accessible resources that can be easily 
 consumed in OpenStack Clouds. How would these global Language Packs fit in? 
 Would the URL record in the app catalog be required to point to an 
 Internet-facing public Swift system then? Or, would it point to the source 
 git repo that Solum would use to generate the LP still?
 
 Thanks,
 Kevin
 

Re: [openstack-dev] [Solum] Supporting swift downloads for operator languagepacks

2015-06-17 Thread Randall Burt
Can't an operator make the target container public, therefore removing the need 
for multiple access strategies?

 Original message 
From: Murali Allada
Date:06/17/2015 11:41 AM (GMT-06:00)
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Solum] Supporting swift downloads for operator 
languagepacks

Hello Solum Developers,

When we were designing the operator languagepack feature for Solum, we wanted 
to make use of public URLs to download operator LPs, such as those available 
for CDN-backed swift containers we have at Rackspace, or any publicly 
accessible URL. This would mean that when a user chooses to build applications 
on top of a languagepack provided by the operator, we use a URL to 'wget' the 
LP image.

Recently, we have started noticing a number of failures because of corrupted 
docker images downloaded using 'wget'. The docker images work fine when we 
download them manually with a swift client and use them. The corruption seems 
to be happening when we try to download a large image using 'wget' and there 
are dropped packets or intermittent network issues.

My thinking is to start using the swift client to download operator LPs by 
default instead of wget. The swift client already implements retry logic, 
downloading large images in chunks, etc. This means we would not get the 
niceties of using publicly accessible URLs. However, the feature will be more 
reliable and robust.

The implementation would be as follows:

  *  We'll use the existing service tenant configuration available in the
solum config file to authenticate and store operator languagepacks using the
swift client. We were using a different tenant to build and host LPs, but now
that we require the tenant's credentials in the config file, it's best to
reuse the existing service tenant creds. Note: if we don't, we'll have 3
separate tenants to maintain:
     *  Service tenant
     *  Operator languagepack tenant
     *  Global admin tenant
  *  I'll keep the option to download the operator languagepacks from a
publicly available URL. I'll allow operators to choose which method they want
to use by changing a setting in the solum config file.

FYI: In my tests, I've noticed that downloading an image using the swift client 
is twice as fast as downloading the same image using 'wget' from a CDN URL.
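For reference, here is roughly what the swiftclient path looks like (a sketch; 
credentials, container, and object names are placeholders):

    import swiftclient

    conn = swiftclient.client.Connection(
        authurl=AUTH_URL, user=SERVICE_USER, key=SERVICE_PASSWORD,
        tenant_name=SERVICE_TENANT, auth_version='2', retries=5)

    # Chunked download with built-in retries -- the robustness 'wget' was
    # missing for large languagepack images.
    headers, body = conn.get_object('operator-lps', 'python-lp.tar.gz',
                                    resp_chunk_size=64 * 1024)
    with open('/tmp/python-lp.tar.gz', 'wb') as image:
        for chunk in body:
            image.write(chunk)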

Thanks,
Murali



Re: [openstack-dev] [Solum] Supporting swift downloads for operator languagepacks

2015-06-17 Thread Randall Burt
Yes. If an operator wants to make their LP publicly available outside of Solum, 
I was thinking they could just make GETs on the container public. That being 
said, I'm unsure if this is realistically doable if you still have to have an 
authenticated tenant to access the objects. Scratch that; 
http://blog.fsquat.net/?p=40 may be helpful.
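Something like this should do it (a sketch; credentials and the container name 
are placeholders):

    import swiftclient

    conn = swiftclient.client.Connection(
        authurl=AUTH_URL, user=OPERATOR_USER, key=OPERATOR_PASSWORD,
        tenant_name=OPERATOR_TENANT, auth_version='2')

    # '.r:*' grants anonymous read access, so LP images can be fetched with
    # plain unauthenticated GETs on the container's public URL.
    conn.post_container('operator-lps',
                        headers={'X-Container-Read': '.r:*'})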

On Jun 17, 2015, at 1:27 PM, Adrian Otto adrian.o...@rackspace.com wrote:

 To be clear, Randall is referring to a swift container (directory).
 
 Murali has a good idea of attempting to use the swift client first, as it has 
 performance optimizations that can speed up the process more than naive file 
 transfer tools. I did mention to him that wget does have a retry feature, 
 and that we could see about using curl instead to allow for chunked encoding 
 as additional optimizations. 
 
 Randall, are you suggesting that we could use swift client for both private 
 and public LP uses? That sounds like a good suggestion to me.
 
 Adrian
 
 


Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Randall Burt
+1 Murali. AFAIK, there is no precedent for what Keith proposes, but that 
doesn't mean it's a bad thing.

On Jun 16, 2015, at 12:21 AM, Murali Allada murali.all...@rackspace.com wrote:

 I agree, users should have a mechanism to keep logs around.
 
 I implemented the logs deletion feature after we got a bunch of requests from 
 users to delete logs once they delete an app, so they don't get charged for 
 storage once the app is deleted.
 
 My implementation deletes the logs by default, and I think that is the right 
 behavior. Based on user requests, that is exactly what they were asking for. 
 I'm planning to add a --keep-logs flag in a follow-up patch. The command will 
 look as follows:
 
 solum app delete MyApp --keep-logs
 
 -Murali
 
 
 
 
 
 On Jun 15, 2015, at 11:19 PM, Keith Bray keith.b...@rackspace.com wrote:
 
 Regardless of what the API defaults to, could we have the CLI prompt/warn so 
 that the user easily knows that both options exist?  Is there a precedent 
 within OpenStack for a similar situation?
 
 E.g. 
  solum app delete MyApp
  Do you want to also delete your logs? (default is Yes):  [YES/no]
   NOTE, if you choose No, application logs will remain on your 
 account. Depending on your service provider, you may incur on-going storage 
 charges.  
 
 Thanks,
 -Keith
 
 From: Devdatta Kulkarni devdatta.kulka...@rackspace.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, June 15, 2015 9:56 AM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete 
 an app?
 
 Yes, the log deletion should be optional.
 
 
 The question is what should be the default behavior. Should the default be 
 to delete the logs and provide a flag to keep them, or keep the logs by 
 default and provide an override flag to delete them?
 
 Delete-by-default is consistent with the view that when an app is deleted, 
 all its artifacts are deleted (the app's metadata, the deployment units 
 (DUs), and the logs). This behavior is also useful in our current state when 
 the app resource and the CLI are in flux. For now, without a way to specify 
 a flag, either to delete the logs or to keep them, delete-by-default 
 behavior helps us clean all the log files from the application's cloud files 
 container when an app is deleted.
 
 This is very useful for our CI jobs. Without this, we end up with lots of 
 log files in the application's container, and have to resort to separate 
 scripts to delete them after an app is deleted.
 
 
 Once the app resource and CLI stabilize it should be straightforward to 
 change the default behavior if required.
 
 - Devdatta
 
 From: Adrian Otto adrian.o...@rackspace.com
 Sent: Friday, June 12, 2015 6:54 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Solum] Should logs be deleted when we delete an 
 app?
  
 Team,
 
 We currently delete logs for an app when we delete the app [1]. 
 
 https://bugs.launchpad.net/solum/+bug/1463986
 
 Perhaps there should be an optional setting at the tenant level that 
 determines whether your logs are deleted or not by default (set to off 
 initially), and an optional parameter to our DELETE calls that allows for 
 the opposite action from the default to be specified if the user wants to 
 override it at the time of the deletion. Thoughts?
 
 Thanks,
 
 Adrian




Re: [openstack-dev] [Solum] Should logs be deleted when we delete an app?

2015-06-16 Thread Randall Burt
While I agree with what you're saying, the way the OpenStack clients are 
traditionally written/designed, the CLI *is* the SDK for those users who want 
to do scripting in a shell rather than in Python. If we go with your 
suggestion, we'd probably also want to have the ability to suppress those 
prompts for folks who want to shell script.
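Something along these lines would keep both audiences happy (purely a sketch; 
the flag names are made up):

    import argparse
    import sys

    parser = argparse.ArgumentParser(prog='solum')
    parser.add_argument('--keep-logs', action='store_true')
    parser.add_argument('--yes', '-y', action='store_true',
                        help='assume yes to all prompts (for scripting)')
    args = parser.parse_args()

    # Prompt only when interactive and the user hasn't already decided.
    if not args.keep_logs and not args.yes and sys.stdin.isatty():
        answer = raw_input('Also delete your logs? [Y/n] ')
        args.keep_logs = answer.strip().lower() == 'n'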

On Jun 16, 2015, at 4:42 PM, Keith Bray keith.b...@rackspace.com wrote:

 Isn't that what the SDK is for?   To chip in with a Product Management type 
 hat on, I'd think the CLI should be primarily focused on user experience 
 interaction, and the SDK should be primarily targeted for developer 
 automation needs around programmatically interacting with the service.   So, 
 I would argue that the target market for the CLI should not be the developer 
 who wants to script.
 
 -Keith
 
 From: Adrian Otto adrian.o...@rackspace.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, June 16, 2015 12:24 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum] Should logs be deleted when we delete an 
 app?
 
 Interactive choices like that one can make it more confusing for developers 
 who want to script with the CLI. My preference would be to label the app 
 delete help text to clearly indicate that it deletes logs


Re: [openstack-dev] COMMERCIAL: [heat] Stack/Resource updated_at conventions

2015-04-27 Thread Randall Burt
(2) sounds right to me, but does the in-memory representation get updated, or 
are we forced into a refetch at every change?
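For context, a sketch of what (2) below implies with the oslo model mixin (the 
table is trimmed down):

    import sqlalchemy as sa
    from sqlalchemy.ext import declarative
    from oslo_db.sqlalchemy import models

    BASE = declarative.declarative_base()


    class Stack(BASE, models.TimestampMixin):
        """Option 2: the mixin bumps updated_at on every UPDATE."""
        __tablename__ = 'stack'
        id = sa.Column(sa.String(36), primary_key=True)
        status = sa.Column(sa.String(255))

    # The column's onupdate hook fires at flush time, so the in-memory
    # object only sees the new updated_at after a refresh/expire --
    # hence the refetch question:
    # session.add(stack); session.commit(); session.refresh(stack)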

On Apr 27, 2015, at 10:46 AM, Steven Hardy sha...@redhat.com wrote:

 Hi all,
 
 I've been looking into $subject recently, I raised this bug:
 
 https://bugs.launchpad.net/heat/+bug/1448155
 
 Basically we've got some historically weird and potentially inconsistent
 behavior around updated_at, and I'm trying to figure out the best way to
 proceed.
 
 Now, we selectively update updated_at only on the transition to
 UPDATE_COMPLETE, where we store the time that we moved into
 UPDATE_IN_PROGRESS.  During the update, there's no way to derive the
 time we started the update.
 
 Also, we inconsistently store the time associated with the transition into
 IN_PROGRESS for suspend, resume, snapshot, restore and check actions (even
 though many of these don't modify the stack definition).
 
 The reason I need this is the hook/breakpoint API - the only way to detect
 if you've hit a breakpoint is via events, and to detect you've hit a hook
 during multiple sequential updates (some of which may fail or time out with
 hooks pending), you need to filter the events to only consider those with a
 timestamp newer than the transition of the stack to the update IN_PROGRESS.
 
 AFAICT there's two options:
 
 1. Update the stack.Stack so we store 'now' at every transition (e.g. in
 state_set)
 
 2. Stop trying to explicitly control updated_at, and just allow the oslo
 TimestampMixin to do its job and update updated_at every time the DB model
 is updated.
 
 What are peoples thoughts?  Either will solve my problem, but I'm leaning
 towards (2) as the cleanest and most technically correct solution.
 
 Similar problems exist for resource.Resource AFAICT.
 
 Steve
 


Re: [openstack-dev] [Heat] core team changes

2015-01-28 Thread Randall Burt
+1

On Jan 27, 2015, at 7:36 PM, Angus Salkeld asalk...@mirantis.com wrote:

 Hi all
 
 After having a look at the stats:
 http://stackalytics.com/report/contribution/heat-group/90
 http://stackalytics.com/?module=heat-group&metric=person-day
 
 I'd like to propose the following changes to the Heat core team:
 
 Add:
 Qiming Teng
 Huang Tianhua
 
 Remove:
 Bartosz Górski (Bartosz has indicated that he is happy to be removed and 
 doesn't have the time to work on heat ATM).
 
 Core team please respond with +/- 1.
 
 Thanks
 Angus


Re: [openstack-dev] [Heat] How can I write at milestone section of blueprint?

2014-12-22 Thread Randall Burt
It's been discussed at several summits. We have settled on a general solution 
using Zaqar, but no work has been done that I know of. I was just pointing out 
that similar blueprints/specs exist and you may want to look through those to 
get some ideas about writing your own and/or basing your proposal off of one of 
them.

On Dec 22, 2014, at 12:19 AM, Yasunori Goto y-g...@jp.fujitsu.com wrote:

 Randall-san,
 
  There should already be blueprints in Launchpad for very similar 
  functionality.
 For example: https://blueprints.launchpad.net/heat/+spec/lifecycle-callbacks.
 While that specifies Heat sending notifications to the outside world,
 there has been discussion around debugging that would allow the receiver to
 send notifications back. I only point this out so you can see there should be
 similar blueprints and specs that you can reference and use as examples.
 
 Thank you for pointing it out.
 But do you know the current status of it?
 The above blueprint is not approved, and it seems to have been discarded.
 
 Bye,
 
 
 
 -- 
 Yasunori Goto y-g...@jp.fujitsu.com
 
 
 


Re: [openstack-dev] [Heat] How can I write at milestone section of blueprint?

2014-12-19 Thread Randall Burt
There should already be blueprints in Launchpad for very similar functionality. 
For example: https://blueprints.launchpad.net/heat/+spec/lifecycle-callbacks. 
While that specifies Heat sending notifications to the outside world, there has 
been discussion around debugging that would allow the receiver to send 
notifications back. I only point this out so you can see there should be 
similar blueprints and specs that you can reference and use as examples.

On Dec 19, 2014, at 4:17 AM, Steven Hardy sha...@redhat.com wrote:

 On Fri, Dec 19, 2014 at 05:02:04PM +0900, Yasunori Goto wrote:
 
 Hello,
 
 This is the first mail at Openstack community,
 
 Welcome! :)
 
  and I have a small question about how to write a blueprint for Heat.
 
  Currently our team would like to propose 2 interfaces
  for user operations in HOT. 
  (One is an event handler, to notify Heat of a user-defined event.
  The other is a definition of the action to take when Heat catches the above
  notification.)
  So, I'm preparing the blueprint for it.
 
 Please include details of the exact use-case, e.g the problem you're trying
 to solve (not just the proposed solution), as it's possible we can suggest
 solutions based on exiting interfaces.
 
  However, I cannot find out what I should write in the milestone section of 
  the blueprint.
 
 Heat blueprint template has a section for Milestones.
  Milestones -- Target Milestone for completion:
 
 But I don't think I can decide it by myself.
  In my understanding, it should be decided by the PTL.
 
  Normally, it's decided by when the person submitting the spec expects to
  finish writing the code.  The PTL doesn't really have much control over
  that ;)
 
  In addition, our request above will probably not be finished
  by Kilo. I suppose it will land in the L version or later.
 
 So to clarify, you want to propose the feature, but you're not planning on
  working on it (e.g. implementing it) yourself?
 
  So, what should I write in this section?
 Kilo-x, L version, or empty?
 
 As has already been mentioned, it doesn't matter that much - I see it as a
 statement of intent from developers.  If you're just requesting a feature,
 you can even leave it blank if you want and we'll update it when an
 assignee is found (e.g during the spec review).
 
 Thanks,
 
 Steve
 


Re: [openstack-dev] [Heat] Rework auto-scaling support in Heat

2014-11-28 Thread Randall Burt
Per our discussion in Paris, I'm partial to Option B. I think a separate API 
endpoint is a lower priority at this point compared to cleaning up and 
normalizing the autoscale code on the back-end. Once we've refactored the 
engine code and solidified the RPC interface, it would be trivial to add an API 
on top of it. Additionally, we could even keep the privileged RPC interface for 
the Heat AS resources (assuming they stick around in some form) as an option 
for deployers. While certainly disruptive, I think we can handle this in small 
and/or isolated enough changes that reviews shouldn't be too difficult, 
especially if it's possible to take the existing code largely unchanged at 
first and wrap an RPC abstraction around it.
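To illustrate the seam I mean, a very rough sketch (topic and method names are 
invented) of an RPC client that both the AS resources and a later REST API 
could share:

    from oslo_config import cfg
    import oslo_messaging as messaging


    class AutoScalingClient(object):
        """Client side of a hypothetical heat-autoscaling RPC API."""

        TOPIC = 'heat-autoscaling'

        def __init__(self):
            transport = messaging.get_transport(cfg.CONF)
            target = messaging.Target(topic=self.TOPIC, version='1.0')
            self._client = messaging.RPCClient(transport, target)

        def group_create(self, ctxt, definition, min_size, max_size):
            return self._client.call(ctxt, 'group_create',
                                     definition=definition,
                                     min_size=min_size, max_size=max_size)

        def policy_execute(self, ctxt, policy_id):
            return self._client.call(ctxt, 'policy_execute',
                                     policy_id=policy_id)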

On Nov 28, 2014, at 1:33 AM, Qiming Teng teng...@linux.vnet.ibm.com wrote:

 Dear all,
 
 Auto-Scaling is an important feature supported by Heat and needed by
 many users we talked to.  There are two flavors of AutoScalingGroup
 resources in Heat today: the AWS-based one and the Heat native one.  As
 more requests coming in, the team has proposed to separate auto-scaling
 support into a separate service so that people who are interested in it
 can jump onto it.  At the same time, Heat engine (especially the resource
 type code) will be drastically simplified.  The separated AS service
 could move forward more rapidly and efficiently.
 
 This work was proposed a while ago with the following wiki and
 blueprints (mostly approved during Havana cycle), but the progress is
 slow.  A group of developers now volunteer to take over this work and
 move it forward.
 
 wiki: https://wiki.openstack.org/wiki/Heat/AutoScaling
 BPs:
 - https://blueprints.launchpad.net/heat/+spec/as-lib-db
 - https://blueprints.launchpad.net/heat/+spec/as-lib
 - https://blueprints.launchpad.net/heat/+spec/as-engine-db
 - https://blueprints.launchpad.net/heat/+spec/as-engine
 - https://blueprints.launchpad.net/heat/+spec/autoscaling-api
 - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-client
 - https://blueprints.launchpad.net/heat/+spec/as-api-group-resource
 - https://blueprints.launchpad.net/heat/+spec/as-api-policy-resource
 - https://blueprints.launchpad.net/heat/+spec/as-api-webhook-trigger-resource
 - https://blueprints.launchpad.net/heat/+spec/autoscaling-api-resources
 
 Once this whole thing lands, Heat engine will talk to the AS engine in
 terms of ResourceGroup, ScalingPolicy, Webhooks.  Heat engine won't care
 how auto-scaling is implemented although the AS engine may in turn ask
 Heat to create/update stacks for scaling purposes.  In theory, AS
 engine can create/destroy resources by directly invoking other OpenStack
 services.  This new AutoScaling service may eventually have its own DB,
 engine, API, api-client.  We can definitely aim high while work hard on
 real code.
 
 After reviewing the BPs/Wiki and some communication, we get two options
 to push forward this.  I'm writing this to solicit ideas and comments
 from the community.
 
 Option A: Top-Down Quick Split
 --
 
 This means we will follow a roadmap shown below, which is not 100% 
 accurate yet and very rough:
 
  1) Get the separated REST service in place and working
  2) Switch Heat resources to use the new REST service
 
 Pros:
  - Separate code base means faster review/commit cycle
  - Less code churn in Heat
 Cons:
  - A new service needs to be installed/configured/launched
  - Need commitments from dedicated, experienced developers from very
beginning
 
 Option B: Bottom-Up Slow Growth
 ---
 
 The roadmap is more conservative, with many (yes, many) incremental
 patches to migrate things carefully.
 
  1) Separate some of the autoscaling logic into libraries in Heat
  2) Augment heat-engine with new AS RPCs
  3) Switch AS related resource types to use the new RPCs
  4) Add new REST service that also talks to the same RPC
 (create new GIT repo, API endpoint and client lib...) 
 
 Pros:
  - Less risk of breaking userland, with each revision well tested
  - More smooth transition for users in terms of upgrades
 
 Cons:
  - A lot of churn within Heat code base, which means long review cycles
  - Still need commitments from cores to supervise the whole process
 
 There could be option C, D... but the two above are what we came up with
 during the discussion.
 
 Another important thing we talked about is about the open discussion on
 this.  OpenStack Wiki seems a good place to document settled designs but
 not for interactive discussions.  Probably we should leverage etherpad
 and the mailing list when moving forward.  Suggestions on this are also
 welcomed.
 
 Thanks.
 
 Regards,
 Qiming
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list

Re: [openstack-dev] [Heat] Conditionals, was: New function: first_nonnull

2014-11-12 Thread Randall Burt
On Nov 12, 2014, at 10:42 AM, Zane Bitter zbit...@redhat.com
 wrote:

 On 12/11/14 10:10, Clint Byrum wrote:
 Excerpts from Zane Bitter's message of 2014-11-11 13:06:17 -0800:
 On 11/11/14 13:34, Ryan Brown wrote:
 I am strongly against allowing arbitrary Javascript functions for
 complexity reasons. It's already difficult enough to get meaningful
 errors when you mess up your YAML syntax.
 
 Agreed, and FWIW literally everyone that Clint has pitched the JS idea
 to thought it was crazy ;)
 
 
 So far nobody has stepped up to defend me,
 
 I'll defend you, but I can't defend the idea :)
 
 so I'll accept that maybe
 people do think it is crazy. What I'm really confused by is why we have
 a new weird ugly language like YAQL (sorry, it, like JQ, is hideous),
 
 Agreed, and appealing to its similarity with Perl or PHP (or BASIC!) is 
 probably not the way to win over Python developers :D
 
 and that would somehow be less crazy than a well known mature language
 that has always been meant for embedding such as javascript.
 
 JS is a Turing-complete language, it's an entirely different kettle of fish 
 to a domain-specific language that is inherently safe to interpret from user 
 input. Sure, we can try to lock it down. It's a very tricky job to get right. 
 (Plus it requires a new external dependency of unknown quality... honestly if 
 you're going to embed a Turing-complete language, Python is a much more 
 obvious choice than JS.)
 
 Anyway, I'd prefer YAQL over trying to get the intrinsic functions in
 HOT just right. Users will want to do things we don't expect. I say, let
 them, or large sections of the users will simply move on to something
 else.
 
 The other side of that argument is that users are doing one of two things 
 with data they have obtained from resources in the template:
 
 1) Passing data to software deployments
 2) Passing data to other resources
 
 In case (1) they can easily transform the data into whatever format they want 
 using their own scripts, running on their own server.
 
 In case (2), if it's not easy for them to just do what they want without 
 having to perform this kind of manipulation, we have failed to design good 
 resources. And if we give people the tools to just paper over the problem, 
 we'll never hear about it so we can correct it at the source, just launch a 
 thousand hard-to-maintain hacks into the world.

I disagree with this last bit. Having some manner of data manipulation facility 
as part of the template language doesn't mean the resources are badly designed 
or implemented, it just means that users have constraints that we don't 
directly address. Having something more general purpose (whatever it may be) 
seems like a longer term and more maintainable solution to me than eternal tail 
chasing where 1000 specific intrinsic functions bloom. IMO, intrinsics have a 
place, but supporting something like YAQL (or whatever) seems like a general 
solution to a lot of common problems our users come up against.

As for hard-to-maintain hacks, isn't that what drives a lot of our users to 
ask for these sorts of solutions in the first place? If doing something via 
YAQL is perceived by a user as some sub-optimal hack for their use case, I see 
no reason for that to be any less visible than any other pain point or 
usability issue we have.
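
(To make that concrete: a template-side transform under the yaql-style
function floated in this thread might look like the sketch below. The exact
function shape is still a proposal, and the group attribute is hypothetical.)

  outputs:
    first_address:
      value:
        yaql:
          expression: $.data.addresses.where($ != null).first()
          data:
            addresses: { get_attr: [my_group, ip_addresses] }  # hypothetical attribute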
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Conditionals, was: New function: first_nonnull

2014-11-11 Thread Randall Burt
I like this approach and seems to have greater utility beyond the original 
proposal. I'm not fussed on YAQL or straight up JSONPath, but something along 
those lines seems to make sense.

On Nov 11, 2014, at 9:35 AM, Alexis Lee alex...@hp.com
 wrote:

 Alexis Lee said on Mon, Nov 10, 2014 at 05:34:13PM +:
 How about we support YAQL expressions? https://github.com/ativelkov/yaql
 Plus some HOFs (higher-order functions) like cond, map, filter, foldleft
 etc?
 
 We could also use YAQL to provide the HOFs.
 
 Here's first_nonnull:
 
  config:
Fn::Select
  - 0
  filter:
- yaql: $.0 != null
- item1
- itemN
 
  config:
yaql: $[$ != null][0]
- item1
- itemN
 
 This approach requires less change to Heat, at the price of learning
 more YAQL.
 
 
 Alexis
 -- 
 Nova Engineer, HP Cloud.  AKA lealexis, lxsli.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-10 Thread Randall Burt
On Jul 10, 2014, at 9:21 AM, Zane Bitter zbit...@redhat.com
 wrote:

 On 10/07/14 05:34, Steven Hardy wrote:
  The other approach is to set up a new container, owned by the user, 
  every time. In that case, a provider selecting this implementation 
  would need to make it clear to customers if they would be billed for a 
  WaitCondition resource. I'd prefer to avoid this scenario though 
  (regardless of the plug-point).
 
 Why? If we won't let the user choose, then why wouldn't we let the 
 provider make this choice? I don't think its wise of us to make decisions 
 based on what a theoretical operator may theoretically do. If the same 
 theoretical provider were to also charge users to create a trust, would we 
 then be concerned about that implementation as well? What if said provider 
 decides charges the user per resource in a stack regardless of what they 
 are? Having Heat own the container(s) as suggested above doesn't preclude 
 that operator from charging the stack owner for those either.
 
 While I agree that these examples are totally silly, I'm just trying to 
 illustrate that we shouldn't deny an operator an option so long as its 
 understood what that option entails from a technical/usage perspective.
 I don't really get why this question is totally silly - I made a genuine
 request for education based on near-zero knowledge of public cloud provider
 pricing models.
 
 The way I read it Randall was not saying that the question was silly, he was 
 acknowledging that his own examples (like charging per-resource) were 
 contrived (to the point of absurdity) to illustrate his argument.

Yes. I didn't mean to imply the questions or any of the responses were silly, 
only my contrived examples.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-09 Thread Randall Burt
On Jul 9, 2014, at 3:15 PM, Zane Bitter zbit...@redhat.com
 wrote:

 On 08/07/14 17:13, Angus Salkeld wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On 08/07/14 09:14, Zane Bitter wrote:
 I see that the new client plugins are loaded using stevedore, which is
 great and IMO absolutely the right tool for that job. Thanks to Angus 
 Steve B for implementing it.
 
 Now that we have done that work, I think there are more places we can
 take advantage of it too - for example, we currently have competing
 native wait condition resource types being implemented by Jason[1] and
 Steve H[2] respectively, and IMHO that is a mistake. We should have
 *one* native wait condition resource type, one AWS-compatible one,
 software deployments and any custom plugin that require signalling; and
 they should all use a common SignalResponder implementation that would
 call an API that is pluggable using stevedore. (In summary, what we're
 
 what's wrong with using the environment for that? Just have two resources
 and you do something like this:
 https://github.com/openstack/heat/blob/master/etc/heat/environment.d/default.yaml#L7
 
 It doesn't cover other things that need signals, like software deployments 
 (third-party plugin authors are also on their own). We only want n 
 implementations not n*(number of resources that use signals) implementations.
 
 trying to make configurable is an implementation that should be
 invisible to the user, not an interface that is visible to the user, and
 therefore the correct unit of abstraction is an API, not a resource.)
 
 
 Totally depends if we want this to be operator configurable (config file or 
 plugin)
 or end user configurable (use their environment to choose the 
 implementation).
 
 
 I just noticed, however, that there is an already-partially-implemented
 blueprint[3] and further pending patches[4] to use stevedore for *all*
 types of plugins - particularly resource plugins[5] - in Heat. I feel
 very strongly that stevedore is _not_ a good fit for all of those use
 cases. (Disclaimer: obviously I _would_ think that, since I implemented
 the current system instead of using stevedore for precisely that reason.)
 
 haha.
 
 
 The stated benefit of switching to stevedore is that it solves issues
 like https://launchpad.net/bugs/1292655 that are caused by the current
 convoluted layout of /contrib. I think the layout stems at least in part
 
 I think another great reason is consistency with how all other plugins are 
 openstack
 are written (stevedore).
 
 Sure, consistency is nice, sometimes even at the expense of being not quite 
 the right tool for the job. But there are limits to that trade-off.
 
 Also I *really* don't think we should optimize for our contrib plugins
 but for:
 1) our built in plugins
 2) out of tree plugins
 
 I completely agree, which is why I was surprised by this change. It seems to 
 be deprecating a system that is working well for built-in and out-of-tree 
 plugins in order to make minor improvements to how we handle contrib.

FWIW, when it comes to deploying Heat with non-built-in plugins, there's no substantive 
difference in the experience between contrib and out-of-tree plugins, so 
neither system is more or less optimized for either. However, with the current 
system, there's no easy way to get rid of the built-in ones you don't want.

 
 from a misunderstanding of how the current plugin_manager works. The
 point of the plugin_manager is that each plugin directory does *not*
 have to be a Python package - it can be any directory. Modules in the
 directory then appear in the package heat.engine.plugins once imported.
 So there is no need to do what we are currently doing, creating a
 resources package, and then a parent package that contains the tests
 package as well, and then in the tests doing:
 
from ..resources import docker_container  ## noqa
 
 All we really need to do is throw the resources in any old directory,
 add that directory to the plugin_dirs list, stick the tests in any old
 package, and from the tests do
 
from heat.engine.plugins import docker_container
 
 The main reason we haven't done this seems to be to avoid having to list
 the various contrib plugin dirs separately. Stevedore solves this by
 forcing us to list not only each directory but each class in each module
 in each directory separately. The tricky part of fixing the current
 layout is ensuring the contrib plugin directories get added to the
 plugin_dirs list during the unit tests and only during the unit tests.
 However, I'm confident that could be fixed with no more difficulty than
 the stevedore changes and with far less disruption to existing operators
 using custom plugins.
 
 Stevedore is ideal for configuring an implementation for a small number
 of well known plug points. It does not appear to be ideal for managing
 an application like Heat that comprises a vast collection of
 implementations of the same interface, each bound to its own plug point.
 
 

Re: [openstack-dev] [Heat] stevedore plugins (and wait conditions)

2014-07-09 Thread Randall Burt
On Jul 9, 2014, at 4:38 PM, Zane Bitter zbit...@redhat.com
 wrote:
 On 08/07/14 17:17, Steven Hardy wrote:
 
 Regarding forcing deployers to make a one-time decision, I have a question
 re cost (money and performance) of the Swift approach vs just hitting the
 Heat API
 
 - If folks use the Swift resource and it stores data associated with the
   signal in Swift, does that incur a cost to the user in a public cloud
   scenario?
 
 Good question. I believe the way WaitConditions work in AWS is that it sets 
 up a pre-signed URL in a bucket owned by CloudFormation. If we went with that 
 approach we would probably want some sort of quota, I imagine.

Just to clarify, you suggest that the swift-based signal mechanism use 
containers that Heat owns rather than ones owned by the user?

 The other approach is to set up a new container, owned by the user, every 
 time. In that case, a provider selecting this implementation would need to 
 make it clear to customers if they would be billed for a WaitCondition 
 resource. I'd prefer to avoid this scenario though (regardless of the 
 plug-point).

Why? If we won't let the user choose, then why wouldn't we let the provider 
make this choice? I don't think it's wise of us to make decisions based on what 
a theoretical operator may theoretically do. If the same theoretical provider 
were to also charge users to create a trust, would we then be concerned about 
that implementation as well? What if said provider decides to charge the user per 
resource in a stack regardless of what they are? Having Heat own the 
container(s) as suggested above doesn't preclude that operator from charging 
the stack owner for those either.

While I agree that these examples are totally silly, I'm just trying to 
illustrate that we shouldn't deny an operator an option so long as its 
understood what that option entails from a technical/usage perspective.

 - What sort of overhead are we adding, with the signals going to swift,
   then in the current implementation being copied back into the heat DB[1]?
 
 I wasn't aware we were doing that, and I'm a bit unsure about it myself. I 
 don't think it's a big overhead, though.

In the current implementation, I think it is minor as well, just a few extra 
Swift API calls which should be pretty minor overhead considering the stack as 
a whole. Plus, it minimizes the above concern around potentially costly user 
containers in that it gets rid of them as soon as it's done.

 It seems to me at the moment that the swift notification method is good if
 you have significant data associated with the signals, but there are
 advantages to the simple API signal approach I've been working on when you
 just need a simple one shot low overhead way to get data back from an
 instance.
 
 FWIW, the reason I revived these patches was I found that
 SoftwareDeployments did not meet my needs for a really simple signalling
 mechanism when writing tempest tests:
 
 https://review.openstack.org/#/c/90143/16/tempest/scenario/orchestration/test_volumes_create_from_backup.yaml
 
 These tests currently use the AWS WaitCondition resources, and I wanted a
 native alternative, without the complexity of using SoftwareDeployments
 (which also won't work with minimal cirros images without some pretty hacky
 workarounds[2])
 
 Yep, I am all for this. I think that Swift is the best way when we have it, 
 but not every cloud has Swift (and the latest rumours from DefCore are that 
 it's likely to stay that way), so we need operators ( developers!) to be 
 able to plug in an alternative implementation.

Very true, but not every cloud has trusts either. Many may have trusts, but 
they don't employ the EC2 extensions to Keystone and therefore can't use the 
native signals either (as I understand them anyway). Point being that either 
way, we already impose requirements on a cloud you want to run Heat against. I 
think it is in our interest to make the effort to provide choices with obvious 
trade-offs.
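
(For reference, the Swift-backed flavour from Jason's patch looks roughly like
this on the template side; this is sketched from the review at the time, so
property names may shift before anything merges:)

  wait_handle:
    type: OS::Heat::SwiftSignalHandle

  wait:
    type: OS::Heat::SwiftSignal
    properties:
      handle: { get_resource: wait_handle }
      count: 1
      timeout: 600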

 I'm all for making things simple, avoiding duplication and confusion for
 users, but I'd like to ensure that making this a one-time deployer level
 decision definitely makes sense, vs giving users some choice over what
 method is used.
 
 Agree, this is an important question to ask. The downside to leaving the 
 choice to the user is that it reduces interoperability between clouds. (In 
 fact, it's unclear whether operators _would_ give users a choice, or just 
 deploy one implementation anyway.) It's not insurmountable (thanks to 
 environments), but it does add friction to the ecosystem so we have to weigh 
 up the trade-offs.

Agreed that this is an important concern, but one of mine is that no other 
resource has selectable back-ends. The way an operator controls this today is 
via the global environment where they have the option to disable one or more of 
these resources or even alias one to the other. Seems a large change for 
something an operator already has the ability to deal with. 
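
(Concretely, the global-environment control mentioned above uses the existing
resource_registry syntax; the mapping targets in this sketch are illustrative:)

  resource_registry:
    # alias one signal implementation to another
    "AWS::CloudFormation::WaitCondition": "OS::Heat::WaitCondition"
    # or substitute a provider template for a built-in type
    "OS::Heat::WaitCondition": "file:///etc/heat/templates/swift_wait.yaml"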

Re: [openstack-dev] [heat] Sergey Kraynev for heat-core

2014-06-26 Thread Randall Burt
On Jun 26, 2014, at 5:08 PM, Steve Baker sba...@redhat.com wrote:

 I'd like to nominate Sergey Kraynev for heat-core. His reviews are
 valuable and prolific, and his commits have shown a sound understanding
 of heat internals.
 
 http://stackalytics.com/report/contribution/heat-group/60

+1


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [Heat] Ceilometer aware people, please advise us on processing notifications..

2014-06-26 Thread Randall Burt
On Jun 26, 2014, at 5:25 PM, Zane Bitter zbit...@redhat.com
 wrote:

 On 23/06/14 19:25, Clint Byrum wrote:
 Hello! I would like to turn your attention to this specification draft
 that I've written:
 
 https://review.openstack.org/#/c/100012/1/specs/convergence-continuous-observer.rst
 
 Angus has suggested that perhaps Ceilometer is a better place to handle
 this. Can you please comment on the review, or can we have a brief
 mailing list discussion about how best to filter notifications?
 
 Basically in Heat when a user boots an instance, we would like to act as
 soon as it is active, and not have to poll the nova API to know when
 that is. Angus has suggested that perhaps we can just tell ceilometer to
 hit Heat with a web hook when that happens.
 
 I'm all in favour of having Ceilometer filter the firehose for us if we can :)
 
 Webhooks would seem to add a lot of overhead though (set up + tear down a 
 connection for every notification), that could perhaps be avoided by using a 
 message bus? Given that both setting up and receiving these notifications 
 would be admin-only operations, is there any benefit to handling them through 
 a webhook API rather than through oslo.messaging?
 
 cheers,
 Zane.

In larger OpenStack deployments, the different services probably don't share 
the same message bus. While I certainly agree oslo.messaging and/or 
oslo.notifications should be an option (and probably the default one at that), 
I think there should still be an option to use ceilometer or some other 
notification mechanism. As long as it's pluggable, I don't think anyone would be 
too fussed.
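
(As a sketch of the webhook flavour being discussed: an OS::Ceilometer::Alarm
resource can already POST to an arbitrary URL when it fires; the meter choice
and the URL here are illustrative only:)

  instance_active:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: instance
      statistic: count
      period: 60
      evaluation_periods: 1
      threshold: 1
      comparison_operator: ge
      alarm_actions:
        - http://heat.example.com/v1/hooks/abc123   # illustrative webhook URL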
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] fine grained quotas

2014-06-19 Thread Randall Burt
On Jun 19, 2014, at 4:17 PM, Clint Byrum cl...@fewbar.com wrote:

 I was made aware of the following blueprint today:
 
 http://blueprints.launchpad.net/heat/+spec/add-quota-api-for-heat
 http://review.openstack.org/#/c/96696/14
 
 Before this goes much further.. I want to suggest that this work be
 cancelled, even though the code looks excellent. The reason those limits
 are in the config file is that these are not billable items and they
 have a _tiny_ footprint in comparison to the physical resources they
 will allocate in Nova/Cinder/Neutron/etc.
 
 IMO we don't need fine grained quotas in Heat because everything the
 user will create with these templates will cost them and have its own
 quota system. The limits (which I added) are entirely to prevent a DoS
 of the engine.

What's more, I don't think this is something we should expose via the API other 
than to perhaps query what those quota values are. While it is possible that some 
provider would want to bill on the number of stacks, etc. (I personally agree with 
Clint here), that seems like something that could/should be handled externally 
to Heat itself.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HEAT] Discussion: How to list nested stack resources.

2014-06-05 Thread Randall Burt
Hey, sorry for the slow follow. I have to put some finishing touches on a spec 
and submit that for review. I'll reply to the list with the link later today. 
Hope to have an initial patch up as well in the next day or so.

On Jun 5, 2014, at 10:03 AM, Nilakhya Chatterjee 
nilakhya.chatter...@globallogic.com
 wrote:

 Hi Guys, 
 
 It was great to find your interest in solving the nested stack resource listing.
 
 Let's move ahead by finishing any discussions left over the BP and getting an 
 approval on it.
 
 So far, what makes sense to me is: 
 
 a) an additional flag in the client call: --nested (Randall)
 b) a flattened DS in the output (Tim) 
 
 
 Thanks all ! 
 
 
 On Wed, May 21, 2014 at 12:42 AM, Randall Burt randall.b...@rackspace.com 
 wrote:
 Bartosz, would that be in addition to --nested? Seems like I'd want to be able 
 to say all of it as well as some of it.
 
 On May 20, 2014, at 1:24 PM, Bartosz Górski bartosz.gor...@ntti3.com
  wrote:
 
  Hi Tim,
 
  Maybe instead of just a flag like --nested (bool value) to resource-list we 
  can add optional argument like --depth X or --nested-level X (X - integer 
  value) to limit the depth for recursive listing of nested resources?
 
  Best,
  Bartosz
 
  On 05/19/2014 09:13 PM, Tim Schnell wrote:
  Blueprint:
  https://blueprints.launchpad.net/heat/+spec/explode-nested-resources
 
  Spec: https://wiki.openstack.org/wiki/Heat/explode-resource-list
 
  Tim
 
  On 5/19/14 1:53 PM, Tim Schnell tim.schn...@rackspace.com wrote:
 
  On 5/19/14 12:35 PM, Randall Burt randall.b...@rackspace.com wrote:
 
 
  On May 19, 2014, at 11:39 AM, Steven Hardy sha...@redhat.com
  wrote:
 
  On Mon, May 19, 2014 at 03:26:22PM +, Tim Schnell wrote:
  Hi Nilakhya,
 
  As Randall mentioned we did discuss this exact issue at the summit. I
  was
  planning on putting a blueprint together today to continue the
  discussion.
  The Stack Preview call is already doing the necessary recursion to
  gather
  the resources so we discussed being able to pass a stack id to the
  preview
  endpoint to get all of the resources.
 
  However, after thinking about it some more, I agree with Randall that
  maybe this should be an extra query parameter passed to the
  resource-list
  call. I'll have the blueprint up later today, unless you have already
  started on it.
  Note there is a patch from Anderson/Richard which may help with this:
 
  https://review.openstack.org/#/c/85781/
 
  The idea was to enable easier introspection of resources backed by
  nested
  stacks in a UI, but it could be equally useful to generate a tree
  resource view in the CLI client by walking the links.
 
  This would obviously be less efficient than recursing inside the
  engine,
  but arguably the output would be much more useful if it retains the
  nesting
  structure, as opposed to presenting a fully flattened soup of
  resources
  with no idea which stack/layer they belong to.
 
  Steve
  Could we simply add stack name/id to this output if the flag is passed? I
  agree that we currently have the capability to traverse the tree
  structure of nested stacks, but several folks have requested this
  capability, mostly for UI/UX purposes. It would be faster if you want the
  flat structure and we still retain the capability to create your own
  tree/widget/whatever by following the links. Also, I think its best to
  include this in the API directly since not all users are integrating
  using the python-heatclient.
  +1 for adding the stack name/id to the output to maintain a reference to
  the initial stack that the resource belongs to. The original stated
  use-case that I am aware of was to have a flat list of all resources
  associated with a stack to be displayed in the UI when the user asks to
  delete a stack. This would prevent confusion about what and why different
  resources are being deleted due to the stack delete.
 
  This use-case does not require any information about the nested stacks but
  I can foresee that information being useful in the future. I think a
  flattened data structure (with a reference to stack id) is still the most
  efficient solution. The patch landed by Anderson/Richard provides an
  alternate method to drill down into nested stacks if the hierarchy is
  important information though this is not the optimal solution in this
  case.
 
  Tim
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list

Re: [openstack-dev] [HEAT] Discussion: How to list nested stack resources.

2014-06-05 Thread Randall Burt
I have submitted a new/expanded spec for this feature: 
https://review.openstack.org/#/c/98219/. I hope to start some WiP patches this 
afternoon/tomorrow morning. Spec reviews and input most welcome.

On Jun 5, 2014, at 11:35 AM, Randall Burt randall.b...@rackspace.com wrote:

 Hey, sorry for the slow follow. I have to put some finishing touches on a 
 spec and submit that for review. I'll reply to the list with the link later 
 today. Hope to have an initial patch up as well in the next day or so.
 
 On Jun 5, 2014, at 10:03 AM, Nilakhya Chatterjee 
 nilakhya.chatter...@globallogic.com
 wrote:
 
 Hi Guys, 
 
 It was great to find your interest in solving the nested stack resource 
 listing.
 
 Let's move ahead by finishing any discussions left over the BP and getting an 
 approval on it.
 
 So far, what makes sense to me is: 
 
 a) an additional flag in the client call: --nested (Randall)
 b) a flattened DS in the output (Tim) 
 
 
 Thanks all ! 
 
 
 On Wed, May 21, 2014 at 12:42 AM, Randall Burt randall.b...@rackspace.com 
 wrote:
 Bartosz, would that be in addition to --nested? Seems like I'd want to be 
 able to say all of it as well as some of it.
 
 On May 20, 2014, at 1:24 PM, Bartosz Górski bartosz.gor...@ntti3.com
 wrote:
 
 Hi Tim,
 
 Maybe instead of just a flag like --nested (bool value) to resource-list we 
 can add optional argument like --depth X or --nested-level X (X - integer 
 value) to limit the depth for recursive listing of nested resources?
 
 Best,
 Bartosz
 
 On 05/19/2014 09:13 PM, Tim Schnell wrote:
 Blueprint:
 https://blueprints.launchpad.net/heat/+spec/explode-nested-resources
 
 Spec: https://wiki.openstack.org/wiki/Heat/explode-resource-list
 
 Tim
 
 On 5/19/14 1:53 PM, Tim Schnell tim.schn...@rackspace.com wrote:
 
 On 5/19/14 12:35 PM, Randall Burt randall.b...@rackspace.com wrote:
 
 
 On May 19, 2014, at 11:39 AM, Steven Hardy sha...@redhat.com
 wrote:
 
 On Mon, May 19, 2014 at 03:26:22PM +, Tim Schnell wrote:
 Hi Nilakhya,
 
 As Randall mentioned we did discuss this exact issue at the summit. I
 was
 planning on putting a blueprint together today to continue the
 discussion.
 The Stack Preview call is already doing the necessary recursion to
 gather
 the resources so we discussed being able to pass a stack id to the
 preview
 endpoint to get all of the resources.
 
 However, after thinking about it some more, I agree with Randall that
 maybe this should be an extra query parameter passed to the
 resource-list
 call. I'll have the blueprint up later today, unless you have already
 started on it.
 Note there is a patch from Anderson/Richard which may help with this:
 
 https://review.openstack.org/#/c/85781/
 
 The idea was to enable easier introspection of resources backed by
 nested
 stacks in a UI, but it could be equally useful to generate a tree
 resource view in the CLI client by walking the links.
 
 This would obviously be less efficient than recursing inside the
 engine,
 but arguably the output would be much more useful if it retains the
 nesting
 structure, as opposed to presenting a fully flattened soup of
 resources
 with no idea which stack/layer they belong to.
 
 Steve
 Could we simply add stack name/id to this output if the flag is passed? I
 agree that we currently have the capability to traverse the tree
 structure of nested stacks, but several folks have requested this
 capability, mostly for UI/UX purposes. It would be faster if you want the
 flat structure and we still retain the capability to create your own
 tree/widget/whatever by following the links. Also, I think its best to
 include this in the API directly since not all users are integrating
 using the python-heatclient.
 +1 for adding the stack name/id to the output to maintain a reference to
 the initial stack that the resource belongs to. The original stated
 use-case that I am aware of was to have a flat list of all resources
 associated with a stack to be displayed in the UI when the user asks to
 delete a stack. This would prevent confusion about what and why different
 resources are being deleted due to the stack delete.
 
 This use-case does not require any information about the nested stacks but
 I can foresee that information being useful in the future. I think a
 flattened data structure (with a reference to stack id) is still the most
 efficient solution. The patch landed by Anderson/Richard provides an
 alternate method to drill down into nested stacks if the hierarchy is
 important information though this is not the optimal solution in this
 case.
 
 Tim
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [HEAT] Discussion: How to list nested stack resources.

2014-06-05 Thread Randall Burt
I've submitted the spec (finally) and will work on some initial patches this 
afternoon/tomorrow. Please provide any feedback and thanks!

https://review.openstack.org/#/c/98219

On Jun 5, 2014, at 11:35 AM, Randall Burt randall.b...@rackspace.com wrote:

 Hey, sorry for the slow follow. I have to put some finishing touches on a 
 spec and submit that for review. I'll reply to the list with the link later 
 today. Hope to have an initial patch up as well in the next day or so.
 
 On Jun 5, 2014, at 10:03 AM, Nilakhya Chatterjee 
 nilakhya.chatter...@globallogic.com
 wrote:
 
 Hi Guys, 
 
 It was great to find your interest in solving the nested stack resource 
 listing.
 
 Let's move ahead by finishing any discussions left over the BP and getting an 
 approval on it.
 
 So far, what makes sense to me is: 
 
 a) an additional flag in the client call: --nested (Randall)
 b) a flattened DS in the output (Tim) 
 
 
 Thanks all ! 
 
 
 On Wed, May 21, 2014 at 12:42 AM, Randall Burt randall.b...@rackspace.com 
 wrote:
 Bartosz, would that be in addition to --nested? Seems like I'd want to be 
 able to say all of it as well as some of it.
 
 On May 20, 2014, at 1:24 PM, Bartosz Górski bartosz.gor...@ntti3.com
 wrote:
 
 Hi Tim,
 
 Maybe instead of just a flag like --nested (bool value) to resource-list we 
 can add optional argument like --depth X or --nested-level X (X - integer 
 value) to limit the depth for recursive listing of nested resources?
 
 Best,
 Bartosz
 
 On 05/19/2014 09:13 PM, Tim Schnell wrote:
 Blueprint:
 https://blueprints.launchpad.net/heat/+spec/explode-nested-resources
 
 Spec: https://wiki.openstack.org/wiki/Heat/explode-resource-list
 
 Tim
 
 On 5/19/14 1:53 PM, Tim Schnell tim.schn...@rackspace.com wrote:
 
 On 5/19/14 12:35 PM, Randall Burt randall.b...@rackspace.com wrote:
 
 
 On May 19, 2014, at 11:39 AM, Steven Hardy sha...@redhat.com
 wrote:
 
 On Mon, May 19, 2014 at 03:26:22PM +, Tim Schnell wrote:
 Hi Nilakhya,
 
 As Randall mentioned we did discuss this exact issue at the summit. I
 was
 planning on putting a blueprint together today to continue the
 discussion.
 The Stack Preview call is already doing the necessary recursion to
 gather
 the resources so we discussed being able to pass a stack id to the
 preview
 endpoint to get all of the resources.
 
 However, after thinking about it some more, I agree with Randall that
 maybe this should be an extra query parameter passed to the
 resource-list
 call. I'll have the blueprint up later today, unless you have already
 started on it.
 Note there is a patch from Anderson/Richard which may help with this:
 
 https://review.openstack.org/#/c/85781/
 
 The idea was to enable easier introspection of resources backed by
 nested
 stacks in a UI, but it could be equally useful to generate a tree
 resource view in the CLI client by walking the links.
 
 This would obviously be less efficient than recursing inside the
 engine,
 but arguably the output would be much more useful if it retains the
 nesting
 structure, as opposed to presenting a fully flattened soup of
 resources
 with no idea which stack/layer they belong to.
 
 Steve
 Could we simply add stack name/id to this output if the flag is passed? I
 agree that we currently have the capability to traverse the tree
 structure of nested stacks, but several folks have requested this
 capability, mostly for UI/UX purposes. It would be faster if you want the
 flat structure and we still retain the capability to create your own
 tree/widget/whatever by following the links. Also, I think its best to
 include this in the API directly since not all users are integrating
 using the python-heatclient.
 +1 for adding the stack name/id to the output to maintain a reference to
 the initial stack that the resource belongs to. The original stated
 use-case that I am aware of was to have a flat list of all resources
 associated with a stack to be displayed in the UI when the user asks to
 delete a stack. This would prevent confusion about what and why different
 resources are being deleted due to the stack delete.
 
 This use-case does not require any information about the nested stacks but
 I can foresee that information being useful in the future. I think a
 flattened data structure (with a reference to stack id) is still the most
 efficient solution. The patch landed by Anderson/Richard provides an
 alternate method to drill down into nested stacks if the hierarchy is
 important information though this is not the optimal solution in this
 case.
 
 Tim
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list

Re: [openstack-dev] [solum] reviews for the new API

2014-06-04 Thread Randall Burt
Sorry to poke my head in, but doesn't that beg the question of why you'd want 
to expose some third-party DSL in the first place? If it's an advanced feature, 
I wonder why it would even be considered before the 90% solution works, much 
less take a dependency on another non-integrated service. IMO, the value of 
Solum initially lies in addressing that 90% CI/CD/ALM solution well and in a 
way that doesn't require me to deploy and maintain an additional service that's 
not part of integrated OpenStack. At best, it would seem prudent to me to 
simply allow for a basic way for me to insert my own CI/CD steps and be 
prescriptive about how those custom steps participate in Solum's workflow 
rather than locking me into some specific workflow DSL. If I then choose to use 
Mistral to do some custom work, I can, but Solum shouldn't care what I use.

If Solum isn't fairly opinionated (at least early on) about the basic 
CI/CD-ALM lifecycle and the steps therein, I would question its utility if it's 
merely a wrapper over my existing Jenkins jobs and a workflow service.

On Jun 4, 2014, at 1:10 PM, Julien Vey vey.jul...@gmail.com wrote:

 Murali, Roshan.
 
 I think there is a misunderstanding. By default, the user wouldn't see any 
 workflow DSL. If the user does not specify anything, we would use a 
 pre-defined mistral workbook defined by Solum, as Adrian described
 
 If the user needs more, mistral is not so complicated. Have a look at this 
 example Angus has done 
 https://review.openstack.org/#/c/95709/4/etc/solum/workbooks/build.yaml
 We can define anything as solum actions, and the users would just have to 
 call one of these actions. Solum takes care of the implementation. If we have 
 comments about the DSL, Mistral's team is willing to help.
 
 Our end-users will be developers, and a lot of them will need a custom 
 workflow at some point. For instance, if Jenkins has so many plugins, it's 
 because there are as many custom build workflows, specific to each company. 
 If we add an abstraction on top of Mistral or any other workflow engine, we 
 will lock developers into our own decisions, and any additional feature would 
 require new development in Solum, whereas by exposing Mistral (when users want 
 it) we would allow for any customization.
 
 Julien
 
 
 
 
 2014-06-04 19:50 GMT+02:00 Roshan Agrawal roshan.agra...@rackspace.com:
 Agreeing with what Murali said below. We should make things really simple for 
 the 99th percentile of users, and not force the complexity needed by the 
 minority of "advanced users" on everyone else.
 
 Mistral is a generic workflow DSL; we do not need to expose all that 
 complexity to the Solum user who wants to customize the pipeline. 
 "Non-advanced" users will still need to customize the pipeline. In this 
 case, the user is not necessarily the developer persona, but typically an 
 admin/release manager persona.
 
 Pipeline customization should be doable easily, without having to understand 
 or author a generic workflow DSL.
 
 For the really advanced user who needs finer-grained control over the 
 Mistral workflow DSL (I am not sure there will be a use case for this if we 
 have the right customizations exposed via the pipeline API), we should have 
 the "option" for the user to tweak the Mistral DSL directly, but we should 
 not expect 99.9% (or more) of users to deal with a generic workflow.
 
 From: Murali Allada [mailto:murali.all...@rackspace.com] 
 Sent: Wednesday, June 04, 2014 12:09 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [solum] reviews for the new API
 
 Angus/Julien,
 
 I would disagree that we should expose the Mistral DSL to end users.
 
 What if we decide to use something other than Mistral in the future? We 
 should be able to plug in any workflow system we want without changing what 
 we expose to the end user.
 
 To me, the pipeline DSL is similar to our plan file. We don't expose a Heat 
 template to our end users.
 
 Murali
 
 On Jun 4, 2014, at 10:58 AM, Julien Vey vey.jul...@gmail.com wrote:
 
 Hi Angus,
 
 I really agree with you. I would insist on #3: most of our users will use 
 the default workbook, and only advanced users will want to customize the 
 workflow. Advanced users should easily understand a Mistral workbook, 
 because they are advanced.
 
 To add to the cons of creating our own DSL, it will require a lot more work, 
 more design discussions, more maintenance... We might end up doing what 
 Mistral is already doing. If we have some difficulties with Mistral's DSL, 
 we can talk with the team and contribute back our experience of using 
 Mistral.
 
 Julien
 
 
 2014-06-04 14:11 GMT+02:00 Angus Salkeld angus.salk...@rackspace.com:
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Hi all
 
 I have posted this series and it has 

Re: [openstack-dev] [Heat]Heat template parameters encryption

2014-06-04 Thread Randall Burt
On Jun 4, 2014, at 7:05 PM, Clint Byrum cl...@fewbar.com
 wrote:

 Excerpts from Zane Bitter's message of 2014-06-04 16:19:05 -0700:
 On 04/06/14 15:58, Vijendar Komalla wrote:
 Hi Devs,
 I have submitted a WIP review (https://review.openstack.org/#/c/97900/)
 for Heat parameters encryption blueprint
 https://blueprints.launchpad.net/heat/+spec/encrypt-hidden-parameters
 This quick and dirty implementation encrypts all the parameters on
 Stack 'store' and decrypts them on Stack 'load'.
 Following are a couple of improvements I am thinking about;
 1. Instead of encrypting individual parameters, on Stack 'store' encrypt
 all the parameters together as a dictionary  [something like
 crypt.encrypt(json.dumps(param_dictionary))]
 
 Yeah, definitely don't encrypt them individually.
 
 2. Just encrypt parameters that were marked as 'hidden', instead of
 encrypting all parameters
 
 I would like to hear your feedback/suggestions.
 
 Just as a heads-up, we will soon need to store the properties of 
 resources too, at which point parameters become the least of our 
 problems. (In fact, in theory we wouldn't even need to store 
 parameters... and probably by the time convergence is completely 
 implemented, we won't.) Which is to say that there's almost certainly no 
 point in discriminating between hidden and non-hidden parameters.
 
 I'll refrain from commenting on whether the extra security this affords 
 is worth the giant pain it causes in debugging, except to say that IMO 
 there should be a config option to disable the feature (and if it's 
 enabled by default, it should probably be disabled by default in e.g. 
 devstack).
 
 Storing secrets seems like a job for Barbican. That handles the giant
 pain problem because in devstack you can just tell Barbican to have an
 open read policy.
 
 I'd rather see good hooks for Barbican than blanket encryption. I've
 worked with a few things like this and they are despised and worked
 around universally because of the reason Zane has expressed concern about:
 debugging gets ridiculous.
 
 How about this:
 
 parameters:
  secrets:
type: sensitive
 resources:
  sensitive_deployment:
type: OS::Heat::StructuredDeployment
properties:
  config: weverConfig
  server: myserver
  input_values:
secret_handle: { get_param: secrets }
 
 The sensitive type would, on the client side, store the value in Barbican,
 never in Heat. Instead it would just pass in a handle which the user
 can then build policy around. Obviously this implies the user would set
 up Barbican's in-instance tools to access the secrets value. But the
 idea is, let Heat worry about being high performing and introspectable,
 and then let Barbican worry about sensitive things.

While certainly ideal, it doesn't solve the current problem since we can't yet 
guarantee Barbican will even be available in a given release of OpenStack. In 
the meantime, Heat continues to store sensitive user information unencrypted in 
its database. Once Barbican is integrated, I'd be all for changing this 
implementation, but until then, we do need an interim solution. Sure, debugging 
is a pain and as developers we can certainly grumble, but leaking sensitive 
user information because we couldn't be fussed to protect data at rest seems worse 
IMO. Additionally, the solution as described sounds like we're imposing a 
pretty awkward process on a user to save ourselves from having to decrypt some 
data in the cases where we can't access the stack information directly from the 
API or via debugging running Heat code (where the data isn't encrypted anymore).


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat]Heat template parameters encryption

2014-06-04 Thread Randall Burt
On Jun 4, 2014, at 7:30 PM, Clint Byrum cl...@fewbar.com
 wrote:

 Excerpts from Randall Burt's message of 2014-06-04 17:17:07 -0700:
 On Jun 4, 2014, at 7:05 PM, Clint Byrum cl...@fewbar.com
 wrote:
 
 Excerpts from Zane Bitter's message of 2014-06-04 16:19:05 -0700:
 On 04/06/14 15:58, Vijendar Komalla wrote:
 Hi Devs,
 I have submitted a WIP review (https://review.openstack.org/#/c/97900/)
 for Heat parameters encryption blueprint
 https://blueprints.launchpad.net/heat/+spec/encrypt-hidden-parameters
 This quick and dirty implementation encrypts all the parameters on
 Stack 'store' and decrypts them on Stack 'load'.
 Following are a couple of improvements I am thinking about;
 1. Instead of encrypting individual parameters, on Stack 'store' encrypt
 all the parameters together as a dictionary  [something like
 crypt.encrypt(json.dumps(param_dictionary))]
 
 Yeah, definitely don't encrypt them individually.
 
 2. Just encrypt parameters that were marked as 'hidden', instead of
 encrypting all parameters
 
 I would like to hear your feedback/suggestions.
 
 Just as a heads-up, we will soon need to store the properties of 
 resources too, at which point parameters become the least of our 
 problems. (In fact, in theory we wouldn't even need to store 
 parameters... and probably by the time convergence is completely 
 implemented, we won't.) Which is to say that there's almost certainly no 
 point in discriminating between hidden and non-hidden parameters.
 
 I'll refrain from commenting on whether the extra security this affords 
 is worth the giant pain it causes in debugging, except to say that IMO 
 there should be a config option to disable the feature (and if it's 
 enabled by default, it should probably be disabled by default in e.g. 
 devstack).
 
 Storing secrets seems like a job for Barbican. That handles the giant
 pain problem because in devstack you can just tell Barbican to have an
 open read policy.
 
 I'd rather see good hooks for Barbican than blanket encryption. I've
 worked with a few things like this and they are despised and worked
 around universally because of the reason Zane has expressed concern about:
 debugging gets ridiculous.
 
 How about this:
 
 parameters:
 secrets:
   type: sensitive
 resources:
 sensitive_deployment:
   type: OS::Heat::StructuredDeployment
   properties:
 config: weverConfig
 server: myserver
 input_values:
   secret_handle: { get_param: secrets }
 
 The sensitive type would, on the client side, store the value in Barbican,
 never in Heat. Instead it would just pass in a handle which the user
 can then build policy around. Obviously this implies the user would set
 up Barbican's in-instance tools to access the secrets value. But the
 idea is, let Heat worry about being high performing and introspectable,
 and then let Barbican worry about sensitive things.
 
 While certainly ideal, it doesn't solve the current problem since we can't 
 yet guarantee Barbican will even be available in a given release of 
 OpenStack. In the meantime, Heat continues to store sensitive user 
 information unencrypted in its database. Once Barbican is integrated, I'd be 
 all for changing this implementation, but until then, we do need an interim 
 solution. Sure, debugging is a pain and as developers we can certainly 
 grumble, but leaking sensitive user information because we couldn't be fussed 
 to protect data at rest seems worse IMO. Additionally, the solution as 
 described sounds like we're imposing a pretty awkward process on a user to 
 save ourselves from having to decrypt some data in the cases where we can't 
 access the stack information directly from the API or via debugging running 
 Heat code (where the data isn't encrypted anymore).
 
 
 I have made that exact, reasoned argument before, and then later seen
 giant swathes of code with things like
 
 if CONF.dont_encrypt:
  ...
 
 The next thing that happens is one by one the production deployments
 eventually end up with dont_encrypt because they can't debug anything
 and they've all had a multi-hour downtime event while they dealt with
 the encrypted database on several levels.
 
 I'm not coddling developers. I'm coddling operators.
 
 It's a different story if you only encrypt the sensitive leaf data,
 like passwords, credit card numbers, or personally identifying data. The
 operator can still have some clue what is going on if they can see the
 non-sensitive data.
 
 The design I suggested above isn't really that awkward and will
 be infinitely easier to understand for an operator than encrypting
 everything.

So, you're simply advocating a more granular approach in which we only encrypt 
the values for inputs marked as hidden in the interim?
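
(For reference, "hidden" is the existing HOT parameter attribute; a minimal
example:)

  parameters:
    db_password:
      type: string
      hidden: true   # masked in API output today; encrypted at rest under this proposal
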
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Barbican][Heat] Reviews requested for Barbican resources

2014-05-29 Thread Randall Burt
Hello Barbican devs. I was wondering if we could get some of you to weigh in on 
a couple of reviews for adding Barbican support in Heat. We seem to be churning 
a bit around current and future features supported by the resources and could 
use some expert opinions.

Blueprint: https://blueprints.launchpad.net/heat/+spec/barbican-resources
Order Resource: https://review.openstack.org/81906
Secret Resource: https://review.openstack.org/79355

Thanks in advance for your time.
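
(For reviewers coming from the Barbican side, template usage under the patches
looks roughly like this sketch; property names are taken from the reviews and
may still change:)

  resources:
    my_secret:
      type: OS::Barbican::Secret
      properties:
        name: my-app-credential
        payload: { get_param: secret_value }   # illustrative parameter
        payload_content_type: text/plain
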
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HEAT] Discussion: How to list nested stack resources.

2014-05-20 Thread Randall Burt
Bartosz, would that be in addition to --nested? Seems like I'd want to be able 
to say all of it as well as some of it.

On May 20, 2014, at 1:24 PM, Bartosz Górski bartosz.gor...@ntti3.com
 wrote:

 Hi Tim,
 
 Maybe instead of just a flag like --nested (bool value) to resource-list we 
 can add optional argument like --depth X or --nested-level X (X - integer 
 value) to limit the depth for recursive listing of nested resources?
 
 Best,
 Bartosz
 
 On 05/19/2014 09:13 PM, Tim Schnell wrote:
 Blueprint:
 https://blueprints.launchpad.net/heat/+spec/explode-nested-resources
 
 Spec: https://wiki.openstack.org/wiki/Heat/explode-resource-list
 
 Tim
 
 On 5/19/14 1:53 PM, Tim Schnell tim.schn...@rackspace.com wrote:
 
 On 5/19/14 12:35 PM, Randall Burt randall.b...@rackspace.com wrote:
 
 
 On May 19, 2014, at 11:39 AM, Steven Hardy sha...@redhat.com
 wrote:
 
 On Mon, May 19, 2014 at 03:26:22PM +, Tim Schnell wrote:
 Hi Nilakhya,
 
 As Randall mentioned we did discuss this exact issue at the summit. I
 was
 planning on putting a blueprint together today to continue the
 discussion.
 The Stack Preview call is already doing the necessary recursion to
 gather
 the resources so we discussed being able to pass a stack id to the
 preview
 endpoint to get all of the resources.
 
 However, after thinking about it some more, I agree with Randall that
 maybe this should be an extra query parameter passed to the
 resource-list
  call. I'll have the blueprint up later today, unless you have already
 started on it.
 Note there is a patch from Anderson/Richard which may help with this:
 
 https://review.openstack.org/#/c/85781/
 
 The idea was to enable easier introspection of resources backed by
 nested
 stacks in a UI, but it could be equally useful to generate a tree
 resource view in the CLI client by walking the links.
 
 This would obviously be less efficient than recursing inside the
 engine,
 but arguably the output would be much more useful if it retains the
 nesting
 structure, as opposed to presenting a fully flattened soup of
 resources
 with no idea which stack/layer they belong to.
 
 Steve
 Could we simply add stack name/id to this output if the flag is passed? I
 agree that we currently have the capability to traverse the tree
 structure of nested stacks, but several folks have requested this
 capability, mostly for UI/UX purposes. It would be faster if you want the
 flat structure and we still retain the capability to create your own
 tree/widget/whatever by following the links. Also, I think its best to
 include this in the API directly since not all users are integrating
 using the python-heatclient.
 +1 for adding the stack name/id to the output to maintain a reference to
 the initial stack that the resource belongs to. The original stated
 use-case that I am aware of was to have a flat list of all resources
 associated with a stack to be displayed in the UI when the user chooses to
 delete a stack. This would prevent confusion about what and why different
 resources are being deleted due to the stack delete.
 
 This use-case does not require any information about the nested stacks but
 I can foresee that information being useful in the future. I think a
 flattened data structure (with a reference to stack id) is still the most
 efficient solution. The patch landed by Anderson/Richard provides an
 alternate method to drill down into nested stacks if the hierarchy is
 important information though this is not the optimal solution in this
 case.
 
 Tim
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HEAT] Discussion: How to list nested stack resources.

2014-05-19 Thread Randall Burt
Nilakhya, We discussed this a bit at the summit and I think the consensus was 
that this would be a good thing to do by passing a flag to resource-list that 
would flatten the structure of nested stacks in the call. Tim Schnell brought 
this up as well and may be interested in helping define the use case and spec.

On May 14, 2014, at 4:32 PM, Nilakhya Chatterjee 
nilakhya.chatter...@globallogic.com wrote:

 Hi All,
 
 I recently tried to create a nested stack with the following example : 
 
 http://paste.openstack.org/show/79156/
 
 heat resource-list gives only MyStack, but the intention should be to list all 
 the resources created by the nested templates, as also indicated by the command 
 help:
 
 resource-list   Show list of resources belonging to a stack
 
 
 Let me know if this requires a BP to be created for discussion.
 
 Thanks.
 
 -- 
 
 Nilakhya | Consultant Engineering
 GlobalLogic
 P +x.xxx.xxx.  M +91.989.112.5770  S skype
 www.globallogic.com
 
 http://www.globallogic.com/email_disclaimer.txt
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [HEAT] Discussion: How to list nested stack resources.

2014-05-19 Thread Randall Burt
On May 19, 2014, at 11:39 AM, Steven Hardy sha...@redhat.com
 wrote:

 On Mon, May 19, 2014 at 03:26:22PM +, Tim Schnell wrote:
 Hi Nilakhya,
 
 As Randall mentioned we did discuss this exact issue at the summit. I was
 planning on putting a blueprint together today to continue the discussion.
 The Stack Preview call is already doing the necessary recursion to gather
 the resources so we discussed being able to pass a stack id to the preview
 endpoint to get all of the resources.
 
 However, after thinking about it some more, I agree with Randall that
 maybe this should be an extra query parameter passed to the resource-list
 call. I'll have the blueprint up later today, unless you have already
 started on it.
 
 Note there is a patch from Anderson/Richard which may help with this:
 
 https://review.openstack.org/#/c/85781/
 
 The idea was to enable easier introspection of resources backed by nested
 stacks in a UI, but it could be equally useful to generate a tree
 resource view in the CLI client by walking the links.
 
 This would obviously be less efficient than recursing inside the engine,
 but arguably the output would be much more useful if it retains the nesting
 structure, as opposed to presenting a fully flattened soup of resources
 with no idea which stack/layer they belong to.
 
 Steve

Could we simply add stack name/id to this output if the flag is passed? I agree 
that we currently have the capability to traverse the tree structure of nested 
stacks, but several folks have requested this capability, mostly for UI/UX 
purposes. It would be faster if you want the flat structure and we still retain 
the capability to create your own tree/widget/whatever by following the links. 
Also, I think it's best to include this in the API directly since not all users 
are integrating using the python-heatclient.
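
For example, the flattened output might carry the owning stack alongside each 
resource, something like the following (a sketch only; the field and stack 
names here are made up, not a settled format):

    resources:
      - resource_name: server_group
        resource_type: OS::Heat::ResourceGroup
        stack_name: parent_stack
      - resource_name: "0"
        resource_type: OS::Nova::Server
        stack_name: parent_stack-server_group-kd5ug5   # nested stack owning it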


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Proposing Thomas Spatzier for heat-core

2014-04-22 Thread Randall Burt
+1

On Apr 22, 2014, at 1:43 PM, Zane Bitter zbit...@redhat.com wrote:

 Resending with [Heat] in the subject line. My bad.
 
 On 22/04/14 14:21, Zane Bitter wrote:
 I'd like to propose that we add Thomas Spatzier to the heat-core team.
 
 Thomas has been involved in and consistently contributing to the Heat
 community for around a year, since the time of the Havana design summit.
 His code reviews are of extremely high quality IMO, and he has been
 reviewing at a rate consistent with a member of the core team[1].
 
 One thing worth addressing is that Thomas has only recently started
 expanding the focus of his reviews from HOT-related changes out into the
 rest of the code base. I don't see this as an obstacle - nobody is
 familiar with *all* of the code, and we trust core reviewers to know
 when we are qualified to give +2 and when we should limit ourselves to
 +1 - and as far as I know nobody else is bothered either. However, if
 you have strong feelings on this subject nobody will take it personally
 if you speak up :)
 
 Heat Core team members, please vote on this thread. A quick reminder of
 your options[2]:
 +1  - five of these are sufficient for acceptance
  0  - abstention is always an option
 -1  - this acts as a veto
 
 cheers,
 Zane.
 
 
 [1] http://russellbryant.net/openstack-stats/heat-reviewers-30.txt
 [2]
 https://wiki.openstack.org/wiki/Heat/CoreTeam#Adding_or_Removing_Members
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Glance] How about managing heat template like flavors in nova?

2014-04-21 Thread Randall Burt
We discussed this with the Glance community back in January and it was agreed 
that we should extend Glance's scope to include Heat templates as well as other 
artifacts. I'm planning on submitting some patches around this during Juno.

Adding the Glance tag as this is relevant to them as well.


 Original message 
From: Mike Spreitzer
Date:04/19/2014 9:43 PM (GMT-06:00)
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] How about managing heat template like 
flavors in nova?

Gouzongmei gouzong...@huawei.com wrote on 04/19/2014 10:37:02 PM:

 We can supply APIs for getting, putting, adding and deleting current
 templates in the system, then when creating heat stacks, we just
 need to specify the name of the template.

Look for past discussion of Heat Template Repository (Heater).  Here is part of 
it: https://wiki.openstack.org/wiki/Heat/htr

Regards,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Custom Resource

2014-04-14 Thread Randall Burt

On Apr 14, 2014, at 8:08 AM, Rabi Mishra ramis...@redhat.com
 wrote:

 Hi Steve,
 
 Thanks a lot for your prompt response. I can't agree more that the CFN custom 
 resource implementation is complex with its dependency on SNS and SQS. 
 However, it decouples the implementation of the resource life-cycle from the 
 resource itself. IMO, this has some advantages from the template complexity 
 and flexibility point of view.

IIRC implementing something like this had been discussed quite a while back. I 
think we discussed the possibility of using web hooks and a defined api/payload 
in place of the SNS/SQS type stuff. I don't think it ever made it to the 
backlog, but I'd be happy to discuss further design and maybe add a design 
session to the summit if you're unable to make it.
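
To make the webhook idea a bit more concrete, the payload might look something 
like the sketch below. All of the field names are hypothetical, loosely modeled 
on the CFN custom resource protocol rather than anything Heat defines today:

    # hypothetical payload Heat would POST to a registered webhook
    request_type: CREATE                  # or UPDATE / DELETE
    stack_id: 4f1b-some-stack-uuid
    logical_resource_id: my_dns_record
    resource_properties:
      zone: example.com
    response_url: https://callback.example.com/heat/abc123   # where the handler reports back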

 On choices you mentioned:
 
 1. Custom Python Plugins - I do think this is the best approach for my 
 use-cases. However, asking a customer to develop custom plugins and 
 maintain them can be too much to ask (asking their 3rd-party tool vendors to 
 do it is even more difficult), compared to plugging in some of their existing 
 infra script snippets.
 
 2. Provider Resource - Use of environment files for mapping nested template 
 and exchange of parameters/attributes looks sleek. However, I am yet to 
 understand how to wrap code snippets (many of them existing scripts) for the 
 resource life-cycle in the nested template to achieve these use-cases. 
 
 With the CFN custom resource, all that's required is adding some bits of code 
 to the existing scripts to parse the JSON snippets based on the stack 
 life-cycle method.
 
 However, my understanding of what's possible with the Provider Resource is 
 limited at the moment. I'll spend more time and go through it before coming 
 back with an answer to the use-case feasibility and constraints.
 
 
 Regards,
 Rabi Mishra
 
 - Original Message -
 Hi Rabi,
 
 On Mon, Apr 14, 2014 at 06:44:44AM -0400, Rabi Mishra wrote:
 Hi All,
 
 Recently, I've come across some requirements for external
 integrations/resources that can be managed like stack resources
 (create,update,delete) from the stack.
 
 1. Adding/Removing DNS records for instances created as part of a stack.
 2. Integration with IPAM solutions for allocate/release of IPs (IP
 allocation pool for provider network)
 3. Other custom integration for dynamic parameters to stacks.
 
 IMHO, it would probably make sense to create a custom resource like 'AWS
 CFN Custom Resource'[1] that can be used for these kind of use cases. I
 have created a blueprint[2] for this.
 
 Heat already has a couple of ways for custom resources to be defined.
 
 The one which probably matches your requirements best is the provider
 resource interface, which allows template defined resources to be mapped
 to user-definable resource types, via an environment file:
 
 http://hardysteven.blogspot.co.uk/2013/10/heat-providersenvironments-101-ive.html
 http://docs.openstack.org/developer/heat/template_guide/environment.html
 
 Provider resources can be defined by both users, and deployers (who can use
 templates to e.g wrap an existing resource with something like DNS
 registration logic, and expose the type transparently to the end-user)
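
As an illustration of that mapping, a minimal sketch might look like the 
following (the custom type name and template path are made up for this example):

     # environment file: map a custom type to a provider template
     resource_registry:
       Custom::DNSRecord: templates/dns_record.yaml

     # a stack template can then use the custom type like any built-in resource
     resources:
       www_record:
         type: Custom::DNSRecord
         properties:
           name: www.example.com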
 
 For deployer requirements not satisfied by provider resources (for example
 integration with third-party services), Heat also provides a python plugin
 API, which enables deployers to create their own resource plugins as
 needed:
 
 http://docs.openstack.org/developer/heat/pluginguide.html
 
 Personally, I think these two models provide sufficient flexibility that we
 should be able to avoid the burden of maintaining a CFN compatible custom
 resource plugin API.  I've not looked at it in detail, but the CFN model
 you refer to has always seemed pretty complex to me, and seems like
 something we don't necessarily want to replicate.
 
 If there are gaps where things are not yet possible via the provider
 resource interface, I'd rather discuss incremental improvements to that
 instead of wholesale reimplementation of something compatible with AWS.
 
 Can you provide any more feedback on your use-cases, and whether the
 interfaces I linked can be used to satisfy them?
 
 Steve
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Better handling of lists in Heat - a proposal to add a map function

2014-04-07 Thread Randall Burt

On Apr 4, 2014, at 1:56 PM, Zane Bitter zbit...@redhat.com
 wrote:

 On 19/02/14 02:48, Clint Byrum wrote:
 Since picking up Heat and trying to think about how to express clusters
 of things, I've been troubled by how poorly the CFN language supports
 using lists. There has always been the Fn::Select function for
 dereferencing arrays and maps, and recently we added a nice enhancement
 to HOT to allow referencing these directly in get_attr and get_param.
 
 However, this does not help us when we want to do something with all of
 the members of a list.
 
 In many applications I suspect the template authors will want to do what
 we want to do now in TripleO. We have a list of identical servers and
 we'd like to fetch the same attribute from them all, join it with other
 attributes, and return that as a string.
 
 The specific case is that we need to have all of the hosts in a cluster
 of machines addressable in /etc/hosts (please, Designate, save us,
 eventually. ;). The way to do this if we had just explicit resources
 named NovaCompute0, NovaCompute1, would be:
 
   str_join:
 - \n
 - - str_join:
 - ' '
 - get_attr:
   - NovaCompute0
   - networks.ctlplane.0
 - get_attr:
   - NovaCompute0
   - name
   - str_join:
 - ' '
 - get_attr:
   - NovaCompute1
   - networks.ctlplane.0
 - get_attr:
   - NovaCompute1
   - name
 
 Now, what I'd really like to do is this:
 
 map:
   - str_join:
 - \n
 - - str_join:
   - ' '
   - get_attr:
 - $1
 - networks.ctlplane.0
   - get_attr:
 - $1
 - name
   - - NovaCompute0
 - NovaCompute1
 
 This would be helpful for the instances of resource groups too, as we
 can make sure they return a list. The above then becomes:
 
 
 map:
   - str_join:
 - \n
 - - str_join:
   - ' '
   - get_attr:
 - $1
 - networks.ctlplane.0
   - get_attr:
 - $1
 - name
   - get_attr:
   - NovaComputeGroup
   - member_resources
 
 Thoughts on this idea? I will throw together an implementation soon but
 wanted to get this idea out there into the hive mind ASAP.
 
 Apparently I read this at the time, but completely forgot about it. Sorry 
 about that! Since it has come up again in the context of the TripleO Heat 
 templates and merge.py thread, allow me to contribute my 2c.
 
 Without expressing an opinion on this proposal specifically, consensus within 
 the Heat core team has been heavily -1 on any sort of for-each functionality. 
 I'm happy to have the debate again (and TBH I don't really know what the 
 right answer is), but I wouldn't consider the lack of comment on this as a 
 reliable indicator of lazy consensus in favour; equivalent proposals have 
 been considered and rejected on multiple occasions.
 
 Since it looks like TripleO will soon be able to move over to using 
 AutoscalingGroups (or ResourceGroups, or something) for groups of similar 
 servers, maybe we could consider baking this functionality into Autoscaling 
 groups instead of as an intrinsic function.
 
 For example, when you do get_attr on an autoscaling resource it could fetch 
 the corresponding attribute from each member of the group and return them as 
 a list. (It might be wise to prepend Output. or something similar - maybe 
 Members. - to the attribute names, as AWS::CloudFormation::Stack does, so 
 that attributes of the autoscaling group itself can remain in a separate 
 namespace.)

FWIW, ResourceGroup supports this now as well as getting the attribute value 
from a given indexed member of the group.
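
A rough sketch of both access patterns in a template's outputs section, assuming 
an OS::Heat::ResourceGroup named compute_group whose members are Nova servers 
(the surrounding template is omitted):

    # list of the attribute gathered from every member of the group
    all_names:
      value: {get_attr: [compute_group, name]}

    # the same attribute from a single indexed member
    first_name:
      value: {get_attr: [compute_group, resource.0.name]}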

 
 Since members of your NovaComputeGroup will be nested stacks anyway (using 
 ResourceGroup or some equivalent feature - preferably autoscaling with 
 rolling updates), in the case above you'd define in the scaled template:
 
  outputs:
hosts_entry:
  description: An /etc/hosts entry for the NovaComputeServer
  value:
- str_join:
  - ' '
  - - get_attr:
  - NovaComputeServer
  - networks
  - ctlplane
  - 0
- get_attr:
  - NovaComputeServer
  - name
 
 And then in the main template (containing the autoscaling group):
 
str_join:
  - \n
  - get_attr:
- NovaComputeGroup
- Members.hosts_entry
 
 would give the same output as your example would.
 
 IMHO we should do something like this regardless of whether it solves your 
 use case, because it's fairly easy, requires no changes to the template 
 format, and users have been asking for ways to access e.g. a list of IP 
 addresses from a scaling group. That said, it seems very likely that making 
 the other changes required for TripleO to get rid of merge.py (i.e. switching 
 to scaling groups of templates instead of multiplying resources in 
 templates) will make this a viable solution for 

Re: [openstack-dev] [Heat][Murano][TOSCA] Murano team contrib. to Heat TOSCA activities

2014-03-10 Thread Randall Burt

On Mar 10, 2014, at 1:26 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com
 wrote:

 Hi,
 
 Thomas and Zane initiated a good discussion about Murano DSL and TOSCA 
 initiatives in Heat. I think it will be beneficial for both teams to contribute 
 to TOSCA.

Wasn't TOSCA developing a simplified version in order to converge with HOT?

 While Mirantis is working on the organizational part for OASIS, I would like to 
 understand the current view on the relationship between TOSCA and HOT. 
 It looks like TOSCA can cover both the declarative components of HOT 
 templates and the imperative workflows which can be covered by Murano. What do 
 you think about that?

Aren't workflows covered by Mistral? How would this be different from including 
Mistral support in Heat?

 I think the TOSCA format can be used as a description of applications, and 
 heat-translator can actually convert TOSCA descriptions to both HOT and 
 Murano files which can then be used for actual application deployment. Both 
 Heat and Murano workflows can coexist in the Orchestration program and cover 
 both declarative template and imperative workflow use cases.
 
 -- 
 Georgy Okrokvertskhov
 Architect,
 OpenStack Platform Products,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Better handling of lists in Heat - a proposal to add a map function

2014-02-19 Thread Randall Burt
This may also be relevant: 
https://blueprints.launchpad.net/heat/+spec/override-resource-name-in-resource-group

On Feb 19, 2014, at 1:48 AM, Clint Byrum cl...@fewbar.com
 wrote:

 Since picking up Heat and trying to think about how to express clusters
 of things, I've been troubled by how poorly the CFN language supports
 using lists. There has always been the Fn::Select function for
 dereferencing arrays and maps, and recently we added a nice enhancement
 to HOT to allow referencing these directly in get_attr and get_param.
 
 However, this does not help us when we want to do something with all of
 the members of a list.
 
 In many applications I suspect the template authors will want to do what
 we want to do now in TripleO. We have a list of identical servers and
 we'd like to fetch the same attribute from them all, join it with other
 attributes, and return that as a string.
 
 The specific case is that we need to have all of the hosts in a cluster
 of machines addressable in /etc/hosts (please, Designate, save us,
 eventually. ;). The way to do this if we had just explicit resources
 named NovaCompute0, NovaCompute1, would be:
 
  str_join:
- \n
- - str_join:
- ' '
- get_attr:
  - NovaCompute0
  - networks.ctlplane.0
- get_attr:
  - NovaCompute0
  - name
  - str_join:
- ' '
- get_attr:
  - NovaCompute1
  - networks.ctlplane.0
- get_attr:
  - NovaCompute1
  - name
 
 Now, what I'd really like to do is this:
 
 map:
  - str_join:
- \n
- - str_join:
  - ' '
  - get_attr:
- $1
- networks.ctlplane.0
  - get_attr:
- $1
- name
  - - NovaCompute0
- NovaCompute1
 
 This would be helpful for the instances of resource groups too, as we
 can make sure they return a list. The above then becomes:
 
 
 map:
  - str_join:
- \n
- - str_join:
  - ' '
  - get_attr:
- $1
- networks.ctlplane.0
  - get_attr:
- $1
- name
  - get_attr:
  - NovaComputeGroup
  - member_resources
 
 Thoughts on this idea? I will throw together an implementation soon but
 wanted to get this idea out there into the hive mind ASAP.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Nominate Jason Dunsmore for heat-core

2014-02-09 Thread Randall Burt
Very +1

 Original message 
From: Steve Baker
Date:02/09/2014 4:41 PM (GMT-06:00)
To: OpenStack Development Mailing List
Subject: [openstack-dev] [heat] Nominate Jason Dunsmore for heat-core

I would like to nominate Jason Dunsmore for heat-core.

His reviews are valuable and prolific, his code contributions have
demonstrated a good knowledge of heat internals, and he has endured a
sound hazing to get multi-engine into heat.

http://russellbryant.net/openstack-stats/heat-reviewers-60.txt
http://www.stackalytics.com/?release=icehouse&metric=commits&user_id=jasondunsmore

Heat cores, please reply with your vote.

cheers

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] About LaunchConfiguration and Autoscaling

2014-01-30 Thread Randall Burt
On Jan 30, 2014, at 12:09 PM, Clint Byrum cl...@fewbar.com
 wrote:

 Excerpts from Zane Bitter's message of 2014-01-30 07:38:38 -0800:
 On 30/01/14 06:01, Thomas Herve wrote:
 Hi all,
 
 While I was talking to Zane yesterday, he raised an interesting question about 
 whether or not we want to keep a LaunchConfiguration object for the native 
 autoscaling resources.
 
 The LaunchConfiguration object basically holds properties to be able to 
 fire new servers in a scaling group. In the new design, we will be able to 
 start arbitrary resources, so we can't keep a strict LaunchConfiguration 
 object as it exists, as we can have arbitrary properties.
 
 It may still be interesting to store it separately to be able to reuse 
 it between groups.
 
 So either we do this:
 
 group:
   type: OS::Heat::ScalingGroup
   properties:
 scaled_resource: OS::Nova::Server
 resource_properties:
   image: my_image
   flavor: m1.large
 
 The main advantages of this that I see are:
 
 * It's one less resource.
 * We can verify properties against the scaled_resource at the place the 
 LaunchConfig is defined. (Note: in _both_ models these would be verified 
 at the same place the _ScalingGroup_ is defined.)

This looks a lot like OS::Heat::ResourceGroup, which I believe already 
addresses some of Zane's concerns around dynamic property validation.

 
 Or:
 
 group:
   type: OS::Heat::ScalingGroup
   properties:
 scaled_resource: OS::Nova::Server
 launch_configuration: server_config
 server_config:
   type: OS::Heat::LaunchConfiguration
   properties:
 image: my_image
 flavor: m1.large
 
 
 I favour this one for a few reasons:
 
 * A single LaunchConfiguration can be re-used by multiple scaling 
 groups. Reuse is good, and is one of the things we have been driving 
 toward with e.g. software deployments.
 
 I agree with the desire for re-use. In fact I am somewhat desperate to
 have it as we try to write templates which allow assembling different
 topologies of OpenStack deployment.
 
 I would hope we would solve that at a deeper level, rather than making
 resources for the things we think will need re-use. I think nested stacks
 allow this level of re-use already anyway. Software config just allows
 sub-resource composition.

Agreed. Codifying re-use inside specific resource types is a game of catch-up I 
don't think we can win in the end.

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Nomination for heat-core

2013-12-19 Thread Randall Burt
+1


Sent from my Verizon Wireless 4G LTE Smartphone



 Original message 
From: Steve Baker sba...@redhat.com
Date: 12/18/2013 8:28 PM (GMT-06:00)
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Subject: [openstack-dev] [heat] Nomination for heat-core


I would like to nominate Bartosz Górski to be a heat-core reviewer. His reviews 
to date have been valuable and his other contributions to the project have 
shown a sound understanding of how heat works.

Here is his review history:
https://review.openstack.org/#/q/reviewer:bartosz.gorski%2540ntti3.com+project:openstack/heat,n,z

If you are heat-core please reply with your vote.

cheers
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Stack preview

2013-12-11 Thread Randall Burt
On Dec 10, 2013, at 3:46 PM, Zane Bitter zbit...@redhat.com wrote:

 On 10/12/13 15:10, Randall Burt wrote:
 On Dec 10, 2013, at 1:27 PM, Zane Bitter zbit...@redhat.com
  wrote:
 
 On 10/12/13 12:46, Richard Lee wrote:
 Hey all,
 
 We're working on a blueprint
 https://blueprints.launchpad.net/heat/+spec/preview-stack that adds
 the ability to preview what a given template+parameters would create in
 terms of resources.  We think this would provide significant value for
 blueprint authors and for other heat users that want to see what
 someone's template would create before actually launching resources (and
 possibly having to pay for them).
 
 +1 for this use case.
 
 BTW AWS supports something similar, which we never bothered to implement in 
 the compatibility API. You might want to do some research on that as a 
 starting point:
 
 http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_EstimateTemplateCost.html
 
 However the fact that we have pluggable resource types would make it very 
 difficult for us to do cost calculations inside Heat (and, in fact, 
 CloudFormation doesn't do that either, it just spits out a URL for their 
 separate calculator) - e.g. it's very hard to know which resources will 
 create, say, a Nova server unless they are all annotated in some way.
 
 Are you thinking the API will simply return a list of resource types and 
 counts? e.g.:
 
 {
   OS::Nova::Server: 2,
   OS::Cinder::Volume: 1,
   OS::Neutron::FloatingIP: 1
 }
 
 If so, +1 for that implementation too. Don't forget that you will have to 
 recurse through provider templates, which may not contain what they say on 
 the tin.
 
 That sounds more than reasonable to me. I don't think we could begin to do 
 any sort of meaningful cost calculation without having to mostly punt to 
 the service provider anyway.
 
 Yeah, exactly.
 
 Although it occurs to me that we may want more detail than I originally 
 thought... e.g. knowing the flavor of any Nova servers is probably quite 
 important. Any ideas?
 
 The first thing that comes to mind is that we could annotate resource types 
 with the list of parameters we want to group by. That would enable something 
 like:
 
 {
  OS::Nova::Server:
[{config: {flavor: m1.small}, count: 1},
 {config: {flavor: m3.large}, count: 1}],
  OS::Cinder::Volume:
[{config: {size: 10}, count: 1}],
  OS::Neutron::FloatingIP:
[{config: {}, count: 1}],
 }
 
 - ZB

Yeah, that makes a lot of sense from an "I want to calculate what this stack is 
going to cost me" use case. My only concern is that a given service provider 
may have different ideas as to what's important WRT a stack's value, but we 
could always extend this with something in the global environment similar to 
how we discussed resource support status in those reviews.

So it sounds to me like we just need to add a field to the property schema that 
says this property is important to the preview call.
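
As a purely conceptual sketch (Heat property schemas are actually defined in 
Python; the preview flag below is hypothetical and only illustrates the idea):

    # hypothetical annotation on a resource property schema
    flavor:
      type: string
      required: true
      preview: true    # made-up flag: include this property in preview output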

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-11 Thread Randall Burt
On Dec 11, 2013, at 5:44 PM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com
 wrote:

 Hi,
 
 To keep this thread alive, I would like to share a small screencast I've 
 recorded for the Murano metadata repository, show you what 
 we have in Murano, and start a conversation about metadata repository 
 development in OpenStack. Here is a link to the screencast: 
 http://www.youtube.com/watch?v=Yi4gC4ZhvPg and here is a link to a detailed 
 specification of the PoC metadata repository currently implemented in Murano.
 
 There is an etherpad (here) for the new metadata repository design we started 
 to write after the lessons-learned phase of the PoC. This is the future version 
 of the repository we want to have. This proposal can be used as an initial 
 basis for the metadata repository design conversation.
 
 It would be great if we start a conversation with the Glance team to understand 
 how this work can be organized. As was revealed in this thread, the most 
 probable candidate for the metadata repository service implementation is the 
 Glance program. 
 
 Thanks,
 Georgy

Thanks for the link and info. I think the general consensus is this belongs in 
Glance, however I think details are being deferred until the mid-summit meet up 
in Washington D.C. (I could be totally wrong about this). In any case, I think 
I'll also start converting the existing HeatR blueprints to Glance ones. 
Perhaps it would be a good idea at this point to propose specific blueprints 
and have further ML discussions focused on specific changes?

 On Mon, Dec 9, 2013 at 3:24 AM, Thierry Carrez thie...@openstack.org wrote:
 Vishvananda Ishaya wrote:
  On Dec 6, 2013, at 10:07 AM, Georgy Okrokvertskhov
  gokrokvertsk...@mirantis.com wrote:
 
  I am really inspired by this thread. Frankly saying, Glance for Murano
  was a kind of sacred entity, as it is a service with a long history in
  OpenStack.  We even did not think in the direction of changing Glance.
  Spending a night with these ideas, I am kind of having a dream about
  unified catalog where the full range of different entities are
  presented. Just imagine that we have everything as  first class
  citizens of catalog treated equally: single VM (image), Heat template
  (fixed number of VMs\ autoscaling groups), Murano Application
  (generated Heat templates), Solum assemblies
 
  Projects like Solum will benefit greatly from this catalog, as they can
  use all varieties of VM configurations while talking to one service.
  This catalog will be able not just to list all possible deployable
  entities but can also be a registry for already deployed
  configurations. This is perfectly aligned with the goal for the catalog to
  be a kind of market place which provides billing information too.
 
  OpenStack users will also benefit from this, as they will have a
  unified approach for managing deployments and deployable entities.
 
  I doubt that it could be done by a single team. But if all teams join
  this effort we can do this. From my perspective, this could be a part
  of the Glance program and it is not necessary to add a new program for
  that. As was mentioned earlier in this thread, the idea of a marketplace
  for images in Glance has been around for some time. I think we can
  extend it to the idea of creating a marketplace for any deployable
  entity regardless of the way of deployment. As Glance is a core
  project, which means it always exists in an OpenStack deployment, it makes
  sense to use it as a central catalog for everything.
 
  +1
 
 +1 too.
 
 I don't think that Glance is collapsing under its current complexity
 yet, so extending Glance to a general catalog service that can serve
 more than just reference VM images makes sense IMHO.
 
 --
 Thierry Carrez (ttx)
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Georgy Okrokvertskhov
 Technical Program Manager,
 Cloud and Infrastructure Services,
 Mirantis
 http://www.mirantis.com
 Tel. +1 650 963 9828
 Mob. +1 650 996 3284
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Stack preview

2013-12-10 Thread Randall Burt
On Dec 10, 2013, at 1:27 PM, Zane Bitter zbit...@redhat.com
 wrote:

 On 10/12/13 12:46, Richard Lee wrote:
 Hey all,
 
 We're working on a blueprint
 https://blueprints.launchpad.net/heat/+spec/preview-stack that adds
 the ability to preview what a given template+parameters would create in
 terms of resources.  We think this would provide significant value for
 blueprint authors and for other heat users that want to see what
 someone's template would create before actually launching resources (and
 possibly having to pay for them).
 
 +1 for this use case.
 
 BTW AWS supports something similar, which we never bothered to implement in 
 the compatibility API. You might want to do some research on that as a 
 starting point:
 
 http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_EstimateTemplateCost.html
 
 However the fact that we have pluggable resource types would make it very 
 difficult for us to do cost calculations inside Heat (and, in fact, 
 CloudFormation doesn't do that either, it just spits out a URL for their 
 separate calculator) - e.g. it's very hard to know which resources will 
 create, say, a Nova server unless they are all annotated in some way.
 
 Are you thinking the API will simply return a list of resource types and 
 counts? e.g.:
 
 {
   OS::Nova::Server: 2,
   OS::Cinder::Volume: 1,
   OS::Neutron::FloatingIP: 1
 }
 
 If so, +1 for that implementation too. Don't forget that you will have to 
 recurse through provider templates, which may not contain what they say on 
 the tin.

That sounds more than reasonable to me. I don't think we could begin to do any 
sort of meaningful cost calculation without having to mostly punt to the 
service provider anyway.

 
 cheers,
 Zane.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Stack convergence first steps

2013-12-10 Thread Randall Burt
On Dec 10, 2013, at 1:03 PM, Anderson Mesquita 
anderson...@thoughtworks.com
 wrote:

To try and keep this conversation moving forward, is it safe to say that we at 
least need to change the current status attribute to something like 
action_status? And the same with status_reason being changed to 
action_status_reason? Does anybody see a reason why we shouldn't go this way, 
since it's really what status currently refers to?

IMO, this sort of change should be proposed for the v2 api, since there are 
already expectations in the v1 api as to what these things mean. For v1, I 
wouldn't be opposed to adding a synthetic resource_state attribute that 
reflects the actual status of the underlying resource (ACTIVE, RESIZE, etc).
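
To make the distinction concrete, a resource representation carrying both kinds 
of status might look like this (field names are a sketch, not a settled API):

    resource:
      resource_name: my_server
      action: UPDATE
      action_status: COMPLETE          # what status means today
      action_status_reason: state changed
      resource_state: ACTIVE           # synthetic state of the real resource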

2013/12/8 Mitsuru Kanabuchi 
kanabuchi.mits...@po.ntts.co.jp

On Thu, 5 Dec 2013 22:13:18 -0600
Christopher Armstrong 
chris.armstr...@rackspace.com wrote:

 On Thu, Dec 5, 2013 at 7:25 PM, Randall Burt 
 randall.b...@rackspace.com wrote:

   On Dec 5, 2013, at 6:25 PM, Christopher Armstrong 
  chris.armstr...@rackspace.com
   wrote:
 
On Thu, Dec 5, 2013 at 3:50 PM, Anderson Mesquita 
  anderson...@thoughtworks.com wrote:
 
  Hey stackers,
 
  We've been working towards making stack convergence (
  https://blueprints.launchpad.net/heat/+spec/stack-convergence) one step
  closer to being ready at a time.  After the first patch was submitted we
  got positive feedback on it as well as some good suggestions as to how to
  move it forward.
 
  The first step (https://blueprints.launchpad.net/heat/+spec/stack-check)
  is to get all the statuses back from the real world resources and update
  our stacks accordingly so that we'll be able to move on to the next step:
  converge it to the desired state, fixing any errors that may have happened.
 
  We just submitted another WiP for review, and as we were doing it, a few
  questions were raised and we'd like to get everybody's input on them. Our
  main concern is around the use and purpose of the `status` of a
  stack/resource.  `status` currently appears to represent the status of the
  last action taken, and it seems that we may need to repurpose it or
  possibly create something else to represent a stack's health (i.e.
  everything is up and running as expected, something smells fishy, something
  broke, stack's is doomed).  We described this thoroughly here:
  https://etherpad.openstack.org/p/heat-convergence
 
  Any thoughts?
 
  Cheers,
 
 
   I think a lot of OpenStack projects use status fields as status of
  the most recent operation, and I think it's totally wrong. status should
  be a known state of the _object_, not an action, and if we need statuses
  for operations, then those operations should be addressable REST objects.
  Of course there are cases where object status should be updated to reflect
  an operating status if it's a completely exclusive operation (BUILDING and
  DELETING make sense, for example).
 
   Actually, I think most projects are the opposite where status means
  what's the state of the resource (Nova, Trove, Cinder, etc), whereas Heat
  uses status as the state of the last operation. Probably wouldn't be too
  terrible to have a new state for stacks and their resources then perhaps
  deprecate and use status in the accepted way in the v2 API?

 Well, my point is that it's done inconsistently. Yes, it's mostly used as
 an object status, but nova for example uses it as an operation status for
 things like resize.

Nova's status during a resize is RESIZE or VERIFY_RESIZE.
That status means the instance is ACTIVE and a resize is in progress.
I think Heat can assume the resource status is ACTIVE in this case.

Thus, the statuses that embed an operation status have to be mapped to a
resource (object) status. However, in my impression, there aren't many
statuses that would need such a mapping.

In my opinion, a status mapping table is reasonable for the time being.

Regards

--
Mitsuru Kanabuchi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][horizon]Heat UI related requirements roadmap

2013-12-06 Thread Randall Burt
I hope I'm not re-opening worm cans here, and that's not my intent, but I just 
wanted to get a little clarification in-line below:

On Dec 6, 2013, at 3:24 PM, Tim Schnell tim.schn...@rackspace.com
 wrote:

 To resolve this thread, I have created 5 blueprints based on this mailing
 list discussion. I have attempted to distill the proposed specification
 down to what seemed generally agreed upon but if you feel strongly that I
 have incorrectly captured something let's talk about it!
 
 Here are the blueprints:
 
 1) Stack Keywords
 blueprint: https://blueprints.launchpad.net/heat/+spec/stack-keywords
 spec: https://wiki.openstack.org/wiki/Heat/UI#Stack_Keywords

As proposed, these look like template keywords and not stack keywords.

I may be mis-remembering the conversation around this, but it would seem to me 
this mixes tagging templates and tagging stacks. In my mind, these are separate 
things. For the stack part, it seems like I could just pass something like 
--keyword blah multiple times to python-heatclient and not have to edit the 
template I'm passing. This lets me organize my stacks the way I want rather 
than relying on the template author (who may not be me) to organize things for 
me. Alternatively, I'd at least like the ability to accept, replace, and/or 
augment the keywords the template author proposes.
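
To illustrate the template-level half of this, the spec's proposal might look 
something like the sketch below (the keywords section was still just a proposal 
at this point, and the --keyword CLI flag mentioned above is hypothetical):

    # proposed template-level keywords
    keywords: [wordpress, mysql, production]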

 
 2) Parameter Grouping and Ordering
 blueprint: 
 https://blueprints.launchpad.net/heat/+spec/parameter-grouping-ordering
 spec: 
 https://wiki.openstack.org/wiki/Heat/UI#Parameter_Grouping_and_Ordering
 
 3) Parameter Help Text
 blueprint: 
 https://blueprints.launchpad.net/heat/+spec/add-help-text-to-template
 spec: https://wiki.openstack.org/wiki/Heat/UI#Help_Text
 
 4) Parameter Label
 blueprint: 
 https://blueprints.launchpad.net/heat/+spec/add-parameter-label-to-template
 spec: https://wiki.openstack.org/wiki/Heat/UI#Parameter_Label
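
Taken together, blueprints 2-4 might render in a HOT template along these lines 
(a sketch; the exact section and key names were still under discussion):

    parameter_groups:
      - label: Database Settings
        description: Options that control the backing database
        parameters: [db_password]

    parameters:
      db_password:
        type: string
        label: Database Password
        description: Password the application uses to reach its database
        hidden: true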
 
 
 This last blueprint did not get as much discussion so I have added it with
 the discussion flag set. I think this will get more important in the
 future but I don't need to implement right now. I'd love to hear more
 thoughts about it.
 
 5) Get Parameters API Endpoint
 blueprint: 
 https://blueprints.launchpad.net/heat/+spec/get-parameters-from-api
 spec: 
 https://wiki.openstack.org/wiki/Heat/UI#New_API_Endpoint_for_Returning_Template_Parameters

History around validate_template aside, I wonder if this doesn't re-open the 
discussion around having an endpoint that will translate an entire template 
into the native format (HOT). I understand that the idea is that we normalize 
parameter values to relieve user interfaces from having to understand several 
formats supported by Heat, but it just seems to me that there's a more general 
use case here.
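
For what it's worth, a normalized response from such an endpoint might look 
something like this (a sketch only; the response shape was not settled):

    parameters:
      - name: db_password
        type: string
        label: Database Password
        description: Password the application uses to reach its database
        hidden: true
        default: null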

 
 Thanks,
 Tim

I know it's probably nit-picky, but I would prefer these specs be individual 
wiki pages instead of lumped all together. At any rate, thanks for organizing 
all this!

 
 On 11/28/13 4:55 AM, Zane Bitter zbit...@redhat.com wrote:
 
 On 27/11/13 23:37, Fox, Kevin M wrote:
 Hmm... Yeah. when you tell heat client the url to a template file, you
 could set a flag telling the heat client it is in a git repo. It could
 then automatically look for repo information and set a stack metadata
 item pointing back to it.
 
 Or just store the URL.
 
 If you didn't care about taking a performance hit, heat client could
 always try and check to see if it was a git repo url. That may add
 several extra http requests though...
 
 Thanks,
 Kevin
 
 From: Clint Byrum [cl...@fewbar.com]
 Sent: Wednesday, November 27, 2013 1:04 PM
 To: openstack-dev
 Subject: Re: [openstack-dev] [heat][horizon]Heat UI related
 requirements   roadmap
 
 Excerpts from Fox, Kevin M's message of 2013-11-27 08:58:16 -0800:
 This use case is sort of a provenance case. Where did the stack come
 from so I can find out more about it.
 
 
 This exhibits similar problems to our Copyright header problems. Relying
 on authors to maintain their authorship information in two places is
 cumbersome and thus the one that is not automated will likely fall out
 of sync fairly quickly.
 
 You could put a git commit field in the template itself but then it
 would be hard to keep updated.
 
 
 Or you could have Heat able to pull from any remote source rather than
 just allowing submission of the template directly. It would just be
 another column in the stack record. This would allow said support person
 to see where it came from by viewing the stack, which solves the use
 case.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Randall Burt
I too have warmed to this idea but wonder about the actual implementation 
around it. While I like where Edmund is going with this, I wonder if it 
wouldn't be valuable in the short-to-mid-term (I/J) to just add /templates to 
Glance (/assemblies, /applications, etc) alongside /images.  Initially, we 
could have separate endpoints and data structures for these different asset 
types, refactoring the easy bits along the way and leveraging the existing data 
storage and caching bits, but leaving more disruptive changes alone. That can 
get the functionality going, prove some concepts, and allow all of the 
interested parties to better plan a more general v3 api.
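
Concretely, the short-term shape might be nothing more than new routes beside 
the existing image API, along these lines (paths hypothetical):

    GET    /v2/templates          # list registered templates
    POST   /v2/templates          # register a new template
    GET    /v2/templates/{id}     # fetch a template's metadata/content
    DELETE /v2/templates/{id}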

On Dec 6, 2013, at 4:23 PM, Edmund Troche 
edmund.tro...@us.ibm.com
 wrote:


I agree with what seems to also be the general consensus, that Glance can 
become Heater+Glance (the service that manages images in OS today). Clearly, 
if someone looks at the Glance DB schema, APIs and service type (as returned by 
keystone service-list), all of the terminology is about images, so we would 
need to more formally define what are the characteristics or image, 
template, maybe assembly, components etc and find what is a good 
generalization. When looking at the attributes for image (image table), I can 
see where there are a few that would be generic enough to apply to image, 
template etc, so those could be taken to be the base set of attributes, and 
then based on the type (image, template, etc) we could then have attributes 
that are type-specific (maybe by leveraging what is today image_properties).

As I read through the discussion, the one thing that came to mind is asset 
management. I can see where if someone bothers to create an image, or a 
template, then it is for a good reason, and that perhaps you'd like to maintain 
it as an IT asset. Along those lines, it occurred to me that maybe what we need 
is to make Glance some sort of asset management service that can be leveraged 
by Service Catalogs, Nova, etc. Instead of storing images and templates  we 
store assets of one kind or another, with artifacts (like files, image content, 
etc), and associated metadata. There is some work we could borrow from, 
conceptually at least, from OSLC's Asset Management specification: 
http://open-services.net/wiki/asset-management/OSLC-Asset-Management-2.0-Specification/.
 Looking at this spec, it probably has more than we need, but there's plenty we 
could borrow from it.


Edmund Troche



From: Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org,
Date: 12/06/2013 01:34 PM
Subject: Re: [openstack-dev] [heat] [glance] Heater Proposal





As a Murano team we will be happy to contribute to Glance. Our Murano metadata 
repository is a standalone component (with its own git repository) which is not 
tightly coupled with Murano itself. We can easily add our functionality to 
Glance as a new component\subproject.

Thanks
Georgy


On Fri, Dec 6, 2013 at 11:11 AM, Vishvananda Ishaya 
vishvana...@gmail.com wrote:

On Dec 6, 2013, at 10:38 AM, Clint Byrum 
cl...@fewbar.com wrote:

 Excerpts from Jay Pipes's message of 2013-12-05 21:32:54 -0800:
 On 12/05/2013 04:25 PM, Clint Byrum wrote:
 Excerpts from Andrew Plunk's message of 2013-12-05 12:42:49 -0800:
 Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
 On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com
  wrote:

 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?


 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:

 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users

 My responses:

 1: Irrelevant. Smaller things will fit in it just fine.

 Fitting is one thing, optimizations around particular assumptions about 
 the size of data and the frequency of reads/writes might be an issue, 
 but I admit to ignorance about those details in Glance.


 Optimizations can be improved for various use cases. The design, however,
 has no assumptions that I know about that would invalidate storing blobs
 of yaml/json vs. blobs of kernel/qcow2/raw image.

 I think we are getting out into the weeds a little bit here. It is 
 important to think about these apis in terms of what they actually do, 
 before the decision of combining 

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-06 Thread Randall Burt
On Dec 6, 2013, at 5:04 PM, Clint Byrum cl...@fewbar.com
 wrote:

 Excerpts from Randall Burt's message of 2013-12-06 14:43:05 -0800:
 I too have warmed to this idea but wonder about the actual implementation 
 around it. While I like where Edmund is going with this, I wonder if it 
 wouldn't be valuable in the short-to-mid-term (I/J) to just add /templates 
 to Glance (/assemblies, /applications, etc) along side /images.  Initially, 
 we could have separate endpoints and data structures for these different 
 asset types, refactoring the easy bits along the way and leveraging the 
 existing data storage and caching bits, but leaving more disruptive changes 
 alone. That can get the functionality going, prove some concepts, and allow 
 all of the interested parties to better plan a more general v3 api.
 
 
 +1 on bolting the different views for things on as new v2 pieces instead
 of trying to solve the API genericism immediately.
 
 I would strive to make this a facade, and start immediately on making
 Glance more generic under the hood.  Otherwise these will just end up
 as silos inside Glance instead of silos inside OpenStack.

Totally agreed. Where it makes sense to refactor we should do that rather than 
implementing essentially different services underneath.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Randall Burt
On Dec 5, 2013, at 10:10 AM, Clint Byrum cl...@fewbar.com
 wrote:

 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?
 
 
 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:
 
 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users
 
 My responses:
 
 1: Irrelevant. Smaller things will fit in it just fine.

Fitting is one thing, optimizations around particular assumptions about the 
size of data and the frequency of reads/writes might be an issue, but I admit 
to ignorance about those details in Glance.

 2: The swift API supports versions. We could also have git as a
 backend. This feels like something we can add as an optional feature
 without exploding Glance's scope and I imagine it would actually be a
 welcome feature for image authors as well. Think about Ubuntu maintaining
 official images. If they can keep the ID the same and just add a version
 (allowing users to lock down to a version if updated images cause issue)
 that seems like a really cool feature for images _and_ templates.

Agreed, though one could argue that using image names and looking up IDs, or 
just using IDs as appropriate, sort of handles this use case, but I agree that 
having image versioning seems a reasonable feature for Glance to have as well.

 3: I'm sure glance image users would love to have those too.

And image metadata is already there so we don't have to go through those 
discussions all over again ;).

 4: Irrelevant. Heat will need to download templates just like nova, and
 making images publicly downloadable is also a thing in glance.

Yeah, this was the kicker for me. I'd been thinking of adding the 
tenancy/public/private templates use case to the HeatR spec and realized that 
this was a good argument for Glance since it already has this feature.

 It strikes me that this might be a silo problem instead of an
 actual design problem. Folk should not be worried about jumping into
 Glance and adding features. Unless of course the Glance folk have
 reservations? (adding glance tag to the subject)

Perhaps, and if these use cases make sense for the Glance users in general, I 
wouldn't want to re-invent all those wheels either. I admit there's some appeal 
to being able to pass a template ID to stack-create or as the type of a 
provider resource and have an actual API to call that's already got a known, 
tested client that's already part of the OpenStack ecosystem.

In the end, though, even if some and not all of our use cases make sense for 
the Glance folks, we still have the option of creating the HeatR service and 
having Glance as a possible back-end data store.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Randall Burt
On Dec 5, 2013, at 11:10 AM, Clint Byrum cl...@fewbar.com
 wrote:

 Excerpts from James Slagle's message of 2013-12-05 08:35:12 -0800:
 On Thu, Dec 5, 2013 at 11:10 AM, Clint Byrum cl...@fewbar.com wrote:
 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?
 
 
 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:
 
 I'm actually interested in the use cases laid out by Heater from both
 a template perspective and image perspective.  For the templates, as
 Robert mentioned, Tuskar needs a solution for this requirement, since
 it's deploying using templates.  For the images, we have the concept
 of a golden image in TripleO and are heavily focused on image based
 deployments.  Therefore, it seems to make sense that TripleO also
 needs a way to version/tag known good images.
 
 Given that, I think it makes sense  to do this in a way so that it's
 consumable for things other than just templates.  In fact, you can
 almost s/template/image/g on the Heater wiki page, and it pretty well
 lays out what I'd like to see for images as well.
 
 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users
 
 My responses:
 
 1: Irrelevant. Smaller things will fit in it just fine.
 
 2: The swift API supports versions. We could also have git as a
 backend.
 
 I would definitely like to see a git backend for versioning.  No
 reason to reimplement a different solution for what already works
 well.  I'm not sure we'd want to put a whole image into git though.
 Perhaps just its manifest (installed components, software versions,
 etc) in json format would go into git, and that would be associated
 back to the binary image via uuid.  That would even make it easy to
 diff changes between versions, etc.
 
 
 Right, git for a big 'ol image makes little sense.
 
 I'm suggesting that one might want to have two glances, one for images
 which just uses swift versions and would just expose a list of versions,
 and one for templates which would use git and thus expose more features
 like a git remote for the repo. I'm not sure if glance has embraced the
 extension paradigm yet, but this would fall nicely into it.

Alternatively, Glance could have configurable backends for each image type, 
allowing for optimization without the (often messy) extension mechanism? This 
is assuming it doesn't do this already - I really need to start digging here. 
In the spirit of general OpenStack architectural paradigms, as long as the 
service exposes a consistent interface for templates that includes versioning 
support, the back-end store and (possibly) the versioning engine should 
certainly be configurable. Swift probably makes a decent first/default 
implementation.
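
To make that concrete, per-type store selection might look roughly like this 
(hypothetical registry and class names; the one real detail is Swift's 
X-Versions-Location header, which turns on object versioning for a container):

    # Hypothetical per-artifact-type store registry; none of these names
    # exist in Glance. Swift's X-Versions-Location header is real: set it
    # on a container and Swift keeps prior versions of overwritten objects.
    class SwiftStore(object):
        def enable_versioning(self, swift_conn, container):
            # swift_conn is assumed to be a swiftclient-style Connection
            swift_conn.post_container(
                container, {"X-Versions-Location": container + "-versions"})

    class GitStore(object):
        """Placeholder for a git-backed template store."""

    STORES = {"image": SwiftStore,    # e.g. from a [store_map] config section
              "template": GitStore}

    def store_for(artifact_type):
        return STORES[artifact_type]()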

 This feels like something we can add as an optional feature
 without exploding Glance's scope and I imagine it would actually be a
 welcome feature for image authors as well. Think about Ubuntu maintaining
 official images. If they can keep the ID the same and just add a version
 (allowing users to lock down to a version if updated images cause issues)
 that seems like a really cool feature for images _and_ templates.
 
 3: I'm sure glance image users would love to have those too.
 
 4: Irrelevant. Heat will need to download templates just like nova, and
 making images publicly downloadable is also a thing in glance.
 
 It strikes me that this might be a silo problem instead of an
 actual design problem. Folk should not be worried about jumping into
 Glance and adding features. Unless of course the Glance folk have
 reservations? (adding glance tag to the subject)
 
 I'm +1 for adding these types of features to glance, or at least
 something common, instead of making it specific to Heat templates.
 
 
 Right, it may be that glance is too limited, but from what I've seen,
 it is not and it already has the base concepts that HeaTeR wants to
 have available.
 


Re: [openstack-dev] [heat] Heater Proposal

2013-12-05 Thread Randall Burt
On Dec 5, 2013, at 4:08 PM, Georgy Okrokvertskhov gokrokvertsk...@mirantis.com wrote:

Hi,

I am really glad to see a line of thinking close to what we at Murano see as 
the right direction for OpenStack development. This is a good initiative which 
will potentially be useful for other projects. We have a very similar idea 
about a repository in the Murano project, and we have even implemented the 
first version of it. We are very open to collaboration and exchanging ideas.


In terms of overlap with Murano, I can see overlap in the area of the Murano 
Metadata Repository. We have already done some work in this area, and you can 
find the detailed description here 
https://wiki.openstack.org/wiki/Murano/SimplifiedMetadataRepository. The 
implementation of the first version is already done, and we plan to include it 
in the Murano 0.4 release, which will go out in a week.


For the future roadmap with more advanced functionality we have created 
etherpad:  https://etherpad.openstack.org/p/MuranoMetadata


My concerns around Heater lie in two areas:

- Fit for OpenStack Orchestration program

Do you mean to imply that a repository of orchestration templates is a bad fit 
for the orchestration program?

- Too narrow a focus as currently formulated, making it hard for other 
projects like Murano to take advantage of this service as a general-purpose 
metadata repository

That's what the discussion around using Glance is about, though. The proposal 
started out as a separate service, but arguments are being made that the use 
cases fit into Glance. The use cases don't change, as they're focused on 
templates and not general object cataloging, but that's something to sort out 
if/when we land on an implementation.

I am not sure how a metadata repository relates to the orchestration program, 
as it does not orchestrate anything. I would rather consider creating a 
separate Service Catalog/Metadata Repository program, or consider storage 
programs like Glance or Swift, as Heater has a similar feature set. If you 
replace “template” with “object”, you are actually proposing a new Swift 
implementation, replacing Swift’s existing versioning, ACLs, and object 
metadata.

Doesn't that same argument hold for the Murano Metadata Repository as well? 
And, as initially proposed, it's not a generic metadata repository but a 
template cataloging system. The difference may be academic, but I think it's 
important. That being said, maybe there's a case for something even more 
generic (store some meta information about some consumable artifact and a 
pointer to where to get it), but IMO, the arguments for Glance then become 
more compelling (not that I've bought in to that completely yet).

Murano as an Application Catalog could also be a fit, but I don’t insist :)

It sounds to me like conceptually it would suffer from the same scoping issues 
we're already discussing though.

At the current moment, Heat is not opinionated about template placement, and 
this provides a lot of flexibility for other projects which use Heat under the 
hood. With your proposal, you are creating a new metadata repository solution 
for the specific use case of template storage, making Heat much more 
prescriptive.

I'm not sure where this impression comes from. The Heat orchestration 
API/engine would in no way be dependent on the repository. Heat would still 
accept and orchestrate any template you passed it. At best, Heat would be 
extended to be aware of catalog URLs and template IDs, but in no way was it 
ever meant to imply that Heat would ever be modified to *only* work with 
templates from a catalog, or to require any of the catalog metadata to 
function in its core role as an orchestration engine.

Combined with the TC policy which requires projects to use existing code, this 
could be a big problem, because other projects might want to keep not only the 
Heat template but other components and metadata as well. Murano is a good 
example of that: it has multiple objects associated with an Application, and 
the Heat template is only one of them. That would mean that other projects 
would either need to duplicate the functionality of the catalog or 
significantly restrict their own functionality.

Or Murano could also extend Glance functionality to include these sorts of 
composite artifacts similar to how Vish described earlier in the thread.

The scariest thing for me is that you propose to add metadata information to 
the HOT template. In Murano we keep UI information in a separate file, and 
this gives us flexibility in how to render the UI for the same Heat template 
in different Applications. This is a question of separation of concerns: Heat, 
as deployment orchestration, should not interfere with the UI.

While I agree that things related to specific UI strategies don't really belong 
in the template, I think you may have misconstrued the example cited. As I 
understood it, it was a "if this, then wouldn't we have to do this unsightly 
thing?" Could be that I

Re: [openstack-dev] [heat] [glance] Heater Proposal

2013-12-05 Thread Randall Burt
On Dec 5, 2013, at 4:45 PM, Steve Baker sba...@redhat.com wrote:

On 12/06/2013 10:46 AM, Mark Washenberger wrote:



On Thu, Dec 5, 2013 at 1:05 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:

On Dec 5, 2013, at 12:42 PM, Andrew Plunk andrew.pl...@rackspace.com wrote:

 Excerpts from Randall Burt's message of 2013-12-05 09:05:44 -0800:
 On Dec 5, 2013, at 10:10 AM, Clint Byrum clint at fewbar.com wrote:

 Excerpts from Monty Taylor's message of 2013-12-04 17:54:45 -0800:
 Why not just use glance?


 I've asked that question a few times, and I think I can collate the
 responses I've received below. I think enhancing glance to do these
 things is on the table:

 1. Glance is for big blobs of data not tiny templates.
 2. Versioning of a single resource is desired.
 3. Tagging/classifying/listing/sorting
 4. Glance is designed to expose the uploaded blobs to nova, not users

 My responses:

 1: Irrelevant. Smaller things will fit in it just fine.

 Fitting is one thing, optimizations around particular assumptions about the 
 size of data and the frequency of reads/writes might be an issue, but I 
 admit to ignorance about those details in Glance.


 Optimizations can be improved for various use cases. The design, however,
 has no assumptions that I know about that would invalidate storing blobs
 of yaml/json vs. blobs of kernel/qcow2/raw image.

 I think we are getting out into the weeds a little bit here. It is important
 to think about these APIs in terms of what they actually do before the
 decision of combining them or not can be made.

 I think of HeatR as a template storage service, it provides extra data and 
 operations on templates. HeatR should not care about how those templates are 
 stored.
 Glance is an image storage service, it provides extra data and operations on 
 images (not blobs), and it happens to use swift as a backend.

This is not completely correct. Glance already supports something akin to 
templates. You can create an image with metadata properties that specify a 
complex block device mapping, which would allow for multiple volumes and 
images to be connected to the VM at boot time. This is functionally a template 
for a single VM.

Glance is pretty useless if it is just an image storage service; we already 
have other places that can store bits (swift, cinder). It is much more 
valuable as a searchable repository of bootable templates. I don't see any 
reason why this idea couldn't be extended to include more complex templates 
that could include more than one VM.

FWIW I agree with all of this. I think Glance's real role in OpenStack is as a 
helper and optionally as a gatekeeper for the category of stuff Nova can 
boot. So any parameter that affects what Nova is going to boot should in my 
view be something Glance can be aware of. This list of parameters *could* grow 
to include multiple device images, attached volumes, and other things that 
currently live in the realm of flavors such as extra hardware requirements and 
networking aspects.

Just so things don't go too crazy, I'll add that since Nova is generally 
focused on provisioning individual VMs, anything above the level of an 
individual VM should be out of scope for Glance.

I think Glance should alter its approach to be less generally agnostic about 
the contents of the objects it hosts. Right now, we are just starting to do 
this with images, as we slowly advance on offering server side format 
conversion. We could find similar use cases for single vm templates.


The average heat template would provision more than one VM, plus any number of 
other cloud resources.

An image is required to provision a single nova server;
a template is required to provision a single heat stack.

Hopefully the above single vm policy could be reworded to be agnostic to the 
service which consumes the object that glance is storing.

To add to this, is it that Glance wants to be *more* integrated and geared 
towards VM or container images, or that Glance wishes to have more intimate 
knowledge of the things it's cataloging *regardless of what those things 
actually might be*? The reason I ask is that Glance supporting only single-VM 
templates, when Heat orchestrates the entire (or almost entire) spectrum of 
core and integrated projects, means that its suitability as a candidate for a 
template repository plummets quite a bit.



Re: [openstack-dev] [heat] Stack convergence first steps

2013-12-05 Thread Randall Burt
On Dec 5, 2013, at 6:25 PM, Christopher Armstrong chris.armstr...@rackspace.com wrote:

On Thu, Dec 5, 2013 at 3:50 PM, Anderson Mesquita anderson...@thoughtworks.com wrote:
Hey stackers,

We've been working towards making stack convergence 
(https://blueprints.launchpad.net/heat/+spec/stack-convergence) ready, one 
step at a time. After the first patch was submitted we got positive feedback 
on it, as well as some good suggestions as to how to move it forward.

The first step (https://blueprints.launchpad.net/heat/+spec/stack-check) is to 
get all the statuses back from the real-world resources and update our stacks 
accordingly, so that we'll be able to move on to the next step: converging to 
the desired state, fixing any errors that may have happened.

We just submitted another WIP for review, and as we were doing it, a few 
questions were raised that we'd like to get everybody's input on. Our main 
concern is around the use and purpose of the `status` of a stack/resource.  
`status` currently appears to represent the status of the last action taken, 
and it seems that we may need to repurpose it, or possibly create something 
else, to represent a stack's health (i.e. everything is up and running as 
expected, something smells fishy, something broke, the stack is doomed).  We 
described this thoroughly here: https://etherpad.openstack.org/p/heat-convergence

Any thoughts?

Cheers,


I think a lot of OpenStack projects use status fields as status of the most 
recent operation, and I think it's totally wrong. status should be a known 
state of the _object_, not an action, and if we need statuses for operations, 
then those operations should be addressable REST objects. Of course there are 
cases where object status should be updated to reflect an operating status if 
it's a completely exclusive operation (BUILDING and DELETING make sense, for 
example).

Actually, I think most projects are the opposite, where “status” means what's 
the state of the resource (Nova, Trove, Cinder, etc), whereas Heat uses 
“status” as the state of the last operation. It probably wouldn't be too 
terrible to have a new state field for stacks and their resources, then 
perhaps deprecate the old behavior and use “status” in the accepted way in the 
v2 API?
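
To illustrate the distinction (the field names are made up for the example, 
not from any actual API):

    # Illustrative only; the field names are invented. The point is the
    # split: "status" describes the object itself, while the most recent
    # operation is recorded separately.
    stack_v1_style = {
        "stack_status": "UPDATE_FAILED",   # action and result fused together
    }

    stack_v2_style = {
        "status": "ERROR",                 # state of the stack itself
        "last_operation": {"action": "UPDATE", "result": "FAILED"},
    }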

--
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Solum] CLI minimal implementation

2013-12-03 Thread Randall Burt
I disagree. If a param is required and has no meaningful default, it should be 
positional IMO. I think this actually reduces confusion, as you can tell from 
the signature alone that this is a value the user must supply for anything 
meaningful to happen.
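
A quick argparse illustration of the point (a generic sketch, not actual Solum 
client code):

    # Generic argparse sketch (not the real Solum client): a required
    # value with no sensible default reads clearest as a positional.
    import argparse

    parser = argparse.ArgumentParser(prog="solum")
    parser.add_argument("plan_name",        # required; obvious from usage
                        help="name of the plan to create an app from")
    parser.add_argument("--description",    # optional, so it's a flag
                        default="", help="optional app description")
    args = parser.parse_args(["my-plan", "--description", "demo"])
    print(args.plan_name, args.description)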

On Dec 3, 2013, at 10:13 AM, Paul Montgomery paul.montgom...@rackspace.com wrote:

 I agree.  With many optional parameters possible, positional parameters
 would seem to complicate things a bit (even for end users).
 
 
 On 12/3/13 8:14 AM, Arati Mahimane arati.mahim...@rackspace.com wrote:
 
 
 
 On 12/3/13 7:51 AM, Roshan Agrawal roshan.agra...@rackspace.com wrote:
 
 
 
 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: Monday, December 02, 2013 8:17 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Solum] CLI minimal implementation
 
 On 12/02/2013 07:03 PM, Roshan Agrawal wrote:
 I have created a child blueprint to define scope for the minimal
 implementation of the CLI to consider for milestone 1.
 
 https://blueprints.launchpad.net/solum/+spec/cli-minimal-implementation
 
 Spec for the minimal CLI @
 
 https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/CLI-minimal-implementation
 Etherpad for discussion notes:
 https://etherpad.openstack.org/p/MinimalCLI
 
 Would look for feedback on the ML, etherpad and discuss more in the
 weekly IRC meeting tomorrow.
 
 What is this R1.N syntax?  How does it relate to development
 milestones?
 Does R1 mean a requirement for milestone-1?
 
 These do not relate to development milestones. R1 is a unique identifier
 for the given requirement. R1.x is a unique requirement ID for something
 that is a sub-item of the top-level requirement R1.
 Is there a more OpenStack-standard way of generating requirement IDs?
 
 For consistency, I would use commands like:
 
   solum app-create
   solum app-delete
   solum assembly-create
   solum assembly-delete
 
 instead of adding a space in between:
 
   solum app create
 
 to be more consistent with other clients, like:
 
   nova flavor-create
   nova flavor-delete
   glance image-create
   glance image-delete
 
 The current proposal is an attempt to be consistent with the direction
 for the single unified OpenStack CLI. Adrian has addressed it in his other reply.
 
 
 I would make required arguments positional arguments.  So, instead of:
 
   solum app-create --plan=planname
 
 do:
 
   solum app-create planname
 
 I will make this change unless I hear objections
 
 In my opinion, since most of the parameters (listed here
 https://wiki.openstack.org/wiki/Solum/FeatureBlueprints/ApplicationDeploymentAndManagement#Solum-R1.12_app_create:_CLI)
 are optional, it would be easier to specify the parameters as
 param_name=value instead of having positional parameters.
 
 
 
 
 Lastly, everywhere you have a name, I would use a UUID.  Names shouldn't
 have to be globally unique (because of multi-tenancy).  UUIDs should always
 work, but you can support a name in the client code as a friendly shortcut,
 but it should fail if a unique result can not be resolved from the name.
 
 
 Names do not have to be globally unique; just unique within the tenant
 namespace. The name+tenant combination should map to a unique UUID.
 The CLI is a client tool where, as a user, working with names is easier.
 We will support both, but start with names (the friendly shortcut) and
 map them to UUIDs behind the scenes.
 
 
 --
 Russell Bryant
 


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Randall Burt

On Nov 14, 2013, at 10:19 AM, Christopher Armstrong chris.armstr...@rackspace.com wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling. It's 
written in API-Blueprint format (which is a simple subset of Markdown) and 
provides schemas for inputs and outputs using JSON-Schema. The source document 
is currently at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the token?

This may be moot considering the latest from the keystone devs regarding token 
scoping to domains/projects. Basically, a token is scoped to a single 
domain/project from what I understood, so domain/project is implicit. I'm still 
of the mind that the tenant doesn't belong so early in the URI, since we can 
already surmise the actual tenant from the authentication context, but that's 
something for OpenStack at large to agree on.

- how webhooks are done (though this shouldn't affect the API too much; they're 
basically just opaque)

Please read and comment :)


--
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Randall Burt
Good stuff! Some questions/comments:

If web hooks are associated with policies and policies are independent 
entities, how does a web hook specify the scaling group to act on? Does calling 
the web hook activate the policy on every associated scaling group?

Regarding webhook execution and cooldown, I think the response should be 
something like a 307 if the hook is on cooldown, with an appropriate 
Retry-After header.
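
Something along these lines (a hand-wavy sketch of the handler logic, not 
actual Heat code):

    # Hand-wavy handler sketch (not actual Heat code): answer 307 with a
    # Retry-After header while the policy's cooldown is still in effect.
    import time

    def execute_webhook(policy, now=None):
        now = time.time() if now is None else now
        remaining = policy["cooldown_until"] - now
        if remaining > 0:
            return 307, {"Retry-After": str(int(remaining) + 1)}
        # ... trigger the actual scaling action here ...
        return 202, {}

    policy = {"cooldown_until": time.time() + 42}
    print(execute_webhook(policy))  # e.g. (307, {'Retry-After': '42'})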

On Nov 14, 2013, at 10:57 AM, Randall Burt randall.b...@rackspace.com wrote:


On Nov 14, 2013, at 10:19 AM, Christopher Armstrong chris.armstr...@rackspace.com wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling. It's 
written in API-Blueprint format (which is a simple subset of Markdown) and 
provides schemas for inputs and outputs using JSON-Schema. The source document 
is currently at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the token?

This may be moot considering the latest from the keystone devs regarding token 
scoping to domains/projects. Basically, a token is scoped to a single 
domain/project from what I understood, so domain/project is implicit. I'm still 
of the mind that the tenant doesn't belong so early in the URI, since we can 
already surmise the actual tenant from the authentication context, but that's 
something for Openstack at large to agree on.

- how webhooks are done (though this shouldn't affect the API too much; they're 
basically just opaque)

Please read and comment :)


--
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Randall Burt

On Nov 14, 2013, at 11:30 AM, Christopher Armstrong chris.armstr...@rackspace.com wrote:

On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt randall.b...@rackspace.com wrote:
Good stuff! Some questions/comments:

If web hooks are associated with policies and policies are independent 
entities, how does a web hook specify the scaling group to act on? Does calling 
the web hook activate the policy on every associated scaling group?


Not sure what you mean by policies being independent entities. You may have 
missed that the policy resource lives hierarchically under the group resource. 
Policies are strictly associated with one scaling group, so when a policy is 
executed (via a webhook), it acts on the scaling group that the policy is 
associated with.

Whoops. Yeah, I missed that.



Regarding web hook execution and cool down, I think the response should be 
something like 307 if the hook is on cool down with an appropriate retry-after 
header.

Indicating whether a webhook was found or whether it actually executed anything 
may be an information leak, since webhook URLs require no additional 
authentication other than knowledge of the URL itself. Responding with only 202 
means that people won't be able to guess at random URLs and know when they've 
found one.

Perhaps, but I also miss important information as a legitimate caller as to 
whether or not my scaling action actually happened, or whether I've been a 
little too aggressive with my curl commands. The fact that I get anything 
other than a 404 (which the spec returns if it's not a legit hook) means I've 
found *something* and can simply call it endlessly in a loop, causing havoc. 
Perhaps the webhooks *should* be authenticated? This seems like a pretty large 
hole to me, especially if I can max out someone's resources by guessing the 
right URL.
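
For example, signed webhook requests would close most of this hole (a generic 
HMAC sketch using only the stdlib; not what Heat actually implements):

    # Generic HMAC sketch (stdlib only; not what Heat implements): the
    # caller signs each request with a shared secret, so knowing the URL
    # alone is no longer enough to trigger a scaling action.
    import hashlib
    import hmac

    def sign(secret, method, path, timestamp):
        msg = ("%s %s %s" % (method, path, timestamp)).encode()
        return hmac.new(secret, msg, hashlib.sha256).hexdigest()

    def verify(secret, method, path, timestamp, signature):
        expected = sign(secret, method, path, timestamp)
        return hmac.compare_digest(expected, signature)

    secret = b"shared-webhook-secret"
    sig = sign(secret, "POST", "/webhooks/abc123", "1384444800")
    print(verify(secret, "POST", "/webhooks/abc123", "1384444800", sig))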


On Nov 14, 2013, at 10:57 AM, Randall Burt randall.b...@rackspace.com wrote:


On Nov 14, 2013, at 10:19 AM, Christopher Armstrong chris.armstr...@rackspace.com wrote:

http://docs.heatautoscale.apiary.io/

I've thrown together a rough sketch of the proposed API for autoscaling. It's 
written in API-Blueprint format (which is a simple subset of Markdown) and 
provides schemas for inputs and outputs using JSON-Schema. The source document 
is currently at https://github.com/radix/heat/raw/as-api-spike/autoscaling.apibp


Things we still need to figure out:

- how to scope projects/domains. put them in the URL? get them from the token?

This may be moot considering the latest from the keystone devs regarding token 
scoping to domains/projects. Basically, a token is scoped to a single 
domain/project from what I understood, so domain/project is implicit. I'm still 
of the mind that the tenant doesn't belong so early in the URI, since we can 
already surmise the actual tenant from the authentication context, but that's 
something for Openstack at large to agree on.

- how webhooks are done (though this shouldn't affect the API too much; they're 
basically just opaque)

Please read and comment :)


--
IRC: radix
Christopher Armstrong
Rackspace




--
IRC: radix
Christopher Armstrong
Rackspace


Re: [openstack-dev] [Heat] rough draft of Heat autoscaling API

2013-11-14 Thread Randall Burt

On Nov 14, 2013, at 12:44 PM, Zane Bitter zbit...@redhat.com wrote:

 On 14/11/13 18:51, Randall Burt wrote:
 
 On Nov 14, 2013, at 11:30 AM, Christopher Armstrong chris.armstr...@rackspace.com wrote:
 
 On Thu, Nov 14, 2013 at 11:16 AM, Randall Burt randall.b...@rackspace.com wrote:
Regarding web hook execution and cool down, I think the response
should be something like 307 if the hook is on cool down with an
appropriate retry-after header.
 
 I strongly disagree with this even ignoring the security issue mentioned 
 below. Being in the cooldown period is NOT an error, and the caller should 
 absolutely NOT try again later - the request has been received and correctly 
 acted upon (by doing nothing).

But how do I know nothing was done? I may have very good reasons to re-scale 
outside of ceilometer or other mechanisms and absolutely SHOULD try again 
later.  As it stands, I have no way of knowing that my scaling action didn't 
happen without examining my physical resources. 307 is a legitimate response in 
these cases, but I'm certainly open to other suggestions.
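
From the caller's side a 307 is trivially actionable, which is most of its 
appeal (a sketch assuming the Retry-After semantics proposed above; 
fire_webhook is a placeholder for the actual HTTP POST):

    # Caller-side sketch, assuming the proposed 307 + Retry-After
    # behavior. fire_webhook() stands in for the actual HTTP POST and
    # returns (status_code, headers).
    import time

    def fire_with_retry(fire_webhook, attempts=3):
        status, headers = fire_webhook()
        for _ in range(attempts - 1):
            if status != 307:
                break                # 202: accepted and acted upon
            time.sleep(int(headers.get("Retry-After", "1")))
            status, headers = fire_webhook()
        return status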

 
 Indicating whether a webhook was found or whether it actually executed
 anything may be an information leak, since webhook URLs require no
 additional authentication other than knowledge of the URL itself.
 Responding with only 202 means that people won't be able to guess at
 random URLs and know when they've found one.
 
 Perhaps, but I also miss important information as a legitimate caller as
 to whether or not my scaling action actually happened or I've been a
 little too aggressive with my curl commands. The fact that I get
 anything other than 404 (which the spec returns if its not a legit hook)
 means I've found *something* and can simply call it endlessly in a loop
 causing havoc. Perhaps the web hooks *should* be authenticated? This
 seems like a pretty large hole to me, especially if I can max someone's
 resources by guessing the right url.
 
 Web hooks MUST be authenticated.
 
 cheers,
 Zane.
 


Re: [openstack-dev] [Heat] Do we need to clean up resource_id after deletion?

2013-11-02 Thread Randall Burt
My thoughts exactly. I meant to dig into the soft-delete code to see if those 
changes handled resource_id differently but I got to traveling and forgot. IMO, 
if it universally needs doing, then it should be done in resource.Resource and 
be cognizant of deletion policy.
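
Roughly what I have in mind (a pseudo-sketch only; the method layout is a 
guess at the shape, not Heat's real internals):

    # Pseudo-sketch only; the layout is a guess, not Heat's actual
    # internals. The generic delete path clears resource_id exactly once,
    # and only when the deletion policy says the physical resource is gone.
    class Resource(object):
        deletion_policy = "Delete"        # or "Retain" / "Snapshot"

        def delete(self):
            if self.deletion_policy == "Delete":
                self.handle_delete()        # plugin-specific teardown
                self.resource_id_set(None)  # done here, not in each plugin
            # Retain/Snapshot keep resource_id so the physical resource
            # (or its snapshot) can still be located for debugging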

From: Clint Byrum [cl...@fewbar.com]
Sent: Friday, November 01, 2013 11:30 PM
To: openstack-dev
Subject: Re: [openstack-dev] [Heat] Do we need to clean up resource_id after deletion?

Excerpts from Christopher Armstrong's message of 2013-11-01 11:34:56 -0700:
 Vijendar and I are trying to figure out if we need to set the resource_id
 of a resource to None when it's being deleted.

 This is done in a few resources, but not everywhere. To me it seems either

 a) redundant, since the resource is going to be deleted anyway (thus
 deleting the row in the DB that has the resource_id column)
 b) actively harmful to useful debuggability, since if the resource is
 soft-deleted, you'll not be able to find out what physical resource it
 represented before it's cleaned up.

 Is there some specific reason we should be calling resource_id_set(None) in
 a check_delete_complete method?


I've often wondered why some do it, and some don't.

Seems to me that it should be done not inside each resource plugin but
in the generic resource handling code.

However, I have not given this much thought. Perhaps others can provide
insight into why it has been done that way.



Re: [openstack-dev] [Heat] Network topologies [and more]

2013-10-28 Thread Randall Burt

On Oct 28, 2013, at 9:07 AM, Mike Spreitzer mspre...@us.ibm.com wrote:

Zane Bitter zbit...@redhat.com wrote on 10/28/2013 06:47:50 AM:
 On 27/10/13 16:37, Edgar Magana wrote:
  Heat Developers,
 
  I am one of the core developers for Neutron who is lately working on the
  concept of Network Topologies. I want to discuss with you if the
  following blueprint will make sense to have in heat or neutron code:
  https://blueprints.launchpad.net/neutron/+spec/network-topologies-api
 
  ...

 It sounds to me like the only thing there that Heat is not already doing
 is to dump the existing network configuration. What if you were to
 implement just that part and do it in the format of a Heat template? (An
 independent tool to convert the JSON output to a Heat template would
 also work, I guess.)

 ...

 It does sound very much like you're trying to solve the same problem as
 Heat.


In my templates I have more than a network topology.  How would I combine the 
extracted/shared network topology with the other stuff I want in my heat 
template?

Well, if Neutron generated a Heat template describing a particular topology, 
you could always include that in another template as a provider resource, 
assuming said template is generated with meaningful outputs. IMO, there is 
some appeal to having Neutron generate a Heat template for this feature. It's 
something directly consumable by another OpenStack service, allowing you to 
not only describe, but also save and later re-create, any networking 
configuration using the same artifact.
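
A toy version of that export (a sketch; the input shape is made up and the 
resource properties are trimmed to the bare minimum):

    # Toy export sketch: turn a (made-up) Neutron topology description
    # into a minimal HOT-style template dict. Real templates would need
    # far more properties; this only shows the shape of the idea.
    import json

    def topology_to_hot(topology):
        resources = {}
        for net in topology["networks"]:
            resources[net["name"]] = {
                "type": "OS::Neutron::Net",
                "properties": {"name": net["name"]}}
        for sub in topology["subnets"]:
            resources[sub["name"]] = {
                "type": "OS::Neutron::Subnet",
                "properties": {
                    "network_id": {"get_resource": sub["network"]},
                    "cidr": sub["cidr"]}}
        return {"heat_template_version": "2013-05-23",
                "resources": resources}

    topo = {"networks": [{"name": "web_net"}],
            "subnets": [{"name": "web_subnet", "network": "web_net",
                         "cidr": "10.0.0.0/24"}]}
    print(json.dumps(topology_to_hot(topo), indent=2))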


Thanks,
Mike


Re: [openstack-dev] [Heat] Comments on Steve Baker's Proposal on HOT Software Config

2013-10-28 Thread Randall Burt

On Oct 28, 2013, at 8:53 AM, Steven Hardy sha...@redhat.com wrote:

 On Sun, Oct 27, 2013 at 11:23:20PM -0400, Lakshminaraya Renganarayana wrote:
 A few of us at IBM studied Steve Baker's proposal on HOT Software
 Configuration. Overall the proposed constructs and syntax are great -- we
 really like the clean syntax and concise specification of components. We
 would like to propose a few minor extensions that help with better
 expression of dependencies among components and resources, and in-turn
 enable cross-vm coordination. We have captured our thoughts on this on the
 following Wiki page
 
 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-ibm-response
 
 Thanks for putting this together.  I'll post inline below with cut/paste
 from the wiki followed by my response/question:
 
 E2: Allow usage of component outputs (similar to resources):
 There are fundamental differences between components and resources...
 
 So... lately I've been thinking this is not actually true, and that
 components are really just another type of resource.  If we can implement
 the software-config functionality without inventing a new template
 abstraction, IMO a lot of the issues described in your wiki page no longer
 exist.
 
 Can anyone provide me with a clear argument for what the fundamental
 differences actually are?
 
 My opinion is we could do the following:
 - Implement software config components as ordinary resources, using the
  existing interfaces (perhaps with some enhancements to dependency
  declaration)
 - Give OS::Nova::Server a components property, which simply takes a list of
  resources which describe the software configuration(s) to be applied

I see the appeal here, but I'm leaning toward having the components define the 
resources they apply to rather than extending the interfaces of every 
compute-related resource we have or may have in the future. True, this may make 
things trickier in some respects with regard to bootstrapping the compute 
resource, but then again, don't most configuration management systems work on 
active compute instances?

 
 This provides a lot of benefits:
 - Uniformity of interfaces (solves many of the interface-mapping issues you
  discuss in the wiki)
 - Can use provider resources and environments functionality unmodified
 - Conceptually simple, we don't have to confuse everyone with a new
  abstraction sub-type and related terminology
 - Resources describing software components will be stateful, as described
  in (E4), only the states would be the existing resource states, e.g
  CREATE, IN_PROGRESS == CONFIGURING, and CREATE, COMPLETE ==
  CONFIG_COMPLETE
 
 Thoughts?

Completely agree here. So far, I've not seen how components differ from 
resources save for some name and superficial syntax changes. Ordering component 
resources can likely be achieved with the existing depends-on functionality 
as well.

 
 Steve
 


Re: [openstack-dev] [Heat] Comments on Steve Baker's Proposal on HOT Software Config

2013-10-28 Thread Randall Burt

On Oct 28, 2013, at 9:49 AM, Steven Hardy sha...@redhat.com wrote:

 On Mon, Oct 28, 2013 at 02:33:40PM +, Randall Burt wrote:
 
 On Oct 28, 2013, at 8:53 AM, Steven Hardy sha...@redhat.com wrote:
 
 On Sun, Oct 27, 2013 at 11:23:20PM -0400, Lakshminaraya Renganarayana wrote:
 A few of us at IBM studied Steve Baker's proposal on HOT Software
 Configuration. Overall the proposed constructs and syntax are great -- we
 really like the clean syntax and concise specification of components. We
 would like to propose a few minor extensions that help with better
 expression of dependencies among components and resources, and in-turn
 enable cross-vm coordination. We have captured our thoughts on this on the
 following Wiki page
 
 https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config-ibm-response
 
 Thanks for putting this together.  I'll post inline below with cut/paste
 from the wiki followed by my response/question:
 
 E2: Allow usage of component outputs (similar to resources):
 There are fundamental differences between components and resources...
 
 So... lately I've been thinking this is not actually true, and that
 components are really just another type of resource.  If we can implement
 the software-config functionality without inventing a new template
 abstraction, IMO a lot of the issues described in your wiki page no longer
 exist.
 
 Can anyone provide me with a clear argument for what the fundamental
 differences actually are?
 
 My opinion is we could do the following:
 - Implement software config components as ordinary resources, using the
 existing interfaces (perhaps with some enhancements to dependency
 declaration)
 - Give OS::Nova::Server a components property, which simply takes a list of
 resources which describe the software configuration(s) to be applied
 
 I see the appeal here, but I'm leaning toward having the components define 
 the resources they apply to rather than extending the interfaces of every 
 compute-related resource we have or may have in the future. True, this may 
 make things trickier in some respects with regard to bootstrapping the 
 compute resource, but then again, don't most configuration management 
 systems work on active compute instances?
 
 What "every", though?  Don't we have exactly one compute resource,
 OS::Nova::Server?  (I'm assuming this functionality won't be available via
 the AWS-compatible Instance resource)

Yes, I suppose it wouldn't do to go extending the AWS compatibility interface 
with this functionality, so I withdraw my concern.

 
 Steve
 


Re: [openstack-dev] [Heat] Multi region support for Heat

2013-07-23 Thread Randall Burt

On Jul 23, 2013, at 11:03 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Steve Baker's message of 2013-07-22 21:43:05 -0700:
 On 07/23/2013 10:46 AM, Angus Salkeld wrote:
 On 22/07/13 16:52 +0200, Bartosz Górski wrote:
 Hi folks,
 
 I would like to start a discussion about the blueprint I raised about
 multi region support.
 I would like to get feedback from you. If something is not clear or
 you have questions do not hesitate to ask.
 Please let me know what you think.
 
 Blueprint:
 https://blueprints.launchpad.net/heat/+spec/multi-region-support
 
 Wikipage:
 https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat
 
 
 What immediately looks odd to me is that you have a MultiCloud Heat talking
 to the other Heats in each region. This seems like unnecessary
 complexity to me.
 I would have expected one Heat to do this job.
 
 It should be possible to achieve this with a single Heat installation -
 that would make the architecture much simpler.
 
 
 Agreed that it would be simpler and is definitely possible.
 
 However, consider that having a Heat in each region means Heat is more
 resilient to failure. So focusing on a way to make multiple Heats
 collaborate, rather than on a way to make one Heat talk to two regions,
 may be a more productive exercise.

Perhaps, but wouldn't having an engine that only requires the downstream 
services (nova, cinder, etc) running in a given region be equally if not more 
resilient? A Heat engine in region 1 can still provision resources in region 2 
even if the Heat service in region 2 is unavailable. It seems that one could 
handle global availability via anycast, a DR strategy, or some other routing 
magic rather than having the engine itself implement some support for it.
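
In other words, a single engine would just pick per-region endpoints out of 
the service catalog (a simplified sketch over a keystone-style catalog 
structure):

    # Simplified sketch: one engine choosing per-region endpoints from a
    # keystone-style service catalog, rather than depending on a Heat
    # deployment existing in every region.
    catalog = [
        {"type": "compute", "endpoints": [
            {"region": "region-one", "publicURL": "http://r1.nova.example"},
            {"region": "region-two", "publicURL": "http://r2.nova.example"},
        ]},
    ]

    def endpoint_for(catalog, service_type, region):
        for service in catalog:
            if service["type"] != service_type:
                continue
            for ep in service["endpoints"]:
                if ep["region"] == region:
                    return ep["publicURL"]
        raise LookupError("%s not found in %s" % (service_type, region))

    print(endpoint_for(catalog, "compute", "region-two"))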
