[openstack-dev] Re: [heat] glance v2 support?

2017-01-10 Thread Huangtianhua


-----Original Message-----
From: Flavio Percoco [mailto:fla...@redhat.com]
Sent: January 10, 2017 15:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [heat] glance v2 support?

On 10/01/17 12:35 +0530, Rabi Mishra wrote:
>On Mon, Jan 9, 2017 at 4:45 PM, Flavio Percoco  wrote:
>
>> On 06/01/17 09:34 +0530, Rabi Mishra wrote:
>>
>>> On Fri, Jan 6, 2017 at 4:38 AM, Emilien Macchi 
>>> wrote:
>>>
>>> Greetings Heat folks!

 My question is simple:
 When do you plan to support Glance v2?
 https://review.openstack.org/#/c/240450/

 The spec looks stale, while Glance v1 was deprecated in Newton (and
 v2 was started in Kilo!).


 Hi Emilien,
>>>
>>> I think we've not been able to move to v2 due to v1/v2 
>>> incompatibility[1] with respect to the location[2] property. Moving 
>>> to v2 would break all existing templates using that property.
>>>
>>> I've seen several discussions around that without any conclusion.  I 
>>> think we can support a separate v2 image resource and deprecate the 
>>> current one, unless there is a better path available.
>>>
>>
>> Hi Rabi,
>>
>> Could you elaborate on why Heat depends on the location attribute? 
>> I'm not familiar with Heat and knowing this might help me to propose 
>> something (or at least understand the difficulties).
>>
>> I don't think putting this on hold will be of any help. V1 ain't 
>> coming back and the improvements for v2 are still under heavy coding. 
>> I'd probably recommend moving to v2 with a proper deprecation path 
>> rather than sticking to v1.
>>
>>
>Hi Flavio,
>
>As much as we would like to move to v2, I think we still don't have an
>acceptable solution for the question below. There is an earlier ML
>thread[1], where it was discussed in detail.
>
>- What's the migration path for images created with v1 that use the 
>location attribute pointing to an external location?

Moving to Glance v2 shouldn't break this. As in, Glance will still be able to 
pull the images from external locations.

Also, to be more precise, you actually *can* use locations in V2.
Glance's node needs to have 2 settings enabled. The first is
`show_multiple_locations` and the second is a policy config[0]. It's however
not recommended to expose that to end users, which is why it was shielded
behind policies.
--- As you said, we can't use locations in v2 by default. IMO, if glance v2 is
to be compatible with v1, the option should be enabled by default.

I'd recommend Heat not use locations, as that will require deployers to
either enable them for everyone or have a dedicated glance-api node for Heat.
--- If we don't use locations, what other options do users have? What would a
user have to do before creating a glance image using v2? Download the image
data and then pass it to the glance API? I really don't think that's a good way.

All that being said, switching to v2 won't prevent Glance from reading images
from external locations if the image records already exist.
--- Yes, but how do we create a new glance image?

[0] https://github.com/openstack/glance/blob/master/etc/policy.json#L16-L18
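For reference, here is roughly what a deployer would have to enable (a sketch
based on the settings mentioned above; the default policy entries at [0] are
empty, i.e. open to everyone, so a deployer would typically scope them to
admin as shown):

  # glance-api.conf
  [DEFAULT]
  show_multiple_locations = True

  # policy.json -- scope the location calls instead of leaving them open
  "get_image_location": "role:admin",
  "set_image_location": "role:admin",
  "delete_image_location": "role:admin"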

>While answering the above, we have to keep in mind the following constraint.
>
>- Any change in the image id (a new image) would potentially result in
>nova servers using it in the template being rebuilt/replaced, and we
>would like to avoid that.
>
>There was a suggestion to allow 'copy-from' with v2, which would
>possibly make it easier for us. Is that still an option?

Maybe, in the longer term. The improvements for v2 are still under heavy
development.

>I assume we can probably use the glance upload API to upload the image
>data (after getting it from the external location) for an existing image?
>Last time I tried to do that, it did not seem to be allowed for an 'active'
>image. It's possible I'm missing something here. We don't have a way
>at present for a user to upload an image to heat-engine (not sure we
>would want that either), or for heat-engine to download the image
>from an 'external location' and then upload it to glance while
>creating/updating an image resource.

Downloading the image locally and uploading it is a workaround, yes. Not ideal 
but it's simple. However, you won't need it for the migration to v2, I believe, 
since you can re-use existing images. Heat won't be able to create new images 
and have them point to external locations, though, unless the settings I 
mentioned above have been enabled.
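For reference, the no-locations path for a brand new image boils down to two
glance v2 calls (a sketch of the raw API; the JSON body values are illustrative):

  POST /v2/images
       {"name": "my-image", "disk_format": "qcow2", "container_format": "bare"}
  PUT  /v2/images/{image_id}/file
       (body: the image data, e.g. previously downloaded from the external location)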

>Also, glance location api could probably have been useful here. 
>However, we were advised in the earlier thread not to use it, as 
>exposing the location to the end user is perceived as a security risk.

++

Flavio

>
>[1]  
>http://lists.openstack.org/pipermail/openstack-dev/2016-May/094598.html
>
>
>Cheers,
>> Flavio
>>
>>
>>> [1] 
>>> https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability
>>> [2] 

[openstack-dev] Re: [Heat][Glance] Can't migrate to glance v2 completely

2016-05-19 Thread Huangtianhua
Thanks very much and sorry to reply so late. Comments inline.

-----Original Message-----
From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
Sent: May 11, 2016 22:03
To: OpenStack Development Mailing List (not for usage questions)
Cc: Huangtianhua
Subject: Re: [openstack-dev] [Heat][Glance] Can't migrate to glance v2 completely


Thanks for your email. Comments inline.

On 5/11/16 3:06 AM, Huangtianhua wrote:
>
> Hi glance-team:
>
> 1. glance v1 supports '--location' in the image-create API; we support
> 'location' in heat for the glance image resource, and we don't support
> '--file' to use local files for upload, as the caller has no control
> over local files on the server running heat-engine, and there are some
> security risks.
>

We had a session during the summit to discuss the deprecation path. You are
right that currently v2 does not have location support. Also, please be mindful
that the location concept in v2 (which you mention above) is a bit different
from that in v1.

It's unfortunate that public-facing services have exposed v1, as v1 was designed
to be the internal-only (service) API for use by infrastructure services. v2,
on the other hand, has been designed to be used by end users and PaaS services.

Ideally, a location should never be set by the end user, as the location
mechanism used by Glance needs to be opaque to the end user (they cannot be
sure of the scheme in which the location needs to be set for it to be
acceptable to Glance). The location logic was introduced to help admins
(operators) set a custom location on an image to help speed up boot times.
Hence, it's a service API in a way (unless you run a very small trusted cloud).
(In the case of Heat, the scale and type of cloud would be quite different.)

--
In fact, I don't understand why the end user can't set 'location'; the
'location' to me is the URL where the data for the image already resides. Let's
consider a simple use case with a heat template:

heat_template_version: 2013-05-23
resources:
  fedora_image:
    type: OS::Glance::Image
    properties:
      disk_format: qcow2
      container_format: bare
      location: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
  my_server:
    type: OS::Nova::Server
    properties:
      # other properties
      image: {get_resource: fedora_image}

As above, the user wants to use a Fedora release image to create a nova server.
So if the user can't set the image 'location', how can they use the image? Is
there any other way in glance v2?

Consider another use case, of using a custom image stored in swift: the user
has already stored the custom image data in swift. The scheme looks like:
swift+http://tenant/account:key@auth_url/container/obj
It is really complex, but if this location is only exposed to admins
(operators), do they need to obtain the key (password) of all the end users? Is
that safe?
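To spell out why exposing this is sensitive, here is the same scheme annotated
(illustrative labels only, based on the URL above):

  swift+http://tenant/account:key@auth_url/container/obj
  # tenant/account -- the swift tenant and account
  # key            -- the user's swift key, i.e. a credential/password
  # auth_url       -- the auth endpoint
  # container/obj  -- where the image data lives

The location embeds a credential (the key), which is exactly what makes
exposing it to -- or collecting it from -- end users a security problem.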



>
> 2. glance v1 is deprecated, and I want to migrate heat to glance v2. For the
> glance image resource, I think we need to call two APIs on image creation:
> first image-create, then add-location. But in glance v2 the 'location'
> APIs (add-location, remove-location, replace-location) are unavailable by
> default, because the config option 'show_multiple_locations' defaults to
> false, i.e. the **location** functionality is disabled by default. So if we
> migrate to glance v2, users can't create a glance image resource by default,
> which is unacceptable. I wonder if we can set the config option to true to
> be compatible with glance v1? Or what's your suggestion for migrating to
> glance v2 completely?
>

(As I mentioned above) the location APIs have been designed to be used by
admins and are not supposed to be exposed to end users (or to proxy end-user
calls). It is a security risk to enable that config option, and unless the
deployment is absolutely sure (like a private/trusted glance installation), it
shouldn't be enabled. Also, it's not the upload/import call that will be
included as a part of interoperability [1]. I think a use case to support
"copy-from" a location is worth supporting in v2 -- where a user specifies a
location and the data can be pulled in by glance from that http URL. It has
been asked for by a few other users and we are strongly considering that case.
I will be meeting the defcore folks to identify the implications of the same
(to confirm whether we should or not).

As far as other alternatives are concerned, I will need to take a closer look
at all the possible ways you are letting users create images to better advise
you. But please (please) DO NOT expose locations (read or write).

--
So your suggestion is to w

[openstack-dev] [Heat][Glance] Can't migrate to glance v2 completely

2016-05-11 Thread Huangtianhua
Hi glance-team:


1. glance v1 supports '--location' in the image-create API; we support
'location' in heat for the glance image resource, and we don't support '--file'
to use local files for upload, as the caller has no control over local files on
the server running heat-engine, and there are some security risks.

2. glance v1 is deprecated, and I want to migrate heat to glance v2. For the
glance image resource, I think we need to call two APIs on image creation:
first image-create, then add-location. But in glance v2 the 'location' APIs
(add-location, remove-location, replace-location) are unavailable by default,
because the config option 'show_multiple_locations' defaults to false, i.e. the
*location* functionality is disabled by default. So if we migrate to glance v2,
users can't create a glance image resource by default, which is unacceptable. I
wonder if we can set the config option to true to be compatible with glance v1?
Or what's your suggestion for migrating to glance v2 completely?
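For clarity, the two-call flow described above would look roughly like this in
the raw v2 API (a sketch; the URL value is illustrative, and the PATCH only
works with 'show_multiple_locations' and the location policies enabled):

  POST  /v2/images
        {"name": "my-image", "disk_format": "qcow2", "container_format": "bare"}
  PATCH /v2/images/{image_id}
        [{"op": "add", "path": "/locations/-",
          "value": {"url": "http://example.com/image.qcow2", "metadata": {}}}]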



[openstack-dev] Re: [Heat] Re-evaluate conditions specification

2016-03-31 Thread Huangtianhua
The conditions function has been requested for a long time, and there have been
several previous discussions, which all ended up debating the implementation,
with no result.
https://review.openstack.org/#/c/84468/3/doc/source/template_guide/hot_spec.rst
https://review.openstack.org/#/c/153771/1/specs/kilo/resource-enabled-meta-property.rst

I think we should focus on the simplest possible way (the same as AWS) to meet
the user requirement; by following AWS, there is no doubt that we will get very
good compatibility.
And the patches are making good progress. I don't want everything to go back to zero :)
https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/support-conditions-function

In the example you gave of 'variables', there seems to be no relation to
resource/output/property conditions; it looks like a separate function, really
more like 'variables' to be used in a template.
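For illustration, the AWS-style usage the in-progress patches aim at would look
roughly like this (a sketch only; the template version and function names are
still being settled in review):

heat_template_version: 2016-10-14

parameters:
  env_type:
    type: string
    default: test

conditions:
  # true only when the env_type parameter is "prod"
  create_prod_res: {equals: [{get_param: env_type}, prod]}

resources:
  volume:
    type: OS::Cinder::Volume
    # the resource is only created when the condition holds
    condition: create_prod_res
    properties:
      size: 1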

-----Original Message-----
From: Thomas Herve [mailto:the...@redhat.com]
Sent: March 31, 2016 19:55
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Re-evaluate conditions specification

On Thu, Mar 31, 2016 at 10:40 AM, Thomas Herve  wrote:
> Hi all,
>
> As the patches for conditions support are incoming, I've found 
> something in the code (and the spec) I'm not really happy with. We're 
> creating a new top-level section in the template called "conditions"
> which holds names that can be reused for conditionally creating 
> resource.
>
> While it's fine and maps to what AWS does, I think it's a bit 
> short-sighted and limited. What I have suggested in the past is to 
> have a "variables" (or whatever you want to call it) section, where 
> one can declare names and values. Then we can add an intrinsic 
> function to retrieve data from there, and use that for examples for 
> conditions.

I was asked to give examples, here's at least one that can illustrate what I 
meant:

parameters:
  host:
    type: string
  port:
    type: string

variables:
  endpoint:
    str_replace:
      template: http://HOST:PORT/
      params:
        HOST: {get_param: host}
        PORT: {get_param: port}

resources:
  config1:
    type: OS::Heat::StructuredConfig
    properties:
      config:
        hosts: [{get_variable: endpoint}]

--
Thomas



[openstack-dev] Re: [Heat] Nomination Oleksii Chuprykov to Heat core reviewer

2016-03-19 Thread Huangtianhua
+1 :)

-----Original Message-----
From: Sergey Kraynev [mailto:skray...@mirantis.com]
Sent: March 16, 2016 18:58
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Heat] Nomination Oleksii Chuprykov to Heat core reviewer

Hi Heaters,

The Mitaka release is close to finishing, so it's a good time to review the
results of our work.
One of these results is an analysis of contributions over the last release cycle.
According to the data [1], we have one good candidate for nomination to the
core-review team:
Oleksii Chuprykov.
During this release he has shown significant review activity.
His reviews were valuable and useful, and he has a good level of expertise in
the Heat code.
So I think he is worthy of joining the core-reviewers team.

I ask you to vote and decide his destiny.
 +1 - if you agree with his candidature
 -1  - if you disagree with his candidature

[1] http://stackalytics.com/report/contribution/heat-group/120

--
Regards,
Sergey.



[openstack-dev] Re: [Heat] Status of the Support Conditionals in Heat templates

2015-12-20 Thread Huangtianhua
https://review.openstack.org/#/c/245042
First patch https://review.openstack.org/#/c/221648

I proposed this spec because the function is really needed: many customers of
our company have complained that they have to write/manage many templates to
meet their business needs (the templates are similar -- can they be re-used?),
and the magnum folks have asked me for this function too. I know there have
been several previous discussions, such as https://review.openstack.org/#/c/84468/ and
https://review.openstack.org/#/c/153771/, but considering user habits and
compatibility with CFN templates, and since this simple approach is easy to
implement based on our architecture, I proposed the same style as CFN.

If you agree with it, I will be happy to continue this work, thanks :)

-----Original Message-----
From: Steven Hardy [mailto:sha...@redhat.com]
Sent: December 18, 2015 19:08
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Status of the Support Conditionals in Heat templates

On Wed, Dec 09, 2015 at 01:42:13PM +0300, Sergey Kraynev wrote:
>Hi Heaters,
>On the last IRC meeting we had a question about the Support Conditionals spec
>[1].
>A previous attempt at this stuff is here [2].
>The example of the first POC in Heat can be reviewed here [3].
>As I understand it, we have not reached any final decision about this work.
>So I'd like to gauge the feelings of the community about it. This clarification
>may be done as answers to two simple questions:
> - Why do we want to implement it?
> - Why do we NOT want to implement it?
>My personal feeling is:
>- Why do we want to implement it?
>    * A lot of users want to have similar stuff.
>    * It's already present in AWS, so it would be good to have this
>feature in Heat too.
> - Why do we NOT want to implement it?
>    * It can be solved with Jinja [4]. However, I don't think that is
>really an important reason for blocking this work.
>Please share your thoughts on the two questions above.
>It should allow us to eventually decide whether we implement it or not.

This has been requested for a long time, and there have been several previous
discussions, which all ended up debating the implementation, rather than
focussing on the simplest possible way to meet the user requirement.

I think this latest attempt provides a simple way to meet the requirement,
improves our CFN compatibility, and is inspired by an interface which has been
proven to work.

So I'm +1 on going ahead with this - the implementation looks pretty simple :)

We've debated Jinja and other solutions before and dismissed them as either 
unsafe to run inside the heat service, or potentially too complex - this 
proposed solution appears to resolve both those concerns.

Steve



[openstack-dev] Re: [heat] Rico Lin for heat-core

2015-12-07 Thread Huangtianhua
+ 1 :)

From: Sergey Kraynev [mailto:skray...@mirantis.com]
Sent: December 7, 2015 20:39
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [heat] Rico Lin for heat-core

Hi all.

I'd like to nominate Rico Lin for heat-core. He has done an awesome job
providing useful and valuable reviews, and his contribution is really high [1].

[1] http://stackalytics.com/report/contribution/heat-group/60

Heat core-team, please vote with:
 +1 - if you agree
  -1 - if you disagree

--
Regards,
Sergey.


[openstack-dev] Re: [Heat] core team nomination

2015-10-20 Thread Huangtianhua
+1

-----Original Message-----
From: Sergey Kraynev [mailto:skray...@mirantis.com]
Sent: October 20, 2015 21:38
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Heat] core team nomination

I'd like to propose new candidates for heat core-team:
Rabi Mishra
Peter Razumovsky

According to the statistics, both candidates have made a big effort in Heat as
reviewers and as contributors [1][2].
They have been involved in Heat community work during the last several releases
and have shown a good understanding of the Heat code.
I think they are ready to become core reviewers.

Heat-cores, please vote with +/- 1.

[1] http://stackalytics.com/report/contribution/heat-group/180
[2] http://stackalytics.com/?module=heat-group&metric=person-day
--
Regards,
Sergey.



[openstack-dev] Re: Proposing Kanagaraj Manickam and Ethan Lynn for heat-core

2015-07-31 Thread Huangtianhua
+1 :)

-----Original Message-----
From: Steve Baker [mailto:sba...@redhat.com]
Sent: July 31, 2015 12:36
To: OpenStack Development Mailing List
Subject: [openstack-dev] Proposing Kanagaraj Manickam and Ethan Lynn for heat-core

I believe the heat project would benefit from Kanagaraj Manickam and Ethan Lynn 
having the ability to approve heat changes.

Their reviews are valuable[1][2] and numerous[3], and both have been submitting 
useful commits in a variety of areas in the heat tree.

Heat cores, please express your approval with a +1 / -1.

[1] http://stackalytics.com/?user_id=kanagaraj-manickam&metric=marks
[2] http://stackalytics.com/?user_id=ethanlynn&metric=marks
[3] http://stackalytics.com/report/contribution/heat-group/90



[openstack-dev] Re: [Heat] conditional resource exposure - second thoughts

2015-07-14 Thread Huangtianhua


-----Original Message-----
From: Zane Bitter [mailto:zbit...@redhat.com]
Sent: July 15, 2015 3:35
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Heat] conditional resource exposure - second thoughts

On 14/07/15 14:34, Pavlo Shchelokovskyy wrote:
 Hi Heaters,

 currently we already expose to the user only resources for services 
 deployed in the cloud [1], and soon we will do the same based on 
 whether actual user roles allow creating specific resources [2]. Here 
 I would like to get your opinion on some of my thoughts regarding 
 behavior of resource-type-list, resource-type-show and 
 template-validate with this new features.

 resource-type-list
 We already (or soon will) hide unavailable in the cloud / for the user 
 resources from the listing. But what if we add an API flag e.g. --all 
 to show all registered in the engine resources? That would give any 
 user a glimpse of what their Orchestration service can manage in 
 principle, so they can nag the cloud operator to install additional 
 OpenStack components or give them required roles :)

I'd agree with Zane: only allow admins to get all resource types, not ordinary
users.

 resource-type-show
 Right now the plan is to disable showing unavailable to the user 
 resources. But may be we should leave this as it is, for the same 
 purpose as above (or again add a --all flag or such)?

I'd prefer to allow only admins to show all resource types. The behavior should
be consistent: ordinary users can only show the resource types that they can list.

 template-validate
 Right now Heat is failing validation for templates containing resource 
 types not registered in the engine (e.g. typo). Should we also make 
 this call available services- and roles-sensitive? Or should we leave 
 a way for a user to check validity of any template with any in 
 principle supported resources?

I'd agree with Zane. And I think it would be good for users if we gave the
detailed reason why validation failed (service unavailable, or resource type
not supported).

cheers,
Zane.

 The bottom line is we are good in making Heat service to be as much 
 self-documented via its own API as possible - let's keep doing that 
 and make any Heat deployment to be the Heat primer :)

 Eager to hear your opinions.

 [1]
 http://specs.openstack.org/openstack/heat-specs/specs/liberty/conditio
 nal-resource-exposure-services.html

 [2]
 http://specs.openstack.org/openstack/heat-specs/specs/liberty/conditio
 nal-resource-exposure-roles.html

 Best regards,

 --
 Dr. Pavlo Shchelokovskyy
 Senior Software Engineer
 Mirantis Inc
 www.mirantis.com


 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] Re: [heat] Application level HA via Heat

2015-04-02 Thread Huangtianhua
If we replace an autoscaling group member, we can't make sure the attached
resources stay the same. Why not call the evacuate or rebuild API of nova
instead: just add meters for HA (vm state or host state) in ceilometer, and
then signal an HA resource (such as HARestarter)?
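For illustration, the shape I have in mind is roughly this (a sketch; the
meter name is hypothetical and assumes the new HA meters proposed above exist):

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}

  restarter:
    type: OS::Heat::HARestarter
    properties:
      InstanceId: {get_resource: my_server}

  vm_state_alarm:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: instance.ha.state   # hypothetical meter for vm/host state
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 1
      comparison_operator: lt
      # on failure, signal the HA resource instead of replacing the member
      alarm_actions:
        - {get_attr: [restarter, AlarmUrl]}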

-----Original Message-----
From: Steven Hardy [mailto:sha...@redhat.com]
Sent: December 23, 2014 2:21
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [heat] Application level HA via Heat

Hi all,

So, lately I've been having various discussions around $subject, and I know 
it's something several folks in our community are interested in, so I wanted to 
get some ideas I've been pondering out there for discussion.

I'll start with a proposal of how we might replace HARestarter with AutoScaling 
group, then give some initial ideas of how we might evolve that into something 
capable of a sort-of active/active failover.

1. HARestarter replacement.

My position on HARestarter has long been that equivalent functionality should 
be available via AutoScalingGroups of size 1.  Turns out that shouldn't be too 
hard to do:

resources:
  server_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 1
      resource:
        type: ha_server.yaml

  server_replacement_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      # FIXME: this adjustment_type doesn't exist yet
      adjustment_type: replace_oldest
      auto_scaling_group_id: {get_resource: server_group}
      scaling_adjustment: 1

So, currently our ScalingPolicy resource can only support three adjustment 
types, all of which change the group capacity.  AutoScalingGroup already 
supports batched replacements for rolling updates, so if we modify the 
interface to allow a signal to trigger replacement of a group member, then the 
snippet above should be logically equivalent to HARestarter AFAICT.

The steps to do this should be:

 - Standardize the ScalingPolicy-AutoScalingGroup interface, so asynchronous
adjustments (e.g. signals) between the two resources don't use the adjust
method.

 - Add an option to replace a member to the signal interface of AutoScalingGroup

 - Add the new replace adjustment type to ScalingPolicy

I posted a patch which implements the first step, and the second will be 
required for TripleO, e.g we should be doing it soon.

https://review.openstack.org/#/c/143496/
https://review.openstack.org/#/c/140781/

2. A possible next step towards active/active HA failover

The next part is the ability to notify before replacement that a scaling action 
is about to happen (just like we do for LoadBalancer resources
already) and orchestrate some or all of the following:

- Attempt to quiesce the currently active node (may be impossible if it's
  in a bad state)

- Detach resources (e.g volumes primarily?) from the current active node,
  and attach them to the new active node

- Run some config action to activate the new node (e.g run some config
  script to fsck and mount a volume, then start some application).

The first step is possible by putting a SoftwareConfig/SoftwareDeployment
resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the node is
too bricked to respond, and specifying the DELETE action so it only runs when
we replace the resource).

The third step is possible either via a script inside the box which polls for 
the volume attachment, or possibly via an update-only software config.

The second step is the missing piece AFAICS.

I've been wondering if we can do something inside a new heat resource, which 
knows what the current active member of an ASG is, and gets triggered on a 
replace signal to orchestrate e.g deleting and creating a VolumeAttachment 
resource to move a volume between servers.

Something like:

resources:
  server_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2
      max_size: 2
      resource:
        type: ha_server.yaml

  server_failover_policy:
    type: OS::Heat::FailoverPolicy
    properties:
      auto_scaling_group_id: {get_resource: server_group}
      resource:
        type: OS::Cinder::VolumeAttachment
        properties:
          # FIXME: refs is a ResourceGroup interface not currently
          # available in AutoScalingGroup
          instance_uuid: {get_attr: [server_group, refs, 1]}

  server_replacement_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      # FIXME: this adjustment_type doesn't exist yet
      adjustment_type: replace_oldest
      auto_scaling_policy_id: {get_resource: server_failover_policy}
      scaling_adjustment: 1

By chaining policies like this we could trigger an update on the attachment 
resource (or a nested template via a provider resource containing many 
attachments or other resources) every time the ScalingPolicy is triggered.

For the sake of clarity, I've not included the existing stuff like ceilometer 
alarm resources etc above, but hopefully it gets the idea accross so we can 
discuss 

[openstack-dev] Re: [Heat] Stepping down from core

2015-03-01 Thread Huangtianhua
Good luck :)

-----Original Message-----
From: Jeff Peeler [mailto:jpee...@redhat.com]
Sent: February 28, 2015 4:23
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Heat] Stepping down from core

As discussed during the previous Heat meeting, I'm going to be stepping down
from core on the Heat project. My day-to-day work is going to be focused on
TripleO for the foreseeable future, and I hope to be able to focus on reviews
there soon.

Being part of Heat core since day 0 has been a good experience, but keeping up 
with multiple projects is a lot to manage. I don't know how some of you do it!

Jeff



[openstack-dev] Re: [Heat] core team changes

2015-01-28 Thread Huangtianhua
Hi, all,

Thanks for your recognition of my previous work. I am very happy to work with
you all and will do my best.

Regards.


[openstack-dev] Re: [Heat] Precursor to Phase 1 Convergence

2015-01-08 Thread Huangtianhua


From: Angus Salkeld [mailto:asalk...@mirantis.com]
Sent: January 9, 2015 14:08
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence


On Fri, Jan 9, 2015 at 3:22 PM, Murugan, Visnusaran
visnusaran.muru...@hp.com wrote:
Steve,

My reasoning for having a "--continue"-like functionality was to run it as a
periodic task and substitute for a continuous observer for now.

I am not in favor of --continue as an API. I'd suggest responding to
resource timeouts, and if there is no response from the task, then re-starting
(continuing) the task.

-Angus


+1, agree with Angus :)

A "--continue"-based command should work on the realized vs. actual graph and
issue a stack update.

I completely agree that user action should not be needed to realize a partially 
completed stack.

Your thoughts.

From: vishnu [mailto:ckmvis...@gmail.com]
Sent: Friday, January 9, 2015 10:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Precursor to Phase 1 Convergence

Steve,

Auto recovery is the plan. Engine failure should be detected by way of a
heartbeat, or a partially realised stack should be recovered on engine startup
in the case of a single-engine scenario.

The --continue command was just an additional helper API.

Visnusaran Murugan
about.me/ckmvishnu

On Thu, Jan 8, 2015 at 11:29 PM, Steven Hardy sha...@redhat.com wrote:
On Thu, Jan 08, 2015 at 09:53:02PM +0530, vishnu wrote:
Hi Zane,
I was wondering if we could push changes relating to backup stack removal
and to not load resources as part of the stack. There needs to be a capability
to restart jobs left over by dead engines,
something like: heat stack-operation --continue [git rebase --continue]

To me, it's pointless if the user has to restart the operation, they can do
that already, e.g by triggering a stack update after a failed stack create.

The process needs to be automatic IMO, if one engine dies, another engine
should detect that it needs to steal the lock or whatever and continue
whatever was in-progress.

Had a chat with shardy regarding this. IMO this would be a valuable
enhancement. Notification-based load sharing can be taken up upon
completion.

I was referring to a capability for the service to transparently recover
if, for example, a heat-engine is restarted during a service upgrade.

Currently, users will be impacted in this situation, and making them
manually restart failed operations doesn't seem like a super-great solution
to me (like I said, they can already do that to some extent)

Steve



[openstack-dev] Re: [Heat] Nominating Pavlo Shchelokovskyy for heat-core

2014-10-08 Thread Huangtianhua
Congratulations :)

-----Original Message-----
From: Pavlo Shchelokovskyy [mailto:pshchelokovs...@mirantis.com]
Sent: October 9, 2014 1:29
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Nominating Pavlo Shchelokovskyy for heat-core

Hi fellow Heat Cores,

thank you for your support. I am very proud to become part of this team, and I 
will do my best to use my new superpowers wisely and responsibly.

Best regards,
Pavlo.

On Wed, Oct 8, 2014 at 1:22 AM, Angus Salkeld asalk...@mirantis.com wrote:
 Congrats Pavlo, I have added you to core.

 -Angus

 On Wed, Oct 8, 2014 at 1:18 AM, Jeff Peeler jpee...@redhat.com wrote:

 +1

 On 10/06/2014 04:41 PM, Zane Bitter wrote:

 I'd like to propose that we add Pavlo Shchelokovskyy to the 
 heat-core team.

 Pavlo has been a consistently active member of the Heat community - 
 he's a regular participant in IRC and at meetings, has been making 
 plenty of good commits[1] and maintains a very respectable review 
 rate[2] with helpful comments. I think he's built up enough 
 experience with the code base to be ready for core.

 Heat-cores, please vote by replying to this thread. As a reminder of 
 your options[3], +1 votes from 5 cores is sufficient; a -1 is 
 treated as a veto.


 Obviously past 5 +1 votes here, but showing full support doesn't hurt.



 cheers,
 Zane.

  [1] https://review.openstack.org/#/q/owner:%22Pavlo+Shchelokovskyy%22+status:merged+project:openstack/heat,n,z
  [2] http://russellbryant.net/openstack-stats/heat-reviewers-30.txt
  [3] https://wiki.openstack.org/wiki/Heat/CoreTeam#Adding_or_Removing_Members





[openstack-dev] Re: Re: [heat] autoscaling across regions and availability zones

2014-07-09 Thread Huangtianhua


From: Mike Spreitzer [mailto:mspre...@us.ibm.com]
Sent: July 10, 2014 3:19
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Re: [heat] autoscaling across regions and availability zones

Huangtianhua huangtian...@huawei.com wrote on 07/04/2014 02:35:56 AM:

 I have registered a bp about this:
 https://blueprints.launchpad.net/heat/+spec/implement-autoscalinggroup-availabilityzones

 And I have been thinking about how to implement this recently.

 According to the AWS autoscaling implementation, it "attempts to
 distribute instances evenly between the Availability Zones that are
 enabled for your Auto Scaling group.
 Auto Scaling does this by attempting to launch new
 instances in the Availability Zone with the fewest instances. If the
 attempt fails, however, Auto Scaling will attempt to launch in other
 zones until it succeeds."

 But there is a doubt about the "fewest instances", e.g.

 There are two AZs:
   Az1: has two instances
   Az2: has three instances
 Then, to create an asg with 4 instances, I think we should
 create two instances each in az1 and az2, right? Now if we need to extend
 the asg to 5 instances, in which AZ should the new instance be launched?
 If you are interested in this bp, I think we can discuss this :)

The way AWS handles this is described in 
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html#arch-AutoScalingMultiAZ

That document leaves a lot of freedom to the cloud provider.  And rightfully 
so, IMO.  To answer your specific example, when spreading 5 instances across 2 
zones, the cloud provider gets to pick which zone gets 3 and which zone gets 2. 
 As for what a Heat scaling group should do, that depends on what Nova can do 
for Heat.  I have been told that Nova's instance-creation operation takes an 
optional parameter that identifies one AZ and, if that parameter is not 
provided, then a configured default AZ is used.  Effectively, the client has to 
make the choice.  I would start out with Heat making a random choice; in 
subsequent development it might query or monitor Nova for some statistics to 
guide the choice.

Yes, I read the doc. As you said, the doc is not well written, so I had
doubts about "fewest instances" before. But now, IMO, "fewest instances" means
the instances of the group, so you are right: in my specific example, the
instance should be launched at random or in round-robin mode.

An even more interesting issue is the question of choosing which member(s) to 
remove when scaling down.  The approach taken by AWS is documented at 
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/us-termination-policy.html

but the design there has redundant complexity and the doc is not well written.  
Following is a short sharp presentation of an isomorphic system.

A client that owns an ASG configures that ASG to have a series (possibly empty) 
of instance termination policies; the client can change the series during the 
ASG's lifetime.  Each policy is drawn from the following menu:

  *   OldestLaunchConfiguration
  *   ClosestToNextInstanceHour
  *   OldestInstance
  *   NewestInstance

and there is a default termination policy, 'Default': if no termination policy
is specified at asg creation, the default policy is applied:

1. Choose the AZ which has the most instances of the group.
2. If several AZs tie on point 1, choose an AZ at random.
3. Within the chosen AZ, remove the instance whose LaunchConfiguration is oldest.
4. If several instances remain, choose according to the ClosestToNextInstanceHour policy.
5. If several instances still remain, choose at random.
(see the AWS doc for the exact meaning of each).  The signature of a policy is 
this: given a set of candidate instances for removal, return a subset (possibly 
the whole input set).

When it is time to remove instances from an ASG, they are chosen one by one.
AWS uses the following procedure to choose one instance to remove.
1. Choose the AZ from which the instance will be removed. The choice is
based primarily on balancing the number of group members in each AZ, and ties
are broken randomly.
2. Starting with a candidate set consisting of all the ASG's members
in the chosen AZ, run the configured series of policies to progressively narrow
down the set of candidates.
3. Use OldestLaunchConfiguration and then ClosestToNextInstanceHour to
further narrow the set of candidates.
4. Make a random choice among the final set of candidates.

Since each policy returns its input when its input's size is 1 we do not need 
to talk about early exits when defining the procedure (although the 
implementation might make such optimizations).

I plan to draft a spec.

Maybe there is no need to draft a spec, since my bp is already approved and I
am developing it now. If you are interested in it, we can

[openstack-dev] Re: [heat] autoscaling across regions and availability zones

2014-07-04 Thread Huangtianhua
I have registered a bp about this:
https://blueprints.launchpad.net/heat/+spec/implement-autoscalinggroup-availabilityzones

And I have been thinking about how to implement this recently.

According to the AWS autoscaling implementation, it "attempts to distribute
instances evenly between the Availability Zones that are enabled for your Auto
Scaling group.
Auto Scaling does this by attempting to launch new instances in the
Availability Zone with the fewest instances. If the attempt fails, however,
Auto Scaling will attempt to launch in other zones until it succeeds."

But there is a doubt about the "fewest instances", e.g.

There are two AZs:
   Az1: has two instances
   Az2: has three instances
Then, to create an asg with 4 instances, I think we should create two instances
each in az1 and az2, right? Now if we need to extend the asg to 5 instances, in
which AZ should the new instance be launched?
If you are interested in this bp, I think we can discuss this :)
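For reference, the AWS-compatible group in Heat already accepts an AZ list; the
bp would bring the same spreading behavior to OS::Heat::AutoScalingGroup.
Roughly (a sketch; launch_config and the sizes are illustrative):

resources:
  asg:
    type: AWS::AutoScaling::AutoScalingGroup
    properties:
      # members would be spread as evenly as possible across these zones
      AvailabilityZones: [az1, az2]
      LaunchConfigurationName: {get_resource: launch_config}
      MinSize: 4
      MaxSize: 5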


Thanks
From: Mike Spreitzer [mailto:mspre...@us.ibm.com]
Sent: July 2, 2014 4:23
To: OpenStack Development Mailing List
Subject: [openstack-dev] [heat] autoscaling across regions and availability zones

An AWS autoscaling group can span multiple availability zones in one region.  
What is the thinking about how to get analogous functionality in OpenStack?

Warmup question: what is the thinking about how to get the levels of isolation 
seen between AWS regions when using OpenStack?  What is the thinking about how 
to get the level of isolation seen between AWS AZs in the same AWS Region when 
using OpenStack?  Do we use OpenStack Region and AZ, respectively?  Do we 
believe that OpenStack AZs can really be as independent as we want them (note 
that this is phrased to not assume we only want as much isolation as AWS 
provides --- they have had high profile outages due to lack of isolation 
between AZs in a region)?

I am going to assume that the answer to the question about ASG spanning 
involves spanning OpenStack regions and/or AZs.  In the case of spanning AZs, 
Heat has already got one critical piece: the OS::Heat::InstanceGroup and 
AWS::AutoScaling::AutoScalingGroup types of resources take a list of AZs as an 
optional parameter.  Presumably all four kinds of scaling group (i.e., also 
OS::Heat::AutoScalingGroup and OS::Heat::ResourceGroup) should have such a 
parameter.  We would need to change the code that generates the template for 
the nested stack that is the group, so that it spreads the members across the 
AZs in a way that is as balanced as is possible at the time.

Currently, a stack does not have an AZ.  That makes the case of an 
OS::Heat::AutoScalingGroup whose members are nested stacks interesting --- how 
does one of those nested stacks get into the right AZ?  And what does that 
mean, anyway?  The meaning would have to be left up to the template author.  
But he needs something he can write in his member template to reference the 
desired AZ for the member stack.  I suppose we could stipulate that if the 
member template has a parameter named availability_zone and typed string 
then the scaling group takes care of providing the right value to that 
parameter.
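For example, the convention could look like this (a sketch of the idea above;
the member template name and parameters are illustrative):

# ha_member.yaml -- a hypothetical scaling-group member template
heat_template_version: 2013-05-23
parameters:
  availability_zone:
    type: string   # the scaling group would fill this in per member
  image:
    type: string
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      availability_zone: {get_param: availability_zone}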

To spread across regions adds two things.  First, all four kinds of scaling 
group would need the option to be given a list of regions instead of a list of 
AZs.  More likely, a list of contexts as defined in 
https://review.openstack.org/#/c/53313/ --- that would make this handle 
multi-cloud as well as multi-region.  The other thing this adds is a concern 
for context health.  It is not enough to ask Ceilometer to monitor member 
health --- in multi-region or multi-cloud you also have to worry about the 
possibility that Ceilometer itself goes away.  It would have to be the scaling 
group's responsibility to monitor for context health, and react properly to 
failure of a whole context.

Does this sound about right?  If so, I could draft a spec.

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [heat] Sergey Kraynev for heat-core

2014-06-26 Thread Huangtianhua
+1, congratulations :)

-----Original Message-----
From: Steve Baker [mailto:sba...@redhat.com]
Sent: June 27, 2014 6:08
To: OpenStack Development Mailing List
Subject: [openstack-dev] [heat] Sergey Kraynev for heat-core

I'd like to nominate Sergey Kraynev for heat-core. His reviews are valuable and 
prolific, and his commits have shown a sound understanding of heat internals.

http://stackalytics.com/report/contribution/heat-group/60



[openstack-dev] Re: [Heat] fine grained quotas

2014-06-19 Thread Huangtianhua
Hi Clint,

Thank you for your comments on my BP and code!

The BP I proposed is all about putting dynamic, admin-configurable limits on
the number of stacks per tenant and on stack complexity. Therefore, you can
consider my BP an extension to your config-file-based limitation mechanism. If
the admin does not want to configure fine-grained, tenant-specific limits, the
values in the config become the default values of those limits.

And just like only an admin can configure the limit items in the config file,
the limit update and delete APIs I proposed are also admin-only. Therefore,
users cannot set those values themselves to break the anti-DoS capability you
mentioned.

The reason I want to introduce the APIs and the dynamically configurable
limits mainly lies in the fact that various tenants have various underlying
resource quotas, and even various template/stack complexity requirements, so I
think a global, statically configured limitation mechanism could be refined to
match user requirements better.

Your thoughts?

By the way, I do think that the DoS problem is interesting in Heat. Can we
have more discussion on that?

Thanks again!

-----Original Message-----
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: June 20, 2014 6:33
To: openstack-dev
Subject: Re: [openstack-dev] [Heat] fine grained quotas

Excerpts from Randall Burt's message of 2014-06-19 15:21:14 -0700:
 On Jun 19, 2014, at 4:17 PM, Clint Byrum cl...@fewbar.com wrote:
 
  I was made aware of the following blueprint today:
  
  http://blueprints.launchpad.net/heat/+spec/add-quota-api-for-heat
  http://review.openstack.org/#/c/96696/14
  
  Before this goes much further.. I want to suggest that this work be 
  cancelled, even though the code looks excellent. The reason those 
  limits are in the config file is that these are not billable items 
  and they have a _tiny_ footprint in comparison to the physical 
  resources they will allocate in Nova/Cinder/Neutron/etc.
  
  IMO we don't need fine grained quotas in Heat because everything the 
  user will create with these templates will cost them and have its 
  own quota system. The limits (which I added) are entirely to prevent 
  a DoS of the engine.
 
 What's more, I don't think this is something we should expose via API 
 other than to perhaps query what those quota values are. It is 
 possible that some provider would want to bill on number of stacks, 
 etc (I personally agree with Clint here), it seems that is something 
 that could/should be handled external to Heat itself.

Far be it from any of us to dictate a single business model. However, Heat is a 
tool which encourages consumption of billable resources by making it easier to 
tie them together. This is why FedEx gives away envelopes and will come pick up 
your packages for free.



[openstack-dev] Re: [Nova] Nominating Ken'ichi Ohmichi for nova-core

2014-06-15 Thread Huangtianhua
+1, congratulations :)

-----Original Message-----
From: Michael Still [mailto:mi...@stillhq.com]
Sent: June 14, 2014 6:41
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Nova] Nominating Ken'ichi Ohmichi for nova-core

Greetings,

I would like to nominate Ken'ichi Ohmichi for the nova-core team.

Ken'ichi has been involved with nova for a long time now.  His reviews on API 
changes are excellent, and he's been part of the team that has driven the new 
API work we've seen in recent cycles forward. Ken'ichi has also been reviewing 
other parts of the code base, and I think his reviews are detailed and helpful.

Please respond with +1s or any concerns.

References:

  
https://review.openstack.org/#/q/owner:ken1ohmichi%2540gmail.com+status:open,n,z

  https://review.openstack.org/#/q/reviewer:ken1ohmichi%2540gmail.com,n,z

  http://www.stackalytics.com/?module=nova-group&user_id=oomichi

As a reminder, we use the voting process outlined at 
https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our core team.

Thanks,
Michael

--
Rackspace Australia



[openstack-dev] Re: Re: Re: [Nova][Neutron][Cinder][Heat] Should we support tags for os resources?

2014-04-23 Thread Huangtianhua


-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: April 23, 2014 7:41
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Re: Re: [Nova][Neutron][Cinder][Heat] Should we support tags for os resources?

On Tue, 2014-04-22 at 02:02 +, Huangtianhua wrote:
 Thanks very much.

 I have registered the blueprint for nova:
 https://blueprints.launchpad.net/nova/+spec/add-tags-for-os-resources

 The simple plan is:
 1. Add the tags API (create tags/delete tags/describe tags) to the v3 API
 2. Change the implementation for instances from "metadata" to "tags"

 Your suggestions?

Hi again,

The Nova blueprint process has changed. We now use a Gerrit repository to 
submit, review, and approve blueprint specifications. Please see here for 
information on how to submit a spec for the proposed blueprint:

https://wiki.openstack.org/wiki/Blueprints#Nova

Thanks very much. I will submit a spec soon.

Thank you!
-jay





[openstack-dev] Re: [Nova][Neutron][Cinder][Heat] Should we support tags for os resources?

2014-04-21 Thread Huangtianhua
I plan to register a blueprint in nova to record this. Can I?


-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: April 20, 2014 21:06
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova][Neutron][Cinder][Heat] Should we support tags for os resources?

On Sun, 2014-04-20 at 08:35 +, Huangtianhua wrote:
 Hi all:

 Currently, the EC2 API of OpenStack only has tag support (metadata)
 for instances. And there is already a blueprint to add support
 for volumes and volume snapshots using "metadata".

 A lot of resources, such as
 image/subnet/securityGroup/networkInterface (port), support
 tags in AWS.

 I think we should support tags for these resources. There may be no
 "metadata" property for these resources, so we should add
 "metadata" to support resource tags, and change the related APIs.

Hi Tianhua,

In OpenStack, generally, the choice was made to use maps of key/value pairs 
instead of lists of strings (tags) to annotate objects exposed in the REST 
APIs. OpenStack REST APIs inconsistently call these maps of key/value pairs:

 * properties (Glance, Cinder Image, Volume respectively)
 * extra_specs (Nova InstanceType)
 * metadata (Nova Instance, Aggregate and InstanceGroup, Neutron)
 * metadetails (Nova Aggregate and InstanceGroup)
 * system_metadata (Nova Instance -- differs from normal metadata in that 
the key/value pairs are 'owned' by Nova, not a user...) 

Personally, I think tags are a cleaner way of annotating objects when the 
annotation is coming from a normal user. Tags represent by far the most common 
way for REST APIs to enable user-facing annotation of objects in a way that is 
easy to search on. I'd love to see support for tags added to any 
searchable/queryable object in all of the OpenStack APIs.

I'd also like to see cleanup of the aforementioned inconsistencies in how maps 
of key/value pairs are both implemented and named throughout the OpenStack 
APIs. Specifically, I'd like to see this implemented in the next major version 
of the Compute API:

 * Removal of the metadetails term
 * All key/value pairs can only be changed by users with elevated privileges,
i.e. system-controlled (normal users should use tags)
 * Call all these key/value pair combinations properties -- technically,
metadata is data about data, like the size of an integer. These key/value
pairs are just data, not data about data.
 * Identify key/value pairs that are relied on by all of Nova to be a specific
key and value combination, and make these things actual real attributes on some
object model -- since that is a much greater guard for the schema of an object
and enables greater performance by allowing both type safety of the underlying
data and removing the need to search by both a key and a value.

Best,
-jay





[openstack-dev] Re: Re: [Nova][Neutron][Cinder][Heat] Should we support tags for os resources?

2014-04-21 Thread Huangtianhua
Thanks very much.

I have registered the blueprint for nova:
https://blueprints.launchpad.net/nova/+spec/add-tags-for-os-resources

The simple plan is:

1. Add the tags API (create tags/delete tags/describe tags) to the v3 API
2. Change the implementation for instances from "metadata" to "tags"

Your suggestions?
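To make the plan concrete, the calls could look roughly like this (a
hypothetical sketch of the proposal only; the final routes and payloads would
be settled in the spec):

  POST   /v3/servers/{server_id}/tags      # create tags: {"tags": ["web", "prod"]}
  GET    /v3/servers/{server_id}/tags      # describe tags
  DELETE /v3/servers/{server_id}/tags/web  # delete a single tag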

Thanks
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: April 22, 2014 3:46
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Re: [Nova][Neutron][Cinder][Heat] Should we support tags for os resources?

Absolutely. Feel free.

On Mon, Apr 21, 2014 at 4:48 AM, Huangtianhua huangtian...@huawei.com wrote:
I plan to register a blueprint in nova to record this. Can I?


-邮件原件-
发件人: Jay Pipes [mailto:jaypi...@gmail.commailto:jaypi...@gmail.com]
发送时间: 2014年4月20日 21:06
收件人: openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
主题: Re: [openstack-dev] [Nova][Neutron][Cinder][Heat]Should we support tags for 
os resources?

On Sun, 2014-04-20 at 08:35 +, Huangtianhua wrote:
 Hi all:

 Currently, the EC2 API of OpenStack only has tag support (metadata)
 for instances. And there is already a blueprint to add support
 for volumes and volume snapshots using "metadata".

 A lot of resources, such as
 image/subnet/securityGroup/networkInterface (port), support
 tags in AWS.

 I think we should support tags for these resources. There may be no
 "metadata" property for these resources, so we should add
 "metadata" to support resource tags, and change the related APIs.

Hi Tianhua,

In OpenStack, generally, the choice was made to use maps of key/value pairs 
instead of lists of strings (tags) to annotate objects exposed in the REST 
APIs. OpenStack REST APIs inconsistently call these maps of key/value pairs:

 * properties (Glance, Cinder Image, Volume respectively)
 * extra_specs (Nova InstanceType)
 * metadata (Nova Instance, Aggregate and InstanceGroup, Neutron)
 * metadetails (Nova Aggregate and InstanceGroup)
 * system_metadata (Nova Instance -- differs from normal metadata in that 
the key/value pairs are 'owned' by Nova, not a user...)

Personally, I think tags are a cleaner way of annotating objects when the 
annotation is coming from a normal user. Tags represent by far the most common 
way for REST APIs to enable user-facing annotation of objects in a way that is 
easy to search on. I'd love to see support for tags added to any 
searchable/queryable object in all of the OpenStack APIs.

I'd also like to see cleanup of the aforementioned inconsistencies in how maps 
of key/value pairs are both implemented and named throughout the OpenStack 
APIs. Specifically, I'd like to see this implemented in the next major version 
of the Compute API:

 * Removal of the metadetails term
 * All key/value pairs can only be changed by users with elevated privileges,
i.e. system-controlled (normal users should use tags)
 * Call all these key/value pair combinations properties -- technically,
metadata is data about data, like the size of an integer. These key/value
pairs are just data, not data about data.
 * Identify key/value pairs that are relied on by all of Nova to be a specific
key and value combination, and make these things actual real attributes on some
object model -- since that is a much greater guard for the schema of an object
and enables greater performance by allowing both type safety of the underlying
data and removing the need to search by both a key and a value.

Best,
-jay




[openstack-dev] [Heat][Nova][Neutron] Detach interface will delete the port

2014-04-17 Thread Huangtianhua
Hi all,

Port is a resource defined in Heat, and heat supports these actions: create a
port, delete a port, attach to a server, detach from a server.

But we can't re-attach a port that has been detached.

-
There is such a scenario:


1. Create a stack with this template:

..
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: ImageId }
      flavor: { get_param: InstanceType }
      networks: [ { port: { Ref: instance_port } } ]

  instance_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: Network }

Heat will create a port and a server, and attach the port to the server.



2. I want to attach the port to another server, so I update the stack with a new template:

..
resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: ImageId }
      flavor: { get_param: InstanceType }

  my_instance2:
    type: OS::Nova::Server
    properties:
      image: { get_param: ImageId }
      flavor: { get_param: InstanceType }
      networks: [ { port: { Ref: instance_port } } ]

  instance_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_param: Network }

Heat invokes the nova detach_interface API to detach the interface, then tries
to attach the port to the new server.

But the stack update fails, with a 404 "port not found" error raised by
neutron, because the port was deleted during the detach.



There is no real detach API for heat to invoke: the nova API detach_interface
invokes the Neutron API delete_port, and then the port is deleted.
   
---

I think there are two solutions:

First:
Heat gets the port information before the detach, and re-creates the port
before the attach.
But I think this looks ugly and increases the risk of failure on re-create.

Second:
Neutron provides a detach_port API to nova, so that nova can offer heat a real
detach rather than a delete.

What do you think?

Cheers

Tianhua


