Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-07-22 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Hongbin,

This is really a good idea, because it will remove much of the work of 
implementing loops and conditional branches with Heat ResourceGroup. But as Kevin 
pointed out in the mail below, it needs a careful upgrade/migration path.

Meanwhile, as for the blueprint for supporting multiple flavors 
(https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor), we 
have implemented a proof of concept/prototype based on the current 
ResourceGroup method (see the design spec 
https://review.openstack.org/#/c/345745/ for details).

I am wondering whether we can continue with the implementation of multiple-flavor 
support based on the current ResourceGroup approach for now, or do you have any 
plan for when to implement the "manually managing the bay nodes" idea?

Regards,
Gary

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: Tuesday, May 17, 2016 3:01 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the 
bay nodes

Sounds ok, but there needs to be a careful upgrade/migration path, where both 
are supported until after all pods are migrated out of nodes that are in the 
resourcegroup.

Thanks,
Kevin


From: Hongbin Lu
Sent: Sunday, May 15, 2016 3:49:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Discuss the idea of manually managing the bay 
nodes
Hi all,

This is a continued discussion from the design summit. To recap, Magnum 
manages bay nodes by using ResourceGroup from Heat. This approach works, but it 
makes it infeasible to manage heterogeneity across bay nodes, which is a 
frequently demanded feature. As an example, there is a request to provision bay 
nodes across availability zones [1]. There is another request to provision bay 
nodes with different sets of flavors [2]. For the requested features above, 
ResourceGroup won't work very well.

The proposal is to remove the usage of ResourceGroup and manually create a Heat 
stack for each bay node. For example, to create a cluster with 2 masters 
and 3 minions, Magnum is going to manage 6 Heat stacks (instead of 1 big Heat 
stack as it does right now):
* A kube cluster stack that manages the global resources
* Two kube master stacks that manage the two master nodes
* Three kube minion stacks that manage the three minion nodes

The proposal might require an additional API endpoint to manage nodes or a 
group of nodes. For example:
$ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 
--availability-zone us-east-1 
$ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 
--availability-zone us-east-2 ...
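
To make the decomposition concrete, here is a minimal, hypothetical sketch (not 
part of the proposal itself; the parameter names and image are assumptions) of 
the kind of single-node template Magnum could instantiate once per node, so that 
"nodegroup-create --count 3" would yield three independent stacks:

parameters:
  flavor:
    type: string
  availability_zone:
    type: string
  image:
    type: string
    default: fedora-atomic        # assumed image name

resources:
  kube_minion:
    type: OS::Nova::Server        # real templates also wire in ports, wait conditions, cloud-init, ...
    properties:
      flavor: { get_param: flavor }
      availability_zone: { get_param: availability_zone }
      image: { get_param: image }

outputs:
  kube_minion_ip:
    value: { get_attr: [kube_minion, first_address] }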

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
[2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-06-13 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Hongbin,

Thank you for your guidance. I’ll take a look at the related existing 
blueprints and patches to see whether there are any duplicated jobs first.

Regards,
Gary

From: Hongbin Lu [mailto:hongbin...@huawei.com]
Sent: Monday, June 13, 2016 10:58 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Gary,

It is hard to tell whether your change fits into Magnum upstream or not without 
further details. I encourage you to upload your changes to Gerrit, so 
that we can review and discuss them inline. Also, keep in mind that the change 
might be rejected if it doesn't fit into the upstream objectives or if it 
duplicates other existing work, but I hope that won't discourage your 
contribution. If your change is related to Ironic, we might ask you to 
coordinate your work with Spyros and/or others who are working on the Ironic 
integration.

Best regards,
Hongbin

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: June-13-16 3:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Hi Gary.

On 13 June 2016 at 09:06, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) 
<li-gong.d...@hpe.com> wrote:
Hi Tom/All,

>6. Ironic Integration: 
>https://etherpad.openstack.org/p/newton-magnum-ironic-integration
>- Start the implementation immediately
>- Prefer quick work-around for identified issues (cinder volume attachment, 
>variation of number of ports, etc.)

>We need to implement a bay template that can use a flat networking model as 
>this is the only networking model Ironic currently supports. Multi-tenant 
>networking is imminent. This should be done before work on an Ironic template 
>starts.

We have already implemented a bay template that uses a flat networking model, 
plus other Python code (enabling Magnum to find the correct Heat template), which 
is used in our own project.
What do you think of this feature? If you think it is necessary for Magnum, I 
can contribute this code to Magnum upstream.

This feature is useful to Magnum and there is a blueprint for it:
https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips
You can add some notes on the whiteboard about your proposed change.

As for the Ironic integration, we should modify the existing templates; there
is work in progress on that: https://review.openstack.org/#/c/320968/

By the way, did you add new YAML files or modify the existing kubemaster,
kubeminion and kubecluster ones?

Cheers,
Spyros


Regards,
Gary Duan


-Original Message-
From: Cammann, Tom
Sent: Tuesday, May 03, 2016 1:12 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Thanks for the write up Hongbin and thanks to all those who contributed to the 
design summit. A few comments on the summaries below.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

We need to implement a bay template that can use a flat networking model as 
this is the only networking model Ironic currently supports. Multi-tenant 
networking is imminent. This should be done before work on an Ironic template 
starts.

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad above

Ideally we need to turn this list into a set of actions which we can implement 
over the cycle, e.g. create a BP to remove the requirement for LBaaS.

9. Magnum Heat template version: 
https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for 
major changes.

We decided only bay driver versioning was required. The template and template 
driver do not need versioning because we can get Heat to pass back 
the template which it used to create the bay.

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by 
cAdvisor, Heapster, etc.

We split this topic into 3 parts: bay telemetry, bay monitoring and container 
monitoring.
Bay telemetry is done around actions such as bay/baymodel CRUD operations. This 
is implemented using Ceilometer notifications.
Bay monitoring is around monitoring the health of individual nodes in the bay 
cluster, and we decided to postpone that work as more investigation is required 
on what this should look like and what users actually need.

Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-06-13 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Spyros,

Thank you for pointing out the blueprint and the patch.
What we have done is modify the existing kubecluster-ironic, kubemaster-ironic 
and kubeminion-ironic YAML files.
I will take a look at the blueprint and the patch you pointed out.

Regards,
Gary

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: Monday, June 13, 2016 3:59 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Hi Gary.

On 13 June 2016 at 09:06, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) 
<li-gong.d...@hpe.com> wrote:
Hi Tom/All,

>6. Ironic Integration: 
>https://etherpad.openstack.org/p/newton-magnum-ironic-integration
>- Start the implementation immediately
>- Prefer quick work-around for identified issues (cinder volume attachment, 
>variation of number of ports, etc.)

>We need to implement a bay template that can use a flat networking model as 
>this is the only networking model Ironic currently supports. Multi-tenant 
>networking is imminent. This should be done before work on an Ironic template 
>starts.

We have already implemented a bay template that uses a flat networking model, 
plus other Python code (enabling Magnum to find the correct Heat template), which 
is used in our own project.
What do you think of this feature? If you think it is necessary for Magnum, I 
can contribute this code to Magnum upstream.

This feature is useful to Magnum and there is a blueprint for it:
https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips
You can add some notes on the whiteboard about your proposed change.

As for the Ironic integration, we should modify the existing templates; there
is work in progress on that: https://review.openstack.org/#/c/320968/

By the way, did you add new YAML files or modify the existing kubemaster,
kubeminion and kubecluster ones?

Cheers,
Spyros


Regards,
Gary Duan


-Original Message-
From: Cammann, Tom
Sent: Tuesday, May 03, 2016 1:12 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Thanks for the write up Hongbin and thanks to all those who contributed to the 
design summit. A few comments on the summaries below.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

We need to implement a bay template that can use a flat networking model as 
this is the only networking model Ironic currently supports. Multi-tenant 
networking is imminent. This should be done before work on an Ironic template 
starts.

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad above

Ideally we need to turn this list into a set of actions which we can implement 
over the cycle, e.g. create a BP to remove the requirement for LBaaS.

9. Magnum Heat template version: 
https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for 
major changes.

We decided only bay driver versioning was required. The template and template 
driver do not need versioning because we can get Heat to pass back 
the template which it used to create the bay.

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by 
cAdvisor, Heapster, etc.

We split this topic into 3 parts: bay telemetry, bay monitoring and container 
monitoring.
Bay telemetry is done around actions such as bay/baymodel CRUD operations. This 
is implemented using Ceilometer notifications.
Bay monitoring is around monitoring the health of individual nodes in the bay 
cluster, and we decided to postpone that work as more investigation is required on 
what this should look like and what users actually need.
Container monitoring focuses on what containers are running in the bay and 
general usage of the bay COE. We decided this will be completed by Magnum 
by adding access to cAdvisor/Heapster, baking in access to cAdvisor by 
default.

- Manually manage bay nodes (instead of being managed by Heat ResourceGroup): 
It can address the use case of heterogeneity of bay nodes (e.g. different 
availability zones, flavors), but the details need to be elaborated further.

The idea revolves around creating a Heat stack for each node in the bay. This 
idea shows a lot of promise but needs more investigation and isn't a current 
priority.

Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-06-13 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Tom/All,

>6. Ironic Integration: 
>https://etherpad.openstack.org/p/newton-magnum-ironic-integration
>- Start the implementation immediately
>- Prefer quick work-around for identified issues (cinder volume attachment, 
>variation of number of ports, etc.)

>We need to implement a bay template that can use a flat networking model as 
>this is the only networking model Ironic currently supports. Multi-tenant 
>networking is imminent. This should be done before work on an Ironic template 
>starts.

We have already implemented a bay template that uses a flat networking model, 
plus other Python code (enabling Magnum to find the correct Heat template), which 
is used in our own project.
What do you think of this feature? If you think it is necessary for Magnum, I 
can contribute this code to Magnum upstream.

Regards,
Gary Duan
  

-Original Message-
From: Cammann, Tom 
Sent: Tuesday, May 03, 2016 1:12 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit

Thanks for the write up Hongbin and thanks to all those who contributed to the 
design summit. A few comments on the summaries below.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

We need to implement a bay template that can use a flat networking model as 
this is the only networking model Ironic currently supports. Multi-tenant 
networking is imminent. This should be done before work on an Ironic template 
starts.

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad above

Ideally we need to turn this list into a set of actions which we can implement 
over the cycle, e.g. create a BP to remove the requirement for LBaaS.

9. Magnum Heat template version: 
https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for 
major changes.

We decided only bay driver versioning was required. The template and template 
driver do not need versioning because we can get Heat to pass back 
the template which it used to create the bay.

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by 
cAdvisor, Heapster, etc.

We split this topic into 3 parts: bay telemetry, bay monitoring and container 
monitoring.
Bay telemetry is done around actions such as bay/baymodel CRUD operations. This 
is implemented using Ceilometer notifications.
Bay monitoring is around monitoring the health of individual nodes in the bay 
cluster, and we decided to postpone that work as more investigation is required on 
what this should look like and what users actually need.
Container monitoring focuses on what containers are running in the bay and 
general usage of the bay COE. We decided this will be completed by Magnum 
by adding access to cAdvisor/Heapster, baking in access to cAdvisor by 
default.

- Manually manage bay nodes (instead of being managed by Heat ResourceGroup): 
It can address the use case of heterogeneity of bay nodes (e.g. different 
availability zones, flavors), but the details need to be elaborated further.

The idea revolves around creating a heat stack for each node in the bay. This 
idea shows a lot of promise but needs more investigation and isn’t a current 
priority.

Tom


From: Hongbin Lu 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, 30 April 2016 at 05:05
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [magnum] Notes for Magnum design summit

Hi team,

For reference, below is a summary of the discussions/decisions in Austin design 
summit. Please feel free to point out if anything is incorrect or incomplete. 
Thanks.

1. Bay driver: https://etherpad.openstack.org/p/newton-magnum-bay-driver
- Refactor existing code into bay drivers
- Each bay driver will be versioned
- Individual bay drivers can have API extensions and the Magnum CLI could load the 
extensions dynamically
- Work incrementally and support same API before and after the driver change

2. Bay lifecycle operations: 
https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operations
- Support the following operations: reset the bay, rebuild the bay, rotate TLS 
certificates in the bay, adjust storage of the bay, scale the bay.

3. Scalability: 

Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-25 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Ricardo,

This is a really good suggestion. I'd like to see whether we can use 
"foreach"/"repeat" with ResourceGroup in Heat.

Regards,
Gary Duan
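
For reference on the "foreach"/"repeat" idea above: Heat's repeat/for_each 
intrinsic expands a list inside a single property value rather than creating 
additional resources, so on its own it does not replace ResourceGroup for 
generating N servers. A minimal sketch of its usual form (resource name and 
ports are illustrative only):

resources:
  web_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        # repeat builds one rule per port in the list below
        repeat:
          for_each:
            <%port%>: [80, 443, 8080]
          template:
            protocol: tcp
            port_range_min: <%port%>
            port_range_max: <%port%>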

-Original Message-
From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
Sent: Thursday, April 21, 2016 3:49 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to 
provision minion nodes

Hi Hongbin.

On Wed, Apr 20, 2016 at 8:13 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
>
>
>
>
> From: Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) 
> [mailto:li-gong.d...@hpe.com]
> Sent: April-20-16 3:39 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to 
> provision minion nodes
>
>
>
> Hi Folks,
>
>
>
> We are considering whether Magnum can support 2 Nova flavors to 
> provision Kubernetes and other COE minion nodes.
>
> This requirement comes from the below use cases:
>
> -  There are 2 kinds of baremetal machines at the customer site: one is
> legacy machines which don't support UEFI secure boot, and the others are 
> new machines which do support UEFI secure boot. Users want to use Magnum 
> to provision a Magnum bay of Kubernetes from these 2 kinds of 
> baremetal machines, and for the machines supporting secure boot, they 
> want to use UEFI secure boot to boot them up. Two Kubernetes 
> labels (secure-booted and
> non-secure-booted) are created, and users can deploy their 
> data-sensitive/critical workloads/containers/pods on the baremetal 
> machines which are secure-booted.
>
>
>
> This requirement requires Magnum to support 2 Nova flavors (one with
> "extra_spec: secure_boot=True" and the other without it), based on the 
> Ironic feature 
> (https://specs.openstack.org/openstack/ironic-specs/specs/kilo-implemented/uefi-secure-boot.html).
>
>
>
> Could you kindly give me some comments on this requirement, or on whether 
> it is reasonable from your point of view? If you agree, we can write a design 
> spec and implement this feature.
>
>
>
> I think the requirement is reasonable, but I would like to solve the 
> problem in a generic way. In particular, there could be another user 
> who might ask for N nova flavors to provision COE nodes in the future. 
> A challenge to support N groups of Nova instances is how to express 
> arbitrary number of resource groups (with different flavors) in a Heat 
> template (Magnum uses Heat template to provision COE clusters). Heat 
> doesn’t seem to support the logic of looping from 1 to N. There could 
> be other challenges/complexities along the way. If the proposed design 
> can address all the challenges and the implementation is clean, I am 
> OK to add support for this feature. Thoughts from others?

This looks similar to the way we looked at passing a list of availability 
zones. Mathieu asked and got a good answer:
http://lists.openstack.org/pipermail/openstack-dev/2016-March/088175.html

Something similar can probably be used to pass multiple flavors? Just in case 
it helps.
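
For context, a minimal sketch of that pattern, reconstructed from memory (so 
treat the details as assumptions rather than the thread's exact recipe): a 
per-member value map is passed into the group and each member picks its entry 
via the %index% substitution; a map of flavors could be handled the same way.

parameters:
  availability_zones:
    # keyed by member index; a json map is used here so that the string
    # produced by %index% can index it directly
    type: json
    default: {"0": "us-east-1", "1": "us-east-2"}

resources:
  minions:
    type: OS::Heat::ResourceGroup
    properties:
      count: 2
      resource_def:
        type: OS::Nova::Server            # stands in for the nested minion template
        properties:
          name: kube-minion-%index%       # %index% is replaced with the member index
          image: fedora-atomic            # assumed image name
          flavor: m1.small                # assumed flavor
          availability_zone: { get_param: [availability_zones, '%index%'] }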

Cheers,
  Ricardo

>
>
>
> Regards,
>
> Gary
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-21 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Hongbin,

Thank you for your comments.
I had thought about supporting multiple Nova flavors, and as you pointed out, 
it introduces the additional challenge of expressing a map containing multiple 
flavors/resource groups with the corresponding node counts. If we just support 2 
Nova flavors, we can create 3 resource groups in kubecluster.yaml: one for the 
master nodes, the second for minion nodes of the first flavor, and the third 
for minion nodes of the second flavor (see the sketch below).
But if you think it is necessary to support an arbitrary number of Nova flavors, 
I need to consider a more generic way to implement it.
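
A rough sketch of that interim three-group layout (parameter and template names 
are illustrative, not the actual kubecluster.yaml):

parameters:
  master_flavor:               { type: string }
  secure_minion_flavor:        { type: string }
  nonsecure_minion_flavor:     { type: string }
  number_of_masters:           { type: number, default: 1 }
  number_of_secure_minions:    { type: number, default: 1 }
  number_of_nonsecure_minions: { type: number, default: 1 }

resources:
  kube_masters:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: number_of_masters }
      resource_def:
        type: kubemaster.yaml              # nested master template
        properties:
          flavor: { get_param: master_flavor }

  kube_minions_secure:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: number_of_secure_minions }
      resource_def:
        type: kubeminion.yaml              # nested minion template
        properties:
          flavor: { get_param: secure_minion_flavor }

  kube_minions_nonsecure:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: number_of_nonsecure_minions }
      resource_def:
        type: kubeminion.yaml
        properties:
          flavor: { get_param: nonsecure_minion_flavor }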

Regards,
Gary Duan


From: Hongbin Lu [mailto:hongbin...@huawei.com]
Sent: Thursday, April 21, 2016 2:13 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to 
provision minion nodes



From: Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) [mailto:li-gong.d...@hpe.com]
Sent: April-20-16 3:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision 
minion nodes

Hi Folks,

We are considering whether Magnum can support 2 Nova flavors to provision 
Kubernetes and other COE minion nodes.
This requirement comes from the below use cases:

-  There are 2 kinds of baremetal machines at the customer site: one is 
legacy machines which don't support UEFI secure boot, and the others are new 
machines which do support UEFI secure boot. Users want to use Magnum to provision 
a Magnum bay of Kubernetes from these 2 kinds of baremetal machines, and for the 
machines supporting secure boot, they want to use UEFI secure boot to boot 
them up. Two Kubernetes labels (secure-booted and non-secure-booted) are 
created, and users can deploy their data-sensitive/critical 
workloads/containers/pods on the baremetal machines which are secure-booted.

This requirement requires Magnum to support 2 Nova flavors (one with "extra_spec: 
secure_boot=True" and the other without it), based on the Ironic feature 
(https://specs.openstack.org/openstack/ironic-specs/specs/kilo-implemented/uefi-secure-boot.html).

Could you kindly give me some comments on this requirement, or on whether it is 
reasonable from your point of view? If you agree, we can write a design spec and 
implement this feature.

I think the requirement is reasonable, but I would like to solve the problem in 
a generic way. In particular, there could be another user who might ask for N 
nova flavors to provision COE nodes in the future. A challenge to support N 
groups of Nova instances is how to express an arbitrary number of resource groups 
(with different flavors) in a Heat template (Magnum uses Heat templates to 
provision COE clusters). Heat doesn't seem to support the logic of looping from 
1 to N. There could be other challenges/complexities along the way. If the 
proposed design can address all the challenges and the implementation is clean, 
I am OK to add support for this feature. Thoughts from others?

Regards,
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-21 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Eli,

This is exactly what I want. If you think this requirement is reasonable, 
I'd like to submit a design spec so that we can discuss it in detail.

Regards,
Gary Duan

From: Eli Qiao [mailto:liyong.q...@intel.com]
Sent: Wednesday, April 20, 2016 5:08 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to 
provision minion nodes


Kannan,

I think Duan Li is talking about using both kinds of nodes (secure-booted and 
non-secure-booted) to deploy *minion* nodes.

The scenario may look like this:
let's say there are 2 flavors:

  *   flavor_secure
  *   flavor_none_secure
For now, flavor-id in a baymodel can only be set to one value. Duan Li's 
requirement is to use flavor-id = [flavor_none_secure, flavor_secure]
and provision one cluster whose minion nodes are built from the 2 types of flavor; 
then, after the cluster (bay) provisioning is finished, pass a label to
let the k8s cluster choose a minion node on which to start a pod.


For now, Magnum doesn't support this yet. I think it is good to have, but the 
implementation may differ per COE, since after we
provision the bay the scheduling work is done by k8s/swarm/mesos.
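
To make the label part concrete, a hypothetical Kubernetes pod spec (all names 
are made up) that pins a workload onto the secure-booted group could look 
roughly like this, assuming the minions were registered with a matching node 
label:

apiVersion: v1
kind: Pod
metadata:
  name: sensitive-workload                    # hypothetical pod name
spec:
  nodeSelector:
    secure-booted: "true"                     # assumes minions carry this node label
  containers:
    - name: app
      image: registry.example.com/app:latest  # hypothetical image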

Eli.

On 20 April 2016 at 16:36, Kai Qiang Wu wrote:
Hi Duan Li,

Not sure if I get your point very clearly.

1> Magnum does support this:
https://github.com/openstack/magnum/blob/master/magnum/api/controllers/v1/baymodel.py#L65

flavor-id for the minion nodes
master-flavor-id for the master nodes

So your K8s cluster can already have these two kinds of flavors.


2> One question about the Ironic case (I see you deploy on Ironic): I don't 
think the Magnum templates support the Ironic case right now,
as the Ironic VLAN-related features are still being developed and not merged (many 
patches are under review; see for example 
https://review.openstack.org/#/c/277853).


I am not sure how you would use Ironic for a k8s cluster?


--

Best Regards, Eli Qiao (乔立勇)

Intel OTC China
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-20 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi KaiQiang,

Thank you for your reply.

As for 1), you are correct that Magnum does support 2 flavors (one for the 
master nodes and the other for the minion nodes). What I want to address is 
whether we should support 2 or N Nova flavors ONLY for the minion nodes.

As for 2), we have made the Magnum templates work with Ironic (only for 
Fedora Atomic/Kubernetes) to create a Magnum bay of Kubernetes, and it uses a 
flat network for now (as Ironic doesn't support VLAN networks yet) in our 
prototype environment. Currently we just use Heat templates (ResourceGroup) -> 
Nova::Server -> the Ironic driver as the Nova hypervisor to implement it.
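
For illustration, a minimal sketch of what such a flat-network server definition 
can look like (the names are assumptions; the real templates also carry ports, 
wait conditions and software configs):

parameters:
  flat_network:
    type: string
    description: pre-created flat provider network shared with the Ironic nodes

resources:
  kube_minion:
    type: OS::Nova::Server
    properties:
      image: fedora-atomic            # assumed image name
      flavor: baremetal               # assumed Ironic flavor
      networks:
        - network: { get_param: flat_network }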

Regards,
Gary

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: Wednesday, April 20, 2016 4:37 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to 
provision minion nodes


Hi Duan Li,

Not sure if I get your point very clearly.

1> Magnum does support this:
https://github.com/openstack/magnum/blob/master/magnum/api/controllers/v1/baymodel.py#L65

flavor-id for the minion nodes
master-flavor-id for the master nodes

So your K8s cluster can already have these two kinds of flavors.


2> One question about the Ironic case (I see you deploy on Ironic): I don't 
think the Magnum templates support the Ironic case right now,
as the Ironic VLAN-related features are still being developed and not merged (many 
patches are under review; see for example 
https://review.openstack.org/#/c/277853).


I am not sure how you would use Ironic for a k8s cluster?

Also, at this summit 
(https://etherpad.openstack.org/p/magnum-newton-design-summit-topics) we will 
have a session about the Ironic cases;
here it is: Ironic Integration: Add support for the Ironic virt-driver.

If you have ways to make Ironic work with Magnum, we welcome your contribution 
on that topic.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: "Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)" 
<li-gong.d...@hpe.com>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: 20/04/2016 03:46 pm
Subject: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision 
minion nodes





Hi Folks,

We are considering whether Magnum can support 2 Nova flavors to provision 
Kubernetes and other COE minion nodes.
This requirement comes from the use cases below:
- There are 2 kinds of baremetal machines at the customer site: one is legacy 
machines which don't support UEFI secure boot, and the others are new machines 
which do support UEFI secure boot. Users want to use Magnum to provision a Magnum 
bay of Kubernetes from these 2 kinds of baremetal machines, and for the machines 
supporting secure boot, they want to use UEFI secure boot to boot them up. Two 
Kubernetes labels (secure-booted and non-secure-booted) are created, and users 
can deploy their data-sensitive/critical workloads/containers/pods on the 
baremetal machines which are secure-booted.

This requirement requires Magnum to support 2 Nova flavors (one with "extra_spec: 
secure_boot=True" and the other without it), based on the Ironic feature 
(https://specs.openstack.org/openstack/ironic-specs/specs/kilo-implemented/uefi-secure-boot.html).

Could you kindly give me some comments on this requirement, or on whether it is 
reasonable from your point of view? If you agree, we can write a design spec and 
implement this feature.

Regards,
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-20 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Folks,

We are considering whether Magnum can support 2 Nova flavors to provision 
Kubernetes and other COE minion nodes.
This requirement comes from the use cases below:

-  There are 2 kinds of baremetal machines at the customer site: one is 
legacy machines which don't support UEFI secure boot, and the others are new 
machines which do support UEFI secure boot. Users want to use Magnum to provision 
a Magnum bay of Kubernetes from these 2 kinds of baremetal machines, and for the 
machines supporting secure boot, they want to use UEFI secure boot to boot 
them up. Two Kubernetes labels (secure-booted and non-secure-booted) are 
created, and users can deploy their data-sensitive/critical 
workloads/containers/pods on the baremetal machines which are secure-booted.

This requirement requires Magnum to support 2 Nova flavors (one with "extra_spec: 
secure_boot=True" and the other without it), based on the Ironic feature 
(https://specs.openstack.org/openstack/ironic-specs/specs/kilo-implemented/uefi-secure-boot.html).
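
As an illustration only (the exact extra_specs key should be checked against the 
Ironic spec linked above, and all names here are made up), the two flavors could 
be modelled in Heat roughly like this:

resources:
  secure_boot_flavor:
    type: OS::Nova::Flavor
    properties:
      name: baremetal-secure                 # hypothetical name
      ram: 65536
      vcpus: 16
      disk: 100
      extra_specs:
        "capabilities:secure_boot": "true"   # assumed capability key

  legacy_flavor:
    type: OS::Nova::Flavor
    properties:
      name: baremetal-legacy                 # hypothetical name
      ram: 65536
      vcpus: 16
      disk: 100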

Could you kindly give me some comments on this requirement, or on whether it is 
reasonable from your point of view? If you agree, we can write a design spec and 
implement this feature.

Regards,
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-03-31 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
+1 for Eli.

Regards,
Gary Duan

From: Hongbin Lu [mailto:hongbin...@huawei.com]
Sent: Friday, April 01, 2016 2:18 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer 
team

Hi all,

Eli Qiao has been consistently contributing to Magnum for a while. His 
contributions started about 10 months ago. Along the way, he implemented 
several important blueprints and fixed a lot of bugs. His contributions cover 
various aspects (e.g. APIs, conductor, unit/functional tests, all the COE 
templates, etc.), which shows that he has a good understanding of almost every 
piece of the system. The feature set he contributed to has proven to be 
beneficial to the project. For example, the gate testing framework he heavily 
contributed to is what we rely on every day. His code reviews are also 
consistent and useful.

I am happy to propose Eli Qiao to be a core reviewer of Magnum team. According 
to the OpenStack Governance process [1], we require a minimum of 4 +1 votes 
within a 1 week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, Eli is not able to join the core team 
and needs to wait 30 days to reapply.

The voting is open until Thursday, April 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-14 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Sergey,

Thanks a lot. 
What I am using is the Liberty release of Heat in a devstack environment.

I'll provide my trace log later.

Regards,
Gary

-Original Message-
From: Sergey Kraynev [mailto:skray...@mirantis.com] 
Sent: Friday, March 11, 2016 10:23 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

Hi Gary,

I have tried your template and it works correctly for me. Note that I used a 
private network (because my servers have no public IP in the template).

So your issue looks like a strange bug, because I cannot reproduce it.
Could you share the traceback of your error and also provide information about your 
Heat version? Please create a new bug with all this data and ping us to review it.

On 11 March 2016 at 08:25, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) 
<li-gong.d...@hpe.com> wrote:
> Hi Sergey,
>
> Thanks for your reply.
>
> Thanks for your pointing out that "depends_on" is not needed when we have 
> already used "get_attr".

So as Zane pointed out, when we use get_attr it is expected that we start creating 
rg_b when rg_a is finally completed/created, because all the information (in 
your case its attributes) will be available after the creation of rg_a.

In Heat we have two types of dependencies: explicit and implicit. Implicit 
dependencies are created by using some of the Heat intrinsic functions, while 
depends_on adds an explicit dependency. All these dependencies work in the same 
way: the dependent resource will be created once all of its dependencies have been resolved 
(created).
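
To restate that with a tiny sketch (resource names and properties are 
illustrative): the get_attr reference below already orders creation, so the 
commented-out depends_on would add nothing.

resources:
  server_a:
    type: OS::Nova::Server
    properties:
      image: cirros               # assumed image
      flavor: m1.tiny             # assumed flavor

  server_b:
    type: OS::Nova::Server
    # depends_on: server_a        # explicit dependency; redundant here
    properties:
      image: cirros
      flavor: m1.tiny
      metadata:
        # implicit dependency: server_b is not created until server_a is
        # CREATE_COMPLETE and this attribute can be resolved
        peer_ip: { get_attr: [server_a, first_address] }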

>
>>you create in rg_a some Server and probably it goes to active state 
>>before ip address becomes available for get_attr. It is necessary to 
>>check, but if it's try to add wait condition for this resource, then 
>>you will get created rg_a with fully available resources and I suppose 
>>IP will be available
>
> Do you mean that with "depends_on" functionalities, Heat will launch another 
> resource group(in my case, "rg_b") as soon as the server in "rg_a" becomes 
> "active" state?
> Actually, in my real program code, there is  a wait condition, but it is 
> located in the server template, not Resource group(in my case, it is 
> "b.yaml), which is like:
> --
> rg_a_wc_notify:
> type: OS::Heat::SoftwareConfig
> properties:
>   group: ungrouped
>   config:
> str_replace:
>   template: |
> #!/bin/bash -v
> wc_notify --data-binary '{"status": "SUCCESS"}'
>   params:
> wc_notify: {get_attr: [master_wait_handle, curl_cli]}
> --
> Is it the wait condition which you mentioned in " but if it's try to add wait 
> condition for this resource"? or you want this wait condition to be added to 
> "a.yaml"(the template declaring resource group)?
>
> And as per my observation, only after Heat receives the signal of "SUCCESS", 
> then it try to begin launch "rg_b"(my another server in another resource 
> group).
>
> I am wondering whether there is a chance that, the "IP" information is 
> available but Heat doesn't try to get it until the creation of the 2 resource 
> groups(rg_a and rg_b) is completed?



>
> Regards,
> Gary
>
> -Original Message-
> From: Sergey Kraynev [mailto:skray...@mirantis.com]
> Sent: Wednesday, March 09, 2016 6:42 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [heat] issue of ResourceGroup in Heat 
> template
>
> Hi Gary,
>
>
> First of all you don't need to use "depends_on", because using "get_attr" 
> already create implicit dependency from rg_a.
> About getting Null instead of real Ip address:
> It sounds like a bug, but IMO, it's expected behavior, because I suppose it 
> happens due to:
>  - you create in rg_a some Server and probably it goes to active state before 
> ip address becomes available for get_attr. It is necessary to check, but if 
> it's try to add wait condition for this resource, then you will get created 
> rg_a with fully available resources and I suppose IP will be available.
>
> On 9 March 2016 at 13:14, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) 
> <li-gong.d...@hpe.com> wrote:
>> Hi,
>>
>>
>>
>> I have 3 Heat templates using ResourceGroup. There are 2 resource 
>> groups(rg_a and rg_b) and rg_b depends on rg_a.  and rg_b requires 
>> the IP address of rg_a as the paremeter of rg_b. I use “rg_a_public_ip: 
>> {get_attr:
>> [rg_a, rg_a_public_ip]}” to get the 

Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-10 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Jay,

Thanks for your reply.

> Is this still an issue when you remove the resource group and create the 
> resource directly? The count of 1 might just be for testing purposes, but if 
> that's the end goal you should be able to drop the group entirely.

Unfortunately, the count of 1 is just for testing purposes, and my end goal is 
that the count should be passed in as a parameter.

Regards,
Gary

-Original Message-
From: Jay Dobies [mailto:jason.dob...@redhat.com] 
Sent: Thursday, March 10, 2016 5:55 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template



On 3/9/16 4:39 PM, Zane Bitter wrote:
> On 09/03/16 05:42, Sergey Kraynev wrote:
>> Hi Gary,
>>
>>
>> First of all you don't need to use "depends_on", because using 
>> "get_attr" already create implicit dependency from rg_a.
>> About getting Null instead of real Ip address:
>> It sounds like a bug, but IMO, it's expected behavior, because I 
>> suppose it happens due to:
>>   - you create in rg_a some Server and probably it goes to active 
>> state before ip address becomes available for get_attr. It is 
>> necessary to check, but if it's try to add wait condition for this 
>> resource, then you will get created rg_a with fully available 
>> resources and I suppose IP will be available.
>
> I would have expected the IP address to be available before the server 
> becomes CREATE_COMPLETE. If it isn't then I'd consider that a bug too 
> - as you pointed out, people are relying on the dependency created by 
> get_attr to ensure that they can actually get the attribute.
>
> cheers,
> Zane.
>
>> On 9 March 2016 at 13:14, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) 
>> <li-gong.d...@hpe.com> wrote:
>>> Hi,
>>>
>>>
>>>
>>> I have 3 Heat templates using ResourceGroup. There are 2 resource 
>>> groups(rg_a and rg_b) and rg_b depends on rg_a.  and rg_b requires 
>>> the IP address of rg_a as the paremeter of rg_b. I use 
>>> "rg_a_public_ip:
>>> {get_attr:
>>> [rg_a, rg_a_public_ip]}" to get the IP address of rg_a both in the 
>>> section of rg_b parameters (rg_b/properties/resource_def/properties) 
>>> and the section of outputs.
>>>
>>> As per my observation,  rg_a_public_ip shows "null" in the parameter 
>>> section of rg_b. while rg_a_public_ip shows the correct IP address 
>>> in the outputs section of the yaml file.
>>>
>>>
>>>
>>> My questions are:
>>>
>>> 1)  Does this behavior is expected as designed or this is a bug?
>>>
>>> 2)  What is the alternative solution for the above case(user want
>>> to get
>>> the run-time information of the instance when creating the second 
>>> resource
>>> group)  if this behavior is expected?
>>>
>>>
>>>
>>> --- a.yaml ---
>>>
>>> resources:
>>>
>>> rg_a:
>>>
>>>type: OS::Heat::ResourceGroup
>>>
>>>properties:
>>>
>>>count: 1

Is this still an issue when you remove the resource group and create the 
resource directly? The count of 1 might just be for testing purposes, but if 
that's the end goal you should be able to drop the group entirely.


>>>resource_def:
>>>
>>>type: b.yaml
>>>
>>>properties:
>>>
>>> ...
>>>
>>>
>>>
>>> rg_b:
>>>
>>> type: OS::Heat::ResourceGroup
>>>
>>> depends_on:
>>>
>>>  -rg_a
>>>
>>> properties:
>>>
>>>  count: 2
>>>
>>>  resource_def:
>>>
>>>  type: c.yaml
>>>
>>>  properties:
>>>
>>>  rg_a_public_ip: {get_attr: [rg_a, rg_a_public_ip]}
>>>   the value is "null"
>>>
>>>  ...
>>>
>>>
>>>
>>> outputs:
>>>
>>> rg_a_public_ip: {get_attr: [rg_a, rg_a_public_ip]}
>>> -  the value is correct.
>>>
>>> --
>>>
>>>
>>>
>>> --b.yaml  
>>>
>>> ...
>>>
>>> resources:
>>>
>>>  rg_a:
>>>
>>> type: OS::Nova::Server
>>>
>>> properties:
>>>
>>&

Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-10 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Zane,

Thanks for your reply.
I guess you mean that the IP address of rg_a should be available AS SOON AS/AFTER 
the server of rg_a becomes CREATE_COMPLETE? As Sergey pointed out, there is a 
chance that the IP address might not yet be available when the server of rg_a becomes 
CREATE_COMPLETE. Actually, IMHO, it depends on when a server becomes ACTIVE or 
CREATE_COMPLETE: it could become ACTIVE or CREATE_COMPLETE when the OS is booted 
up but the initialization services (such as the network interfaces starting up) have not 
finished, or only when both the OS itself and the initialization jobs (daemon services up, 
network interfaces up, IP assigned) are done.

Regards,
Gary

-Original Message-
From: Zane Bitter [mailto:zbit...@redhat.com] 
Sent: Thursday, March 10, 2016 5:40 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

On 09/03/16 05:42, Sergey Kraynev wrote:
> Hi Gary,
>
>
> First of all you don't need to use "depends_on", because using 
> "get_attr" already create implicit dependency from rg_a.
> About getting Null instead of real Ip address:
> It sounds like a bug, but IMO, it's expected behavior, because I 
> suppose it happens due to:
>   - you create in rg_a some Server and probably it goes to active 
> state before ip address becomes available for get_attr. It is 
> necessary to check, but if it's try to add wait condition for this 
> resource, then you will get created rg_a with fully available 
> resources and I suppose IP will be available.

I would have expected the IP address to be available before the server becomes 
CREATE_COMPLETE. If it isn't then I'd consider that a bug too - as you pointed 
out, people are relying on the dependency created by get_attr to ensure that 
they can actually get the attribute.

cheers,
Zane.

> On 9 March 2016 at 13:14, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) 
> <li-gong.d...@hpe.com> wrote:
>> Hi,
>>
>>
>>
>> I have 3 Heat templates using ResourceGroup. There are 2 resource 
>> groups(rg_a and rg_b) and rg_b depends on rg_a.  and rg_b requires 
>> the IP address of rg_a as the paremeter of rg_b. I use "rg_a_public_ip: 
>> {get_attr:
>> [rg_a, rg_a_public_ip]}" to get the IP address of rg_a both in the 
>> section of rg_b parameters (rg_b/properties/resource_def/properties) 
>> and the section of outputs.
>>
>> As per my observation,  rg_a_public_ip shows "null" in the parameter 
>> section of rg_b. while rg_a_public_ip shows the correct IP address in 
>> the outputs section of the yaml file.
>>
>>
>>
>> My questions are:
>>
>> 1)  Does this behavior is expected as designed or this is a bug?
>>
>> 2)  What is the alternative solution for the above case(user want to get
>> the run-time information of the instance when creating the second 
>> resource
>> group)  if this behavior is expected?
>>
>>
>>
>> --- a.yaml ---
>>
>> resources:
>>
>> rg_a:
>>
>>type: OS::Heat::ResourceGroup
>>
>>properties:
>>
>>count: 1
>>
>>resource_def:
>>
>>type: b.yaml
>>
>>properties:
>>
>> ...
>>
>>
>>
>> rg_b:
>>
>> type: OS::Heat::ResourceGroup
>>
>> depends_on:
>>
>>  -rg_a
>>
>> properties:
>>
>>  count: 2
>>
>>  resource_def:
>>
>>  type: c.yaml
>>
>>  properties:
>>
>>  rg_a_public_ip: {get_attr: [rg_a, rg_a_public_ip]}
>>   the value is "null"
>>
>>  ...
>>
>>
>>
>> outputs:
>>
>> rg_a_public_ip: {get_attr: [rg_a, rg_a_public_ip]}
>> -  the value is correct.
>>
>> --
>>
>>
>>
>> --b.yaml  
>>
>> ...
>>
>> resources:
>>
>>  rg_a:
>>
>> type: OS::Nova::Server
>>
>> properties:
>>
>>   ...
>>
>> outputs:
>>
>>   rg_a_public_ip:
>>
>>   value: {get_attr: [rg_a, networks, public, 0]}
>>
>> --
>>
>>
>>
>> -- c.yaml 
>>
>> parameters:
>>
>> rg_a_public_ip:
>>
>>   type: string
>>
>>   description: IP of rg_a
>>
>> ...
>>
>> resources:
>>
>> rg_b:
>>
>>  type: OS::Nova::Server
>

Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-10 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Sergey,

Thanks for your reply.

Thanks for pointing out that "depends_on" is not needed when we have 
already used "get_attr".

>you create in rg_a some Server and probably it goes to active state before ip 
>address becomes available for get_attr. It is necessary to check, but if it's 
>try to add wait condition for this resource, then you will get created rg_a 
>with fully available resources and I suppose IP will be available

Do you mean that with the "depends_on" functionality, Heat will launch the other 
resource group (in my case, "rg_b") as soon as the server in "rg_a" reaches the 
"active" state?
Actually, in my real code there is a wait condition, but it is 
located in the server template (in my case, "b.yaml"), not in the resource group 
template, and it looks like this:
-- 
rg_a_wc_notify:
  type: OS::Heat::SoftwareConfig
  properties:
    group: ungrouped
    config:
      str_replace:
        template: |
          #!/bin/bash -v
          wc_notify --data-binary '{"status": "SUCCESS"}'
        params:
          wc_notify: {get_attr: [master_wait_handle, curl_cli]}
--
Is this the wait condition which you mentioned in "but if it's try to add wait 
condition for this resource", or do you want this wait condition to be added to 
"a.yaml" (the template declaring the resource groups)?

As per my observation, only after Heat receives the "SUCCESS" signal 
does it begin to launch "rg_b" (my other server in the other resource group).

I am wondering whether there is a chance that the IP information is 
available but Heat doesn't try to get it until the creation of the 2 resource 
groups (rg_a and rg_b) is completed?

Regards,
Gary 

-Original Message-
From: Sergey Kraynev [mailto:skray...@mirantis.com] 
Sent: Wednesday, March 09, 2016 6:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [heat] issue of ResourceGroup in Heat template

Hi Gary,


First of all, you don't need to use "depends_on", because using "get_attr" 
already creates an implicit dependency on rg_a.
About getting null instead of the real IP address:
it sounds like a bug, but IMO it's expected behavior, because I suppose it 
happens due to the following:
 - you create some server in rg_a and it probably goes to the active state before 
the IP address becomes available for get_attr. It is necessary to check, but if 
you try to add a wait condition for this resource, then rg_a will be created 
with fully available resources and I suppose the IP will be available.

On 9 March 2016 at 13:14, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) 
<li-gong.d...@hpe.com> wrote:
> Hi,
>
>
>
> I have 3 Heat templates using ResourceGroup. There are 2 resource 
> groups(rg_a and rg_b) and rg_b depends on rg_a.  and rg_b requires the 
> IP address of rg_a as the paremeter of rg_b. I use “rg_a_public_ip: {get_attr:
> [rg_a, rg_a_public_ip]}” to get the IP address of rg_a both in the 
> section of rg_b parameters (rg_b/properties/resource_def/properties) 
> and the section of outputs.
>
> As per my observation,  rg_a_public_ip shows “null” in the parameter 
> section of rg_b. while rg_a_public_ip shows the correct IP address in 
> the outputs section of the yaml file.
>
>
>
> My questions are:
>
> 1)  Does this behavior is expected as designed or this is a bug?
>
> 2)  What is the alternative solution for the above case(user want to get
> the run-time information of the instance when creating the second 
> resource
> group)  if this behavior is expected?
>
>
>
> --- a.yaml ---
>
> resources:
>
> rg_a:
>
>   type: OS::Heat::ResourceGroup
>
>   properties:
>
>   count: 1
>
>   resource_def:
>
>   type: b.yaml
>
>   properties:
>
>…
>
>
>
> rg_b:
>
> type: OS::Heat::ResourceGroup
>
> depends_on:
>
> -rg_a
>
> properties:
>
> count: 2
>
> resource_def:
>
> type: c.yaml
>
> properties:
>
> rg_a_public_ip: {get_attr: [rg_a, rg_a_public_ip]}
>   the value is “null”
>
> …
>
>
>
> outputs:
>
>rg_a_public_ip: {get_attr: [rg_a, rg_a_public_ip]}
> -  the value is correct.
>
> --
>
>
>
> --b.yaml  
>
> …
>
> resources:
>
> rg_a:
>
> type: OS::Nova::Server
>
> properties:
>
>  …
>
> outputs:
>
>  rg_a_public_ip:
>
>  value: {get_attr: [rg_a, networks, public, 0]}
>
> ---

[openstack-dev] [heat] issue of ResourceGroup in Heat template

2016-03-09 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi,

I have 3 Heat templates using ResourceGroup. There are 2 resource groups (rg_a 
and rg_b), rg_b depends on rg_a, and rg_b requires the IP address of rg_a 
as a parameter. I use "rg_a_public_ip: {get_attr: [rg_a, 
rg_a_public_ip]}" to get the IP address of rg_a both in the rg_b 
parameters section (rg_b/properties/resource_def/properties) and in the outputs section.
As per my observation, rg_a_public_ip shows "null" in the parameter section of 
rg_b, while rg_a_public_ip shows the correct IP address in the outputs section 
of the YAML file.

My questions are:

1)  Is this behavior expected as designed, or is this a bug?

2)  What is the alternative solution for the above case (the user wants to get 
the run-time information of the instance when creating the second resource 
group) if this behavior is expected?

--- a.yaml ---
resources:
  rg_a:
    type: OS::Heat::ResourceGroup
    properties:
      count: 1
      resource_def:
        type: b.yaml
        properties:
          ...

  rg_b:
    type: OS::Heat::ResourceGroup
    depends_on:
      - rg_a
    properties:
      count: 2
      resource_def:
        type: c.yaml
        properties:
          rg_a_public_ip: {get_attr: [rg_a, rg_a_public_ip]}   # <- the value is "null"
          ...

outputs:
  rg_a_public_ip: {get_attr: [rg_a, rg_a_public_ip]}   # <- the value is correct
--

-- b.yaml --
...
resources:
  rg_a:
    type: OS::Nova::Server
    properties:
      ...

outputs:
  rg_a_public_ip:
    value: {get_attr: [rg_a, networks, public, 0]}
--

-- c.yaml --
parameters:
  rg_a_public_ip:
    type: string
    description: IP of rg_a
  ...

resources:
  rg_b:
    type: OS::Nova::Server
    properties:
      ...

outputs:
  ...
---
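
For reference (an illustrative note, not something verified against these 
templates): OS::Heat::ResourceGroup exposes a member output both as an 
aggregated list and per member, so the two addressing forms below are available; 
whether indexing a single member resolves the null value above would need 
testing.

outputs:
  all_ips:
    # list with one entry per member of rg_a
    value: {get_attr: [rg_a, rg_a_public_ip]}
  first_ip:
    # output of member 0 only (resource.<index>.<attribute> form)
    value: {get_attr: [rg_a, resource.0.rg_a_public_ip]}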

Regards,
Gary

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] publisher metering_secret in ceilometer.conf

2014-08-18 Thread Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
Hi Folks,

I created a new pollster plugin for Ceilometer by:

- adding a new item under the entry point ceilometer.poll.central in the setup.cfg 
file
- adding the implementation code inheriting from plugin.CentralPollster
- adding a new source to pipeline.yaml as below:
---
- name: service_source
  interval: 600
  meters:
    - service.stat
  sinks:
    - meter_sink
---

But the new meter doesn't show up in the output of ceilometer meter-list.
Looking at the log of ceilometer-agent-central, it might be caused by an incorrect 
signature during dispatching:
--
2014-08-18 03:07:13.170 16528 DEBUG ceilometer.dispatcher.database [-] metering 
data service.stat for 7398ae3f-c866-4484-b975-19d121acb2b1 @ 
2014-08-18T03:07:13.137888: 0 record_metering_data 
/opt/stack/ceilometer/ceilometer/dispatcher/database.py:55
2014-08-18 03:07:13.171 16528 WARNING ceilometer.dispatcher.database [-] 
message signature invalid, discarding message: {u'counter_name': ...
---
Looking at the source code 
(ceilometer/dispatcher/database.py:record_metering_data()), it fails in 
verify_signature. So I added a new item to ceilometer.conf:
-
[publisher_rpc]
metering_secret = 
-


After that, the above warning log (message signature invalid) disappears, but 
it seems that record_metering_data() is NOT invoked at all, because the 
above debug log no longer appears either.

And when I then remove the metering_secret from ceilometer.conf, 
record_metering_data() is still not invoked.

My questions are:

-  For the issue of the invalid message signature when invoking 
record_metering_data(), is adding the metering_secret item to 
ceilometer.conf the correct solution?

-  Is there anything wrong after adding metering_secret, such that 
record_metering_data() cannot be invoked?

-  What other config/source files do I need to modify if I want my 
new meter to show up in the ceilometer meter-list output?

-  Any other suggestions/comments?

Thanks in advance!
-Gary


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] publisher metering_secret in ceilometer.conf

2014-08-18 Thread Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
Btw, there is no service.stat record/row in the meter table of the ceilometer 
database.

Regards,
Gary



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Add a pollster plugin

2014-08-14 Thread Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
Thanks a lot, Doug. It does work after configuring pipeline.yaml.



Regards,

Gary

On Aug 12, 2014, at 4:11 AM, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) wrote:

 Hi Folks,

 Are there any best practices or a good way to debug whether a new pollster 
 plugin works fine for Ceilometer?

 I'd like to add a new pollster plugin into Ceilometer by
  - adding a new item under the ceilometer.poll.central entry point in the 
 setup.cfg file
  - adding the implementation code inheriting plugin.CentralPollster.

 But when I sudo python setup.py install and restart ceilometer-related 
 services in devstack, NO new meter is displayed in the ceilometer meter-list 
 output, while I expect a new meter for the item defined in setup.cfg.

 Are there any other source/config files I need to modify or add?



You need to define a pipeline [1] to include the data from your new pollster 
and schedule it to be run.

Doug

[1] http://docs.openstack.org/developer/ceilometer/configuration.html#pipelines

 Thanks in advance,
 Gary



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Add a pollster plugin

2014-08-12 Thread Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
Hi Folks,

Are there any best practices or a good way to debug whether a new pollster 
plugin works fine for Ceilometer?

I'd like to add a new pollster plugin into Ceilometer by
 - adding a new item under the ceilometer.poll.central entry point in the 
setup.cfg file
 - adding the implementation code inheriting plugin.CentralPollster.

But when I sudo python setup.py install and restart ceilometer-related 
services in devstack, NO new meter is displayed in the ceilometer meter-list 
output, while I expect a new meter for the item defined in setup.cfg.
Are there any other source/config files I need to modify or add?
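
As a quick sanity check (a sketch, assuming setuptools/pkg_resources is 
available in the environment), the entry points actually registered under 
ceilometer.poll.central can be listed after running python setup.py install; 
if the new pollster is missing from the output, the setup.cfg change did not 
take effect:
---
import pkg_resources

# list every pollster registered under the ceilometer.poll.central namespace
for ep in pkg_resources.iter_entry_points('ceilometer.poll.central'):
    print('%s -> %s' % (ep.name, ep.module_name))
---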

Thanks in advance,
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Generate Event or Notification in Ceilometer

2014-07-30 Thread Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
Hi Jay,

Thanks for your comment. Your suggestion is good, but I am wondering why we 
cannot use or leverage Ceilometer to monitor infrastructure-related things, 
as it can be used to monitor tenant-related things.

Regards,
Gary



On 07/29/2014 02:05 AM, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) wrote:

 Hi Folks,

 Are there any guides or examples showing how to produce a new event or
 notification and add a handler for this event in Ceilometer?

 I am asked to implement OpenStack service monitoring which will send an
 event and trigger the handler once a service, say nova-compute, crashes,
 in a short time. :(

 The link (http://docs.openstack.org/developer/ceilometer/events.html)
 does a good job of explaining the concepts, so I know that I need to emit
 a notification to the message queue and ceilometer-collector will process
 it and generate events, but that is far from a real implementation.



I would not use Ceilometer for this, as it is more tenant-facing than
infrastructure service facing. Instead, I would use a tried-and-true
solution like Nagios and NRPE checks. Here's an example of such a check
for a keystone endpoint:

https://github.com/ghantoos/debian-nagios-plugins-openstack/blob/master/plugins/check_keystone

Best,
-jay
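
For what it's worth, a minimal Nagios-style check along those lines could look 
like the sketch below. This is not the linked check_keystone plugin; the 
endpoint URL and timeout are assumptions used only for illustration:
---
#!/usr/bin/env python
# minimal Nagios-style check sketch: exit 0 (OK) if the HTTP endpoint
# answers, exit 2 (CRITICAL) otherwise
import sys

try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

OK, CRITICAL = 0, 2


def check(url='http://127.0.0.1:5000/v2.0', timeout=5):
    try:
        urlopen(url, timeout=timeout)
    except Exception as exc:
        print('CRITICAL: %s unreachable (%s)' % (url, exc))
        return CRITICAL
    print('OK: %s responded' % url)
    return OK


if __name__ == '__main__':
    sys.exit(check())
---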


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Generate Event or Notification in Ceilometer

2014-07-29 Thread Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
Hi Folks,

Are there any guides or examples showing how to produce a new event or 
notification and add a handler for this event in Ceilometer?

I am asked to implement OpenStack service monitoring which will send an event 
and trigger the handler once a service, say nova-compute, crashes, in a short 
time. :(
The link (http://docs.openstack.org/developer/ceilometer/events.html) does a 
good job of explaining the concepts, so I know that I need to emit a 
notification to the message queue and ceilometer-collector will process it and 
generate events, but that is far from a real implementation.
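
In case it helps, below is a rough sketch of what emitting such a notification 
might look like with the 2014-era oslo.messaging API. The publisher id, topic 
and the service.crashed event type are made-up examples, the transport options 
come from whatever configuration cfg.CONF has loaded, and whether the collector 
turns the notification into an event also depends on the event definitions 
configured on the Ceilometer side:
---
from oslo.config import cfg
from oslo import messaging   # 2014-era namespace package import


def notify_service_crashed(service, host):
    # assumes the messaging transport (e.g. rabbit) is configured in cfg.CONF
    transport = messaging.get_transport(cfg.CONF)
    notifier = messaging.Notifier(transport,
                                  publisher_id='service-monitor.%s' % host,
                                  driver='messaging',
                                  topic='notifications')
    # ceilometer-collector / the notification agent listens on the
    # notifications topic and can turn this into an event
    notifier.error({}, 'service.crashed',
                   {'service': service, 'host': host})


notify_service_crashed('nova-compute', 'compute-1')
---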

Regards,
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev