Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-07-22 Thread Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
Hi Hongbin,

This is really a good idea, because it avoids much of the work of implementing 
loop and conditional-branch logic around Heat ResourceGroup. But as Kevin 
pointed out in the mail below, it needs a careful upgrade/migration path.

Meanwhile, as for the blueprint of supporting multiple flavors 
(https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor), we 
have implemented a proof-of-concept/prototype based on the current 
ResourceGroup method (see the design spec 
https://review.openstack.org/#/c/345745/ for details).

I am wondering whether we can continue with the implementation of supporting 
multiple flavors based on the current ResourceGroup for now, or do you have a 
plan for when to implement "manually managing the bay nodes"?

Regards,
Gary

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: Tuesday, May 17, 2016 3:01 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the 
bay nodes

Sounds OK, but there needs to be a careful upgrade/migration path, where both 
are supported until after all pods are migrated out of nodes that are in the 
ResourceGroup.

Thanks,
Kevin


From: Hongbin Lu
Sent: Sunday, May 15, 2016 3:49:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Discuss the idea of manually managing the bay 
nodes
Hi all,

This is a continued discussion from the design summit. For recap, Magnum 
manages bay nodes by using ResourceGroup from Heat. This approach works, but it 
is infeasible to manage the heterogeneity across bay nodes, which is a 
frequently demanded feature. As an example, there is a request to provision bay 
nodes across availability zones [1]. There is another request to provision bay 
nodes with different sets of flavors [2]. For the requested features above, 
ResourceGroup won't work very well.

The proposal is to remove the usage of ResourceGroup and manually create a Heat 
stack for each bay node. For example, for creating a cluster with 2 masters 
and 3 minions, Magnum is going to manage 6 Heat stacks (instead of 1 big Heat 
stack as it is right now):
* A kube cluster stack that manages the global resources
* Two kube master stacks that manage the two master nodes
* Three kube minion stacks that manage the three minion nodes

The proposal might require an additional API endpoint to manage nodes or a 
group of nodes. For example:
$ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 
--availability-zone us-east-1 
$ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 
--availability-zone us-east-2 ...
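
As an illustration (not part of the original proposal), each node group could be 
realized as its own small Heat stack via python-heatclient. The group definitions, 
the stack naming scheme, and the server_group.yaml template below are assumptions 
made for this sketch only:

  # Hypothetical sketch: one dedicated Heat stack per node group.
  from keystoneauth1 import identity, session
  from heatclient import client as heat_client

  auth = identity.Password(auth_url='http://controller:5000/v3',
                           username='magnum', password='secret',
                           project_name='service',
                           user_domain_id='default', project_domain_id='default')
  heat = heat_client.Client('1', session=session.Session(auth=auth))

  # Assumed node groups, mirroring the nodegroup-create calls above.
  node_groups = [
      {'name': 'group-small', 'flavor': 'm1.small', 'count': 2,
       'availability_zone': 'us-east-1'},
      {'name': 'group-medium', 'flavor': 'm1.medium', 'count': 3,
       'availability_zone': 'us-east-2'},
  ]

  with open('server_group.yaml') as f:  # hypothetical per-group template
      template = f.read()

  for group in node_groups:
      # One small stack per node group, instead of one big ResourceGroup.
      heat.stacks.create(
          stack_name='bay-XXX-%s' % group['name'],
          template=template,
          parameters={
              'flavor': group['flavor'],
              'count': group['count'],
              'availability_zone': group['availability_zone'],
          })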

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
[2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-20 Thread Hongbin Lu
Hi all,

During the discussion in this ML and team meetings, it seems most of us 
accepted the idea of supporting heterogeneous clusters. What we didn't agree 
on is how to implement it. To move it forward, I am going to summarize the 
various implementation options so that we can debate each option thoughtfully.

* Goal:
Add support for provisioning and managing a COE cluster with nodes of various 
types. For example, a k8s cluster with N groups of nodes: the first group of 
nodes has flavor A, the second group of nodes has flavor B, and so on.

* Option 1:
Implement it in Heat templates declaratively. For example, if users want to 
create a cluster with 5 nodes, Magnum will generate a set of per-node parameter 
mappings. For example:

  $ heat stack-create -f cluster.yaml \
  -P count=5 \
  -P az_map='{"0":"az1",...,"4":"az4"}' \
  -P flavor_map='{"0":"m1.foo",...,"4":"m1.bar"}'
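
As a side note (my own illustration, not from the original mail), the per-index 
maps above could be generated from a node list with a few lines of Python:

  # Assumed illustration: build the az_map/flavor_map JSON parameters for Option 1.
  import json

  nodes = [
      {'az': 'az1', 'flavor': 'm1.foo'},
      {'az': 'az2', 'flavor': 'm1.foo'},
      {'az': 'az4', 'flavor': 'm1.bar'},
  ]
  az_map = json.dumps({str(i): n['az'] for i, n in enumerate(nodes)})
  flavor_map = json.dumps({str(i): n['flavor'] for i, n in enumerate(nodes)})
  # az_map     -> {"0": "az1", "1": "az2", "2": "az4"}
  # flavor_map -> {"0": "m1.foo", "1": "m1.foo", "2": "m1.bar"}
  print(az_map, flavor_map)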

The top-level template contains a single resource group. The trick 
is passing %index% to the nested template.

  $ cat cluster.yaml
  heat_template_version: 2015-04-30
  parameters:
    count:
      type: integer
    az_map:
      type: json
    flavor_map:
      type: json
  resources:
    AGroup:
      type: OS::Heat::ResourceGroup
      properties:
        count: {get_param: count}
        resource_def:
          type: server.yaml
          properties:
            availability_zone_map: {get_param: az_map}
            flavor_map: {get_param: flavor_map}
            index: '%index%'

In the nested template, use 'index' to retrieve the parameters.

  $ cat server.yaml
  heat_template_version: 2015-04-30
  parameters:
    availability_zone_map:
      type: json
    flavor_map:
      type: json
    index:
      type: string
  resources:
    server:
      type: OS::Nova::Server
      properties:
        image: the_image
        flavor: {get_param: [flavor_map, {get_param: index}]}
        availability_zone: {get_param: [availability_zone_map, {get_param: index}]}

This approach has a critical drawback. As pointed out by Zane [1], we cannot 
remove a member from the middle of the list. Therefore, the usage of 
ResourceGroup was not recommended.
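
To spell out that drawback (my own rough illustration, not from Zane's mail): 
shrinking a resource group always drops the highest indices first, so removing a 
specific member from the middle means re-keying the maps, which changes the 
parameters of the surviving members:

  # Assumed illustration of the re-keying problem with the index-map trick.
  flavor_map = {"0": "m1.foo", "1": "m1.foo", "2": "m1.bar",
                "3": "m1.foo", "4": "m1.bar"}

  # Drop the node at index "2" while keeping the other four flavors ...
  survivors = [v for k, v in sorted(flavor_map.items(), key=lambda kv: int(kv[0]))
               if k != "2"]
  new_map = {str(i): flavor for i, flavor in enumerate(survivors)}
  print(new_map)  # {'0': 'm1.foo', '1': 'm1.foo', '2': 'm1.foo', '3': 'm1.bar'}
  # ... the members formerly at "3" and "4" now read their parameters from "2"
  # and "3", so Heat may rebuild servers that should have been left untouched.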

* Option 2:
Generate the Heat template by using the generator [2]. The code to generate the 
Heat template would be something like below:

  $ cat generator.py
  from os_hotgen import composer
  from os_hotgen import heat

  tmpl_a = heat.Template(description="...")
  ...

  for group in rsr_groups:
      # parameters
      param_name = group.name + '_flavor'
      param_type = 'string'
      param_flavor = heat.Parameter(name=param_name, type=param_type)
      tmpl_a.add_parameter(param_flavor)
      param_name = group.name + '_az'
      param_type = 'string'
      param_az = heat.Parameter(name=param_name, type=param_type)
      tmpl_a.add_parameter(param_az)
      ...

      # resources
      rsc = heat.Resource(group.name, 'OS::Heat::ResourceGroup')
      resource_def = {
          'type': 'server.yaml',
          'properties': {
              'availability_zone': heat.FnGetParam(param_az.name),
              'flavor': heat.FnGetParam(param_flavor.name),
              ...
          }
      }
      resource_def_prp = heat.ResourceProperty('resource_def', resource_def)
      rsc.add_property(resource_def_prp)
      count_prp = heat.ResourceProperty('count', group.count)
      rsc.add_property(count_prp)
      tmpl_a.add_resource(rsc)
  ...

  print(composer.compose_template(tmpl_a))

* Option 3:
Remove the usage of ResourceGroup and manually manage Heat stacks for each bay 
node. For example, for a cluster with 5 nodes, Magnum is going to create 5 Heat 
stacks:

  for node in nodes:
      fields = {
          'stack_name': node.name,
          'parameters': {
              'flavor': node.flavor,
              'availability_zone': node.availability_zone,
              ...
          },
          'template': 'server.yaml',
          ...
      }
      osc.heat().stacks.create(**fields)

The major change is to have Magnum manage multiple Heat stacks instead of one 
big stack. The main advantage is that Magnum can update each stack freely and the 
codebase is relatively simple. I guess the main disadvantage is performance, as 
Magnum needs to iterate over all Heat stacks to compute the state of the cluster. 
An optimization is to combine this approach with ResourceGroup. For example, for a 
cluster with 2 nodes in flavor A and 3 nodes in flavor B, Magnum will create 
2 Heat stacks: the first Heat stack contains a resource group with flavor A, 
and the second Heat stack contains a resource group with flavor B.
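
As a rough sketch of that performance concern (helper names and status handling 
are my assumptions), deriving the bay state in Option 3 costs one Heat API call 
per node stack:

  # Assumed sketch: roll per-node stack statuses up into a single bay status.
  def aggregate_bay_status(heat, stack_ids):
      statuses = []
      for stack_id in stack_ids:
          stack = heat.stacks.get(stack_id)  # one Heat API call per node stack
          statuses.append(stack.stack_status)
      if any(s.endswith('_FAILED') for s in statuses):
          return 'CREATE_FAILED'
      if any(s.endswith('_IN_PROGRESS') for s in statuses):
          return 'CREATE_IN_PROGRESS'
      return 'CREATE_COMPLETE'

Grouping nodes of the same flavor into one ResourceGroup stack, as suggested 
above, shrinks stack_ids and therefore the number of calls.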

Thoughts?

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-June/097522.html
[2] https://review.openstack.org/#/c/328822/

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: June-07-16 3:02 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-07 Thread Ricardo Rocha
+1 on this. Another use case would be 'fast storage' for dbs, 'any
storage' for memcache and web servers. Relying on labels for this
makes it really simple.

The alternative of doing it with multiple clusters adds complexity to
the cluster descriptions users have to maintain.

On Fri, Jun 3, 2016 at 1:54 AM, Fox, Kevin M  wrote:
> As an operator that has clouds that are partitioned into different host 
> aggregates with different flavors targeting them, I totally believe we will 
> have users that want to have a single k8s cluster span multiple different 
> flavor types. I'm sure once I deploy magnum, I will want it too. You could 
> have some special hardware on some nodes, not on others. but you can still 
> have cattle, if you have enough of them and the labels are set appropriately. 
> Labels allow you to continue to partition things when you need to, and ignore 
> it when you dont, making administration significantly easier.
>
> Say I have a tenant with 5 gpu nodes, and 10 regular nodes allocated into a 
> k8s cluster. I may want 30 instances of container x that doesn't care where 
> they land, and prefer 5 instances that need cuda. The former can be deployed 
> with a k8s deployment. The latter can be deployed with a daemonset. All 
> should work well and very non pet'ish. The whole tenant could be viewed with 
> a single pane of glass, making it easy to manage.
>
> Thanks,
> Kevin
> 
> From: Adrian Otto [adrian.o...@rackspace.com]
> Sent: Thursday, June 02, 2016 4:24 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing 
> the bay nodes
>
> I am really struggling to accept the idea of heterogeneous clusters. My 
> experience causes me to question whether a heterogeneous cluster makes sense 
> for Magnum. I will try to explain why I have this hesitation:
>
> 1) If you have a heterogeneous cluster, it suggests that you are using 
> external intelligence to manage the cluster, rather than relying on it to be 
> self-managing. This is an anti-pattern that I refer to as “pets" rather than 
> “cattle”. The anti-pattern results in brittle deployments that rely on 
> external intelligence to manage (upgrade, diagnose, and repair) the cluster. 
> The automation of the management is much harder when a cluster is 
> heterogeneous.
>
> 2) If you have a heterogeneous cluster, it can fall out of balance. This 
> means that if one of your “important” or “large” members fails, there may not 
> be adequate remaining members in the cluster to continue operating properly 
> in the degraded state. The logic of how to track and deal with this needs to 
> be handled. It’s much simpler in the homogeneous case.
>
> 3) Heterogeneous clusters are complex compared to homogeneous clusters. They 
> are harder to work with, and that usually means that unplanned outages are 
> more frequent, and last longer than they would with a homogeneous cluster.
>
> Summary:
>
> Heterogeneous:
>   - Complex
>   - Prone to imbalance upon node failure
>   - Less reliable
>
> Homogeneous:
>   - Simple
>   - Don’t get imbalanced when a min_members concept is supported by the 
> cluster controller
>   - More reliable
>
> My bias is to assert that applications that want a heterogeneous mix of 
> system capacities at a node level should be deployed on multiple homogeneous 
> bays, not a single heterogeneous one. That way you end up with a composition 
> of simple systems rather than a larger complex one.
>
> Adrian
>
>
>> On Jun 1, 2016, at 3:02 PM, Hongbin Lu  wrote:
>>
>> Personally, I think this is a good idea, since it can address a set of 
>> similar use cases like below:
>> * I want to deploy a k8s cluster to 2 availability zone (in future 2 
>> regions/clouds).
>> * I want to spin up N nodes in AZ1, M nodes in AZ2.
>> * I want to scale the number of nodes in specific AZ/region/cloud. For 
>> example, add/remove K nodes from AZ1 (with AZ2 untouched).
>>
>> The use case above should be very common and universal everywhere. To 
>> address the use case, Magnum needs to support provisioning heterogeneous set 
>> of nodes at deploy time and managing them at runtime. It looks the proposed 
>> idea (manually managing individual nodes or individual group of nodes) can 
>> address this requirement very well. Besides the proposed idea, I cannot 
>> think of an alternative solution.
>>
>> Therefore, I vote to support the proposed idea.
>>
>> Best regards,
>> Hongbin
>>
>>> -Original Message-
>>> From: Hongbin Lu
>>> Sent: June-01-16 11:44 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
>>> managing the bay nodes
>>>
>>> Hi team,
>>>
>>> A blueprint was created for tracking this idea:
>>> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
>>> nodes . I won't approve the BP until 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-06 Thread Yuanying OTSUKA
+1 Kevin

“heterogeneous cluster is more advanced and harder to control”
So, I believe that Magnum should handle and overcome this problem.
Magnum is a container infrastructure as a service.
Managing heterogeneous environments seems within the scope of Magnum’s mission.

On Fri, Jun 3, 2016 at 8:55, Fox, Kevin M wrote:

> As an operator that has clouds that are partitioned into different host
> aggregates with different flavors targeting them, I totally believe we will
> have users that want to have a single k8s cluster span multiple different
> flavor types. I'm sure once I deploy magnum, I will want it too. You could
> have some special hardware on some nodes, not on others. but you can still
> have cattle, if you have enough of them and the labels are set
> appropriately. Labels allow you to continue to partition things when you
> need to, and ignore it when you dont, making administration significantly
> easier.
>
> Say I have a tenant with 5 gpu nodes, and 10 regular nodes allocated into
> a k8s cluster. I may want 30 instances of container x that doesn't care
> where they land, and prefer 5 instances that need cuda. The former can be
> deployed with a k8s deployment. The latter can be deployed with a
> daemonset. All should work well and very non pet'ish. The whole tenant
> could be viewed with a single pane of glass, making it easy to manage.
>
> Thanks,
> Kevin
> 
> From: Adrian Otto [adrian.o...@rackspace.com]
> Sent: Thursday, June 02, 2016 4:24 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
>
> I am really struggling to accept the idea of heterogeneous clusters. My
> experience causes me to question whether a heterogeneous cluster makes sense
> for Magnum. I will try to explain why I have this hesitation:
>
> 1) If you have a heterogeneous cluster, it suggests that you are using
> external intelligence to manage the cluster, rather than relying on it to
> be self-managing. This is an anti-pattern that I refer to as “pets" rather
> than “cattle”. The anti-pattern results in brittle deployments that rely on
> external intelligence to manage (upgrade, diagnose, and repair) the
> cluster. The automation of the management is much harder when a cluster is
> heterogeneous.
>
> 2) If you have a heterogeneous cluster, it can fall out of balance. This
> means that if one of your “important” or “large” members fails, there may
> not be adequate remaining members in the cluster to continue operating
> properly in the degraded state. The logic of how to track and deal with
> this needs to be handled. It’s much simpler in the homogeneous case.
>
> 3) Heterogeneous clusters are complex compared to homogeneous clusters.
> They are harder to work with, and that usually means that unplanned outages
> are more frequent, and last longer than they would with a homogeneous
> cluster.
>
> Summary:
>
> Heterogeneous:
>   - Complex
>   - Prone to imbalance upon node failure
>   - Less reliable
>
> Homogeneous:
>   - Simple
>   - Don’t get imbalanced when a min_members concept is supported by the
> cluster controller
>   - More reliable
>
> My bias is to assert that applications that want a heterogeneous mix of
> system capacities at a node level should be deployed on multiple
> homogeneous bays, not a single heterogeneous one. That way you end up with
> a composition of simple systems rather than a larger complex one.
>
> Adrian
>
>
> > On Jun 1, 2016, at 3:02 PM, Hongbin Lu  wrote:
> >
> > Personally, I think this is a good idea, since it can address a set of
> similar use cases like below:
> > * I want to deploy a k8s cluster to 2 availability zone (in future 2
> regions/clouds).
> > * I want to spin up N nodes in AZ1, M nodes in AZ2.
> > * I want to scale the number of nodes in specific AZ/region/cloud. For
> example, add/remove K nodes from AZ1 (with AZ2 untouched).
> >
> > The use case above should be very common and universal everywhere. To
> address the use case, Magnum needs to support provisioning heterogeneous
> set of nodes at deploy time and managing them at runtime. It looks the
> proposed idea (manually managing individual nodes or individual group of
> nodes) can address this requirement very well. Besides the proposed idea, I
> cannot think of an alternative solution.
> >
> > Therefore, I vote to support the proposed idea.
> >
> > Best regards,
> > Hongbin
> >
> >> -Original Message-
> >> From: Hongbin Lu
> >> Sent: June-01-16 11:44 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> >> managing the bay nodes
> >>
> >> Hi team,
> >>
> >> A blueprint was created for tracking this idea:
> >> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> >> nodes . I won't approve the BP until there is a team decision on
> >> 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-03 Thread Hongbin Lu
I agree that a heterogeneous cluster is more advanced and harder to control, but 
I don't get why we (as service developers/providers) should care about that. If 
there is a significant portion of users asking for advanced topologies (i.e. 
heterogeneous clusters) and willing to deal with the complexities, Magnum should 
just provide them (unless there are technical difficulties or other valid 
arguments against it). From my point of view, Magnum should support the basic use 
cases well (i.e. homogeneous), *and* be flexible enough to accommodate various 
advanced use cases if we can.

Best regards,
Hongbin

> -Original Message-
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: June-02-16 7:24 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> I am really struggling to accept the idea of heterogeneous clusters. My
> experience causes me to question whether a heterogeneous cluster makes
> sense for Magnum. I will try to explain why I have this hesitation:
> 
> 1) If you have a heterogeneous cluster, it suggests that you are using
> external intelligence to manage the cluster, rather than relying on it
> to be self-managing. This is an anti-pattern that I refer to as “pets"
> rather than “cattle”. The anti-pattern results in brittle deployments
> that rely on external intelligence to manage (upgrade, diagnose, and
> repair) the cluster. The automation of the management is much harder
> when a cluster is heterogeneous.
> 
> 2) If you have a heterogeneous cluster, it can fall out of balance.
> This means that if one of your “important” or “large” members fails,
> there may not be adequate remaining members in the cluster to continue
> operating properly in the degraded state. The logic of how to track and
> deal with this needs to be handled. It’s much simpler in the
> homogeneous case.
> 
> 3) Heterogeneous clusters are complex compared to homogeneous clusters.
> They are harder to work with, and that usually means that unplanned
> outages are more frequent, and last longer than they would with a
> homogeneous cluster.
> 
> Summary:
> 
> Heterogeneous:
>   - Complex
>   - Prone to imbalance upon node failure
>   - Less reliable
> 
> Homogeneous:
>   - Simple
>   - Don’t get imbalanced when a min_members concept is supported by the
> cluster controller
>   - More reliable
> 
> My bias is to assert that applications that want a heterogeneous mix of
> system capacities at a node level should be deployed on multiple
> homogeneous bays, not a single heterogeneous one. That way you end up
> with a composition of simple systems rather than a larger complex one.
> 
> Adrian
> 
> 
> > On Jun 1, 2016, at 3:02 PM, Hongbin Lu  wrote:
> >
> > Personally, I think this is a good idea, since it can address a set
> of similar use cases like below:
> > * I want to deploy a k8s cluster to 2 availability zone (in future 2
> regions/clouds).
> > * I want to spin up N nodes in AZ1, M nodes in AZ2.
> > * I want to scale the number of nodes in specific AZ/region/cloud.
> For example, add/remove K nodes from AZ1 (with AZ2 untouched).
> >
> > The use case above should be very common and universal everywhere. To
> address the use case, Magnum needs to support provisioning
> heterogeneous set of nodes at deploy time and managing them at runtime.
> It looks the proposed idea (manually managing individual nodes or
> individual group of nodes) can address this requirement very well.
> Besides the proposed idea, I cannot think of an alternative solution.
> >
> > Therefore, I vote to support the proposed idea.
> >
> > Best regards,
> > Hongbin
> >
> >> -Original Message-
> >> From: Hongbin Lu
> >> Sent: June-01-16 11:44 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> >> managing the bay nodes
> >>
> >> Hi team,
> >>
> >> A blueprint was created for tracking this idea:
> >> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> >> nodes . I won't approve the BP until there is a team decision on
> >> accepting/rejecting the idea.
> >>
> >> From the discussion in design summit, it looks everyone is OK with
> >> the idea in general (with some disagreements in the API style).
> >> However, from the last team meeting, it looks some people disagree
> >> with the idea fundamentally. so I re-raised this ML to re-discuss.
> >>
> >> If you agree or disagree with the idea of manually managing the Heat
> >> stacks (that contains individual bay nodes), please write down your
> >> arguments here. Then, we can start debating on that.
> >>
> >> Best regards,
> >> Hongbin
> >>
> >>> -Original Message-
> >>> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> >>> Sent: May-16-16 5:28 AM
> >>> To: OpenStack Development Mailing List (not for usage questions)
> >>> Subject: Re: [openstack-dev] [magnum] Discuss 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Fox, Kevin M
As an operator that has clouds partitioned into different host 
aggregates with different flavors targeting them, I totally believe we will 
have users that want a single k8s cluster to span multiple different 
flavor types. I'm sure once I deploy Magnum, I will want it too. You could have 
some special hardware on some nodes and not on others, but you can still have 
cattle if you have enough of them and the labels are set appropriately. Labels 
allow you to continue to partition things when you need to, and ignore it when 
you don't, making administration significantly easier.

Say I have a tenant with 5 GPU nodes and 10 regular nodes allocated into a k8s 
cluster. I may want 30 instances of container x that don't care where they 
land, plus 5 instances that need CUDA. The former can be deployed with a 
k8s deployment. The latter can be deployed with a daemonset. All should work 
well and be very non-pet-ish. The whole tenant could be viewed with a single pane 
of glass, making it easy to manage.

Thanks,
Kevin

From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Thursday, June 02, 2016 4:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the 
bay nodes

I am really struggling to accept the idea of heterogeneous clusters. My 
experience causes me to question whether a heterogeneous cluster makes sense for 
Magnum. I will try to explain why I have this hesitation:

1) If you have a heterogeneous cluster, it suggests that you are using external 
intelligence to manage the cluster, rather than relying on it to be 
self-managing. This is an anti-pattern that I refer to as “pets" rather than 
“cattle”. The anti-pattern results in brittle deployments that rely on external 
intelligence to manage (upgrade, diagnose, and repair) the cluster. The 
automation of the management is much harder when a cluster is heterogeneous.

2) If you have a heterogeneous cluster, it can fall out of balance. This means 
that if one of your “important” or “large” members fails, there may not be 
adequate remaining members in the cluster to continue operating properly in the 
degraded state. The logic of how to track and deal with this needs to be 
handled. It’s much simpler in the homogeneous case.

3) Heterogeneous clusters are complex compared to homogeneous clusters. They 
are harder to work with, and that usually means that unplanned outages are more 
frequent, and last longer than they would with a homogeneous cluster.

Summary:

Heterogeneous:
  - Complex
  - Prone to imbalance upon node failure
  - Less reliable

Homogeneous:
  - Simple
  - Don’t get imbalanced when a min_members concept is supported by the cluster 
controller
  - More reliable

My bias is to assert that applications that want a heterogeneous mix of system 
capacities at a node level should be deployed on multiple homogeneous bays, not 
a single heterogeneous one. That way you end up with a composition of simple 
systems rather than a larger complex one.

Adrian


> On Jun 1, 2016, at 3:02 PM, Hongbin Lu  wrote:
>
> Personally, I think this is a good idea, since it can address a set of 
> similar use cases like below:
> * I want to deploy a k8s cluster to 2 availability zone (in future 2 
> regions/clouds).
> * I want to spin up N nodes in AZ1, M nodes in AZ2.
> * I want to scale the number of nodes in specific AZ/region/cloud. For 
> example, add/remove K nodes from AZ1 (with AZ2 untouched).
>
> The use case above should be very common and universal everywhere. To address 
> the use case, Magnum needs to support provisioning heterogeneous set of nodes 
> at deploy time and managing them at runtime. It looks the proposed idea 
> (manually managing individual nodes or individual group of nodes) can address 
> this requirement very well. Besides the proposed idea, I cannot think of an 
> alternative solution.
>
> Therefore, I vote to support the proposed idea.
>
> Best regards,
> Hongbin
>
>> -Original Message-
>> From: Hongbin Lu
>> Sent: June-01-16 11:44 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>>
>> Hi team,
>>
>> A blueprint was created for tracking this idea:
>> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
>> nodes . I won't approve the BP until there is a team decision on
>> accepting/rejecting the idea.
>>
>> From the discussion in design summit, it looks everyone is OK with the
>> idea in general (with some disagreements in the API style). However,
>> from the last team meeting, it looks some people disagree with the idea
>> fundamentally. so I re-raised this ML to re-discuss.
>>
>> If you agree or disagree with the idea of manually managing the Heat
>> stacks (that contains individual bay nodes), please write down your
>> 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Steven Dake (stdake)
Hongbin,

Have you considered a workflow engine?

FWIW I agree with Adrian about the difficulties of heterogeneous systems.
Homogeneous systems are much better to operate, and in reality the world has
moved entirely to x86_64 + Linux.  I could see a future in which ARM breaks into
the server space, but that is multiple years away, if it happens at all.

Regards
-steve


On 6/2/16, 7:42 AM, "Hongbin Lu"  wrote:

>Madhuri,
>
>It looks both of us agree the idea of having heterogeneous set of nodes.
>For the implementation, I am open to alternative (I supported the
>work-around idea because I cannot think of a feasible implementation by
>purely using Heat, unless Heat support "for" logic which is very unlikely
>to happen. However, if anyone can think of a pure Heat implementation, I
>am totally fine with that).
>
>Best regards,
>Hongbin
>
>> -Original Message-
>> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
>> Sent: June-02-16 12:24 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Hi Hongbin,
>> 
>> I also liked the idea of having heterogeneous set of nodes but IMO such
>> features should not be implemented in Magnum, thus deviating Magnum
>> again from its roadmap. Whereas we should leverage Heat(or may be
>> Senlin) APIs for the same.
>> 
>> I vote +1 for this feature.
>> 
>> Regards,
>> Madhuri
>> 
>> -Original Message-
>> From: Hongbin Lu [mailto:hongbin...@huawei.com]
>> Sent: Thursday, June 2, 2016 3:33 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Personally, I think this is a good idea, since it can address a set of
>> similar use cases like below:
>> * I want to deploy a k8s cluster to 2 availability zone (in future 2
>> regions/clouds).
>> * I want to spin up N nodes in AZ1, M nodes in AZ2.
>> * I want to scale the number of nodes in specific AZ/region/cloud. For
>> example, add/remove K nodes from AZ1 (with AZ2 untouched).
>> 
>> The use case above should be very common and universal everywhere. To
>> address the use case, Magnum needs to support provisioning
>> heterogeneous set of nodes at deploy time and managing them at runtime.
>> It looks the proposed idea (manually managing individual nodes or
>> individual group of nodes) can address this requirement very well.
>> Besides the proposed idea, I cannot think of an alternative solution.
>> 
>> Therefore, I vote to support the proposed idea.
>> 
>> Best regards,
>> Hongbin
>> 
>> > -Original Message-
>> > From: Hongbin Lu
>> > Sent: June-01-16 11:44 AM
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
>> > managing the bay nodes
>> >
>> > Hi team,
>> >
>> > A blueprint was created for tracking this idea:
>> > https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
>> > nodes . I won't approve the BP until there is a team decision on
>> > accepting/rejecting the idea.
>> >
>> > From the discussion in design summit, it looks everyone is OK with
>> the
>> > idea in general (with some disagreements in the API style). However,
>> > from the last team meeting, it looks some people disagree with the
>> > idea fundamentally. so I re-raised this ML to re-discuss.
>> >
>> > If you agree or disagree with the idea of manually managing the Heat
>> > stacks (that contains individual bay nodes), please write down your
>> > arguments here. Then, we can start debating on that.
>> >
>> > Best regards,
>> > Hongbin
>> >
>> > > -Original Message-
>> > > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
>> > > Sent: May-16-16 5:28 AM
>> > > To: OpenStack Development Mailing List (not for usage questions)
>> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> > > managing the bay nodes
>> > >
>> > > The discussion at the summit was very positive around this
>> > requirement
>> > > but as this change will make a large impact to Magnum it will need
>> a
>> > > spec.
>> > >
>> > > On the API of things, I was thinking a slightly more generic
>> > > approach to incorporate other lifecycle operations into the same
>> API.
>> > > Eg:
>> > > magnum bay-manage  
>> > >
>> > > magnum bay-manage  reset –hard
>> > > magnum bay-manage  rebuild
>> > > magnum bay-manage  node-delete  magnum bay-manage
>> > >  node-add –flavor  magnum bay-manage  node-reset
>> > >  magnum bay-manage  node-list
>> > >
>> > > Tom
>> > >
>> > > From: Yuanying OTSUKA 
>> > > Reply-To: "OpenStack Development Mailing List (not for usage
>> > > questions)" 
>> > > Date: Monday, 16 May 2016 at 01:07
>> > > To: "OpenStack Development Mailing List (not for usage questions)"
>> > > 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Adrian Otto
I am really struggling to accept the idea of heterogeneous clusters. My 
experience causes me to question whether a heterogeneous cluster makes sense for 
Magnum. I will try to explain why I have this hesitation:

1) If you have a heterogeneous cluster, it suggests that you are using external 
intelligence to manage the cluster, rather than relying on it to be 
self-managing. This is an anti-pattern that I refer to as “pets" rather than 
“cattle”. The anti-pattern results in brittle deployments that rely on external 
intelligence to manage (upgrade, diagnose, and repair) the cluster. The 
automation of the management is much harder when a cluster is heterogeneous.

2) If you have a heterogeneous cluster, it can fall out of balance. This means 
that if one of your “important” or “large” members fails, there may not be 
adequate remaining members in the cluster to continue operating properly in the 
degraded state. The logic of how to track and deal with this needs to be 
handled. It’s much simpler in the homogeneous case.

3) Heterogeneous clusters are complex compared to homogeneous clusters. They 
are harder to work with, and that usually means that unplanned outages are more 
frequent, and last longer than they would with a homogeneous cluster.

Summary:

Heterogeneous:
  - Complex
  - Prone to imbalance upon node failure
  - Less reliable

Homogeneous:
  - Simple
  - Don’t get imbalanced when a min_members concept is supported by the cluster 
controller
  - More reliable

My bias is to assert that applications that want a heterogeneous mix of system 
capacities at a node level should be deployed on multiple homogeneous bays, not 
a single heterogeneous one. That way you end up with a composition of simple 
systems rather than a larger complex one.

Adrian


> On Jun 1, 2016, at 3:02 PM, Hongbin Lu  wrote:
> 
> Personally, I think this is a good idea, since it can address a set of 
> similar use cases like below:
> * I want to deploy a k8s cluster to 2 availability zone (in future 2 
> regions/clouds).
> * I want to spin up N nodes in AZ1, M nodes in AZ2.
> * I want to scale the number of nodes in specific AZ/region/cloud. For 
> example, add/remove K nodes from AZ1 (with AZ2 untouched).
> 
> The use case above should be very common and universal everywhere. To address 
> the use case, Magnum needs to support provisioning heterogeneous set of nodes 
> at deploy time and managing them at runtime. It looks the proposed idea 
> (manually managing individual nodes or individual group of nodes) can address 
> this requirement very well. Besides the proposed idea, I cannot think of an 
> alternative solution.
> 
> Therefore, I vote to support the proposed idea.
> 
> Best regards,
> Hongbin
> 
>> -Original Message-
>> From: Hongbin Lu
>> Sent: June-01-16 11:44 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Hi team,
>> 
>> A blueprint was created for tracking this idea:
>> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
>> nodes . I won't approve the BP until there is a team decision on
>> accepting/rejecting the idea.
>> 
>> From the discussion in design summit, it looks everyone is OK with the
>> idea in general (with some disagreements in the API style). However,
>> from the last team meeting, it looks some people disagree with the idea
>> fundamentally. so I re-raised this ML to re-discuss.
>> 
>> If you agree or disagree with the idea of manually managing the Heat
>> stacks (that contains individual bay nodes), please write down your
>> arguments here. Then, we can start debating on that.
>> 
>> Best regards,
>> Hongbin
>> 
>>> -Original Message-
>>> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
>>> Sent: May-16-16 5:28 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>>> managing the bay nodes
>>> 
>>> The discussion at the summit was very positive around this
>> requirement
>>> but as this change will make a large impact to Magnum it will need a
>>> spec.
>>> 
>>> On the API of things, I was thinking a slightly more generic approach
>>> to incorporate other lifecycle operations into the same API.
>>> Eg:
>>> magnum bay-manage  
>>> 
>>> magnum bay-manage  reset –hard
>>> magnum bay-manage  rebuild
>>> magnum bay-manage  node-delete  magnum bay-manage
>>>  node-add –flavor  magnum bay-manage  node-reset
>>>  magnum bay-manage  node-list
>>> 
>>> Tom
>>> 
>>> From: Yuanying OTSUKA 
>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>> questions)" 
>>> Date: Monday, 16 May 2016 at 01:07
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> 
>>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Keith Bray
Has an email been posted to the [heat] community for their input?  Maybe I
missed it.

Thanks,
-Keith

On 6/2/16, 9:42 AM, "Hongbin Lu"  wrote:

>Madhuri,
>
>It looks both of us agree the idea of having heterogeneous set of nodes.
>For the implementation, I am open to alternative (I supported the
>work-around idea because I cannot think of a feasible implementation by
>purely using Heat, unless Heat support "for" logic which is very unlikely
>to happen. However, if anyone can think of a pure Heat implementation, I
>am totally fine with that).
>
>Best regards,
>Hongbin
>
>> -Original Message-
>> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
>> Sent: June-02-16 12:24 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Hi Hongbin,
>> 
>> I also liked the idea of having heterogeneous set of nodes but IMO such
>> features should not be implemented in Magnum, thus deviating Magnum
>> again from its roadmap. Whereas we should leverage Heat(or may be
>> Senlin) APIs for the same.
>> 
>> I vote +1 for this feature.
>> 
>> Regards,
>> Madhuri
>> 
>> -Original Message-
>> From: Hongbin Lu [mailto:hongbin...@huawei.com]
>> Sent: Thursday, June 2, 2016 3:33 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Personally, I think this is a good idea, since it can address a set of
>> similar use cases like below:
>> * I want to deploy a k8s cluster to 2 availability zone (in future 2
>> regions/clouds).
>> * I want to spin up N nodes in AZ1, M nodes in AZ2.
>> * I want to scale the number of nodes in specific AZ/region/cloud. For
>> example, add/remove K nodes from AZ1 (with AZ2 untouched).
>> 
>> The use case above should be very common and universal everywhere. To
>> address the use case, Magnum needs to support provisioning
>> heterogeneous set of nodes at deploy time and managing them at runtime.
>> It looks the proposed idea (manually managing individual nodes or
>> individual group of nodes) can address this requirement very well.
>> Besides the proposed idea, I cannot think of an alternative solution.
>> 
>> Therefore, I vote to support the proposed idea.
>> 
>> Best regards,
>> Hongbin
>> 
>> > -Original Message-
>> > From: Hongbin Lu
>> > Sent: June-01-16 11:44 AM
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
>> > managing the bay nodes
>> >
>> > Hi team,
>> >
>> > A blueprint was created for tracking this idea:
>> > https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
>> > nodes . I won't approve the BP until there is a team decision on
>> > accepting/rejecting the idea.
>> >
>> > From the discussion in design summit, it looks everyone is OK with
>> the
>> > idea in general (with some disagreements in the API style). However,
>> > from the last team meeting, it looks some people disagree with the
>> > idea fundamentally. so I re-raised this ML to re-discuss.
>> >
>> > If you agree or disagree with the idea of manually managing the Heat
>> > stacks (that contains individual bay nodes), please write down your
>> > arguments here. Then, we can start debating on that.
>> >
>> > Best regards,
>> > Hongbin
>> >
>> > > -Original Message-
>> > > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
>> > > Sent: May-16-16 5:28 AM
>> > > To: OpenStack Development Mailing List (not for usage questions)
>> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> > > managing the bay nodes
>> > >
>> > > The discussion at the summit was very positive around this
>> > requirement
>> > > but as this change will make a large impact to Magnum it will need
>> a
>> > > spec.
>> > >
>> > > On the API of things, I was thinking a slightly more generic
>> > > approach to incorporate other lifecycle operations into the same
>> API.
>> > > Eg:
>> > > magnum bay-manage  
>> > >
>> > > magnum bay-manage  reset –hard
>> > > magnum bay-manage  rebuild
>> > > magnum bay-manage  node-delete  magnum bay-manage
>> > >  node-add –flavor  magnum bay-manage  node-reset
>> > >  magnum bay-manage  node-list
>> > >
>> > > Tom
>> > >
>> > > From: Yuanying OTSUKA 
>> > > Reply-To: "OpenStack Development Mailing List (not for usage
>> > > questions)" 
>> > > Date: Monday, 16 May 2016 at 01:07
>> > > To: "OpenStack Development Mailing List (not for usage questions)"
>> > > 
>> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> > > managing the bay nodes
>> > >
>> > > Hi,
>> > >
>> > > I think, user also want to specify the deleting node.
>> > > So we should manage “node” 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread 乔立勇
Hongbin,

For the implementation of heterogeneous clusters, I think we should avoid talking
to Nova or other services directly, which would require a lot of extra code.
Maybe the best way is to refactor our Heat templates and let a bay be backed by
several Heat templates when we scale out a new node or delete an existing node.

Eli.

2016-06-02 22:42 GMT+08:00 Hongbin Lu :

> Madhuri,
>
> It looks both of us agree the idea of having heterogeneous set of nodes.
> For the implementation, I am open to alternative (I supported the
> work-around idea because I cannot think of a feasible implementation by
> purely using Heat, unless Heat support "for" logic which is very unlikely
> to happen. However, if anyone can think of a pure Heat implementation, I am
> totally fine with that).
>
> Best regards,
> Hongbin
>
> > -Original Message-
> > From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> > Sent: June-02-16 12:24 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Hi Hongbin,
> >
> > I also liked the idea of having heterogeneous set of nodes but IMO such
> > features should not be implemented in Magnum, thus deviating Magnum
> > again from its roadmap. Whereas we should leverage Heat(or may be
> > Senlin) APIs for the same.
> >
> > I vote +1 for this feature.
> >
> > Regards,
> > Madhuri
> >
> > -Original Message-
> > From: Hongbin Lu [mailto:hongbin...@huawei.com]
> > Sent: Thursday, June 2, 2016 3:33 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Personally, I think this is a good idea, since it can address a set of
> > similar use cases like below:
> > * I want to deploy a k8s cluster to 2 availability zone (in future 2
> > regions/clouds).
> > * I want to spin up N nodes in AZ1, M nodes in AZ2.
> > * I want to scale the number of nodes in specific AZ/region/cloud. For
> > example, add/remove K nodes from AZ1 (with AZ2 untouched).
> >
> > The use case above should be very common and universal everywhere. To
> > address the use case, Magnum needs to support provisioning
> > heterogeneous set of nodes at deploy time and managing them at runtime.
> > It looks the proposed idea (manually managing individual nodes or
> > individual group of nodes) can address this requirement very well.
> > Besides the proposed idea, I cannot think of an alternative solution.
> >
> > Therefore, I vote to support the proposed idea.
> >
> > Best regards,
> > Hongbin
> >
> > > -Original Message-
> > > From: Hongbin Lu
> > > Sent: June-01-16 11:44 AM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> > > managing the bay nodes
> > >
> > > Hi team,
> > >
> > > A blueprint was created for tracking this idea:
> > > https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> > > nodes . I won't approve the BP until there is a team decision on
> > > accepting/rejecting the idea.
> > >
> > > From the discussion in design summit, it looks everyone is OK with
> > the
> > > idea in general (with some disagreements in the API style). However,
> > > from the last team meeting, it looks some people disagree with the
> > > idea fundamentally. so I re-raised this ML to re-discuss.
> > >
> > > If you agree or disagree with the idea of manually managing the Heat
> > > stacks (that contains individual bay nodes), please write down your
> > > arguments here. Then, we can start debating on that.
> > >
> > > Best regards,
> > > Hongbin
> > >
> > > > -Original Message-
> > > > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > > > Sent: May-16-16 5:28 AM
> > > > To: OpenStack Development Mailing List (not for usage questions)
> > > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > > > managing the bay nodes
> > > >
> > > > The discussion at the summit was very positive around this
> > > requirement
> > > > but as this change will make a large impact to Magnum it will need
> > a
> > > > spec.
> > > >
> > > > On the API of things, I was thinking a slightly more generic
> > > > approach to incorporate other lifecycle operations into the same
> > API.
> > > > Eg:
> > > > magnum bay-manage  
> > > >
> > > > magnum bay-manage  reset –hard
> > > > magnum bay-manage  rebuild
> > > > magnum bay-manage  node-delete  magnum bay-manage
> > > >  node-add –flavor  magnum bay-manage  node-reset
> > > >  magnum bay-manage  node-list
> > > >
> > > > Tom
> > > >
> > > > From: Yuanying OTSUKA 
> > > > Reply-To: "OpenStack Development Mailing List (not for usage
> > > > questions)" 
> > > > Date: Monday, 16 May 2016 at 01:07
> > > > To: "OpenStack Development 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Hongbin Lu
Madhuri,

It looks like both of us agree on the idea of having a heterogeneous set of nodes. 
For the implementation, I am open to alternatives (I supported the work-around idea 
because I cannot think of a feasible implementation purely using Heat, 
unless Heat supports "for" logic, which is very unlikely to happen. However, if 
anyone can think of a pure Heat implementation, I am totally fine with that).

Best regards,
Hongbin

> -Original Message-
> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> Sent: June-02-16 12:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Hi Hongbin,
> 
> I also liked the idea of having heterogeneous set of nodes but IMO such
> features should not be implemented in Magnum, thus deviating Magnum
> again from its roadmap. Whereas we should leverage Heat(or may be
> Senlin) APIs for the same.
> 
> I vote +1 for this feature.
> 
> Regards,
> Madhuri
> 
> -Original Message-
> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> Sent: Thursday, June 2, 2016 3:33 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Personally, I think this is a good idea, since it can address a set of
> similar use cases like below:
> * I want to deploy a k8s cluster to 2 availability zone (in future 2
> regions/clouds).
> * I want to spin up N nodes in AZ1, M nodes in AZ2.
> * I want to scale the number of nodes in specific AZ/region/cloud. For
> example, add/remove K nodes from AZ1 (with AZ2 untouched).
> 
> The use case above should be very common and universal everywhere. To
> address the use case, Magnum needs to support provisioning
> heterogeneous set of nodes at deploy time and managing them at runtime.
> It looks the proposed idea (manually managing individual nodes or
> individual group of nodes) can address this requirement very well.
> Besides the proposed idea, I cannot think of an alternative solution.
> 
> Therefore, I vote to support the proposed idea.
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Hongbin Lu
> > Sent: June-01-16 11:44 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Hi team,
> >
> > A blueprint was created for tracking this idea:
> > https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> > nodes . I won't approve the BP until there is a team decision on
> > accepting/rejecting the idea.
> >
> > From the discussion in design summit, it looks everyone is OK with
> the
> > idea in general (with some disagreements in the API style). However,
> > from the last team meeting, it looks some people disagree with the
> > idea fundamentally. so I re-raised this ML to re-discuss.
> >
> > If you agree or disagree with the idea of manually managing the Heat
> > stacks (that contains individual bay nodes), please write down your
> > arguments here. Then, we can start debating on that.
> >
> > Best regards,
> > Hongbin
> >
> > > -Original Message-
> > > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > > Sent: May-16-16 5:28 AM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > > managing the bay nodes
> > >
> > > The discussion at the summit was very positive around this
> > requirement
> > > but as this change will make a large impact to Magnum it will need
> a
> > > spec.
> > >
> > > On the API of things, I was thinking a slightly more generic
> > > approach to incorporate other lifecycle operations into the same
> API.
> > > Eg:
> > > magnum bay-manage  
> > >
> > > magnum bay-manage  reset –hard
> > > magnum bay-manage  rebuild
> > > magnum bay-manage  node-delete  magnum bay-manage
> > >  node-add –flavor  magnum bay-manage  node-reset
> > >  magnum bay-manage  node-list
> > >
> > > Tom
> > >
> > > From: Yuanying OTSUKA 
> > > Reply-To: "OpenStack Development Mailing List (not for usage
> > > questions)" 
> > > Date: Monday, 16 May 2016 at 01:07
> > > To: "OpenStack Development Mailing List (not for usage questions)"
> > > 
> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > > managing the bay nodes
> > >
> > > Hi,
> > >
> > > I think, user also want to specify the deleting node.
> > > So we should manage “node” individually.
> > >
> > > For example:
> > > $ magnum node-create —bay …
> > > $ magnum node-list —bay
> > > $ magnum node-delete $NODE_UUID
> > >
> > > Anyway, if magnum want to manage a lifecycle of container
> > > infrastructure.
> > > This feature is necessary.
> > >
> > > Thanks
> > > 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-01 Thread Kumari, Madhuri
Hi Hongbin,

I also like the idea of having a heterogeneous set of nodes, but IMO such 
features should not be implemented in Magnum itself, as that would again deviate 
Magnum from its roadmap. Instead, we should leverage Heat (or maybe Senlin) APIs 
for the same.

I vote +1 for this feature.

Regards,
Madhuri

-Original Message-
From: Hongbin Lu [mailto:hongbin...@huawei.com] 
Sent: Thursday, June 2, 2016 3:33 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the 
bay nodes

Personally, I think this is a good idea, since it can address a set of similar 
use cases like below:
* I want to deploy a k8s cluster to 2 availability zones (in future 2 
regions/clouds).
* I want to spin up N nodes in AZ1, M nodes in AZ2.
* I want to scale the number of nodes in specific AZ/region/cloud. For example, 
add/remove K nodes from AZ1 (with AZ2 untouched).

The use cases above should be very common and universal everywhere. To address 
them, Magnum needs to support provisioning a heterogeneous set of nodes 
at deploy time and managing them at runtime. It looks like the proposed idea 
(manually managing individual nodes or individual groups of nodes) can address 
this requirement very well. Besides the proposed idea, I cannot think of an 
alternative solution.

Therefore, I vote to support the proposed idea.

Best regards,
Hongbin

> -Original Message-
> From: Hongbin Lu
> Sent: June-01-16 11:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually 
> managing the bay nodes
> 
> Hi team,
> 
> A blueprint was created for tracking this idea:
> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> nodes . I won't approve the BP until there is a team decision on 
> accepting/rejecting the idea.
> 
> From the discussion in design summit, it looks everyone is OK with the 
> idea in general (with some disagreements in the API style). However, 
> from the last team meeting, it looks some people disagree with the 
> idea fundamentally. so I re-raised this ML to re-discuss.
> 
> If you agree or disagree with the idea of manually managing the Heat 
> stacks (that contains individual bay nodes), please write down your 
> arguments here. Then, we can start debating on that.
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > Sent: May-16-16 5:28 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually 
> > managing the bay nodes
> >
> > The discussion at the summit was very positive around this
> requirement
> > but as this change will make a large impact to Magnum it will need a 
> > spec.
> >
> > On the API of things, I was thinking a slightly more generic 
> > approach to incorporate other lifecycle operations into the same API.
> > Eg:
> > magnum bay-manage  
> >
> > magnum bay-manage  reset –hard
> > magnum bay-manage  rebuild
> > magnum bay-manage  node-delete  magnum bay-manage 
> >  node-add –flavor  magnum bay-manage  node-reset 
> >  magnum bay-manage  node-list
> >
> > Tom
> >
> > From: Yuanying OTSUKA 
> > Reply-To: "OpenStack Development Mailing List (not for usage 
> > questions)" 
> > Date: Monday, 16 May 2016 at 01:07
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually 
> > managing the bay nodes
> >
> > Hi,
> >
> > I think, user also want to specify the deleting node.
> > So we should manage “node” individually.
> >
> > For example:
> > $ magnum node-create —bay …
> > $ magnum node-list —bay
> > $ magnum node-delete $NODE_UUID
> >
> > Anyway, if magnum want to manage a lifecycle of container 
> > infrastructure.
> > This feature is necessary.
> >
> > Thanks
> > -yuanying
> >
> >
> > On Mon, May 16, 2016 at 7:50, Hongbin Lu wrote:
> > Hi all,
> >
> > This is a continued discussion from the design summit. For recap, 
> > Magnum manages bay nodes by using ResourceGroup from Heat. This 
> > approach works but it is infeasible to manage the heterogeneity
> across
> > bay nodes, which is a frequently demanded feature. As an example, 
> > there is a request to provision bay nodes across availability zones
> [1].
> > There is another request to provision bay nodes with different set 
> > of flavors [2]. For the request features above, ResourceGroup won’t 
> > work very well.
> >
> > The proposal is to remove the usage of ResourceGroup and manually 
> > create Heat stack for each bay nodes. For example, for creating a 
> > cluster with 2 masters and 3 minions, Magnum is going to manage 6
> Heat
> > stacks (instead of 1 big 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-01 Thread Hongbin Lu
Personally, I think this is a good idea, since it can address a set of similar 
use cases like the ones below:
* I want to deploy a k8s cluster to 2 availability zones (and, in the future, 2 
regions/clouds).
* I want to spin up N nodes in AZ1 and M nodes in AZ2.
* I want to scale the number of nodes in a specific AZ/region/cloud. For example, 
add/remove K nodes in AZ1 (with AZ2 untouched).

The use cases above should be very common and universal. To address them, 
Magnum needs to support provisioning a heterogeneous set of nodes at deploy time 
and managing them at runtime. It looks like the proposed idea (manually managing 
individual nodes or individual groups of nodes) can address this requirement 
very well. Besides the proposed idea, I cannot think of an alternative solution.
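
To illustrate the limitation, a rough sketch (template and property names are 
simplified and made up) of how the minions look with a single ResourceGroup; 
every member of the group shares one resource_def, so one flavor and one 
availability zone:

  kube_minions:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: kubeminion.yaml            # one nested template for every member
        properties:
          flavor: m1.small               # the whole group gets this flavor
          availability_zone: us-east-1   # and this availability zone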

Therefore, I vote to support the proposed idea.

Best regards,
Hongbin

> -Original Message-
> From: Hongbin Lu
> Sent: June-01-16 11:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Hi team,
> 
> A blueprint was created for tracking this idea:
> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> nodes . I won't approve the BP until there is a team decision on
> accepting/rejecting the idea.
> 
> From the discussion at the design summit, it looks like everyone is OK with the
> idea in general (with some disagreements about the API style). However,
> from the last team meeting, it looks like some people disagree with the idea
> fundamentally, so I re-raised this ML thread to re-discuss it.
> 
> If you agree or disagree with the idea of manually managing the Heat
> stacks (that contains individual bay nodes), please write down your
> arguments here. Then, we can start debating on that.
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > Sent: May-16-16 5:28 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > The discussion at the summit was very positive around this
> requirement
> > but as this change will make a large impact to Magnum it will need a
> > spec.
> >
> > On the API of things, I was thinking a slightly more generic approach
> > to incorporate other lifecycle operations into the same API.
> > Eg:
> > magnum bay-manage <bay-id> <operation>
> >
> > magnum bay-manage <bay-id> reset –hard
> > magnum bay-manage <bay-id> rebuild
> > magnum bay-manage <bay-id> node-delete <node-id>
> > magnum bay-manage <bay-id> node-add –flavor <flavor>
> > magnum bay-manage <bay-id> node-reset <node-id>
> > magnum bay-manage <bay-id> node-list
> >
> > Tom
> >
> > From: Yuanying OTSUKA 
> > Reply-To: "OpenStack Development Mailing List (not for usage
> > questions)" 
> > Date: Monday, 16 May 2016 at 01:07
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > 
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Hi,
> >
> > I think, user also want to specify the deleting node.
> > So we should manage “node” individually.
> >
> > For example:
> > $ magnum node-create —bay …
> > $ magnum node-list —bay
> > $ magnum node-delete $NODE_UUID
> >
> > Anyway, if magnum want to manage a lifecycle of container
> > infrastructure.
> > This feature is necessary.
> >
> > Thanks
> > -yuanying
> >
> >
> > On Mon, May 16, 2016 at 7:50, Hongbin Lu wrote:
> > Hi all,
> >
> > This is a continued discussion from the design summit. For recap,
> > Magnum manages bay nodes by using ResourceGroup from Heat. This
> > approach works but it is infeasible to manage the heterogeneity
> across
> > bay nodes, which is a frequently demanded feature. As an example,
> > there is a request to provision bay nodes across availability zones
> [1].
> > There is another request to provision bay nodes with different set of
> > flavors [2]. For the request features above, ResourceGroup won’t work
> > very well.
> >
> > The proposal is to remove the usage of ResourceGroup and manually
> > create Heat stack for each bay nodes. For example, for creating a
> > cluster with 2 masters and 3 minions, Magnum is going to manage 6
> Heat
> > stacks (instead of 1 big Heat stack as right now):
> > * A kube cluster stack that manages the global resources
> > * Two kube master stacks that manage the two master nodes
> > * Three kube minion stacks that manage the three minion nodes
> >
> > The proposal might require an additional API endpoint to manage nodes
> > or a group of nodes. For example:
> > $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 --
> > availability-zone us-east-1 ….
> > $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 --
> > availability-zone us-east-2 …
> >
> > Thoughts?
> >
> > [1] https://blueprints.launchpad.net/magnum/+spec/magnum-
> 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-01 Thread Hongbin Lu
Hi team,

A blueprint was created for tracking this idea: 
https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-nodes . I 
won't approve the BP until there is a team decision on accepting/rejecting the 
idea.

From the discussion at the design summit, it looks like everyone is OK with the idea in 
general (with some disagreements about the API style). However, from the last team 
meeting, it looks like some people disagree with the idea fundamentally, so I 
re-raised this ML thread to re-discuss it.

If you agree or disagree with the idea of manually managing the Heat stacks 
(that contains individual bay nodes), please write down your arguments here. 
Then, we can start debating on that.

Best regards,
Hongbin

> -Original Message-
> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> Sent: May-16-16 5:28 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> The discussion at the summit was very positive around this requirement
> but as this change will make a large impact to Magnum it will need a
> spec.
> 
> On the API of things, I was thinking a slightly more generic approach
> to incorporate other lifecycle operations into the same API.
> Eg:
> magnum bay-manage <bay-id> <operation>
> 
> magnum bay-manage <bay-id> reset –hard
> magnum bay-manage <bay-id> rebuild
> magnum bay-manage <bay-id> node-delete <node-id>
> magnum bay-manage <bay-id> node-add –flavor <flavor>
> magnum bay-manage <bay-id> node-reset <node-id>
> magnum bay-manage <bay-id> node-list
> 
> Tom
> 
> From: Yuanying OTSUKA 
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> Date: Monday, 16 May 2016 at 01:07
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Hi,
> 
> I think, user also want to specify the deleting node.
> So we should manage “node” individually.
> 
> For example:
> $ magnum node-create —bay …
> $ magnum node-list —bay
> $ magnum node-delete $NODE_UUID
> 
> Anyway, if magnum want to manage a lifecycle of container
> infrastructure.
> This feature is necessary.
> 
> Thanks
> -yuanying
> 
> 
> On Mon, May 16, 2016 at 7:50, Hongbin Lu wrote:
> Hi all,
> 
> This is a continued discussion from the design summit. For recap,
> Magnum manages bay nodes by using ResourceGroup from Heat. This
> approach works but it is infeasible to manage the heterogeneity across
> bay nodes, which is a frequently demanded feature. As an example, there
> is a request to provision bay nodes across availability zones [1].
> There is another request to provision bay nodes with different set of
> flavors [2]. For the request features above, ResourceGroup won’t work
> very well.
> 
> The proposal is to remove the usage of ResourceGroup and manually
> create Heat stack for each bay nodes. For example, for creating a
> cluster with 2 masters and 3 minions, Magnum is going to manage 6 Heat
> stacks (instead of 1 big Heat stack as right now):
> * A kube cluster stack that manages the global resources
> * Two kube master stacks that manage the two master nodes
> * Three kube minion stacks that manage the three minion nodes
> 
> The proposal might require an additional API endpoint to manage nodes
> or a group of nodes. For example:
> $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 --
> availability-zone us-east-1 ….
> $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 --
> availability-zone us-east-2 …
> 
> Thoughts?
> 
> [1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-
> zones
> [2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-
> flavor
> 
> Best regards,
> Hongbin
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-16 Thread Hongbin Lu
I have an opposing point of view. To summarize the proposals below, there are 
actually three kinds of style.

1. The “python-*client” style. For example:
$ magnum node-add [--flavor <flavor>] <bay>

2. The OSC style. For example:
$ openstack bay node add [--flavor <flavor>] <bay>

3. The proposed style (which is a mix of #1 and #2). For example:
$ magnum bay node add [--flavor <flavor>] <bay>

My observation is that all OpenStack projects are following both #1 and #2. I 
just couldn't find any OpenStack project that implements #3. If Magnum 
implements #3, we immediately become an outlier. I understand the intention is 
to make the python-client -> OSC migration easier, but the consequence might be 
more confusion than the original migration plan. Therefore, I would vote 
against #3.

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: May-16-16 7:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the 
bay nodes


I would vote for the OSC pattern to make it easier for the users, since we 
already expect that migration path.
Also agree with Tom that this is a significant change so we should write a spec 
to think through carefully.
Ton,

From: Adrian Otto
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 05/16/2016 11:24 AM
Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the 
bay nodes






> On May 16, 2016, at 7:59 AM, Steven Dake (stdake) wrote:
>
> Tom,
>
> Devil's advocate here.. :)
>
> Can you offer examples of other OpenStack API services which behave in
> this way with a API?

The more common pattern is actually:

  <verb>-<noun> <target>

or:

  <verb> <target>

Examples:

# trove resize-instance <instance> <flavor>
# nova reboot --hard <server>

The OSC tool uses:

  <noun> <verb> <target>

Example:

# openstack server reboot [-h] [--hard | --soft] [--wait] <server>

If we wanted to be consistent with the original OpenStack style, the proposal 
would be something like:

magnum reset [--hard] <bay>
magnum rebuild <bay>
magnum node-delete <node> [<node> ...]
magnum node-add [--flavor <flavor>] <bay>
magnum node-reset <node>
magnum node-list <bay>

If we wanted to model after OSC, it would be:

magnum bay reset [--hard] <bay>
magnum bay rebuild <bay>
magnum bay node delete <node> [<node> ...]
magnum bay node add [--flavor <flavor>] <bay>
magnum bay node reset <node>
magnum bay node list <bay>

This one is my preference, because when integrated with OSC, the user does not 
need to change the command arguments, just swap in “openstack” for “magnum”. 
The actual order of placement for named options does not matter.

Adrian

>
> I'm struggling to think of any off the top of my head, but admittedly
> don't know all the ins and outs of OpenStack ;)
>
> Thanks
> -steve
>
>
> On 5/16/16, 2:28 AM, "Cammann, Tom" 
> > wrote:
>
>> The discussion at the summit was very positive around this requirement
>> but as this change will make a large impact to Magnum it will need a spec.
>>
>> On the API of things, I was thinking a slightly more generic approach to
>> incorporate other lifecycle operations into the same API.
>> Eg:
>> magnum bay-manage <bay-id> <operation>
>>
>> magnum bay-manage <bay-id> reset -hard
>> magnum bay-manage <bay-id> rebuild
>> magnum bay-manage <bay-id> node-delete <node-id>
>> magnum bay-manage <bay-id> node-add -flavor <flavor>
>> magnum bay-manage <bay-id> node-reset <node-id>
>> magnum bay-manage <bay-id> node-list
>>
>> Tom
>>
>> From: Yuanying OTSUKA >
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> >
>> Date: Monday, 16 May 2016 at 01:07
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> >
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>>
>> Hi,
>>
>> I think, user also want to specify the deleting node.
>> So we should manage “node” individually.
>>
>> For example:
>> $ magnum node-create -bay …
>> $ magnum node-list -bay
>> $ magnum node-delete $NODE_UUID
>>
>> Anyway, if magnum want to manage a lifecycle of container infrastructure.
>> This feature is necessary.
>>
>> Thanks
>> -yuanying
>>
>>
>> On Mon, May 16, 2016 at 7:50, Hongbin Lu wrote:
>> Hi all,
>>
>> This is a continued discussion from the design summit. For recap, Magnum
>> manages bay nodes by using ResourceGroup from Heat. This approach works
>> but it is infeasible to manage the heterogeneity across bay nodes, 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-16 Thread Ton Ngo

I would vote for the OSC pattern to make it easier for the users, since we
already expect that migration path.
Also, I agree with Tom that this is a significant change, so we should write a
spec to think it through carefully.
Ton,



From:   Adrian Otto 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   05/16/2016 11:24 AM
Subject:Re: [openstack-dev] [magnum] Discuss the idea of manually
managing the bay nodes




> On May 16, 2016, at 7:59 AM, Steven Dake (stdake) 
wrote:
>
> Tom,
>
> Devil's advocate here.. :)
>
> Can you offer examples of other OpenStack API services which behave in
> this way with a API?

The more common pattern is actually:

  <verb>-<noun> <target>

or:

  <verb> <target>

Examples:

# trove resize-instance <instance> <flavor>
# nova reboot --hard <server>

The OSC tool uses:

  <noun> <verb> <target>

Example:

# openstack server reboot [-h] [--hard | --soft] [--wait] <server>

If we wanted to be consistent with the original OpenStack style, the
proposal would be something like:

magnum reset [--hard] <bay>
magnum rebuild <bay>
magnum node-delete <node> [<node> ...]
magnum node-add [--flavor <flavor>] <bay>
magnum node-reset <node>
magnum node-list <bay>

If we wanted to model after OSC, it would be:

magnum bay reset [--hard] <bay>
magnum bay rebuild <bay>
magnum bay node delete <node> [<node> ...]
magnum bay node add [--flavor <flavor>] <bay>
magnum bay node reset <node>
magnum bay node list <bay>

This one is my preference, because when integrated with OSC, the user does
not need to change the command arguments, just swap in “openstack” for
“magnum”. The actual order of placement for named options does not matter.

Adrian

>
> I'm struggling to think of any off the top of my head, but admittedly
> don't know all the ins and outs of OpenStack ;)
>
> Thanks
> -steve
>
>
> On 5/16/16, 2:28 AM, "Cammann, Tom"  wrote:
>
>> The discussion at the summit was very positive around this requirement
>> but as this change will make a large impact to Magnum it will need a
spec.
>>
>> On the API of things, I was thinking a slightly more generic approach to
>> incorporate other lifecycle operations into the same API.
>> Eg:
>> magnum bay-manage <bay-id> <operation>
>>
>> magnum bay-manage <bay-id> reset –hard
>> magnum bay-manage <bay-id> rebuild
>> magnum bay-manage <bay-id> node-delete <node-id>
>> magnum bay-manage <bay-id> node-add –flavor <flavor>
>> magnum bay-manage <bay-id> node-reset <node-id>
>> magnum bay-manage <bay-id> node-list
>>
>> Tom
>>
>> From: Yuanying OTSUKA 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Monday, 16 May 2016 at 01:07
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>>
>> Hi,
>>
>> I think, user also want to specify the deleting node.
>> So we should manage “node” individually.
>>
>> For example:
>> $ magnum node-create —bay …
>> $ magnum node-list —bay
>> $ magnum node-delete $NODE_UUID
>>
>> Anyway, if magnum want to manage a lifecycle of container
infrastructure.
>> This feature is necessary.
>>
>> Thanks
>> -yuanying
>>
>>
>> On Mon, May 16, 2016 at 7:50, Hongbin Lu wrote:
>> Hi all,
>>
>> This is a continued discussion from the design summit. For recap, Magnum
>> manages bay nodes by using ResourceGroup from Heat. This approach works
>> but it is infeasible to manage the heterogeneity across bay nodes, which
>> is a frequently demanded feature. As an example, there is a request to
>> provision bay nodes across availability zones [1]. There is another
>> request to provision bay nodes with different set of flavors [2]. For
the
>> request features above, ResourceGroup won’t work very well.
>>
>> The proposal is to remove the usage of ResourceGroup and manually create
>> Heat stack for each bay nodes. For example, for creating a cluster with
2
>> masters and 3 minions, Magnum is going to manage 6 Heat stacks (instead
>> of 1 big Heat stack as right now):
>> * A kube cluster stack that manages the global resources
>> * Two kube master stacks that manage the two master nodes
>> * Three kube minion stacks that manage the three minion nodes
>>
>> The proposal might require an additional API endpoint to manage nodes or
>> a group of nodes. For example:
>> $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2
>> --availability-zone us-east-1 ….
>> $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3
>> --availability-zone us-east-2 …
>>
>> Thoughts?
>>
>> [1]
>> https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
>> [2]
https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor
>>
>> Best regards,
>> Hongbin
>>
__
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-16 Thread Fox, Kevin M
I think I remember something about resourcegroups having a way to delete one of 
them too. Might double check.
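
For reference, Heat's ResourceGroup does expose a removal_policies property that 
can name which members to drop on the next scale-down; a rough sketch, with 
illustrative template and resource names:

  kube_minions:
    type: OS::Heat::ResourceGroup
    properties:
      count: 2                      # scaling down from 3 to 2
      removal_policies:
        - resource_list: ['1']      # remove the member named '1' first
      resource_def:
        type: kubeminion.yaml       # illustrative nested template
        properties:
          flavor: m1.small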

Thanks,
Kevin


From: Cammann, Tom
Sent: Monday, May 16, 2016 2:28:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the 
bay nodes

The discussion at the summit was very positive around this requirement but as 
this change will make a large impact to Magnum it will need a spec.

On the API of things, I was thinking a slightly more generic approach to 
incorporate other lifecycle operations into the same API.
Eg:
magnum bay-manage <bay-id> <operation>

magnum bay-manage <bay-id> reset –hard
magnum bay-manage <bay-id> rebuild
magnum bay-manage <bay-id> node-delete <node-id>
magnum bay-manage <bay-id> node-add –flavor <flavor>
magnum bay-manage <bay-id> node-reset <node-id>
magnum bay-manage <bay-id> node-list

Tom

From: Yuanying OTSUKA 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, 16 May 2016 at 01:07
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the 
bay nodes

Hi,

I think, user also want to specify the deleting node.
So we should manage “node” individually.

For example:
$ magnum node-create —bay …
$ magnum node-list —bay
$ magnum node-delete $NODE_UUID

Anyway, if magnum want to manage a lifecycle of container infrastructure.
This feature is necessary.

Thanks
-yuanying


On Mon, May 16, 2016 at 7:50, Hongbin Lu wrote:
Hi all,

This is a continued discussion from the design summit. For recap, Magnum 
manages bay nodes by using ResourceGroup from Heat. This approach works but it 
is infeasible to manage the heterogeneity across bay nodes, which is a 
frequently demanded feature. As an example, there is a request to provision bay 
nodes across availability zones [1]. There is another request to provision bay 
nodes with different set of flavors [2]. For the request features above, 
ResourceGroup won’t work very well.

The proposal is to remove the usage of ResourceGroup and manually create Heat 
stack for each bay nodes. For example, for creating a cluster with 2 masters 
and 3 minions, Magnum is going to manage 6 Heat stacks (instead of 1 big Heat 
stack as right now):
* A kube cluster stack that manages the global resources
* Two kube master stacks that manage the two master nodes
* Three kube minion stacks that manage the three minion nodes

The proposal might require an additional API endpoint to manage nodes or a 
group of nodes. For example:
$ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 
--availability-zone us-east-1 ….
$ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 
--availability-zone us-east-2 …

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
[2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-16 Thread Fox, Kevin M
Sounds ok, but there needs to be a careful upgrade/migration path, where both 
are supported until after all pods are migrated out of nodes that are in the 
resourcegroup.

Thanks,
Kevin


From: Hongbin Lu
Sent: Sunday, May 15, 2016 3:49:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Discuss the idea of manually managing the bay 
nodes

Hi all,

This is a continued discussion from the design summit. For recap, Magnum 
manages bay nodes by using ResourceGroup from Heat. This approach works but it 
is infeasible to manage the heterogeneity across bay nodes, which is a 
frequently demanded feature. As an example, there is a request to provision bay 
nodes across availability zones [1]. There is another request to provision bay 
nodes with different set of flavors [2]. For the request features above, 
ResourceGroup won’t work very well.

The proposal is to remove the usage of ResourceGroup and manually create Heat 
stack for each bay nodes. For example, for creating a cluster with 2 masters 
and 3 minions, Magnum is going to manage 6 Heat stacks (instead of 1 big Heat 
stack as right now):
* A kube cluster stack that manages the global resources
* Two kube master stacks that manage the two master nodes
* Three kube minion stacks that manage the three minion nodes

The proposal might require an additional API endpoint to manage nodes or a 
group of nodes. For example:
$ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 
--availability-zone us-east-1 ….
$ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 
--availability-zone us-east-2 …

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
[2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-16 Thread Adrian Otto

> On May 16, 2016, at 7:59 AM, Steven Dake (stdake)  wrote:
> 
> Tom,
> 
> Devil's advocate here.. :)
> 
> Can you offer examples of other OpenStack API services which behave in
> this way with a API?

The more common pattern is actually:

  <verb>-<noun> <target>

or:

  <verb> <target>

Examples:

# trove resize-instance <instance> <flavor>
# nova reboot --hard <server>

The OSC tool uses:

  <noun> <verb> <target>

Example:

# openstack server reboot [-h] [--hard | --soft] [--wait] <server>

If we wanted to be consistent with the original OpenStack style, the proposal 
would be something like:

magnum reset [--hard] <bay>
magnum rebuild <bay>
magnum node-delete <node> [<node> ...]
magnum node-add [--flavor <flavor>] <bay>
magnum node-reset <node>
magnum node-list <bay>

If we wanted to model after OSC, it would be:

magnum bay reset [--hard] <bay>
magnum bay rebuild <bay>
magnum bay node delete <node> [<node> ...]
magnum bay node add [--flavor <flavor>] <bay>
magnum bay node reset <node>
magnum bay node list <bay>

This one is my preference, because when integrated with OSC, the user does not 
need to change the command arguments, just swap in “openstack” for “magnum”. 
The actual order of placement for named options does not matter.

Adrian

> 
> I'm struggling to think of any off the top of my head, but admittedly
> don't know all the ins and outs of OpenStack ;)
> 
> Thanks
> -steve
> 
> 
> On 5/16/16, 2:28 AM, "Cammann, Tom"  wrote:
> 
>> The discussion at the summit was very positive around this requirement
>> but as this change will make a large impact to Magnum it will need a spec.
>> 
>> On the API of things, I was thinking a slightly more generic approach to
>> incorporate other lifecycle operations into the same API.
>> Eg:
>> magnum bay-manage <bay-id> <operation>
>> 
>> magnum bay-manage <bay-id> reset –hard
>> magnum bay-manage <bay-id> rebuild
>> magnum bay-manage <bay-id> node-delete <node-id>
>> magnum bay-manage <bay-id> node-add –flavor <flavor>
>> magnum bay-manage <bay-id> node-reset <node-id>
>> magnum bay-manage <bay-id> node-list
>> 
>> Tom
>> 
>> From: Yuanying OTSUKA 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Monday, 16 May 2016 at 01:07
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Hi,
>> 
>> I think, user also want to specify the deleting node.
>> So we should manage “node” individually.
>> 
>> For example:
>> $ magnum node-create —bay …
>> $ magnum node-list —bay
>> $ magnum node-delete $NODE_UUID
>> 
>> Anyway, if magnum want to manage a lifecycle of container infrastructure.
>> This feature is necessary.
>> 
>> Thanks
>> -yuanying
>> 
>> 
>> On Mon, May 16, 2016 at 7:50, Hongbin Lu wrote:
>> Hi all,
>> 
>> This is a continued discussion from the design summit. For recap, Magnum
>> manages bay nodes by using ResourceGroup from Heat. This approach works
>> but it is infeasible to manage the heterogeneity across bay nodes, which
>> is a frequently demanded feature. As an example, there is a request to
>> provision bay nodes across availability zones [1]. There is another
>> request to provision bay nodes with different set of flavors [2]. For the
>> request features above, ResourceGroup won’t work very well.
>> 
>> The proposal is to remove the usage of ResourceGroup and manually create
>> Heat stack for each bay nodes. For example, for creating a cluster with 2
>> masters and 3 minions, Magnum is going to manage 6 Heat stacks (instead
>> of 1 big Heat stack as right now):
>> * A kube cluster stack that manages the global resources
>> * Two kube master stacks that manage the two master nodes
>> * Three kube minion stacks that manage the three minion nodes
>> 
>> The proposal might require an additional API endpoint to manage nodes or
>> a group of nodes. For example:
>> $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2
>> --availability-zone us-east-1 ….
>> $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3
>> --availability-zone us-east-2 …
>> 
>> Thoughts?
>> 
>> [1] 
>> https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
>> [2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor
>> 
>> Best regards,
>> Hongbin
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-16 Thread Steven Dake (stdake)
Tom,

Devil's advocate here.. :)

Can you offer examples of other OpenStack API services which behave in
this way with a API?

I'm struggling to think of any off the top of my head, but admittedly
don't know all the ins and outs of OpenStack ;)

Thanks
-steve


On 5/16/16, 2:28 AM, "Cammann, Tom"  wrote:

>The discussion at the summit was very positive around this requirement
>but as this change will make a large impact to Magnum it will need a spec.
>
>On the API of things, I was thinking a slightly more generic approach to
>incorporate other lifecycle operations into the same API.
>Eg:
>magnum bay-manage <bay-id> <operation>
>
>magnum bay-manage <bay-id> reset –hard
>magnum bay-manage <bay-id> rebuild
>magnum bay-manage <bay-id> node-delete <node-id>
>magnum bay-manage <bay-id> node-add –flavor <flavor>
>magnum bay-manage <bay-id> node-reset <node-id>
>magnum bay-manage <bay-id> node-list
>
>Tom
>
>From: Yuanying OTSUKA 
>Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>
>Date: Monday, 16 May 2016 at 01:07
>To: "OpenStack Development Mailing List (not for usage questions)"
>
>Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>managing the bay nodes
>
>Hi,
>
>I think, user also want to specify the deleting node.
>So we should manage “node” individually.
>
>For example:
>$ magnum node-create —bay …
>$ magnum node-list —bay
>$ magnum node-delete $NODE_UUID
>
>Anyway, if magnum want to manage a lifecycle of container infrastructure.
>This feature is necessary.
>
>Thanks
>-yuanying
>
>
>On Mon, May 16, 2016 at 7:50, Hongbin Lu wrote:
>Hi all,
>
>This is a continued discussion from the design summit. For recap, Magnum
>manages bay nodes by using ResourceGroup from Heat. This approach works
>but it is infeasible to manage the heterogeneity across bay nodes, which
>is a frequently demanded feature. As an example, there is a request to
>provision bay nodes across availability zones [1]. There is another
>request to provision bay nodes with different set of flavors [2]. For the
>request features above, ResourceGroup won’t work very well.
>
>The proposal is to remove the usage of ResourceGroup and manually create
>Heat stack for each bay nodes. For example, for creating a cluster with 2
>masters and 3 minions, Magnum is going to manage 6 Heat stacks (instead
>of 1 big Heat stack as right now):
>* A kube cluster stack that manages the global resources
>* Two kube master stacks that manage the two master nodes
>* Three kube minion stacks that manage the three minion nodes
>
>The proposal might require an additional API endpoint to manage nodes or
>a group of nodes. For example:
>$ magnum nodegroup-create --bay XXX --flavor m1.small --count 2
>--availability-zone us-east-1 ….
>$ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3
>--availability-zone us-east-2 …
>
>Thoughts?
>
>[1] 
>https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
>[2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor
>
>Best regards,
>Hongbin
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-16 Thread taget

Hi Tom,

I like your idea of defining a generic approach for bay life-cycle operations.

The current proposal seems to allow users to dynamically add/delete nodes from a 
created bay, but what about the master/node flavor in the baymodel (the bay's 
flavor)? If a user adds a new node with a flavor that is not defined in the 
baymodel, what should the behavior be, a bad request?

Besides, it seems we are adding a new resource, 'node', which will represent a 
node in the cluster. What would the node API look like, and how do we deal with 
an orphan node? If a node (or node group) is deleted by a user from a bay, do we 
destroy it (them) or just detach it? We may need to think about the node life 
cycle too.

We could also define a group of nodes as a node group to allow users to do batch 
operations.
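
As a purely hypothetical illustration (not an agreed design), such a node group 
resource might carry something like the following, which would also give a 
natural place to hang batch operations and flavor validation against the 
baymodel:

  name: default-minions
  bay_uuid: <bay-uuid>
  flavor_id: m1.small
  availability_zone: us-east-1
  node_count: 3
  nodes: [<node-uuid>, <node-uuid>, <node-uuid>]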



On 2016-05-16 17:28, Cammann, Tom wrote:

magnum bay-manage <bay-id> reset –hard
magnum bay-manage <bay-id> rebuild
magnum bay-manage <bay-id> node-delete <node-id>
magnum bay-manage <bay-id> node-add –flavor <flavor>
magnum bay-manage <bay-id> node-reset <node-id>
magnum bay-manage <bay-id> node-list


--
Best Regards, Eli Qiao (乔立勇)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-16 Thread Cammann, Tom
The discussion at the summit was very positive around this requirement, but as 
this change will have a large impact on Magnum, it will need a spec.

On the API side of things, I was thinking of a slightly more generic approach to 
incorporate other lifecycle operations into the same API.
E.g.:
magnum bay-manage <bay-id> <operation>

magnum bay-manage <bay-id> reset –hard
magnum bay-manage <bay-id> rebuild
magnum bay-manage <bay-id> node-delete <node-id>
magnum bay-manage <bay-id> node-add –flavor <flavor>
magnum bay-manage <bay-id> node-reset <node-id>
magnum bay-manage <bay-id> node-list

Tom

From: Yuanying OTSUKA 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, 16 May 2016 at 01:07
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the 
bay nodes

Hi,

I think, user also want to specify the deleting node.
So we should manage “node” individually.

For example:
$ magnum node-create —bay …
$ magnum node-list —bay
$ magnum node-delete $NODE_UUID

Anyway, if magnum want to manage a lifecycle of container infrastructure.
This feature is necessary.

Thanks
-yuanying


On Mon, May 16, 2016 at 7:50, Hongbin Lu wrote:
Hi all,

This is a continued discussion from the design summit. For recap, Magnum 
manages bay nodes by using ResourceGroup from Heat. This approach works but it 
is infeasible to manage the heterogeneity across bay nodes, which is a 
frequently demanded feature. As an example, there is a request to provision bay 
nodes across availability zones [1]. There is another request to provision bay 
nodes with different set of flavors [2]. For the request features above, 
ResourceGroup won’t work very well.

The proposal is to remove the usage of ResourceGroup and manually create Heat 
stack for each bay nodes. For example, for creating a cluster with 2 masters 
and 3 minions, Magnum is going to manage 6 Heat stacks (instead of 1 big Heat 
stack as right now):
* A kube cluster stack that manages the global resources
* Two kube master stacks that manage the two master nodes
* Three kube minion stacks that manage the three minion nodes

The proposal might require an additional API endpoint to manage nodes or a 
group of nodes. For example:
$ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 
--availability-zone us-east-1 ….
$ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 
--availability-zone us-east-2 …

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
[2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-15 Thread Qiming Teng
On Sun, May 15, 2016 at 10:49:39PM +, Hongbin Lu wrote:
> Hi all,
> 
> This is a continued discussion from the design summit. For recap, Magnum 
> manages bay nodes by using ResourceGroup from Heat. This approach works but 
> it is infeasible to manage the heterogeneity across bay nodes, which is a 
> frequently demanded feature. As an example, there is a request to provision 
> bay nodes across availability zones [1]. There is another request to 
> provision bay nodes with different set of flavors [2]. For the request 
> features above, ResourceGroup won't work very well.
> 
> The proposal is to remove the usage of ResourceGroup and manually create Heat 
> stack for each bay nodes. For example, for creating a cluster with 2 masters 
> and 3 minions, Magnum is going to manage 6 Heat stacks (instead of 1 big Heat 
> stack as right now):
> * A kube cluster stack that manages the global resources
> * Two kube master stacks that manage the two master nodes
> * Three kube minion stacks that manage the three minion nodes
> 
> The proposal might require an additional API endpoint to manage nodes or a 
> group of nodes. For example:
> $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 
> --availability-zone us-east-1 
> $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 
> --availability-zone us-east-2 ...
> 
> Thoughts?
> 
> [1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
> [2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor
> 
> Best regards,
> Hongbin

Seriously, I'm suggesting Magnum to use Senlin for this task. Senlin has
an API that provides rich operations you will need to manage a cluster
of things, where the "thing" here can be a Heat stack or a Nova server.

A "thing" is modeled as a Profile in Senlin, so it is pretty easy and
straightforward for Magnum to feed in the HOT templates (possibly with
parameters and environments?) to Senlin and offload the group management
task from Magnum.
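
To make that concrete, a rough sketch of a Senlin profile spec for a Heat-stack-
backed node, using the os.heat.stack profile type (the values are purely 
illustrative, and the exact property set should be checked against the Senlin 
docs):

  type: os.heat.stack
  version: 1.0
  properties:
    template: kubeminion.yaml        # a HOT template for one node, as an example
    parameters:
      flavor: m1.small
      availability_zone: us-east-1
    environment: {}
    timeout: 60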

Speaking of cross-AZ placement, Senlin already has a policy plugin for this
purpose. Regarding bay nodes bearing different sets of flavors, Senlin permits
that as well.

I believe that by offloading these operations to Senlin, Magnum can remain
focused on COE management and get it done well. I also believe that the Senlin
team will be very responsive to your requirements if there is a need to tune
the Senlin API/policies/mechanisms.

Regards,
  Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-15 Thread Yuanying OTSUKA
Hi,

I think users will also want to specify which node to delete, so we should
manage each “node” individually.

For example:
$ magnum node-create —bay …
$ magnum node-list —bay
$ magnum node-delete $NODE_UUID

Anyway, if Magnum wants to manage the life cycle of the container infrastructure,
this feature is necessary.

Thanks
-yuanying


On Mon, May 16, 2016 at 7:50, Hongbin Lu wrote:

> Hi all,
>
>
>
> This is a continued discussion from the design summit. For recap, Magnum
> manages bay nodes by using ResourceGroup from Heat. This approach works but
> it is infeasible to manage the heterogeneity across bay nodes, which is a
> frequently demanded feature. As an example, there is a request to provision
> bay nodes across availability zones [1]. There is another request to
> provision bay nodes with different set of flavors [2]. For the request
> features above, ResourceGroup won’t work very well.
>
>
>
> The proposal is to remove the usage of ResourceGroup and manually create
> Heat stack for each bay nodes. For example, for creating a cluster with 2
> masters and 3 minions, Magnum is going to manage 6 Heat stacks (instead of
> 1 big Heat stack as right now):
>
> * A kube cluster stack that manages the global resources
>
> * Two kube master stacks that manage the two master nodes
>
> * Three kube minion stacks that manage the three minion nodes
>
>
>
> The proposal might require an additional API endpoint to manage nodes or a
> group of nodes. For example:
>
> $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2
> --availability-zone us-east-1 ….
>
> $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3
> --availability-zone us-east-2 …
>
>
>
> Thoughts?
>
>
>
> [1]
> https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
>
> [2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor
>
>
>
> Best regards,
>
> Hongbin
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-15 Thread Hongbin Lu
Hi all,

This is a continued discussion from the design summit. To recap, Magnum 
manages bay nodes by using ResourceGroup from Heat. This approach works, but it 
makes it infeasible to manage heterogeneity across bay nodes, which is a 
frequently demanded feature. As an example, there is a request to provision bay 
nodes across availability zones [1]. There is another request to provision bay 
nodes with different sets of flavors [2]. For the requested features above, 
ResourceGroup won't work very well.

The proposal is to remove the usage of ResourceGroup and manually create a Heat 
stack for each bay node. For example, for creating a cluster with 2 masters 
and 3 minions, Magnum is going to manage 6 Heat stacks (instead of 1 big Heat 
stack as right now):
* A kube cluster stack that manages the global resources
* Two kube master stacks that manage the two master nodes
* Three kube minion stacks that manage the three minion nodes

The proposal might require an additional API endpoint to manage nodes or a 
group of nodes. For example:
$ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 
--availability-zone us-east-1 
$ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 
--availability-zone us-east-2 ...
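
Purely as an illustrative sketch of the idea (not an agreed design), each node 
group above could then map to its own small Heat stack, parameterized 
independently, e.g.:

  # hypothetical parameters for the stack backing node group 1
  parameters:
    flavor: m1.small
    availability_zone: us-east-1
    node_count: 2

  # hypothetical parameters for the stack backing node group 2
  parameters:
    flavor: m1.medium
    availability_zone: us-east-2
    node_count: 3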

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
[2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev