This is really a good idea, because it would reduce much of the work of
implementing loops and conditional branches in Heat ResourceGroup. But as Kevin
pointed out in the mail below, it needs a careful upgrade/migration path.
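For context, Magnum currently drives all bay nodes from a single Heat ResourceGroup, which stamps out `count` copies of one resource definition; a minimal sketch of that pattern (flavor and image names are illustrative, not Magnum's actual template):

```yaml
# Sketch of the homogeneous-node pattern under discussion: every
# member of the group shares one resource_def, so all bay nodes
# get the same flavor. Names below are illustrative.
heat_template_version: 2015-04-30
resources:
  bay_nodes:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: OS::Nova::Server
        properties:
          name: node-%index%    # %index% is expanded per member
          flavor: m1.small      # one flavor for every node
          image: fedora-atomic
```

Because `resource_def` is fixed for the whole group, giving different members different flavors requires either multiple groups or per-member handling outside Heat, which is the crux of this thread.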
Meanwhile, as for the blueprint of supporting multiple flavors
During the discussion on this ML and in team meetings, it seems most of us
accepted the idea of supporting heterogeneous clusters. What we didn't agree
on is how to implement it. To move it forward, I am going to summarize the
various implementation options so that we can debate each option
+1 on this. Another use case would be 'fast storage' for dbs, 'any
storage' for memcache and web servers. Relying on labels for this
makes it really simple.
The alternative of doing it with multiple clusters adds complexity to
the cluster descriptions that users have to write.
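The label-based scheduling suggested above can be sketched with a standard Kubernetes nodeSelector; the label key/value and image names here are illustrative assumptions:

```yaml
# Hypothetical labels: nodes backed by a fast-storage flavor are
# labeled storage=fast; a database pod pins itself to them, while
# memcache/web pods simply omit the selector and land anywhere.
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  nodeSelector:
    storage: fast    # only schedule onto fast-storage nodes
  containers:
    - name: mysql
      image: mysql:5.7
```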
On Fri, Jun 3, 2016 at 1:54 AM,
“heterogeneous cluster is more advanced and harder to control”
So, I believe that Magnum should control and overcome this problem.
Magnum is a container infrastructure as a service.
Managing heterogeneous environment seems scope of Magnum’s mission.
On Fri, Jun 3, 2016 at 8:55, Fox, Kevin M
I agree that a heterogeneous cluster is more advanced and harder to control, but
I don't get why we (as service developers/providers) care about that. If there
is a significant portion of users asking for advanced topologies (i.e.
heterogeneous clusters) and willing to deal with the complexities,
As an operator that has clouds that are partitioned into different host
aggregates with different flavors targeting them, I totally believe we will
have users that want to have a single k8s cluster span multiple different
flavor types. I'm sure once I deploy magnum, I will want it too. You
Have you considered a workflow engine?
FWIW I agree with Adrian about the difficulties of heterogeneous systems.
Much better to operate, and in reality the world has moved entirely to
x86_64 + Linux. I could see a future in which ARM breaks into the server
space, but that is multiple
I am really struggling to accept the idea of heterogeneous clusters. My
experience causes me to question whether a heterogeneous cluster makes sense for
Magnum. I will try to explain why I have this hesitation:
1) If you have a heterogeneous cluster, it suggests that you are using external
Has an email been posted to the [heat] community for their input? Maybe I
On 6/2/16, 9:42 AM, "Hongbin Lu" wrote:
>It looks like both of us agree on the idea of having a heterogeneous set of nodes.
>For the implementation, I am open to
For the implementation of heterogeneity, I think we should avoid talking
to nova or other services directly, which would bring lots of coding.
Maybe the best way is to refactor our Heat templates, and let a bay support
several Heat templates when we scale out a new node or delete
It looks like both of us agree on the idea of having a heterogeneous set of nodes. For
the implementation, I am open to alternatives (I supported the work-around idea
because I cannot think of a feasible implementation purely using Heat,
unless Heat supports "for" logic, which is very unlikely
I also liked the idea of having a heterogeneous set of nodes, but IMO such
features should not be implemented in Magnum, thus deviating Magnum again from
its roadmap. Instead, we should leverage Heat (or maybe Senlin) APIs for the
I vote +1 for this feature.
Personally, I think this is a good idea, since it can address a set of similar
use cases like those below:
* I want to deploy a k8s cluster to 2 availability zones (in future 2
* I want to spin up N nodes in AZ1, M nodes in AZ2.
* I want to scale the number of nodes in specific
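One way the multi-AZ use case could map onto Heat is one ResourceGroup per zone, so the two node counts scale independently; AZ names, counts, and flavors below are illustrative assumptions:

```yaml
# Sketch: one ResourceGroup per availability zone, so N and M can
# be scaled independently. AZ names and flavors are assumptions.
heat_template_version: 2015-04-30
resources:
  nodes_az1:
    type: OS::Heat::ResourceGroup
    properties:
      count: 4                      # N nodes in AZ1
      resource_def:
        type: OS::Nova::Server
        properties:
          availability_zone: az1
          flavor: m1.small
          image: fedora-atomic
  nodes_az2:
    type: OS::Heat::ResourceGroup
    properties:
      count: 2                      # M nodes in AZ2
      resource_def:
        type: OS::Nova::Server
        properties:
          availability_zone: az2
          flavor: m1.small
          image: fedora-atomic
```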
A blueprint was created for tracking this idea:
https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-nodes . I
won't approve the BP until there is a team decision on accepting/rejecting the
From the discussion at the design summit, it looks like everyone is OK with the idea
I have an opposing point of view. To summarize the below, there are actually three
styles.
1. The “python-*client” style. For example:
$ magnum node-add [--flavor <flavor>]
2. The OSC style. For example:
$ openstack bay node add [--flavor <flavor>]
3. The proposed style (which is a mix of #1 and #2).
I would vote for the OSC pattern to make it easier for the users, since we
already expect that migration path.
Also agree with Tom that this is a significant change, so we should write a
spec to think it through carefully.
From: Adrian Otto
I think I remember something about ResourceGroups having a way to delete a
specific one of them too. Might be worth double-checking.
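For reference, ResourceGroup does support removing specific members on a stack update via its `removal_policies` property; a sketch:

```yaml
# Shrinking the group from 3 to 2 while naming which member goes:
# resource_list picks members by index (or refid) rather than
# letting Heat choose which one to remove.
resources:
  bay_nodes:
    type: OS::Heat::ResourceGroup
    properties:
      count: 2
      removal_policies:
        - resource_list: ['1']    # delete member '1' specifically
      resource_def:
        type: OS::Nova::Server
        properties:
          flavor: m1.small
          image: fedora-atomic
```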
From: Cammann, Tom
Sent: Monday, May 16, 2016 2:28:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Sounds ok, but there needs to be a careful upgrade/migration path, where both
are supported until after all pods are migrated out of nodes that are in the
From: Hongbin Lu
Sent: Sunday, May 15, 2016 3:49:39 PM
> On May 16, 2016, at 7:59 AM, Steven Dake (stdake) wrote:
> Devil's advocate here.. :)
> Can you offer examples of other OpenStack API services which behave in
> this way with an API?
The more common pattern is actually:
Devil's advocate here.. :)
Can you offer examples of other OpenStack API services which behave in
this way with an API?
I'm struggling to think of any off the top of my head, but admittedly
don't know all the ins and outs of OpenStack ;)
On 5/16/16, 2:28 AM, "Cammann,
I like your idea of defining a generic approach for bay lifecycle operations.
It seems the current proposal is to allow users to dynamically add/delete nodes
from a created bay, but what about the master/node flavor in the baymodel (the
bay's flavor)? What if a user adds a new node with a flavor which is not defined
The discussion at the summit was very positive around this requirement, but as
this change will make a large impact on Magnum it will need a spec.
On the API of things, I was thinking a slightly more generic approach to
incorporate other lifecycle operations into the same API.
On Sun, May 15, 2016 at 10:49:39PM +, Hongbin Lu wrote:
> Hi all,
> This is a continued discussion from the design summit. For recap, Magnum
> manages bay nodes by using ResourceGroup from Heat. This approach works but
> it is infeasible to manage the heterogeneity across bay nodes, which
I think users also want to specify which node to delete.
So we should manage "nodes" individually.
$ magnum node-create --bay …
$ magnum node-list --bay
$ magnum node-delete $NODE_UUID
Anyway, if Magnum wants to manage the lifecycle of container infrastructure,
This feature is
This is a continued discussion from the design summit. For recap, Magnum
manages bay nodes by using ResourceGroup from Heat. This approach works but it
is infeasible to manage the heterogeneity across bay nodes, which is a
frequently demanded feature. As an example, there is a request