OK. If using Keystone is not acceptable, I am going to propose a new approach:
· Store data in Magnum DB
· Encrypt data before writing it to DB
· Decrypt data after loading it from DB
· Have the encryption/decryption key stored in a config file
· Use encry
num to lock in to a single vendor.
Regards
-steve
From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Monday, March 7, 2016 at 10:06 AM
To: "OpenSt
I think we'd better have clear guidance here.
For projects that are currently using WSME, should they have a plan to migrate
to other tools? If yes, is there any suggestion for replacement tools? I
think it would be clearer to have an official guideline on this matter.
Best regards,
Hi team,
FYI. In short, we have to temporarily disable SELinux [1] due to bug 1551648
[2].
SELinux is an important security feature of the Linux kernel. It improves
isolation between neighboring containers on the same host. Previously, Magnum
had it turned on in each bay node. However, we have to
doesn’t seem to
be.
Then individuals or companies who are passionate about an alternative OS can
develop the features for that OS.
Corey
On Sat, Mar 5, 2016 at 12:30 AM Hongbin Lu <hongbin...@huawei.com> wrote:
From: Adrian Otto
[mailto:adrian.o...@rackspace.com]
Adrian,
I think Shu Muto was originally proposed to be a magnum-ui liaison, not a magnum
liaison.
Best regards,
Hongbin
-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-04-16 7:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subje
+1
BTW, I am magnum core, not magnum-ui core. Not sure if my vote is counted.
Best regards,
Hongbin
-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-04-16 7:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev]
Regards
-steve
Note that it will take a thoughtful approach (subject to discussion) to balance
these interests. Please take a moment to review the interest above. Do you or
others disagree with these? If so, why?
Adrian
On Mar 4, 2016, at 9:09 AM, Hongbin Lu
<hongbin...@huawei.com>
his thread so far is
"its too hard". Its not too hard, especially with Heat conditionals making
their way into Mitaka.
Regards
-steve
From: Hongbin Lu <hongbin...@huawei.com>
Reply-To:
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org&g
team,
Shu Muto is interested in becoming the liaison from magnum-ui.
He put great effort into translating English to Japanese in magnum-ui and
horizon.
I recommend him to be the liaison.
Thanks
-yuanying
On Mon, Feb 29, 2016 at 23:56, Hongbin Lu <hongbin...@huawei.com>:
Hi team,
FYI, I18n team
sence of such a community interest, my preference is to simplify
to increase our velocity. This seems to me to be a relatively easy way to
reduce complexity around heat template versioning. What do you think?
Thanks,
Adrian
On Feb 29, 2016, at 8:40 AM, Hongbin Lu
<hongbin...@huawei.com> wrote:
Hi team,
This is a continued discussion from a review [1]. Corey O'Brien suggested
true.
In addition, I don't think we should break the coreos template by adding the
trust token as a heat parameter.
Hongbin Lu
I was on the midcycle and I don't remember any decision to remove CoreOS
support. Why do you want to remove the CoreOS templates from the tree? Please note
that this is a
Hi team,
FYI, the I18n team needs liaisons from magnum-ui. Please contact the i18n team if
you are interested in this role.
Best regards,
Hongbin
From: Ying Chun Guo [mailto:guoyi...@cn.ibm.com]
Sent: February-29-16 3:48 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [all][i18n] Liaiso
-Original Message-
From: James Bottomley [mailto:james.bottom...@hansenpartnership.com]
Sent: February-26-16 12:38 PM
To: Daniel P. Berrange
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] A proposal to separate the design summit
On Fr
28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
Follow your heart. You are miracle!
Hi team,
FYI, you might encounter the following error if you pulled from master recently:
magnum bay-create --name swarmbay --baymodel swarmbaymodel --node-count 1
Create for bay swarmbay failed: Failed to create trustee %(username) in domain
$(domain_id) (HTTP 500)"
This is due to a recent commi
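As a side note, the unsubstituted `%(username)` / `$(domain_id)` placeholders in the quoted error are the classic symptom of a mapping-style format string that never received its arguments (plus a malformed placeholder). A tiny illustration, with the variable names made up for the example:

```python
# A mapping-style placeholder must end with a type character like `s`,
# and the string must actually be given its arguments with `%`.
template = "Failed to create trustee %(username)s in domain %(domain_id)s"

# Forgetting the substitution leaks the raw placeholders into the message:
unformatted = template

# Supplying the mapping produces the intended message:
formatted = template % {"username": "bay-user", "domain_id": "magnum"}

print(unformatted)
print(formatted)
```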
Hi Ricardo,
+1 from me. I like this feature.
Best regards,
Hongbin
-Original Message-
From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
Sent: February-23-16 5:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] containers across avai
Hi Heat team,
It looks like the Magnum gate broke after this patch landed:
https://review.openstack.org/#/c/273631/ . I would appreciate it if anyone can
help troubleshoot the issue. If the issue is confirmed, I would prefer
a quick fix or a revert, since we want to unlock the gate ASAP. Thank
Wanghua,
Please add your requests to the midcycle agenda [1], or bring them up in the team
meeting under open discussion. We can discuss them if the agenda allows.
[1] https://etherpad.openstack.org/p/magnum-mitaka-midcycle-topics
Best regards,
Hongbin
From: 王华 [mailto:wanghua.hum...@gmail.com]
Se
if you are interested in this idea, we've
submitted a proposal at the Austin summit:
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8211.
Peng
Disclaimer: I maintain Hyper.
-
Hyper - Make VM run like Cont
Steve,
Thanks for directing Shiva here. BTW, most of your code on objects and the db
layer is still here :).
Shiva,
Please do join the #openstack-containers channel (it is hard to
troubleshoot in the ML). I believe contributors in the channel are happy to
help you. For the Magnum team, it looks we s
Best regards,
Hongbin
From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: February-12-16 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
Hongbin,
I am not sure that it's a good idea, it look
Hi team,
Sorry for bringing up this old thread, but a recent debate on the container
resource [1] reminded me of the use case Kris mentioned below. I am going to
propose a preliminary idea to address the use case. Of course, we could
continue the discussion in the team meeting or midcycle.
Idea: Intr
Rabi,
As you observed, I have uploaded two testing patches [1][2] that depend on
your fix patch [3] and the reverted patch [4] respectively. An observation is
that the test "gate-functional-dsvm-magnum-mesos" failed in [1], but passed in
[2]. That implies the reverted patch does resolve an iss
Hi Heat team,
As mentioned in IRC, the magnum gate broke with bug 1544227. Rabi submitted a
fix (https://review.openstack.org/#/c/278576/), but it doesn't seem to be
enough to unlock the broken gate. In particular, it seems templates with the
SoftwareDeploymentGroup resource failed to complete (I h
Hi Team,
In order to resolve issue #3, it looks like we have to significantly reduce the
memory consumption of the gate tests. Details can be found in this patch
https://review.openstack.org/#/c/276958/ . For core team, a fast review and
approval of that patch would be greatly appreciated, sinc
Corey,
Thanks for investigating the gate issues and summarizing them. It looks like there
are multiple problems to solve, and tickets were created for each one.
1. https://bugs.launchpad.net/magnum/+bug/1542384
2. https://bugs.launchpad.net/magnum/+bug/1541964
3. https://bugs.launc
I would vote for a quick fix + a blueprint.
BTW, I think it is a general consensus that we should move away from Atomic for
various reasons (painful image building, lack of documentation, hard to use, etc.).
We are working on fixing the CoreOS templates, which could replace Atomic in the
future.
Bes
I can clarify Eli’s question further.
1) is this by design that we don't allow magnum-api to access the DB directly?
Yes, that is what it is. Actually, magnum-api was previously allowed to access
the DB directly. After the indirection API patch landed [1], magnum-api
started using magnum-conductor
Hi Magnum team,
FYI, you might be interested in reviewing the Magnum integration spec from the Kuryr team:
https://review.openstack.org/#/c/269039/
Best regards,
Hongbin
From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: January-31-16 2:57 AM
To: OpenStack Development Mailing List (not for usage questions)
+1
-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: February-01-16 10:59 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Magnum] New Core Reviewers
Magnum Core Team,
I propose Ton Ngo (Tango) and Egor Guz (eghobo) as new Magnum Core Review
As Kai Qiang mentioned, in terms of OpenStack projects, Magnum depends on
Keystone, Nova, Glance, Heat, Cinder. If you are looking for the exact set of
dependencies, you can find it here:
https://github.com/openstack/magnum/blob/stable/liberty/requirements.txt .
If you want to run Magnum with ol
meaningful and sustainable way.
________
From: Hongbin Lu
Sent: Tuesday, January 19, 2016 9:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays
Assume your logic is applied. Shou
Architect – Private Cloud R&D
email: mike.met...@rackspace.com
cell: +1-305-282-7606
From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, January 15, 2016 at 8:02 PM
To: "OpenStack Devel
Development Mailing List
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from
gate
Hongbin,
I did some digging and found that the docker storage driver wasn’t configured
correctly on the agent nodes.
Also it looks like the Atomic folks recommend using dedicated volumes for
: [openstack-dev] [magnum] Nesting /containers resource under /bays
What are the reasons for keeping /containers?
On Fri, Jan 15, 2016 at 9:14 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
Disagree.
If the container managing part is removed, Magnum is just a COE deployment
tool. T
te Cloud R&D - Rackspace
________
From: Hongbin Lu <hongbin...@huawei.com>
Sent: Thursday, January 14, 2016 1:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays
I
face area until these are put into further use.
________
From: Hongbin Lu <hongbin...@huawei.com>
Sent: Wednesday, January 13, 2016 5:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /
Hi Jamie,
I would like to clarify several things.
First, a container uuid is intended to be unique globally (not within an
individual cluster). If you create a container with a duplicated uuid, the
creation will fail regardless of its bay. Second, you are in control of the
uuid of the container tha
remove swarm func test from
gate
Hongbin,
I’m not aware of any viable options besides using a nonvoting gate job. Are
there other alternatives to consider? If not, let’s proceed with that approach.
Adrian
> On Jan 7, 2016, at 3:34 PM, Hongbin Lu wrote:
>
> Clark,
>
> That is
[mailto:cboy...@sapwetik.org]
Sent: January-07-16 6:04 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from
gate
On Thu, Jan 7, 2016, at 02:59 PM, Hongbin Lu wrote:
> Hi folks,
>
> It looks the swarm func test is
Hi folks,
It looks like the swarm func test is currently unstable, which negatively impacts
the patch submission workflow. I propose to remove it from the Jenkins gate (but
keep it in the Jenkins check) until it becomes stable. Please find the details in
the review (https://review.openstack.org/#/c/264998
+1
Thanks Steven for pointing out the pitfall.
Best regards,
Hongbin
-Original Message-
From: Steven Hardy [mailto:sha...@redhat.com]
Sent: December-23-15 3:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack][magnum]a problem a
If we decide to support quotas at the CaaS layer (i.e. limit the # of bays), the
implementation doesn't have to be redundant with the IaaS layer (Nova, Cinder,
etc.). The implementation could be a layer on top of IaaS, in which requests
need to pass two layers of quotas to succeed. There would be thr
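The two-layer check described above might be sketched like this. All names, limits, and data structures are illustrative, not Magnum's actual quota code:

```python
# Sketch: a CaaS-level request must pass the CaaS quota (max # of bays)
# and then the underlying IaaS quota (nova instances for the bay's nodes).

class QuotaExceeded(Exception):
    pass

def check_quota(used, requested, limit, layer):
    # Reject the request if it would push usage past the layer's limit
    if used + requested > limit:
        raise QuotaExceeded("%s quota exceeded" % layer)

def create_bay(node_count, caas, iaas):
    # Layer 1: CaaS quota (number of bays per tenant)
    check_quota(caas["bays_used"], 1, caas["max_bays"], "CaaS")
    # Layer 2: IaaS quota (instances consumed by the bay's nodes)
    check_quota(iaas["instances_used"], node_count,
                iaas["max_instances"], "IaaS")
    caas["bays_used"] += 1
    iaas["instances_used"] += node_count

caas = {"bays_used": 1, "max_bays": 2}
iaas = {"instances_used": 8, "max_instances": 10}
create_bay(2, caas, iaas)  # passes both layers; a further bay would be rejected
```

The point is that the CaaS layer only counts its own objects (bays); resource-level accounting stays where it already lives, in the IaaS services.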
Jay,
I think we should agree on a general direction before asking for a spec. It is
bad to have contributors spend time working on something that might not be
accepted.
Best regards,
Hongbin
From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: December-20-15 6:17 PM
To: OpenStack Development Mai
Hi Jeremy,
If you can make the swap size consistent, that would be terrific. Consistent
settings across test nodes can improve the predictability of the test results.
Thanks for the assistance from the infra team in locating the cause of this
error. We greatly appreciate it.
Best regards,
Hongbin
--
cate memory"
On Sun, Dec 13, 2015, at 10:51 AM, Clark Boylan wrote:
> On Sat, Dec 12, 2015, at 02:16 PM, Hongbin Lu wrote:
> > Hi,
> >
> > As Kai Qiang mentioned, magnum gate recently had a bunch of random
> > failures, which occurred on creating a nova instance wi
Suro,
FYI. Previously, we tried a distributed lock implementation for bay operations
(here are the patches [1,2,3,4,5]). However, after several discussions online
and offline, we decided to drop the blocking implementation for bay operations
in favor of a non-blocking implementation (which is not
Hi Tom,
If I remember correctly, the decision is to drop the COE-specific APIs (Pod,
Service, Replication Controller) in the next API version. I think a good way to
do that is to put a deprecation warning in the current API version (v1) for the
removed resources, and remove them in the next API versi
Hi,
As Kai Qiang mentioned, the magnum gate recently had a bunch of random failures,
which occurred on creating a nova instance with 2G of RAM. According to the
error message, it seems the hypervisor tried to allocate memory to the
nova instance but couldn’t find enough free memory in the host
ction and handle things appropriately.
We should think through the scenarios carefully to come to agreement on how
this would work.
Ton Ngo,
As Bharath mentioned, I am +1 to extend the "container" object to Mesos bay. In
addition, I propose to extend "container" to k8s as well (the details are
described in this BP [1]). The goal is to promote this API resource to be
technology-agnostic and make it portable across all COEs. I am going
t going to be a
nuisance to keep up with the various upstreams until they become completely
stable from an API perspective, and no additional changes are likely. All of
our COE’s still have plenty of maturation ahead of them, so this is the wrong
time to wrap them.
If someone really wants apps a
Jay,
Agree and disagree. Containerizing some COE daemons will facilitate version
upgrades and maintenance. However, I don’t think it is correct to blindly
containerize everything unless an investigation is performed to
understand the benefits and costs of doing that. Quoted from Egor, th
Here is a bit more context.
Currently, in the k8s and swarm bays, some required binaries (i.e. etcd and flannel)
are built into the image and run on the host. We are exploring the possibility of
containerizing some of these system components. The rationales are (i) it is
infeasible to build custom packages in
Hi Bharath,
I agree the "container" part. We can implement "magnum container-create .." for
mesos bay in the way you mentioned. Personally, I don't like to introduce
"apps" and "appgroups" resources to Magnum, because they are already provided
by native tool [1]. I couldn't see the benefits to
Hi team,
I would like to start this ML thread to discuss the git rename issue. Here is the
problem. In Git, it is handy to retrieve the commit history of a file/folder. There
are several ways to do that. In the CLI, you can run "git log ..." to show the
history. In Github, you can click the "History" button on to
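As a side note on retrieving history across renames from the CLI: a plain `git log -- <path>` stops at the rename, while `--follow` traces the old name too. A self-contained demo in a throwaway repo (all file names are illustrative):

```shell
# Demonstrate that `git log --follow` preserves history across a rename.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com && git config user.name demo

echo "content" > old_name.rst
git add old_name.rst && git commit -qm "add old_name.rst"
git mv old_name.rst new_name.rst && git commit -qm "rename to new_name.rst"

git log --oneline -- new_name.rst           # shows only the rename commit
git log --oneline --follow -- new_name.rst  # shows the full history, across the rename
```

Note `--follow` works for a single path only, which is one reason web UIs (like the Github "History" view mentioned above) historically did not trace renames by default.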
I am going to share something that might be off the topic a bit.
Yesterday, I was pulled into the #openstack-infra channel to participate in a
discussion related to the atomic image download in Magnum. It looks like
the infra team is not satisfied with the large image size. In particular, they
Hi Steve,
Thanks for your contributions. Personally, I would like to thank you for your
mentorship and guidance when I was new to Magnum. It helped me a lot to pick up
everything. Best wishes for your adventure in Kolla.
Best regards,
Hongbin
From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent:
Hi Mars,
I cannot reproduce the error. My best guess is that your VMs don’t have
external internet access (could you verify it by sshing into one of your VMs and
typing “curl openstack.org”?). If not, please create a bug to report the error
(https://bugs.launchpad.net/magnum).
Thanks,
Hongbin
From
Hi Bertrand,
Thanks for reporting the error. I confirmed that this error was consistently
reproducible. A bug ticket was created for that.
https://bugs.launchpad.net/magnum/+bug/1506226
Best regards,
Hongbin
-Original Message-
From: Bertrand NOEL [mailto:bertrand.n...@cern.ch]
Sent: O
Hi team,
I want to move the discussion in the review below to here, so that we can get
more feedback
https://review.openstack.org/#/c/232175/
In summary, magnum recently added support for specifying the memory size of
containers. The specification of the memory size is optional, and the COE w
hanks,
Kevin
From: Hongbin Lu [hongbin...@huawei.com]
Sent: Thursday, October 01, 2015 7:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
Kris,
I think the proposal of hierarchical pro
Kris,
I think the proposal of hierarchical projects is out of the scope of magnum,
and you might need to bring it up at a keystone or cross-project meeting. I am
going to propose a workaround that might work for you with the existing tenancy
model.
Suppose there is a department (department A) with tw
+1 for both. Welcome!
From: Davanum Srinivas [mailto:dava...@gmail.com]
Sent: September-30-15 7:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] New Core Reviewers
+1 from me for both Vilobh and Hua.
Thanks,
Dims
On Wed, Sep 30, 2015 a
+1 from me as well.
I think what makes Magnum appealing is the promise to provide
container-as-a-service. I see COE deployment as a helper to achieve that
promise, instead of the main goal.
Best regards,
Hongbin
From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: September-29-15 10:57 PM
To: Ope
Hi Ton,
If I understand your proposal correctly, it means the input password will be
exposed to users in the same tenant (since the password is passed as a stack
parameter, which is exposed within the tenant). If users are not admin, they don't
have the privilege to create a temp user. As a result, us
Regarding the guidance, I think the judgment is a bit subjective. It could happen
that a contributor thinks his/her patch is trivial (or it is not fixing a functional
defect), but a reviewer thinks the opposite. For example, I found it hard to
judge when I reviewed the following patches:
https://review.ope
Hi,
I am fine to have an election with Adrian Otto, and potentially with other
candidates who are also late.
Best regards,
Hongbin
From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: September-17-15 4:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [opensta
Hi all,
I would like to announce my candidacy for the PTL position of Magnum.
I have been involved in the Magnum project since December 2014. At that time,
Magnum's code base was much smaller than it is now. Since then, I have worked with
a diverse set of team members to land features, discuss the roadmap,
Hi Ryan,
I think pushing python-k8sclient out of the magnum tree (option 3) is the decision
that was made at the Vancouver Summit (if I remember correctly). It definitely
helps solve the k8s versioning problems.
Best regards,
Hongbin
From: Ryan Rossiter [mailto:rlros...@linux.vnet.ibm.com]
S
Hi team,
Currently, the magnum weekly team meeting is scheduled at Tuesday UTC 1600 and
UTC 2200. As our team grows, contributors from different timezones have joined and
actively participated. I worry that our current meeting schedule (which was
decided a long time ago) might not be up-to-date to
Hi team,
As you may know, magnum is tested with pre-built Fedora Atomic images.
Basically, these images are standard atomic images with k8s packages
pre-installed. The images can be downloaded from fedorapeople.org [1]. In most
cases, you are able to test magnum by using images there. If you are
the way they
are designed, the smaller the upfront cost, and it will also be a major savings
later on if something like [1] pops up.
[1]: https://bugs.launchpad.net/nova/+bug/1474074
[2]: https://review.openstack.org/#/c/217342/
On 8/27/2015 9:46 AM, Hongbin Lu wrote:
-1 from me.
IMHO, the rol
-1 from me.
IMHO, the rolling upgrade feature makes sense for a mature project (like Nova),
but not for a young project like Magnum. It incurs overhead for contributors &
reviewers to check object compatibility in each patch. As you mentioned,
the key benefit of this feature is supporting
Hi Wanghua,
For the question about how to pass user password to bay nodes, there are
several options here:
1. Directly inject the password into bay nodes via cloud-init. This might
be the simplest solution. I am not sure if it is OK from a security perspective.
2. Inject a scoped Keystone tru
Adrian,
If the reason to avoid leader election is that it is complicated and error
prone, this argument may not be true. Leader election is complicated in a pure
distributed system in which there is no centralized storage. However, Magnum
has a centralized database, so it is possible to impl
Suro,
I think service/pod/rc are k8s-specific. +1 for Jay’s suggestion about renaming
COE-specific commands, since the new naming style looks consistent with other
OpenStack projects. In addition, it will eliminate name collisions between
different COEs. Also, if we are going to support pluggable COEs,
also have a plan to integrate with mesos for scheduling. Once the mesos
integration is finished, we can treat mesos+hyper as another kind of bay.
Thanks
2015-07-19 4:15 GMT+08:00 Hongbin Lu <hongbin...@huawei.com>:
Peng,
Several questions here. You mentioned that HyperStack is a single big “bay”.
Then, who is doing the multi-host scheduling, Hyper or something else? Were you
suggesting to integrate Hyper with Magnum directly? Or were you suggesting to
integrate Hyper with Magnum indirectly (i.e. through k
wk...@cn.ibm.com
Tel: 86-10-82451647
'ironic' means a baremetal nova instance.
Adrian
Original message
From: Hongbin Lu <hongbin...@huawei.com>
Date: 07/14/2015 7:20 PM (GMT-08:00)
To: "OpenStack Development Mailing List (not for usage questions)"
mailto:openstack-dev@lists.openstack
I am going to propose a third option:
3. virt_type
I have concerns about options 1 and 2, because “instance_type” and flavor were
used interchangeably before [1]. If we use “instance_type” to indicate “vm” or
“baremetal”, it may cause confusion.
[1] https://blueprints.launchpad.net/nova/+spec/f
Hi,
I sent this email to request investigation of a suspicious commit [1] from
devstack, which possibly breaks magnum's functional gate test. The first
breakage on the Magnum side occurred at Jul 10 4:18 PM [2], which is about half an
hour after the suspicious commit was merged. By digging into
+1 Welcome Tom!
-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: July-09-15 10:21 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [magnum] Tom Cammann for core
Team,
Tom Cammann (tcammann) has become a valued Magnum contributor, and consis
Agree. The motivation for pulling templates out of the Magnum tree is the hope
that these templates can be leveraged by a larger community and get more feedback.
However, it is unlikely to be the case in practice, because different people
have their own versions of templates for addressing different use cases
Hi team,
I would like to start my question by using a sample template:
heat_template_version: 2014-10-16
parameters:
  count:
    type: number
    default: 5
  removal_list:
    type: comma_delimited_list
    default: []
resources:
  sample_group:
    type: OS::Heat::ResourceGroup
    properties:
I think option #3 is the most desirable choice from a performance point of view,
because magnum is going to support multiple conductors and all conductors share
the same DB. However, if each conductor runs its own thread for the periodic task,
we will end up having multiple instances of the task for doing
blueprint.
This way we solve for the use case, and don't need a new attribute on the bay
resource that requires users to concatenate multiple attribute values in order
to get a native client tool working.
Adrian
On Jun 12, 2015, at 6:32 PM, Hongbin Lu
<hongbin...@huawei.com> wrote:
A use case could be that the cloud is behind a proxy and the API port is filtered.
In this case, users have to start the service on an alternative port.
Best regards,
Hongbin
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: June-12-15 2:22 PM
To: OpenStack Development Mailing List (not for
Could we have a new group magnum-ui-core and include magnum-core as a subgroup,
like the heat-coe-template-core group?
Thanks,
Hongbin
From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: June-04-15 1:58 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstac
Hi Jay,
For your question “what is the mesos object that we want to manage”, the short
answer is: it depends. There are two options I can think of:
1. Don't manage any object from Marathon directly. Instead, we can focus
on the existing Magnum objects (i.e. container), and implement them
+1!
From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: May-31-15 1:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Proposing Kai Qiang Wu (Kennan) for Core for
Magnum
Hi core team,
Kennan (Kai Qiang Wu's nickname) has really done
Hi Ronald,
I think the “update” action is definitely appropriate to use, since it is not
specific to magnum (Heat and Ironic use it as well). By looking through the
existing list of actions [1], it looks like the start/stop actions fit into the
openstackclient resume/suspend actions. The “execute”
+1!
On Apr 28, 2015, at 11:14 PM, "Steven Dake (stdake)" wrote:
> Hi folks,
>
> I would like to nominate Madhuri Kumari to the core team for Magnum. Please
> remember a +1 vote indicates your acceptance. A –1 vote acts as a complete
> veto.
>
> Why Madhuri for core?
> She participates on
Hi Madhuri,
Amazing work! I wouldn't worry about the code duplication and modularity issues
since the code is generated. However, there is another concern here: if
we find a bug/improvement in the generated code, we probably need to modify
the generator. The question is whether the upstream will accept th