Steve,
I will defer to the experts in openstack-infra on this one. As long as the
image works without modifications, I think it would be fine to cache the
upstream one. Practically speaking, I do anticipate a point at which we will
want to adjust something in the image, and it will be
Adrian,
Makes sense. Do the images have to be built to be mirrored, though? Can't
they just be put on the mirror sites from upstream?
Thanks
-steve
On 3/29/16, 11:02 AM, "Adrian Otto" wrote:
>Steve,
>
>I'm very interested in having an image locally cached in glance
Steve,
I’m very interested in having an image locally cached in glance in each of the
clouds used by OpenStack infra. The local caching of the glance images will
produce much faster gate testing times. I don’t care about how the images are
built, but we really do care about the performance
Yolanda,
That is a fantastic objective. Mathieu asked: why build our own images if
the upstream images work and need no further customization?
Regards
-steve
On 3/29/16, 1:57 AM, "Yolanda Robla Mota"
wrote:
>Hi
>The idea is to build own images using
Hi
The idea is to build our own images using diskimage-builder, rather than
downloading the image from external sources. That way, the image can
live in our mirrors and is built using the same pattern as the other images
used in OpenStack.
It also opens the door to customize the images, using
Hi,
We are using the official Fedora Atomic 23 images here (on Mitaka M1
however) and they seem to work fine with at least Kubernetes and Docker
Swarm.
Is there any reason to continue building a specific Magnum image?
Regards,
Mathieu
On Wednesday, 23 March 2016 at 12:09 +0100, Yolanda Robla Mota wrote:
>
Hi team,
Please review the agenda for our team meeting tomorrow:
https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-03-29_1600_UTC
. Please feel free to add items to the agenda if you like.
Best regards,
Hongbin
Yes, that's exactly what I want to do: adding the DCOS CLI and also adding
Chronos to the Mesos bay so it can handle both long-running services and batch
jobs.
Thanks,
On Fri, Mar 25, 2016 at 5:25 PM, Michal Rostecki
wrote:
> On 03/25/2016 07:57 AM, Jay Lau wrote:
>
>> Hi
On 03/25/2016 07:57 AM, Jay Lau wrote:
Hi Magnum,
The current Mesos bay only includes Mesos and Marathon; it would be better to
give the Mesos bay more components and eventually evolve it into a DCOS
focused on container services based on Mesos.
For more detail, please refer to
opment Mailing List <openstack-dev@lists.openstack.org>
Sent: Thursday, March 24, 2016 11:57 PM
Subject: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay
Hi Magnum, The current Mesos bay only includes Mesos and Marathon; it would be
better to give the Mesos bay more components
Hi Magnum,
The current Mesos bay only includes Mesos and Marathon; it would be better to
give the Mesos bay more components and eventually evolve it into a DCOS
focused on container services based on Mesos.
For more detail, please refer to
Just an FYI for folks trying to follow the two-weekly Fedora Atomic update
cycle.
-Steve
- Forwarded Message -
> From: "Adam Miller"
> To: "Fedora Cloud SIG" ,
> rel-...@lists.fedoraproject.org,
>
- Original Message -
> From: "Kai Qiang Wu" <wk...@cn.ibm.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Sent: Wednesday, March 23, 2016 9:10:30 PM
> Subject: Re:
From: Yolanda Robla Mota <yolanda.robla-m...@hpe.com>
To: <openstack-dev@lists.openstack.org>
Date: 03/23/2016 04:12 AM
Subject: [openstack-dev] [magnum] Generate atomic images using
diskimage-builder
>
Date: 24/03/2016 01:12 am
Subject: Re: [openstack-dev] [magnum] Generate atomic images using
diskimage-builder
Hi Yolanda,
Thank you for making a huge improvement over the manual process of building
the Fedora Atomic image.
Although Atomic does publish a public OpenStac
it?
>
> Thanks,
> Kevin
>
>
>
>
> From: Ian Cordasco [sigmaviru...@gmail.com]
> Sent: Wednesday, March 23, 2016 2:45 PM
> To: Monty Taylor; OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openst
From: Ian Cordasco [sigmaviru...@gmail.com]
Sent: Wednesday, March 23, 2016 2:45 PM
To: Monty Taylor; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Streamline adoption of Magnum
-Original Message-
From
penstack.org>
Subject: Re: [openstack-dev] [magnum] Streamline adoption of Magnum
> On 03/23/2016 04:45 PM, Ian Cordasco wrote:
> > -Original Message-
> > From: Monty Taylor
> > Reply: OpenStack Development Mailing List (not for usage questions)
> > Dat
penstack.org <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Streamline adoption of Magnum
On 03/22/2016 06:27 PM, Kevin Carter wrote:
/me wearing my deployer hat now: Many of my customers and product folks
want Magnum but they also want magnum to be as secure and
g>
Subject: Re: [openstack-dev] [magnum] Streamline adoption of Magnum
> On 03/22/2016 06:27 PM, Kevin Carter wrote:
> > /me wearing my deployer hat now: Many of my customers and product folks
> > want Magnum but they also want magnum to be as secure and stable as
> > poss
com>
To: <openstack-dev@lists.openstack.org>
Date: 03/23/2016 04:12 AM
Subject: [openstack-dev] [magnum] Generate atomic images using
diskimage-builder
Hi
I wanted to start a discussion on how Fedora Atomic images are being
built. Currently the process for generatin
Hi
I wanted to start a discussion on how Fedora Atomic images are being
built. Currently the process for generating the atomic images used on
Magnum is described here:
http://docs.openstack.org/developer/magnum/dev/build-atomic-image.html.
The image needs to be built manually, uploaded to
19 PM
To:"OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org
<mailto:openstack-dev@lists.openstack.org>>
Subject:Re: [openstack-dev] [magnum] High Availability
Tim,
Thanks for your advice. I respect your point of view and we will
de
t;daneh...@cisco.com <mailto:daneh...@cisco.com>> wrote:
>>
>>
>>
>> From:Hongbin Lu <hongbin...@huawei.com <mailto:hongbin...@huawei.com>>
>> Reply-To:"OpenStack Development Mailing List (not for usage
>> questions)" <openstack-de
agnum.rst
[2] http://thenewstack.io/hypernetes-brings-multi-tenancy-microservices/
I am going to send another ML to describe the details. You are welcome to
provide your inputs. Thanks.
Best regards,
Hongbin
From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: March-19-16 5:55 AM
To: OpenStack Develo
stack.org>>
Date: Monday, March 21, 2016 at 8:19 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability
Tim,
Thanks
8:19 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability
Tim,
Thanks for your advice. I respect your point of view and we will def
t;openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] High Availability
> Tim,
>
> Thanks for your advice. I respect your point of view and we will definitely
> encourage
> our users to try Barbican if they see fits. However, for the sake of Magnum,
>
ist (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability
From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org<mailto:op
> -Original Message-
> From: Clark, Robert Graham [mailto:robert.cl...@hpe.com]
> Sent: March-20-16 1:57 AM
> To: maishsk+openst...@maishsk.com; OpenStack Development Mailing List
> (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availabilit
,
Hongbin
From: Dave McCowan (dmccowan) [mailto:dmcco...@cisco.com]
Sent: March-19-16 10:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability
The most basic requirement here for Magnum is that it needs a safe place
Hi all,
I'm writing the spec for the COE drivers, and I wanted some feedback about what
it should include. I'm trying to reconstruct some of the discussion that was
had at the mid-cycle meet-up, but since I wasn't there I need to rely on people
who were :)
From my perspective, the spec
/openstack/anchor
Cheers
-Rob
> -Original Message-
> From: Maish Saidel-Keesing [mailto:mais...@maishsk.com]
> Sent: 19 March 2016 18:10
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
>
> Forgiv
- Original Message -
> From: "Kai Qiang Wu" <wk...@cn.ibm.com>
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Sent: Tuesday, March 15, 2016 3:20:46 PM
> Subject: Re: [openstack-d
t;
>Thanks,
>Kevin
>
>From: Douglas Mendizábal [douglas.mendiza...@rackspace.com]
>Sent: Friday, March 18, 2016 6:45 AM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [magnum] High Availability
>
>Hongbin,
>
Hi.
We're on the way: the API is using haproxy load balancing the same way
all OpenStack services do here, and this part seems to work fine.
For the conductor we're stopped due to bay certificates; we don't
currently have Barbican, so local was the only option. To get them
accessible on all
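For reference, load-balancing the Magnum API behind haproxy could look roughly like the sketch below; the addresses and server names are assumptions, not taken from this thread (9511 is magnum-api's default port).

```
# Hedged sketch of an haproxy section for the Magnum API; addresses
# and backend server names below are illustrative assumptions.
listen magnum_api
    bind 192.0.2.10:9511
    balance roundrobin
    option httpchk
    server controller1 192.0.2.11:9511 check
    server controller2 192.0.2.12:9511 check
```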
tack-dev@lists.openstack.org>
> Sent: Tuesday, March 15, 2016 3:20:46 PM
> Subject: Re: [openstack-dev] [magnum] Discussion of supporting
single/multiple OS distro
>
> Hi Stdake,
>
> There is a patch about Atomic 23 support in Magnum. And atomic 23 uses
> kubernetes 1.0.6, and
On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal <
douglas.mendiza...@rackspace.com> wrote:
> [snip]
> >
> > Regarding the Keystone solution, I'd like to hear the Keystone team's
> feadback on that. It definitely sounds to me like you're trying to put a
> square peg in a round hole.
> >
>
>
I
)
Subject: Re: [openstack-dev] [magnum] High Availability
On 3/18/16, 12:59 PM, "Fox, Kevin M" <kevin@pnnl.gov> wrote:
>+1. We should be encouraging a common way of solving these issues across
>all the openstack projects and security is a really important thing.
>spreading i
ftware within
>> those environments. As misguided as this viewpoint may be, it’s common. My
>> belief is that it’s best to offer the best practice by default, and only
>> allow insecure operation when someone deliberately turns of fundamental
>> security features.
>
approved.
>
> [1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
>
> Best regards,
> Hongbin
>
> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: March-17-16 2:39 PM
> To: OpenStack Development Mailing List (not for usage q
a totally understandable, but unreasonable request.
Thanks,
Kevin
From: Douglas Mendizábal [douglas.mendiza...@rackspace.com]
Sent: Friday, March 18, 2016 6:45 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] High Availability
Hongbin
.@rackspace.com]
> Sent: March-18-16 9:45 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum] High Availability
>
> Hongbin,
>
> I think Adrian makes some excellent points regarding the adoption of
> Barbican. As the PTL for Barbican
Follow your heart. You are a miracle!
From: Jamie Hannaford <jamie.hannaf...@rackspace.com>
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date: 17/03/2016 07:24 pm
Subject: [openstack-dev
All,
Does anyone have experience deploying Magnum in a highly-available fashion? If
so, I'm interested in learning from your experience. My biggest unknown is the
Conductor service. Any insight you can provide is greatly appreciated.
Regards,
Daneyon Hansen
tack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] High Availability
>
>
>
> Hongbin,
>
>
>
> I tweaked the blueprint in accordance with this approach, and approved
> it for Newton:
>
> https://blueprints.laun
;
> Best regards,
> Hongbin
>
> -Original Message-
> From: Douglas Mendizábal [mailto:douglas.mendiza...@rackspace.com]
> Sent: March-18-16 9:45 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum] High Availability
>
> Hongbin,
I thought that a big part of the use case with Magnum + Barbican was
Certificate management for Bays?
-Rob
From: "Dave McCowan (dmccowan)"
Reply-To: OpenStack List
Date: Saturday, 19 March 2016 14:56
To: OpenStack List
Subject: Re: [openstack-dev] [magnum] High Availability
The
org>>
Date: Friday, March 18, 2016 at 10:52 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability
OK. If using Keystone
mmunity that we were out to integrate with and enhance
>>> OpenStack not to replace it.
>>>
>>> Now, with all that said, I do recognize that not all clouds are motivated
>>> to use all available security best practices. They may be operating in
>>
chnology easily
accessible.
Thanks,
Adrian
Best regards,
Hongbin
-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-17-16 4:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availa
t;openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] High Availability
> Thanks Adrian,
>
> I think the Keystone approach will work. For others, please speak up if it
> doesn’t work
> for you.
So I think we need to clear out some assumptions before declaring th
I announce my candidacy [1] for, and respectfully request your support to
continue as, your Magnum PTL.
Here are my achievements and OpenStack experience that make me the best
choice for this role:
choice for this role:
* Founder of the OpenStack Containers Team
* Established vision and specification for
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability
...
If you disagree, I would request you to justify why this approach works f
bout Magnum adoption as much as all of us,
> so I’d like us to think creatively about how to strike the right balance
> between re-implementing existing technology, and making that technology
> easily accessible.
>
> Thanks,
>
> Adrian
>
>>
>>
Hi,
I would like to announce my candidacy for the PTL position of Magnum.
To introduce myself: my involvement in Magnum began in December 2014, when
the project was at a very early stage. Since then, I have been working with the
team to explore the roadmap, implement and refine individual
-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-17-16 4:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability
I have trouble understanding that blueprint. I will put some remark
nts.launchpad.net/magnum/+spec/barbican-alternative-store
>
> Best regards,
> Hongbin
>
> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: March-17-16 2:39 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subjec
-dev] [magnum] High Availability
Hi.
We're on the way, the API is using haproxy load balancing in the same way all
openstack services do here - this part seems to work fine.
For the conductor we're stopped due to bay certificates - we don't currently
have barbican so local was the only option
;wk...@cn.ibm.com>
Sent: 17 March 2016 15:44
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] COE drivers spec
Here are some of my raw points,
1. For the driver mentioned, I think it is not necessary to use bay-driver here,
as we have network-
: [openstack-dev] [magnum] High Availability
Hongbin,
I tweaked the blueprint in accordance with this approach, and approved it for
Newton:
https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
I think this is something we can all agree on as a middle ground. If not, I'm
open
ble.
Thanks,
Adrian
Best regards,
Hongbin
-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-17-16 4:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability
I have trouble u
@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] High Availability
Hongbin,
I think Adrian makes some excellent points regarding the adoption of Barbican.
As the PTL for Barbican, it's frustrating to me to constantly hear from other
projects that securing their sensitive data is a requirement
/heat-specs/specs/juno/encrypt-hidden-parameters.html
Best regards,
Hongbin
From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability
On Fri, Mar 18, 2016 at 4
> On Mar 17, 2016, at 11:41 AM, Ricardo Rocha wrote:
>
> Hi.
>
> We're on the way, the API is using haproxy load balancing in the same
> way all openstack services do here - this part seems to work fine
I expected the API to work. Thanks for the confirmation.
>
> For
Hi Bradley Jones,
I propose a reorganization of the "Magnum-UI Drivers" team on Launchpad for
Magnum-UI. The team can contain both magnum core members and magnum-ui core
members.
Please also review the following patch:
https://review.openstack.org/#/c/289584/
So I suggest cutting the mitaka release until
tack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: 16/03/2016 03:23 am
Subject:Re: [openstack-dev] [magnum] Discussion of supporting
single/multiple OS distro
WFM as long as we stick to the spir
Hi peers,
Thank you for the appointment to core!!
I'm honored to be working with my peers.
I'll do my best as core and i18n liaison.
Since I received the previous mail after going home,
I'm sorry for my absence in the last meeting.
Best regards,
Shu Muto
"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, March 14, 2016 at 8:10 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org&
Hi,
Voting has concluded. Welcome, Shu Muto to the magnum-UI core team! I will
announce your new status at today's team meeting.
Thanks,
Adrian
> On Mar 14, 2016, at 5:40 PM, Shuu Mutou wrote:
>
> Hi team,
>
> Thank you very much for voting for me.
> I'm looking
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-14-16 4:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple
OS distro
Steve,
I think you may have misunderstood our intent here
Hi team,
Thank you very much for voting for me.
I'm looking forward to working more with our peers.
However, when does this vote end?
Shu Muto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
stions)"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple
OS distro
From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-07-16 8:11 AM
To: OpenStack Developme
t;openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, March 7, 2016 at 10:06 AM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Sub
Hi team,
FYI. In short, we have to temporarily disable SELinux [1] due to bug 1551648
[2].
SELinux is an important security feature of the Linux kernel. It improves
isolation between neighboring containers on the same host. Previously, Magnum
had it turned on in each bay node. However, we have
From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-07-16 8:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple
OS distro
Hongbin, I think the offer to support different OS options
ackspace.com]
> *Sent:* March-04-16 6:31 PM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
>
> *Subject:* Re: [openstack-dev] [magnum] Discussion of supporting
> single/multiple OS distro
>
>
>
> Steve,
>
>
>
> On Mar 4, 2016, at 2:4
Best regards,
Hongbin
-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-04-16 7:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum-ui] Proposed Core addition, and removal notice
Magnum UI Cores,
I propose t
ssage-
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: March-04-16 7:29 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [magnum-ui] Proposed Core addition, and removal
> notice
>
> Magnum UI Cores,
>
> I pro
)
Subject: Re: [openstack-dev] [magnum][magnum-ui] Liaisons for I18n
Kato,
I have confirmed with Shu Muto, who will be assuming our I18n Liaison role for
Magnum until further notice. Thanks for raising this important request.
Regards,
Adrian
> On Mar 3, 2016, at 6:53 AM, KATO Tomoyuki
+1
BTW, I am magnum core, not magnum-ui core. Not sure if my vote is counted.
Best regards,
Hongbin
-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-04-16 7:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev
ent Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: 05/03/2016 06:25 pm
Subject: Re: [openstack-dev] [magnum-ui] Proposed Core addition, and
removal notice
+1, welcome Shu.
-Yuanying
On Sat, Mar 5, 2016 at 09:37 B
+1, welcome Shu.
-Yuanying
On Sat, Mar 5, 2016 at 09:37 Bradley Jones (bradjone)
wrote:
> +1
>
> Shu has done some great work in magnum-ui and will be a welcome addition
> to the core team.
>
> Thanks,
> Brad
>
> > On 5 Mar 2016, at 00:29, Adrian Otto
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-04-16 6:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple
OS distro
Steve,
On Mar 4, 2016, at 2:41 PM, Steven Dake (stdake
+1
Shu has done some great work in magnum-ui and will be a welcome addition to the
core team.
Thanks,
Brad
> On 5 Mar 2016, at 00:29, Adrian Otto wrote:
>
> Magnum UI Cores,
>
> I propose the following changes to the magnum-ui core group [1]:
>
> + Shu Muto
> -
Magnum UI Cores,
I propose the following changes to the magnum-ui core group [1]:
+ Shu Muto
- Dims (Davanum Srinivas), by request - justified by reduced activity level.
Please respond with your +1 votes to approve this change or -1 votes to oppose.
Thanks,
Adrian
Kato,
I have confirmed with Shu Muto, who will be assuming our I18n Liaison role for
Magnum until further notice. Thanks for raising this important request.
Regards,
Adrian
> On Mar 3, 2016, at 6:53 AM, KATO Tomoyuki wrote:
>
> I added Magnum to the list... Feel free
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Friday, March 4, 2016 at 12:48 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
S
4, 2016 at 12:48 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple
OS distro
Hongbin,
To be clear, this
.
Best regards,
Hongbin
From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-04-16 11:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple
OS distro
I don't think anyone is saying that c
ons)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple
OS distro
I don't think anyone is saying that code should somehow block support for
multiple distros. The discussion at midcycle was about what we should gate
on and ensure feature parity for as a team. Ideall
[mailto:coreypobr...@gmail.com]
Sent: March-04-16 11:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple
OS distro
I don't think anyone is saying that code should somehow block support for
multiple
-steve
>
> From: Hongbin Lu <hongbin...@huawei.com>
> Reply-To: "openstack-dev@lists.openstack.org" <
> openstack-dev@lists.openstack.org>
> Date: Monday, February 29, 2016 at 9:40 AM
> To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.opensta
AM
To:
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum] Discussion of supporting single/multiple OS
distro
Hi team,
This is a
On Fri, Mar 04, 2016 at 01:09:26PM +0800, Qiming Teng wrote:
> Another option is to try out senlin service. What you need to do is
> something like below:
>
> 1. Create a heat template you want to deploy as a group, say,
> node_template.yaml
>
> 2. Create a senlin profile spec (heat_stack.yaml)
Another option is to try out the senlin service. What you need to do is
something like the following:
1. Create a heat template you want to deploy as a group, say,
node_template.yaml
2. Create a senlin profile spec (heat_stack.yaml) which may look
like, for example:
type: os.heat.stack
version: 1.0
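As a sketch, such a profile spec could look like the following; the property names are assumptions about the os.heat.stack profile and are not taken from this thread:

```yaml
# Hedged sketch of a senlin os.heat.stack profile spec; property names
# below are assumptions, not confirmed by the original message.
type: os.heat.stack
version: 1.0
properties:
  template: node_template.yaml
  parameters:
    availability_zone: az1
```

A profile would then be created from this spec and used to create a cluster of the desired size.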
Thank you both for your answers!
Indeed I need it sooner rather than later (as usual :) ), so the Newton
release is a bit too far away.
In the meantime I just tested your solution with the index and the map
and it works great!
I'll use that for now, and we will discuss taking over the Heat bp
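The index-and-map approach mentioned here can be sketched with Heat's ResourceGroup, using the `%index%` variable to pick an availability zone from a list parameter; the parameter, image, and flavor names below are assumptions:

```yaml
# Hedged sketch: spreading ResourceGroup members across availability
# zones by indexing into a list parameter with %index%.
heat_template_version: 2015-10-15
parameters:
  avail_zones:
    type: comma_delimited_list
    default: az1,az2,az3
resources:
  nodes:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: OS::Nova::Server
        properties:
          image: fedora-atomic-23   # assumed image name
          flavor: m1.small          # assumed flavor
          availability_zone: {get_param: [avail_zones, '%index%']}
```

Each member of the group then lands in az1, az2, and az3 in turn.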
On Wed, Mar 02, 2016 at 05:40:20PM -0500, Zane Bitter wrote:
> On 02/03/16 05:50, Mathieu Velten wrote:
> >Hi all,
> >
> >I am looking at a way to spawn nodes in different specified
> >availability zones when deploying a cluster with Magnum.
> >
> >Currently Magnum directly uses predefined Heat
On 02/03/16 05:50, Mathieu Velten wrote:
Hi all,
I am looking at a way to spawn nodes in different specified
availability zones when deploying a cluster with Magnum.
Currently Magnum directly uses predefined Heat templates with Heat
parameters to handle configuration.
I tried to reach my goal
Hi all,
I am looking at a way to spawn nodes in different specified
availability zones when deploying a cluster with Magnum.
Currently Magnum directly uses predefined Heat templates with Heat
parameters to handle configuration.
I tried to reach my goal by sticking to this model, however I