look forward to the honor of serving you well.
Thanks,
Adrian Otto
[1] https://review.openstack.org/454908
[2] https://docs.openstack.org/developer/magnum/policies.html
__
OpenStack Development Mailing List (not for usage
Because they will be using an openstack
>> cloud
>> and they will be wanting to use the magnum service. So, it only makes sense
>> to type openstack magnum cluster or mcluster, which is shorter.
>>
>>
>> On 21 March 2017 at 02:24, Qiming Teng <teng...@linux.vnet
pi...@gmail.com>>
wrote:
On 03/20/2017 03:08 PM, Adrian Otto wrote:
Team,
Stephen Watson has been working on a magnum feature to add magnum commands to
the openstack client by implementing a plugin:
https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclie
Dean,
Thanks for your reply.
> On Mar 20, 2017, at 2:18 PM, Dean Troyer <dtro...@gmail.com> wrote:
>
> On Mon, Mar 20, 2017 at 3:37 PM, Adrian Otto <adrian.o...@rackspace.com>
> wrote:
>> the argument is actually the service name, such as “ec2”. This is
>
Jay,
On Mar 20, 2017, at 12:35 PM, Jay Pipes
<jaypi...@gmail.com<mailto:jaypi...@gmail.com>> wrote:
On 03/20/2017 03:08 PM, Adrian Otto wrote:
Team,
Stephen Watson has been working on a magnum feature to add magnum commands to
the openstack client by implementing a pl
> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum
>> commands in osc?
>>
>> On 03/20/2017 03:08 PM, Adrian Otto wrote:
>>> Team,
>>>
>>> Stephen Watson has been working on a magnu
> Thanks,
> Kevin
> ________
> From: Adrian Otto [adrian.o...@rackspace.com]
> Sent: Monday, March 20, 2017 12:08 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [magnum][osc] What name to use for magnum commands
> in osc?
>
Team,
Stephen Watson has been working on a magnum feature to add magnum commands to
the openstack client by implementing a plugin:
https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
In review of this work, a question has resurfaced, as to what the client
I have opened the following bug ticket for this issue:
https://bugs.launchpad.net/magnum/+bug/1663757
Regards,
Adrian
On Feb 10, 2017, at 1:46 PM, Adrian Otto
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:
What I’d like to see in this case is to use secure connections by default, and
to make workarounds for self signed certificates or other optional workarounds
for those who need them. I would have voted against patch set 383493. It’s also
not linked to a bug ticket, which we normally require
I voted to merge [1] and [2]:
[1] https://review.openstack.org/#/c/429753/
[2] https://review.openstack.org/#/c/430941/
FFE approved for Magnum, provided this does not cause problems for other
projects.
Adrian
On Feb 8, 2017, at 7:25 AM, Adrian Otto
<adrian.o...@rackspace.com<mailto:ad
We are actively working to verify that magnum-ui works with the adjusted
requirements.txt, and as soon as we have confirmed this change is
non-disruptive, I will be ready to approve the FFE.
Adrian
> On Feb 7, 2017, at 4:54 PM, Richard Jones wrote:
>
> It looks like
, because I want Magnum to be my
primary focus. I look forward to your vote, and continued success together.
Thanks,
Adrian Otto
Team,
I will be starting our feature freeze today. We have a few more patches to
consider for merge before we enter the freeze. I’ll let you all know when each
has been considered, and we are ready to begin the freeze.
Thanks,
Adrian
ers provided to us as input at the PTG. This
might give us a chance to source further input from a wider audience than our
PTG attendees.
Thoughts?
Thanks,
Adrian
On 18 Jan 2017 8:36 p.m., "Adrian Otto"
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:
Josh,
> On Jan 18, 2017, at 10:18 AM, Josh Berkus wrote:
>
> Magnum Devs:
>
> Is there going to be a magnum team meeting around OpenStack Summit in
> Boston?
>
> I'm the community manager for Atomic Host, so if you're going to have
> Magnum meetings, I'd like to send you
> On Jan 16, 2017, at 11:02 AM, Dave McCowan (dmccowan)
> wrote:
>
> On 1/16/17, 11:52 AM, "Ian Cordasco" wrote:
>
>> -Original Message-
>> From: Rob C
>> Reply: OpenStack Development Mailing List (not for usage
Team,
We discussed this in today’s team meeting:
http://eavesdrop.openstack.org/meetings/containers/2017/containers.2017-01-03-16.00.html
Our consensus was to start iterating on this in-tree and break it out later
into a separate repo once we have reasonably mature drivers, and/or further
Ruben,
I found the following two reviews:
https://review.openstack.org/397150 Magnum_driver for congress
https://review.openstack.org/397151 Test for magnum_driver
Are these what you are referring to, or is it something else?
Thanks,
Adrian
> On Nov 14, 2016, at 4:13 AM, Ruben
Jaycen and Yatin,
You have each been added as new core reviewers. Congratulations to you both,
and thanks for stepping up to take on this new role!
Cheers,
Adrian
> On Nov 7, 2016, at 11:06 AM, Adrian Otto <adrian.o...@rackspace.com> wrote:
>
> Magnum Core Team,
>
> I
Magnum Core Team,
I propose Jaycen Grant (jvgrant) and Yatin Karel (yatin) as new Magnum Core
Reviewers. Please respond with your votes.
Thanks,
Adrian Otto
Thank Doug for organizing this. In our session, I mentioned that Magnum is
working through some of the issues now, and will touch on a portion of these
concerns in one of our sessions:
Thursday, October 27, 2:40pm-3:20pm
CCIB - P1 - Room 124/125
I plan to attend, but may be a few minutes late. Apologies in advance.
Adrian
> On Oct 24, 2016, at 8:23 PM, Doug Wiegley
> wrote:
>
> As part of a requirements mailing list thread [1], the idea of a servicevm
> working group, or a common framework for reference
016/summit-schedule/global-search?t=Magnum:>
If there are any additional topics you would like to cover with the team,
please add them to our Friday afternoon meetup:
https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/17216
Thanks,
Adria
together.
Thanks,
Adrian Otto
I am struggling to understand why we would want to remove projects from our big
tent at all, as long as they are being actively developed under the principles
of "four opens". It seems to me that working to disqualify such projects sends
an alarming signal to our ecosystem. The reason we made
s/mentally/centrally/
Autocorrect is not my friend.
On Jul 29, 2016, at 11:26 AM, Adrian Otto
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:
Yasmin,
One option you have is to use the libvirt-lxc nova virt driver, and use an
image that has a docker daemon installed on it. That would give you a way to
place docker containers on a data plane that uses no virtualization, but you
need to individually manage each instance. Another option
On Jul 27, 2016, at 1:26 PM, Hongbin Lu
> wrote:
Here is the guideline to evaluate an API change:
http://specs.openstack.org/openstack/api-wg/guidelines/evaluating_api_changes.html
. In particular, I highlight the following:
"""
The
How about a shark? Something along these lines:
http://www.logoground.com/logo.php?id=10554
On Jul 25, 2016, at 3:54 PM, Hongbin Lu wrote:
Hi team,
OpenStack wants to promote individual projects by choosing a mascot to represent
the project. The idea is to create a
+1
--
Adrian
On Jul 22, 2016, at 10:28 AM, Hongbin Lu
> wrote:
Hi all,
Spyros has consistently contributed to Magnum for a while. In my opinion, what
differentiates him from others is the significance of his contribution, which
adds
Rackspace is willing to host in Austin, TX or San Antonio, TX, or San
Francisco, CA.
--
Adrian
On Jun 7, 2016, at 1:35 PM, Hongbin Lu
> wrote:
Hi all,
Please find the Doodle pool below for selecting the Magnum midcycle date.
Presumably, it
I am really struggling to accept the idea of heterogeneous clusters. My
experience causes me to question whether a heterogeneous cluster makes sense for
Magnum. I will try to explain why I have this hesitation:
1) If you have a heterogeneous cluster, it suggests that you are using external
Brandon,
Magnum uses neutron’s LBaaS service to allow for multi-master bays. We can
balance connections between multiple kubernetes masters, for example. It’s not
needed for single master bays, which are much more common. We have a blueprint
that is in design stage for de-coupling magnum from
> On May 25, 2016, at 12:43 PM, Ben Swartzlander wrote:
>
> On 05/25/2016 06:48 AM, Sean Dague wrote:
>> I've been watching the threads, trying to digest, and find the way's
>> this is getting sliced doesn't quite slice the way I've been thinking
>> about it. (which might
> On May 24, 2016, at 12:09 PM, Mike Perez wrote:
>
> On 12:24 May 24, Thierry Carrez wrote:
>> Morgan Fainberg wrote:
>>> [...] If we are accepting golang, I want it to be clearly
>>> documented that the expectation is it is used exclusively where there is
>>> a
Before considering a project rename, I suggest you seek guidance from the
OpenStack technical committee, and/or the OpenStack-infra team. There is
probably a simple workaround to the concern voiced below.
--
Adrian
> On May 24, 2016, at 1:37 AM, Shuu Mutou wrote:
>
> On May 16, 2016, at 7:59 AM, Steven Dake (stdake) wrote:
>
> Tom,
>
> Devil's advocate here.. :)
>
> Can you offer examples of other OpenStack API services which behave in
> this way with a API?
The more common pattern is actually:
or:
Examples:
# trove
Magnum Team,
In accordance with our Fishbowl discussion yesterday at the Newton Design
Summit in Austin, I have proposed the following revision to Magnum’s mission
statement:
https://review.openstack.org/311476
The idea is to narrow the scope of our Magnum project to allow us to focus on
4:49 PM, "Joshua Harlow"
<harlo...@fastmail.com<mailto:harlo...@fastmail.com>> wrote:
>Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making native container
>>> APIs available. We should not wrap AP
> On Apr 20, 2016, at 2:49 PM, Joshua Harlow <harlo...@fastmail.com> wrote:
>
> Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making native container
>>> APIs available. We should not wrap APIs with le
Hongbin,
Both of the approaches you suggested may only work for one binary format. If you
try to use docker on a different system architecture, the pre-cache of images
makes it even more difficult to get the correct images built and loaded.
I suggest we take an approach that allows the Baymodel
-----Original Message-----
>>> From: Flavio Percoco [mailto:fla...@redhat.com]
>>> Sent: April-12-16 8:40 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Cc: foundat...@lists.openstack.org
>>> Subject: Re: [openstack-dev] [OpenSt
Please don't miss the point here. We are seeking a solution that allows a
location to place a client side encrypted blob of data (A TLS cert) that
multiple magnum-conductor processes on different hosts can reach over the
network.
We *already* support using Barbican for this purpose, as well as
to
allocate a session in design summit to discuss it as a team.
Best regards,
Hongbin
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-11-16 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org<mailto:foundat...@lists.opensta
Amrith,
I respect your point of view, and agree that the idea of a common compute API
is attractive… until you think a bit deeper about what that would mean. We
seriously considered a “global” compute API at the time we were first
contemplating Magnum. However, what we came to learn through
On Apr 8, 2016, at 3:15 PM, Hongbin Lu
> wrote:
Hi team,
I would like to give an update for this thread. In the last team meeting, we discussed
several options to introduce Chronos to our mesos bay:
1. Add Chronos to the mesos bay. With this
+1
On Mar 31, 2016, at 11:18 AM, Hongbin Lu
> wrote:
Hi all,
Eli Qiao has consistently contributed to Magnum for a while. His
contribution started from about 10 months ago. Along the way, he implemented
several important blueprints and
+1
On Mar 31, 2016, at 11:18 AM, Hongbin Lu
> wrote:
Hi all,
Eli Qiao has consistently contributed to Magnum for a while. His
contribution started from about 10 months ago. Along the way, he implemented
several important blueprints and
just be put on the mirror sites from upstream?
Thanks
-steve
On 3/29/16, 11:02 AM, "Adrian Otto"
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:
Steve,
I’m very interested in having an image locally cached in glance in each of the
clouds used by OpenStack infra. The local caching of the glance images will
produce much faster gate testing times. I don’t care about how the images are
built, but we really do care about the performance
Marcos,
Great question. The current intent is to backport security fixes and critical
bugs, and to focus on master for new feature development. Although we would
love to expand scope to backport functionality, I’m not sure it’s realistic
without an increased level of commitment from that group
> On Mar 24, 2016, at 7:48 AM, Hongbin Lu wrote:
>
>
>
>> -Original Message-
>> From: Assaf Muller [mailto:as...@redhat.com]
>> Sent: March-24-16 9:24 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev]
Team,
This thread is a continuation of a branch of the previous High Availability
thread [1]. As the Magnum PTL, I’ve been aware of a number of different groups
who have started using Magnum in recent months. For various reasons, there have
been multiple requests for information about how to
Team,
Time to close down this thread and start a new one. I’m going to change the
subject line, and start with a summary. Please restrict further discussion on
this thread to the subject of High Availability.
Thanks,
Adrian
On Mar 22, 2016, at 11:52 AM, Daneyon Hansen (danehans)
,
Adrian
On Mar 17, 2016, at 6:13 PM, Adrian Otto
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:
Hongbin,
One alternative we could discuss as an option for operators that have a good
reason not to use Barbican, is to use Keystone.
Keystone credentials
integration. Further leverage Neutron through Kuryr, and
demonstrate ways to integrate alternative options.
I look forward to your vote, and to continued success together.
Thanks,
Adrian Otto
[1] https://review.openstack.org/293729
at
some point in the future, justifying the use of an alternate.
Adrian
> On Mar 17, 2016, at 4:55 PM, Adrian Otto <adrian.o...@rackspace.com> wrote:
>
> Hongbin,
>
>> On Mar 17, 2016, at 2:25 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
>>
>> Ad
I have trouble understanding that blueprint. I will put some remarks on the
whiteboard. Duplicating Barbican sounds like a mistake to me.
--
Adrian
> On Mar 17, 2016, at 12:01 PM, Hongbin Lu wrote:
>
> The problem of missing Barbican alternative implementation has been
ble.
Thanks,
Adrian
Best regards,
Hongbin
-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-17-16 4:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability
I have trouble u
Hi,
Voting has concluded. Welcome, Shu Muto to the magnum-UI core team! I will
announce your new status at today's team meeting.
Thanks,
Adrian
> On Mar 14, 2016, at 5:40 PM, Shuu Mutou wrote:
>
> Hi team,
>
> Thank you very much for vote to me.
> I'm looking
OS.
Corey
On Sat, Mar 5, 2016 at 12:30 AM Hongbin Lu
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
From: Adrian Otto
[mailto:adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>]
Sent: March-04-16 6:31 PM
To: OpenStack Development Mailing List (not for
Hi there Janki. Thanks for catching that. I think we can address this by
creating a branch for the client that aligns with kilo. I’ve triaged the magnum
bug on this, and I’m willing to help drive it to resolution together.
Regards,
Adrian
On Mar 9, 2016, at 8:16 PM, Janki Chhatbar
Magnum UI Cores,
I propose the following changes to the magnum-ui core group [1]:
+ Shu Muto
- Dims (Davanum Srinivas), by request - justified by reduced activity level.
Please respond with your +1 votes to approve this change or -1 votes to oppose.
Thanks,
Adrian
Kato,
I have confirmed with Shu Muto, who will be assuming our I18n Liaison role for
Magnum until further notice. Thanks for raising this important request.
Regards,
Adrian
> On Mar 3, 2016, at 6:53 AM, KATO Tomoyuki wrote:
>
> I added Magnum to the list... Feel free
Steve,
On Mar 4, 2016, at 2:41 PM, Steven Dake (stdake)
<std...@cisco.com<mailto:std...@cisco.com>> wrote:
From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Hongbin,
To be clear, this pursuit is not about what OS options cloud operators can
select. We will be offering a method of choice. It has to do with what we plan
to build comprehensive testing for, and the implications that has on our pace
of feature development. My guidance here is that we
OpenStack deployments)
and CoreOS (is highly adopted/tested in Kub community and Mesosphere DCOS uses
it as well).
We can implement CoreOS support as driver and users can use it as reference
implementation.
---
Egor
____
From: Adrian Otto <adrian.o...@rack
Consider this: Which OS runs on the bay nodes is not important to end users.
What matters to users is the environments their containers execute in, which
has only one thing in common with the bay node OS: the kernel. The linux
syscall interface is stable enough that the various linux
se the labels in the docker
> daemon config.
>
> On Wed, Feb 24, 2016 at 6:01 PM, Vilobh Meshram
> <vilobhmeshram.openst...@gmail.com> wrote:
>> +1 from me too for the idea. Please file a blueprint. Seems feasible and
>> useful.
>>
>> -Vilobh
>>
>
Ricardo,
Yes, that approach would work. I don’t see any harm in automatically adding
tags to the docker daemon on the bay nodes as part of the swarm heat template.
That would allow the filter selection you described.
Adrian
> On Feb 23, 2016, at 4:11 PM, Ricardo Rocha
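The label-and-filter idea discussed above can be sketched concretely. This is an illustrative sketch only — the label key `group` and both helper functions are assumptions, not Magnum code: the Heat template would pass `--label` flags to each node's docker daemon, and a client could then target labeled nodes with classic Swarm `constraint:` expressions.

```python
# Hypothetical sketch: turn per-node docker daemon labels into classic
# Swarm scheduler constraints. Names are illustrative, not Magnum's code.

def daemon_label_flags(labels):
    """Flags a Heat template could append to the docker daemon command line."""
    return ["--label=%s=%s" % (k, v) for k, v in sorted(labels.items())]

def swarm_constraint_env(labels):
    """Expressions a client would pass (docker run -e ...) to filter nodes."""
    return ["constraint:%s==%s" % (k, v) for k, v in sorted(labels.items())]

print(daemon_label_flags({"group": "autoscaling"}))   # ['--label=group=autoscaling']
print(swarm_constraint_env({"group": "autoscaling"})) # ['constraint:group==autoscaling']
```

With classic (standalone) Swarm, such constraints were typically supplied as environment variables, e.g. `docker run -e constraint:group==autoscaling ...`.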
Thanks everyone for your votes. Welcome Ton and Egor to the core team!
Regards,
Adrian
> On Feb 1, 2016, at 7:58 AM, Adrian Otto <adrian.o...@rackspace.com> wrote:
>
> Magnum Core Team,
>
> I propose Ton Ngo (Tango) and Egor Guz (eghobo) as new Magnum Core Reviewe
Agreed.
> On Jan 31, 2016, at 10:46 PM, 王华 wrote:
>
> Hi all,
>
> I want to remove node object from Magnum. The node object represents either a
> bare metal or virtual machine node that is provisioned with an OS to run the
> containers, or alternatively,
> run
Magnum Core Team,
I propose Ton Ngo (Tango) and Egor Guz (eghobo) as new Magnum Core Reviewers.
Please respond with your votes.
Thanks,
Adrian Otto
Team,
We have selected Feb 18-19 for the Midcycle, and will be hosted by HPE. Please
save the date. The exact location is forthcoming, and is expected to be
Sunnyvale.
Thanks,
Adrian
> On Jan 11, 2016, at 11:29 AM, Adrian Otto <adrian.o...@rackspace.com> wrote:
>
>
Team,
We are planning a mid cycle meetup for the Magnum team to be held in the San
Francisco Bay area. If you would like to attend, please take a moment to
respond to this poll to select the date:
http://doodle.com/poll/k8iidtamnkwqe3hd
Thanks,
Adrian
Hongbin,
I’m not aware of any viable options besides using a nonvoting gate job. Are
there other alternatives to consider? If not, let’s proceed with that approach.
Adrian
> On Jan 7, 2016, at 3:34 PM, Hongbin Lu wrote:
>
> Clark,
>
> That is true. The check pipeline
This sounds like a source-of-truth concern. From my perspective the solution is
not to create redundant quotas. Simply quota the Magnum resources. Lower level
limits *could* be queried by magnum prior to acting to CRUD the lower level
resources. In the case we could check the maximum allowed
ave extensive experience in producing containers for
various runtime environments focused around OpenStack.
Regards
-steve
From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questi
Clint,
> On Dec 16, 2015, at 11:56 AM, Tim Bell wrote:
>
>> -Original Message-
>> From: Clint Byrum [mailto:cl...@fewbar.com]
>> Sent: 15 December 2015 22:40
>> To: openstack-dev
>> Subject: Re: [openstack-dev] [openstack][magnum]
Tom,
> On Dec 16, 2015, at 9:31 AM, Cammann, Tom wrote:
>
> I don’t see a benefit from supporting the old API through a microversion
> when the same functionality will be available through the native API.
+1
[snip]
> Have we had any discussion on adding a v2 API and
On Dec 16, 2015, at 2:25 PM, James Bottomley
<james.bottom...@hansenpartnership.com<mailto:james.bottom...@hansenpartnership.com>>
wrote:
On Wed, 2015-12-16 at 20:35 +0000, Adrian Otto wrote:
Clint,
On Dec 16, 2015, at 11:56 AM, Tim Bell
<tim.b...@cern.ch<mailto:tim.b.
Clint,
I think you are categorically dismissing a very real ops challenge of how to
set correct system limits, and how to adjust them in a running system. I have
been stung by this challenge repeatedly over the years. As developers we
*guess* at what a sensible default for a value will be for
Adrian Otto wrote on 17/12/2015 07:00:37 am: Tom, > On Dec 16, 2015, at
9:31 AM, Cammann, Tom <tom.camm...@hpe.com<mailto:tom.camm...@hpe.com>> wrote:
From: Adrian Otto <adria
> On Dec 16, 2015, at 6:24 PM, Joshua Harlow wrote:
>
> SURO wrote:
>> Hi all,
>> Please review and provide feedback on the following design proposal for
>> implementing the blueprint[1] on async-container-operations -
>>
>> 1. Magnum-conductor would have a pool of
Vilobh,
Thanks for advancing this important topic. I took a look at what Tim referenced
how Nova is implementing nested quotas, and it seems to me that’s something we
could fold in as well to our design. Do you agree?
Adrian
On Dec 14, 2015, at 10:22 PM, Tim Bell
On Dec 12, 2015, at 9:19 AM, Ton Ngo >
wrote:
Hi Hongbin,
The proposal sounds reasonable: basically it provides an agnostic alternative
to the single command line that a user can invoke with docker or kubectl. If
the user needs more advanced support
Until I see evidence to the contrary, I think adding some bootstrap complexity
to simplify the process of bay node image management and customization is worth
it. Think about where most users will focus customization efforts. My guess is
that it will be within these docker images. We should ask
<wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com>> wrote:
Adrian,
I would like to be an alternate.
Regards
Wanghua
On Wed, Dec 2, 2015 at 10:19 AM, Adrian Otto
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:
Everett,
Thanks for reaching out. Eli is a good choice for this role. We should also
identify an alternate as well.
Adrian
--
Adrian
> On Dec 1, 2015, at 6:15 PM, Qiao,Liyong wrote:
>
> hi Everett
> I'd like to take it.
>
> thanks
> Eli.
>
>> On 2015年12月02日 05:18,
Li Yong,
At any rate, this should not be hardcoded. I agree that the default value
should match the RPC timeout.
Adrian
> On Nov 24, 2015, at 11:23 PM, Qiao,Liyong wrote:
>
> hi all
> In Magnum code, we hardcode it as DEFAULT_DOCKER_TIMEOUT = 10
> This brings troubles
for Mesos integration
as there is a huge eco-system build on top of Mesos.
On Thu, Nov 19, 2015 at 8:26 AM, Adrian Otto
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:
Bharath,
I agree with Hongbin on this. Let’s not expand magnum to deal with apps or
appgroup
As I see this, we need to pick the better of two options, even when neither is
perfect. I’d rather have magnum’s source as intuitive and easy to maintain as
possible. If it becomes more difficult to follow the commit history for a file
in order to achieve that improvement, I’m willing to live
Bharath,
I agree with Hongbin on this. Let’s not expand magnum to deal with apps or
appgroups in the near term. If there is a strong desire to add these things, we
could allow it by having a plugin/extensions interface for the Magnum API to
allow additional COE specific features. Honestly,
Eli,
I like this proposed approach. We did have a discussion with a few Stackers
from openstack-infra in Tokyo to express our interest in using bare metal for
gate testing. That’s still a way out, but that may be another way to speed this
up further. A third idea would be to adjust the nova
Sometimes producing alternate implementations can be more effective than
abstract discussions because they are more concrete. If an implementation can
be produced (possibly multiple different implementations by different
contributors) in a short period of time without significant effort, that’s
Bruce,
That sounds like this bug to me:
https://bugs.launchpad.net/magnum/+bug/1411333
Resolved by:
https://review.openstack.org/148059
I think you need this:
keystone service-create --name=magnum \
--type=container \
--description="magnum
in the past as well.
Adrian
On Nov 2, 2015, at 4:22 PM, Adrian Otto
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:
Bruce,
That sounds like this bug to me:
https://bugs.launchpad.net/magnum/+bug/1411333
Resolved by:
https://review.openstack.org/148059
I th
Nate,
On Oct 29, 2015, at 11:26 PM, Potter, Nathaniel
> wrote:
Hi everyone,
I’m interested in starting up a puppet module that will handle the Magnum
containers project. Would this be something the community might want? Thanks!