Hi,
I wanted to add my YAML as new resources (via
/etc/heat/environment.d/default.yaml), but we use some external files in the
OS::Nova::Server personality section.
It looks like the heat CLI handles that when you pass YAML to it, but I
couldn't get it to work either through Horizon, or even
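For reference, a resource_registry mapping in /etc/heat/environment.d/default.yaml looks roughly like this (the type name and template path below are hypothetical examples, not anything from this thread):

```yaml
# /etc/heat/environment.d/default.yaml
resource_registry:
  # map a custom resource type to a local template file
  "My::Custom::Server": "/etc/heat/templates/my_server.yaml"
```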
Hi,
the nova config in my OpenStack installation has the default limit of only
returning 1000 items per API call. But I have far more than 1000
VMs.
Stupid question: How can I get the next 1000 and ultimately all VMs listed
using nova list (or openstack server list)?
I could
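The way past that limit is marker-based pagination: each request returns at most `limit` items, and the next request passes the ID of the last item seen as the marker. A minimal sketch of the loop (function names are illustrative, not novaclient's actual API):

```python
# Sketch of marker-based pagination: page through a capped API by passing
# the last ID of each page as the marker for the next request.

def list_all(fetch_page, limit=1000):
    """Collect every item by repeatedly fetching pages after a marker."""
    items, marker = [], None
    while True:
        page = fetch_page(marker=marker, limit=limit)
        if not page:
            break
        items.extend(page)
        marker = page[-1]["id"]   # last ID of this page seeds the next request
        if len(page) < limit:     # a short page means we reached the end
            break
    return items

# Fake backend with 2500 "servers", so the loop pages three times.
SERVERS = [{"id": i, "name": "vm-%d" % i} for i in range(2500)]

def fake_fetch(marker=None, limit=1000):
    start = 0 if marker is None else marker + 1
    return SERVERS[start:start + limit]

assert len(list_all(fake_fetch)) == 2500
```

The CLI clients do this internally when asked for everything; the loop above is just the underlying protocol spelled out.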
In case it wasn't already assumed, anyone is welcome to contact me directly
(irc: gus, email, or in Austin) if they have questions or want help with
privsep integration work. It's early days still and the docs aren't
extensive (ahem).
os-brick privsep change just recently merged (yay), and I
I got confirmation from Mesosphere that we can use the open-source DC/OS in
Magnum now, so it is a good time to enhance the Mesos bay to support open-source DC/OS.
From Mesosphere
DC/OS software is licensed under the Apache License, so you should feel
free to use it within the
Safe travels! See you in Austin.
On Thu, Apr 21, 2016 at 4:22 PM, Tony Breeds
wrote:
> On Thu, Apr 21, 2016 at 02:13:15PM -0400, Doug Hellmann wrote:
> > The release team is preparing for and traveling to the summit, just as
> > many of you are. With that in mind, we
Needless to say, 13.7 ms after I sent this e-mail, I found this bug
report:
https://bugs.launchpad.net/mos/+bug/1527581
Humble apologies...
-Ken
On 2016-04-21 23:11, Ken D'Ambrosio wrote:
I'd heard from some users they were having trouble allocating floating
IPs in our Liberty cloud, so I
@hongbin,
FYI, there is a patch from yolanda to use the Fedora Atomic images built
in our mirrors: https://review.openstack.org/#/c/306283/
On 2016-04-22 10:41, Hongbin Lu wrote:
Hi team,
Based on a request, I created a link to the latest atomic image that
Magnum is using:
I'd heard from some users they were having trouble allocating floating
IPs in our Liberty cloud, so I tried on my test account. I had two
floating IPs (with a quota of (at least) three); I released and
immediately tried to re-acquire, and it failed with:
[Fri Apr 22 02:59:36.839441 2016]
Hi team,
Based on a request, I created a link to the latest atomic image that Magnum is
using: https://fedorapeople.org/groups/magnum/fedora-atomic-latest.qcow2 . We
plan to keep this link pointing to the newest atomic image so that we can avoid
updating the name of the image for every image
Inline below ... thread is too long, will catch you in Austin.
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Thursday, April 21, 2016 8:08 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] More on the topic of DELIMITER, the Quota
>
Hi
I got the latest sources, deleted /etc/vitrage/vitrage.conf, and ran
unstack.sh and stack.sh.
But the deploy failed. The local.conf is the same as before.
Are there any Vitrage configuration settings I missed?
Using the openstack user create CLI to create the nova, glance, etc. users succeeds.
Thanks for your help. :)
On 3/31/2016 7:31 AM, Znoinski, Waldemar wrote:
[WZ] See comments below about the full/small wiki, but would the below be
enough, or would you want to see more:
- networking-ci runs (with exceptions):
tempest.api.network
tempest.scenario.test_network_basic_ops
- nfv-ci runs (with exceptions):
On 3/30/2016 8:47 PM, yongli he wrote:
Hi mriedem,
Shaohe is on vacation. The Intel SRIOV CI comments on Neutron, running
the macvtap vNIC SR-IOV tests plus the required Neutron smoke tests.
[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-SRIOV-CI
Regards
Yongli He
Thanks Kris, the issue was resolved after adding the lines below to sysctl.conf:
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
Appreciate your help; thanks a lot again.
On Thu, Apr 21, 2016 at 8:25 PM, Kris G. Lindgren
On 04/21/2016 07:07 PM, Jay Pipes wrote:
Hmm, where do I start... I think I will just cut to the two primary
disagreements I have. And I will top-post because this email is way too
big.
1) On serializable isolation level.
No, you don't need it at all to prevent races in claiming. Just use a
On 04/20/2016 06:40 PM, Matt Riedemann wrote:
Note that I think the only time Nova gets details about ports in the API
during a server create request is when doing the network request
validation, and that's only if there is a fixed IP address or specific
port(s) in the request, otherwise Nova
I like Malini’s suggestion on meeting for a lunch to get to know each other,
then continue on Thursday.
So let’s meet at "Salon C" for lunch from 12:30pm~1:50pm on Wednesday and then
continue the discussion at Room 400 at 3:10pm Thursday.
Since Salon C is a big room, I will put a sign “Common
Make sure that the bridges are being created (one bridge per VM); they should be
named close to the VM tap device name. Then make sure that you have the bridge
nf-call-* files enabled:
http://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf
Under hybrid mode what happens is a linux
Hmm, where do I start... I think I will just cut to the two primary
disagreements I have. And I will top-post because this email is way too big.
1) On serializable isolation level.
No, you don't need it at all to prevent races in claiming. Just use a
compare-and-update with retries strategy.
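The compare-and-update strategy can be sketched against a generation column: read the current generation, then make the UPDATE conditional on that generation still being in place. The table and column names below are made up for illustration, not Nova's actual schema:

```python
# Minimal sketch of "compare-and-update with retries" (optimistic locking):
# a concurrent writer bumps the generation, our conditional UPDATE matches
# zero rows, and we re-read and retry.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE inventory (id INT, used INT, generation INT)")
db.execute("INSERT INTO inventory VALUES (1, 0, 0)")

def claim(conn, amount, retries=5):
    for _ in range(retries):
        used, gen = conn.execute(
            "SELECT used, generation FROM inventory WHERE id = 1").fetchone()
        cur = conn.execute(
            "UPDATE inventory SET used = ?, generation = ? "
            "WHERE id = 1 AND generation = ?",
            (used + amount, gen + 1, gen))
        if cur.rowcount == 1:   # our snapshot was still current: claim succeeded
            return True
        # lost the race; loop back, re-read the new generation, try again
    return False

assert claim(db, 2)
assert db.execute("SELECT used FROM inventory").fetchone()[0] == 2
```

No serializable isolation is needed: the WHERE clause on the generation does the race detection, and the retry loop absorbs the occasional lost race.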
Yup, Murano indeed may be part of the solution. The problem is much larger than
any one single OpenStack project though, so it's good to have the discussions
with the various projects to see where the pieces best fit. If Magnum at the
end of the day rejects the idea that a COE abstraction is not
+1. That's a very good list. Thanks for writing it up. :)
Kevin
From: Hongbin Lu [hongbin...@huawei.com]
Sent: Thursday, April 21, 2016 4:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev]
Amrith,
Very well thought out. Thanks. :)
I agree a nova driver that lets you treat containers the same way as VMs, bare
metal, and LXC containers would be a great thing, and if it could plug into
Magnum-managed clusters well, it would be awesome.
I think a bit of the conversation around it gets
That's cool. Hopefully something great will come of it. :)
Thanks for sharing the link. :)
Kevin
From: Joshua Harlow [harlo...@fastmail.com]
Sent: Thursday, April 21, 2016 2:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject:
Hi,
I am running into an issue where security group rules are not being applied to
instances. When I create a new security group with default rules, it should
reject all incoming traffic, but it is allowing everything without blocking.
Here is my config for nova:
security_group_api = neutron
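For what it's worth, the usual pairing when Neutron enforces security groups is to point nova's own firewall at the no-op driver, so the two don't fight over (or both skip) the rules. A hedged nova.conf sketch, using the option names from that era's docs (verify against your release):

```ini
[DEFAULT]
security_group_api = neutron
# let Neutron do the filtering; nova's own firewall becomes a no-op
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```

With hybrid plug, the rules land in iptables on the bridge, which also requires the net.bridge.bridge-nf-call-* sysctls to be enabled.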
On Thu, Apr 21, 2016 at 02:13:15PM -0400, Doug Hellmann wrote:
> The release team is preparing for and traveling to the summit, just as
> many of you are. With that in mind, we are going to hold off on
> releasing anything until 2 May, unless there is some sort of critical
> issue or gate
Hi Monty,
I respect your position, but I want to point out that it is not only one
human who wants this; there is a group of people who want this. I have been
working on Magnum for about a year and a half. Along the way, I have been
researching how to attract users to Magnum. My observation is
Yeah, it's good to disagree and talk through it. Sometimes there just isn't a
way to see eye to eye on something; that's fine too. I was just objecting to the
assertion:
"I do not believe anyone in the world wants us to build an
> abstraction layer on top of the _use_ of swarm/k8s/mesos. People
See something similar with heartbeat seems like reconnection attempt fails
2016-04-21 15:27:01.294 6 DEBUG nova.openstack.common.loopingcall
[req-9c9785ed-2598-4b95-a40c-307f8d7e8416 - - - - -] Dynamic looping call
> sleeping for 60.00 seconds _inner
On Thu, Apr 21, 2016 at 11:29 PM, Franck Barillaud wrote:
> I've been using Kolla to deploy Mitaka on x86 and it works great. Now I would
> like to do the same thing on IBM Power8 systems (ppc64). I've set up a local
> registry with an Ubuntu image.
> I have Docker and a local
Thanks! somehow I missed it earlier.
On 4/11/16 9:53 PM, Clark Boylan wrote:
> On Mon, Apr 11, 2016, at 06:18 PM, Nikhil Komawar wrote:
>> Hi,
>>
>> I noticed on a recent merge to glance [1] that the bot updated the bug
>> [2] with comment from "in progress" to "fix released" vs. earlier
>>
I've been using Kolla to deploy Mitaka on x86 and it works great. Now I
would like to do the same thing on IBM Power8 systems (ppc64). I've set up
a local registry with an Ubuntu image.
I have Docker and a local registry running on a Power8 system. When I issue
the following command:
kolla-build
As I was preparing some thoughts for the Board/TC meeting on Sunday that will
discuss this topic, I made the notes below and was going to post them on a
topic specific etherpad but I didn't find one.
I want to represent the view point of a consumer of compute services in
OpenStack. Trove is a
> -Original Message-
> From: Keith Bray [mailto:keith.b...@rackspace.com]
> Sent: Thursday, April 21, 2016 5:11 PM
> To: OpenStack Development Mailing List (not for usage questions)
>
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build
100% agreed on all your points… with the addition that the level of
functionality you are asking for doesn’t need to be baked into an API
service such as Magnum. I.e., Magnum doesn’t have to be the thing
providing the easy-button app deployment — Magnum isn’t and shouldn’t be a
Docker Hub
I thought this was also what the goal of https://cncf.io/ was starting
to be? Maybe too early to tell if that standardization will be a real
outcome vs just an imagined outcome :-P
-Josh
Fox, Kevin M wrote:
The COEs have a pressure not to standardize their APIs between competing
COEs. If
+1 on Wednesday lunch
On Thu, Apr 21, 2016 at 12:02 PM, Ihar Hrachyshka
wrote:
> Cathy Zhang wrote:
>
> Hi everyone,
>>
>> We have room 400 at 3:10pm on Thursday available for discussion of the
>> two topics.
>> Another option is to use the common
Mitaka on Xenial looks good now! YAY!! :-P
On 21 April 2016 at 16:24, Martinx - ジェームズ
wrote:
> This is awesome!
>
> However, I am facing a hard time to make it work with OpenvSwitch and
> multi-node environment... All works in an "All in One" fashion...
>
> I'll
On 04/21/2016 03:18 PM, Fox, Kevin M wrote:
Here's where we disagree.
We may have to agree to disagree.
You're speaking for everyone in the world now, and all you need is one
counterexample. I'll be that guy. Me. I want a common abstraction
for some common LCD stuff.
We also disagree on
Team,
A "bicycle" will have to be present anyway, as code which interacts with
Ansible, because as far as I understand, Ansible on its own cannot provide
all the functionality in one go, so a wrapper for it will have to be
present anyway.
I think Alexander and I will look into converting
I believe you just described Murano.
On 04/21/2016 03:31 PM, Fox, Kevin M wrote:
There are a few reasons, but the primary one that affects me is: it's from the
app-catalog use case.
To gain user support for a product like OpenStack, you need users. The easier you make it
to use, the more users
On Thu, Apr 21, 2016 at 7:21 AM, Zhipeng Huang wrote:
> Hi Infra Team,
>
> Thanks for helping merging the patch that created project Coupler, now based
> on my understanding from
> http://docs.openstack.org/infra/manual/creators.html , could you please add
> me to
There are a few reasons, but the primary one that affects me is: it's from the
app-catalog use case.
To gain user support for a product like OpenStack, you need users. The easier
you make it to use, the more users you can potentially get. Traditional
Operating Systems learned this a while back.
Here's where we disagree.
You're speaking for everyone in the world now, and all you need is one
counterexample. I'll be that guy. Me. I want a common abstraction for some common LCD
stuff.
Both Sahara and Trove have LCD abstractions for very common things. Magnum
should too.
You are falsely
On 4/21/16 1:38 PM, Joshua Harlow wrote:
> This might be harder in retrying, but I think I can help you make
> something that will work, since retrying has a way to provide a custom
> delay function.
Thanks for that. My question was if this might be useful as a new
backoff in retrying (vs a
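As a sketch of the kind of custom delay function under discussion, here is a full-jitter exponential backoff. The attempt-number-in / milliseconds-out shape is an assumption about what a retrying wait callback looks like, not the library's documented signature:

```python
# Full-jitter exponential backoff: the ceiling doubles each attempt, and
# the actual delay is drawn uniformly below it, which spreads out retries
# from many contenders instead of synchronizing them.
import random

def backoff_ms(attempt, base_ms=100, cap_ms=10000):
    """Random delay in [0, min(cap, base * 2**attempt)] milliseconds."""
    return random.uniform(0, min(cap_ms, base_ms * 2 ** attempt))

# Delays grow on average per attempt but never exceed the cap.
delays = [backoff_ms(n) for n in range(10)]
assert all(0 <= d <= 10000 for d in delays)
```

The cap matters: without it, a long outage turns the 2**attempt term into multi-minute sleeps.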
On 04/21/2016 04:04 PM, Monty Taylor wrote:
> On 04/21/2016 02:08 PM, Devananda van der Veen wrote:
>> The first cross-project design summit tracks were held at the following
>> summit, in Atlanta, though I recall it lacking the necessary
>> participation to be successful. Today, we have many more
If you don't want a user to have to choose a COE, can't we just offer an
option for the operator to mark a particular COE as the "Default COE" that
could be defaulted to if one isn't specified in the Bay create call? If
the operator didn't specify a default one, then the CLI/UI must submit one
in
The neutron section was missing from nova.conf, and now the instances work,
but I'm having issues with the metadata server. Instances boot but have no
network access.
Thanks
Paras.
On Thu, Apr 21, 2016 at 9:58 AM, Martinx - ジェームズ
wrote:
> My nova.conf is this one:
>
>
>
On 04/21/2016 02:08 PM, Devananda van der Veen wrote:
On Thu, Apr 21, 2016 at 9:07 AM, Michael Krotscheck
> wrote:
Hey everyone-
So, HPE is seeking sponsors to continue the core party. The reasons
are varied - internal sponsors
On 21 April 2016 at 16:05, Martinx - ジェームズ
wrote:
>
>
> On 21 April 2016 at 15:54, Martinx - ジェームズ
> wrote:
>
>>
>> On 21 April 2016 at 15:52, Martinx - ジェームズ
>> wrote:
>>
>>> Guys,
>>>
>>> I'm trying to deploy
I just managed to make OpenStack Mitaka work with both Linux bridges and
OpenvSwitch...
Everything works on both "All in One" and multi-node environments.
Very soon, I'll post instructions about how to use the Ansible automation
that I am developing to play with this...
Then, you guys will
I agree with that, and that's why providing some bare-minimum abstraction will
help users not have to choose a COE themselves. If we can't decide, why can
they? If all they want to do is launch a container, they should be able to
script up "magnum launch-container foo/bar:latest" and get
Boden Russell wrote:
I haven't spent much time on this, so the answers below are a first
approximation based on a quick visual inspection (e.g. subject to change
when I get a chance to hack on some code).
On 4/21/16 12:10 PM, Salvatore Orlando wrote:
Can you share more details on the "few
The COEs have a pressure not to standardize their APIs between competing
COEs. If you can lock a user into your API, then they can't go to your
competitor.
The standard api really needs to come from those invested in not being locked
in. OpenStack's been largely about that since the
+1.
From: Hongbin Lu [hongbin...@huawei.com]
Sent: Thursday, April 21, 2016 7:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
abstraction for all COEs
>
Hi All,
The Ubuntu OpenStack Engineering team is pleased to announce the general
availability of OpenStack Mitaka in Ubuntu 16.04 LTS and for Ubuntu 14.04
LTS via the Ubuntu Cloud Archive.
Ubuntu 14.04 LTS
You can enable the Ubuntu Cloud Archive for OpenStack Mitaka on
This is awesome!
However, I am having a hard time making it work with OpenvSwitch in a
multi-node environment... All works in an "All in One" fashion...
I'll keep researching and testing it, until it works.
Nice job guys, congrats!
Cheers!
Thiago
On 21 April 2016 at 16:08, Corey Bryant
We are seeing issues only on client side as of now.
But we do have
net.ipv4.tcp_retries2 = 3 set
Ajay
From: "Edmund Rhudy (BLOOMBERG/ 731 LEX)"
>
Reply-To: "Edmund Rhudy (BLOOMBERG/ 731 LEX)"
>
Thanks Kris, that's good information; I will try out your suggestions.
Ajay
From: "Kris G. Lindgren" >
Date: Thursday, April 21, 2016 at 12:08 PM
To: Ajay Kalambur >,
Ricardo,
That is great! It is good to hear Magnum works well on your side.
Best regards,
Hongbin
> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: April-21-16 1:48 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re:
On Thu, Apr 21, 2016 at 9:07 AM, Michael Krotscheck
wrote:
> Hey everyone-
>
> So, HPE is seeking sponsors to continue the core party. The reasons are
> varied - internal sponsors have moved to other projects, the Big Tent has
> drastically increased the # of cores, and the
Do you recommend both or can I do away with the system timers and just keep the
heartbeat?
Ajay
From: "Kris G. Lindgren" >
Date: Thursday, April 21, 2016 at 11:54 AM
To: Ajay Kalambur >,
On 21 April 2016 at 15:54, Martinx - ジェームズ
wrote:
>
> On 21 April 2016 at 15:52, Martinx - ジェームズ
> wrote:
>
>> Guys,
>>
>> I'm trying to deploy Mitaka on Xenial, using OpenvSwitch.
>>
>> I am using the following documents:
>>
>>
Cathy Zhang wrote:
Hi everyone,
We have room 400 at 3:10pm on Thursday available for discussion of the
two topics.
Another option is to use the common room with roundtables in "Salon C"
during Monday or Wednesday lunch time.
Room 400 at 3:10pm is a closed room
I haven't spent much time on this, so the answers below are a first
approximation based on a quick visual inspection (e.g. subject to change
when I get a chance to hack on some code).
On 4/21/16 12:10 PM, Salvatore Orlando wrote:
> Can you share more details on the "few things we need" that
>
On 21 April 2016 at 15:52, Martinx - ジェームズ
wrote:
> Guys,
>
> I'm trying to deploy Mitaka on Xenial, using OpenvSwitch.
>
> I am using the following documents:
>
> http://docs.openstack.org/mitaka/install-guide-ubuntu
>
>
>
Yeah, that only fixes part of the issue. The other part is getting the
OpenStack messaging code itself to figure out that the connection it's using
is no longer valid. Heartbeats by themselves solved 90%+ of our issues with
RabbitMQ and nodes being disconnected and never reconnecting.
I vote for Monday to get the ball rolling, meet the interested parties, and
continue on Thursday at 3:10 in a quieter setting ... so we leave with some
consensus.
Thanks Cathy!
Malini
-Original Message-
From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Thursday, April 21, 2016
Do you have rabbitmq/oslo messaging heartbeats enabled?
If you aren't using heartbeats, it will take a long time for the nova-compute
agent to figure out that it's actually no longer attached to anything.
Heartbeat does periodic checks against RabbitMQ and will catch this state and
reconnect.
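The options in question live in oslo.messaging's rabbit section; as far as I know the names look like this in Liberty/Mitaka-era releases (check your release's defaults before copying):

```ini
[oslo_messaging_rabbit]
# emit AMQP heartbeats so dead connections are detected and re-established
heartbeat_timeout_threshold = 60  # seconds of silence before the peer is considered dead
heartbeat_rate = 2                # how many checks to run per timeout window
```

Setting heartbeat_timeout_threshold to 0 disables heartbeats entirely, which is the state that leaves agents attached to a dead connection.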
On Thu, Apr 21, 2016 at 2:43 PM, Tim Bell wrote:
>
> On 21/04/16 19:40, "Doug Hellmann" wrote:
>
> >Excerpts from Thierry Carrez's message of 2016-04-21 18:22:53 +0200:
> >> Michael Krotscheck wrote:
> >>
> >>
> >> So.. while I understand the need for
On 21/04/16 19:40, "Doug Hellmann" wrote:
>Excerpts from Thierry Carrez's message of 2016-04-21 18:22:53 +0200:
>> Michael Krotscheck wrote:
>>
>>
>> So.. while I understand the need for calmer parties during the week, I
>> think the general trend is to have less
Hi everyone,
We have room 400 at 3:10pm on Thursday available for discussion of the two
topics.
Another option is to use the common room with roundtables in "Salon C" during
Monday or Wednesday lunch time.
Room 400 at 3:10pm is a closed room while the Salon C is a big open room which
can
Boden Russell wrote:
On 4/20/16 3:29 PM, Doug Hellmann wrote:
Yes, please, let's try to make that work and contribute upstream if we
need minor modifications, before we create something new.
We can leverage the 'retrying' module (already in global requirements).
It lacks a few things we need,
Salvatore Orlando wrote:
On 21 April 2016 at 16:54, Boden Russell > wrote:
On 4/20/16 3:29 PM, Doug Hellmann wrote:
> Yes, please, let's try to make that work and contribute upstream if we
> need minor modifications, before we create
On 4/11/2016 3:49 PM, Matt Riedemann wrote:
A few people have been asking about planning for the nova midcycle for
newton. Looking at the schedule [1] I'm thinking weeks R-15 or R-11 work
the best. R-14 is close to the US July 4th holiday, R-13 is during the
week of the US July 4th holiday, and
Excerpts from Jeremy Stanley's message of 2016-04-21 17:54:37 +:
> On 2016-04-21 13:40:15 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > I didn't realize the tag was being used that way. I agree it's
> > completely inappropriate, and I wish someone had asked.
> [...]
>
> It's likely seen by
The release team is preparing for and traveling to the summit, just as
many of you are. With that in mind, we are going to hold off on
releasing anything until 2 May, unless there is some sort of critical
issue or gate blockage. Please feel free to submit release requests to
openstack/releases,
On 21 April 2016 at 16:54, Boden Russell wrote:
> On 4/20/16 3:29 PM, Doug Hellmann wrote:
> > Yes, please, let's try to make that work and contribute upstream if we
> > need minor modifications, before we create something new.
>
> We can leverage the 'retrying' module
On 2016-04-21 17:54:56 + (+), Adrian Otto wrote:
> Below is an excerpt from:
> https://www.openstack.org/legal/community-code-of-conduct/
>
> "When we disagree, we consult others. Disagreements, both social
> and technical, happen all the time and the OpenStack community is
> no
On Thu, Apr 21, 2016 at 10:21 AM Monty Taylor wrote:
> Neat! Maybe let's find a time at the summit to sit down and look through
> things. I'm guessing that adding a second language consumer to the
> config will raise a ton of useful questions around documentation, data
>
Hi.
The thread is a month old, but I sent a shorter version of this to
Daneyon before with some info on the things we dealt with to get
Magnum deployed successfully. We wrapped it up in a post (there's a
video linked there with some demos at the end):
On 2016-04-21 13:40:15 -0400 (-0400), Doug Hellmann wrote:
[...]
> I didn't realize the tag was being used that way. I agree it's
> completely inappropriate, and I wish someone had asked.
[...]
It's likely seen by some as a big-tent proxy for the old integrated
vs. incubated distinction.
--
Hi
I am seeing on Kilo that if I bring down one controller node, sometimes some
computes report down forever.
I need to restart the compute service on the compute node to recover. It
looks like oslo is not reconnecting in nova-compute.
Here is the Trace from nova-compute
2016-04-19 20:25:39.090 6 TRACE
Excerpts from Colette Alexander's message of 2016-04-21 08:07:52 -0700:
> >
> >
> > >> Colette Alexander wrote:
> > >>> Hi everyone!
> > >>>
> > >>> Quick summary of where we're at with leadership training: dates are
> > >>> confirmed as available with ZingTrain, and we're finalizing trainers
> >
Excerpts from Thierry Carrez's message of 2016-04-21 18:22:53 +0200:
> Michael Krotscheck wrote:
> > So, HPE is seeking sponsors to continue the core party. The reasons are
> > varied - internal sponsors have moved to other projects, the Big Tent
> > has drastically increased the # of cores, and
Folks,
I'd like to request a workroom session swap.
I planned to lead a discussion of Fuel UI modularization on Wed
11.00-11.40, but at the same time there will be discussion of handling JS
dependencies of Horizon which I'd really like to attend.
So I request to swap my discussion with
On 04/21/2016 11:03 AM, Tim Bell wrote:
On 21/04/16 17:38, "Hongbin Lu" wrote:
-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-21-16 10:32 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re:
On 04/21/2016 11:01 AM, Flavio Percoco wrote:
On 21/04/16 12:26 +0200, Thierry Carrez wrote:
Joshua Harlow wrote:
Thierry Carrez wrote:
Adrian Otto wrote:
This pursuit is a trap. Magnum should focus on making native container
APIs available. We should not wrap APIs with leaky abstractions.
On 04/21/2016 10:35 AM, Michael Krotscheck wrote:
On Thu, Apr 21, 2016 at 8:28 AM Monty Taylor > wrote:
On 04/21/2016 10:05 AM, Hayes, Graham wrote:
> On 21/04/2016 15:39, Michael Krotscheck wrote:
>> used piecemeal, however
On 04/21/2016 10:32 AM, Michael Krotscheck wrote:
On Thu, Apr 21, 2016 at 8:10 AM Hayes, Graham > wrote:
On 21/04/2016 15:39, Michael Krotscheck wrote:
python-openstackclient does require the creation of a new repo for each
project
Thierry Carrez wrote:
[...]
I think it's inappropriate because it gives a wrong incentive to become
a core reviewer. Core reviewing should just be a duty you sign up to,
not necessarily a way to get into a cool party. It was also a bit
exclusive of other types of contributions.
Apparently in
Hello!
Today I'm happy to present you a demo of a new service called Glare (means
GLance Artifact REpository) which will be used as a unified catalog of
artifacts in OpenStack. This service appeared in Mitaka in February
and it succeeded
the Glance v3 API, which has become the experimental version of
On 21/04/16 12:26 +0200, Thierry Carrez wrote:
Joshua Harlow wrote:
Thierry Carrez wrote:
Adrian Otto wrote:
This pursuit is a trap. Magnum should focus on making native container
APIs available. We should not wrap APIs with leaky abstractions. The
lowest common denominator of all COEs is an
Michael Krotscheck wrote:
So, HPE is seeking sponsors to continue the core party. The reasons are
varied - internal sponsors have moved to other projects, the Big Tent
has drastically increased the # of cores, and the upcoming summit format
change creates quite a bit of uncertainty on everything
There's one more issue with a lowest-common-denominator API. Every time a
new version of a native client is released, Magnum will be responsible for
making sure the common-denominator API works with that version of the
native client. Since the native client will always have more
functions/features
On Thu, Apr 21, 2016 at 9:07 AM, Michael Krotscheck
wrote:
> Hey everyone-
>
> So, HPE is seeking sponsors to continue the core party. The reasons are
> varied - internal sponsors have moved to other projects, the Big Tent has
> drastically increased the # of cores, and the
On 20/04/16 13:00, Rico Lin wrote:
Hi team,
Let's plan for more informal meetup (relaxed) time, so all heaters and any
other projects can have fun and a chance for technical discussions together.
After discussing in the meeting, we will have a pre-meetup-meetup on Friday
morning to have a cup of coffee or some
Hey everyone-
So, HPE is seeking sponsors to continue the core party. The reasons are
varied - internal sponsors have moved to other projects, the Big Tent has
drastically increased the # of cores, and the upcoming summit format change
creates quite a bit of uncertainty on everything surrounding
On 21/04/16 17:38, "Hongbin Lu" wrote:
>
>
>> -Original Message-
>> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
>> Sent: April-21-16 10:32 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev]
+1 re Mon, though Fri could work as well.
On Thu, Apr 21, 2016 at 3:55 AM, Jay Dobies wrote:
>
>
> On 4/20/16 1:00 PM, Rico Lin wrote:
>
>> Hi team
>> Let plan for more informal meetup(relax) time! Let all heaters and any
>> other projects can have fun and chance for