Hi,
The API Workgroup git repository has been set up and you can access it
here.
http://git.openstack.org/cgit/openstack/api-wg/
There is some content there though not all the proposed guidelines from
the wiki page are in yet:
https://wiki.openstack.org/wiki/Governance/Proposed/APIGuidelines
On Oct 21, 2014 4:10 AM, Daniel P. Berrange berra...@redhat.com wrote:
On Tue, Oct 21, 2014 at 12:58:48PM +0200, Kashyap Chamarthy wrote:
I was discussing $subject on #openstack-nova, Nikola Dipanov suggested
it's worthwhile to bring this up on the list.
Sounds like a great idea.
I
Hi keshava,
Hi,
1. From where the MPLS traffic will be initiated ?
In this design, MPLS traffic will be initiated from a network node,
where the qrouter is located. However, we thought of an alternative design
where MPLS traffic is initiated on the compute node, directly from a
VM plugged
On 22 October 2014 06:24, Tom Fifield t...@openstack.org wrote:
On 22/10/14 03:07, Andrew Laski wrote:
On 10/21/2014 04:31 AM, Nikola Đipanov wrote:
On 10/20/2014 08:00 PM, Andrew Laski wrote:
One of the big goals for the Kilo cycle by users and developers of the
cells functionality
On 10/22/2014 02:26 AM, Maru Newby wrote:
We merged caching support for the metadata agent in juno, and backported to
icehouse. It was enabled by default in juno, but disabled by default in
icehouse to satisfy the stable maint requirement of not changing functional
behavior.
While
hi all.
while reviewing code in nova/compute/manager.py,
I find that detach_interface deallocates the port from neutron first
and then calls detach_interface in the hypervisor. What will happen if
the hypervisor's detach_interface fails? The result is that the port can
still be seen on the guest but
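The failure mode being asked about can be sketched with toy stand-ins (these classes are illustrative only, not nova's or neutron's actual APIs): deallocating the port before the hypervisor detach means a detach failure leaves neutron and the guest out of sync, whereas detaching first keeps both sides consistent on failure.

```python
# Toy sketch of the ordering concern; names are illustrative, not real nova code.

class FakeNeutron:
    """Stands in for the neutron client's port bookkeeping."""
    def __init__(self):
        self.ports = {"port-1"}

    def deallocate_port(self, port_id):
        self.ports.discard(port_id)


class FakeHypervisor:
    """Stands in for the virt driver; can be told to fail the detach."""
    def __init__(self, fail=False):
        self.fail = fail
        self.attached = {"port-1"}

    def detach_interface(self, port_id):
        if self.fail:
            raise RuntimeError("hypervisor detach failed")
        self.attached.discard(port_id)


def detach_current_order(neutron, hyp, port_id):
    # Order described in the mail: release the port first, then detach.
    # If the detach raises, the port is gone from neutron but still on the guest.
    neutron.deallocate_port(port_id)
    hyp.detach_interface(port_id)


def detach_safer_order(neutron, hyp, port_id):
    # Alternative: detach from the guest first; release the port only on success.
    hyp.detach_interface(port_id)
    neutron.deallocate_port(port_id)
```

With a failing detach, the first ordering strands the port on the guest while neutron has already forgotten it; the second leaves both sides untouched, so the operation can simply be retried.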
Greetings,
Back in Havana, a partially-implemented [0][1] Cinder driver was merged
in Glance to provide an easier and hopefully more consistent interaction
between glance, cinder and nova when it comes to managing volume images
and booting from volumes.
While I still don't fully understand the
Hi Jorge!
Welcome back, eh! You've been missed.
Anyway, I just wanted to say that your proposal sounds great to me, and
it's good to finally be closer to having concrete requirements for logging,
eh. Once this discussion is nearing a conclusion, could you write up the
specifics of logging into a
On Tue, Oct 21, 2014 at 09:41:44PM +, Jiang, Yunhong wrote:
Hi, Daniel's all,
This is a follow up to Daniel's
http://osdir.com/ml/openstack-dev/2014-10/msg00557.html , Info on XenAPI
data format for 'host_data' call.
I'm considering changing the compute capability to be a
I spent a little time trying to work out a good way to include this kind of
data in the ComputeNode object. You will have seen that I added the
supported_instances reported to the RT by the virt drivers as a list of HVSpec
– where HVSpec is a new nova object I created for the purpose.
The
-Original Message-
From: henry hly [mailto:henry4...@gmail.com]
Sent: 08 October 2014 09:16
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
cascading
Hi,
Good questions: why not just
Thanks Andrew for this (very) exhaustive list.
As you have pointed out, for all the missing features (I think flavors
can also be a part of that list) the community needs to decide where
the info lives primarily (API or compute cells) and how it is
propagated (Synced, sent with the request, asked
Hi all,
Do we have a meeting today?
I can't see anything in the wiki about today...
Itai
Sent from my iPhone
On Oct 8, 2014, at 2:06 AM, Steve Gordon sgor...@redhat.com wrote:
Hi all,
Just a quick reminder that the NFV subteam meets Wednesday 8th October 2014 @
1400 UTC in
Ihar Hrachyshka wrote:
[...]
For stable branches, we have so called periodic jobs that are
triggered once in a while against the current code in a stable branch,
and report to openstack-stable-maint@ mailing list. An example of
failing periodic job report can be found at [2]. I envision that
On 22/10/14 02:26, Maru Newby wrote:
We merged caching support for the metadata agent in juno, and
backported to icehouse. It was enabled by default in juno, but
disabled by default in icehouse to satisfy the stable maint
requirement of not
On Wed, 22 Oct 2014, Thierry Carrez wrote:
So while I think periodic jobs are a good way to increase corner case
testing coverage, I am skeptical of our collective ability to have the
discipline necessary for them not to become a pain. We'll need a strict
process around them: identified groups
On Tue, Oct 21, 2014 at 12:07:41PM +0100, Daniel P. Berrange wrote:
On Tue, Oct 21, 2014 at 12:58:48PM +0200, Kashyap Chamarthy wrote:
I was discussing $subject on #openstack-nova, Nikola Dipanov suggested
it's worthwhile to bring this up on the list.
I was looking at
Matt,
I've submitted a review to remove the gate-nova-docker-requirements
from nova-docker:
https://review.openstack.org/#/c/130192/
I am good with treating the current situation with DSVM jobs as a
bug if there is consensus. I'll try to dig in, but we may need Dean,
Sean etc to help figure it
I'd like to start a conversation on usage requirements and have a few
suggestions. I advocate that, since we will be using TCP and HTTP/HTTPS
based protocols, we inherently enable connection logging for load
balancers for several reasons:
Just a request from the operator side of things:
Please
My understanding is the same as Ihar's, and we no longer have the
degradation in the latest Icehouse update. There was a degradation in
2014.1.2 [2] but the fix was backported in 2014.1.3 [1].
We don't need to take care of backporting when considering metadata RPC patch.
[1]
Greetings,
On Wed, Oct 22, 2014 at 4:56 PM, Flavio Percoco fla...@redhat.com wrote:
Greetings,
Back in Havana, a partially-implemented [0][1] Cinder driver was merged
in Glance to provide an easier and hopefully more consistent interaction
between glance, cinder and nova when it comes to
On Tue, Oct 21, 2014 at 1:08 PM, Clint Byrum cl...@fewbar.com wrote:
So Tuskar would be a part of that deployment cloud, and would ask you
things about your hardware, your desired configuration, and help you
get the inventory loaded.
So, ideally our gate would leave the images we test as part
Just a reminder that, as we mentioned last week, no meeting today.
--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
On 10/22/2014 08:03 AM, Dugger, Donald D wrote:
Just a reminder that, as we mentioned last week, no meeting today.
The meetings are supposed to be on Tuesday, no? And yeah, no one showed
up yesterday.
- -- Ed Leafe
Dear requirements-core folks,
Here's the review as promised:
https://review.openstack.org/130210
Thanks,
dims
On Wed, Oct 22, 2014 at 7:27 AM, Davanum Srinivas dava...@gmail.com wrote:
Matt,
I've submitted a review to remove the gate-nova-docker-requirements
from nova-docker:
On 22/10/14 14:40, James Slagle wrote:
On Tue, Oct 21, 2014 at 1:08 PM, Clint Byrum cl...@fewbar.com wrote:
So Tuskar would be a part of that deployment cloud, and would ask you
things about your hardware, your desired configuration, and help you
get the inventory loaded.
So, ideally our
Sigh. I've progressed from being challenged by time of day (e.g. timezone) to
being challenged by day of week. Pretty soon I'll be confused about the year
:-)
Sorry about that.
--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786
-Original Message-
From:
Hello everyone,
Is it possible to generate documentation while using the Falcon API framework?
Thanks beforehand, best regards,
Romain Ziba.
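Falcon itself does not ship a documentation generator, but its resources are plain Python classes, so docstring-driven tools such as Sphinx autodoc or pydoc work on them as on any other module. A minimal sketch (ImageResource is a hypothetical example class, not part of Falcon):

```python
# Sketch only: a Falcon-style resource documented via ordinary docstrings.
# ImageResource is a made-up example, not part of the Falcon library.
import inspect


class ImageResource:
    """Handle requests against the /images collection."""

    def on_get(self, req, resp):
        """Return the list of stored images as a JSON body."""
        resp.body = "[]"


# Docstring tools (pydoc, Sphinx autodoc) pick these up directly:
print(inspect.getdoc(ImageResource))
# prints "Handle requests against the /images collection."
```

Pointing Sphinx's autodoc at the module containing such resources would then render the class and responder docstrings as API documentation.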
On 10/22/2014 02:30 PM, Zhi Yan Liu wrote:
Greetings,
On Wed, Oct 22, 2014 at 4:56 PM, Flavio Percoco fla...@redhat.com wrote:
Greetings,
Back in Havana, a partially-implemented [0][1] Cinder driver was merged
in Glance to provide an easier and hopefully more consistent interaction
There are currently at least two BPs registered for VLAN trunk support
to VMs in neutron-specs [1] [2]. This is clearly something that I'd
like to see us land in Kilo, as it enables a bunch of things for the
NFV use cases. I'm going to propose that we talk about this at an
upcoming Neutron meeting
Hey Stephen (and Robert),
For real-time usage I was thinking something similar to what you are proposing.
Using logs for this would be overkill IMO so your suggestions were what I was
thinking of starting with.
As far as storing logs is concerned I was definitely thinking of offloading
these
On Oct 22, 2014, at 12:53 AM, Jakub Libosvar libos...@redhat.com wrote:
On 10/22/2014 02:26 AM, Maru Newby wrote:
We merged caching support for the metadata agent in juno, and backported to
icehouse. It was enabled by default in juno, but disabled by default in
icehouse to satisfy the
Hi Noel,
On 22 October 2014 01:57, Noel Burton-Krahn n...@pistoncloud.com wrote:
Hi Armando,
Sort of... but what happens when the second one dies?
You mean, you lost both (all) agents? In this case, yes you'd need to
resurrect the agents or move the networks to another available agent.
- Original Message -
From: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Hi all,
Do we have a meeting today?
I can't see anything in the wiki about today...
Itai
Hi
On Wed, Oct 22, 2014 at 7:33 AM, Flavio Percoco fla...@redhat.com wrote:
On 10/22/2014 02:30 PM, Zhi Yan Liu wrote:
Greetings,
On Wed, Oct 22, 2014 at 4:56 PM, Flavio Percoco fla...@redhat.com
wrote:
Greetings,
Back in Havana, a partially-implemented [0][1] Cinder driver was merged
Hi everyone,
TL;DR:
Update
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management
Longer version:
In the same spirit as the Oslo Liaisons, we are introducing in the Kilo
cycle liaisons for the Vulnerability Management Team.
Historically we've been trying to rely on a
Kyle,
I pointed out the similarity of the two specifications while reviewing them
a few months ago (see patch set #4).
Ian then approached me on IRC (I'm afraid it's going to be a bit difficult
to retrieve those logs), and pointed out that actually the two
specifications, in his opinion, try to
- Original Message -
From: Kyle Mestery mest...@mestery.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
There are currently at least two BPs registered for VLAN trunk support
to VMs in neutron-specs [1] [2]. This is clearly
Hi all,
Thanks to those who attended the meeting today; for those who missed it,
the minutes and the full log are available at these locations:
* Meeting ended Wed Oct 22 14:25:30 2014 UTC. Information about MeetBot at
http://wiki.debian.org/MeetBot . (v 0.1.4)
* Minutes:
Hi Everyone,
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
this Thursday, October 23rd at 17:00 UTC in the #openstack-meeting channel.
The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to
On Oct 22, 2014, at 8:21 AM, Dugger, Donald D donald.d.dug...@intel.com wrote:
Sigh. I've progressed from being challenged by time of day (e.g. timezone)
to being challenged by day of week. Pretty soon I'll be confused about the
year :-)
Sorry about that.
Heh, no worries – I've had a
Replied inline.
On Wed, Oct 22, 2014 at 9:33 PM, Flavio Percoco fla...@redhat.com wrote:
On 10/22/2014 02:30 PM, Zhi Yan Liu wrote:
Greetings,
On Wed, Oct 22, 2014 at 4:56 PM, Flavio Percoco fla...@redhat.com wrote:
Greetings,
Back in Havana, a partially-implemented [0][1] Cinder driver
The Kolla development community would like to announce the release of
Kolla Milestone #1. This milestone constitutes two weeks of effort by
the developers and is available for immediate download from
https://github.com/stackforge/kolla/archive/version-m1.tar.gz.
Kolla is a project to
Hi,
Great that we can have more focus on this. I'll attend the meeting on Monday
and also attend the summit, looking forward to these discussions.
Thanks,
Erik
-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com]
Sent: den 22 oktober 2014 16:29
To: OpenStack Development
A few weeks ago in IRC we discussed the criteria for joining the core
team in Kolla. I believe Daneyon has met all of these requirements by
reviewing patches along with the rest of the core team and providing
valuable comments, as well as implementing neutron and helping get
nova-networking
With some xargs, sed, and pandoc - I now present to you the first
attempt at converting the DevStack docs to RST, and making the doc build
look similar to other projects.
https://review.openstack.org/130241
It is extremely rough, I basically ran everything through Pandoc and
cleaned up any
On 10/20/2014 10:00 PM, Steve Kowalik wrote:
With the move to removing nova-baremetal, I'm concerned that portions
of os-cloud-config will break once python-novaclient has released with
the bits of the nova-baremetal gone -- import errors, and such like.
I'm also concerned about backward
Please don't send review requests to the list. The preferred methods of
asking for reviews are discussed in this post:
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html
Thanks.
-Ben
On 10/22/2014 02:57 AM, Eli Qiao wrote:
hi all.
when I am reviewing code in
On 10/22/2014 06:07 AM, Thierry Carrez wrote:
Ihar Hrachyshka wrote:
[...]
For stable branches, we have so called periodic jobs that are
triggered once in a while against the current code in a stable branch,
and report to openstack-stable-maint@ mailing list. An example of
failing periodic job
Great work Daneyon! Excellent job with neutron and nova-networking!
+1
-Ryan
- Original Message -
From: Steven Dake sd...@redhat.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Wednesday, October 22, 2014 11:04:24 AM
Subject: [openstack-dev] [kolla]
On Tue, Oct 21, 2014 at 6:29 PM, Stuart Fox stu...@demonware.net wrote:
Having written/worked on a few DC automation tools, I've typically broken
down the process of getting unknown hardware into production into 4
distinct stages.
1) Discovery (The discovery of unknown hardware)
2)
On 2014-10-22 10:05 AM, John Griffith wrote:
Ideas started spreading from there to "Using a Read Only Cinder Volume
per image", to "A Glance owned Cinder Volume that would behave pretty
much like the current local disk/file-system model" (Create a Cinder Volume
for Glance, attach it to the Glance
Don
Will there be a meeting next week? What is the regular time slot for the
meeting?
I'd like to work with you on a technical slide to use in Paris
Do we need to socialize the Gantt topic more?
Thx
Uri (Oo-Ree)
C: 949-378-7568
From: Dugger, Donald D [mailto:donald.d.dug...@intel.com]
Sent:
On Wed, Oct 22, 2014 at 08:04:24AM -0700, Steven Dake wrote:
A few weeks ago in IRC we discussed the criteria for joining the core team
in Kolla. I believe Daneyon has met all of these requirements by reviewing
patches along with the rest of the core team and providing valuable
comments, as
On 10/22/2014 10:54 AM, Elzur, Uri wrote:
Will there be a meeting next week? What is the regular time slot for the
meeting?
Tuesdays at 1500 UTC in IRC channel: #openstack-meeting
Hi, everyone.
For one project we need to have a backup of info about the nodes
(astute.yaml), in case Fuel and a node-n are down.
Would it be a bad idea to keep a copy of each node's astute.yaml file on
every node of the cluster?
For example:
pod_state/node-1.yaml
pod_state/node-2.yaml
The regular meeting time is Tuesdays at 15:00 UTC/11am EST:
https://wiki.openstack.org/wiki/Meetings#Gantt_.28Scheduler.29_team_meeting
Generally, we don't do slides for design summit sessions -- we use
etherpads instead and the sessions are discussions, not presentations.
Next week's
Hello Stackers,
Here is a first blueprint to discuss for Kilo.
I would like to start a discussion related to the monitoring API in MagnetoDB.
I've written a blueprint [1] about this.
The goal is to create an API for exposing usage statistics for users,
external monitoring, or billing tools.
fyi, latest update after discussion on #openstack-infra, consensus
seems to be to allow projects to add to g-r.
131+ All OpenStack projects, regardless of status, may add entries to
132+ ``global-requirements.txt`` for dependencies if the project is going
133+ to run integration tests under a
Just for the record, they are watching us!:-O
https://aws.amazon.com/blogs/aws/new-aws-directory-service/
Best!
Thiago
On 16 August 2014 16:03, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:
Hey Stackers,
I'm wondering here... Samba4 is pretty solid (upcoming 4.2 rocks), I'm
using
Hey all,
Just wanted to drop a quick note to say that I decided to leave Rackspace to
pursue another opportunity. My last day was last Friday. I won’t have much time
for OpenStack, but I’m going to continue to hang out in the channels. Having
been involved in the project since day 1, I’m going
Hi Andrew,
Thank you for sharing your ideas. We have similar blueprint where you
should be able to save/restore information about your environment
https://blueprints.launchpad.net/fuel/+spec/save-and-restore-env-settings
For development, it's very useful when you need to create the identical
On 10/21/2014 05:44 AM, Nikola Đipanov wrote:
On 10/20/2014 07:38 PM, Jay Pipes wrote:
Hi Dan, Dan, Nikola, all Nova devs,
OK, so in reviewing Dan B's patch series that refactors the virt
driver's get_available_resource() method [1], I am stuck between two
concerns. I like (love even) much of
On 10/21/2014 04:51 PM, Dan Smith wrote:
The rationale behind two parallel data model hierarchies is that the
format the virt drivers report data in is not likely to be exactly
the same as the format that the resource tracker / scheduler wishes to
use in the database.
Yeah, and in cases where
Chris,
All the best on your next adventure - you'll be missed here!
-Deva
On Wed, Oct 22, 2014 at 10:37 AM, Chris Behrens cbehr...@codestud.com wrote:
Hey all,
Just wanted to drop a quick note to say that I decided to leave Rackspace to
pursue another opportunity. My last day was last
Chris,
Best of luck on the new adventure! Definitely don’t be a stranger!
Cheers,
Morgan
On Oct 22, 2014, at 10:37, Chris Behrens cbehr...@codestud.com wrote:
Hey all,
Just wanted to drop a quick note to say that I decided to leave Rackspace to
pursue another opportunity. My last day
Chris,
All the best to you on you new adventure.
Chris Krelle
NobodyCam
On Wed, Oct 22, 2014 at 10:37 AM, Chris Behrens cbehr...@codestud.com
wrote:
Hey all,
Just wanted to drop a quick note to say that I decided to leave Rackspace
to pursue another opportunity. My last day was last
I won’t have much time for OpenStack, but I’m going to continue to
hang out in the channels.
Nope, sorry, veto.
Some options to explain your way out:
1. Oops, I forgot it wasn't April
2. I have a sick sense of humor; I'm getting help for it
3. I've come to my senses after a brief break from
Chris,
It was great to work with you, best of luck and enjoy this new opportunity.
Cheers,
Lucas
On Wed, Oct 22, 2014 at 6:50 PM, Morgan Fainberg
morgan.fainb...@gmail.com wrote:
Chris,
Best of luck on the new adventure! Definitely don’t be a stranger!
Cheers,
Morgan
On Oct 22, 2014, at
On 2014-10-22 12:31:53 -0400 (-0400), Davanum Srinivas wrote:
fyi, latest update after discussion on #openstack-infra, consensus
seems to be to allow projects to add to g-r.
[...]
If that's deemed unacceptable for other reasons, the alternative
solution which was floated is to tweak
Hi folks,
due to the requirement to have PTL for the program, we're running
elections for the MagnetoDB PTL for Kilo cycle. Schedule and policies
are fully aligned with official OpenStack PTLs elections.
You can find more info in official elections wiki page [0] and
the same page for MagnetoDB
The application projects are dropping python 2.6 support during Kilo, and I’ve
had several people ask recently about what this means for Oslo. Because we
create libraries that will be used by stable versions of projects that still
need to run on 2.6, we are going to need to maintain support for
Hi,
I have these concerns regarding command execution in Manila.
I was going to propose these for discussion at the Design Summit.
It might be too late for that. Then we can discuss it here --
or at the summit, just informally.
Thanks to Valeriy for his ideas about some of the topics below.
Hi folks,
We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.
Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings
http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20141023T18
P.S. The main topic is finalisation
Chris,
Thanks for your work on cells in OpenStack nova... we're heavily exploiting it
to scale out the CERN cloud.
Tim
-Original Message-
From: Chris Behrens [mailto:cbehr...@codestud.com]
Sent: 22 October 2014 19:37
To: OpenStack Development Mailing List (not for usage questions)
On 10/22/2014 12:24 AM, Tom Fifield wrote:
On 22/10/14 03:07, Andrew Laski wrote:
On 10/21/2014 04:31 AM, Nikola Đipanov wrote:
On 10/20/2014 08:00 PM, Andrew Laski wrote:
One of the big goals for the Kilo cycle by users and developers of the
cells functionality within Nova is to get it to a
On 10/22/2014 03:42 AM, Vineet Menon wrote:
On 22 October 2014 06:24, Tom Fifield t...@openstack.org
mailto:t...@openstack.org wrote:
On 22/10/14 03:07, Andrew Laski wrote:
On 10/21/2014 04:31 AM, Nikola Đipanov wrote:
On 10/20/2014 08:00 PM, Andrew Laski wrote:
One
What is current best practice to restore a failed Fuel node?
*Adam Lawson*
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072
On Wed, Oct 22, 2014 at 10:40 AM, Sergii
I suppose this BP also has some relevance to such a discussion.
https://review.openstack.org/#/c/100278/
/ Bob
On 2014-10-22 15:42, Kyle Mestery mest...@mestery.com wrote:
There are currently at least two BPs registered for VLAN trunk support
to VMs in neutron-specs [1] [2]. This is clearly
On Oct 20, 2014, at 1:22 PM, Doug Hellmann d...@doughellmann.com wrote:
After today’s meeting, we have filled our seven session slots. Here’s the
proposed list, in no particular order. If you think something else needs to
be on the list, speak up today because I’ll be plugging all of this
On 10/22/2014 12:52 AM, Michael Still wrote:
Thanks for this.
It would be interesting to see how much of this work you think is
achievable in Kilo. How long do you see this process taking? In line
with that, is it just you currently working on this? Would calling for
volunteers to help be
Hi Chris
On 10/21/2014 11:08 PM, Christopher Yeoh wrote:
The API Workgroup git repository has been setup and you can access it
here.
Cool, adding it to the repos to watch.
There is some content there though not all the proposed guidelines from
the wiki page are in yet:
we could set up a job to publish under docs.o.org/api-wg pretty easily -
it seems like a good place to start to publish this content.
thanks for getting the repo all setup chris and jay.
Thanks,
_
Steve Martinelli
OpenStack Development - Keystone
Hi Jorge,
Good discussion so far + glad to have you back :)
I am not a big fan of using logs for billing information since ultimately (at
least at HP) we need to pump it into ceilometer. So I am envisioning either the
amphora (via a proxy) to pump it straight into that system or we collect it
On Wed, Oct 22, 2014 at 2:26 PM, Steve Martinelli steve...@ca.ibm.com
wrote:
we could set up a job to publish under docs.o.org/api-wg pretty easily -
it seems like a good place to start to publish this content.
thanks for getting the repo all setup chris and jay.
Thanks for setting up the
- Original Message -
On 10/21/2014 07:53 PM, David Vossel wrote:
- Original Message -
-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: October 21, 2014 15:07
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Nova]
I just had a conflict come up. I won't be able to make it to the meeting.
I wanted to announce that IPAM is very likely topic for a design
session at the summit. I will spend some time reviewing the old
etherpads starting here [1] since the topic was set aside early in
Juno.
Carl
[1]
I notice at the top of the GitHub mirror page [1] it reads, "API Working Group
http://openstack.org".
Can we get that changed to "API Working Group
https://wiki.openstack.org/wiki/API_Working_Group"?
That URL would be much more helpful to people who come across the GitHub repo.
It's not a code
It is a code change :) Everything is a code change around here. You will
want to update the projects.yaml file in openstack-infra/project-config
[0]. If you add a 'homepage:
https://wiki.openstack.org/wiki/API_Working_Group' key value pair to the
api-wg dict there the jeepyb tooling should update
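Concretely, the change described above would look something like this in project-config's gerrit/projects.yaml (a sketch; the neighboring keys are illustrative, not the file's exact current contents):

```yaml
- project: openstack/api-wg
  description: API Working Group
  homepage: https://wiki.openstack.org/wiki/API_Working_Group
```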
Everett, I think the description is managed by this file:
https://github.com/openstack-infra/project-config/blob/master/gerrit/projects.yaml
- Steve
From: Everett Toews everett.to...@rackspace.com
To: OpenStack Development Mailing List (not for usage questions)
+1 to the proposed schedule from me.
On 10/22/2014 02:11 PM, Doug Hellmann wrote:
On Oct 20, 2014, at 1:22 PM, Doug Hellmann d...@doughellmann.com wrote:
After today’s meeting, we have filled our seven session slots. Here’s the
proposed list, in no particular order. If you think something
+1 from me as well.
-- dims
On Wed, Oct 22, 2014 at 5:05 PM, Ben Nemec openst...@nemebean.com wrote:
+1 to the proposed schedule from me.
On 10/22/2014 02:11 PM, Doug Hellmann wrote:
On Oct 20, 2014, at 1:22 PM, Doug Hellmann d...@doughellmann.com wrote:
After today’s meeting, we have
Hello.
It's clear that the scheduler and resource tracker inside Nova are
areas where we need to innovate. There are a lot of proposals in this
space at the moment, and it can be hard to tell which ones are being
implemented and in which order.
I have therefore asked Jay Pipes to act as
- Original Message -
From: Steve Gordon sgor...@redhat.com
To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
- Original Message -
From: Steve Gordon sgor...@redhat.com
To: OpenStack Development Mailing List (not for usage
On Wed, 22 Oct 2014 14:44:26 -0500
Anne Gentle a...@openstack.org wrote:
On Wed, Oct 22, 2014 at 2:26 PM, Steve Martinelli
steve...@ca.ibm.com wrote:
we could set up a job to publish under docs.o.org/api-wg pretty
easily - it seems like a good place to start to publish this
content.
On Wed, 22 Oct 2014 20:36:27 +
Everett Toews everett.to...@rackspace.com wrote:
I notice at the top of the GitHub mirror page [1] it reads, API
Working Group http://openstack.org”
Can we get that changed to API Working Group
https://wiki.openstack.org/wiki/API_Working_Group”?
That
Great question!
For some backstory, the community interest in supporting XML has always
been lackluster, so the XML translation middleware has been on a slow road
of decline. It's a burden for everyone to maintain, and only works for
certain API calls. For the bulk of Keystone's documented APIs,
Let's get a list of questions / talking points set up on this etherpad:
https://etherpad.openstack.org/p/paris_absentee_talking_points
If you can't make it to the summit (like me) then you can put any questions or
concerns you have in this document.
If you are going to the summit, please take a
We are scheduled for Monday, 03 Nov, 14:30 - 16:00. I have a conflict with the
“Meet the Influencers” talk that runs from 14:30-18:30, plus the GBP session is
on Tuesday, 04 Nov, 12:05-12:45. I was thinking we would want to co-locate the
Congress and GBP talks as much as possible.
The BOSH
On 23 Oct 2014, at 5:55 am, Andrew Laski andrew.la...@rackspace.com wrote:
While I agree that N is a bit interesting, I have seen N=3 in production
[central API]--[state/region1]--[state/region DC1]
                             \-[state/region DC2]
            --[state/region2 DC]