Daniel, thank you very much for the extensive and detailed email.
The plan looks good to me and makes sense; also, the OVS option will be
tested and available when selected.
On Wed, Oct 24, 2018 at 4:41 PM Daniel Alvarez Sanchez
> Hi Stackers!
> The purpose of this email is
> On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo
> Yesterday, during the Oslo meeting we discussed the possibility of
> creating a new Special Interest Group to provide home and release
> means for operator related tools
Yesterday, during the Oslo meeting we discussed the possibility of
creating a new Special Interest Group to provide home and release
means for operator related tools
I continued the discussion with M. Hillsman later, and he made me aware
of the operator
have a look at the dragonflow project, maybe it's similar to what you're
trying to accomplish
On Fri, Oct 5, 2018, 1:56 PM Niket Agrawal wrote:
> Thanks for the help. I am trying to run a custom Ryu app from the nova
> compute node and have all the openvswitches connected to this new
I believe we could add some of the networking-ovn jobs; we need to
decide which ones would be most beneficial.
On Tue, Oct 2, 2018 at 10:02 AM wrote:
> Hi Miguel, all,
> The initiative is very welcome and will help make it more efficient to
> develop in stadium projects.
Hi Jirka & Daniel, thanks for your answers... more inline.
On Wed, Oct 3, 2018 at 10:44 AM Jiří Stránský wrote:
> On 03/10/2018 10:14, Miguel Angel Ajo Pelayo wrote:
> > Hi folks
> >I was trying to deploy neutron with networking-ovn via
I was trying to deploy neutron with networking-ovn via tripleo-quickstart
scripts on master, and this config file . It doesn't work, overcloud
deploy cries with:
1) trying to deploy ovn I end up with a 2018-10-02 17:48:12 | "2018-10-02
17:47:51,864 DEBUG: 26691 -- Error: image
Thanks for the info Doug.
On Mon, Oct 1, 2018 at 6:25 PM Doug Hellmann wrote:
> Miguel Angel Ajo Pelayo writes:
> > Thank you for the guidance and ping Doug.
> > Was this triggered by  ? or By the 1.1.0 tag pushed to gerrit?
> The release jobs are alway
Oh, ok 1.1.0 tag didn't have 'venv' in tox.ini, but master has it since:
On Mon, Oct 1, 2018 at 10:01 AM Miguel Angel Ajo Pelayo
> Thank you for the guidance and ping Doug.
> Was this triggered by  ? or By the 1.1.0 t
Thank you for the guidance and ping Doug.
Was this triggered by  ? or By the 1.1.0 tag pushed to gerrit?
I'm working to make os-log-merger part of the OpenStack governance
projects, and to make sure we release it as a tarball.
It's a small tool I've been using for years making my life
Good luck Gary, thanks for all those years on Neutron! :)
On Wed, Sep 19, 2018 at 9:32 PM Nate Johnston
> On Wed, Sep 19, 2018 at 06:19:44PM +, Gary Kotton wrote:
> > I have recently transitioned to a new role where I will be working on
> other parts of
ic tests and
> address them as suitable for the specific plugin.
> *From: *Miguel Angel Ajo Pelayo <majop...@redhat.com>
> *Reply-To: *OpenStack List <email@example.com>
> *Date: *Saturday, April 7, 2018 at 8:56 AM
this issue isn't only for networking-ovn; please note that it happens with
a few other vendor plugins (like NSX), at least this is something we have
found in downstream certifications.
On Sat, Apr 7, 2018, 12:36 AM Daniel Alvarez wrote:
> > On 6 Apr 2018, at
You can run as many as you want; generally an HAProxy instance is used in
front of them to balance load across neutron servers.
Also, keep in mind that the DB backend is a single MySQL; you can also
distribute that with Galera.
That is the configuration you will get by default when you deploy in HA
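As a hedged illustration of that setup (the addresses, names, and ports below are invented, not from this thread), an HAProxy fragment balancing two neutron-server API instances could look like:

```cfg
# Hypothetical haproxy.cfg fragment: one VIP in front of two
# neutron-server API workers. All addresses/ports are illustrative.
frontend neutron_api
    bind 192.0.2.10:9696
    default_backend neutron_servers

backend neutron_servers
    balance roundrobin
    server neutron-1 192.0.2.11:9696 check
    server neutron-2 192.0.2.12:9696 check
```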
Right, that's a little absurd, 1TB? :-), I completely agree.
They could live with anything, but I'd try to estimate minimums across
for example, an RDO test deployment with containers looks like:
(undercloud) [stack@undercloud ~]$ ssh firstname.lastname@example.org "sudo df -h
Very good summary, thanks for leading the PTG and neutron so well. :)
On Mon, Mar 12, 2018 at 11:25 PM fumihiko kakuma
> Hi Miguel,
> > * As part of the neutron-lib effort, we have found networking projects
> > are very inactive. Examples are
Good point, I'm moving this to the openstack-dev list
> On Mon, Feb 12, 2018 at 12:37 AM, Miguel Angel Ajo Pelayo
> <majop...@redhat.com> wrote:
> > Hi folks :)
> >We were talking this morning about the change for the new engine
I have created an etherpad for networking-ovn, if
https://etherpad.openstack.org/p/networking-ovn-ptg-rocky with some topics
I thought are relevant.
But please feel free to add anything you believe could be interesting,
and fill in attendance so it's easier to sync & meet. :)
That may help, of course, but I guess it could also be capacity related.
On Wed, Dec 20, 2017 at 11:42 AM Takashi Yamamoto
> On Wed, Dec 20, 2017 at 7:18 PM, Lucas Alvares Gomes
> > Hi,
> >>> Hi all,
> >>> Just sending this
If we could have one member from networking-ovn on the neutron-stable-maint
team that would be great. That means the member would have to be trusted
not to touch neutron patches when not knowing what they're doing, and of
course to follow the stable guidelines, which are absolutely important. But I
That adds more latency, I believe some vendor plugins do it like that
Have you checked out networking-ovn? It's all done in OpenFlow, and you
get HA (A/P) for free without extra namespaces, just flows and BFD
On Dec 4, 2017 4:22 PM, "Jaze Lee"
I wanted to raise this topic; I have been wanting to do it for a long time
but preferred to wait until the zuulv3 stuff was a little bit more stable.
Maybe now is a good time.
We were thinking about the option of having a couple of non-voting jobs
in the neutron check for
Welcome Daniel! :)
On Fri, Dec 1, 2017 at 5:45 PM, Lucas Alvares Gomes
> Hi all,
> I would like to welcome Daniel Alvarez to the networking-ovn core team!
> Daniel has been contributing with the project for a good time already
> and helping *a lot* with reviews
"+1" I know, I'm not active, but I care about neutron, and slaweq is a
On Nov 29, 2017 8:37 PM, "Ihar Hrachyshka" wrote:
> YES, FINALLY.
> On Wed, Nov 29, 2017 at 11:29 AM, Kevin Benton wrote:
> > +1! ... even though I haven't been
Thank you very much :-)
On Tue, Oct 10, 2017 at 4:09 PM, Lucas Alvares Gomes Martins <
> On Tue, Oct 10, 2017 at 2:25 PM, Russell Bryant
> > Hello, everyone. I'd like to welcome two new members to the
> > networking-ovn-core
definitely dig more into this.
> Having a lot of messages broadcasted to all the neutron agents is not
> something you want especially in the context of femdc.
> : https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds
It could be that too TBH I'm not sure :)
On Fri, Sep 22, 2017 at 11:02 AM, Sławomir Kapłoński <sla...@kaplonski.pl>
> Isn't OVS setting MTU automatically MTU for bridge as lowest value from
> ports connected to this bridge?
> > Wiadomość napisana przez Miguel
I believe that one of the problems is that if you set a certain MTU on an
OVS switch, newly connected ports will automatically be assigned that MTU
by the ovs-vswitchd daemon.
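For illustration (the interface name is invented, and exact behavior varies across OVS versions), the `mtu_request` column on an Interface is the usual knob to pin a port's MTU rather than letting ovs-vswitchd pick it:

```shell
# Ask ovs-vswitchd to apply a specific MTU to one port (OVS >= 2.6):
ovs-vsctl set Interface vhu-port0 mtu_request=1500
# Check what MTU the daemon actually applied:
ovs-vsctl get Interface vhu-port0 mtu
```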
On Wed, Sep 20, 2017 at 10:45 PM, Ian Wells wrote:
> Since OVS is doing L2 forwarding, you
On Thu, Sep 21, 2017 at 3:16 AM, Kevin Benton wrote:
> OpenStack Development Mailing List (not for usage questions)
I wrote those lines.
At that time, I tried a publisher and a receiver at that scale. It
was the receiver side that crashed trying to subscribe; the sender was
Sadly, I didn't keep the test examples; I should have stored them on GitHub
or something. It shouldn't be hard to
+1! Thanks for organizing
On Wed, Sep 13, 2017 at 10:11 AM, Sandhya Dasu (sadasu)
> Thanks for organizing.
> On 9/13/17, 7:28 AM, "Thomas Morin" wrote:
> Takashi Yamamoto, 2017-09-13 03:05:
> > +1
Kevin!, and thank you for all the effort and energy you have put into
openstack-neutron during the last few years. It's been great to have you on
On Mon, Sep 11, 2017 at 5:18 PM, Ihar Hrachyshka
> It's very sad news for the team, but I hope that Kevin
I'm also interested in this topic. :)
On Mon, Sep 11, 2017 at 11:12 AM, Jay Pipes wrote:
> I'm interested in this. I get in to Denver this evening so if we can do
> this session tomorrow or later, that would be super.
> On 09/11/2017 01:11 PM, Mooney,
Big +1 for Miguel Lavalle from me. Miguel, thank you for taking this
responsibility on behalf of the Neutron/OpenStack community.
On Fri, Sep 8, 2017 at 8:59 PM, Kevin Benton wrote:
> Hi everyone,
> Due to a change in my role at my employer, I no longer have time to be the
Thank you Kevin & Miguel! ;)
On Thu, Sep 7, 2017 at 4:04 PM, Kevin Benton wrote:
> Hello everyone,
> With the help of Miguel we have a tentative schedule in the PTG. Please
> check the etherpad and if there is anything missing you wanted to see
> discussed, please reach out
I wonder if it makes sense to provide a helper script to do what's
explained in the document.
So we could run ~/devstack/tools/run_locally.sh n-sch.
If yes, I'll send the patch.
On Fri, Sep 8, 2017 at 3:00 PM, Eric Fried wrote:
> Oh, are we talking about the logs produced
it a bit more future proof, and able to easily
integrate with vendor plugins without the need to modify the service file.
On Tue, Sep 5, 2017 at 9:27 AM, Miguel Angel Ajo Pelayo <majop...@redhat.com
> Why do we need to put all the configuration in a single file?
Why do we need to put all the configuration in a single file?
That would be a big big change to deployers. It'd be great if we can think
of an alternate solution. (not sure how that's being handled for other
On Mon, Sep 4, 2017 at 3:01 PM, Kevin
Good (amazing) job folks. :)
On 10 Aug 2017 at 9:43, "Thierry Carrez" wrote:
> Oh, that's good for us. Should still be fixed, if only so that we can
> test properly :)
> Kevin Benton wrote:
> > This is just the code simulating the conntrack entries that would be
On Mon, May 8, 2017 at 2:48 AM, Michael Still wrote:
> It would be interesting for this to be built in a way where other
> endpoints could be added to the list that have extra headers added to them.
> For example, we could end up with something quite similar to EC2 IAMS if
Some of you already know, but I wanted to make it official.
Recently I moved to work on the networking-ovn component,
and OVS/OVN itself, and while I'll stick around and be available
on IRC for any questions, I'm already not doing a good job with
Thank you for the patches. I merged them, released 1.1.0 and proposed 
On Wed, Mar 15, 2017 at 10:14 AM, Gorka Eguileor
> On 14/03, Ihar Hrachyshka wrote:
> > Hi all,
> > the patch that started to produce log index
Nate, it was a pleasure working with you, you and your team made great
contributions to OpenStack and neutron. I'll be very happy if we ever have
the chance to work again together.
Best regards, and very good luck, my friend.
On Tue, Mar 7, 2017 at 4:55 AM, Kevin Benton wrote:
On Wed, Feb 22, 2017 at 1:53 PM, Thomas Morin
> Wed Feb 22 2017 11:13:18 GMT-0500 (EST), Anil Venkata:
> While relevant, I think this is not possible until br-int allows to match
> the network a packet belongs to (the ovsdb port tags don't let you do that
I have updated the spreadsheet. In the case of RH/RDO we're using the same
in the case of HA, pacemaker is not taking care of those anymore since the
We let systemd take care of restarting the services that die, and we worked
with the community
to make sure that
On Mon, Feb 20, 2017 at 9:16 AM, John Davidge
> On 2/20/17, 4:48 AM, "Carlos Gonçalves" wrote:
> >On Mon, Feb 20, 2017 at 9:17 AM, Kevin Benton
> > wrote:
> >No problem. Keep sending in RSPVs
Lol, ack :)
On Mon, Feb 20, 2017 at 2:37 AM, Kevin Benton wrote:
> Clothes are strongly recommended as far as I understand it.
> On Mon, Feb 20, 2017 at 1:47 AM, Gary Kotton wrote:
>> What is the dress code J
>> *From: *"Das, Anindita"
I believe those are traces left by the reference implementation of cinder
setting a very high debug level on tgtd. I'm not sure if that's related or
the culprit at all (probably the culprit is a mix of things).
I wonder if we could disable such verbosity on tgtd, which certainly is
going to slow
Jeremy Stanley wrote:
> It's an option of last resort, I think. The next consistent flavor
> up in most of the providers donating resources is double the one
> we're using (which is a fairly typical pattern in public clouds). As
> aggregate memory constraints are our primary quota limit, this
On Fri, Feb 3, 2017 at 7:55 AM, IWAMOTO Toshihiro
> At Wed, 1 Feb 2017 16:24:54 -0800,
> Armando M. wrote:
> > Hi,
> > [TL;DR]: OpenStack services have steadily increased their memory
> > footprints. We need a concerted way to address the oom-kills
Armando, thank you very much for all the work you've done as PTL,
my best wishes, and happy to know that you'll be around!
On Wed, Jan 11, 2017 at 1:52 AM, joehuang wrote:
> Sad to know that you will step down from Neutron PTL. Had several f2f
+1 Good work. :)
On Fri, Dec 16, 2016 at 11:59 AM, Rossella Sblendido
> On 12/16/2016 09:25 AM, Ihar Hrachyshka wrote:
> > Armando M. wrote:
> >> Hi neutrinos,
> >> I would like to propose Ryan and Nate as the go-to fellows for
On Fri, Dec 16, 2016 at 2:44 AM, Vasudevan, Swaminathan (PNB Roseville) <
> *From:* Armando M. [mailto:arma...@gmail.com]
> *Sent:* Thursday, December 15, 2016 3:15 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
It's been an absolute pleasure working with you on every single interaction.
Very good luck Henry,
On Fri, Dec 2, 2016 at 8:14 AM, Andreas Scheuring <
> Henry, it was a pleasure working with you! Thanks!
> All the best for your further journey!
Sad to see you go Carl,
Thanks for so many years of hard work, as Brian said, OpenStack /
Neutron is better thanks to your contributions through the last years.
My best wishes for you.
On Fri, Nov 18, 2016 at 9:51 AM, Vikram Choudhary wrote:
> It was really a good
I could be wrong, but I suspect we're doing it this way to be able to make
changes to several objects atomically, and roll back the transaction if at
some point what we're trying to accomplish turns out not to be possible.
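A minimal sketch of that transaction pattern, using plain sqlite3 with made-up table names (this illustrates the atomic-change-with-rollback idea, not Neutron's actual DB layer):

```python
import sqlite3

# Two related "objects" that must change together, or not at all.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE networks (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE ports (id INTEGER PRIMARY KEY, network_id INTEGER)")
conn.execute("INSERT INTO networks VALUES (1, 'net-a')")
conn.commit()

try:
    with conn:  # one transaction: commits on success, rolls back on exception
        conn.execute("INSERT INTO ports VALUES (1, 1)")
        conn.execute("UPDATE networks SET name = 'net-b' WHERE id = 1")
        raise RuntimeError("simulated mid-operation failure")
except RuntimeError:
    pass

# Both changes were rolled back together.
ports = conn.execute("SELECT COUNT(*) FROM ports").fetchone()[0]
name = conn.execute("SELECT name FROM networks WHERE id = 1").fetchone()[0]
print(ports, name)  # 0 net-a
```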
On Tue, Nov 15, 2016 at 10:06 AM, Gary Kotton
I probably won't be able to go, but if you plan to hang out in any
other place around after/before dinner, maybe I'll join.
Cheers & Enjoy! :)
On Mon, Oct 17, 2016 at 12:56 PM, Nate Johnston wrote:
> I responded to Miguel privately, but I'll be there as well!
+1!, even if my vote does not count :-)
On Tue, Oct 11, 2016 at 12:00 AM, Eichberger, German
> +1 (even if it doesn’t matter)
> From: Stephen Balukoff
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
This was my point of view on a possible solution:
After much thinking (and quite little doing) I believe the option "2"
I proposed is a rather reasonable one:
2) Before cleaning a namespace blindly in the end, identify
I just found this one created recently, and I will try to build on top of it:
On Wed, Sep 28, 2016 at 1:52 PM, Miguel Angel Ajo Pelayo
> Refloating this thread.
> I posted this rfe/bug , and I'm plan
> multinode controllers)" - Rally is suitable for many kind of tests=)
> Especially for testing at scale! If you have any question how to use Rally
> feel free to ask Rally team!
> - Best regards, Roman Vasylets. Rally team member
> On Thu, Aug 11, 20
Ack, and thanks for the summary Ihar,
I will have a look at it tomorrow morning; please update this thread
with any progress.
On Tue, Sep 27, 2016 at 8:22 PM, Ihar Hrachyshka wrote:
> Hi all,
> so we started getting ‘Address already in use’ when trying to start dnsmasq
Congratulations Ihar!, well deserved through hard work! :)
On Mon, Sep 19, 2016 at 8:03 PM, Brian Haley wrote:
> Congrats Ihar!
> On 09/17/2016 12:40 PM, Armando M. wrote:
>> Hi folks,
>> I would like to propose Ihar to become a member of the Neutron
Option 2 sounds reasonable to me too. :)
On Tue, Sep 6, 2016 at 2:39 PM, Akihiro Motoki wrote:
> What releases should we support in API references?
> There are several options.
> 1. The latest stable release + master
> 2. All supported stable releases + master
> 3. more
Thanks for the report, I'm adding some notes inline (OSC/SDK)
On Sat, Aug 27, 2016 at 2:13 AM, Armando M. wrote:
> Hi Neutrinos,
> For those of you who couldn't join in person, please find a few notes below
> to capture some of the highlights of the event.
RE: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness
> I talked to our driver architect and according to him this is vendor
> implementation (according to him this should work with Mellanox NIC) I need
> to verify that this indeed working.
> On Tue, Aug 9, 2016 at 5:40 AM, Miguel Angel Ajo Pelayo
> <majop...@redhat.com> wrote:
@moshe, any insight on this?
I guess that'd depend on the NIC internal switch implementation and
how the switch ARP tables are handled there (per network, or global
If that's the case for some SR-IOV vendors (or all), would it make
sense to have a global switch to create globally
What are the current options/tools we're considering?
> Lubosz Kosnik
> Cloud Software Engineer OSIC
>> On Aug 8, 2016, at 7:04 AM, Miguel Angel Ajo Pelayo <majop...@redhat.com>
>> Recently, I s
Thank you!! :)
On Mon, Aug 8, 2016 at 5:49 PM, Michael Johnson <johnso...@gmail.com> wrote:
> Thank you for your work here. I would support an effort to setup a
> multi-node gate job.
> On Mon, Aug 8, 2016 at 5:04 AM, Miguel Angel
On Tue, Aug 9, 2016 at 8:08 AM, Antonio Ojea wrote:
> What do you think about openwrt images?
> They are small, have documentation to build your custom images, have a
> packaging system and have tons of networking features (ipv6, vlans, ...) ,
> also seems
Recently, I sent a series of patches  to make it easier for
developers to deploy a multi node octavia controller with
n_controllers x [api, cw, hm, hk] with an haproxy in front of the API.
Since this is the way the service is designed to work (with horizontal
scalability in mind), and we want
Keep us posted!! :)
On Sat, Aug 6, 2016 at 8:16 PM, Mooney, Sean K wrote:
> Hi just a quick fyi,
> About 2 weeks ago I did some light testing with the conntrack security group
> driver and the newly
> Merged upserspace conntrack support in ovs.
The problem with the other projects' image builds is that they are
aimed at bigger systems, while cirros is an embedded-device-like
image which boots in a couple of seconds.
Couldn't we contribute to cirros to have such a module loaded by default?
Or maybe it's time for OpenStack to build their
Ohhh, yikes, even though I'm late my vote would have been super +1!!
On Tue, Jul 26, 2016 at 5:04 PM, Jakub Libosvar wrote:
> On 26/07/16 16:56, Assaf Muller wrote:
>> We've hit critical mass from cores interesting in the testing area.
>> Welcome Jakub to the core
Oh yikes, I was "hit by a plane" (delay) plus a huge jet lag and
didn't make it to the meeting, I'll be there next week. Thank you.
On Tue, Jul 12, 2016 at 9:48 AM, Miguel Angel Ajo Pelayo
> I'd like to ask for some prioritization on this RFE ,
I'd like to ask for some prioritization on this RFE, since it's blocking
one of the already existing RFEs (ingress bandwidth limiting),
and we're trying to enhance the operator experience of the QoS service.
It's been discussed in previous driver meetings, and it seems to have
:10 PM, Kevin Benton <ke...@benton.pub> wrote:
> Yeah, no meetings in #openstack-neutron please. It leaves us nowhere to
> discuss development stuff during that hour.
> On Tue, May 17, 2016 at 2:54 AM, Miguel Angel Ajo Pelayo <
> majop...@redhat.com> wrote:
I agree, let's try to find a timeslot that works.
using #openstack-neutron with the meetbot works, but it's going to generate
a lot of noise.
On Tue, May 17, 2016 at 11:47 AM, Ihar Hrachyshka
> > On 16 May 2016, at 15:47, Takashi Yamamoto
I started by opening a tiny RFE that may help in the organization
of flows inside the OVS agent, for interoperability of features (SFC,
TaaS, OVS fw, and even port trunking with just OpenFlow).
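A hedged sketch of what that kind of flow organization looks like in practice (the table numbers and the feature-to-table mapping are invented for illustration, not defined by the RFE):

```shell
# Give each feature its own OpenFlow table and chain them with resubmit,
# so SFC, TaaS, the OVS firewall, etc. can coexist on one bridge.
ovs-ofctl add-flow br-int "table=0,priority=0,actions=resubmit(,10)"   # classification
ovs-ofctl add-flow br-int "table=10,priority=0,actions=resubmit(,20)"  # e.g. firewall
ovs-ofctl add-flow br-int "table=20,priority=0,actions=NORMAL"         # L2 forwarding
```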
Does governors ballroom in Hilton sound ok?
We can move to somewhere else if necessary.
OpenStack Development Mailing List (not for usage questions)
Please add me to whatsapp or telegram if you use that : +34636522569
On 27/4/2016 at 12:50, majop...@redhat.com wrote:
> Trying to find you folks. I was late
> On 27/4/2016 at 12:04, "Paul Carver" wrote:
>> SFC team and anybody else dealing with flow
Trying to find you folks. I was late
On 27/4/2016 at 12:04, "Paul Carver" wrote:
> SFC team and anybody else dealing with flow selection/classification (e.g.
> I just wanted to confirm that we're planning to meet in salon C today
> (Wednesday) to get lunch but
Classifiers, while we need to make the full pipeline of features
(externally pluggable) work together.
> On Thu, Apr 21, 2016 at 12:58 PM, IWAMOTO Toshihiro <iwam...@valinux.co.jp>
>> At Wed, 20 Apr 2016 14:12:07 +0200,
>> Miguel Angel Ajo Pelayo wrote
On Mon, Apr 11, 2016 at 4:22 PM, Miguel Angel Ajo Pelayo
> On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes <jaypi...@gmail.com> wrote:
>> On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:
>> Yes, Nova's conducto
I think this is an interesting topic.
What do you mean exactly by FC ? (feature chaining?)
I believe we have three things to look at: (sorry for the TL)
1) The generalization of traffic filters / traffic classifiers. Having
common models, some sort of common API or common API structure
Sorry, I just saw: FC = flow classifier :-), I made it a multi-purpose
abbreviation now ;)
On Wed, Apr 20, 2016 at 2:12 PM, Miguel Angel Ajo Pelayo
> I think this is an interesting topic.
> What do you mean exactly by FC ? (feature chaining?)
On Fri, Apr 15, 2016 at 7:32 AM, IWAMOTO Toshihiro
> At Mon, 11 Apr 2016 14:42:59 +0200,
> Miguel Angel Ajo Pelayo wrote:
>> On Mon, Apr 11, 2016 at 11:40 AM, IWAMOTO Toshihiro
>> <iwam...@valinux.co.jp> wrote:
>> > A
On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes <jaypi...@gmail.com> wrote:
> Hi Miguel Angel, comments/answers inline :)
> On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:
>> In the context of  (generic resource pools / scheduling
On Mon, Apr 11, 2016 at 11:40 AM, IWAMOTO Toshihiro
> At Fri, 8 Apr 2016 12:21:21 +0200,
> Miguel Angel Ajo Pelayo wrote:
>> Hi, good that you're looking at this,
>> You could create a lot of ports with this meth
On Sun, Apr 10, 2016 at 10:07 AM, Moshe Levi <mosh...@mellanox.com> wrote:
> *From:* Miguel Angel Ajo Pelayo [mailto:majop...@redhat.com]
> *Sent:* Friday, April 08, 2016 4:17 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
In the context of  (generic resource pools / scheduling in nova) and
 (minimum bandwidth guarantees -egress- in neutron), I had a talk a few
weeks ago with Jay Pipes,
The idea was leveraging the generic resource pools and scheduling
mechanisms defined in  to find the right
Hi, good that you're looking at this.
You could create a lot of ports with this method and a bit of extra
bash, without the extra expense of instance RAM.
This effort is going to be still more
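The "bit of extra bash" could be as simple as the following (the network name and count are invented, and an authenticated openstack CLI is assumed):

```shell
# Create 200 ports on an existing network without booting any instances.
for i in $(seq 1 200); do
    openstack port create --network private "scale-port-$i"
done
```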
On Fri, Apr 8, 2016 at 11:28 AM, Ihar Hrachyshka
> Kevin Benton wrote:
> I don't know if my vote counts in this area, but +1!
> What the gentleman said ^, +1.
"me too ^" , +1 !
On Mon, Mar 21, 2016 at 3:17 PM, Jay Pipes <jaypi...@gmail.com> wrote:
> On 03/21/2016 06:22 AM, Miguel Angel Ajo Pelayo wrote:
>> I was doing another pass on this spec, to see if we could leverage
>> it as-is for QoS / bandwidth trackin
I was doing another pass on this spec, to see if we could leverage
it as-is for QoS / bandwidth tracking / bandwidth guarantees, and I
have a question 
I guess I'm just missing some detail, but looking at the 2nd scenario,
why wouldn't availability zones allow exactly the same if we
On Wed, Mar 9, 2016 at 4:16 PM, Doug Hellmann wrote:
> Excerpts from Armando M.'s message of 2016-03-08 15:43:05 -0700:
> > On 8 March 2016 at 15:07, Doug Hellmann wrote:
> > > Excerpts from Armando M.'s message of 2016-03-08 12:49:16 -0700:
> On 26 Feb 2016, at 02:38, Sean McGinnis wrote:
> On Thu, Feb 25, 2016 at 04:13:56PM +0800, Qiming Teng wrote:
>> Hi, All,
>> After reading through all the +1's and -1's, we realized how difficult
>> it is to come up with a proposal that makes everyone happy. When
Thanks a lot for working on this. I'm not following the [Horizon] tag and I
missed this. I've added the Neutron and QoS tags.
I will give it a try as soon as I can.
Keep up the good work!
> On 10 Feb 2016, at 13:04, masco
> On 09 Feb 2016, at 21:43, Sean M. Collins wrote:
> Kevin Benton wrote:
>> I agree with the mtu setting because there isn't much of a downside to
>> enabling it. However, the others do have reasons to be disabled.
>> csum - requires runtime detection of support for a