Hi all,
In reviewing the change [1], the question arose as to whether any project other
than neutron-dynamic-routing has been using the "next_hop" field included in a
callback notification emitted when FIP association is updated [2]. The question
as to whether to deprecate or simply remove
From: Tidwell, Ryan
Sent: Tuesday, December 6, 2016 3:13:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][Dynamic Routing] Plans for scenario
testing?
Thanks for the pointer. I'll take a look and see what can
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>>
On 6 December 2016 at 14:44, Tidwell, Ryan
<ryan.tidw...@hpe.com> wrote:
This is at the top of my list to look at. I've been thinking a lot about how to
implement some tests. For instance, do we need to actually stand up a BGP peer
of some sort to peer neutron with and assert the announcements somehow? Or
should we assume that Ryu works properly and make sure we
Bence,
I had been meaning to go a little deeper with performance benchmarking, but
I’ve been crunched for time. Thanks for doing this, this is some great analysis.
As Armando mentioned, L2pop seemed to be the biggest impediment to control
plane performance. If I were to use trunks heavily in
Cathy,
There are a few outstanding reviews to be wrapped up, including docs. However,
this is mostly complete and the bulk of the functionality has merged and you
can try it out.
Code Reviews:
https://review.openstack.org/#/q/project:openstack/neutron+branch:master+topic:bp/vlan-aware-vms
At one point early during the Mitaka cycle we had a weekly IRC meeting on this
topic going. I got side-tracked by other work at the time and I stopped
attending, so my apologies if these are still happening. I'm wondering if it
would be useful to get these going again if this is not currently
I just wanted to give a heads-up to everyone that a bug in Ryu 4.2 which was
just recently pushed to PyPI seems to be causing issues in the python34 jobs in
neutron-dynamic-routing. This issue will likely also cause problems for
backports to stable/mitaka in the main neutron repository. I have
+1
-----Original Message-----
From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: Monday, April 25, 2016 11:07 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Social at the summit
OK, there is
Gary,
I’m not sure I understand the relationship you’re drawing between BGP and L2
GW, could you elaborate? The BGP code that landed in Mitaka is mostly geared
toward the use case where you want to directly route your tenant networks
without any NAT (i.e. no floating IPs, no SNAT). Neutron
On 22 January 2016 at 08:57, Tidwell, Ryan
<ryan.tidw...@hpe.com> wrote:
I wanted to raise the question of whether to develop BGP dynamic routing in the
Neutron repo or spin it out as a stadium project. This question has been
raised recently on reviews and in offline discussions. For those unfamiliar
with this work, BGP efforts in Neutron entail admin-only
This was a compromise we made toward the end of Kilo. The subnetpools resource
was implemented as a core resource, but for purposes of Horizon interaction and
a lack of another method for evolving the Neutron API we deliberately added a
shim extension. I believe this was done with a couple
I was looking over the admin guide
http://docs.openstack.org/admin-guide-cloud/networking_config-agents.html#configure-l3-agent
and noticed this:
If you reboot a node that runs the L3 agent, you must run the
neutron-ovs-cleanup command before the neutron-l3-agent service starts.
Taking a look
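For reference, one way to enforce that ordering on a systemd-based distribution is a unit dependency. The drop-in below is only a sketch: the unit names (neutron-ovs-cleanup.service, neutron-l3-agent.service) and the drop-in path are assumptions and vary by distribution, so verify them locally before use.

```ini
# Hypothetical drop-in: /etc/systemd/system/neutron-l3-agent.service.d/ovs-cleanup.conf
# Ensures neutron-ovs-cleanup runs to completion before the L3 agent starts.
# Unit names are distribution-specific; confirm with `systemctl list-units`.
[Unit]
Requires=neutron-ovs-cleanup.service
After=neutron-ovs-cleanup.service
```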
John Belamaric made a good point that the closest thing that we have to
representing an L3 domain right now is a subnet pool.
This is actually a really good point. If you take the example of a L3 network
that spans segments, you could put something like a /16 into a subnet pool.
That /16
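As a rough illustration of the idea (using only the Python standard library, not the actual Neutron subnet pool API), a /16 placed in a pool can hand out smaller per-network subnets like this; the prefix values are made up for the example:

```python
# Sketch: carving subnet allocations out of a /16 "pool" with the stdlib
# ipaddress module. This only illustrates the addressing math, not how
# Neutron's subnet pool allocation is implemented.
import ipaddress

pool = ipaddress.ip_network("10.10.0.0/16")  # example pool prefix

# The first three /24 allocations a tenant might draw from this pool.
allocations = [str(s) for s in list(pool.subnets(new_prefix=24))[:3]]
print(allocations)  # ['10.10.0.0/24', '10.10.1.0/24', '10.10.2.0/24']
```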
I see a fix for https://bugs.launchpad.net/neutron/+bug/1244589 merged during
Kilo. I'm wondering if we think we have identified a root cause and have
merged an appropriate long-term fix, or if https://review.openstack.org/148718
was merged just so there's at least a fix available while we
DHCP lease failures due to packets having no checksum
On Jun 1, 2015, at 7:26 PM, Tidwell, Ryan ryan.tidw...@hp.com wrote:
The Nova analog in Neutron is specifically what I was interested in. Makes
perfect sense. Thanks!
-Ryan
-----Original Message-----
From: Russell Bryant [mailto:rbry...@redhat.com]
Sent: Friday, May 15, 2015 11:54 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron]
are more than welcoming everyone to help this effort.
Gal.
[1] https://review.openstack.org/#/c/151247/
[2] https://etherpad.openstack.org/p/YVR-neutron-sg-fwaas-future-direction
On Fri, May 15, 2015 at 2:21 AM, Tidwell, Ryan
<ryan.tidw...@hp.com> wrote:
I was batting around some ideas regarding IPAM functionality, and it occurred
to me that rate-limiting at an API level might come in handy and as an example
might help provide one level of defense against DoS for an external IPAM
provider that Neutron might make calls out to. I'm simply using
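To make the rate-limiting idea concrete, here is a minimal token-bucket sketch. This is purely illustrative: the class name is invented, and Neutron does not implement rate limiting this way.

```python
# Minimal token-bucket rate limiter (illustrative sketch, not Neutron code).
# Each request consumes one token; tokens refill at a steady rate up to a
# burst capacity, so a sudden flood of calls gets throttled.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2)
results = [bucket.allow() for _ in range(4)]  # a rapid burst of 4 calls
print(results)  # the burst of 2 is allowed, the rest are throttled
```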
Erik,
I’m looking forward to seeing this blueprint re-proposed and am able to pitch
in to help get this in to Liberty. Let me know how I can help.
-Ryan
From: Erik Moe [mailto:erik@ericsson.com]
Sent: Friday, May 08, 2015 6:30 AM
To: OpenStack Development Mailing List (not for usage questions)
I will quickly spin another patch set with the shim extension. Hopefully this
will be all it takes to get subnet allocation merged.
-Ryan
-----Original Message-----
From: Akihiro Motoki [mailto:amot...@gmail.com]
Sent: Monday, March 30, 2015 2:00 PM
To: OpenStack Development Mailing List (not
Great suggestion Kevin. Passing 0.0.0.1 as gateway_ip_template (or whatever
you call it) is essentially passing an address index, so when you OR 0.0.0.1
with the CIDR you get your gateway set as the first usable IP in the subnet.
The intent of the user is to allocate the first usable IP
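The OR trick described above can be sketched with the Python standard library's ipaddress module. The function name and example values below are illustrative only, not an actual Neutron API:

```python
# Sketch of the 0.0.0.1 "address index" idea: OR the index with the network
# address of the CIDR to obtain the gateway (the first usable IP for index 1).
import ipaddress

def gateway_from_template(cidr: str, template: str) -> str:
    net = ipaddress.ip_network(cidr)
    index = int(ipaddress.ip_address(template))  # "0.0.0.1" -> 1
    return str(ipaddress.ip_address(int(net.network_address) | index))

print(gateway_from_template("10.5.0.0/24", "0.0.0.1"))  # 10.5.0.1
```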
I agree with dropping support for the wildcards. It can always be revisited
later. I agree that being locked into backward compatibility with a design that
we really haven't thought through is a good thing to avoid. Most importantly
(to me anyway), this will help in getting subnet
Thanks Salvatore. Here are my thoughts, hopefully there’s some merit to them:
With implicit allocations, the thinking is that a subnet is created in a
backward-compatible way, with no subnetpool_id, and the subnets APIs
continue to work as they always have.
In the case of a
Keshava,
This sounds like you're asking how you might do service function chaining with
Neutron. Is that a fair way to characterize your thoughts? I think the concept
of service chain provisioning in Neutron is worth some discussion, keeping in
mind Neutron is not a fabric controller.
-Ryan