Public bug reported:
Recently, due to this bug:
https://bugs.launchpad.net/neutron/+bug/1821912
We noticed that sometimes the guest OS is not fully UP, but the test case is
already trying to log in to it. A simple idea is to ping it first, then try to
log in. So we hope to find a way for tempest to verify the
Public bug reported:
[l3][scale issue] unrestricted hosting of routers on network nodes increases
service operating pressure
Related problem was reported here:
https://bugs.launchpad.net/neutron/+bug/1828494
These issues have the same background: unlimited router creation for the entire cluster,
"""
Every
Public bug reported:
Recently we met some scale issues with the L3 agent. According to what I'm
told, most cloud service providers do not charge for the neutron
virtual router. This can become a headache for the operators. Every
tenant may create free routers that do nothing. But neutron will
-subnet-${1}
}
create_net_struct $1
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong (dragon889)
Status: New
** Changed in: neutron
Assignee: (unassigned) => LIU Yulong (dragon889)
--
You received this bug notification because you are a member of Ya
neutron router-delete $router_id
neutron subnet-delete scale-test-subnet-${1}
neutron net-delete scale-test-net-${1}
}
create_clean_net_struct $1
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong (dragon889)
Status: New
** Changed in: neutron
Assignee: (unassigned) =>
, the floating IP QoS rules may not be set back on
the devices because the bandwidth value does not change.
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong (dragon889)
Status: In Progress
** Changed in: neutron
Assignee: (unassigned) => LIU Yulong (dragon
/l3_dvrscheduler_db.py#L401
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong (dragon889)
Status: New
** Changed in: neutron
Assignee: (unassigned) => LIU Yulong (dragon889)
r cf57179c-0005-4f43-9c2c-53cd46fc2328
2019-04-17 16:40:12.184 25866 INFO neutron.agent.l3.agent [None
req-669f303b-7860-4c97-b6b9-f31fb56b84e7 - 001a18cadd4b401e9fdeab6c411d9816 - -
-] Finished a router update for bcd0539a-6230-4de3-a8be-aafdf5c98132
** Affects: neutron
Importance: Undecide
Public bug reported:
[port forwarding] should not process port forwarding if the snat node only
runs DHCP
Assuming you have 3 network nodes, the agent modes are all `dvr_snat`.
One `dvr_ha` router1 is scheduled to node1 and node2. The dhcp namespace
(connected to the router1) is scheduled to node3.
Public bug reported:
Env: stable/queens
CentOS 7.5
kernel-3.10.0-862.11.6.el7.x86_64
There are many bottleneck locks in the agent extensions. For instance, the l3 agent
extensions now have the locks 'qos-fip' [1], 'qos-gateway-ip' [2], 'port-forwarding'
[3] and 'log-port' [4]. For the L2 agent, it is
For the tap device, the local vlan tag is stripped before packets are sent to
it, so you cannot see it.
And in your last comment #6, you can see that none of your packets carry a vlan
tag.
Here are some example packets which have vlan id 205 captured by tcpdump:
14:53:54.915607 fe:ea:c8:20:fe:d0 >
site-packages/neutron/agent/linux/daemon.py",
line 139, in write
ERROR neutron os.write(self.fd, b"%d" % pid)
ERROR neutron TypeError: unsupported operand type(s) for %: 'bytes' and 'int'
ERROR neutron
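For reference, the trace above comes from formatting a pid into bytes with `%`, which bytes only supports on Python >= 3.5 (PEP 461). A minimal, version-independent sketch of the fix; the pipe here just stands in for the daemon's real pid-file descriptor:

```python
import os

def write_pid(fd, pid):
    # format as str first, then encode: valid on every Python 3.x,
    # unlike b"%d" % pid, which raises TypeError before Python 3.5
    os.write(fd, str(pid).encode('ascii'))

# demonstrate with a pipe instead of the daemon's pid file
r, w = os.pipe()
write_pid(w, 25866)
os.close(w)
assert os.read(r, 16) == b'25866'
os.close(r)
```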
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong
Public bug reported:
Example:
http://logs.openstack.org/42/644842/2/check/neutron-tempest-plugin-api/1c82227/testr_results.html.gz
log search:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22observed_range%5B'project_id'%5D)%5C%22
Exception:
Neutron just merged this BP recently:
https://review.openstack.org/#/q/status:merged+branch:master+topic:bp/network-segment-range-management
Please follow the new required params to refactor the code; maybe these
lines in tricircle itself:
According to the trace log here, it may have been fixed by this:
https://github.com/openstack/networking-sfc/commit/eb72322943c111df7eaaa472857383ac2b2f2012#diff-dedc1f04413287f9620970b90ea536e3
** Changed in: neutron
Status: New => Invalid
*** This bug is a duplicate of bug 1799135 ***
https://bugs.launchpad.net/bugs/1799135
** This bug has been marked a duplicate of bug 1799135
[l3][port_forwarding] update floating IP (has binding port_forwarding) with
empty {} input will lose router_id
Public bug reported:
Sometimes we cannot say a cloud deployment has unlimited capacity, especially
for a small cluster. And sometimes cluster expansion takes time; you cannot
adjust every user's or project's quota at once. Then users begin to complain:
why can I not create a resource when I still have
Public bug reported:
For now, L3 IPs all have bandwidth QoS functionality. Floating IPs and
gateway IPs have the same TC rules. And in the current neutron architecture,
one specific IP cannot be set on two hosts. That is to say, wherever the IP is
working, we can get the TC statistic
Public bug reported:
Problem Description
===
How do you troubleshoot when one VM loses its connection? How do you find out
why the floating IP is not reachable?
There is no easy way; cloud operators need to dump the flows or iptables rules
for it, and then find out which parts
According to Rodolfo's explanation, we can close this bug.
** Changed in: neutron
Status: Triaged => Fix Released
https://bugs.launchpad.net/bugs/1792493
Title:
Public bug reported:
For instance:
This neutron server was restarted 6 seconds earlier than the l3-agent, with an RPC
version upgrade:
http://logs.openstack.org/71/633871/5/check/neutron-grenade-dvr-multinode/3fcd71c/logs/screen-q-svc.txt.gz#_Feb_10_06_32_10_279268
Public bug reported:
There is a race condition between nova-compute booting an instance and the l3-agent
processing the DVR (local) router on the compute node.
This issue can be seen when a large number of instances are booted on the same
host and the instances belong to different DVR routers.
So the l3-agent will
Public bug reported:
The ovs-agent will lose some tunnels to other nodes, for instance to the DHCP node
or L3 node; these lost tunnels can sometimes cause a VM to fail to boot or take
the dataplane down.
When the subnet or security group port quantity reaches 2000+, this issue can be
seen with high probability.
Public bug reported:
The ovs-agent will lose some flows during restart, for instance flows to DHCP or
L3, and tunnel flows. These lost flows can sometimes cause a VM to fail to boot or
take the dataplane down.
When the subnet or security group port quantity reaches 2000+, this issue can be
seen with high
Public bug reported:
The ovs-agent's stale-flow cleanup action dumps all the bridge flows first. When
the subnet or security group port quantity reaches 2000+, this becomes really
time-consuming.
And sometimes this dump action can also fail; then the ovs-agent will
dump again. And things
Public bug reported:
When the subnet or security group port quantity reaches 2000+, there are many stale
flows.
The basic exception procedure:
(1) ovs-agent dumps the flows
(2) ovs-agent deletes some flows
(3) ovs-agent installs new flows (with new cookies)
(4) any exception raised in (2) or (3), such as
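The cookies in step (3) are what make cleanup possible when step (4) strikes; a hedged sketch (not neutron's actual code) of how a fresh cookie distinguishes flows reinstalled this restart from stale ones:

```python
# Flows reinstalled during this restart carry new_cookie; anything
# still carrying an old cookie was left over from before the restart
# and can be deleted once reinstallation has finished.
def find_stale_flows(flows, new_cookie):
    return [f for f in flows if f['cookie'] != new_cookie]

OLD, NEW = 0xAAA, 0xBBB
flows = [
    {'cookie': OLD, 'match': 'dl_vlan=1'},  # stale: never reinstalled
    {'cookie': NEW, 'match': 'dl_vlan=1'},  # reinstalled this restart
    {'cookie': NEW, 'match': 'dl_vlan=2'},
]
stale = find_stale_flows(flows, NEW)
assert stale == [{'cookie': OLD, 'match': 'dl_vlan=1'}]
```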
Public bug reported:
When the subnet or security group port quantity reaches 2000+, it is really
too hard to troubleshoot when one VM loses its connection. The flow
tables are almost unreadable (reaching 30k+ flows). We have no way to check
the ovs-agent flow status. And restarting the L2 agent does
Public bug reported:
When the subnet or security group port quantity reaches 2000+, the ovs-agent's
connection to ovs-vswitchd may get lost, dropped, or time out during restart.
This is a subproblem of bug #1813703, for more information, please see the
summary:
Public bug reported:
When the port quantity under one subnet or security group reaches 2000+, the
ovs-agent will always hit RPC timeouts during restart.
This is a subproblem of bug #1813703, for more information, please see the
summary:
https://bugs.launchpad.net/neutron/+bug/1813703
** Affects:
Public bug reported:
When the subnet or security group port quantity reaches 2000+, the ovs-agent fails
to restart and does fullsync infinitely.
This is a subproblem of bug #1813703, for more information, please see the
summary:
https://bugs.launchpad.net/neutron/+bug/1813703
** Affects: neutron
Public bug reported:
When the subnet or security group port quantity reaches 2000+, the ovs-agent will
take 15-40+ minutes to restart.
During this restart time, the agent will not process any port, i.e. a VM booting on
this host will not get its L2 flows established.
This is a subproblem of bug
Public bug reported:
[L2] [summary] ovs-agent issues at large scale
Recently we tested the ovs-agent with the openvswitch flow-based
security group, and we met some issues at large scale. This bug will
give us a centralized location to track the following problems.
Problems:
(1) RPC
Public bug reported:
examples:
http://logs.openstack.org/48/631448/2/check/neutron-functional/7d739fd/logs/testr_results.html.gz
http://logs.openstack.org/85/627285/5/check/neutron-functional/1fb360d/logs/testr_results.html.gz
** Affects: neutron
Importance: Undecided
Status: New
Public bug reported:
Log search:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22ERROR%20neutron.agent.linux.utils%20%5B-%5D%20Exit%20code%3A%201%3B%20Stdin%3A%20%3B%20Stdout%3A%20PING%5C%22
Sometimes it can happen 2600+ times per 12 hours.
** Affects: neutron
then you will get that exception in neutron server log.
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong (dragon889)
Status: New
** Changed in: neutron
Assignee: (unassigned) => LIU Yulong (dragon889)
or neutron server are all up, the ovs-agent will require a manual restart
again to recover the traffic.
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong (dragon889)
Status: New
** Description changed:
ENV:
neutron: stable/queens
tenant network type: vlan
Public bug reported:
ENV: devstack master
Floating IP port_forwardings with different protocols cannot have the same
internal or external port number to the same vm_port. But we can have different
application servers, for instance a TCP server and a UDP server, listening on
the same port at the same
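A hedged sketch of the validation change this implies: including the protocol in the uniqueness key lets TCP and UDP forwardings share a port number (function and field names here are illustrative, not neutron's):

```python
def conflicts(existing, new):
    # the uniqueness key must include the protocol, otherwise TCP:80
    # and UDP:80 to the same vm_port would be wrongly rejected
    key = lambda pf: (pf['protocol'], pf['external_port'])
    return any(key(pf) == key(new) for pf in existing)

existing = [{'protocol': 'tcp', 'external_port': 80}]
assert not conflicts(existing, {'protocol': 'udp', 'external_port': 80})
assert conflicts(existing, {'protocol': 'tcp', 'external_port': 80})
```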
Public bug reported:
ENV: devstack master
Floating IP port forwarding internal or external port numbers should not
allow 0; otherwise you will get a ValueError exception in the neutron
server.
Steps to reproduce:
1. create a router with a connected private subnet and public gateway.
2. create a VM to
Public bug reported:
ENV: devstack master
steps to reproduce:
1. create a router
2. add the router's public gateway
3. add router interfaces to subnet1, subnet2, subnet3
4. create a VM in subnet1
5. create a floating IP with port forwarding to the VM port from subnet1
Then you will not be able to remove
Public bug reported:
Creating a port_forwarding to a port which already has a bound floating IP
should not be allowed for dvr routers.
ENV: devstack master
steps to reproduce:
1. create a dvr router with a connected private subnet and public gateway.
2. create a VM in the private subnet
3. bind
Public bug reported:
ENV: devstack master
steps to reproduce:
1. create a dvr router with a connected private subnet and public gateway.
2. create a VM in the private subnet
3. create floating IP A with port forwarding to the VM port
4. bind floating IP B to the VM port
Then floating IP A with port
Public bug reported:
ENV: devstack master
Step to reproduce:
1. create floating IP
2. create port forwarding for that floating IP
3. update floating IP with empty dict:
curl -g -i -X PUT
http://controller:9696/v2.0/floatingips/2bb4cc5d-7fae-4c1b-9482-ead60d67abea \
-H "User-Agent:
Public bug reported:
It's really simple for the neutron server to query the network id when the user
supplies only the subnet_id during port creation.
But now when you create a port, you need to input the network id every time.
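A minimal sketch of the suggested behavior, assuming a subnet lookup table standing in for neutron's database (the function and names are illustrative, not neutron's API):

```python
subnets = {'subnet-1': {'network_id': 'net-1'}}  # stand-in for the subnet DB

def resolve_network_id(port_body):
    # derive network_id from subnet_id so the caller can omit it
    if 'network_id' not in port_body and 'subnet_id' in port_body:
        port_body['network_id'] = subnets[port_body['subnet_id']]['network_id']
    return port_body

assert resolve_network_id({'subnet_id': 'subnet-1'})['network_id'] == 'net-1'
```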
** Affects: neutron
Importance: Undecided
Status: New
Public bug reported:
Since the port_forwarding l3 agent extension handles its own floating
IPs, I think we can add these IPs to the L3 (fip_qos) QoS extension
procedure, so that all floating IPs (L3 IPs) can be limited under the QoS
policy.
** Affects: neutron
Importance: Undecided
Status:
Public bug reported:
Some L3 ports can now have their IP address modified directly, but some
device_owner types, for instance network:router_centralized_snat,
should not be allowed to change the IP address; otherwise it makes things
really complicated.
Steps to reproduce: update a dvr router
| updated_at | 2018-09-30T10:03:30Z |
+------------+----------------------+
Both (1) and (2) have no effect in the l3 agent qrouter-namespace or
(dvr snat-names
Public bug reported:
ENV:
devstack multinode master
Problem:
If a dvr ha router has a state change event, the l3 agent will get an unnecessary
router_update message.
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong (dragon889)
Status: New
** Changed in: neutron
Public bug reported:
Recent bug:
https://bugs.launchpad.net/neutron/+bug/1752903
and the fix https://review.openstack.org/#/c/599494/
try to create floating IPs that include only the IPv4 version.
For now, if the public network has both an IPv4 and an IPv6 subnet,
the floating IP (port) may have both
Public bug reported:
ENV:
platform: centos7.5
kernel: 3.10.0-862.11.6.el7.x86_64
neutron: master
openstack: devstack multi-node master
Problem:
The centralized fip will show up in the snat namespace when the host comes back
online.
Reproduce:
(1) create dvr + ha router, and connected to the private
Public bug reported:
ENV:
master
devstack multinode install:
1 controller node
2 compute nodes -> dvr_no_external (compute1, compute2)
2 network nodes -> dvr_snat (network1, network2)
Problem:
For an L3 DVR HA router, when the network node which hosts the `master` router
goes down and comes back up.
The
Public bug reported:
ENV:
master
devstack multinode install:
1 controller node
2 compute nodes -> dvr_no_external (compute1, compute2)
2 network nodes -> dvr_snat (network1, network2)
Problem:
For an L3 DVR HA router, the centralized floating IP nat rules are not installed
in
Public bug reported:
Suppose the tenant network type is vlan, and we have a neutron network whose
vlan id is 1000 (CIDR: 192.168.111.0/24, gateway IP: 192.168.111.1).
We also have a physical switch (SWITCH-1), which connects the compute nodes
NODE-1 and NODE-2.
For these compute nodes, we set the l3
This disappeared from the test env; please restore this if anyone meets it
again.
** Changed in: neutron
Status: Incomplete => Invalid
Is it time to close this? Open a new bug for this if it's still an
issue.
** Changed in: neutron
Status: Fix Committed => Fix Released
Public bug reported:
ENV:
neutron stable/queens
with backport patches:
Fix no packet log data when debug is set False in configuration
https://review.openstack.org/#/c/591545/
Fix lost connection when create security group log
https://review.openstack.org/#/c/593534/
Fix no ACCEPT event can get
Duplicate of the bug:
https://bugs.launchpad.net/neutron/+bug/1777598
Please backport this:
https://review.openstack.org/#/c/576418/
** Changed in: neutron
Status: New => Invalid
** Changed in: neutron
Status: In Progress => Invalid
** Changed in: neutron
Assignee: LIU Yulong (dragon889) => (unassigned)
https://bugs.launchpad.ne
Documentation is right.
Your bug description is incorrect:
"""
### Scenario ###
Connect to your undercloud host, source overcloudrc and execute the following
commands:
1 - Create policy - "openstack network qos policy create bw-limiter"
2 - Create rule - "openstack network qos rule create
Public bug reported:
ENV:
neutron stable/queens
python-neutron-vpnaas-12.0.0-1.el7.noarch
openstack-neutron-vpnaas-12.0.0-1.el7.noarch
centos 7
3.10.0-862.3.2.el7.x86_64
Firstly, there is no config guide for queens VPNaaS.
So we use this doc:
``3 - Enable QoS to my Floating IP - "openstack floating ip set --qos-policy
bw-limiter 10.0.0.220 "``
After this action, you can show the FLOATING IP qos policy by:
openstack floating ip show
something like this:
openstack floating ip show 172.16.100.103
Public bug reported:
ENV:
neutron: stable/queens 12.0.2
$ uname -r
3.10.0-862.3.2.el7.x86_64
$ sudo ovs-appctl vlog/list
          console  syslog  file
          -------  ------  ----
backtrace   OFF     DBG    INFO
...
ALL-app     OFF     DBG
Public bug reported:
ENV:
Neutron stable/queens (12.0.1)
CentOS 7 (3.10.0-514.26.2.el7.x86_64)
Ceph v10.2.9 Jewel
How to reproduce:
Concurrently create 256 VMs in a single network which has 2 dhcp agents.
Exception:
nova-compute side:
2018-06-25 17:56:09.394 43886 DEBUG nova.compute.manager
Public bug reported:
Too many DBDeadlockErrors and IP address collisions during port creation
ENV:
Neutron stable/queens (12.0.1)
CentOS 7 (3.10.0-514.26.2.el7.x86_64)
This is an edge scenario test after we met bug:
https://bugs.launchpad.net/neutron/+bug/1777965
We have 3 neutron-server
Public bug reported:
ENV:
Neutron stable/queens (12.0.1)
CentOS 7 (3.10.0-514.26.2.el7.x86_64)
Ceph v10.2.9 Jewel
Exception:
2018-06-20 14:21:52.070 140217 ERROR oslo_middleware.catch_errors
DBDeadlock: (pymysql.err.InternalError) (1205, u'Lock wait timeout exceeded;
try restarting
1083 sender
[ 4] 0.00-10.00 sec 13.0 MBytes 10.9 Mbits/sec receiver
iperf Done.
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong (dragon889)
Status: New
** Description changed:
ENV:
stable/queens (12.0.1)
centos 7
Fix Released in neutron-12.0.1:
https://github.com/openstack/neutron/commits/12.0.1
** Changed in: neutron
Status: Fix Committed => Fix Released
** Changed in: neutron
Status: Fix Committed => Fix Released
https://bugs.launchpad.net/bugs/1596611
Title:
[RFE] Create L3 floating IPs with qos (rate limit)
==
[RFE] Create L3 IPs with qos (rate limit)
https://bugs.launchpad.net/neutron/+bug/1596611
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong (dragon889)
Status: New
** Changed in: neutron
Assignee: (unassigned) => LIU Yulong (dragon889)
Public bug reported:
Steps to reproduce:
1. create centos 7 vm with CentOS-7-x86_64-DVD-1708.iso
2. do the neutron test docs [1] steps:
- git clone https://git.openstack.org/openstack-dev/devstack ../devstack
- ./tools/configure_for_func_testing.sh ../devstack -i
- tox -e dsvm-functional
3.
Public bug reported:
This bug was reported for the mitaka/liberty release.
There is already a fixed bug here:
https://bugs.launchpad.net/neutron/+bug/1641879
But the original patch which caused this bug was backported to mitaka and
liberty.
*** This bug is a duplicate of bug 1642918 ***
https://bugs.launchpad.net/bugs/1642918
** This bug has been marked a duplicate of bug 1642918
floating-ip nova notification is broken
Mitaka needs this: https://review.openstack.org/#/c/382210/
** Changed in: neutron
Status: New => Incomplete
** Changed in: neutron
Status: Incomplete => Invalid
Thanks John, now this bug is only about the one creation-failure log, the thing
I've described in comment #2.
** Description changed:
ENV: stable/mitaka,VXLAN
Neutron API: two neutron-servers behind a HA proxy VIP.
- Exception log:
- [1] http://paste.openstack.org/show/585669/
- [2]
Public bug reported:
We are now facing a nova operation issue about setting a different ceph rbd pool
for each corresponding nova compute node in one availability zone. For instance:
(1) compute-node-1 in az1 with images_rbd_pool=pool1
(2) compute-node-2 in az1 with images_rbd_pool=pool2
This
** Changed in: neutron
Status: Invalid => New
https://bugs.launchpad.net/bugs/1633306
Title:
Partial HA network causing HA router creation failure (race condition)
Public bug reported:
ENV: stable/mitaka,VXLAN
Neutron API: two neutron-servers behind a HA proxy VIP.
Exception log:
[1] http://paste.openstack.org/show/585669/
[2] http://paste.openstack.org/show/585670/
Log [1] shows that the subnet of the HA network is concurrently deleted
while a new HA router
** Changed in: neutron
Status: Incomplete => Invalid
https://bugs.launchpad.net/bugs/1625106
Title:
gate failure: check_public_network_connectivity
Status in
Public bug reported:
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22in%20check_public_network_connectivity%5C%22
** Affects: neutron
Importance: Undecided
Status: New
** Tags: gate-failure
** Tags added: gate-failure
** Changed in: neutron
Status: Invalid => In Progress
https://bugs.launchpad.net/bugs/1609217
Title:
DVR: dvr router should not exist in not-binded network node
** Changed in: horizon
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1616827
Title:
Network topology fail if router service is
Yulong (dragon889)
Status: New
** Changed in: nova
Assignee: (unassigned) => LIU Yulong (dragon889)
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1617
Public bug reported:
ENV:
stable/mitaka
When a DVR floating IP is associated to a port, the `floatingip_update_callback`
will immediately start a `start_route_advertisements` to notify the DR agent of
the FIP bgp route.
But this bgp route is not right: its next_hop is set to the snat gateway IP address.
And
** Changed in: neutron/kilo
Status: New => Fix Released
https://bugs.launchpad.net/bugs/1514728
Title:
insufficient service name for external process
Status in
** Changed in: neutron/kilo
Status: New => Fix Released
https://bugs.launchpad.net/bugs/1501686
Title:
Incorrect exception handling in DB code involving rollbacked
** Changed in: neutron/kilo
Status: New => Fix Released
https://bugs.launchpad.net/bugs/1554696
Title:
Neutron server log filled with "device requested by agent
Public bug reported:
ENV: stable/mitaka
Reproduce:
1. associate a floating IP to a VM
2. dissociate the floating IP from the VM
Then you will find the following trace in the neutron-bgp-dragent log:
http://paste.openstack.org/show/561084/
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong
Public bug reported:
The ha state change BatchNotifier uses the following calculated
interval.
def _calculate_batch_duration(self):
# Slave becomes the master after not hearing from it 3 times
detection_time = self.conf.ha_vrrp_advert_int * 3
# Keepalived takes a
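To make the truncated calculation above concrete: with neutron's default `ha_vrrp_advert_int` of 2 seconds, failure detection alone takes 6 seconds (the truncated method goes on to add keepalived's configuration time, which is not reproduced here):

```python
ha_vrrp_advert_int = 2  # neutron's default, in seconds

# the backup promotes itself after missing 3 advertisements
detection_time = ha_vrrp_advert_int * 3
assert detection_time == 6
```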
** Changed in: horizon
Status: In Progress => Invalid
https://bugs.launchpad.net/bugs/1327061
Title:
flavor extras cpu_period out of range
Public bug reported:
API: floating IP updating with `{}` change its association
If an empty `{}` data dict is passed to the floating IP update API, the associated
floating IP will be dissociated from its port.
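A hedged sketch (illustrative names, not neutron's code) of why an empty body dissociates the port: a handler that reads the key with `.get()` cannot tell "key absent" from "key explicitly null":

```python
def apply_update(floatingip, body):
    # buggy pattern: .get() collapses "key absent" and "port_id: null"
    if body.get('port_id') is None:
        floatingip['port_id'] = None
    return floatingip

fip = {'id': 'fip-1', 'port_id': 'port-1'}
assert apply_update(dict(fip), {})['port_id'] is None  # {} dissociates too

def apply_update_fixed(floatingip, body):
    # only touch the association when the key is actually present
    if 'port_id' in body:
        floatingip['port_id'] = body['port_id']
    return floatingip

assert apply_update_fixed(dict(fip), {})['port_id'] == 'port-1'
assert apply_update_fixed(dict(fip), {'port_id': None})['port_id'] is None
```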
** Affects: neutron
Importance: Undecided
Status: New
/neutron/blob/master/neutron/agent/l3/router_info.py#L332
DVR/DVR_SNAT_HA router does not have this issue.
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong (dragon889)
Status: New
** Summary changed:
- HA: ha router gateway secondary IPs was removed Unexpectedly
+ HA
** Changed in: neutron
Status: In Progress => Fix Released
** Tags added: mitaka-backport-potential
https://bugs.launchpad.net/bugs/1580899
Title:
Overlapped new
Public bug reported:
The neutron DR agent only advertises floating IP routes as destination (floating
IP/32) - next_hop (gateway IP).
Such routes may cause the link to be unreachable in a layer-3-isolated multi-AZ
network environment.
For instance:
For instances:
ISP1 - DC1
ISP2 - DC2
ISP3 - DC3
For the floating IPs,
Public bug reported:
ENV:
stable/mitaka
How to reproduce:
1. create a HA router
neutron router-create --ha True --distributed False test1
2. set the HA router gateway with more than one IP
neutron router-gateway-set --fixed-ip ip_address=172.16.5.110 --fixed-ip
ip_address=172.16.5.111 test1
Public bug reported:
ENV:
stable/mitaka
hosts:
compute1 (nova-compute, l3-agent (dvr), metadata-agent)
compute2 (nova-compute, l3-agent (dvr), metadata-agent)
network1 (l3-agent (dvr_snat), metadata-agent, dhcp-agent)
network2 (l3-agent (dvr_snat), metadata-agent, dhcp-agent)
How to reproduce?
Public bug reported:
ENV:
neutron-8.1.2-1 (stable/mitaka)
When querying a bgpspeaker's routes, the DVR fip host routes query will return
routes including the central fip routes.
This gives the central fip more than one next_hop route.
For instance:
+-+--+
|
Public bug reported:
Now a floating IP can be disassociated via two different API data dicts:
{'port_id': null} or a dict without the `port_id` key.
And a floating IP cannot be updated with its original port_id; you may get a
bad request exception.
This will cause some known issues:
1. Updating
: 'NoneType' object has no attribute 'config' (HA router
deleting procedure)
http://paste.openstack.org/show/523757/
infinite loop trace:
http://paste.openstack.org/show/528407/
** Affects: neutron
Importance: Undecided
Assignee: LIU Yulong (dragon889)
Status: In Progress
** Changed in: neutron/kilo
Status: New => Fix Released
https://bugs.launchpad.net/bugs/1533454
Title:
L3 agent unable to update HA router state after race between
This bug has appeared again:
http://paste.openstack.org/show/523757/
** Changed in: neutron
Status: Fix Released => New
https://bugs.launchpad.net/bugs/1533441
Title:
HA
Please see the alternative fix:
https://review.openstack.org/#/c/324380/
** Changed in: neutron
Status: In Progress => Fix Released