Public bug reported:
Attempting to update an IPv6 subnet within Horizon that was created via
CLI, and whose ipv6-address-mode is set (dhcpv6-stateful vs None), results
in an error:
list index out of range
Creating the subnet via Horizon with the exact same parameters behaves
properly.
--
Public bug reported:
Recently switched from using DHCP Agent to built-in OVN DHCP for
baremetal deployments.
Version: Zed
OS: 22.04 LTS
OVS: 3.0.1
OVN: 22.09
When a baremetal node is provisioned, during PXE I am getting a lease
from an OVN controller but nothing further (i.e. no TFTP). Here is
Public bug reported:
Issue:
The Neutron DHCP agent bootstraps the DHCP leases file for a network
using all associated subnets[1]. In a multi-segment environment,
however, a DHCP agent can only service a single segment/subnet of a
given network.
The DHCP namespace, then, is configured with an
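Although the report is cut off here, the per-segment constraint it describes can be sketched as follows. This is an illustration only; the `Subnet` class and `segment_id` field are stand-ins, not Neutron's real data model:

```python
# Hypothetical sketch: restrict lease bootstrapping to the subnets of the
# one segment this DHCP agent can actually service.
from dataclasses import dataclass

@dataclass
class Subnet:
    cidr: str
    segment_id: str

def subnets_for_segment(subnets, local_segment_id):
    """Return only the subnets belonging to the agent's local segment."""
    return [s for s in subnets if s.segment_id == local_segment_id]

subnets = [
    Subnet("10.0.1.0/24", "segment-a"),
    Subnet("10.0.2.0/24", "segment-b"),
]
print([s.cidr for s in subnets_for_segment(subnets, "segment-a")])
# -> ['10.0.1.0/24']
```

Filtering this way would keep subnets from unreachable segments out of the leases file the agent bootstraps.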
Public bug reported:
This bug tracker is for errors with the documentation; use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:
- [x] This doc is inaccurate in this way: service_plugin path is incorrect
since rolling OVN into Neutron
-
Public bug reported:
I came across an issue today for a user who was experiencing problems
connecting to metadata at 169.254.169.254. For a long time, cloud-init
has had a fallback mechanism that allowed it to contact the metadata
service at http:///latest/meta-data if
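The fallback behavior described above amounts to probing a list of candidate endpoints in order. The sketch below is an assumption for illustration; the candidate URLs and the probe callback are hypothetical, not cloud-init's actual implementation:

```python
# Illustrative fallback probe order; URLs are examples, nothing is contacted.
CANDIDATE_ENDPOINTS = [
    "http://169.254.169.254/latest/meta-data/",    # link-local metadata IP
    "http://instance-data:8773/latest/meta-data/", # hypothetical DNS fallback
]

def first_reachable(endpoints, is_reachable):
    """Return the first endpoint for which the probe succeeds, else None."""
    for url in endpoints:
        if is_reachable(url):
            return url
    return None

# Stub probe: pretend only the link-local address answers.
chosen = first_reachable(CANDIDATE_ENDPOINTS,
                         lambda u: u.startswith("http://169.254.169.254"))
print(chosen)  # -> http://169.254.169.254/latest/meta-data/
```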
Public bug reported:
We recently upgraded an environment from Newton -> Rocky, and
experienced a dramatic increase in the amount of time it takes to return
a full security group list. For ~8,000 security groups, it takes nearly
75 seconds. This was not observed in Newton.
I was able to replicate
Public bug reported:
Release: OpenStack Stein
Driver: LinuxBridge
Using Stein w/ the LinuxBridge mech driver/agent, we have found that
traffic is being flooded across bridges. Using tcpdump inside an
instance, you can see unicast traffic for other instances.
We have confirmed the macs table
** Changed in: openstack-ansible
Status: Confirmed => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1643991
Title:
504 Gateway Timeout when creating a port
Status in
Marking invalid for OSA. If this is still an issue, please submit for
Neutron project.
** Changed in: openstack-ansible
Status: New => Invalid
--
Public bug reported:
Version: Pike
OpenStack Client: 3.12.0
When testing Subnet Pool functionality, I found that the behavior
between the openstack and neutron clients is different.
Subnet pool:
root@controller01:~# openstack subnet pool show MySubnetPool
Public bug reported:
- [x] This doc is inaccurate in this way: The documentation states that the
'openstack-dashboard-ubuntu-theme' package can be removed to revert to the
default Horizon theme. However, that package did not appear to be installed on
my system via the 'openstack-dashboard'
Public bug reported:
- [x] This doc is inaccurate in this way: The guide is missing the steps
needed to create the Glance database. It mentions using the mysql
client, but does not include the commands to create glance DB and user.
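For reference, the commands the guide omits typically look like the following, run from the mysql client; `GLANCE_DBPASS` is a placeholder to replace with a real password:

```sql
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'GLANCE_DBPASS';
```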
---
Release: 15.0.1.dev1 on
Public bug reported:
Environment: OpenStack Newton
Driver: ML2 w/ OVS
Firewall: openvswitch
In this environment, we have observed OVS flooding network traffic
across all ports in a given VLAN on the integration bridge due to the
lack of a FDB entry for the destination MAC address. Across the
Public bug reported:
Environment: OpenStack Newton
Driver: ML2 w/ OVS
Firewall: openvswitch
Clients using an OpenStack cloud based on the Newton release are facing
network issues when updating security groups/rules. We are able to
replicate the issue by modifying security group rules in an
Public bug reported:
Version: OpenStack Newton (OSA v14.2.11)
neutron-openvswitch-agent version 9.4.2.dev21
Issue:
Users complained that instances were unable to procure their IP via
DHCP. On the controllers, numerous ports were found in BUILD state.
Tracebacks similar to the following could be
Thanks, Brian. I confirmed that the other 'arping' package was being
installed over iputils-arping post-deploy by another set of playbooks.
The difference in behavior between the two packages is subtle and not
enough to cause any outright errors, but it will negatively affect users
as
** Also affects: openstack-ansible
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1715734
Title:
Gratuitous ARP for floating IPs not so
Public bug reported:
OpenStack Release: Newton
OS: Ubuntu 16.04 LTS
When working in an environment with multiple application deployments
that build up/tear down routers and floating IPs, it has been observed
that connectivity to new instances using recycled floating IPs may be
impacted.
In this
It has been determined that the networks attached to the router were
associated with different scopes. Additional testing has found the
proper rules are being added. Marking as invalid.
** Changed in: neutron
Status: New => Invalid
--
Public bug reported:
Release: OpenStack-Ansible 13.3.4 (Mitaka)
Scenario:
Neutron routers are connected to single provider network and single
tenant network. Floating IPs are *not* used, and SNAT is disabled on the
router:
Public bug reported:
OpenStack Release: Newton
Operating System: Ubuntu 16.04 LTS 4.4.0-45-generic
OpenStack Distro: OpenStack-Ansible 14.0.2
While working to test/implement macvtap functionality, I found it was
not possible to boot an instance when using the macvtap mech driver and
macvtap
Public bug reported:
OpenStack Version: v14 (Newton)
NIC: Mellanox ConnectX-3 Pro
While testing an SR-IOV implementation, we found that
pci_passthrough_whitelist in nova.conf is involved in the population of
the pci_devices table in the Nova DB. Making changes to the
device/interface in the
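For context, a Newton-era whitelist entry in nova.conf looks roughly like the following; the device and physical network names here are examples, not values from the report:

```ini
# nova.conf on the compute node (Newton-era option placement)
[DEFAULT]
pci_passthrough_whitelist = { "devname": "ens2f0", "physical_network": "physnet1" }
```

Because this option feeds the pci_devices table, renaming or replacing the device after deployment can leave stale rows behind.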
Public bug reported:
Version: Mitaka
While performing failover testing of L3 HA routers, we've discovered an
issue with regards to the failure of an agent to report its state.
In this scenario, we have a router (7629f5d7-b205-4af5-8e0e-a3c4d15e7677) scheduled to (3) L3 agents:
Public bug reported:
Problem Description
===
Currently, subnets are created with one or more allocation pools that are
either a) user-defined or b) automatically generated based on the CIDR. This RFE
asks that the community support the creation of subnets without an
allocation pool.
Neutron
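The "automatically generated based on the CIDR" behavior mentioned above can be sketched with the stdlib `ipaddress` module. This is an illustration of the general idea, not Neutron's actual implementation:

```python
# Sketch: derive a default allocation pool from a CIDR by taking all usable
# host addresses except the gateway. Defaulting the gateway to the first
# host mirrors Neutron's usual ".1" convention.
import ipaddress

def default_pool(cidr, gateway=None):
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())             # usable addresses in the subnet
    gateway = gateway or str(hosts[0])    # assume first host if unset
    usable = [str(h) for h in hosts if str(h) != gateway]
    return usable[0], usable[-1]

print(default_pool("192.0.2.0/24"))  # -> ('192.0.2.2', '192.0.2.254')
```

Supporting "no allocation pool" would mean skipping this derivation entirely rather than generating a pool.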
Public bug reported:
When attempting to schedule a router to an L3 agent without an external
network set, the following error is observed:
root@infra01_neutron_server_container-96ae0d98:~# neutron l3-agent-router-add 7ec8336e-3d82-46f5-8e15-2f2477090021 TestRouter
Agent
Public bug reported:
Posting here because I'm not sure of a better place at the moment.
Environment: Juno
OS: Ubuntu 14.04 LTS
Plugin: ML2/LinuxBridge
root@infra01_neutron_agents_container-4c850328:~# bridge -V
bridge utility, 0.0
root@infra01_neutron_agents_container-4c850328:~# ip -V
ip
Public bug reported:
Version: 2015.2 (Liberty)
Plugin: ML2 w/ LinuxBridge
While testing various NICs, I found that changing the physical interface
mapping in the ML2 configuration file and restarting the agent resulted
in the old physical interface remaining in the bridge. This can be
observed
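The cleanup the agent fails to do amounts to diffing the old and new interface mappings and detaching whatever went stale. A minimal sketch, with example mapping values (the real mappings come from `physical_interface_mappings` in the ML2 config):

```python
# Sketch: find interfaces left behind after a physical_interface_mappings
# change; these would still need manual removal from their bridges
# (e.g. with `brctl delif <bridge> <iface>`).
old_mappings = {"physnet1": "eth1"}   # mapping before the config change
new_mappings = {"physnet1": "eth2"}   # mapping after the config change

stale = set(old_mappings.values()) - set(new_mappings.values())
print(stale)  # -> {'eth1'}
```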
Public bug reported:
= Scenario =
• Kilo/Juno
• Single Neutron router with enable_snat=false
• two instances in two tenant networks attached to router
• each instance has a floating IP
INSTANCE A: TestNet1=192.167.7.3, 10.1.1.7
INSTANCE B: TestNet2=10.0.8.3, 10.1.1.6
When instances communicate
Public bug reported:
I am currently experiencing (random) cases of instances that are spun up
having limited connectivity. There are about 650 instances in the
environment and 45 networks.
Network Info:
- ML2/LinuxBridge/l2pop
- VXLAN networks
Symptoms:
- On the local compute node, the instance
Public bug reported:
By assigning the subnet gateway address to a port as an allowed address,
a user can cause ARP conflicts and deny service to other users in the
network. This can be exacerbated by the use of arping to send gratuitous
ARPs and poison the ARP cache of instances in the same
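To make the attack surface concrete, the payload of such a gratuitous ARP can be built with the stdlib alone. The MAC and gateway IP below are made-up example values, and nothing is put on the wire:

```python
# Sketch: a 28-byte ARP payload claiming the gateway IP from an arbitrary
# MAC address (per the RFC 826 packet layout). Built but never sent.
import socket
import struct

def gratuitous_arp(sender_mac: bytes, sender_ip: str) -> bytes:
    ip = socket.inet_aton(sender_ip)
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,               # HTYPE: Ethernet
        0x0800,          # PTYPE: IPv4
        6, 4,            # hardware / protocol address lengths
        2,               # OPER: reply (unsolicited announce)
        sender_mac, ip,  # sender pair claims the gateway IP
        sender_mac, ip)  # target pair repeats it (gratuitous)

pkt = gratuitous_arp(b"\xaa\xbb\xcc\xdd\xee\xff", "10.0.0.1")
print(len(pkt))  # -> 28
```

Instances that accept such announcements update their ARP cache to the attacker's MAC, which is exactly the denial of service the report describes.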
Public bug reported:
Problem:
In Icehouse/Juno, when using ML2/LinuxBridge and VXLAN networks,
allowed-address-pairs functionality is broken. It appears to be a case
where the node drops broadcast traffic (ff:ff:ff:ff:ff:ff), specifically
ARP requests, from an instance.
Steps to reproduce:
1.