Public bug reported:
Description
===
Evacuating an instance on non-shared storage succeeds, and the boot image is
rebuilt.
Steps to reproduce
==
1. Create a cluster with two compute nodes and no shared storage
2. Boot an image-backed virtual machine
3. Shut down the compute
Public bug reported:
Description
===
We are using Victoria Nova and find that after a same-host cold migration,
a subsequent cold migration can break the anti-affinity policy.
Steps to reproduce
==
1. provision an OpenStack cluster with 2 compute nodes
2. create a server group
Public bug reported:
Description
===
I found that get_machine_ips can take too long before returning IP addresses.
There are
around 160 instances with about 200 NICs, which results in around 1,000 network
adapters on the host.
Calling netifaces.ifaddresses takes approximately 0.2
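A back-of-envelope estimate makes the scale of the delay clear (a sketch using the approximate figures from this report; the numbers are round):

```python
# Rough cost of enumerating addresses one interface at a time, using the
# approximate figures from this report (hypothetical round numbers).
PER_IFADDRESSES_CALL_S = 0.2   # observed cost of one netifaces.ifaddresses()
ADAPTERS_ON_HOST = 1000        # ~160 instances -> ~1,000 network adapters

total_seconds = PER_IFADDRESSES_CALL_S * ADAPTERS_ON_HOST
print(total_seconds)  # 200.0 -> get_machine_ips blocks for minutes
```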
Public bug reported:
Description
===
Instance.availability_zone is set in nova.conductor during scheduling.
But a host's availability zone can be modified when the host is added to an
aggregate; instance.availability_zone will not be changed, and instead
'availability_zone' will be cached in
Public bug reported:
Description
===
We are running a Victoria OpenStack cluster (Python 3), and I observe
that every time an `openstack compute service list` is executed, nova-api
creates a new connection to memcached. There are several
reasons for this behavior:
1. when
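The per-request connection behavior above is commonly avoided by caching the client at module scope; a minimal sketch with hypothetical names (not nova-api's actual code, and `make_client` stands in for a real memcached client constructor):

```python
import functools

# Sketch: cache the cache-backend client so repeated API calls reuse one
# connection instead of opening a new one per request. make_client is a
# hypothetical stand-in for a real memcached client factory.
@functools.lru_cache(maxsize=None)
def get_cache_client(url):
    return make_client(url)

def make_client(url):
    return {"url": url}  # placeholder object for the sketch

a = get_cache_client("memcached://127.0.0.1:11211")
b = get_cache_client("memcached://127.0.0.1:11211")
print(a is b)  # True: the second call reuses the cached client
```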
Public bug reported:
Hello,
I am testing CentOS 7.6 on a Victoria OpenStack cloud. In the virtual
machine, the routing table looks like this:
# ip r
default via 172.31.0.1 dev eth0
192.168.0.0/16 dev eth1 proto kernel scope link src 192.168.0.9
169.254.0.0/16 dev eth0 scope link metric
Public bug reported:
We are observing that neutron-dhcp-agent's state deviates from the "real
state"; by real state, I mean all hosted dnsmasq processes are running and
configured.
For example, agent A is hosting 1,000 networks; if I reboot agent A, then
all dnsmasq processes are gone, and dhcp
Public bug reported:
Hello,
I am using cloud-init version /usr/bin/cloud-init 20.4.1-0ubuntu1~18.04.1;
the Ubuntu version is:
root@ubuntu:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.5 LTS
Release:        18.04
Codename:       bionic
I found
Public bug reported:
Description
===
When rebuilding an instance fails due to a potentially problematic Cinder API,
then when trying to rebuild again, Nova will try to disconnect the volume again
although the path has already been cleared. This is generally OK for the rbd
backend, but it could cause
Public bug reported:
I found the meaning of the option "router_auto_schedule" hard to follow. A
quick code review finds it is only used in (tests excluded):
```python
def get_router_ids(self, context, host):
"""Returns IDs of routers scheduled to l3 agent on
This will
Public bug reported:
We observe excessive DB calls to load DistributedPortBindings.
We have enabled DVR and have some huge virtual routers with around
60 router interfaces scheduled on around 200 compute nodes.
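The fan-out described here can be estimated roughly (a sketch using the report's approximate figures; the assumption that each interface gets one DVR binding per hosting node is mine):

```python
# Rough scale of the DVR binding fan-out, using the approximate figures
# from this report (hypothetical round numbers).
ROUTER_INTERFACES = 60
COMPUTE_NODES = 200

# With DVR, each router interface can carry a port binding per hosting
# node, so loading bindings row-by-row touches on the order of:
bindings = ROUTER_INTERFACES * COMPUTE_NODES
print(bindings)  # 12000 DistributedPortBinding rows per router
```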
We saw something like
```console
2022-05-12 05:59:06.406 50 ERROR
Public bug reported:
We are using Rocky 13.0.6 Neutron, which seems to remove the router namespace
if the retry limit is hit.
After some investigation, it seems that deleting a server which already
has an associated floating IP address
causes a broadcast notification to all related routers. In
** Changed in: nova
Status: Incomplete => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1964587
Title:
default video driver
Status in OpenStack Compute
Public bug reported:
Hello, I saw that on the amd64 platform Nova defaults to cirrus as the video
driver, and Windows virtual machines get a small resolution.
The virtio video driver could allow a larger resolution, and it looks like the
driver type cannot be set by the user.
My question is why cirrus is used as
Thank you, I saw a patch has been merged upstream for new releases, and
this should be fixed.
** Changed in: horizon
Status: New => Invalid
Public bug reported:
Horizon will auto-fill the device_name "vda" by default. But vda only
makes sense for a virtio-blk block device; for a SCSI device, sda makes more
sense.
Nova will take care of the device name if it is not specified, so why not make
this field null by default and let Nova choose a better
Public bug reported:
Description
===
nova-compute will fail to update vGPU mdev placement data if the mdev type
is changed while
there are some previously created mdev devices with different types. For
NVIDIA, under such
circumstances the max available instances will be 0.
Steps to reproduce
Public bug reported:
Description
===
When trying to create a server image, nova-compute will endlessly wait for
the snapshot to be created.
This is quite dangerous because the server's file system has already been
frozen and I/O operations have been
disabled.
** Affects: nova
Importance:
Public bug reported:
Description
===
The nova-compute service fails to start after a reboot if there are vGPU
virtual machines present beforehand.
Error log
2021-08-20 09:37:30.331 284159 DEBUG nova.virt.libvirt.volume.mount [None
req-6ad4e06c-980e-4759-8b36-6c696e596dab - - - - -]
Public bug reported:
Description
===
We have a use case to attach an FPGA device to a virtual machine. This FPGA
card has two functions, and we can attach both of them using an alias. After
both of them are passed through to the virtual machine, we found that
they do not appear as different
Public bug reported:
Description
===
Detaching a multi-attach enabled volume fails after swapping the volume.
Steps to reproduce
==
1. Create two volume types with multi-attach enabled (A, B)
2. Create a new volume using type A
3. Attach it to a server
4. Retype this volume to
Public bug reported:
Description
===
Cold migration fails when the server is specified with a NUMA topology
Steps to reproduce
==
Create a server from a flavor specified with NUMA topology parameters and then
do a cold migrate or resize
Expected
success
Actual
Public bug reported:
Ubuntu 18.04 uses netplan to manage networks; netplan can use either
NetworkManager or systemd-networkd
internally, but it does not use networking.service.
cloud-init.service explicitly depends on networking.service completing, which
might be problematic
because the network service
Public bug reported:
I am using neutron-ovs-agent with the openvswitch firewall; there are
around 40 ports with the same security group on the same compute node. It
seems updating the security group for each port consumes nearly 3 seconds,
which sums up to around 100 seconds in total. This significantly
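The cumulative cost can be sketched with the report's approximate numbers (hypothetical arithmetic only, not a measurement):

```python
# Cost sketch for the per-port security group refresh described above,
# using the approximate figures from this report.
PORTS = 40
SECONDS_PER_PORT_UPDATE = 3

sequential_seconds = PORTS * SECONDS_PER_PORT_UPDATE
print(sequential_seconds)  # 120, on the order of the ~100 s reported

# If ports sharing one security group were refreshed as a single batch,
# the flow regeneration would be paid roughly once instead of 40 times.
```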
Public bug reported:
2021-04-25 03:19:37.303 1 ERROR
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [None
req-413ff802-0c14-47ad-8221-14d7e972bad3 - - - - -] Error while processing VIF
ports: TypeError: %d format: a number is required, not list
2021-04-25 03:19:37.303 1 ERROR
Public bug reported:
Description
===
from
https://github.com/ceph/ceph/blob/0be78da368f2dc1c891e3caafac38f7aa96d3c49/src/pybind/rados/rados.pyx#L660,
it looks like the connect function of the Rados object will ignore the timeout
input, and therefore the current configuration does not take effect.
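If connect really ignores its timeout argument, one workaround is to enforce the deadline externally; a hedged sketch, where `connect_fn` is a stand-in callable and not the real rados binding:

```python
import threading

# Hedged workaround sketch: run the (possibly non-returning) connect call
# in a worker thread and give up after a deadline. connect_fn is a
# hypothetical stand-in, not the real rados.Rados.connect.
def connect_with_deadline(connect_fn, timeout):
    result = {}

    def runner():
        result["value"] = connect_fn()

    worker = threading.Thread(target=runner, daemon=True)
    worker.start()
    worker.join(timeout)
    if "value" not in result:
        raise TimeoutError("connect did not finish within %ss" % timeout)
    return result["value"]

print(connect_with_deadline(lambda: "connected", timeout=1.0))  # connected
```

Note the daemon thread keeps running if the deadline fires; this bounds the caller's wait but does not cancel the underlying connect.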
Public bug reported:
Description
===
Querying a large number of VMs through server detail is slow, and a lot of time
is wasted calling the Neutron API to obtain security group info.
Expected result
===
Obtaining security group info should not consume half of the total query
time.
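One plausible mitigation is to fetch security groups for all ports in a single filtered Neutron query and index them locally, instead of one call per server; a minimal sketch with hypothetical data (not Nova's actual code):

```python
# Sketch: index security groups by server from one bulk port listing,
# so the detail view makes O(1) Neutron calls instead of O(servers).
# The port dictionaries below are hypothetical example data.
def index_sgs_by_server(ports):
    by_server = {}
    for port in ports:
        by_server.setdefault(port["device_id"], []).extend(
            port["security_groups"])
    return by_server

ports = [
    {"device_id": "vm-1", "security_groups": ["sg-a"]},
    {"device_id": "vm-2", "security_groups": ["sg-a", "sg-b"]},
]
print(index_sgs_by_server(ports))
# {'vm-1': ['sg-a'], 'vm-2': ['sg-a', 'sg-b']}
```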
Public bug reported:
Hello, after reading the code, it seems nova-compute can only use
vhostuser mode if netdev is enabled on the OVS bridge. An internal use case
requires us to allow using tap devices as well as vhostuser devices on the
same host. Does this sound like a valid use case?
** Affects:
Public bug reported:
Description
===
When RabbitMQ is unstable, there is a chance that the method
https://github.com/openstack/nova/blob/7a1222a8654684262a8e589d91e67f2b9a9da336/nova/compute/api.py#L4741
will time out even though the BDM is successfully created.
In such cases, the volume will be shown
Public bug reported:
Sorry, this is filed as a bug report, but it is really a discussion for better
clarification in the documentation.
Currently we are running the iptables firewall in production and have seen
performance degradation, so
we plan to upgrade to the OVS firewall in place. Reading the docs, I found the
upgrade process is
Public bug reported:
We are using openstack-neutron Rocky with Open vSwitch version 2.10.0.
We are using Ubuntu 18.04, which shipped with a libc6 bug, reported here:
https://github.com/openvswitch/ovs-issues/issues/175.
My question is: when this bug happens, the OVS agent stops working and
Public bug reported:
we are using openstack-neutron rocky with openvswitch versioned 2.10.0
We are using ubuntu 18.04 which shipped with a libc6 bug, reported here
https://github.com/openvswitch/ovs-issues/issues/175.
My question is that when this bug happens ovs agent will not working and
OK, I'll try out Victoria and compare the results. Thank you for the reply.
** Changed in: neutron
Status: New => Opinion
https://bugs.launchpad.net/bugs/1909160
Title:
Public bug reported:
I saw that listing security groups is slow and unexpectedly causes CPU
spikes.
I run a Rocky neutron-server with the API worker count set to 1; when executing
a command like
```console
root@mgt01:~# time curl -H "x-auth-token: $token"
Public bug reported:
Maybe it's a k8s kube-proxy related bug, but maybe it is easier to solve on
Neutron's side...
In k8s, either NodePort or ExternalIP will generate iptables rules which will
affect VM traffic when
the hybrid iptables plugin is enabled.
The problem is:
Chain PREROUTING (policy
Public bug reported:
For libvirt version 4.0.0, a SCSI disk with a unit number equal to 7 cannot
be attached due to libvirt's own limitation.
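As an illustration of working around such a limitation, a unit-allocation helper could simply skip the rejected unit; a sketch only (not Nova's code, and treating unit 7 as reserved is an assumption based on this report):

```python
# Hypothetical helper: pick the next free SCSI unit number, skipping
# unit 7, which this libvirt version is reported to reject.
RESERVED_UNITS = {7}

def next_unit(used):
    unit = 0
    while unit in used or unit in RESERVED_UNITS:
        unit += 1
    return unit

print(next_unit({0, 1, 2, 3, 4, 5, 6}))  # 8, not 7
```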
** Affects: nova
Importance: Undecided
Status: New
Public bug reported:
We are using OpenStack Rocky, and when I checked memcached, I found
root@compute:~# telnet compute 11211
Trying 192.168.0.17...
Connected to compute.
Escape character is '^]'.
stats cachedump 15 1
ITEM c9067b617ec1e6e7f78318c19e7ce2c7f4f9dcd6 [2034 b; 0 s]
expiration time
Public bug reported:
Creating a port on a shared network using a user with the member role in
another project fails.
** Affects: horizon
Importance: Undecided
Status: New
I think the previous title is misleading. Actually, the hostname itself is
still A; what changes is the FQDN seen by `hostname --fqdn`.
** Changed in: nova
Status: Invalid => New
** Summary changed:
- how to deal with hypervisor name changing
+ how to deal with hypervisor host fqdn name changing
Public bug reported:
Nova fails to correctly account for resources after the hypervisor name
changes. For example, if previously the hypervisor name is A, and some time
later it switches to A.B, then all of the instances which belong to A
will not be included in the resource computation for A.B, although
** Changed in: neutron
Status: Confirmed => Invalid
https://bugs.launchpad.net/bugs/1542032
Title:
IP reassembly issue on the Linux bridges in Openstack
Status in
Public bug reported:
Should we offer support for volume-backed instances?
** Affects: nova
Importance: Undecided
Status: New
Public bug reported:
We are using Neutron Rocky with the security driver set to iptables_hybrid;
the cluster is deployed on top of a Kubernetes cluster, and all the
networks are set to MTU 1500.
The problem I am facing right now is that ping across compute nodes
fails with a packet size larger than
Public bug reported:
Cinder-backed image creation fails after a long wait while the volume
is still creating.
root@mgt01:~# openstack volume list --all | grep
fb8aee1b-e19e-4336-8fa2-864f1664b834
| b1e021bd-974d-4974-961b-47ab7f9b0a16 |
image-fb8aee1b-e19e-4336-8fa2-864f1664b834
Public bug reported:
I saw l2pop rules for a VLAN network, which causes problems for MAC
learning. There is no DVR router associated with it; it is a pure VLAN
network.
root@compute02:/tmp# ovs-ofctl dump-flows br-tun table=21
cookie=0xcd381baa7a6d5b5c, duration=1703630.319s, table=21,
Public bug reported:
Uploading an image to the rbd backend is stuck in the saving state; the rbd du
command shows the image size is not increasing, and ceph osd pool stats shows
that there is no client I/O.
A tcpdump shows the program is actually trying to receive from the client with
a rather small window size
Public bug reported:
Description
===
Nova uses strict CPU flag comparison during live migration; this introduces
some problems when
migrating with some CPU flags which do not actually affect migration. For
example, the `monitoring` flag
could be safely neglected.
So I think it might be
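A relaxed comparison could subtract a known-safe set before checking compatibility; an illustrative sketch only (not Nova's implementation, and treating `monitoring` as migration-safe is this report's claim, not an established fact):

```python
# Sketch of a relaxed CPU-flag check: ignore flags assumed to be
# migration-safe (here the `monitoring` flag mentioned in the report)
# before comparing the source host's flags against the destination's.
MIGRATION_SAFE_FLAGS = {"monitoring"}

def flags_compatible(source_flags, dest_flags):
    required = set(source_flags) - MIGRATION_SAFE_FLAGS
    return required <= set(dest_flags)

print(flags_compatible({"sse2", "monitoring"}, {"sse2", "avx2"}))  # True
print(flags_compatible({"sse2", "avx2"}, {"sse2"}))                # False
```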
Public bug reported:
Sometimes I see the database is not consistent for some reason;
for example, as shown below:
MariaDB [neutron]> select * from ipamsubnets where
neutron_subnet_id='9a8fd2b0-743c-4500-8978-9e5bf9b38347'
-> ;
Public bug reported:
Description
===
When resize to the local host is enabled, a cold migration sometimes
fails with:
1. migrating to the same host failed
2. and then a list index out of bound error
Steps to reproduce
===
deploy two compute nodes and make workload
Public bug reported:
After
https://github.com/openstack/neutron/commit/efa8dd08957b5b6b1a05f0ed412ff00462a9f216
this patch, I saw an unexpected VLAN interruption after live migration.
The steps to reproduce the problem are simple:
first create two VMs, vm01 and vm02, on compute01 and compute02 respectively,
Public bug reported:
Description
===
after live migration, the block device mapping's connection stays at
"attaching", which is a confusing piece of information. The root cause seems
to be a different code path between live migration and volume attach.
Steps to reproduce
==
Public bug reported:
We are using OpenStack Neutron 13.0.6, deployed using
OpenStack-Helm.
I tested pinging servers in the same VLAN while rebooting neutron-ovs-agent.
The result shows
root@mgt01:~# openstack server list
Public bug reported:
pep8 checking fails for the Rocky branch on Ubuntu 18.04.3
root@mgt02:~/src/nova# tox -epep8 -vvv
removing /root/src/nova/.tox/log
using tox.ini: /root/src/nova/tox.ini
using tox-3.1.0 from /usr/local/lib/python2.7/dist-packages/tox/__init__.pyc
skipping sdist step
pep8 start:
Public bug reported:
We are testing OpenStack on a Phytium FT2000PLUS
root@compute01:~# lscpu
Architecture:        aarch64
Byte Order:          Little Endian
CPU(s):              64
On-line CPU(s) list: 0-63
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           16
NUMA
"{0}"
}
},
{
"projects":[
{
"name":"{1}",
"roles":[
{
"name":"member"
}
Public bug reported:
While starting up the nova-compute service, we hit the following
error message:
+ sed -i s/HOST_IP// /tmp/logging-nova-compute.conf
+ exec nova-compute --config-file /etc/nova/nova.conf --config-file
/tmp/pod-shared/nova-console.conf --config-file
with a fixed IP on the floating network.
Then call `routers_updated_on_host` manually; this DVR router will be created
on the host where the VM resides, but actually it should be there.
** Affects: neutron
Importance: Undecided
Assignee: norman shen (jshen28)
Status: In Progress
Public bug reported:
Sorry, I posted this bug in the wrong place.
** Affects: neutron
Importance: Undecided
Status: Invalid
** Changed in: neutron
Status: New => Invalid
** Description changed:
- we are using OpenStack Queens:
- nova-common/xenial,now 2:17.0.9-6~u16.01+mcp189 all
Public bug reported:
we are using OpenStack Queens:
nova-common/xenial,now 2:17.0.9-6~u16.01+mcp189 all [installed]
nova-compute/xenial,now 2:17.0.9-6~u16.01+mcp189 all [installed,automatic]
nova-compute-kvm/xenial,now 2:17.0.9-6~u16.01+mcp189 all [installed]
The guest VM uses Windows 2012.
Public bug reported:
We have a distributed router which is used by hundreds of virtual
machines scattered across around 150 compute nodes. When Nova sends a port
update request to Neutron, it generally takes nearly 4 minutes to
complete.
Neutron version is openstack Queens 12.0.5.
I found