Reviewed: https://review.openstack.org/482823
Committed:
https://git.openstack.org/cgit/openstack/horizon/commit/?id=476b0a4e5a3b3c871d72acb7021de4d000a30ee0
Submitter: Zuul
Branch: master
commit 476b0a4e5a3b3c871d72acb7021de4d000a30ee0
Author: Chiew Yee Xin
Public bug reported:
If you run the image-import API on any image that is in the active state,
it will return a 500 error, as it raises InvalidImageStatusTransition
because the image status transition from active to importing is not allowed.
Ideally it should return an HTTP 409 Conflict error to the user.
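A minimal sketch of the transition check the report argues for; the transition table and function names here are illustrative, not Glance's actual code:

```python
# Illustrative sketch (not Glance's actual code) of mapping an invalid
# image status transition to HTTP 409 Conflict instead of a 500 error.
ALLOWED_TRANSITIONS = {
    # only images still waiting for data may move to 'importing'
    'queued': {'importing', 'saving', 'active'},
    'saving': {'active'},
    'importing': {'active'},
    'active': set(),  # no further transitions via image-import
}


class InvalidImageStatusTransition(Exception):
    """Stand-in for Glance's internal exception of the same name."""


def begin_import(current_status):
    """Return the HTTP status code the API should answer with."""
    try:
        if 'importing' not in ALLOWED_TRANSITIONS.get(current_status, set()):
            raise InvalidImageStatusTransition(
                '%s -> importing is not allowed' % current_status)
    except InvalidImageStatusTransition:
        return 409  # Conflict: the image is in the wrong state
    return 202  # import accepted
```

The point is simply that the exception is caught at the API boundary and translated to a client error, rather than escaping as a 500.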
Public bug reported:
If you run the image-import API on any image that is in the saving state and
does not have container-format and/or disk-format set, the image goes into
the active state. Ideally, an image that does not have container-format or
disk-format set should raise a bad request error.
Prerequisites:
1. Ensure
Public bug reported:
If you run the image-import API on any image that is in the queued state with
valid container-format and disk-format set, it will return a 500 error, as it
raises IOError: [Errno 2] No such file or directory:
'/tmp/staging/567bfb61-d9f7-47e5-aa1a-90b7797e70be'. Also the image status
changes
Public bug reported:
If you run the image-import API on any image that is in the queued state and
doesn't have container-format and disk-format set, it will return a 500 error,
as it raises ValueError: Properties disk_format, container_format must
be set prior to saving data. Ideally it should return HTTP 400
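A sketch of the validation the report asks for; the function and message shape are hypothetical, not Glance's actual implementation:

```python
# Hypothetical validation sketch: reject image-import on images that have
# no disk_format/container_format, returning HTTP 400 up front instead of
# letting a ValueError deep in the save path become a 500 error.
def validate_import(image):
    """Return (http_status, message) for an image-import request."""
    missing = [prop for prop in ('disk_format', 'container_format')
               if not image.get(prop)]
    if missing:
        # 400 Bad Request, naming the properties the client must set first
        return 400, ('Properties %s must be set prior to saving data'
                     % ', '.join(missing))
    return 202, 'import accepted'
```

Validating before any state change also avoids the separate problem above, where an unimportable image still drifts into a new status.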
Public bug reported:
Steps to repro:
* Deploy with multiple DHCP agents per network (e.g. 3) and multiple L3 agents
per router (e.g. 2)
* Create a network
* Create a subnet
* Create a DVR+HA router
* Uplink router to external network
* Deploy a VM on the network
The resolv.conf of the VM looks
OK, I've figured it out; very sorry, not a bug. In Newton we had
mech_driver set to midonet_ext, and in Ocata this is now just midonet
again, which is why everything was failing.
** Changed in: networking-midonet
Status: New => Invalid
** Changed in: neutron
Status: New => Invalid
Public bug reported:
Given a security group ID, I would like an API to determine which devices
(nova instances) use this security group.
Currently the only way to do this is by looking in the database and
running some SQL against the securitygroupportbindings table.
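The manual lookup described above can be sketched as a join from the bindings table to ports; the schema here is a simplified stand-in for Neutron's actual tables, shown with sqlite3 so it is self-contained:

```python
# Sketch of the manual lookup the report describes: join Neutron's
# securitygroupportbindings table to ports to find which instances
# (device_id) use a given security group. Simplified illustrative schema.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE securitygroupportbindings (port_id TEXT, security_group_id TEXT);
    CREATE TABLE ports (id TEXT, device_id TEXT);
    INSERT INTO securitygroupportbindings VALUES
        ('port-1', 'sg-123'), ('port-2', 'sg-999');
    INSERT INTO ports VALUES
        ('port-1', 'instance-a'), ('port-2', 'instance-b');
""")


def instances_using_security_group(sg_id):
    """Return the device_ids (instance UUIDs) bound to a security group."""
    rows = conn.execute("""
        SELECT p.device_id
        FROM securitygroupportbindings b
        JOIN ports p ON p.id = b.port_id
        WHERE b.security_group_id = ?""", (sg_id,))
    return [row[0] for row in rows]
```

This is exactly the kind of query an API endpoint could wrap, which is what the RFE mentioned below proposes.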
** Affects: neutron
Importance:
Have submitted an RFE for this at
https://bugs.launchpad.net/neutron/+bug/1734026
** Changed in: neutron
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
Reviewed: https://review.openstack.org/358425
Committed:
https://git.openstack.org/cgit/openstack/nova/commit/?id=77e51f14a50dafb46176e50ff3788e7918ff29df
Submitter: Zuul
Branch: master
commit 77e51f14a50dafb46176e50ff3788e7918ff29df
Author: Gary Kotton
Date: Sun Aug
Reviewed: https://review.openstack.org/521652
Committed:
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=1254fca65a1ca6d259232f7e70621a9ba65a93b0
Submitter: Zuul
Branch: master
commit 1254fca65a1ca6d259232f7e70621a9ba65a93b0
Author: Boden R
Date: Mon
Public bug reported:
Description
===
Currently, when we get the server list in a multi-cell deployment, we
scatter-gather results from the cells, but if we get back an exception or a
timeout from a cell, we ultimately return a 500 error.
We should handle the raise or timeout after getting back all results.
Maybe we
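A minimal sketch of that gather-then-decide approach (illustrative, not Nova's actual scatter-gather code): collect per-cell failures alongside results, and decide what to surface only after every cell has answered.

```python
# Sketch: gather results from every cell, recording per-cell failures or
# timeouts, instead of letting one bad cell turn the whole listing into
# a 500 error.
import concurrent.futures


def scatter_gather(cells, query, timeout=2.0):
    """Run query(cell) for each cell; return (results, failed) dicts."""
    results, failed = {}, {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(query, cell): cell for cell in cells}
        for fut, cell in futures.items():
            try:
                results[cell] = fut.result(timeout=timeout)
            except Exception as exc:  # includes timeouts
                failed[cell] = exc
    # caller now has every cell's outcome and can degrade gracefully,
    # e.g. return partial results plus a warning for unreachable cells
    return results, failed
```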
Public bug reported:
Description
When zombied instances appear (see also bug
https://bugs.launchpad.net/nova/+bug/911366),
set running_deleted_instance_poll_interval = 60 and
running_deleted_instance_action = reap; the nova-compute service will then
clear those zombied instances, but if
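For reference, the two options named above would be set in nova.conf; a minimal sketch, with the values taken from the report:

```ini
[DEFAULT]
# poll every 60 seconds for instances still running on the host
# but already marked deleted in the database
running_deleted_instance_poll_interval = 60
# reclaim (reap) such instances instead of only logging them
running_deleted_instance_action = reap
```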
** Changed in: neutron
Status: In Progress => Fix Released
https://bugs.launchpad.net/bugs/1728479
Title:
some security-group rules will be covered.
Status in
Public bug reported:
When an organization has more than one LDAP server and a potentially
large number of clients connecting to them, they may support automatic
discovery of those servers by creating DNS SRV records for them. The
overview of how this works is described here:
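As an illustration (the hostnames below are hypothetical, not taken from the report), DNS SRV records advertising two LDAP servers look like this:

```
; _service._proto.name.    TTL   class SRV priority weight port target
_ldap._tcp.example.com.    3600  IN    SRV  10       60    389  ldap1.example.com.
_ldap._tcp.example.com.    3600  IN    SRV  10       40    389  ldap2.example.com.
```

Clients resolve the `_ldap._tcp` name for their domain and pick a target by priority, then weight, so servers can be added or retired without reconfiguring every client.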
Public bug reported:
Hi,
I am running Ocata Neutron with OVS DVR, l2_population is on, and Ocata
Octavia is also installed. Under a certain circumstance, I am getting
incorrect ARP entries in the routers for the VRRP address of the
loadbalancers created.
Here is the ARP table for a router that
Added the Nova project, as this issue hits the Quobyte Nova driver's mount
point validation, too.
** Also affects: nova
Importance: Undecided
Status: New
** Changed in: nova
Assignee: (unassigned) => Silvan Kaiser (2-silvan)
Public bug reported:
Description
===
Sometimes when a baremetal instance is terminated, some VIFs are not
detached from the node. This can leave the node unusable,
with subsequent attempts to provision it failing during VIF attachment due
to there being insufficient free ironic
Reviewed: https://review.openstack.org/363634
Committed:
https://git.openstack.org/cgit/openstack/neutron/commit/?id=bfe947b26266e13251b7ba972d8b57e67e9ebb02
Submitter: Zuul
Branch: master
commit bfe947b26266e13251b7ba972d8b57e67e9ebb02
Author: Adrien Cunin
Date:
Reviewed: https://review.openstack.org/519622
Committed:
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5e08a9b0e7d4f99d217ca73c6aa37e52a13c5d5a
Submitter: Zuul
Branch: master
commit 5e08a9b0e7d4f99d217ca73c6aa37e52a13c5d5a
Author: Sławek Kapłoński
Date:
Reviewed: https://review.openstack.org/494136
Committed:
https://git.openstack.org/cgit/openstack/nova/commit/?id=3759f105a7c4c3029a81a5431434190ef1bbb020
Submitter: Zuul
Branch: master
commit 3759f105a7c4c3029a81a5431434190ef1bbb020
Author: Pavlo Shchelokovskyy
Public bug reported:
The 2.36 microversion broke the 'force' parameter in the os-quota-sets
API:
https://developer.openstack.org/api-ref/compute/#update-quotas
It's because for 2.36 the schema redefined the properties but didn't
copy the force parameter:
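An illustrative reconstruction of that pattern (not Nova's actual schema code): rebuilding `properties` from scratch for a new microversion silently drops `force`, whereas copying the previous schema and editing it keeps it.

```python
# Sketch of the schema bug described above, with toy schemas.
import copy

# pre-2.36 quota-set schema properties, including 'force'
quota_properties = {
    'instances': {'type': 'integer'},
    'force': {'type': 'boolean'},
}

# Buggy pattern: 2.36 redefines 'properties' from scratch and forgets 'force'
v236_broken = {
    'type': 'object',
    'properties': {'instances': {'type': 'integer'}},
}

# Safer pattern: start from the previous schema and modify the copy
v236_fixed = {
    'type': 'object',
    'properties': copy.deepcopy(quota_properties),
}
```

With the broken variant, a request carrying `force: true` fails schema validation, which is how the microversion ended up breaking the parameter.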
Yes, you can find which port the instance is using and then query the
port; it will show you the security groups.
The port belonging to an instance has device_id equal to the instance ID.
** Changed in: neutron
Status: New => Opinion
Public bug reported:
Description
===
The nova list command fails with a TypeError instead of a CommandError when an
existing but invalid attribute of the object is given as a field.
$ /usr/bin/nova list --all --status ERROR --fields update
ERROR (TypeError): object.__new__(thread.lock) is not
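A hypothetical sketch of the fix the report implies: validate the requested fields up front and raise a user-facing CommandError, instead of letting a method attribute like `update` trigger a TypeError deep inside the client. `CommandError` here is a stand-in for the CLI's own error class.

```python
# Sketch: reject --fields values that are not plain data attributes.
class CommandError(Exception):
    """Stand-in for the CLI's user-facing error class (illustrative)."""


def resolve_fields(obj, fields):
    """Return the attribute values for the requested field names."""
    values = []
    for name in fields:
        if name.startswith('_') or not hasattr(obj, name):
            raise CommandError('Non-existent field: %s' % name)
        value = getattr(obj, name)
        if callable(value):
            # 'update' exists on the server object, but it is a method,
            # not data -- the case that produced the TypeError above
            raise CommandError('Invalid field: %s' % name)
        values.append(value)
    return values
```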
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/newton
Importance: Undecided
Status: New
** Also affects: nova/ocata
Importance: Undecided
Status: New
Good catch, doude!
Adding neutron, since the field definition is in neutron-lib.
** Also affects: neutron
Importance: Undecided
Status: New
Public bug reported:
Sometimes when build_instance fails on n-cpu, the error that n-cond
receives is mangled like this:
Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: ERROR
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9
tempest-FloatingIpSameNetwork-1597192363
Sorry, what you are explaining is the reverse of what I want and doesn't
help. I have a security group ID and I want to know which instances have
that security group applied.
We have thousands of instances, and querying each one to see if it has
the security group applied is very inefficient and
Public bug reported:
- [x] This doc is inaccurate in this way:
At the end of the documentation regarding the Glance rolling upgrade, the
command should be « glance-manage db contract », but the hyphen is
missing. That could lead to improper use of the glance command itself
and an incomplete
Disregard my previous comment; this is only a DB issue.
** Changed in: bgpvpn
Status: New => Confirmed
** No longer affects: neutron