** Changed in: nova
Importance: Undecided => High
** Tags added: train-rc-potential
** Also affects: nova/train
Importance: Undecided
Status: New
** Changed in: nova/train
Status: New => Confirmed
** Changed in: nova/train
Importance: Undecided => High
--
You received
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22if%20mig.dest_compute%20%3D%3D%20self.host%20and%20'new_resources'%20in%20mig_ctx%3A%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22=7d
** Also affects: nova/train
Importance: Undecided
Status: New
**
Public bug reported:
Seen here:
https://zuul.opendev.org/t/openstack/build/2b10b4a240b84245bcee3366db93951d/log/logs/screen-n-cpu.txt.gz?severity=4#2675
Oct 21 13:35:16.977968 ubuntu-bionic-rax-iad-0012404623 nova-
compute[26938]: ERROR nova.compute.manager [None req-dd5ddbad-4234-4288
Hmm, did something change in Stein on the Cinder side to enforce the
update_volume_admin_metadata policy rule on the os-attach API? I'm not
aware of anything that has changed on the nova side in stein that would
be related to this.
** Also affects: cinder
Importance: Undecided
Status:
Public bug reported:
https://c6fecb2db5c55fa0effa-
6229cc6450d9b491384804026d2fbd81.ssl.cf5.rackcdn.com/688980/1/gate
/openstack-tox-py36/71a8bdd/testr_results.html.gz
ft1.2:
nova.tests.unit.virt.powervm.tasks.test_vm.TestVMTasks.test_power_on_revert_StringException:
Traceback (most recent
That API is for nova-network only, which we are removing, so eventually
that API is just going to return a 410 response and won't be used
anyway.
** Changed in: nova
Status: New => Won't Fix
--
Looks like expected_task_state is pulled from the values dict here:
https://github.com/openstack/nova/blob/1a226aaa9e8c969ddfdfe198c36f7966b1f692f3/nova/db/sqlalchemy/api.py#L2850
and if not None converted to a list here:
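The pattern being described can be sketched roughly like this (a simplified stand-in for the sqlalchemy API code, not the actual nova implementation; the dict values are made up for illustration):

```python
# Simplified stand-in for the behavior described above: the update code
# pops expected_task_state from the values dict and, if it is not None,
# normalizes a scalar value into a list before comparing task states.
values = {'task_state': None, 'expected_task_state': 'deleting'}
expected = values.pop('expected_task_state', None)
if expected is not None and not isinstance(expected, (list, tuple)):
    expected = [expected]
print(expected)
```

Note that an explicit `expected_task_state=None` falls through the normalization entirely, which is exactly the kind of surprise the bug report is about.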
Public bug reported:
I noticed this in some code I was writing when it didn't behave like I
expected:
https://review.opendev.org/#/c/627891/63/nova/conductor/tasks/cross_cell_migrate.py@423
https://review.opendev.org/#/c/688832/2/nova/conductor/tasks/cross_cell_migrate.py@781
That "works"
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/train
Importance: Undecided
Status: New
** Also affects: nova/stein
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
--
Public bug reported:
This came up in the cross-cell resize review:
https://review.opendev.org/#/c/627890/60/nova/conductor/tasks/cross_cell_migrate.py@495
And I was able to recreate with a functional test here:
https://review.opendev.org/#/c/688832/
That test is doing a cross-cell cold
Hits in ironic multinode jobs:
This goes back to Stein because https://review.opendev.org/#/c/591597/
changed the method from using DELETE /allocations/{consumer_id} to the
GET/PUT dance.
** Also affects: nova/stein
Importance: Undecided
Status: New
** Also affects: nova/train
Importance: Undecided
Status:
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => In Progress
** Changed in: nova/queens
Importance: Undecided => Low
** Changed in: nova/queens
Assignee: (unassigned) => Silvan Kaiser (2-silvan)
--
** Also affects: nova/stein
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => In Progress
** Changed in:
Public bug reported:
There is no documentation for the image cache, so we should add one to
the admin guide.
I think a relatively simple beginning would include:
- A high level description of what an image cache is, where it lives,
and the benefits.
- Which compute drivers support image cache
This is extremely latent but I've marked it going back to at least
queens since that's currently our oldest non-extended maintenance
branch.
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/train
Importance: Undecided
Status: New
** Also
Public bug reported:
https://review.opendev.org/#/c/684118/ recently merged and is causing an
issue because a variable used in the log message isn't in scope:
Oct 07 07:16:51.372050 ubuntu-bionic-ovh-bhs1-0012185489 nova-scheduler[28235]:
ERROR oslo_messaging.rpc.server [None
To capture what I said in the now abandoned patch:
"This would change something that's not an error to an error, regardless
of the weird latent behavior. Because of that, I think this would
require a microversion which means we'd need a spec if we wanted to
change this. gmann was compiling a list
We don't seem to be hitting this in the gate anymore so I'm not sure if
it's just rare now or if it's resolved some other way:
http://status.openstack.org/elastic-recheck/#1783565
I'm marking invalid for now though. We can re-open if necessary.
** Changed in: nova
Assignee: Zhenyu Zheng
Public bug reported:
This is demonstrated by this functional test patch:
https://review.opendev.org/#/c/686734/
That adds a test which creates a single server create request to create
10 servers and each server has 255 BDMs using the same image and asserts
that the API calls GET
Public bug reported:
- [x] This doc is inaccurate in this way:
This came up in review:
https://review.opendev.org/#/c/685927/2//COMMIT_MSG@9
https://docs.openstack.org/api-ref/compute/#show-server-details
and
https://docs.openstack.org/api-ref/compute/#list-servers-detailed
response
Public bug reported:
- [x] This doc is inaccurate in this way:
This came up during a review to remove nova-net usage from functional
tests and enhance the neutron fixture used in those tests:
https://review.opendev.org/#/c/685927/2/nova/tests/functional/test_servers.py@1264
In summary, GET
Public bug reported:
Method `nova.volume.cinder.API#create` accepts `size` as its 3rd argument,
but in the `nova.volume.cinder.translate_volume_exception` wrapper, the 3rd
parameter is `volume_id`. If we hit a cinder exception when creating volumes,
like the response body down below:
```
{"itemNotFound":
```
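The positional mismatch can be illustrated with a simplified sketch (the decorator and method bodies here are hypothetical stand-ins, not the real nova code):

```python
import functools

def translate_volume_exception(method):
    """Stand-in decorator that assumes the argument after (self, context)
    is a volume id and formats it into the translated error."""
    @functools.wraps(method)
    def wrapper(self, context, volume_id, *args, **kwargs):
        try:
            return method(self, context, volume_id, *args, **kwargs)
        except ValueError as exc:
            # For create(), this slot actually holds the requested size.
            raise RuntimeError('volume %s: %s' % (volume_id, exc))
    return wrapper

class API:
    @translate_volume_exception
    def create(self, context, size, name=None):
        # 3rd positional argument is a size, not a volume id
        raise ValueError('itemNotFound')

try:
    API().create(None, 10)
except RuntimeError as exc:
    print(exc)  # the size (10) gets misreported as a volume id
```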
Public bug reported:
This came up in the mailing list while answering some questions about
when/how various cells v2 and database related commands get run:
http://lists.openstack.org/pipermail/openstack-
discuss/2019-October/009937.html
Recent change https://review.opendev.org/#/c/671298/ was
** Also affects: nova/train
Importance: Undecided
Status: New
** Changed in: nova/train
Status: New => In Progress
** Changed in: nova/train
Importance: Undecided => Low
** Changed in: nova/train
Assignee: (unassigned) => Matt Riedemann (mriedem)
--
Public bug reported:
I noticed this while working on a functional test to recreate a bug
during resize reschedule:
https://review.opendev.org/#/c/686017/
And discussed a bit in IRC:
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-
Note for backports: this problem goes back to Pike but we won't be able
to backport the fix since it's going to require RPC API version changes.
** No longer affects: nova/pike
** No longer affects: nova/queens
** Changed in: nova
Assignee: (unassigned) => Matt Riedemann (mriedem)
--
Public bug reported:
This came up in the mailing list today:
http://lists.openstack.org/pipermail/openstack-
discuss/2019-September/009827.html
It's not immediately obvious that console proxy services should be run
per-cell rather than globally.
One would expect to see something about that
** Also affects: nova/train
Importance: Undecided
Status: New
** Changed in: nova/train
Status: New => In Progress
** Changed in: nova/train
Assignee: (unassigned) => Boris Bobrov (bbobrov)
--
** Also affects: nova/train
Importance: Undecided
Status: New
** Changed in: nova/train
Assignee: (unassigned) => Dan Smith (danms)
** Changed in: nova/train
Status: New => In Progress
** Changed in: nova/train
Importance: Undecided => High
** Changed in: nova
** Also affects: nova/train
Importance: High
Assignee: Artom Lifshitz (notartom)
Status: In Progress
** No longer affects: nova/train
** Also affects: nova/train
Importance: High
Assignee: Artom Lifshitz (notartom)
Status: In Progress
** Changed in: nova/train
** Also affects: nova/train
Importance: Undecided
Status: New
** Changed in: nova/train
Status: New => Confirmed
** Changed in: nova/train
Importance: Undecided => High
--
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/stein
Importance: Undecided
Status: New
--
I know tempest has a novnc console test; I wonder if the same is
possible for ironic serial consoles in ironic CI testing, so we could
avoid these types of regressions in the future?
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/stein
Importance:
This goes back to Newton:
https://github.com/openstack/nova/commit/76dfb4ba9fa0fed1350021591956c4e8143b1ce9
** Changed in: nova
Status: New => In Progress
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/stein
Importance: Undecided
Status: New
**
Do you have the logs? Are there specific errors in the scheduler or
conductor logs about NoValidHost? You can trace a request through the
logs by the request ID which is something like "req-" so trace a
request and see why the scheduler is filtering out all hosts. I'm
closing this as invalid since
** No longer affects: nova/ocata
--
https://bugs.launchpad.net/bugs/1694844
Title:
Boot from volume fails when cross_az_attach=False and volume is
Public bug reported:
Seen here:
https://zuul.opendev.org/t/openstack/build/d53346210978403f888b85b82b2fe0c7/log/logs/screen-n-sch.txt.gz?severity=3#2368
Sep 22 00:50:54.174385 ubuntu-bionic-ovh-gra1-0011664420 nova-
scheduler[18043]: WARNING nova.context [None req-
** Tags added: low-hanging-fruit
** No longer affects: python-glanceclient
--
https://bugs.launchpad.net/bugs/1763761
Title:
CPU topologies in nova - doesn't mention numa
Is this still a problem we need to track? Mitaka is long end of life
upstream at this point so I'm not even sure this is a problem on
upstream stable branches for which we could backport a fix.
** Changed in: nova
Assignee: Stephen Finucane (stephenfinucane) => (unassigned)
** Changed in:
** Also affects: glance/rocky
Importance: Undecided
Status: New
** Also affects: glance/stein
Importance: Undecided
Assignee: Thomas Bechtold (toabctl)
Status: In Progress
--
** Also affects: nova/pike
Importance: Undecided
Status: New
** Changed in: nova/pike
Status: New => In Progress
** Changed in: nova/pike
Assignee: (unassigned) => Matt Riedemann (mriedem)
--
** Affects: nova
Importance: Low
Assignee: Matt Riedemann (mriedem)
Status: Confirmed
** Tags: doc
** Changed in: nova
Assignee: (unassigned) => Matt Riedemann (mriedem)
--
Double check the configuration for the [neutron] section in nova.conf
against this:
https://docs.openstack.org/neutron/rocky/install/controller-install-
ubuntu.html#configure-the-compute-service-to-use-the-networking-service
Note that the install guide is just a reference, the actual URLs have
for years.
** Changed in: nova
Status: In Progress => Invalid
** Changed in: nova
Assignee: Matt Riedemann (mriedem) => (unassigned)
** No longer affects: nova/queens
** No longer affects: nova/rocky
** Changed in: nova
Status: Invalid => Fix Released
--
*** This bug is a duplicate of bug 1843615 ***
https://bugs.launchpad.net/bugs/1843615
This was fixed with https://review.opendev.org/#/c/681540/ since I
didn't remember we already had a bug for this.
** This bug has been marked a duplicate of bug 1843615
** Also affects: nova/stein
Importance: Undecided
Status: New
** Changed in: nova/stein
Status: New => Confirmed
** Changed in: nova/stein
Importance: Undecided => High
--
how/775148/
so it appears something has changed when that is sent or we're losing a
race with when force complete is triggered? Meaning maybe we don't catch
the force complete in time before post live migration starts.
** Affects: nova
Importance: High
Assignee: Matt Riedemann (mriedem)
Public bug reported:
- [x] This doc is inaccurate in this way:
https://docs.openstack.org/api-ref/compute/?expanded=show-server-
topology-detail#id401
There is no 'host_numa_node' parameter in the response, it's called
'host_node'.
---
Release: on 2019-08-06
will be a legacy
dict rather than a full RequestSpec object so the code here:
https://github.com/openstack/nova/blob/19.0.0/nova/conductor/manager.py#L302-L321
Needs to account for the legacy dict case.
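A minimal sketch of the branching that's needed (function and class names here are assumed for illustration, not the real nova code):

```python
# Sketch: accept either a legacy primitive dict or an already-hydrated
# request spec object, hydrating the dict case through a supplied factory.
def ensure_request_spec(spec_or_dict, from_primitives):
    if isinstance(spec_or_dict, dict):
        # legacy dict case described above: hydrate an object from it
        return from_primitives(spec_or_dict)
    return spec_or_dict

class FakeSpec:
    """Hypothetical stand-in for a full RequestSpec object."""
    def __init__(self, flavor=None):
        self.flavor = flavor

spec = ensure_request_spec({'flavor': 'm1.tiny'}, lambda d: FakeSpec(**d))
print(type(spec).__name__)
```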
** Affects: nova
Importance: Low
Assignee: Matt Riedemann (mriedem)
Status: In Progress
Public bug reported:
This may be related to bug 1838309 but I'm not sure so I'm reporting it
separately so we can track it in elastic-recheck. This is the traceback
in the nova-compute logs:
Sep 06 01:28:11.837685 ubuntu-bionic-rax-iad-0010857269 nova-compute[3855]:
DEBUG
Public bug reported:
- [x] This doc is inaccurate in this way:
The reference link here is broken:
https://docs.openstack.org/nova/latest/contributor/testing/zero-
downtime-upgrade.html#zero-downtime-upgrade-process
---
Release: on 2017-09-06 22:01:01
SHA:
** Also affects: nova/stein
Importance: Undecided
Status: New
** Changed in: nova
Importance: Undecided => Medium
** Changed in: nova/stein
Importance: Undecided => Medium
** Changed in: nova/stein
Status: New => Confirmed
--
Public bug reported:
This PCI validation code in the live migration task in conductor is run
per possible dest host for the migration:
https://github.com/openstack/nova/blob/master/nova/conductor/tasks/live_migrate.py#L212-L228
But is host agnostic, meaning if I have 100 possible dest hosts for
nic) is ready.
This doesn't really break anything, but it's an ugly traceback in the
logs that could be avoided. We should handle the VirtDriverNotReady
error and return from the periodic.
** Affects: nova
Importance: Low
Assignee: Matt Riedemann (mriedem)
Status: Triaged
** Tags: co
Public bug reported:
- [x] This is a doc addition request.
The description for the AggregateInstanceExtraSpecsFilter filter is not
clear:
https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#aggregateinstanceextraspecsfilter
(note it's also described here:
** Changed in: nova
Status: New => Opinion
** Changed in: nova
Importance: Undecided => Wishlist
--
https://bugs.launchpad.net/bugs/1512645
Title:
Security groups
** Summary changed:
- Instances recovered after failed migrations enter error state
+ Instances recovered after failed migrations enter error state (hyper-v)
** Tags added: live-migration
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/ocata
Importance:
*** This bug is a duplicate of bug 1838666 ***
https://bugs.launchpad.net/bugs/1838666
The actual version of libvirt on the system shouldn't matter, these
tests should not be running against a real libvirt, everything should be
faked out. My guess is the tests are using unordered dicts and
Public bug reported:
Seen with an ironic re-balance in this job:
https://d01b2e57f0a56cb7edf0-b6bc206936c08bb07a5f77cfa916a2d4.ssl.cf5.rackcdn.com/678298/4/check
/ironic-tempest-ipa-wholedisk-direct-tinyipa-multinode/92c65ac/
On the subnode we see the RT detect that the node is moving hosts:
Public bug reported:
Seen here:
https://d01b2e57f0a56cb7edf0-b6bc206936c08bb07a5f77cfa916a2d4.ssl.cf5.rackcdn.com/678298/4/check
/ironic-tempest-ipa-wholedisk-direct-tinyipa-
multinode/92c65ac/compute1/logs/screen-n-cpu.txt.gz
We see a warning that a compute node could not be found by host and
** No longer affects: neutron
--
https://bugs.launchpad.net/bugs/1833902
Title:
Revert resize tests are failing in jobs with iptables_hybrid fw driver
Status in OpenStack
Public bug reported:
The archive_deleted_rows command returns 1, meaning some records were
archived, and the code documents that if you are automating and not using
--until-complete, you should keep going while you get rc=1 until you get
rc=0:
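In other words, an automation loop like the following (with a stub standing in for actually shelling out to `nova-manage db archive_deleted_rows`, so the loop logic is self-contained):

```python
# rc == 1: some rows were archived this pass; rc == 0: nothing left to do.
return_codes = iter([1, 1, 0])  # stub: two passes with work, then done

def archive_deleted_rows():
    """Stand-in for invoking `nova-manage db archive_deleted_rows`."""
    return next(return_codes)

passes = 0
rc = 1
while rc == 1:
    rc = archive_deleted_rows()
    passes += 1
print('ran %d passes' % passes)  # the loop stops on the first rc=0
```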
*** This bug is a duplicate of bug 1729621 ***
https://bugs.launchpad.net/bugs/1729621
** This bug has been marked a duplicate of bug 1729621
Inconsistent value for vcpu_used
--
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => Fix Released
** Changed in: nova/rocky
Status: New => Fix Released
** Changed in: nova/pike
*** This bug is a duplicate of bug 1839674 ***
https://bugs.launchpad.net/bugs/1839674
** This bug has been marked a duplicate of bug 1839674
ResourceTracker.compute_nodes won't try to create a ComputeNode a second
time if the first create() fails
--
Public bug reported:
- [x] This doc is inaccurate in this way:
The [neutron]/url option
https://docs.openstack.org/nova/latest/configuration/config.html#neutron.url
in nova has been deprecated since the Queens release and is being
removed in Train. The neutron/compute config guide in the neutron
Looks like the nova-api service isn't configured properly for
authenticating to neutron, make sure the [neutron] section of your nova
configuration is set for working with neutron. See:
https://docs.openstack.org/neutron/latest/install/controller-install-
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Changed in: nova/ocata
Status: New => In Progress
** Changed in: nova/ocata
Importance: Undecided => Low
** Changed in: nova/ocata
Assignee: (unassigned) => Matt Riedemann (mriedem)
--
** Also affects: nova/pike
Importance: Undecided
Status: New
** Changed in: nova/pike
Importance: Undecided => Low
--
Public bug reported:
Seen here:
https://logs.opendev.org/21/655721/14/check/nova-grenade-live-
migration/2ee634d/logs/subnode-2/screen-n-cpu.txt.gz?level=TRACE#_Aug_13_10_03_49_974378
Aug 13 10:03:49.974378 ubuntu-bionic-limestone-regionone-0010083920
nova-compute[25863]: WARNING
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/queens
Importance: Undecided => Low
** Changed in: nova/queens
Status: New => Confirmed
--
This filter code goes back to 2012 so we could backport the fix further
(to pike and ocata) but no one is really using the libvirt+lxc code as
far as I can tell, at least not with python3, so we can just backport to
the non-extended-maintenance branches unless someone wants to backport
them
Public bug reported:
Seen in the nova-lxc CI job here:
https://logs.opendev.org/24/676024/2/experimental/nova-
lxc/f9a892c/controller/logs/screen-n-cpu.txt.gz#_Aug_12_23_31_05_043911
Aug 12 23:31:05.043911 ubuntu-bionic-rax-ord-0010072710 nova-compute[27015]:
ERROR nova.compute.manager [None
*** This bug is a duplicate of bug 1669468 ***
https://bugs.launchpad.net/bugs/1669468
** This bug has been marked a duplicate of bug 1669468
tempest.api.compute.servers.test_novnc.NoVNCConsoleTestJSON.test_novnc fails
intermittently in neutron multinode nv job
--
Public bug reported:
There are some tests, mostly related to BuildRequest objects, that are
calling nova.objects.base.obj_equal_prims, which does not assert
anything; it only returns True or False. The test code itself must
assert the expected result of the obj_equal_prims method.
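The anti-pattern is easy to see with a simplified stand-in for the helper:

```python
def obj_equal_prims(obj1, obj2):
    """Simplified stand-in: compares primitives and only *returns* a bool."""
    return obj1 == obj2

# Buggy test style: the boolean result is silently discarded, so this
# line "passes" even though the objects differ.
obj_equal_prims({'host': 'a'}, {'host': 'b'})

# Correct style: the test itself must assert on the result.
assert obj_equal_prims({'host': 'a'}, {'host': 'a'})
assert not obj_equal_prims({'host': 'a'}, {'host': 'b'})
```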
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/stein
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also
"Cannot load 'id' in the base class"
We should only map the ComputeNode when we've successfully created it.
** Affects: nova
Importance: Medium
Assignee: Matt Riedemann (mriedem)
Status: Triaged
** Tags: resource-tracker
--
Some related discussion in IRC today:
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-
nova.2019-08-09.log.html#t2019-08-09T17:21:09
** Changed in: nova
Status: In Progress => Opinion
--
** Changed in: nova
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1839621
Title:
Inappropriate split of transport_url string
Status in
vstack
Assignee: (unassigned) => Matt Riedemann (mriedem)
--
https://bugs.launchpad.net/bugs/16694
Just because you configure the API to allow resizing to the same host
doesn't mean the scheduler is going to pick the same host; e.g. if the
host the instance is on is already full, or does not have spare capacity
for the new flavor you're resizing *to*, then the scheduler will pick
another host.
There are some ideas about hard-deleting the compute node records when
they are (soft) deleted, but only for ironic nodes; that gets messy (and
is called from lots of places, like when a nova-compute service record is
deleted), so it's probably easiest to just revert this:
Public bug reported:
Noticed here:
https://logs.opendev.org/32/634832/43/check/nova-tox-functional-
py36/d4f3be5/testr_results.html.gz
With this test:
nova.tests.functional.notification_sample_tests.test_service.TestServiceUpdateNotificationSampleLatest.test_service_disabled
That's a simple
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/stein
Importance: Undecided
Status: New
**
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/rocky
Status: New => In Progress
** Changed in: nova/rocky
Assignee: (unassigned) => Balazs Gibizer (balazs-gibizer)
** Changed in: nova/rocky
Importance: Undecided => Low
--
** Also affects: nova/stein
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => Confirmed
** Changed in:
something like [api_database]/connection instead.
** Affects: nova
Importance: Low
Assignee: Matt Riedemann (mriedem)
Status: In Progress
** Tags: docs nova-manage
--
https://review.opendev.org/#/q/Icddbe4760eaff30e4e13c1e8d3d5d3f489dac3c4
goes back to stable/rocky so this should go back that far as well.
** Changed in: nova
Importance: Undecided => Medium
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/stein
** No longer affects: devstack
** Changed in: neutron
Importance: Undecided => Critical
--
https://bugs.launchpad.net/bugs/1838811
Title:
Public bug reported:
Various things come up in IRC every once in a while about configuration
options that need to be tweaked at large scale (blizzard, cern, etc)
which once you hit hundreds or thousands of compute nodes need to be
changed to avoid killing the control plane.
One such option is
** Affects: nova
Importance: Medium
Assignee: Matt Riedemann (mriedem)
Status: Triaged
** Affects: nova/rocky
Importance: Medium
Status: Confirmed
** Affects: nova/stein
Importance: Medium
Status: Confirmed
** Tags: neutron performance
** Changed in: nova
** Also affects: neutron
Importance: Undecided
Status: New
** Changed in: neutron
Status: New => Confirmed
** Changed in: devstack
Status: New => Confirmed
--
Public bug reported:
I'm seeing this all over the nova tox functional job console logs since
the placement client code in nova was changed to use the openstacksdk:
https://logs.opendev.org/61/673961/1/gate/nova-tox-functional-
py36/a4cb2af/job-output.txt.gz#_2019-08-01_17_51_24_070487
Technically this goes back to Pike but I'm not sure we care about fixing
it there at this point since Pike is in Extended Maintenance mode
upstream. Someone can backport it to stable/pike if they care to.
** Also affects: nova/stein
Importance: Undecided
Status: New
** Also affects:
Public bug reported:
This warning log from the ResourceTracker is logged quite a bit in CI
runs:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22is%20not%20being%20actively%20managed%20by%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22=7d
2601 hits in 7 days.
Actually ignore comment 15, claim_resources didn't raise
AllocationUpdateFailed until Stein:
https://github.com/openstack/nova/commit/37301f2f278a3702369eec957402e36d53068973
So the bug doesn't apply to Rocky or Queens.
** No longer affects: nova/rocky
** No longer affects: nova/queens
--
I'll be backporting the non-fill provider mapping part of this to rocky
and queens since the code fix and functional tests related to bug
1837955 rely on changes from the series that fixed this bug.
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects:
What version of os-brick are you using? There might be fixes in newer
releases of os-brick but you'd have to check the change log probably.
Lee Yarwood might be familiar with any related changes to os-brick as
well.
** Tags added: libvirt live-migration volumes
** Also affects: os-brick
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => In Progress
** Changed in: nova/queens
Importance: Undecided => Medium
** Changed in: nova
Importance: Undecided => Medium
--