** Changed in: nova
Status: In Progress => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1793446
Title:
Avoid Forcing the Translation of Translatable
Latching on, we had a similar failure on the manila gate today:
http://paste.openstack.org/show/730492/
** Also affects: manila
Importance: Undecided
Status: New
Reviewed: https://review.openstack.org/536288
Committed:
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=2c98c01e9907248de247b751a8deee1461f8be5d
Submitter: Zuul
Branch: master
commit 2c98c01e9907248de247b751a8deee1461f8be5d
Author: yanpuqing
Date: Thu Jan 4 08:23:42 2018
Public bug reported:
Description
===========
When the vendordata_providers option is set to DynamicJSON, the config
drive ceases to function and instances fail to spawn with a
'InvalidMetadataPath: /openstack/2013-10-17/vendor_data.json' error.
Specifically, this issue occurred when spawning
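A minimal configuration that reproduces the combination described above might look like this (sketch only; the dynamic vendordata endpoint URL is a placeholder, not taken from the report):

```ini
# nova.conf fragment (illustrative; endpoint URL is hypothetical)
[api]
vendordata_providers = DynamicJSON
vendordata_dynamic_targets = example@http://127.0.0.1:9999/vendordata
```

With this set, instances built with a config drive hit the InvalidMetadataPath error above when the vendor_data.json path is resolved.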
This bug was fixed in the package keystone - 2:14.0.0-0ubuntu2
---
keystone (2:14.0.0-0ubuntu2) cosmic; urgency=medium
* d/control: Set min python-oslo.log to rocky version (3.39.0) as
requirements.txt min version is too low (LP: #1793347).
-- Corey Bryant Thu, 20 Sep 2018
** Changed in: nova/queens
Status: Fix Committed => Fix Released
** Changed in: nova/pike
Status: Fix Committed => Fix Released
Public bug reported:
Hello All,
I just installed xenial on eucalyptus 4.4 and I get the following
message when I log in as root.
**
# This system is using the EC2 Metadata Service, but does not appear to #
# be running on
** Also affects: nova/queens
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Changed in: nova/queens
Status: New => Confirmed
** Changed in: nova/rocky
Status: New => Confirmed
** Changed in: nova/rocky
Reviewed: https://review.openstack.org/533168
Committed:
https://git.openstack.org/cgit/openstack/nova/commit/?id=eefb20e4658e17f91fa76b74fef6ff899babe51b
Submitter: Zuul
Branch: master
commit eefb20e4658e17f91fa76b74fef6ff899babe51b
Author: Brooks Kaminski
Date: Fri Jan 12 06:05:36 2018
This bug was fixed in the package nova - 2:18.0.0-0ubuntu5
---
nova (2:18.0.0-0ubuntu5) cosmic; urgency=medium
* d/control: Set min python-oslo.db to rocky version (4.40.0) as
requirements.txt min version is too low (LP: #1793353).
-- Corey Bryant Thu, 20 Sep 2018 11:26:53
This hasn't shown up in a long time so marking it invalid now.
** Changed in: nova
Status: Confirmed => Invalid
Public bug reported:
It is possible that placement gets out of sync, which can cause
scheduling problems that would go unnoticed. I've built out this script,
which would be nice to have as `nova-manage placement audit`:
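The core of such an audit can be sketched with in-memory data; a real tool would query the placement API and the nova cell databases instead, and the UUIDs below are made up:

```python
# Sketch: detect divergence between placement allocations and the set of
# instances nova actually knows about. All data here is illustrative.

def find_orphan_allocations(allocations, instances):
    """Return consumer UUIDs that hold allocations but have no instance."""
    return sorted(set(allocations) - set(instances))

def find_missing_allocations(allocations, instances):
    """Return instance UUIDs that have no allocation in placement."""
    return sorted(set(instances) - set(allocations))

# {consumer_uuid: resources} as placement would report them
allocations = {"uuid-a": {"VCPU": 2}, "uuid-b": {"VCPU": 4}}
# instance UUIDs known to nova
instances = {"uuid-a", "uuid-c"}

print(find_orphan_allocations(allocations, instances))   # ['uuid-b']
print(find_missing_allocations(allocations, instances))  # ['uuid-c']
```

An orphan allocation silently eats capacity on a host; a missing allocation lets the scheduler overcommit it.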
Based on consensus above, I've switched the bug to public and triaged it
as a class D report, tagging it as a potential hardening opportunity or
security-related improvement.
** Information type changed from Private Security to Public
** Description changed:
- This issue is being treated as a
Public bug reported:
This bug tracker is for errors with the documentation; use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:
- [x] This doc is inaccurate in this way: Step 7 of 'Add a key pair'
instructs end users to respond to the
Public bug reported:
The ironic driver does not use its local cache of node data for the
get_info call, which is used during the instance power sync. This
results in N API calls per power sync loop, where N is the number of
instances managed by the compute service doing the sync.
We should aim
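The caching idea can be sketched as follows; the class and callable names are illustrative, not the actual ironic driver internals:

```python
# Sketch: serve per-node power-state lookups from a local cache that is
# refreshed once per sync loop, instead of making one API call per instance.

class NodeCache:
    def __init__(self, fetch_all):
        self._fetch_all = fetch_all   # callable returning {node_id: data}
        self._cache = {}

    def refresh(self):
        # One bulk fetch per power sync loop instead of N single-node calls.
        self._cache = self._fetch_all()

    def get_info(self, node_id):
        return self._cache.get(node_id)

# Fake API client that counts how many calls were made.
calls = []
def fake_fetch_all():
    calls.append(1)
    return {"node-1": {"power_state": "power on"},
            "node-2": {"power_state": "power off"}}

cache = NodeCache(fake_fetch_all)
cache.refresh()
infos = [cache.get_info(n) for n in ("node-1", "node-2")]
print(len(calls))  # 1 -- a single bulk call served both lookups
```

The trade-off is staleness bounded by the sync interval, which the power sync loop already tolerates.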
Reviewed: https://review.openstack.org/582613
Committed:
https://git.openstack.org/cgit/openstack/glance/commit/?id=c58e5e02af76cad3967d22d14c63794c6d60456f
Submitter: Zuul
Branch: master
commit c58e5e02af76cad3967d22d14c63794c6d60456f
Author: Corey Bryant
Date: Fri Jul 13 09:20:04 2018
The related issue is that the scheduler was not filtering out deleted
compute node records when pulling them from the cell DB:
https://github.com/openstack/nova/blob/d87852ae6a1987b6faa3cb5851f9758b47ef4636/nova/objects/compute_node.py#L443
Because ^ that query doesn't filter out deleted
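In plain terms, the missing condition amounts to excluding soft-deleted rows before matching; a toy illustration with dict records standing in for the DB query:

```python
# Sketch: the real fix belongs in the DB query, but the effect is simply
# dropping soft-deleted compute node rows before the scheduler sees them.

records = [
    {"id": 1, "host": "cn1", "deleted": 0},
    {"id": 2, "host": "cn1", "deleted": 2},  # soft-deleted duplicate row
]

live = [r for r in records if not r["deleted"]]
print([r["id"] for r in live])  # [1]
```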
Public bug reported:
If you are taking a nova-compute service out of service permanently, the
logical steps would be:
1) Take down the service
2) Delete it from the service list (nova service-delete )
However, this does not delete the compute node record which stays
forever, leading to the
Are you sure you're stopping the nova-compute service before deleting
the actual service record via the API?
https://developer.openstack.org/api-ref/compute/#delete-compute-service
Otherwise the ResourceTracker in the compute process will recreate the
compute node.
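The ordering the comment insists on can be sketched as a dry run; the unit name and service ID below are placeholders, and the `run` wrapper just echoes instead of executing:

```shell
# Dry-run sketch: print the intended order instead of executing.
# On a real deployment, drop the wrapper and use the actual service ID
# from 'openstack compute service list'.
run() { echo "+ $*"; }

run systemctl stop nova-compute              # 1) stop the compute process first
run openstack compute service delete 123     # 2) then delete the service record
```

Reversing the two steps leaves a running ResourceTracker that recreates the compute node record you just deleted.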
The Service.destroy is called
Public bug reported:
ENV:
master
devstack multinode install:
1 controller node
2 compute nodes -> dvr_no_external (compute1, compute2)
2 network nodes -> dvr_snat (network1, network2)
Problem:
For an L3 DVR HA router, when the network node hosting the `master` router
goes down and comes back up.
The
Thanks everyone for the prompt analysis. I've triaged this as a class B2
report per the OpenStack VMT taxonomy: https://security.openstack.org/vmt-
process.html#incident-report-taxonomy
** Information type changed from Private Security to Public
** Changed in: ossa
Status: Incomplete =>
Public bug reported:
ENV:
master
devstack multinode install:
1 controller node
2 compute nodes -> dvr_no_external (compute1, compute2)
2 network nodes -> dvr_snat (network1, network2)
Problem:
For an L3 DVR HA router, the centralized floating IP NAT rules are not installed
in
Public bug reported:
My env is stable/queens.
When deleting a network/subnet, neutron deletes the subnet first and then
deletes the ipamsubnet.
The code is here:
https://github.com/openstack/neutron/blob/stable/queens/neutron/db/db_base_plugin_v2.py#L1029
The DB operations look like:
DELETE FROM subnets WHERE
Thanks very much for reporting this Tobias. I will have a fix coming for
the Ubuntu package. We use apt dist-upgrade in our upgrade tests which
won't uncover this.
** Also affects: nova (Ubuntu)
Importance: Undecided
Status: New
** Also affects: nova (Ubuntu Cosmic)
Importance:
Public bug reported:
When a user creates a keypair in Horizon but exceeds their keypair
quota, the user is logged out instead of being shown a helpful message.
Before the login page is presented, two errors are shown briefly; I had
to record the output to capture them:
'Error: Forbidden. Redirecting to
Thanks very much for reporting this Tobias. I will have a fix coming for
the Ubuntu package. We use apt dist-upgrade in our upgrade tests which
won't uncover this.
** Also affects: keystone (Ubuntu)
Importance: Undecided
Status: New
** Also affects: keystone (Ubuntu Cosmic)
Reviewed: https://review.openstack.org/603775
Committed:
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=157c507f70e244c990d0c087e6afea4531891c04
Submitter: Zuul
Branch: master
commit 157c507f70e244c990d0c087e6afea4531891c04
Author: Brian Haley
Date: Wed Sep 19 09:22:35
Public bug reported:
For example, do not do this:
# WRONG
LOG.info(_LI('some message: exception=%s'), six.text_type(exc))
Instead, use this style:
# RIGHT
LOG.info(_LI('some message: exception=%s'), exc)
refer to:
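The difference matters because stdlib logging applies `%`-formatting lazily, only when a record is actually emitted; a self-contained illustration without the `_LI` translation marker (logger name and message text are illustrative):

```python
import io
import logging

log = logging.getLogger("example")
log.setLevel(logging.INFO)
buf = io.StringIO()
log.addHandler(logging.StreamHandler(buf))

exc = ValueError("boom")

# Eager: converts the exception to a string whether or not the
# record will be emitted.
log.info('some message: exception=%s', str(exc))

# Preferred: pass the object; the conversion happens lazily inside
# the logging machinery, and only for records that are emitted.
log.info('some message: exception=%s', exc)

print(buf.getvalue())  # both lines read: some message: exception=boom
```

With translated messages the eager form is worse still, because it forces the conversion in the caller's locale rather than letting the log pipeline handle it.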