Public bug reported:
We multiple-create 3 instances, but the host has only enough resources for 1
instance.
nova-scheduler consumes the resources of the selected host for the first instance in
select_destinations.
After the multiple create fails, we try to boot 1 instance with the same flavor;
the host
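A toy illustration of the suspected accounting problem (hypothetical HostState class, not actual nova code): each instance in the request claims resources on the selected host, and a failed multi-create does not roll those claims back, so a later single boot is refused.

```python
# Hypothetical sketch, not nova's real scheduler: per-instance resource
# claims in select_destinations() with no rollback on failure.
class HostState:
    def __init__(self, free_ram_mb):
        self.free_ram_mb = free_ram_mb

    def consume_from_request(self, ram_mb):
        # Called once per instance while selecting destinations.
        self.free_ram_mb -= ram_mb

host = HostState(free_ram_mb=2048)
flavor_ram = 2048

# Multi-create of 3 instances: only the first fits, but all three claims land.
for _ in range(3):
    host.consume_from_request(flavor_ram)

# Free RAM is now negative; a later single boot with the same flavor is
# refused even though the host could really hold one instance.
print(host.free_ram_mb)  # -4096
```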
Public bug reported:
According to the code and its comments, it seems that we don't
want to support the type filter for policy list; refer to:
https://github.com/openstack/keystone/blob/master/keystone/policy/core.py#L64
However, the controller defines the type filter:
Public bug reported:
The scenario:
1. Create a VM using a bootable volume.
2. Delete this VM.
3. Restart the nova-compute service while the VM's task state is deleting.
When nova-compute comes back up, the VM is deleted successfully, but the bootable
volume is still in the in-use state and cannot be deleted using cinder
Public bug reported:
Disabling a user in LDAP breaks user-list for a project.
Steps to reproduce:
* Create a testuser user in the LDAP backend for keystone.
* Check that the user exists in the user list.
* Assign some role to this user in any test project.
* Check that this user appears in keystone user-list
The filtering by 'type' is not being done by the manager/driver. It's
being done at the controller level, in PolicyV3.wrap_collection(..)
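A minimal sketch of the pattern the comment describes, with illustrative names (not keystone's actual API): the driver returns every policy, and the controller's wrap_collection() filters by 'type' afterwards.

```python
# Hypothetical names, for illustration only: the driver level has no
# 'type' filter, so filtering happens post-hoc at the controller level.
def list_policies_driver():
    # Driver/manager level: returns everything, no filter support.
    return [
        {'id': '1', 'type': 'application/json'},
        {'id': '2', 'type': 'text/plain'},
    ]

def wrap_collection(refs, filters):
    # Controller level: the filtering the comment points at.
    for attr, value in filters.items():
        refs = [r for r in refs if r.get(attr) == value]
    return refs

print(wrap_collection(list_policies_driver(), {'type': 'text/plain'}))
# [{'id': '2', 'type': 'text/plain'}]
```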
** Changed in: keystone
Status: In Progress => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is
Public bug reported:
The term name is used incorrectly and is misleading in the pluggable
extensions settings documentation and the enabled files. The value that
needs to be specified is the slug not the name. This correction will aid
in developers and deployers using the correct value.
Change
Public bug reported:
When nova secgroup-list --all-tenants is run with admin credentials sourced, it
returns the secgroups of only the admin tenant.
It should actually list the secgroups of all tenants present.
This bug is reproduced in stable/icehouse and stable/juno
Steps to reproduce this bug:
Public bug reported:
In bug https://bugs.launchpad.net/neutron/+bug/1288923, the change reserves
the DHCP port so it can be reused after remove-network-from-agent.
The code is:
port['device_id'] = constants.DEVICE_ID_RESERVED_DHCP_PORT
self.update_port(context,
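A hedged sketch of the reuse pattern the change introduces (the full update_port() call is truncated in the report; the helpers below are illustrative, not neutron's code): a port whose device_id equals the reserved marker is parked rather than deleted, so a DHCP agent on another host can later claim it.

```python
# Illustrative stand-ins, not neutron's API: release marks the port
# reserved instead of deleting it; claim hands it to a new DHCP agent.
DEVICE_ID_RESERVED_DHCP_PORT = 'reserved_dhcp_port'

ports = [{'id': 'p1', 'device_id': 'dhcp-agent-1'}]

def release_dhcp_port(port):
    # Instead of deleting, mark the port as reserved for reuse.
    port['device_id'] = DEVICE_ID_RESERVED_DHCP_PORT

def claim_reserved_dhcp_port(ports, new_device_id):
    # A later agent takes over the parked port, keeping its fixed IP.
    for port in ports:
        if port['device_id'] == DEVICE_ID_RESERVED_DHCP_PORT:
            port['device_id'] = new_device_id
            return port
    return None

release_dhcp_port(ports[0])
print(claim_reserved_dhcp_port(ports, 'dhcp-agent-2'))
# {'id': 'p1', 'device_id': 'dhcp-agent-2'}
```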
I believe everyone is on oslo.concurrency now, so this should no longer
be an issue anywhere.
** Changed in: cinder
Status: In Progress => Fix Released
Given the overwhelming consensus that this isn't exploitable, I've
switched the bug to public and marked the security advisory task won't
fix so this can just be worked as a normal bug/hardening opportunity.
** Information type changed from Private Security to Public
** Changed in: ossa
Public bug reported:
When the metadata server (nova-api:8775 by default) gets a request without the
X-Instance-ID-Signature header, the server errors out with the following
stacktrace:
2015-01-08 18:10:51.955 INFO nova.metadata.wsgi.server [-] 127.0.0.1 GET /
HTTP/1.1 status: 200 len: 215 time:
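A minimal sketch of the kind of guard that avoids the stacktrace (a hypothetical handler, not nova's actual metadata code): reject requests that carry an instance ID but no signature, instead of dereferencing the missing header.

```python
# Hypothetical WSGI-style handler: validate header presence up front and
# return 400 rather than crashing on a missing X-Instance-ID-Signature.
def handle_metadata_request(environ):
    instance_id = environ.get('HTTP_X_INSTANCE_ID')
    signature = environ.get('HTTP_X_INSTANCE_ID_SIGNATURE')
    if instance_id and not signature:
        return '400 Bad Request', 'X-Instance-ID-Signature header is missing'
    # ... signature validation and metadata lookup would follow here ...
    return '200 OK', 'metadata'

print(handle_metadata_request({'HTTP_X_INSTANCE_ID': 'uuid'}))
# ('400 Bad Request', 'X-Instance-ID-Signature header is missing')
```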
Public bug reported:
The problem I experienced and a solution are described under the
following link:
https://ask.openstack.org/en/question/57296/juno-centos-7-buildabortexception-build-of-instance-aborted-failed-to-allocate-the-networks-not-rescheduling/
It was also reported here:
Public bug reported:
The logout dropdown is blocked by a message tip;
if you close one message tip, the next one comes to the top again : (
** Affects: horizon
Importance: Undecided
Assignee: LIU Yulong (dragon889)
Status: In Progress
** Attachment added: message block the logout
Public bug reported:
In nova/notifications.py(370), info_from_instance():
AttributeError: 'Instance' object has no attribute 'get_flavor' is thrown on:
instance_type = instance.get_flavor()
The stacktrace is:
- self.compute_api.update(context, local_instance, **base_options)
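A hypothetical guard illustrating the failure mode (not nova's fix): when an Instance object lacks get_flavor(), calling it unconditionally raises AttributeError, so the lookup can fall back to a plain attribute.

```python
# Stand-in class, for illustration: an object without a get_flavor() method.
class OldInstance:
    flavor = {'name': 'm1.small'}

def instance_type_of(instance):
    # Prefer the method when present; fall back to the attribute otherwise.
    getter = getattr(instance, 'get_flavor', None)
    if callable(getter):
        return getter()
    return instance.flavor

print(instance_type_of(OldInstance()))  # {'name': 'm1.small'}
```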
Public bug reported:
When specifying a wrong local_ip with tunnel type 'vxlan' that doesn't belong
to the host, a tunnel is created where local_ip is the wrong one and
remote_ip is the right one.
There should be a sanity check that the IP address in local_ip belongs
to the host.
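One way such a sanity check could be implemented (an assumption, not the fix neutron adopted): try binding a UDP socket to local_ip, since the bind only succeeds if the address is actually configured on this host.

```python
# Sketch of a local-address sanity check: binding to an IP that is not
# configured on the host raises OSError (EADDRNOTAVAIL).
import socket

def ip_belongs_to_host(ip):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind((ip, 0))  # port 0 = any free port; only the IP matters
        return True
    except OSError:
        return False
    finally:
        sock.close()

print(ip_belongs_to_host('127.0.0.1'))  # True on any host with loopback up
print(ip_belongs_to_host('192.0.2.1'))  # False unless TEST-NET-1 is configured
```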
Public bug reported:
Seen on RDO Juno, running on CentOS 7.
Steps to reproduce:
- Set admin_workers=1 and public_workers=1 in /etc/keystone/keystone.conf
- Start the keystone service: `systemctl start openstack-keystone`
- Start a 'persistent' TCP connection to keystone: `telnet localhost 5000
Public bug reported:
Booting an image that was snapshotted from a VM with an ephemeral disk
fails. This is due to the fact that the wrong root disk is uploaded!
** Affects: nova
Importance: Critical
Assignee: Gary Kotton (garyk)
Status: In Progress
** Changed in: nova
The issue is here: bulk termination stops when an instance is stopped
(e.g. manually via the command line) after the list of instances is displayed.
** Changed in: horizon
Status: Expired => New
Let's track the filesystem: case on bug 1408663, for clarity.
** Changed in: ossa
Status: In Progress => Fix Released
** Changed in: glance
Status: In Progress => Fix Released
*** This bug is a security vulnerability ***
Public security bug reported:
Jin Liu reported that OSSA-2014-041 (CVE-2014-9493) only fixed the
vulnerability for swift: and file: URI, but overlooked filesystem: URIs.
Please see bug 1400966 for historical reference.
** Affects: glance
Public bug reported:
ubuntu@ubuntu-ThinkCentre-M93p:~$ nova list --te
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
Public bug reported:
When trying to downgrade from version 61, it fails with an AttributeError.
$ keystone-manage db_sync 60
2015-01-08 08:29:56.494 CRITICAL keystone [-] AttributeError: 'MetaData' object
has no attribute 'c'
2015-01-08 08:29:56.494 TRACE keystone Traceback (most recent call
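The traceback's "'MetaData' object has no attribute 'c'" suggests the downgrade script uses a MetaData object where a Table is needed: in SQLAlchemy, the `.c` column collection lives on Table, not on MetaData. An illustrative table (not keystone's actual schema):

```python
# In SQLAlchemy, columns are reached via Table.c; MetaData only holds the
# tables themselves, which is why MetaData.c raises AttributeError.
from sqlalchemy import Column, Integer, MetaData, Table

meta = MetaData()
example = Table('example', meta, Column('id', Integer, primary_key=True))

print(hasattr(example, 'c'))  # True  - columns live on the Table
print(hasattr(meta, 'c'))     # False - the source of the error
print('id' in meta.tables['example'].c)  # correct route via the MetaData
```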
Public bug reported:
Problem description
===================
The Nova REST API returns connection info (a websocket URL) for the server
action ``os-getSerialConsole`` although the nova-serialproxy service
is *not* activated.
Steps to reproduce
==================
* Configure in ``nova.conf``
bknudson pointed out the real issue: sqlalchemy-migrate is always
logging deprecation warnings; that's why moving the deprecation warnings
fixture in nova to after the db fixture fixed the problem for nova:
https://github.com/stackforge/sqlalchemy-
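A generic illustration of the ordering issue (not nova's actual fixtures): a filter that escalates DeprecationWarning to an error, installed *before* a library that emits such warnings during setup, trips on that library's own warnings, so the fixture has to be installed after the noisy setup step.

```python
# Generic sketch: fixture-vs-setup ordering with the stdlib warnings module.
import warnings

def setup_noisy_library():
    # Stands in for sqlalchemy-migrate emitting a DeprecationWarning.
    warnings.warn("migrate is deprecated", DeprecationWarning)

def install_warnings_fixture():
    # Turn DeprecationWarning into a hard error, as a test fixture might.
    warnings.simplefilter('error', DeprecationWarning)

# Wrong order: the fixture makes the library's own warning fatal.
install_warnings_fixture()
try:
    setup_noisy_library()
    tripped = False
except DeprecationWarning:
    tripped = True
print(tripped)  # True
```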