Public bug reported:
Description of problem:
The deletion of an image fails when Glance is configured to work with the
RBD store, using the configuration settings described in this manual:
http://docs.ceph.com/docs/master/rbd/rbd-openstack/#juno
It seems the glance client gets stuck.
Public bug reported:
L3 agent drivers are singletons. They're created once, and hold
self.l3_agent. During testing, the agent is tossed away and re-built,
but the driver singletons are still pointing at the old agent and its old
configuration. Since each agent has its own state_path (and dependent
Public bug reported:
If a non-admin tenant creates a network and a subnet, and the delete
operation is performed by the admin tenant, the PLUMgrid plugin ends up
passing an incorrect tenant_id to the backend, which fails the calls.
The fix would be to get the correct tenant UUID in the delete subnet
operation.
**
I don't think this bug belongs to the neutron project itself.
** Changed in: neutron
Status: New => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403823
Title:
tests in
** Project changed: neutron => devstack
--
https://bugs.launchpad.net/bugs/1403625
Title:
devstack defaults to VXLAN even though ENABLE_TENANT_TUNNELS is False
in
** Also affects: tempest
Importance: Undecided
Status: New
** Tags added: gate-failure
--
https://bugs.launchpad.net/bugs/1403291
Title:
What you get through the API is just a reflection of DB state, in which a
port has an IP address regardless of subnet settings.
I don't think the output should be affected by the state of the backend.
So IMO, we don't need to fix this.
** Changed in: neutron
Status: New => Opinion
--
Public bug reported:
http://logs.openstack.org/01/141001/4/gate/gate-tempest-dsvm-neutron-
full/0fcd5ec/console.html.gz
2014-12-19 14:01:31.371 | SSHTimeout: Connection to the 172.24.4.69
via SSH timed out.
** Affects: neutron
Importance: Undecided
Status: New
--
Public bug reported:
I've generated a FreeBSD qcow2 image including base and kernel, with
cloud-init and its dependencies. cloud-init runs when the instance boots,
but the growpart module fails due to what appear to be two separate
problems. One of the dependencies is the gpart port/pkg which,
Public bug reported:
In add_static_nat(...):
    LOG.debug("MidoClient.add_static_nat called: "
              "tenant_id=%(tenant_id)s, chain_name=%(chain_name)s, "
              "from_ip=%(from_ip)s, to_ip=%(to_ip)s, "
              "port_id=%(port_id)s, nat_type=%(nat_type)s",
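For reference, the named-placeholder style of that call can be exercised as a
minimal, self-contained sketch; the values below are hypothetical stand-ins
for MidoClient's real arguments:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

# Hypothetical values standing in for MidoClient's arguments.
params = {'tenant_id': 't-1', 'chain_name': 'pre-routing',
          'from_ip': '10.0.0.1', 'to_ip': '192.168.0.1',
          'port_id': 'p-1', 'nat_type': 'dnat'}

# Lazy interpolation: the dict is passed as a logging argument and is
# only formatted if a DEBUG record is actually emitted.
LOG.debug("MidoClient.add_static_nat called: "
          "tenant_id=%(tenant_id)s, chain_name=%(chain_name)s, "
          "from_ip=%(from_ip)s, to_ip=%(to_ip)s, "
          "port_id=%(port_id)s, nat_type=%(nat_type)s", params)
```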
Public bug reported:
A shared-storage live migration failed in the function
_post_live_migration() because the umount command failed, but the status
of the instance is still left as migrating.
The log is as follows:
2014-12-19 16:45:32.741 6127 INFO nova.compute.manager [-] [instance:
Reviewed: https://review.openstack.org/143215
Committed:
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=14e6c86d5a457dbbb90690d55655a4532919255a
Submitter: Jenkins
Branch: master
commit 14e6c86d5a457dbbb90690d55655a4532919255a
Author: Matthew Kassawara
Public bug reported:
ml2.db.get_dynamic_segment() includes this line:
    LOG.debug("No dynamic segment %s found for "
              "Network:%(network_id)s, "
              "Physical network:%(physnet)s, "
              "segmentation_id:%(segmentation_id)s",
              {'network_id': network_id,
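The format string above mixes a bare %s with named %(...)s placeholders. When
such a string is rendered against a dict, Python substitutes the entire dict
for the bare %s rather than a segment identifier. A minimal sketch, with
hypothetical values:

```python
# Mixing a bare %s with named %(...)s placeholders, as in
# get_dynamic_segment() above. Values are hypothetical.
fmt = ("No dynamic segment %s found for "
       "Network:%(network_id)s, "
       "Physical network:%(physnet)s, "
       "segmentation_id:%(segmentation_id)s")
args = {'network_id': 'net-1', 'physnet': 'physnet1',
        'segmentation_id': 100}

rendered = fmt % args
print(rendered)  # the whole dict leaks into the %s slot
```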
Public bug reported:
cisco.db.n1kv_db_v2._validate_segment_range_uniqueness() includes these
lines:
    msg = (_("NetworkProfile name %s already exists"),
           net_p["name"])
    LOG.error(msg)
    raise n_exc.InvalidInput(error_message=msg)
As written, msg is a tuple, and the various logging
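The tuple mistake can be reproduced in isolation; a minimal sketch, using a
stand-in for the _() translation function:

```python
# Stand-in for the gettext _() translation function for this sketch.
_ = lambda s: s
net_p = {'name': 'profile-1'}

# As in the cisco n1kv code: the % operator is never applied, so this
# builds a 2-tuple, not a message string.
broken = (_("NetworkProfile name %s already exists"),
          net_p['name'])

# The fix: apply % so msg is an interpolated string.
fixed = _("NetworkProfile name %s already exists") % net_p['name']

print(type(broken).__name__)  # tuple
print(fixed)
```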
Public bug reported:
There are a small number of examples of eager interpolation in
neutron:
logging.debug("foo %s" % arg)
These should be converted to perform the interpolation lazily within
the logging function, since if the severity is below the logging level
then the interpolation can be
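The difference can be demonstrated with an argument that is expensive (here:
fatal) to format; the Expensive class and names below are illustrative only:

```python
import logging

logging.basicConfig(level=logging.INFO)  # DEBUG records are discarded
LOG = logging.getLogger(__name__)

class Expensive:
    """Stand-in for an argument that is costly (here: fatal) to format."""
    def __str__(self):
        raise RuntimeError("should not be formatted")

arg = Expensive()

# Eager interpolation formats immediately, even though the DEBUG record
# would be discarded -- here it would raise:
#   LOG.debug("foo %s" % arg)

# Lazy interpolation: logging only calls str(arg) if the record is
# actually emitted, so at INFO level this is a no-op.
LOG.debug("foo %s", arg)
```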
Public bug reported:
Cannot delete an instance if the instance's rescue lvm cannot be
found.
How to reproduce:
1. Configure images_type lvm for the libvirt driver:
[libvirt]
images_type = lvm
images_volume_group = stack-volumes-lvmdriver-1  -- the lvm used
2. Rescue the instance; this will generate
I met a similar issue recently, with a shared-storage ceph backend.
'ImageBusy: error removing image' was just another exception raised when
cleaning up the instance on the target host after an evacuate failure.
The image is used by the original instance from the ceph side, so ceph
thinks it is in use. In normal
Public bug reported:
Evacuate provides a way to recover an instance from a failed compute
node. The compute manager changes the instance's host and node name to
the target host before doing the real action, '_rebuild_default_impl'.
We didn't catch exceptions from _rebuild_default_impl, so any evacuate
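The missing error handling the report describes can be sketched minimally;
the names below (evacuate, rebuild_impl, vm_state, fault) are hypothetical
illustrations, not nova's actual API:

```python
# Hypothetical sketch: catch failures from the rebuild step and record
# them on the instance instead of leaving it stuck in a migrating state.
def evacuate(instance, rebuild_impl):
    try:
        rebuild_impl(instance)
    except Exception as exc:
        # Surface the failure rather than swallowing it.
        instance['vm_state'] = 'error'
        instance['fault'] = str(exc)
        raise

instance = {'vm_state': 'active'}

def failing_rebuild(inst):
    raise RuntimeError("umount failed")

try:
    evacuate(instance, failing_rebuild)
except RuntimeError:
    pass

print(instance['vm_state'])  # 'error'
```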
Public bug reported:
If an instance is booted from a volume, then shelving the instance sets
the status as SHELVED_OFFLOADED, and the instance files are deleted
properly from the base path. When you call unshelve on the instance, it
fails on the conductor with the error "Unshelve attempted but the
image_id is