Public bug reported:
Getting the following error in nova-api logs when neutron doesn't
support floating IPs
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack File
"/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 125, in
__call__
2015-09-29 10:04:50.551 6376 TRACE no
Public bug reported:
In our nova-compute logs we get a ton of these messages over and over
2015-10-01 11:01:54.781 30811 WARNING nova.compute.manager [req-
f61f4f85-72e7-481b-a8a3-90551bdc4b58 - - - - -] [instance: 75f733b5
-842e-4bde-9570-efa2735e6f12] Instance build timed out. Set to error
stat
Public bug reported:
Using the metadata to get the security groups for an instance by
curl http://169.254.169.254/latest/meta-data/security-groups
Doesn't work when you are using neutron. This is because the metadata
server is hard coded to look for security groups in the nova DB.
** Affects: n
Public bug reported:
Since upgrading to icehouse we consistently get reply_x queues
filling up with unacked messages. To fix this I have to restart the
service. This seems to happen when something is wrong for a short period
of time and it doesn't clean up after itself.
So far I've seen the i
The `_` function is installed in the entry of the keystone
** Changed in: keystone
Status: In Progress => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1250309
Title:
Actually, I've just realised this is a simple spelling mistake, created
a new bug and will mark this invalid.
See bug 1253510
** Changed in: nova
Status: New => Invalid
Public bug reported:
When I try and shelve an instance I get the following error on the
compute node:
2013-12-04 10:39:59.716 18800 ERROR nova.openstack.common.rpc.amqp
[req-d87825e7-9c2f-4735-94e2-4c470ee0edab d9646718471b46aeb5fd94c702336ca9
0bdf024c921848c4b74d9e69af9edf08] Exception during
Public bug reported:
When you unshelve an instance that has been offloaded it doesn't set:
OS-EXT-SRV-ATTR:host
OS-EXT-SRV-ATTR:hypervisor_hostname
** Affects: nova
Importance: Undecided
Status: New
Public bug reported:
When unshelving a shelved instance that has been offloaded to glance it doesn't
actually use the image stored in glance.
It actually uses the image that the instance was booted up with in the first
place.
This seems a bit crazy to me so it would be great if someone could
re
Public bug reported:
Few issues with LXC and volumes that relate to the same code.
* Hard rebooting an instance will make attached volumes disappear from
the libvirt XML
* Booting an instance specifying an extra volume (passing in
block_device_mappings on server.create) will result in the volume not
be
Public bug reported:
We currently have an installation with a single region. We want to
expand to another region but this is proving difficult.
When I add a new endpoint for our new region, clients that are
configured not to use a region can (depending on the ID of the endpoint)
all of a sudden n
Public bug reported:
When using the filters on the instance table view, filtering by several
attributes like availability_zone or key_name has no effect. This is
because the microversion used is not high enough.
Using these filters came in from microversion 2.83
https://docs.open
Public bug reported:
When I try to filter instances by the vcpus filter I get the following
error:
'vcpus' does not match any of regexes: '^_' (HTTP 400)
When I look at the nova compute api spec I see that filtering by vcpus
isn't supported https://docs.openstack.org/api-
ref/compute/?expanded=l
Public bug reported:
Don't seem to be able to get floating IP information from the openstack
metadata network_data.json.
I can get this via EC2 metadata and it would be good if it was in the
openstack metadata too
** Affects: nova
Importance: Undecided
Status: New
:tid 140634143590144]
[remote 172.26.25.159:58730] TypeError: tenant_floating_ip_allocate() takes
from 1 to 3 positional arguments but 4 were given
[Thu Jun 06 23:24:09.770356 2019] [wsgi:error] [pid 26848:tid 140634143590144]
[remote 172.26.25.159:58730]

** Affects: horizon
Importance: Undecided
** Changed in: nova
Status: Expired => Confirmed
** Changed in: nova
Status: Confirmed => New
https://bugs.launchpad.net/bugs/1741810
Title:
Public bug reported:
When running multiple cells, when calling evacuate on an instance the
scheduler is not restricting candidate hosts to the same cell as the
instance.
This is affecting us on the Stein version, but looking at the code it
appears to affect master too
** Affects:
Public bug reported:
We have a hypervisor that needs to go down for maintenance. There are 2
instances on the host within a server group with affinity.
It seems to be impossible to live migrate them both to a different host.
Looks like there used to be a force argument to live migration but this
** Also affects: cloud-archive
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1837252
Title:
[OSSA-2019-004] Ageing time of 0
** Also affects: neutron
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1746627
Title:
Reverse floating IP records are not removed when floati
Public bug reported:
We recently rolled out a config change updating max_password_length to
avoid all the log messages. We set this to 54 as mentioned in the
release notes, which we discovered was a big mistake: it broke everyone
authenticating with existing application credentials.
Ther
Been looking into this, this is definitely a bug. The issue is because
cell conductors still need to be configured with the API database to
allow for some upcalls
https://docs.openstack.org/nova/latest/admin/cells.html#operations-
requiring-upcalls . Unless this doc is out of date?
When api_databa
Public bug reported:
We use ML2 with linuxbridge and ovn mech drivers. When upgrading to yoga
DHCP stopped working as the DHCP extension was disabled.
** Affects: neutron
Importance: Undecided
Status: In Progress
Public bug reported:
Running
keystone-manage fernet_rotate --keystone-user root --keystone-group
keystone
Will not work as expected due to faulty logic when the uid is 0,
because 0 evaluates as False in Python.
The new 0 key will be owned by root:root, not root:keystone.
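The pitfall described here is easy to reproduce in isolation. A minimal sketch (illustrative only — `pick_owner` is a made-up helper, not keystone's actual code):

```python
# Illustrative only: a truthiness check treats uid 0 (root) as "unset",
# because 0 == False in Python, so the fallback owner silently wins.
def pick_owner(uid, fallback_uid):
    buggy = uid if uid else fallback_uid               # loses root's uid 0
    fixed = uid if uid is not None else fallback_uid   # explicit None check
    return buggy, fixed

print(pick_owner(0, 1000))  # → (1000, 0): the buggy path discards uid 0
```

The fix is to test `is not None` (or compare against a sentinel) rather than relying on truthiness whenever 0 is a legitimate value.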
** Affects: keystone
Importa
Public bug reported:
Upgrading neutron from rocky -> stein we get a considerable slowdown when
listing all security groups for a project: it goes from ~2 seconds to almost 2
minutes. Looking into the code it appears to be very inefficient because it
gets all rules from the DB and then filters
Public bug reported:
Environment:
Stein nova-conductor having set upgrade_levels to rocky
Rocky nova-compute
Boot an instance with a flavour that has a pci_device
Error:
Failed to publish message to topic 'nova': maximum recursion depth
exceeded: RuntimeError: maximum recursion depth exceeded
** Also affects: neutron
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1837339
Title:
CIDR's of the form 12.34.56.78/0 should be an error
St
Public bug reported:
I'm trying to allow users to see what roles they have on all of their
projects.
It would seem that this should do this in policy
"identity:list_role_assignments": "rule:admin_or_monitoring or
project_id:%(scope.project.id)s or user_id:%(scope.user.id)s"
However this doesn't
Public bug reported:
When configuring placement service for nova-computes it is required to
put in the region name for the placement services.
When talking to other services like neutron or cinder specifying a
region name isn't required and if you just have 1 region (possibly the
most common type
Public bug reported:
We have bridge_mappings set for linuxbridge agent to use a non standard
bridge naming convention.
This works everywhere apart from setting zone rules in iptables.
The code in neutron/agent/linux/iptables_firewall.py doesn't take the
mappings into account and just uses the
Public bug reported:
The setting CREATE_INSTANCE_FLAVOR_SORT looks like it has no effect
anymore, possibly due to the change to the angular launch-instance
version?
This is using the queens dashboard
** Affects: horizon
Importance: Undecided
Status: New
Public bug reported:
In the launch instance view the drop down list for selecting an
availability zone is in a random order. Would be good if this was sorted
alphabetically.
This is in Queens dashboard
** Affects: horizon
Importance: Undecided
Assignee: Sam Morrison (sorrison)
File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 6436,
in _archive_if_instance_deleted
{'table': table.__tablename__,
AttributeError: 'Table' object has no attribute '__tablename__'
** Affects: nova
Importance: Undecided
Public bug reported:
Currently no way to create an image with visibility of community or
shared. This has been supported in glance for a few releases now.
** Affects: horizon
Importance: Undecided
Status: New
Public bug reported:
Currently, listing all flavors, including disabled and non-public
flavors, is hard coded to only allow context.is_admin.
This should be controlled by policy so operators can allow other roles
to list these types of flavors too.
** Affects: nova
Importance: Undecided
atest stable/pike the latest version there is 4.17
https://github.com/openstack/nova/blob/6ef30d5078595108c1c0f2b5c258ae6ef2db1eeb/nova/compute/rpcapi.py#L330
** Affects: nova
Importance: Undecided
Assignee: Sam Morrison (sorrison)
Status: In Progress
Public bug reported:
Nova by default looks for the cinder endpoint by looking for a service
type of volumev3 that also has a name of "cinderv3".
I think it should only be looking for an endpoint with a type of
volumev3.
The name attribute of an endpoint should be free for the operator to set.
I have just done the N -> O upgrade and have seen this error.
We have done the expand and migrate db syncs.
We have 3 newton keystones and when I added an ocata one I saw this
issue on the ocata one.
It's happening on a POST to /v3/auth/tokens and is affecting about 3% of
requests (we have around
Public bug reported:
In Newton nova removed the nova-cert process. This means downloading the
ec2 certificates doesn't work anymore. Getting the EC2 credentials from
keystone still works so just need to only return EC2 creds now.
** Affects: horizon
Importance: Undecided
Status: New
icate all the keystone api in horizon it would be good
if it could be supported natively.
** Affects: horizon
Importance: Undecided
Assignee: Sam Morrison (sorrison)
Status: In Progress
Public bug reported:
I'm wanting to change the domain of a project but it doesn't look like
this is supported via the API.
Changing via some SQL in the DB works fine so would be great if this can
be achieved via the API.
** Affects: keystone
Importance: Undecided
Status: New
Public bug reported:
I'm trying to figure out which instances are using a specific security
group but it doesn't look possible via the API (unless I'm missing
something).
The only way to do this is by looking in the database and doing some sql
on the securitygroupportbindings table.
Is there ano
Sorry what you are explaining is the reverse of what I want and doesn't
help, I have a security group ID and I want to know what instances have
that security group applied.
We have thousands of instances and querying each one to see if they have
the security group applied is very inefficient and t
OK I've figured it out, very sorry, not a bug. In newton we had
mech_driver set to midonet_ext and in ocata this is now just midonet
again so this is why everything was failing.
** Changed in: networking-midonet
Status: New => Invalid
** Changed in: neutron
Status: New => Invalid
-
Public bug reported:
Given a security group ID I would like an API to determine which devices
(nova instances) use this security group.
Currently the only way to do this is by looking in the database and
doing some SQL on the securitygroupportbindings table.
** Affects: neutron
Importance:
Have submitted an RFE for this at
https://bugs.launchpad.net/neutron/+bug/1734026
** Changed in: neutron
Status: New => Invalid
https://bugs.launchpad.net/bugs/173374
Public bug reported:
In our ML2 environment we have 2 mech drivers, linuxbridge and midonet.
We have linuxbridge and midonet networks bound to instances on the same
compute nodes. All works well except the midonet ports get marked as
DOWN. I've traced this back to the linuxbridge agent.
It seems
Public bug reported:
We have a special read_only role in keystone and have given that role
the ability to list all instances via the policy rule:
index:get_all_tenants.
It can't however list all instances on a specific host for instance. I'm
not sure if a new policy rule should be added or it sho
Public bug reported:
We have a bunch of external shared provider networks that users can
attach a port to and get direct access to the Internet.
We also have a bunch of floating IP networks that users can use for
floating IPs.
The two types of networks are shared and external.
The issue is tha
Reopening this bug, going through an upgrade from liberty -> mitaka and
getting this bug
** Changed in: nova
Status: Expired => New
https://bugs.laun
This affects mitaka, not sure how to make it say that in launchpad
** Changed in: nova
Status: Invalid => New
https://bugs.launchpad.net/bugs/1585515
Public bug reported:
In my nova.conf I have
firewall_driver = nova.virt.firewall.NoopFirewallDriver
When I start nova-api-metadata it installs some iptables rules (and
blows away what is already there)
I want to make it not manage any iptables rules by using the noop driver
however it has no af
Public bug reported:
Just installed the Newton version of the openstack dashboard and when
listing images the buttons to filter by category no longer appear.
I can see the option IMAGES_LIST_FILTER_TENANTS appears at:
openstack_dashboard/dashboards/project/images/images/tables.py
So it looks as
Also noted this affects nova too, pagination works but href links to
things like flavours are returned as http links not https
** Also affects: nova
Importance: Undecided
Status: New
Public bug reported:
The generated v3 openrc file doesn't specify either PROJECT_DOMAIN_ID or
PROJECT_DOMAIN_NAME.
If your project isn't in the default domain then this openrc file won't work
for you.
** Affects: horizon
Importance: Undecided
Assignee: Sam Mor
k_dashboard/api/nova.py", line 504, in
server_vnc_console
instance_id, console_type)['console'])
KeyError: 'console'
** Affects: horizon
Importance: Undecided
Assignee: Sam Morrison (sorrison)
Status: In Progress
** Also affects: neutron
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1555384
Title:
ML2: routers and multiple mechanism drivers
Status in
Public bug reported:
We have 11,000 users, doing a `client.users.list()` takes around 14-20
seconds
We have 14,000 projects and doing a `client.projects.list()` takes
around 7-10 seconds.
So you can see we have more projects however it takes about double the
time to list users.
I should mention
Public bug reported:
With Kilo doing a user-list on V2 or V3 would take approx. 2-4 seconds
In Mitaka it takes 19-22 seconds. This is a significant slow down.
We have ~9,000 users
We also changed from going under eventlet to moving to apache wsgi
We have ~10,000 project and this api (project-l
Public bug reported:
I'm trying to allow a certain role to do certain things to any projects
instances through policy.json and it isn't working as expected.
I've set the following policies to allow my role to do a "nova show" but
with no luck, the same is with any other instance action like start
Public bug reported:
I want to allow a special role to update the owner attribute of an
image.
It looks as if this action is hard coded to only allow context
"is_admin" to do this operation.
This should be configurable via policy
** Affects: glance
Importance: Undecided
Status: New
Public bug reported:
When Kilo api cell sends an instance_build to a juno compute cell it
sends down objects, juno is expecting primitives.
** Affects: nova
Importance: Undecided
Status: New
Public bug reported:
On a clean kilo install using source security groups I am seeing the
following trace on boot and delete
a2413f7] Deallocating network for instance _deallocate_network
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:2098
2015-08-14 09:46:06.688 11618 ERROR oslo_mess
Public bug reported:
Get the following error when upgrading my juno DB to kilo
neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade kilo
INFO [alembic.migration] Context impl MySQLImpl.
INFO [alembic.migration] Will assume non-
Public bug reported:
We have an ML2 environment with linuxbridge and midonet networks. For L3
we use the midonet driver.
If a user tries to bind a linuxbridge port to a midonet router it
returns the following error:
{"message":"There is no NeutronPort with ID eddb3d08-97f6-480d-
a1a4-9dfe3d22e62
Public bug reported:
Get the following error running DB migration when upgrading from kilo
-> mitaka
2016-04-20 09:31:37.560 10471 INFO migrate.versioning.api [-] 90 -> 91...
2016-04-20 09:31:37.822 10471 CRITICAL keystone [-] OperationalError:
(_mysql_exceptions.OperationalError) (1091, "Can'
Public bug reported:
Having thousands of agents all talking to the same rabbit is hard to
manage
** Affects: neutron
Importance: Undecided
Status: New
https:/
Public bug reported:
There seems to be a scale issue with security groups and large numbers
of agents.
Users spinning up lots of instances in the same security group with
source-group rules can trigger orders of magnitude more messages in
rabbit than normal operation.
Possibly this can be done m
Public bug reported:
Have upgraded to Mitaka and getting a 501 when deleting a project. This
happens in both v2 and v3 api. The project actually deletes.
Am using stable/mitaka branch and the sql backend
$ keystone tenant-create --name deleteme
+-+--
Public bug reported:
When I have Juno control and Icehouse compute and icehouse network
deleting an instance doesn't work.
This is due to the Fixed IP object having an embedded version of the
network object that is too new for Icehouse. This causes an infinite
loop
** Affects: nova
Importa
Public bug reported:
When running Juno with Icehouse computes on starting nova-compute you
get a RuntimeError: maximum recursion depth exceeded while calling a
Python object due to it trying to backport the service object.
This is caused by the Juno conductor, when it sends back the service
objec
Public bug reported:
When upgrading to Juno and running DB migrations I get the following
error:
glance-manage db version
34
glance-manage db sync
2015-01-16 13:42:08.647 6746 CRITICAL glance [-] ValueError: Tables
"task_info,tasks" have non utf8 collation, please make sure all tables are
CH
Public bug reported:
Even though the aggregate_multitenancy_isolation says it can filter on
multiple tenants it currently doesn't.
** Affects: nova
Importance: Undecided
Status: New
Public bug reported:
If I have
[conductor]
workers = 0
I get 1 conductor process
Increasing the value I get the following:
workers = 1 -> 1 process
workers = 2 -> 3 processes
workers = 3 -> 4 processes
Looks like if workers > 1, processes = workers + 1; if workers < 2,
processes = 1.
This is in Jun
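The counts reported fit a simple off-by-one pattern. A minimal sketch of the observed behaviour (derived only from the numbers above, not from nova's actual launcher code):

```python
# Models the process counts reported above: for workers > 1 the service
# appears to fork `workers` children and also keep the parent running.
def observed_process_count(workers):
    return 1 if workers < 2 else workers + 1

print([observed_process_count(w) for w in (0, 1, 2, 3)])  # → [1, 1, 3, 4]
```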
I'm getting this in nova-api too, we don't use neutron.
I get it when I do a nova list or nova show. Restarting nova-api fixes
it for a while but then it comes back again.
==> /var/log/nova/nova-api.log <==
2015-03-27 13:56:06.649 10962 WARNING keystonemiddleware.auth_token [-]
Retrying on HTTP
** Also affects: keystonemiddleware
Importance: Undecided
Status: New
** Summary changed:
- ConnectionFailed: Connection to neutron failed: 'HTTPSConnectionPool' object
has no attribute 'insecure'
+ ConnectionFailed: Connection to XX failed: 'HTTPSConnectionPool' object
has no att
Fixed in 0.14.2
** Changed in: python-glanceclient
Status: New => Fix Released
** Also affects: python-glanceclient (Ubuntu)
Importance: Undecided
Status: New
Public bug reported:
We upgraded our dashboard to juno and now the "Create Security Group"
button has disappeared.
I've tracked this down to a KeyError in the allowed method of class
CreateGroup(tables.LinkAction):
if usages['security_groups']['available'] <= 0:
KeyError: ('available',)
pp usages['s
Public bug reported:
The AZ of an instance is calculated incorrectly when creating neutron
ports. It uses the requested instance AZ rather than the actual AZ of
the instance.
** Affects: nova
Importance: Undecided
Status: New
** Changed in: horizon/juno
Status: Invalid => Confirmed
https://bugs.launchpad.net/bugs/1422049
Title:
Security group checking action permi
Public bug reported:
Attach and detach interface are not supported when using cells
** Affects: nova
Importance: Undecided
Assignee: Sam Morrison (sorrison)
Status: In Progress
Public bug reported:
The metadata agent has no ability to override which url neutron uses. It
relies on neutron being in the keystone catalog.
If neutron isn't in the catalog metadata agent will fail.
** Affects: neutron
Importance: Undecided
Status: New
Public bug reported:
Upgrading to Juno you can no longer boot from a volume that is bigger
than the flavour's disk size.
There should be no need to take this into account when using a volume.
** Affects: nova
Importance: Undecided
Assignee: Sam Morrison (sorrison)
Status: In
Public bug reported:
Trying to upgrade from Juno -> Kilo
keystone-manage db_version
55
keystone-manage db_sync
2015-06-26 16:52:47.494 6169 CRITICAL keystone [-] ProgrammingError:
(ProgrammingError) (1146, "Table 'keystone_k.identity_provider' doesn't exist")
'ALTER TABLE identity_provider Eng
Public bug reported:
I'm trying to list servers by filtering on system_metadata or metadata.
I should be able to do something like (looking into the code)
nclient.servers.list(search_opts={'system_metadata': {"some_value":
"some_key"}, 'all_tenants': 1})
But this dictionary gets turned into a u
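The failure mode can be demonstrated with plain urllib (a sketch of the symptom, not novaclient's actual code): a dict used as a query-string value is simply stringified, so the server receives the dict's repr rather than structured filter data.

```python
# A dict passed as a query parameter value gets str()-ed by urlencode,
# so the API receives the Python repr, not a usable nested filter.
from urllib.parse import urlencode, unquote_plus

search_opts = {'system_metadata': {'some_value': 'some_key'}, 'all_tenants': 1}
qs = urlencode(search_opts)
decoded = unquote_plus(qs)
print(decoded)  # → system_metadata={'some_value': 'some_key'}&all_tenants=1
```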
Public bug reported:
I want to allow a user with a certain role to be able to do a nova show.
I set the policy.json file to allow this by setting
==
"context_is_admin": "role:admin",
"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"default": "rul
Public bug reported:
I'm trying to allow a non admin to be able to do a
nova list --all-tenants --tenant XX
I have set my policy.json file to allow this user who has a role called
monitoring to do this:
"context_is_admin": "role:admin",
"admin_or_owner": "is_admin:True or project_i
Public bug reported:
I'm trying to delete an image that has a status of "deleted".
It's not actually deleted: I can do an image-show and it returns, plus I
can see it in image_locations and it exists in the backend, which for us
is swift
glance image-show 17c6077c-99f0-41c7-9bd2-175216330990
+
Public bug reported:
I have a compute node with 20 volumes attached using iscsi and multipath.
Each multipath device has 4 iscsi devices.
When I disconnect a volume it generates 779 multipath -ll calls.
iscsiadm -m node --rescan
iscsiadm -m session --rescan
multipath -r
multipath -ll /dev/sdc
This is due to inefficient code in nova.virt.libvirt.volume.
The massive number of multipath calls comes from the code that figures
out other IQNs attached to the compute node.
There is basically a nested for loop; this code can be changed to make
it more efficient.
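The kind of cleanup described might look like this (illustrative only — `devices` and `iqns` are made-up stand-ins, not nova's actual data structures):

```python
# Illustrative: the O(n*m) nested scan vs. a set-based pass that
# returns the same matching devices.
def matching_devices_nested(devices, iqns):
    found = []
    for dev in devices:        # n devices...
        for iqn in iqns:       # ...times m IQNs checked per device
            if dev["iqn"] == iqn:
                found.append(dev)
    return found

def matching_devices_set(devices, iqns):
    iqn_set = set(iqns)        # built once; O(1) membership tests after
    return [dev for dev in devices if dev["iqn"] in iqn_set]
```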
** Project changed: cinder => nov
Public bug reported:
This is hard to explain so here goes.
When a compute node is making a lot of multipath calls eg. due to bug 1277316
sometimes it can fail to retrieve a multipath device.
When this happens it falls back to using the raw iscsi device.
Example code:
host_device = ("/dev/d
Public bug reported:
If you have two instances on the same compute node that each have a
volume attached (using iscsi backend)
If you delete both of them triggering a disconnect volume the following
happens:
First request will delete the device
echo 1 > /sys/block/sdr/device/delete
The second re
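One way to avoid the second disconnect failing is to make the sysfs delete idempotent. A hedged sketch (not nova's actual code; `delete_block_device` is a hypothetical helper, and `sysfs_root` is parameterised only so the logic can be exercised outside /sys):

```python
import os

def delete_block_device(name, sysfs_root="/sys/block"):
    """Ask the kernel to drop a block device; no-op if it is already gone."""
    path = os.path.join(sysfs_root, name, "device", "delete")
    if not os.path.exists(path):   # already removed by an earlier disconnect
        return False
    with open(path, "w") as f:
        f.write("1")               # equivalent of: echo 1 > .../device/delete
    return True
```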
Public bug reported:
Glance v2 registry API still doesn't work in Icehouse. This time
thankfully the fix is pretty simple
Basically this is because the "configure_registry_client()" method in
registry/client/v2/api.py isn't called anywhere in the code.
Adding this call to the get_registry_client
Public bug reported:
Trying to create an image with V2 API and get the following error:
glance --os-image-api-version 2 --os-image-url http://glance-icehouse:9292/
image-create --container-format bare --disk-format raw --name trusty2 --file
trusty-server-cloudimg-amd64-disk1.img
Request return
Public bug reported:
The code to backport an object doesn't work at all. This code is only
called in one place.
In nova/objects/base.py in _process_object
If the version is incompatible it tries to backport it:
def _process_object(self, context, objprim):
try:
objinst =
Public bug reported:
in nova.objects.base it imports conductor
from nova.conductor import api as conductor_api
self._conductor = conductor_api.API()
This bypasses the logic to determine whether to use the conductor RPC
service or not.
Should do
from nova import conductor
self._conductor = conductor.
Public bug reported:
This affects Havana not Icehouse
The method signature of attach_volume changed from Havana -> Icehouse
-def attach_volume(self, context, instance, volume_id, device=None):
+def attach_volume(self, context, instance, volume_id, device=None,
+ disk
Public bug reported:
Doing a "nova migration-list" I get the following error in the nova-api
logs:
ERROR Exception handling resource: 'unicode' object does not support item
deletion
TRACE nova.api.openstack.wsgi Traceback (most recent call last):
TRACE nova.api.openstack.wsgi File "/opt/nova/n
Public bug reported:
If a child cell stops functioning we still include it when we send down
broadcast messages that require a response.
This causes things like listing hosts, hypervisor-stats etc. to fail if one of
your compute cells is down.
We know if the cell is mute so we shouldn't send me
Public bug reported:
Can clean up the code path here to make it act like the rest of nova
** Affects: nova
Importance: Undecided
Status: New