Public bug reported:
Recreate Steps:
1) Create multiple routers and allocate router interfaces for neutron router
ports from different networks.
For example, below there are 4 routers with 4, 2, 1, and 2 ports respectively (so
9 router ports in the database in total).
[root@controller ~]# neutron route
Public bug reported:
When attaching a volume to a Windows instance (without specifying the device
name), the mount point will always be in Linux format, such as /dev/sdb,
from the CLI or the Horizon GUI.
Checking OpenStack nova, it seems to only allow device names
in /dev/* format regardless of any
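For illustration, a minimal sketch of the kind of check implied, with a hypothetical regex (not nova's actual pattern): only Linux-style names pass, so a Windows-style mount point can never be expressed.
    import re

    # Hypothetical pattern: accepts Linux-style names such as /dev/sdb or
    # /dev/vda1; nova's real validation may differ.
    _DEV_RE = re.compile(r'^/dev/(x?[a-z]d)[a-z]+\d*$')

    def is_valid_device_name(name):
        return _DEV_RE.match(name) is not None

    print(is_valid_device_name('/dev/sdb'))  # True
    print(is_valid_device_name('D:'))        # False: Windows-style rejected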
Public bug reported:
Even with an admin user context, following the API-ref for the image/detail
API, attempts to query deleted images fail both ways.
1)
[root@node191 glance]# curl -i -X GET -H 'User-Agent: python-glanceclient' -H
'Content-Type: application/octet-stream' -H 'Acc
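For reference, the same query expressed with python-requests; the endpoint, port, and token below are placeholders, not taken from the report.
    import requests

    # Placeholder endpoint and token; adjust for a real deployment.
    resp = requests.get(
        'http://127.0.0.1:9292/v1/images/detail',
        params={'deleted': 'True'},
        headers={'X-Auth-Token': '<admin-token>'})
    # The bug: this fails even under an admin context.
    print(resp.status_code)
    print(resp.text)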
Public bug reported:
When Horizon is configured to use the cinder v2 API as below:
OPENSTACK_API_VERSIONS = {
"identity": 2.0,
"volume": 2
}
Note that nova already switched to the cinder v2 API by default with this commit:
https://review.openstack.org/#/c/124468
After nova attaches a volume (with
Public bug reported:
When ceilometer is configured with SSL, and Horizon is configured as
below in local_settings:
OPENSTACK_SSL_CACERT=
OPENSTACK_SSL_NO_VERIFY=false
Horizon fails to load the meter-list from ceilometer, while ceilometer
can get the meter-list with the same cert via the command line.
Public bug reported:
The noVNC proxy will not be functional if the Python version is lower than 2.7.4.
It will raise an exception as below:
parse = urlparse.urlparse(self.path)
if parse.scheme not in ('http', 'https'):
    # From a bug in urlparse in Python < 2.7.4 we cannot support
    # special schemes (http, https, ws, wss)
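The underlying interpreter bug (bpo-9374, fixed in Python 2.7.4) can be shown directly: before the fix, urlparse only split the query string for schemes it knew about, so ws:// and wss:// URLs came back with the query left un-split.
    import urlparse

    parsed = urlparse.urlparse('ws://host:6080/?token=abc')
    # Python >= 2.7.4: parsed.query == 'token=abc'
    # Python <  2.7.4: parsed.query == '' (the query is never split out)
    print parsed.query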
Public bug reported:
Currently the vmware driver adopts the uuid for instance names. This will
lead to two problems:
1) The instance name template will not apply for the vmware driver, but the
instance name is still displayed by the nova show command, which is misleading.
[root@cmwo cmwo]# nova show temp-vm-host1-99
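For contrast, a small illustration of the mismatch; the uuid below is made up, and 'instance-%08x' is nova's default instance_name_template.
    # What the template-based name would look like vs. the uuid the
    # vmware driver actually registers in vCenter.
    instance_id = 99
    instance_uuid = '9d1a2b3c-0000-4f60-9806-16810c666d7f'  # made-up uuid

    print('instance-%08x' % instance_id)  # instance-00000063, what nova shows
    print(instance_uuid)                  # what appears as the VM name in VC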
Public bug reported:
With the current vmware driver, power_off will explicitly call
PowerOffVM_Task.
In this case, if a virtual machine is writing to disk when it receives a
Power Off command, data corruption may occur.
Actually, in the SDK there is another method,
ShutdownGuest, which will issue
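A hedged sketch of the graceful-shutdown flow this suggests; the session helper names below are illustrative, not the driver's actual API.
    import time

    def soft_power_off(session, vm_ref, timeout=60, interval=5):
        """Try a clean guest shutdown first, fall back to hard power off."""
        try:
            # ShutdownGuest asks the guest OS (via VMware Tools) to stop
            # cleanly, so in-flight disk writes can complete.
            session.invoke_api('ShutdownGuest', vm_ref)
        except Exception:
            # No VMware Tools or the request failed: hard power off.
            session.invoke_api('PowerOffVM_Task', vm_ref)
            return
        deadline = time.time() + timeout
        while time.time() < deadline:
            if session.get_power_state(vm_ref) == 'poweredOff':
                return
            time.sleep(interval)
        # The guest ignored the request; force the state change.
        session.invoke_api('PowerOffVM_Task', vm_ref)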
Public bug reported:
When doing a concurrent spawn of VMs (30), some VM spawn
failures were noticed due to the error below.
The reason is that during prebuild_instance, it will list all existing
instances. But if a concurrent spawning process is happening, it is
possible that a certain VM is just in pr
Public bug reported:
Currently each Neutron agent imposes DB calls on the Neutron server to
query devices, ports, and networks when it starts up.
Take the ml2 rpc.py method get_device_details for example.
It can be noticed that during this call,
it will get each port and then get each network that
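An illustrative reduction of the pattern described (not the actual ml2 code): one port lookup plus one network lookup per device, so DB round trips multiply when many agents start at once.
    def get_device_details_list(plugin, context, devices):
        # Costly-by-volume pattern: 2 DB calls per device instead of
        # one batched query for all ports/networks.
        details = []
        for device in devices:
            port = plugin.get_port(context, device)           # DB call
            network = plugin.get_network(context,
                                         port['network_id'])  # DB call
            details.append({'port': port, 'network': network})
        return details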
...FirewallDriver' object has no attribute 'update_security_group_rules'
** Affects: neutron
Importance: Undecided
Assignee: zhu zhu (zhuzhubj)
Status: In Progress
** Description changed:
With the recently merged code supporting If19be8579ca734a899cdd673c919eee8165aaa0e
(Ref
Public bug reported:
As for the nova scheduler's multiple scheduling attempts: if a deployment
attempt on a certain host fails and raises a detailed exception,
the nova scheduler will choose another host to retry.
But after all attempts are tried, it will raise a generic NoValidHost
exception without a p
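A hedged sketch of the improvement this implies: accumulate the per-host failure reasons across retries so the final NoValidHost carries them. Names here are illustrative, not the scheduler's code.
    class NoValidHost(Exception):
        pass

    def schedule_with_reasons(hosts, try_host):
        # try_host(host) attempts a deployment and raises on failure.
        reasons = {}
        for host in hosts:
            try:
                return try_host(host)
            except Exception as exc:  # a real scheduler narrows this
                reasons[host] = str(exc)
        # Instead of a bare NoValidHost, surface why each host failed.
        raise NoValidHost('No valid host found: %s' % reasons)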
Public bug reported:
With the openvswitch neutron agent, during the daemon loop, the
setup_port_filters phase will invoke the RPC method
'security_group_rules_for_devices' against the Neutron server.
And this operation will be very time consuming and a performance
bottleneck, as it includes ports quer
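Illustrative only (placeholder names, not the agent's code): the shape of the call, where one RPC makes the server expand every hosted device into its port and matching rules on each loop iteration.
    def program_firewall(device_info):
        # Placeholder for iptables/OVS rule programming.
        pass

    def setup_port_filters(rpc_call, context, device_ids):
        # One heavyweight request: the server must resolve every device's
        # port plus all matching security group rules before replying.
        info = rpc_call(context, 'security_group_rules_for_devices',
                        devices=device_ids)
        for device_id, device_info in info.items():
            program_firewall(device_info)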
Public bug reported:
When there are a large number of VMs in VCenter (1200+) managed by nova, the
nova-compute service needs more than
1.5 hours to start.
Since init_host will try to sync all VMs' power status from VCenter,
as the number of VMs increases, this will also cost a lot of time. T
Public bug reported:
1. Create a cinder volume from an existing image:
cinder create 2 --display-name hbvolume-newone --image-id 9769cbfe-2d1a-4f60-9806-16810c666d7f
2. Set the created volume to error status:
cinder reset-state --state error 76f5e521-d45f-4675-851e-48f8e3a3f039
3. Boot a VM from the c
Public bug reported:
From the VCenter driver, the compute node is configured to be cluster01, and
the VCenter IP is 10.9.1.43.
But the hypervisor show command displays the host IP of the
controller node. It should point to
the real VCenter IP.
[root@dhcp-10-9-3-83 ~]# nova hypervisor
Public bug reported:
When using the vmware driver to attach a volume during VM spawn
with --block-device as below,
the VM will show 'Active' in OpenStack, but actually the VM cannot
be loaded, showing 'Operating System Not Found'.
nova boot --flavor 7 --image trend-thin --block-device source=v
** Changed in: nova
Status: New => Invalid
https://bugs.launchpad.net/bugs/1350164
Title:
VMWare: spawn VM failure if there are multiple dc in VC
Public bug reported:
Using Icehouse vmware code, when VCenter is configured with two DCs (one of
them empty),
spawning a VM with a flat image will fail.
1. DC1
   --> Cluster1
       --> Host A
       --> Host B
   --> Cluster2
       --> Host C
2. DC2 (without clusters or hosts)
(truncated excerpt: hypervisor stats table showing vcpus_used = 281)
Public bug reported:
First, use the vcenter driver to spawn some instances to one of the
datastores that the ESXi host is bound to. Later, this datastore becomes
unavailable for some reason (power off or a network problem). Then,
when nova-compute is restarted, the compute service exits with
errors.
Public bug reported:
For now, nova quota-update does not apply any constraints to the
values provided (user_id, tenant_id) on update.
Actually, if a user runs 'nova quota-update service --ram=9', it will
succeed. But it could confuse users, since quota-show
--tenant is dif
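A hedged sketch of the missing constraint: check the target against known tenant ids before applying the update. The lookup source and names are placeholders.
    def quota_update(target_id, updates, known_tenant_ids):
        # Reject ids that do not match any tenant instead of silently
        # creating a quota row that quota-show --tenant never reports.
        if target_id not in known_tenant_ids:
            raise ValueError('%s is not a known tenant id' % target_id)
        return dict(updates)  # stand-in for persisting the new values

    # 'service' as in the report would now fail fast:
    # quota_update('service', {'ram': 9}, {'tenant-123'}) -> ValueError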
Public bug reported:
Neutron is still using the oslo-incubator code for its RPC modules.
During qpid connection setup from amqp get_connection_pool,
duplicate connections will be created during __init__ (class Connection
in impl_qpid.py). And after the first connection object is created,
this
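An illustrative reduction of the pattern described (a counting stub, not the real impl_qpid code): __init__ opens a connection and a reconnect step immediately opens a second, abandoning the first.
    class FakeBroker(object):
        opened = 0

    def open_connection(broker):
        broker.opened += 1
        return object()

    class Connection(object):
        def __init__(self, broker):
            self.broker = broker
            self.connection = open_connection(broker)  # first connection
            self.reconnect()                           # creates a second

        def reconnect(self):
            self.connection = open_connection(self.broker)

    broker = FakeBroker()
    Connection(broker)
    print(broker.opened)  # 2: the first connection is leaked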
Public bug reported:
After enabling rpc_workers with a value other than 0 and restarting
neutron-server, it was found that no consumers are ever created for
q-plugin within Qpid. It appears that all subprocesses of neutron-server
hang at the self.connection.open() step in impl_qpid