[Yahoo-eng-team] [Bug 1677426] Re: Live Migration fails as different cpu types

2017-03-30 Thread Maciej Szankin
Oops, I read it wrong; you are performing LM from Westmere to
Broadwell.

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1677426

Title:
  Live Migration fails as different cpu types

Status in OpenStack Compute (nova):
  New

Bug description:
  I'm trying to live migrate from a system with Westmere to a system with
  Broadwell, but it fails.

  Additional info:
  operating system: centos 7.3
  openstack: Ocata

  /etc/nova/nova.conf 
  cpu_mode=none
  cpu_model=none
  virt_type=kvm
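
  A commonly suggested configuration for cross-generation live migration
  (not part of this report; Westmere as the baseline model is an assumption)
  is to pin every host in the migration pool to the lowest common CPU model,
  rather than leaving cpu_mode as none:

```ini
# /etc/nova/nova.conf on BOTH source and destination hosts (sketch)
[libvirt]
virt_type = kvm
# Expose a fixed guest CPU so its features exist on every host in the pool.
cpu_mode = custom
# Lowest common denominator of the pool; here the older generation.
cpu_model = Westmere
```

  With this, guests never see Broadwell-only features, so migration can work
  in both directions.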

  Error report:
  *
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server [req-98ec383e-d18d-4e79-89e9-725a7e244ce5 - - - - -] Exception during message handling
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 155, in _process_incoming
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 222, in dispatch
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 192, in _do_dispatch
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 75, in wrapped
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server function_name, call_dict, binary)
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server self.force_reraise()
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 66, in wrapped
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 685, in decorated_function
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 216, in decorated_function
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server kwargs['instance'], e, sys.exc_info())
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server self.force_reraise()
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 204, in decorated_function
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs)
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5213, in check_can_live_migrate_destination
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server disk_over_commit)
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5224, in _do_check_can_live_migrate_destination
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server block_migration, disk_over_commit)
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5655, in check_can_live_migrate_destination
  2017-03-29 20:31:44.979 4051 ERROR oslo_messaging.rpc.server

[Yahoo-eng-team] [Bug 1677426] Re: Live Migration fails as different cpu types

2017-03-30 Thread Maciej Szankin
Live migration fails because of different CPU capabilities. For
instance, Broadwell has the AVX, AVX2, TXT, and TSX extensions, which are not
available on Westmere.
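
The capability check can be pictured as simple set containment; the flag sets
below are abbreviated illustrations, not real CPUID feature lists:

```python
# Illustrative sketch: a guest can only land on a destination that offers
# at least every CPU feature the guest was started with.
# Flag sets are abbreviated examples, not complete CPUID feature lists.
westmere = {"sse4_2", "aes", "popcnt"}
broadwell = westmere | {"avx", "avx2", "tsx", "rdseed", "adx"}

def can_migrate(guest_features, dest_features):
    """True if every feature the guest uses exists on the destination."""
    return guest_features <= dest_features

# Westmere guest -> Broadwell host: destination is a superset, OK.
assert can_migrate(westmere, broadwell)
# Broadwell guest -> Westmere host: AVX/AVX2/TSX are missing, refused.
assert not can_migrate(broadwell, westmere)
```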

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1677426


[Yahoo-eng-team] [Bug 1524044] Re: Deleting a VM with the name "vm_ln_[1" leads to a traceback

2017-03-21 Thread Maciej Szankin
It works as it is supposed to on current master. Bug is no longer valid.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524044

Title:
  Deleting a VM with the name "vm_ln_[1" leads to a traceback

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  1. Exact version of Nova/OpenStack you are running:
  ii  nova-api 2:12.0.0~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - API frontend
  ii  nova-cert2:12.0.0~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - certificate management
  ii  nova-common  2:12.0.0~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - common files
  ii  nova-conductor   2:12.0.0~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - conductor service
  ii  nova-consoleauth 2:12.0.0~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy  2:12.0.0~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - NoVNC proxy
  ii  nova-scheduler   2:12.0.0~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - virtual machine scheduler
  ii  python-nova  2:12.0.0~rc1-0ubuntu1~cloud0  
all  OpenStack Compute Python libraries
  ii  python-novaclient2:2.29.0-1~cloud0 
all  client library for OpenStack Compute API

  
  2. Relevant log files:



  2015-12-08 10:54:46.428 5561 INFO nova.osapi_compute.wsgi.server [req-9e819344-85a6-4bf7-a225-88cdf59e235e bfb9feab857949bda1d1a40d5da4350d 61909ec3cd0f4d7fb6c641d71d01e106 - - -] 10.100.100.20 "GET /v2/ HTTP/1.1" status: 200 len: 575 time: 0.0552752
  2015-12-08 10:54:46.559 5561 ERROR oslo_db.sqlalchemy.exc_filters 
[req-6ca54a98-8477-4207-8aa6-c7c2e84de3bb bfb9feab857949bda1d1a40d5da4350d 
61909ec3cd0f4d7fb6c641d71d01e106 - - -] DBAPIError exception wrapped from 
(pymysql.err.InternalError) (1139, u"Got error 'brackets ([ ]) not balanced' 
from regexp") [SQL: u'SELECT anon_1.instances_created_at AS 
anon_1_instances_created_at, anon_1.instances_updated_at AS 
anon_1_instances_updated_at, anon_1.instances_deleted_at AS 
anon_1_instances_deleted_at, anon_1.instances_deleted AS 
anon_1_instances_deleted, anon_1.instances_id AS anon_1_instances_id, 
anon_1.instances_user_id AS anon_1_instances_user_id, 
anon_1.instances_project_id AS anon_1_instances_project_id, 
anon_1.instances_image_ref AS anon_1_instances_image_ref, 
anon_1.instances_kernel_id AS anon_1_instances_kernel_id, 
anon_1.instances_ramdisk_id AS anon_1_instances_ramdisk_id, 
anon_1.instances_hostname AS anon_1_instances_hostname, 
anon_1.instances_launch_index AS anon_1_instances_launch_index, anon_1.instances_key_name AS anon_1_instances_key_name,
anon_1.instances_key_data AS anon_1_instances_key_data, 
anon_1.instances_power_state AS anon_1_instances_power_state, 
anon_1.instances_vm_state AS anon_1_instances_vm_state, 
anon_1.instances_task_state AS anon_1_instances_task_state, 
anon_1.instances_memory_mb AS anon_1_instances_memory_mb, 
anon_1.instances_vcpus AS anon_1_instances_vcpus, anon_1.instances_root_gb AS 
anon_1_instances_root_gb, anon_1.instances_ephemeral_gb AS 
anon_1_instances_ephemeral_gb, anon_1.instances_ephemeral_key_uuid AS 
anon_1_instances_ephemeral_key_uuid, anon_1.instances_host AS 
anon_1_instances_host, anon_1.instances_node AS anon_1_instances_node, 
anon_1.instances_instance_type_id AS anon_1_instances_instance_type_id, 
anon_1.instances_user_data AS anon_1_instances_user_data, 
anon_1.instances_reservation_id AS anon_1_instances_reservation_id, 
anon_1.instances_launched_at AS anon_1_instances_launched_at, 
anon_1.instances_terminated_at AS anon_1_instances_terminated_at, anon_1.instances_availability_zone AS
anon_1_instances_availability_zone, anon_1.instances_display_name AS 
anon_1_instances_display_name, anon_1.instances_display_description AS 
anon_1_instances_display_description, anon_1.instances_launched_on AS 
anon_1_instances_launched_on, anon_1.instances_locked AS 
anon_1_instances_locked, anon_1.instances_locked_by AS 
anon_1_instances_locked_by, anon_1.instances_os_type AS 
anon_1_instances_os_type, anon_1.instances_architecture AS 
anon_1_instances_architecture, anon_1.instances_vm_mode AS 
anon_1_instances_vm_mode, anon_1.instances_uuid AS anon_1_instances_uuid, 
anon_1.instances_root_device_name AS anon_1_instances_root_device_name, 
anon_1.instances_default_ephemeral_device AS 
anon_1_instances_default_ephemeral_device, anon_1.instances_default_swap_device 
AS anon_1_instances_default_swap_device, anon_1.instances_config_drive AS 
anon_1_instances_config_drive, anon_1.instances_access_ip_v4 AS 
anon_1_instances_access_ip_v4,
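
The DB error above comes from an unescaped `[` in the instance name reaching
MySQL's REGEXP. The general remedy, sketched here with Python's `re` module
rather than at the SQL layer, is to escape user-supplied strings before they
reach any regex engine:

```python
import re

name = "vm_ln_[1"  # the instance name from this report

# Used as a raw pattern, the name is invalid: '[' opens an unbalanced class.
try:
    re.compile(name)
except re.error as exc:
    print("invalid pattern:", exc)

# Escaped, the name matches only itself, bracket included.
pattern = re.compile(re.escape(name))
assert pattern.search("deleting vm_ln_[1 now") is not None
```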

[Yahoo-eng-team] [Bug 1488111] Re: Boot from volumes that fail in initialize_connection are not rescheduled

2017-03-21 Thread Maciej Szankin
Liberty has hit EOL, so this one is invalid. Mitaka was removed from the
affected releases, so I am closing this one.

** Changed in: nova
   Status: In Progress => Won't Fix

** Changed in: nova/liberty
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488111

Title:
  Boot from volumes that fail in initialize_connection are not
  rescheduled

Status in OpenStack Compute (nova):
  Won't Fix
Status in OpenStack Compute (nova) liberty series:
  Won't Fix

Bug description:
  Version: OpenStack Liberty

  Boot from volumes that fail in volume initialize_connection are not
  rescheduled.  Initialize connection failures can be very host-specific
  and in many cases the boot would succeed if the instance build was
  rescheduled to another host.

  The instance is not rescheduled because the initialize_connection is being 
called down this stack:
  nova.compute.manager _build_resources
  nova.compute.manager _prep_block_device
  nova.virt.block_device attach_block_devices
  nova.virt.block_device.DriverVolumeBlockDevice.attach

  When this fails an exception is thrown which lands in this block:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1740
  and throws an InvalidBDM exception which is caught by this block:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2110

  This in turn throws a BuildAbortException, which causes the instance not to be 
rescheduled by landing the flow in this block:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2004

  To fix this we likely need a different exception thrown from
  nova.virt.block_device.DriverVolumeBlockDevice.attach when the failure
  is in initialize_connection, and then work back up the stack to ensure
  that when this different exception is thrown a BuildAbortException is
  not thrown, so the reschedule can happen.
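
The flow described above can be sketched as follows; the class and function
names here are hypothetical stand-ins, not nova's actual code:

```python
class InvalidBDM(Exception):
    """Block device preparation failed (possibly host-specific)."""

class BuildAbortException(Exception):
    """Terminal failure: the build is abandoned, never rescheduled."""

def build_instance(prep_block_device):
    """Mirror of the reported flow: any BDM failure becomes a terminal abort."""
    try:
        try:
            prep_block_device()
        except InvalidBDM:
            # Host-specific initialize_connection failures land here too,
            # which is exactly what the report wants changed.
            raise BuildAbortException("failure prepping block device")
    except BuildAbortException:
        return "aborted, not rescheduled"
    return "built"

def failing_initialize_connection():
    raise InvalidBDM()

assert build_instance(failing_initialize_connection) == "aborted, not rescheduled"
assert build_instance(lambda: None) == "built"
```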

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1488111/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1636489] Re: Volume attachment fails for all the available instances after running different volume operations for 1-2 hours or more.

2017-03-21 Thread Maciej Szankin
Since the Mitaka cycle we use the direct release model, which means bug
reports like this should be marked Fix Released.

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1636489

Title:
  Volume attachment fails for all the available instances after running
  different volume operations for 1-2 hours or more.

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Steps to Reproduce:
  1. Configure Devstack setup with storage backend (LVM).
  2. Create at least 10-15 instances.
  3. Run different volume operations via automation script for 5 hours or more.
  4. Wait for 1-2 hours.

  Observations:
  1. Attachment failed for every volume attached to an instance. 
“attachVolume--break out wait after 5 minutes. ERROR >> Failed to attach 
volume” record is displayed in automation script logs.
  2. Error:DeviceIsBusy exception is raised (observed in n-cpu.log).

  Additional Note:
  It is observed only in the Devstack stable/Mitaka and stable/Newton releases. 
It works perfectly well with the Devstack stable/Liberty release. The different 
volume operations executed randomly via the automation script are: create_volume, 
create_snapshot, delete_snapshot, delete_volume, attach_volume, detach_volume. 

  Possible Suspect after analysis:
  Before the failure, when the last detachment request comes to an instance, 
Nova's "detach_volume" fires the detach method into libvirt, which claims 
success, but the device is still attached according to the guest XML file. The 
hypervisor then tries to take an exclusive lock on the disk for the subsequent 
attachment request, with all I/O caching disabled. Libvirt treats this metadata 
as a black box, never attempting to interpret or modify it. Nova then finishes 
the teardown, releasing the resources, which causes I/O errors in the guest, 
and subsequent volume_attach requests from Nova fail spectacularly because they 
try to use an in-use resource. This appears to be a race condition, in that it 
creates an intermittent issue and then a complete attachment failure after 
different volume operations are triggered continuously.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1636489/+subscriptions



[Yahoo-eng-team] [Bug 1664759] Re: cells v2 accepts creating two cells with same name

2017-03-21 Thread Maciej Szankin
Since the Mitaka cycle we use the direct release model, which means this
bug report should be tagged Fix Released.

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1664759

Title:
  cells v2 accepts creating two cells with same name

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I'm not seeing that nova-manage cell_v2 provides a way to update the
  transport_url (or database) for a cell.  It's possible that I'm
  missing something here but I've not found a way to do this.

  This is a problem when scaling rabbitmq, for example.  If I have a
  cell associated with a single rabbitmq instance and I want to scale to
  2 rmq instances, then there needs to be a way to update the database
  for the cell.

  Running 'nova-manage cell_v2 create_cell --name cell1' twice, the 1st time
  with the transport_url in nova.conf having one rmq server, and the 2nd
  time with the transport_url in nova.conf having two rmq servers, is
  successful.  After this, there are two cell_mappings records for
  "cell1", and it appears that the first entry is what ends up being
  used for ensuing commands (i.e. if I take the original rmq out of
  active/active HA, I can't deploy an instance).

  Here's what the cell_mappings table looks like after the 2nd
  create_cell call:

  mysql> select * from cell_mappings\G
  *************************** 1. row ***************************
           created_at: 2017-02-14 21:06:49
           updated_at: NULL
                   id: 1
                 uuid: ---- ----
                 name: cell0
        transport_url: none:///
  database_connection: mysql://nova:7Jf4sgRNqbfzR8d3hxyWKYpzFfY6gK95@10.5.30.174/nova_cell0
  *************************** 2. row ***************************
           created_at: 2017-02-14 21:06:56
           updated_at: NULL
                   id: 2
                 uuid: 1499460c-41f2-422d-b452-03b7995907c4
                 name: cell1
        transport_url: rabbit://nova:PtPFqF24ZxsB5GqCRN77Pbrp4h3cCYgJJ9XJwBThPhF2kz9M2Trbg8CSpFVcjY5L@10.5.30.169:/openstack
  database_connection: mysql://nova:7Jf4sgRNqbfzR8d3hxyWKYpzFfY6gK95@10.5.30.174/nova
  *************************** 3. row ***************************
           created_at: 2017-02-14 22:35:48
           updated_at: NULL
                   id: 5
                 uuid: 4b363076-7d89-451d-be99-057b0ad67e73
                 name: cell1
        transport_url: rabbit://nova:PtPFqF24ZxsB5GqCRN77Pbrp4h3cCYgJJ9XJwBThPhF2kz9M2Trbg8CSpFVcjY5L@10.5.30.169:,nova:PtPFqF24ZxsB5GqCRN77Pbrp4h3cCYgJJ9XJwBThPhF2kz9M2Trbg8CSpFVcjY5L@10.5.30.187:/openstack
  database_connection: mysql://nova:7Jf4sgRNqbfzR8d3hxyWKYpzFfY6gK95@10.5.30.174/nova

  It seems as if the 2nd create_cell call should update the original
  cell1 record, or there should be a cell_update subcommand.
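
A sketch of what the reporter is asking for, with the cell name treated as a
natural key so a repeated create updates in place (hypothetical code, not
nova's actual implementation):

```python
def create_or_update_cell(cells, name, transport_url, database_connection):
    """Upsert keyed on cell name; `cells` is a dict standing in for the
    cell_mappings table, so a second create_cell can never duplicate a name."""
    cells[name] = {
        "transport_url": transport_url,
        "database_connection": database_connection,
    }
    return cells[name]

cells = {}
create_or_update_cell(cells, "cell1", "rabbit://rmq1/openstack", "mysql://db/nova")
# Second call with a scaled-out transport_url replaces, not duplicates:
create_or_update_cell(cells, "cell1", "rabbit://rmq1,rmq2/openstack", "mysql://db/nova")

assert len(cells) == 1                       # still a single cell1 record
assert "rmq2" in cells["cell1"]["transport_url"]
```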

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1664759/+subscriptions



[Yahoo-eng-team] [Bug 1673430] Re: Launch Instance error ImageNotFound

2017-03-16 Thread Maciej Szankin
Can you add reproduction steps, environment details, etc.?

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1673430

Title:
  Launch Instance error ImageNotFound

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  2017-03-16 19:09:57.370 29479 INFO nova.compute.claims [req-bff9ab69-2110-41e5-bf34-4a13574f4076 0c8c68d359a249388def4b4ec5d6c507 9b8d0fa36b2d425a9dd6edf3dc5d2344 - - -] [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056] Claim successful
  2017-03-16 19:09:57.988 29479 INFO nova.virt.osinfo [req-bff9ab69-2110-41e5-bf34-4a13574f4076 0c8c68d359a249388def4b4ec5d6c507 9b8d0fa36b2d425a9dd6edf3dc5d2344 - - -] Cannot load Libosinfo: (No module named Libosinfo)
  2017-03-16 19:09:58.505 29479 INFO nova.virt.libvirt.driver [req-bff9ab69-2110-41e5-bf34-4a13574f4076 0c8c68d359a249388def4b4ec5d6c507 9b8d0fa36b2d425a9dd6edf3dc5d2344 - - -] [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056] Creating image
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [req-bff9ab69-2110-41e5-bf34-4a13574f4076 0c8c68d359a249388def4b4ec5d6c507 9b8d0fa36b2d425a9dd6edf3dc5d2344 - - -] [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056] Instance failed to spawn
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056] Traceback (most recent call last):
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2218, in _build_resources
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056] yield resources
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2064, in _build_and_run_instance
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056] block_device_info=block_device_info)
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2772, in spawn
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056] admin_pass=admin_password)
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3190, in _create_image
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056] instance, size, fallback_from_host)
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 6789, in _try_fetch_image_cache
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056] size=size)
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 251, in cache
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056] *args, **kwargs)
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 593, in create_image
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056] prepare_template(target=base, max_size=size, *args, **kwargs)
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056]   File "/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in inner
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056] return f(*args, **kwargs)
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 241, in fetch_func_sync
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056] fetch_func(target=target, *args, **kwargs)
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 429, in fetch_image
  2017-03-16 19:09:58.609 29479 ERROR nova.compute.manager [instance: 1d8734a3-cd0e-4ee8-ad2c-20b46c98e056]

[Yahoo-eng-team] [Bug 1652874] Re: server_group json-schema would not work correctly

2017-03-08 Thread Maciej Szankin
If my guess is correct, this can now be closed, as the
https://review.openstack.org/415482 patch should fix it. Please correct
me if I am wrong, zhengzhenyu.

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1652874

Title:
  server_group json-schema would not work correctly

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  During the review of https://review.openstack.org/#/c/413453 , we found 
another issue from the same POV.
  We need to investigate whether the following json-schema has the same bug as 
LP https://bugs.launchpad.net/nova/+bug/1651064

  
  
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/server_groups.py#L30

  'items': [{'enum': ['anti-affinity', 'affinity']}],
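
The suspected bug is the draft-4 semantics of list-form `items`: a *list* of
schemas validates positionally, and elements beyond the list are unconstrained
unless `additionalItems` is set. A hand-rolled model of that rule (deliberately
not using the real jsonschema library):

```python
def validate_items_positional(instance, item_schemas):
    """Simplified draft-4 rule for list-form 'items' without 'additionalItems':
    schema i applies to element i; elements past the schema list always pass."""
    for value, schema in zip(instance, item_schemas):
        if "enum" in schema and value not in schema["enum"]:
            return False
    return True

policies_schema = [{"enum": ["anti-affinity", "affinity"]}]

assert validate_items_positional(["affinity"], policies_schema)
assert not validate_items_positional(["bogus"], policies_schema)
# Only index 0 is checked, so garbage after it slips through validation:
assert validate_items_positional(["affinity", "bogus"], policies_schema)
```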

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1652874/+subscriptions



[Yahoo-eng-team] [Bug 1663600] Re: Showing forced_down value for the compute nodes.

2017-03-08 Thread Maciej Szankin
This does not seem to be a bug, but rather a feature request; as such,
it will require a bp/spec. Please reach out to folks on the #openstack-nova
channel if you want to discuss it.

** Changed in: nova
   Status: In Progress => Invalid

** Changed in: python-novaclient
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1663600

Title:
  Showing forced_down value for the compute nodes.

Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  Invalid

Bug description:
  Currently, there is no way to identify whether a specific compute node
  was really forced down or not.

  It may also be that the nova-compute service is simply down; we only
  know after checking the compute logs.

  Steps to Reproduce:
  

  1) Force down the compute node.
  2) Execute nova hypervisor-list (it says the state is down). The state will 
also be down if the nova-compute service is not able to start.

  
  Actual Output:

  +----+-----------------------+-------+---------+
  | ID | Hypervisor hostname   | State | Status  |
  +----+-----------------------+-------+---------+
  | 1  | compute1.hostname.com | down  | enabled |
  +----+-----------------------+-------+---------+

  Expected Output:

  
  +----+-----------------------+-------+---------+-------------+
  | ID | Hypervisor hostname   | State | Status  | Forced_down |
  +----+-----------------------+-------+---------+-------------+
  | 1  | compute1.hostname.com | down  | enabled | yes         |
  +----+-----------------------+-------+---------+-------------+

  Forced_down = True (value will be yes)
  Forced_down = False (value will be no)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1663600/+subscriptions



[Yahoo-eng-team] [Bug 1640993] Re: xenserver hits vif plugging timeout with neutron CI job

2017-03-08 Thread Maciej Szankin
Proposed change that closes this bug was merged, but somehow infra did
not catch that. Marking as complete.

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1640993

Title:
  xenserver hits vif plugging timeout with neutron CI job

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We're now running neutron by default in CI jobs for Ocata unless they
  are explicitly configured to run nova-network.

  That might be unrelated to this, but I saw the citrix xenserver
  neutron job fail today with a vif plugging timeout:

  http://dd6b71949550285df7dc-
  dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/47/395747/8/check
  /dsvm-tempest-neutron-network/e3832fe/logs/screen-n-cpu.txt.gz

  2016-11-11 00:55:41.191 3175 WARNING nova.virt.xenapi.vmops [req-
  5f5774e3-92ee-4e49-947d-ce1c879bc1ab tempest-
  ServersTestManualDisk-1338550150 tempest-
  ServersTestManualDisk-1338550150] [instance: 79c00d5c-b285-44e5-b8db-
  9fc9a8e31478] Timeout waiting for vif plugging callback

  nova.conf seems to be configured correctly:

  http://dd6b71949550285df7dc-
  dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/47/395747/8/check
  /dsvm-tempest-neutron-network/e3832fe/logs/etc/nova/nova.conf.txt.gz

  vif_plugging_timeout = 300
  vif_plugging_is_fatal = True

  So I'm wondering if there is a problem in the vif plugging callback
  logic in the xenapi driver code.

  I see this earlier before the timeout:

  2016-11-11 00:50:29.526 3175 DEBUG nova.compute.manager [req-5f5774e3
  -92ee-4e49-947d-ce1c879bc1ab tempest-ServersTestManualDisk-1338550150
  tempest-ServersTestManualDisk-1338550150] [instance:
  79c00d5c-b285-44e5-b8db-9fc9a8e31478] Preparing to wait for external
  event network-vif-plugged-b6c3af4c-98cc-4077-8d4d-7009835c0c5c
  prepare_for_instance_event
  /opt/stack/new/nova/nova/compute/manager.py:324

  2016-11-11 00:50:29.527 3175 DEBUG nova.compute.manager [req-5f5774e3
  -92ee-4e49-947d-ce1c879bc1ab tempest-ServersTestManualDisk-1338550150
  tempest-ServersTestManualDisk-1338550150] [instance:
  79c00d5c-b285-44e5-b8db-9fc9a8e31478] Preparing to wait for external
  event network-vif-plugged-f5d2ac5a-36bf-4562-b2d0-18c40f640a3c
  prepare_for_instance_event
  /opt/stack/new/nova/nova/compute/manager.py:324

  2016-11-11 00:50:29.528 3175 DEBUG nova.virt.xenapi.vmops [req-
  5f5774e3-92ee-4e49-947d-ce1c879bc1ab tempest-
  ServersTestManualDisk-1338550150 tempest-
  ServersTestManualDisk-1338550150] wait for instance event:[('network-
  vif-plugged', u'b6c3af4c-98cc-4077-8d4d-7009835c0c5c'), ('network-vif-
  plugged', u'f5d2ac5a-36bf-4562-b2d0-18c40f640a3c')] _spawn
  /opt/stack/new/nova/nova/virt/xenapi/vmops.py:599

  Then it starts doing the vif plugging. The odd thing is that it logs
  twice that it is preparing to wait for external events, but only logs
  'wait for instance event' once. So is it waiting for another event
  that never arrives?
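For context, the prepare-then-wait pattern visible in the log can be sketched roughly like this (EventWaiter and its methods are illustrative names, not nova's actual classes): interest in each network-vif-plugged event is registered first, and the spawn path then blocks until all registered events arrive or the timeout fires.

```python
import threading

class EventWaiter:
    """Wait until every registered external event has been delivered."""

    def __init__(self, event_names):
        self._pending = set(event_names)
        self._done = threading.Event()

    def notify(self, name):
        # Called when a network-vif-plugged callback arrives.
        self._pending.discard(name)
        if not self._pending:
            self._done.set()

    def wait(self, timeout):
        # Mirrors vif_plugging_timeout: raise if any event never arrived.
        if not self._done.wait(timeout):
            raise TimeoutError('Timeout waiting for vif plugging callback: %s'
                               % sorted(self._pending))

w = EventWaiter(['network-vif-plugged-b6c3', 'network-vif-plugged-f5d2'])
w.notify('network-vif-plugged-b6c3')
w.notify('network-vif-plugged-f5d2')
w.wait(1.0)  # both events delivered, so this returns immediately
```

If only one of the two registered events is ever notified, `wait()` raises after the timeout, which matches the "Timeout waiting for vif plugging callback" warning in the log above.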

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1640993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1669840] Re: OCATA: Error: Unable to retrieve instances.

2017-03-06 Thread Maciej Szankin
Hello, kadtab.

Please be more explicit about your configuration - for instance, whether
the installation was devstack, from packages, or some other deployment
tool - and the steps that lead to your problem. That would help
reproduce it.

Because of the above, I am marking this one as invalid. Please provide
additional information and, after doing so, feel free to mark this bug
as "New".

Other than that - cheers!

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1669840

Title:
  OCATA: Error: Unable to retrieve instances.

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I have configured Ocata release on Ubuntu 16.

  Everything seems to be working until I install Cinder volume. After
  configuring cinder, I started receiving an error under Instances
  option of Horizon.

  I get an error stating "Unable to retrieve instances."

  Any guide in the above mentioned regard will be highly appreciated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1669840/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1668907] Re: Failed to start nova-compute.service if the cluster_name includes a few of special characters

2017-03-06 Thread Maciej Szankin
While it may be true that nova-compute crashes if the hostname contains
a '%' character, this is not an issue that should be handled on our
side: '%' is not a character that is allowed to appear in a hostname
string.

https://en.wikipedia.org/wiki/Hostname, section "Restrictions on valid 
hostnames"
"The Internet standards (Requests for Comments) for protocols mandate that 
component hostname labels may contain only the ASCII letters 'a' through 'z' 
(in a case-insensitive manner), the digits '0' through '9', and the hyphen 
('-'). The original specification of hostnames in RFC 952, mandated that labels 
could not start with a digit or with a hyphen, and must not end with a hyphen. 
However, a subsequent specification (RFC 1123) permitted hostname labels to 
start with digits. No other symbols, punctuation characters, or white space are 
permitted."
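A label check along the lines of the RFC 1123 rules quoted above can be sketched like this (an illustration, not code from nova):

```python
import re

# One RFC 1123 hostname label: ASCII letters, digits and hyphens,
# 1-63 characters, not starting or ending with a hyphen.
LABEL = re.compile(r'^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$')

def is_valid_hostname(name):
    # A hostname is dot-separated labels, at most 253 characters overall.
    if not name or len(name) > 253:
        return False
    return all(LABEL.match(label) for label in name.split('.'))

assert is_valid_hostname('compute1.hostname.com')
assert not is_valid_hostname('cluster%01')   # '%' is never allowed
```

Under these rules a vCenter cluster_name containing '%' can never be a valid hostname label, which is why the report was closed as invalid.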

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1668907

Title:
  Failed to start nova-compute.service if the cluster_name includes a
  few of special characters

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The compute node configs with vmware, if the cluster_name includes
  special character "%", the nova-compute.service will not start
  successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1668907/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1669847] [NEW] sample config default values show details of infra worker instead of intended default values

2017-03-03 Thread Maciej Szankin
Public bug reported:

Following the discovery in [1], a few more places were found where a
config option's default reports details of the infra worker instead of
the intended value.

Affected options are:

* console_host, nova/conf/compute.py
* xenserver.console_public_hostname, nova/conf/xenserver.py

[1] https://bugs.launchpad.net/nova/+bug/1669746

For context:

#
# Console proxy host to be used to connect to instances on this host. It is the
# publicly visible name for the console host.
#
# Possible values:
#
# * Current hostname (default) or any string representing hostname.
#  (string value)
#console_host = socket.gethostname()

#
# Publicly visible name for this console host.
#
# Possible values:
#
# * A string representing a valid hostname
#  (string value)
# Deprecated group/name - [DEFAULT]/console_public_hostname
#console_public_hostname = ubuntu-xenial-osic-cloud1-s3700-7551763

** Affects: nova
 Importance: Medium
 Assignee: Maciej Szankin (mszankin)
 Status: In Progress


** Tags: doc

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: (unassigned) => Maciej Szankin (mszankin)

** Changed in: nova
   Status: New => In Progress

** Tags added: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1669847

Title:
  sample config default values show details of infra worker instead of
  intended default values

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Following the discovery in [1], a few more places were found where a
  config option's default reports details of the infra worker instead
  of the intended value.

  Affected options are:

  * console_host, nova/conf/compute.py
  * xenserver.console_public_hostname, nova/conf/xenserver.py

  [1] https://bugs.launchpad.net/nova/+bug/1669746

  For context:

  #
  # Console proxy host to be used to connect to instances on this host. It is the
  # publicly visible name for the console host.
  #
  # Possible values:
  #
  # * Current hostname (default) or any string representing hostname.
  #  (string value)
  #console_host = socket.gethostname()

  #
  # Publicly visible name for this console host.
  #
  # Possible values:
  #
  # * A string representing a valid hostname
  #  (string value)
  # Deprecated group/name - [DEFAULT]/console_public_hostname
  #console_public_hostname = ubuntu-xenial-osic-cloud1-s3700-7551763

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1669847/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229445] Re: db type could not be determined

2017-02-16 Thread Maciej Szankin
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1229445

Title:
  db type could not be determined

Status in Ironic:
  Fix Released
Status in OpenStack Compute (nova):
  New
Status in Testrepository:
  Triaged

Bug description:
  In openstack/python-novaclient project, run test in py27 env, then run
  test in py33 env,  the following error will stop test:

  db type could not be determined

  But, if you run "tox -e py33" fist, then run "tox -e py27", it will be
  fine, no error.

  workaround:
  remove the file in .testrepository/times.dbm, then run py33 test, it will be 
fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1229445/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640164] Re: Rolling upgrade M to N: DBDeadlock Error when create instance during sync database

2017-01-05 Thread Maciej Szankin
** Changed in: nova
   Status: In Progress => Won't Fix

** Changed in: nova
   Status: Won't Fix => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1640164

Title:
  Rolling upgrade M to N:  DBDeadlock Error when create instance during
  sync database

Status in OpenStack Compute (nova):
  New

Bug description:
  I have 3 controller nodes running HA active/active, using KVM
  hypervisor and Maria cluster as shared database. The system was
  deployed by Devstack Mitaka version on virtual machines which was
  created by virt-manager.

  I have upgraded Keystone to the N version, then I tried to rolling-upgrade Nova
from the M to the N version following:
  http://docs.openstack.org/developer/nova/upgrade.html#rolling-upgrade-process

  The document said that:
  Using the newly installed nova code, run the DB sync. (nova-manage db sync; 
nova-manage api_db sync). These schema change operations should have minimal or 
no effect on performance, and should not cause any operations to fail.

  However, during the database sync I cannot create a VM. Nova raises:

  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-b5c82715-6306-4f6d-972b-3387015da12c)

  Full log here: http://paste.openstack.org/show/588365/

  After the sync process finishes, I can create VMs again.

  
  ==Reproduce==

  # Controller1:
  1. Stop all nova service, except nova api (n-api).

  2. Upgrade source code:
  $ cd /opt/stack/nova/
  $ git checkout remotes/origin/stable/newton
  $ git checkout -b stable/newton remotes/origin/stable/newton
  $ git pull
  $ sudo -E pip install -r requirements.txt --upgrade

  3. Downgrade some packages dependency (because I used --upgrade as above)
  $ sudo pip uninstall oslo.messaging
  $ sudo pip uninstall kombu
  $ sudo pip uninstall cffi
  $ sudo -E pip install oslo.messaging==5.10.0
  $ sudo -E pip install kombu==3.0.35
  $ sudo -E pip install cffi==1.5.2

  4. Update /etc/nova/nova.conf:
  [upgrade_levels]
  compute = auto

  5. Sync DB
  $ nova-manage db sync
  $ nova-manage api_db sync

  6. During the Sync DB, try to create VM, execute on controller 2 and 3 (not 
concurrency):
  $ nova boot --flavor m1.nano --image 21ffa33b-e9eb-43f4-aa73-ceb8f2cbc6fc 
--nic net-name=net1 VM_test
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-b5c82715-6306-4f6d-972b-3387015da12c)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1640164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1633518] Re: The passphrase used to encrypt or decrypt volumes was mangled prior to Newton

2016-12-20 Thread Maciej Szankin
lyarwood - AFAIK we do not use the ``Fix Committed`` status, just ``Fix
Released``.

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1633518

Title:
  The passphrase used to encrypt or decrypt volumes was mangled prior to
  Newton

Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  In Progress

Bug description:
  Description
  ===

  tl;dr hex(x) previously stripped leading 0's from individual hex
  numbers while encoding the passphrase back to a hex string before use
  to encrypt/decrypt a luks volume.

  Prior to Newton the following method was used to encode passphrases
  when attempting to use or create a luks volume :

  def _get_passphrase(self, key):
  """Convert raw key to string."""
  return ''.join(hex(x).replace('0x', '') for x in key)

  
https://github.com/openstack/nova/blob/82190bdd283dda37f7517fd9a268b5e55183f06c/nova/volume/encryptors/cryptsetup.py#L92-L94

  This was replaced in Newton with the move to Castellan in the
  following change that altered both the decoding and encoding steps :

  Replace key manager with Castellan
  https://review.openstack.org/#/c/309614/

  The original method used the built-in hex() call to convert individual
  unsigned ints back to hex. This would strip the leading 0 from each
  hex digit pair, altering the eventual passphrase used to encrypt or
  decrypt the volume.

  For example, the following one liner represents both the initial
  decode step preformed by ConfKeyManager and the step above to encode
  the passphrase in the LuksEncryptor class :

  >>> ''.join(hex(x).replace('0x', '') for x in array.array('B', 
'752523eb50c3bf2ba3ff639c250405805fd4e779894ef5360e15e081696a'.decode('hex')).tolist())
  '752523eb50c3bf2ba3ff639c2545805fd4e779894ef536e15e081696a'

  Original string: 752523eb50c3bf2ba3ff639c250405805fd4e779894ef5360e15e081696a
  New string : 752523eb50c3bf2ba3ff639c25 4 5805fd4e779894ef536 e15e081696a

  The returned string is missing various 0's that have been stripped by
  the hex() call :

  >>> hex(14)
  '0xe'

  >>> int(0x0e)
  14

  >>> int(0xe)
  14

  >>> hex(4)
  '0x4'

  >>> int(0x04)
  4

  >>> int(0x4)
  4

  The following one liner represents the current decode and encode
  steps, producing the same string as is entered :

  >>> import binascii
  >>> 
binascii.hexlify(bytes(binascii.unhexlify('752523eb50c3bf2ba3ff639c250405805fd4e779894ef5360e15e081696a'))).decode('utf-8')
  u'752523eb50c3bf2ba3ff639c250405805fd4e779894ef5360e15e081696a'

  Original string: 752523eb50c3bf2ba3ff639c250405805fd4e779894ef5360e15e081696a
  New string : 752523eb50c3bf2ba3ff639c250405805fd4e779894ef5360e15e081696a

  IMHO the way to handle this is to add a simple retry in master and
  stable/newton when we fail due to a bad passphrase using the mangled
  passphrase.

  We should also improve the testing in this area as it appears all
  previous testing used zero based passphrases, missing this issue when
  it landed in Newton.

  More notes available downstream in the following bug :

  Nova encryption alters the key used
  https://bugzilla.redhat.com/show_bug.cgi?id=1382415

  Steps to reproduce
  ==
  - Encrypt a volume in Mitaka or earlier.
  - Upgrade to Newton or later.
  - Attempt to use the volume.

  Expected result
  ===
  Volume is decrypted and usable.

  Actual result
  =
  Unable to decrypt the volume due to the use of an modified passphrase during 
initial formatting and use prior to Newton.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/
 Newton and later.
 
  2. Which hypervisor did you use?
 Libvirt

  2. Which storage type did you use?
 N/A

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)
 N/A

  Logs & Configs
  ==
  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1633518/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567369] Re: Added server tags support in nova-api

2016-12-19 Thread Maciej Szankin
The fix for this bug was merged, so...

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1567369

Title:
  Added server tags support in nova-api

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  https://review.openstack.org/268932
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 537df23d85e0f7c461643efe6b6501d267ae99d0
  Author: Sergey Nikitin 
  Date:   Fri Jan 15 17:11:05 2016 +0300

  Added server tags support in nova-api
  
  Added new API microversion which allows the following:
  - add tag to the server
  - replace set of server tags with new set of tags
  - get information about server, including list of tags for server
  - get just list of tags for server
  - check if tag exists on a server
  - remove specified tag from server
  - remove all tags from server
  - search servers by tags
  
  DocImpact
  APIImpact
  
  Implements: blueprint tag-instances
  
  Change-Id: I9573aa52aae9f49945d8806ca5e52ada29fb087a

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1567369/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649297] [NEW] N313 hacking check is not being run

2016-12-12 Thread Maciej Szankin
Public bug reported:

Description
===
The N313 hacking check has a regex that prevents the check from being run.

Steps to reproduce
==
Change any configuration option to start with lower case letter and run ``tox 
-e pep8``

Expected result
===
``tox -e pep8`` command should fail, due to violating N313 hacking check.

Actual result
=
``tox -e pep8`` passes.

Environment
===
Current master branch (4728c3e4fde5b5b7b068f60ea410d663deea7db2)

Logs & Configs
==
None. This check also does not have any UT coverage.
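For illustration only (the real check lives in nova/hacking/checks.py and its exact regex differs), an N313-style check that flags config option names starting with an uppercase letter might look like:

```python
import re

# Matches the first string argument to a cfg.*Opt(...) call; group 1 is
# the option name's first character.
cfg_opt_re = re.compile(r"cfg\.\w*Opt\(\s*['\"]([A-Za-z])")

def check_opt_name(logical_line):
    """Yield an offense when a config option name starts uppercase."""
    m = cfg_opt_re.search(logical_line)
    if m and m.group(1).isupper():
        yield 0, "N313: config option name should start with a lowercase letter"

assert list(check_opt_name("opt = cfg.StrOpt('MyOption')"))
assert not list(check_opt_name("opt = cfg.StrOpt('my_option')"))
```

A too-strict or mis-anchored regex here would never match any logical line, which is exactly the failure mode described above: pep8 passes even though the rule is violated.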

** Affects: nova
 Importance: Low
 Assignee: Maciej Szankin (mszankin)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Maciej Szankin (mszankin)

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649297

Title:
  N313 hacking check is not being run

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The N313 hacking check has a regex that prevents the check from being run.

  Steps to reproduce
  ==
  Change any configuration option to start with lower case letter and run ``tox 
-e pep8``

  Expected result
  ===
  ``tox -e pep8`` command should fail, due to violating N313 hacking check.

  Actual result
  =
  ``tox -e pep8`` passes.

  Environment
  ===
  Current master branch (4728c3e4fde5b5b7b068f60ea410d663deea7db2)

  Logs & Configs
  ==
  None. This check also does not have any UT coverage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649297/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596507] Re: XenAPI: Support neutron security group

2016-11-29 Thread Maciej Szankin
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596507

Title:
  XenAPI: Support neutron security group

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  https://review.openstack.org/251271
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit bebc0a4b2ea915fa214518ea667cd25812cda058
  Author: Huan Xie 
  Date:   Mon Nov 30 09:24:54 2015 +

  XenAPI: Support neutron security group
  
  This implementation is to give support on neutron security group with
  XenServer as compute driver. When using neutron+openvswitch, the ovs
  agent on compute node cannot run correctly due to lack of qbr linux
  bridge on compute node. This change will add qbr linux bridge when
  xenserver as hypervisor
  Xenserver driver now doesn't have linux bridge, the connection is:
  compute node: vm-vif -> br-int -> br-eth
  network node: br-eth -> br-int -> br-ex
  With this implemented, linux bridge(qbr) will be added in compute
  node. Thus the security group rules can be applied on qbr bridge.
  The connection will look like:
  compute node: vm-vif -> qbr(linux bridge) -> br-int -> br-eth
  network node: br-eth -> br-int -> br-ex
  
  Closes-Bug: #1526138
  
  Implements: blueprint support-neutron-security-group
  
  DocImpact: /etc/modprobe.d/blacklist-bridge file in dom0 should be
  deleted since it prevent loading linux bridge module in dom0
  
  Depends-On: I377f8ad51e1d2725c3e0153e64322055fcce7b54
  
  Change-Id: Id9b39aa86558a9f7099caedabd2d517bf8ad3d68

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1596507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625101] Re: Some cinder failures return "500" errors to users

2016-11-29 Thread Maciej Szankin
Fixed in https://review.openstack.org/#/c/382660/

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1625101

Title:
  Some cinder failures return "500" errors to users

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When a user tries to delete a volume that is attached to a VM using
  nova the user gets an "Unexpected Exception" error when the cinder
  extension fails with an InvalidInput exception. This is not
  particularly helpful for troubleshooting and the message is misleading
  for users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1625101/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625462] Re: The value of migration.dest_compute is incorrect after resize_revert operation successfully

2016-11-29 Thread Maciej Szankin
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1625462

Title:
  The value of migration.dest_compute is incorrect after resize_revert
  operation successfully

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I have two host: tecs-controller-node and tecs-OpenStack-Nova.

  1. I did a migrate action, then the instance's state is as follows:

  +--------------------------------------+----------+---------------+------------+-------------+--------------------------------+
  | ID                                   | Name     | Status        | Task State | Power State | Networks                       |
  +--------------------------------------+----------+---------------+------------+-------------+--------------------------------+
  | ea40a5f7-d92b-4ad6-94c0-a18f2465519e | hanrong1 | VERIFY_RESIZE | -          | Running     | public=172.24.4.2, 2001:db8::8 |
  +--------------------------------------+----------+---------------+------------+-------------+--------------------------------+

  2. I did look at migrations' table for instance source_node and
  dest_node.

  mysql> select * from migrations where instance_uuid='ea40a5f7-d92b-4ad6-94c0-a18f2465519e'\G
  *************************** 1. row ***************************
            created_at: 2016-09-20 02:38:24
            updated_at: 2016-09-20 02:38:58
            deleted_at: NULL
                    id: 1
        source_compute: tecs-controller-node
          dest_compute: tecs-OpenStack-Nova
             dest_host: 192.168.1.60
                status: finished
         instance_uuid: ea40a5f7-d92b-4ad6-94c0-a18f2465519e
  old_instance_type_id: 6
  new_instance_type_id: 6
           source_node: tecs-controller-node
             dest_node: tecs-OpenStack-Nova
               deleted: 0
        migration_type: migration
                hidden: 0
          memory_total: NULL
      memory_processed: NULL
      memory_remaining: NULL
            disk_total: NULL
        disk_processed: NULL
        disk_remaining: NULL

  source_compute: tecs-controller-node
  source_node: tecs-controller-node
  dest_compute:tecs-OpenStack-Nova
  dest_node: tecs-OpenStack-Nova

  3. I did resize-revert action
  stack@tecs-controller-node:~$ nova resize-revert hanrong1
  stack@tecs-controller-node:~$ nova list
  +--------------------------------------+----------+---------------+------------------+-------------+--------------------------------+
  | ID                                   | Name     | Status        | Task State       | Power State | Networks                       |
  +--------------------------------------+----------+---------------+------------------+-------------+--------------------------------+
  | ea40a5f7-d92b-4ad6-94c0-a18f2465519e | hanrong1 | REVERT_RESIZE | resize_reverting | Running     | public=172.24.4.2, 2001:db8::8 |
  +--------------------------------------+----------+---------------+------------------+-------------+--------------------------------+
  stack@tecs-controller-node:~$ nova list
  +--------------------------------------+----------+--------+------------+-------------+--------------------------------+
  | ID                                   | Name     | Status | Task State | Power State | Networks                       |
  +--------------------------------------+----------+--------+------------+-------------+--------------------------------+
  | ea40a5f7-d92b-4ad6-94c0-a18f2465519e | hanrong1 | ACTIVE | -          | Running     | public=172.24.4.2, 2001:db8::8 |
  +--------------------------------------+----------+--------+------------+-------------+--------------------------------+

[Yahoo-eng-team] [Bug 1625644] [NEW] Policy config is misplaced

2016-09-20 Thread Maciej Szankin
Public bug reported:

Patch 9864801d468de5dde79141cbab4374bd2310bef2 introduced config options
that are registered outside nova/conf/ directory, thus violating check
N342 (which does not work, it will be fixed in
https://review.openstack.org/#/c/355597/ which cannot be introduced
until the issue with policy configs is fixed).

Logs:
2016-09-14 08:33:33.496626 | ./nova/cmd/policy_check.py:39:1: N342  Config 
options should be in the central location '/nova/conf/*'. Do not declare new 
config options outside of that folder.
2016-09-14 08:33:33.496779 | ./nova/cmd/policy_check.py:148:1: N342  Config 
options should be in the central location '/nova/conf/*'. Do not declare new 
config options outside of that folder.

** Affects: nova
 Importance: Undecided
 Assignee: Maciej Szankin (mszankin)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Maciej Szankin (mszankin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1625644

Title:
  Policy config is misplaced

Status in OpenStack Compute (nova):
  New

Bug description:
  Patch 9864801d468de5dde79141cbab4374bd2310bef2 introduced config
  options that are registered outside nova/conf/ directory, thus
  violating check N342 (which does not work, it will be fixed in
  https://review.openstack.org/#/c/355597/ which cannot be introduced
  until the issue with policy configs is fixed).

  Logs:
  2016-09-14 08:33:33.496626 | ./nova/cmd/policy_check.py:39:1: N342  Config 
options should be in the central location '/nova/conf/*'. Do not declare new 
config options outside of that folder.
  2016-09-14 08:33:33.496779 | ./nova/cmd/policy_check.py:148:1: N342  Config 
options should be in the central location '/nova/conf/*'. Do not declare new 
config options outside of that folder.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1625644/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615903] Re: free_disk_gb is not correctly, because swap disk size is not minus.

2016-08-23 Thread Maciej Szankin
Swap partition is a disk space that is not available for storing any
kind of data other than memory dumps. So - I think it is perfectly fine
to assume that the size of swap sums up with used space.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615903

Title:
  free_disk_gb is not correctly, because swap disk size is not  minus.

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  1. code in compute/resource_tracker.py

  def _update_usage(self, usage, sign=1):
      mem_usage = usage['memory_mb']
      disk_usage = usage.get('root_gb', 0)

      overhead = self.driver.estimate_instance_overhead(usage)
      mem_usage += overhead['memory_mb']
      disk_usage += overhead.get('disk_gb', 0)

      self.compute_node.memory_mb_used += sign * mem_usage
      self.compute_node.local_gb_used += sign * disk_usage
      self.compute_node.local_gb_used += sign * usage.get('ephemeral_gb', 0)
      self.compute_node.vcpus_used += sign * usage.get('vcpus', 0)

      # free ram and disk may be negative, depending on policy:
      self.compute_node.free_ram_mb = (self.compute_node.memory_mb -
                                       self.compute_node.memory_mb_used)
      self.compute_node.free_disk_gb = (self.compute_node.local_gb -
                                        self.compute_node.local_gb_used)

      self.compute_node.running_vms = self.stats.num_instances

  2. So I think self.compute_node.local_gb_used should include the swap disk
  size, and free_disk_gb should have the swap disk size subtracted.
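
  The reporter's proposed accounting can be sketched as a standalone helper
  (a hypothetical illustration, not the actual nova code; it assumes the
  flavor's `swap` field is given in MB, as nova flavors define it):

```python
def disk_usage_gb(usage):
    """Per-instance disk usage in GB, counting swap as the reporter suggests.

    Hypothetical sketch -- nova's real _update_usage() shown above does
    not include swap in disk_usage.
    """
    root = usage.get('root_gb', 0)
    ephemeral = usage.get('ephemeral_gb', 0)
    swap_gb = usage.get('swap', 0) / 1024.0  # flavor swap is given in MB
    return root + ephemeral + swap_gb


# 20 GB root + 10 GB ephemeral + 2048 MB swap -> 32.0 GB counted as used
print(disk_usage_gb({'root_gb': 20, 'ephemeral_gb': 10, 'swap': 2048}))
```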

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1615903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614556] Re: Resize does not allow you to resize down in mitaka

2016-08-23 Thread Maciej Szankin
This is not a bug, in my opinion.

Resizing down works when the two flavors have the same disk parameters.
Resizing the disk down is not permitted because it could corrupt the
filesystem.
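
The rule described above can be sketched as a small guard (a hypothetical
helper for illustration, not nova's actual resize code):

```python
def resize_allowed(old_flavor, new_flavor):
    """Allow a resize unless any disk dimension would shrink.

    Hypothetical sketch of the policy described above: growing disk is
    fine; shrinking risks filesystem corruption and is rejected.
    """
    for key in ('root_gb', 'ephemeral_gb', 'swap'):
        if new_flavor.get(key, 0) < old_flavor.get(key, 0):
            return False
    return True


# Same disk params: allowed.  Smaller root disk: rejected.
print(resize_allowed({'root_gb': 100}, {'root_gb': 100}))  # True
print(resize_allowed({'root_gb': 200}, {'root_gb': 100}))  # False
```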

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1614556

Title:
  Resize does not allow you to resize down in mitaka

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Deployed cluster with Fuel 9.0.

  Created and launched an instance with resources of 2VCPUs, 8GB of ram,
  100GB ephemeral storage. Ephemeral storage is not backed by CEPH.

  I was able to successfully use Horizon to resize the instance (CentOS)
  from the current running resources above to a larger instances with
  4VCPUS, 16GB of ram and 200GB of ephemeral storage. However, when I
  went to go back down to the 2VCPUS, 8GB Ram, 100GB ephemeral the
  instance won't resize.

  In horizon

  1. click on the "Resize" option
  2. Confirm the resize
  3. Horizon displays a "Success" message (but the resize actually failed
in the background)

  I checked the instance's "Action Log" in Horizon and saw that it had
errored. Checking the logs on the controller shows:

  WARNING nova.scheduler.host_manager [req-d46ad3a1-be18-464b-
  a3b9-11123e481fcc bdb162ee567d4230a988895f2a000a8b9
  84ec9bb0ccc34eea84fbf49b557c4a66] Host has more disk space than
  database expected (43gb > 34gb)

  Searching around on the internet I found
  https://ask.openstack.org/en/question/43359/resize-openstack-icehouse-
  instance-at-same-node-bug/ and it isn't clear whether this was a bug or
  not. Some people report that it works for them in older versions but
  not in newer ones. Others report that resizing down is expected to fail
  and should simply show an error message. I just need clarification
  whether this is a bug, or whether it is intended to error on a resize
  down (granted, I don't think it should report success and then fail).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1614556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329755] Re: novncproxy child processes don't quit while stopping novncproxy

2016-08-17 Thread Maciej Szankin
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329755

Title:
  novncproxy child processes don't quit while stopping novncproxy

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  1. We start the nova-novncproxy service.

  2. Run 'ps -ef|grep nova-novncproxy'; we find only one process.

  3. Then we get a VNC URL such as
  "http://11.12.13.14:6080/vnc_auto.html?token=-9229-43c33-49f3f07-;
  , for example via 'nova get-vnc-console server-uuid novnc', and open it
  in a browser.

  4. Now run 'ps -ef|grep nova-novncproxy' again; we find two vnc
  processes: obviously, one main and one child process.

  5. Now we stop the novncproxy main process and run 'ps -ef|grep nova-
  novncproxy'; one process is still alive. It is the child process, and
  its parent pid is 1, the init process.

  6. Now we start the novncproxy process again, for example 'python
  /usr/bin/novncproxy --config /etc/nova/nova.conf', and find that it
  can't start, with the error '2014-06-12 20:11:07.635 7356
  TRACE nova.cmd.novncproxy error: [Errno 98] Address already in use'.

  7. Close the browser window that opened the VNC console, then start
  novncproxy; it starts fine.


  This means that when stopping novncproxy, we need to terminate all
  child processes.
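
  The fix idea can be sketched with a standalone example (a hypothetical
  illustration, not the actual novncproxy code): the parent keeps track of
  its children and terminates them before exiting, so no orphan is left
  holding the listening port.

```python
import multiprocessing
import time


def serve_client():
    # Stand-in for a novncproxy child handling one websocket client.
    while True:
        time.sleep(0.1)


def stop_all(children):
    """Terminate and reap child processes before the parent exits."""
    for child in children:
        child.terminate()
        child.join()


if __name__ == '__main__':
    children = [multiprocessing.Process(target=serve_client)
                for _ in range(2)]
    for child in children:
        child.start()
    stop_all(children)
    print(all(not child.is_alive() for child in children))  # True
```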

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1329755/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268955] Re: OVS agent updates the wrong port when using XenAPI + Neutron with HVM or PVHVM

2016-08-15 Thread Maciej Szankin
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1268955

Title:
  OVS agent updates the wrong port when using XenAPI + Neutron with HVM
  or PVHVM

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Environment
  ===========
  - Xen Server 6.2
  - OpenStack Havana installed with Packstack
  - Neutron OVS agent using VLAN

  From time to time, when an instance is started, it fails to get
  network connectivity. As a result the instance cannot get its IP
  address from DHCP and it remains unreachable.

  After further investigation, it appears that the OVS agent running on
  the compute node is updating the wrong OVS port because on startup, 2
  ports exist for the same instance: vifX.0 and tapX.0. The agent
  updates whatever port is returned in first position (see logs below).
  Note that the tapX.0 port is only transient and disappears after a few
  seconds.

  Workaround
  ==========

  Manually update the OVS port on dom0:

  $ ovs-vsctl set Port vif17.0 tag=1

  OVS Agent logs
  ==============

  2014-01-14 14:15:11.382 18268 DEBUG neutron.agent.linux.utils [-] Running 
command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', '--', '--columns=external_ids,name,ofport', 'find', 
'Interface', 'external_ids:iface-id="98679ab6-b879-4b1b-a524-01696959d468"'] 
execute /usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:43
  2014-01-14 14:15:11.403 18268 DEBUG qpid.messaging.io.raw [-] SENT[3350c68]: 
'\x0f\x01\x00\x19\x00\x01\x00\x00\x00\x00\x00\x00\x04\n\x01\x00\x07\x00\x010\x00\x00\x00\x00\x01\x0f\x00\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x00\x02\n\x01\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x81'
 writeable /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:480
  2014-01-14 14:15:11.649 18268 DEBUG neutron.agent.linux.utils [-]
  Command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', '--', '--columns=external_ids,name,ofport', 'find', 
'Interface', 'external_ids:iface-id="98679ab6-b879-4b1b-a524-01696959d468"']
  Exit code: 0
  Stdout: 'external_ids: {attached-mac="fa:16:3e:46:1e:91", 
iface-id="98679ab6-b879-4b1b-a524-01696959d468", iface-status=active, 
xs-network-uuid="b2bf90df-be17-a4ff-5c1e-3d69851f508a", 
xs-vif-uuid="2d2718d8-6064-e734-2737-cdcb4e06efc4", 
xs-vm-uuid="7f7f1918-3773-d97c-673a-37843797f70a"}\nname: 
"tap29.0"\nofport  : 52\n\nexternal_ids: 
{attached-mac="fa:16:3e:46:1e:91", 
iface-id="98679ab6-b879-4b1b-a524-01696959d468", iface-status=inactive, 
xs-network-uuid="b2bf90df-be17-a4ff-5c1e-3d69851f508a", 
xs-vif-uuid="2d2718d8-6064-e734-2737-cdcb4e06efc4", 
xs-vm-uuid="7f7f1918-3773-d97c-673a-37843797f70a"}\nname: 
"vif29.0"\nofport  : 51\n\n'
  Stderr: '' execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:60
  2014-01-14 14:15:11.650 18268 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Port 
98679ab6-b879-4b1b-a524-01696959d468 updated. Details: {u'admin_state_up': 
True, u'network_id': u'ad37f107-074b-4c58-8f36-4705533afb8d', 
u'segmentation_id': 100, u'physical_network': u'default', u'device': 
u'98679ab6-b879-4b1b-a524-01696959d468', u'port_id': 
u'98679ab6-b879-4b1b-a524-01696959d468', u'network_type': u'vlan'}
  2014-01-14 14:15:11.650 18268 DEBUG neutron.agent.linux.utils [-] Running 
command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', 'set', 'Port', 'tap29.0', 'tag=1'] execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:43
  2014-01-14 14:15:11.913 18268 DEBUG neutron.agent.linux.utils [-]
  Command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'ovs-vsctl', '--timeout=2', 'set', 'Port', 'tap29.0', 'tag=1']
  Exit code: 0
  Stdout: '\n'
  Stderr: '' execute 
/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:60
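
  The selection problem in the logs above can be illustrated with a small
  sketch (hypothetical code for illustration, not necessarily the upstream
  fix): when two records come back for the same iface-id, prefer the
  persistent vifX.Y port over the transient tapX.Y one, instead of taking
  whichever record ovs-vsctl returns first.

```python
def pick_vif_port(ports):
    """Prefer the persistent vifX.Y record over the transient tapX.Y one.

    Hypothetical selection rule; `ports` is a list of parsed
    'ovs-vsctl find Interface' rows for a single iface-id.
    """
    vifs = [p for p in ports if p['name'].startswith('vif')]
    if vifs:
        return vifs[0]
    return ports[0] if ports else None


# Records as in the log above: tap29.0 is returned first, vif29.0 second.
ports = [
    {'name': 'tap29.0', 'ofport': 52},
    {'name': 'vif29.0', 'ofport': 51},
]
print(pick_vif_port(ports)['name'])  # vif29.0
```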

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1268955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1549854] Re: compute host "has not been heard from in a while" failing ceph jobs

2016-08-15 Thread Maciej Szankin
** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1549854

Title:
  compute host "has not been heard from in a while" failing ceph jobs

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Seen here:

  http://logs.openstack.org/89/283689/3/gate/gate-tempest-dsvm-full-
  ceph-src-glance_store/f23f9eb/logs/screen-n-sch.txt.gz?level=TRACE

  and here:

  http://logs.openstack.org/08/283708/3/check/gate-tempest-dsvm-full-
  ceph/094fc27/logs/screen-n-sch.txt.gz?level=TRACE

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22has%20not%20been%20heard%20from%20in%20a%20while%5C%22%20AND%20tags%3A%5C%22screen-n-sch.txt%5C%22%20AND%20build_name%3A*ceph*=7d

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1549854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474253] Re: Cannot rebuild an instance booted from volume

2016-08-15 Thread Maciej Szankin
Cleaning up this bug, as it was reopened automatically by infra due to a
change in gerrit. This should now be fixed, given Matt's comment.
Feel free to reopen if I am mistaken.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474253

Title:
  Cannot rebuild an instance booted from volume

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  A user rebuilds an instance booted from volume with the following steps.

  1. Stop the instance.
  2. Detach the root device volume.
  3. Attach a new root device volume.
  4. Start the instance.

  But currently this is impossible, for these reasons:

  1. The user is not allowed to detach a root device volume.
 - detach boot device volume without warning 
   (https://bugs.launchpad.net/nova/+bug/1279300)

  2. The user is not allowed to attach a root device volume except when 
creating an instance.
 - get_next_device_name, which is executed when attaching a volume, never 
returns a root_device_name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1474253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611888] Re: 1844-887-9236 Quickbooks Activation Support Phone Number

2016-08-15 Thread Maciej Szankin
** Changed in: nova
   Status: New => Won't Fix

** Changed in: nova
   Status: Won't Fix => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1611888

Title:
  1844-887-9236 Quickbooks Activation Support Phone Number

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  1844-887-9236 Quickbooks Activation Support Phone Number

[Yahoo-eng-team] [Bug 1611892] Re: Quickbooks Customer Support Phone Number 1-844-887-9236 (SUPPORT TEAM)

2016-08-15 Thread Maciej Szankin
** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1611892

Title:
  Quickbooks Customer Support Phone Number 1-844-887-9236 (SUPPORT TEAM)

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  quickbooks customer support phone number 1-844-887-9236

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1611892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1613411] [NEW] check_config_option_in_central_place does not check anything

2016-08-15 Thread Maciej Szankin
Public bug reported:

Description
===========
Hacking check check_config_option_in_central_place does not check anything.
This is because of the `not` keyword at
https://github.com/openstack/nova/blob/master/nova/hacking/checks.py#L660 ,
which causes every checked file that is not in the nova/conf directory to be
skipped.
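
A simplified sketch of the check and the inverted guard (hypothetical code
for illustration, not the exact nova/hacking/checks.py implementation):

```python
import re

# Matches declarations such as cfg.StrOpt(...), cfg.IntOpt(...), etc.
OPT_RE = re.compile(r'cfg\.[A-Za-z]+Opt\(')


def check_config_option_in_central_place(logical_line, filename):
    """Report N342 for config options declared outside nova/conf.

    The buggy version reads `if "nova/conf" not in filename: return`,
    which skips every file OUTSIDE nova/conf -- i.e. exactly the files
    the check is supposed to inspect.  The guard must skip files INSIDE
    nova/conf instead:
    """
    if "nova/conf" in filename:
        return None
    if OPT_RE.search(logical_line):
        return (0, "N342: Config options should be in the central "
                   "location '/nova/conf/*'")
    return None


# Flags the option added outside nova/conf in the reproduction diff below.
print(check_config_option_in_central_place(
    "new_opt = cfg.StrOpt('test_opt')", "nova/cmd/api.py"))
```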


Steps to reproduce
==================
1. Add dummy config opt to random file, for example:

  diff --git a/nova/cmd/api.py b/nova/cmd/api.py
  index d8c76ca..d5fe906 100644
  --- a/nova/cmd/api.py
  +++ b/nova/cmd/api.py
  @@ -22,6 +22,7 @@ Starts both the EC2 and OpenStack APIs in separate 
greenthreads.

   import sys

  +from oslo_config import cfg
   from oslo_log import log as logging
   from oslo_reports import guru_meditation_report as gmr
   import six
  @@ -39,6 +40,8 @@ CONF = nova.conf.CONF


   def main():
  +new_opt = cfg.StrOpt('test_opt', default='test',
  + help='test_opt description')
   config.parse_args(sys.argv)
   logging.setup(CONF, "nova")
   utils.monkey_patch()

2. Run tox with command:
  $ tox -epep8

3. Observe as no N342 checking error is reported.


Expected result
===============
N342 checking error is reported

Actual result
=============
No N342 checking error is reported.

Environment
===========
Nova master branch, commit 15e536518ae1a366c8a8b15d9183072050e4b6f2 (newest 
when reporting this bug).

Logs & Configs
==============
No need for logs.

** Affects: nova
 Importance: Undecided
 Assignee: Maciej Szankin (mszankin)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Maciej Szankin (mszankin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1613411

Title:
  check_config_option_in_central_place does not check anything

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===========
  Hacking check check_config_option_in_central_place does not check anything.
  This is because of the `not` keyword at
https://github.com/openstack/nova/blob/master/nova/hacking/checks.py#L660 ,
which causes every checked file that is not in the nova/conf directory to be
skipped.

  
  Steps to reproduce
  ==================
  1. Add dummy config opt to random file, for example:

diff --git a/nova/cmd/api.py b/nova/cmd/api.py
index d8c76ca..d5fe906 100644
--- a/nova/cmd/api.py
+++ b/nova/cmd/api.py
@@ -22,6 +22,7 @@ Starts both the EC2 and OpenStack APIs in separate 
greenthreads.

 import sys

+from oslo_config import cfg
 from oslo_log import log as logging
 from oslo_reports import guru_meditation_report as gmr
 import six
@@ -39,6 +40,8 @@ CONF = nova.conf.CONF

  
 def main():
+new_opt = cfg.StrOpt('test_opt', default='test',
+ help='test_opt description')
 config.parse_args(sys.argv)
 logging.setup(CONF, "nova")
 utils.monkey_patch()

  2. Run tox with command:
$ tox -epep8

  3. Observe as no N342 checking error is reported.


  Expected result
  ===============
  N342 checking error is reported

  Actual result
  =============
  No N342 checking error is reported.

  Environment
  ===========
  Nova master branch, commit 15e536518ae1a366c8a8b15d9183072050e4b6f2 (newest 
when reporting this bug).

  Logs & Configs
  ==============
  No need for logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1613411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590250] Re: vm console showing blank screen

2016-07-06 Thread Maciej Szankin
Tested current master branch on 16.04 with local.conf just as in the bug
description, and everything works correctly.

Are you sure that the console is available on the same IP as the VM? And
if not, is that IP ping-able?

Marking as invalid until further proof appears.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590250

Title:
  vm console showing blank screen

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Hi,
  I installed OpenStack using devstack on a new Ubuntu machine. It installed 
properly, and a VM also instantiated successfully, but when I try to access 
the console of the VM instance, it simply shows a blank screen. 
  I checked the novnc server; it is working and receiving requests. 
  I am running the following services: 

  disable_service n-net
  enable_service neutron q-svc q-agt q-dhcp q-l3 q-meta
  disable_service n-spice
  enable_service n-novnc
  disable_service n-xvnc
  enable_service n-sproxy
  disable_service tempest

  This problem occurs on Liberty as well as on Mitaka.

  Could this problem also relate to system hardware?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp