[Yahoo-eng-team] [Bug 1296532] Re: Network interface should be restarted if instance has a static ip address.

2014-03-24 Thread Hiroyuki Eguchi
** Changed in: cloud-init
   Status: New => Incomplete

** Changed in: cloud-init
   Status: Incomplete => New

** Changed in: cloud-init
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1296532

Title:
  Network interface should be restarted if instance has a static ip
  address.

Status in Init scripts for use on cloud images:
  Invalid

Bug description:
  Network interface should be restarted if instance has a static ip
  address.

  Details are as follows.

  1. Create an instance, specifying a static IP address A.

  2. Create a snapshot of the instance.

  3. Create an instance from the snapshot, specifying a static IP
  address B.

  3.1 The instance starts (at this point, the network interface is still
  configured to use IP address A).
  3.2 cloud-init edits /etc/network/interfaces to use IP address B.
  3.3 cloud-init executes the ifup command, but the network interface is
  already up, so the new network configuration is not applied.

  
  We have to modify the _bring_up_interface method like this.

  --- cloudinit/distros/__init__.py   2014-02-12 19:56:55 +0000
  +++ cloudinit/distros/__init__.py   2014-03-24 05:59:00 +0000
  @@ -271,11 +271,13 @@
           util.write_file(self.hosts_fn, contents.getvalue(), mode=0644)

       def _bring_up_interface(self, device_name):
  -        cmd = ['ifup', device_name]
  -        LOG.debug("Attempting to run bring up interface %s using command %s",
  -                  device_name, cmd)
  +        cmd_down = ['ifdown', device_name]
  +        cmd_up = ['ifup', device_name]
  +        LOG.debug("Attempting to restart interface %s using command %s %s",
  +                  device_name, cmd_down, cmd_up)
           try:
  -            (_out, err) = util.subp(cmd)
  +            (_out, down_err) = util.subp(cmd_down)
  +            (_out, up_err) = util.subp(cmd_up)
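
  For reference, a minimal sketch of the complete patched method
  (assuming util.subp, util.logexc, and LOG as defined in
  cloudinit/distros/__init__.py; the rcs=[0, 1] tolerance for ifdown is
  an assumption, since ifdown fails harmlessly when the interface is not
  yet up):

    def _bring_up_interface(self, device_name):
        cmd_down = ['ifdown', device_name]
        cmd_up = ['ifup', device_name]
        LOG.debug("Attempting to restart interface %s using commands %s, %s",
                  device_name, cmd_down, cmd_up)
        try:
            # Take the interface down first so that ifup re-reads
            # /etc/network/interfaces even when the interface is already up.
            (_out, _down_err) = util.subp(cmd_down, rcs=[0, 1])
            (_out, up_err) = util.subp(cmd_up)
            if len(up_err):
                LOG.warn("Running %s resulted in stderr output: %s",
                         cmd_up, up_err)
            return True
        except util.ProcessExecutionError:
            util.logexc(LOG, "Running interface command %s failed", cmd_up)
            return False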

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1296532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1279881] Re: Cannot send scheduler.run_instance events

2014-03-24 Thread Mike Perez
Cinder patch: https://review.openstack.org/#/c/81696/

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
   Importance: Undecided => Critical

** Changed in: cinder
 Assignee: (unassigned) => John Griffith (john-griffith)

** Changed in: cinder
Milestone: None => icehouse-rc1

** Changed in: cinder
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1279881

Title:
  Cannot send scheduler.run_instance events

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When launching an instance I'm able to trigger the following error
  with notifications active:

  2014-02-13 16:26:39.974 ERROR oslo.messaging.notify._impl_messaging
  [-] Could not send notification to notifications.
  Payload={'_context_roles': [u'admin'], '_context_request_id': u'req-
  7d0ca56f-f609-40ef-91e1-acf7d5a4a607', '_context_quota_class': None,
  'event_type': 'scheduler.run_instance', '_context_service_catalog':
  [{u'endpoints': [{u'adminURL':
  u'http://162.209.87.220:8776/v1/8c5cfa81726043afb5641ec3d6665f27',
  u'region': u'RegionOne', u'id': u'125bb6de586c435eb4e14e5e62090b43',
  u'internalURL':
  u'http://162.209.87.220:8776/v1/8c5cfa81726043afb5641ec3d6665f27',
  u'publicURL':
  u'http://162.209.87.220:8776/v1/8c5cfa81726043afb5641ec3d6665f27'}],
  u'endpoints_links': [], u'type': u'volume', u'name': u'cinder'}],
  'timestamp': '2014-02-13 16:26:39.970161', '_context_user':
  u'b6bbfab520dc4f6b91a95cd819f0316c', '_unique_id':
  '447230aa40904dd1aa2b6a9319b57f56', '_context_instance_lock_checked':
  False, '_context_user_id': u'b6bbfab520dc4f6b91a95cd819f0316c',
  'payload': {'instance_id': u'84a4f45c-3f85-4527-ae85-6f323e136d01',
  'state': 'error', 'request_spec': {u'num_instances': 1,
  u'block_device_mapping': [{u'instance_uuid': u'84a4f45c-
  3f85-4527-ae85-6f323e136d01', u'guest_format': None, u'boot_index': 0,
  u'delete_on_termination': True, u'no_device': None,
  u'connection_info': None, u'snapshot_id': None, u'device_name': None,
  u'disk_bus': None, u'image_id': u'0fbb3b3b-
  7d69-42d2-8bc7-3b87073e69b2', u'source_type': u'image',
  u'device_type': u'disk', u'volume_id': None, u'destination_type':
  u'local', u'volume_size': None}], u'image': {u'status': u'active',
  u'name': u'ubuntu1310', u'deleted': False, u'container_format':
  u'bare', u'created_at': u'2014-02-13T15:50:11.00', u'disk_format':
  u'raw', u'updated_at': u'2014-02-13T15:50:17.00', u'id': u
  '0fbb3b3b-7d69-42d2-8bc7-3b87073e69b2', u'owner':
  u'8c5cfa81726043afb5641ec3d6665f27', u'min_ram': 0, u'checksum':
  u'5c7a196274f24cd40100eac98f661057', u'min_disk': 0, u'is_public':
  True, u'deleted_at': None, u'properties': {}, u'size': 244711424},
  u'instance_type': {u'root_gb': 40, u'name': u'm1.medium',
  u'ephemeral_gb': 0, u'memory_mb': 4096, u'vcpus': 2, u'extra_specs':
  {}, u'swap': 0, u'rxtx_factor': 1.0, u'flavorid': u'3',
  u'vcpu_weight': None, u'id': 1}, u'instance_properties': {u'vm_state':
  u'building', u'availability_zone': u'nova', u'terminated_at': None,
  u'ephemeral_gb': 0, u'instance_type_id': 1, u'user_data': None,
  u'cleaned': False, u'vm_mode': None, u'deleted_at': None,
  u'reservation_id': u'r-6sv4oy43', u'id': 2, u'security_groups':
  [{u'project_id': u'8c5cfa81726043afb5641ec3d6665f27', u'user_id':
  u'b6bbfab520dc4f6b91a95cd819f0316c', u'description': u'default',
  u'deleted': False, u'created_at': u'2014-02-13T15:50:52.00',
  u'updated_at': None, u'deleted_at': None, u'id': 1, u'name':
  u'default'}], u'disable_terminate': False, u'display_name': u'lol',
  u'uuid': u'84a4f45c-3f85-4527-ae85-6f323e136d01',
  u'default_swap_device': None, u'info_cache': {u'instance_uuid': u
  '84a4f45c-3f85-4527-ae85-6f323e136d01', u'deleted': False,
  u'created_at': u'2014-02-13T16:26:39.00', u'updated_at': None,
  u'network_info': [], u'deleted_at': None}, u'hostname': u'lol',
  u'launched_on': None, u'display_description': u'lol', u'key_data':
  None, u'kernel_id': u'', u'power_state': 0,
  u'default_ephemeral_device': None, u'progress': 0, u'project_id':
  u'8c5cfa81726043afb5641ec3d6665f27', u'launched_at': None,
  u'scheduled_at': None, u'node': None, u'ramdisk_id': u'',
  u'access_ip_v6': None, u'access_ip_v4': None, u'deleted': False,
  u'key_name': None, u'updated_at': None, u'host': None,
  u'ephemeral_key_uuid': None, u'architecture': None, u'user_id':
  u'b6bbfab520dc4f6b91a95cd819f0316c', u'system_metadata':
  {u'image_min_disk': u'40', u'instance_type_memory_mb': u'4096',
  u'instance_type_swap': u'0', u'instance_type_vcpu_weight': None,
  u'instance_type_root_gb': u'40', u'instance_type_id': u'1',
  u'instance_type_name': u'm1.medium', u'instance_type_ephemeral_gb':
  u'0', u'instance_type_rxtx_factor': u'1.0', u'instance_type_flavorid':
  u'3', 

[Yahoo-eng-team] [Bug 1296575] [NEW] TypeError: libvirt_info() takes exactly 6 arguments (7 given) when boot instance to rbd directly

2014-03-24 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Booting an instance fails when Nova is configured to boot instances
directly to an RBD pool.

2014-03-21 03:46:31.437 2433 ERROR nova.compute.manager 
[req-31ea99e5-80a2-449b-bdd8-f349d29b4388 3fafebf2c11240669ae6990abefb60ee 
d5fa85db624b4867a31f154b6520776d] [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] Error: libvirt_info() takes exactly 6 
arguments (7 given)
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] Traceback (most recent call last):
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1043, in 
_build_instance
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] set_access_ip=set_access_ip)
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1426, in _spawn
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1423, in _spawn
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] block_device_info)
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2088, in 
spawn
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] write_to_disk=True)
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 3084, in 
to_xml
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] disk_info, rescue, block_device_info)
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2960, in 
get_guest_config
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] inst_type):
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2737, in 
get_guest_storage_config
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] inst_type)
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2700, in 
get_guest_disk_config
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] self.get_hypervisor_version())
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] TypeError: libvirt_info() takes exactly 6 
arguments (7 given)
2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] 
2014-03-21 03:46:59.955 2433 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources

 
Nova.conf on compute node:
rbd_pool = nova
rbd_user = nova-compute
rbd_secret_uuid = 514c9fca-8cbe-11e2-9c52-3bc8c7819472
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf
libvirt_images_rbd_pool = glance
libvirt_images_type = rbd
libvirt_inject_password = false
libvirt_inject_partition = -2
libvirt_inject_key = false
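
For context, a hypothetical sketch of the kind of signature mismatch
behind this TypeError (the names are illustrative, not Nova's exact
code): the caller in driver.py passes an extra hypervisor version
argument that the installed image backend's libvirt_info() does not
accept:

    class Rbd(object):
        # Older backend: self plus five parameters, i.e. exactly 6 arguments.
        def libvirt_info(self, disk_bus, disk_dev, device_type,
                         cache_mode, extra_specs):
            return {}

    image = Rbd()
    # A newer caller passes a 7th argument (a hypervisor version):
    try:
        image.libvirt_info('virtio', 'vda', 'disk', 'none', {}, 1001002)
    except TypeError as e:
        print(e)  # libvirt_info() takes exactly 6 arguments (7 given)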

** Affects: heat
 Importance: Undecided
 Assignee: Kaya LIU (kayaliu)
 Status: Invalid

** Affects: nova
 Importance: Undecided
 Status: New

-- 
TypeError: libvirt_info() takes exactly 6 arguments (7 given) when boot 
instance to rbd directly
https://bugs.launchpad.net/bugs/1296575
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296575] Re: TypeError: libvirt_info() takes exactly 6 arguments (7 given) when boot instance to rbd directly

2014-03-24 Thread Kaya LIU
** Also affects: ubuntu
   Importance: Undecided
   Status: New

** Package changed: ubuntu => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296575

Title:
  TypeError: libvirt_info() takes exactly 6 arguments (7 given) when
  boot instance to rbd directly

Status in Orchestration API (Heat):
  Invalid
Status in OpenStack Compute (Nova):
  New

Bug description:
  Booting an instance fails when Nova is configured to boot instances
  directly to an RBD pool.

  2014-03-21 03:46:31.437 2433 ERROR nova.compute.manager 
[req-31ea99e5-80a2-449b-bdd8-f349d29b4388 3fafebf2c11240669ae6990abefb60ee 
d5fa85db624b4867a31f154b6520776d] [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] Error: libvirt_info() takes exactly 6 
arguments (7 given)
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] Traceback (most recent call last):
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1043, in 
_build_instance
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] set_access_ip=set_access_ip)
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1426, in _spawn
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1423, in _spawn
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] block_device_info)
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2088, in 
spawn
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] write_to_disk=True)
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 3084, in 
to_xml
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] disk_info, rescue, block_device_info)
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2960, in 
get_guest_config
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] inst_type):
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2737, in 
get_guest_storage_config
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] inst_type)
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2700, in 
get_guest_disk_config
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] self.get_hypervisor_version())
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] TypeError: libvirt_info() takes exactly 6 
arguments (7 given)
  2014-03-21 03:46:31.437 2433 TRACE nova.compute.manager [instance: 
3e3f1bc4-f472-4365-accf-4401f3c040f3] 
  2014-03-21 03:46:59.955 2433 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources

   
  Nova.conf on compute node:
  rbd_pool = nova
  rbd_user = nova-compute
  rbd_secret_uuid = 514c9fca-8cbe-11e2-9c52-3bc8c7819472
  libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
  libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf
  libvirt_images_rbd_pool = glance
  libvirt_images_type = rbd
  libvirt_inject_password = false
  libvirt_inject_partition = -2
  libvirt_inject_key = false

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1296575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296589] [NEW] Image API v2 image tags broken with E500

2014-03-24 Thread Erno Kuvaja
Public bug reported:

A set([]) is passed to json.dumps(), which is not serializable:
 self.func(req, *args, **kwargs)
   File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 625, in 
__call__
 request, **action_args)
   File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 644, in 
dispatch
 return method(*args, **kwargs)
   File /usr/lib/python2.7/dist-packages/glance/common/utils.py, line 422, in 
wrapped
 return func(self, req, *args, **kwargs)
   File /usr/lib/python2.7/dist-packages/glance/api/v2/image_tags.py,
 line 46, in update
 image_repo.save(image)
   File /usr/lib/python2.7/dist-packages/glance/domain/proxy.py, line 65, in 
save
 result = self.base.save(base_item)
   File /usr/lib/python2.7/dist-packages/glance/domain/proxy.py, line 65, in 
save
 result = self.base.save(base_item)
   File /usr/lib/python2.7/dist-packages/glance/notifier/__init__.py, line 
140, in save
 super(ImageRepoProxy, self).save(image)
   File
 /usr/lib/python2.7/dist-packages/glance/domain/proxy.py, line 65, in save
 result = self.base.save(base_item)
   File /usr/lib/python2.7/dist-packages/glance/api/policy.py, line 186, in 
save
 return super(ImageRepoProxy, self).save(image)
   File /usr/lib/python2.7/dist-packages/glance/domain/proxy.py, line 65, in 
save
 result = self.base.save(base_item)
   File /usr/lib/python2.7/dist-packages/glance/domain/proxy.py, line 65, in 
save
 
 result = self.base.save(base_item)
   File /usr/lib/python2.7/dist-packages/glance/store/__init__.py, line 398, 
in save
 result = super(ImageRepoProxy, self).save(image)
   File /usr/lib/python2.7/dist-packages/glance/domain/proxy.py, line 65, in 
save
 result = self.base.save(base_item)
   File /usr/lib/python2.7/dist-packages/glance/db/__init__.py, line 170, in 
save
 image.tags)
   File /usr/lib/python2.7/dist-packages/glance/db/registry/api.py, line 59,
 in wrapper
 return func(client, *args, **kwargs)
   File /usr/lib/python2.7/dist-packages/glance/db/registry/api.py, line 200, 
in image_tag_set_all
 client.image_tag_set_all(image_id=image_id, tags=tags)
   File /usr/lib/python2.7/dist-packages/glance/common/rpc.py, line 274, in 
method_proxy
 return self.do_request(item, **kw)
   File /usr/lib/python2.7/dist-packages/glance/common/rpc.py, line 240, in 
do_request
 'kwargs': kwargs}])
   File
 /usr/lib/python2.7/dist-packages/glance/common/client.py, line 64, in wrapped
 return func(self, *args, **kwargs)
   File /usr/lib/python2.7/dist-packages/glance/common/rpc.py, line 223, in 
bulk_request
 body = self._serializer.to_json(commands)
   File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 577, in 
to_json
 return json.dumps(data, default=self._sanitizer)
   File /usr/lib/python2.7/json/__init__.py, line 238, in dumps
 
 **kw).encode(obj)
   File /usr/lib/python2.7/json/encoder.py, line 201, in encode
 chunks = self.iterencode(o, _one_shot=True)
   File /usr/lib/python2.7/json/encoder.py, line 264, in iterencode
 return _iterencode(o, 0)
 ValueError: Circular reference detected

After adding a sanitizer check for it, SQLAlchemy gets an empty ID to fetch:
 Mar 21 14:58:26.627 HOST 24293 DEBUG glance.db.sqlalchemy.api [...] No image 
found with ID None
 Mar 21 14:58:26.628 HOST 24293 ERROR glance.common.rpc [...] RPC Call Error: 
No image found with ID None
 Traceback (most recent call last):
   File /usr/lib/python2.7/dist-packages/glance/common/rpc.py, line 177, in 
__call__
 result = method(req.context, **kwargs)
   File /usr/lib/python2.7/dist-packages/glance/db/sqlalchemy/api.py, line 
832, in image_get_all
 
 force_show_deleted=showing_deleted)
   File /usr/lib/python2.7/dist-packages/glance/db/sqlalchemy/api.py, line 
451, in _image_get
 raise exception.NotFound(msg)
 NotFound: No image found with ID None
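
For illustration, a minimal sketch of the serialization failure and a
fix, on Python 2 (the lambda default mirrors a sanitizer that hands the
set back unchanged, which would explain the "Circular reference
detected" above; the sanitizer name is hypothetical):

    import json

    # With no default= hook, json rejects sets outright:
    try:
        json.dumps({"tags": set([])})
    except TypeError as e:
        print(e)  # set([]) is not JSON serializable

    # If the default= hook returns the set unchanged, the encoder meets
    # the same object again and reports a circular reference instead:
    try:
        json.dumps({"tags": set([])}, default=lambda o: o)
    except ValueError as e:
        print(e)  # Circular reference detected

    # Converting sets to lists in the sanitizer avoids both failures:
    def sanitizer(obj):
        if isinstance(obj, set):
            return sorted(obj)
        raise TypeError("%r is not JSON serializable" % obj)

    print(json.dumps({"tags": set(["linux"])}, default=sanitizer))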

** Affects: glance
 Importance: Undecided
 Assignee: Erno Kuvaja (jokke)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Erno Kuvaja (jokke)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1296589

Title:
  Image API v2 image tags broken with E500

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  A set([]) is passed to json.dumps(), which is not serializable:
   self.func(req, *args, **kwargs)
 File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 625, 
in __call__
   request, **action_args)
 File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 644, 
in dispatch
   return method(*args, **kwargs)
 File /usr/lib/python2.7/dist-packages/glance/common/utils.py, line 422, 
in wrapped
   return func(self, req, *args, **kwargs)
 File /usr/lib/python2.7/dist-packages/glance/api/v2/image_tags.py,
   line 46, in update
   image_repo.save(image)
 File 

[Yahoo-eng-team] [Bug 1296590] [NEW] [libvirt] snapshots in progress are not cleaned when deleting an instance

2014-03-24 Thread Xavier Queralt
Public bug reported:

When creating an instance snapshot, if the instance is deleted in the
middle of the process, the snapshot may be left in the SAVING state
because the instance disappears or moves to the 'deleting' task_state.

Steps to reproduce:

$ nova boot --image image_id --flavor flavor test
$ nova image-create test test-snap
$ nova delete test

The image 'test-snap' will be left in the SAVING state although it
should be deleted when we detect the situation.

** Affects: nova
 Importance: Medium
 Assignee: Xavier Queralt (xqueralt)
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296590

Title:
  [libvirt] snapshots in progress are not cleaned when deleting an
  instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  When creating an instance snapshot, if the instance is deleted in the
  middle of the process, the snapshot may be left in the SAVING state
  because the instance disappears or moves to the 'deleting' task_state.

  Steps to reproduce:

  $ nova boot --image image_id --flavor flavor test
  $ nova image-create test test-snap
  $ nova delete test

  The image 'test-snap' will be left in the SAVING state although it
  should be deleted when we detect the situation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296590/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296592] [NEW] NotImplementedError is not handled by neutronv2 api when creating private/public domains

2014-03-24 Thread Haiwei Xu
Public bug reported:

When trying to create a private DNS domain, I got a 500 error.

$ nova dns-create-private-domain domain1
ERROR: The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) (Request-ID: req-89966476-eaa5-48d4-94cd-1e540dc0867a)

2014-03-25 03:15:51.419 ERROR nova.api.openstack 
[req-89966476-eaa5-48d4-94cd-1e540dc0867a demo demo] Caught error:
2014-03-25 03:15:51.419 TRACE nova.api.openstack Traceback (most recent call 
last):
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/__init__.py, line 125, in __call__
2014-03-25 03:15:51.419 TRACE nova.api.openstack return 
req.get_response(self.application)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1296, in send
2014-03-25 03:15:51.419 TRACE nova.api.openstack application, 
catch_exc_info=False)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1260, in 
call_application
2014-03-25 03:15:51.419 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-03-25 03:15:51.419 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 601, in __call__
2014-03-25 03:15:51.419 TRACE nova.api.openstack return self.app(env, 
start_response)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-03-25 03:15:51.419 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-03-25 03:15:51.419 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
2014-03-25 03:15:51.419 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-03-25 03:15:51.419 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
2014-03-25 03:15:51.419 TRACE nova.api.openstack resp = self.call_func(req, 
*args, **self.kwargs)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
2014-03-25 03:15:51.419 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 917, in __call__
2014-03-25 03:15:51.419 TRACE nova.api.openstack content_type, body, accept)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 983, in _process_stack
2014-03-25 03:15:51.419 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 1070, in dispatch
2014-03-25 03:15:51.419 TRACE nova.api.openstack return method(req=request, 
**action_args)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/compute/contrib/floating_ip_dns.py, line 
174, in update
2014-03-25 03:15:51.419 TRACE nova.api.openstack create_dns_domain(context, 
fqdomain, area)
2014-03-25 03:15:51.419 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 1207, in 
create_private_dns_domain
2014-03-25 03:15:51.419 TRACE nova.api.openstack raise NotImplementedError()
2014-03-25 03:15:51.419 TRACE nova.api.openstack NotImplementedError
2014-03-25 03:15:51.419 TRACE nova.api.openstack
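
A minimal sketch of the missing handling (a hypothetical wrapper, not
Nova's exact code; the real fix belongs in the floating_ip_dns
extension): map NotImplementedError from the neutronv2 API to HTTP 501
instead of letting it bubble up as a 500:

    import webob.exc

    def create_private_dns_domain(network_api, context, fqdomain, area):
        try:
            return network_api.create_private_dns_domain(context,
                                                         fqdomain, area)
        except NotImplementedError:
            # Neutron does not implement DNS domains; say so explicitly.
            msg = "DNS domains are not supported by this network service."
            raise webob.exc.HTTPNotImplemented(explanation=msg)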

** Affects: nova
 Importance: Undecided
 Assignee: Haiwei Xu (xu-haiwei)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Haiwei Xu (xu-haiwei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296592

Title:
  NotImplementedError is not handled by neutronv2 api when creating
  private/public domains

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When trying to create a private DNS domain, I got a 500 error.

  $ nova dns-create-private-domain domain1
  ERROR: The server has either erred or is incapable of performing the 
requested 

[Yahoo-eng-team] [Bug 1296593] [NEW] Compute manager _poll_live_migration 'instance_ref' argument should be renamed to 'instance'

2014-03-24 Thread Nikola Đipanov
Public bug reported:

The reason is twofold:

* The wrap_instance_fault decorator expects the argument to be named 'instance'.
* We are using new-world objects in live migration, and 'instance_ref' used to
imply a dict (see the sketch below for why the decorator cares about the name).
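
A hypothetical sketch of why the keyword name matters (stand-in names,
not Nova's exact code): a decorator in the style of wrap_instance_fault
locates the instance by keyword, so a parameter called 'instance_ref' is
invisible to it:

    import functools

    def record_fault(context, instance):
        # Stand-in for nova's fault-recording logic.
        print("recording fault for instance: %s" % (instance,))

    def wrap_instance_fault(function):
        @functools.wraps(function)
        def decorated(self, context, *args, **kwargs):
            try:
                return function(self, context, *args, **kwargs)
            except Exception:
                # Only found if callers pass instance=..., not instance_ref=...
                record_fault(context, kwargs.get('instance'))
                raise
        return decorated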

** Affects: nova
 Importance: Medium
 Assignee: Nikola Đipanov (ndipanov)
 Status: In Progress

** Changed in: nova
Milestone: None => icehouse-rc1

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
 Assignee: (unassigned) => Nikola Đipanov (ndipanov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296593

Title:
  Compute manager _poll_live_migration 'instance_ref' argument should be
  renamed to 'instance'

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The reason is twofold:

  * The wrap_instance_fault decorator expects the argument to be named 'instance'.
  * We are using new-world objects in live migration, and 'instance_ref' used to
  imply a dict.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296627] [NEW] Instance snapshots shown in 'boot from image' list

2014-03-24 Thread Rushi Agrawal
Public bug reported:

When we store a snapshot of an instance and then try to create an
instance, that instance snapshot is shown in two lists, namely the
'boot from snapshot' and 'boot from image' lists. This is confusing to
the user. We should report it only in the snapshot list.

I can see that instance snapshots have an attribute 'instance_uuid',
which plain Glance images don't have. We can filter the snapshots out of
the image list using this attribute; see the sketch below.

Any other suggestions?
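
A minimal sketch of the proposed filtering (a hypothetical helper,
assuming Glance image objects expose their custom attributes via a
'properties' dict):

    def split_images_and_snapshots(images):
        # Images carrying an 'instance_uuid' property are instance
        # snapshots; keep them out of the plain image list.
        plain_images, instance_snapshots = [], []
        for image in images:
            properties = getattr(image, 'properties', {}) or {}
            if 'instance_uuid' in properties:
                instance_snapshots.append(image)
            else:
                plain_images.append(image)
        return plain_images, instance_snapshots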

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1296627

Title:
  Instance snapshots shown in 'boot from image' list

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When we store a snapshot of an instance and then try to create an
  instance, that instance snapshot is shown in two lists, namely the
  'boot from snapshot' and 'boot from image' lists. This is confusing to
  the user. We should report it only in the snapshot list.

  I can see that instance snapshots have an attribute 'instance_uuid',
  which plain Glance images don't have. We can filter the snapshots out
  of the image list using this attribute.

  Any other suggestions?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1296627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1180040] Re: Race condition in attaching/detaching volumes when compute manager is unreachable

2014-03-24 Thread Nikola Đipanov
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1180040

Title:
  Race condition in attaching/detaching volumes when compute manager is
  unreachable

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When a compute manager is offline, or if it cannot pick up messages
  for some reason, a race condition exists in attaching/detaching
  volumes.

  Try attaching and detaching a volume, then bring the compute manager
  online. The reserve_block_device_name message gets delivered and a
  block_device_mapping is created for this instance/volume regardless of
  the state of the volume. This results in the following issues:

  1. The mountpoint is no longer usable.
  2. The os-volume_attachments API will list the volume as attached to the instance.

  
  Steps to reproduce (This was recreated in Devstack with nova trunk 75af47a.)

  1. Spawn an instance (Mine is a multinode devstack setup, so I spawn it to a 
different machine than the api, but the race condition should be reproducible 
in a single-node setup too)
  2. Create a volume
  3. Stop the compute manager (n-cpu)
  4. Try to attach the volume to the instance, it should fail after a while
  5. Try to detach the volume
  6. List the volumes. The volume should be in 'available' state. Optionally 
you can delete it at this point
  7. Check db for block_device_mapping. It shouldn't have any reference to this 
volume
  8. Start compute manager on the node that the instance is running
  9. Check db for block_device_mapping and it should now have a new entry 
associating this volume and instance regardless of the state of the volume

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1180040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274317] Re: heal_instance_info_cache_interval config is not effective

2014-03-24 Thread Thierry Carrez
Keep FixCommitted until RC1 is tagged

** Changed in: nova
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274317

Title:
  heal_instance_info_cache_interval config is not effective

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  There is a configuration item in /etc/nova/nova.conf that controls how
  often the instance info should be updated. By default the value is 60
  seconds. However, the current implementation only uses that value to
  keep the task from running too often; configuring a different value in
  nova.conf has no impact on how often the task is executed.

  If I change the code in /usr/lib/python2.6/site-
  packages/nova/compute/manager.py to pass the spacing parameter, the
  configured value takes effect. Please fix this bug.

  @periodic_task.periodic_task(spacing=CONF.heal_instance_info_cache_interval)
  def _heal_instance_info_cache(self, context):

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291244] Re: An incomprehensible message in volumes/forms.py

2014-03-24 Thread Thierry Carrez
Keep FixCommitted until RC1 is tagged

** Changed in: horizon
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291244

Title:
  An incomprehensible message in volumes/forms.py

Status in OpenStack Dashboard (Horizon):
  Fix Committed

Bug description:
  In the file:
  horizon/openstack_dashboard/dashboards/project/volumes/volumes/forms.py,
  line 308, there is a message: The volume size cannot be less than the
  volume size.

  I think it's not understandable. I guess it should be: The volume size
  cannot be less than the volume source size.

  Please check.

  Regards
  Daisy

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296662] [NEW] NotFound should be replaced with InstanceNotFound

2014-03-24 Thread ugvddm
Public bug reported:

We can see that the exception raised is InstanceNotFound in:
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1758,

but the exception caught is NotFound in
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/server_diagnostics.py#L47.

So NotFound should be replaced with InstanceNotFound; see the sketch below.
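
A minimal sketch of the proposed handling (a hypothetical wrapper,
assuming nova.exception and the compute API's get() as in the links
above): catch the specific InstanceNotFound the compute API raises
rather than the broader NotFound:

    import webob.exc

    from nova import exception

    def get_instance(compute_api, context, server_id):
        try:
            return compute_api.get(context, server_id)
        except exception.InstanceNotFound:
            # Translate the specific exception into a 404 for the caller.
            raise webob.exc.HTTPNotFound()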

** Affects: nova
 Importance: Undecided
 Assignee: ugvddm (271025598-9)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => ugvddm (271025598-9)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296662

Title:
  NotFound should be replaced with InstanceNotFound

Status in OpenStack Compute (Nova):
  New

Bug description:
   We can see that the exception raised is InstanceNotFound in:
  https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1758,

  but the exception caught is NotFound in
  
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/server_diagnostics.py#L47.

  So NotFound should be replaced with InstanceNotFound.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296662/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1075971] Re: Attach volume with libvirt disregards target device but still reserves it

2014-03-24 Thread Nikola Đipanov
It is now possible to both boot instances and attach volumes without
specifying device names, since the
https://blueprints.launchpad.net/nova/+spec/improve-block-device-
handling BP has been implemented, in which case the device names are
handled properly by Nova.

It is still possible to supply device names (for backwards
compatibility's sake), which causes the same behavior as described
above. This is really an issue due to the fact that there is no way to
make sure libvirt uses the same device name as supplied, since libvirt
only takes it as an ordering hint. The best solution really _is_ to rely
on Nova to choose the device name, as per the implemented BP.

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1075971

Title:
  Attach volume with libvirt disregards target device but still reserves
  it

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Running devstack with libvirt/qemu - the problem is that attaching a
  volume (either by passing it with --block_device_mapping to boot or by
  using nova volume-attach) completely disregards the device name passed,
  as can be seen from the following shell session. However, the device
  remains reserved, so subsequent attach attempts will fail on the
  specified device and succeed with some other device name given (which
  will not be honored either).

  The following session is how to reproduce it:

  [ndipanov@devstack devstack]$ cinder list
  
  +--------------------------------------+-----------+--------------+------+-------------+-------------+
  |                  ID                  |   Status  | Display Name | Size | Volume Type | Attached to |
  +--------------------------------------+-----------+--------------+------+-------------+-------------+
  | 5792f1ed-c5f7-40c6-913f-43aa66c717c7 | available |   bootable   |  3   |     None    |             |
  | abc77933-119b-4105-b085-092c93be36f5 | available |   blank_2    |  1   |     None    |             |
  | b4de941a-627c-447a-9226-456159d95173 | available |    blank     |  1   |     None    |             |
  +--------------------------------------+-----------+--------------+------+-------------+-------------+
  [ndipanov@devstack devstack]$ nova list

  [ndipanov@devstack devstack]$ nova boot --image 
c346fdd1-d438-472b-98f5-b4c5f2b716f8 --flavor 1 --block_device_mapping 
vdr=b4de941a-627c-447a-9226-456159d95173:::0 --key_name nova_key w_vol
  ++--+
  | Property   | Value|
  ++--+
  | OS-DCF:diskConfig  | MANUAL   |
  | OS-EXT-STS:power_state | 0|
  | OS-EXT-STS:task_state  | scheduling   |
  | OS-EXT-STS:vm_state| building |
  | accessIPv4 |  |
  | accessIPv6 |  |
  | adminPass  | CqgT4dXkq64t |
  | config_drive   |  |
  | created| 2012-11-07T14:02:00Z |
  | flavor | m1.tiny  |
  | hostId |  |
  | id | caa459d5-27ae-4c5b-b190-fd740054a2ec |
  | image  | cirros-0.3.0-x86_64-uec  |
  | key_name   | nova_key |
  | metadata   | {}   |
  | name   | w_vol|
  | progress   | 0|
  | security_groups| [{u'name': u'default'}]  |
  | status | BUILD|
  | tenant_id  | 5f68e605463940dda20e876604385c43 |
  | updated| 2012-11-07T14:02:01Z |
  | user_id| 104895e85fe54ae5a2cc5c5a650f50b0 |
  ++--+
  [ndipanov@devstack devstack]$ nova list
  
+--+---++--+
  | ID   | Name  | Status | Networks |
  +--+---++--+
  | caa459d5-27ae-4c5b-b190-fd740054a2ec | w_vol | ACTIVE | private=10.0.0.2 |
  +--+---++--+
  [ndipanov@devstack devstack]$ ssh -o StrictHostKeyChecking=no -i 
nova_key.priv cirros@10.0.0.2
  @@@
  @WARNING: REMOTE 

[Yahoo-eng-team] [Bug 1296690] [NEW] nova-manage db archive_deleted_rows doesn't work

2014-03-24 Thread fujioka yuuichi
Public bug reported:

The nova-manage db archive_deleted_rows command cannot archive rows from
the instances table. The instance_actions table has a foreign key to the
instances table, but instance_actions rows are not archived even when
the instance is deleted.

$ nova-manage db archive_deleted_rows --max_rows 100
2014-03-24 12:05:55.855 ERROR nova.db.sqlalchemy.api 
[req-c342c3d2-2f3c-4612-b03b-946a5d4323ff None None] IntegrityError detected 
when archiving table instances
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api Traceback (most recent 
call last):
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api   File 
/opt/stack/nova/nova/db/sqlalchemy/api.py, line 5613, in 
archive_deleted_rows_for_table
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api result_delete = 
conn.execute(delete_statement)
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 662, 
in execute
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api params)
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 761, 
in _execute_clauseelement
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api compiled_sql, 
distilled_params
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 874, 
in _execute_context
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api context)
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1024, 
in _handle_dbapi_exception
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api exc_info
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py, line 196, 
in raise_from_cause
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api 
reraise(type(exception), exception, tb=exc_tb)
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 867, 
in _execute_context
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api context)
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 
324, in do_execute
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api 
cursor.execute(statement, parameters)
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api   File 
/usr/local/lib/python2.7/dist-packages/MySQLdb/cursors.py, line 205, in 
execute
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api 
self.errorhandler(self, exc, value)
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api   File 
/usr/local/lib/python2.7/dist-packages/MySQLdb/connections.py, line 36, in 
defaulterrorhandler
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api raise errorclass, 
errorvalue
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api IntegrityError: 
(IntegrityError) (1451, 'Cannot delete or update a parent row: a foreign key 
constraint fails (`nova`.`instance_actions`, CONSTRAINT 
`fk_instance_actions_instance_uuid` FOREIGN KEY (`instance_uuid`) REFERENCES 
`instances` (`uuid`))') 'DELETE FROM instances WHERE instances.id in (SELECT 
T1.id FROM (SELECT instances.id \nFROM instances \nWHERE instances.deleted != 
%s ORDER BY instances.id \n LIMIT %s) as T1)' (0, 100)
2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api
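
A hypothetical sketch of the underlying ordering problem
(archive_table_fn is an illustrative stand-in, not Nova's actual API):
child tables such as instance_actions must be archived before their
parent rows in instances, or MySQL rejects the DELETE with the foreign
key error above:

    # Child tables first, parents last, so no DELETE ever orphans a
    # foreign key reference.
    TABLES_IN_FK_ORDER = [
        'instance_actions_events',  # references instance_actions
        'instance_actions',         # references instances
        'instances',                # parent table, archived last
    ]

    def archive_deleted_rows(archive_table_fn, max_rows):
        for table in TABLES_IN_FK_ORDER:
            archive_table_fn(table, max_rows)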

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296690

Title:
  nova-manage db archive_deleted_rows doesn't work

Status in OpenStack Compute (Nova):
  New

Bug description:
  The nova-manage db archive_deleted_rows command cannot archive rows
  from the instances table. The instance_actions table has a foreign key
  to the instances table, but instance_actions rows are not archived even
  when the instance is deleted.

  $ nova-manage db archive_deleted_rows --max_rows 100
  2014-03-24 12:05:55.855 ERROR nova.db.sqlalchemy.api 
[req-c342c3d2-2f3c-4612-b03b-946a5d4323ff None None] IntegrityError detected 
when archiving table instances
  2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api Traceback (most recent 
call last):
  2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api   File 
/opt/stack/nova/nova/db/sqlalchemy/api.py, line 5613, in 
archive_deleted_rows_for_table
  2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api result_delete = 
conn.execute(delete_statement)
  2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 662, 
in execute
  2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api params)
  2014-03-24 12:05:55.855 TRACE nova.db.sqlalchemy.api   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 761, 
in _execute_clauseelement
  2014-03-24 

[Yahoo-eng-team] [Bug 1296722] [NEW] Horizon Error: Unable to create new image. When Glance file filtering enabled

2014-03-24 Thread Tzach Shefi
Public bug reported:

Description of problem:
When filtering supported file types in Glance, uploading an unsupported
file gives a non-informative/misleading error in Horizon - "Error:
Unable to create new image."

Uploading the same image via the Glance CLI returns a better warning:
"Invalid disk format"

[root@cougar12 ~(keystone_admin)]# glance image-create --name isoShouldFail1 --disk-format iso --container-format bare --file /dsl-4.4.10.iso
Request returned failure status.
400 Bad Request
Invalid disk format 'iso' for image.
(HTTP 400)


Version-Release number of selected component (if applicable):
 RHEL 6.5
 python-django-openstack-auth-1.1.2-2.el6ost.noarch
 openstack-glance-2013.2.2-2.el6ost.noarch


How reproducible:
Every time


Steps to Reproduce:
1. Set up the deployment.
2. In /etc/glance/glance-api.conf, enable disk_formats and remove one of them (I removed iso): disk_formats=ami,ari,aki,vhd,vmdk,raw,qcow2,vdi
3. Restart the glance-api service.
4. From Horizon, upload a new image from a file, selecting a file of the format you removed from the list in step 2.
5. Get the non-informative Horizon error.

Actual results:
In Horizon, a non-informative error message: "Error: Unable to create
new image."

Expected results:
I'd expect a better error like "Unable to create new image: unsupported
'iso' format." Seeing as this info is found in Horizon.log (see below),
why isn't this important info featured in Horizon's error message?

Horizon.log

date: Mon, 24 Mar 2014 13:00:56 GMT
content-length: 58
content-type: text/plain; charset=UTF-8
x-openstack-request-id: req-45e2c4b5-abb7-48a3-8725-132e34c06de0

400 Bad Request
Invalid disk format 'iso' for image.
 
2014-03-24 13:00:57,119 4603 ERROR glanceclient.common.http Request returned 
failure status.
2014-03-24 13:00:57,119 4603 WARNING horizon.exceptions Recoverable error: 400 
Bad Request
Invalid disk format 'iso' for image.
(HTTP 400)
2014-03-24 13:00:57,152 4603 DEBUG horizon.base Panel with slug domains is 
not registered with Dashboard admin.
2014-03-24 13:00:57,152 4603 DEBUG horizon.base Panel with slug groups is not 
registered with Dashboard admin.
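
A hypothetical sketch of surfacing the backend message (the form-handler
and API names are assumed, not Horizon's exact code): include the Glance
error text in the user-facing message instead of the bare "Unable to
create new image.":

    from horizon import exceptions
    from openstack_dashboard import api

    def create_image(request, data):
        try:
            return api.glance.image_create(request, **data)
        except Exception as e:
            # e carries the backend text, e.g.
            # "Invalid disk format 'iso' for image. (HTTP 400)"
            exceptions.handle(request,
                              "Unable to create new image: %s" % e)
            return False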

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1296722

Title:
  Horizon Error: Unable to create new image. When Glance file filtering
  enabled

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem:
  When filtering supported file types in Glance, uploading an unsupported
  file gives a non-informative/misleading error in Horizon - "Error:
  Unable to create new image."

  Uploading the same image via the Glance CLI returns a better warning:
  "Invalid disk format"

  [root@cougar12 ~(keystone_admin)]# glance  image-create --name isoShouldFail1 
--disk-format iso --container-format bare --file /dsl-4.4.10.iso
  Request returned failure status.
  400 Bad Request
  Invalid disk format 'iso' for image.
  (HTTP 400)

  
  Version-Release number of selected component (if applicable):
   RHEL 6.5
   python-django-openstack-auth-1.1.2-2.el6ost.noarch
   openstack-glance-2013.2.2-2.el6ost.noarch

  
  How reproducible:
  Every time

  
  Steps to Reproduce:
  1. Setup deployment
  2. on /etc/glance/glance-api.conf, enable disk_formats and remove one of them 
(I removed iso),   disk_formats=ami,ari,aki,vhd,vmdk,raw,qcow2,vdi
  3. Restart glance-api service
  4. From Horizon upload a new image from file, select a file of the format you 
deleted from list on step 

[Yahoo-eng-team] [Bug 1295674] Re: Meta bug for tracking Openstack 2013.1.5 Stable Update

2014-03-24 Thread Chuck Short
** No longer affects: nova (Ubuntu Saucy)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1295674

Title:
   Meta bug for tracking Openstack 2013.1.5 Stable Update

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Compute (Nova):
  New
Status in “cinder” package in Ubuntu:
  New
Status in “glance” package in Ubuntu:
  New
Status in “horizon” package in Ubuntu:
  New
Status in “keystone” package in Ubuntu:
  New
Status in “neutron” package in Ubuntu:
  New
Status in “nova” package in Ubuntu:
  New

Bug description:
  This is a meta-bug used for tracking progress of the 2013.1.5 Grizzly
  stable update to cinder, glance, horizon, keystone, nova, and neutron
  in Ubuntu 13.04.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1295674/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296333] Re: Keystone docs fail to build with SQLAlchemy 0.9

2014-03-24 Thread Victor Sergeyev
** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1296333

Title:
  Keystone docs fail to build with SQLAlchemy 0.9

Status in OpenStack Identity (Keystone):
  In Progress
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  
  When using SQLAlchemy 0.9, the docs fail to build. This is preventing 
Keystone from moving up to SQLAlchemy 0.9.

  There's a commit to update keystone's requirements to SQLAlchemy 0.9
  which is failing the docs build:
  https://review.openstack.org/#/c/82231/

  The log there only shows the first failure. When I generate docs on my
  build system I get the following errors:

   sphinx.errors.SphinxWarning:
   /opt/stack/keystone/keystone/common/sql/__init__.py:docstring of
   keystone.common.sql.relationship:53: ERROR: Unknown interpreted text
   role paramref.

   sphinx.errors.SphinxWarning:
   /opt/stack/keystone/keystone/openstack/common/db/sqlalchemy/utils.py:
   docstring of keystone.openstack.common.db.sqlalchemy.utils.or_:26:
   WARNING: more than one target found for cross-reference u'and_':
   keystone.common.sql.core.and_, keystone.common.sql.and_

   sphinx.errors.SphinxWarning:
   /opt/stack/keystone/keystone/openstack/common/db/sqlalchemy/utils.py:
   docstring of keystone.openstack.common.db.sqlalchemy.utils.select:12:
   WARNING: undefined label: coretutorial_selecting (if the link has no
   caption the label must precede a section header)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1296333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1283599] Re: TestNetworkBasicOps occasionally fails to delete resources

2014-03-24 Thread Mark McClain
** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: neutron
Milestone: icehouse-rc1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1283599

Title:
  TestNetworkBasicOps occasionally fails to delete resources

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  New

Bug description:
  Network, Subnet and security group appear to be in use when they are deleted.
  Observed in: 
http://logs.openstack.org/84/75284/3/check/check-tempest-dsvm-neutron-full/d792a7a/logs

  Observed so far with neutron full job only.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1283599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293953] Re: Add broadcast reply option to DnsMasq

2014-03-24 Thread Alessandro Federico
Sorry, I made a mistake by clicking on Fix Released. How can it be reverted to
Confirmed?
Ale

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293953

Title:
  Add broadcast reply option to DnsMasq

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  This bug is opened for traceability.
  In order to support virtual networks on top of an InfiniBand fabric, there is
a requirement to receive DHCP responses via broadcast messages (according to
the IB spec).
  To achieve this, a new option, 'allow_broadcast_reply', should be added to
the DHCP agent configuration. The default should be False. Once set to True,
--dhcp-broadcast will be added when the dnsmasq process is spawned.
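
  A minimal sketch of the proposed option, assuming the usual oslo.config
pattern used by the DHCP agent (only the option name and the --dhcp-broadcast
flag come from this report; the surrounding spawn code is illustrative):

    from oslo.config import cfg

    OPTS = [
        cfg.BoolOpt('allow_broadcast_reply', default=False,
                    help='Reply to DHCP requests via broadcast, as '
                         'required on InfiniBand fabrics.'),
    ]

    def _build_dnsmasq_cmd(conf):
        # Illustrative command assembly; the real agent builds a much
        # longer dnsmasq command line than this.
        cmd = ['dnsmasq', '--no-hosts', '--no-resolv']
        if conf.allow_broadcast_reply:
            cmd.append('--dhcp-broadcast')
        return cmd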

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1293953/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296768] [NEW] keystone.tests.test_wsgi.ServerTest.test_keepalive_and_keepidle_set MismatchError: 1 != 2

2014-03-24 Thread Dolph Mathews
Public bug reported:

The following test consistently fails in OS X:

==
FAIL: keystone.tests.test_wsgi.ServerTest.test_keepalive_and_keepidle_set
--
_StringException: pythonlogging:'': {{{
Adding cache-proxy 'keystone.tests.test_cache.CacheIsolatingProxy' to backend.
Starting /Users/dolph/Environments/os/bin/nosetests on 127.0.0.1:1234
}}}

Traceback (most recent call last):
  File "/Users/dolph/Environments/os/lib/python2.7/site-packages/mock.py", line 1201, in patched
    return func(*args, **keywargs)
  File "/Users/dolph/Projects/keystone/keystone/tests/test_wsgi.py", line 319, in test_keepalive_and_keepidle_set
    self.assertEqual(mock_sock.setsockopt.call_count, 2)
  File "/Users/dolph/Environments/os/lib/python2.7/site-packages/testtools/testcase.py", line 321, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/Users/dolph/Environments/os/lib/python2.7/site-packages/testtools/testcase.py", line 406, in assertThat
    raise mismatch_error
MismatchError: 1 != 2

According to keystone.common.environment.__init__, the expected behavior
varies for OS X:

  # Optionally enable keepalive on the wsgi socket.
  # This option isn't available in the OS X version of eventlet

But the test is written without the same flexibility.
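
A sketch of how the assertion could gain the same flexibility (keying off the
platform here is an assumption; the real fix may reuse whatever capability
check keystone.common.environment performs):

    import platform

    # SO_KEEPALIVE is always set; TCP_KEEPIDLE is only set where the
    # eventlet build supports it, which excludes OS X.
    expected = 1 if platform.system() == 'Darwin' else 2
    self.assertEqual(expected, mock_sock.setsockopt.call_count)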

** Affects: keystone
 Importance: Low
 Assignee: Dolph Mathews (dolph)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1296768

Title:
  keystone.tests.test_wsgi.ServerTest.test_keepalive_and_keepidle_set
  MismatchError: 1 != 2

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  The following test consistently fails in OS X:

  ==
  FAIL: keystone.tests.test_wsgi.ServerTest.test_keepalive_and_keepidle_set
  --
  _StringException: pythonlogging:'': {{{
  Adding cache-proxy 'keystone.tests.test_cache.CacheIsolatingProxy' to backend.
  Starting /Users/dolph/Environments/os/bin/nosetests on 127.0.0.1:1234
  }}}

  Traceback (most recent call last):
    File "/Users/dolph/Environments/os/lib/python2.7/site-packages/mock.py", line 1201, in patched
      return func(*args, **keywargs)
    File "/Users/dolph/Projects/keystone/keystone/tests/test_wsgi.py", line 319, in test_keepalive_and_keepidle_set
      self.assertEqual(mock_sock.setsockopt.call_count, 2)
    File "/Users/dolph/Environments/os/lib/python2.7/site-packages/testtools/testcase.py", line 321, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/Users/dolph/Environments/os/lib/python2.7/site-packages/testtools/testcase.py", line 406, in assertThat
      raise mismatch_error
  MismatchError: 1 != 2

  According to keystone.common.environment.__init__, the expected
  behavior varies for OS X:

# Optionally enable keepalive on the wsgi socket.
# This option isn't available in the OS X version of eventlet

  But the test is written without the same flexibility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1296768/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269418] Re: nova rescue doesn't put VM into RESCUE status on vmware (CVE-2014-2573)

2014-03-24 Thread Thierry Carrez
We'll need a Havana backport, but maybe wait for the patch to make it
to master first to avoid duplication of work.

** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: Triaged => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1269418

Title:
  nova rescue doesn't put VM into RESCUE status on vmware
  (CVE-2014-2573)

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) havana series:
  New
Status in The OpenStack VMwareAPI subTeam:
  In Progress
Status in OpenStack Security Advisories:
  In Progress

Bug description:
  nova rescue of a VM on VMware will create an additional VM ($ORIGINAL_ID-
  rescue), but after that, the original VM has status ACTIVE. This leads
  to

  [root@jhenner-node ~(keystone_admin)]# nova unrescue foo
  ERROR: Cannot 'unrescue' while instance is in vm_state stopped (HTTP 409) 
(Request-ID: req-792cabb2-2102-47c5-9b15-96c74a9a4819)

  the original can be deleted, which then causes the -rescue VM to be
  leaked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1269418/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287219] Re: scope of domain admin too broad in v3 policy sample

2014-03-24 Thread Thierry Carrez
** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1287219

Title:
  scope of domain admin too broad in v3 policy sample

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  Using the policies in the new default policy.v3cloudsample.json file,
  a domain admin can easily elevate himself and become the cloud admin:

  1) Get a token of a domain admin (a user with the 'admin' role on any domain 
other than the default domain, which is the cloud admin's domain)
  2) Grant yourself the admin role on the default domain which is the domain of 
the cloud admin (PUT 
/v3/domains/default/user/your_id_here/roles/admin_role_id)
  3) Change your domain_id to the id of the default domain (PATCH 
/v3/users/your_id_here -d '{"user": {"domain_id": "default"}}')
  4) Get a new token scoped to the default domain

  == You are now the cloud admin

  It is expected that step number 2 should fail. Admins should be able
  to grant roles only on their domain and their projects, not on other
  projects. Otherwise, it is as if they are not really scoped at all.

  NOTE: I am using the default policy.v3cloudsample.json file as is, unchanged. 
I only defined the domain of the cloud admins to be the default domain by 
editing this rule:
  "cloud_admin": "rule:admin_required and domain_id:default",

  I think that the default policy file should be changed to prevent
  administrators' ability to grant roles on objects of foreign domains
  (with the exception of admins in the domain defined by the cloud_admin
  rule, of course).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1287219/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296808] [NEW] Devstack. Fail to boot an instance if more than 1 network is defined

2014-03-24 Thread Avishay Balderman
Public bug reported:

Using Horizon I try to launch an instance with 2 networks defined.

The operation fails with the following error:
ERROR nova.scheduler.filter_scheduler [req-cae61024-6723-4218-bd5e-71b42d181cea admin demo] [instance: 54c6f9ba-57e5-4680-bb6b-72eb2da484db] Error from last host: devstack-vmware1 (node devstack-vmware1): [u'Traceback (most recent call last):
  File "/opt/stack/nova/nova/compute/manager.py", line 1304, in _build_instance
    set_access_ip=set_access_ip)
  File "/opt/stack/nova/nova/compute/manager.py", line 394, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 1716, in _spawn
    LOG.exception(_(\'Instance failed to spawn\'), instance=instance)
  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/compute/manager.py", line 1713, in _spawn
    block_device_info)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2241, in spawn
    block_device_info)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3628, in _create_domain_and_network
    network_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/opt/stack/nova/nova/compute/manager.py", line 556, in wait_for_instance_event
    actual_event = event.wait()
u"AttributeError: 'NoneType' object has no attribute 'wait'\n"]

Looks like there is a None event in the events map.

When I launch an instance with 1 network defined, I face no issues.
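
A minimal defensive sketch for the failing line, assuming the event registry
is a dict keyed per instance (the _events name and key shape are hypothetical,
not nova's actual structure):

    # Hypothetical guard; 'self._events' and its key are illustrative.
    event = self._events.get(instance.uuid)
    if event is None:
        # Fail with a clear error (or skip waiting) instead of letting
        # None.wait() raise AttributeError.
        raise exception.NovaException(
            'No event registered for instance %s' % instance.uuid)
    actual_event = event.wait()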

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296808

Title:
  Devstack. Fail to boot an instance if more than 1 network is defined

Status in OpenStack Compute (Nova):
  New

Bug description:
  Using Horizon I try to launch an instance with 2 networks defined.

  The operation fails with the following error:
  ERROR nova.scheduler.filter_scheduler [req-cae61024-6723-4218-bd5e-71b42d181cea admin demo] [instance: 54c6f9ba-57e5-4680-bb6b-72eb2da484db] Error from last host: devstack-vmware1 (node devstack-vmware1): [u'Traceback (most recent call last):
    File "/opt/stack/nova/nova/compute/manager.py", line 1304, in _build_instance
      set_access_ip=set_access_ip)
    File "/opt/stack/nova/nova/compute/manager.py", line 394, in decorated_function
      return function(self, context, *args, **kwargs)
    File "/opt/stack/nova/nova/compute/manager.py", line 1716, in _spawn
      LOG.exception(_(\'Instance failed to spawn\'), instance=instance)
    File "/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File "/opt/stack/nova/nova/compute/manager.py", line 1713, in _spawn
      block_device_info)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2241, in spawn
      block_device_info)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3628, in _create_domain_and_network
      network_info)
    File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
      self.gen.next()
    File "/opt/stack/nova/nova/compute/manager.py", line 556, in wait_for_instance_event
      actual_event = event.wait()
  u"AttributeError: 'NoneType' object has no attribute 'wait'\n"]

  Looks like there is a None event in the events map.

  When I launch an instance with 1 network defined, I face no issues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296808/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244025] Re: Remote security group criteria don't work in Midonet plugin

2014-03-24 Thread Thierry Carrez
OK, closing the security issue since this code is not actually used
anywhere.

** Changed in: ossa
   Status: Confirmed = Won't Fix

** Information type changed from Public Security to Public

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244025

Title:
  Remote security group criteria don't work in Midonet plugin

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in neutron havana series:
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  When creating a security rule that specifies a remote security group
  (rather than a CIDR range), the Midonet plugin does not enforce this
  criterion. With an egress rule, for example, one of the criteria for a
  particular rule may be that only traffic to security group A will be
  allowed out. This criterion is ignored, and traffic will be allowed
  out regardless of the destination security group, provided that it
  conforms to the rule's other criteria.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1244025/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281763] Re: Create volume dropdown not showing auto-generated name

2014-03-24 Thread Julie Pichon
The review comments indicate this was fixed as part of another commit in
Icehouse 3.

** Changed in: horizon
   Status: In Progress => Fix Released

** Changed in: horizon
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1281763

Title:
  Create volume dropdown not showing auto-generated name

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  
  1. Go to Launch Instance, put in Instance Count: 2 and Instance Boot Source: 
Boot from Image (creates a new volume).

  2. Go to Project  Volumes, in the table, you should see those 2
  volumes you created

  3. Now Create Volume

  4. For Volume Source, select Volume

  5. Open the drop-down menu for 'Use a volume as source'. You will see
  that the entries do not show the auto-generated names for the 2 volumes
  attached to the instances you just created; they just show the size,
  i.e. (2GB).

  Please see image attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1281763/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296818] [NEW] xenapi: vm_mode cannot be changed during rebuild

2014-03-24 Thread Johannes Erdfelt
Public bug reported:

When rebuilding an instance to a new image with a different effective
vm_mode, the change isn't detected and the original vm_mode is used. This
causes problems when going from HVM to PV, leaving an instance that cannot
boot.
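
A rough sketch of what the rebuild path could do instead, assuming the new
image carries a vm_mode property (vm_mode.canonicalize() exists in nova, but
wiring it into rebuild like this is an assumption):

    from nova.compute import vm_mode

    # Re-derive the mode from the *new* image rather than reusing the
    # value stored on the instance at first boot.
    new_mode = image_meta.get('properties', {}).get('vm_mode')
    if new_mode is not None:
        instance.vm_mode = vm_mode.canonicalize(new_mode)
        instance.save()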

** Affects: nova
 Importance: Undecided
 Assignee: Johannes Erdfelt (johannes.erdfelt)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Johannes Erdfelt (johannes.erdfelt)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296818

Title:
  xenapi: vm_mode cannot be changed during rebuild

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When rebuilding an instance to a new image with a different effective
  vm_mode, the change isn't detected and the original vm_mode is used. This
  causes problems when going from HVM to PV, leaving an instance that cannot
  boot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296825] Re: ERROR: InvocationError: '/home/jenkins/workspace/gate-neutron-python26/.tox/py26/bin/python

2014-03-24 Thread James E. Blair
Picking the first hit from the link you supplied:

http://logs.openstack.org/54/78854/6/gate/gate-neutron-
python26/02adbc4/console.html

Looks like a neutron unit test timeout.  That's not an infrastructure
bug.


** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: openstack-ci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1296825

Title:
  ERROR: InvocationError: '/home/jenkins/workspace/gate-neutron-
  python26/.tox/py26/bin/python

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  ERROR: InvocationError: '/home/jenkins/workspace/gate-neutron-
  python26/.tox/py26/bin/python

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRVJST1I6IEludm9jYXRpb25FcnJvcjogJy9ob21lL2plbmtpbnMvd29ya3NwYWNlL2dhdGUtbmV1dHJvbi1weXRob24yNi8udG94L3B5MjYvYmluL3B5dGhvblwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiJjdXN0b20iLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsiZnJvbSI6IjIwMTQtMDMtMDFUMTY6MzE6NDMrMDA6MDAiLCJ0byI6IjIwMTQtMDMtMjRUMTY6MzE6NDMrMDA6MDAiLCJ1c2VyX2ludGVydmFsIjoiMCJ9LCJzdGFtcCI6MTM5NTY3ODg0MjY5OH0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1296825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296839] [NEW] xen boot from volume attempts resize

2014-03-24 Thread Sandy Walsh
Public bug reported:

When attempting a boot-from-volume with a volume size that doesn't match
the disk size, the compute manager will attempt to resize the volume
(which fails). It's fine to press on with the given size.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296839

Title:
  xen boot from volume attempts resize

Status in OpenStack Compute (Nova):
  New

Bug description:
  When attempting a boot-from-volume with a volume size that doesn't
  match the disk size, the compute manager will attempt to resize the
  volume (which fails). It's fine to press on with the given size.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294069] Re: XenAPI: Boot from volume without image_ref broken

2014-03-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/81497
Committed: 
https://git.openstack.org/cgit/openstack-dev/devstack/commit/?id=f1a2dbffe8ba369b0a8a125e975864a5d88f3e87
Submitter: Jenkins
Branch:master

commit f1a2dbffe8ba369b0a8a125e975864a5d88f3e87
Author: Bob Ball bob.b...@citrix.com
Date:   Wed Mar 19 11:08:54 2014 +

XenAPI: Cirros images must always boot as PV.

The default for VHD disk-types is PV, which is why booting from a
server works.  However, creating a volume from the image needs to
pass this parameter on to the volume.  Note that
Id673158442fde27e8d468ca412c9bd557a886e6b is also required to fix
bug 1294069

Change-Id: I7ea1d85d6082787ac4551f78300a04bf59074261
Partial-Bug: 1294069


** Changed in: devstack
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294069

Title:
  XenAPI: Boot from volume without image_ref broken

Status in devstack - openstack dev environments:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  https://review.openstack.org/#/c/78194/ changed tempest to clear
  image_ref for some BFV tests - in particular the
  test_volume_boot_pattern

  This now results in a KeyError: 'disk_format' exception from Nova
  when using the XenAPI driver.

  http://paste.openstack.org/show/73733/ is a nicer format of the below
  - but might disappear!

  2014-03-18 11:20:07.475 ERROR nova.compute.manager 
[req-82096fe0-921a-4bc1-9c41-d0aafad4c923 TestVolumeBootPattern-581093620 
TestVolumeBootPattern-1800543246] [instance: 
2b047f24-675c-4921-8cf3-85584097f106] Error: 'disk_format'
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] Traceback (most recent call last):
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
/opt/stack/nova/nova/compute/manager.py, line 1306, in _build_instance
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] set_access_ip=set_access_ip)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
/opt/stack/nova/nova/compute/manager.py, line 394, in decorated_function
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] return function(self, context, *args, 
**kwargs)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
/opt/stack/nova/nova/compute/manager.py, line 1708, in _spawn
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
/opt/stack/nova/nova/openstack/common/excutils.py, line 68, in __exit__
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] six.reraise(self.type_, self.value, 
self.tb)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
/opt/stack/nova/nova/compute/manager.py, line 1705, in _spawn
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] block_device_info)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
/opt/stack/nova/nova/virt/xenapi/driver.py, line 236, in spawn
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] admin_password, network_info, 
block_device_info)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
/opt/stack/nova/nova/virt/xenapi/vmops.py, line 357, in spawn
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] network_info, block_device_info, 
name_label, rescue)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 
/opt/stack/nova/nova/virt/xenapi/vmops.py, line 526, in _spawn
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] 
undo_mgr.rollback_and_reraise(msg=msg, instance=instance)
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File /opt/stack/nova/nova/utils.py, 
line 812, in rollback_and_reraise
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106] self._rollback()
  2014-03-18 11:20:07.475 TRACE nova.compute.manager [instance: 
2b047f24-675c-4921-8cf3-85584097f106]   File 

[Yahoo-eng-team] [Bug 1296862] [NEW] Keystone doc build fails when built from sdist tarballs

2014-03-24 Thread Dirk Mueller
Public bug reported:

keystone/tests/tmp is not included in the sdist tarballs, hence running
sphinx fails with:

[   27s]   File 
/home/abuild/rpmbuild/BUILD/keystone-2014.1.dev141.g0fb0dfd/keystone/tests/core.py,
 line 92, in <module>
[   27s] os.mkdir(TMPDIR)
[   27s] OSError: [Errno 2] No such file or directory: 
'/home/abuild/rpmbuild/BUILD/keystone-2014.1.dev141.g0fb0dfd/keystone/tests/tmp/29703'


That's because the parent directory (tests/tmp) is missing.
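
A minimal sketch of one possible fix in keystone/tests/core.py, using
makedirs so the missing parent is created as well (exact placement is an
assumption):

    import os

    # TMPDIR is .../keystone/tests/tmp/<pid>; create any missing parents
    # instead of assuming tests/tmp shipped in the tarball.
    if not os.path.exists(TMPDIR):
        os.makedirs(TMPDIR)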

** Affects: keystone
 Importance: Undecided
 Assignee: Dirk Mueller (dmllr)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1296862

Title:
  Keystone doc build fails when built from sdist tarballs

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  keystone/tests/tmp is not included in the sdist tarballs, hence
  running sphinx fails with:

  [   27s]   File 
/home/abuild/rpmbuild/BUILD/keystone-2014.1.dev141.g0fb0dfd/keystone/tests/core.py,
 line 92, in <module>
  [   27s] os.mkdir(TMPDIR)
  [   27s] OSError: [Errno 2] No such file or directory: 
'/home/abuild/rpmbuild/BUILD/keystone-2014.1.dev141.g0fb0dfd/keystone/tests/tmp/29703'

  
  That's because the parent directory (tests/tmp) is missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1296862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296873] [NEW] VMware: InvalidDiskFormat when booting from volume

2014-03-24 Thread Ryan Hsu
Public bug reported:

Using the VC Driver, an InvalidDiskFormat is seen when attempting to
boot from a volume. The scenario is:

1. Create a volume of any size
2. Boot from the volume

The following error message is seen:

Traceback (most recent call last):
  File /opt/stack/nova/nova/compute/manager.py, line 1306, in _build_instance
set_access_ip=set_access_ip)
  File /opt/stack/nova/nova/compute/manager.py, line 394, in 
decorated_function
return function(self, context, *args, **kwargs)
  File /opt/stack/nova/nova/compute/manager.py, line 1708, in _spawn
LOG.exception(_('Instance failed to spawn'), instance=instance)
  File /opt/stack/nova/nova/openstack/common/excutils.py, line 68, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File /opt/stack/nova/nova/compute/manager.py, line 1705, in _spawn
block_device_info)
  File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 611, in spawn
admin_password, network_info, block_device_info)
  File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 223, in spawn
(file_type, is_iso) = self._get_disk_format(image_meta)
  File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 189, in 
_get_disk_format
raise exception.InvalidDiskFormat(disk_format=disk_format)
InvalidDiskFormat: Disk format None is not acceptable

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: driver vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296873

Title:
  VMware: InvalidDiskFormat when booting from volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  Using the VC Driver, an InvalidDiskFormat is seen when attempting to
  boot from a volume. The scenario is:

  1. Create a volume of any size
  2. Boot from the volume

  The following error message is seen:

  Traceback (most recent call last):
File /opt/stack/nova/nova/compute/manager.py, line 1306, in 
_build_instance
  set_access_ip=set_access_ip)
File /opt/stack/nova/nova/compute/manager.py, line 394, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 1708, in _spawn
  LOG.exception(_('Instance failed to spawn'), instance=instance)
File /opt/stack/nova/nova/openstack/common/excutils.py, line 68, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File /opt/stack/nova/nova/compute/manager.py, line 1705, in _spawn
  block_device_info)
File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 611, in spawn
  admin_password, network_info, block_device_info)
File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 223, in spawn
  (file_type, is_iso) = self._get_disk_format(image_meta)
File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 189, in 
_get_disk_format
  raise exception.InvalidDiskFormat(disk_format=disk_format)
  InvalidDiskFormat: Disk format None is not acceptable

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296874] [NEW] VMware: InvalidDiskFormat when booting from volume

2014-03-24 Thread Ryan Hsu
Public bug reported:

Using the VC Driver, an InvalidDiskFormat is seen when attempting to
boot from a volume. The scenario is:

1. Create a volume of any size
2. Boot from the volume

The following error message is seen:

Traceback (most recent call last):
  File /opt/stack/nova/nova/compute/manager.py, line 1306, in _build_instance
set_access_ip=set_access_ip)
  File /opt/stack/nova/nova/compute/manager.py, line 394, in 
decorated_function
return function(self, context, *args, **kwargs)
  File /opt/stack/nova/nova/compute/manager.py, line 1708, in _spawn
LOG.exception(_('Instance failed to spawn'), instance=instance)
  File /opt/stack/nova/nova/openstack/common/excutils.py, line 68, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File /opt/stack/nova/nova/compute/manager.py, line 1705, in _spawn
block_device_info)
  File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 611, in spawn
admin_password, network_info, block_device_info)
  File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 223, in spawn
(file_type, is_iso) = self._get_disk_format(image_meta)
  File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 189, in 
_get_disk_format
raise exception.InvalidDiskFormat(disk_format=disk_format)
InvalidDiskFormat: Disk format None is not acceptable

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: driver vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296874

Title:
  VMware: InvalidDiskFormat when booting from volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  Using the VC Driver, an InvalidDiskFormat is seen when attempting to
  boot from a volume. The scenario is:

  1. Create a volume of any size
  2. Boot from the volume

  The following error message is seen:

  Traceback (most recent call last):
File /opt/stack/nova/nova/compute/manager.py, line 1306, in 
_build_instance
  set_access_ip=set_access_ip)
File /opt/stack/nova/nova/compute/manager.py, line 394, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 1708, in _spawn
  LOG.exception(_('Instance failed to spawn'), instance=instance)
File /opt/stack/nova/nova/openstack/common/excutils.py, line 68, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File /opt/stack/nova/nova/compute/manager.py, line 1705, in _spawn
  block_device_info)
File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 611, in spawn
  admin_password, network_info, block_device_info)
File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 223, in spawn
  (file_type, is_iso) = self._get_disk_format(image_meta)
File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 189, in 
_get_disk_format
  raise exception.InvalidDiskFormat(disk_format=disk_format)
  InvalidDiskFormat: Disk format None is not acceptable

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296874/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296900] [NEW] Instance and Volume names should be displayed on forms

2014-03-24 Thread Jackie Heitzer
Public bug reported:

Display Instance Name on:
  openstack_dashboard/dashboards/project/images/snapshots/forms.py
  openstack_dashboard/dashboards/project/instances/forms.py
  openstack_dashboard/dashboards/project/instances/workflows/resize_instance.py

Display Volume Name on:
  openstack_dashboard/dashboards/project/volumes/volumes/forms.py

** Affects: horizon
 Importance: Undecided
 Assignee: Jackie Heitzer (jackie-heitzer)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Jackie Heitzer (jackie-heitzer)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1296900

Title:
  Instance and Volume names should be displayed on forms

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Display Instance Name on:
openstack_dashboard/dashboards/project/images/snapshots/forms.py
openstack_dashboard/dashboards/project/instances/forms.py

openstack_dashboard/dashboards/project/instances/workflows/resize_instance.py

  Display Volume Name on:
openstack_dashboard/dashboards/project/volumes/volumes/forms.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1296900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296913] [NEW] GroupAntiAffinityFilter scheduler hint no longer works

2014-03-24 Thread David Patterson
Public bug reported:

Passing a scheduler hint for the GroupAntiAffinityFilter no longer
works:

nova boot --flavor m1.nano --image cirros-0.3.1-x86_64-uec --nic net-
id=909e7fa9-b3af-4601-84c2-01145b1dea72 --hint group=foo server-foo

ERROR (NotFound): The resource could not be found. (HTTP 404) (Request-
ID: req-21430f41-e6ca-46db-ab5c-890a1d1dbd01)

screen-n-api.log contains message:

Caught error: Instance group foo could not be found.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296913

Title:
  GroupAntiAffinityFilter scheduler hint no longer works

Status in OpenStack Compute (Nova):
  New

Bug description:
  Passing a scheduler hint for the GroupAntiAffinityFilter no longer
  works:

  nova boot --flavor m1.nano --image cirros-0.3.1-x86_64-uec --nic net-
  id=909e7fa9-b3af-4601-84c2-01145b1dea72 --hint group=foo server-foo

  ERROR (NotFound): The resource could not be found. (HTTP 404)
  (Request-ID: req-21430f41-e6ca-46db-ab5c-890a1d1dbd01)

  screen-n-api.log contains message:

  Caught error: Instance group foo could not be found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296913/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296912] [NEW] python-qpid-python package does not exist

2014-03-24 Thread David Andrew
Public bug reported:

Broken package dependencies in the Icehouse Ubuntu Cloud Archive for Precise:
python-qpid-python is listed as a dependency for python-heat.


root@icontrol:~# apt-get install python-qpid-python
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package python-qpid-python is not available, but is referred to by another 
package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'python-qpid-python' has no installation candidate

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1296912

Title:
  python-qpid-python package does not exist

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Broken package dependencies in the Icehouse Ubuntu Cloud Archive for Precise:
  python-qpid-python is listed as a dependency for python-heat.


  root@icontrol:~# apt-get install python-qpid-python
  Reading package lists... Done
  Building dependency tree
  Reading state information... Done
  Package python-qpid-python is not available, but is referred to by another 
package.
  This may mean that the package is missing, has been obsoleted, or
  is only available from another source

  E: Package 'python-qpid-python' has no installation candidate

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1296912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296929] [NEW] test_list_servers_filter_by_zero_limit fails with DB2

2014-03-24 Thread Matt Riedemann
Public bug reported:

Tempest test test_list_servers_filter_by_zero_limit does a server list
query with limit=0, which is OK with MySQL and PostgreSQL but not with
DB2, since DB2's fetch-first clause doesn't support values < 1:

http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0059212.html?resultof=%22%46%45%54%43%48%22%20%22%66%65%74%63%68%22%20%22%66%69%72%73%74%22%20

From the DB2 doc:

The fetch-first-clause lets the database manager know that the
application does not want to retrieve more than integer rows, regardless
of how many rows there might be in the result table when this clause is
not specified. An attempt to fetch beyond integer rows is handled the
same way as normal end of data (SQLSTATE 02000). The value of integer
must be a positive integer (not zero).

Looking at the Nova API paginate collections docs:

http://docs.openstack.org/api/openstack-compute/2/content
/Paginated_Collections-d1e664.html

It doesn't say anything about lower bounds validation, only that over
limit is a 413 HTTP error response.  Otherwise the examples use limit=1.

There isn't really any point in letting the request get down into the
DB API layer just to perform a query and then discard the results, so we
should detect limit == 0 in the Nova API layer and return an empty
response.
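
A minimal sketch of that API-layer guard, assuming the shape of the v2
servers controller (the parameter parsing here is simplified and
illustrative):

    # Hypothetical early-out in the servers index/detail handler.
    limit = req.GET.get('limit')
    if limit is not None and int(limit) == 0:
        # Nothing can match; return an empty list without building a
        # query, so DB2's FETCH FIRST clause never sees a zero.
        return {'servers': []}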

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api db2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296929

Title:
  test_list_servers_filter_by_zero_limit fails with DB2

Status in OpenStack Compute (Nova):
  New

Bug description:
  Tempest test test_list_servers_filter_by_zero_limit does a server list
  query with limit=0, which is OK with MySQL and PostgreSQL but not with
  DB2, since DB2's fetch-first clause doesn't support values < 1:

  
http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0059212.html?resultof=%22%46%45%54%43%48%22%20%22%66%65%74%63%68%22%20%22%66%69%72%73%74%22%20

  From the DB2 doc:

  The fetch-first-clause lets the database manager know that the
  application does not want to retrieve more than integer rows,
  regardless of how many rows there might be in the result table when
  this clause is not specified. An attempt to fetch beyond integer rows
  is handled the same way as normal end of data (SQLSTATE 02000). The
  value of integer must be a positive integer (not zero).

  Looking at the Nova API paginate collections docs:

  http://docs.openstack.org/api/openstack-compute/2/content
  /Paginated_Collections-d1e664.html

  It doesn't say anything about lower bounds validation, only that over
  limit is a 413 HTTP error response.  Otherwise the examples use
  limit=1.

  There isn't really any point in letting the request get down into the
  DB API layer just to perform a query and then discard the results, so
  we should detect limit == 0 in the Nova API layer and return an empty
  response.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296940] [NEW] Potential AttributeError in _get_servers if FlavorNotFound

2014-03-24 Thread Matt Riedemann
Public bug reported:

In the Nova servers API (v2 and v3), this line could fail with an
AttributeError if a FlavorNotFound exception is raised above it:

https://github.com/openstack/nova/blob/2014.1.b3/nova/api/openstack/compute/servers.py#L611

That code should either check if instance_list is empty first or set
instance_list to an empty InstanceList object if FlavorNotFound is hit.
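
A sketch of the second option, with names approximating nova's objects API
in this release (treat the exact class and module paths as assumptions):

    try:
        instance_list = self.compute_api.get_all(context,
                                                 search_opts=search_opts)
    except exception.FlavorNotFound:
        LOG.debug("Flavor '%s' could not be found", search_opts['flavor'])
        # Fall back to an empty list object so the code below can still
        # iterate and build a response instead of raising AttributeError.
        instance_list = instance_obj.InstanceList(objects=[])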

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: api unified-objects

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296940

Title:
  Potential AttributeError in _get_servers if FlavorNotFound

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  In the Nova servers API (v2 and v3), this line could fail with an
  AttributeError if a FlavorNotFound exception is raised above it:

  
https://github.com/openstack/nova/blob/2014.1.b3/nova/api/openstack/compute/servers.py#L611

  That code should either check if instance_list is empty first or set
  instance_list to an empty InstanceList object if FlavorNotFound is
  hit.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296940/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296948] [NEW] VMware: Instance fails to spawn due to Concurrent modification by another operation

2014-03-24 Thread Ryan Hsu
Public bug reported:

Some instances are failing to spawn with the following error message:

VMwareDriverException: Cannot complete operation due to concurrent
modification by another operation.

It's possible this is due to a race condition in the VC driver as this
does not happen frequently. This was encountered a few times by the
Minesweeper CI. Affected builds include:

http://208.91.1.172/logs/81905/6
http://208.91.1.172/logs/80220/3

The Traceback seen in the scheduler log is:

Traceback (most recent call last):
  File /opt/stack/nova/nova/compute/manager.py, line 1306, in _build_instance
set_access_ip=set_access_ip)
  File /opt/stack/nova/nova/compute/manager.py, line 394, in 
decorated_function
return function(self, context, *args, **kwargs)
  File /opt/stack/nova/nova/compute/manager.py, line 1718, in _spawn
LOG.exception(_('Instance failed to spawn'), instance=instance)
  File /opt/stack/nova/nova/openstack/common/excutils.py, line 68, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File /opt/stack/nova/nova/compute/manager.py, line 1715, in _spawn
block_device_info)
  File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 611, in spawn
admin_password, network_info, block_device_info)
  File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 593, in spawn
root_gb_in_kb, linked_clone)
  File /opt/stack/nova/nova/virt/vmwareapi/volumeops.py, line 75, in 
attach_disk_to_vm
self._session._wait_for_task(reconfig_task)
  File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 940, in 
_wait_for_task
ret_val = done.wait()
  File /usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 116, in 
wait
return hubs.get_hub().switch()
  File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, 
in switch
return self.greenlet.switch()
VMwareDriverException: Cannot complete operation due to concurrent modification 
by another operation.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: driver vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296948

Title:
  VMware: Instance fails to spawn due to Concurrent modification by
  another operation

Status in OpenStack Compute (Nova):
  New

Bug description:
  Some instances are failing to spawn with the following error message:

  VMwareDriverException: Cannot complete operation due to concurrent
  modification by another operation.

  It's possible this is due to a race condition in the VC driver as this
  does not happen frequently. This was encountered a few times by the
  Minesweeper CI. Affected builds include:

  http://208.91.1.172/logs/81905/6
  http://208.91.1.172/logs/80220/3

  The Traceback seen in the scheduler log is:

  Traceback (most recent call last):
File /opt/stack/nova/nova/compute/manager.py, line 1306, in 
_build_instance
  set_access_ip=set_access_ip)
File /opt/stack/nova/nova/compute/manager.py, line 394, in 
decorated_function
  return function(self, context, *args, **kwargs)
File /opt/stack/nova/nova/compute/manager.py, line 1718, in _spawn
  LOG.exception(_('Instance failed to spawn'), instance=instance)
File /opt/stack/nova/nova/openstack/common/excutils.py, line 68, in 
__exit__
  six.reraise(self.type_, self.value, self.tb)
File /opt/stack/nova/nova/compute/manager.py, line 1715, in _spawn
  block_device_info)
File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 611, in spawn
  admin_password, network_info, block_device_info)
File /opt/stack/nova/nova/virt/vmwareapi/vmops.py, line 593, in spawn
  root_gb_in_kb, linked_clone)
File /opt/stack/nova/nova/virt/vmwareapi/volumeops.py, line 75, in 
attach_disk_to_vm
  self._session._wait_for_task(reconfig_task)
File /opt/stack/nova/nova/virt/vmwareapi/driver.py, line 940, in 
_wait_for_task
  ret_val = done.wait()
File /usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 116, 
in wait
  return hubs.get_hub().switch()
File /usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 
187, in switch
  return self.greenlet.switch()
  VMwareDriverException: Cannot complete operation due to concurrent 
modification by another operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296957] [NEW] Security_Group FirewallDriver default=None cause L2 agent to fail

2014-03-24 Thread Irena Berezovsky
Public bug reported:

The default value for the firewall_driver option is set to None in
securitygroups_rpc.py. The L2 agent fails when using the default value,
with the following error:

/opt/stack/neutron/neutron/agent/securitygroups_rpc.py:129
2014-03-07 08:15:09.120 31995 CRITICAL neutron 
[req-63f8e61b-9b71-4178-95b9-ab070a4e3b26 None] 'NoneType' object has no 
attribute 'rpartition'
2014-03-07 08:15:09.120 31995 TRACE neutron Traceback (most recent call last):
2014-03-07 08:15:09.120 31995 TRACE neutron   File 
/usr/local/bin/neutron-linuxbridge-agent, line 10, in <module>
2014-03-07 08:15:09.120 31995 TRACE neutron sys.exit(main())
2014-03-07 08:15:09.120 31995 TRACE neutron   File 
/opt/stack/neutron/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py,
 line 987, in main
2014-03-07 08:15:09.120 31995 TRACE neutron root_helper)
2014-03-07 08:15:09.120 31995 TRACE neutron   File 
/opt/stack/neutron/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py,
 line 787, in __init__
2014-03-07 08:15:09.120 31995 TRACE neutron self.init_firewall()
2014-03-07 08:15:09.120 31995 TRACE neutron   File 
/opt/stack/neutron/neutron/agent/securitygroups_rpc.py, line 130, in 
init_firewall
2014-03-07 08:15:09.120 31995 TRACE neutron self.firewall = 
importutils.import_object(firewall_driver)
2014-03-07 08:15:09.120 31995 TRACE neutron   File 
/opt/stack/neutron/neutron/openstack/common/importutils.py, line 38, in 
import_object
2014-03-07 08:15:09.120 31995 TRACE neutron return 
import_class(import_str)(*args, **kwargs)
2014-03-07 08:15:09.120 31995 TRACE neutron   File 
/opt/stack/neutron/neutron/openstack/common/importutils.py, line 26, in 
import_class
2014-03-07 08:15:09.120 31995 TRACE neutron mod_str, _sep, class_str = 
import_str.rpartition('.')
2014-03-07 08:15:09.120 31995 TRACE neutron AttributeError: 'NoneType' object 
has no attribute 'rpartition'
2014-03-07 08:15:09.120 31995 TRACE neutron 

This can be fixed by setting the default to firewall_driver =
neutron.agent.firewall.NoopFirewallDriver, or by verifying on L2 agent
start-up that firewall_driver is not None.
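
A minimal sketch of the first option, following the init shown in the
traceback (the config group path is an assumption):

    from oslo.config import cfg
    from neutron.openstack.common import importutils

    NOOP_FIREWALL = 'neutron.agent.firewall.NoopFirewallDriver'

    def init_firewall(self):
        driver = cfg.CONF.SECURITYGROUP.firewall_driver or NOOP_FIREWALL
        # Fall back to the no-op driver rather than handing None to
        # import_object(), which crashes in rpartition().
        self.firewall = importutils.import_object(driver)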

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1296957

Title:
  Security_Group FirewallDriver default=None cause L2 agent to fail

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The default value for the firewall_driver option is set to None in
  securitygroups_rpc.py. The L2 agent fails when using the default value,
  with the following error:

  /opt/stack/neutron/neutron/agent/securitygroups_rpc.py:129
  2014-03-07 08:15:09.120 31995 CRITICAL neutron 
[req-63f8e61b-9b71-4178-95b9-ab070a4e3b26 None] 'NoneType' object has no 
attribute 'rpartition'
  2014-03-07 08:15:09.120 31995 TRACE neutron Traceback (most recent call last):
  2014-03-07 08:15:09.120 31995 TRACE neutron   File 
/usr/local/bin/neutron-linuxbridge-agent, line 10, in <module>
  2014-03-07 08:15:09.120 31995 TRACE neutron sys.exit(main())
  2014-03-07 08:15:09.120 31995 TRACE neutron   File 
/opt/stack/neutron/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py,
 line 987, in main
  2014-03-07 08:15:09.120 31995 TRACE neutron root_helper)
  2014-03-07 08:15:09.120 31995 TRACE neutron   File 
/opt/stack/neutron/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py,
 line 787, in __init__
  2014-03-07 08:15:09.120 31995 TRACE neutron self.init_firewall()
  2014-03-07 08:15:09.120 31995 TRACE neutron   File 
/opt/stack/neutron/neutron/agent/securitygroups_rpc.py, line 130, in 
init_firewall
  2014-03-07 08:15:09.120 31995 TRACE neutron self.firewall = 
importutils.import_object(firewall_driver)
  2014-03-07 08:15:09.120 31995 TRACE neutron   File 
/opt/stack/neutron/neutron/openstack/common/importutils.py, line 38, in 
import_object
  2014-03-07 08:15:09.120 31995 TRACE neutron return 
import_class(import_str)(*args, **kwargs)
  2014-03-07 08:15:09.120 31995 TRACE neutron   File 
/opt/stack/neutron/neutron/openstack/common/importutils.py, line 26, in 
import_class
  2014-03-07 08:15:09.120 31995 TRACE neutron mod_str, _sep, class_str = 
import_str.rpartition('.')
  2014-03-07 08:15:09.120 31995 TRACE neutron AttributeError: 'NoneType' object 
has no attribute 'rpartition'
  2014-03-07 08:15:09.120 31995 TRACE neutron 

  This can be fixed by setting the default to firewall_driver =
  neutron.agent.firewall.NoopFirewallDriver, or by verifying on L2 agent
  start-up that firewall_driver is not None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1296957/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296967] [NEW] instances stuck with task_state of REBOOTING after controller switchover

2014-03-24 Thread Chris Friesen
Public bug reported:


We were doing some testing of Havana and ran into a scenario that ended up
with two instances stuck with a task_state of REBOOTING following a reboot of
the controller:

1) We reboot the controller.
2) Right after it comes back up something calls compute.api.API.reboot() on an 
instance.
3) That sets instance.task_state = task_states.REBOOTING and then calls 
instance.save() to update the database.
4) Then it calls self.compute_rpcapi.reboot_instance() which does an rpc cast.
5) That message gets dropped on the floor due to communication issues between 
the controller and the compute.
6) Now we're stuck with a task_state of REBOOTING. 

Currently, when doing a reboot, we set the REBOOTING task_state in the
database in compute-api and then send an RPC cast. That seems awfully
risky: if that message gets lost or the call fails for any reason, we
could end up stuck in the REBOOTING state forever. I think it might make
sense to have the power state audit clear the REBOOTING state if
appropriate, but others with more experience should make that call.
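
As a sketch of that idea, the periodic audit could clear the stale state
when the guest is plainly running (the guard condition is an assumption,
not tested code):

    # Hypothetical addition to ComputeManager._sync_power_states().
    if (db_instance.task_state == task_states.REBOOTING
            and vm_power_state == power_state.RUNNING):
        # The reboot cast was lost but the guest is up; clear the stale
        # task_state so the instance doesn't stay stuck in REBOOTING.
        db_instance.task_state = None
        db_instance.save()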


It didn't happen to us, but I think we could get into this state another way:

1) nova-compute was running reboot_instance()
2) we reboot the controller
3) reboot_instance() times out trying to update the instance with the new 
power state and a task_state of None.
4) Later on in _sync_power_states() we would update the power_state, but 
nothing would update the task_state.  


The timeline that I have looks like this.  We had some buggy code that
sent all the instances for a reboot when the controller came up.  The
first two are in the controller logs below, and these are the ones that
failed.

controller: (running everything but nova-compute)
nova-api log:

/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:23.712 8187 INFO 
nova.compute.api [req-a84e25bd-85b4-478c-a845-7e8034df3ab2 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] [instance: 
c967e4ef-8cf4-4fac-8aab-c5ea5c3c3bb4] API::reboot reboot_type=SOFT
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:23.898 8187 INFO 
nova.osapi_compute.wsgi.server [req-a84e25bd-85b4-478c-a845-7e8034df3ab2 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] 
192.168.204.195 POST 
/v2/48c9875f2edb4a36bbe598effbe835cf/servers/c967e4ef-8cf4-4fac-8aab-c5ea5c3c3bb4/action
 HTTP/1.1 status: 202 len: 185 time: 0.2299521
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:25.152 8128 INFO 
nova.compute.api [req-429feb82-a50d-4bf0-a9a4-bca036e55356 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] [instance: 
17169e6d-6693-4e95-9900-ba250dad5a39] API::reboot reboot_type=SOFT
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:25.273 8128 INFO 
nova.osapi_compute.wsgi.server [req-429feb82-a50d-4bf0-a9a4-bca036e55356 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] 
192.168.204.195 POST 
/v2/48c9875f2edb4a36bbe598effbe835cf/servers/17169e6d-6693-4e95-9900-ba250dad5a39/action
 HTTP/1.1 status: 202 len: 185 time: 0.1583798

After this there are other reboot requests for the other instances, and
those ones passed.


Interestingly, we later see this:
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:45.476 8134 INFO 
nova.compute.api [req-2e0b67a0-0cd9-471f-b115-e4f07436f1c4 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] [instance: 
c967e4ef-8cf4-4fac-8aab-c5ea5c3c3bb4] API::reboot reboot_type=SOFT
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:45.477 8134 INFO 
nova.osapi_compute.wsgi.server [req-2e0b67a0-0cd9-471f-b115-e4f07436f1c4 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] 
192.168.204.195 POST 
/v2/48c9875f2edb4a36bbe598effbe835cf/servers/c967e4ef-8cf4-4fac-8aab-c5ea5c3c3bb4/action
 HTTP/1.1 status: 409 len: 303 time: 0.1177511
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:48.831 8143 INFO 
nova.compute.api [req-afeb680b-91fd-4446-b4d8-fd264541369d 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] [instance: 
17169e6d-6693-4e95-9900-ba250dad5a39] API::reboot reboot_type=SOFT
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:48.832 8143 INFO 
nova.osapi_compute.wsgi.server [req-afeb680b-91fd-4446-b4d8-fd264541369d 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] 
192.168.204.195 POST 
/v2/48c9875f2edb4a36bbe598effbe835cf/servers/17169e6d-6693-4e95-9900-ba250dad5a39/action
 HTTP/1.1 status: 409 len: 303 time: 0.0366399


Presumably the 409 responses are because nova thinks that these instances are 
currently rebooting.


compute:
2014-03-20 11:33:14.213 12229 INFO nova.openstack.common.rpc.common [-] 
Reconnecting to AMQP server on 192.168.204.2:5672
2014-03-20 11:33:14.225 12229 INFO nova.openstack.common.rpc.common [-] 
Reconnecting to AMQP server on 192.168.204.2:5672
2014-03-20 11:33:14.244 12229 INFO nova.openstack.common.rpc.common [-] 
Connected to AMQP server on 192.168.204.2:5672
2014-03-20 11:33:14.246 12229 INFO 

[Yahoo-eng-team] [Bug 1270347] Re: tempest.api.compute.v3.admin.test_servers.ServersAdminV3TestXML.test_list_servers_by_admin_with_all_tenants failed gate tests

2014-03-24 Thread Joe Gordon
*** This bug is a duplicate of bug 1258620 ***
https://bugs.launchpad.net/bugs/1258620

** This bug is no longer a duplicate of bug 1269687
   
tempest.api.compute.v3.admin.test_servers.ServersAdminV3TestJSON.test_list_servers_by_admin_with_all_tenants
 FAIL due to Infocache failure in nova conductor
** This bug has been marked a duplicate of bug 1258620
   Make network_cache more robust with neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270347

Title:
  
tempest.api.compute.v3.admin.test_servers.ServersAdminV3TestXML.test_list_servers_by_admin_with_all_tenants
  failed gate tests

Status in OpenStack Compute (Nova):
  New

Bug description:
  See: http://logs.openstack.org/04/65604/3/check/check-tempest-dsvm-
  postgres-full/cf2d9cf/console.html

  2014-01-18 02:37:34.449 | 
==
  2014-01-18 02:37:34.449 | FAIL: 
tempest.api.compute.v3.admin.test_servers.ServersAdminV3TestXML.test_list_servers_by_admin_with_all_tenants[gate]
  2014-01-18 02:37:34.449 | 
tempest.api.compute.v3.admin.test_servers.ServersAdminV3TestXML.test_list_servers_by_admin_with_all_tenants[gate]
  2014-01-18 02:37:34.449 | 
--
  2014-01-18 02:37:34.450 | _StringException: Empty attachments:
  2014-01-18 02:37:34.450 |   stderr
  2014-01-18 02:37:34.450 |   stdout
  2014-01-18 02:37:34.450 | 
  2014-01-18 02:37:34.450 | pythonlogging:'': {{{
  2014-01-18 02:37:34.450 | 2014-01-18 02:07:36,719 Request: GET 
http://127.0.0.1:8774/v3/servers/detail?all_tenants=
  2014-01-18 02:37:34.450 | 2014-01-18 02:07:36,719 Request Headers: 
{'Content-Type': 'application/xml', 'Accept': 'application/xml', 
'X-Auth-Token': 'Token omitted'}
  2014-01-18 02:37:34.450 | 2014-01-18 02:07:36,890 Response Status: 404
  2014-01-18 02:37:34.450 | 2014-01-18 02:07:36,890 Nova request id: 
req-1a4cc465-7f1c-4179-96a2-273352191507
  2014-01-18 02:37:34.451 | 2014-01-18 02:07:36,890 Response Headers: 
{'content-length': '142', 'date': 'Sat, 18 Jan 2014 02:07:36 GMT', 
'content-type': 'application/xml; charset=UTF-8', 'connection': 'close'}
  2014-01-18 02:37:34.450 | 2014-01-18 02:07:36,890 Response Body: 
<itemNotFound code="404" xmlns="http://docs.openstack.org/compute/api/v1.1">
<message>The resource could not be found.</message></itemNotFound>
  2014-01-18 02:37:34.451 | }}}
  2014-01-18 02:37:34.451 | 
  2014-01-18 02:37:34.451 | Traceback (most recent call last):
  2014-01-18 02:37:34.451 |   File 
tempest/api/compute/v3/admin/test_servers.py, line 72, in 
test_list_servers_by_admin_with_all_tenants
  2014-01-18 02:37:34.451 | resp, body = 
self.client.list_servers_with_detail(params)
  2014-01-18 02:37:34.451 |   File 
tempest/services/compute/v3/xml/servers_client.py, line 261, in 
list_servers_with_detail
  2014-01-18 02:37:34.451 | resp, body = self.get(url, self.headers)
  2014-01-18 02:37:34.451 |   File tempest/common/rest_client.py, line 305, 
in get
  2014-01-18 02:37:34.451 | return self.request('GET', url, headers)
  2014-01-18 02:37:34.451 |   File tempest/common/rest_client.py, line 436, 
in request
  2014-01-18 02:37:34.452 | resp, resp_body)
  2014-01-18 02:37:34.452 |   File tempest/common/rest_client.py, line 481, 
in _error_checker
  2014-01-18 02:37:34.452 | raise exceptions.NotFound(resp_body)
  2014-01-18 02:37:34.452 | NotFound: Object not found
  2014-01-18 02:37:34.452 | Details: <itemNotFound code="404" 
xmlns="http://docs.openstack.org/compute/api/v1.1"><message>The resource could 
not be found.</message></itemNotFound>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296995] [NEW] Add image cache purging settings to nova.conf doc

2014-03-24 Thread Andy Dugas
Public bug reported:

The Icehouse release features settings in nova.conf that enable you to
configure automatic purging of unused images from the image cache.

Related to  Change-Id: Iec47c28a38761c187226c5eff3ab69da503437f6
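
For reference, a sketch of the settings in question (the option names come
from nova's imagecache code; the defaults shown and the exact config groups
are assumptions that may vary by release):

    [DEFAULT]
    # How often (in seconds) the image cache manager runs.
    image_cache_manager_interval = 2400
    # Whether unused base images should be removed from the cache.
    remove_unused_base_images = true
    # How old (in seconds) an unused base image must be before removal.
    remove_unused_original_minimum_age_seconds = 86400

    [libvirt]
    # The same idea for unused resized base images (libvirt driver only).
    remove_unused_resized_minimum_age_seconds = 3600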

** Affects: nova
 Importance: Undecided
 Assignee: Andy Dugas (adugas)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = Andy Dugas (adugas)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296995

Title:
  Add image cache purging settings to nova.conf doc

Status in OpenStack Compute (Nova):
  New

Bug description:
  The Icehouse release features settings in nova.conf that enable you to
  configure automatic purging of unused images from the image cache.

  Related to  Change-Id: Iec47c28a38761c187226c5eff3ab69da503437f6

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296995/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290486] Re: dhcp agent not serving responses

2014-03-24 Thread James Polley
I've been able to track down what I believe is the root problem.

If ovsdb-server (run by the openvswitch-switch service) restarts, the
neutron-openvswitch-agent loses its connection and needs to be manually
restarted in order to reconnect.

Causes of this bug I've seen have included ovsdb-server segfaulting,
being kill -9ed, and being gracefully restarted with "service
openvswitch-switch restart".

The errors recorded in /var/log/upstart/neutron-openvswitch-agent.log
vary depending on why ovsdb-server went away:

2014-03-23 20:10:01.883 20375 ERROR neutron.agent.linux.ovsdb_monitor 
[req-a776b981-b86b-4437-ab65-0c6be6070094 None] Error received from ovsdb 
monitor: ovsdb-client: unix:/var/run/openvswitch/db.sock: receive failed (End 
of file)
2014-03-24 01:40:17.617 20375 ERROR neutron.agent.linux.ovsdb_monitor 
[req-a776b981-b86b-4437-ab65-0c6be6070094 None] Error received from ovsdb 
monitor: 2014-03-24T01:40:17Z|1|fatal_signal|WARN|terminating with signal 
15 (Terminated)
2014-03-24 04:08:59.718 8455 ERROR neutron.agent.linux.ovsdb_monitor 
[req-d2c2cbd5-a77a-4455-84ac-0a8ec69b41e8 None] Error received from ovsdb 
monitor: ovsdb-client: unix:/var/run/openvswitch/db.sock: receive failed (End 
of file)
2014-03-24 22:44:22.174 8455 ERROR neutron.agent.linux.ovsdb_monitor 
[req-d2c2cbd5-a77a-4455-84ac-0a8ec69b41e8 None] Error received from ovsdb 
monitor: ovsdb-client: unix:/var/run/openvswitch/db.sock: receive failed (End 
of file)
2014-03-24 22:44:52.220 8455 ERROR neutron.agent.linux.ovsdb_monitor 
[req-d2c2cbd5-a77a-4455-84ac-0a8ec69b41e8 None] Error received from ovsdb 
monitor: ovsdb-client: failed to connect to unix:/var/run/openvswitch/db.sock 
(Connection refused)
2014-03-24 22:45:22.266 8455 ERROR neutron.agent.linux.ovsdb_monitor 
[req-d2c2cbd5-a77a-4455-84ac-0a8ec69b41e8 None] Error received from ovsdb 
monitor: ovsdb-client: failed to connect to unix:/var/run/openvswitch/db.sock 
(Connection refused)
2014-03-24 22:45:52.310 8455 ERROR neutron.agent.linux.ovsdb_monitor 
[req-d2c2cbd5-a77a-4455-84ac-0a8ec69b41e8 None] Error received from ovsdb 
monitor: ovsdb-client: failed to connect to unix:/var/run/openvswitch/db.sock 
(Connection refused)
2014-03-24 22:46:22.355 8455 ERROR neutron.agent.linux.ovsdb_monitor 
[req-d2c2cbd5-a77a-4455-84ac-0a8ec69b41e8 None] Error received from ovsdb 
monitor: ovsdb-client: failed to connect to unix:/var/run/openvswitch/db.sock 
(Connection refused)
2014-03-24 22:49:27.179 8455 ERROR neutron.agent.linux.ovsdb_monitor 
[req-d2c2cbd5-a77a-4455-84ac-0a8ec69b41e8 None] Error received from ovsdb 
monitor: 2014-03-24T22:49:27Z|1|fatal_signal|WARN|terminating with signal 
15 (Terminated)
2014-03-24 22:55:45.441 16033 ERROR neutron.agent.linux.ovsdb_monitor 
[req-5fe682ce-138e-46d6-aa7e-f0d43ab576ee None] Error received from ovsdb 
monitor: ovsdb-client: unix:/var/run/openvswitch/db.sock: receive failed (End 
of file)

In all cases, the result is the same: until neutron-openvswitch-agent is
restarted, no traffic is passed onto the tapX interface inside the
dhcp-X netns
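
As a stopgap, something like the following watchdog captures the manual fix
described above (a sketch only; the log-scanning approach and the restart
mechanism are assumptions, not a tested workaround):

    import subprocess
    import time

    LOG_PATH = '/var/log/upstart/neutron-openvswitch-agent.log'
    MARKER = 'Error received from ovsdb monitor'

    def watch(poll_period=30):
        seen = 0
        while True:
            with open(LOG_PATH) as log:
                errors = sum(1 for line in log if MARKER in line)
            if errors > seen:
                # The agent lost its ovsdb-server connection; restart it
                # so it reattaches and traffic reaches the dhcp netns.
                subprocess.check_call(
                    ['service', 'neutron-openvswitch-agent', 'restart'])
            seen = errors
            time.sleep(poll_period)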

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1290486

Title:
  dhcp agent not serving responses

Status in OpenStack Neutron (virtual network service):
  New
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  The DHCP requests were not being responded to after they were seen on
  the undercloud network interface.  The neutron services were restarted
  in an attempt to ensure they had the newest configuration and knew
  they were supposed to respond to the requests.

  Rather than using the heat stack create (called in
  devtest_overcloud.sh) to test, it was simple to use the following to
  directly boot a baremetal node.

  nova boot \
    --flavor $(nova flavor-list | grep "|[[:space:]]*baremetal[[:space:]]*|" | awk '{print $2}') \
    --image $(nova image-list | grep "|[[:space:]]*overcloud-control[[:space:]]*|" | awk '{print $2}') \
    bm-test1

  Whilst the baremetal node was attempting to pxe boot a restart of the
  neutron services was performed.  This allowed the baremetal node to
  boot.

  It has been observed that a neutron restart was needed for each
  subsequent reboot of the baremetal nodes to succeed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1290486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296912] Re: python-qpid-python package does not exist

2014-03-24 Thread David Andrew
** Also affects: heat (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1296912

Title:
  python-qpid-python package does not exist

Status in OpenStack Neutron (virtual network service):
  New
Status in “heat” package in Ubuntu:
  New

Bug description:
  Broken package dependencies in the icehouse ubuntu cloud archive for precise:
  python-qpid-python is listed as a dependency for python-heat


  root@icontrol:~# apt-get install python-qpid-python
  Reading package lists... Done
  Building dependency tree
  Reading state information... Done
  Package python-qpid-python is not available, but is referred to by another 
package.
  This may mean that the package is missing, has been obsoleted, or
  is only available from another source

  E: Package 'python-qpid-python' has no installation candidate

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1296912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296912] Re: python-qpid-python package does not exist

2014-03-24 Thread David Andrew
** No longer affects: neutron

** Summary changed:

- python-qpid-python package does not exist
+ python-qpid-python package does not exist in precise

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1296912

Title:
  python-qpid-python package does not exist in precise

Status in “heat” package in Ubuntu:
  New

Bug description:
  Broken package dependencies in the icehouse ubuntu cloud archive for precise:
  python-qpid-python is listed as a dependency for python-heat


  root@icontrol:~# apt-get install python-qpid-python
  Reading package lists... Done
  Building dependency tree
  Reading state information... Done
  Package python-qpid-python is not available, but is referred to by another 
package.
  This may mean that the package is missing, has been obsoleted, or
  is only available from another source

  E: Package 'python-qpid-python' has no installation candidate

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/heat/+bug/1296912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297016] [NEW] flake8 configuration does not ignore rope artifacts

2014-03-24 Thread Maru Newby
Public bug reported:

The tox.ini configuration for 'exclude' overrides any global definition,
so excluding .ropeproject in tox.ini is the only way to avoid false
negatives from confusing users of rope.
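
A sketch of the kind of stanza meant here (all entries other than
.ropeproject are assumptions based on typical tox.ini contents):

    [flake8]
    # 'exclude' replaces flake8's built-in default list rather than
    # extending it, so .ropeproject must be listed explicitly.
    exclude = .venv,.git,.tox,dist,doc,*egg,build,.ropeproject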

** Affects: neutron
 Importance: Low
 Assignee: Maru Newby (maru)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1297016

Title:
  flake8 configuration does not ignore rope artifacts

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The tox.ini configuration for 'exclude' overrides any global
  definition, so excluding .ropeproject in tox.ini is the only way to
  avoid false negatives from confusing users of rope.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1297016/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296348] Re: /v3/auth/tokens cannot be used for issuing unscoped tokens during federated authn

2014-03-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/82532
Committed: 
https://git.openstack.org/cgit/openstack/identity-api/commit/?id=cd9f137d5a572aebdf3d91d5c887f11c7e191c67
Submitter: Jenkins
Branch:master

commit cd9f137d5a572aebdf3d91d5c887f11c7e191c67
Author: Marek Denis marek.de...@cern.ch
Date:   Mon Mar 24 17:12:54 2014 +0100

Add dedicated URL for federated authentication.

Describe new URL for federated authentication
``/v3/OS-FEDERATION/identity_providers/{identity_provider}/
protocols/{protocol}/auth``  and available HTTP methods.

Change-Id: Ic25e726c9c146050575b68c29ed3c6c8dab27016
Closes-Bug: #1296348
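
For concreteness, an unscoped federated token is then requested against the
dedicated path rather than /v3/auth/tokens, e.g. (the identity provider and
protocol names here are invented):

    GET /v3/OS-FEDERATION/identity_providers/acme_idp/protocols/saml2/auth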


** Changed in: openstack-api-site
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1296348

Title:
  /v3/auth/tokens cannot be used for issuing unscoped tokens during
  federated authn

Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack API documentation site:
  Fix Released

Bug description:
  URL /v3/auth/tokens cannot be used for issuing unscoped federated
  tokens, because that URL would have to be configured as protected in
  the mod_shib configuration. Thus, a dedicated URL is needed to run
  federated authentication. Also, since the initial data used by the
  client is usually lost during federated authentication (due to the
  many HTTP redirections between the SP and the IdP), clients are
  advised to access a URL with the IdP and protocol specified in it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1296348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296521] Re: NotImplementedError exception is not handled for get_dns_domains

2014-03-24 Thread Haiwei Xu
** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296521

Title:
  NotImplementedError exception is not handled for get_dns_domains

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When executing 'nova dns-domains', I got a 500 exception.

  $nova dns-domains

  2014-03-24 23:21:30.815 ERROR nova.api.openstack 
[req-3dbd5f05-4be8-45c6-864e-daaaf0552042 admin demo] Caught error:
  2014-03-24 23:21:30.815 TRACE nova.api.openstack Traceback (most recent call 
last):
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/__init__.py, line 125, in __call__
  2014-03-24 23:21:30.815 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1296, in send
  2014-03-24 23:21:30.815 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1260, in 
call_application
  2014-03-24 23:21:30.815 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-03-24 23:21:30.815 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 601, in __call__
  2014-03-24 23:21:30.815 TRACE nova.api.openstack return self.app(env, 
start_response)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-03-24 23:21:30.815 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-03-24 23:21:30.815 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
  2014-03-24 23:21:30.815 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-03-24 23:21:30.815 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2014-03-24 23:21:30.815 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2014-03-24 23:21:30.815 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 929, in __call__
  2014-03-24 23:21:30.815 TRACE nova.api.openstack content_type, body, 
accept)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 991, in _process_stack
  2014-03-24 23:21:30.815 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 1078, in dispatch
  2014-03-24 23:21:30.815 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/compute/contrib/floating_ip_dns.py, line 
141, in index
  2014-03-24 23:21:30.815 TRACE nova.api.openstack domains = 
self.network_api.get_dns_domains(context)
  2014-03-24 23:21:30.815 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 1121, in get_dns_domains
  2014-03-24 23:21:30.815 TRACE nova.api.openstack raise 
NotImplementedError()
  2014-03-24 23:21:30.815 TRACE nova.api.openstack NotImplementedError
  2014-03-24 23:21:30.815 TRACE nova.api.openstack

  This is because NotImplementedError is not handled by the api.
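
  For reference, a minimal sketch of the kind of handling the report is
  asking for (the message text and the returned body are assumptions, not
  nova code):

      import webob.exc

      def index(self, req):
          context = req.environ['nova.context']
          try:
              domains = self.network_api.get_dns_domains(context)
          except NotImplementedError:
              # The neutron backend does not implement DNS domains, so
              # return a clean 501 instead of a 500 with a traceback.
              raise webob.exc.HTTPNotImplemented(
                  explanation="Network driver does not support DNS domains.")
          return {'domain_entries': domains}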

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296521/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297050] [NEW] failed to access horizon web site if no install mox module

2014-03-24 Thread huangyaowen
Public bug reported:

After installing the latest horizon code and starting httpd, the
horizon web site can't be accessed due to the following exception:
ImportError: No module named mox.

I did some initial investigation and found it's caused by the change
(https://review.openstack.org/#/c/59580/) committed by Maxime Vidori
to the file horizon/site_urls.py. In this change, he adds a new import
line, from horizon.test.jasmine import jasmine, which tracks back to
horizon/test/helpers.py and tries to import the mox module.

After installing the mox module and restarting httpd, I can access the
horizon site.

However, in my view this does NOT make sense: when I install OpenStack
Horizon in a production environment, I should not need to install the
mox module (which is a Python unit-test framework). It would be better
to change horizon/site_urls.py to import jasmine dynamically, only when
settings.DEBUG is true, using import_module(), possibly like this:
if settings.DEBUG:
    try:
        # Import lazily so production installs don't need test-only
        # dependencies such as mox.
        mod = import_module('horizon.test.jasmine.jasmine')
        urlpatterns += patterns('',
            url(r'^qunit/$',
                TemplateView.as_view(template_name='horizon/qunit.html'),
                name='qunit_tests'),
            url(r'^jasmine/(.*?)$', mod.dispatcher))
    except ImportError:
        urlpatterns += patterns('',
            url(r'^qunit/$',
                TemplateView.as_view(template_name='horizon/qunit.html'),
                name='qunit_tests'))

The detailed exception can be seen below:
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] mod_wsgi (pid=23842): 
Exception occurred processing WSGI script 
'/usr/share/openstack-dashboard/opensta
ck_dashboard/wsgi/django.wsgi'.
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] Traceback (most recent 
call last):
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234]   File 
/usr/lib/python2.6/site-packages/django/core/handlers/wsgi.py, line 241, in 
__call__
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] response = 
self.get_response(request)
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234]   File 
/usr/lib/python2.6/site-packages/django/core/handlers/base.py, line 179, in 
get_response
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] response = 
self.handle_uncaught_exception(request, resolver, sys.exc_info())
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234]   File 
/usr/lib/python2.6/site-packages/django/core/handlers/base.py, line 224, in 
handle_uncaught_ex
ception
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] if 
resolver.urlconf_module is None:
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234]   File 
/usr/lib/python2.6/site-packages/django/core/urlresolvers.py, line 323, in 
urlconf_module
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] 
self._urlconf_module = import_module(self.urlconf_name)
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234]   File 
/usr/lib/python2.6/site-packages/django/utils/importlib.py, line 35, in 
import_module
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] __import__(name)
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234]   File 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/urls.py,
 lin
e 38, in module
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] url(r'', 
include(horizon.urls))
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234]   File 
/usr/lib/python2.6/site-packages/django/conf/urls/__init__.py, line 25, in 
include
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] patterns = 
getattr(urlconf_module, 'urlpatterns', urlconf_module)
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234]   File 
/usr/lib/python2.6/site-packages/django/utils/functional.py, line 184, in 
inner
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] self._setup()
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234]   File 
/usr/lib/python2.6/site-packages/django/utils/functional.py, line 248, in 
_setup
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] self._wrapped = 
self._setupfunc()
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234]   File 
/usr/lib/python2.6/site-packages/horizon/base.py, line 733, in url_patterns
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] return 
self._urls()[0]
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234]   File 
/usr/lib/python2.6/site-packages/horizon/base.py, line 739, in _urls
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] urlpatterns = 
self._get_default_urlpatterns()
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234]   File 
/usr/lib/python2.6/site-packages/horizon/base.py, line 82, in 
_get_default_urlpatterns
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234] mod = 
import_module(self.urls)
[Mon Mar 24 08:09:17 2014] [error] [client 9.111.44.234]   File 
/usr/lib/python2.6/site-packages/django/utils/importlib.py, line 35, in 
import_module
[Mon 

[Yahoo-eng-team] [Bug 1297052] [NEW] resize failure doesn't show correct info when --poll specified

2014-03-24 Thread jichencom
Public bug reported:

[root@controller ~]# nova resize --poll a9dd1fd6-27fb-4128-92e6-93bcab085a98 100
Instance resizing... 100% complete
Finished


but the instance resize has not actually finished, and there are error logs
in the nova log
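
For illustration, the kind of poll loop that would surface the failure (a
sketch loosely modelled on novaclient's status polling; the names and the
final-state list are assumptions):

    import time

    def poll_for_status(get_server, server_id,
                        final_states=('verify_resize',), poll_period=5):
        # Treat ERROR as a failure instead of printing Finished, which
        # is what the current --poll output does.
        while True:
            server = get_server(server_id)
            status = server.status.lower()
            if status == 'error':
                raise RuntimeError('resize failed: instance went to ERROR')
            if status in final_states:
                return server
            time.sleep(poll_period)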

** Affects: nova
 Importance: Undecided
 Assignee: jichencom (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = jichencom (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297052

Title:
  resize failure doesn't show correct info when --poll specified

Status in OpenStack Compute (Nova):
  New

Bug description:
  [root@controller ~]# nova resize --poll a9dd1fd6-27fb-4128-92e6-93bcab085a98 
100
  Instance resizing... 100% complete
  Finished

  
  but the instance resize has not actually finished, and there are error
  logs in the nova log

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1297052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297059] [NEW] Migrate 43 fails on old sqlalchemy

2014-03-24 Thread Jamie Lennox
Public bug reported:

When using sqlalchemy 0.7.10 running migration
043_fixup_region_description.py fails with the error:

Traceback (most recent call last):
  File keystone/tests/test_sql_upgrade.py, line 2546, in 
test_upgrade_region_unique_description
self.upgrade(43)
  File keystone/tests/test_sql_upgrade.py, line 139, in upgrade
self._migrate(*args, **kwargs)
  File keystone/tests/test_sql_upgrade.py, line 156, in _migrate
self.schema.runchange(ver, change, changeset.step)
  File 
/home/jamie/.virtualenvs/keystone2/lib/python2.7/site-packages/migrate/versioning/schema.py,
 line 91, in runchange
change.run(self.engine, step)
  File 
/home/jamie/.virtualenvs/keystone2/lib/python2.7/site-packages/migrate/versioning/script/py.py,
 line 145, in run
script_func(engine)
  File 
/home/jamie/work/keystone/keystone/common/sql/migrate_repo/versions/043_fixup_region_description.py,
 line 78, in upgrade
region_table = sql.Table(_REGION_TABLE_NAME, meta, autoload=True)
  File 
/home/jamie/.virtualenvs/keystone2/lib/python2.7/site-packages/sqlalchemy/util/_collections.py,
 line 106, in __getattr__
raise AttributeError(key)
AttributeError: values

Upgrading to sqlalchemy 0.8.5 fixes the error; however, our
requirements.txt file lists SQLAlchemy>=0.7.8,<=0.8.99, so 0.7.10 should
still be valid.

I can't quite tell when the values() function was added, I assume it was
0.8, but I'm not familiar enough with the migration to know exactly what
is being accomplished there.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1297059

Title:
  Migrate 43 fails on old sqlalchemy

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When using sqlalchemy 0.7.10 running migration
  043_fixup_region_description.py fails with the error:

  Traceback (most recent call last):
File keystone/tests/test_sql_upgrade.py, line 2546, in 
test_upgrade_region_unique_description
  self.upgrade(43)
File keystone/tests/test_sql_upgrade.py, line 139, in upgrade
  self._migrate(*args, **kwargs)
File keystone/tests/test_sql_upgrade.py, line 156, in _migrate
  self.schema.runchange(ver, change, changeset.step)
File 
/home/jamie/.virtualenvs/keystone2/lib/python2.7/site-packages/migrate/versioning/schema.py,
 line 91, in runchange
  change.run(self.engine, step)
File 
/home/jamie/.virtualenvs/keystone2/lib/python2.7/site-packages/migrate/versioning/script/py.py,
 line 145, in run
  script_func(engine)
File 
/home/jamie/work/keystone/keystone/common/sql/migrate_repo/versions/043_fixup_region_description.py,
 line 78, in upgrade
  region_table = sql.Table(_REGION_TABLE_NAME, meta, autoload=True)
File 
/home/jamie/.virtualenvs/keystone2/lib/python2.7/site-packages/sqlalchemy/util/_collections.py,
 line 106, in __getattr__
  raise AttributeError(key)
  AttributeError: values

  Upgrading to sqlalchemy 0.8.5 fixes the error; however, our
  requirements.txt file lists SQLAlchemy>=0.7.8,<=0.8.99, so 0.7.10
  should still be valid.

  I can't quite tell when the values() function was added, I assume it
  was 0.8, but I'm not familiar enough with the migration to know
  exactly what is being accomplished there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1297059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296972] Re: RPC code in Havana doesn't handle connection errors

2014-03-24 Thread Chris Friesen
Looks like I misread that patch below, it's adding back the channel
error check, not the connection error check.

This may be due to a bad patch on our end, sorry for the noise.

** Changed in: nova
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296972

Title:
  RPC code in Havana doesn't handle connection errors

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  We've got an HA controller setup using pacemaker and were stress-
  testing it by doing multiple controlled switchovers while doing other
  activity.  Generally this works okay, but last night we ran into a
  problem where nova-compute got into a state where it was unable to
  reconnect with the AMQP server.  Logs are at the bottom, they repeat
  every minute and did this for 7+ hours until the system was manually
  cleaned up.

  
  I've found something in the code that looks a bit suspicious.  The 
"Unexpected exception occurred 61 time(s)... retrying." message comes from 
forever_retry_uncaught_exceptions() in excutils.py.  It looks like we're raising

  RecoverableConnectionError: connection already closed

  down in /usr/lib64/python2.7/site-packages/amqp/abstract_channel.py,
  but nothing handles it.

  It looks like the most likely place that should be handling it is
  nova.openstack.common.rpc.impl_kombu.Connection.ensure().

  In the current oslo.messaging code the ensure() routine explicitly
  handles connection errors (which RecoverableConnectionError is) and
  socket timeouts--the ensure() routine in Havana doesn't do this.

  Maybe we should look at porting
  
https://github.com/openstack/oslo.messaging/commit/0400cbf4f83cf8d58076c7e65e08a156ec3508a8
  to the Havana RPC code?
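
  For reference, the shape of the fix in that commit, paraphrased as a
  sketch (not a drop-in patch; the attribute names follow impl_kombu but
  are assumptions here):

      import socket

      def ensure(self, error_callback, method):
          while True:
              try:
                  return method()
              except self.connection_errors as e:
                  # Covers RecoverableConnectionError and friends: the
                  # broker connection is gone, report and reconnect.
                  if error_callback:
                      error_callback(e)
              except socket.timeout as e:
                  if error_callback:
                      error_callback(e)
              # Re-establish the AMQP connection, then retry the call.
              self.reconnect()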


  Logs showing the start of the problem and the first few iterations of
  the repeating issue:

  
  2014-03-24 09:24:33.566 6620 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
  2014-03-24 09:24:34.126 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
instance: name=u'sgw-4', vm_state=u'active', task_state=None, vcpus=2, 
cpuset=0x180, cpulist=[7, 8] pinned, nodelist=[0], node=0 
  2014-03-24 09:24:34.126 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
instance: name=u'sgw-1', vm_state=u'active', task_state=None, vcpus=2, 
cpuset=0x60, cpulist=[5, 6] pinned, nodelist=[0], node=0 
  2014-03-24 09:24:34.126 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
instance: name=u'load_balancer', vm_state=u'active', task_state=None, vcpus=3, 
cpuset=0x1c00, cpulist=[10, 11, 12] pinned, nodelist=[1], node=1 
  2014-03-24 09:24:34.182 6620 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 111290, per-node: [52286, 59304], numa nodes:2
  2014-03-24 09:24:34.183 6620 AUDIT nova.compute.resource_tracker [-] Free 
disk (GB): 29
  2014-03-24 09:24:34.183 6620 AUDIT nova.compute.resource_tracker [-] Free 
vcpus: 170, free per-node float vcpus: [48, 112], free per-node pinned vcpus: 
[3, 7]
  2014-03-24 09:24:34.183 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
vcpus:20, Free vcpus:170, 16.0x overcommit, per-cpu float cpulist: [3, 4, 9, 
13, 14, 15, 16, 17, 18, 19]
  2014-03-24 09:24:34.244 6620 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for compute-0:compute-0
  2014-03-24 09:25:36.564 6620 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
  2014-03-24 09:25:37.122 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
instance: name=u'sgw-4', vm_state=u'active', task_state=None, vcpus=2, 
cpuset=0x180, cpulist=[7, 8] pinned, nodelist=[0], node=0 
  2014-03-24 09:25:37.122 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
instance: name=u'sgw-1', vm_state=u'active', task_state=None, vcpus=2, 
cpuset=0x60, cpulist=[5, 6] pinned, nodelist=[0], node=0 
  2014-03-24 09:25:37.122 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
instance: name=u'load_balancer', vm_state=u'active', task_state=None, vcpus=3, 
cpuset=0x1c00, cpulist=[10, 11, 12] pinned, nodelist=[1], node=1 
  2014-03-24 09:25:37.182 6620 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 111290, per-node: [52286, 59304], numa nodes:2
  2014-03-24 09:25:37.182 6620 AUDIT nova.compute.resource_tracker [-] Free 
disk (GB): 29
  2014-03-24 09:25:37.183 6620 AUDIT nova.compute.resource_tracker [-] Free 
vcpus: 170, free per-node float vcpus: [48, 112], free per-node pinned vcpus: 
[3, 7]
  2014-03-24 09:25:37.183 6620 INFO nova.compute.resource_tracker [-] DETAIL: 
vcpus:20, Free vcpus:170, 16.0x overcommit, per-cpu float cpulist: [3, 4, 9, 
13, 14, 15, 16, 17, 18, 19]
  2014-03-24 09:25:37.245 6620 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for compute-0:compute-0
  2014-03-24 09:26:47.324 6620 ERROR root [-] Unexpected exception occurred 1 
time(s)... retrying.
  2014-03-24 09:26:47.324 

[Yahoo-eng-team] [Bug 1266590] Re: db connection string is cleartext in debug log

2014-03-24 Thread John Griffith
Cinder already has the secret=True setting on the relevant conf options,
so this bug does not exist (DNE) for Cinder.

** No longer affects: cinder
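
For reference, the masking mechanism in question is oslo.config's secret
flag; a minimal sketch (the option name and help text are illustrative):

    from oslo.config import cfg

    database_opts = [
        cfg.StrOpt('connection',
                   secret=True,  # masked when log_opt_values() runs
                   help='The SQLAlchemy connection string.'),
    ]

    cfg.CONF.register_opts(database_opts, group='database')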

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1266590

Title:
  db connection string is cleartext in debug log

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Oslo - a Library of Common OpenStack Code:
  Fix Released

Bug description:
  
  When I start up keystone-all with --debug it logs the config settings. The 
config setting for the database connection string is printed out:

  (keystone-all): 2014-01-06 16:32:56,983 DEBUG cfg log_opt_values
  database.connection=
  mysql://root:rootpwd@127.0.0.1/keystone?charset=utf8

  The database connection string will typically contain the user
  password, so this value should be masked (like admin_token).

  This is a regression from Havana, which masked the db connection
  string.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1266590/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1289135] Re: cinderclient AmbiguousEndpoints in Nova API when deleting nested stack

2014-03-24 Thread Christopher Yeoh
Thanks Mike - any chance you could attach the output from keystone
service-list and keystone endpoint-list?

Adding python-cinderclient as I'm guessing the problem might be in there
somewhere.

** Also affects: python-cinderclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289135

Title:
  cinderclient AmbiguousEndpoints in Nova API when deleting nested stack

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Cinder:
  New

Bug description:
  While chasing down some errors I found the first one was the
  following, found in the log from the Nova API process.

  2014-03-06 22:17:41.713 ERROR nova.api.openstack 
[req-0a2e7b6b-8ea8-48f1-b6c9-4c6a20ba27b4 admin admin] Caught error: 
AmbiguousEndpoints: [{u'url': 
u'http://10.10.0.24:8776/v1/a4c140dd5649439987f9c61d0a91c76e', u'region': 
u'RegionOne', u'legacy_endpoint_id': u'58ac5510148c4641ab48e1499c0bb4ec', 
'serviceName': None, u'interface': u'admin', u'id': 
u'154c830dce20478a8b269b5f85f7bca3'}, {u'url': 
u'http://10.10.0.24:8776/v1/a4c140dd5649439987f9c61d0a91c76e', u'region': 
u'RegionOne', u'legacy_endpoint_id': u'58ac5510148c4641ab48e1499c0bb4ec', 
'serviceName': None, u'interface': u'public', u'id': 
u'4129f440fa42491f997984455b9727af'}, {u'url': 
u'http://10.10.0.24:8776/v1/a4c140dd5649439987f9c61d0a91c76e', u'region': 
u'RegionOne', u'legacy_endpoint_id': u'58ac5510148c4641ab48e1499c0bb4ec', 
'serviceName': None, u'interface': u'internal', u'id': 
u'7f2013973d0248f1ba64ece67e3df7bb'}]
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack Traceback (most recent 
call last):
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/__init__.py, line 125, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1296, in send
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1260, in 
call_application
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 596, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
self.app(env, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 925, in __call__
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack content_type, 
body, accept)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 987, in _process_stack
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2014-03-06 22:17:41.713 17339 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, 

[Yahoo-eng-team] [Bug 1297088] [NEW] unit test of test_delete_ports_by_device_id always failed

2014-03-24 Thread shihanzhang
Public bug reported:

I found that in test_db_plugin.py the test test_delete_ports_by_device_id
always fails; the error log is below:

INFO [neutron.api.v2.resource] delete failed (client error): Unable to 
complete operation on subnet 9579ede3-4bc4-43ea-939c-42c9ab027a53. One or more 
ports have an IP allocation from this subnet.
INFO [neutron.api.v2.resource] delete failed (client error): Unable to 
complete operation on network 5f2ec397-31c7-4e92-acda-79d6093636ba. There are 
one or more ports still in use on the network.
 }}}
 
 Traceback (most recent call last):
   File neutron/tests/unit/test_db_plugin.py, line 1681, in 
test_delete_ports_by_device_id
 expected_code=webob.exc.HTTPOk.code)
   File /usr/lib64/python2.6/contextlib.py, line 34, in __exit__
 self.gen.throw(type, value, traceback)
   File neutron/tests/unit/test_db_plugin.py, line 567, in subnet
 self._delete('subnets', subnet['subnet']['id'])
   File /usr/lib64/python2.6/contextlib.py, line 34, in __exit__
 self.gen.throw(type, value, traceback)
   File neutron/tests/unit/test_db_plugin.py, line 534, in network
 self._delete('networks', network['network']['id'])
   File neutron/tests/unit/test_db_plugin.py, line 450, in _delete
 self.assertEqual(res.status_int, expected_code)
   File 
/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 321, in assertEqual
 self.assertThat(observed, matcher, message)
   File 
/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 406, in assertThat
 raise mismatch_error
 MismatchError: 409 != 204

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1297088

Title:
  unit test of test_delete_ports_by_device_id always failed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I found that in test_db_plugin.py the test
  test_delete_ports_by_device_id always fails; the error log is below:

  INFO [neutron.api.v2.resource] delete failed (client error): Unable to 
complete operation on subnet 9579ede3-4bc4-43ea-939c-42c9ab027a53. One or more 
ports have an IP allocation from this subnet.
  INFO [neutron.api.v2.resource] delete failed (client error): Unable to 
complete operation on network 5f2ec397-31c7-4e92-acda-79d6093636ba. There are 
one or more ports still in use on the network.
   }}}
   
   Traceback (most recent call last):
 File neutron/tests/unit/test_db_plugin.py, line 1681, in 
test_delete_ports_by_device_id
   expected_code=webob.exc.HTTPOk.code)
 File /usr/lib64/python2.6/contextlib.py, line 34, in __exit__
   self.gen.throw(type, value, traceback)
 File neutron/tests/unit/test_db_plugin.py, line 567, in subnet
   self._delete('subnets', subnet['subnet']['id'])
 File /usr/lib64/python2.6/contextlib.py, line 34, in __exit__
   self.gen.throw(type, value, traceback)
 File neutron/tests/unit/test_db_plugin.py, line 534, in network
   self._delete('networks', network['network']['id'])
 File neutron/tests/unit/test_db_plugin.py, line 450, in _delete
   self.assertEqual(res.status_int, expected_code)
 File 
/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 321, in assertEqual
   self.assertThat(observed, matcher, message)
 File 
/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py,
 line 406, in assertThat
   raise mismatch_error
   MismatchError: 409 != 204

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1297088/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294132] Re: Volume status set to error extending when driver fails to extend the volume

2014-03-24 Thread Mike Perez
Santiago, that was exactly my point earlier. Those are the only places
where this happens, and I don't think recovery is as simple as you've
described. I do agree with Cory in the review that this issue needs to
be addressed, but I think other operations against the backend that
take place while the extend is in progress could result in false
positives when recovering.
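
For context, the recovery being discussed would look roughly like this in
the volume manager (a sketch under assumptions, not the actual cinder
code):

    def extend_volume(self, context, volume_id, new_size):
        volume = self.db.volume_get(context, volume_id)
        try:
            self.driver.extend_volume(volume, new_size)
        except Exception:
            # The backend rejected the extend (e.g. not enough space).
            # The volume itself is untouched, so flip it back to
            # 'available' instead of leaving it in 'error_extending'.
            self.db.volume_update(context, volume_id,
                                  {'status': 'available'})
            raise
        self.db.volume_update(context, volume_id,
                              {'status': 'available', 'size': new_size})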

** Changed in: cinder
   Status: New = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1294132

Title:
  Volume status set to error extending when driver fails to extend the
  volume

Status in Cinder:
  Opinion
Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  If the driver can't extend the volume because there's no enough space,
  the volume status is set to error_extending and the volume becomes
  unusable. The only options to the users are to delete the volume and
  create it again.

  Expected behavior would be: if the back-end doesn't have enough
  capacity to extend the volume, warn user with proper information and
  set volume status back to 'available' since the volume is untouched.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1294132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp