[Yahoo-eng-team] [Bug 1370342] [NEW] AttributeError: object has no attribute 'update_security_group_rules'

2014-09-17 Thread zhu zhu
Public bug reported:

The recently merged change If19be8579ca734a899cdd673c919eee8165aaa0e
(Refactor security group rpc call) introduced two methods that firewall
drivers must implement, called from securitygroups_rpc.py:

update_security_group_rules
update_security_group_members

def _update_security_group_info(self, security_groups,
                                security_group_member_ips):
    LOG.debug("Update security group information")
    for sg_id, sg_rules in security_groups.items():
        self.firewall.update_security_group_rules(sg_id, sg_rules)
    for remote_sg_id, member_ips in security_group_member_ips.items():
        self.firewall.update_security_group_members(
            remote_sg_id, member_ips)

Since these two methods were added only in iptables_firewall.py, other
firewall drivers fail when the Neutron agent starts up, for example the
HyperV agent with HyperVSecurityGroupsDriver.
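The failure mode and one possible direction for a fix can be sketched as follows. This is illustrative stand-in code, not the actual Neutron patch: the base class gains no-op defaults for the two new hooks, so drivers that have not implemented them stop raising AttributeError.

```python
# Hedged sketch: give the base FirewallDriver no-op default
# implementations of the two new hooks, so drivers that do not override
# them (e.g. NoopFirewallDriver) no longer break at agent startup.

class FirewallDriver(object):
    """Stand-in for neutron.agent.firewall.FirewallDriver."""

    def update_security_group_rules(self, sg_id, sg_rules):
        # Default no-op; iptables-based drivers override this.
        pass

    def update_security_group_members(self, sg_id, sg_members):
        # Default no-op; iptables-based drivers override this.
        pass


class NoopFirewallDriver(FirewallDriver):
    """A driver that implements none of the new hooks itself."""


def update_security_group_info(firewall, security_groups,
                               security_group_member_ips):
    # Mirrors the agent-side loop quoted in this report.
    for sg_id, sg_rules in security_groups.items():
        firewall.update_security_group_rules(sg_id, sg_rules)
    for remote_sg_id, member_ips in security_group_member_ips.items():
        firewall.update_security_group_members(remote_sg_id, member_ips)
```

With the defaults in place, the loop runs cleanly against a driver that never implemented the hooks, instead of raising the AttributeError shown in the traceback below.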

2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1402, in rpc_loop
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1201, in process_network_ports
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent port_info.get('updated', set()))
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py", line 316, in setup_port_filters
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent self.prepare_devices_filter(new_devices)
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py", line 211, in prepare_devices_filter
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent security_groups, security_group_member_ips)
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent self.gen.throw(type, value, traceback)
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/firewall.py", line 104, in defer_apply
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent yield
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py", line 211, in prepare_devices_filter
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent security_groups, security_group_member_ips)
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py", line 217, in _update_security_group_info
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent self.firewall.update_security_group_rules(sg_id, sg_rules)
2014-09-17 12:02:50.620 1789 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent AttributeError: 'NoopFirewallDriver' object has no attribute 'update_security_group_rules'

** Affects: neutron
 Importance: Undecided
 Assignee: zhu zhu  (zhuzhubj)
 Status: In Progress

** Description changed:

  With recent merged code support If19be8579ca734a899cdd673c919eee8165aaa0e 
(Refactor security group rpc call)
- Introduce two methods for the securitygroup_rpc.py
+ Introduce two methods for firewall driver used by the securitygroup_rpc.py
  
  update_security_group_rules
  update_security_group_members
  
  def _update_security_group_info(self, security_groups,
- security_group_member_ips):
- LOG.debug("Update security group information")
- for sg_id, sg_rules in security_groups.items():
- self.firewall.update_security_group_rules(sg_id, sg_rules)
- for remote_sg_id, member_ips in security_group_member_ips.items():
- self.firewall.update_security_group_members(
- remote_sg_id, member_ips)
+ security_group_member_ips):
+ LOG.debug("Update security group information")
+ for sg_id, sg_rules in security_groups.items():
+ 

[Yahoo-eng-team] [Bug 1370348] [NEW] Using macvtap vnic_type is not working with vif_type=hw_veb

2014-09-17 Thread Itzik Brown
Public bug reported:

When trying to boot an instance with a port using vnic_type=macvtap and
vif_type=hw_veb I get this error in Compute log:

TRACE nova.compute.manager libvirtError: unsupported configuration: an
interface of type 'direct' is requesting a vlan tag, but that is not
supported for this type of connection

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370348

Title:
  Using macvtap vnic_type is not working with vif_type=hw_veb

Status in OpenStack Compute (Nova):
  New

Bug description:
  When trying to boot an instance with a port using vnic_type=macvtap
  and vif_type=hw_veb I get this error in Compute log:

  TRACE nova.compute.manager libvirtError: unsupported configuration:
  an interface of type 'direct' is requesting a vlan tag, but that is
  not supported for this type of connection

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370359] [NEW] HTTP 500 is returned when using an invalid network id to attach interface

2014-09-17 Thread Qin Zhao
Public bug reported:

When I post an 'attach interface' request to Nova with an invalid
network id, Nova returns an HTTP 500 that only tells me the attach
interface operation failed.

REQ: curl -i 'http://10.90.10.24:8774/v2/19abae5746b242d489d1c2862b228d8b/servers/4e863fad-e868-48a1-8735-2da9a38561e8/os-interface' -X POST -H "Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Project-Id: Public" -H "X-Auth-Token: {SHA1}e8bf3f6e4599a94e4ce1625abc3bc8a2dfd2742c" -d '{"interfaceAttachment": {"fixed_ips": [{"ip_address": "10.100.99.170"}], "net_id": "173854d5-333f-4c78-b5a5-10d2e9c8d82z"}}'
INFO (connectionpool:187) Starting new HTTP connection (1): 10.90.10.24
DEBUG (connectionpool:357) "POST /v2/19abae5746b242d489d1c2862b228d8b/servers/4e863fad-e868-48a1-8735-2da9a38561e8/os-interface HTTP/1.1" 500 72
RESP: [500] {'date': 'Wed, 17 Sep 2014 06:13:02 GMT', 'content-length': '72', 'content-type': 'application/json; charset=UTF-8', 'x-compute-request-id': 'req-93700c2a-fab2-42b9-9427-7136ac3f0f59'}
RESP BODY: {"computeFault": {"message": "Failed to attach interface", "code": 500}}


In fact, Nova gets an empty network list from Neutron because of my incorrect 
input. Nova should handle this error and return an HTTP 400 in order to tell 
the user to correct the network id in the request.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370359

Title:
  HTTP 500 is returned when using an invalid network id to attach
  interface

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I post an 'attach interface' request to Nova with an invalid
  network id, Nova returns an HTTP 500 that only tells me the attach
  interface operation failed.

  REQ: curl -i 'http://10.90.10.24:8774/v2/19abae5746b242d489d1c2862b228d8b/servers/4e863fad-e868-48a1-8735-2da9a38561e8/os-interface' -X POST -H "Accept: application/json" -H "Content-Type: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Project-Id: Public" -H "X-Auth-Token: {SHA1}e8bf3f6e4599a94e4ce1625abc3bc8a2dfd2742c" -d '{"interfaceAttachment": {"fixed_ips": [{"ip_address": "10.100.99.170"}], "net_id": "173854d5-333f-4c78-b5a5-10d2e9c8d82z"}}'
  INFO (connectionpool:187) Starting new HTTP connection (1): 10.90.10.24
  DEBUG (connectionpool:357) "POST /v2/19abae5746b242d489d1c2862b228d8b/servers/4e863fad-e868-48a1-8735-2da9a38561e8/os-interface HTTP/1.1" 500 72
  RESP: [500] {'date': 'Wed, 17 Sep 2014 06:13:02 GMT', 'content-length': '72', 'content-type': 'application/json; charset=UTF-8', 'x-compute-request-id': 'req-93700c2a-fab2-42b9-9427-7136ac3f0f59'}
  RESP BODY: {"computeFault": {"message": "Failed to attach interface", "code": 500}}

  
  In fact, Nova gets an empty network list from Neutron because of my
  incorrect input. Nova should handle this error and return an HTTP 400 in
  order to tell the user to correct the network id in the request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370359/+subscriptions



[Yahoo-eng-team] [Bug 1370361] [NEW] Neutron need to reduce number of network db calls during get_devices_details_list

2014-09-17 Thread zhu zhu
Public bug reported:

Currently each Neutron agent issues db calls to the Neutron server to
query devices, ports and networks when it starts up.

Take the ML2 rpc.py method get_device_details for example. During this
call the server fetches each port and then, inside
get_bound_port_context, the network that port is associated with. In
production environments, however, most ports belong to a much smaller
number of networks; a compute host may even run 500 VMs on a single
network.

Since the network information is static and is only used to construct
the device JSON, it is suggested to prefetch these networks once instead
of querying them for each port. This would greatly reduce the number of
db calls.

With fewer db calls, neutron server performance and database load
improve when the number of VMs is large.
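The suggested prefetching can be sketched as follows; the function and argument names are illustrative, not the actual ML2 code:

```python
# Cache each distinct network once per batch instead of issuing one db
# query per port: 500 ports on a single network then cost one network
# query rather than 500.

def get_devices_details_list(ports, get_network):
    """ports: dicts with 'id' and 'network_id'; get_network: db query."""
    network_cache = {}
    details = []
    for port in ports:
        net_id = port['network_id']
        if net_id not in network_cache:        # db hit only on first use
            network_cache[net_id] = get_network(net_id)
        details.append({'port_id': port['id'],
                        'network': network_cache[net_id]})
    return details
```

The number of network queries drops from O(ports) to O(distinct networks) per batch.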

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: performance

** Tags added: performance

** Summary changed:

- Neutron need to reduce nubmer of network db calls during 
get_devices_details_list
+ Neutron need to reduce number of network db calls during 
get_devices_details_list

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370361

Title:
  Neutron need to reduce number of network db calls during
  get_devices_details_list

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently each Neutron agent issues db calls to the Neutron server to
  query devices, ports and networks when it starts up.

  Take the ML2 rpc.py method get_device_details for example. During this
  call the server fetches each port and then, inside
  get_bound_port_context, the network that port is associated with. In
  production environments, however, most ports belong to a much smaller
  number of networks; a compute host may even run 500 VMs on a single
  network.

  Since the network information is static and is only used to construct
  the device JSON, it is suggested to prefetch these networks once
  instead of querying them for each port. This would greatly reduce the
  number of db calls.

  With fewer db calls, neutron server performance and database load
  improve when the number of VMs is large.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370361/+subscriptions



[Yahoo-eng-team] [Bug 1370022] Re: Keystone cannot cope with being behind an SSL terminator for version list

2014-09-17 Thread Andrey Pavlov
** Changed in: keystone
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1370022

Title:
  Keystone cannot cope with being behind an SSL terminator for version
  list

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When Keystone is set up behind an SSL terminator, it returns 'http' as
  the protocol in the URLs returned by the version list command:

  user@host:~$ curl https://MYHOST:5000/

  {"versions": {"values": [{"status": "stable", "updated":
  "2013-03-06T00:00:00Z", "media-types": [{"base": "application/json",
  "type": "application/vnd.openstack.identity-v3+json"}, {"base":
  "application/xml", "type":
  "application/vnd.openstack.identity-v3+xml"}], "id": "v3.0", "links":
  [{"href": "http://MYHOST:5000/v3/", "rel": "self"}]}, {"status":
  "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base":
  "application/json", "type":
  "application/vnd.openstack.identity-v2.0+json"}, {"base":
  "application/xml", "type":
  "application/vnd.openstack.identity-v2.0+xml"}], "id": "v2.0",
  "links": [{"href": "http://MYHOST:5000/v2.0/", "rel": "self"},
  {"href": "http://docs.openstack.org/api/openstack-identity-service/2.0/content/",
  "type": "text/html", "rel": "describedby"},
  {"href": "http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf",
  "type": "application/pdf", "rel": "describedby"}]}]}}

  My haproxy config:

  frontend keystone_main_frontend
  bind 172.31.7.253:5000
  bind 172.31.7.252:5000 ssl crt /etc/haproxy/certs/runtime
  reqadd X-Forwarded-Proto:\ https if { ssl_fc }
  default_backend keystone_main_backend
  option httpclose
  option http-pretend-keepalive
  option forwardfor

  backend keystone_main_backend
  server HOST1 172.31.0.10:5000 check
  server HOST2 172.31.0.12:5000 check
  server HOST3 172.31.0.16:5000 check

  Similar bug is here https://bugs.launchpad.net/heat/+bug/123

  And because of this bug the latest cinder client doesn't work:

  user@host:~$ cinder --os-username admin --os-tenant-name admin --os-password 
password --os-auth-url https://MYHOST:5000/v2.0/ --endpoint-type publicURL 
--debug list
  ERROR: Unable to establish connection to http://MYHOST:5000/v2.0/tokens

  
  Also, if I set public_endpoint and admin_endpoint in keystone.conf to use 
the 'https' protocol, then everything works.
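The usual remedy (which the public_endpoint/admin_endpoint workaround approximates) can be sketched like this; the helper name is illustrative, not Keystone's actual code:

```python
# Honor the X-Forwarded-Proto header set by the SSL terminator (see the
# "reqadd X-Forwarded-Proto:\ https" line in the haproxy config above)
# when building version-list self links, instead of trusting only the
# backend's own (plain-http) scheme.

def version_href(environ, host, port=5000, path='/v3/'):
    """environ: a WSGI environ dict for the incoming request."""
    scheme = environ.get('HTTP_X_FORWARDED_PROTO',
                         environ.get('wsgi.url_scheme', 'http'))
    return '%s://%s:%d%s' % (scheme, host, port, path)
```

A request that arrived through the terminator then advertises https links, while a direct backend request keeps http.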

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1370022/+subscriptions



[Yahoo-eng-team] [Bug 1370387] [NEW] Fail to fetch non-raw image on image_backend=RBD

2014-09-17 Thread Sébastien Han
Public bug reported:

Using the RBD image backend and booting a non-raw image results in a fallback 
to the fetch_to_raw function, whose goal is to download the qcow2 image to the 
compute node, convert it into raw and import it into Ceph.
Fetching the image fails with the following errors:

On the nova-compute logs:

2014-09-16 10:16:23.061 ERROR nova.compute.manager [req-4c42af30-bc04-4648-bd12-14419031be80 admin admin] [instance: 18d8659a-0247-4472-97be-9c6c9007689b] Instance failed to spawn
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] Traceback (most recent call last):
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/compute/manager.py", line 2203, in _build_resources
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] yield resources
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/compute/manager.py", line 2082, in _build_and_run_instance
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] block_device_info=block_device_info)
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2598, in spawn
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] admin_pass=admin_password)
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2974, in _create_image
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] project_id=instance['project_id'])
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 206, in cache
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] *args, **kwargs)
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 396, in create_image
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] prepare_template(target=base, max_size=size, *args, **kwargs)
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/openstack/common/lockutils.py", line 270, in inner
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] return f(*args, **kwargs)
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/virt/libvirt/imagebackend.py", line 196, in fetch_func_sync
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] fetch_func(target=target, *args, **kwargs)
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/virt/libvirt/utils.py", line 452, in fetch_image
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] max_size=max_size)
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/virt/images.py", line 73, in fetch_to_raw
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] max_size=max_size)
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/virt/images.py", line 67, in fetch
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] IMAGE_API.download(context, image_href, dest_path=path)
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/image/api.py", line 178, in download
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] dst_path=dest_path)
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/image/glance.py", line 352, in download
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b] _reraise_translated_image_exception(image_id)
2014-09-16 10:16:23.061 TRACE nova.compute.manager [instance: 18d8659a-0247-4472-97be-9c6c9007689b]   File "/opt/stack/nova/nova/image/glance.py", line 350, in download

[Yahoo-eng-team] [Bug 1370384] [NEW] Cannot expand root volume by EC2 API

2014-09-17 Thread Feodor Tersin
Public bug reported:

AWS provides a scenario to expand the volumes of an instance 
(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html). It 
consists of:
1 Stop the instance
2 Create a snapshot of the volume
3 Create a new volume from the snapshot
4 Detach the old volume
5 Attach the new volume using the same device name
6 Start the instance
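
The six steps can be sketched against a generic EC2-style client; the method names are illustrative, not any particular SDK's API:

```python
# Hedged sketch of the AWS expand-volume scenario. Step 4 is where the
# Nova EC2 path now fails for root volumes (HTTP 403, see below).

def expand_root_volume(ec2, instance_id, old_volume_id, new_size_gb,
                       device='/dev/sda1'):
    ec2.stop_instance(instance_id)                         # 1. stop
    snapshot_id = ec2.create_snapshot(old_volume_id)       # 2. snapshot
    new_volume_id = ec2.create_volume(snapshot_id,         # 3. new volume
                                      new_size_gb)         #    from snapshot
    ec2.detach_volume(old_volume_id)                       # 4. detach old
    ec2.attach_volume(new_volume_id, instance_id, device)  # 5. same device
    ec2.start_instance(instance_id)                        # 6. start
    return new_volume_id
```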

In Nova this works for non-root devices, but doesn't for a root device.

Now, in the current version (Juno), since
https://review.openstack.org/#/c/75552/ was merged it is not possible to
detach the root volume at all.

$ nova volume-detach inst 02f60d80-47ee-47ed-a795-cb4d05f5103e
ERROR (Forbidden): Can't detach root device volume (HTTP 403) (Request-ID: 
req-e25134dc-1330-4fe1-9d21-abc274e75a1d)

Before this commit detaching was possible, but it was impossible to
attach the root volume back.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370384

Title:
  Cannot expand root volume by EC2 API

Status in OpenStack Compute (Nova):
  New

Bug description:
  AWS provides a scenario to expand the volumes of an instance 
(http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html). It 
consists of:
  1 Stop the instance
  2 Create a snapshot of the volume
  3 Create a new volume from the snapshot
  4 Detach the old volume
  5 Attach the new volume using the same device name
  6 Start the instance

  In Nova this works for non-root devices, but doesn't for a root
  device.

  Now, in the current version (Juno), since
  https://review.openstack.org/#/c/75552/ was merged it is not possible
  to detach the root volume at all.

  $ nova volume-detach inst 02f60d80-47ee-47ed-a795-cb4d05f5103e
  ERROR (Forbidden): Can't detach root device volume (HTTP 403) (Request-ID: 
req-e25134dc-1330-4fe1-9d21-abc274e75a1d)

  Before this commit detaching was possible, but it was impossible to
  attach the root volume back.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370384/+subscriptions



[Yahoo-eng-team] [Bug 1370390] [NEW] Resize instance will not change the NUMA topology of a running instance to the one from the new flavor

2014-09-17 Thread Nikola Đipanov
Public bug reported:

When we resize (change the flavor of) an instance that has a NUMA
topology defined, the NUMA info from the new flavor is not considered
during scheduling. The instance gets re-scheduled based on the old NUMA
information, but the claiming on the host uses the new flavor data. Once
the instance successfully lands on a host, we still use the old data
when provisioning it on the new host.
We should be considering only the new flavor information in resizes.

** Affects: nova
 Importance: High
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370390

Title:
  Resize instance will not change the NUMA topology of a running
  instance to the one from the new flavor

Status in OpenStack Compute (Nova):
  New

Bug description:
  When we resize (change the flavor of) an instance that has a NUMA
  topology defined, the NUMA info from the new flavor is not considered
  during scheduling. The instance gets re-scheduled based on the old
  NUMA information, but the claiming on the host uses the new flavor
  data. Once the instance successfully lands on a host, we still use the
  old data when provisioning it on the new host.

  We should be considering only the new flavor information in resizes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370390/+subscriptions



[Yahoo-eng-team] [Bug 1370394] [NEW] neutron.tests.unit.cisco.test_network_plugin.TestCiscoRouterInterfacesV2XML.test_port_list_filtered_by_router_id failed in gate due to exception logged

2014-09-17 Thread Ihar Hrachyshka
Public bug reported:

It failed in Icehouse:

Traceback (most recent call last):
  File "neutron/tests/unit/cisco/test_network_plugin.py", line 1114, in test_port_list_filtered_by_router_id
    self.assertFalse(self.log_exc_count)
  File "/usr/lib/python2.7/unittest/case.py", line 414, in assertFalse
    raise self.failureException(msg)
AssertionError: 1 is not false

This means that an unexpected exception occurred and was logged while
running the unit test.
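The assertion relies on a fixture that counts exceptions logged during the test. The pattern can be sketched like this (illustrative code, not Neutron's actual test base class):

```python
# Count log records that carry exception info, i.e. anything emitted via
# LOG.exception(...) or logging calls with exc_info set.
import logging


class ExceptionCounter(logging.Handler):
    """Logging handler that counts records logged with a traceback."""

    def __init__(self):
        logging.Handler.__init__(self)
        self.count = 0

    def emit(self, record):
        if record.exc_info:        # only records carrying exception info
            self.count += 1


log = logging.getLogger('demo')
counter = ExceptionCounter()
log.addHandler(counter)

try:
    raise ValueError('boom')
except ValueError:
    log.exception('unexpected failure')   # increments the counter
log.error('plain message')                # no exc_info: not counted
```

A test then asserts the counter is still zero at teardown; any stray `LOG.exception` during the run, like the one in this gate failure, trips the `assertFalse`.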

It failed once for https://review.openstack.org/#/c/120418/
Logs are at: 
http://logs.openstack.org/18/120418/3/gate/gate-neutron-python27/c8fc567/testr_results.html.gz

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370394

Title:
  
neutron.tests.unit.cisco.test_network_plugin.TestCiscoRouterInterfacesV2XML.test_port_list_filtered_by_router_id
  failed in gate due to exception logged

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  It failed in Icehouse:

  Traceback (most recent call last):
    File "neutron/tests/unit/cisco/test_network_plugin.py", line 1114, in test_port_list_filtered_by_router_id
      self.assertFalse(self.log_exc_count)
    File "/usr/lib/python2.7/unittest/case.py", line 414, in assertFalse
      raise self.failureException(msg)
  AssertionError: 1 is not false

  This means that an unexpected exception occurred and was logged while
  running the unit test.

  It failed once for https://review.openstack.org/#/c/120418/
  Logs are at: 
http://logs.openstack.org/18/120418/3/gate/gate-neutron-python27/c8fc567/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370394/+subscriptions



[Yahoo-eng-team] [Bug 1370450] [NEW] Create Volume, Size should have default value

2014-09-17 Thread Bradley Jones
Public bug reported:

The minimum size for a volume is 1 GB, so this should be the default in
the Create Volume modal.
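
A minimal sketch of the suggested default (illustrative helper, not Horizon's actual form code):

```python
# Pre-fill the size field with the 1 GB minimum and never accept less.
MINIMUM_VOLUME_GB = 1


def initial_volume_size(requested=None):
    """Return the value to show/submit for the volume size field."""
    if requested is None:
        return MINIMUM_VOLUME_GB      # default shown in the modal
    return max(int(requested), MINIMUM_VOLUME_GB)
```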

** Affects: horizon
 Importance: Undecided
 Assignee: Bradley Jones (bradjones)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Bradley Jones (bradjones)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370450

Title:
  Create Volume, Size should have default value

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The minimum size for a volume is 1 GB, so this should be the default
  in the Create Volume modal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370450/+subscriptions



[Yahoo-eng-team] [Bug 1318544] Re: XenServer - Nova-Compute StorageError Waiting for device

2014-09-17 Thread Sean Dague
Long incomplete bug

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1318544

Title:
  XenServer - Nova-Compute StorageError Waiting for device

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Hi All,

  I started building an OpenStack cloud based on the new LTS version of
  Ubuntu (14.04). I installed both the control and compute nodes as VMs
  on a XenServer. I selected the option 'Other Operating System', so
  they run as HVM.

  The cluster is up and running: I can start instances, create storage
  and allocate IP addresses. But the attached error pops up when an
  instance is in the spawning state. The error I attached is from the
  nova-compute node.

  I use an IBM Storwize backend with a Cinder connection using iSCSI. I
  can see the instances being created and the storage being connected,
  but the following error keeps coming back.

  Locally on the XenServer, sdb and sdc existed, which led to an earlier
  error in which hdb was hdc. I removed sdc locally from the XenServer
  using: echo 1 > /sys/block/sdc/device/delete. This changed the hdc
  error message to hdb. I did the same for sdb. Both were virtual drives
  from the DRAC. But this message still keeps popping up for hdb.

  I read another article about a similar error; that reporter was
  pointed in the direction of an HVM vs PV issue, because the VM
  communicates with dom0 via its kernel. I checked, and after installing
  the XenServer tools the UUID is there and is readable.

  While trying the solution above I also focused on the hdb reference,
  which was strange because with all the rewrite options in nova I would
  expect a reference to xvda: xenapi_remap_vbd_dev=false

  Could you point me in the right direction or help me solve this
  problem? I don't think this is a misconfiguration, because I tried all
  possible configurations and this message keeps popping up.

  This is the last step towards a working OpenStack cloud with Ubuntu
  14.04 and XenServer 6.2, all patches applied.

  2014-05-12 09:00:30.918 6660 ERROR nova.compute.manager [req-595bc50b-e8b2-4de4-8c38-09887c8d4c82 f8609ec1f5254fcfb1fdf6df76876805 41971b840c7a405d9da5052a2506a1c9] [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9] Error: Timeout waiting for device hdb to be created
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9] Traceback (most recent call last):
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1311, in _build_instance
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9] set_access_ip=set_access_ip)
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 399, in decorated_function
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9] return function(self, context, *args, **kwargs)
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1723, in _spawn
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9] LOG.exception(_('Instance failed to spawn'), instance=instance)
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9] six.reraise(self.type_, self.value, self.tb)
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1720, in _spawn
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9] block_device_info)
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/driver.py", line 230, in spawn
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9] admin_password, network_info, block_device_info)
  2014-05-12 09:00:30.918 6660 TRACE nova.compute.manager [instance: 00e2c60e-372b-43d2-a2fd-bcc3628c24a9]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 357, in spawn

[Yahoo-eng-team] [Bug 1291127] Re: If vm start of bootable volume does not work properly.

2014-09-17 Thread Sean Dague
Old incomplete bug, please reopen if you can respond to the issue at
hand

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291127

Title:
  If vm start of bootable volume does not work properly.

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  A VM booted from a bootable volume does not operate normally when it is
  started again after being stopped.

  The following error may occur:
  OSError: [Errno 2] No such file or directory: 
'/var/lib/nova/instances/a7cdb294-a8ce-4aa2-9c49-ac3484b73262/disk'

  The problem is that the local disk information is used instead of the
  volume's connection_info.
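A minimal sketch of the fix direction the report describes: prefer the volume's connection_info over the local disk path for a volume-backed instance. All names here (pick_disk_source, the block_device_info shape, the device path) are illustrative assumptions, not nova's actual code.

```python
# Hypothetical helper: choose where an instance's root disk really lives.
def pick_disk_source(instance_uuid, block_device_info):
    """Return the disk source an instance should boot from."""
    for bdm in block_device_info.get("block_device_mapping", []):
        if bdm.get("boot_index") == 0 and bdm.get("connection_info"):
            # Volume-backed: use the volume, never a local disk file.
            return bdm["connection_info"]["data"]["device_path"]
    # Ephemeral-backed: fall back to the local disk file.
    return "/var/lib/nova/instances/%s/disk" % instance_uuid

src = pick_disk_source(
    "a7cdb294-a8ce-4aa2-9c49-ac3484b73262",
    {"block_device_mapping": [
        {"boot_index": 0,
         "connection_info": {"data": {"device_path": "/dev/disk/by-path/ip-..."}}}]})
```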

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1253049] Re: Fixed Network (vLAN) didn't get released on Tenant deletion

2014-09-17 Thread Sean Dague
Old incomplete bug. Please reopen if it's still an issue.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1253049

Title:
  Fixed Network (vLAN) didn't get released on Tenant deletion

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I am using OpenStack - Grizzly.
  I have 100 vLANs in my OpenStack setup. I created 100 Tenants/Projects, and 
when I created a VM instance in each of the 100 Tenants, the vLANs were 
automatically assigned to the Tenants.
  That means when I create an instance in a tenant, a vLAN is automatically 
assigned to that tenant.
  So, all 100 of my vLANs are mapped to the 100 Tenants.

  But when I delete the VM instance and the respective Tenant, its vLAN doesn't 
get released automatically.
  Now, if I try to create a new tenant and launch an instance in that tenant, 
the instance cannot be created and gets stuck in the 'scheduling' state.
  When I check the nova-compute log files, I find the error "RemoteError: 
Remote error: NoMoreNetworks An unknown exception occurred."
  So, when I checked the network list using 'nova-manage network list', 
I got the list of all the Fixed-Networks (vLANs) with their associated Tenants.
  I found that my deleted tenant's Tenant-ID is still associated with its vLAN.

  So, what happens is that when we delete a tenant, its associated vLAN doesn't 
get released automatically, and the vLAN keeps the entry of an invalid, 
non-existing tenant.
  Hence, those vLANs are neither occupied nor available for other tenants, and 
we need to disassociate them manually using the CLI.
  OpenStack should provide a way to disassociate vLANs from tenants 
automatically when the tenants are deleted.
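The manual cleanup described above can be sketched as a small helper that spots vLAN networks still bound to tenants that no longer exist. The data shapes are assumptions for illustration, not nova's actual schema.

```python
# Illustrative: find vLANs whose associated tenant has been deleted,
# so they can be disassociated (manually today, automatically ideally).
def stale_vlan_networks(networks, existing_tenant_ids):
    """networks: iterable of (vlan_id, project_id) rows."""
    existing = set(existing_tenant_ids)
    return [vlan for vlan, project in networks
            if project is not None and project not in existing]

stale = stale_vlan_networks(
    [(100, "t1"), (101, "deleted-tenant"), (102, None)],
    ["t1", "t2"])
```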

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1253049/+subscriptions



[Yahoo-eng-team] [Bug 966087] Re: Instance remains in deleting state if libvirt down during deletion

2014-09-17 Thread Sean Dague
Long incomplete bug. Please reopen if still useful.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/966087

Title:
  Instance remains in deleting state if libvirt down during deletion

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Scenario: While terminating an instance, if the libvirt process goes
  down just before the 'destroy' method gets called from
  virt.libvirt.connection module, instance remains in 'deleting' state.

  Expected Behavior: Compute must handle the exception and update the instance 
vm_state='error'
  Actual Result: The instance remains in the following state: 
vm_state='active', task_state='deleting', power_state=1.

  Compute manager must handle the libvirt exception and set the instance
  to error state in this scenario.
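The expected behavior can be distilled into a sketch. The names, state dict, and exception are illustrative stand-ins, not nova's actual compute manager code.

```python
# Sketch: catch the hypervisor error during destroy and move the
# instance to ERROR instead of leaving it wedged in 'deleting'.
class HypervisorDown(Exception):
    pass

def terminate_instance(instance, destroy):
    instance["task_state"] = "deleting"
    try:
        destroy(instance)
    except HypervisorDown:
        # libvirt went away: surface the failure instead of hanging.
        instance["vm_state"] = "error"
        instance["task_state"] = None
        return
    instance["vm_state"] = "deleted"
    instance["task_state"] = None

inst = {"vm_state": "active", "task_state": None, "power_state": 1}

def broken_destroy(_):
    raise HypervisorDown()

terminate_instance(inst, broken_destroy)
```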

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/966087/+subscriptions



[Yahoo-eng-team] [Bug 753280] Re: We should use policy routing for VM's.

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/753280

Title:
  We should use policy routing for VM's.

Status in OpenStack Neutron (virtual network service):
  Incomplete
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  We currently modify the host's default routing table. We should leave
  that alone and apply a different routing table for VM's.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/753280/+subscriptions



[Yahoo-eng-team] [Bug 1173413] Re: in _build_instance UnexpectedTaskStateError does not deallocate the network

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1173413

Title:
  in _build_instance UnexpectedTaskStateError does not deallocate the
  network

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  It appears that when a build/spawn fails with an
  UnexpectedTaskStateError there is no attempt to deallocate the
  network. This might be a feature or a flaw (not entirely clear); if it's
  a flaw, it is likely leaving the allocated network orphaned.
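If it is indeed a flaw, the missing cleanup would look roughly like this. All names here are hypothetical, not nova's actual _build_instance.

```python
# Sketch: release the network allocation if spawn fails, then re-raise.
class UnexpectedTaskStateError(Exception):
    pass

def build_instance(allocate_network, spawn, deallocate_network):
    network_info = allocate_network()
    try:
        spawn(network_info)
    except UnexpectedTaskStateError:
        # Without this branch the allocation would be orphaned.
        deallocate_network(network_info)
        raise

calls = []

def failing_spawn(net):
    raise UnexpectedTaskStateError()

try:
    build_instance(lambda: "net-1", failing_spawn,
                   lambda net: calls.append(("dealloc", net)))
except UnexpectedTaskStateError:
    pass
```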

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1173413/+subscriptions



[Yahoo-eng-team] [Bug 1044103] Re: need instanceId, instanceName and auto_assigned in the payload of _associate_floating_ip()

2014-09-17 Thread Sean Dague
This doesn't look valid any more.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1044103

Title:
  need instanceId, instanceName and auto_assigned in the payload of
  _associate_floating_ip()

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The payload of _associate_floating_ip() has only project_id and the
  floating_ip address. We might need to know the other information
  associated with the floating_ip after it is associated with the instance,
  for example instance_id, instance_name and auto_assigned. Otherwise,
  the client needs to fetch that information with another DB get()
  after receiving the notification. Including it in the payload saves
  that extra call.
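The richer payload being requested could be sketched like this; the exact field names beyond those mentioned in the report are assumptions.

```python
# Sketch: include instance fields in the notification payload so
# consumers do not need a second DB lookup.
def build_associate_payload(project_id, floating_ip, instance):
    return {
        "project_id": project_id,
        "floating_ip": floating_ip,
        "instance_id": instance["uuid"],
        "instance_name": instance["display_name"],
        "auto_assigned": instance.get("auto_assigned", False),
    }

payload = build_associate_payload(
    "proj-1", "203.0.113.7",
    {"uuid": "abc", "display_name": "web-1"})
```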

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1044103/+subscriptions



[Yahoo-eng-team] [Bug 1152303] Re: nova.compute ImageNotAuthorized when using strategy keystone

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1152303

Title:
  nova.compute ImageNotAuthorized when using strategy keystone

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  This compute node is running the latest code from: 
  http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main


  When glance is secured with keystone then nova-compute is not
  authorized to deploy an image from glance. This is a problem specific
  to the context of the token.

  per this question:
  https://answers.launchpad.net/nova/+question/218145

  I am getting the same error.

  http://codepad.org/jYi5GZ72

  I have updated the code in nova.image.glance to this:

  def _create_glance_client(context, host, port, use_ssl, version=1):
      """Instantiate a new glanceclient.Client object"""
      if use_ssl:
          scheme = 'https'
      else:
          scheme = 'http'
      params = {}
      params['insecure'] = FLAGS.glance_api_insecure
      if FLAGS.auth_strategy == 'keystone':
          dicttoken = context.to_dict().get('auth_token')
          contexttoken = context.auth_token
          LOG.error("### dict token is %s" % dicttoken)
          LOG.error("### context token is %s" % contexttoken)
          params['token'] = context.auth_token
      endpoint = '%s://%s:%s' % (scheme, host, port)
      return glanceclient.Client(str(version), endpoint, **params)

  And as you can see from the paste

  The params['token'] code is being called twice.

  The second time the context.auth_token call is failing.

  root@server12:~# grep req-f66255ef-13fe-4791-b137-f76855197aa4 
/var/log/nova/nova-compute.log | grep ERROR
  2013-03-07 11:07:44 ERROR nova.image.glance 
[req-f66255ef-13fe-4791-b137-f76855197aa4 5e363b8f0665443d89ca9d9787a19a81 
eb4f9252e66843b3b7eaa6662d6062c8] ### dict token is 
fff534d1a18c4b4a816c076d4fce0e70

  2013-03-07 11:07:44 ERROR nova.image.glance [req-f66255ef-
  13fe-4791-b137-f76855197aa4 5e363b8f0665443d89ca9d9787a19a81
  eb4f9252e66843b3b7eaa6662d6062c8] ### context token is
  fff534d1a18c4b4a816c076d4fce0e70

  2013-03-07 11:07:49 ERROR nova.image.glance [req-f66255ef-
  13fe-4791-b137-f76855197aa4 5e363b8f0665443d89ca9d9787a19a81
  eb4f9252e66843b3b7eaa6662d6062c8] ### dict token is
  fff534d1a18c4b4a816c076d4fce0e70

  2013-03-07 11:07:49 ERROR nova.image.glance [req-f66255ef-
  13fe-4791-b137-f76855197aa4 5e363b8f0665443d89ca9d9787a19a81
  eb4f9252e66843b3b7eaa6662d6062c8] ### context token is None

  2013-03-07 11:07:49 ERROR nova.compute.manager [req-f66255ef-
  13fe-4791-b137-f76855197aa4 5e363b8f0665443d89ca9d9787a19a81
  eb4f9252e66843b3b7eaa6662d6062c8] [instance: 3e89c0a7-11c8-4b4f-8b4b-
  b04ea97a9d88] Instance failed to spawn

  If I use the dict option the token works and I am no longer blocked.
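The dict-based workaround can be distilled into a sketch. FakeContext stands in for nova's request context and is purely illustrative; the point is that the token is read once at client-creation time.

```python
# Sketch: snapshot the token via to_dict() when building the glance
# client params, so a later reset of the live context attribute
# (observed above as "context token is None") cannot null it out.
def make_glance_params(context, insecure=False):
    params = {"insecure": insecure}
    params["token"] = context.to_dict().get("auth_token")
    return params

class FakeContext:
    def __init__(self, token):
        self.auth_token = token
    def to_dict(self):
        return {"auth_token": self.auth_token}

ctx = FakeContext("fff534d1a18c4b4a816c076d4fce0e70")
params = make_glance_params(ctx)
ctx.auth_token = None  # simulate the second-call failure mode
```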

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1152303/+subscriptions



[Yahoo-eng-team] [Bug 1169929] Re: Wrong memory and storage computing in resource tracer that counted the error status instance's resource

2014-09-17 Thread Sean Dague
Very old incomplete bug

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1169929

Title:
  Wrong memory and storage computing in resource tracer that counted the
  error status instance's resource

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  There is an error in the available-resource computing logic within the
  nova resource tracker that causes the available-resource calculation to
  count all instances, including those in error status.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1169929/+subscriptions



[Yahoo-eng-team] [Bug 1072014] Re: instance cannot get ip automatically under FlatDHCP mode

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1072014

Title:
  instance cannot get ip automatically under FlatDHCP mode

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I installed the folsom-2012.2 nova/glance/keystone on a single node and 
everything seemed right,
  except that the instances can't get their private fixed IPs via the DHCP 
protocol in FlatDHCP network mode.

  After I used VNC to get access to the vm instance and ran `ip addr add 
%FIXED-IP%/%NETMASK% dev eth0` to configure
  the ip address for the instance manually, the instance's network began to 
work.
  (the image of the instances is debian-6.0.4-amd64-standard)

  Then I used tcpdump to capture the UDP packets on br100 (which 
nova-network is using) and found that dnsmasq (which acts as the DHCP server)
  didn't respond to the DHCPDISCOVER requests sent by the DHCP client inside 
the instance.

  
  I also found a similar problem reported by others, but in my case the 
server's kernel version is 3.2:

  DHCP broken for Openstack Nova instances since kernel v3.3
  https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1035172


  Below is the debug info; can anyone give some help? Thanks.

  kernel version

  hzzhanggy % uname -a
  Linux DEV6 3.2.0-3-amd64 #1 SMP Mon Jul 23 02:45:17 UTC 2012 x86_64 
GNU/Linux

  
  dnsmasq version

  hzzhanggy % sudo aptitude show dnsmasq
  Package: dnsmasq
  State: installed
  Automatically installed: no
  Version: 2.63-4
  Priority: optional
  Section: net
  Maintainer: Simon Kelley si...@thekelleys.org.uk
  Architecture: all
  Uncompressed Size: 39.9 k
  Depends: netbase, dnsmasq-base (= 2.63-4)
  Suggests: resolvconf
  Conflicts: resolvconf (<< 1.15)
  Description: Small caching DNS proxy and DHCP/TFTP server
   Dnsmasq is a lightweight, easy to configure, DNS forwarder and DHCP 
server. It is designed to provide DNS and optionally, DHCP, to a small network. 
It can serve the names of local machines which are
   not in the global DNS. The DHCP server integrates with the DNS server 
and allows machines with DHCP-allocated addresses to appear in the DNS with 
names configured either in each host or in a central
   configuration file. Dnsmasq supports static and dynamic DHCP leases and 
BOOTP/TFTP for network booting of diskless machines.

  
  network interface

  hzzhanggy % ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
  inet 169.254.169.254/32 scope link lo
  inet6 ::1/128 scope host 
 valid_lft forever preferred_lft forever
  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP 
qlen 1000
  link/ether 5c:f3:fc:98:97:d8 brd ff:ff:ff:ff:ff:ff
  inet xxx.xxx.xxx.6/24 brd xxx.xxx.xxx.255 scope global eth0
  inet6 fe80::5ef3:fcff:fe98:97d8/64 scope link 
 valid_lft forever preferred_lft forever
  3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br100 
state UP qlen 1000
  link/ether 5c:f3:fc:98:97:da brd ff:ff:ff:ff:ff:ff
  inet6 fe80::5ef3:fcff:fe98:97da/64 scope link 
 valid_lft forever preferred_lft forever
  4: usb0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
  link/ether 5e:f3:fc:9c:97:db brd ff:ff:ff:ff:ff:ff
  78: br101: <BROADCAST,MULTICAST,PROMISC> mtu 1500 qdisc noqueue state 
DOWN 
  link/ether 2a:bc:e5:2f:4b:4c brd ff:ff:ff:ff:ff:ff
  inet 10.120.33.1/25 brd 10.120.33.127 scope global br101
  81: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state 
UP 
  link/ether 5c:f3:fc:98:97:da brd ff:ff:ff:ff:ff:ff
  inet 10.120.33.1/25 brd 10.120.33.127 scope global br100
  inet6 fe80::5ef3:fcff:fe98:97da/64 scope link 
 valid_lft forever preferred_lft forever
  102: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast 
master br100 state UNKNOWN qlen 500
  link/ether fe:16:3e:06:c4:71 brd ff:ff:ff:ff:ff:ff
  inet6 fe80::fc16:3eff:fe06:c471/64 scope link 
 valid_lft forever preferred_lft forever
  103: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast 
master br100 state UNKNOWN qlen 500
  link/ether fe:16:3e:08:1d:0d brd ff:ff:ff:ff:ff:ff
  inet6 fe80::fc16:3eff:fe08:1d0d/64 scope link 
 valid_lft forever preferred_lft forever

  
  dnsmasq process

  hzzhanggy % ps aux|grep dnsmasq
  nobody   24867  0.0  0.0  21360   952 ?SN   17:49   0:00 
/usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= 
--domain=novalocal 

[Yahoo-eng-team] [Bug 1134604] Re: New IPs not available after subnet changes

2014-09-17 Thread Sean Dague
This is sort of working as designed. It might be nice to get rid of the
static pre allocation of addresses, but we're not there yet.

** Changed in: nova
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1134604

Title:
  New IPs not available after subnet changes

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  We recently ran out of IP addresses in the subnet used for our instances
  and had to extend our subnet from /24 to /23.

  We edited /etc/nova/nova.conf and restarted nova-network using 
/etc/init.d/nova-network restart.
  ifconfig br100 was showing the correct (new) netmask, and so was 
'nova-manage network list'.

  When new instances were about to use the new range, they failed to get IPs:

  On compute node:
  2013-02-27 10:46:38 ERROR nova.compute.manager [req-  ] 
[instance: ] Instance failed network setup

  And corresponding on our controller:
  2013-02-27 10:46:38 DEBUG nova.network.manager [req-  ] 
[instance: ] networks retrieved for instance: 
|[nova.db.sqlalchemy.models.Network object at 0x4997850]| from (pid=2687) 
allocate_for_instance 
/usr/lib/python2.7/dist-packages/nova/network/manager.py:982
  2013-02-27 10:46:38 ERROR nova.openstack.common.rpc.amqp [-] Exception during 
message handling
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp Traceback (most 
recent call last):
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line 275, 
in _process_data
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp rval = 
self.proxy.dispatch(ctxt, version, method, **args)
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py, 
line 145, in dispatch
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp return 
getattr(proxyobj, method)(ctxt, **kwargs)
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/network/manager.py, line 257, in wrapped
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp return 
func(self, context, *args, **kwargs)
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/network/manager.py, line 320, in 
allocate_for_instance
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp **kwargs)
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/network/manager.py, line 257, in wrapped
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp return 
func(self, context, *args, **kwargs)
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/network/manager.py, line 986, in 
allocate_for_instance
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp 
requested_networks=requested_networks)
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/network/manager.py, line 213, in 
_allocate_fixed_ips
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp vpn=vpn, 
address=address)
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/network/manager.py, line 1265, in 
allocate_fixed_ip
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp 
instance_ref['uuid'])
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/db/api.py, line 442, in 
fixed_ip_associate_pool
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp instance_uuid, 
host)
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 111, in 
wrapper
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp return f(*args, 
**kwargs)
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py, line 1089, in 
fixed_ip_associate_pool
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp raise 
exception.NoMoreFixedIps()
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp NoMoreFixedIps: Zero 
fixed ips available.
  2013-02-27 10:46:38 TRACE nova.openstack.common.rpc.amqp 
  2013-02-27 10:46:38 ERROR nova.openstack.common.rpc.common [-] Returning 
exception Zero fixed ips available. to caller

  After some investigation we realised that the fixed_ips table in the nova DB 
only had IPs from the old range. We added the new IPs using:
  INSERT INTO fixed_ips (created_at, deleted, allocated, leased, reserved, 
network_id, address) VALUES (CURDATE(), 0,0,0,0, 1, xx.xx.xx.xx);
  which allowed our instances to get IPs from new range.
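The hand-written INSERT above can be generated for a whole new range with a short script. This sketch uses documentation addresses and a /25-to-/24 widening for brevity (the report's case was /24 to /23); the table layout follows the statement shown in the report.

```python
import ipaddress

# Emit INSERT statements for the host addresses that exist in the new
# CIDR but not in the old one.
def new_fixed_ip_inserts(old_cidr, new_cidr, network_id=1):
    old = set(ipaddress.ip_network(old_cidr).hosts())
    stmts = []
    for ip in ipaddress.ip_network(new_cidr).hosts():
        if ip in old:
            continue
        stmts.append(
            "INSERT INTO fixed_ips (created_at, deleted, allocated, leased,"
            " reserved, network_id, address)"
            " VALUES (CURDATE(), 0,0,0,0, %d, '%s');" % (network_id, ip))
    return stmts

stmts = new_fixed_ip_inserts("192.0.2.0/25", "192.0.2.0/24")
```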

  On compute node we are running:
  ii  nova-common  

[Yahoo-eng-team] [Bug 1116427] Re: Wishlist: Dynamic quotas based on compute node resources

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1116427

Title:
  Wishlist: Dynamic quotas based on compute node resources

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  It would be cool to have quotas calculated automatically, based on the 
currently available resources of the nodes. It could be an auto-config option 
with a configurable multiplier for overcommitment of resources.
  Most scheduler software, like IBM LoadLeveler, uses dynamic quotas instead of 
fixed values.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1116427/+subscriptions



[Yahoo-eng-team] [Bug 1210390] Re: schedule disk_filter issue

2014-09-17 Thread Sean Dague
Honestly, I'm pretty sure this works as you expect. Unless you can
provide a reproduce scenario where it doesn't, I'm not sure there is
anything to fix here.

** Changed in: nova
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1210390

Title:
  schedule disk_filter issue

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  In filter/disk_filter.py, we use the whole free disk on the host in the 
calculation, but I think we need to use the free disk available for storing 
instances here.
  For example, the whole free disk may be 100G on the host, but 
/var/lib/nova/instances/ may only have 10M free; I think we should use that 
10M in this filter, not the 100G.
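The reporter's point, reduced to a sketch. This is a pure function over numbers rather than nova's actual DiskFilter; in a real filter these values would come from host state.

```python
# Sketch: decide on the free space of the instances directory, not the
# host's overall free disk.
def disk_filter_passes(requested_gb, instances_path_free_gb,
                       host_free_gb=None):
    # host_free_gb (e.g. 100 GB) is deliberately ignored: it is
    # irrelevant if /var/lib/nova/instances only has
    # instances_path_free_gb left.
    return instances_path_free_gb >= requested_gb

# 100 GB free on the host overall, but ~10 MB where instances live:
ok = disk_filter_passes(requested_gb=1, instances_path_free_gb=0.01)
```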

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1210390/+subscriptions



[Yahoo-eng-team] [Bug 1184549] Re: In launching multi instances case, the name displayed not as we set

2014-09-17 Thread Sean Dague
From recent openstack:

nova boot --image cirros-0.3.2-x86_64-uec --flavor m1.tiny --min-count 3
test

os1:~ nova list
+--------------------------------------+-------------------------------------------+--------+------------+-------------+------------------+
| ID                                   | Name                                      | Status | Task State | Power State | Networks         |
+--------------------------------------+-------------------------------------------+--------+------------+-------------+------------------+
| 2a8144d0-ee57-4d8e-84b6-e353808321d9 | test-2a8144d0-ee57-4d8e-84b6-e353808321d9 | BUILD  | spawning   | NOSTATE     | private=10.0.0.2 |
| 74f09a8c-b695-4d05-af27-80877eb2f4c2 | test-74f09a8c-b695-4d05-af27-80877eb2f4c2 | BUILD  | spawning   | NOSTATE     | private=10.0.0.6 |
| 99278c89-ed8a-424c-9025-ef34b69737d8 | test-99278c89-ed8a-424c-9025-ef34b69737d8 | BUILD  | spawning   | NOSTATE     | private=10.0.0.5 |
+--------------------------------------+-------------------------------------------+--------+------------+-------------+------------------+


Which I believe is the expected behavior. I expect we had a bad string slice 
before, which has since been fixed.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1184549

Title:
  In launching multi instances case, the name displayed not as we set

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  In the multi-instance case, the display_name values are overwritten, and
  the names displayed aren't readable.

  Steps to reproduce:
  1. Go to Project-Instance.
  2. Launch instance with 'Instance Count' set to 10 and 'Instance Name' set to 
'test'.
  3. When finished, you can find that the names of the 10 new instances are like 
 't-18140c8a-3f33-4825-a7ca-88de54b8f84a', not the name we set.

  Because the names are all like 't-18140c8a-3f33-4825-a7ca-88de54b8f84a', I
  can't tell which is which.
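A later reply in this digest attributes this to a bad string slice. Here is a plausible reconstruction of how front-slicing a capped name produces the reported 't-...' names; the length cap itself is an assumption for illustration.

```python
# Illustrative: composing "base-uuid" and then slicing from the front
# when a length cap is hit truncates the base name to a single letter.
def multi_name(base, uuid, max_len=None):
    name = "%s-%s" % (base, uuid)
    if max_len is not None and len(name) > max_len:
        # buggy variant: keep the tail, eating the base name
        name = name[-max_len:]
    return name

u = "18140c8a-3f33-4825-a7ca-88de54b8f84a"
truncated = multi_name("test", u, max_len=38)  # base reduced to "t-"
full = multi_name("test", u)                   # full base name kept
```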

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1184549/+subscriptions



[Yahoo-eng-team] [Bug 1186343] Re: test_get_guest_config fails when qemu is selected for libvirt_type due to timers

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1186343

Title:
  test_get_guest_config fails when qemu is selected for libvirt_type due
  to timers

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When libvirt_type is set to 'qemu',
  nova.tests.virt.libvirt.test_libvirt.test_get_guest_config fails on
  'self.assertEquals(len(cfg.clock.timers), 2)'.

  ==
  FAIL: 
nova.tests.virt.libvirt.test_libvirt.LibvirtConnTestCase.test_get_guest_config
  --
  _StringException: pythonlogging:'': {{{
  Loading network driver 'nova.network.linux_net'
  URI qemu:///system does not support events
  }}}

  Traceback (most recent call last):
File /opt/stack/nova/nova/tests/virt/libvirt/test_libvirt.py, line 439, 
in test_get_guest_config
  self.assertEquals(len(cfg.clock.timers), 2)
  MismatchError: 0 != 2

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1186343/+subscriptions



[Yahoo-eng-team] [Bug 1370485] [NEW] Cannot update firewall rule protocol to ANY

2014-09-17 Thread Cedric Brandily
Public bug reported:

When updating a firewall rule, setting the protocol to ANY is not possible;
only TCP, UDP and ICMP are available.

** Affects: horizon
 Importance: Undecided
 Assignee: Cedric Brandily (cbrandily)
 Status: In Progress


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370485

Title:
  Cannot update firewall rule protocol to ANY

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When updating a firewall rule, setting the protocol to ANY is not possible;
  only TCP, UDP and ICMP are available.
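A sketch of one fix direction, not Horizon's actual form code: add an ANY choice to the protocol list and map it to a null protocol value. The choice values and the null mapping are assumptions for illustration.

```python
# Illustrative choice list for the rule-edit form, with the missing
# ANY option added.
PROTOCOL_CHOICES = [
    ("tcp", "TCP"),
    ("udp", "UDP"),
    ("icmp", "ICMP"),
    ("any", "ANY"),  # the option this bug reports as missing
]

def to_api_protocol(choice):
    """Map the form value to the API value; ANY becomes None."""
    return None if choice == "any" else choice
```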

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370485/+subscriptions



[Yahoo-eng-team] [Bug 1213798] Re: when i run nova live-migration , nova-compute raise AttributeError: 'NoneType' object has no attribute 'nwfilterDefineXML'

2014-09-17 Thread Sean Dague
long incomplete bug

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1213798

Title:
  when i run nova live-migration , nova-compute raise  AttributeError:
  'NoneType' object has no attribute 'nwfilterDefineXML'

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  2013-08-19 13:37:51.577 ERROR nova.compute.manager 
[req-fea59eb3-164e-4937-beb3-cbd9bf207c9d 1085902a0ed44d1496e9158e046d9c5d 
153e0f0b60844bd2848b232dc233cb22] [instance: 
867c5caa-fd04-47f4-8828-944755385379] Pre live migration failed at  openstack
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] Traceback (most recent call last):
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 3149, in 
live_migration
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] block_migration, disk, dest, 
migrate_data)
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379]   File 
/usr/lib/python2.7/dist-packages/nova/compute/rpcapi.py, line 408, in 
pre_live_migration
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] version='2.21')
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/proxy.py, line 80, 
in call
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] return rpc.call(context, 
self._get_topic(topic), msg, timeout)
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/__init__.py, line 
140, in call
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] return _get_impl().call(CONF, 
context, topic, msg, timeout)
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py, 
line 798, in call
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] rpc_amqp.get_connection_pool(conf, 
Connection))
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line 612, 
in call
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] rv = list(rv)
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line 561, 
in __iter__
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] raise result
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] AttributeError: 'NoneType' object has no 
attribute 'nwfilterDefineXML'
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] Traceback (most recent call last):
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] 
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line 430, 
in _process_data
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] rval = self.proxy.dispatch(ctxt, 
version, method, **args)
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] 
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py, 
line 133, in dispatch
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] return getattr(proxyobj, 
method)(ctxt, **kwargs)
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379] 
  2013-08-19 13:37:51.577 32273 TRACE nova.compute.manager [instance: 
867c5caa-fd04-47f4-8828-944755385379]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 3125, in 
pre_live_migration
  2013-08-19 

[Yahoo-eng-team] [Bug 1200640] Re: nova-compute fails with an error VirtDriverNotFound: Could not find driver for connection_type None on CentOS 6.4

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1200640

Title:
  nova-compute fails with an error VirtDriverNotFound: Could not find
  driver for connection_type None  on CentOS 6.4

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  OS: CentOS 6.4

  Following the nova-compute installation instructions from
  
http://docs.openstack.org/folsom/openstack-compute/install/yum/content/nova-conf-file.html

  * nova-manage version returns 2013.1.2

  upon starting nova-compute, it fails with the error:
  VirtDriverNotFound: Could not find driver for connection_type None

  even though the configuration file nova.conf clearly specifies:

  compute_driver=libvirt.LibvirtDriver
  libvirt_type = kvm

  according to the above tutorial.

  NOTE, starting nova-compute with nova-compute --connection_type=libvirt 
remediates the issue,
  although still issues the warning:
  (...)
  2013-07-12 15:28:06 24635 WARNING nova.common.deprecated [-] Deprecated 
Config: Specifying virt driver via connec
  tion_type is deprecated. Use compute_driver=classname instead.
  2013-07-12 15:28:06 24635 AUDIT nova.service [-] Starting compute node 
(version 2012.2.4-1.el6)
  (...)

  Similar to
  https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1076353, but
  affects nova-compute

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1200640/+subscriptions



[Yahoo-eng-team] [Bug 1187255] Re: _sync_power_states shutes instances down when rebooted from within instance

2014-09-17 Thread Sean Dague
Long incomplete bug

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1187255

Title:
  _sync_power_states shutes instances down when rebooted from within
  instance

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Periodic task _sync_power_states shuts instances down when the instance was 
rebooted from within itself. This happens when _sync_power_states checks the 
instance while it is rebooting for some reason (issued by scripts or users from 
within the instance). The hypervisor is libvirt/Xen. 
  In libvirt/Xen the instance has status PAUSED for a short time when it is 
rebooted from within. _sync_power_states shuts down every machine which is in 
the PAUSED state, so we should wait a short time and recheck whether the 
machine is still PAUSED before shutting it down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1187255/+subscriptions



[Yahoo-eng-team] [Bug 1197573] Re: nova interface-attach doesn't work with quantum when port|network|fixed id is omitted

2014-09-17 Thread Sean Dague
Long incomplete bug with no response

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1197573

Title:
  nova interface-attach doesn't work with quantum when
  port|network|fixed id is omitted

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I am using the latest quantum and nova.  I am running tempest against
  it and the
  
tempest.api.compute.servers.test_attach_interfaces:AttachInterfacesTestJSON.test_create_list_show_delete_interfaces
  test fails.

  Tempest shows that it is making the request to attach the interface
  with no args ({interfaceAttachment: {}})

  tempest.common.rest_client: INFO: Request: POST 
http://nova-controller.example.com:8774/v2/tempest/servers/40adfdf0-06f6-4da2-ac5b-41991d0c416a/os-interface
  tempest.common.rest_client: DEBUG: Request Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-Auth-Token': 'Token 
omitted'}
  tempest.common.rest_client: DEBUG: Request Body: {interfaceAttachment: {}}

  
  When it does that I see this in the nova-compute log  (The below paste is 
also available in a readable format at http://paste.openstack.org/show/39535/)

  
  Traceback (most recent call last):
File 
/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/nova/network/api.py,
 line 84, in update_instance_cache_with_nw_info
  cache)
File 
/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/nova/db/api.py,
 line 818, in instance_info_cache_update
  return IMPL.instance_info_cache_update(context, instance_uuid, values)
File 
/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/nova/cmd/compute.py,
 line 50, in __call__
  raise exception.DBNotAllowed('nova-compute')
  DBNotAllowed: nova-compute
  2013-07-03 01:00:50,847 (nova.compute.manager): ERROR manager 
attach_interface allocate_port_for_instance returned 3 ports
  2013-07-03 01:00:50,848 (nova.openstack.common.rpc.amqp): ERROR amqp 
_process_data Exception during message handling
  Traceback (most recent call last):
File 
/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py,
 line 433, in _process_data
  **args)
File 
/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py,
 line 172, in dispatch
  result = getattr(proxyobj, method)(ctxt, **kwargs)
File 
/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/nova/compute/manager.py,
 line 3262, in attach_interface
  raise exception.InterfaceAttachFailed(instance=instance)

  Note that I have quantum configured with three networks.  When tempest
  creates an instance it does not specify a network so quantum creates
  an interface on all three networks.

  It is not clear to me what the expected behavior is when
  interfaceAttachment is equal to {}.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1197573/+subscriptions



[Yahoo-eng-team] [Bug 1202199] Re: Emulated network driver is not working when using xen over libvirt

2014-09-17 Thread Sean Dague
I think this is an upstream xen issue and it's not clear we should be
working around it based on the patch commentary

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1202199

Title:
  Emulated network driver is not working when using xen over libvirt

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When using Xen over libvirt, emulated network drivers (rtl8139, e1000)
  will not work, as the interface name is too long.

  By default, Xen, when running VMs in HVM mode, creates two interfaces on
  the host system:

  vifX.X (for PV drivers)
  vifX.X-emu (for emulated drivers)

  The -emu suffix, combined with the 14-character tap interface name, is too
  long for a Linux interface name, with the result that vifX.X-emu can't be
  renamed to the OpenStack standard name.

  Additionally, OpenStack expects the tap interface to be without the -emu
  suffix, and security group settings will also not cover this interface
  (even if we shorten its name).
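The length limit behind this can be checked directly: Linux caps interface names at IFNAMSIZ (16 bytes including the trailing NUL), leaving 15 usable characters. The tap name below is illustrative of nova's 14-character names, not an actual device:

```python
IFNAMSIZ = 16              # Linux limit, including the trailing NUL byte
MAX_IFNAME = IFNAMSIZ - 1  # 15 usable characters

def fits_ifname(name):
    """True if the kernel will accept this interface name."""
    return 0 < len(name) <= MAX_IFNAME

tap = 'tap0123456789a'     # 14 characters, a typical nova tap name length
assert fits_ifname(tap)                # the PV vif name fits
assert not fits_ifname(tap + '-emu')   # 18 characters: rename fails
```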

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1202199/+subscriptions



[Yahoo-eng-team] [Bug 1204169] Re: compute instance.update messages sometimes have the wrong values for instance state

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: Phil Day (philip-day) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1204169

Title:
  compute instance.update messages sometimes have the wrong values for
  instance state

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Compute instance.update messages that are not triggered by a state
  change (e.g. setting the host in the resource tracker) have default
  (None) values for task_state, old_vm_state and old_task_state.

  This can make the instance state sequence look wrong to anything
  consuming the messages (e.g stacktach)

   compute.instance.update  None(None) -> Building(none)
   scheduler.run_instance.scheduled 
   compute.instance.update  building(None) ->  building(scheduling)
   compute.instance.create.start 
   compute.instance.update  building(None) ->  building(None)
   compute.instance.update  building(None) ->  building(networking)
   compute.instance.update  building(networking) -> 
building(block_device_mapping)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1204169/+subscriptions



[Yahoo-eng-team] [Bug 1264268] Re: check_uptodate.sh breaks when CONF.import_opt has a group name

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1264268

Title:
  check_uptodate.sh breaks when CONF.import_opt has a group name

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When the following code was used:
  import time

  from oslo.config import cfg

  from nova import exception
  from nova.openstack.common.gettextutils import _
  from nova.openstack.common import log as logging
  from nova.virt.vmwareapi import driver  # noqa
  from nova.virt.vmwareapi import vim_util
  from nova.virt.vmwareapi import vm_util

  LOG = logging.getLogger(__name__)
  CONF = cfg.CONF
  CONF.import_opt('api_retry_count', 'nova.virt.vmwareapi.driver',
  group='vmware')

  The following error appears:
  using tox.ini: /home/gkotton/nova/tox.ini
  using tox-1.6.1 from /usr/local/lib/python2.7/dist-packages/tox/__init__.pyc
  pep8 reusing: /home/gkotton/nova/.tox/pep8
/home/gkotton/nova$ /home/gkotton/nova/.tox/pep8/bin/python 
/home/gkotton/nova/setup.py --name 
  pep8 develop-inst-nodeps: /home/gkotton/nova
/home/gkotton/nova$ /home/gkotton/nova/.tox/pep8/bin/pip install -U -e 
/home/gkotton/nova --no-deps /home/gkotton/nova/.tox/pep8/log/pep8-433.log
  pep8 runtests: commands[0] | flake8
/home/gkotton/nova$ /home/gkotton/nova/.tox/pep8/bin/flake8 
  pep8 runtests: commands[1] | /home/gkotton/nova/tools/config/check_uptodate.sh
/home/gkotton/nova$ /home/gkotton/nova/tools/config/check_uptodate.sh 
  cannot import name driver
  1c1,3415
  < 2013-12-26 02:13:01.927 23264 CRITICAL nova [-] Unable to import module 
nova.console.vmrc
  ---
  > [DEFAULT]
  > 
  > #

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1264268/+subscriptions



[Yahoo-eng-team] [Bug 1254815] Re: Nova image-list throws error if image not present

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254815

Title:
  Nova image-list throws error if image not present

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  If no images have been added to glance, then nova image-list throws a 500
  error

  root@test-server:/home/administrator# nova image-list
  ERROR: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-b69c817e-e43b-456e-bc3e-1ac9211837be)

  Expected: it should return an empty list rather than a 500 Internal Server
  error

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1254815/+subscriptions



[Yahoo-eng-team] [Bug 1246189] Re: nova boot with a block-device-mapping void of a device_path fails

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246189

Title:
  nova boot with a block-device-mapping void of a device_path fails

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  From
  
https://github.com/openstack/nova/commit/3b1f7c55c10eec9d438a7d147aba1124b5ff9452:

  This patch removes the check in the API that prevented having block
  devices without 'device_name' and makes the compute manager capable of
  guessing those names.

  The patch doesn't seem to work on 'nova boot' however.

  REQ: curl -i http://IP:8774/v2/549d075f78c045e984bb03e62f6ba91c/os-
  volumes_boot -X POST -H X-Auth-Project-Id: admin -H User-Agent:
  python-novaclient -H Content-Type: application/json -H Accept:
  application/json -H X-Auth-Token: REDACTED -d '{server: {name:
  chuckle, imageRef: 61ecd9fb-58c3-42ea-9dde-65d33b7f4fce,
  block_device_mapping: [{volume_size: 5, volume_id:
  b7a47460-94a7-4bf6-8b83-99a50a223f35, delete_on_termination:
  1}], flavorRef: 9, max_count: 1, min_count: 1,
  personality: [{path: /etc/guest_info, contents:
  
W0RFRkFVTFRdCmd1ZXN0X2lkPTVjYWY4MzEyLTg5MjAtNGUwYi04NDRkLTVjM2I3NDYxOTA1ZApz\nZXJ2aWNlX3R5cGU9bXlzcWwKdGVuYW50X2lkPTU0OWQwNzVmNzhjMDQ1ZTk4NGJiMDNlNjJmNmJh\nOTFjCg==\n}]}}'

  Results in (on n-api):

  2013-10-29 22:53:09.620 DEBUG nova.api.openstack.wsgi 
[req-eaa95da5-78c8-4bf3-b5b4-7c382420b82c admin admin] Action: 'create', body: 
{server: {name: chuckle, imageRef: 
61ecd9fb-58c3-42ea-9dde-65d33b7f4fce, block_device_mapping: 
[{volume_size: 5, volume_id: b7a47460-94a7-4bf6-8b83-99a50a223f35, 
delete_on_termination: 1}], flavorRef: 9, max_count: 1, min_count: 
1, personality: [{path: /etc/guest_info, contents: 
W0RFRkFVTFRdCmd1ZXN0X2lkPTVjYWY4MzEyLTg5MjAtNGUwYi04NDRkLTVjM2I3NDYxOTA1ZApz\nZXJ2aWNlX3R5cGU9bXlzcWwKdGVuYW50X2lkPTU0OWQwNzVmNzhjMDQ1ZTk4NGJiMDNlNjJmNmJh\nOTFjCg==\n}]}}
 from (pid=25304) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:940
  2013-10-29 22:53:09.620 DEBUG nova.api.openstack.wsgi 
[req-eaa95da5-78c8-4bf3-b5b4-7c382420b82c admin admin] Calling method bound 
method Controller.create of nova.api.openstack.compute.servers.Controller 
object at 0x318ccd0 from (pid=25304) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:941
  2013-10-29 22:53:09.622 INFO nova.api.openstack.wsgi 
[req-eaa95da5-78c8-4bf3-b5b4-7c382420b82c admin admin] HTTP exception thrown: 
Block Device Mapping is Invalid: Device name empty or too long.
  2013-10-29 22:53:09.623 DEBUG nova.api.openstack.wsgi 
[req-eaa95da5-78c8-4bf3-b5b4-7c382420b82c admin admin] Returning 400 to user: 
Block Device Mapping is Invalid: Device name empty or too long. from 
(pid=25304) __call__ /opt/stack/nova/nova/api/openstack/wsgi.py:1203

  The check being performed is located at
  
https://github.com/openstack/nova/blob/f10b93b2930a0e82eb994c9d7429213f134cd955/nova/api/openstack/compute/servers.py#L833

  If the check is amended to resemble
  
https://github.com/openstack/nova/blob/308bba10f621513c2f99b0b6bc83ff5320970f78/nova/block_device.py#L117-L118,
  it seems to work.

  I'd submit a patch-set, but the test at
  
https://github.com/openstack/nova/blob/f10b93b2930a0e82eb994c9d7429213f134cd955/nova/tests/api/openstack/compute/test_servers.py#L2432-L2444
  seems to indicate this validation is/was quite purposeful, causing me
  some trepidation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246189/+subscriptions



[Yahoo-eng-team] [Bug 1236411] Re: failed volume detach at libvirt driver can set a wrong volume state

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1236411

Title:
  failed volume detach at libvirt driver can set a wrong volume state

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  In the libvirt driver's detach_volume, a DiskNotFound is raised if the
  volume is no longer attached to the guest. However, the exception is then
  caught and self.volume_api.roll_detaching(context, volume_id) is called,
  which puts the volume back 'in-use'. Then the exception is re-raised and
  the volume detach is aborted.

  This doesn't seem correct, especially if two detach messages for the
  same volume manages to get to the compute manager, the volume would
  now be fully removed from host but remain 'in-use' in cinder db.

  It would seem more appropriate to just ignore (after logging)
  DiskNotFound and continue the detach in the volume driver and
  billing.
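The proposed handling could look like this sketch; the callables stand in for the driver and cinder calls, and DiskNotFound is a local stand-in for nova's exception class:

```python
import logging

LOG = logging.getLogger(__name__)

class DiskNotFound(Exception):
    """Stand-in for nova.exception.DiskNotFound."""

def detach_volume(driver_detach, finish_detach, roll_detaching, volume_id):
    """Detach a volume, treating an already-missing disk as success.

    Only genuine failures roll the cinder state back to 'in-use'; a
    DiskNotFound means the guest already lost the disk, so we log and
    finish the detach so cinder's state ends up matching reality.
    """
    try:
        driver_detach(volume_id)
    except DiskNotFound:
        LOG.info('Volume %s already gone from guest; finishing detach',
                 volume_id)
    except Exception:
        roll_detaching(volume_id)  # genuine failure: put volume back in-use
        raise
    finish_detach(volume_id)
```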

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1236411/+subscriptions



[Yahoo-eng-team] [Bug 1268828] Re: Nova network doesn't work with neutron_url being None

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1268828

Title:
  Nova network doesn't work with neutron_url being None

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  When neutron_url is configured as None, or the neutron_url param is None,
  Nova network will throw this error:

  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi Traceback (most 
recent call last):
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py, line 992, in 
_process_stack
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi action_result 
= self.dispatch(meth, request, action_args)
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py, line 1073, in 
dispatch
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi return 
method(req=request, **action_args)
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/contrib/os_tenant_networks.py,
 line 99, in index
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi networks = 
self.network_api.get_all(context)
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py, line 673, in 
get_all
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi networks = 
client.list_networks().get('networks')
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 108, in 
with_params
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi ret = 
self.function(instance, *args, **kwargs)
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 325, in 
list_networks
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi **_params)
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 1197, in 
list
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi for r in 
self._pagination(collection, path, **params):
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 1210, in 
_pagination
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi res = 
self.get(path, params=params)
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 1183, in 
get
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi 
headers=headers, params=params)
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 1168, in 
retry_request
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi 
headers=headers, params=params)
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.6/site-packages/neutronclient/v2_0/client.py, line 1097, in 
do_request
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi 
self.httpclient.authenticate_and_fetch_endpoint_url()
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.6/site-packages/neutronclient/client.py, line 179, in 
authenticate_and_fetch_endpoint_url
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi 
self.endpoint_url = self._get_endpoint_url()
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi   File 
/usr/lib/python2.6/site-packages/neutronclient/client.py, line 253, in 
_get_endpoint_url
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi url = 
self.auth_url + '/tokens/%s/endpoints' % self.auth_token
  2014-01-13 03:37:23.121 18582 TRACE nova.api.openstack.wsgi TypeError: 
unsupported operand type(s) for +: 'NoneType' and 'str'
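The failing line concatenates self.auth_url with a string, so a None auth_url is what produces this TypeError. A hedged sketch of an early guard (the helper name is illustrative, not the neutronclient API):

```python
def endpoint_url(auth_url, auth_token):
    """Build the endpoints URL, failing early when auth_url is unset.

    Mirrors the failing line, but raises a clear configuration error
    instead of a TypeError when auth_url is None or empty.
    """
    if not auth_url:
        raise ValueError(
            'auth_url is not configured; check neutron_url and the '
            'neutron auth settings before calling the client')
    return auth_url + '/tokens/%s/endpoints' % auth_token
```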

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1268828/+subscriptions



[Yahoo-eng-team] [Bug 1050979] Re: deleting instances prevent security group deletion

2014-09-17 Thread Sean Dague
I'm going to assume vish's bug fixes a bunch of this.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1050979

Title:
  deleting instances prevent security group deletion

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  After bug 938853, bug 817872 have been fixed, it's not possible to
  delete a security group if some instances are still using it.

  This can be problematic in case of compute host failures. If such a node
  goes down, its instances will remain in the database. The only thing that
  the user can do is to schedule their deletion (task_state ->
  deleting). This will remove them once the compute service is available
  again, but until then security group removal will not work.

  I propose a change that should work for iptables-based deployment, but
  needs some review for other backends.

  It should be safe to drop the security group association when the instance is 
marked as deleting. The host does not need knowledge of the specific group to 
cleanup iptables, since every element related to the instance will be in its 
own chain. If the instance is scheduled for deletion it doesn't need (and may 
not be able) to receive notifications about other hosts in its security group.
  I can't see any reason to keep the instance - security group connection 
once deletion is scheduled.

  (This may be a separate class of bugs, rather than just a security
  groups specific issue. Deleting other elements connected to instance
  in deleting state could be verified.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1050979/+subscriptions



[Yahoo-eng-team] [Bug 1208364] Re: The compute service seems not been notified/casted after long run

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1208364

Title:
  The compute service seems not been notified/casted after long run

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The tester was running a workload of around 1000 VMs for about 96 hours. The 
first 2 days, everything was fine, all VMs deployed were ACTIVE. Starting on 
the 3rd day, some of the VMs deployed (about 3%) were stuck in the BUILD state. 
They never changed to ACTIVE or ERROR. He checked these VMs; most of them (all 
but 2) were scheduled to one host, cn24. When he checked cn24, its 
quantum and compute services were both up and running.
  On the host side, the compute logs don't have any entries regarding this VM, 
it looks like the host never got the notification of spawning this VM. Somehow, 
there seems to be a disconnection between the controller and host, although the 
host is showing up and running from the controller.
  After he restarted the network and compute services on the host, subsequent 
VMs deployed to this host were ACTIVE. So it seems the compute service on 
the host has some problem: although it shows as running, it 
isn't actually fully running.
  For the experts here, please share your opinions, thanks in advance!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1208364/+subscriptions



[Yahoo-eng-team] [Bug 1370492] [NEW] calling curl HEAD ops time out on /v3/auth/tokens

2014-09-17 Thread Mike Abrams
Public bug reported:

the following command works --
'curl -H x-auth-token:$TOKEN -H x-subject-token:$TOKEN 
http://localhost:35357/v3/auth/tokens'

but this command does not work.  It does not return (hangs indefinitely) --
'curl -X HEAD -H x-auth-token:$TOKEN -H x-subject-token:$TOKEN 
http://localhost:35357/v3/auth/tokens'

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1370492

Title:
  calling curl HEAD ops time out on /v3/auth/tokens

Status in OpenStack Identity (Keystone):
  New

Bug description:
  the following command works --
  'curl -H x-auth-token:$TOKEN -H x-subject-token:$TOKEN 
http://localhost:35357/v3/auth/tokens'

  but this command does not work; it hangs indefinitely and never returns --
  'curl -X HEAD -H x-auth-token:$TOKEN -H x-subject-token:$TOKEN 
http://localhost:35357/v3/auth/tokens'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1370492/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257420] Re: boot instance fails, libvirt unable to allocate memory

2014-09-17 Thread Sean Dague
I think you are legitimately just running out of memory. So... shrug?

** Changed in: nova
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257420

Title:
  boot instance fails, libvirt unable to allocate memory

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Intermittent failures trying to boot an instance using devstack/master
  on precise VM.  In most cases deleting the failed instance and
  retrying the boot command seems to work.

  2013-12-03 11:28:24.514 DEBUG nova.compute.manager 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] Re-scheduling run_instance: attempt 1 
from (pid=5873) _reschedule /opt/stack/nova/nova/compute/mana
  ger.py:1167
  2013-12-03 11:28:24.514 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Making synchronous call on 
conductor ... from (pid=5873) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:553
  2013-12-03 11:28:24.514 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] MSG_ID is 
ea9adfa2f6564cd193d6baec7bf7f8a3 from (pid=5873) multicall 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:556
  2013-12-03 11:28:24.515 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] UNIQUE_ID is 
33300a17273f4529bd36156c4406ada3. from (pid=5873) _add_unique_id 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:341
  2013-12-03 11:28:24.627 DEBUG nova.openstack.common.lockutils 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Got semaphore 
compute_resources from (pid=5873) lock 
/opt/stack/nova/nova/openstack/common/lockutils.py:167
  2013-12-03 11:28:24.628 DEBUG nova.openstack.common.lockutils 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Got semaphore / lock 
update_usage from (pid=5873) inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:247
  2013-12-03 11:28:24.628 DEBUG nova.openstack.common.lockutils 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Semaphore / lock released 
update_usage from (pid=5873) inner 
/opt/stack/nova/nova/openstack/common/lockutils.py:251
  2013-12-03 11:28:24.630 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] Making asynchronous cast 
on scheduler... from (pid=5873) cast 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:582
  2013-12-03 11:28:24.630 DEBUG nova.openstack.common.rpc.amqp 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] UNIQUE_ID is 
501ebe16dd814daaa37c648f8f9848df. from (pid=5873) _add_unique_id 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:341
  2013-12-03 11:28:24.642 ERROR nova.compute.manager 
[req-2f0c2c13-c726-4738-a541-60dbcf6e5ea4 demo demo] [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] Error: internal error process exited 
while connecting to monitor: char device redirected to /dev/pt
  s/30
  Failed to allocate 536870912 B: Cannot allocate memory

  2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] Traceback (most recent call last):
  2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
/opt/stack/nova/nova/compute/manager.py, line 1049, in _build_instance
  2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] set_access_ip=set_access_ip)
  2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
/opt/stack/nova/nova/compute/manager.py, line 1453, in _spawn
  2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
/opt/stack/nova/nova/compute/manager.py, line 1450, in _spawn
  2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] block_device_info)
  2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 2161, in spawn
  2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] block_device_info)
  2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236]   File 
/opt/stack/nova/nova/virt/libvirt/driver.py, line 3395, in 
_create_domain_and_network
  2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 
a186938b-f4a1-4ff7-8681-9243bb1f4236] domain = self._create_domain(xml, 
instance=instance, power_on=power_on)
  2013-12-03 11:28:24.642 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1218494] Re: Nova security policies are being ignored

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218494

Title:
  Nova security policies are being ignored

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I have a multi-node Openstack Grizzly setup: 1 front-end network node
  (3 nics) and 2 compute nodes (3 nics). Everything seems to work
  perfectly: VM's have external access, I can ping the VM's from the
  virtual router, VM's can communicate between themselves...

  However, I am unable to ping the VM's from any compute node to the
  VM's. I have added the virtual router to the routing table, I changed
  the default security permissions...

  practicas@lemarq:~$ route
  Kernel IP routing table
  Destination     Gateway         Genmask         Flags Metric Ref Use Iface
  default         192.168.0.1     0.0.0.0         UG    0      0   0   br-ex
  10.5.5.0        192.168.0.100   255.255.255.0   UG    0      0   0   br-ex   # VIRTUAL ROUTER
  192.168.0.0     *               255.255.255.0   U     0      0   0   br-ex
  192.168.100.0   *               255.255.255.0   U     1      0   0   eth1

  practicas@lemarq:~$ nova secgroup-list-rules default
  +-------------+-----------+---------+-----------+--------------+
  | IP Protocol | From Port | To Port | IP Range  | Source Group |
  +-------------+-----------+---------+-----------+--------------+
  | icmp        | -1        | -1      | 0.0.0.0/0 |              |
  | tcp         | 1         | 65535   | 0.0.0.0/0 |              |
  +-------------+-----------+---------+-----------+--------------+

  
  To show that this is a problem with nova security permissions, I did the 
following experiment. I tried to ping from the compute node 192.168.0.204 to a 
VM at 10.5.5.4. The VM's interface on br-int (in the compute node) is 
qvoc55f44c6-af. I ran tcpdump on qvoc55f44c6-af and saw the ICMP packet. 
However, inside the VM, I ran tcpdump on eth0 and no sign of the ICMP packet 
appeared. If I ping from the virtual router this does not happen.  
Thank you in advance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1218494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250496] Re: Need to sync the instance on controller db with the VM on compute node(kvm)

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1250496

Title:
  Need to sync the instance on controller db with the VM on compute
  node(kvm)

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  It seems as if this works as designed, or it is a limitation. (Any links
  for the design/limitation?)

  Steps to recreate the bug (havana):
  1. Boot an instance (before it becomes active, do step 2).
  2. Stop the /etc/init.d/libvirtd service; the instance may then be in 
scheduling or spawning.
  3. The instance will stay in scheduling or spawning forever.
  4. It seems there is no mechanism (in nova now) to update the instance state 
with the VM state on KVM. And in fact, there is no way to get the current VM 
state on KVM.

  controller (instance in controller DB) <--> compute (nova-compute) --X-- KVM (VM on kvm)
  (X : the connection between compute and KVM is broken)

  5. At this point, we can only start the /etc/init.d/libvirtd service and 
restart nova-compute.
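
  For reference, nova's compute manager does run a periodic
_sync_power_states task that reconciles the DB view with the hypervisor,
but instances that still have a task_state (scheduling/spawning) are
skipped, which matches what is described above. The core comparison can be
sketched as follows (illustrative names, not nova's actual API):

```python
# Hedged sketch of the reconciliation idea behind a power-state sync
# task. Names and data shapes here are illustrative, not nova's API.
RUNNING, SHUTDOWN, NOSTATE = "running", "shutdown", "nostate"

def sync_power_states(db_instances, hypervisor_states):
    """Compare the DB view with what the hypervisor reports and return
    (instance id, recorded state, actual state) for every mismatch."""
    out_of_sync = []
    for inst_id, db_state in db_instances.items():
        # If the hypervisor no longer knows the domain, treat it as NOSTATE.
        actual = hypervisor_states.get(inst_id, NOSTATE)
        if actual != db_state:
            out_of_sync.append((inst_id, db_state, actual))
    return out_of_sync

drift = sync_power_states(
    {"vm-1": RUNNING, "vm-2": RUNNING, "vm-3": SHUTDOWN},
    {"vm-1": RUNNING, "vm-2": SHUTDOWN},
)
# vm-2 drifted, vm-3 vanished from the hypervisor entirely:
print(drift)
```

  The gap the reporter hits is precisely that instances mid-spawn are
excluded from this kind of reconciliation, so a libvirtd outage at the
wrong moment leaves them stranded.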

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1250496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1221899] Re: test_resize_server_from_auto_to_manual: server failed to reach VERIFY_RESIZE status within the required time

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1221899

Title:
  test_resize_server_from_auto_to_manual: server failed to reach
  VERIFY_RESIZE status within the required time

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  2013-09-06 19:24:49.608 | Traceback (most recent call last):
  2013-09-06 19:24:49.608 |   File 
tempest/api/compute/servers/test_disk_config.py, line 114, in 
test_resize_server_from_auto_to_manual
  2013-09-06 19:24:49.609 | 
self.client.wait_for_server_status(server['id'], 'VERIFY_RESIZE')
  2013-09-06 19:24:49.609 |   File 
tempest/services/compute/xml/servers_client.py, line 331, in 
wait_for_server_status
  2013-09-06 19:24:49.609 | raise exceptions.TimeoutException(message)
  2013-09-06 19:24:49.609 | TimeoutException: Request timed out
  2013-09-06 19:24:49.609 | Details: Server 
dabbdc8d-3194-4e88-bc9c-c897a1fe5f78 failed to reach VERIFY_RESIZE status 
within the required time (400 s). Current status: RESIZE.

  
  
http://logs.openstack.org/48/45248/2/check/gate-tempest-devstack-vm-full/66d555c/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1221899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267040] Re: v3 API list deleted servers raise 404

2014-09-17 Thread Sean Dague
We fixed this in icehouse IIRC. It was an inner loop issue.

** Changed in: nova
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1267040

Title:
  v3 API list deleted servers raise 404

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The nova API supports listing deleted servers as an admin user. The API
  calls look like this:

  v3: GET http://openstack.org:8774/v3/servers/detail?deleted=True
  v2: GET http://openstack.org:8774/v2/{tenant}/servers/detail?deleted=True

  The v2 API works very well, but v3 returns a 404.
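
  The traceback below points at a lazy-load pitfall: the v3 PCI extension
touches instance.pci_devices, which was not fetched with the listing, and
the per-instance DB lookup that lazy-loading triggers excludes deleted
rows. A toy sketch of that pitfall (illustrative names only, not nova's
actual object model):

```python
# Toy sketch of the lazy-load pitfall (illustrative names only, not
# nova's actual object model).
class InstanceNotFound(Exception):
    pass

# Pretend DB: the listing query includes deleted rows, but the
# per-instance lookup that lazy-loading performs filters them out.
DB = {
    "60ec98b5": {"deleted": True,  "pci_devices": []},  # soft-deleted
    "a186938b": {"deleted": False, "pci_devices": []},
}

class Instance:
    def __init__(self, uuid):
        self.uuid = uuid  # only the uuid was fetched with the list

    def __getattr__(self, name):
        # Touching an unfetched field triggers a fresh DB lookup that
        # excludes deleted rows -- so it 404s for a deleted instance.
        row = DB.get(self.uuid)
        if row is None or row["deleted"]:
            raise InstanceNotFound(self.uuid)
        return row[name]

print(Instance("a186938b").pci_devices)   # live instance loads fine
try:
    Instance("60ec98b5").pci_devices      # deleted instance blows up
except InstanceNotFound as err:
    print("lazy load failed:", err)
```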

  The traceback from nova-api is shown below; I suspect there is something 
wrong with the instance object:
  2014-01-08 09:42:15.145 ERROR nova.api.openstack 
[req-dd73dfe5-96ab-488a-8c31-98b5b063ed95 admin admin] Caught error: Instance 
60ec98b5-7496-4e04-aebb-3f951e660295 could not be found.
  2014-01-08 09:42:15.145 TRACE nova.api.openstack Traceback (most recent call 
last):
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/__init__.py, line 121, in __call__
  2014-01-08 09:42:15.145 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1296, in send
  2014-01-08 09:42:15.145 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1260, in 
call_application
  2014-01-08 09:42:15.145 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-01-08 09:42:15.145 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 581, in __call__
  2014-01-08 09:42:15.145 TRACE nova.api.openstack return self.app(env, 
start_response)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-01-08 09:42:15.145 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-01-08 09:42:15.145 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
  2014-01-08 09:42:15.145 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-01-08 09:42:15.145 TRACE nova.api.openstack return resp(environ, 
start_response)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2014-01-08 09:42:15.145 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2014-01-08 09:42:15.145 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 930, in __call__
  2014-01-08 09:42:15.145 TRACE nova.api.openstack content_type, body, 
accept)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 1018, in _process_stack
  2014-01-08 09:42:15.145 TRACE nova.api.openstack request, action_args)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/wsgi.py, line 900, in 
post_process_extensions
  2014-01-08 09:42:15.145 TRACE nova.api.openstack **action_args)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/compute/plugins/v3/pci.py, line 79, in 
detail
  2014-01-08 09:42:15.145 TRACE nova.api.openstack 
self._extend_server(server, instance)
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/api/openstack/compute/plugins/v3/pci.py, line 58, in 
_extend_server
  2014-01-08 09:42:15.145 TRACE nova.api.openstack for dev in 
instance.pci_devices:
  2014-01-08 09:42:15.145 TRACE nova.api.openstack   File 
/opt/stack/nova/nova/objects/base.py, line 63, in getter
  2014-01-08 09:42:15.145 TRACE nova.api.openstack self.obj_load_attr(name)
  2014-01-08 09:42:15.145 TRACE 

[Yahoo-eng-team] [Bug 1226393] Re: hard_reboot fails to start instance when libvirt_images_type=lvm and using config_drive

2014-09-17 Thread Sean Dague
Long incomplete bug

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1226393

Title:
  hard_reboot fails to start instance when libvirt_images_type=lvm and
  using config_drive

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Even when libvirt_images_type is set to lvm, the config drive is still
  created as a file-backed image.

  When an instance is power-cycled, nova is unable to launch the instance
  because it expects the config drive path to be an LVM path, which it is
  not.

  The error message reported in nova-compute.log:

  2013-09-16 20:52:30.429 ERROR nova.compute.manager 
[req-e2259ddf-1af5-4b05-bdac-6c7b735f3082 5bbbcc1c565546c38be9c47f89c90335 
21a619da59ebbab0e1d998ea035c3469] [instance: 
ecfa7090-758c-4024-b5e0-410c945e1772] Cannot reboot instance: Unexpected error 
while running command.
  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf lvs -o lv_size 
--noheadings --units b --nosuffix 
/dev/nova-volume/instance-0296_/var/lib/nova/instances/ecfa7090-758c-4024-b5e0-410c945e1772/disk.config
  Exit code: 5
  Stdout: ''
  Stderr: ' 
nova-volume/instance-0296_/var/lib/nova/instances/ecfa7090-758c-4024-b5e0-410c945e1772/disk.config:
 Invalid path for Logical Volume\n'
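
A plausible guard (an illustrative sketch, not nova's actual fix) is to only
hand real logical-volume paths to `lvs` and to stat file-backed images, such
as the config drive, directly:

```python
# Hedged sketch (illustrative, not nova's actual fix): only hand real
# logical-volume paths to `lvs`; stat file-backed images -- such as a
# config drive under /var/lib/nova/instances -- directly.
import os
import tempfile

def disk_size_bytes(path):
    if path.startswith("/dev/"):
        # Real code would shell out here, e.g.:
        #   lvs -o lv_size --noheadings --units b --nosuffix <path>
        raise NotImplementedError("query LVM for %s" % path)
    # File-backed image: a plain stat is enough.
    return os.path.getsize(path)

# Demo with a throwaway file standing in for disk.config:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * 1024)
    name = f.name
size = disk_size_bytes(name)
print(size)  # → 1024
os.unlink(name)
```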

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1226393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1258856] Re: tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_stop_start_server fails with quota error

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1258856

Title:
  
tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_stop_start_server
  fails with quota error

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  See: http://logs.openstack.org/10/56710/10/check/check-tempest-dsvm-
  full/187705b/console.html.gz

  2013-12-07 01:35:54.908 | 
==
  2013-12-07 01:35:54.909 | FAIL: 
tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_stop_start_server[gate]
  2013-12-07 01:35:54.909 | 
tempest.api.compute.servers.test_server_actions.ServerActionsTestXML.test_stop_start_server[gate]
  2013-12-07 01:35:54.909 | 
--
  2013-12-07 01:35:54.909 | _StringException: Empty attachments:
  2013-12-07 01:35:54.909 |   stderr
  2013-12-07 01:35:54.909 |   stdout
  2013-12-07 01:35:54.910 | 
  2013-12-07 01:35:54.910 | pythonlogging:'': {{{
  2013-12-07 01:35:54.910 | 2013-12-07 01:15:59,684 Request: GET 
http://127.0.0.1:8774/v2/04d6f70250e94a5e94e506386812bba3/servers/562fe7bb-f952-4cc3-af24-5d612dc3f022
  2013-12-07 01:35:54.910 | 2013-12-07 01:15:59,685 Request Headers: 
{'Content-Type': 'application/xml', 'Accept': 'application/xml', 
'X-Auth-Token': 'Token omitted'}
  2013-12-07 01:35:54.910 | 2013-12-07 01:15:59,745 Response Status: 404
  2013-12-07 01:35:54.910 | 2013-12-07 01:15:59,745 Nova request id: 
req-ee502842-f694-4b3f-b696-0b0d1a2ce925
  2013-12-07 01:35:54.911 | 2013-12-07 01:15:59,745 Response Headers: 
{'content-length': '137', 'date': 'Sat, 07 Dec 2013 01:15:59 GMT', 
'content-type': 'application/xml; charset=UTF-8', 'connection': 'close'}
  2013-12-07 01:35:54.911 | 2013-12-07 01:15:59,746 Response Body: 
<itemNotFound code="404" xmlns="http://docs.openstack.org/compute/api/v1.1">
<message>Instance could not be found</message></itemNotFound>
  2013-12-07 01:35:54.911 | 2013-12-07 01:15:59,746 Request: DELETE 
http://127.0.0.1:8774/v2/04d6f70250e94a5e94e506386812bba3/servers/562fe7bb-f952-4cc3-af24-5d612dc3f022
  2013-12-07 01:35:54.911 | 2013-12-07 01:15:59,746 Request Headers: 
{'X-Auth-Token': 'Token omitted'}
  2013-12-07 01:35:54.911 | 2013-12-07 01:15:59,801 Response Status: 404
  2013-12-07 01:35:54.911 | 2013-12-07 01:15:59,802 Nova request id: 
req-703c4944-902f-4a14-a27c-0ae724ce66b1
  2013-12-07 01:35:54.911 | 2013-12-07 01:15:59,802 Response Headers: 
{'content-length': '73', 'date': 'Sat, 07 Dec 2013 01:15:59 GMT', 
'content-type': 'application/json; charset=UTF-8', 'connection': 'close'}
  2013-12-07 01:35:54.912 | 2013-12-07 01:15:59,802 Response Body: 
{"itemNotFound": {"message": "Instance could not be found", "code": 404}}
  2013-12-07 01:35:54.912 | 2013-12-07 01:15:59,803 Object not found
  2013-12-07 01:35:54.912 | Details: {"itemNotFound": {"message": "Instance 
could not be found", "code": 404}}
  2013-12-07 01:35:54.912 | 2013-12-07 01:15:59.803 5212 TRACE 
tempest.api.compute.base Traceback (most recent call last):
  2013-12-07 01:35:54.912 | 2013-12-07 01:15:59.803 5212 TRACE 
tempest.api.compute.base   File tempest/api/compute/base.py, line 188, in 
rebuild_server
  2013-12-07 01:35:54.912 | 2013-12-07 01:15:59.803 5212 TRACE 
tempest.api.compute.base cls.servers_client.delete_server(server_id)
  2013-12-07 01:35:54.912 | 2013-12-07 01:15:59.803 5212 TRACE 
tempest.api.compute.base   File 
tempest/services/compute/xml/servers_client.py, line 238, in delete_server
  2013-12-07 01:35:54.913 | 2013-12-07 01:15:59.803 5212 TRACE 
tempest.api.compute.base return self.delete(servers/%s % str(server_id))
  2013-12-07 01:35:54.913 | 2013-12-07 01:15:59.803 5212 TRACE 
tempest.api.compute.base   File tempest/common/rest_client.py, line 308, in 
delete
  2013-12-07 01:35:54.913 | 2013-12-07 01:15:59.803 5212 TRACE 
tempest.api.compute.base return self.request('DELETE', url, headers)
  2013-12-07 01:35:54.913 | 2013-12-07 01:15:59.803 5212 TRACE 
tempest.api.compute.base   File tempest/common/rest_client.py, line 436, in 
request
  2013-12-07 01:35:54.913 | 2013-12-07 01:15:59.803 5212 TRACE 
tempest.api.compute.base resp, resp_body)
  2013-12-07 01:35:54.913 | 2013-12-07 01:15:59.803 5212 TRACE 
tempest.api.compute.base   File tempest/common/rest_client.py, line 481, in 
_error_checker
  2013-12-07 01:35:54.914 | 2013-12-07 01:15:59.803 5212 TRACE 
tempest.api.compute.base raise exceptions.NotFound(resp_body)
  2013-12-07 01:35:54.914 | 2013-12-07 01:15:59.803 5212 TRACE 
tempest.api.compute.base NotFound: Object not found
  2013-12-07 01:35:54.914 | 2013-12-07 01:15:59.803 5212 TRACE 
tempest.api.compute.base Details: {"itemNotFound": {"message": "Instance could 
not be found", "code": 404}}
  2013-12-07 01:35:54.914 | 

[Yahoo-eng-team] [Bug 1370496] [NEW] Failed to establish authenticated ssh connection to cirros - error: [Errno 113] No route to host

2014-09-17 Thread Matt Riedemann
Public bug reported:

Saw this in the gate today, I think it's separate from bug 1349617, at
least the error is different and the hits in logstash are different.
The root cause might be the same.

This is in both nova-network and neutron jobs.

http://logs.openstack.org/01/116001/3/gate/gate-grenade-dsvm-partial-
ncpu/3dce5d7/logs/tempest.txt.gz#_2014-09-16_22_32_14_888

2014-09-16 22:32:14.888 6636 ERROR tempest.common.ssh [-] Failed to establish 
authenticated ssh connection to cirros@172.24.4.4 after 15 attempts
2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh Traceback (most recent 
call last):
2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File 
tempest/common/ssh.py, line 76, in _get_ssh_connection
2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh 
timeout=self.channel_timeout, pkey=self.pkey)
2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File 
/usr/local/lib/python2.7/dist-packages/paramiko/client.py, line 236, in 
connect
2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh 
retry_on_signal(lambda: sock.connect(addr))
2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File 
/usr/local/lib/python2.7/dist-packages/paramiko/util.py, line 278, in 
retry_on_signal
2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh return function()
2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File 
/usr/local/lib/python2.7/dist-packages/paramiko/client.py, line 236, in 
lambda
2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh 
retry_on_signal(lambda: sock.connect(addr))
2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File 
/usr/lib/python2.7/socket.py, line 224, in meth
2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh return 
getattr(self._sock,name)(*args)
2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh error: [Errno 113] No 
route to host
2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh 

message:"_get_ssh_connection" AND message:"error: [Errno 113] No route
to host" AND tags:"tempest.txt"

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiX2dldF9zc2hfY29ubmVjdGlvblwiIEFORCBtZXNzYWdlOlwiZXJyb3I6IFtFcnJubyAxMTNdIE5vIHJvdXRlIHRvIGhvc3RcIiBBTkQgdGFnczpcInRlbXBlc3QudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wOS0xMFQxMjo0ODowMyswMDowMCIsInRvIjoiMjAxNC0wOS0xN1QxMjo0ODowMyswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDEwOTU4MTU3MTY3LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

63 hits in 10 days, check and gate, all failures.
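
EHOSTUNREACH is raised synchronously by connect() (unlike a silent SYN
drop, which has to time out), so the 15 attempts can burn through quickly.
A retry helper that treats "No route to host" as transient and backs off
looks roughly like this (a sketch, not tempest's actual code):

```python
# Sketch of a retry loop that treats EHOSTUNREACH ("No route to host")
# like any other transient connect failure, with an optional backoff.
# This is illustrative, not tempest's actual implementation.
import socket
import time

def connect_with_retry(addr, attempts=15, delay=0.0):
    last_err = None
    for _ in range(attempts):
        sock = socket.socket()
        sock.settimeout(1.0)
        try:
            sock.connect(addr)
            return sock
        except OSError as err:        # includes errno.EHOSTUNREACH
            last_err = err
            sock.close()
            if delay:
                time.sleep(delay)     # back off before the next try
    raise last_err

# Demo against a local listener (standing in for the cirros guest):
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
conn = connect_with_retry(srv.getsockname())
print("connected:", conn is not None)  # → connected: True
conn.close()
srv.close()
```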

** Affects: nova
 Importance: High
 Status: New


** Tags: gate-failure network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370496

Title:
  Failed to establish authenticated ssh connection to cirros - error:
  [Errno 113] No route to host

Status in OpenStack Compute (Nova):
  New

Bug description:
  Saw this in the gate today, I think it's separate from bug 1349617, at
  least the error is different and the hits in logstash are different.
  The root cause might be the same.

  This is in both nova-network and neutron jobs.

  http://logs.openstack.org/01/116001/3/gate/gate-grenade-dsvm-partial-
  ncpu/3dce5d7/logs/tempest.txt.gz#_2014-09-16_22_32_14_888

  2014-09-16 22:32:14.888 6636 ERROR tempest.common.ssh [-] Failed to establish 
authenticated ssh connection to cirros@172.24.4.4 after 15 attempts
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh Traceback (most recent 
call last):
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File 
tempest/common/ssh.py, line 76, in _get_ssh_connection
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh 
timeout=self.channel_timeout, pkey=self.pkey)
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File 
/usr/local/lib/python2.7/dist-packages/paramiko/client.py, line 236, in 
connect
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh 
retry_on_signal(lambda: sock.connect(addr))
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File 
/usr/local/lib/python2.7/dist-packages/paramiko/util.py, line 278, in 
retry_on_signal
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh return function()
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File 
/usr/local/lib/python2.7/dist-packages/paramiko/client.py, line 236, in 
lambda
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh 
retry_on_signal(lambda: sock.connect(addr))
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh   File 
/usr/lib/python2.7/socket.py, line 224, in meth
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh return 
getattr(self._sock,name)(*args)
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh error: [Errno 113] No 
route to host
  2014-09-16 22:32:14.888 6636 TRACE tempest.common.ssh 

  message:"_get_ssh_connection" AND message:"error: [Errno 113] No route
  to host" AND tags:"tempest.txt"

 

[Yahoo-eng-team] [Bug 1285288] Re: gate-nova-docs No module named ....

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1285288

Title:
  gate-nova-docs  No module named 

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Using openstack theme from 
/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/oslosphinx/theme
  2014-02-26 09:46:40.661 | loading intersphinx inventory from 
http://docs.python.org/objects.inv...
  2014-02-26 09:46:42.430 | loading intersphinx inventory from 
http://swift.openstack.org/objects.inv...
  2014-02-26 09:46:42.778 | building [html]: all source files
  2014-02-26 09:46:42.778 | updating environment: 48 added, 0 changed, 0 removed
  2014-02-26 09:46:42.779 | reading sources... [  2%] 
devref/addmethod.openstackapi
  2014-02-26 09:46:42.795 | reading sources... [  4%] devref/aggregates
  2014-02-26 09:46:42.815 | reading sources... [  6%] devref/api
  2014-02-26 09:46:42.831 | Traceback (most recent call last):
  2014-02-26 09:46:42.831 |   File 
/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/sphinx/ext/autodoc.py,
 line 321, in import_object
  2014-02-26 09:46:42.831 | __import__(self.modname)
  2014-02-26 09:46:42.831 | ImportError: No module named cloud
  2014-02-26 09:46:43.754 | Traceback (most recent call last):
  2014-02-26 09:46:43.754 |   File 
/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/sphinx/ext/autodoc.py,
 line 321, in import_object
  2014-02-26 09:46:43.754 | __import__(self.modname)
  2014-02-26 09:46:43.754 | ImportError: No module named backup_schedules
  2014-02-26 09:46:43.757 | Traceback (most recent call last):
  2014-02-26 09:46:43.757 |   File 
/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/sphinx/ext/autodoc.py,
 line 321, in import_object
  2014-02-26 09:46:43.757 | __import__(self.modname)
  2014-02-26 09:46:43.757 | ImportError: No module named faults
  2014-02-26 09:46:43.760 | Traceback (most recent call last):
  2014-02-26 09:46:43.761 |   File 
/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/sphinx/ext/autodoc.py,
 line 321, in import_object
  2014-02-26 09:46:43.761 | __import__(self.modname)
  2014-02-26 09:46:43.761 | ImportError: No module named flavors
  2014-02-26 09:46:43.764 | Traceback (most recent call last):
  2014-02-26 09:46:4

  Sample at http://logs.openstack.org/17/66917/2/check/gate-nova-
  docs/6a4637f/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1285288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291048] Re: Incorrect number of security groups in Project Overview after restacking

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291048

Title:
  Incorrect number of security groups in Project Overview after
  restacking

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The Security Groups count for each project after restacking is reported as
  0 ("default": Used 0 of 10) even though every project has the "default"
  security group. So it should be reported as 1 (Used 1 of 10).

  Steps to reproduce:
  1. Restack
  2. Login to Horizon
  3. Go to the Project > Overview panel
  4. You'll notice that Security Groups are reported 0 of 10 -- which is 
incorrect
  5. Now go to Access & Security
  6. Go back to Overview
  7. You'll notice that Security Groups are now reported 1 of 10 -- which 
is correct

  NOTE: Neutron should not be enabled to reproduce this bug.

  This bug is related to bug #1271381, which fixes the reported number
  for most cases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257799] Re: tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_with_existing_server_name fail in gate with BuildExceptionError

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257799

Title:
  
tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_with_existing_server_name
  fail in gate with BuildExceptionError

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  
tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_with_existing_server_name
  fail in gate with BuildExceptionError

  See: http://logs.openstack.org/79/59879/2/gate/gate-tempest-dsvm-
  postgres-full/ceb9759/console.html

  2013-12-04 06:11:28.345 | 
==
  2013-12-04 06:11:28.372 | FAIL: 
tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_with_existing_server_name[gate]
  2013-12-04 06:11:28.372 | 
tempest.api.compute.servers.test_servers.ServersTestJSON.test_create_with_existing_server_name[gate]
  2013-12-04 06:11:28.373 | 
--
  2013-12-04 06:11:28.373 | _StringException: Empty attachments:
  2013-12-04 06:11:28.373 |   stderr
  2013-12-04 06:11:28.374 |   stdout
  2013-12-04 06:11:28.374 | 
  2013-12-04 06:11:28.374 | pythonlogging:'': {{{
  2013-12-04 06:11:28.374 | 2013-12-04 05:54:55,289 Request: POST 
http://127.0.0.1:8774/v2/6ddc4dd2e1bc4683bfb199e225f8c9e9/servers
  2013-12-04 06:11:28.375 | 2013-12-04 05:54:55,289 Request Headers: 
{'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': 'Token omitted'}
  2013-12-04 06:11:28.375 | 2013-12-04 05:54:55,290 Request Body: {server: 
{flavorRef: 42, name: server-tempest-2040253626, imageRef: 
465be8b0-dd45-47c5-91d4-efb628aa375e}}
  2013-12-04 06:11:28.375 | 2013-12-04 05:54:55,594 Response Status: 202
  2013-12-04 06:11:28.376 | 2013-12-04 05:54:55,594 Nova request id: 
req-57892045-1ed9-4f17-b96e-0babe4bebaf9
  2013-12-04 06:11:28.376 | 2013-12-04 05:54:55,594 Response Headers: 
{'content-length': '434', 'location': 
'http://127.0.0.1:8774/v2/6ddc4dd2e1bc4683bfb199e225f8c9e9/servers/3abd4be9-b3dc-4b87-9cf5-f5b597173cac',
 'date': 'Wed, 04 Dec 2013 05:54:55 GMT', 'content-type': 'application/json', 
'connection': 'close'}
  2013-12-04 06:11:28.376 | 2013-12-04 05:54:55,594 Response Body: {server: 
{security_groups: [{name: default}], OS-DCF:diskConfig: MANUAL, id: 
3abd4be9-b3dc-4b87-9cf5-f5b597173cac, links: [{href: 
http://127.0.0.1:8774/v2/6ddc4dd2e1bc4683bfb199e225f8c9e9/servers/3abd4be9-b3dc-4b87-9cf5-f5b597173cac;,
 rel: self}, {href: 
http://127.0.0.1:8774/6ddc4dd2e1bc4683bfb199e225f8c9e9/servers/3abd4be9-b3dc-4b87-9cf5-f5b597173cac;,
 rel: bookmark}], adminPass: 3F5iaFVxr8Pi}}
  2013-12-04 06:11:28.377 | 2013-12-04 05:54:55,594 Request: GET 
http://127.0.0.1:8774/v2/6ddc4dd2e1bc4683bfb199e225f8c9e9/servers/3abd4be9-b3dc-4b87-9cf5-f5b597173cac
  2013-12-04 06:11:28.377 | 2013-12-04 05:54:55,594 Request Headers: 
{'X-Auth-Token': 'Token omitted'}
  2013-12-04 06:11:28.377 | 2013-12-04 05:54:55,706 Response Status: 200
  2013-12-04 06:11:28.378 | 2013-12-04 05:54:55,706 Nova request id: 
req-85e0828c-baad-432a-b28f-9433b293eb5c
  2013-12-04 06:11:28.378 | 2013-12-04 05:54:55,707 Response Headers: 
{'content-length': '1347', 'content-location': 
u'http://127.0.0.1:8774/v2/6ddc4dd2e1bc4683bfb199e225f8c9e9/servers/3abd4be9-b3dc-4b87-9cf5-f5b597173cac',
 'date': 'Wed, 04 Dec 2013 05:54:55 GMT', 'content-type': 'application/json', 
'connection': 'close'}
  2013-12-04 06:11:28.378 | 2013-12-04 05:54:55,707 Response Body: {server: 
{status: BUILD, updated: 2013-12-04T05:54:55Z, hostId: , 
addresses: {}, links: [{href: 
http://127.0.0.1:8774/v2/6ddc4dd2e1bc4683bfb199e225f8c9e9/servers/3abd4be9-b3dc-4b87-9cf5-f5b597173cac;,
 rel: self}, {href: 
http://127.0.0.1:8774/6ddc4dd2e1bc4683bfb199e225f8c9e9/servers/3abd4be9-b3dc-4b87-9cf5-f5b597173cac;,
 rel: bookmark}], key_name: null, image: {id: 
465be8b0-dd45-47c5-91d4-efb628aa375e, links: [{href: 
http://127.0.0.1:8774/6ddc4dd2e1bc4683bfb199e225f8c9e9/images/465be8b0-dd45-47c5-91d4-efb628aa375e;,
 rel: bookmark}]}, OS-EXT-STS:task_state: scheduling, 
OS-EXT-STS:vm_state: building, OS-SRV-USG:launched_at: null, flavor: 
{id: 42, links: [{href: 
http://127.0.0.1:8774/6ddc4dd2e1bc4683bfb199e225f8c9e9/flavors/42;, rel: 
bookmark}]}, id: 3abd4be9-b3dc-4b87-9cf5-f5b597173cac, security_groups: 
[{name: default}], OS-SRV-USG:term
 inated_at: null, OS-EXT-AZ:availability_zone: nova, user_id: 
21689027acab4b11ae0885e5cbd26a4b, name: server-tempest-2040253626, 
created: 2013-12-04T05:54:55Z, tenant_id: 
6ddc4dd2e1bc4683bfb199e225f8c9e9, OS-DCF:diskConfig: MANUAL, 
os-extended-volumes:volumes_attached: [], accessIPv4: , accessIPv6: , 
progress: 0, OS-EXT-STS:power_state: 0, config_drive: , metadata: {}}}

  .
  .
  .

  2013-12-04 06:11:28.407 | 2013-12-04 05:55:03,128 Request: 

[Yahoo-eng-team] [Bug 1218251] Re: xenapi: permission denied on block device

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218251

Title:
  xenapi: permission denied on block device

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Sometimes this error appears in n-cpu, making the builds fail:

  ERROR nova.virt.xenapi.vm_utils [req-1 demo demo] [instance-1] Failed to 
fetch glance image
  TRACE nova.virt.xenapi.vm_utils [instance-1] Traceback (most recent call 
last):
  TRACE nova.virt.xenapi.vm_utils [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vm_utils.py, line 1344, in _fetch_disk_image
  TRACE nova.virt.xenapi.vm_utils [instance-1] session, image.stream_to, 
image_type, virtual_size, dev)
  TRACE nova.virt.xenapi.vm_utils [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vm_utils.py, line 2011, in _stream_disk
  TRACE nova.virt.xenapi.vm_utils [instance-1] with open(dev_path, 'wb') as 
f:
  TRACE nova.virt.xenapi.vm_utils [instance-1] IOError: [Errno 13] Permission 
denied: '/dev/xvdd'
  TRACE nova.virt.xenapi.vm_utils [instance-1] 
  ERROR nova.utils [req-1 demo demo] [instance-1] Failed to spawn, rolling back
  TRACE nova.utils [instance-1] Traceback (most recent call last):
  TRACE nova.utils [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vmops.py, line 497, in spawn
  TRACE nova.utils [instance-1] vdis = create_disks_step(undo_mgr, 
disk_image_type, image_meta)
  TRACE nova.utils [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vmops.py, line 153, in inner
  TRACE nova.utils [instance-1] rv = f(*args, **kwargs)
  TRACE nova.utils [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vmops.py, line 377, in create_disks_step
  TRACE nova.utils [instance-1] block_device_info=block_device_info)
  TRACE nova.utils [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vmops.py, line 348, in _create_disks
  TRACE nova.utils [instance-1] block_device_info=block_device_info)
  TRACE nova.utils [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vm_utils.py, line 540, in 
get_vdis_for_instance
  TRACE nova.utils [instance-1] context, session, instance, name_label, 
image, image_type)
  TRACE nova.utils [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vm_utils.py, line 1119, in _create_image
  TRACE nova.utils [instance-1] image_id, image_type)
  TRACE nova.utils [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vm_utils.py, line 1062, in 
_create_cached_image
  TRACE nova.utils [instance-1] image_id, image_type)
  TRACE nova.utils [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vm_utils.py, line 1142, in _fetch_image
  TRACE nova.utils [instance-1] image_id, image_type)
  TRACE nova.utils [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vm_utils.py, line 1344, in _fetch_disk_image
  TRACE nova.utils [instance-1] session, image.stream_to, image_type, 
virtual_size, dev)
  TRACE nova.utils [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vm_utils.py, line 2011, in _stream_disk
  TRACE nova.utils [instance-1] with open(dev_path, 'wb') as f:
  TRACE nova.utils [instance-1] IOError: [Errno 13] Permission denied: 
'/dev/xvdd'
  TRACE nova.utils [instance-1] 
  ERROR nova.compute.manager [req-1 demo demo] [instance-1] Instance failed to 
spawn
  TRACE nova.compute.manager [instance-1] Traceback (most recent call last):
  TRACE nova.compute.manager [instance-1]   File 
/opt/stack/nova/nova/compute/manager.py, line 1286, in _spawn
  TRACE nova.compute.manager [instance-1] block_device_info)
  TRACE nova.compute.manager [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/driver.py, line 180, in spawn
  TRACE nova.compute.manager [instance-1] admin_password, network_info, 
block_device_info)
  TRACE nova.compute.manager [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vmops.py, line 514, in spawn
  TRACE nova.compute.manager [instance-1] 
undo_mgr.rollback_and_reraise(msg=msg, instance=instance)
  TRACE nova.compute.manager [instance-1]   File 
/opt/stack/nova/nova/utils.py, line 981, in rollback_and_reraise
  TRACE nova.compute.manager [instance-1] self._rollback()
  TRACE nova.compute.manager [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vmops.py, line 497, in spawn
  TRACE nova.compute.manager [instance-1] vdis = 
create_disks_step(undo_mgr, disk_image_type, image_meta)
  TRACE nova.compute.manager [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vmops.py, line 153, in inner
  TRACE nova.compute.manager [instance-1] rv = f(*args, **kwargs)
  TRACE nova.compute.manager [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vmops.py, line 377, in create_disks_step
  TRACE nova.compute.manager [instance-1] 
block_device_info=block_device_info)
  TRACE nova.compute.manager [instance-1]   File 
/opt/stack/nova/nova/virt/xenapi/vmops.py, line 348, in 

[Yahoo-eng-team] [Bug 1290767] Re: Error. Unable to associate floating ip in nova.network.neutronv2.api in _get_port_id_by_fixed_address

2014-09-17 Thread Sean Dague
I expect the eventing interface in Icehouse invalidates this bug. If
it's still cropping up in Juno, please reopen.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290767

Title:
  Error. Unable to associate floating ip in nova.network.neutronv2.api
  in _get_port_id_by_fixed_address

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Stacktrace (most recent call last):

File nova/api/openstack/compute/contrib/floating_ips.py, line 255, in 
_add_floating_ip
  fixed_address=fixed_address)
File nova/network/api.py, line 50, in wrapper
  res = f(self, context, *args, **kwargs)
File nova/network/neutronv2/api.py, line 649, in associate_floating_ip
  fixed_address)
File nova/network/neutronv2/api.py, line 634, in 
_get_port_id_by_fixed_address
  raise exception.FixedIpNotFoundForAddress(address=address)

  Surely Nova should retry an assignment if the port is not ready?

  This was created by a tempest test suite.
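
  A retry along the lines suggested in the description might look like this
  (a minimal sketch; the helper names, the stand-in exception class, and the
  retry parameters are illustrative and not part of Nova's API):

```python
import time


class FixedIpNotFoundForAddress(Exception):
    """Stand-in for nova.exception.FixedIpNotFoundForAddress."""


def associate_with_retry(associate, address, retries=3, delay=1.0):
    """Retry the association while the Neutron port is still being wired up.

    `associate` is any callable that raises FixedIpNotFoundForAddress
    until the port for `address` is ready.
    """
    for attempt in range(retries):
        try:
            return associate(address)
        except FixedIpNotFoundForAddress:
            if attempt == retries - 1:
                raise  # port never appeared; give up and re-raise
            time.sleep(delay)  # give Neutron time to report the port
```

  The point is only that the lookup failure is treated as transient rather
  than immediately fatal, which is what the reporter is asking for.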

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273268] Re: live-migration - instance could not be found

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273268

Title:
  live-migration - instance could not be found

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Starting a live migration with nova live-migration
  a7a78e36-e088-416c-9479-e95aa1a0f7ef fails because Nova tries to
  detach the volume from the instance on the destination host instead
  of the source host.

  * Start live migration
  * Check logs on both Source and Destination Host

  === Source Host ===
  2014-01-27 15:03:57.554 2681 ERROR nova.virt.libvirt.driver [-] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Live Migration failure: End of file while 
reading data: Input/output error

  === Destination Host ===
  2014-01-27 15:02:13.129 3742 AUDIT nova.compute.manager 
[req-8e878227-5ba7-4079-993b-aefa8ba621ed f17443ad86764e3a847667a608cb7181 
cd0e923440eb4bbc8f3388e38544b977] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Detach volume 
2ab8cb25-8f79-4b8e-bc93-c52351df84ee from mountpoint vda
  2014-01-27 15:02:13.134 3742 WARNING nova.compute.manager 
[req-8e878227-5ba7-4079-993b-aefa8ba621ed f17443ad86764e3a847667a608cb7181 
cd0e923440eb4bbc8f3388e38544b977] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Detaching volume from unknown instance
  2014-01-27 15:02:13.138 3742 ERROR nova.compute.manager 
[req-8e878227-5ba7-4079-993b-aefa8ba621ed f17443ad86764e3a847667a608cb7181 
cd0e923440eb4bbc8f3388e38544b977] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Failed to detach volume 
2ab8cb25-8f79-4b8e-bc93-c52351df84ee from vda
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Traceback (most recent call last):
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 3725, in 
_detach_volume
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] encryption=encryption)
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 1202, in 
detach_volume
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] virt_dom = 
self._lookup_by_name(instance_name)
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 3085, in 
_lookup_by_name
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] raise 
exception.InstanceNotFound(instance_id=instance_name)
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] InstanceNotFound: Instance 
instance-0084 could not be found.
  2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef]
  2014-01-27 15:02:13.139 3742 DEBUG nova.volume.cinder 
[req-8e878227-5ba7-4079-993b-aefa8ba621ed f17443ad86764e3a847667a608cb7181 
cd0e923440eb4bbc8f3388e38544b977] Cinderclient connection created using URL: 
http://10.3.0.2:8776/v1/cd0e923440eb4bbc8f3388e38544b977 cinderclient 
/usr/lib/python2.7/dist-packages/nova/volume/cinder.py:96
  2014-01-27 15:02:13.142 3742 INFO urllib3.connectionpool [-] Starting new 
HTTP connection (1): 10.3.0.2
  2014-01-27 15:02:13.230 3742 DEBUG urllib3.connectionpool [-] POST 
/v1/cd0e923440eb4bbc8f3388e38544b977/volumes/2ab8cb25-8f79-4b8e-bc93-c52351df84ee/action
 HTTP/1.1 202 0 _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:296

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259114] Re: nova failed to schedule in check queue job: ComputeFilter returned 0 hosts

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259114

Title:
  nova failed to schedule in check queue job: ComputeFilter returned 0
  hosts

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Gate check failed on https://review.openstack.org/#/c/59049/

  http://logs.openstack.org/49/59049/1/gate/gate-tempest-dsvm-postgres-
  full/a1934fd/

  2013-12-08 14:30:09.915 | 
  2013-12-08 14:30:09.915 | 
==
  2013-12-08 14:30:09.915 | FAIL: setUpClass 
(tempest.api.compute.v3.servers.test_instance_actions.InstanceActionsV3TestJSON)
  2013-12-08 14:30:09.915 | setUpClass 
(tempest.api.compute.v3.servers.test_instance_actions.InstanceActionsV3TestJSON)
  2013-12-08 14:30:09.915 | 
--
  2013-12-08 14:30:09.916 | _StringException: Traceback (most recent call last):
  2013-12-08 14:30:09.916 |   File 
tempest/api/compute/v3/servers/test_instance_actions.py, line 30, in 
setUpClass
  2013-12-08 14:30:09.916 | resp, server = 
cls.create_test_server(wait_until='ACTIVE')
  2013-12-08 14:30:09.916 |   File tempest/api/compute/base.py, line 117, in 
create_test_server
  2013-12-08 14:30:09.916 | server['id'], kwargs['wait_until'])
  2013-12-08 14:30:09.916 |   File 
tempest/services/compute/v3/json/servers_client.py, line 167, in 
wait_for_server_status
  2013-12-08 14:30:09.917 | extra_timeout=extra_timeout)
  2013-12-08 14:30:09.917 |   File tempest/common/waiters.py, line 73, in 
wait_for_server_status
  2013-12-08 14:30:09.917 | raise 
exceptions.BuildErrorException(server_id=server_id)
  2013-12-08 14:30:09.917 | BuildErrorException: Server 
34e5426b-9b61-4ee8-9d0f-c9ff34a26759 failed to build and is in ERROR status
  2013-12-08 14:30:09.917 | 
  2013-12-08 14:30:09.917 | 
  2013-12-08 14:30:09.918 | 
==
  2013-12-08 14:30:09.918 | FAIL: process-returncode
  2013-12-08 14:30:09.918 | process-returncode
  2013-12-08 14:30:09.918 | 
--
  2013-12-08 14:30:09.918 | _StringException: Binary content:
  2013-12-08 14:30:09.918 |   traceback (test/plain; charset=utf8)
  2013-12-08 14:30:09.919 | 
  2013-12-08 14:30:09.919 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1259114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1223361] Re: unclear error message for quota in horizon (possibly taking wrong one from logs)

2014-09-17 Thread Sam Betts
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1223361

Title:
  unclear error message for quota in horizon (possibly taking wrong one
  from logs)

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Compute (Nova):
  New

Bug description:
  I tried running 100 large instances.
  Horizon shows the following error:

  Error: Quota exceeded for cores,instances,ram: Requested 800, but
  already used 0 of 20 cores (HTTP 413) (Request-ID: req-0c854c89-89d9
  -48ce-96e2-eca07bbbc8f5)

  The first part of the message is fine (Quota exceeded for
cores,instances,ram), but the second part is unclear. I asked to run 100
instances of the largest flavor (which my host cannot support), so it is
not obvious where the 800 value comes from. Also, 'but already used 0 of
20 cores' means none of the cores were in use.
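
  The puzzling "Requested 800" is just instances multiplied by vCPUs per
  flavor. A quick sketch of the arithmetic (assuming an 8-vCPU largest
  flavor, which is an assumption on my part and not stated in the report):

```python
def requested_cores(num_instances, flavor_vcpus):
    # Total cores the request would consume, which is what Nova's
    # quota check reports as "Requested N".
    return num_instances * flavor_vcpus


# 100 instances of a hypothetical 8-vCPU flavor yields the "Requested 800"
# seen in the error; 2 instances (16 cores) would still fit under the
# 20-core quota, matching "Can only run 2 more instances of this type."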

  This is the complete log; I think we may be surfacing the wrong
  error:

  [root@opens-vdsb ~(keystone_admin)]# egrep 
0c854c89-89d9-48ce-96e2-eca07bbbc8f5 /var/log/*/*
  /var/log/horizon/horizon.log:Recoverable error: Quota exceeded for 
cores,instances,ram: Requested 800, but already used 0 of 20 cores (HTTP 413) 
(Request-ID: req-0c854c89-89d9-48ce-96e2-eca07bbbc8f5)
  /var/log/nova/api.log:2013-09-10 16:35:09.049 WARNING nova.compute.api 
[req-0c854c89-89d9-48ce-96e2-eca07bbbc8f5 5f38d619dfe744f0a8e08818033fc37e 
89f7caf549e04aec85c3a8737a43a37c] cores,instances,ram quota exceeded for 
89f7caf549e04aec85c3a8737a43a37c, tried to run 100 instances. Can only run 2 
more instances of this type.
  /var/log/nova/api.log:2013-09-10 16:35:09.049 INFO nova.api.openstack.wsgi 
[req-0c854c89-89d9-48ce-96e2-eca07bbbc8f5 5f38d619dfe744f0a8e08818033fc37e 
89f7caf549e04aec85c3a8737a43a37c] HTTP exception thrown: Quota exceeded for 
cores,instances,ram: Requested 800, but already used 0 of 20 cores
  /var/log/nova/api.log:2013-09-10 16:35:09.050 INFO 
nova.osapi_compute.wsgi.server [req-0c854c89-89d9-48ce-96e2-eca07bbbc8f5 
5f38d619dfe744f0a8e08818033fc37e 89f7caf549e04aec85c3a8737a43a37c] 10.35.101.10 
POST /v2/89f7caf549e04aec85c3a8737a43a37c/servers HTTP/1.1 status: 413 len: 
373 time: 0.1629190

  
  I think we should instead be surfacing 'tried to run 100 instances. Can
  only run 2 more instances of this type.'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1223361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1204169] Re: compute instance.update messages sometimes have the wrong values for instance state

2014-09-17 Thread Sean Dague
Ok, I've been looking at this block of code, and it's terribly
confusing. The big issue is that the decision of whether or not we
should send an update is mixed up with the code that sends it.

The solution would be to refactor send_update so that all of the
change-determination logic is extracted to the front, giving a clear
answer to whether we should send an update and what has changed.
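
The refactor described above could be sketched roughly as follows
(hypothetical helper names; Nova's real send_update lives in
nova/compute/utils.py and is considerably more involved):

```python
def _compute_changes(old, new):
    """Decide up front what changed; return {field: (old, new)}."""
    changes = {}
    for key in ('vm_state', 'task_state'):
        if old.get(key) != new.get(key):
            changes[key] = (old.get(key), new.get(key))
    return changes


def send_update(notifier, old, new):
    """Notify only when something actually changed.

    Separating the "should we send?" decision (in _compute_changes)
    from the sending keeps stale defaults out of the payload.
    """
    changes = _compute_changes(old, new)
    if not changes:
        return None  # nothing changed, so skip the notification entirely
    payload = {k: {'old': o, 'new': n} for k, (o, n) in changes.items()}
    notifier(payload)
    return payload
```

With the split, an update triggered by something other than a state
change would simply produce no notification, instead of one carrying
None for the old states.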

** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1204169

Title:
  compute instance.update messages sometimes have the wrong values for
  instance state

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Compute instance.update messages that are not triggered by a state
  change (e.g. setting the host in the resource tracker) have default
  (None) values for task_state, old_vm_state and old_ task_state.

  This can make the instance state sequence look wrong to anything
  consuming the messages (e.g stacktach)

   compute.instance.update  None(None) -> Building(none)
   scheduler.run_instance.scheduled
   compute.instance.update  building(None) -> building(scheduling)
   compute.instance.create.start
   compute.instance.update  building(None) -> building(None)
   compute.instance.update  building(None) -> building(networking)
   compute.instance.update  building(networking) -> building(block_device_mapping)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1204169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299135] Re: nova unit test fail as exceptions.ExternalIpAddressExhaustedClient not found

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1299135

Title:
  nova unit test fail as exceptions.ExternalIpAddressExhaustedClient not
  found

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The unit test failure is likely a regression caused by
  
https://github.com/openstack/nova/commit/cd1423cef7621e0b557cbe0260f661d08811236b.

  I'm unsure why it was not caught in the gate.

  Apparently exceptions.ExternalIpAddressExhaustedClient is not defined.
  Refer to attached log for more details.

  traceback-1: {{{
  Traceback (most recent call last):
File 
/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/mox.py,
 line 286, in VerifyAll
  mock_obj._Verify()
File 
/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/mox.py,
 line 506, in _Verify
  raise ExpectedMethodCallsError(self._expected_calls_queue)
  ExpectedMethodCallsError: Verify: Expected methods never called:
0.  MultipleTimesGroup default
  }}}

  traceback-2: {{{
  Traceback (most recent call last):
File 
/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/fixtures/fixture.py,
 line 112, in cleanUp
  return self._cleanups(raise_errors=raise_first)
File 
/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/fixtures/callmany.py,
 line 88, in __call__
  reraise(error[0], error[1], error[2])
File 
/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/fixtures/callmany.py,
 line 82, in __call__
  cleanup(*args, **kwargs)
File 
/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/mox.py,
 line 286, in VerifyAll
  mock_obj._Verify()
File 
/home/jenkins/workspace/csi-nova-upstream/.tox/py26/lib/python2.6/site-packages/mox.py,
 line 506, in _Verify
  raise ExpectedMethodCallsError(self._expected_calls_queue)
  ExpectedMethodCallsError: Verify: Expected methods never called:
0.  MultipleTimesGroup default
  }}}

  Traceback (most recent call last):
File nova/tests/network/test_neutronv2.py, line 1639, in 
test_allocate_floating_ip_exhausted_fail
  AndRaise(exceptions.ExternalIpAddressExhaustedClient)
  AttributeError: 'module' object has no attribute 
'ExternalIpAddressExhaustedClient'
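
  A defensive pattern for referencing a client exception class that may not
  exist in every installed neutronclient release (a sketch only; the lookup
  helper and the fallback choice are illustrative, not Nova's approach):

```python
import types


def get_exception(module, name, fallback):
    """Look up an exception class by name, falling back when the
    installed client library predates it."""
    exc = getattr(module, name, None)
    if isinstance(exc, type) and issubclass(exc, BaseException):
        return exc
    return fallback


# Stand-in for an older neutronclient.common.exceptions module that
# lacks the newer class: the fallback is returned instead of raising
# AttributeError at import time.
exceptions = types.SimpleNamespace()
exc_class = get_exception(exceptions, 'ExternalIpAddressExhaustedClient',
                          RuntimeError)
```

  The real fix, of course, is to pin a neutronclient version that defines
  the class; the sketch just shows how the AttributeError above arises.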

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1299135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1306907] Re: Trival: Need remove rendundant parentheses of cfg help strings

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1306907

Title:
  Trival: Need remove rendundant parentheses of cfg help strings

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  There is a hacking review, https://review.openstack.org/#/c/74493/,
  that made me aware we may have redundant parentheses.
  We'd better clean them up; Python handles the strings the same either way.
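
  A minimal illustration of the redundancy in question (a hypothetical help
  string, not one taken from Nova's config): Python parses a parenthesized
  string literal identically to a bare one, so the parentheses add nothing.

```python
# With redundant parentheses around the help string:
help_with_parens = ('Number of retries before giving up.')

# Without them -- the resulting value is identical:
help_without = 'Number of retries before giving up.'

assert help_with_parens == help_without
```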

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1306907/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370511] [NEW] allow adding multiple roles for same user

2014-09-17 Thread Dafna Ron
Public bug reported:

[root@tigris01 ~(keystone_admin)]# keystone help user-role-add
usage: keystone user-role-add --user user --role role [--tenant tenant]

Add role to user.

Arguments:
  --user user, --user-id user, --user_id user
Name or ID of user.
  --role role, --role-id role, --role_id role
Name or ID of role.
  --tenant tenant, --tenant-id tenant
Name or ID of tenant.


In the CLI we can add multiple roles, but only one can be added in Horizon.
It would be good if we could add more than one role at a time.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370511

Title:
  allow adding multiple roles for same user

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  [root@tigris01 ~(keystone_admin)]# keystone help user-role-add
  usage: keystone user-role-add --user user --role role [--tenant tenant]

  Add role to user.

  Arguments:
--user user, --user-id user, --user_id user
  Name or ID of user.
--role role, --role-id role, --role_id role
  Name or ID of role.
--tenant tenant, --tenant-id tenant
  Name or ID of tenant.

  
  In the CLI we can add multiple roles, but only one can be added in Horizon.
  It would be good if we could add more than one role at a time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299317] Re: use 'interface-attach' without option parameter happen ERROR

2014-09-17 Thread Sean Dague
This can't be reproduced without more logs from the failure. Please
provide to reopen.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1299317

Title:
  use 'interface-attach' without option parameter happen ERROR

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  
  I used 'interface-attach' without any optional parameters to add a vNIC
  to a VM. Nova returned a failure, but 'nova list' shows the vNIC was
  added to that VM. I did the following:

  root@ubuntu01:/var/log/nova# nova list
  
+--+--+++-++
  | ID   | Name | Status | Task State | Power 
State | Networks   |
  
+--+--+++-++
  | 663dc949-11f9-4aab-aaf7-6f5bd761ab6f | test | ACTIVE | None   | Running 
| test=10.10.0.5 |
  
+--+--+++-++
  root@ubuntu01:/var/log/nova# nova interface-attach test
  ERROR: Failed to attach interface (HTTP 500) (Request-ID: 
req-5af0e807-521f-45a2-a329-fd61ec74779e)
  root@ubuntu01:/var/log/nova# nova list
  
+--+--+++-++
  | ID   | Name | Status | Task State | Power 
State | Networks   |
  
+--+--+++-++
  | 663dc949-11f9-4aab-aaf7-6f5bd761ab6f | test | ACTIVE | None   | Running 
| test=10.10.0.5, 10.10.0.5, 10.10.0.12; test2=20.20.0.2 |
  
+--+--+++-++

  the error log on the nova-compute node is:
   ERROR nova.openstack.common.rpc.amqp 
[req-5af0e807-521f-45a2-a329-fd61ec74779e bcac7970f8ae41f38f79e01dece39bd8 
d13fb5f6d2354320bf4767f9b71df820] Exception during message handling
   TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
   TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line 461, 
in _process_data
   TRACE nova.openstack.common.rpc.amqp **args)
   TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py, 
line 172, in dispatch
   TRACE nova.openstack.common.rpc.amqp result = getattr(proxyobj, 
method)(ctxt, **kwargs)
   TRACE nova.openstack.common.rpc.amqp   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 3892, in 
attach_interface
   TRACE nova.openstack.common.rpc.amqp raise 
exception.InterfaceAttachFailed(instance=instance)
   TRACE nova.openstack.common.rpc.amqp InterfaceAttachFailed: Failed to attach 
network adapter device to {u'vm_state': u'active', u'availability_zone': 
u'nova', u'terminated_at': None, u'ephemeral_gb': 0, u'instance_type_id': 3, 
u'user_data': None, u'cleaned': False, u'vm_mode': None, u'deleted_at': None, 
u'reservation_id': u'r-0542q330', u'id': 1, u'security_groups': [], 
u'disable_terminate': False, u'display_name': u'test', u'uuid': 
u'663dc949-11f9-4aab-aaf7-6f5bd761ab6f', u'default_swap_device': None, 
u'info_cache': {u'instance_uuid': u'663dc949-11f9-4aab-aaf7-6f5bd761ab6f', 
u'network_info': [{u'ovs_interfaceid': u'3c959010-25c5-4fe9-91c3-fdcfff57b870', 
u'network': {u'bridge': u'br-int', u'subnets': [{u'ips': [{u'floating_ips': [], 
u'meta': {}, u'type': u'fixed', u'version': 4, u'address': u'10.10.0.5'}], 
u'version': 4, u'meta': {u'dhcp_server': u'10.10.0.3'}, u'dns': [], u'routes': 
[], u'cidr': u'10.10.0.0/24', u'gateway': {u'meta': {}, u'type': u'gateway', 
u'version': 4, u'addr
 ess': u'10.10.0.1'}}], u'meta': {u'injected': False, u'tenant_id': 
u'd13fb5f6d2354320bf4767f9b71df820'}, u'id': 
u'72d612c3-24e8-4a5c-8a45-7a5afb51c2f2', u'label': u'test'}, u'devname': 
u'tap3c959010-25', u'qbh_params': None, u'meta': {}, u'address': 
u'fa:16:3e:41:06:c6', u'type': u'ovs', u'id': 
u'3c959010-25c5-4fe9-91c3-fdcfff57b870', u'qbg_params': None}]}, u'hostname': 
u'test', u'launched_on': u'ubuntu01', u'display_description': u'test', 
u'key_data': None, u'kernel_id': u'', u'power_state': 1, 
u'default_ephemeral_device': None, u'progress': 0, u'project_id': 
u'd13fb5f6d2354320bf4767f9b71df820', u'launched_at': 
u'2014-03-27T21:03:05.00', u'scheduled_at': u'2014-03-27T21:02:59.00', 
u'node': u'ubuntu01', u'ramdisk_id': u'', u'access_ip_v6': None, 
u'access_ip_v4': None, u'deleted': False, u'key_name': None, u'updated_at': 
u'2014-03-27T21:03:05.00', 

[Yahoo-eng-team] [Bug 1310874] Re: nova host-update --status disabled host was not implemented for kvm

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310874

Title:
  nova host-update --status disabled host was not implemented for kvm

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  liugya@liugya-ubuntu:~$ nova host-update --status disabled liugya-ubuntu
  ERROR (BadRequest): Invalid status: 'disabled' (HTTP 400) (Request-ID: 
req-e37c5c23-e6db-44fd-a814-cda41b967297)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1310874/+subscriptions



[Yahoo-eng-team] [Bug 1297853] Re: failed to launch an instance from ISO image: TRACE nova.compute.manager MessagingTimeout: Timed out waiting for a reply to message ID

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297853

Title:
  failed to launch an instance from ISO image: TRACE
  nova.compute.manager MessagingTimeout: Timed out waiting for a reply
  to message ID

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Description of problem:
  OpenStack is installed as an AIO (with nova networking) on Fedora 20. The
instance was launched on a flavor with the following parameters:
  
+--+---+---+--+---+--+---+-+---+
  | ID   | Name  | Memory_MB | Disk | 
Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  
+--+---+---+--+---+--+---+-+---+

  | 8ddee6ea-c7b3-4482-97b3-f4c9ca6a2c19 | m1.medium | 4096  | 40   | 40
|  | 2 | 1.0 | True  |
  
+--+---+---+--+---+--+---+-+---+

  On the first try the instance status was stuck on 'spawning', even after a
timeout stopped the process.
  On the second try the instance status changed from 'spawning' to 'Error'.

  The nova compute log:
  2014-03-26 15:33:58.548 15699 DEBUG nova.compute.manager [-] An error 
occurred _heal_instance_info_cache 
/usr/lib/python2.7/site-packages/nova/compute/manager.py:4569
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager Traceback (most 
recent call last):
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 4565, in 
_heal_instance_info_cache
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager 
self._get_instance_nw_info(context, instance, use_slave=True)
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 908, in 
_get_instance_nw_info
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager instance)
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/network/api.py, line 94, in wrapped
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager return 
func(self, context, *args, **kwargs)
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/network/api.py, line 389, in 
get_instance_nw_info
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager result = 
self._get_instance_nw_info(context, instance)
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/network/api.py, line 405, in 
_get_instance_nw_info
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager nw_info = 
self.network_rpcapi.get_instance_nw_info(context, **args)
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/network/rpcapi.py, line 222, in 
get_instance_nw_info
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager host=host, 
project_id=project_id)
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py, line 150, in 
call
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager 
wait_for_reply=True, timeout=timeout)
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo/messaging/transport.py, line 90, in 
_send
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager timeout=timeout)
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
409, in send
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
400, in _send
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager result = 
self._waiter.wait(msg_id, timeout)
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
280, in wait
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager reply, ending, 
trylock = self._poll_queue(msg_id, timeout)
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
220, in _poll_queue
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager message = 
self.waiters.get(msg_id, timeout)
  2014-03-26 15:33:58.548 15699 TRACE nova.compute.manager   File 

[Yahoo-eng-team] [Bug 1370515] [NEW] allow edit of user role

2014-09-17 Thread Dafna Ron
Public bug reported:

I think it would be helpful to allow changing and updating a user's role
from horizon

[root@tigris01 ~(keystone_admin)]# keystone help |grep role
role-create Create new role.
role-delete Delete role.
role-getDisplay role details.
role-list   List all roles.
user-role-add   Add role to user.
user-role-list  List roles granted to a user.
user-role-removeRemove role from user.
bootstrap   Grants a new role to a new user on a new tenant, after
[root@tigris01 ~(keystone_admin)]# keystone help user-role-add
usage: keystone user-role-add --user <user> --role <role> [--tenant <tenant>]

Add role to user.

we can actually use role-delete + role-create or role-create + --role
<role>, --role-id <role>, --role_id <role>

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370515

Title:
  allow edit of user role

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I think it would be helpful to allow changing and updating a user's
  role from horizon

  [root@tigris01 ~(keystone_admin)]# keystone help |grep role
  role-create Create new role.
  role-delete Delete role.
  role-getDisplay role details.
  role-list   List all roles.
  user-role-add   Add role to user.
  user-role-list  List roles granted to a user.
  user-role-removeRemove role from user.
  bootstrap   Grants a new role to a new user on a new tenant, after
  [root@tigris01 ~(keystone_admin)]# keystone help user-role-add
  usage: keystone user-role-add --user <user> --role <role> [--tenant <tenant>]

  Add role to user.

  we can actually use role-delete + role-create or role-create + --role
  <role>, --role-id <role>, --role_id <role>
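
  As a purely illustrative sketch of the workflow the reporter describes (a
  "change role" operation has to be modelled as a remove of the old grant
  followed by an add of the new one, since there is no atomic update call),
  here is a minimal in-memory stand-in; the class and method names are
  hypothetical, not the real keystone client API:

```python
# Illustrative sketch only: models the "change a user's role" operation that
# Horizon could expose, built from the remove + add sequence the keystone CLI
# requires. The data model here is hypothetical, not keystone's.

class RoleAssignments:
    """In-memory stand-in for keystone's (user, tenant) -> roles mapping."""

    def __init__(self):
        self._grants = {}  # (user, tenant) -> set of role names

    def add_user_role(self, user, role, tenant):
        self._grants.setdefault((user, tenant), set()).add(role)

    def remove_user_role(self, user, role, tenant):
        self._grants.get((user, tenant), set()).discard(role)

    def list_user_roles(self, user, tenant):
        return sorted(self._grants.get((user, tenant), set()))

    def change_user_role(self, user, old_role, new_role, tenant):
        # No atomic "update role" exists: a role change is a remove of the
        # old grant followed by an add of the new one.
        self.remove_user_role(user, old_role, tenant)
        self.add_user_role(user, new_role, tenant)


ra = RoleAssignments()
ra.add_user_role("alice", "member", "demo")
ra.change_user_role("alice", "member", "admin", "demo")
print(ra.list_user_roles("alice", "demo"))  # ['admin']
```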

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370515/+subscriptions



[Yahoo-eng-team] [Bug 1316077] Re: live migration fails on NoValidHost although the host seems very valid

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316077

Title:
  live migration fails on NoValidHost although the host seems very valid

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I am trying to perform a live migration of one instance from one server to
another; when it fails, I see in the conductor log file:
  File /usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py, 
line 140, in select_destinations raise exception.NoValidHost(reason='')

  1. If I run nova host-list and nova host-describe on the destination host, I
see:
  hostname | compute | nova

  and
  hostname | (total)| 12  | 31957 | 442
  hostname | (used_now) | 0   | 512   | 0
  which means nova knows there is a valid host available.

  2. why is the reason empty?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1316077/+subscriptions



[Yahoo-eng-team] [Bug 1310791] Re: Server resize error message indicates field which does not match request

2014-09-17 Thread Sean Dague
I think this is using an out-of-tree extension

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310791

Title:
  Server resize error message indicates field which does not match
  request

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  With a post of this resize body to the nova API, ram is specified:

  Body :

  
{"resize":{"flavor":{"vcpus":1,"ram":99,"disk":20,"extra_specs":{"key":"smt:-1"

  The response refers to the same field as memory_mb:

  Response :

  {
      "badRequest": {
          "message": "Invalid input received: memory_mb must be <= 2147483647",
          "code": 400
      }
  }

  It would seem to make more sense if the response referred to the same
  field as the request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1310791/+subscriptions



[Yahoo-eng-team] [Bug 1284938] Re: EC2 authorize/revoke_rules in ec2 api broken with neutron

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284938

Title:
  EC2 authorize/revoke_rules in ec2 api broken with neutron

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Authorize/Revoke security_group_rules in ec2 api is unavailable with
  neutron.

  The code in cloud.py only works with nova-network:

  --
  def _rule_dict_last_step():

      source_security_group = db.security_group_get_by_name(
          context.elevated(),
          source_project_id,
          source_security_group_name)
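
  A hedged sketch of one possible fix direction (the driver classes and
  dispatch function below are illustrative stand-ins, not nova's actual
  code): route the source-group lookup through whichever security-group
  backend is configured instead of calling nova's DB directly, so the EC2
  path also works when neutron manages security groups.

```python
# Illustrative sketch: dispatch the source-group lookup by configured
# backend rather than hard-coding the nova DB call. All names are
# hypothetical stand-ins for nova's security-group API drivers.

class NovaNetSecurityGroups:
    def get_by_name(self, project_id, name):
        # Stand-in for db.security_group_get_by_name(...)
        return {"backend": "nova-network", "project": project_id, "name": name}


class NeutronSecurityGroups:
    def get_by_name(self, project_id, name):
        # Stand-in for a lookup against the neutron API, where the
        # security groups actually live in a neutron deployment.
        return {"backend": "neutron", "project": project_id, "name": name}


def get_security_group_api(use_neutron):
    return NeutronSecurityGroups() if use_neutron else NovaNetSecurityGroups()


sg_api = get_security_group_api(use_neutron=True)
group = sg_api.get_by_name("proj-1", "default")
print(group["backend"])  # neutron
```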

  

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1284938/+subscriptions



[Yahoo-eng-team] [Bug 1301128] Re: nova floating-ip-list has blank server id field

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1301128

Title:
  nova floating-ip-list has blank server id field

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  See for instance http://paste.ubuntu.com/7192574/

  +---+---+-+-+
  | Ip| Server Id | Fixed Ip| Pool|
  +---+---+-+-+
  | 138.35.77.109 |   | -   | ext-net |
  | 138.35.77.50  |   | -   | ext-net |
  | 138.35.77.36  |   | -   | ext-net |

  This appears to be a regression - it's breaking nodepool (see bug
  130 for context) and we used to have this working before we
  recently upgraded our nova.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1301128/+subscriptions



[Yahoo-eng-team] [Bug 1264925] Re: Setting up the configuration rpc_zmq_topic_backlog causing zmq receiver to silently ignore all messages

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1264925

Title:
  Setting up the configuration rpc_zmq_topic_backlog causing zmq
  receiver to silently ignore all messages

Status in OpenStack Compute (Nova):
  Invalid
Status in Messaging API for OpenStack:
  Triaged

Bug description:
  Setting the configuration parameter rpc_zmq_topic_backlog causes the
  zmq receiver to silently ignore all messages. I ran strace on
  nova-rpc-zmq-receiver, from which I saw that the issue was with the
  configuration option “rpc_zmq_topic_backlog”: the code was silently
  returning without processing the message, and there were no logs and
  no trace in the zmq receiver log even after enabling debug.

  What I see is that this option is set in the zmq_opts array in
  impl_zmq.py, but when a message comes in, the class ZmqProxy checks
  this config item, and the function __getattr__ in the class
  ConfigOpts (oslo/config/cfg.py) raises an error saying there is no
  such option and then returns without processing that message further.

  If I just comment out the configuration entry, it works fine.

  Please see attachment for Strace output.
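
  A minimal pure-Python sketch of the failure mode described above (it does
  not depend on oslo.config, and all names are illustrative): an option is
  looked up on a config object that never registered it, the lookup raises,
  and a broad exception handler drops the message without logging anything.

```python
# Illustrative only: models an unregistered config option being consulted
# inside a message-handling path, with the resulting AttributeError
# swallowed so the message is silently dropped.

class ConfigOpts:
    """Tiny stand-in for a config object with registered options."""

    def __init__(self, **registered):
        self._opts = registered

    def __getattr__(self, name):
        # Called only for attributes not found normally, mirroring how a
        # config object rejects options that were never registered.
        try:
            return self._opts[name]
        except KeyError:
            raise AttributeError("no such option: %s" % name)


def consume(conf, message, delivered):
    try:
        # The receiver consults the backlog option before processing.
        _backlog = conf.rpc_zmq_topic_backlog
        delivered.append(message)
    except AttributeError:
        # Swallowing the error here silently drops the message, which is
        # the behaviour the reporter observed: no log, no trace.
        pass


delivered = []
consume(ConfigOpts(), "ping", delivered)  # option missing: message dropped
consume(ConfigOpts(rpc_zmq_topic_backlog=10), "ping", delivered)
print(delivered)  # ['ping']
```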

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1264925/+subscriptions



[Yahoo-eng-team] [Bug 1331160] Re: UnexpectedTaskStateError: Unexpected task state: expecting [None] but the actual state is powering-off when running tempest.api.compute.v3.images.test_images.Image

2014-09-17 Thread Sean Dague
*** This bug is a duplicate of bug 1334345 ***
https://bugs.launchpad.net/bugs/1334345

** This bug has been marked a duplicate of bug 1334345
   Unexpected task state: expecting [None] but the actual state is powering-off

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1331160

Title:
  UnexpectedTaskStateError: Unexpected task state: expecting [None] but
  the actual state is powering-off when running
  
tempest.api.compute.v3.images.test_images.ImagesV3Test.test_create_image_from_stopped_server
  in gate job gate-tempest-dsvm-full

Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  2014-06-17 14:02:28.274 ERROR nova.api.openstack 
[req-511f6615-0e70-4dee-b926-65cdd07f2653 ImagesV3Test-873712061 
ImagesV3Test-286613536] Caught error: Unexpected task state: expecting [None]
   but the actual state is powering-off
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack Traceback (most recent 
call last):
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/__init__.py, line 125, in __call__
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 659, in __call__
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack return 
self.app(env, start_response)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/wsgi.py, line 917, in __call__
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack content_type, 
body, accept)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/wsgi.py, line 983, in _process_stack
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/wsgi.py, line 1067, in dispatch
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/common.py, line 481, in inner
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack return f(*args, 
**kwargs)
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/compute/plugins/v3/servers.py, line 
922, in _action_create_image
  2014-06-17 14:02:28.274 14800 TRACE nova.api.openstack 
extra_properties=props)
  2014-06-17 

[Yahoo-eng-team] [Bug 1308981] Re: Nova-compute does not recover controller switch over

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308981

Title:
  Nova-compute does not recover controller switch over

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I have two controllers which form a RabbitMQ cluster and then a
  compute node. The problem occurs when I have all the nodes first up
  and then I shut down one of the controllers. The exception below is then
  logged in nova.log on the compute node.

  182Apr 17 14:36:37 compute-01 nova-nova.compute.resource_tracker INFO: 
Compute_service record updated for compute-01:compute-01.trelab.tieto.com
  179Apr 17 14:37:42 compute-01 nova-nova.servicegroup.drivers.db ERROR: 
model server went away
  Traceback (most recent call last):
File /usr/lib/python2.7/dist-packages/nova/servicegroup/drivers/db.py, 
line 96, in _report_state
  service.service_ref, state_catalog)
File /usr/lib/python2.7/dist-packages/nova/conductor/api.py, line 269, in 
service_update
  return self._manager.service_update(context, service, values)
File /usr/lib/python2.7/dist-packages/nova/conductor/rpcapi.py, line 397, 
in service_update
  service=service_p, values=values)
File /usr/lib/python2.7/dist-packages/nova/rpcclient.py, line 85, in call
  return self._invoke(self.proxy.call, ctxt, method, **kwargs)
File /usr/lib/python2.7/dist-packages/nova/rpcclient.py, line 63, in 
_invoke
  return cast_or_call(ctxt, msg, **self.kwargs)
File /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/proxy.py, 
line 130, in call
  exc.info, real_topic, msg.get('method'))
  Timeout: Timeout while waiting on RPC response - topic: conductor, RPC 
method: service_update info: unknown
  180Apr 17 14:38:08 compute-01 nova-nova.compute.resource_tracker AUDIT: 
Auditing locally available compute resources
  182Apr 17 14:40:09 compute-01 nova-nova.compute.manager INFO: Updating 
bandwidth usage cache
  180Apr 17 14:44:39 compute-01 nova-nova.compute.resource_tracker AUDIT: 
Auditing locally available compute resources

  The compute node goes to the 'down' state in nova service-list and does
  not recover. Only when I start the other controller again does the
  compute node recover. Sometimes nova-compute needs to be restarted to
  recover.

  I have a havana level system. In the system I have upgraded RabbitMQ
  to 3.2.4 version and created a policy so that mirrored queues are used
  in RabbitMQ.

  $ rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'

  rabbitmqctl cluster_status is showing both controllers as running
  nodes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1308981/+subscriptions



[Yahoo-eng-team] [Bug 1286527] Re: Quota usages update should check all usage in tenant not only per user

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1286527

Title:
  Quota usages update should check all usage in tenant not only per user

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  After Grizzly - Havana upgrade the quota_usages table was 
  wiped out due to bug #1245746

  Quota_usages is then updated after a user creates/deletes an instance.
  The problem is that quota_usages is updated per user in a tenant.

  For tenants that are shared by different users this means that users who
  hadn't previously created instances are able to use the full quota for the
tenant.

  Example:
  instance quota for tenant_X = 10
  user_a and user_b can create instances in tenant_X

   - user_a creates 8 instances;
   - user_b didn't have instances;
   - grizzly - havana upgrade (usage_quotas wipe)
   - user_b is able to create 10 instances
  This is problematic for clouds that rely on tenant quotas rather than billing
users directly.

  Even though the previous example is associated with bug #1245746,
  this can happen whenever a user's quota usage for a tenant gets out of sync.

  Quota usages should be updated and synced considering all resources in the
  tenant and not only the resources of the user making the request.
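
  The reporter's point can be sketched numerically (the data and helper names
  below are illustrative, not nova's actual quota code): counting usage per
  user makes user_b appear to have the whole tenant quota free, while counting
  per tenant shows only 2 instances remain.

```python
# Illustrative sketch of per-user vs per-tenant quota accounting.
# Data and function names are made up for the example.

# user_a has already created 8 instances in tenant_X; user_b has none.
instances = [{"user": "user_a", "tenant": "tenant_X"}] * 8

TENANT_QUOTA = 10


def usage_per_user(rows, user, tenant):
    # Buggy view: only the requesting user's instances are counted,
    # so user_b appears to have the entire quota available.
    return sum(1 for r in rows if r["user"] == user and r["tenant"] == tenant)


def usage_per_tenant(rows, tenant):
    # Correct view: count every user's instances in the tenant.
    return sum(1 for r in rows if r["tenant"] == tenant)


remaining_for_user_b = TENANT_QUOTA - usage_per_user(instances, "user_b", "tenant_X")
remaining_for_tenant = TENANT_QUOTA - usage_per_tenant(instances, "tenant_X")
print(remaining_for_user_b, remaining_for_tenant)  # 10 2
```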

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1286527/+subscriptions



[Yahoo-eng-team] [Bug 1240383] Re: pci passthrough: unicode object support does not work with the unkown fileds

2014-09-17 Thread Sean Dague
The object model is specifically designed not to allow arbitrary fields.

** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: nova
   Status: Invalid => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240383

Title:
  pci passthrough: unicode object support does not work with the unkown
  fileds

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  2013-10-17 06:00:23.066 ERROR nova.openstack.common.threadgroup [-] (u'A 
string is required here, not %s', 'list')
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):

  The objects support now requires knowing the type of the values you save
  in the objects, but pci extra_info is designed for 'unknown' usage, so
  the type info cannot be determined.

  Refer to the trace:

  
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 117, in wait
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup x.wait()
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/threadgroup.py, line 49, in wait
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in 
wait
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup return 
hubs.get_hub().switch()
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in 
switch
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup return 
self.greenlet.switch()
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in 
main
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup result = 
function(*args, **kwargs)
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/service.py, line 65, in run_service
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup 
service.start()
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/service.py, line 164, in start
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/compute/manager.py, line 801, in pre_start_hook
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/compute/manager.py, line 4868, in 
update_available_resource
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup 
rt.update_available_resource(context)
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/openstack/common/lockutils.py, line 246, in inner
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup return 
f(*args, **kwargs)
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/compute/resource_tracker.py, line 292, in 
update_available_resource
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup 
'pci_passthrough_devices')))
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/pci/pci_manager.py, line 189, in set_hvdevs
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup dev_obj = 
pci_device.PciDevice.create(dev)
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/objects/pci_device.py, line 174, in create
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup 
pci_device.update_device(dev_dict)
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/objects/pci_device.py, line 136, in update_device
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup 
self.extra_info = extra_info
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 
/opt/stack/nova/nova/objects/base.py, line 68, in setter
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup 
field.coerce(self, name, value))
  2013-10-17 06:00:23.066 TRACE nova.openstack.common.threadgroup   File 

[Yahoo-eng-team] [Bug 1310791] Re: Server resize error message indicates field which does not match request

2014-09-17 Thread Matt Riedemann
*** This bug is a duplicate of bug 1350751 ***
https://bugs.launchpad.net/bugs/1350751

The root issue reported here was addressed in bug 1350751 and this
change:

https://review.openstack.org/#/c/110891/

Basically make the error message that the user sees have the same fields
that are in the API request rather than the DB model fields.

** This bug has been marked a duplicate of bug 1350751
   Nova responses unexpected error messages when fail to create flavor

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310791

Title:
  Server resize error message indicates field which does not match
  request

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  With a post of this resize body to the nova API, ram is specified:

  Body :

  
{resize:{flavor:{vcpus:1,ram:99,disk:20,extra_specs:{key:smt:-1

  The response refers to the same field as memory_mb:

  Response :

  {
  "badRequest": {
  "message": "Invalid input received: memory_mb must be <= 2147483647",
  "code": 400
  }

  It would seem to make more sense if the response referred to the same
  field as the request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1310791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1255029] Re: Fail to start nova-network when set dmz_cidr is empty

2014-09-17 Thread Sean Dague
** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1255029

Title:
  Fail to start nova-network when set dmz_cidr is empty

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  I cannot start nova-network when dmz_cidr is set to an empty value in
  nova.conf:

  2013-11-26 04:50:49.916 23022 ERROR nova.openstack.common.threadgroup [-] 
Unexpected error while running command.
  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iptables-restore -c
  Exit code: 2
  Stdout: ''
  Stderr: iptables-restore v1.4.7: host/network `None' not found\nError 
occurred at line: 64\nTry `iptables-restore -h' or 'iptables-restore --help' 
for more information.\n
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py, line 
117, in wait
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
x.wait()
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py, line 
49, in wait
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/greenthread.py, line 168, in wait
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/event.py, line 116, in wait
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py, line 187, in switch
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/eventlet/greenthread.py, line 194, in main
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/service.py, line 448, 
in run_service
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
service.start()
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/service.py, line 154, in start
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/network/manager.py, line 1713, in 
init_host
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
self.l3driver.initialize(fixed_range=False, networks=networks)
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/network/l3.py, line 90, in initialize
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
self.initialize_network(network['cidr'])
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/network/l3.py, line 101, in 
initialize_network
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
linux_net.init_host(cidr)
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/network/linux_net.py, line 700, in 
init_host
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
iptables_manager.apply()
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/network/linux_net.py, line 421, in apply
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
self._apply()
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py, line 
248, in inner
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
return f(*args, **kwargs)
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.6/site-packages/nova/network/linux_net.py, line 452, in 
_apply
  2013-11-26 04:50:49.916 23022 TRACE nova.openstack.common.threadgroup 
attempts=5)
  

[Yahoo-eng-team] [Bug 1286992] Re: Exists notification not sent after completion of certain operations

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1286992

Title:
  Exists notification not sent after completion of certain operations

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Nova currently sends exists notifications only when the resize/resize-
  revert/rebuild/rescue operations begin. When these operations span
  audit periods, this causes some values from the period between the
  time that the previous audit period ended to the time at which the
  rebuild/resize finishes, to be ignored. This should be fixed by
  sending an exists notification just after the operations finish.
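  The proposed timing change can be sketched as follows; the helper below is
  hypothetical (Nova's real notifier API differs) and only illustrates when
  the notifications would be emitted:

```python
# Hypothetical sketch, not Nova's actual notifier API: emit an 'exists'
# notification when a long-running operation (resize/rebuild/rescue)
# starts -- the current behavior -- and again just after it finishes, so
# usage spanning an audit-period boundary is not lost.

def run_with_exists_notifications(notify, operation):
    notify('compute.instance.exists')   # sent at operation start today
    result = operation()
    notify('compute.instance.exists')   # proposed: send again on completion
    return result
```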

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1286992/+subscriptions



[Yahoo-eng-team] [Bug 1370531] [NEW] Linux guest images on Hyper-V fail to access local storage when using resized differencing VHDX disks

2014-09-17 Thread Alessandro Pilotti
Public bug reported:

Description of the issue

Create a differencing disk of a Linux image VHDX (any Linux distro with LIS), 
resizing it in the process (without resizing the base disk)
Create and boot a VM with the disk attached on the IDE controller

During boot the hv_storvsc module will start logging repeatedly the
following message:

hv_storvsc vmbus_0_1: cmd 0x28 scsi status 0x2 srb status 0x4

(along with various I/O errors on sda1).

The machine manages to boot eventually after a long delay.

The following PowerShell script can be used to reproduce the issue:
http://paste.openstack.org/raw/110766/

Tested on: Hyper-V 2012 R2 and with various Linux guests.

Workaround:

Apply the same technique used for VHD disks, where a copy of the base
image is resized before the differencing one is created.
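The workaround above can be sketched as follows; the helper names are
hypothetical (the real Hyper-V driver operations differ), but the order of
operations is the point: resize a copy of the base image, never the base
itself, then build the differencing disk on top of the resized copy.

```python
# Sketch of the described workaround (hypothetical helper names, not the
# actual Hyper-V driver code): copy the base VHDX, grow only the copy,
# then create the differencing disk against the resized copy.
import shutil

def create_resized_differencing_disk(base_path, diff_path, new_size_bytes,
                                     resize_vhd, create_differencing_vhd):
    # resize_vhd and create_differencing_vhd stand in for the
    # hypervisor-specific operations.
    resized_base = base_path + '.resized'
    shutil.copyfile(base_path, resized_base)          # copy the base image
    resize_vhd(resized_base, new_size_bytes)          # grow the copy only
    create_differencing_vhd(diff_path, resized_base)  # diff against the copy
    return resized_base
```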

** Affects: nova
 Importance: High
 Status: Triaged


** Tags: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370531

Title:
  Linux guest images on Hyper-V fail to access local storage when using
  resized differencing VHDX disks

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  Description of the issue

  Create a differencing disk of a Linux image VHDX (any Linux distro with LIS), 
resizing it in the process (without resizing the base disk)
  Create and boot a VM with the disk attached on the IDE controller

  During boot the hv_storvsc module will start logging repeatedly the
  following message:

  hv_storvsc vmbus_0_1: cmd 0x28 scsi status 0x2 srb status 0x4

  (along with various I/O errors on sda1).

  The machine manages to boot eventually after a long delay.

  The following PowerShell script can be used to reproduce the issue:
  http://paste.openstack.org/raw/110766/

  Tested on: Hyper-V 2012 R2 and with various Linux guests.

  Workaround:

  Apply the same technique used for VHD disks, where a copy of the base
  image is resized before the differencing one is created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370531/+subscriptions



[Yahoo-eng-team] [Bug 1323786] Re: Can not delete instance Havana

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1323786

Title:
  Can not delete instance Havana

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Havana release: I have several instances running, and while I can delete most 
instances, there are times when one seems to get stuck in the 'deleting' state.  
I have tried to reset its state from the command line to error and delete, and 
also back to active and delete, with no success.  I waited quite a while after 
each attempt to allow the deletion to occur.  The only way I could delete the 
instance was to go into the mysql db and get rid of the specific entries/tables 
for this instance (this worked).
  I can see in my nova log that it was trying to delete:
  2014-05-20 14:55:20.343 19358 DEBUG routes.middleware [-] Matched DELETE 
/a843cb922df24063b50c3accdc6ded44/servers/dbbc203a-9a18-493a-b3d5-da82e5232f59 
__call__ 
/usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py:100
  2014-05-20 14:55:20.344 19358 DEBUG routes.middleware [-] Route path: 
'/{project_id}/servers/:(id)', defaults: {'action': u'delete', 'controller': 
nova.api.openstack.wsgi.Resource object at 0x2b3cc50} __call__ 
/usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py:102
  2014-05-20 14:55:20.344 19358 DEBUG routes.middleware [-] Match dict: 
{'action': u'delete', 'controller': nova.api.openstack.wsgi.Resource object at 
0x2b3cc50, 'project_id': u'a843cb922df24063b50c3accdc6ded44', 'id': 
u'dbbc203a-9a18-493a-b3d5-da82e5232f59'} __call__ 
/usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py:103
  2014-05-20 14:55:20.344 19358 DEBUG nova.api.openstack.wsgi 
[req-5e93a3b0-5430-4a1f-a86f-8ece4e13eece 161172840f564803ab4e6b2cde1f13de 
a843cb922df24063b50c3accdc6ded44] No Content-Type provided in request get_body 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:835
  2014-05-20 14:55:20.345 19358 DEBUG nova.api.openstack.wsgi 
[req-5e93a3b0-5430-4a1f-a86f-8ece4e13eece 161172840f564803ab4e6b2cde1f13de 
a843cb922df24063b50c3accdc6ded44] Calling method bound method 
Controller.delete of nova.api.openstack.compute.servers.Controller object at 
0x29931d0 _process_stack 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:962

  but I see this many times over
  2014-05-20 14:55:26.541 19351 DEBUG routes.middleware [-] Matched DELETE 
/a843cb922df24063b50c3accdc6ded44/servers/dbbc203a-9a18-493a-b3d5-da82e5232f59 
__call__ 
/usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py:100
  2014-05-20 14:55:26.542 19351 DEBUG routes.middleware [-] Match dict: 
{'action': u'delete', 'controller': nova.api.openstack.wsgi.Resource object at 
0x2b3cc50, 'project_id': u'a843cb922df24063b50c3accdc6ded44', 'id': 
u'dbbc203a-9a18-493a-b3d5-da82e5232f59'} __call__ 
/usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py:103

  2014-05-20 14:55:26.583 19351 DEBUG nova.compute.api [req-3278602c-
  d0ef-455f-af8e-a6e20447322e 161172840f564803ab4e6b2cde1f13de
  a843cb922df24063b50c3accdc6ded44] [instance: dbbc203a-9a18-493a-
  b3d5-da82e5232f59] Going to try to terminate instance delete
  /usr/lib/python2.6/site-packages/nova/compute/api.py:1564

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1323786/+subscriptions



[Yahoo-eng-team] [Bug 1323788] Re: block_device_mapping_v2 needs better input validation

2014-09-17 Thread Sean Dague
No reproducer; incomplete for > 30 days

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1323788

Title:
  block_device_mapping_v2 needs better input validation

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  using Havana (2013.2.2) on Ubuntu 12.04 (cloud archive packages)

  It is possible to create an instace that nova believes has a volume
  attached when in fact it is not.  The user is then unable to delete
  the instance becaus enove errors trying to detatch the volume which is
  not attached (deleting the offending volume does allow instancve
  termination but one presumes users want the data on their cinder
  volumes)

  While there are likely deeper causes to this confised state the
  proximal cause appear to be the users specifying --block-device
  id=id,source=volume on the boot command line rather than --block-
  device id=id,source=volume,dest=volume which properly attaches the
  requested volume and produces a norally running and normally deletable
  instance.
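  A minimal sketch of the kind of input validation the report asks for; the
  field names mirror the CLI options quoted in the description and are not
  Nova's actual block_device_mapping_v2 schema:

```python
# Hypothetical validation sketch (field names follow the CLI options in
# this report, not Nova's real schema): reject a source=volume mapping
# that does not also name a volume destination.

def validate_block_device(bdm):
    if bdm.get('source') == 'volume' and bdm.get('dest') != 'volume':
        raise ValueError("source=volume requires dest=volume")
    return bdm
```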

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1323788/+subscriptions



[Yahoo-eng-team] [Bug 1312874] Re: resizing an instance - causes the drives to disappear - With it hung during reboot

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1312874

Title:
  resizing an instance - causes the drives to disappear  -  With it hung
  during reboot

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I have my openstack cluster on Havana.

  Resizing an instance (Deb-7-based) to have more disk space, followed
  by a soft reboot, causes the drives to disappear, and the instance
  hangs permanently at the boot screen because it cannot find any
  drives.

  STEPS:
  --
  1. Create an instance (deb-7)
  2. Resize the instance - with a flavour to have more disk space.
  3. After the instance is resized, it is permanently set in the ERROR state, 
even though you can take a console of it and log in as usual.

  amande@ZZZ:~/VMs/IMAGES$ nova list
  
+--+---+++-+---+
  | ID   | Name  | Status | 
Task State | Power State | Networks  |
  
+--+---+++-+---+
  | cc867684-0fe9-48a7-95e9-60890d6e4fd0 | vapp  | ERROR  | 
None   | Running | 172_22-public=172.22.0.49 |
  
+--+---+++-+---+

  4. Reset state of the instance.

  amande@ZZZ:~/VMs/IMAGES$ nova reset-state --active 
cc867684-0fe9-48a7-95e9-60890d6e4fd0
  amande@axcient:~/VMs/IMAGES$ nova list
  
+--+---+++-+---+
  | ID   | Name  | Status | 
Task State | Power State | Networks  |
  
+--+---+++-+---+
  | cc867684-0fe9-48a7-95e9-60890d6e4fd0 | vapp  | ACTIVE | 
None   | Running | 172_22-public=172.22.0.49 |
  
+--+---+++-+---+

  5. After its state has been changed, soft reboot the instance.

  6. Right at this stage, the drives are no longer to be seen, and the
  instance will hang at the boot screen forever, for it cannot find any
  drives to mount.

  LOGS
  
  2014-04-24 23:14:48 DEBUG nova.virt.disk.api 
[req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande 
df9536c185814408adce9b8c1cafcf1c]  Checking if we can resize image 
/var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk. 
size=563714457600 can_resize_image 
/usr/lib/python2.6/site-packages/nova/virt/disk/api.py:157
  2014-04-24 23:14:48 DEBUG nova.openstack.common.processutils 
[req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande 
df9536c185814408adce9b8c1cafcf1c]  Running cmd (subprocess): env LC_ALL=C 
LANG=C qemu-img info 
/var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk execute 
/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:147
  2014-04-24 23:14:48 DEBUG nova.openstack.common.processutils 
[req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande 
df9536c185814408adce9b8c1cafcf1c]  Running cmd (subprocess): qemu-img resize 
/var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk 563714457600 
execute 
/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py:147
  2014-04-24 23:14:48 DEBUG nova.virt.disk.api 
[req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande 
df9536c185814408adce9b8c1cafcf1c]  Checking if we can resize filesystem inside 
/var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk. CoW=True 
is_image_partitionless 
/usr/lib/python2.6/site-packages/nova/virt/disk/api.py:171
  2014-04-24 23:14:48 DEBUG nova.virt.disk.vfs.api 
[req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande 
df9536c185814408adce9b8c1cafcf1c]  Instance for image 
imgfile=/var/lib/nova/instances/cc867684-0fe9-48a7-95e9-60890d6e4fd0/disk 
imgfmt=qcow2 partition=None instance_for_image 
/usr/lib/python2.6/site-packages/nova/virt/disk/vfs/api.py:31
  2014-04-24 23:14:48 DEBUG nova.virt.disk.vfs.api 
[req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande 
df9536c185814408adce9b8c1cafcf1c]  Trying to import guestfs instance_for_image 
/usr/lib/python2.6/site-packages/nova/virt/disk/vfs/api.py:34
  2014-04-24 23:14:48 DEBUG nova.virt.disk.vfs.api 
[req-99f896e7-549e-4d70-9a84-ee54cf54eaf3 Amrita Mande 
df9536c185814408adce9b8c1cafcf1c]  Using primary VFSGuestFS instance_for_image 
/usr/lib/python2.6/site-packages/nova/virt/disk/vfs/api.py:41
  2014-04-24 23:14:48 DEBUG 

[Yahoo-eng-team] [Bug 1327096] Re: Instance has not been resized

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327096

Title:
  Instance has not been resized

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  http://logs.openstack.org/86/97886/2/check/check-tempest-dsvm-
  full/1b4220d/console.html

  2014-06-05 05:54:25.217 | {3} 
tempest.api.compute.v3.admin.test_migrations.MigrationsAdminV3Test.test_list_migrations_in_flavor_resize_situation
 [25.907243s] ... FAILED
  2014-06-05 05:54:25.217 | 
  2014-06-05 05:54:25.217 | Captured traceback:
  2014-06-05 05:54:25.217 | ~~~
  2014-06-05 05:54:25.217 | Traceback (most recent call last):
  2014-06-05 05:54:25.218 |   File 
tempest/api/compute/v3/admin/test_migrations.py, line 43, in 
test_list_migrations_in_flavor_resize_situation
  2014-06-05 05:54:25.218 | 
self.servers_client.confirm_resize(server_id)
  2014-06-05 05:54:25.218 |   File 
tempest/services/compute/v3/json/servers_client.py, line 282, in 
confirm_resize
  2014-06-05 05:54:25.218 | return self.action(server_id, 
'confirm_resize', None, **kwargs)
  2014-06-05 05:54:25.218 |   File 
tempest/services/compute/v3/json/servers_client.py, line 211, in action
  2014-06-05 05:54:25.218 | post_body)
  2014-06-05 05:54:25.218 |   File tempest/common/rest_client.py, line 
209, in post
  2014-06-05 05:54:25.218 | return self.request('POST', url, 
extra_headers, headers, body)
  2014-06-05 05:54:25.218 |   File tempest/common/rest_client.py, line 
419, in request
  2014-06-05 05:54:25.218 | resp, resp_body)
  2014-06-05 05:54:25.218 |   File tempest/common/rest_client.py, line 
468, in _error_checker
  2014-06-05 05:54:25.218 | raise exceptions.BadRequest(resp_body)
  2014-06-05 05:54:25.219 | BadRequest: Bad request
  2014-06-05 05:54:25.219 | Details: {u'message': u'Instance has not been 
resized.', u'code': 400}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327096/+subscriptions



[Yahoo-eng-team] [Bug 1282910] Re: fixed ip address assigned twice to the vm

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1282910

Title:
  fixed ip address assigned twice to the vm

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  If the vm fails to spawn on the nova compute node where it was supposed to 
run, it is rescheduled to run on another compute node.
  The network of the VM is deleted before trying to re-schedule it on another 
host, but the instance's system_metadata is not updated ('network_allocated' is 
not set to False).  We should also update system_metadata after the network is 
deleted from the VM.

  While allocating a network to an instance we set system_metadata 
'network_allocated' to True, but while deallocating the network we do not set 
'network_allocated' to False.
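  The proposed fix amounts to clearing the flag in the deallocation path; a
  minimal sketch with an assumed dict-based structure (not Nova's actual
  instance objects):

```python
# Sketch of the proposed fix (assumed structure, not Nova's actual code):
# clear 'network_allocated' when the network is torn down before a
# reschedule, mirroring how allocation sets it to True.

def deallocate_network(instance):
    # ... real network teardown would happen here ...
    instance['system_metadata']['network_allocated'] = 'False'
    return instance
```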

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1282910/+subscriptions



[Yahoo-eng-team] [Bug 1267294] Re: Change default value of resize_confirm_window to -1

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1267294

Title:
  Change default value of resize_confirm_window to -1

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  
  Currently the default value of resize_confirm_window is 0, which
  means auto confirm is disabled.

  In some cases, an admin might want to confirm immediately, but the
  minimum value is 1, which means we have to wait at least one second.

  Also, the auto confirm resize logic runs in a periodic task; if the
  periodic task interval is 60s, then even if we set
  resize_confirm_window to 1 we might still wait up to 60s before auto
  confirm.

  So we should set the default value of resize_confirm_window to -1,
  meaning auto confirm is disabled, and let 0 mean auto confirm
  immediately.
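  The proposed semantics can be sketched with a hypothetical helper (not
  Nova's actual periodic-task code):

```python
# Sketch of the proposed resize_confirm_window semantics (assumed helper,
# not Nova's implementation):
#   -1 -> auto confirm disabled (proposed default)
#    0 -> confirm immediately
#    N -> confirm once the pending resize is at least N seconds old

def should_auto_confirm(resize_confirm_window, age_seconds):
    if resize_confirm_window < 0:
        return False
    return age_seconds >= resize_confirm_window
```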

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1267294/+subscriptions



[Yahoo-eng-team] [Bug 1272456] Re: Instance fails network setup TRACE in tempest tests

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1272456

Title:
  Instance fails network setup TRACE in tempest tests

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Gate test tempest-dsvm-large-ops fails due to failure setting up
  network on instance.

  
  
http://logs.openstack.org/26/68726/1/check/gate-tempest-dsvm-large-ops/69a94b4/

  Relevant Trace in n-cpu logs are here:
  http://logs.openstack.org/26/68726/1/check/gate-tempest-dsvm-large-
  ops/69a94b4/logs/screen-n-cpu.txt.gz#_2014-01-23_19_36_58_565

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1272456/+subscriptions



[Yahoo-eng-team] [Bug 1369386] Re: ipset_enabled = True default is not upgrade friendly

2014-09-17 Thread Armando Migliaccio
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1369386

Title:
  ipset_enabled = True default is not upgrade friendly

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in “neutron” package in Ubuntu:
  New

Bug description:
  Adds ipset support for Security Groups

  In commit 2562a9271c828e982a74593e8fd07be13b0cfc4a we recently added
  ipset support for Security Groups.

  The default is currently set to True, which is not upgrade friendly:
  anyone upgrading to the most recent Neutron code immediately has to
  have the ipset binary installed or their commands fail.

  It seems like we should set this to False by default (as a safe
  default) and allow users to opt in, since it is really an
  optimization.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1369386/+subscriptions



[Yahoo-eng-team] [Bug 1236116] Re: attaching all devstack quantum networks to a nova server results in un-deletable server

2014-09-17 Thread Sean Dague
Old incomplete bug. Please reopen if still an issue.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1236116

Title:
  attaching all devstack quantum networks to a nova server results in
  un-deletable server

Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Attaching multiple networks results in a backtrace and an undeletable
  instance when used with Neutron.

  Run reproducer as follows:
  [sdake@bigiron ~]$ less reproducer
  #!/bin/bash

  glance image-create --name=cirros-0.3.0-x86_64 --disk-format=qcow2 
--container-format=bare < cirros.img
  id1=`neutron net-list -c id -f csv --quote none | grep -v id | tail -1 | tr 
-d '\r'`
  id2=`neutron net-list -c id -f csv --quote none | grep -v id | head -1 | tr 
-d '\r'`
  nova boot --flavor m1.tiny --image cirros-0.3.0-x86_64 --security_group 
default --nic net-id=$id1 --nic net-id=$id2 cirros

  Run nova list, waiting for the server to become active.  Once the
  server is active, delete the server via the nova delete <id>
  operation.  The server will enter an undeletable state while still
  reporting "ACTIVE".  It is important that both networks are connected
  when the delete operation is run, as for some reason one of the
  networks gets disconnected by some component (not sure which).

  Further delete operations are either unsuccessful or block further
  ability to create instances, with instances finishing in the ERROR
  state after creation.

  n-cpu backtraces with:
  2013-10-06 18:03:11.269 ERROR nova.openstack.common.rpc.amqp 
[req-4f7cf630-d1eb-4fcd-af22-c11fa77fd3dd admin admin] Exception during message 
handling
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp Traceback (most 
recent call last):
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py, line 461, in _process_data
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py, line 172, in dispatch
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 353, in decorated_function
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/exception.py, line 90, in wrapped
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp payload)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/exception.py, line 73, in wrapped
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 243, in decorated_function
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 229, in decorated_function
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 294, in decorated_function
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 271, in decorated_function
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 258, in decorated_function
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
2013-10-06 18:13:02.653 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-06 18:13:02.653 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 294, in decorated_function
  2013-10-06 18:13:02.653 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-10-06 18:13:02.653 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 271, in decorated_function
  2013-10-06 18:13:02.653 TRACE nova.openstack.common.rpc.amqp e, 

[Yahoo-eng-team] [Bug 1370536] [NEW] DB migrations can go unchecked

2014-09-17 Thread Dan Smith
Public bug reported:

Currently DB migrations can be added to the tree without the
corresponding migration tests. This is bad and means that we have some
that are untested in the tree already.

** Affects: nova
 Importance: Medium
 Assignee: Dan Smith (danms)
 Status: In Progress


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370536

Title:
  DB migrations can go unchecked

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Currently DB migrations can be added to the tree without the
  corresponding migration tests. This is bad and means that we have some
  that are untested in the tree already.
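  One way to close this gap in the gate would be a check that fails
  whenever a migration script lacks a matching test. A minimal sketch,
  assuming migration files named `NNN_description.py` and a `_check_NNN`
  test-method convention (the function name and exact conventions here
  are illustrative, not Nova's actual code):

```python
import os
import re


def find_unchecked_migrations(repo_path, test_class):
    """Return migration versions that have no _check_<version> test.

    Hypothetical helper: assumes scripts named like "216_add_foo.py"
    and test methods named like "_check_216".
    """
    versions = []
    for name in os.listdir(repo_path):
        match = re.match(r'^(\d+)_\w+\.py$', name)
        if match:
            versions.append(int(match.group(1)))
    tested = set()
    for attr in dir(test_class):
        match = re.match(r'^_check_(\d+)$', attr)
        if match:
            tested.add(int(match.group(1)))
    return sorted(v for v in versions if v not in tested)
```

  A CI job could then assert that this returns an empty list and print
  the offending version numbers otherwise.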

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370544] [NEW] turn user to disabled instead of removing from project when we remove all roles

2014-09-17 Thread Dafna Ron
Public bug reported:

under Admin -> Projects -> Modify Users, if we remove all of a user's roles 
and save the changes, the user is removed from the list of users. I think 
this is not desirable behaviour: if the admin wanted to remove the user 
completely, they would do so directly rather than removing the roles.

I think we should instead move the user to disabled, and move the user back 
to enabled if a role is added later on; alternatively, we can just keep the 
user with no roles.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370544

Title:
  turn user to disabled instead of removing from project when we remove
  all roles

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  under Admin -> Projects -> Modify Users, if we remove all of a user's 
  roles and save the changes, the user is removed from the list of users. 
  I think this is not desirable behaviour: if the admin wanted to remove 
  the user completely, they would do so directly rather than removing the 
  roles.

  I think we should instead move the user to disabled, and move the user 
  back to enabled if a role is added later on; alternatively, we can just 
  keep the user with no roles.
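  The proposed behaviour could look roughly like this. A hedged sketch:
  `keystone` stands in for a keystoneclient handle, and the method names
  are illustrative rather than Horizon's actual API:

```python
def update_project_roles(keystone, user, project, new_role_ids):
    """Apply a new role set; disable the user instead of dropping them.

    Sketch of the proposed behaviour: if the admin removes every role,
    the account is disabled (and re-enabled once a role is granted
    again) rather than silently vanishing from the project user list.
    """
    current = {r.id for r in keystone.roles.roles_for_user(user, project)}
    for role_id in current - set(new_role_ids):
        keystone.roles.remove_user_role(user, role_id, project)
    for role_id in set(new_role_ids) - current:
        keystone.roles.add_user_role(user, role_id, project)
    # Proposed change: toggle `enabled` rather than removing the user.
    keystone.users.update_enabled(user, enabled=bool(new_role_ids))
```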

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370544/+subscriptions



[Yahoo-eng-team] [Bug 1287492] Re: Add one configuration item for cache-using in libvirt/hypervisor

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1287492

Title:
  Add one configuration item for cache-using in libvirt/hypervisor

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  We hit a scenario that requires disabling the cache on the Linux
  hypervisor.

  But some code written in libvirt/driver.py (including the
  'suspend'/'snapshot' actions) is hard-coded.

  For example:
  ---
  def suspend(self, instance):
      """Suspend the specified instance."""
      dom = self._lookup_by_name(instance['name'])
      self._detach_pci_devices(
          dom, pci_manager.get_instance_pci_devs(instance))
      dom.managedSave(0)

  So we need to add a configuration item for this in nova.conf and let
  the operator choose the behaviour.
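  A minimal sketch of the kind of knob being requested, assuming a new
  boolean flag (name and wiring invented for illustration) that lets
  the operator bypass the hard-coded managedSave() call:

```python
def suspend_instance(dom, use_managed_save=True):
    """Suspend a libvirt domain, honouring a hypothetical config flag.

    `dom` is a libvirt domain handle; `use_managed_save` would come
    from a new nova.conf option (name invented for this sketch).
    """
    if use_managed_save:
        dom.managedSave(0)   # current hard-coded behaviour: save file on disk
    else:
        dom.suspend()        # plain suspend, no managed-save cache file
```

  The real change would need a review of every hard-coded call site
  ('suspend', 'snapshot', etc.), not just this one.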

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1287492/+subscriptions



[Yahoo-eng-team] [Bug 1320273] Re: _resize files are not cleaned when we destroy instances after failed resize because of disk space

2014-09-17 Thread Sean Dague
no reproduce

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1320273

Title:
  _resize files are not cleaned when we destroy instances after failed
  resize because of disk space

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I configured my setup to work with preallocation.
  I resized instances and the resize failed due to insufficient disk space.
  When you destroy the instances, the _resize directory is not removed from 
the /var/lib/nova/instances dir:

  [root@orange-vdsf instances(keystone_admin)]# ls -l /var/lib/nova/instances/
  total 52
  drwxr-xr-x 2 nova nova 4096 May 16 17:34 1a98eefe-ba41-49b7-931a-ebe54796e343
  drwxr-xr-x 2 nova nova 4096 May 16 17:33 
1a98eefe-ba41-49b7-931a-ebe54796e343_resize
  drwxr-xr-x 2 nova nova 4096 May 16 17:32 5f87fd7e-4f85-4c41-ab12-3bf680160281
  drwxr-xr-x 2 nova nova 4096 May 16 17:29 
5f87fd7e-4f85-4c41-ab12-3bf680160281_resize
  drwxr-xr-x 2 nova nova 4096 May 16 17:39 7610109e-b631-40a5-9fa0-1bf150560503
  drwxr-xr-x 2 nova nova 4096 May 16 17:38 
7610109e-b631-40a5-9fa0-1bf150560503_resize
  drwxr-xr-x 2 nova nova 4096 May 16 17:31 a4aefac9-feb8-40d7-9228-1df6a02d9e13
  drwxr-xr-x 2 nova nova 4096 May 16 17:29 
a4aefac9-feb8-40d7-9228-1df6a02d9e13_resize
  drwxr-xr-x 2 nova nova 4096 May 16 13:20 _base
  -rw-r--r-- 1 nova nova0 May 16 17:48 compute_nodes
  drwxr-xr-x 2 nova nova 4096 May 16 17:40 d1e28a92-3ba2-402e-a798-ff8ff9a6bd98
  drwxr-xr-x 2 nova nova 4096 May 16 17:38 
d1e28a92-3ba2-402e-a798-ff8ff9a6bd98_resize
  drwxr-xr-x 2 nova nova 4096 May 15 11:09 locks
  drwxr-xr-x 2 nova nova 4096 May 15 18:34 snapshots
  [root@orange-vdsf instances(keystone_admin)]# 
  [root@orange-vdsf instances(keystone_admin)]# 
  [root@orange-vdsf instances(keystone_admin)]# 
  [root@orange-vdsf instances(keystone_admin)]# 
  [root@orange-vdsf instances(keystone_admin)]# 
  [root@orange-vdsf instances(keystone_admin)]# 
  [root@orange-vdsf instances(keystone_admin)]# ls -l /var/lib/nova/instances/
  total 32
  drwxr-xr-x 2 nova nova 4096 May 16 17:33 
1a98eefe-ba41-49b7-931a-ebe54796e343_resize
  drwxr-xr-x 2 nova nova 4096 May 16 17:29 
5f87fd7e-4f85-4c41-ab12-3bf680160281_resize
  drwxr-xr-x 2 nova nova 4096 May 16 17:38 
7610109e-b631-40a5-9fa0-1bf150560503_resize
  drwxr-xr-x 2 nova nova 4096 May 16 17:29 
a4aefac9-feb8-40d7-9228-1df6a02d9e13_resize
  drwxr-xr-x 2 nova nova 4096 May 16 13:20 _base
  -rw-r--r-- 1 nova nova0 May 16 17:48 compute_nodes
  drwxr-xr-x 2 nova nova 4096 May 16 17:38 
d1e28a92-3ba2-402e-a798-ff8ff9a6bd98_resize
  drwxr-xr-x 2 nova nova 4096 May 15 11:09 locks
  drwxr-xr-x 2 nova nova 4096 May 15 18:34 snapshots
  [root@orange-vdsf instances(keystone_admin)]# 

  
  Running qemu-img info, I can see that the disk is not deleted either:

  [root@orange-vdsf instances(keystone_admin)]# qemu-img info 
/var/lib/nova/instances/1a98eefe-ba41-49b7-931a-ebe54796e343_resize/disk
  image: 
/var/lib/nova/instances/1a98eefe-ba41-49b7-931a-ebe54796e343_resize/disk
  file format: raw
  virtual size: 1.0G (1073741824 bytes)
  disk size: 1.0G
  [root@orange-vdsf instances(keystone_admin)]# 

  (The instances were launched with the tiny flavour, which is 1 GB.)
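  As a stop-gap audit, leftover resize directories can be listed (not
  deleted) so an operator can review them before any cleanup. A sketch,
  assuming the default instances path from the report:

```python
from pathlib import Path


def stale_resize_dirs(instances_dir="/var/lib/nova/instances"):
    """Return leftover "*_resize" directories under the instances path.

    Listing only: whether a given directory is actually stale (versus a
    resize still in flight) needs checking against nova's own state.
    """
    root = Path(instances_dir)
    if not root.is_dir():
        return []
    return sorted(str(p) for p in root.iterdir()
                  if p.is_dir() and p.name.endswith("_resize"))
```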

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1320273/+subscriptions



[Yahoo-eng-team] [Bug 1281217] Re: Improve error notification when schedule_run_instance fails

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1281217

Title:
  Improve error notification when schedule_run_instance fails

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When trying to spawn a new instance on an OpenStack installation and
  the spawn fails with an error (for instance a driver error), you get
  the message "No valid host was found." with an empty reason.

  It would help to have a more meaningful message when possible in order
  to diagnose the problem

  This would result for instance in having a message like

  Error: No valid host was found. Error from host: precise64 (node
  precise64): InstanceDeployFailure: Image container format not
  supported (ami) ].

  instead of just (as it is currently)

  Error: No valid host was found.
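  The requested improvement amounts to carrying per-host failure
  reasons into the final message instead of discarding them. A sketch
  of the formatting only; the scheduler plumbing that collects the
  reasons is not shown and the names here are illustrative:

```python
def no_valid_host_message(host_errors):
    """Format a scheduling failure with per-host reasons.

    `host_errors` maps host name -> error string; it may be empty when
    no per-host detail was collected.
    """
    msg = "No valid host was found."
    details = "; ".join("Error from host: %s: %s" % (host, err)
                        for host, err in sorted(host_errors.items()))
    return "%s %s" % (msg, details) if details else msg
```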

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1281217/+subscriptions



[Yahoo-eng-team] [Bug 1340787] Re: nova unit test virtual environment creation issue

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340787

Title:
  nova unit test virtual environment creation issue

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I tried to run the nova unit tests in a Red Hat virtual machine:
  Linux rhel6-madhu 2.6.32-431.20.3.el6.x86_64 #1 SMP Fri Jun 6 18:30:54 EDT 
2014 x86_64 x86_64 x86_64 GNU/Linux

  cd /etc/nova/
  ./run_tests.sh -V nova.tests.scheduler

  I am getting the error when run_tests.sh creates the .venv and tries
  to upgrade glance while installing:

  pip install cryptography.

  I even tried to run pip install cryptography manually on Red Hat and it 
gave the same error.
  Error pasted here: http://pastebin.com/tAsSRFuA

  I made sure the following were done:

  yum upgrade
  yum install gcc libffi-devel python-devel openssl-devel

  This issue happens only on Red Hat. I tried CentOS 6.5 and it works
  fine. Any help to fix this issue would be appreciated.

  Note: this issue is reproducible on any Red Hat system.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340787/+subscriptions



[Yahoo-eng-team] [Bug 1262075] Re: tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML.test_rescue_unrescue_instance[gate, smoke] failed

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1262075

Title:
  
tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML.test_rescue_unrescue_instance[gate,smoke]
  failed

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  http://logs.openstack.org/20/62520/1/gate/gate-tempest-dsvm-postgres-
  full/f3a9033/

  2013-12-18 05:47:02,936 Request: GET 
http://127.0.0.1:8774/v2/0c07b2e10c454d3fa80773627d2dea67/servers/b7e78cb2-a215-4909-8fee-ed0f9cdd11cd
  2013-12-18 05:47:02,936 Request Headers: {'Content-Type': 'application/xml', 
'Accept': 'application/xml', 'X-Auth-Token': 'Token omitted'}
  2013-12-18 05:47:03,077 Response Status: 200
  2013-12-18 05:47:03,077 Nova request id: 
req-1456f54f-7299-49fc-b3f5-66455360eb67
  2013-12-18 05:47:03,077 Response Headers: {'content-length': '2167', 
'content-location': 
u'http://127.0.0.1:8774/v2/0c07b2e10c454d3fa80773627d2dea67/servers/b7e78cb2-a215-4909-8fee-ed0f9cdd11cd',
 'date': 'Wed, 18 Dec 2013 05:47:03 GMT', 'content-type': 'application/xml', 
'connection': 'close'}
  2013-12-18 05:47:03,078 Response Body: ?xml version='1.0' encoding='UTF-8'?
  server 
xmlns:OS-DCF=http://docs.openstack.org/compute/ext/disk_config/api/v1.1; 
xmlns:os-extended-volumes=http://docs.openstack.org/compute/ext/extended_volumes/api/v1.1;
 xmlns:OS-EXT-IPS=http://docs.openstack.org/compute/ext/extended_ips/api/v1.1; 
xmlns:atom=http://www.w3.org/2005/Atom; 
xmlns:OS-EXT-IPS-MAC=http://docs.openstack.org/compute/ext/extended_ips_mac/api/v1.1;
 xmlns:OS-SRV-USG=http://docs.openstack.org/compute/ext/server_usage/api/v1.1; 
xmlns:OS-EXT-STS=http://docs.openstack.org/compute/ext/extended_status/api/v1.1;
 
xmlns:OS-EXT-AZ=http://docs.openstack.org/compute/ext/extended_availability_zone/api/v2;
 xmlns=http://docs.openstack.org/compute/api/v1.1; status=SHUTOFF 
updated=2013-12-18T05:43:58Z 
hostId=eeb1a42b0840fab07838a1e353499b1a9c944d2197844cf6814eddad 
name=ServerRescueTestXML-instance-tempest-189395512 
created=2013-12-18T05:43:13Z userId=a2733398afa247febff6e65b90712351 
tenantId=0c07b2e10c454d3fa80773627d2dea67 accessIPv4= accessIPv6= 
 id=b7e78cb2-a215-4909-8fee-ed0f9cdd11cd key_name=None config_drive= 
OS-SRV-USG:terminated_at=None OS-SRV-USG:launched_at=2013-12-18 
05:43:47.580612 OS-EXT-STS:vm_state=stopped OS-EXT-STS:task_state=None 
OS-EXT-STS:power_state=4 OS-EXT-AZ:availability_zone=nova 
OS-DCF:diskConfig=MANUALimage 
id=31de6d39-e307-4c81-9959-413efc2e5fa7atom:link 
href=http://127.0.0.1:8774/0c07b2e10c454d3fa80773627d2dea67/images/31de6d39-e307-4c81-9959-413efc2e5fa7;
 rel=bookmark//imageflavor id=42atom:link 
href=http://127.0.0.1:8774/0c07b2e10c454d3fa80773627d2dea67/flavors/42; 
rel=bookmark//flavormetadata/addressesnetwork id=privateip 
OS-EXT-IPS:type=fixed version=4 addr=10.1.0.64 
OS-EXT-IPS-MAC:mac_addr=fa:16:3e:67:09:e2//network/addressesatom:link 
href=http://127.0.0.1:8774/v2/0c07b2e10c454d3fa80773627d2dea67/servers/b7e78cb2-a215-4909-8fee-ed0f9cdd11cd;
 rel=self/atom:link 
href=http://127.0.0.1:8774/0c07b2e10c454d3fa80773627d2dea67/ser
 vers/b7e78cb2-a2
  2013-12-18 05:47:03,078 Large body (2167) md5 summary: 
d0d26dbee429d4a58d9073037362e4fa
  2013-12-18 05:47:04,080 Request: GET 
http://127.0.0.1:8774/v2/0c07b2e10c454d3fa80773627d2dea67/servers/b7e78cb2-a215-4909-8fee-ed0f9cdd11cd
  2013-12-18 05:47:04,080 Request Headers: {'Content-Type': 'application/xml', 
'Accept': 'application/xml', 'X-Auth-Token': 'Token omitted'}
  2013-12-18 05:47:04,158 Response Status: 200
  2013-12-18 05:47:04,158 Nova request id: 
req-0a3509d6-9c61-439f-9be5-056de21d60ef
  2013-12-18 05:47:04,158 Response Headers: {'content-length': '2167', 
'content-location': 
u'http://127.0.0.1:8774/v2/0c07b2e10c454d3fa80773627d2dea67/servers/b7e78cb2-a215-4909-8fee-ed0f9cdd11cd',
 'date': 'Wed, 18 Dec 2013 05:47:04 GMT', 'content-type': 'application/xml', 
'connection': 'close'}
  2013-12-18 05:47:04,159 Response Body: ?xml version='1.0' encoding='UTF-8'?
  server 
xmlns:OS-DCF=http://docs.openstack.org/compute/ext/disk_config/api/v1.1; 
xmlns:os-extended-volumes=http://docs.openstack.org/compute/ext/extended_volumes/api/v1.1;
 xmlns:OS-EXT-IPS=http://docs.openstack.org/compute/ext/extended_ips/api/v1.1; 
xmlns:atom=http://www.w3.org/2005/Atom; 
xmlns:OS-EXT-IPS-MAC=http://docs.openstack.org/compute/ext/extended_ips_mac/api/v1.1;
 xmlns:OS-SRV-USG=http://docs.openstack.org/compute/ext/server_usage/api/v1.1; 
xmlns:OS-EXT-STS=http://docs.openstack.org/compute/ext/extended_status/api/v1.1;
 
xmlns:OS-EXT-AZ=http://docs.openstack.org/compute/ext/extended_availability_zone/api/v2;
 xmlns=http://docs.openstack.org/compute/api/v1.1; status=SHUTOFF 
updated=2013-12-18T05:43:58Z 
hostId=eeb1a42b0840fab07838a1e353499b1a9c944d2197844cf6814eddad 
name=ServerRescueTestXML-instance-tempest-189395512 

[Yahoo-eng-team] [Bug 1224412] Re: ec2 metadata block device mapping duplicate entry

2014-09-17 Thread Sean Dague
old bug

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1224412

Title:
  ec2 metadata block device mapping duplicate entry

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The ec2 metadata now seems to have both an ami and root disk entry.

  It seems like the work to add the root disk into the block device
  mapping in the database has caused this duplicate entry.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1224412/+subscriptions



[Yahoo-eng-team] [Bug 1244762] Re: tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance fails sporadically

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244762

Title:
  
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance
  fails sporadically

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  See: http://logs.openstack.org/87/44787/16/check/check-tempest-
  devstack-vm-neutron/d2ede4d/console.html

  2013-10-25 18:06:37.957 | 
==
  2013-10-25 18:06:37.959 | FAIL: 
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance[gate,smoke]
  2013-10-25 18:06:37.959 | 
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance[gate,smoke]
  2013-10-25 18:06:37.959 | 
--
  2013-10-25 18:06:37.959 | _StringException: Empty attachments:
  2013-10-25 18:06:37.959 |   stderr
  2013-10-25 18:06:37.960 |   stdout
  2013-10-25 18:06:37.960 | 
  2013-10-25 18:06:37.960 | pythonlogging:'': {{{
  2013-10-25 18:06:37.960 | 2013-10-25 17:59:08,821 state: pending
  2013-10-25 18:06:37.960 | 2013-10-25 17:59:14,092 State transition pending 
== error 5 second
  2013-10-25 18:06:37.961 | }}}
  2013-10-25 18:06:37.961 | 
  2013-10-25 18:06:37.961 | Traceback (most recent call last):
  2013-10-25 18:06:37.961 |   File 
tempest/thirdparty/boto/test_ec2_instance_run.py, line 150, in 
test_run_stop_terminate_instance
  2013-10-25 18:06:37.961 | self.assertInstanceStateWait(instance, 
running)
  2013-10-25 18:06:37.961 |   File tempest/thirdparty/boto/test.py, line 356, 
in assertInstanceStateWait
  2013-10-25 18:06:37.962 | state = self.waitInstanceState(lfunction, 
wait_for)
  2013-10-25 18:06:37.962 |   File tempest/thirdparty/boto/test.py, line 341, 
in waitInstanceState
  2013-10-25 18:06:37.962 | self.valid_instance_state)
  2013-10-25 18:06:37.962 |   File tempest/thirdparty/boto/test.py, line 332, 
in state_wait_gone
  2013-10-25 18:06:37.962 | self.assertIn(state, valid_set | self.gone_set)
  2013-10-25 18:06:37.963 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 328, in 
assertIn
  2013-10-25 18:06:37.963 | self.assertThat(haystack, Contains(needle))
  2013-10-25 18:06:37.963 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 417, in 
assertThat
  2013-10-25 18:06:37.963 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
  2013-10-25 18:06:37.963 | MismatchError: u'error' not in set(['paused', 
'terminated', 'running', 'stopped', 'pending', '_GONE', 'stopping', 
'shutting-down'])

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244762/+subscriptions



[Yahoo-eng-team] [Bug 1315580] Re: FloatingIpNotAssociated results in BotoServerError: 500 Internal Server Error

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1315580

Title:
  FloatingIpNotAssociated results in BotoServerError: 500 Internal
  Server Error

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Saw a tempest failure in the gate:

  http://logs.openstack.org/32/91732/1/check/check-tempest-dsvm-
  full/c0e77e9/console.html#_2014-05-02_07_29_01_528

  2014-05-02 07:29:01.527 | Traceback (most recent call last):
  2014-05-02 07:29:01.527 |   File 
tempest/thirdparty/boto/test_ec2_instance_run.py, line 334, in 
test_compute_with_volumes
  2014-05-02 07:29:01.527 | address.disassociate()
  2014-05-02 07:29:01.527 |   File 
/usr/local/lib/python2.7/dist-packages/boto/ec2/address.py, line 126, in 
disassociate
  2014-05-02 07:29:01.527 | dry_run=dry_run
  2014-05-02 07:29:01.527 |   File 
/usr/local/lib/python2.7/dist-packages/boto/ec2/connection.py, line 1996, in 
disassociate_address
  2014-05-02 07:29:01.528 | return self.get_status('DisassociateAddress', 
params, verb='POST')
  2014-05-02 07:29:01.528 |   File 
/usr/local/lib/python2.7/dist-packages/boto/connection.py, line 1182, in 
get_status
  2014-05-02 07:29:01.528 | response = self.make_request(action, params, 
path, verb)
  2014-05-02 07:29:01.528 |   File 
/usr/local/lib/python2.7/dist-packages/boto/connection.py, line 1089, in 
make_request
  2014-05-02 07:29:01.528 | return self._mexe(http_request)
  2014-05-02 07:29:01.528 |   File 
/usr/local/lib/python2.7/dist-packages/boto/connection.py, line 1002, in _mexe
  2014-05-02 07:29:01.528 | raise BotoServerError(response.status, 
response.reason, body)
  2014-05-02 07:29:01.528 | BotoServerError: BotoServerError: 500 Internal 
Server Error
  2014-05-02 07:29:01.528 | ?xml version=1.0?
  2014-05-02 07:29:01.528 | 
ResponseErrorsErrorCodeFloatingIpNotAssociated/CodeMessageUnknown 
error 
occurred./Message/Error/ErrorsRequestIDreq-bd454aa5-541f-4e35-b050-b0a0a56afef4/RequestID/Response

  
  This seems to happen during some pretty generic teardown in a test that 
was otherwise passing...

  The issue seems to be a race in the API, or a timeout on the test
  side; there were two requests to disassociate the IP:

  req-ce60e9aa-db14-4c93-b943-d3fcf51f9673 -
  http://logs.openstack.org/32/91732/1/check/check-tempest-dsvm-
  full/c0e77e9/logs/screen-n-api.txt.gz#_2014-05-02_07_19_29_827

  And then later another that raised the error:

  req-bd454aa5-541f-4e35-b050-b0a0a56afef4 -
  http://logs.openstack.org/32/91732/1/check/check-tempest-dsvm-
  full/c0e77e9/logs/screen-n-api.txt.gz#_2014-05-02_07_20_03_393

  What confuses me is that later (after the failed request) - I see
  another log line for req-ce60e9aa-db14-4c93-b943-d3fcf51f9673 that
  seems to indicate success?

  http://logs.openstack.org/32/91732/1/check/check-tempest-dsvm-
  full/c0e77e9/logs/screen-n-api.txt.gz#_2014-05-02_07_20_04_099

  So that doesn't seem right...
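  One way to avoid the 500 on the racing retry would be to treat
  "already disassociated" as success, making DisassociateAddress
  idempotent. A sketch under that assumption; only the exception name
  mirrors the log, the surrounding API is illustrative:

```python
class FloatingIpNotAssociated(Exception):
    """Stand-in for the exception seen in the EC2 API log."""


def disassociate_idempotent(disassociate, ip):
    """Call `disassociate`, swallowing "not associated" as success.

    If a concurrent or earlier request already disassociated the IP,
    the outcome the caller wanted has been achieved, so there is no
    reason to surface a 500 to the client.
    """
    try:
        disassociate(ip)
    except FloatingIpNotAssociated:
        pass
```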

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1315580/+subscriptions



[Yahoo-eng-team] [Bug 1290746] Re: Nova should allow HARD_REBOOT to instances in the state REBOOTING_HARD

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290746

Title:
  Nova should allow HARD_REBOOT to instances in the state REBOOTING_HARD

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Currently when trying to issue a hard reboot to an instance, the logic
  in nova/compute/api.py says:

  if (reboot_type == 'HARD' and
          instance['task_state'] == task_states.REBOOTING_HARD):
      raise exception.InstanceInvalidState

  This means there's no user-facing way to rescue an instance that is
  stuck in REBOOTING_HARD except for DELETE.

  We should allow hard reboot to happen in the state REBOOTING_HARD.
  Some new locking code will be required.
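  The proposed relaxation can be expressed as a small predicate. A
  sketch, with a stand-in constant for `task_states.REBOOTING_HARD`;
  this is the reporter's suggestion, not current Nova behaviour:

```python
REBOOTING_HARD = 'rebooting_hard'   # stand-in for the task_states constant


def reboot_allowed(reboot_type, task_state):
    """Decide whether a reboot request should be accepted.

    Proposed behaviour: a HARD reboot is always allowed, even while the
    instance is already in REBOOTING_HARD, so a stuck instance can be
    rescued without resorting to DELETE. A SOFT reboot over a pending
    hard reboot is still rejected.
    """
    if reboot_type == 'HARD':
        return True
    return task_state != REBOOTING_HARD
```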

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290746/+subscriptions



[Yahoo-eng-team] [Bug 1351124] Re: py26 Unit test failure: nova.tests.integrated.test_api_samples.AdminActionsSamplesXmlTest.test_post_unlock_server

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1351124

Title:
  py26 Unit test failure:
  
nova.tests.integrated.test_api_samples.AdminActionsSamplesXmlTest.test_post_unlock_server

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  jenkins py26 unit test failed:

  2014-07-31 12:12:40.050 | Traceback (most recent call last):
  2014-07-31 12:12:40.050 |   File nova/tests/integrated/test_api_samples.py, 
line 103, in setUp
  2014-07-31 12:12:40.050 | super(ApiSampleTestBaseV2, self).setUp()
  2014-07-31 12:12:40.050 |   File 
nova/tests/integrated/integrated_helpers.py, line 67, in setUp
  2014-07-31 12:12:40.050 | super(_IntegratedTestBase, self).setUp()
  2014-07-31 12:12:40.050 |   File nova/tests/virt/baremetal/db/base.py, line 
49, in setUp
  2014-07-31 12:12:40.051 | sqlite_clean_db=None)
  2014-07-31 12:12:40.200 |   File nova/test.py, line 98, in __init__
  2014-07-31 12:12:40.200 | if db_migrate.db_version() < 
db_migrate.db_initial_version():
  2014-07-31 12:12:40.200 |   File nova/virt/baremetal/db/migration.py, line 
35, in db_version
  2014-07-31 12:12:40.200 | return IMPL.db_version()
  2014-07-31 12:12:40.200 |   File nova/utils.py, line 426, in __getattr__
  2014-07-31 12:12:40.201 | backend = self.__get_backend()
  2014-07-31 12:12:40.201 |   File nova/utils.py, line 422, in __get_backend
  2014-07-31 12:12:40.201 | self.__backend = __import__(name, None, None, 
fromlist)
  2014-07-31 12:12:40.201 | ImportError: No module named migration
  2014-07-31 12:12:40.201 | 
==
  2014-07-31 12:12:40.201 | FAIL: 
nova.tests.integrated.test_api_samples.BareMetalNodesXmlTest.test_delete_node
  2014-07-31 12:12:40.201 | tags: worker-3
  2014-07-31 12:12:40.201 | 
--
  2014-07-31 12:12:40.201 | Empty attachments:
  2014-07-31 12:12:40.201 |   pythonlogging:''
  2014-07-31 12:12:40.201 |   stderr
  2014-07-31 12:12:40.202 |   stdout
  2014-07-31 12:12:40.202 | 
  2014-07-31 12:12:40.202 | Traceback (most recent call last):
  2014-07-31 12:12:40.202 |   File nova/tests/integrated/test_api_samples.py, 
line 103, in setUp
  2014-07-31 12:12:40.202 | super(ApiSampleTestBaseV2, self).setUp()
  2014-07-31 12:12:40.202 |   File 
nova/tests/integrated/integrated_helpers.py, line 67, in setUp
  2014-07-31 12:12:40.202 | super(_IntegratedTestBase, self).setUp()
  2014-07-31 12:12:40.202 |   File nova/tests/virt/baremetal/db/base.py, line 
49, in setUp
  2014-07-31 12:12:40.202 | sqlite_clean_db=None)
  2014-07-31 12:12:40.202 |   File nova/test.py, line 98, in __init__
  2014-07-31 12:12:40.202 | if db_migrate.db_version() < 
db_migrate.db_initial_version():
  2014-07-31 12:12:40.202 |   File nova/virt/baremetal/db/migration.py, line 
35, in db_version
  2014-07-31 12:12:40.203 | return IMPL.db_version()
  2014-07-31 12:12:40.203 |   File nova/utils.py, line 426, in __getattr__
  2014-07-31 12:12:40.203 | backend = self.__get_backend()
  2014-07-31 12:12:40.203 |   File nova/utils.py, line 422, in __get_backend
  2014-07-31 12:12:40.203 | self.__backend = __import__(name, None, None, 
fromlist)
  2014-07-31 12:12:40.203 | ImportError: No module named migration
  2014-07-31 12:12:40.203 | 
==
  2014-07-31 12:12:40.203 | FAIL: 
nova.tests.integrated.test_api_samples.BareMetalNodesXmlTest.test_show_node

  
  2014-07-31 12:12:40.046 | Traceback (most recent call last):
  2014-07-31 12:12:40.046 |   File nova/tests/integrated/test_api_samples.py, 
line 3818, in test_show_interfaces
  2014-07-31 12:12:40.046 | instance_uuid = self._post_server()
  2014-07-31 12:12:40.046 |   File nova/tests/integrated/test_api_samples.py, 
line 180, in _post_server
  2014-07-31 12:12:40.046 | response = self._do_post('servers', 
'server-post-req', subs)
  2014-07-31 12:12:40.046 |   File 
nova/tests/integrated/api_samples_test_base.py, line 312, in _do_post
  2014-07-31 12:12:40.046 | body = self._read_template(name) % subs
  2014-07-31 12:12:40.046 |   File 
nova/tests/integrated/api_samples_test_base.py, line 101, in _read_template
  2014-07-31 12:12:40.047 | with open(template) as inf:
  2014-07-31 12:12:40.047 | IOError: [Errno 2] No such file or directory: 
'/home/jenkins/workspace/gate-nova-python26/CA/nova/tests/integrated/api_samples/os-attach-interfaces/server-post-req.xml.tpl'
  2014-07-31 12:12:40.047 | 
==
  2014-07-31 12:12:40.047 | FAIL: 
nova.tests.integrated.test_api_samples.BareMetalExtStatusJsonTest.test_create_node_with_address
  2014-07-31 12:12:40.047 | tags: worker-3

  below is the 

[Yahoo-eng-team] [Bug 1350892] Re: Nova VMWare provisioning errors

2014-09-17 Thread Sean Dague
30 days no response

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350892

Title:
  Nova VMWare provisioning errors

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I am trying to provision a RHEL VMware image (a custom VMDK created
  from a template).

  The OpenStack dashboard shows the instance in a provisioning state
  for a long time, yet there is no activity on vCenter. (A CirrOS VMDK
  converted with qemu-img deploys without errors.)

  Requesting help here.

  2014-07-31 16:48:44.017 2931 WARNING nova.openstack.common.loopingcall [-] 
task run outlasted interval by 11.98221 sec
  2014-07-31 16:50:27.015 2931 WARNING nova.openstack.common.loopingcall [-] 
task run outlasted interval by 12.987183 sec
  2014-07-31 16:51:57.715 2931 WARNING nova.openstack.common.loopingcall [-] 
task run outlasted interval by 0.696367 sec
  2014-07-31 16:58:32.860 2931 ERROR suds.client [-] <?xml version="1.0" encoding="UTF-8"?>
  <SOAP-ENV:Envelope xmlns:ns0="urn:vim25" xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
     <ns1:Body>
        <ns0:SessionIsActive>
           <ns0:_this type="SessionManager">SessionManager</ns0:_this>
           <ns0:sessionID>5216dd75-609c-3c5a-b7e6-9708bd7dc786</ns0:sessionID>
           <ns0:userName>Administrator</ns0:userName>
        </ns0:SessionIsActive>
     </ns1:Body>
  </SOAP-ENV:Envelope>
  2014-07-31 16:58:32.863 2931 WARNING nova.virt.vmwareapi.driver 
[req-e6f5ba33-a37a-476b-a6b6-801ccd80bac6 6b875fcfe8344addb87382298c1a75be 
dad97a29e60849a2a6ad9d0ffb353161] Unable to validate session 
5216dd75-609c-3c5a-b7e6-9708bd7dc786!
  2014-07-31 16:58:32.863 2931 WARNING nova.virt.vmwareapi.driver 
[req-e6f5ba33-a37a-476b-a6b6-801ccd80bac6 6b875fcfe8344addb87382298c1a75be 
dad97a29e60849a2a6ad9d0ffb353161] Session 5216dd75-609c-3c5a-b7e6-9708bd7dc786 
is inactive!
  2014-07-31 16:58:48.406 2931 ERROR suds.client [-] <?xml version="1.0" encoding="UTF-8"?>
  <SOAP-ENV:Envelope xmlns:ns0="urn:vim25" xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
     <ns1:Body>
        <ns0:TerminateSession>
           <ns0:_this type="SessionManager">SessionManager</ns0:_this>
           <ns0:sessionId>5216dd75-609c-3c5a-b7e6-9708bd7dc786</ns0:sessionId>
        </ns0:TerminateSession>
     </ns1:Body>
  </SOAP-ENV:Envelope>

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1350892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328293] Re: tempest test_delete_server_while_in_attached_volume fails Invalid volume status available/error

2014-09-17 Thread Sean Dague
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328293

Title:
  tempest test_delete_server_while_in_attached_volume fails Invalid
  volume status available/error

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  This keystone change [1] failed with an error in the
  
tempest.api.compute.servers.test_delete_server.DeleteServersTestXML.test_delete_server_while_in_attached_volume
  test in the gate-tempest-dsvm-full job.

  Here's the log:

  http://logs.openstack.org/45/84945/13/gate/gate-tempest-dsvm-
  full/600e742/console.html.gz#_2014-06-06_18_53_45_253

  The error tempest reports is

   "Details: Volume None failed to reach in-use status within the
  required time (196 s)."

  So it's waiting for the volume to reach a status which it doesn't get
  to in 196 s. Looks like the volume is in attaching status.

  So maybe tempest isn't waiting long enough, or nova / cinder is hung
  or just takes too long?

  There's also a problem in that tempest says the volume is None when it
  should be the volume ID (49fbde74-6e6a-4781-a271-787aa2deb674)

  [1] https://review.openstack.org/#/c/84945/
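  The wait tempest performs here is a status-polling loop with a
  deadline: when the deadline passes while the volume is still e.g.
  'attaching', it raises the "failed to reach in-use status within the
  required time (196 s)" error quoted above. An illustrative sketch
  with injectable clock/sleep for testability (this is not tempest's
  actual waiter, just the pattern):

```python
import time


def wait_for_volume_status(get_status, target="in-use",
                           timeout=196, interval=1.0,
                           clock=time.time, sleep=time.sleep):
    """Poll get_status() until it returns `target` or `timeout` elapses.

    Raises RuntimeError with the last observed status if the deadline
    passes first -- the analogue of the tempest failure above, where the
    volume never left 'attaching'.
    """
    deadline = clock() + timeout
    status = get_status()
    while status != target:
        if clock() >= deadline:
            raise RuntimeError(
                "Volume failed to reach %s status within the required "
                "time (%s s); last status: %s" % (target, timeout, status))
        sleep(interval)
        status = get_status()
    return status
```

  Note that a loop like this can only report what get_status() hands it;
  if the caller passes a volume reference that resolves to None, the
  error message will say "Volume None", which matches the second problem
  described above.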

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1328293/+subscriptions


