[Yahoo-eng-team] [Bug 1227027] Re: [OSSA 2014-001] Insecure directory permissions with snapshot code (CVE-2013-7048)

2014-01-27 Thread Thierry Carrez
** Changed in: ossa
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1227027

Title:
  [OSSA 2014-001] Insecure directory permissions with snapshot code
  (CVE-2013-7048)

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  Fix Committed
Status in OpenStack Compute (nova) havana series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  In the following commit:

  commit 46de2d1e2d0abd6fdcd4da13facaf3225c721f5e
  Author: Rafi Khardalian r...@metacloud.com
  Date:   Sat Jan 26 09:02:19 2013 +

  Libvirt: Add support for live snapshots
  
  blueprint libvirt-live-snapshots
  

  There was the following chunk of code

   snapshot_directory = CONF.libvirt_snapshots_directory
   fileutils.ensure_tree(snapshot_directory)
   with utils.tempdir(dir=snapshot_directory) as tmpdir:
       try:
           out_path = os.path.join(tmpdir, snapshot_name)
  -        snapshot.extract(out_path, image_format)
  +        if live_snapshot:
  +            # NOTE (rmk): libvirt needs to be able to write to the
  +            # temp directory, which is owned by nova.
  +            utils.execute('chmod', '777', tmpdir, run_as_root=True)
  +            self._live_snapshot(virt_dom, disk_path, out_path,
  +                                image_format)
  +        else:
  +            snapshot.extract(out_path, image_format)

  Making the temporary directory 777 does indeed give QEMU and libvirt
  permission to write there, but only because it gives every user on the
  whole system permission to write there. The directory name is
  unpredictable since it uses 'tempdir', but that does not eliminate the
  security risk of making it world writable.

  This flaw is highlighted by the following public commit, which makes
  the mode configurable but still defaults to the insecure 777.

  https://review.openstack.org/#/c/46645/
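A less permissive alternative can be sketched as follows. This is illustrative only, not the fix that landed: the helper name is hypothetical, and the uid/gid of the qemu user are assumed to be looked up elsewhere (e.g. via pwd.getpwnam) with sufficient privileges to chown.

```python
import os
import stat
import tempfile


def make_snapshot_tmpdir(base_dir, qemu_uid, qemu_gid):
    """Create a snapshot scratch directory that only nova and the
    libvirt/qemu user can enter, instead of chmod 777.

    qemu_uid/qemu_gid are assumed to be resolved by the caller;
    chown requires appropriate privileges.
    """
    tmpdir = tempfile.mkdtemp(dir=base_dir)
    # Hand the directory to the qemu group and allow group rwx only;
    # "other" users get no access at all (0o770 vs the insecure 0o777).
    os.chown(tmpdir, qemu_uid, qemu_gid)
    os.chmod(tmpdir, stat.S_IRWXU | stat.S_IRWXG)  # 0o770
    return tmpdir
```

The point is that write access for libvirt can be granted via ownership rather than by opening the directory to every local user.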

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1227027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249409] Re: Image v2 API allows reupload of images, but does not update size

2014-01-27 Thread Flavio Percoco
Marked as invalid based on @David's comment

** Changed in: glance
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1249409

Title:
  Image v2 API allows reupload of images, but does not update size

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  Image v2 API allows reupload of images, but does not update size

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1249409/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273139] [NEW] HostBinaryNotFound exception isn't caught in service_update

2014-01-27 Thread Haiwei Xu
Public bug reported:

When I update a service with a non-existent hostname or binary, I get a
500 error in the nova api log.

2014-01-28 03:15:29.829 ERROR nova.api.openstack.extensions 
[req-5b1f3fc5-349a-4415-a4f5-63eab1c259a0 admin demo] Unexpected exception in 
API method
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/api/openstack/extensions.py, line 470, in wrapped
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions return f(*args, 
**kwargs)
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/api/openstack/compute/plugins/v3/services.py, line 172, 
in update
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions 
self.host_api.service_update(context, host, binary, status_detail)
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/compute/api.py, line 3122, in service_update
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions binary)
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/objects/base.py, line 112, in wrapper
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions result = 
fn(cls, context, *args, **kwargs)
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/objects/service.py, line 105, in get_by_args
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions db_service = 
db.service_get_by_args(context, host, binary)
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/db/api.py, line 131, in service_get_by_args
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions return 
IMPL.service_get_by_args(context, host, binary)
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/db/sqlalchemy/api.py, line 112, in wrapper
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions return f(*args, 
**kwargs)
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/db/sqlalchemy/api.py, line 469, in service_get_by_args
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions raise 
exception.HostBinaryNotFound(host=host, binary=binary)
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions HostBinaryNotFound: 
Could not find binary nova-cert on host xu-de.
2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions
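A fix along these lines would catch the exception in the API handler and translate it into a 404. The sketch below uses local stand-in classes for nova.exception.HostBinaryNotFound and webob.exc.HTTPNotFound; it illustrates the shape of the handling, not the actual patch.

```python
# Stand-in for nova.exception.HostBinaryNotFound.
class HostBinaryNotFound(Exception):
    pass


# Stand-in for webob.exc.HTTPNotFound.
class HTTPNotFound(Exception):
    def __init__(self, explanation):
        self.explanation = explanation
        super(HTTPNotFound, self).__init__(explanation)


def update_service(host_api, context, host, binary, status_detail):
    try:
        return host_api.service_update(context, host, binary, status_detail)
    except HostBinaryNotFound as e:
        # The client asked about a host/binary that doesn't exist:
        # that's a 404 for the caller, not an unhandled server error.
        raise HTTPNotFound(explanation=str(e))
```

In the real services.update() method, the except clause would sit around the self.host_api.service_update(...) call shown in the traceback above.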

** Affects: nova
 Importance: Undecided
 Assignee: Haiwei Xu (xu-haiwei)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Haiwei Xu (xu-haiwei)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273139

Title:
  HostBinaryNotFound exception isn't caught in service_update

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I update a service with a non-existent hostname or binary, I get
  a 500 error in the nova api log.

  2014-01-28 03:15:29.829 ERROR nova.api.openstack.extensions 
[req-5b1f3fc5-349a-4415-a4f5-63eab1c259a0 admin demo] Unexpected exception in 
API method
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/api/openstack/extensions.py, line 470, in wrapped
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/api/openstack/compute/plugins/v3/services.py, line 172, 
in update
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions 
self.host_api.service_update(context, host, binary, status_detail)
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/compute/api.py, line 3122, in service_update
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions binary)
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/objects/base.py, line 112, in wrapper
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions result = 
fn(cls, context, *args, **kwargs)
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/objects/service.py, line 105, in get_by_args
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions db_service = 
db.service_get_by_args(context, host, binary)
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/db/api.py, line 131, in service_get_by_args
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions return 
IMPL.service_get_by_args(context, host, binary)
  2014-01-28 03:15:29.829 TRACE nova.api.openstack.extensions   File 
/opt/stack/nova/nova/db/sqlalchemy/api.py, line 112, in wrapper
  

[Yahoo-eng-team] [Bug 1266513] Re: Some Python requirements are not hosted on PyPI

2014-01-27 Thread Flavio Percoco
** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1266513

Title:
  Some Python requirements are not hosted on PyPI

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone havana series:
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Core Infrastructure:
  In Progress
Status in Python client library for Keystone:
  Fix Committed
Status in OpenStack Object Storage (Swift):
  In Progress
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  Pip 1.5 (released January 2nd, 2014) will by default refuse to
  download packages which are linked from PyPI but not hosted on
  pypi.python.org. The workaround is to whitelist these package names
  individually with both the --allow-external and --allow-insecure
  options.

  These options are new in pip 1.4, so encoding them will break for
  people trying to use pip 1.3.x or earlier. Those earlier versions of
  pip are not secure anyway since they don't connect via HTTPS with host
  certificate validation, so we should be encouraging people to use 1.4
  and later anyway.

  The --allow-insecure option is transitioning to a clearer --allow-
  unverified option name starting with 1.5, but the new form does not
  work with pip before 1.5 so we should use the old version for now to
  allow people to transition gracefully. The --allow-insecure form won't
  be removed until at least pip 1.7 according to comments in the source
  code.

  Virtualenv 1.11 (released the same day) bundles pip 1.5 by default,
  and so requires these workarounds when using requirements external to
  PyPI. Be aware that 1.11 is broken for projects using
  sitepackages=True in their tox.ini. The fix is
  https://github.com/pypa/virtualenv/commit/a6ca6f4 which is slated to
  appear in 1.11.1 (no ETA available). We've worked around it on our
  test infrastructure with https://git.openstack.org/cgit/openstack-
  infra/config/commit/?id=20cd18a for now, but that is hiding the
  external-packages issue since we're currently running all tests with
  pip 1.4.1 as a result.

  This bug will also be invisible in our test infrastructure for
  projects listed as having the PyPI mirror enforced in
  openstack/requirements (except for jobs which bypass the mirror, such
  as those for requirements changes), since our update jobs will pull in
  and mirror external packages and pip sees the mirror as being PyPI
  itself in that situation.

  We'll use this bug to track necessary whitelist updates to tox.ini and
  test scripts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1266513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273154] [NEW] Associate floatingip failed

2014-01-27 Thread shihanzhang
Public bug reported:

I create a port in an external network, then associate a floating IP to this 
port; it fails with the log below:
404-{u'NeutronError': {u'message': u'External network 
cba5aafb-6dc8-4139-b88e-f057d9f1b7ac is not reachable from subnet 
8989151d-c191-4598-bebe-115343bc513f.  Therefore, cannot associate Port 
a711d8bf-e71c-4ac5-8298-247913205494 with a Floating IP.', u'type': 
u'ExternalGatewayForFloatingIPNotFound', u'detail': u''}}

the detailed information is:
neutron port-show a711d8bf-e71c-4ac5-8298-247913205494
+---+---+
| Field | Value 
|
+---+---+
| admin_state_up| True  
|
| allowed_address_pairs |   
|
| binding:capabilities  | {port_filter: false}
|
| binding:host_id   |   
|
| binding:vif_type  | unbound   
|
| device_id |   
|
| device_owner  |   
|
| extra_dhcp_opts   |   
|
| fixed_ips | {subnet_id: 8989151d-c191-4598-bebe-115343bc513f, 
ip_address: 172.24.4.3} |
| id| a711d8bf-e71c-4ac5-8298-247913205494  
|
| mac_address   | fa:16:3e:f3:99:34 
|
| name  |   
|
| network_id| cba5aafb-6dc8-4139-b88e-f057d9f1b7ac  
|
| security_groups   | 806ec29e-c2bf-4bbe-a7e8-9a73f5af03f9  
|
| status| DOWN  
|
| tenant_id | 08fa6853d168413a9698a1870a8abfa3  
|
+---+---+

neutron  floatingip-show 67cf212b-ecad-4e42-8806-d84d8f1c6ecb
+-+--+
| Field   | Value|
+-+--+
| fixed_ip_address|  |
| floating_ip_address | 172.24.4.4   |
| floating_network_id | cba5aafb-6dc8-4139-b88e-f057d9f1b7ac |
| id  | 67cf212b-ecad-4e42-8806-d84d8f1c6ecb |
| port_id |  |
| router_id   |  |
| tenant_id   | 08fa6853d168413a9698a1870a8abfa3 |
+-+--+

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1273154

Title:
  Associate floatingip failed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I create a port in an external network, then associate a floating IP to 
this port; it fails with the log below:
  404-{u'NeutronError': {u'message': u'External network 
cba5aafb-6dc8-4139-b88e-f057d9f1b7ac is not reachable from subnet 
8989151d-c191-4598-bebe-115343bc513f.  Therefore, cannot associate Port 
a711d8bf-e71c-4ac5-8298-247913205494 with a Floating IP.', u'type': 
u'ExternalGatewayForFloatingIPNotFound', u'detail': u''}}

  the detailed information is:
  neutron port-show a711d8bf-e71c-4ac5-8298-247913205494
  
+---+---+
  | Field | Value   
  |
  
+---+---+
  | admin_state_up| True
  |
  | allowed_address_pairs |  

[Yahoo-eng-team] [Bug 1273171] [NEW] On Glance API, changes-since parameter filters out images which have been updated at the same time as the specified timestamp

2014-01-27 Thread Noboru Arai
Public bug reported:

environment:
  Openstack deployed by devstack.


Steps to reproduce:
  1. Check the images table.

  2. Request the glance API with a changes-since parameter whose
value is the same as the updated_at of an image in the images table.

  3. Images whose updated_at is the same as the changes-since
parameter are filtered out.

Expected result:
  in step 3, images aren't filtered out.

Remark:
  this is similar to https://review.openstack.org/#/c/60157/.

example)
  -execution:
$mysql -u root glance
$select * from images;

+--+---+-+
| id   |***| updated_at  |
+--+---+-+
| 1d88c716-ecd8-4ca1-9fc5-3bda1cf5affc |***| 2014-01-24 17:18:23 |
| b7bc3608-f19e-4eb1-a178-f3c59af2ba22 |***| 2014-01-24 17:18:24 |
| ca15b4d7-6c8b-4d7e-a4bd-a6186373e4d9 |***| 2014-01-24 17:18:25 |
+--+---+-+

$curl *** 
http://192.168.0.10:9292/v1/images/detail?changes-since=2014-01-24T17:18:25
HTTP/1.1 200 OK
***
{"images": []}

  -Expected result:
  the image whose id is ca15b4d7-6c8b-4d7e-a4bd-a6186373e4d9 is currently
  filtered out. It shouldn't be filtered out, given that the parameter
  name includes "since".
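The inclusive behavior the reporter expects amounts to comparing with >= rather than > on updated_at. A minimal sketch over plain dicts follows; Glance's real filter runs as a SQLAlchemy query, so this is illustrative only.

```python
from datetime import datetime


def filter_changes_since(images, changes_since):
    """Return images updated at or after the changes-since timestamp.

    Using >= keeps an image whose updated_at exactly equals the
    timestamp, which is what the bug report expects; a strict >
    comparison drops it, as observed above.
    """
    return [img for img in images if img["updated_at"] >= changes_since]


images = [
    {"id": "ca15b4d7", "updated_at": datetime(2014, 1, 24, 17, 18, 25)},
    {"id": "1d88c716", "updated_at": datetime(2014, 1, 24, 17, 18, 23)},
]
# With changes-since equal to 17:18:25, only the first image qualifies.
result = filter_changes_since(images, datetime(2014, 1, 24, 17, 18, 25))
```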

** Affects: glance
 Importance: Undecided
 Assignee: Noboru Arai (arai-h)
 Status: In Progress

** Attachment added: teraterm.log
   
https://bugs.launchpad.net/bugs/1273171/+attachment/3958810/+files/teraterm.log

** Changed in: glance
 Assignee: (unassigned) => Noboru Arai (arai-h)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1273171

Title:
  On Glance API, changes-since parameter filters out images which have
  been updated at the same time as the specified timestamp

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  environment:
Openstack deployed by devstack.

  
  Steps to reproduce:
    1. Check the images table.

    2. Request the glance API with a changes-since parameter whose
  value is the same as the updated_at of an image in the images table.

    3. Images whose updated_at is the same as the changes-since
  parameter are filtered out.

  Expected result:
    in step 3, images aren't filtered out.

  Remark:
    this is similar to https://review.openstack.org/#/c/60157/.

  example)
-execution:
  $mysql -u root glance
  $select * from images;

  +--+---+-+
  | id   |***| updated_at  |
  +--+---+-+
  | 1d88c716-ecd8-4ca1-9fc5-3bda1cf5affc |***| 2014-01-24 17:18:23 |
  | b7bc3608-f19e-4eb1-a178-f3c59af2ba22 |***| 2014-01-24 17:18:24 |
  | ca15b4d7-6c8b-4d7e-a4bd-a6186373e4d9 |***| 2014-01-24 17:18:25 |
  +--+---+-+

  $curl *** 
http://192.168.0.10:9292/v1/images/detail?changes-since=2014-01-24T17:18:25
  HTTP/1.1 200 OK
  ***
  {"images": []}

    -Expected result:
    the image whose id is ca15b4d7-6c8b-4d7e-a4bd-a6186373e4d9 is currently
    filtered out. It shouldn't be filtered out, given that the parameter
    name includes "since".

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1273171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1272681] Re: horizon puppet does not allow configuring the OPENSTACK_NEUTRON_NETWORK section

2014-01-27 Thread Julie Pichon
Moving to the openstack puppet modules, I believe this should be handled
in the puppet-horizon module.

** Project changed: horizon => puppet-openstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1272681

Title:
  horizon puppet does not allow configuring the OPENSTACK_NEUTRON_NETWORK
  section

Status in Puppet module for OpenStack:
  New

Bug description:
  Using puppet-horizon to install the OpenStack Dashboard does not allow
  me to configure the following section within local_settings.py.erb.
  Manual changes will be reverted the next time the puppet agent runs.

  
  OPENSTACK_NEUTRON_NETWORK = {
  'enable_lb': False,
  'enable_firewall': False,
  'enable_quotas': True,
  'enable_security_group': True,
  'enable_vpn': False,
  # The profile_support option is used to detect if an external router can be
  # configured via the dashboard. When using specific plugins the
  # profile_support can be turned on if needed.
  'profile_support': None,
  #'profile_support': 'cisco',
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/puppet-openstack/+bug/1272681/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271553] Re: Items per Page has no effect

2014-01-27 Thread Julie Pichon
*** This bug is a duplicate of bug 1251456 ***
https://bugs.launchpad.net/bugs/1251456

I believe this is a duplicate of bug 1251456, which was fixed in
Icehouse-2.

** This bug has been marked a duplicate of bug 1251456
   page size cookie set only read by settings dash

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1271553

Title:
  Items per Page has no effect

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I try to change the 'items per page' setting for accounts in
  Dashboard, it doesn't take effect. The items per page limit remains at
  the default of 20.

  This is using the RDO packages on CentOS 6.5, specifically openstack-
  dashboard-2013.2.1-1.el6.noarch

  A notification that the change was successful does appear in the
  right-hand corner.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1271553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273208] [NEW] Action menu gets cut off

2014-01-27 Thread Julie Pichon
Public bug reported:

Long action lists get cut off, see for example the admin Instances menu
on the screenshot, or try to click More on a long action menu for the
last item on any table.

This appears to be since commit f3ca2756cc / bug 1267661, CSS to fix
content appearing below side pane.

** Affects: horizon
 Importance: High
 Status: New


** Tags: ux

** Attachment added: cut_off_menu.png
   
https://bugs.launchpad.net/bugs/1273208/+attachment/3958902/+files/cut_off_menu.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1273208

Title:
  Action menu gets cut off

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Long action lists get cut off, see for example the admin Instances
  menu on the screenshot, or try to click More on a long action menu
  for the last item on any table.

  This appears to be since commit f3ca2756cc / bug 1267661, CSS to fix
  content appearing below side pane.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1273208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273233] [NEW] Add the Back to Security Groups button to the security group rules page

2014-01-27 Thread Leandro Rosa
Public bug reported:

After editing security group rules I must use the browser's back button to 
return to the security groups tab (under the Access & Security page).
It is confusing since most Horizon functionalities don't rely on the browser's 
native operations.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1273233

Title:
  Add the Back to Security Groups button to the security group rules
  page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After editing security group rules I must use the browser's back button to 
return to the security groups tab (under the Access & Security page).
  It is confusing since most Horizon functionalities don't rely on the 
browser's native operations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1273233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273273] [NEW] if run 'keystone-manage db_sync' without 'sudo', there should be a prompt that tables were not created

2014-01-27 Thread Peter Xu
Public bug reported:

below are the details:

I have set up the keystone database in mysql and granted all privileges
to the keystone user.

When I execute 'keystone-manage db_sync':

peter@openstack:~$ keystone-manage db_sync
peter@openstack:~$ 

but actually, no tables are created in the keystone database in mysql.

[there should be an error to indicate to the user that no tables were
created due to insufficient privileges]

Only with:
peter@openstack:~$ sudo keystone-manage db_sync
peter@openstack:~$

are all tables created correctly in the keystone database.


summary:
an error prompt here would be very useful, helping the user proceed with 
installing keystone successfully.
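One way keystone-manage could surface the failure is to check the table list after the migration and fail loudly if nothing was created. The sketch below uses sqlite's catalog table purely for illustration (the MySQL equivalent would query information_schema.tables); the expected table name and function name are assumptions, not keystone's actual internals.

```python
import sqlite3


def verify_db_sync(db_path, expected_table="user"):
    """Fail loudly if the schema migration left the database empty.

    expected_table is an assumption here; any table the migration is
    known to create would do.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
    finally:
        conn.close()
    tables = {name for (name,) in rows}
    if expected_table not in tables:
        raise RuntimeError(
            "db_sync completed but expected tables are missing; "
            "check database privileges for the keystone user")
    return tables
```

With a check like this, the silent no-op run would become a visible error instead of an empty prompt.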

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1273273

Title:
  if run 'keystone-manage db_sync' without 'sudo', there should be a
  prompt that tables were not created

Status in OpenStack Identity (Keystone):
  New

Bug description:
  below are the details:

  I have set up the keystone database in mysql and granted all privileges
  to the keystone user.

  When I execute 'keystone-manage db_sync':

  peter@openstack:~$ keystone-manage db_sync
  peter@openstack:~$ 

  but actually, no tables are created in the keystone database in mysql.

  [there should be an error to indicate to the user that no tables were
  created due to insufficient privileges]

  Only with:
  peter@openstack:~$ sudo keystone-manage db_sync
  peter@openstack:~$

  are all tables created correctly in the keystone database.


  summary:
  an error prompt here would be very useful, helping the user proceed with 
installing keystone successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1273273/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244762] Re: tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance fails sporadically

2014-01-27 Thread Matt Riedemann
I thought about bumping the debug-level logging for the ec2 error
responses to INFO level, but in this one n-api log there are 361
instances of 'EC2 error response', so that's probably not feasible.

http://logs.openstack.org/87/44787/16/check/check-tempest-devstack-vm-
neutron/d2ede4d/logs/screen-n-api.txt.gz

Given the age of this bug and the fact it's not showing up anymore, at
least as far back as logstash keeps records on, I'm going to close it.
We can re-open if it happens again.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1244762

Title:
  
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance
  fails sporadically

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  See: http://logs.openstack.org/87/44787/16/check/check-tempest-
  devstack-vm-neutron/d2ede4d/console.html

  2013-10-25 18:06:37.957 | 
==
  2013-10-25 18:06:37.959 | FAIL: 
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance[gate,smoke]
  2013-10-25 18:06:37.959 | 
tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance[gate,smoke]
  2013-10-25 18:06:37.959 | 
--
  2013-10-25 18:06:37.959 | _StringException: Empty attachments:
  2013-10-25 18:06:37.959 |   stderr
  2013-10-25 18:06:37.960 |   stdout
  2013-10-25 18:06:37.960 | 
  2013-10-25 18:06:37.960 | pythonlogging:'': {{{
  2013-10-25 18:06:37.960 | 2013-10-25 17:59:08,821 state: pending
  2013-10-25 18:06:37.960 | 2013-10-25 17:59:14,092 State transition pending 
==> error 5 second
  2013-10-25 18:06:37.961 | }}}
  2013-10-25 18:06:37.961 | 
  2013-10-25 18:06:37.961 | Traceback (most recent call last):
  2013-10-25 18:06:37.961 |   File 
tempest/thirdparty/boto/test_ec2_instance_run.py, line 150, in 
test_run_stop_terminate_instance
  2013-10-25 18:06:37.961 | self.assertInstanceStateWait(instance, 
running)
  2013-10-25 18:06:37.961 |   File tempest/thirdparty/boto/test.py, line 356, 
in assertInstanceStateWait
  2013-10-25 18:06:37.962 | state = self.waitInstanceState(lfunction, 
wait_for)
  2013-10-25 18:06:37.962 |   File tempest/thirdparty/boto/test.py, line 341, 
in waitInstanceState
  2013-10-25 18:06:37.962 | self.valid_instance_state)
  2013-10-25 18:06:37.962 |   File tempest/thirdparty/boto/test.py, line 332, 
in state_wait_gone
  2013-10-25 18:06:37.962 | self.assertIn(state, valid_set | self.gone_set)
  2013-10-25 18:06:37.963 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 328, in 
assertIn
  2013-10-25 18:06:37.963 | self.assertThat(haystack, Contains(needle))
  2013-10-25 18:06:37.963 |   File 
/usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 417, in 
assertThat
  2013-10-25 18:06:37.963 | raise MismatchError(matchee, matcher, mismatch, 
verbose)
  2013-10-25 18:06:37.963 | MismatchError: u'error' not in set(['paused', 
'terminated', 'running', 'stopped', 'pending', '_GONE', 'stopping', 
'shutting-down'])

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1244762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273255] [NEW] wait_for_url doesn't account for system clock being changed

2014-01-27 Thread Scott Moser
Public bug reported:

wait_for_url takes a 'max_wait' input, and then does:
 start = time.time()
 ...
  now = time.time()

The problem is that when this runs early in boot, ntp (or anything else
really) might have set the clock backwards.

I'm looking at a console log that shows:
2014-01-27 14:46:24,743 - url_helper.py[WARNING]: Calling 'http://169.254.169.2
4/2009-04-04/meta-data/instance-id' failed [-16620/120s]: request error [(urll
compat_monitor0 console  

I.e., the clock got set back 17000 seconds or so.

Asking in # python, I was told that in python3.3 I could use
'time.monotonic'.

In python 2.X, it seems that reading /proc/cpuinfo might be my only
option.
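The monotonic-clock approach the reporter mentions can be sketched as follows. On Python 2.x the fallback simply degrades to the wall clock, so this is a best-effort shim, not cloud-init's actual code.

```python
import time


def monotonic_now():
    """Return a clock reading immune to wall-clock adjustments where
    possible (time.monotonic, Python >= 3.3); fall back to time.time()
    on older interpreters."""
    try:
        return time.monotonic()
    except AttributeError:  # Python 2.x has no time.monotonic
        return time.time()


def wait_elapsed(start, max_wait):
    """True once max_wait seconds have passed since start, measured on
    the monotonic clock, so an NTP step backwards cannot stretch the
    wait (the failure mode in the console log above)."""
    return monotonic_now() - start >= max_wait
```

wait_for_url would record start = monotonic_now() and test wait_elapsed(start, max_wait) in its loop.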

** Affects: cloud-init
 Importance: Medium
 Status: Confirmed

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1273255

Title:
  wait_for_url doesn't account for system clock being changed

Status in Init scripts for use on cloud images:
  Confirmed

Bug description:
  wait_for_url takes a 'max_wait' input, and then does:
   start = time.time()
   ...
now = time.time()

  The problem is that when this runs early in boot, ntp (or anything
  else really) might have set the clock backwards.

  I'm looking at a console log that shows:
  2014-01-27 14:46:24,743 - url_helper.py[WARNING]: Calling 
'http://169.254.169.2
  4/2009-04-04/meta-data/instance-id' failed [-16620/120s]: request error 
[(urll
  compat_monitor0 console  

  I.e., the clock got set back 17000 seconds or so.

  Asking in # python, I was told that in python3.3 I could use
  'time.monotonic'.

  In python 2.X, it seems that reading /proc/cpuinfo might be my only
  option.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1273255/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273292] [NEW] Timed out waiting for thing ... to become in-use causes tempest-dsvm-* failures

2014-01-27 Thread Russell Bryant
Public bug reported:

This is a spin-off of bug 1254890.  That bug was originally covering
failures for both timing out waiting for an instance to become ACTIVE,
as well as waiting for a volume to become in-use or available.

It seems valuable to split out the cases of waiting for volumes to
become in-use or available into its own bug.

message:Details: Timed out waiting for thing AND message:to become
AND (message:in-use OR message:available)

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Critical
 Status: Confirmed


** Tags: testing volumes

** Changed in: nova
   Status: New = Confirmed

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided = High

** Changed in: nova
   Importance: High = Critical

** Changed in: nova
Milestone: None = icehouse-3

** Description changed:

  This is a spin-off of bug 1254890.  That bug was originally covering
  failures for both timing out waiting for an instance to become ACTIVE,
  as well as waiting for a volume to become in-use or available.
  
  It seems valuable to split out the cases of waiting for volumes to
- become in-use into its own bug.
+ become in-use or available into its own bug.
  
  message:Details: Timed out waiting for thing AND message:to become
- AND  message:in-use
+ AND (message:in-use OR message:available)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273292

Title:
  Timed out waiting for thing ... to become in-use causes tempest-
  dsvm-* failures

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  This is a spin-off of bug 1254890.  That bug was originally covering
  failures for both timing out waiting for an instance to become ACTIVE,
  as well as waiting for a volume to become in-use or available.

  It seems valuable to split out the cases of waiting for volumes to
  become in-use or available into its own bug.

  message:Details: Timed out waiting for thing AND message:to become
  AND (message:in-use OR message:available)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1273292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273268] [NEW] live-migration - instance could not be found

2014-01-27 Thread darkyat
*** This bug is a duplicate of bug 1044237 ***
https://bugs.launchpad.net/bugs/1044237

Public bug reported:

Starting a live-migration using nova live-migration
a7a78e36-e088-416c-9479-e95aa1a0f7ef fails because nova tries to detach
the volume from the instance on the destination host instead of the
source host.

* Start live migration
* Check logs on both Source and Destination Host

=== Source Host ===
2014-01-27 15:03:57.554 2681 ERROR nova.virt.libvirt.driver [-] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Live Migration failure: End of file while 
reading data: Input/output error

=== Destination Host ===
2014-01-27 15:02:13.129 3742 AUDIT nova.compute.manager 
[req-8e878227-5ba7-4079-993b-aefa8ba621ed f17443ad86764e3a847667a608cb7181 
cd0e923440eb4bbc8f3388e38544b977] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Detach volume 
2ab8cb25-8f79-4b8e-bc93-c52351df84ee from mountpoint vda
2014-01-27 15:02:13.134 3742 WARNING nova.compute.manager 
[req-8e878227-5ba7-4079-993b-aefa8ba621ed f17443ad86764e3a847667a608cb7181 
cd0e923440eb4bbc8f3388e38544b977] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Detaching volume from unknown instance
2014-01-27 15:02:13.138 3742 ERROR nova.compute.manager 
[req-8e878227-5ba7-4079-993b-aefa8ba621ed f17443ad86764e3a847667a608cb7181 
cd0e923440eb4bbc8f3388e38544b977] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Failed to detach volume 
2ab8cb25-8f79-4b8e-bc93-c52351df84ee from vda
2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Traceback (most recent call last):
2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 3725, in 
_detach_volume
2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] encryption=encryption)
2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 1202, in 
detach_volume
2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] virt_dom = 
self._lookup_by_name(instance_name)
2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 3085, in 
_lookup_by_name
2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] raise 
exception.InstanceNotFound(instance_id=instance_name)
2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] InstanceNotFound: Instance 
instance-0084 could not be found.
2014-01-27 15:02:13.138 3742 TRACE nova.compute.manager [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef]
2014-01-27 15:02:13.139 3742 DEBUG nova.volume.cinder 
[req-8e878227-5ba7-4079-993b-aefa8ba621ed f17443ad86764e3a847667a608cb7181 
cd0e923440eb4bbc8f3388e38544b977] Cinderclient connection created using URL: 
http://10.3.0.2:8776/v1/cd0e923440eb4bbc8f3388e38544b977 cinderclient 
/usr/lib/python2.7/dist-packages/nova/volume/cinder.py:96
2014-01-27 15:02:13.142 3742 INFO urllib3.connectionpool [-] Starting new HTTP 
connection (1): 10.3.0.2
2014-01-27 15:02:13.230 3742 DEBUG urllib3.connectionpool [-] POST 
/v1/cd0e923440eb4bbc8f3388e38544b977/volumes/2ab8cb25-8f79-4b8e-bc93-c52351df84ee/action
 HTTP/1.1 202 0 _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:296

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: live-migration

** This bug has been marked a duplicate of bug 1044237
   Block Migration doesn't work: Nova searches for the Instance on the 
destination Compute host

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273268

Title:
  live-migration - instance could not be found

Status in OpenStack Compute (Nova):
  New

Bug description:
  Starting a live-migration using nova live-migration
  a7a78e36-e088-416c-9479-e95aa1a0f7ef fails because nova tries to
  detach the volume from the instance on the destination host instead
  of the source host.

  * Start live migration
  * Check logs on both Source and Destination Host

  === Source Host ===
  2014-01-27 15:03:57.554 2681 ERROR nova.virt.libvirt.driver [-] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Live Migration failure: End of file while 
reading data: Input/output error

  === Destination Host ===
  2014-01-27 15:02:13.129 3742 AUDIT nova.compute.manager 
[req-8e878227-5ba7-4079-993b-aefa8ba621ed f17443ad86764e3a847667a608cb7181 
cd0e923440eb4bbc8f3388e38544b977] [instance: 
a7a78e36-e088-416c-9479-e95aa1a0f7ef] Detach volume 

[Yahoo-eng-team] [Bug 1273298] [NEW] OPENSTACK_HYPERVISOR_FEATURES['can_set_mount_point'] not taken in account when launching instances and creating a volume

2014-01-27 Thread Yves-Gwenael Bourhis
Public bug reported:

If in openstack_dashboard/test/settings.py we set:

OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': False,
}

This hides the 'device' field of the AttachForm, but it should also hide
the 'device_name' field of the 'SetInstanceDetailsAction' workflow
action's form too.
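The symmetric fix the reporter suggests is for both forms to consult the same feature flag before exposing a device-name field. A rough sketch of that idea, with plain dicts standing in for Horizon's Django form classes (names here are illustrative, not Horizon's actual code):

```python
# Illustrative stand-in for Horizon's settings lookup.
HYPERVISOR_FEATURES = {'can_set_mount_point': False}

def build_fields(base_fields, features=HYPERVISOR_FEATURES):
    """Drop mount-point related fields when the hypervisor can't honor them."""
    fields = dict(base_fields)
    if not features.get('can_set_mount_point', True):
        # Both the volume AttachForm's 'device' field and the launch
        # workflow's 'device_name' field should be removed by the same check.
        for name in ('device', 'device_name'):
            fields.pop(name, None)
    return fields

attach_fields = build_fields({'volume': 'ChoiceField', 'device': 'CharField'})
launch_fields = build_fields({'name': 'CharField', 'device_name': 'CharField'})
```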

** Affects: horizon
 Importance: Undecided
 Assignee: Yves-Gwenael Bourhis (yves-gwenael-bourhis)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = Yves-Gwenael Bourhis (yves-gwenael-bourhis)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1273298

Title:
  OPENSTACK_HYPERVISOR_FEATURES['can_set_mount_point'] not taken in
  account when launching instances and creating a volume

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  if in openstack_dashboard/test/settings.py we set :

  OPENSTACK_HYPERVISOR_FEATURES = { 
  
  'can_set_mount_point': False, 
  
  } 

  This hides the 'device' field of the AttachForm, but it should also
  hide the 'device_name' field of the 'SetInstanceDetailsAction'
  workflow action's form too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1273298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273301] [NEW] Unit test failure: Message objects do not support str() because they may contain non-ascii characters. Please use unicode() or translate() instead.

2014-01-27 Thread Anita Kuno
Public bug reported:

FAIL: 
neutron.tests.unit.test_neutron_manager.NeutronManagerTestCase.test_multiple_plugins_specified_for_service_type
tags: worker-3
--
Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2014-01-17 09:03:30,605 INFO [neutron.manager] Loading core plugin: 
neutron.db.db_base_plugin_v2.NeutronDbPluginV2
2014-01-17 09:03:30,698 INFO [neutron.manager] Loading Plugin: 
neutron.tests.unit.dummy_plugin.DummyServicePlugin
2014-01-17 09:03:30,698 INFO [neutron.manager] Loading Plugin: 
neutron.tests.unit.dummy_plugin.DummyServicePlugin
}}}

Traceback (most recent call last):
  File neutron/tests/unit/test_neutron_manager.py, line 100, in 
test_multiple_plugins_specified_for_service_type
NeutronManager.get_instance()
  File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 866, in __exit__
mismatch = matcher.match((exc_type, exc_value, traceback))
  File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 64, in match
return self.value_re.match(other[1])
  File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py,
 line 206, in match
after = self.preprocessor(value)
  File neutron/openstack/common/gettextutils.py, line 264, in __str__
raise UnicodeError(msg)
UnicodeError: Message objects do not support str() because they may contain 
non-ascii characters. Please use unicode() or translate() instead.
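The failure happens because oslo's gettextutils.Message deliberately raises from __str__ on Python 2 so that callers handle non-ASCII text explicitly, and the testtools matcher stringifies the exception anyway. A simplified Python 3 illustration of the pattern (not the real oslo class):

```python
class LazyMessage:
    """Simplified stand-in for oslo's gettextutils.Message.

    The real (Python 2) class raises UnicodeError from __str__ to force
    callers to use unicode() or translate(); anything that str()s it --
    such as a test matcher formatting an exception -- blows up.
    """
    def __init__(self, msgid):
        self.msgid = msgid

    def __str__(self):
        raise UnicodeError(
            "Message objects do not support str() because they may "
            "contain non-ascii characters. Please use unicode() or "
            "translate() instead.")

    def translate(self, desired_locale=None):
        # The real implementation looks the msgid up in a gettext catalog.
        return self.msgid

msg = LazyMessage("Multiple plugins for service %s were configured")
try:
    str(msg)                  # what the testtools matcher effectively does
    str_raised = False
except UnicodeError:
    str_raised = True
safe_text = msg.translate()   # the supported way to get the text
```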

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1273301

Title:
  Unit test failure: Message objects do not support str() because they
  may contain non-ascii characters. Please use unicode() or translate()
  instead.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  FAIL: 
neutron.tests.unit.test_neutron_manager.NeutronManagerTestCase.test_multiple_plugins_specified_for_service_type
  tags: worker-3
  --
  Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2014-01-17 09:03:30,605 INFO [neutron.manager] Loading core plugin: 
neutron.db.db_base_plugin_v2.NeutronDbPluginV2
  2014-01-17 09:03:30,698 INFO [neutron.manager] Loading Plugin: 
neutron.tests.unit.dummy_plugin.DummyServicePlugin
  2014-01-17 09:03:30,698 INFO [neutron.manager] Loading Plugin: 
neutron.tests.unit.dummy_plugin.DummyServicePlugin
  }}}

  Traceback (most recent call last):
File neutron/tests/unit/test_neutron_manager.py, line 100, in 
test_multiple_plugins_specified_for_service_type
  NeutronManager.get_instance()
File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 866, in __exit__
  mismatch = matcher.match((exc_type, exc_value, traceback))
File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 64, in match
  return self.value_re.match(other[1])
File 
/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py,
 line 206, in match
  after = self.preprocessor(value)
File neutron/openstack/common/gettextutils.py, line 264, in __str__
  raise UnicodeError(msg)
  UnicodeError: Message objects do not support str() because they may contain 
non-ascii characters. Please use unicode() or translate() instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1273301/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273259] [NEW] Unit test failure: delete_port() got an unexpected keyword argument 'l3_port_check'

2014-01-27 Thread Anita Kuno
Public bug reported:

FAIL: 
neutron.tests.unit.test_extension_ext_gw_mode.TestL3GwModeMixin.test_update_router_gw_with_gw_info_none
tags: worker-3
--
Empty attachments:
pythonlogging:''
stderr
stdout

Traceback (most recent call last):
File 
/home/jenkins/workspace/gate-neutron-python27/neutron/tests/unit/test_extension_ext_gw_mode.py,
 line 251, in test_update_router_gw_with_gw_info_none
self._test_update_router_gw(None, True)
File 
/home/jenkins/workspace/gate-neutron-python27/neutron/tests/unit/test_extension_ext_gw_mode.py,
 line 238, in _test_update_router_gw
self.context, self.router.id, gw_info)
File 
/home/jenkins/workspace/gate-neutron-python27/neutron/db/l3_gwmode_db.py, 
line 62, in _update_router_gw_info
context, router_id, info, router=router)
File /home/jenkins/workspace/gate-neutron-python27/neutron/db/l3_db.py, line 
205, in _update_router_gw_info
   l3_port_check=False)
TypeError: delete_port() got an unexpected keyword argument 'l3_port_check'

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1273259

Title:
  Unit test failure: delete_port() got an unexpected keyword argument
  'l3_port_check'

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  FAIL: 
neutron.tests.unit.test_extension_ext_gw_mode.TestL3GwModeMixin.test_update_router_gw_with_gw_info_none
  tags: worker-3
  --
  Empty attachments:
  pythonlogging:''
  stderr
  stdout

  Traceback (most recent call last):
  File 
/home/jenkins/workspace/gate-neutron-python27/neutron/tests/unit/test_extension_ext_gw_mode.py,
 line 251, in test_update_router_gw_with_gw_info_none
  self._test_update_router_gw(None, True)
  File 
/home/jenkins/workspace/gate-neutron-python27/neutron/tests/unit/test_extension_ext_gw_mode.py,
 line 238, in _test_update_router_gw
  self.context, self.router.id, gw_info)
  File 
/home/jenkins/workspace/gate-neutron-python27/neutron/db/l3_gwmode_db.py, 
line 62, in _update_router_gw_info
  context, router_id, info, router=router)
  File /home/jenkins/workspace/gate-neutron-python27/neutron/db/l3_db.py, 
line 205, in _update_router_gw_info
 l3_port_check=False)
  TypeError: delete_port() got an unexpected keyword argument 'l3_port_check'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1273259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273266] [NEW] Error message is malformed when removing a non-existent security group from an instance

2014-01-27 Thread Phil Day
Public bug reported:

Trying to remove a security group from an instance which is not actually 
associated with the instance produces the following:
 
---
$nova remove-secgroup 71069945-5bea-4d53-b6ab-9026bfeebba4 phil

ERROR: [u'Security group %(security_group_name)s not assocaited with the
instance %(instance)s', {u'instance': u'71069945-5bea-4d53-b6ab-
9026bfeebba4', u'security_group_name': u'phil'}] (HTTP 404) (Request-ID:
req-a334b53d-e7cc-482c-9f1f-7bc61b8367e0)

---

The variables are not being populated correctly, and there is a typo:  
assocaited
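The mangled output suggests the format string and its parameter dict were serialized side by side instead of being interpolated. A hedged sketch of the difference (not Nova's actual exception-handling code):

```python
template = ("Security group %(security_group_name)s not associated "
            "with the instance %(instance)s")
params = {'security_group_name': 'phil',
          'instance': '71069945-5bea-4d53-b6ab-9026bfeebba4'}

# Broken: the template and its params travel separately, so the client
# renders the raw pair -- roughly what the ERROR line above shows.
broken = str([template, params])

# Fixed: interpolate before the message leaves the API.
fixed = template % params
```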

** Affects: nova
 Importance: Low
 Assignee: Phil Day (philip-day)
 Status: New

** Changed in: nova
   Importance: Undecided = Low

** Changed in: nova
 Assignee: (unassigned) = Phil Day (philip-day)

** Changed in: nova
Milestone: None = icehouse-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273266

Title:
  Error message is malformed when removing a non-existent security group
  from an instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  Trying to remove a security group from an instance which is not actually 
associated with the instance produces the following:
   
  ---
  $nova remove-secgroup 71069945-5bea-4d53-b6ab-9026bfeebba4 phil

  ERROR: [u'Security group %(security_group_name)s not assocaited with
  the instance %(instance)s', {u'instance': u'71069945-5bea-4d53-b6ab-
  9026bfeebba4', u'security_group_name': u'phil'}] (HTTP 404) (Request-
  ID: req-a334b53d-e7cc-482c-9f1f-7bc61b8367e0)

  ---

  The variables are not being populated correctly, and there is a typo:
   assocaited

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273266/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1205520] Re: Cannot set the default project for the User V3

2014-01-27 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/69401

** Changed in: horizon
   Status: Invalid = In Progress

** Changed in: horizon
 Assignee: Lin Hua Cheng (lin-hua-cheng) = Dirk Mueller (dmllr)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1205520

Title:
  Cannot set the default project for the User V3

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Python client library for Keystone:
  Fix Released

Bug description:
  The code implementation is passing the attribute to Keystone as
  project_id..

  
  
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/v3/users.py#L46

  def create(self, name, domain=None, project=None, password=None,
 email=None, description=None, enabled=True):
  return super(UserManager, self).create(
  name=name,
  domain_id=base.getid(domain),
  project_id=base.getid(project),
  password=password,
  email=email,
  description=description,
  enabled=enabled)

  According to identity-api spec (https://github.com/openstack/identity-
  api/blob/master/openstack-identity-api/v3/src/markdown/identity-
  api-v3.md). it should be passed as default_project_id.
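Per the spec, the attribute must be sent as default_project_id. A minimal sketch of the corrected payload construction, using a hypothetical helper rather than python-keystoneclient's actual code:

```python
def build_user_payload(name, domain_id=None, project_id=None,
                       password=None, email=None, description=None,
                       enabled=True):
    """Assemble the v3 user-create request body.

    Per the identity-api v3 spec, the default project must be keyed
    'default_project_id', not 'project_id'.
    """
    return {
        'name': name,
        'domain_id': domain_id,
        'default_project_id': project_id,  # the one-key rename that fixes the bug
        'password': password,
        'email': email,
        'description': description,
        'enabled': enabled,
    }

payload = build_user_payload('alice', project_id='p123')
```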

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1205520/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273326] [NEW] glance image-list does not show deleted/killed images

2014-01-27 Thread Ionuț Arțăriși
Public bug reported:

I have 3 killed images and one in the 'deleted' state which I can see in
the postgres database. When I try to run glance image-list, however,
none of them are shown.

I tried debugging this and placed a pdb.set_trace() call in
glance/registry/api/v1/images.py:Controller._get_images where the
self.db_api.image_get_all call is made
(https://github.com/openstack/glance/blob/stable/havana/glance/registry/api/v1/images.py#L108).
If I try to make this call manually it returns an empty list on the
first try, but all subsequent tries with the same arguments return the
right list with all the 4 images. Does anyone know what's going on?

I have only tried reproducing this on Havana.

** Affects: glance
 Importance: Undecided
 Status: New

** Description changed:

  I have 3 killed images and one in the 'deleted' state which I can see in
  the postgres database. When I try to run glance image-list, however,
  none of them are shown.
  
  I tried debugging this and placed a pdb.set_trace() call in
  glance/registry/api/v1/images.py:Controller._get_images where the
- self.db_api.image_get_all call is made. If I try to make this call
- manually it returns an empty list on the first try, but all subsequent
- tries with the same arguments return the right list with all the 4
- images. Does anyone know what's going on?
+ self.db_api.image_get_all call is made
+ 
(https://github.com/openstack/glance/blob/stable/havana/glance/registry/api/v1/images.py#L108).
+ If I try to make this call manually it returns an empty list on the
+ first try, but all subsequent tries with the same arguments return the
+ right list with all the 4 images. Does anyone know what's going on?
  
  I have only tried reproducing this on Havana.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1273326

Title:
  glance image-list does not show deleted/killed images

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  I have 3 killed images and one in the 'deleted' state which I can see
  in the postgres database. When I try to run glance image-list,
  however, none of them are shown.

  I tried debugging this and placed a pdb.set_trace() call in
  glance/registry/api/v1/images.py:Controller._get_images where the
  self.db_api.image_get_all call is made
  
(https://github.com/openstack/glance/blob/stable/havana/glance/registry/api/v1/images.py#L108).
  If I try to make this call manually it returns an empty list on the
  first try, but all subsequent tries with the same arguments return the
  right list with all the 4 images. Does anyone know what's going on?

  I have only tried reproducing this on Havana.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1273326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1199468] Re: Race around info_cache and bw_usage updates

2014-01-27 Thread Matt Riedemann
This was fixed back on 2013/07/10, so it's in Havana.

** Changed in: nova
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1199468

Title:
  Race around info_cache and bw_usage updates

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  info_cache and bw_usage updates have the semantics of updating if
  exists, otherwise creating a new record.

  If two greenthreads call this method, it's possible they'll race and
  both attempt to create new records, resulting in DBDuplicateEntry
  exceptions.

  For both of these cases, first-one-wins appears to be an acceptable
  solution, so we can just swallow the exception.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1199468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273336] [NEW] common.sql.migration.find_migrate_repo never used with an argument.

2014-01-27 Thread Ilya Pekelny
Public bug reported:

If we look closely at how `find_migrate_repo` is used, we see:
$ grep -r find_migrate_repo keystone

keystone/common/sql/migration.py:repo_path = find_migrate_repo()
keystone/common/sql/migration.py:repo_path = find_migrate_repo()
keystone/common/sql/migration.py:repo_path = find_migrate_repo()

The function is never called with an argument, which means only one
line of it is actually needed:
https://github.com/openstack/keystone/blob/master/keystone/common/sql/migration.py#L87.
In its current state the rest of the function is redundant and should
be refactored.
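With the unused parameter handling gone, the function collapses to a single path join. A hedged sketch of the simplification (the base directory is passed explicitly here for illustration; the real code derives it from the module's own location):

```python
import os

def find_migrate_repo(base_dir):
    """Return the migrate repository path under base_dir.

    All call sites invoke this with no argument in the real code, so the
    unused argument-handling branches reduce to just this join.
    """
    return os.path.join(base_dir, 'migrate_repo')

repo = find_migrate_repo('/opt/keystone/common/sql')
```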

** Affects: keystone
 Importance: Undecided
 Assignee: Ilya Pekelny (i159)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) = Ilya Pekelny (i159)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1273336

Title:
  common.sql.migration.find_migrate_repo never used with an argument.

Status in OpenStack Identity (Keystone):
  New

Bug description:
  If we look closely at how `find_migrate_repo` is used, we see:
  $ grep -r find_migrate_repo keystone

  keystone/common/sql/migration.py:repo_path = find_migrate_repo()
  keystone/common/sql/migration.py:repo_path = find_migrate_repo()
  keystone/common/sql/migration.py:repo_path = find_migrate_repo()

  The function is never called with an argument, which means only one
  line of it is actually needed:
  https://github.com/openstack/keystone/blob/master/keystone/common/sql/migration.py#L87.
  In its current state the rest of the function is redundant and should
  be refactored.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1273336/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254772] Re: tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML setUpClass times-out on attaching volume

2014-01-27 Thread Matt Riedemann
The model server went away race is also tracked in bug 1270470.  We
should leave the model server went away issue for that bug, because I'm
not convinced it has anything to do with this one.

We do have hits on this though:

message:Volume test_attach failed to reach available status within the
required time AND filename:console.html

10 hits in the last 2 weeks, none in the gate:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVm9sdW1lIHRlc3RfYXR0YWNoIGZhaWxlZCB0byByZWFjaCBhdmFpbGFibGUgc3RhdHVzIHdpdGhpbiB0aGUgcmVxdWlyZWQgdGltZVwiIEFORCBmaWxlbmFtZTpcImNvbnNvbGUuaHRtbFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiJhbGwiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzkwODQ1OTA4MTUxfQ==

** Tags added: testing volumes

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254772

Title:
  tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML
  setUpClass times-out on attaching volume

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  2013-11-25 15:42:45.769 | 
==
  2013-11-25 15:42:45.770 | FAIL: setUpClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-11-25 15:42:45.770 | setUpClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-11-25 15:42:45.770 | 
--
  2013-11-25 15:42:45.770 | _StringException: Traceback (most recent call last):
  2013-11-25 15:42:45.770 |   File 
tempest/api/compute/servers/test_server_rescue.py, line 51, in setUpClass
  2013-11-25 15:42:45.770 | cls.volume_to_attach['id'], 'available')
  2013-11-25 15:42:45.771 |   File 
tempest/services/compute/xml/volumes_extensions_client.py, line 140, in 
wait_for_volume_status
  2013-11-25 15:42:45.771 | raise exceptions.TimeoutException(message)
  2013-11-25 15:42:45.771 | TimeoutException: Request timed out
  2013-11-25 15:42:45.771 | Details: Volume test_attach failed to reach 
available status within the required time (196 s).
  2013-11-25 15:42:45.771 | 

  
  
http://logs.openstack.org/77/56577/9/check/check-tempest-devstack-vm-postgres-full/f5fe3ff/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1254772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273355] [NEW] Grizzly-havana upgrade fails when migrating from stamped db

2014-01-27 Thread Jakub Libosvar
Public bug reported:

If grizzly is deployed without quantum-db-manage, letting neutron
create the tables itself, the servicedefinitions and servicetypes
tables are never created. These tables are dropped later when the
LoadBalancerPlugin is used. When the db schema is created with
quantum-db-manage, the tables are created and dropped correctly.

** Affects: neutron
 Importance: Undecided
 Assignee: Jakub Libosvar (libosvar)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1273355

Title:
  Grizzly-havana upgrade fails when migrating from stamped db

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If grizzly is deployed without quantum-db-manage, letting neutron
  create the tables itself, the servicedefinitions and servicetypes
  tables are never created. These tables are dropped later when the
  LoadBalancerPlugin is used. When the db schema is created with
  quantum-db-manage, the tables are created and dropped correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1273355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254772] Re: tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML setUpClass times-out on attaching volume

2014-01-27 Thread Matt Riedemann
Looked this over with sdague; we think the problem is that
test_server_rescue creates two volumes in its setUpClass method and
then, at teardown, doesn't wait for the volumes to be deleted before
moving on.  With tests running in parallel, this can back cinder up and
produce racy failures on volume creation.

Also, only two of the seven test cases actually use the volumes, and
those are isolated, so a single volume can be created inside each of
those two test cases and cleaned up with an inline addCleanup call.
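The per-test-case pattern suggested here, creating the volume inside the test and registering its deletion with addCleanup so teardown is guaranteed, can be sketched with plain unittest. The client is a fake stand-in, not Tempest's real Cinder client:

```python
import io
import unittest

class FakeVolumesClient:
    """Minimal stand-in for Tempest's Cinder volumes client."""
    def __init__(self):
        self.volumes = set()

    def create_volume(self, name):
        self.volumes.add(name)
        return name

    def delete_volume(self, name):
        # A real client would also wait for the delete to complete here.
        self.volumes.discard(name)

client = FakeVolumesClient()

class RescueTest(unittest.TestCase):
    def test_rescued_vm_attach_volume(self):
        # Create the volume inside the test rather than in setUpClass...
        vol = client.create_volume('test_attach')
        # ...and register cleanup so the delete runs even if the test fails.
        self.addCleanup(client.delete_volume, vol)
        self.assertIn(vol, client.volumes)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RescueTest)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```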

** No longer affects: nova

** No longer affects: cinder

** Changed in: tempest
   Status: Invalid = Triaged

** Changed in: tempest
 Assignee: (unassigned) = Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254772

Title:
  tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML
  setUpClass times-out on attaching volume

Status in Tempest:
  Triaged

Bug description:
  2013-11-25 15:42:45.769 | 
==
  2013-11-25 15:42:45.770 | FAIL: setUpClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-11-25 15:42:45.770 | setUpClass 
(tempest.api.compute.servers.test_server_rescue.ServerRescueTestXML)
  2013-11-25 15:42:45.770 | 
--
  2013-11-25 15:42:45.770 | _StringException: Traceback (most recent call last):
  2013-11-25 15:42:45.770 |   File 
tempest/api/compute/servers/test_server_rescue.py, line 51, in setUpClass
  2013-11-25 15:42:45.770 | cls.volume_to_attach['id'], 'available')
  2013-11-25 15:42:45.771 |   File 
tempest/services/compute/xml/volumes_extensions_client.py, line 140, in 
wait_for_volume_status
  2013-11-25 15:42:45.771 | raise exceptions.TimeoutException(message)
  2013-11-25 15:42:45.771 | TimeoutException: Request timed out
  2013-11-25 15:42:45.771 | Details: Volume test_attach failed to reach 
available status within the required time (196 s).
  2013-11-25 15:42:45.771 | 

  
  
http://logs.openstack.org/77/56577/9/check/check-tempest-devstack-vm-postgres-full/f5fe3ff/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1254772/+subscriptions



[Yahoo-eng-team] [Bug 1273386] [NEW] Neutron namespace metadata proxy triggers kernel crash

2014-01-27 Thread Salvatore Orlando
Public bug reported:

In the past 9 days we have been seeing very frequent occurrences of this
kernel crash: http://paste.openstack.org/show/61869/

Even though the particular crash pasted here is triggered by dnsmasq, in
almost all cases the crash is actually triggered by the neutron metadata
proxy.

This also affects nova badly, since this issue, which appears to be
namespace related, results in a hang while mounting the nbd device for
key injection.

logstash query:
http://logstash.openstack.org/#eyJzZWFyY2giOiJcImtlcm5lbCBCVUcgYXQgL2J1aWxkL2J1aWxkZC9saW51eC0zLjIuMC9mcy9idWZmZXIuYzoyOTE3XCIgYW5kIGZpbGVuYW1lOnN5c2xvZy50eHQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wMS0xNlQxODo1MDo0OCswMDowMCIsInRvIjoiMjAxNC0wMS0yN1QxOToxNjoxMSswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxMzkwODUwMzI2ODY0fQ==

We have seen about 398 hits since the bug started to manifest.
The decreased hit rate in the past few days is due to fewer neutron
patches being pushed.

** Affects: neutron
 Importance: Critical
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1273386

Title:
  Neutron namespace metadata proxy triggers kernel crash

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  In the past 9 days we have been seeing very frequent occurrences of
  this kernel crash: http://paste.openstack.org/show/61869/

  Even though the particular crash pasted here is triggered by dnsmasq,
  in almost all cases the crash is actually triggered by the neutron
  metadata proxy.

  This also affects nova badly, since this issue, which appears to be
  namespace related, results in a hang while mounting the nbd device
  for key injection.

  logstash query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcImtlcm5lbCBCVUcgYXQgL2J1aWxkL2J1aWxkZC9saW51eC0zLjIuMC9mcy9idWZmZXIuYzoyOTE3XCIgYW5kIGZpbGVuYW1lOnN5c2xvZy50eHQiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNC0wMS0xNlQxODo1MDo0OCswMDowMCIsInRvIjoiMjAxNC0wMS0yN1QxOToxNjoxMSswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxMzkwODUwMzI2ODY0fQ==

  We have seen about 398 hits since the bug started to manifest.
  The decreased hit rate in the past few days is due to fewer neutron
  patches being pushed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1273386/+subscriptions



[Yahoo-eng-team] [Bug 1267215] Re: policy.v3cloudsample.json contains unparsable items

2014-01-27 Thread Dolph Mathews
** Changed in: keystone/havana
   Status: Invalid = New

** Changed in: keystone/havana
   Importance: Undecided = Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1267215

Title:
  policy.v3cloudsample.json contains unparsable items

Status in OpenStack Identity (Keystone):
  Invalid
Status in Keystone havana series:
  New

Bug description:
  The havana policy.v3cloudsample.json file contains something that can't be 
parsed.  Keystone logs 'Can't load the rule' (or something similar), failing 
on split(':'):
  identity:list_role_assignments: [[admin_on_domain_filter],
     [admin_on_project_filter]],

  I guess it should be
  identity:list_role_assignments: [[rule:admin_on_domain_filter],
     [rule:admin_on_project_filter]],

  Also, I found that I could hardly work with grants for projects inside a 
non-default domain.
  I solved it by changing the rules (the ones provided in the sample 
policy.json can probably be changed in the same way):
     admin_on_domain_target : [[rule:admin_required, 
domain_id:%(target.domain.id)s]],
  admin_on_project_target : [[rule:admin_required, 
project_id:%(target.project.id)s]],
  identity:check_grant: [[rule:admin_on_project_target],
   [rule:admin_on_domain_target]],
  identity:list_grants: [[rule:admin_on_project_target],
   [rule:admin_on_domain_target]],
  identity:create_grant: [[rule:admin_on_project_target],
    [rule:admin_on_domain_target]],
  identity:revoke_grant: [[rule:admin_on_project_target],
    [rule:admin_on_domain_target]],

  to
  admin_on_project_target : [[rule:admin_required, 
project_id:%(target.project.id)s]],
  admin_on_project_domain_target : [[rule:admin_required, 
domain_id:%(target.project.domain_id)s]],
  grant_admin : [[rule:admin_on_project_target],
   [rule:admin_on_project_domain_target]],
  identity:check_grant: [[rule:grant_admin]],
  identity:list_grants: [[rule:grant_admin]],
  identity:create_grant: [[rule:grant_admin]],
  identity:revoke_grant: [[rule:grant_admin]],
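
A minimal sketch of why the bare rule names fail: old-style policy
loading splits each check on ':' into a kind and a match, so a name
without the 'rule:' prefix cannot be unpacked. The parser below is a
hypothetical simplification for illustration, not keystone's actual
code:

```python
def parse_check(rule):
    # Hypothetical minimal version of the 'kind:match' split described
    # in the report; real keystone/oslo policy parsing is more involved.
    kind, match = rule.split(":", 1)  # ValueError for a bare rule name
    return kind, match

# A properly prefixed check parses cleanly.
assert parse_check("rule:admin_on_domain_filter") == (
    "rule", "admin_on_domain_filter")

# A bare name, as in the broken sample file, cannot be unpacked.
try:
    parse_check("admin_on_domain_filter")
    raised = False
except ValueError:
    raised = True
assert raised  # this is the "Can't load the rule" failure mode
```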

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1267215/+subscriptions



[Yahoo-eng-team] [Bug 1240753] Re: don't use paste to configure authtoken

2014-01-27 Thread Dean Troyer
** Changed in: devstack
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240753

Title:
  don't use paste to configure authtoken

Status in Cinder:
  Fix Released
Status in devstack - openstack dev environments:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in tripleo - openstack on openstack:
  Fix Committed

Bug description:
  Several services (Nova/Cinder) still default to using api-paste.ini
  for keystoneclient's authtoken configuration. We should move towards
  using a more editable config files (nova.conf/cinder.conf) for this...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1240753/+subscriptions



[Yahoo-eng-team] [Bug 1271775] Re: common log format

2014-01-27 Thread Brant Knudson
Turns out it's not keystone's logging.conf that has to change; devstack
had to change to stop using keystone's logging.conf. I believe I fixed
this with https://review.openstack.org/#/c/68530/ in devstack.

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: devstack
 Assignee: (unassigned) = Brant Knudson (blk-u)

** Changed in: keystone
   Status: Triaged = Won't Fix

** Changed in: devstack
   Status: New = Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1271775

Title:
  common log format

Status in devstack - openstack dev environments:
  Fix Committed
Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  
  The different projects need to use a common log format.

  The requirements are:

  1) same format for all the logs so that they're easily parsed
  2) format must use milliseconds

  There might be a format defined by oslo that we can use.

  Also, this must be what devstack sets up, so it's probably keystone's
  logging.conf that has to change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1271775/+subscriptions



[Yahoo-eng-team] [Bug 1266921] Re: Log entry when a regular user does keystone user-list is not helpful

2014-01-27 Thread Dolph Mathews
Given that this only applies to the v2.0 API and the solution would be a
non-trivial effort at this point, marking as Won't Fix. Happy to re-open
if anyone has a simple solution.

** Changed in: keystone
   Status: In Progress = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1266921

Title:
   Log entry when a regular user does keystone user-list is not helpful

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  keystone user-list is an admin-only command.  When a regular user
  tries to execute it, you get a helpful response at the command line:

  [root@rhel ~(keystone_username)]# keystone user-list
  You are not authorized to perform the requested action: admin_required (HTTP 
403)

  However, this same message is in /var/log/keystone/keystone.log:

  2012-12-17 17:27:29  WARNING [keystone.common.wsgi] You are not
  authorized to perform the requested action: admin_required

  This log entry is not helpful.  As an administrator, all this tells
  you is that *someone* tried to execute *something* that they weren't
  allowed to.  Without any information about who or what, the log entry
  isn't useful.

  Originally reported:

  https://bugzilla.redhat.com/show_bug.cgi?id=888066

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1266921/+subscriptions



[Yahoo-eng-team] [Bug 1273451] [NEW] improper use of mock with stevedore in tests

2014-01-27 Thread Doug Hellmann
Public bug reported:

The tests in nova/tests/test_hook.py are mocking a private part of the
stevedore API (_load_plugins) instead of using
ExtensionManager.make_test_instance() to create a test version of an
ExtensionManager and passing that somewhere instead.

See https://review.openstack.org/#/c/69475/1 as a first-pass work-
around.
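
The recommended pattern can be sketched like this. The Extension and
ExtensionManager classes below are tiny stand-ins mirroring only the
relevant slice of stevedore's API (the real entry point is
stevedore.extension.ExtensionManager.make_test_instance()), so the
snippet runs without stevedore installed:

```python
class Extension:
    # Stand-in for stevedore.extension.Extension(name, entry_point,
    # plugin, obj); only the fields used below are kept.
    def __init__(self, name, entry_point, plugin, obj):
        self.name = name
        self.obj = obj

class ExtensionManager:
    # Stand-in for the relevant slice of stevedore's ExtensionManager.
    def __init__(self, extensions):
        self.extensions = extensions

    @classmethod
    def make_test_instance(cls, extensions):
        # Builds a manager from ready-made extensions, bypassing
        # entry-point discovery entirely, so no private _load_plugins
        # needs to be mocked.
        return cls(extensions)

    def map(self, func):
        return [func(ext) for ext in self.extensions]

class FakeHook:
    def pre(self):
        return "pre"

mgr = ExtensionManager.make_test_instance(
    [Extension("fake_hook", None, None, FakeHook())])
assert [ext.name for ext in mgr.extensions] == ["fake_hook"]
assert mgr.map(lambda ext: ext.obj.pre()) == ["pre"]
```

The test version of the manager is then passed to the code under test
instead of patching stevedore internals, so stevedore is free to change
its private methods without breaking the tests.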

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  The tests in nova/tests/test_hook.py are mocking a private part of the
  stevedore API (_load_plugins) instead of using
  ExtensionManager.make_test_instance() to create a test version of an
  ExtensionManager and passing that somewhere instead.
+ 
+ See https://review.openstack.org/#/c/69475/1 as a first-pass work-
+ around.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273451

Title:
  improper use of mock with stevedore in tests

Status in OpenStack Compute (Nova):
  New

Bug description:
  The tests in nova/tests/test_hook.py are mocking a private part of the
  stevedore API (_load_plugins) instead of using
  ExtensionManager.make_test_instance() to create a test version of an
  ExtensionManager and passing that somewhere instead.

  See https://review.openstack.org/#/c/69475/1 as a first-pass work-
  around.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273451/+subscriptions



[Yahoo-eng-team] [Bug 1273455] [NEW] stevedore 0.14 changes _load_plugins parameter list, mocking breaks

2014-01-27 Thread Sean Dague
Public bug reported:

In stevedore 0.14 the signature of _load_plugins changed; it now takes
an extra parameter. The nova and ceilometer unit tests mocked the old
signature, which is causing breaks in the gate.
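
One way to make such mocks fail loudly when a signature changes is
mock's autospec feature, sketched here with a stand-in class (the
parameter names are illustrative, not stevedore's exact signature):

```python
from unittest import mock

class Manager:
    # Stand-in with a wider, stevedore-0.14-style arity; parameter
    # names are illustrative only.
    def _load_plugins(self, invoke_on_load, invoke_args, invoke_kwds,
                      verify_requirements):
        return []

# A plain mock accepts any call, so a test written against the old,
# shorter signature keeps passing silently after the API changes.
with mock.patch.object(Manager, "_load_plugins"):
    Manager()._load_plugins(True, (), {})  # stale 3-argument call, no error

# autospec=True makes the mock enforce the real signature, so the same
# stale call fails loudly instead of hiding the break.
caught = False
with mock.patch.object(Manager, "_load_plugins", autospec=True):
    try:
        Manager()._load_plugins(True, (), {})
    except TypeError:
        caught = True
assert caught
```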

** Affects: ceilometer
 Importance: Critical
 Assignee: Doug Hellmann (doug-hellmann)
 Status: New

** Affects: nova
 Importance: Critical
 Assignee: Sean Dague (sdague)
 Status: In Progress

** Affects: oslo.messaging
 Importance: Critical
 Status: New

** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New = In Progress

** Changed in: nova
   Importance: Undecided = Critical

** Changed in: nova
 Assignee: (unassigned) = Sean Dague (sdague)

** Changed in: ceilometer
   Importance: Undecided = Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273455

Title:
  stevedore 0.14 changes _load_plugins parameter list, mocking breaks

Status in OpenStack Telemetry (Ceilometer):
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in Messaging API for OpenStack:
  New

Bug description:
  In stevedore 0.14 the signature of _load_plugins changed; it now takes
  an extra parameter. The nova and ceilometer unit tests mocked the old
  signature, which is causing breaks in the gate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1273455/+subscriptions



[Yahoo-eng-team] [Bug 1273455] Re: stevedore 0.14 changes _load_plugins parameter list, mocking breaks

2014-01-27 Thread Sean Dague
** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

** Changed in: oslo.messaging
   Importance: Undecided = Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273455

Title:
  stevedore 0.14 changes _load_plugins parameter list, mocking breaks

Status in OpenStack Telemetry (Ceilometer):
  New
Status in OpenStack Compute (Nova):
  In Progress
Status in Messaging API for OpenStack:
  New

Bug description:
  In stevedore 0.14 the signature of _load_plugins changed; it now takes
  an extra parameter. The nova and ceilometer unit tests mocked the old
  signature, which is causing breaks in the gate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1273455/+subscriptions



[Yahoo-eng-team] [Bug 1257617] Re: Nova is unable to authenticate via keystone

2014-01-27 Thread Dolph Mathews
** Also affects: devstack
   Importance: Undecided
   Status: New

** No longer affects: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1257617

Title:
  Nova is unable to authenticate via keystone

Status in devstack - openstack dev environments:
  New

Bug description:
  I updated the code yesterday with devstack. The latest nova code gave
  me the following error when I tried any nova command.

  
==n-api==
  2013-12-04 13:49:40.151 INFO requests.packages.urllib3.connectionpool [-] 
Starting new HTTPS connection (1): 9.119.148.201
  2013-12-04 13:49:40.472 WARNING keystoneclient.middleware.auth_token [-] 
Retrying on HTTP connection exception: [Errno 1] _ssl.c:504: 
  error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
  2013-12-04 13:49:40.975 INFO requests.packages.urllib3.connectionpool [-] 
Starting new HTTPS connection (1): 9.119.148.201
  2013-12-04 13:49:41.008 WARNING keystoneclient.middleware.auth_token [-] 
Retrying on HTTP connection exception: [Errno 1] _ssl.c:504: 
  error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
  2013-12-04 13:49:42.012 INFO requests.packages.urllib3.connectionpool [-] 
Starting new HTTPS connection (1): 9.119.148.201
  2013-12-04 13:49:42.053 WARNING keystoneclient.middleware.auth_token [-] 
Retrying on HTTP connection exception: [Errno 1] _ssl.c:504: 
  error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
  2013-12-04 13:49:44.058 INFO requests.packages.urllib3.connectionpool [-] 
Starting new HTTPS connection (1): 9.119.148.201
  2013-12-04 13:49:44.100 ERROR keystoneclient.middleware.auth_token [-] HTTP 
connection exception: [Errno 1] _ssl.c:504: error:140770FC:SSL 
routines:SSL23_GET_SERVER_HELLO:unknown protocol
  2013-12-04 13:49:44.103 DEBUG keystoneclient.middleware.auth_token [-] Token 
validation failure. from (pid=7023) _validate_user_token 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:826
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token Traceback 
(most recent call last):
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 821, in _validate_user_token
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token data = 
self.verify_uuid_token(user_token, retry)
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 1096, in verify_uuid_token
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token 
self.auth_version = self._choose_api_version()
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 519, in _choose_api_version
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token 
versions_supported_by_server = self._get_supported_versions()
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 539, in _get_supported_versions
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token 
response, data = self._json_request('GET', '/')
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 749, in _json_request
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token 
response = self._http_request(method, path, **kwargs)
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 714, in _http_request
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token raise 
NetworkError('Unable to communicate with keystone')
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token 
NetworkError: Unable to communicate with keystone
  2013-12-04 13:49:44.103 TRACE keystoneclient.middleware.auth_token 
  2013-12-04 13:49:44.110 WARNING keystoneclient.middleware.auth_token [-] 
Authorization failed for token 4517bf0837d30dcf7b9cd438075c9d92
  2013-12-04 13:49:44.111 INFO keystoneclient.middleware.auth_token [-] Invalid 
user token - rejecting request
  2013-12-04 13:49:44.114 INFO nova.osapi_compute.wsgi.server [-] 9.119.148.201 
GET /v2/26b6d3afa22340a4aa5896068ab58f97/extensions HTTP/1.1 status: 401 len: 
197 time: 3.9667039

  2013-12-04 13:49:44.117 DEBUG keystoneclient.middleware.auth_token [-] 
Authenticating user token from (pid=7023) __call__ 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py:568
  2013-12-04 13:49:44.121 DEBUG 

[Yahoo-eng-team] [Bug 1273455] Re: stevedore 0.14 changes _load_plugins parameter list, mocking breaks

2014-01-27 Thread Doug Hellmann
** Also affects: python-stevedore
   Importance: Undecided
   Status: New

** Changed in: python-stevedore
   Status: New = Fix Released

** Changed in: python-stevedore
 Assignee: (unassigned) = Doug Hellmann (doug-hellmann)

** Changed in: python-stevedore
   Importance: Undecided = Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273455

Title:
  stevedore 0.14 changes _load_plugins parameter list, mocking breaks

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Messaging API for OpenStack:
  In Progress
Status in Manage plugins for Python applications:
  Fix Released

Bug description:
  In stevedore 0.14 the signature of _load_plugins changed; it now takes
  an extra parameter. The nova and ceilometer unit tests mocked the old
  signature, which is causing breaks in the gate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1273455/+subscriptions



[Yahoo-eng-team] [Bug 1273478] [NEW] NetworkInfoAsyncWrapper __str__ can cause deadlock when called from a log message

2014-01-27 Thread Pedro Marques
Public bug reported:

CPython logging library generates the string representation of the
message to log under a lock.

def handle(self, record):

Conditionally emit the specified logging record.

Emission depends on filters which may have been added to the handler.
Wrap the actual emission of the record with acquisition/release of
the I/O thread lock. Returns whether the filter passed the record for
emission.

rv = self.filter(record)
if rv:
self.acquire()
try:
self.emit(record)
finally:
self.release()
return rv


Nova will use the __str__ method of the NetworkInfoAsyncWrapper when logging a 
message as in:

nova/virt/libvirt/driver.py:to_xml()

LOG.debug(_('Start to_xml instance=%(instance)s '
'network_info=%(network_info)s '
'disk_info=%(disk_info)s '
'image_meta=%(image_meta)s rescue=%(rescue)s'
'block_device_info=%(block_device_info)s'),
  {'instance': instance, 'network_info': network_info,
   'disk_info': disk_info, 'image_meta': image_meta,
   'rescue': rescue, 'block_device_info': block_device_info})

Currently this causes the __str__ method to be called under the logging
lock:

  File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
3058, in to_xml
'rescue': rescue, 'block_device_info': block_device_info})
  File /usr/lib/python2.7/logging/__init__.py, line 1412, in debug
self.logger.debug(msg, *args, **kwargs)
  File /usr/lib/python2.7/logging/__init__.py, line 1128, in debug
self._log(DEBUG, msg, args, **kwargs)
  File /usr/lib/python2.7/logging/__init__.py, line 1258, in _log
self.handle(record)
  File /usr/lib/python2.7/logging/__init__.py, line 1268, in handle
self.callHandlers(record)
  File /usr/lib/python2.7/logging/__init__.py, line 1308, in callHandlers
hdlr.handle(record)
  File /usr/lib/python2.7/logging/__init__.py, line 748, in handle
self.emit(record)
  File /usr/lib/python2.7/logging/handlers.py, line 414, in emit
logging.FileHandler.emit(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 930, in emit
StreamHandler.emit(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 846, in emit
msg = self.format(record)
  File /usr/lib/python2.7/logging/__init__.py, line 723, in format
return fmt.format(record)
  File /usr/lib/python2.7/dist-packages/nova/openstack/common/log.py, line 
517, in format
return logging.Formatter.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 464, in format
record.message = record.getMessage()
  File /usr/lib/python2.7/logging/__init__.py, line 328, in getMessage
msg = msg % self.args
  File /usr/lib/python2.7/dist-packages/nova/network/model.py, line 383, in 
__str__
return self._sync_wrapper(fn, *args, **kwargs)

This then waits for an eventlet to complete. That eventlet may itself
attempt to log a message as it executes.

This sequence of operations can produce a deadlock between a greenlet
thread waiting for the async operation to finish and the async job
itself, if it decides to log a message.
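
The lazy-formatting behaviour can be demonstrated without eventlet: a
%-style logging argument is only rendered inside the handler, after its
I/O lock is taken. The wrapper class below is a hypothetical stand-in
that merely records when __str__ runs; calling str() before the logging
call moves any blocking outside the lock:

```python
import io
import logging

calls = []

class AsyncWrapper:
    # Hypothetical stand-in for NetworkInfoAsyncWrapper: in nova,
    # __str__ blocks waiting for a greenthread; here it only records
    # that it was invoked.
    def __str__(self):
        calls.append("str")
        return "<network_info>"

stream = io.StringIO()
log = logging.getLogger("demo")
log.setLevel(logging.DEBUG)
log.propagate = False
log.addHandler(logging.StreamHandler(stream))

# Risky: the wrapper is kept as a lazy argument, so __str__ runs inside
# Handler.handle(), i.e. while the handler's I/O lock is held.
log.debug("network_info=%s", AsyncWrapper())

# Safer: render the potentially blocking object first, so any waiting
# happens before the logging machinery takes its lock.
rendered = str(AsyncWrapper())
log.debug("network_info=%s", rendered)

assert calls == ["str", "str"]
assert stream.getvalue().count("network_info=<network_info>") == 2
```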

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273478

Title:
  NetworkInfoAsyncWrapper __str__ can cause deadlock when called from a
  log message

Status in OpenStack Compute (Nova):
  New

Bug description:
  CPython logging library generates the string representation of the
  message to log under a lock.

  def handle(self, record):
  
  Conditionally emit the specified logging record.

  Emission depends on filters which may have been added to the handler.
  Wrap the actual emission of the record with acquisition/release of
  the I/O thread lock. Returns whether the filter passed the record for
  emission.
  
  rv = self.filter(record)
  if rv:
  self.acquire()
  try:
  self.emit(record)
  finally:
  self.release()
  return rv

  
  Nova will use the __str__ method of the NetworkInfoAsyncWrapper when logging 
a message as in:

  nova/virt/libvirt/driver.py:to_xml()

  LOG.debug(_('Start to_xml instance=%(instance)s '
  'network_info=%(network_info)s '
  'disk_info=%(disk_info)s '
  'image_meta=%(image_meta)s rescue=%(rescue)s'
  'block_device_info=%(block_device_info)s'),
{'instance': instance, 'network_info': network_info,
 'disk_info': disk_info, 'image_meta': image_meta,
 

[Yahoo-eng-team] [Bug 1269204] Re: tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescue_paused_instance fails sporadically in gate jobs

2014-01-27 Thread Matt Riedemann
*** This bug is a duplicate of bug 1226412 ***
https://bugs.launchpad.net/bugs/1226412

4 hits on this in the last 7 days:

message:Server AND message:failed to reach PAUSED status and task
state \None\ within the required time AND message:Current status:
ACTIVE. AND filename:console.html

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU2VydmVyXCIgQU5EIG1lc3NhZ2U6XCJmYWlsZWQgdG8gcmVhY2ggUEFVU0VEIHN0YXR1cyBhbmQgdGFzayBzdGF0ZSBcXFwiTm9uZVxcXCIgd2l0aGluIHRoZSByZXF1aXJlZCB0aW1lXCIgQU5EIG1lc3NhZ2U6XCJDdXJyZW50IHN0YXR1czogQUNUSVZFLlwiIEFORCBmaWxlbmFtZTpcImNvbnNvbGUuaHRtbFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzkwODY1OTM3MTcwfQ==

Sometimes the task_state is None, sometimes it's 'pausing', so looks
like the instance is transitioning states when the timeout is reached.

** This bug has been marked a duplicate of bug 1226412
   guest doesn't reach PAUSED state within 200s in the gate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1269204

Title:
  
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescue_paused_instance
  fails sporadically in gate jobs

Status in OpenStack Compute (Nova):
  New

Bug description:
  
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescue_paused_instance
  fails sporadically in gate jobs

  See: http://logs.openstack.org/08/66108/3/check/check-tempest-dsvm-
  full/fdbbfd3/console.html

  2014-01-15 00:40:01.541 | 
==
  2014-01-15 00:40:01.542 | FAIL: 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescue_paused_instance[gate,negative]
  2014-01-15 00:40:01.542 | 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescue_paused_instance[gate,negative]
  2014-01-15 00:40:01.543 | 
--
  2014-01-15 00:40:01.544 | _StringException: Empty attachments:
  2014-01-15 00:40:01.544 |   stderr
  2014-01-15 00:40:01.544 |   stdout
  2014-01-15 00:40:01.544 | 
  2014-01-15 00:40:01.544 | pythonlogging:'': {{{
  2014-01-15 00:40:01.544 | 2014-01-15 00:12:20,256 Request: POST 
http://127.0.0.1:8774/v2/64edf5122f2d486682ecfaa53bd071b6/servers/83997448-4700-4333-8022-99328e6b5e1f/action
  2014-01-15 00:40:01.545 | 2014-01-15 00:12:20,256 Request Headers: 
{'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': 'Token omitted'}
  2014-01-15 00:40:01.545 | 2014-01-15 00:12:20,256 Request Body: {pause: {}}
  2014-01-15 00:40:01.545 | 2014-01-15 00:12:20,916 Response Status: 202
  2014-01-15 00:40:01.545 | 2014-01-15 00:12:20,916 Nova request id: 
req-f716db87-36c7-46be-8672-9f5fbecd2c04
  2014-01-15 00:40:01.545 | 2014-01-15 00:12:20,916 Response Headers: 
{'content-length': '0', 'date': 'Wed, 15 Jan 2014 00:12:20 GMT', 
'content-type': 'text/html; charset=UTF-8', 'connection': 'close'}
  .
  .
  .
  2014-01-15 00:40:01.934 | 2014-01-15 00:15:37,090 Request: POST 
http://127.0.0.1:8774/v2/64edf5122f2d486682ecfaa53bd071b6/servers/83997448-4700-4333-8022-99328e6b5e1f/action
  2014-01-15 00:40:01.934 | 2014-01-15 00:15:37,090 Request Headers: 
{'Content-Type': 'application/json', 'Accept': 'application/json', 
'X-Auth-Token': 'Token omitted'}
  2014-01-15 00:40:01.934 | 2014-01-15 00:15:37,090 Request Body: {unpause: 
{}}
  2014-01-15 00:40:01.935 | 2014-01-15 00:15:37,138 Response Status: 409
  2014-01-15 00:40:01.935 | 2014-01-15 00:15:37,138 Nova request id: 
req-ff9827ce-fbae-4f3b-addf-8e0ddf6a29a7
  2014-01-15 00:40:01.935 | 2014-01-15 00:15:37,138 Response Headers: 
{'content-length': '105', 'date': 'Wed, 15 Jan 2014 00:15:37 GMT', 
'content-type': 'application/json; charset=UTF-8', 'connection': 'close'}
  2014-01-15 00:40:01.935 | 2014-01-15 00:15:37,138 Response Body: 
{conflictingRequest: {message: Cannot 'unpause' while instance is in 
vm_state active, code: 409}}
  2014-01-15 00:40:01.935 | }}}
  2014-01-15 00:40:01.935 | 
  2014-01-15 00:40:01.935 | traceback-1: {{{
  2014-01-15 00:40:01.936 | Traceback (most recent call last):
  2014-01-15 00:40:01.936 |   File "tempest/api/compute/servers/test_server_rescue.py", line 107, in _unpause
  2014-01-15 00:40:01.936 |     resp, body = self.servers_client.unpause_server(server_id)
  2014-01-15 00:40:01.936 |   File "tempest/services/compute/json/servers_client.py", line 374, in unpause_server
  2014-01-15 00:40:01.936 |     return self.action(server_id, 'unpause', None, **kwargs)
  2014-01-15 00:40:01.936 |   File "tempest/services/compute/json/servers_client.py", line 198, in action
  2014-01-15 00:40:01.936 |     post_body, self.headers)
  2014-01-15 00:40:01.937 |   File "tempest/common/rest_client.py", line 302, in post
  2014-01-15 00:40:01.937 | 

[Yahoo-eng-team] [Bug 1267326] Re: test_create_backup fails due to unexpected image number

2014-01-27 Thread Ken'ichi Ohmichi
Hi David,

OK, I have added glance to this report and will write the details
related to glance.

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1267326

Title:
  test_create_backup fails due to unexpected image number

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Tempest:
  Fix Released

Bug description:
  test_create_backup failed with the following traceback:

  Traceback (most recent call last):
    File "tempest/api/compute/servers/test_server_actions.py", line 298, in test_create_backup
      self.assertEqual(2, len(image_list))
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 324, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 414, in assertThat
      raise MismatchError(matchee, matcher, mismatch, verbose)
  MismatchError: 2 != 3

  This test expects the number of backup images to be 2, the limit imposed by 
the rotation.
  However, we can get 3 images, and the test fails.

  The logs are:
  * 
http://logs.openstack.org/65/63365/5/check/check-tempest-dsvm-postgres-full/996c8f9/
  * 
http://logs.openstack.org/65/63365/5/gate/gate-tempest-dsvm-postgres-full/4880d61/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1267326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267326] Re: test_create_backup fails due to unexpected image number

2014-01-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/69369
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=06d37cfbcf387fac74f558a2c6a1c66c33b6d9f2
Submitter: Jenkins
Branch:master

commit 06d37cfbcf387fac74f558a2c6a1c66c33b6d9f2
Author: Ken'ichi Ohmichi oomi...@mxs.nes.nec.co.jp
Date:   Tue Jan 28 07:55:46 2014 +0900

Specify 'active' status for deleting situations

The test creates 3 backups with a rotation of 2 and checks that
2 backups exist by getting the image list. test_create_backup
sometimes fails due to the existence of 3 backups.

The Glance v1 delete_image API changes an image's status to
'deleted' first, then sets the deleted flag to 'true'. If the list
is fetched between the status change and the flag change, it can
include the backup that is being deleted, like the following:

{"images": [
  {"status": "deleted", "name": "backup-1-tempest-438772029",
   "deleted": false, ..},
  {"status": "active", "name": "backup-2-tempest-2111479443",
   "deleted": false, ..},

To avoid this situation, this patch adds the status 'active' to the
calls which get the backup list.

Change-Id: I59966534a8eb1430604cada1f64b8c8df46a5f17
Closes-Bug: #1267326
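
The race the commit message describes can be sketched in a few lines
(hypothetical data and helper name, not the actual Tempest client code):

```python
# Glance v1 sets status='deleted' before flipping the 'deleted' flag,
# so an unfiltered listing can still show an image mid-deletion.

def list_backups(images, status=None):
    """Return backup images, optionally filtered by status."""
    result = [img for img in images if img["name"].startswith("backup-")]
    if status is not None:
        result = [img for img in result if img["status"] == status]
    return result

images = [
    {"status": "deleted", "name": "backup-1", "deleted": False},  # mid-delete
    {"status": "active", "name": "backup-2", "deleted": False},
    {"status": "active", "name": "backup-3", "deleted": False},
]

assert len(list_backups(images)) == 3             # racy: 3 backups visible
assert len(list_backups(images, "active")) == 2   # fix: only the rotated set
```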


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1267326

Title:
  test_create_backup fails due to unexpected image number

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Tempest:
  Fix Released

Bug description:
  test_create_backup failed with the following traceback:

  Traceback (most recent call last):
    File "tempest/api/compute/servers/test_server_actions.py", line 298, in test_create_backup
      self.assertEqual(2, len(image_list))
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 324, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 414, in assertThat
      raise MismatchError(matchee, matcher, mismatch, verbose)
  MismatchError: 2 != 3

  This test expects the number of backup images to be 2, the limit imposed by 
the rotation.
  However, we can get 3 images, and the test fails.

  The logs are:
  * 
http://logs.openstack.org/65/63365/5/check/check-tempest-dsvm-postgres-full/996c8f9/
  * 
http://logs.openstack.org/65/63365/5/gate/gate-tempest-dsvm-postgres-full/4880d61/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1267326/+subscriptions



[Yahoo-eng-team] [Bug 1273478] Re: NetworkInfoAsyncWrapper __str__ can cause deadlock when called from a log message

2014-01-27 Thread Davanum Srinivas (DIMS)
** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273478

Title:
  NetworkInfoAsyncWrapper __str__ can cause deadlock when called from a
  log message

Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  The CPython logging library generates the string representation of the
  message to log while holding a lock.

  def handle(self, record):
      """
      Conditionally emit the specified logging record.

      Emission depends on filters which may have been added to the handler.
      Wrap the actual emission of the record with acquisition/release of
      the I/O thread lock. Returns whether the filter passed the record for
      emission.
      """
      rv = self.filter(record)
      if rv:
          self.acquire()
          try:
              self.emit(record)
          finally:
              self.release()
      return rv

  
  Nova will use the __str__ method of the NetworkInfoAsyncWrapper when logging 
a message as in:

  nova/virt/libvirt/driver.py:to_xml()

  LOG.debug(_('Start to_xml instance=%(instance)s '
  'network_info=%(network_info)s '
  'disk_info=%(disk_info)s '
  'image_meta=%(image_meta)s rescue=%(rescue)s'
  'block_device_info=%(block_device_info)s'),
{'instance': instance, 'network_info': network_info,
 'disk_info': disk_info, 'image_meta': image_meta,
 'rescue': rescue, 'block_device_info': block_device_info})

  Currently this causes the __str__ method to be called under the
  logging lock:

    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3058, in to_xml
      'rescue': rescue, 'block_device_info': block_device_info})
    File "/usr/lib/python2.7/logging/__init__.py", line 1412, in debug
      self.logger.debug(msg, *args, **kwargs)
    File "/usr/lib/python2.7/logging/__init__.py", line 1128, in debug
      self._log(DEBUG, msg, args, **kwargs)
    File "/usr/lib/python2.7/logging/__init__.py", line 1258, in _log
      self.handle(record)
    File "/usr/lib/python2.7/logging/__init__.py", line 1268, in handle
      self.callHandlers(record)
    File "/usr/lib/python2.7/logging/__init__.py", line 1308, in callHandlers
      hdlr.handle(record)
    File "/usr/lib/python2.7/logging/__init__.py", line 748, in handle
      self.emit(record)
    File "/usr/lib/python2.7/logging/handlers.py", line 414, in emit
      logging.FileHandler.emit(self, record)
    File "/usr/lib/python2.7/logging/__init__.py", line 930, in emit
      StreamHandler.emit(self, record)
    File "/usr/lib/python2.7/logging/__init__.py", line 846, in emit
      msg = self.format(record)
    File "/usr/lib/python2.7/logging/__init__.py", line 723, in format
      return fmt.format(record)
    File "/usr/lib/python2.7/dist-packages/nova/openstack/common/log.py", line 517, in format
      return logging.Formatter.format(self, record)
    File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
      record.message = record.getMessage()
    File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
      msg = msg % self.args
    File "/usr/lib/python2.7/dist-packages/nova/network/model.py", line 383, in __str__
      return self._sync_wrapper(fn, *args, **kwargs)

  This then waits for an eventlet to complete. This eventlet may itself
  attempt to use a log message as it executes.

  This sequence of operations can produce a deadlock between a greenlet
  thread waiting for the async operation to finish and the async job
  itself, if it decides to log a message.
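
A minimal, self-contained sketch of the hazard, using a stand-in class
rather than Nova's NetworkInfoAsyncWrapper: logging defers `msg % args`
until the handler holds its I/O lock, so an argument's __str__ runs under
that lock; resolving the string before the log call avoids it.

```python
import io
import logging

class LazyRepr:
    """Stand-in for an object whose __str__ blocks on async work."""
    def __init__(self):
        self.calls = 0
    def __str__(self):
        # In Nova, NetworkInfoAsyncWrapper.__str__ waits on a greenthread
        # here; under the logging lock, that wait can deadlock.
        self.calls += 1
        return "<network_info>"

stream = io.StringIO()
logging.basicConfig(stream=stream, level=logging.DEBUG, force=True)
log = logging.getLogger("demo")

lazy = LazyRepr()
log.debug("network_info=%s", lazy)   # __str__ runs under the handler lock
assert lazy.calls == 1

safe = str(lazy)                     # resolve outside the lock first
log.debug("network_info=%s", safe)   # no __str__ under the lock now
assert "network_info=<network_info>" in stream.getvalue()
```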

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273478/+subscriptions



[Yahoo-eng-team] [Bug 1273496] [NEW] libvirt iSCSI driver sets is_block_dev=False

2014-01-27 Thread Iain MacDonnell
Public bug reported:

Trying to use iSCSI with libvirt/Xen, attaching volumes to instances was
failing. I tracked this down to the libvirt XML looking like:

<disk type="block" device="disk">
  <driver name="file" type="raw" cache="none"/>
  <source dev="/dev/disk/by-path/ip-192.168.8.11:3260-iscsi-iqn.1986-03.com.sun:02:ecd142ab-b1c7-6bcf-8f91-f55b6c766bcc-lun-0"/>
  <target bus="xen" dev="xvdb"/>
  <serial>e8c640c6-641b-4940-88f2-79555cdd5551</serial>
</disk>


The driver name should be "phy", not "file".


More digging led to the iSCSI volume driver in nova/virt/libvirt/volume.py, 
which does:

class LibvirtISCSIVolumeDriver(LibvirtBaseVolumeDriver):
    """Driver to attach Network volumes to libvirt."""
    def __init__(self, connection):
        super(LibvirtISCSIVolumeDriver,
              self).__init__(connection, is_block_dev=False)


Surely is_block_dev should be True for iSCSI?? Changing this makes the 
problem go away - now pick_disk_driver_name() in nova/virt/libvirt/utils.py 
does the right thing and my volume attaches successfully.

Am I missing something here... ?

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273496

Title:
  libvirt iSCSI driver sets is_block_dev=False

Status in OpenStack Compute (Nova):
  New

Bug description:
  Trying to use iSCSI with libvirt/Xen, attaching volumes to instances
  was failing. I tracked this down to the libvirt XML looking like:

  <disk type="block" device="disk">
    <driver name="file" type="raw" cache="none"/>
    <source dev="/dev/disk/by-path/ip-192.168.8.11:3260-iscsi-iqn.1986-03.com.sun:02:ecd142ab-b1c7-6bcf-8f91-f55b6c766bcc-lun-0"/>
    <target bus="xen" dev="xvdb"/>
    <serial>e8c640c6-641b-4940-88f2-79555cdd5551</serial>
  </disk>


  The driver name should be "phy", not "file".

  
  More digging led to the iSCSI volume driver in nova/virt/libvirt/volume.py, 
which does:

  class LibvirtISCSIVolumeDriver(LibvirtBaseVolumeDriver):
      """Driver to attach Network volumes to libvirt."""
      def __init__(self, connection):
          super(LibvirtISCSIVolumeDriver,
                self).__init__(connection, is_block_dev=False)

  
  Surely is_block_dev should be True for iSCSI?? Changing this makes the 
problem go away - now pick_disk_driver_name() in nova/virt/libvirt/utils.py 
does the right thing and my volume attaches successfully.

  Am I missing something here... ?
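
The reporter's point can be shown with a deliberately simplified
illustration (not the actual pick_disk_driver_name() logic, which also
inspects the hypervisor and image type):

```python
# Simplified sketch: the Xen <driver name=...> value hinges on whether
# the source is a block device. 'phy' hands the raw block device to the
# guest; 'file' goes through a file-backed path and fails for iSCSI LUNs.

def pick_disk_driver_name(is_block_dev):
    """Return the libvirt driver name for a Xen guest disk (sketch)."""
    return "phy" if is_block_dev else "file"

assert pick_disk_driver_name(is_block_dev=True) == "phy"    # iSCSI LUN
assert pick_disk_driver_name(is_block_dev=False) == "file"  # file-backed image
```

With is_block_dev=False, the iSCSI path is treated as a file, which is
exactly the broken XML quoted above.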

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273496/+subscriptions



[Yahoo-eng-team] [Bug 1271806] Re: unable to run tests due to missing deps in the virtual env

2014-01-27 Thread David Peraza
This is affecting neutron as well:

Downloading/unpacking psutil>=0.6.1,<1.0 (from -r 
/home/openstack/workspace/neutron/requirements.txt (line 19))
  Could not find a version that satisfies the requirement psutil>=0.6.1,<1.0 
(from -r /home/openstack/workspace/neutron/requirements.txt (line 19)) (from 
versions: 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.2.0, 1.2.1)
  Some externally hosted files were ignored (use --allow-external to allow).
Cleaning up...
No distributions matching the version for psutil>=0.6.1,<1.0 (from -r 
/home/openstack/workspace/neutron/requirements.txt (line 19))
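
The resolution failure can be reproduced with a toy version check
(illustrative only; pip's real specifier handling is more involved):

```python
# Every version the index still hosts is >= 1.1.0, which falls outside
# the pinned range psutil>=0.6.1,<1.0 -- hence "No distributions matching".

def parse(v):
    """Parse a dotted version string into a comparable tuple."""
    return tuple(int(x) for x in v.split("."))

def satisfies(version, lower="0.6.1", upper="1.0"):
    return parse(lower) <= parse(version) < parse(upper)

available = ["1.1.0", "1.1.1", "1.1.2", "1.1.3", "1.2.0", "1.2.1"]
assert not any(satisfies(v) for v in available)
```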

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1271806

Title:
  unable to run tests due to missing deps in the virtual env

Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed
Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On both my Ubuntu box and my Mac, I've been unable to run the glance
  tests since this evening due to a missing dependency, specifically a
  version of psutil between 0.6 and 1.0. The archive only has 1.1 and
  up. Here are the logs:

  Downloading/unpacking psutil>=0.6.1,<1.0 (from -r 
/Users/mfischer/code/glance/test-requirements.txt (line 19))
 
http://tarballs.openstack.org/oslo.messaging/oslo.messaging-1.2.0a11.tar.gz#egg=oslo.messaging-1.2.0a11
 uses an insecure transport scheme (http). Consider using https if 
tarballs.openstack.org has it available
    Could not find a version that satisfies the requirement psutil>=0.6.1,<1.0 
(from -r /Users/mfischer/code/glance/test-requirements.txt (line 19)) (from 
versions: 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.2.0, 1.2.1)
    Some externally hosted files were ignored (use --allow-external to allow).
  Cleaning up...
  No distributions matching the version for psutil>=0.6.1,<1.0 (from -r 
/Users/mfischer/code/glance/test-requirements.txt (line 19))
  Storing debug log for failure in 
/var/folders/d2/qr0r7fc10j35_lwkz9wwmtxcgp/T/tmpBIMPmg

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1271806/+subscriptions



[Yahoo-eng-team] [Bug 1270608] [NEW] n-cpu 'iSCSI device not found' log causes gate-tempest-dsvm-*-full to fail

2014-01-27 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Changes are failing the gate-tempest-*-full gate due to an error message in the 
logs.
The error message is like:

2014-01-18 20:13:19.437 | Log File: n-cpu
2014-01-18 20:13:20.482 | 2014-01-18 20:04:05.189 ERROR nova.compute.manager 
[req-25a1842c-ce9a-4035-8975-651f6ee5ddfc 
tempest.scenario.manager-tempest-1060379467-user 
tempest.scenario.manager-tempest-1060379467-tenant] [instance: 
0b1c1b55-b520-4ff2-bac2-8457ba3f4b6a] Error: iSCSI device not found at 
/dev/disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.2010-10.org.openstack:volume-a6e86002-dc25-4782-943b-58cc0c68238d-lun-1

Here's logstash for the query:

http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpcImxvZ3Mvc2NyZWVuLW4tY3B1LnR4dFwiIEFORCBtZXNzYWdlOlwiRXJyb3I6IGlTQ1NJIGRldmljZSBub3QgZm91bmQgYXQgL2Rldi9kaXNrL2J5LXBhdGgvaXAtMTI3LjAuMC4xOjMyNjAtaXNjc2ktaXFuLjIwMTAtMTAub3JnLm9wZW5zdGFjazp2b2x1bWUtXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTAxNTA4NTU5NTJ9

shows several failures starting at 2014-01-17T14:00:00

Maybe tempest is doing something that generates the ERROR message and then 
isn't accepting the error message as it should?
Or is nova logging an error message when it shouldn't?

** Affects: nova
 Importance: Undecided
 Assignee: John Griffith (john-griffith)
 Status: Fix Committed


** Tags: testing volumes
-- 
n-cpu 'iSCSI device not found' log causes gate-tempest-dsvm-*-full to fail
https://bugs.launchpad.net/bugs/1270608
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1270608] Re: n-cpu 'iSCSI device not found' log causes gate-tempest-dsvm-*-full to fail

2014-01-27 Thread John Griffith
Addressed by: https://review.openstack.org/#/c/69443/

** Changed in: cinder
   Status: New => Fix Committed

** Project changed: cinder => nova-project

** Project changed: nova-project => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270608

Title:
  n-cpu 'iSCSI device not found' log causes gate-tempest-dsvm-*-full to
  fail

Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  Changes are failing the gate-tempest-*-full gate due to an error message in 
the logs.
  The error message is like:

  2014-01-18 20:13:19.437 | Log File: n-cpu
  2014-01-18 20:13:20.482 | 2014-01-18 20:04:05.189 ERROR nova.compute.manager 
[req-25a1842c-ce9a-4035-8975-651f6ee5ddfc 
tempest.scenario.manager-tempest-1060379467-user 
tempest.scenario.manager-tempest-1060379467-tenant] [instance: 
0b1c1b55-b520-4ff2-bac2-8457ba3f4b6a] Error: iSCSI device not found at 
/dev/disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.2010-10.org.openstack:volume-a6e86002-dc25-4782-943b-58cc0c68238d-lun-1

  Here's logstash for the query:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJmaWxlbmFtZTpcImxvZ3Mvc2NyZWVuLW4tY3B1LnR4dFwiIEFORCBtZXNzYWdlOlwiRXJyb3I6IGlTQ1NJIGRldmljZSBub3QgZm91bmQgYXQgL2Rldi9kaXNrL2J5LXBhdGgvaXAtMTI3LjAuMC4xOjMyNjAtaXNjc2ktaXFuLjIwMTAtMTAub3JnLm9wZW5zdGFjazp2b2x1bWUtXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTAxNTA4NTU5NTJ9

  shows several failures starting at 2014-01-17T14:00:00

  Maybe tempest is doing something that generates the ERROR message and then 
isn't accepting the error message as it should?
  Or is nova logging an error message when it shouldn't?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1270608/+subscriptions



[Yahoo-eng-team] [Bug 1273556] [NEW] Servers sometimes fail to startup in the gate with a permission denied

2014-01-27 Thread Christopher Yeoh
Public bug reported:

Sometimes servers fail to startup - intermittent gate/check bug

Example error:

2014-01-28 02:14:36.309 ERROR nova.scheduler.filter_scheduler 
[req-a617cb8a-4f34-42f9-a4a7-e951cf538219 
InstanceActionsTestJSON-tempest-857832860-user 
InstanceActionsTestJSON-tempest-857832860-tenant] [instance: 
0f9850c1-b5a8-4d62-98ab-11b04ba41165] Error from last host: 
devstack-precise-1390529664 (node devstack-precise-1390529664): [u'Traceback 
(most recent call last):
', u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 1068, in _build_instance
set_access_ip=set_access_ip)
', u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 354, in decorated_function
return function(self, context, *args, **kwargs)
', u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 1478, in _spawn
LOG.exception(_(\'Instance failed to spawn\'), instance=instance)
', u'  File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 68, in __exit__
six.reraise(self.type_, self.value, self.tb)
', u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 1475, in _spawn
block_device_info)
', u'  File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2187, in spawn
admin_pass=admin_password)
', u'  File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2490, in _create_image
project_id=instance[\'project_id\'])
', u'  File "/opt/stack/new/nova/nova/virt/libvirt/imagebackend.py", line 180, in cache
imagecache.refresh_timestamp(base)
', u'  File "/opt/stack/new/nova/nova/virt/libvirt/imagecache.py", line 261, in refresh_timestamp
inner_refresh_timestamp()
', u'  File "/opt/stack/new/nova/nova/openstack/common/lockutils.py", line 249, in inner
return f(*args, **kwargs)
', u'  File "/opt/stack/new/nova/nova/virt/libvirt/imagecache.py", line 258, in inner_refresh_timestamp
os.utime(base_file, None)
', u"OSError: [Errno 13] Permission denied: '/opt/stack/data/nova/instances/_base/307971e863bbda9ebaf13fcfedcb13498a5dbde6'"]

From here:

http://logs.openstack.org/91/58191/4/check/check-tempest-dsvm-
full/0dc8d4d/logs/screen-n-sch.txt.gz#_2014-01-28_02_14_36_309
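
For context, os.utime(path, None) requires write access to the file, so a
cached base image owned by another user raises EACCES. A defensive variant
(a hypothetical sketch, not the eventual Nova fix) could tolerate that
instead of failing the whole spawn:

```python
import errno
import os
import tempfile

def refresh_timestamp(path):
    """Touch the file's mtime; tolerate permission errors (sketch)."""
    try:
        os.utime(path, None)
        return True
    except OSError as e:
        if e.errno in (errno.EACCES, errno.EPERM):
            return False  # leave the timestamp alone; real code would log
        raise

# On a file we own, the refresh succeeds.
with tempfile.NamedTemporaryFile() as f:
    assert refresh_timestamp(f.name)
```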

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273556

Title:
  Servers sometimes fail to startup in the gate with a permission denied

Status in OpenStack Compute (Nova):
  New

Bug description:
  Sometimes servers fail to startup - intermittent gate/check bug

  Example error:

  2014-01-28 02:14:36.309 ERROR nova.scheduler.filter_scheduler 
[req-a617cb8a-4f34-42f9-a4a7-e951cf538219 
InstanceActionsTestJSON-tempest-857832860-user 
InstanceActionsTestJSON-tempest-857832860-tenant] [instance: 
0f9850c1-b5a8-4d62-98ab-11b04ba41165] Error from last host: 
devstack-precise-1390529664 (node devstack-precise-1390529664): [u'Traceback 
(most recent call last):
  ', u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 1068, in _build_instance
  set_access_ip=set_access_ip)
  ', u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 354, in decorated_function
  return function(self, context, *args, **kwargs)
  ', u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 1478, in _spawn
  LOG.exception(_(\'Instance failed to spawn\'), instance=instance)
  ', u'  File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
  ', u'  File "/opt/stack/new/nova/nova/compute/manager.py", line 1475, in _spawn
  block_device_info)
  ', u'  File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2187, in spawn
  admin_pass=admin_password)
  ', u'  File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2490, in _create_image
  project_id=instance[\'project_id\'])
  ', u'  File "/opt/stack/new/nova/nova/virt/libvirt/imagebackend.py", line 180, in cache
  imagecache.refresh_timestamp(base)
  ', u'  File "/opt/stack/new/nova/nova/virt/libvirt/imagecache.py", line 261, in refresh_timestamp
  inner_refresh_timestamp()
  ', u'  File "/opt/stack/new/nova/nova/openstack/common/lockutils.py", line 249, in inner
  return f(*args, **kwargs)
  ', u'  File "/opt/stack/new/nova/nova/virt/libvirt/imagecache.py", line 258, in inner_refresh_timestamp
  os.utime(base_file, None)
  ', u"OSError: [Errno 13] Permission denied: '/opt/stack/data/nova/instances/_base/307971e863bbda9ebaf13fcfedcb13498a5dbde6'"]

  From here:

  http://logs.openstack.org/91/58191/4/check/check-tempest-dsvm-
  full/0dc8d4d/logs/screen-n-sch.txt.gz#_2014-01-28_02_14_36_309

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273556/+subscriptions


[Yahoo-eng-team] [Bug 1273292] Re: Timed out waiting for thing ... to become in-use causes tempest-dsvm-* failures

2014-01-27 Thread John Griffith
*** This bug is a duplicate of bug 1270608 ***
https://bugs.launchpad.net/bugs/1270608

I believe this is a duplicate of:
https://bugs.launchpad.net/nova/+bug/1270608

Checking here in this instance of the failure:
http://logs.openstack.org/36/69236/2/check/check-tempest-dsvm-
full/8820082/logs/screen-n-cpu.txt.gz#_2014-01-26_22_03_48_841

You can see nova timed out after 15 seconds waiting for the iscsi mount
to complete.  Given the VERY heavy load caused by this test I think it
fits with the theory that these ops are horribly slow under heavy load.

I'm marking this as a duplicate, it's the same root cause regardless of
whether it ends up that waiting longer helps us or not.
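
The "waiting longer" theory amounts to a configurable retry loop like the
following sketch (assumed names; not the actual Nova/Cinder code):

```python
import time

def wait_for_device(path_exists, tries=10, interval=0.01):
    """Poll until path_exists() is True or the tries are exhausted.

    A fixed number of tries at a fixed interval (roughly what timed out
    after 15 seconds here) is too short under heavy gate load; making the
    try count configurable lets the deployment wait longer.
    """
    for attempt in range(tries):
        if path_exists():
            return attempt
        time.sleep(interval)
    raise RuntimeError("iSCSI device not found after %d tries" % tries)

# Simulate a device path that only shows up on the 4th poll.
state = {"polls": 0}
def fake_exists():
    state["polls"] += 1
    return state["polls"] >= 4

assert wait_for_device(fake_exists) == 3
```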

** No longer affects: cinder

** Changed in: nova
 Assignee: (unassigned) = John Griffith (john-griffith)

** This bug has been marked a duplicate of bug 1270608
   n-cpu 'iSCSI device not found' log causes gate-tempest-dsvm-*-full to fail

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1273292

Title:
  Timed out waiting for thing ... to become in-use causes tempest-
  dsvm-* failures

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  This is a spin-off of bug 1254890.  That bug was originally covering
  failures for both timing out waiting for an instance to become ACTIVE,
  as well as waiting for a volume to become in-use or available.

  It seems valuable to split out the cases of waiting for volumes to
  become in-use or available into its own bug.

  message:"Details: Timed out waiting for thing" AND message:"to become"
  AND (message:"in-use" OR message:"available")

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGV0YWlsczogVGltZWQgb3V0IHdhaXRpbmcgZm9yIHRoaW5nXCIgQU5EIG1lc3NhZ2U6XCJ0byBiZWNvbWVcIiBBTkQgKG1lc3NhZ2U6XCJpbi11c2VcIiBPUiBtZXNzYWdlOlwiYXZhaWxhYmxlXCIpIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzkwODQwODI1MDkxfQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1273292/+subscriptions
