[Yahoo-eng-team] [Bug 1431784] Re: Traceback in glanceapi and glance registry in devstack

2015-03-13 Thread Anand Shanmugam
This bug is because of the logging context format enabled in glance-api.conf
and glance-registry.conf; there should not be user_id and project_id in it.
So I think this is specific to devstack. Will move this to devstack.
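
A minimal, self-contained sketch of the failure mode (the format string below
is an assumption, not the exact devstack logging configuration): when the
formatter references %(user_id)s but the log record carries no such key,
logging.Formatter raises KeyError: 'user_id' and the handler prints exactly
the kind of "Logged from file ..." traceback quoted in this bug.

import logging

# Hypothetical context format referencing keys that plain records lack.
fmt = '%(asctime)s %(user_id)s %(project_id)s %(message)s'

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(fmt))

log = logging.getLogger('sketch')
log.setLevel(logging.INFO)
log.addHandler(handler)

# The record dict has no 'user_id', so format() raises KeyError and the
# handler reports the traceback on stderr instead of the message.
log.info('listing images')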

** Project changed: glance => devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1431784

Title:
  Traceback in glanceapi and glance registry in devstack

Status in devstack - openstack dev environments:
  New

Bug description:
  When running devstack, I got the following traceback whenever I do a
  listing of glance images from the CLI or Horizon. The traceback occurs both
  in glance-api and glance-registry.

  == glance image-list

  Trace in g-api
  +
  Logged from file policy.py, line 296
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'
  Logged from file client.py, line 401
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'
  Logged from file client.py, line 124
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'

  Traceback from g-registry
  
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'
  Logged from file session.py, line 509
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'
  Logged from file images.py, line 186
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'

  This can be a bug in glance or devstack.

[Yahoo-eng-team] [Bug 1409142] Re: [OSSA 2015-005] Websocket Hijacking Vulnerability in Nova VNC Server (CVE-2015-0259)

2015-03-13 Thread Tristan Cacqueray
** Changed in: ossa
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1409142

Title:
  [OSSA 2015-005] Websocket Hijacking Vulnerability in Nova VNC Server
  (CVE-2015-0259)

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added to the
  bug as attachments.

  OpenStack Vulnerability Team:

  Brian Manifold (bmani...@cisco.com) from Cisco has discovered a
  vulnerability in the Nova VNC server implementation. We have a patch for
  this vulnerability and consider this a very high risk.

  Please email Dave McCowan (dmcco...@cisco.com) for more details on the
  attached patch.

  Issue Details:

  Horizon uses a VNC client which uses websockets to pass information.  The
  Nova VNC server does not validate the origin of the websocket request,
  which allows an attacker to make a websocket request from another domain.
  If the victim opens both an attacker's site and the VNC console
  simultaneously, or if the victim has recently been using the VNC console
  and then visits the attacker's site, the attacker can make a websocket
  request to the Horizon domain and proxy the connection to another
  destination.

  This gives the attacker full read-write access to the VNC console of any
  instance recently accessed by the victim.

  Recommendation:
   Verify the origin field in request header on all websocket requests.

  Threat:
    CWE-345
   * Insufficient Verification of Data Authenticity -- The software does not
  sufficiently verify the origin or authenticity of data, in a way that
  causes it to accept invalid data.

    CWE-346
   * Origin Validation Error -- The software does not properly verify that
  the source of data or communication is valid.

    CWE-441
   * Unintended Proxy or Intermediary ('Confused Deputy') -- The software
  receives a request, message, or directive from an upstream component, but
  the software does not sufficiently preserve the original source of the
  request before forwarding the request to an external actor that is outside
  of the software's control sphere. This causes the software to appear to be
  the source of the request, leading it to act as a proxy or other
  intermediary between the upstream component and the external actor.

  Steps to reproduce:
   1. Login to horizon
   2. Pick an instance, go to console/vnc tab, wait for console to be loaded
   3. In another browser tab or window, load a VNC console script from local
  disk or remote site
   4. Point the newly loaded VNC console to the VNC server and a connection
  is made
  Result:
   The original connection has been hijacked by the second connection

  Root cause:
   Cross-Site WebSocket Hijacking is a concept that has been written about in
  various security blogs.
  One of the recommended countermeasures is to check the Origin header of
  the WebSocket handshake request.
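
  A minimal sketch of the recommended countermeasure (hypothetical helper and
  origin value, not the actual nova websocketproxy change): compare the Origin
  header of the WebSocket handshake against the origin(s) the proxy expects
  and refuse to proxy on a mismatch.

  try:
      from urllib.parse import urlparse   # Python 3
  except ImportError:
      from urlparse import urlparse       # Python 2

  # Assumed deployment value; in practice this comes from configuration.
  ALLOWED_ORIGINS = {('https', 'horizon.example.com')}

  def origin_is_valid(headers):
      # Accept the handshake only if Origin matches an allowed origin.
      origin = headers.get('Origin')
      if not origin:
          return False
      parsed = urlparse(origin)
      return (parsed.scheme, parsed.hostname) in ALLOWED_ORIGINS

  # In the handshake handler: if not origin_is_valid(request_headers),
  # close the connection instead of proxying it to the VNC server.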

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1409142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356053] Re: Doesn't properly get keystone endpoint when Keystone is configured to use templated catalog

2015-03-13 Thread Dmitry Mescheryakov
The corresponding bug for fuel is moved to
https://bugs.launchpad.net/mos/+bug/1431983

** No longer affects: fuel

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1356053

Title:
  Doesn't properly get keystone endpoint when Keystone is configured to
  use templated catalog

Status in devstack - openstack dev environments:
  In Progress
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Python client library for Sahara (ex. Savanna):
  Fix Released
Status in OpenStack Data Processing (Sahara):
  In Progress
Status in Tempest:
  In Progress

Bug description:
  When using the keystone static catalog file to register endpoints 
(http://docs.openstack.org/developer/keystone/configuration.html#file-based-service-catalog-templated-catalog),
 an endpoint registered (correctly) as catalog.region.data_processing gets 
read as data-processing by keystone.
  Thus, when Sahara looks for an endpoint, it is unable to find one for 
data_processing.

  This causes a problem with the commandline interface and the
  dashboard.

  Keystone seems to be converting underscores to dashes here:
  
https://github.com/openstack/keystone/blob/master/keystone/catalog/backends/templated.py#L47

  modifying this line to not perform the replacement seems to work fine
  for me, but may have unintended consequences.
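
  A minimal sketch of the mismatch described above (illustrative only, not
  keystone's actual templated.py code): if the catalog backend derives the
  service type from the templated key and replaces underscores with dashes,
  the registered type no longer matches what the client asks for.

  # Key as registered in the templated catalog file.
  registered_key = 'catalog.RegionOne.data_processing.publicURL'

  # Hypothetical reconstruction of the replacement the bug points at.
  service_type = registered_key.split('.')[2].replace('_', '-')

  print(service_type)                        # 'data-processing'
  print(service_type == 'data_processing')   # False -> no endpoint found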

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1356053/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431987] [NEW] Emit failure notifications for CADF audits events

2015-03-13 Thread Steve Martinelli
Public bug reported:

Currently, notifications are only sent off for successful CRUD events,
however we should also send notifications in the event that an operation
fails.

** Affects: keystone
 Importance: Medium
 Assignee: Steve Martinelli (stevemar)
 Status: In Progress

** Changed in: keystone
   Status: New => Confirmed

** Changed in: keystone
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1431987

Title:
  Emit failure notifications for CADF audits events

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  Currently, notifications are only sent off for successful CRUD events,
  however we should also send notifications in the event that an
  operation fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1431987/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431981] [NEW] LBaaS V2 Create Listeners Exception on Agent

2015-03-13 Thread Franklin Naval
Public bug reported:

I'm seeing the following exception on creating a load balancer with a listener:
https://gist.github.com/fnaval/64e91dd864030b7bff71

This is running on neutron-lbaas against hash:
28b75a656be2f27807aa3d10a12b361534f84ad9

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1431981

Title:
  LBaaS V2 Create Listeners Exception on Agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I'm seeing the following exception on creating a load balancer with a 
listener:
  https://gist.github.com/fnaval/64e91dd864030b7bff71

  This is running on neutron-lbaas against hash:
  28b75a656be2f27807aa3d10a12b361534f84ad9

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1431981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431685] Re: juju nova-compute charm not enabling live-migration via tcp with auth set to none

2015-03-13 Thread Davanum Srinivas (DIMS)
** Project changed: nova => juju

** Project changed: juju => juju-core

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431685

Title:
  juju nova-compute charm not enabling live-migration via tcp with auth
  set to none

Status in juju-core:
  New

Bug description:
  nova.cfg 
  nova-compute:
  openstack-origin: cloud:trusty-juno
  enable-resize: true
  enable-live-migration: true
  migration-auth-type: none
  sysctl: '{ kernel.pid_max : 4194303 }'
  libvirt-image-backend: rbd

  libvirtd.conf 
  #listen_tcp = 1
  #auth_tcp = sasl

  After running live-migration command, the log from the original host
  of a given vm:

  /var/log/nova/nova-compute.log 
  2015-03-13 00:30:01.062 1796 ERROR nova.virt.libvirt.driver [-] [instance: 
92e1fb07-1bbe-4209-a98d-bae5e1d6a36c] Live Migration failure: operation failed: 
Failed to connect to remote libvirt URI qemu+tcp://maas-pute-04/system: unable 
to connect to server at 'maas-pute-04:16509': Connection refused

  After changing the config on the /var/lib/juju/agents/unit-nova-
  compute-1/charm/templates/libvirtd.conf to reflect the intended config
  (tcp_listen = 1 and auth_tcp = none) and restarting the service, it
  throws a config-changed hook error. After running the config-changed
  hook, it works and I am able to live-migrate between the nodes with
  the correct config.

To manage notifications about this bug go to:
https://bugs.launchpad.net/juju-core/+bug/1431685/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424616] Re: A timeout on Chrome 40/ Ubuntu 14.10 causes the user to be stuck at Log In

2015-03-13 Thread Doug Fish
I stumbled across this accidentally today.  I think the problem is
occurring when the _token_ times out, not when the session times out.  I
re-created the problem on both Chrome + FF.

** Changed in: horizon
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1424616

Title:
  A timeout on Chrome 40/ Ubuntu 14.10 causes the user to be stuck at
  Log In

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  If you timeout from Horizon, then clicking Sign In redirects you to
  the Log In page without any warning or error message. This continues
  until the sessionid cookie is manually removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1424616/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308189] Re: Instances in error state should not have Up Time

2015-03-13 Thread Lin Hua Cheng
The submitted fix does not resolve the issue; it only changed the label of
Uptime.

Opening this bug again.

There is an outstanding patch already:
https://review.openstack.org/93630

** Changed in: horizon
Milestone: kilo-2 => kilo-3

** Changed in: horizon
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1308189

Title:
  Instances in error state should not have Up Time

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  When launch instance fails, the instance status is set to Error and
  the Uptime is updated every time the page is loaded even when the
  instance was never up. If the instance current status is Error then
  Uptime should be either --- or 0

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1308189/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431981] Re: LBaaS V2 Create Listeners Exception on Agent

2015-03-13 Thread Franklin Naval
So this is actually working. There was some miscommunication that the agent
needed to be run manually. However, in the newest version of neutron_lbaas,
the agent will start up automatically. Having 2 agents running was causing
multiple failures.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1431981

Title:
  LBaaS V2 Create Listeners Exception on Agent

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I'm seeing the following exception on creating a load balancer with a 
listener:
  https://gist.github.com/fnaval/64e91dd864030b7bff71

  This is running on neutron-lbaas against hash:
  28b75a656be2f27807aa3d10a12b361534f84ad9

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1431981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1423165] Re: https: client can cause nova/cinder to leak sockets for 'get' 'show' 'delete' 'update'

2015-03-13 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423165

Title:
  https: client can cause nova/cinder to leak sockets for 'get' 'show'
  'delete' 'update'

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Glance:
  Fix Released

Bug description:
  
  Other OpenStack services which instantiate a 'https' glanceclient using
  ssl_compression=False and insecure=False (eg Nova, Cinder) are leaking
  sockets due to glanceclient not closing the connection to the Glance
  server.
  
  This could happen for a sub-set of calls, eg 'show', 'delete', 'update'.
  
  netstat -nopd would show the sockets would hang around forever:
  
  ... 127.0.0.1:9292  ESTABLISHED 9552/python  off (0.00/0/0)
  
  urllib's ConnectionPool relies on the garbage collector to tear down
  sockets which are no longer in use. The 'verify_callback' function used to
  validate SSL certs was holding a reference to the VerifiedHTTPSConnection
  instance which prevented the sockets being torn down.
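
  A minimal, self-contained sketch of that reference problem (hypothetical
  class names, not glanceclient's actual code): a verification callback that
  closes over the connection object and is itself kept alive by a long-lived
  holder keeps the connection reachable, so the garbage collector never tears
  the socket down.

  import gc
  import weakref

  class FakeHTTPSConnection(object):
      # Stand-in for the real connection class that owns the socket.
      def __init__(self):
          # The callback closes over self, so whoever keeps the callback
          # alive also keeps this connection (and its socket) alive.
          self.verify_callback = lambda cert: self._verify(cert)

      def _verify(self, cert):
          return True

  long_lived_context = []           # stands in for the SSL context/registry

  conn = FakeHTTPSConnection()
  long_lived_context.append(conn.verify_callback)
  probe = weakref.ref(conn)

  del conn
  gc.collect()
  print(probe() is not None)        # True: the connection was never collected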

  
  --

  to reproduce, set up devstack with nova talking to glance over https (must be 
performing full cert verification) and
  perform a nova operation such as:

  
   $ nova image-meta 53854ea3-23ed-4682-abf7-8415f2d6b7d9 set foo=bar

  you will see connections from nova to glance which have no timeout
  (off):

   $ netstat -nopd | grep 9292

   tcp0  0 127.0.0.1:34204 127.0.0.1:9292
  ESTABLISHED 9552/python  off (0.00/0/0)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1423165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432101] [NEW] non-admin user can't create network by os-tenant-network with nova-network vlanmanager

2015-03-13 Thread Alex Xu
Public bug reported:

alex@hp-pc:~/code/devstack$ nova tenant-network-create net2 10.0.0.0/24

ERROR (ClientException): Create networks failed (HTTP 503) (Request-ID:
req-e1cf8f25-309c-49a8-b460-b56172ac68ce)

I got the error below:

2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
/opt/stack/nova/nova/conductor/manager.py, line 420, in _object_dispatch
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks return 
getattr(target, method)(*args, **kwargs)
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
/opt/stack/nova/nova/objects/base.py, line 207, in wrapper
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks return fn(self, 
self._context, *args, **kwargs)
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
/opt/stack/nova/nova/objects/network.py, line 177, in create
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks db_network = 
db.network_create_safe(context, updates)
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
/opt/stack/nova/nova/db/api.py, line 970, in network_create_safe
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks return 
IMPL.network_create_safe(context, values)
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
/opt/stack/nova/nova/db/sqlalchemy/api.py, line 127, in wrapper
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks 
nova.context.require_admin_context(args[0])
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
/opt/stack/nova/nova/context.py, line 226, in require_admin_context
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks raise 
exception.AdminRequired()
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks AdminRequired: User does 
not have admin privileges
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks
2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks
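
A minimal sketch of the pattern visible in the traceback (hypothetical names,
not nova's actual code): the DB call is wrapped by an admin-only check, so a
request made with a non-admin context raises before anything is created, and
the API surfaces that as HTTP 503.

class AdminRequired(Exception):
    pass

class Context(object):
    def __init__(self, is_admin):
        self.is_admin = is_admin

def require_admin_context(fn):
    # Reject any call whose first argument is a non-admin context.
    def wrapper(context, *args, **kwargs):
        if not context.is_admin:
            raise AdminRequired('User does not have admin privileges')
        return fn(context, *args, **kwargs)
    return wrapper

@require_admin_context
def network_create_safe(context, values):
    return {'label': values.get('label')}

# A non-admin tenant hits the check and never reaches the DB layer:
# network_create_safe(Context(is_admin=False), {'label': 'net2'})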

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Alex Xu (xuhj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1432101

Title:
  non-admin user can't create network by os-tenant-network with nova-
  network vlanmanager

Status in OpenStack Compute (Nova):
  New

Bug description:
  alex@hp-pc:~/code/devstack$ nova tenant-network-create net2
  10.0.0.0/24

  ERROR (ClientException): Create networks failed (HTTP 503) (Request-
  ID: req-e1cf8f25-309c-49a8-b460-b56172ac68ce)

  I got the error below:

  2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
/opt/stack/nova/nova/conductor/manager.py, line 420, in _object_dispatch
  2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks return 
getattr(target, method)(*args, **kwargs)
  2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks
  2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
/opt/stack/nova/nova/objects/base.py, line 207, in wrapper
  2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks return fn(self, 
self._context, *args, **kwargs)
  2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks
  2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
/opt/stack/nova/nova/objects/network.py, line 177, in create
  2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks db_network = 
db.network_create_safe(context, updates)
  2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks
  2015-03-14 12:25:45.225 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
/opt/stack/nova/nova/db/api.py, line 970, in network_create_safe
  2015-03-14 12:25:45.225 TRACE 

[Yahoo-eng-team] [Bug 1432100] [NEW] non-admin user can delete a network without any project associated with nova-network's Vlan manager

2015-03-13 Thread Alex Xu
Public bug reported:

Using the admin user, create a new network called 'net1'

alex@hp-pc:~/code/devstack$ source ./openrc admin admin

alex@hp-pc:~/code/devstack$ nova network-list
+--+-+-+
| ID   | Label   | Cidr|
+--+-+-+
| 5fed3168-0ae8-4f2c-904c-dd750698fbca | private | 10.0.0.0/24 |
+--+-+-+

alex@hp-pc:~/code/devstack$ nova network-create net1 --fixed-range-v4
20.0.0.0/24

alex@hp-pc:~/code/devstack$ nova network-list
+--+-+-+
| ID   | Label   | Cidr|
+--+-+-+
| 5fed3168-0ae8-4f2c-904c-dd750698fbca | private | 10.0.0.0/24 |
| e6b5a972-be01-4f54-acfb-eae53ae67cec | net1| 20.0.0.0/24 |
+--+-+-+


alex@hp-pc:~/code/devstack$ nova network-show net1

+-+--+
| Property| Value|
+-+--+
| bridge  | br101|
| bridge_interface| eth0 |
| broadcast   | 20.0.0.255   |
| cidr| 20.0.0.0/24  |
| cidr_v6 | -|
| created_at  | 2015-03-14T04:20:22.00   |
| deleted | False|
| deleted_at  | -|
| dhcp_server | 20.0.0.1 |
| dhcp_start  | 20.0.0.3 |
| dns1| 8.8.4.4  |
| dns2| -|
| enable_dhcp | True |
| gateway | 20.0.0.1 |
| gateway_v6  | -|
| host| -|
| id  | e6b5a972-be01-4f54-acfb-eae53ae67cec |
| injected| False|
| label   | net1 |
| mtu | -|
| multi_host  | False|
| netmask | 255.255.255.0|
| netmask_v6  | -|
| priority| -|
| project_id  | -|
| rxtx_base   | -|
| share_address   | False|
| updated_at  | -|
| vlan| 101  |
| vpn_private_address | 20.0.0.2 |
| vpn_public_address  | -|
| vpn_public_port | 1001 |
+-+--+


Switch to the non-admin user 'demo'. The demo user can't see net1, but the
demo user can delete it by id directly.


alex@hp-pc:~/code/devstack$ source ./openrc demo demo

alex@hp-pc:~/code/devstack$ nova tenant-network-list

++---+--+
| ID | Label | CIDR |
++---+--+
++---+--+


alex@hp-pc:~/code/devstack$ nova tenant-network-delete 
e6b5a972-be01-4f54-acfb-eae53ae67cec

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1432100

Title:
  non-admin user can delete a network without any project associated with
  nova-network's Vlan manager

Status in OpenStack Compute (Nova):
  New

Bug description:
  Using the admin user, create a new network called 'net1'

  alex@hp-pc:~/code/devstack$ source ./openrc admin admin

  alex@hp-pc:~/code/devstack$ nova network-list
  +--+-+-+
  | ID   | Label   | Cidr|
  +--+-+-+
  | 5fed3168-0ae8-4f2c-904c-dd750698fbca | private | 10.0.0.0/24 |
  +--+-+-+

  alex@hp-pc:~/code/devstack$ nova network-create net1 --fixed-range-v4
  20.0.0.0/24

  alex@hp-pc:~/code/devstack$ nova network-list
  +--+-+-+
  | ID   | Label   | Cidr|
  

[Yahoo-eng-team] [Bug 1432065] [NEW] DBDeadlock: (OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') 'DELETE FROM ipallocationpools WHERE ipallocationpools.

2015-03-13 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/78/163978/1/check/check-tempest-dsvm-neutron-
full/792a4e4/logs/screen-q-svc.txt.gz?level=TRACE#_2015-03-13_16_07_36_406

2015-03-13 16:07:36.406 ERROR oslo_messaging.rpc.dispatcher 
[req-d42b66e6-5ee7-4e08-b59d-318aebfe92d7 None None] Exception during message 
handling: UPDATE statement on table 'ports' expected to update 1 row(s); 0 were 
matched.
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
142, in _dispatch_and_reply
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
186, in _dispatch
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
130, in _do_dispatch
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/plugins/ml2/rpc.py, line 118, in 
get_devices_details_list
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher for 
device in kwargs.pop('devices', [])
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/plugins/ml2/rpc.py, line 95, in 
get_device_details
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher host)
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 1255, in 
update_port_status
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher 
original_port['network_id'])
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 642, in get_network
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher result = 
super(Ml2Plugin, self).get_network(context, id, None)
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 915, in 
get_network
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher network = 
self._get_network(context, id)
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 87, in 
_get_network
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher network = 
self._get_by_id(context, models_v2.Network, id)
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/neutron/neutron/db/common_db_mixin.py, line 130, in _get_by_id
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher return 
query.filter(model.id == id).one()
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2398, in 
one
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher ret = 
list(self)
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2440, in 
__iter__
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher 
self.session._autoflush()
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1264, 
in _autoflush
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher 
self.flush()
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1985, 
in flush
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher 
self._flush(objects)
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 2103, 
in _flush
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher 
transaction.rollback(_capture_exception=True)
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 
60, in __exit__
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher 
compat.reraise(exc_type, exc_value, exc_tb)
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 2067, 
in _flush
2015-03-13 16:07:36.406 22946 TRACE oslo_messaging.rpc.dispatcher 

[Yahoo-eng-team] [Bug 1423165] Re: https: client can cause nova/cinder to leak sockets for 'get' 'show' 'delete' 'update'

2015-03-13 Thread Dr. Jens Rosenboom
Nova stable/juno is still affected by this issue, since the fix is not
available there currently due to the version cap on python-glanceclient.

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1423165

Title:
  https: client can cause nova/cinder to leak sockets for 'get' 'show'
  'delete' 'update'

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New
Status in Python client library for Glance:
  Fix Released

Bug description:
  
  Other OpenStack services which instantiate a 'https' glanceclient using
  ssl_compression=False and insecure=False (eg Nova, Cinder) are leaking
  sockets due to glanceclient not closing the connection to the Glance
  server.
  
  This could happen for a sub-set of calls, eg 'show', 'delete', 'update'.
  
  netstat -nopd would show the sockets would hang around forever:
  
  ... 127.0.0.1:9292  ESTABLISHED 9552/python  off (0.00/0/0)
  
  urllib's ConnectionPool relies on the garbage collector to tear down
  sockets which are no longer in use. The 'verify_callback' function used to
  validate SSL certs was holding a reference to the VerifiedHTTPSConnection
  instance which prevented the sockets being torn down.

  
  --

  to reproduce, set up devstack with nova talking to glance over https (must be 
performing full cert verification) and
  perform a nova operation such as:

  
   $ nova image-meta 53854ea3-23ed-4682-abf7-8415f2d6b7d9 set foo=bar

  you will see connections from nova to glance which have no timeout
  (off):

   $ netstat -nopd | grep 9292

   tcp0  0 127.0.0.1:34204 127.0.0.1:9292
  ESTABLISHED 9552/python  off (0.00/0/0)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1423165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431685] [NEW] juju nova-compute charm not enabling live-migration via tcp with auth set to none

2015-03-13 Thread Fabricio Costi
Public bug reported:

nova.cfg 
nova-compute:
openstack-origin: cloud:trusty-juno
enable-resize: true
enable-live-migration: true
migration-auth-type: none
sysctl: '{ kernel.pid_max : 4194303 }'
libvirt-image-backend: rbd

libvirtd.conf 
#listen_tcp = 1
#auth_tcp = sasl

After running live-migration command, the log from the original host of
a given vm:

/var/log/nova/nova-compute.log 
2015-03-13 00:30:01.062 1796 ERROR nova.virt.libvirt.driver [-] [instance: 
92e1fb07-1bbe-4209-a98d-bae5e1d6a36c] Live Migration failure: operation failed: 
Failed to connect to remote libvirt URI qemu+tcp://maas-pute-04/system: unable 
to connect to server at 'maas-pute-04:16509': Connection refused

After changing the config on the /var/lib/juju/agents/unit-nova-
compute-1/charm/templates/libvirtd.conf to reflect the intended config
(tcp_listen = 1 and auth_tcp = none) and restarting the service, it
throws a config-changed hook error. After running the config-changed
hook, it works and I am able to live-migrate between the nodes with the
correct config.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431685

Title:
  juju nova-compute charm not enabling live-migration via tcp with auth
  set to none

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova.cfg 
  nova-compute:
  openstack-origin: cloud:trusty-juno
  enable-resize: true
  enable-live-migration: true
  migration-auth-type: none
  sysctl: '{ kernel.pid_max : 4194303 }'
  libvirt-image-backend: rbd

  libvirtd.conf 
  #listen_tcp = 1
  #auth_tcp = sasl

  After running live-migration command, the log from the original host
  of a given vm:

  /var/log/nova/nova-compute.log 
  2015-03-13 00:30:01.062 1796 ERROR nova.virt.libvirt.driver [-] [instance: 
92e1fb07-1bbe-4209-a98d-bae5e1d6a36c] Live Migration failure: operation failed: 
Failed to connect to remote libvirt URI qemu+tcp://maas-pute-04/system: unable 
to connect to server at 'maas-pute-04:16509': Connection refused

  After changing the config on the /var/lib/juju/agents/unit-nova-
  compute-1/charm/templates/libvirtd.conf to reflect the intended config
  (tcp_listen = 1 and auth_tcp = none) and restarting the service, it
  throws a config-changed hook error. After running the config-changed
  hook, it works and I am able to live-migrate between the nodes with
  the correct config.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431685/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431746] [NEW] AggregateCoreFilter return incorrect value

2015-03-13 Thread shihanzhang
Public bug reported:

I find that AggregateCoreFilter will return an incorrect value; the analysis
is below:

class AggregateCoreFilter(BaseCoreFilter):
    def _get_cpu_allocation_ratio(self, host_state, filter_properties):
        # TODO(uni): DB query in filter is a performance hit, especially for
        # system with lots of hosts. Will need a general solution here to fix
        # all filters with aggregate DB call things.
        aggregate_vals = utils.aggregate_values_from_key(
            host_state,
            'cpu_allocation_ratio')
        try:
            ratio = utils.validate_num_values(
                aggregate_vals, CONF.cpu_allocation_ratio, cast_to=float)
        except ValueError as e:
            LOG.warning(_LW("Could not decode cpu_allocation_ratio: '%s'"), e)
            ratio = CONF.cpu_allocation_ratio

In the function validate_num_values, min() is used to get the minimum ratio,
but for an aggregate the 'cpu_allocation_ratio' values are strings. For
example, with vals=set(['10', '9']), 'validate_num_values' will return 10,
but the correct result is 9.

def validate_num_values(vals, default=None, cast_to=int, based_on=min):
    num_values = len(vals)
    if num_values == 0:
        return default

    if num_values > 1:
        LOG.info(_LI("%(num_values)d values found, "
                     "of which the minimum value will be used."),
                 {'num_values': num_values})

    return cast_to(based_on(vals))
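
A minimal standalone sketch of the comparison problem (not the nova code
itself): min() on the aggregate's string values compares lexicographically,
so casting to float only after picking the minimum returns the wrong ratio,
while casting each value first gives the intended result.

vals = {'10', '9'}            # aggregate metadata values arrive as strings

wrong = float(min(vals))               # min('10', '9') == '10'  -> 10.0
right = min(float(v) for v in vals)    # cast before comparing   -> 9.0

print(wrong, right)   # 10.0 9.0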

** Affects: nova
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431746

Title:
  AggregateCoreFilter return incorrect value

Status in OpenStack Compute (Nova):
  New

Bug description:
  I find that AggregateCoreFilter will return an incorrect value; the
  analysis is below:

  class AggregateCoreFilter(BaseCoreFilter):
      def _get_cpu_allocation_ratio(self, host_state, filter_properties):
          # TODO(uni): DB query in filter is a performance hit, especially for
          # system with lots of hosts. Will need a general solution here to fix
          # all filters with aggregate DB call things.
          aggregate_vals = utils.aggregate_values_from_key(
              host_state,
              'cpu_allocation_ratio')
          try:
              ratio = utils.validate_num_values(
                  aggregate_vals, CONF.cpu_allocation_ratio, cast_to=float)
          except ValueError as e:
              LOG.warning(_LW("Could not decode cpu_allocation_ratio: '%s'"), e)
              ratio = CONF.cpu_allocation_ratio

  In the function validate_num_values, min() is used to get the minimum
  ratio, but for an aggregate the 'cpu_allocation_ratio' values are strings.
  For example, with vals=set(['10', '9']), 'validate_num_values' will return
  10, but the correct result is 9.

  def validate_num_values(vals, default=None, cast_to=int, based_on=min):
      num_values = len(vals)
      if num_values == 0:
          return default

      if num_values > 1:
          LOG.info(_LI("%(num_values)d values found, "
                       "of which the minimum value will be used."),
                   {'num_values': num_values})

      return cast_to(based_on(vals))

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431784] Re: Traceback in glanceapi and glance registry in devstack

2015-03-13 Thread Abhishek Kekane
Hi Anand,

Earlier I was getting the same error when I had pulled the latest glance
code. Then I did a fresh installation using devstack and this error is not
reproducible.


** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1431784

Title:
  Traceback in glanceapi and glance registry in devstack

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid

Bug description:
  When running devstack, I got the following traceback whenever I do a
  listing of glance images from the CLI or Horizon. The traceback occurs both
  in glance-api and glance-registry.

  == glance image-list

  Trace in g-api
  +
  Logged from file policy.py, line 296
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'
  Logged from file client.py, line 401
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'
  Logged from file client.py, line 124
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'

  Traceback from g-registry
  
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'
  Logged from file session.py, line 509
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'
  Logged from file images.py, line 186
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'

  

[Yahoo-eng-team] [Bug 1431737] [NEW] pycadf should be included in requirement.txt under keystone

2015-03-13 Thread yuntongjin
Public bug reported:

keystone/notifications.py:
from pycadf import cadftaxonomy as taxonomy

So, keystone has a dependency on pycadf, but pycadf is not included in
requirements.txt.

How to reproduce:
When I ran ./stack.sh from devstack, I got an error:

2015-03-13 08:22:27.216 |   File 
"/opt/stack/keystone/keystone/notifications.py", line 60, in <module>
2015-03-13 08:22:27.216 | 'group': taxonomy.SECURITY_GROUP,
2015-03-13 08:22:27.216 | AttributeError: 'module' object has no attribute 
'SECURITY_GROUP'

The reason is that global-requirements.txt has pycadf>=0.8.0 and the version
of it on my box is 0.7. After including it in the keystone requirements,
pycadf will be upgraded to 0.8 according to global-requirements.

** Affects: keystone
 Importance: Undecided
 Assignee: yuntongjin (yuntongjin)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => yuntongjin (yuntongjin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1431737

Title:
  pycadf should be included in requirement.txt under keystone

Status in OpenStack Identity (Keystone):
  New

Bug description:
  keystone/notifications.py:
  from pycadf import cadftaxonomy as taxonomy

  So, keystone has a dependency on pycadf, but pycadf is not included in
  requirements.txt.

  How to reproduce:
  When I ran ./stack.sh from devstack, I got an error:

  2015-03-13 08:22:27.216 |   File 
"/opt/stack/keystone/keystone/notifications.py", line 60, in <module>
  2015-03-13 08:22:27.216 | 'group': taxonomy.SECURITY_GROUP,
  2015-03-13 08:22:27.216 | AttributeError: 'module' object has no attribute 
'SECURITY_GROUP'

  The reason is that global-requirements.txt has pycadf>=0.8.0 and the
  version of it on my box is 0.7. After including it in the keystone
  requirements, pycadf will be upgraded to 0.8 according to
  global-requirements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1431737/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402506] Re: The 'Uptime' value Instance overview page shows is not the conventional Uptime

2015-03-13 Thread Rob Cresswell
I think this has been solved by https://review.openstack.org/#/c/145757/
(Bug report - https://bugs.launchpad.net/horizon/+bug/1308189)

** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1402506

Title:
  The 'Uptime' value Instance overview page shows is not the
  conventional Uptime

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In the instance overview page, the 'Uptime' value is counted as the amount
  of time passed since the 'Created At' timestamp. This is not what is
  usually meant by 'uptime', which is the time the instance has been in the
  active state since the last change, be it a reboot or shutdown.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1402506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431767] [NEW] Table rows aligned incorrectly

2015-03-13 Thread Rob Cresswell
Public bug reported:

The CSS for table rows contains 'vertical-align: top' causing the cell
data to align strangely. There is also an additional top border on the
tables.

** Affects: horizon
 Importance: Undecided
 Assignee: Rob Cresswell (robcresswell)
 Status: New


** Tags: low-hanging-fruit

** Tags added: low-hanging-fruit

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1431767

Title:
  Table rows aligned incorrectly

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The CSS for table rows contains 'vertical-align: top' causing the cell
  data to align strangely. There is also an additional top border on the
  tables.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1431767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431784] [NEW] Traceback in glanceapi and glance registry in devstack

2015-03-13 Thread Anand Shanmugam
Public bug reported:

When running devstack, I got the following traceback whenever I do a listing
of glance images from the CLI or Horizon. The traceback occurs both in
glance-api and glance-registry.

== glance image-list

Trace in g-api
+
Logged from file policy.py, line 296
Traceback (most recent call last):
  File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
msg = self.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 69, 
in format
return logging.StreamHandler.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 724, in format
return fmt.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
return logging.Formatter.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 467, in format
s = self._fmt % record.__dict__
KeyError: 'user_id'
Logged from file client.py, line 401
Traceback (most recent call last):
  File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
msg = self.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 69, 
in format
return logging.StreamHandler.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 724, in format
return fmt.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
return logging.Formatter.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 467, in format
s = self._fmt % record.__dict__
KeyError: 'user_id'
Logged from file client.py, line 124
Traceback (most recent call last):
  File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
msg = self.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 69, 
in format
return logging.StreamHandler.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 724, in format
return fmt.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
return logging.Formatter.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 467, in format
s = self._fmt % record.__dict__
KeyError: 'user_id'

Traceback from g-registry

Traceback (most recent call last):
  File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
msg = self.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 69, 
in format
return logging.StreamHandler.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 724, in format
return fmt.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
return logging.Formatter.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 467, in format
s = self._fmt % record.__dict__
KeyError: 'user_id'
Logged from file session.py, line 509
Traceback (most recent call last):
  File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
msg = self.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 69, 
in format
return logging.StreamHandler.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 724, in format
return fmt.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
return logging.Formatter.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 467, in format
s = self._fmt % record.__dict__
KeyError: 'user_id'
Logged from file images.py, line 186
Traceback (most recent call last):
  File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
msg = self.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 69, 
in format
return logging.StreamHandler.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 724, in format
return fmt.format(record)
  File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
return logging.Formatter.format(self, record)
  File /usr/lib/python2.7/logging/__init__.py, line 467, in format
s = self._fmt % record.__dict__
KeyError: 'user_id'

This can be a bug in glance or devstack.
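
A minimal sketch (not taken from the bug report) that reproduces the KeyError
above: it assumes a format string referencing %(user_id)s being applied to a
record that has no such attribute, which is what happens when the context
format string written by devstack into glance-api.conf is used for messages
logged outside a request context.

    import logging

    handler = logging.StreamHandler()
    # The format string expects a 'user_id' attribute on every record, like a
    # logging_context_format_string containing %(user_id)s would.
    handler.setFormatter(logging.Formatter('%(user_id)s %(message)s'))

    logger = logging.getLogger('demo')
    logger.addHandler(handler)
    logger.propagate = False

    # This record carries no 'user_id', so self._fmt % record.__dict__ raises
    # KeyError: 'user_id'; logging swallows it and prints the
    # "Logged from file ..." traceback seen above instead of the message.
    logger.error('listing images')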

** Affects: glance
 Importance: Undecided
 Status: New

** Description changed:

  when running devstack  got the followng traceback whenever  do a listing
  of glance images from cli or horizon.The traceback occurs both in
  glance-ap and glance-registry.
  
+ == glance image-list
  
  Trace in g-api
  +
  Logged from file policy.py, line 296
  Traceback (most recent call last):
-   File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
- msg = self.format(record)
-   File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
- return logging.StreamHandler.format(self, record)
-  

[Yahoo-eng-team] [Bug 1430751] Re: Launching instance that creates a new volume fails

2015-03-13 Thread jichenjc
e162-45ce-98b0-54d9563bbb1c] VolumeNotCreated: Volume
abc781af-0960-4a65-87d2-a5cb15ce7273 did not finish being created even
after we waited 250 seconds or 61 attempts.

From this line, it indicates that the volume creation failed; nova did all it
could do. You need to check whether something went wrong in cinder, from the
cinder API or other logs. Please attach them, thanks.
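
As a rough, hedged illustration (simplified; these are not nova's actual
function or option names), the wait behind the message above boils down to a
bounded polling loop of this shape:

    import time

    class VolumeNotCreated(Exception):
        pass

    def await_volume_created(get_status, volume_id, attempts=61, interval=3):
        # Poll the volume status until it becomes 'available'; give up after
        # the configured number of attempts, as in the error message above.
        for attempt in range(1, attempts + 1):
            if get_status(volume_id) == 'available':
                return attempt
            time.sleep(interval)
        raise VolumeNotCreated('Volume %s did not finish being created even '
                               'after %d attempts' % (volume_id, attempts))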

** Project changed: nova = cinder

** Changed in: cinder
   Status: New = Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430751

Title:
  Launching instance that creates a new volume fails

Status in Cinder:
  Incomplete

Bug description:
  I'm trying to launch instances from Horizon using the option Boot from
  image (creates a new volume).

  The instance fails with block_device_mapping ERROR.

  On the controller cinder/api.log and cinder/volume.log shows no error
  or relevant information.

  On the compute node, nova-compute.log does show the problem:

  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] Traceback (most recent call last):
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1819, in 
_prep_block_device
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] do_check_attach=do_check_attach) +
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c]   File 
/usr/lib/python2.7/site-packages/nova/virt/block_device.py, line 407, in 
attach_block_devices
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] map(_log_and_attach, 
block_device_mapping)
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c]   File 
/usr/lib/python2.7/site-packages/nova/virt/block_device.py, line 405, in 
_log_and_attach
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] bdm.attach(*attach_args, 
**attach_kwargs)
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c]   File 
/usr/lib/python2.7/site-packages/nova/virt/block_device.py, line 333, in 
attach
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] wait_func(context, vol['id'])
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1263, in 
_await_block_device_map_created
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] attempts=attempts)
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] VolumeNotCreated: Volume 
abc781af-0960-4a65-87d2-a5cb15ce7273 did not finish being created even after we 
waited 250 seconds or 61 attempts.
  2015-03-11 11:23:02.807 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] 
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] Traceback (most recent call last):
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2218, in 
_build_resources
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] block_device_mapping)
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1847, in 
_prep_block_device
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] raise exception.InvalidBDM()
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] InvalidBDM: Block Device Mapping is 
Invalid.
  2015-03-11 11:23:02.809 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] 
  2015-03-11 11:23:02.848 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] Traceback (most recent call last):
  2015-03-11 11:23:02.848 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2030, in 
_do_build_and_run_instance
  2015-03-11 11:23:02.848 72172 TRACE nova.compute.manager [instance: 
2da2594d-e162-45ce-98b0-54d9563bbb1c] filter_properties)
  2015-03-11 11:23:02.848 72172 TRACE 

[Yahoo-eng-team] [Bug 1424595] Re: Create network with no name shows ID in the name column

2015-03-13 Thread Rob Cresswell
This is by design; see Ports, Subnets & Firewalls. Those Name columns
are populated by Name or ID. If we are to change Networks, then we
should change the behaviour across Horizon, and this needs discussion.
Please bring it up in either IRC or the weekly meeting
(https://wiki.openstack.org/wiki/Meetings/Horizon)
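
For illustration only (this is not Horizon's actual code), the Name or ID
behaviour referred to here amounts to a fallback of roughly this shape when
the Name column is rendered:

    def name_or_id(resource):
        # Prefer the user-supplied name; fall back to a shortened ID in
        # brackets for resources that were created without a name.
        name = getattr(resource, 'name', None)
        if name:
            return name
        return '(%s)' % resource.id[:13]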

** Changed in: horizon
   Status: New = Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1424595

Title:
  Create network with no name shows ID in the name column

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  From Horizon, when we create a network with no name, it shows the
  starting bits of the network ID in brackets as the network name which
  is confusing.

  Network ID should not be mentioned in the network Name column when a
  user has not specified any name for the network.

  Instead, the Networks table should display a Network ID column (as in the
  CLI output) with the network ID shown under it.

  This way, it will display the ID information for the networks which do not
  have a name specified and will also be consistent with the CLI output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1424595/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431825] [NEW] value of status column not translatable in network table

2015-03-13 Thread Masco Kaliyamoorthy
Public bug reported:

The network status value is not translatable in the network table.
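
A hedged sketch of one common approach in a Django-based table (names below
are illustrative, not necessarily Horizon's): map the raw API status values
to lazily translated display strings instead of rendering the value directly.

    from django.utils.translation import pgettext_lazy

    # Raw API value -> translatable display string for the Status column.
    STATUS_DISPLAY_CHOICES = (
        ('ACTIVE', pgettext_lazy('Current status of a Network', u'Active')),
        ('BUILD', pgettext_lazy('Current status of a Network', u'Build')),
        ('DOWN', pgettext_lazy('Current status of a Network', u'Down')),
        ('ERROR', pgettext_lazy('Current status of a Network', u'Error')),
    )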

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1431825

Title:
  value of status column not translatable in network table

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The network status value is not translatable in the network table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1431825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431899] [NEW] TestEncryptedCinderVolumes fails with 'NoneType can't be decoded'

2015-03-13 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/19/155319/13/check/check-tempest-dsvm-full-
ceph/4a14a01/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-03-10_17_19_46_145

2015-03-10 17:19:46.145 ERROR oslo_messaging.rpc.dispatcher 
[req-72819513-908b-4210-a4c7-7f5d9ff7fd22 TestEncryptedCinderVolumes-1577051342 
TestEncryptedCinderVolumes-2041152902] Exception during message handling: type 
'NoneType' can't be decoded
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
142, in _dispatch_and_reply
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
186, in _dispatch
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
130, in _do_dispatch
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 418, in decorated_function
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/exception.py, line 88, in wrapped
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher payload)
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 85, in 
__exit__
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/exception.py, line 71, in wrapped
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 302, in decorated_function
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher pass
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 85, in 
__exit__
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 287, in decorated_function
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 330, in decorated_function
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 85, in 
__exit__
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 318, in decorated_function
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 4736, in detach_volume
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher 
self._detach_volume(context, instance, bdm)
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 4677, in _detach_volume
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher 
connection_info = jsonutils.loads(bdm.connection_info)
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_serialization/jsonutils.py, line 
215, in loads
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher return 
json.loads(encodeutils.safe_decode(s, encoding), **kwargs)
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/encodeutils.py, line 33, in 
safe_decode
2015-03-10 17:19:46.145 30975 TRACE oslo_messaging.rpc.dispatcher raise 
TypeError(%s 

[Yahoo-eng-team] [Bug 1431842] [NEW] GET /v3/auth/tokens without X-Subject-Token raises TypeError

2015-03-13 Thread Boris Bobrov
Public bug reported:

[DEFAULT]admin_token = ADMIN

curl -k -H X-Auth-Token:ADMIN http://localhost:35357/v3/auth/tokens |
python -mjson.tool

http://paste.openstack.org/show/192079/

rev 55d940c70be405e6dcf48eaa4aed0c2d766aadeb
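
For illustration (not keystone's actual code): without the X-Subject-Token
header the subject token ends up as None, and a later call that expects a
string raises the TypeError. A guard of this shape is the kind of check that
would turn the failure into a clean 400:

    def get_subject_token(headers):
        # The header is required for GET /v3/auth/tokens; without it the
        # token value is None and downstream decoding raises TypeError.
        token = headers.get('X-Subject-Token')
        if token is None:
            raise ValueError('X-Subject-Token header is required (HTTP 400)')
        return token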

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: fernet

** Tags added: fernet

** Description changed:

  [DEFAULT]admin_token = ADMIN
  
  curl -k -H X-Auth-Token:ADMIN http://localhost:35357/v3/auth/tokens |
  python -mjson.tool
  
  http://paste.openstack.org/show/192079/
+ 
+ rev 55d940c70be405e6dcf48eaa4aed0c2d766aadeb

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1431842

Title:
  GET /v3/auth/tokens without X-Subject-Token raises TypeError

Status in OpenStack Identity (Keystone):
  New

Bug description:
  [DEFAULT]admin_token = ADMIN

  curl -k -H X-Auth-Token:ADMIN http://localhost:35357/v3/auth/tokens
  | python -mjson.tool

  http://paste.openstack.org/show/192079/

  rev 55d940c70be405e6dcf48eaa4aed0c2d766aadeb

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1431842/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431784] Re: Traceback in glanceapi and glance registry in devstack

2015-03-13 Thread Abhishek Kekane
** Changed in: glance
   Status: Invalid = New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1431784

Title:
  Traceback in glanceapi and glance registry in devstack

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  when running devstack I got the following traceback whenever I do a
  listing of glance images from the CLI or Horizon. The traceback occurs both
  in glance-api and glance-registry.

  == glance image-list

  Trace in g-api
  +
  Logged from file policy.py, line 296
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'
  Logged from file client.py, line 401
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'
  Logged from file client.py, line 124
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'

  Traceback from g-registry
  
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'
  Logged from file session.py, line 509
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'
  Logged from file images.py, line 186
  Traceback (most recent call last):
    File /usr/lib/python2.7/logging/__init__.py, line 851, in emit
  msg = self.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/handlers.py, line 
69, in format
  return logging.StreamHandler.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 724, in format
  return fmt.format(record)
    File /usr/local/lib/python2.7/dist-packages/oslo_log/formatters.py, line 
235, in format
  return logging.Formatter.format(self, record)
    File /usr/lib/python2.7/logging/__init__.py, line 467, in format
  s = self._fmt % record.__dict__
  KeyError: 'user_id'

  This can be a bug in glance or devstack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1431784/+subscriptions

-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1251521] Re: Volume detach in tempest fails because libvirt refuses connections

2015-03-13 Thread Matt Riedemann
We're still hitting this on master:

http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwibGlidmlydEVycm9yOiBGYWlsZWQgdG8gY29ubmVjdCBzb2NrZXQgdG8gJy92YXIvcnVuL2xpYnZpcnQvbGlidmlydC1zb2NrJzogQ29ubmVjdGlvbiByZWZ1c2VkJ1wiIEFORCB0YWdzOlwic2NyZWVuLW4tY3B1LnR4dFwiIiwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJvZmZzZXQiOjAsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI2MjYxMTQzNDg2fQ==

10 hits in 7 days, all check queue but multiple changes.

** Changed in: nova
   Status: Invalid = Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1251521

Title:
  Volume detach in tempest fails because libvirt refuses connections

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  I just experienced this on https://review.openstack.org/#/c/55492/. It
  looks to me like the detach volume fails because libvirt has become
  unavailable:

  2013-11-15 00:49:45.034 29876 DEBUG nova.openstack.common.rpc.amqp [-] 
received {u'_context_roles': [u'_member_'], u'_context_request_id': 
u'req-0fdc657c-fdb3-4aef-96c6-c7d3c
  6f18b33', u'_context_quota_class': None, u'_context_user_name': 
u'tempest.scenario.manager-tempest-1652099598-user', u'_context_project_name': 
u'tempest.scenario.manager-temp
  est-1652099598-tenant', u'_context_service_catalog': [{u'endpoints_links': 
[], u'endpoints': [{u'adminURL': 
u'http://127.0.0.1:8776/v1/49a55ed418d44af8b8104157045e8347', u're
  gion': u'RegionOne', u'internalURL': 
u'http://127.0.0.1:8776/v1/49a55ed418d44af8b8104157045e8347', u'serviceName': 
u'cinder', u'id': u'1a7219a8e4e543909f4b2a497810fa7c', u'pu
  blicURL': u'http://127.0.0.1:8776/v1/49a55ed418d44af8b8104157045e8347'}], 
u'type': u'volume', u'name': u'cinder'}], u'_context_tenant': 
u'49a55ed418d44af8b8104157045e8347', u
  '_context_auth_token': 'SANITIZED', u'args': {u'instance': {u'vm_state': 
u'active', u'availability_zone': None, u'terminated_at': None, u'ephemeral_gb': 
0, u'instance_type_
  id': 6, u'user_data': None, u'cleaned': False, u'vm_mode': None, 
u'deleted_at': None, u'reservation_id': u'r-2ivdfgiz', u'id': 101, 
u'security_groups': [{u'deleted_at': None,
   u'user_id': u'2a86d0c9e67a4aa4b6e7b84ce2dd4776', u'description': u'default', 
u'deleted': False, u'created_at': u'2013-11-15T00:48:31.00', u'updated_at': 
None, u'project_
  id': u'49a55ed418d44af8b8104157045e8347', u'id': 90, u'name': u'default'}], 
u'disable_terminate': False, u'root_device_name': u'/dev/vda', u'display_name': 
u'scenario-server-
  -tempest-1998782633', u'uuid': u'9dbd99d4-09d7-43df-b8de-c6e65043e012', 
u'default_swap_device': None, u'info_cache': {u'instance_uuid': 
u'9dbd99d4-09d7-43df-b8de-c6e65043e012
  ', u'deleted': False, u'created_at': u'2013-11-15T00:48:31.00', 
u'updated_at': u'2013-11-15T00:49:18.00', u'network_info': 
[{u'ovs_interfaceid': None, u'network': {u'
  bridge': u'br100', u'label': u'private', u'meta': {u'tenant_id': None, 
u'should_create_bridge': True, u'bridge_interface': u'eth0'}, u'id': 
u'd00c22d4-05b0-4c71-86ba-1c5d60b4
  45bd', u'subnets': [{u'ips': [{u'meta': {}, u'type': u'fixed', 
u'floating_ips': [{u'meta': {}, u'type': u'floating', u'version': 4, 
u'address': u'172.24.4.225'}], u'version':
   4, u'address': u'10.1.0.4'}], u'version': 4, u'meta': {u'dhcp_server': 
u'10.1.0.1'}, u'dns': [{u'meta': {}, u'type': u'dns', u'version': 4, 
u'address': u'8.8.4.4'}], u'route
  s': [], u'cidr': u'10.1.0.0/24', u'gateway': {u'meta': {}, u'type': 
u'gateway', u'version': 4, u'address': u'10.1.0.1'}}, {u'ips': [], u'version': 
None, u'meta': {u'dhcp_serv
  er': None}, u'dns': [], u'routes': [], u'cidr': None, u'gateway': {u'meta': 
{}, u'type': u'gateway', u'version': None, u'address': None}}]}, u'devname': 
None, u'qbh_params': 
  None, u'meta': {}, u'address': u'fa:16:3e:57:00:d4', u'type': u'bridge', 
u'id': u'c7cdc48e-2ef3-43b8-9d8f-88c644afac78', u'qbg_params': None}], 
u'deleted_at': None}, u'hostna
  me': u'scenario-server--tempest-1998782633', u'launched_on': 
u'devstack-precise-check-rax-ord-658168.slave.openstack.org', 
u'display_description': u'scenario-server--tempest-
  1998782633', u'key_data': u'ssh-rsa 
B3NzaC1yc2EDAQABAAABAQDWN4HLjQWmJu2prhyp8mSkcVOx3W4dhK6GB1L4upm83DU7Ogj3Tg2cTuMqmO4bIt3gJv+BZB16auiyq5w+SEK8VVSuTresc7dD5qW7dej+bD
  
aF6w/gLsEbP8s0rOvMo93esqF0Cwt7WyqpBXsRr8DEjdPDkJL9fRjFuuGz6sjpM9qAiKd7e1v37y+z39T2y7PoJA5241b0QDG5H6uHNdrCwxIaWxtX5+ac2kUJSxS7FjjtACPgsoBD0tltcpaEQaxmQANdAm4hkhe1rTpP7vfSrmEN
  I0ZrwSjre2ZbWLA0IcM3JJwmsXWzXdPvjNC+GVqWmltugTNH77vOfwTbec+x Generated by 
Nova\n', u'deleted': False, u'config_drive': u'', u'power_state': 1, 
u'default_ephemeral_device': No
  ne, u'progress': 0, u'project_id': u'49a55ed418d44af8b8104157045e8347', 
u'launched_at': u'2013-11-15T00:48:50.00', u'scheduled_at': 
u'2013-11-15T00:48:31.00', u'node'
  : 

[Yahoo-eng-team] [Bug 1431927] [NEW] neutron client parses arguments incorrectly

2015-03-13 Thread Vitalii
Public bug reported:

The following command worked in Icehouse. It does not work in Juno
anymore.

neutron net-create --tenant-id 7f41e236d56c4e9fa074a9185528cad2
--provider:network_type=flat --provider:physical_network=default
--router:external=True GATEWAY_NET

It returns this error:

neutron net-create: error: argument --router:external: ignored explicit 
argument u'True'


If you use spaces instead of '=':
neutron net-create --tenant-id e4e6d468a3ce4e8c8d6de73aa394e395 
--provider:network_type flat --provider:physical_network default 
--router:external true GATEWAY_NET

It raises the following:

Invalid values_specs GATEWAY_NET


It only works if you use --name GATEWAY_NET. But the documentation (neutron
help net-create) tells you that the name is a positional argument!

positional arguments:
  NAME  Name of network to create.
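
A hedged sketch using plain argparse (not the actual neutron client code)
that reproduces the first error: a flag declared with action='store_true'
takes no value, so an explicit --flag=True form is rejected with ignored
explicit argument.

    import argparse

    parser = argparse.ArgumentParser(prog='neutron net-create')
    parser.add_argument('--tenant-id')
    # Declared as a boolean flag, so it accepts no explicit value.
    parser.add_argument('--router:external', action='store_true')
    parser.add_argument('name', metavar='NAME',
                        help='Name of network to create.')

    # Works: flag given with no value, NAME as a positional argument.
    parser.parse_args(['--tenant-id', 'abc',
                       '--router:external', 'GATEWAY_NET'])

    # Exits with: error: argument --router:external: ignored explicit
    # argument 'True'
    parser.parse_args(['--tenant-id', 'abc',
                       '--router:external=True', 'GATEWAY_NET'])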


** Affects: neutron
 Importance: Undecided
 Status: New

** Project changed: nova = neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431927

Title:
  neutron client parses arguments incorrectly

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The following command worked in Icehouse. It does not work in Juno
  anymore.

  neutron net-create --tenant-id 7f41e236d56c4e9fa074a9185528cad2
  --provider:network_type=flat --provider:physical_network=default
  --router:external=True GATEWAY_NET

  It returns this error:
  
  neutron net-create: error: argument --router:external: ignored explicit 
argument u'True'
  

  If you use spaces instead of '=':
  neutron net-create --tenant-id e4e6d468a3ce4e8c8d6de73aa394e395 
--provider:network_type flat --provider:physical_network default 
--router:external true GATEWAY_NET

  It raises the following:
  
  Invalid values_specs GATEWAY_NET
  

  It only works if you use --name GATEWAY_NET. But the documentation (neutron
  help net-create) tells you that the name is a positional argument!
  
  positional arguments:
NAME  Name of network to create.
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1431927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431860] [NEW] Cannot delete vm instance if send duplicate delete requests

2015-03-13 Thread Abhishek Kekane
Public bug reported:

I deployed OpenStack with Icehouse RC1 and booted 100 VMs on my nodes. After my
testing, I tried to delete all of my VMs at the same time. I then found that all
of my VMs' status changed to deleting, but they could not be deleted. I checked my
OpenStack setup; the rabbitmq-server had crashed. I then restarted rabbitmq-server
and my OpenStack nova services and sent the delete requests again and again, but
the VMs still could not be deleted. In Havana, the VMs can be deleted if duplicate
delete requests are received.
I think Icehouse should handle duplicate delete requests like Havana does.


Note:
This bug is already reported in launchpad [1] but the fix [2] proposed to 
resolve it was reverted back as it was breaking the cells.

[1] https://bugs.launchpad.net/nova/+bug/1308342
[2] https://review.openstack.org/121800

** Affects: nova
 Importance: Undecided
 Assignee: Abhishek Kekane (abhishek-kekane)
 Status: In Progress


** Tags: ntt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431860

Title:
  Cannot delete vm instance if send duplicate delete requests

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  I deployed OpenStack with Icehouse RC1 and booted 100 VMs on my nodes. After my
  testing, I tried to delete all of my VMs at the same time. I then found that all
  of my VMs' status changed to deleting, but they could not be deleted. I checked
  my OpenStack setup; the rabbitmq-server had crashed. I then restarted
  rabbitmq-server and my OpenStack nova services and sent the delete requests
  again and again, but the VMs still could not be deleted. In Havana, the VMs can
  be deleted if duplicate delete requests are received.
  I think Icehouse should handle duplicate delete requests like Havana does.

  
  Note:
  This bug is already reported in launchpad [1] but the fix [2] proposed to 
resolve it was reverted back as it was breaking the cells.

  [1] https://bugs.launchpad.net/nova/+bug/1308342
  [2] https://review.openstack.org/121800

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431945] [NEW] no nova compute node in nova service list

2015-03-13 Thread binou
Public bug reported:

Hi,
I am a new user of OpenStack services with Trove.
The installation was successful, but I cannot do anything like create an
instance or a database.

When I look at the nova service list, there is no nova-compute node.
I get this:

Binary            Host               Zone      Status   State  Updated_At
nova-conductor    ubuntu-VirtualBox  internal  enabled  :-)    2015-03-10 11:05:51
nova-cert         ubuntu-VirtualBox  internal  enabled  :-)    2015-03-10 11:05:51
nova-network      ubuntu-VirtualBox  internal  enabled  :-)    2015-03-10 11:05:51
nova-scheduler    ubuntu-VirtualBox  internal  enabled  :-)    2015-03-10 11:05:51
nova-consoleauth  ubuntu-VirtualBox  internal  enabled  :-)    2015-03-10 11:05:51

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431945

Title:
  no nova compute node in nova service list

Status in OpenStack Compute (Nova):
  New

Bug description:
  Hi,
  I am a new user of OpenStack services with Trove.
  The installation was successful, but I cannot do anything like create an
  instance or a database.

  When I look at the nova service list, there is no nova-compute node.
  I get this:

  Binary            Host               Zone      Status   State  Updated_At
  nova-conductor    ubuntu-VirtualBox  internal  enabled  :-)    2015-03-10 11:05:51
  nova-cert         ubuntu-VirtualBox  internal  enabled  :-)    2015-03-10 11:05:51
  nova-network      ubuntu-VirtualBox  internal  enabled  :-)    2015-03-10 11:05:51
  nova-scheduler    ubuntu-VirtualBox  internal  enabled  :-)    2015-03-10 11:05:51
  nova-consoleauth  ubuntu-VirtualBox  internal  enabled  :-)    2015-03-10 11:05:51

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1425258] Re: test_list_baremetal_nodes race fails with a node not found 404

2015-03-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/162773
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=118cd39c61996785f21acfb1afecba5f0d3e7fb9
Submitter: Jenkins
Branch:master

commit 118cd39c61996785f21acfb1afecba5f0d3e7fb9
Author: Adam Gandelman ad...@ubuntu.com
Date:   Mon Mar 9 14:41:36 2015 -0700

Create test nodes for test_baremetal_nodes

This test currently relies on pre-existing resources and races if run
in parallel to other baremetal tests.  This adds creation of 3 test
nodes directly in Ironic to be tested via the Nova API extension.

This also tags said test with the 'baremetal' test attribute.

Closes-bug: #1425258

Change-Id: I4dbd37bdb2019b6eb0140d46a605d5c8392323f4


** Changed in: tempest
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425258

Title:
  test_list_baremetal_nodes race fails with a node not found 404

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Invalid
Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Fix Released

Bug description:
  http://logs.openstack.org/35/158435/1/check/check-grenade-dsvm-ironic-
  sideways/2beafaf/logs/new/screen-n-api.txt.gz?level=TRACE

  Apparently this is unhandled and we get a 500 response:

  http://logs.openstack.org/35/158435/1/check/check-grenade-dsvm-ironic-
  sideways/2beafaf/console.html#_2015-02-23_22_11_18_978

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiYmFyZW1ldGFsX25vZGVzLnB5XCIgQU5EIG1lc3NhZ2U6XCJOb2RlXCIgQU5EIG1lc3NhZ2U6XCJjb3VsZCBub3QgYmUgZm91bmRcIiBBTkQgdGFnczpcInNjcmVlbi1uLWFwaS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQyNDgwNjgwMzM5MX0=

  21 hits in the last 7 days, check and gate, master and stable/juno,
  all failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1425258/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327218] Re: Volume detach failure because of invalid bdm.connection_info

2015-03-13 Thread Matt Riedemann
This is still an issue and from what I can tell a specific change wasn't
merged against this bug, so re-opening since I couldn't find it via LP
search before (since it was Fix Committed):

http://logs.openstack.org/93/156693/7/check/check-tempest-dsvm-postgres-
full/d3b26e8/logs/screen-n-cpu.txt.gz#_2015-03-12_16_38_17_567

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiX2RldGFjaF92b2x1bWVcIiBBTkQgbWVzc2FnZTpcImNhblxcJ3QgYmUgZGVjb2RlZFwiIEFORCB0YWdzOlwic2NyZWVuLW4tY3B1LnR4dFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI2MjYyNTc2ODI4LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

** Changed in: nova
   Status: Fix Released = Confirmed

** Changed in: nova
Milestone: 2014.2 = None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327218

Title:
  Volume detach failure because of invalid bdm.connection_info

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Example of this here:

  http://logs.openstack.org/33/97233/1/check/check-grenade-
  dsvm/f7b8a11/logs/old/screen-n-cpu.txt.gz?level=TRACE#_2014-06-02_14_13_51_125

     File /opt/stack/old/nova/nova/compute/manager.py, line 4153, in 
_detach_volume
   connection_info = jsonutils.loads(bdm.connection_info)
     File /opt/stack/old/nova/nova/openstack/common/jsonutils.py, line 164, 
in loads
   return json.loads(s)
     File /usr/lib/python2.7/json/__init__.py, line 326, in loads
   return _default_decoder.decode(s)
     File /usr/lib/python2.7/json/decoder.py, line 366, in decode
   obj, end = self.raw_decode(s, idx=_w(s, 0).end())
   TypeError: expected string or buffer

  and this was in grenade with stable/icehouse nova commit 7431cb9

  There's nothing unusual about the test which triggers this - simply
  attaches a volume to an instance, waits for it to show up in the
  instance and then tries to detach it

  logstash query for this:

    message:Exception during message handling AND message:expected
  string or buffer AND message:connection_info =
  jsonutils.loads(bdm.connection_info) AND tags:screen-n-cpu.txt

  but it seems to be very rare
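
  As a hedged sketch (not the merged fix), the failure is JSON decoding being
  handed a connection_info that was never populated; a defensive decode of
  this shape would make the detach path fail with a clearer error instead:

      import json

      def load_connection_info(connection_info):
          # connection_info may be None or empty if the attach never
          # completed, which is what produces "expected string or buffer".
          if not connection_info:
              return None
          return json.loads(connection_info)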

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431958] [NEW] Missing ovs flows results in vm isolation for particular tenants

2015-03-13 Thread Victor Tapia
Public bug reported:

For some reason, some compute nodes are missing the OVS flows of various
tenants (not all of them), resulting in VM isolation (no dhcp/metadata
on boot). A particular tenant A might have issues with node B whilst
tenant B might have problems with node A and not with B. All of the
affected tenants' VMs running on an affected node are isolated.

In those compute nodes, the ovs-vswitchd process has crashed previously:
ubuntu@niagara:~$ ps -ef | grep vswitchd
root  1959 1  0 Feb12 ?00:41:15 ovs-vswitchd: monitoring pid 
62005 (4 crashes: pid 59408 died, killed (Segmentation fault), core dumped)

After restarting the openvswitch-switch service, ps shows that the missing 
flows are being created by neutron (e.g. sudo neutron-rootwrap 
/etc/neutron/rootwrap.conf ovs-ofctl mod-flows br-tun 
table=21,dl_vlan=11,actions=strip_vlan,set_tunnel:4,output:5,6,2,4,7,3).
--- 
ApportVersion: 2.14.1-0ubuntu3.5
Architecture: amd64
DistroRelease: Ubuntu 14.04
Package: neutron-common 1:2014.1.3-0ubuntu1
PackageArchitecture: all
ProcEnviron:
 TERM=xterm
 PATH=(custom, no user)
 XDG_RUNTIME_DIR=set
 LANG=en_US.UTF-8
 SHELL=/bin/bash
ProcVersionSignature: User Name 3.13.0-45.74-generic 3.13.11-ckt13
Tags:  trusty uec-images
Uname: Linux 3.13.0-45-generic x86_64
UpgradeStatus: No upgrade log present (probably fresh install)
UserGroups: adm audio cdrom dialout dip floppy libvirtd netdev plugdev sudo 
video
_MarkForUpload: True
modified.conffile..etc.neutron.api.paste.ini: [deleted]
modified.conffile..etc.neutron.fwaas.driver.ini: [deleted]
modified.conffile..etc.neutron.l3.agent.ini: [deleted]
modified.conffile..etc.neutron.neutron.conf: [deleted]
modified.conffile..etc.neutron.policy.json: [deleted]
modified.conffile..etc.neutron.rootwrap.conf: [deleted]
modified.conffile..etc.neutron.rootwrap.d.debug.filters: [deleted]
modified.conffile..etc.neutron.rootwrap.d.iptables.firewall.filters: [deleted]
modified.conffile..etc.neutron.rootwrap.d.l3.filters: [deleted]
modified.conffile..etc.neutron.rootwrap.d.vpnaas.filters: [deleted]
modified.conffile..etc.neutron.vpn.agent.ini: [deleted]
modified.conffile..etc.sudoers.d.neutron.sudoers: [deleted]

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: cts

** Tags added: apport-collected trusty uec-images

** Description changed:

  For some reason, some compute nodes are missing ovs flows of various
  tenants (not all of them), resulting in vm isolation (no dhcp/metadata
  on boot). A particular tenant A might have issues with node B whilst
  tenant B might have problems with node A and not B. All of the
  affected tenant VMs running in an affected node are isolated.
  
  In those compute nodes, the ovs-vswitchd process has crashed previously:
  ubuntu@niagara:~$ ps -ef | grep vswitchd
  root  1959 1  0 Feb12 ?00:41:15 ovs-vswitchd: monitoring pid 
62005 (4 crashes: pid 59408 died, killed (Segmentation fault), core dumped)
  
- After restarting the openvswitch-switch service, ps shows that the
- missing flows are being created by neutron (e.g. sudo neutron-rootwrap
- /etc/neutron/rootwrap.conf ovs-ofctl mod-flows br-tun
- table=21,dl_vlan=11,actions=strip_vlan,set_tunnel:4,output:5,6,2,4,7,3).
+ After restarting the openvswitch-switch service, ps shows that the missing 
flows are being created by neutron (e.g. sudo neutron-rootwrap 
/etc/neutron/rootwrap.conf ovs-ofctl mod-flows br-tun 
table=21,dl_vlan=11,actions=strip_vlan,set_tunnel:4,output:5,6,2,4,7,3).
+ --- 
+ ApportVersion: 2.14.1-0ubuntu3.5
+ Architecture: amd64
+ DistroRelease: Ubuntu 14.04
+ Package: neutron-common 1:2014.1.3-0ubuntu1
+ PackageArchitecture: all
+ ProcEnviron:
+  TERM=xterm
+  PATH=(custom, no user)
+  XDG_RUNTIME_DIR=set
+  LANG=en_US.UTF-8
+  SHELL=/bin/bash
+ ProcVersionSignature: User Name 3.13.0-45.74-generic 3.13.11-ckt13
+ Tags:  trusty uec-images
+ Uname: Linux 3.13.0-45-generic x86_64
+ UpgradeStatus: No upgrade log present (probably fresh install)
+ UserGroups: adm audio cdrom dialout dip floppy libvirtd netdev plugdev sudo 
video
+ _MarkForUpload: True
+ modified.conffile..etc.neutron.api.paste.ini: [deleted]
+ modified.conffile..etc.neutron.fwaas.driver.ini: [deleted]
+ modified.conffile..etc.neutron.l3.agent.ini: [deleted]
+ modified.conffile..etc.neutron.neutron.conf: [deleted]
+ modified.conffile..etc.neutron.policy.json: [deleted]
+ modified.conffile..etc.neutron.rootwrap.conf: [deleted]
+ modified.conffile..etc.neutron.rootwrap.d.debug.filters: [deleted]
+ modified.conffile..etc.neutron.rootwrap.d.iptables.firewall.filters: [deleted]
+ modified.conffile..etc.neutron.rootwrap.d.l3.filters: [deleted]
+ modified.conffile..etc.neutron.rootwrap.d.vpnaas.filters: [deleted]
+ modified.conffile..etc.neutron.vpn.agent.ini: [deleted]
+ modified.conffile..etc.sudoers.d.neutron.sudoers: [deleted]

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.