[Yahoo-eng-team] [Bug 1422046] Re: cinder backup-list is always listing all tenants' backups for admin

2017-09-14 Thread Jordan Pittier
** Changed in: ospurge
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1422046

Title:
  cinder backup-list is always listing all tenants' backups for admin

Status in OpenStack Dashboard (Horizon):
  New
Status in ospurge:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in python-cinderclient:
  Fix Released
Status in python-cinderclient package in Ubuntu:
  Fix Released

Bug description:
  cinder backup-list doesn't support the '--all-tenants' argument for admins
  right now. This leads to admins always getting all tenants' backups.
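
  A minimal sketch of how the missing flag plays out from the client side,
  assuming a python-cinderclient where BackupManager.list() accepts
  search_opts (the credentials and endpoint below are placeholders):

    # Sketch only: not the actual fix, just the intended behaviour.
    from cinderclient.v2 import client

    cinder = client.Client('admin', 'secret', 'admin',
                           'http://keystone:5000/v2.0')

    # Before the fix an admin token always got every tenant's backups here.
    backups = cinder.backups.list()

    # With an all_tenants knob plumbed through, the scope becomes explicit.
    own_backups = cinder.backups.list(search_opts={'all_tenants': 0})
    all_backups = cinder.backups.list(search_opts={'all_tenants': 1})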

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1422046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1659811] [NEW] /v2.1/servers/detail?tenant_id=XX returns unpredictable results

2017-01-27 Thread Jordan Pittier
Public bug reported:

Found this while investigating why the tempest test
test_list_servers_by_admin_with_specified_tenant randomly fails on Py35.

This test produces the following API call:

GET /v2.1/servers/detail?tenant_id=XXX

Which calls this method in Nova:

nova.api.openstack.compute.servers.ServersController.detail()

In the _get_servers() method that detail() calls, for some reason (a bug?
[1]), the 'project_id' is added to the search options. So we see that
get_all() is called with both tenant_id and project_id:

Searching by: {'project_id': '74e1044b53de44d1bac80cded5146504',
'deleted': False, 'tenant_id': '153d2038e0bc4ea99819a21a55cb66ea'}
get_all /opt/stack/new/nova/nova/compute/api.py:2336

Now, in nova/nova/compute/api.py, in the get_all() method, there's a dict
called filter_mapping that is iterated over. I believe that, depending on
the order in which the dict (i.e. the search options) is iterated, either
tenant_id or project_id gets overwritten. This leads to random return
values.

[1] :
https://github.com/openstack/nova/blob/cba26a6e561c18fa4659efac8ddc0b3c139023fe/nova/api/openstack/compute/servers.py#L322
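
A standalone sketch of the suspected mechanism (the filter_mapping contents
here are illustrative, not Nova's exact dict): when two search options map to
the same DB filter key, whichever one the loop visits last wins, and on
Python 3.5 plain dict iteration order is not guaranteed, so the surviving
value varies between runs.

    # Illustrative only: mimics a filter_mapping-style rewrite loop.
    search_opts = {
        'project_id': '74e1044b53de44d1bac80cded5146504',
        'deleted': False,
        'tenant_id': '153d2038e0bc4ea99819a21a55cb66ea',
    }
    filter_mapping = {
        'tenant_id': 'project_id',   # both options target the same column
        'project_id': 'project_id',
        'deleted': 'deleted',
    }

    db_filters = {}
    for opt, value in search_opts.items():
        db_filters[filter_mapping[opt]] = value

    # Whichever of tenant_id/project_id was visited last is what the DB
    # query will actually filter on.
    print(db_filters['project_id'])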

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1659811

Title:
  /v2.1/servers/detail?tenant_id=XX returns unpredictable results

Status in OpenStack Compute (nova):
  New

Bug description:
  Found this while investigating why the tempest test
  test_list_servers_by_admin_with_specified_tenant randomly fails on
  Py35.

  This test produces the following API call:

  GET /v2.1/servers/detail?tenant_id=XXX

  Which calls this method in Nova:

  nova.api.openstack.compute.servers.ServersController.detail()

  In the _get_servers() method that detail() calls, for some reason (a bug?
  [1]), the 'project_id' is added to the search options. So we see that
  get_all() is called with both tenant_id and project_id:

  Searching by: {'project_id': '74e1044b53de44d1bac80cded5146504',
  'deleted': False, 'tenant_id': '153d2038e0bc4ea99819a21a55cb66ea'}
  get_all /opt/stack/new/nova/nova/compute/api.py:2336

  Now, in nova/nova/compute/api.py, in the get_all() method, there's a dict
  called filter_mapping that is iterated over. I believe that, depending on
  the order in which the dict (i.e. the search options) is iterated, either
  tenant_id or project_id gets overwritten. This leads to random return
  values.

  [1] :
  
https://github.com/openstack/nova/blob/cba26a6e561c18fa4659efac8ddc0b3c139023fe/nova/api/openstack/compute/servers.py#L322

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1659811/+subscriptions



[Yahoo-eng-team] [Bug 1659129] Re: n-api throws exception when listing servers

2017-01-27 Thread Jordan Pittier
The stack trace says "Unknown database 'nova_cell0'". The latest version of
Nova now requires creating a nova_cell0 DB. Please create it, and/or look at
how DevStack sets this up.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1659129

Title:
  n-api throws exception when listing servers

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When I tried to list all servers, n-api threw an exception complaining
  about an "Unknown database" error. I am using the master branch TOT
  (top-of-tree) version.

  nicira@htb-1n-eng-dhcp8:~/devstack$ openstack server list
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-4c779421-1c75-4e0a-b46f-ca2fe4995c25)

  
  n-api log:
  2017-01-24 19:51:27.176 DEBUG nova.compute.api 
[req-4c779421-1c75-4e0a-b46f-ca2fe4995c25 demo demo] Searching by: {'deleted': 
False, 'project_id': u'c1c62ff8a14348108d4519c65c3db3e1'} from (pid=9385) 
get_all /opt/stack/nova/nova/compute/api.py:2331
  2017-01-24 19:51:27.194 ERROR nova.api.openstack.extensions 
[req-4c779421-1c75-4e0a-b46f-ca2fe4995c25 demo demo] Unexpected exception in 
API method
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 338, in wrapped
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 181, in wrapper
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions return 
func(*args, **kwargs)
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 181, in wrapper
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions return 
func(*args, **kwargs)
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 209, in detail
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions servers = 
self._get_servers(req, is_detail=True)
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 344, in 
_get_servers
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions 
sort_keys=sort_keys, sort_dirs=sort_dirs)
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 2416, in get_all
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions 
sort_dirs=sort_dirs)
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 2493, in _get_instances_by_filters
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions 
expected_attrs=fields, sort_keys=sort_keys, sort_dirs=sort_dirs)
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions result = 
fn(cls, context, *args, **kwargs)
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/objects/instance.py", line 1220, in get_by_filters
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions 
use_slave=use_slave, sort_keys=sort_keys, sort_dirs=sort_dirs)
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/db/sqlalchemy/api.py", line 235, in wrapper
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions with 
reader_mode.using(context):
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/usr/lib/python2.7/contextlib.py", line 17, in __enter__
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions return 
self.gen.next()
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 944, in _transaction_scope
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions 
allow_async=self._allow_async) as resource:
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/usr/lib/python2.7/contextlib.py", line 17, in __enter__
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions return 
self.gen.next()
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 558, in _session
  2017-01-24 19:51:27.194 TRACE nova.api.openstack.extensions 
bind=self.connection, mode=self.mode)
  2017-01-24 

[Yahoo-eng-team] [Bug 1648767] [NEW] Stack trace when trying to delete a nonexistent image

2016-12-09 Thread Jordan Pittier
Public bug reported:

I got the following stack trace in a Gate job run [1]:

ERROR nova.compute.manager [req-xx tempest-ImagesTestJSON-1266595174 
tempest-ImagesTestJSON-1266595174] [instance: XX] Error while trying to clean 
up image bc92d2a5-6fa0-4a84-8ad2-cb9f597da420
ERROR nova.compute.manager [instance: XX] Traceback (most recent call last):
ERROR nova.compute.manager [instance: XX]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 238, in decorated_function
ERROR nova.compute.manager [instance: XX] self.image_api.delete(context, 
image_id)
ERROR nova.compute.manager [instance: XX]   File 
"/opt/stack/new/nova/nova/image/api.py", line 141, in delete
ERROR nova.compute.manager [instance: XX] return session.delete(context, 
image_id)
ERROR nova.compute.manager [instance: XX]   File 
"/opt/stack/new/nova/nova/image/glance.py", line 765, in delete
ERROR nova.compute.manager [instance: XX] raise 
exception.ImageNotFound(image_id=image_id)
ERROR nova.compute.manager [instance: XX] ImageNotFound: Image 
bc92d2a5-6fa0-4a84-8ad2-cb9f597da420 could not be found.

There's no need to panic when we try to delete a nonexistent image. We
should log this at a lower level, like INFO.

I am working on a patch right now.

[1] http://logs.openstack.org/56/408056/4/check/gate-tempest-dsvm-full-
devstack-plugin-ceph-ubuntu-
xenial/2c72482/logs/screen-n-cpu.txt.gz?level=ERROR#_2016-12-08_16_43_20_627
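
A minimal sketch of the kind of change being described (the helper name and
its arguments are assumptions based on the trace above, not the final patch):

    # Sketch only: treat "image already gone" as expected during cleanup and
    # log it quietly instead of letting it surface as an ERROR traceback.
    from oslo_log import log as logging

    from nova import exception

    LOG = logging.getLogger(__name__)

    def _cleanup_image(image_api, context, image_id, instance):
        try:
            image_api.delete(context, image_id)
        except exception.ImageNotFound:
            LOG.info("Image %s was already deleted while cleaning up "
                     "instance %s", image_id, instance.uuid)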

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648767

Title:
  Stack trace when trying to delete a nonexistent image

Status in OpenStack Compute (nova):
  New

Bug description:
  I got the following stack trace in a Gate job run [1]:

  ERROR nova.compute.manager [req-xx tempest-ImagesTestJSON-1266595174 
tempest-ImagesTestJSON-1266595174] [instance: XX] Error while trying to clean 
up image bc92d2a5-6fa0-4a84-8ad2-cb9f597da420
  ERROR nova.compute.manager [instance: XX] Traceback (most recent call last):
  ERROR nova.compute.manager [instance: XX]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 238, in decorated_function
  ERROR nova.compute.manager [instance: XX] self.image_api.delete(context, 
image_id)
  ERROR nova.compute.manager [instance: XX]   File 
"/opt/stack/new/nova/nova/image/api.py", line 141, in delete
  ERROR nova.compute.manager [instance: XX] return session.delete(context, 
image_id)
  ERROR nova.compute.manager [instance: XX]   File 
"/opt/stack/new/nova/nova/image/glance.py", line 765, in delete
  ERROR nova.compute.manager [instance: XX] raise 
exception.ImageNotFound(image_id=image_id)
  ERROR nova.compute.manager [instance: XX] ImageNotFound: Image 
bc92d2a5-6fa0-4a84-8ad2-cb9f597da420 could not be found.

  There's no need to panic when we try to delete a nonexistent image. We
  should log this at a lower level, like INFO.

  I am working on a patch right now.

  [1] http://logs.openstack.org/56/408056/4/check/gate-tempest-dsvm-
  full-devstack-plugin-ceph-ubuntu-
  xenial/2c72482/logs/screen-n-cpu.txt.gz?level=ERROR#_2016-12-08_16_43_20_627

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1648767/+subscriptions



[Yahoo-eng-team] [Bug 1640115] [NEW] Policies ['update_rbac_policy:target_tenant'] are part of a cyclical reference.

2016-11-08 Thread Jordan Pittier
Public bug reported:

Hi,
This message:

WARNING oslo_policy.policy [XXX] Policies
['update_rbac_policy:target_tenant'] are part of a cyclical reference.

spams the neutron-server logs. It's printed ~30,000 times in an average
Gate run. This makes the log file hard to read and slows down logstash.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1640115

Title:
  Policies ['update_rbac_policy:target_tenant'] are part of a cyclical
  reference.

Status in neutron:
  New

Bug description:
  Hi,
  This message:

  WARNING oslo_policy.policy [XXX] Policies
  ['update_rbac_policy:target_tenant'] are part of a cyclical reference.

  spams the neutron-server logs. It's printed ~30,000 times in an average
  Gate run. This makes the log file hard to read and slows down logstash.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1640115/+subscriptions



[Yahoo-eng-team] [Bug 1476770] Re: _translate_from_glance fails with "AttributeError: id" in grenade

2015-11-19 Thread Jordan Pittier
** Also affects: keystonemiddleware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1476770

Title:
  _translate_from_glance fails with "AttributeError: id" in grenade

Status in Glance:
  Invalid
Status in keystonemiddleware:
  New
Status in openstack-ansible:
  In Progress
Status in OpenStack-Gate:
  Fix Committed
Status in oslo.vmware:
  Fix Released
Status in python-glanceclient:
  In Progress

Bug description:
  http://logs.openstack.org/28/204128/2/check/gate-grenade-
  dsvm/80607dc/logs/old/screen-n-api.txt.gz?level=TRACE

  2015-07-21 17:05:37.447 ERROR nova.api.openstack 
[req-9854210d-b9fc-47ff-9f00-1a0270266e2a tempest-ServersTestJSON-34270062 
tempest-ServersTestJSON-745803609] Caught error: id
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack Traceback (most recent 
call last):
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/__init__.py", line 125, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
req.get_response(self.application)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in 
call_application
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 634, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py",
 line 554, in _call_app
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/routes/middleware.py", line 136, in 
__call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
resp(environ, start_response)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 756, in __call__
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack content_type, 
body, accept)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 821, in _process_stack
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/wsgi.py", line 911, in dispatch
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack return 
method(req=request, **action_args)
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack   File 
"/opt/stack/old/nova/nova/api/openstack/compute/servers.py", line 636, in create
  2015-07-21 17:05:37.447 21251 TRACE nova.api.openstack 
self._handle_create_exception(*sys.exc_info())
  2015-07-21 

[Yahoo-eng-team] [Bug 1486475] [NEW] Transient mismatch between OS-EXT-STS:vm_state and OS-EXT-STS:power_state

2015-08-19 Thread Jordan Pittier
Public bug reported:

Hi,

I got this weird Tempest run here:
http://logs.openstack.org/35/206935/1/gate/gate-tempest-dsvm-neutron-
full/ef5a1a9/console.html#_2015-08-18_18_20_26_980 where the server has
OS-EXT-STS:vm_state: active and OS-EXT-STS:power_state: 3.

Power state 3 is PAUSED according to nova/compute/power_state.py, so it
seems there's a mismatch with the vm_state being active.

The discrepancy lasts only a fraction of a second, but as Tempest hits Nova
hard, it caught this intermediate state and my build was marked as failed.

Could someone confirm that right after the VM transitions from
BUILD/spawning to ACTIVE/None there can be a window of inconsistency?
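
For reference, a rough sketch of the relevant power-state values and of a
tolerant polling loop a test client could use. The numeric constants reflect
my reading of nova/compute/power_state.py, and fetch_server is a placeholder
for whatever returns the server document from GET /servers/<id>:

    import time

    # Assumed values from nova/compute/power_state.py; double-check your tree.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04

    def wait_until_consistent(fetch_server, timeout=30, interval=1):
        """Poll until vm_state and power_state agree, tolerating the
        transient active/PAUSED window described above."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            server = fetch_server()
            vm_state = server['OS-EXT-STS:vm_state']
            power_state = server['OS-EXT-STS:power_state']
            if vm_state == 'active' and power_state == RUNNING:
                return server
            time.sleep(interval)
        raise AssertionError('server never reached active/RUNNING')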

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486475

Title:
  Transient mismatch between OS-EXT-STS:vm_state and OS-EXT-
  STS:power_state

Status in OpenStack Compute (nova):
  New
Status in tempest:
  New

Bug description:
  Hi,

  I got this weird Tempest run here:
  http://logs.openstack.org/35/206935/1/gate/gate-tempest-dsvm-neutron-
  full/ef5a1a9/console.html#_2015-08-18_18_20_26_980 where the server
  has OS-EXT-STS:vm_state: active and OS-EXT-STS:power_state: 3.

  Power state 3 is PAUSED according to nova/compute/power_state.py, so it
  seems there's a mismatch with the vm_state being active.

  The discrepancy lasts only a fraction of a second, but as Tempest hits
  Nova hard, it caught this intermediate state and my build was marked as
  failed.

  Could someone confirm that right after the VM transitions from
  BUILD/spawning to ACTIVE/None there can be a window of inconsistency?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1486475/+subscriptions



[Yahoo-eng-team] [Bug 1482287] [NEW] Scality volume driver doesn't detect Scality FS is already mounted

2015-08-06 Thread Jordan Pittier
Public bug reported:

A new version of the Scality Ring doesn't have a 'sys' directory at the root
level of the Scality (distributed) filesystem. But the
LibvirtScalityVolumeDriver in Nova relies on the presence of this 'sys'
directory to detect whether the Scality filesystem has been properly
mounted. This means that the LibvirtScalityVolumeDriver doesn't detect that
the filesystem was already mounted.

See:
https://github.com/openstack/nova/blob/14d7265263702d208dcef18a4200bf395db5bf40/nova/virt/libvirt/volume/scality.py#L111
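
A small sketch of a less content-dependent check, for discussion only (the
function name is mine and the real driver's attributes may differ): ask the
mount table instead of probing for a 'sys' entry inside the filesystem.

    import os

    def sofs_is_mounted(mount_point):
        # Ask the kernel whether something is mounted there, instead of
        # relying on a 'sys' directory existing inside the Scality FS.
        if os.path.ismount(mount_point):
            return True
        # Fallback: scan /proc/mounts for an entry at that path.
        with open('/proc/mounts') as mounts:
            for line in mounts:
                fields = line.split()
                if len(fields) >= 2 and fields[1] == mount_point:
                    return True
        return False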

** Affects: nova
 Importance: Low
 Assignee: Jordan Pittier (jordan-pittier)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jordan Pittier (jordan-pittier)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482287

Title:
  Scality volume driver doesn't detect Scality FS is already mounted

Status in OpenStack Compute (nova):
  New

Bug description:
  A new version of the Scality Ring doesn't have a 'sys' directory at the
  root level of the Scality (distributed) filesystem. But the
  LibvirtScalityVolumeDriver in Nova relies on the presence of this 'sys'
  directory to detect whether the Scality filesystem has been properly
  mounted. This means that the LibvirtScalityVolumeDriver doesn't detect
  that the filesystem was already mounted.

  See:
  
https://github.com/openstack/nova/blob/14d7265263702d208dcef18a4200bf395db5bf40/nova/virt/libvirt/volume/scality.py#L111

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1482287/+subscriptions



[Yahoo-eng-team] [Bug 1362048] Re: SQLite timeout in glance image_cache

2014-09-10 Thread Jordan Pittier
Yeah, I thought for a minute that to trigger a reverify in the gate, a
bug in Tempest had to be logged. But that's obviously wrong.

** No longer affects: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1362048

Title:
  SQLite timeout in glance image_cache

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Hi,
  Sometimes I get the following stack trace in the Glance API:

  GET /v1/images/42646b2b-cf0b-4b15-b011-19d0a6880ffb HTTP/1.1 200 4970175 
2.403391
  for chunk in image_iter:
File /opt/stack/new/glance/glance/api/middleware/cache.py, line 281, in 
get_from_cache
  yield chunk
File /usr/lib/python2.7/contextlib.py, line 24, in __exit__
  self.gen.next()
File /opt/stack/new/glance/glance/image_cache/drivers/sqlite.py, line 
373, in open_for_read
  with self.get_db() as db:
File /usr/lib/python2.7/contextlib.py, line 17, in __enter__
  return self.gen.next()
File /opt/stack/new/glance/glance/image_cache/drivers/sqlite.py, line 
391, in get_db
  conn.execute('PRAGMA synchronous = NORMAL')
File /opt/stack/new/glance/glance/image_cache/drivers/sqlite.py, line 77, 
in execute
  return self._timeout(lambda: sqlite3.Connection.execute(
File /opt/stack/new/glance/glance/image_cache/drivers/sqlite.py, line 74, 
in _timeout
  sleep(0.05)
File /usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 31, 
in sleep
  hub.switch()
File /usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in 
switch
  return self.greenlet.switch()
  Timeout: 2 seconds

  It also happens from time to time in the Gate. See the following
  logstash request:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwicmV0dXJuIHNlbGYuZ3JlZW5sZXQuc3dpdGNoKClcIiBBTkQgZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1nLWFwaS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTEyNjQ1NjU3NywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  
  This caused the gate failure of:
http://logs.openstack.org/22/116622/2/check/check-tempest-dsvm-postgres-full/f079ef9/logs/screen-g-api.txt.gz?
  (wait for the page to fully load, then grep for "Timeout: 2 seconds")

  Sorry for not being able to investigate more.

  Jordan

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1362048/+subscriptions



[Yahoo-eng-team] [Bug 1362048] [NEW] SQLite timeout in glance image_cache

2014-08-27 Thread Jordan Pittier
Public bug reported:

Hi,
Sometimes I get the following stack trace in the Glance API:

GET /v1/images/42646b2b-cf0b-4b15-b011-19d0a6880ffb HTTP/1.1 200 4970175 
2.403391
for chunk in image_iter:
  File /opt/stack/new/glance/glance/api/middleware/cache.py, line 281, in 
get_from_cache
yield chunk
  File /usr/lib/python2.7/contextlib.py, line 24, in __exit__
self.gen.next()
  File /opt/stack/new/glance/glance/image_cache/drivers/sqlite.py, line 373, 
in open_for_read
with self.get_db() as db:
  File /usr/lib/python2.7/contextlib.py, line 17, in __enter__
return self.gen.next()
  File /opt/stack/new/glance/glance/image_cache/drivers/sqlite.py, line 391, 
in get_db
conn.execute('PRAGMA synchronous = NORMAL')
  File /opt/stack/new/glance/glance/image_cache/drivers/sqlite.py, line 77, 
in execute
return self._timeout(lambda: sqlite3.Connection.execute(
  File /opt/stack/new/glance/glance/image_cache/drivers/sqlite.py, line 74, 
in _timeout
sleep(0.05)
  File /usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 31, in 
sleep
hub.switch()
  File /usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in 
switch
return self.greenlet.switch()
Timeout: 2 seconds

It also happens from time to time in the Gate. See the following
logstash request:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwicmV0dXJuIHNlbGYuZ3JlZW5sZXQuc3dpdGNoKClcIiBBTkQgZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1nLWFwaS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTEyNjQ1NjU3NywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==


This caused the gate failure of:
http://logs.openstack.org/22/116622/2/check/check-tempest-dsvm-postgres-full/f079ef9/logs/screen-g-api.txt.gz?
  (wait for the page to fully load, then grep for "Timeout: 2 seconds")

Sorry for not being able to investigate more.

Jordan
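
For readers following the trace: a rough, self-contained sketch of the
retry-under-timeout pattern the image-cache sqlite driver appears to use.
The 2-second budget and 0.05 s sleep come from the trace; the function below
is a simplified stand-in, not Glance's actual code:

    import sqlite3

    import eventlet
    from eventlet import timeout as etimeout

    def execute_with_retry(conn, statement, budget=2.0, pause=0.05):
        """Retry a sqlite statement while the database is locked, giving up
        after `budget` seconds -- roughly what 'Timeout: 2 seconds' means."""
        with etimeout.Timeout(budget):
            while True:
                try:
                    return conn.execute(statement)
                except sqlite3.OperationalError:
                    # Another greenthread holds the lock; yield and retry.
                    eventlet.sleep(pause)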

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1362048

Title:
  SQLite timeout in glance image_cache

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Tempest:
  New

Bug description:
  Hi,
  Sometimes I get the following stack trace in the Glance API:

  GET /v1/images/42646b2b-cf0b-4b15-b011-19d0a6880ffb HTTP/1.1 200 4970175 
2.403391
  for chunk in image_iter:
File /opt/stack/new/glance/glance/api/middleware/cache.py, line 281, in 
get_from_cache
  yield chunk
File /usr/lib/python2.7/contextlib.py, line 24, in __exit__
  self.gen.next()
File /opt/stack/new/glance/glance/image_cache/drivers/sqlite.py, line 
373, in open_for_read
  with self.get_db() as db:
File /usr/lib/python2.7/contextlib.py, line 17, in __enter__
  return self.gen.next()
File /opt/stack/new/glance/glance/image_cache/drivers/sqlite.py, line 
391, in get_db
  conn.execute('PRAGMA synchronous = NORMAL')
File /opt/stack/new/glance/glance/image_cache/drivers/sqlite.py, line 77, 
in execute
  return self._timeout(lambda: sqlite3.Connection.execute(
File /opt/stack/new/glance/glance/image_cache/drivers/sqlite.py, line 74, 
in _timeout
  sleep(0.05)
File /usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 31, 
in sleep
  hub.switch()
File /usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in 
switch
  return self.greenlet.switch()
  Timeout: 2 seconds

  It also happens from time to time in the Gate. See the following
  logstash request:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwicmV0dXJuIHNlbGYuZ3JlZW5sZXQuc3dpdGNoKClcIiBBTkQgZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1nLWFwaS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTEyNjQ1NjU3NywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  
  This caused the gate failure of:
http://logs.openstack.org/22/116622/2/check/check-tempest-dsvm-postgres-full/f079ef9/logs/screen-g-api.txt.gz?
  (wait for the page to fully load, then grep for "Timeout: 2 seconds")

  Sorry for not being able to investigate more.

  Jordan

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1362048/+subscriptions



[Yahoo-eng-team] [Bug 1360235] [NEW] swift functional tests are broken

2014-08-22 Thread Jordan Pittier
Public bug reported:

Hi,
The following tests are failing in master:

GLANCE_TEST_SWIFT_CONF=/etc/glance/glance-api.conf ./run_tests.sh
glance.tests.functional.store.test_swift.TestSwiftStore;

I believe this commit
https://github.com/openstack/glance/commit/63195aaa3b12e56ae787598e001ac44d62e52865
broke them.

The problem , i believe, is that in glance/store/swift.py the line
SWIFT_STORE_REF_PARAMS = swift_store_utils.SwiftParams().params is
evaluated when the file is imported which is too early for
tests/functional/store/test_swift.py:TestSwiftStore (see method setUp).

Also,
glance.tests.functional.store.test_swift.TestSwiftStore.test_delayed_delete_with_auth
is broken because of this commit:
https://github.com/openstack/glance/commit/66d24bb1a130902e824ca76cbee1deb6ef564873

Jordan
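
To make the timing problem concrete, here is a small generic sketch of the
import-time-evaluation pitfall and the usual lazy alternative (module and
function names are illustrative, not Glance's):

    # Eager: computed the moment the module is imported, i.e. before any
    # test setUp() has had a chance to write the swift store config file.
    #   SWIFT_STORE_REF_PARAMS = SwiftParams().params

    # Lazy: compute on first use instead, so tests can prepare config first.
    _params_cache = None

    def get_swift_ref_params(loader):
        """`loader` stands in for swift_store_utils.SwiftParams."""
        global _params_cache
        if _params_cache is None:
            _params_cache = loader().params
        return _params_cache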

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1360235

Title:
  swift functional tests are broken

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Hi,
  The following tests are failing in master : 

  GLANCE_TEST_SWIFT_CONF=/etc/glance/glance-api.conf ./run_tests.sh
  glance.tests.functional.store.test_swift.TestSwiftStore;

  I believe this commit
  
https://github.com/openstack/glance/commit/63195aaa3b12e56ae787598e001ac44d62e52865
  broke them.

  The problem, I believe, is that in glance/store/swift.py the line
  SWIFT_STORE_REF_PARAMS = swift_store_utils.SwiftParams().params is
  evaluated when the file is imported, which is too early for
  tests/functional/store/test_swift.py:TestSwiftStore (see its setUp
  method).

  Also,
glance.tests.functional.store.test_swift.TestSwiftStore.test_delayed_delete_with_auth
  is broken because of this commit:
https://github.com/openstack/glance/commit/66d24bb1a130902e824ca76cbee1deb6ef564873

  Jordan

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1360235/+subscriptions



[Yahoo-eng-team] [Bug 1307344] [NEW] neutron-db-manage fails with metering enabled

2014-04-14 Thread Jordan Pittier
Public bug reported:

Hello,
I just got a neutron-db-manage failure in the Gate:
http://logs.openstack.org/77/83777/1/check/check-tempest-dsvm-neutron-heat-slow/a7ad8c0/

The relevant stack trace is:


2014-04-13 23:53:15.281 | INFO  [alembic.migration] Running upgrade b65aa907aec -> 33c3db036fe4, set_length_of_description_field_metering
2014-04-13 23:53:15.296 | Traceback (most recent call last):
2014-04-13 23:53:15.296 |   File "/usr/local/bin/neutron-db-manage", line 10, in <module>
.
2014-04-13 23:53:15.340 | sqlalchemy.exc.ProgrammingError: (ProgrammingError) (1146, "Table 'neutron_ml2.meteringlabels' doesn't exist") 'ALTER TABLE meteringlabels CHANGE description description VARCHAR(1024) NULL' ()
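
One way such a migration is usually made defensive, shown as a generic
sketch (not the actual neutron fix; the table name comes from the trace and
the existing column type below is an assumption):

    # Sketch: only run the ALTER when the metering table actually exists, so
    # deployments that never enabled metering can still upgrade.
    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        bind = op.get_bind()
        inspector = sa.inspect(bind)
        if 'meteringlabels' in inspector.get_table_names():
            op.alter_column('meteringlabels', 'description',
                            existing_type=sa.String(255),
                            type_=sa.String(1024),
                            existing_nullable=True)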


** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1307344

Title:
  neutron-db-manage fails with metering enabled

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hello,
  I just got a neutron-db-manage failure in the Gate:
http://logs.openstack.org/77/83777/1/check/check-tempest-dsvm-neutron-heat-slow/a7ad8c0/

  The relevant stack trace is:

  
  2014-04-13 23:53:15.281 | INFO  [alembic.migration] Running upgrade b65aa907aec -> 33c3db036fe4, set_length_of_description_field_metering
  2014-04-13 23:53:15.296 | Traceback (most recent call last):
  2014-04-13 23:53:15.296 |   File "/usr/local/bin/neutron-db-manage", line 10, in <module>
  .
  2014-04-13 23:53:15.340 | sqlalchemy.exc.ProgrammingError: (ProgrammingError) (1146, "Table 'neutron_ml2.meteringlabels' doesn't exist") 'ALTER TABLE meteringlabels CHANGE description description VARCHAR(1024) NULL' ()
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1307344/+subscriptions
