[Yahoo-eng-team] [Bug 1374573] Re: Server hang on external network deletion with FIPs

2014-09-29 Thread Armando Migliaccio
It would be nice if we had a functional test that covered this case.

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374573

Title:
  Server hang on external network deletion with FIPs

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in Tempest:
  New

Bug description:
  This happens on master:

  Follow these steps:

  1) neutron net-create test --router:external=True
  2) neutron subnet-create test 200.0.0.0/22 --name test
  3) neutron floatingip-create test
  4) neutron net-delete test

  Watch command 4) hang (the server never comes back). Expected behavior
  would be for the command to succeed and delete the network
  successfully.

  This looks like a regression caused by commit:
  b1677dcb80ce8b83aadb2180efad3527a96bd3bc
  (https://review.openstack.org/#/c/82945/)
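
  For reference, the four steps can be scripted with python-neutronclient; a
  minimal sketch (the credentials and auth_url are placeholders, and the
  eventlet timeout is only there so the hang surfaces as an exception instead
  of blocking forever):

    import eventlet
    eventlet.monkey_patch()

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://127.0.0.1:5000/v2.0')

    net = neutron.create_network(
        {'network': {'name': 'test', 'router:external': True}})['network']
    neutron.create_subnet(
        {'subnet': {'network_id': net['id'], 'cidr': '200.0.0.0/22',
                    'ip_version': 4, 'name': 'test'}})
    neutron.create_floatingip(
        {'floatingip': {'floating_network_id': net['id']}})

    # With the regression, this call never returns; 60 seconds is generous.
    with eventlet.Timeout(60):
        neutron.delete_network(net['id'])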

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375139] [NEW] LDAP, non-ASCII characters in CN field cause error while switching projects

2014-09-29 Thread Robert Plestenjak
Public bug reported:

2014-09-22 13:33:31.465 2641 INFO eventlet.wsgi.server [-] 127.0.0.1 - - [22/Sep/2014 13:33:31] "POST /v2.0/tokens HTTP/1.1" 200 1284 0.223019
2014-09-22 13:33:31.761 2641 INFO eventlet.wsgi.server [-] 127.0.0.1 - - [22/Sep/2014 13:33:31] "GET /v2.0/tenants HTTP/1.1" 200 1814 0.291879
2014-09-22 13:33:31.837 2641 ERROR keystone.common.wsgi [-] 'ascii' codec can't encode character u'\u010d' in position 13: ordinal not in range(128)
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi Traceback (most recent call last):
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/common/wsgi.py", line 212, in __call__
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi     result = method(context, **params)
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/token/controllers.py", line 89, in authenticate
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi     context, auth)
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/token/controllers.py", line 205, in _authenticate_token
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi     user_id, tenant_id)
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/token/controllers.py", line 362, in _get_project_roles_and_ref
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi     user_id, tenant_id)
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/assignment/core.py", line 181, in get_roles_for_user_and_project
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi     user_role_list = _get_user_project_roles(user_id, project_ref)
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/assignment/core.py", line 162, in _get_user_project_roles
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi     tenant_id=project_ref['id'])
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/common/manager.py", line 78, in _wrapper
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi     return f(*args, **kw)
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/assignment/backends/ldap.py", line 118, in _get_metadata
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi     tenant_id)
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/assignment/backends/ldap.py", line 95, in _get_roles_for_just_user_and_project
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi     if common_ldap.is_dn_equal(a.user_dn, user_dn)]
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/common/ldap/core.py", line 276, in is_dn_equal
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi     dn1 = ldap.dn.str2dn(dn1)
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File "/usr/lib64/python2.6/site-packages/ldap/dn.py", line 53, in str2dn
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi     return ldap.functions._ldap_function_call(_ldap.str2dn,dn,flags)
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File "/usr/lib64/python2.6/site-packages/ldap/functions.py", line 57, in _ldap_function_call
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi     result = func(*args,**kwargs)
2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi UnicodeEncodeError: 'ascii' codec can't encode character u'\u010d' in position 13: ordinal not in range(128)
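
The failing frame hands a unicode DN to ldap.dn.str2dn(). A minimal sketch of
the failure and the usual workaround, assuming Python 2 with python-ldap
installed (the DN value here is made up):

  # -*- coding: utf-8 -*-
  import ldap.dn

  dn = u'cn=Jan Hru\u010dek,ou=Users,dc=example,dc=com'   # contains U+010D

  try:
      ldap.dn.str2dn(dn)                # the C layer str()-encodes its argument
  except UnicodeEncodeError as exc:     # 'ascii' codec can't encode character ...
      print(exc)

  # Encoding the DN to UTF-8 before parsing avoids the implicit ascii encode.
  print(ldap.dn.str2dn(dn.encode('utf-8')))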

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1375139

Title:
  LDAP, non-ASCII characters in CN field cause error while switching
  projects

Status in OpenStack Identity (Keystone):
  New

Bug description:
  2014-09-22 13:33:31.465 2641 INFO eventlet.wsgi.server [-] 127.0.0.1 - - [22/Sep/2014 13:33:31] "POST /v2.0/tokens HTTP/1.1" 200 1284 0.223019
  2014-09-22 13:33:31.761 2641 INFO eventlet.wsgi.server [-] 127.0.0.1 - - [22/Sep/2014 13:33:31] "GET /v2.0/tenants HTTP/1.1" 200 1814 0.291879
  2014-09-22 13:33:31.837 2641 ERROR keystone.common.wsgi [-] 'ascii' codec can't encode character u'\u010d' in position 13: ordinal not in range(128)
  2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi Traceback (most recent call last):
  2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/common/wsgi.py", line 212, in __call__
  2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi     result = method(context, **params)
  2014-09-22 13:33:31.837 2641 TRACE keystone.common.wsgi   File

[Yahoo-eng-team] [Bug 1375129] [NEW] cloud-init: 0.7.2 on Amazon AMI Linux package_update/package_upgrade does not run yum upgrade

2014-09-29 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

I'm trying to do a yum upgrade in cloud-init as described here -
http://cloudinit.readthedocs.org/en/latest/topics/examples.html#run-apt-or-yum-upgrade

running either

package_upgrade: true

or package_update: true

does not seem to work.  In cloud-init logs, I do not see these executed
even though there are updates that are available to be installed.

For now, we are doing

runcmd:
 - yum -y upgrade

However, I would like to know whether this is a bug in package_upgrade.

Anand
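
For what it's worth, the user-data involved is cloud-config YAML, and runcmd
is documented as a list of commands; a quick syntax sanity check with PyYAML
(the user-data string below is illustrative, not the reporter's actual data):

  import textwrap
  import yaml

  user_data = textwrap.dedent("""\
      #cloud-config
      package_update: true
      package_upgrade: true
      runcmd:
       - [yum, -y, upgrade]
      """)

  cfg = yaml.safe_load(user_data)
  assert cfg['package_upgrade'] is True
  assert isinstance(cfg['runcmd'], list)   # runcmd entries must form a YAML list
  print(cfg)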

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
cloud-init: 0.7.2 on Amazon AMI Linux package_update/package_upgrade does not 
run yum upgrade
https://bugs.launchpad.net/bugs/1375129
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to cloud-init.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375129] Re: cloud-init: 0.7.2 on Amazon AMI Linux package_update/package_upgrade does not run yum upgrade

2014-09-29 Thread Robie Basak
Thank you for taking the time to report this bug.

I presume this is for cloud-init for an RPM-based distro, so redirecting
to the upstream project as this presumably has nothing to do with
Ubuntu.

** Package changed: cloud-init (Ubuntu) => cloud-init

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1375129

Title:
  cloud-init: 0.7.2 on Amazon AMI Linux package_update/package_upgrade
  does not run yum upgrade

Status in Init scripts for use on cloud images:
  New

Bug description:
  I'm trying to do a yum upgrade in cloud-init as described here -
  http://cloudinit.readthedocs.org/en/latest/topics/examples.html#run-apt-or-yum-upgrade

  running either

  package_upgrade: true

  or package_update: true

  does not seem to work.  In cloud-init logs, I do not see these
  executed even though there are updates that are available to be
  installed.

  For now, we are doing

  runcmd:
   - yum -y upgrade

  However, I would like to know whether this is a bug in package_upgrade.

  Anand

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1375129/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375206] [NEW] Sort lists by name and not by UUID

2014-09-29 Thread Christian Berendt
Public bug reported:

All lists (for example, the drop-down at the top for choosing the current
tenant, or the list of available networks when launching a new instance)
should be sorted by name and not by UUID. It is very confusing to have the
lists in a non-alphabetical order because of the UUIDs, and I think it is
more comfortable to sort the lists by name.
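
For illustration, a name-first ordering is cheap to apply once the API
results are in hand; the data below is made up, and the fallback to the UUID
just keeps unnamed items in a stable order:

  networks = [
      {'id': 'c1c8a3bb-9f41-4f6e-9a01-aa2f9d1cfe01', 'name': 'prod-net'},
      {'id': '0a2f44d1-6c2b-49a8-b0f2-8f0e2b6f9b02', 'name': 'Dev-net'},
      {'id': '7e91b0aa-3d57-4b26-8c33-1f2d3c4b5a03', 'name': ''},
  ]

  # Case-insensitive sort on name, with the UUID as a tie-breaker/fallback.
  networks.sort(key=lambda n: ((n.get('name') or '').lower(), n['id']))
  print([n['name'] or n['id'] for n in networks])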

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1375206

Title:
  Sort lists by name and not by UUID

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  All lists (for example, the drop-down at the top for choosing the current
  tenant, or the list of available networks when launching a new instance)
  should be sorted by name and not by UUID. It is very confusing to have the
  lists in a non-alphabetical order because of the UUIDs, and I think it is
  more comfortable to sort the lists by name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1375206/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357372] Re: Race condition in VNC port allocation when spawning an instance on VMware

2014-09-29 Thread Jeremy Stanley
Could this behavior be controlled by a would-be attacker, or is it only
up to random chance? If the former, then like bug 1058077/bug 1125378 the
VMT would likely deem it a security vulnerability. If the latter, then like
bug 1255609 we most probably would not.

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357372

Title:
  Race condition in VNC port allocation when spawning an instance on
  VMware

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Security Advisories:
  Incomplete

Bug description:
  When spawning some instances, the nova VMware driver can hit a race
  condition in VNC port allocation. Although the get_vnc_port function
  has a lock, it does not guarantee that the whole VNC port allocation
  process is locked, so another instance can receive the same port if it
  requests a VNC port before nova has finished allocating the port to
  the first VM.

  If instances with the same VNC port are allocated on the same host,
  this can lead to improper access to an instance console.

  Reproduce the problem: launch two or more instances at the same time.
  In some cases one instance executes get_vnc_port and picks a port, but
  before that instance has finished _set_vnc_config another instance
  executes get_vnc_port and picks the same port.

  How often this occurs: unpredictable.
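
  The direction of a fix is to keep the port choice and the reservation
  inside one critical section. A minimal sketch with hypothetical names (the
  real driver would use nova's lock utilities and a persistent reservation,
  not a module-level set):

    import threading

    _vnc_lock = threading.Lock()
    _reserved_ports = set()

    def allocate_vnc_port(candidate_ports):
        # Pick *and* reserve the port while still holding the lock, so a
        # concurrently spawning instance cannot grab the same free port.
        with _vnc_lock:
            for port in candidate_ports:
                if port not in _reserved_ports:
                    _reserved_ports.add(port)
                    return port
        raise RuntimeError('no free VNC port in the configured range')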

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357372/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371620] Re: Setting up database schema with db_sync fails in migration 039 (SQLITE)

2014-09-29 Thread James Page
** Also affects: keystone (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: keystone (Ubuntu)
   Importance: Undecided => High

** Changed in: keystone (Ubuntu)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1371620

Title:
  Setting up database schema with db_sync fails in migration 039
  (SQLITE)

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in “keystone” package in Ubuntu:
  Triaged

Bug description:
  A fresh clone of master (commit
  ee4ee3b7f570d448f9053547febd86591e600697) won't set up the database.

  On a fresh install of ubuntu 12.04.3 (yes, I know, but it's what I had
  kicking about) in a VM (Vmware), with no special config except for web
  proxies (excluding localhost) I followed

  http://docs.openstack.org/developer/keystone/setup.html

  and, once able to import keystone

  http://docs.openstack.org/developer/keystone/developing.html

  up to the point of running

  bin/keystone-manage db_sync

  this last command results in a stack trace as follows:

  (.venv)david@ubuntu:~/keystone$ bin/keystone-manage db_sync
  2014-09-19 06:54:16.321 12991 CRITICAL keystone [-] OperationalError: (OperationalError) database is locked u'DELETE FROM user_project_metadata' ()
  2014-09-19 06:54:16.321 12991 TRACE keystone Traceback (most recent call last):
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "bin/keystone-manage", line 44, in <module>
  2014-09-19 06:54:16.321 12991 TRACE keystone     cli.main(argv=sys.argv, config_files=config_files)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "/home/david/keystone/keystone/cli.py", line 307, in main
  2014-09-19 06:54:16.321 12991 TRACE keystone     CONF.command.cmd_class.main()
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "/home/david/keystone/keystone/cli.py", line 74, in main
  2014-09-19 06:54:16.321 12991 TRACE keystone     migration_helpers.sync_database_to_version(extension, version)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "/home/david/keystone/keystone/common/sql/migration_helpers.py", line 204, in sync_database_to_version
  2014-09-19 06:54:16.321 12991 TRACE keystone     _sync_common_repo(version)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "/home/david/keystone/keystone/common/sql/migration_helpers.py", line 160, in _sync_common_repo
  2014-09-19 06:54:16.321 12991 TRACE keystone     init_version=init_version)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "/home/david/keystone/.venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/migration.py", line 79, in db_sync
  2014-09-19 06:54:16.321 12991 TRACE keystone     return versioning_api.upgrade(engine, repository, version)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "/home/david/keystone/.venv/local/lib/python2.7/site-packages/migrate/versioning/api.py", line 186, in upgrade
  2014-09-19 06:54:16.321 12991 TRACE keystone     return _migrate(url, repository, version, upgrade=True, err=err, **opts)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "<string>", line 2, in _migrate
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "/home/david/keystone/.venv/local/lib/python2.7/site-packages/migrate/versioning/util/__init__.py", line 160, in with_engine
  2014-09-19 06:54:16.321 12991 TRACE keystone     return f(*a, **kw)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "/home/david/keystone/.venv/local/lib/python2.7/site-packages/migrate/versioning/api.py", line 366, in _migrate
  2014-09-19 06:54:16.321 12991 TRACE keystone     schema.runchange(ver, change, changeset.step)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "/home/david/keystone/.venv/local/lib/python2.7/site-packages/migrate/versioning/schema.py", line 93, in runchange
  2014-09-19 06:54:16.321 12991 TRACE keystone     change.run(self.engine, step)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "/home/david/keystone/.venv/local/lib/python2.7/site-packages/migrate/versioning/script/py.py", line 148, in run
  2014-09-19 06:54:16.321 12991 TRACE keystone     script_func(engine)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "/home/david/keystone/keystone/common/sql/migrate_repo/versions/039_grant_to_assignment.py", line 223, in upgrade
  2014-09-19 06:54:16.321 12991 TRACE keystone     migrate_grant_table(meta, migrate_engine, session, table_name)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "/home/david/keystone/keystone/common/sql/migrate_repo/versions/039_grant_to_assignment.py", line 85, in migrate_grant_table
  2014-09-19 06:54:16.321 12991 TRACE keystone     migrate_engine.execute(upgrade_table.delete())
  2014-09-19 06:54:16.321 12991 TRACE keystone   File "/home/david/keystone/.venv/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1752, in execute
  2014-09-19 06:54:16.321 12991 TRACE keystone     return
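
  Independent of the keystone migration code, the error class here, an open
  write transaction on the same SQLite file blocking a second connection, can
  be reproduced with plain sqlite3; a minimal sketch (the table name is only
  borrowed from the statement above):

    import sqlite3

    conn1 = sqlite3.connect('lock_demo.db', timeout=0, isolation_level=None)
    conn2 = sqlite3.connect('lock_demo.db', timeout=0, isolation_level=None)

    conn1.execute('CREATE TABLE IF NOT EXISTS user_project_metadata (id INTEGER)')
    conn1.execute('BEGIN IMMEDIATE')          # hold the write lock
    try:
        conn2.execute('DELETE FROM user_project_metadata')
    except sqlite3.OperationalError as exc:
        print(exc)                            # database is locked
    finally:
        conn1.execute('ROLLBACK')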

[Yahoo-eng-team] [Bug 1367354] Re: oslo.db's master breaks unittest in OS projects

2014-09-29 Thread James Page
** Also affects: keystone (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: keystone (Ubuntu)
   Importance: Undecided => Medium

** Changed in: keystone (Ubuntu)
   Status: New => Triaged

** Changed in: keystone (Ubuntu)
 Assignee: (unassigned) => James Page (james-page)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367354

Title:
  oslo.db's master breaks unittest in OS projects

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo Database library:
  Fix Released
Status in “keystone” package in Ubuntu:
  Triaged

Bug description:
  When I run the unit tests in different OpenStack projects using the
  latest oslo.db, there are a lot of failures in Nova, Neutron and
  Keystone. A lot of the tests raise OperationalError: (OperationalError)
  cannot start a transaction within a transaction 'BEGIN' (). The right
  approach is to fix these projects, but we are at the end of the J release,
  so I'm not sure that we can merge these fixes fast.

  This issue was caused by commit [1], so the faster and simplest
  approach is to revert this commit and continue this work in K.

  [1]
  https://github.com/openstack/oslo.db/commit/78fd290a89545de31e5c13f3085df23368a8afaa
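
  For reference, the error class is easy to reproduce with the standard
  sqlite3 module alone, independent of oslo.db (a minimal sketch):

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.isolation_level = None        # manage transactions by hand
    cur = conn.cursor()

    cur.execute('BEGIN')
    try:
        cur.execute('BEGIN')           # a second BEGIN inside an open transaction
    except sqlite3.OperationalError as exc:
        print(exc)                     # cannot start a transaction within a transaction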

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1367354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373886] Re: create a simple way to add/remove policies to new role

2014-09-29 Thread Dolph Mathews
As Lance said, there's definitely work going on in this direction
(although there are several separate feature requests above!), but
it's not really within scope for Keystone, as the other services own
their own default policies (and thus, default role definitions). I
completely agree though, it'd be *great* to see a community-wide effort
to establish more granular default roles (just like those that you
suggested).

** Changed in: keystone
   Status: New => Opinion

** Changed in: keystone
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1373886

Title:
  create a simple way to add/remove policies to new role

Status in OpenStack Identity (Keystone):
  Opinion

Bug description:
  I wanted to create a unique user role and add some built-in policies to it.
  I can create a new role, but then discovered that instead of being able to
  add storage permissions or network permissions for a user (so, specific
  system functionality) I have to build my own policies.
  I opened a bug against Horizon, but I think that for them to implement such
  a change in the UX they need Keystone to do some work as well.
  What I am suggesting is that we build some default policies that would
  allow us to add a storage admin, a network admin, an instance admin and so
  on to a newly created role, without asking the user to edit
  /etc/keystone/policy.json manually.

  I think adding this functionality would not only improve Keystone and
  make it more agile and easy to use, but improve Horizon as well.

  *Before someone marks this as invalid I will add that I am not a coder,
  and based on the community decision to require a technical design for any
  blueprint opened, I cannot open a blueprint myself :) *

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1373886/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374398] Re: Non admin user can update router port

2014-09-29 Thread Mark McClain
If a tenant wants to change the characteristics of a router port, they
have to clean up the resulting mess. I don't see this as a bug.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374398

Title:
  Non admin user can update router port

Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  A non-admin user can update a router's port: http://paste.openstack.org/show/115575/.
  This can cause problems, as servers won't get information about this change
  until the next DHCP request, so connectivity to and from this network will
  be lost.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374398/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370803] Re: Neutron client fetches 'neutron user' token but tries to create port on 'demo' tenant network

2014-09-29 Thread wangrich
Yes, Neutron shows no bugs in this case. My fault.

Neutron behaves well and my bug comes in configuration.

Sorry for the noise and for wasting your valuable time.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370803

Title:
  Neutron client fetches 'neutron user' token but tries to create port
  on 'demo' tenant network

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I'm following the OpenStack Icehouse installation guide on Ubuntu 14.04.
  After Glance, Nova, Neutron and Keystone were deployed, I tried to
  boot a CirrOS instance. However, it failed.

  I checked nova-compute.log and found that before the Neutron client tried
  to create a port for the VM on the tenant network (username: demo,
  password: demopass, tenant: demo), it connected to the Keystone server for
  a token, with the credential of user 'neutron' (username: neutron,
  password: REDACTED) attached to the request. After the token was returned
  by Keystone, the Neutron client put that token in the request to the
  Neutron server to create the port. And finally the Neutron server returned
  'HTTP 401'.

  Is there a bug in the Neutron client misusing the credential, or did the
  manual mislead me in configuring Neutron?

  I don't know which manual page should be attached in this report.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370803/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375313] [NEW] Jenkins rights

2014-09-29 Thread svasheka
Public bug reported:

Please add me rights to build custom iso on jenkins 
(http://jenkins-product.srt.mirantis.net:8080/view/custom_iso/job/custom_master_iso/
   job)
I need it to get custom iso to reproduce bug.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375313

Title:
  Jenkins rights

Status in OpenStack Compute (Nova):
  New

Bug description:
  Please add me rights to build custom iso on jenkins 
(http://jenkins-product.srt.mirantis.net:8080/view/custom_iso/job/custom_master_iso/
   job)
  I need it to get custom iso to reproduce bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375313/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375313] Re: Jenkins rights

2014-09-29 Thread svasheka
** Project changed: nova => mos

** Description changed:

- Please add me rights to build custom iso on jenkins 
(http://jenkins-product.srt.mirantis.net:8080/view/custom_iso/job/custom_master_iso/
   job)
+ Please grant rights to build custom iso on jenkins 
(http://jenkins-product.srt.mirantis.net:8080/view/custom_iso/job/custom_master_iso/
   job)
  I need it to get custom iso to reproduce bug.

** Description changed:

  Please grant rights to build custom iso on jenkins 
(http://jenkins-product.srt.mirantis.net:8080/view/custom_iso/job/custom_master_iso/
   job)
- I need it to get custom iso to reproduce bug.
+ Required for bug verification.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375313

Title:
  Jenkins rights

Status in Mirantis OpenStack:
  New

Bug description:
  Please grant rights to build custom iso on jenkins 
(http://jenkins-product.srt.mirantis.net:8080/view/custom_iso/job/custom_master_iso/
   job)
  Required for bug verification.

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1375313/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375320] [NEW] VMware: VM does not have network connectivity when there are many port groups defined

2014-09-29 Thread Gary Kotton
Public bug reported:

If the port group is not returned in the first response from the VC, then it
will not match any of the networks. This happens when there are many port
groups defined (more than a few hundred).
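
A sketch of the fix direction only: accumulate every page of results before
matching, instead of looking at the first response alone. Both callables
below are hypothetical stand-ins for the VC property-collector calls, not
the nova driver API:

  def collect_port_groups(fetch_first_page, fetch_next_page):
      """Accumulate every page of port groups before matching networks.

      Both arguments are illustrative stand-ins: fetch_first_page() returns
      the first result page, fetch_next_page(token) returns the next page,
      and a page carries .objects plus an optional continuation .token.
      """
      port_groups = []
      page = fetch_first_page()
      while page is not None:
          port_groups.extend(page.objects)
          token = getattr(page, 'token', None)
          page = fetch_next_page(token) if token else None
      return port_groups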

** Affects: nova
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: In Progress


** Tags: vmware

** Tags added: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375320

Title:
  VMware: VM does not have network connectivity when there are many port
  groups defined

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  If the port group is not returned in the first response from the VC, then
  it will not match any of the networks. This happens when there are many
  port groups defined (more than a few hundred).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375320/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367354] Re: oslo.db's master breaks unittest in OS projects

2014-09-29 Thread Launchpad Bug Tracker
This bug was fixed in the package keystone - 1:2014.2~b3-0ubuntu2

---
keystone (1:2014.2~b3-0ubuntu2) utopic; urgency=medium

  * Start failing package builds on unit test failures again:
- d/rules: Fail build on any test failures.
- d/p/skip-pysaml2.patch: Skip federation tests for now as pysaml2 is
  not yet in Ubuntu main and federation is only a contrib module.
- d/p/bug-1371620.patch: Cherry pick fix from upstream VCS for db locking
  issues with sqlite during package install (LP: #1371620).
- d/p/bug-1367354.patch: Cherry pick fix from upstream VCS to ensure that
  downgrade tests on sqlite complete successfully (LP: #1367354).
- d/control: Add missing python-ldappool to BD's.
  * d/control: Align version requirements for pycadf and six with upstream.
  * d/p/series: Re-enable disabled add-version-info.patch.
 -- James Page james.p...@ubuntu.com   Mon, 29 Sep 2014 15:34:45 +0100

** Changed in: keystone (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367354

Title:
  oslo.db's master breaks unittest in OS projects

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo Database library:
  Fix Released
Status in “keystone” package in Ubuntu:
  Fix Released

Bug description:
  When I run the unit tests in different OpenStack projects using the
  latest oslo.db, there are a lot of failures in Nova, Neutron and
  Keystone. A lot of the tests raise OperationalError: (OperationalError)
  cannot start a transaction within a transaction 'BEGIN' (). The right
  approach is to fix these projects, but we are at the end of the J release,
  so I'm not sure that we can merge these fixes fast.

  This issue was caused by commit [1], so the faster and simplest
  approach is to revert this commit and continue this work in K.

  [1]
  https://github.com/openstack/oslo.db/commit/78fd290a89545de31e5c13f3085df23368a8afaa

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1367354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371620] Re: Setting up database schema with db_sync fails in migration 039 (SQLITE)

2014-09-29 Thread Launchpad Bug Tracker
This bug was fixed in the package keystone - 1:2014.2~b3-0ubuntu2

---
keystone (1:2014.2~b3-0ubuntu2) utopic; urgency=medium

  * Start failing package builds on unit test failures again:
- d/rules: Fail build on any test failures.
- d/p/skip-pysaml2.patch: Skip federation tests for now as pysaml2 is
  not yet in Ubuntu main and federation is only a contrib module.
- d/p/bug-1371620.patch: Cherry pick fix from upstream VCS for db locking
  issues with sqlite during package install (LP: #1371620).
- d/p/bug-1367354.patch: Cherry pick fix from upstream VCS to ensure that
  downgrade tests on sqlite complete successfully (LP: #1367354).
- d/control: Add missing python-ldappool to BD's.
  * d/control: Align version requirements for pycadf and six with upstream.
  * d/p/series: Re-enable disabled add-version-info.patch.
 -- James Page james.p...@ubuntu.com   Mon, 29 Sep 2014 15:34:45 +0100

** Changed in: keystone (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1371620

Title:
  Setting up database schema with db_sync fails in migration 039
  (SQLITE)

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in “keystone” package in Ubuntu:
  Fix Released

Bug description:
  A fresh clone of master (commit
  ee4ee3b7f570d448f9053547febd86591e600697) won't set up the database.

  On a fresh install of ubuntu 12.04.3 (yes, I know, but it's what I had
  kicking about) in a VM (Vmware), with no special config except for web
  proxies (excluding localhost) I followed

  http://docs.openstack.org/developer/keystone/setup.html

  and, once able to import keystone

  http://docs.openstack.org/developer/keystone/developing.html

  up to the point of running

  bin/keystone-manage db_sync

  this last command results in a stack trace as follows:

  (.venv)david@ubuntu:~/keystone$ bin/keystone-manage db_sync
  2014-09-19 06:54:16.321 12991 CRITICAL keystone [-] OperationalError: 
(OperationalError) database is locked u'DELETE FROM user_project_metadata' ()
  2014-09-19 06:54:16.321 12991 TRACE keystone Traceback (most recent call 
last):
  2014-09-19 06:54:16.321 12991 TRACE keystone   File bin/keystone-manage, 
line 44, in module
  2014-09-19 06:54:16.321 12991 TRACE keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File 
/home/david/keystone/keystone/cli.py, line 307, in main
  2014-09-19 06:54:16.321 12991 TRACE keystone CONF.command.cmd_class.main()
  2014-09-19 06:54:16.321 12991 TRACE keystone   File 
/home/david/keystone/keystone/cli.py, line 74, in main
  2014-09-19 06:54:16.321 12991 TRACE keystone 
migration_helpers.sync_database_to_version(extension, version)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File 
/home/david/keystone/keystone/common/sql/migration_helpers.py, line 204, in 
sync_database_to_version
  2014-09-19 06:54:16.321 12991 TRACE keystone _sync_common_repo(version)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File 
/home/david/keystone/keystone/common/sql/migration_helpers.py, line 160, in 
_sync_common_repo
  2014-09-19 06:54:16.321 12991 TRACE keystone init_version=init_version)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File 
/home/david/keystone/.venv/local/lib/python2.7/site-packages/oslo/db/sqlalchemy/migration.py,
 line 79, in db_sync
  2014-09-19 06:54:16.321 12991 TRACE keystone return 
versioning_api.upgrade(engine, repository, version)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File 
/home/david/keystone/.venv/local/lib/python2.7/site-packages/migrate/versioning/api.py,
 line 186, in upgrade
  2014-09-19 06:54:16.321 12991 TRACE keystone return _migrate(url, 
repository, version, upgrade=True, err=err, **opts)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File string, line 2, in 
_migrate
  2014-09-19 06:54:16.321 12991 TRACE keystone   File 
/home/david/keystone/.venv/local/lib/python2.7/site-packages/migrate/versioning/util/__init__.py,
 line 160, in with_engine
  2014-09-19 06:54:16.321 12991 TRACE keystone return f(*a, **kw)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File 
/home/david/keystone/.venv/local/lib/python2.7/site-packages/migrate/versioning/api.py,
 line 366, in _migrate
  2014-09-19 06:54:16.321 12991 TRACE keystone schema.runchange(ver, 
change, changeset.step)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File 
/home/david/keystone/.venv/local/lib/python2.7/site-packages/migrate/versioning/schema.py,
 line 93, in runchange
  2014-09-19 06:54:16.321 12991 TRACE keystone change.run(self.engine, step)
  2014-09-19 06:54:16.321 12991 TRACE keystone   File 
/home/david/keystone/.venv/local/lib/python2.7/site-packages/migrate/versioning/script/py.py,
 line 148, in run
  2014-09-19 06:54:16.321 12991 TRACE keystone 

[Yahoo-eng-team] [Bug 1357379] Re: [OSSA 2014-031] policy admin_only rules not enforced when changing value to default (CVE-2014-6414)

2014-09-29 Thread Thierry Carrez
** Changed in: ossa
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357379

Title:
  [OSSA 2014-031] policy admin_only rules not enforced when changing
  value to default (CVE-2014-6414)

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Invalid
Status in neutron icehouse series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  If a non-admin user tries to update an attribute, which should be
  updated only by admin, from a non-default value to default,  the
  update is successfully performed and PolicyNotAuthorized exception is
  not raised.

  The reason is that when a rule to match for a given action is built
  there is a verification that each attribute in a body of the resource
  is present and has a non-default value. Thus, if we try to change some
  attribute's value to default, it is not considered to be explicitly
  set and a corresponding rule is not enforced.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375379] [NEW] console: wrong check when verifying the server response

2014-09-29 Thread sahid
Public bug reported:

When trying to connect to a console with internal_access_path, if the
server does not respond with 200 we should raise an exception, but the
current code does not ensure this.

https://github.com/openstack/nova/blob/master/nova/console/websocketproxy.py#L68

The method 'find' returns -1 on failure, not False or 0.
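
For illustration: str.find() returns -1 when the substring is absent, and -1
is truthy, so a bare truthiness check passes for error responses too. A small
sketch of the difference (the response string is made up):

  response = "HTTP/1.1 500 Internal Server Error\r\n\r\n"

  # Broken check: find() returns -1 here, and -1 is truthy, so this
  # "success" branch is taken even though the server did not answer 200.
  if response.find("200"):
      pass

  # Correct check: compare explicitly against -1 (or simply use `in`).
  if response.find("HTTP/1.1 200") == -1:
      print("invalid response from the console server: %r" % response)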

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: console

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375379

Title:
  console: wrong check when verifying the server response

Status in OpenStack Compute (Nova):
  New

Bug description:
  When trying to connect to a console with internal_access_path, if the
  server does not respond with 200 we should raise an exception, but the
  current code does not ensure this.

  https://github.com/openstack/nova/blob/master/nova/console/websocketproxy.py#L68

  The method 'find' returns -1 on failure, not False or 0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375401] [NEW] jshint error: == null in horizon.modals.js

2014-09-29 Thread Doug Fish
Public bug reported:

gate-horizon-jshint fails with

horizon/static/horizon/js/horizon.modals.js: line 236, col 29, Use '==='
to compare with 'null'.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1375401

Title:
  jshint error: == null in horizon.modals.js

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  gate-horizon-jshint fails with

  horizon/static/horizon/js/horizon.modals.js: line 236, col 29, Use
  '===' to compare with 'null'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1375401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375402] [NEW] jshint error: == null in horizon.modals.js

2014-09-29 Thread Doug Fish
*** This bug is a duplicate of bug 1375401 ***
https://bugs.launchpad.net/bugs/1375401

Public bug reported:

gate-horizon-jshint fails with

horizon/static/horizon/js/horizon.modals.js: line 236, col 29, Use '==='
to compare with 'null'.

** Affects: horizon
 Importance: Undecided
 Status: New

** This bug has been marked a duplicate of bug 1375401
   jshint error: == null in horizon.modals.js

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1375402

Title:
  jshint error: == null in horizon.modals.js

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  gate-horizon-jshint fails with

  horizon/static/horizon/js/horizon.modals.js: line 236, col 29, Use
  '===' to compare with 'null'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1375402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375408] [NEW] nova instance delete issue

2014-09-29 Thread Harshil
Public bug reported:

I am seeing a behavior where deleting a nova instance returns the status as
Error and the instance fails to delete.
The scenario where I see this behavior is as follows:
1. Create a nova VM instance.
2. Create a cinder volume.
3. Attach this volume to the nova VM instance, then wait for the volume to
   go to the ‘In-use’ state.
4. Mount the created partition in the VM.
5. Unmount the created partition.
6. Detach the volume from the VM instance, then wait for the volume to go to
   the ‘Available’ state.
7. Delete the volume.
8. Delete the nova VM instance: it fails at this step, because the status of
   the server instance is set to Error.

Based on the logs I also see a RabbitMQ error, which might explain the problem:
3908 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit Traceback (most recent call last):
3909 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 648, in ensure
3910 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     return method(*args, **kwargs)
3911 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 753, in _publish
3912 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     publisher = cls(self.conf, self.channel, topic, **kwargs)
3913 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 420, in __init__
3914 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     super(NotifyPublisher, self).__init__(conf, channel, topic, **kwargs)
3915 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 396, in __init__
3916 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     **options)
3917 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 339, in __init__
3918 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     self.reconnect(channel)
2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 423, in reconnect
3920 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     super(NotifyPublisher, self).reconnect(channel)
3921 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/impl_rabbit.py", line 347, in reconnect
3922 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     routing_key=self.routing_key)
3923 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 84, in __init__
3924 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     self.revive(self._channel)
3925 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 218, in revive
3926 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     self.declare()
3927 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/messaging.py", line 104, in declare
3928 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     self.exchange.declare()
3929 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/kombu/entity.py", line 166, in declare
3930 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     nowait=nowait, passive=passive,
3931 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/channel.py", line 613, in exchange_declare
3932 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     self._send_method((40, 10), args)
3933 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/abstract_channel.py", line 56, in _send_method
3934 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     self.channel_id, method_sig, args, content,
3935 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit   File "/usr/lib/python2.7/dist-packages/amqp/method_framing.py", line 221, in write_method
3936 2014-09-29 16:14:57.080 29278 TRACE oslo.messaging._drivers.impl_rabbit     write_frame(1, channel, payload)
3937 2014-09-29

[Yahoo-eng-team] [Bug 1375421] [NEW] Empty Filter in Container->objects page creates empty row.

2014-09-29 Thread Amogh
Public bug reported:

1. Login to DevStack as admin
2. Navigate to Containers and create the container Test_C1
3. Create the Pseudo folder Test_Folder inside the container Test_C1
4. Upload the object inside the folder Test_C1
5. Do not enter any text in filter box and click on filter button.
6. Observe that empty row is being created in the objects page.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: Container-objects.PNG
   
https://bugs.launchpad.net/bugs/1375421/+attachment/4219486/+files/Container-objects.PNG

** Description changed:

- 1. Login to Rack 106 RC Build# 5
+ 1. Login to DevStack as admin
  2. Navigate to Containers and create the container Test_C1
  3. Create the Pseudo folder Test_Folder inside the container Test_C1
  4. Upload the object inside the folder Test_C1
  5. Do not enter any text in filter box and click on filter button.
  6. Observe that empty row is being created in the objects page.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1375421

Title:
  Empty Filter in Container->objects page creates empty row.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  1. Login to DevStack as admin
  2. Navigate to Containers and create the container Test_C1
  3. Create the Pseudo folder Test_Folder inside the container Test_C1
  4. Upload the object inside the folder Test_C1
  5. Do not enter any text in filter box and click on filter button.
  6. Observe that empty row is being created in the objects page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1375421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375432] [NEW] Duplicate entry in gitignore

2014-09-29 Thread Matthew Treinish
Public bug reported:

The .gitignore file for nova contains the line for the sample config
file, etc/nova/nova.conf.sample, twice.

** Affects: nova
 Importance: Low
 Assignee: Matthew Treinish (treinish)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375432

Title:
  Duplicate entry in gitignore

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The .gitignore file for nova contains the line for the sample config
  file, etc/nova/nova.conf.sample, twice.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1375432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350466] Re: deadlock in scheduler expire reservation periodic task

2014-09-29 Thread Adam Gandelman
** Also affects: cinder/icehouse
   Importance: Undecided
   Status: New

** Changed in: cinder/icehouse
   Importance: Undecided => High

** Changed in: cinder/icehouse
   Status: New => Fix Committed

** Changed in: cinder/icehouse
 Milestone: None => 2014.1.3

** Changed in: cinder/icehouse
 Assignee: (unassigned) => Vish Ishaya (vishvananda)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350466

Title:
  deadlock in scheduler expire reservation periodic task

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  http://logs.openstack.org/54/105554/4/check/gate-tempest-dsvm-neutron-large-ops/45501af/logs/screen-n-sch.txt.gz?level=TRACE#_2014-07-30_16_26_20_158

  2014-07-30 16:26:20.158 17209 ERROR nova.openstack.common.periodic_task [-] Error during SchedulerManager._expire_reservations: (OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') 'UPDATE reservations SET updated_at=updated_at, deleted_at=%s, deleted=id WHERE reservations.deleted = %s AND reservations.expire < %s' (datetime.datetime(2014, 7, 30, 16, 26, 20, 152098), 0, datetime.datetime(2014, 7, 30, 16, 26, 20, 149665))
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task Traceback (most recent call last):
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/openstack/common/periodic_task.py", line 198, in run_periodic_tasks
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     task(self, context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/scheduler/manager.py", line 157, in _expire_reservations
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     QUOTAS.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/quota.py", line 1401, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     self._driver.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/quota.py", line 651, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     db.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/db/api.py", line 1173, in reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     return IMPL.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 149, in wrapper
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     return f(*args, **kwargs)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 3394, in reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     reservation_query.soft_delete(synchronize_session=False)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py", line 694, in soft_delete
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     synchronize_session=synchronize_session)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2690, in update
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     update_op.exec_()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 816, in exec_
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     self._do_exec()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 913, in _do_exec
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     update_stmt, params=self.query._params)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py", line 444, in _wrap
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task     _raise_if_deadlock_error(e, self.bind.dialect.name)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   File "/opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py", line 427, in _raise_if_deadlock_error
  2014-07-30 16:26:20.158 17209 TRACE

[Yahoo-eng-team] [Bug 1348720] Re: Missing index for expire_reservations

2014-09-29 Thread Adam Gandelman
** Also affects: cinder/icehouse
   Importance: Undecided
   Status: New

** Changed in: cinder/icehouse
   Importance: Undecided => High

** Changed in: cinder/icehouse
   Status: New => Fix Committed

** Changed in: cinder/icehouse
 Milestone: None => 2014.1.3

** Changed in: cinder/icehouse
 Assignee: (unassigned) => Vish Ishaya (vishvananda)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348720

Title:
  Missing index for expire_reservations

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  While investigating some database performance problems, we discovered
  that there is no index on deleted for the reservations table. When
  this table gets large, the expire_reservations code will do a full
  table scan and take multiple seconds to complete. Because the expire
  runs on a periodic, it can slow down the master database significantly
  and cause nova or cinder to become extremely slow.

   EXPLAIN UPDATE reservations SET updated_at=updated_at, deleted_at='2014-07-24 22:26:17', deleted=id WHERE reservations.deleted = 0 AND reservations.expire < '2014-07-24 22:26:11';
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  | id | select_type | table        | type  | possible_keys | key     | key_len | ref  | rows   | Extra                        |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  |  1 | SIMPLE      | reservations | index | NULL          | PRIMARY | 4       | NULL | 950366 | Using where; Using temporary |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+

  An index on (deleted, expire) would be the most efficient.
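
  For reference, a sqlalchemy-migrate style migration adding such an index
  would look roughly like this (the index name and the exact migration file
  are illustrative, not the patch that merged):

    from sqlalchemy import Index, MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        reservations = Table('reservations', meta, autoload=True)
        Index('reservations_deleted_expire_idx',
              reservations.c.deleted,
              reservations.c.expire).create(migrate_engine)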

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1348720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341954] Re: suds client subject to cache poisoning by local attacker

2014-09-29 Thread Adam Gandelman
** Also affects: cinder/icehouse
   Importance: Undecided
   Status: New

** Changed in: cinder/icehouse
   Status: New => Fix Committed

** Changed in: cinder/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1341954

Title:
  suds client subject to cache poisoning by local attacker

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in Gantt:
  New
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Oslo VMware library for OpenStack projects:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  
  The suds project appears to be largely unmaintained upstream. The default 
cache implementation stores pickled objects to a predictable path in /tmp. This 
can be used by a local attacker to redirect SOAP requests via symlinks or run a 
privilege escalation / code execution attack via a pickle exploit. 

  cinder/requirements.txt:suds>=0.4
  gantt/requirements.txt:suds>=0.4
  nova/requirements.txt:suds>=0.4
  oslo.vmware/requirements.txt:suds>=0.4

  
  The details are available here - 
  https://bugzilla.redhat.com/show_bug.cgi?id=978696
  (CVE-2013-2217)

  Although this is an unlikely attack vector steps should be taken to
  prevent this behaviour. Potential ways to fix this are by explicitly
  setting the cache location to a directory created via
  tempfile.mkdtemp(), disabling cache client.set_options(cache=None), or
  using a custom cache implementation that doesn't load / store pickled
  objects from an insecure location.
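
  A minimal sketch of the two mitigations named above (the WSDL URL is a
  placeholder, and the suds API usage is illustrative rather than the fix
  that was merged):

    import tempfile

    from suds.cache import ObjectCache
    from suds.client import Client

    WSDL_URL = 'https://vcenter.example.org/sdk/vimService.wsdl'  # placeholder

    # Option 1: private, freshly created cache directory instead of /tmp/suds.
    client = Client(WSDL_URL, cache=ObjectCache(location=tempfile.mkdtemp()))

    # Option 2: disable the pickle-based cache entirely.
    client.set_options(cache=None)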

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1341954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357462] Re: glance cannot find store for scheme mware_datastore

2014-09-29 Thread Adam Gandelman
** Also affects: glance/icehouse
   Importance: Undecided
   Status: New

** Changed in: glance/icehouse
   Importance: Undecided => High

** Changed in: glance/icehouse
   Status: New => Fix Committed

** Changed in: glance/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1357462

Title:
  glance cannot find store for scheme mware_datastore

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Committed

Bug description:
   I have python-glance-2014.1.2-1.el7ost.noarch

  when configuring

  default_store=vmware_datastore
  known_stores = glance.store.vmware_datastore.Store
  vmware_server_host = 10.34.69.76
  vmware_server_username=root
  vmware_server_password=qum5net
  vmware_datacenter_path=New Datacenter
  vmware_datastore_name=shared

  glance-api doesn't seem to come up at all.
  glance image-list
  Error communicating with http://172.16.40.9:9292 [Errno 111] Connection 
refused

  there seems to be nothing interesting in the logs. After changing to
  the

default_store=file

glance image-create --disk-format vmdk --container-format bare
  --copy-from
  'http://str-02.rhev/OpenStack/cirros-0.3.1-x86_64-disk.vmdk'
  --name cirros-0.3.1-x86_64-disk.vmdk --is-public true --property
  vmware_disktype=sparse --property vmware_adaptertype=ide
  --property vmware_ostype=ubuntu64Guest --name prdel --store
  vmware_datastore

  or

glance image-create --disk-format vmdk --container-format bare
  --file 'cirros-0.3.1-x86_64-disk.vmdk' --name
  cirros-0.3.1-x86_64-disk.vmdk --is-public true --property
  vmware_disktype=sparse --property vmware_adaptertype=ide
  --property vmware_ostype=ubuntu64Guest --name prdel --store
  vmware_datastore

  the image remains in queued state

  I can see log lines
  2014-08-15 12:38:55.885 24732 DEBUG glance.store [-] Registering store class 
'glance.store.vmware_datastore.Store' with schemes ('vsphere',) create_stores 
/usr/lib/python2.7/site-packages/glance/store/__init__.py:208
  2014-08-15 12:39:54.119 24764 DEBUG glance.api.v1.images [-] Store for scheme 
vmware_datastore not found get_store_or_400 
/usr/lib/python2.7/site-packages/glance/api/v1/images.py:1057
  2014-08-15 12:43:31.408 24764 DEBUG glance.api.v1.images 
[eac2ff8d-d55a-4e2c-8006-95beef8a0d7b caffabe3f56e4e5cb5cbeb040224fe69 
77e18ad8a31e4de2ab26f52fb15b3cc1 - - -] Store for scheme vmware_datastore not 
found get_store_or_400 
/usr/lib/python2.7/site-packages/glance/api/v1/images.py:1057

  so it looks like there is inconsistency on the scheme that should be
  used. After hardcoding

STORE_SCHEME = 'vmware_datastore'

  in the

/usr/lib/python2.7/site-packages/glance/store/vmware_datastore.py

  the behaviour changed, but did not improve very much:

glance image-create --disk-format vmdk --container-format bare --file 
'cirros-0.3.1-x86_64-disk.vmdk' --name cirros-0.3.1-x86_64-disk.vmdk 
--is-public true --property vmware_disktype=sparse --property 
vmware_adaptertype=ide --property vmware_ostype=ubuntu64Guest --name 
prdel --store vmware_datastore
  400 Bad Request
  Store for image_id not found: 7edc22ae-f229-4f21-8f7d-fa19a03410be
  (HTTP 400)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1357462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1236868] Re: image status set to killed even if has been deleted

2014-09-29 Thread Adam Gandelman
** Also affects: glance/icehouse
   Importance: Undecided
   Status: New

** Changed in: glance/icehouse
   Importance: Undecided => Medium

** Changed in: glance/icehouse
   Status: New => Fix Committed

** Changed in: glance/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1236868

Title:
  image status set to killed even if has been deleted

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Committed

Bug description:
  This error occurs with the following sequence of steps:

  1. Upload data to an image e.g. cinder upload-to-image
  2. image status is set to 'saving' as data is uploaded
  3. delete image before upload is complete
  4. image status goes to 'deleted' and image is deleted from backend store
  5. fail the upload
  6. image status then goes to 'killed' when it should stay as 'deleted'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1236868/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372416] Re: Test failures due to removed ClientException from Ceilometer client

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Importance: Undecided => Critical

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1372416

Title:
  Test failures due to removed ClientException from Ceilometer client

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed

Bug description:
  The deprecated ClientException was removed from Ceilometer client: 
  
https://github.com/openstack/python-ceilometerclient/commit/09ad1ed7a3109a936f0e1bc9cbc904292607d70c

  However, we are still referencing it in Horizon: 
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/test_data/exceptions.py#L76

  It should be replaced with HTTPException.
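
  A hedged sketch of the substitution (module layout as in
  python-ceilometerclient; exact paths may differ between client versions):

    from ceilometerclient import exc as ceilometer_exc

    # Old test data (removed upstream): ceilometer_exc.ClientException
    # Replacement the client still provides:
    ceilometer_exception = ceilometer_exc.HTTPException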

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1372416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352919] Re: horizon/workflows/base.py contains add_error() which conflicts with Django 1.7 definition

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Importance: Undecided => Wishlist

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1352919

Title:
  horizon/workflows/base.py contains add_error() which conflicts with
  Django 1.7 definition

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed

Bug description:
  As per the subject, horizon/workflows/base.py contains a definition of
  add_error(). Unfortunately, this is now a function name used by Django
  1.7. This conflicts with it, and leads to unit test errors when
  running with Django 1.7 installed

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1352919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347840] Re: Primary Project should stay selected after user added to new project

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Importance: Undecided => Medium

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347840

Title:
  Primary Project should stay selected after user added to new project

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed

Bug description:
  Prereq: multi domain enabled

  == Scenario ==
  1. Have a domain with 2 projects, p1 and p2.
  2. Create userA and set userA's primary project to p1.
  3. Update project members of p2 and add userA as member.  Now, userA is part 
of both projects.
  4. Now go to edit password for userA.  You'll notice on the modal, that the 
Primary Project isn't set.  You have to *reselect* before you can save.  See 
attached image.

  == The Primary Project should have stayed as p1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347840/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2014-09-29 Thread Adam Gandelman
** Also affects: glance/icehouse
   Importance: Undecided
   Status: New

** Changed in: glance/icehouse
   Importance: Undecided => Medium

** Changed in: glance/icehouse
   Status: New => Fix Committed

** Changed in: glance/icehouse
Milestone: None => 2014.1.3

** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Key Management (Barbican):
  Confirmed
Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  Fix Released
Status in Designate:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) icehouse series:
  New
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Triaged
Status in Python client library for Neutron:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Openstack Database (Trove):
  Fix Committed
Status in Web Services Made Easy:
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.
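
  As a generic illustration (not taken from any one project) of the failure
  mode and a seed-independent rewrite:

    import json
    import unittest

    class DictOrderExample(unittest.TestCase):
        def test_fragile(self):
            body = {'name': 'net1', 'shared': False}
            # Breaks under a random PYTHONHASHSEED: key order is not stable.
            self.assertEqual('{"name": "net1", "shared": false}',
                             json.dumps(body))

        def test_robust(self):
            body = {'name': 'net1', 'shared': False}
            # Compare parsed structures, or force a deterministic key order.
            self.assertEqual({'name': 'net1', 'shared': False},
                             json.loads(json.dumps(body)))
            self.assertEqual('{"name": "net1", "shared": false}',
                             json.dumps(body, sort_keys=True))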

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317016] Re: User are not allowed to delete object which the user created under Container

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1317016

Title:
  User are not allowed to  delete object which the user created under
  Container

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed

Bug description:
  Testing step:
  1: create a pseudo-folder object pf1
  2: delete pf1

  Testing result:

  Error: You are not allowed to delete object: pf1

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1317016/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1314145] Re: In Containers page, long container/object name can break the page.

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Importance: Undecided => Medium

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1314145

Title:
  In Containers page, long container/object name can break the page.

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed

Bug description:
  In the containers page, if the name of a container is too long, the
  objects table is no longer visible and the table is out of the screen
  (see screenshot).

  Test with this container name :
  
TESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTESTTES

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1314145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288859] Re: Load balancer can't choose proper port in multi-network configuration

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Importance: Undecided => Medium

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1288859

Title:
  Load balancer can't choose proper port in multi-network configuration

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  If LBaaS functionality is enabled and instances have more than one network
  interface, horizon incorrectly chooses member ports to add to the LB
  pool.

  Steps to reproduce:

  0. nova, neutron with configured LBaaS functions, horizon.
  1. Create 1st network (e.g. net1)
  2. Create 2nd network (e.g. net2)
  3. Create few (e.g. 6) instances with networks attached to both networks.
  4. Create LB pool
  5. Go to member page and click 'add members'
  6. Select all instances from step 3, click add

  Expected result:
  all selected interfaces will be in same network.

  Actual result:
  Some interfaces are selected from net1, some from net2. 

  And there is no way to plug an instance into the LB pool with the proper
  interface via horizon, because the add-member dialog does not allow
  choosing the instance's port.

  Checked on havana and icehouse-2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1288859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295128] Re: Error getting keystone related informations when running keystone in httpd

2014-09-29 Thread Adam Gandelman
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Status: New => Fix Committed

** Changed in: horizon/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1295128

Title:
  Error getting keystone related informations when running keystone in
  httpd

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Committed

Bug description:
  1. Need to deploy keystone on apache: 
http://docs.openstack.org/developer/keystone/apache-httpd.html
  2. Update keystone endpoints to, http://192.168.94.129/keystone/main/v2.0 and 
http://192.168.94.129/keystone/main/v2.0 
  3. Edit openstack_dashboard/local/local_settings.py, update 
OPENSTACK_KEYSTONE_URL = "http://%s/keystone/main/v2.0" % OPENSTACK_HOST
  4. Visit dashboard, 
   * Error on dashboard: `Error: Unable to retrieve project list.`
   * Error in log:
  Not Found: Not Found (HTTP 404)
  Traceback (most recent call last):
File 
/opt/stack/horizon/openstack_dashboard/dashboards/admin/overview/views.py, 
line 63, in get_data
  projects, has_more = api.keystone.tenant_list(self.request)
File /opt/stack/horizon/openstack_dashboard/api/keystone.py, line 266, in 
tenant_list
  tenants = manager.list(limit, marker)
File /opt/stack/python-keystoneclient/keystoneclient/v2_0/tenants.py, 
line 118, in list
  tenant_list = self._list(/tenants%s % query, tenants)
File /opt/stack/python-keystoneclient/keystoneclient/base.py, line 106, 
in _list
  resp, body = self.client.get(url)
File /opt/stack/python-keystoneclient/keystoneclient/httpclient.py, line 
578, in get
  return self._cs_request(url, 'GET', **kwargs)
File /opt/stack/python-keystoneclient/keystoneclient/httpclient.py, line 
575, in _cs_request
  **kwargs)
File /opt/stack/python-keystoneclient/keystoneclient/httpclient.py, line 
554, in request
  resp = super(HTTPClient, self).request(url, method, **kwargs)
File /opt/stack/python-keystoneclient/keystoneclient/baseclient.py, line 
21, in request
  return self.session.request(url, method, **kwargs)
File /opt/stack/python-keystoneclient/keystoneclient/session.py, line 
209, in request
  raise exceptions.from_response(resp, method, url)
  NotFound: Not Found (HTTP 404)

  
  But using the keystoneclient command line everything works fine..
  $ keystone  tenant-list
  +--++-+
  |id|name| enabled |
  +--++-+
  | 9542f4d212064b96addcfbca9fd530ee |   admin|   True  |
  | 5e317523a51745d1a65f4b166b85dd1b |demo|   True  |
  | 70058501677e4c2ea7cef31a7ddbd48d | invisible_to_admin |   True  |
  | 246ef23151354782aa75850cde8501e8 |  service   |   True  |
  +--++-+

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1295128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2014-09-29 Thread Adam Gandelman
** Also affects: cinder/icehouse
   Importance: Undecided
   Status: New

** Changed in: cinder/icehouse
 Assignee: (unassigned) => Clark Boylan (cboylan)

** Changed in: cinder/icehouse
   Importance: Undecided => Medium

** Changed in: cinder/icehouse
   Status: New => Fix Committed

** Changed in: cinder/icehouse
Milestone: None => 2014.1.3

** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

** Changed in: keystone/icehouse
   Importance: Undecided => Medium

** Changed in: keystone/icehouse
   Status: New => Fix Committed

** Changed in: keystone/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in OpenStack Key Management (Barbican):
  Confirmed
Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in Designate:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) icehouse series:
  New
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Identity (Keystone):
  In Progress
Status in Keystone icehouse series:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Triaged
Status in Python client library for Neutron:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Fix Released
Status in Openstack Database (Trove):
  Fix Committed
Status in Web Services Made Easy:
  New

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1313458] Re: v3 catalog not implemented for templated backend

2014-09-29 Thread Adam Gandelman
** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

** Changed in: keystone/icehouse
   Importance: Undecided => Wishlist

** Changed in: keystone/icehouse
   Status: New => Fix Committed

** Changed in: keystone/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1313458

Title:
  v3 catalog not implemented for templated backend

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Committed

Bug description:
  
  The templated backend didn't implement the method to get a v3 catalog. So you 
couldn't get a valid v3 token when the templated catalog backend was configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1313458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1306835] Re: V3 list users filter by email address throws exception

2014-09-29 Thread Adam Gandelman
** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

** Changed in: keystone/icehouse
   Importance: Undecided => Medium

** Changed in: keystone/icehouse
   Status: New => Fix Committed

** Changed in: keystone/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1306835

Title:
  V3 list users  filter by email address throws exception

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Committed
Status in OpenStack Manuals:
  In Progress

Bug description:
  V3 list_user filter by email throws an exception. There is no such
  attribute 'email' on the User model.

  keystone.common.wsgi): 2014-04-11 23:09:00,422 ERROR type object 'User' has 
no attribute 'email'
  Traceback (most recent call last):
File /usr/lib/python2.7/dist-packages/keystone/common/wsgi.py, line 206, 
in __call__
  result = method(context, **params)
File /usr/lib/python2.7/dist-packages/keystone/common/controller.py, line 
183, in wrapper
  return f(self, context, filters, **kwargs)
File /usr/lib/python2.7/dist-packages/keystone/identity/controllers.py, 
line 284, in list_users
  hints=hints)
File /usr/lib/python2.7/dist-packages/keystone/common/manager.py, line 
52, in wrapper
  return f(self, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/keystone/identity/core.py, line 
189, in wrapper
  return f(self, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/keystone/identity/core.py, line 
328, in list_users
  ref_list = driver.list_users(hints or driver_hints.Hints())
File /usr/lib/python2.7/dist-packages/keystone/common/sql/core.py, line 
227, in wrapper
  return f(self, hints, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/keystone/identity/backends/sql.py, 
line 132, in list_users
  user_refs = sql.filter_limit_query(User, query, hints)
File /usr/lib/python2.7/dist-packages/keystone/common/sql/core.py, line 
374, in filter_limit_query
  query = _filter(model, query, hints)
File /usr/lib/python2.7/dist-packages/keystone/common/sql/core.py, line 
326, in _filter
  filter_dict = exact_filter(model, filter_, filter_dict, hints)
File /usr/lib/python2.7/dist-packages/keystone/common/sql/core.py, line 
312, in exact_filter
  if isinstance(getattr(model, key).property.columns[0].type,
  AttributeError: type object 'User' has no attribute 'email'
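
  A simplified illustration of the kind of guard needed in the SQL filter
  helper (not keystone's actual fix): skip filter keys that are not columns
  on the model instead of letting getattr() raise.

    def safe_exact_filter(model, filter_, filter_dict):
        """Apply filter_ only if the model really has that attribute."""
        key = filter_['name']
        if hasattr(model, key):
            filter_dict[key] = filter_['value']
        return filter_dict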

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1306835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1209343] Re: LDAP connection code does not provide ldap.set_option(ldap.OPT_X_TLS_CACERTFILE) for ldaps protocol

2014-09-29 Thread Adam Gandelman
** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

** Changed in: keystone/icehouse
   Importance: Undecided => Wishlist

** Changed in: keystone/icehouse
   Status: New => Fix Committed

** Changed in: keystone/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1209343

Title:
  LDAP connection code does not provide
  ldap.set_option(ldap.OPT_X_TLS_CACERTFILE) for ldaps protocol

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Committed

Bug description:
  The HP Enterprise Directory LDAP servers require a ca certificate file
  for ldaps connections. Sample working Python code:

  import ldap

  ldap.set_option(ldap.OPT_X_TLS_CACERTFILE,
                  "d:/etc/ssl/certs/hpca2ssG2_ns.cer")
  ldap_client = ldap.initialize(host)
  ldap_client.protocol_version = ldap.VERSION3

  ldap_client.simple_bind_s(binduser, bindpw)

  filter = '(uid=mark.m*)'
  attrs = ['cn', 'mail', 'uid', 'hpStatus']

  r = ldap_client.search_s(base, scope, filter, attrs)

  for dn, entry in r:
      print 'dn=', repr(dn)

      for k in entry.keys():
          print '\t', k, '=', entry[k]

  The current H-2  keystone/common/ldap/core.py file only provides
  this ldap.set_option for TLS connections. I have attached a picture of
  a screen shot showing the change I had to make to file core.py to
  enable the ldap.set_option(ldap.OPT_X_TLS_CACERTFILE,
  tls_cacertfile) statement to also get executed for ldaps connections.
  Basically I pulled the set_option code out of the if tls_cacertfile:
  block.
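
  A minimal sketch of the described change (helper name and signature are
  illustrative; the python-ldap calls match the sample above):

    import ldap

    def connect(url, bind_dn, bind_pw, tls_cacertfile=None, use_tls=False):
        if tls_cacertfile:
            # Honour the CA file for ldaps:// URLs as well, not only StartTLS.
            ldap.set_option(ldap.OPT_X_TLS_CACERTFILE, tls_cacertfile)
        conn = ldap.initialize(url)
        conn.protocol_version = ldap.VERSION3
        if use_tls:
            conn.start_tls_s()
        conn.simple_bind_s(bind_dn, bind_pw)
        return conn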

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1209343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365961] Re: Dangerous iptables rule generated in case of protocol any and source-port/destination-port usage

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365961

Title:
  Dangerous iptables rule generated in case of protocol any and
  source-port/destination-port usage

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  Icehouse 2014.1.2, FWaaS using the iptables driver

  In order to allow DNS (TCP and UDP) requests, the following rule was defined:
  neutron firewall-rule-create --protocol any --destination-port 53 --action allow

  In the L3 agent namespace this has been translated into the following iptables rules:
  -A neutron-l3-agent-iv441c58eb2 -j ACCEPT
  -A neutron-l3-agent-ov441c58eb2 -j ACCEPT
  => there is no restriction on the destination port (53), unlike what we would expect!!!

  There are 2 solutions to handle this issue:

  1) Don't allow the user to create a rule specifying protocol any AND a
  source-port/destination-port.

  2) Generating the following rules (like some firewalls do):
  -A neutron-l3-agent-iv441c58eb2 -p tcp -m tcp --dport 53 -j ACCEPT
  -A neutron-l3-agent-iv441c58eb2 -p udp -m udp --dport 53 -j ACCEPT
  -A neutron-l3-agent-ov441c58eb2 -p tcp -m tcp --dport 53 -j ACCEPT
  -A neutron-l3-agent-ov441c58eb2 -p udp -m udp --dport 53 -j ACCEPT
  => TCP and UDP have been completed.

  The source code affected is located in
  neutron/services/firewall/drivers/linux/iptables_fwaas.py  (L268)

  def _port_arg(self, direction, protocol, port):
      if not (protocol in ['udp', 'tcp'] and port):
          return ''
      return '--%s %s' % (direction, port)

  => trunk code is affected too.
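
  A hedged sketch of option 2: expand a protocol-any rule with a port into
  explicit tcp and udp match fragments (the caller would then emit one
  iptables rule per fragment); this mirrors the snippet above but is not the
  exact upstream patch.

    def port_match_fragments(direction, protocol, port):
        """Like _port_arg, but 'any' plus a port expands to tcp and udp."""
        if not port:
            return ['']
        protocols = [protocol] if protocol in ('udp', 'tcp') else ['tcp', 'udp']
        return ['-p %s -m %s --%s %s' % (proto, proto, direction, port)
                for proto in protocols]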

  Note: This is not a real Neutron security vulnerability, but it is a
  real security vulnerability for applications living in the OpenStack
  cloud... That's why I tagged it as a security vulnerability.

  Regards,

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364696] Re: Big Switch: Request context is missing from backend requests

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364696

Title:
  Big Switch: Request context is missing from backend requests

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The request context that comes into Neutron is not included in the
  request to the backend. This makes it difficult to correlate events in
  the debug logs on the backend such as what incoming Neutron request
  resulted in particular REST calls to the backend and if admin
  privileges were used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368251] Re: migrate_to_ml2 accessing boolean as int fails on postgresql

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1368251

Title:
  migrate_to_ml2 accessing boolean as int fails on postgresql

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The 'allocated' variable used in migrate_to_ml2 is defined as a boolean
  type; postgresql enforces this type, while mysql just maps it to tinyint
  and accepts both numbers and bools.

  Thus the migrate_to_ml2 script breaks on postgresql
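
  A hedged illustration of the portability issue with SQLAlchemy (the table
  definition here is trimmed down): compare the Boolean column against a
  boolean expression, not against 0/1.

    import sqlalchemy as sa
    from sqlalchemy.sql import false

    metadata = sa.MetaData()
    vlan_allocations = sa.Table(
        'ml2_vlan_allocations', metadata,
        sa.Column('vlan_id', sa.Integer),
        sa.Column('allocated', sa.Boolean))

    # Breaks on postgresql (integer literal compared to a boolean column):
    broken = vlan_allocations.select().where(vlan_allocations.c.allocated == 0)

    # Portable across mysql and postgresql:
    portable = vlan_allocations.select().where(
        vlan_allocations.c.allocated == false())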

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1368251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358668] Re: Big Switch: keyerror on filtered get_ports call

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358668

Title:
  Big Switch: keyerror on filtered get_ports call

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  If get_ports is called in the Big Switch plugin without 'id' being one
  of the included fields, _extend_port_dict_binding will fail with the
  following error.

  Traceback (most recent call last):
File neutron/tests/unit/bigswitch/test_restproxy_plugin.py, line 87, in 
test_get_ports_no_id
  context.get_admin_context(), fields=['name'])
File neutron/plugins/bigswitch/plugin.py, line 715, in get_ports
  self._extend_port_dict_binding(context, port)
File neutron/plugins/bigswitch/plugin.py, line 361, in 
_extend_port_dict_binding
  hostid = porttracker_db.get_port_hostid(context, port['id'])
  KeyError: 'id'
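
  One possible shape of a fix, sketched here as a standalone helper (not
  necessarily the plugin's actual patch): always fetch 'id' internally even
  when the caller restricts the field list, then strip it before returning.

    def get_ports_with_id(plugin_get_ports, context, filters=None, fields=None):
        requested = list(fields) if fields else None
        strip_id = requested is not None and 'id' not in requested
        if strip_id:
            requested.append('id')
        ports = plugin_get_ports(context, filters, requested)
        if strip_id:
            for port in ports:
                port.pop('id', None)
        return ports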

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362480] Re: Datacenter moid should be a value not a tuple

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362480

Title:
  Datacenter moid should be a value not a tuple

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  In edge_appliance_driver.py, there is a comma added when setting the
  datacenter moid, so the result is the value datacenter moid is changed
  to the tuple type, that is wrong.

   if datacenter_moid:
       edge['datacenterMoid'] = datacenter_moid,   <=== Should remove the ','
   return edge

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361545] Re: dhcp agent shouldn't spawn metadata-proxy for non-isolated networks

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361545

Title:
  dhcp agent shouldn't spawn metadata-proxy for non-isolated networks

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The enable_isolated_metadata = True option tells DHCP agents that for each
  network under their care a neutron-ns-metadata-proxy process should be
  spawned, regardless of whether the network is isolated or not.
  This is fine for isolated networks (networks with no routers and no default
  gateways), but for networks which are connected to a router (for which the L3
  agent spawns a separate neutron-ns-metadata-proxy attached to the router's
  namespace), 2 different metadata proxies are spawned. For these networks, the
  static route that would let each instance know where to reach the metadata
  proxy is not pushed, and the proxy spawned by the DHCP agent is left unused.

  The DHCP agent should know if the network it handles is isolated or
  not, and for non-isolated networks, no neutron-ns-metadata-proxy
  processes should spawn.
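
  A rough sketch of the decision the DHCP agent could make; the notion of
  "isolated" used here (no router interface port on the network) is a
  simplification and the attribute names are illustrative.

    ROUTER_PORT_OWNERS = ('network:router_interface',)

    def should_spawn_metadata_proxy(network, enable_isolated_metadata):
        """Only isolated networks need a DHCP-agent-side metadata proxy."""
        if not enable_isolated_metadata:
            return False
        # A network wired to a router gets its proxy from the L3 agent.
        routed = any(port.device_owner in ROUTER_PORT_OWNERS
                     for port in network.ports)
        return not routed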

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361545/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357125] Re: Cisco N1kv plugin needs to send subtype on network profile creation

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357125

Title:
  Cisco N1kv plugin needs to send subtype on network profile creation

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  The Cisco N1kv neutron plugin should also send the subtype for overlay
  networks when a network segment pool is created

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357105] Re: Big Switch: servermanager should retry on 503 instead of failing immediately

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357105

Title:
  Big Switch: servermanager should retry on 503 instead of failing
  immediately

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  When the backend controller returns a 503 service unavailable, the big
  switch server manager immediately counts the server request as failed.
  Instead it should retry a few times because a 503 occurs when there
  are locks in place for synchronization during upgrade, etc.
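
  A hedged sketch of that retry behaviour (names and the bare httplib usage
  are illustrative, not the plugin's actual servermanager code):

    import httplib
    import time

    def rest_call_with_retry(conn, method, url, body=None, retries=3, delay=1.0):
        """Retry a backend REST call while it answers 503 Service Unavailable."""
        for attempt in range(retries + 1):
            conn.request(method, url, body)
            response = conn.getresponse()
            if response.status != httplib.SERVICE_UNAVAILABLE:
                return response
            response.read()  # drain the body so the connection can be reused
            if attempt < retries:
                time.sleep(delay)
        return response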

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360658] Re: Managing functional job hooks in the infra config repo is error prone

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360658

Title:
  Managing functional job hooks in the infra config repo is error prone

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  The hook scripts that support Neutron's functional gate/check job are
  currently defined in openstack-infra/config (https://github.com
  /openstack-
  
infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config
  /neutron-functional.yaml).  They are proving difficult to maintain
  there due to the inability to verify the scripts' functionality before
  merge.  This combined with an overloaded infra core team suggests
  defining the hook scripts in the neutron tree where the job config can
  call them (this strategy is already employed by other projects like
  solum and tripleo).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1360658/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350326] Re: Migration 1fcfc149aca4_agents_unique_by_type_and_host is not applied to ml2 plugin

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1350326

Title:
  Migration 1fcfc149aca4_agents_unique_by_type_and_host is not applied
  to ml2 plugin

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  While it's no longer an issue on master, since migrations are now
  unconditional, it still makes sense to fix the migration and backport
  it to Icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1350326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352893] Re: ipv6 cannot be disabled for ovs agent

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352893

Title:
  ipv6 cannot be disabled for ovs agent

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  If the ipv6 module is not loaded in the kernel, the ip6tables command
  doesn't work and the openvswitch-agent fails when processing ports:

  2014-08-05 15:20:57.089 3944 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Error while processing 
VIF ports
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1262, in rpc_loop
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py,
 line 1090, in process_network_ports
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
port_info.get('updated', set()))
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py, line 
247, in setup_port_filters
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.prepare_devices_filter(new_devices)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py, line 
164, in prepare_devices_filter
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.firewall.prepare_port_filter(device)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/usr/lib64/python2.7/contextlib.py, line 24, in __exit__
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.gen.next()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/firewall.py, line 108, in 
defer_apply
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.filter_defer_apply_off()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_firewall.py, 
line 370, in filter_defer_apply_off
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.iptables.defer_apply_off()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py, 
line 353, in defer_apply_off
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self._apply()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py, 
line 369, in _apply
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent return 
self._apply_synchronized()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py, 
line 400, in _apply_synchronized
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
root_helper=self.root_helper)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py, line 76, in 
execute
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent raise RuntimeError(m)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent RuntimeError:
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Command: ['sudo', 
'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip6tables-restore', '-c']
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Exit code: 2
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stdout: ''
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stderr: ip6tables-restore 
v1.4.21: ip6tables-restore: unable to initialize table 

[Yahoo-eng-team] [Bug 1348766] Re: Big Switch: hash shouldn't be updated on unsuccessful calls

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348766

Title:
  Big Switch: hash shouldn't be updated on unsuccessful calls

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  The configuration hash db is updated on every response from the
  backend including errors that contain an empty hash. This is causing
  the hash to be wiped out if a standby controller is contacted first,
  which opens a narrow time window where the backend could become out of
  sync. It should only update the hash on successful REST calls.
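
  A minimal sketch of that guard (the hash-store interface named here is
  illustrative, not the servermanager's real one):

    def maybe_store_hash(hash_store, status_code, hash_header):
        """Persist the backend's configuration hash only on success."""
        if 200 <= status_code < 300 and hash_header:
            hash_store.put_hash(hash_header)
        # On errors, or when a standby controller returns an empty hash,
        # keep the previously stored value so the backends stay in sync.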

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338880] Re: Any user can set a network as external

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338880

Title:
  Any user can set a network as external

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  Even though the default policy.json restricts the creation of external
  networks to admin_only, any user can update a network as external.

  I could verify this with the following test (PseudoPython):

  project: ProjectA
  user: ProjectMemberA has Member role on project ProjectA.

  with network(name="UpdateNetworkExternalRouter", tenant_id=ProjectA,
               router_external=False) as test_network:
      self.project_member_a_neutron_client.update_network(
          network=test_network, router_external=True)

  project_member_a_neutron_client encapsulates a python-neutronclient,
  and here is what the method does.

  def update_network(self, network, name=None, shared=None,
                     router_external=None):
      body = {
          'network': {
          }
      }
      if name is not None:
          body['network']['name'] = name
      if shared is not None:
          body['network']['shared'] = shared
      if router_external is not None:
          body['network']['router:external'] = router_external

      self.python_neutronclient.update_network(network=network.id,
                                               body=body)['network']

  
  The expected behaviour is that the operation should not be allowed, but a
  user without admin privileges is able to perform such a change.

  Trying to add an "update_network:router:external": "rule:admin_only"
  policy did not work and broke other operations a regular user should
  be able to do.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1336596] Re: Cisco N1k: Clear entries in n1kv specific tables on rollbacks

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1336596

Title:
  Cisco N1k: Clear entries in n1kv specific tables on rollbacks

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  During rollback operations, the resource is cleaned up from the neutron 
database but leaves a few stale entries in the n1kv specific tables.
  Vlan/VXLAN allocation tables are inconsistent during network rollbacks.
  VM-Network table is left inconsistent during port rollbacks.
  Explicitly clearing ProfileBinding table entry (during network profile 
rollbacks) is not required as delete_network_profile internally takes care of 
it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1336596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332713] Re: Cisco: Send network and subnet UUID during subnet create

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332713

Title:
  Cisco: Send network and subnet UUID during subnet create

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  n1kv client is not sending netSegmentName and id fields to the VSM
  (controller) in create_ip_pool

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330490] Re: can't create security group rule by ip protocol when using postgresql

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330490

Title:
  can't create security group rule by ip protocol when using postgresql

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  When I try to create a security group rule using an IP protocol number,
  it fails if the DB in use is PostgreSQL.

  I can reproduce the problem on Havana, Icehouse and master.
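
  For reference, a minimal reproduction of the kind described (the security
  group name and the numeric protocol value are just examples):

      neutron security-group-create test-sg
      neutron security-group-rule-create --direction ingress --protocol 112 test-sg

  With a protocol name such as tcp the rule is created fine; with a numeric
  protocol the request fails on a PostgreSQL-backed Neutron with the
  traceback below.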

  2014-06-16 08:41:07.009 15134 ERROR neutron.api.v2.resource 
[req-3d2d03a3-2d8a-4ad0-b41d-098aecd5ecb8 None] create failed
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py, line 87, in 
resource
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py, line 419, in create
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_rpc_base.py, line 
43, in create_security_group_rule
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource bulk_rule)[0]
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_db.py, line 266, 
in create_security_group_rule_bulk_native
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource 
self._check_for_duplicate_rules(context, r)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_db.py, line 394, 
in _check_for_duplicate_rules
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource db_rules = 
self.get_security_group_rules(context, filters)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_db.py, line 421, 
in get_security_group_rules
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource 
page_reverse=page_reverse)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/neutron/db/db_base_plugin_v2.py, line 197, 
in _get_collection
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource items = 
[dict_func(c, fields) for c in query]
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2353, in 
__iter__
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource return 
self._execute_and_instances(context)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2368, in 
_execute_and_instances
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 662, in 
execute
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource params)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 761, in 
_execute_clauseelement
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource compiled_sql, 
distilled_params
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 874, in 
_execute_context
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource context)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1024, in 
_handle_dbapi_exception
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource exc_info
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py, line 196, in 
raise_from_cause
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource 
reraise(type(exception), exception, tb=exc_tb)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 867, in 
_execute_context
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource context)
  2014-06-16 08:41:07.009 15134 TRACE 

[Yahoo-eng-team] [Bug 1328181] Re: NSX: remove_router_interface might fail because of NAT rule mismatch

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328181

Title:
  NSX: remove_router_interface might fail because of NAT rule mismatch

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The remove_router_interface for the VMware NSX plugin expects a precise 
number of SNAT rules for a subnet.
  If the actual number of NAT rules differs from the expected one, an exception 
is raised.

  The reasons for this might be:
  - earlier failure in remove_router_interface
  - NSX API client tampering with NSX objects
  - etc.

  In any case, the remove_router_interface operation should succeed
  removing every match for the NAT rule to delete from the NSX logical
  router.

  sample traceback: http://paste.openstack.org/show/83427/
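
  A tolerant removal along the lines described above might look like this
  sketch (the nsx_client helper names and the rule dictionary layout are
  assumptions for illustration, not the plugin's actual API):

      def remove_snat_rules_for_subnet(nsx_client, lrouter_id, subnet_cidr):
          # Delete every SNAT rule that matches the subnet instead of
          # asserting that exactly N rules exist.
          matches = [r for r in nsx_client.get_nat_rules(lrouter_id)
                     if r.get('match', {}).get('source_ip_addresses') == subnet_cidr]
          for rule in matches:
              nsx_client.delete_nat_rule(lrouter_id, rule['uuid'])
          # Zero matches is not an error: an earlier partial failure may
          # already have removed the rule.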

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1325184] Re: add unit tests for the ODL MechanismDriver

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1325184

Title:
  add unit tests for the ODL MechanismDriver

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  None of the operations (create, update or delete) are covered by unit
  tests. With such coverage, bug #1324450 about the delete operations
  would have been caught.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1325184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317094] Re: neutron requires list amqplib dependency

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1317094

Title:
  neutron requires list amqplib dependency

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  Neutron does not use amqplib directly (only via oslo.messaging or
  kombu). kombu already depends on either amqp or amqplib, so the extra
  dep is not necessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1317094/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316618] Re: add host to security group broken

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1316618

Title:
  add host to security group broken

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  I am running nova/neutron forked from trunk around 12/30/2013. Neutron
  is configured with openvswitch plugin and security group enabled.

  How to reproduce the issue: create a security group SG1; add a rule to
  allow ingress from SG1 group to port 5000; add host A, B, and C to SG1
  in order.

  It seems that A can talk to B and C over port 5000, B can talk to C,
  but C can talk to neither A nor B. I confirmed that the iptables
  rules are incorrect for A and B. It seems that when A was added
  to the group, nothing changed since no other group member existed yet. When
  B and C were added to the group, A's ingress iptables rules were never
  updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1316618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286412] Re: Add support for router and network scheduling in Cisco N1kv Plugin.

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1286412

Title:
  Add support for router and network scheduling in Cisco N1kv Plugin.

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  Added functionality to schedule routers and networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1286412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311758] Re: OpenDaylight ML2 Mechanism Driver does not handle authentication errors

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1311758

Title:
  OpenDaylight ML2 Mechanism Driver does not handle authentication
  errors

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  This behaviour was noticed when troubleshooting a misconfiguration.
  Authentication with ODL was failing and the exception was being ignored.

  In the sync_resources method of the ODL Mechanism Driver, HTTPError 
exceptions with a status code of 404 are handled but the exception is not 
re-raised if the status code is not 404. 
  It is preferable to re-raise this exception.

  In addition, it would be helpful if obtain_auth_cookies threw a more
  specific exception than HTTPError when authentication with the ODL
  controller fails.
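
  A sketch of the suggested handling (requests is the real library;
  OpendaylightAuthError and the sendjson call are assumed names used for
  illustration, not the actual driver code):

      import requests

      class OpendaylightAuthError(Exception):
          pass

      def sync_single_resource(client, collection_name, context):
          try:
              client.sendjson('post', collection_name, context)
          except requests.exceptions.HTTPError as e:
              if e.response is not None and e.response.status_code == 404:
                  # 404 is an expected condition and can be handled locally.
                  return
              # Anything else (e.g. 401 from bad credentials) must not be
              # silently swallowed.
              raise

      def obtain_auth_cookies(response):
          try:
              response.raise_for_status()
          except requests.exceptions.HTTPError as e:
              raise OpendaylightAuthError(
                  'Authentication with the ODL controller failed: %s' % e)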

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1311758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302611] Re: policy.init called too many time for each API request

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302611

Title:
  policy.init called too many time for each API request

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in neutron icehouse series:
  New

Bug description:
  policy.init() checks whether the rule cache is populated and valid,
  and if not reloads the policy cache from the policy.json file.

  As the current code runs init() each time a policy is checked or enforced, 
list operations will call init() several times (*)
  If policy.json is updated while a response is being generated, this will lead 
to a situation where some item are processed according to the old policies, and 
other according to the new ones, which would be wrong.

  Also, init() checks the last update time of the policy file, and
  repeating this check multiple times is wasteful.

  A simple solution would be to explicitly call policy.init from
  api.v2.base.Controller in order to ensure the method is called only
  once per API request.


  (*) a  GET /ports operation returning 1600 ports calls policy.init()
  9606 times
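
  A minimal sketch of the suggested approach, assuming a JSON-format policy
  file and illustrative names (this is not the Neutron implementation):

      import json
      import os

      class PolicyCache(object):
          def __init__(self, policy_file):
              self.policy_file = policy_file
              self._mtime = None
              self.rules = {}

          def init(self):
              # Reload only when the file actually changed on disk.
              mtime = os.path.getmtime(self.policy_file)
              if mtime != self._mtime:
                  with open(self.policy_file) as f:
                      self.rules = json.load(f)
                  self._mtime = mtime

  Calling cache.init() once at the start of each API request, and using
  cache.rules for every item in the response, avoids the repeated mtime
  checks and guarantees a policy.json update cannot produce a
  half-old/half-new result.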

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293184] Re: Can't clear shared flag of unused network

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293184

Title:
  Can't clear shared flag of unused network

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  A network marked as external can be used as a gateway for tenant routers, 
even though it's not necessarily marked as shared.
  If the 'shared' attribute is changed from True to False for such a network 
you get an error:
  Unable to reconfigure sharing settings for network sharetest. Multiple 
tenants are using it

  This is clearly not the intention of the 'shared' field, so if there
  are only service ports on the network there is no reason to block
  changing it from shared to not shared.
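
  A sketch of the relaxed check implied above (names are illustrative;
  service ports are identified here purely by the network: device_owner
  prefix):

      def other_tenants_using_network(ports, network_tenant_id):
          # Ignore ports that belong to network services (router gateways,
          # DHCP, floating IPs); only real ports of other tenants should
          # block clearing the 'shared' flag.
          return any(p['tenant_id'] != network_tenant_id and
                     not p['device_owner'].startswith('network:')
                     for p in ports)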

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1293184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366921] Re: NSX: create_port should return empty list instead of null for allowed-address-pair

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1366921

Title:
  NSX: create_port should return empty list instead of null for allowed-
  address-pair

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  
  ft133.5: 
tempest.api.network.test_ports.PortsTestJSON.test_show_port[gate,smoke]_StringException:
 pythonlogging:'': {{{2014-09-07 18:53:43,165 17979 INFO 
[tempest.common.rest_client] Request (PortsTestJSON:test_show_port): 200 GET 
http://localhost:9696/v2.0/ports/2827a27a-dee1-4013-b90f-cf2aeeae5f4f 0.030s}}}

  Traceback (most recent call last):
File tempest/api/network/test_ports.py, line 81, in test_show_port
  (port, excluded_keys=['extra_dhcp_opts']))
File 
/opt/stack/tempest/.tox/smoke-serial/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 433, in assertThat
  raise mismatch_error
  MismatchError: Only in actual:
{'binding:vnic_type': normal}
  Differences:
allowed_address_pairs: expected [], actual None

  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

  pythonlogging:'': {{{2014-09-07 18:53:43,165 17979 INFO
  [tempest.common.rest_client] Request (PortsTestJSON:test_show_port):
  200 GET http://localhost:9696/v2.0/ports/2827a27a-dee1-4013-b90f-
  cf2aeeae5f4f 0.030s}}}

  Traceback (most recent call last):
File tempest/api/network/test_ports.py, line 81, in test_show_port
  (port, excluded_keys=['extra_dhcp_opts']))
File 
/opt/stack/tempest/.tox/smoke-serial/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 433, in assertThat
  raise mismatch_error
  MismatchError: Only in actual:
{'binding:vnic_type': normal}
  Differences:
allowed_address_pairs: expected [], actual None

  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

  pythonlogging:'': {{{2014-09-07 18:53:43,165 17979 INFO
  [tempest.common.rest_client] Request (PortsTestJSON:test_show_port):
  200 GET http://localhost:9696/v2.0/ports/2827a27a-dee1-4013-b90f-
  cf2aeeae5f4f 0.030s}}}

  Traceback (most recent call last):
File tempest/api/network/test_ports.py, line 81, in test_show_port
  (port, excluded_keys=['extra_dhcp_opts']))
File 
/opt/stack/tempest/.tox/smoke-serial/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 433, in assertThat
  raise mismatch_error
  MismatchError: Only in actual:
{'binding:vnic_type': normal}
  Differences:
allowed_address_pairs: expected [], actual None
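
  The mismatch above boils down to returning None instead of an empty list;
  a sketch of the normalisation (the dict layout mirrors the API response,
  not the plugin internals):

      def normalize_allowed_address_pairs(port_res):
          if port_res.get('allowed_address_pairs') is None:
              port_res['allowed_address_pairs'] = []
          return port_res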

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1366921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366917] Re: neutron should not use neutronclients utils methods

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Low

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1366917

Title:
  neutron should not use neutronclients utils methods

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  2014-09-07 19:17:58.331 | Traceback (most recent call last):
  2014-09-07 19:17:58.331 |   File "/usr/local/bin/neutron-debug", line 6, in <module>
  2014-09-07 19:17:58.332 | from neutron.debug.shell import main
  2014-09-07 19:17:58.332 |   File "/opt/stack/new/neutron/neutron/debug/shell.py", line 29, in <module>
  2014-09-07 19:17:58.332 | 'probe-create': utils.import_class(
  2014-09-07 19:17:58.332 | AttributeError: 'module' object has no attribute 
'import_class'
  2014-09-07 19:17:58.375 | + exit_trap
  2014-09-07 19:17:58.375 | + local r=1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1366917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357972] Re: boot from volume fails on Hyper-V if boot device is not vda

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Importance: Undecided => Medium

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357972

Title:
  boot from volume fails on Hyper-V if boot device is not vda

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  The Tempest test
  
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
  fails on Hyper-V.

  The cause is related to the fact that the root device is sda and not
  vda.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358719] Re: Live migration fails as get_instance_disk_info is not present in the compute driver base class

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Importance: Undecided => Medium

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358719

Title:
  Live migration fails as get_instance_disk_info is not present in the
  compute driver base class

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  The get_instance_disk_info method has been added to the libvirt
  compute driver in the following commit:

  
https://github.com/openstack/nova/commit/e4974769743d5967626c1f0415113683411a03a4

  This caused regression failures on drivers that do not implement it,
  e.g.:

  http://paste.openstack.org/show/97258/

  The method has subsequently been added to the base class, but it raises
  a NotImplementedError(), which still causes the regression:

  
https://github.com/openstack/nova/commit/2bed16c89356554a193a111d268a9587709ed2f7
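
  For illustration, one way a caller can tolerate drivers that do not
  implement the method instead of failing the whole operation (a sketch,
  not the actual Nova fix):

      def safe_get_instance_disk_info(driver, instance_name):
          try:
              return driver.get_instance_disk_info(instance_name)
          except NotImplementedError:
              # The driver cannot report disk info; treat it as "no local
              # disks" rather than aborting the live migration.
              return None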

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358719/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358881] Re: jjsonschema 2.3.0 - 2.4.0 upgrade breaking nova.tests.test_api_validation tests

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358881

Title:
  jjsonschema 2.3.0 - 2.4.0 upgrade breaking
  nova.tests.test_api_validation tests

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  The following two failures appeared after upgrading jsonschema to
  2.4.0. Downgrading to 2.3.0 returned the tests to passing.

  ==
  FAIL: 
nova.tests.test_api_validation.TcpUdpPortTestCase.test_validate_tcp_udp_port_fails
  --
  Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File /home/dev/Desktop/nova-test/nova/tests/test_api_validation.py, line 
602, in test_validate_tcp_udp_port_fails
  expected_detail=detail)
File /home/dev/Desktop/nova-test/nova/tests/test_api_validation.py, line 
31, in check_validation_error
  self.assertEqual(ex.kwargs, expected_kwargs)
File 
/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 321, in assertEqual
  self.assertThat(observed, matcher, message)
File 
/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 406, in assertThat
  raise mismatch_error
  MismatchError: !=:
  reference = {'code': 400,
   'detail': u'Invalid input for field/attribute foo. Value: 65536. 65536 is 
greater than the maximum of 65535'}
  actual= {'code': 400,
   'detail': 'Invalid input for field/attribute foo. Value: 65536. 65536.0 is 
greater than the maximum of 65535'}

  
  ==
  FAIL: 
nova.tests.test_api_validation.IntegerRangeTestCase.test_validate_integer_range_fails
  --
  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  INFO [migrate.versioning.api] 215 - 216... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 216 - 217... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 217 - 218... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 218 - 219... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 219 - 220... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 220 - 221... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 221 - 222... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 222 - 223... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 223 - 224... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 224 - 225... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 225 - 226... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 226 - 227... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 227 - 228... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 228 - 229... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 229 - 230... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 230 - 231... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 231 - 232... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 232 - 233... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 233 - 234... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 234 - 235... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 235 - 236... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 236 - 237... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 237 - 238... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 238 - 239... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 239 - 240... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 240 - 241... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 241 - 242... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 242 - 243... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 243 - 244... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 244 - 245... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 245 - 246... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 246 - 247... 
  INFO [migrate.versioning.api] done
  INFO 

[Yahoo-eng-team] [Bug 1365352] Re: metadata agent does not cache auth info

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365352

Title:
  metadata agent does not cache auth info

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The metadata agent tries to cache auth info by means of
  self.auth_info = qclient.get_auth_info() in
  _get_instance_and_tenant_id(); however, this qclient is not the exact
  one that is used in the inner methods. In short, the metadata agent
  does not implement auth info caching correctly and still retrieves a new
  token from keystone every time.
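
  A sketch of what correct caching could look like, assuming the
  python-neutronclient token/endpoint_url keyword arguments (the handler
  structure is illustrative, not the agent's actual code):

      from neutronclient.v2_0 import client

      class MetadataHandler(object):
          def __init__(self, conf):
              self.conf = conf
              self.auth_info = {}

          def _get_ports(self, device_id):
              # Hand the cached token/endpoint to the client that will make
              # the call, instead of caching from a throwaway client.
              qclient = client.Client(
                  username=self.conf.admin_user,
                  password=self.conf.admin_password,
                  tenant_name=self.conf.admin_tenant_name,
                  auth_url=self.conf.auth_url,
                  token=self.auth_info.get('auth_token'),
                  endpoint_url=self.auth_info.get('endpoint_url'))
              ports = qclient.list_ports(device_id=[device_id])['ports']
              # Remember whatever this call (re)negotiated for next time.
              self.auth_info = qclient.get_auth_info()
              return ports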

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365352/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360394] Re: NSX: log request body to NSX as debug

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360394

Title:
  NSX: log request body to NSX as debug

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  Previously we never logged the request body that we sent to NSX. This makes
  things hard to debug when issues arise as we don't actually log the body of
  the request that we made. This patch adds the body to our issue request log
  statement.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1360394/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357599] Re: race condition with neutron in nova migrate code

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357599

Title:
  race condition with neutron in nova migrate code

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  The tempest test that resizes an instance occasionally fails with a
  neutron virtual interface timeout error. This occurs because
  resize_instance() calls:

  disk_info = self.driver.migrate_disk_and_power_off(
  context, instance, migration.dest_host,
  instance_type, network_info,
  block_device_info)

  which calls destroy(), which unplugs the VIFs. Then,

  self.driver.finish_migration(context, migration, instance,
   disk_info,
   network_info,
   image, resize_instance,
   block_device_info, power_on)

  is called, which expects a vif_plugged event. Since this happens on the
  same host, the neutron agent is not able to detect that the vif was
  unplugged and then plugged again, because it happens so fast. To fix this
  we should check whether we are migrating to the same host; if we are, we
  should not expect to get an event.
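
  A sketch of the proposed check (attribute and event names are taken from
  the description above and may not match the final patch exactly):

      def events_to_wait_for(migration, network_info):
          if migration['source_compute'] == migration['dest_compute']:
              # Same-host resize: the unplug/replug is too fast for the
              # agent to notice, so no network-vif-plugged event will come.
              return []
          return [('network-vif-plugged', vif['id']) for vif in network_info]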

  8d1] Setting instance vm_state to ERROR
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] Traceback (most recent call last):
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 3714, in finish_resize
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] disk_info, image)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 3682, in _finish_resize
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] old_instance_type, sys_meta)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
/opt/stack/new/nova/nova/openstack/common/excutils.py, line 82, in __exit__
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] six.reraise(self.type_, self.value, 
self.tb)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 3677, in _finish_resize
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] block_device_info, power_on)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 5302, in 
finish_migration
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] block_device_info, power_on)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 3792, in 
_create_domain_and_network
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] raise 
exception.VirtualInterfaceCreateException()
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] VirtualInterfaceCreateException: Virtual 
Interface creation failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357578] Re: Unit test: nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm timing out in gate

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357578

Title:
  Unit test:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm
  timing out in gate

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  http://logs.openstack.org/62/114062/3/gate/gate-nova-
  python27/2536ea4/console.html

   FAIL:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm

  014-08-15 13:46:09.155 | INFO [nova.tests.integrated.api.client] Doing GET on 
/v2/openstack//flavors/detail
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
sent launcher_process pid: 10564 signal: 15
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
waiting on process 10566 to exit
  2014-08-15 13:46:09.155 | INFO [nova.wsgi] Stopping WSGI server.
  2014-08-15 13:46:09.155 | }}}
  2014-08-15 13:46:09.156 | 
  2014-08-15 13:46:09.156 | Traceback (most recent call last):
  2014-08-15 13:46:09.156 |   File 
nova/tests/integrated/test_multiprocess_api.py, line 206, in 
test_terminate_sigterm
  2014-08-15 13:46:09.156 | self._terminate_with_signal(signal.SIGTERM)
  2014-08-15 13:46:09.156 |   File 
nova/tests/integrated/test_multiprocess_api.py, line 194, in 
_terminate_with_signal
  2014-08-15 13:46:09.156 | self.wait_on_process_until_end(pid)
  2014-08-15 13:46:09.156 |   File 
nova/tests/integrated/test_multiprocess_api.py, line 146, in 
wait_on_process_until_end
  2014-08-15 13:46:09.157 | time.sleep(0.1)
  2014-08-15 13:46:09.157 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/greenthread.py,
 line 31, in sleep
  2014-08-15 13:46:09.157 | hub.switch()
  2014-08-15 13:46:09.157 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py,
 line 287, in switch
  2014-08-15 13:46:09.157 | return self.greenlet.switch()
  2014-08-15 13:46:09.157 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py,
 line 339, in run
  2014-08-15 13:46:09.158 | self.wait(sleep_time)
  2014-08-15 13:46:09.158 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/poll.py,
 line 82, in wait
  2014-08-15 13:46:09.158 | sleep(seconds)
  2014-08-15 13:46:09.158 |   File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py,
 line 52, in signal_handler
  2014-08-15 13:46:09.158 | raise TimeoutException()
  2014-08-15 13:46:09.158 | TimeoutException

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362221] Re: VMs fail to start when Ceph is used as a backend for ephemeral drives

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362221

Title:
  VMs fail to start when Ceph is used as a backend for ephemeral drives

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  Ceph has been chosen as the backend for the VMs' drives
  (libvirt.images_type == 'rbd').

  When a user creates a flavor and specifies:
 - root drive size 0
 - ephemeral drive size 0 (important)

  and tries to boot a VM, they get "no valid host was found" and the
  scheduler log shows:

  Error from last host: node-3.int.host.com (node node-3.int.host.com): 
[u'Traceback (most recent call last):\n', u'
   File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1305, 
in _build_instance\n set_access_ip=set_access_ip)\n', u' File /usr/l
  ib/python2.6/site-packages/nova/compute/manager.py, line 393, in 
decorated_function\n return function(self, context, *args, **kwargs)\n', u' File
   /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1717, in 
_spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instanc
  e)\n', u' File 
/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__\n six.reraise(self.type_, self.value, se
  lf.tb)\n', u' File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1714, in 
_spawn\n block_device_info)\n', u' File /usr/lib/py
  thon2.6/site-packages/nova/virt/libvirt/driver.py, line 2259, in spawn\n 
admin_pass=admin_password)\n', u' File /usr/lib/python2.6/site-packages
  /nova/virt/libvirt/driver.py, line 2648, in _create_image\n 
ephemeral_size=ephemeral_gb)\n', u' File 
/usr/lib/python2.6/site-packages/nova/virt/
  libvirt/imagebackend.py, line 186, in cache\n *args, **kwargs)\n', u' File 
/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py,
  line 587, in create_image\n prepare_template(target=base, max_size=size, 
*args, **kwargs)\n', u' File /usr/lib/python2.6/site-packages/nova/opens
  tack/common/lockutils.py, line 249, in inner\n return f(*args, **kwargs)\n', 
u' File /usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebac
  kend.py, line 176, in fetch_func_sync\n fetch_func(target=target, *args, 
**kwargs)\n', u' File /usr/lib/python2.6/site-packages/nova/virt/libvir
  t/driver.py, line 2458, in _create_ephemeral\n disk.mkfs(os_type, fs_label, 
target, run_as_root=is_block_dev)\n', u' File /usr/lib/python2.6/sit
  e-packages/nova/virt/disk/api.py, line 117, in mkfs\n utils.mkfs(default_fs, 
target, fs_label, run_as_root=run_as_root)\n', u' File /usr/lib/pyt
  hon2.6/site-packages/nova/utils.py, line 856, in mkfs\n execute(*args, 
run_as_root=run_as_root)\n', u' File /usr/lib/python2.6/site-packages/nov
  a/utils.py, line 165, in execute\n return processutils.execute(*cmd, 
**kwargs)\n', u' File /usr/lib/python2.6/site-packages/nova/openstack/commo
  n/processutils.py, line 193, in execute\n cmd=\' \'.join(cmd))\n', 
uProcessExecutionError: Unexpected error while running command.\nCommand: sudo
   nova-rootwrap /etc/nova/rootwrap.conf mkfs -t ext3 -F -L ephemeral0 
/var/lib/nova/instances/_base/ephemeral_1_default\nExit code: 1\nStdout: 
''\nStde
  rr: 'mke2fs 1.41.12 (17-May-2010)\\nmkfs.ext3: No such file or directory 
while trying to determine filesystem size\\n'\n]
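
  One possible guard, sketched from the traceback above (this illustrates
  the symptom and is not necessarily the actual fix; the callable passed in
  stands in for the driver's ephemeral-creation helper):

      def create_ephemeral_if_needed(create_ephemeral_fs, target, ephemeral_gb):
          # With ephemeral_gb == 0 the backing file under _base is zero
          # length, and mkfs then fails with "No such file or directory
          # while trying to determine filesystem size".
          if not ephemeral_gb:
              return
          create_ephemeral_fs(target=target,
                              ephemeral_size=ephemeral_gb,
                              fs_label='ephemeral0',
                              os_type=None)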

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362221/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357476] Re: Timeout waiting for vif plugging callback for instance

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357476

Title:
  Timeout waiting for vif plugging callback for instance

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  n-cpu times out while waiting for neutron.

  
  Logstash
  
  
http://logstash.openstack.org/#eyJzZWFyY2giOiIgbWVzc2FnZTogXCJUaW1lb3V0IHdhaXRpbmcgZm9yIHZpZiBwbHVnZ2luZyBjYWxsYmFjayBmb3IgaW5zdGFuY2VcIiBBTkQgdGFnczpcInNjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwODEyMjI1NjY2NiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  message: Timeout waiting for vif plugging callback for instance AND
  tags:screen-n-cpu.txt

  
  Logs
  
  
http://logs.openstack.org/09/108909/4/gate/check-tempest-dsvm-neutron-full/628138b/logs/screen-n-cpu.txt.gz#_2014-08-13_21_14_53_453

  2014-08-13 21:14:53.453 WARNING nova.virt.libvirt.driver [req-
  0974eac5-f261-472e-a2c3-f96514e4131c ServerActionsTestXML-650848250
  ServerActionsTestXML-1011304525] Timeout waiting for vif plugging
  callback for instance 794ceb8c-a08b-4b02-bdcb-4ad5632f7744

  2014-08-13 21:14:55.408 ERROR nova.compute.manager 
[req-0974eac5-f261-472e-a2c3-f96514e4131c ServerActionsTestXML-650848250 
ServerActionsTestXML-1011304525] [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] Setting instance vm_state to ERROR
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] Traceback (most recent call last):
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 3714, in finish_resize
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] disk_info, image)
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 3682, in _finish_resize
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] old_instance_type, sys_meta)
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
/opt/stack/new/nova/nova/openstack/common/excutils.py, line 82, in __exit__
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] six.reraise(self.type_, self.value, 
self.tb)
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 3677, in _finish_resize
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] block_device_info, power_on)
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 5302, in 
finish_migration
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] block_device_info, power_on)
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 3792, in 
_create_domain_and_network
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] raise 
exception.VirtualInterfaceCreateException()
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] VirtualInterfaceCreateException: Virtual 
Interface creation failed
  2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] 

  2014-08-13 21:14:56.138 ERROR oslo.messaging.rpc.dispatcher 
[req-0974eac5-f261-472e-a2c3-f96514e4131c ServerActionsTestXML-650848250 
ServerActionsTestXML-1011304525] Exception during message handling: Virtual 
Interface creation failed
  2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
  2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
  2014-08-13 

[Yahoo-eng-team] [Bug 1360817] Re: Hyper-V agent fails on Hyper-V 2008 R2 due to missing remove_all_security_rules method

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => Medium

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360817

Title:
  Hyper-V agent fails on Hyper-V 2008 R2 due to missing
  remove_all_security_rules method

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  A recent regression does not allow the Hyper-V agent to run
  successfully on Hyper-V 2008 R2, which is currently still a supported
  platform.

  The call generating the error is:

  
https://github.com/openstack/neutron/blob/771327adbe9e563506f98ca561de9ded4d987698/neutron/plugins/hyperv/agent/hyperv_neutron_agent.py#L392

  Error stack trace:

  http://paste.openstack.org/show/98471/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1360817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357102] Re: Big Switch: Multiple read calls to consistency DB fails

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357102

Title:
  Big Switch: Multiple read calls to consistency DB fails

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The Big Switch consistency DB throws an exception if read_for_update() is 
called multiple times without closing the transaction in between. This was 
originally because there was a DB lock in place and a single thread could 
deadlock if it tried twice. However, 
  there is no longer a point to this protection because the DB lock is gone and 
certain response failures result in the DB being read twice (the second time 
for a retry).

  2014-08-14 21:56:41.496 12939 ERROR neutron.plugins.ml2.managers 
[req-ee311173-b38a-481e-8900-d963c676b05f None] Mechanism driver 'bigswitch' 
failed in update_port_postcommit
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers Traceback 
(most recent call last):
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py, line 168, 
in _call_on_drivers
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_bigswitch/driver.py,
 line 91, in update_port_postcommit
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
port[network][id], port)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py, 
line 555, in rest_update_port
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
self.rest_create_port(tenant_id, net_id, port)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py, 
line 545, in rest_create_port
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
self.rest_action('PUT', resource, data, errstr)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py, 
line 476, in rest_action
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers timeout)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py, line 
249, in inner
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers return 
f(*args, **kwargs)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py, 
line 423, in rest_call
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
hash_handler=hash_handler)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py, 
line 139, in rest_call
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
headers[HASH_MATCH_HEADER] = hash_handler.read_for_update()
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/db/consistency_db.py,
 line 56, in read_for_update
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers raise 
MultipleReadForUpdateCalls()
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
MultipleReadForUpdateCalls: Only one read_for_update call may be made at a time.
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers
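
  A sketch of the relaxed behaviour (the hash-row accessor is a stand-in
  for the real DB query; the point is only that a second read within the
  same REST call/retry no longer raises):

      class HashHandler(object):
          def __init__(self, get_hash_row):
              # get_hash_row is any callable returning the current hash row.
              self._get_hash_row = get_hash_row

          def read_for_update(self):
              # Re-reading is harmless now that the DB lock is gone, so the
              # second call made for a retried request must not raise
              # MultipleReadForUpdateCalls.
              row = self._get_hash_row()
              return row.hash if row else ''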

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357063] Re: nova.virt.driver Emitting event log message in stable/icehouse doesn't show anything

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
 Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357063

Title:
  nova.virt.driver Emitting event log message in stable/icehouse
  doesn't show anything

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  This is fixed on master with commit
  8c98b601f2db1f078d5f42ab94043d9939608f73, but the message is still
  useless on stable/icehouse. Here is an example snippet from a
  stable/icehouse tempest run of what this looks like in the n-cpu log:

  2014-08-14 16:18:53.311 473 DEBUG nova.virt.driver [-] Emitting event
  emit_event /opt/stack/new/nova/nova/virt/driver.py:1207

  It would be really nice to use that information in trying to debug
  what's causing all of these hits for InstanceInfoCacheNotFound stack
  traces:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRXhjZXB0aW9uIGRpc3BhdGNoaW5nIGV2ZW50XCIgQU5EIG1lc3NhZ2U6XCJJbmZvIGNhY2hlIGZvciBpbnN0YW5jZVwiIEFORCBtZXNzYWdlOlwiY291bGQgbm90IGJlIGZvdW5kXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIgQU5EIE5PVCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwODA0NzMxMzM5Nn0=

  We should backport that repr fix to stable/icehouse for serviceability
  purposes.
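
  For illustration, a minimal sketch of the repr-based fix referenced
  above (the class, constants and names are placeholders, not Nova's
  actual code): give the lifecycle event a useful __repr__ so that
  "Emitting event %s" says which instance and transition it is about.

  import logging

  LOG = logging.getLogger(__name__)

  EVENT_NAMES = {0: 'Stopped', 1: 'Started', 2: 'Paused', 3: 'Resumed'}

  class LifecycleEvent(object):
      def __init__(self, uuid, transition):
          self.uuid = uuid
          self.transition = transition

      def __repr__(self):
          return '<LifecycleEvent: uuid=%s, transition=%s>' % (
              self.uuid, EVENT_NAMES.get(self.transition, 'Unknown'))

  def emit_event(event):
      # With the __repr__ above, this logs something actionable instead
      # of the bare "Emitting event" seen in the excerpt above.
      LOG.debug('Emitting event %s', event)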

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357063/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352428] Re: HyperV Shutting Down state is not mapped

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352428

Title:
  HyperV Shutting Down state is not mapped

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  The method that gets VM-related information can fail if the VM is in an 
intermediary state such as "Shutting Down".
  The reason is that some of the Hyper-V-specific VM states are not defined as 
possible states.

  This results in a KeyError, as shown below:

  http://paste.openstack.org/show/90015/
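
  For illustration, a hedged sketch of the mapping problem (the numeric
  codes below are placeholders, not the real Hyper-V enablement-state
  values): an unmapped intermediary state such as "Shutting Down" should
  degrade gracefully instead of raising KeyError.

  NOSTATE, RUNNING, PAUSED, SHUTDOWN = range(4)

  HYPERV_POWER_STATE = {
      2: RUNNING,   # Enabled
      3: SHUTDOWN,  # Disabled
      9: PAUSED,    # Quiesce
      # "Shutting Down" intentionally missing, to mimic the bug.
  }

  def to_power_state(hyperv_state):
      # dict.get() with a default avoids the KeyError from the paste;
      # the alternative fix is to add the missing states to the map.
      return HYPERV_POWER_STATE.get(hyperv_state, NOSTATE)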

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1354448] Re: The Hyper-V driver should raise a InstanceFaultRollback in case of resize down requests

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1354448

Title:
  The Hyper-V driver should raise a InstanceFaultRollback in case of
  resize down requests

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  The Hyper-V driver does not support resizing down and currently
  raises an exception if the user attempts to do that, causing the
  instance to go into the ERROR state.

  The driver should instead use the recently introduced
  exception.InstanceFaultRollback, which will leave the instance in the
  ACTIVE state as expected.
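
  A rough sketch of the pattern being asked for (exception and function
  names are illustrative, not the exact driver code): wrap the
  "unsupported resize down" error in a rollback exception so the compute
  manager records a fault and leaves the instance ACTIVE.

  class CannotResizeDisk(Exception):
      pass

  class InstanceFaultRollback(Exception):
      """Tells the compute manager to record a fault and roll back."""
      def __init__(self, inner_exception):
          self.inner_exception = inner_exception
          super(InstanceFaultRollback, self).__init__(str(inner_exception))

  def check_resize(old_root_gb, new_root_gb):
      if new_root_gb < old_root_gb:
          # Raising the rollback wrapper instead of the bare error keeps
          # the instance out of the ERROR state.
          raise InstanceFaultRollback(
              CannotResizeDisk('Hyper-V cannot shrink the root disk'))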

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1354448/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353697] Re: Hyper-V agent raises UnsupportedRpcVersion: Specified RPC version, 1.1, not supported by this endpoint.

2014-09-29 Thread Adam Gandelman
** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

** Changed in: neutron/icehouse
   Importance: Undecided => High

** Changed in: neutron/icehouse
   Status: New => Fix Committed

** Changed in: neutron/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1353697

Title:
  Hyper-V agent raises UnsupportedRpcVersion: Specified RPC version,
  1.1, not supported by this endpoint.

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Committed

Bug description:
  The Hyper-V agent raises:

  2014-08-06 10:42:37.096 2052 ERROR neutron.openstack.common.rpc.amqp 
[req-46340a1a-9143-45c9-b645-2612d41f20a6 None] Exception during message 
handling
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\neutron\openstack\common\rpc\amqp.py,
 line 462, in _process_data
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp 
**args)
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp   File 
C:\Program Files (x86)\Cloudbase 
Solutions\OpenStack\Nova\Python27\lib\site-packages\neutron\openstack\common\rpc\dispatcher.py,
 line 178, in dispatch
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp 
raise rpc_common.UnsupportedRpcVersion(version=version)
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp 
UnsupportedRpcVersion: Specified RPC version, 1.1, not supported by this 
endpoint.
  2014-08-06 10:42:37.096 2052 TRACE neutron.openstack.common.rpc.amqp 

  The issue does not affect functionality, but it creates a lot of noise
  in the logs since the error is logged at each iteration.
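
  For context, a sketch of the general shape of such a fix with the old
  oslo-incubator RPC dispatcher, which reads an RPC_API_VERSION attribute
  from each callback object (the class and method names below are
  illustrative, not the actual agent code):

  class HyperVAgentCallbacks(object):
      # Advertise a version compatible with what the server sends; with
      # only the implicit default of 1.0 the dispatcher rejects every
      # 1.1 message with UnsupportedRpcVersion, as in the trace above.
      RPC_API_VERSION = '1.1'

      def port_update(self, context, **kwargs):
          pass  # real handler work would go here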

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1353697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350466] Re: deadlock in scheduler expire reservation periodic task

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350466

Title:
  deadlock in scheduler expire reservation periodic task

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  http://logs.openstack.org/54/105554/4/check/gate-tempest-dsvm-neutron-
  large-
  ops/45501af/logs/screen-n-sch.txt.gz?level=TRACE#_2014-07-30_16_26_20_158

  
  2014-07-30 16:26:20.158 17209 ERROR nova.openstack.common.periodic_task [-] 
Error during SchedulerManager._expire_reservations: (OperationalError) (1213, 
'Deadlock found when trying to get lock; try restarting transaction') 'UPDATE 
reservations SET updated_at=updated_at, deleted_at=%s, deleted=id WHERE 
reservations.deleted = %s AND reservations.expire < %s' 
(datetime.datetime(2014, 7, 30, 16, 26, 20, 152098), 0, datetime.datetime(2014, 
7, 30, 16, 26, 20, 149665))
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
Traceback (most recent call last):
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/openstack/common/periodic_task.py, line 198, in 
run_periodic_tasks
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
task(self, context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/scheduler/manager.py, line 157, in 
_expire_reservations
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
QUOTAS.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/quota.py, line 1401, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
self._driver.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/quota.py, line 651, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
db.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/db/api.py, line 1173, in reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
return IMPL.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 149, in wrapper
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
return f(*args, **kwargs)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/db/sqlalchemy/api.py, line 3394, in 
reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
reservation_query.soft_delete(synchronize_session=False)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py, line 
694, in soft_delete
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
synchronize_session=synchronize_session)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2690, in 
update
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
update_op.exec_()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py, line 
816, in exec_
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
self._do_exec()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py, line 
913, in _do_exec
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
update_stmt, params=self.query._params)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py, line 
444, in _wrap
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
_raise_if_deadlock_error(e, self.bind.dialect.name)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File /opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py, line 
427, in _raise_if_deadlock_error
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
raise exception.DBDeadlock(operational_error)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
DBDeadlock: (OperationalError) (1213, 'Deadlock found when trying 

[Yahoo-eng-team] [Bug 1344036] Re: Hyper-V agent generates exception when force_hyperv_utils_v1 is True on Windows Server / Hyper-V Server 2012 R2

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Importance: Undecided => Medium

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1344036

Title:
  Hyper-V agent generates exception when force_hyperv_utils_v1 is True
  on Windows Server / Hyper-V Server 2012 R2

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  WMI root\virtualization namespace v1 (in Hyper-V) has been removed from 
Windows Server / Hyper-V Server 2012 R2, according to:
  http://technet.microsoft.com/en-us/library/dn303411.aspx

  Because of this, setting the force_hyperv_utils_v1 option on the
  Windows Server 2012 R2 nova compute agent's nova.conf will cause
  exceptions, since it will try to use the removed root\virtualization
  namespace v1.

  Logs:
  http://paste.openstack.org/show/87125/
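
  A minimal sketch of the guard this implies, with hypothetical helper
  names (not the actual Hyper-V utils factory): only honour
  force_hyperv_utils_v1 when the OS still ships the v1 namespace, i.e.
  before Windows Server / Hyper-V Server 2012 R2 (6.3).

  def get_vmutils(force_v1=False, os_version=(6, 3)):
      # root\virtualization (v1) was removed in 6.3, per the TechNet
      # link above, so forcing v1 there can only end in exceptions.
      v1_available = os_version < (6, 3)
      if force_v1 and v1_available:
          return 'VMUtils'    # stand-in for the v1 implementation
      return 'VMUtilsV2'      # stand-in for the v2 implementation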

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1344036/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348720] Re: Missing index for expire_reservations

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348720

Title:
  Missing index for expire_reservations

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  While investigating some database performance problems, we discovered
  that there is no index on deleted for the reservations table. When
  this table gets large, the expire_reservations code will do a full
  table scan and take multiple seconds to complete. Because the expire
  runs as a periodic task, it can slow down the master database
  significantly and cause nova or cinder to become extremely slow.

   EXPLAIN UPDATE reservations SET updated_at=updated_at, 
deleted_at='2014-07-24 22:26:17', deleted=id WHERE reservations.deleted = 0 AND 
reservations.expire < '2014-07-24 22:26:11';
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  | id | select_type | table        | type  | possible_keys | key     | key_len | ref  | rows   | Extra                        |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  |  1 | SIMPLE      | reservations | index | NULL          | PRIMARY | 4       | NULL | 950366 | Using where; Using temporary |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+

  An index on (deleted, expire) would be the most efficient.
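
  A sketch of the kind of migration that adds it, in the
  sqlalchemy-migrate style used at the time (illustrative, not the exact
  Nova/Cinder migration):

  from sqlalchemy import Index, MetaData, Table

  def upgrade(migrate_engine):
      meta = MetaData(bind=migrate_engine)
      reservations = Table('reservations', meta, autoload=True)
      # (deleted, expire) covers "deleted = 0 AND expire < now", so the
      # periodic expire stops doing full table scans.
      index = Index('reservations_deleted_expire_idx',
                    reservations.c.deleted, reservations.c.expire)
      index.create(migrate_engine)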

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1348720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338451] Re: shelve api does not work in the nova-cell environment

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338451

Title:
  shelve api does not work in the nova-cell environment

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  If you run the nova shelve API in a nova-cells environment, it throws
  the following error:

  Nova cell (n-cell-child) Logs:

  2014-07-06 23:57:13.445 ERROR nova.cells.messaging 
[req-a689a1a1-4634-4634-974a-7343b5554f46 admin admin] Error processing message 
locally: save() got an unexpected keyword argument 'expected_task_state'
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging Traceback (most recent 
call last):
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/cells/messaging.py, line 200, in _process_locally
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/cells/messaging.py, line 1287, in 
_process_message_locally
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return fn(message, 
**message.method_kwargs)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/cells/messaging.py, line 700, in run_compute_api_method
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return 
fn(message.ctxt, *args, **method_info['method_kwargs'])
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 192, in wrapped
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return func(self, 
context, target, *args, **kwargs)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 182, in inner
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return function(self, 
context, instance, *args, **kwargs)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 163, in inner
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return f(self, 
context, instance, *args, **kw)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
/opt/stack/nova/nova/compute/api.py, line 2458, in shelve
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging 
instance.save(expected_task_state=[None])
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging TypeError: save() got an 
unexpected keyword argument 'expected_task_state'
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging

  Nova compute log:

  2014-07-07 00:05:19.084 ERROR oslo.messaging.rpc.dispatcher 
[req-9539189d-239b-4e74-8aea-8076740
  31c2f admin admin] Exception during message handling: 'NoneType' object is 
not iterable
  Traceback (most recent call last):

    File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _
  dispatch_and_reply
  incoming.message))

    File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _
  dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)

    File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _
  do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)

    File /opt/stack/nova/nova/conductor/manager.py, line 351, in 
notify_usage_exists
  system_metadata, extra_usage_info)

    File /opt/stack/nova/nova/compute/utils.py, line 250, in 
notify_usage_exists
  ignore_missing_network_data)

    File /opt/stack/nova/nova/notifications.py, line 285, in bandwidth_usage
  macs = [vif['address'] for vif in nw_info]

  TypeError: 'NoneType' object is not iterable

  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dis
  t-packages/oslo/messaging/rpc/dispatcher.py, line 134, in _dispatch_and_reply
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/nova/nova/exception.py, line 88, in wrapped
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher 

[Yahoo-eng-team] [Bug 1347777] Re: The compute_driver option description does not include the Hyper-V driver

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347777

Title:
  The compute_driver option description does not include the Hyper-V
  driver

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  The description of the option compute_driver should include
  hyperv.HyperVDriver along with the other supported drivers

  
https://github.com/openstack/nova/blob/aa018a718654b5f868c1226a6db7630751613d92/nova/virt/driver.py#L35-L38

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337860] Re: VirtualInterfaceCreateException: Virtual Interface creation failed

2014-09-29 Thread Adam Gandelman
*** This bug is a duplicate of bug 1292243 ***
https://bugs.launchpad.net/bugs/1292243

** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1337860

Title:
  VirtualInterfaceCreateException: Virtual Interface creation failed

Status in OpenStack Compute (Nova):
  New
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  After failing to launch 100 instances due to a size/memory allocation issue, I 
tried launching 100 smaller instances. 
  After the first failure, two attempts to launch 100 smaller instances came back 
with errors on some of the instances. 
  The third time, all instances suddenly launched with no errors. 
  This is reproducible 100% of the time. 

  to reproduce: 
  make sure you have enough computes to run 100 tiny flavor instances. 

  1. launch 100 instances with largest flavor (you should fail on memory or 
size). 
  2. destroy all instances and run 100 tiny flavor instances - repeat this step 
until all instances are launched successfully. 

  Some of the instances will fail to be created with the error below
  after the first failure, even though they should be capable of
  running. After several attempts, all instances suddenly launch
  successfully (so perhaps a caching issue).

  
  2014-07-04 14:46:19.728 15291 DEBUG nova.compute.utils 
[req-327fecfb-3bac-4a6d-aebe-3e06c03132e1 5a67ce69c6824e17b44bf15003ccc29f 
d22192179d3042a587ebd06bd6fd48d1] [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] Virtual Interface creat
  ion failed notify_about_instance_usage 
/usr/lib/python2.7/site-packages/nova/compute/utils.py:336
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] Traceback (most recent call last):
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1191, in 
_run_instance
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] instance, image_meta, 
legacy_bdm_in_spec)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1335, in 
_build_instance
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] network_info.wait(do_raise=False)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] six.reraise(self.type_, self.value, 
self.tb)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1311, in 
_build_instance
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] set_access_ip=set_access_ip)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 399, in 
decorated_function
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] return function(self, context, *args, 
**kwargs)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1723, in _spawn
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] six.reraise(self.type_, self.value, 
self.tb)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1720, in _spawn
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25] block_device_info)
  2014-07-04 14:46:19.728 15291 TRACE nova.compute.utils [instance: 
d2071fd8-8e09-4a43-b3c0-3ffb254b4c25]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 2260, in 
spawn
  2014-07-04 14:46:19.728 15291 TRACE 

[Yahoo-eng-team] [Bug 1319182] Re: Pausing a rescued instance should be impossible

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1319182

Title:
  Pausing a rescued instance should be impossible

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  In the following commands, 'vmtest' is a freshly created virtual
  machine.

  
  $ nova show vmtest | grep -E "(status|task_state)"
  | OS-EXT-STS:task_state| -
  | status   | ACTIVE

  $ nova rescue vmtest
  +---+--+
  | Property  | Value
  +---+--+
  | adminPass | 2ZxvzZULT4sr
  +---+--+

  $ nova show vmtest | grep -E "(status|task_state)"
  | OS-EXT-STS:task_state| -
  | status   | RESCUE

  $ nova pause vmtest

  $ nova show vmtest | grep -E "(status|task_state)"
  | OS-EXT-STS:task_state| -
  | status   | PAUSED

  $ nova unpause vmtest

  $ nova show vmtest | grep -E "(status|task_state)"
  | OS-EXT-STS:task_state| -
  | status   | ACTIVE

  Here, we would want the vm to be in the 'RESCUE' state, as it was
  before being paused.

  $ nova unrescue vmtest
  ERROR (Conflict): Cannot 'unrescue' while instance is in vm_state active 
(HTTP 409) (Request-ID: req-34b8004d-b072-4328-bbf9-29152bd4c34f)

  The 'unrescue' command fails, which seems to confirm that the VM was
  no longer being rescued.

  
  So, two possibilities:
  1) When unpausing, the vm should go back to 'rescued' state
  2) Rescued vms should not be allowed to be paused, as is indicated by this 
graph: http://docs.openstack.org/developer/nova/devref/vmstates.html

  
  Note that the same issue can be observed with suspend/resume instead of 
pause/unpause, and probably other commands as well.

  WDYT ?
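
  A hedged sketch of option 2 (names illustrative, not the actual Nova
  decorator): gate the pause API on the instance's vm_state so rescued
  instances are rejected up front instead of silently losing their
  rescue state.

  ACTIVE, PAUSED, RESCUED = 'active', 'paused', 'rescued'

  class InstanceInvalidState(Exception):
      pass

  def check_instance_state(allowed_vm_states):
      def decorator(fn):
          def inner(self, context, instance, *args, **kwargs):
              if instance['vm_state'] not in allowed_vm_states:
                  raise InstanceInvalidState(
                      "Cannot '%s' while instance is in vm_state %s"
                      % (fn.__name__, instance['vm_state']))
              return fn(self, context, instance, *args, **kwargs)
          return inner
      return decorator

  class ComputeAPI(object):
      @check_instance_state(allowed_vm_states=[ACTIVE])  # RESCUED excluded
      def pause(self, context, instance):
          pass  # real pause logic would go here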

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1319182/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1321640] Re: [HyperV]: Config drive is not attached to instance after resized or migrated

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321640

Title:
  [HyperV]: Config drive is not attached to instance after resized or
  migrated

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  If we use a config drive (whether by setting --config-drive=true in the
  boot command or force_config_drive=always in nova.conf), there is a bug
  affecting the config drive when resizing or migrating instances on
  Hyper-V.

  You can see from the current nova code:
  
https://github.com/openstack/nova/blob/master/nova/virt/hyperv/migrationops.py#L269
  when the migration finishes, there is no code to attach configdrive.iso or 
configdrive.vhd to the resized instance, unlike the boot path 
(https://github.com/openstack/nova/blob/master/nova/virt/hyperv/vmops.py#L226). 
Although the commit https://review.openstack.org/#/c/55975/ handles copying the 
config drive to the resized or migrated instance, there is no code to attach it 
after the resize or migration.
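
  A sketch of the missing step, with hypothetical helpers (attach_iso and
  attach_vhd are placeholders, not the Hyper-V driver API): after the
  config drive files have been copied over, they still need to be
  re-attached to the resized or migrated instance.

  import os

  def attach_configdrive_after_migration(instance_name, instance_dir,
                                         attach_iso, attach_vhd):
      configdrive_iso = os.path.join(instance_dir, 'configdrive.iso')
      configdrive_vhd = os.path.join(instance_dir, 'configdrive.vhd')
      if os.path.exists(configdrive_iso):
          attach_iso(instance_name, configdrive_iso)
      elif os.path.exists(configdrive_vhd):
          attach_vhd(instance_name, configdrive_vhd)
      # If neither file exists the instance was booted without a config
      # drive, so there is nothing to do.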

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321640/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327406] Re: The One And Only network is variously visible

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327406

Title:
  The One And Only network is variously visible

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  I am testing with the templates in
  https://review.openstack.org/#/c/97366/

  I can create a stack.  I can use `curl` to hit the webhooks to scale
  up and down the old-style group and to scale down the new-style group;
  those all work.  What fails is hitting the webhook to scale up the
  new-style group.  Here is a typescript showing the failure:

  $ curl -X POST
  
'http://10.10.0.125:8000/v1/signal/arn%3Aopenstack%3Aheat%3A%3A39675672862f4bd08505bfe1283773e0%3Astacks%2Ftest4
  %2F3cd6160b-
  
d8c5-48f1-a527-4c7df9205fc3%2Fresources%2FNewScaleUpPolicy?Timestamp=2014-06-06T19%3A45%3A27ZSignatureMethod=HmacSHA256AWSAccessKeyId=35678396d987432f87cda8e4c6cdbfb5SignatureVersion=2Signature=W3aJQ6SR7O5lLOxLEQndbzNB%2FUhefr1W7qO9zNZ%2BHVs%3D'

  ErrorResponseErrorMessageThe request processing has failed due to an 
internal error:Remote error: ResourceFailure Error: Nested stack UPDATE failed: 
Error: Resource CREATE failed: NotFound: No Network matching {'label': 
u'private'}. (HTTP 404)
  [u'Traceback (most recent call last):\n', u'  File 
/opt/stack/heat/heat/engine/service.py, line 61, in wrapped\nreturn 
func(self, ctx, *args, **kwargs)\n', u'  File 
/opt/stack/heat/heat/engine/service.py, line 911, in resource_signal\n
stack[resource_name].signal(details)\n', u'  File 
/opt/stack/heat/heat/engine/resource.py, line 879, in signal\nraise 
failure\n', uResourceFailure: Error: Nested stack UPDATE failed: Error: 
Resource CREATE failed: NotFound: No Network matching {'label': u'private'}. 
(HTTP 
404)\n]./MessageCodeInternalFailure/CodeTypeServer/Type/Error/ErrorResponse

  The original sin looks like this in the heat engine log:

  2014-06-06 17:39:20.013 28692 DEBUG urllib3.connectionpool 
[req-2391a9ea-46d6-46f0-9a7b-cf999a8697e9 ] GET 
/v2/39675672862f4bd08505bfe1283773e0/os-networks HTTP/1.1 200 16 _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:415
  2014-06-06 17:39:20.014 28692 ERROR heat.engine.resource 
[req-2391a9ea-46d6-46f0-9a7b-cf999a8697e9 None] CREATE : Server my_instance 
Stack test1-new_style-qidqbd5nrk44-43e7l57kqf5w-4t3xdjrfrr7s 
[20523269-0ebb-45b8-ad59-75f55607f3bd]
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource Traceback (most 
recent call last):
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource   File 
/opt/stack/heat/heat/engine/resource.py, line 383, in _do_action
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource handle())
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource   File 
/opt/stack/heat/heat/engine/resources/server.py, line 493, in handle_create
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource nics = 
self._build_nics(self.properties.get(self.NETWORKS))
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource   File 
/opt/stack/heat/heat/engine/resources/server.py, line 597, in _build_nics
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource network = 
self.nova().networks.find(label=label_or_uuid)
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource   File 
/opt/stack/python-novaclient/novaclient/base.py, line 194, in find
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource raise 
exceptions.NotFound(msg)
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource NotFound: No Network 
matching {'label': u'private'}. (HTTP 404)

  Private debug logging reveals that in the scale-up case, the call to
  GET /v2/{tenant-id}/os-networks HTTP/1.1 returns with response code
  200 and an empty list of networks.  Comparing with the corresponding
  call when the stack is being created shows no difference in the calls
  --- because the normal logging omits the headers --- even though the
  results differ (when the stack is being created, the result contains
  the correct list of networks).  Turning on HTTP debug logging in the
  client reveals that the X-Auth-Token headers differ.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326183] Re: detach interface fails as instance info cache is corrupted

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326183

Title:
  detach interface fails as instance info cache is corrupted

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  
  Performing attach/detach interface on a VM sometimes results in an interface 
that can't be detached from the VM.
  I traced it to corrupted instance info cache data caused by a non-atomic 
update of that information.
  Details on how to reproduce the bug are as follows. Since this is due to a 
race condition, the test can take quite a bit of time before it hits the bug.

  Steps to reproduce:

  1) Devstack with trunk with the following local.conf:
  disable_service n-net
  enable_service q-svc
  enable_service q-agt
  enable_service q-dhcp
  enable_service q-l3
  enable_service q-meta
  enable_service q-metering
  RECLONE=yes
  # and other options as set in the trunk's local

  2) Create a few networks:
  $ neutron net-create testnet1
  $ neutron net-create testnet2
  $ neutron net-create testnet3
  $ neutron subnet-create testnet1 192.168.1.0/24
  $ neutron subnet-create testnet2 192.168.2.0/24
  $ neutron subnet-create testnet3 192.168.3.0/24

  3) Create a testvm in testnet1:
  $ nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64-uec --nic 
net-id=`neutron net-list | grep testnet1 | cut -f 2 -d ' '` testvm

  4) Run the following shell script to attach and detach interfaces for this VM 
in the remaining two networks in a loop until we run into the issue at hand:
  
  #! /bin/bash
  c=1
  netid1=`neutron net-list | grep testnet2 | cut -f 2 -d ' '`
  netid2=`neutron net-list | grep testnet3 | cut -f 2 -d ' '`
  while [ $c -gt 0 ]
  do
 echo Round:  $c
 echo -n Attaching two interfaces... 
 nova interface-attach --net-id $netid1 testvm
 nova interface-attach --net-id $netid2 testvm
 echo Done
 echo Sleeping until both those show up in interfaces
 waittime=0
 while [ $waittime -lt 60 ]
 do
 count=`nova interface-list testvm | wc -l`
 if [ $count -eq 7 ]
 then
 break
 fi
 sleep 2
 (( waittime+=2 ))
 done
 echo Waited for  $waittime  seconds
 echo Detaching both... 
 nova interface-list testvm | grep $netid1 | awk '{print "deleting ", $4; 
system("nova interface-detach testvm " $4 " ; sleep 2");}'
 nova interface-list testvm | grep $netid2 | awk '{print "deleting ", $4; 
system("nova interface-detach testvm " $4 " ; sleep 2");}'
 echo "Done; check interfaces are gone in a minute."
 waittime=0
 while [ $waittime -lt 60 ]
 do
 count=`nova interface-list testvm | wc -l`
 echo line count:  $count
 if [ $count -eq 5 ]
 then
 break
 fi
 sleep 2
 (( waittime+=2 ))
 done
 if [ $waittime -ge 60 ]
 then
echo bad case
exit 1
 fi
 echo Interfaces are gone
 ((  c-- ))
  done
  -

  Eventually the test will stop with a failure (bad case) and the
  interface remaining either from testnet2 or testnet3 can not be
  detached at all.
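
  A very rough sketch of the atomic-update idea behind fixing such a race
  (table and column names are illustrative, not Nova's actual schema
  handling): update the cached network info only if it still matches what
  was read, and have the caller retry otherwise, so concurrent
  attach/detach operations cannot silently overwrite each other's view.

  from sqlalchemy import Column, String, Text
  from sqlalchemy.orm import declarative_base

  Base = declarative_base()

  class InstanceInfoCache(Base):
      __tablename__ = 'instance_info_caches'
      instance_uuid = Column(String(36), primary_key=True)
      network_info = Column(Text)

  def save_network_info(session, uuid, old_json, new_json):
      # Guarded UPDATE: write only if the row still holds the value we
      # read. False means a concurrent writer got there first and the
      # caller should re-read and merge instead of clobbering.
      updated = (session.query(InstanceInfoCache)
                 .filter_by(instance_uuid=uuid, network_info=old_json)
                 .update({'network_info': new_json},
                         synchronize_session=False))
      session.commit()
      return bool(updated)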

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308058] Re: Cannot create volume from glance image without checksum

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308058

Title:
  Cannot create volume from glance image without checksum

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  It is no longer possible to create a volume from an image that does
  not have a checksum set.

  
https://github.com/openstack/cinder/commit/da13c6285bb0aee55cfbc93f55ce2e2b7d6a28f2
  - this patch removes the default of None from the getattr call.

  If this is intended it would be nice to see something more informative
  in the logs.
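
  A minimal illustration of the regression (hypothetical image object,
  not the actual Cinder code); the original trace follows below:

  def image_checksum(image):
      # Before the commit above: getattr(image, 'checksum', None)
      # After it: getattr(image, 'checksum') raises AttributeError for
      # images that never had a checksum set (e.g. still queued).
      return getattr(image, 'checksum', None)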

  2014-04-15 11:52:26.035 19000 ERROR cinder.api.middleware.fault 
[req-cf0f7b89-a9c1-4a10-b1ac-ddf415a28f24 c139cd16ac474d2184237ba837a04141 
83d5198d5f5a461798c6b843f57540d
  f - - -] Caught error: checksum
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault Traceback 
(most recent call last):
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/opt/stack/cinder/cinder/api/middleware/fault.py, line 75, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
req.get_response(self.application)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault 
application, catch_exc_info=False)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault app_iter 
= application(self.environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 615, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
self.app(env, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault response 
= self.app(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault resp = 
self.call_func(req, *args, **self.kwargs)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
self.func(req, *args, **kwargs)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/opt/stack/cinder/cinder/api/openstack/wsgi.py, line 895, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault 
content_type, body, accept)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/opt/stack/cinder/cinder/api/openstack/wsgi.py, line 943, in _process_stack
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault 
action_result = self.dispatch(meth, request, action_args)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
/opt/stack/cinder/cinder/api/openstack/wsgi.py, line 1019, in dispatch
  2014-04-15 11:52:26.035 19000 TRACE 

[Yahoo-eng-team] [Bug 1296478] Re: The Hyper-V driver's list_instances() returns an empty result set on certain localized versions of the OS

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296478

Title:
  The Hyper-V driver's list_instances() returns an empty result set on
  certain localized versions of the OS

Status in OpenStack Compute (Nova):
  Fix Committed
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  This issue is caused by the different values that MSVM_ComputerSystem's
  Caption property can have in different locales.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334164] Re: nova error migrating VMs with floating ips: 'FixedIP' object has no attribute '_sa_instance_state'

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334164

Title:
  nova error migrating VMs with floating ips: 'FixedIP' object has no
  attribute '_sa_instance_state'

Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in Fuel for OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack:
  Fix Committed
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  Seeing this in conductor logs when migrating a VM with a floating IP
  assigned:

  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 133, 
in _dispatch_and_reply
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 176, 
in _dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py, line 122, 
in _do_dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/conductor/manager.py, line 1019, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/conductor/manager.py, line 527, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_api.migrate_instance_start(context, instance, migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/network/api.py, line 94, in wrapped
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
func(self, context, *args, **kwargs)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/network/api.py, line 543, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_rpcapi.migrate_instance_start(context, **args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/nova/network/rpcapi.py, line 350, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
floating_addresses=floating_addresses)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py, line 150, in 
call
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/transport.py, line 90, in 
_send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
409, in send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py, line 
402, in _send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher raise 
result
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'FixedIP' object has no attribute '_sa_instance_state'

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1334164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1321186] Re: nova can't show or delete queued image for AttributeError

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Status: New => Fix Committed

** Changed in: nova/icehouse
Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321186

Title:
  nova can't show or delete queued image for AttributeError

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Committed

Bug description:
  steps to reproduce:
  1. run glance image-create to create a queued image
  2. run nova image-delete image-id

  it returns:
  Delete for image b31aa5dd-f07a-4748-8f15-398346887584 failed: The server has 
either erred or is incapable of performing the requested operation. (HTTP 500)

  the traceback in log file is:

  Traceback (most recent call last):
File /opt/stack/nova/nova/api/openstack/__init__.py, line 125, in __call__
  return req.get_response(self.application)
File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1296, 
in send
  application, catch_exc_info=False)
File /usr/local/lib/python2.7/dist-packages/webob/request.py, line 1260, 
in call_application
  app_iter = application(self.environ, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in 
__call__
  return resp(environ, start_response)
File 
/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py, 
line 632, in __call__
  return self.app(env, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in 
__call__
  return resp(environ, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in 
__call__
  return resp(environ, start_response)
File /usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in 
__call__
  response = self.app(environ, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 144, in 
__call__
  return resp(environ, start_response)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File /usr/local/lib/python2.7/dist-packages/webob/dec.py, line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File /opt/stack/nova/nova/api/openstack/wsgi.py, line 917, in __call__
  content_type, body, accept)
File /opt/stack/nova/nova/api/openstack/wsgi.py, line 983, in 
_process_stack
  action_result = self.dispatch(meth, request, action_args)
File /opt/stack/nova/nova/api/openstack/wsgi.py, line 1067, in dispatch
  return method(req=request, **action_args)
File /opt/stack/nova/nova/api/openstack/compute/images.py, line 139, in 
show
  image = self._image_service.show(context, id)
File /opt/stack/nova/nova/image/glance.py, line 277, in show
  base_image_meta = _translate_from_glance(image)
File /opt/stack/nova/nova/image/glance.py, line 462, in 
_translate_from_glance
  image_meta = _extract_attributes(image)
File /opt/stack/nova/nova/image/glance.py, line 530, in 
_extract_attributes
  output[attr] = getattr(image, attr)
File 
/opt/stack/python-glanceclient/glanceclient/openstack/common/apiclient/base.py,
 line 462, in __getattr__
  return self.__getattr__(k)
File 
/opt/stack/python-glanceclient/glanceclient/openstack/common/apiclient/base.py,
 line 464, in __getattr__
  raise AttributeError(k)
  AttributeError: disk_format

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321186/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304593] Re: VMware: waste of disk datastore when root disk size of instance is 0

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304593

Title:
  VMware: waste of disk datastore when root disk size of instance is 0

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  When an instance has a root disk size of 0, an extra image is created on
  the datastore (uuid.0.vmdk, which is identical to uuid.vmdk). This only
  happens in the case of a linked-clone image and wastes space on the
  datastore. The cached original image could be used instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1304593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316373] Re: Can't force delete an errored instance with no info cache

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316373

Title:
  Can't force delete an errored instance with no info cache

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  Sometimes, when an instance fails to launch for some reason, trying to
  delete it using nova delete or nova force-delete doesn't work and gives
  the following error:

  This happens when using cells, but I think it possibly isn't
  cells-related. Deleting expects an info cache to exist no matter what.
  Ideally, force delete should ignore all errors and delete the instance.
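
  A sketch of that idea (the exception name mirrors the traceback below;
  the rest is illustrative, not the actual cells fix):

  class InstanceInfoCacheNotFound(Exception):
      pass

  def refresh_for_delete(instance):
      try:
          instance.refresh()
      except InstanceInfoCacheNotFound:
          # The instance never finished building, so there is nothing to
          # refresh; deletion should proceed rather than error out.
          pass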

  
  2014-05-06 10:48:58.368 21210 ERROR nova.cells.messaging 
[req-a74c59d3-dc58-4318-87e8-0da15ca2a78d d1fa8867e42444cf8724e65fef1da549 
094ae1e2c08f4eddb444a9d9db71ab40] Error processing message locally: Info cache 
for instance bb07522b-d705-4fc8-8045-e12de2affe2e could not be found.
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging Traceback (most 
recent call last):
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/cells/messaging.py, line 200, in _process_locally
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/cells/messaging.py, line 1532, in _process_message_locally
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging return 
fn(message, **message.method_kwargs)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/cells/messaging.py, line 894, in terminate_instance
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
self._call_compute_api_with_obj(message.ctxt, instance, 'delete')
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/cells/messaging.py, line 855, in _call_compute_api_with_obj
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
instance.refresh(ctxt)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/objects/base.py, line 151, in wrapper
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging return fn(self, 
ctxt, *args, **kwargs)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/objects/instance.py, line 500, in refresh
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
self.info_cache.refresh()
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/objects/base.py, line 151, in wrapper
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging return fn(self, 
ctxt, *args, **kwargs)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/objects/instance_info_cache.py, line 103, in refresh
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
self.instance_uuid)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/objects/base.py, line 112, in wrapper
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging result = fn(cls, 
context, *args, **kwargs)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/objects/instance_info_cache.py, line 70, in 
get_by_instance_uuid
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
instance_uuid=instance_uuid)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
InstanceInfoCacheNotFound: Info cache for instance 
bb07522b-d705-4fc8-8045-e12de2affe2e could not be found.
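
  A minimal sketch of that behaviour, assuming a simplified instance object
  (force_delete_instance is hypothetical; only the exception name mirrors
  nova.exception.InstanceInfoCacheNotFound):

  # Illustrative only -- not nova's actual delete path.
  class InstanceInfoCacheNotFound(Exception):
      """Stand-in for nova.exception.InstanceInfoCacheNotFound."""

  def force_delete_instance(context, instance):
      """Delete an instance even when its network info cache is missing."""
      try:
          # A failed boot may never have written an info cache, so a
          # missing cache should not abort the delete.
          instance.refresh(context)
      except InstanceInfoCacheNotFound:
          pass
      instance.destroy(context)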

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1316373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304968] Re: Nova cpu full of instance_info_cache stack traces due to attempting to send events about deleted instances

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

** Changed in: nova/icehouse
   Milestone: None => 2014.1.3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304968

Title:
  Nova cpu full of instance_info_cache stack traces due to attempting to
  send events about deleted instances

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  The bulk of the stack traces in n-cpu come from emit_event being triggered
  on a VM delete. By the time we get to emit_event the instance is already
  deleted (we see this exception 183 times in this log, which means it
  happens on *every* compute terminate), so when we try to look up the
  instance we hit the exception found here:

  @base.remotable_classmethod
  def get_by_instance_uuid(cls, context, instance_uuid):
      db_obj = db.instance_info_cache_get(context, instance_uuid)
      if not db_obj:
          raise exception.InstanceInfoCacheNotFound(
              instance_uuid=instance_uuid)
      return InstanceInfoCache._from_db_object(context, cls(), db_obj)
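
  A minimal sketch of one way the event path could tolerate this, assuming a
  simplified handler (handle_lifecycle_event and its arguments are
  illustrative; only the exception name mirrors nova's):

  # Illustrative only -- not nova's actual event plumbing.
  class InstanceInfoCacheNotFound(Exception):
      """Stand-in for nova.exception.InstanceInfoCacheNotFound."""

  def handle_lifecycle_event(context, instance_uuid, get_info_cache, log):
      """Fetch the info cache for an event, tolerating deleted instances."""
      try:
          return get_info_cache(context, instance_uuid)
      except InstanceInfoCacheNotFound:
          # The instance was deleted between the hypervisor callback and
          # this lookup; log at debug level instead of a full stack trace.
          log.debug("Ignoring event for deleted instance %s", instance_uuid)
          return None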

  A log trace of this interaction looks like this:

  
  2014-04-08 11:14:25.475 DEBUG nova.openstack.common.lockutils 
[req-fe9db989-416e-4da0-986c-e68336e3c602 TenantUsagesTestJSON-153098759 
TenantUsagesTestJSON-953946497] Semaphore / lock released 
do_terminate_instance inner 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:252
  2014-04-08 11:14:25.907 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Got semaphore 
75da98d7-bbd5-42a2-ad6f-7a66e38977fa lock 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:168
  2014-04-08 11:14:25.907 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Got semaphore / lock do_terminate_instance inner 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:248
  2014-04-08 11:14:25.907 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Got semaphore function _lock_name at 0x41635f0 
lock /opt/stack/new/nova/nova/openstack/common/lockutils.py:168
  2014-04-08 11:14:25.908 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Got semaphore / lock _clear_events inner 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:248
  2014-04-08 11:14:25.908 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Semaphore / lock released _clear_events inner 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:252
  2014-04-08 11:14:25.928 AUDIT nova.compute.manager 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] [instance: 75da98d7-bbd5-42a2-ad6f-7a66e38977fa] 
Terminating instance
  2014-04-08 11:14:25.989 DEBUG nova.objects.instance 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Lazy-loading `system_metadata' on Instance uuid 
75da98d7-bbd5-42a2-ad6f-7a66e38977fa obj_load_attr 
/opt/stack/new/nova/nova/objects/instance.py:519
  2014-04-08 11:14:26.209 DEBUG nova.network.api 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Updating cache with info: [VIF({'ovs_interfaceid': 
None, 'network': Network({'bridge': u'br100', 'subnets': [Subnet({'ips': 
[FixedIP({'meta': {}, 'version': 4, 'type': u'fixed', 'floating_ips': [], 
'address': u'10.1.0.2'})], 'version': 4, 'meta': {u'dhcp_server': u'10.1.0.1'}, 
'dns': [IP({'meta': {}, 'version': 4, 'type': u'dns', 'address': u'8.8.4.4'})], 
'routes': [], 'cidr': u'10.1.0.0/24', 'gateway': IP({'meta': {}, 'version': 4, 
'type': u'gateway', 'address': u'10.1.0.1'})}), Subnet({'ips': [], 'version': 
None, 'meta': {u'dhcp_server': None}, 'dns': [], 'routes': [], 'cidr': None, 
'gateway': IP({'meta': {}, 'version': None, 'type': u'gateway', 'address': 
None})})], 'meta': {u'tenant_id': None, u'should_create_bridge': True, 
u'bridge_interface': u'eth0'}, 'id': u'9751787e-f41c-4299-be13-941c901f6d18', 
'label': u'private'}), 'devname': N
 one, 'qbh_params': None, 'meta': {}, 'details': {}, 'address': 
u'fa:16:3e:d8:87:38', 'active': False, 'type': u'bridge', 'id': 
u'db1ac48d-805a-45d3-9bb9-786bb5855673', 'qbg_params': None})] 
update_instance_cache_with_nw_info /opt/stack/new/nova/nova/network/api.py:74
  2014-04-08 11:14:27.661 2894 DEBUG nova.virt.driver [-] Emitting event 
<nova.virt.event.LifecycleEvent object at 0x4932e50> emit_event 
/opt/stack/new/nova/nova/virt/driver.py:1207
  

[Yahoo-eng-team] [Bug 1334142] Re: A server creation fails due to adding interface failure

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334142

Title:
  A server creation fails due to adding interface failure

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  http://logs.openstack.org/72/61972/27/gate/gate-tempest-dsvm-full/ed1ab55/logs/testr_results.html.gz

  pythonlogging:'': {{{
  2014-06-25 06:45:11,596 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 202 
POST http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers 0.295s
  2014-06-25 06:45:11,674 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 200 GET 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f
 0.077s
  2014-06-25 06:45:12,977 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 200 GET 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f
 0.300s
  2014-06-25 06:45:12,978 25675 INFO [tempest.common.waiters] State 
transition BUILD/scheduling ==> BUILD/spawning after 1 second wait
  2014-06-25 06:45:14,150 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 200 GET 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f
 0.171s
  2014-06-25 06:45:14,153 25675 INFO [tempest.common.waiters] State 
transition BUILD/spawning ==> ERROR/None after 3 second wait
  2014-06-25 06:45:14,221 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 400 
POST 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f/action
 0.066s
  2014-06-25 06:45:14,404 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 204 
DELETE 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f
 0.182s
  }}}

  Traceback (most recent call last):
File tempest/api/compute/servers/test_delete_server.py, line 97, in 
test_delete_server_while_in_verify_resize_state
  resp, server = self.create_test_server(wait_until='ACTIVE')
File tempest/api/compute/base.py, line 247, in create_test_server
  raise ex
  BadRequest: Bad request
  Details: {'message': 'The server could not comply with the request since it 
is either malformed or otherwise incorrect.', 'code': '400'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303536] Re: Live migration fails. XML error: CPU feature `wdt' specified more than once

2014-09-29 Thread Adam Gandelman
** Also affects: nova/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303536

Title:
  Live migration fails. XML error: CPU feature `wdt' specified more than
  once

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  New

Bug description:
  Description of problem
  ---

  Live migration fails.
  libvirt says XML error: CPU feature `wdt' specified more than once
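
  A minimal sketch of one possible fix direction, assuming the source host's
  CPU features arrive as a plain list of names (build_cpu_xml is
  illustrative; the real code path is _compare_cpu in
  nova/virt/libvirt/driver.py, per the traceback below):

  # Illustrative only -- shows the deduplication idea, not nova's code.
  def build_cpu_xml(model, vendor, features):
      """Build a <cpu> element, naming each feature only once.

      libvirt rejects the whole document ("CPU feature ... specified more
      than once") if a feature such as 'wdt' appears twice, which aborts
      the live-migration pre-check.
      """
      lines = ["<cpu>",
               "  <model>%s</model>" % model,
               "  <vendor>%s</vendor>" % vendor]
      seen = set()
      for name in features:
          if name in seen:
              continue  # drop duplicates such as a repeated 'wdt'
          seen.add(name)
          lines.append("  <feature policy='require' name='%s'/>" % name)
      lines.append("</cpu>")
      return "\n".join(lines)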

  Version
  -

  ii  libvirt-bin       1.2.2-0ubuntu2        amd64  programs for the libvirt library
  ii  python-libvirt    1.2.2-0ubuntu1        amd64  libvirt Python bindings
  ii  nova-compute      1:2014.1~b3-0ubuntu2  all    OpenStack Compute - compute node base
  ii  nova-compute-kvm  1:2014.1~b3-0ubuntu2  all    OpenStack Compute - compute node (KVM)
  ii  nova-cert         1:2014.1~b3-0ubuntu2  all    OpenStack Compute - certificate management

  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=14.04
  DISTRIB_CODENAME=trusty
  DISTRIB_DESCRIPTION="Ubuntu Trusty Tahr (development branch)"
  NAME=Ubuntu
  VERSION="14.04, Trusty Tahr"

  
  Test env
  --

  A two-node OpenStack Havana deployment on Ubuntu 14.04. Migrating an
  instance to the other node.

  
  Steps to Reproduce
  --
   - Migrate the instance

  
  And observe /var/log/nova/compute.log and /var/log/libvirt.log

  Actual results
  --

  /var/log/nova-conductor.log

  2014-04-04 13:42:17.128 3294 ERROR oslo.messaging._drivers.common [-] 
['Traceback (most recent call last):\n', '  File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 133, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 176, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
122, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py, line 139, in 
inner\nreturn func(*args, **kwargs)\n', '  File 
/usr/lib/python2.7/dist-packages/nova/conductor/manager.py, line 668, in 
migrate_server\nblock_migration, disk_over_commit)\n', '  File 
/usr/lib/python2.7/dist-packages/nova/conductor/manager.py, line 769, in 
_live_migrate\nraise exception.MigrationError(reason=ex)\n'
 , 'MigrationError: Migration error: Remote error: libvirtError XML error: CPU 
feature `wdt\' specified more than once\n[u\'Traceback (most recent call 
last):\\n\', u\'  File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 133, 
in _dispatch_and_reply\\nincoming.message))\\n\', u\'  File 
/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 176, 
in _dispatch\\nreturn self._do_dispatch(endpoint, method, ctxt, args)\\n\', 
u\'  File /usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, 
line 122, in _do_dispatch\\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\\n\', u\'  File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 88, in wrapped\\n
payload)\\n\', u\'  File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__\\nsix.reraise(self.type_, self.value, self.tb)\\n\', u\'  File 
/usr/lib/python2.7/dist-packages/nova/exception.py, line 71, in wrapped\\n  
   return f(self, context, *args, **kw)\\n\', u\'  File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 272, in 
decorated_function\\ne, sys.exc_info())\\n\', u\'  File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__\\nsix.reraise(self.type_, self.value, self.tb)\\n\', u\'  File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 259, in 
decorated_function\\nreturn function(self, context, *args, **kwargs)\\n\', 
u\'  File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 
4159, in check_can_live_migrate_destination\\nblock_migration, 
disk_over_commit)\\n\', u\'  File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 4094, in 
check_can_live_migrate_destination\\n
self._compare_cpu(source_cpu_info)\\n\', u\'  File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 4236, in 
_compare_cpu\\nLOG.error(m, {\\\'ret\\\': ret, \\\'u\\\': u})\\n\', u\'
   File /usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, 
line 68, in __exit__\\n
