[Yahoo-eng-team] [Bug 1486335] Re: Create nova.conf with tox -egenconfig : ValueError: (Expected ',' or end-of-list in, Routes!=2.0,!=2.1,>=1.12.3; python_version=='2.7', 'at', ; python_version==

2015-08-26 Thread Davanum Srinivas (DIMS)
please update pip/pbr/setuptools and reopen this bug if needed

** Changed in: nova
   Status: Incomplete => Won't Fix

** Changed in: nova
 Assignee: (unassigned) => Davanum Srinivas (DIMS) (dims-v)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486335

Title:
  Create nova.conf with tox -egenconfig : ValueError: (Expected ','
  or end-of-list in,
  Routes!=2.0,!=2.1,>=1.12.3;python_version=='2.7', 'at',
  ;python_version=='2.7')

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  $git clone https://git.openstack.org/openstack/nova.git
  $pip install tox
  $tox -egenconfig

  
  cmdargs: [local('/home/ubuntu/nova/.tox/genconfig/bin/pip'), 'install', '-U', 
'--force-reinstall', '-r/home/ubuntu/nova/requirements.txt', 
'-r/home/ubuntu/nova/test-requirements.txt']
  env: {'LC_ALL': 'en_US.utf-8', 'XDG_RUNTIME_DIR': '/run/user/1000', 
'VIRTUAL_ENV': '/home/ubuntu/nova/.tox/genconfig', 'LESSOPEN': '| 
/usr/bin/lesspipe %s', 'SSH_CLIENT': '27.189.208.43 5793 22', 'LOGNAME': 
'ubuntu', 'USER': 'ubuntu', 'HOME': '/home/ubuntu', 'PATH': 
'/home/ubuntu/nova/.tox/genconfig/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games',
 'XDG_SESSION_ID': '25', '_': '/usr/local/bin/tox', 'SSH_CONNECTION': 
'27.189.208.43 5793 10.0.0.18 22', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm', 
'SHELL': '/bin/bash', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'LANGUAGE': 
'en_US', 'SHLVL': '1', 'SSH_TTY': '/dev/pts/5', 'OLDPWD': '/home/ubuntu', 
'PWD': '/home/ubuntu/nova', 'PYTHONHASHSEED': '67143794', 'OS_TEST_PATH': 
'./nova/tests/unit', 'MAIL': '/var/mail/ubuntu', 'LS_COLORS': 
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tg
 
z=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36
 
:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:'}

  Exception:
  Traceback (most recent call last):
    File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
      status = self.run(options, args)
    File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/commands/install.py", line 262, in run
      for req in parse_requirements(filename, finder=finder, options=options, session=session):
    File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/req.py", line 1631, in parse_requirements
      req = InstallRequirement.from_line(line, comes_from, prereleases=getattr(options, "pre", None))
    File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/req.py", line 172, in from_line
      return cls(req, comes_from, url=url, prereleases=prereleases)
    File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/req.py", line 70, in __init__
      req = pkg_resources.Requirement.parse(req)
    File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/_vendor/pkg_resources.py", line 2606, in parse
      reqs = list(parse_requirements(s))
    File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/_vendor/pkg_resources.py", line 2544, in parse_requirements
      line, p, specs = scan_list(VERSION, LINE_END, line, p, (1, 2), "version spec")
    File "/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/_vendor/pkg_resources.py", line 2522, in scan_list
      raise ValueError("Expected ',' or end-of-list in", line, "at", line[p:])
  ValueError: ("Expected ',' or end-of-list in", "Routes!=2.0,!=2.1,>=1.12.3;python_version=='2.7'", 'at', ";python_version=='2.7'")

  Storing debug log for failure in /home/ubuntu/.pip/pip.log
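
  The failure above comes from the virtualenv's pip, whose vendored
  pkg_resources predates environment-marker support, so any requirement
  line carrying ;python_version=='2.7' fails to parse; hence the advice to
  upgrade pip/pbr/setuptools. A minimal sketch of the failing call,
  assuming a 2015-era setuptools (current versions parse the marker fine):

      # Feed the offending requirements.txt line straight to pkg_resources.
      # Old vendored copies raise the ValueError quoted above; any recent
      # setuptools parses it cleanly.
      import pkg_resources

      line = "Routes!=2.0,!=2.1,>=1.12.3;python_version=='2.7'"
      try:
          print(pkg_resources.Requirement.parse(line))
      except ValueError as exc:
          print("old pkg_resources cannot handle environment markers:", exc)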

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1486335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1433397] Re: OpenContrail plugin code split

2015-08-26 Thread Armando Migliaccio
This has shown no heartbeat for ages...we'll probably end up getting rid
of the entire plugin from Mitaka.

** Changed in: neutron
Milestone: liberty-3 => None

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433397

Title:
  OpenContrail plugin code split

Status in neutron:
  Invalid

Bug description:
  bug report for tracking OpenContrail plugin code split

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1433397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489125] [NEW] FWaaS Long name or description causes 500

2015-08-26 Thread James Arendt
Public bug reported:

The FWaaS REST interface is not checking string lengths, so a name
greater than 255 characters or a description greater than 1024 will
cause creations to fail with a 500 INTERNAL SERVER error due to a
"Data too long for column" DB error internally.

$ neutron firewall-create --name
NameLongerThan255Characters01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
testpolicy

Request Failed: internal server error while processing your request.


Too-long names are a user issue, so with validation the user would
receive a better, user-focused response:

neutron firewall-create --name
NameLongerThan255Characters01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
testpolicy

Invalid input for name. Reason:
'NameLongerThan255Characters01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789'
exceeds maximum length of 255.

** Affects: neutron
 Importance: Undecided
 Assignee: James Arendt (james-arendt-7)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => James Arendt (james-arendt-7)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489125

Title:
  FWaaS Long name or description causes 500

Status in neutron:
  New

Bug description:
  The FWaaS REST interface is not checking string lengths, so a name
  greater than 255 characters or a description greater than 1024 will
  cause creations to fail with a 500 INTERNAL SERVER error due to a
  "Data too long for column" DB error internally.

  $ neutron firewall-create --name
  
NameLongerThan255Characters01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
  testpolicy

  Request Failed: internal server error while processing your request.

  
  Too-long names are a user issue, so with validation the user would
  receive a better, user-focused response:

  neutron firewall-create --name
  
NameLongerThan255Characters01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
  testpolicy

  Invalid input for name. Reason:
  
'NameLongerThan255Characters01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789'
  exceeds maximum length of 255.
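
  A hedged sketch of the usual neutron-style fix: declare maximum lengths
  in the extension's attribute map so the API layer rejects oversize input
  with a 400 before it reaches the database. The entries below follow
  neutron conventions but are illustrative, not the exact FWaaS patch:

      # Illustrative RESOURCE_ATTRIBUTE_MAP entries; the validator bounds
      # the strings to match the DB column sizes.
      RESOURCE_ATTRIBUTE_MAP = {
          'firewalls': {
              'name': {'allow_post': True, 'allow_put': True,
                       'validate': {'type:string': 255},
                       'default': '', 'is_visible': True},
              'description': {'allow_post': True, 'allow_put': True,
                              'validate': {'type:string': 1024},
                              'default': '', 'is_visible': True},
          },
      }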

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489126] [NEW] Filtering by tags is broken in v3

2015-08-26 Thread Mike Fedosin
Public bug reported:

When I want to filter the list of artifacts by tag, I get a 500 error:

http://localhost:9292/v3/artifacts/myartifact/v2.0/drafts?tag=hyhyhy

<html>
 <head>
  <title>500 Internal Server Error</title>
 </head>
 <body>
  <h1>500 Internal Server Error</h1>
  The server has either erred or is incapable of performing the requested operation.<br /><br />


 </body>
</html>

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1489126

Title:
  Filtering by tags is broken in v3

Status in Glance:
  New

Bug description:
  When I want to filter the list of artifacts by tag, I get a 500 error:

  http://localhost:9292/v3/artifacts/myartifact/v2.0/drafts?tag=hyhyhy

  <html>
   <head>
    <title>500 Internal Server Error</title>
   </head>
   <body>
    <h1>500 Internal Server Error</h1>
    The server has either erred or is incapable of performing the requested operation.<br /><br />


   </body>
  </html>

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1489126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360252] Re: testtools 0.9.36+ breaks unit tests for multiple projects

2015-08-26 Thread Aditi Rajagopal
A fix was released in Havana as per this patch -
https://review.openstack.org/#/c/117037/3

A fix was merged into master as per this patch -
https://review.openstack.org/#/c/70668/

Neutron currently uses testtools>=1.4.0 (as per test-requirements.txt)

** Changed in: neutron
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360252

Title:
  testtools 0.9.36+ breaks unit tests for multiple projects

Status in Glance:
  Fix Released
Status in neutron:
  Fix Released
Status in sqlalchemy-migrate:
  Fix Committed

Bug description:
  Tests fails as in [1]:

  Traceback (most recent call last):
    File "neutron/tests/unit/services/loadbalancer/test_loadbalancer_plugin.py", line 84, in setUp
      super(LoadBalancerExtensionTestCase, self).setUp()
    File "neutron/tests/unit/testlib_api.py", line 56, in setUp
      super(WebTestCase, self).setUp()
    File "neutron/tests/base.py", line 52, in setUp
      super(BaseTestCase, self).setUp()
    File "/home/jenkins/workspace/gate-neutron-python26/.tox/py26/lib/python2.6/site-packages/testtools/testcase.py", line 663, in setUp
      % (sys.modules[self.__class__.__module__].__file__,))
  ValueError: In File: neutron/tests/unit/services/loadbalancer/test_loadbalancer_plugin.pyc
  TestCase.setUp was already called. Do not explicitly call setUp from your
  tests. In your own setUp, use super to call the base setUp.

  This is due to the following check in new testtools [2].

  [1]: 
http://logs.openstack.org/53/108453/3/check/gate-neutron-python26/9f8d04e/testr_results.html.gz
  
[2]:https://github.com/testing-cabal/testtools/commit/5c3b92d90a64efaecdc4010a98002bfe8b888517
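
  For reference, a minimal sketch of the setUp discipline the new check
  enforces (class names are illustrative):

      import testtools

      class BaseTestCase(testtools.TestCase):
          def setUp(self):
              # chain up exactly once; testtools now errors if setUp runs twice
              super(BaseTestCase, self).setUp()

      class MyTest(BaseTestCase):
          def setUp(self):
              super(MyTest, self).setUp()  # never call self.setUp() directly

          def test_something(self):
              self.assertTrue(True)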

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1360252/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467008] Re: Unit tests fail with sqlalchemy 1.0+

2015-08-26 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.db
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1467008

Title:
  Unit tests fail with sqlalchemy 1.0+

Status in Glance:
  Fix Released
Status in Keystone:
  Fix Released
Status in oslo.db:
  Invalid

Bug description:
  Unit tests fail with sqlalchemy 1.0+. See
  https://review.openstack.org/#/c/190405/ , which tries to upgrade the
  requirement and fails.

  2015-06-16 19:32:44.080 | 
keystone.tests.unit.test_sql_upgrade.SqlUpgradeTests.test_region_url_upgrade
  2015-06-16 19:32:44.080 | 

  2015-06-16 19:32:44.080 |
  2015-06-16 19:32:44.080 | Captured traceback:
  2015-06-16 19:32:44.081 | ~~~
  2015-06-16 19:32:44.081 | Traceback (most recent call last):
  2015-06-16 19:32:44.081 |   File 
keystone/tests/unit/test_sql_upgrade.py, line 196, in tearDown
  2015-06-16 19:32:44.081 | conn.execute(schema.DropConstraint(fkc))
  2015-06-16 19:32:44.081 |   File 
/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 914, in execute
  2015-06-16 19:32:44.081 | return meth(self, multiparams, params)
  2015-06-16 19:32:44.081 |   File 
/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py,
 line 68, in _execute_on_connection
  2015-06-16 19:32:44.081 | return connection._execute_ddl(self, 
multiparams, params)
  2015-06-16 19:32:44.081 |   File 
/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 968, in _execute_ddl
  2015-06-16 19:32:44.081 | compiled
  2015-06-16 19:32:44.081 |   File 
/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 1146, in _execute_context
  2015-06-16 19:32:44.082 | context)
  2015-06-16 19:32:44.082 |   File 
/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 1337, in _handle_dbapi_exception
  2015-06-16 19:32:44.082 | util.raise_from_cause(newraise, exc_info)
  2015-06-16 19:32:44.082 |   File 
/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py,
 line 199, in raise_from_cause
  2015-06-16 19:32:44.082 | reraise(type(exception), exception, 
tb=exc_tb)
  2015-06-16 19:32:44.082 |   File 
/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 1139, in _execute_context
  2015-06-16 19:32:44.082 | context)
  2015-06-16 19:32:44.082 |   File 
/home/jenkins/workspace/gate-keystone-python27/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py,
 line 450, in do_execute
  2015-06-16 19:32:44.082 | cursor.execute(statement, parameters)
  2015-06-16 19:32:44.082 | sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "DROP": syntax error [SQL: u'ALTER TABLE "group" DROP CONSTRAINT fk_group_domain_id']
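
  The failure is a SQLite limitation rather than a SQLAlchemy bug: SQLite's
  ALTER TABLE grammar has no DROP CONSTRAINT. A minimal reproduction,
  independent of SQLAlchemy:

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
      try:
          conn.execute("ALTER TABLE t DROP CONSTRAINT fk_something")
      except sqlite3.OperationalError as exc:
          print(exc)  # near "DROP": syntax error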

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1467008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1319232] Re: Periodic tasks run too frequently

2015-08-26 Thread Matt Riedemann
Cinder was fixed back in Juno: https://review.openstack.org/#/c/96512/

** Changed in: cinder
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1319232

Title:
  Periodic tasks run too frequently

Status in Cinder:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Fix Released

Bug description:
  Each periodic task can have a spacing, which defines the minimum
  amount of time between executions of that task.  For example, a task
  with periodic_spacing=120 would execute no more often than once every
  2 minutes.  Tasks that do not define an explicit spacing will be run
  every time the periodic task processor runs.  This is commonly loosely
  interpreted as "every 60 seconds", but in reality it's more
  complicated than that.

  As a result of these complications, we can actually end up running
  these tasks more frequently -- I've regularly observed them running
  every 20-30 seconds, and in several cases I've seen a task running
  just 1-2 seconds after it previously ran.  This consumes extra
  resources (CPU, database access, etc) without providing any real
  value.

  The reason for these extra runs has to do with how the periodic task
  processor is implemented.  When there are multiple tasks with a
  defined spacing, they can get somewhat staggered and force the
  periodic task processor to run additional iterations.  Since tasks
  with no spacing run every time the periodic task processor runs, they
  get run more frequently than one would expect.

  
  My proposed solution is to redefine the behavior of periodic tasks with no 
explicit spacing so that they run with the default interval (60 seconds).  The 
code change is simple -- in nova/openstack/common/periodic_task.py, change this 
code:

  # A periodic spacing of zero indicates that this task should
  # be run every pass
  if task._periodic_spacing == 0:
      task._periodic_spacing = None

  to:

  # A periodic spacing of zero indicates that this task should
  # be run at the default interval
  if task._periodic_spacing == 0:
      task._periodic_spacing = DEFAULT_INTERVAL

  The actual runtime task processing code doesn't change -- this fix is
  basically the equivalent of finding every @periodic_task that doesn't
  have an explicit spacing, and setting spacing=60.  So it's very low
  risk.  Some may argue that this change in behavior could cause some
  task to behave differently than it used to.  However, there was never
  any guarantee that the task would run more often than every 60
  seconds, and in many cases the tasks may already run less frequently
  than that (due to other long-running tasks).  So this change should
  not introduce any new issues related to the timing of task execution;
  it would only serve to make the timing more regular.
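
  For context, a hedged sketch of how spacing is declared at the decorator
  level (the module lived in nova/openstack/common at the time; today the
  equivalent code is in oslo.service):

      from oslo_service import periodic_task

      class ComputeTasks(periodic_task.PeriodicTasks):
          @periodic_task.periodic_task(spacing=120)
          def explicit_task(self, context):
              pass  # runs at most once every 2 minutes

          @periodic_task.periodic_task
          def implicit_task(self, context):
              pass  # no spacing; this bug proposes defaulting it to 60s
                    # instead of "every pass of the processor"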

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1319232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489118] [NEW] Tests fail with local keystone.conf modifications

2015-08-26 Thread Brant Knudson
Public bug reported:


When there are changes in the local config (/etc/keystone/keystone.conf) some 
tests fail.

For example,
keystone.tests.unit.test_backend_ldap.MultiLDAPandSQLIdentityDomainConfigsInSQL.test_update_user_enable_fails,
fails.

The tests should not be affected by the local config file.
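
A hedged sketch of one way to keep unit tests isolated from the system
config file, using the oslo.config test fixture (the registered option is
illustrative):

    from oslo_config import cfg
    from oslo_config import fixture as config_fixture
    import testtools

    class IsolatedConfigTest(testtools.TestCase):
        def setUp(self):
            super(IsolatedConfigTest, self).setUp()
            conf = cfg.ConfigOpts()
            conf.register_opt(cfg.BoolOpt('debug', default=False))
            # empty default_config_files: never read /etc/keystone/keystone.conf
            conf([], project='keystone', default_config_files=[])
            self.config_fixture = self.useFixture(config_fixture.Config(conf))

        def test_override_is_local(self):
            self.config_fixture.config(debug=True)
            self.assertTrue(self.config_fixture.conf.debug)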

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Brant Knudson (blk-u)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1489118

Title:
  Tests fail with local keystone.conf modifications

Status in Keystone:
  New

Bug description:
  
  When there are changes in the local config (/etc/keystone/keystone.conf) some 
tests fail.

  For example,
  
keystone.tests.unit.test_backend_ldap.MultiLDAPandSQLIdentityDomainConfigsInSQL.test_update_user_enable_fails,
  fails.

  The tests should not be affected by the local config file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1489118/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462305] Re: multi-node test causes nova-compute to lockup

2015-08-26 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Confirmed

** Changed in: nova
 Assignee: (unassigned) => Davanum Srinivas (DIMS) (dims-v)

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1462305

Title:
  multi-node test causes nova-compute to lockup

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  It's not very clear what's going on here, but here is the symptom.

  One of the nova-compute nodes appears to lock up:
  
http://logs.openstack.org/67/175067/2/check/check-tempest-dsvm-multinode-full/7a95fb0/logs/screen-n-cpu.txt.gz#_2015-05-29_23_27_48_296
  It was just completing the termination of an instance:
  
http://logs.openstack.org/67/175067/2/check/check-tempest-dsvm-multinode-full/7a95fb0/logs/screen-n-cpu.txt.gz#_2015-05-29_23_27_48_153

  This is also seen in the scheduler reporting the node as down:
  
http://logs.openstack.org/67/175067/2/check/check-tempest-dsvm-multinode-full/7a95fb0/logs/screen-n-sch.txt.gz#_2015-05-29_23_31_02_711

  On further inspection it seems like the other nova compute node had just 
started a migration:
  
http://logs.openstack.org/67/175067/2/check/check-tempest-dsvm-multinode-full/7a95fb0/logs/subnode-2/screen-n-cpu.txt.gz#_2015-05-29_23_27_48_079

  
  We have had issues in the past where oslo.locks can lead to deadlocks;
  it's not totally clear if that's happening here. All the periodic tasks
  happen in the same greenlet, so you can stop them from happening if you
  hold a lock in an RPC call that's being processed, etc. No idea if
  that's happening here though.
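
  To illustrate the pattern being described, a generic sketch (not nova
  code) of two call paths sharing one oslo.concurrency lock:

      import time

      from oslo_concurrency import lockutils

      @lockutils.synchronized('compute-resources')
      def rpc_handler():
          time.sleep(60)  # stands in for slow work done while holding the lock

      @lockutils.synchronized('compute-resources')
      def periodic_task():
          pass  # cannot start until rpc_handler releases the lock; since all
                # periodic tasks share one greenthread, work queues up behind it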

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1462305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461750] Re: two unit test cases not pass for kilo version

2015-08-26 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461750

Title:
  two unit test cases not pass for kilo version

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  1. environment  version
  redhat 7
  nova-2015.1.0

  2. log
  ==
  FAIL: 
nova.tests.unit.test_versions.VersionTestCase.test_version_string_with_package_is_good
  --
  Traceback (most recent call last):
  testtools.testresult.real._StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
    File "nova/tests/unit/test_versions.py", line 33, in test_version_string_with_package_is_good
      version.version_string_with_package())
    File "/home/jenkins/workspace/nova_TECS2.0_unittest/.venv/lib/python2.7/site-packages/testtools/testcase.py", line 350, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/home/jenkins/workspace/nova_TECS2.0_unittest/.venv/lib/python2.7/site-packages/testtools/testcase.py", line 435, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: '5.5.5.5-g9ec3421' != '2015.1.0-g9ec3421'

  
  ==
  FAIL: 
nova.tests.unit.test_wsgi.TestWSGIServerWithSSL.test_app_using_ipv6_and_ssl
  --
  Traceback (most recent call last):
  testtools.testresult.real._StringException: Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2015-06-04 11:57:50,550 INFO [nova.wsgi] fake_ssl listening on ::1:55722
  2015-06-04 11:57:50,556 INFO [nova.fake_ssl.wsgi.server] (28339) wsgi 
starting up on https://::1:55722/
  2015-06-04 11:57:50,576 INFO [nova.fake_ssl.wsgi.server] ::1 GET / HTTP/1.1 
status: 200 len: 155 time: 0.0004590
  2015-06-04 11:57:50,577 INFO [nova.wsgi] Stopping WSGI server.
  }}}

  Traceback (most recent call last):
    File "nova/tests/unit/test_wsgi.py", line 335, in test_app_using_ipv6_and_ssl
      server.wait()
    File "nova/wsgi.py", line 270, in wait
      self._pool.waitall()
    File "/home/jenkins/workspace/nova_TECS2.0_unittest/.venv/lib/python2.7/site-packages/eventlet/greenpool.py", line 120, in waitall
      self.no_coros_running.wait()
    File "/home/jenkins/workspace/nova_TECS2.0_unittest/.venv/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
      return hubs.get_hub().switch()
    File "/home/jenkins/workspace/nova_TECS2.0_unittest/.venv/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
      return self.greenlet.switch()
    File "/home/jenkins/workspace/nova_TECS2.0_unittest/.venv/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 346, in run
      self.wait(sleep_time)
    File "/home/jenkins/workspace/nova_TECS2.0_unittest/.venv/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 85, in wait
      presult = self.do_poll(seconds)
    File "/home/jenkins/workspace/nova_TECS2.0_unittest/.venv/lib/python2.7/site-packages/eventlet/hubs/epolls.py", line 62, in do_poll
      return self.poll.poll(seconds)
    File "/home/jenkins/workspace/nova_TECS2.0_unittest/.venv/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
      raise TimeoutException()
  fixtures._fixtures.timeout.TimeoutException

  3. Reproduce steps:
  1) virtual environment install
  2) pip install for requirements.txt and test-requirements.txt
  3) ./run_tests.sh

  Expected result: all cases passed
  Actual result: two cases failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489105] [NEW] group membership lookup does not support posixGroup (RFC2307)

2015-08-26 Thread Guang Yee
Public bug reported:

Our LDAP "lookup users in group" logic assumes that the member attribute
contains the user DN.

https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap.py#L168

However, this is not the case for posixGroup (RFC 2307) where the
memberUid is really the uid of the user, not the DN.

Similarly, when looking up groups for a user, we are assuming the member
attribute contains the user DN

https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap.py#L364

This is not the case for posixAccount where user group membership is
done via uidNumber. In this case, we should first lookup the uidNumber,
then use it to construct the LDAP query to lookup the groups for the
user.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1489105

Title:
  group membership lookup does not support posixGroup (RFC2307)

Status in Keystone:
  New

Bug description:
  Our LDAP "lookup users in group" logic assumes that the member attribute
  contains the user DN.

  
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap.py#L168

  However, this is not the case for posixGroup (RFC 2307) where the
  memberUid is really the uid of the user, not the DN.

  Similarly, when looking up groups for a user, we are assuming the
  member attribute contains the user DN

  
https://github.com/openstack/keystone/blob/master/keystone/identity/backends/ldap.py#L364

  This is not the case for posixAccount where user group membership is
  done via uidNumber. In this case, we should first lookup the
  uidNumber, then use it to construct the LDAP query to lookup the
  groups for the user.
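
  A hedged sketch of the two membership queries (base DNs, uids and
  attribute names are illustrative):

      # groupOfNames-style lookup, which the current code assumes: the
      # membership attribute holds the user's full DN.
      dn_filter = "(member=uid=alice,ou=Users,dc=example,dc=com)"

      # posixGroup (RFC 2307): memberUid holds the bare uid, so the filter
      # must be built from the uid value instead of the DN.
      posix_filter = "(memberUid=alice)"

      # e.g. with python-ldap, against a bound connection `conn`:
      # conn.search_s("ou=Groups,dc=example,dc=com",
      #               ldap.SCOPE_SUBTREE, posix_filter)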

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1489105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489138] [NEW] Add devref documentation to distributed DHCP feature

2015-08-26 Thread Gal Sagie
Public bug reported:

The distributed DHCP implementation in the reference implementation is
based on an abandoned spec and an approved bug; the description of the
feature and its limitations needs to be organised and well documented.

implementation: https://review.openstack.org/#/c/184423/

** Affects: neutron
 Importance: Undecided
 Assignee: Gal Sagie (gal-sagie)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Gal Sagie (gal-sagie)

** Description changed:

  Distributed DHCP implementation in the reference implementation is based
  on an abandoned spec and approved bug, the description of the feature
  and its limitations needs to be organised and well documented
+ 
+ implementation: https://review.openstack.org/#/c/184423/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489138

Title:
  Add devref documentation to distributed DHCP feature

Status in neutron:
  New

Bug description:
  The distributed DHCP implementation in the reference implementation is
  based on an abandoned spec and an approved bug; the description of the
  feature and its limitations needs to be organised and well documented.

  implementation: https://review.openstack.org/#/c/184423/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488764] [NEW] Create IPSec site connection with IPSec policy that specifies AH-ESP protocol error

2015-08-26 Thread Dongcan Ye
Public bug reported:

Creating an IPSec site connection with an IPSec policy that specifies the
AH-ESP protocol leads to the following error:


2015-08-26 13:29:10.976 ERROR neutron.agent.linux.utils 
[req-7b4a7ccc-286e-4267-9d50-d84afa5b5663 demo 
99b8d178a6784d749920414ac08bce66] 
Command: ['ip', 'netns', 'exec', 
u'qrouter-552bb850-4b33-4bf9-8d6a-c7f47f6e2d27', 'ipsec', 'addconn', 
'--ctlbase', 
u'/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/var/run/pluto.ctl',
 '--defaultroutenexthop', u'172.24.4.3', '--config', 
u'/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/etc/ipsec.conf',
 u'a9587a5c-ff6e-4257-89c1-475300fc8622']
Exit code: 34
Stdin: 
Stdout: 034 Must do at AH or ESP, not neither. 

Stderr: WARNING: /opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-
c7f47f6e2d27/etc/ipsec.co

2015-08-26 13:29:10.976 ERROR neutron_vpnaas.services.vpn.device_drivers.ipsec 
[req-7b4a7ccc-286e-4267-9d50-d84afa5b5663 demo 
99b8d178a6784d749920414ac08bce66] Failed to enable vpn process on router 
552bb850-4b33-4bf9-8d6a-c7f47f6e2d27
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Traceback (most recent call last):
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File 
/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py,
 line 251, in enable
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   self.start()
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File 
/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py,
 line 433, in start
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   ipsec_site_conn['id']
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File 
/opt/stack/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py,
 line 332, in _execute
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   extra_ok_codes=extra_ok_codes)
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File /opt/stack/neutron/neutron/agent/linux/ip_lib.py, line 719, in execute
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   extra_ok_codes=extra_ok_codes, **kwargs)
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
 File /opt/stack/neutron/neutron/agent/linux/utils.py, line 153, in execute
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec  
   raise RuntimeError(m)
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
RuntimeError: 
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Command: ['ip', 'netns', 'exec', 
u'qrouter-552bb850-4b33-4bf9-8d6a-c7f47f6e2d27', 'ipsec', 'addconn', 
'--ctlbase', 
u'/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/var/run/pluto.ctl',
 '--defaultroutenexthop', u'172.24.4.3', '--config', 
u'/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/etc/ipsec.conf',
 u'a9587a5c-ff6e-4257-89c1-475300fc8622']
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Exit code: 34
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Stdin: 
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Stdout: 034 Must do at AH or ESP, not neither. 
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
Stderr: WARNING: 
/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/etc/ipsec.co
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec 
2015-08-26 13:29:10.976 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec


It seems Openswan doesn't support AH-ESP combined.
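
Given that, a hedged sketch of a driver-side guard that fails fast with a
clear message instead of letting 'ipsec addconn' exit with code 34 (the
function and field names are illustrative):

    OPENSWAN_SUPPORTED_TRANSFORM_PROTOCOLS = ('esp', 'ah')

    def validate_ipsec_policy(ipsecpolicy):
        protocol = ipsecpolicy['transform_protocol']
        if protocol not in OPENSWAN_SUPPORTED_TRANSFORM_PROTOCOLS:
            raise ValueError(
                "transform_protocol %r is not supported by the Openswan "
                "driver; use 'esp' or 'ah'" % protocol)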

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488764

Title:
  Create IPSec site connection with IPSec policy that specifies AH-ESP
  protocol error

Status in neutron:
  New

Bug description:
  Creating an IPSec site connection with an IPSec policy that specifies the
  AH-ESP protocol leads to the following error:

  
  2015-08-26 13:29:10.976 ERROR neutron.agent.linux.utils 
[req-7b4a7ccc-286e-4267-9d50-d84afa5b5663 demo 
99b8d178a6784d749920414ac08bce66] 
  Command: ['ip', 'netns', 'exec', 
u'qrouter-552bb850-4b33-4bf9-8d6a-c7f47f6e2d27', 'ipsec', 'addconn', 
'--ctlbase', 
u'/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/var/run/pluto.ctl',
 '--defaultroutenexthop', u'172.24.4.3', '--config', 
u'/opt/stack/data/neutron/ipsec/552bb850-4b33-4bf9-8d6a-c7f47f6e2d27/etc/ipsec.conf',
 u'a9587a5c-ff6e-4257-89c1-475300fc8622']

[Yahoo-eng-team] [Bug 1488771] [NEW] multiple deletes in firewall tempest case: test_create_show_delete_firewall cause l3-agent to throw unexpected exception: FirewallNotFound.

2015-08-26 Thread Zhao Yi
Public bug reported:

In the kilo or icehouse release, multiple deletes in the firewall tempest
case test_create_show_delete_firewall cause the l3-agent to throw an
unexpected exception: FirewallNotFound.

I am running tempest against the kilo release; after running the neutron
case test_create_show_delete_firewall, my l3-agent reports the following
errors and exceptions:

In this tempest case:
I found that delete firewall is called twice; the second delete_firewall
(in the addCleanup method) is called immediately after the first
(self.client.delete_firewall).
This looks like an async call locking problem; I don't know if the current
log/implementation/behavior is expected or unexpected.

==
Tempest test case in the file: tempest/api/network/test_fwaas_extensions.py:
==
def test_create_show_delete_firewall(self):
...
self.addCleanup(self._try_delete_firewall, firewall_id)
...
self.client.delete_firewall(firewall_id)

==
my l3-agent log:
==
2015-08-25 08:34:00.420 31255 INFO neutron.wsgi [req-9cc36e3b-e209-4d95-bb40-9e1012a89621 ] 10.133.5.167 - - [25/Aug/2015 08:34:00] "DELETE /v2.0/fw/firewalls/2b3102d9-1925-47b3-bca3-a8cd0296cc8c HTTP/1.1" 204 168 0.237354  <-- First Delete FW call
...
2015-08-25 08:34:00.725 31255 INFO neutron.wsgi [req-795bcbcf-5fde-43d6-8a66-5e2b3fdad44f ] 10.133.5.167 - - [25/Aug/2015 08:34:00] "DELETE /v2.0/fw/firewalls/2b3102d9-1925-47b3-bca3-a8cd0296cc8c HTTP/1.1" 204 168 0.299331  <-- Second Delete FW call
...
2015-08-25 08:34:01.069 31255 DEBUG neutron_fwaas.db.firewall.firewall_db [req-9cc36e3b-e209-4d95-bb40-9e1012a89621 ] delete_firewall() called delete_firewall /usr/lib/python2.7/site-packages/neutron_fwaas/db/firewall/firewall_db.py:318  <-- First Delete FW database operation
...
2015-08-25 08:34:01.098 31255 ERROR oslo_messaging.rpc.dispatcher [req-9cc36e3b-e209-4d95-bb40-9e1012a89621 ] Exception during message handling: Firewall 2b3102d9-1925-47b3-bca3-a8cd0296cc8c could not be found.  <-- Second Delete FW throws exception
2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 142, 
in _dispatch_and_reply
2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 186, 
in _dispatch
2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 130, 
in _do_dispatch
2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/fwaas_plugin.py,
 line 67, in firewall_deleted
2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher fw_db = 
self.plugin._get_firewall(context, firewall_id)
2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/site-packages/neutron_fwaas/db/firewall/firewall_db.py, 
line 101, in _get_firewall
2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher raise 
fw_ext.FirewallNotFound(firewall_id=id)
2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher 
FirewallNotFound: Firewall 2b3102d9-1925-47b3-bca3-a8cd0296cc8c could not be 
found.
2015-08-25 08:34:01.098 31255 TRACE oslo_messaging.rpc.dispatcher
2015-08-25 08:34:01.098 31255 ERROR oslo_messaging._drivers.common 
[req-9cc36e3b-e209-4d95-bb40-9e1012a89621 ] Returning exception Firewall 
2b3102d9-1925-47b3-bca3-a8cd0296cc8c could not be found. to caller
2015-08-25 08:34:01.099 31255 ERROR oslo_messaging._drivers.common 
[req-9cc36e3b-e209-4d95-bb40-9e1012a89621 ] ['Traceback (most recent call 
last):\n', '  File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 142, 
in _dispatch_and_reply\nexecutor_callback))\n', '  File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 186, 
in _dispatch\nexecutor_callback)\n', '  File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py, line 130, 
in _do_dispatch\nresult = func(ctxt, **new_args)\n', '  File 
/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/fwaas_plugin.py,
 line 67, in firewall_deleted\nfw_db = self.plugin._get_firewall(context, 
firewall_id)\n', '  File 
/usr/lib/python2.7/site-packages/neutron_fwaas/db/firewall/firewall_db.py, 
line 101, in _get_firewall\nraise 
fw_ext.FirewallNotFound(firewall_id=id)\n', 'FirewallNotFound: Firewall 
2b3102d9-1925-47b3-bca3-a8cd0296cc8c could not be found.\n']
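
A hedged sketch of making the RPC callback idempotent, so a duplicate
delete is ignored rather than raised back to the l3-agent (the names
follow the traceback above; the handling itself is illustrative):

    def firewall_deleted(self, context, firewall_id, **status_info):
        try:
            fw_db = self.plugin._get_firewall(context, firewall_id)
        except fw_ext.FirewallNotFound:
            # the row is already gone; treat the duplicate notification
            # as a no-op instead of propagating the exception
            return True
        # ... continue the normal delete path with fw_db ...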

** Affects: neutron
 Importance: Undecided
 

[Yahoo-eng-team] [Bug 1489061] [NEW] fernet token validation is slow

2015-08-26 Thread Matt Fischer
Public bug reported:

keystone fernet token validation operations are much slower than uuid
operations.  The performance is up to 4x slower, which makes other
OpenStack API calls slower too.

Numbers from Dolph:

Token validation performance

        Response time          Requests per second
UUID    18.8 ms (baseline)     256.7 (baseline)
Fernet  93.8 ms (400% slower)  48.3 (81% slower)


My numbers, from a basic setup running keystone in a VM without a load
balancer:

Tokens per second (serial): 
UUID:  14.97
Fernet: 3.66

Tokens per second (concurrent 20 threads):
UUID:   46.18
Fernet: 12.92

Our numbers are similarly bad in production and it's impacting OpenStack
performance when we're under load.
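
For anyone reproducing these numbers, a hedged micro-benchmark sketch
(endpoint, tokens, and request count are assumptions to fill in):

    import time

    import requests

    KEYSTONE = "http://localhost:5000"  # adjust for your deployment
    ADMIN_TOKEN = "REPLACE_ME"          # token allowed to validate others
    SUBJECT_TOKEN = "REPLACE_ME"        # token being validated

    N = 100
    start = time.time()
    for _ in range(N):
        r = requests.get(KEYSTONE + "/v3/auth/tokens",
                         headers={"X-Auth-Token": ADMIN_TOKEN,
                                  "X-Subject-Token": SUBJECT_TOKEN})
        r.raise_for_status()
    print("%.1f validations/sec" % (N / (time.time() - start)))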

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: fernet performance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1489061

Title:
  fernet token validation is slow

Status in Keystone:
  New

Bug description:
  keystone fernet token validation operations are much slower than uuid
  operations.  The performance is up to 4x slower, which makes other
  OpenStack API calls slower too.

  Numbers from Dolph:

  Token validation performance

          Response time          Requests per second
  UUID    18.8 ms (baseline)     256.7 (baseline)
  Fernet  93.8 ms (400% slower)  48.3 (81% slower)

  
  My numbers, from a basic setup running keystone in a VM without a load
  balancer:

  Tokens per second (serial): 
  UUID:  14.97
  Fernet: 3.66

  Tokens per second (concurrent 20 threads):
  UUID:   46.18
  Fernet: 12.92

  Our numbers are similarly bad in production and it's impacting
  OpenStack performance when we're under load.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1489061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489091] [NEW] neutron l3-agent-router-remove is not unscheduling dvr routers from L3-agents

2015-08-26 Thread Stephen Ma
Public bug reported:

In my environment there is a compute node and a controller node.
On the compute node the L3-agent mode is 'dvr'.
On the controller node the L3-agent mode is 'dvr-snat'.
Nova-compute is only running on the compute node.

Start: the compute node has no VMs running, there are no namespaces on
the compute node.

1. Created a network and a router
   neutron net-create my-net
   neutron subnet-create sb-my-net my-net 10.1.2.0/24
   neutron router-create my-router
   neutron router-interface-add my-router sb-my-net
   neutron router-gateway-set my-router public

my-net's UUID is 1162f283-6efc-424a-af37-0fbeeaf5d02a
my-router's UUID is 4f357733-9320-4c67-a0f6-81054d40fdaa

2. Boot a VM
   nova boot --flavor 1 --image IMAGE --nic 
net-id=1162f283-6efc-424a-af37-0fbeeaf5d02a myvm
   - The VM is hosted on the compute node.

3. Assign a floating IP to the VM
neutron port-list --device-id vm-uuid
neutron floatingip-create --port-id vm-port-uuid public

The fip namespace and the qrouter- 4f357733-9320-4c67-a0f6-81054d40fdaa
is found on the compute node.

4. Delete the VM. On the compute node, the fip namespace went away as expected. 
 But the qrouter namespace is left behind, but it should have been deleted. 
Neutron l3-agent-list-hosting-router shows the router is still scheduled on the 
compute node's L3-agent.
stack@Dvr-Ctrl2:~/DEVSTACK/manage$ nova list
++--+++-+--+
| ID | Name | Status | Task State | Power State | Networks |
++--+++-+--+
++--+++-+--+
stack@Dvr-Ctrl2:~/DEVSTACK/manage$ ./osadmin neutron 
l3-agent-list-hosting-router 4f357733-9320-4c67-a0f6-81054d40fdaa
+--+-++---+--+
| id   | host| admin_state_up | alive | 
ha_state |
+--+-++---+--+
| 4fb0bc93-2e6b-46c7-9ccd-3c66d1f44cfc | Dvr-Ctrl2   | True   | :-)   | 
 |
| 733e31eb-b49e-488b-aaf1-0dbcda802f66 | DVR-Compute | True   | :-)   | 
 |
+--+-++---+--+

5. Attempt to use neutron l3-agent-router-remove to remove the router from the 
compute node's L3-agent also didn't work.  The router is still scheduled on the 
agent.
stack@Dvr-Ctrl2:~/DEVSTACK/manage$ ./osadmin neutron l3-agent-router-remove 
733e31eb-b49e-488b-aaf1-0dbcda802f66 4f357733-9320-4c67-a0f6-81054d40fdaa
Removed router 4f357733-9320-4c67-a0f6-81054d40fdaa from L3 agent

stack@Dvr-Ctrl2:~/DEVSTACK/manage$ ./osadmin neutron 
l3-agent-list-hosting-router 4f357733-9320-4c67-a0f6-81054d40fdaa
+--+-++---+--+
| id   | host| admin_state_up | alive | 
ha_state |
+--+-++---+--+
| 4fb0bc93-2e6b-46c7-9ccd-3c66d1f44cfc | Dvr-Ctrl2   | True   | :-)   | 
 |
| 733e31eb-b49e-488b-aaf1-0dbcda802f66 | DVR-Compute | True   | :-)   | 
 |
+--+-++---+--+

The errors in (4) and (5) did not happen on the stable/kilo or the stable/juno 
code:
   i.) In (4) the router should no longer be scheduled on the compute node's L3 
agent.
   ii.) In (5) neutron l3-agent-router-remove should removed the router from 
the compute node's L3 agent.

Both (4) and (5) indicate that no notification to remove the router is
sent to the L3-agent on the compute node.  They represent regressions in
the latest neutron code.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489091

Title:
  neutron l3-agent-router-remove is not unscheduling dvr routers from
  L3-agents

Status in neutron:
  New

Bug description:
  In my environment there is a compute node and a controller node.
  On the compute node the L3-agent mode is 'dvr'.
  On the controller node the L3-agent mode is 'dvr-snat'.
  Nova-compute is only running on the compute node.

  Start: the compute node has no VMs running, there are no namespaces on
  the compute node.

  1. Created a network and a router
     neutron net-create my-net
     neutron subnet-create sb-my-net my-net 10.1.2.0/24
     neutron router-create my-router
     neutron router-interface-add my-router sb-my-net
     neutron router-gateway-set my-router public

  my-net's UUID is 1162f283-6efc-424a-af37-0fbeeaf5d02a
  my-router's UUID is 4f357733-9320-4c67-a0f6-81054d40fdaa

  2. Boot a VM
     nova boot --flavor 1 --image IMAGE --nic 

[Yahoo-eng-team] [Bug 1489098] [NEW] py34 intermittent failures with No sql_connection parameter is established

2015-08-26 Thread Armando Migliaccio
Public bug reported:

Logstash query:

message:"oslo_db.exception.CantStartEngineError: No sql_connection
parameter is established" AND build_name:"gate-neutron-python34"

Seems to have started 8/26

Logstash results:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwib3Nsb19kYi5leGNlcHRpb24uQ2FudFN0YXJ0RW5naW5lRXJyb3I6IE5vIHNxbF9jb25uZWN0aW9uIHBhcmFtZXRlciBpcyBlc3RhYmxpc2hlZFwiIEFORCBidWlsZF9uYW1lOlwiZ2F0ZS1uZXV0cm9uLXB5dGhvbjM0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDA2MTEyMjI3NzUsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

An example:

http://logs.openstack.org/07/202207/18/gate/gate-neutron-
python34/769d4ff/testr_results.html.gz

** Affects: neutron
 Importance: Critical
 Status: New


** Tags: gate-failure

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489098

Title:
  py34 intermittent failures with No sql_connection parameter is
  established

Status in neutron:
  New

Bug description:
  Logstash query:

  message:"oslo_db.exception.CantStartEngineError: No sql_connection
  parameter is established" AND build_name:"gate-neutron-python34"

  Seems to have started 8/26

  Logstash results:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwib3Nsb19kYi5leGNlcHRpb24uQ2FudFN0YXJ0RW5naW5lRXJyb3I6IE5vIHNxbF9jb25uZWN0aW9uIHBhcmFtZXRlciBpcyBlc3RhYmxpc2hlZFwiIEFORCBidWlsZF9uYW1lOlwiZ2F0ZS1uZXV0cm9uLXB5dGhvbjM0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDA2MTEyMjI3NzUsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  An example:

  http://logs.openstack.org/07/202207/18/gate/gate-neutron-
  python34/769d4ff/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489098/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1224972] Re: When creating a volume from an image - nova leaves the volume name empty

2015-08-26 Thread OpenStack Infra
** Changed in: nova
   Status: Invalid => In Progress

** Changed in: nova
 Assignee: (unassigned) => Feodor Tersin (ftersin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1224972

Title:
  When creating a volume from an image - nova leaves the volume name
  empty

Status in Cinder:
  Opinion
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When a block device with source=image, dest=volume is passed to nova
  instance boot, nova will instruct Cinder to create the volume; however,
  it will not set any name. It would be helpful to set a descriptive name
  so that the user knows where the volume came from.
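
  A hedged sketch of the idea (the call shape follows python-cinderclient
  v2; the name format is an assumption, not the actual patch):

      def create_bdm_volume(cinder, instance_name, image, size_gb):
          # cinder API v2 uses `name`; v1 used `display_name`
          name = "%s - boot volume from image %s" % (instance_name,
                                                     image['name'])
          return cinder.volumes.create(size_gb, name=name,
                                       imageRef=image['id'])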

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1224972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476806] Re: Unable to delete instance with attached volumes which failed to boot

2015-08-26 Thread Matt Riedemann
*** This bug is a duplicate of bug 1484194 ***
https://bugs.launchpad.net/bugs/1484194

There was no nova fix committed here, marked nova as invalid since it
was fixed in cinder.

** Changed in: nova
   Status: Fix Committed => Invalid

** This bug has been marked a duplicate of bug 1484194
   Cinder shouldn't fail a detach call for a volume that's not attached

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1476806

Title:
  Unable to delete instance with attached volumes which failed to boot

Status in OpenStack Compute (nova):
  Invalid
Status in python-cinderclient:
  In Progress

Bug description:
  I ran devstack deployment on this git nova version:

  commit 35375133398d862a61334783c1e7a90b95f34cdb
  Merge: 83623dd b2c5542
  Author: Jenkins jenk...@review.openstack.org
  Date:   Thu Jul 16 02:01:05 2015 +

  Merge Port crypto to Python 3

  If you try to start an instance with the following config, you end up
  with the following error:

   Error defining a domain with XML: <domain type="parallels">
  <uuid>f81e862a-644b-4145-ab44-86d5c468106f</uuid>
  <name>instance-0001</name>
  <memory>2097152</memory>
  <vcpu>1</vcpu>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="12.0.0"/>
      <nova:name>ct-volume</nova:name>
      <nova:creationTime>2015-07-21 17:46:34</nova:creationTime>
      <nova:flavor name="m1.small">
        <nova:memory>2048</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>1</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="5ff3594c1b8b4694acf2cf2ee13a27ac">admin</nova:user>
        <nova:project uuid="ee1f664443ef4f1e8056b45baa1e83a5">demo</nova:project>
      </nova:owner>
    </nova:instance>
  </metadata>
  <os>
    <type>hvm</type>
    <boot dev="hd"/>
  </os>
  <clock offset="utc"/>
  <devices>
    <disk type="block" device="disk">
      <driver type="raw" cache="none"/>
      <source dev="/dev/disk/by-path/ip-10.27.68.210:3260-iscsi-iqn.2010-10.org.openstack:volume-b147e00f-000f-4fbc-8141-afeb44e92549-lun-1"/>
      <target bus="sata" dev="sda"/>
      <serial>b147e00f-000f-4fbc-8141-afeb44e92549</serial>
    </disk>
    <interface type="bridge">
      <mac address="fa:16:3e:3f:f4:1a"/>
      <source bridge="qbr5a84792b-d8"/>
      <target dev="tap5a84792b-d8"/>
    </interface>
    <graphics type="vnc" autoport="yes" listen="10.27.68.210"/>
    <video>
      <model type="vga"/>
    </video>
  </devices>
</domain>

  Then you can't terminate the instance with the following error:

  2015-07-21 13:54:15.418 ERROR nova.compute.manager 
[req-8184e7f2-5cec-4c51-9f24-f39a17d8b6eb admin demo] [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] Setting instance vm_state to ERROR
  2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] Traceback (most recent call last):
  2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File 
/vz/stack/nova/nova/compute/manager.py, line 2361, in do_terminate_instance
  2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] self._delete_instance(context, 
instance, bdms, quotas)
  2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File /vz/stack/nova/nova/hooks.py, 
line 149, in inner
  2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] rv = f(*args, **kwargs)
  2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File 
/vz/stack/nova/nova/compute/manager.py, line 2340, in _delete_instance
  2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] quotas.rollback()
  2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 119, in __exit__
  2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] six.reraise(self.type_, self.value, 
self.tb)
  2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File 
/vz/stack/nova/nova/compute/manager.py, line 2310, in _delete_instance
  2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f] self._shutdown_instance(context, 
instance, bdms)
  2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager [instance: 
f81e862a-644b-4145-ab44-86d5c468106f]   File 
/vz/stack/nova/nova/compute/manager.py, line 2246, in _shutdown_instance
  2015-07-21 13:54:15.418 17566 ERROR nova.compute.manager 

[Yahoo-eng-team] [Bug 1489060] [NEW] SR-IOV configuration file should be split into agent and driver pieces

2015-08-26 Thread Ihar Hrachyshka
Public bug reported:

As the OVS and LB agents do, the SR-IOV agent should have its ML2 and
agent configuration options split into separate pieces.

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489060

Title:
  SR-IOV configuration file should be split into agent and driver pieces

Status in neutron:
  In Progress

Bug description:
  As the OVS and LB agents do, the SR-IOV agent should have its ML2 and
  agent configuration options split into separate pieces.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489060/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489085] [NEW] Inconsistent path naming convention in API calls from neutronclient

2015-08-26 Thread James Reeves
Public bug reported:

Some of the path names in API calls from the neutronclient use underscores
(bandwidth_limit_rules), and some use a hyphen/dash (rbac-policies, for
example). They need to be made consistent.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489085

Title:
  Inconsistent path naming convention in API calls from neutronclient

Status in neutron:
  New

Bug description:
  Some of the path names in API calls from the neutronclient use underscores
  (bandwidth_limit_rules), and some use a hyphen/dash (rbac-policies, for
  example). They need to be made consistent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489085/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1221805] Re: LDAP Assignment backend does not support all v3 APIs

2015-08-26 Thread David Stanek
The LDAP assignment backend is deprecated as of Kilo, so I don't see any
reason to encourage implementers to use it further.

** Changed in: keystone
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1221805

Title:
  LDAP Assignment backend does not support all v3 APIs

Status in Keystone:
  Won't Fix

Bug description:
  The LDAP assignment backend is missing support for several of the v3
  APIs, for example:

  - Role Grant CRUD
  - GET /role_assignments

  Now that we have split identity, we need to decide how we maintain the
  LDAP assignment backend, i.e.:

  - Bring it up to full spec
  - Freeze as is
  - Deprecate it
  - etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1221805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292591] Re: Database models differs from migrations.

2015-08-26 Thread Dolph Mathews
I'm assuming this was fixed by the last patch. In the future, please use
Closes-Bug on the final patch in your patch sequence -- not just
Partial-Bug on all of them (which leaves the bug open).

** Changed in: keystone
   Status: In Progress => Fix Committed

** Changed in: keystone
 Milestone: None => 2015.1.0

** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1292591

Title:
  Database models differs from migrations.

Status in Keystone:
  Fix Released

Bug description:
  Models and migrations have no enforced logical relation in the code, so
  differences are possible; in practice, differences usually exist. The only
  way to catch this is a dedicated test such as
  https://review.openstack.org/#/c/74081/ .

  This is a diff example from Keystone:

  AssertionError: Models and migration scripts aren't in sync:
  [ [ ( 'modify_nullable',
None,
'federation_protocol',
'mapping_id',
{ 'existing_server_default': None,
  'existing_type': VARCHAR(length=64)},
True,
False)],
[ ( 'modify_nullable',
None,
'region',
'description',
{ 'existing_server_default': None,
  'existing_type': VARCHAR(length=255)},
False,
True)],
    ( 'remove_index',
      Index(u'ix_revocation_event_revoked_at', Column(u'revoked_at',
DATETIME(), table=<revocation_event>, nullable=False))),
[ ( 'modify_nullable',
None,
'token',
'valid',
{ 'existing_server_default': None,
  'existing_type': INTEGER()},
True,
False)]]
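For reference, a minimal sketch of the kind of sync test linked above,
using alembic's compare_metadata; the fixture wiring is illustrative, not
Keystone's actual test code:

    from alembic.autogenerate import compare_metadata
    from alembic.migration import MigrationContext

    def assert_models_match_migrations(engine, model_metadata):
        """Fail if the schema built by migrations differs from the models."""
        with engine.connect() as conn:
            diff = compare_metadata(MigrationContext.configure(conn),
                                    model_metadata)
        assert not diff, (
            "Models and migration scripts aren't in sync: %r" % diff)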

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1292591/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1278739] Re: trusts in keystone fail in backend when impersonation is not provided

2015-08-26 Thread Lance Bragstad
fixed by - https://review.openstack.org/#/c/104066/

** Changed in: keystone
 Milestone: None => 2015.1.0

** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1278739

Title:
  trusts in keystone fail in backend when impersonation is not provided

Status in Keystone:
  Fix Released

Bug description:
  When creating trusts in Keystone, if 'impersonation' is not provided,
  Keystone fails in the backend code. This should probably be handled
  at the controller level to be consistent across all backends.

  lbragstad@precise64:~/curl-examples$ cat create_trust.json
  {
      "trust": {
          "expires_at": "2014-02-27T18:30:59.99Z",
          "project_id": "c7e2b98178e64418bb884929d3611b89",
          "impersonation": true,
          "roles": [
              {
                  "name": "admin"
              }
          ],
          "trustee_user_id": "bf3a4c9ef46d44fa9ce57349462b1998",
          "trustor_user_id": "406e6d96a30449069bf4241a00308b23"
      }
  }

  lbragstad@precise64:~/curl-examples$ cat create_trust_bad.json
  {
      "trust": {
          "expires_at": "2014-02-27T18:30:59.99Z",
          "project_id": "c7e2b98178e64418bb884929d3611b89",
          "roles": [
              {
                  "name": "admin"
              }
          ],
          "trustee_user_id": "bf3a4c9ef46d44fa9ce57349462b1998",
          "trustor_user_id": "406e6d96a30449069bf4241a00308b23"
      }
  }

  Using impersonation in  the create_trust.json file returns a trust
  successfully:

  lbragstad@precise64:~/curl-examples$ curl -si -H "X-Auth-Token:$TOKEN" -H
"Content-type:application/json" -d @create_trust.json
http://localhost:5000/v3/OS-TRUST/trusts
  HTTP/1.1 201 Created
  Vary: X-Auth-Token
  Content-Type: application/json
  Content-Length: 675
  Date: Sun, 09 Feb 2014 04:36:56 GMT

  {"trust": {"impersonation": true, "roles_links": {"self":
  "http://10.0.2.15:5000/v3/OS-TRUST/trusts/12ce9f7214f04c018384f654f5ea9aa5/roles",
  "previous": null, "next": null}, "trustor_user_id":
  "406e6d96a30449069bf4241a00308b23", "links": {"self":
  "http://10.0.2.15:5000/v3/OS-TRUST/trusts/12ce9f7214f04c018384f654f5ea9aa5"},
  "roles": [{"id": "937488fff5444edb9da1e93d20596d4b", "links": {"self":
  "http://10.0.2.15:5000/v3/roles/937488fff5444edb9da1e93d20596d4b"},
  "name": "admin"}], "expires_at": "2014-02-27T18:30:59.99Z",
  "trustee_user_id": "bf3a4c9ef46d44fa9ce57349462b1998", "project_id":
  "c7e2b98178e64418bb884929d3611b89", "id":
  "12ce9f7214f04c018384f654f5ea9aa5"}}

  When using the request without impersonation defined I get:

  lbragstad@precise64:~/curl-examples$ curl -si -H "X-Auth-Token:$TOKEN" -H
"Content-type:application/json" -d @create_trust_bad.json
http://localhost:5000/v3/OS-TRUST/trusts
  HTTP/1.1 500 Internal Server Error
  Vary: X-Auth-Token
  Content-Type: application/json
  Content-Length: 618
  Date: Sun, 09 Feb 2014 04:33:08 GMT

  {"error": {"message": "An unexpected error prevented the server from
fulfilling your request. (OperationalError) (1048, \"Column 'impersonation'
cannot be null\") 'INSERT INTO trust (id, trustor_user_id, trustee_user_id,
project_id, impersonation, deleted_at, expires_at, extra) VALUES
(%s, %s, %s, %s, %s, %s, %s, %s)' ('b49ac0c7558a4450949c22c840db9794',
'406e6d96a30449069bf4241a00308b23', 'bf3a4c9ef46d44fa9ce57349462b1998',
'c7e2b98178e64418bb884929d3611b89', None, None, datetime.datetime(2014, 2,
27, 18, 30, 59, 99), '{\"roles\": [{\"name\": \"admin\"}]}')",
"code": 500, "title": "Internal Server Error"}}

  
  According to the Identity V3 API, 'impersonation' is a requirement when 
creating a trust. 
https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md#trusts
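A hedged sketch of the controller-level handling suggested above -- reject
the request before it reaches a backend; the helper name and exception are
illustrative, not Keystone's actual controller code:

    def validate_trust_request(trust):
        """Raise a client-side error instead of letting the DB return a 500."""
        if 'impersonation' not in trust:
            raise ValueError(
                "'impersonation' is required when creating a trust")
        if not isinstance(trust['impersonation'], bool):
            raise ValueError("'impersonation' must be a boolean")
        return trust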

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1278739/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480931] Re: Can't remove error instance which boot from image with new volume

2015-08-26 Thread Matt Riedemann
*** This bug is a duplicate of bug 1484194 ***
https://bugs.launchpad.net/bugs/1484194

** This bug is no longer a duplicate of bug 1476806
   Unable to delete instance with attached volumes which failed to boot
** This bug has been marked a duplicate of bug 1484194
   Cinder shouldn't fail a detach call for a volume that's not attached

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480931

Title:
  Can't remove error instance which boot from image with new volume

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  We need to prepare an image that will cause an error during instance build:
   nova image-meta cirros-0.3.4-x86_64-uec set hw_video_ram=5
  Nova will raise a "requested ram too high" exception when creating the
  instance. You then get an instance in the ERROR state, but you can't
  delete it. You can find the error message that was raised at
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2245

  Why does this happen?
  Nova detaches the volume when instance creation fails.
  When the user then deletes the instance, nova tries to detach it again,
  and cinder raises an invalid-volume error at
  https://github.com/openstack/cinder/blob/master/cinder/volume/manager.py#L888
  Nova has no try/except block to handle this exception, so it can't
  delete the instance.
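  A minimal sketch of the missing defensive handling described above; the
  helper name is illustrative and the broad except is only a sketch, not
  nova's actual code:

    import logging

    LOG = logging.getLogger(__name__)

    def detach_volume_quietly(volume_api, context, volume_id):
        """Treat 'volume not attached' as success so delete can proceed."""
        try:
            volume_api.detach(context, volume_id)
        except Exception as exc:  # e.g. cinder's InvalidVolume
            LOG.debug("Ignoring detach failure for volume %s: %s",
                      volume_id, exc)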

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479066] Re: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6

2015-08-26 Thread Matt Riedemann
** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479066

Title:
  DeprecationWarning: BaseException.message has been deprecated as of
  Python 2.6

Status in OpenStack Compute (nova):
  In Progress
Status in oslo.vmware:
  New

Bug description:
  I see these when running tests:

  Captured stderr:
  
  nova/virt/libvirt/volume/volume.py:392: DeprecationWarning: 
BaseException.message has been deprecated as of Python 2.6
if ('device is busy' in exc.message or

  Seems that bug 1447946 was meant to fix some of this but it only
  handles NovaException, not other usage.

  We should be able to use six.text_type(e) for 'if str in e' type
  checks.
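  A small illustration of the suggested pattern, assuming six is
  available; the helper name is hypothetical:

    import six

    def is_device_busy(exc):
        # six.text_type(exc) stringifies the exception on both Python 2
        # and 3; exc.message has been deprecated since Python 2.6.
        return 'device is busy' in six.text_type(exc)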

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGVwcmVjYXRpb25XYXJuaW5nOiBCYXNlRXhjZXB0aW9uLm1lc3NhZ2UgaGFzIGJlZW4gZGVwcmVjYXRlZCBhcyBvZiBQeXRob24gMi42XCIgQU5EIHByb2plY3Q6XCJvcGVuc3RhY2svbm92YVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDM4MTA2MTkwOTI3fQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1479066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1236116] Re: attaching all devstack quantum networks to a nova server results in un-deletable server

2015-08-26 Thread Steve Baker
** Changed in: heat
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1236116

Title:
  attaching all devstack quantum networks to a nova server results in
  un-deletable server

Status in heat:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Attaching multiple networks results in a backtrace and an undeletable
  instance when used with Neutron.

  Run reproducer as follows:
  [sdake@bigiron ~]$ less reproducer
  #!/bin/bash

  glance image-create --name=cirros-0.3.0-x86_64 --disk-format=qcow2 --container-format=bare < cirros.img
  id1=`neutron net-list -c id -f csv --quote none | grep -v id | tail -1 | tr -d '\r'`
  id2=`neutron net-list -c id -f csv --quote none | grep -v id | head -1 | tr -d '\r'`
  nova boot --flavor m1.tiny --image cirros-0.3.0-x86_64 --security_group default --nic net-id=$id1 --nic net-id=$id2 cirros

  Run nova list, waiting for the server to become active.  Once the server
  is active, delete it via the nova delete <id> operation.  The server
  will enter an undeletable state while still reporting ACTIVE.  It is
  important that both networks are connected when the delete operation is
  run, as for some reason one of the networks gets disconnected by some
  component (not sure which).

  Further delete operations are either unsuccessful or block the ability
  to create new instances, with instances finishing in the ERROR state
  after creation.

  n-cpu backtraces with:
  2013-10-06 18:03:11.269 ERROR nova.openstack.common.rpc.amqp 
[req-4f7cf630-d1eb-4fcd-af22-c11fa77fd3dd admin admin] Exception during message 
handling
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp Traceback (most 
recent call last):
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/openstack/common/rpc/amqp.py, line 461, in _process_data
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py, line 172, in dispatch
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 353, in decorated_function
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/exception.py, line 90, in wrapped
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp payload)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/exception.py, line 73, in wrapped
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 243, in decorated_function
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 229, in decorated_function
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 294, in decorated_function
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 271, in decorated_function
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 258, in decorated_function
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-06 18:03:11.269 TRACE nova.openstack.common.rpc.amqp   File 
2013-10-06 18:13:02.653 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-06 18:13:02.653 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 294, in decorated_function
  2013-10-06 18:13:02.653 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-10-06 18:13:02.653 TRACE nova.openstack.common.rpc.amqp   File 
/opt/stack/nova/nova/compute/manager.py, line 271, in decorated_function
  2013-10-06 18:13:02.653 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-06 18:13:02.653 TRACE nova.openstack.common.rpc.amqp   File 

[Yahoo-eng-team] [Bug 1486335] Re: Create nova.conf with tox -egenconfig : ValueError: (Expected ',' or end-of-list in, Routes!=2.0,!=2.1,=1.12.3; python_version=='2.7', 'at', ; python_version==

2015-08-26 Thread Matt Riedemann
mriedem@ubuntu:~/git/nova$ pip show pbr pip setuptools virtualenv
/usr/local/lib/python2.7/dist-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90:
 InsecurePlatformWarning: A true SSLContext object is not available. This 
prevents urllib3 from configuring SSL appropriately and may cause certain SSL 
connections to fail. For more information, see 
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
You are using pip version 7.0.3, however version 7.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
---
Metadata-Version: 2.0
Name: pip
Version: 7.0.3
Summary: The PyPA recommended tool for installing Python packages.
Home-page: https://pip.pypa.io/
Author: The pip developers
Author-email: python-virtual...@groups.google.com
License: MIT
Location: /usr/local/lib/python2.7/dist-packages
Requires: 
---
Metadata-Version: 2.0
Name: setuptools
Version: 11.3.1
Summary: Easily download, build, install, upgrade, and uninstall Python packages
Home-page: https://bitbucket.org/pypa/setuptools
Author: Python Packaging Authority
Author-email: distutils-...@python.org
License: PSF or ZPL
Location: /usr/local/lib/python2.7/dist-packages
Requires: 
---
Metadata-Version: 2.0
Name: virtualenv
Version: 13.0.3
Summary: Virtual Python Environment builder
Home-page: https://virtualenv.pypa.io/
Author: Jannis Leidel, Carl Meyer and Brian Rosner
Author-email: python-virtual...@groups.google.com
License: MIT
Location: /usr/local/lib/python2.7/dist-packages
Requires: 


** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486335

Title:
  Create nova.conf with tox -egenconfig :ValueError: (Expected ','
  or end-of-list in,
  Routes!=2.0,!=2.1,=1.12.3;python_version=='2.7', 'at',
  ;python_version=='2.7')

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  $git clone https://git.openstack.org/openstack/nova.git
  $pip install tox
  $tox -egenconfig

  
  cmdargs: [local('/home/ubuntu/nova/.tox/genconfig/bin/pip'), 'install', '-U', 
'--force-reinstall', '-r/home/ubuntu/nova/requirements.txt', 
'-r/home/ubuntu/nova/test-requirements.txt']
  env: {'LC_ALL': 'en_US.utf-8', 'XDG_RUNTIME_DIR': '/run/user/1000', 
'VIRTUAL_ENV': '/home/ubuntu/nova/.tox/genconfig', 'LESSOPEN': '| 
/usr/bin/lesspipe %s', 'SSH_CLIENT': '27.189.208.43 5793 22', 'LOGNAME': 
'ubuntu', 'USER': 'ubuntu', 'HOME': '/home/ubuntu', 'PATH': 
'/home/ubuntu/nova/.tox/genconfig/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games',
 'XDG_SESSION_ID': '25', '_': '/usr/local/bin/tox', 'SSH_CONNECTION': 
'27.189.208.43 5793 10.0.0.18 22', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm', 
'SHELL': '/bin/bash', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'LANGUAGE': 
'en_US', 'SHLVL': '1', 'SSH_TTY': '/dev/pts/5', 'OLDPWD': '/home/ubuntu', 
'PWD': '/home/ubuntu/nova', 'PYTHONHASHSEED': '67143794', 'OS_TEST_PATH': 
'./nova/tests/unit', 'MAIL': '/var/mail/ubuntu', 'LS_COLORS': 
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tg
 
z=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36
 
:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:'}

  Exception:
  Traceback (most recent call last):
File 
/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/basecommand.py,
 line 122, in main
  status = self.run(options, args)
File 
/home/ubuntu/nova/.tox/genconfig/local/lib/python2.7/site-packages/pip/commands/install.py,
 line 262, in run
  for req in parse_requirements(filename, finder=finder, options=options, 
session=session):
File 

[Yahoo-eng-team] [Bug 1486335] Re: Create nova.conf with tox -egenconfig : ValueError: (Expected ',' or end-of-list in, Routes!=2.0,!=2.1,=1.12.3; python_version=='2.7', 'at', ; python_version==

2015-08-26 Thread Matt Riedemann
Works for me:

mriedem@ubuntu:~/git/nova$ tox -r -e genconfig
genconfig create: /home/mriedem/git/nova/.tox/genconfig
genconfig installdeps: -r/home/mriedem/git/nova/requirements.txt, 
-r/home/mriedem/git/nova/test-requirements.txt
genconfig develop-inst: /home/mriedem/git/nova
genconfig installed: 
aioeventlet==0.4,alembic==0.8.2,amqp==1.4.6,anyjson==0.3.3,appdirs==1.4.0,Babel==2.0,bandit==0.13.2,blockdiag==1.5.3,boto==2.38.0,cachetools==1.0.3,cffi==1.2.1,cliff==1.14.0,cmd2==0.6.8,contextlib2==0.4.0,coverage==3.7.1,cryptography==1.0,debtcollector==0.7.0,decorator==4.0.2,discover==0.4.0,docutils==0.12,ecdsa==0.13,enum34==1.0.4,eventlet==0.17.4,extras==0.0.3,fasteners==0.13.0,fixtures==1.3.1,flake8==2.2.4,funcparserlib==0.3.6,funcsigs==0.4,functools32==3.2.3.post2,futures==3.0.3,futurist==0.4.0,greenlet==0.4.7,hacking==0.10.2,httplib2==0.9.1,idna==2.0,ipaddress==1.0.14,iso8601==0.1.10,Jinja2==2.8,jsonpatch==1.11,jsonpointer==1.9,jsonschema==2.5.1,keystonemiddleware==2.1.0,kombu==3.0.26,linecache2==1.0.0,lxml==3.4.4,Mako==1.0.2,MarkupSafe==0.23,mccabe==0.2.1,mock==1.3.0,monotonic==0.3,mox3==0.9.0,msgpack-python==0.4.6,netaddr==0.7.15,netifaces==0.10.4,-e
 
git://git.openstack.org/openstack/nova@b0854ba0c697243aa3d91170d1a22896aed60e02#egg=nova-gerrit_master,nump
 
y==1.9.2,os-brick==0.3.2,os-client-config==1.6.3,os-testr==0.3.0,oslo.concurrency==2.5.0,oslo.config==2.3.0,oslo.context==0.5.0,oslo.db==2.4.1,oslo.i18n==2.5.0,oslo.log==1.10.0,oslo.messaging==2.4.0,oslo.middleware==2.7.0,oslo.reports==0.4.0,oslo.rootwrap==2.3.0,oslo.serialization==1.8.0,oslo.service==0.8.0,oslo.utils==2.4.0,oslo.versionedobjects==0.8.0,oslo.vmware==1.20.0,oslosphinx==3.1.0,oslotest==1.10.0,paramiko==1.15.2,Paste==2.0.2,PasteDeploy==1.5.2,pbr==1.6.0,pep8==1.5.7,Pillow==2.9.0,posix-ipc==1.0.0,prettytable==0.7.2,psutil==1.2.1,psycopg2==2.6.1,pyasn1==0.1.8,pycadf==1.1.0,pycparser==2.14,pycrypto==2.6.1,pyflakes==0.8.1,Pygments==2.0.2,PyMySQL==0.6.6,pyOpenSSL==0.15.1,pyparsing==2.0.3,python-barbicanclient==3.3.0,python-cinderclient==1.3.1,python-editor==0.4,python-glanceclient==0.19.0,python-ironicclient==0.7.0,python-keystoneclient==1.6.0,python-mimeparse==0.1.4,python-neutronclient==2.6.0,python-subunit==1.1.0,pytz==2015.4,PyYAML==3.11,repoze.lru==0.6,requests==2.7.0,r
 
equests-mock==0.6.0,retrying==1.3.3,rfc3986==0.2.2,Routes==2.2,seqdiag==0.9.5,simplejson==3.8.0,six==1.9.0,Sphinx==1.2.3,sphinxcontrib-seqdiag==0.8.4,SQLAlchemy==1.0.8,sqlalchemy-migrate==0.9.7,sqlparse==0.1.16,stevedore==1.7.0,suds-jurko==0.6,tempest-lib==0.7.0,Tempita==0.5.2,testrepository==0.0.20,testresources==0.2.7,testscenarios==0.5.0,testtools==1.8.0,traceback2==1.4.0,trollius==2.0,unicodecsv==0.13.0,unittest2==1.1.0,urllib3==1.11,warlock==1.1.0,webcolors==1.5,WebOb==1.4.1,websockify==0.7.0,wheel==0.24.0,wrapt==1.10.5
genconfig runtests: PYTHONHASHSEED='1084654034'
genconfig runtests: commands[0] | oslo-config-generator 
--config-file=etc/nova/nova-config-generator.conf
___
 summary 
___
  genconfig: commands succeeded
  congratulations :)
mriedem@ubuntu:~/git/nova$ 




** Changed in: nova
   Status: Won't Fix => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486335

Title:
  Create nova.conf with tox -egenconfig :ValueError: (Expected ','
  or end-of-list in,
  Routes!=2.0,!=2.1,=1.12.3;python_version=='2.7', 'at',
  ;python_version=='2.7')

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  $git clone https://git.openstack.org/openstack/nova.git
  $pip install tox
  $tox -egenconfig

  
  cmdargs: [local('/home/ubuntu/nova/.tox/genconfig/bin/pip'), 'install', '-U', 
'--force-reinstall', '-r/home/ubuntu/nova/requirements.txt', 
'-r/home/ubuntu/nova/test-requirements.txt']
  env: {'LC_ALL': 'en_US.utf-8', 'XDG_RUNTIME_DIR': '/run/user/1000', 
'VIRTUAL_ENV': '/home/ubuntu/nova/.tox/genconfig', 'LESSOPEN': '| 
/usr/bin/lesspipe %s', 'SSH_CLIENT': '27.189.208.43 5793 22', 'LOGNAME': 
'ubuntu', 'USER': 'ubuntu', 'HOME': '/home/ubuntu', 'PATH': 
'/home/ubuntu/nova/.tox/genconfig/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games',
 'XDG_SESSION_ID': '25', '_': '/usr/local/bin/tox', 'SSH_CONNECTION': 
'27.189.208.43 5793 10.0.0.18 22', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm', 
'SHELL': '/bin/bash', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'LANGUAGE': 
'en_US', 'SHLVL': '1', 'SSH_TTY': '/dev/pts/5', 'OLDPWD': '/home/ubuntu', 
'PWD': '/home/ubuntu/nova', 'PYTHONHASHSEED': '67143794', 'OS_TEST_PATH': 
'./nova/tests/unit', 'MAIL': '/var/mail/ubuntu', 'LS_COLORS': 

[Yahoo-eng-team] [Bug 1489183] [NEW] Port is unbound from a compute node, the DVR scheduler needs to check whether the router can be deleted on the L3-agent

2015-08-26 Thread Stephen Ma
Public bug reported:

In my environment there is a compute node and a controller node. On the
compute node the L3-agent mode is 'dvr'; on the controller node the
L3-agent mode is 'dvr-snat'. Nova-compute is only running on the
compute node.

Start: the compute node has no VMs running, there are no namespaces on
the compute node.

1. Created a network and a router
   neutron net-create demo-net
   neutron subnet-create sb-demo-net demo-net 10.1.2.0/24
   neutron router-create demo-router
   neutron router-interface-add demo-router sb-demo-net
   neutron router-gateway-set demo-router public

my-net's UUID is 0d3f0103-43e9-45a2-8ca2-b29700039297
my-router's UUID is 1bbfafde-b1d4-4752-9dd0-4b23bbeca22b

2. Created a port: 
stack@Dvr-Ctrl2:~/DEVSTACK/demo$ neutron port-create demo-net
The port's UUID is 278743d7-b057-4797-8b2b-faaf5fe13a4a

Note: the port is not associated with a floating IP.

3. Boot up a VM using the port:
nova boot --flavor 1 --image IMAGE_UUID --nic 
port-id=278743d7-b057-4797-8b2b-faaf5fe13a4a  demo-p11vm01

Wait for the VM to come up on the compute node.

4. Deleted the VM.

5. The port still exists and is now unbound from the compute node (device owner 
and binding:host_id are now None):
stack@Dvr-Ctrl2:~/DEVSTACK/demo$ ../manage/osadmin neutron port-show 
278743d7-b057-4797-8b2b-faaf5fe13a4a
+---+-+
| Field | Value 
  |
+---+-+
| admin_state_up| True  
  |
| allowed_address_pairs |   
  |
| binding:host_id   |   
  |
| binding:profile   | {}
  |
| binding:vif_details   | {}
  |
| binding:vif_type  | unbound   
  |
| binding:vnic_type | normal
  |
| device_id |   
  |
| device_owner  |   
  |
| extra_dhcp_opts   |   
  |
| fixed_ips | {"subnet_id": "b45d41ca-134f-4274-bb05-50fab100315e",
"ip_address": "10.1.2.4"} |
| id| 278743d7-b057-4797-8b2b-faaf5fe13a4a  
  |
| mac_address   | fa:16:3e:a6:f7:d1 
  |
| name  |   
  |
| network_id| 0d3f0103-43e9-45a2-8ca2-b29700039297  
  |
| port_security_enabled | True  
  |
| security_groups   | 8b68d1c9-cae7-4f0b-8fb5-6adb5a515246  
  |
| status| DOWN  
  |
| tenant_id | a7950bd5a61548ee8b03145cacf90a53  
  |
+---+-+

The Router is still scheduled on the compute node.

stack@Dvr-Ctrl2:~/DEVSTACK/demo$ ../manage/osadmin neutron 
l3-agent-list-hosting-router 1bbfafde-b1d4-4752-9dd0-4b23bbeca22b
+--+-++---+--+
| id   | host| admin_state_up | alive | 
ha_state |
+--+-++---+--+
| 2fc1f65b-4c05-4cec-95eb-93dda39a6eec | Dvr-Ctrl2   | True   | :-)   | 
 |
| dae065fb-b140-4ece-8824-779cf6426337 | DVR-Compute | True   | :-)   | 
 |
+--+-++---+--+


When the port is unbound, the router should no longer be scheduled on the
compute node, since it is no longer needed there.  The root cause is that
when the port loses its binding, the DVR scheduler does not check whether
the router can be removed from the L3-agent.
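A rough sketch of the missing check, with illustrative names only (not
neutron's actual scheduler API):

    def on_port_unbound(scheduler, context, router_ids, host):
        """After a port loses its binding, drop routers the host no
        longer needs."""
        for router_id in router_ids:
            if not scheduler.host_has_dvr_serviceable_ports(
                    context, router_id, host):
                # No remaining VM ports on this host need the router, so
                # unschedule it from the host's L3 agent.
                scheduler.remove_router_from_l3_agent(
                    context, router_id, host)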

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of 

[Yahoo-eng-team] [Bug 1489184] [NEW] Port is unbound from a compute node, the DVR scheduler needs to check whether the router can be deleted on the L3-agent

2015-08-26 Thread Stephen Ma
Public bug reported:

In my environment there is a compute node and a controller node. On the
compute node the L3-agent mode is 'dvr'; on the controller node the
L3-agent mode is 'dvr-snat'. Nova-compute is only running on the
compute node.

Start: the compute node has no VMs running, there are no namespaces on
the compute node.

1. Created a network and a router
   neutron net-create demo-net
   neutron subnet-create sb-demo-net demo-net 10.1.2.0/24
   neutron router-create demo-router
   neutron router-interface-add demo-router sb-demo-net
   neutron router-gateway-set demo-router public

my-net's UUID is 0d3f0103-43e9-45a2-8ca2-b29700039297
my-router's UUID is 1bbfafde-b1d4-4752-9dd0-4b23bbeca22b

2. Created a port: 
stack@Dvr-Ctrl2:~/DEVSTACK/demo$ neutron port-create demo-net
The port's UUID is 278743d7-b057-4797-8b2b-faaf5fe13a4a

Note: the port is not associated with a floating IP.

3. Boot up a VM using the port:
nova boot --flavor 1 --image IMAGE_UUID --nic 
port-id=278743d7-b057-4797-8b2b-faaf5fe13a4a  demo-p11vm01

Wait for the VM to come up on the compute node.

4. Deleted the VM.

5. The port still exists and is now unbound from the compute node (device owner 
and binding:host_id are now None):
stack@Dvr-Ctrl2:~/DEVSTACK/demo$ ../manage/osadmin neutron port-show 
278743d7-b057-4797-8b2b-faaf5fe13a4a
+---+-+
| Field | Value 
  |
+---+-+
| admin_state_up| True  
  |
| allowed_address_pairs |   
  |
| binding:host_id   |   
  |
| binding:profile   | {}
  |
| binding:vif_details   | {}
  |
| binding:vif_type  | unbound   
  |
| binding:vnic_type | normal
  |
| device_id |   
  |
| device_owner  |   
  |
| extra_dhcp_opts   |   
  |
| fixed_ips | {"subnet_id": "b45d41ca-134f-4274-bb05-50fab100315e",
"ip_address": "10.1.2.4"} |
| id| 278743d7-b057-4797-8b2b-faaf5fe13a4a  
  |
| mac_address   | fa:16:3e:a6:f7:d1 
  |
| name  |   
  |
| network_id| 0d3f0103-43e9-45a2-8ca2-b29700039297  
  |
| port_security_enabled | True  
  |
| security_groups   | 8b68d1c9-cae7-4f0b-8fb5-6adb5a515246  
  |
| status| DOWN  
  |
| tenant_id | a7950bd5a61548ee8b03145cacf90a53  
  |
+---+-+

The Router is still scheduled on the compute node.

stack@Dvr-Ctrl2:~/DEVSTACK/demo$ ../manage/osadmin neutron 
l3-agent-list-hosting-router 1bbfafde-b1d4-4752-9dd0-4b23bbeca22b
+--+-++---+--+
| id   | host| admin_state_up | alive | 
ha_state |
+--+-++---+--+
| 2fc1f65b-4c05-4cec-95eb-93dda39a6eec | Dvr-Ctrl2   | True   | :-)   | 
 |
| dae065fb-b140-4ece-8824-779cf6426337 | DVR-Compute | True   | :-)   | 
 |
+--+-++---+--+


When the port is unbound, the router should no longer be scheduled on the
compute node, since it is no longer needed there.  The root cause is that
when the port loses its binding, the DVR scheduler does not check whether
the router can be removed from the L3-agent.

** Affects: neutron
 Importance: Undecided
 Assignee: Stephen Ma (stephen-ma)
 Status: New


** Tags: l3-dvr-backlog

** 

[Yahoo-eng-team] [Bug 1489159] [NEW] IronicDriverTestCase unit tests are seg-faulting

2015-08-26 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/80/148980/28/check/gate-nova-
python27/f7cb9dd/console.html#_2015-08-25_22_14_53_647

{3}
nova.tests.unit.virt.ironic.test_driver.IronicDriverTestCase.test__unprovision_fail_max_retries
[] ... inprogress

This seems to be new(ish).  Someone I work with was saying they were
hitting it locally too.

** Affects: nova
 Importance: High
 Status: Confirmed


** Tags: ironic testing

** Tags added: ironic testing

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489159

Title:
  IronicDriverTestCase unit tests are seg-faulting

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  http://logs.openstack.org/80/148980/28/check/gate-nova-
  python27/f7cb9dd/console.html#_2015-08-25_22_14_53_647

  {3}
  
nova.tests.unit.virt.ironic.test_driver.IronicDriverTestCase.test__unprovision_fail_max_retries
  [] ... inprogress

  This seems to be new(ish).  Someone I work with was saying they were
  hitting it locally too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489159/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476752] Re: ZVM Agent ids for Neutron

2015-08-26 Thread Armando Migliaccio
No longer valid.

** Changed in: neutron
   Status: In Progress => Won't Fix

** Changed in: neutron
 Milestone: liberty-3 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1476752

Title:
  ZVM Agent ids for Neutron

Status in neutron:
  Won't Fix

Bug description:
  The networking-zvm neutron agent (https://wiki.openstack.org/wiki
  /Networking-zvm) uses this code to patch neutron code on the fly for
  defining VIF_TYPE and the AGENT_TYPE:

     /bin/sed -i "$a\VIF_TYPE_ZVM = 'zvm'" {toxinidir}/\
   .tox/py27/src/neutron/neutron/extensions/portbindings.py
     /bin/sed -i "$a\AGENT_TYPE_ZVM = 'zVM agent'" {toxinidir}/\
   .tox/py27/src/neutron/neutron/common/constants.py

  Since those definitions must not ever change it makes a lot of sense
  to have them defined in the neutron code even though the actual driver
  is still in stackforge:

  https://git.openstack.org/cgit/stackforge/networking-zvm/

  This enhancement bug report is about tracking these identifiers to be
  merged into Neutron core.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1476752/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488793] [NEW] CertManager couldn't connect to barbican

2015-08-26 Thread yuanying
Public bug reported:

The barbican client requires an endpoint for the barbican service, but
currently none is set, so the cert manager can't connect to the correct
endpoint.

Below is the error message.

Traceback (most recent call last):
  File ".tox/test-cert.py", line 19, in <module>
certificate_secret.store()
  File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/barbicanclient/secrets.py,
 line 43, in wrapper
return func(self, *args)
  File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/barbicanclient/secrets.py,
 line 320, in store
response = self._api.post(self._entity, json=secret_dict)
  File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/barbicanclient/client.py,
 line 77, in post
return super(_HTTPClient, self).post(path, *args, **kwargs).json()
  File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/keystoneclient/adapter.py,
 line 176, in post
return self.request(url, 'POST', **kwargs)
  File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/barbicanclient/client.py,
 line 64, in request
self._check_status_code(resp)
  File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/barbicanclient/client.py,
 line 100, in _check_status_code
self._get_error_message(resp)
  File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/barbicanclient/client.py,
 line 110, in _get_error_message
message = response_data['title']
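For illustration, a minimal sketch of constructing the client with an
explicit endpoint so it does not rely on an unset catalog entry; the
keystone session setup is assumed, not shown:

    from barbicanclient import client as barbican_client

    def make_barbican_client(keystone_session, barbican_endpoint):
        # Passing endpoint= explicitly avoids the misrouted request that
        # produces the failure above.
        return barbican_client.Client(session=keystone_session,
                                      endpoint=barbican_endpoint)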

** Affects: neutron
 Importance: Undecided
 Assignee: yuanying (ootsuka)
 Status: In Progress


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => yuanying (ootsuka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488793

Title:
  CertManager couldn't connect to barbican

Status in neutron:
  In Progress

Bug description:
  The barbican client requires an endpoint for the barbican service, but
  currently none is set, so the cert manager can't connect to the correct
  endpoint.

  Below is the error message.

  Traceback (most recent call last):
    File ".tox/test-cert.py", line 19, in <module>
  certificate_secret.store()
File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/barbicanclient/secrets.py,
 line 43, in wrapper
  return func(self, *args)
File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/barbicanclient/secrets.py,
 line 320, in store
  response = self._api.post(self._entity, json=secret_dict)
File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/barbicanclient/client.py,
 line 77, in post
  return super(_HTTPClient, self).post(path, *args, **kwargs).json()
File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/keystoneclient/adapter.py,
 line 176, in post
  return self.request(url, 'POST', **kwargs)
File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/barbicanclient/client.py,
 line 64, in request
  self._check_status_code(resp)
File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/barbicanclient/client.py,
 line 100, in _check_status_code
  self._get_error_message(resp)
File 
/Users/yuanying/Projects/OpenStack/magnum/.tox/venv/lib/python2.7/site-packages/barbicanclient/client.py,
 line 110, in _get_error_message
  message = response_data['title']

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488807] [NEW] SR-IOV: deprecate agent_required option

2015-08-26 Thread Moshe Levi
Public bug reported:

When SR-IOV was introduced in Juno, the agent supported only link state
changes. Some Intel cards don't support setting link state, so to
accommodate them the SR-IOV mech driver supports both agent and agent-less
mode. Since Liberty the SR-IOV agent brings more functionality, such as
QoS and port security, so we want to make the agent mandatory.

This patch deprecates the agent_required option in Liberty
and updates its default to True.

IRC log:
http://eavesdrop.openstack.org/meetings/pci_passthrough/2015/pci_passthrough.2015-06-23-13.09.log.txt.
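A hedged sketch of what the deprecation looks like with oslo.config; the
exact option definition in neutron may differ:

    from oslo_config import cfg

    sriov_opts = [
        cfg.BoolOpt('agent_required',
                    default=True,  # new default proposed by this change
                    deprecated_for_removal=True,
                    help='Deprecated: the SR-IOV agent is now required.'),
    ]
    cfg.CONF.register_opts(sriov_opts, group='ml2_sriov')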

** Affects: neutron
 Importance: Undecided
 Assignee: Moshe Levi (moshele)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488807

Title:
  SR-IOV: deprecate agent_required option

Status in neutron:
  In Progress

Bug description:
  When SR-IOV was introduced in Juno, the agent supported only link state
  changes. Some Intel cards don't support setting link state, so to
  accommodate them the SR-IOV mech driver supports both agent and
  agent-less mode. Since Liberty the SR-IOV agent brings more
  functionality, such as QoS and port security, so we want to make the
  agent mandatory.

  This patch deprecates the agent_required option in Liberty
  and updates its default to True.

  IRC log:
  http://eavesdrop.openstack.org/meetings/pci_passthrough/2015/pci_passthrough.2015-06-23-13.09.log.txt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488807/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488868] [NEW] failed to run test_qos_plugin.TestQosPlugin independently

2015-08-26 Thread yong sheng gong
Public bug reported:

neutron.tests.unit.services.qos.test_qos_plugin.TestQosPlugin.test_delete_policy_rule
-

Captured pythonlogging:
~~~
2015-08-26 17:13:10,783  WARNING [oslo_config.cfg] Option verbose from 
group DEFAULT is deprecated for removal.  Its value may be silently ignored 
in the future.
2015-08-26 17:13:10,799 INFO [neutron.manager] Loading core plugin: 
neutron.db.db_base_plugin_v2.NeutronDbPluginV2
2015-08-26 17:13:10,800  WARNING [neutron.notifiers.nova] Authenticating to 
nova using nova_admin_* options is deprecated. This should be done using an 
auth plugin, like password
2015-08-26 17:13:10,802 INFO [neutron.manager] Loading Plugin: qos
2015-08-26 17:13:10,804 INFO 
[neutron.services.qos.notification_drivers.manager] Loading message_queue 
(Message queue updates) notification driver for QoS plugin


Captured traceback:
~~~
Traceback (most recent call last):
  File neutron/tests/unit/services/qos/test_qos_plugin.py, line 111, in 
test_delete_policy_rule
self.ctxt, self.rule.id, self.policy.id)
  File neutron/services/qos/qos_plugin.py, line 128, in 
delete_policy_bandwidth_limit_rule
policy.reload_rules()
  File neutron/objects/qos/policy.py, line 63, in reload_rules
rules = rule_obj_impl.get_rules(self._context, self.id)
  File neutron/objects/qos/rule.py, line 37, in get_rules
rules = rule_cls.get_objects(context, qos_policy_id=qos_policy_id)
  File neutron/objects/base.py, line 122, in get_objects
db_objs = db_api.get_objects(context, cls.db_model, **kwargs)
  File neutron/db/api.py, line 87, in get_objects
.filter_by(**kwargs)
  File 
/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 line 2399, in all
return list(self)
  File 
/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 line 2516, in __iter__
return self._execute_and_instances(context)
  File 
/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/orm/query.py,
 line 2531, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
  File 
/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 914, in execute
return meth(self, multiparams, params)
  File 
/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/sql/elements.py,
 line 323, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File 
/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 1010, in _execute_clauseelement
compiled_sql, distilled_params
  File 
/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 1146, in _execute_context
context)
  File 
/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 1337, in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File 
/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/util/compat.py,
 line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File 
/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py,
 line 1139, in _execute_context
context)
  File 
/mnt/data3/opt/stack/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/engine/default.py,
 line 450, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: 
qos_bandwidth_limit_rules [SQL: u'SELECT qos_bandwidth_limit_rules.id AS 
qos_bandwidth_limit_rules_id, qos_bandwidth_limit_rules.qos_policy_id AS 
qos_bandwidth_limit_rules_qos_policy_id, qos_bandwidth_limit_rules.max_kbps AS 
qos_bandwidth_limit_rules_max_kbps, qos_bandwidth_limit_rules.max_burst_kbps AS 
qos_bandwidth_limit_rules_max_burst_kbps \nFROM qos_bandwidth_limit_rules 
\nWHERE qos_bandwidth_limit_rules.qos_policy_id = ?'] [parameters: ('777',)]



==
Totals
==
Ran: 14 tests in 5. sec.
 - Passed: 10
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 4
Sum of execute time for each test: 2.2653 sec.

==
Worker Balance
==
 - Worker 0 (1 tests) = 0:00:00.388500
 - Worker 1 (1 tests) = 0:00:00.390602
 - Worker 2 (1 tests) = 0:00:00.362367
 - Worker 3 (1 tests) = 0:00:00.396344
 - Worker 4 (1 tests) = 0:00:00.125718
 - Worker 5 (2 tests) = 0:00:00.163376
 - Worker 6 (2 tests) = 0:00:00.164033
 - Worker 7 (5 tests) = 0:00:00.282335

Slowest Tests:

Test id 

[Yahoo-eng-team] [Bug 1459958] Re: Error messages returned to the user are not consistent across all apis

2015-08-26 Thread Rajesh Tailor
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Rajesh Tailor (rajesh-tailor)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459958

Title:
  Error messages returned to the user are not consistent across all apis

Status in Cinder:
  Fix Released
Status in OpenStack Compute (nova):
  New

Bug description:
  Error messages returned to the user are not consistent across all apis in 
case of all exceptions derived from NotFound exception,
  e.g., VolumeNotFound, SnapshotNotFound etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1459958/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488840] [NEW] nova volume attach in dashboard can not be choose device

2015-08-26 Thread huangpengtaohw
Public bug reported:

We can choose a device with the command:

nova volume-attach <server> <volume> [device]

to attach a volume to a server, but we cannot choose the device in the
dashboard.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488840

Title:
  nova volume attach in dashboard can not be choose device

Status in OpenStack Compute (nova):
  New

Bug description:
  We can choose a device with the command:

  nova volume-attach <server> <volume> [device]

  to attach a volume to a server, but we cannot choose the device in the
  dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1488840/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488859] [NEW] cleanup failed on these ports trace in kilo tests

2015-08-26 Thread Matthias Runge
Public bug reported:

while running tests on kilo:

Port cleanup failed for these port-ids (063cf7f3-ded1-4297-bc4c-31eae876cc91).
Traceback (most recent call last):
  File 
/home/mrunge/tmp/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py,
 line 904, in handle
config_drive=context.get('config_drive'))
  File /home/mrunge/tmp/horizon/.venv/lib/python2.7/site-packages/mox.py, 
line 765, in __call__
return mock_method(*params, **named_params)
  File /home/mrunge/tmp/horizon/.venv/lib/python2.7/site-packages/mox.py, 
line 1010, in __call__
raise expected_method._exception
ClientException: Expected failure.


Traces are not acceptable while running tests.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: kilo-backport-potential

** Tags added: kilo-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1488859

Title:
  cleanup failed on these ports trace in kilo tests

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  while running tests on kilo:

  
  Port cleanup failed for these port-ids (063cf7f3-ded1-4297-bc4c-31eae876cc91).
  Traceback (most recent call last):
File 
/home/mrunge/tmp/horizon/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py,
 line 904, in handle
  config_drive=context.get('config_drive'))
File /home/mrunge/tmp/horizon/.venv/lib/python2.7/site-packages/mox.py, 
line 765, in __call__
  return mock_method(*params, **named_params)
File /home/mrunge/tmp/horizon/.venv/lib/python2.7/site-packages/mox.py, 
line 1010, in __call__
  raise expected_method._exception
  ClientException: Expected failure.

  
  Traces are not acceptable while running tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1488859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488896] [NEW] WALinuxAgentShim fails to handle colons in packed values in dhclient.eth0.leases

2015-08-26 Thread Dan Watkins
Public bug reported:

We currently assume that a single colon in the unknown-245 dhclient key
indicates we are dealing with hex; this is not necessarily true. If any
part of the IP address is '58', then a colon will be included in the
packed values.

We should have a more robust determination of whether we're dealing with a
hex string.
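A minimal sketch of a stricter check, under the assumption that a real
hex-encoded value is a colon-separated list of two-digit hex octets; this
is illustrative, not cloud-init's eventual fix:

    import string

    def looks_like_hex_string(value):
        parts = value.split(':')
        if len(parts) < 2:
            return False
        return all(len(p) == 2 and all(c in string.hexdigits for c in p)
                   for p in parts)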

** Affects: cloud-init
 Importance: Undecided
 Assignee: Dan Watkins (daniel-thewatkins)
 Status: New

** Changed in: cloud-init
 Assignee: (unassigned) => Dan Watkins (daniel-thewatkins)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1488896

Title:
  WALinuxAgentShim fails to handle colons in packed values in
  dhclient.eth0.leases

Status in cloud-init:
  New

Bug description:
  We currently assume that a single colon in the unknown-245 dhclient
  key indicates we are dealing with hex; this is not necessarily true.
  If any part of the IP address is '58', then a colon will be included
  in the packed values.

  We should have a more robust determination of whether we're dealing
  with a hex string.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1488896/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488891] [NEW] WALinuxAgentShim fails to handle escaped characters in dhclient.eth0.leases

2015-08-26 Thread Dan Watkins
Public bug reported:

For example, a line of

  option unknown-245 "dH\"l";

in dhclient.eth0.leases should yield an IP address of 100.72.34.108 but
instead causes an exception:

Traceback (most recent call last):
  File /usr/lib/python3/dist-packages/cloudinit/sources/DataSourceAzure.py, 
line 249, in get_data
fabric_data = metadata_func()
  File /usr/lib/python3/dist-packages/cloudinit/sources/helpers/azure.py, 
line 289, in get_metadata_from_fabric
shim = WALinuxAgentShim()
  File /usr/lib/python3/dist-packages/cloudinit/sources/helpers/azure.py, 
line 210, in __init__
self.endpoint = self.find_endpoint()
  File /usr/lib/python3/dist-packages/cloudinit/sources/helpers/azure.py, 
line 237, in find_endpoint
endpoint_ip_address = socket.inet_ntoa(value)
OSError: packed IP wrong length for inet_ntoa
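A hedged sketch of decoding dhclient's escaped string form (a backslash
before quotes or backslashes, octal \NNN otherwise); the helper name is
illustrative:

    def unescape_lease_value(value):
        """Turn dhclient's escaped option string into raw bytes."""
        out, i = bytearray(), 0
        while i < len(value):
            if value[i] == '\\':
                nxt = value[i + 1]
                if nxt.isdigit():   # octal escape, e.g. \101
                    out.append(int(value[i + 1:i + 4], 8))
                    i += 4
                else:               # escaped literal, e.g. \" or \\
                    out.append(ord(nxt))
                    i += 2
            else:
                out.append(ord(value[i]))
                i += 1
        return bytes(out)

    # unescape_lease_value('dH\\"l') == b'dH"l', which inet_ntoa renders
    # as 100.72.34.108.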

** Affects: cloud-init
 Importance: Undecided
 Assignee: Dan Watkins (daniel-thewatkins)
 Status: New

** Changed in: cloud-init
 Assignee: (unassigned) => Dan Watkins (daniel-thewatkins)

** Summary changed:

- WALinuxAgentShim fails to handle escaped characters
+ WALinuxAgentShim fails to handle escaped characters in dhclient.eth0.leases

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1488891

Title:
  WALinuxAgentShim fails to handle escaped characters in
  dhclient.eth0.leases

Status in cloud-init:
  New

Bug description:
  For example, a line of

    option unknown-245 "dH\"l";

  in dhclient.eth0.leases should yield an IP address of 100.72.34.108
  but instead causes an exception:

  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceAzure.py", 
line 249, in get_data
      fabric_data = metadata_func()
    File "/usr/lib/python3/dist-packages/cloudinit/sources/helpers/azure.py", 
line 289, in get_metadata_from_fabric
      shim = WALinuxAgentShim()
    File "/usr/lib/python3/dist-packages/cloudinit/sources/helpers/azure.py", 
line 210, in __init__
      self.endpoint = self.find_endpoint()
    File "/usr/lib/python3/dist-packages/cloudinit/sources/helpers/azure.py", 
line 237, in find_endpoint
      endpoint_ip_address = socket.inet_ntoa(value)
  OSError: packed IP wrong length for inet_ntoa

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1488891/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472449] Re: download error when the image status is not active

2015-08-26 Thread Erno Kuvaja
Thanks Long Quan Sha,

I added python-glanceclient to the bug and assigned it to you, as
that's where your patch is directed, not glance.

I see two problems here: first, you're correct that our client should
be able to handle that situation; but foremost, our server should not
allow downloading an image while it's still queued.
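
A minimal sketch of the client-side guard (hypothetical helper names,
not the actual python-glanceclient patch): refuse to iterate a response
body the server returned as None, e.g. for a queued image that has no
locations yet.

    def save_image(data, path):
        # Guard against a body of None instead of letting iter() fail
        # with "iter() returned non-iterator of type 'NoneType'".
        if data is None:
            raise RuntimeError('Image has no data to download '
                               '(image status is probably not active)')
        with open(path, 'wb') as f:
            for chunk in data:
                f.write(chunk)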

** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: Long Quan Sha (shalq) => (unassigned)

** Changed in: python-glanceclient
 Assignee: (unassigned) => Long Quan Sha (shalq)

** Changed in: glance
   Status: In Progress => New

** Changed in: python-glanceclient
   Status: New => In Progress

** Changed in: glance
   Importance: Undecided => Medium

** Changed in: python-glanceclient
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1472449

Title:
  download error when the image status is not active

Status in Glance:
  New
Status in python-glanceclient:
  In Progress

Bug description:
  
  When 'locations' is blank, downloading an image shows a Python error, but 
the error message is not correct.

  [root@vm134 pe]# glance image-show 9be94a27-367f-4a26-ae7a-045db3cb7332
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | None |
  | container_format | None |
  | created_at   | 2015-07-02T09:09:22Z |
  | disk_format  | None |
  | id   | 9be94a27-367f-4a26-ae7a-045db3cb7332 |
  | locations| []   |
  | min_disk | 0|
  | min_ram  | 0|
  | name | test |
  | owner| e4b36a5b654942328943a835339a6289 |
  | protected| False|
  | size | None |
  | status   | queued   |
  | tags | []   |
  | updated_at   | 2015-07-02T09:09:22Z |
  | virtual_size | None |
  | visibility   | private  |
  +--+--+
  [root@vm134 pe]# glance image-download 9be94a27-367f-4a26-ae7a-045db3cb7332 
--file myimg
  iter() returned non-iterator of type 'NoneType'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1472449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488809] [NEW] [Juno][UCA] Non default configuration sections ignored for nova.conf

2015-08-26 Thread Bogdan Dobrelya
Public bug reported:

Non-default configuration sections ([glance], [neutron]) are ignored in
nova.conf when Nova is installed from UCA packages:

How to reproduce:
1) Install and configure OpenStack Juno Nova with Neutron at compute node using 
UCA (http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 Packages):
python-oslo.config 1:1.2.1-0ubuntu2
python-oslo.messaging 1.3.0-0ubuntu1.2
python-oslo.rootwrap 1.2.0-0ubuntu1
nova-common 1:2014.1.5-0ubuntu1.2
python-nova 1:2014.1.5-0ubuntu1.2
neutron-common 1:2014.1.5-0ubuntu1

/etc/nova/nova.conf example:
[DEFAULT]
debug=True
...
[glance]
api_servers=10.0.0.3:9292

[neutron]
admin_auth_url=http://10.0.0.3:5000/v2.0
admin_username=admin
admin_tenant_name=services
admin_password=admin
url=http://10.0.0.3:9696
...

2) From the nova log, check which values have been applied:
# grep -E 'admin_auth_url\s+=|admin_username\s+=|api_servers\s+=' 
/var/log/nova/nova-compute.log
2015-08-26 07:34:48.193 30535 DEBUG nova.openstack.common.service [-] 
glance_api_servers = ['192.168.121.14:9292'] log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
2015-08-26 07:34:48.210 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_auth_url = http://localhost:5000/v2.0 log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
2015-08-26 07:34:48.211 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_username = None log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941

Expected:
configuration options to be applied from [glance], [neutron] sections according 
to the docs 
http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html

Actual:
Defaults for the deprecated options were applied from the [DEFAULT] section 
instead
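
For context, a minimal sketch (illustrative only; option names taken
from the example above, and it assumes a recent oslo.config) of the
registration needed for the [glance] section to be honoured, with a
deprecated fallback to [DEFAULT]:

    from oslo_config import cfg  # older releases: from oslo.config import cfg

    glance_opts = [
        cfg.ListOpt('api_servers',
                    deprecated_name='glance_api_servers',
                    deprecated_group='DEFAULT',
                    help='Glance API servers'),
    ]
    cfg.CONF.register_opts(glance_opts, group='glance')
    # After parsing nova.conf, CONF.glance.api_servers reads the
    # [glance] section and only falls back to the deprecated option
    # in [DEFAULT] when the new section is absent.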

** Affects: cloud-archive
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: oslo.config
 Importance: Undecided
 Status: New


** Tags: oslo

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: oslo.config
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488809

Title:
  [Juno][UCA] Non default configuration sections ignored for nova.conf

Status in ubuntu-cloud-archive:
  New
Status in OpenStack Compute (nova):
  New
Status in oslo.config:
  New

Bug description:
  Non-default configuration sections ([glance], [neutron]) are ignored
  in nova.conf when Nova is installed from UCA packages:

  How to reproduce:
  1) Install and configure OpenStack Juno Nova with Neutron at compute node 
using UCA (http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 
Packages):
  python-oslo.config 1:1.2.1-0ubuntu2
  python-oslo.messaging 1.3.0-0ubuntu1.2
  python-oslo.rootwrap 1.2.0-0ubuntu1
  nova-common 1:2014.1.5-0ubuntu1.2
  python-nova 1:2014.1.5-0ubuntu1.2
  neutron-common 1:2014.1.5-0ubuntu1

  /etc/nova/nova.conf example:
  [DEFAULT]
  debug=True
  ...
  [glance]
  api_servers=10.0.0.3:9292

  [neutron]
  admin_auth_url=http://10.0.0.3:5000/v2.0
  admin_username=admin
  admin_tenant_name=services
  admin_password=admin
  url=http://10.0.0.3:9696
  ...

  2) From the nova log, check which values have been applied:
  # grep -E 'admin_auth_url\s+=|admin_username\s+=|api_servers\s+=' 
/var/log/nova/nova-compute.log
  2015-08-26 07:34:48.193 30535 DEBUG nova.openstack.common.service [-] 
glance_api_servers = ['192.168.121.14:9292'] log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
  2015-08-26 07:34:48.210 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_auth_url = http://localhost:5000/v2.0 log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
  2015-08-26 07:34:48.211 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_username = None log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941

  Expected:
  configuration options to be applied from [glance], [neutron] sections 
according to the docs 
http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html

  Actual:
  Defaults for the deprecated options were applied from the [DEFAULT] section 
instead

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1488809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488903] [NEW] Link attached to Json schema is wrong and heading violations in file

2015-08-26 Thread venkatamahesh
Public bug reported:

In
https://github.com/openstack/keystone/blob/master/doc/source/mapping_schema.rst,
the link attached to the JSON schema is wrong, and there is also a
heading violation.

Explanation: for heading 2 the underline should be '~'

** Affects: keystone
 Importance: Undecided
 Assignee: venkatamahesh (venkatamaheshkotha)
 Status: New


** Tags: low-hanging-fruit

** Changed in: keystone
 Assignee: (unassigned) => venkatamahesh (venkatamaheshkotha)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1488903

Title:
  Link attached to Json schema is wrong and heading violations in file

Status in Keystone:
  New

Bug description:
  In
  https://github.com/openstack/keystone/blob/master/doc/source/mapping_schema.rst,
  the link attached to the JSON schema is wrong, and there is also a
  heading violation.

  Explanation: for heading 2 the underline should be '~'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1488903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488820] [NEW] All floating IPs stop working after associating a new one

2015-08-26 Thread Hauke Bruno
Public bug reported:

This issue occurs on a fresh OpenStack Kilo installation (Ubuntu 14.04
LTS) with a single non-HA network node:

In general public access via floating IPs works, I can ping, ssh and so
on my instances.

But if I associate a new floating IP with a new instance, all floating
IPs (including the newly associated one) stop working (no ping or ssh
possible). The strange thing: if I just wait 5 minutes or run "service
openvswitch-switch restart" manually, everything goes back to working
like a charm.

I checked all neutron and ovs logs, but there aren't any errors.

Is there any periodic task running in the background every 5 minutes
which could affect that behavior?

cheers,
hauke

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488820

Title:
  All floating IPs stop working after associating a new one

Status in neutron:
  New

Bug description:
  This issue occurs on a fresh OpenStack Kilo installation (Ubuntu 14.04
  LTS) with a single non-HA network node:

  In general public access via floating IPs works, I can ping, ssh and
  so on my instances.

  But if I associate a new floating IP with a new instance, all
  floating IPs (including the newly associated one) stop working (no
  ping or ssh possible). The strange thing: if I just wait 5 minutes or
  run "service openvswitch-switch restart" manually, everything goes
  back to working like a charm.

  I checked all neutron and ovs logs, but there aren't any errors.

  Is there any periodic task running in the background every 5 minutes
  which could affect that behavior?

  cheers,
  hauke

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488912] [NEW] Neutron: security-group-list missing parameters

2015-08-26 Thread Jesse Klint
Public bug reported:

Issue: the option --tenant-id is not mentioned in the usage for
`neutron security-group-list`. This can lead to confusion about how to
provide a tenant for "List security groups that belong to a given
tenant".


# neutron help security-group-list
usage: neutron security-group-list [-h] [-f {csv,table}] [-c COLUMN]
   [--max-width <integer>]
   [--quote {all,minimal,none,nonnumeric}]
   [--request-format {json,xml}] [-D]
   [-F FIELD] [-P SIZE] [--sort-key FIELD]
   [--sort-dir {asc,desc}]

List security groups that belong to a given tenant.

optional arguments:
  -h, --help            show this help message and exit
  --request-format {json,xml}
                        The XML or JSON request format.
  -D, --show-details    Show detailed information.
  -F FIELD, --field FIELD
Specify the field(s) to be returned by server. You can
repeat this option.
  -P SIZE, --page-size SIZE
Specify retrieve unit of each request, then split one
request to several requests.
  --sort-key FIELD  Sorts the list by the specified fields in the
specified directions. You can repeat this option, but
you must specify an equal number of sort_dir and
sort_key values. Extra sort_dir options are ignored.
Missing sort_dir options use the default asc value.
  --sort-dir {asc,desc}
Sorts the list in the specified direction. You can
repeat this option.

output formatters:
  output formatter options

  -f {csv,table}, --format {csv,table}
the output format, defaults to table
  -c COLUMN, --column COLUMN
specify the column(s) to include, can be repeated

table formatter:
  --max-width <integer>
Maximum display width, 0 to disable

CSV Formatter:
  --quote {all,minimal,none,nonnumeric}
when to include quotes, defaults to nonnumeric

** Affects: python-neutronclient
 Importance: Undecided
 Status: New

** Project changed: bagpipe-l2 => neutron

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488912

Title:
  Neutron: security-group-list missing parameters

Status in python-neutronclient:
  New

Bug description:
  Issue: the option --tenant-id is not mentioned in the usage for
  `neutron security-group-list`. This can lead to confusion about how
  to provide a tenant for "List security groups that belong to a given
  tenant".

  
  # neutron help security-group-list
  usage: neutron security-group-list [-h] [-f {csv,table}] [-c COLUMN]
 [--max-width <integer>]
 [--quote {all,minimal,none,nonnumeric}]
 [--request-format {json,xml}] [-D]
 [-F FIELD] [-P SIZE] [--sort-key FIELD]
 [--sort-dir {asc,desc}]

  List security groups that belong to a given tenant.

  optional arguments:
    -h, --help            show this help message and exit
    --request-format {json,xml}
                          The XML or JSON request format.
    -D, --show-details    Show detailed information.
-F FIELD, --field FIELD
  Specify the field(s) to be returned by server. You can
  repeat this option.
-P SIZE, --page-size SIZE
  Specify retrieve unit of each request, then split one
  request to several requests.
--sort-key FIELD  Sorts the list by the specified fields in the
  specified directions. You can repeat this option, but
  you must specify an equal number of sort_dir and
  sort_key values. Extra sort_dir options are ignored.
  Missing sort_dir options use the default asc value.
--sort-dir {asc,desc}
  Sorts the list in the specified direction. You can
  repeat this option.

  output formatters:
output formatter options

-f {csv,table}, --format {csv,table}
  the output format, defaults to table
-c COLUMN, --column COLUMN
  specify the column(s) to include, can be repeated

  table formatter:
    --max-width <integer>
  Maximum display width, 0 to disable

  CSV Formatter:
--quote {all,minimal,none,nonnumeric}
  when to include quotes, defaults to 

[Yahoo-eng-team] [Bug 1488912] [NEW] Neutron: security-group-list missing parameters

2015-08-26 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Issue: the option --tenant-id is not mentioned in the usage for
`neutron security-group-list`. This can lead to confusion about how to
provide a tenant for "List security groups that belong to a given
tenant".


# neutron help security-group-list
usage: neutron security-group-list [-h] [-f {csv,table}] [-c COLUMN]
   [--max-width <integer>]
   [--quote {all,minimal,none,nonnumeric}]
   [--request-format {json,xml}] [-D]
   [-F FIELD] [-P SIZE] [--sort-key FIELD]
   [--sort-dir {asc,desc}]

List security groups that belong to a given tenant.

optional arguments:
  -h, --help            show this help message and exit
  --request-format {json,xml}
                        The XML or JSON request format.
  -D, --show-details    Show detailed information.
  -F FIELD, --field FIELD
Specify the field(s) to be returned by server. You can
repeat this option.
  -P SIZE, --page-size SIZE
Specify retrieve unit of each request, then split one
request to several requests.
  --sort-key FIELD  Sorts the list by the specified fields in the
specified directions. You can repeat this option, but
you must specify an equal number of sort_dir and
sort_key values. Extra sort_dir options are ignored.
Missing sort_dir options use the default asc value.
  --sort-dir {asc,desc}
Sorts the list in the specified direction. You can
repeat this option.

output formatters:
  output formatter options

  -f {csv,table}, --format {csv,table}
the output format, defaults to table
  -c COLUMN, --column COLUMN
specify the column(s) to include, can be repeated

table formatter:
  --max-width <integer>
Maximum display width, 0 to disable

CSV Formatter:
  --quote {all,minimal,none,nonnumeric}
when to include quotes, defaults to nonnumeric

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Neutron: security-group-list missing parameters
https://bugs.launchpad.net/bugs/1488912
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488924] [NEW] Example Link of Resource type association is wrong

2015-08-26 Thread venkatamahesh
Public bug reported:

In https://github.com/openstack/glance/blob/master/doc/source/metadefs-
concepts.rst the link given for one example is
https://github.com/openstack/nova-specs/blob/master/specs/juno/virt-
driver-vcpu-topology.rst. Clicking it yields an error: page not found.

Solution: the correct link is https://github.com/openstack/nova-
specs/blob/master/specs/juno/implemented/virt-driver-vcpu-topology.rst

** Affects: glance
 Importance: Undecided
 Assignee: venkatamahesh (venkatamaheshkotha)
 Status: New


** Tags: low-hanging-fruit

** Changed in: glance
 Assignee: (unassigned) => venkatamahesh (venkatamaheshkotha)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1488924

Title:
  Example Link of Resource type association is wrong

Status in Glance:
  New

Bug description:
  In https://github.com/openstack/glance/blob/master/doc/source
  /metadefs-concepts.rst the link given for one example is
  https://github.com/openstack/nova-specs/blob/master/specs/juno/virt-
  driver-vcpu-topology.rst. Clicking it yields an error: page not
  found.

  Solution: the correct link is https://github.com/openstack/nova-
  specs/blob/master/specs/juno/implemented/virt-driver-vcpu-
  topology.rst

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1488924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488913] [NEW] Deprecation warnings (removal in Django 1.8) in Data Processing

2015-08-26 Thread Rob Cresswell
Public bug reported:

Noticed the below message in the logs of the Data Processing panels:

WARNING:py.warnings:RemovedInDjango18Warning: In Django 1.8, widget
attribute placeholder=True will be rendered as 'placeholder'. To
preserve current behavior, use the string 'True' instead of the boolean
value.
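
A minimal sketch of the change the warning asks for (hypothetical form,
not the actual Horizon code): pass the widget attribute as the string
'True' rather than the boolean True.

    from django import forms

    class ExampleForm(forms.Form):
        # The string 'True' renders placeholder="True" in Django 1.8;
        # the boolean True would render a bare 'placeholder' attribute.
        name = forms.CharField(
            widget=forms.TextInput(attrs={'placeholder': 'True'}))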

** Affects: horizon
 Importance: Undecided
 Assignee: Vitaly Gridnev (vgridnev)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1488913

Title:
  Deprecation warnings (removal in Django 1.8) in Data Processing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Noticed the below message in the logs of the Data Processing panels:

  WARNING:py.warnings:RemovedInDjango18Warning: In Django 1.8, widget
  attribute placeholder=True will be rendered as 'placeholder'. To
  preserve current behavior, use the string 'True' instead of the
  boolean value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1488913/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489019] [NEW] ovs agent _bind_devices should query only existing ports

2015-08-26 Thread Rossella Sblendido
Public bug reported:

If a port is deleted right before _bind_devices is called,
get_ports_attributes will throw an exception because the row is not
found.
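
A minimal sketch of the proposed guard (assuming neutron's OVSBridge
helper and its if_exists flag): query only rows that still exist, so
ports deleted in the meantime are skipped instead of raising.

    def get_port_details(bridge, port_names):
        # if_exists=True tolerates rows deleted between the port
        # listing and this query instead of raising an exception.
        return bridge.get_ports_attributes(
            'Interface',
            columns=['name', 'external_ids', 'ofport'],
            ports=port_names,
            if_exists=True)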

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489019

Title:
  ovs agent _bind_devices should query only existing ports

Status in neutron:
  New

Bug description:
  If a port is deleted right before _bind_devices is called,
  get_ports_attributes will throw an exception because the row is not
  found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489015] [NEW] ovs agent _bind_devices should query only existing ports

2015-08-26 Thread Rossella Sblendido
Public bug reported:

If a port is deleted right before _bind_devices is called,
get_ports_attributes will throw an exception because the row is not
found.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489015

Title:
  ovs agent _bind_devices should query only existing ports

Status in neutron:
  New

Bug description:
  If a port is deleted right before _bind_devices is called,
  get_ports_attributes will throw an exception because the row is not
  found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489015/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1312199] Re: cirros 0.3.1 fails to boot

2015-08-26 Thread Scott Moser
** Changed in: cirros
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1312199

Title:
  cirros 0.3.1 fails to boot

Status in CirrOS:
  Won't Fix
Status in devstack:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Logstash query: message:"MP-BIOS bug" AND tags:"console"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIk1QLUJJT1MgYnVnXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTgzNDg0NzIzNzcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  cirros-0.3.1-x86_64-uec sometimes fails to boot with libvirt / soft
  qemu in the openstack gate jobs.

  The VM's serial console log ends with:

  [1.096067] ftrace: allocating 27027 entries in 106 pages
  [1.140070] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
  [1.148071] ..MP-BIOS bug: 8254 timer not connected to IO-APIC
  [1.148071] ...trying to set up timer (IRQ0) through the 8259A ...
  [1.148071] . (found apic 0 pin 2) ...
  [1.152071] ... failed.
  [1.152071] ...trying to set up timer as Virtual Wire IRQ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cirros/+bug/1312199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434161] Re: nova.tests.unit.conductor.test_conductor.ConductorTaskAPITestCase.test_rebuild_instance_with_scheduler_group_failure race fails in the gate

2015-08-26 Thread Matt Riedemann
We haven't seen this in a long time so marking it as fixed (since Kilo).

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1434161

Title:
  
nova.tests.unit.conductor.test_conductor.ConductorTaskAPITestCase.test_rebuild_instance_with_scheduler_group_failure
  race fails in the gate

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/30/164330/2/gate/gate-nova-
  python27/19869e2/console.html#_2015-03-19_15_30_04_664

  2015-03-19 15:30:04.664 | {5} 
nova.tests.unit.conductor.test_conductor.ConductorTaskAPITestCase.test_rebuild_instance_with_scheduler_group_failure
 [0.396073s] ... FAILED
  2015-03-19 15:30:04.664 | 
  2015-03-19 15:30:04.664 | Captured traceback:
  2015-03-19 15:30:04.664 | ~~~
  2015-03-19 15:30:04.664 | Traceback (most recent call last):
  2015-03-19 15:30:04.664 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 1201, in patched
  2015-03-19 15:30:04.664 | return func(*args, **keywargs)
  2015-03-19 15:30:04.664 |   File 
"nova/tests/unit/conductor/test_conductor.py", line 1746, in 
test_rebuild_instance_with_scheduler_group_failure
  2015-03-19 15:30:04.664 | exception, request_spec)
  2015-03-19 15:30:04.664 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 845, in assert_called_once_with
  2015-03-19 15:30:04.665 | raise AssertionError(msg)
  2015-03-19 15:30:04.665 | AssertionError: Expected to be called once. 
Called 2 times.

  Seeing it on two changes so far:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwibm92YS50ZXN0cy51bml0LmNvbmR1Y3Rvci50ZXN0X2NvbmR1Y3Rvci5Db25kdWN0b3JUYXNrQVBJVGVzdENhc2UudGVzdF9yZWJ1aWxkX2luc3RhbmNlX3dpdGhfc2NoZWR1bGVyX2dyb3VwX2ZhaWx1cmVcIiBBTkQgbWVzc2FnZTpcIkZBSUxFRFwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI2NzgxMzA5MDYwfQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1434161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488233] Re: FC with LUN ID 255 not recognized

2015-08-26 Thread John Griffith
** Also affects: cinder/kilo
   Importance: Undecided
   Status: New

** Tags removed: volumes
** Tags added: fibre-channel ibm

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488233

Title:
  FC with LUN ID 255 not recognized

Status in Cinder:
  New
Status in Cinder kilo series:
  In Progress
Status in OpenStack Compute (nova):
  New
Status in os-brick:
  In Progress

Bug description:
  (s390 architecture/System z Series only) FC LUNs with LUN ID 255 are 
recognized by neither Cinder nor Nova when trying to attach the volume.
  The issue is that Fibre-Channel volumes need to be added using the unit_add 
command with a properly formatted LUN string.
  The string is set correctly for LUNs <= 0xff, but not for LUN IDs within the 
range 0xff and 0x.
  Due to this, the volumes do not get properly added to the hypervisor 
configuration and the hypervisor does not find them.
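
  A minimal sketch of the formatting in question (an assumption based
  on SCSI addressing conventions, not the actual os-brick patch):

    def zfcp_lun_string(lun_id):
        # Peripheral addressing covers LUN IDs up to 0xff ...
        if lun_id <= 0xff:
            return '0x%04x000000000000' % lun_id
        # ... larger IDs need flat-space addressing (0x4000 flag),
        # which is where the mis-formatted string was produced.
        return '0x%04x000000000000' % (0x4000 | lun_id)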

  Note: The change for Liberty os-brick is ready. I would also like to
  patch it back to Kilo. Since os-brick has been integrated with
  Liberty, but was separate before, I need to release a patch for Nova,
  Cinder, and os-brick. Unfortunately there is no option on this page to
  nominate the patch for Kilo. Can somebody help? Thank you!

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1488233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489014] [NEW] ovs agent _bind_devices should query only existing ports

2015-08-26 Thread Rossella Sblendido
Public bug reported:

If a port is deleted right before _bind_devices is called,
get_ports_attributes will throw an exception because the row is not
found.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489014

Title:
  ovs agent _bind_devices should query only existing ports

Status in neutron:
  New

Bug description:
  If a port is deleted right before _bind_devices is called,
  get_ports_attributes will throw an exception because the row is not
  found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488860] [NEW] Mistake in description of manual api-ref-compute-v2.1

2015-08-26 Thread zhangjingwen
Public bug reported:

The original description in the manual is as follows:
"Lists the IP addresses assigned to an instance or show details for a 
specified IP address."

It should be "shows details" instead of "show details".

The reference link is:
http://developer.openstack.org/api-ref-compute-v2.1.html

** Affects: openstack-api-site
 Importance: Undecided
 Status: New


** Tags: api low-hanging-fruit

** Project changed: neutron => openstack-api-site

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488860

Title:
  Mistake in description of manual api-ref-compute-v2.1

Status in openstack-api-site:
  New

Bug description:
  The original description in the manual is as follows:
  "Lists the IP addresses assigned to an instance or show details for a 
  specified IP address."

  It should be "shows details" instead of "show details".

  The reference link is:
  http://developer.openstack.org/api-ref-compute-v2.1.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1488860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488884] [NEW] xapi live migration rollback should clean up any volume attached during migration

2015-08-26 Thread Sulochan Acharya
Public bug reported:

When live migrating an instance with a volume attached, the volume gets
attached to the destination during the process. If there is a failure
after that, the volume is left attached to the destination, and then
gets attached to the source.

Source cleanup is explicit; however, we do not do any cleanup during
rollback for xapi on live migration failure.

We simply need to forget the SR as part of the migration rollback.
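
A minimal sketch of that cleanup (hypothetical helper; it assumes
nova's XenAPI session wrapper and its call_xenapi method):

    def forget_volume_sr(session, sr_ref):
        # Drop the destination host's reference to the volume's
        # storage repository as part of live-migration rollback.
        session.call_xenapi('SR.forget', sr_ref)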

** Affects: nova
 Importance: Undecided
 Assignee: Sulochan Acharya (sulochan-acharya)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Sulochan Acharya (sulochan-acharya)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488884

Title:
  xapi live migration rollback should clean up any volume attached
  during migration

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When live migrating an instance with a volume attached, the volume
  gets attached to the destination during the process. If there is a
  failure after that, the volume is left attached to the destination,
  and then gets attached to the source.

  Source cleanup is explicit; however, we do not do any cleanup during
  rollback for xapi on live migration failure.

  We simply need to forget the SR as part of the migration rollback.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1488884/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488840] Re: nova volume attach in dashboard cannot choose a device

2015-08-26 Thread jichenjc
not related to nova

** Project changed: nova => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1488840

Title:
  nova volume attach in dashboard cannot choose a device

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We can choose a device using the command:

  nova volume-attach server volume [device]

  to attach a volume to a server. But we cannot choose a device in
  the dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1488840/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488740] Re: neutron dbsync fails with 44621190bc02_add_uniqueconstraint_ipavailability_ranges.py

2015-08-26 Thread Emilien Macchi
I think it might be invalid, because RDO needs an Alembic upgrade.
I'll close it and re-open it if I still have the issue.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488740

Title:
  neutron dbsync fails with
  44621190bc02_add_uniqueconstraint_ipavailability_ranges.py

Status in neutron:
  Invalid

Bug description:
  2015-08-26 02:50:39.383 | Debug: Executing 'neutron-db-manage
  --config-file /etc/neutron/neutron.conf --config-file
  /etc/neutron/plugin.ini upgrade head'

  2015-08-26 02:50:41.398 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: No handlers could 
be found for logger neutron.quota
  2015-08-26 02:50:41.399 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Context impl MySQLImpl.
  2015-08-26 02:50:41.399 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Will assume non-transactional DDL.
  2015-08-26 02:50:41.399 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Context impl MySQLImpl.
  2015-08-26 02:50:41.399 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Will assume non-transactional DDL.
  2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Running upgrade  - juno, juno_initial
  2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Running upgrade juno - 44621190bc02, 
add_uniqueconstraint_ipavailability_ranges
  2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Running upgrade 
for neutron ...
  2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Traceback (most 
recent call last):
  2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/bin/neutron-db-manage, line 10, in module
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
sys.exit(main())
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py, line 519, in 
main
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
CONF.command.func(config, CONF.command.name)
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py, line 152, in 
do_upgrade
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py, line 106, in 
do_alembic_command
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
getattr(alembic_command, cmd)(config, *args, **kwargs)
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/alembic/command.py, line 165, in upgrade
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
script.run_env()
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/alembic/script.py, line 382, in run_env
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
util.load_python_file(self.dir, 'env.py')
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/alembic/util.py, line 242, in 
load_python_file
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: module = 
load_module_py(module_id, path)
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/alembic/compat.py, line 79, in load_module_py
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: mod = 
imp.load_source(module_id, path, fp)
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py,
 line 126, in <module>
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:  

[Yahoo-eng-team] [Bug 1488507] Re: Wily daily MAAS cloud image fails to fully install.

2015-08-26 Thread Scott Moser
not sure why this wasn't marked fix-released by 0.7.7~bzr1138-0ubuntu1 ,
but it should be fixed now.


** Changed in: cloud-init
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1488507

Title:
  Wily daily MAAS cloud image fails to fully install.

Status in cloud-init:
  Fix Released

Bug description:
  Aug 24 20:47:32 ubuntu cloud-init[804]: 2015-08-24 20:47:32,158 - 
util.py[WARNING]: Getting data from class 
'cloudinit.sources.DataSourceMAAS.DataSourceMAAS' failed
  Aug 24 20:47:32 ubuntu pollinate[850]: ERROR: Network communication failed 
[6]\n  % Total% Received % Xferd  Average Speed   TimeTime Time  
Current#012 Dload  Upload   Total   Spent
Left  Speed#012#015  0 00 00 0  0  0 --:--:-- 
--:--:-- --:--:-- 020:47:32.316865 * Could not resolve host: 
entropy.ubuntu.com#01220:47:32.316975 * Closing connection 0#012curl: (6) Could 
not resolve host: entropy.ubuntu.com
  Aug 24 20:47:32 ubuntu pollinate[706]: Aug 24 20:47:32 ubuntu 13Aug 24 
20:47:32 pollinate[850]: ERROR: Network communication failed [6]\n  % Total
% Received % Xferd  Average Speed   TimeTime Time  Current
  Aug 24 20:47:32 ubuntu pollinate[706]: Dload  Upload   Total   SpentLeft  
Speed
  Aug 24 20:47:32 ubuntu pollinate[706]: 0 00 00 0  0   
   0 --:--:-- --:--:-- --:--:-- 020:47:32.316865 * Could not resolve host: 
entropy.ubuntu.com
  Aug 24 20:47:32 ubuntu pollinate[706]: 20:47:32.316975 * Closing connection 0
  Aug 24 20:47:32 ubuntu pollinate[706]: curl: (6) Could not resolve host: 
entropy.ubuntu.com
  Aug 24 20:47:32 ubuntu systemd[1]: pollinate.service: Main process exited, 
code=exited, status=1/FAILURE
  Aug 24 20:47:32 ubuntu systemd[1]: Failed to start Seed the pseudo random 
number generator on first boot.
  Aug 24 20:47:32 ubuntu systemd[1]: pollinate.service: Unit entered failed 
state.
  Aug 24 20:47:32 ubuntu systemd[1]: pollinate.service: Failed with result 
'exit-code'.
  Aug 24 20:47:42 ubuntu cloud-init[804]: 2015-08-24 20:47:42,170 - 
DataSourceCloudSigma.py[WARNING]: failed to get hypervisor product name via dmi 
data
  Aug 24 20:47:42 ubuntu cloud-init[804]: 2015-08-24 20:47:42,171 - 
util.py[WARNING]: Getting data from class 
'cloudinit.sources.DataSourceSmartOS.DataSourceSmartOS' failed
  Aug 24 20:48:32 ubuntu cloud-init[804]: 2015-08-24 20:48:32,226 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: 
request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries 
exceeded with url: /2009-04-04/meta-data/instance-id (Caused by 
ConnectTimeoutError(<requests.packages.urllib3.connection.HTTPConnection object 
at 0x7fd6503fe048>, 'Connection to 169.254.169.254 timed out. (connect 
timeout=50.0)'))]
  Aug 24 20:49:23 ubuntu cloud-init[804]: 2015-08-24 20:49:23,285 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: 
request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries 
exceeded with url: /2009-04-04/meta-data/instance-id (Caused by 
ConnectTimeoutError(<requests.packages.urllib3.connection.HTTPConnection object 
at 0x7fd6503f11d0>, 'Connection to 169.254.169.254 timed out. (connect 
timeout=50.0)'))]

  
  See the attached syslog.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1488507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488996] [NEW] QoS doesn't work when l2pop is enabled

2015-08-26 Thread John Schwarz
Public bug reported:

My ml2 configuration file contains the following:

[ml2]
extension_drivers = port_security,qos
mechanism_drivers = openvswitch,l2population


However, when trying to get a list of available rule types, the neutron-server 
logs this to the log file:

WARNING neutron.plugins.ml2.managers [req-19db3de7-1a1a-
42b5-b4c0-b9f146a6bcac admin b44ee578c44a426e81752b4df76c1a89]
l2population does not support QoS; no rule types available

Seems to me like this should not be the case, as l2pop has nothing to do
with QoS. Probably other mechanism drivers also produce the same error.
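
For reference, a minimal sketch (hypothetical attribute name, not the
actual ml2 code) of the intersection logic the warning points at: the
server reports only rule types every loaded mechanism driver supports,
so one driver that declares none empties the whole list.

    def supported_rule_types(mechanism_drivers):
        # Intersect the rule types declared by each driver; a driver
        # with no declared QoS support reduces the result to nothing.
        types = None
        for driver in mechanism_drivers:
            declared = set(getattr(driver, 'supported_qos_rule_types', []))
            types = declared if types is None else types & declared
        return types or set()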

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l2-pop qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488996

Title:
  QoS doesn't work when l2pop is enabled

Status in neutron:
  New

Bug description:
  My ml2 configuration file contains the following:

  [ml2]
  extension_drivers = port_security,qos
  mechanism_drivers = openvswitch,l2population

  
  However, when trying to get a list of available rule types, the 
neutron-server logs this to the log file:

  WARNING neutron.plugins.ml2.managers [req-19db3de7-1a1a-
  42b5-b4c0-b9f146a6bcac admin b44ee578c44a426e81752b4df76c1a89]
  l2population does not support QoS; no rule types available

  Seems to me like this should not be the case, as l2pop has nothing to
  do with QoS. Probably other mechanism drivers also produce the same
  error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488996/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488986] [NEW] nova scheduler for race condition

2015-08-26 Thread hougangliu
Public bug reported:

a) The nova compute service updates compute-node info by running
update_available_resource every CONF.update_resources_interval (60s by
default).
b) For every scheduler request:
1. select_destinations is called and gets all HostStates (if the
compute-node record is newer than the local HostState info, based on
updated_at, the HostState is updated with the compute info from the DB).
2. The scheduler checks host by host whether the host's resources can
meet the instance requirement, updating the HostState resources
iteratively; if yes, it sends a build_and_run_instance cast RPC to the
corresponding compute node.
3. The compute service accepts the AMQP message, consumes the instance
requirement, and writes the new compute info into the DB.
4. The compute service tries to spawn the instance; if that fails, step
3 is rolled back.

My question:
Suppose the user sets CONF.update_resources_interval to 1s, that is,
the compute node service updates compute info into the DB every 1s.
Consider this case: the user sends multiple nova boot requests; the
first boot request reaches step 2 while the compute node service runs
the periodic task update_available_resource at the same time. The
second boot request reaches step 1 before the first request reaches
step 3, so the second boot request gets a HostState set without the
first instance's consumption, and the scheduler will schedule a host
for it without considering the first instance's consumption. The
following requests repeat this pattern.

Can this race condition occur?

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488986

Title:
  nova scheduler for race condition

Status in OpenStack Compute (nova):
  New

Bug description:
  a) The nova compute service updates compute-node info by running
  update_available_resource every CONF.update_resources_interval (60s
  by default).
  b) For every scheduler request:
  1. select_destinations is called and gets all HostStates (if the
  compute-node record is newer than the local HostState info, based on
  updated_at, the HostState is updated with the compute info from the
  DB).
  2. The scheduler checks host by host whether the host's resources can
  meet the instance requirement, updating the HostState resources
  iteratively; if yes, it sends a build_and_run_instance cast RPC to
  the corresponding compute node.
  3. The compute service accepts the AMQP message, consumes the
  instance requirement, and writes the new compute info into the DB.
  4. The compute service tries to spawn the instance; if that fails,
  step 3 is rolled back.

  My question:
  Suppose the user sets CONF.update_resources_interval to 1s, that is,
  the compute node service updates compute info into the DB every 1s.
  Consider this case: the user sends multiple nova boot requests; the
  first boot request reaches step 2 while the compute node service runs
  the periodic task update_available_resource at the same time. The
  second boot request reaches step 1 before the first request reaches
  step 3, so the second boot request gets a HostState set without the
  first instance's consumption, and the scheduler will schedule a host
  for it without considering the first instance's consumption. The
  following requests repeat this pattern.

  Can this race condition occur?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1488986/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489194] [NEW] hw_scsi_model from glance image is not used when booting instance from new volume

2015-08-26 Thread Logan V
Public bug reported:

When creating an instance backed by a cinder volume, the disk device /dev/vda 
is used regardless of image settings. I am using the following image metadata 
to set virtio-scsi driver on my instances:
hw_disk_bus=scsi
hw_scsi_model=virtio-scsi

When I boot instances using a normal root device (boot from image),
they are using /dev/sda and virtio-scsi as expected. When booting from
volume (either with a new volume or an existing image-based volume),
they use <target dev='vda' bus='virtio'/>, ignoring the image
metadata.

According to this spec: http://specs.openstack.org/openstack/nova-
specs/specs/juno/approved/add-virtio-scsi-bus-for-bdm.html

A work item was: Nova retrieve “hw_scsi_model” property from volume’s
glance_image_metadata when booting from cinder volume

I would expect this work is what would implement setting virtio-scsi on
volume backed instances, however none of the reviews I have looked
through for that spec appear to implement anything regarding volume
backed instances.
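
For reproduction, a minimal sketch (assuming a v1 python-glanceclient
endpoint and token; the image ID is a placeholder) of setting the
properties whose propagation is in question:

    from glanceclient import Client

    glance = Client('1', endpoint='http://127.0.0.1:9292', token='TOKEN')
    # Placeholder image ID; both properties are honoured when booting
    # from image but reportedly ignored when booting from volume.
    glance.images.update('IMAGE_ID',
                         properties={'hw_disk_bus': 'scsi',
                                     'hw_scsi_model': 'virtio-scsi'})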

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489194

Title:
  hw_scsi_model from glance image is not used when booting instance from
  new volume

Status in OpenStack Compute (nova):
  New

Bug description:
  When creating an instance backed by a cinder volume, the disk device /dev/vda 
is used regardless of image settings. I am using the following image metadata 
to set virtio-scsi driver on my instances:
  hw_disk_bus=scsi
  hw_scsi_model=virtio-scsi

  When I boot instances using a normal root device (boot from image),
  they are using /dev/sda and virtio-scsi as expected. When booting from
  volume (either with a new volume or an existing image-based volume),
  they use <target dev='vda' bus='virtio'/>, ignoring the image
  metadata.

  According to this spec: http://specs.openstack.org/openstack/nova-
  specs/specs/juno/approved/add-virtio-scsi-bus-for-bdm.html

  A work item was: Nova retrieve “hw_scsi_model” property from
  volume’s glance_image_metadata when booting from cinder volume

  I would expect this work is what would implement setting virtio-scsi
  on volume backed instances, however none of the reviews I have looked
  through for that spec appear to implement anything regarding volume
  backed instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489197] [NEW] Flavor service profiles map lacks tenant_id

2015-08-26 Thread James Arendt
Public bug reported:

The neutron v2 base.py auto-populates a 'tenant_id' attribute on calls
if the attribute is not passed, pulling it from the context.

This causes a POST to create a flavor service binding to fail when
verifying attributes, as tenant_id is currently not part of the
expected attribute map.

curl -g -i -X POST http://192.168.181.169:9696/v2.0/flavors/e38b4b6d-
872e-4656-b7bc-70e15455ee46/service_profiles.json -H "User-Agent:
python-neutronclient" -H "Content-Type: application/json" -H "Accept:
application/json" -H "X-Auth-Token: AnAuthToken" -d
'{"service_profile": {"id": "7fd54b73-6bf6-4fc8--0045a808eec2"}}'

RESP BODY: {"NeutronError": {"message": "Unrecognized attribute(s)
'tenant_id'", "type": "HTTPBadRequest", "detail": ""}}

The solution, used by extensions like QoS, is to add tenant_id as a
common field to the attribute map.
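
A minimal sketch of that fix (abridged, illustrative attribute map, not
the exact neutron code): declare tenant_id so the auto-populated value
passes verification.

    SERVICE_PROFILE_ATTRIBUTE_MAP = {
        'id': {'allow_post': True, 'allow_put': False,
               'validate': {'type:uuid': None},
               'is_visible': True},
        # Declaring tenant_id lets the value auto-populated by the
        # base controller pass attribute verification.
        'tenant_id': {'allow_post': True, 'allow_put': False,
                      'validate': {'type:string': 255},
                      'is_visible': True},
    }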

** Affects: neutron
 Importance: Undecided
 Assignee: James Arendt (james-arendt-7)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => James Arendt (james-arendt-7)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489197

Title:
  Flavor service profiles map lacks tenant_id

Status in neutron:
  New

Bug description:
  The neutron v2 base.py auto-populates a 'tenant_id' attribute on
  calls if the attribute is not passed, pulling it from the context.

  This causes a POST to create a flavor service binding to fail when
  verifying attributes, as tenant_id is currently not part of the
  expected attribute map.

  curl -g -i -X POST http://192.168.181.169:9696/v2.0/flavors/e38b4b6d-
  872e-4656-b7bc-70e15455ee46/service_profiles.json -H "User-Agent:
  python-neutronclient" -H "Content-Type: application/json" -H "Accept:
  application/json" -H "X-Auth-Token: AnAuthToken" -d
  '{"service_profile": {"id": "7fd54b73-6bf6-4fc8--0045a808eec2"}}'

  RESP BODY: {"NeutronError": {"message": "Unrecognized attribute(s)
  'tenant_id'", "type": "HTTPBadRequest", "detail": ""}}

  The solution, used by extensions like QoS, is to add tenant_id as a
  common field to the attribute map.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489200] [NEW] Upon VM deletes, SG iptables not cleaned up, garbage piles up

2015-08-26 Thread Ramu Ramamurthy
Public bug reported:


Summary: 40 VMs are created and then deleted on the same host. At the
end of this, I find that iptables rules for some ports are not cleaned
up and remain as garbage. This garbage keeps piling up as more VMs are
created and deleted.

Topology:
 Neutron network using OVS & neutron security groups.

Test Case:
 1) create 1 network, 1 subnetwork
 2) boot 40 VMs on one hypervisor and 40 VMs on another hypervisor
using the default Security Group
 3) run some traffic tests between VMs
 4) delete all VMs

Result:
 iptables rules are not cleaned up for the ports of the VMs.

Root Cause:
 In the neutron-ovs-agent polling loop, there is an exception during
the processing of port events. As a result of this exception, the
neutron-ovs-agent resyncs with the plugin. This takes a while; at the
same time, VM ports are getting deleted. In this scenario, the
neutron-ovs-agent misses some deleted ports and does not clean up SG
filters for those missed ports.

Reproducibility:
 Happens almost every time. With more VMs, it is more likely.

Logs:
 Attached are a set of neutron-ovs-agent logs and the garbage iptables
rules that remain.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489200

Title:
  Upon VM deletes, SG iptables not cleaned up, garbage piles up

Status in neutron:
  New

Bug description:
  
  Summary:  40 VMs are created and then deleted on the same host. At the end of 
this, I find that iptables rules for some ports are not cleaned up, and remain 
as garbage. This garbage keeps piling up, as more VMs are created and deleted. 

  Topology:
   Neutron Network using OVS  neutron security groups.

  Test Case:
  
   1) create 1 network, 1 subnetwork
   2) boot 40 VMs on one hypervisor  and 40 VMs on another 
hypervisor using the default Security Group
   3) Run some traffic tests between VMs
   4) delete all VMs

  Result:
 Find that iptables rules are not cleaned up for the ports 
of the VMs

  Root Cause:
   In the neutron-ovs-agent polling loop, there is an exception
during the processing of port events. As a result of this exception, the
neutron-ovs-agent resyncs with the plugin. This takes a while; at the same
time, VM ports are being deleted. In this scenario, the neutron-ovs-agent
misses some deleted ports and does not clean up the SG filters for those
missed ports.

  Reproducibility:

    Happens almost every time. With a larger number of VMs, it is more likely.

  Logs:

   Attached are a set of neutron-ovs-agent logs, and the
  garbage iptables rules that remain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489200/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488282] Re: Gate failures with 'the resource could not be found'

2015-08-26 Thread Salvatore Orlando
Actually, the root cause of the failure I observed was different.

This appears to be a genuine nova error where a server is deleted by
another test while the list operation is in progress. It is also
interesting that nova fails with a 404 here - this really does appear to
be a bug (a sketch of the race follows the links below).

In support of my thesis I can provide examples where the same failure
trace occurs both with [1], [2] and without [3], [4] neutron.

Also, during a server list operation there is no interaction between
nova and neutron.

[1] 
http://logs.openstack.org/04/215604/15/gate/gate-tempest-dsvm-neutron-full/37eb7aa/console.html
[2] 
http://logs.openstack.org/04/215604/15/gate/gate-tempest-dsvm-neutron-full/37eb7aa/logs/screen-n-api.txt.gz#_2015-08-26_15_32_57_698
[3] 
http://logs.openstack.org/67/214067/2/gate/gate-tempest-dsvm-full/1b348a3/console.html
[4] 
http://logs.openstack.org/67/214067/2/gate/gate-tempest-dsvm-full/1b348a3/logs/screen-n-api.txt.gz#_2015-08-25_14_50_13_779
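
The failure mode is consistent with the following sketch (hypothetical
helper names, not nova's actual code): the list path fetches the set of
IDs and then looks up each server's details, so a concurrent delete
between the two steps surfaces as NotFound and bubbles up as a 404 unless
it is tolerated:

    class NotFound(Exception):
        pass

    def list_servers_detail(db):
        servers = []
        for server_id in db.list_server_ids():
            try:
                servers.append(db.get_server(server_id))
            except NotFound:
                continue  # deleted concurrently; skip instead of 404ing
        return servers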

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488282

Title:
  Gate failures with 'the resource could not be found'

Status in neutron:
  Confirmed
Status in OpenStack Compute (nova):
  New

Bug description:
  There have been spurious failures happening in the gate. The most
  prominent one is:

  
  ft1.186: 
tempest.api.compute.admin.test_servers.ServersAdminTestJSON.test_list_servers_by_admin_with_all_tenants[id-9f5579ae-19b4-4985-a091-2a5d56106580]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2015-08-24 22:55:50,083 32355 INFO [tempest_lib.common.rest_client] 
Request (ServersAdminTestJSON:test_list_servers_by_admin_with_all_tenants): 404 
GET 
http://127.0.0.1:8774/v2/fb99c79318b54e668713b25afc52f81a/servers/detail?all_tenants=
 0.834s
  2015-08-24 22:55:50,083 32355 DEBUG[tempest_lib.common.rest_client] 
Request - Headers: {'X-Auth-Token': 'omitted', 'Accept': 'application/json', 
'Content-Type': 'application/json'}
  Body: None
  Response - Headers: {'content-length': '78', 'date': 'Mon, 24 Aug 2015 
22:55:50 GMT', 'connection': 'close', 'content-type': 'application/json; 
charset=UTF-8', 'x-compute-request-id': 
'req-387b21a9-4ada-48ee-89ed-9acfe5274ef7', 'status': '404'}
  Body: {"itemNotFound": {"message": "The resource could not be
found.", "code": 404}}
  }}}

  Traceback (most recent call last):
File tempest/api/compute/admin/test_servers.py, line 81, in 
test_list_servers_by_admin_with_all_tenants
  body = self.client.list_servers(detail=True, **params)
File tempest/services/compute/json/servers_client.py, line 159, in 
list_servers
  resp, body = self.get(url)
File 
/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 271, in get
  return self.request('GET', url, extra_headers, headers)
File 
/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 643, in request
  resp, resp_body)
File 
/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py,
 line 695, in _error_checker
  raise exceptions.NotFound(resp_body)
  tempest_lib.exceptions.NotFound: Object not found
  Details: {u'code': 404, u'message': u'The resource could not be found.'}

  
  but there are other similar failure modes. This seems to be related to bug 
#1269284

  The logstash query:

  message:"tempest_lib.exceptions.NotFound: Object not found" AND
  build_name:"gate-tempest-dsvm-neutron-full"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVtcGVzdF9saWIuZXhjZXB0aW9ucy5Ob3RGb3VuZDogT2JqZWN0IG5vdCBmb3VuZFwiIEFORCBidWlsZF9uYW1lOlwiZ2F0ZS10ZW1wZXN0LWRzdm0tbmV1dHJvbi1mdWxsXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDA0NjIwNzcyMjksIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488282/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489212] [NEW] 'Public' and 'Protected' can't be set to True in metadata

2015-08-26 Thread wangxiyuan
Public bug reported:

Reproduce:

1. Click System -> Metadata Definitions

2. Set 'Public' or 'Protected' to False in any section.

3. Set them back to True. It does not work and the value remains False.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489212

Title:
  'Public' and 'Protected' can't be set to True in metadata

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Reproduce:

  1. Click System -> Metadata Definitions

  2. Set 'Public' or 'Protected' to False in any section.

  3. Set them back to True. It does not work and the value remains False.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489212/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487323] Re: Grammar mistakes in manual “Networking API v2.0 (CURRENT)”

2015-08-26 Thread zhangjingwen
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487323

Title:
  Grammar mistakes in manual “Networking API v2.0 (CURRENT)”

Status in neutron:
  Invalid

Bug description:
  
  The manual link is:
  http://developer.openstack.org/api-ref-networking-v2.html

  1. The description in the manual is as below:
  "Use virtual networking services among devices that are managed by the
OpenStack Compute service." "Enables users to associate IP address blocks and
other network configuration settings with an OpenStack Networking network."

  Because "Use" is used in the first sentence and it is an imperative
  sentence, "Enables" in the last sentence is not correct. I think it
  should be "Enable".

  2. The description in the same location as above is as below:
  "The Networking (neutron) API v2.0 combines the API v1.1 functionality with
some essential Internet Protocol Address Management (IPAM) functionality."

  Because "some" is used and I think "some" here does not mean
  "certain", the last "functionality" should be "functionalities".

  3. The description in the same location as above is as below:
  "Enables users to associate IP address blocks and other network configuration
settings with an OpenStack Networking network. You can choose a specific IP
address from the block or let OpenStack Networking choose the first available
IP address."

  According to the context, the "block" in the last sentence should be
  "blocks".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487323/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489260] [NEW] trust details unavailable for admin token

2015-08-26 Thread Gilles Dubreuil
Public bug reported:

When authenticated via the admin token, trust details are not available.

Trusts can be listed:
---
# openstack trust list -f csv
"ID","Expires At","Impersonation","Project ID","Trustee User ID","Trustor User ID"
"259d57b4998c484892ae3bdd7a84f147","2101-01-01T01:01:01.00Z","False","a41030cd0872497893c0f00a29996961","64eea97a9ea54981a41cc7e40944a181","6bb8aef337134b948dcbc0bd6ac34633"
---

But details cannot be shown:
---
# openstack trust show 259d57b4998c484892ae3bdd7a84f147
ERROR: openstack No trust with a name or ID of 
'259d57b4998c484892ae3bdd7a84f147' exists.
---

From the debug output we can see the authorization to perform the requested
action being rejected:
http://paste.openstack.org/raw/427927/

I discussed the issue with jamielennox, who confirmed that the trust details
are visible only to the trustor/trustee:
https://github.com/openstack/keystone/blob/master/keystone/trust/controllers.py#L75


But I (and jamielennox) believe that the admin token should have access to
them too; a sketch of the proposed check follows.
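
A minimal sketch of the proposed change, assuming a visibility check along
the lines of the linked controller code (illustrative names, not the actual
keystone implementation):

    class Forbidden(Exception):
        pass

    def check_trust_visibility(context, trust):
        # Proposed: an admin-token context may inspect any trust.
        if getattr(context, 'is_admin', False):
            return
        # Today: only the trustor or the trustee may see the details.
        if context.user_id not in (trust['trustor_user_id'],
                                   trust['trustee_user_id']):
            raise Forbidden('trust details are restricted')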

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1489260

Title:
  trust details unavailable for admin token

Status in Keystone:
  New

Bug description:
  When authenticated via the admin token, trust details are not available.

  Trusts can be listed:
  ---
  # openstack trust list -f csv
  "ID","Expires At","Impersonation","Project ID","Trustee User ID","Trustor User ID"
  "259d57b4998c484892ae3bdd7a84f147","2101-01-01T01:01:01.00Z","False","a41030cd0872497893c0f00a29996961","64eea97a9ea54981a41cc7e40944a181","6bb8aef337134b948dcbc0bd6ac34633"
  ---

  But details cannot be shown:
  ---
  # openstack trust show 259d57b4998c484892ae3bdd7a84f147
  ERROR: openstack No trust with a name or ID of 
'259d57b4998c484892ae3bdd7a84f147' exists.
  ---

  From the debug output we can see the authorization to perform the requested
action being rejected:
  http://paste.openstack.org/raw/427927/

  I discussed the issue with jamielennox, who confirmed that the trust details
are visible only to the trustor/trustee:
  
https://github.com/openstack/keystone/blob/master/keystone/trust/controllers.py#L75

  
  But I (and jamielennox) believe that the admin token should have access to
them too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1489260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427015] Re: too many subnet-create cause q-dhcp failure

2015-08-26 Thread watanabe.isao
** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427015

Title:
  too many subnet-create cause q-dhcp failure

Status in neutron:
  In Progress
Status in tempest:
  New

Bug description:
  The DHCP maximum fixed IPs per port is only validated when a port is
created. When the create-port request is sent as part of a subnet create or
update, the validation is not performed. As a result, the total number of
DHCP fixed IPs can exceed max_fixed_ips_per_port. When that happens, the
DHCP agent reports an error and cannot restart itself. Also, the user is not
notified that the fixed IP was not created after the subnet creation, even
though the subnet's enable_dhcp shows True. A sketch of the missing check
follows below.
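
  A minimal sketch of the missing check, applied on the subnet
create/update path instead of only on port create (illustrative names;
max_per_port corresponds to neutron's max_fixed_ips_per_port option):

    class InvalidInput(Exception):
        pass

    def validate_dhcp_fixed_ips(dhcp_port, new_subnets, max_per_port=5):
        """Reject a subnet change that would overflow the DHCP port."""
        total = len(dhcp_port['fixed_ips']) + len(new_subnets)
        if total > max_per_port:
            raise InvalidInput(
                'subnet would exceed max_fixed_ips_per_port (%d)'
                % max_per_port)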

  [reproduce]
  1. neutron net create testnet
  2. neutron dhcp-agent-network-add <dhcp_agent_id> testnet
  3. neutron subnet-create testnet CIDR1 --name testsub1
  4. neutron subnet-create testnet CIDR2 --name testsub2
  5. neutron subnet-create testnet CIDR3 --name testsub3
  6. neutron subnet-create testnet CIDR4 --name testsub4
  7. neutron subnet-create testnet CIDR5 --name testsub5
  Since the default value of max_fixed_ips_per_port is 5, everything is OK up to this point.
  8-1. neutron subnet-create testnet CIDR6 --name testsub6
  Errors repeatedly occur in q-dhcp.log.

  Also, confirmed that the following case causes the same error:
  9-1. neutron subnet-create testnet CIDR6 --name testsub6 --enable_dhcp False
  9-2. neutron subnet-update testsub6 --enable_dhcp True

  [trace log]
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/dhcp/agent.py, line 112, in call_driver
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 132, in restart
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent self.enable()
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 205, in enable
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent 
interface_name = self.device_manager.setup(self.network)
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 919, in setup
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent port = 
self.setup_dhcp_port(network)
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 863, in setup_dhcp_port
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent 'fixed_ips': 
port_fixed_ips}})
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/dhcp/agent.py, line 441, in update_dhcp_port
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent 
port_id=port_id, port=port, host=self.host)
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py, line 
156, in call
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent 
retry=self.retry)
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py, line 90, 
in _send
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent 
timeout=timeout, retry=retry)
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py, 
line 349, in send
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent retry=retry)
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py, 
line 340, in _send
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent raise result
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent RemoteError: 
Remote error: InvalidInput Invalid input for operation: Exceeded maximim amount 
of fixed ips per port.
  2015-02-28 00:31:45.548 3011 TRACE neutron.agent.dhcp.agent [u'Traceback 
(most recent call last):\n', u'  File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
142, in _dispatch_and_reply\nexecutor_callback))\n', u'  File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
186, in _dispatch\nexecutor_callback)\n', u'  File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
130, in _do_dispatch\nresult = func(ctxt, **new_args)\n', u'  File 
/opt/stack/neutron/neutron/api/rpc/handlers/dhcp_rpc.py, line 312, in 
update_dhcp_port\nreturn 

[Yahoo-eng-team] [Bug 1489268] [NEW] [VPNaaS] DVR unit tests in VPNaaS failing

2015-08-26 Thread venkata anil
Public bug reported:

VPNaaS unit tests for DVR are failing with the error below; a possible
test-side workaround is sketched after the traceback.

AttributeError: 'DvrEdgeRouter' object has no attribute
'create_snat_namespace'

Captured traceback:
~~~
Traceback (most recent call last):
  File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
934, in setUp
ipsec_process)
  File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
638, in setUp
self._make_dvr_edge_router_info_for_test()
  File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
646, in _make_dvr_edge_router_info_for_test
router.create_snat_namespace()
AttributeError: 'DvrEdgeRouter' object has no attribute 
'create_snat_namespace'
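
One possible stopgap for the tests, assuming the hook was renamed or made
private in a neutron DVR refactor (the getattr probing below is an
illustrative sketch; the proper fix is to follow the current neutron API):

    def make_snat_namespace(router):
        """Call whichever snat-namespace hook this neutron exposes."""
        create = (getattr(router, 'create_snat_namespace', None)
                  or getattr(router, '_create_snat_namespace', None))
        if create is None:
            raise AttributeError('no snat namespace hook on %r' % router)
        return create()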


The following 12 test cases related to dvr_edge_router are failing:

failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart
failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489268

Title:
  [VPNaaS] DVR unit tests in VPNaaS failing

Status in neutron:
  New

Bug description:
  VPNaaS unit tests for DVR are failing with the error below:

  AttributeError: 'DvrEdgeRouter' object has no attribute
  'create_snat_namespace'

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
934, in setUp
  ipsec_process)
File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
638, in setUp
  self._make_dvr_edge_router_info_for_test()
File 
neutron_vpnaas/tests/unit/services/vpn/device_drivers/test_ipsec.py, line 
646, in _make_dvr_edge_router_info_for_test
  router.create_snat_namespace()
  AttributeError: 'DvrEdgeRouter' object has no attribute 
'create_snat_namespace'

  
  The following 12 test cases related to dvr_edge_router are failing:

  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPSecDeviceDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_get_namespace_for_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecFedoraStrongswanDeviceDriverDVR.test_iptables_apply_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_add_nat_rule_with_dvr_edge_router
 [ multipart
  failure: 
neutron_vpnaas.tests.unit.services.vpn.device_drivers.test_ipsec.IPsecStrongswanDeviceDriverDVR.test_remove_rule_with_dvr_edge_router
 [ multipart
  failure: 

[Yahoo-eng-team] [Bug 1489226] [NEW] Nova should support specifying the block devices(/dev/sd*) name to attach to the instance

2015-08-26 Thread Sibiao Luo
Public bug reported:

Nova and the Horizon dashboard should support specifying the block device
name (e.g. /dev/sd*) to attach to an instance. Users can type the block
device name (e.g. /dev/sd*) to attach to the instance, the instance can map
the block device (e.g. /dev/sd*) with a symlink (e.g. /dev/sd* -> ../../xvd*),
and the Horizon dashboard can display this symlink relation to users; a small
sketch follows below.
e.g.:
Instance: /dev/sd* -> ../../xvd*
Dashboard: /dev/sd*  vol-xx  /dev/xvd*
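
A minimal sketch of how the dashboard side could pair the requested name
with the actual device, assuming the guest exposes the mapping as a symlink
(the path /dev/sdf is a hypothetical example):

    import os

    def device_mapping(requested='/dev/sdf'):
        # realpath follows the symlink chain, e.g. /dev/sdf -> /dev/xvdf.
        actual = os.path.realpath(requested)
        return requested, actual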

Amazon EC2 already supports this function very well.

In the EC2 Attach Volume dialog box, start typing the name or ID of the
instance to attach the volume to in the Instance box, and select it from
the list of suggestion options (only instances in the same Availability
Zone as the volume are displayed).

Device names like /dev/sdh and xvdh are used by Amazon EC2 to describe
block devices; the block device mapping is used by Amazon EC2 to specify
the block devices to attach to an EC2 instance.

Please refer to the following pictures for details.
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeMenu.png
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeDialog.png
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/bogo-ami_Instance_with_new_volume.png

Best Regards,
Sibiao Luo

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- nova and horizon dashboard should support that specify the block devices(e.g. 
/dev/vd*) to attach to an instance. Users can type the block devices(e.g. 
/dev/sd*)  to attach to the instance and the instance can map the block 
devices(e.g. /dev/vd*) with a symlink(e.g. /dev/sd*   - ../../xvd*), and the 
horizon dashboard can display this symlink relation to users.
+ nova and horizon dashboard should support that specify the block devices(e.g. 
/dev/sd*) to attach to an instance. Users can type the block devices(e.g. 
/dev/sd*)  to attach to the instance and the instance can map the block 
devices(e.g. /dev/sd*) with a symlink(e.g. /dev/sd*   - ../../xvd*), and the 
horizon dashboard can display this symlink relation to users.
  e.g.:
  Instance: /dev/sd*   - ../../xvd*
  Dashboard: /dev/sd*  vol-xx  /dev/xvd*
  
  While Amazon EC2 has support this function very well.
  
  In the EC2 Attach Volume dialog box, start typing the name or ID of the
  instance to attach the volume to in the Instance box, and select it from
  the list of suggestion options (only instances in the same Availability
  Zone as the volume are displayed).
  
  Device names like /dev/sdh and xvdh are used by Amazon EC2 to describe
  block devices, the block device mapping is used by Amazon EC2 to specify
  the block devices to attach to an EC2 instance.
  
  Please refer to the following picture for detail.
  
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeMenu.png
  
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeDialog.png
  
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/bogo-ami_Instance_with_new_volume.png
  
  Best Regards,
  Sibiao Luo

** Summary changed:

- specify the block devices(/dev/sd*) to attach to an instance
+ Nova should support specifying the block devices(/dev/sd*) name to attach to 
the instance

** Description changed:

  nova and horizon dashboard should support that specify the block devices(e.g. 
/dev/sd*) to attach to an instance. Users can type the block devices(e.g. 
/dev/sd*)  to attach to the instance and the instance can map the block 
devices(e.g. /dev/sd*) with a symlink(e.g. /dev/sd*   - ../../xvd*), and the 
horizon dashboard can display this symlink relation to users.
  e.g.:
  Instance: /dev/sd*   - ../../xvd*
  Dashboard: /dev/sd*  vol-xx  /dev/xvd*
  
- While Amazon EC2 has support this function very well.
+ While Amazon EC2 has supported this function very well.
  
  In the EC2 Attach Volume dialog box, start typing the name or ID of the
  instance to attach the volume to in the Instance box, and select it from
  the list of suggestion options (only instances in the same Availability
  Zone as the volume are displayed).
  
  Device names like /dev/sdh and xvdh are used by Amazon EC2 to describe
  block devices, the block device mapping is used by Amazon EC2 to specify
  the block devices to attach to an EC2 instance.
  
  Please refer to the following picture for detail.
  
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeMenu.png
  
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeDialog.png
  
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/bogo-ami_Instance_with_new_volume.png
  
  Best Regards,
  Sibiao Luo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489226

Title:
  Nova should support specifying the block devices(/dev/sd*) name to
 

[Yahoo-eng-team] [Bug 1489227] [NEW] Horizon should support specifying the block devices(/dev/sd*) name to attach to the instance

2015-08-26 Thread Sibiao Luo
Public bug reported:

Dashboard (Horizon) should also support the same as the nova bug:
https://bugs.launchpad.net/nova/+bug/1489226

Nova and the Horizon dashboard should support specifying the block device
name (e.g. /dev/sd*) to attach to an instance. Users can type the block
device name (e.g. /dev/sd*) to attach to the instance, the instance can map
the block device (e.g. /dev/sd*) with a symlink (e.g. /dev/sd* -> ../../xvd*),
and the Horizon dashboard can display this symlink relation to users.
 e.g.:
 Instance: /dev/sd* -> ../../xvd*
 Dashboard: /dev/sd*  vol-xx  /dev/xvd*

Amazon EC2 already supports this function very well.

In the EC2 Attach Volume dialog box, start typing the name or ID of the
instance to attach the volume to in the Instance box, and select it from
the list of suggestion options (only instances in the same Availability
Zone as the volume are displayed).

Device names like /dev/sdh and xvdh are used by Amazon EC2 to describe
block devices; the block device mapping is used by Amazon EC2 to specify
the block devices to attach to an EC2 instance.

Please refer to the following pictures for details.
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeMenu.png
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeDialog.png
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/bogo-ami_Instance_with_new_volume.png

Best Regards,
Sibiao Luo

** Affects: horizon
 Importance: Undecided
 Status: New

** Summary changed:

- specify the block devices(/dev/sd*) to attach to an instance
+ Horizon should support specifying the block devices(/dev/sd*) to attach to an 
instance

** Summary changed:

- Horizon should support specifying the block devices(/dev/sd*) to attach to an 
instance
+ Horizon should support specifying the block devices(/dev/sd*) name to attach 
to the instance

** Description changed:

- Dashboard (Horizon) also should support that the same to nova bug:
+ Dashboard (Horizon) also should support that the same with nova bug:
  https://bugs.launchpad.net/nova/+bug/1489226
  
  nova and horizon dashboard should support that specify the block devices(e.g. 
/dev/sd*) to attach to an instance. Users can type the block devices(e.g. 
/dev/sd*) to attach to the instance and the instance can map the block 
devices(e.g. /dev/sd*) with a symlink(e.g. /dev/sd* - ../../xvd*), and the 
horizon dashboard can display this symlink relation to users.
-  e.g.:
-  Instance: /dev/sd* - ../../xvd*
-  Dashboard: /dev/sd*  vol-xx  /dev/xvd*
+  e.g.:
+  Instance: /dev/sd* - ../../xvd*
+  Dashboard: /dev/sd*  vol-xx  /dev/xvd*
  
  While Amazon EC2 has support this function very well.
  
  In the EC2 Attach Volume dialog box, start typing the name or ID of the
  instance to attach the volume to in the Instance box, and select it from
  the list of suggestion options (only instances in the same Availability
  Zone as the volume are displayed).
  
  Device names like /dev/sdh and xvdh are used by Amazon EC2 to describe
  block devices, the block device mapping is used by Amazon EC2 to specify
  the block devices to attach to an EC2 instance.
  
  Please refer to the following picture for detail.
  
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeMenu.png
  
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeDialog.png
  
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/bogo-ami_Instance_with_new_volume.png
  
  Best Regards,
-  Sibiao Luo
+  Sibiao Luo

** Description changed:

  Dashboard (Horizon) also should support that the same with nova bug:
  https://bugs.launchpad.net/nova/+bug/1489226
  
  nova and horizon dashboard should support that specify the block devices(e.g. 
/dev/sd*) to attach to an instance. Users can type the block devices(e.g. 
/dev/sd*) to attach to the instance and the instance can map the block 
devices(e.g. /dev/sd*) with a symlink(e.g. /dev/sd* - ../../xvd*), and the 
horizon dashboard can display this symlink relation to users.
   e.g.:
   Instance: /dev/sd* - ../../xvd*
   Dashboard: /dev/sd*  vol-xx  /dev/xvd*
  
  While Amazon EC2 has support this function very well.
  
  In the EC2 Attach Volume dialog box, start typing the name or ID of the
  instance to attach the volume to in the Instance box, and select it from
  the list of suggestion options (only instances in the same Availability
  Zone as the volume are displayed).
  
  Device names like /dev/sdh and xvdh are used by Amazon EC2 to describe
  block devices, the block device mapping is used by Amazon EC2 to specify
  the block devices to attach to an EC2 instance.
  
  Please refer to the following picture for detail.
  
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeMenu.png
  
http://www.bogotobogo.com/DevOps/AWS/images/AttachingVolume/AttachVolumeDialog.png
  

[Yahoo-eng-team] [Bug 1489238] [NEW] Validation UI error in workflow when wizard parameter is set to True

2015-08-26 Thread sanjana
Public bug reported:

If the required fields in the workflow are not filled, the user should be
prompted for input. The error message is indeed displayed, but the box that
accepts the contents shifts to the right. (Ideally it should stay in the
same place, with the message "This field is required" printed below the
box.)

This behaviour of the wizard can be seen in the Project dashboard in
Horizon. The Networks panel has a Create Network workflow page
which is implemented as a wizard. The boxes are offset when the required
fields are not filled in by the user.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489238

Title:
  Validation UI error in workflow when wizard parameter is set to True

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If the required fields in the workflow are not filled, the user should be
prompted for input. The error message is indeed displayed, but the box that
accepts the contents shifts to the right. (Ideally it should stay in the
same place, with the message "This field is required" printed below the
box.)

  This behaviour of the wizard can be seen in the Project dashboard in
  Horizon. The Networks panel has a Create Network workflow page
  which is implemented as a wizard. The boxes are offset when the
  required fields are not filled in by the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489240] Re: Mistake in description of manual Compute API v2.1 (CURRENT)

2015-08-26 Thread venkatamahesh
** Project changed: horizon => openstack-api-site

** Tags added: api low-hanging-fruit

** Changed in: openstack-api-site
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489240

Title:
  Mistake in description of manual Compute API v2.1 (CURRENT)

Status in openstack-api-site:
  Confirmed

Bug description:
  The original description is:
  "Clears the encrypted copy of the password from the metadata server after
the client gets the password and determines that it no longer needs it in
the metadata server."

  "that it no longer needs it" above should be "that it is no longer
  needed".

  The reference link is:
  
http://developer.openstack.org/api-ref-compute-v2.1.html#os-admin-password-v2.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1489240/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489240] [NEW] Mistake in description of manual Compute API v2.1 (CURRENT)

2015-08-26 Thread zhangjingwen
Public bug reported:

The original description is:
"Clears the encrypted copy of the password from the metadata server after the
client gets the password and determines that it no longer needs it in the
metadata server."

"that it no longer needs it" above should be "that it is no longer
needed".

The reference link is:
http://developer.openstack.org/api-ref-compute-v2.1.html#os-admin-password-v2.1

** Affects: openstack-api-site
 Importance: Undecided
 Assignee: zhangjingwen (zhangjingwen)
 Status: Confirmed


** Tags: api low-hanging-fruit

** Changed in: horizon
  Assignee: (unassigned) => zhangjingwen (zhangjingwen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489240

Title:
  Mistake in description of manual Compute API v2.1 (CURRENT)

Status in openstack-api-site:
  Confirmed

Bug description:
  The original description is:
  "Clears the encrypted copy of the password from the metadata server after
the client gets the password and determines that it no longer needs it in
the metadata server."

  "that it no longer needs it" above should be "that it is no longer
  needed".

  The reference link is:
  
http://developer.openstack.org/api-ref-compute-v2.1.html#os-admin-password-v2.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1489240/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp