[Yahoo-eng-team] [Bug 1474236] [NEW] unexpected response when creating an image with owner or locations in api v2

2015-07-14 Thread wangxiyuan
Public bug reported:

When creating an image with the parameter 'owner' or 'locations' in API
v2, it raises an error like:

403 Forbidden
Attribute 'owner' is reserved.

403 Forbidden
Attribute 'locations' is reserved.


Reproduce:

Create an image like:

POST http://hostip:v2/images

body:

{
  "name": "v2_test",
  "tags": [
    "ubuntu",
    "quantal"
  ],
  "disk_format": "qcow2",
  "container_format": "bare",
  "owner": "x",
  "locations": [
    {
      "url": "xx",
      "metadata": {}
    }
  ]
}
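As a hedged client-side workaround sketch: drop the attributes Glance rejects before sending the request. The attribute names come from the 403 messages quoted above; the helper itself is hypothetical, not Glance or glanceclient code.

```python
# Attributes the v2 create call rejects as reserved, per the 403
# messages in this report.
RESERVED_ATTRIBUTES = ("owner", "locations")

def strip_reserved(body):
    """Return a copy of the request body without reserved attributes."""
    return {k: v for k, v in body.items() if k not in RESERVED_ATTRIBUTES}
```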

** Affects: glance
 Importance: Undecided
 Assignee: wangxiyuan (wangxiyuan)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => wangxiyuan (wangxiyuan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1474236

Title:
  unexpected response when creating an image with owner or locations in
  api v2

Status in Glance:
  New

Bug description:
  When creating an image with the parameter 'owner' or 'locations' in API
  v2, it raises an error like:

  403 Forbidden
  Attribute 'owner' is reserved.

  403 Forbidden
  Attribute 'locations' is reserved.

  
  Reproduce:

  Create an image like:

  POST http://hostip:v2/images

  body:

  {
    "name": "v2_test",
    "tags": [
      "ubuntu",
      "quantal"
    ],
    "disk_format": "qcow2",
    "container_format": "bare",
    "owner": "x",
    "locations": [
      {
        "url": "xx",
        "metadata": {}
      }
    ]
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1474236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403003] Re: No error/warning raised when attempting to re-upload image data

2015-07-14 Thread Itzik Brown
jelly,
Thanks. You are right - it's fixed.

** Changed in: glance
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1403003

Title:
  No error/warning raised when attempting to re-upload image data

Status in Glance:
  Fix Released

Bug description:
  When modifying an image file and then updating the image using
  glance image-update --file filename Image Name, the image data is
  not updated.

  How to reproduce
  
  Download an image and upload it:

  # wget 
http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
  # glance image-create --name fedora21b --disk-format qcow2  
--container-format bare --is-public True --file 
/tmp/Fedora-Cloud-Base-20141203-21.x86_64.qcow2

  Create some dummy file in /tmp/dummy and modify the image
  #  virt-copy-in -a Fedora-Cloud-Base-20141203-21.x86_64.qcow2 /tmp/dummy /etc

  Update the image:
  # glance image-update --file Fedora-Cloud-Base-20141203-21.x86_64.qcow2 fedora21

  Verify the image is not updated by comparing the checksum
  # md5sum /var/lib/glance/images/bd84ac96-c2a8-4268-a19c-a0e69c703baf
  # md5sum Fedora-Cloud-Base-20141203-21.x86_64.qcow2

  When using --checksum, the checksum in the image properties is
  updated but the image itself is not:
  # glance image-update --file Fedora-Cloud-Base-20141203-21.x86_64.qcow2 --checksum 2c98b17b3f27d14e2e7a840fef464cfe fedora21
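The checksum comparison in the steps above can be done portably with a small helper; hashlib is stdlib, and the function name is ours (a sketch, not glance code):

```python
import hashlib

def file_md5(path):
    """Stream a file through MD5, matching md5sum's hex output."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        # Read in chunks so large image files are not loaded whole.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()
```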

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1403003/+subscriptions



[Yahoo-eng-team] [Bug 1474219] [NEW] cinder latest requirements breaking things, must be a string

2015-07-14 Thread Victor Laza
Public bug reported:

The trace below brought to my attention:
https://github.com/openstack/cinder/commit/238df8b82e8d078279f98d90cf9bcfd39dd7cbc8
and more specifically:

requirements.txt
+enum34;python_version=='2.7' or python_version=='2.6'

that breaks things


04:45:35 error in setup command: 'install_requires' must be a string or list of 
strings containing valid project/version requirement specifiers; Expected 
version spec in enum34;python_version=='2.7' or python_version=='2.6' at 
;python_version=='2.7' or python_version=='2.6'
04:45:35 ExecRetry : System.Management.Automation.RuntimeException: Failed to 
install 
04:45:35 cinder from repo
04:45:35 At C:\cinder-ci\windows\scripts\create-environment.ps1:147 char:1
04:45:35 + ExecRetry {
04:45:35 + ~~~
04:45:35 + CategoryInfo  : NotSpecified: (:) [Write-Error], 
WriteErrorExcep 
04:45:35tion
04:45:35 + FullyQualifiedErrorId : 
Microsoft.PowerShell.Commands.WriteErrorExceptio 
04:45:35n,ExecRetry
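The failing requirement line uses a PEP 508-style environment marker, which older setuptools/pip cannot parse (hence "Expected version spec"). A hedged sketch of the pre-marker way to express the same conditional dependency; the helper is ours, not cinder's code:

```python
# Old-style conditional dependency: gate on the running interpreter
# version instead of an environment marker the installer may not
# understand.
def conditional_requires(version_info):
    """Return extra requirements for the given sys.version_info tuple."""
    reqs = []
    if version_info[:2] in ((2, 6), (2, 7)):
        # enum34 backports Python 3.4's stdlib enum module
        reqs.append("enum34")
    return reqs
```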

** Affects: cinder
 Importance: Undecided
 Status: New

** Project changed: neutron => cinder

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474219

Title:
  cinder latest requirements breaking things, must be a string

Status in Cinder:
  New

Bug description:
  The trace below brought to my attention:
  https://github.com/openstack/cinder/commit/238df8b82e8d078279f98d90cf9bcfd39dd7cbc8
  and more specifically:

  requirements.txt
  +enum34;python_version=='2.7' or python_version=='2.6'

  that breaks things

  
  04:45:35 error in setup command: 'install_requires' must be a string or list 
of strings containing valid project/version requirement specifiers; Expected 
version spec in enum34;python_version=='2.7' or python_version=='2.6' at 
;python_version=='2.7' or python_version=='2.6'
  04:45:35 ExecRetry : System.Management.Automation.RuntimeException: Failed to 
install 
  04:45:35 cinder from repo
  04:45:35 At C:\cinder-ci\windows\scripts\create-environment.ps1:147 char:1
  04:45:35 + ExecRetry {
  04:45:35 + ~~~
  04:45:35 + CategoryInfo  : NotSpecified: (:) [Write-Error], 
WriteErrorExcep 
  04:45:35tion
  04:45:35 + FullyQualifiedErrorId : 
Microsoft.PowerShell.Commands.WriteErrorExceptio 
  04:45:35n,ExecRetry

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1474219/+subscriptions



[Yahoo-eng-team] [Bug 1474240] [NEW] Implement Neutron-ONOS ML2 driver.

2015-07-14 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Implement Neutron-ONOS ML2 driver.

This is a wishlist bug for developing a plugin library for Open Network
Operating System (ONOS) via an ML2 MechanismDriver in neutron, which will
handle the back-end communication of neutron with the Open Network
Operating System (ONOS) controller.

** Affects: neutron
 Importance: Wishlist
 Assignee: vikram.choudhary (vikschw)
 Status: New

-- 
Implement Neutron-ONOS ML2 driver.
https://bugs.launchpad.net/bugs/1474240
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1473965] Re: the port of security group rule for TCP or UDP should not be 0

2015-07-14 Thread shihanzhang
** Changed in: neutron
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473965

Title:
  the port of security group rule for TCP or UDP should not be 0

Status in neutron:
  New

Bug description:
  For TCP and UDP, 0 is a reserved port. But for a neutron security
  group rule with the TCP protocol, if port-range-min is 0 then
  port-range-max is invalid, because a port-range-min of 0 means all
  packets are allowed. So it should not be possible to create a rule
  with port-range-min 0; if a user wants to allow all TCP/UDP packets,
  they can create a security group rule with port-range-min and
  port-range-max set to None.
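The proposed validation can be sketched as follows; this is our sketch, not neutron's actual validator:

```python
def validate_port_range(port_min, port_max):
    """Reject TCP/UDP security group rules that use reserved port 0.

    None/None is the explicit "allow all ports" form suggested above.
    """
    if port_min is None and port_max is None:
        return True  # allow-all rule
    if port_min is None or port_max is None:
        raise ValueError("set both bounds, or both to None")
    if port_min == 0 or port_max == 0:
        raise ValueError("port 0 is reserved; use None/None to allow all")
    if not 1 <= port_min <= port_max <= 65535:
        raise ValueError("invalid port range")
    return True
```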

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473965/+subscriptions



[Yahoo-eng-team] [Bug 1474228] [NEW] inline edit failed in user table because description doesn't exist

2015-07-14 Thread jelly
Public bug reported:

inline edit failed in user table because description doesn't exist

Internal Server Error: /identity/users/
Traceback (most recent call last):
  File 
/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/core/handlers/base.py,
 line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File /home/user/github/horizon/horizon/decorators.py, line 36, in dec
return view_func(request, *args, **kwargs)
  File /home/user/github/horizon/horizon/decorators.py, line 52, in dec
return view_func(request, *args, **kwargs)
  File /home/user/github/horizon/horizon/decorators.py, line 36, in dec
return view_func(request, *args, **kwargs)
  File 
/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/views/generic/base.py,
 line 69, in view
return self.dispatch(request, *args, **kwargs)
  File 
/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/views/generic/base.py,
 line 87, in dispatch
return handler(request, *args, **kwargs)
  File /home/user/github/horizon/horizon/tables/views.py, line 224, in post
return self.get(request, *args, **kwargs)
  File /home/user/github/horizon/horizon/tables/views.py, line 160, in get
handled = self.construct_tables()
  File /home/user/github/horizon/horizon/tables/views.py, line 145, in 
construct_tables
preempted = table.maybe_preempt()
  File /home/user/github/horizon/horizon/tables/base.py, line 1533, in 
maybe_preempt
new_row)
  File /home/user/github/horizon/horizon/tables/base.py, line 1585, in 
inline_edit_handle
error = exceptions.handle(request, ignore=True)
  File /home/user/github/horizon/horizon/exceptions.py, line 361, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File /home/user/github/horizon/horizon/tables/base.py, line 1580, in 
inline_edit_handle
cell_name)
  File /home/user/github/horizon/horizon/tables/base.py, line 1606, in 
inline_update_action
self.request, datum, obj_id, cell_name, new_cell_value)
  File /home/user/github/horizon/horizon/tables/actions.py, line 952, in 
action
self.update_cell(request, datum, obj_id, cell_name, new_cell_value)
  File 
/home/user/github/horizon/openstack_dashboard/dashboards/identity/users/tables.py,
 line 210, in update_cell
horizon_exceptions.handle(request, ignore=True)
  File /home/user/github/horizon/horizon/exceptions.py, line 361, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File 
/home/user/github/horizon/openstack_dashboard/dashboards/identity/users/tables.py,
 line 200, in update_cell
description=user_obj.description,
  File 
/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/keystoneclient/openstack/common/apiclient/base.py,
 line 494, in __getattr__
raise AttributeError(k)
AttributeError: description
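The traceback ends with user_obj.description raising AttributeError. A hedged sketch of the defensive pattern (getattr with a default) that avoids it; not necessarily the fix Horizon adopted:

```python
def user_description(user_obj):
    # Keystone user objects only expose attributes that were set, so
    # 'description' may be absent; default to None instead of raising.
    return getattr(user_obj, "description", None)
```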

** Affects: horizon
 Importance: Undecided
 Assignee: jelly (coding1314)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => jelly (coding1314)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474228

Title:
  inline edit failed in user table because description doesn't exist

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  inline edit failed in user table because description doesn't exist

  Internal Server Error: /identity/users/
  Traceback (most recent call last):
File 
/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/core/handlers/base.py,
 line 111, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File /home/user/github/horizon/horizon/decorators.py, line 36, in dec
  return view_func(request, *args, **kwargs)
File /home/user/github/horizon/horizon/decorators.py, line 52, in dec
  return view_func(request, *args, **kwargs)
File /home/user/github/horizon/horizon/decorators.py, line 36, in dec
  return view_func(request, *args, **kwargs)
File 
/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/views/generic/base.py,
 line 69, in view
  return self.dispatch(request, *args, **kwargs)
File 
/home/user/.virtualenvs/test-horizon/local/lib/python2.7/site-packages/django/views/generic/base.py,
 line 87, in dispatch
  return handler(request, *args, **kwargs)
File /home/user/github/horizon/horizon/tables/views.py, line 224, in post
  return self.get(request, *args, **kwargs)
File /home/user/github/horizon/horizon/tables/views.py, line 160, in get
  handled = self.construct_tables()
File /home/user/github/horizon/horizon/tables/views.py, line 145, in 
construct_tables
  preempted = table.maybe_preempt()
File /home/user/github/horizon/horizon/tables/base.py, line 1533, in 
maybe_preempt
  new_row)
File 

[Yahoo-eng-team] [Bug 1474283] [NEW] Boot instance from volume snapshot fails

2015-07-14 Thread lyanchih
Public bug reported:

How to reproduce:
1. Create a new volume with an image as the volume source.
2. Snapshot the volume created at step 1.
3. Launch an instance from the snapshot volume created at step 2.
Then horizon displays "Block Device Mapping is Invalid: failed to get
volume".

This happens because horizon sends the volume source type instead of the
snapshot source type to nova, so the nova API tries to fetch a volume
through the volume id instead of a snapshot.
Nova's create-server request data is incorrect at the following link and line.
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py#L874
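A hedged sketch of a block-device-mapping entry with the source type set correctly for the snapshot case. The key names follow nova's block_device_mapping_v2 format; the helper function itself is ours, not Horizon's code:

```python
def block_device_mapping(source_id, source_type):
    """Build a minimal nova BDM v2 entry for a boot device."""
    # The reported bug: horizon passed source_type='volume' even when
    # the chosen source was a volume snapshot.
    assert source_type in ("volume", "snapshot", "image")
    return {
        "boot_index": 0,
        "uuid": source_id,
        "source_type": source_type,
        "destination_type": "volume",
        "delete_on_termination": False,
    }
```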

** Affects: horizon
 Importance: Undecided
 Assignee: lyanchih (lyanchih)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => lyanchih (lyanchih)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474283

Title:
  Boot instance from volume snapshot fails

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  How to reproduce:
  1. Create a new volume with an image as the volume source.
  2. Snapshot the volume created at step 1.
  3. Launch an instance from the snapshot volume created at step 2.
  Then horizon displays "Block Device Mapping is Invalid: failed to get
  volume".

  This happens because horizon sends the volume source type instead of
  the snapshot source type to nova, so the nova API tries to fetch a
  volume through the volume id instead of a snapshot.
  Nova's create-server request data is incorrect at the following link
  and line.
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py#L874

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474283/+subscriptions



[Yahoo-eng-team] [Bug 1474253] [NEW] Cannot rebuild an instance booted from volume

2015-07-14 Thread Hiroyuki Eguchi
Public bug reported:

A user rebuilds an instance booted from volume with the following steps.

1. Stop the instance.
2. Detach the root device volume.
3. Attach a new root device volume.
4. Start the instance.

But currently this is impossible, for these reasons:

1. Users are not allowed to detach a root device volume.
   - detach boot device volume without warning
     (https://bugs.launchpad.net/nova/+bug/1279300)

2. Users are not allowed to attach a root device volume except when
   creating an instance.
   - get_next_device_name, which is executed when attaching a volume,
     never returns a root_device_name.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474253

Title:
  Cannot rebuild an instance booted from volume

Status in OpenStack Compute (nova):
  New

Bug description:
  A user rebuilds an instance booted from volume with the following steps.

  1. Stop the instance.
  2. Detach the root device volume.
  3. Attach a new root device volume.
  4. Start the instance.

  But currently this is impossible, for these reasons:

  1. Users are not allowed to detach a root device volume.
     - detach boot device volume without warning
       (https://bugs.launchpad.net/nova/+bug/1279300)

  2. Users are not allowed to attach a root device volume except when
     creating an instance.
     - get_next_device_name, which is executed when attaching a volume,
       never returns a root_device_name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1474253/+subscriptions



[Yahoo-eng-team] [Bug 1474266] [NEW] py34 unit test starts to fail after Routes 2.1 is excluded in global requirements

2015-07-14 Thread Akihiro Motoki
Public bug reported:

Routes 2.1 was recently excluded in global-requirements at Nova's request.
As a result, we have no version of Routes which is compatible with Python 3.
We need to exclude the affected tests until we have a Python 3-compatible Routes.
This bug is now blocking the requirements update by the cron job.

http://logs.openstack.org/46/182746/73/check/gate-neutron-
python34/be0d939/console.html

2015-07-13 22:23:32.840 | 
==
2015-07-13 22:23:32.840 | ERROR: 
neutron.tests.unit.extensions.test_portsecurity.TestPortSecurity.test_update_port_remove_port_security_security_group_read
2015-07-13 22:23:32.840 | 
--
2015-07-13 22:23:32.840 | Empty attachments:
2015-07-13 22:23:32.840 |   pythonlogging:'neutron.api.extensions'
2015-07-13 22:23:32.840 | 
2015-07-13 22:23:32.840 | pythonlogging:'': {{{2015-07-13 22:23:25,263 INFO 
[neutron.manager] Loading core plugin: 
neutron.tests.unit.extensions.test_portsecurity.PortSecurityTestPlugin}}}
2015-07-13 22:23:32.840 | 
2015-07-13 22:23:32.840 | Traceback (most recent call last):
2015-07-13 22:23:32.840 |   File 
/home/jenkins/workspace/gate-neutron-python34/neutron/tests/unit/extensions/test_portsecurity.py,
 line 171, in setUp
2015-07-13 22:23:32.841 | super(PortSecurityDBTestCase, self).setUp(plugin)
2015-07-13 22:23:32.841 |   File 
/home/jenkins/workspace/gate-neutron-python34/neutron/tests/unit/extensions/test_portsecurity.py,
 line 40, in setUp
2015-07-13 22:23:32.841 | super(PortSecurityTestCase, 
self).setUp(plugin=plugin, ext_mgr=ext_mgr)
2015-07-13 22:23:32.841 |   File 
/home/jenkins/workspace/gate-neutron-python34/neutron/tests/unit/db/test_db_base_plugin_v2.py,
 line 121, in setUp
2015-07-13 22:23:32.841 | self.api = router.APIRouter()
2015-07-13 22:23:32.841 |   File 
/home/jenkins/workspace/gate-neutron-python34/neutron/api/v2/router.py, line 
104, in __init__
2015-07-13 22:23:32.841 | mapper.connect('index', '/', 
controller=Index(RESOURCES))
2015-07-13 22:23:32.841 |   File 
/home/jenkins/workspace/gate-neutron-python34/.tox/py34/lib/python3.4/site-packages/routes/mapper.py,
 line 487, in connect
2015-07-13 22:23:32.841 | route = Route(*args, **kargs)
2015-07-13 22:23:32.841 |   File 
/home/jenkins/workspace/gate-neutron-python34/.tox/py34/lib/python3.4/site-packages/routes/route.py,
 line 85, in __init__
2015-07-13 22:23:32.841 | self._setup_route()
2015-07-13 22:23:32.842 |   File 
/home/jenkins/workspace/gate-neutron-python34/.tox/py34/lib/python3.4/site-packages/routes/route.py,
 line 101, in _setup_route
2015-07-13 22:23:32.842 | for key, val in self.reqs.iteritems():
2015-07-13 22:23:32.842 | AttributeError: 'dict' object has no attribute 
'iteritems'
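The AttributeError comes from dict.iteritems(), which was removed in Python 3. A minimal sketch of the portable pattern (not the actual Routes patch):

```python
def route_requirements(reqs):
    # dict.iteritems() exists only on Python 2; items() works on both
    # and is what a Python 3-compatible Routes would need here.
    return {key: val for key, val in reqs.items()}
```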

** Affects: neutron
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474266

Title:
  py34 unit test starts to fail after Routes 2.1 is excluded in global
  requirements

Status in neutron:
  In Progress

Bug description:
  Routes 2.1 was recently excluded in global-requirements at Nova's request.
  As a result, we have no version of Routes which is compatible with Python 3.
  We need to exclude the affected tests until we have a Python 3-compatible
  Routes.
  This bug is now blocking the requirements update by the cron job.

  http://logs.openstack.org/46/182746/73/check/gate-neutron-
  python34/be0d939/console.html

  2015-07-13 22:23:32.840 | 
==
  2015-07-13 22:23:32.840 | ERROR: 
neutron.tests.unit.extensions.test_portsecurity.TestPortSecurity.test_update_port_remove_port_security_security_group_read
  2015-07-13 22:23:32.840 | 
--
  2015-07-13 22:23:32.840 | Empty attachments:
  2015-07-13 22:23:32.840 |   pythonlogging:'neutron.api.extensions'
  2015-07-13 22:23:32.840 | 
  2015-07-13 22:23:32.840 | pythonlogging:'': {{{2015-07-13 22:23:25,263 
INFO [neutron.manager] Loading core plugin: 
neutron.tests.unit.extensions.test_portsecurity.PortSecurityTestPlugin}}}
  2015-07-13 22:23:32.840 | 
  2015-07-13 22:23:32.840 | Traceback (most recent call last):
  2015-07-13 22:23:32.840 |   File 
/home/jenkins/workspace/gate-neutron-python34/neutron/tests/unit/extensions/test_portsecurity.py,
 line 171, in setUp
  2015-07-13 22:23:32.841 | super(PortSecurityDBTestCase, 
self).setUp(plugin)
  2015-07-13 22:23:32.841 |   File 
/home/jenkins/workspace/gate-neutron-python34/neutron/tests/unit/extensions/test_portsecurity.py,
 line 40, in setUp
  2015-07-13 22:23:32.841 | super(PortSecurityTestCase, 
self).setUp(plugin=plugin, ext_mgr=ext_mgr)
  2015-07-13 22:23:32.841 |   File 

[Yahoo-eng-team] [Bug 1474279] [NEW] FWaaS leaves connections open when an allow rule is deleted, because of conntrack

2015-07-14 Thread Peter
Public bug reported:

Hi,

I've faced a problem with the FWaaS plugin in Neutron (Juno).
The firewall works, but when I delete a rule from the policy, the
connection still works because of conntrack... (I tried with ping
and ssh.)
It's okay for the connection to be kept alive if it really is alive
(an active SSH, for example), but if I delete the ICMP rule, stop
pinging, and restart pinging, the ping still works...

If I go to my neutron server and run a conntrack -F command in the
relevant qrouter namespace, the firewall starts working based on the
valid rules...

Is there any way to configure the conntrack cleanup when the FWaaS
configuration is modified by the user?

If not, can somebody help me find where to make changes in the code
to run that command in the proper namespace after the iptables rule
generation?


Regards,
 Peter
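The ask above can be sketched as a helper that builds the namespace-scoped flush command; the helper name and call site are ours, and neutron's eventual fix may differ:

```python
def conntrack_flush_cmd(router_id):
    """Command to flush conntrack entries inside a qrouter namespace.

    Would be run (e.g. via a subprocess wrapper) right after the
    iptables rules are regenerated for the firewall.
    """
    return ["ip", "netns", "exec", "qrouter-%s" % router_id,
            "conntrack", "-F"]
```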

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474279

Title:
  FWaaS leaves connections open when an allow rule is deleted, because of
  conntrack

Status in neutron:
  New

Bug description:
  Hi,

  I've faced a problem with FWaaS plugin in Neutron (Juno).
  The firewall works, but when I delete a rule from the policy, the
  connection will still works because of conntrack... (I tried with ping,
  and ssh)
  It's okay, if the connection will kept alive, if it's really alive, (an
  active SSH for example) but if I delete the ICMP rule, and stop pinging,
  and restart pinging, the ping will still works...

  If I go to my neutron server, and do a conntrack -F command on my
  relevant qrouter, the firewall starts working based on the valid rules...

  Are there any way, to configure the conntrack cleanup when FWaaS
  configuration modified by user?

  If not, can somebody help me, where to make changes on code, to run that
  command in the proper namespace after the iptables rule-generation?

  
  Regards,
   Peter

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1474279/+subscriptions



[Yahoo-eng-team] [Bug 1474240] [NEW] Implement Neutron-ONOS ML2 driver.

2015-07-14 Thread vikram.choudhary
Public bug reported:

Implement Neutron-ONOS ML2 driver.

This is a wishlist bug for developing a plugin library for Open Network
Operating System (ONOS) via an ML2 MechanismDriver in neutron, which will
handle the back-end communication of neutron with the Open Network
Operating System (ONOS) controller.

** Affects: neutron
 Importance: Wishlist
 Assignee: vikram.choudhary (vikschw)
 Status: New

** Project changed: networking-onos => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474240

Title:
  Implement Neutron-ONOS ML2 driver.

Status in neutron:
  New

Bug description:
  Implement Neutron-ONOS ML2 driver.

  This is a wishlist bug for developing a plugin library for Open Network
  Operating System (ONOS) via an ML2 MechanismDriver in neutron, which
  will handle the back-end communication of neutron with the Open
  Network Operating System (ONOS) controller.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1474240/+subscriptions



[Yahoo-eng-team] [Bug 1474241] [NEW] Need a way to disable simple tenant usage

2015-07-14 Thread Radomir Dopieralski
Public bug reported:

Frequent calls to Nova's API when displaying the simple tenant usage can
lead to efficiency problems and even crashes on the Nova side, especially
when there are a lot of deleted nodes in the database. We are working on
resolving that, but in the meantime, it would be nice to have a way of
disabling the simple tenant usage stats on the Horizon side as a
workaround.

Horizon enables that option depending on whether it's supported on the
Nova side. In version 2.0 of the API we can simply disable the support
for it on the Nova side, but that won't be possible in version 2.1
anymore, so we need a configuration option on the Horizon side.
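A hedged sketch of what such a toggle could look like in Horizon's local_settings.py; the setting name here is hypothetical, and the actual name would be decided by the fix:

```python
# Hypothetical local_settings.py fragment: a Horizon-side switch to
# skip the nova simple-tenant-usage calls entirely.
OPENSTACK_SIMPLE_TENANT_USAGE_ENABLED = False
```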

** Affects: horizon
 Importance: Undecided
 Assignee: Radomir Dopieralski (thesheep)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Radomir Dopieralski (thesheep)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474241

Title:
  Need a way to disable simple tenant usage

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Frequent calls to Nova's API when displaying the simple tenant usage
  can lead to efficiency problems and even crashes on the Nova side,
  especially when there are a lot of deleted nodes in the database. We
  are working on resolving that, but in the meantime, it would be nice
  to have a way of disabling the simple tenant usage stats on the
  Horizon side as a workaround.

  Horizon enables that option depending on whether it's supported on the
  Nova side. In version 2.0 of the API we can simply disable the support
  for it on the Nova side, but that won't be possible in version 2.1
  anymore, so we need a configuration option on the Horizon side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474241/+subscriptions



[Yahoo-eng-team] [Bug 1474240] Re: Implement Neutron-ONOS ML2 driver.

2015-07-14 Thread Kevin Benton
This doesn't affect neutron if you have a separate repo setup for it.

** Project changed: neutron => networking-onos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474240

Title:
  Implement Neutron-ONOS ML2 driver.

Status in networking-onos:
  New

Bug description:
  Implement Neutron-ONOS ML2 driver.

  This is a wishlist bug for developing a plugin library for Open Network
  Operating System (ONOS) via an ML2 MechanismDriver in neutron, which
  will handle the back-end communication of neutron with the Open
  Network Operating System (ONOS) controller.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-onos/+bug/1474240/+subscriptions



[Yahoo-eng-team] [Bug 1474265] [NEW] ml2 triggers attribute error for dvr_deletens_if_no_port

2015-07-14 Thread Isaku Yamahata
Public bug reported:

When using the odl mechanism driver and the l3 odl plugin, the following
exception occurs:

2015-07-13 19:09:22.568 ERROR oslo_messaging.rpc.dispatcher 
[req-e9a7622a-b904-4c43-9c17-2914a7f45963 None None] Exception during message 
handling: 'OpenDaylightL3RouterPlug
in' object has no attribute 'dvr_deletens_if_no_port'
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
142, in _dispatch_and_repl
y
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
186, in _dispatch
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
130, in _do_dispatch
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/odl/neutron/neutron/api/rpc/handlers/dhcp_rpc.py, line 170, in 
release_dhcp_por
t
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
plugin.delete_ports_by_device_id(context, device_id, network_id)
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/odl/neutron/neutron/db/db_base_plugin_v2.py, line 884, in 
delete_ports_by_devic
e_id
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
self.delete_port(context, port_id)
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py, line 146, in wrapper
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher ectxt.value = 
e.inner_exc
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 119, in 
__exit__
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py, line 136, in wrapper
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher return f(*args, 
**kwargs)
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/odl/neutron/neutron/plugins/ml2/plugin.py, line 1291, in delete_port
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher removed_routers 
= l3plugin.dvr_deletens_if_no_port(
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher AttributeError: 
'OpenDaylightL3RouterPlugin' object has no attribute 'dvr_deletens_if_no_port'
2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher
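The traceback shows ml2's delete_port assuming every configured L3 service plugin implements the DVR hook. A guarded lookup along these lines avoids the AttributeError for non-DVR plugins such as OpenDaylightL3RouterPlugin (a sketch only; function and argument names mirror the traceback, not the eventual neutron fix):

```python
def dvr_cleanup_on_port_delete(l3plugin, context, port_id):
    """Invoke the DVR namespace-cleanup hook only if the L3 plugin has it.

    Non-DVR L3 plugins do not implement dvr_deletens_if_no_port, so the
    caller must not assume the attribute exists.
    """
    hook = getattr(l3plugin, 'dvr_deletens_if_no_port', None)
    if callable(hook):
        return hook(context, port_id)
    return []  # nothing to remove for plugins without DVR support
```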

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474265

Title:
  ml2 triggers attribute error for dvr_deletens_if_no_port

Status in neutron:
  New

Bug description:
  When using the ODL mechanism driver and the ODL L3 plugin, the following
  exception occurs

  2015-07-13 19:09:22.568 ERROR oslo_messaging.rpc.dispatcher 
[req-e9a7622a-b904-4c43-9c17-2914a7f45963 None None] Exception during message 
handling: 'OpenDaylightL3RouterPlug
  in' object has no attribute 'dvr_deletens_if_no_port'
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
142, in _dispatch_and_repl
  y
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
186, in _dispatch
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
130, in _do_dispatch
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/odl/neutron/neutron/api/rpc/handlers/dhcp_rpc.py, line 170, in 
release_dhcp_por
  t
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
plugin.delete_ports_by_device_id(context, device_id, network_id)
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/odl/neutron/neutron/db/db_base_plugin_v2.py, line 884, in 
delete_ports_by_devic
  e_id
  2015-07-13 19:09:22.568 TRACE 

[Yahoo-eng-team] [Bug 1474277] [NEW] [lbaas]Change lbaasv2 scheduler to support moving loadbalancer from one lbaas v2 agent to another.

2015-07-14 Thread lee jian
Public bug reported:

When multiple neutron-lbaasv2-agents exist, each load balancer is scheduled 
randomly on creation, and once a load balancer is bound to an agent the binding 
is fixed and cannot be changed to suit the user's needs, which is inflexible.
The patch here implements functions like l3-agent-router-add and 
l3-agent-router-remove; they can be used to remove a load balancer from its 
default lbaas v2 agent, or to add a load balancer (without a hosting agent) to a 
specified agent. This is helpful in production environments, since it can balance 
the network traffic, and may also be useful in an LBaaS HA implementation.

Remove the loadbalancer from one agent:
curl -g -i -X DELETE 
http://CONTROLLER:9696/v2.0/agents/0fc961f1-2279-414c-8e91-172965319276/agent-loadbalancers/4b4d8b7a-c70d-4a5c-a4cb-bb906273d1b2.json
 -H User-Agent: python-neutronclient -H Accept: application/json -H 
X-Auth-Token: cdb8f136cacb467fb3eeecc5d331db4a

Add the loadbalancer to the agent:
curl -g -i -X POST 
http://CONTROLLER:9696/v2.0/agents/0fc961f1-2279-414c-8e91-172965319276/agent-loadbalancers.json
 -H User-Agent: python-neutronclient -H Content-Type: application/json -H 
Accept: application/json -H X-Auth-Token: e8f45c060e82447d9b07fcd3e3c7e048 
-d '{loadbalancer_id: 4b4d8b7a-c70d-4a5c-a4cb-bb906273d1b2, provider: 
haproxy}'
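The two curl calls above can also be driven from Python; a minimal sketch that only builds the requests (the endpoint, token, and IDs are placeholders — hand the resulting tuples to any HTTP client):

```python
import json

AGENT_LB_PATH = "/v2.0/agents/{agent_id}/agent-loadbalancers"

def unbind_request(endpoint, token, agent_id, lb_id):
    """Build the DELETE call that removes a load balancer from an agent."""
    url = "%s%s/%s.json" % (endpoint, AGENT_LB_PATH.format(agent_id=agent_id),
                            lb_id)
    headers = {"Accept": "application/json", "X-Auth-Token": token}
    return ("DELETE", url, headers, None)

def bind_request(endpoint, token, agent_id, lb_id, provider="haproxy"):
    """Build the POST call that adds an unhosted load balancer to an agent."""
    url = "%s%s.json" % (endpoint, AGENT_LB_PATH.format(agent_id=agent_id))
    headers = {"Accept": "application/json",
               "Content-Type": "application/json",
               "X-Auth-Token": token}
    body = json.dumps({"loadbalancer_id": lb_id, "provider": provider})
    return ("POST", url, headers, body)
```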

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474277

Title:
  [lbaas]Change lbaasv2 scheduler to support  moving loadbalancer from
  one lbaas v2 agent to another.

Status in neutron:
  New

Bug description:
  When multiple neutron-lbaasv2-agents exist, each load balancer is scheduled 
randomly on creation, and once a load balancer is bound to an agent the binding 
is fixed and cannot be changed to suit the user's needs, which is inflexible.
  The patch here implements functions like l3-agent-router-add and 
l3-agent-router-remove; they can be used to remove a load balancer from its 
default lbaas v2 agent, or to add a load balancer (without a hosting agent) to a 
specified agent. This is helpful in production environments, since it can balance 
the network traffic, and may also be useful in an LBaaS HA implementation.

  Remove the loadbalancer from one agent:
  curl -g -i -X DELETE 
http://CONTROLLER:9696/v2.0/agents/0fc961f1-2279-414c-8e91-172965319276/agent-loadbalancers/4b4d8b7a-c70d-4a5c-a4cb-bb906273d1b2.json
 -H User-Agent: python-neutronclient -H Accept: application/json -H 
X-Auth-Token: cdb8f136cacb467fb3eeecc5d331db4a

  Add the loadbalancer to the agent:
  curl -g -i -X POST 
http://CONTROLLER:9696/v2.0/agents/0fc961f1-2279-414c-8e91-172965319276/agent-loadbalancers.json
 -H User-Agent: python-neutronclient -H Content-Type: application/json -H 
Accept: application/json -H X-Auth-Token: e8f45c060e82447d9b07fcd3e3c7e048 
-d '{loadbalancer_id: 4b4d8b7a-c70d-4a5c-a4cb-bb906273d1b2, provider: 
haproxy}'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1474277/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474303] Re: Unexpected error while running command.\nCommand: sudo nova-rootwrap /etc/nova/rootwrap.conf blkid -o value -s TYPE /dev/nbd0\nExit code: 2\nStdout: u''\nStderr: u''

2015-07-14 Thread Dmitry Pyzhov
** Changed in: fuel
 Assignee: Nova (nova) = (unassigned)

-- 
You received this bug notification because you are a member of Nova,
which is a bug assignee.
https://bugs.launchpad.net/bugs/1474303

Title:
  Unexpected error while running command.\nCommand: sudo nova-rootwrap
  /etc/nova/rootwrap.conf blkid -o value -s TYPE /dev/nbd0\nExit code:
  2\nStdout: u''\nStderr: u''\n]

Status in Fuel for OpenStack:
  New

Bug description:
  1. Create new environment (Ubuntu Kilo)
  2. Choose Neutron, VLAN segmentation
  3. Add 1 controller, 1 compute, 1 cinder
  4. Start deployment. It was successful
  5. Start OSTF tests. Tests for stack failed
  6. There are errors and warnings in nova-conductor.log on controller (node-1):
  http://paste.openstack.org/show/373923/

  Logs are here:
  
https://drive.google.com/a/mirantis.com/file/d/0B6SjzarTGFxaalNfWVFuOFF0dGc/view?usp=sharing

  build_id: 2015-07-12_15-52-44, build_number: 31,
  release_versions: {2014.2.2-7.0: {VERSION: {build_id:
  2015-07-12_15-52-44, build_number: 31, api: 1.0, fuel-
  library_sha: 49c7ddeb5e4257bb52862bc5aa22600df71bb52a,
  nailgun_sha: 60f9bf536e30efd896b7b4da1830e71adda19e30,
  feature_groups: [mirantis], openstack_version: 2014.2.2-7.0,
  production: docker, python-fuelclient_sha:
  accd6493bf034ba7c70c987ace8f1dcd960cbdf5, astute_sha:
  9cbb8ae5adbe6e758b24b3c1021aac1b662344e8, fuel-ostf_sha:
  62785c16f8399f30526d24c52bb9ca23e1585bfb, release: 7.0,
  fuelmain_sha: 28551be12a050acb9a633933ed6a8b25e2dc411c}}},
  auth_required: true, api: 1.0, fuel-library_sha:
  49c7ddeb5e4257bb52862bc5aa22600df71bb52a, nailgun_sha:
  60f9bf536e30efd896b7b4da1830e71adda19e30, feature_groups:
  [mirantis], openstack_version: 2014.2.2-7.0, production:
  docker, python-fuelclient_sha:
  accd6493bf034ba7c70c987ace8f1dcd960cbdf5, astute_sha:
  9cbb8ae5adbe6e758b24b3c1021aac1b662344e8, fuel-ostf_sha:
  62785c16f8399f30526d24c52bb9ca23e1585bfb, release: 7.0,
  fuelmain_sha: 28551be12a050acb9a633933ed6a8b25e2dc411c

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1474303/+subscriptions



[Yahoo-eng-team] [Bug 1474303] [NEW] Unexpected error while running command.\nCommand: sudo nova-rootwrap /etc/nova/rootwrap.conf blkid -o value -s TYPE /dev/nbd0\nExit code: 2\nStdout: u''\nStderr: u

2015-07-14 Thread Anastasia Palkina
Public bug reported:

1. Create new environment (Ubuntu Kilo)
2. Choose Neutron, VLAN segmentation
3. Add 1 controller, 1 compute, 1 cinder
4. Start deployment. It was successful
5. Start OSTF tests. Tests for stack failed
6. There are errors and warnings in nova-conductor.log on controller (node-1):
http://paste.openstack.org/show/373923/

Logs are here:
https://drive.google.com/a/mirantis.com/file/d/0B6SjzarTGFxaalNfWVFuOFF0dGc/view?usp=sharing

build_id: 2015-07-12_15-52-44, build_number: 31,
release_versions: {2014.2.2-7.0: {VERSION: {build_id:
2015-07-12_15-52-44, build_number: 31, api: 1.0, fuel-
library_sha: 49c7ddeb5e4257bb52862bc5aa22600df71bb52a, nailgun_sha:
60f9bf536e30efd896b7b4da1830e71adda19e30, feature_groups:
[mirantis], openstack_version: 2014.2.2-7.0, production:
docker, python-fuelclient_sha:
accd6493bf034ba7c70c987ace8f1dcd960cbdf5, astute_sha:
9cbb8ae5adbe6e758b24b3c1021aac1b662344e8, fuel-ostf_sha:
62785c16f8399f30526d24c52bb9ca23e1585bfb, release: 7.0,
fuelmain_sha: 28551be12a050acb9a633933ed6a8b25e2dc411c}}},
auth_required: true, api: 1.0, fuel-library_sha:
49c7ddeb5e4257bb52862bc5aa22600df71bb52a, nailgun_sha:
60f9bf536e30efd896b7b4da1830e71adda19e30, feature_groups:
[mirantis], openstack_version: 2014.2.2-7.0, production:
docker, python-fuelclient_sha:
accd6493bf034ba7c70c987ace8f1dcd960cbdf5, astute_sha:
9cbb8ae5adbe6e758b24b3c1021aac1b662344e8, fuel-ostf_sha:
62785c16f8399f30526d24c52bb9ca23e1585bfb, release: 7.0,
fuelmain_sha: 28551be12a050acb9a633933ed6a8b25e2dc411c

** Affects: fuel
 Importance: Critical
 Status: New

-- 
You received this bug notification because you are a member of Nova,
which is a bug assignee.
https://bugs.launchpad.net/bugs/1474303

Title:
  Unexpected error while running command.\nCommand: sudo nova-rootwrap
  /etc/nova/rootwrap.conf blkid -o value -s TYPE /dev/nbd0\nExit code:
  2\nStdout: u''\nStderr: u''\n]

Status in Fuel for OpenStack:
  New

Bug description:
  1. Create new environment (Ubuntu Kilo)
  2. Choose Neutron, VLAN segmentation
  3. Add 1 controller, 1 compute, 1 cinder
  4. Start deployment. It was successful
  5. Start OSTF tests. Tests for stack failed
  6. There are errors and warnings in nova-conductor.log on controller (node-1):
  http://paste.openstack.org/show/373923/

  Logs are here:
  
https://drive.google.com/a/mirantis.com/file/d/0B6SjzarTGFxaalNfWVFuOFF0dGc/view?usp=sharing

  build_id: 2015-07-12_15-52-44, build_number: 31,
  release_versions: {2014.2.2-7.0: {VERSION: {build_id:
  2015-07-12_15-52-44, build_number: 31, api: 1.0, fuel-
  library_sha: 49c7ddeb5e4257bb52862bc5aa22600df71bb52a,
  nailgun_sha: 60f9bf536e30efd896b7b4da1830e71adda19e30,
  feature_groups: [mirantis], openstack_version: 2014.2.2-7.0,
  production: docker, python-fuelclient_sha:
  accd6493bf034ba7c70c987ace8f1dcd960cbdf5, astute_sha:
  9cbb8ae5adbe6e758b24b3c1021aac1b662344e8, fuel-ostf_sha:
  62785c16f8399f30526d24c52bb9ca23e1585bfb, release: 7.0,
  fuelmain_sha: 28551be12a050acb9a633933ed6a8b25e2dc411c}}},
  auth_required: true, api: 1.0, fuel-library_sha:
  49c7ddeb5e4257bb52862bc5aa22600df71bb52a, nailgun_sha:
  60f9bf536e30efd896b7b4da1830e71adda19e30, feature_groups:
  [mirantis], openstack_version: 2014.2.2-7.0, production:
  docker, python-fuelclient_sha:
  accd6493bf034ba7c70c987ace8f1dcd960cbdf5, astute_sha:
  9cbb8ae5adbe6e758b24b3c1021aac1b662344e8, fuel-ostf_sha:
  62785c16f8399f30526d24c52bb9ca23e1585bfb, release: 7.0,
  fuelmain_sha: 28551be12a050acb9a633933ed6a8b25e2dc411c

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1474303/+subscriptions



[Yahoo-eng-team] [Bug 1474333] [NEW] submmit testing (ignore it)

2015-07-14 Thread Liyankun
Public bug reported:

report a bug

** Affects: keystone
 Importance: Undecided
 Assignee: Liyankun (liyankun)
 Status: In Progress


** Tags: sd

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1474333

Title:
  submmit testing (ignore it)

Status in Keystone:
  In Progress

Bug description:
  report a bug

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1474333/+subscriptions



[Yahoo-eng-team] [Bug 1474265] Re: ml2 triggers attribute error for dvr_deletens_if_no_port

2015-07-14 Thread Isaku Yamahata
** Also affects: networking-odl
   Importance: Undecided
   Status: New

** Changed in: networking-odl
 Assignee: (unassigned) = Isaku Yamahata (yamahata)

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474265

Title:
  ml2 triggers attribute error for dvr_deletens_if_no_port

Status in networking-odl:
  New

Bug description:
  When using the ODL mechanism driver and the ODL L3 plugin, the following
  exception occurs

  2015-07-13 19:09:22.568 ERROR oslo_messaging.rpc.dispatcher 
[req-e9a7622a-b904-4c43-9c17-2914a7f45963 None None] Exception during message 
handling: 'OpenDaylightL3RouterPlug
  in' object has no attribute 'dvr_deletens_if_no_port'
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
142, in _dispatch_and_repl
  y
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
186, in _dispatch
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 
130, in _do_dispatch
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/odl/neutron/neutron/api/rpc/handlers/dhcp_rpc.py, line 170, in 
release_dhcp_por
  t
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
plugin.delete_ports_by_device_id(context, device_id, network_id)
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/odl/neutron/neutron/db/db_base_plugin_v2.py, line 884, in 
delete_ports_by_devic
  e_id
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
self.delete_port(context, port_id)
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py, line 146, in wrapper
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher ectxt.value = 
e.inner_exc
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 119, in 
__exit__
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py, line 136, in wrapper
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher return 
f(*args, **kwargs)
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher   File 
/odl/neutron/neutron/plugins/ml2/plugin.py, line 1291, in delete_port
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher 
removed_routers = l3plugin.dvr_deletens_if_no_port(
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher AttributeError: 
'OpenDaylightL3RouterPlugin' object has no attribute 'dvr_deletens_if_no_port'
  2015-07-13 19:09:22.568 TRACE oslo_messaging.rpc.dispatcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1474265/+subscriptions



[Yahoo-eng-team] [Bug 1196924] Re: Stop and Delete operations should give the Guest a chance to shutdown

2015-07-14 Thread Chris J Arges
** Also affects: nova (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1196924

Title:
  Stop and Delete operations should give the Guest a chance to shutdown

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  In Progress
Status in nova source package in Trusty:
  New

Bug description:
  [Impact]

   * VMs are shut down without any signal/notification from the
  hypervisor, so services running inside VMs have no chance to
  perform a clean shutdown

  [Test Case]

   * 1. Stop a VM
 2. The VM is shut down without any notification

  [Regression Potential]

   * none


  Currently in libvirt, stop and delete operations simply destroy the
  underlying VM. Some guest OSes do not react well to this type of
  power failure, and it would be better if these operations followed the
  same approach as a soft_reboot and gave the guest a chance to shut down
  gracefully. Even where a VM is being deleted, it may be booted from a
  volume which will be reused on another server.
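The behaviour the report asks for boils down to "request a clean shutdown, wait, then destroy only on timeout". A minimal sketch, where `domain` is a duck-typed stand-in for libvirt's virDomain (whose real methods are shutdown(), isActive(), and destroy()):

```python
import time

def graceful_stop(domain, timeout=60.0, poll=0.5):
    """Request a clean guest shutdown; destroy only after a timeout.

    Returns True when the guest powered off by itself, False when it
    had to be destroyed.
    """
    domain.shutdown()                # ACPI-style request; guest may ignore it
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not domain.is_active():
            return True              # clean shutdown observed
        time.sleep(poll)
    domain.destroy()                 # hard power-off, last resort
    return False
```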

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1196924/+subscriptions



[Yahoo-eng-team] [Bug 1293540] Re: nova should make sure the bridge exists before resuming a VM after an offline snapshot

2015-07-14 Thread Kevin Carter
** No longer affects: openstack-ansible

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293540

Title:
  nova should make sure the bridge exists before resuming a VM after an
  offline snapshot

Status in neutron:
  Confirmed
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  My setup is based on icehouse-2, KVM, Neutron setup with ML2 and the linux 
bridge agent, CentOS 6.5 and LVM as the ephemeral backend.
  The OS should not matter in this, LVM should not matter either, just make 
sure the snapshot takes the VM offline.

  How to reproduce:
  1. create one VM on a compute node (make sure only one VM is present).
  2. snapshot the VM (offline).
  3. linux bridge removes the tap interface from the bridge and decides to 
remove the bridge also since there are no other interfaces present.
  4. nova tries to resume the VM and fails since no bridge is present (libvirt 
error, can't get the bridge MTU).

  Side question:
  Why do both neutron and nova deal with the bridge?
  I can understand the need to remove empty bridges, but I believe nova should 
be the one to do it, since nova is the component dealing mainly with the bridge 
itself.

  More information:

  During the snapshot Neutron (linux bridge) is called:
  (neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent)
  treat_devices_removed is called and removes the tap interface and calls 
self.br_mgr.remove_empty_bridges

  On resume:
  nova/virt/libvirt/driver.py in the snapshot method fails at:
  if CONF.libvirt.virt_type != 'lxc' and not live_snapshot:
  if state == power_state.RUNNING:
  new_dom = self._create_domain(domain=virt_dom)

  Having more than one VM on the same bridge works fine since neutron
  (the linux bridge agent) only removes an empty bridge.
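The check nova could perform before resuming a snapshotted VM can be sketched as follows; `existing_bridges` is the current bridge list and `create_bridge` a callable that recreates one (in practice this would shell out via brctl/ip; both names here are placeholders, not nova APIs):

```python
def ensure_bridge(bridge_name, existing_bridges, create_bridge):
    """Recreate the bridge if the agent removed it while the VM was offline.

    Returns True if a bridge had to be recreated, False if it was already
    present.
    """
    if bridge_name not in existing_bridges:
        create_bridge(bridge_name)
        return True
    return False
```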

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1293540/+subscriptions



[Yahoo-eng-team] [Bug 1474467] [NEW] default_schedule_zone should be list

2015-07-14 Thread Kevin Stevens
Public bug reported:

I'd like to re-open or re-state the issue reported in
https://bugs.launchpad.net/nova/+bug/1037371 .

Let us say that I have 3 availability zones: nova, az1, and az2. I do not
care whether I land in nova or az1 when no AZ is specified on boot, but az2 is
special and I do *not* want to land there by default. The only way around
this that I can think of would be to disable the hypervisors in the az2
AZ and boot to them manually. However, if I disable the nodes in az2 I
cannot simply boot to az2 and let the scheduler make the appropriate
choice about where to schedule the instance.

It seems like it would make sense for default_schedule_zone to be a list
option or, since that might be a pain to keep track of, for there to be
a sort of inverse option like excluded_schedule_zones.
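A sketch of what the proposed excluded_schedule_zones semantics could look like (the option name and zone names are hypothetical, not existing nova configuration):

```python
def eligible_zones(all_zones, requested_zone=None, excluded=('az2',)):
    """Candidate zones for a new instance.

    An explicitly requested zone is always honoured, even when it is in
    the excluded list; otherwise excluded zones are dropped from the
    default scheduling candidates.
    """
    if requested_zone is not None:
        return [requested_zone] if requested_zone in all_zones else []
    return [z for z in all_zones if z not in excluded]
```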

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474467

Title:
  default_schedule_zone should be list

Status in OpenStack Compute (nova):
  New

Bug description:
  I'd like to re-open or re-state the issue reported in
  https://bugs.launchpad.net/nova/+bug/1037371 .

  Let us say that I have 3 availability zones: nova, az1, and az2. I do not
  care whether I land in nova or az1 when no AZ is specified on boot, but
  az2 is special and I do *not* want to land there by default. The only
  way around this that I can think of would be to disable the hypervisors
  in the az2 AZ and boot to them manually. However, if I disable the nodes
  in az2 I cannot simply boot to az2 and let the scheduler make the
  appropriate choice about where to schedule the instance.

  It seems like it would make sense for default_schedule_zone to be a
  list option or, since that might be a pain to keep track of, for
  there to be a sort of inverse option like excluded_schedule_zones.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1474467/+subscriptions



[Yahoo-eng-team] [Bug 1474491] [NEW] keystone.tests.unit.test_config fails in isolation

2015-07-14 Thread Dolph Mathews
Public bug reported:

While investigating bug 1474069, I discovered this test fails when run
in isolation as well.

$ tox -e py27 keystone.tests.unit.test_config
running=
OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit} --list 
running=
OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit}  --load-list /tmp/tmpXPUxi4
==
FAIL: keystone.tests.unit.test_config.DeprecatedOverrideTestCase.test_sql
tags: worker-0
--
Empty attachments:
  pythonlogging:''-1
  stderr
  stdout

pythonlogging:'': {{{Adding cache-proxy
'keystone.tests.unit.test_cache.CacheIsolatingProxy' to backend.}}}

Traceback (most recent call last):
  File keystone/tests/unit/test_config.py, line 81, in test_sql
self.assertEqual('sqlite://new', CONF.database.connection)
  File 
/home/dolph/venv/os/local/lib/python2.7/site-packages/oslo_config/cfg.py, 
line 1902, in __getattr__
raise NoSuchOptError(name)
oslo_config.cfg.NoSuchOptError: no such option: database
==
FAIL: keystone.tests.unit.test_config.DeprecatedTestCase.test_sql
tags: worker-0
--
Empty attachments:
  pythonlogging:''-1
  stderr
  stdout

pythonlogging:'': {{{Adding cache-proxy
'keystone.tests.unit.test_cache.CacheIsolatingProxy' to backend.}}}

Traceback (most recent call last):
  File keystone/tests/unit/test_config.py, line 65, in test_sql
self.assertEqual('sqlite://deprecated', CONF.database.connection)
  File 
/home/dolph/venv/os/local/lib/python2.7/site-packages/oslo_config/cfg.py, 
line 1902, in __getattr__
raise NoSuchOptError(name)
oslo_config.cfg.NoSuchOptError: no such option: database
Ran 4 (+3) tests in 0.032s (-0.357s)
FAILED (id=5988, failures=2 (+2))

The individual tests fail when run in isolation as well. For example:

$ tox -e py27 
keystone.tests.unit.test_config.DeprecatedOverrideTestCase.test_sql
$ tox -e py27 keystone.tests.unit.test_config.DeprecatedTestCase.test_sql
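A likely reason these tests pass in the full run but fail alone is import-order-dependent option registration: in the full suite something else registers the [database] option group before these tests execute. A pure-Python sketch of that mechanism (LazyConf is only an illustration of the failure mode, not oslo.config, and this is a plausible cause, not a confirmed root cause):

```python
class LazyConf(object):
    """Stand-in for oslo.config's CONF with lazily registered groups."""

    def __init__(self):
        self._groups = {}

    def register_group(self, name, **opts):
        # In keystone this happens as a side effect of importing the sql
        # backend; run in isolation, nothing triggers the import.
        self._groups[name] = dict(opts)

    def __getattr__(self, name):
        try:
            return self._groups[name]
        except KeyError:
            raise AttributeError('no such option group: %s' % name)
```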

** Affects: keystone
 Importance: Low
 Status: Triaged


** Tags: low-hanging-fruit test-improvement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1474491

Title:
  keystone.tests.unit.test_config fails in isolation

Status in Keystone:
  Triaged

Bug description:
  While investigating bug 1474069, I discovered this test fails when run
  in isolation as well.

  $ tox -e py27 keystone.tests.unit.test_config
  running=
  OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
  ${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit} --list 
  running=
  OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
  ${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit}  --load-list /tmp/tmpXPUxi4
  ==
  FAIL: keystone.tests.unit.test_config.DeprecatedOverrideTestCase.test_sql
  tags: worker-0
  --
  Empty attachments:
pythonlogging:''-1
stderr
stdout

  pythonlogging:'': {{{Adding cache-proxy
  'keystone.tests.unit.test_cache.CacheIsolatingProxy' to backend.}}}

  Traceback (most recent call last):
File keystone/tests/unit/test_config.py, line 81, in test_sql
  self.assertEqual('sqlite://new', CONF.database.connection)
File 
/home/dolph/venv/os/local/lib/python2.7/site-packages/oslo_config/cfg.py, 
line 1902, in __getattr__
  raise NoSuchOptError(name)
  oslo_config.cfg.NoSuchOptError: no such option: database
  ==
  FAIL: keystone.tests.unit.test_config.DeprecatedTestCase.test_sql
  tags: worker-0
  --
  Empty attachments:
pythonlogging:''-1
stderr
stdout

  pythonlogging:'': {{{Adding cache-proxy
  'keystone.tests.unit.test_cache.CacheIsolatingProxy' to backend.}}}

  Traceback (most recent call last):
File keystone/tests/unit/test_config.py, line 65, in test_sql
  self.assertEqual('sqlite://deprecated', CONF.database.connection)
File 
/home/dolph/venv/os/local/lib/python2.7/site-packages/oslo_config/cfg.py, 
line 1902, in __getattr__
  raise 

[Yahoo-eng-team] [Bug 1474498] [NEW] jasmine tests redirect to login

2015-07-14 Thread Tyr Johanson
Public bug reported:

https://review.openstack.org/#/c/200725 introduced a strange error.

Run Horizon, then visit http://localhost:8000/jasmine/ServicesTests.

It breaks the test runner and redirects to the login page. On my machine
this happens in Chrome, but not Safari.

Chrome:  43.0.2357.132
Safari: 8.0.4

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474498

Title:
  jasmine tests redirect to login

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  https://review.openstack.org/#/c/200725 introduced a strange error.

  Run Horizon, then visit http://localhost:8000/jasmine/ServicesTests.

  It breaks the test runner and redirects to the login page. On my
  machine this happens in Chrome, but not Safari.

  Chrome:  43.0.2357.132
  Safari: 8.0.4

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474498/+subscriptions



[Yahoo-eng-team] [Bug 1474162] Re: ldap unicode issue when doing a show user

2015-07-14 Thread Dolph Mathews
Ah, then we need to backport the fix for bug 1448286 (which is already
tagged for backporting), along with the fix for bug 1454968 (which my
fix for the first bug triggered).

Closing this bug as we need to track against the bugs merged to master.

** Changed in: keystone
   Status: Incomplete = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1474162

Title:
  ldap unicode issue when doing a show user

Status in Keystone:
  Invalid

Bug description:
  In the stable/kilo release, when a username contains non-ASCII characters, 
showing the user from LDAP with the following command -
  openstack user show --domain=ad Test Accent Communiquè
  will throw an exception. This has already been addressed on the master 
branch, so what needs to be done is just to backport the changes to 
stable/kilo.

  I tested the changes on the master branch and they work fine.

  This is similar to https://bugs.launchpad.net/keystone/+bug/1419187
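The 'ascii' codec error in the traceback below comes from passing unicode text containing u'\xe8' where byte strings are expected; encoding explicitly to UTF-8 avoids it. This sketches only the general pattern, not the exact keystone fix (under Python 2, where the traceback comes from, the type to test would be `unicode`):

```python
def ldap_value(value):
    """Encode text to UTF-8 bytes before handing it to a byte-oriented API.

    Already-encoded byte strings are passed through unchanged.
    """
    if isinstance(value, str):
        return value.encode('utf-8')
    return value
```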





  (keystone.common.wsgi): 2015-07-10 21:25:26,351 INFO wsgi __call__ GET 
/domains?name=ad
  (keystone.common.wsgi): 2015-07-10 21:25:26,385 ERROR wsgi __call__ 'ascii' 
codec can't encode character u'\xe8' in position 21: ordinal not in range(128)
  Traceback (most recent call last):
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/common/wsgi.py,
 line 452, in __call__
  response = request.get_response(self.application)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/request.py,
 line 1317, in send
  application, catch_exc_info=False)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/request.py,
 line 1281, in call_application
  app_iter = application(self.environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 

[Yahoo-eng-team] [Bug 1465922] Re: Password visible in clear text in keystone.log when user created and keystone debug logging is enabled

2015-07-14 Thread Dolph Mathews
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Also affects: keystone/juno
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1465922

Title:
  Password visible in clear text in keystone.log when user created and
  keystone debug logging is enabled

Status in Keystone:
  Fix Committed
Status in Keystone juno series:
  New
Status in Keystone kilo series:
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  grep CLEARTEXTPASSWORD keystone.log

  2015-06-16 06:44:39.770 20986 DEBUG keystone.common.controller [-]
  RBAC: Authorizing identity:create_user(user={u'domain_id': u'default',
  u'password': u'CLEARTEXTPASSWORD', u'enabled': True,
  u'default_project_id': u'0175b43419064ae38c4b74006baaeb8d', u'name':
  u'DermotJ'}) _build_policy_check_credentials /usr/lib/python2.7/site-
  packages/keystone/common/controller.py:57

  Issue code:
  
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L57

  LOG.debug('RBAC: Authorizing %(action)s(%(kwargs)s)', {
  'action': action,
  'kwargs': ', '.join(['%s=%s' % (k, kwargs[k]) for k in kwargs])})

  Masking the values of sensitive fields like 'password' with some
  meaningless placeholder text is one way to fix this.

  In addition, we should never pass the original value of the password
  through the code or persist it anywhere; instead we should convert it
  to a strong hash as early as possible. With a good hashing scheme, the
  original password value is never needed again.
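A minimal sketch of the masking idea (hypothetical helper and key list, not Keystone's actual code): sensitive kwargs are replaced before the dict ever reaches LOG.debug.

```python
SENSITIVE_KEYS = frozenset(['password', 'token', 'secret'])

def mask_sensitive(kwargs):
    """Return a copy of kwargs with sensitive values masked out."""
    return dict((k, 'XXX' if k in SENSITIVE_KEYS else v)
                for k, v in kwargs.items())

# The debug statement would then format the masked copy, e.g.:
# LOG.debug('RBAC: Authorizing %(action)s(%(kwargs)s)', {
#     'action': action,
#     'kwargs': ', '.join('%s=%s' % (k, v)
#                         for k, v in mask_sensitive(kwargs).items())})
```

This keeps the debug line informative while ensuring the cleartext password never appears in the log.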

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1465922/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474512] [NEW] STATIC_URL statically defined for stack graphics

2015-07-14 Thread David Lyle
Public bug reported:

The SVG and GIF images are still using '/static/' as the base URL. Since
both WEBROOT and STATIC_URL are configurable, this needs to be fixed or
the images won't be found when either has been set.

** Affects: horizon
 Importance: Medium
 Assignee: David Lyle (david-lyle)
 Status: New


** Tags: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474512

Title:
  STATIC_URL statically defined for stack graphics

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The SVG and GIF images are still using '/static/' as the base URL.
  Since both WEBROOT and STATIC_URL are configurable, this needs to be
  fixed or the images won't be found when either has been set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474512/+subscriptions



[Yahoo-eng-team] [Bug 1454968] Re: hard to understand the uri printed in the log

2015-07-14 Thread Dolph Mathews
** Also affects: keystone/juno
   Importance: Undecided
   Status: New

** Changed in: keystone/juno
   Status: New = In Progress

** Changed in: keystone/juno
   Importance: Undecided = Medium

** Changed in: keystone/juno
 Assignee: (unassigned) = Dolph Mathews (dolph)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1454968

Title:
  hard to understand the uri printed in the log

Status in Keystone:
  Fix Released
Status in Keystone juno series:
  In Progress
Status in Keystone kilo series:
  In Progress

Bug description:
  In keystone's log file, we can easily find some uri printed like this:
  
http://127.0.0.1:35357/v3/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens

  It seems something goes wrong when we try to log the URI in the log file.
  LOG.info('%(req_method)s %(uri)s', {
  'req_method': req.environ['REQUEST_METHOD'].upper(),
  'uri': wsgiref.util.request_uri(req.environ),
  })

  code is here:
  
https://github.com/openstack/keystone/blob/0debc2fbf448b44574da6f3fef7d457037c59072/keystone/common/wsgi.py#L232
  but it seems the URI is already wrong when the req is passed in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1454968/+subscriptions



[Yahoo-eng-team] [Bug 1474550] [NEW] network allocation randomly failing with InstanceUpdateConflict after compare and swap was merged

2015-07-14 Thread Matt Riedemann
Public bug reported:

Seeing this quite a bit:

http://logs.openstack.org/85/197185/10/gate/gate-tempest-dsvm-
cells/7ef1949/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-07-14_16_39_06_862

2015-07-14 16:39:06.862 ERROR nova.compute.manager 
[req-94a9749e-2aed-433c-8dea-f672b61b7fcf 
tempest-DeleteServersTestJSON-193542712 
tempest-DeleteServersTestJSON-409686918] [instance: 
f143b952-9edd-4971-86fe-227d61294a76] Failed to allocate network(s)
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] Traceback (most recent call last):
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 2057, in _build_resources
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] requested_networks, security_groups)
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 1572, in 
_build_networks_for_instance
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] requested_networks, macs, 
security_groups, dhcp_options)
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76]   File 
/opt/stack/new/nova/nova/compute/manager.py, line 1606, in _allocate_network
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] 
instance.save(expected_task_state=[None])
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76]   File 
/opt/stack/new/nova/nova/objects/base.py, line 100, in wrapper
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] self._context, self, fn.__name__, 
args, kwargs)
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76]   File 
/opt/stack/new/nova/nova/conductor/rpcapi.py, line 266, in object_action
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] objmethod=objmethod, args=args, 
kwargs=kwargs)
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76]   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py, line 
158, in call
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] retry=self.retry)
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76]   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py, line 90, 
in _send
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] timeout=timeout, retry=retry)
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76]   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py, 
line 361, in send
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] retry=retry)
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76]   File 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py, 
line 352, in _send
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] raise result
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] InstanceUpdateConflict_Remote: Conflict 
updating instance f143b952-9edd-4971-86fe-227d61294a76
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] Traceback (most recent call last):
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] 
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76]   File 
/opt/stack/new/nova/nova/conductor/manager.py, line 437, in _object_dispatch
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] return getattr(target, method)(*args, 
**kwargs)
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] 
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76]   File 
/opt/stack/new/nova/nova/objects/base.py, line 116, in wrapper
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] return fn(self, *args, **kwargs)
2015-07-14 16:39:06.862 3472 ERROR nova.compute.manager [instance: 
f143b952-9edd-4971-86fe-227d61294a76] 

[Yahoo-eng-team] [Bug 1474551] [NEW] static_url should be configurable

2015-07-14 Thread David Lyle
Public bug reported:

STATIC_URL does not have to live under Horizon's WEBROOT. The two were
mistakenly tied together when WEBROOT was changed; this should be flexible.

** Affects: horizon
 Importance: High
 Assignee: David Lyle (david-lyle)
 Status: New


** Tags: horizon-core

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474551

Title:
  static_url should be configurable

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  STATIC_URL does not have to live under Horizon's WEBROOT. The two were
  mistakenly tied together when WEBROOT was changed; this should be
  flexible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474551/+subscriptions



[Yahoo-eng-team] [Bug 1474618] [NEW] N1KV network and port creates failing from dashboard

2015-07-14 Thread Saksham Varma
Public bug reported:

Due to the renaming of the profile attribute in the Neutron attribute
extensions for networks and ports, network and port creation fails from
the dashboard, since the dashboard is still using n1kv:profile_id
rather than n1kv:profile.

** Affects: horizon
 Importance: Undecided
 Assignee: Saksham Varma (sakvarma)
 Status: New


** Tags: n1kv

** Changed in: horizon
 Assignee: (unassigned) = Saksham Varma (sakvarma)

** Summary changed:

- Fix N1KV network and port-creates through dashboard
+ N1KV network and port creates failing from dashboard

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1474618

Title:
  N1KV network and port creates failing from dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Due to the renaming of the profile attribute in the Neutron attribute
  extensions for networks and ports, network and port creation fails
  from the dashboard, since the dashboard is still using
  n1kv:profile_id rather than n1kv:profile.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1474618/+subscriptions



[Yahoo-eng-team] [Bug 1354258] Re: nova-api will go wrong if AZ name has space in it when memcach is used

2015-07-14 Thread Davanum Srinivas (DIMS)
** Changed in: oslo-incubator
   Status: Triaged = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1354258

Title:
  nova-api will go wrong if AZ name has space in it when memcach is used

Status in OpenStack Compute (nova):
  Invalid
Status in oslo-incubator:
  Won't Fix

Bug description:
  Description:
  1. memcache is enabled
  2. the AZ name has a space in it, such as vmware region

  Then the nova-api will go wrong:
  [root@rs-144-1 init.d]# nova list
  ERROR: The server has either erred or is incapable of performing the 
requested operation. (HTTP 500) (Request-ID: 
req-a26c1fd3-ce08-4875-aacf-f8db8f73b089)

  Reason:
  Memcache retrieves the AZ name as the cache key and validates it. It
  raises an error if there are unexpected characters in the key.
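One common workaround (a sketch of the general technique, not the actual Nova fix) is to hash the cache key, so spaces and control characters in an AZ name never reach memcached:

```python
import hashlib

def safe_cache_key(name):
    """Hash an arbitrary string (such as an AZ name containing spaces)
    into a fixed-length hex key that memcached will accept."""
    return hashlib.sha1(name.encode('utf-8')).hexdigest()

# 'vmware region' contains a space, which memcached rejects as a key;
# the hashed form contains only hex characters and is always valid.
key = safe_cache_key('vmware region')
```

The trade-off is that hashed keys are no longer human-readable when inspecting the cache directly.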

  LOG in /var/log/api.log

  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/availability_zones.py, line 145, in 
get_instance_availability_zone
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack az = 
cache.get(cache_key)
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/memcache.py, line 898, in get
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack return 
self._get('get', key)
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/memcache.py, line 847, in _get
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack self.check_key(key)
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/memcache.py, line 1065, in check_key
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack #Control 
characters not allowed)
  2014-08-08 03:22:50.525 23184 TRACE nova.api.openstack 
MemcachedKeyCharacterError: Control characters not allowed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1354258/+subscriptions



[Yahoo-eng-team] [Bug 1474622] [NEW] test submit bug

2015-07-14 Thread Liyankun
Public bug reported:

test

** Affects: keystone
 Importance: Undecided
 Assignee: Liyankun (liyankun)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) = Liyankun (liyankun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1474622

Title:
  test submit bug

Status in Keystone:
  New

Bug description:
  test

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1474622/+subscriptions



[Yahoo-eng-team] [Bug 1474660] [NEW] Stack trace if federation mapping references unavailable request variables

2015-07-14 Thread Julian Edwards
Public bug reported:

If the federation mapping depends on a request variable that is not
present (e.g. REMOTE_USER is not set) then keystone blows up with a
stack trace.

Ideally, something a little more user-friendly might happen.

Trace below:


[Wed Jul 15 04:29:44 2015] [error] 1827 ERROR keystone.common.wsgi [-] tuple 
index out of range
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi Traceback 
(most recent call last):
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/common/wsgi.py, line 239, in __call__
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi result = 
method(context, **params)
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/contrib/federation/controllers.py, line 292, in 
federated_sso_auth
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi 
protocol_id)
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/contrib/federation/controllers.py, line 267, in 
federated_authentication
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi return 
self.authenticate_for_token(context, auth=auth)
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/auth/controllers.py, line 377, in 
authenticate_for_token
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi 
self.authenticate(context, auth_info, auth_context)
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/auth/controllers.py, line 502, in authenticate
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi 
auth_context)
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/auth/plugins/mapped.py, line 70, in authenticate
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi 
self.identity_api)
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/auth/plugins/mapped.py, line 144, in 
handle_unscoped_token
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi 
federation_api, identity_api)
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/auth/plugins/mapped.py, line 193, in 
apply_mapping_filter
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi 
mapped_properties = rule_processor.process(assertion)
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/contrib/federation/utils.py, line 471, in process
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi 
new_local = self._update_local_mapping(local, direct_maps)
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/contrib/federation/utils.py, line 613, in 
_update_local_mapping
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi 
new_value = self._update_local_mapping(v, direct_maps)
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/contrib/federation/utils.py, line 615, in 
_update_local_mapping
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi 
new_value = v.format(*direct_maps)
[Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi IndexError: 
tuple index out of range
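A more user-friendly behaviour could look like the sketch below (hypothetical helper, not Keystone's code): catch the IndexError raised by str.format when a positional reference has no matching value, and re-raise a descriptive error instead.

```python
def render_mapping_value(template, direct_maps):
    """Format a mapping template, raising a descriptive error when it
    references a request variable that is not present."""
    try:
        return template.format(*direct_maps)
    except IndexError:
        # e.g. template '{0}' with an empty direct_maps list, as happens
        # when REMOTE_USER is not set in the request environment.
        raise ValueError(
            'mapping template %r references an assertion value that is '
            'not available in this request' % template)
```

The caller could then turn the ValueError into a 400-level response rather than a stack trace.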

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1474660

Title:
  Stack trace if federation mapping references unavailable request
  variables

Status in Keystone:
  New

Bug description:
  If the federation mapping depends on a request variable that is not
  present (e.g. REMOTE_USER is not set) then keystone blows up with a
  stack trace.

  Ideally, something a little more user-friendly might happen.

  Trace below:

  
  [Wed Jul 15 04:29:44 2015] [error] 1827 ERROR keystone.common.wsgi [-] tuple 
index out of range
  [Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi Traceback 
(most recent call last):
  [Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/common/wsgi.py, line 239, in __call__
  [Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi result 
= method(context, **params)
  [Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 
/opt/stack/keystone/keystone/contrib/federation/controllers.py, line 292, in 
federated_sso_auth
  [Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi 
protocol_id)
  [Wed Jul 15 04:29:44 2015] [error] 1827 TRACE keystone.common.wsgi   File 

[Yahoo-eng-team] [Bug 1474266] Re: py34 unit test starts to fail after Routes 2.1 is excluded in global requirements

2015-07-14 Thread YAMAMOTO Takashi
** Also affects: networking-midonet
   Importance: Undecided
   Status: New

** Changed in: networking-midonet
 Assignee: (unassigned) = YAMAMOTO Takashi (yamamoto)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474266

Title:
  py34 unit test starts to fail after Routes 2.1 is excluded in global
  requirements

Status in networking-midonet:
  New
Status in neutron:
  Fix Committed

Bug description:
  Routes 2.1 was recently excluded in global-requirements as Nova requests it.
  As a result, we have no version of Routes which is compatible with Python 3.
  We need to excluded affected tests until we have python3 compat Routes.
  This bug is now blocking requirements update by the cron job.

  http://logs.openstack.org/46/182746/73/check/gate-neutron-
  python34/be0d939/console.html

  2015-07-13 22:23:32.840 | 
==
  2015-07-13 22:23:32.840 | ERROR: 
neutron.tests.unit.extensions.test_portsecurity.TestPortSecurity.test_update_port_remove_port_security_security_group_read
  2015-07-13 22:23:32.840 | 
--
  2015-07-13 22:23:32.840 | Empty attachments:
  2015-07-13 22:23:32.840 |   pythonlogging:'neutron.api.extensions'
  2015-07-13 22:23:32.840 | 
  2015-07-13 22:23:32.840 | pythonlogging:'': {{{2015-07-13 22:23:25,263 
INFO [neutron.manager] Loading core plugin: 
neutron.tests.unit.extensions.test_portsecurity.PortSecurityTestPlugin}}}
  2015-07-13 22:23:32.840 | 
  2015-07-13 22:23:32.840 | Traceback (most recent call last):
  2015-07-13 22:23:32.840 |   File 
/home/jenkins/workspace/gate-neutron-python34/neutron/tests/unit/extensions/test_portsecurity.py,
 line 171, in setUp
  2015-07-13 22:23:32.841 | super(PortSecurityDBTestCase, 
self).setUp(plugin)
  2015-07-13 22:23:32.841 |   File 
/home/jenkins/workspace/gate-neutron-python34/neutron/tests/unit/extensions/test_portsecurity.py,
 line 40, in setUp
  2015-07-13 22:23:32.841 | super(PortSecurityTestCase, 
self).setUp(plugin=plugin, ext_mgr=ext_mgr)
  2015-07-13 22:23:32.841 |   File 
/home/jenkins/workspace/gate-neutron-python34/neutron/tests/unit/db/test_db_base_plugin_v2.py,
 line 121, in setUp
  2015-07-13 22:23:32.841 | self.api = router.APIRouter()
  2015-07-13 22:23:32.841 |   File 
/home/jenkins/workspace/gate-neutron-python34/neutron/api/v2/router.py, line 
104, in __init__
  2015-07-13 22:23:32.841 | mapper.connect('index', '/', 
controller=Index(RESOURCES))
  2015-07-13 22:23:32.841 |   File 
/home/jenkins/workspace/gate-neutron-python34/.tox/py34/lib/python3.4/site-packages/routes/mapper.py,
 line 487, in connect
  2015-07-13 22:23:32.841 | route = Route(*args, **kargs)
  2015-07-13 22:23:32.841 |   File 
/home/jenkins/workspace/gate-neutron-python34/.tox/py34/lib/python3.4/site-packages/routes/route.py,
 line 85, in __init__
  2015-07-13 22:23:32.841 | self._setup_route()
  2015-07-13 22:23:32.842 |   File 
/home/jenkins/workspace/gate-neutron-python34/.tox/py34/lib/python3.4/site-packages/routes/route.py,
 line 101, in _setup_route
  2015-07-13 22:23:32.842 | for key, val in self.reqs.iteritems():
  2015-07-13 22:23:32.842 | AttributeError: 'dict' object has no attribute 
'iteritems'
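The failure comes from Routes calling dict.iteritems(), which Python 3 removed. The general fix (a sketch, not the Routes patch itself) is to use dict.items(), which exists on both interpreters; libraries targeting both at the time often used six.iteritems() for the same effect.

```python
reqs = {'id': r'\d+', 'format': r'\w+'}

# dict.iteritems() was removed in Python 3; dict.items() works on both
# Python 2 (returning a list) and Python 3 (returning a view), and both
# are iterable in a for loop.
pairs = sorted(reqs.items())
```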

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1474266/+subscriptions



[Yahoo-eng-team] [Bug 1474639] [NEW] _delete_port is still used by dvr

2015-07-14 Thread YAMAMOTO Takashi
Public bug reported:

A recent IPAM change (I81806a43ecc6f0a7b293ce3e70d09d1e266b9f02)
effectively removed _delete_port from NeutronDbPluginV2.
Unfortunately, it is still used directly by l3_dvr_db.

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkF0dHJpYnV0ZUVycm9yOiAnTWwyUGx1Z2luJyBvYmplY3QgaGFzIG5vIGF0dHJpYnV0ZSAnX2RlbGV0ZV9wb3J0J1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDM2OTMxOTA3NjkwfQ==

2015-07-14 21:34:17.248 ERROR neutron.api.v2.resource 
[req-7f4c4fbd-3354-484a-befa-401b5c461d5c 
tempest-FloatingIPsTestJSON-1081786723 tempest-FloatingIPsTestJSON-110914648] 
delete failed
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py, line 146, in wrapper
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 119, in 
__exit__
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py, line 136, in wrapper
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 498, in delete
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/l3_dvr_db.py, line 254, in delete_floatingip
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource admin_ctx, 
floatingip)
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/l3_dvr_db.py, line 247, in 
_clear_unused_fip_agent_gw_port
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource context, 
fip_hostid, floatingip_db['floating_network_id'])
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/db/l3_dvr_db.py, line 538, in 
_delete_floatingip_agent_gateway_port
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource 
self._core_plugin._delete_port(context, p['id'])
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource AttributeError: 
'Ml2Plugin' object has no attribute '_delete_port'
2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource
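A toy illustration of the mismatch (a stand-in class, not Neutron code): after the change, only the public port API exists on the plugin, so l3_dvr_db must call delete_port rather than the removed private helper.

```python
class CorePlugin(object):
    """Toy stand-in for Ml2Plugin: only the public port API exists,
    mirroring the situation after the private _delete_port helper
    was removed from NeutronDbPluginV2."""

    def __init__(self):
        self.ports = {'p1': {}, 'p2': {}}

    def delete_port(self, context, port_id):
        del self.ports[port_id]

plugin = CorePlugin()
# Calling the removed private helper would raise AttributeError;
# the public method is the supported entry point.
plugin.delete_port(None, 'p1')
```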

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1474639

Title:
  _delete_port is still used by dvr

Status in neutron:
  New

Bug description:
  A recent IPAM change (I81806a43ecc6f0a7b293ce3e70d09d1e266b9f02)
  effectively removed _delete_port from NeutronDbPluginV2.
  Unfortunately, it is still used directly by l3_dvr_db.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkF0dHJpYnV0ZUVycm9yOiAnTWwyUGx1Z2luJyBvYmplY3QgaGFzIG5vIGF0dHJpYnV0ZSAnX2RlbGV0ZV9wb3J0J1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDM2OTMxOTA3NjkwfQ==

  2015-07-14 21:34:17.248 ERROR neutron.api.v2.resource 
[req-7f4c4fbd-3354-484a-befa-401b5c461d5c 
tempest-FloatingIPsTestJSON-1081786723 tempest-FloatingIPsTestJSON-110914648] 
delete failed
  2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 83, in resource
  2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py, line 146, in wrapper
  2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py, line 119, in 
__exit__
  2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2015-07-14 21:34:17.248 22651 ERROR neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py, line 136, 
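The failure pattern — a caller holding a reference to a private helper that was removed in favor of the public API — can be reproduced with a minimal sketch. The class and function bodies below are illustrative stand-ins for the real neutron code, not the actual plugin implementation:

```python
class Ml2Plugin:
    """Stand-in plugin: the private _delete_port helper was removed,
    leaving only the public delete_port entry point."""
    def delete_port(self, context, port_id):
        return ('deleted', port_id)

def dvr_cleanup(plugin, context, port_id):
    # Broken caller, mirroring l3_dvr_db: still uses the removed private API.
    return plugin._delete_port(context, port_id)

def dvr_cleanup_fixed(plugin, context, port_id):
    # Likely shape of the fix: go through the public delete_port instead.
    return plugin.delete_port(context, port_id)

plugin = Ml2Plugin()
try:
    dvr_cleanup(plugin, None, 'p1')
except AttributeError as e:
    print(e)  # same AttributeError class as in the traceback above
print(dvr_cleanup_fixed(plugin, None, 'p1'))
```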

[Yahoo-eng-team] [Bug 1362528] Re: cirros starts with file system in read only mode

2015-07-14 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362528

Title:
  cirros starts with file system in read only mode

Status in neutron:
  Expired
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU3RhcnRpbmcgZHJvcGJlYXIgc3NoZDogbWtkaXI6IGNhbid0IGNyZWF0ZSBkaXJlY3RvcnkgJy9ldGMvZHJvcGJlYXInOiBSZWFkLW9ubHkgZmlsZSBzeXN0ZW1cIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTIxNzMzOTM5OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  The VM boots incorrectly, the SSH service does not start, and the
  connection fails.

  http://logs.openstack.org/16/110016/7/gate/gate-tempest-dsvm-neutron-
  pg-full/603e3c6/console.html.gz#_2014-08-26_08_59_39_951

  Only observed with neutron, 1 gate hit in 7 days.
  No hint about the issue in syslog or libvirt logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454968] Re: hard to understand the uri printed in the log

2015-07-14 Thread Dolph Mathews
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Changed in: keystone/kilo
 Assignee: (unassigned) => Dolph Mathews (dolph)

** Changed in: keystone/kilo
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1454968

Title:
  hard to understand the uri printed in the log

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  In Progress

Bug description:
  In keystone's log file, we can easily find some uri printed like this:
  
http://127.0.0.1:35357/v3/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens/auth/tokens

  It seems something goes wrong when we try to log the URI in the log
file.
  LOG.info('%(req_method)s %(uri)s', {
  'req_method': req.environ['REQUEST_METHOD'].upper(),
  'uri': wsgiref.util.request_uri(req.environ),
  })

  code is here:
  
https://github.com/openstack/keystone/blob/0debc2fbf448b44574da6f3fef7d457037c59072/keystone/common/wsgi.py#L232
  but it seems the URI is already wrong by the time the req object is passed in.
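For reference, request_uri simply reassembles the URI from the WSGI environ, so a PATH_INFO that accumulates path segments somewhere upstream produces exactly the repeated-path log line shown above. A minimal sketch with a hand-built environ (the mutation at the end is illustrative of the symptom, not a claim about where keystone mutates it):

```python
import wsgiref.util

environ = {
    'wsgi.url_scheme': 'http',
    'HTTP_HOST': '127.0.0.1:35357',
    'SCRIPT_NAME': '',
    'PATH_INFO': '/v3/auth/tokens',
    'QUERY_STRING': '',
}
print(wsgiref.util.request_uri(environ))
# -> http://127.0.0.1:35357/v3/auth/tokens

# If something keeps appending to PATH_INFO before the logging code runs,
# the logged URI grows the same way as in the bug report:
environ['PATH_INFO'] += '/auth/tokens'
print(wsgiref.util.request_uri(environ))
# -> http://127.0.0.1:35357/v3/auth/tokens/auth/tokens
```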

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1454968/+subscriptions



[Yahoo-eng-team] [Bug 1448286] Re: unicode query string raises UnicodeEncodeError

2015-07-14 Thread Dolph Mathews
In order for this to be safely backported to kilo, the fix for bug
1454968 needs to be included as well (the fix for this bug revealed the
problem fixed in bug 1454968).

** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Changed in: keystone/kilo
 Assignee: (unassigned) => Dolph Mathews (dolph)

** Changed in: keystone/kilo
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1448286

Title:
  unicode query string raises UnicodeEncodeError

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  In Progress

Bug description:
  The logging in keystone.common.wsgi is unable to handle unicode query
  strings. The simplest example would be:

$ curl http://localhost:35357/?Ϡ

  This will fail with a backtrace similar to:

2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi   File 
.../keystone/keystone/common/wsgi.py, line 234, in __call__
2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi 'params': 
urllib.urlencode(req.params)})
2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi   File 
/usr/lib/python2.7/urllib.py, line 1311, in urlencode
2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi k = 
quote_plus(str(k))
2015-04-24 19:57:45.860 22255 TRACE keystone.common.wsgi 
UnicodeEncodeError: 'ascii' codec can't encode character u'\u03e0' in position 
0: ordinal not in range(128)
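Python 2's urllib.urlencode calls str() on each key and value, which raises UnicodeEncodeError on non-ASCII text, as the traceback shows. The usual fix is to encode to UTF-8 bytes before percent-quoting. Python 3's urllib.parse no longer has this failure mode, but the safe pattern can be sketched as follows (this is the general shape of such a fix, not keystone's exact patch):

```python
from urllib.parse import quote_plus

def safe_urlencode(params):
    # Encode keys and values to UTF-8 bytes before percent-quoting,
    # instead of letting str() choke on non-ASCII text (the Python 2
    # failure mode in the traceback above).
    return '&'.join(
        '%s=%s' % (quote_plus(k.encode('utf-8')), quote_plus(v.encode('utf-8')))
        for k, v in params.items())

# The character from the bug report: U+03E0 (Ϡ), UTF-8 bytes 0xCF 0xA0.
print(safe_urlencode({'\u03e0': ''}))
# -> %CF%A0=
```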

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1448286/+subscriptions



[Yahoo-eng-team] [Bug 1474501] [NEW] Bad search filter: None in query

2015-07-14 Thread Eric Brown
Public bug reported:

Environment: Ubuntu 14.04 with stable/kilo openstack packages installed

I configured keystone to have one domain ('Default') configured with SQL
as the backend to service the service users.  I configured a secondary
domain ('ldap.vmware.com') to service all of the LDAP users.  I did this
using the multi-domain backend support.

I was successful in creating users for the services (nova, cinder,
glance, neutron, etc) and creating grants with admin role on service
tenant.  Then I needed to grant the admin role on an admin project in the
LDAP domain.  This is where things broke.

In order to assign the admin role to the LDAP user, I need to know the
user ID to pass to openstackclient.  To do this, I used:

openstack --os-identity-api-version 3 --os-url
"http://localhost:35357/v3" --os-token 52c6706iDcaDAf7u45se user show
--domain ldap.vmware.com vio-autou...@vmware.com

This command results in a 500 error from keystone.
http://paste.openstack.org/show/375004/

The root cause is that there is a 'None' in the search filter.
(None(userPrincipalName=vio-autou...@vmware.com))

Strangely, everything works perfectly if I stick with a single 'Default'
domain with LDAP backend.  It might be related to using the openstack
CLI since that is also new in this environment.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1474501

Title:
  Bad search filter: None in query

Status in Keystone:
  New

Bug description:
  Environment: Ubuntu 14.04 with stable/kilo openstack packages
  installed

  I configured keystone to have one domain ('Default') configured with
  SQL as the backend to service the service users.  I configured a
  secondary domain ('ldap.vmware.com') to service all of the LDAP users.
  I did this using the multi-domain backend support.

  I was successful in creating users for the services (nova, cinder,
  glance, neutron, etc) and creating grants with admin role on service
  tenant.  Then I needed to grant the admin role on an admin project in the
  LDAP domain.  This is where things broke.

  In order to assign the admin role to the LDAP user, I need to know the
  user ID to pass to openstackclient.  To do this, I used:

  openstack --os-identity-api-version 3 --os-url
  "http://localhost:35357/v3" --os-token 52c6706iDcaDAf7u45se user show
  --domain ldap.vmware.com vio-autou...@vmware.com

  This command results in a 500 error from keystone.
  http://paste.openstack.org/show/375004/

  The root cause is that there is a 'None' in the search filter.
  (None(userPrincipalName=vio-autou...@vmware.com))

  Strangely, everything works perfectly if I stick with a single
  'Default' domain with LDAP backend.  It might be related to using the
  openstack CLI since that is also new in this environment.
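The broken filter suggests a None being interpolated where an LDAP filter fragment should be. A minimal, hypothetical reconstruction of how naive string formatting produces it — combine_filters is an illustrative stand-in, not the actual keystone LDAP driver code:

```python
def combine_filters(base_filter, user_filter):
    # Naive interpolation: a None base filter is rendered literally,
    # yielding the invalid '(None(...))' filter seen in the bug.
    return '(%s%s)' % (base_filter, user_filter)

def combine_filters_fixed(base_filter, user_filter):
    # Guard against a missing base filter instead of interpolating None.
    if not base_filter:
        return user_filter
    return '(%s%s)' % (base_filter, user_filter)

frag = '(userPrincipalName=user@example.com)'
print(combine_filters(None, frag))        # (None(userPrincipalName=...))
print(combine_filters_fixed(None, frag))  # (userPrincipalName=...)
```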

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1474501/+subscriptions



[Yahoo-eng-team] [Bug 1474162] Re: ldap unicode issue when doing a show user

2015-07-14 Thread Dolph Mathews
*** This bug is a duplicate of bug 1448286 ***
https://bugs.launchpad.net/bugs/1448286

For reference, here's a direct link to the stable/kilo backport of both
issues: https://review.openstack.org/#/c/201708/

** This bug has been marked a duplicate of bug 1448286
   unicode query string raises UnicodeEncodeError

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1474162

Title:
  ldap unicode issue when doing a show user

Status in Keystone:
  Invalid

Bug description:
  In the stable/kilo release, when the username contains non-ASCII characters,
showing the user from LDAP with the following command -
  openstack user show --domain=ad "Test Accent Communiquè"
  will throw an exception. This has already been addressed in the master
branch, so what needs to be done is just to backport the changes to stable/kilo.

  I tested the changes in the master branch and they work fine.

  This is similar to https://bugs.launchpad.net/keystone/+bug/1419187





  (keystone.common.wsgi): 2015-07-10 21:25:26,351 INFO wsgi __call__ GET 
/domains?name=ad
  (keystone.common.wsgi): 2015-07-10 21:25:26,385 ERROR wsgi __call__ 'ascii' 
codec can't encode character u'\xe8' in position 21: ordinal not in range(128)
  Traceback (most recent call last):
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/keystone/common/wsgi.py,
 line 452, in __call__
  response = request.get_response(self.application)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/request.py,
 line 1317, in send
  application, catch_exc_info=False)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/request.py,
 line 1281, in call_application
  app_iter = application(self.environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/webob/dec.py, 
line 144, in __call__
  return resp(environ, start_response)
File 
/opt/stack/service/keystone/venv/lib/python2.7/site-packages/routes/middleware.py,
 line 136, in __call__
  response = self.app(environ, start_response)
File 

[Yahoo-eng-team] [Bug 1474490] [NEW] keystone.tests.unit.common.test_notifications.NotificationsTestCase fails in isolation

2015-07-14 Thread Dolph Mathews
Public bug reported:

While investigating bug 1474069, I discovered this test fails when run
in isolation as well.

$ tox -e py27 
keystone.tests.unit.common.test_notifications.NotificationsTestCase
running=
OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit} --list 
running=
OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit}  --load-list /tmp/tmpARu0Eo
==
FAIL: 
keystone.tests.unit.common.test_notifications.NotificationsTestCase.test_send_notification
tags: worker-0
--
Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File keystone/tests/unit/common/test_notifications.py, line 182, in setUp
fixture.config(rpc_backend='fake', notification_driver=['fake'])
  File 
/home/dolph/venv/os/local/lib/python2.7/site-packages/oslo_config/fixture.py, 
line 65, in config
self.conf.set_override(k, v, group)
  File 
/home/dolph/venv/os/local/lib/python2.7/site-packages/oslo_config/cfg.py, 
line 1824, in __inner
result = f(self, *args, **kwargs)
  File 
/home/dolph/venv/os/local/lib/python2.7/site-packages/oslo_config/cfg.py, 
line 2103, in set_override
opt_info = self._get_opt_info(name, group)
  File 
/home/dolph/venv/os/local/lib/python2.7/site-packages/oslo_config/cfg.py, 
line 2421, in _get_opt_info
raise NoSuchOptError(opt_name, group)
oslo_config.cfg.NoSuchOptError: no such option: notification_driver
Ran 1 tests in 0.005s (-0.431s)
FAILED (id=7051, failures=1 (+1))

** Affects: keystone
 Importance: Low
 Status: Triaged


** Tags: low-hanging-fruit test-improvement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1474490

Title:
  keystone.tests.unit.common.test_notifications.NotificationsTestCase
  fails in isolation

Status in Keystone:
  Triaged

Bug description:
  While investigating bug 1474069, I discovered this test fails when run
  in isolation as well.

  $ tox -e py27 
keystone.tests.unit.common.test_notifications.NotificationsTestCase
  running=
  OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
  ${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit} --list 
  running=
  OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
  ${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit}  --load-list /tmp/tmpARu0Eo
  ==
  FAIL: 
keystone.tests.unit.common.test_notifications.NotificationsTestCase.test_send_notification
  tags: worker-0
  --
  Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File keystone/tests/unit/common/test_notifications.py, line 182, in setUp
  fixture.config(rpc_backend='fake', notification_driver=['fake'])
File 
/home/dolph/venv/os/local/lib/python2.7/site-packages/oslo_config/fixture.py, 
line 65, in config
  self.conf.set_override(k, v, group)
File 
/home/dolph/venv/os/local/lib/python2.7/site-packages/oslo_config/cfg.py, 
line 1824, in __inner
  result = f(self, *args, **kwargs)
File 
/home/dolph/venv/os/local/lib/python2.7/site-packages/oslo_config/cfg.py, 
line 2103, in set_override
  opt_info = self._get_opt_info(name, group)
File 
/home/dolph/venv/os/local/lib/python2.7/site-packages/oslo_config/cfg.py, 
line 2421, in _get_opt_info
  raise NoSuchOptError(opt_name, group)
  oslo_config.cfg.NoSuchOptError: no such option: notification_driver
  Ran 1 tests in 0.005s (-0.431s)
  FAILED (id=7051, failures=1 (+1))
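The failure mode — set_override on an option that was never registered — happens because nothing imports oslo.messaging (which registers notification_driver as a side effect) when this test runs alone. A stdlib-only sketch of the pattern; ConfigRegistry is an illustrative stand-in for oslo.config's registry, not its real API:

```python
class ConfigRegistry:
    """Illustrative stand-in for oslo.config's option registry."""
    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default=None):
        self._opts.setdefault(name, default)

    def set_override(self, name, value):
        # Mirrors oslo_config raising NoSuchOptError for unknown options.
        if name not in self._opts:
            raise LookupError('no such option: %s' % name)
        self._opts[name] = value

conf = ConfigRegistry()
try:
    conf.set_override('notification_driver', ['fake'])
except LookupError as e:
    print(e)  # no such option: notification_driver

# Registering the option first (normally a side effect of importing
# oslo.messaging) makes the override succeed:
conf.register_opt('notification_driver', default=[])
conf.set_override('notification_driver', ['fake'])
print(conf._opts['notification_driver'])  # ['fake']
```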

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1474490/+subscriptions
