[Yahoo-eng-team] [Bug 1493216] [NEW] Stacks event tab wouldn't display data

2015-09-08 Thread Chung Chih, Hung
Public bug reported:

I deployed a new devstack environment and launched new stacks.
When I enter a stack's page and go to the Events tab, no events are
displayed; only the loading indicator shows. The attached file is a
screenshot.
The console shows one 500 response for the following URL:
http://ip/dashboard/project/stacks/stack/92f0c65a-6dd3-43c8-964f-6e45c94d3b3e/?tab=stack_details__events
The Django log is available at:
http://paste.openstack.org/show/449776/

** Affects: horizon
 Importance: Undecided
 Assignee: Chung Chih, Hung (lyanchih)
 Status: New

** Attachment added: "event.png"
   https://bugs.launchpad.net/bugs/1493216/+attachment/4459016/+files/event.png

** Changed in: horizon
 Assignee: (unassigned) => Chung Chih, Hung (lyanchih)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493216

Title:
  Stacks event tab wouldn't display data

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I deployed a new devstack environment and launched new stacks.
  When I enter a stack's page and go to the Events tab, no events are
  displayed; only the loading indicator shows. The attached file is a
  screenshot.
  The console shows one 500 response for the following URL:
  http://ip/dashboard/project/stacks/stack/92f0c65a-6dd3-43c8-964f-6e45c94d3b3e/?tab=stack_details__events
  The Django log is available at:
  http://paste.openstack.org/show/449776/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493216/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481370] Re: system logging module is still in use in many places

2015-09-08 Thread Ilya Shakhat
** Changed in: stackalytics
Milestone: None => 0.9

** Changed in: stackalytics
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1481370

Title:
  system logging module is still in use in many places

Status in murano:
  Fix Committed
Status in neutron:
  Fix Released
Status in Stackalytics:
  Fix Released
Status in tempest:
  In Progress
Status in Trove:
  Fix Released
Status in tuskar:
  In Progress

Bug description:
  The system logging module is still in use in many places; I suggest
  using the oslo.log library instead. From version 1.8 of oslo.log, the
  log-level constants (INFO, DEBUG, etc.) can be used directly from the
  log module instead of the system logging module.
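
  The suggested migration can be sketched as follows. This is an illustrative
  comparison only: the oslo_log lines in the comment are the pattern the
  reporter describes and assume oslo.log >= 1.8 is installed; the runnable
  part uses just the stdlib so the difference is visible.

```python
import logging

# With oslo.log >= 1.8 the level constants can (per this report) be taken
# directly from the log module, avoiding a second import of the system
# logging module just for INFO/DEBUG/etc., e.g.:
#
#     from oslo_log import log          # assumed available, >= 1.8
#     LOG = log.getLogger(__name__)
#     LOG.logger.setLevel(log.INFO)     # constant from oslo_log.log itself
#
# The stdlib pattern the bug wants to retire, shown for comparison:
LOG = logging.getLogger(__name__)
LOG.setLevel(logging.INFO)  # constant comes from the system logging module

print(logging.INFO)
```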

To manage notifications about this bug go to:
https://bugs.launchpad.net/murano/+bug/1481370/+subscriptions



[Yahoo-eng-team] [Bug 1493270] [NEW] The dependency of pbr in ryu does not match neutron

2015-09-08 Thread Hong Hui Xiao
Public bug reported:

I want to use neutron with the latest code. In [1], ryu was added as a
dependency for neutron. However, when I tried to install ryu, I got this
error.

[root@test]# pip install ryu
Downloading/unpacking ryu
  Downloading ryu-3.25.tar.gz (1.3MB): 1.3MB downloaded
  Running setup.py egg_info for package ryu
Traceback (most recent call last):
  File "", line 16, in 
  File "/tmp/pip_build_root/ryu/setup.py", line 30, in 
    pbr=True)
  File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
    _setup_distribution = dist = klass(attrs)
  File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, in __init__
    self.fetch_build_eggs(attrs.pop('setup_requires'))
  File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, in fetch_build_eggs
    parse_requirements(requires), installer=self.fetch_build_egg
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 630, in resolve
    raise VersionConflict(dist,req) # XXX put more info here
pkg_resources.VersionConflict: (pbr 1.6.0 (/usr/lib/python2.7/site-packages), Requirement.parse('pbr<1.0'))

As [2] shows, ryu requires pbr < 1.0, but in my environment a newer pbr
is installed, as required by neutron [3].


[1] https://review.openstack.org/#/c/153946/136/requirements.txt
[2] https://github.com/osrg/ryu/blob/master/setup.py#L29
[3] https://github.com/openstack/neutron/blob/master/requirements.txt#L4
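
The conflict in the traceback can be reproduced in isolation with
pkg_resources (a sketch; assumes setuptools is installed — the `1.6.0`
version is the one from the reporter's environment):

```python
from pkg_resources import Requirement

# ryu's setup_requires pins pbr below 1.0, while neutron pulls in a newer
# pbr; the installed version cannot satisfy ryu's specifier.
ryu_req = Requirement.parse('pbr<1.0')
installed = '1.6.0'  # pbr version already present on the system

# Membership tests a version string against the requirement's specifier.
print(installed in ryu_req)  # False -> pkg_resources raises VersionConflict
```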

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493270

Title:
  The dependency of pbr in ryu does not match neutron

Status in neutron:
  New

Bug description:
  I want to use neutron with the latest code. In [1], ryu was added as a
  dependency for neutron. However, when I tried to install ryu, I got
  this error.

  [root@test]# pip install ryu
  Downloading/unpacking ryu
    Downloading ryu-3.25.tar.gz (1.3MB): 1.3MB downloaded
    Running setup.py egg_info for package ryu
  Traceback (most recent call last):
    File "", line 16, in 
    File "/tmp/pip_build_root/ryu/setup.py", line 30, in 
      pbr=True)
    File "/usr/lib64/python2.7/distutils/core.py", line 112, in setup
      _setup_distribution = dist = klass(attrs)
    File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 265, in __init__
      self.fetch_build_eggs(attrs.pop('setup_requires'))
    File "/usr/lib/python2.7/site-packages/setuptools/dist.py", line 289, in fetch_build_eggs
      parse_requirements(requires), installer=self.fetch_build_egg
    File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 630, in resolve
      raise VersionConflict(dist,req) # XXX put more info here
  pkg_resources.VersionConflict: (pbr 1.6.0 (/usr/lib/python2.7/site-packages), Requirement.parse('pbr<1.0'))

  As [2] shows, ryu requires pbr < 1.0, but in my environment a newer pbr
  is installed, as required by neutron [3].


  [1] https://review.openstack.org/#/c/153946/136/requirements.txt
  [2] https://github.com/osrg/ryu/blob/master/setup.py#L29
  [3] https://github.com/openstack/neutron/blob/master/requirements.txt#L4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493270/+subscriptions



[Yahoo-eng-team] [Bug 1493271] [NEW] Glance allows to create circular dependencies between artifacts

2015-09-08 Thread dshakhray
Public bug reported:

Glance allows circular dependencies to be created between several
artifacts, which causes logic failures and makes it impossible to
interact with the affected artifacts.
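
The missing guard can be sketched as follows (illustrative only — not
Glance's actual code; `would_create_cycle` and the dict-based graph are
hypothetical names): reject a new dependency edge if the source artifact
is already reachable from the target.

```python
def would_create_cycle(deps, src, dst):
    """deps maps artifact id -> set of artifact ids it depends on.

    Adding the edge src -> dst closes a cycle iff src is reachable
    from dst through the existing dependencies.
    """
    stack, seen = [dst], set()
    while stack:
        node = stack.pop()
        if node == src:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(deps.get(node, ()))
    return False

deps = {'b': {'c'}, 'c': {'a'}}            # existing edges: b -> c -> a
print(would_create_cycle(deps, 'a', 'b'))  # True: a -> b would loop back to a
print(would_create_cycle(deps, 'b', 'a'))  # False: b -> a adds no cycle
```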

** Affects: glance
 Importance: Undecided
 Assignee: dshakhray (dshakhray)
 Status: New


** Tags: artifacts

** Changed in: glance
 Assignee: (unassigned) => dshakhray (dshakhray)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1493271

Title:
  Glance allows to create circular dependencies between artifacts

Status in Glance:
  New

Bug description:
  Glance allows circular dependencies to be created between several
  artifacts, which causes logic failures and makes it impossible to
  interact with the affected artifacts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1493271/+subscriptions



[Yahoo-eng-team] [Bug 1349888] Re: Attempting to attach the same volume multiple times can cause bdm record for existing attachment to be deleted.

2015-09-08 Thread Louis Bouchard
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1349888

Title:
  Attempting to attach the same volume multiple times can cause bdm
  record for existing attachment to be deleted.

Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  New
Status in nova source package in Trusty:
  New

Bug description:
  nova assumes there is only ever one bdm per volume. When an attach is
  initiated a new bdm is created, if the attach fails a bdm for the
  volume is deleted however it is not necessarily the one that was just
  created. The following steps show how a volume can get stuck detaching
  because of this.
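
  A toy sketch of that failure mode (names and dict records are
  illustrative, not nova's actual bdm schema): when cleanup after a failed
  attach is keyed by volume id instead of by the specific bdm the failed
  attach created, the record for the working attachment can be the one
  removed.

```python
# Two bdm records for the same volume: the original, working attachment
# and the record created by the second (failed) attach attempt.
bdms = [
    {'id': 1, 'volume_id': 'c1e38e93', 'note': 'existing attachment'},
    {'id': 2, 'volume_id': 'c1e38e93', 'note': 'failed second attach'},
]

def cleanup_by_volume(bdms, volume_id):
    # Buggy pattern: deletes the first bdm matching the volume id,
    # which here is the record backing the working attachment.
    for bdm in list(bdms):
        if bdm['volume_id'] == volume_id:
            bdms.remove(bdm)
            return bdm

removed = cleanup_by_volume(bdms, 'c1e38e93')
print(removed['id'])  # 1 -- the existing attachment, not the new record
```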

  
  $ nova list
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | ID                                   | Name   | Status | Task State | Power State | Networks         |
  +--------------------------------------+--------+--------+------------+-------------+------------------+
  | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 | test13 | ACTIVE | -          | Running     | private=10.0.0.2 |
  +--------------------------------------+--------+--------+------------+-------------+------------------+

  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  | ID                                   | Status    | Name   | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | available | test10 | 1    | lvm1        | false    |             |
  +--------------------------------------+-----------+--------+------+-------------+----------+-------------+

  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  +----------+--------------------------------------+
  | Property | Value                                |
  +----------+--------------------------------------+
  | device   | /dev/vdb                             |
  | id       | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  | serverId | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  | volumeId | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  +----------+--------------------------------------+

  $ cinder list
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  | ID                                   | Status | Name   | Size | Volume Type | Bootable | Attached to                          |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | in-use | test10 | 1    | lvm1        | false    | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  +--------------------------------------+--------+--------+------+-------------+----------+--------------------------------------+

  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  ERROR (BadRequest): Invalid volume: status must be 'available' (HTTP 400) (Request-ID: req-1fa34b54-25b5-4296-9134-b63321b0015d)

  $ nova volume-detach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4

  $ cinder list
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
  | ID                                   | Status    | Name   | Size | Volume Type | Bootable | Attached to                          |
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | detaching | test10 | 1    | lvm1        | false    | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  +--------------------------------------+-----------+--------+------+-------------+----------+--------------------------------------+


  
  2014-07-29 14:47:13.952 ERROR oslo.messaging.rpc.dispatcher [req-134dfd17-14da-4de0-93fc-5d8d7bbf65a5 admin admin] Exception during message handling:  can't be decoded
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
  2014-07-29 14:47:13.952 31588 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
  2014-07-29 14:47:13.952 31588 TRACE

[Yahoo-eng-team] [Bug 1493252] [NEW] Wording for Edit Instance/Security Groups is incorrect

2015-09-08 Thread Masco Kaliyamoorthy
Public bug reported:

The wording on the Security Groups tab of the Edit Instance window
currently says: "Add and remove security groups to this project ..."

This is incorrect; it should say "instance" instead of "project".

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493252

Title:
  Wording for Edit Instance/Security Groups is incorrect

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The wording on the Security Groups tab of the Edit Instance window
  currently says: "Add and remove security groups to this project ..."

  This is incorrect; it should say "instance" instead of "project".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493252/+subscriptions



[Yahoo-eng-team] [Bug 1480196] Re: Request-id is not getting returned if glance throws 500 error

2015-09-08 Thread Flavio Percoco
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1480196

Title:
  Request-id is not getting returned if glance throws 500 error

Status in Glance:
  Fix Released
Status in Glance kilo series:
  New

Bug description:
  If glance throws an Internal Server Error (500) for some reason, the
  'request-id' is not returned in the response headers.

  The request-id is required to analyse logs effectively on failure, and
  it should be returned in the headers.

  For ex. -

  image-create api returns 500 error if property name exceeds 255 characters
  (fix for this issue is in progress : https://review.openstack.org/#/c/203948/)

  curl command:

  $ curl -g -i -X POST -H 'Accept-Encoding: gzip, deflate' -H 'x-image-
  meta-container_format: ami' -H 'x-image-meta-property-
  
:
  jskg' -H 'Accept: */*' -H 'X-Auth-Token:
  b94bd7b3a0fb4fada73fe170fe7d49cb' -H 'Connection: keep-alive' -H 'x
  -image-meta-is_public: None' -H 'User-Agent: python-glanceclient' -H
  'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format:
  ami' http://10.69.4.173:9292/v1/images

  HTTP/1.1 500 Internal Server Error
  Content-Type: text/plain
  Content-Length: 0
  Date: Fri, 31 Jul 2015 08:27:31 GMT
  Connection: close

  Here request-id is not part of response header.
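
  The expected behaviour can be sketched as a tiny WSGI wrapper (a hedged
  illustration, not glance's actual middleware — the names are made up):
  the request-id header is attached in `start_response`, so it is present
  even when the downstream handler answers with a 500.

```python
import uuid

def add_request_id(app):
    # Wrap a WSGI app so every response, including error responses,
    # carries an X-Openstack-Request-Id header.
    def middleware(environ, start_response):
        req_id = 'req-' + str(uuid.uuid4())

        def start_response_with_id(status, headers, exc_info=None):
            headers = list(headers) + [('X-Openstack-Request-Id', req_id)]
            return start_response(status, headers, exc_info)

        return app(environ, start_response_with_id)
    return middleware

def failing_app(environ, start_response):
    # Stand-in for a handler that errors out with a bare 500.
    start_response('500 Internal Server Error', [('Content-Length', '0')])
    return [b'']

captured = {}
def fake_start_response(status, headers, exc_info=None):
    captured['status'], captured['headers'] = status, headers

list(add_request_id(failing_app)({}, fake_start_response))
print(any(name == 'X-Openstack-Request-Id'
          for name, _ in captured['headers']))  # True
```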

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1480196/+subscriptions



[Yahoo-eng-team] [Bug 1478690] Re: Request ID has a double req- at the start

2015-09-08 Thread Flavio Percoco
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1478690

Title:
  Request ID has a double req- at the start

Status in Glance:
  Fix Released
Status in Glance kilo series:
  New
Status in OpenStack Search (Searchlight):
  Fix Committed

Bug description:
  ➜  vagrant git:(master) http http://192.168.121.242:9393/v1/search 
X-Auth-Token:$token query:='{"match_all" : {}}'
  HTTP/1.1 200 OK
  Content-Length: 138
  Content-Type: application/json; charset=UTF-8
  Date: Mon, 27 Jul 2015 20:21:31 GMT
  X-Openstack-Request-Id: req-req-0314bf5b-9c04-4bed-bf86-d2e76d297a34
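
  The intent of the fix can be sketched in a few lines (the helper name is
  hypothetical, not the code actually changed): only prepend the `req-`
  prefix when the upstream id does not already carry it.

```python
def with_req_prefix(request_id):
    # Avoid emitting "req-req-..." when the id is already prefixed.
    return request_id if request_id.startswith('req-') else 'req-' + request_id

print(with_req_prefix('0314bf5b'))      # req-0314bf5b
print(with_req_prefix('req-0314bf5b'))  # req-0314bf5b (no double prefix)
```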

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1478690/+subscriptions



[Yahoo-eng-team] [Bug 1491758] Re: Unable to delete instance

2015-09-08 Thread Ioana-Madalina Patrichi
** Changed in: nova
   Status: Incomplete => Opinion

** Changed in: nova
   Status: Opinion => Invalid

** Description changed:

  Openstack version: Kilo
  
  I am trying to delete an instance that was initially stuck in a Paused
  and then an Error state. I gave up trying to bring it up again, however
  now I am unable to delete it from Openstack.
  
  I have taken the following steps:
  1. I have initially tried to delete the instance directly from Openstack Dashboard while the instance was in an error state; the operation was reported as being successful, however, the instance hasn't been removed.
- 2. I have tried resetting the state of the instance to Active: 
-  $ nova reset-state --active 27d8f8d0-efd5-42bd-9c56-4ddd159833d1
+ 2. I have tried resetting the state of the instance to Active:
+  $ nova reset-state --active 27d8f8d0-efd5-42bd-9c56-4ddd159833d1
  3. Deleted the instance using the nova-api:
-  $ nova delete 27d8f8d0-efd5-42bd-9c56-4ddd159833d1
-  Request to delete server 27d8f8d0-efd5-42bd-9c56-4ddd159833d1 has been accepted.
- 
- In addition, the Fault section of the instance on Openstack Dashboard
- displays the following:
- 
- Message Cannot call obj_load_attr on orphaned Instance object
- Code   500
- 
- None of these steps have been successful. I know that I can delete the
- instance from the database, I would like to address this issue.
- 
- Logs from nova-api:
- 2015-09-03 11:06:31.364 2574 DEBUG nova.api.openstack.wsgi [req-a663fee3-b720-4d9e-bc7d-646d4d80922b ba5daa262bb947079f9d2fc54f5e9234 d819429055d4416bbfc3d693b1571388 - - -] Action: 'action', calling method: >, body: {"os-getConsoleOutput": {"length": null}} _process_stack /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:780
- 2015-09-03 11:06:31.365 2574 DEBUG nova.compute.api [req-a663fee3-b720-4d9e-bc7d-646d4d80922b ba5daa262bb947079f9d2fc54f5e9234 d819429055d4416bbfc3d693b1571388 - - -] [instance: a23088f2-444f-4de4-89e0-593e5502be41] Fetching instance by UUID get /usr/lib/python2.7/dist-packages/nova/compute/api.py:1911
- 2015-09-03 11:06:31.522 2574 INFO nova.osapi_compute.wsgi.server [req-a663fee3-b720-4d9e-bc7d-646d4d80922b ba5daa262bb947079f9d2fc54f5e9234 d819429055d4416bbfc3d693b1571388 - - -] 10.83.100.0 "POST /v2/d819429055d4416bbfc3d693b1571388/servers/a23088f2-444f-4de4-89e0-593e5502be41/action HTTP/1.1" status: 200 len: 8215 time: 0.2391620
- 2015-09-03 11:12:54.877 2586 DEBUG keystoneclient.session [-] REQ: curl -g -i -X GET http://10.83.100.0:35357/v3/auth/tokens -H "X-Subject-Token: {SHA1}605ad7f8b06f1a6319321f83741e7dfa6a7b7418" -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}4887951ff0677461e8630e305ace2b3194f3477c" _http_log_request /usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:195
- 2015-09-03 11:12:54.960 2586 DEBUG keystoneclient.session [-] RESP: [200] content-length: 7110 x-subject-token: {SHA1}605ad7f8b06f1a6319321f83741e7dfa6a7b7418 vary: X-Auth-Token connection: keep-alive date: Thu, 03 Sep 2015 10:12:54 GMT content-type: application/json x-distribution: Ubuntu
- RESP BODY: {"token": {"methods": ["password", "token"], "roles": [{"id": "9fe2ff9ee4384b1894a90878d3e92bab", "name": "_member_"}, {"id": "8d4178ea39e04db68e9d30c1105a2bd8", "name": "admin"}], "expires_at": "2015-09-06T10:12:54.00Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "6560b54895bf4966bf332659f1c32b32", "name": "admin"}, "catalog": "", "extras": {}, "user": {"domain": {"id": "default", "name": "Default"}, "id": "a5296228ddd6417eb8b63201fc258a6f", "name": "admin"}, "audit_ids": ["E1HDBQ7sQgudJje8VvW7rg"], "issued_at": "2015-09-03T10:12:54.864888"}}
-  _http_log_response /usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:224
- 2015-09-03 11:12:54.968 2586 DEBUG nova.api.openstack.wsgi [req-622c5c35-d42c-4aa4-bf69-7cdae5f74ac5 a5296228ddd6417eb8b63201fc258a6f 6560b54895bf4966bf332659f1c32b32 - - -] Calling method '>' _process_stack /usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py:783
- 2015-09-03 11:12:54.969 2586 DEBUG nova.compute.api [req-622c5c35-d42c-4aa4-bf69-7cdae5f74ac5 a5296228ddd6417eb8b63201fc258a6f 6560b54895bf4966bf332659f1c32b32 - - -] [instance: 27d8f8d0-efd5-42bd-9c56-4ddd159833d1] Fetching instance by UUID get /usr/lib/python2.7/dist-packages/nova/compute/api.py:1911
- 2015-09-03 11:12:55.023 2586 DEBUG nova.objects.instance [req-622c5c35-d42c-4aa4-bf69-7cdae5f74ac5 a5296228ddd6417eb8b63201fc258a6f 6560b54895bf4966bf332659f1c32b32 - - -] Lazy-loading `flavor' on Instance uuid 27d8f8d0-efd5-42bd-9c56-4ddd159833d1 obj_load_attr /usr/lib/python2.7/dist-packages/nova/objects/instance.py:995
- 2015-09-03 11:12:55.076 2586 DEBUG nova.objects.instance [req-622c5c35-d42c-4aa4-bf69-7cdae5f74ac5 a5296228ddd6417eb8b63201fc258a6f 6560b54895bf4966bf332659f1c32b32 - - -] Lazy-loading `fault' on Instance uuid
[Yahoo-eng-team] [Bug 1493341] [NEW] l2 pop failed if live-migrate a VM with multiple neutron-server workers

2015-09-08 Thread shihanzhang
Public bug reported:

Now if we run neutron-server with two or more workers, or two neutron-server nodes behind a load balancer, live-migrating a VM can cause l2 pop to fail (not always). The reason is:
1. When nova finishes live-migrating a VM, it updates the port's host id to the destination host.
2. One neutron-server worker receives this request and does l2 pop. It sees that the port's host id has changed while its status is still ACTIVE, so it records the port in its own process memory.
3. When the l2 agent scans this port and updates its status from ACTIVE -> BUILD -> ACTIVE, a different neutron-server worker may receive that RPC request, and l2 pop then fails for this port.


def update_port_postcommit(self, context):
    ...
    if port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE:
        if context.status == const.PORT_STATUS_ACTIVE:
            self._update_port_up(context)
        if context.status == const.PORT_STATUS_DOWN:
            agent_host = context.host
            fdb_entries = self._get_agent_fdb(
                context, port, agent_host)
            self.L2populationAgentNotify.remove_fdb_entries(
                self.rpc_ctx, fdb_entries)
    elif (context.host != context.original_host
          and context.status == const.PORT_STATUS_ACTIVE
          and not self.migrated_ports.get(orig['id'])):
        # The port has been migrated. We have to store the original
        # binding to send appropriate fdb once the port will be set
        # on the destination host
        self.migrated_ports[orig['id']] = (
            (orig, context.original_host))
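
The per-worker state behind this race can be illustrated with a minimal,
self-contained sketch (the Worker class is a stand-in, not neutron code):
`migrated_ports` is a plain in-process dict, so a binding recorded by one
neutron-server worker is invisible to the worker that handles the next RPC.

```python
class Worker:
    # Each neutron-server worker is a separate process; this attribute
    # models state that lives only in that process's memory.
    def __init__(self):
        self.migrated_ports = {}

worker_a, worker_b = Worker(), Worker()

# Worker A handles nova's port-update and records the original binding.
worker_a.migrated_ports['port-1'] = ('original-binding', 'src-host')

# Worker B handles the later status-update RPC and finds nothing recorded,
# so the fdb entries for the migrated port are never sent.
print('port-1' in worker_b.migrated_ports)  # False
```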

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493341

Title:
  l2 pop failed if live-migrate a VM with multiple neutron-server
  workers

Status in neutron:
  New

Bug description:
  Now if we run neutron-server with two or more workers, or two neutron-server nodes behind a load balancer, live-migrating a VM can cause l2 pop to fail (not always). The reason is:
  1. When nova finishes live-migrating a VM, it updates the port's host id to the destination host.
  2. One neutron-server worker receives this request and does l2 pop. It sees that the port's host id has changed while its status is still ACTIVE, so it records the port in its own process memory.
  3. When the l2 agent scans this port and updates its status from ACTIVE -> BUILD -> ACTIVE, a different neutron-server worker may receive that RPC request, and l2 pop then fails for this port.


  def update_port_postcommit(self, context):
      ...
      if port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE:
          if context.status == const.PORT_STATUS_ACTIVE:
              self._update_port_up(context)
          if context.status == const.PORT_STATUS_DOWN:
              agent_host = context.host
              fdb_entries = self._get_agent_fdb(
                  context, port, agent_host)
              self.L2populationAgentNotify.remove_fdb_entries(
                  self.rpc_ctx, fdb_entries)
      elif (context.host != context.original_host
            and context.status == const.PORT_STATUS_ACTIVE
            and not self.migrated_ports.get(orig['id'])):
          # The port has been migrated. We have to store the original
          # binding to send appropriate fdb once the port will be set
          # on the destination host
          self.migrated_ports[orig['id']] = (
              (orig, context.original_host))

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493341/+subscriptions



[Yahoo-eng-team] [Bug 1493354] [NEW] Some angular unit tests use a wrong way of checking if an element exists

2015-09-08 Thread Timur Sufiev
Public bug reported:

I've noticed a few occurrences of the following pattern in the Horizon
unit tests for Angular directives:
`expect(element.find(someSelector)).toBeDefined()`. That assertion is
always going to pass, since element.find(someSelector) returns an
array-like object which _is_ defined. The correct pattern for such
selectors is `expect(element.find(someSelector).length).toBe(1)`.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: unittest

** Tags added: unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493354

Title:
  Some angular unit tests use a wrong way of checking if an element
  exists

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I've noticed a few occurrences of the following pattern in the Horizon
  unit tests for Angular directives:
  `expect(element.find(someSelector)).toBeDefined()`. That assertion is
  always going to pass, since element.find(someSelector) returns an
  array-like object which _is_ defined. The correct pattern for such
  selectors is `expect(element.find(someSelector).length).toBe(1)`.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493354/+subscriptions



[Yahoo-eng-team] [Bug 1493492] [NEW] VPNaaS: ipsec.secrets file permissions prevents LibreSwan from starting

2015-09-08 Thread Brent Eagles
Public bug reported:

The man pages for ipsec.secrets generally state that the file should be
owned by root or super-user and access blocked to everyone else (chmod
0600).  Recent changes have dealt with the file permissions issue.
However, in neutron vpnaas the file ownership is that of the process and
due to strict permission checks through "capabilities", this actually
results in a failure to establish connections with LibreSwan since pluto
runs as root. This seems to be LibreSwan specific.
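
What LibreSwan's pluto effectively requires of ipsec.secrets (root-owned,
mode 0600) can be sketched as follows. This is a hedged illustration, not
neutron-vpnaas code — `write_secrets` and the PSK contents are invented for
the example; the point is creating the file restrictively from the start.

```python
import os
import stat
import tempfile

def write_secrets(path, contents):
    # Create with mode 0600 at open time so the file is never world-readable.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, 'w') as f:
        f.write(contents)
    os.chmod(path, 0o600)  # defensive: umask may have cleared bits at creation
    # Chowning to root requires root privileges, as the agent would have:
    if os.geteuid() == 0:
        os.chown(path, 0, 0)

path = os.path.join(tempfile.mkdtemp(), 'ipsec.secrets')
write_secrets(path, ': PSK "example-psk"\n')
print(oct(stat.S_IMODE(os.stat(path).st_mode)))
```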

** Affects: neutron
 Importance: Undecided
 Assignee: Brent Eagles (beagles)
 Status: New

** Summary changed:

- VPNaaS: ipsec.secrets file should be owned by root/super-user
+ VPNaaS: ipsec.secrets file permissions prevents LibreSwan from starting

** Changed in: neutron
 Assignee: (unassigned) => Brent Eagles (beagles)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493492

Title:
  VPNaaS: ipsec.secrets file permissions prevents LibreSwan from
  starting

Status in neutron:
  New

Bug description:
  The man pages for ipsec.secrets generally state that the file should
  be owned by root or super-user and access blocked to everyone else
  (chmod 0600).  Recent changes have dealt with the file permissions
  issue. However, in neutron vpnaas the file ownership is that of the
  process and due to strict permission checks through "capabilities",
  this actually results in a failure to establish connections with
  LibreSwan since pluto runs as root. This seems to be LibreSwan
  specific.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493492/+subscriptions



[Yahoo-eng-team] [Bug 1293540] Re: nova should make sure the bridge exists before resuming a VM after an offline snapshot

2015-09-08 Thread Matt Riedemann
*** This bug is a duplicate of bug 1328546 ***
https://bugs.launchpad.net/bugs/1328546

This should be resolved now on the neutron side given Sean Collins
removed the code in the neutron linuxbridge agent that removed empty
bridges:

https://review.openstack.org/#/q/I4ccc96566a5770384eacbbdc492bf09a514f5b31,n,z

That's been backported to stable/juno so we should be good - I don't
think we need the nova side changes now.

** This bug has been marked a duplicate of bug 1328546
   Race condition when hard rebooting instance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293540

Title:
  nova should make sure the bridge exists before resuming a VM after an
  offline snapshot

Status in neutron:
  Confirmed
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  My setup is based on icehouse-2, KVM, Neutron setup with ML2 and the linux bridge agent, CentOS 6.5 and LVM as the ephemeral backend.
  The OS should not matter in this, LVM should not matter either; just make sure the snapshot takes the VM offline.

  How to reproduce:
  1. create one VM on a compute node (make sure only one VM is present).
  2. snapshot the VM (offline).
  3. linux bridge removes the tap interface from the bridge and decides to remove the bridge also since there are no other interfaces present.
  4. nova tries to resume the VM and fails since no bridge is present (libvirt error, can't get the bridge MTU).

  Side question:
  Why do both neutron and nova deal with the bridge?
  I can understand the need to remove empty bridges but I believe nova should be the one to do it if nova is dealing mainly with the bridge itself.

  More information:

  During the snapshot Neutron (linux bridge) is called:
  (neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent)
  treat_devices_removed is called and removes the tap interface and calls self.br_mgr.remove_empty_bridges

  On resume:
  nova/virt/libvirt/driver.py in the snapshot method fails at:
  if CONF.libvirt.virt_type != 'lxc' and not live_snapshot:
      if state == power_state.RUNNING:
          new_dom = self._create_domain(domain=virt_dom)

  Having more than one VM on the same bridge works fine since neutron
  (the linux bridge agent) only removes an empty bridge.
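
  The guard this report asks nova to add can be sketched in a few lines
  (the function name is hypothetical, not nova's actual code): on Linux a
  bridge device exposes /sys/class/net/<name>/bridge, so its existence can
  be checked before resuming the domain.

```python
import os

def bridge_exists(name):
    # A Linux bridge exposes a 'bridge' subdirectory in sysfs; plain
    # interfaces and absent devices do not.
    return os.path.isdir('/sys/class/net/%s/bridge' % name)

print(bridge_exists('no-such-bridge-xyz'))  # False
```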

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1293540/+subscriptions



[Yahoo-eng-team] [Bug 1441733] Re: pip install or python setup.py install should include httpd/keystone.py

2015-09-08 Thread Richard Megginson
On Red Hat family platforms, the path to the wsgi script is hard-coded.
Not only that, but there are now two different and apparently
non-interchangeable scripts, one for admin and one for public (still
carrying that distinction even though it no longer makes a difference!),
which means additional parameters/variables for the keystone::params and
keystone::wsgi::apache classes.

** Also affects: puppet-keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1441733

Title:
  pip install or python setup.py install should include
  httpd/keystone.py

Status in Keystone:
  Incomplete
Status in puppet-keystone:
  New

Bug description:
  The recommended way to install keystone is now via apache, but
  httpd/keystone.py is not included when we do python setup.py install
  in keystone. It should be included.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1441733/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493512] [NEW] n-cpu misreports shared storage when attempting to block migrate

2015-09-08 Thread Sean Severson
Public bug reported:

If using an environment where there are both nodes that share the nova
instances directory and nodes that do NOT share the directory, any
attempt to block-migrate to non-shared nodes from the controller node
will fail.

Example:
1. Create an environment with one controller node and two cpu nodes.
2. Have the controller node act as an NFS server by adding the instances 
directory to /etc/exports
3. Add this new share to the fstab of just one of the CPU nodes.
4. Mount the new share after stacking the appropriate CPU node.
5. Stack the unshared CPU node.

The next step applied to my scenario, but may not be necessary.
6. Create a bootable volume in Cinder.
7. Launch an instance from that volume, using a flavor that does NOT create a 
local disk. (This landed on the controller node in my scenario)
8. Once the instance is running, attempt to block-migrate to the unshared node. 
(nova live-migration --block-migrate  )

In the past it was possible to block-migrate to and from the unshared
node, then migrate without block between the shared nodes. Now in
Liberty (master) the following error appears:


2015-09-08 12:09:17.052 ERROR oslo_messaging.rpc.dispatcher 
[req-2f320d08-ce60-4f5c-bfa3-c044246a3a18 admin admin] Exception during message 
handling: cld5b12 is not on local storage: Block migration can not be used with 
shared storage.
Traceback (most recent call last):

  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
executor_callback))

  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
executor_callback)

  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
result = func(ctxt, **new_args)

  File "/opt/stack/nova/nova/exception.py", line 89, in wrapped
payload)

  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
195, in __exit__
six.reraise(self.type_, self.value, self.tb)

  File "/opt/stack/nova/nova/exception.py", line 72, in wrapped
return f(self, context, *args, **kw)

  File "/opt/stack/nova/nova/compute/manager.py", line 399, in 
decorated_function
return function(self, context, *args, **kwargs)

  File "/opt/stack/nova/nova/compute/manager.py", line 377, in 
decorated_function
kwargs['instance'], e, sys.exc_info())

  File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 
195, in __exit__
six.reraise(self.type_, self.value, self.tb)

  File "/opt/stack/nova/nova/compute/manager.py", line 365, in 
decorated_function
return function(self, context, *args, **kwargs)

  File "/opt/stack/nova/nova/compute/manager.py", line 4929, in 
check_can_live_migrate_source
block_device_info)

  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5169, in 
check_can_live_migrate_source
raise exception.InvalidLocalStorage(reason=reason, path=source)

InvalidLocalStorage: cld5b12 is not on local storage: Block migration
can not be used with shared storage.

2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
129, in _do_dispatch
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 89, in wrapped
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher payload)
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 72, in wrapped
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher return f(self, 
context, *args, **kw)
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/compute/manager.py", line 399, in decorated_function
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-09-08 12:09:17.052 TRACE oslo_messaging.rpc.dispatcher   File 
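
The InvalidLocalStorage error above comes from a shared-storage probe: roughly, the source host drops a uniquely named marker file in the instances directory and the destination checks whether it can see it. A minimal sketch of that idea (illustrative only, not the libvirt driver's actual code):

```python
import os
import uuid


def create_shared_storage_test_file(instances_path):
    """Source host: drop a uniquely named marker file in the instances dir."""
    name = str(uuid.uuid4())
    open(os.path.join(instances_path, name), "w").close()
    return name


def check_shared_storage_test_file(instances_path, name):
    """Destination host: storage is shared iff the marker is visible here."""
    return os.path.exists(os.path.join(instances_path, name))
```

If both hosts mount the same NFS export, the destination sees the marker, the storage is classified as shared, and --block-migrate is refused, which matches the behaviour described in this report.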

[Yahoo-eng-team] [Bug 1493524] [NEW] IPv6 support for DVR routers

2015-09-08 Thread Swaminathan Vasudevan
Public bug reported:

This bug would capture all the IPv6 related work on DVR routers going
forward.

** Affects: neutron
 Importance: Undecided
 Assignee: Swaminathan Vasudevan (swaminathan-vasudevan)
 Status: In Progress


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493524

Title:
  IPv6 support for DVR routers

Status in neutron:
  In Progress

Bug description:
  This bug would capture all the IPv6 related work on DVR routers going
  forward.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493524/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419734] Re: Nova snapshot fails when the instance is running(ceph backend)

2015-09-08 Thread Matt Riedemann
*** This bug is a duplicate of bug 1328546 ***
https://bugs.launchpad.net/bugs/1328546

** This bug is no longer a duplicate of bug 1293540
   nova should make sure the bridge exists before resuming a VM after an 
offline snapshot
** This bug has been marked a duplicate of bug 1328546
   Race condition when hard rebooting instance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1419734

Title:
  Nova snapshot fails when the instance is running(ceph backend)

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  I have set up openstack with a ceph storage backend (nova, glance and
  cinder all use it). Linux Bridge ml2 is my networking plugin. When I
  try to take a snapshot while the instance is running, the snapshot
  fails. It seems that nova tries to freeze the VM and take a cold
  snapshot, but when it resumes, the resume fails and the snapshot that
  was taken is deleted immediately. Please check the attached nova.log,
  taken during the snapshot process:

  I have checked the glance logs, but found no errors. If you need other
  logs, I will attach them. I have already checked
  https://bugs.launchpad.net/nova/+bug/1334398 and
  https://bugs.launchpad.net/mos/+bug/1381072; they are all related to
  live snapshots, but in this case even a cold snapshot does not
  work. When I stop the VM and take the snapshot, it works.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1419734/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493576] [NEW] Incorrect usage of python-novaclient

2015-09-08 Thread Andrey Kurilin
Public bug reported:

All projects should use only `novaclient.client` as the entry point. It is 
designed with version checks and backward compatibility in mind.
Direct import of a versioned client object (i.e. novaclient.v2.client) is a way 
to "shoot yourself in the foot".

Horizon:
https://github.com/openstack/horizon/blob/69d6d50ef4a26e2629643ed35ebd661e82e10586/openstack_dashboard/api/nova.py#L31
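
A minimal sketch of the "single entry point" pattern that `novaclient.client` provides (names below are illustrative, not novaclient's real internals):

```python
class ClientV2(object):
    """Stand-in for a versioned client such as novaclient.v2.client."""
    version = '2'


_SUPPORTED_VERSIONS = {'2': ClientV2}


def Client(version, *args, **kwargs):
    # The stable factory validates the requested version and picks the
    # right class, so callers never hard-code a versioned module path.
    try:
        cls = _SUPPORTED_VERSIONS[str(version)]
    except KeyError:
        raise ValueError('Unsupported API version: %s' % version)
    return cls(*args, **kwargs)

# Callers then write the equivalent of:
#   from novaclient import client
#   nova = client.Client('2', ...)
# instead of importing novaclient.v2.client directly, so internal
# reorganizations of the versioned modules do not break them.
```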

** Affects: horizon
 Importance: Undecided
 Status: New

** Also affects: horizon
   Importance: Undecided
   Status: New

** Description changed:

  All projects should use only `novaclient.client` as entry point. It designed 
with some version checks and backward compatibility.
  Direct import of versioned client object(i.e. novaclient.v2.client) is a way 
to "shoot yourself in the foot".
+ 
+ Horizon:
+ 
https://github.com/openstack/horizon/blob/69d6d50ef4a26e2629643ed35ebd661e82e10586/openstack_dashboard/api/nova.py#L31

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493576

Title:
  Incorrect usage of python-novaclient

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  All projects should use only `novaclient.client` as the entry point. It is 
designed with version checks and backward compatibility in mind.
  Direct import of a versioned client object (i.e. novaclient.v2.client) is a 
way to "shoot yourself in the foot".

  Horizon:
  
https://github.com/openstack/horizon/blob/69d6d50ef4a26e2629643ed35ebd661e82e10586/openstack_dashboard/api/nova.py#L31

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493578] [NEW] Selenium tests broken by django-openstack-auth user object

2015-09-08 Thread Richard Jones
Public bug reported:

This is related to https://bugs.launchpad.net/django-openstack-
auth/+bug/1491117 but is a different symptom: the selenium tests for
LazyLoadedTabsTests break at the same point.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493578

Title:
  Selenium tests broken by django-openstack-auth user object

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This is related to https://bugs.launchpad.net/django-openstack-
  auth/+bug/1491117 but is a different symptom: the selenium tests for
  LazyLoadedTabsTests break at the same point.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493576] Re: Incorrect usage of python-novaclient

2015-09-08 Thread Andrey Kurilin
** Changed in: python-novaclient
   Status: New => Invalid

** No longer affects: python-novaclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493576

Title:
  Incorrect usage of python-novaclient

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  All projects should use only `novaclient.client` as the entry point. It is 
designed with version checks and backward compatibility in mind.
  Direct import of a versioned client object (i.e. novaclient.v2.client) is a 
way to "shoot yourself in the foot".

  Horizon:
  
https://github.com/openstack/horizon/blob/69d6d50ef4a26e2629643ed35ebd661e82e10586/openstack_dashboard/api/nova.py#L31

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493581] [NEW] navigation checkmarks in safari display poorly

2015-09-08 Thread Eric Peterson
Public bug reported:

The check marks for selected project etc. do not display well in Safari
on Mac. They essentially get placed on top of the project name, and it
looks bad.

** Affects: horizon
 Importance: Undecided
 Assignee: Eric Peterson (ericpeterson-l)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Eric Peterson (ericpeterson-l)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493581

Title:
  navigation checkmarks in safari display poorly

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The check marks for selected project etc. do not display well in
  Safari on Mac. They essentially get placed on top of the project name,
  and it looks bad.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479385] Re: Cause conflicts within glance public metadefs

2015-09-08 Thread Tristan Cacqueray
Until this can be safely backported, the OSSA task is switched to Won't
fix.

** Changed in: ossa
   Status: Triaged => Won't Fix

** Information type changed from Public Security to Public

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1479385

Title:
  Cause conflicts within glance public metadefs

Status in Glance:
  Triaged
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Overview:

  Through creation of a new public namespace by any user of the system,
  you can create a clash of namespaces that breaks all access to that
  namespace. This can therefore be used to mount a denial-of-service
  attack, unless the service is disabled completely.

  How to produce:

  As a regular user run the command:
  curl -v -X POST http://16.49.138.140:9292/v2/metadefs/namespaces -H 
"Content-Type: application/json" -H "X-Auth-Token: 
1a499605071a46a8b9b2a938fac5fac7" -d '{"namespace": "OS::Computer::WebServers", 
"visibility": "public"}'

  This will create a new namespace with the same name as the existing 
namespace. This has now rendered the original namespace inaccessible. If a GET 
request is done to the namespaces name by any other user via (or viewing in 
horizon):
  curl -v -X GET 
http://16.49.138.140:9292/v2/metadefs/namespaces/OS::Computer::WebServers -H 
"Content-Type: application/json" -H "X-Auth-Token: 
1a499605071a46a8b9b2a938fac5fac7"

  It will cause the following output in the api console:
  2015-07-28 23:41:42.175 ERROR glance.api.v2.metadef_properties 
[req-e3a80995-6f37-4e5c-b7dd-a1ce978478c7 f76c222365fb490792300f9e49ec9bd0 
9db14ac3320b4396b58222f99dd04e4e] Multiple rows were found for one()

  This returns a 500 to the user and leaves the namespace inaccessible,
  amounting to a successful denial of service against most of the
  metadefs API, since most of its calls require the namespace.

  Attempted preventative measures:
  In the policy.json files there are only the following values:
  "get_metadef_namespace": "",
  "get_metadef_namespaces": "",
  "modify_metadef_namespace": "",
  "add_metadef_namespace": "",
  meaning that creating namespaces has to be disabled completely (not the 
default), as there is no publicize option.
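
The "Multiple rows were found for one()" error in the api console comes from SQLAlchemy's exactly-one-row contract. An illustrative mimic of that semantics (not glance's actual code) shows why a duplicate public namespace breaks every subsequent lookup:

```python
def one(rows):
    """Mimic SQLAlchemy Query.one(): exactly one matching row, or an error."""
    if len(rows) == 0:
        raise LookupError('No row was found for one()')
    if len(rows) > 1:
        raise LookupError('Multiple rows were found for one()')
    return rows[0]


# After the POST above, two public namespaces share the same name, so
# any lookup by name now raises instead of returning the original.
namespaces = ['OS::Computer::WebServers', 'OS::Computer::WebServers']
```

A unique constraint on (namespace, visibility) at the database layer, or a conflict check on create, would prevent the second row from ever existing.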

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1479385/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493205] Re: Create Keypair failed on latest DevStack

2015-09-08 Thread Sean Dague
** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
   Status: New => Confirmed

** Changed in: python-novaclient
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493205

Title:
  Create Keypair failed on latest DevStack

Status in OpenStack Dashboard (Horizon):
  New
Status in python-novaclient:
  Confirmed

Bug description:
  Deploy latest stack.

  1. Login as admin, try to create new Keypair
  2. Observe that Create Keypair fails.

  Refer the screenshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493205/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492140] Re: consoleauth token displayed in log file

2015-09-08 Thread Jeremy Stanley
I've added a bugtask for oslo.utils because of partial fix
https://review.openstack.org/220620 in that repository.

** Also affects: oslo.utils
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492140

Title:
  consoleauth token displayed in log file

Status in OpenStack Compute (nova):
  In Progress
Status in oslo.utils:
  New
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  when instance console is accessed auth token is displayed nova-
  consoleauth.log

  nova-consoleauth.log:874:2015-09-02 14:20:36 29941 INFO 
nova.consoleauth.manager [req-6bc7c116-5681-43ee-828d-4b8ff9d566d0 
fe3cd6b7b56f44c9a0d3f5f2546ad4db 37b377441b174b8ba2deda6a6221e399] Received 
Token: f8ea537c-b924-4d92-935e-4c22ec90d5f7, {'instance_uuid': 
u'dd29a899-0076-4978-aa50-8fb752f0c3ed', 'access_url': 
u'http://192.168.245.9:6080/vnc_auto.html?token=f8ea537c-b924-4d92-935e-4c22ec90d5f7',
 'token': u'f8ea537c-b924-4d92-935e-4c22ec90d5f7', 'last_activity_at': 
1441203636.387588, 'internal_access_path': None, 'console_type': u'novnc', 
'host': u'192.168.245.6', 'port': u'5900'}
  nova-consoleauth.log:881:2015-09-02 14:20:52 29941 INFO 
nova.consoleauth.manager [req-a29ab7d8-ab26-4ef2-b942-9bb02d5703a0 None None] 
Checking Token: f8ea537c-b924-4d92-935e-4c22ec90d5f7, True

  and

  nova-novncproxy.log:30:2015-09-02 14:20:52 31927 INFO
  nova.console.websocketproxy [req-a29ab7d8-ab26-4ef2-b942-9bb02d5703a0
  None None]   3: connect info: {u'instance_uuid':
  u'dd29a899-0076-4978-aa50-8fb752f0c3ed', u'internal_access_path':
  None, u'last_activity_at': 1441203636.387588, u'console_type':
  u'novnc', u'host': u'192.168.245.6', u'token': u'f8ea537c-b924-4d92
  -935e-4c22ec90d5f7', u'access_url':
  u'http://192.168.245.9:6080/vnc_auto.html?token=f8ea537c-b924-4d92
  -935e-4c22ec90d5f7', u'port': u'5900'}

  This token has a short lifetime, but the exposure still represents a
  potential security weakness, especially as the log records in question
  are INFO level and thus available via centralized logging. A user with
  real-time access to these records could mount a denial-of-service
  attack by accessing the instance console and performing a Ctrl-Alt-Del
  to reboot it.

  Alternatively, data privacy could be compromised if the attacker were
  able to obtain user credentials.
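
One mitigation is to redact the token before the connect info is logged. An illustrative helper (an assumption for this sketch, not nova's actual fix) that scrubs UUID-style tokens like the one shown above:

```python
import re

# Matches UUID-shaped tokens such as f8ea537c-b924-4d92-935e-4c22ec90d5f7.
_TOKEN_RE = re.compile(
    r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}')


def scrub_tokens(message):
    """Redact anything UUID-shaped before the message reaches the logs."""
    return _TOKEN_RE.sub('***', message)
```

Note this is coarse: it also masks instance UUIDs, so a real fix would mask only the token fields of the connect-info dict.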

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493396] [NEW] Enable rootwrap daemon logging during functional tests

2015-09-08 Thread Assaf Muller
Public bug reported:

When triaging bugs found during functional tests (either legitimate bugs
in Neutron, or issues related to the testing infrastructure), it is
useful to view the Oslo rootwrap daemon logs. It has an option to log to
syslog, but it is turned off by default. It should be turned on during
functional tests to provide additional useful information.
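
Concretely, this is a rootwrap.conf change along the following lines; the option names match oslo.rootwrap's sample config, but the values shown here are assumptions to verify against the version in use:

```ini
[DEFAULT]
# Send the rootwrap daemon's decisions to syslog so functional-test
# failures can be triaged after the fact.
use_syslog = True
syslog_log_facility = syslog
syslog_log_level = ERROR
```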

** Affects: neutron
 Importance: High
 Assignee: Cedric Brandily (cbrandily)
 Status: New


** Tags: functional-tests

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493396

Title:
  Enable rootwrap daemon logging during functional tests

Status in neutron:
  New

Bug description:
  When triaging bugs found during functional tests (either legitimate
  bugs in Neutron, or issues related to the testing infrastructure), it
  is useful to view the Oslo rootwrap daemon logs. It has an option to
  log to syslog, but it is turned off by default. It should be turned on
  during functional tests to provide additional useful information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492951] Re: Juno keystone installation fail to import oslo_i18n

2015-09-08 Thread Dolph Mathews
Moved this to oslo.i18n, but it sounds like openstack/requirements for
stable/juno just needs to be fixed to reflect reality (that
oslo.utils 1.4.0 requires oslo.i18n>=1.3.0).

** Project changed: keystone => oslo.i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1492951

Title:
  Juno keystone installation fail to import oslo_i18n

Status in oslo.i18n:
  In Progress

Bug description:
  Tested a keystone installation on stable/juno recently, but it failed
  to import the oslo_i18n module:

  + keystone-manage db_sync
  Traceback (most recent call last):
    File "/usr/bin/keystone-manage", line 30, in <module>
      from keystone import cli
    File "/usr/lib/python2.7/site-packages/keystone/cli.py", line 22, in <module>
      from keystone import assignment
    File "/usr/lib/python2.7/site-packages/keystone/assignment/__init__.py", line 15, in <module>
      from keystone.assignment import controllers  # noqa
    File "/usr/lib/python2.7/site-packages/keystone/assignment/controllers.py", line 26, in <module>
      from keystone.common import controller
    File "/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 21, in <module>
      from keystone.common import utils
    File "/usr/lib/python2.7/site-packages/keystone/common/utils.py", line 26, in <module>
      from oslo.utils import strutils
    File "/usr/lib/python2.7/site-packages/oslo/utils/strutils.py", line 13, in <module>
      from oslo_utils.strutils import *  # noqa
    File "/usr/lib/python2.7/site-packages/oslo_utils/strutils.py", line 26, in <module>
      from oslo_utils._i18n import _
    File "/usr/lib/python2.7/site-packages/oslo_utils/_i18n.py", line 21, in <module>
      import oslo_i18n
  ImportError: No module named oslo_i18n

  I checked global requirements for stable/juno
  
http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt?h=stable/juno

  I found oslo.utils is below
  oslo.utils>=1.4.0,<1.5.0 # Apache-2.0

  But oslo.i18n still below
  oslo.i18n>=1.0.0,<=1.3.1 # Apache-2.0

  And from requirements of oslo.utils 1.4.0
  
http://git.openstack.org/cgit/openstack/oslo.utils/tree/requirements.txt?id=1.4.0

  If we install oslo.utils version 1.4.0, it will require oslo.i18n >= 1.3.0
  oslo.i18n>=1.3.0  # Apache-2.0

  So if we install only oslo.i18n 1.0.0, the import fails with "No
  module named oslo_i18n".
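
The constraint clash described above can be stated as a simple version comparison (a sketch; the 1.3.0 minimum is taken from the oslo.utils 1.4.0 requirements quoted in this report):

```python
def parse(version):
    """Turn '1.3.0' into (1, 3, 0) for tuple comparison."""
    return tuple(int(part) for part in version.split('.'))


def satisfies(installed, minimum):
    return parse(installed) >= parse(minimum)


# From oslo.utils 1.4.0 requirements.txt, as quoted above.
OSLO_UTILS_NEEDS = '1.3.0'

# Any oslo.i18n pinned in [1.0.0, 1.3.0) passes juno's global
# requirement (oslo.i18n>=1.0.0,<=1.3.1) yet breaks keystone-manage
# with "No module named oslo_i18n".
```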

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.i18n/+bug/1492951/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459874] Re: Ironic driver needs microversion support

2015-09-08 Thread Michael Davies
Closing this one down as a duplicate in favour of
https://bugs.launchpad.net/nova/+bug/1493094

** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459874

Title:
  Ironic driver needs microversion support

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Ironic has recently moved to using microversions in the interface
  between ironicclient and ironic server.  The Nova ironic driver
  (nova.virt.ironic) needs updating to specify the microversion it wants
  so as to ensure that the interface remains stable independent of
  development activities in Nova and Ironic respectively.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459874/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493576] Re: Incorrect usage of python-novaclient

2015-09-08 Thread Andrey Kurilin
** Description changed:

  All projects should use only `novaclient.client` as entry point. It designed 
with some version checks and backward compatibility.
  Direct import of versioned client object(i.e. novaclient.v2.client) is a way 
to "shoot yourself in the foot".
  
- Horizon:
- 
https://github.com/openstack/horizon/blob/69d6d50ef4a26e2629643ed35ebd661e82e10586/openstack_dashboard/api/nova.py#L31
+ Horizon: 
https://github.com/openstack/horizon/blob/69d6d50ef4a26e2629643ed35ebd661e82e10586/openstack_dashboard/api/nova.py#L31
+ Manila: 
https://github.com/openstack/manila/blob/master/manila/compute/nova.py#L23

** Also affects: manila
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493576

Title:
  Incorrect usage of python-novaclient

Status in OpenStack Dashboard (Horizon):
  New
Status in Manila:
  New

Bug description:
  All projects should use only `novaclient.client` as the entry point. It is 
designed with version checks and backward compatibility in mind.
  Direct import of a versioned client object (i.e. novaclient.v2.client) is a 
way to "shoot yourself in the foot".

  Horizon: 
https://github.com/openstack/horizon/blob/69d6d50ef4a26e2629643ed35ebd661e82e10586/openstack_dashboard/api/nova.py#L31
  Manila: 
https://github.com/openstack/manila/blob/master/manila/compute/nova.py#L23

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493433] [NEW] "No volume type" in volume create screen is not a great default

2015-09-08 Thread Eric Peterson
Public bug reported:

If a deployment has defined volume types, having "No volume type" show
up as the first item in the list is not a great experience.

Instead:
"No volume type" should show up when cinder has 0 volume types defined.
-otherwise-
Use the list from cinder's api call.
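
The suggested behaviour can be sketched as a small choice-builder (illustrative only, not Horizon's actual form code):

```python
def volume_type_choices(volume_types):
    """Offer "No volume type" only when cinder reports zero types;
    otherwise show exactly cinder's list."""
    if not volume_types:
        return [('', 'No volume type')]
    return [(vt, vt) for vt in volume_types]
```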

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493433

Title:
  "No volume type" in volume create screen is not a great default

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If a deployment has defined volume types, having "No volume type" show
  up as the first item in the list is not a great experience.

  Instead:
  "No volume type" should show up when cinder has 0 volume types defined.
  -otherwise-
  Use the list from cinder's api call.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483382] Re: Able to request a V2 token for user and project in a non-default domain

2015-09-08 Thread Tristan Cacqueray
** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1483382

Title:
  Able to request a V2 token for user and project in a non-default
  domain

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  In Progress
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Using the latest devstack, I am able to request a V2 token for user
  and project in a non-default domain. This problematic as non-default
  domains are not suppose to be visible to V2 APIs.

  Steps to reproduce:

  1) install devstack

  2) run these commands

  gyee@dev:~$ openstack --os-identity-api-version 3 --os-username admin 
--os-password secrete --os-user-domain-id default --os-project-name admin 
--os-project-domain-id default --os-auth-url http://localhost:5000 domain list
  
  +----------------------------------+---------+---------+-----------------------------------------------------------------------+
  | ID                               | Name    | Enabled | Description                                                           |
  +----------------------------------+---------+---------+-----------------------------------------------------------------------+
  | 769ad7730e0c4498b628aa8dc00e831f | foo     | True    |                                                                       |
  | default                          | Default | True    | Owns users and tenants (i.e. projects) available on Identity API v2.  |
  +----------------------------------+---------+---------+-----------------------------------------------------------------------+
  gyee@dev:~$ openstack --os-identity-api-version 3 --os-username admin 
--os-password secrete --os-user-domain-id default --os-project-name admin 
--os-project-domain-id default --os-auth-url http://localhost:5000 user list 
--domain 769ad7730e0c4498b628aa8dc00e831f
  +----------------------------------+------+
  | ID                               | Name |
  +----------------------------------+------+
  | cf0aa0b2d5db4d67a94d1df234c338e5 | bar  |
  +----------------------------------+------+
  gyee@dev:~$ openstack --os-identity-api-version 3 --os-username admin 
--os-password secrete --os-user-domain-id default --os-project-name admin 
--os-project-domain-id default --os-auth-url http://localhost:5000 project list 
--domain 769ad7730e0c4498b628aa8dc00e831f
  +----------------------------------+-------------+
  | ID                               | Name        |
  +----------------------------------+-------------+
  | 413abdbfef5544e2a5f3e8ac6124dd29 | foo-project |
  +----------------------------------+-------------+
  gyee@dev:~$ curl -k -H 'Content-Type: application/json' -d '{"auth": 
{"passwordCredentials": {"userId": "cf0aa0b2d5db4d67a94d1df234c338e5", 
"password": "secrete"}, "tenantId": "413abdbfef5544e2a5f3e8ac6124dd29"}}' 
-XPOST http://localhost:35357/v2.0/tokens | python -mjson.tool
    % Total% Received % Xferd  Average Speed   TimeTime Time  
Current
   Dload  Upload   Total   SpentLeft  Speed
  100  3006  100  2854  100   152  22164   1180 --:--:-- --:--:-- --:--:-- 22472
  {
  "access": {
  "metadata": {
  "is_admin": 0,
  "roles": [
  "2b7f29ebd1c8453fb91e9cd7c2e1319b",
  "9fe2ff9ee4384b1894a90878d3e92bab"
  ]
  },
  "serviceCatalog": [
  {
  "endpoints": [
  {
  "adminURL": 
"http://10.0.2.15:8774/v2/413abdbfef5544e2a5f3e8ac6124dd29;,
  "id": "3a92a79a21fb41379fa3e135be65eeff",
  "internalURL": 
"http://10.0.2.15:8774/v2/413abdbfef5544e2a5f3e8ac6124dd29;,
  "publicURL": 
"http://10.0.2.15:8774/v2/413abdbfef5544e2a5f3e8ac6124dd29;,
  "region": "RegionOne"
  }
  ],
  "endpoints_links": [],
  "name": "nova",
  "type": "compute"
  },
  {
  "endpoints": [
  {
  "adminURL": 
"http://10.0.2.15:8776/v2/413abdbfef5544e2a5f3e8ac6124dd29;,
  "id": "64338d9eb3054598bcee30443c678e2a",
  "internalURL": 
"http://10.0.2.15:8776/v2/413abdbfef5544e2a5f3e8ac6124dd29;,
  "publicURL": 
"http://10.0.2.15:8776/v2/413abdbfef5544e2a5f3e8ac6124dd29;,
  "region": "RegionOne"
  }
  ],
  "endpoints_links": [],
  "name": "cinderv2",
  "type": "volumev2"
  },
  {
  "endpoints": [
  {
     

[Yahoo-eng-team] [Bug 1493424] [NEW] Linuxbridge agent pass the config as parameter

2015-09-08 Thread Hirofumi Ichihara
Public bug reported:

Instead of using the global cfg.CONF, we should enable the Linuxbridge agent to
accept its config as a parameter, as the Open vSwitch agent already does [1].
This makes it much easier to test the agent without having to override
the global config.

[1]: https://review.openstack.org/#/c/190638/
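
The requested pattern can be sketched like this (illustrative stand-ins
only; `AgentConfig` and `LinuxBridgeAgentSketch` are not the actual
Neutron classes):

```python
class AgentConfig:
    # Stand-in for an oslo.config ConfigOpts object (illustrative only).
    def __init__(self, polling_interval=2):
        self.polling_interval = polling_interval


class LinuxBridgeAgentSketch:
    # Accept the config as a constructor argument instead of reading the
    # module-level global cfg.CONF; tests can now inject a custom config
    # without monkey-patching global state.
    def __init__(self, conf):
        self.conf = conf

    def loop_delay(self):
        return self.conf.polling_interval
```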

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493424

Title:
  Linuxbridge agent pass the config as parameter

Status in neutron:
  New

Bug description:
  Instead of using the global cfg.CONF, we should enable the Linuxbridge agent to
  accept its config as a parameter, as the Open vSwitch agent already does [1].
  This makes it much easier to test the agent without having to override
  the global config.

  [1]: https://review.openstack.org/#/c/190638/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493642] [NEW] Spelling mistake of comment in api/ceilometer.py

2015-09-08 Thread ZhuChunzhan
Public bug reported:

There is a spelling mistake in the comment of the ResourceAggregate class.

"Aggregate of resources can be obtain by specifing multiple ids in one 
parameter or by not specifying one parameter."
Should be:
"Aggregate of resources can be obtain by specifying multiple ids in one 
parameter or by not specifying one parameter."

** Affects: horizon
 Importance: Undecided
 Assignee: ZhuChunzhan (zhucz)
 Status: In Progress


** Tags: spelling

** Changed in: horizon
   Status: New => In Progress

** Changed in: horizon
 Assignee: (unassigned) => ZhuChunzhan (zhucz)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493642

Title:
  Spelling mistake of comment in api/ceilometer.py

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There is a spelling mistake in the comment of the ResourceAggregate
  class.

  "Aggregate of resources can be obtain by specifing multiple ids in one 
parameter or by not specifying one parameter."
  Should be:
  "Aggregate of resources can be obtain by specifying multiple ids in one 
parameter or by not specifying one parameter."

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493642/+subscriptions



[Yahoo-eng-team] [Bug 1491926] Re: Remove padding from Fernet tokens

2015-09-08 Thread Dolph Mathews
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

** Tags removed: kilo-backport-potential

** Changed in: keystone/kilo
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1491926

Title:
  Remove padding from Fernet tokens

Status in Keystone:
  Fix Committed
Status in Keystone kilo series:
  New

Bug description:
  In bug 1433372, we determined that we should percent encode Fernet
  tokens, because the padding characters (=) aren't considered URL safe
  by some RFCs.

  We also fail some tempest tests because clients sometimes decode or
  encode responses [0]. We should just remove the padding, that way
  clients don't have to worry about it. When we go to validate a token,
  we can determine what the padding is based on the length of the token:

  missing_padding = len(token) % 4
  if missing_padding:
      token += b'=' * (4 - missing_padding)

  [0] http://cdn.pasteraw.com/es3j52dpfgem4nom62e7vktk7g5u2j1
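
  A self-contained round trip of the strip/restore logic described above
  (an illustrative sketch; `strip_padding` and `restore_padding` are
  made-up helper names, not keystone functions):

```python
import base64


def strip_padding(token: bytes) -> bytes:
    # Drop the trailing '=' characters, which are not URL safe.
    return token.rstrip(b'=')


def restore_padding(token: bytes) -> bytes:
    # Base64 text length is always a multiple of 4, so the number of
    # stripped '=' characters can be inferred from the token length.
    missing_padding = len(token) % 4
    if missing_padding:
        token += b'=' * (4 - missing_padding)
    return token
```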

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1491926/+subscriptions



[Yahoo-eng-team] [Bug 1493653] [NEW] DVR: port with None binding:host_id can't be deleted

2015-09-08 Thread shihanzhang
Public bug reported:

On the Neutron master branch, a port can't be deleted in the following use case:
1. create a DVR router
2. create a network and a subnet with DHCP disabled
3. create a port with device_owner=compute:None

When we delete this port, we get an error:
root@compute:/var/log/neutron# neutron port-delete 
830d6db6-cd00-46ff-8f17-f32f363de1fd
Agent with agent_type=L3 agent and host= could not be found
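
A hedged sketch of the missing guard (the function names are illustrative,
not the actual Neutron code paths):

```python
def notify_l3_agent_of_port_delete(port_host, lookup_agent):
    # A port created directly through the API with device_owner=compute:None
    # may never be bound, so binding:host_id is None or empty. Skip the
    # L3-agent lookup in that case instead of failing with
    # "Agent with agent_type=L3 agent and host= could not be found".
    if not port_host:
        return None
    return lookup_agent(port_host)
```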

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493653

Title:
  DVR: port with None binding:host_id can't be deleted

Status in neutron:
  New

Bug description:
  On the Neutron master branch, a port can't be deleted in the following use case:
  1. create a DVR router
  2. create a network and a subnet with DHCP disabled
  3. create a port with device_owner=compute:None

  When we delete this port, we get an error:
  root@compute:/var/log/neutron# neutron port-delete 
830d6db6-cd00-46ff-8f17-f32f363de1fd
  Agent with agent_type=L3 agent and host= could not be found

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493653/+subscriptions



[Yahoo-eng-team] [Bug 1493446] [NEW] booting a server from volume in Nova V2.1 is not fully backward-compatible

2015-09-08 Thread Andrey Kurilin
Public bug reported:

Nova V2.0 supports the request `nova boot  --flavor :::1`, which is broken in Nova V2.1

Logs:
http://paste.openstack.org/show/450439/

Guide for this way of booting:

http://docs.rackspace.com/servers/api/v2/cs-devguide/content/create_volume_from_image_and_boot.html

Found-by:
 - Trove gates
 - Rally gates

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: python-novaclient
 Importance: Undecided
 Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1493446

Title:
  booting a server from volume in Nova V2.1 is not fully backward-
  compatible

Status in OpenStack Compute (nova):
  New
Status in python-novaclient:
  New

Bug description:
  Nova V2.0 supports the request `nova boot  --flavor :::1`, which is broken in
  Nova V2.1

  Logs:
  http://paste.openstack.org/show/450439/

  Guide for this way of booting:
  
http://docs.rackspace.com/servers/api/v2/cs-devguide/content/create_volume_from_image_and_boot.html

  Found-by:
   - Trove gates
   - Rally gates

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1493446/+subscriptions



[Yahoo-eng-team] [Bug 1398997] Re: cloud-init does not have the SmartOS data source as a configuration option

2015-09-08 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.5-0ubuntu1.10

---
cloud-init (0.7.5-0ubuntu1.10) trusty; urgency=medium

  [ Daniel Watkins ]
  * d/patches/lp-1490796-azure-fix-mount_cb-for-symlinks.patch:
  - Fix a regression caused by switching to /dev/disk symlinks
(LP: #1490796).

 -- Ben Howard   Wed, 02 Sep 2015 10:57:30 -0600

** Changed in: cloud-init (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

** Changed in: cloud-init (Ubuntu Vivid)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1398997

Title:
  cloud-init does not have the SmartOS data source as a configuration
  option

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Trusty:
  Fix Released
Status in cloud-init source package in Vivid:
  Fix Released
Status in cloud-init source package in Wily:
  Fix Released

Bug description:
  The generic Ubuntu "*-server-cloudimg-amd64-disk1.img" images
  available here...

http://cloud-images.ubuntu.com/utopic/current/

  ... do not work on a SmartOS hypervisor: they try to detect a
  datasource, but ultimately fail.  It appears that this is because the
  "SmartOS" datasource is not in the list of datasources to try.  This
  appears to be an oversight, as the "cloud-init" project source
  includes a fallback configuration for when no configuration is
  provided by the image:

  ---
  - cloudinit/settings.py
  ---
  CFG_BUILTIN = {
  'datasource_list': [
  'NoCloud',
  'ConfigDrive',
  'OpenNebula',
  'Azure',
  'AltCloud',
  'OVF',
  'MAAS',
  'GCE',
  'OpenStack',
  'Ec2',
  'CloudSigma',
  'CloudStack',
  'SmartOS',
  # At the end to act as a 'catch' when none of the above work...
  'None',
  ],
  ...
  ---

  This list seems to be overridden in the generic images as shipped on
  ubuntu.com:

  ---
  - etc/cloud/cloud.cfg.d/90_dpkg.cfg
  ---
  # to update this file, run dpkg-reconfigure cloud-init
  datasource_list: [ NoCloud, ConfigDrive, OpenNebula, Azure, AltCloud,OVF, 
MAAS, GCE, OpenStack, CloudSigma, Ec2, CloudStack, None ]
  ---

  SmartOS is the only datasource type that appears in the default
  CFG_BUILTIN list but is missing from the overridden list as shipped in
  the images.  Can this list please be updated for at least the 14.04
  and 14.10 generic cloud images to include SmartOS?

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1398997/+subscriptions



[Yahoo-eng-team] [Bug 1490796] Re: [SRU] cloud-init must check/format Azure ephemeral disks each boot

2015-09-08 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.6.3-0ubuntu1.20

---
cloud-init (0.6.3-0ubuntu1.20) precise; urgency=medium

  * debian/patches/lp-1490796-azure-fix-mount_cb-for-symlinks.patch:
  - Fix a regression caused by switching to /dev/disk symlinks
(LP: #1490796).

 -- Daniel Watkins   Wed, 02 Sep 2015
13:24:28 +0100

** Changed in: cloud-init (Ubuntu Precise)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1490796

Title:
  [SRU] cloud-init must check/format Azure ephemeral disks each boot

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Precise:
  Fix Released
Status in cloud-init source package in Trusty:
  Fix Released
Status in cloud-init source package in Vivid:
  Fix Released

Bug description:
  
  [Impact]
  Sometimes when rebooting an instance in Azure, a different ephemeral disk 
will be presented. Azure ephemeral disks are presented as NTFS disks, but we 
reformat them to ext4.  With the last cloud-init upload, we shifted to using 
/dev/disk symlinks to mount these disks (in case they are not presented as the 
same physical device).  Unfortunately, the code that determines if we have a 
new ephemeral disk was not updated to handle symlinks, so never detects a new 
disk.
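
  The two-line fix described above boils down to dereferencing symlinks
  before comparing devices; a minimal sketch (the function name is
  illustrative, not cloud-init's actual API):

```python
import os


def is_new_ephemeral_disk(configured_dev, previous_dev):
    # Compare the real block devices, not the /dev/disk/... symlink paths;
    # comparing symlink paths verbatim never detects a swapped disk.
    return os.path.realpath(configured_dev) != os.path.realpath(previous_dev)
```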

  [Test Case]
  1) Boot an Azure instance and install the new cloud-init.
  2) Change the size of the instance using the Azure web interface (as this 
near-guarantees that the ephemeral disk will be replaced with a new one). This 
will reboot the instance.
  3) Once the instance is rebooted, SSH in and confirm that:
   a) An ext4 ephemeral disk is mounted at /mnt, and
   b) cloud-init.log indicates that a fabric formatted ephemeral disk was found 
on this boot.

  [Regression Potential]
  Limited; two LOCs change, to dereference symlinks instead of using paths 
verbatim.

  [Original Report]
  Ubuntu 14.04.3 (20150805) on Azure with cloud-init package 0.7.5-0ubuntu1.8.

  On Azure cloud-init prepares the ephemeral device as ext4 for the
  first boot.  However, if the VM is ever moved to another Host for any
  reason, then a new ephemeral disk might be provided to the VM.  This
  ephemeral disk is NTFS formatted, so for subsequent reboots cloud-init
  must detect and reformat the new disk as ext4.  However, with cloud-
  init 0.7.5-0ubuntu1.8 subsequent boots may result in fuse mounted NTFS
  file system.

  This issue occurred in earlier versions of cloud-init, but was fixed
  with bug 1292648 (https://bugs.launchpad.net/ubuntu/+source/cloud-
  init/+bug/1292648).  So this appears to be a regression.

  Repro:
    - Create an Ubuntu 14.04.3 VM on Azure
    - Resize the VM to a larger size (this typically moves the VM)
    - Log in and run 'blkid' to show an ntfs formatted ephemeral disk:

  # blkid
  /dev/sdb1: LABEL="Temporary Storage" UUID="A43C43DD3C43A95E" TYPE="ntfs"

  Expected results:
    - After resizing the ephemeral disk should be formatted as ext4.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1490796/+subscriptions



[Yahoo-eng-team] [Bug 1493440] [NEW] Login error if database is set for session backend

2015-09-08 Thread Ekaterina Chernova
Public bug reported:

Steps to reproduce

1) Log in to Horizon; perform some actions and let the token expire

2) login again

--
setting has the following

DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': '/home/fervent/murano-db.sqlite',
}
}

SESSION_ENGINE = 'django.contrib.sessions.backends.db'
-

Actual result

A server error occurred.  Please contact the administrator.

Login successful for user "kate".
Traceback (most recent call last):
  File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run
self.result = application(self.environ, self.start_response)
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/contrib/staticfiles/handlers.py",
 line 63, in __call__
return self.application(environ, start_response)
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py",
 line 189, in __call__
response = self.get_response(request)
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 218, in get_response
response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 132, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/views/decorators/debug.py",
 line 76, in sensitive_post_parameters_wrapper
return view(request, *args, **kwargs)
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/utils/decorators.py",
 line 110, in _wrapped_view
response = view_func(request, *args, **kwargs)
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/views/decorators/cache.py",
 line 57, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/openstack_auth/views.py",
 line 112, in login
**kwargs)
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/views/decorators/debug.py",
 line 76, in sensitive_post_parameters_wrapper
return view(request, *args, **kwargs)
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/utils/decorators.py",
 line 110, in _wrapped_view
response = view_func(request, *args, **kwargs)
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/views/decorators/cache.py",
 line 57, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/contrib/auth/views.py",
 line 51, in login
auth_login(request, form.get_user())
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/contrib/auth/__init__.py",
 line 102, in login
if _get_user_session_key(request) != user.pk or (
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/contrib/auth/__init__.py",
 line 59, in _get_user_session_key
return get_user_model()._meta.pk.to_python(request.session[SESSION_KEY])
  File 
"/home/fervent/Projects/horizon/.tox/venv/local/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
 line 969, in to_python
params={'value': value},
ValidationError: [u"'4b938e23c97940b18882d0fed87d809d' value must be an 
integer."]

Database is attached
Workaround: clear browser cookies
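
The traceback shows Django trying to coerce a stale Keystone UUID session
value into the integer pk of the default user model. A defensive check
could look like the following (illustrative only; this is not the actual
openstack_auth code, and Django raises ValidationError rather than the
plain exceptions caught here):

```python
def coerce_session_user_key(session_value, pk_to_python):
    # Return the session's user key coerced to the user model's pk type,
    # or None when it cannot be coerced (e.g. a UUID string left over from
    # a different auth backend); the caller should then flush the session.
    try:
        return pk_to_python(session_value)
    except (TypeError, ValueError):
        return None
```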

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sessions

** Tags added: sessions

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493440

Title:
  Login error if database is set for session backend

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce

  1) Log in to Horizon; perform some actions and let the token expire

  2) login again

  --
  setting has the following

  DATABASES = {
  'default': {
  'ENGINE': 'django.db.backends.sqlite3',
  'NAME': '/home/fervent/murano-db.sqlite',
  }
  }

  SESSION_ENGINE = 'django.contrib.sessions.backends.db'
  
-

  Actual result

  A server error occurred.  Please contact the administrator.

  Login successful for user "kate".
  Traceback (most recent call last):
    File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run
  self.result = application(self.environ, self.start_response)
    File 

[Yahoo-eng-team] [Bug 1349888] Re: [SRU] Attempting to attach the same volume multiple times can cause bdm record for existing attachment to be deleted.

2015-09-08 Thread Edward Hope-Morley
** Changed in: nova (Ubuntu)
   Status: In Progress => Fix Released

** Description changed:

  [Impact]
  
-  * Ensure attching already attached volume to second instance does not
-interfere with attached instance volume record.
+  * Ensure attaching already attached volume to second instance does not
+    interfere with attached instance volume record.
  
  [Test Case]
  
-  * Create cinder volume vol1 and two instances vm1 and vm2
+  * Create cinder volume vol1 and two instances vm1 and vm2
  
-  * Attach vol1 to vm1 and check that attach was successful by doing:
+  * Attach vol1 to vm1 and check that attach was successful by doing:
  
-- cinder list
-- nova show 
+    - cinder list
+    - nova show 
  
-e.g. http://paste.ubuntu.com/12314443/
+    e.g. http://paste.ubuntu.com/12314443/
  
-  * Attach vol1 to vm2 and check that attach fails and, crucially, that the
-first attach is unaffected (as above). You also check the Nova db as
-follows:
+  * Attach vol1 to vm2 and check that attach fails and, crucially, that the
+    first attach is unaffected (as above). You can also check the Nova db as
+    follows:
  
-select * from block_device_mapping where source_type='volume' and \
-(instance_uuid='' or instance_uuid='');
+    select * from block_device_mapping where source_type='volume' and \
+    (instance_uuid='' or instance_uuid='');
  
-from which you would expect e.g. http://paste.ubuntu.com/12314416/ which
-shows that vol1 is attached to vm1 and vm2 attach failed.
+    from which you would expect e.g. http://paste.ubuntu.com/12314416/ which
+    shows that vol1 is attached to vm1 and vm2 attach failed.
  
-  * finally detach vol1 from vm1 and ensure that it succeeds.
+  * finally detach vol1 from vm1 and ensure that it succeeds.
  
  [Regression Potential]
  
-  * none
+  * none
  
     
  
  Nova assumes there is only ever one BDM per volume. When an attach is
  initiated, a new BDM is created; if the attach fails, a BDM for the volume
  is deleted, but it is not necessarily the one that was just created.
  The following steps show how a volume can get stuck detaching because of
  this.
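
  Schematically, the wrong-row deletion can be sketched as follows
  (illustrative data structures, not Nova's actual DB layer):

```python
# Two BDM rows for the same volume: the successful attach to vm1 and the
# row created by the failed attach to vm2.
bdms = [
    {'id': 1, 'volume_id': 'vol-1', 'instance_uuid': 'vm-1'},
    {'id': 2, 'volume_id': 'vol-1', 'instance_uuid': 'vm-2'},
]


def cleanup_failed_attach_buggy(rows, volume_id):
    # Deleting "the" BDM for a volume removes the first match, which can be
    # the healthy vm-1 attachment rather than the failed vm-2 row.
    for i, row in enumerate(rows):
        if row['volume_id'] == volume_id:
            del rows[i]
            return


def cleanup_failed_attach_fixed(rows, bdm_id):
    # Delete exactly the row the failed attach created, keyed by primary key.
    rows[:] = [r for r in rows if r['id'] != bdm_id]
```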
  
  $ nova list
  
c+--++++-+--+
  | ID   | Name   | Status | Task State | Power 
State | Networks |
  
+--++++-+--+
  | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 | test13 | ACTIVE | -  | 
Running | private=10.0.0.2 |
  
+--++++-+--+
  
  $ cinder list
  
+--+---++--+-+--+-+
  |  ID  |   Status  |  Name  | Size | Volume 
Type | Bootable | Attached to |
  
+--+---++--+-+--+-+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | available | test10 |  1   | lvm1 
   |  false   | |
  
+--+---++--+-+--+-+
  
  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  +--+--+
  | Property | Value|
  +--+--+
  | device   | /dev/vdb |
  | id   | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  | serverId | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  | volumeId | c1e38e93-d566-4c99-bfc3-42e77a428cc4 |
  +--+--+
  
  $ cinder list
  
+--+++--+-+--+--+
  |  ID  | Status |  Name  | Size | Volume Type 
| Bootable | Attached to  |
  
+--+++--+-+--+--+
  | c1e38e93-d566-4c99-bfc3-42e77a428cc4 | in-use | test10 |  1   | lvm1
|  false   | cb5188f8-3fe1-4461-8a9d-3902f7cc8296 |
  
+--+++--+-+--+--+
  
  $ nova volume-attach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  ERROR (BadRequest): Invalid volume: status must be 'available' (HTTP 400) 
(Request-ID: req-1fa34b54-25b5-4296-9134-b63321b0015d)
  
  $ nova volume-detach test13 c1e38e93-d566-4c99-bfc3-42e77a428cc4
  
  $ cinder list
  
+--+---++--+-+--+--+
  |  ID  |   Status  

[Yahoo-eng-team] [Bug 1490796] Re: [SRU] cloud-init must check/format Azure ephemeral disks each boot

2015-09-08 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.5-0ubuntu1.10

---
cloud-init (0.7.5-0ubuntu1.10) trusty; urgency=medium

  [ Daniel Watkins ]
  * d/patches/lp-1490796-azure-fix-mount_cb-for-symlinks.patch:
  - Fix a regression caused by switching to /dev/disk symlinks
(LP: #1490796).

 -- Ben Howard   Wed, 02 Sep 2015 10:57:30 -0600

** Changed in: cloud-init (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

** Changed in: cloud-init (Ubuntu Vivid)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1490796

Title:
  [SRU] cloud-init must check/format Azure ephemeral disks each boot

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Precise:
  Fix Released
Status in cloud-init source package in Trusty:
  Fix Released
Status in cloud-init source package in Vivid:
  Fix Released

Bug description:
  
  [Impact]
  Sometimes when rebooting an instance in Azure, a different ephemeral disk 
will be presented. Azure ephemeral disks are presented as NTFS disks, but we 
reformat them to ext4.  With the last cloud-init upload, we shifted to using 
/dev/disk symlinks to mount these disks (in case they are not presented as the 
same physical device).  Unfortunately, the code that determines if we have a 
new ephemeral disk was not updated to handle symlinks, so never detects a new 
disk.

  [Test Case]
  1) Boot an Azure instance and install the new cloud-init.
  2) Change the size of the instance using the Azure web interface (as this 
near-guarantees that the ephemeral disk will be replaced with a new one). This 
will reboot the instance.
  3) Once the instance is rebooted, SSH in and confirm that:
   a) An ext4 ephemeral disk is mounted at /mnt, and
   b) cloud-init.log indicates that a fabric formatted ephemeral disk was found 
on this boot.

  [Regression Potential]
  Limited; two LOCs change, to dereference symlinks instead of using paths 
verbatim.

  [Original Report]
  Ubuntu 14.04.3 (20150805) on Azure with cloud-init package 0.7.5-0ubuntu1.8.

  On Azure cloud-init prepares the ephemeral device as ext4 for the
  first boot.  However, if the VM is ever moved to another Host for any
  reason, then a new ephemeral disk might be provided to the VM.  This
  ephemeral disk is NTFS formatted, so for subsequent reboots cloud-init
  must detect and reformat the new disk as ext4.  However, with cloud-
  init 0.7.5-0ubuntu1.8 subsequent boots may result in fuse mounted NTFS
  file system.

  This issue occurred in earlier versions of cloud-init, but was fixed
  with bug 1292648 (https://bugs.launchpad.net/ubuntu/+source/cloud-
  init/+bug/1292648).  So this appears to be a regression.

  Repro:
    - Create an Ubuntu 14.04.3 VM on Azure
    - Resize the VM to a larger size (this typically moves the VM)
    - Log in and run 'blkid' to show an ntfs formatted ephemeral disk:

  # blkid
  /dev/sdb1: LABEL="Temporary Storage" UUID="A43C43DD3C43A95E" TYPE="ntfs"

  Expected results:
    - After resizing the ephemeral disk should be formatted as ext4.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1490796/+subscriptions



[Yahoo-eng-team] [Bug 1493414] [NEW] OVS Neutron agent is marking port as dead before they are deleted

2015-09-08 Thread Artur Korzeniewski
Public bug reported:

The situation is happening on Liberty-3.

When clearing the gateway port or deleting a tenant network interface on a
router, the OVS agent marks the ports as dead instead of treating them as
removed: security groups removed and port_unbound called.

This leaves stale OVS flows in br-int, and it may affect the
port_unbound() logic in ovs_neutron_agent.py.

The ovs_neutron_agent is in one iteration of rpc_loop processing the
deleted port via process_deleted_ports() method, marking the qg- port as
dead (ovs flow rule to drop the traffic) and in another iteration, the
ovs_neutron_agent is processing the removed port by
treat_devices_removed() method.

In the first iteration, the port delete is triggered by the port_delete() method:
2015-09-04 14:16:20.337 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-e43234b1-633b-404d-92d0-0f844dadb586 admin 
0f6c0469ea6e4d95a27782c46021243a] port_delete message processed for port 
1c749258-74fb-498b-9a08-1fec6725a1cf from (pid=136030) port_delete 
/opt/openstack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:410

and in the second iteration, the device removal is triggered by ovsdb:
2015-09-04 14:16:20.848 DEBUG neutron.agent.linux.ovsdb_monitor [-] Output 
received from ovsdb monitor: 
{"data":[["bab86f35-d004-4df6-95c2-0f7432338edb","delete","qg-1c749258-74",49,["map",[["attached-mac","fa:16:3e:99:37:68"],["iface-id","1c749258-74fb-498b-9a08-1fec6725a1cf"],["iface-status","active"],"headings":["row","action","name","ofport","external_ids"]}
 from (pid=136030) _read_stdout 
/opt/openstack/neutron/neutron/agent/linux/ovsdb_monitor.py:50

Log from ovs neutron agent:
http://paste.openstack.org/show/445479/

Steps to reproduce:
1. Create router
2. Add tenant network interface to the router
3. Launch a VM
4. Add external network gateway to created router
5. Check the br-int for current port numbers
6. Remove external network gateway
7. Check the br-int for dead port flows (removed port qg-)
8. Remove the network interface from tenant network
9. Check the br-int for dead port flows.

Repeat steps 4-9 a few times to see the dead-port flows accumulate in
br-int.

This affects legacy, DVR and HA routers.
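
One way to picture the reconciliation the agent needs between the two
rpc_loop iterations (pure set arithmetic; an illustrative sketch, not the
agent's actual code):

```python
def reconcile_port_events(deleted_via_rpc, removed_via_ovsdb):
    # Ports seen by both the RPC delete and the ovsdb monitor are gone from
    # the bridge: they need the full unbind/cleanup path, not a dead-port
    # drop flow that nothing will ever remove.
    to_unbind = deleted_via_rpc & removed_via_ovsdb
    # Ports deleted via RPC but still plugged into br-int may be marked dead.
    to_mark_dead = deleted_via_rpc - removed_via_ovsdb
    return to_unbind, to_mark_dead
```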

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493414

Title:
  OVS Neutron agent is marking port as dead before they are deleted

Status in neutron:
  New

Bug description:
  The situation is happening on Liberty-3.

  When clearing the gateway port or deleting a tenant network interface on
  a router, the OVS agent marks the ports as dead instead of treating them
  as removed: security groups removed and port_unbound called.

  This leaves stale OVS flows in br-int, and it may affect
  the port_unbound() logic in ovs_neutron_agent.py.

  The ovs_neutron_agent is in one iteration of rpc_loop processing the
  deleted port via process_deleted_ports() method, marking the qg- port
  as dead (ovs flow rule to drop the traffic) and in another iteration,
  the ovs_neutron_agent is processing the removed port by
  treat_devices_removed() method.

  In the first iteration, the port delete is triggered by the port_delete() method:
  2015-09-04 14:16:20.337 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-e43234b1-633b-404d-92d0-0f844dadb586 admin 
0f6c0469ea6e4d95a27782c46021243a] port_delete message processed for port 
1c749258-74fb-498b-9a08-1fec6725a1cf from (pid=136030) port_delete 
/opt/openstack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:410

  and in the second iteration, the device removal is triggered by ovsdb:
  2015-09-04 14:16:20.848 DEBUG neutron.agent.linux.ovsdb_monitor [-] Output 
received from ovsdb monitor: 
{"data":[["bab86f35-d004-4df6-95c2-0f7432338edb","delete","qg-1c749258-74",49,["map",[["attached-mac","fa:16:3e:99:37:68"],["iface-id","1c749258-74fb-498b-9a08-1fec6725a1cf"],["iface-status","active"]]]]],"headings":["row","action","name","ofport","external_ids"]}
   from (pid=136030) _read_stdout 
/opt/openstack/neutron/neutron/agent/linux/ovsdb_monitor.py:50

  Log from ovs neutron agent:
  http://paste.openstack.org/show/445479/

  Steps to reproduce:
  1. Create router
  2. Add tenant network interface to the router
  3. Launch a VM
  4. Add external network gateway to created router
  5. Check the br-int for current port numbers
  6. Remove external network gateway
  7. Check the br-int for dead port flows (removed port qg-)
  8. Remove the network interface from tenant network
  9. Check the br-int for dead port flows.

  Repeat steps 4-9 a few times to see whether dead port flows appear
  in br-int.

  This affects the legacy, DVR and HA routers.

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1493407] [NEW] Can't add member role on a project on latest DevStack

2015-09-08 Thread svasheka
Public bug reported:

Deploy the latest DevStack.

1. Log in as admin and create a project with a member (Identity -> Projects ->
+Create Project).
2. Try to add more roles to the member.

Actual result: not on every try, but often enough, you get an Unauthorized
error, and after logging in again the page layout is broken (see the
screenshot "second_login.png"). When you do not get Unauthorized, the error
message "Unable to modify project" is shown.

Expected result: no error appears and the roles are added.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "second_login.png"
   
https://bugs.launchpad.net/bugs/1493407/+attachment/4459261/+files/second_login.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493407

Title:
  Can't add member role on a project on latest DevStack

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Deploy the latest DevStack.

  1. Log in as admin and create a project with a member (Identity -> Projects
  -> +Create Project).
  2. Try to add more roles to the member.

  Actual result: not on every try, but often enough, you get an Unauthorized
  error, and after logging in again the page layout is broken (see the
  screenshot "second_login.png"). When you do not get Unauthorized, the error
  message "Unable to modify project" is shown.

  Expected result: no error appears and the roles are added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493407/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311700] Re: "Setting oauth clockskew" messages on booting node; slows down boot

2015-09-08 Thread LaMont Jones
This is a cloud-init bug.

** Changed in: maas
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1311700

Title:
  "Setting oauth clockskew" messages on booting node; slows down boot

Status in cloud-init:
  Triaged
Status in MAAS:
  Invalid

Bug description:
  On booting nodes for enlistment, commissioning, and starting, I
  sometimes see repeated errors like this:

  2014-04-23 14:15:50,622 - DataSourceMAAS.py[WARNING]: Setting oauth
  clockskew to -2

  (See attached screen shot, too.) These scroll by, once every second or
  so, for about a minute before the node continues booting. Although a
  minute here or there isn't all THAT big of a deal, it would be nice to
  see this delay go away, particularly for occasions when I have to
  repeatedly test a node.

  Here's my version information:

  $ dpkg -l '*maas*'|cat
  Desired=Unknown/Install/Remove/Purge/Hold
  | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
  |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
  ||/ Name                            Version               Architecture  Description
  +++-===============================-=====================-=============-=======================================
  ii  maas                            1.5+bzr2252-0ubuntu1  all           MAAS server all-in-one metapackage
  ii  maas-cli                        1.5+bzr2252-0ubuntu1  all           MAAS command line API tool
  ii  maas-cluster-controller         1.5+bzr2252-0ubuntu1  all           MAAS server cluster controller
  ii  maas-common                     1.5+bzr2252-0ubuntu1  all           MAAS server common files
  ii  maas-dhcp                       1.5+bzr2252-0ubuntu1  all           MAAS DHCP server
  ii  maas-dns                        1.5+bzr2252-0ubuntu1  all           MAAS DNS server
  ii  maas-region-controller          1.5+bzr2252-0ubuntu1  all           MAAS server complete region controller
  ii  maas-region-controller-min      1.5+bzr2252-0ubuntu1  all           MAAS Server minimum region controller
  ii  python-django-maas              1.5+bzr2252-0ubuntu1  all           MAAS server Django web framework
  ii  python-maas-client              1.5+bzr2252-0ubuntu1  all           MAAS python API client
  ii  python-maas-provisioningserver  1.5+bzr2252-0ubuntu1  all           MAAS server provisioning libraries

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1311700/+subscriptions



[Yahoo-eng-team] [Bug 1491311] Re: glanceclient/common/utils.py safe_header throws an exception on X-Auth-Token with None value

2015-09-08 Thread David Edery
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => David Edery (david-edery)

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491311

Title:
  glanceclient/common/utils.py safe_header throws an exception on
  X-Auth-Token with None value

Status in Glance:
  New

Bug description:
  When using the glance-client with an internally generated admin
  context (e.g. using nova_context.get_admin_context) the
  log_curl_request fails due to safe_header trying to
  "value.encode('utf-8')" on a value that is None.

  An example of a stack trace (coming from RH juno distro's
  nova.compute.manager but the potential problem exists in Kilo &
  Liberty):

  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 364, 
in decorated_function
  *args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3021, 
in snapshot_instance
  task_states.IMAGE_SNAPSHOT)
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3052, 
in _snapshot_instance
  update_task_state)
File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 
1701, in snapshot
  self._detach_sriov_ports(instance, virt_dom)
File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 
3239, in _detach_sriov_ports
  instance)
File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 
3189, in _prepare_args_for_get_config
  context, self._image_api, image_ref, instance)
File "/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 201, in 
get_image_metadata
  image = image_api.get(context, image_id_or_uri)
File "/usr/lib/python2.7/site-packages/nova/image/api.py", line 89, in get
  include_locations=include_locations)
File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 311, in 
show
  _reraise_translated_image_exception(image_id)
File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 309, in 
show
  image = self._client.call(context, version, 'get', image_id)
File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 232, in 
call
  return getattr(client.images, method)(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 
126, in get
  % urlparse.quote(str(image_id)))
File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 
250, in head
  return self._request('HEAD', url, **kwargs)
File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 
194, in _request
  self.log_curl_request(method, conn_url, headers, data, kwargs)
File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 
101, in log_curl_request
  header = '-H \'%s: %s\'' % safe_header(key, value)
File "/usr/lib/python2.7/site-packages/glanceclient/common/utils.py", line 
394, in safe_header
  v = value.encode('utf-8')
  AttributeError: 'NoneType' object has no attribute 'encode'
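A defensive version of safe_header that tolerates a None value could look like the sketch below (the SENSITIVE_HEADERS tuple and '{SHA1}' prefix mirror what is visible in the logs; the actual glanceclient code may differ):

```python
import hashlib

SENSITIVE_HEADERS = ('X-Auth-Token',)

def safe_header(name, value):
    # Mask sensitive header values with a SHA1 digest for logging,
    # but pass a None value through unchanged instead of calling
    # .encode() on it (the AttributeError seen in the traceback).
    if name in SENSITIVE_HEADERS and value is not None:
        digest = hashlib.sha1(value.encode('utf-8')).hexdigest()
        return name, '{SHA1}%s' % digest
    return name, value
```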

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1491311/+subscriptions



[Yahoo-eng-team] [Bug 1493453] [NEW] vendor_data isn't parsed properly when using the nocloud datasource

2015-09-08 Thread Stéphane Graber
Public bug reported:

The following fix is needed:

"self.vendordata = mydata['vendor-data']" must be changed to
"self.vendordata_raw = mydata['vendor-data']"
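The difference between the two attributes can be sketched as follows (the class is a simplified stand-in for cloud-init's nocloud datasource, not its real code):

```python
class DataSourceNoCloudSketch:
    def __init__(self):
        self.vendordata = None      # processed form, derived later by cloud-init
        self.vendordata_raw = None  # raw form, which the datasource must set

    def load(self, mydata):
        # Buggy: self.vendordata = mydata['vendor-data']
        # Fixed: store the raw payload in vendordata_raw so the
        # normal processing path picks it up.
        self.vendordata_raw = mydata['vendor-data']
```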

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Affects: cloud-init (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: cloud-init (Ubuntu Trusty)
 Importance: Undecided
 Status: New

** Affects: cloud-init (Ubuntu Vivid)
 Importance: Undecided
 Status: New

** Affects: cloud-init (Ubuntu Wily)
 Importance: Undecided
 Status: New

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Vivid)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Wily)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1493453

Title:
  vendor_data isn't parsed properly when using the nocloud datasource

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  New
Status in cloud-init source package in Trusty:
  New
Status in cloud-init source package in Vivid:
  New
Status in cloud-init source package in Wily:
  New

Bug description:
  The following fix is needed:

  "self.vendordata = mydata['vendor-data']" must be changed to
  "self.vendordata_raw = mydata['vendor-data']"

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1493453/+subscriptions



[Yahoo-eng-team] [Bug 1493426] [NEW] Angular quota pie charts are broken after applying bp horizon-theme-css-reorg

2015-09-08 Thread Timur Sufiev
Public bug reported:

They are broken because fixed width and height properties are assigned to
the .pie-chart class (which is set on the div containing the chart title,
the chart itself and its legend). These properties should instead be
assigned to the .svg-pie-chart class, so they apply only to the chart
itself.

** Affects: horizon
 Importance: Undecided
 Assignee: Timur Sufiev (tsufiev-x)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493426

Title:
  Angular quota pie charts are broken after applying bp horizon-theme-
  css-reorg

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  They are broken because fixed width and height properties are assigned
  to the .pie-chart class (which is set on the div containing the chart
  title, the chart itself and its legend). These properties should instead
  be assigned to the .svg-pie-chart class, so they apply only to the chart
  itself.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1493426/+subscriptions



[Yahoo-eng-team] [Bug 1490497] Re: pep8-incompliant filenames missing in gate console logs

2015-09-08 Thread Dolph Mathews
Leaving this as Incomplete unless someone can reproduce.

** Also affects: hacking
   Importance: Undecided
   Status: New

** Changed in: hacking
   Status: New => Incomplete

** Changed in: keystone
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1490497

Title:
  pep8-incompliant filenames missing in gate console logs

Status in hacking:
  Incomplete
Status in Keystone:
  Incomplete

Bug description:
  Jenkins reported gate-keystone-pep8 failure on patch set 12 @ 
https://review.openstack.org/#/c/209524/  .
  But the console logs didn't contain the names of the files that violate
pep8.
  
http://logs.openstack.org/24/209524/12/check/gate-keystone-pep8/b2b7500/console.html
  
  ...
  2015-08-30 22:34:11.101 | pep8 runtests: PYTHONHASHSEED='3894393079'
  2015-08-30 22:34:11.102 | pep8 runtests: commands[0] | flake8
  2015-08-30 22:34:11.102 |   /home/jenkins/workspace/gate-keystone-pep8$ 
/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8 
  2015-08-30 22:34:16.619 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8'
  2015-08-30 22:34:16.620 | ___ summary 

  2015-08-30 22:34:16.620 | ERROR:   pep8: commands failed
  ...
  

  Typically, it contains the filenames as well.
  Eg. Console logs pf patchset 1 contains the filenames.
  
http://logs.openstack.org/24/209524/1/check/gate-keystone-pep8/19f2885/console.html
  
  ...
  2015-08-05 14:45:15.247 | pep8 runtests: PYTHONHASHSEED='1879982710'
  2015-08-05 14:45:15.247 | pep8 runtests: commands[0] | flake8
  2015-08-05 14:45:15.247 |   /home/jenkins/workspace/gate-keystone-pep8$ 
/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8 
  2015-08-05 14:45:20.518 | ./keystone/assignment/backends/ldap.py:37:5: E301 
expected 1 blank line, found 0
  2015-08-05 14:45:20.518 | @versionutils.deprecated(
  2015-08-05 14:45:20.518 | ^
  ...
  2015-08-05 14:45:20.872 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8'
  2015-08-05 14:45:20.872 | ___ summary 

  2015-08-05 14:45:20.873 | ERROR:   pep8: commands failed
  ...
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/hacking/+bug/1490497/+subscriptions



[Yahoo-eng-team] [Bug 1470622] Re: Devref documentation for client command extension support

2015-09-08 Thread Doug Hellmann
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

** Changed in: python-neutronclient
Milestone: None => 3.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470622

Title:
  Devref documentation for client command extension support

Status in neutron:
  Fix Released
Status in python-neutronclient:
  Fix Released

Bug description:
  The only documentation for client extensibility is the commit message
  in https://review.openstack.org/148318

  The information should be in an official devref document.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470622/+subscriptions



[Yahoo-eng-team] [Bug 1455102] Re: some test jobs broken by tox 2.0 not passing env variables

2015-09-08 Thread Doug Hellmann
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

** Changed in: python-neutronclient
Milestone: None => 3.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1455102

Title:
  some test jobs broken by tox 2.0 not passing env variables

Status in Magnum:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron kilo series:
  New
Status in OpenStack-Gate:
  Confirmed
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-manilaclient:
  Fix Committed
Status in python-neutronclient:
  Fix Released
Status in python-novaclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Released

Bug description:
  Tox 2.0 brings environment isolation, which is good. Except a lot of
  test jobs assume passing critical variables via environment (like
  credentials).

  There are multiple ways to fix this:

  1. stop using environment to pass things, instead use a config file of
  some sort

  2. allow explicit pass-through via passenv=SPACE-SEPARATED-GLOBNAMES -
  http://tox.readthedocs.org/en/latest/config.html#confval-passenv

  This bug mostly exists for tracking patches, and ensuring that people
  realize there is a larger change here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/magnum/+bug/1455102/+subscriptions



[Yahoo-eng-team] [Bug 1444146] Re: Subnet creation from a subnet pool can get wrong ip_version

2015-09-08 Thread Doug Hellmann
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

** Changed in: python-neutronclient
Milestone: None => 3.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444146

Title:
  Subnet creation from a subnet pool can get wrong ip_version

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New
Status in python-neutronclient:
  Fix Released

Bug description:
  The following command ends up creating a subnet with ip_version set to
  4 even though the pool is an ipv6 pool.

$ neutron subnet-create --subnetpool ext-subnet-pool --prefixlen 64
  network1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444146/+subscriptions



[Yahoo-eng-team] [Bug 1465440] Re: Firewall attribute "Shared" is set to None by default instead of 'False'

2015-09-08 Thread Doug Hellmann
** Changed in: python-neutronclient
   Status: Fix Committed => Fix Released

** Changed in: python-neutronclient
Milestone: None => 3.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1465440

Title:
  Firewall attribute "Shared" is set to None by default instead of
  'False'

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in neutron:
  Won't Fix
Status in python-neutronclient:
  Fix Released

Bug description:
  In the current implementation, when a firewall is created, the default
  value of the 'Shared' attribute is set to 'None' instead of 'False'.
  When the firewall attributes are viewed in Horizon, the value is
  displayed as 'Maybe' instead of 'No' because it is 'None'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1465440/+subscriptions



[Yahoo-eng-team] [Bug 1446017] Re: In Kilo code release, nova boot failed on keystoneclient returns 500 error

2015-09-08 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446017

Title:
  In Kilo code release, nova boot failed on keystoneclient returns 500
  error

Status in OpenStack Compute (nova):
  Expired

Bug description:
  nova --debug boot --flavor 1 --image dockerc7  --nic 
net-id=40945ae1-344c-4ebd-a25b-2776feb0f409 d01
  ...

  DEBUG (iso8601:184) Parsed 2015-04-19T22:56:02Z into {'tz_sign': None, 
'second_fraction': None, 'hour': u'22', 'daydash': u'19', 'tz_hour': None, 
'month': None, 'timezone': u'Z', 'second': u'02', 'tz_minute': None, 'year': 
u'2015', 'separator': u'T', 'monthdash': u'04', 'day': None, 'minute': u'56'} 
with default timezone 
  DEBUG (iso8601:140) Got u'2015' for 'year' with default None
  DEBUG (iso8601:140) Got u'04' for 'monthdash' with default 1
  DEBUG (iso8601:140) Got 4 for 'month' with default 4
  DEBUG (iso8601:140) Got u'19' for 'daydash' with default 1
  DEBUG (iso8601:140) Got 19 for 'day' with default 19
  DEBUG (iso8601:140) Got u'22' for 'hour' with default None
  DEBUG (iso8601:140) Got u'56' for 'minute' with default None
  DEBUG (iso8601:140) Got u'02' for 'second' with default None
  DEBUG (iso8601:184) Parsed 2015-04-19T22:56:02Z into {'tz_sign': None, 
'second_fraction': None, 'hour': u'22', 'daydash': u'19', 'tz_hour': None, 
'month': None, 'timezone': u'Z', 'second': u'02', 'tz_minute': None, 'year': 
u'2015', 'separator': u'T', 'monthdash': u'04', 'day': None, 'minute': u'56'} 
with default timezone 
  DEBUG (iso8601:140) Got u'2015' for 'year' with default None
  DEBUG (iso8601:140) Got u'04' for 'monthdash' with default 1
  DEBUG (iso8601:140) Got 4 for 'month' with default 4
  DEBUG (iso8601:140) Got u'19' for 'daydash' with default 1
  DEBUG (iso8601:140) Got 19 for 'day' with default 19
  DEBUG (iso8601:140) Got u'22' for 'hour' with default None
  DEBUG (iso8601:140) Got u'56' for 'minute' with default None
  DEBUG (iso8601:140) Got u'02' for 'second' with default None
  DEBUG (session:195) REQ: curl -g -i -X POST 
http://10.0.0.244:8774/v2/959d7f7e020b48509aea18dcec819491/servers -H 
"User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}d66f80c62fd0af6d801994e38c69d8e2a1833370" -d '{"server": {"name": "d01", 
"imageRef": "b4ca9864-9ceb-4b42-9c82-620f0e2fd60d", "flavorRef": "1", 
"max_count": 1, "min_count": 1, "networks": [{"uuid": 
"40945ae1-344c-4ebd-a25b-2776feb0f409"}]}}'
  DEBUG (retry:155) Converted retries value: 0 -> Retry(total=0, connect=None, 
read=None, redirect=0)
  DEBUG (connectionpool:383) "POST /v2/959d7f7e020b48509aea18dcec819491/servers 
HTTP/1.1" 500 128
  DEBUG (session:223) RESP:
  DEBUG (shell:914) The server has either erred or is incapable of performing 
the requested operation. (HTTP 500)
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 911, in 
main
  OpenStackComputeShell().main(argv)
File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 838, in 
main
  args.func(self.cs, args)
File "/usr/lib/python2.7/site-packages/novaclient/v2/shell.py", line 500, 
in do_boot
  server = cs.servers.create(*boot_args, **boot_kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 929, 
in create
  **boot_kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 557, 
in _boot
  return_raw=return_raw, **kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 152, in 
_create
  _resp, body = self.api.client.post(url, body=body)
File "/usr/lib/python2.7/site-packages/keystoneclient/adapter.py", line 
171, in post
  return self.request(url, 'POST', **kwargs)
File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 97, in 
request
  raise exceptions.from_response(resp, body, url, method)
  ClientException: The server has either erred or is incapable of performing 
the requested operation. (HTTP 500)
  ERROR (ClientException): The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)

  
  Tracing back, I found the HTTP 500 was returned by request() in
  sessions.py, even though the token string was valid.

  > 
/usr/lib/python2.7/site-packages/requests/sessions.py(309)request()->
  -> return resp
  (Pdb) 
  > 
/usr/lib/python2.7/site-packages/keystoneclient/session.py(435)_send_request()
  -> if log:
  (Pdb) p resp
  
  (Pdb) p resp.status_code
  500

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1446017/+subscriptions


[Yahoo-eng-team] [Bug 1493658] [NEW] neutronclient is case-insensitive when deleting firewall rule

2015-09-08 Thread Reedip
Public bug reported:

- A user can create firewall rules named demo_rule1 and Demo_Rule1 in Horizon
and in neutron-client.
However, on trying to delete a rule with "neutron firewall-rule-delete
demo_rule1", we get an error:
NeutronClientNoUniqueMatch: Multiple firewall_rule matches found for name
'demo_rule1', use an ID to be more specific.

Ideally the Neutron client (or Neutron itself) should be able to
differentiate between Demo_Rule1 and demo_rule1.
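A case-sensitive name-to-ID resolution would keep the two rules distinct; a sketch (the data shape is an assumption about what neutronclient receives from the API, and the helper name is hypothetical):

```python
def find_rule_id_by_name(rules, name):
    # Exact, case-sensitive comparison: demo_rule1 and Demo_Rule1
    # are distinct names and each resolves to a single ID.
    matches = [r['id'] for r in rules if r['name'] == name]
    if not matches:
        raise LookupError("No firewall_rule named %r" % name)
    if len(matches) > 1:
        raise LookupError("Multiple firewall_rule matches found for "
                          "name %r, use an ID to be more specific." % name)
    return matches[0]
```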

** Affects: neutron
 Importance: Undecided
 Assignee: Reedip (reedip-banerjee)
 Status: New


** Tags: fwaas

** Changed in: neutron
 Assignee: (unassigned) => Reedip (reedip-banerjee)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493658

Title:
  neutronclient is case-insensitive when deleting firewall rule

Status in neutron:
  New

Bug description:
  - A user can create firewall rules named demo_rule1 and Demo_Rule1 in
  Horizon and in neutron-client.
  However, on trying to delete a rule with "neutron firewall-rule-delete
  demo_rule1", we get an error:
  NeutronClientNoUniqueMatch: Multiple firewall_rule matches found for name
  'demo_rule1', use an ID to be more specific.

  Ideally the Neutron client (or Neutron itself) should be able to
  differentiate between Demo_Rule1 and demo_rule1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493658/+subscriptions



[Yahoo-eng-team] [Bug 1493662] [NEW] ovs-agent-plugin polling manager fails to monitor interface table

2015-09-08 Thread huan
Public bug reported:

My environment is XenServer + Neutron with the ML2 plugin, OVS as the
driver and VLAN network type.
This is a single-box environment installed by devstack.
Once it started running, I found q-agt always had error logs like this:

2015-09-09 05:15:23.653 ERROR neutron.agent.linux.ovsdb_monitor [req-
2dfc6939-277b-4d54-83d5-d0aecbfdc07c None None] Interface monitor is not
active

Digging into the code, I found the call stack traces from
OVSNeutronAgent.rpc_loop() to remove_abs_path() in
neutron/agent/linux/utils.py. So I temporarily added a debug log and found:

2015-09-09 05:15:23.653 DEBUG neutron.agent.linux.utils 
[req-2dfc6939-277b-4d54-83d5-d0aecbfdc07c None None] cmd_matches_expected, 
cmd:['/usr/bin/python', '/usr/local/bin/neutron-rootwrap-xen-dom0', 
'/etc/neutron/rootwrap.conf', 'ovsdb-client', 'monitor', 'Interface', 
'name,ofport,external_ids', '--format=json'], expect:['ovsdb-client', 
'monitor', 'Interface', 'name,ofport,external_ids', '--format=json'] from 
(pid=11595) cmd_matches_expected 
/opt/stack/neutron/neutron/agent/linux/utils.py:303
 
So it is clear that even after removing the absolute paths, the command
still does not match the expected one, which leads to the ERROR log
"Interface monitor is not active".
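The failure mode can be reproduced with a small sketch (simplified stand-ins for the helpers in neutron/agent/linux/utils.py, not the real implementations): an exact comparison fails because the XenServer rootwrap wrapper prefixes the command, while a suffix comparison would still match.

```python
import os

def remove_abs_path(cmd):
    # Keep only the basename of each argument, as the agent does
    # before comparing a running command with the expected one.
    return [os.path.basename(part) for part in cmd]

def cmd_matches_expected(cmd, expected):
    # Exact comparison: fails when a rootwrap wrapper prefixes the command.
    return remove_abs_path(cmd) == expected

def cmd_ends_with_expected(cmd, expected):
    # Suffix comparison: tolerates the wrapper prefix.
    return remove_abs_path(cmd)[-len(expected):] == expected
```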

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493662

Title:
  ovs-agent-plugin polling manager fails to monitor interface table

Status in neutron:
  New

Bug description:
  My environment is XenServer + Neutron with the ML2 plugin, OVS as the
  driver and VLAN network type.
  This is a single-box environment installed by devstack.
  Once it started running, I found q-agt always had error logs like this:

  2015-09-09 05:15:23.653 ERROR neutron.agent.linux.ovsdb_monitor [req-
  2dfc6939-277b-4d54-83d5-d0aecbfdc07c None None] Interface monitor is
  not active

  Digging into the code, I found the call stack traces from
  OVSNeutronAgent.rpc_loop() to remove_abs_path() in
  neutron/agent/linux/utils.py. So I temporarily added a debug log and found:

  2015-09-09 05:15:23.653 DEBUG neutron.agent.linux.utils 
[req-2dfc6939-277b-4d54-83d5-d0aecbfdc07c None None] cmd_matches_expected, 
cmd:['/usr/bin/python', '/usr/local/bin/neutron-rootwrap-xen-dom0', 
'/etc/neutron/rootwrap.conf', 'ovsdb-client', 'monitor', 'Interface', 
'name,ofport,external_ids', '--format=json'], expect:['ovsdb-client', 
'monitor', 'Interface', 'name,ofport,external_ids', '--format=json'] from 
(pid=11595) cmd_matches_expected 
/opt/stack/neutron/neutron/agent/linux/utils.py:303
   
  So it is clear that even after removing the absolute paths, the command
  still does not match the expected one, which leads to the ERROR log
  "Interface monitor is not active".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493662/+subscriptions
