[Yahoo-eng-team] [Bug 1362416] [NEW] midonet: Deletes incorrect nested DHCP subnet

2014-08-27 Thread Angus Lees
Public bug reported:

The midonet "delete_dhcp" function doesn't check the subnet prefix length
when searching for the DHCP entries to delete.  This means it can delete the
wrong subnet entry if two are nested (i.e. they have the same prefix but
different lengths).

Disclaimer: I don't have midonet equipment and don't know whether it is even
capable of supporting nested DHCP subnets - I just found this while
wondering why the 'net_len' variable was unused.
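
A minimal illustrative sketch (not the midonet driver code; names here are
made up) of why matching on the prefix alone is not enough when subnets can
be nested, and how also comparing the prefix length disambiguates them:

    import netaddr

    def find_dhcp_subnet(dhcp_entries, cidr):
        """Return the DHCP entry matching both the prefix and the prefix length.

        Matching on the prefix alone would treat nested subnets such as
        10.0.0.0/16 and 10.0.0.0/24 as the same entry.
        """
        target = netaddr.IPNetwork(cidr)
        for entry in dhcp_entries:
            candidate = netaddr.IPNetwork(entry)
            if (candidate.network == target.network and
                    candidate.prefixlen == target.prefixlen):
                return entry
        return None

    # 10.0.0.0/16 and 10.0.0.0/24 share a prefix but are distinct subnets:
    print(find_dhcp_subnet(['10.0.0.0/16', '10.0.0.0/24'], '10.0.0.0/24'))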

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362416

Title:
  midonet: Deletes incorrect nested DHCP subnet

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The midonet "delete_dhcp" function doesn't check the subnet prefix length
  when searching for the DHCP entries to delete.  This means it can delete
  the wrong subnet entry if two are nested (i.e. they have the same prefix
  but different lengths).

  Disclaimer: I don't have midonet equipment and don't know whether it is
  even capable of supporting nested DHCP subnets - I just found this while
  wondering why the 'net_len' variable was unused.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362405] [NEW] 'Force' option broken for quota update

2014-08-27 Thread Kieran Spear
Public bug reported:

This change broke the ability to force quotas below the current in-use
value by adding new validation checks:

https://review.openstack.org/#/c/28232/


$ nova quota-update --force --cores 0 132
ERROR (BadRequest): Quota limit must be greater than 1. (HTTP 400) (Request-ID: 
req-ff0751a9-9e87-443e-9965-a30768f91d9f)
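
The intended semantics of --force (a hedged sketch with illustrative names,
not the actual nova validation code) are that the "limit below the minimum /
below current usage" check is skipped when force is set:

    def validate_quota_limit(limit, minimum, force=False):
        """Reject limits below the allowed minimum unless force is requested."""
        if limit < -1:
            raise ValueError("Quota limit must be -1 or greater.")
        if not force and limit < minimum:
            raise ValueError("Quota limit must be greater than %s." % minimum)
        return limit

    validate_quota_limit(0, minimum=2, force=True)   # accepted when forced
    # validate_quota_limit(0, minimum=2)             # would raise, as in the bug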

** Affects: nova
 Importance: Undecided
 Assignee: Kieran Spear (kspear)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Kieran Spear (kspear)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362405

Title:
  'Force' option broken for quota update

Status in OpenStack Compute (Nova):
  New

Bug description:
  This change broke the ability to force quotas below the current in-use
  value by adding new validation checks:

  https://review.openstack.org/#/c/28232/

  
  $ nova quota-update --force --cores 0 132
  ERROR (BadRequest): Quota limit must be greater than 1. (HTTP 400) 
(Request-ID: req-ff0751a9-9e87-443e-9965-a30768f91d9f)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358155] Re: Using CLI I am able to stop server when it is locked

2014-08-27 Thread Thang Pham
I cannot reproduce this bug based on the steps you listed above.  I get
the following error, which is correct:

$ nova stop test1
ERROR (Conflict): Instance f7961e90-29c8-4e5d-8e8e-34ad7e46b834 is locked (HTTP 
409) (Request-ID: req-5193f0fb-3cd5-4603-80ea-74585caae612)

However, if I attempt to use the admin user to stop the instance, it
works, which is what I expected.


** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358155

Title:
  Using CLI I am able to stop server when it is locked

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I am using devstack + tempest -

  Operations are as -

  1.

  [raies@localhost devstack]$ nova list
  +--------------------------------------+-------------+--------+------------+-------------+-------------------+
  | ID                                   | Name        | Status | Task State | Power State | Networks          |
  +--------------------------------------+-------------+--------+------------+-------------+-------------------+
  | d44993fc-c81f-4e4c-9adf-09019859cb31 | test-server | ACTIVE | -          | Running     | public=172.24.4.7 |
  +--------------------------------------+-------------+--------+------------+-------------+-------------------+

  2.

  [raies@localhost devstack]$ nova lock d44993fc-c81f-4e4c-9adf-09019859cb31

  3.

  [raies@localhost devstack]$ nova stop d44993fc-c81f-4e4c-9adf-09019859cb31

  All of the above commands succeed, but the third command should raise a
  conflict exception instead of succeeding.

  The above can be confirmed by the API test in tempest -

  tempest/tempest/api/compute/servers/test_server_actions.py  -->
  test_lock_unlock_server

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358155/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358158] Re: unable to get servers locked status

2014-08-27 Thread Ghanshyam Mann
*** This bug is a duplicate of bug 1326553 ***
https://bugs.launchpad.net/bugs/1326553

It has nothing to do with Tempest :)


** Project changed: tempest => nova

** This bug has been marked a duplicate of bug 1326553
   Instance lock/unlock state is not presented anywhere

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358158

Title:
  unable to get servers locked status

Status in OpenStack Compute (Nova):
  New

Bug description:
  1. When one stops a server, its status is "SHUTOFF".
  2. When I start a server successfully, its status is "ACTIVE".

  But when I lock a server using the following command -

  # nova lock 

  and then check the server status using # nova show  or
  # nova list | grep , the status is still "ACTIVE".

  This should be "LOCKED" or "ACTIVE + LOCKED", not "ACTIVE".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358158/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358158] [NEW] unable to get servers locked status

2014-08-27 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

1. When one stops a server, its status is "SHUTOFF".
2. When I start a server successfully, its status is "ACTIVE".

But when I lock a server using the following command -

# nova lock 

and then check the server status using # nova show  or
# nova list | grep , the status is still "ACTIVE".

This should be "LOCKED" or "ACTIVE + LOCKED", not "ACTIVE".

** Affects: nova
 Importance: Undecided
 Status: New

-- 
unable to get servers locked status
https://bugs.launchpad.net/bugs/1358158
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358155] Re: Using CLI I am able to stop server when it is locked

2014-08-27 Thread Ghanshyam Mann
What was the role? Was it admin?

I think you performed this operation with the admin role (please confirm).
A server can be stopped only from two states, ACTIVE or ERROR. If a server is
locked, then only an admin can stop it; a non-admin user will get a conflict
error if they try the same.
The Tempest test
(api/compute/servers/test_server_actions.py:test_lock_unlock_server) confirms
and validates this behavior.

This has nothing to do with Tempest; if there is any issue, it is in Nova.
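
A rough sketch of the behavior described above (illustrative only, not the
actual nova check): a stop request on a locked instance is rejected with a
conflict unless the caller is an admin.

    class InstanceIsLocked(Exception):
        """Maps to HTTP 409 Conflict in the API layer."""

    def check_instance_lock(instance, context):
        """Reject the action if the instance is locked and the caller is not admin."""
        if instance.get('locked') and not context.get('is_admin'):
            raise InstanceIsLocked("Instance %s is locked" % instance['uuid'])

    instance = {'uuid': 'd44993fc-c81f-4e4c-9adf-09019859cb31', 'locked': True}
    check_instance_lock(instance, {'is_admin': True})    # admin may proceed
    # check_instance_lock(instance, {'is_admin': False}) # raises InstanceIsLocked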

** Project changed: tempest => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358155

Title:
  Using CLI I am able to stop server when it is locked

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am using devstack + tempest -

  Operations are as -

  1.

  [raies@localhost devstack]$ nova list
  +--------------------------------------+-------------+--------+------------+-------------+-------------------+
  | ID                                   | Name        | Status | Task State | Power State | Networks          |
  +--------------------------------------+-------------+--------+------------+-------------+-------------------+
  | d44993fc-c81f-4e4c-9adf-09019859cb31 | test-server | ACTIVE | -          | Running     | public=172.24.4.7 |
  +--------------------------------------+-------------+--------+------------+-------------+-------------------+

  2.

  [raies@localhost devstack]$ nova lock d44993fc-c81f-4e4c-9adf-09019859cb31

  3.

  [raies@localhost devstack]$ nova stop d44993fc-c81f-4e4c-9adf-09019859cb31

  All of the above commands succeed, but the third command should raise a
  conflict exception instead of succeeding.

  The above can be confirmed by the API test in tempest -

  tempest/tempest/api/compute/servers/test_server_actions.py  -->
  test_lock_unlock_server

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358155/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358155] [NEW] Using CLI I am able to stop server when it is locked

2014-08-27 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

I am using devstack + tempest -

Operations are as -

1.

[raies@localhost devstack]$ nova list
+--------------------------------------+-------------+--------+------------+-------------+-------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks          |
+--------------------------------------+-------------+--------+------------+-------------+-------------------+
| d44993fc-c81f-4e4c-9adf-09019859cb31 | test-server | ACTIVE | -          | Running     | public=172.24.4.7 |
+--------------------------------------+-------------+--------+------------+-------------+-------------------+

2.

[raies@localhost devstack]$ nova lock d44993fc-c81f-4e4c-9adf-09019859cb31

3.

[raies@localhost devstack]$ nova stop d44993fc-c81f-4e4c-9adf-09019859cb31

All of the above commands succeed, but the third command should raise a
conflict exception instead of succeeding.

The above can be confirmed by the API test in tempest -

tempest/tempest/api/compute/servers/test_server_actions.py  -->
test_lock_unlock_server

** Affects: nova
 Importance: Undecided
 Status: New

-- 
Using CLI I am able to stop server when it is locked
https://bugs.launchpad.net/bugs/1358155
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362347] [NEW] neutron-ns-meta invokes oom-killer during gate runs

2014-08-27 Thread Matthew Treinish
Public bug reported:

Occasionally a neutron gate job fails because the node runs out of
memory. oom-killer is invoked and starts killing processes to save the
node, which just causes cascading issues. The kernel logs show that
oom-killer is being invoked by neutron-ns-meta.

An example of  one such failure is:

http://logs.openstack.org/75/116775/2/check/check-tempest-dsvm-neutron-
full/ab17a70/

With the kernel log:

http://logs.openstack.org/75/116775/2/check/check-tempest-dsvm-neutron-
full/ab17a70/logs/syslog.txt.gz#_Aug_26_04_59_03

Using logstash, this failure can be isolated to neutron gate jobs only,
so something is probably triggering neutron to occasionally make the job
consume in excess of 8GB of RAM.

I also noted in the neutron svc log that the first out-of-memory error
came from keystone-middleware:

http://logs.openstack.org/75/116775/2/check/check-tempest-dsvm-neutron-
full/ab17a70/logs/screen-q-svc.txt.gz#_2014-08-26_04_56_39_602

but that may just be a red herring.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362347

Title:
  neutron-ns-meta invokes oom-killer during gate runs

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Occasionally a neutron gate job fails because the node runs out of
  memory. oom-killer is invoked and starts killing processes to save the
  node, which just causes cascading issues. The kernel logs show that
  oom-killer is being invoked by neutron-ns-meta.

  An example of  one such failure is:

  http://logs.openstack.org/75/116775/2/check/check-tempest-dsvm-
  neutron-full/ab17a70/

  With the kernel log:

  http://logs.openstack.org/75/116775/2/check/check-tempest-dsvm-
  neutron-full/ab17a70/logs/syslog.txt.gz#_Aug_26_04_59_03

  Using logstash, this failure can be isolated to neutron gate jobs only,
  so something is probably triggering neutron to occasionally make the job
  consume in excess of 8GB of RAM.

  I also noted in the neutron svc log that the first out-of-memory error
  came from keystone-middleware:

  http://logs.openstack.org/75/116775/2/check/check-tempest-dsvm-
  neutron-full/ab17a70/logs/screen-q-svc.txt.gz#_2014-08-26_04_56_39_602

  but that may just be a red herring.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362343] [NEW] weak digest algorithm for PKI

2014-08-27 Thread Brant Knudson
Public bug reported:

The digest algorithm for PKI tokens is the openssl default of sha1. This
is a weak algorithm and some security standards require a stronger
algorithm such as sha256. Keystone should make the token digest hash
algorithm configurable so that deployments can use a stronger algorithm.

Also, the default could be stronger.
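
A hedged sketch of what a configurable digest could look like when shelling
out to openssl for CMS signing; the function name and the exact flag set
below are illustrative, not the actual keystone/keystoneclient code:

    import subprocess

    def cms_sign(data, signing_cert, signing_key, digest='sha256'):
        """Sign token data with openssl cms, using a configurable digest algorithm."""
        cmd = ['openssl', 'cms', '-sign', '-signer', signing_cert,
               '-inkey', signing_key, '-outform', 'PEM',
               '-nosmimecap', '-nodetach', '-nocerts', '-noattr',
               '-md', digest]
        proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate(data)
        if proc.returncode:
            raise RuntimeError(err)
        return out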

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: New

** Affects: python-keystoneclient
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: New

** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

** Changed in: keystone
 Assignee: (unassigned) => Brant Knudson (blk-u)

** Changed in: python-keystoneclient
 Assignee: (unassigned) => Brant Knudson (blk-u)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1362343

Title:
  weak digest algorithm for PKI

Status in OpenStack Identity (Keystone):
  New
Status in Python client library for Keystone:
  New

Bug description:
  The digest algorithm for PKI tokens is the openssl default of sha1.
  This is a weak algorithm and some security standards require a
  stronger algorithm such as sha256. Keystone should make the token
  digest hash algorithm configurable so that deployments can use a
  stronger algorithm.

  Also, the default could be stronger.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1362343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362344] [NEW] Tests for Trusts don't actually test the roles provided in the trust reference

2014-08-27 Thread Lance Bragstad
Public bug reported:

The tests for Trusts leverage the test_v3.py module and are housed in
test_v3_auth.py. These tests use the new_trust_ref() provided in
test_v3.py to build new trust references [1]. The new_trust_ref method
is supposed to build a list of role ids that can be used in the request.
In the trust implementation, the controller expects roles to be passed
in by 'id' or by 'name' [2]. The trust['roles'] list/dict that is
returned from new_trust_ref isn't actually exercised here, since the
new_trust_ref() method doesn't build the list of roles with the
'name':  or 'id':  structure that the implementation expects [3]
(see the sketch after the links below).

I discovered this by removing the new_ref() call in new_trust_ref()
since a trust doesn't share any common attributes like 'name',
'description' and so on [4].

Once you remove the call to new_ref, and the del ref['id'] calls, the
test_v3_auth.py unit tests start failing because of this exception [4].


Here is an example of the test failures: http://paste.openstack.org/show/101240/


[1] 
https://github.com/openstack/keystone/blob/f4f0bdf092edf7ba6e74019f5524629fd2ad85ce/keystone/tests/test_v3.py#L340-L371
[2] 
https://github.com/openstack/keystone/blob/f4f0bdf092edf7ba6e74019f5524629fd2ad85ce/keystone/trust/controllers.py#L101-L118
[3] 
https://github.com/openstack/keystone/blob/f4f0bdf092edf7ba6e74019f5524629fd2ad85ce/keystone/trust/controllers.py#L105-L108
[4] 
https://github.com/openstack/keystone/blob/f4f0bdf092edf7ba6e74019f5524629fd2ad85ce/keystone/trust/controllers.py#L115-L117
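
For illustration only (a sketch mimicking the controller behavior described
in [2] and [3], not the test or controller code itself), each entry in
trust['roles'] has to be a dict keyed by 'id' or 'name' for the
role-matching path to be exercised at all:

    def normalize_trust_roles(roles):
        """Mimic the controller's expectation: each role is given by 'id' or 'name'."""
        clean = []
        for role in roles:
            if 'id' in role:
                clean.append({'id': role['id']})
            elif 'name' in role:
                clean.append({'name': role['name']})
            else:
                # A role shaped any other way never reaches the id/name lookup,
                # so that code path goes untested.
                raise ValueError('role is not keyed by id or name: %r' % role)
        return clean

    print(normalize_trust_roles([{'name': 'member'}, {'id': 'abc123'}]))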

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1362344

Title:
  Tests for Trusts don't actually test the roles provided in the trust
  reference

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The tests for Trusts leverage the test_v3.py module and are housed in
  test_v3_auth.py. These tests use the new_trust_ref() provided in
  test_v3.py to build new trust references [1]. The new_trust_ref method
  is supposed to build a list of role ids that can be used in the request.
  In the trust implementation, the controller expects roles to be passed
  in by 'id' or by 'name' [2]. The trust['roles'] list/dict that is
  returned from new_trust_ref isn't actually exercised here, since the
  new_trust_ref() method doesn't build the list of roles with the
  'name':  or 'id':  structure that the implementation expects [3].

  I discovered this by removing the new_ref() call in new_trust_ref()
  since a trust doesn't share any common attributes like 'name',
  'description' and so on [4].

  Once you remove the call to new_ref, and the del ref['id'] calls, the
  test_v3_auth.py unit tests start failing because of this exception
  [4].

  
  Here is an example of the test failures: 
http://paste.openstack.org/show/101240/


  [1] 
https://github.com/openstack/keystone/blob/f4f0bdf092edf7ba6e74019f5524629fd2ad85ce/keystone/tests/test_v3.py#L340-L371
  [2] 
https://github.com/openstack/keystone/blob/f4f0bdf092edf7ba6e74019f5524629fd2ad85ce/keystone/trust/controllers.py#L101-L118
  [3] 
https://github.com/openstack/keystone/blob/f4f0bdf092edf7ba6e74019f5524629fd2ad85ce/keystone/trust/controllers.py#L105-L108
  [4] 
https://github.com/openstack/keystone/blob/f4f0bdf092edf7ba6e74019f5524629fd2ad85ce/keystone/trust/controllers.py#L115-L117

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1362344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362325] [NEW] Dashboard with slug "sahara" is not registered

2014-08-27 Thread Matt Riedemann
Public bug reported:

Saw this in a nova patch going through the check queue:

http://logs.openstack.org/03/103703/5/check/check-tempest-dsvm-
full/525cfaf/logs/horizon_error.txt.gz

[Wed Aug 27 18:07:56.251879 2014] [:error] [pid 20373:tid 140464345409280] Internal Server Error: /project/
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 137, in get_response
    response = response.render()
  File "/usr/local/lib/python2.7/dist-packages/django/template/response.py", line 105, in render
    self.content = self.rendered_content
  File "/usr/local/lib/python2.7/dist-packages/django/template/response.py", line 82, in rendered_content
    content = template.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 140, in render
    return self._render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 134, in _render
    return self.nodelist.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 840, in render
    bit = self.render_node(node, context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 78, in render_node
    return node.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", line 123, in render
    return compiled_parent._render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 134, in _render
    return self.nodelist.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 840, in render
    bit = self.render_node(node, context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 78, in render_node
    return node.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", line 62, in render
    result = block.nodelist.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 840, in render
    bit = self.render_node(node, context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 78, in render_node
    return node.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", line 62, in render
    result = block.nodelist.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 840, in render
    bit = self

[Yahoo-eng-team] [Bug 1362308] [NEW] Use dict_extend_functions mechanism to populate provider network attributes in Nuage Plugin

2014-08-27 Thread Divya ChanneGowda
Public bug reported:

Use dict_extend_functions mechanism to handle populating additional
provider network attributes into Network model.
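
For reference, the registration pattern used by other plugins (e.g. ML2)
looks roughly like the sketch below; the Nuage extend method shown here is
illustrative, not the actual Nuage code:

    from neutron.api.v2 import attributes
    from neutron.db import db_base_plugin_v2

    class NuagePlugin(db_base_plugin_v2.NeutronDbPluginV2):

        def _extend_network_dict_provider(self, network_res, network_db):
            # Copy provider attributes from the DB model into the API dict
            # (the attribute names on network_db are illustrative).
            network_res['provider:network_type'] = network_db.network_type
            return network_res

        # Register the hook so every network dict built by the base plugin
        # is passed through _extend_network_dict_provider.
        db_base_plugin_v2.NeutronDbPluginV2.register_dict_extend_funcs(
            attributes.NETWORKS, ['_extend_network_dict_provider'])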

** Affects: neutron
 Importance: Undecided
 Assignee: Divya ChanneGowda (divya-hc)
 Status: New


** Tags: nuage

** Tags added: nuage

** Changed in: neutron
 Assignee: (unassigned) => Divya ChanneGowda (divya-hc)

** Description changed:

- Use register_dict_extend_funcs for provider network attributes in Nuage
- Plugin
+ Use dict_extend_functions mechanism to handle populating additional
+ provider network attributes into Network model.

** Summary changed:

- Use register_dict_extend_funcs for provider network attributes in Nuage Plugin
+ Use dict_extend_functions mechanism to populate provider network attributes 
in Nuage Plugin

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362308

Title:
  Use dict_extend_functions mechanism to populate provider network
  attributes in Nuage Plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Use dict_extend_functions mechanism to handle populating additional
  provider network attributes into Network model.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362308/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362309] [NEW] Creating an endpoint with an invalid service_id returns the wrong error code

2014-08-27 Thread David Stanek
Public bug reported:

When creating or updating an endpoint with an invalid service_id
specified the server returns a 404 instead of a 400. While it's true
that the service can't be found, that's not the resource the client is
attempting to access.  This is misleading.
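
A hedged sketch of the kind of change implied (illustrative, not the actual
keystone patch): translate the "service not found" condition into a
validation error on the endpoint request, so the client sees a 400 instead
of a 404.

    from keystone import exception

    def validate_endpoint_service(catalog_api, endpoint_ref):
        """Reject an endpoint whose service_id does not exist with a 400, not a 404."""
        try:
            catalog_api.get_service(endpoint_ref['service_id'])
        except exception.ServiceNotFound:
            # ValidationError is rendered as HTTP 400 Bad Request.
            raise exception.ValidationError(attribute='valid service_id',
                                            target='endpoint')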

** Affects: keystone
 Importance: Undecided
 Assignee: David Stanek (dstanek)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1362309

Title:
  Creating an endpoint with an invalid service_id returns the wrong
  error code

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  When creating or updating an endpoint with an invalid service_id
  specified the server returns a 404 instead of a 400. While it's true
  that the service can't be found, that's not the resource the client is
  attempting to access.  This is misleading.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1362309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1056437] Re: L3 agent should support provider external networks

2014-08-27 Thread Cedric Brandily
This bug has been solved in trunk [1] (icehouse) and backported to
stable/havana [2]: when external_network_bridge and
gateway_external_network_id are empty, the l3-agent uses the network's
provider attributes to deploy the external network (see the configuration
snippet below).

[1] https://review.openstack.org/59359
[2] https://review.openstack.org/68601
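
For deployers, the backward-compatible configuration described above amounts
to leaving both options empty in the l3-agent configuration (a minimal
excerpt; the file location may vary per distribution):

    # l3_agent.ini
    [DEFAULT]
    external_network_bridge =
    gateway_external_network_id =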

** Changed in: neutron
   Status: Confirmed => Fix Released

** Tags removed: havana-backport-potential l3-ipam-dhcp
** Tags added: in-stable-havana

** Tags added: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1056437

Title:
  L3 agent should support provider external networks

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  The l3-agent requires an "external" quantum network, and creates ports
  on that network, but does not actually use the external quantum
  network for network connectivity. Instead, it requires manual
  configuration of an external_network_bridge for connectivity.

  Now that the provider network extension allows creation of quantum
  networks that do provide external connectivity, the l3-agent should
  support external connectivity through such networks. This can be done
  in a backward-compatible way by interpreting an empty
  external_network_bridge configuration variable as indicating that the
  external network should actually be used for connectivity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1056437/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362301] [NEW] neutron.plugins.openvswitch.agent.ovs_neutron_agent fails to configure bridges

2014-08-27 Thread Tom Carroll
Public bug reported:

I'm testing icehouse on XenServer release Creedence (what will be 6.5).
The kernel is 3.10. When iptables-restore is called via neutron-
rootwrap-xen-dom0, iptables-restore produces warnings that the state
netfilter module is obsolete (but the command does exit with 0). The
response cannot be parsed by rootwrap as valid XenAPI xml rpc. rootwrap
terminates with exit code 96 and the previous iptables rules are
restored. The effect of this failure is that the bridges are not
configured. Log entries are provided at the end of this message.

If instances of "-m state --state" are translated to "-m conntrack
--ctstate", the error stops and the bridges can be correctly configured.
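
A minimal sketch of that workaround (illustrative only, not the committed
fix; the sample rule is made up): rewriting the rules before handing them to
iptables-restore.

    def modernize_state_match(rules):
        """Replace the obsolete '-m state --state' match with '-m conntrack --ctstate'."""
        return [rule.replace('-m state --state', '-m conntrack --ctstate')
                for rule in rules]

    print(modernize_state_match(
        ['-A neutron-openvswi-sg-chain -m state --state RELATED,ESTABLISHED -j RETURN']))
    # ['-A neutron-openvswi-sg-chain -m conntrack --ctstate RELATED,ESTABLISHED -j RETURN']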

TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent Command:
['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf',
'iptables-restore', '-c']

2014-08-27 06:35:28.609 1392 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Exit code: 96
2014-08-27 06:35:28.609 1392 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stdout: ''
2014-08-27 06:35:28.609 1392 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stderr: 'Traceback (most 
recent call last):\n  File "/usr/bin/neutron-rootwrap-xen-dom0", line 118, in 
run_command\n{\'cmd\': json.dumps(user_args), \'cmd_input\': 
json.dumps(cmd_input)})\n  File "/usr/lib/python2.7/dist-packages/XenAPI.py", 
line 235, in __call__\nreturn self.__send(self.__name, args)\n  File 
"/usr/lib/python2.7/dist-packages/XenAPI.py", line 139, in xenapi_request\n
result = _parse_result(getattr(self, methodname)(*full_params))\n  File 
"/usr/lib/python2.7/dist-packages/XenAPI.py", line 209, in _parse_result\n
raise Failure(result[\'ErrorDescription\'])\nFailure: 
[\'XENAPI_PLUGIN_FAILURE\', \'run_command\', \'PluginError\', \'WARNING: The 
state match is obsolete. Use conntrack instead.\\nWARNING: The state match is 
obsolete. Use conntrack instead.\\nWARNING: The state match is obsolete. Use 
conntrack instead.\\nWARNING: The state match is obsolete. Use conntrack instead.\\n\']\n'

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362301

Title:
  neutron.plugins.openvswitch.agent.ovs_neutron_agent fails to configure
  bridges

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I'm testing icehouse on XenServer release Creedence (what will be 6.5).
  The kernel is 3.10. When iptables-restore is called via neutron-
  rootwrap-xen-dom0, iptables-restore produces warnings that the state
  netfilter module is obsolete (but the command does exit with 0). The
  response cannot be parsed by rootwrap as valid XenAPI xml rpc.
  rootwrap terminates with exit code 96 and the previous iptables rules
  are restored. The effect of this failure is that the bridges are not
  configured. Log entries are provided at the end of this message.

  If instances of "-m state --state" are translated to "-m conntrack
  --ctstate", the error stops and the bridges can be correctly
  configured.

  TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent Command:
  ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf',
  'iptables-restore', '-c']

  2014-08-27 06:35:28.609 1392 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Exit code: 96
  2014-08-27 06:35:28.609 1392 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stdout: ''
  2014-08-27 06:35:28.609 1392 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stderr: 'Traceback (most 
recent call last):\n  File "/usr/bin/neutron-rootwrap-xen-dom0", line 118, in 
run_command\n{\'cmd\': json.dumps(user_args), \'cmd_input\': 
json.dumps(cmd_input)})\n  File "/usr/lib/python2.7/dist-packages/XenAPI.py", 
line 235, in __call__\nreturn self.__send(self.__name, args)\n  File 
"/usr/lib/python2.7/dist-packages/XenAPI.py", line 139, in xenapi_request\n
result = _parse_result(getattr(self, methodname)(*full_params))\n  File 
"/usr/lib/python2.7/dist-packages/XenAPI.py", line 209, in _parse_result\n
raise Failure(result[\'ErrorDescription\'])\nFailure: 
[\'XENAPI_PLUGIN_FAILURE\', \'run_command\', \'PluginError\', \'WARNING: The 
state match is obsolete. Use conntrack instead.\\nWARNING: The state match is 
obsolete. Use conntrack instead.\\nWARNING: The state match is obsolete. Use 
conntrack instead.\\nWARNING: The state match is obsolete. Use conntrack instead.\\n\']\n'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362301/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362291] Re: Project creation attributes in Identity API are inconsistent with implementation

2014-08-27 Thread Lance Bragstad
The implementation does make sure a `domain_id` is supplied in the
project reference:

https://github.com/openstack/keystone/blob/f4f0bdf092edf7ba6e74019f5524629fd2ad85ce/keystone/assignment/controllers.py#L399

This invalidates the bug, since the user doesn't *have* to specify a
`domain_id` in the request; the Keystone server will populate it
automatically, making `domain_id` optional. The documentation was
correct.
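
A hedged sketch of the normalization being referenced (illustrative names;
see the controller code linked above for the real logic): if the request
omits domain_id, it is filled in from the caller's token scope.

    def normalize_domain_id(token_domain_id, project_ref):
        """Default a missing domain_id to the domain the caller is scoped to."""
        if not project_ref.get('domain_id'):
            project_ref = dict(project_ref, domain_id=token_domain_id)
        return project_ref

    print(normalize_domain_id('default', {'name': 'demo'}))
    # {'name': 'demo', 'domain_id': 'default'}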

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1362291

Title:
  Project creation attributes in Identity API are inconsistent with
  implementation

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  The Identity API lists `domain_id` as an optional attribute when
  creating a project. If a `domain_id` is optional, it can be supplied
  in the request as a valid id string, supplied in the request with
  value None, or not supplied in the request at all.

  Currently, the Keystone implementation doesn't allow the `domain_id`
  attribute to be nullable when creating a project [2].

  Either the implementation has to be updated to allow for truly
  optional `domain_id` attributes in project references, or `domain_id`
  attributes need to be represented as required in the Identity API so
  that it is not misleading to our users.

  
  [1] 
https://github.com/openstack/identity-api/blob/fadef23172a32d4c98c92edff4e6cc75d2fb5248/v3/src/markdown/identity-api-v3.md#projects-v3projects
  [2] 
https://github.com/openstack/keystone/blob/f4f0bdf092edf7ba6e74019f5524629fd2ad85ce/keystone/common/sql/migrate_repo/versions/034_havana.py#L114

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1362291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362291] [NEW] Project creation attributes in Identity API are inconsistent with implementation

2014-08-27 Thread Lance Bragstad
Public bug reported:

The Identity API lists `domain_id` as an optional attribute when
creating a project. If a `domain_id` is optional, it can be supplied in
the request as a valid id string, supplied in the request with value
None, or not supplied in the request at all.

Currently, the Keystone implementation doesn't allow the `domain_id`
attribute to be nullable when creating a project [2].

Either the implementation has to be updated to allow for truly optional
`domain_id` attributes in project references, or `domain_id` attributes
need to be represented as required in the Identity API so that it is not
misleading to our users.


[1] 
https://github.com/openstack/identity-api/blob/fadef23172a32d4c98c92edff4e6cc75d2fb5248/v3/src/markdown/identity-api-v3.md#projects-v3projects
[2] 
https://github.com/openstack/keystone/blob/f4f0bdf092edf7ba6e74019f5524629fd2ad85ce/keystone/common/sql/migrate_repo/versions/034_havana.py#L114

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1362291

Title:
  Project creation attributes in Identity API are inconsistent with
  implementation

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The Identity API lists `domain_id` as an optional attribute when
  creating a project. If a `domain_id` is optional, it can be supplied
  in the request as a valid id string, supplied in the request with
  value None, or not supplied in the request at all.

  Currently, the Keystone implementation doesn't allow the `domain_id`
  attribute to be nullable when creating a project [2].

  Either the implementation has to be updated to allow for truly
  optional `domain_id` attributes in project references, or `domain_id`
  attributes need to be represented as required in the Identity API so
  that it is not misleading to our users.

  
  [1] 
https://github.com/openstack/identity-api/blob/fadef23172a32d4c98c92edff4e6cc75d2fb5248/v3/src/markdown/identity-api-v3.md#projects-v3projects
  [2] 
https://github.com/openstack/keystone/blob/f4f0bdf092edf7ba6e74019f5524629fd2ad85ce/keystone/common/sql/migrate_repo/versions/034_havana.py#L114

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1362291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361413] Re: LBaaS documentation is outdated , shows listeners instead of VIPs

2014-08-27 Thread Anne Gentle
Neutron bug triager, you can move to the openstack-api-site project.

** Changed in: neutron
   Status: Incomplete => Confirmed

** Also affects: openstack-api-site
   Importance: Undecided
   Status: New

** Changed in: openstack-api-site
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361413

Title:
  LBaaS documentation is outdated , shows listeners instead of VIPs

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack API documentation site:
  Confirmed

Bug description:
  The documentation for the LBaaS REST API endpoints listed on the official
  docs website does not match the REST API exposed by neutron.
  Documentation URL: 
http://developer.openstack.org/api-ref-networking-v2.html#lbaas

  In the API docs there is a reference to /listeners. However, neutron
  doesn't have an API for /listeners, it only has an API for /vips

  Below is a curl command demonstrating the issue:
  Listing VIPs: *WORKS
  curl -i http://infracont.rnd.cloud:9696/v2.0/lb/vips -X GET -H "X-Auth-Token: 
5c5b55bb54cc4c90971fc695ff44923d" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "User-Agent: python-neutronclient"

  Listing Listeners: *FAILS
  curl -i http://infracont.rnd.cloud:9696/v2.0/lb/listeners -X GET -H 
"X-Auth-Token: 5c5b55bb54cc4c90971fc695ff44923d" -H "Content-Type: 
application/json" -H "Accept: application/json" -H "User-Agent: 
python-neutronclient"

  
  Openstack icehouse deployment.
  Running neutron version 2.3.4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362268] [NEW] Data Table update is recursive.

2014-08-27 Thread Charles V Bock
Public bug reported:

Data table refresh seems to be recursive causing a very large number of
Ajax refresh calls if more than one table row is being updated.

Check out this line in the tables JS:
https://github.com/openstack/horizon/blob/master/horizon/static/horizon/js/horizon.tables.js#L106

For example, let's say you're on the instances pane and 3 instances are in
some non-active state (building, resizing, etc.).

Each of the three rows gets updated, and on completion of each row the
update function sets a timeout. That timeout is per row, not per table, so
when those timeouts fire, each one triggers three more updates and the
number of calls grows geometrically.

See the attached image of Firefox network activity after letting the
instances page sit open for a little while; you can see it's calling the
update-row action much too frequently.

I believe the fix may be as simple as moving the setTimeout call outside
the per-row loop.

Thoughts?

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ajax datatable recursive refresh table

** Attachment added: "recursive ajax table refresh datatable"
   
https://bugs.launchpad.net/bugs/1362268/+attachment/4188443/+files/Recursive_Ajax.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1362268

Title:
  Data Table update is recursive.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Data table refresh seems to be recursive causing a very large number
  of Ajax refresh calls if more than one table row is being updated.

  Check out this line in the tables JS:
  
https://github.com/openstack/horizon/blob/master/horizon/static/horizon/js/horizon.tables.js#L106

  For example, let's say you're on the instances pane and 3 instances are
  in some non-active state (building, resizing, etc.).

  Each of the three rows gets updated, and on completion of each row the
  update function sets a timeout. That timeout is per row, not per table,
  so when those timeouts fire, each one triggers three more updates and
  the number of calls grows geometrically.

  See the attached image of Firefox network activity after letting the
  instances page sit open for a little while; you can see it's calling
  the update-row action much too frequently.

  I believe the fix may be as simple as moving the setTimeout call
  outside the per-row loop.

  Thoughts?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1362268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362242] [NEW] bridge_mappings isn't bound to any segment warning from l2pop

2014-08-27 Thread Carl Baldwin
Public bug reported:

Rossella asked me about this yesterday [1].  A brief discussion in the
DVR meeting this morning [2] seems to indicate it is not a serious
problem.  But, I thought I'd submit this bug as a place to land for
others who see this warning.  Hopefully at some point we can get it
cleaned up.

Here is the warning line from the log snippet in the pastebin [1].

57582 2014-08-27 14:15:50.401 16987 WARNING
neutron.plugins.ml2.drivers.l2pop.mech_driver [req-
ba914881-f88d-4793-a635-f4844855c9dd None] Port
2aba57cd-5739-433e-bf9a-60193b6bc4e8 updated by agent  isn't bound to
any segment

[1] http://paste.openstack.org/raw/101070/
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-alt/%23openstack-meeting-alt.2014-08-27.log
 at 2014-08-27T15:23:22

** Affects: neutron
 Importance: Low
 Status: Confirmed


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362242

Title:
  bridge_mappings isn't bound to any segment warning from l2pop

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Rossella asked me about this yesterday [1].  A brief discussion in the
  DVR meeting this morning [2] seems to indicate it is not a serious
  problem.  But, I thought I'd submit this bug as a place to land for
  others who see this warning.  Hopefully at some point we can get it
  cleaned up.

  Here is the warning line from the log snippet in the pastebin [1].

  57582 2014-08-27 14:15:50.401 16987 WARNING
  neutron.plugins.ml2.drivers.l2pop.mech_driver [req-
  ba914881-f88d-4793-a635-f4844855c9dd None] Port
  2aba57cd-5739-433e-bf9a-60193b6bc4e8 updated by agent  isn't bound to
  any segment

  [1] http://paste.openstack.org/raw/101070/
  [2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-alt/%23openstack-meeting-alt.2014-08-27.log
 at 2014-08-27T15:23:22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362244] [NEW] Issues with Nova Evacuate API

2014-08-27 Thread Majid
Public bug reported:

We have deployed OpenStack Icehouse with a Legacy Networking setup and we are
building our instances on shared storage (NFS). We want to protect VMs when a
host (compute node) fails. When we use the Nova Evacuate API with the
"--on-shared-storage" option, two issues happen:
1- Sometimes the evacuated VM will come up on the new host, but it will be
re-built from the base image.
2- Sometimes the instance will not come up at all. In this case, if we use
"nova reboot --hard " the VM will come up on the new host.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362244

Title:
  Issues with Nova Evacuate API

Status in OpenStack Compute (Nova):
  New

Bug description:
  We have deployed OpenStack Icehouse with a Legacy Networking setup and we are
  building our instances on shared storage (NFS). We want to protect VMs when a
  host (compute node) fails. When we use the Nova Evacuate API with the
  "--on-shared-storage" option, two issues happen:
  1- Sometimes the evacuated VM will come up on the new host, but it will be
  re-built from the base image.
  2- Sometimes the instance will not come up at all. In this case, if we use
  "nova reboot --hard " the VM will come up on the new host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362245] [NEW] Update Endpoint Filter APIs

2014-08-27 Thread Bob Thyne
Public bug reported:

Henry-Nash made a good comment regarding the existing/new Endpoint
Filter API. Currently the API looks like:

GET /OS-EP-FILTER/projects/$project_id/endpoints/$endpoint_id
HEAD /OS-EP-FILTER/projects/$project_id/endpoints/$endpoint_id
DELETE /OS-EP-FILTER/projects/$project_id/endpoints/$endpoint_id
GET /OS-EP-FILTER/endpoints/$endpoint_id/projects
GET /OS-EP-FILTER/projects/$project_id/endpoints

GET /OS-EP-FILTER/endpoint_groups
POST /OS-EP-FILTER/endpoint_groups
GET /OS-EP-FILTER/endpoint_groups/$endpoint_group_id
HEAD /OS-EP-FILTER/endpoint_groups/$endpoint_group_id
PATCH /OS-EP-FILTER/endpoint_groups/$endpoint_group_id
DELETE /OS-EP-FILTER/endpoint_groups/$endpoint_group_id

GET /OS-EP-FILTER/endpoint_groups/$endpoint_group_id/projects
GET /OS-EP-FILTER/endpoint_groups/$endpoint_group_id/endpoints

PUT /OS-EP-FILTER/endpoint_groups/$endpoint_group/projects/$project_id
GET /OS-EP-FILTER/endpoint_groups/$endpoint_group/projects/$project_id
HEAD /OS-EP-FILTER/endpoint_groups/$endpoint_group/projects/$project_id
DELETE /OS-EP-FILTER/endpoint_groups/$endpoint_group/projects/
$project_id

OS-EP-FILTER should come after the project, e.g.:

GET /projects/$project_id/OS-EP-FILTER/endpoints/$endpoint_id

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1362245

Title:
  Update  Endpoint Filter APIs

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Henry-Nash made a good comment regarding the existing/new Endpoint
  Filter API. Currently the API looks like:

  GET /OS-EP-FILTER/projects/$project_id/endpoints/$endpoint_id
  HEAD /OS-EP-FILTER/projects/$project_id/endpoints/$endpoint_id
  DELETE /OS-EP-FILTER/projects/$project_id/endpoints/$endpoint_id
  GET /OS-EP-FILTER/endpoints/$endpoint_id/projects
  GET /OS-EP-FILTER/projects/$project_id/endpoints

  GET /OS-EP-FILTER/endpoint_groups
  POST /OS-EP-FILTER/endpoint_groups
  GET /OS-EP-FILTER/endpoint_groups/$endpoint_group_id
  HEAD /OS-EP-FILTER/endpoint_groups/$endpoint_group_id
  PATCH /OS-EP-FILTER/endpoint_groups/$endpoint_group_id
  DELETE /OS-EP-FILTER/endpoint_groups/$endpoint_group_id

  GET /OS-EP-FILTER/endpoint_groups/$endpoint_group_id/projects
  GET /OS-EP-FILTER/endpoint_groups/$endpoint_group_id/endpoints

  PUT /OS-EP-FILTER/endpoint_groups/$endpoint_group/projects/$project_id
  GET /OS-EP-FILTER/endpoint_groups/$endpoint_group/projects/$project_id
  HEAD 
/OS-EP-FILTER/endpoint_groups/$endpoint_group/projects/$project_id
  DELETE /OS-EP-FILTER/endpoint_groups/$endpoint_group/projects/
  $project_id

  OS-EP-FILTER should come after the project, e.g.:

  GET /projects/$project_id/OS-EP-FILTER/endpoints/$endpoint_id

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1362245/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362233] [NEW] instance_create() DB API method implicitly creates additional DB transactions

2014-08-27 Thread Roman Podoliaka
Public bug reported:

In DB API code we have a notion of 'public' and 'private' methods. The
former are conceptually executed within a *single* DB transaction and
the latter can either create a new transaction or participate in the
existing one. The whole point is to be able to roll back the results of
DB API methods easily and be able to retry method calls on connection
failures. We had a bp (https://blueprints.launchpad.net/nova/+spec/db-
session-cleanup) in which all DB API methods were refactored to maintain
these properties.

instance_create() is one of the methods that currently violates the
rules of 'public' DB API methods and creates a concurrent transaction
implicitly.
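
A rough sketch of the convention described above (a toy model with
illustrative names, not the nova DB API code): the public method opens
exactly one transaction and passes its session down to private helpers,
which only join it and never start their own.

    class Session(object):
        """Toy stand-in for a DB session that tracks whether a transaction is open."""
        def __init__(self):
            self.in_transaction = False
        def begin(self):
            return self
        def __enter__(self):
            assert not self.in_transaction, 'public methods must not nest transactions'
            self.in_transaction = True
            return self
        def __exit__(self, *exc_info):
            self.in_transaction = False

    def _instance_update_private(context, values, session):
        # Private helper: participates in the caller's transaction only.
        assert session.in_transaction
        return values

    def instance_create_public(context, values):
        # Public method: a single transaction wraps the whole call, so it can
        # be rolled back or retried as one unit.
        session = Session()
        with session.begin():
            return _instance_update_private(context, values, session)

    print(instance_create_public(None, {'uuid': 'abc'}))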

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New


** Tags: db

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362233

Title:
  instance_create() DB API method implicitly creates additional DB
  transactions

Status in OpenStack Compute (Nova):
  New

Bug description:
  In DB API code we have a notion of 'public' and 'private' methods. The
  former are conceptually executed within a *single* DB transaction and
  the latter can either create a new transaction or participate in the
  existing one. The whole point is to be able to roll back the results
  of DB API methods easily and be able to retry method calls on
  connection failures. We had a bp
  (https://blueprints.launchpad.net/nova/+spec/db-session-cleanup) in
  which all DB API have been re-factored to maintain these properties.

  instance_create() is one of the methods that currently violates the
  rules of 'public' DB API methods and creates a concurrent transaction
  implicitly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362221] [NEW] VMs fail to start when Ceph is used as a backend for ephemeral drives

2014-08-27 Thread Roman Podoliaka
Public bug reported:

Placement of VM drives in Ceph has been chosen
(libvirt.images_type == 'rbd').

When a user creates a flavor and specifies:
   - root drive size >0
   - ephemeral drive size >0 (important)

and tries to boot a VM, they get "no valid host was found" in the
scheduler log:

Error from last host: node-3.int.host.com (node node-3.int.host.com):
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1305, in _build_instance
    set_access_ip=set_access_ip)
  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 393, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1717, in _spawn
    LOG.exception(_('Instance failed to spawn'), instance=instance)
  File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1714, in _spawn
    block_device_info)
  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2259, in spawn
    admin_pass=admin_password)
  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2648, in _create_image
    ephemeral_size=ephemeral_gb)
  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 186, in cache
    *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 587, in create_image
    prepare_template(target=base, max_size=size, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 249, in inner
    return f(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py", line 176, in fetch_func_sync
    fetch_func(target=target, *args, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2458, in _create_ephemeral
    disk.mkfs(os_type, fs_label, target, run_as_root=is_block_dev)
  File "/usr/lib/python2.6/site-packages/nova/virt/disk/api.py", line 117, in mkfs
    utils.mkfs(default_fs, target, fs_label, run_as_root=run_as_root)
  File "/usr/lib/python2.6/site-packages/nova/utils.py", line 856, in mkfs
    execute(*args, run_as_root=run_as_root)
  File "/usr/lib/python2.6/site-packages/nova/utils.py", line 165, in execute
    return processutils.execute(*cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py", line 193, in execute
    cmd=' '.join(cmd))
ProcessExecutionError: Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf mkfs -t ext3 -F -L ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_default
Exit code: 1
Stdout: ''
Stderr: 'mke2fs 1.41.12 (17-May-2010)\nmkfs.ext3: No such file or directory while trying to determine filesystem size\n'
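
A rough sketch of a setup that should reproduce this (the pool name, flavor values and image are arbitrary examples, not taken from the report; the option names are the Juno-style [libvirt] ones):

    # nova.conf on the compute node
    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf

    # flavor with a non-zero ephemeral disk, then boot with it
    nova flavor-create m1.ephemeral auto 2048 10 1 --ephemeral 10
    nova boot --flavor m1.ephemeral --image <image> test-vm

Note that the mkfs in the traceback is run against a local path under /var/lib/nova/instances/_base even though the ephemeral disk is meant to live in RBD.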

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New


** Tags: ceph libvirt rbd

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362221

Title:
  VMs fail to start when Ceph is used as a backend for ephemeral drives

Status in OpenStack Compute (Nova):
  New

Bug description:
  The option to place VM drives in Ceph has been chosen
  (libvirt.images_type == 'rbd').

  When user creates a flavor and specifies:
 - root drive size >0
 - ephemeral drive size >0 (important)

  and tries to boot a VM, he gets "no valid host was found" in the
  scheduler log:

  Error from last host: node-3.int.host.com (node node-3.int.host.com):
  Traceback (most recent call last):
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1305, in _build_instance
      set_access_ip=set_access_ip)
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 393, in decorated_function
      return function(self, context, *args, **kwargs)
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1717, in _spawn
      LOG.exception(_('Instance failed to spawn'), instance=instance)
    File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1714, in _spawn
      block_device_info)
    File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2259, in spawn
      admin_pass=admin_password)
    File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2648, in _create_image
      ephemeral_size=ephemer

[Yahoo-eng-team] [Bug 1362213] [NEW] haproxy configuration spams logged-in users when no servers are available

2014-08-27 Thread John Schwarz
Public bug reported:

On certain systems which use the default syslog configuration, using
haproxy-based LBaaS causes error logs to spam all the logged-in users:

Message from syslogd@alpha-controller at Jun  9 01:32:07 ...
 haproxy[2719]:backend 32fce5ee-b7f7-4415-a572-a83eba1be6b0 has no server 
available!

Message from syslogd@alpha-controller at Jun  9 01:32:07 ...
 haproxy[2719]:backend 32fce5ee-b7f7-4415-a572-a83eba1be6b0 has no server 
available!


The error message is valid - it happens when, for example, there are no backend 
servers available to handle the service requests because all members are down.
However, there is no point in sending these messages to all logged-in users.
The desired result is that each namespace has its own log file containing all
the log messages the relevant haproxy process produces.
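
One possible shape of this (a sketch only — the facility, socket and file names are illustrative, not the actual patch): have each generated haproxy config log to its own syslog facility, and route that facility to a per-namespace file instead of the default catch-all rule that writes to every terminal:

    # haproxy config rendered for one pool (illustrative)
    global
        log /dev/log local0 info

    # rsyslog rule routing that facility to a dedicated file
    local0.*    /var/log/neutron/lbaas-32fce5ee.log
    & stop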

** Affects: neutron
 Importance: Undecided
 Assignee: John Schwarz (jschwarz)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => John Schwarz (jschwarz)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362213

Title:
  haproxy configuration spams logged-in users when no servers are
  available

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  On certain systems which use the default syslog configuration, using
  haproxy-based LBaaS causes error logs to spam all the logged-in users:

  Message from syslogd@alpha-controller at Jun  9 01:32:07 ...
   haproxy[2719]:backend 32fce5ee-b7f7-4415-a572-a83eba1be6b0 has no server 
available!

  Message from syslogd@alpha-controller at Jun  9 01:32:07 ...
   haproxy[2719]:backend 32fce5ee-b7f7-4415-a572-a83eba1be6b0 has no server 
available!

  
  The error message is valid - it happens when, for example, there are no 
backend servers available to handle the service requests because all members 
are down.
  However, there is no point in sending the messages to all the logged-in 
users. The wanted result is that each namespace will have its own log file, 
which will contain all the log messages the relevant haproxy process produces.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362200] [NEW] Port details overview Fixed IP field "none" not translated

2014-08-27 Thread Aaron Sahlin
Public bug reported:

In 
/openstack_dashboard/dashboards/project/networks/templates/networks/ports/_detail_overview.html,
"None" is not translated.  Should be   {% trans "None" %}


  {% if port.fixed_ips.items|length > 1 %}
  {% for ip in port.fixed_ips %}
  {% trans "IP address:" %} {{ ip.ip_address }},
  {% trans "Subnet ID" %} {{ ip.subnet_id }}
  {% endfor %}
  {% else %}
  "None"
  {% endif %}
...
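
The corrected else branch simply wraps the literal in a trans tag, e.g.:

    {% else %}
    {% trans "None" %}
    {% endif %}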

** Affects: horizon
 Importance: Undecided
 Assignee: Aaron Sahlin (asahlin)
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1362200

Title:
  Port details overview Fixed IP field "none" not translated

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In 
/openstack_dashboard/dashboards/project/networks/templates/networks/ports/_detail_overview.html,
  "None" is not translated.  Should be   {% trans "None" %}

  
{% if port.fixed_ips.items|length > 1 %}
{% for ip in port.fixed_ips %}
{% trans "IP address:" %} {{ ip.ip_address }},
{% trans "Subnet ID" %} {{ ip.subnet_id }}
{% endfor %}
{% else %}
"None"
{% endif %}
  ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1362200/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362194] [NEW] _sync_power_state locks during deployment

2014-08-27 Thread Kyle L. Henderson
Public bug reported:

In nova/compute/manager.py, _sync_power_states() uses a synchronized()
lock to lock on the instance uuid before reconciling the power states of
the instance between OpenStack and the compute driver.  It purposely
conflicts with the instance uuid lock on deployment
(do_build_and_run_instance) .  However, inside the lock (in
_query_driver_power_state_and_sync) the first thing it does is to see if
there is a task state and if so skips sync'ing the power state.

The result of synchronizing with the deployment is that all periodic
tasks can be hung up behind a single deploy of an instance.  There is no
reason, that I can see, why checking the task state can't be done
outside the synchronized lock, which would make it skip instances being
deployed and would prevent periodic tasks from being hung behind a
deployment.

So basically, the check for a task state should be moved to be outside
the lock and that would avoid hanging periodic tasks and would result in
the same semantics of the existing syncing of power states.
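
A simplified sketch of the suggested reordering (illustrative, not the actual nova code; the helper names follow the existing manager methods):

    def _query_driver_power_state_and_sync(self, context, db_instance):
        # check the task state *before* taking the per-instance lock, so a
        # long-running deploy holding that lock cannot stall the periodic task
        if db_instance.task_state is not None:
            return

        @utils.synchronized(db_instance.uuid)
        def _sync_locked():
            vm_power_state = self._get_power_state(context, db_instance)
            self._sync_instance_power_state(context, db_instance,
                                            vm_power_state)

        _sync_locked()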

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362194

Title:
  _sync_power_state locks during deployment

Status in OpenStack Compute (Nova):
  New

Bug description:
  In nova/compute/manager.py, _sync_power_states() uses a synchronized()
  lock to lock on the instance uuid before reconciling the power states
  of the instance between OpenStack and the compute driver.  It
  purposely conflicts with the instance uuid lock on deployment
  (do_build_and_run_instance) .  However, inside the lock (in
  _query_driver_power_state_and_sync) the first thing it does is to see
  if there is a task state and if so skips sync'ing the power state.

  The result of synchronizing with the deployment is that all periodic
  tasks can be hung up behind a single deploy of an instance.  There is
  no reason, that I can see, why checking the task state can't be
  done outside the synchronized lock, which would make it skip instances
  being deployed and would prevent periodic tasks from being hung behind
  a deployment.

  So basically, the check for a task state should be moved to be outside
  the lock and that would avoid hanging periodic tasks and would result
  in the same semantics of the existing syncing of power states.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362191] [NEW] Deprecated & remove the libvirt volume_drivers config parameter

2014-08-27 Thread Daniel Berrange
Public bug reported:

In this thread the topic of deprecating and removing config parameters
related to extension points for non-public APIs was discussed. The
consensus was that such extension points should not be exposed and that
instead people wishing to develop extensions should be doing so on a
nova branch, instead of entirely separate repository.

  https://www.mail-archive.com/openstack-
d...@lists.openstack.org/msg30206.html

The vif_drivers parameter is now removed, and this bug is to track
deprecation & removal of the volume_drivers parameter since that serves
an identical purpose

It will be deprecated in Kilo and deleted in L
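
A sketch of how the Kilo deprecation could be expressed (this assumes oslo.config's deprecated_for_removal flag and abbreviates the option definition; it is not the actual patch):

    libvirt_opts = [
        cfg.ListOpt('volume_drivers',
                    default=['iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver'],
                    help='DEPRECATED. Libvirt handlers for remote volumes. '
                         'Out-of-tree drivers should be maintained on a '
                         'nova branch instead.',
                    deprecated_for_removal=True),
    ]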

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362191

Title:
  Deprecated & remove the libvirt volume_drivers config parameter

Status in OpenStack Compute (Nova):
  New

Bug description:
  In this thread the topic of deprecating and removing config parameters
  related to extension points for non-public APIs was discussed. The
  consensus was that such extension points should not be exposed and
  that instead people wishing to develop extensions should be doing so
  on a nova branch, instead of entirely separate repository.

https://www.mail-archive.com/openstack-
  d...@lists.openstack.org/msg30206.html

  The vif_drivers parameter is now removed, and this bug is to track
  deprecation & removal of the volume_drivers parameter since that
  serves an identical purpose

  It will be deprecated in Kilo and deleted in L

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362191/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362181] [NEW] Multi-domain has problems with domain drivers

2014-08-27 Thread Marcos Lobo
Public bug reported:

My Environment
--
I've installed RDO OpenStack Icehouse, then upgraded to keystone-2014.2.b2 
from the Launchpad tarball. I'm using SQL (not LDAP). With the standard 
installation we have only one domain, the "default" domain. I did not configure 
anything else.

What I want to achieve
--
Now I want to configure the multi-domain feature on Keystone Juno 2, and I'm 
following the official documentation: 
http://docs.openstack.org/developer/keystone/configuration.html#domain-specific-drivers

The problem

If I execute this command:

$ curl --insecure -H "X-Auth-Token:ADMIN" http://localhost:5000/v3/users

OK, no problem: Keystone returns the JSON user list. Now I'll
configure the multi-domain feature.

1.- Edit /etc/keystone/keystone.conf file like

- # domain_specific_drivers_enabled=False
+ domain_specific_drivers_enabled=True
- # domain_config_dir=/etc/keystone/domains
+ domain_config_dir=/etc/keystone/domains

2.- Create default domain file.

2.1 cd /etc/keystone; mkdir domains; chown keystone:keystone domains; cd 
domains;
2.2 vim keystone.default.conf

[identity]
driver = keystone.identity.backends.sql.Identity

[ldap]

2.3 chown keystone:keystone keystone.default.conf

3.- service openstack-keystone restart

Now, if try the same CURL command I obtain this error:

$ curl --insecure -H "X-Auth-Token:ADMIN" http://localhost:5000/v3/users
{
"error": {
"code": 401,
"message": "The request you have made requires authentication. (Disable 
debug mode to suppress these details.)",
"title": "Unauthorized"
}
}


And, in the log file, I have 3 different errors:

2014-08-27 15:25:43.669 23078 DEBUG keystone.middleware.core [-] RBAC: auth_context: {} process_request /usr/lib/python2.6/site-packages/keystone/middleware/core.py:286
2014-08-27 15:25:43.764 23078 DEBUG keystone.common.wsgi [-] arg_dict: {} __call__ /usr/lib/python2.6/site-packages/keystone/common/wsgi.py:181
2014-08-27 15:25:43.765 23078 WARNING keystone.common.controller [-] RBAC: Bypassing authorization
2014-08-27 15:25:48.051 23078 DEBUG oslo.db.sqlalchemy.session [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _mysql_check_effective_sql_mode /usr/lib/python2.6/site-packages/oslo/db/sqlalchemy/session.py:401
2014-08-27 15:25:48.081 23078 WARNING keystone.common.controller [-] Invalid token found while getting domain ID for list request
2014-08-27 15:25:48.084 23078 WARNING keystone.common.wsgi [-] Authorization failed. The request you have made requires authentication. (Disable debug mode to suppress these details.) (Disable debug mode to suppress these details.) from 127.0.0.1
2014-08-27 15:25:48.090 23078 INFO eventlet.wsgi.server [-] 127.0.0.1 - - [27/Aug/2014 15:25:48] "GET /v3/users HTTP/1.1" 401 357 4.421301

And some seconds later, keystone raises this error:

2014-08-27 15:26:35.707 23078 DEBUG keystone.middleware.core [-] Auth token not in the request header. Will not build auth context. process_request /usr/lib/python2.6/site-packages/keystone/middleware/core.py:276
2014-08-27 15:26:35.731 23078 DEBUG keystone.common.wsgi [-] arg_dict: {} __call__ /usr/lib/python2.6/site-packages/keystone/common/wsgi.py:181
2014-08-27 15:26:35.741 23078 ERROR keystone.common.wsgi [-] object.__init__() takes no parameters
2014-08-27 15:26:35.741 23078 TRACE keystone.common.wsgi Traceback (most recent call last):
2014-08-27 15:26:35.741 23078 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/common/wsgi.py", line 212, in __call__
2014-08-27 15:26:35.741 23078 TRACE keystone.common.wsgi     result = method(context, **params)
2014-08-27 15:26:35.741 23078 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/token/controllers.py", line 99, in authenticate
2014-08-27 15:26:35.741 23078 TRACE keystone.common.wsgi     context, auth)
2014-08-27 15:26:35.741 23078 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/token/controllers.py", line 279, in _authenticate_local
2014-08-27 15:26:35.741 23078 TRACE keystone.common.wsgi     username, CONF.identity.default_domain_id)
2014-08-27 15:26:35.741 23078 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/identity/core.py", line 181, in wrapper
2014-08-27 15:26:35.741 23078 TRACE keystone.common.wsgi     self.driver, self.assignment_api)
2014-08-27 15:26:35.741 23078 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/identity/core.py", line 137, in setup_domain_drivers
2014-08-27 15:26:35.741 23078 TRACE keystone.common.wsgi     -len(DOMAIN_CONF_FTAIL)])
2014-08-27 15:26:35.741 23078 TRACE keystone.common.wsgi   File "/usr/lib/python2.6/site-packages/keystone/identity/core.py", line 116, in _load_config
2014-08-27 15:26:35.741 23078 TRACE keystone.common

[Yahoo-eng-team] [Bug 1362171] [NEW] Reuse process management classes from dnsmasq for radvd

2014-08-27 Thread Henry Gessau
Public bug reported:

When reviewing/discussing the change to add functional tests for
radvd[1] it was requested that radvd should be managed similar to the
dnsmasq process. We should reuse the classes already existing for
dnsmasq. Extract common functionality to allow them to work for both
radvd and dnsmasq. This will allow some common functional testing too.

[1] https://review.openstack.org/109889
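
A rough sketch of what that reuse could look like (hypothetical; the constructor and callback details follow the Juno-era external_process.ProcessManager used for dnsmasq and vary between releases):

    from neutron.agent.linux import external_process

    def enable_radvd(conf, router_id, radvd_conf_path, namespace):
        def cmd_callback(pid_file):
            # same pattern the DHCP agent uses to build the dnsmasq command
            return ['radvd', '-C', radvd_conf_path, '-p', pid_file,
                    '-m', 'syslog']

        pm = external_process.ProcessManager(conf, router_id,
                                             namespace=namespace)
        pm.enable(cmd_callback)
        return pm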

** Affects: neutron
 Importance: Undecided
 Assignee: Henry Gessau (gessau)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Henry Gessau (gessau)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362171

Title:
  Reuse process management classes from dnsmasq for radvd

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When reviewing/discussing the change to add functional tests for
  radvd[1] it was requested that radvd should be managed similar to the
  dnsmasq process. We should reuse the classes already existing for
  dnsmasq. Extract common functionality to allow them to work for both
  radvd and dnsmasq. This will allow some common functional testing too.

  [1] https://review.openstack.org/109889

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362129] [NEW] For rbd image backend, disk IO rate limiting isn't supported

2014-08-27 Thread Yaguang Tang
Public bug reported:

When using rbd as the disk backend (images_type=rbd in nova.conf),
disk I/O tuning doesn't work as described at
https://wiki.openstack.org/wiki/InstanceResourceQuota
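
For reference, that wiki page applies these limits through flavor extra specs, e.g. (values are arbitrary):

    nova flavor-key m1.small set quota:disk_read_bytes_sec=10485760
    nova flavor-key m1.small set quota:disk_write_bytes_sec=10485760

With the rbd backend these settings have no effect, which is what this report describes.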

** Affects: nova
 Importance: Undecided
 Assignee: Yaguang Tang (heut2008)
 Status: New


** Tags: icehouse-backport-potential rbd

** Changed in: nova
 Assignee: (unassigned) => Yaguang Tang (heut2008)

** Tags added: rbd

** Tags added: icehouse-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362129

Title:
  For rbd image backend, disk IO rate limiting isn't supported

Status in OpenStack Compute (Nova):
  New

Bug description:
  When using rbd as the disk backend (images_type=rbd in nova.conf),
  disk I/O tuning doesn't work as described at
  https://wiki.openstack.org/wiki/InstanceResourceQuota

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362129/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332660] Re: Update statistics from computes if RBD ephemeral is used

2014-08-27 Thread Roman Podoliaka
** Changed in: mos/5.0.x
   Status: Fix Committed => Fix Released

** Changed in: fuel/5.0.x
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1332660

Title:
  Update statistics from computes if RBD ephemeral is used

Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in Fuel for OpenStack 4.1.x series:
  In Progress
Status in Fuel for OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack:
  Fix Committed
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  If we use RBD as the backend for ephemeral drives, compute nodes still
  calculate their available disk size by looking at the local disks.
  This is the code path they use:

  * nova/compute/manager.py

  def update_available_resource(self, context):
  """See driver.get_available_resource()

  Periodic process that keeps that the compute host's understanding of
  resource availability and usage in sync with the underlying 
hypervisor.

  :param context: security context
  """
  new_resource_tracker_dict = {}
  nodenames = set(self.driver.get_available_nodes())
  for nodename in nodenames:
  rt = self._get_resource_tracker(nodename)
  rt.update_available_resource(context)
  new_resource_tracker_dict[nodename] = rt
  
  def _get_resource_tracker(self, nodename):
  rt = self._resource_tracker_dict.get(nodename)
  if not rt:
  if not self.driver.node_is_available(nodename):
  raise exception.NovaException(
  _("%s is not a valid node managed by this "
"compute host.") % nodename)

  rt = resource_tracker.ResourceTracker(self.host,
self.driver,
nodename)
  self._resource_tracker_dict[nodename] = rt
  return rt

  * nova/compute/resource_tracker.py

  def update_available_resource(self, context):
  """Override in-memory calculations of compute node resource usage 
based
  on data audited from the hypervisor layer.

  Add in resource claims in progress to account for operations that have
  declared a need for resources, but not necessarily retrieved them from
  the hypervisor layer yet.
  """
  LOG.audit(_("Auditing locally available compute resources"))
  resources = self.driver.get_available_resource(self.nodename)

  * nova/virt/libvirt/driver.py

  def get_local_gb_info():
  """Get local storage info of the compute node in GB.

  :returns: A dict containing:
   :total: How big the overall usable filesystem is (in gigabytes)
   :free: How much space is free (in gigabytes)
   :used: How much space is used (in gigabytes)
  """

  if CONF.libvirt_images_type == 'lvm':
  info = libvirt_utils.get_volume_group_info(
   CONF.libvirt_images_volume_group)
  else:
  info = libvirt_utils.get_fs_info(CONF.instances_path)

  for (k, v) in info.iteritems():
  info[k] = v / (1024 ** 3)

  return info

  
  It would be nice to have something like "libvirt_utils.get_rbd_info" which 
could be used in case CONF.libvirt_images_type == 'rbd'
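
  A minimal sketch of such a helper, assuming the python-rados bindings (the
  helper name and placement are hypothetical):

      import rados

      def get_rbd_info(ceph_conf, rbd_user=None):
          # report Ceph cluster capacity instead of the local filesystem
          cluster = rados.Rados(conffile=ceph_conf, rados_id=rbd_user)
          try:
              cluster.connect()
              stats = cluster.get_cluster_stats()   # values are in kB
          finally:
              cluster.shutdown()
          return {'total': stats['kb'] * 1024,
                  'used': stats['kb_used'] * 1024,
                  'free': stats['kb_avail'] * 1024}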

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1332660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362100] [NEW] Pre-created ports get deleted on interface-detach

2014-08-27 Thread Tomasz GÅ‚uch
Public bug reported:

Neutron ports which exist prior to being attached to an instance are
removed on interface-detach.

Way to repeat:
1) Create a port (neutron port-create <network>)
2) Create an instance (nova boot ...)
3) Attach the port created in 1) to the instance (nova interface-attach --port-id <port-id> <server>)
4) Detach the port (nova interface-detach <server> <port-id>)

Expected behaviour:
Port should still exist, as it had already existed prior to attaching.

Observed behaviour:
The port has been removed (neutron port-show <port-id> returns:
Unable to find port with name 'PORTID')

This issue is similar to https://bugs.launchpad.net/nova/+bug/1158684
Tested on IceHouse.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362100

Title:
  Pre-created ports get deleted on interface-detach

Status in OpenStack Compute (Nova):
  New

Bug description:
  Neutron ports which exist prior to being attached to instance are
  removed on interface-detach.

  Way to repeat:
  1) Create port (neutron port-create )
  2) Create instance (nova boot ...)
  3) Attach port created in 1) to instance (nova interface-attach --port-id 
 )
  4) Detach port (nova interface-detach  )

  Expected behaviour:
  Port should still exist, as it had already existed prior to attaching.

  Observed behaviour:
  Port has been removed (neutron port-show  :
  Unable to find port with name 'PORTID'

  This issue is similar to https://bugs.launchpad.net/nova/+bug/1158684
  Tested on IceHouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362075] [NEW] Live migration fails on Hyper-V when boot from volume is used

2014-08-27 Thread Alessandro Pilotti
Public bug reported:

Live migration fails on Hyper-V when boot from volume is used with CoW,
as the target host tries to cache the root disk image in
pre_live_migration, but in this case the image_ref is empty.

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362075

Title:
  Live migration fails on Hyper-V when boot from volume is used

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  Live migration fails on Hyper-V when boot from volume is used with
  CoW, as the target host tries to cache the root disk image in
  pre_live_migration, but in this case the image_ref is empty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362075/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362073] [NEW] Wrong string format means the exception can't be recorded in the log in nova.virt.disk.vfs.guestfs

2014-08-27 Thread Rui Chen
Public bug reported:

In current master,  nova.virt.disk.vfs.guestfs

136 except AttributeError as ex:
137 # set_backend_settings method doesn't exist in older
138 # libguestfs versions, so nothing we can do but ignore
139  ->   LOG.info(_LI("Unable to force TCG mode, libguestfs too old?"),
140  ex)
141 pass
142
143 try:
144 self.handle.add_drive_opts(self.imgfile, format=self.imgfmt)
145 self.handle.launch()


Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/__init__.py", line 846, in emit
msg = self.format(record)
  File "/opt/stack/nova/nova/openstack/common/log.py", line 685, in format
return logging.StreamHandler.format(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 723, in format
return fmt.format(record)
  File "/opt/stack/nova/nova/openstack/common/log.py", line 649, in format
return logging.Formatter.format(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
record.message = record.getMessage()
  File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
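
One straightforward fix is to give the message a format placeholder for the exception (or drop the extra argument), e.g.:

    LOG.info(_LI("Unable to force TCG mode, libguestfs too old? %s"), ex)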

** Affects: nova
 Importance: Undecided
 Assignee: Rui Chen (kiwik-chenrui)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Rui Chen (kiwik-chenrui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362073

Title:
  Wrong string format means the exception can't be recorded in the log in
  nova.virt.disk.vfs.guestfs

Status in OpenStack Compute (Nova):
  New

Bug description:
  In current master,  nova.virt.disk.vfs.guestfs

  136 except AttributeError as ex:
  137 # set_backend_settings method doesn't exist in older
  138 # libguestfs versions, so nothing we can do but ignore
  139  ->   LOG.info(_LI("Unable to force TCG mode, libguestfs too 
old?"),
  140  ex)
  141 pass
  142
  143 try:
  144 self.handle.add_drive_opts(self.imgfile, 
format=self.imgfmt)
  145 self.handle.launch()

  
  Traceback (most recent call last):
File "/usr/lib/python2.7/logging/__init__.py", line 846, in emit
  msg = self.format(record)
File "/opt/stack/nova/nova/openstack/common/log.py", line 685, in format
  return logging.StreamHandler.format(self, record)
File "/usr/lib/python2.7/logging/__init__.py", line 723, in format
  return fmt.format(record)
File "/opt/stack/nova/nova/openstack/common/log.py", line 649, in format
  return logging.Formatter.format(self, record)
File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
  record.message = record.getMessage()
File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
  msg = msg % self.args
  TypeError: not all arguments converted during string formatting

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362073/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362058] [NEW] osapi_compute_link_prefix configuration option has incorrect behavior for certain urls

2014-08-27 Thread Ishant Tyagi
Public bug reported:

I am using osapi_compute_link_prefix conf option  in nova.conf to get
some other URL in response.

I am using osapi_compute_link_prefix =
https://16.125.106.106:8774/rest/compute

output which I get is

{
"servers": [
{
"id": "c85f3e06-daef-468f-a298-e5427b6095cc",
"links": [
{
"href": 
"https://16.125.106.106/v1.1/ccbea08bdd8c42dfaad04e1c27dadfc9/servers/c85f3e06-daef-468f-a298-e5427b6095cc";,
"rel": "self"
},
{
"href": 
"https://16.125.106.106/ccbea08bdd8c42dfaad04e1c27dadfc9/servers/c85f3e06-daef-468f-a298-e5427b6095cc";,
"rel": "bookmark"
}
],
"name": "vm1"
}
]
}

The URL in the response is incorrect . It should be
"https://16.125.106.106/rest/compute/v1.1/ccbea08bdd8c42dfaad04e1c27dadfc9/servers/c85f3e06
-daef-468f-a298-e5427b6095cc"

** Affects: nova
 Importance: Undecided
 Assignee: Ishant Tyagi (ishant-tyagi)
 Status: New

** Description changed:

  I am using osapi_compute_link_prefix conf option  in nova.conf to get
  some other URL in response.
  
- I am using
- osapi_compute_link_prefix = https://16.125.106.106:8774/rest/compute
+ I am using osapi_compute_link_prefix =
+ https://16.125.106.106:8774/rest/compute
  
  output which I get is
  
  {
- "servers": [
- {
- "id": "c85f3e06-daef-468f-a298-e5427b6095cc",
- "links": [
- {
- "href": 
"https://16.125.106.106/v1.1/ccbea08bdd8c42dfaad04e1c27dadfc9/servers/c85f3e06-daef-468f-a298-e5427b6095cc";,
- "rel": "self"
- },
- {
- "href": 
"https://16.125.106.106/ccbea08bdd8c42dfaad04e1c27dadfc9/servers/c85f3e06-daef-468f-a298-e5427b6095cc";,
- "rel": "bookmark"
- }
- ],
- "name": "vm1"
- }
- ]
+ "servers": [
+ {
+ "id": "c85f3e06-daef-468f-a298-e5427b6095cc",
+ "links": [
+ {
+ "href": 
"https://16.125.106.106/v1.1/ccbea08bdd8c42dfaad04e1c27dadfc9/servers/c85f3e06-daef-468f-a298-e5427b6095cc";,
+ "rel": "self"
+ },
+ {
+ "href": 
"https://16.125.106.106/ccbea08bdd8c42dfaad04e1c27dadfc9/servers/c85f3e06-daef-468f-a298-e5427b6095cc";,
+ "rel": "bookmark"
+ }
+ ],
+ "name": "vm1"
+ }
+ ]
  }
  
- 
- The URL in the response is incorrect . It should be 
"https://16.125.106.106/rest/compute/v1.1/ccbea08bdd8c42dfaad04e1c27dadfc9/servers/c85f3e06-daef-468f-a298-e5427b6095cc";
+ The URL in the response is incorrect . It should be
+ 
"https://16.125.106.106/rest/compute/v1.1/ccbea08bdd8c42dfaad04e1c27dadfc9/servers/c85f3e06
+ -daef-468f-a298-e5427b6095cc"

** Changed in: nova
 Assignee: (unassigned) => Ishant Tyagi (ishant-tyagi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362058

Title:
  osapi_compute_link_prefix  configuration option has incorrect behavior
  for certain urls

Status in OpenStack Compute (Nova):
  New

Bug description:
  I am using osapi_compute_link_prefix conf option  in nova.conf to get
  some other URL in response.

  I am using osapi_compute_link_prefix =
  https://16.125.106.106:8774/rest/compute

  output which I get is

  {
  "servers": [
  {
  "id": "c85f3e06-daef-468f-a298-e5427b6095cc",
  "links": [
  {
  "href": 
"https://16.125.106.106/v1.1/ccbea08bdd8c42dfaad04e1c27dadfc9/servers/c85f3e06-daef-468f-a298-e5427b6095cc";,
  "rel": "self"
  },
  {
  "href": 
"https://16.125.106.106/ccbea08bdd8c42dfaad04e1c27dadfc9/servers/c85f3e06-daef-468f-a298-e5427b6095cc";,
  "rel": "bookmark"
  }
  ],
  "name": "vm1"
  }
  ]
  }

  The URL in the response is incorrect . It should be
  
"https://16.125.106.106/rest/compute/v1.1/ccbea08bdd8c42dfaad04e1c27dadfc9/servers/c85f3e06
  -daef-468f-a298-e5427b6095cc"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362058/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362048] [NEW] SQLite timeout in glance image_cache

2014-08-27 Thread Jordan Pittier
Public bug reported:

Hi,
Sometime I get the following stack trace in Glance-API : 

GET /v1/images/42646b2b-cf0b-4b15-b011-19d0a6880ffb HTTP/1.1" 200 4970175 2.403391
    for chunk in image_iter:
  File "/opt/stack/new/glance/glance/api/middleware/cache.py", line 281, in get_from_cache
    yield chunk
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/opt/stack/new/glance/glance/image_cache/drivers/sqlite.py", line 373, in open_for_read
    with self.get_db() as db:
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/opt/stack/new/glance/glance/image_cache/drivers/sqlite.py", line 391, in get_db
    conn.execute('PRAGMA synchronous = NORMAL')
  File "/opt/stack/new/glance/glance/image_cache/drivers/sqlite.py", line 77, in execute
    return self._timeout(lambda: sqlite3.Connection.execute(
  File "/opt/stack/new/glance/glance/image_cache/drivers/sqlite.py", line 74, in _timeout
    sleep(0.05)
  File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 31, in sleep
    hub.switch()
  File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
    return self.greenlet.switch()
Timeout: 2 seconds

It happens also from time to time in the Gate. See the following
logstash request :

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwicmV0dXJuIHNlbGYuZ3JlZW5sZXQuc3dpdGNoKClcIiBBTkQgZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1nLWFwaS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTEyNjQ1NjU3NywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==


This caused the gate failure of : 
http://logs.openstack.org/22/116622/2/check/check-tempest-dsvm-postgres-full/f079ef9/logs/screen-g-api.txt.gz?
  (wait for a full load of this page then grep "Timeout: 2 seconds")

Sorry for not being able to investigate more.

Jordan

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1362048

Title:
  SQLite timeout in glance image_cache

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Tempest:
  New

Bug description:
  Hi,
  Sometimes I get the following stack trace in glance-api:

  GET /v1/images/42646b2b-cf0b-4b15-b011-19d0a6880ffb HTTP/1.1" 200 4970175 
2.403391
  for chunk in image_iter:
File "/opt/stack/new/glance/glance/api/middleware/cache.py", line 281, in 
get_from_cache
  yield chunk
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  self.gen.next()
File "/opt/stack/new/glance/glance/image_cache/drivers/sqlite.py", line 
373, in open_for_read
  with self.get_db() as db:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
  return self.gen.next()
File "/opt/stack/new/glance/glance/image_cache/drivers/sqlite.py", line 
391, in get_db
  conn.execute('PRAGMA synchronous = NORMAL')
File "/opt/stack/new/glance/glance/image_cache/drivers/sqlite.py", line 77, 
in execute
  return self._timeout(lambda: sqlite3.Connection.execute(
File "/opt/stack/new/glance/glance/image_cache/drivers/sqlite.py", line 74, 
in _timeout
  sleep(0.05)
File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 31, 
in sleep
  hub.switch()
File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in 
switch
  return self.greenlet.switch()
  Timeout: 2 seconds

  It happens also from time to time in the Gate. See the following
  logstash request :

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwicmV0dXJuIHNlbGYuZ3JlZW5sZXQuc3dpdGNoKClcIiBBTkQgZmlsZW5hbWU6XCJsb2dzL3NjcmVlbi1nLWFwaS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTEyNjQ1NjU3NywibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  
  This caused the gate failure of : 
http://logs.openstack.org/22/116622/2/check/check-tempest-dsvm-postgres-full/f079ef9/logs/screen-g-api.txt.gz?
  (wait for a full load of this page then grep "Timeout: 2 seconds")

  Sorry for not being able to investigate more.

  Jordan

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1362048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362039] [NEW] Cannot Upgrade from Keystone Essex to Keystone Icehouse

2014-08-27 Thread Leigh Hayward
Public bug reported:

When trying to update from Essex to Icehouse in a test environment with
an existing keystone database table I get the following error:

6:07:38.888 11464 TRACE keystone     return versioning_api.upgrade(engine, repository, version)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/api.py", line 185, in upgrade
2014-08-26 16:07:38.888 11464 TRACE keystone     return _migrate(url, repository, version, upgrade=True, err=err, **opts)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "<string>", line 2, in _migrate
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/util/__init__.py", line 160, in with_engine
2014-08-26 16:07:38.888 11464 TRACE keystone     return f(*a, **kw)
2014-08-26 16:07:38.888 11464 CRITICAL keystone [-] OperationalError: (OperationalError) (1060, "Duplicate column name 'valid'") '\nALTER TABLE token ADD valid BOOL' ()
2014-08-26 16:07:38.888 11464 TRACE keystone Traceback (most recent call last):
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/bin/keystone-manage", line 51, in <module>
2014-08-26 16:07:38.888 11464 TRACE keystone     cli.main(argv=sys.argv, config_files=config_files)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/cli.py", line 190, in main
2014-08-26 16:07:38.888 11464 TRACE keystone     CONF.command.cmd_class.main()
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/cli.py", line 66, in main
2014-08-26 16:07:38.888 11464 TRACE keystone     migration_helpers.sync_database_to_version(extension, version)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/common/sql/migration_helpers.py", line 139, in sync_database_to_version
2014-08-26 16:07:38.888 11464 TRACE keystone     migration.db_sync(sql.get_engine(), abs_path, version=version)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/openstack/common/db/sqlalchemy/migration.py", line 197, in db_sync
2014-08-26 16:07:38.888 11464 TRACE keystone     return versioning_api.upgrade(engine, repository, version)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/api.py", line 185, in upgrade
2014-08-26 16:07:38.888 11464 TRACE keystone     return _migrate(url, repository, version, upgrade=True, err=err, **opts)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "<string>", line 2, in _migrate
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/util/__init__.py", line 160, in with_engine
2014-08-26 16:07:38.888 11464 TRACE keystone     return f(*a, **kw)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/api.py", line 364, in _migrate
2014-08-26 16:07:38.888 11464 TRACE keystone     schema.runchange(ver, change, changeset.step)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/schema.py", line 90, in runchange
2014-08-26 16:07:38.888 11464 TRACE keystone     change.run(self.engine, step)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/script/py.py", line 145, in run
2014-08-26 16:07:38.888 11464 TRACE keystone     script_func(engine)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/keystone/common/sql/migrate_repo/versions/003_token_valid.py", line 28, in upgrade
2014-08-26 16:07:38.888 11464 TRACE keystone     valid.create(token, populate_default=True)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/changeset/schema.py", line 526, in create
2014-08-26 16:07:38.888 11464 TRACE keystone     engine._run_visitor(visitorcallable, self, connection, **kwargs)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 1479, in _run_visitor
2014-08-26 16:07:38.888 11464 TRACE keystone     conn._run_visitor(visitorcallable, element, **kwargs)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 1122, in _run_visitor
2014-08-26 16:07:38.888 11464 TRACE keystone     **kwargs).traverse_single(element)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/changeset/ansisql.py", line 55, in traverse_single
2014-08-26 16:07:38.888 11464 TRACE keystone     ret = super(AlterTableVisitor, self).traverse_single(elem)
2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib64/python2.6/site-packages/sqlalchemy/sql/visitors.py", line 122, in traverse_single
2014-08-26 16:07:38.888 11464 TRACE keystone     return meth(obj, **kw)
2014-08-26 16:07:38.888 11464 TRACE keys

[Yahoo-eng-team] [Bug 1362007] [NEW] Empty supported backend list in Keystone architecture documentation

2014-08-27 Thread Alexander
Public bug reported:

Found just before:
http://docs.openstack.org/developer/keystone/architecture.html#rules
in block: 
http://docs.openstack.org/developer/keystone/architecture.html#approach-to-authorization-policy

...Backends included in Keystone are:


** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: documentation keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1362007

Title:
  Empty supported backend list in Keystone architecture documentation

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Found just before:
  http://docs.openstack.org/developer/keystone/architecture.html#rules
  in block: 
  
http://docs.openstack.org/developer/keystone/architecture.html#approach-to-authorization-policy

  ...Backends included in Keystone are:
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1362007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp