[Yahoo-eng-team] [Bug 1488347] [NEW] Can't specify identity endpoint for token validation among several keystone servers in keystonemiddleware

2015-08-24 Thread Chaoyi Huang
Public bug reported:

Issue: Can't specify identity endpoint among several keystone servers in
keystonemiddleware

A prototype was executed to verify that Keystone fernet tokens can work
in a multi-site OPNFV cloud (in OpenStack terms, multiple OpenStack regions):
https://etherpad.opnfv.org/p/multisite_identity_management.

The requirement is that "a user should, using a single authentication
point, be able to manage virtual resources spread over multiple OpenStack
regions".

We have two regions, Kista and Solna, each with a Keystone server
installed. The two Keystone servers use MySQL clusters as the backend:
the master MySQL cluster is in Kista, and the slave MySQL cluster in
Solna is configured for async-replication from the Kista cluster, so the
data in the Keystone database is available in both regions.

root@51fa2177d59d:~# openstack endpoint list
+----------------------------------+--------+--------------+--------------+---------+-----------+--------------------------+
| ID                               | Region | Service Name | Service Type | Enabled | Interface | URL                      |
+----------------------------------+--------+--------------+--------------+---------+-----------+--------------------------+
| 09977a67a5fd4231bf54bfdbfc311b4e | Solna  | keystone     | identity     | True    | internal  | http://172.17.0.98:5000  |
| 18389f1ff42640cf905351a7f9b8a6f7 | Kista  | glance       | image        | True    | internal  | http://172.17.0.41:9292  |
| 3bd662e362e24f45a9db2b77ad0682bb | Solna  | glance       | image        | True    | internal  | http://172.17.0.119:9292 |
| 425b14d499264aa1bad8170a99afce88 | Kista  | keystone     | identity     | True    | admin     | http://172.17.0.36:35357 |
| 60a02a99078642d0974843323bbb8836 | Solna  | glance       | image        | True    | public    | http://172.17.0.119:9292 |
| 712d42d06ade4fedb8820e6f6ed33574 | Kista  | glance       | image        | True    | public    | http://172.17.0.41:9292  |
| 8000a62a8406437dad4759960bad837f | Kista  | keystone     | identity     | True    | public    | http://172.17.0.36:5000  |
| a7ec590712364e9f876f0b82d1879a99 | Kista  | keystone     | identity     | True    | internal  | http://172.17.0.36:5000  |
| b253565ee000417ab9b3d7ab3f4b4d48 | Solna  | keystone     | identity     | True    | admin     | http://172.17.0.98:35357 |
| bf9d05de9be64f5bb886959eb6bb367d | Solna  | glance       | image        | True    | admin     | http://172.17.0.119:9292 |
| d1cb2f7d7d594199909b14a0004f37fe | Kista  | glance       | image        | True    | admin     | http://172.17.0.41:9292  |
| eab9fbcb129741728bc72f36b72e27e2 | Solna  | keystone     | identity     | True    | public    | http://172.17.0.98:5000  |
+----------------------------------+--------+--------------+--------------+---------+-----------+--------------------------+

Even though the Glance service in Solna is configured to use the Solna
Keystone server for local fernet token validation, the token validation
request is still routed to the Kista Keystone server; it does not work as
expected.
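
For reference, a minimal sketch of the per-region configuration that was
expected to keep validation local, e.g. in the Solna glance-api.conf. The
option names are from keystonemiddleware's [keystone_authtoken] section of
that era; the credential values are placeholders, not taken from the
prototype:

    [keystone_authtoken]
    auth_uri = http://172.17.0.98:5000
    identity_uri = http://172.17.0.98:35357
    admin_tenant_name = service
    admin_user = glance
    admin_password = <secret>

With this in place one would expect validation requests to stay on
172.17.0.98, yet they were observed going to the Kista server.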

The following doc describes the issue in detail:
https://docs.google.com/document/d/1pvYWQprRH3jnzX2j-zQwAErdPWg9zwkguSyLx1EBKas/edit

And this doc provides a patch showing how to make the configuration item
take effect for local token validation:
https://docs.google.com/document/d/1258g0VTC4wktevo2ymS7SaNhDeY8-S2QWY45them7ZM/edit#

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1488347

Title:
  Can't specify identity endpoint for token validation among several
  keystone servers in keystonemiddleware

Status in Keystone:
  New

Bug description:
  Issue: Can't specify identity endpoint among several keystone servers
  in keystonemiddleware

  A prototype was executed to verify that Keystone fernet tokens can work
  in a multi-site OPNFV cloud (in OpenStack terms, multiple OpenStack
  regions): https://etherpad.opnfv.org/p/multisite_identity_management.

  The requirement is that "a user should, using a single authentication
  point, be able to manage virtual resources spread over multiple OpenStack
  regions".

  We have two regions, Kista and Solna, each with a Keystone server
  installed. The two Keystone servers use MySQL clusters as the backend:
  the master MySQL cluster is in Kista, and the slave MySQL cluster in
  Solna is configured for async-replication from the Kista cluster, so the
  data in the Keystone database is available in both regions.

  root@51fa2177d59d:~# openstack endpoint list
  [endpoint list output identical to the table shown earlier in this report;
the rest of this duplicated description was truncated in the digest]

[Yahoo-eng-team] [Bug 1488332] [NEW] instance fault message is truncated

2015-08-24 Thread IWAMOTO Toshihiro
Public bug reported:

Please refer to https://bugs.launchpad.net/nova/+bug/1431203 for the
original bug report.

Because the message handling of RescheduledException is suboptimal, when
an instance fails to boot after retries, the actual reason for the failure
is not recorded in the DB.

MariaDB [nova]> select message,details from instance_faults;
| Build of instance 526886f8-445b-432d-8d5e-efe575ff0e2d was re-scheduled: Failed to provision instance 526886f8-445b-432d-8d5e-efe575ff0e2d: None |
|   File "/opt/stack/nova/nova/compute/manager.py", line 1893, in _do_build_and_run_instance
    filter_properties)
  File "/opt/stack/nova/nova/compute/manager.py", line 2038, in _build_and_run_instance
    instance_uuid=instance.uuid, reason=six.text_type(e))
 |

The nova log has some useful information but the above is what you see
from nova API and horizon.
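
A minimal, runnable sketch of the message-loss pattern described above (the
class and function names are simplified stand-ins, not the exact nova code
paths):

    import six

    class RescheduledException(Exception):
        pass

    def provision(instance_uuid, reason=None):
        # The underlying driver failed with reason=None, so the formatted
        # message already ends in the unhelpful literal "None".
        raise Exception('Failed to provision instance %s: %s'
                        % (instance_uuid, reason))

    uuid = '526886f8-445b-432d-8d5e-efe575ff0e2d'
    try:
        provision(uuid)
    except Exception as e:
        # The wrapper flattens the failure to a single string; the original
        # traceback and any structured detail are gone at this point.
        fault_message = ('Build of instance %s was re-scheduled: %s'
                         % (uuid, six.text_type(e)))
    print(fault_message)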

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488332

Title:
  instance fault message is truncated

Status in OpenStack Compute (nova):
  New

Bug description:
  Please refer to https://bugs.launchpad.net/nova/+bug/1431203 for the
  original bug report.

  Because the message handling of RescheduledException is suboptimal, when
  an instance fails to boot after retries, the actual reason for the
  failure is not recorded in the DB.

  MariaDB [nova]> select message,details from instance_faults;
  | Build of instance 526886f8-445b-432d-8d5e-efe575ff0e2d was re-scheduled: Failed to provision instance 526886f8-445b-432d-8d5e-efe575ff0e2d: None |
  |   File "/opt/stack/nova/nova/compute/manager.py", line 1893, in _do_build_and_run_instance
      filter_properties)
    File "/opt/stack/nova/nova/compute/manager.py", line 2038, in _build_and_run_instance
      instance_uuid=instance.uuid, reason=six.text_type(e))
   |

  The nova log has some useful information but the above is what you see
  from nova API and horizon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1488332/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488311] [NEW] import task still in progress although an error occurred

2015-08-24 Thread wangxiyuan
Public bug reported:

Reproduce:
Create an import task like the following (1: a wrong image address;
2: alternatively, omit "import_from_format", or even "import_from" or
"image_properties" entirely):
{
    "type": "import",
    "input": {
        "import_from": "file://wrong_address",
        "image_properties": {
            "disk_format": "qcow2",
            "container_format": "bare",
            "name": "test-task1"
        }
    }
}

It will return an error:
1. URLError: 
2. Invalid: Input does not contain 'import_from_format' field

But the command 'glance task-show <task_id>' reports that the task status
is still in progress and will never be changed.

Expected result: the task status is changed to failure.

The reason is that glance only checks the parameters before the taskflow
starts, but does nothing about the task status when that check fails.

Since taskflow itself has already checked the parameters, we could remove
the check that runs before taskflow; a sketch of that direction follows.
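
A minimal sketch, with simplified helper names, of that direction: let a
validation failure transition the task instead of leaving it in progress
(glance's Task domain object exposes fail/succeed transitions; everything
else here is an assumption):

    def run_import_task(task, task_repo, validate_input, execute_flow):
        try:
            validate_input(task.task_input)   # raises on bad/missing fields
            execute_flow(task)
        except Exception as exc:
            task.fail(message=str(exc))       # record why the task failed
            task_repo.save(task)
            raise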

** Affects: glance
 Importance: Undecided
 Assignee: wangxiyuan (wangxiyuan)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => wangxiyuan (wangxiyuan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1488311

Title:
  import task still in progress although an error occurred

Status in Glance:
  New

Bug description:
  Reproduce:
  Create an import task like the following (1: a wrong image address;
  2: alternatively, omit "import_from_format", or even "import_from" or
  "image_properties" entirely):
  {
      "type": "import",
      "input": {
          "import_from": "file://wrong_address",
          "image_properties": {
              "disk_format": "qcow2",
              "container_format": "bare",
              "name": "test-task1"
          }
      }
  }

  It will return an error:
  1. URLError: 
  2. Invalid: Input does not contain 'import_from_format' field

  But the command 'glance task-show <task_id>' reports that the task
  status is still in progress and will never be changed.

  Expected result: the task status is changed to failure.

  The reason is that glance only checks the parameters before the taskflow
  starts, but does nothing about the task status when that check fails.

  Since taskflow itself has already checked the parameters, we could
  remove the check that runs before taskflow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1488311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488306] [NEW] grammar mistake in manual: Data Processing API v1.1 (CURRENT)

2015-08-24 Thread wangyulin
Public bug reported:

In the manual [Data Processing API v1.1 (CURRENT)], the original
description is as follows:

A template configures Hadoop processes and VM characteristics, such as the 
number of reduce slots for task tracker, the number of CPUs, and the amount of 
RAM. 


I think "reduce slots" should be modified to "reduced slots".

** Affects: openstack-api-site
 Importance: Undecided
 Assignee: wangyulin (wangyl-fnst)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => wangyulin (wangyl-fnst)

** Project changed: neutron => openstack-api-site

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488306

Title:
  grammar mistake in manual: Data Processing API v1.1 (CURRENT)

Status in openstack-api-site:
  New

Bug description:
  In the manual [Data Processing API v1.1 (CURRENT)], the original
description is as follows:
  
  A template configures Hadoop processes and VM characteristics, such as the 
number of reduce slots for task tracker, the number of CPUs, and the amount of 
RAM. 
  

  I think "reduce slots" should be modified to "reduced slots".

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1488306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473276] Re: dashboard conflict with php, which results in httpd.service restart failure

2015-08-24 Thread ZTE-Zhengwei Su
It is httpd's problem. Just add the following line to
/etc/httpd/conf/httpd.conf:

IncludeOptional "/etc/httpd/conf.modules.d/*.conf"

** Project changed: horizon => apache2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1473276

Title:
  dashboard conflict with php, which results in httpd.service restart
  failure

Status in Apache2 Web Server:
  New

Bug description:
  When I installed OpenStack on Red Hat 7, I found one strange thing.
  When I install dashboard first and then install php, the httpd.service
restart fails.
  error info:
  httpd[32735]: AH00526: Syntax error on line 31 of /etc/httpd/conf.d/php.conf:
  httpd[32735]: Invalid command 'php_value', perhaps misspelled or defined by a
module not included in the server configuration

  When I install php first and then install dashboard, the httpd.service
restart still fails.
  error info:
   httpd[20260]: AH00526: Syntax error on line 1 of
/etc/httpd/conf.d/openstack-dashboard.conf:
   httpd[20260]: Invalid command 'WSGIDaemonProcess', perhaps misspelled or
defined by a module not included in the server configuration

  This is a very strange thing. I think it might be httpd's problem, but I
don't know how to figure it out.
  PS: If I only install php or only dashboard, httpd.service restarts
successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/apache2/+bug/1473276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488303] [NEW] 'result' and 'message' are in the wrong order

2015-08-24 Thread wangxiyuan
Public bug reported:

According to the Task class in glance/domain/__init__.py:

https://github.com/openstack/glance/blob/master/glance/domain/__init__.py#L352

'message' should come after 'result' in TaskFactory.new_task:

https://github.com/openstack/glance/blob/master/glance/domain/__init__.py#L479
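
A simplified sketch (not the exact glance signatures) of why the order
matters: a caller passing arguments positionally in the Task order gets the
two values swapped:

    class Task(object):
        def __init__(self, task_input, result, message):
            self.result = result
            self.message = message

    class TaskFactory(object):
        def new_task(self, task_input, message, result):   # swapped order
            return Task(task_input, result=result, message=message)

    factory = TaskFactory()
    # Caller intends Task's order: result=None, message='boom'
    t = factory.new_task('input', None, 'boom')
    print(t.result, t.message)   # prints: boom None -- swapped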

** Affects: glance
 Importance: Undecided
 Assignee: wangxiyuan (wangxiyuan)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => wangxiyuan (wangxiyuan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1488303

Title:
  'result' and 'message' are in the wrong order

Status in Glance:
  In Progress

Bug description:
  According to the Task class in glance/domain/__init__.py:

  https://github.com/openstack/glance/blob/master/glance/domain/__init__.py#L352

  'message' should come after 'result' in TaskFactory.new_task:

  https://github.com/openstack/glance/blob/master/glance/domain/__init__.py#L479

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1488303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488298] [NEW] lb-agent: delete-vlan-bridge and delete-vlan don't only work with vlan

2015-08-24 Thread Li Ma
Public bug reported:

In the linuxbridge-agent code, the delete-vlan-bridge and delete-vlan
functions don't only work with vlan but also with, for example, vxlan. A
refactor is needed to make this clear.

** Affects: neutron
 Importance: Undecided
 Assignee: Li Ma (nick-ma-z)
 Status: New


** Tags: linuxbridge

** Description changed:

  In the linuxbridge-agent code, delete-vlan-bridge and delete-vlan
  function doesn't only work with vlan, but also, for example vxlan. A
- refactor is needed to make the it clear.
+ refactor is needed to make it clear.

** Changed in: neutron
 Assignee: (unassigned) => Li Ma (nick-ma-z)

** Tags added: linuxbridge

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488298

Title:
  lb-agent: delete-vlan-bridge and delete-vlan don't only work with
  vlan

Status in neutron:
  New

Bug description:
  In the linuxbridge-agent code, the delete-vlan-bridge and delete-vlan
  functions don't only work with vlan but also with, for example, vxlan. A
  refactor is needed to make this clear.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488284] [NEW] _clean_updated_sg_member_conntrack_entries() is throwing an AttributeError occasionally

2015-08-24 Thread Brian Haley
Public bug reported:

The IP conntrack manager code that recently merged in
https://review.openstack.org/#/c/147713/ is sometimes throwing an
AttributeError, see:

http://logs.openstack.org/91/215791/2/check/gate-tempest-dsvm-neutron-
dvr/4e71df4/logs/screen-q-agt.txt.gz?level=WARNING#_2015-08-24_20_19_11_591

This is due to some bad logic in an if statement:

if not (device_info or pre_device_info):
continue

That is meant to catch the case where either variable is falsy, but as
written it only matches when both are; it should be:

if not device_info or not pre_device_info:
continue

Change Id was Ibfd2d6a11aa970ea9e5009f4c4b858544d8b7463
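
A quick check of the two forms in plain Python: with exactly one of the
values missing, the original condition falls through while the corrected
one skips the entry:

    device_info, pre_device_info = {'ip': '10.0.0.1'}, None
    # Original: true only when BOTH are falsy, so this pair is not skipped.
    print(not (device_info or pre_device_info))    # False
    # Corrected: true when EITHER is falsy, so this pair is skipped.
    print(not device_info or not pre_device_info)  # True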

** Affects: neutron
 Importance: Undecided
 Assignee: Brian Haley (brian-haley)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Brian Haley (brian-haley)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488284

Title:
  _clean_updated_sg_member_conntrack_entries() is throwing an
  AttributeError occasionally

Status in neutron:
  New

Bug description:
  The IP conntrack manager code that recently merged in
  https://review.openstack.org/#/c/147713/ is sometimes throwing an
  AttributeError, see:

  http://logs.openstack.org/91/215791/2/check/gate-tempest-dsvm-neutron-
  dvr/4e71df4/logs/screen-q-agt.txt.gz?level=WARNING#_2015-08-24_20_19_11_591

  This is due to some bad logic in an if statement:

  if not (device_info or pre_device_info):
  continue

  That is meant to catch the case where either variable is falsy, but as
  written it only matches when both are; it should be:

  if not device_info or not pre_device_info:
  continue

  Change Id was Ibfd2d6a11aa970ea9e5009f4c4b858544d8b7463

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488284/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488282] [NEW] Gate failures with 'the resource could not be found'

2015-08-24 Thread Armando Migliaccio
Public bug reported:

There have been spurious failures happening in the gate. The most
prominent one is:


ft1.186: 
tempest.api.compute.admin.test_servers.ServersAdminTestJSON.test_list_servers_by_admin_with_all_tenants[id-9f5579ae-19b4-4985-a091-2a5d56106580]_StringException:
 Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2015-08-24 22:55:50,083 32355 INFO [tempest_lib.common.rest_client] Request 
(ServersAdminTestJSON:test_list_servers_by_admin_with_all_tenants): 404 GET 
http://127.0.0.1:8774/v2/fb99c79318b54e668713b25afc52f81a/servers/detail?all_tenants=
 0.834s
2015-08-24 22:55:50,083 32355 DEBUG[tempest_lib.common.rest_client] Request 
- Headers: {'X-Auth-Token': '', 'Accept': 'application/json', 
'Content-Type': 'application/json'}
Body: None
Response - Headers: {'content-length': '78', 'date': 'Mon, 24 Aug 2015 
22:55:50 GMT', 'connection': 'close', 'content-type': 'application/json; 
charset=UTF-8', 'x-compute-request-id': 
'req-387b21a9-4ada-48ee-89ed-9acfe5274ef7', 'status': '404'}
Body: {"itemNotFound": {"message": "The resource could not be found.", 
"code": 404}}
}}}

Traceback (most recent call last):
  File "tempest/api/compute/admin/test_servers.py", line 81, in 
test_list_servers_by_admin_with_all_tenants
body = self.client.list_servers(detail=True, **params)
  File "tempest/services/compute/json/servers_client.py", line 159, in 
list_servers
resp, body = self.get(url)
  File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 271, in get
return self.request('GET', url, extra_headers, headers)
  File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 643, in request
resp, resp_body)
  File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 695, in _error_checker
raise exceptions.NotFound(resp_body)
tempest_lib.exceptions.NotFound: Object not found
Details: {u'code': 404, u'message': u'The resource could not be found.'}


but there are other similar failure modes. This seems to be related to bug 
#1269284

The logstash query:

message:"tempest_lib.exceptions.NotFound: Object not found" AND
build_name:"gate-tempest-dsvm-neutron-full"

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVtcGVzdF9saWIuZXhjZXB0aW9ucy5Ob3RGb3VuZDogT2JqZWN0IG5vdCBmb3VuZFwiIEFORCBidWlsZF9uYW1lOlwiZ2F0ZS10ZW1wZXN0LWRzdm0tbmV1dHJvbi1mdWxsXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDA0NjIwNzcyMjksIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

** Affects: neutron
 Importance: High
 Status: New

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488282

Title:
  Gate failures with 'the resource could not be found'

Status in neutron:
  New

Bug description:
  There have been spurious failures happening in the gate. The most
  prominent one is:

  
  ft1.186: 
tempest.api.compute.admin.test_servers.ServersAdminTestJSON.test_list_servers_by_admin_with_all_tenants[id-9f5579ae-19b4-4985-a091-2a5d56106580]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2015-08-24 22:55:50,083 32355 INFO [tempest_lib.common.rest_client] 
Request (ServersAdminTestJSON:test_list_servers_by_admin_with_all_tenants): 404 
GET 
http://127.0.0.1:8774/v2/fb99c79318b54e668713b25afc52f81a/servers/detail?all_tenants=
 0.834s
  2015-08-24 22:55:50,083 32355 DEBUG[tempest_lib.common.rest_client] 
Request - Headers: {'X-Auth-Token': '', 'Accept': 'application/json', 
'Content-Type': 'application/json'}
  Body: None
  Response - Headers: {'content-length': '78', 'date': 'Mon, 24 Aug 2015 
22:55:50 GMT', 'connection': 'close', 'content-type': 'application/json; 
charset=UTF-8', 'x-compute-request-id': 
'req-387b21a9-4ada-48ee-89ed-9acfe5274ef7', 'status': '404'}
  Body: {"itemNotFound": {"message": "The resource could not be 
found.", "code": 404}}
  }}}

  Traceback (most recent call last):
File "tempest/api/compute/admin/test_servers.py", line 81, in 
test_list_servers_by_admin_with_all_tenants
  body = self.client.list_servers(detail=True, **params)
File "tempest/services/compute/json/servers_client.py", line 159, in 
list_servers
  resp, body = self.get(url)
File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 271, in get
  return self.request('GET', url, extra_headers, headers)
File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 643, in request
  resp, resp_body)
File 
"/opt/stack/new/tempest/.tox/full/local/lib/pyth

[Yahoo-eng-team] [Bug 1488276] [NEW] new launch instance doesn't work on network topology

2015-08-24 Thread Travis Tripp
Public bug reported:


Set in local_settings.py:

LAUNCH_INSTANCE_NG_ENABLED = True
LAUNCH_INSTANCE_LEGACY_ENABLED = False

Restart horizon.

Click the Launch Instance button. It will hang for a bit with just the
spinner, but eventually you'll see the following errors in the
javascript console:

horizon.modals.js:323 Uncaught TypeError: Cannot read property 'modal' of 
nullhorizon.modals._request.$.ajax.complete @ horizon.modals.js:323fire @ 
jquery.js:3048self.fireWith @ jquery.js:3160done @ jquery.js:8250callback @ 
jquery.js:8778
VM4679:305 Uncaught TypeError: Cannot read property 'loadAngular' of 
undefinedhorizon.addInitFunction.horizon.modals.init @ VM4679:305horizon.init @ 
VM4651:24jQuery.Callbacks.fire @ VM4647:3048jQuery.Callbacks.self.fireWith @ 
VM4647:3160jQuery.extend.ready @ VM4647:433

horizon.networktopology.js:138 Uncaught TypeError: Cannot read property
'get' of undefinedhorizon.network_topology.select_draw_mode @
horizon.networktopology.js:138horizon.network_topology.data_convert @
horizon.networktopology.js:165(anonymous function) @
horizon.networktopology.js:129jQuery.Callbacks.fire @
VM4647:3048jQuery.Callbacks.self.fireWith @ VM4647:3160done @
VM4647:8235jQuery.ajaxTransport.send.call

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1488276

Title:
  new launch instance doesn't work on network topology

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
  Set in local_settings.py:

  LAUNCH_INSTANCE_NG_ENABLED = True
  LAUNCH_INSTANCE_LEGACY_ENABLED = False

  Restart horizon.

  Click the Launch Instance button. It will hang for a bit with just the
  spinner, but eventually you'll see the following errors in the
  javascript console:

  horizon.modals.js:323 Uncaught TypeError: Cannot read property 'modal' of 
nullhorizon.modals._request.$.ajax.complete @ horizon.modals.js:323fire @ 
jquery.js:3048self.fireWith @ jquery.js:3160done @ jquery.js:8250callback @ 
jquery.js:8778
  VM4679:305 Uncaught TypeError: Cannot read property 'loadAngular' of 
undefinedhorizon.addInitFunction.horizon.modals.init @ VM4679:305horizon.init @ 
VM4651:24jQuery.Callbacks.fire @ VM4647:3048jQuery.Callbacks.self.fireWith @ 
VM4647:3160jQuery.extend.ready @ VM4647:433

  horizon.networktopology.js:138 Uncaught TypeError: Cannot read
  property 'get' of undefinedhorizon.network_topology.select_draw_mode @
  horizon.networktopology.js:138horizon.network_topology.data_convert @
  horizon.networktopology.js:165(anonymous function) @
  horizon.networktopology.js:129jQuery.Callbacks.fire @
  VM4647:3048jQuery.Callbacks.self.fireWith @ VM4647:3160done @
  VM4647:8235jQuery.ajaxTransport.send.call

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1488276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488261] [NEW] update code so data-selenium is consistently available

2015-08-24 Thread Doug Fish
Public bug reported:

While reviewing https://review.openstack.org/#/c/214671/ in patch set 6,
Timur proposed a patch that consistently added the data-selenium
attribute to table values. I believe this is the right approach, and it
would be nice if all of our integration tests could rely on this value
being present. However, several unit tests fail after this attribute is
added.

This bug is to update the data-selenium-updating code in
horizon/tables/base.py Cell.__init__ to always add this attribute, as
well as to update the related unit tests.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1488261

Title:
  update code so data-selenium is consistently available

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While reviewing https://review.openstack.org/#/c/214671/ in patch set
  6, Timur proposed a patch that consistently added the data-selenium
  attribute to table values. I believe this is the right approach, and it
  would be nice if all of our integration tests could rely on this value
  being present. However, several unit tests fail after this attribute
  is added.

  This bug is to update the data-selenium-updating code in
  horizon/tables/base.py Cell.__init__ to always add this attribute, as
  well as to update the related unit tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1488261/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487038] Re: nova.exception._cleanse_dict should use oslo_utils.strutils._SANITIZE_KEYS

2015-08-24 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.messaging
   Status: Fix Committed => Fix Released

** Changed in: oslo.messaging
Milestone: None => 2.4.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487038

Title:
  nova.exception._cleanse_dict should use
  oslo_utils.strutils._SANITIZE_KEYS

Status in OpenStack Compute (nova):
  In Progress
Status in oslo.messaging:
  Fix Released

Bug description:
  The wrap_exception decorator in nova.exception uses the _cleanse_dict
  helper method to remove any keys from the args/kwargs list of the
  method that was called, but only checks those keys of the form *_pass:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/exception.py?id=12.0.0.0b2#n57

  def _cleanse_dict(original):
  """Strip all admin_password, new_pass, rescue_pass keys from a dict."""
  return {k: v for k, v in six.iteritems(original) if "_pass" not in k}

  The oslo_utils.strutils module has its own list of keys to sanitize,
  used in its mask_password method:

  
http://git.openstack.org/cgit/openstack/oslo.utils/tree/oslo_utils/strutils.py#n54

  _SANITIZE_KEYS = ['adminPass', 'admin_pass', 'password', 'admin_password',
'auth_token', 'new_pass', 'auth_password', 'secret_uuid',
'sys_pswd']

  The nova code should probably be using some form of the same thing
  that strutils is using for mask_password, which uses a regex to find
  hits.  For example, if the arg was 'auth_token' or simply 'password',
  _cleanse_dict would fail to filter it out.

  You could also argue that the oslo.messaging log notifier should be
  using oslo_utils.strutils.mask_password before it logs the message -
  which isn't happening in that library today.
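
A short runnable illustration of the gap (strutils.mask_password is a real
oslo.utils helper; the sample dict is made up):

    import six
    from oslo_utils import strutils

    def _cleanse_dict(original):
        return {k: v for k, v in six.iteritems(original) if "_pass" not in k}

    args = {'new_pass': 'x', 'auth_token': 'y', 'password': 'z'}
    # Only the "_pass" key is stripped; the token and password leak through.
    print(_cleanse_dict(args))    # {'auth_token': 'y', 'password': 'z'}
    # mask_password catches them when they appear in a message string.
    print(strutils.mask_password("password = 'z'"))    # password = '***'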

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487038/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482585] Re: Fix duplicate-key pylint issue

2015-08-24 Thread Davanum Srinivas (DIMS)
** Changed in: oslo.log
   Status: Fix Committed => Fix Released

** Changed in: oslo.log
Milestone: None => 1.10.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1482585

Title:
  Fix duplicate-key pylint issue

Status in Keystone:
  Fix Committed
Status in oslo.log:
  Fix Released

Bug description:
  Steps to list the duplicate-key pylint issue:
  (1) Run the command below:
  pylint --rcfile=.pylintrc -f parseable cinder/ | grep '\[W0109'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1482585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466851] Re: Move to graduated oslo.service

2015-08-24 Thread Doug Hellmann
** Changed in: ironic
   Status: Fix Committed => Fix Released

** Changed in: ironic
Milestone: None => 4.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466851

Title:
  Move to graduated oslo.service

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in Keystone:
  Fix Released
Status in Magnum:
  Fix Committed
Status in Manila:
  Fix Released
Status in murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Committed
Status in Trove:
  Fix Released

Bug description:
  oslo.service library has graduated so all OpenStack projects should
  port to it instead of using oslo-incubator code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1466851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1436568] Re: Ironic Nova driver makes two calls to delete a node

2015-08-24 Thread Doug Hellmann
** Changed in: ironic
   Status: Fix Committed => Fix Released

** Changed in: ironic
Milestone: None => 4.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1436568

Title:
  Ironic Nova driver makes two calls to delete a node

Status in Ironic:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Committed

Bug description:
  When deleting an instance in Nova, it sets the provision state to
  DELETED and then when that completes (node is in CLEANING, CLEANFAIL,
  or NOSTATE/AVAILABLE), it makes another call to remove the instance
  UUID. The instance UUID should be cleared out when Ironic clears out
  node.instance_info, and Nova should delete the instance as soon as the
  node is one of the states above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1436568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437904] Re: generate_sample.sh uses MODULEPATH environment variable, conflicts with environment-modules

2015-08-24 Thread Doug Hellmann
** Changed in: ironic
   Status: Fix Committed => Fix Released

** Changed in: ironic
Milestone: None => 4.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1437904

Title:
  generate_sample.sh uses MODULEPATH environment variable, conflicts
  with environment-modules

Status in Cinder:
  In Progress
Status in Ironic:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The generate_sample.sh script refers to a MODULEPATH variable without
  clearing it first.  On a system using the environment-modules package,
  MODULEPATH is a PATH-like environment variable, which leads
  generate_sample.sh to fail like this:

  No module named
  
/etc/scl/modulefiles:/etc/scl/modulefiles:/usr/share/Modules/modulefiles:/etc/modulefiles:/usr/share/modulefiles

  The solution is to either explicitly clear this variable at the start
  of the script, or use a different name if this is something that is
  expected to be set externally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1437904/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488233] Re: FC with LUN ID >255 not recognized

2015-08-24 Thread Stefan Amann
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Stefan Amann (stefan-amann)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488233

Title:
  FC with LUN ID >255 not recognized

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New
Status in os-brick:
  In Progress

Bug description:
  (s390 architecture/System z Series only) FC LUNs with LUN ID >255 are not
recognized by either Cinder or Nova when trying to attach the volume.
  The issue is that Fibre-Channel volumes need to be added using the unit_add
command with a properly formatted LUN string.
  The string is set correctly for LUN IDs <= 0xff, but not for larger LUN IDs.
  Due to this, the volumes do not get properly added to the hypervisor
configuration and the hypervisor does not find them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1488233/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442501] Re: Fail to boot VM from image with one ephemeral disk

2015-08-24 Thread BalaGopalaKrishna
Thanks jichenjc. This bug is caused by a python-novaclient version
difference.

As you said, we need to backport https://review.openstack.org/#/c/165932/
to stable/kilo to resolve this bug.

** Project changed: nova => python-novaclient

** Changed in: python-novaclient
 Assignee: Noel Nelson Dsouza (noelnelson) => BalaGopalaKrishna (bala-9)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1442501

Title:
  Fail to boot VM from image with one ephemeral disk

Status in python-novaclient:
  Confirmed

Bug description:
  Kilo latest code

  I attempt to boot a VM from a qcow2 image and also create one ephemeral
  disk, but my request is refused by nova-api.

  [root@icm ~]# nova boot --flavor 1 --image cirros --ephemeral size=2 zhaoqin
  ERROR (BadRequest): Block Device Mapping is Invalid: Boot sequence for the 
instance and image/block device mapping combination is not valid. (HTTP 400) 
(Request-ID: req-6e272b55-a20e-4e1c-9b76-c1fb9d232fa6)

  
  The error is raised from _validate_bdm() in nova/compute/api.py. Since there
is only one ephemeral disk in the block_device_mapping_v2 list and its
boot_index is -1, boot_indexes becomes an empty list, and the error is raised
because 0 is not in the boot_indexes list:

  if 0 not in boot_indexes or not _subsequent_list(boot_indexes):
  raise exception.InvalidBDMBootSequence()
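
  A plain-Python illustration of why the check fires (the BDM list below is
a made-up minimal example): with only an ephemeral disk, boot_indexes ends
up empty, so 0 is never in it:

    bdms = [{'boot_index': -1}]   # one ephemeral disk, nothing bootable
    boot_indexes = sorted(b['boot_index'] for b in bdms
                          if b.get('boot_index', -1) >= 0)
    print(0 not in boot_indexes)  # True -> InvalidBDMBootSequence is raised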

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1442501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488218] [NEW] [Sahara] Cannot choose libs for Java job type

2015-08-24 Thread Vitaly Gridnev
Public bug reported:

The Java job type can require extra libs during execution, but currently
I cannot choose any libs for that job type.

** Affects: horizon
 Importance: Undecided
 Assignee: Vitaly Gridnev (vgridnev)
 Status: In Progress


** Tags: sahara

** Tags added: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1488218

Title:
  [Sahara] Cannot choose libs for Java job type

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The Java job type can require extra libs during execution, but
  currently I cannot choose any libs for that job type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1488218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488208] [NEW] Revoking a role assignment revokes unscoped tokens too

2015-08-24 Thread Dolph Mathews
Public bug reported:

When you delete a role assignment using a user+role+project pairing,
unscoped tokens between the user+project are unnecessarily revoked as
well. In fact, two events are created for each role assignment deletion
(one that is scoped correctly and one that is scoped too broadly).

The test failure in https://review.openstack.org/#/c/216236/ illustrates
this issue:

  http://logs.openstack.org/36/216236/1/check/gate-keystone-
python27/3f44af1/

** Affects: keystone
 Importance: Medium
 Assignee: Dolph Mathews (dolph)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1488208

Title:
  Revoking a role assignment revokes unscoped tokens too

Status in Keystone:
  In Progress

Bug description:
  When you delete a role assignment using a user+role+project pairing,
  unscoped tokens between the user+project are unnecessarily revoked as
  well. In fact, two events are created for each role assignment
  deletion (one that is scoped correctly and one that is scoped too
  broadly).

  The test failure in https://review.openstack.org/#/c/216236/
  illustrates this issue:

http://logs.openstack.org/36/216236/1/check/gate-keystone-
  python27/3f44af1/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1488208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488174] [NEW] A tag with a long length will return a 500

2015-08-24 Thread Niall Bunting
Public bug reported:

Overview:
The tag length is not checked; glance tries to insert the tag into the database
and can't. This then causes a database exception.

How to produce:
curl -v -X PUT  
http://10.0.0.8:9292/v2/images/4e1cd2f0-8704-4e19-953f-62ff14d1b22a/tags/0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
 -H "X-Auth-Token: eb51438851c64ffab72163092924e1cf"
> PUT 
> /v2/images/4e1cd2f0-8704-4e19-953f-62ff14d1b22a/tags/0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
>  HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 10.0.0.8:9292
> Accept: */*
> X-Auth-Token: eb51438851c64ffab72163092924e1cf
> 

Actual result:
< HTTP/1.1 500 Internal Server Error
< Content-Length: 228
< Content-Type: text/html; charset=UTF-8
< X-Openstack-Request-Id: req-8df454b9-32e4-4356-a23c-5de4a60b49fa
< Date: Mon, 24 Aug 2015 16:41:42 GMT
< 

 
500 Internal Server Error
The server has either erred or is incapable of performing the requested
operation.

Expected:
400
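
A minimal sketch of the expected behaviour; the 255-character limit is an
assumption based on a typical column size, and glance would raise an HTTP
400 here rather than this ValueError:

    MAX_TAG_LENGTH = 255   # assumed column size, not confirmed from schema

    def validate_tag(tag):
        # Checking before the INSERT turns a DB error into a client error.
        if len(tag) > MAX_TAG_LENGTH:
            raise ValueError('tag longer than %d chars' % MAX_TAG_LENGTH)
        return tag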

** Affects: glance
 Importance: Undecided
 Assignee: Niall Bunting (niall-bunting)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Niall Bunting (niall-bunting)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1488174

Title:
  A tag with a long length will return a 500

Status in Glance:
  New

Bug description:
  Overview:
  The tag length is not checked; glance tries to insert the tag into the
database and can't. This then causes a database exception.

  How to produce:
  curl -v -X PUT  
http://10.0.0.8:9292/v2/images/4e1cd2f0-8704-4e19-953f-62ff14d1b22a/tags/0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
 -H "X-Auth-Token: eb51438851c64ffab72163092924e1cf"
  > PUT 
/v2/images/4e1cd2f0-8704-4e19-953f-62ff14d1b22a/tags/0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789
 HTTP/1.1
  > User-Agent: curl/7.35.0
  > Host: 10.0.0.8:9292
  > Accept: */*
  > X-Auth-Token: eb51438851c64ffab72163092924e1cf
  > 

  Actual result:
  < HTTP/1.1 500 Internal Server Error
  < Content-Length: 228
  < Content-Type: text/html; charset=UTF-8
  < X-Openstack-Request-Id: req-8df454b9-32e4-4356-a23c-5de4a60b49fa
  < Date: Mon, 24 Aug 2015 16:41:42 GMT
  < 
  
   
500 Internal Server Error
The server has either erred or is incapable of performing the requested
operation.

  Expected:
  400

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1488174/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488172] [NEW] "Any Availability Zone" should not be shown in single-AZ environments

2015-08-24 Thread Ying Zuo
Public bug reported:

When creating a new volume, "Any Availability Zone" is confusing because
there is only one availability zone.

** Affects: horizon
 Importance: Undecided
 Assignee: Ying Zuo (yinzuo)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Ying Zuo (yinzuo)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1488172

Title:
  "Any Availability Zone" should not be shown in single-AZ environments

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When creating a new volume, "Any Availability Zone" is confusing
  because there is only one availability zone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1488172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364876] Re: Specifying both rpc_workers and api_workers makes stopping neutron-server fail

2015-08-24 Thread Elena Ezhova
This problem was fixed when the service code was part of oslo-incubator and
is no longer observed.
Related bug: https://bugs.launchpad.net/neutron/+bug/1432995

** Changed in: oslo.service
   Status: New => Invalid

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364876

Title:
  Specifying both rpc_workers and api_workers makes stopping neutron-
  server fail

Status in neutron:
  Invalid
Status in oslo-incubator:
  Invalid
Status in oslo.service:
  Invalid

Bug description:
  Hi,

  By setting both rpc_workers and api_workers to something bigger than
  1, when you try to stop the service with e.g. upstart, the stop doesn't
  kill all neutron-server processes, which results in a failure when
  starting neutron-server again.

  Details:
  ==

  neutron-server will create 2 openstack.common.service.ProcessLauncher
  instances: one for the RPC service, the other for the WSGI API service.
  Now, ProcessLauncher wasn't meant to be instantiated more than once in
  a single process, and here is why:

  1. Each ProcessLauncher instance registers a callback to catch signals like
SIGTERM, SIGINT and SIGHUP. Having two instances of ProcessLauncher means
signal.signal will be called twice with different callbacks, and only the last
one registered will take effect, i.e. only one ProcessLauncher instance will
  catch the signal and do the cleaning (a minimal demonstration of this
  appears after point 3.3 below).

  2. Each ProcessLauncher thinks that it owns all children processes of
  the parent process; for example, take a look at the "_wait_child" method,
  which will reap any killed child process, i.e. os.waitpid(0, ... .

  3. When only one ProcessLauncher instance is handling the process
  termination while the other isn't (point 1), this leads to a race
  condition between both:

  3.1. Running "stop neutron-server" will kill the children
  processes too, but because we have 2 ProcessLaunchers, the one that didn't
  catch the kill signal will keep respawning new children processes when
  it detects that a child process died; the other won't, because
  self.running was set to False.

  3.2. When children processes die (i.e. stop neutron-server), one
  of the ProcessLaunchers will catch that with os.waitpid(0, os.WNOHANG)
  (both do that), and if the death of a child process is caught by the
  wrong ProcessLauncher, i.e. not the one that has it in its
  self.children instance variable, the parent process will hang forever
  in the loop below because self.children will always contain that child
  process:

   if self.children:
  LOG.info(_LI('Waiting on %d children to exit'), 
len(self.children))
  while self.children:
  self._wait_child()

  3.3. When a child process dies, if its death is caught by the wrong
  ProcessLauncher instance (i.e. not the one that has it in its
  self.children), then a replacement will never be spawned.

  Hopefully I made this clear.
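
  A minimal demonstration of point 1 (plain Python 3 on POSIX; nothing here
  is neutron-specific): the second signal.signal() call replaces the first
  handler, so only one launcher's cleanup would ever run:

    import os
    import signal

    signal.signal(signal.SIGTERM, lambda *a: print('rpc launcher cleanup'))
    signal.signal(signal.SIGTERM, lambda *a: print('api launcher cleanup'))
    os.kill(os.getpid(), signal.SIGTERM)  # prints only 'api launcher cleanup'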

  Cheers,

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488167] [NEW] neutron stable jobs are busted by alembic 0.8.1

2015-08-24 Thread Ihar Hrachyshka
Public bug reported:

neutron functional tests are broken [1] by new alembic 0.8.1 release
because it now catches some differences between models and migration
scripts that were not caught before (specifically, in foreign key
attributes).

[1]: http://logs.openstack.org/66/211166/2/gate/gate-neutron-dsvm-
functional/867114c/

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488167

Title:
  neutron stable jobs are busted by alembic 0.8.1

Status in neutron:
  In Progress

Bug description:
  neutron functional tests are broken [1] by new alembic 0.8.1 release
  because it now catches some differences between models and migration
  scripts that were not caught before (specifically, in foreign key
  attributes).

  [1]: http://logs.openstack.org/66/211166/2/gate/gate-neutron-dsvm-
  functional/867114c/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488168] [NEW] missing mock in loadbalancer tests

2015-08-24 Thread David Lyle
Public bug reported:

A recent merge introduced unmocked API calls in the loadbalancer tests:

Error while checking action permissions.
Traceback (most recent call last):
  File "/home/david-lyle/horizon/horizon/tables/base.py", line 1266, in 
_filter_action
return action._allowed(request, datum) and row_matched
  File "/home/david-lyle/horizon/horizon/tables/actions.py", line 136, in 
_allowed
return self.allowed(request, datum)
  File 
"/home/david-lyle/horizon/openstack_dashboard/dashboards/project/loadbalancers/tables.py",
 line 312, in allowed
if not api.network.floating_ip_supported(request):
  File "/home/david-lyle/horizon/openstack_dashboard/api/network.py", line 91, 
in floating_ip_supported
return NetworkClient(request).floating_ips.is_supported()
  File "/home/david-lyle/horizon/openstack_dashboard/api/network.py", line 37, 
in __init__
neutron.is_extension_supported(request, 'security-group')):
  File "/home/david-lyle/horizon/horizon/utils/memoized.py", line 90, in wrapped
value = cache[key] = func(*args, **kwargs)
  File "/home/david-lyle/horizon/openstack_dashboard/api/neutron.py", line 
1161, in is_extension_supported
extensions = list_extensions(request)
  File "/home/david-lyle/horizon/horizon/utils/memoized.py", line 90, in wrapped
value = cache[key] = func(*args, **kwargs)
  File "/home/david-lyle/horizon/openstack_dashboard/api/neutron.py", line 
1152, in list_extensions
extensions_list = neutronclient(request).list_extensions()
  File 
"/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
 line 102, in with_params
ret = self.function(instance, *args, **kwargs)
  File 
"/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
 line 522, in list_extensions
return self.get(self.extensions_path, params=_params)
  File 
"/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
 line 293, in get
headers=headers, params=params)
  File 
"/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
 line 270, in retry_request
headers=headers, params=params)
  File 
"/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
 line 200, in do_request
content_type=self.content_type())
  File 
"/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/neutronclient/client.py",
 line 170, in do_request
**kwargs)
  File 
"/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/neutronclient/client.py",
 line 106, in _cs_request
raise exceptions.ConnectionFailed(reason=e)
ConnectionFailed: Connection to neutron failed: ('Connection aborted.', 
gaierror(-2, 'Name or service not known'))
Error while checking action permissions.
Traceback (most recent call last):
  File "/home/david-lyle/horizon/horizon/tables/base.py", line 1266, in 
_filter_action
return action._allowed(request, datum) and row_matched
  File "/home/david-lyle/horizon/horizon/tables/actions.py", line 136, in 
_allowed
return self.allowed(request, datum)
  File 
"/home/david-lyle/horizon/openstack_dashboard/dashboards/project/loadbalancers/tables.py",
 line 348, in allowed
if not api.network.floating_ip_supported(request):
  File "/home/david-lyle/horizon/openstack_dashboard/api/network.py", line 91, 
in floating_ip_supported
return NetworkClient(request).floating_ips.is_supported()
  File "/home/david-lyle/horizon/openstack_dashboard/api/network.py", line 37, 
in __init__
neutron.is_extension_supported(request, 'security-group')):
  File "/home/david-lyle/horizon/horizon/utils/memoized.py", line 90, in wrapped
value = cache[key] = func(*args, **kwargs)
  File "/home/david-lyle/horizon/openstack_dashboard/api/neutron.py", line 
1161, in is_extension_supported
extensions = list_extensions(request)
  File "/home/david-lyle/horizon/horizon/utils/memoized.py", line 90, in wrapped
value = cache[key] = func(*args, **kwargs)
  File "/home/david-lyle/horizon/openstack_dashboard/api/neutron.py", line 
1152, in list_extensions
extensions_list = neutronclient(request).list_extensions()
  File 
"/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
 line 102, in with_params
ret = self.function(instance, *args, **kwargs)
  File 
"/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
 line 522, in list_extensions
return self.get(self.extensions_path, params=_params)
  File 
"/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
 line 293, in get
headers=headers, params=params)
  File 
"/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py",
 line 270, in retry_request
headers=headers, params=params)
  File 
"/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/

[Yahoo-eng-team] [Bug 1488154] [NEW] OPENSTACK_KEYSTONE_ADMIN_ROLES not documented

2015-08-24 Thread David Lyle
Public bug reported:

OPENSTACK_KEYSTONE_ADMIN_ROLES was added in Kilo and used by
django_openstack_auth. However, it was never documented.
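
For reference, a minimal example of how the setting could be documented
in local_settings.py (the role names here are illustrative):

    # Keystone roles that grant admin-level access in the dashboard
    # (illustrative values; "admin" is the conventional default).
    OPENSTACK_KEYSTONE_ADMIN_ROLES = ["admin"]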

** Affects: horizon
 Importance: Low
 Assignee: David Lyle (david-lyle)
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1488154

Title:
  OPENSTACK_KEYSTONE_ADMIN_ROLES not documented

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  OPENSTACK_KEYSTONE_ADMIN_ROLES was added in Kilo and used by
  django_openstack_auth. However, it was never documented.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1488154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329506] Re: Global variables created on horizon.js

2015-08-24 Thread Ying Zuo
** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1329506

Title:
  Global variables created on horizon.js

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Three global variables are defined in horizon.js, as shown below.
  They may cause naming conflicts later on.

  ---

  //horizon/static/horizon/js/angular/horizon.js (juno1)

  var horizon_dependencies = ['hz.conf', 'hz.utils', 'ngCookies'];
  dependencies = horizon_dependencies.concat(angularModuleExtension);
  var horizonApp = angular.module('hz', dependencies)
    .config(['$interpolateProvider', '$httpProvider',
  function ($interpolateProvider, $httpProvider) {
  ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1329506/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1307732] Re: Schema created automatically without migration information

2015-08-24 Thread sean mooney
The auto-generation behavior that created this bug has been removed by
https://review.openstack.org/#/c/40296/

Marking as invalid, as neutron no longer generates tables from
SQLAlchemy models.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1307732

Title:
  Schema created automatically without migration information

Status in neutron:
  Invalid

Bug description:
  Currently when neutron-server starts up,
  `base.metadata.create_all(engine)` is called, which creates all the
  tables Neutron needs. The problem is that this does not insert Alembic
  migration data.

  Now, not everyone uses Alembic, but for those of us who do, there
  should be an option to *not* create tables automatically. We've been
  through this with the Glance project, and I think a good solution is
  to add a configuration option to *not* auto-create the database
  schema.
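
  A minimal sketch of the kind of guard being proposed (the option name
  and wiring here are hypothetical, not an existing neutron option):

      # Hypothetical sketch: gate automatic schema creation behind a
      # config option so Alembic-managed deployments can opt out.
      from oslo_config import cfg

      opts = [
          cfg.BoolOpt('db_auto_create', default=True,
                      help='Create database tables at startup; disable '
                           'when the schema is managed by Alembic.'),
      ]
      cfg.CONF.register_opts(opts)

      def sync_schema(engine, base):
          # Only create tables automatically when explicitly allowed.
          if cfg.CONF.db_auto_create:
              base.metadata.create_all(engine)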

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1307732/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470574] Re: django_openstack_auth can't run tests in stable/kilo

2015-08-24 Thread Sergey Lukjanov
** Changed in: django-openstack-auth
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1470574

Title:
  django_openstack_auth can't run tests in stable/kilo

Status in django-openstack-auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Incomplete

Bug description:
  gate jobs are failing for django_openstack_auth stable/kilo

  2015-06-30 20:02:03.884 |   File "/home/jenkins/workspace/gate-django_openstack_auth-python27/.tox/py27/local/lib/python2.7/site-packages/django/test/simple.py", line 102, in get_tests
  2015-06-30 20:02:03.885 | test_module = import_module('%s.%s' % (app_config.name, TEST_MODULE))
  2015-06-30 20:02:03.885 |   File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
  2015-06-30 20:02:03.885 | __import__(name)
  2015-06-30 20:02:03.885 |   File "/home/jenkins/workspace/gate-django_openstack_auth-python27/openstack_auth/tests/tests.py", line 30, in <module>
  2015-06-30 20:02:03.885 | from openstack_auth import policy
  2015-06-30 20:02:03.886 |   File "/home/jenkins/workspace/gate-django_openstack_auth-python27/openstack_auth/policy.py", line 22, in <module>
  2015-06-30 20:02:03.886 | from openstack_auth.openstack.common import policy
  2015-06-30 20:02:03.886 |   File "/home/jenkins/workspace/gate-django_openstack_auth-python27/openstack_auth/openstack/common/policy.py", line 90, in <module>
  2015-06-30 20:02:03.886 | from openstack_auth.openstack.common._i18n import _, _LE, _LW
  2015-06-30 20:02:03.886 |   File "/home/jenkins/workspace/gate-django_openstack_auth-python27/openstack_auth/openstack/common/_i18n.py", line 19, in <module>
  2015-06-30 20:02:03.887 | import oslo.i18n
  2015-06-30 20:02:03.887 | ImportError: No module named i18n
  2015-06-30 20:02:03.887 | ERROR: InvocationError: '/home/jenkins/workspace/gate-django_openstack_auth-python27/.tox/py27/bin/python openstack_auth/tests/run_tests.py'

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1470574/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473588] Re: Provide an option to disable auto-hashing of keystone token

2015-08-24 Thread Sergey Lukjanov
** Changed in: django-openstack-auth
   Status: Fix Committed => Fix Released

** Changed in: django-openstack-auth
Milestone: None => 1.4.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1473588

Title:
  Provide an option to disable auto-hashing of keystone token

Status in django-openstack-auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Committed

Bug description:

  Token hashing is performed to support sessions with the cookie
  backend. However, the hashed token doesn't always work.

  We should provide an option for the user to turn off token hashing.
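
  The shape such an option could take in local_settings.py (the setting
  name shown here is illustrative):

      # Illustrative: a single boolean to disable Keystone token hashing
      # when the session backend can store the full-size token.
      OPENSTACK_TOKEN_HASH_ENABLED = False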

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1473588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488142] [NEW] cc_growpart looks for removed string in growpart help output

2015-08-24 Thread James Bromberger
Public bug reported:

Hi
Line 90 of cc_growpart looks for the string '--update' in the help
output of growpart:
http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/config/cc_growpart.py#L90

    (out, _err) = util.subp(["growpart", "--help"], env=myenv)
    if re.search(r"--update\s+", out, re.DOTALL):
        return True

However, in the version that ships as part of cloud-utils 0.26-2 in
Debian, this string does not exist in the help output.

See also:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=784004
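
One possible direction (a sketch, not the actual cloud-init fix) is to
stop keying off a single help string and probe more defensively:

    # Sketch only: probe growpart without relying on the exact wording
    # of its --help output (the '--update' text was removed in newer
    # cloud-utils releases).
    import re
    import subprocess

    def growpart_usable():
        try:
            out = subprocess.check_output(["growpart", "--help"],
                                          stderr=subprocess.STDOUT)
        except (OSError, subprocess.CalledProcessError):
            return False
        # Accept the old '--update' flag if advertised, otherwise treat
        # any successful --help run as a usable growpart.
        return bool(re.search(rb"--update\s", out)) or b"growpart" in out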

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1488142

Title:
  cc_growpart looks for removed string in growpart help output

Status in cloud-init:
  New

Bug description:
  Hi
  Line 90 of cc_growpart looks for the string '--update' in the help
  output of growpart:
  http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/config/cc_growpart.py#L90

      (out, _err) = util.subp(["growpart", "--help"], env=myenv)
      if re.search(r"--update\s+", out, re.DOTALL):
          return True

  However, in the version that ships as part of cloud-utils 0.26-2 in
  Debian, this string does not exist in the help output.

  See also:
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=784004

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1488142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488143] [NEW] Wrong translation string for Kilo

2015-08-24 Thread Paul Karikh
Public bug reported:

For the Kilo release, the translation of the string "Failed to remove
router(s) from firewall %(name)s: %(reason)s" has an invalid parameter:
"Failed to remove router(s) from firewall %(name)s: %(reason)s" ->
"Yönlendiriciler %(firewall)s güvenlik duvarından kaldırılamadı: %(reason)s".
https://www.transifex.com/openstack/horizon/viewstrings/#tr_TR/openstack-dashboard-translations-kilo/42547445?q=Failed%20to%20remove%20router(s)%20from%20firewall
The `firewall` parameter does not exist in the original English string,
which makes the `manage.py compilemessages` command fail.
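
A quick way to catch this class of error before running
compilemessages is to diff the named placeholders of each msgid/msgstr
pair; a standalone sketch, not part of Horizon:

    # Sketch: report placeholders used in a translation but absent from
    # the original string -- the mismatch that breaks compilemessages.
    import re

    PLACEHOLDER = re.compile(r'%\((\w+)\)s')

    def extra_params(msgid, msgstr):
        return set(PLACEHOLDER.findall(msgstr)) - set(PLACEHOLDER.findall(msgid))

    original = "Failed to remove router(s) from firewall %(name)s: %(reason)s"
    translated = ("Yönlendiriciler %(firewall)s güvenlik duvarından "
                  "kaldırılamadı: %(reason)s")
    print(extra_params(original, translated))  # {'firewall'}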

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  For Kilo release string translation for "Failed to remove router(s) from 
firewall %(name)s: %(reason)s" has invalid params:
  "Failed to remove router(s) from firewall %(name)s: %(reason)s" -> 
"Yönlendiriciler %(firewall)s güvenlik duvarından kaldırılamadı: %(reason)s". 
https://www.transifex.com/openstack/horizon/viewstrings/#tr_TR/openstack-dashboard-translations-kilo/42547445?q=Failed%20to%20remove%20router(s)%20from%20firewall
- So, the firewall param doesn't exist in the original english string.
+ So, the `firewall` param doesn't exist in the original english string.
  It makes manage.py compilemessages command fail.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1488143

Title:
  Wrong translation string for Kilo

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  For the Kilo release, the translation of the string "Failed to remove
  router(s) from firewall %(name)s: %(reason)s" has an invalid parameter:
  "Failed to remove router(s) from firewall %(name)s: %(reason)s" ->
  "Yönlendiriciler %(firewall)s güvenlik duvarından kaldırılamadı: %(reason)s".
  https://www.transifex.com/openstack/horizon/viewstrings/#tr_TR/openstack-dashboard-translations-kilo/42547445?q=Failed%20to%20remove%20router(s)%20from%20firewall
  The `firewall` parameter does not exist in the original English string,
  which makes the `manage.py compilemessages` command fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1488143/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467589] Re: Remove Cinder V1 support

2015-08-24 Thread Steve Martinelli
** Changed in: python-openstackclient
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467589

Title:
  Remove Cinder V1 support

Status in Cinder:
  In Progress
Status in devstack:
  In Progress
Status in heat:
  New
Status in OpenStack Compute (nova):
  In Progress
Status in python-openstackclient:
  New
Status in Rally:
  In Progress
Status in tempest:
  In Progress

Bug description:
  Cinder created v2 support in the Grizzly release. This is to track
  progress in removing v1 support in other projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1467589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488111] [NEW] Boot from volumes that fail in initialize_connection are not rescheduled

2015-08-24 Thread Samuel Matzek
Public bug reported:

Version: OpenStack Liberty

Instances booted from volumes that fail in volume initialize_connection
are not rescheduled. initialize_connection failures can be very
host-specific, and in many cases the boot would succeed if the instance
build were rescheduled to another host.

The instance is not rescheduled because initialize_connection is called
down this stack:
nova.compute.manager _build_resources
nova.compute.manager _prep_block_device
nova.virt.block_device attach_block_devices
nova.virt.block_device.DriverVolumeBlockDevice.attach

When this fails, an exception is thrown which lands in this block:
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1740
and raises an InvalidBDM exception, which is caught by this block:
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2110

This in turn raises a BuildAbortException, which prevents the instance
from being rescheduled by landing the flow in this block:
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2004

To fix this we likely need a different exception raised from
nova.virt.block_device.DriverVolumeBlockDevice.attach when the failure
is in initialize_connection, and then work back up the stack to ensure
that when this new exception is raised a BuildAbortException is not
raised, so the reschedule can happen.
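
A rough sketch of that direction (the exception name and wiring are
hypothetical, not the merged fix):

    # Hypothetical sketch: raise a distinct, retryable error from
    # attach() when initialize_connection fails, so the build path can
    # reschedule instead of converting it into a BuildAbortException.
    class VolumeConnectionFailed(Exception):
        """Host-specific failure during volume initialize_connection."""

    def attach(volume_api, context, volume_id, connector):
        try:
            return volume_api.initialize_connection(
                context, volume_id, connector)
        except Exception as exc:
            # Surface a retryable error instead of a generic InvalidBDM.
            raise VolumeConnectionFailed(str(exc))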

** Affects: nova
 Importance: Undecided
 Assignee: Samuel Matzek (smatzek)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Samuel Matzek (smatzek)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488111

Title:
  Boot from volumes that fail in initialize_connection are not
  rescheduled

Status in OpenStack Compute (nova):
  New

Bug description:
  Version: OpenStack Liberty

  Instances booted from volumes that fail in volume
  initialize_connection are not rescheduled. initialize_connection
  failures can be very host-specific, and in many cases the boot would
  succeed if the instance build were rescheduled to another host.

  The instance is not rescheduled because initialize_connection is
  called down this stack:
  nova.compute.manager _build_resources
  nova.compute.manager _prep_block_device
  nova.virt.block_device attach_block_devices
  nova.virt.block_device.DriverVolumeBlockDevice.attach

  When this fails, an exception is thrown which lands in this block:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1740
  and raises an InvalidBDM exception, which is caught by this block:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2110

  This in turn raises a BuildAbortException, which prevents the instance
  from being rescheduled by landing the flow in this block:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2004

  To fix this we likely need a different exception raised from
  nova.virt.block_device.DriverVolumeBlockDevice.attach when the failure
  is in initialize_connection, and then work back up the stack to ensure
  that when this new exception is raised a BuildAbortException is not
  raised, so the reschedule can happen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1488111/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364876] Re: Specifying both rpc_workers and api_workers makes stopping neutron-server fail

2015-08-24 Thread Li Ma
** Also affects: oslo.service
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: Li Ma (nick-ma-z) => (unassigned)

** Changed in: oslo.service
 Assignee: (unassigned) => Li Ma (nick-ma-z)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364876

Title:
  Specifying both rpc_workers and api_workers makes stopping neutron-
  server fail

Status in neutron:
  In Progress
Status in oslo-incubator:
  Invalid
Status in oslo.service:
  New

Bug description:
  Hi,

  By setting both rpc_workers and api_workers to something bigger than
  1, stopping the service with e.g. upstart doesn't kill all
  neutron-server processes, which results in a failure when starting
  neutron-server again.

  Details:
  ==

  neutron-server creates 2 openstack.common.service.ProcessLauncher
  instances, one for the RPC service, the other for the WSGI API
  service. The ProcessLauncher wasn't meant to be instantiated more
  than once in a single process, and here is why:

  1. Each ProcessLauncher instance registers a callback to catch
  signals like SIGTERM, SIGINT and SIGHUP. Having two instances of
  ProcessLauncher means signal.signal will be called twice with
  different callbacks, and only the last one registered takes effect,
  i.e. only one ProcessLauncher instance will catch the signal and do
  the cleaning (see the standalone demo after this message).

  2. Each ProcessLauncher thinks that it owns all child processes of
  the parent process; for example, take a look at the "_wait_child"
  method, which will catch any killed child process, i.e.
  os.waitpid(0, ... .

  3. When only one ProcessLauncher instance is handling the process
  termination while the other isn't (point 1), this leads to a race
  condition between both:

  3.1. Running "stop neutron-server" also kills the child processes,
  but because we have 2 ProcessLaunchers, the one that didn't catch the
  kill signal will keep respawning new child processes whenever it
  detects that a child process died; the other won't, because
  self.running was set to False.

  3.2. When child processes die (i.e. stop neutron-server), one of the
  ProcessLaunchers will catch that with os.waitpid(0, os.WNOHANG) (both
  do that), and if the death of a child process is caught by the wrong
  ProcessLauncher, i.e. not the one that has it in its self.children
  instance variable, the parent process will hang forever in the loop
  below, because self.children will always contain that child process:

      if self.children:
          LOG.info(_LI('Waiting on %d children to exit'),
                   len(self.children))
          while self.children:
              self._wait_child()

  3.3. When a child process dies, if its death is caught by the wrong
  ProcessLauncher instance (i.e. not the one that has it in its
  self.children), then a replacement will never be spawned.

  Hopefully I made this clear.

  Cheers,
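
  The signal-handler clobbering in point 1 is easy to demonstrate in
  isolation; a standalone Python demo, unrelated to the neutron code:

      # Registering a second handler for the same signal silently
      # replaces the first, so only the last ProcessLauncher's cleanup
      # callback would ever run.
      import os
      import signal

      signal.signal(signal.SIGTERM, lambda s, f: print("launcher A cleanup"))
      signal.signal(signal.SIGTERM, lambda s, f: print("launcher B cleanup"))

      os.kill(os.getpid(), signal.SIGTERM)  # prints only "launcher B cleanup"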

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488096] [NEW] spelling mistake in test_images.py

2015-08-24 Thread Martin Tsvetanov
Public bug reported:

"image" is misspelled in tests/functional/v2/test_images.py on lines
1089 and 1131.

** Affects: glance
 Importance: Undecided
 Assignee: Martin Tsvetanov (martin-iva-tsvetanov)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Martin Tsvetanov (martin-iva-tsvetanov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1488096

Title:
  spelling mistake in test_images.py

Status in Glance:
  New

Bug description:
  "image" is misspelled in tests/functional/v2/test_images.py on lines
  1089 and 1131.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1488096/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483382] Re: Able to request a V2 token for user and project in a non-default domain

2015-08-24 Thread Dolph Mathews
Fixed by https://review.openstack.org/#/c/208069/

** Changed in: keystone
   Importance: Undecided => High

** Changed in: keystone
   Status: New => Fix Committed

** Changed in: keystone
 Assignee: (unassigned) => Dolph Mathews (dolph)

** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1483382

Title:
  Able to request a V2 token for user and project in a non-default
  domain

Status in Keystone:
  Fix Committed
Status in Keystone kilo series:
  New
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  Using the latest devstack, I am able to request a V2 token for a user
  and project in a non-default domain. This is problematic, as
  non-default domains are not supposed to be visible to V2 APIs.

  Steps to reproduce:

  1) install devstack

  2) run these commands

  gyee@dev:~$ openstack --os-identity-api-version 3 --os-username admin 
--os-password secrete --os-user-domain-id default --os-project-name admin 
--os-project-domain-id default --os-auth-url http://localhost:5000 domain list
  
  +----------------------------------+---------+---------+----------------------------------------------------------------------+
  | ID                               | Name    | Enabled | Description                                                          |
  +----------------------------------+---------+---------+----------------------------------------------------------------------+
  | 769ad7730e0c4498b628aa8dc00e831f | foo     | True    |                                                                      |
  | default                          | Default | True    | Owns users and tenants (i.e. projects) available on Identity API v2. |
  +----------------------------------+---------+---------+----------------------------------------------------------------------+
  gyee@dev:~$ openstack --os-identity-api-version 3 --os-username admin 
--os-password secrete --os-user-domain-id default --os-project-name admin 
--os-project-domain-id default --os-auth-url http://localhost:5000 user list 
--domain 769ad7730e0c4498b628aa8dc00e831f
  +--+--+
  | ID   | Name |
  +--+--+
  | cf0aa0b2d5db4d67a94d1df234c338e5 | bar  |
  +--+--+
  gyee@dev:~$ openstack --os-identity-api-version 3 --os-username admin 
--os-password secrete --os-user-domain-id default --os-project-name admin 
--os-project-domain-id default --os-auth-url http://localhost:5000 project list 
--domain 769ad7730e0c4498b628aa8dc00e831f
  +--+-+
  | ID   | Name|
  +--+-+
  | 413abdbfef5544e2a5f3e8ac6124dd29 | foo-project |
  +--+-+
  gyee@dev:~$ curl -k -H 'Content-Type: application/json' -d '{"auth": 
{"passwordCredentials": {"userId": "cf0aa0b2d5db4d67a94d1df234c338e5", 
"password": "secrete"}, "tenantId": "413abdbfef5544e2a5f3e8ac6124dd29"}}' 
-XPOST http://localhost:35357/v2.0/tokens | python -mjson.tool
    % Total% Received % Xferd  Average Speed   TimeTime Time  
Current
   Dload  Upload   Total   SpentLeft  Speed
  100  3006  100  2854  100   152  22164   1180 --:--:-- --:--:-- --:--:-- 22472
  {
  "access": {
  "metadata": {
  "is_admin": 0,
  "roles": [
  "2b7f29ebd1c8453fb91e9cd7c2e1319b",
  "9fe2ff9ee4384b1894a90878d3e92bab"
  ]
  },
  "serviceCatalog": [
  {
  "endpoints": [
  {
  "adminURL": 
"http://10.0.2.15:8774/v2/413abdbfef5544e2a5f3e8ac6124dd29";,
  "id": "3a92a79a21fb41379fa3e135be65eeff",
  "internalURL": 
"http://10.0.2.15:8774/v2/413abdbfef5544e2a5f3e8ac6124dd29";,
  "publicURL": 
"http://10.0.2.15:8774/v2/413abdbfef5544e2a5f3e8ac6124dd29";,
  "region": "RegionOne"
  }
  ],
  "endpoints_links": [],
  "name": "nova",
  "type": "compute"
  },
  {
  "endpoints": [
  {
  "adminURL": 
"http://10.0.2.15:8776/v2/413abdbfef5544e2a5f3e8ac6124dd29";,
  "id": "64338d9eb3054598bcee30443c678e2a",
  "internalURL": 
"http://10.0.2.15:8776/v2/413abdbfef5544e2a5f3e8ac6124dd29";,
  "publicURL": 
"http://10.0.2.15:8776/v2/413abdbfef5544e2a5f3e8ac6124dd29";,
  "region": "RegionOne"
    

[Yahoo-eng-team] [Bug 1488074] [NEW] Moving translation to HTML for launch-instance source step

2015-08-24 Thread Rob Cresswell
Public bug reported:

We should clean out old gettext usage and move the strings into HTML
files. This bug addresses the move for the launch-instance source step.

** Affects: horizon
 Importance: Low
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
   Importance: Undecided => Low

** Changed in: horizon
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1488074

Title:
  Moving translation to HTML for launch-instance source step

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We should clean out old gettext usage and move the strings into HTML
  files. This bug addresses the move for the launch-instance source step.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1488074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438093] Re: Redundant method _set_vm_state in conductor _live_migrate

2015-08-24 Thread jichenjc
After another look, I think two occurrences make it OK to write a
helper function.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438093

Title:
  Redundant method _set_vm_state in conductor _live_migrate

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In this file:
  https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L600
  implementing the _set_vm_state() method inside _live_migrate() is
  unnecessary.

  The method from line #590 could be used instead:

      def _set_vm_state_and_notify(self, context, instance_uuid, method,
                                   updates, ex, request_spec):
          scheduler_utils.set_vm_state_and_notify(
              context, instance_uuid, 'compute_task',
              method, updates, ex, request_spec, self.db)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1438093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328367] Re: Do not set vm error state when raising MigrationError

2015-08-24 Thread jichenjc
The patch mentioned above is merged; I believe we can close the bug?
** Changed in: nova
   Status: Confirmed => Won't Fix

** Changed in: nova
   Status: Won't Fix => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328367

Title:
  Do not set vm error state when raising MigrationError

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Control Node: 101.0.0.20 (also has compute service, but we do not use it)
  Compute Node:  101.0.0.30

  nova version:
  2014.1.b2-847-ga891e04

  in control node nova.conf
  allow_resize_to_same_host = True
  and
  in compute node nova.conf
  allow_resize_to_same_host = False

  detail:
  1. boot an instance in compute node
  nova boot --image 51c4a908-c028-4ce2-bbd1-8b0e15d8d829 --flavor 84 --nic 
net-id=308840da-6440-4599-923a-2edd290971d3 --availability-zone 
nova:compute.localdomain migrate_test

  2. resize it to flavor type 1
  nova resize   migrate_test 1

  3. The instance was set to the error state when the resize failed.

  #nova list
  
  +--------------------------------------+--------------+--------+-------------+---------+-------------------+
  | a1424990-182a-4bc2-8c17-aa4808a49472 | migrate_test | ERROR  | resize_prep | Running | private=20.0.0.15 |
  +--------------------------------------+--------------+--------+-------------+---------+-------------------+

  #nova show
  
  | config_drive |  

 |
  | created  | 2014-06-09T09:31:35Z 

 |
  | fault| {"message": "", "code": 500, "details": "  File 
\"/opt/stack/nova/nova/compute/manager.py\", line 3104, in prep_resize |
  |  | node)

 |
  |  |   File 
\"/opt/stack/nova/nova/compute/manager.py\", line 3058, in _prep_resize 
   |
  |  | raise 
exception.MigrationError(msg)   
|
  |  | ", "created": "2014-06-10T03:54:39Z"}
   |
  | flavor   | m1.micro (84)

 |
  | hostId   | 
f73013b029032929598a4a54586e4469c2c7cd676c147f6601f73c58
  

  error log in compute node:

  2014-06-10 11:54:48.372 ERROR nova.compute.manager 
[req-6a4ac25a-7d24-40c6-9f8d-435b4adb6fff admin admin] [instance: a1424990-182a
  -4bc2-8c17-aa4808a49472] Setting instance vm_state to ERROR
  2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472] Traceback (most recent call la
  st):
  2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472]   File "/opt/stack/nova/nova/c
  ompute/manager.py", line 5231, in _error_out_instance_on_exception
  2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472] yield
  2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472]   File "/opt/stack/nova/nova/c
  ompute/manager.py", line 3111, in prep_resize
  2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472] filter_properties)
  2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472]   File 
"/opt/stack/nova/nova/compute/manager.py", line 3104, in prep_resize
  2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472] node)
  2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472]   File 
"/opt/stack/nova/nova/compute/manager.py", line 3058, in _prep_resize
  2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472] raise exception.MigrationError(msg)
  2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc2-8c17-aa4808a49472] MigrationError: destination same as 
source!
  2014-06-10 11:54:48.372 TRACE nova.compute.manager [instance: 
a1424990-182a-4bc

[Yahoo-eng-team] [Bug 1488037] [NEW] Artifacts: HTTP 500 when uploading a blob file without specified size

2015-08-24 Thread Alexander Tivelkov
Public bug reported:

An attempt to upload a blob to an artifact without specifying the
content length in a header causes an HTTP 500 error in the Glance API.
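
The kind of guard that would avoid the 500 (a sketch using webob, not
the actual Glance code):

    # Sketch: reject a blob upload without a Content-Length up front,
    # returning 411 Length Required instead of crashing with a 500.
    import webob.exc

    def require_content_length(req):
        if req.content_length is None:
            raise webob.exc.HTTPLengthRequired(
                explanation="Content-Length is required to upload a blob")
        return req.content_length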

** Affects: glance
 Importance: Undecided
 Status: Confirmed


** Tags: artifacts

** Tags added: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1488037

Title:
  Artifacts: HTTP 500 when uploading a blob file without specified size

Status in Glance:
  Confirmed

Bug description:
  An attempt to upload a blob to an artifact without specifying the
  content length in a header causes an HTTP 500 error in the Glance API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1488037/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488032] [NEW] "name 'NetworkRBAC' is not defined" Error when querying the db directly using models_v2

2015-08-24 Thread bharath
Public bug reported:

If we run db queries directly after importing only models_v2, we get
the failure below. The latest RBAC changes
(https://review.openstack.org/gitweb?p=openstack%2Fneutron.git;a=commitdiff;h=9a8d015052b0a6419e3b2adece8211fec9710c6e)
caused this:

InvalidRequestError: When initializing mapper Mapper|Network|networks,
expression 'NetworkRBAC' failed to locate a name ("name 'NetworkRBAC' is
not defined"). If this is a class name, consider adding this
relationship() to the <class 'neutron.db.models_v2.Network'> class after
both dependent classes have been defined.


Instead of forcing the plugin and other modules to import RBAC, import
the RBAC models directly in models_v2.py.
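
The underlying SQLAlchemy behaviour is easy to reproduce outside
neutron; a standalone sketch (import path assumes SQLAlchemy 1.4+):

    from sqlalchemy import Column, ForeignKey, Integer
    from sqlalchemy.orm import declarative_base, relationship

    Base = declarative_base()

    class Network(Base):
        __tablename__ = 'networks'
        id = Column(Integer, primary_key=True)
        # 'NetworkRBAC' is resolved by name the first time the mapper
        # is used; if the module defining it was never imported,
        # SQLAlchemy raises the InvalidRequestError quoted above.
        rbac_entries = relationship('NetworkRBAC')

    # Defining (or importing) the class in the same module, as the
    # report suggests for models_v2.py, makes the name resolvable:
    class NetworkRBAC(Base):
        __tablename__ = 'networkrbacs'
        id = Column(Integer, primary_key=True)
        network_id = Column(Integer, ForeignKey('networks.id'))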

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488032

Title:
  "name 'NetworkRBAC' is not defined" Error when quering the db directly
  using models_v2

Status in neutron:
  New

Bug description:
  If we run db queries directly after importing only models_v2, we get
  the failure below. The latest RBAC changes
  (https://review.openstack.org/gitweb?p=openstack%2Fneutron.git;a=commitdiff;h=9a8d015052b0a6419e3b2adece8211fec9710c6e)
  caused this:

  InvalidRequestError: When initializing mapper Mapper|Network|networks,
  expression 'NetworkRBAC' failed to locate a name ("name 'NetworkRBAC'
  is not defined"). If this is a class name, consider adding this
  relationship() to the <class 'neutron.db.models_v2.Network'> class
  after both dependent classes have been defined.


  Instead of forcing the plugin and other modules to import RBAC, import
  the RBAC models directly in models_v2.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488021] [NEW] Display alembic branch in neutron-db-manage current

2015-08-24 Thread Matt Thompson
Public bug reported:

In my AIO test instance, I have just upgraded neutron from
7.0.0.0b3.dev200 to 7.0.0.0b3.dev356.  If I run 'neutron-db-manage
current', I see:

root@aio_neutron_server_container-47ef3931:~# neutron-db-manage current 
2>/dev/null
  Running current for neutron ...
2a16083502f3 (head)
1b4c6e320f79
  OK
root@aio_neutron_server_container-47ef3931:~#

Based on this, I can see that one branch is not on head, and that
migrations will need to be applied to that specific branch.

In our deployment tooling, what we currently do after neutron has
upgraded is run 'neutron-db-manage upgrade liberty_expand@head' while
neutron-server is up and running.  We then proceed to shut down all
neutron-server instances and run 'neutron-db-manage upgrade
liberty_contract@head'.  Once that has completed, we start up neutron-
server again.

What would be ideal is if the 'neutron-db-manage current' output also
indicated which alembic branch each migration ID refers to. That way,
we could determine whether we're at head for a given branch and, if
not, proceed with applying the migrations for that specific branch.
For the liberty_expand@head branch this is a non-issue, since those
migrations can run with neutron-server up and responding. However, if
we can avoid having to shut down neutron-server unnecessarily when
there are no pending liberty_contract@head migrations, this would give
us a much smoother deployment experience.
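
A sketch of the check this would enable in deployment tooling (the
'(contract)' branch label in the parsed output is the hypothetical
part):

    # Hypothetical sketch: if 'neutron-db-manage current' labelled each
    # revision with its branch, tooling could skip the neutron-server
    # shutdown when the contract branch is already at head.
    import subprocess

    def contract_at_head():
        out = subprocess.check_output(["neutron-db-manage", "current"],
                                      text=True)
        for line in out.splitlines():
            if "(contract)" in line:  # hypothetical branch label
                return "(head)" in line
        return False

    if contract_at_head():
        print("no pending contract migrations; leave neutron-server up")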

Please give me a shout if I can provide any further information.

Thanks,
Matt

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488021

Title:
  Display alembic branch in neutron-db-manage current

Status in neutron:
  New

Bug description:
  In my AIO test instance, I have just upgraded neutron from
  7.0.0.0b3.dev200 to 7.0.0.0b3.dev356.  If I run 'neutron-db-manage
  current', I see:

  root@aio_neutron_server_container-47ef3931:~# neutron-db-manage current 
2>/dev/null
Running current for neutron ...
  2a16083502f3 (head)
  1b4c6e320f79
OK
  root@aio_neutron_server_container-47ef3931:~#

  Based on this, I can see that one branch is not on head, and that
  migrations will need to be applied to that specific branch.

  In our deployment tooling, what we currently do after neutron has
  upgraded is run 'neutron-db-manage upgrade liberty_expand@head' while
  neutron-server is up and running.  We then proceed to shut down all
  neutron-server instances and run 'neutron-db-manage upgrade
  liberty_contract@head'.  Once that has completed, we start up neutron-
  server again.

  What would be ideal is if the 'neutron-db-manage current' output also
  indicated which alembic branch each migration ID refers to. That way,
  we could determine whether we're at head for a given branch and, if
  not, proceed with applying the migrations for that specific branch.
  For the liberty_expand@head branch this is a non-issue, since those
  migrations can run with neutron-server up and responding. However, if
  we can avoid having to shut down neutron-server unnecessarily when
  there are no pending liberty_contract@head migrations, this would
  give us a much smoother deployment experience.

  Please give me a shout if I can provide any further information.

  Thanks,
  Matt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488015] [NEW] Potential race condition in l3-ha handling of l2pop initial master selection

2015-08-24 Thread Ihar Hrachyshka
Public bug reported:

In _ensure_host_set_on_port, if no master is set for the router yet, we
get None from get_active_host_for_ha_router, and in that case we use
the reporting agent's host as the active host in the port binding,
assuming that later, when the master is elected, it will be reset to
the proper value.

The race can occur as follows: we first fetch the active host for the
port and get None because no master has been elected yet; the master is
then elected; and only then do we hit the database with our arbitrary
host. In the end the port binding contains a host that may not reflect
the master (assuming the agent that sent sync_routers() is not the one
that became the master).
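
One way to narrow the window (a sketch, not the neutron fix) is to make
the fallback write conditional, so it never overwrites a host recorded
by the real election; PortBinding here stands in for the real model:

    # Hypothetical sketch: compare-and-swap instead of a blind write --
    # only set the fallback host while no active host is recorded yet.
    def ensure_host_on_port(session, PortBinding, port_id, fallback_host):
        with session.begin():
            updated = session.query(PortBinding).filter(
                PortBinding.port_id == port_id,
                PortBinding.host.is_(None),
            ).update({'host': fallback_host}, synchronize_session=False)
        return bool(updated)  # False: a master was elected first; keep it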

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488015

Title:
  Potential race condition in l3-ha handling of l2pop initial master
  selection

Status in neutron:
  New

Bug description:
  In _ensure_host_set_on_port, if no master is set for the router yet,
  we get None from get_active_host_for_ha_router, and in that case we
  use the reporting agent's host as the active host in the port binding,
  assuming that later, when the master is elected, it will be reset to
  the proper value.

  The race can occur as follows: we first fetch the active host for the
  port and get None because no master has been elected yet; the master
  is then elected; and only then do we hit the database with our
  arbitrary host. In the end the port binding contains a host that may
  not reflect the master (assuming the agent that sent sync_routers() is
  not the one that became the master).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488015/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1309058] Re: Invalid vcpus_used count after failed migration

2015-08-24 Thread Zhenzan Zhou
** Changed in: nova
 Assignee: (unassigned) => Zhenzan Zhou (zhenzan-zhou)

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1309058

Title:
  Invalid vcpus_used count after failed migration

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I have a setup consisting of controller + network + 2 x compute nodes.
  The compute nodes have 8 cores each and use only local storage for
  VMs.

  Initially on controller 1 I have 3 VMs running with one VCPU for each
  of them:

  2014-04-17 05:06:18.985 2137 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for 
mz216srv014.mds.t-mobile.net:mz216srv014.mds.t-mobile.net
  2014-04-17 05:06:38.244 2137 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
  2014-04-17 05:06:38.605 2137 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 25450
  2014-04-17 05:06:38.605 2137 AUDIT nova.compute.resource_tracker [-] Free 
disk (GB): 207
  2014-04-17 05:06:38.606 2137 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 5

  
  When I tried to migrate a VM from compute 1 to compute 2, I got the
  following error due to a missing SSH key setup:

  2014-04-17 05:06:38.636 2137 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for 
mz216srv014.mds.t-mobile.net:mz216srv014.mds.t-mobile.net
  2014-04-17 15:29:57.239 2137 ERROR nova.compute.manager 
[req-74924eff-3043-473a-a38a-7341945826fc 90aec6fcee154bb3896301a8f98077ac 
9053ec1b088240d7a0844e479343097d] [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735] Setting instance vm_state to ERROR
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735] Traceback (most recent call last):
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 5536, in 
_error_out_instance_on_exception
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735] yield
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735]   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3438, in 
resize_instance
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735] block_device_info)
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735]   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 4883, in 
migrate_disk_and_power_off
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735] utils.execute('ssh', dest, 'mkdir', 
'-p', inst_base)
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735]   File 
"/usr/lib/python2.6/site-packages/nova/utils.py", line 164, in execute
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735] return processutils.execute(*cmd, 
**kwargs)
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735]   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py", line 
193, in execute
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735] cmd=' '.join(cmd))
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735] ProcessExecutionError: Unexpected error 
while running command.
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735] Command: ssh 192.168.119.15 mkdir -p 
/var/lib/nova/instances/c852434a-5172-4eb5-ad6d-9c7dbaa35735
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735] Exit code: 255
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735] Stdout: ''
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735] Stderr: 'Host key verification 
failed.\r\n'
  2014-04-17 15:29:57.239 2137 TRACE nova.compute.manager [instance: 
c852434a-5172-4eb5-ad6d-9c7dbaa35735] 
  2014-04-17 15:29:57.625 2137 ERROR oslo.messaging.rpc.dispatcher [-] 
Exception during message handling: Unexpected error while running command.
  Command: ssh 192.168.119.15 mkdir -p 
/var/lib/nova/instances/c852434a-5172-4eb5-ad6d-9c7dbaa35735
  Exit code: 255
  Stdout: ''
  Stderr: 'Host key verification failed.\r\n'
  2014-04-17 15:29:57.625 2137 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-04-17 15:29:57.625 2137