[Yahoo-eng-team] [Bug 1623799] [NEW] Serial console not show up on horizon dashboard

2016-09-14 Thread Dao Cong Tien
Public bug reported:

Issue
=====

The console tab in Horizon doesn't show the console of an instance.

Steps to reproduce
==================

* Install nova-serialproxy and nova-consoleauth
* Enable "serial console" feature in "nova.conf"
  [vnc]
  enabled=False
  [serial_console]
  enabled=True
  base_url=ws://:6083/
  serialproxy_host = 
  proxyclient_address = 
* Launch an instance
* Open the "console" tab of that instance

Expected behavior
=================

The serial console of the instance should show up and allow the user to
interact with it.

Actual behavior
===============

* Blank screen (not black screen) without any other info.

Logs & Env
==========

* No error/warning logs in Nova and Horizon
* Nova CLI nova get-serial-console  worked correctly and returned a valid 
websocket url.
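
To rule Horizon in or out, the returned URL can also be exercised directly.
A minimal sketch, assuming the `websocket-client` Python package and
substituting the ws:// URL (with its token) returned by get-serial-console:

    import websocket   # pip package: websocket-client

    # <WS_URL> stands for the URL returned by Nova; subprotocol names are an
    # assumption based on the usual websockify-style proxies.
    ws = websocket.create_connection("<WS_URL>",
                                     subprotocols=["binary", "base64"])
    ws.send("\r")      # nudge the guest console
    print(ws.recv())   # raw serial output should come back if the proxy works
    ws.close()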

Version
=======

* Used the latest devstack to install OpenStack with the default
configuration, except for adding the serial console settings above.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1623799

Title:
  Serial console not show up on horizon dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Issue
  =====

  The console tab in Horizon doesn't show the console of an instance.

  Steps to reproduce
  ==================

  * Install nova-serialproxy and nova-consoleauth
  * Enable "serial console" feature in "nova.conf"
[vnc]
enabled=False
[serial_console]
enabled=True
base_url=ws://:6083/
serialproxy_host = 
proxyclient_address = 
  * Launch an instance
  * Open the "console" tab of that instance

  Expected behavior
  =================

  The serial console of the instance should show up and allow the user to
  interact with it.

  Actual behavior
  ===============

  * Blank screen (not black screen) without any other info.

  Logs & Env
  ==========

  * No error/warning logs in Nova and Horizon
  * Nova CLI nova get-serial-console  worked correctly and returned a 
valid websocket url.

  Version
  =======

  * Used the latest devstack to install OpenStack with the default
  configuration, except for adding the serial console settings above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1623799/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623800] [NEW] Can't add exact count of fixed ips to port (regression)

2016-09-14 Thread Andrey Pavlov
Public bug reported:

environment: latest devstack. services: nova, glance, keystone, cinder, 
neutron, neutron-vpnaas, ec2-api
non-admin project.

We have a scenario in which we create a port and then add two fixed IPs to it.
Now neutron adds only one fixed_ip to the port, although as recently as this
Monday this worked. It looks like neutron now adds count-1 of the newly passed
fixed_ips.
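
A minimal reproduction sketch with python-neutronclient (an authenticated
client object named `neutron` is assumed; the UUIDs are the ones from the
logs below):

    subnet = "783bf3fa-a077-4a27-9fae-05a7e99fe19e"
    port_id = "0be539d4-ed3c-4bba-8a25-9cb1641335ab"
    body = {"port": {"fixed_ips": [
        {"subnet_id": subnet, "ip_address": "10.7.0.12"},  # existing address
        {"subnet_id": subnet},   # first new address requested
        {"subnet_id": subnet},   # second new address requested
    ]}}
    port = neutron.update_port(port_id, body)
    print(len(port["port"]["fixed_ips"]))   # expected 3, but only 2 come back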


logs:

2016-09-15 09:13:47.568 14578 DEBUG keystoneclient.session 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] REQ: curl -g -i --cacert 
"/opt/stack/data/ca-bundle.pem" -X GET 
http://10.10.0.4:9696/v2.0/ports/0be539d4-ed3c-4bba-8a25-9cb1641335ab.json -H 
"User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}87af409cdd3b0396fa6954bdc181fddac54d823d" 
_http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:206
2016-09-15 09:13:47.627 14578 DEBUG keystoneclient.session 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] RESP: [200] Content-Type: 
application/json Content-Length: 735 X-Openstack-Request-Id: 
req-4fcc4b7c-09d3-40b6-9332-a3974e422630 Date: Thu, 15 Sep 2016 06:13:47 GMT 
Connection: keep-alive 
RESP BODY: {"port": {"status": "DOWN", "created_at": "2016-09-15T06:13:46", 
"project_id": "c44a90bf24c14dcbac693c9bb8ac1923", "description": "", 
"allowed_address_pairs": [], "admin_state_up": true, "network_id": 
"93e7bdae-bb7b-4e3e-b33d-e80a561014ea", "tenant_id": 
"c44a90bf24c14dcbac693c9bb8ac1923", "extra_dhcp_opts": [], "updated_at": 
"2016-09-15T06:13:46", "name": "eni-30152657", "device_owner": "", 
"revision_number": 5, "mac_address": "fa:16:3e:12:34:dd", 
"port_security_enabled": true, "binding:vnic_type": "normal", "fixed_ips": 
[{"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e", "ip_address": 
"10.7.0.12"}], "id": "0be539d4-ed3c-4bba-8a25-9cb1641335ab", "security_groups": 
["2c51d398-1bd1-4084-8063-41bfe57788a4"], "device_id": ""}}
 _http_log_response 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:231
2016-09-15 09:13:47.628 14578 DEBUG neutronclient.v2_0.client 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] GET call to neutron for 
http://10.10.0.4:9696/v2.0/ports/0be539d4-ed3c-4bba-8a25-9cb1641335ab.json used 
request id req-4fcc4b7c-09d3-40b6-9332-a3974e422630 _append_request_id 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:127


2016-09-15 09:13:47.628 14578 DEBUG keystoneclient.session 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] REQ: curl -g -i --cacert 
"/opt/stack/data/ca-bundle.pem" -X PUT 
http://10.10.0.4:9696/v2.0/ports/0be539d4-ed3c-4bba-8a25-9cb1641335ab.json -H 
"User-Agent: python-neutronclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}87af409cdd3b0396fa6954bdc181fddac54d823d" -d '{"port": {"fixed_ips": 
[{"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e", "ip_address": 
"10.7.0.12"}, {"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e"}, 
{"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e"}]}}' _http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:206
2016-09-15 09:13:48.014 14578 DEBUG keystoneclient.session 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] RESP: [200] Content-Type: 
application/json Content-Length: 816 X-Openstack-Request-Id: 
req-0c86f7d1-ce47-4c9e-b842-1aa37c2ca024 Date: Thu, 15 Sep 2016 06:13:48 GMT 
Connection: keep-alive 
RESP BODY: {"port": {"status": "DOWN", "created_at": "2016-09-15T06:13:46", 
"project_id": "c44a90bf24c14dcbac693c9bb8ac1923", "description": "", 
"allowed_address_pairs": [], "admin_state_up": true, "network_id": 
"93e7bdae-bb7b-4e3e-b33d-e80a561014ea", "tenant_id": 
"c44a90bf24c14dcbac693c9bb8ac1923", "extra_dhcp_opts": [], "updated_at": 
"2016-09-15T06:13:47", "name": "eni-30152657", "device_owner": "", 
"revision_number": 6, "mac_address": "fa:16:3e:12:34:dd", 
"port_security_enabled": true, "binding:vnic_type": "normal", "fixed_ips": 
[{"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e", "ip_address": 
"10.7.0.12"}, {"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e", 
"ip_address": "10.7.0.9"}], "id": "0be539d4-ed3c-4bba-8a25-9cb1641335ab", 
"security_groups": ["2c51d398-1bd1-4084-8063-41bfe57788a4"], "device_id": ""}}
 _http_log_response 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:231
2016-09-15 09:13:48.015 14578 DEBUG neutronclient.v2_0.client 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] PUT call to neutron for 
http://10.10.0.4:9696/v2.0/ports/0be539d4-ed3c-4bba-8a25-9cb1641335ab.json used 
request id req-0c86f7d1-ce47-4c9e-b842-1aa37c2ca024 

[Yahoo-eng-team] [Bug 1584702] Re: IntegrityError occurs in archiving tables after a resized VM instance was deleted

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/323684
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1dfd79495e565eda3997b0a272c594a8d2c422d4
Submitter: Jenkins
Branch:master

commit 1dfd79495e565eda3997b0a272c594a8d2c422d4
Author: Takashi NATSUME 
Date:   Thu Sep 15 09:21:10 2016 +0900

Fix an error in archiving 'migrations' table

Add soft deleting 'migrations' table when the VM instance is deleted.
And add soft deleting 'migrations' table when archiving deleted rows
for the case to upgrade.

Change-Id: Ica35ce2628dfcf412eb097c2c61fdde8828e9d90
Closes-Bug: #1584702


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1584702

Title:
  IntegrityError occurs in archiving tables after a resized VM instance
  was deleted

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  After a resized VM instance is deleted, an IntegrityError occurs when
  archiving tables (nova-manage db archive_deleted_rows).

  [How to reproduce]
  stack@devstack-master:~/nova$ openstack server list
  +--------------------------------------+---------+--------+--------------------------------+
  | ID                                   | Name    | Status | Networks                       |
  +--------------------------------------+---------+--------+--------------------------------+
  | 3a77cd99-3ee0-45af-a301-1016907efaba | server1 | ACTIVE | public=10.0.2.195, 2001:db8::3 |
  +--------------------------------------+---------+--------+--------------------------------+
  stack@devstack-master:~/nova$ openstack server resize --flavor m1.small 
server1
  stack@devstack-master:~/nova$ openstack server resize --confirm server1
  stack@devstack-master:~/nova$ openstack server delete server1

  mysql> select instance_uuid, migration_type, status, deleted from migrations;
  +--------------------------------------+----------------+-----------+---------+
  | instance_uuid                        | migration_type | status    | deleted |
  +--------------------------------------+----------------+-----------+---------+
  | 3a77cd99-3ee0-45af-a301-1016907efaba | resize         | confirmed |       0 |
  +--------------------------------------+----------------+-----------+---------+
  1 row in set (0.00 sec)

  mysql> select uuid, deleted from instances;
  +--------------------------------------+---------+
  | uuid                                 | deleted |
  +--------------------------------------+---------+
  | 3a77cd99-3ee0-45af-a301-1016907efaba |       1 |
  +--------------------------------------+---------+
  1 row in set (0.00 sec)

  stack@devstack-master:~/nova$ nova-manage db archive_deleted_rows 1000
  2016-05-23 19:23:08.434 WARNING nova.db.sqlalchemy.api [-] IntegrityError 
detected when archiving table instances: (pymysql.err.IntegrityError) (1451, 
u'Cannot delete or update a parent row: a foreign key constraint fails 
(`nova`.`migrations`, CONSTRAINT `fk_migrations_instance_uuid` FOREIGN KEY 
(`instance_uuid`) REFERENCES `instances` (`uuid`))') [SQL: u'DELETE FROM 
instances WHERE instances.id in (SELECT T1.id FROM (SELECT instances.id \nFROM 
instances \nWHERE instances.deleted != %(deleted_1)s ORDER BY instances.id \n 
LIMIT %(param_1)s) as T1)'] [parameters: {u'param_1': 971, u'deleted_1': 0}]

  mysql> select instance_uuid, migration_type, status, deleted from migrations;
  +--------------------------------------+----------------+-----------+---------+
  | instance_uuid                        | migration_type | status    | deleted |
  +--------------------------------------+----------------+-----------+---------+
  | 3a77cd99-3ee0-45af-a301-1016907efaba | resize         | confirmed |       0 |
  +--------------------------------------+----------------+-----------+---------+
  1 row in set (0.00 sec)

  mysql> select uuid, deleted from instances;
  +--------------------------------------+---------+
  | uuid                                 | deleted |
  +--------------------------------------+---------+
  | 3a77cd99-3ee0-45af-a301-1016907efaba |       1 |
  +--------------------------------------+---------+
  1 row in set (0.00 sec)

  [Environment]
  OS: Ubuntu 14.04 LTS(64bit)
  nova: master (commit 2505c5d8b1d9c075e20275ee903657640cc97c92)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1584702/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1618666] Re: deprecated warning for SafeConfigParser

2016-09-14 Thread janonymous
** Also affects: tempest
   Importance: Undecided
   Status: New

** Changed in: tempest
 Assignee: (unassigned) => janonymous (janonymous)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618666

Title:
  deprecated warning for SafeConfigParser

Status in Glance:
  In Progress
Status in glance_store:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in PBR:
  Fix Released
Status in python-swiftclient:
  In Progress
Status in OpenStack Object Storage (swift):
  In Progress
Status in tempest:
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress

Bug description:
  tox -e py34 is reporting a deprecation warning for SafeConfigParser

  /octavia/.tox/py34/lib/python3.4/site-packages/pbr/util.py:207: 
DeprecationWarning: The SafeConfigParser class has been renamed to ConfigParser 
in Python 3.2. This alias will be removed in future versions. Use ConfigParser 
directly instead.
parser = configparser.SafeConfigParser()
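
  For reference, the replacement the warning asks for is a direct rename; a
  minimal sketch (using six.moves is only one way to stay Python 2
  compatible):

      from six.moves import configparser

      parser = configparser.ConfigParser()   # replaces SafeConfigParser
      parser.read("setup.cfg")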

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1618666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607746] Fix merged to neutron (master)

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/369846
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3aa89f4d818ab6705c56287b58a268e9fd5113c8
Submitter: Jenkins
Branch:master

commit 3aa89f4d818ab6705c56287b58a268e9fd5113c8
Author: LIU Yulong 
Date:   Wed Sep 14 13:33:19 2016 +0800

Refactor for floating IP updating checks

1. Will not fill the API fip updating dict with tenant_id and id
anymore, floatingip_db will be passed around functions.
We need to make the `fip` dict come from the API as it orginal is.

2. Refactor some redundant log messages for fip's port tenant_id
check and IP version check.

Change-Id: Ic45c95d90f3aecfcb731453fb3fd62e6ed92893b
Partial-bug: #1607746


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607746

Title:
  Update floating IP extra attributes will unexpectedly disassociate it

Status in neutron:
  Fix Released

Bug description:
  Updating a floating IP's extra attributes will unexpectedly disassociate
  it.

  Currently a floating IP can be disassociated via two different API data
  dicts: {'port_id': null} or a dict without the `port_id` key.

  Also, a floating IP cannot be updated with its original port_id; such a
  request returns a bad request exception.

  This causes some known issues:
  1. Updating floating IP extra attributes, for instance the description,
  will unexpectedly disassociate it. This behavior will interrupt the
  user's service traffic. It happens because the user can submit a request
  dict without the port_id parameter to the floating IP update API, and
  the floating IP is then disassociated by default.
  2. If a user tries to update a floating IP extra attribute while passing
  the port_id it is already associated with, the neutron API returns a bad
  request exception.
  So there is no way to update the floating IP extra attributes without
  changing its association.

  There is already a bug for the floating IP extra-attribute update issue:
  https://bugs.launchpad.net/neutron/+bug/1578523

  This bug will be used to handle the API behavior issues.

  ----------
  (moved from bug 1578523)

  2. Update floating IP extra attributes will unexpectedly disassociate it
  step 1: Associate
  neutron --debug floatingip-associate cd3b0496-bd2e-48a8-8706-48d4c4d85c44 
303af774-b12f-462e-a38a-1c616b6cc335

  step 2: Update floating IP description
  curl -g -i -X PUT 
http://controller:9696/v2.0/floatingips/cd3b0496-bd2e-48a8-8706-48d4c4d85c44.json
 -H \
  "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H 
"Accept: application/json" \
  -H "X-Auth-Token: 531460c39c1744d1b5b83b6f20fc74a2" -d '{"floatingip": 
{"description": "303af774-b12f-462e-a38a-1c616b6cc335"}}'

  HTTP/1.1 200 OK
  Content-Type: application/json; charset=UTF-8
  Content-Length: 416
  X-Openstack-Request-Id: req-6c3bbbc6-9218-41d2-9a95-32bf44e0d145
  Date: Thu, 05 May 2016 06:27:07 GMT

  {"floatingip": {"router_id": null, "status": "ACTIVE", "description":
  "303af774-b12f-462e-a38a-1c616b6cc335", "dns_domain": "",
  "floating_network_id": "2cad629d-e523-4b83-90b9-c0cc0ba1250d",
  "fixed_ip_address": null, "floating_ip_address": "172.16.5.145",
  "port_id": null, "id": "cd3b0496-bd2e-48a8-8706-48d4c4d85c44",
  "tenant_id": "5ff1da9c235c4ebcaefeecf3fff7eb11", "dns_name": ""}}

  port_id is None, which means that the floating IP has been disassociated
  from the port 303af774-b12f-462e-a38a-1c616b6cc335. This is totally
  incorrect.
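
  The same behavior can be reproduced with python-neutronclient (client
  setup assumed; UUIDs as in the curl example above):

      fip_id = "cd3b0496-bd2e-48a8-8706-48d4c4d85c44"
      body = {"floatingip": {
          "description": "303af774-b12f-462e-a38a-1c616b6cc335"}}
      updated = neutron.update_floatingip(fip_id, body)
      print(updated["floatingip"]["port_id"])   # None: association dropped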

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1607746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623738] [NEW] Disabled state of host is not updated when reason is not provided.

2016-09-14 Thread Giridhar Jayavelu
Public bug reported:

When _set_host_enabled() in virt/libvirt/driver.py
is called to disable the service status of a host without
providing disabled_reason, "TypeError: cannot concatenate 'str' and
'NoneType' objects" is raised. This prevents the disabled state from being
updated.

Before concatenating disable_reason with DISABLE_PREFIX,
disabled_reason should be checked to see whether it is actually set.
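
A minimal sketch of the suggested check (DISABLE_PREFIX and the argument name
follow the report; the prefix value and the helper itself are illustrative):

    DISABLE_PREFIX = "AUTO: "

    def _disable_reason_text(disable_reason):
        # Guard against None so str + NoneType concatenation cannot happen.
        if disable_reason:
            return DISABLE_PREFIX + disable_reason
        return "AUTO: disabled without a reason given"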

** Affects: nova
 Importance: Undecided
 Assignee: Giridhar Jayavelu (gjayavelu)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Giridhar Jayavelu (gjayavelu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623738

Title:
  Disabled state of host is not updated when reason is not provided.

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When _set_host_enabled() in virt/libvirt/driver.py
  is called to disable the service status of a host without
  providing disabled_reason, "TypeError: cannot concatenate 'str' and
  'NoneType' objects" is raised. This prevents the disabled state from being
  updated.

  Before concatenating disable_reason with DISABLE_PREFIX,
  disabled_reason should be checked to see whether it is actually set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544861] Re: LBaaS: connection limit does not work with HA Proxy

2016-09-14 Thread Dustin Lundquist
** Also affects: octavia
   Importance: Undecided
   Status: New

** Changed in: octavia
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544861

Title:
  LBaaS: connection limit does not work with HA Proxy

Status in neutron:
  In Progress
Status in octavia:
  In Progress

Bug description:
  connection limit does not work with HA Proxy.

  It is set in the frontend section like:

  frontend 75a12b66-9d2a-4a68-962e-ec9db8c3e2fb
  option httplog
  capture cookie JSESSIONID len 56
  bind 192.168.10.20:80
  mode http
  default_backend fb8ba6e3-71a4-47dd-8a83-2978bafea6e7
  maxconn 5
  option forwardfor

  But the above configuration does not work.
  It should be set in the global section like:

  global
  daemon
  user nobody
  group haproxy
  log /dev/log local0
  log /dev/log local1 notice
  stats socket 
/var/lib/neutron/lbaas/fb8ba6e3-71a4-47dd-8a83-2978bafea6e7/sock mode 0666 
level user
  maxconn 5

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544861/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623732] [NEW] failed: route add -net "0.0.0.0/0" gw "10.1.0.1"

2016-09-14 Thread Armando Migliaccio
Public bug reported:

http://logs.openstack.org/63/370363/3/gate/gate-grenade-dsvm-neutron-
ubuntu-trusty/5581f78/

http://logs.openstack.org/09/368709/3/gate/gate-grenade-dsvm-neutron-
ubuntu-trusty/d1195ea/

There appear to be errors in network startup:

Initializing random number generator... done.
Starting acpid: OK
cirros-ds 'local' up at 5.68
no results found for mode=local. up 5.99. searched: nocloud configdrive ec2
Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending select for 10.1.0.10...
Lease of 10.1.0.10 obtained, lease time 86400
route: SIOCADDRT: File exists
WARN: failed: route add -net "0.0.0.0/0" gw "10.1.0.1"
cirros-ds 'net' up at 7.01

This leads to the usual:

tempest.lib.exceptions.SSHTimeout: Connection to the 172.24.5.9 via SSH
timed out.

** Affects: neutron
 Importance: High
 Status: New


** Tags: gate-failure

** Changed in: neutron
Milestone: None => newton-rc1

** Changed in: neutron
   Importance: Undecided => High

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623732

Title:
  failed: route add -net "0.0.0.0/0" gw "10.1.0.1"

Status in neutron:
  New

Bug description:
  http://logs.openstack.org/63/370363/3/gate/gate-grenade-dsvm-neutron-
  ubuntu-trusty/5581f78/

  http://logs.openstack.org/09/368709/3/gate/gate-grenade-dsvm-neutron-
  ubuntu-trusty/d1195ea/

  There appear to be errors in network startup:

  Initializing random number generator... done.
  Starting acpid: OK
  cirros-ds 'local' up at 5.68
  no results found for mode=local. up 5.99. searched: nocloud configdrive ec2
  Starting network...
  udhcpc (v1.20.1) started
  Sending discover...
  Sending select for 10.1.0.10...
  Lease of 10.1.0.10 obtained, lease time 86400
  route: SIOCADDRT: File exists
  WARN: failed: route add -net "0.0.0.0/0" gw "10.1.0.1"
  cirros-ds 'net' up at 7.01

  This leads to the usual:

  tempest.lib.exceptions.SSHTimeout: Connection to the 172.24.5.9 via
  SSH timed out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623732/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622833] Re: timestamp mechanism in linux bridge false positives

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/369179
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a2bd0b4b53db8468681eb2905e2fbc2f9073869a
Submitter: Jenkins
Branch:master

commit a2bd0b4b53db8468681eb2905e2fbc2f9073869a
Author: Kevin Benton 
Date:   Mon Sep 12 22:27:33 2016 -0700

LinuxBridge: Use ifindex for logical 'timestamp'

With Xenial (and maybe older versions), the modified timestamps
in /sys/class/net/(device_name) are not stable. They appear to
work for a period of time, and then when some kind of cache clears
on the kernel side, all of the timestamps are reset to the latest
access time.

This was causing the Linux Bridge agent to think that the interfaces
were experiencing local changes much more frequently than they actually
were, resulting in more polling to the Neutron server and subsequently
more BUILD->ACTIVE->BUILD->ACTIVE transitions in the logical model.

The purpose of the timestamp patch was to catch rapid server REBUILD
operations where the interface would be deleted and re-added within
a polling interval. Without it, these would be stuck in the BUILD
state since the agent wouldn't realize it needed to wire the ports.

This patch switches to looking at the IFINDEX of the interfaces to
use as a sort of logical timestamp. If an interface gets removed
and readded, it will get a different index, so the original timestamp
comparison logic will still work.

In the future, the agent should undergo a larger refactor to just
watch 'ip monitor' for netlink events to replace the polling of the
interface listing and the timestamp logic entirely. However, this
approach was taken due to the near term release and the ability to
back-port it to older releases.

This was verified with both Nova rebuild actions and Nova interface
attach/detach actions.

Change-Id: I016019885446bff6806268ab49cd5476d93ec61f
Closes-Bug: #1622833


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622833

Title:
  timestamp mechanism in linux bridge false positives

Status in neutron:
  Fix Released

Bug description:
  The linux bridge agent is picking up too many false positives in its
  detection mechanism for when devices have been modified locally. In
  the following the 4 tap devices attached to a particular bridge had
  timestamps that jumped forward even though none of the interfaces
  actually changed:

  2016-09-13 00:13:38.744 14179 DEBUG 
neutron.plugins.ml2.drivers.agent._common_agent 
[req-82c02245-80fd-4712-baa6-cdd4033315d1 - -] Adding locally changed devices 
to updated set: set(['tap422b85d9-95', 'tap9b365584-34', 'tapee2684f8-51', 
'tap66ef2d8e-3b']) scan_devices 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py:397
  2016-09-13 00:13:38.744 14179 DEBUG 
neutron.plugins.ml2.drivers.agent._common_agent 
[req-82c02245-80fd-4712-baa6-cdd4033315d1 - -] Agent loop found changes! 
{'current': set(['tap422b85d9-95', 'tapee2684f8-51', 'tap6028e7a2-c0', 
'tap9b365584-34', 'tap0960ffac-f9', 'tap7ba5f865-54', 'tap66ef2d8e-3b', 
'tapfe427ba3-63', 'tap475f33ef-c3']), 'timestamps': {'tap422b85d9-95': 
1473725618.73996, 'tapee2684f8-51': 1473725618.73996, 'tap6028e7a2-c0': None, 
'tap9b365584-34': 1473725618.73996, 'tap0960ffac-f9': 1473725618.73996, 
'tap7ba5f865-54': 1473725616.7399597, 'tap66ef2d8e-3b': 1473725618.73996, 
'tapfe427ba3-63': 1473725616.7399597, 'tap475f33ef-c3': None}, 'removed': 
set([]), 'added': set([]), 'updated': set(['tap422b85d9-95', 'tap9b365584-34', 
'tapee2684f8-51', 'tap66ef2d8e-3b'])} daemon_loop 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py:448

  
  This leads to the agent refetching the details, which puts the port in BUILD 
and then back to ACTIVE. This leads to sporadic failures when tempest tests are 
asserting that a port should be in the ACTIVE status.
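
  A rough sketch of the ifindex-as-logical-timestamp idea described in the
  commit message above (standard sysfs path; error handling simplified):

      def logical_timestamp(device_name):
          path = '/sys/class/net/%s/ifindex' % device_name
          try:
              with open(path) as f:
                  return int(f.read().strip())
          except IOError:
              return None   # device vanished between listing and reading

  A deleted and re-added tap device receives a new ifindex, so comparing the
  stored value with the current one still catches a delete/re-add within one
  polling interval, without relying on the unstable sysfs mtimes.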

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622833/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623691] Re: security group in use conflict during test cleanup

2016-09-14 Thread Armando Migliaccio
** No longer affects: neutron

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623691

Title:
  security group in use conflict during test cleanup

Status in tempest:
  In Progress

Bug description:
  ft12.1: tearDownClass 
(tempest.api.volume.test_volumes_actions.VolumesV1ActionsTest)_StringException: 
Traceback (most recent call last):
File "tempest/test.py", line 312, in tearDownClass
  six.reraise(etype, value, trace)
File "tempest/test.py", line 295, in tearDownClass
  teardown()
File "tempest/test.py", line 547, in clear_credentials
  cls._creds_provider.clear_creds()
File "tempest/common/dynamic_creds.py", line 411, in clear_creds
  self._cleanup_default_secgroup(creds.tenant_id)
File "tempest/common/dynamic_creds.py", line 358, in 
_cleanup_default_secgroup
  nsg_client.delete_security_group(secgroup['id'])
File "tempest/lib/services/network/security_groups_client.py", line 58, in 
delete_security_group
  return self.delete_resource(uri)
File "tempest/lib/services/network/base.py", line 41, in delete_resource
  resp, body = self.delete(req_uri)
File "tempest/lib/common/rest_client.py", line 307, in delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "tempest/lib/common/rest_client.py", line 665, in request
  resp, resp_body)
File "tempest/lib/common/rest_client.py", line 778, in _error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: An object with that identifier already exists
  Details: {u'type': u'SecurityGroupInUse', u'message': u'Security Group 
73bda5d7-e097-4f16-a6ae-96f314c7e885 in use.', u'detail': u''}

  It looks like the request to delete a security group is processed
  before the request to delete the port using it and thus the conflict:

  http://logs.openstack.org/77/346377/14/check/gate-grenade-dsvm-
  neutron-dvr-
  multinode/056b1cf/logs/new/screen-q-svc.txt.gz#_2016-09-14_20_51_33_845

  http://logs.openstack.org/77/346377/14/check/gate-grenade-dsvm-
  neutron-dvr-
  multinode/056b1cf/logs/new/screen-q-svc.txt.gz#_2016-09-14_20_51_34_337
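
  A cleanup-side mitigation would be to retry the delete while the port
  teardown completes; a rough sketch (client and exception names follow the
  traceback above, the retry loop itself is illustrative):

      import time
      from tempest.lib import exceptions as lib_exc

      def delete_secgroup_with_retry(nsg_client, secgroup_id,
                                     attempts=10, delay=1):
          for _ in range(attempts):
              try:
                  return nsg_client.delete_security_group(secgroup_id)
              except lib_exc.Conflict:   # port delete not finished yet
                  time.sleep(delay)
          return nsg_client.delete_security_group(secgroup_id)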

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1623691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623691] Re: security group in use conflict during test cleanup

2016-09-14 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/370479

** Changed in: tempest
   Status: Invalid => In Progress

** Changed in: tempest
 Assignee: (unassigned) => Matthew Treinish (treinish)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623691

Title:
  security group in use conflict during test cleanup

Status in tempest:
  In Progress

Bug description:
  ft12.1: tearDownClass 
(tempest.api.volume.test_volumes_actions.VolumesV1ActionsTest)_StringException: 
Traceback (most recent call last):
File "tempest/test.py", line 312, in tearDownClass
  six.reraise(etype, value, trace)
File "tempest/test.py", line 295, in tearDownClass
  teardown()
File "tempest/test.py", line 547, in clear_credentials
  cls._creds_provider.clear_creds()
File "tempest/common/dynamic_creds.py", line 411, in clear_creds
  self._cleanup_default_secgroup(creds.tenant_id)
File "tempest/common/dynamic_creds.py", line 358, in 
_cleanup_default_secgroup
  nsg_client.delete_security_group(secgroup['id'])
File "tempest/lib/services/network/security_groups_client.py", line 58, in 
delete_security_group
  return self.delete_resource(uri)
File "tempest/lib/services/network/base.py", line 41, in delete_resource
  resp, body = self.delete(req_uri)
File "tempest/lib/common/rest_client.py", line 307, in delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "tempest/lib/common/rest_client.py", line 665, in request
  resp, resp_body)
File "tempest/lib/common/rest_client.py", line 778, in _error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: An object with that identifier already exists
  Details: {u'type': u'SecurityGroupInUse', u'message': u'Security Group 
73bda5d7-e097-4f16-a6ae-96f314c7e885 in use.', u'detail': u''}

  It looks like the request to delete a security group is processed
  before the request to delete the port using it and thus the conflict:

  http://logs.openstack.org/77/346377/14/check/gate-grenade-dsvm-
  neutron-dvr-
  multinode/056b1cf/logs/new/screen-q-svc.txt.gz#_2016-09-14_20_51_33_845

  http://logs.openstack.org/77/346377/14/check/gate-grenade-dsvm-
  neutron-dvr-
  multinode/056b1cf/logs/new/screen-q-svc.txt.gz#_2016-09-14_20_51_34_337

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1623691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623708] [NEW] OVS trunk management does not tolerate agent failures

2016-09-14 Thread Armando Migliaccio
Public bug reported:

It is clear that patch [1] will be unable to complete in time for RC1.
This bug report is tracking the effort to complete it post RC1.

[1] https://review.openstack.org/#/c/365176/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623708

Title:
  OVS trunk management does not tolerate agent failures

Status in neutron:
  New

Bug description:
  It is clear that patch [1] will be unable to complete in time for
  RC1. This bug report is tracking the effort to complete it post RC1.

  [1] https://review.openstack.org/#/c/365176/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1620028] Re: Nova issue - InternalError: (1049, u"Unknown database 'nova_api'")

2016-09-14 Thread Erlon R. Cruz
I'm getting this error in a gate job:
http://logs.openstack.org/16/369516/6/check/gate-tempest-dsvm-full-
devstack-plugin-nfs-nv/cd3409a/logs/devstacklog.txt.gz

gerrit link: https://review.openstack.org/#/c/369516/

** Changed in: nova
   Status: Invalid => New

** Project changed: nova => devstack-plugin-sheepdog

** Project changed: devstack-plugin-sheepdog => devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1620028

Title:
  Nova issue - InternalError: (1049, u"Unknown database 'nova_api'")

Status in devstack:
  New

Bug description:
  Hi all,

  I ran stack.sh with devstack today. Devstack still installed
  successfully, but when I checked the stack.sh log I found an error:

  InternalError: (1049, u"Unknown database 'nova_api'")

  The detailed log is attached here:
  http://paste.openstack.org/show/566648/

  and full stack.sh log:
  https://drive.google.com/file/d/0B7Fzz6EvT2F9T0tVUHUtdk55SVE/view

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1620028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1530275] Re: Live snapshot is corrupted (possibly race condition?)

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/363926
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=6b20239a5d293f55889cd1bffa59e4792c75edbf
Submitter: Jenkins
Branch:master

commit 6b20239a5d293f55889cd1bffa59e4792c75edbf
Author: Sławek Kapłoński 
Date:   Wed Aug 31 20:28:36 2016 +

Fix race condition bug during live_snapshot

During live_snapshot creation, when nova starts block_rebase
operation in libvirt there is possibility that block_job is
not yet started and libvirt blockJobInfo method will return
status.end = 0 and status.cur = 0. In such case libvirt driver
does not wait to finish block rebase operation and that causes
a problem because created snapshot is corrupted.

This patch adds check if status.end != 0 to return information
that job is already finished.

Change-Id: I45ac06eae0b1949f746dae305469718649bfcf23
Closes-Bug: #1530275
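
A minimal sketch of the check the commit describes (cur/end are the fields
returned by libvirt's blockJobInfo; the surrounding wait loop is omitted):

    def is_block_job_complete(status):
        # end == 0 means the block job has not actually started yet, so the
        # rebase must not be treated as finished.
        return status.end != 0 and status.cur == status.end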


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1530275

Title:
  Live snapshot is corrupted (possibly race condition?)

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We are using nova 2:12.0.0-0ubuntu2~cloud0. Instance disks are stored
  in qcow2 files on ext4 filesystem. When we live snapshot, 90% of the
  time the produced image is corrupted; specifically, the image is only
  a few megabytes (e.g. 30 MB) in size, while the disk size is several
  GB. Here is the log from a corrupted snapshot:

  2015-12-31 01:40:33.304 16805 INFO nova.compute.manager 
[req-80187ec9-a3e7-4eaf-80d4-1617da40989e 94b1e02c35204ca89bd5aea99ff5ef2b 
8341c85ad9ae49408fa25074adba0480 - - -] [instance: 
f9d52a00-8466-4436-b5b4-f0244d54dfe1] instance snapshotting
  2015-12-31 01:40:33.410 16805 INFO nova.virt.libvirt.driver 
[req-80187ec9-a3e7-4eaf-80d4-1617da40989e 94b1e02c35204ca89bd5aea99ff5ef2b 
8341c85ad9ae49408fa25074adba0480 - - -] [instance: 
f9d52a00-8466-4436-b5b4-f0244d54dfe1] Beginning live snapshot process
  2015-12-31 01:40:34.964 16805 INFO nova.virt.libvirt.driver 
[req-80187ec9-a3e7-4eaf-80d4-1617da40989e 94b1e02c35204ca89bd5aea99ff5ef2b 
8341c85ad9ae49408fa25074adba0480 - - -] [instance: 
f9d52a00-8466-4436-b5b4-f0244d54dfe1] Snapshot extracted, beginning image upload
  2015-12-31 01:40:37.029 16805 INFO nova.virt.libvirt.driver 
[req-80187ec9-a3e7-4eaf-80d4-1617da40989e 94b1e02c35204ca89bd5aea99ff5ef2b 
8341c85ad9ae49408fa25074adba0480 - - -] [instance: 
f9d52a00-8466-4436-b5b4-f0244d54dfe1] Snapshot image upload complete

  The entire operation completes in a couple of seconds, which is
  unexpected.

  While testing, I added some sleep calls to the _live_snapshot function
  in virt/libvirt/driver.py to debug the problem. A few live snapshot
  runs were successful, but I'm not confident that it fixed the problem.
  Anyway, here is the code that I changed:

  try:
  # NOTE (rmk): blockRebase cannot be executed on persistent
  # domains, so we need to temporarily undefine it.
  # If any part of this block fails, the domain is
  # re-defined regardless.
  if guest.has_persistent_configuration():
  guest.delete_configuration()

  # NOTE (rmk): Establish a temporary mirror of our root disk and
  # issue an abort once we have a complete copy.
  dev.rebase(disk_delta, copy=True, reuse_ext=True, shallow=True)

  +time.sleep(10.0)
  while dev.wait_for_job():
  -time.sleep(0.5)
  +time.sleep(5.0)

  dev.abort_job()
  libvirt_utils.chown(disk_delta, os.getuid())
  finally:
  self._host.write_instance_config(xml)
  if require_quiesce:
  self.unquiesce(context, instance, image_meta)

  And the resulting log (which indicates that it sleeps not only for the
  initial 10-second call but even longer; this means that before the
  modification wait_for_job was returning false immediately, while after
  the modification it actually returns true after the initial sleep and
  seems to perform correctly):

  2015-12-31 01:42:12.438 21232 INFO nova.compute.manager 
[req-f3cc4b5b-98b0-4315-b514-de36a07cb8ed 94b1e02c35204ca89bd5aea99ff5ef2b 
8341c85ad9ae49408fa25074adba0480 - - -] [instance: 
f9d52a00-8466-4436-b5b4-f0244d54dfe1] instance snapshotting
  2015-12-31 01:42:12.670 21232 INFO nova.virt.libvirt.driver 
[req-f3cc4b5b-98b0-4315-b514-de36a07cb8ed 94b1e02c35204ca89bd5aea99ff5ef2b 
8341c85ad9ae49408fa25074adba0480 - - -] [instance: 
f9d52a00-8466-4436-b5b4-f0244d54dfe1] Beginning live snapshot process
  2015-12-31 01:43:02.411 21232 INFO nova.virt.libvirt.driver 
[req-f3cc4b5b-98b0-4315-

[Yahoo-eng-team] [Bug 1602081] Re: Use oslo.context's policy dict

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/340195
Committed: 
https://git.openstack.org/cgit/openstack/cinder/commit/?id=bc5a2d9741dfea75b7be0448f7322bb1ef6f028c
Submitter: Jenkins
Branch:master

commit bc5a2d9741dfea75b7be0448f7322bb1ef6f028c
Author: Jamie Lennox 
Date:   Mon Jul 11 11:25:46 2016 +1000

Use to_policy_values for enforcing policy

oslo_context's to_policy_values provides a standard list of parameters
that policy should be able to be enforced upon. The combination of this
and from_environ lets oslo.context handle adding new values to policy
enforcement.

Closes-Bug: #1602081
Change-Id: I8f70580e7209412800aa7b948602b003392ef238


** Changed in: cinder
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602081

Title:
  Use oslo.context's policy dict

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  This is a cross project goal to standardize the values available to
  policy writers and to improve the basic oslo.context object. It is
  part of the follow up work to bug #1577996 and bug #968696.

  There has been an ongoing problem for how we define the 'admin' role.
  Because tokens are project scoped having the 'admin' role on any
  project granted you the 'admin' role on all of OpenStack. As a
  solution to this keystone defined an is_admin_project field so that
  keystone defines a single project that your token must be scoped to to
  perform admin operations. This has been implemented.

  The next phase of this is to make all the projects understand the
  X-Is-Admin-Project header from keystonemiddleware and pass it to
  oslo_policy. However, this pattern of keystone changing something and
  then going to every project to fix it has been repeated a number of
  times now and we would like to make it much more automatic.

  Ongoing work has enhanced the base oslo.context object to include both
  the load_from_environ and to_policy_values methods. The
  load_from_environ classmethod takes an environment dict with all the
  standard auth_token and oslo middleware headers and loads them into
  their standard place on the context object.

  The to_policy_values() then creates a standard credentials dictionary
  with all the information that should be required to enforce policy
  from the context. The combination of these two methods means in future
  when authentication information needs to be passed to policy it can be
  handled entirely by oslo.context and does not require changes in each
  individual service.

  Note that in future a similar pattern will hopefully be employed to
  simplify passing authentication information over RPC to solve the
  timeout issues. This is a prerequisite for that work.

  There are a few common problems in services that are required to make
  this work:

  1. Most service context.__init__ functions take and discard **kwargs.
  This is so if the context.from_dict receives arguments it doesn't know
  how to handle (possibly because new things have been added to the base
  to_dict) it ignores them. Unfortunately to make the load_from_environ
  method work we need to pass parameters to __init__ that are handled by
  the base class.

  To make this work we simply have to do a better job of using
  from_dict. Instead of passing everything to __init__ and ignoring what
  we don't know we have from_dict extract only the parameters that
  context knows how to use and call __init__ with those.

  2. The parameters passed to the base context.__init__ are old.
  Typically they are user and tenant where most services expect user_id
  and project_id. There is ongoing work to improve this in oslo.context
  but for now we have to ensure that the subclass correctly sets and
  uses the right variable names.

  3. Some services provide additional information to the policy
  enforcement method. To keep this working we will simply override the
  to_policy_values method in the subclasses, as in the sketch below.
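
  A rough sketch of points 1 and 3 for a hypothetical service context
  subclass (assuming an oslo.context release that provides from_dict and
  to_policy_values):

      from oslo_context import context

      class ServiceRequestContext(context.RequestContext):
          @classmethod
          def from_dict(cls, values):
              # Let the base class pick out only the keys it understands
              # instead of passing everything to __init__ and discarding
              # unknown kwargs.
              return super(ServiceRequestContext, cls).from_dict(values)

          def to_policy_values(self):
              # Start from the standard credentials dict and layer on any
              # service-specific keys existing policy rules still expect.
              values = super(ServiceRequestContext, self).to_policy_values()
              values['is_admin'] = self.is_admin
              return values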

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1602081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623703] [NEW] libvirt: hard reboot incorrectly create nova/instances//_disk with lvm backend

2016-09-14 Thread vu tran
Public bug reported:

Configure Nova compute to use the lvm backend and spawn an instance based on
a glance image; under the folder nova/instances/ we don't see any
_disk file, which is correct. Next, if we do a hard reboot on
this instance, libvirt incorrectly creates the file
nova/instances//_disk.

Steps to reproduce under devstack:

* On compute node, create LVM volume group "image-lvm-local"
* On compute node, modify nova config file under [libvirt] to enable lvm 
backend with
  images_type = "lvm" and images_volume_group = "image-lvm-local"
* Start an instance; no file nova/instances//_disk exists
* Do a hard reboot on the instance, and nova/instances//_disk is 
incorrectly created.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623703

Title:
  libvirt: hard reboot incorrectly create
  nova/instances//_disk with lvm backend

Status in OpenStack Compute (nova):
  New

Bug description:
  Configure Nova compute to use the lvm backend and spawn an instance based on
  a glance image; under the folder nova/instances/ we don't see any
  _disk file, which is correct. Next, if we do a hard reboot on
  this instance, libvirt incorrectly creates the file
  nova/instances//_disk.

  Steps to reproduce under devstack:

  * On compute node, create LVM volume group "image-lvm-local"
  * On compute node, modify nova config file under [libvirt] to enable lvm 
backend with
images_type = "lvm" and images_volume_group = "image-lvm-local"
  * Start an instance; no file nova/instances//_disk exists
  * Do a hard reboot on the instance, and nova/instances//_disk is 
incorrectly created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623691] Re: security group in use conflict during test cleanup

2016-09-14 Thread Armando Migliaccio
It looks like port deletion (as a consequence of server deletion) is
racing with the deletion of the default security group. This most likely
happens if Nova does not ensure that the port is indeed gone before
assuming that the server is indeed deleted.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623691

Title:
  security group in use conflict during test cleanup

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in tempest:
  Invalid

Bug description:
  ft12.1: tearDownClass 
(tempest.api.volume.test_volumes_actions.VolumesV1ActionsTest)_StringException: 
Traceback (most recent call last):
File "tempest/test.py", line 312, in tearDownClass
  six.reraise(etype, value, trace)
File "tempest/test.py", line 295, in tearDownClass
  teardown()
File "tempest/test.py", line 547, in clear_credentials
  cls._creds_provider.clear_creds()
File "tempest/common/dynamic_creds.py", line 411, in clear_creds
  self._cleanup_default_secgroup(creds.tenant_id)
File "tempest/common/dynamic_creds.py", line 358, in 
_cleanup_default_secgroup
  nsg_client.delete_security_group(secgroup['id'])
File "tempest/lib/services/network/security_groups_client.py", line 58, in 
delete_security_group
  return self.delete_resource(uri)
File "tempest/lib/services/network/base.py", line 41, in delete_resource
  resp, body = self.delete(req_uri)
File "tempest/lib/common/rest_client.py", line 307, in delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "tempest/lib/common/rest_client.py", line 665, in request
  resp, resp_body)
File "tempest/lib/common/rest_client.py", line 778, in _error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: An object with that identifier already exists
  Details: {u'type': u'SecurityGroupInUse', u'message': u'Security Group 
73bda5d7-e097-4f16-a6ae-96f314c7e885 in use.', u'detail': u''}

  It looks like the request to delete a security group is processed
  before the request to delete the port using it and thus the conflict:

  http://logs.openstack.org/77/346377/14/check/gate-grenade-dsvm-
  neutron-dvr-
  multinode/056b1cf/logs/new/screen-q-svc.txt.gz#_2016-09-14_20_51_33_845

  http://logs.openstack.org/77/346377/14/check/gate-grenade-dsvm-
  neutron-dvr-
  multinode/056b1cf/logs/new/screen-q-svc.txt.gz#_2016-09-14_20_51_34_337

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615632] Re: Horizon uses a table row class called 'status_unknown' when it should use 'table-warning'

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/358642
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=041af0fd0a20165bc1f70cc14ffc8d50de516a47
Submitter: Jenkins
Branch:master

commit 041af0fd0a20165bc1f70cc14ffc8d50de516a47
Author: Rob Cresswell 
Date:   Mon Aug 22 14:17:13 2016 +0100

Replace table row 'status_unknown' class with 'warning' class

The default bootstrap styling for table rows uses the same classes as
alerts (warning, danger etc). Rather than layering additional logic
around this with a new class, we should just fall back to the documented
boostrap method.

Also resets the warning colour to its default bootstrap variable, rather
than carrying an altered version.

Change-Id: I3472244fcbbd121a8de48d78084554760dab6385
Closes-Bug: 1615632


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1615632

Title:
  Horizon uses a table row class called 'status_unknown' when it should
  use 'table-warning'

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  We're adding extra handling for a table row 'status_unknown' class; we
  should just use bootstrap's 'warning' class and default to the
  bootstrap handling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1615632/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623691] Re: security group in use conflict during test cleanup

2016-09-14 Thread Armando Migliaccio
** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623691

Title:
  security group in use conflict during test cleanup

Status in neutron:
  New
Status in tempest:
  New

Bug description:
  ft12.1: tearDownClass 
(tempest.api.volume.test_volumes_actions.VolumesV1ActionsTest)_StringException: 
Traceback (most recent call last):
File "tempest/test.py", line 312, in tearDownClass
  six.reraise(etype, value, trace)
File "tempest/test.py", line 295, in tearDownClass
  teardown()
File "tempest/test.py", line 547, in clear_credentials
  cls._creds_provider.clear_creds()
File "tempest/common/dynamic_creds.py", line 411, in clear_creds
  self._cleanup_default_secgroup(creds.tenant_id)
File "tempest/common/dynamic_creds.py", line 358, in 
_cleanup_default_secgroup
  nsg_client.delete_security_group(secgroup['id'])
File "tempest/lib/services/network/security_groups_client.py", line 58, in 
delete_security_group
  return self.delete_resource(uri)
File "tempest/lib/services/network/base.py", line 41, in delete_resource
  resp, body = self.delete(req_uri)
File "tempest/lib/common/rest_client.py", line 307, in delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "tempest/lib/common/rest_client.py", line 665, in request
  resp, resp_body)
File "tempest/lib/common/rest_client.py", line 778, in _error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: An object with that identifier already exists
  Details: {u'type': u'SecurityGroupInUse', u'message': u'Security Group 
73bda5d7-e097-4f16-a6ae-96f314c7e885 in use.', u'detail': u''}

  It looks like the request to delete a security group is processed
  before the request to delete the port using it and thus the conflict:

  http://logs.openstack.org/77/346377/14/check/gate-grenade-dsvm-
  neutron-dvr-
  multinode/056b1cf/logs/new/screen-q-svc.txt.gz#_2016-09-14_20_51_33_845

  http://logs.openstack.org/77/346377/14/check/gate-grenade-dsvm-
  neutron-dvr-
  multinode/056b1cf/logs/new/screen-q-svc.txt.gz#_2016-09-14_20_51_34_337

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621582] Re: use_usb_tablet and pointer_model have different defaults making switching hard

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/367909
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f04dd04342705c8dc745308662b698bb54debf69
Submitter: Jenkins
Branch:master

commit f04dd04342705c8dc745308662b698bb54debf69
Author: Sahid Orentino Ferdjaoui 
Date:   Fri Sep 9 05:55:39 2016 -0400

libvirt: add ps2mouse in choice for pointer_model

This commit adds option ps2mouse to pointer_model, and set the default
value of pointer_model to usbtablet to do not break upgrade regarding
the default behavior of use_usb_tablet.

WHY: use_usb_tablet is by default set to True and during the
deprecation phase of use_usb_tablet, operators which have set that
option to false can't have the same behavior by using pointer_model
since use_usb_tablet takes precedence. Now operators can use
pointer_model=ps2mouse.

Change-Id: Id18b5503799922e4096bde296a9e7bb4f2a994aa
Closes-Bug: #1621582


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1621582

Title:
  use_usb_tablet and pointer_model have different defaults making
  switching hard

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The use_usb_tablet config option is deprecated in Newton and replaced
  with the pointer_model config option. The use_usb_tablet option
  defaults to True, and pointer_model defaults to None, and the only
  choices are None and 'usbtablet'.

  If pointer_model is not set, then use_usb_tablet is used as a fallback
  while transitioning to the new pointer_model option.

  The problem is they have different default values/behaviors.

  Currently devstack sets use_usb_tablet=False, which gives us warnings
  in CI runs because the option is deprecated. But changing it to None
  will make the nova code fallback to CONF.use_usb_tablet:

  
https://github.com/openstack/nova/blob/df15e467b61fee781e78b07bf910d6b411bafd44/nova/virt/libvirt/driver.py#L4541

  So you can't just stop setting use_usb_tablet if you want it disabled
  (set to False), because the code will fall back to the default and set it
  to True.

  I tried setting pointer_model to '' to get around the None check in
  the nova code, but that fails because we're using choices with the
  config option so only None and 'usbtablet' are allowed:

  http://logs.openstack.org/26/367526/1/check/gate-tempest-dsvm-neutron-
  full-ubuntu-
  xenial/1479bdd/logs/screen-n-api.txt.gz?level=TRACE#_2016-09-08_17_31_42_490

  This makes the transition from use_usb_tablet to pointer_model hard
  for anyone that wants this set to False like devstack does.

  We could allow setting '' as a choice for pointer_model to workaround
  this until use_usb_tablet is gone. We could also default
  use_usb_tablet to False to mimic pointer_model, but that's a change in
  default behavior without any warning for a release.

  We could also just ignore this and drop use_usb_tablet in Ocata and
  anyone that was setting it in nova.conf will just not have it picked
  up and used, but that's annoying for anyone that wants to get ahead of
  cleaning out deprecation warnings before upgrading to ocata.
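
  With the ps2mouse choice added by the fix above, an operator who had
  disabled the USB tablet can get the same behavior through the new option.
  Roughly (the exact option group in nova.conf is omitted here, so treat the
  placement as an assumption and check it against the deployed release):

    # deprecated setting being phased out
    # use_usb_tablet = False
    # equivalent behavior with the new option
    pointer_model = ps2mouse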

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1621582/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623691] [NEW] security group in use conflict during test cleanup

2016-09-14 Thread Armando Migliaccio
Public bug reported:

ft12.1: tearDownClass 
(tempest.api.volume.test_volumes_actions.VolumesV1ActionsTest)_StringException: 
Traceback (most recent call last):
  File "tempest/test.py", line 312, in tearDownClass
six.reraise(etype, value, trace)
  File "tempest/test.py", line 295, in tearDownClass
teardown()
  File "tempest/test.py", line 547, in clear_credentials
cls._creds_provider.clear_creds()
  File "tempest/common/dynamic_creds.py", line 411, in clear_creds
self._cleanup_default_secgroup(creds.tenant_id)
  File "tempest/common/dynamic_creds.py", line 358, in _cleanup_default_secgroup
nsg_client.delete_security_group(secgroup['id'])
  File "tempest/lib/services/network/security_groups_client.py", line 58, in 
delete_security_group
return self.delete_resource(uri)
  File "tempest/lib/services/network/base.py", line 41, in delete_resource
resp, body = self.delete(req_uri)
  File "tempest/lib/common/rest_client.py", line 307, in delete
return self.request('DELETE', url, extra_headers, headers, body)
  File "tempest/lib/common/rest_client.py", line 665, in request
resp, resp_body)
  File "tempest/lib/common/rest_client.py", line 778, in _error_checker
raise exceptions.Conflict(resp_body, resp=resp)
tempest.lib.exceptions.Conflict: An object with that identifier already exists
Details: {u'type': u'SecurityGroupInUse', u'message': u'Security Group 
73bda5d7-e097-4f16-a6ae-96f314c7e885 in use.', u'detail': u''}

It looks like the request to delete a security group is processed before
the request to delete the port using it and thus the conflict:

http://logs.openstack.org/77/346377/14/check/gate-grenade-dsvm-neutron-
dvr-
multinode/056b1cf/logs/new/screen-q-svc.txt.gz#_2016-09-14_20_51_33_845

http://logs.openstack.org/77/346377/14/check/gate-grenade-dsvm-neutron-
dvr-
multinode/056b1cf/logs/new/screen-q-svc.txt.gz#_2016-09-14_20_51_34_337

** Affects: neutron
 Importance: High
 Status: New


** Tags: gate-failure

** Changed in: neutron
   Importance: Undecided => High

** Summary changed:

- security group in use
+ security group in use conflict during test cleanup

** Tags added: gate-failure

** Changed in: neutron
Milestone: None => newton-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623691

Title:
  security group in use conflict during test cleanup

Status in neutron:
  New

Bug description:
  ft12.1: tearDownClass 
(tempest.api.volume.test_volumes_actions.VolumesV1ActionsTest)_StringException: 
Traceback (most recent call last):
File "tempest/test.py", line 312, in tearDownClass
  six.reraise(etype, value, trace)
File "tempest/test.py", line 295, in tearDownClass
  teardown()
File "tempest/test.py", line 547, in clear_credentials
  cls._creds_provider.clear_creds()
File "tempest/common/dynamic_creds.py", line 411, in clear_creds
  self._cleanup_default_secgroup(creds.tenant_id)
File "tempest/common/dynamic_creds.py", line 358, in 
_cleanup_default_secgroup
  nsg_client.delete_security_group(secgroup['id'])
File "tempest/lib/services/network/security_groups_client.py", line 58, in 
delete_security_group
  return self.delete_resource(uri)
File "tempest/lib/services/network/base.py", line 41, in delete_resource
  resp, body = self.delete(req_uri)
File "tempest/lib/common/rest_client.py", line 307, in delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "tempest/lib/common/rest_client.py", line 665, in request
  resp, resp_body)
File "tempest/lib/common/rest_client.py", line 778, in _error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: An object with that identifier already exists
  Details: {u'type': u'SecurityGroupInUse', u'message': u'Security Group 
73bda5d7-e097-4f16-a6ae-96f314c7e885 in use.', u'detail': u''}

  It looks like the request to delete a security group is processed
  before the request to delete the port using it and thus the conflict:

  http://logs.openstack.org/77/346377/14/check/gate-grenade-dsvm-
  neutron-dvr-
  multinode/056b1cf/logs/new/screen-q-svc.txt.gz#_2016-09-14_20_51_33_845

  http://logs.openstack.org/77/346377/14/check/gate-grenade-dsvm-
  neutron-dvr-
  multinode/056b1cf/logs/new/screen-q-svc.txt.gz#_2016-09-14_20_51_34_337

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623091] Re: keystonemiddleware dependency should be > 4.0.0

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/370011
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=9bbb0ce7a83e4af9f4ef04a35e6779dffaeb7e15
Submitter: Jenkins
Branch:master

commit 9bbb0ce7a83e4af9f4ef04a35e6779dffaeb7e15
Author: Itxaka 
Date:   Wed Sep 14 12:19:45 2016 +0200

Allow compatibility with keystonemiddleware 4.0.0

On keystonemiddleware 4.0.0 the base class is called
_BaseAuthProtocol, which was later changed to BaseAuthProtocol.
Due to this change keystone would not work with the 4.0.0
version, while it was still accepted in the requirements.
This fixes it by providing a fallback to the old naming

Change-Id: I859a2d15e63c8c857b0bcbb15c757b716c8c43ba
Closes-Bug: 1623091
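
For illustration, the kind of fallback described above can be written as a
small compatibility import (this is a sketch of the idea, not necessarily the
exact code merged into keystone/middleware/auth.py):

    try:
        # keystonemiddleware >= 4.1.0 exposes the public name
        from keystonemiddleware.auth_token import BaseAuthProtocol
    except ImportError:
        # keystonemiddleware == 4.0.0 only ships the private name
        from keystonemiddleware.auth_token import (
            _BaseAuthProtocol as BaseAuthProtocol)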


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1623091

Title:
  keystonemiddleware dependency should be > 4.0.0

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Right now keystonemiddleware requirement is as follows:

  keystonemiddleware!=4.1.0,!=4.5.0,>=4.0.0 # Apache-2.0

  
  Unfortunately, 4.0.0 (which is the minimum) won't work due to a breaking
change that renames the _BaseAuthProtocol class to BaseAuthProtocol [0], and
that class is used in keystone/middleware/auth.py [1]

  This was done in the change from 4.0.0 to 4.1.0 but the requirements
  were never bumped. Thus using latest keystone from master and
  keystonemiddleware == 4.0.0 results in failure:

  
  2016-09-13 17:06:05.465591 Traceback (most recent call last):
  2016-09-13 17:06:05.465603   File "/usr/bin/keystone-wsgi-admin", line 51, in 

  2016-09-13 17:06:05.465619 application = initialize_admin_application()
  2016-09-13 17:06:05.465624   File 
"/usr/lib/python2.7/site-packages/keystone/server/wsgi.py", line 132, in 
initialize_admin_application
  2016-09-13 17:06:05.465632 config_files=_get_config_files())
  2016-09-13 17:06:05.465636   File 
"/usr/lib/python2.7/site-packages/keystone/server/wsgi.py", line 69, in 
initialize_application
  2016-09-13 17:06:05.465641 startup_application_fn=loadapp)
  2016-09-13 17:06:05.465645   File 
"/usr/lib/python2.7/site-packages/keystone/server/common.py", line 50, in 
setup_backends
  2016-09-13 17:06:05.465651 res = startup_application_fn()
  2016-09-13 17:06:05.465654   File 
"/usr/lib/python2.7/site-packages/keystone/server/wsgi.py", line 66, in loadapp
  2016-09-13 17:06:05.465659 'config:%s' % find_paste_config(), name)
  2016-09-13 17:06:05.465663   File 
"/usr/lib/python2.7/site-packages/keystone/version/service.py", line 53, in 
loadapp
  2016-09-13 17:06:05.465702 controllers.latest_app = deploy.loadapp(conf, 
name=name)
  2016-09-13 17:06:05.465709   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
  2016-09-13 17:06:05.465841 return loadobj(APP, uri, name=name, **kw)
  2016-09-13 17:06:05.465853   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
  2016-09-13 17:06:05.465868 return context.create()
  2016-09-13 17:06:05.465876   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create
  2016-09-13 17:06:05.465897 return self.object_type.invoke(self)
  2016-09-13 17:06:05.465903   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
  2016-09-13 17:06:05.465909 **context.local_conf)
  2016-09-13 17:06:05.465921   File 
"/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 55, in fix_call
  2016-09-13 17:06:05.465969 val = callable(*args, **kw)
  2016-09-13 17:06:05.465980   File 
"/usr/lib/python2.7/site-packages/paste/urlmap.py", line 31, in urlmap_factory
  2016-09-13 17:06:05.466084 app = loader.get_app(app_name, 
global_conf=global_conf)
  2016-09-13 17:06:05.466101   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
  2016-09-13 17:06:05.466124 name=name, global_conf=global_conf).create()
  2016-09-13 17:06:05.466138   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 362, in 
app_context
  2016-09-13 17:06:05.466146 APP, name=name, global_conf=global_conf)
  2016-09-13 17:06:05.466152   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 450, in 
get_context
  2016-09-13 17:06:05.466171 global_additions=global_additions)
  2016-09-13 17:06:05.466177   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 562, in 
_pipeline_app_context
  2016-09-13 17:06:05.466192 for name in pipeline[:-1]]
  2016-09-13 17:06:05.466197   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 454, in 
get_context
  2016-09-13 17:06:05.466217 section)
  2016-09-13 17:06:05.466243   File 
"/usr/lib/pytho

[Yahoo-eng-team] [Bug 1619771] Re: in placement api format of GET .../inventories does not match spec

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/365633
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b221f11ee00fbf8b8a8c2b8e9ec9d761da2628b1
Submitter: Jenkins
Branch:master

commit b221f11ee00fbf8b8a8c2b8e9ec9d761da2628b1
Author: Chris Dent 
Date:   Mon Sep 5 12:11:04 2016 +

[placement] Correct serialization of inventory collections

The correct form is for resource_provider_generation to be its own
separate key, not repeated in each individual inventory. This was
probably caused by the refactoring that created _send_inventory and
_send_inventories, and a lack of sufficient test coverage.

Fixing this involves changes in both the placement api service, and
in the (thus far) only existing client, the scheduler reporting
client used in the resource tracker. The reporting client can be a
bit simpler now because of the cleaner behavior in the api.

Tests have been updated accordingly.

Change-Id: I3af1c7686a45c1a0d70fe704d3c7938810eff6a3
Closes-Bug: #1619771


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619771

Title:
  in placement api format of GET .../inventories does not match spec

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The correct format is described at
  http://specs.openstack.org/openstack/nova-specs/specs/newton/approved
  /generic-resource-pools.html#get-resource-providers-uuid-inventories

  In that format the resource provider generation is its own top level
  key.

  In the code the generation is repeated per resource class which means
  we cannot retrieve the resource provider without first inspecting an
  inventory.

  We should fix this sooner than later so that we have a simpler
  resource tracker.
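
  For illustration, the spec'd shape looks roughly like the following (the
  field values here are made up; only the structure matters):

    {
        "resource_provider_generation": 1,
        "inventories": {
            "DISK_GB": {
                "total": 1024,
                "reserved": 0,
                "min_unit": 1,
                "max_unit": 1024,
                "step_size": 1,
                "allocation_ratio": 1.0
            }
        }
    }

  The buggy serialization instead repeated resource_provider_generation inside
  each entry under "inventories".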

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1619771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597145] Re: contrail-alarm-gen provisioning works only once

2016-09-14 Thread Raj Reddy
** Changed in: juniperopenstack/r3.0
   Status: In Progress => Fix Committed

** Changed in: juniperopenstack/r3.0.2.x
   Status: In Progress => Fix Committed

** Changed in: juniperopenstack/trunk
   Status: In Progress => Fix Committed

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597145

Title:
  contrail-alarm-gen provisioning works only once

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in Juniper Openstack:
  Fix Committed
Status in Juniper Openstack r3.0 series:
  Fix Committed
Status in Juniper Openstack r3.0.2.x series:
  Fix Committed
Status in Juniper Openstack trunk series:
  Fix Committed

Bug description:
  contrail-alarm-gen provisioning works only once.
  fixup_contrail_alarm_gen assumes that the conf file is being changed for the 
first time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621345] Re: dhcp notifier should use core resource events

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/355117
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=181bdb374fc0c944b1168f27ac7b5cbb0ff0f3c3
Submitter: Jenkins
Branch:master

commit 181bdb374fc0c944b1168f27ac7b5cbb0ff0f3c3
Author: Kevin Benton 
Date:   Fri Aug 12 05:26:39 2016 -0700

Make DHCP notifier use core resource events

This makes the notifier subscribe to core resource events
and leverage them if they are available. This solves the
issue where internal core plugin calls from service plugins
were not generating DHCP agent notifications.

Closes-Bug: #1621345
Change-Id: I607635601caff0322fd0c80c9023f5c4f663ca25


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1621345

Title:
  dhcp notifier should use core resource events

Status in neutron:
  Fix Released

Bug description:
  The current DHCP notifier only tells the DHCP agents things when
  ports/subnets/networks are modified via the Neutron API. This is
  problematic because service plugins may modify ports directly via the
  core plugin API, which results in the agents not being notified unless
  special care is taken to make modifications via utility functions that
  emit API events[1]. This leaves too much room for subtle
  inconsistencies between the data model and the DHCP agent. The DHCP
  notifier should just subscribe directly to the callbacks for
  networks/subnets/ports.


  1.
  
https://github.com/openstack/neutron/blob/dc6508aae2819f2b718785b4da2c11f30bdc3ffd/neutron/plugins/common/utils.py#L175-L183
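
  As a rough sketch of the subscription approach (the merged patch wires this
  into the existing DHCP agent notifier rather than a standalone function, and
  the module paths below are the pre-neutron-lib ones used at the time):

    from neutron.callbacks import events
    from neutron.callbacks import registry
    from neutron.callbacks import resources

    def _notify_dhcp_agents(resource, event, trigger, **kwargs):
        # the real notifier would schedule the matching network/subnet/port
        # notification to the DHCP agents hosting the resource
        pass

    for res in (resources.NETWORK, resources.SUBNET, resources.PORT):
        for ev in (events.AFTER_CREATE, events.AFTER_UPDATE,
                   events.AFTER_DELETE):
            registry.subscribe(_notify_dhcp_agents, res, ev)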

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1621345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621650] Re: PortNotFound in DHCP agent logs

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/367679
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9d24490da8552542309dc7b9d6cbc695af4c6de6
Submitter: Jenkins
Branch:master

commit 9d24490da8552542309dc7b9d6cbc695af4c6de6
Author: Kevin Benton 
Date:   Thu Sep 8 15:19:06 2016 -0700

Handle racey teardowns in DHCP agent

Capture port not found exceptions from port updates of DHCP ports
that no longer exist. The DHCP agent already checks the return
value for None in case any of the other things went missing
(e.g. Subnet, Network), so checking for ports disappearing makes
sense. The corresponding agent-side log message for this has also
been downgraded to debug since this is a normal occurrence.

This also cleans up log noise from calling reload_allocations on
networks that have already been torn down due to all of the subnets
being removed.

Closes-Bug: #1621650
Change-Id: I495401d225c664b8f1cf7b3d51747f3b47c24fc0
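
A minimal sketch of the server-side pattern (function and variable names here
are illustrative, not the merged diff; PortNotFound is assumed to come from
neutron_lib.exceptions):

    from neutron_lib import exceptions as n_exc

    def safe_update_dhcp_port(plugin, context, port_id, port):
        """Return the updated port, or None if it vanished mid-update."""
        try:
            return plugin.update_port(context, port_id, port)
        except n_exc.PortNotFound:
            # racing with a teardown; callers already treat None as
            # "the resource went away", so no error needs to be raised
            return None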


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1621650

Title:
  PortNotFound in DHCP agent logs

Status in neutron:
  Fix Released

Bug description:
  The DHCP agent can call update_dhcp_port, in which case a PortNotFound
  exception can be thrown. This currently goes uncaught and leads to
  lots of log noise.

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Remote%20error%3A%20PortNotFound%5C%22

  
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server 
[req-cadd9638-976b-4c45-8a67-b8e027c31b07 - -] Exception during message handling
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
133, in _process_incoming
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
150, in dispatch
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
121, in _do_dispatch
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server ectxt.value 
= e.inner_exc
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server return 
f(*args, **kwargs)
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 82, in wrapped
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server 
traceback.format_exc())
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 77, in wrapped
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server return 
f(*args, **kwargs)
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/neutron/neutron/api/rpc/handlers/dhcp_rpc.py", line 274, in 
update_dhcp_port
  2016-09-08 07:23:35.069 13627 ERROR oslo_messaging.rpc.server return 
self._port_action(plu

[Yahoo-eng-team] [Bug 1623664] [NEW] Race between L3 agent and neutron-ns-cleanup

2016-09-14 Thread Gautam Kulkarni
Public bug reported:

I suspect a race between the neutron L3 agent and the neutron-netns-
cleanup script, which runs as a CRON job in Ubuntu. Here's a stack trace
in the router delete code path:

2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager [-] Error during 
notification for neutron.agent.metadata.driver.before_router_removed router, 
before_delete
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Traceback (most 
recent call last):
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py", line 141, in 
_notify_loop
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager 
callback(resource, event, trigger, **kwargs)
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/metadata/driver.py", line 176, 
in before_router_removed
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager 
router.iptables_manager.apply()
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 423, in apply
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager return 
self._apply()
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 431, in _apply
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager return 
self._apply_synchronized()
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 457, in _apply_synchronized
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager save_output = 
self.execute(args, run_as_root=True)
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 159, in 
execute
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager raise 
RuntimeError(m)
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager RuntimeError:
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Command: ['sudo', 
'/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8', 'iptables-save']
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Exit code: 1
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stdin:
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stdout:
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stderr: Cannot 
open network namespace "qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8": No such 
file or directory
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager
2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent [-] Error while 
deleting router 69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 344, in 
_safe_router_removed
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent 
self._router_removed(router_id)
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 360, in 
_router_removed
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent self, router=ri)
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/registry.py", line 44, in 
notify
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent 
_get_callback_manager().notify(resource, event, trigger, **kwargs)
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py", line 123, in 
notify
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent raise 
exceptions.CallbackFailure(errors=errors)
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent CallbackFailure: 
Callback neutron.agent.metadata.driver.before_router_removed failed with "
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent Command: ['sudo', 
'/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8', 'iptables-save']
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent Exit code: 1
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent Stdin:
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent Stdout:
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent Stderr: Cannot open 
network namespace "qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8": No such file 
or directory
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent "
2016-08-03 03:30:03.393 2595 ERROR neutron.agent.

[Yahoo-eng-team] [Bug 1622783] Re: unnecessary DHCP provisioning block added when IP doesn't change

2016-09-14 Thread Armando Migliaccio
** Changed in: neutron
   Status: In Progress => Won't Fix

** Changed in: neutron
Milestone: newton-rc1 => None

** Changed in: neutron
 Assignee: Kevin Benton (kevinbenton) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622783

Title:
  unnecessary DHCP provisioning block added when IP doesn't change

Status in neutron:
  Won't Fix

Bug description:
  Right now we insert a DHCP provisioning block on any port update.
  However, this isn't necessary if the IP address of the port hasn't
  actually changed. This can be problematic particularly if a port is
  updated via the internal core plugin API and the DHCP agent doesn't
  get notified of the change so the block doesn't get cleared.
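
  A sketch of the kind of check the report is asking for (the helper names
  below, in particular provisioning_blocks and DHCP_ENTITY, follow the
  Newton-era neutron modules and are assumptions as far as any exact patch
  goes):

    from neutron.callbacks import resources
    from neutron.db import provisioning_blocks

    def maybe_add_dhcp_block(context, original_port, updated_port):
        old_ips = {(ip['subnet_id'], ip['ip_address'])
                   for ip in original_port['fixed_ips']}
        new_ips = {(ip['subnet_id'], ip['ip_address'])
                   for ip in updated_port['fixed_ips']}
        if old_ips == new_ips:
            return  # nothing DHCP-visible changed, so skip the block
        provisioning_blocks.add_provisioning_component(
            context, updated_port['id'], resources.PORT,
            provisioning_blocks.DHCP_ENTITY)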

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622783/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623567] Re: It is possible to import package twice via plugin with enabled glance artifact repository

2016-09-14 Thread Mike Fedosin
** Changed in: fuel-plugin-murano
 Assignee: Kirill Zaitsev (kzaitsev) => Mike Fedosin (mfedosin)

** Project changed: fuel-plugin-murano => glance

** Changed in: glance
Milestone: 1.0.0 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1623567

Title:
  It is possible to import package twice via plugin with enabled glance
  artifact repository

Status in Glance:
  Confirmed

Bug description:
  Bug description:
  Currently it is possible to import any app several times via the murano CLI if
you are using the fuel murano plugin with the glance artifact repository enabled.

  Steps to reproduce:
  1) deploy fuel 9.0
  2) install fuel murano plugin
  3) add 1 controller and 1 compute
  4) enable fuel murano plugin and enable glance artifact repository
  5) deploy environment
  6) ssh to the controller
  7) use "murano --murano-repo-url=http://storage.apps.openstack.org 
package-import com.example.databases.MySql" to import MySql. Use this command 
second time to import in again.

  Expected results:
  the second time command should tell that MySql is already exist. So it will 
be only one MySql package

  Actual results:
  MySql will be imported twice(see screenshot)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1623567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606231] Re: [RFE] Support nova virt interface attach/detach

2016-09-14 Thread Jim Rollenhagen
In ironic, this is a duplicate of an RFE to do the same:
https://bugs.launchpad.net/ironic/+bug/1582188

** Changed in: ironic
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606231

Title:
  [RFE] Support nova virt interface attach/detach

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Steps to reproduce:
  1. Get the list of attached ports of the instance:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  | Port State | Port ID                              | Net ID                               | IP addresses                                  | MAC Addr          |
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  | ACTIVE     | 512e6c8e-3829-4bbd-8731-c03e5d7f7639 | ccd0fd43-9cc3-4544-b17c-dfacd8fa4d14 | 10.1.0.6,fdea:fd32:11ff:0:f816:3eff:fed1:8a7c | 52:54:00:85:19:89 |
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  2. Show the ironic port. It has vif_port_id in extra with the id of the neutron port:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  +-----------------------+------------------------------------------------------------+
  | Property              | Value                                                      |
  +-----------------------+------------------------------------------------------------+
  | address               | 52:54:00:85:19:89                                          |
  | created_at            | 2016-07-20T13:15:23+00:00                                  |
  | extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'}  |
  | local_link_connection |                                                            |
  | node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                       |
  | pxe_enabled           |                                                            |
  | updated_at            | 2016-07-22T13:31:29+00:00                                  |
  | uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                       |
  +-----------------------+------------------------------------------------------------+
  3. Delete neutron port:
  neutron port-delete 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  Deleted port: 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  4. It is done from interface list:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  ++-++--+--+
  | Port State | Port ID | Net ID | IP addresses | MAC Addr |
  ++-++--+--+
  ++-++--+--+
  5. ironic port still has vif_port_id with neutron's port id:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  +-----------------------+------------------------------------------------------------+
  | Property              | Value                                                      |
  +-----------------------+------------------------------------------------------------+
  | address               | 52:54:00:85:19:89                                          |
  | created_at            | 2016-07-20T13:15:23+00:00                                  |
  | extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'}  |
  | local_link_connection |                                                            |
  | node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                       |
  | pxe_enabled           |                                                            |
  | updated_at            | 2016-07-22T13:31:29+00:00                                  |
  | uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                       |
  +-----------------------+------------------------------------------------------------+

  This can be confusing when a user wants to get a list of unused ports on an
ironic node. vif_port_id should be removed after neutron port-delete.
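
  Until that is automated, the stale field can presumably be cleared by hand
  with something like the following (assuming the classic ironic CLI's
  add/remove/replace syntax for port-update):

  ironic port-update 735fcaf5-145d-4125-8701-365c58c6b796 remove extra/vif_port_id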

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1606231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623567] [NEW] It is possible to import package twice via plugin with enabled glance artifact repository

2016-09-14 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Bug description:
Currently it is possible to import any app several times via the murano CLI if you
are using the fuel murano plugin with the glance artifact repository enabled.

Steps to reproduce:
1) deploy fuel 9.0
2) install fuel murano plugin
3) add 1 controller and 1 compute
4) enable fuel murano plugin and enable glance artifact repository
5) deploy environment
6) ssh to the controller
7) use "murano --murano-repo-url=http://storage.apps.openstack.org 
package-import com.example.databases.MySql" to import MySql. Use this command 
second time to import in again.

Expected results:
the second time command should tell that MySql is already exist. So it will be 
only one MySql package

Actual results:
MySql will be imported twice(see screenshot)

** Affects: glance
 Importance: Critical
 Assignee: Mike Fedosin (mfedosin)
 Status: Confirmed

-- 
It is possible to import package twice via plugin with enabled glance artifact 
repository
https://bugs.launchpad.net/bugs/1623567
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623505] Re: test_create_port_when_quotas_is_full breaks if you have dhcp agent running

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/370122
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c66e343618afe4d08f6fcd990f8a6f32d4349988
Submitter: Jenkins
Branch:master

commit c66e343618afe4d08f6fcd990f8a6f32d4349988
Author: Kevin Benton 
Date:   Wed Sep 14 00:05:13 2016 -0700

Disable DHCP on test_create_port_when_quotas_is_full

This test sets the quota to 1 for a tenant and creates
two ports, ensuring 1 works and one fails. This breaks
though if dhcp is enabled on the subnet and a DHCP agent
is running for the deployment because the agent will take
up a port.

This patch disables DHCP on the subnet for the test.

Change-Id: Id6b114962d7635999b8c5408e33b55b7a23243ee
Closes-Bug: #1623505


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623505

Title:
  test_create_port_when_quotas_is_full breaks if you have dhcp agent
  running

Status in neutron:
  Fix Released

Bug description:
  test_create_port_when_quotas_is_full sets a tenant quota to 1 and then
  tries to create a port on a DHCP enabled subnet. So if you run this
  test with a DHCP agent running, it will fail (unless the agent is
  slow).
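
  For reference, the fix simply creates the test subnet with DHCP disabled so
  no agent grabs a port out of the quota. A minimal fragment of such a call
  inside a test method (client and attribute names are illustrative):

    subnet = self.client.create_subnet(
        network_id=network['id'],
        cidr='10.100.0.0/24',
        ip_version=4,
        enable_dhcp=False)['subnet']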

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623628] [NEW] resource tracker need check disk size = 0 situation

2016-09-14 Thread jichenjc
Public bug reported:

We use 'root_gb' as the root disk size of the instance when doing the update:
https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L696

However, size=0 is a valid case:
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L645

so this makes the reported disk size incorrect.

** Affects: nova
 Importance: Undecided
 Status: Invalid

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623628

Title:
  resource tracker need check disk size = 0 situation

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  We use 'root_gb' as the root disk size of the instance when doing the update:
  https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L696

  However, size=0 is a valid case:
  https://github.com/openstack/nova/blob/master/nova/compute/api.py#L645

  so this makes the reported disk size incorrect.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623628/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1617081] Re: Cannot get hugepages on PPC64

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/303564
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=abc24acfa1982a0ffccbe08a006ac7c7a9f4ecda
Submitter: Jenkins
Branch:master

commit abc24acfa1982a0ffccbe08a006ac7c7a9f4ecda
Author: Breno Leitao 
Date:   Fri Aug 26 18:36:00 2016 -0300

libvirt: add hugepages support for Power

Power architectures (arch.PPC64LE and arch.PPC64) support huge pages and
transparent huge pages. This patch just enables it on nova libvirt driver. A
reno note was also added to track this new feature.

This change also enables the test_does_want_hugepages unit test to run on 
the
architectures that support huge pages.

Closes-bug: #1617081
Change-Id: I22bc57a0b244667c716a54ca37c175f26a87a1e9


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1617081

Title:
  Cannot get hugepages on PPC64

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  We cannot spawn VMs with hugepages on PPC64, for example on IBM POWER8 (P8)
  systems. The analysis shows that it is due to _has_hugepage_support() in
  virt/libvirt/driver.py, which has only:

     supported_archs = [arch.I686, arch.X86_64]

   arch.PPC64LE is missing here.
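
  The gist of the fix is to extend that list with the Power architectures
  (the merged patch may differ in detail):

     supported_archs = [arch.I686, arch.X86_64, arch.PPC64, arch.PPC64LE]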

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1617081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623570] Re: Azure: cannot start walinux agent (Transaction order is cyclic.)

2016-09-14 Thread Scott Moser
** No longer affects: cloud-init

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1623570

Title:
  Azure: cannot start walinux agent (Transaction order is cyclic.)

Status in cloud-init package in Ubuntu:
  Confirmed
Status in walinuxagent package in Ubuntu:
  In Progress
Status in cloud-init source package in Xenial:
  Confirmed
Status in walinuxagent source package in Xenial:
  Confirmed

Bug description:
  When bringing up the Azure datasource in cloud-init.service, cloud-init
  tries 'service walinuxagent start'.

  That previously worked fine, and the agent would start and then would
  produce the certificate files that cloud-init needed (for ssh keys and
  things).

  I found this when testing SRU for 0.7.7-31-g65ace7b-0ubuntu1~16.04.1
  but it is likely present also in 0.7.7-31-g65ace7b-0ubuntu1 (yakkety)

  Now, however we see a log like:
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] DataSourceAzure.py[DEBUG]: Getting 
metadata via agent.  hostname=smoser0914x cmd=['service', 'walinuxagent', 
'start']
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] util.py[DEBUG]: Running command 
hostname with allowed return codes [0] (shell=False, capture=True)
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] DataSourceAzure.py[DEBUG]: invoking 
agent: ['service', 'walinuxagent', 'start']
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] util.py[DEBUG]: Running command 
['service', 'walinuxagent', 'start'] with allowed return codes [0] 
(shell=False, capture=True)
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] util.py[WARNING]: agent command 
'['service', 'walinuxagent', 'start']' failed.
  Sep 14 14:53:19 smoser0914x [CLOUDINIT] util.py[DEBUG]: agent command 
'['service', 'walinuxagent', 'start']' failed.
  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceAzure.py", 
line 145, in get_metadata_from_agent
  invoke_agent(agent_cmd)
    File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceAzure.py", 
line 452, in invoke_agent
  util.subp(cmd, shell=(not isinstance(cmd, list)))
    File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 1832, in subp
  cmd=args)
  cloudinit.util.ProcessExecutionError: Unexpected error while running command.
  Command: ['service', 'walinuxagent', 'start']
  Exit code: 1
  Reason: -
  Stdout: ''
  Stderr: "
    Failed to start walinuxagent.service: Transaction order is cyclic. See 
system logs for details.
    See system logs and 'systemctl status walinuxagent.service' for details

  I believe the relevant change is in 34a26f7f
    
https://git.launchpad.net/cloud-init/commit/?id=34a26f7f59f2963691e36ca0476bec9fc9ccef63
  That added multi-user.target to the list of After for 
cloud-init-final.service.

  Related bugs:
   * bug 1576692:  fully support package installation in systemd

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1623570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1617859] Re: A compatibility issue in the obj_make_compatible method of MonitorMetric object

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/361792
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c651572d5acd8838b1c1c0be111414205c57
Submitter: Jenkins
Branch:master

commit c651572d5acd8838b1c1c0be111414205c57
Author: Takashi NATSUME 
Date:   Mon Aug 29 10:24:20 2016 +0900

Fix MonitorMetric obj_make_compatible

The 'obj_make_compatible' method of MonitorMetric object
doesn't work properly because conditional expression is not correct.
So fix it and add a unit test for it.

Change-Id: I9e5e8b975195b8120e6c10398c284d6a2f5efab9
Closes-Bug: #1617859


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1617859

Title:
  A compatibility issue in the obj_make_compatible method of
  MonitorMetric object

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In the obj_make_compatible method of the MonitorMetric object, the
  'numa_nodes_values' in the 'if' statement should be 'numa_membw_values'.

  
---
  class MonitorMetric(base.NovaObject):
  (snipped...)
  def obj_make_compatible(self, primitive, target_version):
  super(MonitorMetric, self).obj_make_compatible(primitive,
 target_version)
  target_version = versionutils.convert_version_to_tuple(target_version)
  if target_version < (1, 1) and 'numa_nodes_values' in primitive:
  del primitive['numa_membw_values']
  
---

  
https://github.com/openstack/nova/blob/a5cc0be5c658974a4dd2b792e23c381fd8961e23/nova/objects/monitor_metric.py#L52
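
  For clarity, the corrected condition checks the same key that it deletes:

    if target_version < (1, 1) and 'numa_membw_values' in primitive:
        del primitive['numa_membw_values']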

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1617859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1558807] Re: Volume attached at different path than reported by nova/cinder when using config drive

2016-09-14 Thread Clark Boylan
** Changed in: nova
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558807

Title:
  Volume attached at different path than reported by nova/cinder when
  using config drive

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  New

Bug description:
  When booting a node with a config drive and then attaching a cinder volume,
  we see conflicting information about where the cinder volume is
  attached. In this case the config drive is at /dev/vdb and the cinder
  volume is at /dev/vdc, but volume show reports the cinder volume to be
  at /dev/vdb.

  This shows how one can get nova/cinder in this state using
  openstackclient (assuming that config drives attach to /dev/vdb):

  $ OS_CLIENT_CONFIG_FILE=../vexxhost/vexx.yaml venv/bin/openstack --os-cloud 
openstackci-vexxhost server create --config-drive True --key-name clarkb-work 
--image ubuntu-trusty --flavor v1-standard-2 clarkb-test
  
+--+--+
  | Field| Value
|
  
+--+--+
  | OS-DCF:diskConfig| MANUAL   
|
  | OS-EXT-AZ:availability_zone  |  
|
  | OS-EXT-STS:power_state   | 0
|
  | OS-EXT-STS:task_state| scheduling   
|
  | OS-EXT-STS:vm_state  | building 
|
  | OS-SRV-USG:launched_at   | None 
|
  | OS-SRV-USG:terminated_at | None 
|
  | accessIPv4   |  
|
  | accessIPv6   |  
|
  | addresses|  
|
  | adminPass| redacted 
|
  | config_drive | True 
|
  | created  | 2016-03-17T21:12:38Z 
|
  | flavor   | v1-standard-2 
(ca2a6e9c-2236-4107-8905-7ae9427132ff) |
  | hostId   |  
|
  | id   | cdaf7671-5a5c-4118-a422-d8aae096bd1e 
|
  | image| ubuntu-trusty 
(d42c4ce4-3fa0-4dcb-877e-1d66a05a4f8d) |
  | key_name | clarkb-work  
|
  | name | clarkb-test  
|
  | os-extended-volumes:volumes_attached | []   
|
  | progress | 0
|
  | project_id   | projectid
|
  | properties   |  
|
  | security_groups  | [{u'name': u'default'}]  
|
  | status   | BUILD
|
  | updated  | 2016-03-17T21:12:38Z 
|
  | user_id  | userid   
|
  
+--+--+
  $ OS_CLIENT_CONFIG_FILE=../vexxhost/vexx.yaml venv/bin/openstack --os-cloud 
openstackci-vexxhost volume create --size 100 clarkb-test
  +-+--+
  | Field   | Value|
  +-+--+
  | attachments | []   |
  | availability_zone   | ca-ymq-2 |
  | bootable| false|
  | consistencygroup_id | None |
  | created_at  | 2016-03-17T21:13:13.683882   |
  | description | None |
  | encrypted   | False|
  | id  | a2bba82c-ef08-48fe-9c3c-eafc8644207c |
  | multiattach

[Yahoo-eng-team] [Bug 1586268] Re: Unit test: self.assertNotEqual in unit.test_base.BaseTest.test_eq does not work in PY2

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/359293
Committed: 
https://git.openstack.org/cgit/openstack/panko/commit/?id=02d16e5f70d55173089832a4284ab75dbc4376a4
Submitter: Jenkins
Branch:master

commit 02d16e5f70d55173089832a4284ab75dbc4376a4
Author: Hanxi 
Date:   Tue Aug 23 20:48:43 2016 +0800

Base.Model not define __ne__() built-in function

Class base.Model defines __eq__() built-in function, but does
not define __ne__() built-in function, so self.assertEqual works
but self.assertNotEqual does not work at all in this test case in
python2. This patch fixes it.

Change-Id: I22e6b8e067638e148923e7a72aa8bfdc5f29b6df
Closes-Bug: #1586268


** Changed in: panko
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586268

Title:
  Unit test: self.assertNotEqual in  unit.test_base.BaseTest.test_eq
  does not work in PY2

Status in Ceilometer:
  Fix Released
Status in daisycloud-core:
  New
Status in heat:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in Kosmos:
  New
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in octavia:
  New
Status in Panko:
  Fix Released
Status in python-barbicanclient:
  New
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Released
Status in python-smaugclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-manilaclient:
  In Progress
Status in python-muranoclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in taskflow:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  Version: master(20160527)

  In the test case cinderclient.tests.unit.test_base.BaseTest.test_eq,
self.assertNotEqual does not work.
  Class base.Resource defines the __eq__() built-in method but does not define
__ne__(), so self.assertEqual works while self.assertNotEqual does not work at
all in this test case.

  steps:
  1 Clone code of python-cinderclient from master.
  2 Modify the case of unit test: cinderclient/tests/unit/test_base.py
    line50--line62.
  def test_eq(self):
  # Two resources with same ID: never equal if their info is not equal
  r1 = base.Resource(None, {'id': 1, 'name': 'hi'})
  r2 = base.Resource(None, {'id': 1, 'name': 'hello'})
  self.assertNotEqual(r1, r2)

  # Two resources with same ID: equal if their info is equal
  r1 = base.Resource(None, {'id': 1, 'name': 'hello'})
  r2 = base.Resource(None, {'id': 1, 'name': 'hello'})
  # self.assertEqual(r1, r2)
  self.assertNotEqual(r1, r2)

  # Two resoruces of different types: never equal
  r1 = base.Resource(None, {'id': 1})
  r2 = volumes.Volume(None, {'id': 1})
  self.assertNotEqual(r1, r2)

  # Two resources with no ID: equal if their info is equal
  r1 = base.Resource(None, {'name': 'joe', 'age': 12})
  r2 = base.Resource(None, {'name': 'joe', 'age': 12})
  # self.assertEqual(r1, r2)
  self.assertNotEqual(r1, r2)

     Modify self.assertEqual(r1, r2) to self.assertNotEqual(r1, r2).

  3 Run unit test, and return success.

  After that, I make a test:

  class Resource(object):
  def __init__(self, person):
  self.person = person

  def __eq__(self, other):
  return self.person == other.person

  r1 = Resource("test")
  r2 = Resource("test")
  r3 = Resource("test_r3")
  r4 = Resource("test_r4")

  print r1 != r2
  print r1 == r2
  print r3 != r4
  print r3 == r4

  The result is :
  True
  True
  True
  False

  Whether r1 is precisely the same as r2 or not, self.assertNotEqual(r1,
  r2) returns true. So I think self.assertNotEqual doesn't work at all in
  Python 2 and should be modified.
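
  For illustration, adding a matching __ne__ to the small example class above
  restores the expected behavior on Python 2:

    class Resource(object):
        def __init__(self, person):
            self.person = person

        def __eq__(self, other):
            return self.person == other.person

        def __ne__(self, other):
            # Python 2 does not derive != from ==, so without this
            # assertNotEqual can pass even for objects that compare equal
            return not self.__eq__(other)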

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1586268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623570] Re: Azure: cannot start walinux agent (Transaction order is cyclic.)

2016-09-14 Thread Scott Moser
So we have a couple of options here:
a.) use '__builtin__' mode in cloud-init for the walinux agent functionality.
This in theory should work, but we have not tested it extensively. Basically
this path has cloud-init doing the metadata service exchange itself rather than
relying on walinux-agent to pull the files it needs and then using them.

I've noticed one issue with this: walinuxagent.service is not started.
Per journalctl,
  multi-user.target: Breaking ordering cycle by deleting job walinuxagent.service/start

b.) remove or change the 'After=cloud-final' ordering in walinuxagent.service.
I'm not exactly sure why this is here, but I believe it was so that cloud-init
had an opportunity to configure walinuxagent or otherwise stop them from
fighting. That said, since cloud-init.service is starting walinux-agent (and
has been for quite some time), it would seem that an After of 'cloud-init'
should be sufficient.

It seems that because of the cyclic issue, 'b' is basically required.
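
As a rough illustration of option b, a systemd drop-in could reset the
ordering (the path and contents here are hypothetical; the eventual packaging
fix may instead change the shipped unit file directly):

  # /etc/systemd/system/walinuxagent.service.d/override.conf
  [Unit]
  After=
  After=cloud-init.service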


** Also affects: walinuxagent (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1623570

Title:
  Azure: cannot start walinux agent (Transaction order is cyclic.)

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed
Status in walinuxagent package in Ubuntu:
  New
Status in cloud-init source package in Xenial:
  Confirmed
Status in walinuxagent source package in Xenial:
  New

Bug description:
  When bringing up the Azure datasource in cloud-init.service, cloud-init
  tries 'service walinuxagent start'.

  That previously worked fine, and the agent would start and then would
  produce the certificate files that cloud-init needed (for ssh keys and
  things).

  I found this when testing SRU for 0.7.7-31-g65ace7b-0ubuntu1~16.04.1
  but it is likely present also in 0.7.7-31-g65ace7b-0ubuntu1 (yakkety)

  Now, however we see a log like:
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] DataSourceAzure.py[DEBUG]: Getting 
metadata via agent.  hostname=smoser0914x cmd=['service', 'walinuxagent', 
'start']
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] util.py[DEBUG]: Running command 
hostname with allowed return codes [0] (shell=False, capture=True)
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] DataSourceAzure.py[DEBUG]: invoking 
agent: ['service', 'walinuxagent', 'start']
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] util.py[DEBUG]: Running command 
['service', 'walinuxagent', 'start'] with allowed return codes [0] 
(shell=False, capture=True)
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] util.py[WARNING]: agent command 
'['service', 'walinuxagent', 'start']' failed.
  Sep 14 14:53:19 smoser0914x [CLOUDINIT] util.py[DEBUG]: agent command 
'['service', 'walinuxagent', 'start']' failed.
  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceAzure.py", 
line 145, in get_metadata_from_agent
  invoke_agent(agent_cmd)
    File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceAzure.py", 
line 452, in invoke_agent
  util.subp(cmd, shell=(not isinstance(cmd, list)))
    File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 1832, in subp
  cmd=args)
  cloudinit.util.ProcessExecutionError: Unexpected error while running command.
  Command: ['service', 'walinuxagent', 'start']
  Exit code: 1
  Reason: -
  Stdout: ''
  Stderr: "
    Failed to start walinuxagent.service: Transaction order is cyclic. See 
system logs for details.
    See system logs and 'systemctl status walinuxagent.service' for details

  I believe the relevant change is in 34a26f7f
    
https://git.launchpad.net/cloud-init/commit/?id=34a26f7f59f2963691e36ca0476bec9fc9ccef63
  That added multi-user.target to the list of After for 
cloud-init-final.service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1623570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615948] Re: [api-ref]: Outdated link reference

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/359015
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b5073eb73562d028dfad68c0cb034812121948e9
Submitter: Jenkins
Branch:master

commit b5073eb73562d028dfad68c0cb034812121948e9
Author: Ha Van Tu 
Date:   Tue Aug 23 15:54:41 2016 +0700

[api-ref]: Update link reference

This patch updates link reference for "create keypair" in
Compute API create server [1].
Current reference link:
http://developer.openstack.org/api-ref-compute-v2.1.html#createKeypair
http://developer.openstack.org/api-ref-compute-v2.1.html#createFloatingIP
http://developer.openstack.org/api-ref-compute-v2.1.html#addFloatingIp
http://developer.openstack.org/api-ref-compute-v2.1.html#removeFloatingIp
Update reference link:
http://developer.openstack.org/api-ref/compute/#create-or-import-keypair
http://developer.openstack.org/api-ref/compute
/#create-allocate-floating-ip-address
http://developer.openstack.org/api-ref/compute
/#delete-deallocate-floating-ip-address
[1] http://developer.openstack.org/api-ref/compute/?expanded=
create-server-detail#create-server

Change-Id: I421b559a7c127abb8c8c97d08b579bedb080bbe4
Closes-Bug: #1615948


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615948

Title:
  [api-ref]: Outdated link reference

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Compute API create server [1] has "create keypair" referring to link [2].
  This link is outdated and should be changed to [3].

  [1] 
http://developer.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server
  [2] http://developer.openstack.org/api-ref-compute-v2.1.html#createKeypair
  [3] http://developer.openstack.org/api-ref/compute/#create-or-import-keypair

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1615948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623570] [NEW] Azure: cannot start walinux agent (Transaction order is cyclic.)

2016-09-14 Thread Scott Moser
Public bug reported:

When bringing up the Azure datasource in cloud-init.service, cloud-init
tries 'service walinuxagent start'.

That previously worked fine, and the agent would start and then would
produce the certificate files that cloud-init needed (for ssh keys and
things).

I found this when testing SRU for 0.7.7-31-g65ace7b-0ubuntu1~16.04.1
but it is likely present also in 0.7.7-31-g65ace7b-0ubuntu1 (yakkety)

Now, however we see a log like:
Sep 14 14:53:18 smoser0914x [CLOUDINIT] DataSourceAzure.py[DEBUG]: Getting 
metadata via agent.  hostname=smoser0914x cmd=['service', 'walinuxagent', 
'start']
Sep 14 14:53:18 smoser0914x [CLOUDINIT] util.py[DEBUG]: Running command 
hostname with allowed return codes [0] (shell=False, capture=True)
Sep 14 14:53:18 smoser0914x [CLOUDINIT] DataSourceAzure.py[DEBUG]: invoking 
agent: ['service', 'walinuxagent', 'start']
Sep 14 14:53:18 smoser0914x [CLOUDINIT] util.py[DEBUG]: Running command 
['service', 'walinuxagent', 'start'] with allowed return codes [0] 
(shell=False, capture=True)
Sep 14 14:53:18 smoser0914x [CLOUDINIT] util.py[WARNING]: agent command 
'['service', 'walinuxagent', 'start']' failed.
Sep 14 14:53:19 smoser0914x [CLOUDINIT] util.py[DEBUG]: agent command 
'['service', 'walinuxagent', 'start']' failed.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceAzure.py", 
line 145, in get_metadata_from_agent
invoke_agent(agent_cmd)
  File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceAzure.py", 
line 452, in invoke_agent
util.subp(cmd, shell=(not isinstance(cmd, list)))
  File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 1832, in subp
cmd=args)
cloudinit.util.ProcessExecutionError: Unexpected error while running command.
Command: ['service', 'walinuxagent', 'start']
Exit code: 1
Reason: -
Stdout: ''
Stderr: "
  Failed to start walinuxagent.service: Transaction order is cyclic. See system 
logs for details.
  See system logs and 'systemctl status walinuxagent.service' for details

I believe the relevant change is in 34a26f7f
  
https://git.launchpad.net/cloud-init/commit/?id=34a26f7f59f2963691e36ca0476bec9fc9ccef63
That added multi-user.target to the list of After for cloud-init-final.service.

** Affects: cloud-init
 Importance: Undecided
 Status: Confirmed

** Affects: cloud-init (Ubuntu)
 Importance: High
 Status: Confirmed

** Affects: cloud-init (Ubuntu Xenial)
 Importance: High
 Status: Confirmed

** Also affects: ubuntu (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** No longer affects: ubuntu (Ubuntu)

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => High

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => High

** Description changed:

  When bringing up the Azure datasource in cloud-init.service, cloud-init
  tries 'service start walinuxagent'.
  
  That previously worked fine, and the agent would start and then would
  produce the certificate files that cloud-init needed (for ssh keys and
  things).
+ 
+ I found this when testing SRU for 0.7.7-31-g65ace7b-0ubuntu1~16.04.1
+ but it is likely present also in 0.7.7-31-g65ace7b-0ubuntu1 (yakkety)
  
  Now, however we see a log like:
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] DataSourceAzure.py[DEBUG]: Getting 
metadata via agent.  hostname=smoser0914x cmd=['service', 'walinuxagent', 
'start']
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] util.py[DEBUG]: Running command 
hostname with allowed return codes [0] (shell=False, capture=True)
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] DataSourceAzure.py[DEBUG]: invoking 
agent: ['service', 'walinuxagent', 'start']
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] util.py[DEBUG]: Running command 
['service', 'walinuxagent', 'start'] with allowed return codes [0] 
(shell=False, capture=True)
  Sep 14 14:53:18 smoser0914x [CLOUDINIT] util.py[WARNING]: agent command 
'['service', 'walinuxagent', 'start']' failed.
  Sep 14 14:53:19 smoser0914x [CLOUDINIT] util.py[DEBUG]: agent command 
'['service', 'walinuxagent', 'start']' failed.
  Traceback (most recent call last):
-   File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceAzure.py", 
line 145, in get_metadata_from_agent
- invoke_agent(agent_cmd)
-   File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceAzure.py", 
line 452, in invoke_agent
- util.subp(cmd, shell=(not isinstance(cmd, list)))
-   File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 1832, in subp
- cmd=args)
+   File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceAzure.py", 

[Yahoo-eng-team] [Bug 1623573] [NEW] placement API functional test fixtures do not do appropriate stdout and stderr handling

2016-09-14 Thread Chris Dent
Public bug reported:

In a multi-process environment the logging that is done in the placement
API when it is running under functional tests with gabbi interleaves,
resulting in illegible output in test runs. That makes it pretty hard to do
anything when a failure happens.

There are fixtures that other nova tests use that ought to be reusable
here.
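
A minimal sketch of the sort of output-capturing fixture meant here, using the
generic Python fixtures library (names are illustrative, not the actual nova
fixtures):

    import fixtures

    class CaptureOutput(fixtures.Fixture):
        # Redirect stdout/stderr to in-memory streams so parallel test
        # processes do not interleave their output on the real terminal.
        def _setUp(self):
            self.stdout = self.useFixture(
                fixtures.StringStream('stdout')).stream
            self.stderr = self.useFixture(
                fixtures.StringStream('stderr')).stream
            self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.stdout))
            self.useFixture(fixtures.MonkeyPatch('sys.stderr', self.stderr))

    # In a test case:
    #   self.useFixture(CaptureOutput())
    #   self.useFixture(fixtures.FakeLogger())  # capture log output as well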

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api placement scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623573

Title:
  placement API functional test fixtures do not do appropriate stdout
  and stderr handling

Status in OpenStack Compute (nova):
  New

Bug description:
  In a multi-process environment the logging that is done in the
  placement API when it is running under functional tests with gabbi
  interleaves, resulting in illegible output in test runs. That makes it
  pretty hard to do anything when a failure happens.

  There are fixtures that other nova tests use that ought to be reusable
  here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1184473] Re: no way to resume a baremetal deployment after restarting n-cpu

2016-09-14 Thread Jay Faulkner
This bug has been incomplete for 2 years. Closing as invalid.

** Changed in: ironic
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1184473

Title:
  no way to resume a baremetal deployment after restarting n-cpu

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  If the nova-compute process is terminated while the baremetal PXE
  driver is waiting inside activate_node() for baremetal-deploy-helper
  to finish copying the image, there is no way to resume the deployment.
  Currently, recovery from this situation requires that the instance and
  the baremetal node be deleted, possible manual editing of the nova
  database, and waiting for the compute_manager to trigger
  update_available_resource and reap the dead compute_node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1184473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586268] Re: Unit test: self.assertNotEqual in unit.test_base.BaseTest.test_eq does not work in PY2

2016-09-14 Thread Julien Danjou
** No longer affects: gnocchi

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586268

Title:
  Unit test: self.assertNotEqual in  unit.test_base.BaseTest.test_eq
  does not work in PY2

Status in Ceilometer:
  Fix Released
Status in daisycloud-core:
  New
Status in heat:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in Kosmos:
  New
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in octavia:
  New
Status in Panko:
  In Progress
Status in python-barbicanclient:
  New
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Released
Status in python-smaugclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-manilaclient:
  In Progress
Status in python-muranoclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in taskflow:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  Version: master(20160527)

  In the test case cinderclient.tests.unit.test_base.BaseTest.test_eq,
self.assertNotEqual does not work.
  The class base.Resource defines the __eq__() special method, but does not define 
__ne__(), so self.assertEqual works while self.assertNotEqual 
does not work at all in this test case.

  steps:
  1 Clone code of python-cinderclient from master.
  2 Modify the case of unit test: cinderclient/tests/unit/test_base.py
    line50--line62.
  def test_eq(self):
  # Two resources with same ID: never equal if their info is not equal
  r1 = base.Resource(None, {'id': 1, 'name': 'hi'})
  r2 = base.Resource(None, {'id': 1, 'name': 'hello'})
  self.assertNotEqual(r1, r2)

  # Two resources with same ID: equal if their info is equal
  r1 = base.Resource(None, {'id': 1, 'name': 'hello'})
  r2 = base.Resource(None, {'id': 1, 'name': 'hello'})
  # self.assertEqual(r1, r2)
  self.assertNotEqual(r1, r2)

  # Two resources of different types: never equal
  r1 = base.Resource(None, {'id': 1})
  r2 = volumes.Volume(None, {'id': 1})
  self.assertNotEqual(r1, r2)

  # Two resources with no ID: equal if their info is equal
  r1 = base.Resource(None, {'name': 'joe', 'age': 12})
  r2 = base.Resource(None, {'name': 'joe', 'age': 12})
  # self.assertEqual(r1, r2)
  self.assertNotEqual(r1, r2)

     Modify self.assertEqual(r1, r2) to self.assertNotEqual(r1, r2).

  3 Run unit test, and return success.

  After that, I make a test:

  class Resource(object):
  def __init__(self, person):
  self.person = person

  def __eq__(self, other):
  return self.person == other.person

  r1 = Resource("test")
  r2 = Resource("test")
  r3 = Resource("test_r3")
  r4 = Resource("test_r4")

  print r1 != r2
  print r1 == r2
  print r3 != r4
  print r3 == r4

  The result is :
  True
  True
  True
  False

  Whether r1 is precisely the same as r2 or not, self.assertNotEqual(r1,
  r2) returns true. So I think self.assertNotEqual doesn't work at all in
  Python 2 and should be modified.
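
  A minimal sketch of the usual fix, assuming the goal is simply to make !=
  the complement of == under Python 2 (illustrative only, not the actual
  cinderclient patch):

      class Resource(object):
          def __init__(self, person):
              self.person = person

          def __eq__(self, other):
              return (isinstance(other, self.__class__) and
                      self.person == other.person)

          def __ne__(self, other):
              # Python 2 does not derive __ne__ from __eq__, so define it
              # explicitly; Python 3 does this automatically.
              return not self.__eq__(other)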

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1586268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623545] [NEW] placement API allocation handling does not check min_unit, max_unit, step_size

2016-09-14 Thread Chris Dent
Public bug reported:

The min_unit, max_unit and step_size values of resource provider
inventory are not checked when submitting allocations; only available
capacity is checked.

This is a known issue, a deliberate decision was made to put it off
until later, but we need to record the presence of the issue. What's
supposed to happen is that in addition to checking capacity, we also want
to make sure that the allocation is between min_unit and max_unit and
cleanly divisible by step_size. These checks should happen in the OVO
code, not the API level.
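
A rough sketch of the missing validation, assuming the allocated amount and
the matching inventory record are already in hand (illustrative only, not the
actual OVO code):

    def allocation_amount_valid(amount, inv):
        # Check one requested amount against one inventory record.
        if amount < inv.min_unit or amount > inv.max_unit:
            return False
        if amount % inv.step_size != 0:
            return False
        # capacity itself is already checked elsewhere:
        #   used + amount <= (inv.total - inv.reserved) * inv.allocation_ratio
        return True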

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623545

Title:
  placement API allocation handling does not check min_unit, max_unit,
  step_size

Status in OpenStack Compute (nova):
  New

Bug description:
  The min_unit, max_unit and step_size values of resource provider
  inventory are not checked when submitting allocations; only available
  capacity is checked.

  This is a known issue, a deliberate decision was made to put it off
  until later, but we need to record the presence of the issue. What's
  supposed to happen is that in addition to checking capacity, we also
  want to make sure that the allocation is between min_unit and max_unit
  and cleanly divisible by step_size. These checks should happen in the
  OVO code, not the API level.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623545/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239481] Re: nova baremetal requires manual neutron setup for metadata access

2016-09-14 Thread Jim Rollenhagen
While this is an ironic-specific problem, I don't believe the fix here
is in ironic.

Seems that Neutron and/or ML2 mechanisms need to set a proper route for
this in the physical switch, but I'm not sure which layer that would be
in. (I assume usually the agent on the host does it)

** Changed in: ironic
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1239481

Title:
  nova baremetal requires manual neutron setup for metadata access

Status in Ironic:
  Invalid
Status in neutron:
  Expired
Status in OpenStack Compute (nova):
  Won't Fix
Status in tripleo:
  Incomplete

Bug description:
  a subnet setup with host routes can use a bare metal gateway as long as there 
is a metadata server on the same network:
  neutron subnet-create ... (network, dhcp settings etc) host_routes 
type=dict list=true destination=169.254.169.254/32,nexthop= --gateway_ip=

  But this requires manual configuration - it would be nice if nova
  could configure this as part of bringing up the network for a given
  node.
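
  A concrete example of the manual step, with purely hypothetical network name
  and addresses substituted for the values stripped above:

    neutron subnet-create mynet 10.0.0.0/24 \
      --host_routes type=dict list=true \
      destination=169.254.169.254/32,nexthop=10.0.0.10 \
      --gateway_ip 10.0.0.1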

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1239481/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623517] [NEW] A PUT or POST sent to placement API without a content-type header will result in a 500, should be a 400

2016-09-14 Thread Chris Dent
Public bug reported:

If, by some twist of fate, a user agent sends a PUT or POST request to
the placement API without a content-type header, the service will have
an uncaught KeyError exception raised in webob as it tries to parse the
body of the request. Tests which thought they were testing for this were
not. The webob.dec.wsgify decorator is doing some work before the thing
which the test exercises gets involved. So further tests and guards are
required to avoid the 500.
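
A hedged sketch of the kind of guard that would turn this into a 400, assuming
the handler sees a plain webob Request (not the actual placement middleware):

    import webob.exc

    def require_content_type(req):
        # webob exposes the header as req.content_type ('' when absent)
        if req.method in ('PUT', 'POST') and not req.content_type:
            raise webob.exc.HTTPBadRequest(
                'content-type header required for %s requests' % req.method)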

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api placement scheduler

** Tags added: api placement scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623517

Title:
  A PUT or POST sent to placement API without a content-type header will
  result in a 500, should be a 400

Status in OpenStack Compute (nova):
  New

Bug description:
  If, by some twist of fate, a user agent sends a PUT or POST request to
  the placement API without a content-type header, the service will have
  an uncaught KeyError exception raised in webob as it tries to parse
  the body of the request. Tests which thought they were testing for
  this were not. The webob.dec.wsgify decorator is doing some work
  before the thing which the test exercises gets involved. So further
  tests and guards are required to avoid the 500.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1618728] Re: An error in neutron, while deleting the instance of ironic

2016-09-14 Thread Lucas Alvares Gomes
Hi Xu,

Thanks for reporting. Judging by the small log there, I don't see how
Ironic could have influenced that error (I understand you are using
Ironic in your installation). So, I'm marking this bug as "Incomplete" for
the Ironic component unless you can provide more logs.

Thank you!

** Changed in: ironic
   Status: New => Incomplete

** Changed in: ironic
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618728

Title:
  An error in neutron, while deleting the instance of ironic

Status in Ironic:
  Invalid
Status in neutron:
  In Progress

Bug description:
  After running `nova delete`, while the node is cleaning, there is an error in the 
neutron-server log.
  But the clean step can continue after this.

  
  Release: mitaka

  neutron/server.log
  
__
  2016-08-31 15:20:41.881 15845 ERROR neutron.callbacks.manager 
[req-2ac73be1-cc08-4b3c-8d8b-e1100f64a8e4 6357988a703f462a8649d9e29f5a71ca 
bdbd273611f94a4ca2bbf20e72311b2a - - -] Error during notification for 
neutron.db.l3_db._notify_routers_callback port, after_delete
  2016-08-31 15:20:41.881 15845 ERROR neutron.callbacks.manager Traceback (most 
recent call last):
  2016-08-31 15:20:41.881 15845 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron/callbacks/manager.py", line 146, in 
_notify_loop
  2016-08-31 15:20:41.881 15845 ERROR neutron.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  2016-08-31 15:20:41.881 15845 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/site-packages/neutron/db/l3_db.py", line 1719, in 
_notify_routers_callback
  2016-08-31 15:20:41.881 15845 ERROR neutron.callbacks.manager 
l3plugin.notify_routers_updated(context, router_ids)
  2016-08-31 15:20:41.881 15845 ERROR neutron.callbacks.manager AttributeError: 
'NoneType' object has no attribute 'notify_routers_updated'
  2016-08-31 15:20:41.881 15845 ERROR neutron.callbacks.manager
  
__
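
  A rough sketch of the obvious guard for the callback above, assuming the
  Mitaka-era service plugin lookup (illustrative only, not the actual fix):

      def _notify_routers_callback(resource, event, trigger, **kwargs):
          context = kwargs['context']
          router_ids = kwargs['router_ids']
          l3plugin = manager.NeutronManager.get_service_plugins().get(
              constants.L3_ROUTER_NAT)
          if not l3plugin:
              # no L3 service plugin loaded, nothing to notify
              return
          l3plugin.notify_routers_updated(context, router_ids)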

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1618728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580648] Re: Two HA routers in master state during functional test

2016-09-14 Thread Hirofumi Ichihara
It seems to be a keepalived limitation, as Ann said.

** Changed in: neutron
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580648

Title:
  Two HA routers in master state during functional test

Status in neutron:
  Opinion

Bug description:
  Scheduling HA routers ends with two routers in master state.
  Issue discovered in that bug fix - https://review.openstack.org/#/c/273546 - 
after preparing a new functional test.

  ha_router.py, in the method _get_state_change_monitor_callback(), is
  starting a neutron-keepalived-state-change process with the parameter
  --monitor-interface set to the ha_device (ha-xxx) and its IP address.

  That application monitors all address changes in that namespace using
  "ip netns exec xxx ip -o monitor address".
  Each addition of that ha-xxx device produces a call to the neutron-server
  API saying that this router has become "master".
  It's producing false results because that device doesn't tell anything 
about whether that router is master or not.

  Logs from
  test_ha_router.L3HATestFailover.test_ha_router_lost_gw_connection

  Agent2:
  2016-05-10 16:23:20.653 16067 DEBUG neutron.agent.linux.async_process [-] 
Launching async process [ip netns exec 
qrouter-962f19e6-f592-49f7-8bc4-add116c0b7a3@agent1@agent2 ip -o monitor 
address]. start /neutron/neutron/agent/linux/async_process.py:109
  2016-05-10 16:23:20.654 16067 DEBUG neutron.agent.linux.utils [-] Running 
command: ['ip', 'netns', 'exec', 
'qrouter-962f19e6-f592-49f7-8bc4-add116c0b7a3@agent1@agent2', 'ip', '-o', 
'monitor', 'address'] create_process /neutron/neutron/agent/linux/utils.py:82
  2016-05-10 16:23:20.661 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Monitor: ha-8aedf0c6-2a, 169.254.0.1/24 run 
/neutron/neutron/agent/l3/keepalived_state_change.py:59
  2016-05-10 16:23:20.661 16067 INFO neutron.agent.linux.daemon [-] Process 
runs with uid/gid: 1000/1000
  2016-05-10 16:23:20.767 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Event: qr-88c93aa9-5a, fe80::c8fe:deff:fead:beef/64, False 
parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
  2016-05-10 16:23:20.901 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Event: qg-814d252d-26, fe80::c8fe:deff:fead:beee/64, False 
parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
  2016-05-10 16:23:21.324 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Event: ha-8aedf0c6-2a, fe80::2022:22ff:fe22:/64, True 
parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
  2016-05-10 16:23:29.807 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Event: ha-8aedf0c6-2a, 169.254.0.1/24, True parse_and_handle_event 
/neutron/neutron/agent/l3/keepalived_state_change.py:73
  2016-05-10 16:23:29.808 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Wrote router 962f19e6-f592-49f7-8bc4-add116c0b7a3 state master 
write_state_change /neutron/neutron/agent/l3/keepalived_state_change.py:87
  2016-05-10 16:23:29.808 16067 DEBUG neutron.agent.l3.keepalived_state_change 
[-] State: master notify_agent 
/neutron/neutron/agent/l3/keepalived_state_change.py:93

  Agent1:
  2016-05-10 16:23:19.417 15906 DEBUG neutron.agent.linux.async_process [-] 
Launching async process [ip netns exec 
qrouter-962f19e6-f592-49f7-8bc4-add116c0b7a3@agent1 ip -o monitor address]. 
start /neutron/neutron/agent/linux/async_process.py:109
  2016-05-10 16:23:19.418 15906 DEBUG neutron.agent.linux.utils [-] Running 
command: ['ip', 'netns', 'exec', 
'qrouter-962f19e6-f592-49f7-8bc4-add116c0b7a3@agent1', 'ip', '-o', 'monitor', 
'address'] create_process /neutron/neutron/agent/linux/utils.py:82
  2016-05-10 16:23:19.425 15906 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Monitor: ha-22a4d1e0-ad, 169.254.0.1/24 run 
/neutron/neutron/agent/l3/keepalived_state_change.py:59
  2016-05-10 16:23:19.426 15906 INFO neutron.agent.linux.daemon [-] Process 
runs with uid/gid: 1000/1000
  2016-05-10 16:23:19.525 15906 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Event: qr-88c93aa9-5a, fe80::c8fe:deff:fead:beef/64, False 
parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
  2016-05-10 16:23:19.645 15906 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Event: qg-814d252d-26, fe80::c8fe:deff:fead:beee/64, False 
parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
  2016-05-10 16:23:19.927 15906 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Event: ha-22a4d1e0-ad, fe80::1034:56ff:fe78:2b5d/64, True 
parse_and_handle_event /neutron/neutron/agent/l3/keepalived_state_change.py:73
  2016-05-10 16:23:28.543 15906 DEBUG neutron.agent.l3.keepalived_state_change 
[-] Event: ha-22a4d1e0-ad, 169.254.0.1/24, True parse_and_handle_event 
/neutron/neutron/agent/l3/keepalived_state_change.py:73
  2016-05-10 16:23:28.544 15906 DEBUG neutron

[Yahoo-eng-team] [Bug 1623108] Re: Add 'newton' milestone tag to alembic branches

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/369769
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=a473bde24ccfecc138ed4ddee1e8ee6af516241d
Submitter: Jenkins
Branch:master

commit a473bde24ccfecc138ed4ddee1e8ee6af516241d
Author: Henry Gessau 
Date:   Tue Sep 13 21:23:20 2016 -0400

Tag the alembic migration revisions for Newton

This allows the database to be upgraded with the command:
  neutron-db-manage upgrade newton

Depends-On: I5b9c02814bdc1945422184a84c49f9e67dcf24a9

Closes-Bug: #1623108

Change-Id: I91931c958e33c57515818e7f2d099f02783d6102


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623108

Title:
  Add 'newton' milestone tag to alembic branches

Status in neutron:
  Fix Released

Bug description:
  We do this for every release.

  Add a tag with the name of the milestone to the heads of all the
  alembic branches.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623505] [NEW] test_create_port_when_quotas_is_full breaks if you have dhcp agent running

2016-09-14 Thread Kevin Benton
Public bug reported:

test_create_port_when_quotas_is_full sets a tenant quota to 1 and then
tries to create a port on a DHCP enabled subnet. So if you run this test
with a DHCP agent running, it will fail (unless the agent is slow).

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623505

Title:
  test_create_port_when_quotas_is_full breaks if you have dhcp agent
  running

Status in neutron:
  In Progress

Bug description:
  test_create_port_when_quotas_is_full sets a tenant quota to 1 and then
  tries to create a port on a DHCP enabled subnet. So if you run this
  test with a DHCP agent running, it will fail (unless the agent is
  slow).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623497] [NEW] Booting Ceph instance using Ceph glance doesn't resize root disk to flavor size

2016-09-14 Thread Matthew Booth
Public bug reported:

This bug is purely from code inspection; I haven't replicated it on a
running system.

Change I46b5658efafe558dd6b28c9910fb8fde830adec0 added a resize check
that the backing file exists before checking its size. Unfortunately we
forgot that Rbd overrides get_disk_size(path), and ignores the path
argument, which means it would previously not have failed even when the
given path didn't exist. Additionally, the callback function passed to
cache() by driver will also ignore its path argument, and therefore not
write to the image cache, when cloning to a ceph instance from a ceph
glance store (see the section starting if backend.SUPPORTS_CLONE in
driver._create_and_inject_local_root). Consequently, when creating a
ceph instance using a ceph glance store:

1. 'base' will not exist in the image cache
2. get_disk_size(base) will return the correct value anyway

We broke this with change I46b5658efafe558dd6b28c9910fb8fde830adec0.
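
A simplified illustration of the interplay described above (condensed names,
not Nova's actual code):

    import os

    class Image(object):
        def verify_base_size(self, base, size):
            # the new guard: skip the size check when the backing file
            # does not exist on local disk
            if not os.path.exists(base):
                return
            self.check_size(self.get_disk_size(base), size)

    class Rbd(Image):
        def get_disk_size(self, path):
            # ignores 'path' and asks RBD directly, so this used to work
            # even though 'base' was never written to the image cache;
            # with the exists() guard the check is now silently skipped
            return self.driver.size(self.rbd_name)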

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: newton-rc-potential

** Tags added: newton-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623497

Title:
  Booting Ceph instance using Ceph glance doesn't resize root disk to
  flavor size

Status in OpenStack Compute (nova):
  New

Bug description:
  This bug is purely from code inspection; I haven't replicated it on a
  running system.

  Change I46b5658efafe558dd6b28c9910fb8fde830adec0 added a resize check
  that the backing file exists before checking its size. Unfortunately
  we forgot that Rbd overrides get_disk_size(path), and ignores the path
  argument, which means it would previously not have failed even when
  the given path didn't exist. Additionally, the callback function
  passed to cache() by driver will also ignore its path argument, and
  therefore not write to the image cache, when cloning to a ceph
  instance from a ceph glance store (see the section starting if
  backend.SUPPORTS_CLONE in driver._create_and_inject_local_root).
  Consequently, when creating a ceph instance using a ceph glance store:

  1. 'base' will not exist in the image cache
  2. get_disk_size(base) will return the correct value anyway

  We broke this with change I46b5658efafe558dd6b28c9910fb8fde830adec0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606231] Re: [RFE] Support nova virt interface attach/detach

2016-09-14 Thread Sylvain Bauza
Sorry, but technically I don't see a bug here, rather some behaviour
that should be modified, right?

I mean, you're providing support for detaching an interface in the
ironic driver, that's not a bug then.

If so, please follow the existing process where you should fill in a blueprint 
and ask for a spec-less implementation, that should be enough I guess.
http://docs.openstack.org/developer/nova/process.html#how-do-i-get-my-code-merged


** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606231

Title:
  [RFE] Support nova virt interface attach/detach

Status in Ironic:
  Confirmed
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Steps to reproduce:
  1. Get list of attached ports of instance:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  
++--+--+---+---+
  | Port State | Port ID  | Net ID  
 | IP addresses  | MAC Addr 
 |
  
++--+--+---+---+
  | ACTIVE | 512e6c8e-3829-4bbd-8731-c03e5d7f7639 | 
ccd0fd43-9cc3-4544-b17c-dfacd8fa4d14 | 
10.1.0.6,fdea:fd32:11ff:0:f816:3eff:fed1:8a7c | 52:54:00:85:19:89 |
  
++--+--+---+---+
  2. Show ironic port. it has vif_port_id in extra with id of neutron port:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  
+---+---+
  | Property  | Value   
  |
  
+---+---+
  | address   | 52:54:00:85:19:89   
  |
  | created_at| 2016-07-20T13:15:23+00:00   
  |
  | extra | {u'vif_port_id': 
u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
  | local_link_connection | 
  |
  | node_uuid | 679fa8a9-066e-4166-ac1e-6e77af83e741
  |
  | pxe_enabled   | 
  |
  | updated_at| 2016-07-22T13:31:29+00:00   
  |
  | uuid  | 735fcaf5-145d-4125-8701-365c58c6b796
  |
  
+---+---+
  3. Delete neutron port:
  neutron port-delete 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  Deleted port: 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  4. It is gone from the interface list:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  ++-++--+--+
  | Port State | Port ID | Net ID | IP addresses | MAC Addr |
  ++-++--+--+
  ++-++--+--+
  5. ironic port still has vif_port_id with neutron's port id:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  
+---+---+
  | Property  | Value   
  |
  
+---+---+
  | address   | 52:54:00:85:19:89   
  |
  | created_at| 2016-07-20T13:15:23+00:00   
  |
  | extra | {u'vif_port_id': 
u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
  | local_link_connection | 
  |
  | node_uuid | 679fa8a9-066e-4166-ac1e-6e77af83e741
  |
  | pxe_enabled   | 
  |
  | updated_at| 2016-07-22T13:31:29+00:00   
  |
  | uuid  | 735fcaf5-145d-4125-8701-365c58c6b796
  |
  
+---+---+

  This can be confusing when a user wants to get the list of unused ports of an ironic node.
  vif_port_id should be removed after neutron port-delete.
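
  Until that is automated, a hedged example of the manual cleanup (assuming the
  ironic CLI patch syntax; the UUID is the port shown above):

    ironic port-update 735fcaf5-145d-4125-8701-365c58c6b796 remove extra/vif_port_id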

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1606231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net

[Yahoo-eng-team] [Bug 1623488] [NEW] Image signature documentation modify barbican auth_endpoint

2016-09-14 Thread Darren
Public bug reported:

Description
===
By default Barbican uses http://localhost:5000/v3 for the auth_endpoint (where 
keystone is). Users should know that this can be changed in nova.conf. This 
will solve the issue of Barbican being unable to connect to Keystone.

Steps to reproduce
==
If keystone is not on localhost then Barbican will not be able to connect to 
Keystone. Also, using this documentation to create a signed image:

https://github.com/openstack/glance/blob/master/doc/source/signature.rst

Then booting the image using 'nova boot'.

Note: verify_glance_signatures must be set to true in nova.conf

Expected result
===
Barbican should connect to Keystone to authorize credentials when booting a 
signed image.

Actual result
=
Barbican cannot connect to Keystone and booting a signed image fails.

Environment
===
This is using the mitaka branch.


This also happens in Glance:
https://bugs.launchpad.net/glance/+bug/1620539
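
A hedged example of the override being suggested, assuming the key manager
exposes the auth_endpoint option under a [barbican] group in nova.conf (the
group and option names are taken from this report and not verified here):

    [barbican]
    # point the key manager at the real Keystone endpoint instead of localhost
    auth_endpoint = http://KEYSTONE_HOST:5000/v3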

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623488

Title:
  Image signature documentation modify barbican auth_endpoint

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  By default Barbican uses http://localhost:5000/v3 for the auth_endpoint 
(where keystone is). Users should know that this can be changed in nova.conf. 
This will solve the issue of Barbican being unable to connect to Keystone.

  Steps to reproduce
  ==
  If keystone is not on localhost then Barbican will not be able to connect 
to Keystone. Also, using this documentation to create a signed image:

  https://github.com/openstack/glance/blob/master/doc/source/signature.rst

  Then booting the image using 'nova boot'.

  Note: verify_glance_signatures must be set to true in nova.conf

  Expected result
  ===
  Barbican should connect to Keystone to authorize credentials when booting a 
signed image.

  Actual result
  =
  Barbican cannot connect to Keystone and booting a signed image fails.

  Environment
  ===
  This is using the mitaka branch.


  This also happens in Glance:
  https://bugs.launchpad.net/glance/+bug/1620539

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623488/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623483] [NEW] In placement api links in resource provider representation links to aggregates but we never merged aggregates support

2016-09-14 Thread Chris Dent
Public bug reported:

The placement api returns a set of links for a resource provider when
GETting a list or a single. These links include a link to
/resource_providers/{uuid}/aggregates but that code was not merged for
newton, so it results in a 404. This is probably no big deal, but I
thought I better mention it.

The code for aggregates is done:
https://review.openstack.org/#/c/362766/

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api placement scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623483

Title:
  In placement api links in resource provider representation links to
  aggregates but we never merged aggregates support

Status in OpenStack Compute (nova):
  New

Bug description:
  The placement api returns a set of links for a resource provider when
  GETting a list or a single. These links include a link to
  /resource_providers/{uuid}/aggregates but that code was not merged for
  newton, so it results in a 404. This is probably no big deal, but I
  thought I better mention it.

  The code for aggregates is done:
  https://review.openstack.org/#/c/362766/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623483/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1563419] Re: [UI] sahara uses UTC time instead of set timezone

2016-09-14 Thread Vitaly Gridnev
** Changed in: sahara/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1563419

Title:
  [UI] sahara uses UTC time instead of set timezone

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in Sahara:
  Fix Released
Status in Sahara mitaka series:
  Fix Released

Bug description:
  All time values that are shown in sahara dashboard are in UTC no
  matter what kind of timezone we have set in settings. It affects the
  Data Sources, Job Execution detail views and Cluster provision steps
  table

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1563419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622616] Re: delete_subnet update_port appears racey with ipam

2016-09-14 Thread Kevin Benton
@Carl,

Still happening on rally runs with lots of concurrent subnet deletions
in the same network:

http://logs.openstack.org/17/369417/5/check/gate-rally-dsvm-neutron-
rally/b0d5b03/logs/screen-q-svc.txt.gz#_2016-09-14_10_58_28_812

** Changed in: neutron
   Status: Fix Released => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622616

Title:
  delete_subnet update_port appears racey with ipam

Status in neutron:
  New

Bug description:
  failure spotted in a patch on a delete_subnet call:

  
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
[req-746d769c-2388-48e0-8e09-38e4190e5364 tempest-PortsTestJSON-432635984 -] 
delete failed: Exception deleting fixed_ip from port 
862b5dea-dca2-4669-b280-867175f5f351
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 526, in delete
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource return 
self._delete(request, id, **kwargs)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 87, in wrapped
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 83, in wrapped
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 123, in wrapped
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
traceback.format_exc())
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 118, in wrapped
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource return 
f(*dup_args, **dup_kwargs)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 548, in _delete
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 618, in inner
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource return 
f(self, context, *args, **kwargs)
  2016-09-10 01:04:43.452 13725 ERROR neutron.api.v2.resource   File 
"/opt/stac

[Yahoo-eng-team] [Bug 1623473] [NEW] Overwrite node field by wrong value after ironic instance rebuild

2016-09-14 Thread Tomasz Czekajło
Public bug reported:

Hi,

When I rebuild an ironic instance via nova, after the first rebuild the
node for the instance is overwritten by a wrong value, thus the next rebuild is
not possible.

Steps to reproduce
==
1. Spawn new ironic instance
2. Rebuild the instance
After this step you can see that hypervisor_hostname for the instance is 
totally different than before. (I use "nova show uuid" command to display 
information). When you display information for instance in ironic (ironic 
node-show --instance uuid) you can see that UUID of node is different than node 
in nova.

3. Second rebuild and we can see error as below.

http://paste.openstack.org/show/irCzuu5qucX6kF44X6oe/

Environment
===
Mitaka release and Ubuntu 16

My workaround
=
After debugging I've found where the bug(?) is.

https://github.com/openstack/nova/blob/stable/mitaka/nova/compute/manager.py#L2795

2795:compute_node = self._get_compute_info(context, self.host)
2796:scheduled_node = compute_node.hypervisor_hostname

[...]

5118:def _get_compute_info(self, context, host):
5119:return objects.ComputeNode.get_first_node_by_host_for_old_compat(
5120:context, host)

OK, let's dive deep

https://github.com/openstack/nova/blob/stable/mitaka/nova/objects/compute_node.py#L274

274:def get_first_node_by_host_for_old_compat(cls, context, host,
275:  use_slave=False):
276:computes = ComputeNodeList.get_all_by_host(context, host, use_slave)
277:# FIXME(sbauza): Some hypervisors (VMware, Ironic) can return 
multiple
278:# nodes per host, we should return all the nodes and modify the 
callers
279:# instead.
280:# Arbitrarily returning the first node.
281:return computes[0]

It looks like the method returns the first node for the given host. In the case
of an ironic hypervisor there are multiple nodes per host, and the first
node that is returned is random.

My workaround, nothing sophisticated but works for me:

--- manager.py_org  2016-09-14 13:50:37.807379651 +0200
+++ manager.py  2016-09-14 13:51:40.275126034 +0200
@@ -2793,7 +2793,11 @@
 if not scheduled_node:
 try:
 compute_node = self._get_compute_info(context, self.host)
-scheduled_node = compute_node.hypervisor_hostname
+#workaround for ironic
+if compute_node.hypervisor_type == 'ironic':
+scheduled_node = instance.node
+else:
+scheduled_node = compute_node.hypervisor_hostname
 except exception.ComputeHostNotFound:
 LOG.exception(_LE('Failed to get compute_info for %s'),
 self.host)

I've tested this issue on Mitaka release, but it seems the code is the
same in master branch.

That's all.
Regards

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ironic rebuild

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623473

Title:
  Overwrite node field by wrong value after ironic instance rebuild

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,

  When I rebuild an ironic instance via nova, after the first rebuild the
  node for the instance is overwritten by a wrong value, thus the next rebuild
  is not possible.

  Steps to reproduce
  ==
  1. Spawn new ironic instance
  2. Rebuild the instance
  After this step you can see that hypervisor_hostname for the instance is 
totally different than before. (I use "nova show uuid" command to display 
information). When you display information for instance in ironic (ironic 
node-show --instance uuid) you can see that UUID of node is different than node 
in nova.

  3. Second rebuild and we can see error as below.

  http://paste.openstack.org/show/irCzuu5qucX6kF44X6oe/

  Environment
  ===
  Mitaka release and Ubuntu 16

  My workaround
  =
  After debugging I've found where the bug(?) is.

  
https://github.com/openstack/nova/blob/stable/mitaka/nova/compute/manager.py#L2795

  2795:compute_node = self._get_compute_info(context, self.host)
  2796:scheduled_node = compute_node.hypervisor_hostname

  [...]

  5118:def _get_compute_info(self, context, host):
  5119:return objects.ComputeNode.get_first_node_by_host_for_old_compat(
  5120:context, host)

  OK, let's dive deep

  
https://github.com/openstack/nova/blob/stable/mitaka/nova/objects/compute_node.py#L274

  274:def get_first_node_by_host_for_old_compat(cls, context, host,
  275:  use_slave=False):
  276:computes = ComputeNodeList.get_all_by_host(context, host, 
use_slave)
  277:# FIXME(sbauza): Some hypervisors (VMware, Ironic) can

[Yahoo-eng-team] [Bug 1623460] Re: can not ping neutron network from external network

2016-09-14 Thread John Davidge
Hi, this looks like a support request, not a bug. Please try
https://ask.openstack.org

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623460

Title:
  can not ping neutron network from external network

Status in neutron:
  Invalid

Bug description:
  After deploying openstack using kolla on three compute nodes, I created a
  neutron network successfully, but I can not ping the network from the
  external network.

  Because I have only one NIC, I created a VLAN interface, eth0.20, and set
  neutron_external_interface: "eth0.20".

  If I assign a floating ip to an instance, I get this error:
  External network ce554e2f-bc0d-47bc-95f4-6b9f9d2202ef is not reachable from 
subnet 9fe487c3-46b3-486e-ac14-60d03590792d. Therefore, cannot associate Port 
e23daebe-16d1-4189-a194-242fcd73e5ab with a Floating IP. Neutron server returns 
request_ids: ['req-184ca305-8af6-4671-aaea-494232c87abd']

  
  for more information, I upload two images on github, please open:
  https://raw.githubusercontent.com/greatbsky/openstack/master/1.png
  https://raw.githubusercontent.com/greatbsky/openstack/master/2.png

  [root@oscontroller ~]# ifconfig
  docker0: flags=4163  mtu 1500
  inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
  inet6 fe80::42:82ff:fe43:b91f  prefixlen 64  scopeid 0x20
  ether 02:42:82:43:b9:1f  txqueuelen 0  (Ethernet)
  RX packets 8  bytes 536 (536.0 B)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 9  bytes 690 (690.0 B)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  eth0: flags=4163  mtu 1500
  inet 192.168.1.61  netmask 255.255.255.0  broadcast 192.168.1.255
  inet6 fe80::2e0:66ff:fe85:6b24  prefixlen 64  scopeid 0x20
  ether 00:e0:66:85:6b:24  txqueuelen 1000  (Ethernet)
  RX packets 374  bytes 32803 (32.0 KiB)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 212  bytes 22583 (22.0 KiB)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  eth0.1: flags=4163  mtu 1500
  inet 192.168.1.61  netmask 255.255.255.0  broadcast 192.168.1.255
  inet6 fe80::2e0:66ff:fe85:6b24  prefixlen 64  scopeid 0x20
  ether 00:e0:66:85:6b:24  txqueuelen 0  (Ethernet)
  RX packets 0  bytes 0 (0.0 B)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 13  bytes 858 (858.0 B)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  eth0.20: flags=4163  mtu 1500
  inet 192.168.20.61  netmask 255.255.255.0  broadcast 192.168.20.255
  inet6 fe80::2e0:66ff:fe85:6b24  prefixlen 64  scopeid 0x20
  ether 00:e0:66:85:6b:24  txqueuelen 0  (Ethernet)
  RX packets 0  bytes 0 (0.0 B)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 10  bytes 732 (732.0 B)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  lo: flags=73  mtu 65536
  inet 127.0.0.1  netmask 255.0.0.0
  inet6 ::1  prefixlen 128  scopeid 0x10
  loop  txqueuelen 0  (Local Loopback)
  RX packets 14  bytes 1210 (1.1 KiB)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 14  bytes 1210 (1.1 KiB)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  veth4575b33: flags=4163  mtu 1500
  inet6 fe80::a415:6eff:fefd:7d1b  prefixlen 64  scopeid 0x20
  ether a6:15:6e:fd:7d:1b  txqueuelen 0  (Ethernet)
  RX packets 8  bytes 648 (648.0 B)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 17  bytes 1338 (1.3 KiB)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
  [root@oscontroller ~]# ovs-vsctl show
  037a5215-0ba6-42db-96dc-865448a2ca07
  Bridge br-tun
  fail_mode: secure
  Port patch-int
  Interface patch-int
  type: patch
  options: {peer=patch-tun}
  Port br-tun
  Interface br-tun
  type: internal
  Port "vxlan-c0a8015c"
  Interface "vxlan-c0a8015c"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="192.168.1.61", out_key=flow, remote_ip="192.168.1.92"}
  Bridge br-ex
  Port br-ex
  Interface br-ex
  type: internal
  Port "eth0.20"
  Interface "eth0.20"
  Port phy-br-ex
  Interface phy-br-ex
  type: patch
  options: {peer=int-br-ex}
  Bridge br-int
  fail_mode: secure
  Port "qg-4e2a1631-ff"
  tag: 6
  Interface "qg-4e2a1631-ff"
  type: internal
  Port "tap629b3552-d2"
  tag: 6
  Interface "tap629b3552-d2"
  type: internal
    

[Yahoo-eng-team] [Bug 1453264] Re: [SRU] iptables_manager can run very slowly when a large number of security group rules are present

2016-09-14 Thread Launchpad Bug Tracker
This bug was fixed in the package neutron - 1:2014.1.5-0ubuntu6

---
neutron (1:2014.1.5-0ubuntu6) trusty; urgency=medium

  * iptables_manager can run very slowly when a large number of security group
rules are present (LP: #1453264)
- d/p/use-dictionary-for-iptables-find.patch: Use a dictionary for looking
  up iptables rules rather than an iterator.

 -- Billy Olsen   Mon, 29 Aug 2016 15:06:06
-0700
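
The idea behind that patch, sketched very roughly (illustrative only, not the
actual diff): index the existing iptables rules in a dictionary once per apply
instead of scanning the list for every rule.

    # before: every duplicate/position check walks the whole list, O(n) per rule
    def find_rule(rules, rule):
        for i, r in enumerate(rules):
            if r == rule:
                return i
        return -1

    # after: build the index once, then each lookup is O(1)
    def build_index(rules):
        return {rule: i for i, rule in enumerate(rules)}
    # index = build_index(rules); index.get(rule, -1)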

** Changed in: neutron (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1453264

Title:
  [SRU] iptables_manager can run very slowly when a large number of
  security group rules are present

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive icehouse series:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Trusty:
  Fix Released

Bug description:
  [Impact]

  We have customers that typically add a few hundred security group
  rules or more.  We also typically run 30+ VMs per compute node.  When
  about 10+ VMs with a large SG set all get scheduled to the same node,
  the L2 agent (OVS) can spend many minutes in the
  iptables_manager.apply() code, so much so that by the time all the
  rules are updated, the VM has already tried DHCP and failed, leaving
  it in an unusable state.

  While there have been some patches that tried to address this in Juno
  and Kilo, they've either not helped as much as necessary, or broken
  SGs completely due to re-ordering the of the iptables rules.

  I've been able to show some pretty bad scaling with just a handful of
  VMs running in devstack based on today's code (May 8th, 2015) from
  upstream Openstack.

  
  [Test Case]

  Here's what I tested:

  1. I created a security group with 1000 TCP port rules (you could
  alternately have a smaller number of rules and more VMs, but it's
  quicker this way)

  2. I booted VMs, specifying both the default and "large" SGs, and
  timed from the second it took Neutron to "learn" about the port until
  it completed it's work

  3. I got a :( pretty quickly

  And here's some data:

  1-3 VM - didn't time, less than 20 seconds
  4th VM - 0:36
  5th VM - 0:53
  6th VM - 1:11
  7th VM - 1:25
  8th VM - 1:48
  9th VM - 2:14

  While it's busy adding the rules, the OVS agent is consuming pretty
  close to 100% of a CPU for most of this time (from top):

    PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
  25767 stack 20   0  157936  76572   4416 R  89.2  0.5  50:14.28 python

  And this is with only ~10K rules at this point!  When we start
  crossing the 20K point VM boot failures start to happen.

  I'm filing this bug since we need to take a closer look at this in
  Liberty and fix it, it's been this way since Havana and needs some
  TLC.

  I've attached a simple script I've used to recreate this, and will
  start taking a look at options here.
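
  A hypothetical equivalent of such a reproducer (not the attached script),
  assuming an already-authenticated neutronclient.v2_0.client.Client instance:

  # Create one security group with many single-port TCP rules; boot VMs with
  # both this group and "default" to reproduce the slow agent behaviour.
  def make_large_sg(neutron, name='large-sg', n_rules=1000):
      sg = neutron.create_security_group(
          {'security_group': {'name': name}})['security_group']
      for port in range(1000, 1000 + n_rules):
          neutron.create_security_group_rule(
              {'security_group_rule': {'security_group_id': sg['id'],
                                       'direction': 'ingress',
                                       'protocol': 'tcp',
                                       'port_range_min': port,
                                       'port_range_max': port}})
      return sg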

  
  [Regression Potential]

  Minimal since this has been running in upstream stable for several
  releases now (Kilo, Liberty, Mitaka).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1453264/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623469] [NEW] Unable to update the user with blank value after one modification

2016-09-14 Thread Kuldeep Khandelwal
Public bug reported:

The steps to reproduce the bug:
 1/ Log in to OpenStack with user name: admin
 2/ Go to Identity -> Users -> Edit to update the "admin" user
 3/ Choose a primary project for the admin user (no value is assigned initially), e.g. admin --> the user is updated successfully
 4/ Go to Edit for the admin user again and try to move it back to the old value, i.e. 'blank', but that option is not there and you have to select this or another available value.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1623469

Title:
   Unable to update the user with blank value after one modification

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The steps to reproduce the bug:
   1/ Log in to OpenStack with user name: admin
   2/ Go to Identity -> Users -> Edit to update the "admin" user
   3/ Choose a primary project for the admin user (no value is assigned initially), e.g. admin --> the user is updated successfully
   4/ Go to Edit for the admin user again and try to move it back to the old value, i.e. 'blank', but that option is not there and you have to select this or another available value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1623469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621837] Re: Plugin API was silently changed for subnetpool dict extension functions

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/348279
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=10ada71486db33c6cb69f35811d0ca3dc547eff0
Submitter: Jenkins
Branch:master

commit 10ada71486db33c6cb69f35811d0ca3dc547eff0
Author: Ihar Hrachyshka 
Date:   Thu Jul 28 14:21:02 2016 +0200

objects: expose database model for NeutronDbObject instances

Sometimes object users need access to corresponding models that are used
to persist object data. While it's not encouraged, and object consumers
should try to rely solely on object API and fields, we should fulfill
this special need, at least for now.

One of use cases to access the corresponding database model are
functions registered by plugins to extend core resources. Those
functions are passed into register_dict_extend_funcs and expect the
model as one of its arguments.

Later, when more objects are adopted in base plugin code, and we are
ready to switch extensions to objects, we can pass to those functions
some wrappers that would trigger deprecation warnings on attempts to
access attributes that are not available on objects; and then after a
while finally switch to passing objects directly instead of those
wrappers. Of course, that would not happen overnight, and the path would
take several cycles.

To avoid the stored reference to the model to influence other code
fetching from the session, we detach (expunge) the model from the active
database session on every fetch.  We also refresh the model before
detaching it when the corresponding object had synthetic fields changed,
because that's usually an indication that some relationships may be
stale on the model.

Since we now consistently detach the model from the active session on
each fetch, we cannot reuse it. So every time we hit update, we now need
to refetch the model from the session, otherwise we will hit an error
trying to refresh and/or detach an already detached model. Hence the
change in NeutronDbObject.update to always trigger update_object
irrespective to whether any persistent fields were changed. This makes
test_update_no_changes test case incorrect, hence its removal.

Due to the way RBAC metaclass works, it may trigger cls.get_object in
the middle of object creation (to validate newly created RBAC entry
against the object). It results in duplicate expunge calls for the same
object model (one during object creation, another when fetching the same
object to validate it for RBAC). To avoid that, switched RBAC code from
objects API to direct objects.db_api.get_object calls that will avoid
triggering the whole model expunge/refresh machinery.

Now that we have models stored on objects, the patch switched back
plugin code to passing models in places where we previously, by mistake,
were passing objects into extensions.

Specifically, the switch for allowed address pairs occurred with
I3c937267ce789ed510373616713b3fa9517c18ac. For subnetpools, it happened
in I1415c7a29af86d377ed31cce40888631a34d4811. Neither of those was
released in Mitaka, so it did not break anyone using major releases.
Also, we have not heard from any trunk chaser that would be affected by
the mistake.

There are not other objects used in database code where we would pass
them into extensions, so we should be good.

Closes-Bug: #1621837
Change-Id: I130609194f15b89df89e5606fb8193849edd14d8
Partially-Implements: blueprint adopt-oslo-versioned-objects-for-db
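
A minimal sketch (plain SQLAlchemy session API, not the actual NeutronDbObject
code) of the refresh-then-expunge pattern the commit message describes:

# Keep a usable reference to the model while making sure it no longer
# participates in the active session.
def snapshot_db_model(session, model, synthetic_changed=False):
    if synthetic_changed:
        # Relationships may be stale; reload from the database first.
        session.refresh(model)
    # Detach the instance so later fetches in the same session are unaffected.
    session.expunge(model)
    return model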


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1621837

Title:
  Plugin API was silently changed for subnetpool dict extension
  functions

Status in neutron:
  Fix Released

Bug description:
  Some Newton changes that were part of blueprint adopt-oslo-versioned-
  objects-for-db mistakenly changed plugin API for registered dict
  extension functions by passing objects instead of db models into those
  functions. We should not have done it, and should revert to passing
  models before Newton final release.

  Note: there is also another resource affected by the same issue:
  allowed address pairs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1621837/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623460] [NEW] can not ping neutron network from external network

2016-09-14 Thread greatbsky
Public bug reported:

After deploying OpenStack using kolla on three compute nodes, I created a
neutron network successfully, but I cannot ping the network from the
external network.

Because I have only one NIC, I created a VLAN interface eth0.20 and set
neutron_external_interface: "eth0.20".

If I assign a floating IP to an instance, I get this error:
External network ce554e2f-bc0d-47bc-95f4-6b9f9d2202ef is not reachable from subnet 9fe487c3-46b3-486e-ac14-60d03590792d. Therefore, cannot associate Port e23daebe-16d1-4189-a194-242fcd73e5ab with a Floating IP. Neutron server returns request_ids: ['req-184ca305-8af6-4671-aaea-494232c87abd']
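
A minimal sketch (hypothetical helper, assuming an authenticated
neutronclient.v2_0.client.Client) of the condition behind that error: no
router has both its gateway on the external network and an interface on the
instance's subnet.

def subnet_reaches_external(neutron, subnet_id, external_net_id):
    for router in neutron.list_routers()['routers']:
        gw = router.get('external_gateway_info') or {}
        if gw.get('network_id') != external_net_id:
            continue
        ports = neutron.list_ports(
            device_id=router['id'],
            device_owner='network:router_interface')['ports']
        for port in ports:
            if any(ip['subnet_id'] == subnet_id
                   for ip in port['fixed_ips']):
                return True
    return False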


For more information, I uploaded two images to GitHub; please open:
https://raw.githubusercontent.com/greatbsky/openstack/master/1.png
https://raw.githubusercontent.com/greatbsky/openstack/master/2.png

[root@oscontroller ~]# ifconfig
docker0: flags=4163  mtu 1500
inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
inet6 fe80::42:82ff:fe43:b91f  prefixlen 64  scopeid 0x20
ether 02:42:82:43:b9:1f  txqueuelen 0  (Ethernet)
RX packets 8  bytes 536 (536.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 9  bytes 690 (690.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163  mtu 1500
inet 192.168.1.61  netmask 255.255.255.0  broadcast 192.168.1.255
inet6 fe80::2e0:66ff:fe85:6b24  prefixlen 64  scopeid 0x20
ether 00:e0:66:85:6b:24  txqueuelen 1000  (Ethernet)
RX packets 374  bytes 32803 (32.0 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 212  bytes 22583 (22.0 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0.1: flags=4163  mtu 1500
inet 192.168.1.61  netmask 255.255.255.0  broadcast 192.168.1.255
inet6 fe80::2e0:66ff:fe85:6b24  prefixlen 64  scopeid 0x20
ether 00:e0:66:85:6b:24  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 13  bytes 858 (858.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0.20: flags=4163  mtu 1500
inet 192.168.20.61  netmask 255.255.255.0  broadcast 192.168.20.255
inet6 fe80::2e0:66ff:fe85:6b24  prefixlen 64  scopeid 0x20
ether 00:e0:66:85:6b:24  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 10  bytes 732 (732.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10
loop  txqueuelen 0  (Local Loopback)
RX packets 14  bytes 1210 (1.1 KiB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 14  bytes 1210 (1.1 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth4575b33: flags=4163  mtu 1500
inet6 fe80::a415:6eff:fefd:7d1b  prefixlen 64  scopeid 0x20
ether a6:15:6e:fd:7d:1b  txqueuelen 0  (Ethernet)
RX packets 8  bytes 648 (648.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 17  bytes 1338 (1.3 KiB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
[root@oscontroller ~]# ovs-vsctl show
037a5215-0ba6-42db-96dc-865448a2ca07
Bridge br-tun
fail_mode: secure
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Port "vxlan-c0a8015c"
Interface "vxlan-c0a8015c"
type: vxlan
options: {df_default="true", in_key=flow, 
local_ip="192.168.1.61", out_key=flow, remote_ip="192.168.1.92"}
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port "eth0.20"
Interface "eth0.20"
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Bridge br-int
fail_mode: secure
Port "qg-4e2a1631-ff"
tag: 6
Interface "qg-4e2a1631-ff"
type: internal
Port "tap629b3552-d2"
tag: 6
Interface "tap629b3552-d2"
type: internal
Port "qg-ba3451ef-a2"
tag: 2
Interface "qg-ba3451ef-a2"
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port "tap21939cfb-56"
tag: 1
Interface "tap21939cfb-56"
type: internal
Port br-int
Interface br-int
type: internal
Port "qr-5b33

[Yahoo-eng-team] [Bug 1606231] Re: [RFE] Support nova virt interface attach/detach

2016-09-14 Thread Sam Betts
** Summary changed:

- vif_port_id of ironic port is not updating after neutron port-delete
+ [RFE] Support nova virt interface attach/detach

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606231

Title:
  [RFE] Support nova virt interface attach/detach

Status in Ironic:
  Confirmed
Status in OpenStack Compute (nova):
  New

Bug description:
  Steps to reproduce:
  1. Get list of attached ports of instance:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  | Port State | Port ID                              | Net ID                               | IP addresses                                  | MAC Addr          |
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  | ACTIVE     | 512e6c8e-3829-4bbd-8731-c03e5d7f7639 | ccd0fd43-9cc3-4544-b17c-dfacd8fa4d14 | 10.1.0.6,fdea:fd32:11ff:0:f816:3eff:fed1:8a7c | 52:54:00:85:19:89 |
  +------------+--------------------------------------+--------------------------------------+-----------------------------------------------+-------------------+
  2. Show the ironic port; it has vif_port_id in extra with the id of the neutron port:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  
  +-----------------------+-----------------------------------------------------------+
  | Property              | Value                                                     |
  +-----------------------+-----------------------------------------------------------+
  | address               | 52:54:00:85:19:89                                         |
  | created_at            | 2016-07-20T13:15:23+00:00                                 |
  | extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
  | local_link_connection |                                                           |
  | node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                      |
  | pxe_enabled           |                                                           |
  | updated_at            | 2016-07-22T13:31:29+00:00                                 |
  | uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                      |
  +-----------------------+-----------------------------------------------------------+
  3. Delete neutron port:
  neutron port-delete 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  Deleted port: 512e6c8e-3829-4bbd-8731-c03e5d7f7639
  4. It is gone from the interface list:
  nova interface-list 42dd8b8b-b2bc-420e-96b6-958e9295b2d4
  ++-++--+--+
  | Port State | Port ID | Net ID | IP addresses | MAC Addr |
  ++-++--+--+
  ++-++--+--+
  5. ironic port still has vif_port_id with neutron's port id:
  ironic port-show 735fcaf5-145d-4125-8701-365c58c6b796
  
  +-----------------------+-----------------------------------------------------------+
  | Property              | Value                                                     |
  +-----------------------+-----------------------------------------------------------+
  | address               | 52:54:00:85:19:89                                         |
  | created_at            | 2016-07-20T13:15:23+00:00                                 |
  | extra                 | {u'vif_port_id': u'512e6c8e-3829-4bbd-8731-c03e5d7f7639'} |
  | local_link_connection |                                                           |
  | node_uuid             | 679fa8a9-066e-4166-ac1e-6e77af83e741                      |
  | pxe_enabled           |                                                           |
  | updated_at            | 2016-07-22T13:31:29+00:00                                 |
  | uuid                  | 735fcaf5-145d-4125-8701-365c58c6b796                      |
  +-----------------------+-----------------------------------------------------------+

  This can confuse users who want to get a list of unused ports of an ironic node.
  vif_port_id should be removed after neutron port-delete.
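
  A minimal sketch (assumption: python-ironicclient's JSON-patch style
  port.update; the real fix belongs in the virt driver) of clearing the stale
  value by hand:

  # ironic_client is an already-constructed python-ironicclient client.
  def clear_stale_vif(ironic_client, ironic_port_uuid):
      patch = [{'op': 'remove', 'path': '/extra/vif_port_id'}]
      return ironic_client.port.update(ironic_port_uuid, patch)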

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1606231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589575] Re: nova-compute is not starting after adding ironic configurations

2016-09-14 Thread Dmitry Tantsur
Nova was updated to support keystone V3 while connecting to Ironic, so
this should be fixed. Thanks!
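
A minimal sketch (assuming the keystoneauth1 library, not the exact nova
code) of the keystone v3 credentials that the [ironic] section in the report
below maps to; if this fails with DiscoveryFailure, the auth_url or domain
settings are the usual culprits:

from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url='http://192.168.56.105:35357',
                   username='ironic',
                   password='cloud123',
                   project_name='service',
                   user_domain_id='default',
                   project_domain_id='default')
sess = session.Session(auth=auth)
print(sess.get_token())  # returns a token only if the credentials resolve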

** Changed in: ironic
   Status: New => Invalid

** Changed in: nova
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589575

Title:
  nova-compute is not starting after adding ironic configurations

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I am trying to configure Ironic. I am doing this on CentOS Mitaka.
  After writing the ironic configurations into nova.conf, nova-compute
  goes down because it is not able to get the node list from ironic.

  tail -f /var/log/ironic/ironic-api.log
  2016-06-06 08:46:37.634 3748 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:37.635 3748 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0009820
  2016-06-06 08:46:39.639 3749 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:39.640 3749 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0009780
  2016-06-06 08:46:41.644 3748 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:41.645 3748 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0010638
  2016-06-06 08:46:43.649 3749 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:43.650 3749 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0009439
  2016-06-06 08:46:45.656 3748 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:45.657 3748 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0014479
  2016-06-06 08:46:47.663 3748 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:47.664 3748 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0011349
  2016-06-06 08:46:49.670 3749 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:49.670 3749 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0010211

  ironic in nova.conf 
  [DEFAULT] 
  compute_driver=ironic.IronicDriver
  firewall_driver=nova.virt.firewall.NoopFirewallDriver 
scheduler_host_manager=nova.scheduler.ironic_host_manager.IronicHostManager 
  ram_allocation_ratio=1.0 
  reserved_host_memory_mb=0 
  scheduler_use_baremetal_filters=True 
  scheduler_tracks_instance_changes=False 
  [ironic] 
  auth_uri = http://192.168.56.105:5000 
  auth_url = http://192.168.56.105:35357 
  auth_region = RegionOne 
  auth_type = password 
  project_domain_id = default 
  user_domain_id = default 
  project_name = service 
  username = ironic 
  password = cloud123 
  api_endpoint=http://192.168.56.105:6385/v1

  I tried keeping the keystone V2.0 configurations but that gives
  authentication problems.
  admin_username=ironic
  admin_password=IRONIC_PASSWORD
  admin_url=http://IDENTITY_IP:35357/v2.0
  admin_tenant_name=service
  api_endpoint=http://IRONIC_NODE:6385/v1

  When I try the above values in ironic as per the official document, I get
  the error below:
   oslo_service.service DiscoveryFailure: Could not determine a suitable URL for the plugin

  Keystone endpoints are with v3
  
  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
  | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                            |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
  | 043d8c1077a14f8f970631e0ce5a95f6 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.56.105:5000/v3  |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+

  
  Ironic api and conductor services are up and running.

  ironic node-list
  
  +------+------+---------------+-------------+--------------------+-------------+
  | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
  +------+------+---------------+-------------+--------------------+-------------+
  +------+------+---------------+-------------+--------------------+-------------+

  Ironic api and conductor services are up and running.

  Please suggest what is missing here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1589575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622672] Re: Unknown filters aren't validated by the API

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/365659
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c8f208c4656f4252dba0719558e5b476d337b126
Submitter: Jenkins
Branch:master

commit c8f208c4656f4252dba0719558e5b476d337b126
Author: Victor Morales 
Date:   Mon Sep 5 08:50:06 2016 -0500

Make optional the validation of filters

This fix covers the cases where it's required to be
flexible in the validation of unknown filters.

Change-Id: I1becad77d48556181c5667ad06b2971b8b8517b2
Partially-Implements: blueprint adopt-oslo-versioned-objects-for-db
Closes-Bug: #1622672


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622672

Title:
  Unknown filters aren't validated by the API

Status in neutron:
  Fix Released

Bug description:
  During the integration of the Subnet Oslo-Versioned Object, Artur
  discovered[1] that there are some cases where the API receives filters
  which are not defined in the model. It's necessary to modify the
  current implementation of OVO to support cases like:

  * Using 'admin_state_up' in Subnet model class.
  * Using 'network_id' and 'router:external' as filters for Network model class.

  [1] http://lists.openstack.org/pipermail/openstack-
  dev/2016-July/100286.html
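
  A minimal sketch (hypothetical, not the merged neutron code) of making the
  filter validation optional so such API-only filters can pass through:

  def validate_filters(obj_fields, validate=True, **filters):
      # Filters the model does not define, e.g. 'router:external' on networks.
      unknown = set(filters) - set(obj_fields)
      if unknown and validate:
          raise ValueError("Unknown filters: %s" % ', '.join(sorted(unknown)))
      return {k: v for k, v in filters.items() if k in obj_fields}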

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622672/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589575] Re: nova-compute is not starting after adding ironic configurations

2016-09-14 Thread Sam Betts
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1589575

Title:
  nova-compute is not starting after adding ironic configurations

Status in Ironic:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  I am trying to configure Ironic. I am doing this on CentOS Mitaka.
  After writing the ironic configurations into nova.conf, nova-compute
  goes down because it is not able to get the node list from ironic.

  tail -f /var/log/ironic/ironic-api.log
  2016-06-06 08:46:37.634 3748 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:37.635 3748 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0009820
  2016-06-06 08:46:39.639 3749 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:39.640 3749 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0009780
  2016-06-06 08:46:41.644 3748 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:41.645 3748 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0010638
  2016-06-06 08:46:43.649 3749 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:43.650 3749 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0009439
  2016-06-06 08:46:45.656 3748 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:45.657 3748 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0014479
  2016-06-06 08:46:47.663 3748 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:47.664 3748 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0011349
  2016-06-06 08:46:49.670 3749 INFO keystonemiddleware.auth_token [-] Rejecting request
  2016-06-06 08:46:49.670 3749 INFO ironic_api [-] 192.168.56.105 "GET /v1/nodes/detail HTTP/1.1" status: 401 len: 234 time: 0.0010211

  ironic in nova.conf 
  [DEFAULT] 
  compute_driver=ironic.IronicDriver
  firewall_driver=nova.virt.firewall.NoopFirewallDriver 
scheduler_host_manager=nova.scheduler.ironic_host_manager.IronicHostManager 
  ram_allocation_ratio=1.0 
  reserved_host_memory_mb=0 
  scheduler_use_baremetal_filters=True 
  scheduler_tracks_instance_changes=False 
  [ironic] 
  auth_uri = http://192.168.56.105:5000 
  auth_url = http://192.168.56.105:35357 
  auth_region = RegionOne 
  auth_type = password 
  project_domain_id = default 
  user_domain_id = default 
  project_name = service 
  username = ironic 
  password = cloud123 
  api_endpoint=http://192.168.56.105:6385/v1

  I tried keeping the keystone V2.0 configurations but that gives
  authentication problems.
  admin_username=ironic
  admin_password=IRONIC_PASSWORD
  admin_url=http://IDENTITY_IP:35357/v2.0
  admin_tenant_name=service
  api_endpoint=http://IRONIC_NODE:6385/v1

  When I try the above values in ironic as per the official document, I get
  the error below:
   oslo_service.service DiscoveryFailure: Could not determine a suitable URL for the plugin

  Keystone endpoints are with v3
  
  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
  | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                            |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
  | 043d8c1077a14f8f970631e0ce5a95f6 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.56.105:5000/v3  |
  +----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+

  
  Ironic api and conductor services are up and running.

  ironic node-list
  
  +------+------+---------------+-------------+--------------------+-------------+
  | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
  +------+------+---------------+-------------+--------------------+-------------+
  +------+------+---------------+-------------+--------------------+-------------+

  Ironic api and conductor services are up and running.

  Please suggest what is missing here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1589575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621750] Re: Port does not revert device_owner to previous value in concurrent requests case

2016-09-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/367744
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5f9d8887076edaf3eef132e914766ba5e7136468
Submitter: Jenkins
Branch:master

commit 5f9d8887076edaf3eef132e914766ba5e7136468
Author: Anh Tran 
Date:   Fri Sep 9 11:18:18 2016 +0700

Fix Rollback port's device_owner

From this patch: https://review.openstack.org/#/c/341427/
Sometimes, port doesn't revert device_owner to previous value
in concurrent requests case.

This patch fixes this problem and adds unit test.

Change-Id: I864a559f0316e164caa065abd75c44fae971b571
Closes-Bug: #1621750


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1621750

Title:
  Port does not revert device_owner to previous value in concurrent
  requests case

Status in neutron:
  Fix Released

Bug description:
  From this patch: https://review.openstack.org/#/c/341427/
  Sometimes the port doesn't revert device_owner to its previous value in the
  concurrent-requests case.

  $ neutron port-create --name port1 net1   |   Overlapped CIDR
  $ neutron port-create --name port2 net2   |

  mysql> select name, device_id, device_owner from ports;
  
  +-------+--------------------------------------+--------------------------+
  | name  | device_id                            | device_owner             |
  +-------+--------------------------------------+--------------------------+
  | port1 |                                      |                          |
  | port2 |                                      |                          |
  +-------+--------------------------------------+--------------------------+

  
  $ neutron router-interface-add router-test port=port1 & neutron 
router-interface-add router-test port=port2

  Added interface 68c26144-4ae5-4316-a631-93d5e3a44fd8 to router router-test.
  Bad router request: Cidr 192.166.0.0/16 of subnet 
87d56713-e6f0-47ee-918d-759cb69b372d overlaps with cidr 192.166.100.0/24 of 
subnet 94cdfe4c-0e8e-40af-93fe-e8dcc2cb7484.

  
  WE EXPECTED IN DATABASE:
  mysql> select name, device_id, device_owner from ports;
  
  +-------+--------------------------------------+--------------------------+
  | name  | device_id                            | device_owner             |
  +-------+--------------------------------------+--------------------------+
  | port1 | f872184e-031e-43ae-9bc7-e1f05137e09e | network:router_interface |
  | port2 |                                      |                          |
  +-------+--------------------------------------+--------------------------+

  BUT, CURRENT RESULT HERE:
  mysql> select name, device_id, device_owner from ports;
  
  +-------+--------------------------------------+--------------------------+
  | name  | device_id                            | device_owner             |
  +-------+--------------------------------------+--------------------------+
  | port1 | f872184e-031e-43ae-9bc7-e1f05137e09e | network:router_interface |
  | port2 |                                      | network:router_interface |
  +-------+--------------------------------------+--------------------------+
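
  A minimal sketch (names hypothetical, not the actual fix) of the
  revert-on-failure pattern that avoids the stale device_owner shown above:

  def attach_port_to_router(plugin, context, port_id, router_id):
      port = plugin.get_port(context, port_id)
      previous = {'device_id': port['device_id'],
                  'device_owner': port['device_owner']}
      plugin.update_port(
          context, port_id,
          {'port': {'device_id': router_id,
                    'device_owner': 'network:router_interface'}})
      try:
          validate_router_interface(context, router_id, port_id)  # hypothetical
      except Exception:
          # Restore the values the port had before this request failed.
          plugin.update_port(context, port_id, {'port': previous})
          raise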

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1621750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623422] [NEW] delete_subnet update_port needs to catch SubnetNotFound

2016-09-14 Thread Kevin Benton
Public bug reported:

The code that updates the ports on a subnet to remove the fixed IPs of
the subnets being deleted needs to capture SubnetNotFound. A concurrent
deletion of another subnet on the same network will result in the
update_port call trying to set fixed IPs containing a subnet which no
longer exists, which results in SubnetNotFound.

This error was spotted in a Rally test:
http://logs.openstack.org/11/369511/3/check/gate-rally-dsvm-neutron-
rally/b188655/logs/screen-q-svc.txt.gz#_2016-09-14_07_20_06_111


The relevant request ID is req-befec696-04be-4b2c-94b4-8abb6eb195e0, the
paste for which is here: http://paste.openstack.org/show/576077/


It tried to update the port, got a concurrent operation error (due to another 
proc updating the same port to remove another subnet), and on retry it got a 
resourcenotfound. PortNotFound is already captured in delete_subnet, and the 
network has to exist for the port to still exist, so the only remaining thing 
to be missing is the Subnet of an ID being requested.
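
A minimal sketch (not the actual change; the exceptions module path is the
Newton-era neutron.common.exceptions) of tolerating the race the same way
PortNotFound is already tolerated:

from neutron.common import exceptions as n_exc

def remove_subnet_from_port(plugin, context, port, subnet_id):
    fixed_ips = [ip for ip in port['fixed_ips']
                 if ip['subnet_id'] != subnet_id]
    try:
        plugin.update_port(context, port['id'],
                           {'port': {'fixed_ips': fixed_ips}})
    except (n_exc.PortNotFound, n_exc.SubnetNotFound):
        # Either means a concurrent request already removed the resource;
        # there is nothing left to clean up for this port.
        pass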

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623422

Title:
  delete_subnet update_port needs to catch SubnetNotFound

Status in neutron:
  In Progress

Bug description:
  The code that updates the ports on a subnet to remove the fixed IPs of
  the subnets being deleted needs to capture SubnetNotFound. A
  concurrent deletion of another subnet on the same network will result
  in the update_port call trying to set fixed IPs containing a subnet
  which no longer exists, which results in SubnetNotFound.

  This error was spotted in a Rally test:
  http://logs.openstack.org/11/369511/3/check/gate-rally-dsvm-neutron-
  rally/b188655/logs/screen-q-svc.txt.gz#_2016-09-14_07_20_06_111


  The relevant request ID is req-befec696-04be-4b2c-94b4-8abb6eb195e0,
  the paste for which is here: http://paste.openstack.org/show/576077/

  
  It tried to update the port, got a concurrent operation error (due to another 
proc updating the same port to remove another subnet), and on retry it got a 
resourcenotfound. PortNotFound is already captured in delete_subnet, and the 
network has to exist for the port to still exist, so the only remaining thing 
to be missing is the Subnet of an ID being requested.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623422/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623425] [NEW] DNSNameServerDbObjectTestCase.test_filtering_by_fields fails sometimes

2016-09-14 Thread Ihar Hrachyshka
Public bug reported:

The test fails sometimes.

neutron.tests.unit.objects.test_subnet.DNSNameServerDbObjectTestCase.test_filtering_by_fields
-

Captured traceback:
~~~
b'Traceback (most recent call last):'
b'  File 
"/home/vagrant/git/neutron/neutron/tests/unit/objects/test_base.py", line 1215, 
in test_filtering_by_fields'
b"'Filtering by %s failed.' % field)"
b'  File 
"/home/vagrant/git/neutron/.tox/py34/lib/python3.4/site-packages/unittest2/case.py",
 line 1182, in assertItemsEqual'
b'return self.assertSequenceEqual(expected, actual, msg=msg)'
b'  File 
"/home/vagrant/git/neutron/.tox/py34/lib/python3.4/site-packages/unittest2/case.py",
 line 1014, in assertSequenceEqual'
b'self.fail(msg)'
b'  File 
"/home/vagrant/git/neutron/.tox/py34/lib/python3.4/site-packages/unittest2/case.py",
 line 690, in fail'
b'raise self.failureException(msg)'
b"AssertionError: Sequences differ: [{'subnet_id': 
'a8b63bc4-9799-4781-83c8-48e9491dcd5e', 'address': 'ioojfcuswf'}] != []"
b''
b'First sequence contains 1 additional elements.'
b'First extra element 0:'
b"{'subnet_id': 'a8b63bc4-9799-4781-83c8-48e9491dcd5e', 'address': 
'ioojfcuswf'}"
b''
b'+ []'
b"- [{'address': 'ioojfcuswf',"
b"-   'subnet_id': 'a8b63bc4-9799-4781-83c8-48e9491dcd5e'}] : Filtering by 
order failed."
b''

Reproducible with: ostestr  --regex
neutron.tests.unit.objects.test_subnet.DNSNameServerDbObjectTestCase.test_filtering_by_fields
--until-failure

Log example: http://logs.openstack.org/59/365659/10/check/gate-neutron-
python34/afb20dd/testr_results.html.gz

** Affects: neutron
 Importance: High
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: Confirmed


** Tags: gate-failure unittest

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
Milestone: None => newton-rc1

** Changed in: neutron
   Importance: Undecided => High

** Tags added: gate-failure unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623425

Title:
  DNSNameServerDbObjectTestCase.test_filtering_by_fields fails sometimes

Status in neutron:
  Confirmed

Bug description:
  The test fails sometimes.

  
neutron.tests.unit.objects.test_subnet.DNSNameServerDbObjectTestCase.test_filtering_by_fields
  
-

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File 
"/home/vagrant/git/neutron/neutron/tests/unit/objects/test_base.py", line 1215, 
in test_filtering_by_fields'
  b"'Filtering by %s failed.' % field)"
  b'  File 
"/home/vagrant/git/neutron/.tox/py34/lib/python3.4/site-packages/unittest2/case.py",
 line 1182, in assertItemsEqual'
  b'return self.assertSequenceEqual(expected, actual, msg=msg)'
  b'  File 
"/home/vagrant/git/neutron/.tox/py34/lib/python3.4/site-packages/unittest2/case.py",
 line 1014, in assertSequenceEqual'
  b'self.fail(msg)'
  b'  File 
"/home/vagrant/git/neutron/.tox/py34/lib/python3.4/site-packages/unittest2/case.py",
 line 690, in fail'
  b'raise self.failureException(msg)'
  b"AssertionError: Sequences differ: [{'subnet_id': 
'a8b63bc4-9799-4781-83c8-48e9491dcd5e', 'address': 'ioojfcuswf'}] != []"
  b''
  b'First sequence contains 1 additional elements.'
  b'First extra element 0:'
  b"{'subnet_id': 'a8b63bc4-9799-4781-83c8-48e9491dcd5e', 'address': 
'ioojfcuswf'}"
  b''
  b'+ []'
  b"- [{'address': 'ioojfcuswf',"
  b"-   'subnet_id': 'a8b63bc4-9799-4781-83c8-48e9491dcd5e'}] : Filtering 
by order failed."
  b''

  Reproducible with: ostestr  --regex
  
neutron.tests.unit.objects.test_subnet.DNSNameServerDbObjectTestCase.test_filtering_by_fields
  --until-failure

  Log example: http://logs.openstack.org/59/365659/10/check/gate-
  neutron-python34/afb20dd/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623425/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623402] [NEW] ipam leaks DBReference error on deleted subnet

2016-09-14 Thread Kevin Benton
Public bug reported:

Spotted in a rally run with lots of concurrent subnet operations:


http://logs.openstack.org/11/369511/3/check/gate-rally-dsvm-neutron-rally/b188655/logs/screen-q-svc.txt.gz#_2016-09-14_07_20_10_547


2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
[req-14b41117-5a1b-4b0d-9962-14e9adb7048a c_rally_b76aa7ea_hhhcxI8u -] delete 
failed: Exception deleting fixed_ip from port 
1098dc9c-dd9e-4a11-8e6e-b999218d55aa
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 79, in resource
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 555, in delete
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource return 
self._delete(request, id, **kwargs)
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 88, in wrapped
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 84, in wrapped
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 124, in wrapped
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
traceback.format_exc())
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 119, in wrapped
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource return 
f(*dup_args, **dup_kwargs)
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 577, in _delete
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 618, in inner
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource return f(self, 
context, *args, **kwargs)
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 159, in wrapped
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource return 
method(*args, **kwargs)
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 88, in wrapped
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2016-09-14 07:20:10.547 28514 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python

[Yahoo-eng-team] [Bug 1438520] Re: cloud-init on vivid upgrade causes sigterm, which aborts 'runcmd' execution

2016-09-14 Thread Mathew Hodson
** Project changed: cloud-init => ubuntu

** No longer affects: ubuntu

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1438520

Title:
  cloud-init on vivid upgrade causes sigterm, which aborts 'runcmd'
  execution

Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  I used an OpenStack infrastructure with a vivid beta2 image.  End
  result: if there is a cloud-init upgrade available, it installs and
  aborts parts of the cloud-init execution.  (Bad news for my user
  scripts!)  I'm not sure of all the fallout, but at least my 'runcmd'
  section was not executed (grep the logs for 'runcmd').

  From cloud-init-output.log:

  Preparing to unpack .../cryptsetup-bin_2%3a1.6.1-1ubuntu7_amd64.deb ...^M
  Unpacking cryptsetup-bin (2:1.6.1-1ubuntu7) over (2:1.6.1-1ubuntu5) ...^M
  Preparing to unpack .../cryptsetup_2%3a1.6.1-1ubuntu7_amd64.deb ...^M
  Unpacking cryptsetup (2:1.6.1-1ubuntu7) over (2:1.6.1-1ubuntu5) ...^M
  Preparing to unpack .../cloud-init_0.7.7~bzr1087-0ubuntu1_all.deb ...^M
  Cloud-init v. 0.7.7 running 'modules:final' at Tue, 31 Mar 2015 05:09:42 
+. Up 848.15 seconds.
  Cloud-init v. 0.7.7 finished at Tue, 31 Mar 2015 05:09:44 +. Datasource 
DataSourceOpenStack [net,ver=2].  Up 850.19 seconds

  From cloud-init.log:

  Mar 31 04:57:38 ubuntu [CLOUDINIT] util.py[DEBUG]: Running command 
['eatmydata', 'apt-get', '--option=Dpkg::Options::=--force-confold', 
'--option=Dpkg::options::=--force-unsafe-io', '--assume-yes', '--quiet', 
'dist-upgrade'] with allowed return codes [0] (shell=False, capture=False)
  Mar 31 05:09:41 ubuntu [CLOUDINIT] util.py[DEBUG]: Cloud-init 0.7.7 received 
SIGTERM, exiting...#012  Filename: /usr/lib/python3.4/subprocess.py#012  
Function: _eintr_retry_call#012  Line number: 491#012Filename: 
/usr/lib/python3.4/subprocess.py#012Function: _try_wait#012Line number: 
1514#012  Filename: /usr/lib/python3.4/subprocess.py#012  Function: 
wait#012  Line number: 1566
  Mar 31 05:09:41 ubuntu [CLOUDINIT] util.py[DEBUG]: apt-upgrade [eatmydata 
apt-get --option=Dpkg::Options::=--force-confold 
--option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet dist-upgrade] 
took 722.766 seconds
  Mar 31 05:09:41 ubuntu [CLOUDINIT] util.py[DEBUG]: Reading from /proc/uptime 
(quiet=False)
  Mar 31 05:09:41 ubuntu [CLOUDINIT] util.py[DEBUG]: Read 12 bytes from 
/proc/uptime
  Mar 31 05:09:41 ubuntu [CLOUDINIT] util.py[DEBUG]: cloud-init mode 'modules' 
took 761.227 seconds (761.23)
  Mar 31 05:09:42 ubuntu [CLOUDINIT] util.py[DEBUG]: Cloud-init v. 0.7.7 
running 'modules:final' at Tue, 31 Mar 2015 05:09:42 +. Up 848.15 seconds.
  Mar 31 05:09:44 ubuntu [CLOUDINIT] stages.py[DEBUG]: Using distro class 

  Mar 31 05:09:44 ubuntu [CLOUDINIT] stages.py[DEBUG]: Running module 
rightscale_userdata () 
with frequency once-per-instance


  I'll attach full cloud-init logs and the userdata.  I used the
  following command to boot the instance:

  nova boot --key-name dpb --user-data ~/test.txt --image
  fc7aedfd-f465-48b9-9fc6-c826f3a0e81b --flavor 2 vivid-test

  and the image is this:

  ubuntu-released/ubuntu-
  vivid-15.04-beta2-amd64-server-20150325-disk1.img

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1438520/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623390] [NEW] Wrong calculation of quotas

2016-09-14 Thread Andrey Kurilin
Public bug reported:

Neutron has a non-voting rally job. It is red now.
Rally report: 
http://logs.openstack.org/44/367744/7/check/gate-rally-dsvm-neutron-neutron/b3af93f/rally-plot/results.html.gz#/NeutronNetworks.create_and_list_networks/failures

Scenario:
- [pre-step] create new test tenant
- [pre-step] set quotas for new tenant to allow create 100 networks 
(neutron.quotas.update networks=100)
- [repeat 100 times] create and list networks

Expected result: all 100 iterations finish successfully == it is possible to create N networks if the networks quota is set to N
Actual result: the last iteration fails == it is only possible to create N-1 networks if the networks quota is set to N
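
A minimal sketch (hypothetical, not the neutron quota driver) of the classic
off-by-one that produces exactly this symptom when the resource being created
is already counted as used:

def check_quota(used, requested, limit):
    # Correct: the request is allowed as long as the total stays within limit.
    return used + requested <= limit

def check_quota_off_by_one(used, requested, limit):
    # Buggy variant: admits only limit - 1 resources.
    return used + requested < limit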

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623390

Title:
  Wrong calculation of quotas

Status in neutron:
  New

Bug description:
  Neutron has a non-voting rally job. It is red now.
  Rally report: 
http://logs.openstack.org/44/367744/7/check/gate-rally-dsvm-neutron-neutron/b3af93f/rally-plot/results.html.gz#/NeutronNetworks.create_and_list_networks/failures

  Scenario:
  - [pre-step] create new test tenant
  - [pre-step] set quotas for new tenant to allow create 100 networks 
(neutron.quotas.update networks=100)
  - [repeat 100 times] create and list networks

  Expected result: all 100 iterations finish successfully == it is possible to create N networks if the networks quota is set to N
  Actual result: the last iteration fails == it is only possible to create N-1 networks if the networks quota is set to N

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623390/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp