[Yahoo-eng-team] [Bug 1477829] [NEW] Create port API with invalid value returns 500(Internal Server Error)

2015-07-23 Thread Koichi Miura
Public bug reported:

I executed "POST /v2.0/ports" with invalid value like a "null" as the parameter 
"allowed_address_pairs".
Then Neutron Server returned 500(Internal Server Error).

I expected Neutron Server just returns 400(Bad Request).

API Result and Logs are as follows.
[API Result]
stack@ubuntu:~/deg$ curl -g -i -X POST -H "Content-Type: application/json" -H 
"X-Auth-Token: ${token}" http://192.168.122.99:9696/v2.0/ports -d 
"{\"port\":{\"network_id\":\"7da5015b-4e6a-4c9f-af47-42467a4a34c5\",\"allowed_address_pairs\":null}}"
 ; echo
HTTP/1.1 500 Internal Server Error
Content-Type: application/json; charset=UTF-8
Content-Length: 150
X-Openstack-Request-Id: req-f44e7756-dd17-42c9-81e2-1c38e60a748e
Date: Thu, 23 Jul 2015 09:35:26 GMT

{"NeutronError": {"message": "Request Failed: internal server error
while processing your request.", "type": "HTTPInternalServerError",
"detail": ""}}

[Neutron Server Log]
2015-07-23 18:35:26.373 DEBUG neutron.api.v2.base 
[req-f44e7756-dd17-42c9-81e2-1c38e60a748e demo 
0522fc19a56b4d7ca32a9140d3d36a08] Request body: {u'port': {u'network_id': 
u'7da5015b-4e6a-4c9f-af47-42467a4a34c5', u'allowed_address_pairs': None}} from 
(pid=24318) prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:606
2015-07-23 18:35:26.376 ERROR neutron.api.v2.resource 
[req-f44e7756-dd17-42c9-81e2-1c38e60a748e demo 
0522fc19a56b4d7ca32a9140d3d36a08] create failed
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in 
__exit__
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 396, in create
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource 
allow_bulk=self._allow_bulk)
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 664, in prepare_request_body
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource 
attr_vals['validate'][rule])
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/extensions/allowedaddresspairs.py", line 52, in 
_validate_allowed_address_pairs
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource if len(address_pairs) 
> cfg.CONF.max_allowed_address_pair:
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource TypeError: object of type 
'NoneType' has no len()
2015-07-23 18:35:26.376 TRACE neutron.api.v2.resource
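
For reference, the 500 is caused by calling len() on None. A minimal,
self-contained sketch (not the actual Neutron fix) of the kind of guard that
would turn this into a 400 Bad Request instead:

MAX_ALLOWED_ADDRESS_PAIRS = 10  # assumed value, for illustration only


def validate_allowed_address_pairs(address_pairs):
    """Return an error string (mapped to HTTP 400) or None if valid."""
    if not isinstance(address_pairs, list):
        # Covers null/None and any other non-list value, so the API layer
        # can answer 400 Bad Request instead of crashing with a TypeError.
        return "Allowed address pairs must be a list"
    if len(address_pairs) > MAX_ALLOWED_ADDRESS_PAIRS:
        return "Exceeded maximum number of allowed address pairs"
    return None


print(validate_allowed_address_pairs(None))  # error string, not a 500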

** Affects: neutron
 Importance: Undecided
 Assignee: Koichi Miura (miura-koichi)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Koichi Miura (miura-koichi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477829

Title:
  Create port API with invalid value returns 500(Internal Server Error)

Status in neutron:
  New

Bug description:
  I executed "POST /v2.0/ports" with invalid value like a "null" as the 
parameter "allowed_address_pairs".
  Then Neutron Server returned 500(Internal Server Error).

  I expected Neutron Server just returns 400(Bad Request).

  API Result and Logs are as follows.
  [API Result]
  stack@ubuntu:~/deg$ curl -g -i -X POST -H "Content-Type: application/json" -H 
"X-Auth-Token: ${token}" http://192.168.122.99:9696/v2.0/ports -d 
"{\"port\":{\"network_id\":\"7da5015b-4e6a-4c9f-af47-42467a4a34c5\",\"allowed_address_pairs\":null}}"
 ; echo
  HTTP/1.1 500 Internal Server Error
  Content-Type: application/json; charset=UTF-8
  Content-Length: 150
  X-Openstack-Request-Id: req-f44e7756-dd17-42c9-81e2-1c38e60a748e
  Date: Thu, 23 Jul 2015 09:35:26 GMT

  {"NeutronError": {"message": "Request Failed: internal server error
  while processing your request.", "type": "HTTPInternalServerError",
  "detail": ""}}

  [Neutron Server Log]
  2015-07-23 18:35:26.373 DEBUG neutron.api.v2.base 
[req-f44e7756-dd17-42c9-81e2-1c38e60a748e demo 
0522fc19a56b4d7ca32a9140d3d36a08] Request body: {u'port': {u'network_id': 
u'7da5015b-4e6a-4c9f-af47-42467a4a34c5', u'allowed_address_pairs': None}} from 

[Yahoo-eng-team] [Bug 1477825] [NEW] app.module shouldn't be generated

2015-07-23 Thread Shaoquan Chen
Public bug reported:

Currently app.module's JavaScript code is generated from a Django
template, which causes many issues. JavaScript code should not be
generated, except for pure data or data objects that receive values from
the page's environment.

** Affects: horizon
 Importance: Undecided
 Assignee: Shaoquan Chen (sean-chen2)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Shaoquan Chen (sean-chen2)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1477825

Title:
  app.module shouldn't be generated

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently app.module's JavaScript code is generated from a Django
  template, which causes many issues. JavaScript code should not be
  generated, except for pure data or data objects that receive values from
  the page's environment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1477825/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477822] [NEW] Too loose url regex for project/images//create

2015-07-23 Thread Lin Yang
Public bug reported:

How to reproduce:
Enter a wrong URL such as http://<host>/project/images/<instance_id>/createabcd
into the browser; it shows the create-snapshot view instead of the expected 404.

Root cause:
The current URL regex '^(?P<instance_id>[^/]+)/create' is too loose: it
matches any URL whose last segment merely starts with 'create'.
https://git.openstack.org/cgit/openstack/horizon/tree/openstack_dashboard/dashboards/project/images/snapshots/urls.py#n27
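
A quick illustration of the difference (the 'instance_id' group name is taken
from the linked urls.py; the tightened pattern is only a sketch, not the
actual patch):

import re

loose = re.compile(r'^(?P<instance_id>[^/]+)/create')
tight = re.compile(r'^(?P<instance_id>[^/]+)/create/$')

print(bool(loose.match('abc-123/createabcd')))  # True  -> wrong view is served
print(bool(tight.match('abc-123/createabcd')))  # False -> 404 as expected
print(bool(tight.match('abc-123/create/')))     # True  -> snapshot create view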

** Affects: horizon
 Importance: Undecided
 Assignee: Lin Yang (lin-a-yang)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Lin Yang (lin-a-yang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1477822

Title:
  Too loose url regex for project/images//create

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  How to reproduce:
  Enter a wrong URL such as http://<host>/project/images/<instance_id>/createabcd
  into the browser; it shows the create-snapshot view instead of the expected 404.

  Root cause:
  The current URL regex '^(?P<instance_id>[^/]+)/create' is too loose: it
  matches any URL whose last segment merely starts with 'create'.
  
https://git.openstack.org/cgit/openstack/horizon/tree/openstack_dashboard/dashboards/project/images/snapshots/urls.py#n27

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1477822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474236] Re: unexpected response when creating an image with locations in API v2

2015-07-23 Thread Sabari Murugesan
In v2, locations can be added to an image using the PATCH method.

Please refer to
http://developer.openstack.org/api-ref-image-v2.html#updateImage-v2.
If you are using the CLI, that would be:
glance --os-image-api-version 2 add-location --url <URL> <IMAGE_ID>
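
For completeness, a minimal sketch of that PATCH call (endpoint, token and
image id are placeholders; the JSON-patch media type is assumed from the v2
image API reference):

import json
import requests

GLANCE = "http://glance.example.com:9292"           # placeholder endpoint
IMAGE_ID = "11111111-2222-3333-4444-555555555555"   # placeholder image id
TOKEN = "my-token"                                  # placeholder token

patch = [{"op": "add",
          "path": "/locations/-",
          "value": {"url": "http://example.com/image.qcow2", "metadata": {}}}]

resp = requests.patch(
    "%s/v2/images/%s" % (GLANCE, IMAGE_ID),
    headers={"X-Auth-Token": TOKEN,
             "Content-Type": "application/openstack-images-v2.1-json-patch"},
    data=json.dumps(patch))
print(resp.status_code)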


** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1474236

Title:
  unexpected response when creating an image with locations in API v2

Status in Glance:
  Invalid

Bug description:
  When creating an image with the parameter 'owner' or 'locations' in API
  v2, it raises an error like:

  403 Forbidden
  Attribute 'locations' is reserved.

  Reproduce:

  Create an image like:

  POST http://<host_ip>:<port>/v2/images

  body:

  {
      "name": "v2_test",
      "tags": [
          "ubuntu",
          "quantal"
      ],
      "disk_format": "qcow2",
      "container_format": "bare",
      "locations": [
          {
              "url": "xx",
              "metadata": {}
          }
      ]
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1474236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477786] [NEW] Rally tests that create and delete routers make _ensure_default_security_group go into an endless loop

2015-07-23 Thread nkade...@gmail.com
Public bug reported:

As part of the run, _ensure_default_security_group is called and ends up in a
continuous loop when it detects a pre-existing default security group.

Line 549 has a 'continue' which results in an infinite loop; it should have a
maximum number of retries and then bail out, roughly as in the sketch below.
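
A minimal sketch of that bounded retry (illustrative only, not the actual
Neutron code; the limit and helper names are assumptions):

class DuplicateError(Exception):
    pass


MAX_RETRIES = 10  # assumed limit


def ensure_default_security_group(get_default, create_default):
    for _attempt in range(MAX_RETRIES):
        existing = get_default()
        if existing is not None:
            return existing
        try:
            return create_default()
        except DuplicateError:
            # Another worker created it concurrently: re-read and retry,
            # but only a bounded number of times.
            continue
    raise RuntimeError("default security group still missing after %d attempts"
                       % MAX_RETRIES)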

The test is run with 5 workers in Neutron.

Rally JSON template:

{
    "NeutronNetworks.create_and_delete_routers": [
        {
            "args": {
                "network_create_args": {},
                "subnet_create_args": {},
                "subnet_cidr_start": "1.1.0.0/30",
                "subnets_per_network": 2,
                "router_create_args": {}
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 10
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                },
                "quotas": {
                    "neutron": {
                        "network": -1,
                        "subnet": -1,
                        "router": -1
                    }
                }
            }
        }
    ]
}


Neutron Log:
2015-07-23 14:03:06.288 19411 DEBUG neutron.db.securitygroups_db 
[req-24fda616-62d4-425d-a2ad-f755b8b16358 ] Duplicate default security group 
b3bfe28880e74b49958e8ae34462
732d was not created _ensure_default_security_group 
/usr/lib/python2.7/site-packages/neutron/db/securitygroups_db.py:548
2015-07-23 14:03:06.313 19411 DEBUG neutron.db.securitygroups_db 
[req-24fda616-62d4-425d-a2ad-f755b8b16358 ] Duplicate default security group 
b3bfe28880e74b49958e8ae34462
732d was not created _ensure_default_security_group 
/usr/lib/python2.7/site-packages/neutron/db/securitygroups_db.py:548
2015-07-23 14:03:06.337 19411 DEBUG neutron.db.securitygroups_db 
[req-24fda616-62d4-425d-a2ad-f755b8b16358 ] Duplicate default security group 
b3bfe28880e74b49958e8ae34462
732d was not created _ensure_default_security_group 
/usr/lib/python2.7/site-packages/neutron/db/securitygroups_db.py:548
2015-07-23 14:03:06.359 19411 DEBUG neutron.db.securitygroups_db 
[req-24fda616-62d4-425d-a2ad-f755b8b16358 ] Duplicate default security group 
b3bfe28880e74b49958e8ae34462
732d was not created _ensure_default_security_group 
/usr/lib/python2.7/site-packages/neutron/db/securitygroups_db.py:548
2015-07-23 14:03:06.387 19411 DEBUG neutron.db.securitygroups_db 
[req-24fda616-62d4-425d-a2ad-f755b8b16358 ] Duplicate default security group 
b3bfe28880e74b49958e8ae34462
732d was not created _ensure_default_security_group 
/usr/lib/python2.7/site-packages/neutron/db/securitygroups_db.py:548
2015-07-23 14:03:06.410 19411 DEBUG neutron.db.securitygroups_db 
[req-24fda616-62d4-425d-a2ad-f755b8b16358 ] Duplicate default security group 
b3bfe28880e74b49958e8ae34462
732d was not created _ensure_default_security_group 
/usr/lib/python2.7/site-packages/neutron/db/securitygroups_db.py:548
2015-07-23 14:03:06.433 19411 DEBUG neutron.db.securitygroups_db 
[req-24fda616-62d4-425d-a2ad-f755b8b16358 ] Duplicate default security group 
b3bfe28880e74b49958e8ae34462
732d was not created _ensure_default_security_group 
/usr/lib/python2.7/site-packages/neutron/db/securitygroups_db.py:548
2015-07-23 14:03:06.457 19411 DEBUG neutron.db.securitygroups_db 
[req-24fda616-62d4-425d-a2ad-f755b8b16358 ] Duplicate default security group 
b3bfe28880e74b49958e8ae34462
732d was not created _ensure_default_security_group 
/usr/lib/python2.7/site-packages/neutron/db/securitygroups_db.py:548
2015-07-23 14:03:06.480 19411 DEBUG neutron.db.securitygroups_db 
[req-24fda616-62d4-425d-a2ad-f755b8b16358 ] Duplicate default security group 
b3bfe28880e74b49958e8ae34462
732d was not created _ensure_default_security_group 
/usr/lib/python2.7/site-packages/neutron/db/securitygroups_db.py:548
2015-07-23 14:03:06.504 19411 DEBUG neutron.db.securitygroups_db 
[req-24fda616-62d4-425d-a2ad-f755b8b16358 ] Duplicate default security group 
b3bfe28880e74b49958e8ae34462
732d was not created _ensure_default_security_group 
/usr/lib/python2.7/site-packages/neutron/db/securitygroups_db.py:548
2015-07-23 14:03:06.533 19411 DEBUG neutron.db.securitygroups_db 
[req-24fda616-62d4-425d-a2ad-f755b8b16358 ] Duplicate default security group 
b3bfe28880e74b49958e8ae34462
732d was not created _ensure_default_security_group 
/usr/lib/python2.7/site-packages/neutron/db/securitygroups_db.py:548
2015-07-23 14:03:06.577 19411 DEBUG neutron.db.securitygroups_db 
[req-24fda616-62d4-425d-a2ad-f755b8b16358 ] Duplicate default security group 
b3bfe28880e74b49958e8ae34462
732d was not created _ensure_default_security_group 
/usr/lib/python2.7/site-packages/neutron/db/securitygroups_db.py:548
2015-07-23 14:03:06.597 19411 DEBUG neutron.db.securitygroups_db 
[req-24fda616-62d4-425d-a2ad-f755b8b16358 ] Duplicate default security group 
b3bfe28880e74b49958e8ae34462
732d was not created _ensure_default_security_group 
/usr/lib/python2.7/site-packages/neutro

[Yahoo-eng-team] [Bug 1477783] [NEW] Handle Launch Instance errors when Cinder is disabled

2015-07-23 Thread Yash Bathia
Public bug reported:

The Launch Instance form raises an error when Cinder is disabled. There
should be a check that Cinder is enabled before populating the volume ID
choices and volume snapshot ID choices in the Launch Instance form.
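
A hypothetical sketch of such a guard (the 'is_service_enabled' helper and
the 'volume' service name are assumptions about the Horizon API; both are
passed in here just to keep the example self-contained):

def volume_choices(request, cinder, is_service_enabled):
    if not is_service_enabled(request, 'volume'):
        # Cinder is not deployed: return empty choice lists instead of
        # letting the cinder client call raise and break the form.
        return [], []
    volumes = [(v.id, v.name) for v in cinder.volume_list(request)]
    snapshots = [(s.id, s.name) for s in cinder.volume_snapshot_list(request)]
    return volumes, snapshots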

** Affects: horizon
 Importance: Undecided
 Assignee: Yash Bathia (ybathia)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Yash Bathia (ybathia)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1477783

Title:
  Handle Launch Instance errors when Cinder is disabled

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Launch Instance form raises an error when Cinder is disabled.
  There should be a check that Cinder is enabled before populating the
  volume ID choices and volume snapshot ID choices in the Launch Instance
  form.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1477783/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465221] Re: Horizon running on newer Django: the fields are no longer sorted correctly.

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1465221

Title:
  Horizon running on newer Django: the fields are no longer sorted
  correctly.

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  The Create User form has the wrong order of fields.

  Correct order : "name", "email", "password", "confirm_password",
  "project", "role".

  Current order : "password", "confirm_password", "name", "email",
  "project", "role".

  This causes the integration test (test_create_delete_user) to
  fail.

  Traceback (most recent call last):
    File "/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_user_create_delete.py", line 26, in test_create_delete_user
      self.assertTrue(users_page.is_user_present(self.USER_NAME))
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/unittest2/case.py", line 678, in assertTrue
      raise self.failureException(msg)
  AssertionError: False is not true

  Can be seen both on the latest devstack and in the
  gate-horizon-dsvm-integration gate job.
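
  An illustrative sketch of pinning the field order explicitly (not the
  actual Horizon fix; class and attribute names are made up for the example)
  so it no longer depends on the Django version's field-creation counter:

  import collections

  from django import forms


  class CreateUserForm(forms.Form):
      name = forms.CharField()
      email = forms.EmailField(required=False)
      password = forms.CharField(widget=forms.PasswordInput)
      confirm_password = forms.CharField(widget=forms.PasswordInput)

      FIELD_ORDER = ["name", "email", "password", "confirm_password"]

      def __init__(self, *args, **kwargs):
          super(CreateUserForm, self).__init__(*args, **kwargs)
          # Rebuild self.fields in the order we want the form rendered.
          self.fields = collections.OrderedDict(
              (field, self.fields[field]) for field in self.FIELD_ORDER)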

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1465221/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447191] Re: TestOvsdbMonitor.test_killed_monitor_respawns hangs on clean environment

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447191

Title:
  TestOvsdbMonitor.test_killed_monitor_respawns hangs on clean
  environment

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The TestOvsdbMonitor.test_killed_monitor_respawns test waits for output from
  'ovsdb-client monitor Bridge', which doesn't produce any output if no
  bridge is present in ovsdb. The test itself doesn't create any bridge
  and relies on the environment to have one.
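
  A minimal sketch of making the test self-sufficient (illustrative only; the
  bridge name is arbitrary and ovs-vsctl must be available):

  import subprocess
  import uuid

  bridge = "test-br-%s" % uuid.uuid4().hex[:6]
  subprocess.check_call(["ovs-vsctl", "add-br", bridge])
  try:
      pass  # ... run the monitor-respawn scenario against this bridge ...
  finally:
      subprocess.check_call(["ovs-vsctl", "del-br", bridge])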

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447191/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353939] Re: Rescue fails with 'Failed to terminate process: Device or resource busy' in the n-cpu log

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1353939

Title:
  Rescue fails with 'Failed to terminate process: Device or resource
  busy' in the n-cpu log

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New
Status in nova package in Ubuntu:
  New

Bug description:
  [Impact]

   * Users may sometimes fail to shut down an instance if the associated qemu
 process is in uninterruptible sleep (typically IO).

  [Test Case]

   * 1. Create some IO load in a VM.
     2. Look at the associated qemu process; make sure it has STAT D in ps output.
     3. Shut down the instance.
     4. With the patch in place, nova will retry calling libvirt to shut down
        the instance 3 times, waiting for the signal to be delivered to the
        qemu process (see the retry sketch below).
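
  A rough sketch of that retry behaviour (names are illustrative, not the
  actual Nova code):

  import time


  def destroy_with_retries(destroy, retries=3, delay=1):
      for attempt in range(1, retries + 1):
          try:
              return destroy()
          except OSError:
              # e.g. 'Failed to terminate process ... Device or resource busy'
              if attempt == retries:
                  raise
              time.sleep(delay)  # give the signal time to be delivered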

  [Regression Potential]

   * None


  message: "Failed to terminate process" AND
  message:'InstanceNotRescuable' AND message: 'Exception during message
  handling' AND tags:"screen-n-cpu.txt"

  The above logstash query reports back only the failed jobs; the 'Failed to
  terminate process' message also appears close to other failed rescue tests,
  but tempest does not always report them as an error at the end.

  message: "Failed to terminate process" AND tags:"screen-n-cpu.txt"

  Usual console log:
  Details: (ServerRescueTestJSON:test_rescue_unrescue_instance) Server 
0573094d-53da-40a5-948a-747d181462f5 failed to reach RESCUE status and task 
state "None" within the required time (196 s). Current status: SHUTOFF. Current 
task state: None.

  http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-
  full/90726cb/console.html#_2014-08-07_03_50_26_520

  Usual n-cpu exception:
  
http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-full/90726cb/logs/screen-n-cpu.txt.gz#_2014-08-07_03_32_02_855

  2014-08-07 03:32:02.855 ERROR oslo.messaging.rpc.dispatcher 
[req-39ce7a3d-5ceb-41f5-8f9f-face7e608bd1 ServerRescueTestJSON-2035684545 
ServerRescueTestJSON-1017508309] Exception during message handling: Instance 
0573094d-53da-40a5-948a-747d181462f5 cannot be rescued: Driver Error: Failed to 
terminate process 26425 with SIGKILL: Device or resource busy
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 408, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 292, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher pass
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/mana

[Yahoo-eng-team] [Bug 1296414] Re: quotas not updated when periodic tasks or startup finish deletes

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296414

Title:
  quotas not updated when periodic tasks or startup finish deletes

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  There are a couple of cases in the compute manager where we don't pass
  reservations to _delete_instance().  For example, one of them is
  cleaning up when we see a delete that is stuck in DELETING.

  The only place we ever update quotas as part of delete should be when
  the instance DB record is removed. If something is stuck in DELETING,
  it means that the quota was not updated.  We should make sure we're
  always updating the quota when the instance DB record is removed.

  Soft delete kinda throws a wrench in this, though, because I think you
  want soft deleted instances to not count against quotas -- yet their
  DB records will still exist. In this case, it seems we may have a race
  condition in _delete_instance() -> _complete_deletion() where if the
  instance somehow was SOFT_DELETED, quotas would have updated twice
  (once in soft_delete and once in _complete_deletion).
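
  An illustrative sketch of the intended rule (not Nova code): release the
  quota exactly once, at the moment the instance DB record is removed, no
  matter which path triggered the deletion:

  def delete_instance(db, quotas, instance_uuid, reservations=None):
      if reservations is None:
          # Cleanup paths (periodic task, startup) did not carry
          # reservations, so create them here; the quota is still released.
          reservations = quotas.reserve(instances=-1)
      db.instance_destroy(instance_uuid)
      quotas.commit(reservations)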

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333365] Re: Deleting a VM port does not remove Security rules in ip tables

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1333365

Title:
  Deleting a VM port does not remove Security rules in ip tables

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Deleting a VM port does not remove the security rules associated with the
  VM port in iptables.

  
  Setup : 

  ICEHOUSE GA with KVM Compute node,network node, controller

  1. Spawn a VM with security group attached.
  2. Delete a VM port 
  3. Verify the ip tables


  VM IP  :  10.10.1.4
  Rules attached : TCP and icmp rule

  
  root@ICN-KVM:~# ovs-vsctl show
  f3b34ea5-9799-460d-99bb-26359fd26e38
  Bridge "br-eth1"
  Port "br-eth1"
  Interface "br-eth1"
  type: internal
  Port "phy-br-eth1"
  Interface "phy-br-eth1"
  Port "eth1"
  Interface "eth1"
  Bridge br-int
  Port br-int
  Interface br-int
  type: internal
  Port "qvof28b18dc-c3"<<<   VM tap port 
  tag: 1
  Interface "qvof28b18dc-c3"
  Port "int-br-eth1"
  Interface "int-br-eth1"
  ovs_version: "2.0.1"
  root@ICN-KVM:~#

  
  After deleting the port, the security rules are still present in iptables.
  -

  root@ICN-KVM:~# iptables-save | grep 28b18dc
  :neutron-openvswi-if28b18dc-c - [0:0]
  :neutron-openvswi-of28b18dc-c - [0:0]
  :neutron-openvswi-sf28b18dc-c - [0:0]
  -A neutron-openvswi-FORWARD -m physdev --physdev-out tapf28b18dc-c3 
--physdev-is-bridged -j neutron-openvswi-sg-chain
  -A neutron-openvswi-FORWARD -m physdev --physdev-in tapf28b18dc-c3 
--physdev-is-bridged -j neutron-openvswi-sg-chain
  -A neutron-openvswi-INPUT -m physdev --physdev-in tapf28b18dc-c3 
--physdev-is-bridged -j neutron-openvswi-of28b18dc-c
  -A neutron-openvswi-if28b18dc-c -m state --state INVALID -j DROP
  -A neutron-openvswi-if28b18dc-c -m state --state RELATED,ESTABLISHED -j RETURN
  -A neutron-openvswi-if28b18dc-c -p tcp -m tcp -j RETURN
  -A neutron-openvswi-if28b18dc-c -p icmp -j RETURN
  -A neutron-openvswi-if28b18dc-c -s 10.10.1.3/32 -p udp -m udp --sport 67 
--dport 68 -j RETURN
  -A neutron-openvswi-if28b18dc-c -j neutron-openvswi-sg-fallback
  -A neutron-openvswi-of28b18dc-c -p udp -m udp --sport 68 --dport 67 -j RETURN
  -A neutron-openvswi-of28b18dc-c -j neutron-openvswi-sf28b18dc-c
  -A neutron-openvswi-of28b18dc-c -p udp -m udp --sport 67 --dport 68 -j DROP
  -A neutron-openvswi-of28b18dc-c -m state --state INVALID -j DROP
  -A neutron-openvswi-of28b18dc-c -m state --state RELATED,ESTABLISHED -j RETURN
  -A neutron-openvswi-of28b18dc-c -j RETURN
  -A neutron-openvswi-of28b18dc-c -j neutron-openvswi-sg-fallback
  -A neutron-openvswi-sf28b18dc-c -s 10.10.1.4/32 -m mac --mac-source 
FA:16:3E:D4:47:F8 -j RETURN
  -A neutron-openvswi-sf28b18dc-c -j DROP
  -A neutron-openvswi-sg-chain -m physdev --physdev-out tapf28b18dc-c3 
--physdev-is-bridged -j neutron-openvswi-if28b18dc-c
  -A neutron-openvswi-sg-chain -m physdev --physdev-in tapf28b18dc-c3 
--physdev-is-bridged -j neutron-openvswi-of28b18dc-c
  root@ICN-KVM:~#

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1333365/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420042] Re: DHCP port is changed when dhcp-agent restart

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1420042

Title:
  DHCP port is changed when dhcp-agent restart

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  When the dhcp-agent is restarted, the DHCP port may change.  Obviously this
  causes problems since VMs don't notice it, the old port may be reused,
  etc.

  It is easy to reproduce.

  before dhcp-agent restart)
  $ neutron port-list --device_owner network:dhcp
  
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                        |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | b72f0eb1-c1d1-4f36-a563-b4b9b5ccf562 |      | fa:16:3e:f7:c3:c5 | {"subnet_id": "9071fca2-c87e-4bd5-a2c6-89c54883acac", "ip_address": "10.0.0.2"}  |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

  1) stop dhcp-agent
  2) create port (ex. $ neutron port-create private)
  3) start dhcp-agent

  after)
  $ neutron port-list --device_owner network:dhcp
  
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                        |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
  | 78653fa6-e153-41ab-b451-b40da6060415 |      | fa:16:3e:b8:b6:d3 | {"subnet_id": "9071fca2-c87e-4bd5-a2c6-89c54883acac", "ip_address": "10.0.0.4"}  |
  +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

  It occurs as follows:
  1) The network cache is initialized from the state file with 'subnet empty' (!)
     (_populate_networks_cache()).
  2) A pending port-create-end RPC event is handled 'before the first sync_state' (!).
  3) reload_allocations is called and does 'disable' because the subnet is empty;
     then a release_dhcp_port RPC is issued and the previous DHCP port is deleted.
  4) A new DHCP port is created when the first sync_state is executed.
  Note that the network cache becomes normal at the first sync_state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1420042/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404268] Re: Missing nova context during spawn

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404268

Title:
  Missing nova context during spawn

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  The nova request context tracks a security context and other request
  information, including a request id that is added to log entries
  associated with this request.  The request context is passed around
  explicitly in many chunks of OpenStack code.  But nova/context.py also
  stores the RequestContext in the thread's local store (when the
  RequestContext is created, or when it is explicitly stored through a
  call to update_store).  The nova logger will use an explicitly passed
  context, or look for it in the local.store.

  A recent change in community openstack code has resulted in the
  context not being set for many nova log messages during spawn:

  https://bugs.launchpad.net/neutron/+bug/1372049

  This change spawns a new thread in nova/compute/manager.py
  build_and_run_instance, and the spawn runs in that new thread.  When
  the original RPC thread created the nova RequestContext, the context
  was set in the thread's local store.  But the context does not get set
  in the newly-spawned thread.

  Example of log messages with missing req id during spawn:

  014-12-13 22:20:30.987 18219 DEBUG nova.openstack.common.lockutils [-] 
Acquired semaphore "87c7fc32-042e-40b7-af46-44bff50fa1b4" lock 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:229
  2014-12-13 22:20:30.987 18219 DEBUG nova.openstack.common.lockutils [-] Got 
semaphore / lock "_locked_do_build_and_run_instance" inner 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:271
  2014-12-13 22:20:31.012 18219 AUDIT nova.compute.manager 
[req-bd959d69-86de-4eea-ae1d-a066843ca317 None] [instance: 
87c7fc32-042e-40b7-af46-44bff50fa1b4] Starting instance...
  ...
  2014-12-13 22:20:31.280 18219 DEBUG nova.openstack.common.lockutils [-] 
Created new semaphore "compute_resources" internal_lock 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:206
  2014-12-13 22:20:31.281 18219 DEBUG nova.openstack.common.lockutils [-] 
Acquired semaphore "compute_resources" lock 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:229
  2014-12-13 22:20:31.282 18219 DEBUG nova.openstack.common.lockutils [-] Got 
semaphore / lock "instance_claim" inner 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:271
  2014-12-13 22:20:31.284 18219 DEBUG nova.compute.resource_tracker [-] Memory overhead for 512 MB instance; 0 MB instance_claim /usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py:127
  2014-12-13 22:20:31.290 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Attempting claim: memory 512 MB, disk 10 GB
  2014-12-13 22:20:31.292 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Total memory: 131072 MB, used: 12288.00 MB
  2014-12-13 22:20:31.296 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] memory limit not specified, defaulting to unlimited
  2014-12-13 22:20:31.300 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Total disk: 2097152 GB, used: 60.00 GB
  2014-12-13 22:20:31.304 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] disk limit not specified, defaulting to unlimited
  ...

  2014-12-13 22:20:32.850 18219 DEBUG nova.network.neutronv2.api [-]
  [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4]
  get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.6/site-
  packages/nova/network/neutronv2/api.py:611

  Proposed patch:

  one new line of code at the beginning of nova/compute/manager.py
  _do_build_and_run_instance:

  context.update_store()
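
  In context, that looks roughly like the following (simplified sketch; the
  rest of the method body is elided):

  def _do_build_and_run_instance(self, context, instance, *args, **kwargs):
      context.update_store()  # re-register the context in this thread's store
      # ... original build-and-run logic continues unchanged ...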

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416496] Re: nova.conf - configuration options icehouse compat flag is not right

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416496

Title:
  nova.conf - configuration options icehouse compat flag is not right

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New
Status in openstack-manuals:
  Fix Released

Bug description:
  Table 2.57. Description of upgrade levels configuration options has
  the wrong information for setting icehouse/juno compat flags during
  upgrades.

  Specifically this section:

  compute = None (StrOpt) Set a version cap for messages sent to compute
  services. If you plan to do a live upgrade from havana to icehouse, you 
should set this option to "icehouse-compat"
  before beginning the live upgrade procedure

  This should be compute = <release>, for example compute = icehouse
  when doing an upgrade from I to J.

  ---
  Built: 2015-01-29T19:27:05 00:00
  git SHA: 3e80c2419cfe03f86057f3229044cd0d495e0295
  URL: 
http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html
  source File: 
file:/home/jenkins/workspace/openstack-manuals-tox-doc-publishdocs/doc/config-reference/compute/section_compute-options-reference.xml
  xml:id: list-of-compute-config-options

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1416496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300265] Re: some tests call assert_called_once() on a mock; this function doesn't exist and gets auto-mocked, falsely passing tests

2015-07-23 Thread Alan Pevec
** Also affects: sahara/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1300265

Title:
  some tests call assert_called_once() on a mock; this function doesn't
  exist and gets auto-mocked, falsely passing tests

Status in neutron:
  Fix Released
Status in Sahara:
  Fix Committed
Status in Sahara kilo series:
  New

Bug description:
  neutron/tests/unit/agent/linux/test_async_process.py:
spawn.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py:
func.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py:
mock_start.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py:
mock_kill_event.send.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py:
mock_kill_process.assert_called_once(pid)
  neutron/tests/unit/test_dhcp_agent.py:
log.error.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_post_mortem_debug.py:
mock_post_mortem.assert_called_once()
  neutron/tests/unit/test_linux_interface.py:
log.assert_called_once()
  neutron/tests/unit/test_l3_agent.py:self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py:self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py:self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py:self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py:self.send_arp.assert_called_once()
  neutron/tests/unit/cisco/test_nexus_plugin.py:
mock_db.assert_called_once()
  neutron/tests/unit/linuxbridge/test_lb_neutron_agent.py:
exec_fn.assert_called_once()
  
neutron/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py:
mock_driver_update_firewall.assert_called_once(
  
neutron/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py:
mock_driver_delete_firewall.assert_called_once(
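
  A minimal demonstration of the pitfall with the mock releases in use at the
  time (mock 1.x on Python 2.7); newer mock versions behave differently:

  import mock

  m = mock.Mock()
  m.assert_called_once()           # silently passes: it is itself just a mock
  try:
      m.assert_called_once_with()  # the real assertion fails as it should
  except AssertionError as exc:
      print("caught: %s" % exc)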

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1300265/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382064] Re: Failure to allocate tunnel id when creating networks concurrently

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382064

Title:
  Failure to allocate tunnel id when creating networks concurrently

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  When multiple networks are created concurrently, the following trace
  is observed:

  WARNING neutron.plugins.ml2.drivers.helpers 
[req-34103ce8-b6d0-459b-9707-a24e369cf9de None] Allocate gre segment from pool 
failed after 10 failed attempts
  DEBUG neutron.context [req-2995f877-e3e6-4b32-bdae-da6295e492a1 None] 
Arguments dropped when creating context: {u'project_name': None, u'tenant': 
None} __init__ /usr/lib/python2.7/dist-packages/neutron/context.py:83
  DEBUG neutron.plugins.ml2.drivers.helpers 
[req-3541998d-44df-468f-b65b-36504e893dfb None] Allocate gre segment from pool, 
attempt 1 failed with segment {'gre_id': 300L} 
allocate_partially_specified_segment 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/helpers.py:138
  DEBUG neutron.context [req-6dcfb91d-2c5b-4e4f-9d81-55ba381ad232 None] 
Arguments dropped when creating context: {u'project_name': None, u'tenant': 
None} __init__ /usr/lib/python2.7/dist-packages/neutron/context.py:83
  ERROR neutron.api.v2.resource [req-34103ce8-b6d0-459b-9707-a24e369cf9de None] 
create failed
  TRACE neutron.api.v2.resource Traceback (most recent call last):
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in 
resource
  TRACE neutron.api.v2.resource result = method(request=request, **args)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 448, in create
  TRACE neutron.api.v2.resource obj = obj_creator(request.context, **kwargs)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 497, in 
create_network
  TRACE neutron.api.v2.resource tenant_id)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 160, 
in create_network_segments
  TRACE neutron.api.v2.resource segment = self.allocate_tenant_segment(session)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 189, 
in allocate_tenant_segment
  TRACE neutron.api.v2.resource segment = 
driver.obj.allocate_tenant_segment(session)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/type_tunnel.py", 
line 115, in allocate_tenant_segment
  TRACE neutron.api.v2.resource alloc = 
self.allocate_partially_specified_segment(session)
  TRACE neutron.api.v2.resource File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/helpers.py", line 
143, in allocate_partially_specified_segment
  TRACE neutron.api.v2.resource raise 
exc.NoNetworkFoundInMaximumAllowedAttempts()
  TRACE neutron.api.v2.resource NoNetworkFoundInMaximumAllowedAttempts: Unable 
to create the network. No available network found in maximum allowed attempts.
  TRACE neutron.api.v2.resource

  Additional conditions: multiserver deployment and mysql.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249065] Re: Nova throws 400 when attempting to add floating ip (instance.info_cache.network_info is empty)

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249065

Title:
  Nova throws 400 when attempting to add floating ip
  (instance.info_cache.network_info is empty)

Status in OpenStack Compute (nova):
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  Ran into this problem in check-tempest-devstack-vm-neutron

   Traceback (most recent call last):
 File "tempest/scenario/test_snapshot_pattern.py", line 74, in 
test_snapshot_pattern
   self._set_floating_ip_to_server(server, fip_for_server)
 File "tempest/scenario/test_snapshot_pattern.py", line 62, in 
_set_floating_ip_to_server
   server.add_floating_ip(floating_ip)
 File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 
108, in add_floating_ip
   self.manager.add_floating_ip(self, address, fixed_address)
 File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 
465, in add_floating_ip
   self._action('addFloatingIp', server, {'address': address})
 File "/opt/stack/new/python-novaclient/novaclient/v1_1/servers.py", line 
993, in _action
   return self.api.client.post(url, body=body)
 File "/opt/stack/new/python-novaclient/novaclient/client.py", line 234, in 
post
   return self._cs_request(url, 'POST', **kwargs)
 File "/opt/stack/new/python-novaclient/novaclient/client.py", line 213, in 
_cs_request
   **kwargs)
 File "/opt/stack/new/python-novaclient/novaclient/client.py", line 195, in 
_time_request
   resp, body = self.request(url, method, **kwargs)
 File "/opt/stack/new/python-novaclient/novaclient/client.py", line 189, in 
request
   raise exceptions.from_response(resp, body, url, method)
   BadRequest: No nw_info cache associated with instance (HTTP 400) 
(Request-ID: req-9fea0363-4532-4ad1-af89-114cff68bd89)

  Full console logs here: http://logs.openstack.org/27/55327/3/check
  /check-tempest-devstack-vm-neutron/8d26d3c/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1249065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424096] Re: DVR routers attached to shared networks aren't being unscheduled from a compute node after deleting the VMs using the shared net

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424096

Title:
  DVR routers attached to shared networks aren't being unscheduled from
  a compute node after deleting the VMs using the shared net

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  As the administrator, a DVR router is created and attached to a shared
  network. The administrator also created the shared network.

  As a non-admin tenant, a VM is created with the port using the shared
  network.  The only VM using the shared network is scheduled to a
  compute node.  When the VM is deleted, it is expected the qrouter
  namespace of the DVR router is removed.  But it is not.  This doesn't
  happen with routers attached to networks that are not shared.

  The environment consists of 1 controller node and 1 compute node.

  Routers having the problem are created by the administrator attached
  to shared networks that are also owned by the admin:

  As the administrator, do the following commands on a setup having 1
  compute node and 1 controller node:

  1. neutron net-create shared-net -- --shared True
 Shared net's uuid is f9ccf1f9-aea9-4f72-accc-8a03170fa242.

  2. neutron subnet-create --name shared-subnet shared-net 10.0.0.0/16

  3. neutron router-create shared-router
  Router's UUID is ab78428a-9653-4a7b-98ec-22e1f956f44f.

  4. neutron router-interface-add shared-router shared-subnet
  5. neutron router-gateway-set  shared-router public

  
  As a non-admin tenant (tenant-id: 95cd5d9c61cf45c7bdd4e9ee52659d13), boot a 
VM using the shared-net network:

  1. neutron net-show shared-net
  +-----------------+--------------------------------------+
  | Field           | Value                                |
  +-----------------+--------------------------------------+
  | admin_state_up  | True                                 |
  | id              | f9ccf1f9-aea9-4f72-accc-8a03170fa242 |
  | name            | shared-net                           |
  | router:external | False                                |
  | shared          | True                                 |
  | status          | ACTIVE                               |
  | subnets         | c4fd4279-81a7-40d6-a80b-01e8238c1c2d |
  | tenant_id       | 2a54d6758fab47f4a2508b06284b5104     |
  +-----------------+--------------------------------------+

  At this point, there are no VMs using the shared-net network running
  in the environment.

  2. Boot a VM that uses the shared-net network: nova boot ... --nic 
net-id=f9ccf1f9-aea9-4f72-accc-8a03170fa242 ... vm_sharednet
  3. Assign a floating IP to the VM "vm_sharednet"
  4. Delete "vm_sharednet". On the compute node, the qrouter namespace of the 
shared router (qrouter-ab78428a-9653-4a7b-98ec-22e1f956f44f) is left behind

  stack@DVR-CN2:~/DEVSTACK/manage$ ip netns
  qrouter-ab78428a-9653-4a7b-98ec-22e1f956f44f
   ...

  
  This is consistent with the output of "neutron l3-agent-list-hosting-router" 
command.  It shows the router is still being hosted on the compute node.

  
  $ neutron l3-agent-list-hosting-router ab78428a-9653-4a7b-98ec-22e1f956f44f
  
  +--------------------------------------+----------------+----------------+-------+
  | id                                   | host           | admin_state_up | alive |
  +--------------------------------------+----------------+----------------+-------+
  | 42f12eb0-51bc-4861-928a-48de51ba7ae1 | DVR-Controller | True           | :-)   |
  | ff869dc5-d39c-464d-86f3-112b55ec1c08 | DVR-CN2        | True           | :-)   |
  +--------------------------------------+----------------+----------------+-------+

  Running the "neutron l3-agent-router-remove" command removes the
  qrouter namespace from the compute node:

  $ neutron l3-agent-router-remove ff869dc5-d39c-464d-86f3-112b55ec1c08 
ab78428a-9653-4a7b-98ec-22e1f956f44f
  Removed router ab78428a-9653-4a7b-98ec-22e1f956f44f from L3 agent

  stack@DVR-CN2:~/DEVSTACK/manage$ ip netns
  stack@DVR-CN2:~/DEVSTACK/manage$

  This is a workaround to get the qrouter namespace deleted from the
  compute node. The L3-agent scheduler should have removed the router
  from the compute node when the VM is deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424096/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426324] Re: VFS blkid calls need to handle 0 or 2 return codes

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1426324

Title:
  VFS blkid calls need to handle 0 or 2 return codes

Status in ubuntu-cloud-archive:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  kilo-2 introduced blkid calls for fs detection on all new instances; if
  the specified key is not found on the block device, blkid will return
  2 instead of 0 - nova needs to deal with this:

  2015-02-27 10:48:51.270 3062 INFO nova.virt.disk.vfs.api [-] Unable to import 
guestfs, falling back to VFSLocalFS
  2015-02-27 10:48:51.476 3062 ERROR nova.compute.manager [-] [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] Instance failed to spawn
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] Traceback (most recent call last):
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2328, in 
_build_resources
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] yield resources
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2198, in 
_build_and_run_instance
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] flavor=flavor)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2329, in 
spawn
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] admin_pass=admin_password)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2728, in 
_create_image
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] project_id=instance['project_id'])
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 230, 
in cache
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] *args, **kwargs)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 507, 
in create_image
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] copy_qcow2_image(base, self.path, 
size)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
"/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 431, in 
inner
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] return f(*args, **kwargs)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 473, 
in copy_qcow2_image
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] disk.extend(target, size, 
use_cow=True)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py", line 183, in extend
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] if not is_image_extendable(image, 
use_cow):
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py", line 235, in 
is_image_extendable
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] if fs.get_image_fs() in 
SUPPORTED_FS_TO_EXTEND:
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/disk/vfs/localfs.py", line 167, in 
get_image_fs
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] run_as_root=True)
  2015-02-27 10:48:51.476 306
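
  An illustrative sketch of tolerating blkid's "not found" exit status
  (subprocess-based, not the actual Nova patch, which uses Nova's own
  execute() helper in nova/virt/disk/vfs/localfs.py per the trace above):

  import subprocess


  def get_image_fs(device):
      proc = subprocess.Popen(["blkid", "-o", "value", "-s", "TYPE", device],
                              stdout=subprocess.PIPE)
      out, _ = proc.communicate()
      if proc.returncode not in (0, 2):  # 2 == tag not found, not an error
          raise RuntimeError("blkid failed with status %d" % proc.returncode)
      return out.strip()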

[Yahoo-eng-team] [Bug 1422504] Re: floating ip delete deadlock

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1422504

Title:
  floating ip delete deadlock

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  rdo juno:

  2015-02-16 13:54:11.772 3612 ERROR neutron.api.v2.resource 
[req-5c6e13d3-56d6-476b-a961-e767aea637e5 None] delete failed
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 87, in 
resource
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 476, in delete
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py", line 183, in 
delete_floatingip
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
self).delete_floatingip(context, id)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/l3_db.py", line 1178, in 
delete_floatingip
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource router_id = 
self._delete_floatingip(context, id)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/db/l3_db.py", line 840, in 
_delete_floatingip
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
l3_port_check=False)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 984, in 
delete_port
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource port_db, 
binding = db.get_locked_port_and_binding(session, id)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/db.py", line 141, in 
get_locked_port_and_binding
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
with_lockmode('update').
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2369, in one
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource ret = 
list(self)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2411, in 
__iter__
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
self.session._autoflush()
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1198, in 
_autoflush
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource self.flush()
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1919, in 
flush
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
self._flush(objects)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2037, in 
_flush
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
transaction.rollback(_capture_exception=True)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, 
in __exit__
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
compat.reraise(exc_type, exc_value, exc_tb)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 2001, in 
_flush
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
flush_context.execute()
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 372, in 
execute
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
rec.execute(self)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 555, in 
execute
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource uow
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 117, 
in delete_obj
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource 
cached_connections, mapper, table, delete)
  2015-02-16 13:54:11.772 3612 TRACE neutron.api.v2.resource   Fi

[Yahoo-eng-team] [Bug 1437855] Re: Floating IPs should be associated with the first fixed IPv4 address

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1437855

Title:
  Floating IPs should be associated with the first fixed IPv4 address

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  If a port attached to an instance has multiple fixed IPs and a
  floating IP is associated without specifying a fixed ip to associate,
  the behavior in Neutron is to reject the associate request. The
  behavior in Nova in the absence of a specified fixed ip, however, is
  to pick the first one from the list of fixed ips on the port.

  This is a problem if an IPv6 address is the first on the port because
  the floating IP will be NAT'ed to the IPv6 fixed address, which is not
  supported. Any attempts to reach the instance through its floating
  address will fail. This causes failures in certain scenario tests that
  use the Nova floating IP API when dual-stack IPv4+IPv6 is enabled,
  such as test_baremetal_basic_ops  in check-tempest-dsvm-ironic-pxe_ssh
  in https://review.openstack.org/#/c/168063
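
  A minimal sketch (not Nova's actual code) of the selection behavior the bug
  asks for: when no fixed IP is specified, pick the first IPv4 address on the
  port rather than the first address of any version. Names are illustrative.

    import netaddr

    def pick_fixed_ip_for_floating(fixed_ips):
        """Return the first IPv4 fixed IP, or None if the port has none."""
        for fixed_ip in fixed_ips:
            if netaddr.IPAddress(fixed_ip['ip_address']).version == 4:
                return fixed_ip['ip_address']
        return None

    # Example: an IPv6 address listed first must not be chosen for NAT.
    print(pick_fixed_ip_for_floating([
        {'ip_address': '2001:db8::5'},
        {'ip_address': '10.0.0.5'},
    ]))  # -> 10.0.0.5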

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1437855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430042] Re: Virtual Machine could not be evacuated because virtual interface creation failed

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430042

Title:
  Virtual Machine could not be evacuated because virtual interface
  creation failed

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  I believe this issue is related to Question 257358
  (https://answers.launchpad.net/ubuntu/+source/nova/+question/257358).

  On the source host we see the successful vif plug:

  2015-03-09 01:22:12.363 629 DEBUG neutron.plugins.ml2.rpc 
[req-5de70341-d64b-4a3a-bc05-54eb2802f25d None] Device 
14ac5edd-269f-4808-9a34-c4cc93e9ab70 up at agent ovs-agent-ipx 
update_device_up /usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:156
  2015-03-09 01:22:12.392 629 DEBUG oslo_concurrency.lockutils 
[req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] Acquired semaphore "db-access" lock 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:377
  2015-03-09 01:22:12.436 629 DEBUG oslo_concurrency.lockutils 
[req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] Releasing semaphore "db-access" 
lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:390
  2015-03-09 01:22:12.437 629 DEBUG oslo_messaging._drivers.amqp 
[req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] UNIQUE_ID is 
740634ca8c7a49418a39c429669f2f27. _add_unique_id 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:224
  2015-03-09 01:22:12.439 629 DEBUG oslo_messaging._drivers.amqp 
[req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] UNIQUE_ID is 
3264e8d7dd7c492d9aa17d3e9892b1fc. _add_unique_id 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:224
  2015-03-09 01:22:14.436 629 DEBUG neutron.notifiers.nova [-] Sending events: 
[{'status': 'completed', 'tag': u'14ac5edd-269f-4808-9a34-c4cc93e9ab70', 
'name': 'network-vif-plugged', 'server_uuid': 
u'2790be4a-5285-46aa-8ee2-c68f5b936c1d'}] send_events 
/usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:237

  Later, the destination host of the evacuation attempts to plug the vif
  but can't:

  2015-03-09 02:15:41.441 629 DEBUG neutron.plugins.ml2.rpc 
[req-5ea6625c-a60c-48fb-9264-e2a5a3ed0d26 None] Device 
14ac5edd-269f-4808-9a34-c4cc93e9ab70 up at agent ovs-agent-ipxx 
update_device_up /usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:156
  2015-03-09 02:15:41.485 629 DEBUG neutron.plugins.ml2.rpc 
[req-5ea6625c-a60c-48fb-9264-e2a5a3ed0d26 None] Device 
14ac5edd-269f-4808-9a34-c4cc93e9ab70 not bound to the agent host ipx 
update_device_up /usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:163

  The cause of the problem seems to be that the neutron port does not
  have its binding:host_id properly updated on evacuation; the answer to
  question 257358 looks like the fix.
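
  For illustration only (assuming python-neutronclient, not Nova's actual
  fix), updating the port binding to the evacuation destination would look
  roughly like this; credentials and host names are made up:

    from neutronclient.v2_0 import client as neutron_client

    def rebind_port_to_host(neutron, port_id, dest_host):
        # Point the port binding at the evacuation destination so the
        # destination OVS agent is allowed to wire up the VIF.
        neutron.update_port(port_id, {'port': {'binding:host_id': dest_host}})

    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin',
                                    auth_url='http://controller:5000/v2.0')
    rebind_port_to_host(neutron, '14ac5edd-269f-4808-9a34-c4cc93e9ab70', 'ipxx')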

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430042/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438638] Re: Hyper-V: Compute Driver doesn't start if there are instances with no VM Notes

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438638

Title:
  Hyper-V: Compute Driver doesn't start if there are instances with no
  VM Notes

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  The Nova Hyper-V Compute Driver cannot start if there are instances
  with Notes = None. This can happen when users manually alter the VM
  Notes, or when there are VMs that were created by users directly on the
  hypervisor rather than through Nova.

  Logs: http://paste.openstack.org/show/197681/
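
  A minimal sketch of the kind of defensive handling involved (not the actual
  Hyper-V driver code): treat VMs whose Notes field is missing as not managed
  by Nova instead of raising. The helper name is hypothetical.

    import uuid

    def instance_uuid_from_notes(vm_notes):
        # vm_notes can be None for VMs created or edited outside of Nova.
        if not vm_notes:
            return None
        candidate = (vm_notes[0] if isinstance(vm_notes, list) else vm_notes).strip()
        try:
            return str(uuid.UUID(candidate))
        except ValueError:
            return None  # not a Nova-managed VM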

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1438638/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438819] Re: Router gets address allocation from all new gw subnets

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1438819

Title:
  Router gets address allocation from all new gw subnets

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  When a new subnet is created on an external network, all existing
  routers with gateways on the network will get a new address allocated
  from it.  This could be pretty bad for IPv4 networks where the
  addresses are scarce and therefore valuable.  In some cases, the
  entire new subnet could be consumed by router gateway ports alone.

  Adding an IP address replaces the default route on a Neutron router.
  In Kilo, Neutron now automatically allocates an IP address for the WAN
  interface on Neutron routers when a subnet on the external network is
  created. Previously, there was a check to allow a maximum of one IP
  address on a Neutron router gateway port. This check, however, was
  removed, and this patch replaces that check and allows one IPv6
  address in addition to the IPv4 address to support dual-stack.

  The combination of the automatic update of a router gateway port upon
  creation of a subnet and the absence of a check on the number of fixed
  IPs causes a change in behavior to that of Neutron in the Juno
  release.

  An issue is that creation of a subnet with a gateway IP on the
  external network replaces all default routes of Neutron routers on
  that network. This is not the behavior operators expect based on
  previous releases, and is most likely not the behavior they want - and
  as a result it could cause loss of external connectivity to tenants
  based on the network configuration.

  We need to validate a router's gateway port during creation and update
  of a router gateway port by ensuring it has no more than one v4 fixed
  IP and one v6 fixed IP.
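
  A minimal sketch of the validation described above (illustrative, not the
  actual Neutron patch):

    import netaddr

    def validate_gw_port_fixed_ips(fixed_ips):
        """Allow at most one IPv4 and one IPv6 fixed IP on a gateway port."""
        counts = {4: 0, 6: 0}
        for fixed_ip in fixed_ips:
            counts[netaddr.IPAddress(fixed_ip['ip_address']).version] += 1
        if counts[4] > 1 or counts[6] > 1:
            raise ValueError('Router gateway ports may have at most one IPv4 '
                             'and one IPv6 fixed IP')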

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1438819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434429] Re: libvirt: _compare_cpu doesn't consider NotSupportedError

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1434429

Title:
  libvirt: _compare_cpu doesn't consider NotSupportedError

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  Issue
  =

  The libvirt driver method "_compare_cpu" doesn't consider that the
  underlying libvirt function could throw a NotSupportedError (like 
  baselineCPU call in "host.py" module [1])

  
  Steps to reproduce
  ==

  * Create setup with at least 2 compute nodes
  * Create cinder volume with bootable image
  * Launch instance from that volume
  * Start live migration of instance to another host

  Expected behavior
  =

  If the target host has the same CPU architecture as the source host,
  the live migration should be triggered.

  Actual behavior
  ===

  The live migration gets aborted and rolled back because all libvirt
  errors get treated equally.

  Logs & Env.
  ===

  section "libvirt" in "/etc/nova/nova.conf" in both nodes:

  [libvirt]
  live_migration_flag = 
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE, 
VIR_MIGRATE_TUNNELLED
  disk_cachemodes = block=none
  vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
  inject_partition = -2
  live_migration_uri = qemu+tcp://stack@%s/system
  use_usb_tablet = False
  cpu_mode = none
  virt_type = kvm

  
  Nova version
  

  /opt/stack/nova$ git log --oneline -n5
  90ee915 Merge "Add api microvesion unit test case for wsgi.action"
  7885b74 Merge "Remove db layer hard-code permission checks for flavor-manager"
  416f310 Merge "Remove db layer hard-code permission checks for 
migrations_get*"
  ecb306b Merge "Remove db layer hard-code permission checks for 
migration_create/update"
  6efc8ad Merge "libvirt: don't allow to resize down the default ephemeral disk"

  
  References
  ==

  [1] baselineCPU call to libvirt catches NotSupportedError; 
  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/host.py#L753
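
  A rough sketch of the idea (assumed, not the actual Nova change): treat
  "not supported" from libvirt as "skip the comparison" rather than as a
  fatal error that aborts the migration.

    import libvirt

    def compare_cpu_safely(conn, cpu_xml):
        try:
            return conn.compareCPU(cpu_xml, 0)
        except libvirt.libvirtError as ex:
            if ex.get_error_code() == libvirt.VIR_ERR_NO_SUPPORT:
                # The hypervisor cannot compare CPUs; do not abort the
                # live migration because of that.
                return libvirt.VIR_CPU_COMPARE_IDENTICAL
            raise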

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1434429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442357] Re: Brocade MLX plug-ins config options for switch need a group title

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1442357

Title:
  Brocade MLX plug-ins config options for switch need a group title

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  In order to create config option reference docs, changes need to be
  made to the INI file and config registrations for Brocade MLX ML2 and
  L3 plug-ins to include a block name for the switch options.  Currently
  there is no block name as the block names are dynamic based on the
  switch_names value.
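
  For illustration (assuming oslo.config and a made-up group name, since the
  real plug-in derives block names from switch_names), giving the switch
  options a named, titled group would look like:

    from oslo_config import cfg

    switch_group = cfg.OptGroup(name='ml2_brocade_mlx_example',
                                title='Brocade MLX switch options')
    switch_opts = [
        cfg.StrOpt('address', help='Switch management IP address'),
        cfg.StrOpt('username', help='Switch login user name'),
    ]

    cfg.CONF.register_group(switch_group)
    cfg.CONF.register_opts(switch_opts, group=switch_group)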

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1442357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442543] Re: oslo_config.cfg.ConfigFilesPermissionDeniedError: Failed to open some config files: /etc/neutron/neutron.conf

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1442543

Title:
  oslo_config.cfg.ConfigFilesPermissionDeniedError: Failed to open some
  config files: /etc/neutron/neutron.conf

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  When running lbaas unit tests on a system where neutron.conf is
  installed into /etc/neutron, and if the file does not have read
  permissions for the user running unit tests, I get the following
  error:

  {0}
  neutron_lbaas.tests.unit.services.loadbalancer.agent.test_agent.TestLbaasService.test_main
  [0.113960s] ... FAILED

  Captured traceback:
  ~~~
  Traceback (most recent call last):
    File "neutron_lbaas/tests/unit/services/loadbalancer/agent/test_agent.py", line 45, in test_main
      agent.main()
    File "neutron_lbaas/services/loadbalancer/agent/agent.py", line 58, in main
      common_config.init(sys.argv[1:])
    File "/home/ihrachyshka/proj/openstack/neutron-lbaas/.tox/py27/src/neutron/neutron/common/config.py", line 185, in init
      **kwargs)
    File "/home/ihrachyshka/proj/openstack/neutron-lbaas/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py", line 1856, in __call__
      self._namespace._files_permission_denied)
  oslo_config.cfg.ConfigFilesPermissionDeniedError: Failed to open some
  config files: /etc/neutron/neutron.conf

  This is because oslo.config tries to autodiscover config files, and
  read them, in case they exist. Unit tests should be isolated from
  those files.
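
  One way to isolate tests from installed config files (a sketch, not the
  merged fix) is to pass an explicit empty list of config files when
  initializing oslo.config in the test environment:

    from oslo_config import cfg

    # Prevent oslo.config from auto-discovering /etc/neutron/neutron.conf.
    cfg.CONF(args=[], project='neutron', default_config_files=[])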

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1442543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1439472] Re: OVS doesn't restart properly when Exception occurred

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1439472

Title:
  OVS doesn't restart properly when Exception occurred

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  We would like this fix to land in Kilo; if the timing does not allow that,
  we would like it to be merged into stable/kilo.
  ---
  [The problem]
  If an exception (such as DBConnectionError) occurs while OVS is restarting,
  the agent reports that everything is OK, but the flows for created networks
  in br-tun are NOT recovered. They are only restored after the user manually
  restarts OVS again.
  ---
  [action and log]
  [q-agent.log]
  [[[create network and subnet and add it to DHCP agent]]]
  [[[I turned off MySQL]]]
  [[[But nothing happened]]]
  [[[Then I restarted OVS]]]
  [[[Here it goes...]]]
  ...
  ...
  ...
  2015-04-01 22:06:48.237 DEBUG 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] Unable to sync tunnel IP 
192.168.122.96: Remote error: DBConnectionError (OperationalError) (2003, 
"Can't connect to MySQL server on '127.0.0.1' (111)") None None
  ...
  ...
  ...
  2015-04-01 22:06:56.060 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] Agent tunnel out of sync 
with plugin!
  2015-04-01 22:06:56.061 DEBUG oslo_messaging._drivers.amqpdriver 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] MSG_ID is 
705639bc86ae44f4b4cc28715ce981e8 _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:311
  2015-04-01 22:06:56.062 DEBUG oslo_messaging._drivers.amqp 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] UNIQUE_ID is 
41f997f166c04cff986ff08eb298b3eb. _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:258
  2015-04-01 22:06:56.085 DEBUG 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] Unable to sync tunnel IP 
192.168.122.96: Remote error: DBConnectionError (OperationalError) (2003, 
"Can't connect to MySQL server on '127.0.0.1' (111)") None None
  ...
  ...
  ...
  2015-04-01 22:06:56.111 DEBUG oslo_messaging._drivers.amqpdriver 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] MSG_ID is 
6f012243c4844978a7b8181bedcafcc9 _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:311
  2015-04-01 22:06:56.112 DEBUG oslo_messaging._drivers.amqp 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] UNIQUE_ID is 
04ed1ccb78bb4ab495b9ebf40c2338f5. _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:258
  2015-04-01 22:06:56.138 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-352cd26d-7278-483e-a873-7558d0f37acd None None] Error while processing VIF 
ports
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", 
line 1522, in rpc_loop
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", 
line 1260, in process_network_ports
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
port_info.get('updated', set()))
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 360, in 
setup_port_filters
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.prepare_devices_filter(new_devices)
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 219, in 
decorated_function
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent *args, **kwargs)
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 229, in 
prepare_devices_filter
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.context, 
list(device_ids))
  2015-04-01 22:06:56.138 3698 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 116, in 
security_group_info_for_devices
  2015-

[Yahoo-eng-team] [Bug 1440699] Re: VMs not receiving Router Advts in an HA network.

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440699

Title:
  VMs not receiving Router Advts in an HA network.

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Observations with the latest Neutron code.
  1. Create a network and an IPv6 SLAAC subnet.
  2. Create an HA router.
  3. Associate the IPv6 subnet to the HA router. 
  4. Spawn a VM and check if the VM is able to get a SLAAC address. 

  You can see that the VM has only the LLA and not the GUA derived from
  Router Advts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1440699/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443186] Re: rebooted instances are shutdown by libvirt lifecycle event handling

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1443186

Title:
  rebooted instances are shutdown by libvirt lifecycle event handling

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  This is a continuation of bug 1293480 (which created bug 1433049).
  Those were reported against xen domains with the libvirt driver but we
  have a recreate with CONF.libvirt.virt_type=kvm, see the attached logs
  and reference the instance with uuid
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78.

  In this case, we're running a stress test of soft rebooting 30 active
  instances at once.  Because of a delay in the libvirt lifecycle event
  handling, they are all shut down after the reboot operation is complete
  and the instances go from ACTIVE to SHUTDOWN.

  This was reported to me against Icehouse code but the recreate is
  against Juno code with patch:

  https://review.openstack.org/#/c/169782/

  For better logging.

  Snippets from the log:

  2015-04-10 21:02:38.234 11195 AUDIT nova.compute.manager [req-
  b24d4f8d-4a10-44c8-81d7-f79f27e3a3e7 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Rebooting instance

  2015-04-10 21:03:47.703 11195 DEBUG nova.compute.manager [req-
  8219e6cf-dce8-44e7-a5c1-bf1879e155b2 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Received event network-vif-
  unplugged-0b2c7633-a5bc-4150-86b2-c8ba58ffa785 external_instance_event
  /usr/lib/python2.6/site-packages/nova/compute/manager.py:6285

  2015-04-10 21:03:49.299 11195 INFO nova.virt.libvirt.driver [req-
  b24d4f8d-4a10-44c8-81d7-f79f27e3a3e7 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance shutdown successfully.

  2015-04-10 21:03:53.251 11195 DEBUG nova.compute.manager [req-
  521a6bdb-172f-4c0c-9bef-855087d7dff0 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Received event network-vif-
  plugged-0b2c7633-a5bc-4150-86b2-c8ba58ffa785 external_instance_event
  /usr/lib/python2.6/site-packages/nova/compute/manager.py:6285

  2015-04-10 21:03:53.259 11195 INFO nova.virt.libvirt.driver [-]
  [instance: 9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance running
  successfully.

  2015-04-10 21:03:53.261 11195 INFO nova.virt.libvirt.driver [req-
  b24d4f8d-4a10-44c8-81d7-f79f27e3a3e7 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance soft rebooted
  successfully.

  **
  At this point we have successfully soft rebooted the instance
  **

  Now we get a lifecycle event from libvirt that the instance is
  stopped; since we're no longer running a task, we assume the hypervisor
  is correct and we call the stop API

  2015-04-10 21:04:01.133 11195 DEBUG nova.virt.driver [-] Emitting event 
 
Stopped> emit_event /usr/lib/python2.6/site-packages/nova/virt/driver.py:1298
  2015-04-10 21:04:01.134 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] VM Stopped (Lifecycle Event)
  2015-04-10 21:04:01.245 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Synchronizing instance power state after 
lifecycle event "Stopped"; current vm_state: active, current task_state: None, 
current DB power_state: 1, VM power_state: 4
  2015-04-10 21:04:01.334 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] During _sync_instance_power_state the DB 
power_state (1) does not match the vm_power_state from the hypervisor (4). 
Updating power_state in the DB to match the hypervisor.
  2015-04-10 21:04:01.463 11195 WARNING nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance shutdown by itself. Calling the 
stop API. Current vm_state: active, current task_state: None, original DB 
power_state: 1, current VM power_state: 4

  **
  now we get a lifecycle event from libvirt that the instance is started, but 
since the instance already has a task_state of 'powering-off' because of the 
previous stop API call from _sync_instance_power_state, we ignore it.
  **

  
  2015-04-10 21:04:02.085 11195 DEBUG nova.virt.driver [-] Emitting event 
 
Started> emit_event /usr/lib/python2.6/site-packages/nova/virt/driver.py:1298
  2015-04-10 21:04:02.086 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] VM Started (Lifecycle Event)
  2015-04-10 21:04:02.190 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Synchronizing instance power state after 
lifecycle event "Started"; current vm_state: active, current task_state: 
powering-off, current DB power_state: 4, VM power_state: 1
  2015-04-10 21:04:02.414 11195 INFO nova.compute.manager [-] [instance: 
9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] During sync_power_state the instance has 
a pending task (powering-off). Skip.
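
  A rough sketch of a mitigation idea (assumed, not Nova's actual patch):
  before acting on a Stopped lifecycle event, wait briefly and re-check the
  current domain state, so an event that merely races with a reboot does not
  trigger the stop API. The callables here are placeholders.

    import time

    def handle_stopped_event(get_power_state, stop_instance, delay=15):
        # The Stopped event may describe a state that is already stale.
        time.sleep(delay)
        if get_power_state() == 'shutdown':
            stop_instance()
        # Otherwise the instance came back (e.g. a reboot); ignore the event.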
 

[Yahoo-eng-team] [Bug 1444112] Re: ML2 security groups only work with agent drivers

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444112

Title:
  ML2 security groups only work with agent drivers

Status in networking-odl:
  In Progress
Status in networking-ovn:
  Confirmed
Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The current ML2 integration with security groups makes a bunch of
  assumptions which don't work for controller based architectures like
  OpenDaylight and OVN. This bug will track the fixing of these issues.

  The main issues include the fact it assumes an agent-based approach
  and will send SG updates via RPC calls to the agents. This isn't true
  for ODL or OVN.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1444112/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444269] Re: OVS-agent: TypeError: unhashable type: 'list'

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444269

Title:
  OVS-agent: TypeError: unhashable type: 'list'

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  recently merged changes [1] [2] introduced a new crash: TypeError:
  unhashable type: 'list'

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIlR5cGVFcnJvcjogdW5oYXNoYWJsZSB0eXBlOiAnbGlzdCdcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNDMyMDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDI5MDc2MTA0NjI3fQ==

  [1] https://review.openstack.org/#/c/171003/
  [2] https://review.openstack.org/#/c/172756/
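
  The failure mode and the obvious fix, shown in isolation (illustrative
  only): a list cannot be used as a set member or dict key, so it has to be
  converted to a tuple (or a scalar) first.

    ports = [['qvoabc123', 1], ['qvodef456', 2]]

    seen = set()
    for port in ports:
        # seen.add(port) would raise: TypeError: unhashable type: 'list'
        seen.add(tuple(port))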

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444269/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444146] Re: Subnet creation from a subnet pool can get wrong ip_version

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444146

Title:
  Subnet creation from a subnet pool can get wrong ip_version

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New
Status in python-neutronclient:
  Fix Committed

Bug description:
  The following command ends up creating a subnet with ip_version set to
  4 even though the pool is an ipv6 pool.

$ neutron subnet-create --subnetpool ext-subnet-pool --prefixlen 64
  network1
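
  A minimal sketch of deriving the address family from the pool instead of
  defaulting to 4 (illustrative, not the actual patch):

    import netaddr

    def ip_version_from_pool(prefixes):
        # e.g. prefixes = ['2001:db8::/48'] for an IPv6 subnet pool
        return netaddr.IPNetwork(prefixes[0]).version

    print(ip_version_from_pool(['2001:db8::/48']))  # -> 6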

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1444146/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446161] Re: Support multiple IPv6 prefixes on internal router ports for an HA Router.

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446161

Title:
  Support multiple IPv6 prefixes on internal router ports for an HA
  Router.

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  As part of BP multiple IPv6 prefixes, we can have multiple IPv6 prefixes on
  router internal ports. Patch, I7d4e8194815e626f1cfa267f77a3f2475fdfa3d1, adds
  the necessary support for a legacy router.

  For an HA router, instead of configuring the addresses on the router internal
  ports we should be updating the keepalived config file and let keepalived
  configure the addresses depending on the state of the router.

  Following are the observations with the current code for an HA router.
  1. IPv6 addresses are configured on the router internal ports (i.e., qr-xxx)
     irrespective of the state of the router. As the same IP is configured on
     multiple ports, you will notice a dadfailed status on the ports.
  2. Keepalived configuration is not updated with the new IPv6 addresses.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445412] Re: performance of plugin_rpc.get_routers is bad

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445412

Title:
  performance of plugin_rpc.get_routers is bad

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The get_routers plugin call that the l3 agent makes is serviced by a
  massive number of SQL queries, which causes the whole process to take on
  the order of hundreds of milliseconds to handle a request for 10
  routers.

  This will be a blanket bug for a series of performance improvements
  that will reduce that time by at least an order of magnitude.
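
  The general direction of such improvements, sketched with SQLAlchemy
  (assumed, not the specific patches in this series): fetch routers and their
  related rows in one query instead of issuing per-router queries. The
  'attached_ports' relationship name is used only for illustration.

    from sqlalchemy.orm import joinedload

    def get_routers(session, router_model, router_ids):
        # Eager-load related port rows with the routers to avoid N+1 queries.
        return (session.query(router_model)
                .options(joinedload(router_model.attached_ports))
                .filter(router_model.id.in_(router_ids))
                .all())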

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1445412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444497] Re: Instance doesn't get an address via DHCP (nova-network) because of issue with live migration

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444497

Title:
  Instance doesn't get an address via DHCP (nova-network) because of
  issue with live migration

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  When an instance is migrated to another compute node, its dhcp lease is not
  removed from the first compute node even after instance termination.
  If a new instance on the first compute node later gets the same IP while the
  stale dhcp lease for that IP remains, then dnsmasq refuses the DHCP request
  for that IP address from the new instance because it has a different MAC.

  Steps to reproduce:
  Scenario:
  1. Create cluster (CentOS, nova-network with Flat-DHCP , Ceph for 
images and volumes)
  2. Add 1 node with controller and ceph OSD roles
  3. Add 2 node with compute and ceph OSD roles
  4. Deploy the cluster

  5. Create a VM
  6. Wait until the VM got IP address via DHCP (in VM console log)
  7. Migrate the VM to another compute node.
  8. Terminate the VM.

  9. Repeat stages from 5 to 8 several times (in my case - 4..6 
times was enough) until a new instance stops receiving IP address via DHCP.
  10. Check dnsmasq-dhcp.log (/var/log/daemon.log on the compute 
node) for messages like :
  =
  2014-11-09T20:28:29.671344+00:00 warning: not using configured address 
10.0.0.2 because it is leased to fa:16:3e:65:70:be

  This means that:
 I. An instance was created on the compute node-1 and got a dhcp lease:
   nova-dhcpbridge.log
  2014-11-09 20:12:03.811 27360 DEBUG nova.dhcpbridge [-] Called 'add' for mac 
'fa:16:3e:65:70:be' with ip '10.0.0.2' main 
/usr/lib/python2.6/site-packages/nova/cmd/dhcpbridge.py:135

II. When the instance was migrating from compute node-1 to node-3, 
'dhcp_release' was not performed on compute node-1, please check the time range 
in the logs : 2014-11-09 20:14:36-37
   Running.log (node-1)
  2014-11-09T20:14:36.647588+00:00 debug: cmd (subprocess): sudo nova-rootwrap 
/etc/nova/rootwrap.conf conntrack -D -r 10.0.0.2
  ### But a command like the following is missing: sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.0.0.2 fa:16:3e:65:70:be

III. On the compute node-3, DHCP lease was added and it was successfully 
removed when the instance was terminated:
   Running.log (node-3)
  2014-11-09T20:15:17.250243+00:00 debug: cmd (subprocess): sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.0.0.2 fa:16:3e:65:70:be

IV. When an another instance got the same address '10.0.0.2' and was 
created on node-1, it didn't get IP address via DHCP:
   Running.log (node-1)
  2014-11-09T20:28:29.671344+00:00 warning: not using configured address 
10.0.0.2 because it is leased to fa:16:3e:65:70:be
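
  For illustration (not the actual Nova change), the missing cleanup on the
  source node corresponds to running dnsmasq's dhcp_release utility for the
  stale lease; Nova normally invokes it through rootwrap rather than directly.

    import subprocess

    def release_stale_lease(bridge, ip_address, mac_address):
        # dhcp_release ships with dnsmasq-utils.
        subprocess.check_call(['dhcp_release', bridge, ip_address, mac_address])

    release_stale_lease('br100', '10.0.0.2', 'fa:16:3e:65:70:be')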

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446583] Re: services no longer reliably stop in stable/kilo

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446583

Title:
  services no longer reliably stop in stable/kilo

Status in Cinder:
  Fix Released
Status in Cinder kilo series:
  Fix Released
Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New
Status in oslo-incubator:
  Fix Released

Bug description:
  In attempting to upgrade the upgrade branch structure to support
  stable/kilo -> master in devstack gate, we found the project could no
  longer pass Grenade testing. The reason is because pkill -g is no
  longer reliably killing off the services:

  http://logs.openstack.org/91/175391/5/gate/gate-grenade-
  dsvm/0ad4a94/logs/grenade.sh.txt.gz#_2015-04-21_03_15_31_436

  It has been seen with keystone-all and cinder-api on this patch
  series:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhlIGZvbGxvd2luZyBzZXJ2aWNlcyBhcmUgc3RpbGwgcnVubmluZ1wiIEFORCBtZXNzYWdlOlwiZGllXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjk2MTU0NTQ2MzB9

  There were a number of changes to the oslo-incubator service.py code
  during kilo, it's unclear at this point which is the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1446583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447242] Re: Use of allowed-address-pairs can allow tenant to cause denial of service in shared networks

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447242

Title:
  Use of allowed-address-pairs can allow tenant to cause denial of
  service in shared networks

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  By assigning the subnet gateway address to a port as an allowed
  address, a user can cause ARP conflicts and deny service to other
  users in the network. This can be exacerbated by the use of arping to
  send gratuitous ARPs and poison the arp cache of instances in the same
  network.

  Steps to reproduce:

  1. Build a VM. In this case, the network was a VLAN type with external=false 
and shared=true. 
  2. Assign the subnet gateway address as a secondary address in the VM
  3. Use the 'port-update' command to add the gateway address as an allowed 
address on the VM port
  4. Use 'arping' from iputils-arping to send gratuitous ARPs as the gateway IP 
from the instance
  5. Watch as the ARP cache is updated on other instances in the network, 
effectively taking them offline.

  This was tested with LinuxBridge/VLAN as a non-admin user, but may
  affect other combinations.

  Possible remedies may include removing the ability to use allowed-
  address-pairs as a non-admin user, or ensuring that the user cannot
  add the gateway_ip of the subnet associated with the port as an
  allowed address. Either of those two remedies may negatively impact
  certain use cases, so at a minimum it may be a good idea to document
  this somewhere.

  If you need more information please reach out to me.
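
  One of the suggested remedies, sketched as a validation step (helper names
  are illustrative, not the actual Neutron code): reject an allowed address
  pair that matches the gateway IP of any subnet on the port's network.

    def validate_allowed_address_pairs(address_pairs, subnets):
        gateway_ips = {s['gateway_ip'] for s in subnets if s.get('gateway_ip')}
        for pair in address_pairs:
            if pair['ip_address'] in gateway_ips:
                raise ValueError('allowed address pair %s matches a subnet '
                                 'gateway IP' % pair['ip_address'])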

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447084] Re: view hypervisor details should be controlled by policy.json

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447084

Title:
  view hypervisor details should be controlled by policy.json

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  When a user with non-admin permissions attempts to view the hypervisor
  details (/v2/2f8728e1c3214d8bb59903ba654ed6c1/os-hypervisors/1) , we
  see the following error :

  2015-04-19 21:34:22.194 23179 ERROR 
nova.api.openstack.compute.contrib.hypervisors 
[req-5caab0db-31aa-4a24-9263-750af6555ef5 
605c378ebded02d6a2deebe138c0ef9d6a0ddf39447297105dcc4eb18c7cc062 
9b0d73e660af434481a0a9b6d6a3bab7 - - -] User does not have admin privileges
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors Traceback (most recent call 
last):
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/contrib/hypervisors.py",
 line 147, in show
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors service = 
self.host_api.service_get_by_compute_host(context, hyp.host)
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 3451, in 
service_get_by_compute_host
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors return 
objects.Service.get_by_compute_host(context, host_name)
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
"/usr/lib/python2.7/site-packages/nova/objects/base.py", line 163, in wrapper
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors result = fn(cls, context, 
*args, **kwargs)
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
"/usr/lib/python2.7/site-packages/nova/objects/service.py", line 151, in 
get_by_compute_host
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors db_service = 
db.service_get_by_compute_host(context, host)
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
"/usr/lib/python2.7/site-packages/nova/db/api.py", line 139, in 
service_get_by_compute_host
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors use_slave=use_slave)
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
"/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 214, in 
wrapper
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors 
nova.context.require_admin_context(args[0])
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
"/usr/lib/python2.7/site-packages/nova/context.py", line 235, in 
require_admin_context
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors raise 
exception.AdminRequired()
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors AdminRequired: User does not 
have admin privileges

  
  This is caused because the
  /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api layer mandates that
  only an admin can perform this operation. This should not be the case. Instead,
  the permissions should be controlled by the rules defined in the nova
  policy.json. This used to work for non-admins until a few days/weeks ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447344] Re: DHCP agent: metadata network broken for DVR

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447344

Title:
  DHCP agent: metadata network broken for DVR

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  When the 'metadata network' feature is enabled, the DHCP at [1] will not 
spawn a metadata proxy for DVR routers.
  This should be fixed.

  [1]
  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/dhcp/agent.py#n357

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448148] Re: Device not found: exceptions in l3 grenade

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1448148

Title:
  Device not found: exceptions in l3 grenade

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Exceptions have been seen in screen-q-vpn.txt.gz at the gate, and they
  have been happening quite often recently:

   2015-04-23 15:55:56.236 ERROR neutron.agent.l3.agent 
[req-5ea4d9d1-66ab-444c-a66f-c48094f3582d None None] Failed to process 
compatible router 'aeb00076-1c9e-431d-973f-ce1123c918a7'
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent Traceback (most 
recent call last):
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/agent/l3/agent.py", line 452, in 
_process_router_update
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent 
self._process_router_if_compatible(router)
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/agent/l3/agent.py", line 404, in 
_process_router_if_compatible
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent 
self._process_added_router(router)
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/agent/l3/agent.py", line 412, in 
_process_added_router
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent ri.process(self)
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/common/utils.py", line 346, in call
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent self.logger(e)
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in 
__exit__
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/common/utils.py", line 343, in call
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent return 
func(*args, **kwargs)
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/agent/l3/router_info.py", line 605, in process
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent 
self._process_internal_ports()
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/agent/l3/router_info.py", line 361, in 
_process_internal_ports
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent 
self.internal_network_added(p)
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/agent/l3/router_info.py", line 312, in 
internal_network_added
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent 
INTERNAL_DEV_PREFIX)
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/agent/l3/router_info.py", line 288, in 
_internal_network_added
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent prefix=prefix)
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/agent/linux/interface.py", line 264, in plug
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent 
ns_dev.link.set_up()
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/agent/linux/ip_lib.py", line 276, in set_up
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent 
self._as_root([], ('set', self.name, 'up'))
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/agent/linux/ip_lib.py", line 222, in _as_root
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent 
use_root_namespace=use_root_namespace)
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/agent/linux/ip_lib.py", line 69, in _as_root
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent 
log_fail_as_error=self.log_fail_as_error)
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/agent/linux/ip_lib.py", line 78, in _execute
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent 
log_fail_as_error=log_fail_as_error)
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent   File 
"/opt/stack/old/neutron/neutron/agent/linux/utils.py", line 137, in execute
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent raise 
RuntimeError(m)
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent RuntimeError: 
  2015-04-23 15:55:56.236 3736 TRACE neutron.agent.l3.agent Command: ['ip', 
'netns', 'exec', u'qrouter-aeb00076-1c9e-431d-973f-ce1123c918a7', 'ip', 'link', 
'set', u'qr-91aaad43-1e', 'up']
  2015-04-23 15:55:56.236 3

[Yahoo-eng-team] [Bug 1447249] Re: Ironic: injected files not passed through to configdrive

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447249

Title:
  Ironic: injected files not passed through to configdrive

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  The ironic driver's code to generate a configdrive does not pass
  injected_files through to the configdrive builder, resulting in
  injected files not being in the resulting configdrive.
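
  A sketch of the intent (assuming Nova's InstanceMetadata and
  ConfigDriveBuilder APIs; not necessarily the exact patch): the injected
  files have to be handed to the metadata object that the configdrive
  builder consumes.

    from nova.api.metadata import base as instance_metadata
    from nova.virt import configdrive

    def build_configdrive(instance, injected_files, network_info, extra_md, path):
        i_meta = instance_metadata.InstanceMetadata(
            instance, content=injected_files,        # <- previously not passed
            extra_md=extra_md, network_info=network_info)
        with configdrive.ConfigDriveBuilder(instance_md=i_meta) as cdb:
            cdb.make_drive(path)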

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447249/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450624] Re: Nova waits for events from neutron on resize-revert that aren't coming

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450624

Title:
  Nova waits for events from neutron on resize-revert that aren't coming

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  On resize-revert, the original host was waiting for plug events from
  neutron before restarting the instance. These aren't sent since we
  don't ever unplug the vifs. Thus, we'll always fail like this:

  
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 134, in _dispatch_and_reply
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 177, in _dispatch
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 123, in _do_dispatch
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/exception.py",
 line 88, in wrapped
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher payload)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/exception.py",
 line 71, in wrapped
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 298, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher pass
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 284, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 348, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 326, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 314, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/

[Yahoo-eng-team] [Bug 1449363] Re: OVS-agent: "invalid IP address" in arp spoofing protection

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449363

Title:
  OVS-agent: "invalid IP address" in arp spoofing protection

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  arp spoofing code tries to install flows with arp_spa=ipv6_address and
  ovs-ofctl correctly complains.

  2015-04-26 00:17:36.844 ERROR neutron.agent.linux.utils 
[req-f516905e-77b4-4975-
  8b8d-5b3669cdda0d None None] 
  Command: ['ovs-ofctl', 'add-flows', 'br-int', '-']
  Exit code: 1
  Stdin: 
hard_timeout=0,idle_timeout=0,priority=2,arp,arp_spa=2003::3,arp_op=0x2,table=24,in_port=197,actions=normal
  Stdout:
  Stderr: ovs-ofctl: -:1: 2003::3: invalid IP address

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW52YWxpZCBJUCBhZGRyZXNzXCIgYW5kIGZpbGVuYW1lOiBcInEtYWd0LmxvZy5nelwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDMwMTk4NDczMjM3fQ==
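
  A minimal standalone sketch of the screening the agent needs (assuming the
  netaddr library is available; hypothetical helper, not the actual agent
  code): ARP only carries IPv4, so IPv6 addresses must be skipped when
  building arp_spa matches.

    import netaddr

    def arp_spa_addresses(addresses):
        # keep only IPv4 entries; an arp_spa match on an IPv6 address is invalid
        return [a for a in addresses if netaddr.IPNetwork(a).version == 4]

    print(arp_spa_addresses(["10.0.0.5", "2003::3"]))  # ['10.0.0.5']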

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451389] Re: Nova gate broke due to failed unit test

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451389

Title:
  Nova gate broke due to failed unit test

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  

  
  ft1.13172: 
nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase.test_ipv6_host_read_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 1201, in patched
  return func(*args, **keywargs)
File "nova/tests/unit/virt/vmwareapi/test_read_write_util.py", line 49, in 
test_ipv6_host_read
  verify=False)
File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 846, in assert_called_once_with
  return self.assert_called_with(*args, **kwargs)
File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py",
 line 835, in assert_called_with
  raise AssertionError(msg)
  AssertionError: Expected call: request('get', 
'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dc&dsName=fake_ds',
 stream=True, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, 
allow_redirects=True, verify=False)
  Actual call: request('get', 
'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dc&dsName=fake_ds',
 stream=True, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, 
allow_redirects=True, params=None, verify=False)
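
  A tiny standalone illustration of the mismatch (using unittest.mock here,
  not the actual Nova test): whatever the underlying library change, the
  expected call has to match the actual call, which now includes params=None.

    from unittest import mock

    session = mock.Mock()
    session.request('get', 'https://host/path', stream=True, params=None)

    # without params=None in the expectation this assertion fails, exactly as
    # in the gate traceback above
    session.request.assert_called_once_with('get', 'https://host/path',
                                            stream=True, params=None)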

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451860] Re: Attached volume migration failed, due to incorrect arguments order passed to swap_volume

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451860

Title:
  Attached volume migration failed, due to incorrect arguments  order
  passed to swap_volume

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New
Status in openstack-ansible:
  Fix Released

Bug description:
  Steps to reproduce:
  1. create a volume in cinder
  2. boot a server from image in nova
  3. attach this volume to server
  4. use ' cinder migrate  --force-host-copy True  
3fa956b6-ba59-46df-8a26-97fcbc18fc82 openstack-wangp11-02@pool_backend_1#Pool_1'

  Log from nova-compute (see attachment for detailed info):

  2015-05-05 00:33:31.768 ERROR root [req-b8424cde-e126-41b0-a27a-ef675e0c207f 
admin admin] Original exception being dropped: ['Traceback (most recent ca
  ll last):\n', '  File "/opt/stack/nova/nova/compute/manager.py", line 351, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n
  ', '  File "/opt/stack/nova/nova/compute/manager.py", line 4982, in 
swap_volume\ncontext, old_volume_id, instance_uuid=instance.uuid)\n', 
"Attribut
  eError: 'unicode' object has no attribute 'uuid'\n"]

  
  According to my debug result:

  # caller: parameters passed to swap_volume
  def swap_volume(self, ctxt, instance, old_volume_id, new_volume_id):
      return self.manager.swap_volume(ctxt, instance, old_volume_id,
                                      new_volume_id)

  # swap_volume function definition
  @wrap_exception()
  @reverts_task_state
  @wrap_instance_fault
  def swap_volume(self, context, old_volume_id, new_volume_id, instance):
      """Swap volume for an instance."""
      context = context.elevated()

      bdm = objects.BlockDeviceMapping.get_by_volume_id(
          context, old_volume_id, instance_uuid=instance.uuid)
      connector = self.driver.get_volume_connector(instance)

  As you can see, the caller passes "self, ctxt, instance, old_volume_id,
  new_volume_id" while the function definition expects "self, context,
  old_volume_id, new_volume_id, instance".

  This causes the "'unicode' object has no attribute 'uuid'" error when
  trying to access instance['uuid'].
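
  A tiny standalone illustration of why calling with keyword arguments
  sidesteps this whole class of bug (hypothetical function, not the actual
  Nova code):

    def swap_volume(context, old_volume_id, new_volume_id, instance):
        return (context, old_volume_id, new_volume_id, instance["uuid"])

    # keyword arguments match by name, so the caller's ordering cannot
    # silently bind instance to old_volume_id
    print(swap_volume(context="ctx", instance={"uuid": "abc"},
                      old_volume_id="vol-old", new_volume_id="vol-new"))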


  BTW: this problem was introduced in
  https://review.openstack.org/#/c/172152

  It affects both Kilo and master.

  Thanks
  Peter

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451558] Re: subnetpool allocation not working with postgresql

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451558

Title:
  subnetpool allocation not working with postgresql

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The following is working with mysql but not with postgresql

  
  #$ neutron subnetpool-create pool --pool-prefix 10.0.0.0/8 
--default-prefixlen 24
  #$ neutron net-create net
  #$ neutron subnet-create net --name subnet --subnetpool pool

  
  The last command raises a 501 with postgresql, with the stacktrace [2] in 
neutron-server, because _get_allocated_cidrs [1] performs a SELECT FOR UPDATE 
with a JOIN on an empty select (allowed by mysql, but not by postgresql).



  [1]: 
https://github.com/openstack/neutron/blob/5962d825a6c98225c51bc6dd304b5c1ac89035ef/neutron/ipam/subnet_alloc.py#L40-L44
query = session.query(models_v2.Subnet).with_lockmode('update')
subnets = query.filter_by(subnetpool_id=self._subnetpool['id'])

  
  [2]: neutron-server stacktrace
  2015-05-04 21:47:01.939 ERROR neutron.api.v2.resource 
[req-a6c14f61-bdb2-4273-a231-df0a85fb33d8 demo 
b532b7a9302c45b18f06f68b41869ffa] create failed
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 461, in create
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 804, in create_subnet
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource result, 
mech_context = self._create_subnet_db(context, subnet)
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 795, in 
_create_subnet_db
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource result = 
super(Ml2Plugin, self).create_subnet(context, subnet)
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 1389, in 
create_subnet
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource subnetpool_id)
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 131, in wrapper
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 1283, in 
_create_subnet_from_pool
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource ipam_subnet = 
allocator.allocate_subnet(context.session, req)
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/ipam/subnet_alloc.py", line 141, in allocate_subnet
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource return 
self._allocate_any_subnet(session, request)
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/ipam/subnet_alloc.py", line 93, in 
_allocate_any_subnet
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource prefix_pool = 
self._get_available_prefix_list(session)
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/ipam/subnet_alloc.py", line 48, in 
_get_available_prefix_list
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource allocations = 
self._get_allocated_cidrs(session)
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/ipam/subnet_alloc.py", line 44, in 
_get_allocated_cidrs
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource return (x.cidr for 
x in subnets)
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2441, in 
__iter__
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource return 
self._execute_and_instances(context)
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2456, in 
_execute_and_instances
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 841, 
in execute
  2015-05-04 21:47:01.939 TRACE neutron.api.v2.resource return meth(self, 
multiparams, params)
  2015-05-04 21:47:

[Yahoo-eng-team] [Bug 1450682] Re: nova unit tests failing with pbr 0.11

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450682

Title:
  nova unit tests failing with pbr 0.11

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  test_version_string_with_package_is_good breaks with the release of
  pbr 0.11

  
nova.tests.unit.test_versions.VersionTestCase.test_version_string_with_package_is_good
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/test_versions.py", line 33, in 
test_version_string_with_package_is_good
  version.version_string_with_package())
File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: '5.5.5.5-g9ec3421' != 
'2015.2.0-g9ec3421'

  
  
http://logs.openstack.org/27/169827/8/check/gate-nova-python27/2009c78/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1450682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451559] Re: subnetpool allocation not working with multiples subnets on a unique network

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451559

Title:
  subnetpool allocation not working with multiples subnets on a unique
  network

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The following scenario is not working:

  #$ neutron subnetpool-create pool --pool-prefix 10.0.0.0/8 
--default-prefixlen 24
  #$ neutron net-create net
  #$ neutron subnet-create net --name subnet0 10.0.0.0/24
  #$ neutron subnet-create net --name subnet1 --subnetpool pool
  >>> returns a 409

  The last command fails because neutron tries to allocate to subnet1 the
  first unallocated cidr from the 10.0.0.0/8 pool => 10.0.0.0/24, but network
  net already has 10.0.0.0/24 as a cidr (subnet0), and overlapping cidrs are
  disallowed on the same network!
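
  A minimal standalone sketch of the behaviour the pool allocator needs
  (assuming the netaddr library is available; hypothetical helper, not the
  actual Neutron code): prefixes already present on the target network have to
  be treated as used.

    import netaddr

    def first_free_cidr(pool_cidrs, network_cidrs):
        used = netaddr.IPSet(network_cidrs)
        for cidr in pool_cidrs:
            if (used & netaddr.IPSet([cidr])).size == 0:
                return cidr
        raise ValueError("pool exhausted for this network")

    # subnet0 already uses 10.0.0.0/24, so the next candidate is returned
    print(first_free_cidr(["10.0.0.0/24", "10.0.1.0/24"], ["10.0.0.0/24"]))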

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453666] Re: libvirt: guestfs api makes nova-compute hang

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453666

Title:
  libvirt: guestfs api makes nova-compute hang

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  Latest Kilo code.

  In inspect_capabilities() of nova/virt/disk/vfs/guestfs.py, the guestfs
  API, which is a C extension, hangs the nova-compute process when it is
  invoked. This results in message queue timeout errors and instance boot
  failures.

  An example of this problem is:

  2015-05-09 17:07:08.393 4449 DEBUG nova.virt.disk.vfs.api 
[req-1f7c1104-2679-43a5-bbcb-f73114ce9103 - - - - -] Using primary VFSGuestFS 
instance_for_image /usr/lib/python2.7/site-packages/nova/virt/disk/vfs/api.py:50
  2015-05-09 17:08:35.443 4449 DEBUG nova.virt.disk.vfs.guestfs 
[req-1f7c1104-2679-43a5-bbcb-f73114ce9103 - - - - -] Setting up appliance for 
/var/lib/nova/instances/0517e2a9-469c-43f4-a129-f489fc1c8356/disk qcow2 setup 
/usr/lib/python2.7/site-packages/nova/virt/disk/vfs/guestfs.py:169
  2015-05-09 17:08:35.457 4449 DEBUG nova.openstack.common.periodic_task 
[req-bb78b74b-bed7-450f-bd40-19686aab2c3e - - - - -] Running periodic task 
ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:219
  2015-05-09 17:08:35.461 4449 INFO oslo_messaging._drivers.impl_rabbit 
[req-bb78b74b-bed7-450f-bd40-19686aab2c3e - - - - -] Connecting to AMQP server 
on 127.0.0.1:5671
  2015-05-09 17:08:35.472 4449 ERROR nova.compute.manager [-] Instance failed 
network setup after 1 attempt(s)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager Traceback (most 
recent call last):
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1783, in 
_allocate_network_async
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager 
system_metadata=sys_meta)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 739, in 
_instance_update
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager **kwargs)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/conductor/api.py", line 308, in 
instance_update
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager updates, 
'conductor')
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py", line 194, in 
instance_update
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager service=service)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 156, in 
call
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager retry=self.retry)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in 
_send
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager timeout=timeout, 
retry=retry)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
350, in send
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager retry=retry)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
339, in _send
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager result = 
self._waiter.wait(msg_id, timeout)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
243, in wait
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager message = 
self.waiters.get(msg_id, timeout=timeout)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
149, in get
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager 'to message ID 
%s' % msg_id)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager MessagingTimeout: 
Timed out waiting for a reply to message ID 8ff07520ea8743c997b5017f6638a0df
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager
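
  A minimal standalone sketch of the usual remedy (assuming eventlet is
  installed; not the actual Nova patch): run the blocking C-extension call in
  a native thread via eventlet's tpool so the green thread keeps servicing
  RPC and heartbeats.

    import time
    from eventlet import tpool

    def blocking_probe():
        # stands in for the blocking libguestfs capability probe
        time.sleep(2)
        return "capabilities"

    # tpool.execute runs the call in a real OS thread and yields the hub,
    # so other green threads keep running meanwhile
    print(tpool.execute(blocking_probe))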

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451576] Re: subnetpool allocation not ensuring non-overlapping cidrs

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451576

Title:
  subnetpool allocation not ensuring non-overlapping cidrs

Status in neutron:
  In Progress
Status in neutron kilo series:
  New

Bug description:
  _get_allocated_cidrs[1] locks only the already allocated subnets in a
  subnetpool (with mysql/postgresql at least), which ensures we won't
  allocate a cidr overlapping with existing cidrs, but nothing prevents a
  concurrent subnet allocation from creating an overlapping subnet in the
  same subnetpool.

  [1]:
  
https://github.com/openstack/neutron/blob/5962d825a6c98225c51bc6dd304b5c1ac89035ef/neutron/ipam/subnet_alloc.py#L40-L44
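
  A minimal standalone sketch of one serialization idea (assuming SQLAlchemy
  1.4+; hypothetical model, not the actual Neutron schema or fix): lock the
  subnetpool row itself, so two concurrent allocations against the same pool
  cannot both read an empty allocation set.

    from sqlalchemy import Column, String, create_engine
    from sqlalchemy.orm import declarative_base, sessionmaker

    Base = declarative_base()

    class SubnetPool(Base):
        __tablename__ = "subnetpools"
        id = Column(String(36), primary_key=True)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add(SubnetPool(id="pool-1"))
    session.commit()

    # a row-level lock on the pool itself serializes concurrent allocators
    # (rendered as SELECT ... FOR UPDATE on real databases; SQLite ignores it)
    pool = (session.query(SubnetPool)
            .filter_by(id="pool-1")
            .with_for_update()
            .one())
    print(pool.id)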

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1455102] Re: some test jobs broken by tox 2.0 not passing env variables

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1455102

Title:
  some test jobs broken by tox 2.0 not passing env variables

Status in Magnum:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron kilo series:
  New
Status in OpenStack-Gate:
  Confirmed
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-manilaclient:
  Fix Committed
Status in python-neutronclient:
  Fix Committed
Status in python-novaclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Committed

Bug description:
  Tox 2.0 brings environment isolation, which is good, except that a lot of
  test jobs assume critical variables (like credentials) are passed via the
  environment.

  There are multiple ways to fix this:

  1. stop using environment to pass things, instead use a config file of
  some sort

  2. allow explicit pass through via passenv -
  http://tox.readthedocs.org/en/latest/config.html#confval-passenv=SPACE-SEPARATED-GLOBNAMES
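
  A minimal sketch of option 2 as a tox.ini fragment (hypothetical variable
  names, tox 2.x syntax; the exact list varies per project):

    [testenv]
    # explicitly whitelist the environment variables the tests rely on,
    # since tox 2.0 no longer passes them through by default
    passenv = OS_USERNAME OS_PASSWORD OS_AUTH_URL OS_*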

  This bug mostly exists for tracking patches, and ensuring that people
  realize there is a larger change here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/magnum/+bug/1455102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454434] Re: NoNetworkFoundInMaximumAllowedAttempts during concurrent network creation

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1454434

Title:
  NoNetworkFoundInMaximumAllowedAttempts during concurrent network
  creation

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  NoNetworkFoundInMaximumAllowedAttempts  could be thrown if networks are 
created by multiple threads simultaneously.
  This is related to https://bugs.launchpad.net/bugs/1382064
  The DB logic currently works correctly; however, the 11 attempts the code
makes right now might not be enough in some rare, unlucky cases under extreme
concurrency.

  We need to randomize segmentation_id selection to avoid such issues.
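
  A minimal standalone sketch of the randomization (hypothetical helper, not
  the actual allocation code): picking at random from the free IDs makes
  concurrent allocators unlikely to collide on the same segment.

    import random

    def pick_segmentation_id(allocatable, allocated):
        free = [seg for seg in allocatable if seg not in allocated]
        if not free:
            raise ValueError("no segmentation IDs left")
        return random.choice(free)

    print(pick_segmentation_id(range(1, 4095), {1, 2, 3}))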

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1454434/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1455439] Re: l3 agent: race may lead to creation of deleted routers

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1455439

Title:
  l3 agent: race may lead to creation of deleted routers

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  During startup (or in case of any sync failure) the l3 agent initiates a full 
resync with the neutron server.
  That means fetching all router info and adding an update event for each 
router to the agent processing queue.
  The important thing is that such events are added with SYNC priority, which 
is lower than the RPC priority of events resulting from users 
adding/updating/deleting routers. Another important thing is that the agent 
won't ask the server for router info later when processing SYNC events, as all 
info was received initially at sync start.

  The race is when a router is deleted during an l3 agent resync: while the
  router update event with SYNC priority may still be waiting in the queue, a
  router deleted event with RPC priority is added to the queue and processed.
  The SYNC event is processed later, thus recreating the router which was
  already deleted. Such routers are removed from the agent node only on an
  agent restart or another resync.

  One way to fix this is to not fetch full router info at resync start but only 
ids, and get the full router info when processing the update for a particular 
router. The downside is that this increases RPC traffic between agent and 
server and thus slows down both of them.
  Another way would be to delete all events (for all priorities) related to a 
particular router when receiving the router_deleted notification, but the 
PriorityQueue used by the agent does not seem to allow searching and popping by 
parameters (and that could also slow down processing).

  So I'm going to propose adding two events (one per priority) to the queue
  on the router_deleted notification, so that the "deleted" event is the
  latest by timestamp for both priorities and the router won't be recreated.
  When no resync is happening during router deletion (the normal case), the
  additional router deleted event should not add much burden to the agent,
  as it's a pretty cheap call for an unknown (deleted) router.
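
  A tiny standalone sketch of that ordering (hypothetical queue, not the
  actual agent code): the shadow delete enqueued at SYNC priority is newer
  than the stale SYNC update, so the router ends up deleted.

    import heapq
    import itertools

    RPC, SYNC = 0, 1            # lower value = higher priority
    order = itertools.count()   # stands in for the event timestamp
    queue = []

    def add(priority, router_id, action):
        heapq.heappush(queue, (priority, next(order), router_id, action))

    add(SYNC, "r1", "update")   # stale entry from the full resync
    add(RPC, "r1", "delete")    # user deletes the router
    add(SYNC, "r1", "delete")   # proposed shadow delete at SYNC priority

    while queue:
        print(heapq.heappop(queue))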

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1455439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456333] Re: ovs-agent: doesn't prevent arp requests with faked ips

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456333

Title:
  ovs-agent: doesn't prevent arp requests with faked ips

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Patch
  
https://git.openstack.org/cgit/openstack/neutron/commit/?id=aa7356b729f9672855980429677c969b6bab61a1
  sets up rules on br-int to prevent faking the IP address in ARP replies.
  But it's also possible to poison a neighbour's ARP cache with a bogus
  ARP request, as the victim updates its cache on receipt of it. That is
  how arpcachepoison in scapy works.

  Here the attacker is 10.0.1.6 and the victim is 10.0.1.7

  victim# ip n
  10.0.1.6 dev tapfccaf7c3-01 lladdr fa:16:3e:33:58:4e STALE
  10.0.1.1 dev tapfccaf7c3-01 lladdr fa:16:3e:10:d3:b2 STALE

  attacker#  scapy
  INFO: Can't import python gnuplot wrapper . Won't be able to plot.
  INFO: Can't import PyX. Won't be able to use psdump() or pdfdump().
  WARNING: No route found for IPv6 destination :: (no default route?)
  Welcome to Scapy (2.2.0)
  >>> arpcachepoison("10.0.1.7", "10.0.1.1", interval=1)

  victim# ip n
  10.0.1.6 dev tapfccaf7c3-01 lladdr fa:16:3e:33:58:4e STALE
  10.0.1.1 dev tapfccaf7c3-01 lladdr fa:16:3e:33:58:4e STALE

  This is at the same level as
  https://bugs.launchpad.net/neutron/+bug/1274034, which was deemed not
  to be a security vulnerability.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456822] Re: AgentNotFoundByTypeHost exception logged when L3-agent starts up

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456822

Title:
  AgentNotFoundByTypeHost exception logged when L3-agent starts up

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  On my single-node devstack setup running the latest neutron code,
  there is one AgentNotFoundByTypeHost exception found for the L3-agent.
  However, the AgentNotFoundByTypeHost exception is not logged for the
  DHCP, OVS, or metadata agents.  This fact would point to a problem
  with how the L3-agent is starting up.

  Exception found in the L3-agent log:

  2015-05-19 11:27:57.490 23948 DEBUG oslo_messaging._drivers.amqpdriver [-] 
MSG_ID is 1d0f3e0a8a6744c9a9fc43eb3fdc5153 _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:311^M
  2015-05-19 11:27:57.550 23948 ERROR neutron.agent.l3.agent [-] Failed 
synchronizing routers due to RPC error^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent Traceback (most 
recent call last):^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 517, in 
fetch_and_sync_all_routers^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent routers = 
self.plugin_rpc.get_routers(context)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/opt/stack/neutron/neutron/agent/l3/agent.py", line 91, in get_routers^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent 
router_ids=router_ids)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
156, in call^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent 
retry=self.retry)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent 
timeout=timeout, retry=retry)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 350, in send^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent retry=retry)^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 341, in _send^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent raise result^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent RemoteError: 
Remote error: AgentNotFoundByTypeHost Agent with agent_type=L3 agent and 
host=DVR-Ctrl2 could not be found^M
  2015-05-19 11:27:57.550 23948 TRACE neutron.agent.l3.agent [u'Traceback (most 
recent call last):\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply\nexecutor_callback))\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch\nexecutor_callback)\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
130, in _do_dispatch\nresult = func(ctxt, **new_args)\n', u'  File 
"/opt/stack/neutron/neutron/api/rpc/handlers/l3_rpc.py", line 81, in 
sync_routers\ncontext, host, router_ids))\n', u'  File 
"/opt/stack/neutron/neutron/db/l3_agentschedulers_db.py", line 290, in 
list_active_sync_routers_on_active_l3_agent\ncontext, 
constants.AGENT_TYPE_L3, host)\n', u'  File 
"/opt/stack/neutron/neutron/db/agents_db.py", line 197, in 
_get_agent_by_type_and_host\nhost=host)\n', u'AgentNotFoundByTypeHost: 
Agent with agent_ty
 pe=L3 agent and host=DVR-Ctrl2 could not be found\n'].^M

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460220] Re: ipset functional tests assume system capability

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460220

Title:
  ipset functional tests assume system capability

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Production code uses ipset in the root namespace, but functional
  testing uses them in non-root namespaces. As it turns out, that
  functionality requires versions of the kernel and ipset not found in
  all versions of all distributions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460673] Re: nova-manage flavor convert fails if instance has no flavor in sys_meta

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460673

Title:
  nova-manage flavor convert fails if instance has no flavor in sys_meta

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  nova-manage fails if an instance has no flavor in sys_meta when trying to
  move them all to instance_extra.

  In most cases, though, the instance_type table includes the correct
  information, so it should be possible to copy it from there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460673/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461024] Re: Notification is not sent on security group rule creation

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461024

Title:
  Notification is not sent on security group rule creation

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Security group rule before/after_create notifications are done in
  create_security_group_rule() from SecurityGroupDbMixin.

  But currently SecurityGroupServerRpcMixin is used to support the security 
group extension in plugins.
  It is derived from SecurityGroupDbMixin, and both have a 
create_security_group_rule() method, so in SecurityGroupServerRpcMixin it is 
overridden.
  Hence create_security_group_rule() from SecurityGroupDbMixin is not used => 
notifications are not sent.
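
  A tiny standalone illustration of the pitfall (hypothetical classes, not the
  actual Neutron code): notifications emitted only in the base class never
  fire once a subclass overrides the method without re-emitting them.

    class Base(object):
        def create_rule(self, rule):
            print("notify before_create")
            print("create", rule)
            print("notify after_create")

    class RpcMixin(Base):
        def create_rule(self, rule):
            # the override skips the notifications -- this is the bug
            print("create", rule)

    RpcMixin().create_rule({"port": 22})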

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456963] Re: VNC Console failed to load with IPv6 Addresses

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456963

Title:
  VNC Console failed to load with IPv6 Addresses

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  Description of problem:
  After installing OpenStack with packstack over IPv6 addresses (all 
components using IPv6), the VNC console is unreachable.

  Version-Release number of selected component (if applicable):
  Packstack version-
  packstack Kilo 2015.1.dev1537.gba5183c
  RHEL version -
  Red Hat Enterprise Linux Server release 7.1 (Maipo)
  openstack versions:
  2015.1.0
  novnc-0.5.1-2.el7.noarch
  openstack-nova-cert-2015.1.0-3.el7.noarch
  openstack-nova-compute-2015.1.0-3.el7.noarch
  openstack-nova-common-2015.1.0-3.el7.noarch
  python-nova-2015.1.0-3.el7.noarch
  openstack-nova-novncproxy-2015.1.0-3.el7.noarch
  openstack-nova-console-2015.1.0-3.el7.noarch
  openstack-nova-scheduler-2015.1.0-3.el7.noarch
  openstack-nova-conductor-2015.1.0-3.el7.noarch
  openstack-nova-api-2015.1.0-3.el7.noarch
  python-novaclient-2.23.0-1.el7.noarch

  
  How reproducible:
  Try to open noVNC console via the web browser with IPv6 address

  Steps to Reproduce:
  1. Install openstack with IPv6 addresses for all components
  2. Login to the horizon dashboard using IPv6
  3. Launch an instance
  4. try to activate console

  Actual results:
  Console failed to connect - error 1006

  Expected results:
  Console should connect successfully

  Additional info:

  nova novnc log:
  2015-05-12 10:25:33.961 15936 INFO nova.console.websocketproxy [-] WebSocket 
server settings:
  2015-05-12 10:25:33.962 15936 INFO nova.console.websocketproxy [-]   - Listen 
on ::0:6080
  2015-05-12 10:25:33.962 15936 INFO nova.console.websocketproxy [-]   - Flash 
security policy server
  2015-05-12 10:25:33.962 15936 INFO nova.console.websocketproxy [-]   - Web 
server. Web root: /usr/share/novnc
  2015-05-12 10:25:33.963 15936 INFO nova.console.websocketproxy [-]   - No 
SSL/TLS support (no cert file)
  2015-05-12 10:25:33.965 15936 INFO nova.console.websocketproxy [-]   - 
proxying from ::0:6080 to None:None
  2015-05-13 10:33:12.084 15936 CRITICAL nova [-] UnboundLocalError: local 
variable 'exc' referenced before assignment
  2015-05-13 10:33:12.084 15936 TRACE nova Traceback (most recent call last):
  2015-05-13 10:33:12.084 15936 TRACE nova   File "/usr/bin/nova-novncproxy", 
line 10, in 
  2015-05-13 10:33:12.084 15936 TRACE nova sys.exit(main())
  2015-05-13 10:33:12.084 15936 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", line 49, in main
  2015-05-13 10:33:12.084 15936 TRACE nova port=CONF.novncproxy_port)
  2015-05-13 10:33:12.084 15936 TRACE nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py", line 72, in proxy
  2015-05-13 10:33:12.084 15936 TRACE nova 
RequestHandlerClass=websocketproxy.NovaProxyRequestHandler
  2015-05-13 10:33:12.084 15936 TRACE nova   File 
"/usr/lib/python2.7/site-packages/websockify/websocket.py", line 1018, in 
start_server
  2015-05-13 10:33:12.084 15936 TRACE nova self.msg("handler exception: 
%s", str(exc))
  2015-05-13 10:33:12.084 15936 TRACE nova UnboundLocalError: local variable 
'exc' referenced before assignment
  2015-05-13 10:33:12.084 15936 TRACE nova 
  2015-05-13 10:52:41.893 3696 INFO nova.console.websocketproxy [-] WebSocket 
server settings:
  2015-05-13 10:52:41.893 3696 INFO nova.console.websocketproxy [-]   - Listen 
on ::0:6080
  2015-05-13 10:52:41.894 3696 INFO nova.console.websocketproxy [-]   - Flash 
security policy server
  2015-05-13 10:52:41.894 3696 INFO nova.console.websocketproxy [-]   - Web 
server. Web root: /usr/share/novnc
  2015-05-13 10:52:41.894 3696 INFO nova.console.websocketproxy [-]   - No 
SSL/TLS support (no cert file)
  2015-05-13 10:52:41.920 3696 INFO nova.console.websocketproxy [-]   - 
proxying from ::0:6080 to None:None
  2015-05-13 10:54:04.345 3979 INFO oslo_messaging._drivers.impl_rabbit 
[req-e47dae76-1c51-4ce8-9100-d98022fc6e34 - - - - -] Connecting to AMQP server 
on 2001:77:77:77:f816:3eff:fe95:8683:5672
  2015-05-13 10:54:04.380 3979 INFO oslo_messaging._drivers.impl_rabbit 
[req-e47dae76-1c51-4ce8-9100-d98022fc6e34 - - - - -] Connected to AMQP server 
on 2001:77:77:77:f816:3eff:fe95:8683:5672
  2015-05-13 10:54:04.388 3979 INFO oslo_messaging._drivers.impl_rabbit 
[req-e47dae76-1c51-4ce8-9100-d98022fc6e34 - - - - -] Connecting to AMQP server 
on 2001:77:77:77:f816:3eff:fe95:8683:5672
  2015-05-13 10:54:04.408 3979 INFO oslo_messaging._drivers.impl_rabbit 
[req-e47dae76-1c51-4ce8-9100-d98022fc6e34 - - - - -] Connected to AMQP server 
on 2001:77:77:77:f816:3eff:fe95:8683:5672
  2015-05-13 10:54:04.554 3979 INFO nova.console.websocketproxy 
[req-e47dae76

[Yahoo-eng-team] [Bug 1456823] Re: address pair rules not matched in iptables counter-preservation code

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456823

Title:
  address pair rules not matched in iptables counter-preservation code

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  There are a couple of issues with the way our iptables rules are
  formed that prevent them from being matched in the code that looks at
  existing rules to preserve counters. So the counters end up getting
  wiped out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466873] Re: neutron.tests.functional.agent.linux.test_keepalived keeps running keepalived processes

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466873

Title:
  neutron.tests.functional.agent.linux.test_keepalived keeps running
  keepalived processes

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Every test case in neutron.tests.functional.agent.linux.test_keepalived
  leaves a keepalived process running on the system.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1466873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473369] Re: new mock release broke a bunch of unit tests

2015-07-23 Thread Alan Pevec
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in Glance:
  Fix Committed
Status in Glance kilo series:
  New
Status in glance_store:
  Fix Released
Status in murano:
  Fix Committed
Status in murano kilo series:
  Fix Committed
Status in neutron:
  Fix Committed
Status in neutron kilo series:
  New
Status in python-muranoclient:
  Fix Committed
Status in python-muranoclient kilo series:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Committed

Bug description:
  http://lists.openstack.org/pipermail/openstack-
  dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1473369/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457900] Re: dhcp_agents_per_network > 1 cause conflicts (NACKs) from dnsmasqs (break networks)

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1457900

Title:
  dhcp_agents_per_network > 1 cause conflicts (NACKs) from dnsmasqs
  (break networks)

Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  If neutron is configured to have more than one DHCP agent per network
  (option dhcp_agents_per_network=2), each dnsmasq rejects leases offered by
  the other dnsmasqs, creating a mess and preventing instances from booting
  normally.

  Symptoms:

  Cirros (at the log):
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK
  Usage: /sbin/cirros-dhcpc 
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK
  Usage: /sbin/cirros-dhcpc 
  Sending discover...
  Sending select for 188.42.216.146...
  Received DHCP NAK

  Steps to reproduce:
  1. Set up neutron with VLANs and dhcp_agents_per_network=2 option in 
neutron.conf
  2. Set up two or more different nodes with enabled neutron-dhcp-agent
  3. Create VLAN neutron network with --enable-dhcp option
  4. Create instance with that network

  Expected behaviour:

  The instance receives an IP address via DHCP without problems or delays.

  Actual behaviour:

  The instance is stuck in network boot for a long time.
  There are complaints about NACKs in the logs of the DHCP client.
  There are multiple NACKs visible in tcpdump on the interfaces.

  Additional analysis: it is very complex, so I attach an example of two
  parallel tcpdumps from two dhcp namespaces in HTML format.

  
  Version: 2014.2.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1457900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458718] Re: DB2 error occurs when neutron server enables multiple api workers

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458718

Title:
  DB2 error occurs when neutron server enables multiple api workers

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  When neutron server enables multiple api workers, it uses os.fork to start
  multiple neutron server processes.  During this period, DB2 errors like the
  one below occur, which show we are trying to close an already-closed
  connection.  It seems that the pooled connection is shared between
  processes.

  2015-04-29 22:27:39.330 567 ERROR sqlalchemy.pool.QueuePool [-] Exception 
closing connection 
  2015-04-29 22:27:39.330 567 TRACE sqlalchemy.pool.QueuePool Traceback (most 
recent call last):
  2015-04-29 22:27:39.330 567 TRACE sqlalchemy.pool.QueuePool   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/pool.py", line 250, in 
_close_connection
  2015-04-29 22:27:39.330 567 TRACE sqlalchemy.pool.QueuePool 
self._dialect.do_close(connection)
  2015-04-29 22:27:39.330 567 TRACE sqlalchemy.pool.QueuePool   File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 412, in 
do_close
  2015-04-29 22:27:39.330 567 TRACE sqlalchemy.pool.QueuePool 
dbapi_connection.close()
  2015-04-29 22:27:39.330 567 TRACE sqlalchemy.pool.QueuePool   File 
"/usr/lib64/python2.7/site-packages/ibm_db_dbi.py", line 688, in close
  2015-04-29 22:27:39.330 567 TRACE sqlalchemy.pool.QueuePool raise 
_get_exception(inst)
  2015-04-29 22:27:39.330 567 TRACE sqlalchemy.pool.QueuePool OperationalError: 
ibm_db_dbi::OperationalError: [IBM][CLI Driver] CLI0106E  Connection is closed. 
SQLSTATE=08003 SQLCODE=-9

  Currently neutron calls dispose() in the child process to release the
  connection and create a new one. Instead, we should dispose of the pool
  before os.fork in the parent process.

  Reference to sqlalchemy
  doc(http://docs.sqlalchemy.org/en/latest/core/connections.html#basic-
  usage)
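
  A minimal standalone sketch of that ordering (assuming SQLAlchemy is
  available and a Unix platform; not the actual Neutron code): dispose of the
  pool in the parent before forking so children never share or close the
  parent's DBAPI connections.

    import os
    from sqlalchemy import create_engine

    engine = create_engine("sqlite://")
    engine.connect().close()      # the pool now holds a DBAPI connection

    engine.dispose()              # drop pooled connections in the parent
    pid = os.fork()               # each process now builds its own pool
    if pid == 0:
        engine.connect().close()  # child opens a fresh connection of its own
        os._exit(0)
    os.waitpid(pid, 0)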

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458718/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459467] Re: port update multiple fixed IPs anticipating allocation fails with mac address error

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1459467

Title:
  port update multiple fixed IPs anticipating allocation fails with mac
  address error

Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  A port update with multiple fixed IP specifications, one with a subnet
  ID and one with a fixed IP that conflicts with the address picked by
  the one specifying the subnet ID, will result in a DBDuplicateEntry
  which is presented to the user as a MAC address error.

  ~$ neutron port-update 7521786b-6c7f-4385-b5e1-fb9565552696 --fixed-ips 
type=dict 
{subnet_id=ca9dd2f0-cbaf-4997-9f59-dee9a39f6a7d,ip_address=42.42.42.42}
  Unable to complete operation for network 
0897a051-bf56-43c1-9083-3ac38ffef84e. The mac address None is in use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1459467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458119] Re: Improve stability and robustness of periodic agent checks

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458119

Title:
  Improve stability and robustness of periodic agent checks

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  In some cases, due to a DB controller failure, DB connections can be 
interrupted.
  This causes exceptions to sneak into the looping call method, effectively 
shutting the loop down and preventing any further failover for the affected 
resources.
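
  A minimal standalone sketch of the hardening (hypothetical decorator, not
  the actual Neutron patch): catching failures inside the periodic check keeps
  one transient DB error from killing the looping call for good.

    import logging

    LOG = logging.getLogger(__name__)

    def safe_periodic_check(check):
        def wrapper(*args, **kwargs):
            try:
                return check(*args, **kwargs)
            except Exception:
                LOG.exception("periodic check failed; retrying next interval")
        return wrapper

    @safe_periodic_check
    def reschedule_routers():
        raise RuntimeError("db connection lost")

    reschedule_routers()   # logged, not propagated, so the loop keeps running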

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465927] Re: variable 'version' is undefined in function '_has_cpu_policy_support'

2015-07-23 Thread Alan Pevec
** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1465927

Title:
  variable 'version' is undefined in function '_has_cpu_policy_support'

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New

Bug description:
  My running environment is
  openstack-nova-compute-2015.1.0-3.el7.noarch
  python-nova-2015.1.0-3.el7.noarch
  openstack-nova-novncproxy-2015.1.0-3.el7.noarch
  openstack-nova-conductor-2015.1.0-3.el7.noarch
  openstack-nova-api-2015.1.0-3.el7.noarch
  openstack-nova-console-2015.1.0-3.el7.noarch
  openstack-nova-scheduler-2015.1.0-3.el7.noarch
  openstack-nova-serialproxy-2015.1.0-3.el7.noarch
  openstack-nova-common-2015.1.0-3.el7.noarch

  When booting an instance on a host with libvirt version 1.2.10 and a flavor
  key set with hw:cpu_policy=dedicated, the following appears in the log:

  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
  line 3404, in _has_cpu_policy_support

  File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
  line 524, in _version_to_string

  TypeError: 'module'  object is not iterable  in
  nova/virt/libvirt/driver.py

  Source code of the Kilo version is below:

   def _has_cpu_policy_support(self):
       for ver in BAD_LIBVIRT_CPU_POLICY_VERSIONS:
           if self._host.has_version(ver):
               ver_ = self._version_to_string(version)
               raise exception.CPUPinningNotSupported(reason=_(
                   'Invalid libvirt version %(version)s') % {'version': ver_})
       return True

  I think this function contains a typo in the line

  ver_ = self._version_to_string(version)

  So whenever the libvirt version is in BAD_LIBVIRT_CPU_POLICY_VERSIONS, a
  TypeError is raised.

  It should be ver_ = self._version_to_string(ver)
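
  For reference, a corrected sketch with only that one-name fix applied
  (everything else exactly as quoted above):

    def _has_cpu_policy_support(self):
        for ver in BAD_LIBVIRT_CPU_POLICY_VERSIONS:
            if self._host.has_version(ver):
                # use the loop variable 'ver', not the undefined name 'version'
                ver_ = self._version_to_string(ver)
                raise exception.CPUPinningNotSupported(reason=_(
                    'Invalid libvirt version %(version)s') % {'version': ver_})
        return True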

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1465927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461325] Re: keyerror in OVS agent port_delete handler

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1461325

Title:
  keyerror in OVS agent port_delete handler

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  [req-d746e623-8c6e-4e4d-b246-8ca689e0b8ad None None] Error while processing 
VIF ports
  2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/opt/stack/new/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1521, in rpc_loop
  2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.deleted_ports -= 
port_info['removed']
  2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent KeyError: 'removed'
  2015-06-02 23:21:44.836 24270 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent
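
  A minimal sketch of the defensive handling the traceback suggests (the names
  come from the trace, but the helper itself is hypothetical): treat the
  'removed' key as optional instead of indexing it.

    def prune_deleted_ports(deleted_ports, port_info):
        """'removed' may be absent from port_info on some iterations, so
        default to an empty set rather than raising KeyError."""
        return deleted_ports - port_info.get('removed', set())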

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1461325/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466921] Re: IptablesFirewallDriver making extra unnecessary calls to IpsetManager.set_members()

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466921

Title:
  IptablesFirewallDriver making extra unnecessary calls to
  IpsetManager.set_members()

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Currently, IptablesFirewallDriver iterates over a list of security
  group IDs and makes calls to IpsetManager.set_members() passing each
  security group ID. The problem is that this list of security group IDs
  can contain duplicates, which causes IpsetManager.set_members() to be
  repeatedly called with the same arguments. This method is idempotent,
  so there is nothing different happening after the first time it's
  called with a certain set of arguments; it should only be called once
  per set of arguments.

  IpsetManager.set_members() acquires an external file lock on ipset to
  perform its operations, so eliminating these unnecessary file lock
  acquisitions will have a positive effect on the performance of this
  code.
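
  A minimal sketch of the deduplication described (the manager object and
  argument names are placeholders, not the real driver signature):

    def update_ipsets(ipset_manager, security_group_ids, ethertype, member_ips):
        """Call set_members() once per unique security group ID, even if the
        input list contains duplicates, so the external ipset lock is taken
        no more often than necessary."""
        for sg_id in set(security_group_ids):
            ipset_manager.set_members(sg_id, ethertype, member_ips)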

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1466921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468009] Re: disabling arp spoofing for 0.0.0.0/0 doesn't work

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468009

Title:
  disabling arp spoofing for 0.0.0.0/0 doesn't work

Status in neutron:
  Fix Committed
Status in neutron kilo series:
  New

Bug description:
  If 0.0.0.0/0 is in the allowed address pairs rules, the rule to allow
  that will fail to install because of the /0 prefix.
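
  A minimal sketch of the special case implied above (the match field name is
  illustrative, not the actual agent code): a /0 entry means "allow any source",
  so no per-address match should be emitted at all.

    import netaddr

    def allowed_address_match(ip_cidr):
        """Return a flow-match fragment for one allowed-address-pair entry."""
        net = netaddr.IPNetwork(ip_cidr)
        if net.prefixlen == 0:
            # 0.0.0.0/0 (or ::/0) allows everything; emitting a /0 match is
            # what fails, so return no constraint instead.
            return {}
        return {'arp_spa': str(net)}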

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468009/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests

2015-07-23 Thread Alan Pevec
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in heat:
  Fix Released
Status in Keystone:
  Fix Released
Status in Keystone icehouse series:
  Confirmed
Status in Keystone juno series:
  Fix Committed
Status in Keystone kilo series:
  New
Status in Manila:
  In Progress
Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  New
Status in Sahara:
  Confirmed

Bug description:
  Currently reproduced on Juno milestone 2, but this issue should be
  reproducible in all releases since its inception.

  It is possible to choke OpenStack API controller services using the
  wsgi+eventlet library by simply not closing the client socket connection.
  Whenever a request is received by any OpenStack API service, for example the
  nova API service, the eventlet library creates a green thread from the pool
  and starts processing the request. Even after the response is sent to the
  caller, the green thread is not returned to the pool until the client socket
  connection is closed. This way, a malicious user can send many API requests
  to the API controller node, determine the wsgi pool size configured for the
  given service, send that many requests to the service, and, after receiving
  the responses, simply wait indefinitely doing nothing, disrupting the service
  for other tenants. Even when service providers have enabled the rate limiting
  feature, it is possible to choke the API services with a group attack (many
  tenants).

  Following program illustrates choking of nova-api services (but this
  problem is omnipresent in all other OpenStack API Services using
  wsgi+eventlet)

  Note: I have explicitly set the wsgi_default_pool_size default value to 10 in
  nova/wsgi.py in order to reproduce this problem.
  After you run the program below, try to invoke the API.

  import time
  import requests
  from multiprocessing import Process

  def request(number):
      # Port is important here
      path = 'http://127.0.0.1:8774/servers'
      try:
          response = requests.get(path)
          print "RESPONSE %s-%d" % (response.status_code, number)
          # During this sleep, check whether the client socket connection is
          # released on the API controller node.
          time.sleep(1000)
          print "Thread %d complete" % number
      except requests.exceptions.RequestException as ex:
          print "Exception occurred %d-%s" % (number, str(ex))

  if __name__ == '__main__':
      processes = []
      for number in range(40):
          p = Process(target=request, args=(number,))
          p.start()
          processes.append(p)
      for p in processes:
          p.join()

  


  Presently, the wsgi server allows persistent connections if you configure
  keepalive to True, which is the default.
  In order to close the client socket connection explicitly after the response
  is sent and read successfully by the client, you simply have to set keepalive
  to False when you create a wsgi server.
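
  A minimal standalone sketch of that mitigation (not the actual OpenStack wsgi
  module): passing keepalive=False makes eventlet close the client socket once
  the response is sent, returning the green thread to the pool immediately.

    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['ok\n']

    # Small pool, mirroring the reproduction above; without keepalive an idle
    # client can no longer pin a green thread after its response was delivered.
    pool = eventlet.GreenPool(10)
    wsgi.server(eventlet.listen(('127.0.0.1', 8774)), app,
                custom_pool=pool, keepalive=False)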

  Additional information: By default eventlet passes "Connection: keepalive" if
  keepalive is set to True when a response is sent to the client. But it does
  not have the capability to set the timeout and max parameters.
  For example:
  Keep-Alive: timeout=10, max=5

  Note: After we disable keepalive in all the OpenStack API services using the
  wsgi library, it might impact existing applications built on the assumption
  that OpenStack API services use persistent connections. They might need to
  modify their applications if reconnection logic is not in place, and they
  might also see slower performance, as the HTTP connection has to be
  reestablished for every request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263665] Re: Number of GET requests grows exponentially when multiple rows are being updated in the table

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1263665

Title:
  Number of GET requests grows exponentially when multiple rows are
  being updated in the table

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  1. In the Launch Instance dialog, select number of instances 10.
  2. Create 10 instances.
  3. While instances are being created and table rows are being updated, the
  number of row update requests grows exponentially, and a queue of pending
  requests still exists after all rows have been updated.

  There is a request type:
  Request 
URL:http://donkey017/project/instances/?action=row_update&table=instances&obj_id=7c4eaf35-ebc0-4ea3-a702-7554c8c36cf2
  Request Method:GET

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1263665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417379] Re: KeyError returned when subnet-update enable_dhcp to False

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1417379

Title:
  KeyError returned when subnet-update enable_dhcp to False

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  A KeyError is output in the trace log after setting enable_dhcp of a subnet
  to False.

  [reproduce]
   neutron net-create test
   neutron dhcp-agent-network-add ID_of_DHCP_agent test
   neutron subnet-create test 192.168.100.0/24 --name test1
   neutron subnet-create test 192.168.101.0/24 --name test2
   neutron subnet-update test2 --enable_dhcp False
   tailf /opt/stack/logs/q-dhcp.log

  [Trace log]
  
  2015-02-14 01:01:08.556 5436 DEBUG neutron.agent.dhcp.agent [-] resync 
(536ef879-baf5-405b-8402-303ff5e2e905): 
[KeyError(u'37f0b628-22e6-4446-8bb9-2c2176c5a646',)] _periodic_resync_helper 
/opt/stack/neutron/neutron/agent/dhcp/agent.py:189
  2015-02-14 01:01:08.557 5436 DEBUG oslo_concurrency.lockutils [-] Lock 
"dhcp-agent" acquired by "sync_state" :: waited 0.000s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:430
  2015-02-14 01:01:08.558 5436 INFO neutron.agent.dhcp.agent [-] Synchronizing 
state
  2015-02-14 01:01:08.559 5436 DEBUG oslo_messaging._drivers.amqpdriver [-] 
MSG_ID is a0f460425e904cc0b045336351d961d5 _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:378
  2015-02-14 01:01:08.559 5436 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID 
is d3aff7b1f8744f5b909ef5bc6eded8d2. _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:224
  2015-02-14 01:01:08.632 5436 DEBUG neutron.agent.dhcp.agent [-] Calling 
driver for network: 536ef879-baf5-405b-8402-303ff5e2e905 action: enable 
call_driver /opt/stack/neutron/neutron/agent/dhcp/agent.py:106
  2015-02-14 01:01:08.633 5436 DEBUG neutron.agent.linux.utils [-] Unable to 
access /opt/stack/data/neutron/dhcp/536ef879-baf5-405b-8402-303ff5e2e905/pid 
get_value_from_file /opt/stack/neutron/neutron/agent/linux/utils.py:168
  2015-02-14 01:01:08.633 5436 ERROR neutron.agent.dhcp.agent [-] Unable to 
enable dhcp for 536ef879-baf5-405b-8402-303ff5e2e905.
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 116, in call_driver
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 207, in enable
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent 
interface_name = self.device_manager.setup(self.network)
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 934, in setup
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent port = 
self.setup_dhcp_port(network)
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent   File 
"/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 924, in setup_dhcp_port
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent for fixed_ip 
in dhcp_port.fixed_ips]
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent KeyError: 
u'37f0b628-22e6-4446-8bb9-2c2176c5a646'
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent
  2015-02-14 01:01:08.634 5436 INFO neutron.agent.dhcp.agent [-] Synchronizing 
state complete
  2015-02-14 01:01:08.635 5436 DEBUG oslo_concurrency.lockutils [-] Lock 
"dhcp-agent" released by "sync_state" :: held 0.078s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:442
  

  ・All DHCP agents look fine :-)
  ・Restart changes nothing. :-(

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1417379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424593] Re: ObjectDeleted error when network already removed during rescheduling

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424593

Title:
  ObjectDeleted error when network already removed during rescheduling

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  In some cases when concurrent rescheduling occurs, the following trace
  is observed:

  ERROR neutron.openstack.common.loopingcall [-] in fixed duration looping call
  TRACE neutron.openstack.common.loopingcall Traceback (most recent call last):
  TRACE neutron.openstack.common.loopingcall   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/loopingcall.py", 
line 76, in _inner
  TRACE neutron.openstack.common.loopingcall self.f(*self.args, **self.kw)
  TRACE neutron.openstack.common.loopingcall   File 
"/usr/lib/python2.7/dist-packages/neutron/db/agentschedulers_db.py", line 269, 
in remove_networks_from_down_agents
  TRACE neutron.openstack.common.loopingcall {'net': binding.network_id,
  TRACE neutron.openstack.common.loopingcall   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 239, in 
__get__
  TRACE neutron.openstack.common.loopingcall return 
self.impl.get(instance_state(instance), dict_)
  TRACE neutron.openstack.common.loopingcall   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 589, in 
get
  TRACE neutron.openstack.common.loopingcall value = callable_(state, 
passive)
  TRACE neutron.openstack.common.loopingcall   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/state.py", line 424, in 
__call__
  TRACE neutron.openstack.common.loopingcall 
self.manager.deferred_scalar_loader(self, toload)
  TRACE neutron.openstack.common.loopingcall   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 614, in 
load_scalar_attributes
  TRACE neutron.openstack.common.loopingcall raise 
orm_exc.ObjectDeletedError(state)
  TRACE neutron.openstack.common.loopingcall ObjectDeletedError: Instance 
'' has been deleted, or its row is 
otherwise not present.

  We need to avoid accessing the DB object after it has been deleted from the
  DB, as attribute access may trigger this exception.
  This issue terminates the periodic task that reschedules networks.
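
  A minimal sketch of the defensive pattern described (the iterable and the
  attribute names are hypothetical): read what is needed inside a try/except
  and treat a vanished row as already handled.

    import logging

    from sqlalchemy.orm import exc as orm_exc

    LOG = logging.getLogger(__name__)

    def log_dead_bindings(dead_bindings):
        for binding in dead_bindings:
            try:
                net_id = binding.network_id        # may raise if the row is gone
                agent_id = binding.dhcp_agent_id   # hypothetical attribute name
            except orm_exc.ObjectDeletedError:
                # Another worker already removed the binding; nothing to do.
                continue
            LOG.warning("Rescheduling network %(net)s away from agent %(agent)s",
                        {'net': net_id, 'agent': agent_id})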

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398754] Re: LBaas v1 Associate Monitor to Pool Fails

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398754

Title:
  LBaas v1 Associate Monitor to Pool Fails

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  Try to associate health monitor to pool in horizon, there's no monitor
  listed on page "Associate Monitor".

  Reproduce Procedure: 
  1.  Create Pool 
  2.  Add two members
  3.  Create Health Monitor
  4.  Click "Associate Monitor" button of pool resource
  5.  There's no monitor listed.

  ***
  At this point, use CLI to:
  1.  show pool, there's no monitor associated yet.
  +-------------------------+-------+
  | Field                   | Value |
  +-------------------------+-------+
  | health_monitors         |       |
  | health_monitors_status  |       |
  +-------------------------+-------+

  2. list monitors; there is an available monitor.
  $ neutron lb-healthmonitor-list
  +--------------------------------------+------+----------------+
  | id                                   | type | admin_state_up |
  +--------------------------------------+------+----------------+
  | f5e764f0-eceb-4516-9919-7806f409c1ae | HTTP | True           |
  +--------------------------------------+------+----------------+

  3. Associate monitor to pool. Succeeded.
  $ neutron lb-healthmonitor-associate  f5e764f0-eceb-4516-9919-7806f409c1ae  
mypool
  Associated health monitor f5e764f0-eceb-4516-9919-7806f409c1ae

  *

  Base on above info, it should be a horizon bug. Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433849] Re: Horizon crashes when the user clicks the "Confirm resize" action multiple times while an instance is resizing

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1433849

Title:
  Horizon crashes when the user clicks the "Confirm resize" action multiple
  times while an instance is resizing

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  Steps to reproduce:

  1. Boot an instance
  2. Resize that instance
  3. When the instance is in the Confirm resize state, click the "Confirm
  resize" action.
  4. After the action is clicked once, the "Confirm resize" action still shows
  up; click it a couple of times.
  5. You will see Horizon crash with the following error:

  Cannot 'confirmResize' instance d1ba0033-4ce7-431d-a9dc-754fe0631fef
  while it is in vm_state active (HTTP 409)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1433849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440834] Re: Unit test tree does not match the structure of the code tree

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440834

Title:
  Unit test tree does not match the structure of the code tree

Status in networking-brocade:
  Fix Released
Status in networking-cisco:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The structure of the unit test tree does not currently correspond to
  the structure of the code under test.  This makes it difficult for a
  developer to find the unit tests for a given module and complicates
  non-mechanical evaluation of coverage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-brocade/+bug/1440834/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441400] Re: Move N1kv section from neutron tree's ml2_conf_cisco.ini to stackforge repo

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441400

Title:
  Move N1kv section from neutron tree's ml2_conf_cisco.ini to stackforge
  repo

Status in networking-cisco:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The change includes moving the N1kv section from the neutron tree's
  ml2_conf_cisco.ini file to the stackforge/networking-cisco project.

  The change will also include addition of a new parameter --
  'sync_interval' to the N1kv section, after the section is moved to the
  stackforge repo.

  sync_interval: configurable parameter for Neutron - VSM (controller)
  sync duration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1441400/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443480] Re: Some of the neutron functional tests are failing with import error after unit test tree reorganization

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443480

Title:
  Some of the neutron functional tests are failing with import error
  after unit test tree reorganization

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Some of the neutron functional tests are failing with import error
  after unit test tree reorganization

  
   Traceback (most recent call last):
  ImportError: Failed to import test module: 
neutron.tests.functional.scheduler.test_dhcp_agent_scheduler
  Traceback (most recent call last):
File "/usr/lib64/python2.7/unittest/loader.py", line 254, in _find_tests
  module = self._get_module_from_name(name)
File "/usr/lib64/python2.7/unittest/loader.py", line 232, in 
_get_module_from_name
  __import__(name)
File "neutron/tests/functional/scheduler/test_dhcp_agent_scheduler.py", 
line 24, in 
  from neutron.tests.unit import test_dhcp_scheduler as test_dhcp_sch
  ImportError: cannot import name test_dhcp_scheduler

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442343] Re: Mapping openstack_project attribute in k2k assertions with different domains

2015-07-23 Thread Alan Pevec
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1442343

Title:
  Mapping openstack_project attribute in k2k assertions with different
  domains

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  New

Bug description:
  We can have two projects with the same name in different domains. So
  if we have a "Project A" in "Domain X" and a "Project A" in "Domain
  Y", there is no way to differ what "Project A" is being used in a SAML
  assertion generated by this IdP (we have only the openstack_project
  attribute in the SAML assertion).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1442343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446405] Re: Test discovery is broken for the api and functional paths

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446405

Title:
  Test discovery is broken for the api and functional paths

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The following failures in test discovery were noted:

  https://review.openstack.org/#/c/169962/
  https://bugs.launchpad.net/neutron/+bug/1443480

  It was eventually determined that the use of the unittest discovery
  mechanism to perform manual discovery in package init for the api and
  functional subtrees was to blame.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443916] Re: neutron.tests.functional.agent.test_ovs_flows fails with RuntimeError

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443916

Title:
  neutron.tests.functional.agent.test_ovs_flows fails with RuntimeError

Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  Both test_arp_spoof_allowed_address_pairs and
  test_arp_spoof_disable_port_security fail occasionally as follows:

  http://logs.openstack.org/80/172480/1/gate/gate-neutron-dsvm-
  functional/b72c1c7/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/functional/agent/test_ovs_flows.py", line 68, in 
test_arp_spoof_allowed_address_pairs
  pinger.assert_ping(self.dst_addr)
File "neutron/tests/functional/agent/linux/helpers.py", line 113, in 
assert_ping
  self._ping_destination(dst_ip)
File "neutron/tests/functional/agent/linux/helpers.py", line 110, in 
_ping_destination
  '-W', self._timeout, dest_address])
File "neutron/agent/linux/ip_lib.py", line 580, in execute
  extra_ok_codes=extra_ok_codes, **kwargs)
File "neutron/agent/linux/utils.py", line 137, in execute
  raise RuntimeError(m)
  RuntimeError: 
  Command: ['ip', 'netns', 'exec', 'func-75b049b8-cbf8-4dea-8a1d-4479b0b33659', 
'ping', '-c', 1, '-W', 1, '192.168.0.2']
  Exit code: 1
  Stdin: 
  Stdout: PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.

  --- 192.168.0.2 ping statistics ---
  1 packets transmitted, 0 received, 100% packet loss, time 0ms

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443916/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447288] Re: create volume from snapshot using horizon error

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1447288

Title:
  create volume from snapshot using horizon error

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  When I try to create a volume from snapshot using the OpenStack UI it
  creates a new raw volume with correct size, but it's not created from
  a snapshot.

  $ cinder show 9d5d0ca1-3dd0-47b4-b9f4-86f97d65724e
  
  +----------------------------------------+--------------------------------------+
  | Property                               | Value                                |
  +----------------------------------------+--------------------------------------+
  | attachments                            | []                                   |
  | availability_zone                      | nova                                 |
  | bootable                               | false                                |
  | consistencygroup_id                    | None                                 |
  | created_at                             | 2015-04-22T18:08:53.00               |
  | description                            | None                                 |
  | encrypted                              | False                                |
  | id                                     | 9d5d0ca1-3dd0-47b4-b9f4-86f97d65724e |
  | metadata                               | {}                                   |
  | multiattach                            | False                                |
  | name                                   | v2s2                                 |
  | os-vol-host-attr:host                  | ubuntu@ns_nfs-1#nfs                  |
  | os-vol-mig-status-attr:migstat         | None                                 |
  | os-vol-mig-status-attr:name_id         | None                                 |
  | os-vol-tenant-attr:tenant_id           | 4968203f183641b283e111a2f2db         |
  | os-volume-replication:driver_data      | None                                 |
  | os-volume-replication:extended_status  | None                                 |
  | replication_status                     | disabled                             |
  | size                                   | 2                                    |
  | snapshot_id                            | None                                 |
  | source_volid                           | None                                 |
  | status                                 | available                            |
  | user_id                                | c8163c5313504306b40377a0775e9ffa     |
  | volume_type                            | None                                 |
  +----------------------------------------+--------------------------------------+

  But when I use cinder command line everything seems to be fine.

  $ cinder create --snapshot-id 382a0e1d-168b-4cf6-a9ff-715d8ad385eb 1
  
  +----------------------------------------+--------------------------------------+
  | Property                               | Value                                |
  +----------------------------------------+--------------------------------------+
  | attachments                            | []                                   |
  | availability_zone                      | nova                                 |
  | bootable                               | false                                |
  | consistencygroup_id                    | None                                 |
  | created_at                             | 2015-04-22T18:15:08.00               |
  | description                            | None                                 |
  | encrypted                              | False                                |
  | id                                     | b33ec1ef-9d29-4231-8d15-8cf22ca3c502 |
  | metadata                               | {}                                   |
  | multiattach                            | False                                |
  | name                                   | None                                 |
  | os-vol-host-attr:host                  | ubuntu@ns_nfs-1#nfs                  |
  | os-vol-mig-status-attr:migstat         | None                                 |
  | os-vol-mig-status-attr:name_id         | None                                 |
  | os-vol-tenant-attr:tenant_id           | 4968203f183641b283e111a2f2db         |
  | os-volume-replication:driver_data      | None                                 |
  | os-volume-replication:extended_status  | None                                 |
  | replication_status                     | disabled                             |
  | size                                   | 1                                    |

[Yahoo-eng-team] [Bug 1450535] Re: [data processing] Create node group and cluster templates can fail

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450535

Title:
  [data processing] Create node group and cluster templates can fail

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  * Probably a kilo-backport candidate *

  In an environment that uses a rewrite from / to /dashboard (or whatever),
  trying to create a node group, cluster template or job will fail when we try
  to do a urlresolver.resolve on the path. That operation isn't even necessary,
  since the required kwargs are already available in
  request.resolver_match.kwargs.
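
  A minimal sketch of that change (Django view context; the helper name is
  hypothetical): read the kwargs Django already resolved for the request
  instead of re-resolving the possibly rewritten path.

    def extract_view_kwargs(request):
        """Return the kwargs attached by Django's URL resolution for this
        request; this keeps working when the deployment rewrites / to
        /dashboard, unlike calling resolve() on request.path."""
        match = request.resolver_match
        return dict(match.kwargs) if match else {}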

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447034] Re: DVR: floating IPs not working if initially associated with non-bound port

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1447034

Title:
  DVR: floating IPs not working if initially associated with non-bound
  port

Status in neutron:
  Fix Committed
Status in neutron kilo series:
  New

Bug description:
  The floating agent gw port is only created for a compute host when a floating
  ip is associated with a VM residing on this host [1].
  If a neutron port is associated with a floating ip before booting a VM with
  that port, the floating agent gw port won't be created (in case this is the
  first VM scheduled to the compute host).
  In that case the l3 agent on the compute host will receive router info with a
  floating ip but no floating agent gw port: it will subscribe the router to
  the fip namespace [2] but the namespace itself won't be created [3]:
   [dvr_router.py]

  def create_dvr_fip_interfaces(self, ex_gw_port):
      floating_ips = self.get_floating_ips()
      fip_agent_port = self.get_floating_agent_gw_interface(
          ex_gw_port['network_id'])
      LOG.debug("FloatingIP agent gateway port received from the plugin: "
                "%s", fip_agent_port)
      if floating_ips:
          is_first = self.fip_ns.subscribe(self.router_id)
          if is_first and fip_agent_port:
              if 'subnets' not in fip_agent_port:
                  LOG.error(_LE('Missing subnet/agent_gateway_port'))
              else:
                  self.fip_ns.create_gateway_port(fip_agent_port)
      ...


  Since the l3 agent has already subscribed the router to the fip namespace, it
  won't ever create the fip namespace for that router - this results in
  floating ips not working anymore for ANY subsequent VMs on that compute host,
  no matter whether the floating ip was associated with a VM or with an unbound
  port (later associated with a VM).

  I see two possible fixes:
   - add a callback for the PORT UPDATE event to the dvr server code to react
  to a port with a floating ip being associated with a VM.
  This seems suboptimal given the many checks needed in the callback, which
  will be called fairly often.

   - the l3 agent on a compute host should request floating agent gw port
  creation over rpc in case it receives router info with floating ips but
  no floating agent gateway. There is already such a method in the agent-to-
  plugin rpc interface, which now seems unused anywhere except tests.
  I'm not seeing any cons here, so that's what I'm going to propose.

  [1] 
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvr_db.py#L214-L225
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_router.py#L502
  [3] 
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/dvr_router.py#L503-L507

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1447034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449260] Re: [OSSA 2015-009] Sanitation of metadata label (CVE-2015-3988)

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449260

Title:
  [OSSA 2015-009] Sanitation of metadata label (CVE-2015-3988)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  1) Start up Horizon
  2) Go to Images
  3) Next to an image, pick "Update Metadata"
  4) From the dropdown button, select "Update Metadata"
  5) In the Custom box, enter a value with some HTML like 
'alert(1)//', click +
  6) On the right-hand side, give it a value, like "ee"
  7) Click "Save"
  8) Pick "Update Metadata" for the image again, the page will fail to load, 
and the JavaScript console says:

  SyntaxError: invalid property id
  var existing_metadata = {"

  An alternative is if you change the URL to update_metadata for the
  image (for example,
  
http://192.168.122.239/admin/images/fa62ba27-e731-4ab9-8487-f31bac355b4c/update_metadata/),
  it will actually display the alert box and a bunch of junk.

  I'm not sure if update_metadata is actually a page, though... can't
  figure out how to get to it other than typing it in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453074] Re: [OSSA 2015-010] help_text parameter of fields is vulnerable to arbitrary html injection (CVE-2015-3219)

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453074

Title:
  [OSSA 2015-010] help_text parameter of fields is vulnerable to
  arbitrary html injection (CVE-2015-3219)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  The Field class help_text attribute is vulnerable to code injection if
  the text is somehow taken from the user input.

  The Heat UI allows creating stacks from user input which defines parameters.
  Those parameters are then converted to input fields, which are vulnerable.

  The heat stack example exploit:

  description: Does not matter
  heat_template_version: '2013-05-23'
  outputs: {}
  parameters:
    param1:
  type: string
  label: normal_label
  description: hack=">alert('YOUR HORIZON IS PWNED')"
  resources: {}

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1453074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456171] Re: [sahara] relaunch job fail if job is created by saharaclient and no input args

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1456171

Title:
  [sahara] relaunch job fail if job is created by saharaclient and no
  input args

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  If we launch a job from a job_template without input args via saharaclient
  directly (not in Horizon), and then relaunch it in Horizon, you will get an
  error. This is because Horizon assumes job_configs always has an 'args'
  element. You can refer to
  horizon/openstack_dashboard/dashboards/project/data_processing/jobs/workflows/launch.py,
  line 190:
  job_args = json.dumps(job_configs['args'])
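
  A minimal sketch of the defensive fix implied above: treat 'args' as optional
  instead of indexing it.

    import json

    job_configs = {}  # e.g. a job launched via saharaclient with no input args
    # 'args' may be absent, so default to an empty list.
    job_args = json.dumps(job_configs.get('args', []))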

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1456171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458769] Re: horizon can't update subnet ip pool

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458769

Title:
  horizon can't update subnet ip pool

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  The update of the IP pool in a subnet reports success, but a refresh shows
  the data is not changed.

  steps to recreate:
  1. admin/network/subnet
  2. edit subnet/details/allocation pools
  3. save the changes
  4. check the subnet detail after success message shows

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462544] Re: Create Image: Uncaught TypeError: $form.on is not a function

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462544

Title:
  Create Image: Uncaught TypeError: $form.on is not a function

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  on master June 5 2015

  Project -> Images -> Create Image dialog

  Stacktrace
  
  horizon.datatables.disable_actions_on_submit  @   horizon.tables.js:185
  (anonymous function)  @   horizon.modals.js:26
  jQuery.extend.each@   jquery.js:657
  jQuery.fn.jQuery.each @   jquery.js:266
  horizon.modals.initModal  @   horizon.modals.js:25
  (anonymous function)  @   horizon.modals.js:177
  jQuery.event.dispatch @   jquery.js:5095
  jQuery.event.add.elemData.handle  @   jquery.js:4766
  jQuery.event.trigger  @   jquery.js:5007
  jQuery.event.trigger  @   jquery-migrate.js:493
  (anonymous function)  @   jquery.js:5691
  jQuery.extend.each@   jquery.js:657
  jQuery.fn.jQuery.each @   jquery.js:266
  jQuery.fn.extend.trigger  @   jquery.js:5690
  horizon.modals.success@   horizon.modals.js:48
  horizon.modals._request.$.ajax.success@   horizon.modals.js:342
  jQuery.Callbacks.fire @   jquery.js:3048
  jQuery.Callbacks.self.fireWith@   jquery.js:3160
  done  @   jquery.js:8235
  jQuery.ajaxTransport.send.callback@   jquery.js:8778

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462544/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460408] Re: fip namespace is not created when doing migration from legacy router to DVR

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460408

Title:
  fip namespace is not created when doing migration from legacy router
  to DVR

Status in neutron:
  Fix Committed
Status in neutron kilo series:
  New

Bug description:
  When creating a legacy router and migrating to a distributed router
  'fip' namespaces are not created on the compute nodes.

  
  Error from L3 Agent log:
  ===
  2015-05-31 13:35:55.935 103776 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'fip-9a0e39c3-97a1-4a93-8ce7-fd7d804fae2b', 'ip', '-o', 
'link', 'show', 'fpr-2965187d-4'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
  2015-05-31 13:35:55.991 103776 DEBUG neutron.agent.linux.utils [-]
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'fip-9a0e39c3-97a1-4a93-8ce7-fd7d804fae2b', 'ip', '-o', 
'link', 'show', 'fpr-2965187d-4']
  Exit code: 1
  Stdin:
  Stdout:
  Stderr: Cannot open network namespace 
"fip-9a0e39c3-97a1-4a93-8ce7-fd7d804fae2b": No such file or directory
   execute /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:134
  2015-05-31 13:35:55.992 103776 DEBUG neutron.agent.l3.router_info [-] No 
Interface for floating IPs router: 2965187d-452c-4951-88eb-4053cea88dae 
process_floating_ip_addresses 
/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py:229

  How to reproduce
  ===
  1. Create a legacy router 
  # neutron router-create --distributed=False router1

  2. Associate the router with an internal network 
  # neutron router-interface-add router1 

  3. Set the router's gateway
  # neutron router-gateway-set router1 

  4. Launch an instance
   # nova boot --flavor m1.small --image fedora --key-name cloudkey  --nic 
net-id= vm1

  5. Associate the Instance with a floating IP

  6. Check connectivity to an external network

  7. Migrate the router to a distributed router1
  # neutron router-update --admin_state_up=False router1
  # neutron router-update --distributed=True router
  # neutron router-update --admin_state_up=True router1

  8. Verify the 'snat' namespace is created on the 'dvr_snat' node but the
  'fip' namespaces aren't created on the compute nodes.

  Version
  ==
  RHEL 7.1
  python-neutron-2015.1.0-1.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460408/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459446] Re: can't update dns for an ipv6 subnet

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1459446

Title:
  can't update dns for an ipv6 subnet

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  It's not possible to update ipv6 subnet info using Horizon. To
  recreate:

  Setup: create a new network (Admin->System->Networks->Create Network)
  create an ipv6 subnet in that network
  (new network Detail->Create Subnet)
  Network Address: fdc5:f49e:fe9e::/64 
  IP Version IPv6
  Gateway IP: fdc5:f49e:fe9e::1
  click create

  To view the problem: Edit the subnet
  (Admin->System->Networks>[detail]->Edit Subnet->Subnet Details
  attempt to add a DNS name server
  fdc5:f49e:fe9e::3

  An error is returned: "Error: Failed to update subnet
  "fdc5:f49e:fe9e::/64": Cannot update read-only attribute ipv6_ra_mode"

  however, it's possible to make the update using
  neutron subnet-update --dns-nameserver [ip] [id]

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1459446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463363] Re: NSX-mh: Decimal RXTX factor not honoured

2015-07-23 Thread Alan Pevec
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463363

Title:
  NSX-mh: Decimal RXTX factor not honoured

Status in neutron:
  In Progress
Status in neutron kilo series:
  New
Status in vmware-nsx:
  Fix Committed

Bug description:
  A decimal RXTX factor, which is allowed by nova flavors, is not
  honoured by the NSX-mh plugin, but simply truncated to integer.

  To reproduce:

  * Create a neutron queue
  * Create a neutron net / subnet using the queue
  * Create a new flavor which uses an RXTX factor other than an integer value
  * Boot a VM on the net above using the flavor
  * View the NSX queue for the VM's VIF -- notice it does not have the RXTX 
factor applied correctly (for instance if it's 1.2 it does not multiply it at 
all, if it's 3.4 it applies a RXTX factor of 3)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458928] Re: jshint failing on angular js in stable/kilo

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458928

Title:
  jshint failing on angular js in stable/kilo

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  http://logs.openstack.org/56/183656/1/check/gate-horizon-
  jshint/cd75430/console.html.gz#_2015-05-15_19_27_08_073

  Looks like this started after 5/14, since there was a passing job
  before that:

  http://logs.openstack.org/21/183321/1/check/gate-horizon-
  jshint/90ca4dd/console.html.gz#_2015-05-14_22_50_02_203

  The only difference I see in the external libraries used is that tox went
  from 1.9.2 (passing) to tox 2.0.1 (failing). So I'm thinking there is
  something wrong with how the environment is defined for the jshint runs,
  because it appears that .jshintrc isn't getting used; see the workaround
  fix here:

  https://review.openstack.org/#/c/185172/

  From the tox 2.0 changelog:

  https://testrun.org/tox/latest/changelog.html

  (new) introduce environment variable isolation: tox now only passes
  the PATH and PIP_INDEX_URL variable from the tox invocation
  environment to the test environment and on Windows also SYSTEMROOT,
  PATHEXT, TEMP and TMP whereas on unix additionally TMPDIR is passed.
  If you need to pass through further environment variables you can use
  the new passenv setting, a space-separated list of environment
  variable names. Each name can make use of fnmatch-style glob patterns.
  All environment variables which exist in the tox-invocation
  environment will be copied to the test environment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458928/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459625] Re: TemplateDoesNotExist when click manage/unmanage volume

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1459625

Title:
  TemplateDoesNotExist when click manage/unmanage volume

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  manage/unmanage template includes wrong templates:

  project/volumes/volumes/_unmanage_volume.html
  project/volumes/volumes/_manage_volume.html

  it should be admin/volumes/... instead of project/volumes/...

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1459625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458803] Re: Allocation pool not updated upon subnet edit from Horizon

2015-07-23 Thread Alan Pevec
*** This bug is a duplicate of bug 1458769 ***
https://bugs.launchpad.net/bugs/1458769

** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458803

Title:
  Allocation pool not updated upon subnet edit from Horizon

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  Using Kilo.
  When I edit the allocation pool for a given subnet by "Editing" the subnet,
  the "Save" operation succeeds, but the "Allocation Pool" values aren't
  updated and the older values are still displayed (and used).
  Tried the same from the neutron CLI, and neutron subnet-update
  --allocation-pool start=,end= works fine, hence the assumption that this is
  a Horizon bug.
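
  For comparison, the CLI call that behaves correctly looks roughly like this
  (the subnet ID and address range are hypothetical placeholders):

  neutron subnet-update <subnet-id> --allocation-pool start=192.168.1.10,end=192.168.1.200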

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458803/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464991] Re: Errors are not handled correctly during image updates

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1464991

Title:
  Errors are not handled correctly during image updates

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  To reproduce:

  Log in to Horizon as an unprivileged user. Navigate to image editing and
  try to mark an image as public.

  Observed result: an error message stating "Danger: There was an error
  submitting the form. Please try again."
  The logs indicate that an UnboundLocalError occurs:

File "/Users/teferi/murano/horizon/openstack_dashboard/api/glance.py", line 
129, in image_update
  return image
  UnboundLocalError: local variable 'image' referenced before assignment

  This is because the image variable is not handled correctly in the
  image_update function.
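
  The failure mode is the classic pattern sketched below (a self-contained
  toy reproduction, not the actual glance.py code; update_in_glance stands in
  for the real API call):

  def image_update(image_id, **kwargs):
      try:
          # If the backend call fails (e.g. an unprivileged user tries to
          # make an image public), 'image' is never assigned.
          image = update_in_glance(image_id, **kwargs)
      except Exception:
          pass  # the error is swallowed instead of being re-raised
      return image  # UnboundLocalError when the try block failed

  def update_in_glance(image_id, **kwargs):
      raise RuntimeError("403 Forbidden")  # stand-in for the failing call

  image_update("deadbeef", is_public=True)  # -> UnboundLocalError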

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1464991/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466744] Re: Integration test test_image_register_unregister failing gate

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1466744

Title:
  Integration test test_image_register_unregister failing gate

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) kilo series:
  New

Bug description:
  The following test is failing on the gate.

  Traceback (most recent call last):
    File "/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_sahara_image_registry.py", line 34, in test_image_register_unregister
      "Image was not registered.")
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/unittest2/case.py", line 678, in assertTrue
      raise self.failureException(msg)
  AssertionError: False is not true : Image was not registered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1466744/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464461] Re: delete action always cause error ( in kilo)

2015-07-23 Thread Alan Pevec
** Also affects: horizon/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1464461

Title:
  delete action always cause error ( in kilo)

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) kilo series:
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  When I perform any delete action (delete router, delete network, etc.)
  in a Japanese environment, I always get an error page.
  horizon error logs:
  -
  Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 132, in get_response
      response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 52, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 84, in dec
      return view_func(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 71, in view
      return self.dispatch(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 89, in dispatch
      return handler(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 223, in post
      return self.get(request, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 159, in get
      handled = self.construct_tables()
    File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 150, in construct_tables
      handled = self.handle_table(table)
    File "/usr/lib/python2.7/site-packages/horizon/tables/views.py", line 125, in handle_table
      handled = self._tables[name].maybe_handle()
    File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 1640, in maybe_handle
      return self.take_action(action_name, obj_id)
    File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 1482, in take_action
      response = action.multiple(self, self.request, obj_ids)
    File "/usr/lib/python2.7/site-packages/horizon/tables/actions.py", line 302, in multiple
      return self.handle(data_table, request, object_ids)
    File "/usr/lib/python2.7/site-packages/horizon/tables/actions.py", line 828, in handle
      exceptions.handle(request, ignore=ignore)
    File "/usr/lib/python2.7/site-packages/horizon/exceptions.py", line 364, in handle
      six.reraise(exc_type, exc_value, exc_traceback)
    File "/usr/lib/python2.7/site-packages/horizon/tables/actions.py", line 817, in handle
      (self._get_action_name(past=True), datum_display))
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 0: ordinal not in range(128)
  -

  It occurs in Japanese, Korean, Chinese, French, and German, but does not
  occur in English or Spanish.
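
  One plausible minimal reproduction of the underlying Python 2 behaviour
  (a toy example, not Horizon code; the strings are made up): interpolating
  a UTF-8 byte string into a unicode format string forces an implicit decode
  with the ascii codec.

  # -*- coding: utf-8 -*-
  # Python 2 only.
  action_name = 'ルーターを削除しました'  # str (bytes) holding UTF-8 text
  datum_display = u'router1'              # unicode object
  message = u'%s %s' % (action_name, datum_display)
  # UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 0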

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1464461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

