[Yahoo-eng-team] [Bug 1494961] Re: router_info's _get_existing_devices execution time is O(n)

2015-11-04 Thread Brad Behle
Marked this as Invalid since this isn't really a bug, just a request to
investigate improving this code. I've done the investigation and couldn't
find a good way to improve it.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494961

Title:
  router_info's _get_existing_devices execution time is O(n)

Status in neutron:
  Invalid

Bug description:
  router_info's _get_existing_devices execution time increases as the
  number of routers scheduled to a network node increases. Ideally, this
  execution time should be O(1) if possible.
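
  A rough sketch of the pattern in question (a simplification for
  illustration, not the actual neutron code): each call lists every device
  in a router's namespace, so the work per call grows with the number of
  interfaces, and the L3 agent makes one such call per router it hosts.

      def _get_existing_devices(ip_wrapper):
          ip_devices = ip_wrapper.get_devices()  # one linear scan per router
          return [dev.name for dev in ip_devices]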

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513242] [NEW] StaleDataError in disassociate_floatingips

2015-11-04 Thread Edgar Cantu
Public bug reported:

If a VM with a floating IP is deleted while the floating IP is concurrently
deleted (i.e. "nova delete" and "neutron floatingip-delete"), the VM may go
into an error state.

Nova receives a 500 error code from neutron because neutron fails to delete
the port, and the following stack trace is observed:
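
The flush in the trace is updating the floating IP row ({'router_id': None})
after the concurrent delete has already removed it, so zero rows match and
SQLAlchemy raises StaleDataError. A minimal, self-contained sketch of that
failure mode (assuming SQLAlchemy 1.x; independent of the neutron code):

    from sqlalchemy import Column, Integer, String, create_engine, text
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class FloatingIP(Base):
        __tablename__ = 'floatingips'
        id = Column(Integer, primary_key=True)
        router_id = Column(String)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    session.add(FloatingIP(id=1, router_id='r1'))
    session.commit()

    fip = session.query(FloatingIP).get(1)
    # Simulate the concurrent delete, bypassing the ORM identity map.
    session.execute(text("DELETE FROM floatingips WHERE id = 1"))
    fip.router_id = None  # the disassociate step
    session.flush()       # raises orm.exc.StaleDataError: 0 rows matched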

2015-11-04 21:48:37.447 17275 ERROR neutron.api.v2.resource 
[req-764172bc-4938-440d-9e6a-dd4c0f120f1b ] delete failed
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 83, 
in resource
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/base.py", line 490, in 
delete
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/oslo_db/api.py", line 131, in wrapper
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 
1272, in delete_port
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource context, id, 
do_notify=False)
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py", line 274, 
in disassociate_floatingips
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource 
do_notify=do_notify)
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/db/l3_db.py", line 1361, in 
disassociate_floatingips
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource context, 
port_id)
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/db/l3_db.py", line 1088, in 
disassociate_floatingips
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource 'router_id': 
None})
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 483, 
in __exit__
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource self.rollback()
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 
60, in __exit__
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource 
compat.reraise(exc_type, exc_value, exc_tb)
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 480, 
in __exit__
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource self.commit()
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 385, 
in commit
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource 
self._prepare_impl()
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 365, 
in _prepare_impl
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource 
self.session.flush()
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 
1986, in flush
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource 
self._flush(objects)
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 
2104, in _flush
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource 
transaction.rollback(_capture_exception=True)
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 
60, in __exit__
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource 
compat.reraise(exc_type, exc_value, exc_tb)
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 
2068, in _flush
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource 
flush_context.execute()
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 
"/opt/neutron/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 
372, in execute
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource 
rec.execute(self)
2015-11-04 21:48:37.447 17275 TRACE neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1513230] [NEW] Users have cross-tenant visibility on images

2015-11-04 Thread Mike
Public bug reported:

Using Kilo 2015.1.2 and Glance Client 0.17.0:

Using two users (demo in the demo tenant, alt_demo in the alt_demo tenant,
neither with the admin role), I am able to create an image with is_public set
to False as the demo user/tenant, and then show its data and use it to create
an instance as alt_demo:

> env | grep OS_
OS_PASSWORD=secret
OS_AUTH_URL=http://localhost:5000/v2.0
OS_USERNAME=demo
OS_TENANT_NAME=demo

> glance image-create --container-format bare --disk-format raw --is-public false --name demo_image
+--+--+
| Property | Value|
+--+--+
| checksum | None |
| container_format | bare |
| created_at   | 2015-11-04T21:33:14.00   |
| deleted  | False|
| deleted_at   | None |
| disk_format  | raw  |
| id   | 51215efe-3533-4128-a36f-a44e507df5d7 |
| is_public| False|
| min_disk | 0|
| min_ram  | 0|
| name | demo_image   |
| owner| None |
| protected| False|
| size | 0|
| status   | queued   |
| updated_at   | 2015-11-04T21:33:14.00   |
| virtual_size | None |
+--+--+

The image then does not appear in image-list:
> glance image-list
+--++-+--+---++
| ID                                   | Name               | Disk Format | Container Format | Size      | Status |
+--++-+--+---++
| 7eb66946-70c1-4d35-93d8-93a315710be9 | tempest_alt_image  | raw         | bare             | 947466240 | active |
| 50eccbfd-baf3-4f0e-a10d-c20292b01d9d | tempest_main_image | raw         | bare             | 947466240 | active |
+--++-+--+---++

With --all-tenants, it appears:
> glance image-list --all-tenants
+--++-+--+---++
| ID                                   | Name               | Disk Format | Container Format | Size      | Status |
+--++-+--+---++
| 51215efe-3533-4128-a36f-a44e507df5d7 | demo_image         | raw         | bare             |           | queued |
| 7eb66946-70c1-4d35-93d8-93a315710be9 | tempest_alt_image  | raw         | bare             | 947466240 | active |
| 50eccbfd-baf3-4f0e-a10d-c20292b01d9d | tempest_main_image | raw         | bare             | 947466240 | active |
| 8f1430dc-8fc0-467b-b006-acf6b481714e | test_snapshot      | raw         | bare             |           | active |
+--++-+--+---++

With image-show and the name, an error message:
> glance image-show demo_image
No image with a name or ID of 'demo_image' exists.

With image-show and the uuid, data:
> glance image-show 51215efe-3533-4128-a36f-a44e507df5d7
+--+--+
| Property | Value|
+--+--+
| container_format | bare |
| created_at   | 2015-11-04T21:33:14.00   |
| deleted  | False|
| disk_format  | raw  |
| id   | 51215efe-3533-4128-a36f-a44e507df5d7 |
| is_public| False|
| min_disk | 0|
| min_ram  | 0|
| name | demo_image   |
| protected| False|
| size | 0|
| status   | queued   |
| updated_at   | 2015-11-04T21:33:14.00   |
+--+--+

Now swap to alt_demo:
env | grep OS_
OS_PASSWORD=secret
OS_AUTH_URL=http://localhost:5000/v2.0
OS_USERNAME=alt_demo
OS_TENANT_NAME=alt_demo

Image list with --all-tenants shows the 

[Yahoo-eng-team] [Bug 1509477] Re: Create a Network-Arista release for liberty

2015-11-04 Thread Kyle Mestery
This is complete now:

https://pypi.python.org/pypi/networking_arista/2015.2.0

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509477

Title:
  Create a Network-Arista release for liberty

Status in networking-arista:
  New
Status in neutron:
  Fix Released

Bug description:
  Branch: stable/liberty
  New Tag: 2015.2

  The liberty release for networking-arista contains a number of new 
improvements such as:
  1. Support for multiple CVX instances
  2. Support for multiple neutron instances

  It also has fixes for:
  1. Shared routers
  2. Shared networks

  The complete list of changes in this release:
  1. 926ede02ab35ac44c6d8433d74293cf2be4b5d98: Migration of Arista drivers from neutron
  2. 03ce45a64cbc996480c6f750bb15c227c3bc1636: Removing unused dependency: discover
  3. 1aa15e8b725d984119bafc3226331a9dc6bb57c0: Fix port creation on shared networks
  4. 72c64a55c9243645e40adc1d6af21b6e2c8be303: Change ignore-errors to ignore_errors
  5. 4a1f122c1bcf64054ebd716256f599bc36a470b7: Fix a spelling typo in error message
  6. 36c441dbdc02404c4049d0f3b057a169fe857235: Fixed HA router network cleanup
  7. 3ac3b2e354fa2d3d615f7a46b171feaf273c181b: Using 'INTERNAL-TENANT-ID' as the network owner
  8. 61f937787db455738673c537d31c49a7b5401223: Supporting neutron HA.
  9. fa3e7fa64813edff85884f48a529594f03521382: Adding support for multiple EOS instances.
  10. 5dc17388f39a605334dda599e97d204796c32a57: Adding database migration scripts

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-arista/+bug/1509477/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513216] [NEW] Mismatched keystone api version produces cryptic 'Error: Openstack'

2015-11-04 Thread Andrew Bogott
Public bug reported:

The 'openstack' cli tool defaults to keystone version 2.0.  When pointed
to a v3 endpoint, it fails like this:

$ openstack service list
ERROR: openstack 

This can easily be resolved by setting OS_IDENTITY_API_VERSION=3 --
that's not obvious from the error message, though, and isn't even
obvious from log- and code-diving.

I propose that we actually detect the api version mismatch and error out
with a helpful message.
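
A sketch of one way the mismatch could be detected up front (an illustration
only, not openstackclient's actual code): Keystone's root document lists the
API versions the endpoint offers, so the client could compare before
authenticating. The naive URL parsing here is an assumption.

    import os
    import requests

    auth_url = os.environ['OS_AUTH_URL']        # e.g. http://host:5000/v2.0
    root = auth_url.rsplit('/v', 1)[0]          # strip the version suffix
    offered = requests.get(root).json()['versions']['values']
    ids = [v['id'] for v in offered]            # e.g. ['v3.4', 'v2.0']
    wanted = 'v' + os.environ.get('OS_IDENTITY_API_VERSION', '2.0')
    if not any(i.startswith(wanted) for i in ids):
        print("Identity API %s not offered here; endpoint supports: %s"
              % (wanted, ', '.join(ids)))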

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1513216

Title:
  Mismatched keystone api version produces cryptic 'Error: Openstack'

Status in OpenStack Identity (keystone):
  New

Bug description:
  The 'openstack' cli tool defaults to keystone version 2.0.  When
  pointed to a v3 endpoint, it fails like this:

  $ openstack service list
  ERROR: openstack 

  This can easily be resolved by setting OS_IDENTITY_API_VERSION=3 --
  that's not obvious from the error message, though, and isn't even
  obvious from log- and code-diving.

  I propose that we actually detect the api version mismatch and error
  out with a helpful message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1513216/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512937] Re: CREATE_FAILED status due to 'Resource CREATE failed: NotFound: resources.pool: No eligible backend for pool

2015-11-04 Thread Alexander Duyck
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1512937

Title:
  CREATE_FAILED status due to 'Resource CREATE failed: NotFound:
  resources.pool: No eligible backend for pool

Status in heat:
  Fix Committed
Status in neutron:
  Fix Released

Bug description:
  LB scenario tests seem to be failing with the following error.

  2015-11-03 20:55:53.906 | 2015-11-03 20:55:53.901 | 
heat_integrationtests.scenario.test_autoscaling_lb.AutoscalingLoadBalancerTest.test_autoscaling_loadbalancer_neutron
  2015-11-03 20:55:53.908 | 2015-11-03 20:55:53.902 | 

  2015-11-03 20:55:53.910 | 2015-11-03 20:55:53.905 | 
  2015-11-03 20:55:53.912 | 2015-11-03 20:55:53.906 | Captured traceback:
  2015-11-03 20:55:53.914 | 2015-11-03 20:55:53.908 | ~~~
  2015-11-03 20:55:53.915 | 2015-11-03 20:55:53.910 | Traceback (most 
recent call last):
  2015-11-03 20:55:53.918 | 2015-11-03 20:55:53.913 |   File 
"heat_integrationtests/scenario/test_autoscaling_lb.py", line 96, in 
test_autoscaling_loadbalancer_neutron
  2015-11-03 20:55:53.919 | 2015-11-03 20:55:53.914 | environment=env
  2015-11-03 20:55:53.921 | 2015-11-03 20:55:53.916 |   File 
"heat_integrationtests/scenario/scenario_base.py", line 56, in launch_stack
  2015-11-03 20:55:53.923 | 2015-11-03 20:55:53.918 | 
expected_status=expected_status
  2015-11-03 20:55:53.925 | 2015-11-03 20:55:53.920 |   File 
"heat_integrationtests/common/test.py", line 503, in stack_create
  2015-11-03 20:55:53.927 | 2015-11-03 20:55:53.922 | 
self._wait_for_stack_status(**kwargs)
  2015-11-03 20:55:53.929 | 2015-11-03 20:55:53.923 |   File 
"heat_integrationtests/common/test.py", line 321, in _wait_for_stack_status
  2015-11-03 20:55:53.931 | 2015-11-03 20:55:53.925 | fail_regexp):
  2015-11-03 20:55:53.933 | 2015-11-03 20:55:53.927 |   File 
"heat_integrationtests/common/test.py", line 288, in _verify_status
  2015-11-03 20:55:53.934 | 2015-11-03 20:55:53.929 | 
stack_status_reason=stack.stack_status_reason)
  2015-11-03 20:55:53.936 | 2015-11-03 20:55:53.930 | 
heat_integrationtests.common.exceptions.StackBuildErrorException: Stack 
AutoscalingLoadBalancerTest-448494246/077130e4-429c-44fb-887a-2ac5c0d7a9b2 is 
in CREATE_FAILED status due to 'Resource CREATE failed: NotFound: 
resources.pool: No eligible backend for pool 
9de892e7-ef89-4082-8c1e-3fbba0eea7f6'

  lbaas service seems to be exiting with an error.

   CRITICAL neutron [req-d0fcef08-11a3-4688-9e8c-80535c6d1da2 None None]
  ValueError: Empty module name

  http://logs.openstack.org/09/232709/7/check/gate-heat-dsvm-functional-
  orig-mysql/6ebcd1f/logs/screen-q-lbaas.txt.gz
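
  For context, "Empty module name" is what Python's import machinery raises
  when asked to import an empty dotted path, suggesting the lbaas agent tried
  to load a driver whose configured module path came out empty. A one-line
  illustration:

      import importlib
      importlib.import_module('')   # ValueError: Empty module name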

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1512937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513267] [NEW] network_data.json not found in openstack/2015-10-15/

2015-11-04 Thread Mathieu Gagné
Public bug reported:

The file "network_data.json" is not found in the folder
"openstack/2015-10-15/" of config drive, only in "openstack/latest/".

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513267

Title:
  network_data.json not found in openstack/2015-10-15/

Status in OpenStack Compute (nova):
  New

Bug description:
  The file "network_data.json" is not found in the folder
  "openstack/2015-10-15/" of config drive, only in "openstack/latest/".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1513267/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513279] [NEW] routing with dhcpv6-stateful addressing is broken

2015-11-04 Thread Ritesh Anand
Public bug reported:

Unable to ping the v6 address of a VM on a different network, using a legacy router.
The setup has one controller/network node and two compute nodes.

Steps:
0. Add security rules to allow ping traffic. 
neutron security-group-rule-create --protocol icmp --direction ingress 
94d41516-dab5-413c-9349-7c9bc3a09e75
1. create two networks.
2. create ipv4 subnet on each (for accessing vm).
3. create ipv6 subnet on each with dhcpv6-stateful addressing.
 neutron subnet-create dnet1 :1::1/64 --name d6sub1 --enable-dhcp 
--ip-version 6 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode 
dhcpv6-stateful
4. create a router (not distributed).
5. add interface to router on each of the four subnets.
6. boot a vm on both networks.
7. Log into the guest vm and configure the interface to receive an inet6 dhcp
address; use dhclient to get a v6 address.
8. Ping v6 address of the other guest vm. Fails!


ubuntu@dvm11:~$ ping6 :2::4
PING :2::4(:2::4) 56 data bytes
From :1::1 icmp_seq=1 Destination unreachable: Address unreachable
From :1::1 icmp_seq=2 Destination unreachable: Address unreachable
From :1::1 icmp_seq=3 Destination unreachable: Address unreachable


Note: As we need to modify interface settings and use dhclient, an ubuntu cloud
image was used. One may need to set the MTU to 1400 to communicate with the
ubuntu cloud image.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513279

Title:
  routing with dhcpv6-stateful addressing is broken

Status in neutron:
  New

Bug description:
  Unable to ping the v6 address of a VM on a different network, using a legacy router.
  The setup has one controller/network node and two compute nodes.

  Steps:
  0. Add security rules to allow ping traffic. 
  neutron security-group-rule-create --protocol icmp --direction ingress 
94d41516-dab5-413c-9349-7c9bc3a09e75
  1. create two networks.
  2. create ipv4 subnet on each (for accessing vm).
  3. create ipv6 subnet on each with dhcpv6-stateful addressing.
   neutron subnet-create dnet1 :1::1/64 --name d6sub1 --enable-dhcp 
--ip-version 6 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode 
dhcpv6-stateful
  4. create a router (not distributed).
  5. add interface to router on each of the four subnets.
  6. boot a vm on both networks.
  7. Log into the guest vm and configure the interface to receive an inet6 dhcp
address; use dhclient to get a v6 address.
  8. Ping v6 address of the other guest vm. Fails!

  
  ubuntu@dvm11:~$ ping6 :2::4
  PING :2::4(:2::4) 56 data bytes
  From :1::1 icmp_seq=1 Destination unreachable: Address unreachable
  From :1::1 icmp_seq=2 Destination unreachable: Address unreachable
  From :1::1 icmp_seq=3 Destination unreachable: Address unreachable

  
  Note: As we need to modify interface settings and use dhclient, an ubuntu cloud
image was used. One may need to set the MTU to 1400 to communicate with the
ubuntu cloud image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512937] Re: CREATE_FAILED status due to 'Resource CREATE failed: NotFound: resources.pool: No eligible backend for pool

2015-11-04 Thread Rabi Mishra
** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1512937

Title:
  CREATE_FAILED status due to 'Resource CREATE failed: NotFound:
  resources.pool: No eligible backend for pool

Status in heat:
  Fix Committed
Status in neutron:
  Fix Committed

Bug description:
  LB scenario tests seem to be failing with the following error.

  2015-11-03 20:55:53.906 | 2015-11-03 20:55:53.901 | 
heat_integrationtests.scenario.test_autoscaling_lb.AutoscalingLoadBalancerTest.test_autoscaling_loadbalancer_neutron
  2015-11-03 20:55:53.908 | 2015-11-03 20:55:53.902 | 

  2015-11-03 20:55:53.910 | 2015-11-03 20:55:53.905 | 
  2015-11-03 20:55:53.912 | 2015-11-03 20:55:53.906 | Captured traceback:
  2015-11-03 20:55:53.914 | 2015-11-03 20:55:53.908 | ~~~
  2015-11-03 20:55:53.915 | 2015-11-03 20:55:53.910 | Traceback (most 
recent call last):
  2015-11-03 20:55:53.918 | 2015-11-03 20:55:53.913 |   File 
"heat_integrationtests/scenario/test_autoscaling_lb.py", line 96, in 
test_autoscaling_loadbalancer_neutron
  2015-11-03 20:55:53.919 | 2015-11-03 20:55:53.914 | environment=env
  2015-11-03 20:55:53.921 | 2015-11-03 20:55:53.916 |   File 
"heat_integrationtests/scenario/scenario_base.py", line 56, in launch_stack
  2015-11-03 20:55:53.923 | 2015-11-03 20:55:53.918 | 
expected_status=expected_status
  2015-11-03 20:55:53.925 | 2015-11-03 20:55:53.920 |   File 
"heat_integrationtests/common/test.py", line 503, in stack_create
  2015-11-03 20:55:53.927 | 2015-11-03 20:55:53.922 | 
self._wait_for_stack_status(**kwargs)
  2015-11-03 20:55:53.929 | 2015-11-03 20:55:53.923 |   File 
"heat_integrationtests/common/test.py", line 321, in _wait_for_stack_status
  2015-11-03 20:55:53.931 | 2015-11-03 20:55:53.925 | fail_regexp):
  2015-11-03 20:55:53.933 | 2015-11-03 20:55:53.927 |   File 
"heat_integrationtests/common/test.py", line 288, in _verify_status
  2015-11-03 20:55:53.934 | 2015-11-03 20:55:53.929 | 
stack_status_reason=stack.stack_status_reason)
  2015-11-03 20:55:53.936 | 2015-11-03 20:55:53.930 | 
heat_integrationtests.common.exceptions.StackBuildErrorException: Stack 
AutoscalingLoadBalancerTest-448494246/077130e4-429c-44fb-887a-2ac5c0d7a9b2 is 
in CREATE_FAILED status due to 'Resource CREATE failed: NotFound: 
resources.pool: No eligible backend for pool 
9de892e7-ef89-4082-8c1e-3fbba0eea7f6'

  lbaas service seems to be exiting with an error.

   CRITICAL neutron [req-d0fcef08-11a3-4688-9e8c-80535c6d1da2 None None]
  ValueError: Empty module name

  http://logs.openstack.org/09/232709/7/check/gate-heat-dsvm-functional-
  orig-mysql/6ebcd1f/logs/screen-q-lbaas.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1512937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513280] [NEW] firewall tests failing in gate-neutron-dsvm-api due to over quota

2015-11-04 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/96/237896/4/gate/gate-neutron-dsvm-
api/6e1d5c7/logs/screen-q-svc.txt.gz#_2015-11-04_20_14_34_810

2015-11-04 20:14:34.810 INFO neutron.api.v2.resource [req-da6d1e90-412c-
4c4b-b1ea-b4c5438c9665 FWaaSExtensionTestJSON-1462602448
FWaaSExtensionTestJSON-1008160789] create failed (client error): Quota
exceeded for resources: ['firewall_policy'].

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message:%5C%22create%20failed%20(client%20error):%20Quota%20exceeded%20for%20resources:%20%5B'firewall_policy'%5D%5C%22%20AND%20tags:%5C%22screen-q-svc.txt%5C%22

52 hits in 48 hours, check and gate, all failures.

** Affects: neutron
 Importance: Undecided
 Assignee: Matt Riedemann (mriedem)
 Status: In Progress


** Tags: fwaas gate-failure

** Tags added: fwaas gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513280

Title:
  firewall tests failing in gate-neutron-dsvm-api due to over quota

Status in neutron:
  In Progress

Bug description:
  http://logs.openstack.org/96/237896/4/gate/gate-neutron-dsvm-
  api/6e1d5c7/logs/screen-q-svc.txt.gz#_2015-11-04_20_14_34_810

  2015-11-04 20:14:34.810 INFO neutron.api.v2.resource [req-da6d1e90
  -412c-4c4b-b1ea-b4c5438c9665 FWaaSExtensionTestJSON-1462602448
  FWaaSExtensionTestJSON-1008160789] create failed (client error): Quota
  exceeded for resources: ['firewall_policy'].

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message:%5C%22create%20failed%20(client%20error):%20Quota%20exceeded%20for%20resources:%20%5B'firewall_policy'%5D%5C%22%20AND%20tags:%5C%22screen-q-svc.txt%5C%22

  52 hits in 48 hours, check and gate, all failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513280/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513313] [NEW] create vip failed for unbound method get_device_name() must be called with OVSInterfaceDriver instance as first argument

2015-11-04 Thread Kai Qiang Wu(Kennan)
Public bug reported:

We found our gate failing with the following information:

3:42.778 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager 
[req-ebb92ee8-2998-4a50-baf1-8123ce76b071 admin admin] Create vip 
e3152b05-2c41-40ac-9729-1756664f437e failed on device driver haproxy_ns
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 221, in create_vip
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
driver.create_vip(vip)
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 348, in create_vip
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self._refresh_device(vip['pool_id'])
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 344, in _refresh_device
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager if not 
self.deploy_instance(logical_config) and self.exists(pool_id):
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
254, in inner
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager return f(*args, 
**kwargs)
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 337, in deploy_instance
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self.create(logical_config)
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 92, in create
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
logical_config['vip']['address'])
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 248, in _plug
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager interface_name = 
self.vif_driver.get_device_name(Wrap(port))
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager TypeError: unbound 
method get_device_name() must be called with OVSInterfaceDriver instance as 
first argument (got Wrap instance instead)
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager
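
A minimal illustration of this Python 2 failure mode (a sketch, not the
neutron code): the agent ended up with the driver *class* where a driver
*instance* was expected.

    class OVSInterfaceDriver(object):
        def get_device_name(self, port):
            # Placeholder body; real drivers derive a tap device name.
            return 'tap' + port['id'][:11]

    class Wrap(object):
        pass

    vif_driver = OVSInterfaceDriver    # bug: class stored, never instantiated
    try:
        vif_driver.get_device_name(Wrap())
    except TypeError as e:
        print(e)   # unbound method get_device_name() must be called with
                   # OVSInterfaceDriver instance as first argument ...

    vif_driver = OVSInterfaceDriver()  # instantiating the driver fixes it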

** Affects: neutron
 Importance: Undecided
 Assignee: Kai Qiang Wu(Kennan) (wkqwu)
 Status: New

** Project changed: nova-loadbalancer => neutron

** Changed in: neutron
 Assignee: (unassigned) => Kai Qiang Wu(Kennan) (wkqwu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513313

Title:
  create vip failed for unbound method get_device_name() must be called
  with OVSInterfaceDriver instance as first argument

Status in neutron:
  New

Bug description:
  We found our gate failing with the following information:

  3:42.778 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager 
[req-ebb92ee8-2998-4a50-baf1-8123ce76b071 admin admin] Create vip 
e3152b05-2c41-40ac-9729-1756664f437e failed on device driver haproxy_ns
  2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
  2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 221, in create_vip
  2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
driver.create_vip(vip)
  2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 348, in create_vip
  2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self._refresh_device(vip['pool_id'])
  2015-11-05 03:23:42.778 30474 

[Yahoo-eng-team] [Bug 1510817] Re: stable/liberty branch creation request for networking-midonet

2015-11-04 Thread YAMAMOTO Takashi
** Changed in: networking-midonet
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1510817

Title:
  stable/liberty branch creation request for networking-midonet

Status in networking-midonet:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Please cut stable/liberty branch for networking-midonet
  on commit 3943328ffa6753a88b82ac163b3c1023eee4a884.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1510817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Question #273752]: Questions of the way that multipip solve python package conflicts

2015-11-04 Thread Jiexi Zha
New question #273752 on anvil:
https://answers.launchpad.net/anvil/+question/273752

Hi, I recently used Anvil to build openstack kilo packages on a CentOS 7.1
server. All bootstrap/prepare/build steps went well and the built rpms were
saved in local repos.

But when I tried to use some packages to deploy openstack, dependency conflicts 
occurred.

For example,
Some openstack components (e.g. glance) require: pbr>=0.6,!=0.7,<1.0 and
sqlalchemy-migrate>=0.9.5

In my case, the dependency package sqlalchemy-migrate==0.10.0 was chosen and
built. But sqlalchemy-migrate 0.10.0 requires pbr>=1.3.
-

I know Anvil won't find this issue, since it only solves a one-level dependency
chain for openstack components. So I tried adding a new requirements.txt file to
pin the version of some packages, e.g. sqlalchemy-migrate==0.9.6.

Anvil did find this potential conflict (there are 4 sqlalchemy-migrate>=0.9.5
and 1 sqlalchemy-migrate==0.9.6), but the result surprised me: Anvil insists
that 'sqlalchemy-migrate>=0.9.5' should be the best match just because it has
more requests!
-

I'm a little confused by the scoring/best-match logic here; shouldn't we try
our best to find the intersection of all requirements? If we cannot find an
intersection, then it's a severe conflict, as in the examples below.

For example,
- expected: a>2
  requirements:
- a>1
- a>2
- expected: a>=1.5,<2
  requirements:
- a<3
- a<2
- a>=1.5
- expected: CONFLICT!!
  requirements:
- a < 1
- a > 2

-

I've already hacked the multipip tool with my 'find_intersection' method; I
just want to understand why Anvil designed the scoring and best-match logic
this way.
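
For what it's worth, a sketch of such an intersection check using the
`packaging` library (an assumption for illustration; multipip's real logic
differs):

    from packaging.specifiers import SpecifierSet

    def intersect(*specs):
        """Combine several specifier strings into one SpecifierSet."""
        combined = SpecifierSet('')
        for s in specs:
            combined &= SpecifierSet(s)
        return combined

    reqs = intersect('>1', '>2')
    print('2.5' in reqs)   # True  -- satisfies both a>1 and a>2
    print('1.5' in reqs)   # False -- fails a>2

    conflict = intersect('<1', '>2')
    # No version satisfies both; probing the available candidates
    # surfaces the conflict.
    print(any(v in conflict for v in ['0.5', '1.5', '2.5', '3.0']))  # False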

Regards
Jiexi

-- 
You received this question notification because your team Yahoo!
Engineering Team is an answer contact for anvil.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513216] Re: Mismatched keystone api version produces cryptic 'Error: Openstack'

2015-11-04 Thread Lin Hua Cheng
Keystone is not responsible for the OSC command.

Tagging OSC. I agree the error message can be further improved.

** Also affects: python-openstackclient
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Invalid

** Changed in: python-openstackclient
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1513216

Title:
  Mismatched keystone api version produces cryptic 'Error: Openstack'

Status in OpenStack Identity (keystone):
  Invalid
Status in python-openstackclient:
  Confirmed

Bug description:
  The 'openstack' cli tool defaults to keystone version 2.0.  When
  pointed to a v3 endpoint, it fails like this:

  $ openstack service list
  ERROR: openstack 

  This can easily be resolved by setting OS_IDENTITY_API_VERSION=3 --
  that's not obvious from the error message, though, and isn't even
  obvious from log- and code-diving.

  I propose that we actually detect the api version mismatch and error
  out with a helpful message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1513216/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486001] Re: Netapp ephemeral instance snapshot very slow

2015-11-04 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1486001

Title:
  Netapp ephemeral instance snapshot very slow

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When I try to snapshot an instance carved out on NetApp ephemeral
  storage mounted on /var/lib/nova/instances, the process seems to take
  very long. It does an almost full image download every time, even for
  the same instance. I also don't think it takes advantage of NetApp's
  native snapshot / FlexClone feature. This could be a feature
  enhancement too: have nova use the NetApp utility for snapshots.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1486001/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513313] [NEW] create vip failed for unbound method get_device_name() must be called with OVSInterfaceDriver instance as first argument

2015-11-04 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

We found our gate failing with the following information:

3:42.778 ERROR neutron_lbaas.services.loadbalancer.agent.agent_manager 
[req-ebb92ee8-2998-4a50-baf1-8123ce76b071 admin admin] Create vip 
e3152b05-2c41-40ac-9729-1756664f437e failed on device driver haproxy_ns
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager Traceback (most recent 
call last):
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 221, in create_vip
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
driver.create_vip(vip)
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 348, in create_vip
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self._refresh_device(vip['pool_id'])
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 344, in _refresh_device
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager if not 
self.deploy_instance(logical_config) and self.exists(pool_id):
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
254, in inner
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager return f(*args, 
**kwargs)
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 337, in deploy_instance
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
self.create(logical_config)
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 92, in create
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager 
logical_config['vip']['address'])
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager   File 
"/opt/stack/new/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 248, in _plug
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager interface_name = 
self.vif_driver.get_device_name(Wrap(port))
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager TypeError: unbound 
method get_device_name() must be called with OVSInterfaceDriver instance as 
first argument (got Wrap instance instead)
2015-11-05 03:23:42.778 30474 ERROR 
neutron_lbaas.services.loadbalancer.agent.agent_manager

** Affects: neutron
 Importance: Undecided
 Assignee: Kai Qiang Wu(Kennan) (wkqwu)
 Status: New

-- 
create vip failed for unbound method get_device_name() must be called with 
OVSInterfaceDriver instance as first argument
https://bugs.launchpad.net/bugs/1513313
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513335] [NEW] disk allocation ratio should move to resource tracker

2015-11-04 Thread Eric Xie
Public bug reported:

1. version:
nova 12.0.0 Liberty

2. As mentioned in
https://blueprints.launchpad.net/nova/+spec/allocation-ratio-to-resource-tracker,
the cpu/mem allocation ratios have already been moved to the resource tracker.
The disk allocation ratio should be moved in the same way.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513335

Title:
  disk allocation ratio should move to resource tracker

Status in OpenStack Compute (nova):
  New

Bug description:
  1. version:
  nova 12.0.0 Liberty

  2. As mentioned in
https://blueprints.launchpad.net/nova/+spec/allocation-ratio-to-resource-tracker,
  the cpu/mem allocation ratios have already been moved to the resource tracker.
  The disk allocation ratio should be moved in the same way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1513335/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457291] Re: Volume connection to destination host is not terminated after failed to block live migrate a VM with attached volume

2015-11-04 Thread Pawel Koniszewski
This has been fixed in Liberty - https://review.openstack.org/214434.
Because Kilo is security-supported only, I'm marking this one as invalid.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1457291

Title:
  Volume connection to destination host is not terminated after failed
  to  block live migrate a VM with attached volume

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I tried to block live migrate a VM with an attached volume. It failed as
expected due to the change from bug https://bugs.launchpad.net/nova/+bug/1398999.
  However, after the migration failed, the volume connection to the destination
host was not terminated. As a result, the volume cannot be deleted after being
detached from the VM (VNX is used as the cinder backend).
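
  The cleanup that appears to be missing would, in client terms, be the
initialize call's counterpart. A sketch using python-cinderclient (assumes an
authenticated client object named `cinder`; the connector dict is abbreviated
from the log below):

      # After a failed pre_live_migration, the destination host's
      # attachment should be rolled back:
      cinder.volumes.terminate_connection(
          'b895ded9-d337-45a0-8eb8-658faabf3e7e',   # volume id from the log
          {'ip': '192.168.1.12', 'host': 'ubuntu-server12'})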

  Log at the Destination host:
  Exception that made the live migration failed
  2015-05-20 23:01:43.644 ERROR oslo_messaging._drivers.common 
[req-ac891c95-e958-4166-9eb6-a459f05356f0 admin admin] ['Traceback (most recent 
call last):\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
142, in _dispatch_and_reply\nexecutor_callback))\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
186, in _dispatch\nexecutor_callback)\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
130, in _do_dispatch\nresult = func(ctxt, **new_args)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 6681, in pre_live_migration\n   
 disk, migrate_data=migrate_data)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 443, in decorated_function\n
return function(self, context, *args, **kwargs)\n', '  File 
"/opt/stack/nova/nova/exception.py", line 88, in wrapped\npayload)\n', '  
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, 
in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n', '  File 
"/opt/stack/nova/nova/exception.py", line 71, in wrapped\nreturn f(self, 
context, *args, **kw)\n', '  File "/opt/stack/nova/nova/compute/manager.py", 
line 355, in decorated_function\nkwargs[\'instance\'], e, 
sys.exc_info())\n', '  File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in 
__exit__\nsix.reraise(self.type_, self.value, self.tb)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 343, in decorated_function\n
return function(self, context, *args, **kwargs)\n', '  File 
"/opt/stack/nova/nova/compute/manager.py", line 5163, in pre_live_migration\n   
 migrate_data)\n', '  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 
5825, in pre_live_migration\nraise exception.MigrationError(reason=msg)\n', 
'MigrationError: Migration error: Cannot block migrate instance 
728f053b-0333-4594-b25b-1c104be66313 with mapped volumes\n']

  There is a log entry for initializing the connection between the volume and
the target host:
  stack@ubuntu-server12:/opt/stack/logs/screen$ grep 
req-ac891c95-e958-4166-9eb6-a459f05356f0 screen-n-cpu.log | grep initialize
  2015-05-20 23:01:39.379 DEBUG keystoneclient.session 
[req-ac891c95-e958-4166-9eb6-a459f05356f0 admin admin] REQ: curl -g -i -X POST 
http://192.168.1.12:8776/v2/e6c8e065eee54e369a0fe7bca2759213/volumes/b895ded9-d337-45a0-8eb8-658faabf3e7e/action
 -H "User-Agent: python-cinderclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}7f619e45d624a874185329f549070513a45eb324" -d 
'{"os-initialize_connection": {"connector": {"ip": "192.168.1.12", "host": 
"ubuntu-server12", "wwnns": ["2090fa534685", "2090fa534684"], 
"initiator": "iqn.1993-08.org.debian:01:f261dc5728b2", "wwpns": 
["1090fa534685", "1090fa534684"]}}}' _http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:195

  But there is no request to terminate the connection between the volume and
the target host:
  stack@ubuntu-server12:/opt/stack/logs/screen$ grep 
req-ac891c95-e958-4166-9eb6-a459f05356f0 screen-n-cpu.log | grep 
terminate_connection

  In the cinder api log, the last request for the volume is the initialize
connection to the target host. There is no terminate connection request after
that.
  stack@ubuntu-server12:/opt/stack/logs/screen$ grep 
b895ded9-d337-45a0-8eb8-658faabf3e7e screen-c-api.log
  
  2015-05-20 23:01:39.444 INFO cinder.api.openstack.wsgi 
[req-43a51010-2cb9-4cae-8684-3fa5f82c71de admin] POST 
http://192.168.1.12:8776/v2/e6c8e065eee54e369a0fe7bca2759213/volumes/b895ded9-d337-45a0-8eb8-658faabf3e7e/action
  2015-05-20 23:01:39.484 DEBUG cinder.volume.api 
[req-43a51010-2cb9-4cae-8684-3fa5f82c71de admin] initialize connection for 
volume-id: b895ded9-d337-45a0-8eb8-658faabf3e7e, and connector: {u'ip': 
u'192.168.1.12', u'host': 

[Yahoo-eng-team] [Bug 1428553] Re: migration and live migration fails with images_type=rbd

2015-11-04 Thread Pawel Koniszewski
So it is a packstack-related issue - please refer to
https://bugzilla.redhat.com/show_bug.cgi?id=968310

** Bug watch added: Red Hat Bugzilla #968310
   https://bugzilla.redhat.com/show_bug.cgi?id=968310

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: nova
 Assignee: lvmxh (shaohef) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1428553

Title:
  migration and live migration fails with images_type=rbd

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description of problem:
  The migration and live migration of instances fail when Nova is set to work
with RBD as a back end for the instance disks.
  When attempting to migrate an instance from one host to another, an error
prompt appears:

  Error: Failed to launch instance "osp5": Please try again later
  [Error: Unexpected error while running command. Command: ssh  mkdir -p
  /var/lib/nova/instances/98cc014a-0d6d-48bc-9d76-4fe361b67f3b Exit code: 1
  Stdout: u'This account is currently not available.\n' Stderr: u''].

  The log shows: http://pastebin.test.redhat.com/267337

  When attempting to run live migration, this is the output:
  http://pastebin.test.redhat.com/267340

  There's a workaround: change the nova user's shell in /etc/passwd on all the
compute nodes from /sbin/nologin to /bin/bash and rerun the command. I
wouldn't recommend it; it creates a security hole IMO.

  Version-Release number of selected component (if applicable):
  openstack-nova-api-2014.2.2-2.el7ost.noarch
  python-nova-2014.2.2-2.el7ost.noarch
  openstack-nova-compute-2014.2.2-2.el7ost.noarch
  openstack-nova-common-2014.2.2-2.el7ost.noarch
  openstack-nova-scheduler-2014.2.2-2.el7ost.noarch
  python-novaclient-2.20.0-1.el7ost.noarch
  openstack-nova-conductor-2014.2.2-2.el7ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. Set nova to work with RBD as the back end for the instance disks,
according to the Ceph documentation
  2. Launch an instance
  3. migrate the instance to a different host 

  Actual results:
  The migration fails and the instance status moves to error.

  Expected results:
  the instance migrates to the other host

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1428553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513353] [NEW] VPNaaS: leftid should be configurable

2015-11-04 Thread Yi Jing Zhu
Public bug reported:

Currently, both left and leftid are filled in with the external IP
automatically, but a user may want to set leftid to a value of their own
choosing, such as an email address.
It would be better if this were supported.

Pre-conditions: None

Step-by-step reproduction steps:
1) create an ipsec connection from dashboard or CLI, there is no leftid option.

Version:
Stable Kilo/CentOS7/RDO

Thanks!

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

** Summary changed:

- leftid should be configurable
+ vpnaas - leftid should be configurable

** Summary changed:

- vpnaas - leftid should be configurable
+ VPNaaS: leftid should be configurable

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513353

Title:
  VPNaaS: leftid should be configurable

Status in neutron:
  New

Bug description:
  Currently, both left and leftid are filled in with the external IP
automatically, but a user may want to set leftid to a value of their own
choosing, such as an email address.
  It would be better if this were supported.

  Pre-conditions: None

  Step-by-step reproduction steps:
  1) create an ipsec connection from dashboard or CLI, there is no leftid 
option.

  Version:
  Stable Kilo/CentOS7/RDO

  Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479264] Re: test_resync_devices_set_up_after_exception fails with "RowNotFound: Cannot find Bridge with name=test-br69135803"

2015-11-04 Thread Rossella Sblendido
If the bridge doesn't exist, the exception is still not caught, which is the
right behaviour in my opinion, because the agent tries to get the ancillary
ports only if it detects ancillary bridges. We were probably seeing a race in
the test cleanup, which has since disappeared: I get 0 hits now. I will mark
this as Invalid; feel free to reopen it if the problem persists.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479264

Title:
  test_resync_devices_set_up_after_exception fails with "RowNotFound:
  Cannot find Bridge with name=test-br69135803"

Status in neutron:
  Invalid

Bug description:
  Example: 
http://logs.openstack.org/88/206188/1/check/gate-neutron-dsvm-functional/a797b68/testr_results.html.gz
  Logstash: 

  ft1.205: 
neutron.tests.functional.agent.test_l2_ovs_agent.TestOVSAgent.test_resync_devices_set_up_after_exception(native)_StringException:
 Empty attachments:
pythonlogging:'neutron.api.extensions'
stdout

  pythonlogging:'': {{{
  2015-07-28 21:38:06,203 INFO [neutron.agent.l2.agent_extensions_manager] 
Configured agent extensions names: ('qos',)
  2015-07-28 21:38:06,204 INFO [neutron.agent.l2.agent_extensions_manager] 
Loaded agent extensions names: ['qos']
  2015-07-28 21:38:06,204 INFO [neutron.agent.l2.agent_extensions_manager] 
Initializing agent extension 'qos'
  2015-07-28 21:38:06,280 INFO 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Mapping 
physical network physnet to bridge br-int359443631
  2015-07-28 21:38:06,349  WARNING 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Creating an 
interface named br-int359443631 exceeds the 15 character limitation. It was 
shortened to int-br-in3cbf05 to fit.
  2015-07-28 21:38:06,349  WARNING 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Creating an 
interface named br-int359443631 exceeds the 15 character limitation. It was 
shortened to phy-br-in3cbf05 to fit.
  2015-07-28 21:38:06,970 INFO 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Adding 
test-br69135803 to list of bridges.
  2015-07-28 21:38:06,974  WARNING [neutron.agent.securitygroups_rpc] Driver 
configuration doesn't match with enable_security_group
  2015-07-28 21:38:07,061 INFO 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Agent out of 
sync with plugin!
  2015-07-28 21:38:07,062 INFO 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Agent tunnel 
out of sync with plugin!
  2015-07-28 21:38:07,204ERROR [neutron.agent.ovsdb.impl_idl] Traceback 
(most recent call last):
File "neutron/agent/ovsdb/native/connection.py", line 84, in run
  txn.results.put(txn.do_commit())
File "neutron/agent/ovsdb/impl_idl.py", line 92, in do_commit
  ctx.reraise = False
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 119, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File "neutron/agent/ovsdb/impl_idl.py", line 87, in do_commit
  command.run_idl(txn)
File "neutron/agent/ovsdb/native/commands.py", line 355, in run_idl
  br = idlutils.row_by_value(self.api.idl, 'Bridge', 'name', self.bridge)
File "neutron/agent/ovsdb/native/idlutils.py", line 59, in row_by_value
  raise RowNotFound(table=table, col=column, match=match)
  RowNotFound: Cannot find Bridge with name=test-br69135803

  2015-07-28 21:38:07,204ERROR [neutron.agent.ovsdb.native.commands] Error 
executing command
  Traceback (most recent call last):
File "neutron/agent/ovsdb/native/commands.py", line 35, in execute
  txn.add(self)
File "neutron/agent/ovsdb/api.py", line 70, in __exit__
  self.result = self.commit()
File "neutron/agent/ovsdb/impl_idl.py", line 70, in commit
  raise result.ex
  RowNotFound: Cannot find Bridge with name=test-br69135803
  2015-07-28 21:38:07,205ERROR 
[neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent] Error while 
processing VIF ports
  Traceback (most recent call last):
File "neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", 
line 1569, in rpc_loop
  ancillary_ports)
File "neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", 
line 1104, in scan_ancillary_ports
  cur_ports |= bridge.get_vif_port_set()
File "neutron/agent/common/ovs_lib.py", line 376, in get_vif_port_set
  port_names = self.get_port_name_list()
File "neutron/agent/common/ovs_lib.py", line 313, in get_port_name_list
  return self.ovsdb.list_ports(self.br_name).execute(check_error=True)
File "neutron/agent/ovsdb/native/commands.py", line 42, in execute
  ctx.reraise = False
File 

[Yahoo-eng-team] [Bug 1513069] Re: allowed_address_pairs validator raises 500 when non-list value is specified from users

2015-11-04 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1477829 ***
https://bugs.launchpad.net/bugs/1477829

Sorry I checked the wrong branch :-(
It was fixed during Liberty.

** Description changed:

  allowed_address_pairs validator raises 500 when non-list value is specified 
from users.
  In the following example, a user specified True for allowed_address_pairs by 
mistake.
  In this case, neutron server should return BadRequest (400) instead of 
InternalServerError (500).
  
- releases: from Juno to Mitaka
+ Releases: Kilo, Juno
  
  How to reproduce:
  Send {u'port': {u'allowed_address_pairs': True}} for an existing port.
  
  2015-11-02 19:33:37.550 10988 DEBUG routes.middleware 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] No route matched for PUT 
/ports/58d6d971-0519-4746-8e26-4f51185b92b9.json __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:97
  2015-11-02 19:33:37.551 10988 DEBUG routes.middleware 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] Matched PUT 
/ports/58d6d971-0519-4746-8e26-4f51185b92b9.json __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:100
  2015-11-02 19:33:37.552 10988 DEBUG routes.middleware 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] Route path: '/ports/{id}{.format}', defaults: 
{'action': u'update', 'controller': <...>} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:102
  2015-11-02 19:33:37.552 10988 DEBUG routes.middleware 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] Match dict: {'action': u'update', 
'controller': <...>, 'id': u'58d6d971-0519-4746-8e26-4f51185b92b9', 'format': 
u'json'} __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:103
  2015-11-02 19:33:37.553 10988 DEBUG neutron.api.v2.base 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] Request body: {u'port': 
{u'allowed_address_pairs': True}} prepare_request_body 
/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py:582
  2015-11-02 19:33:37.554 10988 ERROR neutron.api.v2.resource 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] update failed
  2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in 
resource
  2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 501, in update
  2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource 
allow_bulk=self._allow_bulk)
  2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 637, in 
prepare_request_body
  2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource 
attr_vals['validate'][rule])
  2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/extensions/allowedaddresspairs.py", 
line 56, in _validate_allowed_address_pairs
  2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource if 
len(address_pairs) > cfg.CONF.max_allowed_address_pair:
  2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource TypeError: object 
of type 'bool' has no len()
- 2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource 
+ 2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource
  2015-11-02 19:33:37.557 10988 INFO neutron.wsgi 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] 192.168.23.61,10.2.101.2 - - [02/Nov/2015 
19:33:37] "PUT /v2.0/ports/58d6d971-0519-4746-8e26-4f51185b92b9.json HTTP/1.1" 
500 439 0.012048

** This bug has been marked a duplicate of bug 1477829
   Create port API with invalid value returns 500(Internal Server Error)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513069

Title:
  allowed_address_pairs validator raises 500 when non-list value is
  specified from users

Status in neutron:
  New

Bug description:
  The allowed_address_pairs validator raises a 500 when a non-list value is 
specified by the user.
  In the following example, a user specified True for allowed_address_pairs by 
mistake.
  In this case, the neutron server should return BadRequest (400) instead of 
InternalServerError (500).

  Releases: Kilo, Juno

  How to reproduce:
  Send {u'port': {u'allowed_address_pairs': True}} for an existing port.

  2015-11-02 19:33:37.550 10988 DEBUG 

[Yahoo-eng-team] [Bug 1513041] Re: need to wait more than 30 seconds before the network namespace can be checked on network node when creating network(with 860 tenant/network/instance created)

2015-11-04 Thread Eugene Nikanorov
I would question the issue itself.
Processing 860 networks in 30 seconds is much better than what Juno offered.
Since the rootwrap daemon was introduced, processing time has dropped
significantly.

You can increase num_sync_threads in dhcp_agent.ini from its default value
of 4 and see if it helps.
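
For example, in dhcp_agent.ini (the value 8 is only illustrative; tune it
to the host):

    [DEFAULT]
    num_sync_threads = 8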

** Changed in: neutron
   Status: Triaged => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513041

Title:
  need to wait more than 30 seconds before the network namespace can be
  checked on network node  when creating network(with 860
  tenant/network/instance created)

Status in neutron:
  Opinion

Bug description:
  [Summary]
  need to wait more than 30 seconds before the network namespace can be
checked on the network node when creating a network (with 860
tenant/network/instance created)

  [Topo]
  1 controller, 2 network nodes, 6 compute nodes, all in ubuntu 14.04
  (openstack version is 2015.1.2, linux kernel version is 3.19.0-31)
  root@ah:~# uname -a
  Linux ah.container13 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 
10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  root@ah:~# 
  root@ah:~# dpkg -l | grep neutron
  ii  neutron-common  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - common
  ii  neutron-plugin-ml2  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - ML2 plugin
  ii  neutron-server  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - server
  ii  python-neutron  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - Python library
  ii  python-neutron-fwaas2015.1.2-0ubuntu2~cloud0  
all  Firewall-as-a-Service driver for OpenStack Neutron
  ii  python-neutronclient1:2.3.11-0ubuntu1.2~cloud0
all  client - Neutron is a virtual network service for Openstack
  root@ah:~#

  [Description and expected result]
  the network namespace can be checked on the network node immediately when 
creating a network

  [Reproducible or not]
  reproducible when a large number of tenants/networks/instances are configured

  [Recreate Steps]
  1) use a script to create: 860 tenants, 1 network/router in each tenant, 1 
cirros container in each network; all containers are associated to FIPs

  2) create one more network; the namespace of this network can only be
  checked on the network node 30 seconds later >>> ISSUE

  
  [Configuration]
  config files for controller/network/compute are attached

  [logs]
  Post logs here.

  [Root cause analysis or debug info]
  high load on controller and network node

  [Attachment]
  log files attached

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513048] Re: the container can not be pinged via name space, after 860 tenants/networks/container created

2015-11-04 Thread Ryan Moats
Changed this to invalid/low as this is a defect against kilo, which is
now (according to [1]) in security-supported phase only. This needs to be
retested with liberty/master and re-filed.

[1] https://wiki.openstack.org/wiki/Releases

** Changed in: neutron
   Status: Triaged => Opinion

** Changed in: neutron
   Importance: High => Wishlist

** Changed in: neutron
   Status: Opinion => Invalid

** Changed in: neutron
   Importance: Wishlist => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513048

Title:
  the container can not be pinged via name space, after 860
  tenants/networks/container created

Status in neutron:
  Invalid

Bug description:
  [Summary]
  the container can not be pinged via name space, after 860 
tenants/networks/container created

  [Topo]
  1 controller, 2 network nodes, 6 compute nodes, all in ubuntu 14.04
  (openstack version is 2015.1.2, linux kernel version is 3.19.0-31)
  root@ah:~# uname -a
  Linux ah.container13 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 
10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  root@ah:~# 
  root@ah:~# dpkg -l | grep neutron
  ii  neutron-common  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - common
  ii  neutron-plugin-ml2  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - ML2 plugin
  ii  neutron-server  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - server
  ii  python-neutron  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - Python library
  ii  python-neutron-fwaas2015.1.2-0ubuntu2~cloud0  
all  Firewall-as-a-Service driver for OpenStack Neutron
  ii  python-neutronclient1:2.3.11-0ubuntu1.2~cloud0
all  client - Neutron is a virtual network service for Openstack
  root@ah:~#

  [Description and expected result]
  the container should be pinged via name space

  [Reproducible or not]
  reproducible intermittently when a large number of tenants/networks/instances 
are configured

  [Recreate Steps]
  1) use a script to create: 860 tenants, 1 network/router in each tenant, 1 
cirros container in each network; all containers are associated to FIPs

  2) create one more tenant, with 1 network/container in the tenant; the
  container can be in Active state, but can not be pinged via the name space
  >>> ISSUE

  
  [Configuration]
  config files on controller/network/compute are attached

  [logs]
  instance can be in Active state:
  root@ah:~# nova --os-tenant-id 73731bbaf2db48f89a067604e3556e05 list
  +--------------------------------------+-----------------------------+--------+------------+-------------+----------------------------------------------------+
  | ID                                   | Name                        | Status | Task State | Power State | Networks                                           |
  +--------------------------------------+-----------------------------+--------+------------+-------------+----------------------------------------------------+
  | d5ba18d5-aaf9-4ed6-9a2b-71d2b2f10bae | mexico_test_new_2_1_net1_vm | ACTIVE | -          | Running     | mexico_test_new_2_1_net1=10.10.32.3, 172.168.6.211 |
  +--------------------------------------+-----------------------------+--------+------------+-------------+----------------------------------------------------+
  root@ah:~# keystone tenant-list | grep test_new_2_1
  | 73731bbaf2db48f89a067604e3556e05 | mexico_test_new_2_1 |   True  |
  root@ah:~# neutron net-list | grep exico_test_new_2_1_net1
  | a935642d-b56c-4a87-83c5-755f01bf0814 | mexico_test_new_2_1_net1 | 
bed0330f-e0ea-4bcc-bc75-96766dad32a7 10.10.32.0/24  |
  root@ah:~#

  on network node:
  root@ah:~# ip netns | grep a935642d-b56c-4a87-83c5-755f01bf0814
  qdhcp-a935642d-b56c-4a87-83c5-755f01bf0814
  root@ah:~# ip netns exec qdhcp-a935642d-b56c-4a87-83c5-755f01bf0814 ping 
10.10.32.3
  PING 10.10.32.3 (10.10.32.3) 56(84) bytes of data.
  From 10.10.32.2 icmp_seq=1 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=2 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=3 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=4 Destination Host Unreachable>>>ISSUE

  [Root cause analysis or debug info]
  high load on controller and network node

  [Attachment]
  log files on controller/network/compute are attached

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513102] [NEW] Useless deprecation message for driver import

2015-11-04 Thread Brant Knudson
Public bug reported:


When a driver is specified using the full name (as in, the old config file is 
used), for example if I have:

 driver = keystone.contrib.federation.backends.sql.Federation

I get a deprecation warning:

31304 WARNING oslo_log.versionutils [-] Deprecated: direct import of
driver is deprecated as of Liberty in favor of entrypoints and may be
removed in N.

The deprecation warning is pretty useless. It should at least include
the string that was used so that I can figure out what to change.
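
A minimal sketch of the kind of message that would help, using
oslo_log.versionutils.report_deprecated_feature (load_driver and its
plumbing are hypothetical names, not keystone's actual code):

    from oslo_log import log, versionutils

    LOG = log.getLogger(__name__)

    def load_driver(driver_name):
        # Include the offending string in the warning so operators know
        # which config value to change to an entrypoint name.
        versionutils.report_deprecated_feature(
            LOG,
            "direct import of driver %s is deprecated as of Liberty in "
            "favor of entrypoints and may be removed in N." % driver_name)
        # ... import and return the driver as before ...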

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Brant Knudson (blk-u)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1513102

Title:
  Useless deprecation message for driver import

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  
  When a driver is specified using the full name (as in, the old config file is 
used), for example if I have:

   driver = keystone.contrib.federation.backends.sql.Federation

  I get a deprecation warning:

  31304 WARNING oslo_log.versionutils [-] Deprecated: direct import of
  driver is deprecated as of Liberty in favor of entrypoints and may be
  removed in N.

  The deprecation warning is pretty useless. It should at least include
  the string that was used so that I can figure out what to change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1513102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513048] [NEW] the container can not be pinged via name space, after 860 tenants/networks/container created

2015-11-04 Thread IBM-Cloud-SH
Public bug reported:

[Summary]
the container can not be pinged via name space, after 860 
tenants/networks/container created

[Topo]
1 controller, 2 network nodes, 6 compute nodes, all in ubuntu 14.04
(openstack version is 2015.1.2, linux kernel version is 3.19.0-31)
root@ah:~# uname -a
Linux ah.container13 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 
10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
root@ah:~# 
root@ah:~# dpkg -l | grep neutron
ii  neutron-common  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - common
ii  neutron-plugin-ml2  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - ML2 plugin
ii  neutron-server  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - server
ii  python-neutron  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - Python library
ii  python-neutron-fwaas2015.1.2-0ubuntu2~cloud0  
all  Firewall-as-a-Service driver for OpenStack Neutron
ii  python-neutronclient1:2.3.11-0ubuntu1.2~cloud0
all  client - Neutron is a virtual network service for Openstack
root@ah:~#

[Description and expected result]
the container should be pinged via name space

[Reproducible or not]
reproducible intermittently when a large number of tenants/networks/instances 
are configured

[Recreate Steps]
1) use a script to create: 860 tenants, 1 network/router in each tenant, 1 
cirros container in each network; all containers are associated to FIPs

2) create one more tenant, with 1 network/container in the tenant; the
container can be in Active state, but can not be pinged via the name space
>>> ISSUE


[Configuration]
config files on controller/network/compute are attached

[logs]
instance can be in Active state:
root@ah:~# nova --os-tenant-id 73731bbaf2db48f89a067604e3556e05 list
+--------------------------------------+-----------------------------+--------+------------+-------------+----------------------------------------------------+
| ID                                   | Name                        | Status | Task State | Power State | Networks                                           |
+--------------------------------------+-----------------------------+--------+------------+-------------+----------------------------------------------------+
| d5ba18d5-aaf9-4ed6-9a2b-71d2b2f10bae | mexico_test_new_2_1_net1_vm | ACTIVE | -          | Running     | mexico_test_new_2_1_net1=10.10.32.3, 172.168.6.211 |
+--------------------------------------+-----------------------------+--------+------------+-------------+----------------------------------------------------+
root@ah:~# keystone tenant-list | grep test_new_2_1
| 73731bbaf2db48f89a067604e3556e05 | mexico_test_new_2_1 |   True  |
root@ah:~# neutron net-list | grep exico_test_new_2_1_net1
| a935642d-b56c-4a87-83c5-755f01bf0814 | mexico_test_new_2_1_net1 | 
bed0330f-e0ea-4bcc-bc75-96766dad32a7 10.10.32.0/24  |
root@ah:~#

on network node:
root@ah:~# ip netns | grep a935642d-b56c-4a87-83c5-755f01bf0814
qdhcp-a935642d-b56c-4a87-83c5-755f01bf0814
root@ah:~# ip netns exec qdhcp-a935642d-b56c-4a87-83c5-755f01bf0814 ping 
10.10.32.3
PING 10.10.32.3 (10.10.32.3) 56(84) bytes of data.
From 10.10.32.2 icmp_seq=1 Destination Host Unreachable
From 10.10.32.2 icmp_seq=2 Destination Host Unreachable
From 10.10.32.2 icmp_seq=3 Destination Host Unreachable
From 10.10.32.2 icmp_seq=4 Destination Host Unreachable >>> ISSUE

[Root cause analysis or debug info]
high load on controller and network node

[Attachment]
log files on controller/network/compute are attached

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: "log files and config files on controller/network/compute 
are attached"
   
https://bugs.launchpad.net/bugs/1513048/+attachment/4512713/+files/log_config.rar

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513048

Title:
  the container can not be pinged via name space, after 860
  tenants/networks/container created

Status in neutron:
  New

Bug description:
  [Summary]
  the container can not be pinged via name space, after 860 
tenants/networks/container created

  [Topo]
  1 controller, 2 network nodes, 6 compute nodes, all in ubuntu 14.04
  (openstack version is 2015.1.2, linux kernel version is 3.19.0-31)
  root@ah:~# uname -a
  Linux ah.container13 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 
10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  root@ah:~# 
  root@ah:~# dpkg -l | grep neutron
  ii  neutron-common  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - common
  ii  

[Yahoo-eng-team] [Bug 1513041] [NEW] need to wait more than 30 seconds before the network namespace can be checked on network node when creating network(with 860 tenant/network/instance created)

2015-11-04 Thread IBM-Cloud-SH
Public bug reported:

[Summary]
need to wait more than 30 seconds before the network namespace can be
checked on the network node when creating a network (with 860
tenant/network/instance created)

[Topo]
1 controller, 2 network nodes, 6 compute nodes, all in ubuntu 14.04
(openstack version is 2015.1.2, linux kernel version is 3.19.0-31)
root@ah:~# uname -a
Linux ah.container13 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 
10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
root@ah:~# 
root@ah:~# dpkg -l | grep neutron
ii  neutron-common  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - common
ii  neutron-plugin-ml2  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - ML2 plugin
ii  neutron-server  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - server
ii  python-neutron  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - Python library
ii  python-neutron-fwaas2015.1.2-0ubuntu2~cloud0  
all  Firewall-as-a-Service driver for OpenStack Neutron
ii  python-neutronclient1:2.3.11-0ubuntu1.2~cloud0
all  client - Neutron is a virtual network service for Openstack
root@ah:~#

[Description and expected result]
the network namespace can be checked on the network node immediately when 
creating a network

[Reproducible or not]
reproducible when a large number of tenants/networks/instances are configured

[Recreate Steps]
1) use a script to create: 860 tenants, 1 network/router in each tenant, 1 
cirros container in each network; all containers are associated to FIPs

2) create one more network; the namespace of this network can only be
checked on the network node 30 seconds later >>> ISSUE


[Configuration]
config files for controller/network/compute are attached

[logs]
Post logs here.

[Root cause analysis or debug info]
high load on controller and network node

[Attachment]
log files attached

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: "config files and log files for controller/network/compute"
   
https://bugs.launchpad.net/bugs/1513041/+attachment/4512706/+files/log_config.rar

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513041

Title:
  need to wait more than 30 seconds before the network namespace can be
  checked on network node  when creating network(with 860
  tenant/network/instance created)

Status in neutron:
  New

Bug description:
  [Summary]
  need to wait more than 30 seconds before the network namespace can be
checked on the network node when creating a network (with 860
tenant/network/instance created)

  [Topo]
  1 controller, 2 network nodes, 6 compute nodes, all in ubuntu 14.04
  (openstack version is 2015.1.2, linux kernel version is 3.19.0-31)
  root@ah:~# uname -a
  Linux ah.container13 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 
10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  root@ah:~# 
  root@ah:~# dpkg -l | grep neutron
  ii  neutron-common  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - common
  ii  neutron-plugin-ml2  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - ML2 plugin
  ii  neutron-server  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - server
  ii  python-neutron  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - Python library
  ii  python-neutron-fwaas2015.1.2-0ubuntu2~cloud0  
all  Firewall-as-a-Service driver for OpenStack Neutron
  ii  python-neutronclient1:2.3.11-0ubuntu1.2~cloud0
all  client - Neutron is a virtual network service for Openstack
  root@ah:~#

  [Description and expected result]
  the network namespace can be checked on the network node immediately when 
creating a network

  [Reproducible or not]
  reproducible when a large number of tenants/networks/instances are configured

  [Recreate Steps]
  1) use a script to create: 860 tenants, 1 network/router in each tenant, 1 
cirros container in each network; all containers are associated to FIPs

  2) create one more network; the namespace of this network can only be
  checked on the network node 30 seconds later >>> ISSUE

  
  [Configuration]
  config files for controller/network/compute are attached

  [logs]
  Post logs here.

  [Root cause analysis or debug info]
  high load on controller and network node

  [Attachment]
  log files attached

To manage 

[Yahoo-eng-team] [Bug 1511109] Re: Python Tests are failing on Horizon because of incomplete mocking

2015-11-04 Thread Alan Pevec
** Also affects: horizon/liberty
   Importance: Undecided
   Status: New

** Changed in: horizon/liberty
 Assignee: (unassigned) => Matthias Runge (mrunge)

** Changed in: horizon
   Importance: Undecided => High

** Changed in: horizon/liberty
   Importance: Undecided => High

** Changed in: horizon/liberty
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1511109

Title:
  Python Tests are failing on Horizon because of incomplete mocking

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) liberty series:
  In Progress

Bug description:
  openstack_dashboard.dashboards.project.instances.tests.InstanceTests
  are failing as the calls to flavor_list are not mocked on Nova.
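
  A minimal sketch of the missing stub, following the mox pattern these
  tests already use (the test name and body are illustrative, not the
  exact upstream patch):

      # Inside openstack_dashboard's InstanceTests (mox-based):
      @test.create_stubs({api.nova: ('flavor_list',)})
      def test_example(self):
          # Stub Nova's flavor listing so the view under test never
          # calls the real API during the unit test.
          api.nova.flavor_list(IsA(http.HttpRequest)) \
              .AndReturn(self.flavors.list())
          self.mox.ReplayAll()
          # ... exercise the instances view as before ...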

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1511109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412993] Re: Nova resize for boot-from-volume instance does not resize volume

2015-11-04 Thread John Garbutt
Marking this as invalid, as I think this is the correct behaviour.

There was talk of adding the ability to resize a volume during resize
using BDM, but that's a spec.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1412993

Title:
  Nova resize for boot-from-volume instance does not resize volume

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Resizing an instance which booted from a volume to a new flavor with a
  bigger disk will not cause the volume to resize accordingly. This can
  cause confusion among users, who will expect to have instances with
  bigger storage.

  Scenario:
  1. Have a glance image.
  2. Create a bootable volume from glance image.
  3. Create instance using volume and flavor having 10GB disk.
  4. Perform nova resize on instance to a new flavor having 20GB disk.
  5. After resize, see that the instance still has 10GB storage. Cinder volume 
still has the same size.

  This issue has been discussed on #openstack-nova and it was agreed
  upon to fail the resize operation, if the given instance is booted
  from volume and the given new flavor has a different disk size.
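
  A minimal sketch of the agreed-upon guard (helper and field names are
  illustrative, not nova's actual code):

      def check_resize_allowed(current_flavor, new_flavor, is_volume_backed):
          # Refuse to resize a boot-from-volume instance to a flavor with
          # a different root disk size, since the backing volume itself
          # is not resized.
          if (is_volume_backed
                  and new_flavor['root_gb'] != current_flavor['root_gb']):
              raise ValueError(
                  "Cannot resize a volume-backed instance to a flavor "
                  "with a different disk size; resize the volume "
                  "separately instead.")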

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1412993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513069] [NEW] allowed_address_pairs validator raises 500 when non-list value is specified from users

2015-11-04 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1477829 ***
https://bugs.launchpad.net/bugs/1477829

Public bug reported:

The allowed_address_pairs validator raises a 500 when a non-list value is 
specified by the user.
In the following example, a user specified True for allowed_address_pairs by 
mistake.
In this case, the neutron server should return BadRequest (400) instead of 
InternalServerError (500).

Releases: Kilo, Juno

How to reproduce:
Send {u'port': {u'allowed_address_pairs': True}} for an existing port.

2015-11-02 19:33:37.550 10988 DEBUG routes.middleware 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] No route matched for PUT 
/ports/58d6d971-0519-4746-8e26-4f51185b92b9.json __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:97
2015-11-02 19:33:37.551 10988 DEBUG routes.middleware 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] Matched PUT 
/ports/58d6d971-0519-4746-8e26-4f51185b92b9.json __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:100
2015-11-02 19:33:37.552 10988 DEBUG routes.middleware 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] Route path: '/ports/{id}{.format}', defaults: 
{'action': u'update', 'controller': <...>} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:102
2015-11-02 19:33:37.552 10988 DEBUG routes.middleware 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] Match dict: {'action': u'update', 
'controller': <...>, 'id': u'58d6d971-0519-4746-8e26-4f51185b92b9', 'format': 
u'json'} __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:103
2015-11-02 19:33:37.553 10988 DEBUG neutron.api.v2.base 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] Request body: {u'port': 
{u'allowed_address_pairs': True}} prepare_request_body 
/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py:582
2015-11-02 19:33:37.554 10988 ERROR neutron.api.v2.resource 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] update failed
2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in 
resource
2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 501, in update
2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource 
allow_bulk=self._allow_bulk)
2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 637, in 
prepare_request_body
2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource 
attr_vals['validate'][rule])
2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/extensions/allowedaddresspairs.py", 
line 56, in _validate_allowed_address_pairs
2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource if 
len(address_pairs) > cfg.CONF.max_allowed_address_pair:
2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource TypeError: object 
of type 'bool' has no len()
2015-11-02 19:33:37.554 10988 TRACE neutron.api.v2.resource
2015-11-02 19:33:37.557 10988 INFO neutron.wsgi 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] 192.168.23.61,10.2.101.2 - - [02/Nov/2015 
19:33:37] "PUT /v2.0/ports/58d6d971-0519-4746-8e26-4f51185b92b9.json HTTP/1.1" 
500 439 0.012048
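
A minimal sketch of the guard the validator needs, assuming neutron's
convention that attribute validators return an error string on failure
(illustrative, not the exact upstream fix from bug 1477829):

    from oslo_config import cfg

    def _validate_allowed_address_pairs(address_pairs, valid_values=None):
        # Reject non-list payloads up front so the API layer turns this
        # into a 400 instead of crashing on len() and returning a 500.
        if not isinstance(address_pairs, list):
            return "Allowed address pairs must be a list"
        if len(address_pairs) > cfg.CONF.max_allowed_address_pair:
            return "Exceeded maximum amount of allowed address pairs"
        # ... existing per-pair validation continues here ...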

** Affects: neutron
 Importance: Low
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513069

Title:
  allowed_address_pairs validator raises 500 when non-list value is
  specified from users

Status in neutron:
  New

Bug description:
  The allowed_address_pairs validator raises a 500 when a non-list value is 
specified by the user.
  In the following example, a user specified True for allowed_address_pairs by 
mistake.
  In this case, the neutron server should return BadRequest (400) instead of 
InternalServerError (500).

  Releases: Kilo, Juno

  How to reproduce:
  Send {u'port': {u'allowed_address_pairs': True}} for an existing port.

  2015-11-02 19:33:37.550 10988 DEBUG routes.middleware 
[req-7fbb782d-2537-4184-9e16-49e7d3285947 10dfb42eef5842b886a0c65ea5547175 
43bc5337a313424a8e746c1b0074de60] No route matched for PUT 
/ports/58d6d971-0519-4746-8e26-4f51185b92b9.json __call__ 

[Yahoo-eng-team] [Bug 1513109] [NEW] Ephemeral w/o FS gets added to fstab

2015-11-04 Thread Nate House
Public bug reported:

Using a DS that includes ec2 metadata with an ephemeral disk, even if
the ephemeral disk is not formatted with a filesystem, cc_mounts adds an
fstab entry that is likely invalid and potentially breaks reboot in some
cases. I've seen this affect Debian-based images; if you need to
reproduce it, a Rax OnMetal I/O flavor is a good candidate. We are
adjusting the base images' cloud config to avoid hitting this, but any
other cloud providers that provision non-formatted ephemeral disks would
likely have similar issues.
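
As a base-image workaround, the default ephemeral mount can be disabled
in cloud config so cc_mounts never writes the fstab entry; a sketch
using cloud-init's standard mounts syntax (a null mount point removes
the default entry):

    #cloud-config
    mounts:
      - [ ephemeral0, null ]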

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1513109

Title:
  Ephemeral w/o FS gets added to fstab

Status in cloud-init:
  New

Bug description:
  Using a DS that includes ec2 metadata with an ephemeral disk, even if
  the ephemeral disk is not formatted with a filesystem, cc_mounts adds
  an fstab entry that is likely invalid and potentially breaks reboot in
  some cases. I've seen this affect Debian-based images; if you need to
  reproduce it, a Rax OnMetal I/O flavor is a good candidate. We are
  adjusting the base images' cloud config to avoid hitting this, but
  any other cloud providers that provision non-formatted ephemeral disks
  would likely have similar issues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1513109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513119] [NEW] Keystone startup failure due to missing MITAKA constant

2015-11-04 Thread Brian Elliott
Public bug reported:

The version of oslo.log referenced leads to this error and keystone
won't start:

2015-11-04 15:36:00.423 6292 CRITICAL keystone [-] AttributeError: type object 
'deprecated' has no attribute 'MITAKA'
2015-11-04 15:36:00.423 6292 ERROR keystone Traceback (most recent call last):
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/bin/keystone-all", line 10, in 
2015-11-04 15:36:00.423 6292 ERROR keystone sys.exit(main())
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/keystone/keystone/cmd/all.py", line 39, in main
2015-11-04 15:36:00.423 6292 ERROR keystone 
eventlet_server.run(possible_topdir)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/keystone/keystone/server/eventlet.py", line 155, in run
2015-11-04 15:36:00.423 6292 ERROR keystone 
startup_application_fn=create_servers)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/keystone/keystone/server/common.py", line 51, in setup_bac
kends
2015-11-04 15:36:00.423 6292 ERROR keystone res = startup_application_fn()
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/keystone/keystone/server/eventlet.py", line 146, in create
_servers
2015-11-04 15:36:00.423 6292 ERROR keystone admin_worker_count))
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/keystone/keystone/server/eventlet.py", line 64, in create_
server
2015-11-04 15:36:00.423 6292 ERROR keystone app = 
keystone_service.loadapp('config:%s' % conf, name)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/keystone/keystone/version/service.py", line 47, in loadapp
2015-11-04 15:36:00.423 6292 ERROR keystone controllers.latest_app = 
deploy.loadapp(conf, name=name)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 247, in loadapp
2015-11-04 15:36:00.423 6292 ERROR keystone return loadobj(APP, uri, 
name=name, **kw)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 272, in loadobj
2015-11-04 15:36:00.423 6292 ERROR keystone return context.create()
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 710, in create
2015-11-04 15:36:00.423 6292 ERROR keystone return 
self.object_type.invoke(self)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 144, in invoke
2015-11-04 15:36:00.423 6292 ERROR keystone **context.local_conf)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/util.py", line 55, in fix_call
2015-11-04 15:36:00.423 6292 ERROR keystone val = callable(*args, **kw)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/urlm
ap.py", line 31, in urlmap_factory
2015-11-04 15:36:00.423 6292 ERROR keystone app = loader.get_app(app_name, 
global_conf=global_conf)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 350, in get_app
2015-11-04 15:36:00.423 6292 ERROR keystone name=name, 
global_conf=global_conf).create()
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 362, in app_context
2015-11-04 15:36:00.423 6292 ERROR keystone APP, name=name, 
global_conf=global_conf)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 450, in get_context
2015-11-04 15:36:00.423 6292 ERROR keystone 
global_additions=global_additions)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 559, in _pipeline_app_context
2015-11-04 15:36:00.423 6292 ERROR keystone APP, pipeline[-1], global_conf)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 458, in get_context
2015-11-04 15:36:00.423 6292 ERROR keystone section)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 517, in _context_from_explicit
2015-11-04 15:36:00.423 6292 ERROR keystone value = 
import_string(found_expr)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 22, in import_string
2015-11-04 15:36:00.423 6292 ERROR keystone return 
pkg_resources.EntryPoint.parse("x=" + s).load(False)
2015-11-04 15:36:00.423 6292 ERROR 
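
For context, the failing attribute is the release constant on oslo_log's
deprecation helper; code along these lines (illustrative) needs an
oslo.log release that actually defines the MITAKA constant, so pinning
an older oslo.log triggers exactly this AttributeError at import time:

    from oslo_log import versionutils

    @versionutils.deprecated(
        as_of=versionutils.deprecated.MITAKA,  # AttributeError on older oslo.log
        in_favor_of='some.replacement.path')   # hypothetical replacement
    def old_entry_point():
        pass

Bumping the oslo.log requirement to a release that defines MITAKA should
resolve the startup failure.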

[Yahoo-eng-team] [Bug 1512744] Re: Unable to retrieve LDAP domain user and group list on Horizon.

2015-11-04 Thread Irina Povolotskaya
** Also affects: fuel-plugins
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1512744

Title:
   Unable to retrieve LDAP domain user and group list on Horizon.

Status in Fuel Plugins:
  New
Status in OpenStack Identity (keystone):
  New

Bug description:
  I'm using openstack 7.0 with the LDAP plugin "ldap-1.0-1.0.0-1.noarch.rpm".

  I need to add an LDAP user to a new project in the keystone.tld domain.
  Project creation in this domain works fine, but when I tried to add LDAP
  users to this project I see the error: "Unable to retrieve LDAP domain
  user/group list"

  https://screencloud.net/v/xS09

  I cannot use a user unless I add them to the project.

  With version 1.0.0 of the LDAP plugin this worked fine, without critical
  problems.

  When I use the CLI, I see an error:
  openstack --os-auth-url http://172.16.0.3:5000/v3 --os-username Administrator 
--os-password Pass1234 --os-user-domain-name keystone.tld  user list
  ERROR: openstack Expecting to find domain in project - the server could not 
comply with the request since it is either malformed or otherwise incorrect. 
The client is assumed to be in error. (HTTP 400) (Request-ID: 
req-8f456d5d-afba-4289-957a-4eed91ee75cc)
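
  One thing to try (illustrative; the project name and domain values
  depend on the deployment) is to scope the request fully, since the 400
  suggests the scope payload is missing a domain:

      openstack --os-auth-url http://172.16.0.3:5000/v3 \
        --os-identity-api-version 3 \
        --os-username Administrator --os-password Pass1234 \
        --os-user-domain-name keystone.tld \
        --os-project-name <project> --os-project-domain-name keystone.tld \
        user list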

  Log message on Fuel UI (get all available users on the project):
  GET 
http://192.168.0.2:35357/v3/users?domain_id=19bca8582eae47b891e6b9d45fd6225b_project_id=ae96f8daec6c405a9e3b5d509a39db83
 HTTP/1.1" 500 143 keystoneclient.session: DEBUG: RESP: keystoneclient.session: 
DEBUG: Request returned failure status: 500

  The Mirantis LDAP server 172.16.57.146 is working fine.

  My LDAP settings:

  [ldap]
  suffix=dc=keystone,dc=tld
  query_scope=sub
  user_id_attribute=cn
  user=cn=Administrator,cn=Users,dc=keystone,dc=tld
  user_objectclass=person
  user_name_attribute=cn
  password=Pass1234
  user_allow_delete=False
  user_tree_dn=dc=keystone,dc=tld
  user_pass_attribute=userPassword
  user_enabled_attribute=enabled
  user_allow_create=False
  user_allow_update=False
  user_filter=
  url=ldap://172.16.57.146

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel-plugins/+bug/1512744/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513120] [NEW] Keystone startup failure due to missing MITAKA constant

2015-11-04 Thread Brian Elliott
Public bug reported:

The version of oslo.log referenced leads to this error and keystone
won't start:

2015-11-04 15:36:00.423 6292 CRITICAL keystone [-] AttributeError: type object 
'deprecated' has no attribute 'MITAKA'
2015-11-04 15:36:00.423 6292 ERROR keystone Traceback (most recent call last):
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/bin/keystone-all", line 10, in 
2015-11-04 15:36:00.423 6292 ERROR keystone sys.exit(main())
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/keystone/keystone/cmd/all.py", line 39, in main
2015-11-04 15:36:00.423 6292 ERROR keystone 
eventlet_server.run(possible_topdir)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/keystone/keystone/server/eventlet.py", line 155, in run
2015-11-04 15:36:00.423 6292 ERROR keystone 
startup_application_fn=create_servers)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/keystone/keystone/server/common.py", line 51, in setup_bac
kends
2015-11-04 15:36:00.423 6292 ERROR keystone res = startup_application_fn()
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/keystone/keystone/server/eventlet.py", line 146, in create
_servers
2015-11-04 15:36:00.423 6292 ERROR keystone admin_worker_count))
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/keystone/keystone/server/eventlet.py", line 64, in create_
server
2015-11-04 15:36:00.423 6292 ERROR keystone app = 
keystone_service.loadapp('config:%s' % conf, name)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/keystone/keystone/version/service.py", line 47, in loadapp
2015-11-04 15:36:00.423 6292 ERROR keystone controllers.latest_app = 
deploy.loadapp(conf, name=name)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 247, in loadapp
2015-11-04 15:36:00.423 6292 ERROR keystone return loadobj(APP, uri, 
name=name, **kw)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 272, in loadobj
2015-11-04 15:36:00.423 6292 ERROR keystone return context.create()
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 710, in create
2015-11-04 15:36:00.423 6292 ERROR keystone return 
self.object_type.invoke(self)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 144, in invoke
2015-11-04 15:36:00.423 6292 ERROR keystone **context.local_conf)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/util.py", line 55, in fix_call
2015-11-04 15:36:00.423 6292 ERROR keystone val = callable(*args, **kw)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/urlm
ap.py", line 31, in urlmap_factory
2015-11-04 15:36:00.423 6292 ERROR keystone app = loader.get_app(app_name, 
global_conf=global_conf)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 350, in get_app
2015-11-04 15:36:00.423 6292 ERROR keystone name=name, 
global_conf=global_conf).create()
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 362, in app_context
2015-11-04 15:36:00.423 6292 ERROR keystone APP, name=name, 
global_conf=global_conf)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 450, in get_context
2015-11-04 15:36:00.423 6292 ERROR keystone 
global_additions=global_additions)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 559, in _pipeline_app_context
2015-11-04 15:36:00.423 6292 ERROR keystone APP, pipeline[-1], global_conf)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 458, in get_context
2015-11-04 15:36:00.423 6292 ERROR keystone section)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 517, in _context_from_explicit
2015-11-04 15:36:00.423 6292 ERROR keystone value = 
import_string(found_expr)
2015-11-04 15:36:00.423 6292 ERROR keystone   File 
"/home/bde/venv/keystone/local/lib/python2.7/site-packages/paste/depl
oy/loadwsgi.py", line 22, in import_string
2015-11-04 15:36:00.423 6292 ERROR keystone return 
pkg_resources.EntryPoint.parse("x=" + s).load(False)
2015-11-04 15:36:00.423 6292 ERROR 

[Yahoo-eng-team] [Bug 1513048] Re: the container can not be pinged via name space, after 860 tenants/networks/container created

2015-11-04 Thread IBM-Cloud-SH
It is the kilo release 2015.1.2.

** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513048

Title:
  the container can not be pinged via name space, after 860
  tenants/networks/container created

Status in neutron:
  New

Bug description:
  [Summary]
  the container can not be pinged via name space, after 860 
tenants/networks/container created

  [Topo]
  1 controller, 2 network nodes, 6 compute nodes, all in ubuntu 14.04
  (openstack version is 2015.1.2, linux kernel version is 3.19.0-31)
  root@ah:~# uname -a
  Linux ah.container13 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 
10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  root@ah:~# 
  root@ah:~# dpkg -l | grep neutron
  ii  neutron-common  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - common
  ii  neutron-plugin-ml2  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - ML2 plugin
  ii  neutron-server  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - server
  ii  python-neutron  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - Python library
  ii  python-neutron-fwaas2015.1.2-0ubuntu2~cloud0  
all  Firewall-as-a-Service driver for OpenStack Neutron
  ii  python-neutronclient1:2.3.11-0ubuntu1.2~cloud0
all  client - Neutron is a virtual network service for Openstack
  root@ah:~#

  [Description and expected result]
  the container should be pinged via name space

  [Reproducible or not]
  reproducible intermittently when a large number of tenants/networks/instances 
are configured

  [Recreate Steps]
  1) use a script to create: 860 tenants, 1 network/router in each tenant, 1 
cirros container in each network; all containers are associated to FIPs

  2) create one more tenant, with 1 network/container in the tenant; the
  container can be in Active state, but can not be pinged via the name space
  >>> ISSUE

  
  [Configuration]
  config files on controller/network/compute are attached

  [logs]
  instance can be in Active state:
  root@ah:~# nova --os-tenant-id 73731bbaf2db48f89a067604e3556e05 list
  +--------------------------------------+-----------------------------+--------+------------+-------------+----------------------------------------------------+
  | ID                                   | Name                        | Status | Task State | Power State | Networks                                           |
  +--------------------------------------+-----------------------------+--------+------------+-------------+----------------------------------------------------+
  | d5ba18d5-aaf9-4ed6-9a2b-71d2b2f10bae | mexico_test_new_2_1_net1_vm | ACTIVE | -          | Running     | mexico_test_new_2_1_net1=10.10.32.3, 172.168.6.211 |
  +--------------------------------------+-----------------------------+--------+------------+-------------+----------------------------------------------------+
  root@ah:~# keystone tenant-list | grep test_new_2_1
  | 73731bbaf2db48f89a067604e3556e05 | mexico_test_new_2_1 |   True  |
  root@ah:~# neutron net-list | grep exico_test_new_2_1_net1
  | a935642d-b56c-4a87-83c5-755f01bf0814 | mexico_test_new_2_1_net1 | 
bed0330f-e0ea-4bcc-bc75-96766dad32a7 10.10.32.0/24  |
  root@ah:~#

  on network node:
  root@ah:~# ip netns | grep a935642d-b56c-4a87-83c5-755f01bf0814
  qdhcp-a935642d-b56c-4a87-83c5-755f01bf0814
  root@ah:~# ip netns exec qdhcp-a935642d-b56c-4a87-83c5-755f01bf0814 ping 
10.10.32.3
  PING 10.10.32.3 (10.10.32.3) 56(84) bytes of data.
  From 10.10.32.2 icmp_seq=1 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=2 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=3 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=4 Destination Host Unreachable>>>ISSUE

  [Root cause analysis or debug info]
  high load on controller and network node

  [Attachment]
  log files on controller/network/compute are attached

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513048] Re: the container can not be pinged via name space, after 860 tenants/networks/container created

2015-11-04 Thread Cedric Brandily
Incomplete seems better as we need more information in Liberty.

** Changed in: neutron
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513048

Title:
  the container can not be pinged via name space, after 860
  tenants/networks/container created

Status in neutron:
  Incomplete

Bug description:
  [Summary]
  the container can not be pinged via name space, after 860 
tenants/networks/container created

  [Topo]
  1 controller, 2 network nodes, 6 compute nodes, all in ubuntu 14.04
  (openstack version is 2015.1.2, linux kernel version is 3.19.0-31)
  root@ah:~# uname -a
  Linux ah.container13 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 
10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  root@ah:~# 
  root@ah:~# dpkg -l | grep neutron
  ii  neutron-common  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - common
  ii  neutron-plugin-ml2  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - ML2 plugin
  ii  neutron-server  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - server
  ii  python-neutron  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - Python library
  ii  python-neutron-fwaas2015.1.2-0ubuntu2~cloud0  
all  Firewall-as-a-Service driver for OpenStack Neutron
  ii  python-neutronclient1:2.3.11-0ubuntu1.2~cloud0
all  client - Neutron is a virtual network service for Openstack
  root@ah:~#

  [Description and expected result]
  the container should be pinged via name space

  [Reproducible or not]
  reproducible intermittently when a large number of tenants/networks/instances 
are configured

  [Recreate Steps]
  1) use a script to create: 860 tenants, 1 network/router in each tenant, 1 
cirros container in each network; all containers are associated to FIPs

  2) create one more tenant, with 1 network/container in the tenant; the
  container can be in Active state, but can not be pinged via the name space
  >>> ISSUE

  
  [Configuration]
  config files on controller/network/compute are attached

  [logs]
  instance can be in Active state:
  root@ah:~# nova --os-tenant-id 73731bbaf2db48f89a067604e3556e05 list
  +--------------------------------------+-----------------------------+--------+------------+-------------+----------------------------------------------------+
  | ID                                   | Name                        | Status | Task State | Power State | Networks                                           |
  +--------------------------------------+-----------------------------+--------+------------+-------------+----------------------------------------------------+
  | d5ba18d5-aaf9-4ed6-9a2b-71d2b2f10bae | mexico_test_new_2_1_net1_vm | ACTIVE | -          | Running     | mexico_test_new_2_1_net1=10.10.32.3, 172.168.6.211 |
  +--------------------------------------+-----------------------------+--------+------------+-------------+----------------------------------------------------+
  root@ah:~# keystone tenant-list | grep test_new_2_1
  | 73731bbaf2db48f89a067604e3556e05 | mexico_test_new_2_1 |   True  |
  root@ah:~# neutron net-list | grep exico_test_new_2_1_net1
  | a935642d-b56c-4a87-83c5-755f01bf0814 | mexico_test_new_2_1_net1 | 
bed0330f-e0ea-4bcc-bc75-96766dad32a7 10.10.32.0/24  |
  root@ah:~#

  on network node:
  root@ah:~# ip netns | grep a935642d-b56c-4a87-83c5-755f01bf0814
  qdhcp-a935642d-b56c-4a87-83c5-755f01bf0814
  root@ah:~# ip netns exec qdhcp-a935642d-b56c-4a87-83c5-755f01bf0814 ping 
10.10.32.3
  PING 10.10.32.3 (10.10.32.3) 56(84) bytes of data.
  From 10.10.32.2 icmp_seq=1 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=2 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=3 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=4 Destination Host Unreachable>>>ISSUE

  [Root cause analysis or debug info]
  high load on controller and network node

  [Attachment]
  log files on controller/network/compute are attached

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513048] Re: the container can not be pinged via name space, after 860 tenants/networks/container created

2015-11-04 Thread Cedric Brandily
It seems an explanation is required when updating the bug status

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513048

Title:
  the container can not be pinged via name space, after 860
  tenants/networks/container created

Status in neutron:
  Incomplete

Bug description:
  [Summary]
  the container can not be pinged via name space, after 860 
tenants/networks/container created

  [Topo]
  1 controller, 2 network nodes, 6 compute nodes, all in ubuntu 14.04
  (openstack version is 2015.1.2, linux kernel version is 3.19.0-31)
  root@ah:~# uname -a
  Linux ah.container13 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 
10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  root@ah:~# 
  root@ah:~# dpkg -l | grep neutron
  ii  neutron-common  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - common
  ii  neutron-plugin-ml2  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - ML2 plugin
  ii  neutron-server  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - server
  ii  python-neutron  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - Python library
  ii  python-neutron-fwaas2015.1.2-0ubuntu2~cloud0  
all  Firewall-as-a-Service driver for OpenStack Neutron
  ii  python-neutronclient1:2.3.11-0ubuntu1.2~cloud0
all  client - Neutron is a virtual network service for Openstack
  root@ah:~#

  [Description and expected result]
  the container should be pinged via name space

  [Reproducible or not]
  reproducible intermittently when a large number of tenants/networks/instances 
are configured

  [Recreate Steps]
  1) use a script to create: 860 tenants, 1 network/router in each tenant, 1 
cirros container in each network; all containers are associated to FIPs

  2) create one more tenant, with 1 network/container in the tenant; the
  container can be in Active state, but can not be pinged via the name space
  >>> ISSUE

  
  [Configuration]
  config files on controller/network/compute are attached

  [logs]
  instance can be in Active state:
  root@ah:~# nova --os-tenant-id 73731bbaf2db48f89a067604e3556e05 list
  
+--+-+++-++
  | ID   | Name| Status 
| Task State | Power State | Networks   
|
  
+--+-+++-++
  | d5ba18d5-aaf9-4ed6-9a2b-71d2b2f10bae | mexico_test_new_2_1_net1_vm | ACTIVE 
| -  | Running | mexico_test_new_2_1_net1=10.10.32.3, 172.168.6.211 
|
  
+--+-+++-++
  root@ah:~# keystone tenant-list | grep test_new_2_1
  | 73731bbaf2db48f89a067604e3556e05 | mexico_test_new_2_1 |   True  |
  root@ah:~# neutron net-list | grep exico_test_new_2_1_net1
  | a935642d-b56c-4a87-83c5-755f01bf0814 | mexico_test_new_2_1_net1 | 
bed0330f-e0ea-4bcc-bc75-96766dad32a7 10.10.32.0/24  |
  root@ah:~#

  on network node:
  root@ah:~# ip netns | grep a935642d-b56c-4a87-83c5-755f01bf0814
  qdhcp-a935642d-b56c-4a87-83c5-755f01bf0814
  root@ah:~# ip netns exec qdhcp-a935642d-b56c-4a87-83c5-755f01bf0814 ping 
10.10.32.3
  PING 10.10.32.3 (10.10.32.3) 56(84) bytes of data.
  From 10.10.32.2 icmp_seq=1 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=2 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=3 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=4 Destination Host Unreachable>>>ISSUE

  [Root cause analysis or debug info]
  high load on controller and network node

  [Attachment]
  log files on controller/network/compute are attached

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506948] Re: Release request of networking-cisco on stable/kilo: 2015.1.1

2015-11-04 Thread Sam Betts
** Changed in: networking-cisco
Milestone: None => 2.0.0

** Changed in: networking-cisco/kilo
   Status: New => Confirmed

** Changed in: networking-cisco/kilo
   Importance: Undecided => Medium

** Changed in: networking-cisco
Milestone: 2.0.0 => None

** No longer affects: networking-cisco/kilo

** Changed in: networking-cisco
Milestone: None => 1.1.0

** Changed in: networking-cisco
 Assignee: (unassigned) => Brian Demers (brian-demers)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506948

Title:
  Release request of networking-cisco on stable/kilo: 2015.1.1

Status in networking-cisco:
  Confirmed
Status in neutron:
  Confirmed

Bug description:
  
  In preparation for Liberty's semver changes, we want to re-release 2015.1.0 
as 1.0.0 (adding another tag at the same point: 
24edb9fd14584020a8b242a8b351befc5ddafb7e)

  
  New tag info:

  Branch:   stable/kilo
  From Commit:  d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6
  New Tag:  1.1.0

  This release contains the following changes:

   d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6Set default branch for 
stable/kilo
   f08fb31f20c2d8cc1e6b71784cdfd9604895e16dML2 cisco_nexus MD: VLAN not 
created on switch
   d400749e43e9d5a1fc92683b40159afce81edc95Create knob to prevent 
caching ssh connection
   0050ea7f1fb3c22214d7ca49cfe641da86123e2cBubble up exceptions when 
Nexus replay enabled
   54fca8a047810304c69990dce03052e45f21cc23Quick retry connect to 
resolve stale ncclient handle
   0c496e1d7425984bf9686b11b5c0c9c8ece23bf3Update requirements.txt for 
ML2 Nexus
   393254fcfbe3165e4253801bc3be03e15201c36dUpdate requirements.txt
   75fd522b36f7b67dc4152e461f4e5dfa26b4ff31Remove duplicate entrypoints 
in setup.cfg
   178f40f2a43192687188661d5fcedf394321e191Cisco UCSM driver updates to 
handle duplicate creations
   11f5f29af3e5c4a2ed4b42471e32db49180693dfClean up of UCS Manager 
connection handle management.
   ad010718f978763e399f0bf9a0976ba51d3334ebFix Cisco CSR1KV script 
issues
   a8c4bd753ba254b062612c1bcd85000656ebfa44Replace retry count with 
replay failure stats
   db1bd250b95abfc267c8a75891ba56105cbeed8cAdd scripts to enable CSR 
FWaaS service
   f39c6a55613a274d6d0e67409533edefbca6f9a7Fix N1kv trunk driver: same 
mac assigned to ports created
   a118483327f7a217dfedfe69da3ef91f9ec6a169Update netorking-cisco files 
for due to neutrons port dictionary subnet being replaced with
   b60296644660303fb2341ca6495611621fc486e7ML2 cisco_nexus MD: Config 
hangs when replay enabled
   76f7be8758145c61e960ed37e5c93262252f56ffMove UCSM from extension 
drivers to mech drivers
   ffabc773febb9a8df7853588ae27a4fe3bc4069bML2 cisco_nexus MD: 
Multiprocess replay issue
   77d4a60fbce7f81275c3cdd9fec3b28a1ca0c57cML2 cisco_nexus MD: If 
configured, close ssh sessions
   825cf6d1239600917f8fa545cc3745517d363838Part II-Detect switch 
failure earlier-Port Create
   9b7b57097b2bd34f42ca5adce1e3342a91b4d3f8Retry count not reset on 
successful replay
   6afe5d8a6d11db4bc2db29e6a84dc709672b1d69ML2 Nexus decomposition not 
complete for Nexus
   ac84fcb861bd594a5a3773c32e06b3e58a729308Delete fails after switch 
reset (replay off)
   97720feb4ef4d75fa190a23ac10038d29582b001Call to get nexus type for 
Nexus 9372PX fails
   87fb3d6f75f9b0ae574df17b494421126a636199Detect switch failure 
earlier during port create
   b38e47a37977634df14846ba38aa38d7239a1adcEnable the CSR1kv devstack 
plugin for Kilo
   365cd0f94e579a4c885e6ea9c94f5df241fb2288Sanitize policy profile 
table on neutron restart
   4a6a4040a71096b31ca5c283fd0df15fb87aeb38Cisco Nexus1000V: Retry 
mechanism for VSM REST calls
   7bcec734cbc658f4cd0792c625aff1a3edc73208Moved N1kv section from 
neutron tree to stackforge
   4970a3e279995faf9aff402c96d4b16796a00ef5N1Kv: Force syncing BDs on 
neutron restart
   f078a701931986a2755d340d5f4a7cc2ab095bb3s/stackforge/openstack/g
   151f6f6836491b77e0e788089e0cf9edbe9b7e00Update .gitreview file for 
project rename
   876c25fbf7e3aa7f8a44dd88560a030e609648d5Bump minor version number to 
enable development
   a5e7f6a3f0f824ec313449273cf9b283cf1fd3b9Sync notification to VSM & 
major sync refactoring

  NOTE: this is a kilo release, so I'm not sure if we should follow the
  post-versioning step from:
  http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html
  #sub-project-release-process

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1506948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team

[Yahoo-eng-team] [Bug 1513041] Re: need to wait more than 30 seconds before the network namespace can be checked on the network node when creating a network (with 860 tenants/networks/instances created)

2015-11-04 Thread IBM-Cloud-SH
It is master kilo release 2015.1.2

** Changed in: neutron
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513041

Title:
  need to wait more than 30 seconds before the network namespace can be
  checked on the network node when creating a network (with 860
  tenants/networks/instances created)

Status in neutron:
  New

Bug description:
  [Summary]
  need to wait more than 30 seconds before the network namespace can be checked
on the network node when creating a network (with 860 tenants/networks/instances
created)

  [Topo]
  1 controller, 2 network nodes, 6 compute nodes, all in ubuntu 14.04
  (openstack version is 2015.1.2, linux kernel version is 3.19.0-31)
  root@ah:~# uname -a
  Linux ah.container13 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 
10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  root@ah:~# 
  root@ah:~# dpkg -l | grep neutron
  ii  neutron-common  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - common
  ii  neutron-plugin-ml2  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - ML2 plugin
  ii  neutron-server  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - server
  ii  python-neutron  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - Python 

  library
  ii  python-neutron-fwaas2015.1.2-0ubuntu2~cloud0  
all  Firewall-as-a-Service driver for OpenStack Neutron
  ii  python-neutronclient1:2.3.11-0ubuntu1.2~cloud0
all  client - Neutron is a virtual network service for Openstack
  root@ah:~#

  [Description and expected result]
  the network namespace can be checked on the network node immediately when
creating a network

  [Reproducible or not]
  reproducible when a large number of tenants/networks/instances are configured

  [Recreate Steps]
  1) use a script to create: 860 tenants, 1 network/router in each tenant, 1
cirros container in each network; all containers are associated to FIPs

  2) create one more network; the namespace of this network can only be
  checked on the network node 30 seconds later >>> ISSUE

  
  [Configuration]
  config files for controller/network/compute are attached

  [logs]
  Post logs here.

  [Root cause analysis or debug info]
  high load on controller and network node

  [Attachment]
  log files attached

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513140] Re: block_device_mapping.connection_info is updated from None to 'null'

2015-11-04 Thread Matt Riedemann
Meh, this might actually break volume detach in the compute manager:

connection_info = jsonutils.loads(bdm.connection_info)

So let's just ignore this as won't fix for now.
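
For context, a minimal sketch of the asymmetry that makes this risky,
assuming jsonutils here is oslo_serialization.jsonutils (a thin wrapper
over the stdlib json module):

    from oslo_serialization import jsonutils

    # The stored string 'null' round-trips back to None...
    assert jsonutils.loads('null') is None

    # ...but a raw None blows up, so the loads() call in the detach path
    # above only works once connection_info was saved as a string.
    try:
        jsonutils.loads(None)
    except TypeError:
        pass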

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513140

Title:
  block_device_mapping.connection_info is updated from None to 'null'

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  While debugging bug 1489581 we were tracking some BDM updates in the
  cells API code:

  http://logs.openstack.org/66/241366/1/check/gate-tempest-dsvm-
  cells/1d7551e/logs/screen-n-cell-
  region.txt.gz#_2015-11-03_21_44_58_273

  http://logs.openstack.org/66/241366/1/check/gate-tempest-dsvm-
  cells/1d7551e/logs/screen-n-cell-
  region.txt.gz#_2015-11-03_21_44_58_332

  Which is a diff of:

  https://www.diffchecker.com/pqclw8j3

  mriedem@ubuntu:~/git$ diff bdm1.txt bdm2.txt 
  1c1
  < {u'guest_format': None, u'boot_index': 0, u'connection_info': None, 
u'snapshot_id': None, u'updated_at': u'2015-11-03T21:44:58.00', 
u'image_id': None, u'device_type': None, u'volume_id': 
u'35909d21-81b8-4fda-82b6-d3d75be61238', u'deleted_at': None, u'instance_uuid': 
u'2c9cecc1-c3db-4057-81bd-98e488c45ac2', u'no_device': False, u'created_at': 
u'2015-11-03T21:44:57.00', u'volume_size': 1, u'device_name': u'/dev/vda', 
u'disk_bus': None, u'deleted': False, u'source_type': u'volume', 
u'destination_type': u'volume', u'delete_on_termination': True}
  ---
  > {u'guest_format': None, u'boot_index': 0, u'connection_info': u'null', 
u'snapshot_id': None, u'updated_at': u'2015-11-03T21:44:58.00', 
u'image_id': None, u'device_type': u'disk', u'volume_id': 
u'35909d21-81b8-4fda-82b6-d3d75be61238', u'deleted_at': None, u'instance_uuid': 
u'2c9cecc1-c3db-4057-81bd-98e488c45ac2', u'no_device': False, u'created_at': 
u'2015-11-03T21:44:57.00', u'volume_size': 1, u'device_name': u'/dev/vda', 
u'disk_bus': u'virtio', u'deleted': False, u'source_type': u'volume', 
u'destination_type': u'volume', u'delete_on_termination': True}

  Note that the connection_info is updated from None to 'null' because
  of this code:

  https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L357

  connection_info_string = jsonutils.dumps(
  self.get('connection_info'))
  if connection_info_string != self._bdm_obj.connection_info:
  self._bdm_obj.connection_info = connection_info_string

  We shouldn't update the connection_info from None to 'null' since
  there are places in the code that expect None or a serialized dict for
  bdm.connection_info.  A string value of 'null' messes that up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1513140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506187] Re: [SRU] Azure: cloud-init should use VM unique ID

2015-11-04 Thread Dan Watkins
** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => In Progress

** Changed in: cloud-init
 Assignee: (unassigned) => Dan Watkins (daniel-thewatkins)

** Changed in: cloud-init (Ubuntu Xenial)
 Assignee: Dan Watkins (daniel-thewatkins) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1506187

Title:
  [SRU] Azure: cloud-init should use VM unique ID

Status in cloud-init:
  In Progress
Status in cloud-init package in Ubuntu:
  Confirmed
Status in cloud-init source package in Precise:
  New
Status in cloud-init source package in Trusty:
  New
Status in cloud-init source package in Vivid:
  New
Status in cloud-init source package in Wily:
  New
Status in cloud-init source package in Xenial:
  Confirmed

Bug description:
  The Azure datasource currently uses the InstanceID from the
  SharedConfig.xml file.  On our new CRP stack, this ID is not
  guaranteed to be stable and could change if the VM is deallocated.  If
  the InstanceID changes then cloud-init will attempt to reprovision the
  VM, which could result in temporary loss of access to the VM.

  Instead, cloud-init should switch to using the VM unique ID, which is
  guaranteed to be stable everywhere for the lifetime of the VM.  The VM
  unique ID is explained here: https://azure.microsoft.com/en-us/blog
  /accessing-and-using-azure-vm-unique-id/

  In short, the unique ID is available via DMI, and can be accessed with
  the command 'dmidecode | grep UUID' or even easier via sysfs in the
  file "/sys/devices/virtual/dmi/id/product_uuid".

  Steve

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1506187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513142] [NEW] cloud-init ignores configured cloud_dir for status.json and result.json data

2015-11-04 Thread Jon McKenzie
Public bug reported:

Somewhere between cloud-init-0.7.4 and cloud-init-0.7.5, the cloud-init
runtime was refactored a bit so that certain cloud-init data is dropped
statically into certain locations (/var/lib/cloud/data and /run/cloud-
init) rather than locations in the configured cloud_dir.

The particular problem crops up in the 'status_wrapper' function in the
cloud-init runtime -- although the function accepts arguments for the
'data dir' and the 'link dir' (/var/lib/cloud/data and /run/cloud-init
respectively), the actual CLI does not pass any arguments (nor does it
allow the user to pass arguments either via the CLI or read from the
cloud.cfg).

Thus, cloud-init data is scattered in all of the following locations:

* The configured cloud_dir
* /var/lib/cloud/data
* /run/cloud-init

...which is contrary to the previous behavior (0.7.4 and prior), where
everything is co-located in the cloud_dir.
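
A hypothetical sketch of the expected behavior -- deriving the data
location from the configured cloud_dir instead of hard-coding it. The
function name is illustrative and this is not cloud-init's actual API;
the system_info/paths layout of cloud.cfg is assumed:

    import os

    def status_data_dir(cloud_cfg):
        # Resolve cloud_dir from the parsed cloud.cfg, falling back to
        # the usual default, and derive the data dir from it.
        paths = cloud_cfg.get("system_info", {}).get("paths", {})
        cloud_dir = paths.get("cloud_dir", "/var/lib/cloud")
        return os.path.join(cloud_dir, "data")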

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1513142

Title:
  cloud-init ignores configured cloud_dir for status.json and
  result.json data

Status in cloud-init:
  New

Bug description:
  Somewhere between cloud-init-0.7.4 and cloud-init-0.7.5, the cloud-
  init runtime was refactored a bit so that certain cloud-init data is
  dropped statically into certain locations (/var/lib/cloud/data and
  /run/cloud-init) rather than locations in the configured cloud_dir.

  The particular problem crops up in the 'status_wrapper' function in
  the cloud-init runtime -- although the function accepts arguments for
  the 'data dir' and the 'link dir' (/var/lib/cloud/data and /run/cloud-
  init respectively), the actual CLI does not pass any arguments (nor
  does it allow the user to pass arguments either via the CLI or read
  from the cloud.cfg).

  Thus, cloud-init data is scattered in all of the following locations:

  * The configured cloud_dir
  * /var/lib/cloud/data
  * /run/cloud-init

  ...which is contrary to the previous behavior (0.7.4 and prior), where
  everything is co-located in the cloud_dir.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1513142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513144] [NEW] Allow admin to mark agents down

2015-11-04 Thread Carlos Goncalves
Public bug reported:

Cloud administrators have externally placed monitoring systems watching
different types of resources of their cloud infrastructures. A cloud
infrastructure comprises not only an OpenStack instance but also other
components not managed by and possibly not visible to OpenStack, such as
an SDN controller, physical network elements, etc.

External systems may detect a fault on one or more infrastructure
resources that subsequently may affect services being provided by
OpenStack. From a network perspective, an example of a fault can be the
crashing of openvswitch on a compute node.

When using the reference implementation (ovs + neutron-l2-agent),
neutron-l2-agent will continue reporting its state to the Neutron server
as alive (there's a heartbeat; the service is up), although there's an
internal error caused by the virtual bridge (br-int) being unreachable.
By means of monitoring tools external to OpenStack watching openvswitch,
the administrator knows something is wrong and, as a fault management
action, may want to explicitly set the agent state down.

Such an action requires a new API exposed by Neutron allowing admins to set
(true/false) the aliveness state of Neutron agents.

This feature request goes in line with the work proposed to Nova [1] and
implemented in Liberty. The same is also currently being proposed to
Cinder [2].

[1] https://blueprints.launchpad.net/nova/+spec/mark-host-down
[2] https://blueprints.launchpad.net/cinder/+spec/mark-services-down
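
Purely as a hypothetical sketch of what such an admin call could look like
(the endpoint and the "alive" field are assumptions of this proposal, not
an existing Neutron API):

    import requests

    def mark_agent_down(neutron_url, token, agent_id):
        # Assumed admin-only update; the real resource and field names
        # would be defined by the spec implementing this RFE.
        resp = requests.put(
            "%s/v2.0/agents/%s" % (neutron_url, agent_id),
            headers={"X-Auth-Token": token},
            json={"agent": {"alive": False}},
        )
        resp.raise_for_status()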

** Affects: neutron
 Importance: Undecided
 Assignee: Carlos Goncalves (cgoncalves)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => Carlos Goncalves (cgoncalves)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513144

Title:
  Allow admin to mark agents down

Status in neutron:
  New

Bug description:
  Cloud administrators have externally placed monitoring systems watching
  different types of resources of their cloud infrastructures. A cloud
  infrastructure comprises not only an OpenStack instance but also other
  components not managed by and possibly not visible to OpenStack, such as
  an SDN controller, physical network elements, etc.

  External systems may detect a fault on one or more
  infrastructure resources that subsequently may affect services being
  provided by OpenStack. From a network perspective, an example of a
  fault can be the crashing of openvswitch on a compute node.

  When using the reference implementation (ovs + neutron-l2-agent),
  neutron-l2-agent will continue reporting its state to the Neutron server
  as alive (there's a heartbeat; the service is up), although there's an
  internal error caused by the virtual bridge (br-int) being unreachable.
  By means of monitoring tools external to OpenStack watching openvswitch,
  the administrator knows something is wrong and, as a fault management
  action, may want to explicitly set the agent state down.

  Such an action requires a new API exposed by Neutron allowing admins to
  set (true/false) the aliveness state of Neutron agents.

  This feature request goes in line with the work proposed to Nova [1]
  and implemented in Liberty. The same is also currently being proposed
  to Cinder [2].

  [1] https://blueprints.launchpad.net/nova/+spec/mark-host-down
  [2] https://blueprints.launchpad.net/cinder/+spec/mark-services-down

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513041] Re: need to wait more than 30 seconds before the network namespace can be checked on the network node when creating a network (with 860 tenants/networks/instances created)

2015-11-04 Thread Cedric Brandily
It seems an explanation is required when updating the bug status

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513041

Title:
  need to wait more than 30 seconds before the network namespace can be
  checked on the network node when creating a network (with 860
  tenants/networks/instances created)

Status in neutron:
  Opinion

Bug description:
  [Summary]
  need to wait more than 30 seconds before the network namespace can be checked
on the network node when creating a network (with 860 tenants/networks/instances
created)

  [Topo]
  1 controller, 2 network nodes, 6 compute nodes, all in ubuntu 14.04
  (openstack version is 2015.1.2, linux kernel version is 3.19.0-31)
  root@ah:~# uname -a
  Linux ah.container13 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 
10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  root@ah:~# 
  root@ah:~# dpkg -l | grep neutron
  ii  neutron-common  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - common
  ii  neutron-plugin-ml2  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - ML2 plugin
  ii  neutron-server  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - server
  ii  python-neutron  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - Python 

  library
  ii  python-neutron-fwaas2015.1.2-0ubuntu2~cloud0  
all  Firewall-as-a-Service driver for OpenStack Neutron
  ii  python-neutronclient1:2.3.11-0ubuntu1.2~cloud0
all  client - Neutron is a virtual network service for Openstack
  root@ah:~#

  [Description and expected result]
  the network namespace can be checked on the network node immediately when
creating a network

  [Reproducible or not]
  reproducible when a large number of tenants/networks/instances are configured

  [Recreate Steps]
  1) use a script to create: 860 tenants, 1 network/router in each tenant, 1
cirros container in each network; all containers are associated to FIPs

  2) create one more network; the namespace of this network can only be
  checked on the network node 30 seconds later >>> ISSUE

  
  [Configuration]
  config files for controller/network/compute are attached

  [logs]
  Post logs here.

  [Root cause analysis or debug info]
  high load on controller and network node

  [Attachment]
  log files attached

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499856] Re: latest doa breaks with new db layout

2015-11-04 Thread Gilles Mocellin
** Also affects: ubuntu
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1499856

Title:
  latest doa breaks with new db layout

Status in django-openstack-auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ubuntu:
  New

Bug description:
  When upgrading to the new horizon and doa, a MySQL-backed session engine
  sees this error:

  ERRORS:
  openstack_auth.User.keystone_user_id: (mysql.E001) MySQL does not allow 
unique CharFields to have a max_length > 255.
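
For illustration, a minimal Django model that trips the same check; MySQL
cannot build a unique index over a CharField longer than 255 characters
(model and field names here are illustrative, not the actual doa code):

    from django.db import models

    class KeystoneUser(models.Model):
        # unique=True with max_length > 255 raises mysql.E001 on MySQL;
        # capping the length at 255 keeps the unique index valid.
        keystone_user_id = models.CharField(max_length=255, unique=True)

        class Meta:
            app_label = "example"  # illustrative app label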

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1499856/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491949] Re: gate-tempest-dsvm-large-ops fails to deallocate instance network due to rpc timeout

2015-11-04 Thread Matt Riedemann
** Changed in: nova
   Status: Invalid => Confirmed

** Changed in: nova
   Importance: High => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491949

Title:
  gate-tempest-dsvm-large-ops fails to deallocate instance network due
  to rpc timeout

Status in devstack:
  Fix Released
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  http://logs.openstack.org/96/219696/4/check/gate-tempest-dsvm-large-
  ops/158f061/logs/screen-n-cpu-1.txt.gz?level=TRACE

  2015-09-03 15:11:10.090 ERROR nova.compute.manager 
[req-ae96c425-a199-472f-a0db-e1b48147bb4c 
tempest-TestLargeOpsScenario-1690771764 
tempest-TestLargeOpsScenario-1474206998] [instance: 
195361d7-95c3-4740-825b-1acab707969e] Failed to deallocate network for instance.
  2015-09-03 15:11:11.051 ERROR nova.compute.manager 
[req-ae96c425-a199-472f-a0db-e1b48147bb4c 
tempest-TestLargeOpsScenario-1690771764 
tempest-TestLargeOpsScenario-1474206998] [instance: 
195361d7-95c3-4740-825b-1acab707969e] Setting instance vm_state to ERROR
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] Traceback (most recent call last):
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2396, in 
do_terminate_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._delete_instance(context, 
instance, bdms, quotas)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/hooks.py", line 149, in inner
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] rv = f(*args, **kwargs)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2375, in _delete_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] quotas.rollback()
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] six.reraise(self.type_, self.value, 
self.tb)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2338, in _delete_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._shutdown_instance(context, 
instance, bdms)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2265, in _shutdown_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._try_deallocate_network(context, 
instance, requested_networks)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2194, in 
_try_deallocate_network
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] 
self._set_instance_obj_error_state(context, instance)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] six.reraise(self.type_, self.value, 
self.tb)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2189, in 
_try_deallocate_network
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._deallocate_network(context, 
instance, requested_networks)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1812, in _deallocate_network
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] context, instance, 
requested_networks=requested_networks)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   

[Yahoo-eng-team] [Bug 1513140] [NEW] block_device_mapping.connection_info is updated from None to 'null'

2015-11-04 Thread Matt Riedemann
Public bug reported:

While debugging bug 1489581 we were tracking some BDM updates in the
cells API code:

http://logs.openstack.org/66/241366/1/check/gate-tempest-dsvm-
cells/1d7551e/logs/screen-n-cell-region.txt.gz#_2015-11-03_21_44_58_273

http://logs.openstack.org/66/241366/1/check/gate-tempest-dsvm-
cells/1d7551e/logs/screen-n-cell-region.txt.gz#_2015-11-03_21_44_58_332

Which is a diff of:

https://www.diffchecker.com/pqclw8j3

mriedem@ubuntu:~/git$ diff bdm1.txt bdm2.txt 
1c1
< {u'guest_format': None, u'boot_index': 0, u'connection_info': None, 
u'snapshot_id': None, u'updated_at': u'2015-11-03T21:44:58.00', 
u'image_id': None, u'device_type': None, u'volume_id': 
u'35909d21-81b8-4fda-82b6-d3d75be61238', u'deleted_at': None, u'instance_uuid': 
u'2c9cecc1-c3db-4057-81bd-98e488c45ac2', u'no_device': False, u'created_at': 
u'2015-11-03T21:44:57.00', u'volume_size': 1, u'device_name': u'/dev/vda', 
u'disk_bus': None, u'deleted': False, u'source_type': u'volume', 
u'destination_type': u'volume', u'delete_on_termination': True}
---
> {u'guest_format': None, u'boot_index': 0, u'connection_info': u'null', 
> u'snapshot_id': None, u'updated_at': u'2015-11-03T21:44:58.00', 
> u'image_id': None, u'device_type': u'disk', u'volume_id': 
> u'35909d21-81b8-4fda-82b6-d3d75be61238', u'deleted_at': None, 
> u'instance_uuid': u'2c9cecc1-c3db-4057-81bd-98e488c45ac2', u'no_device': 
> False, u'created_at': u'2015-11-03T21:44:57.00', u'volume_size': 1, 
> u'device_name': u'/dev/vda', u'disk_bus': u'virtio', u'deleted': False, 
> u'source_type': u'volume', u'destination_type': u'volume', 
> u'delete_on_termination': True}

Note that the connection_info is updated from None to 'null' because of
this code:

https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L357

connection_info_string = jsonutils.dumps(
self.get('connection_info'))
if connection_info_string != self._bdm_obj.connection_info:
self._bdm_obj.connection_info = connection_info_string

We shouldn't update the connection_info from None to 'null' since there
are places in the code that expect None or a serialized dict for
bdm.connection_info.  A string value of 'null' messes that up.
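
A minimal sketch of the dumps() behavior and one possible guard, assuming
jsonutils is oslo_serialization.jsonutils; the helper name is illustrative,
not nova's actual code:

    from oslo_serialization import jsonutils

    assert jsonutils.dumps(None) == 'null'

    def save_connection_info(bdm_obj, connection_info):
        # Skip the None -> 'null' rewrite described above: only persist a
        # serialized value when there is something to serialize.
        if connection_info is None and bdm_obj.connection_info is None:
            return
        serialized = jsonutils.dumps(connection_info)
        if serialized != bdm_obj.connection_info:
            bdm_obj.connection_info = serialized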

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: volumes

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513140

Title:
  block_device_mapping.connection_info is updated from None to 'null'

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  While debugging bug 1489581 we were tracking some BDM updates in the
  cells API code:

  http://logs.openstack.org/66/241366/1/check/gate-tempest-dsvm-
  cells/1d7551e/logs/screen-n-cell-
  region.txt.gz#_2015-11-03_21_44_58_273

  http://logs.openstack.org/66/241366/1/check/gate-tempest-dsvm-
  cells/1d7551e/logs/screen-n-cell-
  region.txt.gz#_2015-11-03_21_44_58_332

  Which is a diff of:

  https://www.diffchecker.com/pqclw8j3

  mriedem@ubuntu:~/git$ diff bdm1.txt bdm2.txt 
  1c1
  < {u'guest_format': None, u'boot_index': 0, u'connection_info': None, 
u'snapshot_id': None, u'updated_at': u'2015-11-03T21:44:58.00', 
u'image_id': None, u'device_type': None, u'volume_id': 
u'35909d21-81b8-4fda-82b6-d3d75be61238', u'deleted_at': None, u'instance_uuid': 
u'2c9cecc1-c3db-4057-81bd-98e488c45ac2', u'no_device': False, u'created_at': 
u'2015-11-03T21:44:57.00', u'volume_size': 1, u'device_name': u'/dev/vda', 
u'disk_bus': None, u'deleted': False, u'source_type': u'volume', 
u'destination_type': u'volume', u'delete_on_termination': True}
  ---
  > {u'guest_format': None, u'boot_index': 0, u'connection_info': u'null', 
u'snapshot_id': None, u'updated_at': u'2015-11-03T21:44:58.00', 
u'image_id': None, u'device_type': u'disk', u'volume_id': 
u'35909d21-81b8-4fda-82b6-d3d75be61238', u'deleted_at': None, u'instance_uuid': 
u'2c9cecc1-c3db-4057-81bd-98e488c45ac2', u'no_device': False, u'created_at': 
u'2015-11-03T21:44:57.00', u'volume_size': 1, u'device_name': u'/dev/vda', 
u'disk_bus': u'virtio', u'deleted': False, u'source_type': u'volume', 
u'destination_type': u'volume', u'delete_on_termination': True}

  Note that the connection_info is updated from None to 'null' because
  of this code:

  https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L357

  connection_info_string = jsonutils.dumps(
  self.get('connection_info'))
  if connection_info_string != self._bdm_obj.connection_info:
  self._bdm_obj.connection_info = connection_info_string

  We shouldn't update the connection_info from None to 'null' since
  there are places in the code that expect None or a serialized dict for
  bdm.connection_info.  A string value of 'null' messes that up.

[Yahoo-eng-team] [Bug 1506948] Re: Release request of networking-cisco on stable/kilo: 2015.1.1

2015-11-04 Thread Sam Betts
** Also affects: networking-cisco/kilo
   Importance: Undecided
   Status: New

** Changed in: networking-cisco/kilo
Milestone: None => 1.1.0

** Changed in: networking-cisco
Milestone: 1.1.0 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506948

Title:
  Release request of networking-cisco on stable/kilo: 2015.1.1

Status in networking-cisco:
  Confirmed
Status in networking-cisco kilo series:
  New
Status in neutron:
  Confirmed

Bug description:
  
  In preparation for Liberty's semver changes, we want to re-release 2015.1.0 
as 1.0.0 (adding another tag at the same point: 
24edb9fd14584020a8b242a8b351befc5ddafb7e)

  
  New tag info:

  Branch:   stable/kilo
  From Commit:  d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6
  New Tag:  1.1.0

  This release contains the following changes:

   d9b9a6421d7ff92e920ed21b01ebc7bf49e38bd6Set default branch for 
stable/kilo
   f08fb31f20c2d8cc1e6b71784cdfd9604895e16dML2 cisco_nexus MD: VLAN not 
created on switch
   d400749e43e9d5a1fc92683b40159afce81edc95Create knob to prevent 
caching ssh connection
   0050ea7f1fb3c22214d7ca49cfe641da86123e2cBubble up exceptions when 
Nexus replay enabled
   54fca8a047810304c69990dce03052e45f21cc23Quick retry connect to 
resolve stale ncclient handle
   0c496e1d7425984bf9686b11b5c0c9c8ece23bf3Update requirements.txt for 
ML2 Nexus
   393254fcfbe3165e4253801bc3be03e15201c36dUpdate requirements.txt
   75fd522b36f7b67dc4152e461f4e5dfa26b4ff31Remove duplicate entrypoints 
in setup.cfg
   178f40f2a43192687188661d5fcedf394321e191Cisco UCSM driver updates to 
handle duplicate creations
   11f5f29af3e5c4a2ed4b42471e32db49180693dfClean up of UCS Manager 
connection handle management.
   ad010718f978763e399f0bf9a0976ba51d3334ebFix Cisco CSR1KV script 
issues
   a8c4bd753ba254b062612c1bcd85000656ebfa44Replace retry count with 
replay failure stats
   db1bd250b95abfc267c8a75891ba56105cbeed8cAdd scripts to enable CSR 
FWaaS service
   f39c6a55613a274d6d0e67409533edefbca6f9a7Fix N1kv trunk driver: same 
mac assigned to ports created
   a118483327f7a217dfedfe69da3ef91f9ec6a169Update netorking-cisco files 
for due to neutrons port dictionary subnet being replaced with
   b60296644660303fb2341ca6495611621fc486e7ML2 cisco_nexus MD: Config 
hangs when replay enabled
   76f7be8758145c61e960ed37e5c93262252f56ffMove UCSM from extension 
drivers to mech drivers
   ffabc773febb9a8df7853588ae27a4fe3bc4069bML2 cisco_nexus MD: 
Multiprocess replay issue
   77d4a60fbce7f81275c3cdd9fec3b28a1ca0c57cML2 cisco_nexus MD: If 
configured, close ssh sessions
   825cf6d1239600917f8fa545cc3745517d363838Part II-Detect switch 
failure earlier-Port Create
   9b7b57097b2bd34f42ca5adce1e3342a91b4d3f8Retry count not reset on 
successful replay
   6afe5d8a6d11db4bc2db29e6a84dc709672b1d69ML2 Nexus decomposition not 
complete for Nexus
   ac84fcb861bd594a5a3773c32e06b3e58a729308Delete fails after switch 
reset (replay off)
   97720feb4ef4d75fa190a23ac10038d29582b001Call to get nexus type for 
Nexus 9372PX fails
   87fb3d6f75f9b0ae574df17b494421126a636199Detect switch failure 
earlier during port create
   b38e47a37977634df14846ba38aa38d7239a1adcEnable the CSR1kv devstack 
plugin for Kilo
   365cd0f94e579a4c885e6ea9c94f5df241fb2288Sanitize policy profile 
table on neutron restart
   4a6a4040a71096b31ca5c283fd0df15fb87aeb38Cisco Nexus1000V: Retry 
mechanism for VSM REST calls
   7bcec734cbc658f4cd0792c625aff1a3edc73208Moved N1kv section from 
neutron tree to stackforge
   4970a3e279995faf9aff402c96d4b16796a00ef5N1Kv: Force syncing BDs on 
neutron restart
   f078a701931986a2755d340d5f4a7cc2ab095bb3s/stackforge/openstack/g
   151f6f6836491b77e0e788089e0cf9edbe9b7e00Update .gitreview file for 
project rename
   876c25fbf7e3aa7f8a44dd88560a030e609648d5Bump minor version number to 
enable development
   a5e7f6a3f0f824ec313449273cf9b283cf1fd3b9Sync notification to VSM & 
major sync refactoring

  NOTE: this is a kilo release, so I'm not sure if we should follow the
  post-versioning step from:
  http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html
  #sub-project-release-process

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1506948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473909] Re: Error message during nova delete (ESXi-based devstack setup using OVSvApp solution)

2015-11-04 Thread Romil Gupta
** Tags added: networking-vsphere

** Project changed: nova => networking-vsphere

** Changed in: networking-vsphere
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1473909

Title:
  Error message during nova delete (ESXi-based devstack setup using
  OVSvApp solution)

Status in networking-vsphere:
  Confirmed

Bug description:
  I am trying "https://github.com/openstack/networking-vsphere/tree/master/devstack"
for the OVSvApp solution, consisting of 3 DVS:
  1. Trunk DVS
  2. Management DVS
  3. Uplink DVS

  I am using an ESXi-based devstack setup with vCenter. I am also working
  with stable/kilo.

  I could successfully boot an instance using nova boot.

  When I delete the same instance using nova delete, the API request is
  successful. The VM deletes after a long time, but in the meantime the
  following error occurs:

  2015-07-13 21:53:44.193 ERROR nova.network.base_api 
[req-760e73b5-9815-441d-931e-c0a57f8d32f3 None None] [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] Failed storing info cache
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] Traceback (most recent call last):
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/opt/stack/nova/nova/network/base_api.py", line 49, in 
update_instance_cache_with_nw_info
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] ic.save(update_cells=update_cells)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/opt/stack/nova/nova/objects/base.py", line 192, in wrapper
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] self._context, self, fn.__name__, 
args, kwargs)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/opt/stack/nova/nova/conductor/rpcapi.py", line 340, in object_action
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] objmethod=objmethod, args=args, 
kwargs=kwargs)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
156, in call
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] retry=self.retry)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] timeout=timeout, retry=retry)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 350, in send
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] retry=retry)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 341, in _send
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] raise result
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] InstanceInfoCacheNotFound_Remote: Info 
cache for instance 06a3de55-285d-4d0d-953e-7f99aed28e95 could not be found.
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] Traceback (most recent call last):
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/opt/stack/nova/nova/conductor/manager.py", line 422, in _object_dispatch
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] return getattr(target, method)(*args, 
**kwargs)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/opt/stack/nova/nova/objects/base.py", line 208, in wrapper
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] return fn(self, *args, **kwargs)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]
  2015-07-13 

[Yahoo-eng-team] [Bug 1513000] [NEW] neutron q-lbaas cannot start - ValueError: Empty module name

2015-11-04 Thread Victor Laza
Public bug reported:

When trying to start neutron q-lbaas on ubuntu devstack we receive this
error and were unable to track it to its source:

ubuntu@nov-dvs-241480-1:~/devstack$ /usr/local/bin/neutron-lbaas-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/services/loadbalancer/haproxy/lbaas_agent.ini & echo $! >/opt/stack/status/stack/q-lbaas.pid; fg || echo "q-lbaas failed to start" | tee "/opt/stack/status/stack/q-lbaas.failure"
[1] 6390
/usr/local/bin/neutron-lbaas-agent --config-file /etc/neutron/neutron.conf 
--config-file=/etc/neutron/services/loadbalancer/haproxy/lbaas_agent.ini
No handlers could be found for logger "oslo_config.cfg"
2015-11-04 07:53:37.026 6390 INFO neutron.common.config [-] Logging enabled!
2015-11-04 07:53:37.027 6390 INFO neutron.common.config [-] 
/usr/local/bin/neutron-lbaas-agent version 8.0.0.dev226
2015-11-04 07:53:37.027 6390 DEBUG neutron.common.config [-] command line: 
/usr/local/bin/neutron-lbaas-agent --config-file /etc/neutron/neutron.conf 
--config-file=/etc/neutron/services/loadbalancer/haproxy/lbaas_agent.ini 
setup_logging /opt/stack/neutron/neutron/common/config.py:191
2015-11-04 07:53:37.031 CRITICAL neutron 
[req-0c93d2b0-c58e-47cf-ad94-836d71a21e81 None None] ValueError: Empty module 
name
2015-11-04 07:53:37.031 6390 ERROR neutron Traceback (most recent call last):
2015-11-04 07:53:37.031 6390 ERROR neutron   File 
"/usr/local/bin/neutron-lbaas-agent", line 10, in 
2015-11-04 07:53:37.031 6390 ERROR neutron sys.exit(main())
2015-11-04 07:53:37.031 6390 ERROR neutron   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/agent/agent.py", 
line 61, in main
2015-11-04 07:53:37.031 6390 ERROR neutron mgr = 
manager.LbaasAgentManager(cfg.CONF)
2015-11-04 07:53:37.031 6390 ERROR neutron   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 70, in __init__
2015-11-04 07:53:37.031 6390 ERROR neutron self._load_drivers()
2015-11-04 07:53:37.031 6390 ERROR neutron   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/agent/agent_manager.py",
 line 95, in _load_drivers
2015-11-04 07:53:37.031 6390 ERROR neutron self.plugin_rpc
2015-11-04 07:53:37.031 6390 ERROR neutron   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 38, in 
import_object
2015-11-04 07:53:37.031 6390 ERROR neutron return 
import_class(import_str)(*args, **kwargs)
2015-11-04 07:53:37.031 6390 ERROR neutron   File 
"/opt/stack/neutron-lbaas/neutron_lbaas/services/loadbalancer/drivers/haproxy/namespace_driver.py",
 line 71, in __init__
2015-11-04 07:53:37.031 6390 ERROR neutron vif_driver = 
importutils.import_object(conf.interface_driver, conf)
2015-11-04 07:53:37.031 6390 ERROR neutron   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 38, in 
import_object
2015-11-04 07:53:37.031 6390 ERROR neutron return 
import_class(import_str)(*args, **kwargs)
2015-11-04 07:53:37.031 6390 ERROR neutron   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 27, in 
import_class
2015-11-04 07:53:37.031 6390 ERROR neutron __import__(mod_str)
2015-11-04 07:53:37.031 6390 ERROR neutron ValueError: Empty module name
2015-11-04 07:53:37.031 6390 ERROR neutron 
q-lbaas failed to start
ubuntu@nov-dvs-241480-1:~/devstack$
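
The traceback bottoms out in __import__(mod_str) with an empty module name,
which is what oslo_utils does when handed an empty dotted path. A minimal
reproduction (that conf.interface_driver is unset/empty in lbaas_agent.ini
is an assumption consistent with the trace above):

    from oslo_utils import importutils

    try:
        # An empty interface_driver value reaches __import__('') inside
        # import_class(), matching the trace above.
        importutils.import_class("")
    except ValueError as e:
        print(e)  # Empty module name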

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: devstack ubuntu

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513000

Title:
  neutron q-lbaas cannot start - ValueError: Empty module name

Status in neutron:
  New

Bug description:
  When trying to start neutron q-lbaas on ubuntu devstack we receive
  this error and were unable to track it to its source:

  ubuntu@nov-dvs-241480-1:~/devstack$ /usr/local/bin/neutron-lbaas-agent --config-file /etc/neutron/neutron.conf --config-file=/etc/neutron/services/loadbalancer/haproxy/lbaas_agent.ini & echo $! >/opt/stack/status/stack/q-lbaas.pid; fg || echo "q-lbaas failed to start" | tee "/opt/stack/status/stack/q-lbaas.failure"
  [1] 6390
  /usr/local/bin/neutron-lbaas-agent --config-file /etc/neutron/neutron.conf 
--config-file=/etc/neutron/services/loadbalancer/haproxy/lbaas_agent.ini
  No handlers could be found for logger "oslo_config.cfg"
  2015-11-04 07:53:37.026 6390 INFO neutron.common.config [-] Logging enabled!
  2015-11-04 07:53:37.027 6390 INFO neutron.common.config [-] 
/usr/local/bin/neutron-lbaas-agent version 8.0.0.dev226
  2015-11-04 07:53:37.027 6390 DEBUG neutron.common.config [-] command line: 
/usr/local/bin/neutron-lbaas-agent --config-file /etc/neutron/neutron.conf 
--config-file=/etc/neutron/services/loadbalancer/haproxy/lbaas_agent.ini 
setup_logging /opt/stack/neutron/neutron/common/config.py:191
  2015-11-04 07:53:37.031 CRITICAL neutron 

[Yahoo-eng-team] [Bug 1425108] Re: private _get_children() in sql backend doesn't support passing None values

2015-11-04 Thread Rodrigo Duarte
This is not a valid situation anymore.

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1425108

Title:
  private _get_children() in sql backend doesn't support passing None
  values

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  The _get_children() method [1] uses the "in_" clause, which doesn't
  support passing None as part of the list (it is not considered).
  Passing None is a valid situation if we want to query for all root
  projects in the hierarchy.

  [1] https://github.com/openstack/keystone/blob/master/keystone/resource/backends/sql.py#L86
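
A short sketch of the limitation and a common SQLAlchemy workaround
(illustrative, not the keystone fix): SQL's IN (...) never matches NULL,
so a None parent has to be handled with an explicit IS NULL test:

    from sqlalchemy import or_

    def filter_children(query, model, parent_ids):
        ids = [i for i in parent_ids if i is not None]
        cond = model.parent_id.in_(ids)
        if None in parent_ids:
            # in_() silently drops None, so OR in the NULL case to also
            # match root projects.
            cond = or_(cond, model.parent_id.is_(None))
        return query.filter(cond)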

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1425108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504726] Re: The vm can not access the vip of load balancer under DVR enviroment

2015-11-04 Thread Kyle Mestery
This is not valid in Kilo, as the issue is addressed per Swami in
comment #10.

Kilo is under security-only patches now, so it's not clear we can merge
this there.

** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron/kilo
   Importance: Undecided => High

** Changed in: neutron/kilo
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504726

Title:
  The vm can not access the vip of load balancer under DVR enviroment

Status in neutron:
  Invalid
Status in neutron kilo series:
  Triaged

Bug description:
  Version
  ===
  Kilo

  Description
  ===
  The VIP is on the 192.168.1.0/24 subnet, and the VM is on the 192.168.2.0/24
subnet. There is a router connected to the two subnets. On the compute node the
VM belongs to, the DVR l3-agent does not have an ARP entry for the VIP address,
so the VM cannot reach the VIP through the router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504726/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513179] [NEW] create volume workflow does not compute quota usage correctly

2015-11-04 Thread Doug Fish
Public bug reported:

The cinder quota on gigabytes covers the sum of both volumes and snapshots.
This is not correctly reflected in the create volume dialog, which
allows the user to attempt to create volumes when there is not enough
quota available, resulting in a useless error message.

To recreate the problem:
1) on Project->Compute->Volumes create a 1G empty volume
2) on the same panel create a snapshot of the new volume
3) on Identity->Projects->[your project] choose Modify Quota and set the quota 
for "Total Size of Volumes and Snapshots (GB) " to 3G.
4) Note that the quota usage (2 of 3) is correctly reflected on 
Project->Compute->Overview
5) on Project->Compute->Volumes click Create Volume
*Note that the quota is not accurately reflected
6) Attempt to create a new volume of size 2G.
*Note the obscure failure message "Error: Unable to create volume."
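
A minimal sketch of the availability check the dialog should make, per the
description above (helper and field names are illustrative):

    def available_gigabytes(quota_total, volumes, snapshots):
        # The gigabytes quota counts volumes AND snapshots together.
        used = (sum(v['size'] for v in volumes) +
                sum(s['size'] for s in snapshots))
        return quota_total - used

With the recreate steps above, a 3G quota minus the 1G volume and its 1G
snapshot leaves 1G available, so the dialog should reject the 2G request up
front instead of surfacing the generic create error.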

** Affects: horizon
 Importance: Undecided
 Assignee: Doug Fish (drfish)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1513179

Title:
  create volume workflow does not compute quota usage correctly

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The cinder quota on gigabytes covers the sum of both volumes and
  snapshots. This is not correctly reflected in the create volume
  dialog, which allows the user to attempt to create volumes when there
  is not enough quota available, resulting in a useless error message.

  To recreate the problem:
  1) on Project->Compute->Volumes create a 1G empty volume
  2) on the same panel create a snapshot of the new volume
  3) on Identity->Projects->[your project] choose Modify Quota and set the 
quota for "Total Size of Volumes and Snapshots (GB) " to 3G.
  4) Note that the quota usage (2 of 3) is correctly reflected on 
Project->Compute->Overview
  5) on Project->Compute->Volumes click Create Volume
  *Note that the quota is not accurately reflected
  6) Attempt to create a new volume of size 2G.
  *Note the obscure failure message "Error: Unable to create volume."

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1513179/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp