[Yahoo-eng-team] [Bug 1374919] [NEW] instance.security_groups must be a list

2014-09-28 Thread Thomas Goirand
Public bug reported:

In Django 1.7, the following test failures appear:

==
ERROR: test_instance_details_volume_sorting 
(openstack_dashboard.dashboards.project.instances.tests.InstanceTests)
--
Traceback (most recent call last):
  File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/openstack_dashboard/dashboards/project/instances/tests.py,
 line 704, in test_instance_details_volume_sorting
security_groups_return=security_group)
  File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/openstack_dashboard/test/helpers.py,
 line 80, in instance_stub_out
return fn(self, *args, **kwargs)
  File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/openstack_dashboard/dashboards/project/instances/tests.py,
 line 684, in _get_instance_details
return self.client.get(url)
  File /usr/lib/python2.7/dist-packages/django/test/client.py, line 467, in 
get
**extra)
  File /usr/lib/python2.7/dist-packages/django/test/client.py, line 285, in 
get
return self.generic('GET', path, secure=secure, **r)
  File /usr/lib/python2.7/dist-packages/django/test/client.py, line 355, in 
generic
return self.request(**r)
  File /usr/lib/python2.7/dist-packages/django/test/client.py, line 437, in 
request
six.reraise(*exc_info)
  File /usr/lib/python2.7/dist-packages/django/core/handlers/base.py, line 
111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/horizon/decorators.py,
 line 36, in dec
return view_func(request, *args, **kwargs)
  File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/horizon/decorators.py,
 line 52, in dec
return view_func(request, *args, **kwargs)
  File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/horizon/decorators.py,
 line 36, in dec
return view_func(request, *args, **kwargs)
  File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/horizon/decorators.py,
 line 84, in dec
return view_func(request, *args, **kwargs)
  File /usr/lib/python2.7/dist-packages/django/views/generic/base.py, line 
69, in view
return self.dispatch(request, *args, **kwargs)
  File /usr/lib/python2.7/dist-packages/django/views/generic/base.py, line 
87, in dispatch
return handler(request, *args, **kwargs)
  File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/horizon/tabs/views.py,
 line 72, in get
return self.handle_tabbed_response(context[tab_group], context)
  File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/horizon/tabs/views.py,
 line 68, in handle_tabbed_response
return self.render_to_response(context)
  File 
/home/zigo/sources/openstack/juno/horizon/build-area/horizon-2014.2~b3/horizon/tabs/views.py,
 line 81, in render_to_response
response.render()
  File /usr/lib/python2.7/dist-packages/django/template/response.py, line 
103, in render
self.content = self.rendered_content
  File /usr/lib/python2.7/dist-packages/django/template/response.py, line 80, 
in rendered_content
content = template.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 148, in 
render
return self._render(context)
  File /usr/lib/python2.7/dist-packages/django/test/utils.py, line 88, in 
instrumented_test_render
return self.nodelist.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 844, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 858, in 
render_node
return node.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/loader_tags.py, line 
126, in render
return compiled_parent._render(context)
  File /usr/lib/python2.7/dist-packages/django/test/utils.py, line 88, in 
instrumented_test_render
return self.nodelist.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 844, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 858, in 
render_node
return node.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/loader_tags.py, line 
65, in render
result = block.nodelist.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 844, in 
render
bit = self.render_node(node, context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 858, in 
render_node
return node.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/loader_tags.py, line 
65, in render
result = block.nodelist.render(context)
  File /usr/lib/python2.7/dist-packages/django/template/base.py, line 844, in 
render
bit = 
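
The traceback above is truncated by the digest, but the bug title points at the likely cause: under Django 1.7, instance.security_groups must be a list, while the test stub passes a single security group object. A possible test-side sketch of the fix follows; the class and helper names are illustrative, not Horizon's actual code.

```python
# Illustrative sketch only: Django 1.7's template layer iterates
# instance.security_groups, so a bare object (rather than a list)
# breaks rendering.  FakeInstance and stub_security_groups are
# hypothetical stand-ins for the Horizon test helpers.
class FakeInstance(object):
    pass

def stub_security_groups(instance, security_groups_return):
    # Defensively wrap a single security group in a list.
    if not isinstance(security_groups_return, list):
        security_groups_return = [security_groups_return]
    instance.security_groups = security_groups_return
    return instance
```

With such a wrap, both `security_groups_return=security_group` and `security_groups_return=[security_group]` yield a list attribute.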

[Yahoo-eng-team] [Bug 1374931] [NEW] No response on launch an instance from volume snapshot

2014-09-28 Thread luogangyi
Public bug reported:

When I try to launch an instance from a snapshot image (which was a
snapshot of a volume-backed instance), the 'Launch' button does not
respond.

How to reproduce:
1. Launch an instance using "Boot from image (creates a new volume)".
2. Take a snapshot of this instance; this operation produces an image
whose size is 0 bytes.
3. Launch an instance using the image generated in step 2; the 'Launch'
button does not respond.

If I open the web browser console, I can see the error message
"An invalid form control with name='volume_size' is not focusable."
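The browser error suggests the hidden volume_size input is still marked required even though a 0-byte snapshot image gives it no satisfiable value; HTML5 constraint validation then blocks submission and cannot focus the hidden control. A minimal, purely illustrative sketch of the conditional-requirement logic (the function and names are hypothetical, not Horizon's code):

```python
# Hypothetical sketch: decide whether the volume_size form field should
# be marked "required".  For a snapshot image of a volume-backed
# instance the reported image size is 0 bytes, so requiring a value on
# the (hidden) field makes it unsatisfiable and the browser reports
# "An invalid form control ... is not focusable".
def volume_size_required(source_type, image_size_bytes):
    if source_type != "volume_snapshot_image":
        return True
    # Fall back to server-side validation when the image carries no
    # usable size hint.
    return image_size_bytes > 0
```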

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: snapshot volume-backed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1374931

Title:
  No response on launch an instance from volume snapshot

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I try to launch an instance of snapshot image (which was a
  snapshot of a volume-backed instance), the 'launch button' has no
  response.

  How to reproduce:
  1. Launch an instance by using boot from image(create new volume)
  2. Take a snapshot of this instance, this operation will produce a image 
which size is 0 byte.
  3. Launch an instance by using the image generated in step 2, the 'launch 
button' has no response.

  If I open the console of web browser, I can see the Error Message
   An invalid form control with name='volume_size' is not focusable. 

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1374931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373868] Re: Should we allow all networks use allowed address pairs?

2014-09-28 Thread Eugene Nikanorov
Neutron already has a max_allowed_address_pair configuration option in
neutron.conf. The default limit is 10. However, it is not related to shared
networks; it is a per-port limitation.

I think it is worth reaching out to the openstack-dev mailing list to start
a thread about this, and then filing a bug based on the discussion.

Marking as invalid.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373868

Title:
  Should we allow all networks use allowed address pairs?

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Now we can add an allowed address pair to every network's port if the
  allowed-address-pairs extension is enabled.

  This will cause a security problem in a shared network, I think.

  So we should add a limit for shared networks, or add a config entry in
  neutron.conf, so that the administrator
  can disable allowed address pairs on some networks' ports.
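The per-port cap mentioned in the reply above (max_allowed_address_pair, default 10) can be sketched as a simple validation; this is an illustration of the behaviour, not Neutron's actual code, and the exception and function names are hypothetical.

```python
# Illustrative per-port allowed-address-pairs cap, modelled on the
# max_allowed_address_pair option (default 10).  Not Neutron's code.
class AllowedAddressPairExhausted(Exception):
    pass

def check_allowed_address_pairs(pairs, max_pairs=10):
    if len(pairs) > max_pairs:
        raise AllowedAddressPairExhausted(
            "%d allowed address pairs on one port exceeds the limit of %d"
            % (len(pairs), max_pairs))
    return pairs
```

A per-network or shared-network restriction, as the reporter suggests, would need an additional check keyed on the network, which this sketch does not attempt.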

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373868/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373820] Re: --sort-key option for neutron cli does not always work

2014-09-28 Thread Eugene Nikanorov
*** This bug is a duplicate of bug 1245337 ***
https://bugs.launchpad.net/bugs/1245337

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** This bug has been marked a duplicate of bug 1245337
   Neutron cmd net-list option: sort-key+sort-dir does not work as expected

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373820

Title:
  --sort-key option for neutron cli does not always work

Status in OpenStack Neutron (virtual network service):
  New
Status in Python client library for Neutron:
  New

Bug description:
  $ neutron security-group-list --sort-key name
  +--------------------------------------+-----------------+-----------------------+
  | id                                   | name            | description           |
  +--------------------------------------+-----------------+-----------------------+
  | 3d2b51cf-8f30-4be8-b720-364e62c0ca45 | als-Core-Router | Inbound User Traffic  |
  | 9d9598da-27f8-46fc-9f5a-72e0968a2e2c | als-Core-Router | Inbound User Traffic  |
  | eb39ab0f-3974-4fa5-a7ad-6e94caec29c7 | als-Internal    | Intra-Cluster Traffic |
  +--------------------------------------+-----------------+-----------------------+

  However, it does not work with other neutron commands, although their
  help text describes it.

  For example, --sort-key on neutron net-list or floatingip-list does
  not work, as shown below:
  $ neutron net-list --sort-key name
  +--------------------------------------+---------+------------------------------------------------------+
  | id                                   | name    | subnets                                              |
  +--------------------------------------+---------+------------------------------------------------------+
  | 0d520976-480d-4e56-8dc9-f550eab660ee | SVC     | f06cdf57-dff4-4a93-823f-39fa534f2409 10.9.236.192/26 |
  | 123f7ac9-f357-407a-be27-cacde4f62476 | umanet  | 16232cfa-520c-4f33-8db2-a6754729dbe2 198.51.100.0/24 |
  | 18f732b3-1242-46ce-beb7-875703c10c3d | Mnet    | 4091477b-6961-4cbe-b08a-e22a0ac6ab25 10.0.6.0/24     |
  | 20d9167b-a5a2-49c9-adb8-b12cbf9ca73c | ext-net | 21f4bc85-3ec8-4c16-86f6-0a22a8d4b6ef 10.9.236.0/26   |
  +--------------------------------------+---------+------------------------------------------------------+

  $ neutron floatingip-list --sort-key floating_ip_address
  +--------------------------------------+------------------+---------------------+--------------------------------------+
  | id                                   | fixed_ip_address | floating_ip_address | port_id                              |
  +--------------------------------------+------------------+---------------------+--------------------------------------+
  | 0541b567-40ba-451b-93d5-27886eb4     | 172.17.0.25      | 10.9.236.31         | 6e3ae31b-1ebd-42d9-8f8e-c3e057f5736f |
  | 08bf0e75-7307-4e23-95bf-6ca1705c406d | 198.51.100.5     | 10.9.236.4          | a70cd4b7-2657-4cbc-8601-5e936cfecfae |
  | 16d60b41-e113-4bf2-8f46-1f06f2545639 | 10.0.7.2         | 10.9.236.56         | 7fe623b2-ac6e-465d-a916-75febb32e1a9 |
  | 251ff272-308c-4fc3-826e-d08d9ab68495 | 172.17.0.10      | 10.9.236.8          | 0b42886f-7c50-4feb-9d31-b2e10a1a4f5d |
  | 2bca19f3-b991-486e-99c7-771994a1347e |                  | 10.9.236.10         |                                      |
  +--------------------------------------+------------------+---------------------+--------------------------------------+
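Until the server honours --sort-key for these resources, results can also be ordered client-side after listing; a minimal sketch (not part of python-neutronclient):

```python
# Client-side fallback sort for API listings that ignore --sort-key.
# Resources are plain dicts as returned by the client; entries missing
# the key sort first via the empty-string default.
def sort_resources(resources, sort_key, reverse=False):
    return sorted(resources,
                  key=lambda r: r.get(sort_key) or "",
                  reverse=reverse)
```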

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374947] [NEW] HA should have integration tests

2014-09-28 Thread John Schwarz
Public bug reported:

Current HA related code should have integration tests merged to
upstream. All patches relevant to HA integration tests should be related
to this bug, until a proper blueprint is written for Kilo.

** Affects: neutron
 Importance: Undecided
 Assignee: John Schwarz (jschwarz)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => John Schwarz (jschwarz)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374947

Title:
  HA should have integration tests

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Current HA related code should have integration tests merged to
  upstream. All patches relevant to HA integration tests should be
  related to this bug, until a proper blueprint is written for Kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374947/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374946] [NEW] HA should have functional tests

2014-09-28 Thread John Schwarz
Public bug reported:

Current HA related code should have functional tests merged to upstream.
All patches relevant to HA functional tests should be related to this
bug.

** Affects: neutron
 Importance: Medium
 Assignee: John Schwarz (jschwarz)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => John Schwarz (jschwarz)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374946

Title:
  HA should have functional tests

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Current HA related code should have functional tests merged to
  upstream. All patches relevant to HA functional tests should be
  related to this bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372666] Re: list_ports().get() times out waiting for response from Neutron API in TestSecurityGroupsBasicOps

2014-09-28 Thread Eugene Nikanorov
Indeed there is a gap of 48 seconds after the last 'Got semaphore
"db-access" lock' message.

** Changed in: neutron
   Importance: Undecided => High

** No longer affects: python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372666

Title:
  list_ports().get() times out waiting for response from Neutron API in
  TestSecurityGroupsBasicOps

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  This request failed:

  http://logs.openstack.org/12/123112/1/check/check-tempest-dsvm-
  neutron-full/cdb7110/logs/screen-n-api.txt.gz#_2014-09-22_14_16_01_028

  2014-09-22 14:16:01.028 DEBUG nova.api.openstack.wsgi 
[req-bb64d882-d91e-4bff-9407-19277208e277 TestSecurityGroupsBasicOps-454747816 
TestSecurityGroupsBasicOps-1777134551] Calling method 'bound method 
Controller.show of nova.api.openstack.compute.servers.Controller object at 
0x7f05ee9db610' _process_stack 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:935
  2014-09-22 14:16:01.063 DEBUG neutronclient.client 
[req-bb64d882-d91e-4bff-9407-19277208e277 TestSecurityGroupsBasicOps-454747816 
TestSecurityGroupsBasicOps-1777134551] 
  REQ: curl -i 
http://127.0.0.1:9696/v2.0/ports.json?device_id=40737ad4-4513-4027-b031-cf7cf519d5b5
 -X GET -H X-Auth-Token: 916a5769e0ba42339f45c3f6bb00f147 -H User-Agent: 
python-neutronclient
   http_log_req 
/opt/stack/new/python-neutronclient/neutronclient/common/utils.py:140
  2014-09-22 14:16:31.065 DEBUG neutronclient.client 
[req-bb64d882-d91e-4bff-9407-19277208e277 TestSecurityGroupsBasicOps-454747816 
TestSecurityGroupsBasicOps-1777134551] throwing ConnectionFailed : 
HTTPConnectionPool(host='127.0.0.1', port=9696): Read timed out. (read 
timeout=30) _cs_request 
/opt/stack/new/python-neutronclient/neutronclient/client.py:132
  2014-09-22 14:16:48.360 ERROR nova.api.openstack 
[req-bb64d882-d91e-4bff-9407-19277208e277 TestSecurityGroupsBasicOps-454747816 
TestSecurityGroupsBasicOps-1777134551] Caught error: Connection to neutron 
failed: HTTPConnectionPool(host='127.0.0.1', port=9696): Read timed out. (read 
timeout=30)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack Traceback (most recent 
call last):
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
/opt/stack/new/nova/nova/api/openstack/__init__.py, line 124, in __call__
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py, line 
646, in __call__
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py, line 
624, in _call_app
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/routes/middleware.py, line 131, in 
__call__
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack response = 
self.app(environ, start_response)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack resp = 

[Yahoo-eng-team] [Bug 1372305] Re: Haproxy restart leads to incorrect Ceilometer LBaas related statistics

2014-09-28 Thread Eugene Nikanorov
Considering that a major change is going to happen to the LBaaS API and
implementation, I wonder if it makes sense to put effort into fixing
this.

** Tags added: lbaas

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372305

Title:
  Haproxy restart leads to incorrect Ceilometer LBaas related statistics

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  Ceilometer uses LBaaS API to collect load balance related statistics
  like bytes-in and bytes-out, then LBaaS plugin collects such counters
  from haproxy process via stats socket. However, when LBaaS object is
  updated, LBaaS agent will reconfigure haproxy and then restart haproxy
  process. All the counters will be cleared, which leads to incorrect
  statistics.
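
One mitigation on the metering side would be to treat the haproxy counters as resettable monotonics and accumulate deltas, so that a restart (the counter falling back toward zero) does not discard traffic already counted. This is a sketch of the technique only, not Ceilometer's implementation:

```python
# Accumulate a counter that may reset to zero (e.g. haproxy bytes-in
# after an agent-triggered restart).  When the raw value goes
# backwards, assume a reset and add the new raw value on top of the
# running total instead of taking a negative delta.
class MonotonicAccumulator(object):
    def __init__(self):
        self.total = 0
        self._last = None

    def update(self, raw):
        if self._last is None:
            self.total = raw
        elif raw >= self._last:
            self.total += raw - self._last
        else:
            # Counter went backwards: the process restarted.
            self.total += raw
        self._last = raw
        return self.total
```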

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370348] Re: Using macvtap vnic_type is not working with vif_type=hw_veb

2014-09-28 Thread Eugene Nikanorov
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370348

Title:
  Using macvtap vnic_type is not working with vif_type=hw_veb

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  When trying to boot an instance with a port using vnic_type=macvtap
  and vif_type=hw_veb I get this error in Compute log:

  TRACE nova.compute.manager libvirtError: unsupported configuration:
  an interface of type 'direct' is requesting a vlan tag, but that is
  not supported for this type of connection

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374995] [NEW] VLAN underlay support for DVR (Distributed Virtual Router) for OVS L2 Agent

2014-09-28 Thread Vivekanandan Narasimhan
Public bug reported:

This bug is the placeholder for check-in that will enable VLAN underlay
support for DVR, for OVS L2 Agent.

** Affects: neutron
 Importance: Undecided
 Assignee: Vivekanandan Narasimhan (vivekanandan-narasimhan)
 Status: New


** Tags: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Vivekanandan Narasimhan (vivekanandan-narasimhan)

** Summary changed:

- VLAN underlay support for DVR (Distributed Virtual Router)
+ VLAN underlay support for DVR (Distributed Virtual Router) for OVS L2 Agent

** Description changed:

  This bug is the placeholder for check-in that will enable VLAN underlay
- support for DVR.
+ support for DVR, for OVS L2 Agent.

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374995

Title:
  VLAN underlay support for DVR (Distributed Virtual Router) for OVS L2
  Agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This bug is the placeholder for check-in that will enable VLAN
  underlay support for DVR, for OVS L2 Agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374995/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374999] [NEW] iSCSI volume detach does not correctly remove the multipath device descriptors

2014-09-28 Thread Sampath Priyankara
Public bug reported:

iSCSI volume detach does not correctly remove the multipath device
descriptors

tested environment:
nova-compute on Ubuntu 14.04.1, with iscsi_use_multipath=True and an EMC
VNX 5300 iSCSI volume backend.

I created 3 cinder volumes and attached them to a nova instance, then
detached them one by one. The first 2 volumes detached successfully. The
3rd volume also detached successfully, but ended up with failed multipath
devices.
Here is the terminal log for the last volume detach.

openstack@W1DEV103:~/devstack$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| 56a63288-5cc0-4f5c-9197-cde731172dd8 | in-use    | None | 1    | None        | false    | 5bd68785-4acf-43ab-ae13-11b1edc3a62e |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
openstack@W1CN103:/etc/iscsi$ date;sudo multipath -l
Fri Sep 19 21:38:13 JST 2014
360060160cf0036002d1475f6e73fe411 dm-2 DGC,VRAID
size=1.0G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=-1 status=active
| |- 4:0:0:42 sdb 8:16 active undef running
| |- 5:0:0:42 sdd 8:48 active undef running
| |- 6:0:0:42 sdf 8:80 active undef running
| `- 7:0:0:42 sdh 8:112 active undef running
`-+- policy='round-robin 0' prio=-1 status=enabled
|- 11:0:0:42 sdp 8:240 active undef running
|- 8:0:0:42 sdj 8:144 active undef running
|- 9:0:0:42 sdl 8:176 active undef running
`- 10:0:0:42 sdn 8:208 active undef running
openstack@W1CN103:/etc/iscsi$ date;sudo iscsiadm -m session
Fri Sep 19 21:38:19 JST 2014
tcp: [10] 172.23.58.228:3260,4 iqn.1992-04.com.emc:cx.fcn00133400150.a7
tcp: [3] 172.23.58.238:3260,8 iqn.1992-04.com.emc:cx.fcn00133400150.b7
tcp: [4] 172.23.58.235:3260,20 iqn.1992-04.com.emc:cx.fcn00133400150.b4
tcp: [5] 172.23.58.236:3260,6 iqn.1992-04.com.emc:cx.fcn00133400150.b5
tcp: [6] 172.23.58.237:3260,19 iqn.1992-04.com.emc:cx.fcn00133400150.b6
tcp: [7] 172.23.58.225:3260,16 iqn.1992-04.com.emc:cx.fcn00133400150.a4
tcp: [8] 172.23.58.226:3260,2 iqn.1992-04.com.emc:cx.fcn00133400150.a5
tcp: [9] 172.23.58.227:3260,17 iqn.1992-04.com.emc:cx.fcn00133400150.a6

openstack@W1DEV103:~/devstack$ nova volume-detach 
5bd68785-4acf-43ab-ae13-11b1edc3a62e
56a63288-5cc0-4f5c-9197-cde731172dd8
openstack@W1DEV103:~/devstack$
openstack@W1DEV103:~/devstack$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| 56a63288-5cc0-4f5c-9197-cde731172dd8 | detaching | None | 1    | None        | false    | 5bd68785-4acf-43ab-ae13-11b1edc3a62e |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
openstack@W1DEV103:~/devstack$
openstack@W1DEV103:~/devstack$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
| 56a63288-5cc0-4f5c-9197-cde731172dd8 | available | None | 1    | None        | false    |                                      |
+--------------------------------------+-----------+------+------+-------------+----------+--------------------------------------+
openstack@W1CN103:/etc/iscsi$ date;sudo multipath -l
Fri Sep 19 21:39:23 JST 2014
360060160cf0036002d1475f6e73fe411 dm-2 ,
size=1.0G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=-1 status=active
| |- #:#:#:# - #:# active undef running
| |- #:#:#:# - #:# active undef running
| |- #:#:#:# - #:# active undef running
| `- #:#:#:# - #:# active undef running
`-+- policy='round-robin 0' prio=-1 status=enabled
|- #:#:#:# - #:# active undef running
|- #:#:#:# - #:# active undef running
|- #:#:#:# - #:# active undef running
`- #:#:#:# - #:# active undef running
openstack@W1CN103:/etc/iscsi$ date;sudo iscsiadm -m session
Fri Sep 19 21:39:27 JST 2014
iscsiadm: No active sessions.

Then I manually removed the multipaths,
openstack@W1CN103:/etc/iscsi$ sudo multipath -f 
360060160cf0036002d1475f6e73fe411
openstack@W1CN103:/etc/iscsi$ sudo multipath -l
openstack@W1CN103:/etc/iscsi$

I think the problem is in
virt/libvirt/volume.py:LibvirtISCSIVolumeDriver
 def _disconnect_volume_multipath_iscsi(self, iscsi_properties,
multipath_device):

The end of this method executes the following code, which calls
remove_multipath_device_descriptor to remove the multipath device with
`multipath -f`, before returning.
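
In the second `multipath -l` output above, the stale map is recognisable because every path line shows `#:#:#:#` placeholders instead of real host:channel:target:lun addresses. A cleanup step could first detect such maps before flushing them with `multipath -f <wwid>`; here is a parsing-only sketch (no commands are run, and the function name is illustrative, not nova's code):

```python
# Detect multipath maps whose paths have all gone stale, i.e. whose
# path lines show "#:#:#:#" instead of a real SCSI address.  Input is
# the text output of `multipath -l`; the returned wwids are candidates
# for `multipath -f <wwid>`.
def find_stale_multipath_wwids(multipath_l_output):
    stale, current = [], None
    for line in multipath_l_output.splitlines():
        if " dm-" in line:               # a map stanza starts here
            current = line.split()[0]    # first token is the wwid
        elif "#:#:#:#" in line and current and current not in stale:
            stale.append(current)
    return stale
```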

[Yahoo-eng-team] [Bug 940430] Re: nova-api should check UTF8 char in parameters

2014-09-28 Thread Christopher Yeoh
This doesn't appear to be reproducible anymore, and we now handle UTF-8
host names correctly.


** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/940430

Title:
  nova-api should check UTF8 char in parameters

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I got following error.

  root@localhost:~# nova --debug list
  connect: (keystone.thefreecloud.org, 5000)
  send: 'POST /v2.0/tokens HTTP/1.1\r\nHost: 
keystone.thefreecloud.org:5000\r\nContent-Length: 108\r\ncontent-type: 
application/json\r\naccept-encoding: gzip, deflate\r\nuser-agent: 
python-novaclient\r\n\r\n{auth: {tenantName: admin, 
passwordCredentials: {username: admin, password: X}}}'
  reply: 'HTTP/1.1 200 OK\r\n'
  header: Content-Type: application/json; charset=UTF-8
  header: Content-Length: 1148
  header: Date: Fri, 24 Feb 2012 16:13:00 GMT
  connect: (nova-api.thefreecloud.org, 8774)
  send: u'GET /v1.1/1/servers/detail HTTP/1.1\r\nHost: 
nova-api.thefreecloud.org:8774\r\nx-auth-project-id: admin\r\nx-auth-token: 
XX g\r\naccept-encoding: gzip, deflate\r\nuser-agent: 
python-novaclient\r\n\r\n'
  reply: 'HTTP/1.1 500 Internal Server Error\r\n'
  header: Content-Length: 133
  header: Content-Type: application/json; charset=UTF-8
  header: Date: Fri, 24 Feb 2012 16:13:00 GMT
  Traceback (most recent call last):
File /usr/local/bin/nova, line 9, in module
  load_entry_point('python-novaclient==2012.1', 'console_scripts', 'nova')()
File /usr/lib/python2.7/dist-packages/novaclient/shell.py, line 338, in 
main
  OpenStackComputeShell().main(sys.argv[1:])
File /usr/lib/python2.7/dist-packages/novaclient/shell.py, line 289, in 
main
  args.func(self.cs, args)
File /usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py, line 480, 
in do_list
  utils.print_list(cs.servers.list(search_opts=search_opts), columns,
File /usr/lib/python2.7/dist-packages/novaclient/v1_1/servers.py, line 
247, in list
  return self._list(/servers%s%s % (detail, query_string), servers)
File /usr/lib/python2.7/dist-packages/novaclient/base.py, line 69, in 
_list
  resp, body = self.api.client.get(url)
File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 130, in 
get
  return self._cs_request(url, 'GET', **kwargs)
File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 118, in 
_cs_request
  **kwargs)
File /usr/lib/python2.7/dist-packages/novaclient/client.py, line 101, in 
request
  raise exceptions.from_response(resp, body)
  novaclient.exceptions.ClientException: The server has either erred or is 
incapable of performing the requested operation. (HTTP 500)

  I got this error:

   byte 0xe7 in position 8: unexpected end of data
  (nova.api.openstack): TRACE: Traceback (most recent call last):
  (nova.api.openstack): TRACE:   File 
/opt/stack/nova/nova/api/openstack/__init__.py, line 64, in __call__
  (nova.api.openstack): TRACE: return req.get_response(self.application)
  (nova.api.openstack): TRACE:   File 
/usr/lib/pymodules/python2.7/webob/request.py, line 1053, in get_response
  (nova.api.openstack): TRACE: application, catch_exc_info=False)
  (nova.api.openstack): TRACE:   File 
/usr/lib/pymodules/python2.7/webob/request.py, line 1022, in call_application
  (nova.api.openstack): TRACE: app_iter = application(self.environ, 
start_response)
  (nova.api.openstack): TRACE:   File 
/usr/lib/python2.7/dist-packages/keystone/middleware/auth_token.py, line 212, 
in __call__
  (nova.api.openstack): TRACE: return self._forward_request(env, 
start_response, proxy_headers)
  (nova.api.openstack): TRACE:   File 
/usr/lib/python2.7/dist-packages/keystone/middleware/auth_token.py, line 344, 
in _forward_request
  (nova.api.openstack): TRACE: return self.app(env, start_response)
  (nova.api.openstack): TRACE:   File 
/usr/lib/pymodules/python2.7/webob/dec.py, line 159, in __call__
  (nova.api.openstack): TRACE: return resp(environ, start_response)
  (nova.api.openstack): TRACE:   File 
/usr/lib/pymodules/python2.7/webob/dec.py, line 159, in __call__
  (nova.api.openstack): TRACE: return resp(environ, start_response)
  (nova.api.openstack): TRACE:   File 
/usr/lib/pymodules/python2.7/webob/dec.py, line 159, in __call__
  (nova.api.openstack): TRACE: return resp(environ, start_response)
  (nova.api.openstack): TRACE:   File 
/usr/lib/pymodules/python2.7/routes/middleware.py, line 131, in __call__
  (nova.api.openstack): TRACE: response = self.app(environ, start_response)
  (nova.api.openstack): TRACE:   File 
/usr/lib/pymodules/python2.7/webob/dec.py, line 159, in __call__
  (nova.api.openstack): TRACE: return resp(environ, start_response)
  (nova.api.openstack): TRACE:   File 
/usr/lib/pymodules/python2.7/webob/dec.py, line 159, in 
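
The 500 above comes from decoding request bytes that are not valid UTF-8 ("byte 0xe7 in position 8: unexpected end of data"). A sketch of the boundary check the bug title asks for, turning the failure into a client error instead of an unhandled 500 (the function name is illustrative, not nova's code):

```python
# Validate that a raw request parameter is well-formed UTF-8 before
# further processing; a malformed sequence becomes a client error
# (to be mapped to HTTP 400) instead of an unhandled
# UnicodeDecodeError bubbling up as HTTP 500.
def decode_utf8_param(raw):
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError as exc:
        # In a WSGI layer this would be mapped to HTTPBadRequest.
        raise ValueError("invalid UTF-8 in request parameter: %s" % exc)
```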

[Yahoo-eng-team] [Bug 1212195] Re: Flavor Extra Specs should check Metadata Items Quota

2014-09-28 Thread Christopher Yeoh
It's an admin API, so I don't think we need to quota-restrict it.

** Changed in: nova
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1212195

Title:
  Flavor Extra Specs should check Metadata Items Quota

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  The flavor extra specs extension does not actually adhere to any quota
  restrictions during create.
  The API handles the MetadataLimitExceeded exception (see
  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/flavorextraspecs.py#L76),
  but it looks like this exception can never be raised.

  By default the Metadata Items quota for a tenant is 128. With the
  current code, more than 128 flavor extra spec items can be created.
  One of two things should be done:

  1. Enforce the metadata limit (use _check_metadata_properties_quota(),
     just as instance metadata does), or
  2. If this limit should not be enforced, remove the exception-handling
     code.
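Option 1 amounts to a pre-flight count check before persisting the specs. A minimal sketch of that check, assuming the quota constant and the exception class as stand-ins for nova's real quota/exception symbols (the names here are illustrative, not nova's actual API):

```python
# Illustrative quota check for flavor extra specs; the constant and
# exception name are stand-ins for nova's real quota machinery.
QUOTA_METADATA_ITEMS = 128


class MetadataLimitExceeded(Exception):
    pass


def check_extra_specs_quota(existing_specs, new_specs,
                            limit=QUOTA_METADATA_ITEMS):
    """Raise MetadataLimitExceeded if merging new_specs exceeds the limit."""
    merged = dict(existing_specs)
    merged.update(new_specs)  # updating an existing key adds no new item
    if len(merged) > limit:
        raise MetadataLimitExceeded(
            "metadata items quota exceeded: %d > %d" % (len(merged), limit))
    return merged
```

With such a check in place before the DB write, the exception handler the API already carries would finally have something to catch.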

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1212195/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365259] Re: Lack of failure information (failure due to host key verification failure) displayed during instance migration from one host to another

2014-09-28 Thread Christopher Yeoh
As mentioned above, this is an async call so we have already returned
from the API before we know the migration failed. The tasks API will
address this in the future (you'll need to poll but you will be able to
find out what happened).
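Until the tasks API exists, the polling the comment describes has to be done by the caller. A hedged sketch of that loop; `get_status` is any caller-supplied callable returning the server's current status string (for example, parsed from `nova show`), and the status names used are nova's conventional ones:

```python
import time


def wait_for_migration(get_status, timeout=300, interval=5):
    """Poll until a migration settles, since the API returns before it finishes.

    get_status: caller-supplied callable returning the instance's current
    status string (illustrative; e.g. a wrapper around `nova show`).
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status in ('ACTIVE', 'VERIFY_RESIZE'):
            return status  # migration completed (live / cold respectively)
        if status == 'ERROR':
            raise RuntimeError('migration failed; inspect the instance fault')
        time.sleep(interval)
    raise TimeoutError('migration did not settle within %s seconds' % timeout)
```

This is exactly the pattern that exposes the bug above: the poll lands on ERROR even though the migrate call itself returned cleanly.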

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1365259

Title:
  Lack of failure information (failure due to host key verification
  failure) displayed during instance migration from one host to another

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I am using a (1 controller + compute1 + compute2) OpenStack
  environment.

  While live-migrating a server from one compute host to another using
  the CLI:

  If the migration fails due to a host key verification failure between
  the two compute hosts, the failure information should be printed to
  the console; otherwise the user has no way of knowing what happened.

  To the user the migration appears successful, but it has actually
  failed.

  
  The set of operations is as follows:

  1.
  root@nechldcst-PowerEdge-2950:# nova list
  +--------------------------------------+-------------+--------+------------+-------------+-----------------------+
  | ID                                   | Name        | Status | Task State | Power State | Networks              |
  +--------------------------------------+-------------+--------+------------+-------------+-----------------------+
  | 1aea212b-0bee-498b-a10d-5b58a69e3293 | test-server | ACTIVE | -          | Running     | demo-net=203.0.113.26 |
  +--------------------------------------+-------------+--------+------------+-------------+-----------------------+

  2.
  root@nechldcst-PowerEdge-2950:# nova migrate 1aea212b-0bee-498b-a10d-5b58a69e3293
  root@nechldcst-PowerEdge-2950:#
  At this point the user thinks the migration succeeded, but see below:

  3.
  root@nechldcst-PowerEdge-2950:# nova list
  +--------------------------------------+-------------+--------+------------+-------------+-----------------------+
  | ID                                   | Name        | Status | Task State | Power State | Networks              |
  +--------------------------------------+-------------+--------+------------+-------------+-----------------------+
  | 1aea212b-0bee-498b-a10d-5b58a69e3293 | test-server | ERROR  | -          | Running     | demo-net=203.0.113.26 |
  +--------------------------------------+-------------+--------+------------+-------------+-----------------------+

  4.
  root@nechldcst-PowerEdge-2950:# nova show 1aea212b-0bee-498b-a10d-5b58a69e3293
  +--------------------------------------+------------------------+
  | Property                             | Value                  |
  +--------------------------------------+------------------------+
  | OS-DCF:diskConfig                    | MANUAL                 |
  | OS-EXT-AZ:availability_zone          | nova                   |
  | OS-EXT-SRV-ATTR:host                 | compute2               |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | compute2               |
  | OS-EXT-SRV-ATTR:instance_name        | instance-0003          |
  | OS-EXT-STS:power_state               | 1                      |
  | OS-EXT-STS:task_state                | -                      |
  | OS-EXT-STS:vm_state                  | error                  |
  | OS-SRV-USG:launched_at               | 2014-09-04T03:41:08.00 |
  | OS-SRV-USG:terminated_at             | -                      |
[Yahoo-eng-team] [Bug 1373073] Re: Image v1 json client timeout while checking that image was deleted

2014-09-28 Thread Ghanshyam Mann
Thanks Roman for confirmation.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1373073

Title:
  Image v1 json client timeout while checking that image was deleted

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  Description:
  - Icehouse Openstack cloud
  - Tempest commit 4b1b8cfd4526203e5706ec387cf6362ecaa5ed5b

  I'm getting tearDown class error after executing image tests

  <testcase classname="" name="tearDownClass (tempest.api.image.v1.test_images.CreateRegisterImagesTest)" time="0.000">
    <failure type="testtools.testresult.real._StringException">_StringException: Traceback (most recent call last):
  File "/home/user/WF_tempest/tempest-icehouse/tempest/api/image/base.py", line 58, in tearDownClass
    cls.client.wait_for_resource_deletion(image_id)
  File "/home/user/WF_tempest/tempest-icehouse/tempest/common/rest_client.py", line 551, in wait_for_resource_deletion
    raise exceptions.TimeoutException
  TimeoutException: Request timed out</failure>
  </testcase>

  After some investigation, I found that the client tried to check the
  deleted image with

      def is_resource_deleted(self, id):
          try:
              self.get_image_meta(id)
          except exceptions.NotFound:
              return True
          return False

  But get_image_meta() is able to fetch the metadata of a deleted image;
  the image simply has status = 'deleted'.

  My quick fix was:

      def is_resource_deleted(self, id):
          try:
              _, meta = self.get_image_meta(id)
              if meta['status'] == 'deleted':
                  return True
          except exceptions.NotFound:
              return True
          return False
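The status-aware check can be exercised end to end with a stub client. A minimal, self-contained sketch of the check plus the surrounding polling loop; the NotFound class and function names below are stand-ins for tempest's real exceptions and rest-client methods:

```python
import time


class NotFound(Exception):
    """Stand-in for tempest's exceptions.NotFound."""


def is_resource_deleted(get_image_meta, image_id):
    # The v1 API keeps serving metadata for a deleted image, so a 404
    # alone is not enough: also accept status == 'deleted'.
    try:
        meta = get_image_meta(image_id)
        if meta['status'] == 'deleted':
            return True
    except NotFound:
        return True
    return False


def wait_for_resource_deletion(get_image_meta, image_id,
                               timeout=60, interval=1):
    """Poll is_resource_deleted() until it succeeds or timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if is_resource_deleted(get_image_meta, image_id):
            return
        time.sleep(interval)
    raise TimeoutError('image %s was not deleted within %ss'
                       % (image_id, timeout))
```

Without the status check, the loop above spins until timeout, which is exactly the TimeoutException seen in the report.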

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1373073/+subscriptions



[Yahoo-eng-team] [Bug 1366649] Re: Typo in keystone/common/base64utils.py

2014-09-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/119775
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=3f35c5c0f99180518739a08d1ade5ee1f4c2e726
Submitter: Jenkins
Branch: master

commit 3f35c5c0f99180518739a08d1ade5ee1f4c2e726
Author: Peter Razumovsky prazumov...@mirantis.com
Date:   Mon Sep 8 18:31:10 2014 +0400

Correct typos in keystone/common/base64utils.py docstrings

Closes-bug: #1366649
Change-Id: Ic3f4a3eb9da303a4da7d532f02f6c6e82a725924


** Changed in: keystone
   Status: Invalid => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1366649

Title:
  Typo in keystone/common/base64utils.py

Status in OpenStack Identity (Keystone):
  Fix Committed

Bug description:
  Typo in keystone/common/base64utils.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1366649/+subscriptions



[Yahoo-eng-team] [Bug 1375108] [NEW] Failed to reboot instance successfully with EC2

2014-09-28 Thread Ghanshyam Mann
Public bug reported:


Failure happens in tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_reboot_terminate_instance.

pythonlogging:'': {{{
2014-09-28 09:05:33,105 31828 INFO [tempest.thirdparty.boto.utils.wait] State transition pending ==> running 3 second
2014-09-28 09:05:33,256 31828 DEBUG [tempest.thirdparty.boto.test_ec2_instance_run] Instance rebooted - state: running
2014-09-28 09:05:35,003 31828 INFO [tempest.thirdparty.boto.utils.wait] State transition running ==> error 1 second
}}}


http://logs.openstack.org/14/124014/3/check/check-tempest-dsvm-postgres-full/96934ea/logs/testr_results.html.gz


CPU log - 

2014-09-28 09:05:34.741 ERROR nova.compute.manager [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] [instance: b28b2844-26a9-46ff-bcde-023a7604a06e] Cannot reboot instance: [Errno 2] No such file or directory: '/opt/stack/data/nova/instances/b28b2844-26a9-46ff-bcde-023a7604a06e/kernel.part'
2014-09-28 09:05:34.935 DEBUG nova.openstack.common.lockutils [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Created new semaphore compute_resources internal_lock /opt/stack/new/nova/nova/openstack/common/lockutils.py:206
2014-09-28 09:05:34.935 DEBUG nova.openstack.common.lockutils [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Acquired semaphore compute_resources lock /opt/stack/new/nova/nova/openstack/common/lockutils.py:229
2014-09-28 09:05:34.935 DEBUG nova.openstack.common.lockutils [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Got semaphore / lock update_usage inner /opt/stack/new/nova/nova/openstack/common/lockutils.py:271
2014-09-28 09:05:34.970 INFO nova.scheduler.client.report [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Compute_service record updated for ('devstack-trusty-rax-dfw-2448356.slave.openstack.org', 'devstack-trusty-rax-dfw-2448356.slave.openstack.org')
2014-09-28 09:05:34.971 DEBUG nova.openstack.common.lockutils [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Releasing semaphore compute_resources lock /opt/stack/new/nova/nova/openstack/common/lockutils.py:238
2014-09-28 09:05:34.971 DEBUG nova.openstack.common.lockutils [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Semaphore / lock released update_usage inner /opt/stack/new/nova/nova/openstack/common/lockutils.py:275
2014-09-28 09:05:34.980 ERROR oslo.messaging.rpc.dispatcher [req-c3f47db4-2474-43c5-bc8e-178608366f8f InstanceRunTest-2056140302 InstanceRunTest-1314839858] Exception during message handling: [Errno 2] No such file or directory: '/opt/stack/data/nova/instances/b28b2844-26a9-46ff-bcde-023a7604a06e/kernel.part'
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     payload)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/compute/manager.py", line 298, in decorated_function
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher     pass
2014-09-28 09:05:34.980 28960 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/new/nova/nova/openstack/common/excutils.py",

[Yahoo-eng-team] [Bug 1350172] Re: Building server failed in VMware Mine Sweeper

2014-09-28 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1350172

Title:
  Building server failed in VMware Mine Sweeper

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  VMware Mine Sweeper often failed on the patch
  https://review.openstack.org/98278/ .

  See logs:
  http://208.91.1.172/logs/neutron/98278/11/414421/
  http://208.91.1.172/logs/neutron/98278/13/414451/
  etc.

  It may be a bit different between PS11 and PS13, but what they have in
  common is that building the server failed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1350172/+subscriptions



[Yahoo-eng-team] [Bug 1323079] Re: Some network ports down after reboot netnode

2014-09-28 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323079

Title:
  Some network ports down after reboot netnode

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  I've been struggling with this issue since the release of folsom, the
  first install. I'm running havana now and it still turns up.

  My network setups looks like this:
  - GRE setup
  - Directly connected interfaces to external

  Each compute node has two interfaces:
  - Mgmt network (GRE routing goes here)
  - External network (no ip address assigned)

  Example network:

  192.168.248.0/24 (Internal)
  192.168.248.1 router (internal interface)
  172.17.11.71 router (external interface)
  192.168.248.3 DHCP
  192.168.248.4 VM

  After a reboot of the network node I have to bring the admin state of
  the internal interface of the router down and then up (it was active).
  If I do this manually, e.g. ip netns exec qrouter-XXX ifconfig
  tap up/down, it doesn't work. So something else gets changed.
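The admin-state bounce the reporter describes can be scripted against the Neutron API. A hedged workaround sketch (not a fix for the underlying bug), assuming a client object exposing update_port(port_id, body) in the shape python-neutronclient's v2.0 Client uses:

```python
import time


def bounce_port_admin_state(client, port_id, pause=2.0):
    """Set a port's admin_state_up to False, wait briefly, set it back to True.

    client is assumed to expose update_port(port_id, body) like
    python-neutronclient's v2.0 Client; this mirrors the manual
    down/up workaround described above.
    """
    client.update_port(port_id, {'port': {'admin_state_up': False}})
    time.sleep(pause)
    client.update_port(port_id, {'port': {'admin_state_up': True}})
```

Bouncing the port through the API (rather than ifconfig inside the namespace) also lets the agents re-wire the flows, which may be why the manual ifconfig toggle alone does not help.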

  Moreover, sometimes it is required to do the same for the DHCP
  interface and even the VM interface before packets are being
  accepted/transmitted.

  I checked logs, tcpdumps, upgraded openvswitch and kernels. I dont
  have a clue any more really.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1323079/+subscriptions



[Yahoo-eng-team] [Bug 1202053] Re: Memcache token backend issue upgrading from Grizzly

2014-09-28 Thread Launchpad Bug Tracker
[Expired for Keystone because there has been no activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1202053

Title:
  Memcache token backend issue upgrading from Grizzly

Status in OpenStack Identity (Keystone):
  Expired

Bug description:
  Noticed a possible upgrade issue with the pluggable token provider change:
  https://github.com/openstack/keystone/commit/c238ace30981877e5991874c5b193ea7d5107419#L12L121

  If old PKI tokens still live in a usertoken- index value in memcache,
  keystone will try to use them as a key to get the actual token. Since
  they're > 256 bytes, this will likely raise an error and make token
  creation fail for any user with an old token.
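One upgrade-safe mitigation is to never use a raw token as a memcache key: memcached rejects keys longer than 250 bytes, and PKI tokens far exceed that. A hedged sketch of the hashing approach (the helper name is illustrative, not Keystone's actual function):

```python
import hashlib

MEMCACHE_MAX_KEY = 250  # memcached's protocol-level key length limit


def token_to_cache_key(token_id):
    """Return a memcache-safe key for a token.

    Short (UUID) tokens pass through unchanged; long PKI tokens are
    replaced by their SHA-256 hex digest so the key always fits.
    """
    if len(token_id) <= MEMCACHE_MAX_KEY:
        return token_id
    return hashlib.sha256(token_id.encode('utf-8')).hexdigest()
```

Applying this on both write and read paths makes old oversized index entries harmless: they simply miss, instead of raising and breaking token creation.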

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1202053/+subscriptions
