[Yahoo-eng-team] [Bug 1488820] [NEW] All floating IPs stop working after associating a new one

2015-08-26 Thread Hauke Bruno
Public bug reported:

This issue occurs on a fresh OpenStack Kilo installation (Ubuntu 14.04
LTS) with a single non-HA network node:

In general, public access via floating IPs works: I can ping and SSH
into my instances.

But if I associate a new floating IP with a new instance, all floating
IPs (including the newly associated one) stop working (no ping or SSH
possible). The strange thing: if I just wait 5 minutes, or run 'service
openvswitch-switch restart' manually, everything goes back to working
like a charm.

I checked all the neutron and OVS logs, but they show no errors.

Is there a periodic task running in the background every 5 minutes
that could cause this behavior?
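
For what it's worth, a throwaway script like the following can time the
outage window precisely, so it can be compared against agent intervals
(a minimal diagnostic sketch; the floating IP below is a placeholder):

#!/usr/bin/env python
# Ping one floating IP every second and log state transitions, so the
# outage length can be compared against agent/periodic-task intervals.
import os
import subprocess
import time

FLOATING_IP = "203.0.113.10"  # placeholder, use a real floating IP

def reachable(ip, devnull):
    # 'ping -c 1 -W 1' exits 0 on success, non-zero on timeout
    return subprocess.call(["ping", "-c", "1", "-W", "1", ip],
                           stdout=devnull, stderr=devnull) == 0

with open(os.devnull, "wb") as devnull:
    last = None
    while True:
        state = reachable(FLOATING_IP, devnull)
        if state != last:
            print("%s %s is now %s" % (time.strftime("%H:%M:%S"),
                                       FLOATING_IP,
                                       "reachable" if state else "UNREACHABLE"))
            last = state
        time.sleep(1)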

cheers,
hauke

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488820

Title:
  All floating IPs stop working after associating a new one

Status in neutron:
  New

Bug description:
  This issue occurs on a fresh OpenStack Kilo installation (Ubuntu 14.04
  LTS) with a single non-HA network node:

  In general, public access via floating IPs works: I can ping and SSH
  into my instances.

  But if I associate a new floating IP with a new instance, all
  floating IPs (including the newly associated one) stop working (no
  ping or SSH possible). The strange thing: if I just wait 5 minutes,
  or run 'service openvswitch-switch restart' manually, everything goes
  back to working like a charm.

  I checked all the neutron and OVS logs, but they show no errors.

  Is there a periodic task running in the background every 5 minutes
  that could cause this behavior?

  cheers,
  hauke

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448014] [NEW] Delayed display of floating IPs

2015-04-24 Thread Hauke Bruno
Public bug reported:

Using nova 1:2014.1.4-0ubuntu2 (Icehouse) on Ubuntu 14.04.2 LTS

After associating a floating IP address with an instance in the
Build/Spawning state, 'nova list' and 'nova show' take - with the
default settings - a long time (up to 40 minutes) to display that
floating IP.

Steps to reproduce:

* Launch an instance via Horizon
* Associate a floating IP address via Horizon while the instance is in
the Build/Spawning state

Expected result:

* 'nova list' and 'nova show' should print the floating IP consistently
* the floating IP should consistently be part of the related row in the
nova.instance_info_caches database table

Actual result:

* while the instance is in the Build/Spawning state, 'nova list' and
'nova show' display the floating IP address
* while the instance is in the Build/Spawning state, the floating IP is
part of the related row in nova.instance_info_caches

* when the instance switches to the Active/Running state, the floating
IP disappears from 'nova list', 'nova show', and the
nova.instance_info_caches entry

* a little later (governed by heal_instance_info_cache_interval, see
below) the floating IP reappears

Side note 1: This issue does not occur if the floating IP is associated
after launching (in the Active/Running state).
Side note 2: In Horizon, the floating IP is listed the whole time.
Side note 3: The floating IP works (ping, SSH) even while it is not
displayed.
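
To watch the cache directly, a quick sketch like this works
(placeholder DB credentials; the JSON layout matches the dumps below):

#!/usr/bin/env python
# Poll nova.instance_info_caches and report whether the cached
# network_info for one instance currently contains any floating IPs.
import json
import time
import MySQLdb

INSTANCE_UUID = "f0d22419-1cac-47ce-9063-eee37fad97b9"

db = MySQLdb.connect(host="localhost", user="nova",
                     passwd="secret", db="nova")  # placeholder credentials

while True:
    cur = db.cursor()
    cur.execute("SELECT network_info FROM instance_info_caches "
                "WHERE instance_uuid = %s AND deleted = 0",
                (INSTANCE_UUID,))
    row = cur.fetchone()
    cur.close()
    if row and row[0]:
        vifs = json.loads(row[0])
        fips = [fip["address"]
                for vif in vifs
                for subnet in vif["network"]["subnets"]
                for ip in subnet["ips"]
                for fip in ip["floating_ips"]]
        print("%s floating_ips: %s" % (time.strftime("%H:%M:%S"), fips))
    time.sleep(10)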

Output of 'select * from nova.instance_info_caches':

Instance in Build/Spawning:
*** 38. row ***
   created_at: 2015-04-24 09:06:23
   updated_at: 2015-04-24 09:06:43
   deleted_at: NULL
   id: 1671
 network_info: [{"ovs_interfaceid": "b2c284ea-ef23-42e1-9522-b263f24db588",
"network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4,
"type": "fixed", "floating_ips": [{"meta": {}, "version": 4, "type":
"floating", "address": "10.0.0.5"}], "address": "192.168.178.212"}], "version":
4, "meta": {"dhcp_server": "192.168.178.3"}, "dns": [], "routes": [], "cidr":
"192.168.178.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway",
"address": "192.168.178.1"}}], "meta": {"injected": false, "tenant_id":
"ee8d0dd2202243389179ba2eb5a29e8c"}, "id":
"276de287-a929-4263-aad5-3b30d6dcc8c9", "label": "neues-netz"}, "devname":
"tapb2c284ea-ef", "qbh_params": null, "meta": {}, "details": {"port_filter":
true, "ovs_hybrid_plug": true}, "address": "fa:16:3e:8a:32:19", "active":
false, "type": "ovs", "id": "b2c284ea-ef23-42e1-9522-b263f24db588",
"qbg_params": null}]
instance_uuid: f0d22419-1cac-47ce-9063-eee37fad97b9
  deleted: 0

Instance switches to Active/Running (floating_ips becomes empty):
*** 38. row ***
   created_at: 2015-04-24 09:06:23
   updated_at: 2015-04-24 09:07:04
   deleted_at: NULL
   id: 1671
 network_info: [{"ovs_interfaceid": "b2c284ea-ef23-42e1-9522-b263f24db588",
"network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4,
"type": "fixed", "floating_ips": [], "address": "192.168.178.212"}], "version":
4, "meta": {"dhcp_server": "192.168.178.3"}, "dns": [], "routes": [], "cidr":
"192.168.178.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway",
"address": "192.168.178.1"}}], "meta": {"injected": false, "tenant_id":
"ee8d0dd2202243389179ba2eb5a29e8c"}, "id":
"276de287-a929-4263-aad5-3b30d6dcc8c9", "label": "neues-netz"}, "devname":
"tapb2c284ea-ef", "qbh_params": null, "meta": {}, "details": {"port_filter":
true, "ovs_hybrid_plug": true}, "address": "fa:16:3e:8a:32:19", "active":
false, "type": "ovs", "id": "b2c284ea-ef23-42e1-9522-b263f24db588",
"qbg_params": null}]
instance_uuid: f0d22419-1cac-47ce-9063-eee37fad97b9
  deleted: 0

After ~ 40 minutes:
*** 38. row ***
   created_at: 2015-04-24 09:06:23
   updated_at: 2015-04-24 09:45:35
   deleted_at: NULL
   id: 1671
 network_info: [{"ovs_interfaceid": "b2c284ea-ef23-42e1-9522-b263f24db588",
"network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4,
"type": "fixed", "floating_ips": [{"meta": {}, "version": 4, "type":
"floating", "address": "10.0.0.5"}], "address": "192.168.178.212"}], "version":
4, "meta": {"dhcp_server": "192.168.178.3"}, "dns": [], "routes": [], "cidr":
"192.168.178.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway",
"address": "192.168.178.1"}}], "meta": {"injected": false, "tenant_id":
"ee8d0dd2202243389179ba2eb5a29e8c"}, "id":
"276de287-a929-4263-aad5-3b30d6dcc8c9", "label": "neues-netz"}, "devname":
"tapb2c284ea-ef", "qbh_params": null, "meta": {}, "details": {"port_filter":
true, "ovs_hybrid_plug": true}, "address": "fa:16:3e:8a:32:19", "active": true,
"type": "ovs", "id": "b2c284ea-ef23-42e1-9522-b263f24db588", "qbg_params":
null}]
instance_uuid: f0d22419-1cac-47ce-9063-eee37fad97b9
  deleted: 0

The related part of nova-compute.log:

2015-04-24 11:07:04.544 14860 INFO nova.virt.libvirt.driver [-] [instance: f0d22419-1cac-47ce-9063-eee37fad97b9] Instance spawned successfully.
[...]
2015-04-24 11:45:36.012 14860 DEBUG nova.compute.manager [-] [instance: f0d22419-1cac-47ce-9063-eee37fad97b9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/dist-packages/nova/compute/manager.py:4897
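
If I read the periodic task correctly (I may be wrong here),
_heal_instance_info_cache refreshes only one instance per run, and
heal_instance_info_cache_interval defaults to 60 seconds, so the
worst-case delay grows with the number of instances on the host:

# Back-of-the-envelope check (assuming one instance healed per run):
heal_interval = 60      # seconds, default heal_instance_info_cache_interval
instances_on_host = 38  # our instance is row 38 in instance_info_caches
print(heal_interval * instances_on_host / 60.0)
# ~38 minutes, close to the ~40 minutes observed above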



[Yahoo-eng-team] [Bug 1314269] [NEW] missing code after patch in nova/tests/virt/libvirt/test_imagebackend.py

2014-04-29 Thread Hauke Bruno
Public bug reported:

nova-compute.log on kvm:

2014-04-28 14:51:57.294 14659 ERROR nova.compute.manager [req-e9768eff-116d-4681-9ff6-3ab51dbbdee0 c460eaefef634e4bab938915224a6201 e159f0e56ea545bd84529fc38063ceee] [instance: 61a3e08f-b22f-47a8-a815-f586422860ef] Instance failed to spawn
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef] Traceback (most recent call last):
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1423, in _spawn
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef]     block_device_info)
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2088, in spawn
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef]     write_to_disk=True)
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3084, in to_xml
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef]     disk_info, rescue, block_device_info)
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2960, in get_guest_config
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef]     inst_type):
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2737, in get_guest_storage_config
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef]     inst_type)
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2700, in get_guest_disk_config
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef]     self.get_hypervisor_version())
2014-04-28 14:51:57.294 14659 TRACE nova.compute.manager [instance: 61a3e08f-b22f-47a8-a815-f586422860ef] TypeError: libvirt_info() takes exactly 6 arguments (7 given)

I found this bug https://bugs.launchpad.net/nova/+bug/1233188 and a
patch that should fix this issue
(https://review.openstack.org/#/c/72575/1//COMMIT_MSG).

But on Ubuntu 12.04.4 LTS with OpenStack Havana installed from the
Ubuntu Cloud Archive, this issue still occurs, because the changes to
one of the two files patched in that commit are not present: the
changes from
https://review.openstack.org/#/c/72575/1/nova/tests/virt/libvirt/test_imagebackend.py
are missing. I manually added those lines to test_imagebackend.py, and
OpenStack now works fine.
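
For anyone hitting the same traceback, the mismatch looks roughly like
this (an illustrative sketch, not the actual nova code; Python 2 counts
self, which is why 6 arguments are expected when 7 are given):

# The installed imagebackend still has the old five-parameter signature:
class Image(object):
    def libvirt_info(self, disk_bus, disk_dev, device_type,
                     cache_mode, extra_specs):
        pass  # old code: builds and returns the libvirt disk config

# The patched driver.py now passes the hypervisor version as well:
img = Image()
img.libvirt_info('virtio', 'vda', 'disk', 'none', {}, 1001000)
# TypeError: libvirt_info() takes exactly 6 arguments (7 given)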

My nova version:

ii  nova-common        1:2013.2.2-0ubuntu1~cloud0  OpenStack Compute - common files
ii  nova-compute       1:2013.2.2-0ubuntu1~cloud0  OpenStack Compute - compute node
ii  nova-compute-kvm   1:2013.2.2-0ubuntu1~cloud0  OpenStack Compute - compute node (KVM)
ii  python-nova        1:2013.2.2-0ubuntu1~cloud0  OpenStack Compute Python libraries
ii  python-novaclient  1:2.15.0-0ubuntu1~cloud0    client library for OpenStack Compute API

Let me know if you need further information; unfortunately I am not a
developer, so this report might seem a bit plain.

** Affects: nova
 Importance: Undecided
 Status: New
