Re: [openstack-dev] [nova][neutron] massive overhead processing "network-changed" events during live migration

2017-05-19 Thread Matt Riedemann

On 5/19/2017 1:40 PM, Chris Friesen wrote:
Recently we noticed failures in Newton when we attempted to live-migrate 
an instance with 16 vifs.  We tracked it down to an RPC timeout in nova 
while it waited for the 'refresh_cache-%s' lock in 
get_instance_nw_info().  This led to a few other discoveries.


First, we have no fair locking in OpenStack.  The live migration code 
path was blocked waiting for the lock, but the handlers for the incoming 
"network-changed" events kept acquiring it first, even though those 
events arrived while the live migration code was already waiting.
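
For reference, the contended path looks roughly like the sketch below 
(simplified; the function is a stand-in, but nova really does serialize 
these callers on a per-instance 'refresh_cache-%s' lock via 
oslo.concurrency):

from oslo_concurrency import lockutils

def refresh_nw_info_cache(instance_uuid):
    # Both the live-migration path and the "network-changed" event
    # handlers funnel through this per-instance lock.  The underlying
    # semaphore wakes an arbitrary waiter rather than the one that has
    # been waiting longest, so a steady stream of event handlers can
    # starve the migration path.
    @lockutils.synchronized('refresh_cache-%s' % instance_uuid)
    def _do_refresh():
        pass  # rebuild the instance's network info cache from neutron
    _do_refresh()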


I'm told that etcd gives us a DLM which is unicorns and rainbows; would 
that help us here?
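
For what it's worth, a tooz lock backed by etcd would look something 
like this sketch (endpoint, member id and uuid are placeholders); note 
that as far as I know a DLM buys us distributed mutual exclusion, not 
FIFO fairness, so it may not fix the starvation by itself:

from tooz import coordination

instance_uuid = 'demo-instance-uuid'  # placeholder

# placeholder etcd endpoint and member id
coord = coordination.get_coordinator('etcd3://controller:2379',
                                     b'nova-compute-0')
coord.start()
lock = coord.get_lock(('refresh_cache-%s' % instance_uuid).encode())
with lock:
    pass  # refresh the network info cache under the distributed lock
coord.stop()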




Second, it turns out the cost of processing the "network-changed" events 
is astronomical.


1) In Newton nova commit 5de902a was merged to fix evacuate bugs, but it 
meant both source and dest compute nodes got the "network-changed" 
events.  This doubled the number of neutron API calls during a live 
migration.


As you noted below, that change was made specifically for evacuate. 
With the migration object we know the type of migration and could scope 
this behavior to just evacuate.  However, I'm sort of confused by that 
change: why are we sending external events to the source compute during 
an evacuation?  Isn't the source compute down, and thus unable to 
receive and process the event?
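
If we do scope it, the check could be as small as this sketch (the 
helper is hypothetical; migration_type, source_compute and dest_compute 
are real fields on the migration object):

def _hosts_for_network_changed(instance, migration=None):
    # Hypothetical scoping: only fan the event out to both computes
    # for an evacuation, where the instance was rebuilt on
    # dest_compute; everything else keeps a single recipient.
    if migration is not None and migration.migration_type == 'evacuation':
        return {migration.source_compute, migration.dest_compute}
    return {instance.host}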




2) A "network-changed" event is sent from neutron each time something 
changes, and several of these events are emitted for each vif during a 
live migration.  In the current upstream code the only information 
passed with the event is the instance id, so nova loops over all the 
ports on the instance and rebuilds all the information about 
subnets/floating IPs/fixed IPs/etc. for that instance.  This results in 
O(N^2) neutron API calls, where N is the number of vifs on the instance.
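
The shape of that rebuild, as a simplified sketch (list_ports is the 
real python-neutronclient call; the per-port helper is a hypothetical 
stand-in for nova's cache-building code):

def handle_network_changed(neutron, instance_uuid):
    # One event triggers a full rebuild: re-query every port on the
    # instance, then chase subnets/fixed IPs/floating IPs per port.
    # N events during a migration therefore cost O(N^2) API calls.
    ports = neutron.list_ports(device_id=instance_uuid)['ports']
    for port in ports:
        _refresh_port_info(neutron, port)  # hypothetical per-port lookups

With 16 vifs and at least one event per vif, that's on the order of 
16 x 16 = 256 neutron calls for a single live migration, before even 
counting the multiple events per vif.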


While working on the patches you reference in #3 I was also looking at 
whether we can do some bulk queries to Neutron:


https://review.openstack.org/#/c/465792/

It looks like that's not working, though.  Kevin Benton seemed to think 
at the time (it was late the other night) that passing a list of filter 
parameters would get turned into an OR in the database query, but I'm 
not sure that's happening (note that Tempest failed on that patch).  I 
don't have a devstack handy, but it seems we could prove this with 
simple curl requests.
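
Something like this minimal probe would settle it (endpoint, token and 
port UUIDs are placeholders); if repeated values of the same filter 
field get ORed together, the response should contain both ports:

import requests

NEUTRON = 'http://controller:9696'  # placeholder endpoint
TOKEN = 'KEYSTONE_TOKEN'            # placeholder token

resp = requests.get(
    NEUTRON + '/v2.0/ports',
    params=[('id', 'PORT_UUID_1'), ('id', 'PORT_UUID_2')],
    headers={'X-Auth-Token': TOKEN})
print(len(resp.json()['ports']))  # expect 2 if the OR filtering works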




3) mriedem has proposed a patch series 
(https://review.openstack.org/#/c/465783 and 
https://review.openstack.org/#/c/465787) that would change neutron to 
include the port ID, and allow nova to update just that port.  This 
reduces the cost to O(N), but it's still significant.
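
With the port ID in the event payload, the handler can skip the full 
rebuild and do roughly this (a sketch; show_port is the real 
python-neutronclient call, the cache helper is hypothetical):

def handle_network_changed(neutron, instance_uuid, port_id):
    # One event now means one neutron round trip for the affected
    # port, instead of re-querying every port on the instance.
    port = neutron.show_port(port_id)['port']
    _update_cached_vif(instance_uuid, port)  # hypothetical cache update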


In a hardware lab with 4 compute nodes I created 4 boot-from-volume 
instances, each with 16 vifs, and then live-migrated them all in 
parallel.  (The one on compute-0 was migrated to compute-1, the one on 
compute-1 was migrated to compute-2, etc.)  The aggregate CPU usage for 
a few critical components on the controller node is shown below.  Note 
in particular the CPU usage for neutron: it uses most of 10 CPUs for 
~10 seconds, spiking to 13 CPUs.  This seems like an absurd amount of 
work to do just to update the cache in nova.



Labels:
   L0: neutron-server
   L1: nova-conductor
   L2: beam.smp
   L3: postgres
date        time          dt        L0       L1       L2       L3
yyyy/mm/dd  hh:mm:ss.dec  (s)    occ (%)  occ (%)  occ (%)  occ (%)
2017-05-19  17:51:38.710  2.173    19.75     1.28     2.85     1.96
2017-05-19  17:51:40.012  1.302     1.02     1.75     3.80     5.07
2017-05-19  17:51:41.334  1.322     2.34     2.66     5.25     1.76
2017-05-19  17:51:42.681  1.347    91.79     3.31     5.27     5.64
2017-05-19  17:51:44.035  1.354    40.78     7.27     3.48     7.34
2017-05-19  17:51:45.406  1.371     7.12    21.35     8.66    19.58
2017-05-19  17:51:46.784  1.378    16.71   196.29     6.87    15.93
2017-05-19  17:51:48.133  1.349    18.51   362.46     8.57    25.70
2017-05-19  17:51:49.508  1.375   284.16   199.30     4.58    18.49
2017-05-19  17:51:50.919  1.411   512.88    17.61     7.47    42.88
2017-05-19  17:51:52.322  1.403   412.34     8.90     9.15    19.24
2017-05-19  17:51:53.734  1.411   320.24     5.20    10.59     9.08
2017-05-19  17:51:55.129  1.396   304.92     2.27    10.65    10.29
2017-05-19  17:51:56.551  1.422   556.09    14.56    10.74    18.85
2017-05-19  17:51:57.977  1.426   979.63    43.41    14.17    21.32
2017-05-19  17:51:59.382  1.405   902.56    48.31    13.69    18.59
2017-05-19  17:52:00.808  1.425  1140.99    74.28    15.12    17.18
2017-05-19  17:52:02.238  1.430  1013.91    69.77    16.46    21.19
2017-05-19  17:52:03.647  1.409   964.94   175.09    15.81    27.23
2017-05-19  17:52:05.077  1.430   838.15   109.13    15.70    34.12
2017-05-19  17:52:06.502  1.425   525.88    79.09    14.42    11.09
2017-05-19  17:52:07.954  1.452   614.58    38.38    12.20    17.89
2017-05-19  17:52:09.380  1.426   763.25    68.40    12.36    16.08

[openstack-dev] [nova][neutron] massive overhead processing "network-changed" events during live migration

2017-05-19 Thread Chris Friesen
Recently we noticed failures in Newton when we attempted to live-migrate an 
instance with 16 vifs.  We tracked it down to an RPC timeout in nova while it 
waited for the 'refresh_cache-%s' lock in get_instance_nw_info().  This led to 
a few other discoveries.


First, we have no fair locking in OpenStack.  The live migration code path was 
blocked waiting for the lock, but the handlers for the incoming 
"network-changed" events kept acquiring it first, even though those events 
arrived while the live migration code was already waiting.


Second, it turns out the cost of processing the "network-changed" events is 
astronomical.


1) In Newton nova commit 5de902a was merged to fix evacuate bugs, but it meant 
both source and dest compute nodes got the "network-changed" events.  This 
doubled the number of neutron API calls during a live migration.


2) A "network-changed" event is sent from neutron each time something changes, 
and several of these events are emitted for each vif during a live migration. 
In the current upstream code the only information passed with the event is the 
instance id, so nova loops over all the ports on the instance and rebuilds all 
the information about subnets/floating IPs/fixed IPs/etc. for that instance. 
This results in O(N^2) neutron API calls, where N is the number of vifs on the 
instance.


3) mriedem has proposed a patch series (https://review.openstack.org/#/c/465783 
and https://review.openstack.org/#/c/465787) that would change neutron to 
include the port ID, and allow nova to update just that port.  This reduces the 
cost to O(N), but it's still significant.


In a hardware lab with 4 compute nodes I created 4 boot-from-volume instances, 
each with 16 vifs, and then live-migrated them all in parallel.  (The one on 
compute-0 was migrated to compute-1, the one on compute-1 was migrated to 
compute-2, etc.)  The aggregate CPU usage for a few critical components on the 
controller node is shown below.  Note in particular the CPU usage for neutron: 
it uses most of 10 CPUs for ~10 seconds, spiking to 13 CPUs.  This seems like 
an absurd amount of work to do just to update the cache in nova.



Labels:
  L0: neutron-server
  L1: nova-conductor
  L2: beam.smp
  L3: postgres
date        time          dt        L0       L1       L2       L3
yyyy/mm/dd  hh:mm:ss.dec  (s)    occ (%)  occ (%)  occ (%)  occ (%)
2017-05-19  17:51:38.710  2.173    19.75     1.28     2.85     1.96
2017-05-19  17:51:40.012  1.302     1.02     1.75     3.80     5.07
2017-05-19  17:51:41.334  1.322     2.34     2.66     5.25     1.76
2017-05-19  17:51:42.681  1.347    91.79     3.31     5.27     5.64
2017-05-19  17:51:44.035  1.354    40.78     7.27     3.48     7.34
2017-05-19  17:51:45.406  1.371     7.12    21.35     8.66    19.58
2017-05-19  17:51:46.784  1.378    16.71   196.29     6.87    15.93
2017-05-19  17:51:48.133  1.349    18.51   362.46     8.57    25.70
2017-05-19  17:51:49.508  1.375   284.16   199.30     4.58    18.49
2017-05-19  17:51:50.919  1.411   512.88    17.61     7.47    42.88
2017-05-19  17:51:52.322  1.403   412.34     8.90     9.15    19.24
2017-05-19  17:51:53.734  1.411   320.24     5.20    10.59     9.08
2017-05-19  17:51:55.129  1.396   304.92     2.27    10.65    10.29
2017-05-19  17:51:56.551  1.422   556.09    14.56    10.74    18.85
2017-05-19  17:51:57.977  1.426   979.63    43.41    14.17    21.32
2017-05-19  17:51:59.382  1.405   902.56    48.31    13.69    18.59
2017-05-19  17:52:00.808  1.425  1140.99    74.28    15.12    17.18
2017-05-19  17:52:02.238  1.430  1013.91    69.77    16.46    21.19
2017-05-19  17:52:03.647  1.409   964.94   175.09    15.81    27.23
2017-05-19  17:52:05.077  1.430   838.15   109.13    15.70    34.12
2017-05-19  17:52:06.502  1.425   525.88    79.09    14.42    11.09
2017-05-19  17:52:07.954  1.452   614.58    38.38    12.20    17.89
2017-05-19  17:52:09.380  1.426   763.25    68.40    12.36    16.08
2017-05-19  17:52:10.825  1.445   901.57    73.59    15.90    41.12
2017-05-19  17:52:12.252  1.427   966.15    42.97    16.76    23.07
2017-05-19  17:52:13.702  1.450   902.40    70.98    19.66    17.50
2017-05-19  17:52:15.173  1.471  1023.33    59.71    19.78    18.91
2017-05-19  17:52:16.605  1.432  1127.04    64.19    16.41    26.80
2017-05-19  17:52:18.046  1.442  1300.56    68.22    16.29    24.39
2017-05-19  17:52:19.517  1.471  1055.60    71.74    14.39    17.09
2017-05-19  17:52:20.983  1.465   845.30    61.48    15.24    22.86
2017-05-19  17:52:22.447  1.464  1027.33    65.53    15.94    26.85
2017-05-19  17:52:23.919  1.472  1003.08    56.97    14.39    28.93
2017-05-19  17:52:25.367  1.448   702.50    45.42    11.78    20.53
2017-05-19  17:52:26.814  1.448   558.63    66.48    13.22    29.64
2017-05-19  17:52:28.276  1.462   620.34   206.63    14.58    17.17
2017-05-19  17:52:29.749  1.473   555.62   110.37    10.95    13.27
2017-05-19  17:52:31.228  1.479   436.66    33.65     9.00    21.55
2017-05-19  17:52:32.685  1.456   417.12    87.44    13.44    12.27
2017-05-19  17:52:34.128  1.443   368.31    87.08    11.95    14.70