On 05/03/2016 03:14 AM, Daniel P. Berrange wrote:

There are currently many options for live migration with QEMU that can
assist in completion

<snip>

Given this I've spent the last week creating an automated test harness
for QEMU upstream which triggers migration with an extreme guest CPU
load and measures the performance impact of these features on the guest,
and whether the migration actually completes.

I hope to be able to publish the results of this investigation this week
which should facilitate us in deciding which is best to use for OpenStack.
The spoiler though is that all the options are pretty terrible, except for
post-copy.

Just to be clear, it's not really CPU load that's the issue though, right?

Presumably it would be more accurate to say that the issue is the rate at which unique memory pages are being dirtied, and the total number of dirty pages relative to your copy bandwidth.

This probably doesn't change the results though: at a high enough dirty rate you either pause the VM to keep it from dirtying more memory, or you post-copy migrate and dirty the memory on the destination.
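To make that concrete, here is a toy Python sketch of why pre-copy fails to converge when the dirty rate exceeds copy bandwidth. This is an invented illustration with made-up numbers, not QEMU's actual migration algorithm or tunables: each pre-copy pass takes `remaining / bandwidth` seconds, during which the guest dirties `dirty_rate * copy_time` worth of pages, and migration can finish only once the remaining dirty set fits in the downtime budget.

```python
# Toy model of iterative pre-copy convergence (hypothetical numbers,
# not QEMU's real implementation).

def precopy_converges(ram_mb, dirty_rate_mbps, bandwidth_mbps,
                      max_downtime_mb, max_iterations=30):
    """Return True if iterative pre-copy shrinks the dirty set enough
    to transfer the remainder within the allowed downtime."""
    remaining = ram_mb  # first pass copies all of guest RAM
    for _ in range(max_iterations):
        if remaining <= max_downtime_mb:
            return True
        copy_time = remaining / bandwidth_mbps   # seconds for this pass
        remaining = dirty_rate_mbps * copy_time  # dirtied meanwhile
    return False

# Dirty rate below bandwidth: the dirty set shrinks each pass.
assert precopy_converges(8192, 100, 1000, 50) is True
# Dirty rate above bandwidth: the dirty set grows and never converges,
# which is where pausing the VM or post-copy comes in.
assert precopy_converges(8192, 1200, 1000, 50) is False
```

The point of the sketch is that CPU load only matters indirectly, through the dirty rate it produces; once dirty rate exceeds bandwidth, no number of pre-copy passes helps.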

Chris

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev