On 05/05/2014 09:20 PM, Juan Quintela wrote:
> Alexey Kardashevskiy <a...@ozlabs.ru> wrote:
>> On 04/13/2014 12:38 AM, Alexey Kardashevskiy wrote:
>>> On 03/27/2014 08:01 PM, Markus Armbruster wrote:
>>>> Adding Juan.
>>>
>>> Ping?
>
> Patch is OK for me.

Who else needs to be OK to get this into upstream? :) Thanks!

> As the sender says, with guests doing anything bigger than 1GB RAM it is
> basically impossible to get within the 30ms downtime.
>
> Later, Juan.
>
>> Ping?
>>
>>>> Alexey Kardashevskiy <a...@ozlabs.ru> writes:
>>>>
>>>>> The existing timeout is 30ms, which at 100MB/s (1Gbit) gives us a
>>>>> maximum rate of 3MB/s. If we put some load on the guest, it is easy to
>>>>> push the page dirtying rate so high that live migration never
>>>>> completes. In the case of libvirt, that means the guest will be
>>>>> stopped anyway after the timeout specified in the "virsh migrate"
>>>>> command, and this normally introduces an even bigger delay.
>>>>>
>>>>> This changes max_downtime to 300ms, which seems to be a more
>>>>> reasonable value.
>>>>>
>>>>> Signed-off-by: Alexey Kardashevskiy <a...@ozlabs.ru>
>>>>> ---
>>>>>  migration.c | 2 +-
>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/migration.c b/migration.c
>>>>> index e0e24d4..02bbce9 100644
>>>>> --- a/migration.c
>>>>> +++ b/migration.c
>>>>> @@ -144,7 +144,7 @@ void process_incoming_migration(QEMUFile *f)
>>>>>  * the choice of nanoseconds is because it is the maximum resolution that
>>>>>  * get_clock() can achieve. It is an internal measure. All user-visible
>>>>>  * units must be in seconds */
>>>>> -static uint64_t max_downtime = 30000000;
>>>>> +static uint64_t max_downtime = 300000000;
>>>>>
>>>>>  uint64_t migrate_max_downtime(void)
>>>>>  {
>>>
>>> --
>>> Alexey