From: Ivan Ren <ivan...@tencent.com>

This patch fixes a multifd migration bug in the migration speed calculation.
The problem can be reproduced as follows (a sample monitor transcript follows
the steps):

1. start a vm and apply a heavy memory write stress to prevent the vm from
   being successfully migrated to the destination
2. begin a migration with multifd
3. migrate for a long time [actually, this can be measured by transferred
   bytes]
4. migrate cancel
5. begin a new migration with multifd; the migration will directly run into
   the migration_completion phase
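For reference, the steps above can be driven from the HMP monitor roughly as
below. This is a sketch, not a verified transcript: the destination host and
port are placeholders, the destination QEMU must be started with a matching
-incoming option, and the multifd capability must be enabled on both sides
(older trees name the capability x-multifd):

    (qemu) migrate_set_capability multifd on
    (qemu) migrate -d tcp:<dest-host>:4444
    (qemu) info migrate                       <- step 3: watch "transferred ram" grow
    (qemu) migrate_cancel
    (qemu) migrate -d tcp:<dest-host>:4444    <- step 5

Without the fix, the second migrate completes almost immediately even though
the guest is still dirtying memory heavily.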
Reason as follows:

Migration updates the bandwidth and s->threshold_size in
migration_update_counters() after each BUFFER_DELAY interval:

    current_bytes = migration_total_bytes(s);
    transferred = current_bytes - s->iteration_initial_bytes;
    time_spent = current_time - s->iteration_start_time;
    bandwidth = (double)transferred / time_spent;
    s->threshold_size = bandwidth * s->parameters.downtime_limit;

In multifd migration, migration_total_bytes() returns
qemu_ftell(s->to_dst_file) + ram_counters.multifd_bytes.
s->iteration_initial_bytes is initialized to 0 at every new migration, but
ram_counters is a global variable, so data from previous migrations
accumulates in it. If ram_counters.multifd_bytes is big enough, the inflated
bandwidth makes pending_size >= s->threshold_size false in
migration_iteration_run after the first migration_update_counters call, and
the migration completes prematurely (a worked numeric example follows the
patch).

Signed-off-by: Ivan Ren <ivan...@tencent.com>
---
 migration/migration.c | 15 ++++++++++++++-
 migration/savevm.c    |  1 +
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/migration/migration.c b/migration/migration.c
index 8a607fe1e2..d35a6ae6f9 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1908,6 +1908,11 @@ static bool migrate_prepare(MigrationState *s, bool blk, bool blk_inc,
     }
 
     migrate_init(s);
+    /*
+     * set ram_counters memory to zero for a
+     * new migration
+     */
+    memset(&ram_counters, 0, sizeof(ram_counters));
 
     return true;
 }
@@ -3187,6 +3192,10 @@ static void *migration_thread(void *opaque)
 
     object_ref(OBJECT(s));
     s->iteration_start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+    /*
+     * Update s->iteration_initial_bytes to match s->iteration_start_time.
+     */
+    s->iteration_initial_bytes = migration_total_bytes(s);
 
     qemu_savevm_state_header(s->to_dst_file);
 
@@ -3252,7 +3261,11 @@ static void *migration_thread(void *opaque)
              * breaking transferred_bytes and bandwidth calculation
              */
             s->iteration_start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
-            s->iteration_initial_bytes = 0;
+            /*
+             * Update s->iteration_initial_bytes to the current size to
+             * avoid historical data leading to a wrong bandwidth.
+             */
+            s->iteration_initial_bytes = migration_total_bytes(s);
         }
 
         current_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
diff --git a/migration/savevm.c b/migration/savevm.c
index 79ed44d475..480c511b19 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1424,6 +1424,7 @@ static int qemu_savevm_state(QEMUFile *f, Error **errp)
     }
 
     migrate_init(ms);
+    memset(&ram_counters, 0, sizeof(ram_counters));
     ms->to_dst_file = f;
 
     qemu_mutex_unlock_iothread();
-- 
2.17.2 (Apple Git-113)
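To make the numbers concrete, below is a standalone illustration (not QEMU
code) of the calculation quoted above. All values are invented for the
example: ~100 GB of multifd traffic left over in the global ram_counters
from a cancelled migration, a 100 ms update interval (BUFFER_DELAY) and the
default 300 ms downtime limit:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t stale_multifd_bytes = 100ULL << 30; /* leftover: ~100 GB  */
        uint64_t to_dst_file_bytes   = 64 << 10;     /* really sent: 64 KB */

        /* migration_total_bytes():
         * qemu_ftell(s->to_dst_file) + ram_counters.multifd_bytes */
        uint64_t current_bytes = to_dst_file_bytes + stale_multifd_bytes;

        uint64_t iteration_initial_bytes = 0;        /* the buggy reset    */
        uint64_t transferred = current_bytes - iteration_initial_bytes;

        uint64_t time_spent = 100;                           /* ms         */
        double bandwidth = (double)transferred / time_spent; /* bytes/ms   */

        uint64_t downtime_limit = 300;                       /* ms         */
        uint64_t threshold_size = (uint64_t)(bandwidth * downtime_limit);

        /* Prints ~322 GB: any realistic pending_size is below that, so
         * pending_size >= s->threshold_size in migration_iteration_run()
         * is false and the migration jumps straight to completion. */
        printf("threshold_size = %" PRIu64 " bytes\n", threshold_size);
        return 0;
    }

With the patch applied, ram_counters is zeroed at migrate_init() time and
iteration_initial_bytes starts at current_bytes, so transferred reflects
only the current iteration and the bandwidth estimate stays sane.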