* Lidong Chen (jemmy858...@gmail.com) wrote:
> RDMA migration implements a save_page function for QEMUFile, but
> ram_control_save_page does not increase bytes_xfer. So when doing
> RDMA migration, it will use the whole bandwidth.

Hi,
  Thanks for this,

> Signed-off-by: Lidong Chen <lidongc...@tencent.com>
> ---
>  migration/qemu-file.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
> index 2ab2bf3..217609d 100644
> --- a/migration/qemu-file.c
> +++ b/migration/qemu-file.c
> @@ -253,7 +253,7 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
>      if (f->hooks && f->hooks->save_page) {
>          int ret = f->hooks->save_page(f, f->opaque, block_offset,
>                                        offset, size, bytes_sent);
> -
> +        f->bytes_xfer += size;

I'm a bit confused, because I know rdma.c calls acct_update_position()
and I'd always thought that was enough.
That calls qemu_update_position(...) which increases f->pos but not
f->bytes_xfer.
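
For reference, roughly what those two helpers look like in the tree
(paraphrased from memory, so double-check against migration/ram.c and
migration/qemu-file.c):

    /* migration/ram.c: called from rdma.c once a chunk goes out */
    void acct_update_position(QEMUFile *f, size_t size, bool zero)
    {
        ...
        qemu_update_position(f, size);
    }

    /* migration/qemu-file.c: advances the position counter only */
    void qemu_update_position(QEMUFile *f, size_t size)
    {
        f->pos += size;    /* f->bytes_xfer is left untouched */
    }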

f->pos is used to calculate the 'transferred' value in
migration_update_counters, and thus the current bandwidth and downtime -
but, as you say, not the rate_limit.
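
i.e. something like this (again from memory, names may be slightly off):

    /* migration/migration.c: migration_update_counters() derives
     * bandwidth/downtime from f->pos via qemu_ftell() */
    transferred = qemu_ftell(s->to_dst_file) - s->iteration_initial_bytes;
    bandwidth = (double)transferred / time_spent;

    /* migration/qemu-file.c: but the limiter only ever looks at
     * f->bytes_xfer */
    int qemu_file_rate_limit(QEMUFile *f)
    {
        ...
        if (f->xfer_limit > 0 && f->bytes_xfer > f->xfer_limit) {
            return 1;
        }
        return 0;
    }

so bytes accounted only through qemu_update_position() never throttle
anything.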

So really, should this f->bytes_xfer += size go in
qemu_update_position()?
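
Something like this (untested, just to show what I mean):

    void qemu_update_position(QEMUFile *f, size_t size)
    {
        f->pos += size;
        f->bytes_xfer += size;    /* charge the rate limiter too */
    }

That way every path that accounts real bytes - including the RDMA one
via acct_update_position() - also counts against the rate limit, not
just pages that go through the save_page hook.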

Juan: I'm not sure I know why we have both bytes_xfer and pos.

Dave

>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>              if (bytes_sent && *bytes_sent > 0) {
>                  qemu_update_position(f, *bytes_sent);
> -- 
> 1.8.3.1
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
