Re: [PATCH v3 17/23] multifd: Use normal pages array on the send side

2021-12-01 Thread Dr. David Alan Gilbert
* Juan Quintela (quint...@redhat.com) wrote:
> "Dr. David Alan Gilbert"  wrote:
> > * Juan Quintela (quint...@redhat.com) wrote:
> >> Signed-off-by: Juan Quintela 
> >
> > Can you explain a bit more what's going on here?
> 
> Sorry.
> 
> Until patch 20, we have what we have always had:
> 
> pages that are sent through multifd (non-zero pages).  We are going to
> call them normal pages.  So right now, we use the array of pages that we
> are passed in directly in the multifd send methods.
> 
> But when we introduce zero page handling around patch 20, we end up
> having two types of pages sent through multifd:
> - normal pages (a.k.a. non-zero pages)
> - zero pages
> 
> So the options are:
> - we rename the fields before we introduce the zero page code, and then
>   we introduce the zero page code.
> - we rename at the same time that we introduce the zero page code.
> 
> I decided to go with the 1st option.
> 
> The other thing that we do here is introduce the normal pages array, so
> right now we do:
> 
> for (i = 0; i < pages->num; i++) {
>     p->normal[p->normal_num] = pages->offset[i];
>     p->normal_num++;
> }
> 
> 
> Why?
> 
> Because then patch 20 becomes:
> 
> for (i = 0; i < pages->num; i++) {
>     if (buffer_is_zero(pages->block->host + pages->offset[i], page_size)) {
>         p->zero[p->zero_num] = pages->offset[i];
>         p->zero_num++;
>     } else {
>         p->normal[p->normal_num] = pages->offset[i];
>         p->normal_num++;
>     }
> }
> 
> i.e. we don't have to touch the handling of normal pages at all, only
> this for loop.
> 
> As an added benefit, after this patch, multifd methods don't need to
> know about the pages array, only about the params array (that will allow
> me to drop the locking earlier).
> 
> I hope this helps.

OK, so the code is OK, but it needs a commit message that explains all
that a bit more concisely.

Dave

> Later, Juan.
> 
-- 
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [PATCH v3 17/23] multifd: Use normal pages array on the send side

2021-11-30 Thread Juan Quintela
"Dr. David Alan Gilbert"  wrote:
> * Juan Quintela (quint...@redhat.com) wrote:
>> Signed-off-by: Juan Quintela 
>
> Can you explain a bit more what's going on here?

Sorry.

Until patch 20, we have what we have always had:

pages that are sent through multifd (non-zero pages).  We are going to
call them normal pages.  So right now, we use the array of pages that we
are passed in directly in the multifd send methods.

But when we introduce zero page handling around patch 20, we end up
having two types of pages sent through multifd:
- normal pages (a.k.a. non-zero pages)
- zero pages

So the options are:
- we rename the fields before we introduce the zero page code, and then
  we introduce the zero page code.
- we rename at the same time that we introduce the zero page code.

I decided to go with the 1st option.

The other thing that we do here is introduce the normal pages array, so
right now we do:

for (i = 0; i < pages->num; i++) {
    p->normal[p->normal_num] = pages->offset[i];
    p->normal_num++;
}


Why?

Because then patch 20 becomes:

for (i = 0; i < pages->num; i++) {
    if (buffer_is_zero(pages->block->host + pages->offset[i], page_size)) {
        p->zero[p->zero_num] = pages->offset[i];
        p->zero_num++;
    } else {
        p->normal[p->normal_num] = pages->offset[i];
        p->normal_num++;
    }
}

i.e. we don't have to touch the handling of normal pages at all, only
this for loop.

As an added benefit, after this patch, multifd methods don't need to
know about the pages array, only about the params array (that will allow
me to drop the locking earlier).

I hope this helps.

Later, Juan.




Re: [PATCH v3 17/23] multifd: Use normal pages array on the send side

2021-11-30 Thread Dr. David Alan Gilbert
* Juan Quintela (quint...@redhat.com) wrote:
> Signed-off-by: Juan Quintela 

Can you explain a bit more what's going on here?

Dave

> ---
>  migration/multifd.h  |  8 ++--
>  migration/multifd-zlib.c |  6 +++---
>  migration/multifd-zstd.c |  6 +++---
>  migration/multifd.c  | 30 +++---
>  migration/trace-events   |  4 ++--
>  5 files changed, 33 insertions(+), 21 deletions(-)
> 
> diff --git a/migration/multifd.h b/migration/multifd.h
> index 7496f951a7..78e73df3ec 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -104,14 +104,18 @@ typedef struct {
>  /* thread local variables */
>  /* packets sent through this channel */
>  uint64_t num_packets;
> -/* pages sent through this channel */
> -uint64_t num_pages;
> +/* non zero pages sent through this channel */
> +uint64_t num_normal_pages;
>  /* syncs main thread and channels */
>  QemuSemaphore sem_sync;
>  /* buffers to send */
>  struct iovec *iov;
>  /* number of iovs used */
>  uint32_t iovs_num;
> +/* Pages that are not zero */
> +ram_addr_t *normal;
> +/* num of non zero pages */
> +uint32_t normal_num;
>  /* used for compression methods */
>  void *data;
>  }  MultiFDSendParams;
> diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
> index f65159392a..25ef68a548 100644
> --- a/migration/multifd-zlib.c
> +++ b/migration/multifd-zlib.c
> @@ -106,16 +106,16 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
>  int ret;
>  uint32_t i;
>  
> -for (i = 0; i < p->pages->num; i++) {
> +for (i = 0; i < p->normal_num; i++) {
>  uint32_t available = z->zbuff_len - out_size;
>  int flush = Z_NO_FLUSH;
>  
> -if (i == p->pages->num - 1) {
> +if (i == p->normal_num - 1) {
>  flush = Z_SYNC_FLUSH;
>  }
>  
>  zs->avail_in = page_size;
> -zs->next_in = p->pages->block->host + p->pages->offset[i];
> +zs->next_in = p->pages->block->host + p->normal[i];
>  
>  zs->avail_out = available;
>  zs->next_out = z->zbuff + out_size;
> diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
> index 6933ba622a..61842d713e 100644
> --- a/migration/multifd-zstd.c
> +++ b/migration/multifd-zstd.c
> @@ -121,13 +121,13 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
>  z->out.size = z->zbuff_len;
>  z->out.pos = 0;
>  
> -for (i = 0; i < p->pages->num; i++) {
> +for (i = 0; i < p->normal_num; i++) {
>  ZSTD_EndDirective flush = ZSTD_e_continue;
>  
> -if (i == p->pages->num - 1) {
> +if (i == p->normal_num - 1) {
>  flush = ZSTD_e_flush;
>  }
> -z->in.src = p->pages->block->host + p->pages->offset[i];
> +z->in.src = p->pages->block->host + p->normal[i];
>  z->in.size = page_size;
>  z->in.pos = 0;
>  
> diff --git a/migration/multifd.c b/migration/multifd.c
> index 6983ba3e7c..dbe919b764 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -89,13 +89,13 @@ static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
>  MultiFDPages_t *pages = p->pages;
>  size_t page_size = qemu_target_page_size();
>  
> -for (int i = 0; i < p->pages->num; i++) {
> -p->iov[p->iovs_num].iov_base = pages->block->host + pages->offset[i];
> +for (int i = 0; i < p->normal_num; i++) {
> +p->iov[p->iovs_num].iov_base = pages->block->host + p->normal[i];
>  p->iov[p->iovs_num].iov_len = page_size;
>  p->iovs_num++;
>  }
>  
> -p->next_packet_size = p->pages->num * page_size;
> +p->next_packet_size = p->normal_num * page_size;
>  p->flags |= MULTIFD_FLAG_NOCOMP;
>  return 0;
>  }
> @@ -262,7 +262,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>  
>  packet->flags = cpu_to_be32(p->flags);
>  packet->pages_alloc = cpu_to_be32(p->pages->allocated);
> -packet->pages_used = cpu_to_be32(p->pages->num);
> +packet->pages_used = cpu_to_be32(p->normal_num);
>  packet->next_packet_size = cpu_to_be32(p->next_packet_size);
>  packet->packet_num = cpu_to_be64(p->packet_num);
>  
> @@ -270,9 +270,9 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
>  strncpy(packet->ramblock, p->pages->block->idstr, 256);
>  }
>  
> -for (i = 0; i < p->pages->num; i++) {
> +for (i = 0; i < p->normal_num; i++) {
>  /* there are architectures where ram_addr_t is 32 bit */
> -uint64_t temp = p->pages->offset[i];
> +uint64_t temp = p->normal[i];
>  
>  packet->offset[i] = cpu_to_be64(temp);
>  }
> @@ -556,6 +556,8 @@ void multifd_save_cleanup(void)
>  p->packet = NULL;
>  g_free(p->iov);
>  p->iov = NULL;
> +g_free(p->normal);
> +p->normal = NULL;
>  multifd_send_state->ops->send_cleanup(p, &local_err);
>