Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments

2017-04-04 Thread Juan Quintela
"Dr. David Alan Gilbert"  wrote:
> * Juan Quintela (quint...@redhat.com) wrote:
>> Added doc comments for existing functions and rewrote the existing
>> comments in a common style.
>> 
>> Signed-off-by: Juan Quintela 
>> ---
>>  migration/ram.c | 348 
>> 
>>  1 file changed, 227 insertions(+), 121 deletions(-)
>> 

>>   *
>>   * If this is the 1st block, it also writes the block identification
>>   *
>> - * Returns: Number of bytes written
>> + * Returns the number of bytes written
>
> Do the doc tools recognise that to pick up the explanation
> for the return value?

No clue.  Following qemu/include/exec/memory.h
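
For reference, the layout being converged on looks like this (made-up
function, just to show where the Returns line sits):

/**
 * foo_send_page: send one page to the wire
 *
 * Returns the number of bytes written, or negative on error
 *
 * @f: QEMUFile where to send the data
 * @offset: offset of the page inside its block
 */
static size_t foo_send_page(QEMUFile *f, ram_addr_t offset);

Whether the doc tools actually pick up a bare "Returns ..." line as the
return-value description is exactly the open question above.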

>> @@ -459,8 +474,8 @@ static void xbzrle_cache_zero_page(ram_addr_t 
>> current_addr)
>>   *  -1 means that xbzrle would be longer than normal
>>   *
>>   * @f: QEMUFile where to send the data
>> - * @current_data:
>> - * @current_addr:
>> + * @current_data: contents of the page
>
> That's wrong.  The point of current_data is that it gets updated by this
> function to point to the cache page whenever the data ends up in the cache.
> It's important then that the caller uses that pointer to save the data to
> disk/network rather than the original pointer, since the data that's saved
> must exactly match the cache contents even if the guest is still writing to 
> it.
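
A minimal sketch of that contract, with hypothetical names standing in
for the real ram.c/xbzrle code:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

static uint8_t cache_copy[PAGE_SIZE];

/* Copy the page into the cache and repoint the caller's pointer at the
 * cached copy, mirroring what save_xbzrle_page() does with *current_data. */
static void cache_store_page(uint8_t **current_data)
{
    memcpy(cache_copy, *current_data, PAGE_SIZE);
    *current_data = cache_copy;     /* the caller must use this copy now */
}

static void send_page(const uint8_t *data)
{
    /* stand-in for writing the page to the migration stream */
    (void)data;
}

int main(void)
{
    uint8_t *guest_page = calloc(1, PAGE_SIZE);  /* the guest may still write here */
    uint8_t *p = guest_page;

    cache_store_page(&p);
    /* Send from p (the cached copy), not from guest_page: later XBZRLE
     * diffs are computed against the cache, so what went on the wire
     * must match the cache byte for byte even if the guest keeps
     * writing to guest_page. */
    send_page(p);
    free(guest_page);
    return 0;
}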

this is the current text:

* @current_data: pointer to the address of the page contents

This was Peter's suggestion.

Rest of suggestions included. 

Thanks, Juan.



Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments

2017-04-03 Thread Juan Quintela
"Dr. David Alan Gilbert"  wrote:
> * Peter Xu (pet...@redhat.com) wrote:
>> Hi, Juan,
>> 
>> Got several nitpicks below... (along with some questions)
>> 
>> On Thu, Mar 23, 2017 at 09:44:54PM +0100, Juan Quintela wrote:
>> 
>> [...]
>
>> > @@ -1157,11 +1186,12 @@ static bool get_queued_page(MigrationState
>> > *ms, PageSearchStatus *pss,
>> >  }
>> >  
>> >  /**
>> > - * flush_page_queue: Flush any remaining pages in the ram request queue
>> > - *it should be empty at the end anyway, but in error cases there may 
>> > be
>> > - *some left.
>> > + * flush_page_queue: flush any remaining pages in the ram request queue
>> 
>> Here the comment says (just as the function name suggests) that we
>> will "flush any remaining pages in the ram request queue"; however,
>> in the implementation we are only freeing everything in
>> src_page_requests. The problem is that "flush" makes me think of
>> "flushing the rest of the pages to the other side"... while it's not.
>> 
>> Would it be nice if we just renamed the function to something else,
>> like migration_page_queue_free()? We can tune the comments
>> correspondingly as well.
>
> Yes that probably would be a better name.

done
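
Roughly, all the renamed helper does is free what is queued; nothing
goes on the wire (a sketch with a made-up request struct, not the
actual migration code):

#include <stdlib.h>

/* Hypothetical page-request entry; the real one carries a RAMBlock
 * pointer, an offset and a length. */
struct PageRequest {
    struct PageRequest *next;
};

/* Purely cleanup, typically on the error/teardown path: every queued
 * request is freed, none of them is sent. */
static void migration_page_queue_free(struct PageRequest **head)
{
    while (*head) {
        struct PageRequest *req = *head;

        *head = req->next;
        free(req);
    }
}
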
>> > - * Allocate data structures etc needed by incoming migration with 
>> > postcopy-ram
>> > - * postcopy-ram's similarly names postcopy_ram_incoming_init does the work
>> > +/**
>> > + * ram_postococpy_incoming_init: allocate postcopy data structures
>> > + *
>> > + * Returns 0 for success and negative if there was one error
>> > + *
>> > + * @mis: current migration incoming state
>> > + *
>> > + * Allocate data structures etc needed by incoming migration with
>> > + * postcopy-ram postcopy-ram's similarly names
>> > + * postcopy_ram_incoming_init does the work
>> 
>> This sentence is slightly hard to understand... But I think the
>> function name explained itself enough though. :)
>
> A '.' after the first 'postcopy-ram' would make it more readable.
>
> Dave

Done.  Once there, I spelled postcopy correctly O:-)
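
For context, the function itself is just a thin wrapper; something
along these lines (a sketch from memory, the page-count computation may
not match the tree exactly):

int ram_postcopy_incoming_init(MigrationIncomingState *mis)
{
    unsigned long ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;

    /* the real allocation work happens in postcopy-ram.c */
    return postcopy_ram_incoming_init(mis, ram_pages);
}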



Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments

2017-03-31 Thread Dr. David Alan Gilbert
* Juan Quintela (quint...@redhat.com) wrote:
> Added doc comments for existing functions and rewrote the existing
> comments in a common style.
> 
> Signed-off-by: Juan Quintela 
> ---
>  migration/ram.c | 348 
> 
>  1 file changed, 227 insertions(+), 121 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index de1e0a3..76f1fc4 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -96,11 +96,17 @@ static void XBZRLE_cache_unlock(void)
>  qemu_mutex_unlock(&XBZRLE.lock);
>  }
>  
> -/*
> - * called from qmp_migrate_set_cache_size in main thread, possibly while
> - * a migration is in progress.
> - * A running migration maybe using the cache and might finish during this
> - * call, hence changes to the cache are protected by XBZRLE.lock().
> +/**
> + * xbzrle_cache_resize: resize the xbzrle cache
> + *
> + * This function is called from qmp_migrate_set_cache_size in main
> + * thread, possibly while a migration is in progress.  A running
> + * migration may be using the cache and might finish during this call,
> + * hence changes to the cache are protected by XBZRLE.lock().
> + *
> + * Returns the new_size or negative in case of error.
> + *
> + * @new_size: new cache size
>   */
>  int64_t xbzrle_cache_resize(int64_t new_size)
>  {
> @@ -323,6 +329,7 @@ static inline void terminate_compression_threads(void)
>  int idx, thread_count;
>  
>  thread_count = migrate_compress_threads();
> +
>  for (idx = 0; idx < thread_count; idx++) {
>  qemu_mutex_lock(&comp_param[idx].mutex);
>  comp_param[idx].quit = true;
> @@ -383,11 +390,11 @@ void migrate_compress_threads_create(void)
>  }
>  
>  /**
> - * save_page_header: Write page header to wire
> + * save_page_header: write page header to wire
>   *
>   * If this is the 1st block, it also writes the block identification
>   *
> - * Returns: Number of bytes written
> + * Returns the number of bytes written

Do the doc tools recognise that to pick up the explanation
for the return value?

>   *
>   * @f: QEMUFile where to send the data
>   * @block: block that contains the page we want to send
> @@ -410,11 +417,14 @@ static size_t save_page_header(QEMUFile *f, RAMBlock 
> *block, ram_addr_t offset)
>  return size;
>  }
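
(For context, the body being documented here is roughly the following,
reconstructed from memory, so treat the details as approximate:

static size_t save_page_header(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
{
    size_t size, len;

    /* offset carries the RAM_SAVE_FLAG_* bits in its low bits */
    qemu_put_be64(f, offset);
    size = 8;

    /* only the first page of a block carries the block identification */
    if (!(offset & RAM_SAVE_FLAG_CONTINUE)) {
        len = strlen(block->idstr);
        qemu_put_byte(f, len);
        qemu_put_buffer(f, (uint8_t *)block->idstr, len);
        size += 1 + len;
    }
    return size;
}

so the "number of bytes written" is the 8-byte offset/flags word plus,
for the first page of a block, 1 + strlen(idstr).)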
>  
> -/* Reduce amount of guest cpu execution to hopefully slow down memory writes.
> - * If guest dirty memory rate is reduced below the rate at which we can
> - * transfer pages to the destination then we should be able to complete
> - * migration. Some workloads dirty memory way too fast and will not 
> effectively
> - * converge, even with auto-converge.
> +/**
> + * mig_throotle_guest_down: throotle down the guest

one 'o'

> + *
> + * Reduce amount of guest cpu execution to hopefully slow down memory
> + * writes. If guest dirty memory rate is reduced below the rate at
> + * which we can transfer pages to the destination then we should be
> + * able to complete migration. Some workloads dirty memory way too
> + * fast and will not effectively converge, even with auto-converge.
>   */
>  static void mig_throttle_guest_down(void)
>  {
> @@ -431,11 +441,16 @@ static void mig_throttle_guest_down(void)
>  }
>  }
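
(For reference, the logic the comment describes is roughly this, using
the cpu-throttle helpers; the parameter names are from memory and may
not be exact:

static void mig_throttle_guest_down(void)
{
    MigrationState *s = migrate_get_current();
    uint64_t pct_initial = s->parameters.cpu_throttle_initial;
    uint64_t pct_increment = s->parameters.cpu_throttle_increment;

    /* Start throttling at the initial percentage, then keep ratcheting
     * it up until the dirty rate drops below what we can transfer. */
    if (!cpu_throttle_active()) {
        cpu_throttle_set(pct_initial);
    } else {
        cpu_throttle_set(cpu_throttle_get_percentage() + pct_increment);
    }
}
)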
>  
> -/* Update the xbzrle cache to reflect a page that's been sent as all 0.
> +/**
> + * xbzrle_cache_zero_page: insert a zero page in the XBZRLE cache
> + *
> + * @current_addr: address for the zero page
> + *
> + * Update the xbzrle cache to reflect a page that's been sent as all 0.
>   * The important thing is that a stale (not-yet-0'd) page be replaced
>   * by the new data.
>   * As a bonus, if the page wasn't in the cache it gets added so that
> - * when a small write is made into the 0'd page it gets XBZRLE sent
> + * when a small write is made into the 0'd page it gets XBZRLE sent.
>   */
>  static void xbzrle_cache_zero_page(ram_addr_t current_addr)
>  {
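
(The body here is tiny; roughly the following, where the cache_insert()
signature and the ZERO_TARGET_PAGE buffer are written from memory:

static void xbzrle_cache_zero_page(ram_addr_t current_addr)
{
    if (ram_bulk_stage || !migrate_use_xbzrle()) {
        return;
    }

    /* We don't care if this fails to allocate a new cache page, as long
     * as any stale (not-yet-0'd) copy gets replaced. */
    cache_insert(XBZRLE.cache, current_addr, ZERO_TARGET_PAGE,
                 bitmap_sync_count);
}
)
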
> @@ -459,8 +474,8 @@ static void xbzrle_cache_zero_page(ram_addr_t 
> current_addr)
>   *  -1 means that xbzrle would be longer than normal
>   *
>   * @f: QEMUFile where to send the data
> - * @current_data:
> - * @current_addr:
> + * @current_data: contents of the page

That's wrong.  The point of current_data is that it gets updated by this
function to point to the cache page whenever the data ends up in the cache.
It's important then that the caller uses that pointer to save the data to
disk/network rather than the original pointer, since the data that's saved
must exactly match the cache contents even if the guest is still writing to it.

> + * @current_addr: addr of the page
>   * @block: block that contains the page we want to send
>   * @offset: offset inside the block for the page
>   * @last_stage: if we are at the completion stage
> @@ -530,13 +545,17 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t 
> **current_data,
>  return 1;
>  }
>  
> -/* Called with rcu_read_lock() to protect migration_bitmap
> - * rb: The RAMBlock  to search for 

Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments

2017-03-31 Thread Dr. David Alan Gilbert
* Peter Xu (pet...@redhat.com) wrote:
> Hi, Juan,
> 
> Got several nitpicks below... (along with some questions)
> 
> On Thu, Mar 23, 2017 at 09:44:54PM +0100, Juan Quintela wrote:
> 
> [...]

> > @@ -1157,11 +1186,12 @@ static bool get_queued_page(MigrationState *ms, 
> > PageSearchStatus *pss,
> >  }
> >  
> >  /**
> > - * flush_page_queue: Flush any remaining pages in the ram request queue
> > - *it should be empty at the end anyway, but in error cases there may be
> > - *some left.
> > + * flush_page_queue: flush any remaining pages in the ram request queue
> 
> Here the comment says (just as the function name suggests) that we
> will "flush any remaining pages in the ram request queue"; however,
> in the implementation we are only freeing everything in
> src_page_requests. The problem is that "flush" makes me think of
> "flushing the rest of the pages to the other side"... while it's not.
> 
> Would it be nice if we just renamed the function to something else,
> like migration_page_queue_free()? We can tune the comments
> correspondingly as well.

Yes that probably would be a better name.

> [...]
> 
> > -/*
> > - * Helper for postcopy_chunk_hostpages; it's called twice to cleanup
> > - *   the two bitmaps, that are similar, but one is inverted.
> > +/**
> > + * postcopy_chuck_hostpages_pass: canocalize bitmap in hostpages
>   ^ should be n? ^^ canonicalize?
> 
> >   *
> > - * We search for runs of target-pages that don't start or end on a
> > - * host page boundary;
> > - * unsent_pass=true: Cleans up partially unsent host pages by searching
> > - * the unsentmap
> > - * unsent_pass=false: Cleans up partially dirty host pages by searching
> > - * the main migration bitmap
> > + * Helper for postcopy_chunk_hostpages; it's called twice to
> > + * canonicalize the two bitmaps, that are similar, but one is
> > + * inverted.
> >   *
> > + * Postcopy requires that all target pages in a hostpage are dirty or
> > + * clean, not a mix.  This function canonicalizes the bitmaps.
> > + *
> > + * @ms: current migration state
> > + * @unsent_pass: if true we need to canonicalize partially unsent host 
> > pages
> > + *   otherwise we need to canonicalize partially dirty host 
> > pages
> > + * @block: block that contains the page we want to canonicalize
> > + * @pds: state for postcopy
> >   */
> >  static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool 
> > unsent_pass,
> >RAMBlock *block,
> 
> [...]
> 
> > +/**
> > + * ram_save_setup: iterative stage for migration
>   ^^ should be ram_save_iterate()?
> 
> > + *
> > + * Returns zero to indicate success and negative for error
> > + *
> > + * @f: QEMUFile where to send the data
> > + * @opaque: RAMState pointer
> > + */
> >  static int ram_save_iterate(QEMUFile *f, void *opaque)
> >  {
> >  int ret;
> > @@ -2091,7 +2169,16 @@ static int ram_save_iterate(QEMUFile *f, void 
> > *opaque)
> >  return done;
> >  }
> 
> [...]
> 
> > -/*
> > - * Allocate data structures etc needed by incoming migration with 
> > postcopy-ram
> > - * postcopy-ram's similarly names postcopy_ram_incoming_init does the work
> > +/**
> > + * ram_postococpy_incoming_init: allocate postcopy data structures
> > + *
> > + * Returns 0 for success and negative if there was one error
> > + *
> > + * @mis: current migration incoming state
> > + *
> > + * Allocate data structures etc needed by incoming migration with
> > + * postcopy-ram postcopy-ram's similarly names
> > + * postcopy_ram_incoming_init does the work
> 
> This sentence is slightly hard to understand... But I think the
> function name explained itself enough though. :)

A '.' after the first 'postcopy-ram' would make it more readable.

Dave

> Thanks,
> 
> -- peterx
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK



Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments

2017-03-28 Thread Juan Quintela
Peter Xu  wrote:
> On Fri, Mar 24, 2017 at 12:44:06PM +0100, Juan Quintela wrote:

>> >
>> > Here the comment says (just as the function name suggests) that we
>> > will "flush any remaining pages in the ram request queue"; however,
>> > in the implementation we are only freeing everything in
>> > src_page_requests. The problem is that "flush" makes me think of
>> > "flushing the rest of the pages to the other side"... while it's not.
>> >
>> > Would it be nice if we just renamed the function to something else,
>> > like migration_page_queue_free()? We can tune the comments
>> > correspondingly as well.
>> 
>> I will leave this one for Dave to answer O:-)
>> I agree that the previous name is not perfect, but I am not sure the
>> new one is much better either.
>> 
>> migration_drop_page_queue()?
>
> This is indeed a nitpick of mine... So please feel free to ignore it.
> :)
>
> But if we keep the function name, I would slightly prefer that at
> least we mention in the comment that this is only freeing things up,
> not sending anything out.

Added that to the comment.
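
It now reads roughly like this (paraphrased, not the exact wording that
went into the patch):

/**
 * migration_page_queue_free: drop any remaining pages in the ram
 * request queue
 *
 * It should be empty at the end anyway, but in error cases there may
 * be some left.  This is purely cleanup: nothing is sent to the
 * destination.
 *
 * @ms: current migration state
 */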

Thanks, Juan.



Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments

2017-03-26 Thread Peter Xu
On Fri, Mar 24, 2017 at 12:44:06PM +0100, Juan Quintela wrote:
> Peter Xu  wrote:
> > Hi, Juan,
> >
> > Got several nitpicks below... (along with some questions)
> >
> > On Thu, Mar 23, 2017 at 09:44:54PM +0100, Juan Quintela wrote:
> >
> > [...]
> >
> >>  static void xbzrle_cache_zero_page(ram_addr_t current_addr)
> >>  {
> >> @@ -459,8 +474,8 @@ static void xbzrle_cache_zero_page(ram_addr_t 
> >> current_addr)
> >>   *  -1 means that xbzrle would be longer than normal
> >>   *
> >>   * @f: QEMUFile where to send the data
> >> - * @current_data:
> >> - * @current_addr:
> >> + * @current_data: contents of the page
> >
> > Since current_data is a double pointer, so... maybe "pointer to the
> > address of page content"?
> 
> ok. changed.
> 
> > Btw, a question not related to this series... Why here in
> > save_xbzrle_page() we need to update *current_data to be the newly
> > created page cache? I see that we have:
> >
> > /* update *current_data when the page has been
> >inserted into cache */
> > *current_data = get_cached_data(XBZRLE.cache, current_addr);
> >
> > What would be the difference if we just use the old pointer in
> > RAMBlock.host?
> 
> Its contents could have been changed since we inserted it into the
> cache.  Then we could end up having "memory corruption" during transfer.

Oh yes. Hmm I noticed that the content will be changed along the way
(IIUC even before we insert the page into the cache, since we are
doing everything in migration thread, while at the same time vcpu
thread might be doing anything), but I didn't notice that we need to
make sure the cached page be exactly the same as the one sent to the
destination side, or the "diff" may not match. Thanks for pointing
out. :)
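
To make the failure mode concrete, here is a toy demonstration with a
per-byte delta standing in for the real XBZRLE encoding (not qemu code,
just an illustration):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define PG 8   /* tiny "page" */

static void make_diff(const uint8_t *old, const uint8_t *new, int8_t *diff)
{
    for (int i = 0; i < PG; i++) {
        diff[i] = (int8_t)(new[i] - old[i]);
    }
}

static void apply_diff(uint8_t *page, const int8_t *diff)
{
    for (int i = 0; i < PG; i++) {
        page[i] = (uint8_t)(page[i] + diff[i]);
    }
}

int main(void)
{
    uint8_t cache[PG] = {1, 1, 1, 1, 1, 1, 1, 1}; /* snapshot stored in the cache */
    uint8_t wire[PG]  = {1, 1, 9, 1, 1, 1, 1, 1}; /* live page, guest wrote a 9 meanwhile */
    uint8_t next[PG]  = {2, 2, 2, 2, 2, 2, 2, 2}; /* page contents at the next sync */
    uint8_t dst[PG];
    int8_t diff[PG];

    memcpy(dst, wire, PG);         /* destination keeps what was actually sent */
    make_diff(cache, next, diff);  /* source diffs against its cache, not the wire data */
    apply_diff(dst, diff);

    printf("dst[2] = %d (expected 2)\n", dst[2]);  /* prints 10: corruption */
    return 0;
}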

> 
> 
> > [...]
> >
> >> @@ -1157,11 +1186,12 @@ static bool get_queued_page(MigrationState
> >> *ms, PageSearchStatus *pss,
> >>  }
> >>  
> >>  /**
> >> - * flush_page_queue: Flush any remaining pages in the ram request queue
> >> - *it should be empty at the end anyway, but in error cases there may 
> >> be
> >> - *some left.
> >> + * flush_page_queue: flush any remaining pages in the ram request queue
> >
> > Here the comment says (just as the function name suggests) that we
> > will "flush any remaining pages in the ram request queue"; however,
> > in the implementation we are only freeing everything in
> > src_page_requests. The problem is that "flush" makes me think of
> > "flushing the rest of the pages to the other side"... while it's not.
> >
> > Would it be nice if we just renamed the function to something else,
> > like migration_page_queue_free()? We can tune the comments
> > correspondingly as well.
> 
> I will leave this one for Dave to answer O:-)
> I agree that the previous name is not perfect, but I am not sure the
> new one is much better either.
> 
> migration_drop_page_queue()?

This is indeed a nitpick of mine... So please feel free to ignore it.
:)

But if we keep the function name, I would slightly prefer that at
least we mention in the comment that this is only freeing things up,
not sending anything out.

> 
> 
> >
> > [...]
> >
> >> -/*
> >> - * Helper for postcopy_chunk_hostpages; it's called twice to cleanup
> >> - *   the two bitmaps, that are similar, but one is inverted.
> >> +/**
> >> + * postcopy_chuck_hostpages_pass: canocalize bitmap in hostpages
> >   ^ should be n? ^^ canonicalize?
> 
> Fixed.
> 
> >> - * We search for runs of target-pages that don't start or end on a
> >> - * host page boundary;
> >> - * unsent_pass=true: Cleans up partially unsent host pages by searching
> >> - * the unsentmap
> >> - * unsent_pass=false: Cleans up partially dirty host pages by searching
> >> - * the main migration bitmap
> >> + * Helper for postcopy_chunk_hostpages; it's called twice to
> >> + * canonicalize the two bitmaps, that are similar, but one is
> >> + * inverted.
> >>   *
> >> + * Postcopy requires that all target pages in a hostpage are dirty or
> >> + * clean, not a mix.  This function canonicalizes the bitmaps.
> >> + *
> >> + * @ms: current migration state
> >> + * @unsent_pass: if true we need to canonicalize partially unsent host 
> >> pages
> >> + *   otherwise we need to canonicalize partially dirty host 
> >> pages
> >> + * @block: block that contains the page we want to canonicalize
> >> + * @pds: state for postcopy
> >>   */
> >>  static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool 
> >> unsent_pass,
> >>RAMBlock *block,
> >
> > [...]
> >
> >> +/**
> >> + * ram_save_setup: iterative stage for migration
> >   ^^ should be ram_save_iterate()?
> 
> fixed.  Too much copy and paste.
> 
> >
> >> + *
> >> + * Returns zero to indicate success and negative for error
> >> + *
> >> + * @f: QEMUFile where to send the data
> >> + * @opaque: RAMState pointer
> >> + */
> >>  static int 

Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments

2017-03-24 Thread Juan Quintela
Peter Xu  wrote:
> Hi, Juan,
>
> Got several nitpicks below... (along with some questions)
>
> On Thu, Mar 23, 2017 at 09:44:54PM +0100, Juan Quintela wrote:
>
> [...]
>
>>  static void xbzrle_cache_zero_page(ram_addr_t current_addr)
>>  {
>> @@ -459,8 +474,8 @@ static void xbzrle_cache_zero_page(ram_addr_t 
>> current_addr)
>>   *  -1 means that xbzrle would be longer than normal
>>   *
>>   * @f: QEMUFile where to send the data
>> - * @current_data:
>> - * @current_addr:
>> + * @current_data: contents of the page
>
> Since current_data is a double pointer, so... maybe "pointer to the
> address of page content"?

ok. changed.

> Btw, a question not related to this series... Why here in
> save_xbzrle_page() we need to update *current_data to be the newly
> created page cache? I see that we have:
>
> /* update *current_data when the page has been
>inserted into cache */
> *current_data = get_cached_data(XBZRLE.cache, current_addr);
>
> What would be the difference if we just use the old pointer in
> RAMBlock.host?

Its contents could have been changed since we inserted it into the
cache.  Then we could end up having "memory corruption" during transfer.


> [...]
>
>> @@ -1157,11 +1186,12 @@ static bool get_queued_page(MigrationState
>> *ms, PageSearchStatus *pss,
>>  }
>>  
>>  /**
>> - * flush_page_queue: Flush any remaining pages in the ram request queue
>> - *it should be empty at the end anyway, but in error cases there may be
>> - *some left.
>> + * flush_page_queue: flush any remaining pages in the ram request queue
>
> Here the comment says (just as the function name suggests) that we
> will "flush any remaining pages in the ram request queue"; however,
> in the implementation we are only freeing everything in
> src_page_requests. The problem is that "flush" makes me think of
> "flushing the rest of the pages to the other side"... while it's not.
> 
> Would it be nice if we just renamed the function to something else,
> like migration_page_queue_free()? We can tune the comments
> correspondingly as well.

I will leave this one for Dave to answer O:-)
I agree that the previous name is not perfect, but I am not sure the
new one is much better either.

migration_drop_page_queue()?


>
> [...]
>
>> -/*
>> - * Helper for postcopy_chunk_hostpages; it's called twice to cleanup
>> - *   the two bitmaps, that are similar, but one is inverted.
>> +/**
>> + * postcopy_chuck_hostpages_pass: canocalize bitmap in hostpages
>   ^ should be n? ^^ canonicalize?

Fixed.

>> - * We search for runs of target-pages that don't start or end on a
>> - * host page boundary;
>> - * unsent_pass=true: Cleans up partially unsent host pages by searching
>> - * the unsentmap
>> - * unsent_pass=false: Cleans up partially dirty host pages by searching
>> - * the main migration bitmap
>> + * Helper for postcopy_chunk_hostpages; it's called twice to
>> + * canonicalize the two bitmaps, that are similar, but one is
>> + * inverted.
>>   *
>> + * Postcopy requires that all target pages in a hostpage are dirty or
>> + * clean, not a mix.  This function canonicalizes the bitmaps.
>> + *
>> + * @ms: current migration state
>> + * @unsent_pass: if true we need to canonicalize partially unsent host pages
>> + *   otherwise we need to canonicalize partially dirty host 
>> pages
>> + * @block: block that contains the page we want to canonicalize
>> + * @pds: state for postcopy
>>   */
>>  static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool 
>> unsent_pass,
>>RAMBlock *block,
>
> [...]
>
>> +/**
>> + * ram_save_setup: iterative stage for migration
>   ^^ should be ram_save_iterate()?

fixed.  Too much copy and paste.

>
>> + *
>> + * Returns zero to indicate success and negative for error
>> + *
>> + * @f: QEMUFile where to send the data
>> + * @opaque: RAMState pointer
>> + */
>>  static int ram_save_iterate(QEMUFile *f, void *opaque)
>>  {
>>  int ret;
>> @@ -2091,7 +2169,16 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>>  return done;
>>  }
>
> [...]
>
>> -/*
>> - * Allocate data structures etc needed by incoming migration with 
>> postcopy-ram
>> - * postcopy-ram's similarly names postcopy_ram_incoming_init does the work
>> +/**
>> + * ram_postococpy_incoming_init: allocate postcopy data structures
>> + *
>> + * Returns 0 for success and negative if there was one error
>> + *
>> + * @mis: current migration incoming state
>> + *
>> + * Allocate data structures etc needed by incoming migration with
>> + * postcopy-ram postcopy-ram's similarly names
>> + * postcopy_ram_incoming_init does the work
>
> This sentence is slightly hard to understand... But I think the
> function name explained itself enough though. :)

I didn't want to remove Dave's comments at this point, just doing the
formatting and making them consistent.  I agree that this 

Re: [Qemu-devel] [PATCH 01/51] ram: Update all functions comments

2017-03-24 Thread Peter Xu
Hi, Juan,

Got several nitpicks below... (along with some questions)

On Thu, Mar 23, 2017 at 09:44:54PM +0100, Juan Quintela wrote:

[...]

>  static void xbzrle_cache_zero_page(ram_addr_t current_addr)
>  {
> @@ -459,8 +474,8 @@ static void xbzrle_cache_zero_page(ram_addr_t 
> current_addr)
>   *  -1 means that xbzrle would be longer than normal
>   *
>   * @f: QEMUFile where to send the data
> - * @current_data:
> - * @current_addr:
> + * @current_data: contents of the page

Since current_data is a double pointer, so... maybe "pointer to the
address of page content"?

Btw, a question not related to this series... Why here in
save_xbzrle_page() we need to update *current_data to be the newly
created page cache? I see that we have:

/* update *current_data when the page has been
   inserted into cache */
*current_data = get_cached_data(XBZRLE.cache, current_addr);

What would be the difference if we just use the old pointer in
RAMBlock.host?

[...]

> @@ -1157,11 +1186,12 @@ static bool get_queued_page(MigrationState *ms, 
> PageSearchStatus *pss,
>  }
>  
>  /**
> - * flush_page_queue: Flush any remaining pages in the ram request queue
> - *it should be empty at the end anyway, but in error cases there may be
> - *some left.
> + * flush_page_queue: flush any remaining pages in the ram request queue

Here the comment says (just as the function name suggests) that we
will "flush any remaining pages in the ram request queue"; however,
in the implementation we are only freeing everything in
src_page_requests. The problem is that "flush" makes me think of
"flushing the rest of the pages to the other side"... while it's not.

Would it be nice if we just renamed the function to something else,
like migration_page_queue_free()? We can tune the comments
correspondingly as well.

[...]

> -/*
> - * Helper for postcopy_chunk_hostpages; it's called twice to cleanup
> - *   the two bitmaps, that are similar, but one is inverted.
> +/**
> + * postcopy_chuck_hostpages_pass: canocalize bitmap in hostpages
  ^ should be n? ^^ canonicalize?

>   *
> - * We search for runs of target-pages that don't start or end on a
> - * host page boundary;
> - * unsent_pass=true: Cleans up partially unsent host pages by searching
> - * the unsentmap
> - * unsent_pass=false: Cleans up partially dirty host pages by searching
> - * the main migration bitmap
> + * Helper for postcopy_chunk_hostpages; it's called twice to
> + * canonicalize the two bitmaps, that are similar, but one is
> + * inverted.
>   *
> + * Postcopy requires that all target pages in a hostpage are dirty or
> + * clean, not a mix.  This function canonicalizes the bitmaps.
> + *
> + * @ms: current migration state
> + * @unsent_pass: if true we need to canonicalize partially unsent host pages
> + *   otherwise we need to canonicalize partially dirty host pages
> + * @block: block that contains the page we want to canonicalize
> + * @pds: state for postcopy
>   */
>  static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool 
> unsent_pass,
>RAMBlock *block,
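
As an aside, a much-simplified picture of what "canonicalize" means
here: make every host-page-sized run of target-page bits uniform.  This
only shows the dirty-page direction and assumes the page count is a
multiple of the host page ratio; the real helper also handles the
unsent bitmap:

#include <stdbool.h>
#include <stddef.h>

static void canonicalize_dirty(bool *dirty, size_t nr_target_pages,
                               size_t pages_per_host_page)
{
    for (size_t hp = 0; hp < nr_target_pages; hp += pages_per_host_page) {
        bool any = false;

        for (size_t i = 0; i < pages_per_host_page; i++) {
            any |= dirty[hp + i];
        }
        /* if any target page in this host page is dirty, dirty them all,
         * so postcopy never sees a partially dirty host page */
        if (any) {
            for (size_t i = 0; i < pages_per_host_page; i++) {
                dirty[hp + i] = true;
            }
        }
    }
}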

[...]

> +/**
> + * ram_save_setup: iterative stage for migration
  ^^ should be ram_save_iterate()?

> + *
> + * Returns zero to indicate success and negative for error
> + *
> + * @f: QEMUFile where to send the data
> + * @opaque: RAMState pointer
> + */
>  static int ram_save_iterate(QEMUFile *f, void *opaque)
>  {
>  int ret;
> @@ -2091,7 +2169,16 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>  return done;
>  }

[...]

> -/*
> - * Allocate data structures etc needed by incoming migration with 
> postcopy-ram
> - * postcopy-ram's similarly names postcopy_ram_incoming_init does the work
> +/**
> + * ram_postococpy_incoming_init: allocate postcopy data structures
> + *
> + * Returns 0 for success and negative if there was one error
> + *
> + * @mis: current migration incoming state
> + *
> + * Allocate data structures etc needed by incoming migration with
> + * postcopy-ram postcopy-ram's similarly names
> + * postcopy_ram_incoming_init does the work

This sentence is slightly hard to understand... But I think the
function name explained itself enough though. :)

Thanks,

-- peterx



[Qemu-devel] [PATCH 01/51] ram: Update all functions comments

2017-03-23 Thread Juan Quintela
Added doc comments for existing functions and rewrote the existing
comments in a common style.

Signed-off-by: Juan Quintela 
---
 migration/ram.c | 348 
 1 file changed, 227 insertions(+), 121 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index de1e0a3..76f1fc4 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -96,11 +96,17 @@ static void XBZRLE_cache_unlock(void)
 qemu_mutex_unlock(&XBZRLE.lock);
 }
 
-/*
- * called from qmp_migrate_set_cache_size in main thread, possibly while
- * a migration is in progress.
- * A running migration maybe using the cache and might finish during this
- * call, hence changes to the cache are protected by XBZRLE.lock().
+/**
+ * xbzrle_cache_resize: resize the xbzrle cache
+ *
+ * This function is called from qmp_migrate_set_cache_size in main
+ * thread, possibly while a migration is in progress.  A running
+ * migration may be using the cache and might finish during this call,
+ * hence changes to the cache are protected by XBZRLE.lock().
+ *
+ * Returns the new_size or negative in case of error.
+ *
+ * @new_size: new cache size
  */
 int64_t xbzrle_cache_resize(int64_t new_size)
 {
@@ -323,6 +329,7 @@ static inline void terminate_compression_threads(void)
 int idx, thread_count;
 
 thread_count = migrate_compress_threads();
+
 for (idx = 0; idx < thread_count; idx++) {
 qemu_mutex_lock(&comp_param[idx].mutex);
 comp_param[idx].quit = true;
@@ -383,11 +390,11 @@ void migrate_compress_threads_create(void)
 }
 
 /**
- * save_page_header: Write page header to wire
+ * save_page_header: write page header to wire
  *
  * If this is the 1st block, it also writes the block identification
  *
- * Returns: Number of bytes written
+ * Returns the number of bytes written
  *
  * @f: QEMUFile where to send the data
  * @block: block that contains the page we want to send
@@ -410,11 +417,14 @@ static size_t save_page_header(QEMUFile *f, RAMBlock 
*block, ram_addr_t offset)
 return size;
 }
 
-/* Reduce amount of guest cpu execution to hopefully slow down memory writes.
- * If guest dirty memory rate is reduced below the rate at which we can
- * transfer pages to the destination then we should be able to complete
- * migration. Some workloads dirty memory way too fast and will not effectively
- * converge, even with auto-converge.
+/**
+ * mig_throotle_guest_down: throotle down the guest
+ *
+ * Reduce amount of guest cpu execution to hopefully slow down memory
+ * writes. If guest dirty memory rate is reduced below the rate at
+ * which we can transfer pages to the destination then we should be
+ * able to complete migration. Some workloads dirty memory way too
+ * fast and will not effectively converge, even with auto-converge.
  */
 static void mig_throttle_guest_down(void)
 {
@@ -431,11 +441,16 @@ static void mig_throttle_guest_down(void)
 }
 }
 
-/* Update the xbzrle cache to reflect a page that's been sent as all 0.
+/**
+ * xbzrle_cache_zero_page: insert a zero page in the XBZRLE cache
+ *
+ * @current_addr: address for the zero page
+ *
+ * Update the xbzrle cache to reflect a page that's been sent as all 0.
  * The important thing is that a stale (not-yet-0'd) page be replaced
  * by the new data.
  * As a bonus, if the page wasn't in the cache it gets added so that
- * when a small write is made into the 0'd page it gets XBZRLE sent
+ * when a small write is made into the 0'd page it gets XBZRLE sent.
  */
 static void xbzrle_cache_zero_page(ram_addr_t current_addr)
 {
@@ -459,8 +474,8 @@ static void xbzrle_cache_zero_page(ram_addr_t current_addr)
  *  -1 means that xbzrle would be longer than normal
  *
  * @f: QEMUFile where to send the data
- * @current_data:
- * @current_addr:
+ * @current_data: contents of the page
+ * @current_addr: addr of the page
  * @block: block that contains the page we want to send
  * @offset: offset inside the block for the page
  * @last_stage: if we are at the completion stage
@@ -530,13 +545,17 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t 
**current_data,
 return 1;
 }
 
-/* Called with rcu_read_lock() to protect migration_bitmap
- * rb: The RAMBlock  to search for dirty pages in
- * start: Start address (typically so we can continue from previous page)
- * ram_addr_abs: Pointer into which to store the address of the dirty page
- *   within the global ram_addr space
+/**
+ * migration_bitmap_find_dirty: find the next drity page from start
  *
- * Returns: byte offset within memory region of the start of a dirty page
+ * Called with rcu_read_lock() to protect migration_bitmap
+ *
+ * Returns the byte offset within memory region of the start of a dirty page
+ *
+ * @rb: RAMBlock where to search for dirty pages
+ * @start: starting address (typically so we can continue from previous page)
+ * @ram_addr_abs: pointer into which to store the address of the dirty page
+ *within the global