On Wed, Jan 19, 2022 at 04:36:50PM +0000, Dr. David Alan Gilbert wrote:
> * Peter Xu (pet...@redhat.com) wrote:
> > This patch simplifies unqueue_page() on both sides of it (itself, and its
> > caller).
> > 
> > Firstly, right after unqueue_page() returns true we will definitely send a
> > huge page (see the ram_save_huge_page() call - it will _never_ exit before
> > it finishes sending that huge page), so unqueue_page() does not need to
> > step in small-page increments when huge pages are enabled on the ramblock.
> > IOW, only the 1st 4K page will ever be valid; when we unqueue the 2nd+
> > time we'll notice the whole huge page has already been sent anyway.
> > Operating at huge-page granularity removes a lot of redundant
> > unqueue_page() loop iterations.
> > 
> > Meanwhile, drop the dirty check.  It's not helpful to call test_bit()
> > every time to jump over clean pages, as ram_save_host_page() already does
> > so, and in a faster way (see commit ba1b7c812c ("migration/ram: Optimize
> > ram_save_host_page()", 2021-05-13)).  So that check is not necessary
> > either.
> > 
> > Drop the two tracepoints along the way - based on the above analysis it's
> > very possible that no one is really using them.
> > 
> > Signed-off-by: Peter Xu <pet...@redhat.com>
> 
> Yes, OK
> 
> Reviewed-by: Dr. David Alan Gilbert <dgilb...@redhat.com>
> 
> Although:
>   a) You might like to keep a trace in get_queued_page just to see
> what's getting unqueued
>   b) I think originally it was a useful diagnostic to find out when we
> were getting a lot of queue requests for pages that were already sent.

Ah, that makes sense.  How about I keep the test_bit check but remove the
loop?  I can then fold both a) and b) into one tracepoint:

========
diff --git a/migration/ram.c b/migration/ram.c
index 0df15ff663..02f36fa6d5 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1572,6 +1572,9 @@ static RAMBlock *unqueue_page(RAMState *rs, ram_addr_t *offset)
         migration_consume_urgent_request();
     }
 
+    trace_unqueue_page(block->idstr, *offset,
+                       test_bit((*offset >> TARGET_PAGE_BITS), block->bmap));
+
     return block;
 }
 
diff --git a/migration/trace-events b/migration/trace-events
index 3a9b3567ae..efa3a95f81 100644
--- a/migration/trace-events
+++ b/migration/trace-events
@@ -110,6 +110,7 @@ ram_save_iterate_big_wait(uint64_t milliconds, int iterations) "big wait: %" PRI
 ram_load_complete(int ret, uint64_t seq_iter) "exit_code %d seq iteration %" PRIu64
 ram_write_tracking_ramblock_start(const char *block_id, size_t page_size, void *addr, size_t length) "%s: page_size: %zu addr: %p length: %zu"
 ram_write_tracking_ramblock_stop(const char *block_id, size_t page_size, void *addr, size_t length) "%s: page_size: %zu addr: %p length: %zu"
+unqueue_page(char *block, uint64_t offset, bool dirty) "ramblock '%s' offset 0x%"PRIx64" dirty %d"
 
 # multifd.c
 multifd_new_send_channel_async(uint8_t id) "channel %d"
========

Thanks,

-- 
Peter Xu

