Hello Peter,

On Thu, Jul 7, 2022 at 2:54 PM Peter Xu <pet...@redhat.com> wrote:
>
> On Mon, Jul 04, 2022 at 05:23:14PM -0300, Leonardo Bras wrote:
> > Signed-off-by: Leonardo Bras <leob...@redhat.com>
> > ---
> >  qapi/migration.json   | 7 ++++++-
> >  migration/migration.c | 2 ++
> >  monitor/hmp-cmds.c    | 4 ++++
> >  3 files changed, 12 insertions(+), 1 deletion(-)
> >
> > diff --git a/qapi/migration.json b/qapi/migration.json
> > index 7102e474a6..fed08b9b88 100644
> > --- a/qapi/migration.json
> > +++ b/qapi/migration.json
> > @@ -55,6 +55,10 @@
> >  # @postcopy-bytes: The number of bytes sent during the post-copy phase
> >  #                  (since 7.0).
> >  #
> > +# @dirty-sync-missed-zero-copy: Number of times dirty RAM synchronization could
> > +#                               not avoid copying zero pages. This is between 0
>
> Avoid copying zero pages?  Isn't this for counting MSG_ZEROCOPY fallbacks?

Yes, sorry, I think I got confused at some point between some cuts & pastes.
It should be "not avoid copying dirty pages."
I will fix that in v4.
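Just so the v4 wording is unambiguous: the counter is meant to count
MSG_ZEROCOPY fallbacks, i.e. send completions where the kernel reports it
had to copy the pages after all. As a rough sketch of where that
information comes from (my illustration only; the helper name is made up,
and in the tree this logic lives in qio_channel_socket_flush()):

    /* Illustrative only: drain one MSG_ZEROCOPY completion from the
     * socket error queue and report whether the kernel fell back to
     * copying. Returns 1 on a copied ("missed") completion, 0 on a
     * true zero-copy completion, -1 if nothing was queued. */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <linux/errqueue.h>

    static int zero_copy_completion_copied(int sockfd)
    {
        char control[CMSG_SPACE(sizeof(struct sock_extended_err))];
        struct msghdr msg = {
            .msg_control = control,
            .msg_controllen = sizeof(control),
        };
        struct cmsghdr *cm;
        struct sock_extended_err *serr;

        if (recvmsg(sockfd, &msg, MSG_ERRQUEUE | MSG_DONTWAIT) < 0) {
            return -1;                    /* no completion pending */
        }
        for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
            if (!((cm->cmsg_level == SOL_IP && cm->cmsg_type == IP_RECVERR) ||
                  (cm->cmsg_level == SOL_IPV6 && cm->cmsg_type == IPV6_RECVERR))) {
                continue;                 /* not an error-queue notification */
            }
            serr = (struct sock_extended_err *)CMSG_DATA(cm);
            if (serr->ee_origin != SO_EE_ORIGIN_ZEROCOPY) {
                continue;                 /* unrelated error-queue entry */
            }
            /* The kernel sets this flag when it could not send the
             * pages zero-copy and copied them instead. */
            return !!(serr->ee_code & SO_EE_CODE_ZEROCOPY_COPIED);
        }
        return -1;
    }

IIUC that is also where the upper bound below comes from: each multifd
channel is flushed once per dirty sync, so the counter can grow by at most
@multifd-channels per @dirty-sync-count increment.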
>
> > +#                               and @dirty-sync-count * @multifd-channels.
>
> I'd not name it "dirty-sync-*" because fundamentally the accounting is
> not done like that (more in a later patch).

Ok, I will take a look & answer there.

> I also think we should squash
> patch 2/3 as patch 3 only started to provide meaningful values.

IIRC, during the zero-copy-send implementation I was asked to keep the
property/capability in a separate patch in order to make it easier to
review, so I thought it would be helpful here as well.

>
> > +#                               (since 7.1)
> >  # Since: 0.14
> >  ##
> >  { 'struct': 'MigrationStats',
> > @@ -65,7 +69,8 @@
> >             'postcopy-requests' : 'int', 'page-size' : 'int',
> >             'multifd-bytes' : 'uint64', 'pages-per-second' : 'uint64',
> >             'precopy-bytes' : 'uint64', 'downtime-bytes' : 'uint64',
> > -           'postcopy-bytes' : 'uint64' } }
> > +           'postcopy-bytes' : 'uint64',
> > +           'dirty-sync-missed-zero-copy' : 'uint64' } }
> >
> >  ##
> >  # @XBZRLECacheStats:
> > diff --git a/migration/migration.c b/migration/migration.c
> > index 78f5057373..048f7f8bdb 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -1027,6 +1027,8 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
> >      info->ram->normal_bytes = ram_counters.normal * page_size;
> >      info->ram->mbps = s->mbps;
> >      info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
> > +    info->ram->dirty_sync_missed_zero_copy =
> > +        ram_counters.dirty_sync_missed_zero_copy;
> >      info->ram->postcopy_requests = ram_counters.postcopy_requests;
> >      info->ram->page_size = page_size;
> >      info->ram->multifd_bytes = ram_counters.multifd_bytes;
> > diff --git a/monitor/hmp-cmds.c b/monitor/hmp-cmds.c
> > index ca98df0495..5f3be9e405 100644
> > --- a/monitor/hmp-cmds.c
> > +++ b/monitor/hmp-cmds.c
> > @@ -307,6 +307,10 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)
> >          monitor_printf(mon, "postcopy ram: %" PRIu64 " kbytes\n",
> >                         info->ram->postcopy_bytes >> 10);
> >      }
> > +    if (info->ram->dirty_sync_missed_zero_copy) {
> > +        monitor_printf(mon, "missed zero-copy on: %" PRIu64 " iterations\n",
> > +                       info->ram->dirty_sync_missed_zero_copy);
>
> I suggest we don't call it "iterations" because it's not the generic
> meaning of iterations.

Yeah, I thought that too, but I could not think of anything better.
What do you suggest instead?

Best regards,
Leo

>
> > +    }
> >      }
> >
> >      if (info->has_disk) {
> > --
> > 2.36.1
> >
> >
> --
> Peter Xu
>
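P.S.: in case it helps the naming discussion above, this is how the HMP
line renders with the patch as posted (the number is made up):

    (qemu) info migrate
    ...
    missed zero-copy on: 12 iterations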