On Thu, May 15, 2025 at 10:41:45AM -0700, Si-Wei Liu wrote:
> 
> 
> On 5/14/2025 10:43 PM, Michael S. Tsirkin wrote:
> > On Wed, May 14, 2025 at 05:17:15PM -0700, Si-Wei Liu wrote:
> > > Hi Eugenio,
> > > 
> > > On 5/14/2025 8:49 AM, Eugenio Perez Martin wrote:
> > > > On Wed, May 7, 2025 at 8:47 PM Jonah Palmer <jonah.pal...@oracle.com> 
> > > > wrote:
> > > > > Current memory operations like pinning may take a lot of time at the
> > > > > destination.  Currently they are done after the source of the migration
> > > > > is stopped, and before the workload is resumed at the destination.  This
> > > > > is a period where neither traffic can flow nor the VM workload can
> > > > > continue (downtime).
> > > > > 
> > > > > We can do better as we know the memory layout of the guest RAM at the
> > > > > destination from the moment that all devices are initialized.  So
> > > > > moving that operation earlier allows QEMU to communicate the maps to
> > > > > the kernel while the workload is still running in the source, so Linux
> > > > > can start mapping them.
> > > > > 
> > > > > As a small drawback, there is a time in the initialization where QEMU
> > > > > cannot respond to QMP etc.  By some testing, this time is about
> > > > > 0.2 seconds.  This may be further reduced (or increased) depending on
> > > > > the vdpa driver and the platform hardware, and it is dominated by the
> > > > > cost of memory pinning.
> > > > > 
> > > > > This matches the time that we move out of the so-called downtime
> > > > > window.  The downtime is measured by checking the trace timestamps
> > > > > from the moment the source suspends the device to the moment the
> > > > > destination starts the eighth and last virtqueue pair.  For a 39G
> > > > > guest, it goes from ~2.2526 secs to ~2.0949 secs.
> > > > > 
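To make the reordering described above concrete, here is a rough,
self-contained sketch (not the actual QEMU patches) of the pattern the
series describes: register the mapping listener at device init, guarded by
a listener_registered flag, so the pinning work overlaps with the source
still running and only the remaining start/stop work lands in the downtime
window.  All identifiers below (mock_vdpa, mock_listener_register, etc.)
are simplified stand-ins for illustration, not real QEMU/vhost-vdpa
symbols; the real series does this inside vhost_vdpa_init() and
vhost_vdpa_reset_device(), per the patch titles and the v4 changelog below.

/*
 * Toy mock of the register-early pattern; compiles standalone.
 */
#include <stdbool.h>
#include <stdio.h>

struct mock_vdpa {
    bool listener_registered;   /* mirrors the flag added by "vdpa: add
                                   listener_registered" */
};

/* Stand-in for registering the memory listener, which is what triggers
 * the DMA map / pin work in the kernel. */
static void mock_listener_register(struct mock_vdpa *v)
{
    if (v->listener_registered) {
        return;                 /* maps were already sent at init time */
    }
    printf("registering listener: maps sent to kernel, pinning starts\n");
    v->listener_registered = true;
}

static void mock_listener_unregister(struct mock_vdpa *v)
{
    if (v->listener_registered) {
        printf("unregistering listener: maps torn down\n");
        v->listener_registered = false;
    }
}

/* New flow: register at init, while the source is still running. */
static void mock_vdpa_init(struct mock_vdpa *v)
{
    v->listener_registered = false;
    mock_listener_register(v);
}

/* Device start no longer pays the pinning cost inside the downtime
 * window; it only registers if init could not (e.g. after a reset). */
static void mock_dev_start(struct mock_vdpa *v)
{
    mock_listener_register(v);
    printf("device started; vrings enabled\n");
}

/* Per the v4 changelog, device reset also unregisters the listener. */
static void mock_reset_device(struct mock_vdpa *v)
{
    mock_listener_unregister(v);
}

int main(void)
{
    struct mock_vdpa v;
    mock_vdpa_init(&v);   /* pinning overlaps with the source running */
    mock_dev_start(&v);   /* downtime window: no pinning left to do   */
    mock_reset_device(&v);
    return 0;
}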
> > > > Hi Jonah,
> > > > 
> > > > Could you update this benchmark? I don't think it has changed much, but
> > > > just to keep it as up to date as possible.
> > > Jonah is off this week and will not be back until next Tuesday, but I
> > > recall he did run a downtime test with a VM with 128GB of memory before
> > > taking off, which showed a clear improvement from around 10 seconds to
> > > 5.8 seconds after applying this series. Since this only affects the cover
> > > letter, would it be okay for you and Jason to ack now and then proceed to
> > > Michael for the upcoming merge?
> > > 
> > > > I think I cannot ack the series as I sent the first revision. Jason or
> > > > Si-Wei, could you ack it?
> > > Sure, I just gave my R-b; this series looks good to me. Hopefully Jason
> > > can add his own ack.
> > > 
> > > Thanks!
> > > -Siwei
> > I just sent a pull, next one in a week or two, so - no rush.
> All right, should be good to wait. In any case you have to repost a v2 PULL,
> hope this series can be piggybacked as we did extensive testing on it.
> ;-)
> 
> -Siwei

You mean "in case"?

> > 
> > 
> > > > Thanks!
> > > > 
> > > > > Future directions on top of this series may include moving more things
> > > > > ahead of the migration time, like setting DRIVER_OK or performing
> > > > > actual iterative migration of virtio-net devices.
> > > > > 
> > > > > Comments are welcome.
> > > > > 
> > > > > This series is a different approach from series [1]. As the title no
> > > > > longer reflects the changes, please refer to the previous one for the
> > > > > series history.
> > > > > 
> > > > > This series is based on [2]; it must be applied after it.
> > > > > 
> > > > > [Jonah Palmer]
> > > > > This series was rebased after [3] was pulled in, as [3] was a
> > > > > prerequisite fix for this series.
> > > > > 
> > > > > v4:
> > > > > ---
> > > > > * Add memory listener unregistration to vhost_vdpa_reset_device.
> > > > > * Remove memory listener unregistration from vhost_vdpa_reset_status.
> > > > > 
> > > > > v3:
> > > > > ---
> > > > > * Rebase
> > > > > 
> > > > > v2:
> > > > > ---
> > > > > * Move the memory listener registration to the vhost_vdpa_set_owner
> > > > >   function.
> > > > > * Move the iova_tree allocation to net_vhost_vdpa_init.
> > > > > 
> > > > > v1 at https://lists.gnu.org/archive/html/qemu-devel/2024-01/msg02136.html.
> > > > > 
> > > > > [1] https://patchwork.kernel.org/project/qemu-devel/cover/20231215172830.2540987-1-epere...@redhat.com/
> > > > > [2] https://lists.gnu.org/archive/html/qemu-devel/2024-01/msg05910.html
> > > > > [3] https://lore.kernel.org/qemu-devel/20250217144936.3589907-1-jonah.pal...@oracle.com/
> > > > > 
> > > > > Jonah - note: I'll be on vacation from May 10-19. Will respond to
> > > > >                 comments when I return.
> > > > > 
> > > > > Eugenio Pérez (7):
> > > > >     vdpa: check for iova tree initialized at net_client_start
> > > > >     vdpa: reorder vhost_vdpa_set_backend_cap
> > > > >     vdpa: set backend capabilities at vhost_vdpa_init
> > > > >     vdpa: add listener_registered
> > > > >     vdpa: reorder listener assignment
> > > > >     vdpa: move iova_tree allocation to net_vhost_vdpa_init
> > > > >     vdpa: move memory listener register to vhost_vdpa_init
> > > > > 
> > > > >    hw/virtio/vhost-vdpa.c         | 107 +++++++++++++++++++++------------
> > > > >    include/hw/virtio/vhost-vdpa.h |  22 ++++++-
> > > > >    net/vhost-vdpa.c               |  34 +----------
> > > > >    3 files changed, 93 insertions(+), 70 deletions(-)
> > > > > 
> > > > > --
> > > > > 2.43.5
> > > > > 

