* Daniel P. Berrangé (berra...@redhat.com) wrote:
> On Thu, Sep 02, 2021 at 07:19:58AM -0300, Leonardo Bras Soares Passos wrote:
> > On Thu, Sep 2, 2021 at 6:50 AM Daniel P. Berrangé <berra...@redhat.com> wrote:
> > > On Thu, Sep 02, 2021 at 06:34:01AM -0300, Leonardo Bras Soares Passos wrote:
> > > > On Thu, Sep 2, 2021 at 5:47 AM Daniel P. Berrangé <berra...@redhat.com> wrote:
> > > > > On Thu, Sep 02, 2021 at 03:38:11AM -0300, Leonardo Bras Soares Passos wrote:
> > > > > > > I would suggest checking in close(), but as mentioned
> > > > > > > earlier, I think the design is flawed because the caller
> > > > > > > fundamentally needs to know about completion for every
> > > > > > > single write they make in order to know when the buffer
> > > > > > > can be released / reused.
> > > > > >
> > > > > > Well, there could be a flush mechanism (maybe in io_sync_errck(),
> > > > > > activated with a parameter flag, or on a different method if a
> > > > > > callback is preferred):
> > > > > > In the MSG_ZEROCOPY docs, we can see that the example includes
> > > > > > using a poll() syscall after each packet sent, which means the fd
> > > > > > gets a signal after each sendmsg() happens, with error or not.
> > > > > >
> > > > > > We could harness this with a poll() and a relatively high timeout:
> > > > > > - We stop sending packets, and then call poll().
> > > > > > - Whenever poll() returns 0, a timeout happened, meaning it took
> > > > > >   too long without a sendmsg() completing, so all the packets
> > > > > >   have been sent.
> > > > > > - If it returns anything else, we go back to fixing the errors
> > > > > >   found (re-send).
> > > > > >
> > > > > > The problem may be defining the value of this timeout, but it
> > > > > > could be called only when zerocopy is active.
> > > > >
> > > > > Maybe we need to check completions at the end of each iteration of
> > > > > the migration dirty-page loop?
> > > >
> > > > Sorry, I am really new to this, and I still couldn't understand why
> > > > we would need to check at the end of each iteration, instead of
> > > > doing a full check at the end.
> > >
> > > The end of each iteration is an implicit synchronization point in the
> > > current migration code.
> > >
> > > For example, we might do 2 iterations of migration pre-copy, and then
> > > switch to post-copy mode. If the data from those 2 iterations hasn't
> > > been sent at the point we switch to post-copy, that is a semantic
> > > change from current behaviour. I don't know whether that would have a
> > > problematic effect on the migration process. Checking the async
> > > completions at the end of each iteration, though, would keep the
> > > semantics close to the current ones, reducing the risk of unexpected
> > > problems.
> >
> > What if we do the flush() before we start post-copy, instead of after
> > each iteration? Would that be enough?
>
> Possibly, yes. This really needs David G's input since he understands
> the code in way more detail than me.
Hmm, I'm not entirely sure why we have the sync after each iteration; the
case I can think of is that if we're doing async sending, we could have
two versions of the same page in flight (one from each iteration) - you'd
want those to get there in the right order.

Dave

> Regards,
> Daniel
> --
> |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o- https://fstop138.berrange.com :|
> |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK