On Thu, Dec 05, 2024 at 10:03:58AM -0500, Peter Xu wrote:
> On Thu, Dec 05, 2024 at 10:18:53AM -0300, Fabiano Rosas wrote:
> > Daniel P. Berrangé <berra...@redhat.com> writes:
> > 
> > > On Wed, Dec 04, 2024 at 03:51:27PM -0500, Peter Xu wrote:
> > >> On Wed, Dec 04, 2024 at 08:02:31PM +0000, Daniel P. Berrangé wrote:
> > >> > I would say the difference is like a graceful shutdown vs pulling
> > >> > the power plug in a bare metal machine.
> > >> > 
> > >> > 'cancel' is intended to be graceful. It should leave you with a
> > >> > functional QEMU (or refuse to run if unsafe).
> > >> > 
> > >> > 'yank' is intended to be forceful, letting you get out of bad
> > >> > situations that would otherwise require you to kill the entire
> > >> > QEMU process, but still with a possible associated risk of data
> > >> > loss to the QEMU backends.
> > >> > 
> > >> > They have overlap, but are nonetheless different.
> > >> 
> > >> The question is more about whether yank should be used at all for
> > >> migration only, not about the other instances.
> > >> 
> > >> My answer is that yank should never be used for migration, because
> > >> "migrate_cancel" also pulls the power plug. Yank is not any more
> > >> forceful; if anything it always does less.
> > >> 
> > >> E.g. migration_yank_iochannel() is exactly what we do with
> > >> qmp_migrate_cancel() in the first place, except that migrate_cancel
> > >> only does it on the main channel (on both qemufiles, even if the
> > >> ioc is one). That should suffice, and it behaves the same way, as
> > >> strong as "yank".
> > > 
> > > I recall that at the time the yank stuff was introduced, one of the
> > > scenarios they were concerned about was related to locks held by
> > > QEMU code, e.g. that there are scenarios where migrate_cancel may
> > > not be processed promptly enough due to being stalled on mutexes
> > > held by other concurrently running threads. Now, I would expect any
> > > such long duration stalls on migration mutexes to be bugs, but the
> > > intent of yank is to give a recovery mechanism that can work around
> > > such bugs. The yank QMP command only interacts with its own local
> > > mutexes.
> > 
> > Ok, so that could only mean a thread stuck in recv() while holding the
> > BQL. I don't think we have any other locks which would stop
> > migrate_cancel from making progress, or other stall situations that
> > could be helped by a shutdown(). Note that most of the locks around
> > qemu_file were a late addition. I don't think that scenario is
> > possible today. I'll have to do some tests.
> 
> And if that is a real difference, I'd consider whether we should simply
> make migrate_cancel be oob-capable too. IOW, I still think it'll be good
> to stick with always one API to cancel a migration, no matter which it
> is. If we want to move over to yank then I think we should move all
> migrate_cancel operations into yank and deprecate "migrate_cancel", but
> that sounds like overkill.
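For reference, the mechanism both commands ultimately rely on here is just
shutdown(2) on the socket fd, which wakes any thread blocked in recv() on
it. A minimal standalone sketch of that effect (plain POSIX, not QEMU
code; compile with -pthread):

  /* A thread blocked in recv() is woken as soon as another thread
   * calls shutdown() on the same socket - the mechanism that both
   * migrate_cancel and yank rely on to break a stalled transfer. */
  #include <errno.h>
  #include <pthread.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  static void *reader(void *arg)
  {
      int fd = *(int *)arg;
      char buf[64];

      /* Blocks here; no data will ever arrive on this socket. */
      ssize_t n = recv(fd, buf, sizeof(buf), 0);
      printf("recv() returned %zd (%s)\n", n,
             n < 0 ? strerror(errno) : "EOF");
      return NULL;
  }

  int main(void)
  {
      int fds[2];
      pthread_t t;

      socketpair(AF_UNIX, SOCK_STREAM, 0, fds);
      pthread_create(&t, NULL, reader, &fds[0]);
      sleep(1);                     /* let the reader block in recv() */

      shutdown(fds[0], SHUT_RDWR);  /* the "yank": recv() returns 0 now */
      pthread_join(t, NULL);
      close(fds[0]);
      close(fds[1]);
      return 0;
  }
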
Well, migrate_cancel ought to be safer than yank. e.g. migrate_cancel
(sh|c)ould refuse to run if issued during the post-copy phase. Or even in
precopy: once we're in the final vmstate copy & switchover phase we
shouldn't need to cancel.

yank, meanwhile, will always run, no matter what, because by design it
has no interaction with the migration code beyond knowing that a socket
exists.

I don't think we should combine them. They have a lot in common, but
there are subtle differences that are relevant to the scenarios in which
they are intended to be used.

> There's only one thing that might not be oob-compatible there so far,
> which is bdrv_activate_all(). But I plan to remove it very soon (so
> that disks will be activated in the migration thread instead, just like
> in the failure cases).
> 
> > On that note, how is yank supposed to be accessed? I don't see
> > support in libvirt. Is there a way to hook into QMP after the fact
> > somehow?
> 
> -- 
> Peter Xu

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
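P.S. On the question above of how yank is driven: it is exposed as the
QMP commands "yank" and "query-yank" (both marked allow-oob, so they can
also be issued as "exec-oob" once the OOB capability has been
negotiated). A sketch of a session, assuming a migration is in flight and
thus a "migration" yank instance is registered:

  -> { "execute": "query-yank" }
  <- { "return": [ { "type": "migration" } ] }

  -> { "execute": "yank",
       "arguments": { "instances": [ { "type": "migration" } ] } }
  <- { "return": {} }

Since libvirt has no wrapper for these, the only way to reach them today
is a raw QMP monitor, e.g. via virsh qemu-monitor-command.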