On Fri, Dec 25, 2015 at 02:31:14PM -0800, Alexander Duyck wrote:
> The PCI hot-plug specification calls out that the OS can optionally
> implement a "pause" mechanism which is meant to be used for high
> availability type environments. What I am proposing is basically
> extending the standard SHPC capable PCI bridge so that we can support
> the DMA page dirtying for everything hosted on it, add a vendor
> specific block to the config space so that the guest can notify the
> host that it will do page dirtying, and add a mechanism to indicate
> that all hot-plug events during the warm-up phase of the migration are
> pause events instead of full removals.
Two comments:

1. A vendor-specific capability will always be problematic. It is better
   to register a capability ID with the PCI SIG.

2. There are actually several capabilities:

   A. Support for memory dirtying. If this is not supported, we must
      stop the device before migration. This is supported by core guest
      OS code, using patches similar to those posted by you.

   B. Support for device replacement. This is a faster form of hotplug,
      where the device is removed and later another device using the
      same driver is inserted in the same slot.

   (B) is a possible optimization, but I am convinced (A) should be
   implemented independently of (B).

> I've been poking around in the kernel and QEMU code and the part I
> have been trying to sort out is how to get QEMU based pci-bridge to
> use the SHPC driver because from what I can tell the driver never
> actually gets loaded on the device as it is left in the control of
> ACPI hot-plug.
>
> - Alex

There are ways, but you can just use PCI Express, it's easier.

-- 
MST