Re: [PATCH v9 Qemu 00/15] Add migration support for VFIO devices

2019-11-13 Thread Cornelia Huck
On Tue, 12 Nov 2019 22:35:09 +0530
Kirti Wankhede  wrote:

> Hi,
> 
> This patch set adds migration support for VFIO devices in QEMU.
> 
> This patch set includes the patches below:
> Patch 1-3:
> - Define the KABI for VFIO device migration support: device state and the
>   newly added ioctl definitions to get the dirty pages bitmap. These 3
>   patches are the same as the first 2 patches in the kernel patch set.

Meta: Might make sense to replace these three patches with a
placeholder for a linux-headers update, as we're reviewing this on the
kernel side anyway.




[PATCH v9 Qemu 00/15] Add migration support for VFIO devices

2019-11-12 Thread Kirti Wankhede
Hi,

This patch set adds migration support for VFIO devices in QEMU.

This patch set includes the patches below:
Patch 1-3:
- Define the KABI for VFIO device migration support: device state and the
  newly added ioctl definitions to get the dirty pages bitmap. These 3
  patches are the same as the first 2 patches in the kernel patch set. A
  sketch of the proposed uAPI follows below.
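
For reference, the device state and data-window part of the proposed uAPI
looks roughly like the sketch below. It mirrors the kernel series that is
still under review, so treat the field names and layout as provisional:

struct vfio_device_migration_info {
        __u32 device_state;         /* VFIO device state */
#define VFIO_DEVICE_STATE_RUNNING   (1 << 0)
#define VFIO_DEVICE_STATE_SAVING    (1 << 1)
#define VFIO_DEVICE_STATE_RESUMING  (1 << 2)
        __u32 reserved;
        __u64 pending_bytes;        /* device data left to read in _SAVING */
        __u64 data_offset;          /* offset of the data window in region */
        __u64 data_size;            /* amount of valid data in the window */
};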

Patch 4-6:
- A few code refactors
- Added save and restore functions for PCI configuration space; an
  illustrative sketch of the save side follows below.
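
As an illustration of the save side only (the helper name below is
hypothetical, not the exact function from the patches), streaming the
guest-visible config space through the migration stream can look like this:

static void vfio_pci_save_config_sketch(VFIOPCIDevice *vdev, QEMUFile *f)
{
    PCIDevice *pdev = &vdev->pdev;
    uint32_t off;

    /*
     * Stream the guest-visible config space word by word; the load side
     * reads it back and re-applies it via pci_default_write_config().
     */
    for (off = 0; off < PCI_CONFIG_SPACE_SIZE; off += 4) {
        qemu_put_be32(f, pci_default_read_config(pdev, off, 4));
    }
}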

Patch 7-12:
- Generic migration functionality for VFIO devices.
  * This patch set adds functionality only for PCI devices, but it can be
    extended to other VFIO devices.
  * Added all the basic functions required for the pre-copy, stop-and-copy
    and resume phases of migration.
  * Added a state change notifier; from that notifier function, the VFIO
    device's state change is conveyed to the VFIO device driver.
  * During the save setup phase and the resume/load setup phase, the
    migration region is queried and is used to read/write VFIO device data.
  * .save_live_pending and .save_live_iterate are implemented to use QEMU's
    iteration functionality during the pre-copy phase.
  * In .save_live_complete_precopy, that is, in the stop-and-copy phase,
    data is read from the VFIO device driver iteratively until the pending
    bytes reported by the driver reach zero. (The handler registration is
    sketched below.)
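
The glue into QEMU's migration machinery is the usual SaveVMHandlers table,
registered via register_savevm_live() when the device is realized. The
handler names below follow the patches; see the individual patches for the
exact wiring:

static SaveVMHandlers savevm_vfio_handlers = {
    .save_setup = vfio_save_setup,
    .save_cleanup = vfio_save_cleanup,
    .save_live_pending = vfio_save_pending,
    .save_live_iterate = vfio_save_iterate,
    .save_live_complete_precopy = vfio_save_complete_precopy,
    .load_setup = vfio_load_setup,
    .load_cleanup = vfio_load_cleanup,
    .load_state = vfio_load_state,
};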

Patch 13:
- Add vfio_listener_log_sync to mark dirty pages. The dirty pages bitmap is
  queried per container. All pages pinned by the vendor driver through the
  vfio_pin_pages external API have to be marked as dirty during migration.
  When there are CPU writes, CPU dirty page tracking can identify dirtied
  pages, but any page pinned by the vendor driver can also be written by the
  device. As of now there is no device with hardware support for dirty page
  tracking, so all pages pinned by the vendor driver have to be considered
  dirty.
  In QEMU, pages are marked dirty only while the device is in the
  stop-and-copy phase: if pages were marked dirty during the pre-copy phase
  and their content transferred from source to destination, there would be
  no way to tell which pages were dirtied again between that copy and the
  point where the device stops. To avoid repeated copies of the same
  content, pinned pages are marked dirty only during the stop-and-copy
  phase. (A log_sync sketch follows below.)
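
Conceptually, the new listener callback reduces to the sketch below. The
dirty-bitmap ioctl and its struct come from the in-flight kernel series, so
their names and layout are provisional and simplified here;
cpu_physical_memory_set_dirty_lebitmap() is the existing QEMU helper that
feeds a bitmap into dirty page tracking:

static void vfio_listener_log_sync(MemoryListener *listener,
                                   MemoryRegionSection *section)
{
    VFIOContainer *container = container_of(listener, VFIOContainer,
                                            listener);
    hwaddr iova = section->offset_within_address_space;
    hwaddr size = int128_get64(section->size);
    uint64_t pages = size / qemu_real_host_page_size;
    unsigned long *bitmap = bitmap_new(pages);
    struct vfio_iommu_type1_dirty_bitmap req = {   /* provisional uAPI */
        .argsz = sizeof(req),
        .iova = iova,
        .size = size,
        .bitmap = (uintptr_t)bitmap,
    };

    /* Ask the IOMMU container which pinned pages must be considered dirty */
    if (ioctl(container->fd, VFIO_IOMMU_GET_DIRTY_BITMAP, &req) == 0) {
        ram_addr_t ram = memory_region_get_ram_addr(section->mr) +
                         section->offset_within_region;
        cpu_physical_memory_set_dirty_lebitmap(bitmap, ram, pages);
    }
    g_free(bitmap);
}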

Patch 14:
- With a vIOMMU, an IO virtual address range can get unmapped while in the
  pre-copy phase of migration. In that case, the unmap ioctl should return
  the pages pinned in that range, and QEMU should report the corresponding
  guest physical pages as dirty. (A conceptual sketch follows below.)
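
On the QEMU side, the vIOMMU unmap path then looks conceptually like the
fragment below. The flag extending VFIO_IOMMU_UNMAP_DMA is still being
settled on the kernel side, so the flag and helper names here are
illustrative only:

static int vfio_dma_unmap_get_dirty(VFIOContainer *container,
                                    hwaddr iova, hwaddr size)
{
    struct vfio_iommu_type1_dma_unmap unmap = {
        .argsz = sizeof(unmap),
        .flags = VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP, /* provisional */
        .iova = iova,
        .size = size,
    };

    /*
     * Pages that were pinned within [iova, iova + size) come back marked
     * in a bitmap carried by a provisional extension of the unmap struct,
     * and are then reported to QEMU's dirty tracking as in log_sync.
     */
    return ioctl(container->fd, VFIO_IOMMU_UNMAP_DMA, &unmap);
}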

Patch 15:
- Make the VFIO PCI device migration capable. If the migration region is
  not provided by the driver, migration is blocked (see the sketch below).
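
The blocking itself reuses QEMU's existing migration-blocker mechanism. A
minimal sketch, assuming migration region setup has already failed and errp
is the caller's Error **:

Error *migration_blocker = NULL;

error_setg(&migration_blocker,
           "VFIO device doesn't support migration");
if (migrate_add_blocker(migration_blocker, errp) < 0) {
    error_free(migration_blocker);
    migration_blocker = NULL;
}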

Yet TODO:
Since there is no device with hardware support for system memory dirty
bitmap tracking, right now there is no other API from the vendor driver to
the VFIO IOMMU module to report dirty pages. In the future, when such
hardware support is implemented, an API will be required in the kernel so
that the vendor driver can report dirty pages to the VFIO module during
migration phases.

Below is the flow of state change for live migration where states in brackets
represent VM state, migration state and VFIO device state as:
(VM state, MIGRATION_STATUS, VFIO_DEVICE_STATE)

Live migration save path:
QEMU normal running state
(RUNNING, _NONE, _RUNNING)
|
migrate_init spawns migration_thread.
(RUNNING, _SETUP, _RUNNING|_SAVING)
Migration thread then calls each device's .save_setup()
|
(RUNNING, _ACTIVE, _RUNNING|_SAVING)
If the device is active, get pending bytes by .save_live_pending()
if pending bytes >= threshold_size, call .save_live_iterate()
Data of VFIO device for pre-copy phase is copied.
Iterate until pending bytes converge and are less than the threshold
|
On migration completion, vCPUs stop and .save_live_complete_precopy is
called for each active device. The VFIO device is then transitioned to
the _SAVING state.
(FINISH_MIGRATE, _DEVICE, _SAVING)
For the VFIO device, iterate in .save_live_complete_precopy until
pending data is 0.
(FINISH_MIGRATE, _DEVICE, _STOPPED)
|
(FINISH_MIGRATE, _COMPLETED, _STOPPED)
Migration thread schedules the cleanup bottom half and exits
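
Each VFIO_DEVICE_STATE transition above boils down to a write of the
device_state field at the start of the device's migration region. A minimal
sketch; the VFIOMigration/VFIORegion field names approximate the series:

static int vfio_migration_set_state(VFIODevice *vbasedev, uint32_t state)
{
    VFIORegion *region = &vbasedev->migration->region;
    off_t off = region->fd_offset +
                offsetof(struct vfio_device_migration_info, device_state);

    /*
     * e.g. state = VFIO_DEVICE_STATE_RUNNING | VFIO_DEVICE_STATE_SAVING
     * when entering pre-copy, or just VFIO_DEVICE_STATE_SAVING for
     * stop-and-copy.
     */
    if (pwrite(vbasedev->fd, &state, sizeof(state), off) != sizeof(state)) {
        return -EIO;
    }

    return 0;
}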

Live migration resume path:
Incoming migration calls .load_setup for each device
(RESTORE_VM, _ACTIVE, _STOPPED)
|
For each device, .load_state is called for that device's section data
|
At the end, .load_cleanup is called for each device and vCPUs are started.
|
(RUNNING, _NONE, _RUNNING)

Note that:
- Post-copy migration is not supported.

v8 -> v9:
- Split the patch set into 2 sets: kernel and QEMU.
- The dirty pages bitmap is queried from the IOMMU container rather than
  per device from the vendor driver. Added 2 ioctls to achieve this.

v7 -> v8:
- Updated comments for KABI
- Added BAR address validation check during PCI device's config space load as