* Denis V. Lunev (d...@virtuozzo.com) wrote:
> On 02/13/2018 05:59 PM, Dr. David Alan Gilbert wrote:
> > * Daniel P. Berrangé (berra...@redhat.com) wrote:
> >> On Tue, Feb 13, 2018 at 03:45:21PM +0100, Kevin Wolf wrote:
> >>> Am 13.02.2018 um 15:36 hat Daniel P. Berrangé geschrieben:
> >>>> On Tue, Feb 13, 2018 at 05:30:02PM +0300, Roman Kagan wrote:
> >>>>> On Tue, Feb 13, 2018 at 11:50:24AM +0100, Kevin Wolf wrote:
> >>>>>> Am 11.01.2018 um 14:04 hat Daniel P. Berrange geschrieben:
> >>>>>>> Then you could just use the regular migrate QMP commands for loading
> >>>>>>> and saving snapshots.
> >>>>>> Yes, you could. I think for a proper implementation you would want to do
> >>>>>> better, though. Live migration provides just a stream, but that's not
> >>>>>> really well suited for snapshots. When a RAM page is dirtied, you just
> >>>>>> want to overwrite the old version of it in a snapshot [...]
> >>>>> This means the point in time where the guest state is snapshotted is not
> >>>>> when the command is issued, but some unpredictable amount of time later.
> >>>>>
> >>>>> I'm not sure this is what a user expects.
> >>>>>
> >>>>> A better approach for the save part appears to be to stop the vcpus,
> >>>>> dump the device state, resume the vcpus, and save the memory contents in
> >>>>> the background, prioritizing the old copies of the pages that change.
> >>>>> No page would have to be saved more than once, so the stream
> >>>>> format would be fine.  For the load part the usual inmigrate should
> >>>>> work.
> >>>> No, that's a policy decision that doesn't matter from the QMP PoV. If the mgmt
> >>>> app wants the snapshot to be wrt to the initial time, it can simply
> >>>> invoke the "stop" QMP command before doing the live migration and
> >>>> "cont" afterwards.
> >>> That would be non-live. I think Roman means a live snapshot that saves
> >>> the state at the beginning of the operation. Basically the difference
> >>> between blockdev-backup (state at the beginning) and blockdev-mirror
> >>> (state at the end), except for a whole VM.
> >> That doesn't seem practical unless you can instantaneously write out
> >> the entire guest RAM to disk without blocking, or can somehow snapshot
> >> the RAM so you can write out a consistent view of the original RAM,
> >> while the guest continues to dirty RAM pages.
> > People have suggested doing something like that with userfault write
> > mode; but the same would also be doable just by write protecting the
> > whole of RAM and then following the faults.
> Nope, userfaultfd does not help :( We have tried it; the functionality is
> not sufficient. Better to have a small extension to KVM that write-protects
> all memory and notifies QEMU with the accessed address.

Can you explain why? I thought the write-protect mode of userfaultfd was
supposed to be able to do that; cc'ing in Andrea
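For illustration, the copy-before-write snapshot idea discussed above (write-protect all guest RAM at snapshot time, save a page's old contents on the first guest write, and drain the remaining clean pages in the background) can be sketched as a small simulation. This is not QEMU code; the `CowSnapshot` class, its methods, and the toy page dict are all hypothetical names invented for the sketch, assuming a fault handler that can intercept the first write to each protected page:

```python
# Hypothetical simulation (not QEMU code) of a snapshot that reflects
# guest state at the moment the command was issued: all pages start
# write-protected, and the OLD copy of a page is saved before the
# guest's write lands, so later dirtying cannot leak into the snapshot.

class CowSnapshot:
    def __init__(self, ram):
        self.ram = ram                # live guest RAM: page number -> bytes
        self.snapshot = {}            # saved pages (contents at snapshot start)
        self.protected = set(ram)     # pages still write-protected

    def guest_write(self, page, data):
        # Fault-handler path: the first write to a protected page saves
        # the old contents before the write is allowed through.
        if page in self.protected:
            self.snapshot[page] = self.ram[page]
            self.protected.discard(page)
        self.ram[page] = data

    def background_save_step(self):
        # Background saver: save one still-clean page and drop its
        # protection; returns False once the snapshot is complete.
        if not self.protected:
            return False
        page = self.protected.pop()
        self.snapshot[page] = self.ram[page]
        return True

# Toy run: one page is dirtied after the snapshot starts, yet the
# completed snapshot still matches the state at snapshot time.
ram_at_start = {0: b'aaaa', 1: b'bbbb', 2: b'cccc'}
snap = CowSnapshot(dict(ram_at_start))
snap.guest_write(1, b'XXXX')          # guest dirties page 1
while snap.background_save_step():
    pass
assert snap.snapshot == ram_at_start  # snapshot == state at start
assert snap.ram[1] == b'XXXX'         # live RAM has the new data
```

Because each page is saved exactly once (either by the fault path or by the background saver), no page appears twice, which is why Roman notes the existing stream format would be fine for this scheme.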


> Den
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
