Re: [PATCH RFC 9/9] KVM: Dirty ring support

2020-03-26 Thread Peter Xu
On Thu, Mar 26, 2020 at 02:14:36PM +, Dr. David Alan Gilbert wrote:
> * Peter Xu (pet...@redhat.com) wrote:
> > On Wed, Mar 25, 2020 at 08:41:44PM +, Dr. David Alan Gilbert wrote:
> > 
> > [...]
> > 
> > > > +enum KVMReaperState {
> > > > +KVM_REAPER_NONE = 0,
> > > > +/* The reaper is sleeping */
> > > > +KVM_REAPER_WAIT,
> > > > +/* The reaper is reaping for dirty pages */
> > > > +KVM_REAPER_REAPING,
> > > > +};
> > > 
> > > That probably needs to be KVMDirtyRingReaperState
> > > given there are many things that could be reaped.
> > 
> > Sure.
> > 
> > > 
> > > > +/*
> > > > + * KVM reaper instance, responsible for collecting the KVM dirty bits
> > > > + * via the dirty ring.
> > > > + */
> > > > +struct KVMDirtyRingReaper {
> > > > +/* The reaper thread */
> > > > +QemuThread reaper_thr;
> > > > +/*
> > > > + * Tells the reaper thread to wake up.  This should be used as a
> > > > + * generic interface to kick the reaper thread, e.g., from a vcpu
> > > > + * thread that gets an exit due to ring full.
> > > > + */
> > > > +EventNotifier reaper_event;
> > > 
> > > I think I'd just use a simple semaphore for this type of thing.
> > 
> > I'm actually uncertain which is cheaper...
> > 
> > In the meantime, I wanted to poll two handles at the same time below
> > (in kvm_dirty_ring_reaper_thread).  I don't know how to do that with
> > a semaphore.  Could it be done?
> 
> If you're OK with EventNotifier, stick with it; it's just that I'm
> used to doing it with a semaphore, e.g., a flag plus the semaphore.
> But that's fine.

Ah yes, flags could work, though we'd probably need to be careful and
use atomic accesses so that a flag update doesn't get lost.

Then I'll keep it, thanks.

> 
> > [...]
> > 
> > > > @@ -412,6 +460,18 @@ int kvm_init_vcpu(CPUState *cpu)
> > > >  (void *)cpu->kvm_run + s->coalesced_mmio * PAGE_SIZE;
> > > >  }
> > > >  
> > > > +if (s->kvm_dirty_gfn_count) {
> > > > +cpu->kvm_dirty_gfns = mmap(NULL, s->kvm_dirty_ring_size,
> > > > +   PROT_READ | PROT_WRITE, MAP_SHARED,
> > > 
> > > Is the MAP_SHARED required?
> > 
> > Yes it's required.  It's the same when we map the per-vcpu kvm_run.
> > 
> > If we use MAP_PRIVATE, it'll behave in a COW fashion: when userspace
> > writes to the dirty gfns for the first time, it'll copy the current
> > dirty ring page in the kernel, and from then on QEMU will never see
> > what the kernel writes to the dirty gfn pages.  MAP_SHARED means
> > userspace and the kernel share exactly the same page(s).
> 
> OK, worth a comment.

Sure.

-- 
Peter Xu




Re: [PATCH RFC 9/9] KVM: Dirty ring support

2020-03-26 Thread Dr. David Alan Gilbert
* Peter Xu (pet...@redhat.com) wrote:
> On Wed, Mar 25, 2020 at 08:41:44PM +, Dr. David Alan Gilbert wrote:
> 
> [...]
> 
> > > +enum KVMReaperState {
> > > +KVM_REAPER_NONE = 0,
> > > +/* The reaper is sleeping */
> > > +KVM_REAPER_WAIT,
> > > +/* The reaper is reaping for dirty pages */
> > > +KVM_REAPER_REAPING,
> > > +};
> > 
> > That probably needs to be KVMDirtyRingReaperState
> > given there are many things that could be reaped.
> 
> Sure.
> 
> > 
> > > +/*
> > > + * KVM reaper instance, responsible for collecting the KVM dirty bits
> > > + * via the dirty ring.
> > > + */
> > > +struct KVMDirtyRingReaper {
> > > +/* The reaper thread */
> > > +QemuThread reaper_thr;
> > > +/*
> > > + * Tells the reaper thread to wake up.  This should be used as a
> > > + * generic interface to kick the reaper thread, e.g., from a vcpu
> > > + * thread that gets an exit due to ring full.
> > > + */
> > > +EventNotifier reaper_event;
> > 
> > I think I'd just use a simple semaphore for this type of thing.
> 
> I'm actually uncertain which is cheaper...
> 
> In the meantime, I wanted to poll two handles at the same time below
> (in kvm_dirty_ring_reaper_thread).  I don't know how to do that with
> a semaphore.  Could it be done?

If you're OK with EventNotifier, stick with it; it's just that I'm
used to doing it with a semaphore, e.g., a flag plus the semaphore.
But that's fine.

> [...]
> 
> > > @@ -412,6 +460,18 @@ int kvm_init_vcpu(CPUState *cpu)
> > >  (void *)cpu->kvm_run + s->coalesced_mmio * PAGE_SIZE;
> > >  }
> > >  
> > > +if (s->kvm_dirty_gfn_count) {
> > > +cpu->kvm_dirty_gfns = mmap(NULL, s->kvm_dirty_ring_size,
> > > +   PROT_READ | PROT_WRITE, MAP_SHARED,
> > 
> > Is the MAP_SHARED required?
> 
> Yes it's required.  It's the same when we map the per-vcpu kvm_run.
> 
> If we use MAP_PRIVATE, it'll behave in a COW fashion: when userspace
> writes to the dirty gfns for the first time, it'll copy the current
> dirty ring page in the kernel, and from then on QEMU will never see
> what the kernel writes to the dirty gfn pages.  MAP_SHARED means
> userspace and the kernel share exactly the same page(s).

OK, worth a comment.

> > 
> > > +   cpu->kvm_fd,
> > > +   PAGE_SIZE * KVM_DIRTY_LOG_PAGE_OFFSET);
> > > +if (cpu->kvm_dirty_gfns == MAP_FAILED) {
> > > +ret = -errno;
> > > +DPRINTF("mmap'ing vcpu dirty gfns failed\n");
> > 
> > Include errno?
> 
> Will do.
> 
> [...]
> 
> > > +static uint64_t kvm_dirty_ring_reap(KVMState *s)
> > > +{
> > > +KVMMemoryListener *kml;
> > > +int ret, i, locked_count = s->nr_as;
> > > +CPUState *cpu;
> > > +uint64_t total = 0;
> > > +
> > > +/*
> > > + * We need to lock all kvm slots for all address spaces here,
> > > + * because:
> > > + *
> > > + * (1) We need to mark pages dirty in the dirty bitmaps of
> > > + * multiple slots, and for tons of pages, so it's better to
> > > + * take the lock here once rather than once per page.  And
> > > + * more importantly,
> > > + *
> > > + * (2) We must _NOT_ publish dirty bits to other threads (e.g.,
> > > + * the migration thread) via the kvm memory slot dirty bitmaps
> > > + * before correctly re-protecting those dirtied pages.  Otherwise
> > > + * we risk data corruption if the page data is read by the other
> > > + * thread before we do the reset below.
> > > + */
> > > +for (i = 0; i < s->nr_as; i++) {
> > > +kml = s->as[i].ml;
> > > +if (!kml) {
> > > +/*
> > > + * This is tricky - we grow s->as[] dynamically now.  Take
> > > + * care of that case.  We also assume the as[] entries fill
> > > + * up one by one starting from zero.  Without this, we race
> > > + * with register_smram_listener.
> > > + *
> > > + * TODO: make all these prettier...
> > > + */
> > > +locked_count = i;
> > > +break;
> > > +}
> > > +kvm_slots_lock(kml);
> > > +}
> > > +
> > > +CPU_FOREACH(cpu) {
> > > +total += kvm_dirty_ring_reap_one(s, cpu);
> > > +}
> > > +
> > > +if (total) {
> > > +ret = kvm_vm_ioctl(s, KVM_RESET_DIRTY_RINGS);
> > > +assert(ret == total);
> > > +}
> > > +
> > > +/* Unlock whatever locks we have taken */
> > > +for (i = 0; i < locked_count; i++) {
> > > +kvm_slots_unlock(s->as[i].ml);
> > > +}
> > > +
> > > +CPU_FOREACH(cpu) {
> > > +if (cpu->kvm_dirty_ring_full) {
> > > +qemu_sem_post(&cpu->kvm_dirty_ring_avail);
> > > +}
> > 
> > Why do you need to wait until here - couldn't you release
> > each vcpu after you've reaped it?
> 
> We probably still need to wait.  

Re: [PATCH RFC 9/9] KVM: Dirty ring support

2020-03-25 Thread Peter Xu
On Wed, Mar 25, 2020 at 08:41:44PM +, Dr. David Alan Gilbert wrote:

[...]

> > +enum KVMReaperState {
> > +KVM_REAPER_NONE = 0,
> > +/* The reaper is sleeping */
> > +KVM_REAPER_WAIT,
> > +/* The reaper is reaping for dirty pages */
> > +KVM_REAPER_REAPING,
> > +};
> 
> That probably needs to be KVMDirtyRingReaperState
> given there are many things that could be reaped.

Sure.

> 
> > +/*
> > + * KVM reaper instance, responsible for collecting the KVM dirty bits
> > + * via the dirty ring.
> > + */
> > +struct KVMDirtyRingReaper {
> > +/* The reaper thread */
> > +QemuThread reaper_thr;
> > +/*
> > + * Tells the reaper thread to wake up.  This should be used as a
> > + * generic interface to kick the reaper thread, e.g., from a vcpu
> > + * thread that gets an exit due to ring full.
> > + */
> > +EventNotifier reaper_event;
> 
> I think I'd just use a simple semaphore for this type of thing.

I'm actually uncertain which is cheaper...

In the meantime, I wanted to poll two handles at the same time below
(in kvm_dirty_ring_reaper_thread).  I don't know how to do that with
a semaphore.  Could it be done?

[...]

> > @@ -412,6 +460,18 @@ int kvm_init_vcpu(CPUState *cpu)
> >  (void *)cpu->kvm_run + s->coalesced_mmio * PAGE_SIZE;
> >  }
> >  
> > +if (s->kvm_dirty_gfn_count) {
> > +cpu->kvm_dirty_gfns = mmap(NULL, s->kvm_dirty_ring_size,
> > +   PROT_READ | PROT_WRITE, MAP_SHARED,
> 
> Is the MAP_SHARED required?

Yes it's required.  It's the same when we map the per-vcpu kvm_run.

If we use MAP_PRIVATE, it'll behave in a COW fashion: when userspace
writes to the dirty gfns for the first time, it'll copy the current
dirty ring page in the kernel, and from then on QEMU will never see
what the kernel writes to the dirty gfn pages.  MAP_SHARED means
userspace and the kernel share exactly the same page(s).

> 
> > +   cpu->kvm_fd,
> > +   PAGE_SIZE * KVM_DIRTY_LOG_PAGE_OFFSET);
> > +if (cpu->kvm_dirty_gfns == MAP_FAILED) {
> > +ret = -errno;
> > +DPRINTF("mmap'ing vcpu dirty gfns failed\n");
> 
> Include errno?

Will do.

[...]

> > +static uint64_t kvm_dirty_ring_reap(KVMState *s)
> > +{
> > +KVMMemoryListener *kml;
> > +int ret, i, locked_count = s->nr_as;
> > +CPUState *cpu;
> > +uint64_t total = 0;
> > +
> > +/*
> > + * We need to lock all kvm slots for all address spaces here,
> > + * because:
> > + *
> > + * (1) We need to mark pages dirty in the dirty bitmaps of
> > + * multiple slots, and for tons of pages, so it's better to
> > + * take the lock here once rather than once per page.  And
> > + * more importantly,
> > + *
> > + * (2) We must _NOT_ publish dirty bits to other threads (e.g.,
> > + * the migration thread) via the kvm memory slot dirty bitmaps
> > + * before correctly re-protecting those dirtied pages.  Otherwise
> > + * we risk data corruption if the page data is read by the other
> > + * thread before we do the reset below.
> > + */
> > +for (i = 0; i < s->nr_as; i++) {
> > +kml = s->as[i].ml;
> > +if (!kml) {
> > +/*
> > + * This is tricky - we grow s->as[] dynamically now.  Take
> > + * care of that case.  We also assume the as[] entries fill
> > + * up one by one starting from zero.  Without this, we race
> > + * with register_smram_listener.
> > + *
> > + * TODO: make all these prettier...
> > + */
> > +locked_count = i;
> > +break;
> > +}
> > +kvm_slots_lock(kml);
> > +}
> > +
> > +CPU_FOREACH(cpu) {
> > +total += kvm_dirty_ring_reap_one(s, cpu);
> > +}
> > +
> > +if (total) {
> > +ret = kvm_vm_ioctl(s, KVM_RESET_DIRTY_RINGS);
> > +assert(ret == total);
> > +}
> > +
> > +/* Unlock whatever locks we have taken */
> > +for (i = 0; i < locked_count; i++) {
> > +kvm_slots_unlock(s->as[i].ml);
> > +}
> > +
> > +CPU_FOREACH(cpu) {
> > +if (cpu->kvm_dirty_ring_full) {
> > +qemu_sem_post(&cpu->kvm_dirty_ring_avail);
> > +}
> 
> Why do you need to wait until here - couldn't you release
> each vcpu after you've reaped it?

We probably still need to wait.  Even after we reap all the dirty
bits, we've only marked the pages as "collected"; the buffers only
become available again after the kernel re-protects those pages (when
the above KVM_RESET_DIRTY_RINGS completes).  Before that, continuing
the vcpu could just make it exit again with the same ring-full event.

[...]

> > +static int kvm_dirty_ring_reaper_init(KVMState *s)
> > +{
> > +struct KVMDirtyRingReaper *r = &s->reaper;
> > +int ret;
> > +
> > +ret = 

Re: [PATCH RFC 9/9] KVM: Dirty ring support

2020-03-25 Thread Dr. David Alan Gilbert
* Peter Xu (pet...@redhat.com) wrote:
> The KVM dirty ring is a new interface for passing dirty bits from the
> kernel to userspace.  Instead of using a bitmap for each memory
> region, the dirty ring contains an array of dirtied GPAs to fetch.
> Each vcpu has one dirty ring bound to it.
> 
> There are a few major changes compared to how the old dirty logging
> interface works:
> 
>   - Granularity of dirty bits
> 
>     The KVM dirty ring interface does not offer memory-region-level
>     granularity for collecting dirty bits (i.e., per KVM memory slot).
>     Instead the dirty bits are collected globally for all the vcpus at
>     once.  The major effect is on the VGA part, because VGA dirty
>     tracking is enabled for as long as the device exists, and it used
>     to work at memory region granularity.  Now that operation will be
>     amplified into a whole-VM sync.  Maybe there's a smarter way to do
>     the same thing in VGA with the new interface, but so far I don't
>     see it affecting much, at least on regular VMs.
> 
>   - Collection of dirty bits
> 
>     The old dirty logging interface collects KVM dirty bits when
>     synchronizing dirty bits.  The KVM dirty ring interface instead
>     uses a standalone thread to do that.  So when another thread
>     (e.g., the migration thread) wants to synchronize the dirty bits,
>     it simply kicks that thread and waits until it flushes all the
>     dirty bits to the ramblock dirty bitmap.
> 
> For more information please refer to the comments in the code.
> 
> Signed-off-by: Peter Xu 
> ---
>  accel/kvm/kvm-all.c| 426 -
>  accel/kvm/trace-events |   7 +
>  include/hw/core/cpu.h  |  10 +
>  3 files changed, 440 insertions(+), 3 deletions(-)
> 
> diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
> index 6d145a8b98..201617bbb7 100644
> --- a/accel/kvm/kvm-all.c
> +++ b/accel/kvm/kvm-all.c
> @@ -15,6 +15,7 @@
>  
>  #include "qemu/osdep.h"
>  #include 
> +#include 
>  
>  #include 
>  
> @@ -75,6 +76,47 @@ struct KVMParkedVcpu {
>  QLIST_ENTRY(KVMParkedVcpu) node;
>  };
>  
> +enum KVMReaperState {
> +KVM_REAPER_NONE = 0,
> +/* The reaper is sleeping */
> +KVM_REAPER_WAIT,
> +/* The reaper is reaping for dirty pages */
> +KVM_REAPER_REAPING,
> +};

That probably needs to be KVMDirtyRingReaperState
given there are many things that could be reaped.

> +/*
> + * KVM reaper instance, responsible for collecting the KVM dirty bits
> + * via the dirty ring.
> + */
> +struct KVMDirtyRingReaper {
> +/* The reaper thread */
> +QemuThread reaper_thr;
> +/*
> + * Tells the reaper thread to wake up.  This should be used as a
> + * generic interface to kick the reaper thread, e.g., from a vcpu
> + * thread that gets an exit due to ring full.
> + */
> +EventNotifier reaper_event;

I think I'd just use a simple semaphore for this type of thing.

> +/*
> + * This should only be used when someone wants to do a synchronous
> + * flush of the dirty ring buffers.  Logically we could achieve this
> + * with reaper_event alone, but that would make things complicated.
> + * This extra event keeps the sync procedure easy and clean.
> + */
> +EventNotifier reaper_flush_event;
> +/*
> + * Used in pair with reaper_flush_event; the sem will be posted to
> + * notify that the previous flush event has been handled by the
> + * reaper thread.
> + */
> +QemuSemaphore reaper_flush_sem;
> +/* Iteration number of the reaper thread */
> +volatile uint64_t reaper_iteration;
> +/* Status of the reaper thread */
> +volatile enum KVMReaperState reaper_state;
> +};
> +
>  struct KVMState
>  {
>  AccelState parent_obj;
> @@ -121,7 +163,6 @@ struct KVMState
>  void *memcrypt_handle;
>  int (*memcrypt_encrypt_data)(void *handle, uint8_t *ptr, uint64_t len);
>  
> -/* For "info mtree -f" to tell if an MR is registered in KVM */
>  int nr_as;
>  struct KVMAs {
>  KVMMemoryListener *ml;
> @@ -129,6 +170,7 @@ struct KVMState
>  } *as;
>  int kvm_dirty_ring_size;
>  int kvm_dirty_gfn_count;/* If nonzero, then kvm dirty ring enabled */
> +struct KVMDirtyRingReaper reaper;
>  };
>  
>  KVMState *kvm_state;
> @@ -348,6 +390,11 @@ int kvm_destroy_vcpu(CPUState *cpu)
>  goto err;
>  }
>  
> +ret = munmap(cpu->kvm_dirty_gfns, s->kvm_dirty_ring_size);
> +if (ret < 0) {
> +goto err;
> +}
> +
>  vcpu = g_malloc0(sizeof(*vcpu));
>  vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
>  vcpu->kvm_fd = cpu->kvm_fd;
> @@ -391,6 +438,7 @@ int kvm_init_vcpu(CPUState *cpu)
>  cpu->kvm_fd = ret;
>  cpu->kvm_state = s;
>  cpu->vcpu_dirty = true;
> +qemu_sem_init(&cpu->kvm_dirty_ring_avail, 0);
>  
>  mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
>  if (mmap_size < 0) {
> @@ -412,6 +460,18 @@ int kvm_init_vcpu(CPUState *cpu)
> 

[PATCH RFC 9/9] KVM: Dirty ring support

2020-02-05 Thread Peter Xu
The KVM dirty ring is a new interface for passing dirty bits from the
kernel to userspace.  Instead of using a bitmap for each memory
region, the dirty ring contains an array of dirtied GPAs to fetch.
Each vcpu has one dirty ring bound to it.

There are a few major changes compared to how the old dirty logging
interface works:

  - Granularity of dirty bits

    The KVM dirty ring interface does not offer memory-region-level
    granularity for collecting dirty bits (i.e., per KVM memory slot).
    Instead the dirty bits are collected globally for all the vcpus at
    once.  The major effect is on the VGA part, because VGA dirty
    tracking is enabled for as long as the device exists, and it used
    to work at memory region granularity.  Now that operation will be
    amplified into a whole-VM sync.  Maybe there's a smarter way to do
    the same thing in VGA with the new interface, but so far I don't
    see it affecting much, at least on regular VMs.

  - Collection of dirty bits

    The old dirty logging interface collects KVM dirty bits when
    synchronizing dirty bits.  The KVM dirty ring interface instead
    uses a standalone thread to do that.  So when another thread
    (e.g., the migration thread) wants to synchronize the dirty bits,
    it simply kicks that thread and waits until it flushes all the
    dirty bits to the ramblock dirty bitmap.

For more information please refer to the comments in the code.

Signed-off-by: Peter Xu 
---
 accel/kvm/kvm-all.c| 426 -
 accel/kvm/trace-events |   7 +
 include/hw/core/cpu.h  |  10 +
 3 files changed, 440 insertions(+), 3 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 6d145a8b98..201617bbb7 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -15,6 +15,7 @@
 
 #include "qemu/osdep.h"
 #include 
+#include 
 
 #include 
 
@@ -75,6 +76,47 @@ struct KVMParkedVcpu {
 QLIST_ENTRY(KVMParkedVcpu) node;
 };
 
+enum KVMReaperState {
+KVM_REAPER_NONE = 0,
+/* The reaper is sleeping */
+KVM_REAPER_WAIT,
+/* The reaper is reaping for dirty pages */
+KVM_REAPER_REAPING,
+};
+
+/*
+ * KVM reaper instance, responsible for collecting the KVM dirty bits
+ * via the dirty ring.
+ */
+struct KVMDirtyRingReaper {
+/* The reaper thread */
+QemuThread reaper_thr;
+/*
+ * Tells the reaper thread to wake up.  This should be used as a
+ * generic interface to kick the reaper thread, e.g., from a vcpu
+ * thread that gets an exit due to ring full.
+ */
+EventNotifier reaper_event;
+/*
+ * This should only be used when someone wants to do a synchronous
+ * flush of the dirty ring buffers.  Logically we could achieve this
+ * with reaper_event alone, but that would make things complicated.
+ * This extra event keeps the sync procedure easy and clean.
+ */
+EventNotifier reaper_flush_event;
+/*
+ * Used in pair with reaper_flush_event; the sem will be posted to
+ * notify that the previous flush event has been handled by the
+ * reaper thread.
+ */
+QemuSemaphore reaper_flush_sem;
+/* Iteration number of the reaper thread */
+volatile uint64_t reaper_iteration;
+/* Status of the reaper thread */
+volatile enum KVMReaperState reaper_state;
+};
+
 struct KVMState
 {
 AccelState parent_obj;
@@ -121,7 +163,6 @@ struct KVMState
 void *memcrypt_handle;
 int (*memcrypt_encrypt_data)(void *handle, uint8_t *ptr, uint64_t len);
 
-/* For "info mtree -f" to tell if an MR is registered in KVM */
 int nr_as;
 struct KVMAs {
 KVMMemoryListener *ml;
@@ -129,6 +170,7 @@ struct KVMState
 } *as;
 int kvm_dirty_ring_size;
 int kvm_dirty_gfn_count;/* If nonzero, then kvm dirty ring enabled */
+struct KVMDirtyRingReaper reaper;
 };
 
 KVMState *kvm_state;
@@ -348,6 +390,11 @@ int kvm_destroy_vcpu(CPUState *cpu)
 goto err;
 }
 
+ret = munmap(cpu->kvm_dirty_gfns, s->kvm_dirty_ring_size);
+if (ret < 0) {
+goto err;
+}
+
 vcpu = g_malloc0(sizeof(*vcpu));
 vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
 vcpu->kvm_fd = cpu->kvm_fd;
@@ -391,6 +438,7 @@ int kvm_init_vcpu(CPUState *cpu)
 cpu->kvm_fd = ret;
 cpu->kvm_state = s;
 cpu->vcpu_dirty = true;
+qemu_sem_init(&cpu->kvm_dirty_ring_avail, 0);
 
 mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
 if (mmap_size < 0) {
@@ -412,6 +460,18 @@ int kvm_init_vcpu(CPUState *cpu)
 (void *)cpu->kvm_run + s->coalesced_mmio * PAGE_SIZE;
 }
 
+if (s->kvm_dirty_gfn_count) {
+cpu->kvm_dirty_gfns = mmap(NULL, s->kvm_dirty_ring_size,
+   PROT_READ | PROT_WRITE, MAP_SHARED,
+   cpu->kvm_fd,
+   PAGE_SIZE * KVM_DIRTY_LOG_PAGE_OFFSET);
+if (cpu->kvm_dirty_gfns == MAP_FAILED) {
+ret = -errno;
+DPRINTF("mmap'ing vcpu