On Thu, 12 Apr 2012 19:56:45 -0300
Marcelo Tosatti <[email protected]> wrote:
> Other than potential performance improvement, the worst case scenario
> of holding mmu_lock for hundreds of milliseconds at the beginning
> of migration of huge guests must be fixed.
Write protection in kvm_arch_commit_memory_region() can be treated
similarly.
I am now checking that part together with my
  KVM: Avoid zapping unrelated shadows in __kvm_set_memory_region()
because if we do rmap-based write protection when we enable dirty logging,
the sp->slot_bitmap needed by kvm_mmu_slot_remove_write_access() can be
removed.
People who want to support more devices/slots will be happy, no?
But then, I need to find another way to eliminate shadow flushes.
Maybe I should leave sp->slot_bitmap removal to those who really want to
do that.
> > @@ -3121,15 +3121,23 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
> > if (!dirty_bitmap[i])
> > continue;
> >
> > - is_dirty = true;
> > -
> > mask = xchg(&dirty_bitmap[i], 0);
> > dirty_bitmap_buffer[i] = mask;
> >
> > offset = i * BITS_PER_LONG;
> > - kvm_mmu_write_protect_pt_masked(kvm, memslot, offset, mask);
> > + nr_protected += kvm_mmu_write_protect_pt_masked(kvm, memslot,
> > + offset, mask);
> > + if (nr_protected > 2048) {
>
> Can you expand on the reasoning behind this?
Sure.
Thanks,
Takuya