On Thu, 5 Jan 2012 09:48:37 -0200
Marcelo Tosatti <[email protected]> wrote:

> On Wed, Jan 04, 2012 at 03:06:43PM +0900, Takuya Yoshikawa wrote:
> > It is possible for the __set_bit() in mark_page_dirty() to be called
> > simultaneously on the same region of memory, which may result in only
> > one bit being set, because some callers do not take mmu_lock before
> > calling mark_page_dirty().
> > 
> > This problem is hard to reproduce because, when we reach
> > mark_page_dirty() from e.g. tdp_page_fault(), mmu_lock is already held
> > during __direct_map(); making kvm-unit-tests' dirty log API test write
> > to two pages concurrently was not useful for this reason.
> > 
> > So we confirmed that there can actually be a race condition by using
> > spin_is_locked() to check whether some callers really reach there
> > without holding mmu_lock: probably they came from
> > kvm_write_guest_page().
> > 
> > To fix this race, this patch changes the bit operation to the atomic
> > version.  Note that nr_dirty_pages also suffers from the race, but we
> > do not need exactly correct numbers for now.
> > 
> > Signed-off-by: Takuya Yoshikawa <[email protected]>
> > ---
> >  virt/kvm/kvm_main.c |    2 +-
> >  1 files changed, 1 insertions(+), 1 deletions(-)
> > 
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 7287bf5..a91f980 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -1543,7 +1543,7 @@ void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot,
> >     if (memslot && memslot->dirty_bitmap) {
> >             unsigned long rel_gfn = gfn - memslot->base_gfn;
> >  
> > -           if (!__test_and_set_bit_le(rel_gfn, memslot->dirty_bitmap))
> > +           if (!test_and_set_bit_le(rel_gfn, memslot->dirty_bitmap))
> >                     memslot->nr_dirty_pages++;
> >     }
> >  }
> 
> Looks good to me.
> 

I was planning to avoid the SRCU-based bitmap flip by clearing each bit
right before rmap write protection:

for each gfn_offset
        __clear_bit();
        nr_dirty_pages--;
        rmap_write_protect();

This way, I could eliminate memslot allocations and SRCU synchronizations
and make GET_DIRTY_LOG faster.

Furthermore, GET_DIRTY_LOG could be called for selected pages without
affecting others.

Now I need to re-think how to achieve the same thing, and measure the effect
again.

        Takuya
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
