On Tue, 05 Jun 2012 10:40:15 +0200
Jan Kiszka <[email protected]> wrote:
> > diff -ur -i kvm-kmod-3.4/x86/kvm_main.c kvm-kmod-3.4-fix/x86/kvm_main.c
> > --- kvm-kmod-3.4/x86/kvm_main.c 2012-05-21 23:43:02.000000000 +0800
> > +++ kvm-kmod-3.4-fix/x86/kvm_main.c 2012-06-05 12:19:37.780136969 +0800
> > @@ -1525,8 +1525,8 @@
> > if (memslot && memslot->dirty_bitmap) {
> > unsigned long rel_gfn = gfn - memslot->base_gfn;
> >
> > - if (!test_and_set_bit_le(rel_gfn, memslot->dirty_bitmap))
> > - memslot->nr_dirty_pages++;
> > + __set_bit_le(rel_gfn, memslot->dirty_bitmap);
> > + memslot->nr_dirty_pages++;
> > }
> > }
> >
> >
> > I think the root cause may be that clearing dirty_bitmap
> > is not synchronized with resetting nr_dirty_pages to 0.
memslot->nr_dirty_pages should become 0 when dirty_bitmap is updated
by the SRCU-update.
Actually this number was used just for optimizing get_dirty's write
protection and did not need to be exact: knowing whether it was non-zero
was enough.
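To illustrate (a simplified sketch from memory with names shortened --
please check the real kvm_vm_ioctl_get_dirty_log() in arch/x86/kvm/x86.c
rather than trusting this verbatim):

        nr_dirty_pages = memslot->nr_dirty_pages;       /* read once */

        if (nr_dirty_pages) {                           /* only "!0" matters */
                /*
                 * slots: a copy of kvm->memslots in which this slot's
                 * dirty_bitmap points at the clean second buffer and
                 * nr_dirty_pages is set back to 0.
                 */
                rcu_assign_pointer(kvm->memslots, slots);
                synchronize_srcu_expedited(&kvm->srcu);

                write_protect_slot(kvm, memslot, dirty_bitmap,
                                   nr_dirty_pages);
        }

So the bitmap and the counter are switched together by the SRCU-update,
and the stale counter value is only used as a hint afterwards.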
> > but I don't understand why it works fine on a newer kernel.
>
> Takuya, any idea why this change could make a difference when running
> 3.4 kvm code on an older host kernel?
Assuming that the new little-endian bitops functions are properly defined,
the only dirty logging problem I can think of is an rmap_write_protect()
race bug which was recently fixed in the 3.4 stable tree.
With the change above, memslot->nr_dirty_pages might be incremented more
than necessary, which would make get_dirty select the old write protection
method -- kvm_mmu_slot_remove_write_access() -- and that method was not
affected by the bug.
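For reference, the heuristic I mean is in write_protect_slot() in
arch/x86/kvm/x86.c; sketched from memory, not quoted verbatim:

        spin_lock(&kvm->mmu_lock);
        if (nr_dirty_pages < kvm->arch.n_used_mmu_pages) {
                /* few dirty pages: write protect each of them via rmap */
                unsigned long gfn_offset;

                for_each_set_bit(gfn_offset, dirty_bitmap, memslot->npages)
                        kvm_mmu_rmap_write_protect(kvm,
                                memslot->base_gfn + gfn_offset, memslot);
                kvm_flush_remote_tlbs(kvm);
        } else {
                /* many dirty pages: the old method, drop write access
                 * from all of the slot's shadow pages at once */
                kvm_mmu_slot_remove_write_access(kvm, memslot->id);
        }
        spin_unlock(&kvm->mmu_lock);

An inflated nr_dirty_pages only pushes get_dirty toward the else branch
more often; it cannot make a dirty page be missed.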
But I don't think this was the cause of the problem.
Thanks,
Takuya