On Fri, Apr 27, 2012 at 02:07:57PM +0800, Xiao Guangrong wrote:
> On 04/27/2012 06:00 AM, Marcelo Tosatti wrote:
>
>
> >> static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
> >> {
> >> - /* Decrease the counter after walking shadow page table finished */
> >> - smp_mb__before_atomic_dec();
> >> - atomic_dec(&vcpu->kvm->arch.reader_counter);
> >> - rcu_read_unlock();
> >> + /*
> >> + * Make our reads and writes to shadow page tables globally visible
> >> + * before leaving READING_SHADOW_PAGE_TABLES mode.
> >> + */
> >
> > This comment is misleading. Writes to shadow page tables must be
> > performed with locked instructions outside the mmu_lock.
> >
>
>
> You mean that the write itself should guarantee correct memory ordering?
Yes.
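
That is, the lockless write paths update the spte with cmpxchg, a
locked instruction that acts as a full memory barrier by itself. A
minimal sketch of such an update (try_lockless_spte_update is an
illustrative name, not a function from this series):

static bool try_lockless_spte_update(u64 *sptep, u64 old_spte,
				     u64 new_spte)
{
	/*
	 * cmpxchg64 is a locked instruction: it publishes new_spte
	 * atomically and fully orders the update, so the write does
	 * not depend on walk_shadow_page_lockless_end() for its
	 * memory ordering.
	 */
	return cmpxchg64(sptep, old_spte, new_spte) == old_spte;
}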
> >> + smp_mb();
> >> + vcpu->mode = OUTSIDE_GUEST_MODE;
> >
> > Don't you want
> >
> > vcpu->mode = OUTSIDE_GUEST_MODE;
> > smp_mb();
> >
>
>
> I think that is unsafe: it is a problem if the spte read / spte update
> is reordered after vcpu->mode = OUTSIDE_GUEST_MODE, like below:
>
> VCPU 0                                  VCPU 1
>                                         commit_zapped_page:
> /*
>  * The store of vcpu->mode is reordered
>  * ahead of the spte read.
>  */
> vcpu->mode = OUTSIDE_GUEST_MODE;
>
>                                         sees VCPU 0 out of guest mode,
>                                         so the IPI is not sent and the
>                                         sp is freed immediately.
>
> read spte;
> OOPS!!!
>
> (The read is invalid since the spte has been freed.)
>
> smp_mb();
Right. In that case a compiler barrier is sufficient (stores are not
reordered with earlier loads on x86).
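
With that, the end of the lockless walk would read roughly as below
(a sketch following the hunk above; per the note, the smp_mb() could
be relaxed to barrier() on x86 at the cost of portability):

static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
{
	/*
	 * Make sure the spte reads above complete before the store to
	 * vcpu->mode becomes visible; otherwise a zapper that sees
	 * this vcpu outside READING_SHADOW_PAGE_TABLES mode may skip
	 * the IPI and free the shadow pages while they are still
	 * being read.
	 */
	smp_mb();
	vcpu->mode = OUTSIDE_GUEST_MODE;
}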