On Tue, 3 Jul 2012 04:55:35 -0300, Marcelo Tosatti <mtosa...@redhat.com> wrote:
> > 
> >    if (!zero_mask)
> >        goto again;
> 
> Can you please measure increased vmentry/vmexit overhead? x86/vmexit.c 
> of git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git should 
> help.
>
Sure, will get back with the results.

> > +   /* 
> > +    * Guest might have seen us offline and would have set
> > +    * flush_on_enter. 
> > +    */
> > +   kvm_read_guest_cached(vcpu->kvm, ghc, vs, 2*sizeof(__u32));
> > +   if (vs->flush_on_enter) 
> > +           kvm_x86_ops->tlb_flush(vcpu);
> 
> 
> So flush_tlb_page which was an invlpg now flushes the entire TLB. Did
> you take that into account?
> 
When the vcpu is sleeping/pre-empted out, multiple flush_tlb requests
could have accumulated. So by the time we get here, flushing the entire
TLB clears all of them in one go.

One other approach would be to queue the addresses, but that brings us
to the question: how many requests do we queue? It would also require
more synchronization between guest and host for updating the shared
area where these addresses live.
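
Just to illustrate (this is not part of the patch; the queue length,
the extra fields and the helper below are made-up names, and
kvm_mmu_invlpg is only used as a stand-in for a per-address
invalidation):

#define PV_FLUSH_QUEUE_LEN 8			/* hypothetical bound */

struct pv_vcpu_state {
	__u32 state;
	__u32 flush_on_enter;
	__u32 nr_flush_addrs;			/* > LEN means "overflowed" */
	__u64 flush_addrs[PV_FLUSH_QUEUE_LEN];	/* gvas queued while vcpu was out */
};

/* host side, on vmentry */
static void pv_do_queued_flush(struct kvm_vcpu *vcpu, struct pv_vcpu_state *vs)
{
	int i;

	if (vs->nr_flush_addrs > PV_FLUSH_QUEUE_LEN) {
		/* too many requests piled up, fall back to a full flush */
		kvm_x86_ops->tlb_flush(vcpu);
	} else {
		for (i = 0; i < vs->nr_flush_addrs; i++)
			kvm_mmu_invlpg(vcpu, vs->flush_addrs[i]);
	}
	vs->nr_flush_addrs = 0;
}

Even with something like the above, the guest (appending addresses) and
the host (consuming them on vmentry) would have to agree on who owns
nr_flush_addrs at any given point, which is exactly the extra
synchronization mentioned above.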

> > +again:
> > +           for_each_cpu(cpu, to_cpumask(f->flush_cpumask)) {
> > +                   v_state = &per_cpu(vcpu_state, cpu);
> > +
> > +                   if (!v_state->state) {
> 
> Should use ACCESS_ONCE to make sure the value is not register cached.
> 
> > +                           v_state->flush_on_enter = 1;
> > +                           smp_mb();
> > +                           if (!v_state->state)
> 
> And here.
> 
Sure, will use ACCESS_ONCE for both reads in my next version.
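
Something along these lines, I suppose (untested, just to show where
the ACCESS_ONCE goes):

		if (!ACCESS_ONCE(v_state->state)) {
			v_state->flush_on_enter = 1;
			smp_mb();
			if (!ACCESS_ONCE(v_state->state))
				cpumask_clear_cpu(cpu,
						  to_cpumask(f->flush_cpumask));
		}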

> > +                                   cpumask_clear_cpu(cpu, 
> > to_cpumask(f->flush_cpumask));
> > +                   }
> > +           }
> > +
> > +           if (cpumask_empty(to_cpumask(f->flush_cpumask)))
> > +                   goto out;
> > +
> > +           apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
> > +                               INVALIDATE_TLB_VECTOR_START + sender);
> > +
> > +           loop = 1000;
> > +           while (!cpumask_empty(to_cpumask(f->flush_cpumask)) && --loop)
> > +                   cpu_relax();
> > +
> > +           if (!cpumask_empty(to_cpumask(f->flush_cpumask)))
> > +                   goto again;
> 
> Is this necessary in addition to the in-guest-mode/out-guest-mode
> detection? If so, why?
> 
The "case 3" where we initially saw the vcpu was running, and a flush
ipi is send to the vcpu. During this time the vcpu might be pre-empted,
so we come out of the loop=1000 with !empty flushmask. We then re-verify
the flushmask against the current running vcpu and make sure that the
vcpu that was pre-empted is un-marked and we can proceed out of the
kvm_flush_tlb_others_ipi without waiting for sleeping/pre-empted vcpus.
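
To spell out the ordering (illustrative only, not from the patch):

/*
 * sender (kvm_flush_tlb_others_ipi)        target vcpu
 * ---------------------------------        -----------
 * sees v_state->state set (in guest)
 * sends flush IPI
 *                                          vcpu is pre-empted,
 *                                          state is cleared
 * waits loop=1000, mask stays non-empty
 * goto again:
 *   re-reads v_state->state == 0
 *   sets flush_on_enter = 1
 *   clears cpu from flush_cpumask
 * returns without waiting                  next vmentry sees
 *                                          flush_on_enter and flushes
 */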

Regards
Nikunj
