On Wednesday 20 January 2010 17:47:48 Avi Kivity wrote:
> On 01/20/2010 11:34 AM, Sheng Yang wrote:
> >> I think we can even do this from the I/O thread, without stopping a
> >> vcpu, since the coalesced mmio page is not tied to a vcpu but is a vm
> >> property.
> >
> > This one sounds better. But I've taken a look at the current
> > userspace code:
> >> #if defined(KVM_CAP_COALESCED_MMIO)
> >>     if (kvm_state->coalesced_mmio) {
> >>         struct kvm_coalesced_mmio_ring *ring =
> >>             (void *)run + kvm_state->coalesced_mmio * PAGE_SIZE;
> >>         while (ring->first != ring->last) {
> >>             cpu_physical_memory_rw(ring->coalesced_mmio[ring->first].phys_addr,
> >>                                    &ring->coalesced_mmio[ring->first].data[0],
> >>                                    ring->coalesced_mmio[ring->first].len, 1);
> >>             smp_wmb();
> >>             ring->first = (ring->first + 1) % KVM_COALESCED_MMIO_MAX;
> >>         }
> >>     }
> >> #endif
> >
> > No protection for ring->first and ring->last? It seems this could
> > write the same element pointed to by ring->first twice, then skip
> > the element at (ring->first + 1)...
>
> ring->first is owned by userspace, while ring->last is owned by the
> kernel, so no protection is necessary except for the memory barrier.
> Can you elaborate on how it would fail?
>
This piece of code can only be executed by one thread/vcpu at a time, right? I
think different vcpus accessing/modifying ring->first at the same time would
cause problems.
But with a separate iothread that handles all userspace accesses, it should
be fine.
--
regards
Yang, Sheng