On Mon, 2011-07-18 at 14:43 +0300, Avi Kivity wrote:
> On 07/18/2011 01:15 PM, Sasha Levin wrote:
> > On Mon, 2011-07-18 at 12:50 +0300, Avi Kivity wrote:
> > > On 07/18/2011 12:29 PM, Sasha Levin wrote:
> > > > > Hmm. This means we take the lock for every I/O, whether it hits
> > > > > coalesced mmio or not.
> > > > >
> > > > > We need to do the range check before taking the lock and the
> > > > > space check after taking the lock.
> > > > >
> > > >
> > > > I'll fix that.
> > > >
> > > > Shouldn't the range check be also locked somehow? Currently it is
> > > > possible that a coalesced region was removed while we are checking the
> > > > ranges, and we won't issue an mmio exit as the host expects.
> > >
> > > It's "locked" using rcu.
> > >
> >
> > Where is that happening?
> >
> > All the coalesced zones are stored under the coalesced "device" in a
> > simple array. When adding and removing zones, kvm->slots_lock is taken -
> > I don't see anything which prevents a range check during zone removal
> > unless slots_lock prevents IO.
>
> Range check during slot removal is legal. While you are removing a
> slot, a concurrent write may hit or miss the slot; it doesn't matter.
>
> Userspace should flush the coalesced mmio buffer after removal to ensure
> there are no pending writes.
>
But the write may hit a non-existent slot.
Something like this:
Thread 1        | Thread 2
----------------+----------------
Check range     |
Found slot      |
                | Remove slot
                | Flush buffer
Get spinlock    |
Write to buffer |
--
Sasha.
--