On Mon, 30 Jan 2012, Avi Kivity wrote:

> On 01/30/2012 05:07 PM, Avi Kivity wrote:
> > > +Parameters: None
> > > +Returns: 0 on success, -1 on error
> > > +
> > > +This signals to the host kernel that the specified guest is being paused by
> > > +userspace.  The host will set a flag in the pvclock structure that is checked
> > > +from the soft lockup watchdog.  This ioctl can be called during pause or
> > > +unpause.
> > > +
> > >  5. The kvm_run structure
> > >  
> > >  
> > > +/*
> > > + * kvm_set_guest_paused() indicates to the guest kernel that it has been
> > > + * stopped by the hypervisor.  This function will be called from the host only.
> > > + */
> > > +static int kvm_set_guest_paused(struct kvm *kvm)
> > > +{
> > > + struct kvm_vcpu *vcpu;
> > > + struct pvclock_vcpu_time_info *src;
> > > + int i;
> > > +
> > > + kvm_for_each_vcpu(i, vcpu, kvm) {
> > > +         if (!vcpu->arch.time_page)
> > > +                 continue;
> > > +         src = &vcpu->arch.hv_clock;
> > > +         src->flags |= PVCLOCK_GUEST_STOPPED;
> >
> > This looks racy.  The vcpu can remove its kvmclock concurrently with
> > this access, and src will be NULL.
> >
> > Can you point me to the discussion that moved this to be a vm ioctl?  In
> > general vm ioctls that do things for all vcpus are racy, like here. 
> > You're accessing variables that are protected by the vcpu mutex, and not
> > taking the mutex (nor can you, since it is held while the guest is
> > running, unlike most kernel mutexes).
> >
> 
> Note, the standard way to fix this race is to
> kvm_make_request(KVM_REQ_KVMCLOCK_STOP) and kvm_vcpu_kick(vcpu), and do
> the update in vcpu_enter_guest().  But I think this is needless
> complexity and want to understand what motivated the move to a vm ioctl.
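For reference, the request-and-kick pattern Avi describes might be sketched roughly as follows. This is only an illustration, not the actual patch: KVM_REQ_KVMCLOCK_STOP is a hypothetical request bit, and the exact plumbing in vcpu_enter_guest() is assumed.

```c
/* Sketch only: KVM_REQ_KVMCLOCK_STOP is a hypothetical request bit. */
static int kvm_set_guest_paused(struct kvm *kvm)
{
	struct kvm_vcpu *vcpu;
	int i;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		/* Defer the flag update to the vcpu thread itself,
		 * which serializes against the vcpu mutex. */
		kvm_make_request(KVM_REQ_KVMCLOCK_STOP, vcpu);
		kvm_vcpu_kick(vcpu);
	}
	return 0;
}

/* Then, in vcpu_enter_guest(), with the vcpu mutex held: */
	if (kvm_check_request(KVM_REQ_KVMCLOCK_STOP, vcpu)) {
		if (vcpu->arch.time_page)
			vcpu->arch.hv_clock.flags |= PVCLOCK_GUEST_STOPPED;
	}
```

Because the flag is set from the vcpu's own thread rather than from the vm ioctl path, it cannot race with the vcpu tearing down its kvmclock page concurrently.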
> 

I will hold off on fixing this race until we settle the vm or vcpu ioctl
question.

Eric
