> From: Avi Kivity [mailto:[email protected]]
> Sent: Tuesday, May 24, 2011 7:06 PM
> 
> On 05/24/2011 11:20 AM, Tian, Kevin wrote:
> > >
> > >  The (vmx->cpu.cpu != cpu) case in __loaded_vmcs_clear should ideally
> > >  never happen: In the cpu offline path, we only call it for the
> > >  loaded_vmcss which we know for sure are loaded on the current cpu. In
> > >  the cpu migration path, loaded_vmcs_clear runs __loaded_vmcs_clear on
> > >  the right CPU, which ensures that equality.
> > >
> > >  But, there can be a race condition (this was actually explained to me
> > >  a while back by Avi - I have never seen it happen in practice): Imagine
> > >  that cpu migration calls loaded_vmcs_clear, which tells the old cpu
> > >  (via IPI) to VMCLEAR this vmcs. But before that old CPU gets a chance
> > >  to act on that IPI, a decision is made to take it offline, and all
> > >  loaded_vmcss loaded on it (including the one in question) are cleared.
> > >  When that CPU finally acts on the IPI, it notices that
> > >  vmx->cpu.cpu == -1, i.e., != cpu, so it doesn't need to do anything
> > >  (in the new version of the code, I made this more explicit, by
> > >  returning immediately in this case).
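
For readers following along, here is a rough sketch of the clearing path being
described. It is only a sketch: the function and field names follow the quoted
discussion and the nested-VMX patch under review, and may not match the final
code exactly.

static void __loaded_vmcs_clear(void *arg)
{
	struct loaded_vmcs *loaded_vmcs = arg;
	int cpu = raw_smp_processor_id();

	/* Race described above: cpu offline already cleared this vmcs and
	 * set loaded_vmcs->cpu to -1, so there is nothing left to do. */
	if (loaded_vmcs->cpu != cpu)
		return;

	vmcs_clear(loaded_vmcs->vmcs);			/* VMCLEAR */
	list_del(&loaded_vmcs->loaded_vmcss_on_cpu_link);
	loaded_vmcs->cpu = -1;
	loaded_vmcs->launched = 0;
}

/* Migration path: ask the old cpu, via IPI, to VMCLEAR the vmcs. */
static void loaded_vmcs_clear(struct loaded_vmcs *loaded_vmcs)
{
	if (loaded_vmcs->cpu != -1)
		smp_call_function_single(loaded_vmcs->cpu,
					 __loaded_vmcs_clear, loaded_vmcs, 1);
}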
> >
> > the reverse also holds true. Right between the point where cpu_offline
> > hits a loaded_vmcs and the point where it calls __loaded_vmcs_clear, it's
> > possible that the vcpu is migrated to another cpu, and it's likely that
> > the migration path (vmx_vcpu_load) has invoked loaded_vmcs_clear but has
> > not yet deleted this vmcs from the old cpu's linked list. This way, when
> > __loaded_vmcs_clear is later invoked on the offlined cpu, there's still a
> > chance to observe cpu as -1.
> 
> I don't think it's possible.  Both calls are done with interrupts disabled.

If that's the case, then there's another potential issue: a deadlock may happen
when calling smp_call_function_single with interrupts disabled.
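
To make the concern concrete, a rough illustration follows. The function name
clear_on_remote_cpu() is hypothetical and only shows the problematic call
pattern; the WARN_ON in kernel/smp.c is quoted from memory and may differ
slightly between kernel versions.

/*
 * Hypothetical caller, shown only to illustrate the problematic pattern:
 * smp_call_function_single(..., wait=1) spins until the target CPU has run
 * the function.  If the caller has interrupts disabled while the target CPU
 * is itself spinning on an IPI aimed back at the caller, neither CPU can
 * service the other's IPI and both spin forever.
 */
static void clear_on_remote_cpu(struct loaded_vmcs *loaded_vmcs)
{
	local_irq_disable();

	/* DON'T: possible deadlock; kernel/smp.c also warns about this,
	 * roughly:
	 *   WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
	 *                && !oops_in_progress);
	 */
	smp_call_function_single(loaded_vmcs->cpu, __loaded_vmcs_clear,
				 loaded_vmcs, 1 /* wait */);

	local_irq_enable();
}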

Thanks
Kevin
