On Wed, Oct 31, 2018 at 11:07:22AM -0400, Waiman Long wrote:
> On 10/31/2018 10:10 AM, Peter Zijlstra wrote:
> > On Wed, Oct 31, 2018 at 09:54:17AM +0800, Yi Sun wrote:
> >> On 18-10-23 17:33:28, Yi Sun wrote:
> >>> On 18-10-23 10:51:27, Peter Zijlstra wrote:
> >>>> Can you try and explain why vcpu_is_preempted() doesn't work for you?
> >>> I thought HvSpinWaitInfo is used to notify the hypervisor of the
> >>> spin count, which is different from vcpu_is_preempted. So I did not
> >>> consider vcpu_is_preempted.
> >>>
> >>> But HvSpinWaitInfo is quite a simple function and could be combined
> >>> with vcpu_is_preempted. So I think it is OK to use vcpu_is_preempted
> >>> to keep the code clean. I will have a try.
> >> After checking the code, there is one issue with calling
> >> vcpu_is_preempted. There are two spin loops in qspinlock_paravirt.h.
> >> The loop in 'pv_wait_node' calls vcpu_is_preempted, but the loop in
> >> 'pv_wait_head_or_lock' does not; nor does it call any other op of
> >> 'pv_lock_ops' inside the loop. So I am afraid we have to add one
> >> more op to 'pv_lock_ops' to do this.
> > Why? Would not something like the below cure that? Waiman, can you
> > have a look at this; I always forget how that paravirt crud works.
>
> There are two major reasons why the vcpu_is_preempted() test isn't
> done at pv_wait_head_or_lock(). First of all, we may not have a valid
> prev pointer at all if the waiter is the first one to enter the queue
> while the lock is busy. Secondly, because of lock stealing, the cpu
> number pointed to by a valid prev pointer may not be the actual cpu
> that is currently holding the lock. Another minor reason is that we
> want to minimize the lock transfer latency and so don't want to sleep
> too early while waiting at the queue head.
So Yi, are you actually seeing a problem? If so, can you give details?
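
For readers following the thread: below is a minimal, self-contained C
model of the two waiting loops described above. It is an illustrative
sketch only; all names are simplified stand-ins for the internals of
kernel/locking/qspinlock_paravirt.h, and it is not the patch referenced
in the quoted mail (which is not included in this excerpt).

/*
 * Sketch with hypothetical names: a toy model of the two waiting
 * loops in qspinlock_paravirt.h, not the referenced patch.
 */
#include <stdatomic.h>
#include <stdbool.h>

#define SPIN_THRESHOLD (1 << 15)

struct node {
	_Atomic int locked;	/* set by our predecessor on handoff */
	int cpu;		/* cpu the predecessor last ran on */
};

/* Stubs standing in for the paravirt/hypervisor interface. */
static bool vcpu_is_preempted(int cpu) { (void)cpu; return false; }
static void pv_wait(void) { /* hypercall: block this vcpu until kicked */ }

/*
 * Mid-queue waiter (cf. pv_wait_node): it knows which cpu it waits
 * behind, so it can ask the hypervisor whether that vcpu was preempted
 * and stop spinning early.
 */
static void wait_node_model(struct node *self, struct node *prev)
{
	for (;;) {
		for (int loop = SPIN_THRESHOLD; loop; loop--) {
			if (atomic_load(&self->locked))
				return;
			if (vcpu_is_preempted(prev->cpu))
				break;	/* predecessor is off-cpu */
		}
		pv_wait();	/* sleep instead of burning cycles */
	}
}

/*
 * Queue-head waiter (cf. pv_wait_head_or_lock): there is no reliable
 * prev pointer here. The waiter may have been first into the queue,
 * and with lock stealing a stale prev->cpu need not be the current
 * lock holder, so the loop spins on the lock word alone.
 */
static void wait_head_model(_Atomic int *lock)
{
	for (;;) {
		for (int loop = SPIN_THRESHOLD; loop; loop--) {
			if (!atomic_exchange(lock, 1))
				return;	/* lock acquired */
		}
		pv_wait();
	}
}

The asymmetry between the two loops is exactly what Waiman describes
above: only the mid-queue waiter has a meaningful prev->cpu to
interrogate, and the queue head also wants to keep lock handoff latency
low rather than sleep early.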

