On 07/04/15 03:55, Waiman Long wrote:
> This patch adds the necessary Xen specific code to allow Xen to
> support the CPU halting and kicking operations needed by the queue
> spinlock PV code.

This basically looks the same as the version I wrote, except I think you
broke it.

> +static void xen_qlock_wait(u8 *byte, u8 val)
> +{
> +     int irq = __this_cpu_read(lock_kicker_irq);
> +
> +     /* If kicker interrupts not initialized yet, just spin */
> +     if (irq == -1)
> +             return;
> +
> +     /* clear pending */
> +     xen_clear_irq_pending(irq);
> +
> +     /*
> +      * We check the byte value after clearing pending IRQ to make sure
> +      * that we won't miss a wakeup event because of the clearing.

My version had a barrier() here to ensure this ordering.  The
documentation of READ_ONCE() suggests it is not sufficient to meet this
requirement (and a READ_ONCE() is not needed here anyway).

> +      *
> +      * The sync_clear_bit() call in xen_clear_irq_pending() is atomic.
> +      * So it is effectively a memory barrier for x86.
> +      */
> +     if (READ_ONCE(*byte) != val)
> +             return;
> +
> +     /*
> +      * If an interrupt happens here, it will leave the wakeup irq
> +      * pending, which will cause xen_poll_irq() to return
> +      * immediately.
> +      */
> +
> +     /* Block until irq becomes pending (or perhaps a spurious wakeup) */
> +     xen_poll_irq(irq);
> +}
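
For reference, a rough sketch of the ordering I had in mind is below.
It keeps the structure of your patch and uses the same helpers
(lock_kicker_irq, xen_clear_irq_pending(), xen_poll_irq()), but replaces
the READ_ONCE() with a plain read after an explicit compiler barrier:

	static void xen_qlock_wait(u8 *byte, u8 val)
	{
		int irq = __this_cpu_read(lock_kicker_irq);

		/* If kicker interrupts not initialized yet, just spin. */
		if (irq == -1)
			return;

		/* Clear any pending kick before re-checking the lock byte. */
		xen_clear_irq_pending(irq);

		/*
		 * Ensure the compiler does not hoist the read of *byte above
		 * the clearing of the pending bit; a plain read is enough
		 * once this ordering is enforced.
		 */
		barrier();

		if (*byte != val)
			return;

		/*
		 * An interrupt between here and xen_poll_irq() leaves the
		 * event pending, so the poll returns immediately.
		 */
		xen_poll_irq(irq);
	}

This is only a sketch of the intended ordering, not a tested patch.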

David
