get_cpu_ptr() disables preemption and returns the ->flush_queue object of the current CPU. raw_cpu_ptr() does the same except that it does not disable preemption, which means the scheduler can move the task to another CPU after it obtained the per-CPU object. That is not a problem here because the data structure itself is protected by a spin_lock. The change shouldn't matter in general, but on RT it does, because the sleeping lock can't be acquired with preemption disabled.
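
For illustration, a minimal sketch of the two access patterns (elided bodies, not part of the diff below):

	/* get_cpu_ptr(): disables preemption until put_cpu_ptr() */
	queue = get_cpu_ptr(dom->flush_queue);
	spin_lock_irqsave(&queue->lock, flags);	/* sleeping lock on RT -> not allowed here */
	...
	spin_unlock_irqrestore(&queue->lock, flags);
	put_cpu_ptr(dom->flush_queue);		/* re-enables preemption */

	/* raw_cpu_ptr(): no preemption disabling; a migration after the
	 * lookup is harmless because queue->lock protects the data.
	 */
	queue = raw_cpu_ptr(dom->flush_queue);
	spin_lock_irqsave(&queue->lock, flags);
	...
	spin_unlock_irqrestore(&queue->lock, flags);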
Cc: Joerg Roedel <[email protected]>
Cc: [email protected]
Reported-by: [email protected]
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
---
 drivers/iommu/amd_iommu.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 4ad7e5e31943..943efbc08128 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1911,7 +1911,7 @@ static void queue_add(struct dma_ops_domain *dom,
 	pages     = __roundup_pow_of_two(pages);
 	address >>= PAGE_SHIFT;
 
-	queue = get_cpu_ptr(dom->flush_queue);
+	queue = raw_cpu_ptr(dom->flush_queue);
 	spin_lock_irqsave(&queue->lock, flags);
 
 	/*
@@ -1940,8 +1940,6 @@ static void queue_add(struct dma_ops_domain *dom,
 	if (atomic_cmpxchg(&dom->flush_timer_on, 0, 1) == 0)
 		mod_timer(&dom->flush_timer,
 			  jiffies + msecs_to_jiffies(10));
-
-	put_cpu_ptr(dom->flush_queue);
 }
 
 static void queue_flush_timeout(unsigned long data)
-- 
2.14.1

