On Wed, Nov 25, 2015 at 10:38:00AM -0700, Jens Axboe wrote:
> On 11/25/2015 10:33 AM, Shaohua Li wrote:
> >On Wed, Nov 25, 2015 at 09:43:49AM -0700, Jens Axboe wrote:
> >>On 11/25/2015 09:40 AM, Woodhouse, David wrote:
> >>>On Wed, 2015-11-25 at 15:46 +0100, [email protected] wrote:
> >>>>On Tue, Nov 24, 2015 at 02:05:12PM -0800, Shaohua Li wrote:
> >>>>>The lib/iommu-common.c code uses a bitmap and a lock. This implementation
> >>>>>instead uses a percpu_ida, which completely avoids locking. It would be
> >>>>>possible to make lib/iommu-common.c use percpu_ida too if somebody wants
> >>>>>to do it, but I think this shouldn't be a blocker for these patches
> >>>>>given their huge performance gain.
> >>>>
> >>>>It doesn't "completely avoid" locking; the percpu_ida code uses a lock
> >>>>internally too. Also, what is the memory and device address space
> >>>>overhead per cpu?
> >>>
> >>>A percpu lock doesn't bounce cachelines between CPUs very much, so from
> >>>that point of view it might as well not exist :)
> >>
> >>As long as the address space can remain more than ~50% empty, it is
> >>indeed practically lockless. Are we ever worried about higher
> >>utilization? If so, in my experience percpu_ida fails miserably near
> >>or at exhaustion.
> >
> >The patch uses TASK_RUNNING for tag allocation, so it doesn't wait. I
> >thought that's OK, no?
> 
> Even without the waiting it can end up sucking. If you are near
> exhaustion, multiple tasks allocating will end up stealing from each
> other.
> 
> Maybe it's not a concern here, if the space is big enough?

There are 128k tags. I think we can double that within the 32-bit DMA
address space without problems if necessary. But if a driver holds on to
DMA address space, we will run short. If 64-bit DMA addressing is
enabled by default in the future, this will not be a concern.

> In any case, the pathological case isn't any worse than the normal
> case for the current code...

Yep, it should be no worse than the current spinlock.

Thanks,
Shaohua
_______________________________________________
iommu mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/iommu