On 25.03.21 01:28, Mike Kravetz wrote:
From: Roman Gushchin <[email protected]>

cma_release() has to take the cma_lock mutex to clear the CMA bitmap.
This makes it a blocking function, which complicates its use from
non-blocking contexts. For instance, the hugetlbfs code temporarily
drops the hugetlb_lock spinlock just to be able to call cma_release().
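
[For context, the release path in mainline looks roughly like this
(paraphrased from mm/cma.c around v5.12; details vary by version):

/* Paraphrased from mm/cma.c: why cma_release() can sleep. */
static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
			     unsigned int count)
{
	unsigned long bitmap_no, bitmap_count;

	bitmap_no = (pfn - cma->base_pfn) >> cma->order_per_bit;
	bitmap_count = cma_bitmap_pages_to_bits(cma, count);

	mutex_lock(&cma->lock);		/* may sleep */
	bitmap_clear(cma->bitmap, bitmap_no, bitmap_count);
	mutex_unlock(&cma->lock);
}
]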

This patch introduces a non-blocking cma_release_nowait(), which
postpones clearing the CMA bitmap. The clearing is done later from a
work context. The first page of the CMA allocation is used to store
the work struct, so no additional allocation is needed. Because CMA
allocations and de-allocations are usually not that frequent, a single
global workqueue is used.
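
[A minimal sketch of that mechanism; the names and details below are
illustrative, not necessarily what the patch itself uses:

/* Illustrative sketch, in the context of mm/cma.c. */
struct cma_clear_bitmap_work {
	struct work_struct work;
	struct cma *cma;
	unsigned long pfn;
	unsigned int count;
};

/* Created once during CMA init (e.g. in cma_init_reserved_areas()). */
static struct workqueue_struct *cma_release_wq;

static void cma_clear_bitmap_fn(struct work_struct *work)
{
	struct cma_clear_bitmap_work *w =
		container_of(work, struct cma_clear_bitmap_work, work);
	struct cma *cma = w->cma;
	unsigned long pfn = w->pfn;
	unsigned int count = w->count;

	/* Free the page that carried the work item, then clear the bits. */
	__free_page(pfn_to_page(pfn));
	cma_clear_bitmap(cma, pfn, count);	/* takes the mutex, may sleep */
}

bool cma_release_nowait(struct cma *cma, const struct page *pages,
			unsigned int count)
{
	struct cma_clear_bitmap_work *w;
	unsigned long pfn = page_to_pfn(pages);

	/* ... same range checks as cma_release() ... */

	/* Return everything except the first page, which holds the work. */
	if (count > 1)
		free_contig_range(pfn + 1, count - 1);

	w = page_to_virt(pages);
	INIT_WORK(&w->work, cma_clear_bitmap_fn);
	w->cma = cma;
	w->pfn = pfn;
	w->count = count;
	queue_work(cma_release_wq, &w->work);

	return true;
}
]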

To make sure that a subsequent cma_alloc() call will succeed,
cma_alloc() flushes the cma_release_wq workqueue. To avoid a
performance regression in the case where only cma_release() is used,
the flush is gated by a per-CMA-area flag, which is set on the first
call to cma_release_nowait().
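
[Illustratively, something like the below; the flag, and the flags
field itself, are made up here and struct cma has no such field in
mainline, so the patch's actual naming will differ:

/* Hypothetical flag bit; struct cma would need an "unsigned long flags". */
#define CMA_DELAYED_RELEASE	0

/* Called at the top of cma_alloc(): only flush if nowait was ever used. */
static void cma_flush_delayed_release(struct cma *cma)
{
	if (test_bit(CMA_DELAYED_RELEASE, &cma->flags))
		flush_workqueue(cma_release_wq);
}

/* Called from cma_release_nowait() before queueing the work. */
static void cma_mark_delayed_release(struct cma *cma)
{
	set_bit(CMA_DELAYED_RELEASE, &cma->flags);
}
]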

Signed-off-by: Roman Gushchin <[email protected]>
[[email protected]: rebased to v5.12-rc3-mmotm-2021-03-17-22-24]
Signed-off-by: Mike Kravetz <[email protected]>
---


1. Is there a real reason this is a mutex and not a spinlock? It seems to only protect the bitmap. Are the bitmaps so huge that we spend a significant amount of time in there?

I ask because the changelog also says that "CMA allocations and
de-allocations are usually not that frequent".

With a spinlock, you would no longer be sleeping, but obviously you might end up waiting for the lock ;) Not sure if that would help.
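
I.e., in cma_clear_bitmap() above, something like this (untested):

	/* Untested: with cma->lock as a spinlock the critical section
	 * is tiny and cma_release() would no longer sleep. */
	spin_lock(&cma->lock);
	bitmap_clear(cma->bitmap, bitmap_no, bitmap_count);
	spin_unlock(&cma->lock);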

2. IIUC, if we did the clearing completely locklessly, using atomic bitmap ops instead, only cma_debug_show_areas() would see slight inconsistencies. As long as the setting code (-> allocation code) holds the lock, I think this should be fine (-> no double allocations).
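
Roughly like this (untested; assumes the allocation side keeps taking
the lock around find + set):

/*
 * Untested sketch: clear the released range with atomic per-bit ops
 * instead of taking cma->lock. Readers like cma_debug_show_areas()
 * may observe a partially cleared range; the allocator still holds
 * the lock while scanning and setting bits.
 */
static void cma_clear_bitmap_lockless(struct cma *cma, unsigned long pfn,
				      unsigned int count)
{
	unsigned long bitmap_no, bitmap_count, i;

	bitmap_no = (pfn - cma->base_pfn) >> cma->order_per_bit;
	bitmap_count = cma_bitmap_pages_to_bits(cma, count);

	for (i = 0; i < bitmap_count; i++)
		clear_bit(bitmap_no + i, cma->bitmap);
}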

(sorry if that has already been discussed)

--
Thanks,

David / dhildenb
