On Mon, Jun 08, 2015 at 06:06:59PM +0100, Catalin Marinas wrote:
> The kmemleak memory scanning uses finer grained object->lock spinlocks
> primarily to avoid races with the memory block freeing. However, the
> pointer lookup in the rb tree requires the kmemleak_lock to be held.
> This is currently done in the find_and_get_object() function for each
> pointer-like location read during scanning. While this allows a low
> latency on kmemleak_*() callbacks on other CPUs, the memory scanning is
> slower.
> 
> This patch moves the kmemleak_lock outside the core scan_block()
> function allowing the spinlock to be acquired/released only once per
> scanned memory block rather than individual pointer-like values. The
> memory scanning performance is significantly improved (by an order of
> magnitude on an arm64 system).
> 
> Signed-off-by: Catalin Marinas <[email protected]>
> Cc: Andrew Morton <[email protected]>
> ---
> 
> Andrew,
> 
> While sorting out some of the kmemleak disabling races, I realised that
> kmemleak scanning performance can be improved. On an arm64 system I
> tested (albeit not a fast one but with 6 CPUs and 8GB of RAM),
> immediately after boot an "time echo scan > /sys/kernel/debug/kmemleak"
> takes on average 70 sec. With this patch applied, I get on average 4.7
> sec.

I need to make a correction here: I forgot that I had lock proving
(lockdep) enabled in my .config when running the tests. With all the
spinlock debugging disabled, I get 9.5 sec vs 3.5 sec. Still an
improvement, but no longer an order of magnitude.

-- 
Catalin