This idea was tried on the per-memcg lru_lock patchset v18 and gave a good
result: about 5% to 20+% performance gain on lru_lock-heavy benchmarks
such as case-lru-file-readtwice.

But on the latest kernel I cannot reproduce that result on my box, nor
can I reproduce Tim's performance gain there.

So I don't know whether it is still worthwhile in some scenario; I'm just
sending it out in case anyone is interested. A rough sketch of the idea
is included after the patch list below.

Alex Shi (4):
  mm/swap.c: pre-sort pages in pagevec for pagevec_lru_move_fn
  mm/swap.c: bail out early for no memcg and no numa
  mm/swap.c: extend the usage to pagevec_lru_add
  mm/swap.c: no sort if all page's lruvec are same

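For reference, the core of the idea looks roughly like the sketch below.
This is only an illustration, not the code in the patches: it assumes a
per-memcg lru_lock kernel (~v5.11, where struct lruvec carries its own
lru_lock and mem_cgroup_page_lruvec() still takes a pgdat argument), and
page_lruvec()/pagevec_lru_move_sorted() are made-up names.

#include <linux/mm.h>
#include <linux/memcontrol.h>
#include <linux/pagevec.h>
#include <linux/sort.h>
#include <linux/spinlock.h>

/*
 * Hypothetical stand-in for the page -> lruvec lookup; uses the
 * ~v5.11 two-argument mem_cgroup_page_lruvec().
 */
static struct lruvec *page_lruvec(struct page *page)
{
	return mem_cgroup_page_lruvec(page, page_pgdat(page));
}

/* Order pages by the address of the lruvec they belong to. */
static int lruvec_cmp(const void *a, const void *b)
{
	struct lruvec *la = page_lruvec(*(struct page * const *)a);
	struct lruvec *lb = page_lruvec(*(struct page * const *)b);

	if (la == lb)
		return 0;
	return la < lb ? -1 : 1;
}

/*
 * Sorted variant of the pagevec move: with pages grouped by lruvec,
 * each lru_lock is taken once per run of pages instead of once per page.
 */
static void pagevec_lru_move_sorted(struct pagevec *pvec,
		void (*move_fn)(struct page *page, struct lruvec *lruvec))
{
	struct lruvec *locked = NULL;
	unsigned long flags = 0;
	int i;

	sort(pvec->pages, pagevec_count(pvec), sizeof(struct page *),
	     lruvec_cmp, NULL);

	for (i = 0; i < pagevec_count(pvec); i++) {
		struct page *page = pvec->pages[i];
		struct lruvec *lruvec = page_lruvec(page);

		/* Relock only when the batch crosses into a new lruvec. */
		if (lruvec != locked) {
			if (locked)
				spin_unlock_irqrestore(&locked->lru_lock, flags);
			spin_lock_irqsave(&lruvec->lru_lock, flags);
			locked = lruvec;
		}
		move_fn(page, lruvec);
	}
	if (locked)
		spin_unlock_irqrestore(&locked->lru_lock, flags);
}

Patch 4's shortcut would simply skip the sort() call when all pages in
the pagevec map to the same lruvec.
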
Cc: Konstantin Khlebnikov <koc...@gmail.com>
Cc: Hugh Dickins <hu...@google.com>
Cc: Yu Zhao <yuz...@google.com>
Cc: Michal Hocko <mho...@suse.com>
Cc: Matthew Wilcox (Oracle) <wi...@infradead.org>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: linux...@kvack.org
Cc: linux-kernel@vger.kernel.org

 mm/swap.c | 118 +++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 91 insertions(+), 27 deletions(-)

-- 
2.29.GIT
