On Thu 22-02-18 19:01:58, Andrey Ryabinin wrote:
> On 02/22/2018 06:44 PM, Michal Hocko wrote:
> > On Thu 22-02-18 18:38:11, Andrey Ryabinin wrote:
> >>>> with the patch:
> >>>> best: 1.04 secs, 9.7G reclaimed
> >>>> worst: 2.2 secs, 16G reclaimed.
> >>>> without:
> >>>> best: 5.4 sec, 35G reclaimed
> >>>> worst: 22.2 sec, 136G reclaimed
> >>> Could you also compare how much memory we reclaim with/without the
> >>> patch?
> >> I did and I wrote the results. Please look again.
> > I must have forgotten. Care to point me to the message-id?
> The results are quoted right above, literally above. Raise your eyes
> up. message-id 0927bcab-7e2c-c6f9-d16a-315ac436b...@virtuozzo.com
OK, I see. We were talking about two different things, I guess.
> I write it here again:
> with the patch:
> best: 9.7G reclaimed
> worst: 16G reclaimed
> without the patch:
> best: 35G reclaimed
> worst: 136G reclaimed
> Or are you asking about something else? If so, I don't understand what you mean.
Well, those numbers do not tell us much, right? You have 4 concurrent
readers, each reading its own 1G file in a loop. The longer you keep that
running, the more pages you reclaim, of course. But you are not comparing
the same amount of work.
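One way to see why the raw totals are hard to compare is to normalize them by runtime, since a longer run naturally reclaims more. A quick back-of-the-envelope calculation on the numbers reported earlier in the thread (GB reclaimed and seconds elapsed; the labels are mine):

```python
# Reclaim rate (GB/s) computed from the figures quoted above.
# Raw "GB reclaimed" totals grow with runtime, so dividing by the
# elapsed time gives a more comparable per-run number.
runs = {
    "patched/best":  (9.7, 1.04),
    "patched/worst": (16.0, 2.2),
    "vanilla/best":  (35.0, 5.4),
    "vanilla/worst": (136.0, 22.2),
}

for name, (gb_reclaimed, seconds) in runs.items():
    print(f"{name}: {gb_reclaimed / seconds:.1f} GB/s")
```

This still doesn't equalize the amount of work done per run, which is the point being made: without fixing the workload size, neither the totals nor the rates settle the question by themselves.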
My main concern about the patch is that it might over-reclaim a lot if
we have a workload which also frees memory rather than constantly adding
more easily reclaimable page cache. I realize such a test is not easy
to construct, though.
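The over-reclaim worry can be illustrated with a deliberately simplified model (this is a toy, not the kernel's actual reclaim loop): if each reclaim pass frees a full batch before rechecking the target, the overshoot past the target can grow with the batch size. `SWAP_CLUSTER_MAX` (32 pages) is the kernel's traditional small per-call batch; the large value below is an arbitrary stand-in for a bigger batch.

```python
# Toy model of batched reclaim overshoot -- NOT kernel code.
# Assumption: a reclaim pass frees a whole batch of pages before the
# target is rechecked, so the final pass can overshoot by up to
# (batch - 1) pages.

SWAP_CLUSTER_MAX = 32  # the kernel's small per-call batch, in pages


def pages_reclaimed(target, batch):
    """Reclaim whole batches until at least `target` pages are freed."""
    reclaimed = 0
    while reclaimed < target:
        reclaimed += batch  # the last pass may run well past the target
    return reclaimed


target = 1000  # pages the caller actually needed to free
print(pages_reclaimed(target, SWAP_CLUSTER_MAX))  # small overshoot
print(pages_reclaimed(target, 1 << 20))           # huge overshoot
```

If other tasks are freeing memory concurrently, the effective target shrinks while a large batch is still in flight, which is exactly the scenario where a bigger batch could reclaim far more than necessary.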
I have already said that I will not block the patch, but it should at
least be explained why a larger batch makes a difference.