On Wed,  2 Jul 2014 09:13:46 +0900 Minchan Kim <minc...@kernel.org> wrote:

> Normally, pages whose I/O has completed for reclaim are rotated to
> the tail of the inactive LRU rather than freed. The reason it works
> this way is that we can't free a page from atomic context (ie,
> end_page_writeback) because the various locks involved cannot be
> taken from atomic context.
> 
> So to reclaim those I/O-completed pages we need one more iteration
> of reclaim, which causes unnecessary aging as well as CPU overhead.
> 
> Long ago, at the first attempt, the main concern was memcg locking,
> but Johannes recently put an impressive effort into simplifying the
> memcg locking, and that work was merged into mmotm, so I coded this
> up against the mmotm tree. (Kudos to Johannes)
> 
> On a 1G, 12-CPU kvm guest, building a kernel 5 times gave:
> 
> allocstall
> vanilla: records: 5 avg: 4733.80 std: 913.55(19.30%) max: 6442.00 min: 3719.00
> improve: records: 5 avg: 1514.20 std: 441.69(29.17%) max: 1974.00 min: 863.00

Well yes.  We're now doing unaccounted, impact-a-random-process work in
irq context which was previously being done in process context,
accounted to the process which was allocating the memory.  Some would
call this a regression ;)
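
For anyone following along, the completion path in question looks
roughly like this (paraphrased from memory of mm/filemap.c of that
era, so details may differ):

	/*
	 * If reclaim tagged the page with PG_reclaim before starting
	 * writeback, I/O completion just requeues the page at the tail
	 * of the inactive LRU; a later reclaim pass then frees it.
	 */
	void end_page_writeback(struct page *page)
	{
		if (PageReclaim(page)) {
			ClearPageReclaim(page);
			rotate_reclaimable_page(page);	/* rotate, don't free */
		}

		if (!test_clear_page_writeback(page))
			BUG();

		smp_mb__after_atomic();
		wake_up_page(page, PG_writeback);
	}

The patch would instead free the page right there, at I/O completion.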

> pgrotated
> vanilla: records: 5 avg: 873313.80 std: 40999.20(4.69%) max: 954722.00 min: 845903.00
> improve: records: 5 avg: 28406.40 std: 3296.02(11.60%) max: 34552.00 min: 25047.00

Still a surprisingly high amount of rotation going on.
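
(For reference, pgrotated is accounted when the per-cpu rotation
pagevec is drained; from memory, something like this in mm/swap.c:)

	/*
	 * Pages queued by rotate_reclaimable_page() sit in a per-cpu
	 * pagevec; draining it moves them to the inactive tail and
	 * bumps the PGROTATED vm event.
	 */
	static void pagevec_move_tail(struct pagevec *pvec)
	{
		unsigned long pgmoved = 0;

		pagevec_lru_move_fn(pvec, pagevec_move_tail_fn, &pgmoved);
		__count_vm_events(PGROTATED, pgmoved);
	}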

> Most fields in vmstat don't change much, but what stands out is
> allocstall and pgrotated. We save a great deal on both allocstall
> (ie, direct reclaim) and pgrotated.
> 
> Testing, review and any feedback welcome!

Well, it will worsen IRQ latencies and it's all more code for us to
maintain.  I think I'd like to see a better story about the end-user
benefits before proceeding.
