Re: [PATCH 0/3] TLB flush multiple pages per IPI v4

2015-08-31 Thread Sébastien Wacquiez
On 04/25/2015 07:45 PM, Mel Gorman wrote:

> The performance impact is documented in the changelogs but in the optimistic
> case on a 4-socket machine the full series reduces interrupts from 900K
> interrupts/second to 60K interrupts/second.


Hello to the list,


This patch has a huge (positive) performance impact on my setup.

With the goal of building the best CDN possible, I run the Varnish web
cache on very big boxes (dual 12-core Xeon, 256 GB RAM, 24 SSDs, 2x40G
Ethernet).

Without going into Varnish internals, it helps to know that Varnish has
multiple storage backends (memory, file, etc.), and that the file backend
(the one you use when you have cache drives) doesn't use the read/write
syscalls but mmap.
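
To give an idea, here is a minimal sketch of that access pattern
(hypothetical file path, error handling trimmed; this is not actual
Varnish code). Every access goes through the mapping, so a miss means a
major fault and the kernel has to find a free page to hold the data:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical cache file standing in for a Varnish storage file. */
	int fd = open("/cache/storage.bin", O_RDWR);
	struct stat st;

	if (fd < 0 || fstat(fd, &st) < 0)
		return 1;

	/* Map the whole file: no read()/write() syscalls afterwards. */
	char *data = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
			  MAP_SHARED, fd, 0);
	if (data == MAP_FAILED)
		return 1;

	/* Touching a page that is not resident faults it in: the kernel
	 * must find a free page for the data read back from the drive. */
	volatile char c = data[0];
	(void)c;

	munmap(data, st.st_size);
	close(fd);
	return 0;
}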

The raw performance of this server is very good: when using Varnish
with memory storage only, it easily pushes 80 Gbps of network traffic.
When reading/writing from/to the drives, you get 10 GB/s of data. And
you can do both at the same time without performance loss.

Anyway, without this patch, using the file storage backend and after
warmup, the performance of the server was limited to a frustrating
14 Gbps. At start, Varnish read from the HTTP backend at ~30 Gbps,
cached the data in its huge mmap, and the system wrote it to disk and
streamed it to the client, so everything looked OK. But instead of
becoming quicker as the hit rate went up (since we already had data in
the cache), it became slower and slower, finally freezing for 4-5
seconds every 10 seconds or so.

After analysis, I found that the bottleneck was the system's capacity
to find free memory. If I understand it correctly, when you read a
"swapped out" page of an mmaped file, the kernel has to find some free
memory to put the data it will read from the drive. In my case, the
disks are quick enough to handle the churn almost in real time, so I
have a lot of potentially free memory (i.e. Inactive(file)). But
actually freeing this memory (whether in kswapd or direct reclaim) is
relatively slow: after some tuning to avoid any direct reclaim (which
was causing the freezes), I ended up with two kswapd processes (it's a
dual-socket NUMA machine) each eating 100% of a CPU for ~14 Gbps of
traffic (or ~1.5 million reclaims/s).
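
For the curious, the figures above can be watched from /proc/meminfo
and /proc/vmstat. Here is a small sketch that samples the relevant
fields (the exact vmstat counter names vary between kernel versions,
so it only matches on their common prefixes):

#include <stdio.h>
#include <string.h>

/* Print lines of a /proc file whose names start with given prefixes. */
static void dump(const char *path, const char **prefixes, int n)
{
	char line[256];
	FILE *f = fopen(path, "r");

	if (!f)
		return;
	while (fgets(line, sizeof(line), f))
		for (int i = 0; i < n; i++)
			if (!strncmp(line, prefixes[i], strlen(prefixes[i])))
				fputs(line, stdout);
	fclose(f);
}

int main(void)
{
	const char *mem[] = { "MemFree", "Inactive(file)" };
	const char *vm[] = { "pgscan", "pgsteal" };

	dump("/proc/meminfo", mem, 2);	/* potentially free memory */
	dump("/proc/vmstat", vm, 2);	/* kswapd + direct reclaim activity */
	return 0;
}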

After a chat with Rik van Riel and Mel Gorman, they suggested I try
this patch, and the limit immediately jumped to 33 Gbps, which was in
fact my upstream capacity; after a while I was able to achieve 60 Gbps
without experiencing any issue.
Even the freezing, which happens in direct reclaim mode, is a lot
smoother; on my test rig it is quick enough that my monitoring no
longer flags it as unavailability (which wasn't the case before).

The bad news is that after some time (like 24h) of stress testing,
performance degrades, I guess due to some kind of fragmentation. Still,
it seems to hold at a higher level than with the vanilla kernel.

I suppose this patch could also help a lot with databases (which often
mmap their data) that have to reread huge datasets frequently.


Thanks a lot to Rik and Mel for the help provided, and feel free to
mail me if you have questions.


Regards,


Sébastien Wacquiez

PS: the tests were conducted with a 4.0.0 kernel.

[PATCH 0/3] TLB flush multiple pages per IPI v4

2015-04-25 Thread Mel Gorman
The big change here is that I dropped the patch that batches TLB flushes
from migration context. After V3, I realised that there are non-trivial
corner cases there that deserve treatment in their own series. It did not
help that I could not find a workload that was both migration and IPI
intensive. The common case for IPIs during reclaim is kswapd unmapping
pages which guarantees IPIs. In migration, at least some of the pages
being migrated will belong to the process itself.

The main issue is that migration cannot have any cached TLB entries after
migration completes. Once the migration PTE is removed then writes can happen
to that new page. Reads through the old TLB entry would return stale
data until it is flushed, which is different from the reclaim case.
This is difficult to get around. We
cannot just unmap in advance because then there are no migration entries
to restore and there would be minor faults post-migration. We can't batch
restore the migration entries because the page lock must be held during
migration or BUG_ONs get triggered. Batching TLB flushes safely requires
a major rethink of how migration works, so let's deal with reclaim first on
its own, preferably in the context of a workload that is both migration
and IPI intensive.

The patch that increased the batching size was also removed because there
is no advantage when TLBs are flushed before freeing the page. To increase
batching we would have to alter how many pages are isolated from the LRU
which would be a different patch series.

Most reviewed-bys had to be dropped as the patches changed too much to
preserve them.

Changelog since V3
o Drop batching of TLB flush from migration
o Redo how larger batching is managed
o Batch TLB flushes when writable entries exist

When unmapping pages it is necessary to flush the TLB. If that page was
accessed by another CPU then an IPI is used to flush the remote CPU. That
is a lot of IPIs if kswapd is scanning and unmapping >100K pages per second.

There already is a window between when a page is unmapped and when it is
TLB flushed. This series simply increases the window so multiple pages can
be flushed using a single IPI.

Patch 1 simply made the rest of the series easier to write as ftrace
could identify all the senders of TLB flush IPIs.

Patch 2 collects a list of PFNs and sends one IPI to flush them all.

Patch 3 tracks when there potentially are writable TLB entries that
need to be batched differently.
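
To make the idea concrete, here is a rough userspace sketch of the
batching scheme behind patches 2 and 3 (all identifiers are
illustrative only, not the actual structures or functions in the
series; the real code integrates with the reclaim path in mm/rmap.c
and mm/vmscan.c):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BATCH_MAX 32

struct flush_batch {
	uint64_t pfns[BATCH_MAX];	/* PFNs unmapped since the last flush */
	unsigned int nr;
	uint64_t cpu_mask;		/* CPUs that may cache stale entries */
	bool writable;			/* was any batched PTE writable? */
};

/* Stand-in for the cross-CPU flush: one IPI round covers the whole
 * batch instead of one IPI per unmapped page. */
static void send_flush_ipi(uint64_t cpu_mask, unsigned int nr)
{
	printf("flush %u pfns on cpus %#llx\n", nr,
	       (unsigned long long)cpu_mask);
}

static void batch_flush(struct flush_batch *b)
{
	if (!b->nr)
		return;
	send_flush_ipi(b->cpu_mask, b->nr);
	b->nr = 0;
	b->cpu_mask = 0;
	b->writable = false;
}

/* Called where a page is unmapped: record it and defer the flush. */
static void batch_unmap(struct flush_batch *b, uint64_t pfn,
			uint64_t page_cpus, bool pte_writable)
{
	b->pfns[b->nr++] = pfn;
	b->cpu_mask |= page_cpus;
	b->writable |= pte_writable;
	if (b->nr == BATCH_MAX)
		batch_flush(b);
}

int main(void)
{
	struct flush_batch b = { .nr = 0 };

	for (uint64_t pfn = 0; pfn < 100; pfn++)
		batch_unmap(&b, pfn, 0x3, pfn & 1);	/* toy CPUs 0-1 */

	/* Any remaining entries, in particular writable ones, must be
	 * flushed before the unmapped pages can be freed and reused. */
	batch_flush(&b);
	return 0;
}

The IPI cost is amortised over the batch at the price of a longer
window in which stale TLB entries exist; tracking writable entries
matters because only those could permit stale writes after the page
is freed.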

The performance impact is documented in the changelogs but in the optimistic
case on a 4-socket machine the full series reduces interrupts from 900K
interrupts/second to 60K interrupts/second.


Mel Gorman (3):
  x86, mm: Trace when an IPI is about to be sent
  mm: Send one IPI per CPU to TLB flush multiple pages that were
recently unmapped
  mm: Defer flush of writable TLB entries

 arch/x86/Kconfig|   1 +
 arch/x86/include/asm/tlbflush.h |   2 +
 arch/x86/mm/tlb.c   |   1 +
 include/linux/init_task.h   |   8 +++
 include/linux/mm_types.h|   1 +
 include/linux/rmap.h|   3 +
 include/linux/sched.h   |  15 +
 include/trace/events/tlb.h  |   3 +-
 init/Kconfig|   8 +++
 kernel/fork.c   |   5 ++
 kernel/sched/core.c |   3 +
 mm/internal.h   |  15 +
 mm/rmap.c   | 119 +++-
 mm/vmscan.c |  30 +-
 14 files changed, 210 insertions(+), 4 deletions(-)

-- 
2.3.5
