[PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

2015-07-06 Thread Mel Gorman
An IPI is sent to flush remote TLBs when a page is unmapped that was potentially accessed by other CPUs. There are many circumstances where this happens but the obvious one is kswapd reclaiming pages belonging to a running process, as kswapd and the task are likely running on separate CPUs. On

Re: [PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

2015-06-11 Thread Mel Gorman
On Thu, Jun 11, 2015 at 05:02:51PM +0200, Ingo Molnar wrote: > > * Mel Gorman wrote: > > > > In the full-flushing case (v6 without patch 4) the batching limit is > > > 'infinite', we'll batch as long as possible, right? > > > > No because we must flush before pages are freed so the maximum

Re: [PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

2015-06-11 Thread Ingo Molnar
* Mel Gorman wrote: > > In the full-flushing case (v6 without patch 4) the batching limit is > > 'infinite', we'll batch as long as possible, right? > > No because we must flush before pages are freed so the maximum batching is > related to SWAP_CLUSTER_MAX. If we free a page before the

Re: [PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

2015-06-10 Thread Mel Gorman
On Wed, Jun 10, 2015 at 10:26:40AM +0200, Ingo Molnar wrote: > > * Mel Gorman wrote: > > > On a 4-socket machine the results were > > > > 4.1.0-rc6 4.1.0-rc6 > > batchdirty-v6 batchunmap-v6 > > Ops

Re: [PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

2015-06-10 Thread Mel Gorman
On Wed, Jun 10, 2015 at 10:33:32AM +0200, Ingo Molnar wrote: > > * Mel Gorman wrote: > > > Linear mapped reader on a 4-node machine with 64G RAM and 48 CPUs > > > > 4.1.0-rc6 4.1.0-rc6 > > vanilla

Re: [PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

2015-06-10 Thread Mel Gorman
On Wed, Jun 10, 2015 at 10:21:07AM +0200, Ingo Molnar wrote: > > * Mel Gorman wrote: > > > On Wed, Jun 10, 2015 at 09:47:04AM +0200, Ingo Molnar wrote: > > > > > > * Mel Gorman wrote: > > > > > > > --- a/include/linux/sched.h > > > > +++ b/include/linux/sched.h > > > > @@ -1289,6 +1289,18 @@

Re: [PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

2015-06-10 Thread Ingo Molnar
* Mel Gorman wrote: > Linear mapped reader on a 4-node machine with 64G RAM and 48 CPUs > > 4.1.0-rc6 4.1.0-rc6 > vanilla flushfull-v6 > Ops lru-file-mmap-read-elapsed 162.88 ( 0.00%) 120.81 (

Re: [PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

2015-06-10 Thread Ingo Molnar
* Mel Gorman wrote: > On a 4-socket machine the results were > > 4.1.0-rc6 4.1.0-rc6 > batchdirty-v6 batchunmap-v6 > Ops lru-file-mmap-read-elapsed 121.27 ( 0.00%) 118.79 ( 2.05%) > >

Re: [PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

2015-06-10 Thread Ingo Molnar
* Mel Gorman wrote: > On Wed, Jun 10, 2015 at 09:47:04AM +0200, Ingo Molnar wrote: > > > > * Mel Gorman wrote: > > > > > --- a/include/linux/sched.h > > > +++ b/include/linux/sched.h > > > @@ -1289,6 +1289,18 @@ enum perf_event_task_context { > > > perf_nr_task_contexts, > > > }; > > >

Re: [PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

2015-06-10 Thread Mel Gorman
On Wed, Jun 10, 2015 at 09:47:04AM +0200, Ingo Molnar wrote: > > * Mel Gorman wrote: > > > --- a/include/linux/sched.h > > +++ b/include/linux/sched.h > > @@ -1289,6 +1289,18 @@ enum perf_event_task_context { > > perf_nr_task_contexts, > > }; > > > > +/* Track pages that require TLB

Re: [PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

2015-06-10 Thread Ingo Molnar
* Mel Gorman wrote: > --- a/include/linux/sched.h > +++ b/include/linux/sched.h > @@ -1289,6 +1289,18 @@ enum perf_event_task_context { > perf_nr_task_contexts, > }; > > +/* Track pages that require TLB flushes */ > +struct tlbflush_unmap_batch { > + /* > + * Each bit set is a

Re: [PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

2015-06-09 Thread Rik van Riel
On 06/09/2015 01:31 PM, Mel Gorman wrote: > An IPI is sent to flush remote TLBs when a page is unmapped that was > potentially accessed by other CPUs. There are many circumstances where > this happens but the obvious one is kswapd reclaiming pages belonging to > a running process as kswapd and

[PATCH 2/4] mm: Send one IPI per CPU to TLB flush all entries after unmapping pages

2015-06-09 Thread Mel Gorman
An IPI is sent to flush remote TLBs when a page is unmapped that was potentially accessed by other CPUs. There are many circumstances where this happens but the obvious one is kswapd reclaiming pages belonging to a running process, as kswapd and the task are likely running on separate CPUs. On
